Is Artificial Intelligence Helping Healthcare Professionals?

The concept of what truly defines artificial intelligence (AI) has changed over time, but at its core there has always been the idea of building machines capable of thinking like humans, machines that will help humans perform better and speed up the completion of tasks in general. Various industries are jumping in on AI to apply it to their activities and reap its benefits. In the healthcare industry, there has been a continuous debate over whether artificial intelligence is really helping healthcare professionals.

After all, human beings have proven uniquely capable of interpreting the world around us and using the information we pick up to effect change. If we want to build machines to help us do this more efficiently, especially in the healthcare industry, then it makes sense to use ourselves as a blueprint. Artificial intelligence can be thought of as simulating the capacity for abstract, creative, deductive thought, and particularly the ability to learn that it gives rise to: in practice, computers being able to use logic to discern and make decisions. Research and development work in AI is split into two branches. One is labeled “applied AI”, which uses these principles of simulating human thought to carry out one specific task. The other is known as “generalized AI”, which seeks to develop machine intelligence that can turn its hand to any task, much like a person.
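
To make the idea of “applied AI” a little more concrete, here is a minimal, purely illustrative sketch of a narrow, single-task system: a few hand-written rules that use simple logic to flag hypothetical patient vital-sign readings for review. The task, the thresholds, and the sample data are all invented for this example; the point is only to show what a system dedicated to one specific task looks like, in contrast to a generalized AI that could turn its hand to anything.

```python
# Illustrative sketch only: a tiny "applied AI" system in the narrow sense
# described above, i.e. hand-coded logic dedicated to one specific task.
# The thresholds and sample readings below are hypothetical.

def flag_vitals(reading):
    """Return a priority label for a single (hypothetical) vitals reading."""
    heart_rate = reading["heart_rate"]      # beats per minute
    temperature = reading["temperature_c"]  # degrees Celsius

    # Simple logical rules standing in for "discerning and making decisions".
    if heart_rate > 120 or temperature > 39.0:
        return "urgent"
    if heart_rate > 100 or temperature > 38.0:
        return "review"
    return "routine"

if __name__ == "__main__":
    sample_readings = [
        {"heart_rate": 72, "temperature_c": 36.8},
        {"heart_rate": 105, "temperature_c": 37.2},
        {"heart_rate": 130, "temperature_c": 39.4},
    ]
    for reading in sample_readings:
        print(reading, "->", flag_vitals(reading))
```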

So what do we mean by context? DeepMind spent years playing Go, and Watson had the context for Jeopardy, having been fed terabytes of trivia and natural-language examples to help it decode the show’s answer-question format. It is only because of this human hand-holding and intense training that these machines were able to deliver such dominating performances. Even a seemingly simple application, such as a meeting-scheduling assistant, takes years to learn the context around scheduling before it reaches a consumer-acceptable level of competence.

AI without context is the singularity. Since we have yet to achieve this, it’s perhaps unfair to expect AI to develop its own intelligence. Indeed, our hopes, fears, and expectations for AI technologies have been far too high. When given the appropriate context and designed to solve specific problems, such as how to play a game or fight cybercrime, these technologies can indeed fuel meaningful innovation. For instance, the software powering self-driving cars is poised to be one of AI’s breakthrough applications.

There have been lots of movies on the topic of AI and how machines come to gain self-awareness and then use that self-awareness to take over the world. While such storylines are made overly sensational for the sake of ratings, movies like I, Robot, featuring Will Smith, were able to show a broader spectrum of AI by depicting self-driving cars, which are a realistic first step for AI.

In industry, AI is employed in the financial world for uses ranging from fraud detection to improving customer service by predicting what services customers will need. In manufacturing, it is used to manage workforces and production processes, as well as to predict faults before they occur, thereby enabling predictive maintenance. In the consumer world, more and more of the technology we are adopting into our everyday lives is becoming powered by AI – from smartphone assistants like Apple’s Siri and Google’s Google Assistant, to self-driving and autonomous cars, which many are predicting will outnumber manually driven cars within our lifetimes.
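
As a rough illustration of how fraud detection along these lines is often approached, the sketch below uses scikit-learn’s IsolationForest to flag unusual transactions in a small synthetic dataset. The features, amounts, and contamination rate are invented for the example; a real system would work from far richer transaction data and much more careful modelling.

```python
# Illustrative sketch only: anomaly detection as a stand-in for the kind of
# fraud detection described above. All data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical transaction features: [amount in dollars, hour of day].
normal = np.column_stack([
    rng.normal(50, 15, size=500),   # typical purchase amounts
    rng.normal(14, 3, size=500),    # mostly daytime activity
])
suspicious = np.array([[900.0, 3.0], [1200.0, 4.0]])  # large, late-night
transactions = np.vstack([normal, suspicious])

# Train an isolation forest and flag outliers (-1 = anomaly, 1 = normal).
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)

flagged = transactions[labels == -1]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review:")
print(flagged)
```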

Generalized AI is a bit further off – to carry out a complete simulation of the human brain would require both a more complete understanding of the organ than we currently have and more computing power than is commonly available to researchers. You cannot attempt to recreate or copy what you do not fully understand. But that may not be the case for long, given the speed with which computer technology is evolving. A new generation of computer chip technology known as neuromorphic processors is being designed to run brain-simulation code more efficiently. And systems such as IBM’s Watson cognitive computing platform use high-level simulations of human neurological processes to carry out an ever-growing range of tasks without being specifically taught how to do them.