AI in healthcare: Are we there yet?

By Priyanka Ravikumar

We keep hearing the term "artificial intelligence" (AI) in everyday life. So, what exactly is AI?

Remember the crystal ball from the Harry Potter film The Prisoner of Azkaban? The one that predicts Voldemort's death at Harry's hands? Well, in some respects, AI is starting to resemble that ball: it predicts the future based on the past and present.

Modern AI often relies on deep learning tools, and in many ways machines are starting to think in ways that resemble the human brain, while also often being able to surpass human biases and limitations. One of the most important ingredients of modern "supervised" machine learning methods, though, is data. Consider, as an analogy, a child learning to differentiate human faces: the more faces (i.e. the relevant data) the child is exposed to, the better it usually gets at recognizing them. The situation is similar with most AI methods: they need relevant data in order to learn.
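
To make the "learning from labelled data" idea a bit more concrete, here is a minimal sketch, assuming Python with scikit-learn and using its bundled handwritten-digit images as a stand-in for the faces in the analogy; the dataset, model, and split are illustrative choices, not part of any system discussed in this article.

    # Minimal supervised-learning sketch: learn to recognise labelled images.
    # Uses scikit-learn's bundled digit images as a stand-in "faces" dataset.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Load labelled examples: each 8x8 image comes with the digit it depicts.
    X, y = load_digits(return_X_y=True)

    # Hold out some examples to check how well the model generalises.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    # "Exposure to data": the model adjusts its parameters to fit the labels.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # The more (and more varied) training data, the better it typically does here.
    print("accuracy on unseen images:", accuracy_score(y_test, model.predict(X_test)))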

Where does AI get its data from?

The Internet, images, audio, social media, and smart devices are all sources of data for AI. So are specialized sources such as medical records, X-ray scans, and DNA sequences.

Role of AI in Mental Health

The use of AI in healthcare has increased exponentially in the last decade. Mental health is the most expensive part of our healthcare system today: treating mental illness carries a much larger total cost than treating, for example, heart disease. According to the World Health Organization, approximately 300 million people around the globe are affected by depression alone. Bipolar disorder affects roughly 60 million people and schizophrenia 23 million, and around 50 million people live with dementia. If one also counts other brain diseases, such as Alzheimer's, the numbers become larger still. Technological advances are therefore essential to support clinicians in diagnosing mental illnesses.

One of the most widely referred-to systems in the history of AI was ELIZA: a chatbot that emulated a psychotherapist, created by MIT's Joseph Weizenbaum in the mid-1960s. The main goal of the program was to pass the so-called Turing Test by fooling people into thinking they were chatting with a real psychotherapist, not to provide expert care through an AI doctor. However, things have changed a lot since then!

Consider the following non-exhaustive examples:

IBM researchers are now building an AI system that combines recordings of conversations between patients and doctors with written notes and machine learning, to help clinicians accurately predict and monitor psychosis, schizophrenia, mania, and depression. According to these scientists, "In five years, what we say and write will be used as indicators of our mental health and physical well-being."

Alison Darcy, a psychologist from Stanford University, built an AI-powered chatbot called "Woebot". Her vision was to offer as much help as possible to people without proper access to mental healthcare. Over a series of text messages, the chatbot asks about how you feel, your emotional state, and your likes and dislikes, and then coaches you using Cognitive Behavioural Therapy. The developers hope it can serve as a do-it-yourself (DIY) tool during episodes of panic or anxiety, and it could also be used by clinicians as a complementary therapy.

In 2015, the University of Southern California developed a 3D-rendered AI therapist called "Ellie". Ellie tracks 66 points on a patient's face and also takes his or her speech into account, while her own speech and gestures mimic those of a human therapist. This virtual therapist has been a big hit, since many patients feel less judged by an AI and are more willing to share their emotions.

From 2002 to 2012, 99.6% of investigational Alzheimer's drugs failed in clinical trials. Marilyn Miller, director of AI research in Alzheimer's at the National Institute on Aging, believes that AI should be able to design clinical trials better. "One of the biggest challenges in Alzheimer's drug development is that we haven't had a good way of parsing out the right population to test the drug on," says Vaibhav Narayan, a researcher on Johnson & Johnson's neuroscience team. This underscores the importance of integrating data from people who share genes, characteristics, and imaging findings, so that better therapeutics can be developed for that population.

On a similar note, researchers at the National Institute of Neurological Disorders and Stroke (NINDS), part of the National Institutes of Health, trained an AI on images of brains affected by Parkinson's disease, classified according to structural features of the brain cells. The drawback is that large data sets are needed for the AI to predict the disease with minimal error. Another risk is that the model over-fits the data and predicts the disease erroneously.
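
To illustrate that over-fitting risk, the hypothetical sketch below (synthetic data, not the NINDS imaging pipeline) trains a classifier on a small, high-dimensional dataset and compares accuracy on the training set against accuracy on held-out data; a large gap between the two numbers is the classic warning sign.

    # Hypothetical sketch of spotting over-fitting on a small, image-like dataset.
    # (Synthetic data; not the NINDS Parkinson's imaging pipeline.)
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier

    # A small, noisy dataset stands in for a limited set of brain images.
    X, y = make_classification(n_samples=120, n_features=200, n_informative=10,
                               random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                      random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # A big gap between these scores means the model has memorised the few
    # training examples rather than learned anything about the disease.
    print("train accuracy:     ", model.score(X_train, y_train))   # typically near 1.0
    print("validation accuracy:", model.score(X_val, y_val))       # noticeably lower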

Ethical Concerns

AI is developing rapidly, and efforts are being made to integrate this technology into current healthcare systems. This raises several ethical concerns.

Firstly, we are dealing with huge stores of sensitive patient data. Are the systems that store these valuable data secure enough? Beyond security, who should be given access to the data, and how can they be effectively "desensitized" or "anonymized" so that privacy is protected without losing valuable aspects of their scientific value?

Another important issue that needs to be addressed: if a patient is wrongly diagnosed with a disorder, who should be held responsible, the system developers or the clinicians? One must also be aware of the software developers' own biases when they create the algorithms, as well as the inherent biases and patient discrimination present in the data used to train AI systems. The predictions an AI makes depend on its training dataset, so factors such as differences in gender, race, and age must be considered in the process.
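
One practical way to surface such biases is to report a model's performance separately for each demographic group rather than as a single overall number. The sketch below is a generic illustration with made-up labels and groups, not a prescribed auditing procedure.

    # Hypothetical sketch: break a model's accuracy down by demographic group.
    # y_true, y_pred and groups are placeholders for a real evaluation set's
    # labels, predictions, and recorded attributes (e.g. sex or age band).
    from collections import defaultdict

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
    groups = ["F", "F", "F", "M", "M", "M", "F", "M"]

    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)

    # Large gaps between groups suggest the training data under-represents someone.
    for group in sorted(total):
        print(group, "accuracy:", correct[group] / total[group])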

Another serious concern is disenfranchising people based on predictions about their disorders. Lastly, if we are developing models for mental health disorders, aren't we defining "normality" in humans? How much should we rely on an AI to make these decisions? In a sense, established definitions of "normality" are already used in our medical systems. The celebrated DSM (Diagnostic and Statistical Manual of Mental Disorders) of the American Psychiatric Association, now in its 5th edition, is not only the de facto standard used worldwide, but also contains quite specific definitions of mental disorders that are linked to standard procedures. The new tools promise earlier diagnosis and effective treatment of such disorders, with much lower cost, better accuracy, and wider coverage worldwide.

All these concerns must be addressed proactively before we put this technology into use for disease prediction. Clinicians must be trained on the technology and made aware of how the underlying algorithms are designed, especially with respect to parameters such as age and gender. Certain mental illnesses are more likely to affect one sex than the other, and risk also varies across age groups.

These, and many more such important concerns, will be discussed further at our International Forum on Women's Brain and Mental Health, held on 8-9 June 2019 in Zurich, Switzerland. A large number of world-class academics, as well as experts from non-profits, government, and other relevant organizations, will participate. Only with appropriate foresight, experimentation, and policy changes will we be able to harness all the tools AI is now giving us, towards a better future for mental health and well-being across borders and cultures.