Artificial Intelligence in Health: New Technology, Old Biases

By Frances-Catherine Quevenco

Artificial intelligence (AI) has become an intrinsic part of our daily lives and opens up a vast array of possibilities that make life a little easier. It helps us choose the next Netflix series to watch, it suggests items we may want to purchase online and, in more recent years, it has become more prominent in healthcare. People can now describe their symptoms to chatbots, which then either refer them to an appropriate physician or suggest possible home remedies.

AI’s ability to spot patterns in vast amounts of unstructured data has opened the door to ‘precision medicine’.

In an age where we have access to extraordinary amounts of medical data, AI offers the potential to move beyond the ‘one size fits all’ solution and provide suggestions for medical treatments tailored to the patient and their needs. However, AI systems are only as good as the data they are trained on – and as a consequence, they can inherit or amplify the very prejudices they are meant to avoid.

Examples include the Amazon recruitment AI that showed a preference for men, and cases where several facial-recognition systems struggled to identify female and minority faces. This raises a number of sociological concerns, but in the domain of healthcare in particular it could severely impact the diagnosis and treatment of underrepresented populations.

One instance of AI bias involves the auditory tests built by Winterlight Labs in Toronto to detect Alzheimer’s disease. Although the promising system was published in the Journal of Alzheimer’s Disease, the authors realised that it only worked with native English speakers of a specific Canadian dialect. This resulted from a lack of diversity in the data used to train their AI. Unfortunately, this is not an isolated case.

Medical datasets that are openly available to AI researchers are heavily biased towards male Caucasian samples. Furthermore, studies on human genomics also rely on very homogeneous data, with around 81% of participants in 2,511 studies being of European descent. This is a serious issue, given that such datasets are being used by companies like Deep Genomics to find treatments for Huntington’s disease and cystic fibrosis, by Sophia Genetics to provide on-site patient diagnoses from genomics data, and in upcoming projects from tech giants such as IBM’s Watson.
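To make the effect of such skewed datasets concrete, the minimal sketch below (a hypothetical Python example using scikit-learn; the data, group labels and numbers are entirely invented for illustration) shows how a model trained on data dominated by one demographic group can look accurate overall while failing the underrepresented group – which is why performance should always be reported per subgroup.

```python
# Hypothetical illustration: training data skewed towards one group can hide
# poor performance on the underrepresented group. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, flip):
    """Synthetic 'patients': one feature predicts the label, but the
    relationship is reversed for the underrepresented group (flip=True)."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    if flip:
        y = 1 - y
    return x, y

# Training set: 90% group A, 10% group B (mirroring a skewed medical dataset)
xa, ya = make_group(900, flip=False)
xb, yb = make_group(100, flip=True)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on balanced, held-out data for each group separately
for name, flip in [("group A (majority)", False), ("group B (minority)", True)]:
    x_test, y_test = make_group(1000, flip=flip)
    print(name, round(accuracy_score(y_test, model.predict(x_test)), 2))
# Typical output: group A close to 1.0, group B close to 0.0 --
# a single overall accuracy figure would completely hide this gap.
```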

It is particularly crucial to avoid these biases in light of sex and gender differences in brain and mental diseases. Men and women differ in their risk of developing certain diseases, as well as in how these diseases manifest. If homogeneous datasets are used to train these AI-based technologies, the treatment gap between genders could widen, reinforcing the very ‘one size fits all’ approach that AI was seen as an instrument to counter.

If AI is to be a step towards precision medicine tailored to the patient, we need to address the biases these systems may carry and consider the steps we can take to avoid them.

The Women’s Brain Project (WBP) aims to raise awareness of gender bias in brain and mental health. This includes promoting discussion about gender biases in AI technologies and their impact on brain and mental health. At the International Forum on Women’s Brain and Mental Health, to be held on June 8-9 at the University of Zurich in Switzerland, one of the panels – featuring Sophia the Humanoid Robot – will address gender, novel AI technologies and healthcare.

(Until April 1, submit questions for Sophia the Humanoid Robot on Twitter using #AskSophiaWBP and read more about this campaign here.)