AI in Healthcare, London October meetup with Babylon Health


Words by Hannah Hare

Last week Women of Wearables and Babylon Health organised a Q&A panel discussion on the topic of AI (Artificial Intelligence) in Healthcare. It proved really popular, with well over 100 people filling up the trendy Babylon break-out space in Sloane Avenue, London. The crowd was not disappointed: it was a fantastic evening of stimulating conversation, with topics ranging from data bias and data privacy to the changing role of doctors.


First to introduce herself was Ivana Bartoletti of Women Leading in AI. Her interest in privacy began at the age of 17, when she noticed a camera watching her in a ladies’ bathroom while visiting the US.

She studied law at university and has gone on to provide legal advice on issues relating to privacy and data protection for global companies including Barclays and Sky.


Next was my colleague Rebecca Wray from TTP. Rebecca studied product design engineering and has already had a varied career working in pharma, autonomous vehicle design and now connected medical devices.

She is now leading programmes at TTP to develop connected drug delivery devices and digital therapeutics that have the potential to revolutionise patient care.


Third was Juemin Xu of Think and Decide. She has done extensive research on decision making in gambling and on the differences between how humans and AI make decisions.

Her research has been featured in The Economist, The New Yorker and The Wall Street Journal.


Caroline Hargrove from Babylon Health completed the panel. Caroline trained as a mechanical engineer and started her career developing F1 simulators before making the switch into healthcare.

She is now CTO at Babylon Health.

The moderators were Marija Butkovic and Anja Streicher from Women of Wearables.


AI is a powerful tool to help doctors, but it won’t replace them

We are hearing more and more about AI being applied to healthcare. Could it one day replace doctors altogether? Caroline was clear that there are some tasks that only doctors should be doing. For example, nobody should be given a cancer diagnosis via an app. Similarly, it should be a doctor or a nurse who draws blood, administers drugs and explains treatment options to patients.

However, there are lots of areas of healthcare where AI could relieve the substantial time pressure that doctors are under, freeing up more of their time to speak directly with patients. For example, AI could automate part of the triage process, helping patients to seek help from the most appropriate source – which may be a pharmacy rather than a GP or a hospital. That way the patients who need to be seen urgently will have shorter wait times, and doctors can spend their time with the patients who will benefit from it the most. Babylon is also working on a product which will save doctors’ time by writing up their notes automatically at the end of each appointment.

Not every problem needs AI

So how do you decide when to apply AI? Rebecca suggested a three-step process for defining any new product: there must be a user need, a technology to solve it, and a business case. Only when you combine all three do you have a valuable proposition for a new product. In other words, the case for applying AI to healthcare is strongest when there is a need that can’t be adequately addressed by classical data analytics alone.

Babylon applies AI to interpret human input, pulling the key facts out of varied, everyday language. For example, the colloquial phrase “my stomach is killing me” might be translated simply to “stomach ache”. This can be useful both for patient symptom checkers and when writing up doctors’ notes automatically. However, when it comes to predicting health conditions based on a list of symptoms, Babylon uses Bayesian statistics, which are not AI. This is because it is important to be able to trace the decision-making process and understand directly which symptoms are related to which health outcomes. It also helps to avoid unintended biases creeping in due to limited input data.
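To make that traceability point concrete, here is a minimal toy sketch of a Bayesian symptom checker in Python. The conditions, priors and likelihoods are entirely invented for illustration (this is not Babylon’s actual model), but it shows why the approach is auditable: every symptom’s contribution to each condition’s probability is an explicit number you can inspect.

```python
# Toy Bayesian symptom checker. All numbers are invented for illustration.

# P(condition): hypothetical prior prevalence of each condition
priors = {"gastritis": 0.02, "appendicitis": 0.001, "indigestion": 0.10}

# P(symptom | condition): hypothetical likelihood of reporting each symptom
likelihoods = {
    "gastritis":    {"stomach ache": 0.8, "nausea": 0.5},
    "appendicitis": {"stomach ache": 0.9, "nausea": 0.7},
    "indigestion":  {"stomach ache": 0.6, "nausea": 0.2},
}

def posterior(symptoms):
    """Bayes' rule: P(condition | symptoms) is proportional to
    P(condition) * product of P(symptom | condition)."""
    scores = {}
    for condition, prior in priors.items():
        score = prior
        for s in symptoms:
            score *= likelihoods[condition].get(s, 0.01)  # small default
        scores[condition] = score
    total = sum(scores.values())
    return {c: v / total for c, v in scores.items()}  # normalise to sum to 1

print(posterior(["stomach ache", "nausea"]))
```

Because each multiplication is visible, a clinician or regulator can trace exactly which symptom shifted the probability towards which condition – the property the panel highlighted.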

Data bias and privacy are real concerns, but regulation can help

When used correctly AI is a very valuable tool, but there is – rightly – a lot of fear about its misuse. Perhaps you have heard of the AI CV screening tool that was pulled after it was found to be biased against women. Or the facial recognition system that was biased against dark skin tones. In both cases, limited historical data was used to train the algorithms, baking past prejudices into the software. This is especially dangerous in AI where the decision-making process is hidden. Similarly, companies like Cambridge Analytica have generated a lot of concern regarding the privacy of our information. Who owns our data, and who controls where it goes?

How can we overcome these challenges of data bias and privacy? The panel agreed that AI is here to stay and suggested that the best route forward was through open, informed conversations and increased regulation. Perhaps industries could agree on a “trust stamp” that is earned by performing due diligence on the quality of input data given to AI algorithms. From May 2020 medical apps will be regulated in the same way as class 2 medical devices. This will put an additional burden on health apps to demonstrate efficacy and transparency before they can launch, but the panel saw it as a very positive step in earning consumers’ trust and ensuring high-quality medical advice.

Humans and AI make decisions differently

Imagine you have a coin, with a 50% chance of heads. If it lands on heads, you win £120, and if it lands on tails, you lose £100. Would you take the bet? A computer would say yes – on average, you will win more than you lose. However, most of us humans wouldn’t take the bet, because we consider the pain of losing £100 to be greater than the joy of winning £120. Juemin explained that you would have to offer a £300 win compared to a £100 loss for most humans to take the bet! This emotional decision making is very different to the cold, predictable logic of a computer.
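The computer’s side of this is a simple expected-value calculation. A few lines of Python make both numbers explicit (the factor of three comes straight from Juemin’s £300-versus-£100 threshold):

```python
# Expected value of the coin bet: 50% chance of +£120, 50% chance of -£100
ev = 0.5 * 120 + 0.5 * (-100)   # = +£10 per toss, so a computer accepts

# Juemin's observation: most people need roughly a £300 win to accept a
# £100 loss, implying losses loom about three times larger than gains.
loss_weight = 300 / 100          # = 3.0
felt_value = 0.5 * 120 + 0.5 * (-100 * loss_weight)

print(ev, felt_value)            # 10.0 -90.0: positive EV, but it feels bad
```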

In healthcare, very little is as black and white as this example. How can we code hundreds, if not thousands, of years of collective experience into a computer? How can we teach it to take contextual information into account, such as the impact of side effects on quality of life? And what happens when a computer program makes a mistake? Patients are remarkably forgiving when their doctor gets something wrong. After all, they recognise that their doctor is – at the end of the day – just human. It seems unlikely that they will be equally forgiving of an algorithm. Striking the right balance between accuracy and caution (in handing over to a human doctor) will likely be critical to AI being accepted in routine healthcare.

What changes would the panellists like to see in the future?

Ivana would like to see AI become a topic for dinner table conversation, and for the media to stop portraying it as a “scary Terminator figure”. Caroline agreed, asking for a more knowledgeable debate from all the different stakeholders, including the public. And Rebecca would like to relieve some of the burden on healthcare professionals, giving them more time with patients instead of with computers.

My reflections

AI has become a buzzword and is often used as a catch-all for any process involving analytics and big data. But there is a spectrum of different analysis techniques which involve different amounts of human input, or “supervision”. Because of this they can have quite different levels of traceability, which is particularly important in the highly regulated healthcare market.

Traditional data analytics is predictable, repeatable, and “classical” in the sense that the human does all the thinking and the computer simply executes the commands. This can be applied to large datasets very successfully and is often more than good enough for analysing existing data to find interesting trends.
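As a quick illustration of what I mean, here is a classical query over some hypothetical appointment records: the analyst specifies the computation completely, and the result is the same every time.

```python
import pandas as pd

# Hypothetical appointment records. The human decides exactly what to
# compute; the computer just executes the query.
df = pd.DataFrame({
    "month":    ["Jan", "Jan", "Feb", "Feb", "Mar", "Mar"],
    "clinic":   ["A", "B", "A", "B", "A", "B"],
    "wait_min": [35, 20, 42, 18, 50, 22],
})

# Average waiting time per clinic per month: predictable and repeatable.
trend = df.groupby(["clinic", "month"], sort=False)["wait_min"].mean()
print(trend)
```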

In supervised Machine Learning a human provides the computer with a large data set that includes a performance measure (e.g. a disease diagnosis) and a large amount of additional information that may or may not be related to the performance (e.g. patient information and medical scan results). The algorithm finds correlations in the data and weights each piece of information to predict the performance measure. These weights can then be applied to future datasets (future patients) to predict the outcome. The software developer can look under the hood to see which variables the algorithm has picked out, and can even change the weightings manually before applying the algorithm, but the power lies in the computer finding complex correlations that a human may never have thought to check for.
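Here is a minimal sketch of that idea using scikit-learn’s logistic regression on invented data (a real clinical model would of course be far richer): the point is that the learned weight per input feature is right there to inspect.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: each row is a "patient", each column a feature
# (say age, a blood marker, a scan-derived score); y is the known diagnosis.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# "Look under the hood": one learned weight per feature, which a developer
# can inspect (and in principle adjust) before deployment.
print(model.coef_)            # weightings per feature
print(model.predict(X[:5]))   # predictions for "future patients"
```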

By contrast, Deep Learning is an inherently untraceable process which acts like a “black box”. As with Machine Learning you feed in a large amount of training data, but here the algorithm is free to apply multiple layers of analysis in order to predict the outcome. Each layer has its own weightings, and in some architectures can loop back to previous layers, making it essentially impossible to trace how each input relates to the outcome. This additional freedom can allow the computer to “learn” extremely complex tasks: AlphaGo famously beat world champion Lee Sedol at the complex board game Go in 2016. However, the uncertainty in how the computer has reached each decision is a real challenge for healthcare regulators.
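For contrast, here is the same invented data passed through a small multi-layer network, again only a sketch: the fit may improve, but the “reasoning” is now spread across layers of weight matrices, with no single coefficient per input to point at.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Same style of invented data: 200 hypothetical patients, 3 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Two hidden layers of 64 units: more expressive, far less interpretable.
deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(X, y)

# The "explanation" is a stack of weight matrices, not one weight per input,
# which is exactly the traceability problem regulators worry about.
for i, w in enumerate(deep.coefs_):
    print(f"layer {i}: weight matrix of shape {w.shape}")
```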

Before reaching straight for AI, it is worth thinking carefully about what user needs you are addressing and what your constraints are. Deep Learning is very powerful, but also very power-hungry – it will take a lot of time and computational power to develop a good program, it will need a large amount of unbiased training data, and in a healthcare context it will be very difficult to prove its efficacy to regulators. Supervised or semi-supervised Machine Learning is appropriate for most complex datasets, like 3D medical scans or voice data. And, although it is less trendy, don’t be afraid to fall back on traditional data analytics – its simplicity can often deliver much faster and equally effective results!




Hannah Hare is a product development physicist who specialises in creating new products and technologies for healthcare applications. She collaborates with young companies to help them turn their most ambitious ideas into real products. Connect with Hannah via LinkedIn!