It’s been really interesting to watch how digitalisation has become the biggest game changer in healthcare. And I believe we’re reaching a pivotal point when it comes to AI and healthcare. There’s been a huge shift in the volume of data being created and stored – it’s increasing at a phenomenal rate. At the same time, the speed at which computers can analyse this data, and the depth of insight we can draw from that analysis, have also increased rapidly.
When I start to think about the potential for AI applications in healthcare, it’s vast. There’s so much opportunity. It ranges from disease prevention, early detection and better, more affordable diagnosis to supporting clinical decision making and the design of new pharmaceutical products. But we must ensure that AI technologies are safe, that medical professionals are trained to work alongside them and, more importantly, that you and I understand what they are doing and telling us.
We’re at the start of a journey that has huge potential
Our AXA Doctor at Hand service from AXA Health is a great example of how digital innovation has progressed in healthcare. AXA Doctor at Hand is a private online GP service that allows you to see a GP whenever you want, from anywhere in the world. A GP can diagnose a condition, recommend treatment, issue prescriptions and refer you to a specialist. But AXA Doctor at Hand is more than virtual consultations: it brings together multiple clinical information sources to build models that identify and inform ‘best practice’ treatment pathways. Insights from this service are shared with clinical providers to inform best practice and create a better experience for patients. Use of the service has sky-rocketed from 500 sessions per month pre-pandemic to over 20,000 per month, and it’s here to stay.
AI needs data; are people ready to share that data?
The healthcare sector already handles huge amounts of data on a daily basis: patient information, medical history, diagnostic results, genetic data, hospital billing and clinical studies. Most of this data is highly private and sensitive. When this data is used with AI technology, it can detect patterns, make predictions and, most importantly, make recommendations. But for this to happen, people need to trust sharing their most valuable information with a computer. We need to build relationships of trust with our patients and reassure people that their data is being collected and analysed to support their health and wellbeing. It’s crucial that we explain how AI came to a decision, so people can trust it.
Understanding what the algorithms are telling you
The ability to explain how we came to a conclusion on a diagnosis is vital. It could be the thing that ensures you get the right treatment. Through the AXA Research Fund, we support Professor Thomas Lukasiewicz from the University of Oxford, whose research looks at ‘Explainable AI for healthcare’. It aims to develop a way to explain what the algorithms mean, in a language that everyone can understand. Without this level of understanding, people won’t be willing to share their data. It’s a critical piece in moving forward if we want to give patients the transparency and confidence they need to trust AI.
Taking care of you
I believe the amount of data available will increase, as will patients’ trust in sharing it, thanks to our growing reliance on devices, apps and the internet of things (IoT). This data is invaluable to the healthcare industry, but only if it is captured, stored and analysed correctly. And to do that, we need to earn the trust of patients. AI has the potential to revolutionise the healthcare industry, but the best tool we have in our hands is looking after ourselves. So the biggest challenge right now is this: how can we use AI to energise and motivate people to take responsibility for their personal health and wellbeing?