"Collaboration between human expertise and AI, if implemented responsibly, could lead the way to a more effective and humane healthcare system"

What does the future of AI in Healthcare hold? In which areas can AI be helpful in our healthcare system, and in which areas do we need to tread carefully?

On this AI Appreciation Day, some of our questions are answered by Alison Noble. She is the Technikos Professor of Biomedical Engineering at the University of Oxford, Foreign Secretary of the Royal Society, an EASAC Council Member, and a Working Group Member of the AI in Healthcare project led jointly by EASAC and FEAM.

Could you tell us about the most significant benefits AI brings to both patients and medical practitioners?

A. Noble: For patients, AI may offer earlier and more accurate diagnoses by identifying subtle patterns in complex data, such as medical images or genetic profiles, that human eyes might miss or simply cannot see. This may lead to personalised treatment plans tailored to individual needs, improving efficacy and reducing side effects. Imagine AI predicting a patient's response to a particular medication, or flagging early signs of a chronic disease. For practitioners, we are already seeing the benefit of AI streamlining administrative tasks, freeing up time to focus on direct patient care. It can also act as an analytical tool that assists in clinical decision-making, provides access to medical literature, and even enhances surgical precision through robotic assistance. Indeed, the collaboration between human expertise and AI, if implemented responsibly, could lead the way to a more effective and humane healthcare system.

There are bound to be challenges to such a promise. What are the primary hurdles in integrating AI into existing healthcare systems and workflows, and how can these be addressed?

A. Noble: It is definitely early days, and there are significant challenges. One major hurdle is data access: AI thrives on high-quality, diverse, and well-structured data, but healthcare data is often siloed, incomplete, or held in disparate formats across different sites. Interoperability between legacy systems and new AI platforms is therefore crucial, but is currently lacking. Furthermore, building trust among healthcare professionals is vital. There is scepticism regarding the accuracy and accountability of AI tools, and in some areas of medicine a fear of AI replacing human roles. Addressing these end-user concerns requires robust validation studies, transparent AI models that explain their reasoning in ways comprehensible to a medical practitioner (the domain expert), and comprehensive training programmes that demonstrate how AI complements, rather than supplants, clinical decision-making. Finally, regulatory frameworks need to evolve to keep pace with rapid AI advancements, ensuring safety, efficacy, and ethical deployment while fostering innovation.

Data privacy and ethical considerations are recurring topics in discussions about AI. Could you expand on the ethical concerns related to data privacy in healthcare AI? How can we safeguard sensitive patient information?

A. Noble: Data privacy is one of the most critical ethical considerations in the implementation of AI. In healthcare, AI systems may be trained on vast amounts of highly sensitive patient data, including medical histories and genetic information. It is important to protect this data from unauthorised access, misuse, and breaches. Robust security measures, such as advanced encryption and strict data access controls, are paramount. AI models can also be built with privacy-enhancing techniques such as federated learning, whereby training data is never centralised or shared but remains safely stored at the individual sites.
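To illustrate the federated-learning idea described above, here is a minimal, purely illustrative sketch: two hypothetical "sites" each train a toy linear model on their own synthetic data, and only the model weights (never the raw data) are exchanged and averaged. The sites, model, and learning rates are assumptions for demonstration, not any real clinical system.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a linear model locally by gradient descent; raw data never leaves the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(site_weights, site_sizes):
    """Aggregate site models, weighted by local dataset size (the FedAvg rule)."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Two hypothetical hospital sites with private synthetic datasets
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    y = X @ true_w
    sites.append((X, y))

# Communication rounds: only model weights cross site boundaries
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(updates, [len(y) for _, y in sites])
```

After a few rounds the shared model converges close to the underlying parameters even though neither site ever saw the other's data; real systems add secure aggregation and differential privacy on top of this basic pattern.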

Ethical AI demands transparent data governance frameworks. Patients must give informed consent stating how their data can be used, especially for purposes beyond direct care, such as research or commercial development. Another major concern is algorithmic bias: if AI is trained on unrepresentative datasets, it will likely perpetuate, and may even exacerbate, existing healthcare disparities. Therefore, diverse and inclusive data collection, awareness of and continuous monitoring for bias, and a strong emphasis on explainable AI are crucial to build trust and ensure equitable healthcare outcomes.

What aspect of AI in Healthcare is currently underexplored?

A. Noble: The focus to date has primarily been on the automation of tasks, and on assistive AI-based technology to support experts, typically for routine tasks performed in hospitals. In the latter case, if the AI fails, the expert can take over. Less attention has been given to developing assistive-AI clinical decision-making tools that empower non-specialists, trainees, and occasional users such as general practitioners. Doing so could be transformational, as it may allow aspects of what is currently hospital-based care to move into the community setting, benefiting both patients and the health system as a whole. However, AI tools trained on expert data and designed for experts cannot necessarily be reused in these cases: because the user lacks an expert's knowledge, greater attention must be paid to understanding the failure modes of the AI and to deciding how AI errors should be handled. An ideal solution is to co-design with the end-user in mind from the beginning, but doing this from scratch every time would be costly. This topic, which falls under the theme of human-AI collaboration, is emerging as an active area of interdisciplinary research involving AI researchers, clinicians, and behavioural scientists.
