‘Black box medicine’ and machine learning

University of Birmingham experts in AI for health have contributed to a series of reports from the PHG Foundation that consider the human interpretability of machine learning in the context of healthcare and medical research.

Continued improvements in computing and AI, especially machine learning, are beginning to offer benefits for health services thanks to their ability to make sense of highly detailed information. From supporting healthcare professionals in making diagnoses and determining risk, to optimising treatment decisions and patient management, further expansion of the use of machine learning in healthcare seems assured.

Like other complex algorithms, machine learning can give rise to ‘black box medicine’ – where conclusions (that may influence decisions related to care) are reached without patients or health professionals understanding why. Though such systems can help medical research and clinical practice in many ways, the volume of data they use and their complexity may mean that how they reach decisions cannot be explicitly understood, or even adequately explained. This raises a number of legal and ethical issues.

To further understanding of the black box medicine problem, the PHG Foundation was awarded seed funding from the Wellcome Trust to examine interpretability in the context of healthcare and relevant regulation. By clarifying the requirements for transparency and explanation, the project aims to improve patient and public trust in these technologies and to help ensure that the benefits for healthcare are realised for all.

Professor Alastair Denniston and Dr Xiaoxuan Liu commented: “These Black Box Medicine and Transparency reports are an important contribution to our understanding of contemporary issues around artificial intelligence in healthcare, including how humans interact with these AI systems and how machine learning models can be made sufficiently interpretable to humans to provide confidence in their performance.

“This is a complex area which is relevant to making sure that we unlock the benefits of AI systems, while providing sufficient scrutiny and regulation to avoid hidden and unintended biases.”

An overview of the PHG reports is available here: https://www.phgfoundation.org/briefing/black-box-medicine-transparency

Professor Denniston is the AI Theme Lead for the newly formed Centre for Regulatory Science and Innovation. He and Dr Liu lead a cross-BHP (Birmingham Health Partners) programme of work between University Hospitals Birmingham and the University of Birmingham, focusing on the evaluation of AI systems and the route to implementation in the NHS.