The AI Ethics Initiative supports research and practical interventions that could strengthen the ethical adoption of AI-driven technologies in health and care. We are translating principles into practice by building the evidence base needed to introduce new measures for mitigating risk and providing ethical assurance.
We invest in work that complements existing efforts to validate, evaluate and regulate AI-driven technologies. A primary focus of the Initiative is countering the inequalities that can arise from the ways in which these technologies are designed and deployed.
What we do
The NHS AI Lab is well placed to make a difference to the ethical assurance of AI given our role in supporting all aspects of the AI life cycle. This involves working with innovators as they define the purpose of their products, and guiding health and care professionals as they use these technologies to assist them in providing care.
A core focus of the AI Ethics Initiative is on how to counter the inequalities that may arise from the ways that AI-driven technologies are developed and deployed in health and care. We believe these inequalities aren’t inevitable, and that if they are proactively addressed we can realise the potential of AI for all users of health and care services.
We support projects that can demonstrate they are patient-centred, inclusive, and impactful. We collaborate with academia, the third sector, and other public bodies to achieve greater impact and positively transform how patients, citizens, and the workforce experience AI in health and care.
Community of practice
Join our Community for Racial and Ethnic Equity in AI on the AI Virtual Hub for early insights into projects and to learn from researchers and practitioners working in this area. We hope to advance knowledge and inform practice related to the use of AI in healthcare, with the aim of improving health outcomes for minority ethnic populations in the UK.
We have a range of research projects underway, delivering work within the following areas:

- exploring how and why access to health data should be granted for AI purposes
- ensuring that AI leads to improvements in health outcomes for minoritised populations
- increasing the trustworthiness of AI systems and encouraging appropriate confidence in their clinical use
Governing the use of data for AI
We want to involve patients and the public in deciding how and why access to health data should be granted for AI purposes, and are working closely with the AI Imaging team on these projects.
Honing approaches to data stewardship
We have partnered with Sciencewise (UKRI) to hold a public dialogue that will inform which model(s) of data stewardship the AI Ethics Initiative should invest in developing and refining through further research, with reference to national medical imaging assets.
Data stewardship describes practices relating to the collection, management and use of data. There is a growing debate about what a ‘responsible’ approach to data stewardship entails, with some advocating for a more participatory approach. The AI Ethics Initiative is seeking to ensure that the data stewardship model used for national (medical imaging) assets inspires confidence among patients, the public and key stakeholders. The central question we will seek to explore is how access to data for AI purposes should be granted.
The participants in the dialogue will inform the Terms of Reference for a research competition (a ‘Participatory Fund for Patient-Driven AI Ethics Research’) that we will hold to improve data stewardship approaches for national medical imaging assets established by the NHS AI Lab and more broadly across the NHS.
There is an Oversight Group in place to provide advice on the dialogue process and materials. We are grateful to the following individuals for their time and invaluable input as members of this Group:
Oversight Group members
Natalie Banner (Chair), Genomics England
Kira Allmann, Ada Lovelace Institute
Phil Booth, medConfidential
Sophie Brannan, British Medical Association
Mark Halling-Brown, Royal Surrey County Hospital
Margaret Charleroy, Centre for Improving Data Collaborations, NHS Transformation Directorate
Vicky Chico, Office of the National Data Guardian
Jasmine Leonard, Freelance
Ruth Keeling, Data Policy, NHS Transformation Directorate
Sinduja Manohar, HDRUK
Joseph Savirimuthu, University of Liverpool
Susheel Varma, ICO
Joseph Watts, Data Analytics, NHS Transformation Directorate
Improving how decisions about data access are made
We have partnered with the Ada Lovelace Institute to design a model for an Algorithmic Impact Assessment (AIA), which is a tool that enables users to assess the possible societal impacts of an algorithmic system before it is used.
The AIA is being trialled from spring 2022 as part of the data access process for national medical imaging assets, such as the National Covid-19 Chest Imaging Database and any planned expansion. It will entail researchers and developers engaging with patients and the public about the risks and benefits of their proposed AI solutions, prior to gaining access to medical imaging data for training or testing. The AIA thus helps address the question of why access to data for AI purposes should be granted.
Through the trial, we hope to demonstrate the value of involving patients and the public earlier in the development process, when there is greater flexibility to make adjustments and address possible concerns about AI systems.
Striving for health equity
We want to ensure that AI leads to improvements in health outcomes for minoritised populations.
We have partnered with the Health Foundation to support research in response to concerns about algorithmic bias. A research competition, enabled by the National Institute for Health Research (NIHR), was held to address the racialised impact of algorithms in health and care and explore opportunities to improve health outcomes in minority ethnic groups.
While algorithmic bias does not only affect racialised communities, examples of deploying AI in the US indicate that there is a particular risk of algorithmic bias worsening outcomes for minority ethnic patients. At the same time, there has been limited exploration of whether and how AI can be applied to address racial and ethnic disparities in health and care.
The research competition had two categories:
1. Understanding and enabling opportunities to use AI to address health inequalities
The focus of this first category is on how to encourage approaches to innovation that are informed by the health needs of underserved minority ethnic communities and/or are bottom-up in nature.
2. Optimising datasets, and improving AI development, testing, and deployment
The focus of this second category is on creating the conditions to facilitate the adoption of AI that serves the health needs of minority ethnic communities. For example, this may include mitigating the risks of perpetuating and entrenching racial health inequalities through data collection and selection and during the development, testing, and deployment stages.
The following 4 projects were awarded 2-year funding in October 2021:
Assessing the acceptability, utilisation and disclosure of health information to an automated chatbot for advice about sexually transmitted infections in minoritised ethnic populations
Dr Tom Nadarzynski at the University of Westminster
This project aims to raise the uptake of screening for STIs/HIV among minority ethnic communities through an automated AI-driven chatbot which provides advice about sexually transmitted infections. The research will also inform the development and implementation of chatbots designed for minority ethnic populations within the NHS and more widely in public health.
I-SIRch - Using artificial intelligence to improve the investigation of factors contributing to adverse maternity incidents involving Black mothers and families
Dr Patrick Waterson and Dr Georgina Cosma at Loughborough University
This project uses AI to investigate factors contributing to adverse maternity incidents amongst mothers from different ethnic groups. This research will provide a way of understanding how a range of causal factors combine, interact and lead to maternal harm. The aim is to inform the design of interventions that are targeted and more effective for these groups.
Ethnic differences in performance and perceptions of AI retinal image analysis systems (ARIAS) for the detection of diabetic retinopathy in the NHS Diabetic Screening Programme
Professor Alicja Rudnicka (St. George's Hospital) and Professor Adnan Tufail (Moorfields Eye Hospital and Institute of Ophthalmology, UCL). Co-investigators: The Homerton University Hospital, Kingston University, and University of Washington, USA.
This project aims to ensure that AI technologies that detect diabetic retinopathy work for all, by validating the performance of AI retinal image analysis systems that will be used in the NHS Diabetic Eye Screening Programme (DESP) in different subgroups of the population. This study will provide evidence of effectiveness and safety prior to potential commissioning and deployment within the NHS.
STANDING together (STANdards for Data INclusivity and Generalisability)
Dr Xiaoxuan Liu and Professor Alastair Denniston at University Hospitals Birmingham NHS Foundation Trust
University Hospitals Birmingham NHS Foundation Trust and partners will lead STANDING Together, an international consensus process to produce standards for datasets underpinning AI systems, to ensure they are diverse, inclusive and can support the development of AI systems which work across all demographic groups. The resulting standards will help inform regulators, commissioners, policy-makers and health data institutions on whether AI systems are underpinned by datasets which represent everyone and don’t risk leaving underrepresented and minority groups behind.
Building confidence in clinical use of AI
We want to improve the trustworthiness of AI systems and encourage appropriate confidence in their clinical use.
Strengthening accountability for AI through 'trustworthiness auditing'
AI accountability toolkits are being used to encourage trustworthiness in AI by enabling users to confront and address potential risks, such as algorithmic bias and opacity. For example, the algorithmic impact assessment (AIA) we are developing with the Ada Lovelace Institute is a type of ‘accountability toolkit’ intended to support AI developers with auditing their technology at an early stage and to ultimately increase trust in the use and governance of AI systems. Other accountability toolkits include commercial tools, such as Google’s What-If Tool and IBM’s AI Fairness 360, which support users with making technical fixes to improve the interpretability of a model or to measure bias.
We are collaborating with the Wellcome Trust and Sloan Foundation to support the Oxford Internet Institute (OII) with developing the necessary evidence base and tools to assess and enhance the efficacy of AI accountability toolkits used in health and care. This project will complement our work with the Ada Lovelace Institute to trial an AIA, helping us to ensure that we have the necessary policies and standards in place to support the cultural and organisational adoption of such accountability toolkits.
The OII research team will ultimately produce a 'meta-toolkit' for trustworthy and accountable AI that comprises technical methods, best practice standards, and guidelines designed to encourage sustainable development, use, and governance of trustworthy and accountable AI systems. The meta-toolkit will help health and care practitioners, administrators, and policy-makers determine which accountability tools and practices are best suited to their particular use cases, and will ultimately be most effective at identifying and mitigating risks of AI systems at a local level.
Research published by the team as part of this project has already shown that state of the art ‘bias preserving’ fairness methods in computer vision, used for example in medical imaging AI systems, tend to achieve fairness in practice by decreasing performance, including for the most disadvantaged groups. The team has recommended simple alternative best practices for improving performance without the need to ‘level down’ in the interest of fairness.
Developing appropriate confidence in AI among healthcare workers
We have partnered with Health Education England (HEE) to research factors influencing healthcare workers’ confidence in AI-driven technologies and how their confidence can be developed through education and training.
Read the first report, ‘Understanding healthcare workers’ confidence in AI’.
The Topol Review in 2019 recommended that the NHS develop a workforce able and willing to make it a world leader in the effective use of healthcare AI and robotics.
The first report argues that confidence in AI used in healthcare can be increased by establishing its trustworthiness through the governance and robust implementation of these technologies.
In the context of clinical decision making, even once trustworthiness in AI technologies has been established, high confidence in AI-derived information may not always be desirable. For example, a clinician may accept an AI recommendation uncritically, potentially due to time pressure or limited experience in the clinical task, a tendency referred to as automation bias.
The report concludes that clinicians must be supported through training and education to manage potential conflicts between their own intuition or views about a patient’s condition and the information or recommendations provided by an AI system.
The report identifies broader efforts that primarily aim to improve patient safety and service delivery, but could also contribute to developing confidence in AI within the healthcare workforce. These include further development of regulatory frameworks for AI performance, quality, and risk management, and finalisation of formal requirements for evidence and validation of AI technologies.
Much of this work is already underway, being led by Health Education England, the NHS Transformation Directorate, Integrated Care Systems and trusts, regulators, legal professionals, academics, and industry innovators. The AI Ethics Initiative is working with these organisations to ensure our findings on AI confidence are considered as part of this broader work.
The second report, which will be published later this year, will determine educational and training needs, and present pathways to develop related education and training offerings.
How to get involved
We have a Community for Racial and Ethnic Equity in AI on the Future NHS platform.
The purpose of this community of practice is to bring together researchers, innovators, healthcare practitioners, civil society groups and members of the public to:
- facilitate connections that benefit the delivery and impact of relevant research, including making international links
- convene ‘Insight sessions’ for researchers to share developments in their work with wider audiences, including the public
- disseminate early research findings and elicit constructive feedback and support
- share successes, challenges and learnings with one another as part of the research process
We welcome you to join our NHS AI Virtual Hub and become a member of our community of practice.