The NHS AI Lab is committed to accelerating the safe, ethical, and effective adoption of AI across health and care.
We are introducing the AI Ethics Initiative to ensure that AI products used in the NHS and care settings will not exacerbate health inequalities. We will invest in research and trial practical interventions that complement and strengthen existing efforts to validate, evaluate, and regulate AI-driven technologies.
The ethical assurance of AI
The NHS AI Lab is well placed to contribute to the ethical assurance of AI, given our role in supporting every stage of the AI life cycle: from working with innovators as they define the purpose of their products, to guiding health and care professionals as they use these technologies to assist them in providing care.
The focus of the AI Ethics Initiative will be on how to counter the inequalities that may arise from the ways that AI-driven technologies are developed and deployed in health and care. We believe these inequalities aren’t inevitable, and that if they are proactively addressed we can realise the potential of AI for all users of health and care services.
Our intention is to support projects that can demonstrate that they are patient-centred, inclusive, and impactful. We will collaborate with academia, the third sector, and other public bodies, encouraging them to help us shape our key programmes, including the AI in Health and Care Award, AI Imaging, and AI Skunkworks. We will also contribute to and promote our partners' existing work on the ethics of AI, to achieve greater impact and positively transform how patients, citizens, and the workforce experience AI in health and care.
Expand the areas below to find out more about the range of research projects underway:
Optimising AI to improve the health and care outcomes of minority ethnic communities
The NHS AI Lab is partnering with the Health Foundation on a joint research call (now closed), enabled by the National Institute for Health Research (NIHR). The research call responds to concerns about algorithmic bias, and in particular the racialised impact of algorithms in health and care. While algorithmic bias does not only affect racialised communities, examples of AI deployment in the US indicate a particular risk of algorithmic bias worsening outcomes for minority ethnic patients. At the same time, there has been limited exploration of whether and how AI can be applied to address racial and ethnic disparities in health and care.
The research will focus on how to ensure that AI accounts for the health needs of diverse communities and how it can be leveraged to improve health outcomes in minority ethnic populations.
There are two categories for this call:
- Understanding and enabling opportunities to use AI to address health inequalities
The focus of this first category is on how to encourage approaches to innovation that are informed by the health needs of underserved minority ethnic communities and/or are bottom-up in nature.
- Optimising datasets, and improving AI development, testing, and deployment
The focus of this second category is on creating the conditions to facilitate the adoption of AI that serves the health needs of minority ethnic communities. For example, this may include mitigating the risks of perpetuating and entrenching racial health inequalities through data collection and selection and during the development, testing, and deployment stages.
The following projects are being funded:
Assessing the acceptability, utilisation and disclosure of health information to an automated chatbot for advice about sexually transmitted infections in minoritised ethnic populations
Dr Tom Nadarzynski at the University of Westminster
Aims to raise the uptake of screening for STIs/HIV among minority ethnic communities through an automated AI-driven chatbot which provides advice about sexually transmitted infections. The research will also inform the development and implementation of chatbots designed for minority ethnic populations in public health more widely and within the NHS.
I-SIRch - Using artificial intelligence to improve the investigation of factors contributing to adverse maternity incidents involving Black mothers and families
Dr Patrick Waterson and Dr Georgina Cosma at Loughborough University
Aims to use AI to improve the investigation of factors contributing to adverse maternity incidents amongst mothers from different ethnic groups. This research will provide a way of understanding how a range of causal factors combine, interact and lead to maternal harm, and make it easier to design interventions that are targeted and more effective for these groups.
Ethnic differences in performance and perceptions of AI retinal image analysis systems (ARIAS) for the detection of diabetic retinopathy in the NHS Diabetic Screening Programme
Professor Alicja Rudnicka (St. George's Hospital) and Professor Adnan Tufail (Moorfields Eye Hospital and Institute of Ophthalmology, UCL)
Aims to ensure that AI technologies that detect diabetic retinopathy work for all, by validating the performance of AI retinal image analysis systems that will be used in the NHS Diabetic Eye Screening Programme (DESP) in different subgroups of the population. In parallel, the perceptions, acceptability and expectations of healthcare professionals and people with diabetes will be evaluated in relation to the application of AI systems within the North East London NHS DESP. This study will provide evidence of effectiveness and safety prior to potential commissioning and deployment within the NHS. (Co-investigators: The Homerton University Hospital, Kingston University, and University of Washington, USA)
STANDING together (STANdards for Data INclusivity and Generalisability)
Dr Xiaoxuan Liu and Professor Alastair Denniston at University Hospitals Birmingham NHS Foundation Trust
University Hospitals Birmingham NHS Foundation Trust and partners will lead STANDING Together, an international consensus process to develop standards for datasets underpinning AI systems, to ensure they are diverse, inclusive and can support development of AI systems which work across all demographic groups. The resulting standards will help inform regulators, commissioners, policy-makers and health data institutions on whether AI systems are underpinned by datasets which represent everyone and don’t risk leaving underrepresented and minority groups behind.
Facilitating early-stage exploration of algorithmic risks
In recent years, a number of tools have been developed in response to problems such as algorithmic bias, privacy and security risks, and the explainability of AI. Algorithmic impact assessments (AIAs) have been considered a particularly promising tool because they enable users to assess the possible societal impacts of an algorithmic system before it is used, with ongoing monitoring often advised once the technology has been implemented.
The Ada Lovelace Institute has led research in the UK on assessing algorithmic systems, making the case for trialling AIAs in the public sector in the hope that they can improve the transparency, accountability, and public legitimacy of AI and data-driven technologies.
The NHS AI Lab will partner with the Ada Lovelace Institute to design and trial AIAs, initially as part of our programme on AI imaging. We will seek to implement AIAs with academic researchers and technology companies proposing to develop AI solutions based on medical imaging data curated by the NHS AI Lab. Given some of the aforementioned issues that algorithms can present, AIAs could enable the NHS AI Lab to support developers with auditing their technology at an early stage when there is greater flexibility to make adjustments and address possible concerns.
Empowering healthcare professionals to make the most of AI
The Topol Review in 2019 recommended that the NHS develop a workforce able and willing to use healthcare AI and robotics effectively, so that the NHS can become a world leader in their use.
Health Education England has since established a national programme to explore and respond to the impacts of advancements in AI on the future education and training needs of healthcare professionals. A key priority of this programme is to identify the knowledge, skills, behaviours, and professional attributes a healthcare professional needs as a user, designer, implementer, and critical appraiser of AI technologies and how this can be learnt. As part of this programme, an iteratively expanding skills and capabilities framework will be produced to support curriculum review and guide healthcare practitioners towards future required learning as they navigate AI-driven technology in the workplace.
Through the AI Ethics Initiative, the NHS AI Lab will partner with Health Education England to better understand levels of trust and engagement with AI solutions in health and care, and what this in turn means for AI-enabled patient care. The findings will inform the skills and capabilities framework, which, for example, could set out what is expected of healthcare professionals with respect to post-market surveillance of AI solutions. We hope that this framework will ultimately help empower healthcare professionals to make the most of AI and realise its benefits for both patients and themselves.