The AI Ethics Initiative

Embedding ethical approaches to AI in health and care

The NHS AI Lab is committed to accelerating the safe, ethical, and effective adoption of AI across health and care.

We are introducing the AI Ethics Initiative to ensure that AI products used in the NHS and care settings will not exacerbate health inequalities. We will invest in research and trial practical interventions that complement and strengthen existing efforts to validate, evaluate, and regulate AI-driven technologies.

The ethical assurance of AI

The NHS AI Lab is well-positioned to make a difference to the ethical assurance of AI given our role in supporting all aspects of the AI life cycle, from working with innovators as they define the purpose of their products to guiding health and care professionals as they utilise these technologies to assist them in providing care.

The focus of the AI Ethics Initiative will be on how to counter the inequalities that may arise from the ways that AI-driven technologies are developed and deployed in health and care. We believe these inequalities aren’t inevitable, and that if they are proactively addressed we can realise the potential of AI for all users of health and care services.

Our intention is to support projects that can demonstrate that they are patient-centred, inclusive, and impactful. We will collaborate with academia, the third sector, and other public bodies, encouraging them to help us shape our key programmes, including the AI in Health and Care Award, AI Imaging, and AI Skunkworks. We will also contribute to and promote our partners' existing efforts on the ethics of AI to achieve greater impact and positively transform how patients, citizens, and the workforce experience AI in health and care.

Research projects

Find out more about the range of research projects underway below:

Optimising AI to improve the health and care outcomes of minority ethnic communities

The NHS AI Lab is partnering with the Health Foundation on a joint research call (now closed), enabled by the National Institute for Health Research (NIHR). The research call responds to concerns about algorithmic bias and, in particular, the racialised impact of algorithms in health and care. While algorithmic bias does not only affect racialised communities, examples of AI deployment in the US indicate a particular risk of algorithmic bias worsening outcomes for minority ethnic patients. At the same time, there has been limited exploration of whether and how AI can be applied to address racial and ethnic disparities in health and care.

The research will focus on how to ensure that AI accounts for the health needs of diverse communities and how it can be leveraged to improve health outcomes in minority ethnic populations.

There are two categories for this call:

Understanding and enabling opportunities to use AI to address health inequalities

The focus of this first category is on how to encourage approaches to innovation that are informed by the health needs of underserved minority ethnic communities and/or are bottom-up in nature.

Optimising datasets, and improving AI development, testing, and deployment

The focus of this second category is on creating the conditions to facilitate the adoption of AI that serves the health needs of minority ethnic communities. For example, this may include mitigating the risks of perpetuating and entrenching racial health inequalities through data collection and selection and during the development, testing, and deployment stages.

Facilitating early-stage exploration of algorithmic risks

In recent years, a number of tools have been developed in response to problems such as algorithmic bias, privacy and security, and the explainability of AI. Algorithmic impact assessments (AIAs) are considered a particularly promising tool because they enable users to assess the possible societal impacts of an algorithmic system before it is used, with ongoing monitoring often advised once the technology has been implemented.

The Ada Lovelace Institute has led research in the UK on assessing algorithmic systems, making the case for trialling AIAs in the public sector in the hope that they can improve the transparency, accountability, and public legitimacy of AI and data-driven technologies.

The NHS AI Lab will partner with the Ada Lovelace Institute to design and trial AIAs, initially as part of our programme on AI imaging. We will seek to implement AIAs with academic researchers and technology companies proposing to develop AI solutions based on medical imaging data curated by the NHS AI Lab. Given some of the aforementioned issues that algorithms can present, AIAs could enable the NHS AI Lab to support developers with auditing their technology at an early stage when there is greater flexibility to make adjustments and address possible concerns.

Empowering healthcare professionals to make the most of AI

The Topol Review (2019) recommended that the NHS develop a workforce able and willing to transform it into a world leader in the effective use of healthcare AI and robotics.

Health Education England has since established a national programme to explore and respond to the impacts of advancements in AI on the future education and training needs of healthcare professionals. A key priority of this programme is to identify the knowledge, skills, behaviours, and professional attributes a healthcare professional needs as a user, designer, implementer, and critical appraiser of AI technologies, and how these can be learnt. As part of this programme, an iteratively expanding skills and capabilities framework will be produced to support curriculum review and guide healthcare practitioners towards future required learning as they navigate AI-driven technology in the workplace.

Through the AI Ethics Initiative, the NHS AI Lab will partner with Health Education England to better understand levels of trust and engagement with AI solutions in health and care, and what this in turn means for AI-enabled patient care. The findings will inform the skills and capabilities framework, which, for example, could set out what is expected of healthcare professionals with respect to post-market surveillance of AI solutions. We hope that this framework will ultimately help empower healthcare professionals to make the most of AI and realise its benefits for both patients and themselves.
