The AI Lab is running collaborative programmes and establishing partnerships to enable a world-leading ecosystem for the development and deployment of AI technologies.
We are developing mechanisms to ensure that approved AI technologies are safe and ethically robust. By protecting patient safety and ensuring systems are transparent and accurate, we will increase the confidence and trust of the public and clinicians.
Our focus includes:
- providing a regulatory advice and approval service (MAAS)
- better post-market surveillance and guidance on best practice for putting an AI system into service
Highlights from our regulations programme
Multi-agency advisory service (MAAS)
This new service aims to give innovators and the health and care providers developing AI technologies a one-stop shop for support, information and guidance on regulation and evaluation. It will also help clarify the regulatory pathway for safely scaling technologies.
Streamlining the process for technological review
This project will address issues in the regulatory approval process for AI and data-driven medical devices, and for the wider range of healthcare technologies that require access to identifiable patient datasets. Developing these technologies relies on high-quality research to generate evidence of safety and efficacy, enabling smooth deployment in the NHS. We want developers to have easy access to the appropriate research approvals so they can start their research as soon as possible.
The Health Research Authority (HRA) will lead a project to streamline the review of AI and data-driven research and modernise the technology platform used to make applications for approvals. The HRA will work with the MHRA to develop a simplified and co-ordinated process for reviewing AI and data-driven medical devices.
The HRA will also streamline the review of research using confidential patient information without consent, overseen by the Confidentiality Advisory Group. Through the project, the application and review process will be modernised to enable quicker and more robust oversight of projects and enhance the public visibility of approved studies.
In a project funded by the Regulators’ Pioneer Fund, which concluded in June 2020, the Medicines and Healthcare products Regulatory Agency (MHRA) succeeded in developing synthetic data that mimics ground-truth data closely enough to be used to validate algorithms - including AI algorithms - in medical devices.
This project will build on that work to scale the development of synthetic datasets. The creation of these datasets will enable innovators to train and validate their algorithms against datasets that may otherwise be difficult to access or obtain.
The project will establish the process for a regulatory pathway that incorporates the use of synthetic data. This will contribute to a regulatory environment in the UK that enables and supports the introduction of innovative AI software medical devices into healthcare.
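The general idea of validating an algorithm against a synthetic stand-in dataset can be sketched as follows. This is a minimal illustration only, not the MHRA's method or data: the two-group Gaussian "patient" measurements, the class labels, and the threshold rule are all hypothetical, chosen to show how a candidate algorithm can be scored against synthetic ground truth when real records are difficult to access.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic "patient" measurements drawn from two
# overlapping Gaussians, standing in for a hard-to-access real dataset.
n = 500
healthy = rng.normal(loc=0.0, scale=1.0, size=n)
at_risk = rng.normal(loc=2.0, scale=1.0, size=n)
values = np.concatenate([healthy, at_risk])
labels = np.concatenate([np.zeros(n), np.ones(n)])

# A deliberately simple candidate "algorithm": flag any value above
# the midpoint between the two group means.
predictions = (values > 1.0).astype(float)

# Validate the candidate against the synthetic ground truth.
accuracy = (predictions == labels).mean()
print(f"validation accuracy on synthetic data: {accuracy:.2f}")
```

Because the synthetic data's ground truth is known by construction, the same scoring step can be repeated for any candidate algorithm, which is what makes synthetic datasets useful for validation when the underlying real data cannot be shared.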
Post-market surveillance
The MHRA will enhance the post-market surveillance of healthcare products, including AI solutions, by transforming the Yellow Card system for adverse incident reporting. This will include a requirement to report all incident types, including those involving software and AI as a medical device.
The project will also research novel AI signal detection techniques in medicines and devices. This will facilitate data-driven regulation and enable the MHRA to be more responsive to the use of high-profile technology products of concern to the public and healthcare professionals.