As artificial intelligence (AI) transforms the UK healthcare sector, navigating its complex regulatory landscape requires a proactive and collaborative approach. Professor Alastair Denniston introduces CERSI-AI, a new national centre of excellence tasked by the Government with shaping the future of AI and digital health regulation, ensuring innovation delivers real benefits without compromising safety, equity or trust in clinical care.
Artificial intelligence (AI) and digital technologies are poised to transform healthcare across several critical domains. One significant area is the automation of high-volume, repetitive tasks, such as managing waiting lists and supporting screening programmes for detecting cancer or eye disease.
In clinical decision-making, AI tools have the potential to assist clinicians by analysing vast and complex datasets, including medical imaging, laboratory results and patient histories. These tools can provide evidence-based recommendations, help to reduce diagnostic errors and support the personalisation of treatment plans.
Digital health technologies also offer considerable promise in improving healthcare outcomes. Tools such as remote patient monitoring systems and predictive analytics enable more proactive care. They can also help reduce hospital readmissions and support the effective management of chronic conditions.
AI can also play a crucial role in enhancing patient safety. By predicting adverse events, automating the detection of clinical errors and enabling timely interventions, AI systems can serve as an early warning mechanism, for example, by flagging signs of patient deterioration in real time and enabling prompt clinical responses.
Rapid advancement leads to regulatory challenges
The rapid advancement of AI and digital health technologies presents several significant regulatory challenges within clinical practice and patient care. These challenges span many areas, each of which requires careful consideration to ensure safe, effective and equitable implementation.
AI tools must undergo rigorous testing and validation to demonstrate their safety and efficacy in real-world clinical environments. One of the foremost regulatory concerns is the establishment of robust clinical evaluation frameworks that can accurately assess these technologies. Additionally, continuous post-market surveillance is essential to monitor ongoing performance and safety. Particular attention must also be given to adaptive algorithms that continue to learn after deployment, as their dynamic nature poses additional complexities for regulatory oversight.
Successful integration of AI tools into clinical practice depends on their compatibility with existing NHS systems and workflows. Regulatory oversight should ensure interoperability with electronic health records, as well as the usability of these tools to minimise disruption to clinicians’ workflows. Furthermore, human oversight must remain a core component of AI-assisted clinical decision-making to uphold accountability and maintain clinical judgment.
AI systems risk perpetuating or exacerbating health disparities if they are trained on biased or unrepresentative datasets. Regulatory bodies must implement measures to detect and mitigate bias within AI models. This includes ensuring that datasets are representative of diverse populations and promoting equitable access to AI-driven healthcare solutions, thereby supporting inclusivity and fairness in clinical outcomes.
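One simple form such bias detection can take is comparing a model's performance across demographic subgroups. The minimal sketch below, using entirely illustrative data and a hypothetical acceptability threshold (neither drawn from any regulatory guidance), computes the sensitivity (true-positive rate) of a diagnostic model per subgroup and flags cases where the spread between groups is large:

```python
# Minimal, illustrative sketch: comparing a model's sensitivity
# (true-positive rate) across subgroups to flag potential performance bias.
# The records and the 0.1 gap threshold are hypothetical examples only.

def sensitivity(y_true, y_pred):
    """Fraction of actual positives the model correctly flagged."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return None
    return sum(1 for _, p in positives if p == 1) / len(positives)

def subgroup_gap(records, threshold=0.1):
    """Per-group sensitivity, the max-min gap, and whether the gap
    exceeds a (hypothetical) acceptable threshold."""
    groups = {}
    for group, t, p in records:
        truths, preds = groups.setdefault(group, ([], []))
        truths.append(t)
        preds.append(p)
    rates = {g: sensitivity(t, p) for g, (t, p) in groups.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > threshold

# Illustrative records: (subgroup, ground truth, model prediction)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]
rates, gap, flagged = subgroup_gap(records)
# Here group A is detected at 0.75 sensitivity but group B at only 0.25,
# a gap of 0.5 that a bias check of this kind would flag for review.
```

In practice a regulator or developer would look at several metrics (sensitivity, specificity, calibration) across many subgroups, but the underlying check, disaggregate performance and compare, is the same.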
Many advanced AI models, particularly those based on deep learning, function as ‘black boxes’, making it difficult for clinicians and regulators to interpret how specific decisions are reached. This lack of transparency raises concerns around accountability in clinical decision-making and may hinder trust among both clinicians and patients. Regulators must determine the appropriate level of explainability required, as well as identify the key stakeholders who need access to such explanations.
Traditional medical device regulations often fall short in capturing the dynamic and adaptive nature of AI technologies. Key challenges include defining appropriate regulatory categories, such as software as a medical device (SaMD), and adapting regulatory processes to accommodate systems that learn and evolve. Additionally, coordination across international regulatory bodies is necessary to facilitate harmonisation and ensure consistency in oversight across borders.
Given that AI systems rely heavily on large volumes of sensitive patient data, ensuring compliance with data protection legislation, including UK GDPR, is of paramount importance. Regulators must address challenges such as maintaining data anonymisation without compromising the utility of data for AI training, preventing data breaches or unauthorised access, and managing cross-border data sharing within international research and development collaborations.
Developing CERSI-AI: the rationale
The Centre of Excellence for Regulatory Science and Innovation in AI and Digital Health (CERSI-AI) is a Government-supported initiative, jointly funded by Innovate UK, the Medicines and Healthcare products Regulatory Agency (MHRA) and the Office for Life Sciences (OLS).
It aims to shape the future of AI and digital health regulation by working across the health and technology ecosystem to address current and emerging challenges. Through scientific methodology, community engagement and a whole-system approach, CERSI-AI will accelerate access to innovative treatments and ensure patients benefit from cutting-edge healthcare technologies.
The University of Birmingham was selected to host CERSI-AI due to its internationally recognised strengths in regulatory science and its expertise in the complexities of evaluating, regulating and implementing AI health technologies.
CERSI-AI brings together academia, healthcare providers, industry, patients and regulators, both in the UK and internationally, to accelerate the safe and efficient development of AI-driven healthcare innovations. By leveraging the strengths of its founding partners – including leading universities (such as the University of Birmingham and University of York), NHS organisations (University Hospitals Birmingham NHS Foundation Trust and NHS Greater Glasgow and Clyde) and industry (Hardian Health, Newton’s Tree, Romilly Life Sciences and the Association of British HealthTech Industries) – CERSI-AI fosters a coordinated approach to addressing current regulatory and implementation challenges.
This integrated network enables joint research that balances the need for rapid innovation with essential public health values such as safety, equity and cost-effectiveness. By aligning scientific inquiry with real-world regulatory and clinical needs, CERSI-AI helps to streamline regulatory assessments and informs policy development. These collaborations ensure that promising technologies can reach patients faster, without compromising on safety and equity.
Supporting innovators to navigate the regulatory landscape
CERSI-AI will develop a suite of tools, frameworks and guidance to support innovators in navigating the UK’s regulatory landscape for AI and digital healthcare. These include:
- A public database of AI as a medical device (AIaMD) and SaMD products with market approval and adverse event reports, supporting transparency and regulatory decision-making
- A borderline manual to help innovators determine whether their AI health technologies qualify as medical devices
- A regulatory framework for evaluating frontier technologies such as large language models, ensuring adaptability to emerging innovations
- A post-market surveillance framework for AIaMD/SaMD, enabling agile safety monitoring and efficient data return
- Guidance on algorithmic performance bias, promoting equity and inclusion in AI systems.
These initiatives are being developed in collaboration with key regulatory and policy bodies, including the MHRA, the National Institute for Health and Care Excellence, the Care Quality Commission, the Department of Health and Social Care, the OLS and the Information Commissioner’s Office.
Supporting clinicians to integrate AI into workflows
CERSI-AI aims to improve the integration of AI tools into NHS workflows by fostering collaboration, standardisation and stakeholder engagement. Through demand-led workshops, webinars and stakeholder surveys, CERSI-AI identifies real-world clinical needs and barriers to adoption. These insights inform the development of practical guidance and standards that align with NHS priorities. Pilot testing in NHS settings then supports the real-world validation and refinement of these standards.
For clinicians to effectively adopt and utilise AI tools within the NHS, a range of targeted support measures is essential. Comprehensive training and education are needed to ensure healthcare professionals understand both the capabilities and the limitations of AI technologies. This foundational knowledge is crucial for fostering informed and confident use in clinical settings.
Clear regulatory guidance is also required. Such guidance helps to build trust and confidence in AI tools, particularly those used for clinical decision support. When clinicians are assured that these technologies meet rigorous standards, they are more likely to engage with them as reliable aids in patient care.
Furthermore, the design of AI systems must be centred on the needs of users. The tools should be intuitive and seamlessly integrate into existing clinical workflows, enhancing care delivery rather than disrupting it. Poorly designed interfaces or overly complex systems risk becoming a barrier rather than a benefit.
Additionally, the development and communication of target product profiles are crucial. These profiles allow the NHS to articulate its priorities and signal demand to industry innovators. By aligning product development with the needs of both patients and the health service, AI tools can be designed to deliver real value and support strategic healthcare goals.
Together, these elements form a comprehensive framework for supporting clinicians as they integrate AI into their everyday NHS practice.
Balancing the needs of innovators, clinicians and patients
CERSI-AI will balance the needs of innovators, clinicians and patients by adopting a regulatory science approach that ensures innovation is evidence-based and shown to be both safe and impactful. This means:
- Empowering innovators by streamlining regulatory processes and providing clear, evidence-based guidance to accelerate development
- Prioritising patient safety and outcomes by embedding the patient voice at every stage of innovation, ensuring technologies are aligned with real-world needs
- Supporting clinicians by ensuring AI tools are clinically relevant, easy to use and integrated into existing NHS workflows
- Facilitating collaboration between innovators, clinicians, patients and regulators to co-design solutions that are both innovative and practical.
By acting as a trusted intermediary, CERSI-AI will ensure that innovation serves both the health of patients and the operational needs of the healthcare system.
CERSI-AI: a leader in AI regulation for healthcare
A key aim for CERSI-AI is to position the UK as a global leader in AI regulation for healthcare by combining scientific excellence, international collaboration and proactive policy engagement.
It will play a pivotal role in advancing regulatory science for AI by leveraging its extensive global network. This includes strategic partnerships with international regulators such as the US Food and Drug Administration, Health Canada, Singapore’s Ministry of Health and Australia’s Department of Health. Through these collaborations, we aim to co-develop regulatory approaches that address shared challenges and establish international benchmarks for best practice.
CERSI-AI will generate cutting-edge, evidence-based insights through collaborative research initiatives. These insights will not only inform the evolution of UK regulatory frameworks but also contribute to the global conversation on safe and effective AI governance in healthcare.
Central to its mission is a commitment to knowledge sharing. Insights derived from UK-led projects will be disseminated internationally, while the Centre will also actively learn from global best practices. Ongoing engagement with UK policymakers and key stakeholders will ensure that regulatory innovation remains aligned with national healthcare priorities and is responsive to real-world needs.
Finally, CERSI-AI will position the UK as a leading testbed for responsible AI innovation by fostering an environment where such tools can be developed, evaluated and deployed at scale.
Supporting the future through CERSI-AI
CERSI-AI will play a pivotal role in ensuring AI technologies are safe, effective and equitably deployed in healthcare. This includes developing robust frameworks and standards to rigorously assess AI tools for safety, performance and fairness before and after deployment.
Clinicians and patients will be actively involved in evaluating these tools, ensuring they are usable, trustworthy and meet real-world clinical needs. CERSI-AI will also support innovators in navigating regulatory pathways, helping beneficial technologies reach patients more quickly without compromising safety.
To ensure ongoing oversight, the Centre will establish systems for post-market surveillance and continuous learning, allowing regulations to adapt as technologies evolve. Crucially, CERSI-AI will champion equity by addressing algorithmic bias and ensuring AI works effectively across diverse populations and healthcare settings.
Through this integrated approach, CERSI-AI will help establish the UK not only as a hub for innovation but also as a global standard-setter in AI regulation for healthcare.
Author
Alastair Denniston MA MRCP FRCOphth PhD
Executive director of CERSI-AI, professor of regulatory science and innovation, University of Birmingham, and honorary consultant ophthalmologist at University Hospitals Birmingham NHS Foundation Trust