Published on December 28, 2025
Regulating Artificial Intelligence in Healthcare: An Open Call for Evidence

Artificial intelligence is rapidly transforming many sectors of society, and the healthcare system is among the most profoundly affected. The integration of AI technologies into clinical practice promises improvements in diagnosis, treatment, patient management, administrative efficiency, and health outcomes. Yet it also raises serious questions about safety, accountability, equity, and legal responsibility. In recognition of this moment of change, the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) has published a call for evidence on the regulation of AI in healthcare. This initiative invites members of the public, patients, healthcare professionals, industry representatives, researchers, and technology developers to share their views and experience on how AI should be regulated within the healthcare context. This blog explores the background, objectives, key themes, and implications of this important consultation. (GOV.UK)

Why This Consultation Matters

Artificial intelligence technologies are increasingly embedded in healthcare settings. Algorithms are used to interpret medical images, guide clinical decision-making, prioritise patient risk, personalise treatment pathways, automate administrative workflows, and support clinical research. Many of these tools qualify as software regulated under the medical device framework, but traditional regulatory approaches were developed before the rise of advanced machine learning. AI systems can learn, adapt, and update over time in ways that do not easily fit into static approval models. The dynamic nature of these technologies presents new challenges for regulators, who must ensure patient safety without stifling innovation. The call for evidence represents a pivotal opportunity to gather a wide range of perspectives and shape a future regulatory approach that is proportionate, forward-looking, and adaptive to the specific characteristics of healthcare AI.

The consultation was published on 18 December 2025 and will remain open until 2 February 2026. Submissions can be made online, by email or by post, and respondents are encouraged to contribute to questions that are relevant to their experience. There is no obligation to answer every question, but the collective input will form the basis for recommendations to the MHRA and help inform policy development in 2026. (GOV.UK)

The National Commission into the Regulation of AI in Healthcare

Central to this initiative is the newly established National Commission into the Regulation of AI in Healthcare. This Commission brings together experts from across medicine, regulation, technology, policy and patient advocacy to advise the MHRA on long-term strategy. Its role is to guide the development of a robust regulatory framework that aligns with the goals of the UK Government’s 10 Year Health Plan and the Life Sciences Sector Deal. The Commission will consider evidence submitted during this consultation and work towards recommendations that balance innovation with public safety and trust.

By inviting input from diverse stakeholders, policy makers hope to capture a comprehensive picture of how artificial intelligence is currently being used in healthcare, where gaps and risks exist, and how regulation can evolve to support benefit while reducing harm. This approach is grounded in the principle that regulation should be evidence-based, transparent, and informed by real-world insights.

Key Themes in the Call for Evidence

The call for evidence is structured around a series of questions and themes that aim to capture insights from lived experience and professional practice. Respondents are asked to consider multiple aspects of AI and its regulation including safety, performance, data governance, liability, accountability, and interoperability with existing frameworks. The main themes include the following:

Safety and Performance

One of the most important themes is ensuring that AI technologies operate safely and perform as expected throughout their lifecycle. Respondents are asked to provide examples of how AI tools are already used, what safety mechanisms are in place, and where existing regulatory approaches may fall short. This includes issues around real-world performance, clinical validation, post-deployment monitoring, and the risk of unintended behaviours such as model drift. (GOV.UK)

Traditional regulatory assessments focus on demonstrating safety and efficacy through static evidence at a fixed point in time. But AI systems can evolve as they encounter new data or as their models are updated. The call for evidence aims to gather information on how to ensure ongoing safety without imposing unnecessary barriers. This is a core challenge for regulators seeking to protect patients while supporting innovation.
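To make the idea of post-deployment monitoring concrete, the sketch below computes the population stability index (PSI), one common heuristic for detecting drift between a model's score distribution at validation time and what it sees in live use. This is purely an illustrative assumption on my part: the function, data, and thresholds are hypothetical and are not prescribed by the MHRA or the consultation.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index: compares two score distributions.
    Values above roughly 0.2 are often treated as a drift warning."""
    # Bin edges taken from the reference (validation-time) distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Clip fractions so empty bins do not produce log(0)
    e_frac = np.clip(e_counts / len(expected), 1e-6, None)
    a_frac = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)   # scores observed at validation
shifted  = rng.normal(0.5, 1.2, 5000)   # scores after the population shifts

print(psi(baseline, baseline[:2500]))   # small: no drift
print(psi(baseline, shifted))           # large: drift flagged for review
```

In a deployed system a check like this would run on a schedule, with alerts feeding into the clinical governance process rather than silently retraining the model.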

Integration with Current Regulatory Frameworks

Another theme considers how AI regulation should interact with existing healthcare regulation. AI technologies that meet the definition of a medical device are regulated as software under the current framework. But for tools that support administrative functions or clinical decision guidance without making autonomous decisions, boundaries are less clear. Respondents are asked to identify how current frameworks handle these overlaps and whether there is a need for bespoke approaches.

There is also an interest in how other regulatory bodies interact with the MHRA in this context. For example, clinical governance frameworks within the NHS, professional standards for healthcare practitioners, and data protection rules under the Information Commissioner’s Office all intersect with AI deployment in healthcare settings. Understanding how these systems overlap and where clarity is needed is vital for effective regulation. (GOV.UK)

Liability and Accountability

Liability is a critical area for AI regulation. Healthcare providers, technology developers, clinicians, and institutions all share responsibility when AI is used in patient care. The consultation invites input on how liability and responsibility should be distributed across the supply chain. For instance, if an AI tool misclassifies a condition or a clinician misinterprets AI output, how should accountability be shared? Clear rules for liability help build public trust and provide certainty for innovators and healthcare providers.

Transparency and Explainability

Transparency is closely linked to accountability. AI systems can be complex and opaque, particularly those developed using deep learning techniques. Regulators and clinicians need to know how these systems make decisions, what data they were trained on, and how they might behave in different scenarios. Respondents are asked to provide insights on what level of transparency is necessary to build trust and ensure patient safety. This includes practical proposals on explaining AI outputs to clinicians, patients and regulators. Although not explicitly required by current UK law, transparency considerations are seen as essential for ethical AI use.
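As one concrete illustration of an explainability technique, the sketch below computes permutation importance: shuffle one input at a time and measure how much the model's output changes, which gives a rough ranking of which inputs drive a prediction. The toy "model" and its features are hypothetical assumptions for the sake of a self-contained example; real clinical tools would need validated, clinically meaningful explanation methods.

```python
import numpy as np

def model(X):
    # Hypothetical risk score: the first feature matters, the second barely does
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
baseline = model(X)

importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break this feature's link to the output
    # Importance = how much predictions change when the feature is shuffled
    importances.append(float(np.mean((model(Xp) - baseline) ** 2)))

print(importances)  # first feature dominates
```

Reports like this are one way a developer could summarise model behaviour for clinicians and regulators without exposing the full internals of the system.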

Proportionate Regulation and Innovation

It is important that regulation is proportionate to risk. Overly restrictive rules may slow innovation, delay patient access to beneficial technologies, and discourage investment in UK health tech. On the other hand, insufficient regulation could expose patients to harm, undermine trust in healthcare systems, and create barriers to adoption. The call for evidence encourages respondents to suggest where proportionate approaches could be applied, including tiered regulation based on risk level or adaptive compliance models that evolve alongside technology.

Wider Context of AI Regulation in Healthcare

The MHRA’s call for evidence is part of a broader global conversation around regulating AI in healthcare. International regulators and organisations are grappling with similar questions about how to adapt existing medical device rules to emerging technologies and how to coordinate oversight with other regulatory domains such as privacy and professional standards. In Europe, the European Medicines Agency has published recommendations on the regulatory implications of AI in medicines and healthcare technologies. These discussions emphasise risk-based approaches, evidence generation, transparent governance, and international cooperation. (European Medicines Agency (EMA))

Within the UK, other initiatives such as the Software and AI as a Medical Device Change Programme reflect ongoing work to clarify how AI and software will be regulated under UK law. This programme aims to ensure that regulatory requirements are clear, that patients are protected, and that companies understand the expectations for bringing new products to market. These broader reforms provide context for the MHRA’s consultation and underscore the importance of aligning AI regulation with wider health system goals. (GOV.UK)

There have also been pilot initiatives, such as the MHRA’s AI Airlock regulatory sandbox, which allow developers to test AI technologies in controlled environments under regulatory supervision. These pilots help identify real-world challenges and inform future regulatory models that can foster innovation safely.

How to Respond and Participate

Anyone with an interest or stake in the regulation of AI in healthcare is encouraged to respond to the call for evidence. This includes patients, carers, healthcare professionals, academic researchers, data scientists, technology developers, health service providers, and members of the public. In preparing a response, participants are encouraged to provide concrete examples, evidence from practice, and suggestions for how regulation can be improved.

Responses should be submitted by 2 February 2026 to ensure they are considered by the Commission. The MHRA provides online submission tools, but responses can also be submitted via email or traditional mail for accessibility reasons. By contributing to this consultation, individuals and organisations have a direct opportunity to shape the future regulatory framework for AI in UK healthcare. (GOV.UK)

Implications for the Future

The outcomes of this consultation and subsequent recommendations from the National Commission will shape AI regulation in UK healthcare for years to come. A clear, risk-based regulatory framework has the potential to unlock innovation, improve patient outcomes, strengthen public trust in AI technologies, and position the UK as a global leader in health tech regulation. At the same time, robust regulation is essential to mitigate risks of bias, errors, unintended consequences, data misuse, and patient harm.

For technology companies, clinicians, and healthcare leaders, staying engaged in this process is not only a chance to influence policy but a strategic necessity. AI will continue to evolve and become more central to healthcare delivery, and responsive, thoughtful regulation is needed to ensure that these powerful tools are used safely and effectively.

Conclusion

The MHRA’s open call for evidence on the regulation of AI in healthcare marks a key moment for the health system, regulators, innovators, and society. It reflects the recognition that AI technologies present both opportunities and challenges that cannot be fully addressed by existing regulatory models. By soliciting diverse views and experiences, the UK Government and the MHRA are taking an inclusive, evidence-based approach to policy making. The consultation will help to shape a regulatory environment that protects patients, supports innovation, and ensures the safe and responsible use of AI in healthcare for the benefit of all.

If you are interested in contributing, the consultation remains open until 2 February 2026. Your voice and insights could help define the future of health AI regulation in the UK.
