
Twists And Turns Ahead For Medtech In Europe As AI Dominates October Regulatory Debates

Executive Summary

The regulation of artificial intelligence in medtech products is increasingly under the spotlight, in the EU and globally. A glance at October’s news reveals a proliferation of schemes and actions to support medtech AI development in as safe and well-controlled an environment as possible.

Artificial intelligence (AI) complicates the regulation and oversight of medtech products. And it is not the only new area of technology that overlaps with medical devices and IVDs in the EU. Other existing and proposed horizontal regulations, namely the General Data Protection Regulation (GDPR), the AI Liability Directive, the Cyber Resilience Act, the Data Act, the Data Governance Act, the European Health Data Space Regulation (EHDS) and the revised Product Liability Directive, are all adding to what many in the sector now describe as a “regulatory lasagna”.

To address this increasingly complex environment, regulatory sandboxes are being suggested as a safe space for regulators, industry and other stakeholders to ‘play’ with innovative device concepts and to manage the ‘regulatory lasagna’ that applies to many of these products.

The result is a new tool that is increasingly being adopted in the EU and beyond, and one that has been heavily discussed this autumn.

The September meeting of the International Medical Device Regulators Forum (IMDRF), reported in Medtech Insight in October, highlighted the different ways that sandboxes can be used to manage the growing number of cutting-edge medical technologies that sit uncomfortably within current regional or local regulatory structures.

Agreement On EU’s AI Act Due Soon

The end of 2023 is a key date when it comes to AI. There is political will to reach agreement on the AI Act in the co-legislative process by the end of this year. The act covers medical devices within its scope even though they are regulated under their own sectoral legislation.

The sheer volume of news related to the regulation of artificial intelligence in medtech products has led Medtech Insight to feature this round-up of European AI medtech news on its own, in addition to the usual monthly summary of October’s EU regulatory top ten news.

While regulatory sandboxes are relatively new structures and are not a concept directly recognized in either the MDR or the IVDR, they do feature in the EU’s proposed AI Act, which is due to overlap with the sectoral legislation. For medtech manufacturers, these sandboxes offer interesting opportunities to create cutting-edge products addressing growing demands for healthcare at a time when health service budgets globally are being heavily squeezed.

The commission considers that some of the requirements of the proposed AI Act, such as testing, training and validation using high-quality data, are among the aspects that could be undertaken in a regulatory sandbox, in addition to testing the robustness, accuracy and cybersecurity measures that have been implemented for AI. This is what Nada Alkhayat, policy officer at the commission’s directorate general for health and food safety, told the IMDRF meeting.

The intention is that these regulatory sandboxes be used in the pre-market phase as well as in the reassessment phase.

UK And AI Regulatory Sandboxes

In the UK, meanwhile, the Medicines and Healthcare products Regulatory Agency (MHRA) has announced it is going ahead with its own new regulatory sandbox scheme, the AI-Airlock. This will offer a regulator-monitored virtual area for developers to generate robust evidence for their advanced medical technologies.

The intention is to help NHS patients to benefit earlier from emerging technologies before they are available anywhere else in the world.

The UK says that the AI-Airlock follows a robust process, so manufacturers of software and AI medical devices understand and deliver what is required to ensure the real-world viability of these devices. The intention is for the experience and learning to then be shared, helping to provide an evidence base that promotes a wider understanding of the challenges and potential solutions that are available.

Paul Campbell, head of software and AI in the MHRA’s Innovative Devices Division, spoke at the IMDRF meeting about how the UK has been pioneering work in this area.

There, he explained how, by moving beyond conventional product concepts and associated regulations, sandboxes "offer a unique and safe learning space for manufacturers to work with regulators and many other interested parties to explore new, cutting-edge solutions that otherwise might potentially struggle to make it onto the market due to challenges with their system, technology, regulatory or evidence processes".

Creating these sandboxes, which are very different from learning hubs, Campbell noted, involves a high level of dedication and an open-minded approach.

EMA And WHO And AI

Also on the topic of AI, Vincenzo Salvatore, leader of the Healthcare and Life Sciences Focus Team at law firm BonelliErede and former head of the legal service at the EMA, explained during a London meeting in October that EU regulators, including the EMA, are “quite concerned” with the “accountability, transparency and testing” of AI systems used during the medical product lifecycle. These concerns are highlighted in the EMA’s reflection paper, he noted, which is open for consultation until the year-end.

On a global scale, the World Health Organization called in October on all governments and regulatory authorities to introduce “robust legal and regulatory frameworks” to safeguard the use of AI in the health sector.

It set out six main areas where AI should be regulated in the context of health, in a bid “to outline key principles that governments and regulatory agencies can follow to develop new guidance or adapt existing guidance on AI at national or regional levels.”

Among the six priorities are:

  • Fostering collaboration between regulatory bodies, patients, health care professionals, industry representatives and government partners to ensure products and services stay compliant with regulation throughout their lifecycles.

  • Externally validating data and being clear about the intended use of AI to “assure safety and facilitate regulation”.

  • Using high-quality data, for instance by rigorously evaluating software pre-release, to ensure systems do not amplify biases or errors (see the sketch below).
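
That last point lends itself to a concrete illustration. The sketch below is purely hypothetical and not drawn from the WHO guidance: it shows one way a developer might run a pre-release subgroup evaluation of a model. The test data, the patient_sex attribute and the five-percentage-point threshold are all assumptions made for the example.

    # Hypothetical pre-release bias check (illustrative only): compare a
    # model's accuracy across patient subgroups before the software ships.
    from sklearn.metrics import accuracy_score

    def subgroup_accuracy(y_true, y_pred, groups):
        """Return the model's accuracy computed separately per subgroup."""
        scores = {}
        for g in set(groups):
            idx = [i for i, grp in enumerate(groups) if grp == g]
            scores[g] = accuracy_score([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
        return scores

    # Made-up test-set labels, predictions and a sensitive attribute.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
    patient_sex = ["F", "F", "F", "F", "M", "M", "M", "M"]

    scores = subgroup_accuracy(y_true, y_pred, patient_sex)
    # Simple release gate: flag the model if any subgroup trails the best
    # one by more than five percentage points (an assumed threshold).
    if max(scores.values()) - min(scores.values()) > 0.05:
        print(f"Potential bias across subgroups: {scores}")

In practice, a regulator-monitored sandbox such as the AI-Airlock would expect far richer evidence than a single accuracy gate, but the principle of subgroup-level scrutiny is the same.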

