EU AI Act: considerations for medical device developers


The European Union’s Artificial Intelligence Act, Regulation (EU) 2024/1689, came into force on 1 August 2024, with provisions coming into operation gradually over the following 6 to 36 months. The Act establishes a common regulatory and legal framework for AI within the EU to promote the uptake of ‘trustworthy and human-centric’ AI.
In this blog, Eamonn McGowran, Associate Director of Regulatory Affairs at Boyds, explains the scope of the Act and key considerations for medical device developers.

The long-awaited EU AI Act applies to providers and deployers of AI systems and/or general-purpose AI (GPAI) models who intend to make these technologies available, or put them into service, on the EU market. According to the European Parliament, the Act is the world’s first comprehensive AI law, with South Korea becoming the second global regulator to introduce AI legislation by enacting the “Basic Act on the Development of Artificial Intelligence and the Establishment of Trust.”

The new EU regulation has been developed to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. Applicable regardless of sector, the horizontal legislation is designed to promote the adoption of trustworthy, human-centric AI to ensure the EU reaps the ‘potential economic, environmental, and societal benefits across the entire spectrum of industries and social activities.’

The regulation applies irrespective of whether providers and deployers are established in the EU, so long as, at a minimum, the output of the AI system is intended to be used within the EU.


Risk-based approach

The AI Act takes a risk-based approach and focuses on the applications of AI systems, so that the requirements placed on such systems, and the obligations on providers and deployers, are targeted and proportionate to the risk posed. Monitoring and enforcement of the Act will take place at both EU and Member State level.

In addition, the Act aims to stimulate investment and innovation in AI. One of the key measures introduced to support innovation is the requirement for each EU Member State to establish at least one regulatory sandbox for AI: a controlled environment in which innovative AI systems can be developed, tested, and validated under regulatory supervision before being placed on the market.


High-risk AI systems

The Act also aims to improve the functioning of the internal market by prohibiting certain AI practices. It lays down a methodology for defining ‘high-risk’ AI systems: those that pose significant risks to the health, safety, or fundamental rights of individuals.

Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections).

These AI systems will have to comply with a set of horizontal mandatory requirements for trustworthy AI and undergo conformity assessment procedures before they can be placed on the EU market. The Act does not, however, apply to research and testing activities carried out before a product is placed on the market or put into service.

The requirements for high-risk AI systems will come into effect on 2 August 2026, although high-risk AI embedded in products already covered by EU harmonisation legislation, such as medical devices, has until 2 August 2027 to comply.


High-risk AI in healthcare

A large proportion of the AI used in healthcare will be classified as ‘high-risk’ under the Act and will therefore be subject to multiple requirements if developed or deployed within the EU. The rules will also apply to existing AI systems, but only if their design undergoes ‘significant changes’ after the Act takes effect.

In healthcare, high-risk AI includes systems used for purposes such as diagnosis, monitoring of physiological processes, and treatment decision-making, amongst others. Most of these uses are also software that is classed as a medical device and therefore already requires conformity assessment by a notified body.


Medical devices

Devices are deemed high-risk if they fall into Class IIa or higher under the Medical Devices Regulation (MDR). As such, manufacturers of high-risk devices will have to establish a risk management system that runs throughout the product’s lifecycle, apply data governance measures to ensure that data is, as far as possible, free from errors, and provide technical documentation demonstrating that their products comply with the Act.
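
As a simple illustration, the classification rule above can be expressed in a few lines of code. The sketch below is hypothetical: the function name and class labels are our own, and real classification depends on the device’s intended purpose and its conformity assessment route under the MDR.

```python
# Illustrative sketch: maps an MDR risk class to the AI Act's high-risk
# designation as described above (Class IIa or higher). The function and
# labels are hypothetical, not drawn from either regulation's text.

MDR_CLASSES = ["I", "IIa", "IIb", "III"]  # ascending order of MDR risk

def is_high_risk_under_ai_act(mdr_class: str) -> bool:
    """Return True if a device in this MDR class would be 'high-risk'."""
    if mdr_class not in MDR_CLASSES:
        raise ValueError(f"Unknown MDR class: {mdr_class!r}")
    return MDR_CLASSES.index(mdr_class) >= MDR_CLASSES.index("IIa")

assert not is_high_risk_under_ai_act("I")
assert is_high_risk_under_ai_act("IIa") and is_high_risk_under_ai_act("III")
```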

High-risk systems should be designed and developed in such a way as to ensure that their operation is transparent and that deployers can interpret a system’s output. These systems should also be accompanied by instructions for use that provide information that is ‘concise, complete, correct, and clear’ to deployers.


Regulatory requirements for high-risk AI

There are several regulatory requirements for high-risk AI that apply to both providers and deployers. These include:

• Data and data governance

• Standards, including accuracy, robustness, and cybersecurity

• Technical documentation and record keeping

• Privacy

• Accessibility

• Specific obligations for providers, importers, distributors and users


Continual compliance:

– Risk management system (RMS), including a fundamental rights impact assessment

– Quality management system (QMS)

– Authorised representatives

– Post-marketing requirements

Oversight:

– Transparency and provision of information to deployers

– Human oversight

– Regulatory sandboxes

Whilst the Act calls for a QMS and for technical documentation, developers could leverage the EU MDR/IVDR technical documentation and QMS already in place for their regulated products, and simply expand that documentation to cover the AI system or sub-system.
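
One hypothetical way to approach this is to index where each AI Act documentation topic could sit within an existing MDR-style technical file. The sketch below is purely illustrative; the topic labels and section names are our own assumptions, not a structure prescribed by either regulation.

```python
# Hypothetical mapping of AI Act documentation topics onto an existing
# MDR-style technical file. All labels here are illustrative assumptions.
AI_ACT_TOPIC_TO_TECH_FILE_SECTION = {
    "general description of the AI system": "Device description and specification",
    "risk management system": "Risk management file (extended for AI-specific risks)",
    "data and data governance": "Software verification and validation (new data annex)",
    "accuracy, robustness and cybersecurity": "Software verification and validation",
    "human oversight measures": "Instructions for use / usability engineering file",
    "post-market monitoring plan": "Post-market surveillance plan (extended)",
}

# Print a simple gap-analysis checklist from the mapping.
for ai_topic, section in AI_ACT_TOPIC_TO_TECH_FILE_SECTION.items():
    print(f"[ ] {ai_topic} -> {section}")
```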


Assessment of high-risk AI in healthcare

Compliance with the high-risk AI requirements will be assessed by Notified Bodies through the existing conformity assessment procedure, provided these bodies have been designated by Member States to assess compliance with this Act. Conformity assessment occurs at a single point in time, but because AI systems are updated and evolve, the Act helpfully includes the concept of pre-agreed changes that do not require re-assessment.


Key timeline considerations for developers

Following the Act’s entry into force on 1 August 2024, obligations will apply after the periods below (see the date sketch after this list):

• 6 months for prohibited AI

• 12 months for general-purpose AI

• 2 years for most high-risk AI, or 3 years for high-risk AI embedded in products covered by existing EU legislation, such as medical devices

• By 31 December 2030 for existing high-risk AI used within specific large-scale IT systems, or 6 years for existing high-risk AI intended to be used by public authorities
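
For planning, these milestones can be tracked against the calendar. Below is a minimal sketch, assuming the entry-into-force date of 1 August 2024 stated above; the dictionary labels and helper function are our own.

```python
from datetime import date

# Key application dates under Regulation (EU) 2024/1689, counted from
# entry into force on 1 August 2024. Labels are our own shorthand.
APPLICATION_DATES = {
    "prohibited AI practices (+6 months)": date(2025, 2, 2),
    "general-purpose AI obligations (+12 months)": date(2025, 8, 2),
    "most high-risk AI (+24 months)": date(2026, 8, 2),
    "high-risk AI in regulated products, e.g. medical devices (+36 months)": date(2027, 8, 2),
}

def days_until(deadline: date, today: date) -> int:
    """Days remaining before an obligation applies (negative once it applies)."""
    return (deadline - today).days

today = date(2025, 1, 1)  # example reference date
for obligation, deadline in APPLICATION_DATES.items():
    print(f"{obligation}: applies from {deadline}, {days_until(deadline, today)} days away")
```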


Penalties for non-compliance

Penalties under the Act are serious, reaching up to €35 million or 7% of an organization’s global annual turnover, whichever is higher, for the most serious infringements. It is therefore crucial that organizations take steps to future-proof their operations. They should look to develop an AI governance and compliance strategy based on principles of responsible and ethical AI use, and take advantage of existing risk management processes. Developers should also consider establishing AI regulatory affairs roles to manage regulatory risks arising from this Act and from other AI regulations that will be introduced globally.
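
To put the headline figure in context, here is a minimal sketch, assuming the cap described above (the higher of €35 million or 7% of worldwide annual turnover for the most serious infringements):

```python
# Illustrative only: the maximum fine for the most serious infringements
# is the higher of EUR 35 million or 7% of worldwide annual turnover.
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

print(f"{max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000 for a EUR 2bn turnover
```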


Developing EMA guidance

We are seeing guidance on the use of AI gradually emerging from the European Medicines Agency (EMA). Last year, the EMA published a reflection paper setting out considerations on the use of AI and machine learning (ML) across the lifecycle of medicinal products, including medicinal product development, authorization, and post-authorization. It reflects on the scientific principles that are relevant for regulatory evaluation when such technologies are applied, with the aim of supporting the safe and effective development and use of medicines.

This year, the EMA published its ‘Guiding principles on the use of large language models in regulatory science and for medicines regulatory activities.’ The document provides high-level recommendations to regulatory authorities to facilitate the safe, responsible, and effective use of large language models (LLMs).

These resources will serve as a valuable reference point for developers navigating the new AI Act and its requirements.


Navigating the EU AI Act

Our Regulatory Affairs team closely monitors the changes and developments in AI regulation for medical devices. If you are a device developer and need advice or guidance on the EU AI Act and how it may affect you, please get in touch today.
