The EU's AI Regulation - what does it mean for UK medtech?

Brett Lambe, senior associate in the technology team at Ashfords LLP, examines what a new AI Regulation from the EU could mean for medtech. 

A new AI Regulation has been proposed which will affect all medical businesses deploying or using AI systems, potentially as soon as 2024.

In April 2021, the European Commission published a 'landmark proposal' for a new AI Regulation which will have a major impact on the ongoing use and development of AI in the medical sector.

Described as the 'first-ever legal framework on AI', the aim is to strengthen Europe's position in the development of 'human-centric, sustainable, secure, inclusive and trustworthy AI.' The new laws attempt to guarantee the safety and fundamental rights of individuals and businesses whilst boosting investment and innovation in AI.

What the Regulation means and how it applies to the medical sector

The Regulation proposes banning certain "Prohibited AI Practices", imposing a detailed compliance regime for "High Risk AI Systems" and lighter regulation for "Limited Risk" and "Minimal Risk" activities.

The Prohibited AI Practices include the use of subliminal techniques beyond an individual's consciousness to materially distort their behaviour, and the exploitation of the vulnerabilities of a specific group of individuals due to their age.

High Risk AI Systems are perhaps more relevant to medical technology providers. A provider in this context means an organisation that develops an AI system, or has an AI system developed, with a view to putting it into service under its own brand, whether for payment or free of charge.

An AI system will be "high risk" if it creates a high risk to the health and safety or fundamental rights of natural persons. The Regulation sets out a comprehensive list of such systems, which includes medical devices and certain diagnostics.

The list of High Risk AI Systems may be updated by the Commission as a means of future-proofing and keeping pace with the evolution of AI applications and use cases.

Providers of High Risk AI Systems will be subject to strict obligations. Providers will need to prove the safety of the technology, document how it makes decisions, conduct conformity assessments and guarantee human oversight in how AI applications are used and created.

In addition, importers will need to ensure that the provider carried out the conformity assessment and provided relevant technical documentation, and users will need to use the AI in accordance with its instructions, while actively monitoring for and flagging any problems to the provider.

If a business breaches the new rules, the proposal provides for a tiered range of fines depending on the nature of the breach. Organisations found to be using AI systems for prohibited purposes, or in breach of their data governance obligations, may face fines of up to €30 million or 6% of their annual worldwide turnover, whichever is higher. Non-compliance with other obligations under the Regulation may lead to fines of up to €20 million or 4% of turnover, and supplying incorrect or misleading information in reply to a request from an authority may lead to fines of up to €10 million or 2% of turnover, in each case again whichever figure is the higher.
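To illustrate how the proposed "whichever is higher" mechanic would work in practice, the rough sketch below calculates the theoretical maximum fine for each tier. It is an illustration only, based on the figures in the April 2021 proposal; the function and tier names are our own shorthand, and the thresholds may well change before the Regulation is adopted.

```python
# Illustrative sketch only, based on the fine tiers in the April 2021 proposal.
# The tier labels and function name are our own shorthand, not terms from the text.

def max_fine(annual_worldwide_turnover_eur: float, tier: str) -> float:
    """Return the theoretical maximum fine for a given breach tier."""
    tiers = {
        "prohibited_or_data_governance": (0.06, 30_000_000),  # 6% or EUR 30m
        "other_non_compliance": (0.04, 20_000_000),           # 4% or EUR 20m
        "incorrect_information": (0.02, 10_000_000),          # 2% or EUR 10m
    }
    percentage, fixed_amount = tiers[tier]
    # The proposal takes whichever of the two figures is higher.
    return max(percentage * annual_worldwide_turnover_eur, fixed_amount)

# Example: EUR 800m turnover, prohibited-practice breach -> EUR 48m maximum.
print(max_fine(800_000_000, "prohibited_or_data_governance"))
```

On those figures, a business with €800 million of annual worldwide turnover committing a prohibited-practice breach could in theory face a fine of up to €48 million, since 6% of its turnover exceeds the €30 million floor.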

The benefits

The proposed Regulation may help accelerate the realisation of AI's potential in the medical and healthcare sectors, helping to unify fragmented data sets and harmonise diverse legal frameworks regulating the use of AI and data in medical technology.

It may also serve as a catalyst for greater international AI regulation, a field where the speed of progress by certain high profile providers, often left unchecked, has led to some disquiet amongst consumers.

International application

Given that these rules stem from the EU, it is reasonable to ask whether medical businesses operating in the UK need to pay attention. The answer is almost certainly 'yes'.

The new rules have a very broad application. They apply to providers deploying AI systems in the EU, regardless of where those providers are based; to users of AI systems located within the EU; and to providers and users of AI systems located in third countries where the output produced by the system is used in the EU.
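Read together, those three limbs work like a simple "any of" test. The sketch below is purely illustrative: the flag names are hypothetical shorthand for the limbs described above, and the final scope wording may change before the Regulation is adopted.

```python
# Purely illustrative: the three limbs of the proposed territorial scope,
# expressed as a simple "any of" check. Flag names are hypothetical shorthand.

def regulation_applies(
    provider_deploys_in_eu: bool,  # provider deploying an AI system in the EU
    user_located_in_eu: bool,      # user of the AI system located in the EU
    output_used_in_eu: bool,       # third-country provider/user, but output used in the EU
) -> bool:
    return provider_deploys_in_eu or user_located_in_eu or output_used_in_eu

# A UK provider with no EU presence whose system's output is used in the EU:
print(regulation_applies(False, False, True))  # True
```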

So we can see that UK businesses that deploy or use AI systems in any of these ways must carefully review the Regulation and ensure compliance. Of course, many medical businesses operate across multiple territories, including the EU, so they will be affected by the Regulation in any event.

Next steps

The Regulation remains subject to extensive scrutiny and amendment by EU Member States and the European Parliament. Parts of the Regulation are currently drafted in very broad terms. This is by design: the rapid pace of change in AI and tech (particularly compared with the slow pace of implementing international legislation) means that scope for flexibility is essential. However, many businesses in the AI sector will not wish to have their plans for innovation curtailed, and will also want some degree of certainty, so expect further changes before the Regulation comes into force.

Thankfully for medtech businesses, the EU has recognised the risk of duplicating existing regulations and the potential to impose unnecessary burdens. Medical devices have been specifically identified as an area where overlaps may occur (including with the Medical Device Regulation), with the result that existing conformity assessment procedures will be used to check the new AI requirements.

Once adopted, the final AI Regulation will apply across the EU. It provides for a two-year period for application following adoption, which means the new law will not realistically be in force before late 2024.

Implications

The wide application of the proposed Regulation means it is likely to act as a benchmark, potentially shaping the practices of providers, distributors and users of AI systems globally, whether they are based in the EU or elsewhere.

For the UK, this represents an early test of how to respond to technology regulation introduced by the EU in the aftermath of Brexit. The UK is not compelled to implement the new legislation. However, a failure to seek alignment in areas of common ground will leave businesses facing the prospect of multiple regulatory regimes and costly compliance measures.

Businesses generally have to comply with the regulations of their major markets, with the GDPR being a key example. It would therefore be prudent for medical businesses to engage with the new rules sooner rather than later, planning any changes needed to their processes and operations and smoothing the compliance burden. If your strategy includes AI in the European market, understanding the proposals now will help you avoid the time and expense of remedying issues when the new law comes into effect.
