The opportunities and challenges of AI

Tom Phipps, consultant, and Jocelyn Ormond, partner, in Ashfords LLP’s healthcare, digital health & life sciences sector group, comment on whether a stronger, more innovative framework is needed to foster closer collaboration between the NHS and the digital health industry.

The growth of AI in the medical sector continues at considerable speed. A cursory search of the internet reveals ground-breaking AI applications in development: technology which improves the precision and accuracy of surgical procedures; continuous patient monitoring to optimise treatment intervention points; and diagnostic processing of large numbers of medical images, to name only three. This is all good news for patients, as it comes with the promise of better healthcare and improved outcomes from treatment. From a legal and regulatory perspective, however, it raises a number of issues the NHS in particular is having to grapple with.

The UK/EU data protection regime treats medical-related data as “special category” data, which means a more rigorous set of rules applies to its collection, retention and security than to ordinary personal data. In principle that presents no problem, provided patients are fully informed as to how their data will be used and for how long, and are free to withdraw as and when they wish (including requiring their data to be destroyed). However, AI is likely to depend on the collection and aggregation of very large quantities of data, since it is that scale which gives AI the opportunity to “learn”.

So how can such data and its use be regulated in a way that complies with data protection law? In practice, that will depend on effective and rigorous anonymisation, but NHS institutions will need to be convinced both that this is achievable and that the providers promising to deliver it can be trusted. The risks are obvious: data breaches in which individuals become identifiable would result in civil claims, adding pressure to NHS budgets already stretched by legal action. Combined with the possibility of fines and time- and cost-consuming investigations by the Information Commissioner, this could also cause consequent reputational damage.

In turn, the role of diagnostic AI, and its inter-relationship with clinicians, raises regulatory and liability concerns. At its most basic level, clinicians relying on AI will have to be absolutely convinced of the quality of its output if diagnosis and treatment are, at least in part, dependent on it. The medical insurance sector will likewise have its own concerns about AI and reliance on it when setting premiums for professional indemnity cover.

As for patients themselves, there is the issue of what information must be given to them about their treatment and consent. It is a basic principle, both legally and ethically, that patients must be told what their treatment options are and the likely outcomes and risks. That must apply to the use of AI as well, but it raises the question of how detailed that information should be. Does transparency require the patient to know, for example, whether the AI in question has been developed using live cases or synthetic ones? Patients are entitled to know whether the AI is safe and effective or still at an early stage, and to be able to make an informed decision as to what they want. In the absence of that, the legal risk is obvious.

A further issue is that AI is not a medical device as such. The sophisticated and long-established regulatory and approvals regimes that apply to medical devices have not yet been developed for AI – as its use in treatment grows, appropriate regimes will become essential. 

There is no doubt that the use of AI in the medical sector will develop exponentially, as it will in society as a whole. That is to be welcomed, as it has so much to offer right across the medical landscape. Without it, developments such as Addenbrooke’s Hospital’s use of AI to process prostate cancer scans, or the use of AI in connection with CT scans of coronary heart disease patients, would not have happened, and patient care would have suffered as a result. The legal, regulatory and ethical challenge will be to address the privacy, safety, treatment consent and liability issues that arise. There is no reason that cannot be achieved, and in turn that will build confidence in AI among both clinicians and those they help.
