Rise of the machines: How should the NHS prepare for AI?

Antonio Marino, consultant at healthcare policy and communications group Incisive Health, looks at how the NHS can prepare for the rise of artificial intelligence technologies.

In her speech at Davos last month, Theresa May became the latest senior politician to add her voice to the growing hype around the potential of artificial intelligence in healthcare, drawing the world’s attention to machine learning algorithms that could drastically reduce the number of unnecessary breast cancer surgeries carried out each year.

With applications ranging from treatment-prescribing chatbots to advanced algorithms capable of analysing medical scans, many see AI as the NHS’s saviour, enabling high-quality, efficient care at a fraction of the existing cost.

However, the NHS’s relationship with the AI industry has so far been characterised by incoherence, short-termism and controversy. Although it was always going to be difficult to integrate such a fast-moving technology into an organisation renowned for its heavy-footedness, the NHS has much to do to get its house in order.

Not only must it act quickly so that patients and clinicians benefit from this ground-breaking technology as soon as possible, it must also take measures to ensure that the introduction of AI does not do more harm than good. If attempts to incorporate the new technology are made without a clear blueprint and without the necessary reforms in place, there will be serious consequences for NHS finances and for patient confidence in the health service.

Here, then, are five challenges that the NHS must overcome if it is to reap the rewards that AI can offer:

Protect its data assets – For all their sophistication, AI algorithms need accurate, real-world data to train on. In this sense, the NHS, with its centralised, cradle-to-grave datasets, is sitting on a goldmine.

Critically, the NHS must avoid a situation in which it provides companies with the material they need to develop their technology, only for those same companies to charge it vast sums further down the line.

Clearly, an agreement must be struck before the technology is widely adopted. While Professor Sir John Bell has suggested the NHS retain full or part ownership of the algorithms developed on its data, policy-makers should also consider striking a deal that secures discounted access for the health service in the future. This would ensure that tech firms are adequately incentivised to continue working with the NHS, whilst also providing sufficient remuneration for UK taxpayers.

Open up competition – Access to NHS data is currently dominated by the giants of the technology landscape and there is a real danger of a stagnant monopoly emerging.

With start-ups complaining of being locked out, action needs to be taken to avoid NHS systems becoming exclusively dependent upon tech giants’ software. As such, there must be regulation to guarantee that the infrastructure around NHS data is adaptable for use by any company, so that there are no unfair barriers when it comes to introducing the latest, most effective technology.

Engage the population – Patients need to have confidence that their data are serving a bona fide purpose that will improve existing care, otherwise they simply will not consent to their information being used to train AI technology.

A 2016 Wellcome Trust study found that public scepticism towards data-sharing falls significantly once the value to patients, society and future generations is fully explained. The NHS must learn from this, and from previous failures such as care.data.

Embed transparency – Transparency is the other half of the formula for securing public trust.

Firstly, the NHS’s relationships with private companies must be open. In 2016 a deal between Google-owned DeepMind and the Royal Free London NHS Foundation Trust to share the records of 1.6 million patients attracted immense criticism for its secrecy. It was subsequently ruled unlawful for failing to seek patients’ permission – a reminder that contracts involving such data must have a clearly defined objective and must be grounded in patient consent.

Secondly, for patients and clinicians to place their faith in clinical AI technology, they need to broadly understand why and how a particular algorithm reaches its conclusions. To avoid giving rise to so-called ‘black boxes’, NHS authorities should require AI firms to explain – in layman’s terms – how their technology works.

Develop robust regulation models – The algorithms behind AI technology present a unique regulatory dilemma. They change constantly as they are exposed to new data, and are more autonomous than other regulated products, influencing clinical decision-making to a far greater extent.

To create a system that recognises these unique features, support should be provided for data ‘sandboxes’, in which tech companies, regulatory bodies and other stakeholders, including clinical leaders and patient representatives, come together around anonymised data to develop regulatory best practice at the same time as the algorithms are trained. Only through such a collaborative, bottom-up approach will an adequate regulatory framework for this unique technology be constructed.

The future of AI in UK healthcare could be very bright, but the NHS needs to prepare and prepare well if it is to maximise the benefits to its finances, services, and patients. Still reeling from another winter crisis and under constant financial pressure, the NHS really cannot afford to miss this boat.
