AI: Human augmentation in healthcare

Bethan Halliwell, partner, and Harry Strange, associate, both patent attorneys at European intellectual property firm Withers & Rogers, outline how recent technological advances are helping to remove bias and open the door to more potential healthcare applications.

AI systems are increasingly finding use in diagnostics and early-stage disease detection, as well as helping to inform clinical decision-making, ultimately with the aim of improving patient outcomes. However, regulators and clinicians still have some concerns about the safety, explainability and fairness of these fast-developing systems, and want to ensure that humans have sufficient oversight and control. So, what are innovators doing to help?

Image recognition and more

Among the most common AI systems used in the field of diagnostics are image recognition systems, which can help clinicians to spot physical signs of disease and minimise the risk of human error. These systems can assess patient data reliably and efficiently, so doctors can prescribe the right treatments as quickly as possible. Other systems known as ‘recommender systems’ can propose a course of action, helping to optimise clinical decisions based on the most probable patient outcomes.
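
To illustrate the pattern described above, here is a minimal sketch of how an image-based diagnostic aid might defer low-confidence cases to a clinician. The model, class labels and confidence threshold are illustrative assumptions, not any vendor's actual system.

```python
# A minimal sketch of an image-based diagnostic aid with a human-in-the-loop
# confidence threshold. The model, labels and threshold are hypothetical.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Stand-in for a trained medical image classifier."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, n_classes),
        )

    def forward(self, x):
        return self.net(x)

LABELS = ["no finding", "benign lesion", "suspicious lesion"]  # hypothetical
REVIEW_THRESHOLD = 0.85  # below this, defer to a clinician

model = TinyClassifier(n_classes=len(LABELS))
model.eval()

scan = torch.rand(1, 1, 224, 224)  # placeholder for a pre-processed scan
with torch.no_grad():
    probs = torch.softmax(model(scan), dim=1).squeeze()

confidence, idx = probs.max(dim=0)
confidence, idx = confidence.item(), idx.item()
if confidence < REVIEW_THRESHOLD:
    print(f"Low confidence ({confidence:.2f}): routing scan for clinician review")
else:
    print(f"Suggested finding: {LABELS[idx]} ({confidence:.2f})")
```

In practice, the threshold itself would be set and audited clinically; the point is that the system recommends, while a human decides.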

Imperial College London (ICL) has published details of several AI-based innovations for application in the healthcare sector. For example, the Biomedical Imaging Analysis Group is exploring the use of AI in supporting the diagnosis of rare cancers based on visual data alone. This work is being undertaken on the basis that even the most experienced radiologists won’t have seen every form of cancer and therefore some of the rarest forms of the disease could easily be missed. Another example of an AI-based image recognition system for use in the healthcare sector has been developed by Histofy, a spin-out from the University of Warwick’s Tissue Image Analytics Centre. This tool is designed to help histopathologists with transparent tissue-based diagnostics and prognosis.

Human augmentation in action

Of course, AI systems for application in the healthcare sector must be safe, and therefore some element of human oversight and moderation is required. The most innovative tools are designed to work alongside humans, adding value to what they do by improving diagnostic accuracy or guiding clinicians to make decisions that will deliver the best possible outcome for the patient. AI-powered services developed by Merative, a US-based data, analytics and software business, which was formerly a subsidiary of IBM Watson Health, are an example of human augmentation in action. Trained using vast medical datasets, these services are designed to provide up-to-date information about drugs and diseases, combined with smart recommendations to support clinicians’ decisions at the point of care.

An example of generative AI supporting clinicians is a system developed by US-based Abridge AI, which creates summaries of patient-doctor conversations to assist clinicians in preparing patient notes. These summaries are transformed into clinical documents, which can then be integrated with patients’ electronic health records.
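
As a rough illustration of this kind of workflow (not Abridge AI's actual pipeline), a generic summarisation model can turn a consultation transcript into a draft note for a clinician to review. The model choice and transcript below are placeholders.

```python
# Illustrative consultation-note summarisation using an off-the-shelf model
# via the Hugging Face transformers library. This is a generic sketch, not
# Abridge AI's actual system; the model and transcript are placeholders.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

transcript = (
    "Doctor: How have the headaches been since we increased the dose? "
    "Patient: Better, maybe two a week now, and less severe. "
    "Doctor: Good. Any side effects, nausea or dizziness? "
    "Patient: A little dizziness in the mornings. "
    "Doctor: Let's keep the current dose and review again in six weeks."
)

summary = summarizer(transcript, max_length=60, min_length=15)[0]["summary_text"]
print(summary)  # a draft note for the clinician to review and approve
```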

The value of ‘synthetic data’

When it comes to ensuring the safety and fairness of AI systems, regulators increasingly expect outputs to be explainable. If an AI system makes a wrong prediction or misses something crucial, the developer will need a full understanding of what has happened in order to correct it. Based on this understanding, it may then be possible to retrain the model to reduce the likelihood of such a wrong prediction occurring again. 
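
One simple, widely used way to probe such a failure is to measure how much each input feature actually drives the model's predictions. The sketch below uses permutation importance from scikit-learn on synthetic stand-in data; the feature names are invented for illustration.

```python
# Minimal sketch of inspecting which inputs a model relies on, using
# permutation importance from scikit-learn. Data and feature names are
# synthetic stand-ins for real clinical variables.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FEATURES = ["age", "blood_pressure", "biomarker_a", "biomarker_b"]  # hypothetical

X = rng.normal(size=(500, len(FEATURES)))
# The label depends mostly on biomarker_a and partly on age.
y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and see how much test accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(FEATURES, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

If a supposedly irrelevant feature dominates, that is a signal the model has learned something unintended and may need retraining.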

One area of increasing importance is the identification of potential bias within an AI system. Often, such bias arises due to a lack of diversity in the data sets used to train AI systems. Where a gender, racial or socio-economic bias is identified, developers are now able to adapt their algorithms by retraining them using a diverse and bespoke set of ‘synthetic data’ – i.e. data generated artificially instead of by real-world events. This is a major step forward and could help improve the fairness of AI systems in practice. In other situations, if data privacy is a particular issue, synthetic ‘twin’ datasets can be created to eliminate the risk of personal information being disclosed. 
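
A heavily simplified sketch of the rebalancing idea is shown below: fit a simple generative model (here, just a Gaussian) to the under-represented group's records and sample synthetic rows until the groups are balanced. Real synthetic-data tools use far richer generative models; the groups and features here are invented.

```python
# Hedged sketch: topping up an under-represented group with synthetic rows
# sampled from a Gaussian fitted to that group's real records. Production
# tools use far more sophisticated generators; all data here is invented.
import numpy as np

rng = np.random.default_rng(42)

# Imbalanced training set: 900 records from group A, 100 from group B.
group_a = rng.normal(loc=[0.0, 1.0], scale=0.8, size=(900, 2))
group_b = rng.normal(loc=[1.5, 0.2], scale=0.6, size=(100, 2))

def synthesise(real: np.ndarray, n_new: int) -> np.ndarray:
    """Sample n_new synthetic rows from a Gaussian fitted to real data."""
    mean = real.mean(axis=0)
    cov = np.cov(real, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_new)

# Top group B up to parity before retraining the model.
synthetic_b = synthesise(group_b, n_new=len(group_a) - len(group_b))
balanced_b = np.vstack([group_b, synthetic_b])
print(f"group A: {len(group_a)} rows, group B after synthesis: {len(balanced_b)} rows")
```

The same sampling idea underlies privacy-preserving ‘twin’ datasets: the synthetic rows mimic the statistics of the real data without reproducing any individual’s record.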

Synthetic data is rapidly gaining popularity within the R&D community for other reasons too. For example, it could help to speed up AI-related project delivery, as it allows developers to generate the datasets they need for a specific application on demand. The use of artificial rather than real-world data could also help to address concerns over ethics and copyright issues. According to Gartner, 60% of the data used for the development of AI and analytics projects will be synthetic by 2024.

Developing AI regulation

On 29 March 2023, the UK Government published a white paper setting out how it intends to regulate AI. It acknowledges that there is currently a patchwork of regulatory powers, which can be challenging for developers to understand, and proposes a new principles-led approach.

The current lack of clarity regarding AI regulation means the onus is on developers and tech businesses to stay up to date with current guidance and the direction of travel. Depending on the nature of an AI system, whether existing or under development, this is likely to involve staying in touch with sector-specific regulators, such as the Medicines and Healthcare products Regulatory Agency (MHRA), as well as other regulatory organisations such as the Information Commissioner’s Office or the Competition and Markets Authority. To assist those developing AI systems for use in the healthcare sector specifically, a multi-agency service backed by the NHS AI Lab is providing information and guidance.

The IP opportunity

From an intellectual property (IP) perspective, the outlook for developers of AI systems is somewhat clearer. They should be assured that many AI- and healthcare-focused algorithms could be eligible for patent protection under IP law in the UK and most other countries. As with any other type of software-based invention, the algorithm should be new, inventive, and provide some form of technical benefit beyond the scope of the algorithm itself (e.g., a more memory-efficient approach to detecting regions of interest within a medical image). Once granted, patents can be licensed to third parties or sold to health sector service providers.

The potential of AI systems to transform healthcare services and deliver benefits for society as a whole is enormous, but there are problems on the horizon. In addition to developers’ concerns about copyright issues, end users require regulatory reassurance that personal data is protected and that humans remain in control. The good news is that AI innovators are already on the case.