AI in healthcare: What are the real barriers to progress?

Naveen Kumar S, VP of engineering and head of the India Innovation Center at Altran, outlines the challenges that must be overcome to tap the true potential of AI in healthcare.

What was once the reserve of science fiction is now becoming reality in healthcare, thanks to an explosion of artificial intelligence (AI) in the industry. In recent years, the use of AI and machine learning (ML) in healthcare has soared, giving us everything from robot-assisted surgery and 3-D image analysis to smart biosensors that aid remote disease management. But to tap the true potential of AI in healthcare, several major challenges must be overcome:

Regulations and compliance: The first challenge is regulatory. On April 2, 2019, the U.S. Food and Drug Administration (FDA) published a discussion paper to ignite debate over what regulatory frameworks should govern the modification and use of AI and ML in medical environments. AI/ML-based software intended to treat, diagnose, cure, mitigate or prevent disease or other conditions is classed as a “medical device” under the Federal Food, Drug and Cosmetic Act. Regulatory frameworks catching up with practice are, of course, a good thing. However, with new frameworks come new rules, boundaries and potential obstacles that will have to be navigated before AI can deliver on its promise.

Data quality and availability: Another challenge revolves around data quality and data availability. For AI/ML technology to work effectively, its data inputs must be as accurate as possible. The data must also be fully accessible to the technology for it to deliver any tangible benefit to doctors and patients. The digitisation of health records will be critical here, but it remains a big mountain to climb for governments and healthcare providers, and interoperability is often nowhere near where it needs to be. Both factors will affect the viability of AI-enabled therapeutics going forward.
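
To make the interoperability point concrete, the sketch below shows what standards-based access to a digitised record could look like in Python, using HL7 FHIR, a widely adopted open standard for exchanging electronic health records. The server URL and patient ID are hypothetical placeholders, not a real service.

```python
# A minimal sketch of interoperable record access over HL7 FHIR.
# BASE_URL and the patient ID are hypothetical placeholders.
import requests

BASE_URL = "https://fhir.example-hospital.org"  # hypothetical endpoint

resp = requests.get(
    f"{BASE_URL}/Patient/12345",
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()
patient = resp.json()

# Because the resource shape is standardised, any conformant system
# can parse the same record without bespoke integration work.
print(patient["id"], patient.get("birthDate"))
```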

Transparency: For AI to provide an accurate diagnosis, it needs thorough training and access to a wealth of reliable data sets. These requirements can be problematic, not least because of legislation like the General Data Protection Regulation (GDPR) in the EU, which mandates a “right to explanation” for algorithmically generated user-level predictions that have the potential to “significantly affect” users. Making results retraceable in this way requires an AI assistant not only to make a decision, but also to demonstrate how it arrived at that decision.
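
As an illustration of what demonstrating a decision can mean in practice, here is a minimal Python sketch that trains a simple linear model on synthetic data and reports how much each input feature pushed a single prediction toward its outcome. The feature names and data are invented for the example; real clinical models are rarely this simple and typically need dedicated explanation tools such as SHAP or LIME.

```python
# A sketch of a "retraceable" prediction: alongside the predicted risk,
# report each feature's contribution to the decision. Features and data
# are hypothetical placeholders, not real clinical variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
features = ["age", "blood_pressure", "glucose", "bmi"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

patient = X[0]
risk = model.predict_proba(patient.reshape(1, -1))[0, 1]
# For a linear model, each feature's pull on the decision is simply its
# coefficient times its value; opaque models need approximation tools
# (e.g. SHAP or LIME) to produce a comparable breakdown.
contributions = model.coef_[0] * patient

print(f"Predicted risk: {risk:.2f}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:>15}: {c:+.3f}")
```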

Bias: Many forms of bias can creep into AI algorithms: dataset shift, unintended discrimination against particular groups, and poor generalisation to new scenarios. These detract from the true efficacy of a solution and can have negative, unintended consequences. Other forms of bias can also develop as AI solutions are commercialised, for example when pharmaceutical companies compete to be the supplier of choice for a particular condition. It’s therefore important that AI algorithms used for diagnosis or triage are clinically validated for accuracy. High-quality reporting of machine learning studies also plays a crucial role.
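
One concrete validation step is to compare a model’s accuracy across patient subgroups before deployment, so that discrimination against a group shows up as a measurable gap. The sketch below is a minimal illustration with invented labels and an arbitrary tolerance; it is not a substitute for a clinical validation protocol.

```python
# A basic bias check: per-subgroup accuracy on held-out evaluation data.
# Labels, group assignments, and the tolerance are illustrative only.
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Return accuracy per subgroup so large gaps can be flagged for review."""
    return {
        g: float(np.mean(y_true[groups == g] == y_pred[groups == g]))
        for g in np.unique(groups)
    }

# Hypothetical evaluation data.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 1, 1, 0])
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

scores = subgroup_accuracy(y_true, y_pred, groups)
gap = max(scores.values()) - min(scores.values())
print(scores, f"accuracy gap: {gap:.2f}")
if gap > 0.1:  # illustrative tolerance; a real study would justify this
    print("Warning: performance differs notably between subgroups")
```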

Consumer trust: Without accurate and accessible data, AI-based analyses and projections won’t be reliable. Garbage in, garbage out, as they say. One way of ensuring that the data we feed into AI systems is reliably accurate is to nurture public trust and encourage society to see AI as an asset rather than a threat. In a paper entitled “Five Pillars of Artificial Intelligence Research”, two leading academics in data engineering and AI outline five key “pillars” for building this trust:

  1. Rationalizability: For people to cultivate greater acceptance of AI models, they need to develop an understanding and appreciation of technologies such as deep neural networks, which are opaque by their very nature. These technologies, and the reasons for their opacity, need to be “rationalized” in the public’s mind. 
  2. Resilience: AI technology needs to prove itself to be resistant to tampering and hacking, perhaps with legislation and policy around maintenance and check-ups. 
  3. Reproducibility: Typically in research, a consensus needs to be reached among a group of experts before something is deemed “true.” It’s one of the reasons we seek second opinions on medical diagnoses. There needs to be a universally agreed standard for things like code documentation, formatting and testing environments, so that AI systems can be cross-referenced against one another. 
  4. Realism: This refers to the ability of AI to make decisions with a degree of emotional intelligence, such as the ability of voice assistants to recognize tone of voice, or the ability of a chatbot to provide appropriate emotional feedback.  
  5. Responsibility: We have a code of ethics in all aspects of society, from business to healthcare. Naturally, AI might disrupt these ethics in a variety of ways, so it’s important we establish a code of machine ethics, too. 

Privacy and the right to anonymity: There are, quite rightly, tight regulations around patient data and how it can be shared and used. In some use cases, it may be possible to protect patients’ identities by anonymising enough of their data to let the AI do its work. Other areas could prove more problematic, however, such as diagnoses that depend on images, like ultrasounds. Beyond patient confidentiality, AI systems themselves must be periodically audited and validated for accuracy. The FDA already has some guidelines in place, such as Algorithm Change Protocols (ACPs), but regulation around the servicing and maintenance of AI is still evolving.
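
As a simple illustration of the anonymisation point, the Python sketch below pseudonymises a structured record by dropping direct identifiers and replacing the patient ID with a salted hash. The field names are hypothetical, and real de-identification regimes (HIPAA’s Safe Harbor rule, for instance) cover far more attributes; identifying detail embedded in images, as noted above, is harder still.

```python
# A minimal pseudonymisation sketch: strip direct identifiers and key the
# record by a salted hash. Field names are hypothetical placeholders.
import hashlib

DIRECT_IDENTIFIERS = {"name", "address", "phone", "email"}
SALT = b"replace-with-a-per-deployment-secret"  # keep out of source control

def pseudonymise(record: dict) -> dict:
    token = hashlib.sha256(SALT + record["patient_id"].encode()).hexdigest()
    cleaned = {k: v for k, v in record.items()
               if k not in DIRECT_IDENTIFIERS and k != "patient_id"}
    return {"pseudonym": token, **cleaned}

record = {"patient_id": "12345", "name": "Jane Doe", "phone": "555-0100",
          "age": 57, "glucose": 6.1}
print(pseudonymise(record))
```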

Security: And then there’s the issue of cybersecurity and data protection. In the United States alone, there have been close to 120 sophisticated ransomware attacks targeting the healthcare sector over the past four years. Healthcare data therefore demands the highest levels of security and privacy protocols, with end-to-end encryption of personal data being a good first step. Technologies such as blockchain can help with auditing data and making it tamper-evident, but implementations are still relatively new and not fully tested in the field.
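
To make the encryption point concrete, here is a minimal sketch of encrypting a record at rest with authenticated symmetric encryption, using the widely used Python cryptography package. Key management, the genuinely hard part in production, is only stubbed here.

```python
# Encrypting a (pseudonymised) record with authenticated symmetric
# encryption. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production, fetch from a key-management service
cipher = Fernet(key)

plaintext = b'{"pseudonym": "ab3f...", "glucose": 6.1}'
token = cipher.encrypt(plaintext)  # ciphertext is also integrity-protected
restored = cipher.decrypt(token)   # tampered tokens raise InvalidToken

assert restored == plaintext
print(token[:40], b"...")
```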

The temptation to DIY: With healthcare information at our fingertips and wellbeing chatbots just a few clicks away online, the temptation to self-diagnose is strong for patients. It has created an interesting shift in the power dynamic between physician and patient. But no matter what technology becomes available to a patient, it will never replace the expertise and experience of a trained doctor. While this may seem obvious, it’s a legitimate concern for many in the healthcare profession as more and more information becomes readily available. 

It’s still too early to tell what the lasting impact of the current global pandemic will be on attitudes toward healthcare provision, but it’s highly likely that people will become more accepting of digital solutions and remote diagnosis. This could have a positive impact on AI’s trajectory in healthcare as dependence on digital technologies and the exchange of data become more commonplace. But the tipping point won’t come until the significant challenges outlined above are surmounted.
