Mental health: The role of AI-powered solutions

Amlan Basu, chief medical officer at The Huntercombe Group and Active Care Group, analyses where artificial intelligence can impact the treatment of mental health. 

Artificial intelligence will certainly have a profound impact on the way that we detect, diagnose, and treat mental disorder. Yet most mental health professionals have little understanding of AI or the ways in which it will affect efforts to reduce the suffering caused by poor mental health. 

As history teaches us, that which is poorly understood is feared or denied; but the real danger is that mental health professionals fail to engage with AI’s development, uses and limitations, only to awaken one day to find that the delivery of mental healthcare has permanently changed, seemingly without notice or consultation.

The case for change is surely well made already. Globally, about one in four people will experience mental illness at some point in their lives, around 350 million people live with depression, and there are an estimated 25 million suicide attempts each year. Hospital admission rates for self-harm and suicide rates among children are both increasing in the developed world. Our efforts to treat mental illness, from a public health perspective, have surely failed.

As potential solutions, the UK’s attempts to persuade more medical students to choose psychiatry as a career are laudable, but unlikely to make a dent. Current workforce shortages and COVID-related mental health pressures only magnify the scale of the challenge. In this context, AI-powered solutions have the potential to revolutionise the diagnosis and treatment of mental illness irrespective of workforce capacity and geography. However, these solutions need to be properly understood so that their promise can be harnessed without unintended consequences.

Early detection and diagnosis

AI’s ability to detect abnormalities in imaging, whether X-ray or CT, with remarkable accuracy and speed was hailed with much fanfare. Despite this, its use is far from routine or embedded, and there continue to be examples of NHS Trusts being criticised for missing cancer diagnoses due to a backlog of scans.

In the world of mental health, diagnoses are made based on a detailed history, physical examination and – critically – the mental state examination. A psychiatrist considers, evaluates, and prioritises a significant number of variables for this examination, including a patient’s appearance and behaviour, their speech, their reported mood and thoughts, their perceptions, cognitive ability, and insight.

The digitisation of this data – the digital phenotype – is possible through many routes, including the analysis of an individual’s speech, voice, and face, how they interact with a keyboard or smartphone, and through various wearable sensors.

Analysis of the digital phenotype has led to some eye-opening findings. In-depth analysis of the human voice alone (e.g., pitch, volume, jitter) has predicted marital difficulties as well as, if not better than, therapists, and has predicted with 100% accuracy whether at-risk youths would transition to a psychotic illness, outperforming classification based on clinical interviews.
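
To make this concrete, the sketch below shows how basic acoustic features such as pitch, volume and jitter might be pulled out of a single voice recording. It assumes the open-source Python library librosa and a hypothetical file voice_sample.wav, and illustrates the kind of feature extraction involved rather than the pipeline used in the studies above.

```python
# Illustrative sketch only: extract simple acoustic features (pitch, volume, jitter)
# from a voice recording. Assumes the open-source `librosa` library and a
# hypothetical local file `voice_sample.wav`; not the pipeline used in the studies.
import numpy as np
import librosa

# Load the recording (mono, at its native sampling rate)
y, sr = librosa.load("voice_sample.wav", sr=None, mono=True)

# Fundamental frequency (pitch) estimated frame by frame with the pYIN algorithm
f0, voiced_flag, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
f0_voiced = f0[voiced_flag & ~np.isnan(f0)]

# Volume proxy: root-mean-square energy per frame
rms = librosa.feature.rms(y=y)[0]

# Crude jitter proxy: mean cycle-to-cycle variation of the pitch period
periods = 1.0 / f0_voiced
jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)

features = {
    "mean_pitch_hz": float(np.mean(f0_voiced)),
    "pitch_variability_hz": float(np.std(f0_voiced)),
    "mean_rms_energy": float(np.mean(rms)),
    "jitter_ratio": float(jitter),
}
print(features)
```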

From an academic perspective, combining functional magnetic resonance imaging (fMRI) and machine learning (a subset of AI) has also proven to be powerful in relation to suicidality, for example distinguishing those experiencing suicidal ideation from those who were not. In addition, algorithms that simply scanned electronic health records were able to predict (with 80% accuracy) which patients would attempt suicide within the next two years, and (with 84% accuracy) which would attempt suicide within the next week. Significant progress has also been made in relation to the diagnosis of other conditions such as autism and post-traumatic stress disorder (PTSD).
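
At its core, the machine-learning step in these electronic-health-record studies is supervised classification over structured record features. The sketch below illustrates that general shape using scikit-learn on synthetic data; the feature names, the model choice and the data itself are hypothetical and are not drawn from the published work.

```python
# Illustrative sketch only: a supervised classifier over synthetic, EHR-like tabular
# features. The feature names, model and data are hypothetical; they show the broad
# shape of such an analysis, not the published studies' methods.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical record-derived features: age, prior admissions, prior self-harm flag,
# number of psychotropic prescriptions, days since last contact with services
X = np.column_stack([
    rng.integers(16, 90, n),   # age
    rng.poisson(1.0, n),       # prior admissions
    rng.integers(0, 2, n),     # prior self-harm recorded (0/1)
    rng.poisson(2.0, n),       # psychotropic prescriptions
    rng.integers(0, 730, n),   # days since last contact
])
# Synthetic label loosely tied to the features, purely for demonstration
logits = 0.8 * X[:, 2] + 0.3 * X[:, 1] - 0.002 * X[:, 4] - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, probs):.2f}")
```

In practice the difficult work lies not in fitting the model but in assembling and labelling the records, handling missing data, and validating performance on genuinely unseen patients.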

Treatment

It is perhaps in relation to treatment, rather than diagnosis or early detection, that psychiatrists and psychologists feel they are most immune to the impact of AI. After all, treatment itself is so reliant on the interpersonal and therapeutic connection made between one human being and another. 

As it happens, there is good evidence that we are prepared to disclose much more to a ‘virtual’ human than to a real one; many people report that they do not feel comfortable talking about personal matters with another person and find it much easier when talking to a ‘bot’, while others fear being judged, including by their therapist or doctor.

It is no surprise that there are now many apps that deliver talking treatments through ‘virtual’ interactions with therapists rather than face-to-face interactions; a meta-analysis showed that depressive symptoms improved significantly through this medium, particularly when treatments were based on cognitive behavioural therapy (CBT) models. 

Although the smartphone apps evaluated in this meta-analysis all relied on interactions with human beings, the development of apps using text-based natural-language processing is well underway. Fully automated conversational agents of this kind (such as Woebot) have also been subjected to randomised controlled trials (RCTs) and reviews and, again, the results seem very promising.
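
To give a flavour of what ‘text-based’ means here, the toy sketch below shows a keyword-driven exchange in Python. It is deliberately crude, is not how Woebot or any clinical product works, and real agents rely on far richer natural-language processing; the keyword rules and prompts are purely hypothetical.

```python
# Illustrative sketch only: a toy, keyword-driven text agent showing the broad shape
# of an automated check-in. This is NOT how Woebot or any clinical product works;
# real agents use proper natural-language processing and clinical safeguards.

# Hypothetical keyword-to-prompt rules loosely inspired by CBT-style reframing
RULES = [
    ({"always", "never", "everyone", "nobody"},
     "That sounds like all-or-nothing thinking. Can you think of an exception?"),
    ({"worried", "anxious", "panic"},
     "Thanks for sharing that. What is the specific thought behind the worry?"),
    ({"hopeless", "worthless", "suicidal"},
     "I'm sorry you're feeling this way. A human should be involved here - "
     "please contact your local crisis service or emergency number."),
]
DEFAULT = "Tell me a bit more about how that made you feel."

def reply(message: str) -> str:
    """Return the first rule-based prompt whose keywords appear in the message."""
    words = set(message.lower().split())
    for keywords, prompt in RULES:
        if words & keywords:
            return prompt
    return DEFAULT

if __name__ == "__main__":
    print("Agent: How has your day been? (type 'quit' to stop)")
    while (text := input("You: ").strip().lower()) != "quit":
        print("Agent:", reply(text))
```

Note the hard-coded escalation rule: even in a toy example, any mention of risk should route the person towards human help, a point that returns in the ethical questions below.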

The advantages of such agents are obvious, particularly for the huge numbers waiting for face-to-face treatment. As an example, during the coronavirus pandemic, rates of substance misuse increased and yet capacity for delivering traditional face-to-face treatment was significantly limited. An RCT looking at the use of an AI text-based CBT intervention for substance misuse, W-SUDS (Woebot for Substance Use Disorders), found that those engaging with W-SUDS used substances less than those on the waiting list, and that this reduction in use, unsurprisingly, also improved their general mental health, including anxiety and depression-related symptoms.

Ethics and hazards

Many of these scientific advances are in their early stages of maturity, but AI is fast coming of age. Drawing out which digital biomarkers are critical, and in what combination, is a big data challenge and it is important that successes are not exaggerated or over-generalised. In addition, establishing the ‘ground truths’ – defining the gold standard with respect to diagnosis, for example – is difficult, not least because of changing diagnostic criteria and the subjectivity of establishing the criteria needed for diagnosis.

Science to one side, there are plenty of other issues that need to be considered when it comes to AI in mental health. For example, who is accountable when suicide risk is calculated and acted upon by an algorithm? How do we maintain the confidentiality of sensitive data, and how do we handle someone disclosing potential risk that they pose to others or themselves?

I am optimistic that if the current workforce were informed about AI, irrational fears about job losses would be quickly put to rest and a much-needed conversation could ensue about how best to use the sophistication of AI to alleviate and indeed prevent the psychological suffering of so many.

What is clear is that AI-assisted efforts stand a genuine chance of reducing a huge global burden of disease, so much of which currently goes totally unseen and untreated by health care professionals.
