ESSEC METALAB

AI: YOUR NEW THERAPIST? 

[Student IDEAS] by Leria Huang and Michelle Diaz - Master in Management at ESSEC Business School

Abstract

AI is revolutionizing mental health care, offering innovative solutions such as automating tasks, training clinicians, and performing assessments. However, diagnostic accuracy and data privacy remain gatekeeping concerns that require more public attention. To better envision the era of the AI therapist, we anticipate that the constant evolution of human-machine interaction will bring ripple effects across society…

---

As seeking assistance from AI chatbots becomes second nature in our daily lives, these advanced language models increasingly serve as companions during moments of uncertainty and distress. The release of GPT-4o has opened a new chapter in human-machine interaction, with its capacity for emotion recognition and real-time response. The human-like bot, available 24/7, is now considered not only a smart encyclopedia but also a compassionate support for people to open up about personal troubles and perplexities. Notably, psychology-focused chatbots are increasingly in demand, with 475 bots named "therapy" or "therapist" [1] capable of conversing in multiple languages. These bots, such as Psychologist, have received a total of 78 million messages, tailoring responses to common mental health conditions such as depression and anxiety.

As AI expands into the realm of mental health, its potential risks and limitations remain a concern for broad application in the therapy field. What defines good therapy? Can AI really be our new therapist? If so, what causes the concerns? If not, why is it challenging for AI to supplant human therapists? How can AI better assist in addressing mental health issues? This article explores these questions in detail.

Why and How Could AI Help in Therapy? 

Therapy has existed in its modern form for decades, yet the stigma around it recedes slowly and qualified therapists are always in demand. The quality and availability of therapy stand out as crucial issues amid a global shortfall in mental health care. AI therapy plausibly provides an accessible and attractive solution.

AI chatbots are being deployed in a wide range of psychiatric settings, from offering essential emotional support to aiding in cognitive behavioral therapy (CBT) [2]. CBT aims to identify distortions in thought patterns and to make conscious efforts to change them [3]. As a commonly adopted treatment in the mental health field, CBT is increasingly recommended, notably by the UK's National Institute for Health and Care Excellence [4], as a first step before medication for cases of mild depression. With well-trained AI chatbots, patients are able to develop awareness and identify the patterns behind their mental health problems during conversations. A study in the International Journal of Psychiatric Trainees has shown that AI chatbots like ChatGPT can improve patient-reported quality of life in psychiatric settings (see Table 1 for a review of recent clinical trials) [5].

Table 1: Overview of Select Mental Health Chatbots
Source: International Journal of Psychiatric Trainees, by Antonio Melo, Inês Silva, Joana Lopes
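To make the CBT mechanism described above more concrete, here is a deliberately minimal, rule-based sketch of spotting distorted thought patterns in a statement and prompting a reframe. It is an illustration only: the cue words, distortion labels, and prompts are invented, and real CBT chatbots rely on far more sophisticated models than keyword matching.

```python
# Toy illustration only: a rule-based sketch of the CBT idea of spotting
# distorted thought patterns and prompting a reframe. The cue lists and
# prompts below are invented; they are not clinical material.

DISTORTION_CUES = {
    "all-or-nothing thinking": ["always", "never", "everyone", "no one"],
    "catastrophizing": ["ruined", "disaster", "unbearable"],
    "labeling": ["i'm a failure", "i'm worthless"],
}

def flag_distortions(statement: str) -> list[str]:
    """Return the names of distortion patterns whose cue words appear."""
    text = statement.lower()
    return [name for name, cues in DISTORTION_CUES.items()
            if any(cue in text for cue in cues)]

def reframe_prompt(statement: str) -> str:
    """Build a Socratic-style follow-up question for the flagged patterns."""
    flags = flag_distortions(statement)
    if not flags:
        return "Tell me more about what happened."
    return (f"I noticed possible {', '.join(flags)}. "
            "What evidence supports this thought, and what evidence goes against it?")

if __name__ == "__main__":
    print(reframe_prompt("I always mess things up; everyone thinks I'm a failure."))
```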

Compared to in-person or online therapy sessions, the 24/7 availability and inherently non-judgmental nature of AI make it especially appealing. The barriers to traditional therapy sessions are clear: the time and financial cost, knowing when and where to seek help, and, more importantly, when and where a therapist is available to help. Even the most devoted therapist needs to eat, sleep, and spend time on research.

In addition, the non-judgmental quality is particularly attractive to minority communities experiencing marginalization, stress, and loneliness. One of the biggest obstacles to effective therapy is that patients are often reluctant to tell the truth. A study of 500 therapy-goers revealed that over 90% admitted to lying at least once [6]. The topics they concealed? Suicidal thoughts, substance use, and their sheer disappointment with the therapist's suggestions. Research has shown that people may be more comfortable disclosing personal information to a chatbot than to a human therapist, a phenomenon attributed to a heightened sense of anonymity and a diminished fear of judgment [7]. This result is backed up by a recent paper published in Nature Medicine: AI therapists, by improving online triage and screening forms, effectively increased access and exposure to therapy for minorities, who are often reluctant to seek treatment for both external and internal reasons. The engaging and anonymous nature of AI increased referrals for NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds [8].

A further application waiting to scale up is therapist training. As one can easily imagine, applying AI to the analysis and study of numerous therapy sessions could be a great help. Quantifying and qualifying what works in therapy by analyzing session transcripts is undeniably challenging work. Collecting, annotating, and studying these texts is essentially what it takes to build a large natural language processing model. And that is exactly where AI could contribute: first, understanding the human language in the therapy-session input, then learning from effective conversation data and improving its responses over time.
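As a rough illustration of that pipeline (and not any particular company's system), the sketch below trains a simple text classifier on a handful of invented, annotated therapy utterances and then scores a new utterance for how "helpful" it looks. It assumes scikit-learn is installed; the utterances and labels are made up for the example.

```python
# Minimal sketch: annotate therapy-session utterances, train a text
# classifier, and score new utterances. All data here is fictional.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (utterance, label) pairs; 1 = judged helpful by annotators, 0 = not
annotated = [
    ("What goes through your mind when that happens?", 1),
    ("Let's look at the evidence for that thought.", 1),
    ("You should just stop worrying about it.", 0),
    ("That sounds really difficult. What did you do next?", 1),
    ("We're out of time, let's move on.", 0),
]
texts, labels = zip(*annotated)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Probability that a new utterance belongs to the "helpful" class
print(model.predict_proba(["What would you say to a friend in this situation?"])[0][1])
```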

Researchers are also finding natural language processing models invaluable. A team at Drexel University in Philadelphia demonstrated that GPT-3 can predict dementia by analyzing speech patterns. For therapist training, a similar approach may offer better insights in far less time by analyzing therapy-session transcripts to identify patterns [9]. Experienced therapists can maintain high standards of care, while trainees can use the AI tool to enhance their skills and improve their practice.

How Can AI Better Contribute to Therapy?

As one can intuitively deduce, using AI in therapy is rife with issues that make applying the technology in practice less than straightforward. Below are some of the most plausible opportunities to seize and the risks involved:

Improve Data Privacy & Prevent Data Misuse

At the core of making AI work is information. In healthcare, one of the most safeguarded aspects is patient privacy, even more so in therapy or psychiatry. In the application of AI, the issue can be divided into two parts: the ability to uphold data confidentiality, and the ability to respect it.

Upholding data confidentiality involves the anonymization and safekeeping of patient information, regardless of the use case, whether the AI is applied to administrative tasks or used more integrally in practice, such as diagnosis. But companies and systems can be breached. A counterargument would be that companies, and indeed public institutions, have always faced the security risk of breaches. How does AI differ? AI complicates matters through its need to harness big data and through the algorithmic programs built to capitalize on that data. To ground this, compare a hypothetical security breach of a single local hospital with a breach of a private company whose AI-powered algorithm is trained on a dataset spanning every hospital in that city. Scale matters.

As for respecting data confidentiality, AI can also be deployed to do the opposite: uncover people's medical information, which ought to be protected. Indeed, a number of recent studies underline how new computational strategies can be leveraged to identify individuals in health repositories managed by either public or private institutions, even when the data has been scrubbed and anonymized [10]. A 2019 study successfully used a "linkage attack framework", that is, an algorithm aimed at re-identifying anonymous health information by linking online health data to real-world people, demonstrating "the vulnerability of existing online health data" [11].
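To illustrate the basic idea of a linkage attack (this is a toy sketch with fictional records, not the framework used in the study), the snippet below joins an "anonymized" health table to a public table on shared quasi-identifiers such as birth date, postal code, and sex, using pandas:

```python
# Toy re-identification sketch: names are absent from the health table,
# yet quasi-identifiers link each record back to a person. Fictional data.
import pandas as pd

anonymized_health = pd.DataFrame({
    "birth_date": ["1985-03-12", "1990-07-01"],
    "zip_code":   ["75116", "75005"],
    "sex":        ["F", "M"],
    "diagnosis":  ["generalized anxiety", "major depression"],
})

public_records = pd.DataFrame({
    "name":       ["A. Martin", "B. Dupont"],
    "birth_date": ["1985-03-12", "1990-07-01"],
    "zip_code":   ["75116", "75005"],
    "sex":        ["F", "M"],
})

# The "linkage attack": join the two tables on the shared quasi-identifiers.
reidentified = anonymized_health.merge(public_records, on=["birth_date", "zip_code", "sex"])
print(reidentified[["name", "diagnosis"]])
```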

A mix of these two issues arises when companies engage in data sharing: a misuse of data for commercial purposes without the consent or knowledge of the people who own the data. For example, DeepMind partnered with the Royal Free London NHS Foundation Trust in 2016, and later Google took over control of DeepMind's app, essentially transferring access to and control over stored patient data from the UK to the US. Where the data gathered comes from patient sessions with therapists and psychiatrists, the information could be consequential well beyond the breach of privacy. Think of a CEO accidentally divulging proprietary information in a session that should, by the bounds of medicine, be safe; that data is at risk, even anonymized.

Optimize Diagnostic Accuracy 

Given the risks around data privacy and misuse, AI applications might be better utilized for administrative tasks until efficient regulation and precautions are in place. However, such prudence is rarely the reality. In the UK, the National Institute for Health and Care Excellence (NICE) fast-tracked the approval of nine mental health applications which are to be offered with the "NHS Talk Therapy primary care counseling services to treat anxiety and depression" [12].

Since prudence is rarely the reality, let us consider an important use case for AI in this field: diagnostics, the detection of an illness or disease. In medical diagnostics more generally, applications range from early disease detection to better understanding of disease progression, optimization of treatments, and even the uncovering of possible new treatments. Such uses capitalize on the ability of AI to quickly recognize patterns and analyze large datasets.

But a key characteristic of mental health is its subjective and varied nature. Mental health clinical data is quite often written up as qualitative patient statements, so misdiagnosis is a common issue. For example, a systematic review of misdiagnosis of obsessive-compulsive disorder (OCD) revealed elevated rates of OCD misdiagnosis across different professional communities and that "an average of 50.5% of the physicians misdiagnosed the OCD sketch of symptoms" [13]. Patients were either diagnosed with a different condition when they did in fact have OCD, or incorrectly diagnosed with OCD. Such mistakes lead to incorrect treatment that can worsen the patient's health.

Yet despite the more nuanced nature of mental health, research shows that AI might also help in diagnostics. A combination of analytical techniques, such as modern neural-network algorithms and cross-validation procedures, powered by machine learning could reduce misdiagnoses. Indeed, increasing evidence suggests that the data crunching and analysis powered by AI could support precision psychiatry [14]. Since each person has a "unique bio-psycho-social profile", the technology can encode an individual's biological, psychological, and social characteristics and cross-check them against known mental health conditions. Moreover, the trial-and-error nature of prescribing medication might be improved, or even overcome, with the use of AI [15].
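As a purely illustrative sketch of the mechanics just described (not a clinical model), the snippet below trains a small neural-network classifier on synthetic "bio-psycho-social" features and evaluates it with 5-fold cross-validation using scikit-learn. The features and labels are randomly generated, so the exercise only shows how such a pipeline fits together.

```python
# Illustrative only: synthetic features stand in for biomarkers,
# questionnaire scores, and social factors; labels are simulated.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_patients, n_features = 200, 12
X = rng.normal(size=(n_patients, n_features))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validation guards against overfitting
print(f"mean accuracy: {scores.mean():.2f} (+/- {scores.std():.2f})")
```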

Therefore, since mental illnesses vary so widely in their make-up, the use of objective biomarkers tied more concretely to human biology may allow for a more objective definition of illnesses. In other words, AI could improve the ability of medical professionals to create pre-diagnosis screening tools, and thus better detect an individual's likelihood of developing a mental illness [16].

Given this, AI could be an improvement upon the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), the tool used by practitioners to diagnose and assess psychiatric disorders. The DSM-5 is meant to introduce a standard, but it is not perfect, as demonstrated by instances of false positives in which the tool has been criticized for expanding the criteria for disorders [17].

For example, to validate a reliable diagnosis, there needs to be consensus on how the criteria are applied: if you give 10 people with depression the DSM-5 questionnaire, all should be diagnosed with depression. But in practice, the criteria appear to be arbitrary. If the DSM-5 lists 15 symptoms of depression, how many must a patient have to be diagnosed as depressed? Moreover, what if 7 of those 15 symptoms are common to another disease? Is that person depressed, and with what type of depression, or do they actually have chronic fatigue syndrome?
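The ambiguity is easy to reproduce with a toy symptom checklist. In the sketch below, the symptom sets and thresholds are invented (they are not the actual DSM-5 criteria); the point is simply that the same reported symptoms can satisfy two different checklists at once.

```python
# Toy symptom-counting example; lists and cutoffs are invented.
depression_symptoms = {
    "low mood", "fatigue", "poor concentration", "sleep disturbance",
    "loss of interest", "appetite change", "guilt", "slowed movement",
}
chronic_fatigue_symptoms = {
    "fatigue", "poor concentration", "sleep disturbance",
    "post-exertional malaise", "muscle pain",
}

patient_reported = {"fatigue", "poor concentration", "sleep disturbance", "low mood", "guilt"}

def meets_threshold(reported, criteria, minimum):
    """Count how many listed symptoms the patient reports and apply a cutoff."""
    matches = reported & criteria
    return len(matches) >= minimum, matches

print(meets_threshold(patient_reported, depression_symptoms, 5))       # threshold met
print(meets_threshold(patient_reported, chronic_fatigue_symptoms, 3))  # also met
# The same checklist answers can satisfy two different diagnoses,
# which is exactly the ambiguity described above.
```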

To conclude, for diagnostics in mental health the promise of AI is not only present but already a demonstrably worthwhile endeavor, precisely because mental health diagnostics are currently in such a poor state. Indeed, leveraging AI for diagnostics in mental health seems to be curbed only, if we can use that tone, by the risks around data privacy and misuse.

How Can We Better Envision AI Therapy?

To envision AI as our future therapist, the last crucial step is to recreate good interpersonal dynamics and build more trust in AI. The nuanced approach comes from both ends: it entails fostering responsible development practices and understanding the evolving dynamics of human-AI interaction and its broader societal ramifications.

The key is to deliver the right words at the right time, at the right pace. To replace real therapists, AI needs to develop the ability to capture the different types of utterance and exchange that are effective during a session. Pioneering researchers at Ieso, a leading online therapy provider, have already started to identify the parts of a therapy conversation that are effective at treating different disorders, based on natural language processing [18].

However, what constitutes effective therapy has long remained one of the field's unsolved puzzles. Why some therapists deliver better results may come down to the experience and gut instinct of qualified practitioners. Machine learning could facilitate the analysis of huge volumes of therapy-session transcripts, but the extent to which AI can read and pick up on the emotions, tones, and pauses of a conversation will shape the quality of that analysis. In fact, good therapy involves compassion, active listening, and gradual guidance that enables patients to spot the problem and develop enough courage and willingness to find solutions.

Moreover, the key elements of CBT, challenging negative thought patterns and guiding patients to break the vicious cycle, demand not only expert knowledge of psychology but also the finesse of critical thinking and unconventional perspectives on thoughts, problematic emotions, and behaviors. Could AI develop this ability from exposure to large datasets, and thus know when and how to gracefully redirect a conversation and challenge a patient's illusory assumptions? The question will remain open, since it is the equivalent of pondering whether AI can develop self-awareness or self-consciousness. These remain the challenges for an AI chatbot aspiring to serve as a wise hand-holder who accompanies patients to their own realizations.

Image source: The Verge

Last but not least, the way people perceive AI matters. Normally and essentially, therapists and clients need to build a strong rapport. Based on quantitative and qualitative studies of the therapy relationship, the American Psychological Association points out that alliance, goal consensus, the real relationship, and treatment credibility are key elements of effective therapy. In some circumstances, the relationship between therapist and client serves as a model or reference for how clients interact with the world and the people around them. Building this therapeutic relationship involves perceiving feelings, communicating appropriately, setting boundaries, and more.

Fostering this nurturing therapy relationship takes effort from both therapists and patients. Indeed, AI could be trained to provide structured interactions, Socratic questioning, and prescriptive actions. It's as if AI has become a modern-day Socrates, offering epiphanies left and right. However, as we contemplate this, do people view AI as a reliable agent or even a good friend? The answer can greatly affect the dynamics of the interaction. Given that false information appears in responses from ChatGPT, AI hallucinations, the incorrect or misleading results that AI models generate, may haunt our minds for a while yet, until humans can really trust AI and deem it reliable. Moreover, a therapist needs to carry a certain level of authority to talk people out of a desperate and nihilistic situation. Delegating this authority to AI introduces an ongoing ethical debate. Can we, should we, trust a synthetic intelligence with such significant influence while human intelligence itself can barely handle the problems it created? The conversation is far from over.

Image source: Copilot AI generation

Who Might AI Therapists Serve?

The answer seems intuitive: everyone goes through something; show me a human being without some level of trauma and I'd wager they might be an alien. To live unscathed is to not live at all. So which subset of the vast diversity of humanity could an AI therapist prove most beneficial for? Humanity can be divided in a variety of ways, from age bracket to income level to geographic location.

To distill the most pertinent user base, let us consider some intuitive success metrics that an integration of AI into therapy would entail: the revenue of AI therapist providers, the number of people with mental health issues who are helped, and the number of healthcare professionals whose jobs are made easier.

Chart: AI market size worldwide, 2020-2030

As it stands, the market size for AI is projected to keep growing through the end of this decade. Statistics from OpenAI reveal that their user base has a gender distribution of 43.89% female and 56.11% male users, and that the majority of users are in the 25-34 age group, with usage traffic led by the United States, followed by India, Colombia, the Philippines, and Japan [19]. Not only is usage trending upward, but a particular subset of the population uses it more.

Now, such usage may or may not be direct, in that people may use a service that has adopted AI somewhere in its value chain. But the statistics from OpenAI indicate that a certain demographic has an interest in using it directly: the 25-34 age group. In this case, the central group an AI therapist would interest might best be narrowed down to students, young professionals, the working class, and those in their early 20s to early 40s. These groups are distinguished by their tech-savviness, the likelihood that their work involves or is affected by AI, and the degree to which cheaper and easier access to therapy would be a plus. Indeed, senior groups or high earners would likely not be early adopters. In 2019, the US Department of Health and Human Services conducted a National Health Interview Survey [20] which revealed that just "11.6% of those aged between 18-44 completed therapy or counseling, but 9.1% of people 45-64 and just 5.7% of people 65 and older received therapy." It also showed that the lower someone's socioeconomic status, the less likely they are to seek or receive therapy [21].

In terms of segmentation by psychological need, let us consider three intuitive aims: leverage the current capabilities of AI, improve those capabilities over time, and make the work of healthcare better overall. To respect these three aims, it is imperative that AI be a companion to therapy and not become the therapist. Or at least, not yet, if ever at all.

Indeed, the more severe the psychological need or the therapy required, the stronger the case that the scenario is best left to medical professionals who have the work experience, so as not to treat such patients as a learning curve for the algorithm. The counterargument might be that such difficult patient cases are precisely where AI could thrive. After all, as mentioned earlier, accurate data markers coupled with machine learning could capitalize on the "unique bio-psycho-social profile" of humans [22].

Risk is present in either case: for the patient, for the company the AI belongs to, and for the healthcare professionals. All these actors may also have conflicting interests that further complicate matters: profit, the overarching motivation of improving mental healthcare, easing the work of medical professionals, and widening access to care. Profit, for example, could easily be prioritized by companies at the expense of patients and medical practitioners. In essence, there is much to contend with before an AI chatbot could someday, possibly, put your therapist to shame.

AI Therapist: Where Do We Stand & How Far Could We Go?

So is AI going to be our new therapist? Unlikely. But its potential in the field is indisputable. Currently, we argue that it could be beneficial for preliminary diagnoses, medical training, patient reminders, and administrative tasks, to name but a few. Even just these applications could significantly reduce a therapist's workload and make the patient experience more satisfying, albeit curbed by the caveat of safeguarding privacy, data, and ethical usage.

However, in a future where AI might actually become a viable alternative to human therapists, the question becomes: should that be a goal to pursue? What is intimacy, does it matter, and is it what we need to replicate? What is lost, if anything, by removing the human element from this almost sacred relationship between us and our therapists? In a profession so reliant on human connection, could a chatbot even account for the complexity of human culture?

Over the course of this article, we identified trust, strong interpersonal dynamics, psychological safety, and active listening as the most important qualities of a good therapist. But as anyone who has spoken to a therapist, or, less officially, had a therapy-like conversation with a good friend, will understand, these qualities are not exactly exhaustive.

Thus, if at some point we face the decision of opting for a chatbot to help us sort out our internal turmoil, the benefits must not merely be comparable but, in some sense or another, far greater. The ambition must simply be this: the benefits ought to outweigh the risks for everyone involved, from the patient to the healthcare system, and even the company providing the service.

References

[1] Character.ai: Young people turning to AI therapist bots, BBC.com Technology, April 2024. Available at:
https://www.bbc.com/news/technology-67872693

[2] Choi, I., Milne, D.N., Deady, M., et al., 2021. Mobile app-based chatbot to deliver cognitive behavioral therapy and psychoeducation for adults with attention deficit: A development and feasibility/usability study. International Journal of Medical Informatics. Published online June 2021. doi:10.1016/j.ijmedinf.2021.104431

[3] Clinical Practice Guideline for the Treatment of Posttraumatic Stress Disorder, 2017. Available at: https://www.apa.org/ptsd-guideline/patients-and-families/cognitive-behavioral, What is Cognitive Behavioral Therapy?

[4]  4 ways artificial intelligence is improving mental health therapy, December 2021

[5] Melo A, Silva I, Lopes J. ChatGPT: A Pilot Study on a Promising Tool for Mental Health Support in Psychiatric Inpatient Care, International J of Psychiatric Trainees, February 2024. Published online February 9, 2024. doi:10.55922/001c.92367

[6] He checks in on me more than my friends and family: can AI therapists do better than the real thing? Guardian, May 2024. Available at: https://www.theguardian.com/lifeandstyle/2024/mar/02/can-ai-chatbot-therapists-do-better-than-the-real-thing

[7]  Melo, A., Silva, I., and Lopes, J., 2024. ChatGPT: A pilot study on a promising tool for mental health support in psychiatric inpatient care. International Journal of Psychiatric Trainees. Published online February 9, 2024. doi:10.55922/001c.92367

[8] Smith, A., 2024. ‘He checks in on me more than my friends and family’: can AI therapists do better than the real thing? The Guardian. Published online March 2, 2024. Available at: https://www.theguardian.com/lifeandstyle/2024/mar/02/can-ai-chatbot-therapists-do-better-than-the-real-thing

[9] Zhu, Y., and Wang, L., 2022. Predicting dementia from spontaneous speech using large language models. PLOS Digital Health. Published online December 2022. doi:10.1371/journal.pdig.0000168

[10] Gymrek, M., McGuire, A.L., Golan, D., Halperin, E., and Erlich, Y., 2013. Identifying personal genomes by surname inference. Science. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8442400

[11] Price, W.N. and Cohen, I.G., 2021. Privacy and artificial intelligence: challenges for protecting health information in a new era. PubMed Central. Published online September 2021. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8442400

[12] National Institute for Health and Care Excellence (NICE), 2023. Nine treatment options to be made available for adults with depression or an anxiety disorder. Published online May 2023. Available at: https://www.nice.org.uk/news/articles/nine-treatment-options-to-be-made-available-for-adults-with-depression-or-an-anxiety-disorder

[13] Giron, P., and Storch, E.A., 2021. A systematic review of misdiagnosis in those with obsessive-compulsive disorder. Journal of Affective Disorders Reports. Published online December 2021. doi:10.1016/j.jadr.2021.100157

[14] Chekroud, A.M., 2018. Machine learning for precision psychiatry: Opportunities and challenges. Psychiatry Clinics of North America, 41(3), pp.505-518. Published online March 2018. doi:10.1016/j.psc.2018.07.005

[15] Psychology Today, 2024. Precision psychiatry. Available at: https://www.psychologytoday.com/sg/basics/precision-psychiatry

[16] Fried, E.I., and Cramer, A.O.J., 2019. Machine learning in mental health: A scoping review of methods and applications. Psychological Medicine. Published online February 2019. Available at: https://www.cambridge.org/core/journals/psychological-medicine/article/abs/machine-learning-in-mental-health-a-scoping-review-of-methods-and-applications/0B70B1C827B3A4604C1C01026049F7D9

[17] Frances, A., 2015. DSM-5, psychiatric epidemiology, and the false positives problem. PubMed Central. Published online June 2015. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6998664/

[18] Woods, D., 2021. The therapists using AI to make therapy better. MIT Technology Review. Published online December 6, 2021. Available at: https://www.technologyreview.com/2021/12/06/1041345/ai-nlp-mental-health-better-therapists-psychology-cbt/

[19] OpenAI Statistics 2024 by Demographics, Products, Revenue and Growth. Enterprise Apps Today, May 2024. Available at: https://www.enterpriseappstoday.com/stats/openai-statistics.html

[20] National Health Interview Survey, NCHS, 2019. Available at: https://www.cdc.gov/nchs/data/databriefs/db380-H.pdf

[21] Who Goes To Therapy? Statistics and Research About Who Is Seeking Support. BetterHelp, September 2024. Available at: http://bit.ly/4fepHGT

[22] Machine learning in mental health: A scoping review of methods and applications. Psychological Medicine, 2019. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7274446/
