AI therapy may help with mental health, but innovation should never outpace ethics

Ben Bond

Mental health services around the world are stretched thinner than ever. Long wait times, barriers to accessing care and rising rates of depression and anxiety have made it harder for people to get timely help.

As a result, governments and healthcare providers are looking for new ways to address this problem. One emerging solution is the use of AI chatbots for mental health care.

A recent study explored whether a new type of AI chatbot, named Therabot, could treat people with mental illness effectively. The findings were promising: not only did participants with clinically significant symptoms of depression and anxiety benefit, but those at high risk of eating disorders also showed improvement. While early, this study may represent a pivotal moment in the integration of AI into mental health care.

AI mental health chatbots are not new – tools like Woebot and Wysa have already been released to the public and studied for years. These platforms follow predefined rules: based on a user’s input, they select a scripted, pre-approved response.

What makes Therabot different is that it uses generative AI – a technique in which a program learns from existing data to create new content in response to a prompt. Consequently, Therabot can produce novel responses to a user’s input, much like other popular chatbots such as ChatGPT, allowing for a more dynamic and personalised interaction.
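To make the distinction concrete, here is a minimal, purely illustrative sketch in Python of the two approaches. The scripted replies, the keyword matching and the `model.generate` call are all hypothetical stand-ins – they do not reflect how Therabot, Woebot or Wysa are actually built – but they show why a rule-based tool always returns a pre-approved message while a generative one composes something new each time.

```python
# Illustrative contrast only: rule-based vs generative chatbot replies.
# All names and replies here are invented for the example.

SCRIPTED_REPLIES = {
    "anxious": "It sounds like you're feeling anxious. Let's try a slow breathing exercise together.",
    "sad": "I'm sorry you're feeling low. Would you like to note what triggered this feeling?",
}

def rule_based_reply(user_message: str) -> str:
    """Return a predefined, clinician-approved response matched by keyword."""
    for keyword, reply in SCRIPTED_REPLIES.items():
        if keyword in user_message.lower():
            return reply
    return "Thank you for sharing. Can you tell me a bit more about how you're feeling?"

def generative_reply(user_message: str, model) -> str:
    """Compose a novel response using a language model (hypothetical `model.generate`)."""
    prompt = (
        "You are a supportive mental health companion. "
        f"Respond empathetically to: {user_message}"
    )
    # Unlike the scripted replies above, this output varies with every conversation.
    return model.generate(prompt)
```

The trade-off this toy example hints at is the one the article goes on to discuss: the scripted path is predictable and easy to vet, while the generative path is more flexible but harder to constrain.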

This isn’t the first time generative AI has been examined in a mental health setting. In 2024, researchers in Portugal conducted a study where ChatGPT was offered as an additional component of treatment for psychiatric inpatients.

The research findings showed that just three to six sessions with ChatGPT led to a significantly greater improvement in quality of life than standard therapy, medication and other supportive treatments alone.

Together, these studies suggest that both general and specialised generative AI chatbots hold real potential for use in psychiatric care. But there are some serious limitations to keep in mind. For example, the ChatGPT study involved only 12 participants – far too few to draw firm conclusions.

In the Therabot study, participants were recruited through a Meta Ads campaign, likely skewing the sample toward tech-savvy people who may already be open to using AI. This could have inflated the chatbot’s effectiveness and engagement levels.

Ethics and exclusion

Beyond methodological concerns, there are critical safety and ethical issues to address. One of the most pressing is whether generative AI could worsen symptoms in people with severe mental illnesses, particularly psychosis.

A 2023 article warned that generative AI’s lifelike responses, combined with most people’s limited understanding of how these systems work, might feed into delusional thinking. Perhaps for this reason, both the Therabot and ChatGPT studies excluded participants with psychotic symptoms.

But excluding these people also raises questions of equity. People with severe mental illness often face cognitive challenges – such as disorganised thinking or poor attention – that might make it difficult to engage with digital tools.

Ironically, these are the people who may benefit the most from accessible, innovative interventions. If generative AI tools are only suitable for people with strong communication skills and high digital literacy, then their usefulness in clinical populations may be limited.

There’s also the possibility of AI “hallucinations” – a known flaw that occurs when a chatbot confidently makes things up, such as inventing a source, quoting a nonexistent study, or giving an incorrect explanation. In the context of mental health, AI hallucinations aren’t just inconvenient; they can be dangerous.

Imagine a chatbot misinterpreting a prompt and validating someone’s plan to self-harm, or offering advice that unintentionally reinforces harmful behaviour. While the studies on Therabot and ChatGPT included safeguards – such as clinical oversight and professional input during development – many commercial AI mental health tools do not offer the same protections.

That’s what makes these early findings both exciting and cautionary. Yes, AI chatbots might offer a low-cost way to support more people at once, but only if we fully address their limitations.

Effective implementation will require more robust research with larger and more diverse populations, greater transparency about how models are trained and constant human oversight to ensure safety. Regulators must also step in to guide the ethical use of AI in clinical settings.

With careful, patient-centred research and strong guardrails in place, generative AI could become a valuable ally in addressing the global mental health crisis – but only if we move forward responsibly.

Ben Bond

Ben Bond is a PhD candidate in Digital Psychiatry at RCSI University of Medicine and Health Sciences. Ben’s research focuses on leveraging digital methods to better understand mental illness, with a particular emphasis on developing screening tools that can facilitate earlier intervention.


