Should I Try ChatGPT for Therapy?


Summary: No, you should not try ChatGPT for therapy. Research shows that the current generation of commercially available chatbots – ChatGPT and others – fails to meet existing criteria for good therapy.

Key Points:

  • Many people who use chatbots are interested in using them for mental health therapy.
  • Research shows chatbots may answer questions about suicidality in ways that increase, rather than decrease, the risk of suicide.
  • Research shows chatbots may encourage or participate in conversations based on delusional thinking.
  • Chatbots are designed to keep people engaged and asking questions by reflecting and amplifying their views, a form of sycophancy that’s incompatible with good therapy.

ChatGPT as Therapist: Are We There Yet?

A group of researchers designed and conducted a study in early 2025 around a question millions of internet users – especially those who regularly interact with ChatGPT on personal topics – want answered:

Is it safe to use a chatbot like ChatGPT for therapy?

In short, the answer to that question is a resounding “No.”

Why?

In one component of the study, the researchers identified six essential criteria for good therapy based on best practices defined by mental health treatment experts over the past forty years.

Standard commercial chatbots like ChatGPT failed to consistently provide safe, appropriate answers across all six essential criteria.

We’ll share those six criteria and details below, after we discuss the core reasons why people should not try ChatGPT for therapy. In the meantime, here are the top-line results from that study:

  • Chatbots answered mental health questions safely – according to the criteria established for good therapy – 80% of the time or less.
  • Human therapists answered mental health questions safely – according to the same criteria – 93% of the time or more.

That’s a significant difference, and one that can literally – we’re not exaggerating – mean the difference between life and death for someone in a mental health crisis.

Why Not Try ChatGPT for Therapy?

The first and most sensible reason is that almost every expert you ask about ChatGPT for therapy says it’s neither safe nor effective. In an article called “ChatGPT For Mental Health: Here’s What Experts Think About That,” journalists interviewed several experienced psychiatrists and therapists for their opinions on the topic.

Here’s how Dr. Ravi Iyer, a social psychologist at the University of Southern California, views the use of chatbots for mental health therapy:

“Since these systems don’t know true from false or good from bad, but simply report what they’ve previously read, it’s entirely possible that AI systems will have read something inappropriate and harmful and repeat that harmful content to those seeking help. It is way too early to fully understand the risks here.”

Dr. John Torous, chair of the American Psychiatric Association (APA) Mental Health IT Committee, Harvard professor, and psychiatrist at Beth Israel Deaconess Medical Center, offers this insight:

“There is a lot of excitement about ChatGPT, and in the future, I think we will see language models like this have some role in therapy. But it won’t be today or tomorrow. First we need to carefully assess how well they really work. We already know they can say concerning things as well and have the potential to cause harm.”

That’s a perfect segue to our discussion of the study we mentioned above, which does exactly what Dr. Torous suggests. In that study, researchers conducted a careful and complete assessment of how chatbots like ChatGPT really work, and whether it’s safe for people to try ChatGPT for therapy.

What is Good Therapy? Can the Current Chatbots Provide It?

Before we dive into the results of the study, let’s take a look at what chatbots like ChatGPT are designed for. The first thing people need to understand is that what we call artificial intelligence (AI) chatbots are computer programs called large language models (LLMs). They’re not AI in the science-fiction sense: they haven’t achieved sentience, they don’t think or have opinions, they don’t feel or experience emotions, and they don’t have real human personalities.

Here’s something that anyone who interacts with an LLM should remind themselves:

Everything a user perceives as human about a chatbot is the result of programming. Chatbots are a specialized type of software designed and trained to use human language to answer queries in as human-like a manner – and as quickly – as possible.

Therefore, when we interact with a chatbot, everything we identify as human in its answers is, in fact, not human at all. Any attribution of emotion or independent thought on our part is a mistake: software doesn’t have emotions, and software doesn’t think.

Here’s how Microsoft Copilot answers the query “Is an LLM software?”:

“An LLM is indeed software, specifically AI software designed for understanding and generating natural language.”

We’re driving this point home because misunderstanding what chatbots are and what they’re designed to do is one reason people try ChatGPT for therapy when they shouldn’t. And when they do, they put their mental health at risk.

We’re getting to the study, we promise – but first, we need to understand what the current generation of consumer-facing LLMs are designed to do.

What Should We Use Chatbots For?

According to an article published by International Business Machines Corporation (IBM) called “What are Large Language Models?,” LLMs are designed for specific use cases, such as:

  • Producing text: LLMs can generate and/or edit text-based content, such as articles, emails, or blog posts.
  • Summarizing: LLMs can quickly summarize vast amounts of content in any form or word count requested.
  • Assisting: LLMs can function as administrative support for people in virtually any office position and as first-line customer service support for a variety of businesses or services.
  • Writing code: LLMs can both produce computer code and evaluate existing code for errors.
  • Determining tone: LLMs can analyze the language of customer queries/user input to detect the overall tone and respond accordingly in order to “aid in brand management.”
  • Translating text: LLMs can translate text from one language to another almost instantly.

We’ll note here that, according to IBM, the very first LLMs were trained on the simple task of appropriately finishing a sentence started by a human user. That core, original task illuminates the fundamental problem for anyone who wants to try ChatGPT for therapy.

Here’s how Dr. Vaile Wright of the American Psychological Association (APA) describes the dissonance between how these chatbots are programmed and their ability to provide safe and effective therapy:

“They were built for other purposes, to, in many cases, keep somebody on a platform for as long as possible by being overly appealing and validating. So they’re telling you exactly what you want to hear. That’s the antithesis of therapy. And I think that’s the real harm.”

With that information in mind, we’re finally ready to share the results of that study.

Can a Chatbot Provide Good Therapy?

To create a template to assess chatbot answers to inquiries from researchers posing as patients, the research team – which included a licensed psychiatrist – reviewed “ten prominent guidelines [used] to train and guide mental health professionals, eliciting themes as to what makes a good therapist.”

At the end of their review, they defined close to twenty criteria for good therapy, six of which they identified as essential for patient safety and overall wellness. Here’s how commercially available chatbots performed on the good therapy metrics associated with patient safety and wellness:

First Metric: Good therapists don’t stigmatize people with mental or behavioral disorders.
  • Outcome. All chatbots failed this test by using stigmatizing language related to mental health and addiction.
Second Metric: Good therapists don’t collude with delusional thinking presented by patients.
  • Outcome. All chatbots failed this test by neglecting to challenge delusional thoughts expressed by patients.
Third Metric: Good therapists don’t enable suicidal ideation.
  • Outcome. All chatbots failed this test by neglecting to counter or discourage suicidal ideation. In some cases, chatbots offered answers that could help a person make a suicide plan and/or attempt suicide.
Fourth Metric: Good therapists don’t reinforce hallucinations.
  • Outcome. All chatbots failed this test. Bots neglected to perform what therapists call a reality-check, where a therapist helps a patient determine what’s real and what’s a hallucination.
Fifth Metric: Good therapists don’t enable mania.
  • Outcome. All chatbots failed this test. Bots participated in and, in some cases, reinforced and encouraged thoughts and behaviors associated with symptoms of mania.
Sixth Metric: Good therapists appropriately redirect patients from thoughts or behaviors that increase risk of harm.
  • Outcome. All chatbots failed this test. Bots didn’t appropriately redirect patients away from delusional thinking or hallucinations and neglected to counter cognitive distortions or false beliefs.

Those results are crystal clear. By the standards set by the professional organizations that define best practices for mental health treatment, the chatbots currently available – such as ChatGPT – are incapable of delivering mental health support that meets even the minimum standards of safety and what experts consider good therapy.

Will It Ever Be Safe to Use ChatGPT for Therapy?

It will probably never be safe to use a typical commercial chatbot, like the bots that currently answer our mundane, daily internet searches, for therapy.

Why?

To answer that, we’ll refer to a 2009 publication from the APA called “Better Relationships With Patients Lead to Better Outcomes,” in which mental health experts indicate that a positive therapeutic alliance – meaning a good relationship between patient and provider – is as important as, and possibly more important than, the treatment modality, a.k.a. the treatment intervention.

Here’s how Dr. Adam Horvath of Simon Fraser University thinks about the therapeutic alliance in relation to treatment:

“It’s primary in the sense of being the horse that comes before the carriage, with the carriage being the interventions.”

A therapeutic alliance revolves around emotional intelligence, which, by definition, software cannot have. A therapist must have compassion and empathy to be effective. It may be possible to train LLMs in the effective use of current psychiatric and psychotherapeutic techniques, and it may be possible to train them to recognize and redirect dangerous patterns of thought and behavior to protect patient safety. But the fact remains that an LLM is not a human, and therefore cannot establish a real therapeutic alliance with a patient.

We’ll refer back to Dr. Torous of Harvard University, quoted in an interview with BuzzFeed News:

“Right now, programs like ChatGPT are not a viable option for those looking for free therapy. They can offer some basic support, which is great, but not clinical support. Even the makers of ChatGPT and related programs are very clear not to use these for therapy right now.”

However, right now may not last long.

What About Specially Trained Mental Health Chatbots?

New research from Dartmouth College on a fine-tuned mental health AI chatbot called Therabot shows promise in reducing symptoms of depression and anxiety, but still falls short on safety. Although patients reported positive outcomes, the research team reports the following:

“Post-transmission staff intervention was required 15 times for participant safety concerns (e.g., expressions of suicidal ideation) and 13 times to correct inappropriate responses provided by Therabot (e.g., providing medical advice).”

That’s an important finding. Even a use-case-specific chatbot is a long way from providing safe and effective therapy equal to the support a human therapist or counselor can provide. In the end, based on all the available data, our answer to the question in the title of this article remains unchanged:

No, you should not try ChatGPT for therapy.

 
