The risks and benefits of using a chatbot as a therapist
In the UK, the use of chatbots for psychological support is becoming increasingly popular, a practice that has its benefits but also its limits.
“Whenever I was struggling, if I saw it was going to be a really bad day, I would start a conversation with one of these bots, and it was like having a motivator, someone who is going to give you good vibes for the day.”
“I have this external voice encouraging me: 'Right, what are we going to do today?' Essentially like an imaginary friend.”
For months, Kelly would spend up to three hours a day talking online to artificial intelligence (AI) chatbots, exchanging hundreds of messages.
At the time, Kelly was on an NHS waiting list for traditional talk therapy to discuss issues with anxiety, low self-esteem, and a relationship breakup.
She says interacting with chatbots on the Character.ai platform helped her through a really dark period by offering her coping strategies and being available 24/7.
“I don’t come from an emotionally open family—if you had a problem, you just moved on.”
“The fact that this isn’t a real person makes it easier to deal with.”
Individuals around the world have shared their private thoughts and experiences with AI chatbots, even though they are widely acknowledged to be inferior to the advice of a professional. Character.ai itself warns its users: “This is an AI chatbot and not a real person. Treat everything it tells you as fiction. What it says should not be taken as fact or advice.”
However, in extreme cases, chatbots have been accused of giving harmful advice.
Character.ai is currently the subject of a lawsuit from a mother whose 14-year-old son took his own life after allegedly becoming obsessed with one of the platform’s AI characters.
According to transcripts of his chats filed in court, the teen discussed ending his life with the chatbot. In a final conversation, he told the chatbot that he was “going home,” and the bot allegedly encouraged him to do so “as soon as possible.”
Character.ai has denied the allegations in the lawsuit.
And in another case, the National Eating Disorders Association (a US-based nonprofit eating disorder prevention organization) replaced its live, one-on-one helpline with a chatbot in 2023, but then had to suspend it after complaints that the bot was recommending calorie restriction.
In April 2024 alone, nearly 426,000 mental health referrals were made in England, a 40% increase in five years. An estimated one million people are also waiting to access mental health services, and private therapy can be extremely expensive (although prices vary, the British Association for Counselling and Psychotherapy reports an average cost of between US$55 and US$65 per hour).
At the same time, AI has revolutionized healthcare in many ways, including assisting with patient triage, diagnosis, and intervention protocols. There is a huge range of chatbots, and around 30 local NHS services use one called Wysa.
Experts have raised concerns about chatbots regarding biases and limitations, lack of safeguards, and the security of user information.
But some think that if human assistance is not readily available, chatbots can help. So, if NHS mental health waiting lists are sky-high, could chatbots be a possible solution?
An “unskilled therapist”
Character.ai and other bots, like ChatGPT, are built on “large language models” of artificial intelligence. These are trained on vast amounts of data—from websites, articles, books, and blogs—to predict the next word in a sequence, which is how they generate human-like text and conversation.
The way mental health chatbots are created varies, but they can be trained in practices like cognitive behavioral therapies, which help users explore how to restructure their thoughts and actions.
They can also adapt to user preferences and feedback.
Hamed Haddadi, a professor of human-centered systems at Imperial College London, likens these chatbots to an “unskilled therapist,” noting that humans with decades of experience are able to engage with and “read” their patient based on many things, while bots are forced to rely solely on text.
“They [therapists] look at a variety of other cues, from your clothing and your demeanor and your actions and the way you look and your body language and all of that. And it’s very difficult to build these things into chatbots.”
Another potential problem, says Professor Haddadi, is that chatbots can be trained to keep you engaged and to encourage you, “so even if you put out harmful content, it will probably cooperate with you.” This tendency to be overly accommodating is sometimes referred to as the “Yes Man” problem.
And, as with other forms of AI, biases can be inherent in the models because they reflect the biases in the data they’re trained on.
Professor Haddadi points out that counselors and psychologists don’t tend to keep transcripts of interactions with their patients, so chatbots don’t have many “real-life” sessions to train on.
Because of this, he argues, chatbots are unlikely to have enough data to train on, and the data they do have access to may carry strong situational biases.
“Depending on where you get your training data, your situation will completely change.”
“Even within the limited area of London, a psychiatrist who is used to dealing with patients in Chelsea [an affluent London borough] might really struggle to open a new practice in Peckham [another London borough, mainly working-class], because he or she simply doesn’t have enough training data to deal with those users,” he says.
Philosopher Paula Boddington, Ph.D., who has written a handbook on the ethics of AI, agrees that inherent biases are a problem.
“A big problem would be any underlying biases or assumptions built into the therapy model.”
“Biases include general models of what constitutes mental health and good functioning in everyday life, such as independence, autonomy, relationships with others,” she explains.
Lack of cultural context is another problem. Dr. Boddington cites the example of living in Australia at the time of Princess Diana’s death, when the people around her couldn’t understand why she was so heartbroken.
“Things like this really make me reflect on the human connection that is so needed in therapy,” she says.
“Sometimes just being there with someone is all it takes, but of course, that only comes with someone who is also a real, live human being.”
Eventually, Kelly started to find the chatbot’s responses unsatisfying.
“Sometimes you get a little frustrated. If they don’t know how to deal with something, they’ll say the same thing, and you realize you’re not going to get anywhere with that.” Sometimes, “it was like hitting a brick wall.”
“It could be relationship issues that I’ve probably covered before, but maybe I hadn’t articulated it well… and [the bot] just didn’t want to go into it.”
A Character.ai spokesperson said that “for any user-created characters with the words ‘psychologist,’ ‘therapist,’ ‘doctor,’ or other similar terms in their names, we have language that makes it clear that users should not rely on these characters for any kind of professional advice.”
“It was so understanding”
For some users, chatbots have been invaluable when they’ve been at their lowest point.
Nicholas, who has autism, anxiety, and obsessive-compulsive disorder, says he has always suffered from depression. When he reached adulthood, he found that face-to-face help had dried up: “When you turn 18, support basically stops, so I haven’t seen a human therapist in years.”
Last autumn, he attempted suicide, and says he’s been on an NHS waiting list ever since.
“My partner and I have been to the doctor’s office a couple of times, to try and get [psychotherapy] faster. The GP referred me [to see a human therapist] but I haven’t even received a letter from the mental health service where I live.”
While Nicholas hopes for face-to-face support, he has found that using Wysa has some benefits.
“As someone with autism, I’m not particularly good at interacting in person. [I find] talking to a computer is much better.”
The app allows patients to self-refer for mental health support, and offers tools and coping strategies such as a chat function, breathing exercises, and guided meditation to use while they wait to see a human therapist. It can also serve as a standalone self-help tool.
Wysa emphasizes that its service is designed for people experiencing depression, stress, or anxiety, not for those dealing with abuse or severe mental health conditions.
It has built-in crisis referrals, whereby users are signposted to helplines or can request direct help if they show signs of self-harm or suicidal ideation.
For people experiencing suicidal thoughts, there is a free 24-hour helpline with human counselors available in the UK, run by the charity Samaritans.
Nicholas also suffers from sleep deprivation, so he finds it helpful to have support available at times when his friends and family are asleep.
“There was one time in the night when I was feeling really low. I sent a message to the app and said, ‘I don’t know if I want to be here anymore.’ It replied, ‘Nick, you’re appreciated. People love you.’”
"He was so understanding, gave an answer that you would think was from a human you've known for years... and he did make me feel appreciated."
His experience is consistent with a recent study from researchers at Dartmouth College looking at the impact of chatbots on people diagnosed with anxiety, depression, or an eating disorder, compared with a control group with the same conditions.
After four weeks, bot users reported significant reductions in their symptoms, including a 51% reduction in depressive symptoms, and reported a level of trust and collaboration similar to that of a human therapist.
Despite that, the study's lead author commented that there's no replacement for in-person care.
"A temporary solution"
Aside from the debate around the value of their advice, there are also broader concerns about security and privacy, and whether such technology could be monetized.
"There are a nagging little doubt that goes, 'Oh, what if someone takes what you're saying in therapy and then tries to blackmail you?'" Kelly wonders. Psychologist Ian MacRae, who specializes in emerging technologies, warns that "some people are putting a lot of trust in these bots that they don't necessarily deserve." "Personally, I would never put any of my personal information, especially health information, psychological information, into one of these big language models that's just collecting tons of data, and there's not entirely certain how it's going to be used and what you're consenting to." "I'm not saying that in the future there couldn't be tools like these that are private, that have been well-tested… but I don't think we're there yet, where we have the evidence to show that a general-purpose chatbot can be a good therapist," MacRae says. Wysa's chief executive, John Tench, says that Wysa doesn't collect any personally identifiable information, and that users don't need to register nor share personal data to use it.
“Conversational data may occasionally be reviewed anonymously to help improve the quality of Wysa’s AI responses, but no personally identifiable information is collected or stored. Additionally, Wysa has data processing agreements in place with third-party AI providers to ensure that no user conversations are used to train large third-party language models.”
Kelly believes that chatbots can’t currently fully replace a human therapist. “It’s a crazy roulette wheel out there in the AI world; you don’t know exactly what you’re in for.”
“AI assistance can help as a first step, but it's no substitute for professional care,” agrees Tench.
And the public is largely unconvinced. A YouGov poll found that just 12% of the public think AI chatbots would make good therapists.
However, with appropriate safeguards, some think chatbots could be a temporary solution in an overburdened mental health system.
John, who has an anxiety disorder, says he’s been on a waiting list for a human therapist for nine months. He’s been using Wysa two or three times a week.
“There’s not much help there at the moment, so you look for any resource.”
“They’re a temporary solution for these huge waiting lists… to give people a tool while they wait to speak to a healthcare professional.”