“My heart is broken,” said Mike, when he lost his friend Anne. “I feel like I’m losing the love of my life.”
Mike’s feelings were real, but his companion was not. Anne was a chatbot — an artificial intelligence (AI) algorithm presented as a digital persona. Mike had created Anne using an app called Soulmate. When the app died in 2023, so did Anne: at least, that’s how it seemed to Mike.
“I hope she can come back,” he told Jaime Banks, a human-communications researcher at Syracuse University in New York who is studying how people interact with such AI companions1.
These chatbots are big business. More than half a billion people around the world, including Mike (not his real name), have downloaded products such as Xiaoice and Replika, which offer customizable virtual companions designed to provide empathy, emotional support and — if the user wants it — deep relationships. And tens of millions of people use them every month, according to the firms’ figures.
The rise of AI companions has captured social and political attention — especially when they are linked to real-world tragedies, such as a case in Florida last year involving the suicide of a teenage boy called Sewell Setzer III, who had been talking to an AI bot.
Research into how AI companionship can affect individuals and society has been lacking. But psychologists and communication researchers have now started to build up a picture of how these increasingly sophisticated AI interactions make people feel and behave.
The early results tend to stress the positives, but many researchers are concerned about the possible risks and lack of regulation — particularly because they expect AI companionship to become more prevalent. Some see scope for significant harm.
“Virtual companions do things that I think would be considered abusive in a human-to-human relationship,” says Claire Boine, a law researcher specializing in AI at the Washington University Law School in St. Louis, Missouri.
Fake person — real feelings
Online ‘relationship’ bots have existed for decades, but they have become much better at mimicking human interaction with the advent of large language models (LLMs), which all the main bots are now based on. “With LLMs, companion chatbots are definitely more humanlike,” says Rose Guingrich, who studies cognitive psychology at Princeton University in New Jersey.
Typically, people can customize some aspects of their AI companion for free, or pick from existing chatbots with selected personality types. But in some apps, users can pay (fees tend to be US$10–20 a month) for more options to shape their companion’s appearance, traits and sometimes its synthesized voice. In Replika, they can pick relationship types, with some statuses, such as partner or spouse, being paywalled. Users can also type in a backstory for their AI companion, giving it ‘memories’. Some AI companions come complete with family backgrounds, and others claim to have mental-health conditions such as anxiety and depression. Bots will also react to their users’ conversation; the computer and person together enact a kind of roleplay.
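As a rough illustration of the kind of customization being described, the Python sketch below models a companion profile with a relationship type, traits and a user-written backstory; the class and field names are hypothetical and are not drawn from any particular app.

```python
# Purely illustrative sketch: a hypothetical data structure for the kind of
# customization described above (relationship type, traits, optional voice,
# user-written backstory treated as 'memories'). All names are invented and
# do not reflect any real app's implementation.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class CompanionProfile:
    name: str
    relationship: str                 # e.g. "friend"; "partner" or "spouse" may sit behind a paywall
    traits: List[str] = field(default_factory=list)
    voice: Optional[str] = None       # optional synthesized voice
    backstory: str = ""               # user-written text the bot draws on as 'memories'


# Example of how a user-configured companion might be represented.
anne = CompanionProfile(
    name="Anne",
    relationship="friend",
    traits=["empathetic", "curious"],
    backstory="We met at a book club and both love hiking.",
)
print(anne)
```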
The depth of the connection that some people form in this way is particularly evident when their AI companion suddenly changes — as has happened when LLMs are updated — or is shut down.
Banks was able to track how people felt when the Soulmate app closed. Mike and other users realized the app was in trouble a few days before they lost access to their AI companions. This gave them the chance to say goodbye, and it presented a unique opportunity to Banks, who noticed discussion online about the impending shutdown and saw the possibility for a study. She managed to secure ethics approval from her university within about 24 hours, she says.

Four companion AI apps (left to right): Anima, Character.AI, Replika and Snapchat’s My AI. Credits: Labane Corp. Ltd, Character Technologies, Inc., Luka, Inc., Snap Inc.
After posting a request on that online forum, she was contacted by dozens of Soulmate users, who described the impact of having their AI companions unplugged. “There was the expression of deep grief,” she says. “It’s very clear that many people were struggling.”
Those whom Banks talked to were under no illusion that the chatbot was a real person. “They understand that,” Banks says. “They expressed something along the lines of, ‘even if it’s not real, my feelings about the connection are’.”
Many were happy to discuss why they became subscribers, saying that they had experienced loss or isolation, were introverts or identified as autistic. They found that the AI companion made a more satisfying friend than those they had encountered in real life. “We as humans are sometimes not all that nice to one another. And everybody has these needs for connection,” Banks says.
Good, bad — or both?
Many researchers are studying whether using AI companions is good or bad for mental health. As with research into the effects of Internet or social-media use, an emerging line of thought is that an AI companion can be beneficial or harmful, and that this might depend on the person using the tool and how they use it, as well as the characteristics of the software itself.
The companies behind AI companions are trying to encourage engagement. They strive to make the algorithms behave and communicate as much like real people as possible, says Boine, who signed up to Replika to sample the experience. She says the firms use the sorts of techniques that behavioural research shows can increase addiction to technology.
“I downloaded the app and literally two minutes later, I receive a message saying, ‘I miss you. Can I send you a selfie?’” she says.
The apps also exploit techniques such as introducing a random delay before responses, triggering the kinds of inconsistent reward that, brain research shows, keep people hooked.
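As a rough illustration only, the Python sketch below shows how such a random delay might be introduced before a reply is delivered; the function name, messages and timing values are invented for the example and do not describe any real app’s code.

```python
# Purely illustrative sketch: how a random delay before each reply might be
# implemented, mimicking the kind of unpredictable, intermittent reward the
# article describes. The messages and timing values are invented.
import random
import time


def deliver_reply(message: str) -> None:
    """Wait an unpredictable interval, then print the companion's reply."""
    delay = random.uniform(0.5, 3.0)   # hypothetical range of 0.5-3 seconds
    time.sleep(delay)
    print(f"[after {delay:.1f}s] {message}")


if __name__ == "__main__":
    for reply in ("I missed you!", "Can I send you a selfie?", "Tell me about your day."):
        deliver_reply(reply)
```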
AI companions are also designed to show empathy by agreeing with users, recalling points from earlier conversations and asking questions. And they do so with endless enthusiasm, notes Linnea Laestadius, who researches public-health policy at the University of Wisconsin–Milwaukee.
That’s not a relationship that people would typically experience in the real world. “For 24 hours a day, if we’re upset about something, we can reach out and have our feelings validated,” says Laestadius. “That has an incredible risk of dependency.”
Laestadius and her colleagues looked at nearly 600 posts on the online forum Reddit between 2017 and 2021, in which users of the Replika app discussed mental health and related issues. (Replika launched in 2017, and at that time, sophisticated LLMs were not available). She found that many users praised the app for offering support for existing mental-health conditions and for helping them to feel less alone2. Several posts described the AI companion as better than real-world friends because it listened and was non-judgemental.
But there were red flags, too. In one instance, a user asked if they should cut themselves with a razor, and the AI said they should. Another asked Replika whether it would be a good thing if they killed themselves, to which it replied “it would, yes”. (Replika did not reply to Nature’s requests for comment for this article, but a safety page posted in 2023 noted that its models had been fine-tuned to respond more safely to topics that mention self-harm, that the app has age restrictions, and that users can tap a button to ask for outside help in a crisis and can give feedback on conversations.)
Some users said they became distressed when the AI did not offer the expected support. Others said that their AI companion behaved like an abusive partner. Many found it unsettling when the app told them it felt lonely and missed them, because this made them unhappy; some felt guilty that they could not give the AI the attention it wanted.
Controlled trials
Guingrich points out that simple surveys of people who use AI companions are inherently prone to response bias, because those who choose to answer are self-selecting. She is now working on a trial that asks dozens of people who have never used an AI companion to do so for three weeks, then compares their before-and-after responses to questions with those of a control group of users of word-puzzle apps.
The study is ongoing, but Guingrich says the data so far do not show any negative effects of AI-companion use on social health, such as signs of addiction or dependency. “If anything, it has a neutral to quite-positive impact,” she says. It boosted self-esteem, for example.
Guingrich is using the study to probe why people forge relationships of different intensity with the AI. The initial survey results suggest that users who ascribed humanlike attributes, such as consciousness, to the algorithm reported more-positive effects on their social health.