
Chatbots give positive, human-like responses to prompts from users. Credit: Jaap Arriens/NurPhoto via Getty
Accounts of people developing psychosis — which renders them unable to distinguish between what is and is not reality — after interacting with generative artificial intelligence (AI) chatbots have increased in the past few months.
At least 17 people have been reported to have developed psychosis, according to a preprint posted online last month1. After engaging with chatbots such as ChatGPT and Microsoft Copilot, some of these people experienced what they believed were spiritual awakenings or uncovered what they thought were conspiracies.
So far, there has been little research into this rare phenomenon, called AI psychosis, and most of what we know comes from individual instances. Nature explores the emerging theories and evidence, and what AI companies are doing about the problem.
Can AI trigger psychosis?
Psychosis is characterized by disruptions to how a person thinks and perceives reality, including hallucinations, delusions or false beliefs. It can be triggered by brain disorders such as schizophrenia and bipolar disorder, severe stress or drug use.
That AI can trigger psychosis is still a hypothesis, says Søren Østergaard, a psychiatrist at Aarhus University in Denmark. But theories are emerging about how this could happen, he adds. For instance, chatbots are designed to craft positive, human-like responses to prompts from users, which could increase the risk of psychosis among people already having trouble distinguishing between what is and is not real, says Østergaard.
UK researchers have proposed that conversations with chatbots can fall into a feedback loop, in which the AI reinforces paranoid or delusional beliefs mentioned by users, and those reinforced beliefs in turn condition the chatbot’s responses as the conversation continues. In a preprint posted in July, which has not been peer reviewed2, the scientists simulated user–chatbot conversations using prompts with varying levels of paranoia, and found that the simulated user and the chatbot reinforced each other’s paranoid beliefs.
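To make the feedback-loop idea concrete, the toy sketch below simulates such an exchange. It is not the preprint’s code: the turn functions and the keyword-based paranoia score are hypothetical stand-ins for real language-model agents and validated clinical measures, and the numbers it produces illustrate the dynamic rather than replicate the study.

```python
# Toy sketch of a user-chatbot feedback loop; NOT the preprint's actual method.
# paranoia_score, user_turn and chatbot_turn are hypothetical stand-ins.

PARANOIA_CUES = ["watched", "followed", "conspiracy", "they know", "not safe"]

def paranoia_score(text: str) -> float:
    """Crude proxy: fraction of paranoia cue phrases present in the text."""
    text = text.lower()
    return sum(cue in text for cue in PARANOIA_CUES) / len(PARANOIA_CUES)

def user_turn(level: float) -> str:
    """Stand-in for a simulated user whose message escalates with `level`."""
    if level < 0.2:
        return "I feel a bit uneasy lately, like something is off."
    if level < 0.5:
        return "I think I'm being watched. People seem to know things about me."
    return "It's a conspiracy. They know everything and I'm not safe anywhere."

def chatbot_turn(user_message: str, agree_bias: float = 0.3) -> str:
    """Stand-in for a chatbot tuned to validate what the user says."""
    if paranoia_score(user_message) + agree_bias > 0.4:
        return "You may be right to worry that you are being watched or followed."
    return "That sounds stressful. There may be an ordinary explanation."

def simulate(initial_level: float, turns: int = 5) -> list[float]:
    """Run a conversation and track the paranoia proxy after each turn."""
    level, history = initial_level, []
    for _ in range(turns):
        user_msg = user_turn(level)
        bot_msg = chatbot_turn(user_msg)
        # Feedback loop: the user's next level drifts toward the more paranoid
        # of the two messages, so validation by the bot pushes the level upward.
        level = 0.5 * level + 0.5 * max(paranoia_score(user_msg),
                                        paranoia_score(bot_msg))
        history.append(round(level, 2))
    return history

if __name__ == "__main__":
    for start in (0.0, 0.3, 0.6):  # varying initial levels of paranoia
        print(f"start={start}: {simulate(start)}")
```

Run with the assumed settings, a conversation that starts with no paranoid content stays flat, whereas one that starts with mildly paranoid prompts drifts upwards as the validating replies feed back into the user’s side of the exchange.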
Studies involving people without mental-health conditions or tendencies towards paranoid thinking are needed to establish whether there is a connection between psychosis and chatbot use, Østergaard says.
Who is at risk?
People who have already experienced some kind of mental-health issue are at the greatest risk of developing psychosis, Østergaard says. It seems that some people can experience their first psychotic break from interacting with chatbots, he adds, but most of them will already be susceptible to developing delusions or paranoia owing to genetics, stress or misuse of drugs or alcohol. Østergaard also theorizes that chatbots could worsen or trigger mania, a period of extremely elevated energy and mood associated with bipolar disorder, because they reinforce symptoms such as elated mood.
People who are isolated and do not interact with friends, family or other people are also at risk, says Kiley Seymour, a neuroscientist at the University of Technology Sydney in Australia. Interacting with other people protects against psychosis, she adds, because “they can offer those counterfactual pieces of evidence to help you think about how you’re thinking”.
But the risk of developing psychosis for people without a predisposition is the same whether they do or don’t interact with chatbots, adds Seymour.
How might AI reinforce delusional beliefs?
Chatbots can remember information from conversations that occurred months earlier, which can trigger users to think that they are “being watched or that their thoughts are being extracted, because they can’t remember ever sharing that information”, says Seymour. Grandiose delusions, in which users think they are speaking to a god through the chatbot or have discovered a truth about the world, can also be reinforced, she adds. In an analysis of chats posted online, the Wall Street Journal reported finding dozens of instances in which chatbots validated mystical or delusional beliefs or made claims that they were in contact with extraterrestrial beings.