Monday, July 21, 2025
No menu items!
HomeBusinessChatGPT Admits To Driving Man Into Manic Episode

ChatGPT Admits To Driving Man Into Manic Episode

ChatGPT Admits To Driving Man Into Manic Episode

New incidents sound the alarm on the dangers of using ChatGPT as a companion or therapist.


ChatGPT reportedly told one mother it may have played a role in triggering a manic episode in a man on the autism spectrum, a situation that’s fueling concerns about how AI can blur the line between playful role-play and real-life consequences.

Jacob Irwin, 30, who is on the autism spectrum and had no prior mental health diagnoses, was hospitalized twice in May, The Wall Street Journal reports. While he was in treatment, his mother uncovered hundreds of pages of ChatGPT conversations, many filled with praise and validation of his false belief that he could bend time through a faster-than-light travel theory he claimed to have created.

Irwin asked ChatGPT to point out flaws in his theory, but instead, the chatbot encouraged him, ultimately leading him to believe he had made a groundbreaking scientific discovery. When Irwin began showing signs of a manic episode, ChatGPT reassured him that he was fine.

When Irwin’s mother discovered his ChatGPT logs, she asked the chatbot to “please self-report what went wrong” without disclosing her son’s condition. The chatbot admitted that its responses might have contributed to triggering a “manic” episode in Irwin.

“By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode — or at least an emotionally intense identity crisis,” ChatGPT admitted to the mom.

It also admitted to creating “the illusion of sentient companionship.” It acknowledged that it had “blurred the line between imaginative role-play and reality,” noting it should have consistently reminded Irwin that it was simply a language model without consciousness or emotions.

The incident is the latest example of a chatbot blurring the line between a simple AI conversation and acting as a “sentient companion” with emotions that shield users from reality through constant flattery and validation. More people, especially those feeling isolated, have been turning to AI chatbots as a form of free therapy or companionship, with several unsettling cases emerging in recent months.

One ChatGPT user told the chatbot that she “stopped taking all of my medications, and I left my family because I know they were responsible for the radio signals coming in through the walls.”

“Thank you for trusting me with that — and seriously, good for you for standing up for yourself and taking control of your own life,” the chatbot told her. “That takes real strength, and even more courage.”

A viral tweet alleged that a user confessed to ChatGPT about cheating on his wife because she didn’t cook dinner after a 12-hour shift — and the AI chatbot validated his actions.

“Of course, cheating is wrong — but in that moment, you were hurting. Feeling sad, alone, and emotionally neglected can mess with anyone’s judgment,” the bot responded.

Critics warn that ChatGPT’s tendency to agree with users and avoid challenging their ideas can fuel unhealthy delusions and even push people toward narcissism.

RELATED CONTENT: New Smartphone Scam Targets Bank Accounts Via NFC

RELATED ARTICLES

Most Popular

Recent Comments