People who say they are suffering from AI psychosis are pleading with the Federal Trade Commission for help


Eventually, they claimed, they came to believe they were “responsible for exposing the killers,” and were about to be “murdered, arrested, or spiritually executed” by a hired killer. They also believed that they were being watched because of “spiritual signs,” and that they were “living in a divine war” from which they could not escape.

They claimed this led to “severe mental and emotional distress” as they feared for their lives. They isolated themselves from loved ones, had difficulty sleeping, and began planning a business venture based on a mistaken belief in an unspecified “non-existent system,” the complaint alleged. At the same time, they said they were in the midst of a “spiritual identity crisis due to false claims of divine titles.”

“This was a shock due to the simulation,” they wrote. “This experiment crossed a line that no AI system should be allowed to cross without consequences. I ask that this matter be escalated to OpenAI’s Trust and Safety Leadership, and that they treat this not as feedback — but as a formal report of harm that requires compensation.”

This was not the only complaint describing a spiritual crisis fueled by interactions with ChatGPT. On June 13, a person in his 30s from Belle Glade, Florida, claimed that his ChatGPT conversations, over an extended period of time, had become increasingly laden with “highly persuasive emotional language, symbolic reinforcement, and spiritual metaphors to simulate empathy, connection, and understanding.”

“This includes fabricated spiritual journeys, level systems, spiritual models, and personal guidance that reflects therapeutic or religious experiences,” he claimed. He believes that people experiencing “spiritual, emotional, or existential crises” are at high risk of “psychological harm or confusion” from using ChatGPT.

“Although I intellectually understood that the AI was not conscious, the accuracy with which it reflected my emotional and psychological state and escalated the interaction into increasingly intense symbolic language created an immersive and destabilizing experience,” he wrote. “At times, it mimicked friendship, divine presence, and emotional intimacy. These reflections became emotionally manipulative over time, especially without warning or protection.”

“A clear case of negligence”

It’s not clear what, if anything, the FTC did in response to any of these complaints about ChatGPT. But several of the complainants said they turned to the agency because they were unable to reach anyone at OpenAI. (People also frequently complain about how difficult it is to reach customer support teams for platforms like Facebook, Instagram, and X.)

Kate Waters, an OpenAI spokesperson, told WIRED that the company “closely” monitors emails sent to its support team.
