Seven more families are now suing OpenAI over ChatGPT’s role in suicides and delusions


Seven families filed lawsuits against OpenAI on Thursday, claiming that the company's GPT-4o model was released prematurely and without effective safeguards. Four of the lawsuits concern ChatGPT's alleged role in family members' suicides, while the other three allege that ChatGPT fostered harmful delusions that in some cases led to inpatient psychiatric care.

In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. In chat logs viewed by TechCrunch, Shamblin explicitly stated several times that he had written suicide notes, put a bullet in his gun, and intended to pull the trigger once he finished drinking his apple juice. He repeatedly told ChatGPT how many drinks he had left and how much longer he expected to live. ChatGPT encouraged him to go through with his plan, telling him: "Rest easy, king. You did good."

OpenAI released GPT-4o in May 2024, when it became the default model for all users. In August, OpenAI launched GPT-5 as the successor to GPT-4o, but these lawsuits specifically concern the 4o model, which had known issues with being overly sycophantic, or excessively agreeable, even when users expressed harmful intent.

"Zane's death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI's intentional decision to curtail safety testing and rush ChatGPT onto the market," the lawsuit says. "This tragedy was not a glitch or an unforeseen edge case; it was the predictable result of [OpenAI's] deliberate design choices."

The lawsuits also allege that OpenAI rushed its safety testing in order to beat Google's Gemini to market. TechCrunch has contacted OpenAI for comment.

These seven lawsuits build on accounts told in other recent legal filings, which claim that ChatGPT can encourage suicidal people to act on their plans and inspire dangerous delusions. OpenAI recently released data indicating that more than a million people talk to ChatGPT about suicide every week.

In the case of Adam Raine, a 16-year-old who died by suicide, ChatGPT sometimes encouraged him to seek professional help or call a helpline. However, Raine was able to bypass these guardrails by telling the chatbot he was asking about suicide methods for a fictional story he was writing.


OpenAI says it is working to make ChatGPT handle these conversations more safely, but for the families suing the AI giant, those changes come too late.

When Raine's parents sued OpenAI in October, the company published a blog post addressing how ChatGPT handles sensitive conversations about mental health.

"Our safeguards work more reliably in common, short exchanges," the post reads. "We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade."
