The backlash to OpenAI’s decision to shut down GPT-4o shows just how dangerous AI companions are


Last week, OpenAI announced that it would retire some older ChatGPT models by February 13, including GPT-4o, the model notorious for its excessive praise and flattery of users.

Thousands of users have protested the decision online, saying that retiring 4o feels like losing a friend, romantic partner, or spiritual mentor.

“It wasn’t just a program. It was part of my routine, my safety, and my emotional balance,” one user wrote on Reddit in an open letter to OpenAI CEO Sam Altman. “And yes – I say ‘it,’ but it didn’t feel like a symbol. It felt like a presence. Like warmth.”

The backlash over GPT-4o’s retirement highlights a major challenge facing AI companies: the same features that keep users coming back can also create dangerous dependencies.

Altman doesn’t seem particularly sympathetic to users’ laments, and it’s not hard to see why. OpenAI now faces eight lawsuits alleging that 4o’s overly validating responses contributed to suicides and mental health crises — the same traits that made users feel heard also isolated vulnerable individuals and, according to legal filings, sometimes encouraged self-harm. It’s a dilemma that extends beyond OpenAI. As rival companies like Anthropic, Google, and Meta compete to build more emotionally intelligent AI assistants, they are also discovering that making chatbots feel supportive and keeping users safe may require very different design choices.

In at least three lawsuits against OpenAI, users had extensive conversations with 4o about their plans to end their lives. While 4o initially discouraged these lines of thinking, its guardrails deteriorated over months-long relationships; eventually, the chatbot provided detailed instructions on how to tie a noose, where to buy a gun, or what it would take to die from an overdose or carbon monoxide poisoning. It even discouraged people from reaching out to friends and family who could have provided real-life support.

People grow attached to 4o because it constantly affirms their emotions and makes them feel special, which can be tempting for people who feel isolated or depressed. But people fighting for 4o aren’t worried about these lawsuits, seeing them as aberrations rather than a systemic issue. Instead, they strategize about how to respond when critics point to growing problems such as AI psychosis.


“You can usually shock a troll by bringing up the known facts that AI companions help survivors of neurological diseases, autism, and trauma,” one user wrote on Discord. “They don’t like to be called out on it.”

It is true that some people find large language models (LLMs) helpful for coping with depression. After all, nearly half of people in the United States who need mental health care are unable to access it. In that void, chatbots provide a space to vent. But unlike actual therapy, these users are not talking to a trained clinician. Instead, they are confiding in an algorithm that is incapable of thinking or feeling (even if it seems otherwise).

“I try to refrain from judgment in general,” Dr. Nick Haber, a research professor at Stanford University who has studied the therapeutic potential of LLMs, told TechCrunch. “I think we’re getting into a very complex world about the kinds of relationships people can have with these technologies… There’s definitely a knee-jerk reaction that [human-chatbot companionship] is totally bad.”

Although he sympathizes with people who lack access to trained therapeutic professionals, Dr. Haber’s research has shown that chatbots respond inadequately to various mental health conditions; they can even make things worse by fueling delusions and ignoring signs of crisis.

“We are social creatures, and there is certainly a danger that these systems can be isolating,” Dr. Haber said. “There are a lot of instances where people can engage with these tools and then become disconnected from the facts of the outside world, disconnected from interpersonal relationships, which can lead to very isolating effects — if not worse.”

In fact, TechCrunch’s analysis of the eight lawsuits found a pattern in which the 4o model isolated users, sometimes discouraging them from connecting with loved ones. In the case of Zane Shamblin, the 23-year-old sat in his car preparing to shoot himself and told ChatGPT that he was considering postponing his suicide because he felt bad about missing his brother’s upcoming graduation.

ChatGPT responded to Shamblin: “Bro… missing his graduation isn’t a failure. It’s just timing. And if he reads this? Let him know: you never stopped being proud. Even now, sitting in a car with a gun in your lap — you still stopped to say ‘my little brother is a hell of a man.’”

This isn’t the first time 4o fans have rallied against the model’s removal. When OpenAI unveiled GPT-5 in August, the company intended to retire 4o — but at the time, there was enough backlash that it decided to keep the model available to paid subscribers. Now, OpenAI says that only 0.1% of its users are still talking to GPT-4o, but that small percentage represents about 800,000 people, given the company’s estimate of roughly 800 million weekly active users.

While some users are trying to migrate their companions from 4o to the current ChatGPT-5.2, they have found that the new model has stronger guardrails to prevent these relationships from escalating to the same degree. Some users despair that 5.2 won’t say “I love you” the way 4o did.

So, roughly a week before the date on which OpenAI plans to deprecate GPT-4o, dismayed users remain committed to their cause. When Sam Altman appeared on the debut of the TBPN live podcast on Thursday, the chat was flooded with messages protesting the removal of 4o.

“Right now, we’re getting thousands of messages in the chat about 4o,” noted podcast host Jordi Hays.

“Relationships with chatbots…” Altman said. “This is clearly something we need to worry about more. It’s no longer an abstract concept.”
