ChatGPT’s new GPT-5.3 Instant model will stop telling you to calm down


Take a deep breath. You’re not crazy, you’re just stressed. And honestly, it’s okay.

If reading those words instantly irritated you, you’re probably also tired of ChatGPT constantly talking to you as if you’re in some sort of crisis and need to be micromanaged. Now maybe things will get better. OpenAI says its new model, GPT-5.3 Instant, will cut down on condescension and “preachy disclaimers.”

According to the model’s release notes, the GPT-5.3 update focuses on user experience, including things like tone, relevance, and conversation flow — areas that may not show up in benchmarks but can make ChatGPT frustrating to use, the company said.

Or, as OpenAI put it on X, “We hear your feedback loud and clear, and 5.3 Instant reduces the frustration.”

In an example from the company, the same query is shown with responses from the GPT-5.2 Instant model compared to the GPT-5.3 Instant model. In the first case, the chatbot’s response begins, “First of all – you’re not broken,” a now-common phrase that has lately gotten under users’ skin.

In the updated model, the chatbot instead acknowledges the difficulty of the situation, without directly trying to reassure the user.

The grating tone of the GPT-5.2 model annoyed users so much that some canceled their subscriptions, according to several posts on social media. (It was a huge point of discussion on the ChatGPT subreddit, for example, before the Pentagon deal stole the focus.)

People have complained that this kind of language — where the bot talks to you as if it assumes you’re panicked or stressed when you’re just looking for information — sounds condescending.

Often, ChatGPT responded to users with reminders to breathe and other attempts at reassurance, even when the situation did not warrant it. This made some users feel infantilized, or as if the bot were making assumptions about their mental state that simply weren’t true.

As one Reddit user recently pointed out, “No one has ever calmed down in all the history of asking someone to calm down.”

It’s understandable that OpenAI would try to implement guardrails of some sort, especially since it faces numerous lawsuits accusing the chatbot of leading people into negative mental health effects, in some cases including suicide.

But there is a delicate balance between responding with empathy and simply providing quick, factual answers. After all, Google never asks how you’re feeling when you search for information.
