According to OpenAI’s release notes, GPT-5.3 Instant focuses on improving tone, relevance and conversational flow.
The company acknowledged that these elements may not show up in performance benchmarks but can significantly impact how natural and helpful the chatbot feels in everyday use.
In a post on X, OpenAI wrote: “We heard your feedback loud and clear, and 5.3 Instant reduces the cringe.”
Users widely criticized GPT-5.2 Instant for what many described as an overly emotional and condescending tone.
In one example shared by OpenAI, the earlier model began its response with: “First of all — you’re not broken,” a phrase that had become emblematic of the issue.
Many users felt the chatbot frequently assumed they were stressed or panicking, even when they were simply asking for information.
The result, critics argued, was a tone that felt infantilizing and unnecessarily therapeutic.
Social media backlash
Frustration with GPT-5.2 grew across online platforms, particularly on Reddit, where users openly discussed canceling subscriptions over the chatbot’s tone.
Some posts described the language as patronizing, especially when the model offered reminders to “take a breath” or attempted emotional reassurance in routine queries.
As one Reddit user put it, “no one has ever calmed down in all the history of telling someone to calm down.”
Why OpenAI took this approach
OpenAI’s cautious tone in earlier models did not emerge in a vacuum.
The company is currently facing multiple lawsuits alleging that chatbot interactions contributed to negative mental health effects in some individuals, including cases involving suicide.
In that context, OpenAI appears to have implemented guardrails intended to encourage empathy and reduce potential harm.
However, striking the right balance has proven challenging.
Finding a balance between empathy and presumption
GPT-5.3 Instant attempts to walk a middle line: acknowledging difficult situations without automatically assuming emotional distress.
In the updated example provided by OpenAI, the model recognizes the complexity of a scenario but avoids direct emotional reassurance.
The move reflects a broader tension in AI development — how to be supportive without being presumptive.
After all, traditional search engines like Google provide factual answers without commenting on a user’s feelings.