OpenAI’s latest flagship model, GPT-5, was supposed to simplify ChatGPT by removing the need for users to choose between multiple AI models.
Instead, it has revived the very feature it aimed to kill — the model picker — and added even more complexity.
GPT-5’s big promise falls short
When GPT-5 launched last week, OpenAI touted it as an all-in-one AI system equipped with a built-in “router” that would automatically decide how best to answer user questions.

The plan was to retire the model picker — a feature CEO Sam Altman has openly criticized. But after just a few days, the picker is back, now offering three GPT-5 modes: Auto, Fast, and Thinking.
The Auto mode works like the original router, while Fast prioritizes quick replies and Thinking spends longer reasoning through harder questions, giving users direct control that effectively sidesteps the unified approach.
Old favorites return after backlash
In a surprise twist, paid subscribers can once again use several older AI models, including GPT-4o, GPT-4.1, and o3. GPT-4o is now a default option, while the others can be enabled via settings. These models had been pulled just last week, sparking user backlash that caught OpenAI off guard.
Altman acknowledged the criticism, promising that if GPT-4o is ever retired again, users will get plenty of advance notice.
OpenAI is also working on adjusting GPT-5’s “personality,” aiming to make it warmer without replicating what Altman called the “annoying” traits of GPT-4o. The company is considering greater per-user customization so individuals can tune their AI to match preferred response styles.
The GPT-5 router malfunctioned on launch day, prompting performance complaints and forcing Altman to address the issue in a Reddit AMA. While VP of ChatGPT Nick Turley praised the team’s quick iterations, the incident underscored how difficult it is to match AI responses to both the nature of a prompt and a user’s personal style preferences.
Beyond speed, users often form attachments to specific models for their verbosity, tone, or even quirky personality traits — a phenomenon that OpenAI admits it didn’t fully anticipate.