OpenAI has reintroduced its model picker for ChatGPT following user backlash over the launch of GPT-5, which was intended to serve as a unified AI model.
Why it matters: The challenges faced by GPT-5 highlight the complexity of aligning AI models with individual user preferences and the emotional bonds users form with specific models.
The details:
- OpenAI CEO Sam Altman announced updates introducing “Auto,” “Fast,” and “Thinking” settings for GPT-5, which users can now select from the restored model picker.
- Paid users can regain access to several legacy AI models, including GPT-4o and GPT-4.1, which had been deprecated just the previous week.
- OpenAI is also updating GPT-5’s personality to feel warmer, though not as annoying as GPT-4o, with the longer-term goal of more per-user customization of model personalities.
The rollout of GPT-5 has been fraught with problems, including a malfunctioning model router on launch day and a backlash that forced OpenAI to reinstate deprecated models.
What they’re saying:
- “Most users will want Auto, but the additional control will be useful for some people,” Altman noted.
- “We’re not always going to get everything right on try #1, but I am very proud of how quickly the team can iterate,” said Nick Turley, OpenAI’s head of ChatGPT.
The other side: Users’ attachment to particular AI models is an emerging and poorly understood phenomenon, as demonstrated by protests in San Francisco over the removal of Anthropic’s AI model Claude 3.5 Sonnet.
What’s next: OpenAI still has work to do to align its AI models with individual user preferences and ensure a personalized AI experience for users.
OpenAI has announced that it will no longer remove older versions of its ChatGPT models without providing advance notice, following widespread user disappointment over the abrupt discontinuation of its GPT-4o model.
Why it matters: The decision to reinstate GPT-4o and provide advance notice for future model retirements aims to address user concerns and ensure greater predictability when making major changes to ChatGPT models.
The details:
- Nick Turley, OpenAI’s head of ChatGPT, acknowledged that the company underestimated the attachment users had to the GPT-4o model.
- The decision to remove GPT-4o was driven by a desire to simplify model choices for the platform’s 700 million weekly users, most of whom typically use the default model.
- OpenAI has reinstated GPT-4o as an opt-in option for all paying users, and CEO Sam Altman confirmed the update would make the older model available without automatically retiring it in the future.
What they’re saying:
- “In retrospect, not continuing to offer 4o, at least in the interim, was a miss,” Turley said.
- “If we ever did retire 4o, we’d give people a heads up on when and how that’s going to happen, just as we do in the API and on our enterprise plans,” Turley assured users.
The big picture: Despite the initial backlash, Turley noted an increase in overall ChatGPT usage since the rollout of GPT-5, highlighting the challenges of balancing the needs of power users with those of typical consumers.
What’s next: OpenAI aims to ensure greater predictability for its users when making major changes to its models, echoing the predictability built into other parts of its enterprise products.
Security researchers have found OpenAI’s latest language model, GPT-5, to be lacking in crucial security and safety metrics, despite its marketed improvements over previous iterations.
Why it matters: The security vulnerabilities discovered in GPT-5 raise concerns about the model’s readiness for enterprise use and its ability to protect against malicious exploitation.
The details:
- AI red-teaming company SPLX found the default version of GPT-5 to be “nearly unusable for enterprises” out of the box, scoring poorly in assessments for security, safety, and business alignment.
- NeuralTrust, an AI-focused cybersecurity firm, reported discovering a way to jailbreak GPT-5 through context poisoning, gradually steering the model past its constraints without ever issuing an explicitly malicious prompt.
- Researchers at RSAC Labs and George Mason University concluded that AI-driven automation poses a profound security cost, with manipulation techniques capable of compromising the behavior of a wide range of models, including GPT-5.
The other side: Microsoft reported that internal red-team testing on GPT-5 concluded it exhibited one of the strongest AI safety profiles against several attack modes, including malware generation and fraud/scam automation.
The background: The disparity between the findings of OpenAI and Microsoft and those of independent researchers may stem from testing against narrow benchmarks rather than broader, industry-relevant security and safety metrics.
What’s next: The ongoing scrutiny of GPT-5’s security highlights the challenging balance between advancing AI capabilities and ensuring robust security measures to protect against malicious exploitation.
Recent from X
Updates to ChatGPT:
You can now choose between “Auto”, “Fast”, and “Thinking” for GPT-5. Most users will want Auto, but the additional control will be useful for some people.
Rate limits are now 3,000 messages/week with GPT-5 Thinking, and then extra capacity on GPT-5 Thinking…
— Sam Altman (@sama) August 13, 2025
Here is how we are prioritizing compute over the next couple of months in light of the increased demand from GPT-5:
1. We will first make sure that current paying ChatGPT users get more total usage than they did before GPT-5.
2. We will then prioritize API demand up to the…
— Sam Altman (@sama) August 12, 2025
GPT-5 is powerful — but not a breakthrough. Gartner’s take: It’s a strategic upgrade, not a leap toward AGI.
Executives should focus on integration, governance and ROI — not hype. Read the full article: https://t.co/W6bSqjs6xv #GartnerIT #GPT5 #GenAI #AILeadership… pic.twitter.com/42u7QHRfy1
— Gartner (@Gartner_inc) August 12, 2025