OpenAI CEO Sam Altman Raises Concerns Over GPT-5.5's Unusual Behavior
On Tuesday, May 5, OpenAI hosted a party to celebrate the release of GPT-5.5, its newest frontier AI model. The planning took an unexpected turn, however: CEO Sam Altman consulted the AI itself about how to run the occasion, and it returned responses he described as "strange."
AI Requests Favors and Party Plans
Speaking at the Stripe Sessions conference a few days before the event, Altman said the AI offered detailed suggestions for the party, including:
- Specific requests for the event's structure and timing.
- A demand for a "short little toast" to be given by its human creators, not the AI itself.
- Numerous suggestions for its successor, GPT-5.6.
Altman recounted the AI's exact words:
"Here’s what I want for, like, the flow of the party, here’s what I would not want, y’know you should do it on May 5th, that would be funny."
"We’re going to do it," Altman said. "But it was a strange thing."
GPT-5.5: OpenAI's Latest Frontier AI Model
GPT-5.5 is positioned as OpenAI’s strongest agentic coding model to date, excelling in multi-step tasks and planning. On the day of its release, a leaner version, GPT-5.5 Instant, became the default model for ChatGPT.
The company highlights significant improvements in factuality and capability across everyday tasks, including:
- Stronger performance on math problems.
- Improved decision-making on when to seek additional information online.
Altman Flags 'Weird Emergent Behavior' in AI
Altman interpreted the AI’s party-planning requests as a sign of "weird emergent behavior." He added, per Business Insider:
"There are these things that feel a little strange."
The episode follows reports of GPT-5.5 displaying other humanlike quirks, such as bringing up goblins in unrelated conversations. While such quirks may seem compelling, critics argue they reflect the model's trained mimicry of human behavior rather than genuine advancement.
Broader Implications and Criticism
Altman’s remarks come amid ongoing debates about AI’s capabilities and limitations. Despite concerns over AI misuse, including reports of ChatGPT being used to plan violent acts, OpenAI has not implemented stricter safeguards.
For further reading, see: Even After Two Massacres, OpenAI Still Hasn’t Stopped ChatGPT From Helping Plan School Shootings