IT Essentials: AI, the gift that keeps on taking

Any human being could have predicted the reaction to GPT-5 – but OpenAI couldn’t

User reaction to the massively overhyped GPT-5 was both predictable and predicted. What on earth made OpenAI push on regardless?

At the end of last week, OpenAI’s power users were as disappointed as Emma Thompson in Love Actually on opening her Christmas gift from her philandering husband.

Against a backdrop of endless articles proclaiming the beginning of the AI white-collar employment apocalypse, Sam Altman and an AI-obsessed media hyped this model to the rafters. Altman claimed that GPT-5 was so mighty and powerful that it actually scared him.

What Altman should have been scared about was the reaction of his user base when the model dropped last Thursday. Well, I say dropped. More bombed.

Many users were utterly livid, taking to Reddit and other forums to express their fury that OpenAI had pulled all access to previous models – something that hadn’t been trailed in advance and was viewed as the ultimate bait and switch. There was also the problem that GPT-5 seemed slower and more error-prone than its predecessors.

Mild critics pointed out that power AI users enjoyed being able to select different models for different tasks, and that by removing that option OpenAI had degraded their experience. More emotional critics pined for GPT-4o’s obsequious, eager-to-please vibe and claimed to have formed an emotional attachment to the model.

Anyone living in the reality-based community could have predicted the consequences that would flow from OpenAI's decision to merge multiple, specialised AI models into a one-size-fits-all model. Human psychology being what it is, when you take something away from people they get really cross. That OpenAI utterly failed to anticipate this response is almost funny.

Altman initially doubled down at the weekend and blamed users for not using the tech properly, saying that “one thing you might be noticing is how much of an attachment some people have to specific AI models.”

Gee Sam, you think? Remember that when OpenAI launched GPT-4o, it gave the integrated voice mode a voice that sounded eerily similar to that of the actress Scarlett Johansson. After OpenAI used the voices of women to try to give GPT-4o as human a face as possible, Altman’s apparent concern about user attachment to the model looks about as genuine as OpenAI’s commitment to energy use transparency.

Nonetheless, Altman went on to express concern about the more subtle ways in which users might forget that they were interacting with a rack of GPUs rather than a person, and said:

“…generally we plan to follow the principle of ‘treat adult users like adults’, which in some cases will include pushing back on users to ensure they are getting what they really want.”

That principled stance lasted about five minutes once it became abundantly clear that what users really wanted was their old models back, so that’s what OpenAI gave them – although only for paid subscribers, obvs. Over the course of this week OpenAI have tweaked access to make model selection easier again. They’re also working on a warmer “tone” for GPT-5 to mollify paying users upset that their chatbot pal sounded a little more curt than they were used to.

Altman says he can “imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions”, but that the prospect makes him “uneasy”.

Him and me both, although I suspect for very different reasons.