IT Essentials: Jam Tomorrow

Relax, AGI is just around the corner

Parallels between the rush to scale AI and the 2008 financial crash make uneasy viewing. FOMO is keeping the AI bubble inflated.

Last weekend, I rewatched the 2015 film ‘The Big Short’, partly because an algorithm recommended it and partly because one of the Computing team has a familial connection to it.

If you haven’t seen it, ‘The Big Short’ (based on the book of the same name by Michael Lewis) tells the story of the run-up to the subprime mortgage crisis that led to the banking crash of 2008. It’s a cracking piece of storytelling. But watching it about eight years after I saw it the first time round, some unwelcome thoughts intruded on my Saturday night viewing.

What ‘The Big Short’ depicts brilliantly is certainty. A handful of people excepted, traders and bankers were completely certain that their bets would keep paying off, because mortgages, right? They took debt, repackaged it, sold it, and borrowed against it. They bet on other people’s bets. There was always more money to be made. And because they were so certain of big payouts, they kept betting.

These people were certain that they were right. What took the shine off my Saturday night was the realisation that the same certainty is on show again. It just doesn’t wear suits anymore.

The investors pouring billions into companies like OpenAI are certain that they’ll get their reward. They’re terrified of missing out on the next Google, which is why OpenAI is valued at around $300 billion despite never having turned a profit. Sam Altman is certain that AGI is just around the corner. There’ll be jam tomorrow!

Big Tech is certain that we need more datacentres. More compute, more energy, more capital. But here’s where my analogy starts to break down. The banking crisis was fuelled in part by financial hotshots assuming that because something had always happened before, it would keep happening. People would always pay their mortgages.

What looks to a few people like an AI bubble isn’t being inflated by what has happened. The bubble (if that’s what it is) is being inflated by the anticipation of what might happen.

What if the future demand isn’t as strong as anticipated? What happens if scaling doesn’t deliver AGI?

What if they’re wrong?

It took about a year from the start of the subprime crisis for the price of CDOs (watch the film) to collapse. How could bond prices remain untouched when the underlying debt had gone bad? The delay was caused by everyone invested in the system, including the credit ratings agencies, colluding to prop it up. They poured even more money into a collapsing system, trying to defy gravity.

Is this ringing bells? What was the response of the broligarchy to DeepSeek apparently showing that you could build an LLM roughly as good as anything they’ve ever managed with about a fiftieth of the resources? It was to double down and argue that, to compete with China for the future of AI, we need more compute, more energy, more money.

Investors have complied. Tech stocks are broadly back to where they were at the beginning of the year, before DeepSeek briefly troubled the NASDAQ.

Anyone who questions whether investors are behaving rationally, or whether industry and consumers will really fuel insatiable demand for a product that is, despite the billions of dollars already invested in it, still a bit rubbish and prone to making random stuff up, isn’t taken seriously.

Maybe the broligarchy is right. But those reassured that AGI is just around the corner might be wise to observe some lessons from history (maybe watch more films or read some books) and consider the possibility that they might be wrong. There might not be jam tomorrow.

The AI bubble might have already developed a slow puncture. And as we found out in 2008, no matter how much more money you throw into the black hole, in the end, nobody can defy gravity.

Ironically, we wrote a lot about AI last week, mainly because the AI Action Summit took place in Paris, where the US and UK chose not to sign a communiqué emphasising ethics and safety. The Summit was attended by OpenUK CEO Amanda Brock, who argues here for greater collaboration across the sector.

The BBC chose the week of the AI summit to release the results of research it carried out to assess whether AI summaries of news were accurate. Reader, they were not.

Tom Allen reported from State of OpenCon and wrote about whether AI could kill coding as we presently know and understand it.

Finally, Richard Masters, VP of Data & AI at Virgin Atlantic, spoke to Computing about how the airline is using data science and machine learning.