'AI doom' letter sparks a backlash
Musk-signed moratorium appeal adds to AI hype, aims at the wrong targets and is signed by many of those causing the problems, say opponents
The open letter signed by more than 1,000 technologists, academics and engineers, calling for a pause on the development of large AI models until more robust auditing procedures are in place, has divided the AI community.
While its signatories include well-known names such as Elon Musk, Steve Wozniak and Stuart Russell, as well as engineers from Google and DeepMind, some working in the field see it as hype, unnecessary fearmongering and, inadvertently or otherwise, as favouring the incumbents pushing the technology.
Emily Bender, professor of linguistics and faculty director of computational linguistics at the University of Washington, described the letter as "just dripping with AI hype."
It may be true that AI labs are locked in an out-of-control race, "but no one has developed a 'digital mind'" and nor are they trying to, she said in a blog post.
There is certainly a risk of LLMs being used to spread misinformation and propaganda, as the letter states, but they are not inscrutable black boxes: they can be controlled by limiting their access to certain data sources and by putting guardrails in place before launch rather than after the fact, as has mostly been the case so far. They could also be understood if AI companies were transparent about their training data and the architecture of their algorithms, which, again, has not happened so far.
The rush towards ever larger language models without considering risks is certainly a bad idea, Bender says, but those risks have never been about "too powerful AI". Rather, they concern the concentration of power in too few hands, damaging the "information ecosystem", and replicating and exacerbating current injustices and imbalances.
The letter focuses on nebulous ideas of safety rather than these real-world issues, says Bender, urging policymakers to ignore it.
"Don't waste your time on the fantasies of the techbros saying 'Oh noes, we're building something TOO powerful'," Bender writes. "Listen instead to those who are studying how corporations (and governments) are using technology (and the narratives of 'AI') to concentrate and wield power."
Arvind Narayanan, computer science professor at Princeton, weighed in with a similar point. "Unfortunately, in each case, the letter presents a speculative, futuristic risk, ignoring the version of the problem that is already harming people," he said in a lengthy and critical blog post.
Timnit Gebru, a researcher who was fired by Google after raising the risks associated with LLMs, and who co-authored the Stochastic Parrots paper on those risks with Bender, complained that their work had been misinterpreted by the letter's authors.
The letter states: "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research," and cites Stochastic Parrots as a reference.
"One of the main points we make in the paper is that one of the biggest harms of large language models, is caused by CLAIMING that LLMs have 'human-competitive intelligence'," said Gebru. "They basically say the opposite of what we say and cite our paper?"
Like others, Gebru questions the source of the letter. The Future of Life Institute is a leading proponent of longtermism, a controversial philosophy, popular in Silicon Valley, that discounts current issues facing humanity and up-weights those that could threaten our species over the coming millennia. Many of the letter's signatories, including Elon Musk and the founders of DeepMind, are adherents, as are the authors of several of the references cited, she said, likening the movement to a cult.
The signatories include the same people who are causing the problems they identify, added Box CEO Aaron Levie, and they offer no real solutions.
"There are no literal proposals in the actual moratorium," he told Axios. "It was just, 'Let's now spend the time to get together and work on this issue.' But it was signed by people that have been working on this issue for the past decade."
Other criticisms concern the letter's implication that LLMs will somehow morph into AGI. Despite the emergent capabilities the letter cites, critics say this is highly unlikely.
"GPT-5 is not going to be AGI," tweeted neural network developer Harrison Kinsley."It's almost certain that no GPT model will be AGI."
However, there are those who believe that despite the hype and questionable projections in the letter, a pause for breath would be no bad thing.
AI author Gary Marcus said he has been arguing for a slowdown for some time. One of the letter's signatories, he asked on Twitter for anyone with alternative ideas on how to assess the risks before development goes any further.