Tech leaders warn about 'the risk of extinction' from AI

AI technology should be treated as a societal risk comparable to pandemics and nuclear war, they assert

Tech leaders, scientists and other experts have warned that AI technology could pose an existential threat to humanity and should be treated as a societal risk comparable to pandemics and nuclear war.

"Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war," reads the statement released by the Center for AI Safety (CAIS), a non-profit organisation, on Tuesday.

More than 350 executives, researchers, and engineers actively involved in the field of AI signed the open letter.

Notably, the signatories included prominent figures from the commercial world, such as Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; Dario Amodei, CEO of Anthropic; and "godfather of AI" Geoffrey Hinton. Well-known names from academia include Stuart Russell and Dawn Song, professors of computer science at UC Berkeley; Peter Norvig, Education Fellow at Stanford University; and security expert Bruce Schneier, currently a lecturer at Harvard Kennedy School.

CAIS argues that even as AI risk has emerged as a global priority, AI safety itself remains surprisingly neglected.

"Currently, society is ill-prepared to manage the risks from AI," it added.

There has been a growing concern among global leaders and industry experts regarding the regulation and risks associated with AI and its potential impact on various aspects of society.

These concerns encompass potential effects on job markets and public health, as well as the weaponisation of disinformation, discrimination, and impersonation.

Hinton resigned from Google last month, saying that he was concerned about the potential dangers of the technology he helped pioneer.

He specifically expressed worries about the automated proliferation of fake information, videos and photos on the internet.

In March, several prominent figures in the tech industry, including Elon Musk, signed an open letter calling for a halt to the development of the most powerful AI systems for at least six months, citing grave risks to society and humanity.

That letter cautioned that competitive pressures are propelling AI development at a pace that could lead to a loss of control, with disastrous consequences.

Calls to address AI threats have intensified since the remarkable success of ChatGPT, introduced in November. The chatbot has been adopted by millions of users and has progressed rapidly, surpassing the expectations of even the most knowledgeable experts in the field.

Earlier this month, Sam Altman, Demis Hassabis and Dario Amodei had a meeting with US President Joe Biden and Vice President Kamala Harris to discuss AI regulation.

In testimony before the Senate following the meeting, Altman expressed his concerns about the significant risks posed by advanced AI systems and emphasised the need for government intervention.

In a recent blog post, Altman, along with two other OpenAI executives, put forward a series of proposals for the responsible management of powerful AI systems. They advocated for increased collaboration among leading AI companies, emphasising the importance of sharing knowledge and expertise.

Additionally, they suggested the need for further technical research into large language models to better understand their capabilities and limitations.

Signatory Michael Osborne, a professor in machine learning at the University of Oxford and co-founder of Mind Foundry, told The Guardian that it was remarkable so many people had signed the statement released this week advocating for responsible AI development.

"That does show that there is a growing realisation among those of us working in AI that existential risks are a real concern," Mr Osborne said.

He warned that there is a possibility that AI "might play a role as a kind of new competing organism on the planet, so a sort of invasive species that we've designed that might play some devastating role in our survival as a species."