Terrorism watchdog warns about AI risks to national security

AI developers should move away from a 'tech utopian' mindset


National security experts have warned that without proper regulation, AI could endanger national security.

Jonathan Hall KC, the UK's Independent Reviewer of Terrorism Legislation, highlighted the national security risks associated with the technology and emphasised the importance of designing AI systems to thwart terrorist activities.

Hall urged AI developers to shift away from a "tech utopian" perspective and instead contemplate how terrorists might exploit the technology.

"They need to have some horrible little 15-year-old neo-Nazi in the room with them, working out what they might do. You've got to hardwire the defences against what you know people will do with it," said Hall, according to The Guardian.

Hall specifically voiced concerns around the use of AI chatbots to manipulate vulnerable or neurodivergent individuals into carrying out terrorist acts.

"What worries me is the suggestibility of humans when immersed in this world and the computer is off the hook. Use of language, in the context of national security, matters because ultimately language persuades people to do things."

Hall has spoken on the subject before. In April this year, he said chatbots could be viewed as a "blessing" for lone-wolf terrorists.

"Because an artificial companion is a boon to the lonely, it is likely that many of those arrested will be neurodivergent, possibly suffering from medical disorders, learning disabilities, or other conditions," he added.

The Guardian report mentions the recent case of Matthew King, a 19-year-old who was sentenced to life imprisonment for plotting a terrorist attack. The report highlights that King was radicalised through his online activities.

In the UK, there is an increasing focus on addressing the national security challenges presented by AI.

MI5 has formed a partnership with the Alan Turing Institute, the UK's national institute for data science and AI. The collaboration aims to enhance MI5's capabilities in dealing with AI threats.

Prime Minister Rishi Sunak is expected to raise the matter of AI regulation during his upcoming visit to the United States, where he is due to discuss the topic with President Biden and congressional figures.

Tech companies must do their part

Hall emphasised the importance of tech companies learning from their past complacency when it comes to AI-generated content.

He stressed the need for greater transparency, particularly regarding the number of employees and moderators companies have dedicated to AI technology.

"We need absolute clarity about how many people are working on these things and their moderation," he said. "How many are actually involved when they say they've got guardrails in place? Who is checking the guardrails? If you've got a two-man company, how much time are they devoting to public safety? Probably little or nothing."

Hall suggested that new legislation may be necessary to address the AI terror threat; specifically, the danger posed by lethal autonomous weapons, which use AI to select their targets.

Last week, tech leaders, scientists and other experts warned that AI technology could pose an existential threat to humanity and should be treated as a societal risk comparable to pandemics and nuclear war.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war," read the statement released by the Center for AI Safety (CAIS), a non-profit organisation.

More than 350 executives, researchers and engineers actively involved in the field of AI signed the open letter.