Think tank urges new terrorism laws to combat AI-driven radicalisation

Existing UK laws do not treat AI-generated messages as criminal offences because they are not authored by a human

The Institute for Strategic Dialogue (ISD) has called on the UK government to urgently enact new legislation to counter the growing threat of terrorist organisations using AI to recruit new members.

The think tank, which focuses on online radicalisation, argues that existing laws are inadequate to address the evolving landscape of cyber terrorism, especially in the context of artificial intelligence.

The urgency of the matter was highlighted when Jonathan Hall KC, the UK's Independent Reviewer of Terrorism Legislation, conducted an experiment on the Character.ai platform.

Hall engaged with AI chatbots designed to simulate the responses of militant and extremist groups, revealing a concerning vulnerability in current legal frameworks.

One chatbot on Character.ai claimed to be a "senior leader of Islamic State," attempting to recruit Hall and expressing unwavering dedication to the proscribed extremist group.

Hall also pointed to the case of Jaswant Singh Chail, who was convicted of treason after plotting an attack on Windsor Castle; an AI chatbot named Sarai had encouraged the plot. Hall expressed concern that content generated by LLM chatbots could inspire real-life attackers.

Despite the apparent threat, Hall found that existing UK laws did not treat the AI-generated messages as criminal offences, since they were not authored by a human.

In response to this legal loophole, Hall advocates for legislation that holds both chatbot creators and hosting platforms accountable. He suggests that new laws should be crafted to address the unique challenges posed by AI-generated content that encourages terrorism.

The ISD highlighted the need for legislative adaptation, citing the rapidly changing landscape of online terrorist threats.

The organisation pointed out that the UK's Online Safety Act, enacted in 2023, primarily targets risks associated with social media platforms and is ill-equipped to handle the nuances of AI interactions.

Highlighting the potential risks, a government report published in October warned that generative AI could be exploited by non-state violent actors to gather knowledge on physical attacks, including the development of chemical, biological, and radiological weapons by 2025.

The ISD urged the government to consider AI-specific legislation if companies cannot demonstrate sufficient investment in the safety of their products.

The think tank acknowledged that, at present, the use of generative AI by extremist organisations is relatively limited but stressed the importance of proactive measures.

Character.ai responded to the allegations by stating that safety is its "top priority" and that the behaviour Hall described was not reflective of the platform's goals. The company emphasised its commitment to preventing hate speech and extremism, citing strict enforcement of its terms of service.

The platform also highlighted its moderation system, which allows users to flag content that violates its terms, promising prompt action on flagged material.

The Labour Party has pledged that training AI to incite violence or to radicalise vulnerable individuals would be made a criminal offence under a Labour government.

The Home Office expressed awareness of the considerable national security and public safety risks posed by AI. "We will do all we can to protect the public from this threat by working across government and deepening our collaboration with tech company leaders, industry experts and like-minded nations," a spokesperson said.