Europol: 'Dark LLMs' may become a key criminal business model

Services like ChatGPT will make many types of crimes easier to commit

The European Union's law enforcement agency, Europol, has raised concern regarding the possible criminal use of ChatGPT, a popular chatbot created by OpenAI.

In a report published on Monday, Europol said that ChatGPT and similar services have the potential to make a wide range of unlawful activities easier to commit.

"ChatGPT is already able to facilitate a significant number of criminal activities, ranging from helping criminals to stay anonymous to specific crimes including terrorism and child sexual exploitation."

ChatGPT is an AI-based chatbot that uses natural language processing to interact with users in a conversational manner that closely resembles human communication.

ChatGPT has found diverse applications among individuals and organisations, such as drafting essays, emails, coding and generating various forms of textual content.

However, Europol is concerned about nefarious actors misusing this technology for illegal purposes.

While ChatGPT's content filter rejects overtly harmful requests, some users have managed to circumvent these safeguards. For example, some users have exploited ChatGPT to generate instructions on producing dangerous substances such as pipe bombs or crack cocaine.

"As the capabilities of LLMs (large language models) such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provide a grim outlook," Europol said in its report.

OpenAI says that GPT-4, its latest LLM, has more safeguards in place, but according to Europol these are just as easy to circumvent.

"The release of GPT-4 was meant not only to improve the functionality of ChatGPT, but also to make the model less likely to produce potentially harmful output," the report said, adding: "A subsequent check of GPT-4, however, showed that all of them still worked. In some cases, the potentially harmful responses from GPT-4 were even more advanced."

Europol cautioned that criminals are usually quick to leverage emerging technologies, and concrete examples of illicit use of ChatGPT emerged just weeks after its public release.

While it did not mention any specific incidents in which criminals have been observed using the technology, the agency identified three categories of crime where ChatGPT could be exploited to cause harm.

According to Europol, ChatGPT's proficiency in generating convincing text has made it a valuable asset for carrying out phishing attacks. Such attacks rely on luring unsuspecting individuals to click on fraudulent links in emails or messages that are designed to steal their personal data.

Due to ChatGPT's capability to emulate language patterns and mimic the speech styles of specific groups or individuals, it can be misused by malevolent actors to target victims, Europol noted.

Additionally, ChatGPT's proficiency in rapidly generating realistic textual content makes it an attractive tool for propaganda and disinformation campaigns. This feature allows users to produce and disseminate messages that reflect a particular narrative with minimal effort.

Furthermore, the agency discovered that criminals could misuse ChatGPT to accelerate the research process in domains where they have little expertise.

"If a potential criminal knows nothing about a particular crime area, ChatGPT can speed up the research process significantly by offering key information that can then be further explored in subsequent steps. As such, ChatGPT can be used to learn about a vast number of potential crime areas with no prior knowledge, ranging from how to break into a home, to terrorism, cybercrime and child sexual abuse," Europol warned.

Europol acknowledged that most of this information is already accessible on the internet, but ChatGPT's unique capabilities make it easier to locate and comprehend how to execute specific illicit activities.

The chatbot's ability to generate computer code can also be exploited by non-tech-savvy criminals, Europol said.

"This type of automated code generation is particularly useful for those criminal actors with little or no knowledge of coding and development."

The emergence of AI-based technologies has undoubtedly presented fresh challenges for law enforcement agencies, which must adapt to the evolving landscape of criminal activities.

The report concludes that "dark LLMs trained to facilitate harmful output may become a key criminal business model of the future."

It adds: "This poses a new challenge for law enforcement, whereby it will become easier than ever for malicious actors to perpetrate criminal activities with no necessary prior knowledge."