Microsoft exposes state-backed hackers using AI tools for espionage

Hackers linked to Russian military intelligence have been using LLMs to delve into satellite communication protocols relevant to military operations in Ukraine

Microsoft says state-backed hacking groups from Russia, China, Iran and North Korea have been using AI tools, including those from Microsoft-backed OpenAI, to refine their cyber-espionage techniques.

Since the launch of ChatGPT in November 2022, concerns have mounted among tech experts, media, and government officials regarding the potential weaponisation of advanced AI tools by adversaries.

Microsoft's report, released on Wednesday, detailed how hacking entities affiliated with Russian military intelligence, Iran's Revolutionary Guard, and the governments of China and North Korea have been employing large language models (LLMs) to enhance their hacking capabilities.

These models, trained on vast amounts of text data, are adept at generating human-like responses.

Microsoft said the Strontium group, linked to Russian military intelligence and infamous for previous cyber intrusions, has been leveraging LLMs to delve into satellite communication protocols and radar imaging technologies relevant to military operations in Ukraine.

Similarly, North Korean hackers, identified as Thallium, have incorporated LLMs into their arsenal to research vulnerabilities, streamline scripting tasks, and craft sophisticated content for phishing campaigns targeting regional experts.

Meanwhile, Iranian hackers from the Curium group have exploited LLMs to generate convincing phishing emails, including an attempt to lure prominent feminists to malicious websites.

Chinese state-backed hackers explored AI for inquiries about rival intelligence agencies and notable individuals. They also used LLMs for research, scripting, translations and optimisation of existing tools.

The emergence of AI-powered tools like WormGPT and FraudGPT has intensified concerns about the potential for AI-driven cyberattacks.

Senior officials at the National Security Agency have cautioned against the growing sophistication of phishing emails enhanced by AI, emphasising the need for robust defence mechanisms.

While Microsoft and OpenAI have yet to detect significant attacks orchestrated with LLMs, the companies have proactively shut down accounts associated with these malicious activities.

Tom Burt, Microsoft's vice president for customer security, said the company is committed to curbing such activities.

"Independent of whether there's any violation of the law or any violation of terms of service, we just don't want those actors that we've identified - that we track and know are threat actors of various kinds - we don't want them to have access to this technology," he told Reuters in an interview.

Bob Rotsted, who heads cybersecurity threat intelligence at OpenAI, described the disclosure as one of the first instances of a major AI company publicly addressing cybersecurity threats associated with AI technologies.

However, Rotsted stressed that there was no evidence the technology had accelerated adversaries' capabilities beyond what conventional methods already achieve.

Neither Burt nor Rotsted disclosed specific details regarding the volume of activity or the number of affected accounts.

Burt said the company takes a zero-tolerance stance on hacking groups accessing AI technologies, citing the novelty and immense power of the technology as justification.

Looking ahead, Microsoft warned of future AI-driven attack vectors, such as voice impersonation, highlighting the need for continuous innovation in cybersecurity defence strategies.

"AI-powered fraud is another critical concern. Voice synthesis is an example of this, where a three-second voice sample can train a model to sound like anyone," says Microsoft.

In response to Microsoft's findings, Liu Pengyu, spokesperson for China's US embassy, rejected the accusations and called for the responsible deployment of AI technology for the benefit of humanity.