Google: State-backed hackers using Gemini AI at every stage of attacks
GenAI is becoming embedded in the day-to-day workflow of cyber espionage groups
State-sponsored hacking groups from China, Iran, North Korea and Russia are using Google's Gemini AI system to assist with nearly every stage of cyber operations, from reconnaissance to post-breach data theft.
The company's Threat Intelligence Group (GTIG) said it had observed advanced persistent threat (APT) actors using the tool to profile targets, generate phishing emails, translate documents, write code, test vulnerabilities and troubleshoot malware.
The findings suggest that GenAI is becoming embedded in the day-to-day workflow of cyber espionage groups, even if it has not yet delivered the dramatic leaps in capability some experts had feared.
AI as an operational assistant
According to Google, state-backed actors used Gemini as a "support tool" rather than a fully autonomous attack platform. The model was most often employed for open-source intelligence gathering, technical research and code assistance.
North Korea-linked hackers known as UNC2970, believed to overlap with the notorious Lazarus Group, used the system to profile high-value targets in the defence and cybersecurity sectors.
The group searched for information on major companies, specific technical roles and salary data to help craft convincing job-recruitment lures.
UNC2970 has previously been associated with a long-running campaign known as Operation Dream Job, in which attackers pose as recruiters to deliver malware to aerospace, defence and energy workers.
Google said Chinese state-aligned actors had also experimented with Gemini. In one case, a hacker adopted the persona of a cybersecurity expert and asked the model to automate vulnerability analysis in a fabricated scenario.
The prompts involved remote code execution flaws, web application firewall bypass techniques and SQL injection testing against US-based targets.
Another Chinese actor used the system repeatedly to debug code, conduct research and seek advice on technical intrusion techniques.
Social engineering and malware development
The report also described Iranian group APT42 using Gemini to support social-engineering campaigns and speed up the development of custom malicious tools. The model was used for debugging, code generation and research into exploitation techniques.
Cybercriminal groups, not just state-backed actors, were found to be experimenting with AI-assisted malware.
Researchers identified a downloader called HonestCue that uses Gemini's API to generate code for later stages of an attack.
Another tool, a phishing kit dubbed CoinBait, masqueraded as a cryptocurrency exchange to harvest credentials. Artefacts suggested it had been developed with the help of AI coding platforms.
In separate campaigns, criminals used GenAI services to support "ClickFix" operations, in which victims were tricked by search-engine ads into running malicious commands that installed information-stealing malware on macOS systems.
Despite the breadth of activity, Google said it had not seen major breakthroughs in capability driven by AI. Instead, attackers were mainly using the technology to improve efficiency and automate routine tasks.
However, the company warned that interest in AI tools among criminal groups was rising, and that some attackers were attempting to replicate advanced models through large-scale prompt-based "knowledge distillation" efforts.
In one case, Gemini was targeted with roughly 100,000 prompts in multiple languages in an apparent attempt to extract its reasoning patterns and train rival systems.
Google said it had disabled accounts linked to the abuse and added new safeguards to its AI models to make malicious use more difficult.