Hackers used Anthropic AI chatbot to automate unprecedented cybercrime spree
Attacker used advanced evasion techniques to bypass Anthropic's multiple security layers
A hacker exploited the AI chatbot Claude, developed by Anthropic, to automate what the company describes as the most extensive AI-driven extortion campaign documented to date.
The hacker used the AI chatbot to identify vulnerable targets, create malicious software, steal sensitive data, and craft ransom demands, demonstrating a sophisticated integration of AI into cybercrime.
Anthropic disclosed the incident in a detailed report [pdf] published Tuesday.
According to the company, the unnamed hacker employed Claude Code – a version of its chatbot designed for "vibe coding," which generates programming code based on user prompts – to execute nearly every step of a criminal extortion campaign against at least 17 companies over three months.
Jacob Klein, Anthropic's head of threat intelligence, said the attacker appeared to operate from outside the US and used advanced evasion techniques to bypass Anthropic's multiple security layers.
The hacker's AI-assisted method began with Claude Code scanning for companies with exploitable weaknesses. Then, the chatbot wrote malicious code that successfully extracted sensitive information, including Social Security numbers, banking data, and patients' confidential medical records.
The stolen data also included files subject to International Traffic in Arms Regulations (ITAR), indicating the compromise of sensitive defence-related information.
Afterward, Claude organised and analysed the stolen files to identify the most valuable data for extortion. Using the hacked companies' financial documents, the AI calculated realistic bitcoin ransom amounts – ranging from about $75,000 to over $500,000 – tailored to each victim.
The chatbot also drafted suggested extortion emails demanding payment in exchange for withholding the stolen information from public release.
AI tools lower the barrier to sophisticated attacks
Anthropic declined to disclose the names of the targeted organisations but confirmed the victims included a defence contractor, a financial institution, and multiple healthcare providers.
The company has implemented additional safeguards in response but acknowledged that such AI-enabled cybercrime is expected to become increasingly common as AI tools lower the barrier to sophisticated attacks.
In response to the report, Satish Swargam, principal security consultant at Black Duck, noted that hackers are leveraging AI chatbots like Claude to plan and execute cyberattacks with far less effort, enabling even novices to launch complex operations.
"In this case, AI security controls have helped in identifying unethical use of AI chatbots, but they are often too late in preventing an attack. Interestingly, the AI chatbot also helped determine the ransom amount to be demanded from the breached company in exchange for not disclosing the stolen data. Companies should proactively address these vulnerabilities when using AI tools by adopting robust cybersecurity measures such as DLP controls and staying abreast of technological advancements to prevent such scenarios and ensure uncompromised trust in software, especially in today's regulated and AI-powered world," Swargam stated.
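The DLP controls Swargam recommends typically inspect outbound data for sensitive identifiers before it leaves the network. As a minimal, purely illustrative sketch (not any specific product's implementation), a pattern-based check for identifiers like Social Security numbers might look like this; real DLP engines add checksum validation, contextual analysis, and ML-based classification:

```python
import re

# Hypothetical, simplified patterns for illustration only.
# Production DLP systems use far richer detection logic.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # e.g. 123-45-6789
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # 13-16 digit runs
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def should_block(text: str) -> bool:
    """Flag a transmission for blocking if any sensitive pattern matches."""
    return bool(scan_outbound(text))
```

Such a check would sit at an egress point (email gateway, API proxy, file-upload handler) and quarantine or block flagged transmissions for review.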
Nivedita Murthy, senior security consultant at Black Duck, added, "Attackers using AI to improve their attack methods or increase automation is not surprising. However, in this case, it is interesting to note that Claude Code had a wealth of information on which organisations were vulnerable and where. It also freely gave away this information in the form of an attack vector. What organisations need to really look into is how much the AI tools they use know about their company and where that information goes."
"While AI usage has been highly beneficial to all, organisations need to understand that AI is a repository of confidential information that requires protection, just like any other form of storage system. Accountability and compliance are core requirements of doing business. While embracing AI at scale, these two factors need to be kept in mind."
Anthropic's findings align with broader trends.
Just a day prior to Anthropic's disclosure, cybersecurity firm ESET reported similar usage of AI in ransomware campaigns. The company said its researchers have uncovered PromptLock, the first known ransomware strain powered by generative AI.