How AI is being used for cyberattacks

‘Something that may have taken a couple of days, they’re doing it in minutes now’

At a recent media event in London, red and blue team leaders at Orange Cyberdefense, the Paris-headquartered cybersecurity subsidiary of Orange Telecom, revealed how AI tools are being used by attackers and how the threat landscape is evolving.

With the exception of deepfake impersonations, there is nothing particularly new about the way AI is being used for cyberattacks. Rather, it is being adopted to speed up and automate existing processes and to make fake sites and messaging more convincing.

Which is no cause for complacency. Far from it. Hackers may now do in minutes or hours what used to take them days. And in the time-honoured tradition of IT marketing, that will give them time to focus on tasks that are more valuable and productive - and that’s not good news for the rest of us.

“Threat actors are rapidly utilising new techniques and technologies, such as AI, then tagging them along with VPNs and they're just sailing in. And once they're in, it's game over,” said CSIRT analyst Samantha Caven, a member of the blue (defensive) team.

“They’re using it to evade EDR [endpoint detection and response] and ultimately manipulate users. Something that may have taken a couple of days, they’re doing it in minutes now.”

Moving sideways faster

More specifically, threat actors are using AI to become more productive in their reconnaissance and fuzzing (probing defences with unexpected data) and in chaining attacks together.

“We're utilising AI to help us move laterally very, very quickly,” said red teamer and pen testing team lead Stuart Kennedy, whose job it is to try to evade the defences put up by the blue team, emulating the real-world conditions faced by organisations.

Used in combination with a pen testing tool like BloodHound, which reveals hidden relationships within an Active Directory environment, AI hacking tools can tell the threat actor who to target, what to do and what to look for, he added. “Then I don’t need to think about any of that.”

Instead, he can come in at the end of the process, when fine discretion and human experience are required. “As a hacker it isn’t going to replace my job, but it’s a tool to help me move quicker.”

Autonomous attack agents and deepfakes

Attackers often use open source tools that are free to download and adapt, including, worryingly, attack-focussed LLMs that can be downloaded from Hugging Face and other model repositories. Kennedy said he knows of two such tools explicitly targeting black hats, and is aware of efforts to build ever more effective autonomous attack agents.

Those behind such tools may be overt about their offensive intent (at least until they are taken down), but most of the tools favoured by hackers, including Cobalt Strike and BloodHound, are legitimate security testing products. It’s the same with AI: these are dual-use technologies.

Freely downloadable tools can also be used to create convincing deepfake videos based on just a photo plus a few seconds of audio. Infamously, an employee at engineering firm Arup was tricked into authorising a fraudulent multimillion-pound transaction on the basis of a deepfake video call. More often, though, such videos or voiceprints are used in support of other methods, to build trust or increase pressure to act as part of a multi-pronged attack.

AI may also be used to create polymorphic malware that changes over time to avoid leaving telltale signatures that can be picked up by virus checkers and propagated by threat intelligence teams. While this is an emerging trend with few samples picked up so far in the wild, that doesn’t mean it’s not out there, just that it hasn’t been detected, Kennedy said, adding that his red team is using AI models to generate polymorphic malware as it engages its blue adversary.

Polymorphism is also present in phishing emails, with small changes and unique details introduced to increase personalisation and reduce the duplication that allows gateways and filters to block messages. Phishing messages are much better written these days too, thanks to LLM chatbots like Claude and ChatGPT, and AI is also being used to generate more convincing fake landing pages. Fortunately, in the absence of the human touch, the believability factor is often lacking.

“The English is better, but the context is often worse than if human-generated,” commented a participant in a recent Computing survey. Again, though, we’d be unwise to rest easy: improving that nuance will no doubt be an area that threat groups are working on right now.

Another growth area is so-called cognitive attacks, as practised by state-backed hacktivist groups, where the aim is to alter opinion on a large scale, tip elections or reduce trust in institutions. AI tools allow for scale and tailored narratives.

AI in defence

But as with any automation tool, the main goal in using AI is to increase efficiency, and that cuts both ways. Defenders are also using AI in detection, triaging and mitigation, and there is no reason to believe that attackers have an advantage at the moment, according to Orange Cyberdefense CEO Hugues Foulon. “We are also using AI to try to protect, to defend our customers, so we don't see yet an imbalance thanks to Gen AI,” he said.

Defence is about finding patterns and acting quickly when they are found, and even polymorphic malware and messages exhibit telltale signs, because GenAI itself is to some extent predictable.

However, the blue team, which is involved in testing defensive tools and techniques, is proceeding with caution in its use of the GenAI solutions on the market, some of which are hyped but unproven. There’s still the question of how far their output can be trusted. “We’re taking it a bit slowly,” said Caven. “We want to make sure it's right first; the last thing we need is chaos.”