This is what cyber security will look like once attackers weaponise AI

Artificial intelligence will enable threats to learn as they go, remaining undetected for longer

Researchers from cyber defence company Darktrace have said that advances in artificial intelligence will make it increasingly difficult to tell whether an attack is carried out by a human or a machine.

"We expect AI-driven malware to start mimicking behaviour that is usually attributed to human operators by leveraging contextualisation; but we also anticipate the opposite, i.e., advanced human attacker groups utilising AI-driven implants to improve their attacks and enable them to scale better," said Darktrace director of threat hunting Max Heinemeyer.

In a new report, the Darktrace team describe three threats that they have caught ‘in the wild' over the past year. They go on to predict what similar attacks would look like if powered by AI and contextual awareness - which is how human threat actors manage to blend into a compromised environment.

‘Autonomous' malware

The first scenario follows a victim at a law firm whose machine was infected with the Trickbot modular malware. Although the human security team detected the threat, it could not react fast enough to prevent the infection, which spread to more than 20 other devices on the network running outdated SMB services.

Researchers spotted the Empire PowerShell post-infection framework on two of the machines, delivered as part of Trickbot's modular design. This framework is typically used under the direct control of human attackers.

AI attack scenario

Darktrace expects AI-driven malware in the future to spread based on a series of autonomous decisions, tailored to the infected system. For example, a WannaCry variant that doesn't rely on a single form of lateral movement (EternalBlue) but can switch techniques on the fly.
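At its simplest, "switching techniques on the fly" is a fallback chain over propagation methods. The sketch below is deliberately abstract: the technique names come from the article, but the probe functions and host fields are hypothetical placeholders, and no exploit logic is included.

```python
# Abstract sketch of technique-switching as a fallback chain.
# Host attributes ("smbv1", "reachable_admin_share") are invented
# placeholders; real probing and exploitation code is deliberately absent.

def try_eternalblue(host):
    # Placeholder: would only apply where an outdated SMB service is exposed.
    return host.get("smbv1", False)

def try_stolen_credentials(host):
    # Placeholder: would reuse credentials harvested on the infected machine.
    return host.get("reachable_admin_share", False)

TECHNIQUES = [
    ("EternalBlue", try_eternalblue),
    ("credential reuse", try_stolen_credentials),
]

def pick_technique(host):
    """Return the name of the first technique that fits this host, or None."""
    for name, attempt in TECHNIQUES:
        if attempt(host):
            return name
    return None

print(pick_technique({"smbv1": True}))                  # EternalBlue
print(pick_technique({"reachable_admin_share": True}))  # credential reuse
print(pick_technique({}))                               # None
```

The point of the scenario is that the selection step, which today requires a human operator, could instead be driven by what the malware observes about each host.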

‘The malware can learn context by quietly sitting in an infected environment and observing normal business operations, such as the internal devices the infected machine communicates with, the ports and protocols it uses, and the accounts which use it', said the researchers. This would obviate the need for a traditional command and control (C2) channel, making the attack more difficult to detect.

Intelligent evasion

The second case occurred at a power and water company, where malware on a device had ‘taken steps' to disguise its activity as legitimate. This involved downloading a file from the Amazon S3 cloud and using self-signed SSL certificates to trick standard security controls, plus sending communication from ports 443 and 80 to blend into regular network traffic.

Darktrace picked the malware up because the destination IP address of the traffic didn't fit with the rest of the network.

AI attack scenario

Contextualisation will become a key part of AI-based malware, enabling it to learn and adapt to the environment: both targeting weak points and hiding its true nature.

Some targeted attacks today already use a basic version of these techniques, such as only establishing C2 channels during business hours and, as above, communicating through popular ports. However, human attackers have to guess at what is ‘normal'; AI malware will learn and know, making detection much more difficult.

Low and slow attacks

Many cyber attacks happen quickly and without warning, but ‘low-and-slow' crime - which is difficult to detect - is on the rise. Darktrace uncovered such a case at a medical technology company, where data was being exfiltrated so slowly and in such small quantities that it failed to set off any alerts in legacy security tools.

The infected device connected to an external IP address multiple times. Each connection was less than 1MB, but the total volume of data sent amounted to 15GB.
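A quick back-of-the-envelope check (assuming 1MB = 2^20 bytes and 15GB = 15 x 2^30 bytes, since the report doesn't specify) shows why per-connection alerting misses this pattern: moving 15GB in sub-1MB pieces takes thousands of individually unremarkable connections.

```python
# Rough arithmetic for the exfiltration described above. The 1MB and 15GB
# figures come from the report; the binary byte definitions are assumptions.
MB = 2 ** 20
GB = 2 ** 30

per_connection = 1 * MB   # each connection carried *less than* 1MB
total = 15 * GB           # total volume exfiltrated

# Because each connection stays under 1MB, at least this many
# connections were needed to move the full 15GB:
min_connections = total // per_connection
print(min_connections)    # 15360
```

No single connection looks anomalous; only the aggregate volume over time does, which is exactly what legacy per-event rules fail to track.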

AI attack scenario

As well as providing a conduit for these types of attacks, artificial intelligence could learn what rates of data transfer would alert security solutions, dynamically adjusting the size and timing of its exfiltration to avoid detection.

‘As soon as the malware no longer uses a hard-coded data volume threshold but is able to change it dynamically, based on the total bandwidth used by the infected machine, it will become much more efficient. Instead of sending out 20KB every 2 hours, it can increase the data volume exfiltrated during suitable times, e.g. when the employee whose laptop is infected is video-conferencing and sending out a lot of data anyway,' wrote Darktrace.
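The idea in the quote can be illustrated with a toy simulation: a legacy rule alerts on any single transfer above a fixed size, while a hypothetical adaptive exfiltrator sends a small fraction of whatever the machine is already transmitting. All numbers below are illustrative assumptions, not Darktrace's figures.

```python
# Toy simulation of the adaptive-exfiltration idea quoted above.
# Thresholds, bandwidth figures and the 5% fraction are all invented.

def static_alert(bytes_sent, threshold=5 * 2 ** 20):
    """Legacy rule: alert only if one transfer exceeds a fixed 5MB threshold."""
    return bytes_sent > threshold

def adaptive_chunk(machine_bandwidth, fraction=0.05):
    """Hypothetical attacker logic: take a fixed fraction of the machine's
    current outbound traffic, so chunks grow during heavy legitimate use
    (e.g. video-conferencing) and shrink when the machine is idle."""
    return int(machine_bandwidth * fraction)

# Observed outbound bandwidth per hour (bytes): idle, idle, video call,
# video call, idle.
hours = [200 * 2 ** 10, 300 * 2 ** 10, 80 * 2 ** 20, 90 * 2 ** 20, 150 * 2 ** 10]

stolen = 0
for bw in hours:
    chunk = adaptive_chunk(bw)
    assert not static_alert(chunk)  # every chunk stays under the threshold
    stolen += chunk

print(stolen)  # megabytes leave the network without a single alert firing
```

The simulation makes the report's point concrete: a hard-coded threshold never fires, yet the total exfiltrated volume is substantial, with most of it moved during the hours when heavy legitimate traffic provides cover.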

Another way in which AI can pose a threat is in scalability. There are only a finite number of hackers in the world; but automating some of the skill-intensive parts of an intrusion will lead to a higher return on investment for attackers.

The combination of increasingly sophisticated malware and AI's contextual understanding will represent ‘a paradigm shift' for the security industry, Darktrace concluded.