The new threats: trust attacks and AI malware

The latest methods are designed to manipulate, not destroy, says Darktrace

Among the newer types of cyber threats are two which use subtlety rather than brute force to achieve their ends: trust attacks and malware that adapts intelligently to its surroundings.

In a presentation at Computing's Enterprise Security and Risk Management Summit Live event last week, Greg Charman, senior cyber security manager at vendor Darktrace, gave examples of each.

One customer, a legal outfit, was prosecuting a case. The defendant, a hacker, managed to write code that navigated the firm's network, seeking out and extracting files relevant to his case.

"But what if instead of extracting the files it had changed them?" asked Charman. This is the nature of a trust attack: it is about silently undermining credibility and reputation.

Another example would be in the oil and gas industry, where competitors and adversaries frequently try to shut down oil rigs, knowing that any downtime would cost millions per hour.

"But if I was in cyber and I was really looking to affect a company like that, perhaps I wouldn't target an oil rig. Perhaps I'd target the seismic data used to pinpoint places to drill," said Charman.

Such an attack could be made extremely difficult to detect by making only very small changes and operating sporadically, he went on. The effect would be to render the data untrustworthy, which would be hugely costly for the company to put right.

"Perhaps my attack would be a piece of code that turns on for 15 seconds a month and just alters various data points. You can imagine the weeks or months it would take for an organisation to identify that, and the implications for the drilling teams trying to locate the resources."
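The kind of attack Charman describes can be illustrated with a toy simulation. In this hypothetical sketch (all names and numbers are invented, not Darktrace's), a handful of readings in a dataset are nudged by a fraction of a per cent; a naive integrity check on aggregate statistics never notices:

```python
import random

def sporadic_tamper(readings, points_per_burst=3, max_shift=0.002, rng=None):
    """Toy illustration only: nudge a few data points by a tiny fraction,
    as a sporadic trust attack might. All names here are hypothetical."""
    rng = rng or random.Random(42)
    tampered = list(readings)
    for _ in range(points_per_burst):
        i = rng.randrange(len(tampered))
        tampered[i] *= 1 + rng.uniform(-max_shift, max_shift)
    return tampered

# Simulated "seismic" readings, then one 15-second burst of tampering:
clean = [100.0 + 0.1 * i for i in range(1000)]
dirty = sporadic_tamper(clean)

# An integrity check that only compares aggregates misses the change:
drift = abs(sum(dirty) - sum(clean)) / sum(clean)
print(f"relative drift in the aggregate: {drift:.6%}")  # tiny, far below any 1% alarm threshold
```

The point of the sketch is that each burst is individually negligible; only point-by-point comparison against a trusted baseline, or behavioural monitoring of the code making the writes, would reveal it.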

Another way that attackers can hide their presence is by adding intelligence to their malware so that it can evade detection and optimise its operations.

"We already have things like polymorphic malware which changes its identifiable characteristics to avoid detection," Charman said. "If you add AI to it you have malware that's able to enter the environment, observe its surroundings, and adapt and change itself to remain hidden on the network. As well as using the environment to evade detection it also uses it to gain a foothold in the network."
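The core trick of polymorphic malware that Charman mentions can be shown with a harmless toy: the same payload is re-encoded under a fresh one-time XOR key for every copy, so each copy has different bytes (and a different hash) while the decoded behaviour is identical. This is a simplified sketch of the concept, not any real malware's technique:

```python
import hashlib
import os

def repack(payload: bytes) -> tuple[bytes, bytes]:
    """Toy polymorphism: re-encode the same payload under a fresh random
    XOR key, so every generated copy looks different on disk."""
    key = os.urandom(len(payload))
    encoded = bytes(p ^ k for p, k in zip(payload, key))
    return encoded, key

def unpack(encoded: bytes, key: bytes) -> bytes:
    """Recover the original payload from a copy and its key."""
    return bytes(e ^ k for e, k in zip(encoded, key))

payload = b"BEHAVIOUR-UNCHANGED"
copy_a, key_a = repack(payload)
copy_b, key_b = repack(payload)

# Signature matching on the stored bytes sees two unrelated files...
print(hashlib.sha256(copy_a).hexdigest())
print(hashlib.sha256(copy_b).hexdigest())
# ...yet both decode to exactly the same payload:
print(unpack(copy_a, key_a) == unpack(copy_b, key_b) == payload)
```

This is why signature-based detection struggles against polymorphism, and why the AI-assisted variant Charman describes, which also adapts its runtime behaviour to the network, is harder still: defenders have to model what normal looks like rather than match known bad bytes.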

Such malware could search out zero-day flaws and exploit them, thus entrenching itself further. In case this sounds like science fiction, the US Defense Advanced Research Projects Agency (DARPA) ran its Cyber Grand Challenge in 2016 to automate vulnerability patching. Teams competed to create code that automatically detects vulnerabilities and then patches them, and the top teams all produced working prototypes. In that case the mission was defensive, but the same techniques could be used by the other side too.

AI can also be used to increase the efficiency of a spam or mass phishing or malware attack, by autonomously finding out what works and doing more of it while dropping the less effective methods.
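"Finding out what works and doing more of it" is, in essence, an explore/exploit loop. A minimal sketch of the idea, using an epsilon-greedy bandit over invented lure templates and made-up response rates (nothing here comes from Darktrace):

```python
import random

def epsilon_greedy_campaign(true_rates, rounds=5000, epsilon=0.1, seed=0):
    """Toy epsilon-greedy bandit: mostly reuse the template with the best
    observed response rate, occasionally trying the others.
    The response rates are invented for illustration."""
    rng = random.Random(seed)
    sends = [0] * len(true_rates)
    hits = [0] * len(true_rates)
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))  # explore a random template
        else:
            # Exploit: pick the best observed rate (untried arms count as 1.0
            # so every template gets sampled at least once).
            arm = max(range(len(true_rates)),
                      key=lambda a: hits[a] / sends[a] if sends[a] else 1.0)
        sends[arm] += 1
        hits[arm] += rng.random() < true_rates[arm]  # simulated response
    return sends

# Three hypothetical templates with different simulated response rates:
sends = epsilon_greedy_campaign([0.02, 0.05, 0.11])
print(sends)  # traffic concentrates on the most effective template
```

Even this crude loop shifts almost all volume onto the best-performing template within a few thousand sends, which is the few-per-cent efficiency gain Charman refers to, achieved with no human in the loop.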

"Even if it only increases the efficiency by a few per cent, that's still hugely valuable to the malware operatives," Charman said.