IBM's DeepLocker malware targets people using their face and voice

The best way to counter AI malware is to build your own

Artificial intelligence is creating new jobs and opportunities today, just as the PC and cloud eras did before it. But, as in those eras, new threats are coming to light, and the biggest of them is AI itself.

"As machine learning matures into AI, nascent use of AI for cyber threat defense will likely be countered by threat actors using AI for offense," Rick Hemsley, managing director at Accenture Security, told us earlier this year.

So on the face of it, IBM's development of DeepLocker - ‘a new breed of highly targeted and evasive attack tools powered by AI' - seems to set a dangerous precedent.

There is method to the madness. IBM reasons that cybercriminals are already working to weaponise AI, and the best way to counter such a threat is to watch how it works.

While normal malware can be ‘captured' and reverse-engineered to figure out what makes it tick (and thus build a vaccine), it's much more difficult to analyse how a neural network reaches its decisions.

The company built DeepLocker to understand how existing AI models can be combined with malware techniques to create a new type of attack. Its proof-of-concept tool hides itself in other applications until it identifies its victim: when that unlucky individual is tagged (through indicators like facial recognition, geolocation and voice recognition), the malware strikes.

The AI model will only ‘unlock' the malware to begin the attack if it identifies certain criteria; these can be based on any number of attributes, including visual, audio, geolocation and system-level features. It's almost impossible to identify all possible triggers, making reverse-engineering the deep neural network (DNN) a difficult prospect.
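To make that concrete, here is a minimal Python sketch of the general technique. IBM hasn't published DeepLocker's code, so this illustrates the idea rather than its implementation; the face_embedding() stub and the TARGET_HASH value are hypothetical stand-ins. The point is that the code stores only a hash of the model's output, so an analyst who captures the sample can read every line without learning who the target is.

```python
import hashlib
from typing import List

def face_embedding(frame: bytes) -> List[float]:
    """Hypothetical stand-in for a face-recognition DNN that maps a
    webcam frame to a feature vector. A real model would go here."""
    digest = hashlib.sha256(frame).digest()
    return [b / 255.0 for b in digest[:8]]

# Conventional malware: the trigger condition sits in the code in plain
# sight, so reverse-engineering the sample reveals exactly who it targets.
def transparent_trigger(username: str) -> bool:
    return username == "alice@victim.example"  # readable in a disassembler

# DeepLocker-style trigger: only a hash of the quantised DNN output is
# stored (a hypothetical placeholder value below), so the code reveals
# neither the target's identity nor the conditions under which it fires.
TARGET_HASH = bytes.fromhex(
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
)

def concealed_trigger(frame: bytes) -> bool:
    embedding = face_embedding(frame)
    # Coarse quantisation keeps the derivation stable across
    # near-identical frames of the same face.
    quantised = bytes(int(x * 16) & 0xFF for x in embedding)
    return hashlib.sha256(quantised).digest() == TARGET_HASH
```

Enumerating every input that satisfies concealed_trigger() would mean inverting both the hash and the network, which is exactly why this class of attack is so hard to analyse.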

Making it work

Most firms don't willingly install WannaCry on any system, but that's exactly what IBM did to test DeepLocker. The firm hid the ransomware in a video conferencing application so that it couldn't be detected, and trained the AI model to unlock it based on facial recognition.

When the DNN recognised the right person in front of their PC through a webcam (remember, it's a video conferencing app), it supplied the key that unlocked the payload and locked down the victim's system.

The clever part of IBM's work is that it has turned a traditional weakness of black-box AI - the fact that you can't see inside it to understand how it reaches its decisions - into a strength.

"A simple ‘if this, then that' trigger condition is transformed into a deep convolutional network of the AI model that is very hard to decipher," wrote IBM's Marc Stoecklin. "In addition to that, it is able to convert the concealed trigger condition itself into a ‘password' or ‘key' that is required to unlock the attack payload."

IBM will be discussing its work at Black Hat USA 2018 today.