A brief look at AI-enhanced security solutions

IT leaders aren't ready to hand over security to a benevolent machine just yet, but their next purchase will probably have an AI label

It's easy to see why AI might be a good fit for IT security. The rapidly changing cast of threat actors, the shady exploit markets, the rate at which malware mutates into more virulent or more targeted strains, and the fact that ransomware can bring down a network in the blink of an eye - all of this is too much even for squads of highly trained professionals to keep up with. And most security teams are chronically understaffed and arguably underfunded too.

AI-enhanced security tools use machine learning techniques to baseline what 'normal' looks like. They can react in real time as soon as the bounds of normality are crossed, alerting security staff, blocking suspicious activity, or even engaging proactively to neutralise the perceived threat.
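The baselining idea can be illustrated with a minimal sketch. This is not how any particular product works - real systems model many signals with far richer techniques - but it shows the principle: learn the bounds of normality from historical data, then flag readings that stray outside them. The three-standard-deviation threshold and the traffic figures are illustrative assumptions.

```python
# Illustrative sketch of baselining 'normal' and flagging deviations.
# The threshold and data are assumptions for demonstration only.
from statistics import mean, stdev

def build_baseline(samples):
    """Summarise historical observations as (mean, standard deviation)."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a reading that falls outside the learned bounds of normality."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Baseline built from hourly request counts during a quiet period (synthetic)
normal_traffic = [100, 110, 95, 105, 98, 102, 107, 99, 103, 101]
baseline = build_baseline(normal_traffic)

print(is_anomalous(104, baseline))   # within normal bounds
print(is_anomalous(500, baseline))   # far outside: raise an alert
```

In practice the interesting design choices are what happens next: whether crossing the threshold merely alerts a human, automatically blocks the activity, or triggers an active response - the spectrum of autonomy discussed below.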

As an umbrella term, AI covers a wide spectrum. After all, AV software and email filters have used pattern matching for years, and machine learning itself spans everything from relatively simple decision trees to fiendishly complex deep neural networks. Some AI-enhanced security tools are simple alerting systems, while others are more or less fully autonomous.

[Chart] Base: 130 IT leaders with experience of AI-enhanced security solutions

'AI-enhanced security' also covers a range of deployments, from on-premises appliances to cloud and hybrid cloud applications.

Computing Delta research among 130 organisations with experience of AI-enhanced security solutions found that protecting the corporate network, guarding endpoints and filtering email were the most common use cases for the technology.

Other areas where it was felt it could add protective value included cloud security, identity and access management, and fraud prevention. The more niche IoT and application security use cases were mentioned less frequently.

Interestingly, user behaviour modelling, an area of analytics highly suited to machine learning, was not among the most important use cases, which may show that most firms are looking to augment traditional security solutions at this stage, rather than bringing in something new. It could also reflect nervousness about overstepping the fuzzy line between protecting against insider threat and surveillance, or the general immaturity of the market.

End-to-end or specialised solutions?

The holy grail, perhaps, would be an all-in-one, end-to-end, intelligent protective shield, with protective tentacles reaching out to the edge, to mobile devices, across cloud applications and within data centres, as well as covering the corporate network. In theory, such a system could replace firewalls, anti-virus software and SIEMs, allowing IT teams to pass responsibility for security over to a benign machine.

Is this a possibility? No, said the IT leaders we spoke to. At least not yet. Maybe never.

A fully autonomous security system would raise all sorts of issues, they pointed out. How can you trust it will learn the right lessons and act on them in the correct way? Who would be responsible in the event of a failure?

Rather than taking over, for most organisations AI is a sort of robot exoskeleton. It adds muscle to existing capabilities, making defences faster and more capable, with the possibility of eventually dialling up the levels of autonomy as and when this becomes feasible and once trust has been built.

Which part is AI really?

There's no doubt that AI has an increasingly important part to play in cyber-security, but the IT leaders we interviewed were having a hard time navigating the AI-enhanced security marketplace.

There was some suspicion that the AI featured so prominently in marketing collateral might be little more than a bolt-on accessory. "Make sure you dig into the details," said one IT leader. "Which part is AI really? Is it just some little part or is it really fundamental?"

AI is probabilistic rather than deterministic, and the exact tipping points that lead it to decide one way or another may be virtually impossible to derive by working backwards from the decision, and that's if you have access to the code in the first place.

According to our respondents, security vendors themselves often struggle to explain how their products work, adopting sales approaches that range from 'blinding with science' to brushing aside the technicalities altogether.

"Maybe there is no AI. Perhaps it's just some guy in a data centre somewhere fiddling with the configurations," mused one IT head in a conspiratorial mood.

Even more important, does it work? And how can you tell? The figure above shows the main measures of success considered by our respondents. The first two, threats detected and threats that get through, are relatively straightforward to measure so long as you have other layers in place that can detect the malware that the AI might miss. But measuring changes in false positives and working out how much IT time is saved by the AI are complex, time-consuming longitudinal tasks.
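The first two measures can be sketched as a simple calculation, assuming - as the text notes is itself the hard part - that some downstream layer has labelled each event after the fact. The function and data here are hypothetical, for illustration only.

```python
# Illustrative sketch of two success measures: the share of real threats
# the AI detected, and the share of benign events it wrongly flagged.
# Assumes each event has been labelled after the fact by another layer.

def detection_metrics(events):
    """events: list of (flagged_by_ai, was_real_threat) boolean pairs."""
    threats = [flagged for flagged, real in events if real]
    benign = [flagged for flagged, real in events if not real]
    caught = sum(threats)
    false_pos = sum(benign)
    return {
        "detection_rate": caught / len(threats) if threats else 0.0,
        "miss_rate": 1 - caught / len(threats) if threats else 0.0,
        "false_positive_rate": false_pos / len(benign) if benign else 0.0,
    }

# Synthetic log: (AI flagged it, it was actually malicious)
log = [(True, True), (True, True), (False, True),     # 2 of 3 threats caught
       (True, False), (False, False), (False, False)]  # 1 of 3 benign flagged
m = detection_metrics(log)
print(m["detection_rate"], m["false_positive_rate"])
```

The arithmetic is trivial; the labelling is not. Without an independent layer to catch what the AI misses, the miss rate is unknowable, which is precisely the respondents' point.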

Worth the money?

Will AI-enhanced security solutions save money? Our audience was split on this. On one hand, there's the potential for reducing the number of security solutions required, and possibly reducing headcount, although most security teams are already pretty pared back. On the other, savings, if they are seen at all, will take time to arrive.

In the first stages of a deployment, AI will likely be put to work on tedious manual tasks to see how well it performs. As such it will be one layer on top of many others. The trouble is, it's likely to be an expensive layer.

The price of AI-enhanced security solutions was the number one complaint we came across. One CIO said a vendor came in with a price four times higher than his most extreme estimate and refused to negotiate on the figure, instead trying to tie him into a longer contract, something he was reluctant to do given that AI-security is an immature market. Other respondents mentioned a lack of transparency around pricing and also a reluctance to explain the product roadmap.

In comparison with other defences, AI-based security systems don't require all that much integration with other systems, and being self-educating they can be left to learn and improve over time. But that should not be confused with being plug'n'play: they still need to be configured, and in a complex environment this can take significantly longer than anticipated.

In summary, our respondents felt that AI is an increasingly important element in a layered defensive setup, but that the market is not yet mature enough for AI-enhanced defences to replace other solutions, except perhaps in a few areas such as email. As with any security solution, they advise a rigorous regime of testing before making a decision, which trustworthy vendors are normally happy to accommodate, and they implore vendors to be more upfront about pricing, explainability and product roadmaps.

John Leonard
