Microsoft: last year we tracked 200 major threat actors, now it's 300

Microsoft chief security adviser Sarah Armstrong Jones calls for more collaboration on AI and security

Microsoft currently tracks 300 large-scale threat actors, meaning state-sponsored groups and major ransomware gangs; last year that number was 200.

Not only is the number of serious cyber actors growing, but they are increasingly showing a willingness to up the ante, moving from "disruptive to destructive activities", according to Microsoft chief security adviser Sarah Armstrong Jones. They are also collaborating more closely.

The war in Ukraine saw more cyberattacks in four months than occurred in the previous eight years, she told the audience at Computing's Cybersecurity Festival last week, and the war in Gaza has also seen a leap in incidents. At the same time, cybercrime-as-a-service actors and access brokers enable attackers to move quickly, allowing them to unleash a ransomware attack within as little as 15 minutes of breaching an organisation.

"In one year, we have suddenly gone through the roof with some of these large scale for actors in particular, in terms of what they're doing and their level of aggressiveness," said Armstrong Jones.

"Cybersecurity really has become one of the most challenging and most defining issues of our time."

It's against this backdrop that we need to consider the impact of AI on cybersecurity.

Generative AI is useful to both attackers and defenders. For the former, AI is just another tool in the toolkit, especially useful for amplifying what they are already doing: reconnaissance work, refining phishing emails, spreading disinformation and writing malware code. Nation-state actors are actively employing AI in their operations, she said.

Meanwhile, for defenders "we're at a point where we really think AI is a paradigm shift," Armstrong Jones claimed, mentioning the availability of security copilots like Microsoft's, which can distil the vast volume of threat intelligence data into an easily digestible form and reduce the time lag between vulnerability and fix. "We want to tip the balance into the hands of defenders. We want more intelligence, more data, and more collaboration."

Big tech has an enormous role to play, she went on, as collectively the technology behemoths have access to vast volumes of data across industries and nations, and to infrastructure that few others, even governments, can aspire to. For example, Microsoft collects 78 trillion telemetry signals every day, and employs thousands of analysts and data scientists to make sense of them. However, much of this information is held behind closed doors.

Armstrong Jones spoke in praise of collaborative initiatives such as the Bletchley Declaration, in which 28 nations and blocs came together to collaborate on frontier AI safety, the more recent Tech Accord signed in Munich to counter AI threats to democratic elections, and last year's US Executive Order on safe, secure and trustworthy AI, which called on big tech to step up to the challenge.

"It's really important we share these lessons learned," said Armstrong Jones. "In particular, we need to invest time and research and development, thinking about secure software by design and by default. We're used to talking about security, resilience, privacy, we'll need to bring AI into that conversation as well."