AI-driven attacks are fast, difficult to spot and launched at scale. Toby Lewis of Darktrace argues that defensive AI is the only viable response
Artificial intelligence offers a step change in the way we approach security - but criminals are taking the same approach. The emergence of AI-enhanced malware is making cyber-attacks exponentially more dangerous, and harder to identify.
As AI-driven attacks evolve, they will become almost indistinguishable from genuine activity, and will be conducted at unprecedented speed and scale. In the face of offensive AI, CISOs must look to defensive AI that can fight back, detect even the most subtle indicators of attack in real time, and respond with surgical precision to neutralise threats - wherever they strike.
In this session, recorded at last week's Cybersecurity Festival - Chapter 2, Darktrace's head of threat analysis Toby Lewis discusses how cyber-criminals are leveraging AI tools to create sophisticated cyber weapons; what an AI-powered spoofing threat may look like, and why humans will not be able to spot it; and why defensive AI technologies are uniquely positioned to fight back.
Please be aware that this video is sponsored by Darktrace. When you watch it, we, Incisive Media, will use the lawful basis of 'legitimate interests' to pass your contact details on to them. Their use of your data will be governed by their privacy policy.