Deciphering AI's impact on cybersecurity: Friend or foe?

Cybersecurity stands to gain numerous benefits from AI, but so do bad actors

The landscape of business operations is undergoing a significant transformation with the integration of artificial intelligence (AI). Through automation, data analysis and predictive capabilities, AI is reshaping how businesses operate as companies look to capitalise on the opportunity to boost productivity.

That said, whilst there are myriad benefits to be gained from the use of AI in industry, businesses will require consistent education to guard against a new wave of AI-supported cyber threats that could have untold impacts if left unchecked.

The role of AI in cybersecurity

The cybersecurity sector stands to gain numerous benefits from the integration of AI. Ongoing research suggests a rapid rise in the adoption of AI for cybersecurity, with a significant number of IT decision-makers intending to invest in AI-driven security solutions within the next two years. This intention is exemplified in a study conducted by BlackBerry, in which 82% of surveyed IT decision-makers said they plan to allocate budget for AI security by 2025, with almost half aiming to do so by the end of 2023.

AI is helping to reinforce cyber defences in a number of ways, such as through predictive capabilities and rapid pattern recognition. These automated systems are making near-instant threat detection and response the norm in cybersecurity. With progress in AI showing no sign of slowing down, this positive impact on the cybersecurity industry should continue.
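
To make that concrete, the sketch below shows the flavour of pattern-based anomaly detection these systems automate, using scikit-learn's IsolationForest on a handful of made-up connection features. The features, values and threshold are illustrative assumptions for this article, not a description of any particular product.

```python
# Illustrative only: flagging unusual network events with an isolation forest.
from sklearn.ensemble import IsolationForest
import numpy as np

# Hypothetical features per connection: bytes sent, bytes received,
# session duration (seconds), distinct destination ports contacted.
baseline = np.array([
    [5_000, 20_000, 30, 2],
    [4_800, 18_500, 25, 1],
    [5_200, 21_000, 35, 2],
    [4_900, 19_000, 28, 1],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)

# A burst of outbound data across many ports scores as anomalous.
suspect = np.array([[900_000, 1_000, 600, 45]])
print(model.predict(suspect))  # -1 indicates an outlier worth investigating
```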

Unveiling the unseen risks of AI

As highlighted above, there is a sustained push to build AI services that benefit the wider public and keep them safe. However, there will always be bad actors in the shadows looking to manipulate the technology. Our experts at Trustwave have observed this growing threat landscape, with AI-backed tools such as WormGPT and FraudGPT being widely distributed. These tools enable inexperienced hackers to effortlessly acquire software that generates malicious code and offers AI-driven support for cybercriminal activities.

Likewise, the already advanced GPT-4 model has shown it is adept at impersonating customers, raising several concerns about authenticity for the foreseeable future. With discussions ongoing about the development of GPT-5, which is being touted as a possibly life-changing product, this apprehension will likely continue until this aspect of AI is addressed with comprehensive safeguards and regulation.

From what we know, bad actors are primarily using AI's natural language processing to create hyper-realistic and highly personalised phishing emails. This was highlighted in Trustwave's most recent report on cybersecurity threats in the hospitality sector, as well as in a number of other industries such as healthcare, financial services and manufacturing. In the report, we found that threat actors are using large language models (LLMs) to develop more sophisticated social engineering attacks, as LLMs can create highly personalised and targeted correspondence such as instant messages and emails.

This AI-generated content predominantly contains malicious links or attachments, primarily HTML attachments, which are employed for HTML smuggling, credential phishing and redirection. In recent research, Trustwave also found that around 33% of these HTML files employ obfuscation as a means of defence evasion. We expect an uptick in the frequency of phishing attacks, and they will become harder to detect as AI capabilities advance. Another disconcerting trend is the increasing prevalence of deepfake technology, which enables the creation of counterfeit audio or video content aimed at deceiving customers by mimicking authenticity.
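
As a rough illustration of what obfuscation as defence evasion can look like, the sketch below scans an HTML attachment for a few common JavaScript decoding indicators. The indicator list and the looks_obfuscated helper are hypothetical examples for this article, not Trustwave's detection logic.

```python
# Illustrative only: a crude heuristic scan for obfuscation indicators
# often associated with HTML smuggling in email attachments.
import re

OBFUSCATION_INDICATORS = [
    r"atob\s*\(",             # base64 decoding in JavaScript
    r"unescape\s*\(",         # legacy string decoding
    r"String\.fromCharCode",  # assembling payloads from character codes
    r"document\.write\s*\(",  # runtime injection of decoded content
]

def looks_obfuscated(html: str) -> bool:
    """Return True if the attachment matches any obfuscation indicator."""
    return any(re.search(pattern, html) for pattern in OBFUSCATION_INDICATORS)

sample = '<script>document.write(atob("PGgxPkhlbGxvPC9oMT4="))</script>'
print(looks_obfuscated(sample))  # True, so quarantine for deeper analysis
```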

Being intelligent for the year ahead

In the face of AI's latest advancements, cybersecurity needs to progress towards a more proactive, intelligent and efficient framework. By optimising security processes such as threat response, hunting, and the analysis of extensive datasets, AI presents significant potential for enhancing cyber defences.

AI can help consultants work more productively by automating time-intensive tasks such as scouring extensive datasets in real time, identifying patterns and detecting anomalies that could signal potential threats. The emergence of AI-supported products has enabled cybersecurity professionals to foresee and mitigate risks before they inflict substantial damage. Nevertheless, cybersecurity consultants must remain mindful that hackers can exploit these same capabilities. Staying ahead means conducting thorough vulnerability scans frequently and addressing any gaps identified by AI systems to ensure a robust defence.

Moreover, consultants should explore how AI can simulate cyberattacks, providing valuable insights for a resilient incident response strategy. This ensures that, should an organisation fall victim to an AI-powered cyberattack, resources are in place to mitigate the threat and minimise the fallout from the breach.
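
As a toy illustration of that simulation loop, the sketch below replays a few benign, made-up attack artefacts against a single detection rule and reports any gaps; the rule, samples and file names are assumptions for this article, and a real exercise would use purpose-built breach-and-attack-simulation tooling.

```python
# Illustrative only: replay simulated (benign) HTML attachments against a
# toy detection rule; any GAP found would feed the incident response plan.
import re

# One crude rule standing in for a full AI-driven detection pipeline.
RULE = re.compile(r"atob\s*\(|String\.fromCharCode")

# (file name, body, is_malicious) tuples, all invented for this sketch.
samples = [
    ("plain_invoice.html", "<h1>Invoice #1042</h1>", False),
    ("smuggled_payload.html", '<script>document.write(atob("dGVzdA=="))</script>', True),
    ("charcode_loader.html", "<script>String.fromCharCode(104, 105)</script>", True),
]

for name, body, is_malicious in samples:
    flagged = bool(RULE.search(body))
    status = "OK" if flagged == is_malicious else "GAP"
    print(f"{status}: {name} (flagged={flagged}, malicious={is_malicious})")
```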

Without a doubt, the ongoing surge in artificial intelligence technology is reshaping the cybersecurity landscape, along with a plethora of other sectors. While AI empowers organisations to detect threats and streamline security processes more efficiently, it simultaneously equips cybercriminals with new capabilities to inflict serious damage on a brand and its customers. AI and cybersecurity specialists must work hand in hand to ensure that the products of the future are well regulated and do not fall into the hands of bad actors.

Ed Williams is regional VP of pen-testing EMEA at Trustwave