AI is coming for the pentesters, but devs look safe for now
AI is proficient at finding security holes but not at understanding business logic and architecture, finds Aikido Security report
Of all adopters of GenAI, the most enthusiastic group (with the possible exception of students) has been developers.
According to a study by Sapio Research on behalf of Aikido Security, 24% of production code is now generated by AI tools.
Moreover, 96% of the 450 security leaders polled believe AI-written code will be nearly perfectly secure within the next decade, with the largest group expecting to see this in 3 to 5 years. That is a remarkable vote of confidence, and it speaks to the pace at which AI coding tools are improving.
Nevertheless, we are not there yet, and there are signs that overenthusiastic adoption is causing security problems. One in five CISOs reported a major security incident caused by AI-generated code, with a further 49% saying it had led to a minor issue.
The report finds several reasons that AI-generated code can cause security problems. The first is cultural, with confusion over whether responsibility lies with the developer who used AI tools to create the code, the person who merged the code, or the security team. This is exacerbated by security teams and developers using different tools, and by general tool sprawl, which means that vulnerabilities can slip through the gaps.
“As organisations harness AI’s benefits, they also face rising cyber-risks as a result. The CISO’s role is to ensure security posture scales as fast as the technology," commented Christelle Heikkilä, former CIO/CISO at Arsenal FC.
Despite the confidence that AI tools will be able to write "near-perfect secure code" in the coming years, only 21% think AI will be able to do so without human oversight. That's because business logic and architecture need to be part of the picture if flaws are not to emerge, and few security leaders apparently believe that AI will be up to the task.
Elsewhere, however, AI tools are likely to displace human actors, according to the respondents. Ninety per cent expect AI to fully take over penetration testing, with an average timeline of 5.5 years.
“AI is viewed as capable of running the bulk of testing at scale,” the report says. “Different agents can now map attack surfaces, hunt CVEs, exploit weaknesses, and probe for web vulnerabilities. Together they provide coverage that humans struggle to match. Human testers will still be needed for complex business logic and creative attack chains.”
More teams also trust AI tools to spot and fix bugs in code, particularly as the tools improve and learn from past fixes.
If you're a current or aspiring cybersecurity leader, check out the Computing Security Leaders Summit on 26th March 2026. Packed with content including business continuity planning, bridging the cyber skills gap and cloud resilience, it promises to be full of insight and practical advice to take away. Register here for your free place.