AI code exposing companies to mounting security risks
Devs using AI for speed raise red flags for software security
Developers have embraced AI-generated code, but too many are neglecting basic security checks.
The global tech industry is embracing a dramatic shift in how software is written, as AI-generated code rapidly becomes the norm across enterprises.
A survey by software supply chain platform Cloudsmith reveals that more than two in five developers (42%) now admit their codebases are predominantly AI-generated, accelerating productivity but simultaneously exposing companies to mounting security risks.
According to the study, only 67% of developers say they review code before deployment, leaving as many as a third shipping code without proper inspection.
Moreover, just 29% said they feel "very confident" in their ability to detect vulnerabilities in AI-generated or AI-assisted code.
The use of AI in software development presents an exciting opportunity, but it should not come at the cost of security, warns Glenn Weinstein, CEO of Cloudsmith.
"Software development teams are shipping faster, with more AI-generated code and AI agent-led updates," he said.
"AI tools have had a huge impact on developer productivity, which is great. That said, with potentially less human scrutiny on generated code, it's more important that leaders ensure the right automated controls are in place for the software supply chain."
AI integration has yielded major efficiency gains. The study found that AI tools are helping reduce manual workloads and accelerate release cycles. However, confidence in output remains mixed: 20% of developers said they completely trust AI-generated code, while 59% apply additional scrutiny – a cautious optimism tempered by the recognition of potential risks.
The risks are not theoretical. Cloudsmith cited the growing threat of "AI-specific exploits" such as slopsquatting, in which attackers pre-register malicious packages under the plausible-sounding names that AI coding assistants hallucinate.
A developer who installs one of these hallucinated suggestions without checking it, particularly when pulling in open-source libraries, can let a compromised package slip straight into production environments.
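Defences against this can be automated. Below is a minimal, illustrative Python sketch of a pre-install gate that queries PyPI's public JSON API to check whether a suggested package actually exists and how long it has been published; the 90-day age threshold and the check_package helper are hypothetical choices for illustration, not a tool referenced in the Cloudsmith survey.

```python
# Hypothetical pre-install guard against slopsquatting: before adding a
# dependency suggested by an AI assistant, verify that it exists on PyPI
# and was not registered only recently. The age threshold is illustrative.
import sys
from datetime import datetime, timedelta, timezone

import requests

MIN_PACKAGE_AGE = timedelta(days=90)  # assumed threshold, tune to taste

def check_package(name: str) -> bool:
    """Return True if `name` exists on PyPI and predates the age threshold."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code != 200:
        print(f"REJECT {name}: not found on PyPI (possible hallucination)")
        return False
    releases = resp.json().get("releases", {})
    # Earliest upload time across all released files.
    upload_times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    if not upload_times:
        print(f"REJECT {name}: no released files")
        return False
    age = datetime.now(timezone.utc) - min(upload_times)
    if age < MIN_PACKAGE_AGE:
        print(f"REJECT {name}: first published only {age.days} days ago")
        return False
    return True

if __name__ == "__main__":
    results = [check_package(pkg) for pkg in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)
```

A check like this could run in CI before dependency changes merge, complementing the human review that the survey suggests is slipping.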
Alarmingly, the study found that 17% of developers operate without any control policies over the use of AI in code development, and only a third use tools to enforce AI-specific policies. This governance gap could leave enterprises vulnerable to supply chain attacks and zero-day exploits that take advantage of insufficiently reviewed or misunderstood AI-generated code.
Despite these red flags, enthusiasm around AI remains high among major industry players.
Google, Microsoft and others are not only integrating AI deeper into development pipelines but are reshaping their internal workflows around it.
Google CEO Sundar Pichai disclosed last year that around 25% of the company's internal codebase was AI-generated as of late 2024, a figure likely to have risen since then (how much is in production is another matter – Ed.).
Microsoft CEO Satya Nadella went even further, stating at Meta's LlamaCon in April 2025 that up to 30% of the company's code may be AI-written.
Nadella expects the proportion of AI-generated code within the company to grow steadily in the years ahead.
Microsoft CTO Kevin Scott has previously predicted that by 2030, as much as 95% of all code will be generated by AI.