Stack Overflow developer survey identifies “trust gap” in AI development tools

More developers are using AI tools, but scepticism about their output has risen

Stack Overflow’s annual Developer Survey provides a useful snapshot of the thoughts and requirements of the global developer community. This year's results show increasing use of AI tools, but also increasing doubt that the output can be trusted.

Now in its fifteenth year, the Stack Overflow Developer Survey collected insights from over 49,000 developers representing 177 countries and 314 different technologies. This year’s survey included a new focus on AI tools and found widespread use combined with considerable distrust of the output.

84% of those who took part now use or plan to use AI tools, an increase from 76% in 2024. Yet this broadened adoption is accompanied by rising scepticism: 46% reported distrusting the output of AI tools, up sharply from 31% last year.

This "trust gap" should concern enterprises keen to integrate AI into production. Developers are wary of relying on AI-generated code without human intervention.

“The growing lack of trust in AI tools stood out to us as the key data point in this year's survey, especially given the increased pace of growth and adoption of these AI tools. AI is a powerful tool, but it has significant risks of misinformation or can lack complexity or relevance,” said Prashanth Chandrasekar, CEO of Stack Overflow.

“With the use of AI now ubiquitous and ‘AI slop’ rapidly replacing the content we see online, an approach that leans heavily on trustworthy, responsible use of data from curated knowledge bases is critical. By providing a trusted human intelligence layer in the age of AI, we believe the tech enthusiasts of today can play a larger role in adding value to build the AI technologies and products of tomorrow.”

There has been considerable angst about the impact of AI tools on software development, with the market for entry-level skills being significantly eroded as automation eats away at lower-order tasks. The problem developers in this survey have highlighted is that AI tools are not good enough at the tasks they are increasingly being given.

Debugging AI-generated code was cited as a significant frustration.

These concerns highlight a continuing need for human expertise during code review – especially in critical systems. AI may accelerate code generation, but enterprise governance is jeopardised if poor quality, poorly secured code slips through the net.

Stack Overflow’s findings also match another recently published survey which found that AI coding assistants slow experienced engineers down rather than enabling greater productivity – which has consistently been the pitch from AI vendors.

AI agent adoption is slow. Only 31% of developers use them now, 17% are planning to, and 38% have no intention of adopting them. Yet satisfaction among users is high, with 69% of those using them saying AI agents boost productivity.

Interestingly, despite increasing numbers of CEOs saying the quiet part out loud and acknowledging that yes, they do expect to be able to cut their workforces, 64% of surveyed developers did not perceive AI as a risk to their jobs, though this has slipped slightly from 68% in 2024.

Discernible among the survey results was the fact that even as AI reshapes workflows, the developer community retains a strong desire for human-led knowledge sharing.

Stack Overflow (84%), GitHub (67%), and YouTube (61%) are the top platforms for learning and collaboration.

89% visit Stack Overflow frequently. Ironically, 35% turn to the platform specifically after facing issues with AI-generated answers.

For learning and upskilling, 69% of developers pursued new languages or techniques last year, and 44% tapped AI tools to support their learning—a jump from 37% in 2024. However, emergent AI-driven practices like "vibe coding" remain uncommon in professional settings, with 77% choosing not to go there.