Google DORA: Software delivery caught up to AI coding tools
A Google DORA survey on AI-assisted software development found last year's delivery bottleneck resolved, but stability, trust and organizational issues remain.
AI coding tools, now ubiquitous among software developers, no longer delay software delivery but still impact stability and amplify organizational issues, a Google DORA survey found.
The Google research group's survey, conducted annually, measures software delivery performance in two main categories: speed and efficiency, termed throughput, and the quality and reliability of releases, which the report tracks inversely as instability. It also measures individual software developer outcomes such as code quality, friction and burnout.
Last year, DORA's survey of 3,000 respondents found a 1.5% decrease in software delivery throughput and a 7.2% decrease in delivery stability for every 25% increase in an organization's AI adoption. This year, drawing on 5,000 survey respondents and more than 100 hours of research interviews, DORA measured those outcomes differently, and the results diverged markedly from last year's, according to Nathen Harvey, DORA lead and developer advocate at Google Cloud.
"We're using standardized effects this year, looking at how much of a change there is as [respondents are] using more AI, and we measure those changes in standard deviations from the mean," Harvey said. "Essentially what we're saying is that these numbers are relative but show improvements. It's not a huge improvement, but it is decidedly an improvement."
DORA's State of AI-assisted Software Development report, released this week, offers some hypotheses about what accounted for this change.
"If AI is handling some of the grunt work underlying coding processes (scaffolding, boilerplate, routine transformations), developers may have more time to focus on deploying code, leading to increased software delivery throughput and ultimately to improved product performance," the report reads. "We could also be observing organizational systems adapting into more fruitful environments for AI."
These results resonated with one software engineering leader.
"I see more engineers finding a better collaborative relationship with AI tools," said David Strauss, chief architect and co-founder at WebOps company Pantheon. "Expectations are more reasonable, and models have improved in quality as well."
Harvey said the adverse effect of AI on software release stability is to be expected as a technology matures.
"It's no surprise, honestly, that we see throughput starting to inch up first before instability goes down," he said. "There's always pressure to move faster, move faster, move faster, and then stability kind of comes second."
AI trust issues rise
Other changes revealed in this year's survey included an increase in developer adoption of AI, from 76% in 2024 to 90% in 2025. More than 80% of respondents reported increased productivity and 59% reported an increase in code quality.
However, while AI adoption increased significantly, respondents' trust in the technology didn't keep pace, Harvey said: 30% of respondents said they trust AI "a little" or "not at all," down from 39.2% last year, while 70% said they trusted AI-generated outputs "somewhat," "a lot" or "a great deal," versus 87.9% in 2024.
Harvey interpreted these results as a healthy adjustment in expectations for what AI coding tools can do.
"The reality is you shouldn't trust something 100% — 100% trust in AI would be wrong," he said. "With software delivery instability continuing to increase, we have to make sure that we have checks in place to validate what's coming out."
One industry analyst said his own research shows a similar disconnect: IT decision makers are willing to pay more for software with high-quality AI features and say they trust AI-based decisions, yet frequently override those decisions in practice.
"I believe people are caught between awe for AI's current capabilities and frustration over its inability to truly understand the world," said Torsten Volk, an analyst at Omdia, a division of Informa TechTarget. "The probabilistic nature of AI makes it hard for people to fully understand and trust it. Sometimes AI provides responses that seem human-like, while at other times, its responses are illogical."
AI best practices emerge
Overall, the DORA report found that AI usage has begun to mirror and amplify existing organizational traits, both for better and for worse.
"We're seeing mixed results with AI across the people that we're looking at, and at the same time, we're seeing 90% of people using AI," Harvey said. "So it's clear that it's not whether or not you're using AI that's driving this, but rather, how you're using AI that's driving its impact."
Based on this year's survey results, DORA identified seven best practices common to organizations benefiting from AI:
A clear and communicated AI stance.
A healthy data ecosystem.
AI-accessible internal data.
Strong version control practices.
Working in small batches.
User-centric focus.
A quality internal platform.
DORA also identified how emphasizing different combinations of these practices can improve specific AI outcomes. For example, a team that wants to improve its product performance while using AI should focus on having accessible internal data, working in small batches and clarifying its AI stance.
"What I tell organizations looking to take advantage of AI is that their house better be in order," said Matthew Flug, an analyst at IDC. "Their workflows, processes, and security posture all need to be rock solid, because AI will find the gaps in the armor."
Beth Pariseau, a senior news writer for Informa TechTarget, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.