Market research: AI coding tools fuel production problems
Recent reports show that AI-generated code adds instability and vulnerabilities in production, but auto-remediation tools face persistent organizational friction.
AI coding tools are here to stay, but so are the security and stability issues they introduce in production, which could be a thornier problem to solve.
Those conclusions are consistent across recent market research reports from Google's DORA research division, DevSecOps vendor Harness, and IDC. All these surveys found that AI coding tools are widely adopted: 90% of the 5,000 software developers who responded to DORA's State of AI-assisted Software Development survey reported using AI tools, and 71% said they use them for coding. IDC's 2025 DevSecOps, Vulnerability Management and Software Supply Chain Security survey of 511 respondents found that only 2% reported that developers were not actively using these tools, while the largest group, 32.9%, indicated that between 26% and 50% of developers actively use them. Additionally, a survey of 900 respondents conducted by Harness revealed that, on average, development and engineering teams use between 8 and 10 AI tools.
These tools were also consistently shown to boost developer productivity – over 80% of DORA respondents said AI has enhanced their productivity, while 78% of 211 respondents to IDC's 2024 Generative AI Developer Survey reported a mean 35% boost in their productivity. Bottlenecks in software delivery related to AI-generated code, identified in DORA's 2024 report, had eased somewhat by 2025, while the Harness survey found that 63% of organizations delivered code to production faster using AI tools.
However, the surveys also consistently reported problems caused by the increase in AI-generated code in production environments, as well as gaps between the use of automated coding tools and automated testing and remediation. In the DORA report, higher AI usage was linked to greater software delivery instability, a metric that combines the change fail rate and rework rate of software deployments. In fact, the link between AI usage and instability, measured in standard deviations from the mean, was stronger than the connections between AI usage and software delivery throughput, product performance and code quality. Similarly, the Harness survey found that 45% of all deployments involving AI-generated code led to problems.
A Harness executive clarified what the report meant by "problems" following its release on Sept. 30.
"In subsequent responses, we saw clear themes emerge: 48% expressed concerns about increased vulnerabilities, and 43% flagged greater risk of regulatory non-compliance," wrote Trevor Stuart, senior vice president and general manager at Harness, in an email to Informa TechTarget. "The impact is already being felt, with 72% of organizations reporting they’ve experienced a production incident tied to AI code."
IDC's 2025 DevSecOps survey found that 41.6% of respondents identified security issues introduced by AI-generated code only occasionally, in fewer than half of all code reviews. Meanwhile, 14.1% identified such issues very frequently, in most reviews, and 18.5% identified them frequently, in more than half of reviews.
Security worries persist with AI agents, too. Enterprise Strategy Group, now part of Omdia, found that 51% of 350 respondents surveyed between March and April 2025 were actively deploying AI agents. Security and compliance concerns topped the list of challenges, named by 39% of respondents and identified by 17% as the most significant.
DevSecOps tools face stubborn organizational divide
Harness and its DevSecOps competitors, including GitHub, GitLab, CloudBees, Atlassian and JFrog, offer AI assistants and agents to help people manage the growing volume of auto-generated code by automatically testing, detecting and fixing bugs and security vulnerabilities. Harness expanded its lineup of such tools this week with the acquisition of Qwiet AI, which will aid in linking code vulnerabilities to production issues identified by its Traceable WAAP product.
The integration of Qwiet and Traceable sets Harness apart from competitors, according to Katie Norton, an analyst at IDC.
"GitHub and GitLab have invested heavily in native scanners, although they both allow integration – it's just not their primary focus," she said. "CloudBees and Atlassian rely more on partners and orchestration. Harness is taking a hybrid route: building out its own detection engines through acquisitions, while also orchestrating third-party tools."
Another differentiator for Harness is ease of use, Norton said.
"Security scans and remediation can be added as native pipeline steps, with one-click configuration and prebuilt templates," she said. "It reframes application security from an external process into an embedded pipeline function, making it easier for DevOps and platform teams to standardize security consistently across thousands of builds."
However, IDC's survey results also show inconsistent use of automated security testing tools, even though these tools existed before generative AI. Among 361 respondents to its June 2025 Platform Engineering and DevOps Survey, 6.1% said their organizations didn't perform any automated tests, 2.2% said they conducted all tests automatically, and the largest group, 20.2%, estimated that between 40% and 59% of their tests were automated.
The Harness survey found similarly limited downstream automation. While 51% of coding workflows are automated on average, fewer than half of respondents automated QA, security and compliance testing. And Omdia's AI agent survey found that IT managers were overwhelmingly the buyers of AI agents in their organizations, at 56%, while risk and compliance managers represented just 2%.
These results suggest that many security teams still aren't getting the DevSecOps message, according to Melinda Marks, an analyst at Omdia.
"My latest survey on cloud computing showed that 38% of developer and DevOps security tools are selected without consulting security teams," she said. Another 30% of the 370 respondents to the as-yet-unpublished survey said they select tools and notify security afterward, while 32% said security teams select tools and roll them out separately.
"[The Qwiet AI acquisition] will help Harness sell to its typical audiences because it can use security as a key differentiator to help teams meet key performance indicators such as application uptime, protection of customer and company data, and compliance," Marks said. "But Harness, like others that sell DevOps, platform engineering, pipeline and development tools, needs to help security teams understand the development tools so that they gain visibility and control."
Beth Pariseau, a senior news writer for Informa TechTarget, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.