Velocity-over-quality mindset leads to software testing gaps

Insufficient software testing happens because of a lack of talent, time and cash. But inattentive CEOs and development methodologies such as Agile also contribute to the problem.

Software testing addresses bugs and vulnerabilities before they affect users, but the harried race to the finish line -- cluttered with obstacles such as low budgets, incremental sprints and poor management decisions -- can stymie the deployment of quality code.

Testing validates that software functions as expected, identifying vulnerabilities and minimizing bugs before code is deployed into production. Automated testing tools such as Selenium, JUnit or Grinder use code-based scripts to examine software, compare results between runs and report outcomes. However, despite the wide availability of such tools, most code makes it into production untested; contributing factors include the developer shortage, a lack of skills among testing teams and poor business decisions, according to industry analysts.
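As a rough illustration, a unit test written for a framework such as JUnit encodes expected behavior as code that can be rerun on every build. The PriceCalculator class and its discount rule below are hypothetical, included only to show the pattern of scripted checks these tools automate.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

// Hypothetical class under test: a simple price calculator.
class PriceCalculator {
    // Applies a percentage discount; rejects out-of-range values.
    double applyDiscount(double price, double percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent must be between 0 and 100");
        }
        return price * (1 - percent / 100);
    }
}

class PriceCalculatorTest {
    private final PriceCalculator calc = new PriceCalculator();

    @Test
    void appliesDiscountToPrice() {
        // The test codifies the expected result, so regressions surface on every run.
        assertEquals(90.0, calc.applyDiscount(100.0, 10.0), 0.001);
    }

    @Test
    void rejectsInvalidDiscount() {
        // Edge cases are exercised automatically, not just the happy path.
        assertThrows(IllegalArgumentException.class,
                () -> calc.applyDiscount(100.0, 150.0));
    }
}
```

Once checks like these exist, a build pipeline can run them on every commit -- exactly the kind of repeatable, low-cost coverage that, as the analysts below note, most code never gets.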

"Software goes untested simply because testing [is costly], it requires time and resources, and manual testing slows the continuous development process," said Diego Lo Giudice, vice president and analyst at Forrester Research.

Developers unit test only about 50% of their code, and tester subject matter experts (SMEs) automate only about 30% of their functional test cases -- a shortfall that has nothing to do with code complexity, Lo Giudice said.

"Skills, costs and time are the reasons," he said.

The total cost of poor software quality exceeds $2 trillion per year, which includes $1.56 trillion in operational failures and $260 billion in unsuccessful IT projects, according to the Consortium for Information and Software Quality (CISQ), a nonprofit that develops international software quality standards.

Illustration: Most production code doesn't undergo software testing.

But there's more than money at risk. "For enterprises that do e-commerce, when the system goes down, they lose money, and then they lose reputation," said Christian Brink Frederiksen, CEO of Leapwork, a no-code test automation platform. This can lead to customer retention problems, he said.

However, some CEOs wear blinkers when it comes to software quality. "If you talk to a CEO about testing, you can see their eyes go, 'Really, well, what's that?'" he said. "But if they've experienced an outage on their e-commerce platform and what the consequences were, then it's a different story."

Complex software testing needs complex skills

Software testing poses skills challenges because people must search for unknown vulnerabilities and try to predict where systems might break, said Ronald Schmelzer, managing partner with the Cognitive Project Management for AI certification at Cognilytica.

Yet there is a dearth of technology talent with the necessary testing skills. While employer demand for technology talent is growing exponentially, the supply of developers and programmers remains flat, creating intense competition among employers for skilled staff, Quickbase CEO Ed Jennings said in a May interview.

In addition to a skills shortage, testing requires task repetition to ensure coverage of all areas and to check that previous bugs haven't resurfaced after updates, Schmelzer said.

Coverage and bug hunting become more of a challenge for systemwide violations of good architectural or coding practices, said Bill Curtis, senior vice president and chief scientist at Cast Software and executive director at CISQ. If a violation is contained in a single module or component, a fix can be tested relatively easily, but system-level violations -- involving defective interactions between multiple components or modules -- require corrections to multiple components across the system, Curtis said.

"Frequently, corrections are made to some but not all of the defective components, only to find that operational problems persist, and other defective components must be fixed in a future release," he said.

Business pressures and methodologies contribute

While business pressures, such as maintaining a competitive advantage or releasing on schedule, contribute to the software testing problem, development methodologies also bear some of the blame, Leapwork's Frederiksen said.

"The core issue in software testing is that companies have probably optimized the entire software development approach with methods like Agile or CI/CD," Frederiksen said. This gives the false impression that ready-for-deployment code has also been optimized, he explained.

This doesn't mean that one methodology is worse or better than another. "With Waterfall, testing was not doing any better -- in terms of amount of testing -- before software would hit production, compared to testing done in Agile," Forrester's Lo Giudice said.

But Agile can worsen testing gaps because people overfocus on time to deployment rather than quality, according to Holger Mueller, vice president and analyst at Constellation Research.

Systems that are almost impossible to fix after deployment, such as satellites or missile guidance software, require 99.999% testing, he said.

"Enterprise software and consumer software is often the most sloppy, with MVPs [minimum viable products] often getting released under time pressure," Mueller said, referring to the Lean principle of developing a product quickly with sufficient features along with the expectation of future updates and bug fixes.

"Testing/QA is usually an afterthought and short-staffed. It is what is holding up going live on code," he said.

That doesn't mean teams should throw their methodology out with the bathwater, Mueller noted, but effort is needed to ensure systems are tested as a whole.

"While you can build code incrementally, there are limits to testing incrementally. At some point, software needs to be tested holistically ... the 'soup to nuts' test," Mueller said. Testers should install the full app, test that the code works, then uninstall and look for issues, such as making sure personally identifiable information gets deleted.

"Basically, QA needs to follow the full customer lifecycle in the product," he said.
