Devs don't trust AI in software testing
Artificial intelligence can eliminate mundane testing tasks and reduce bugs without human intervention, but its opaque inner workings make testers uneasy.
AI-based testing has the potential to help solve software quality issues, but it faces significant roadblocks on the way to widespread adoption.
Automated testing uses software tools to automate the manual testing process. Testers can use traditional rules- or code-based scripts or AI -- which builds, initiates and runs testing models without human intervention.
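To make the contrast concrete, here is a minimal sketch of the traditional rules- or code-based style the article describes, where a human spells out every input and expected output in advance. The `apply_discount` function and its test cases are hypothetical, invented purely for illustration; an AI-driven tool would instead generate and maintain such checks itself.

```python
# Traditional rule-based test script: a human writes every input and
# expected output explicitly; the script replays them on each run.

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: reduce price by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def run_tests() -> int:
    """Execute each hand-written check; return the number of failures."""
    failures = 0
    cases = [  # (price, percent, expected)
        (10.00, 50, 5.00),
        (10.00, 0, 10.00),
        (19.99, 25, 14.99),
    ]
    for price, percent, expected in cases:
        actual = apply_discount(price, percent)
        if actual != expected:
            print(f"FAIL: apply_discount({price}, {percent}) = {actual}")
            failures += 1
    return failures

if __name__ == "__main__":
    print(f"{run_tests()} failures")
```

The maintenance burden is the point: every time the application changes, a person must update these hand-written cases, which is the repetitive work AI-based tools aim to take over.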
AI-powered tools such as Selenium IDE-compatible Katalon Studio, mabl and Functionize can free developers from mundane task repetition and monitor complex systems for vulnerabilities. However, a distrust of the inchoate technology hinders adoption rates, according to industry experts.
The tools are also not yet a viable solution to the software testing crisis, said Holger Mueller, vice president and analyst at Constellation Research.
"It is early beginnings; AI is just coming to the IDEs," he said.
Indeed, most AI-based testing is in the early stages of industrial use, said Bill Curtis, senior vice president and chief scientist at Cast Software Inc. Curtis is also executive director at the Consortium for Information and Software Quality (CISQ), a nonprofit that develops international software quality standards.
A CISQ report published in 2020 and scheduled for an update later this year identified AI as a key trend in software development for this decade. However, the report stopped short of stating that AI was a trend in software quality testing.
"There was not enough reported data to make a stronger presence for AI-based testing in the report," Curtis said.
But AI-based testing may make an appearance in the report soon. "So-called 'TestOps' that use AI systems and automation approaches have taken off in recent years," Schmelzer said.
The battle heats up
TestOps involves the use of both AI and non-AI automated testing to autoscale resources.
Both AI and non-AI software testing tools have similar benefits. "They address issues such as providing more consistent results over repetitive tests, speeding up the testing process, simplifying testing tools and operations, making automated tests more adaptive and resilient, and predicting potential areas of test failure," Schmelzer said.
The elimination of repetitive and mundane tasks frees up the developer and speeds up testing, said Christian Brink Frederiksen, CEO of Leapwork, a no-code automation platform.
"You can bring down the test time in your release cycles from weeks to days," he said.
The reduction in testing time can speed up time to market, enabling companies to quickly adapt to changing market conditions, said Gareth Smith, general manager of software automation at Keysight Technologies, a design and validation company. He offered the following example: "Now in the UK it's a heat wave, but now I want to do a 'hot-wave burger': buy one burger, get one free. You can come up with some campaign and then roll that out this coming weekend," he said.
But where AI-based testing excels is with bug hunting, Frederiksen said.
"AI-powered testing has been shown to reduce the risk of buggy software, which can cause company crises like the one at Volkswagen," Frederiksen said, referring to last month's dismissal of Volkswagen CEO Herbert Diess, reportedly over software quality issues.
Poor software quality has been blamed for myriad other headline-making failures, including Post Office accounting errors in the U.K., Zoom outages and 4G network unavailability.
But buggy software isn't limited to headlines; it is a pervasive problem, and businesses need to consider a different approach to testing, Frederiksen said.
"Companies are between a rock and a hard place because they're struggling to scale their largely manual testing solutions in the face of increasingly complex software and markets that demand product releases by a certain date," he said.
A vote of no confidence
AI faces an uphill battle for acceptance in the software testing arena. The technology is almost impossible for humans to comprehend, which results in a lack of confidence in its abilities, Smith said. He offered an analogy to show how AI's opaque inner workings not only erode trust, but also force people to give up control to the unknown.
"We've developed a self-driving car. It has a seat on it with just a speaker and a microphone, and there's no seatbelt -- it doesn't need one. You sit in the seat, say 'Take me' to wherever you're going, and it goes off at 100 miles an hour. Who would like to buy that car? They say, 'I'm not going to be the first customer in that car,'" Smith said.
That lack of a track record is why the widespread adoption of AI-based testing systems is facing an uphill battle, Smith said.
"That's the future, but we're not there right now," he said.