How to gradually incorporate AI in software testing

While some software teams may be wary of weaving AI into their software testing routines, a gradual introduction of AI-based testing strategies can be worth the effort.

AI can improve many aspects of software testing. It can detect defects, automate testing and augment test strategies. So, why are dev teams reluctant to do more with AI?

Developers often face challenges when bringing AI into their development and testing workflows. For one, many use cases require teams to adjust their workflows or add another tool. AI tools have different failure modes than other forms of automation. As a result, QA professionals who constantly think about how things break are naturally inclined to be cautious about jumping into AI in software testing.

In this tip, learn how to introduce AI slowly into testing processes where implementation is simple and the stakes are low. Gradually, teams can build confidence to extend AI testing across more workflows. Explore how and when AI supplements processes like continuous testing and defect analysis, as well as how it can benefit your testing strategy.

Benefits of AI in software testing

Benefits of using AI in software testing include improved accuracy, coverage and efficiency.

Improved accuracy

AI-powered tools can often execute test cases faster, with more precision and fewer errors than humans. Ilam Padmanabhan, solutions delivery manager at Nets Groups, a financial solutions consultancy, uses AI in his software testing to bolster test accuracy. "As a result [of using AI], you'll be able to get your software to market faster and with fewer defects," he said.

Increased coverage

AI tools can help increase test coverage since they are not limited by time or resources in the way that humans are, Padmanabhan said. AI-powered tools can generate a large number of test cases and run them in parallel. This helps ensure the software is thoroughly tested before release.
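For a concrete, if non-AI, taste of machine-generated coverage, property-based testing already works this way: the library synthesizes far more inputs than a human would write by hand. The minimal sketch below uses the Hypothesis library against a hypothetical normalize_username function; it stands in for the scale of case generation Padmanabhan describes rather than for any particular AI tool.

```python
# Non-AI stand-in for machine-generated test cases: Hypothesis synthesizes
# hundreds of inputs per property, illustrating the coverage gain described
# above. The function under test is hypothetical.
from hypothesis import given, settings, strategies as st

def normalize_username(name: str) -> str:
    # Hypothetical function under test.
    return name.strip().lower()

@settings(max_examples=500)          # run far more cases than a human would write
@given(st.text())
def test_normalize_is_idempotent(name):
    once = normalize_username(name)
    assert normalize_username(once) == once   # normalizing twice changes nothing
```

Running such a suite under pytest with the pytest-xdist plugin (pytest -n auto) also executes tests in parallel, which mirrors the parallel-execution point above.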

Improved efficiency

Using AI in software testing can increase test efficiency. AI testing enables teams to offload some of the more mundane, intense and repetitive tasks within testing, said Mush Honda, chief quality architect at Katalon, an AI-augmented quality management platform. It can also offer insights into jobs that require high focus, such as performing visual tests across multiple browsers, devices, OSes and screen resolutions.

Continuous testing meets AI testing

Continuous testing automates the handoff between new ideas and testing processes to provide faster feedback. AI can complement continuous testing by helping automate more processes in the interplay among new features, development and testing.

AI can generate a variety of artifacts for testing activities, such as test scripts, test data identification and test suite identification for execution, Honda said. He envisions teams using AI to supplement continuous testing processes in many ways, including the following:

  • Assign the creation of certain types of automated test scripts for use in test execution.
  • Evaluate and assist in test environment preparations with applicable test data setup and teardown.
  • Evaluate results.
  • Recommend test suites needed for test execution based on code changes (one such flow is sketched after this list).
  • Preemptively manage test data necessary for testing across environments.
  • Orchestrate test executions based on success criteria defined by the team.
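As a rough illustration of the test-suite recommendation item above, the sketch below maps changed files reported by git diff to candidate suites. A real AI tool would learn and refine this mapping from history rather than rely on a static table; the directory prefixes and suite names here are hypothetical.

```python
# Sketch of the "recommend test suites based on code changes" step from the
# list above. A static map and git diff stand in for the learned mapping an
# AI tool would provide, so only the orchestration flow is shown. Paths and
# suite names are hypothetical.
import subprocess

SUITE_MAP = {
    "src/payments/": ["tests/payments", "tests/integration/checkout"],
    "src/auth/":     ["tests/auth"],
    "src/ui/":       ["tests/ui", "tests/visual"],
}

def changed_files(base: str = "origin/main") -> list[str]:
    # Ask git which files differ from the base branch.
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def recommend_suites(files: list[str]) -> set[str]:
    suites = set()
    for path in files:
        for prefix, mapped in SUITE_MAP.items():
            if path.startswith(prefix):
                suites.update(mapped)
    return suites or {"tests/smoke"}   # always fall back to a smoke suite

if __name__ == "__main__":
    print(sorted(recommend_suites(changed_files())))
```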

Guy Arieli, CTO of continuous testing at Digital.ai, said AI is growing in its ability to generate, execute and maintain test cases. He expects that teams will begin to apply new generative AI in a similar way to produce more relevant test cases.
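To make the generative AI idea concrete, here is a minimal sketch that asks a large language model to draft a pytest test from a plain-language requirement. It assumes access to OpenAI's chat completions API; the model name, prompt and drafts directory are illustrative, and any generated test still needs human review before it joins the live suite.

```python
# Illustration of generative-AI test creation, assuming access to the OpenAI
# chat completions API. The model name, prompt and drafts directory are
# assumptions; a human should review the draft before it runs in CI.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_test(requirement: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You write concise pytest test functions. Return only code."},
            {"role": "user",
             "content": f"Write a pytest test for this requirement: {requirement}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_test("Discount codes older than 30 days are rejected at checkout.")
    out = Path("tests/drafts/test_discount_expiry.py")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(draft)   # lands in a drafts folder, not the live suite
```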

Starting with defect analysis

Defect analysis and prediction are common use cases for AI in software testing. They can provide a good starting point for the gradual adoption of AI, helping teams learn where the tools work well and where they fall short.

Automation has helped teams dramatically increase their number of tests. Teams might have run 10 tests a day 10 years ago, but now, many organizations run over 100,000 tests daily, Arieli explained. The bottleneck has shifted to analyzing results: even when only a fraction of tests fail, correlating those failures with their root causes can take many hours.

"When you have 100 tests that are failing, it doesn't mean that you have 100 bugs in your system," Arieli said. "Usually, the failed tests are mapped to a small list of bugs."

Starting with defect analysis can enable teams to trust AI when the tools provide clear interpretations of what constitutes a defect and what factors caused a defect to occur, Honda said. In addition, teams can help train many of these solutions by confirming or rejecting predictions. Over time, this can increase confidence that the AI system is learning what matters to the team.
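A stripped-down version of that defect-analysis idea can be sketched in a few lines: group failing tests whose error messages look alike, so a long list of failures collapses into a short list of suspected root causes. Real tools draw on much richer signals, such as stack traces, code changes and test history; the similarity threshold and data format below are assumptions.

```python
# Minimal sketch of defect analysis: group failing tests whose error messages
# look alike, so many failures collapse to a handful of suspected root causes.
# The similarity threshold and input format are assumptions.
from difflib import SequenceMatcher

def cluster_failures(failures: list[tuple[str, str]], threshold: float = 0.8):
    """failures: (test_name, error_message) pairs -> list of clusters."""
    clusters: list[dict] = []
    for test, message in failures:
        for cluster in clusters:
            if SequenceMatcher(None, message, cluster["signature"]).ratio() >= threshold:
                cluster["tests"].append(test)
                break
        else:
            clusters.append({"signature": message, "tests": [test]})
    return clusters

failures = [
    ("test_checkout_total", "ConnectionError: payments-api timed out after 30s"),
    ("test_refund_flow",    "ConnectionError: payments-api timed out after 31s"),
    ("test_login_redirect", "AssertionError: expected /home, got /login"),
]
for c in cluster_failures(failures):
    print(len(c["tests"]), "test(s) ->", c["signature"])
```

Confirming or rejecting the resulting groupings is exactly the kind of feedback loop Honda describes for training these tools over time.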

Shannon Lee, dev evangelist at mobile testing platform provider Kobiton, believes AI can play a crucial role in identifying visual defects by visually comparing the appearance of steps to a baseline and returning discrepancies. This can help automate testing efforts and enable developers to identify when code changes might affect accessibility compliance or impact UI experience.
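As a bare-bones illustration of that baseline comparison, the sketch below uses Pillow to flag any pixel-level difference between a baseline screenshot and the latest run. Commercial AI visual-testing tools perform perceptual comparison and tolerate benign rendering noise; the naive diff and file paths here are placeholders for the idea.

```python
# Bare-bones version of visual baseline comparison using Pillow's pixel diff.
# The file paths are hypothetical, and a real AI tool would ignore acceptable
# rendering noise rather than flag every changed pixel.
from PIL import Image, ImageChops

def visual_regression(baseline_path: str, current_path: str) -> bool:
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")
    if baseline.size != current.size:
        return True                       # layout shift: sizes no longer match
    diff = ImageChops.difference(baseline, current)
    return diff.getbbox() is not None     # None means the images are identical

if visual_regression("baselines/checkout.png", "runs/latest/checkout.png"):
    print("Visual discrepancy found: flag this step for review.")
```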

Challenges of AI in software testing

While AI can be a worthwhile addition, keep in mind the challenges in the adoption of AI in software testing.

Trust

AI tools often come with controls that differ from those in conventional testing tools. "There are minimal custom configurations available, and the tester is at the hands of the AI engine," Lee said. As a result, some false positives or unwarranted analyses may occur in the early usage of AI in testing. AI is not a plug-and-play tool that can generate immediate results.

High costs

Investing in quality AI tools can be expensive.

Lack of human oversight

AI-powered tools can quickly generate and run large numbers of test cases without human intervention. "While this can lead to improved accuracy and increased coverage, it also means that there's potential for things to go wrong if there's no one monitoring the process closely," Padmanabhan said. Have a plan to monitor and review results generated by AI-powered tools before taking any action on those results.

Compatibility

Teams may struggle to make their legacy systems compatible with a new AI tool. If integrations are not properly prepared, implementation can be delayed and setup costs can rise, Arieli said.

How to evaluate tools

New AI capabilities are being introduced into existing testing suites. Among the AI tools on the market are Applitools, Digital.ai, Functionize, Katalon, Kobiton, Mabl, TestCraft, Testim, Virtuoso and Waldo.

Jeet Mehta, CEO of Swift, a B2B SaaS company building software for sports facilities, explored AI tools and found that they vary depending on a team's specific needs. He recommended teams consider a tool's ability to integrate with existing test management platforms, learn and adapt to new test environments, and provide accurate results.

In his work with AI, Honda has also developed questions to consider when evaluating AI tools:

  • How does the tool account for user feedback?
  • How does the tool handle changing needs as teams and systems mature?
  • Which time-consuming tasks can the tool take on?
  • What level of analysis/recommendations does the tool offer?
  • Which technical/analytical skills must the team possess to use the tool?
  • How does the tool help the team evolve its testing process while building quality engineering skills?
