
The potential of ChatGPT for software testing

ChatGPT can help software testers write tests and plan coverage. How can teams anticipate both AI's future testing capabilities and the security concerns that come with it?

AI technologies such as machine learning have certainly found an established place in software programming over the past decade. However, momentum around the use of AI assistants like ChatGPT for code-related tasks is building at an exponentially faster rate than it did for those technological predecessors. Though still in the exploratory stages, ChatGPT's specific potential to support software testing efforts appears both promising and realistic.

Let's examine the growing potential of ChatGPT to assist in various aspects of software testing, look at the security vulnerabilities that using AI in testing can expose and offer guidance to help teams introduce technologies like ChatGPT safely.

How ChatGPT and software testing come together

Shane Quinlan, vice president of product at cloud platform provider Kion, said software teams can prompt ChatGPT to generate unit tests, provide recommendations on how to manage automated tests and explain what the code is doing as it runs. He believes this information could be a huge help for project managers, testers or new developers getting used to new types of applications or test scenarios.
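
For example, given a short function and a prompt such as "Write unit tests for this function," ChatGPT might return something along the lines of the sketch below. The pricing function, the test cases and the use of ScalaTest are hypothetical stand-ins for illustration, not output from any tool named in this article.

import org.scalatest.funsuite.AnyFunSuite

object Pricing {
  // Applies a percentage discount; rejects rates outside the 0.0-1.0 range.
  def applyDiscount(price: BigDecimal, rate: BigDecimal): BigDecimal = {
    require(rate >= BigDecimal(0) && rate <= BigDecimal(1), "rate must be between 0 and 1")
    price * (BigDecimal(1) - rate)
  }
}

class PricingSpec extends AnyFunSuite {
  test("applies a 20% discount") {
    assert(Pricing.applyDiscount(BigDecimal(100), BigDecimal("0.2")) == BigDecimal(80))
  }

  test("a zero rate leaves the price unchanged") {
    assert(Pricing.applyDiscount(BigDecimal(50), BigDecimal(0)) == BigDecimal(50))
  }
}

Even a modest starting point like this spares a tester the boilerplate of wiring up a test class by hand.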

Similarly, Andrew Sellers, head of technology strategy at Confluent, said that ChatGPT seems adept at offering suggestions for certain test designs with a simple prompt. For instance, a tester could simply ask something like: "Give me integration tests for this Kafka consumer written in Scala." They could then follow that with a pasted segment of code, to which the AI assistant could conceivably respond with a series of suggestions and even walk the tester through the steps involved in each one.
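
To make that concrete, here is a hedged sketch of the general shape such a response might take. The record type and parsing logic are invented for this example, and the test runs entirely in memory with ScalaTest assumed on the classpath; a genuine suggestion for a Kafka consumer would more likely involve an embedded broker or Testcontainers.

import org.scalatest.funsuite.AnyFunSuite

// A stand-in for the records a Kafka consumer would poll.
final case class Record(key: String, value: String)

object OrderConsumer {
  // Parses raw "orderId:amount" payloads, skipping malformed records.
  def parseEvents(records: Seq[Record]): Seq[(String, Double)] =
    records.flatMap { r =>
      r.value.split(":") match {
        case Array(id, amount) => amount.toDoubleOption.map(a => id -> a)
        case _                 => None
      }
    }
}

class OrderConsumerSpec extends AnyFunSuite {
  test("parses well-formed order events") {
    val records = Seq(Record("k1", "o-1:19.99"), Record("k2", "o-2:5.00"))
    assert(OrderConsumer.parseEvents(records) == Seq("o-1" -> 19.99, "o-2" -> 5.0))
  }

  test("skips malformed payloads instead of failing the batch") {
    val records = Seq(Record("k1", "garbage"), Record("k2", "o-3:1.50"))
    assert(OrderConsumer.parseEvents(records) == Seq("o-3" -> 1.5))
  }
}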

ChatGPT also seems capable of generating documentation for application code, as well as brainstorming ideas for testing scenarios and building unit tests for certain software interfaces and classes. For instance, ChatGPT could foreseeably help ensure that software behaves predictably in unexpected scenarios by suggesting specific failure states and inputs to test.
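
As a hypothetical illustration of those failure-state suggestions, the following tests probe empty input, injection-style characters and length boundaries. The validateUsername function and its rules are invented for this sketch, again assuming ScalaTest.

import org.scalatest.funsuite.AnyFunSuite

object Signup {
  // Accepts 3-20 character alphanumeric usernames; rejects everything else.
  def validateUsername(name: String): Either[String, String] =
    if (name.length < 3 || name.length > 20) Left("username must be 3-20 characters")
    else if (!name.forall(_.isLetterOrDigit)) Left("username must be alphanumeric")
    else Right(name)
}

class SignupFailureSpec extends AnyFunSuite {
  test("rejects an empty username") {
    assert(Signup.validateUsername("").isLeft)
  }

  test("rejects unexpected characters") {
    assert(Signup.validateUsername("bob; DROP TABLE").isLeft)
  }

  test("rejects names just past the length boundary") {
    assert(Signup.validateUsername("a" * 21).isLeft)
  }

  test("accepts names just inside the boundary") {
    assert(Signup.validateUsername("a" * 20).isRight)
  }
}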

Troy Allen, senior vice president of engineering at communication software platform provider Nylas, said ChatGPT has found a place in the company as a way to generate test scenarios, provide suggestions for test case improvements and even assist in the creation of scripts for automated testing. By using ChatGPT, Allen hopes to see a decrease in the time and effort currently spent on manual testing tasks within his organization.

Where AI's role in software testing is headed

As ChatGPT's AI model improves over time, some in the industry expect its role in enhancing both static and dynamic application security testing to grow significantly. Frank Catucci, CTO at web application security provider Invicti Security, believes this will prove especially useful for performing risk assessments on applications and software systems -- a capability that could become critical for organizations that have already begun to deploy code generated through AI-assisted development tools, like GitHub Copilot.

In addition to supporting test script generation, Allen believes that ChatGPT's ability to process human language for intent will allow it to handle complex tasks, such as incorporating domain-specific knowledge into tests or autonomously performing direct code analysis. However, he cautioned that the reliability of such an approach depends on the ability to train ChatGPT across a diverse range of software applications and test data, adding that any software teams gleaning test results from ChatGPT should double-check those results through manual testing for the foreseeable future.

Is it safe to use ChatGPT for testing?

The terms of use outlined by its creator, OpenAI, include a disclaimer that ChatGPT services are provided "as is" and without a security guarantee for content fed into its systems. As such, it behooves teams to think twice before uploading sensitive source code or data into ChatGPT for software testing purposes.

However, while organizations should exercise caution until clear laws are in place regarding how these kinds of AI services handle sensitive information, Allen said there are a few ways that software teams eager to incorporate ChatGPT into their testing routines can mitigate security concerns.

For one, he advised that testing teams should make sure any sensitive data they choose to share with AI systems is properly anonymized or encrypted. Additionally, testers should take extra care to ensure that the AI model does not trigger any unintended actions due to a lack of domain-specific knowledge or misunderstanding of the application's context. Finally, Allen advised that testers should continue using traditional testing tools and human-certified verification processes alongside AI models like ChatGPT to ensure comprehensive coverage and avoid being misled by inaccurate results.
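
A minimal sketch of that first recommendation might look like the following, assuming test records held as simple key-value maps. The field names and the truncated SHA-256 pseudonymization are assumptions for illustration, not a vetted anonymization scheme -- low-entropy values such as email addresses would need salting or a tokenization service in practice.

import java.security.MessageDigest

object TestDataScrubber {
  // Hashes a sensitive value into a short, stable token. Assumption: a
  // truncated SHA-256 digest is enough for illustration; production use
  // would call for salted hashing or a tokenization service.
  private def pseudonymize(value: String): String =
    MessageDigest.getInstance("SHA-256")
      .digest(value.getBytes("UTF-8"))
      .map(b => "%02x".format(b))
      .mkString
      .take(12)

  // Replaces sensitive fields with stable tokens so records stay
  // correlatable across a test run without exposing the real values.
  def scrub(record: Map[String, String], sensitiveFields: Set[String]): Map[String, String] =
    record.map {
      case (k, v) if sensitiveFields.contains(k) => k -> s"anon-${pseudonymize(v)}"
      case kv                                    => kv
    }
}

object ScrubExample extends App {
  val order = Map("email" -> "jane@example.com", "order_id" -> "o-42", "total" -> "19.99")
  // Only the email field is rewritten; the rest of the record is untouched.
  println(TestDataScrubber.scrub(order, Set("email")))
}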

Some, like Sellers, believe it's a mistake for testers to trust ChatGPT's work without careful validation by experienced QA engineers. For now, ChatGPT is far from a replacement for human testers. However, since so many seem determined to give it a place in their testing procedures, companies that allow its use should introduce controls to ensure human verification of any AI-generated output.
