Software testing strategies in 2026
When testing is stuck in a loop, use these 10 techniques to help teams rethink regression, test components in isolation and improve quality fast.
In the modern testing landscape, the shape of testing can vary greatly from organization to organization. Still, most teams simply fall into repeatable patterns, without taking a hard look at what is possible and what should change.
This article is designed to help you take a step back and evaluate, offering 10 techniques you can start using immediately. The goal here is to expand your options and give you something new to try -- something that shouldn't require buying a tool, hiring a consultant or even getting permission. At most, a technique might need the team's support to give it a try for two weeks. If it doesn't work, try something else.
We'll start with release cadence strategies for regression testing, move to testing components in isolation to shrink or even eliminate regression testing, and end with information publishing strategies.
Release cadence strategies
With that mindset, the first place to look for fast improvement is your release cadence.
Create examples before programmers write code
Specifications for a password tend to be written as requirements: "At least eight characters, at least one of which is uppercase, a number or a special character." In practice, these requirements leave room for interpretation. Programmers implement what they believe the rules mean, and only later does anyone discover the edge cases and mismatches between intent and behavior -- special characters rejected by the input filter, for example, or the question of whether the requirement actually means one of each character type.
Include examples with the requirements to reduce ambiguity, such as a simple table of example passwords, each tagged as good or bad. Better yet, before coding starts, get at least three key roles -- programming, voice of the customer, testing -- to collaboratively build that list. This won't eliminate all defects, but it can get simple use cases working -- all without having to learn Ruby, write examples in Gherkin or wire them up with Cucumber.
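To make the idea concrete, here is a minimal sketch in Python of what an executable example table might look like. The is_valid_password() function, the rule interpretation and the sample passwords are all assumptions for illustration, not part of any real requirement:

```python
# A minimal sketch of an example table for the password rules above.
# The rule reading and is_valid_password() are hypothetical -- substitute
# your team's real validator once it exists.
import re

def is_valid_password(candidate: str) -> bool:
    """One possible reading of 'at least eight characters, at least one
    of which is uppercase, a number or a special character.'"""
    if len(candidate) < 8:
        return False
    return bool(re.search(r"[A-Z0-9!@#$%^&*]", candidate))

# The example table the three roles would build before coding starts.
EXAMPLES = [
    ("panther7", True),    # lowercase plus a digit
    ("Panthers", True),    # lowercase plus an uppercase letter
    ("panther!", True),    # lowercase plus a special character
    ("panthers", False),   # no uppercase, digit or special character
    ("Pan7!", False),      # fewer than eight characters
]

def test_password_examples():
    for candidate, expected in EXAMPLES:
        assert is_valid_password(candidate) == expected, candidate
```

Because the table is just data, the voice of the customer can review it without reading any code, while programmers and testers can run it on every build.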
Add RCRCRC to your release cadence
While manufacturing aims to produce identical items, a build system aims to generate something different each time. If we have any information about what is different, why would we test the same way every time?
Karen N. Johnson's RCRCRC heuristic does just that -- suggesting ideas worth exploring by a human on every retest. The letters represent areas that deserve a tester's special attention. RCRCRC stands for:
- Recent. Elements that have changed recently.
- Core. Elements that must work.
- Risky. Areas that team members can articulate as risky.
- Configuration-sensitive. Behavior that can change based on configuration.
- Repaired. Elements that were recently fixed.
- Chronic. Elements that break often.
To apply RCRCRC, get a pile of sticky notes or a shared digital whiteboard and make a list of test ideas, each detailed enough that any reasonable person could perform it in 10 to 20 minutes. Order the list -- dot vote if you have to -- put the cards on a Kanban board, and pull them until you run out of time. Once the release is over, decide which tests to rerun next time and which to drop. An engineer might point out, though, that regression testing is of limited value if the quality of the individual components is poor.
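If the team prefers something more durable than sticky notes, the same prioritization fits in a few lines of code. This is a hypothetical sketch; the test ideas, tags and dot-vote counts are invented:

```python
# Hypothetical sketch: order session ideas by RCRCRC tags plus dot votes.
# The ideas, tags and votes below are invented for illustration.
RCRCRC = {"recent", "core", "risky", "config", "repaired", "chronic"}

ideas = [
    {"title": "Password reset on mobile Safari", "tags": {"recent", "risky"}, "votes": 3},
    {"title": "Checkout with saved card",        "tags": {"core", "chronic"}, "votes": 5},
    {"title": "Profile photo upload",            "tags": {"repaired"},        "votes": 1},
]

def score(idea):
    # One point per RCRCRC category the idea touches, plus team dot votes.
    return len(idea["tags"] & RCRCRC) + idea["votes"]

for idea in sorted(ideas, key=score, reverse=True):
    print(f'{score(idea):>2}  {idea["title"]}')
```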
Component-based strategies
Once you've improved how you retest during release cycles, the next step is to make regression testing smaller -- or eliminate it entirely -- by testing components in isolation.
Separate the GUI from the backend
A great deal of software takes input from a UI, calls an API and displays the result. The front end will need checks for common platform errors -- different device sizes, rotations, OS versions or loss of internet signal, for example. It also needs a poke test to ensure invalid values are trapped, plus a few sample scenarios that flow from the text field to the database and display correctly.
Perhaps the number of digits will affect the display, but we only need a few complete tests of the GUI -- especially automated ones. Still, you do want to test many submissions on some level. This might be an API check that loops through a large set of input values and checks the output.
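As a rough sketch of that kind of loop, assuming a hypothetical pricing endpoint with invented payloads and expected totals:

```python
# Minimal sketch of an API check that loops through many inputs.
# The endpoint, payloads and expected results are hypothetical.
import requests

CASES = [
    # (quantity, unit_price, expected_total)
    (1, "9.99", "9.99"),
    (3, "9.99", "29.97"),
    (100, "0.01", "1.00"),
    (0, "9.99", "0.00"),
]

def test_price_endpoint():
    for quantity, unit_price, expected in CASES:
        response = requests.post(
            "https://example.test/api/price",
            json={"quantity": quantity, "unit_price": unit_price},
            timeout=5,
        )
        assert response.status_code == 200, (quantity, unit_price)
        assert response.json()["total"] == expected, (quantity, unit_price)
```

A loop like this can cover hundreds of value combinations in seconds, leaving the slow GUI tests to verify display and navigation.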
Micro, unit or API checks
Most modern organizations running Java, C# or Python have a few unit tests lying around. Their real power comes from exercising a unit in isolation. To do that, you will need to refactor the code so that dependencies are passed in through the constructor, making it possible to supply mocks, stubs and fakes. This forces the code to consist of small, testable elements.
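Here is a minimal sketch of that refactoring, using an invented tax calculator whose rate lookup arrives through the constructor so a test can pass in a fake:

```python
# Hypothetical sketch of constructor injection: the rate providers and
# the calculator are invented for illustration.
class LiveRateProvider:
    def rate_for(self, region: str) -> float:
        # In production this might call a web service or database.
        raise NotImplementedError("network call omitted from the sketch")

class FakeRateProvider:
    """A fake used in unit checks -- no network, fully predictable."""
    def __init__(self, rates: dict[str, float]):
        self.rates = rates

    def rate_for(self, region: str) -> float:
        return self.rates[region]

class TaxCalculator:
    # The dependency arrives through the constructor, so tests can pass a fake.
    def __init__(self, rate_provider):
        self.rate_provider = rate_provider

    def tax(self, region: str, amount: float) -> float:
        return round(amount * self.rate_provider.rate_for(region), 2)

def test_tax_uses_injected_rate():
    calculator = TaxCalculator(FakeRateProvider({"MI": 0.06}))
    assert calculator.tax("MI", 100.00) == 6.00
```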
As a thought exercise, consider the difference between a codebase of 1,000 units at 90% reliability and one with 1,000 units at 99.9% reliability. If a single user-visible feature passes through, say, 20 of those units, the reliabilities multiply: 0.90 * 0.90 * 0.90, and so on. We can't measure that directly, but the arithmetic is sobering -- 90% to the power of 20 is about 12%, while 99.9% to the power of 20 is about 98%.
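The arithmetic is easy to verify:

```python
# Reliability of a chain of 20 units, each at the given per-unit reliability.
for per_unit in (0.90, 0.999):
    chain = per_unit ** 20
    print(f"{per_unit:.3f} per unit -> {chain:.1%} across 20 units")
# 0.900 per unit -> 12.2% across 20 units
# 0.999 per unit -> 98.0% across 20 units
```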
Deploy components separately
The Windows and DOS legacy of software gave us the idea of a release as a single disk image. On the web, that is sometimes called a monolith. Web applications can instead deploy HTML, JavaScript, CSS and backend files separately, limiting regression testing to the scope of the changes -- and, in some cases, eliminating it. To be effective, separate the GUI from the backend first, then add low-level automated checks, then design the architecture so each component builds and deploys on its own.
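One small building block toward that goal is mapping changed files to components so the pipeline only rebuilds, redeploys and retests what actually changed. This is a hypothetical sketch; the directory names and component map are invented:

```python
# Hypothetical sketch: decide what to deploy and retest from changed paths.
# The directory-to-component map is invented for illustration.
COMPONENT_BY_PREFIX = {
    "web/css/": "stylesheet bundle",
    "web/js/": "front-end bundle",
    "api/orders/": "orders service",
    "api/billing/": "billing service",
}

def components_to_deploy(changed_paths):
    affected = set()
    for path in changed_paths:
        for prefix, component in COMPONENT_BY_PREFIX.items():
            if path.startswith(prefix):
                affected.add(component)
    return sorted(affected)

# Example: a CSS tweak and a billing fix do not force an orders redeploy.
print(components_to_deploy(["web/css/theme.css", "api/billing/invoice.py"]))
# ['billing service', 'stylesheet bundle']
```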
Create model-based tests
Imagine writing code to navigate your application through a random walk. For a social media platform, the code would need to know the current state of the application -- any screen, such as login, profile, your stream, a specific post, or an editing and commenting interface -- and which links are valid on each of those pages. It could then generate a random step, along with random text to fill in for new posts and comments, and bounce through the application.
Essentially, you create a model of the application, take random steps based on that model, and check whether the model matches reality. When the tool finds an error, it retraces its steps to determine the minimum number required to reproduce the error, then reports it.
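A minimal sketch of the idea, with an invented three-screen model and a stand-in for the code that would drive the real application:

```python
# Hypothetical model-based random walk. The screens, links and the
# fake "application" below are invented; a real harness would drive
# the actual UI or API and compare what it sees against the model.
import random

MODEL = {
    "login":   ["stream"],
    "stream":  ["profile", "post", "login"],
    "profile": ["stream"],
    "post":    ["stream", "profile"],
}

def observed_links(screen):
    # Stand-in for asking the real application which links are on screen.
    # Here it simply echoes the model, so the walk always passes.
    return MODEL[screen]

def random_walk(steps=100, seed=0):
    rng = random.Random(seed)
    screen, path = "login", ["login"]
    for _ in range(steps):
        expected = MODEL[screen]
        actual = observed_links(screen)
        if set(actual) != set(expected):
            return f"Model mismatch on {screen} after {path}"
        screen = rng.choice(expected)
        path.append(screen)
    return "No mismatch found"

print(random_walk())
```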
Information publishing strategies
There's an amazing thing that happens when you publish metrics -- oldest bugs, unresolved tickets or anything with an objective score. If management is watching the page, people will resolve the issues, and the code will get better. If the metrics page does not exist, there is little incentive to improve the code. Here are a few code- and test-related measures to consider publishing on every build.
Identify and refactor complex modules
Large modules can be confusing. They have too many variables, and the variables derive their values from one another. It is common for one maintenance programmer to cut and paste code to create both sides of an if statement, making a subtle change to one side. The next maintenance programmer might get a requirement, implement it in only one place and not realize the code begs to be modified in two places.
Complex code hides bugs. Some tools, called complexity analyzers, can tell you how many loops and branches your code has.
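Real analyzers compute cyclomatic complexity; as a rough illustration of the idea, this hypothetical sketch simply counts branching keywords per Python file and flags the worst offenders:

```python
# Crude stand-in for a complexity analyzer: count branching keywords
# per file and flag likely refactoring candidates. Real tools compute
# cyclomatic complexity properly; this is only an illustration.
import re
import sys
from pathlib import Path

BRANCH_KEYWORDS = re.compile(r"\b(if|elif|else|for|while|except|case)\b")

def branch_count(path: Path) -> int:
    return len(BRANCH_KEYWORDS.findall(path.read_text(errors="ignore")))

def report(root: str, threshold: int = 50) -> None:
    scores = {p: branch_count(p) for p in Path(root).rglob("*.py")}
    for path, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        flag = "  <-- consider refactoring" if score > threshold else ""
        print(f"{score:>4}  {path}{flag}")

if __name__ == "__main__":
    report(sys.argv[1] if len(sys.argv) > 1 else ".")
```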
Publish to-be-fixed bug count or oldest to-be-fixed bugs
While bugs are not created equal, and bug counts obscure this, you could track the number of bugs slated to be fixed that no one has addressed. That will not only get them fixed but also reduce the time-to-fix, increasing overall productivity. Likewise, a simple list of the oldest bugs marked to be fixed will encourage people to fix them or revisit their status.
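A small, hypothetical sketch of such a report, assuming you can export bug records with a status and an open date (the records below are invented):

```python
# Hypothetical sketch: publish the count of unaddressed to-be-fixed bugs
# and the oldest few. In practice the records would come from an export
# of your bug tracker.
from datetime import date

bugs = [
    {"id": 101, "status": "to-be-fixed", "opened": date(2024, 3, 2), "title": "Login loops on expired token"},
    {"id": 142, "status": "to-be-fixed", "opened": date(2025, 1, 15), "title": "Totals off by a cent"},
    {"id": 177, "status": "closed",      "opened": date(2025, 2, 1),  "title": "Typo on settings page"},
]

open_bugs = [b for b in bugs if b["status"] == "to-be-fixed"]
print(f"To-be-fixed bugs not yet addressed: {len(open_bugs)}")
print("Oldest:")
for bug in sorted(open_bugs, key=lambda b: b["opened"])[:5]:
    print(f'  #{bug["id"]} opened {bug["opened"]}: {bug["title"]}')
```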
Publish test coverage visualizations
Most companies don't even have a visualization of their features -- at least not when they start -- and it could be as simple as a mind map of the system's key features. Likewise, beyond unit-test coverage, most lack any measure of how well the software is tested or covered. Instead, testing is a sort of black box. When things are going well, management can add pressure to ship features faster -- which implies cutting testing. When there is a serious issue, they can put everyone in a room and ask, "How did that bug slip through?" Not only is this a trap for workers, but it also fails to give management the tools they need to steer.
Start with a coverage model. That could be as simple as a mind map of features. To add coverage, enter a number from one to 10 to indicate how well each feature is covered; teams will need to make their own rubric for what those numbers mean. The map will be one-dimensional -- it won't include common platform failure modes, nor bugs triggered by a user journey that touches multiple features -- but it is a start. James Bach and Michael Bolton's Heuristic Test Strategy Model provides other ways to look at software, beyond simple workflows and features, to build a better model.
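As a hypothetical sketch, the model can even live as data next to the code, with the scores published as a simple text report on every build. The features and numbers here are invented:

```python
# Hypothetical coverage model: feature -> how well it is covered, 1 to 10.
# Features and scores are invented; the team's own rubric defines the numbers.
coverage = {
    "Sign-up and login": 8,
    "Search": 6,
    "Checkout": 9,
    "Refunds": 2,
    "Admin reports": 3,
}

print("Feature coverage (1 = barely touched, 10 = thoroughly tested)")
for feature, score in sorted(coverage.items(), key=lambda kv: kv[1]):
    bar = "#" * score
    print(f"{feature:<20} {score:>2}  {bar}")
```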
Once management has the model, they have options. They can suggest additional testing for areas that are lacking, or pull back testing on features they view as non-critical. Then, when bugs are found in production, they are likely the very things management chose not to invest time in testing. You might still need to have a conversation about the problems and how to adjust -- it is just likely to be much different and more pleasant.
Experiment with new testing strategies
If you are a technical contributor, choose one thing, something you can do without permission, and just do it. If it requires team support, try it as an experiment for two weeks. If you use Scrum, propose it during a retrospective for one sprint. Even if people oppose it, they might be willing to try it for two weeks.
If you already know which strategies work for you and you do them all well, then your position is more fortunate than most. In that case, the next step is to invent your own strategy. It could be a case study in the next edition of the book, Software Testing Strategies: A testing guide for the 2020s. Please leave a comment or email me.
Matt Heusser is managing director at Excelon Development, where he recruits, trains and conducts software testing and development.
Phil Kirkham provided significant peer review for this article.