Tip

A software expert's heuristic for regression testing

Often, regression testing is what stands between a product in a test lab and a product in the hands of our users. We don't want to take longer than we need, but we also don't want to haphazardly release a product before it's ready.

Regression testing can be a bundle of work. It revisits existing aspects of an application or product to ensure they still work after changes have been made or new features have been added. By definition, regression testing can be expansive, because we may want to ensure nearly every aspect of a product is retested. And because regression tests are typically previously created tests, the labor lies not in test creation so much as in test execution time. Planning what to regression test is the first challenge. So, how do you choose what to regression test?

I devised a heuristic to plan regression testing. It's called RCRCRC, which stands for:

  • Recent
  • Core
  • Risky
  • Configuration sensitive
  • Repaired
  • Chronic

If you haven't worked with heuristics before, the term can sound intimidating. A heuristic is a rule of thumb or a shortcut that helps us solve problems and make judgments. A heuristic is not a perfect method. The purpose of this one is to help you think about the product you're testing in different ways and provoke or identify areas for regression testing.

Recent
What's been recently added to the application? From obvious to subtle, recent changes can introduce defects. Obvious changes include new features or updates to existing functionality. Subtler changes, such as improved error logging, may not be detectable from a user perspective. And while significant changes are likely discussed throughout the team, so gathering a list of what's new takes little effort, subtler changes might take some legwork to track.

The theory is that a suite of regression tests should exist that can be executed manually or through automation to ensure that recent changes have not introduced new issues and that existing functionality still works. In practice, we may not have sufficient time to rerun all of our regression tests; instead, we may need to select a subset from our full collection based on what's changed in the build or release we're testing.
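
For illustration, here is a minimal sketch of that kind of subset selection in Python. The product areas, the test names and the mapping between them are all hypothetical; in practice, the mapping would come from your own test inventory:

    # Hypothetical mapping from product areas to the regression tests
    # that cover them; substitute your own areas and test inventory.
    REGRESSION_SUITE = {
        "checkout": ["test_cart_totals", "test_payment_flow"],
        "search": ["test_basic_query", "test_filters"],
        "reporting": ["test_monthly_export"],
    }

    def select_tests(changed_areas, suite=REGRESSION_SUITE):
        """Return the regression tests covering recently changed areas."""
        selected = []
        for area in changed_areas:
            # An area with changes but no mapped tests is itself worth
            # noticing: a recent change with no regression coverage.
            selected.extend(suite.get(area, []))
        return selected

    # Example: this build touched checkout and search.
    print(select_tests(["checkout", "search"]))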

Ask yourself: what have we done recently? Build a mind map or make a list. One aspect I especially like about starting with this question is that the riskiest changes seem to come to mind first. I start with that gut-instinct list, then review other sources to round it out. Documents might include user stories, defect reports, data model changes (I ask for these specifically to become aware of backend changes) and, of course, relevant team emails.

Core
Despite all the bells and whistles we may add to a product, most applications can be described by a few core features. Regression testing is designed to prevent a product from being released with new issues introduced into functionality that existed and worked in previous releases. Focus on what's core: identify the areas of the application that just have to work, and you've found the core features. Recall the definition of the word regress, to fall back or to make worse, and you have a reminder of what regression testing is designed to prevent.

Peel away the extraneous twists and turns of each feature and get closer to the roots of the product. This is a list that can likely be identified and counted readily. When testing can feel like a wide-open space with no end in sight, identifying the core features of a product brings a horizon line into focus. This is what makes regression testing doable. Instead of thinking about edge cases, think inside the box. Instead of thinking about far-out conditions, think middle of the road. For a refreshing change, instead of hunting for the dark corners and ugly spots in the product, think happy path, and include testing that makes sure the happy path is still a smooth road.

Risky
Higher-risk areas in an application, whether in new features or old, are not usually difficult to recall; these are the spots that come to mind with little prompting. Which features? Consider features that rely on other services or components being started or running. If you're testing a website with secure pages, are site certificates in place? If you know where production issues have cropped up in the past, you can identify risk areas in a product. Use your defect tracking software to pinpoint areas of the product that historically had more defects. Like the core features of a product, the especially risky spots should be a relatively succinct list.
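
If your tracker can export defects as data, even a rough tally points at the historically risky spots. Here's a sketch in Python; the field name ("component") and the sample records are hypothetical stand-ins for a real export:

    # Mine a defect-tracker export to rank areas by historical defect count.
    from collections import Counter

    # Hypothetical records; most trackers can export something similar
    # as CSV or JSON.
    defects = [
        {"id": "BUG-101", "component": "checkout"},
        {"id": "BUG-102", "component": "checkout"},
        {"id": "BUG-103", "component": "search"},
        {"id": "BUG-104", "component": "checkout"},
    ]

    # Count defects per component; the top entries are candidates
    # for extra regression attention.
    risk_ranking = Counter(d["component"] for d in defects).most_common()
    print(risk_ranking)  # [('checkout', 3), ('search', 1)]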

Configuration sensitive
One truism I've found in testing is that code dependent on environmental settings, or otherwise configuration-sensitive, is more vulnerable. Code that depends on configuration tweaking is a bit like the copy-and-paste errors we make in documents: we believe we've found and updated every reference, but it doesn't take much to miss one, and voilà, a mistake is introduced. Emails generated and sent from an application are a common example of code that depends on environment- or configuration-specific changes in order to function properly. Being mindful of these features helps me make conscious decisions about whether the functionality needs to be reviewed again, and in what environment that review should take place.
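
One way to be deliberate about this is a fail-fast check that runs before configuration-sensitive tests. The sketch below is illustrative only; the setting names (SMTP_HOST, SMTP_PORT, MAIL_FROM) are hypothetical examples of environment-specific values an email feature might depend on:

    # Fail fast when an environment is missing the settings that
    # configuration-sensitive features depend on.
    import os

    REQUIRED_MAIL_SETTINGS = ["SMTP_HOST", "SMTP_PORT", "MAIL_FROM"]

    def check_mail_configuration():
        """Verify the environment carries the settings email features need."""
        missing = [name for name in REQUIRED_MAIL_SETTINGS
                   if not os.environ.get(name)]
        if missing:
            raise RuntimeError(
                f"Email regression tests cannot run; missing: {missing}")

    # Run the check before any email-dependent tests; in a shell with
    # none of the settings present, this reports all three as missing.
    try:
        check_mail_configuration()
    except RuntimeError as error:
        print(error)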

I've had access to the production environment on several occasions in my career, and I sometimes execute quiet tests in production to ensure certain aspects of a product are working, knowing that a change of environment can fracture otherwise working code. Ask yourself whether there are aspects of the product you're working with that could benefit from thinking about the environment. Does a short test in a different environment make sense to include?

Repaired
Testers debate the following related questions: How often should a defect be retested? If a defect is fixed and retested, should it be retested on a subsequent release? At what point has a defect been retested enough times that it does not need to be revisited?

I think a better single question might be: do you feel confident in a particular feature of the application? Several times, I've worked with a new feature that just didn't come together smoothly. Even after the feature shipped, several defects remained, and in subsequent releases, most of the fixes seemed to cause yet another defect in the same feature. Some features don't become production-worthy without many defects being reported and several internal releases to address the stack of issues. I retest previous defects, and I retest previous features, to gain confidence that a thorny feature is production-ready. As you think through the defects for the current build, ask yourself: what's been repaired, and which features would benefit from another sweep through the functionality?
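
To make that sweep systematic, you could map repaired defects to the regression tests that cover them. The sketch below assumes a hypothetical mapping; in practice, it might come from the defect links recorded in your test management tool:

    # Hypothetical mapping from defect IDs to the regression tests
    # that exercise the repaired functionality.
    DEFECT_TO_TESTS = {
        "BUG-101": ["test_cart_totals"],
        "BUG-102": ["test_payment_flow", "test_refunds"],
    }

    def retests_for_release(fixed_defects, mapping=DEFECT_TO_TESTS):
        """Return the tests covering defects repaired in this release."""
        tests = set()
        for defect_id in fixed_defects:
            tests.update(mapping.get(defect_id, []))
        return sorted(tests)

    # Example: two defects were fixed in the current build.
    print(retests_for_release(["BUG-101", "BUG-102"]))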

Chronic
As people, we seem to have areas of frailty, whether physical or mental. Applications seem to have weak spots, too. Some areas of a product seem to follow Murphy's Law: if it can go wrong, it does. One thing I like about working with a product for a long while is getting to know where it has ailments. The longer I work with a product, the more deeply I know where the issues have been, where they currently are and which areas of the application are likely to sprout issues again. The other advantage of working with a product for a while is the speed at which I can cycle through multiple trouble spots to pick up that sense of comfort that the release is okay, good to go.

Ask yourself: are there features of the product where your sense of confidence isn't strong? Are there areas of the product that frequently have issues? Would it be reassuring to step through a few regression tests?

Summary
With a mental map, a physical mind map or a checklist in hand, you're ready to go. Regression testing hopefully doesn't uncover new issues, so execution may move along quickly even if the list looks long. My best regression test planning stems from knowing an application well, but to prevent a bias or a pattern of always testing some areas and overlooking others, I use the RCRCRC heuristic to help me think through all of an application. Then, at the end of testing, I write another list: what I didn't test. This list is helpful when I'm asked, "Did you test X?" By writing a "did not test" list, I stop to make myself aware of what I chose not to test. Sometimes, seeing that list, I realize where I want to take one more minute, one final test, before at long last it's time to say: I'm done.

Next Steps

The biggest difference between smoke testing and regression testing is the depth and scope of the test: how far it goes through the application.
