Software testers can struggle to increase coverage without dramatically expanding the number of test cases. While difficult, the goal should be to maximize test coverage with a minimal number of tests.
To ensure effective test cases, QA teams should first determine what exactly they need to test. Define the test scope, then use it to help set the parameters of the test strategy. From there, use test case design techniques to cover the most ground and leave the fewest gaps possible.
To increase test coverage yet maintain a reasonable number of test cases, consider:
- negative testing
- boundary value analysis
- equivalence class partitioning
- error guessing
- decision tables
- state transition testing
- cause-and-effect diagrams
The type of application and the criteria to validate should inform which test case design techniques QA teams adopt.
Negative testing
When testers wonder, "How did I miss that bug?" a lack of negative tests is often the answer. Negative testing, also called failure testing or error-path testing, inverts the thinking of most common test cases. Where positive test cases are sets of inputs for what an application is meant to do, negative testing checks whether software does something its designers did not intend.
Almost all positive test cases should have a negative test to validate the opposite. A negative test case can be as simple as ensuring that a field that should only accept alpha characters rejects numbers and special characters. A complex negative test example could check how an interface that fails to complete a task in the allotted time affects the data load.
QA professionals should think through how to craft a negative test case to pair with a positive one. An additional negative test case will increase coverage in a way an additional positive test case can't. There are always more ways for software to deviate from its instructions than follow them.
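As a sketch of this pairing, consider a hypothetical `validate_name` function (invented here for illustration) that should accept only alphabetic input. One positive test confirms the intended behavior; the negative tests probe the ways the field could misbehave:

```python
import re

def validate_name(value: str) -> bool:
    # Hypothetical field validator: accepts one or more alphabetic characters only.
    return bool(re.fullmatch(r"[A-Za-z]+", value))

# Positive test: the input the field is meant to accept.
assert validate_name("Alice") is True

# Negative tests: inputs the field must reject.
assert validate_name("Alice42") is False   # numbers
assert validate_name("Alice!") is False    # special characters
assert validate_name("") is False          # empty input
```

Note that one positive case spawned three negative cases, which mirrors the point above: there are more ways to deviate from the specification than to follow it.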
Boundary value analysis
Boundary value analysis assesses the values at -- and beyond -- the edges of a range of expected inputs. Imagine a web app that is coded to accept inputs of numbers between 1 and 20 for a given field. Through boundary value analysis, testers check numbers that are expected values (1 and 20) as well as numbers that fall just above (2 and 21) and just below (0 and 19) those boundaries. The two test cases outside of the boundaries, 0 and 21, are also examples of negative tests.
Testers should work with developers to choose the correct boundary values to apply with this test case design technique. If QA checks the wrong boundary value limits, the tests won't catch potential bugs.
QA professionals can easily automate boundary value analysis. However, when an application has complicated calculations or many business rules, this technique tends to generate a large number of test cases.
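The 1-to-20 field example above can be automated in a few lines. This sketch assumes a hypothetical `accepts` validator and checks the six classic boundary values (min-1, min, min+1, max-1, max, max+1):

```python
def accepts(value: int) -> bool:
    # Hypothetical field that should accept integers from 1 to 20 inclusive.
    return 1 <= value <= 20

LOWER, UPPER = 1, 20

# Boundary value analysis: probe at and just beyond each edge of the range.
boundary_cases = {
    LOWER - 1: False,  # 0: just below the lower boundary (also a negative test)
    LOWER:     True,   # 1: the lower boundary itself
    LOWER + 1: True,   # 2: just above the lower boundary
    UPPER - 1: True,   # 19: just below the upper boundary
    UPPER:     True,   # 20: the upper boundary itself
    UPPER + 1: False,  # 21: just beyond the upper boundary (also a negative test)
}

for value, expected in boundary_cases.items():
    assert accepts(value) is expected, f"boundary case {value} failed"
```

Six targeted cases replace an exhaustive sweep of every possible input, which is the coverage-per-test payoff this technique promises.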
Equivalence class partitioning
Equivalence class partitioning can help QA teams reduce the number of test cases without hurting test coverage. This test case design technique organizes test data into groups, each member of which should generate the same result. If one test in the group fails, the others should fail too; if one passes, the rest should pass as well. Therefore, not every member of the group needs its own test.
When they combine equivalence class partitioning with boundary value analysis and business risk analysis, QA professionals can create effective test cases that catch most bugs.
The subject matter of the application under test is important when you use this test case design technique. Equivalence class partitioning might not be appropriate for health and safety-critical applications, where a high level of certainty for each test scenario is crucial. In those cases, check each member of the test data group for an individual pass/fail result.
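A minimal sketch of the technique, using a hypothetical shipping-fee calculator invented for illustration: each weight range is an equivalence class, and one representative value stands in for every member of its class.

```python
def shipping_fee(weight_kg: float) -> float:
    # Hypothetical fee schedule, used only to illustrate partitioning.
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 5:
        return 5.00
    if weight_kg <= 20:
        return 12.50
    return 30.00

# Equivalence class partitioning: one representative input per class,
# rather than a test for every possible weight.
partitions = [
    (2.0, 5.00),    # class: 0 < weight <= 5
    (10.0, 12.50),  # class: 5 < weight <= 20
    (50.0, 30.00),  # class: weight > 20
]

for weight, expected in partitions:
    assert shipping_fee(weight) == expected, f"partition for {weight} kg failed"
```

Combining this with boundary value analysis would add the edge weights (5 and 20, plus values just beyond them) to the representative cases above.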
Error guessing
Error guessing is when a tester speculates on which parts of an application will contain defects. With this technique, testers rely on their intuition and experience with software similar to what's under test. Error guessing can occur during both requirements and code reviews to gain a better understanding of the application under test.
QA professionals can implement some form of error guessing on virtually any application and test level. The approach is most successful when testers collaborate to suggest areas of focus for test scenarios. Error guessing is useful for regression testing areas in applications where bugs have historically clustered.
Additional test case design techniques
Expert testers can use more design techniques to hone test cases for the best coverage with the least amount of testing. Once you master negative test cases, boundary value analysis, equivalence class partitioning and an ability to guess at error spots, move on to decision tables, state transition testing and cause-and-effect diagrams.
Decision tables catalog the outputs that each combination of inputs produces. Also called cause/effect tables, these diagrams convey how a system will behave for each input option. They ensure testers don't miss any combination of conditions.
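As a sketch, a decision table for a hypothetical login policy (access requires valid credentials and an active account, both invented here for illustration) can be expressed as a mapping, and the completeness check falls out for free:

```python
from itertools import product

def grant_access(valid_credentials: bool, account_active: bool) -> bool:
    # Hypothetical policy: access only with valid credentials AND an active account.
    return valid_credentials and account_active

# Decision table: every combination of conditions and its expected outcome.
decision_table = {
    (True,  True):  True,
    (True,  False): False,
    (False, True):  False,
    (False, False): False,
}

# Confirm no combination of conditions was missed.
assert set(decision_table) == set(product([True, False], repeat=2))

for conditions, expected in decision_table.items():
    assert grant_access(*conditions) is expected, f"case {conditions} failed"
```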
State transition testing is a black-box technique for dealing with finite state machines -- i.e., software in which an action or activity (a transition) changes the state of the system. The tester initiates a state or system transition, and then assesses the software's behavior.
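A minimal sketch, assuming a hypothetical order workflow modeled as a transition table: the test drives the system through a valid path and checks each resulting state, then pairs it with a negative test for an invalid transition.

```python
# Hypothetical order workflow: (current state, action) -> next state.
TRANSITIONS = {
    ("new",     "pay"):     "paid",
    ("paid",    "ship"):    "shipped",
    ("shipped", "deliver"): "delivered",
    ("new",     "cancel"):  "cancelled",
    ("paid",    "cancel"):  "cancelled",
}

def apply_action(state: str, action: str) -> str:
    # Return the next state, or reject transitions the machine does not define.
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"invalid transition: {action!r} from {state!r}")

# Valid path: each action should move the system to the expected state.
state = "new"
for action, expected in [("pay", "paid"), ("ship", "shipped"), ("deliver", "delivered")]:
    state = apply_action(state, action)
    assert state == expected, f"action {action} left system in {state}"

# Negative test: a transition the machine does not define must be rejected.
try:
    apply_action("delivered", "pay")
    raise AssertionError("expected an invalid transition to raise")
except ValueError:
    pass
```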
Cause-and-effect diagrams visually show the potential causes of a problem and contributing factors for those causes. These diagrams can identify when causes are interrelated or when a single factor has multiple adverse side effects, as testers can see when multiple causes share contributing factors.