Using proactive test design methods to catch requirements issues early

Proactive test design allows QA testers to identify requirements and design problems at an earlier stage than with traditional test cases.

Robin F. Goldsmith, JD

Quality assurance (QA) testing relies on requirements reviews to detect requirements issues -- primarily lack of testability. If a requirement is not testable, it's usually because the requirement is not sufficiently clear, which makes it more likely to be implemented incorrectly. And regardless, testers won't catch the incorrect implementation because they won't know how to test for it.

Reviews usually use judgment to identify untestable requirements, but ultimately the way to determine testability is to write test cases that demonstrate that the developed system meets the requirements. Such test cases can be written during requirements definition or review early in the life cycle, but they seldom are. Rather, test cases are ordinarily not written until the tail end of development, typically after the code has been completed.

Proactive test design offers a less well-known and far less frequently used opportunity to detect requirements problems and design errors -- including, but going beyond, mere lack of testability -- earlier than test cases are usually written, when the errors are easier and less expensive to fix.

Requirements and design reviews

The fact that proactive test design methods can detect requirements and design issues does not diminish the value of appropriate reviews. A review is a form of static testing suited for examining documents, as opposed to dynamic testing, which involves executing the system being tested.

Reviews can be used earlier in the life cycle than dynamic testing to catch requirements and design errors before they are implemented in code. Reviews can also be used later to detect defects in code, user and operations instructions, Help documentation and tests. Reviews can be performed individually or in groups by peers of the author (such as analysts or developers), users, management, security, operations, audit, QA testing and other involved parties.

Coding, and its associated testing, correction and retesting, is by far the most expensive part of software development. Consequently, the biggest payback from reviews comes from catching requirements issues and design errors before they are turned into incorrect code.

Unfortunately, many organizations don't review their requirements or designs at all, and those that do tend to rely on requirements and design reviews that are often much less effective than presumed. Most reviews use only one or two weak techniques, such as focusing on clarity and testability. Clarity and testability are important, but they mainly address form rather than content. That is, a requirement can be perfectly clear and testable but completely wrong, and clarity and testability are irrelevant for an overlooked requirement.

Moreover, most organizations don't distinguish well between requirements and designs. In fact, what they call requirements are often actually high-level designs -- product/system/software requirements -- that are presumed to meet REAL business requirements, which often are inadequately defined. Both are needed, along with more implementation-oriented technical designs, and multiple techniques are needed to review each in order to detect their respective issues.

My article "REAL business requirements key to calculating ROI for a project" more extensively describes these requirements and design differences.

Reviews: Know the terms

QA testing authorities agree that reviews are the most effective way to catch and fix errors economically, but there's less agreement on the use of terms for various types of reviews. Walkthroughs and inspections are the most prominent group review techniques. Some authors say the difference is simply that walkthroughs are informal and inspections are formal.

I prefer to characterize reviews on two dimensions: format and formality. A walkthrough's format follows the logic flow of what is being reviewed, whereas an inspection's format is guided by a checklist of common defects. It's common to apply logic flow and checklist formats together. A formal review follows defined procedures, reports findings in writing, and is the basis for a go or no-go decision from management. Informal reviews lack such elements and tend to be easier and cheaper to carry out. Both walkthroughs and inspections can be formal or informal.

QA testing involvement

One of the biggest frustrations for many QA testers is that they don't have the chance to participate in reviews of requirements and/or designs. Many organizations don't involve QA testing until late in the development process, often after the code has been written.

QA testing nonetheless remains obligated to test the delivered software competently. Competent testing involves planning and designing the most appropriate set of tests to demonstrate that the system works. Test planning and test design identify and determine tests to address the highest risks and establish an acceptable degree of confidence that the system does what it's supposed to and doesn't do what it's not supposed to.

Test planning and design have several important side benefits. First, for organizations that don't have well-documented requirements or designs, the written test descriptions may well represent the closest the organization has to documentation. Developers are often more grateful for such information than they let on.

Second, test planning and design can help clarify unclear or untestable requirements and designs. Flagging a requirement as a potential source of coding errors because you can't figure out how to test it is far more economical than waiting for the error to be made in the code and then trying to catch it by testing the developed system.

Third, using more powerful and proactive test planning and design techniques can reveal numerous overlooked requirements and design issues that traditional test planning and design, and even effective reviews, may have missed.

Traditional test design

The term "test design" refers to the process of identifying the set of test cases that need to be demonstrated to give us confidence that the system works. Test design also can involve identifying ways to obtain or create relevant test data and execute the tests. Risk analysis and prioritization are implicit in test design because the number of identified test cases usually greatly exceeds available time and resource constraints, so the most important subset must be identified for selective execution.

Traditional QA testing typically concentrates on writing a set of test cases to execute, hopefully with but often without suitable requirements and system design information. While a number of well-known systematic techniques can be applied to define test cases, many testers use informal "whatever I can think of" test design techniques to write their test cases.
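
Equivalence partitioning and boundary value analysis are two of the best-known such systematic techniques. The sketch below applies both to a hypothetical rule -- "order quantity must be between 1 and 100" -- to show how they generate test values mechanically rather than by whatever comes to mind; the rule and values are invented for illustration.

# Hypothetical rule under test: order quantity must be between 1 and 100.
MIN_QTY, MAX_QTY = 1, 100

def is_valid_quantity(qty: int) -> bool:
    return MIN_QTY <= qty <= MAX_QTY

# Equivalence partitioning: one representative value per partition.
partition_cases = {
    "below range (invalid)": 0,
    "within range (valid)": 50,
    "above range (invalid)": 150,
}

# Boundary value analysis: values at and immediately around each edge.
boundary_cases = [MIN_QTY - 1, MIN_QTY, MIN_QTY + 1,
                  MAX_QTY - 1, MAX_QTY, MAX_QTY + 1]

for label, qty in partition_cases.items():
    print(f"{label}: qty={qty} -> valid={is_valid_quantity(qty)}")
for qty in boundary_cases:
    print(f"boundary: qty={qty} -> valid={is_valid_quantity(qty)}")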

In many organizations, writing test cases takes a lot of time and effort, often because the accepted practice is to write test cases in a format that includes detailed keystroke-level procedural instructions. Many testers think that test design necessarily involves such a tedious, time-consuming format. Consequently, some may be motivated to skip what they think of as test design, believing that the more time spent writing test cases the less time there is to execute them.
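
Keystroke-level scripts aren't the only option. As one alternative, here is a lean, data-driven sketch in pytest style, where each row records only the inputs and expected result and leaves navigation details to shared setup; the login() function and its rules are hypothetical stand-ins for a real system under test.

import pytest

def login(username: str, password: str) -> str:
    # Hypothetical system under test, standing in for a real application.
    if not username or not password:
        return "error: missing credentials"
    if password == "correct-horse":
        return "welcome"
    return "error: invalid password"

# Each row is one test case: condition and expected result, nothing more.
@pytest.mark.parametrize("username,password,expected", [
    ("alice", "correct-horse", "welcome"),                 # happy path
    ("alice", "wrong", "error: invalid password"),         # bad password
    ("", "correct-horse", "error: missing credentials"),   # no username
    ("alice", "", "error: missing credentials"),           # no password
])
def test_login(username, password, expected):
    assert login(username, password) == expected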

Many testers just start running spontaneous tests of whatever occurs to them. Exploratory testing is a somewhat more structured form of such ad hoc test execution, which still avoids writing things down but does encourage using more conscious ways of thinking about test design to enhance identification of tests during the course of test execution. Ad hoc testing frequently combines experimentation to find out how the system works along with trying things that experience has shown are likely to prompt common types of errors.

Spontaneous tests often reveal defects, partly because testers tend to gravitate toward tests that surface commonly occurring errors and partly because developers generally make so many errors that one can't help but find some of them. Even ad hoc testing advocates sometimes acknowledge the inherent risks of relying on memory rather than on writing, but they tend not to realize the approach's other critical limitations.

By definition, ad hoc testing doesn't begin until after the code has been written, so it can only catch -- but not help prevent -- defects. Also, ad hoc testing mainly identifies low-level design and coding errors. Despite often being referred to as "contextual" testing, ad hoc methods seldom have suitable context to identify code that is "working" but in the service of erroneous designs, and they have even less context to detect what's been omitted due to incorrect or missing requirements.

Proactive test design

Instead of reactively diving into writing test cases late in development and then analyzing and prioritizing them based on risk, proactive test design starts early by identifying and analyzing the biggest risks and then works systematically down to the smaller risks, which ultimately are addressed efficiently by individual test cases.

I'm sure you're familiar with showstopper errors. Showstoppers are big and bad and occur at the worst possible time. Most are due to large risks that have been overlooked. How do we know that? The risk has to be big or it wouldn't stop the show; and we know it was overlooked because, had we been aware of such a big risk, we would have done something to make sure it didn't occur.

The main reason traditional QA testing overlooks risks is that those risks aren't addressed in the system design. That is, we tend to test what is in the design, and developers tend to develop what is in the design. Things that are wrong in or missing from the design turn into erroneous development that traditional QA testing is highly unlikely to detect.

Starting with high-level product/system/software requirements design, proactive testing enlists special risk identification techniques to reveal many large risks that are ordinarily overlooked, as well as the ones that aren't. These test design techniques are so powerful because they don't merely react to what's been stated in the design. Instead, these methods come at the situation from a variety of testing orientations. A testing orientation generally spots issues that a typical development orientation misses; the more orientations we use, the more we tend to spot.

QA testing ordinarily overlooks these newly identified risks because they were overlooked in the design. The most common reason something is missing in the design is that it's missing in the requirements too. By revealing these risks that need to be tested, we're not only able to create more effective tests to catch the risks if they do get into the code; we're also reducing the likelihood that the risks will occur, because we've detected things that should be corrected in the design and requirements. Fixing a design error is a whole lot quicker, cheaper and easier than catching and fixing that error once it gets coded.

Catching a high-risk requirements or design error is much more important than spotting a couple more low-risk test cases. Moreover, because the risk identification is occurring early, the higher risks can be tested not only more but also earlier. Earlier is usually not an option with traditional reactive test design.

However, proactive test design methods don't stop there. For the higher-priority large risks, similar special techniques are used to identify the medium-sized risks they include -- features, functions and capabilities. As with the large risks, these techniques help reveal many medium-sized risks that are usually overlooked and that again usually reflect medium-sized requirements or design errors.

Likewise, for the higher-priority medium-sized risks, still more special techniques identify the small included risks that individual test cases address, revealing overlooked test cases that in turn represent smaller requirements and design errors.
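
As a rough illustration of this top-down drill-down, the sketch below expands only the higher-priority risks at each level, from large risks through medium-sized risks to the small risks that individual test cases address. The risk tree, names and priorities are invented for the example.

# Hypothetical risk tree: large risks contain medium-sized risks, which
# contain small risks that map to individual test cases.
risk_tree = {
    "funds transfer fails or corrupts balances": {          # large risk
        "priority": "high",
        "children": {
            "daily limit not enforced": {                   # medium risk
                "priority": "high",
                "children": {
                    "transfer exactly at the limit": {},    # small risk -> test case
                    "transfer one cent over the limit": {},
                },
            },
            "audit trail entry missing": {"priority": "low", "children": {}},
        },
    },
    "report formatting wrong": {"priority": "low", "children": {}},
}

def drill_down(tree, depth=0):
    # Expand only the higher-priority risks into their included smaller risks.
    for name, node in tree.items():
        priority = node.get("priority", "high")
        print("  " * depth + f"{name} [{priority}]")
        if priority == "high":
            drill_down(node.get("children", {}), depth + 1)

drill_down(risk_tree)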

The greatest benefits come from using both effective reviews and proactive test design to catch requirements and design defects. But even if your organization doesn't review requirements and designs, or if it does but QA testing isn't involved, QA testing can still catch significant requirements issues and design errors as a bonus benefit of designing far more thorough and effective tests.


About the author: Robin F. Goldsmith has been president of Go Pro Management Inc. consultancy since 1982. He works directly with and trains business and systems professionals in requirements analysis, quality and testing, software acquisition, project management and leadership, metrics, process improvement, and ROI. Robin is also the author of Discovering REAL Business Requirements for Software Project Success.
