Should there be a one-to-one mapping between requirements and test cases?
As much as I’d like to give a yes or no answer, in reality the answer is, “it depends.”
All requirements must be testable, regardless of how you document them -- as "shall/will" statements, narrative text, structured or patterned text, tabular formats, use cases, user stories, or other models. Specifically, each requirement must have at least one test case that verifies it. If you're struggling to define a test case, the requirement needs more work: it's either poorly or incompletely specified.
But even though every requirement needs at least one test case, there isn't necessarily a one-to-one mapping between each requirement and a single test case. The mapping depends on how broadly you define your tests and how narrowly you express your requirements. Some testers argue that a single, well-structured test must be limited to verifying a single condition or circumstance; others suggest that a single test can verify more than one set of conditions or circumstances. In the latter case, a given requirement and a given test don't map one-to-one.
With either approach, broadly defined requirements call for multiple test cases because each requirement encapsulates multiple variations. On the other hand, if you specify requirements that are small and discrete and that define a single condition, then you may have a one-to-one mapping between each atomic requirement and a single test case. Several small, discrete requirements may be covered by one test case when your test cases cover sets of conditions rather than single conditions. Or there may be a many-to-many relationship between your requirements and your test cases if you’ve specified a small, discrete requirement as part of several sets of conditions.
Whether the degree of the relationship is one-to-one, one-to-many, or many-to-many, it’s important to trace the relationship between requirements and the tests that verify them. Having this traceability is especially useful when your requirements change (as they often do) because it will help you quickly determine which tests to change.
Often, it's not feasible to define and test for every possible requirement and its variations. Choose tests that will provide "enough" coverage of your requirements. "Enough" coverage means different things to different organizations. Most organizations opt to test their highest-priority requirements -- the ones that get the most use or that carry a high risk or cost if implemented incorrectly or inaccurately. Many organizations also test seldom-used capabilities whose failure could stop operations or result in significant regulatory fines or other penalties. Highly regulated organizations and those that deal with health and safety concerns must test far more comprehensively than others. If you use automated testing, your testing can be more comprehensive than if you rely solely on manual testing.
One way to create better requirements and tests is to define the requirements and their associated tests at the same time. In fact, identifying tests as you explore requirements is itself a form of preliminary testing. It helps ensure the completeness, consistency and correctness of the requirements, and deepens the team’s and customer’s mutual understanding of requirements. This pattern is widely used by Agile teams.
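As a sketch of that pattern, the example below pairs a hypothetical requirement -- "the system shall reject passwords shorter than 8 characters" -- with acceptance checks written while the requirement is being explored. The requirement, the function name, and the rule itself are all invented for illustration; the point is that writing the checks surfaces questions (boundary values, empty input) that sharpen the requirement.

```python
# Hypothetical requirement, captured alongside its acceptance checks:
# "The system shall reject passwords shorter than 8 characters."

def is_valid_password(password):
    # Minimal implementation of the hypothetical rule.
    return len(password) >= 8

# Acceptance checks written while exploring the requirement. Trying to
# write them exposes gaps: is exactly 8 characters valid? What about
# empty input?
assert not is_valid_password("short")    # 5 chars: rejected
assert is_valid_password("longenough")   # 10 chars: accepted
assert is_valid_password("exactly8")     # boundary: 8 chars accepted
assert not is_valid_password("")         # empty input: rejected
```

Teams practicing acceptance-test-driven development often write such checks in a shared, business-readable form; the executable version above is just the simplest way to show the idea.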