3 software testing sample test cases, with templates
A good test case is easy to trace, reusable and relevant to user needs. Learn how to write an effective test case with these examples and free templates.
Test cases are the foundation of the software testing process. They define specific conditions, inputs and expected outputs to test a specific feature or functionality.
Software testing, through test cases, helps teams identify bugs during development cycles, provide higher-quality releases for customers and ultimately build trust and loyalty. Through systematic checks on a detailed set of conditions, test cases ensure that an application meets the functional and quality expectations of customers at any given time.
Learn the core components of a good test case, the different types of test cases and best practices for writing them. Also, use the included samples and templates as a starting point for your own test plan.
What is a test case?
A test case is a script -- in either plain language or code -- that defines both data and actions taken to prove application functionality works as expected. Each test case includes an overall objective followed by test steps or an explanation of the steps to take. Typically, most test cases include an objective, input, execution and an expected output or response.
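Those components map naturally onto code. The following is a minimal sketch of a test case written as a pytest-style script; the function under test (`calculate_discount`) is a hypothetical example, not from this article, and the docstring carries the objective, input, execution and expected output:

```python
# Hypothetical feature under test: apply a percentage discount to a price.
def calculate_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_applies_to_price():
    """Objective: verify a valid discount reduces the price.

    Input: price=100.00, percent=20
    Execution: call calculate_discount
    Expected output: 80.00
    """
    assert calculate_discount(100.00, 20) == 80.00
```

A test runner such as pytest would discover and execute `test_discount_applies_to_price` automatically, reporting it as passed or failed.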
What are test cases for?
The main purpose of a test case is to validate features and functionality within a software application so that it works for a wide variety of users and systems. Essentially, test cases ensure that applications function as expected based on customer requirements and industry quality standards.
For Agile application development teams, test cases also provide the following:
Blueprints for creating additional tests efficiently with reusable templates.
Status reports for how an application is performing during development.
Test cases have specific objectives or purposes based on the test case type. The different types help organize testing into logical groups to provide complete application test coverage. Typical test case types include the following:
Functional or feature.
Performance.
Unit.
Visual (UI).
Integration.
Security.
Usability.
Accessibility.
QA testers group test cases into test suites or collections according to test execution needs.
For example, during the development cycle, QA teams create test cases for functional or feature tests. Once the test case passes in a build, it moves into the regression test suite. Teams execute the regression test suite repeatedly prior to releasing a final build to customers.
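That promotion flow can be sketched in a few lines. This toy example (the IDs and statuses are illustrative, not from a real tool) moves every passing functional test case into a regression suite:

```python
# Results from the latest functional test run (illustrative IDs).
feature_suite = {"TC-101": "passed", "TC-102": "failed", "TC-103": "passed"}

# Promote only the passing test cases into the regression suite,
# which the team will re-execute before each release build.
regression_suite = [tc_id for tc_id, status in feature_suite.items()
                    if status == "passed"]
# regression_suite == ["TC-101", "TC-103"]
```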
Teams create security test cases when they have the expertise. These might be automated, manual or both. Usability test cases ensure the application is user-friendly and intuitive for common user types. Accessibility tests verify that the application works for a wide customer base, including users with a range of disabilities, without requiring any special devices.
Developers typically create unit tests and routinely execute them whenever they add new code or create a new application build. Unit tests -- as well as visual, integration and performance testing -- are considered a type of test automation and usually take the form of scripts.
Many testing tools can automatically create visual, performance and integration test cases. After creating the test suite, QA testers can schedule automated tests according to the team's needs. For example, a team might create an automated API test in Postman or Selenium, then create test cases for the API and execute them as needed -- or schedule a test execution run. Many QA testing teams use a combination of manual testing and automated testing for increased coverage and overall test efficiency.
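As a rough illustration of an automated API test case, the sketch below checks a hypothetical appointments endpoint; the URL, payload and response shape are assumptions for the example. The HTTP client is injected so the same check can run against a stub in this snippet or a real client (such as a `requests.Session`) in practice:

```python
from dataclasses import dataclass

@dataclass
class FakeResponse:
    """Stub response standing in for a real HTTP response."""
    status_code: int
    _json: dict
    def json(self):
        return self._json

class FakeClient:
    """Stub client standing in for a real HTTP session in this sketch."""
    def post(self, url, json=None):
        return FakeResponse(201, {"status": "Waiting for appointment confirmation"})

def check_create_appointment(client, base_url="https://api.example.com"):
    """Test case: POST a new appointment and verify the created status."""
    payload = {"provider_id": 42, "date": "2025-06-01", "time": "09:00"}
    resp = client.post(f"{base_url}/appointments", json=payload)
    assert resp.status_code == 201
    assert resp.json()["status"] == "Waiting for appointment confirmation"
    return True

check_create_appointment(FakeClient())  # passes against the stub
```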
How to write test cases: Components of a test case
Test cases involve specific components for identification, reusability and placement in the appropriate execution suites. Efficient test case management relies on the existence of the following components:
Test case ID
Use a unique identifier for each test case.
Test scenario
Optional component that identifies a group of related tests.
Note the customer requirement, if provided.
Test steps
Test cases and scripts include a series of steps required for execution.
Prerequisites
Any test data or connectivity required to execute the test.
Expected results
Description of what should happen or what is expected to happen.
Actual results
Description of what happens when test steps are executed.
Usually only filled in when it differs from the expected results.
Test status
Marks the test as passed, failed or blocked.
Passed tests work as expected.
Failed tests have one or more actual results that differ from the expected results.
Blocked tests cannot be executed -- for example, because of defects or missing objects in the back end.
Test case components might differ depending on the type and style of test cases. The title of each section might also depend on a team's preferences. For example, "Prerequisites" might be changed to "Preconditions." Instead of "Expected results," some teams might prefer the term "Postconditions." Use the one that best fits the team.
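One way to capture the components above in a structured record is shown below. The field names follow this article and the status values mirror the ones it lists; the schema itself is an illustrative sketch, not a standard format:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class TestStatus(Enum):
    PASSED = "passed"
    FAILED = "failed"
    BLOCKED = "blocked"

@dataclass
class TestCase:
    test_case_id: str                       # unique identifier
    title: str
    test_scenario: str = ""                 # optional group of related tests
    requirement: str = ""                   # customer requirement, if provided
    prerequisites: list = field(default_factory=list)   # data or connectivity needed
    test_steps: list = field(default_factory=list)      # steps required for execution
    expected_results: str = ""
    actual_results: str = ""                # usually filled in only when it differs
    status: Optional[TestStatus] = None     # unset until the test is executed

tc = TestCase(test_case_id="TC-101", title="Create a New Appointment")
tc.status = TestStatus.PASSED
```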
How to write test cases
Make sure to address the items in the following list to develop effective test cases. These steps will help QA testing teams and development teams test efficiently:
Define the test's objective or specific purpose as a description of the test.
Make tests reusable where possible to save on testing time.
Update test steps in alignment with the latest application changes -- this creates repeatable tests for ongoing execution.
Create clearly labeled test cases that teams can identify in large suites and reduce duplicate tests.
Use a common naming structure to keep test cases organized into logical groups.
Write straightforward step descriptions.
Keep steps small and focused.
Write steps that represent how known users will interact with the application.
Include all required prerequisite data or connectivity and where to find it, or how to get it set up.
Additionally, consider the following best practices.
Review test cases for consistency and clarity
Test cases are most effective when they are understandable and clear to anyone who picks them up. Be certain all the required steps are documented clearly. Experienced testers often skip over essential steps that they know are required, but new users don't. A new user following unclear steps verbatim can result in a false failure. Regularly check clarity and consistency in test cases as they evolve.
Practice effective test case management
Test cases tend to grow in numbers rather quickly during application development. As the application matures, the number of test cases makes them hard to find without a consistent and clear system of identification. Managing test cases is important for keeping them up to date and helping to reduce duplicate test creation.
Use test management tools and naming conventions
As is commonly the case in software development, there are tools to help track, update and organize test cases and suites. Most test management tools include status reporting and methods for organizing test suites. If tools are out of the question, then the team must define consistent naming and grouping practices so they can find and track tests manually.
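A naming convention is easiest to keep when it can be checked mechanically. The sketch below validates IDs against a hypothetical pattern of area, test type and sequence number (such as "LOGIN-FUNC-001"); the pattern is an example convention, not a standard:

```python
import re

# Hypothetical convention: AREA-TYPE-NNN, e.g. "LOGIN-FUNC-001".
TEST_ID_PATTERN = re.compile(r"^[A-Z]+-(FUNC|PERF|SEC|UI)-\d{3}$")

def is_valid_test_id(test_id: str) -> bool:
    """Return True when a test ID follows the team's naming convention."""
    return bool(TEST_ID_PATTERN.match(test_id))

assert is_valid_test_id("LOGIN-FUNC-001")
assert not is_valid_test_id("test1")
```

A check like this can run in a pre-commit hook or CI job so inconsistent names never reach the shared suite.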
3 example test cases with templates
If teams use a test management tool, there are preconfigured fields and methods for creating manual or automated test cases. Change the configured fields to match the team's needs and create process or instructional documents that explain what fields to use and what type of information to include.
Teams can use these examples and the associated downloadable templates to create test cases in any format desired -- text, tool or spreadsheet. The following three test case examples test the same application function in three different styles:
Standard style
A standard test case contains a test name, ID, scenario, data, prerequisites and steps. It includes a description of each step, along with expected results, actual results, test status and comments about the steps.
Test Name: Create a New Appointment
Test ID: 101
Test Scenario:
Create a New Appointment
Prerequisites:
User has successfully logged in to an existing patient account
Test Data:
Need user login and password for authentication
Test Steps:
| Step | Description | Expected Results | Actual Results | Test Status | Comments |
|------|-------------|------------------|----------------|-------------|----------|
| 1 | Click the link to schedule an appointment | The link opens the Schedule Appointment page with a list of the patient's providers. | | Pass | |
| 2 | Select provider | The selected provider displays appointment type options. | | Pass | |
| 3 | Select from available options for appointment type | When selected, the appointment type loads the appointment details page. | | Pass | |
| 4 | Select a preferred date by entering directly or clicking the calendar tool | User can set a preferred date, either by entering valid data into the field or selecting a date from the calendar tool. | User cannot enter a valid US date in MMDDYYYY format directly. | Failed | Entered Defect #101 for inability to add a date directly for the preferred date. Continued test by setting the preferred date using the calendar tool. |
| 5 | Select a preferred time by clicking on the Time link | The preferred time selected is saved and shows on the page with an immediate refresh. | | Pass | |
| 6 | Enter text into the Appointment field to describe the reason for the appointment | User can enter text into the Appointment field up to 500 characters. | | Pass | |
| 7 | Click button = Send Request | Button is active and saves the request on click. User returns to the patient homepage. | | Pass | |
| 8 | From the patient homepage, click Confirm Appointment to check the status = Waiting for appointment confirmation | The patient's homepage displays the new appointment along with a current status = Waiting for appointment confirmation. | | Pass | |
| 9 | Click the hamburger menu and then select the Logout button | Clicking the hamburger menu opens the user options page with an active Logout button. On click, the user is logged out and returned to the main login page. | The system fails to log out the user because the appointment is not properly saving (see Defect #101). | Blocked | Defect #101 blocks the user's ability to save a new appointment request. |
This standard template type is often found in test management tools.
The second format is typically a more informal method that doesn't designate a test ID or test scenario; it is managed only through the test name, which usually describes the function under test. In the following example, the function is "creating a new appointment." In most cases, this format does not include expected results -- the tester notes findings in the Actual Results and Comments sections.
Test Name: Create a New Appointment
Prerequisites:
User has successfully logged into an existing patient account
Test Data:
Need user login and password for authentication
Test Description:
The test objective is to verify users can successfully create a new appointment with a primary physician, choosing from various appointment types and selecting a mix of valid dates and times.
Test Case Story:
A user logs in to the app to schedule a new appointment with their primary physician. Once the user successfully logs in, they see their account with an active link to schedule a new appointment.
The tester selects the primary physician as the provider and then tests each appointment type, verifying that the user can create an appointment successfully and log out of the app.
After logging in to the app, I selected an annual physical exam, then a routine checkup, then a new issue or illness. For each appointment type, I selected different dates and entered each using the calendar tool.
Once I completed entering each appointment type with valid date and time, I entered another appointment with a date in the past. The system correctly generated an error message that the date selected was invalid (see Comments).
During testing, I selected a range of times as listed. When entering text into the appointment description field, I included special characters, code strings and numeric characters. The special and numeric characters were saved as expected, and the system ignored and automatically removed any text written as code strings.
The Send Request button was always enabled as expected, and the user returned to the correct patient homepage. I reviewed the data to ensure it matched the logged-in patient's expected data.
I confirmed all appointments display on my patient's homepage with a status of "Waiting for appointment confirmation."
I selected the hamburger menu, then clicked on the logout button and the user was successfully logged out.
I repeated all tests but manually entered the preferred dates for each. The system ignored my attempted entry and then later generated a failure to save when I tried to log out my test patient. (See Comments and Defect #101)
Expected Results:
Actual Results:
Comments:
Reproduced Defect #101 where the user cannot enter preferred dates manually and therefore save appointments and logout.
Although the user receives an error message when selecting a date in the past, is it possible to prevent them from even seeing or entering a date in the past?
If users were to add Expected Results to this example, it might read "The user can create a new appointment with their chosen provider after selecting various dates and appointment types. The system saves changes and sets the appointment without error."
Expected results are not required in this format because they are implied in the Test Description section.
Editor's note: The author created all three downloadable templates in this article.
Amy Reichert is a 25-plus-year professional QA tester and writer. As a tester, she specializes in test development and Agile team management.