
What details to include on a software defect report

Teams that can write clear and detailed defect reports will increase software quality and reduce the time needed to fix bugs. Here's what to know about getting reports right.

Understandable defect reports get bugs out of an application efficiently. When defect reports aren't clear, a fix can introduce additional bugs.

Efficiency across software development enables quality, on-time releases and happier customers. A large part of that efficiency relies on successful bug correction, and quality defect reports help developers quickly make those fixes.

When writing defect reports, testers help developers most by including detailed and accurate steps to reproduce the problems they find. The steps to reproduce must include both the expected and actual results. Teams can also include screenshots and video attachments to aid understanding of the defect in question.

Details to include in a defect report

Most defect-tracking tools include a default template that testers can update to include additional fields if needed. Or, if the team is using a spreadsheet or other documentation method for tracking, testers can create an outline or template. Templates provide standard information that the team defines as useful for correcting the defect.

The details written in the defect report help developers understand the depth and breadth of the bug's effect and figure out the affected code. Locating broken code in a complex codebase is not an easy task, especially when developers work on more than one project at a time. The more details the defect reporter adds to the defect report, the easier the bug is to reproduce, locate and fix. The greater the understanding of the defect, the more likely the team will fix it correctly -- and without generating new and related bugs.

The details needed for an understandable defect report include the following:

  • Unique ID for tracking. A unique ID lets anyone on the team locate and reference the defect.
  • Reporter name. Name and contact information for questions.
  • Application and code version. Name the application, if the team works on more than one, as well as the code version tested.
  • Server or environment. Define where testing took place.
  • Browser and OS, if applicable. Include the OS and its version, as well as the browser and its version.
  • Reproducible = Y/N. Typically, defects are reproducible; however, there are times when the user's PC settings, browser settings or configuration setup generate defects not otherwise seen. If any special settings are used during testing, indicate what they are and their function.
    • Frequency = Always, Random. Knowing how frequently a bug reproduces is important. Many bugs are intermittent. Knowing the bug doesn't always reproduce helps narrow down where it's coming from and why it occurs only sometimes.
  • Link to or name of the test case, if applicable. When the bug is detected during test execution, include a link to the test case or the name.
  • Screenshots or video files of steps, log files or errors. Browser dev tool logs or other log files assist developers in understanding the defect. Include video of the bug in action or screenshots to help with visual understanding.
  • Configuration settings, if applicable. List any nonstandard or specific configuration settings used.
  • Expected result/behavior and actual result/behavior. Developers might not know how the application works from end to end since they tend to code specific functions. Including the expected outcome -- in addition to the actual outcome -- provides crucial information for locating the defect.
  • Severity/priority. How critical is the defect? Product or development might change this value later, but the reporter should set an initial value based on the bug's effect on the user experience.
  • Troubleshooting notes. Include any notes on troubleshooting steps taken, database queries or error log findings.

The quality of a defect report relies on the accuracy of the details that testers include. Adding in any troubleshooting done helps developers find the root cause of the defect rather than a single symptom.
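As a rough illustration, the fields above can be collected into a simple data structure. This is a hypothetical Python sketch; the field names and defaults are assumptions for illustration, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    # Hypothetical template mirroring the fields listed above;
    # names and defaults are assumptions, not a standard schema.
    defect_id: str                      # unique ID for tracking
    reporter: str                       # name and contact information
    app_version: str                    # application and code version
    environment: str                    # server or environment tested
    browser_os: str = ""                # browser and OS, if applicable
    reproducible: bool = True
    frequency: str = "Always"           # "Always" or "Random"
    test_case: str = ""                 # link to or name of the test case
    attachments: list[str] = field(default_factory=list)  # screenshots, videos, logs
    config_settings: str = ""           # nonstandard configuration used
    expected_result: str = ""
    actual_result: str = ""
    severity: str = ""                  # initial value set by the reporter
    troubleshooting_notes: str = ""
    steps_to_reproduce: list[str] = field(default_factory=list)
```

A spreadsheet column per field, or a custom field in the tracking tool, serves the same purpose; the point is that every report carries the same set of details.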

Describing the defect with steps to reproduce

Defects must contain specific steps to reproduce the problem described. Run through the steps more than once to ensure no actions were missed. It's easy to skip steps by making assumptions about the actions taken. Make sure all clicks, page changes and selections are accurate.

The quality of the steps to reproduce varies widely depending on the role and the person entering the defect. Developers appreciate a complete description with all the relevant details needed for them to reproduce the defect. Defect reports written without the correct or complete steps are often ignored or stashed back in the system as technical debt.

The following shows high-quality and understandable steps to reproduce, using a patient information app for healthcare professionals as an example:

  1. Log in to the mobile app for MDs as a Nurse with a Physician role using a valid username, password and secondary authentication code:
    1. Dev Server A Username = NurseAP
    2. Password = PTarmi$88gan7
    3. Second authentication code sent to test email: [email protected]
    4. SMS code sent to: 770-123-4567 (test phone for Dev Server A)
  2. Navigate to the patient page by selecting the Patient tab from the top right.
  3. Add a medication allergy to a neonatal patient, or patient under 2 months of age, by clicking the Add Med button:
    1. Verify the configuration setting is on for allergy alert for neonatal patients.
      1. Note: Configuration settings are found under the Config tab > Patient > Neonatal > Allergy
    2. Verify neonatal patient defined as: <= 60 days (1-60 days)
  4. Save.
  5. Add a medication order for the allergy medication to three patients:
    1. Patient > 2 months of age
    2. Patient < 2 months of age
    3. Patient = 3-6 months of age
  6. The following error occurs for Patient B:
    1. Patient B meets the definition of a neonatal patient, but the error message does not pop up to warn the user the action will be ignored due to the allergy present.
    2. The user can add the medication the patient has an allergy to without an error.
  7. Expected result: Physician users cannot add a medication to a neonatal patient. The system generates a pop-up error message indicating the medication cannot be added due to the existing allergy.
  8. Actual result: A physician user can successfully add medication to a neonatal patient with an active allergy to the medication. The system error message indicating the medication cannot be added due to the allergy does not pop up.
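To make the expected result concrete, here is a minimal Python sketch of the validation the report says should fire. The function name, data shapes and the source of the 60-day threshold are all assumptions for illustration, not the application's actual code:

```python
NEONATAL_MAX_AGE_DAYS = 60  # mirrors the configured definition in step 3 (<= 60 days)

def check_medication_order(patient_age_days: int, allergies: set[str], medication: str):
    """Return (allowed, message) for a medication order.

    Hypothetical sketch of the expected behavior: a neonatal patient
    with an active allergy to the medication must be blocked with an error.
    """
    if patient_age_days <= NEONATAL_MAX_AGE_DAYS and medication in allergies:
        return False, f"Cannot add {medication}: neonatal patient has an active allergy."
    return True, ""
```

The defect is that a patient under 60 days falls through this check without an error; spelling out the rule this way makes the gap between the expected and actual behavior unambiguous.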

The following is an example of ineffective steps to reproduce for the same use case:

  1. The user doesn't get a pop-up error when a patient has medication added with an existing allergy.
  2. Expected result: Physician users cannot add a medication to a neonatal patient. The system generates a pop-up error message indicating the medication cannot be added due to the patient's allergy.
  3. Actual result: A physician user can successfully add a medication to a neonatal patient with an active allergy to that medicine. The system error message alerting the doctor to the allergy does not pop up.

The developer may not know how to add an allergy or a medication to a neonatal patient, let alone where to select a patient. The steps are correct, but they assume application workflow and configuration knowledge the assigned developer might not have. Include explicit details for the developer to reproduce the defect.

Importance of expected and actual results/behavior

The expected results section indicates what should happen when a developer executes the steps to reproduce. Make the expected results specific and explicit. Don't assume the reader fully understands what should happen. In the previous example, the expected results indicate an error message pops up in a window. Consider adding a screenshot of the pop-up window or the complete text of the error message.

Many applications write error logs to an accessible location. Include the error log when it captures the error or records the failure to display the expected window. If the application does not generate error logs on its own, use the browser's developer tools to try to trap the error. The Console, Network and Elements tabs can surface application errors at runtime that aren't visible in the application UI. Including details about the specific window and error helps the developer find the right place in the code.

The actual results describe the bug or defect. In the previous example, the actual results indicated the physician was able to add a medication even when the patient had an active allergy to it. Additionally, no error message window appeared to tell the user why the medication could not be added or to prevent them from doing so.

The actual result here describes two symptoms of the defect: One is the missing error window, and the other is allowing the medication entry when it should be declined with an error. When creating actual results, explain the full detail of the problem so it's clear how the actual results differ from the expected results, or why the result is defective.

Aiding understanding with attachments

Attach video, image and error log files only when they clearly show the problem or directly indicate the defect. An attachment that doesn't illustrate the issue adds noise rather than clarity.

Make sure video files accurately depict each step taken and match the steps to reproduce. Missing steps are often the reason developers fail to reproduce a bug. Keep in mind that videos consume data storage, so use them only when screenshots don't indicate the defect. Screenshot images or screen recorder files are useful for visual proof of the issue and to possibly indicate the location in the code.

Include attachments that are meaningful and actionable -- if a file isn't useful or doesn't show the defect, don't include it. Attaching error logs or dev tool-generated errors reduces the time needed to locate the defect. Make sure the files open in a readable format; otherwise, attach them as screenshots.
