Pick user-centric software performance testing metrics

Unlike other types of tests, performance tests gauge how well an application works -- not whether it fails. Testers must pay close attention to these metrics to keep end users happy.

Performance testing is one of the most important sets of QA checks. Performance tests are critical for customer-facing applications. After all, app performance directly correlates with customer satisfaction. Customers are impatient and more than willing to move on to a competitor if your application runs too slowly or crashes.

QA engineers perform several types of performance tests, including stress, load, spike, volume and endurance, to assess the scalability, stability and reliability of an application. Depending on the application and a test's purpose, some of these performance testing types might carry more weight than others. For example, spike testing can evaluate a commercial application's ability to handle increased traffic before a holiday sale. Volume testing can determine if an application is equipped to handle large volumes of data in advance of a major event, such as the Olympics.
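For illustration, a spike test can be approximated with nothing more than a thread pool that abruptly multiplies the number of concurrent users. The following is a minimal sketch, assuming a hypothetical target URL and the third-party requests package; a real test would typically use a dedicated tool such as JMeter or Locust.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

TARGET = "https://example.com/"  # hypothetical endpoint under test


def hit(_):
    """Issue one request and return (status code, elapsed seconds)."""
    start = time.perf_counter()
    try:
        status = requests.get(TARGET, timeout=10).status_code
    except requests.RequestException:
        status = 599  # treat transport failures as errors
    return status, time.perf_counter() - start


def run_phase(label, users, duration_s):
    """Drive `users` concurrent workers for roughly `duration_s` seconds."""
    deadline = time.time() + duration_s
    results = []
    with ThreadPoolExecutor(max_workers=users) as pool:
        while time.time() < deadline:
            results.extend(pool.map(hit, range(users)))
    ok = sum(1 for status, _ in results if status < 400)
    print(f"{label}: {len(results)} requests, {ok} succeeded")


# Baseline load, a sudden spike, then recovery back to baseline.
run_phase("baseline", users=5, duration_s=30)
run_phase("spike", users=50, duration_s=30)
run_phase("recovery", users=5, duration_s=30)
```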

To evaluate information from these tests, QA engineers assess software performance testing metrics. Although test objectives typically determine which metrics testers analyze, two categories are relevant in most cases: response time and volume. These metrics describe performance from the customer's perspective and are therefore the most important ones for testers to evaluate.

Response time metrics

The most vital response time metric is likely page load time, which measures how long it takes for an entire page to download from the server and render on the user's screen. The responsiveness of the initial page load gives users their first impression of the application. This performance testing metric, also called render response time, makes a big difference in user experience.
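One way to capture page load time programmatically is to ask the browser for its own Navigation Timing data. The sketch below is a minimal example that assumes Selenium with Chrome and a placeholder URL; a production setup would collect and average many samples.

```python
from selenium import webdriver  # third-party: pip install selenium

driver = webdriver.Chrome()  # assumes Chrome and a managed chromedriver
driver.get("https://example.com/")  # placeholder page under test

# Navigation Timing: loadEventEnd - navigationStart covers the full
# download-and-render cycle, reported in milliseconds by the browser.
load_ms = driver.execute_script(
    "const t = performance.timing;"
    "return t.loadEventEnd - t.navigationStart;"
)
print(f"page load time: {load_ms} ms")
driver.quit()
```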

Response metrics measure the speed at which an application returns a response to a user action. There are several varieties of response metrics:

  • Server response time measures the amount of time expected for one node of the system to respond to another node's request.
  • Average response time provides the mean response time across all the requests made during the load test (see the calculation sketch after this list). Testers can calculate this metric based on either time to first byte or time to last byte. As an average based on time to last byte, this metric gives an accurate representation of the response time the user will experience.
  • Peak response time displays the longest response time during the test interval, usually one minute. With this metric, testers can determine which requests might take longer than others, and then target opportunities for enhancements.
  • Error rate calculates the proportion of requests that result in errors compared to the total number of requests. Although the error rate doesn't indicate which request caused an issue, it tells the tester there is an issue to investigate. A rising error rate is also an early warning sign that the system is being stressed beyond its capacity.
  • Network response time indicates the amount of time it takes to download data over the network, which can reveal when network latency affects performance.
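To make these definitions concrete, the sketch below derives average response time, per-minute peak response time and error rate from raw load test samples. The Sample record format is hypothetical; most load testing tools export comparable per-request data.

```python
from dataclasses import dataclass


@dataclass
class Sample:
    """One request recorded during a load test (hypothetical format)."""
    start_s: float    # seconds since the test started
    elapsed_s: float  # time to last byte, in seconds
    status: int       # HTTP status code


def summarize(samples, interval_s=60.0):
    avg = sum(s.elapsed_s for s in samples) / len(samples)
    error_rate = sum(1 for s in samples if s.status >= 400) / len(samples)

    # Peak response time: bucket samples by one-minute interval, then
    # keep the slowest request observed in each bucket.
    peaks = {}
    for s in samples:
        bucket = int(s.start_s // interval_s)
        peaks[bucket] = max(peaks.get(bucket, 0.0), s.elapsed_s)

    print(f"average response time: {avg:.3f} s")
    print(f"error rate: {error_rate:.1%}")
    for bucket in sorted(peaks):
        print(f"minute {bucket}: peak {peaks[bucket]:.3f} s")


summarize([
    Sample(1.2, 0.180, 200),
    Sample(30.5, 0.950, 200),
    Sample(65.0, 0.210, 500),
])
```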

Volume metrics

Stress tests provide volume metrics, which inform testers about a system's peak capacity. The following are three important metrics that testers can evaluate to learn how the system will handle increasing volume:

  • Concurrency. This metric is the largest number of users expected to access the system at the same time. Concurrency gives testers an understanding of the maximum load that the system will accommodate without performance degradations or crashes.
  • Throughput. This metric describes the number of transactions the system can accommodate in a given amount of time, typically measured while a scripted transaction workflow runs (see the sketch after this list).
  • Requests per second. This metric measures how many requests are sent to the server in one-second intervals.
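As a rough illustration, throughput and requests per second can be derived from the same kind of per-request records. The sketch below assumes a hypothetical export of (start time, elapsed time) pairs.

```python
from collections import Counter

# Hypothetical load test export: (start time, elapsed time) per request.
samples = [(0.1, 0.20), (0.4, 0.15), (1.2, 0.30), (1.8, 0.25), (2.5, 0.10)]

# Throughput: completed transactions divided by total test duration.
test_duration = max(start + elapsed for start, elapsed in samples)
throughput = len(samples) / test_duration

# Requests per second: count request starts in each one-second interval.
per_second = Counter(int(start) for start, _ in samples)

print(f"throughput: {throughput:.2f} transactions/s over {test_duration:.2f} s")
for second in sorted(per_second):
    print(f"t={second}s: {per_second[second]} requests")
```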

These are not the only helpful software performance testing metrics. Testers can also evaluate system-level metrics such as CPU utilization and network bandwidth. To find performance deficiencies before they reach production, testers must always take a customer-centric approach.
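For those system-level metrics, a lightweight monitor can sample CPU and network counters while a load test runs. The sketch below assumes the third-party psutil package; in practice, this data usually comes from APM or infrastructure monitoring tools.

```python
import psutil  # third-party: pip install psutil


def monitor(duration_s=10, interval_s=1.0):
    """Print CPU utilization and network bandwidth once per interval."""
    last = psutil.net_io_counters()
    for _ in range(int(duration_s / interval_s)):
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for the interval
        now = psutil.net_io_counters()
        out_kb = (now.bytes_sent - last.bytes_sent) / 1024 / interval_s
        in_kb = (now.bytes_recv - last.bytes_recv) / 1024 / interval_s
        last = now
        print(f"cpu {cpu:5.1f}%  net out {out_kb:8.1f} KB/s  in {in_kb:8.1f} KB/s")


monitor()
```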

Next Steps

23 software development metrics to track today

How test summary reports yield business value and benefits
