
13 application performance metrics and how to measure them

You've deployed your application, now what? Keep your application performing well by tracking metrics. Take a look at these 13 critical KPIs.

Application performance metrics are important for deciphering the extent to which an application actually helps the business it supports and revealing where improvements are needed. The key to success is to track the right metrics for your app.

These application performance metrics, commonly known as key performance indicators (KPIs), are a quantitative measure of how effectively the organization achieves its business objectives. Capturing the right metrics will give you a comprehensive report and powerful insights into ways to improve your application.

Below are 13 core application performance metrics that developers should track.

1. CPU use

CPU use, often expressed as a percentage, directly affects the responsiveness of an application. Sustained use above roughly 70% indicates the CPU is under heavy load: the application spends most of its time on computation, requests wait longer to be served and responsiveness degrades. Treat sustained spikes in CPU use as a performance bug and investigate which code paths are consuming the cycles.
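If you want a quick way to watch this yourself, the following minimal sketch polls overall CPU use and flags spikes. It assumes the third-party psutil package is installed and uses 70% purely as an illustrative alert threshold:

import psutil  # third-party package: pip install psutil

CPU_ALERT_THRESHOLD = 70.0  # illustrative threshold; tune for your workload

def watch_cpu(samples: int = 5, interval: float = 1.0) -> None:
    """Poll overall CPU use and warn when it exceeds the alert threshold."""
    for _ in range(samples):
        # cpu_percent() blocks for `interval` seconds and returns system-wide CPU use
        usage = psutil.cpu_percent(interval=interval)
        if usage > CPU_ALERT_THRESHOLD:
            print(f"WARNING: CPU at {usage:.1f}% -- investigate hot code paths")
        else:
            print(f"CPU at {usage:.1f}%")

if __name__ == "__main__":
    watch_cpu()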

2. Memory use

Memory use is another important application performance metric. High memory consumption, memory leaks or insufficient memory all hurt application performance and scalability. When tracking an application's memory use, keep an eye on the number of page faults and on disk access times: if you have allocated inadequate virtual memory, your application will spend more time thrashing than doing useful work.
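As a rough illustration, this sketch reports both system-wide memory use and the resident memory of the current process; it again assumes psutil is installed:

import os

import psutil  # third-party package: pip install psutil

def report_memory() -> None:
    """Print system-wide memory use and this process's resident set size."""
    vm = psutil.virtual_memory()
    print(f"System memory in use: {vm.percent:.1f}%")

    proc = psutil.Process(os.getpid())
    rss_mb = proc.memory_info().rss / (1024 * 1024)  # resident set size in MB
    print(f"This process is using {rss_mb:.1f} MB of RAM")

if __name__ == "__main__":
    report_memory()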

3. Requests per minute and bytes per request

Tracking the number of requests your application's API receives per minute can help determine how the server performs under different loads. It's equally important to track the amount of data the application handles during every request. You might find that the application receives more requests than it can manage or that the amount of data it is forced to handle is hurting performance.
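Here is a minimal sketch of both calculations, using a small hypothetical in-memory access log of (timestamp, request size) pairs:

from collections import Counter
from datetime import datetime

# Hypothetical access-log entries: (arrival time, request size in bytes)
access_log = [
    (datetime(2024, 1, 1, 12, 0, 5), 512),
    (datetime(2024, 1, 1, 12, 0, 40), 2048),
    (datetime(2024, 1, 1, 12, 1, 10), 1024),
]

# Requests per minute: bucket each request by the minute it arrived in
per_minute = Counter(ts.replace(second=0, microsecond=0) for ts, _ in access_log)
for minute, count in sorted(per_minute.items()):
    print(f"{minute:%H:%M} -> {count} request(s)")

# Bytes per request: average payload size across all requests
avg_bytes = sum(size for _, size in access_log) / len(access_log)
print(f"Average bytes per request: {avg_bytes:.0f}")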

4. Latency

Latency, usually measured in milliseconds, is the delay between a user's action on an application and the application's response to that action. Higher latency directly increases an application's load time. A ping or synthetic monitoring service can help here: configure it to probe the application at regular intervals so it records response times and confirms the application is up and running.
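To get a feel for the measurement, here is a minimal sketch that times a single round trip to an endpoint; the URL is a placeholder for your own health or ping endpoint:

import time
import urllib.request

def measure_latency(url: str) -> float:
    """Return the round-trip time, in milliseconds, for a single GET request."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()  # make sure the full response body arrives
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    # Placeholder URL -- point this at your own endpoint
    print(f"Latency: {measure_latency('https://example.com/'):.1f} ms")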

5. Security exposure

You should ensure that both your application and its data are safe. Determine how much of the application is covered by security controls and how much is exposed and unsecured. You should also have a plan in place to estimate how much time it takes -- or might take -- to resolve specific security vulnerabilities.

6. User satisfaction/Apdex scores

Application Performance Index (Apdex) is an open standard that measures a web application's response times against a predefined threshold and expresses the result as a score between 0 and 1 based on the ratio of satisfactory responses to total samples. Response time here is the time it takes for a requested asset to be returned to the requestor.

Here's an example: Assume you've defined a time threshold of T seconds. All responses completed in T seconds or less are considered to have satisfied the user; responses that take longer fall into the tolerating or frustrated categories described below.

Apdex defines three types of users based on user satisfaction:

  1. Satisfied. This rating represents users who experienced satisfactory or high responsiveness.
  2. Tolerating. This rating represents users who have experienced slow but acceptable responsiveness.
  3. Frustrated. This rating represents users who have experienced unacceptable responsiveness.

You can calculate the Apdex score with the following formula, where SC denotes satisfied count, TC denotes tolerating count, FC denotes frustrated count and TS denotes total samples:

Apdex = [SC + (TC/2) + (FC × 0)]/TS

Assuming a data set of 100 samples, where you've set a performance objective of 5 seconds or better, suppose 65 are below 5 seconds, 25 are between 5 and 10 seconds and the remaining 10 are above 10 seconds. With these parameters, you can determine the Apdex score as follows:

Apdex = [65 + (25/2) + (10 × 0)]/100 = 0.775
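The same calculation in code, reproducing the worked example above (the 5-second and 10-second thresholds come from the example, not from the Apdex standard):

def apdex(response_times, satisfied_t, frustrated_t):
    """Compute the Apdex score for a list of response times in seconds."""
    satisfied = sum(1 for t in response_times if t <= satisfied_t)
    tolerating = sum(1 for t in response_times if satisfied_t < t <= frustrated_t)
    # Frustrated samples count as zero, so they appear only in the denominator
    return (satisfied + tolerating / 2) / len(response_times)

# 65 fast, 25 tolerable and 10 slow samples, as in the example above
samples = [3] * 65 + [7] * 25 + [12] * 10
print(apdex(samples, satisfied_t=5, frustrated_t=10))  # 0.775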

7. Average response time

The average response time is calculated by averaging the response times for all requests over a specified period of time. A low average response time implies better performance, as the application or server has taken less time to respond to requests or inputs.

The average response time is determined by dividing the time taken to respond to the requests in a given time period by the total number of responses during the same period.
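In code, the calculation is a simple average over whatever measurement window you choose:

# Response times (in milliseconds) collected over a measurement window
response_times_ms = [120, 95, 310, 150, 88]

# Average response time = total time spent responding / number of responses
average_ms = sum(response_times_ms) / len(response_times_ms)
print(f"Average response time: {average_ms:.1f} ms")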

8. Error rates

This performance metric measures the number of requests that result in errors compared with the total number of requests in a given time frame, expressed as a percentage. A spike in the error rate is an early warning that a larger failure may be on the way.

You can track application errors using the following indicators:

  • Logged exceptions. This indicator represents the number of unhandled and logged errors.
  • Thrown exceptions. This indicator represents the total of all exceptions thrown.
  • HTTP error percentage. This indicator represents the number of web requests that were unsuccessful and that returned error messages.

In essence, you can take advantage of error rates to monitor how often your application fails in real time. You can also keep an eye on this performance metric to detect and fix errors quickly, before you run into problems that can bring your entire site down.
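For example, computing the HTTP error percentage from a sample of status codes is straightforward (the status codes below are made up for illustration):

# Hypothetical status codes returned by the application over a time window
status_codes = [200, 200, 500, 200, 404, 200, 200, 503, 200, 200]

# Count server-side (5xx) errors and express them as a percentage of all requests
errors = sum(1 for code in status_codes if code >= 500)
error_rate = errors / len(status_codes) * 100
print(f"HTTP error rate: {error_rate:.1f}%")  # 20.0%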

9. Uptime

Application availability and uptime are critical KPIs for application performance monitoring, as they determine the accessibility and operational status for an application's end users at any given time. Organizations strive to achieve high availability and high uptime to ensure continued access to their applications, reduced interruptions and increased satisfaction among their end users. When an application goes down or is unavailable, it is detrimental to any business since this can result in loss of sales and revenue, reputation damage and discontent among customers.
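One simple way to approximate uptime is to probe a health endpoint on a schedule and record the success ratio. The sketch below assumes a hypothetical /health URL and uses a small number of checks for brevity:

import time
import urllib.error
import urllib.request

def is_up(url: str) -> bool:
    """Return True if the health endpoint responds with a 2xx status."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return 200 <= response.status < 300
    except (urllib.error.URLError, OSError):
        return False

def measure_uptime(url: str, checks: int = 10, interval: float = 60.0) -> float:
    """Probe the endpoint `checks` times and return the uptime percentage."""
    successes = 0
    for _ in range(checks):
        if is_up(url):
            successes += 1
        time.sleep(interval)
    return successes / checks * 100

if __name__ == "__main__":
    # Hypothetical health-check endpoint
    print(f"Uptime: {measure_uptime('https://example.com/health', checks=3, interval=1.0):.1f}%")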

10. Database queries

The performance of any application depends heavily on how well its database queries are designed. This metric helps you analyze the total number of queries executed over a given period, identify slow or badly formulated queries, spot inefficient joins, and flag the use of too few or too many indexes as well as queries that return more data than is actually required.
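A simple place to start is timing every query and flagging the slow ones. This sketch uses SQLite from the Python standard library and an arbitrary 100 ms threshold purely for illustration:

import sqlite3
import time

SLOW_QUERY_MS = 100.0  # illustrative threshold for flagging slow queries

def timed_query(conn: sqlite3.Connection, sql: str, params=()):
    """Run a query, measure how long it takes and flag it if it is slow."""
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > SLOW_QUERY_MS:
        print(f"SLOW QUERY ({elapsed_ms:.1f} ms): {sql}")
    return rows

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
    conn.executemany("INSERT INTO orders (total) VALUES (?)",
                     [(i * 1.5,) for i in range(1000)])
    rows = timed_query(conn, "SELECT * FROM orders WHERE total > ?", (100,))
    print(f"{len(rows)} rows returned")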

11. Throughput

Throughput is the amount of work the application completes in a specified time period -- i.e., the total number of requests or transactions it can handle in a defined time frame. It shows the application's performance pattern under varying loads and helps identify performance bottlenecks. An application with high throughput performs and scales better than one with lower throughput. You can measure the throughput of an application using the following performance metrics:

  • Requests per second.
  • Transactions per second.

12. Garbage collection

Garbage collection (GC) can cause an application to halt while the GC cycle is in progress. It can also use a lot of CPU cycles, so it's imperative to determine garbage collection performance in an application.

To quantify garbage collection performance, you can use the following metrics; a minimal sketch for capturing GC pause times appears after the list:

  • GC handles. This metric counts the total number of object references created in an application.
  • Percentage time in GC. This is the percentage of elapsed time spent performing GC since the last GC cycle.
  • GC pause time. This measures the time the entire application pauses during a GC cycle. You can reduce the pause time by limiting the number of objects that need to be marked -- i.e., objects that are candidates for garbage collection.
  • GC throughput. This measures the percentage of the total time the application has not spent on GC.
  • Object creation/reclamation rate. This is a measure of the rate at which instances are created or reclaimed in an application. The higher the object creation rate, the more frequent GC cycles will be, consequently increasing CPU utilization.
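How you capture these numbers depends on the runtime. As one illustration, CPython's gc module lets you hook collection events to approximate pause times; this is a minimal sketch, not a substitute for the profiler or performance counters your own platform provides:

import gc
import time

pauses = []
start = None

def gc_callback(phase, info):
    """Record how long each collection cycle takes and how much it reclaims."""
    global start
    if phase == "start":
        start = time.perf_counter()
    elif phase == "stop" and start is not None:
        pause_ms = (time.perf_counter() - start) * 1000
        pauses.append(pause_ms)
        print(f"GC gen {info['generation']}: {info['collected']} objects collected "
              f"in {pause_ms:.2f} ms")

gc.callbacks.append(gc_callback)

class Node:
    """Self-referencing object, so reclaiming it requires a GC cycle."""
    def __init__(self):
        self.ref = self

garbage = [Node() for _ in range(10_000)]
del garbage
gc.collect()  # force a collection so the callback fires
print(f"Total observed GC pause time: {sum(pauses):.2f} ms")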

KPIs for APIs

API analysis and reporting are important aspects of app development, and APIs have their own set of KPIs that development teams need to track.

Some of the most important KPIs for APIs to pay attention to include the following, with a sketch for capturing them shown after the list:

  • Usage count. This indicates the number of times an API call is made over a certain period of time.
  • Request latency. This indicates the amount of time it takes for an API to process incoming requests.
  • Request size. This indicates the size of incoming request payloads.
  • Response size. This indicates the size of outgoing response payloads.
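As a framework-agnostic sketch, a decorator can capture all four of these KPIs for any handler that takes a request payload and returns a response payload; the endpoint and handler names here are hypothetical:

import time
from collections import defaultdict

# Aggregated KPIs per endpoint: usage count, total latency and payload sizes
api_kpis = defaultdict(lambda: {"calls": 0, "latency_ms": 0.0,
                                "request_bytes": 0, "response_bytes": 0})

def track_api_kpis(endpoint_name):
    """Decorator that records usage count, latency and payload sizes."""
    def decorator(handler):
        def wrapper(request_body: bytes) -> bytes:
            start = time.perf_counter()
            response_body = handler(request_body)
            stats = api_kpis[endpoint_name]
            stats["calls"] += 1
            stats["latency_ms"] += (time.perf_counter() - start) * 1000
            stats["request_bytes"] += len(request_body)
            stats["response_bytes"] += len(response_body)
            return response_body
        return wrapper
    return decorator

@track_api_kpis("get_orders")          # hypothetical endpoint name
def get_orders(request_body: bytes) -> bytes:
    return b'{"orders": []}'           # stand-in for real handler logic

get_orders(b'{"customer_id": 42}')
print(dict(api_kpis["get_orders"]))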

13. Request rates

Request rate is an essential metric that reveals how the traffic your application receives rises and falls, from quiet periods to sudden spikes. Correlate request rates with other application performance metrics to understand how your application scales, and also keep an eye on the number of concurrent users in your application.

Conclusion

You can understand an application's overall health by monitoring its most critical performance metrics, such as error rate, traffic volume, response time, throughput, resource use and user satisfaction. Regular, proactive monitoring and measurement of these application performance metrics helps ensure that end users have a good experience with the application, and the right metrics let you predict performance problems before clients ever encounter them.

Editor's note: This article was originally published in 2022; the author updated it in 2024 to include more application performance metrics.

Joydip Kanjilal is a developer, author and speaker who has written numerous books and articles on development. He has more than 20 years of experience in IT, particularly in .NET.

