What is performance testing?
Performance testing evaluates the speed, responsiveness and stability of a computer, network, software program or device under a workload. Organizations run performance tests to identify and remove performance-related bottlenecks.
Without some form of performance testing in place, a system is likely to suffer from slow response times and an experience that varies between users and operating systems, creating an overall poor user experience. Performance testing also helps determine whether the developed system meets speed, responsiveness and stability requirements under workload, helping to ensure a more positive user experience.
Performance testing can involve quantitative tests done in a lab, or it can occur in the production environment. Performance requirements should be identified and tested. Typical parameters include processing speed, data transfer rates, network bandwidth and throughput, workload efficiency and reliability. As an example, an organization can measure the response time of a program when a user requests an action; the same can be done at scale. If response times are slow, developers should test further to find where the bottleneck is.
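The response-time measurement described above can be sketched in a few lines. This is a minimal illustration, not a production harness; the `checkout` action here is a hypothetical stand-in for a real user-requested operation:

```python
import time

def measure_response_time(action):
    """Time a single user-requested action and return elapsed seconds."""
    start = time.perf_counter()
    action()
    return time.perf_counter() - start

def checkout():
    """Hypothetical action standing in for a real request handler."""
    time.sleep(0.05)  # pretend the backend takes ~50 ms

elapsed = measure_response_time(checkout)
print(f"response time: {elapsed * 1000:.1f} ms")
```

Running the same measurement in a loop, or against many simulated users, is what turns this single data point into a performance test at scale.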
Why use performance testing?
An organization can use performance testing as a diagnostic aid to locate computing or communications bottlenecks within a system. A bottleneck is a single point or component within a system that holds back overall performance. For example, even the fastest computer will function poorly on the web if the bandwidth is less than 1 megabit per second (Mbps). Slow data transfer rates might be inherent in hardware but could also result from software-related problems -- such as too many applications running at the same time or a corrupted file in a web browser.
Developers can use performance testing as a form of software testing to help identify the nature or location of a software-related performance problem by highlighting where an application might fail or lag. They can also use this form of testing to ensure an organization is prepared for a predictable major event.
Performance testing can also verify that a system meets the specifications claimed by its manufacturer or vendor. The process can compare two or more devices or programs.
Performance testing metrics
A number of performance metrics, or key performance indicators (KPIs), can help an organization evaluate current performance.
Performance metrics commonly include:
- Throughput. How many units of information a system processes over a specified time
- Memory. The working storage space available to a processor or workload
- Response time, or latency. The amount of time that elapses between a user-entered request and the start of a system's response to that request
- Bandwidth. The volume of data per second that can move between workloads, usually across a network
- CPU interrupts per second. The number of hardware interrupts a process receives per second
These metrics and others help an organization perform multiple types of performance tests.
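Several of the metrics above can be computed directly from a set of timing samples. The sketch below assumes hypothetical latency samples and uses a simple nearest-rank percentile; real tools apply more sophisticated aggregation:

```python
import math

# Hypothetical per-request latencies (seconds) collected during a test run.
samples = [0.12, 0.10, 0.09, 0.30, 0.11, 0.10, 0.45, 0.10]
duration = 2.0  # wall-clock seconds over which the samples were collected

throughput = len(samples) / duration          # requests per second
avg_latency = sum(samples) / len(samples)     # mean response time
# Nearest-rank 95th percentile: the latency 95% of requests beat.
p95 = sorted(samples)[math.ceil(0.95 * len(samples)) - 1]

print(f"throughput: {throughput:.1f} req/s")
print(f"avg latency: {avg_latency * 1000:.0f} ms, p95: {p95 * 1000:.0f} ms")
```

Percentiles such as p95 often matter more than the average, because a handful of very slow responses can hide behind a healthy-looking mean.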
How to conduct performance testing
Because testers can conduct performance testing with different types of metrics, the actual process can vary greatly. However, a generic process may look like this:
- Identify the testing environment. This includes test and production environments as well as the testing tools.
- Identify and define acceptable performance criteria. This should include performance goals and constraints for metrics.
- Plan the performance test. Account for the variability of use cases, and build test cases around the chosen performance metrics.
- Configure the test environment and implement the test design. Arrange the resources needed to prepare the environment, then implement the test design.
- Run the test. While testing, developers should also monitor the test.
- Analyze and retest. Look over the results. After any fine-tuning, retest to see if there is an increase or decrease in performance.
Organizations should find testing tools that can best automate their performance testing process. In addition, testers should not make changes to the testing environments between tests.
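The "analyze and retest" step amounts to comparing measured results against the acceptance criteria defined earlier. A minimal sketch, with hypothetical criteria and result values:

```python
# Acceptance criteria defined up front (the second step in the process above).
CRITERIA = {"p95_latency_s": 0.5, "error_rate": 0.01}

def analyze(results):
    """Return the metrics that exceed their acceptable limits."""
    return {metric: (results[metric], limit)
            for metric, limit in CRITERIA.items()
            if results[metric] > limit}

run_1 = {"p95_latency_s": 0.8, "error_rate": 0.002}  # fails the latency goal
run_2 = {"p95_latency_s": 0.4, "error_rate": 0.002}  # after tuning, passes

print(analyze(run_1))  # the latency criterion is reported as failing
print(analyze(run_2))  # empty dict: all criteria met
```

Keeping the criteria explicit in the test harness, rather than judging results by eye, is what makes the retest step repeatable.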
Types of performance testing
There are two main performance testing methods: load testing and stress testing. However, there are other types of testing methods developers can use to determine performance. Some examples are as follows:
- Load testing helps developers understand the behavior of a system under a specific load value. In the load testing process, an organization simulates the expected number of concurrent users and transactions over a duration of time to verify expected response times and locate bottlenecks. This type of test helps developers determine how many users an application or system can handle before that app or system goes live. Additionally, a developer can load test specific functionalities of an application, such as a checkout cart on a webpage. A team can include load testing as part of a continuous integration (CI) process, in which they immediately test changes to a code base through the use of automation tools, such as Jenkins.
- Stress testing places a system under higher-than-expected traffic loads so developers can see how well the system works above its expected capacity limits. Stress tests have two subcategories: soak testing and spike testing. Stress tests enable software teams to understand a workload's scalability. Stress tests put a strain on hardware resources to determine the potential breaking point of an application based on resource usage. Resources could include CPUs, memory and hard disks, as well as solid-state drives. System strain can also lead to slow data exchanges, memory shortages, data corruption and security issues. Stress tests can also show how long KPIs take to return to normal operational levels after an event. Stress tests can occur before or after a system goes live. Chaos engineering is a kind of production-environment stress test with specialized tools. An organization might also perform a stress test before a predictable major event, such as Black Friday on an e-commerce application, approximating the expected load using the same tools as load tests.
- Soak testing, also called endurance testing, simulates a steady increase of end users over time to test systems' long-term sustainability. During the test, the test engineer monitors KPIs, such as memory usage, and checks for failures, such as memory shortages. Soak tests also analyze throughput and response times after sustained use to show if these metrics are consistent with their status at the beginning of a test.
- Spike testing, another subset of stress testing, assesses the performance of a system under a sudden and significant increase of simulated end users. Spike tests help determine if a system can handle an abrupt, drastic workload increase over a short period of time, repeatedly. Similar to stress tests, an IT team typically performs spike tests before a large event in which a system will likely undergo higher than normal traffic volumes.
- Scalability testing measures performance based on the software's ability to scale performance-related attributes up or down. For example, testers could perform a scalability test that steadily varies the number of user requests.
- Capacity testing is similar to stress testing in that it applies traffic loads based on the number of users, but it differs in the amount of load. Capacity testing checks whether a software application or environment can handle the amount of traffic it was specifically designed to handle.
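The core mechanism behind load, stress and spike tests is simulating many concurrent users. A minimal sketch using Python's standard thread pool, where `simulated_user` is a hypothetical stand-in for a real transaction:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_user(user_id):
    """One user's transaction; a real test would issue an HTTP request here."""
    start = time.perf_counter()
    time.sleep(0.02)  # pretend the server takes ~20 ms to respond
    return time.perf_counter() - start

CONCURRENT_USERS = 20  # load test: the expected number; stress test: well above it

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = list(pool.map(simulated_user, range(CONCURRENT_USERS)))

print(f"{len(latencies)} transactions, worst latency {max(latencies) * 1000:.0f} ms")
```

Varying `CONCURRENT_USERS` is what distinguishes the test types: hold it at the expected level for a load test, push it past capacity for a stress test, or jump it abruptly for a spike test.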
Cloud performance testing
Developers can carry out performance testing in the cloud as well. Cloud performance testing has the benefit of being able to test applications at a larger scale while retaining the cost benefits of the cloud.
At first, organizations expected that moving performance testing to the cloud would ease the process while making it more scalable; the assumption was that offloading testing to the cloud would solve their problems. In practice, however, organizations found that cloud-based performance testing brings its own issues, because the tester lacks in-depth, white-box knowledge of the cloud provider's side of the environment.
One of the challenges with moving an application from an on-premises environment to the cloud is complacency. Developers and IT staff may assume that the application will work just the same once it reaches the cloud. They'll minimize testing and QA and proceed with a quick rollout. Because the application is being tested on another vendor's hardware, testing may not be as accurate as on-premises testing.
Development and operations teams should check for security gaps, conduct load testing, assess scalability, consider user experience and map servers, ports and paths.
Inter-application communication can be one of the biggest issues in moving an app to the cloud. Cloud environments will typically have more security restrictions on internal communications than on-premises environments. An organization should construct a complete map of which servers, ports and communication paths the application uses before moving to the cloud. Conducting performance monitoring may help as well.
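Part of mapping servers, ports and communication paths can be automated with simple reachability checks. The sketch below uses hypothetical host names (the `.invalid` domain is reserved and never resolves); a real map would list the application's actual dependencies:

```python
import socket

# Hypothetical server/port map the application is believed to depend on.
DEPENDENCIES = [("db.internal.invalid", 5432), ("cache.internal.invalid", 6379)]

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in DEPENDENCIES:
    status = "open" if is_reachable(host, port) else "blocked or unreachable"
    print(f"{host}:{port} -> {status}")
```

Running the same checks from inside the cloud environment, before and after migration, surfaces the communication paths that the cloud's stricter security restrictions have closed off.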
Performance testing challenges
Some challenges within performance testing are as follows:
- Some tools may only support web applications.
- Free variants of tools may not work as well as paid variants, and some paid tools may be expensive.
- Tools may have limited compatibility.
- It can be difficult for some tools to test complex applications.
- Organizations should watch out for performance bottlenecks such as CPU, memory and network utilization. They should also watch for disk usage and limitations of operating systems.
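Memory is one of the bottlenecks worth watching during a test. As a minimal sketch, Python's standard `tracemalloc` module can report peak memory allocated by the workload; note it tracks only Python-level allocations, so OS-level tools are still needed for CPU, network and disk:

```python
import tracemalloc

tracemalloc.start()

# Workload under test: build a large in-memory structure.
data = [str(i) * 10 for i in range(100_000)]

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")
```

Comparing peak memory across test runs is a quick way to spot whether a code change or a higher load level is pushing the system toward a memory bottleneck.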
Performance testing tools
An IT team can use a variety of performance testing tools, depending on its needs and preferences. Some examples of performance testing tools include:
- JMeter, an Apache performance testing tool, can generate load tests on web and application services. JMeter plugins provide flexibility in load testing and cover areas such as graphs, thread groups, timers, functions and logic controllers. JMeter supports an integrated development environment (IDE) for test recording against browsers or web applications, as well as a command-line mode for load testing from any Java-compatible operating system.
- LoadRunner, developed by Micro Focus, tests and measures the performance of applications under load. LoadRunner can simulate thousands of end users, as well as record and analyze load tests. As part of the simulation, the software generates messages between application components and end-user actions, such as key clicks or mouse movements. LoadRunner also includes versions geared toward cloud use.
- NeoLoad, developed by Neotys, provides load and stress tests for web and mobile applications, and is specifically designed to test apps before release for DevOps and continuous delivery. An IT team can use the program to monitor web, database and application servers. NeoLoad can simulate millions of users, and it performs tests in-house or via the cloud.