Users don't like slow applications. And if an app stalls intermittently or consistently, users are likely to avoid it in the future.
When testers create performance tests, their main objective is to discover where an application slows down or stalls. Typically, performance problems occur in distinct patterns that testers can recognize after they analyze a handful of performance test results.
Performance testing is important to application success, but testers often only execute the process "by feel" during functional and regression testing -- if they even execute it at all. To carry out the process "by feel" could mean that the QA professional simply counts to 10, or manually times a test to determine if a page loads or functions slower than expected. There's no scientific or quantifiable method to this form of performance testing.
Performance testing is not doomed. Testers can improve application quality by developing and executing performance tests at the same time as, or immediately following, functional, feature or story testing. Use the following guidelines to create performance tests that deliver meaningful, measurable quality improvements.
Performance testing objective 1: Analyze test results for insights
The top objective of performance testing is to analyze test results. Many software development teams develop and execute some application performance tests, but fail to fully analyze the results. Often, these teams check only page load timing or look for general loading problems.
In many cases, a team develops tests on the fly without design input from a performance expert or enthusiast. If you only test how fast the page loads, you're missing all the other functionality. How long does it take to select a data record and have the next page load? Within a workflow function, how long does it take to send data and retrieve the response when the page refreshes? For example, if you edit an address on a form and then save the change, does the page reload quickly with the new input displayed? These kinds of tests go beyond page load time to examine how the user experiences performance within the app itself.
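One way to replace "by feel" timing with a quantifiable check is to wrap each workflow step in a timer and assert against a response-time budget. The sketch below is a minimal example; `save_address` is a hypothetical stand-in for a real UI or API call, and the 2.0-second budget is an assumed target, not a recommendation.

```python
import time

def time_step(step_fn, *args, **kwargs):
    """Time one workflow step (e.g. 'save address, reload page').

    Returns (result, elapsed_seconds) so a test can assert on both the
    outcome and the response time instead of judging speed 'by feel'.
    """
    start = time.perf_counter()
    result = step_fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Hypothetical workflow step standing in for a real round trip to the app.
def save_address(address):
    time.sleep(0.05)  # placeholder for the real save-and-reload latency
    return {"saved": address}

result, elapsed = time_step(save_address, "221B Baker St")
assert result["saved"] == "221B Baker St"
assert elapsed < 2.0, f"save-and-reload took {elapsed:.2f}s, budget is 2.0s"
```

The same helper can wrap any step in a functional test, so each workflow action gets a measured, repeatable timing rather than a manual count.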
As part of the performance test suite creation process, testers should plan to execute the test cases at least once per release cycle and fully analyze the results. In doing so, testers can then determine where the application fails or where the development team can significantly improve it.
When testers schedule application performance tests, they must allow time to fully analyze the results. It's not enough to run test scripts. Developers or performance engineers need to think through the results and gather data they can actively use to design application improvements.
Performance testing tools are useful, but testers need to learn how to use them and read important data metrics associated with the tooling. An organization must decide if it wants to invest in a tool, or simply invest time in a developer who can create a performance test suite. Developers can base a test suite on their knowledge of the application's code and connections, such as APIs and databases.
Performance testing objective 2: Define measurements
An IT organization must define what's fast enough for the application to perform each function and, similarly, what's fast enough for pages to load and for data requests to be fulfilled. These target measurements define software quality.
For example, an application designed to schedule airline flights and hotel stays must sort through a lot of options for a given query, but it can't take so long to download data that the software doesn't return a response to the user in a timely manner. People don't wait long, and a cute spinner icon won't help the UX.
Software teams must decide how fast an application should function and verify that it meets those goals. Once QA professionals have goal measurements, they need to develop a performance test suite against each desired function and execute the tests. Execute the tests one to three times to generate the application's baseline performance numbers. The baseline represents typical app performance and provides a starting point for comparison after updates and improvements.
Testers can get a baseline by collecting real data from performance tests. The team can choose to execute the performance tests during each code deployment or at the beginning, middle and end of a release cycle. When a team analyzes the results, it can determine if the application has met the predefined measurements. If not, the team knows where the application code needs additional work.
The team can get more sophisticated with performance testing. For example, testers can check the response speed under different kinds of load, such as a user spike or a sustained stretch of high user activity. QA can check for stability issues, such as the application frequently timing out on a particular task under increased load. These tests also help determine what conditions allow an application to perform best. Measuring application performance is the only way to know how well the application scales, and it lets the team make improvements before developers release new code.
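A simple spike test of the kind described above can be sketched with concurrent workers that each fire requests and record timings, then count how many exceed a timeout threshold. This is an illustrative sketch, not a load-testing tool: `handle_request` is a placeholder for a real call to the application under test, and the user counts and timeout are assumed values.

```python
import time
import random
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Placeholder for a real request to the application under test."""
    time.sleep(random.uniform(0.01, 0.03))  # simulated response latency
    return "ok"

def run_load(concurrent_users, requests_per_user, timeout_s=1.0):
    """Simulate concurrent users and count responses slower than timeout_s."""
    def one_user(_):
        user_timings = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            handle_request()
            user_timings.append(time.perf_counter() - start)
        return user_timings

    timings = []
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for user_timings in pool.map(one_user, range(concurrent_users)):
            timings.extend(user_timings)
    slow = sum(1 for t in timings if t > timeout_s)
    return timings, slow

# A spike: many users arrive at once. A sustained-load run would instead
# raise requests_per_user so the pressure lasts longer.
timings, slow = run_load(concurrent_users=20, requests_per_user=5)
```

Comparing `slow` and the timing distribution across different `concurrent_users` values shows where response times degrade and where timeouts begin, which is the stability signal the team is looking for.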
Performance testing objective 3: Create a thorough test design
Some testers assume the free add-ons that come with most test tools are enough for performance testing, but that is far from the truth. Performance testing is too important to the application's success, and the tests created with these add-ons often fall short of what testers hope to achieve.
Slapping performance testing together or using simple, prefab tools isn't the answer either. Organizations should use a performance engineer with specific domain knowledge, or allow a development team and its testers to design and plan out thorough and accurate application performance tests.
Consider developing tests by functional feature, or create a test for each significant function in the application. Like any test development effort, a QA professional can repurpose existing functional tests or regression suites and create a set of prioritized options for performance testing. A performance engineer or developer can use tests to take a closer look at code and design unit type tests, or integrate unit tests that execute each function. Include tests for API and database connections, as well as any other third-party or back-end processing engines that the application uses.
Test design is on equal footing with the other performance testing objectives. Make sure there's enough time for developers and QA to design a quality set of performance tests. Once you have a performance test suite, testers can execute it during regular QA efforts so the tests continuously provide business value.
Application performance is an integral part of business success. Take it seriously, and test thoroughly.