Make application usability a priority -- wherever testing occurs

Testers don't have time to fully evaluate apps before deployment, but poor-quality code simply isn't an option. Enter shift-right testing and APM, two practices testers must be ready to adopt.

With cloud-native workloads, distributed applications, microservices and DevOps, modern software development no longer devotes a dedicated lifecycle phase to functional testing. Organizations automate much of testing, from unit to smoke to system tests, and perform less functional and exploratory testing as distributed architectures and rapid releases become the norm. Testers must adjust to this code delivery paradigm and not leave application usability to chance.

This strategy fits the mantra of fast app development and delivery: organizations deploy without full hands-on testing but retain the ability to rapidly revert to an acceptable version of the software. Continuous delivery, however, leaves little room for deep quality checks before code reaches users.

This rush to production puts testers in an uncomfortable position: evaluating application usability without dedicated time for exploration. However, whether that means they test live apps or monitor them for specific metrics, it's a responsibility testers should embrace -- and one that will be increasingly vital in the future.

Monitor application usability

When an application is in production, testers must diagnose trouble as it occurs and use data to guide repair efforts. Cloud teams can check whether their application works via a continuous ping, but that returns no information on performance or the availability of specific operations. A ping simply tells IT whether the server or instance is still running, not how healthy it is.
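As a minimal sketch of the difference (the /health path and 500 ms budget here are illustrative assumptions, not a prescribed setup), a liveness ping only confirms that the service answers, while a health check that times a real operation exposes slow-but-alive behavior:

```python
import time
import urllib.request

def liveness_ping(url: str) -> bool:
    """Confirms only that the service answers; says nothing about health."""
    try:
        urllib.request.urlopen(url, timeout=5)
        return True
    except OSError:
        return False

def timed_health_check(url: str, max_ms: float = 500.0) -> dict:
    """Times a real request so slow-but-alive responses become visible."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            elapsed_ms = (time.monotonic() - start) * 1000
            return {
                "status": resp.status,
                "elapsed_ms": round(elapsed_ms, 1),
                "healthy": resp.status == 200 and elapsed_ms <= max_ms,
            }
    except OSError as err:
        return {"status": None, "elapsed_ms": None, "healthy": False, "error": str(err)}

# The URLs below are placeholders for a real service and its health endpoint.
print(liveness_ping("https://example.com/"))
print(timed_health_check("https://example.com/health"))
```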

Teams need more information to maintain and boost application usability. Application monitoring is a collection of testing and data collection techniques that serve this objective. Monitoring in production incorporates real user monitoring (RUM), which captures user requests in real time along with what the application returns in response. A snapshot of a large RUM sample shows how actual users perceive and use the application.
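As a rough sketch of that idea -- assuming RUM timing beacons have already been collected as (page, load time) samples, which a real RUM agent would do for you -- aggregating a large sample into medians and percentiles reveals what real users actually experience per page:

```python
from collections import defaultdict
from statistics import median, quantiles

# Hypothetical RUM beacons: (page, page-load time in ms) reported by real users.
samples = [
    ("/checkout", 420), ("/checkout", 1850), ("/checkout", 610),
    ("/search", 230), ("/search", 310), ("/search", 275),
]

by_page = defaultdict(list)
for page, load_ms in samples:
    by_page[page].append(load_ms)

for page, timings in by_page.items():
    p95 = quantiles(timings, n=20)[-1]  # 95th percentile of load times
    print(f"{page}: median={median(timings):.0f} ms, p95={p95:.0f} ms, n={len(timings)}")
```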

Testers can also trace individual transactions through the many components and tiers of the application. Transaction tracing reveals which code paths execute during a customer session, which helps find defects and slow code and shows testers which features are used most often.
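A minimal illustration of the concept (the component names, sleeps and checkout flow are made up): tag each customer session with a correlation ID and record how long each tier spends handling it, so slow code paths stand out when the trace is read back:

```python
import time
import uuid
from contextlib import contextmanager

trace_log: list[dict] = []

@contextmanager
def span(trace_id: str, component: str):
    """Records how long one component spends handling a traced transaction."""
    start = time.monotonic()
    try:
        yield
    finally:
        trace_log.append({
            "trace_id": trace_id,
            "component": component,
            "elapsed_ms": round((time.monotonic() - start) * 1000, 1),
        })

def handle_checkout():
    """Hypothetical customer session traced across three tiers."""
    trace_id = str(uuid.uuid4())
    with span(trace_id, "web-frontend"):
        with span(trace_id, "pricing-service"):
            time.sleep(0.02)  # stand-in for real work
        with span(trace_id, "payment-gateway"):
            time.sleep(0.05)

handle_checkout()
for entry in trace_log:
    print(entry)
```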

Public cloud providers offer tools to monitor app performance, such as Amazon CloudWatch and Google Stackdriver, while independent software vendors -- including Cisco's AppDynamics, New Relic and Dynatrace -- offer broader monitoring capabilities.
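With CloudWatch, for example, a team can publish its own timing measurements as custom metrics and alarm on them later. This sketch assumes boto3 is installed and AWS credentials are configured; the namespace, metric name, dimension and value are illustrative:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish one custom latency measurement for the production environment.
cloudwatch.put_metric_data(
    Namespace="MyApp/Usability",
    MetricData=[
        {
            "MetricName": "CheckoutLatency",
            "Dimensions": [{"Name": "Environment", "Value": "production"}],
            "Value": 612.0,
            "Unit": "Milliseconds",
        }
    ],
)
```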

Shift-right testing is the future

Testers should not only monitor app performance; it will be one of the two or three tasks that make up almost every testing job in the future. To keep software testing viable as a profession, testers must adapt to the new ways enterprises build and manage applications throughout their lifecycles.

Think of production monitoring as shift-right testing. With DevOps and Agile, application testing is almost always incomplete at deployment, as teams constantly rush new features into the production build. That's why production monitoring matters: testers can evaluate application usability and catch defects and poor performance while real users interact with the release.

QA testers can also get some help from almost-real users. Synthetic testing enables teams to record and execute tests that represent how users might navigate and interact with the application. These tests can run at different times of day and typically from different global locations. But real user data helps testers diagnose application errors and flag usability problems even more quickly.
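A synthetic check is essentially a scripted user journey replayed on a schedule. This sketch assumes a simple HTTP flow; the URLs, steps and latency budget are hypothetical, and a production-grade synthetic test would typically drive a browser or API client instead:

```python
import time
import urllib.request

# Hypothetical user journey replayed on a schedule, e.g. from several regions.
JOURNEY = [
    ("load home page", "https://example.com/"),
    ("run a search", "https://example.com/search?q=widgets"),
    ("view a product", "https://example.com/products/123"),
]

def run_synthetic_journey(max_ms: float = 800.0) -> None:
    for step, url in JOURNEY:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                elapsed_ms = (time.monotonic() - start) * 1000
                ok = resp.status == 200 and elapsed_ms <= max_ms
                print(f"{step}: status={resp.status}, {elapsed_ms:.0f} ms, "
                      f"{'ok' if ok else 'SLOW/FAILED'}")
        except OSError as err:
            print(f"{step}: FAILED ({err})")

run_synthetic_journey()
```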

The concept of working in production bothers many testers, who traditionally verified the quality of an application before release. However, testing is changing radically, and its practitioners must engage with the application at every point in development and deployment.

In the future, experienced professionals will continue to perform some traditional activities, especially exploratory testing. Organizations can automate many other tasks, including functional testing, which lets teams spend less time on rote manual checks. AI also has the potential to deliver yet-undefined benefits across testing processes. But testing must expand to encompass more responsibilities for app quality, functionality and performance, or organizations will have less need for dedicated testers.
