The challenge of performance testing SOA applications

Software testing and QA groups already pushed to the limit face even more challenges with SOA applications. Automation through modeling can help monitor and test such applications.


Chris Farrell, ClearApp Inc.

Overwhelmed!

It's a common theme in IT today. From help desk operators to CIOs -- and everywhere in between -- organizations are stretched thin, understaffed, over budget, behind schedule and getting more requests daily. The pressure at every level is intense, more like manufacturing facilities than technology labs and raised floors. In fact, the current IT environment reminds me of an axiom an old manufacturing friend, Michelle, once told me as she was leaving a lucrative management position: "You're constantly either kicking somebody or getting kicked yourself."

The pressure is especially prevalent in testing and QA organizations. Under-funded testing is so common it's practically a cliché. Application testing is especially vulnerable, since most of the applications in place when test teams were first formed were pre-packaged products such as PeopleSoft, Lotus Notes or Microsoft Exchange. The growing number of custom Web applications, middleware applications on J2EE and .NET, and enterprise portals has stretched already-thin organizations to the breaking point. As if THAT weren't enough, now there's this thing called service-oriented architecture (SOA) that threatens to throw everything you know about performance testing out the window. What to do?

The number of companies requiring certification of applications before rolling them into production is growing. Five years ago, about 10% of all organizations I dealt with had performance standards for new applications. Today, that number is closer to 60% -- and still growing. Of course, performance standards for roll-out mean, by definition, that the QA/test organization actually has to certify the application. Remember the kicking? Well, now there's a great big target on the QA team. If an application slips through the process, the QA team usually gets the blame (all is not lost -- the development team gets whacked, too). As if that weren't enough pressure, try holding a mission-critical application out of production because you won't sign off on it. Then the kicking comes from the line of business, which really isn't fair when you think about it. I mean, nobody from IT gets to kick those guys around.


It might be different if the applications weren't so darn complicated. Middleware is great for creating complex things, but testing a middleware application is fraught with challenges, the first of which is that testing the internals of something designed as a black box is next to impossible. To truly certify the application, the pieces INSIDE the black box are just as important, if not more so, than the external components. But even if test teams gain visibility into the internals -- and let's face it, they can get inside those things now -- there's more to testing distributed applications, portal applications, business integration applications and especially service-oriented applications than just seeing inside the black box.

How do you test all these pieces of an application? To answer that question, we need to examine typical QE/QA activities, find the holes that middleware and SOA applications create for testing, and then discuss ways to plug those holes.

I like to think of testing applications as a set of three questions -- the same three questions that application owners have to answer when monitoring production applications:

 

  1. Does the application work?
    • Do all the necessary functions work, including security (both granting valid access and denying invalid access)?
    • Are error conditions handled smoothly?

  2. Is it fast?
    • What are the service-level objectives?
    • Does each individual business process, transaction and function meet the requirements?
    • Are there patterns to any outliers?

  3. Can it scale?
    • What is the transaction rate (per minute, per hour)?
    • What is the maximum number of supported users/requests?
    • What is the maximum number of concurrent requests?

Of course, each of the three primary questions must be answered with at least a nod, if not further analysis, on the infrastructure specification (i.e. capacity management) required to achieve the reported numbers.
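
As a rough illustration, the sketch below translates those three questions into concrete checks in plain Java: a functional assertion, a response-time comparison against an objective and a simple concurrency run. The endpoint URL, the 500 ms objective and the 50-user figure are made-up placeholders, not recommendations.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;
    import java.util.concurrent.*;

    public class BasicCertificationCheck {

        // Hypothetical endpoint and objectives -- substitute your application's values.
        static final String ENDPOINT = "http://appserver.example.com/app/login";
        static final long SLO_MILLIS = 500;      // "Is it fast?"
        static final int CONCURRENT_USERS = 50;  // "Can it scale?"

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(5))
                    .build();
            HttpRequest request = HttpRequest.newBuilder(URI.create(ENDPOINT)).GET().build();

            // 1. Does the application work? A single request should succeed.
            HttpResponse<String> probe = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Functional check: HTTP " + probe.statusCode());

            // 2. Is it fast? Time one request against the service-level objective.
            long start = System.nanoTime();
            client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("Response time: " + elapsedMs + " ms (objective " + SLO_MILLIS + " ms)");

            // 3. Can it scale? Fire concurrent requests and count the failures.
            ExecutorService pool = Executors.newFixedThreadPool(CONCURRENT_USERS);
            CompletionService<Integer> results = new ExecutorCompletionService<>(pool);
            for (int i = 0; i < CONCURRENT_USERS; i++) {
                results.submit(() -> client.send(request, HttpResponse.BodyHandlers.ofString()).statusCode());
            }
            int failures = 0;
            for (int i = 0; i < CONCURRENT_USERS; i++) {
                try {
                    if (results.take().get() >= 400) failures++;
                } catch (ExecutionException e) {
                    failures++;  // connection errors count as failures, too
                }
            }
            pool.shutdown();
            System.out.println("Concurrency check: " + failures + " failures out of " + CONCURRENT_USERS);
        }
    }
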

Challenge #1: Visibility at the application component layer
This is where the middleware that has been our friend all through development becomes our enemy, hiding transaction paths, virtualizing I/O and services and creating management black holes. And here we meet the first of the "new" requirements in testing composite applications -- seeing inside the black boxes of middleware.

That translates to visibility at the application component layer -- objects such as servlets, Enterprise JavaBeans (EJBs), JMS destinations, JSPs, JDBC requests and more. For this particular need, any of dozens of solutions can provide the visibility, most of them using Java Management Extensions (JMX) or some other API to pull a set of data directly from the application servers. Any of these monitoring tools can report object details, from response time at the method level (that's deep) to call rates of individual components. However, each metric from these types of solutions is simply a discrete measurement, with no regard for how any individual component was used in a transaction. And none of them has visibility into how multiple components or entities work together to create the overall functions, transactions and processes.
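
For a sense of what those discrete, context-free measurements look like, here is a minimal sketch using the standard javax.management remote API. The service URL, the MBean domain ("MyAppMetrics") and the attribute names are hypothetical; real application servers expose their own object names and attributes.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;
    import java.util.Set;

    public class ComponentMetricDump {
        public static void main(String[] args) throws Exception {
            // Hypothetical JMX endpoint; real application servers expose similar URLs.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://appserver.example.com:9999/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();

                // Query every MBean in a hypothetical monitoring domain; the actual
                // domain and attribute names depend on the application server and agent.
                Set<ObjectName> beans = mbs.queryNames(new ObjectName("MyAppMetrics:type=Servlet,*"), null);
                for (ObjectName bean : beans) {
                    Object avgResponse = mbs.getAttribute(bean, "AverageResponseTime");
                    Object callCount = mbs.getAttribute(bean, "CallCount");
                    // Each value is a discrete measurement, with no transaction context attached.
                    System.out.println(bean + " avg=" + avgResponse + " ms, calls=" + callCount);
                }
            }
        }
    }
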

The need to understand relationships between individual components narrows the viable solutions down to a handful -- the ones that tie business-process context to individual components and relate each component to the other application entities involved in the application. Whether they tag transactions with tracers, monitor the relationships between each and every call or use architectural diagrams, capturing these relationships is critical to understanding what each component contributes to the overall processes.
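
As a toy illustration of transaction tagging (all component names below are invented), the sketch tags each business transaction with an ID and records caller-to-callee pairs as instrumented components are entered -- the raw material for reconstructing those relationships later.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    public class TransactionTracer {
        private static final ThreadLocal<String> currentTxn = new ThreadLocal<>();
        private static final ThreadLocal<Deque<String>> callStack =
                ThreadLocal.withInitial(ArrayDeque::new);
        private static final List<String> records = new ArrayList<>();

        static void beginTransaction(String businessProcess) {
            // Tag everything that follows on this thread with a transaction ID.
            currentTxn.set(businessProcess + "-" + System.nanoTime());
        }

        static void enterComponent(String component) {
            String caller = callStack.get().peek();
            synchronized (records) {
                records.add(currentTxn.get() + ": " + (caller == null ? "<client>" : caller)
                        + " -> " + component);
            }
            callStack.get().push(component);
        }

        static void exitComponent() {
            callStack.get().pop();
        }

        public static void main(String[] args) {
            beginTransaction("PlaceOrder");            // business-process context
            enterComponent("OrderServlet");            // hypothetical servlet
            enterComponent("OrderEJB.checkout()");     // hypothetical EJB call
            enterComponent("JDBC:insert ORDERS");      // hypothetical JDBC request
            exitComponent(); exitComponent(); exitComponent();

            records.forEach(System.out::println);      // caller -> callee pairs, tagged by transaction
        }
    }
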

Challenge #2: What to monitor & understanding the metrics
Now we have to address the second challenge -- the same challenge that faces us in production monitoring:

  • Deciding what to monitor
  • Trying to understand the meaning of the metrics we collect

The traditional way to do this, even with the industry-leading application performance management (APM) and diagnostics tools, has been a brute-force method of manpower, expertise, reverse engineering and trial-and-error measurement. On average, it takes about four weeks of calendar time and roughly four months of combined human effort among the test engineers, architects, developers and ISV consultants for EACH application. While this is not necessarily overly expensive, it adds up as the application count climbs (and I haven't met a technical organization yet that says it's reducing its application consumption).

In production monitoring, this time commitment usually leads to a prioritization of applications for adding management or monitoring. Translation: Some applications are not important enough to monitor. Guess what? If we follow the same procedures in our testing philosophy, then the applications with the least amount of production monitoring will be our least-tested applications. And while "Application 5" may not be quite as important as "Application 8," the marketing manager in charge of the campaign that uses "Application 8" thinks it's pretty darned important.

Application monitoring & testing changes needed
A different approach has to be taken, both in monitoring and in test. Let's focus on test for a moment. Automation provides the best chance of scaling the testing environment to meet the needs of growing application numbers, complexity and importance. What needs to be automated? The steps involved in answering the questions above (what to monitor and what it means), namely:

  • Setup & configuration
  • Correlation/analysis
  • Change management

If we could automate setup and configuration of the measuring, reporting and management solutions, then we could always know that we're measuring the most appropriate application entities -- from top-level functions and processes to the deep method-level activity required to triage any problems.
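
Here is a minimal sketch of what automated setup and configuration might look like, assuming a hypothetical discovery feed of application entities: each discovered entity gets a default monitoring policy by type, from business-process SLAs down to method-level traces, rather than having someone hand-pick measurement points per application.

    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    public class MonitorAutoConfig {

        enum Policy { BUSINESS_PROCESS_SLA, COMPONENT_RESPONSE_TIME, METHOD_LEVEL_TRACE }

        // Pick a default measurement policy from the entity's type; the naming
        // convention here is invented for illustration.
        static Policy policyFor(String entity) {
            if (entity.startsWith("process:")) return Policy.BUSINESS_PROCESS_SLA;  // top-level functions
            if (entity.contains("("))          return Policy.METHOD_LEVEL_TRACE;    // deep method-level activity
            return Policy.COMPONENT_RESPONSE_TIME;                                  // servlets, EJBs, queues, ...
        }

        public static void main(String[] args) {
            // In practice this list would come from the tool's discovery of the
            // running application; these entries are made up.
            List<String> discovered = List.of(
                    "process:PlaceOrder",
                    "servlet:OrderServlet",
                    "ejb:OrderEJB",
                    "ejb:OrderEJB.checkout()",
                    "jms:orderQueue");

            Map<String, Policy> config = new LinkedHashMap<>();
            for (String entity : discovered) {
                config.put(entity, policyFor(entity));
            }
            config.forEach((entity, policy) -> System.out.println(entity + " -> " + policy));
        }
    }
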

Once we've automated the "what" piece of measurements and testing, we have to answer the question of "why." Whether trying to figure out the overall impact of a slow EJB call to an LDAP server or trying to drill down from a login problem to a malformed JDBC request, understanding the relationships of individual components -- both to each other and to the overarching business processes -- is critical. This is where modeling helps the automation. Recent advancements in APM let the tools build models automatically, creating transaction maps and providing the business context for both generic API measurements and detailed proprietary tracers of individual deep components.
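
To make the "why" concrete, here is a toy transaction map with invented component names: caller-to-callee edges are kept per business process, so a slow LDAP bind or a malformed JDBC request deep in the stack can be walked back up to the business processes it affects.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class TransactionMap {
        // business process -> set of "caller -> callee" edges observed for it
        private final Map<String, Set<String>> edgesByProcess = new HashMap<>();

        void record(String process, String caller, String callee) {
            edgesByProcess.computeIfAbsent(process, p -> new HashSet<>()).add(caller + " -> " + callee);
        }

        // Which business processes touch this component at all?
        Set<String> processesUsing(String component) {
            Set<String> affected = new HashSet<>();
            edgesByProcess.forEach((process, edges) -> {
                if (edges.stream().anyMatch(e -> e.contains(component))) affected.add(process);
            });
            return affected;
        }

        public static void main(String[] args) {
            TransactionMap map = new TransactionMap();
            map.record("Login",      "LoginServlet", "AuthEJB");
            map.record("Login",      "AuthEJB",      "LDAP:bind");
            map.record("PlaceOrder", "OrderServlet", "OrderEJB");
            map.record("PlaceOrder", "OrderEJB",     "JDBC:insert ORDERS");

            // A slow LDAP bind maps back to the Login process; a malformed JDBC
            // request maps back to PlaceOrder.
            System.out.println("LDAP:bind affects " + map.processesUsing("LDAP:bind"));
            System.out.println("JDBC affects " + map.processesUsing("JDBC"));
        }
    }
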

SOA makes change management more difficult
The final piece -- change management -- is already important and growing more important daily. Gartner has stated that 60% of all production application issues today relate directly to a change in the application environment (whether new classes, updated application files, new configurations or new clustering). And things are not getting better: Gartner also predicts that this number will rise to -- get this -- 80% in SOA-based applications. Dealing with change will become the single hardest thing for testers and production monitors working within an SOA environment.

Modeling, again, provides a solution. If the management tool can virtualize the application architecture and create transaction maps, changes to the environment can be detected, tracked and dealt with automatically -- whether that means adjusting or creating tracers, turning new monitors on or turning old ones off. Then, at the application's most vulnerable time, the measurement tool can get exactly the data needed to isolate not only the problem location and business impact, but also which change created the problem in the first place.
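
As a minimal sketch of model-driven change detection (component names and version strings are made up), the idea is simply to diff the current application model against a baseline and adjust monitoring for anything added, removed or changed.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    public class ModelChangeDetector {

        static void diffAndAdjust(Map<String, String> baseline, Map<String, String> current) {
            Set<String> added = new HashSet<>(current.keySet());
            added.removeAll(baseline.keySet());
            Set<String> removed = new HashSet<>(baseline.keySet());
            removed.removeAll(current.keySet());

            for (String component : added) {
                System.out.println("New component " + component + " -> turning monitor ON");
            }
            for (String component : removed) {
                System.out.println("Removed component " + component + " -> turning monitor OFF");
            }
            for (String component : current.keySet()) {
                if (baseline.containsKey(component)
                        && !baseline.get(component).equals(current.get(component))) {
                    System.out.println("Changed component " + component
                            + " (" + baseline.get(component) + " -> " + current.get(component)
                            + ") -> re-baselining tracer");
                }
            }
        }

        public static void main(String[] args) {
            Map<String, String> baseline = new HashMap<>();
            baseline.put("OrderEJB", "1.0");
            baseline.put("LegacyServlet", "2.3");

            Map<String, String> current = new HashMap<>();
            current.put("OrderEJB", "1.1");          // updated application file
            current.put("PricingService", "1.0");    // newly deployed service

            diffAndAdjust(baseline, current);
        }
    }
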

So, what happens to our traditional brute-force approach within an SOA environment? Think of a medium-sized application with approximately 10,000 entities to measure; that's the type of application that takes about three or four weeks to roll out management or test configuration. Now take a small SOA application, say 10 discrete services. The brute-force exercise has to be repeated in full for each service, as well as for any KNOWN cases where the services call one another. At three to four weeks apiece, that works out to roughly half a year just to roll out measurement of the application. And once a change occurs, the work we've done loses effectiveness -- and could be rendered completely moot.

In SOA, whether monitoring or testing, automation (through modeling) is an imperative for keeping the organizations in step with the applications -- or else everybody is out of sync. And the testers will be the ones who continue to be overwhelmed.

-----------------------------------------
About the author: Chris Farrell is the vice president of product management for ClearApp Inc. Chris has over 20 years of technology experience, previously spending five years as a test engineer and diagnostics programmer for the IBM PC Company (now Lenovo), where he created many of the tests still in use today by the company's manufacturing vendors and suppliers. Chris has also served in a variety of senior-level product development positions at Wily Technology and Ericsson.
