
What to look for when conducting benchmark tests with PerfMon

Microsoft's Performance Monitor (PerfMon) can measure a wide range of processes and server metrics during benchmark tests, but it can be difficult for IT professionals to understand the numbers and interpret the results.

This tip is the second in a series on measuring server performance. Read part one on best practices for server benchmark testing and part three on server benchmarking tools and stress testing.

The concept of benchmark testing is simple, but using a benchmark to obtain meaningful data that can actually improve system performance is another matter entirely. Benchmark tools, such as Microsoft's Performance Monitor (PerfMon), are quite flexible, but the variety of counters and settings available within a tool can complicate testing and make results difficult to interpret. In this tip, we'll examine some of the most common counters used with PerfMon and see how they might influence a benchmark test.

Memory allocation and general RAM settings
Allocating too much RAM to an application can actually hurt the performance of other processes on the machine. More broadly, improper memory utilization can degrade overall system performance.

When conducting internal server benchmark tests with PerfMon, use the following counters to verify that memory allocation will not disrupt your server performance (a scripted sampling example follows the list):

  • Memory:Available Bytes--This counter measures the physical memory currently available to the operating system (OS). Compare it with the memory required to run all of the processes and applications on your server.
  • Memory:Committed Bytes--This counter measures the virtual memory the OS has committed to running processes. Track it against Available Bytes over time to account for periods of peak activity; watching the peaks and valleys in Committed Bytes shows how your machine performs under load. You should always have at least 4 MB, or 5%, more available memory than committed memory.
  • Memory:Page Faults/sec--This counter measures page faults, which occur when an application attempts to read from a virtual memory location that is marked "not present." For the most part, zero is the optimum measurement; anything higher delays response time. Remember, the Memory:Page Faults/sec counter measures both hard and soft page faults. Hard page faults occur when a page has to be retrieved from disk rather than from physical memory. Soft page faults, on the other hand, occur when the page is found elsewhere in physical memory; they still interrupt the processor but have much less effect on performance.
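The headroom rule of thumb above can be checked from a script. What follows is a minimal sketch, assuming a Windows host where typeperf, PerfMon's command-line companion, is on the PATH; the counter paths and the 4 MB/5% guideline come from the text, while the parsing and threshold logic are illustrative only.

```python
import csv
import subprocess

# Counter paths discussed above (typeperf uses backslash-delimited paths).
COUNTERS = [
    r"\Memory\Available Bytes",
    r"\Memory\Committed Bytes",
    r"\Memory\Page Faults/sec",
]

def sample_memory_counters(samples=5, interval=1):
    """Collect a few CSV samples of the memory counters via typeperf."""
    out = subprocess.run(
        ["typeperf", *COUNTERS, "-sc", str(samples), "-si", str(interval)],
        capture_output=True, text=True, check=True,
    ).stdout
    # Keep only rows with a timestamp plus one value per counter; this
    # drops typeperf's blank lines and status messages.
    rows = [r for r in csv.reader(out.splitlines()) if len(r) == len(COUNTERS) + 1]
    return rows[0], rows[1:]  # header, data samples

def has_headroom(available, committed):
    """Apply the guideline from the text: available memory should exceed
    committed memory by at least 4 MB or 5%, whichever is larger."""
    return available - committed >= max(4 * 1024 * 1024, 0.05 * committed)

if __name__ == "__main__":
    _, data = sample_memory_counters()
    for row in data:
        available, committed, faults = (float(v) for v in row[1:])
        status = "OK" if has_headroom(available, committed) else "LOW HEADROOM"
        print(f"{row[0]}  available={available:,.0f}B  "
              f"committed={committed:,.0f}B  page faults/sec={faults:.1f}  {status}")
```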

Thread and process monitoring
Pay attention to several of your processor counters, particularly when you're trying to maximize the number of threads per processor, and look at the number of “context switches” that occur.

A “context switch” occurs when the kernel, or core of the OS, switches the processor from one thread to another. Excessive context switching should be avoided, as each switch can cause the processor's L1 and L2 cache contents to be flushed and refilled. Cache flushing and refilling takes precious time and reduces the system's performance. The following counters help you monitor thread behavior (a scripted example follows the list):

  • Process:Thread Count:Inetinfo--Counts the number of threads created by the Inetinfo process and displays the most recent value.
  • Thread:% Processor Time:Inetinfo => Thread #--Measures how much processor time each thread of the Inetinfo process is using.
  • Thread:Context Switches/sec:Inetinfo => Thread #--Measures the rate of context switches generated by each thread of the Inetinfo process. Monitoring this counter is important when tuning the number of threads per processor or thread pool: if you create so many threads that the time lost to context switching outweighs the benefit of the added threads, performance will decrease rather than improve.
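The per-thread counters above live in PerfMon itself, but a rough process-level check can also be scripted. The following is a minimal sketch using the third-party psutil library (an assumption on our part; it is not part of PerfMon) to sample a process's cumulative context switches and report a per-second rate. Here, inetinfo.exe is simply the example process from the counters above; substitute your own workload.

```python
import time
import psutil  # third-party library: pip install psutil

TARGET = "inetinfo.exe"  # example process from the counters above

def find_process(name):
    """Return the first running process whose image name matches."""
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == name:
            return proc
    raise LookupError(f"{name} is not running")

def context_switch_rate(proc, interval=5.0):
    """Approximate a process-wide context switches/sec figure by sampling
    the cumulative counter twice. Voluntary and involuntary switches are
    summed, since not every platform reports both separately."""
    before = sum(proc.num_ctx_switches())
    time.sleep(interval)
    after = sum(proc.num_ctx_switches())
    return (after - before) / interval

if __name__ == "__main__":
    proc = find_process(TARGET)
    rate = context_switch_rate(proc)
    print(f"{TARGET}: {proc.num_threads()} threads, "
          f"~{rate:.0f} context switches/sec across the process")
```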

Measurement and analysis
Unfortunately, there is an extraordinarily broad range of processes and server aspects that can be measured, far too many to list in this article. However, for the most part, system performance and metric testing can be categorized into the following groups (a counter-logging sketch follows the list):

  • Memory management
  • Network capacity
  • Processor capacity
  • Disk optimization
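To put those groups into practice, one representative counter from each can be logged to a file and reviewed after the benchmark run. The following is a minimal sketch, again assuming a Windows host with typeperf available; the four counters chosen are common illustrative picks for each group, not a prescribed set.

```python
import os
import subprocess
import tempfile

# One example counter per group: memory, network, processor and disk.
COUNTER_SET = "\n".join([
    r"\Memory\Available Bytes",
    r"\Network Interface(*)\Bytes Total/sec",
    r"\Processor(_Total)\% Processor Time",
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
])

def log_baseline(output_csv, samples=60, interval=1):
    """Sample the counter set and write it to a CSV benchmark log."""
    # typeperf reads counter lists from a plain text file via -cf.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as cf:
        cf.write(COUNTER_SET)
        config_path = cf.name
    try:
        subprocess.run(
            ["typeperf", "-cf", config_path, "-f", "CSV",
             "-o", output_csv, "-sc", str(samples), "-si", str(interval),
             "-y"],  # -y overwrites an existing output file without prompting
            check=True,
        )
    finally:
        os.unlink(config_path)

if __name__ == "__main__":
    log_baseline("baseline.csv")
    print("Wrote 60 one-second samples to baseline.csv")
```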

By watching the above groups, a testing engineer should get solid benchmark results that can be used to improve the overall server environment.

Meeting the challenges of benchmark tests
As with any testing done in a server environment, benchmarking and metric evaluations come with some notes of caution.

  1. Be wary of vendor-provided benchmark results. Vendors tend to tune their products specifically for industry-standard benchmarks, which means the benchmark documentation on paper may not suit your unique environment. For example, assume an IT manager is about to purchase an application that will handle a user database stored on a server. Metrics for that application show it running well on a Server 2008 box with quick response times. Although this may sound good, it may not fit what your environment is actually running: what if the application the vendor tested ran on a single, beefed-up server while yours will run on a VM that shares resources? Remember, the vendor's goal is to sell you the software, and some are known to “cheat” their numbers a bit. Doing so improves the numbers on paper but can make things a lot worse in a live environment. Although this is less common among large hardware and software vendors, smaller resellers have been known to slightly alter their numerical data. For example, a hardware device delivering VPN connectivity over the WAN may have posted better delivery times because the vendor ran the benchmark test on optimized hardware; when deployed on-site, the devices showed a solid 20%-30% reduction in speeds. Due diligence is always recommended with devices or applications that will be heavily relied upon.
  2. Never focus on just one benchmark. When conducting server benchmark tests, try to include as many components as possible. Don’t focus on just one element, like CPU speed. Watching the behavior of various key elements in the server allows an engineer to gain a better overall understanding of how the server performs under various conditions, which can help in tracking down and correcting future performance issues.
  3. Be careful with benchmark service providers. If you're planning to outsource benchmark and metric testing, make sure to do your due diligence. Many times, even reputable consulting firms disregard or fail to follow basic scientific methods. Problems include, but are not limited to, small server and application sample sizes, lack of variable control, limited repeatability of results and software and hardware numerical biases. Look for numerical extremes, such as a SQL Server instance performing far better than the underlying hardware should allow. Vague hardware specifications are also a red flag. If a vendor lists the hardware but gives no detail on it (“dual-core processor, 4 GB RAM, 512 MB video card”), you should be careful. When working out the minute details of benchmarking, every variable matters. In this case, what type of processor was it? What model of RAM was used, and what model of video card was installed? All of these details make a difference.
  4. The key point to note is that every environment is unique and has its own set of requirements. Metric testing with tools such as PerfMon is an ongoing process with numerous variables that will unquestionably affect your data. By planning out the test and following a solid scientific method, the testing admin can more accurately gauge how hardware and software are performing. When done properly, a good benchmark analysis can provide information that improves server architecture performance.

    ABOUT THE AUTHOR: Bill Kleyman, MBA, MISM, is an avid technologist with experience in network infrastructure management. His engineering work includes large virtualization deployments as well as business network design and implementation. Currently, he is the Director of Technology at World Wide Fittings Inc., a global manufacturing firm with locations in China, Europe and the United States.
