

Three types of mainframe monitoring tools to track performance

Mainframe users face a variety of options when it comes to system monitoring tools. Determine if a real-time, near-time or post-processor tool is the best fit for your IT needs.

The mainframe produces mountains of detailed performance, resource and diagnostic data. To make sense of all this information, IBM mainframe users can buy or develop mainframe monitoring tools. These tools, for the most part, can be divided into three types: real-time monitors, near-time monitors and post-processors. Before choosing one, weigh their pros and cons to determine which would best meet your needs.

Real-time monitors offer live mainframe views

Real-time monitors offer live views into the mainframe system, allowing users to watch processes as they happen. Some real-time monitors, because of their access to system data, include automation extensions to react to events or thresholds.
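The automation idea is simple at its core: sample a metric, compare it to a limit, and react when the limit is crossed. The sketch below illustrates that pattern in Python with invented sample data; the metric source, field names and action are stand-ins, not the API of any real monitor.

```python
# Hypothetical sketch of threshold-driven automation: poll samples of a
# metric and fire an action whenever a sample exceeds the limit.
# All names and values here are invented for illustration.

def check_thresholds(samples, limit, action):
    """Invoke action(timestamp, value) for each sample above the limit."""
    fired = []
    for timestamp, value in samples:
        if value > limit:
            action(timestamp, value)
            fired.append((timestamp, value))
    return fired

# Simulated CPU-busy samples: (seconds elapsed, percent busy).
samples = [(0, 72.0), (5, 88.0), (10, 96.5), (15, 91.0)]

alerts = []
check_thresholds(samples, limit=90.0,
                 action=lambda t, v: alerts.append((t, v)))
# alerts now holds the two samples that crossed 90% busy.
```

A production monitor would attach the check to live system data rather than a static list, but the react-on-threshold shape is the same.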

The advantages of being able to instantaneously find, diagnose and react to problems are obvious. Real-time monitors can help diagnose transient and relatively benign performance problems, but users should be careful not to draw conclusions from small samples in a narrow time window.

The biggest drawback of real-time monitors is overhead. These mainframe monitoring tools typically rely on exits or hooks sunk into critical system logic paths. If done incorrectly, real-time monitors may alter the very performance they're trying to measure. To avoid this problem, some monitors limit the amount and detail of the data they collect. There's also the risk that malfunctioning monitor software could bring the whole system down.

At minimum, a real-time monitor should be able to report processor, I/O subsystem and memory statistics. Vendors such as IBM and BMC also offer monitors for specific software, like IBM Customer Information Control System (CICS) or DB2, which include details for those particular subsystems.

Real-time monitors typically have text-based user interfaces. Some offer graphical interfaces, but those typically look and act like bolt-ons scraping 3270 screens in the background.

Near-time monitors provide mix of mainframe data

Near-time monitors fill the gap between immediate views into the mainframe system and historical analysis. IT teams use these to debug problems retroactively, while still having readily available data.


These mainframe monitoring tools offer convenient ways to thumb through data. Some provide ways to summarize data at different intervals for easier analysis. Near-time monitors also tend to have low overhead because they don't need probes and the data is already collected.

Some near-time monitors may fall short, depending on data summarization. For example, a three-second I/O subsystem delay that scotches a time-critical transaction may not show up in data summarized over a one-minute interval. To catch and diagnose this type of problem, IT teams need a finer summarization interval, automation or different mainframe monitoring tools.
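The arithmetic behind this masking effect is worth seeing once. In the illustration below (all numbers invented), a single I/O that stalls for three seconds sits among thousands of fast I/Os in the same one-minute interval, and the interval's average response time barely moves.

```python
# Illustration (invented numbers) of why interval averages hide
# transient stalls: one 3-second I/O among 10,000 fast ones barely
# moves the one-minute average response time.

fast_ios = [2.0] * 10_000    # 10,000 normal I/Os at 2 ms each
stalled_io = [3_000.0]       # one I/O delayed 3 seconds (3,000 ms)

all_ios = fast_ios + stalled_io
avg_ms = sum(all_ios) / len(all_ios)
# avg_ms works out to roughly 2.3 ms -- the summary looks healthy even
# though the outlier broke a time-critical transaction.
```

Only a finer interval, a maximum-value column alongside the average, or event-level data would surface the stall.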

IBM's Resource Measurement Facility (RMF) Monitor III is a near-time monitor that collects RMF data into VSAM data sets for quick access. Panels display the data at different intervals, from 60 seconds up, in 60-second steps. Scroll through intervals to see how system behavior changes; point-and-shoot fields let you drill down for more information.

Despite its flexibility, RMF Monitor III lacks a couple of features. The reports are static, so you can't sort the rows by different columns. Also, the tool won't display data in intervals shorter than 60 seconds, which makes it difficult to find transient problems.

Some real-time monitors can perform near-time work. BMC's MainView for CICS optionally collects detailed CICS transaction performance information that is easily viewable and searchable, and helpful to diagnose recent problems.

Post-processors help track mainframe system trends

Post-processors are mainframe monitoring tools that process and analyze massive amounts of data after the fact. This software is most useful for tracking trends, summarizing data and planning capacity. It can also help retrospectively debug problems.
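The core of trend tracking is rolling detailed interval records up into coarser summaries. The sketch below shows that roll-up in Python with an invented record layout (day, CPU-busy percentage); real post-processors work against actual system measurement records with far more fields.

```python
# Hypothetical sketch of post-processing for capacity trends: roll up
# per-interval CPU-busy records into daily averages. The record layout
# and values are invented for illustration.
from collections import defaultdict

def daily_cpu_trend(records):
    """records: iterable of (day, cpu_busy_pct) -> {day: average busy %}."""
    totals = defaultdict(lambda: [0.0, 0])  # day -> [sum, count]
    for day, busy in records:
        totals[day][0] += busy
        totals[day][1] += 1
    return {day: total / count for day, (total, count) in totals.items()}

records = [("Mon", 60.0), ("Mon", 80.0), ("Tue", 70.0), ("Tue", 90.0)]
trend = daily_cpu_trend(records)
# trend maps each day to its average CPU-busy percentage.
```

Stretched over weeks or months, summaries like this are what capacity planners chart to spot growth before it becomes a constraint.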

The chief drawback of post-processors is that they use data that's typically unavailable until the next day. Additionally, the sheer volume of records they process is sometimes difficult for IT teams to digest.

MXG is a post-processor mainframe monitor that provides exhaustive resources for reading, analyzing, summarizing and cross-referencing IBM System Management Facilities (SMF) data.

Still, MXG has a few drawbacks. It is written on the Statistical Analysis System (SAS) platform, which can be expensive on the mainframe, in terms of software license charges and CPU consumption. However, MXG runs just as well on Windows or UNIX SAS, which may be preferred for smaller shops. Also, while MXG is extensive, it is basically a huge do-it-yourself kit. Users may have to perform their own customization through exits and replaceable source modules to fit their needs.
