Network managers are often in a bind. To gather sufficient information to get to the bottom of network problems, they need data, and lots of it. Yet gathering and transmitting that data can contribute to the very problems they aim to solve. Furthermore, by their nature, networks often span different functional areas, departments and business units. Gathering information from network analytics tools and through other means, across all those domains, often requires buy-in from others for whom network management is not a priority.
Network analytics tools and bandwidth blues
"We see IT managers collecting more data from higher-resolution sources, performing more analysis on that data and then retaining it for greater periods of time," said Jon Toor, chief marketing officer at Cloudian, a data storage company. "All of this puts more strain on the network resources used to transport this data around the globe," he added.
Toor's customers aren't alone: Managing performance is a challenge for a growing number of network professionals as data volumes continue to grow, putting networks under more pressure.
The measurement challenge itself typically boils down to which data source is employed to monitor the network traffic, according to Vivek Bhalla, a research director at Gartner. "We often see organizations utilizing active polling data sources such as [Simple Network Management Protocol, Windows Management Instrumentation] or synthetic monitoring such as those employed when using Cisco's IP [Service-Level Agreement] protocol," Bhalla said.
But these methods may not solve every problem. While these approaches deliver good results for availability monitoring -- in other words, which parts of the network are up or down, and where -- they don't necessarily track real-time performance. The only way to identify whether something is degrading or improving is to continually poll the devices in question. "This invites the possibility of over-polling one's environment -- something that can exacerbate an existing bottleneck or performance issue," Bhalla said.
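The over-polling risk Bhalla describes can be made concrete with a back-of-envelope calculation. The sketch below is illustrative only: the device count, variables per device and per-packet byte sizes are assumptions, not measurements from any real deployment.

```python
# Rough estimate of steady-state SNMP polling traffic.
# All input figures here are illustrative assumptions.

def polling_overhead_bps(devices, oids_per_device, interval_s,
                         request_bytes=90, response_bytes=120):
    """Approximate polling load in bits per second: one request/response
    pair per monitored variable (OID) per device, every polling cycle."""
    packets_per_cycle = devices * oids_per_device
    bytes_per_cycle = packets_per_cycle * (request_bytes + response_bytes)
    return bytes_per_cycle * 8 / interval_s

# Cutting the polling interval from 5 minutes to 30 seconds
# multiplies the load on the same management links by 10.
five_min = polling_overhead_bps(500, 40, 300)   # → 112000.0 bits/s
thirty_s = polling_overhead_bps(500, 40, 30)    # → 1120000.0 bits/s
```

The arithmetic shows why tighter polling intervals, adopted to catch degradation sooner, can themselves become a measurable source of load on the links being monitored.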
Products from vendors including Gigamon, Ixia, Big Switch Networks and NetScout can alleviate the over-polling issue, said Bob Laliberte, a practice director at Enterprise Strategy Group, but they typically require installing network taps and aggregators to collect, filter and distribute the data to the requisite tools. "Most refer to this space as network packet brokers," he said.
"For most legacy environments, when the taps, aggregators, probes and network packet brokers are properly deployed, this can be very effective," Laliberte said.
Deploying these products can be somewhat expensive, however, since it's almost like overseeing another network. As environments become more heavily virtualized and dynamic, data collection becomes more challenging, since what needs to be collected is often traffic running east-west between virtual machines, which never traverses a physical tap. In addition, the dynamic nature of a heavily virtualized environment means collection points are rapidly changing, so virtual taps are required. "There are a lot of different options and capabilities, so organizations should assess and define what they need, as high-end solutions can get costly," Laliberte added.
For his part, Bhalla recommended adopting flow-based technologies such as IP Flow Information Export, Cisco NetFlow and sFlow for most traffic analysis. "Where possible, this can be supplemented with fine-grained packet capture and analysis," he added.
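To give a sense of what the flow-based collection Bhalla recommends involves, here is a minimal parser for the 24-byte header of a NetFlow v5 export datagram, the fixed-format protocol version whose layout is publicly documented. This is a sketch of the collector side only; the field names chosen for the returned dictionary are this example's own.

```python
import struct

# NetFlow v5 header: version, count, sys_uptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval -- 24 bytes,
# network (big-endian) byte order.
_V5_HEADER = "!HHIIIIBBH"

def parse_netflow_v5_header(datagram: bytes) -> dict:
    """Parse the fixed 24-byte header of a NetFlow v5 export packet."""
    (version, count, sys_uptime, unix_secs, _unix_nsecs,
     flow_sequence, _engine_type, _engine_id, _sampling) = struct.unpack(
        _V5_HEADER, datagram[:24])
    if version != 5:
        raise ValueError(f"expected NetFlow v5, got version {version}")
    return {"version": version,
            "count": count,              # flow records in this packet
            "sys_uptime_ms": sys_uptime,
            "unix_secs": unix_secs,
            "flow_sequence": flow_sequence}
```

A real collector would go on to unpack `count` 48-byte flow records after the header; sFlow and IPFIX use different, template-driven encodings and need their own parsers.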
In addition to using network analytics tools engineered to track network performance, enterprises are also beginning to examine how they can best monitor how end users interact with the network -- so-called quality of experience. While data sources such as Simple Network Management Protocol (SNMP), flow and packet are good for extracting device-based quantitative instrumentation from the network devices, "this does not always reflect the actual experience of the end user, particularly for delay-intolerant applications and services such as VoIP, desktop video, collaboration tools and other forms of unified communications," Bhalla said.
In these instances, the need to supplement quantitative device-based instrumentation with qualitative assessment from the end user's perspective is important. Bhalla said Gartner has seen a growing number of vendors looking at ways to capture the sentiment of the end user, based on scoring techniques like collective intelligence benchmarking, which looks at large data sets of end users' experiences, makes the findings anonymous, and reports back to end users and organizations for comparative analysis and benchmarking.
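For the delay-intolerant applications Bhalla mentions, one common way to turn device-level measurements into a user-experience score is a mean opinion score (MOS) estimate. The function below uses a widely circulated simplification of the ITU-T G.107 E-model; the constants and the jitter weighting are approximations used for monitoring dashboards, not a calibrated implementation of the full standard.

```python
def estimate_mos(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    """Rough voice-quality MOS (1-5) from one-way latency, jitter and
    packet loss, via a simplified E-model R-factor."""
    # Jitter hurts roughly twice as much as fixed latency in this
    # approximation, plus a small codec processing allowance.
    effective = latency_ms + 2 * jitter_ms + 10
    if effective < 160:
        r = 93.2 - effective / 40
    else:
        r = 93.2 - (effective - 120) / 10
    r -= 2.5 * loss_pct            # penalty per percent packet loss
    r = max(0.0, min(100.0, r))
    # Standard R-factor to MOS mapping.
    return 1 + 0.035 * r + 7.0e-6 * r * (r - 60) * (100 - r)
```

On a healthy LAN path (20 ms latency, 5 ms jitter, no loss) this yields a MOS around 4.4, while a congested WAN path (300 ms, 30 ms jitter, 5% loss) drops below 3, the usual threshold where VoIP users start to complain.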
What side are you on?
Getting the cooperation needed to actually perform network analytics and monitoring is yet another challenge. "The access credentials even to read data from networking equipment are closely guarded secrets in many organizations," said Thomas Stocking, vice president of product strategy at GroundWork Open Source, a monitoring vendor. "Just to enable SNMP polling can become a matter of negotiation that takes weeks," he said. And something like setting up appliances to actually do packet capture typically needs C-level approval, he added.
Organizational impediments can be a huge challenge for network monitoring, Bhalla agreed. Often, network operations teams will be stonewalled by adjacent domain groups that don't understand why they should grant those teams and their network analytics tools access to their servers, applications or security processes. Sometimes, groups may believe the network operations team is trying to take over their responsibilities. "In many instances, I've walked into a room and can immediately feel the tension between the network and server guys," Bhalla said. In other cases, the symptom is indifference between network operations and security operations groups that collect similar data for their own purposes, but use independent and separate network analytics tools.
Changing the turf culture is often the toughest challenge to overcome. Bhalla said the key to getting past those barriers is recognizing it as a human challenge. In other words, don't come to your opposite numbers with a checklist as if you are addressing a technical problem. "Getting teams to collaborate is best done when they don't even realize they are doing it; the main thing is to present a challenge whereby both teams' or groups' skill sets are required to overcome the problem at hand," said Bhalla.
Bhalla also said cooperation can be encouraged by reconsidering the methods used to "measure" IT personnel. Focus on building an atmosphere for collaboration and enabling the business "as opposed to measuring individuals in a way that promotes a fear of failure," he said.
"I think the bigger issue here is getting the right information to the right teams more than anything," Laliberte said. But if monitoring techniques are used that affect performance, that may definitely cause issues, he added. Similarly, passing on the costs to multiple different departments could also create tension as groups squabble over how much each should pay.
According to Destiny Bertucci, head geek at SolarWinds, a network management software vendor, the business is better served by understanding capacity planning and possessing the ability to forecast future network-infrastructure needs.
Employing network analytics and monitoring the network -- as well as the applications, servers and technology the networks are built on -- will allow companies to proactively combat issues, she said. "Reporting on monitoring is an underutilized strategy within businesses today and can help you prevent and predict the needs of your business before you even receive a call from a user having issues."