How to benchmark 25 GbE NICs in converged networks

Network teams can use an array of tools to benchmark the performance and efficiency of 25 GbE network adapters that support converged networks and HCI environments.

What we used to think of as the data center server has rapidly morphed into a core element of hyper-converged infrastructure (HCI), which tightly couples compute, storage and networking. HCI requires high-performance network connections among its servers, and that's where 25 Gigabit Ethernet comes in.

While 25 GbE has emerged on the scene with relatively little fanfare, it plays a major role in enabling HCI and high-performance computing environments. Different hardware implementations and adapter drivers from vendors, such as Broadcom, Intel, Marvell and Mellanox, can result in differing performance levels for various data center applications. It is therefore important to understand the tools available for benchmarking 25 GbE network interface cards (NICs) -- or network adapters -- for specific uses. Network teams need to ensure the 25 GbE NICs in their servers deliver the performance and efficiency they require.

In this tech tip, we'll provide a quick survey of the methods and tools network teams should be aware of to benchmark 25 GbE NIC performance in the hyper-converged data center.

Low-level adapter performance

Packet processing is the most basic measure of network adapter performance. The Data Plane Development Kit, referred to as DPDK, is a set of open source libraries used to accelerate packet processing.

Performance. Using DPDK drivers, one can test a NIC much as one would test a LAN switch. Network pros can use a packet generator -- e.g., Ixia's IxNetwork or Xena Networks' Valkyrie -- to drive fixed-size packets through a dual-port adapter and verify packet throughput at frame sizes from 64 to 1,518 bytes. This yields a best-case performance figure, free of any application overhead.
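
For reference, the theoretical line-rate packet throughput at a given frame size follows directly from standard Ethernet framing overhead (an 8-byte preamble plus a 12-byte interframe gap per frame). A minimal sketch of that arithmetic, useful as a ceiling to compare measured results against:

```python
# Theoretical line-rate packet throughput for one 25 GbE port at a given
# frame size, assuming standard Ethernet overhead per frame:
# 8-byte preamble + 12-byte interframe gap.

LINK_BPS = 25_000_000_000  # 25 Gb/s
OVERHEAD_BYTES = 8 + 12    # preamble + interframe gap

def line_rate_pps(frame_bytes: int) -> float:
    """Maximum packets per second at full line rate for one 25 GbE port."""
    bits_per_frame = (frame_bytes + OVERHEAD_BYTES) * 8
    return LINK_BPS / bits_per_frame

for size in (64, 128, 256, 512, 1024, 1518):
    print(f"{size:>5}-byte frames: {line_rate_pps(size) / 1e6:,.2f} Mpps")
```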

Efficiency. The flip side of throughput is how efficiently the adapter delivers that performance. With NICs, efficiency focuses on CPU overhead. Teams can use DPDK, for example, to determine how many CPU cycles are required to process a single packet. The more CPU it takes to process a network packet, the less CPU power is left for applications.
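
As a rough illustration of that efficiency metric, cycles per packet can be derived from CPU clock speed, the number of cores dedicated to packet processing, measured utilization and the observed packet rate. The figures below are placeholders for illustration, not measurements from any particular adapter:

```python
# Rough cycles-per-packet calculation from observed CPU load and packet rate.
# All input values here are illustrative placeholders, not measured results.

def cycles_per_packet(cpu_hz: float, cores: int, utilization: float, pps: float) -> float:
    """CPU cycles consumed per processed packet across the busy cores."""
    return (cpu_hz * cores * utilization) / pps

# Example: two cores of a 2.5 GHz CPU at 80% utilization forwarding 30 Mpps.
print(f"{cycles_per_packet(2.5e9, 2, 0.80, 30e6):.0f} cycles/packet")
```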

Network disk performance

Many data center servers use high-performance non-volatile memory express (NVMe) solid-state drives for their storage, and teams often run disk I/O-intensive applications across their network connections. To ensure a network adapter or network fabric isn't a bottleneck, teams will want to benchmark that disk traffic.

Flexible I/O, or FIO, is an open source, multi-platform test tool teams can use to drive storage traffic -- reads and writes -- across the network fabric to benchmark adapter and file server throughput. To simulate file server activity, one would typically run a many-to-one test configuration -- that is, many clients to a single server. This test exercises not only the hardware capabilities, but the adapter driver software as well. Both are key elements in delivering high performance.
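
As a sketch of what such a many-to-one run can look like, the snippet below drives FIO's client/server mode from a controller host. The host names, target path and job parameters are placeholders; the general pattern -- fio --server on each load generator and fio --client on the controller -- is standard, but the job itself should be adapted to your own storage target:

```python
# Sketch: many-to-one fio run driven from a controller host via fio's
# client/server mode. Run "fio --server" on each load-generating client first.
# Host names, the target path and the job parameters are placeholders.
import json
import subprocess

CLIENTS = ["client01", "client02", "client03", "client04"]  # hypothetical hosts

JOB_FILE = "netdisk.fio"
JOB = """
[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=32
numjobs=4
time_based=1
runtime=60

[nic-bench]
; placeholder path on the shared storage target
filename=/mnt/nvme/testfile
size=10g
"""

with open(JOB_FILE, "w") as f:
    f.write(JOB)

# One --client/job-file pair per load generator; results come back aggregated.
cmd = ["fio", "--output-format=json"]
for host in CLIENTS:
    cmd += [f"--client={host}", JOB_FILE]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)

# Depending on fio version, client-mode JSON may report per-client results
# under "client_stats" rather than "jobs"; fall back between the two.
report = json.loads(result.stdout)
stats = report.get("client_stats") or report.get("jobs") or []
total_kib_s = sum(s.get("read", {}).get("bw", 0) for s in stats)  # bw is KiB/s
print(f"Aggregate read throughput: {total_kib_s / 1024 / 1024:.2f} GiB/s")
```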

Vdbench and HCIBench disk benchmarking

It also makes sense to test at the system level, as that most closely represents what your application environment will look like. System-level testing provides a good guide to expected production performance.

VMware Cloud Foundation is a set of VMware products for building a software-defined data center, and VMware provides a benchmarking tool for that environment: HCIBench. HCIBench is essentially an automation wrapper around Oracle's Vdbench; it builds a set of Vdbench scripts suitable for testing a complex HCI environment.
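
To see what HCIBench is automating, it helps to look at a raw Vdbench parameter file, which is built from storage definitions (sd), workload definitions (wd) and run definitions (rd). Below is a minimal hand-rolled sketch that writes such a file from Python; the device path, I/O mix and run length are placeholders, and HCIBench generates far more elaborate versions of these across an entire cluster:

```python
# Sketch: generate a minimal Vdbench parameter file of the kind HCIBench
# automates across a cluster. Device path, I/O mix and run length are
# placeholders chosen for illustration.
PARAM_FILE = "hci_test.vdb"

params = "\n".join([
    # Storage definition: the raw device (or file) to exercise.
    "sd=sd1,lun=/dev/sdb,openflags=o_direct,threads=8",
    # Workload definition: 4 KiB transfers, 70% reads, fully random.
    "wd=wd1,sd=sd1,xfersize=4k,rdpct=70,seekpct=100",
    # Run definition: uncapped I/O rate for 60 seconds, reporting every second.
    "rd=rd1,wd=wd1,iorate=max,elapsed=60,interval=1",
])

with open(PARAM_FILE, "w") as f:
    f.write(params + "\n")

print(f"Wrote {PARAM_FILE}; run with: ./vdbench -f {PARAM_FILE}")
```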

RoCE fabric performance

Networks are fast enough now that remote memory access can be mapped across them. RoCE is an acronym within an acronym, standing for remote direct memory access (RDMA) over Converged Ethernet.

At the moment, RoCE brings a whole new level of complexity, and there are no simple ways to benchmark RoCE network performance. In its testing, The Tolly Group has used Mellanox's ib_send_bw, a powerful command-line benchmarking utility distributed as part of the perftest package.
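
As a rough sketch of how such a run is driven, ib_send_bw operates as a pair: one instance listens on the target server, and a second instance connects to it and reports bandwidth. The snippet below launches the client side from Python; the peer address, RDMA device name and GID index are placeholders that must match your own RoCE configuration, and the listening side must already be running with compatible options:

```python
# Sketch: launch the client side of an ib_send_bw run against a peer that is
# already running ib_send_bw in listen mode with matching options.
# Peer address, device name and GID index are placeholders for your fabric.
import subprocess

PEER = "192.0.2.10"      # placeholder address of the listening server
DEVICE = "mlx5_0"        # placeholder RDMA device name (see "ibv_devices")
GID_INDEX = "3"          # GID index that maps to your RoCE v2 interface

cmd = [
    "ib_send_bw",
    "-d", DEVICE,
    "-x", GID_INDEX,
    "-s", "65536",        # 64 KiB messages
    "-D", "30",           # run for 30 seconds
    "-F",                 # don't fail on CPU frequency scaling warnings
    "--report_gbits",     # report bandwidth in Gb/s rather than MB/s
    PEER,
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)      # perftest prints a bandwidth summary table
```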

Over time, easier methods for benchmarking the components of HCI performance will no doubt emerge. For now, at least, network teams can take the open source tools available, get some experience and establish performance baselines for their environments.
