Storage networking technologies explained

Make sure you know the differences among network fabrics, such as Ethernet, Fibre Channel and InfiniBand, before making decisions about your storage network infrastructure.

Ethernet, Fibre Channel and InfiniBand are the three most commonly implemented network fabrics in today's data centers. They differ in a number of ways, but all three provide a structure for interconnecting resources and transferring data among them, with the goal of moving data as efficiently and securely as possible.

The speed wars continue to rage among these three leading network topologies. And that's unlikely to change even as new storage networking technologies emerge.

When choosing among these three leading storage networking technologies, you'll want to weigh a variety of factors, including performance, cost, ease of deployment, security and management capabilities. Also consider what equipment you have on hand and the systems and workloads you're trying to support. Only then can you be sure you're getting the network topology that will best meet the needs of your environment.

Here, we look at the three network fabric types and examine what sets each apart from the others.

Ethernet

Ethernet has long been the de facto technology for LANs, working in conjunction with TCP/IP to facilitate systemwide communications among corporate resources. Because of Ethernet's adaptability and steady evolution, the technology has found its way into SANs, primarily through the use of the iSCSI protocol. Ethernet is also used for hyper-converged infrastructures and large-scale data centers, such as those supported by Amazon, Facebook, Google and Microsoft.
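Because iSCSI simply rides on TCP/IP, an Ethernet-based storage target can be probed with nothing more than standard sockets. The minimal sketch below, which assumes a hypothetical portal address, checks whether a target answers on iSCSI's well-known TCP port 3260; the actual iSCSI login and SCSI command phases are left to a real initiator, such as open-iscsi.

```python
import socket

# Placeholder portal address for illustration; port 3260 is the
# well-known TCP port on which iSCSI targets listen.
ISCSI_PORTAL = ("192.168.1.50", 3260)

def portal_reachable(addr, timeout=3.0):
    """Open a plain TCP connection to an iSCSI target portal.

    Success only proves Ethernet/TCP reachability; iSCSI login and
    SCSI commands are handled by an initiator, not by this sketch.
    """
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("portal reachable:", portal_reachable(ISCSI_PORTAL))
```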

Defined by the IEEE 802.3 standards, Ethernet is the simplest and cheapest type of network topology to deploy. That's partly because it's widely implemented, familiar to most administrators and supported by most routers and device types. Ethernet can also accommodate block, file and object storage, and it can be used with advanced storage technologies, such as automation, provisioning and software-defined storage.

When Ethernet was first developed in 1973, it supported speeds of only 2.94 Mbps, but it now claims throughputs up to 100 Gbps, with 200 Gbps and 400 Gbps in the wings. It has evolved from a hub-centric topology to a fully switched network that can run over twisted pair, coaxial cable or optical fiber. In addition, Ethernet can now provide lossless transport -- meaning no frames are dropped in transit -- through data center bridging enhancements, and it supports a wide range of communication protocols.
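To put that evolution in perspective, the back-of-the-envelope script below estimates how long moving 1 TB of data takes at the nominal line rate of each major Ethernet generation. The rates are the published nominal figures; real transfers would be slower once protocol overhead is factored in.

```python
# Time to move 1 TB at each Ethernet generation's nominal line rate.
# Nominal rates only; protocol overhead would lengthen real transfers.
GENERATIONS_GBPS = {
    "Experimental (1973)": 0.00294,
    "10BASE-T": 0.01,
    "Fast Ethernet": 0.1,
    "Gigabit Ethernet": 1,
    "10 GbE": 10,
    "100 GbE": 100,
    "400 GbE": 400,
}

PAYLOAD_BITS = 1e12 * 8  # 1 TB expressed in bits

for name, gbps in GENERATIONS_GBPS.items():
    seconds = PAYLOAD_BITS / (gbps * 1e9)
    print(f"{name:>20}: {seconds:>14,.1f} s")
```

At 2.94 Mbps, that transfer would take roughly a month; at 400 Gbps, about 20 seconds.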

[Figure: How Ethernet speed has changed over the years]

Ethernet has been at the forefront of integrating new storage networking technologies, such as remote direct memory access (RDMA) in the form of RDMA over Converged Ethernet (RoCE) and nonvolatile memory express (NVMe) over Fabrics. Ethernet's biggest strength has been its ability to adapt to emerging technologies and different environments. That adaptability has made Ethernet suitable for both LAN and SAN deployments, helping to simplify network deployment and maintenance.
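As one concrete example, querying an NVMe over Fabrics discovery controller across an RDMA-capable Ethernet (RoCE) link can be scripted around the standard nvme-cli tool, as the sketch below shows. The controller address is a placeholder, and port 4420 is the conventional NVMe over Fabrics service ID.

```python
import subprocess

# Hypothetical discovery controller address for illustration.
TRADDR = "10.0.0.25"
TRSVCID = "4420"  # conventional NVMe over Fabrics port

def discover_nvme_targets(transport="rdma"):
    """List NVMe over Fabrics subsystems advertised by a discovery
    controller, via nvme-cli. Requires the nvme-cli package and, for
    the rdma transport, an RDMA-capable (e.g., RoCE) NIC."""
    result = subprocess.run(
        ["nvme", "discover", "-t", transport, "-a", TRADDR, "-s", TRSVCID],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(discover_nvme_targets())
```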

Fibre Channel

Fibre Channel (FC) is the only one of the storage networking technologies developed specifically for connecting servers to shared storage devices in a SAN configuration. FC provides in-order delivery of raw data blocks between devices, even as far as 10 km apart if they're connected by optical fiber. It also offers a high degree of security and reliability. Originally, FC ran only on optical fiber, but it can now run on coaxial cable and twisted pair lines, too.

From the beginning, FC supported a lossless, switch-based network topology, something Ethernet didn't provide until later in its evolution. In addition, FC can be used for point-to-point connections, where devices are directly connected, as well as arbitrated loops, where they're connected in a loop or ring.

FC reached the 32 Gbps threshold with the release of Gen 6 FC in 2016, with 64 Gbps FC on the roadmap for 2019. Gen 6 supports a parallel configuration in which four lanes can be aggregated into a single module to achieve 128 Gbps throughput. In addition, NVMe over FC devices are hitting the market, helping to maximize the value of flash drives in a SAN configuration.
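The arithmetic behind that parallel configuration is straightforward, as the short sketch below shows; the lane count and per-lane rate come from the Gen 6 figures cited above.

```python
# Gen 6 FC parallel configuration: four 32 Gbps lanes aggregated
# into a single module.
LANE_RATE_GBPS = 32
LANES = 4

print(f"{LANES} lanes x {LANE_RATE_GBPS} Gbps = {LANES * LANE_RATE_GBPS} Gbps")
# -> 4 lanes x 32 Gbps = 128 Gbps
```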

[Figure: Fibre Channel keeps getting faster]

FC has reigned over the SAN landscape for decades, but improvements to Ethernet have made FC less dominant for storage. FC has a reputation for being expensive and difficult to manage. Plus, it's limited to block storage, which puts it at a disadvantage in modern data center environments, such as many cloud-based infrastructures that also support file and object storage. Ethernet is favored in such environments because of its flexibility, although InfiniBand is making inroads into these areas.

But while the rate of FC deployments has flattened over the last couple of years compared with Ethernet, the technology is still widely used for SANs and isn't likely to go away anytime soon. In fact, when it comes to storage, many in the industry still consider FC to be better performing and more reliable than Ethernet.

Fibre Channel over Ethernet (FCoE) is a hybrid protocol that encapsulates FC frames for transport over Ethernet networks, converging storage and general network traffic onto a single cable and interface. But while storage networking vendors have supported FCoE in their products since 2009, the protocol has never been widely deployed for storage.

InfiniBand

Although not as widely adopted as the other leading storage networking technologies, InfiniBand has distinct advantages over Ethernet and FC. Supporting both switched and point-to-point interconnectivity, InfiniBand delivers high throughput with low latency. It's also highly scalable and reliable, and it includes failover and quality-of-service capabilities. InfiniBand is often the technology of choice for high-performance computing (HPC) environments, including some of the world's most powerful supercomputers.

InfiniBand uses serial buses to connect resources. A serial bus requires fewer pins and other electrical connections, improving reliability and reducing costs. A serial bus can also carry multiple channels of data simultaneously, supporting multiplexing.

In an InfiniBand network, processing nodes, such as servers and PCs, are configured with host channel adapters, and peripheral devices, such as storage drives, are configured with target channel adapters. The target channel adapters generally support only a subset of the functionality available to the host adapters, but both facilitate connectivity across the network.

InfiniBand speeds are based on supported data rates and the number of aggregated links, or lanes. Data rates are categorized by type, such as single data rate, quad data rate, enhanced data rate and high data rate (HDR). HDR is the fastest type available, supporting 50 Gbps per lane, although faster rates are already on the roadmap. When lanes are aggregated, much greater speeds can be achieved. For example, an HDR network based on four aggregated lanes (4X) can achieve throughputs of 200 Gbps, and a 12X HDR network can achieve throughputs of 600 Gbps. The most common InfiniBand implementation is 4X.
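The sketch below tabulates how data-rate type and link width combine. The per-lane figures are the commonly published InfiniBand rates; DDR and FDR are included for completeness even though the text above doesn't mention them.

```python
# Per-lane InfiniBand data rates (Gbps) by generation, and the
# aggregate throughput at common link widths (1X, 4X, 12X).
PER_LANE_GBPS = {
    "SDR": 2.5,  # single data rate
    "DDR": 5,    # double data rate
    "QDR": 10,   # quad data rate
    "FDR": 14,   # fourteen data rate
    "EDR": 25,   # enhanced data rate
    "HDR": 50,   # high data rate
}
WIDTHS = (1, 4, 12)

print("rate".ljust(6) + "".join(f"{w}X".rjust(10) for w in WIDTHS))
for rate, lane_gbps in PER_LANE_GBPS.items():
    cells = "".join(f"{lane_gbps * w:g}".rjust(10) for w in WIDTHS)
    print(rate.ljust(6) + cells)
# HDR at 4X -> 200 Gbps; HDR at 12X -> 600 Gbps, matching the text.
```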

[Figure: InfiniBand's speed roadmap]

InfiniBand has gained ground in areas that FC might have been expected to fill, and it's starting to expand beyond its HPC niche. It can also support scale-out storage scenarios that incorporate file and object storage, making it a valuable asset for organizations that need a high-performing, multipurpose network architecture to support resource-intensive workloads. However, InfiniBand tends to be more complex and expensive than Ethernet, in part because it requires more specialized IT skills.
