
NVMe-oF and its many benefits take NVMe to the next level

NVMe over Fabrics is revolutionizing the storage market by making full use of data center capacity and becoming an effective replacement for SAN environments.

Once the final pieces of the Nonvolatile Memory Express over Fabrics puzzle are in place, flash performance could improve beyond today's levels and latency could drop even further. NVMe commands can now travel across the network, speeding up external storage. It's time to look at the benefits of NVMe over Fabrics, how best to use the technology on your networks, and the vendors and standards bodies behind NVMe-oF.

NVMe is a new open standard that major storage vendors have adopted. It's a logical device interface specification that allows nonvolatile storage to be attached over a PCI Express interface. Connecting NVMe devices directly to PCIe slots reduces I/O overhead, which lets the devices, and the systems they run in, fully benefit from the parallelism of modern SSDs.

NVMe was first deployed in servers in 2012, and soon after, it made its way into storage systems. In 2014, the NVM Express Inc. group was founded to promote NVMe as an industrywide standard. NVM Express' board of directors includes Cisco, Dell EMC, Intel, Micron Technology, Microsemi, Microsoft, NetApp, Seagate Technology and Western Digital's HGST unit.

NVMe in the data center

Workloads increasingly run in the cloud, and storage has to adapt to this new approach to data. The trend is to bring data closer to the processor, and that's exactly what NVMe does.

NVMe works on the local PCIe bus, as well as over network fabrics such as Fibre Channel. In 2014, the NVMe-oF standard was proposed as a communication protocol that lets one computer access block-level storage devices on another computer, using Remote Direct Memory Access or Fibre Channel as the transport. The NVM Express group published the standard in June 2016, and drivers are now available for all major OSes.

NVMe milestones

An important NVMe feature for the data center is support for multiple tenants accessing a device through multiple queues simultaneously. This functionality, which wasn't possible with earlier interfaces, lets NVMe devices be deployed in a scalable way. Because of it, NVMe can serve as a replacement for old-school SAN environments.

NVMe-oF takes NVMe to the next level by making full use of data center capacity. It provides a common architecture that supports a range of storage network fabrics for the NVMe block storage protocol. As a result, it can scale out to multiple NVMe devices and extend the distance over which NVMe devices can be reached.

NVMe-oF is also an excellent choice where storage and compute power are strictly separated, such as in IoT environments.

How NVMe-oF works

SSDs have been on the market for years, but their performance has been limited because they typically connect through SATA interfaces developed for hard disk drives. SATA provides only one I/O queue, with a relatively shallow queue depth of 32 commands. Those limits prevent the host from fully exploiting the parallelism of NAND flash media.

NVMe, on the other hand, uses controllers and drivers developed specifically for solid-state devices. It allows drives to be addressed much the way memory is, making storage faster. NVMe drives can easily reach 2 GBps, several times the throughput of SATA-based SSDs. NVMe is available for both enterprise IT environments and end users.

NVMe dramatically increases transaction speeds because it connects over PCIe. PCIe slots connect directly to the CPU, enabling memory-like access, as opposed to SATA, which requires a much larger storage stack. PCIe also enables faster connections: A four-lane PCIe 3.0 link, signaling at 8 gigatransfers per second per lane, offers a raw bandwidth of nearly 4 GBps.
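As a rough illustration of where that figure comes from, here is a short calculation for a four-lane PCIe 3.0 link, the configuration most NVMe SSDs use. The signaling rate and encoding overhead are standard PCIe 3.0 parameters, not figures from this article:

```python
# Back-of-the-envelope PCIe 3.0 x4 bandwidth estimate (illustrative only).
SIGNALING_GT_PER_S = 8.0        # PCIe 3.0 signaling rate per lane, in gigatransfers/s
ENCODING_EFFICIENCY = 128 / 130 # PCIe 3.0 uses 128b/130b line encoding
LANES = 4                       # typical link width for an NVMe SSD

# Each transfer moves one bit per lane, so divide by 8 to convert bits to bytes.
per_lane_gbytes = SIGNALING_GT_PER_S * ENCODING_EFFICIENCY / 8
total_gbytes = per_lane_gbytes * LANES

print(f"Per lane: {per_lane_gbytes:.2f} GBps; x{LANES} link: {total_gbytes:.2f} GBps")
# Per lane: 0.98 GBps; x4 link: 3.94 GBps
```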


The second part of the NVMe effort was to develop a driver written specifically for these devices. That driver lets them benefit from the increased access speeds the PCIe interface offers and eliminates the SCSI command stack, which was never intended for SSDs.

And let's not forget the difference in queue size. NVMe allows for 65,535 queues with 65,535 commands per queue, compared with SATA's single queue of 32 commands. All of these benefits have resulted in NVMe's large-scale adoption in the data center.
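To put those queue figures in perspective, here is a small illustrative calculation of how many commands can be outstanding at once under each interface, using the numbers above:

```python
# Maximum outstanding commands: SATA vs. NVMe (figures from the text).
sata_queues, sata_depth = 1, 32
nvme_queues, nvme_depth = 65_535, 65_535

sata_outstanding = sata_queues * sata_depth   # 32 commands in flight
nvme_outstanding = nvme_queues * nvme_depth   # roughly 4.3 billion commands in flight

print(f"SATA: {sata_outstanding:,} outstanding commands")
print(f"NVMe: {nvme_outstanding:,} outstanding commands")
```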

Setting up remote access

The main benefit of the NVMe-oF specification is that it lets NVMe devices be accessed over a distance. Still, setting up remote NVMe access is a bit difficult. Mellanox Technologies has a configuration guide that describes how to configure Linux to access remote NVMe devices, and it applies to NVMe-oF as well.

To set up NVMe and NVMe-oF remote access, Linux kernel modules must be loaded on the NVMe server (the target) as well as on the client, after which configuration parameters are written to the /sys file system. These parameters configure an IP address on the server and enable sharing of the device. One disadvantage of this approach is that nothing written to the Linux /sys file system persists across a reboot.
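The target-side configuration roughly takes the shape below. This is a minimal Python sketch of the writes the Linux in-kernel nvmet target expects when sharing a device over an RDMA fabric; the subsystem name, IP address and backing device are placeholders, the nvmet and nvmet-rdma modules are assumed to be loaded already, and the script must run as root:

```python
"""Minimal sketch: export a local NVMe drive over an RDMA fabric with the
Linux nvmet target.  The subsystem name, IP address and device path are
placeholders; the nvmet and nvmet-rdma kernel modules must already be
loaded, and the script has to run as root."""
from pathlib import Path

NVMET = Path("/sys/kernel/config/nvmet")       # configfs root for the NVMe target
SUBSYS = NVMET / "subsystems" / "testsubsys"   # placeholder subsystem (NQN) name
PORT = NVMET / "ports" / "1"

# 1. Create the subsystem and allow any host to connect (no access control).
SUBSYS.mkdir(parents=True)
(SUBSYS / "attr_allow_any_host").write_text("1")

# 2. Add namespace 1 and back it with a local NVMe device.
ns = SUBSYS / "namespaces" / "1"
ns.mkdir(parents=True)
(ns / "device_path").write_text("/dev/nvme0n1")
(ns / "enable").write_text("1")

# 3. Create a fabric port that listens on the server's IP address over RDMA.
PORT.mkdir(parents=True)
(PORT / "addr_traddr").write_text("192.168.1.10")   # placeholder server address
(PORT / "addr_trtype").write_text("rdma")
(PORT / "addr_trsvcid").write_text("4420")          # standard NVMe-oF port
(PORT / "addr_adrfam").write_text("ipv4")

# 4. Link the subsystem to the port so it is exported on the fabric.
(PORT / "subsystems" / "testsubsys").symlink_to(SUBSYS)
```

On the client, the nvme-rdma module is loaded and the remote subsystem is attached with the nvme-cli tool, for example: nvme connect -t rdma -n testsubsys -a 192.168.1.10 -s 4420. Because these settings live under /sys, they are gone after a reboot, which is the persistence gap noted above.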

This is clearly an area where the vendors have work to do. That will likely happen soon, because NVMe and NVMe-oF have obvious storage benefits and are near the point of being must-have technology.
