
4 NVMe storage array options to consider

Here's what you need to know to pick the NVMe array that provides your organization with the performance improvement it needs and also fits its budget.

When selecting your first NVMe storage array, it's important to understand the four basic implementation options to get the system that best meets your enterprise's performance needs and budget.

1. SAS replacements

NVMe extracts better performance from flash media by using the Peripheral Component Interconnect Express (PCIe) bus, along with far higher command counts and queue depths than SAS. Replacing SAS-connected drives with NVMe-connected drives is the most common implementation method. Creating an NVMe-based system is straightforward because most all-flash array (AFA) software runs on top of a Linux kernel, and Linux natively supports NVMe, so the move to NVMe is seamless for the software.
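To see why the deeper queues matter, here's a back-of-the-envelope comparison. The figures are the protocol specification maximums (a single SAS queue of roughly 254 commands versus up to 65,535 NVMe I/O queues of 65,536 commands each); real drives and drivers expose far fewer NVMe queues than the maximum, so treat this as an upper bound, not a measured result:

```python
# Rough command-parallelism comparison between SAS and NVMe.
# Figures are specification maximums, not what a given drive exposes.
sas_queues, sas_queue_depth = 1, 254            # one queue, ~254 commands
nvme_queues, nvme_queue_depth = 65_535, 65_536  # spec maximums

sas_outstanding = sas_queues * sas_queue_depth
nvme_outstanding = nvme_queues * nvme_queue_depth

print(f"SAS max outstanding commands:  {sas_outstanding:,}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
print(f"Ratio: ~{nvme_outstanding // sas_outstanding:,}x")
```

Even if a real controller uses only one queue per CPU core, the gap in potential parallelism is what lets NVMe drives absorb the flood of I/O streams that saturates a SAS-based AFA.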

Switching to NVMe can impact compute, though. Vendors choosing the SAS replacement method must increase the CPU power of their systems to benefit from the NVMe investment. The more expensive NVMe drives and more powerful CPUs put these NVMe all-flash arrays at a higher price point compared to SAS-based systems.

Merely replacing SAS flash with NVMe flash limits the performance improvement to interactions within the system. External connectivity is typically still Fibre Channel (FC) or traditional Ethernet, so once data leaves the NVMe storage array, SCSI and its latency are reintroduced. Nevertheless, many organizations see a performance improvement, especially those where the SAS-based AFA is flooded with I/O streams.

2. Hybrid integration

Hybrid arrays mix flash and hard disk drives. These systems can keep costs down while delivering performance close to that of an AFA, provided the flash tier is large enough and accesses to the hard disk tier are kept to a minimum. The problem with a hybrid system is that the performance delta between flash and hard disk can be too large; when a flash miss occurs, users may notice the performance drop.

The hybrid systems in this category instead integrate NVMe flash and high-density SAS flash. They keep costs down by limiting the size of the NVMe tier, which needs only to be large enough to store the most active data segments. Because of the smaller tier size, there's also less need for increased CPU power. And because both tiers are flash, a miss on the NVMe tier causes almost no noticeable performance impact.
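As a rough illustration of the tiering idea (not any vendor's actual placement algorithm), a recency-based policy can keep the most active extents in a small, fixed-size NVMe tier and serve everything else from SAS flash:

```python
# Illustrative sketch only: a tiny two-tier placement policy that keeps
# the most recently accessed extents in a fixed-size "NVMe" tier and
# serves misses from "SAS" flash, promoting them on access.
from collections import OrderedDict

class TwoTierCache:
    def __init__(self, nvme_extents: int):
        self.nvme = OrderedDict()     # extent_id -> None, ordered by recency
        self.capacity = nvme_extents  # size of the NVMe tier, in extents

    def access(self, extent_id: str) -> str:
        """Record an access; return which tier served it."""
        if extent_id in self.nvme:
            self.nvme.move_to_end(extent_id)   # refresh recency
            return "nvme"
        # Miss: serve from SAS flash, then promote to the NVMe tier.
        self.nvme[extent_id] = None
        if len(self.nvme) > self.capacity:
            self.nvme.popitem(last=False)      # demote least-recent extent
        return "sas"

tiers = TwoTierCache(nvme_extents=2)
print([tiers.access(e) for e in ["a", "b", "a", "c", "b"]])
```

The point of the sketch is the economics: only the working set needs expensive NVMe capacity, and because a demotion lands on SAS flash rather than spinning disk, a miss costs microseconds instead of milliseconds.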


While SAS flash is fast, it's not as fast as NVMe. Many organizations may find they must continue to increase the size of the NVMe flash tier to keep pace with workloads like high-end databases and big data analytics processing.

3. Scale-out systems

Scale-out systems can also benefit from NVMe. Today, internode connections made over traditional IP networking add latency. NVMe over Fabrics (NVMe-oF) enables internode communications at internal storage speed, as if the cluster nodes were directly connected to each other's internal storage. The reduction in latency should enable scale-out systems to scale further without increasing storage I/O latency.
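Some rough arithmetic shows why the fabric dominates once media latency drops to flash levels. The figures below are assumed orders of magnitude for discussion, not benchmark results for any product:

```python
# Illustrative latency budget for a scale-out cluster's internode hop.
# All figures are assumed orders of magnitude, not measurements.

def fabric_share(media_us: float, hop_us: float) -> float:
    """Percent of total I/O time spent in the interconnect."""
    return hop_us / (media_us + hop_us) * 100

MEDIA_US = 90  # assumed NAND flash read latency, in microseconds

for fabric, hop_us in [("traditional IP", 500), ("NVMe-oF (RDMA)", 10)]:
    print(f"{fabric}: {MEDIA_US + hop_us} us total, "
          f"{fabric_share(MEDIA_US, hop_us):.0f}% in the fabric")
```

With spinning disk at around 10 milliseconds per read, a half-millisecond network hop was noise; with flash media, that same hop can account for most of the total latency, which is the bottleneck NVMe-oF removes.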


4. End-to-end NVMe

End-to-end NVMe connectivity is the next step. It enables hosts and bare-metal applications to communicate with storage at speeds and latencies similar to direct-attached storage (DAS).


End-to-end NVMe requires more than just installing a new NVMe storage array. Organizations looking to adopt this approach must also upgrade their network. They don't, however, need to replace the network wholesale, because all FC switches and most storage-class Ethernet networks support NVMe and traditional SCSI-based protocols simultaneously. The same is true of network cards and host bus adapters.

Most of the early vendors to ship end-to-end NVMe systems are startups. These vendors invest in making sure their storage systems don't bottleneck the NVMe data flow, using field-programmable gate arrays and even application-specific integrated circuits to offload storage software processing. They target AI and machine learning workloads, which can actually use the massive I/O potential of these systems.

Picking the best NVMe storage array

The key question is: How much performance does your organization need? All NVMe options promise to improve the performance of flash arrays by reducing latency. The problem is that the performance improvement may be more than many organizations will ever need, and that performance comes at a price.

Storage infrastructures are reaching the point where they can deliver more performance than an organization requires. Buying the fastest possible system that fits within the IT budget may no longer be a sound strategy. In addition to understanding the NVMe storage array options, you also must predict what your maximum I/O requirements will be over the next five years and select the array that best meets that need. You may find that a traditional SAS-based system provides the required performance and saves your organization a significant amount of money.
