Every IT professional knows our industry is overwhelmed with acronyms, and flash storage is no exception. The problem is that the industry uses these acronyms so frequently, it is difficult to keep up with them all. These various ways of describing flash memory standards and interfaces are vital for IT professionals to know because they affect how organizations implement the technology.
First, IT professionals should understand how flash storage interconnects. Most flash vendors today, even if they provide turnkey hardware, are software developers. The hardware is, for the most part, a server that a manufacturer configures to support a higher number of drive slots than the typical server.
How those drives in the server connect to the processor has become increasingly important as flash memory storage performance has improved. Also important is how those flash systems connect externally to the application servers they support.
Serial-Attached SCSI (SAS) technology is still the dominant flash interconnect. The performance and bandwidth of SAS continue to improve, but, for the most part, vendors are gradually replacing SAS with non-volatile memory express (NVMe).
Unlike SCSI, the NVMe standard is designed specifically for memory-based storage. It supports a far higher command count and queue depth than SCSI, so the flash drive spends less time waiting for I/O requests. Over the next two years or so, NVMe will replace SAS as the connection option of choice.
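To give a rough sense of the queue-depth gap, the following sketch compares the commonly cited specification ceilings: a SAS device queues commands in a single queue, typically around 256 deep, while the NVMe specification allows up to 65,535 I/O queues of up to 65,536 commands each. These are spec maximums; real drives and drivers expose far smaller queues.

```python
# Theoretical outstanding-command ceilings (spec maximums, not
# what any real device actually exposes).

sas_queues, sas_depth = 1, 256            # single queue, ~256 commands (typical)
nvme_queues, nvme_depth = 65_535, 65_536  # commonly cited NVMe spec limits

sas_outstanding = sas_queues * sas_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"SAS : {sas_outstanding:,} outstanding commands")
print(f"NVMe: {nvme_outstanding:,} outstanding commands")
print(f"Ratio: ~{nvme_outstanding // sas_outstanding:,}x")
```

The orders-of-magnitude difference is the point: with that much parallelism available, the drive, not the interface, becomes the bottleneck.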
The connection between application servers and the storage system is also critical to overall performance. Both Fibre Channel (FC) and Ethernet-based protocols like Internet Small Computer System Interface (iSCSI) enjoy the benefit of increasing bandwidth, but bandwidth is not the sole determinant of performance.
Latency is a significant factor as well. To address latency, NVMe has an extension called NVMe over Fabrics (NVMe-oF). NVMe-oF versions are available for both FC and Ethernet networks. Both provide all the benefits of internal NVMe but across the network. They enable shared storage to deliver performance similar to local storage while maintaining the efficiencies of shared storage.
Understanding bits per cell
When it comes to flash, it is also important to understand the acronyms that relate to bits per cell. Over the years, engineers have developed ways to squeeze more data into the same physical space on a flash die. Each increase in bits per cell increases flash density and lowers the price, but it also reduces flash durability -- the more bits written to each cell, the shorter the service life of the drive.
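The trade-off can be sketched numerically. Capacity scales linearly with bits per cell, while program/erase (P/E) endurance falls off sharply. The endurance figures below are rough, commonly quoted ballparks for illustration only, not any vendor's specifications.

```python
# Illustrative only: same silicon, more bits per cell.
# P/E-cycle figures are rough industry ballparks, not vendor specs.

CELL_TYPES = {
    #       (bits per cell, approx P/E cycles)
    "SLC": (1, 100_000),
    "MLC": (2, 10_000),
    "TLC": (3, 3_000),
    "QLC": (4, 1_000),
}

base_capacity_gb = 256  # capacity if every cell stored a single bit

for name, (bits, pe_cycles) in CELL_TYPES.items():
    capacity = base_capacity_gb * bits  # density grows with bits per cell
    print(f"{name}: {capacity} GB from the same die, ~{pe_cycles:,} P/E cycles")
```

The pattern is clear: each generation multiplies capacity from the same physical cells while dividing endurance, which is why each new cell type starts out looking unfit for the enterprise.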
Single-level cell (SLC) was the first flash generation to appear in the enterprise and, as its name implies, it wrote one bit of data per cell. SLC was, and remains, the most durable of all flash technologies. In real-world use, however, it has proven more durable than most enterprise use cases require -- and too expensive for them.
Multi-level cell (MLC) was the second flash generation and, although somewhat misleadingly named, was the primary driver in the move to all-flash storage systems. MLC writes two bits per cell and dramatically reduced the cost of flash systems, to the point that all-flash arrays moved from fantasy to reality.
Triple-level cell (TLC) is the third generation of flash technology and, for the most part, the most prevalent technology in flash systems today. TLC, as the name suggests, writes three bits per cell and once again dramatically lowered the cost of flash storage. Ironically, when vendors first floated TLC as a concept, the industry assumed it wouldn't have the durability required for enterprise use. By the time TLC was ready to ship, however, most all-flash vendors had found that their MLC arrays were holding up fine, and that TLC, despite its lower durability, would work for most enterprises.
Quad-level cell, or QLC NAND, is the fourth-generation flash technology, and again, as the name implies, writes four bits per cell. It also dramatically lowers the cost of flash storage. QLC's durability may not be up to the enterprise standard, but that does not mean organizations can't make use of the technology. QLC is ideal for sequential writes and read-heavy environments. Its deployments in the enterprise are mostly in a specific use case where an organization moves older data or data for read-only workloads to a QLC-based system.
We've also seen a few vendors combine QLC with TLC flash to create a new type of hybrid flash technology. Here, an active data set is written to TLC flash first, and then, after it ages, it is moved to QLC. Unlike legacy hybrid arrays that use hard drives, however, the tiering software does not move subsequent reads from the lower tier to the higher tier. They are, instead, read directly from the lower QLC tier. QLC read performance is almost as fast as the upper tier and, in some cases because there are typically more drives in the lower QLC tier, read performance is faster.
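The tiering behavior described above can be sketched in a few lines. The class names and the age threshold here are hypothetical, for illustration only, and do not reflect any vendor's actual implementation: new writes land on TLC, data that ages out migrates to QLC, and reads are served in place with no promotion back to the upper tier.

```python
import time

AGE_THRESHOLD = 7 * 24 * 3600  # demote data untouched for a week (assumed value)

class HybridFlashArray:
    """Hypothetical sketch of TLC/QLC tiering, not a real product's logic."""

    def __init__(self):
        self.tlc = {}  # key -> (data, last_write_time); the active tier
        self.qlc = {}  # the capacity tier

    def write(self, key, data):
        # Active data always lands on TLC first.
        self.tlc[key] = (data, time.time())

    def demote_cold(self, now=None):
        # Move aged data down to QLC.
        now = now or time.time()
        for key, (data, ts) in list(self.tlc.items()):
            if now - ts > AGE_THRESHOLD:
                self.qlc[key] = (data, ts)
                del self.tlc[key]

    def read(self, key):
        # Read in place: a QLC hit is NOT promoted back to TLC,
        # unlike legacy disk-based hybrid arrays.
        if key in self.tlc:
            return self.tlc[key][0]
        return self.qlc[key][0]
```

The design choice worth noting is in `read`: because QLC read performance is close to TLC's, there is nothing to gain from copying data back up on access, so the tiering is one-way.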
Penta-level cell (PLC) is the next generation of flash technology. It will increase the capacity of a QLC drive by about 25%, which means a 256 GB QLC flash drive becomes a 320 GB PLC drive. As with QLC, the extra bit per cell reduces the durability of these drives. PLC is ideal for consumer devices but will more than likely work its way into the enterprise.
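The 25% figure follows directly from the extra bit per cell, as a quick calculation shows:

```python
# Going from 4 bits per cell (QLC) to 5 (PLC) on the same cells.
bits_qlc, bits_plc = 4, 5
gain = (bits_plc - bits_qlc) / bits_qlc

print(f"{gain:.0%}")            # prints "25%"
print(256 * (1 + gain))         # prints 320.0 -- a 256 GB QLC part becomes 320 GB
```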
Again, hybrid flash arrays are an option here, but the array might have to support three tiers of movement -- TLC to QLC to PLC -- to make sure the data it writes to PLC is truly cold. There will also be specific use case systems where organizations move data for workloads they know are read-only to the PLC flash.
All these standards can be confusing. For most enterprises, TLC-based flash is ideal. Still, a system that can tier between TLC and QLC, for example, may enable organizations to lower overall flash costs without any noticeable performance loss. The SAS-versus-NVMe question will take care of itself. Eventually, SAS will age out, but organizations should let that happen as they upgrade systems and find NVMe-based technology at the same price as SAS. In five years, a data center's primary storage system will likely be a combination of TLC, QLC and maybe PLC connected to an NVMe network.