
Performance, reliability tradeoffs with SLC vs. MLC and more

Learn the important capacity, performance and reliability differences among multi-level flash technologies when it comes to adding SSD storage to your organization.

Flash, a.k.a. NAND, SSDs continue to be a hot topic for IT pros. They are found in storage arrays, NAS appliances, drives and PCIe memory cards. Flash SSDs have rapidly become part of every storage acquisition discussion, but making those purchasing decisions can be difficult.

There are compelling reasons why flash comes in single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC), quad-level cell (QLC) and even penta-level cell (PLC) varieties, including performance, power efficiency, availability and reliability.

So, how to choose? The rest of this article goes into greater depth on each of these technologies, with the exception of PLC. PLC is still being perfected; current fab yields are too low to ship cost-effectively, but expect that to be fixed over time. That leaves SLC, MLC, TLC and QLC. Before going into greater depth, here are the general rules of thumb for choosing the correct SSD technology, illustrated in the short sketch after this list:

  • Capacity, as in the number of states a cell can hold, doubles with each additional bit per cell;
  • Latency doubles for each bit per cell, as does power;
  • Write endurance decreases by an order of magnitude for each additional bit per cell;
  • Errors increase for each bit per cell, increasing the amount of error correction code in the SSD controller.
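To make those rules of thumb concrete, here's a minimal sketch that computes the relative characteristics for each bits-per-cell technology. The figures are ballpark illustrations of the rules above, not vendor specifications.

```python
# Illustrative sketch of the rules of thumb above; figures are rough
# approximations, not vendor specifications.

BASE_SLC_PE_CYCLES = 100_000  # assumed ballpark endurance for SLC

def cell_rules_of_thumb(bits_per_cell: int) -> dict:
    """Return rough relative characteristics for a NAND cell type."""
    return {
        "states": 2 ** bits_per_cell,                  # states double per added bit
        "relative_latency": 2 ** (bits_per_cell - 1),  # latency roughly doubles per added bit
        "relative_power": 2 ** (bits_per_cell - 1),    # as does power
        "pe_cycles": BASE_SLC_PE_CYCLES // (10 ** (bits_per_cell - 1)),  # endurance drops ~10x per bit
    }

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4), ("PLC", 5)]:
    print(name, cell_rules_of_thumb(bits))
```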

What does this all mean? When the lowest latency is required, SLC is a good choice. Then again, other proprietary technologies, such as Samsung Z-NAND or Intel Optane storage class memory -- a.k.a. 3D XPoint -- might be better choices, with lower latencies and higher endurance, albeit at significantly higher cost. When the highest capacity is the primary driver and read and write latencies merely need to be adequate, QLC is the best choice. However, TLC's higher reliability and write endurance might be a bigger factor than QLC's lower cost and higher capacities. Choosing an SSD is a matter of tradeoffs.

The caveat that has held back flash SSDs is that they're generally more expensive than HDDs when measured on a dollar-per-gigabyte basis. They're generally inexpensive when measured as dollars per IOPS or dollars per gigabyte per second of throughput. But capacity is the criterion, or tail, that wags the IT storage dog. Data generation and capture are on an exponentially rising curve, driven by IoT, 5G, AI and machine learning. Capacity is a major criterion.

1. Cost vs. capacity

The flash industry's answer to flash SSDs' cost and capacity issues has taken two non-mutually exclusive paths. The first is adding bits per memory cell. Each cell accepts a certain number of bits, and each bit is registered as a 1 or 0. Every bit added to the cell increases the number of states the cell can have exponentially, as 2^n, where "n" is the number of bits in the cell. SLC is 2^1, or 2, states; MLC is 2^2, or 4, states; TLC is 2^3, or 8, states; QLC is 2^4, or 16, states; and PLC is 2^5, or 32, states. The same wafer size produces twice the density of the previous bits-per-cell technology. That increases capacity, while reducing cost per gigabyte.

[Chart: NAND flash comparison. Multi-level flash differs greatly from single-level flash in terms of endurance.]

The second capacity increase path is the move from planar, or 2D, technology to 3D. The 3D technology enables NAND cells to be stacked in layers. It took some time for NAND manufacturers to master 3D layering; they now commonly deliver 3D NAND flash chips with 96 or more layers. The 3D technology greatly increases flash SSD capacities while, again, lowering the cost per gigabyte. It's important to note there are no 3D SLC chips yet.

More bits per cell and more layers per chip increase flash SSD capacities, while reducing cost per gigabyte. The good news is that flash SSD capacity density has improved very quickly. The bad news is that there is no free lunch: these technologies bring significant tradeoffs.

2. Tradeoffs in errors and performance

More bits per cell has a significant negative effect on errors, performance, endurance and reliability. Each additional bit per cell increases the time it takes to write to and read from a cell. It also requires more voltage to create and discern the states within the cell, because the additional states make it harder to make a positive value determination. In addition, higher temperatures cause yet more electron leakage in cells because of the higher sensitivity required to differentiate the states. The result is a narrower operating temperature range as the bits per cell increase. All of this leads to a much higher error rate, aka data corruption.

The net effect is that the flash controller must incorporate significantly more comprehensive error-correction technology with each additional bit. As error-correction requirements increase, so, too, does the time required to make the data corrections. This is a key reason why latency increases and IOPS decrease as the number of bits per cell rises. The 3D layering increases IOPS per flash chip but does nothing about the latency hit.

Flash SSD performance is affected by much more than just the flash NAND cells. DRAM or persistent memory cache in the drive has a huge effect on SSD performance, as does flash SSD overprovisioning.

Digging into NAND flash memory

NAND flash is a destructive memory technology. Every time a memory cell is going to be overwritten, it must first be erased. Flash NAND isn't a magnetic technology; erasing it requires that a layer of the memory substrate be destroyed. Reads don't cause any memory cell wear. Therefore, the rule of thumb for flash NAND is that writes are costly, but reads are free. Writes can take a long time if they must wait for the erasure to occur. Overprovisioning means there is a pool of new or previously erased cells waiting for new writes, so writes don't have to wait.

The flash SSD controller constantly monitors the program erase blocks. When data on a program erase block ages out because newer versions of that data have been written to other program erase blocks, the flash SSD controller does garbage collection. Garbage collection takes the aged-out blocks and erases them, putting them back in the pool of available program erase blocks.
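As a rough illustration of the erase-before-write and garbage-collection behavior described above, here's a highly simplified sketch of an overprovisioned block pool. The class and its structure are illustrative assumptions; real flash controllers are far more sophisticated.

```python
# Highly simplified model of an overprovisioned block pool with garbage
# collection. Names and structure are purely illustrative.

class SimpleFlashPool:
    def __init__(self, total_blocks: int, overprovision: int):
        self.erased = list(range(total_blocks))  # pre-erased blocks ready for writes
        self.live = {}                           # logical address -> physical block
        self.stale = []                          # blocks holding aged-out data
        self.overprovision = overprovision

    def write(self, logical_addr: int, data):
        # Writes never overwrite in place: they consume a pre-erased block.
        if not self.erased:
            self.garbage_collect()
        if not self.erased:
            raise RuntimeError("no free blocks available")
        block = self.erased.pop()
        old = self.live.get(logical_addr)
        if old is not None:
            self.stale.append(old)               # older copy is now garbage
        self.live[logical_addr] = block
        # ... program `data` into `block` here ...
        if len(self.erased) < self.overprovision:
            self.garbage_collect()               # keep the ready pool topped up

    def garbage_collect(self):
        # Erase stale blocks and return them to the ready pool.
        while self.stale:
            self.erased.append(self.stale.pop())  # erase is the slow, wearing step
```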

The flash SSD interface also has an effect on performance. NVMe is a much faster interface than SATA or SAS. The shared storage architecture and interface also come into play. Architectures with DRAM, storage class memory or SLC caching, or with storage tiering, will improve performance. The storage networking interconnect is another factor to consider; NVMe-oF on Ethernet, Fibre Channel, InfiniBand or TCP/IP will improve performance, too. It isn't just the flash SSD technology.

3. Tradeoffs in endurance and reliability

Each bit added to flash NAND cells reduces endurance by an order of magnitude or 10 times. Flash NAND cell endurance is measured as the number of writes before the cell wears out. SLC is rated at approximately 100,000 write cycles per cell. MLC is rated at approximately 10,000 write cycles. TLC is rated at approximately 1,000 write cycles. QLC is rated at approximately 100 write cycles. And PLC is estimated to be rated at approximately 10 write cycles.

Cell endurance and flash SSD endurance aren't the same thing. The two major factors that affect flash SSD endurance are the effectiveness of the flash controller's wear-leveling algorithm and the total amount of flash NAND in the SSD. Flash SSD-rated capacities don't reflect the total flash capacity within the SSD. There is a significant amount of flash NAND capacity overprovisioning within the SSD. That overprovisioning is used to replace worn-out memory cells. The amount of flash SSD overprovisioning varies. Generally, as the bits per cell increase, so does flash SSD overprovisioning. Greater capacities equal more cells to wear-level, which increases SSD endurance.
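One common back-of-the-envelope way to connect per-cell endurance, overprovisioning and controller behavior to drive-level endurance is to multiply raw NAND capacity by the rated program/erase cycles and divide by the write amplification the controller achieves. The sketch below uses that approximation with purely illustrative numbers; it is not a vendor formula.

```python
# Back-of-the-envelope estimate (an approximation, not a vendor formula):
# drive TBW ~= raw NAND capacity x rated P/E cycles per cell,
# divided by the write amplification the controller achieves.

def estimate_tbw(user_capacity_tb: float,
                 overprovision_pct: float,
                 pe_cycles: int,
                 write_amplification: float) -> float:
    raw_capacity_tb = user_capacity_tb * (1 + overprovision_pct / 100)
    return raw_capacity_tb * pe_cycles / write_amplification

# Illustrative only: 1 TB TLC drive, 10% overprovisioning, 1,000 P/E cycles,
# write amplification of 2 -> roughly 550 TBW.
print(estimate_tbw(1.0, 10, 1_000, 2.0))
```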

[Chart: NAND flash characteristics. SLC, MLC, TLC and QLC flash have different performance levels and price points.]

Flash SSD endurance is commonly rated in terabytes written (TBW). It's also rated as the number of drive writes per day (DWPD). DWPD translates into TBW, and vendors warranty their drives to a DWPD or TBW figure. In a nutshell, it's the amount of data that can be written to the flash SSD before it wears out. For example, a 1 TB 3D TLC drive rated and warrantied at 0.66 drive writes per day over a typical five-year warranty will have an approximate 1,200 TBW rating. That's a lot of writes over that timeframe.

But what if that 1 TB drive is 3D QLC? It will cost much less, but it's likely to have a much lower DWPD and TBW, at approximately 10% of that, or 120 TBW. That number can increase if the amount of overprovisioning also increases.
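The DWPD-to-TBW conversion depends on the warranty period; with the five-year term assumed above, the arithmetic looks like this:

```python
# DWPD <-> TBW conversion. The five-year warranty period is an assumption
# that makes the 1,200 TBW example above work out.

def dwpd_to_tbw(capacity_tb: float, dwpd: float, warranty_years: float = 5) -> float:
    return capacity_tb * dwpd * 365 * warranty_years

print(dwpd_to_tbw(1.0, 0.66))   # ~1,204 TBW for the 1 TB TLC example
print(dwpd_to_tbw(1.0, 0.066))  # ~120 TBW for the QLC drive, roughly 10% of that
```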

SLC, MLC, TLC, QLC and PLC: How to choose what's right for your needs

Choosing the right flash NAND storage drives comes down to the capacity, cost, performance, errors and endurance tradeoffs.

PLC flash SSDs will generally have the lowest cost per gigabyte but with serious endurance limitations. Performance will be better than any HDD and noticeably lower than any other type of flash SSD. They're obviously aimed at storing data that doesn't change much, if at all: archive data, cold data and even cool data. PLC flash SSDs are in the same class as several other write-once, read-many technologies.

If an application needs the best possible performance, SLC flash SSDs are the choice. However, the type of storage device using SLC matters. A storage array using SLC flash SSDs as cache or as its primary storage media isn't going to be significantly faster than a storage array using MLC or TLC SSDs. This is because of the latency bottlenecks in the path to the SSDs: the SATA or NVMe controller, the storage system controller, the storage network or NVMe-oF fabric between the application server and the storage system, and the application server's bus. The performance differences in this SLC vs. MLC scenario will be difficult to justify financially. In contrast, an SLC PCIe card in an application server, or a storage-network-attached, SLC-based data-caching appliance, demonstrates a huge performance advantage over equivalent MLC products. This SLC example is easier to justify.

Deciding which type of flash storage technology, as well as which device, to use requires weighing the tradeoffs. Is the extra performance necessary? Is capacity more important? What role does cost play in the decision? What about the software needed to use flash technology effectively? Does it consume more application CPU cycles? Note that NVMe flash cards trade their higher performance for higher CPU resource consumption of as much as 20%.

Here are some general rules of thumb regarding SSDs, but keep in mind these rules don't always apply. A rough selection sketch follows the list.

  • When application acceleration performance must be the absolute highest it can be -- with the lowest possible latency -- and price isn't a crucial consideration, NVMe SLC flash SSDs are a very good answer. They're not the only answer, as they were a few short years ago. 3D XPoint storage class memory drives are up to four times faster, at up to 10 times the cost. Data center persistent memory modules running in application direct mode are more than 10 times faster, at up to 10 times the cost.
  • When application acceleration is important, but cost is an issue, MLC, TLC and, to a lesser extent, QLC are viable alternatives.
  • When the requirement is to have a high-performance shared storage system, the storage system architecture is more important than just the flash SSD media. How data is cached or tiered, and whether the NVMe-oF interconnect runs on Ethernet, Fibre Channel, InfiniBand or TCP/IP, is the bigger issue. SLC, MLC, TLC, QLC and PLC are all viable. Some storage systems on the market, such as those from Vast Data, have proven this with QLC.
  • When cost per gigabyte is paramount, performance is important but secondary, and write endurance isn't a big factor (because of caching and write coalescing), MLC, TLC and, potentially, QLC might be the right choice.
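As a rough way to encode these rules of thumb, and nothing more, a selection helper might look like the following sketch. The categories and conditions are illustrative assumptions, not purchasing guidance.

```python
# Rough encoding of the rules of thumb above; conditions and categories are
# illustrative assumptions, not purchasing guidance.

def suggest_flash_type(lowest_latency_required: bool,
                       cost_sensitive: bool,
                       write_heavy: bool,
                       capacity_is_paramount: bool) -> str:
    if lowest_latency_required and not cost_sensitive:
        return "NVMe SLC (or storage class memory / persistent memory)"
    if capacity_is_paramount and not write_heavy:
        return "QLC (or PLC for archive/cold data, once it ships)"
    if write_heavy:
        return "MLC or TLC"
    return "TLC or QLC, depending on budget and endurance needs"

print(suggest_flash_type(False, True, False, True))  # e.g. a QLC recommendation
```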
