NVMe flash hot; Optane, SCM still warming up
The world of fast flash storage evolves slowly; NVMe has emerged, but other types of SCM and persistent memory still have a ways to go before appearing in the data center.
NVMe flash has made it onto the mainstream enterprise storage technology list, while the storage class memory flavors considered successors to NAND flash remain in the emerging category.
That was the consensus at the Flash Memory Summit 2019 last week. FMS is where storage vendors show off their current products, preview prototypes for those coming soon and share long-range roadmaps for other technologies that may never see the light of day.
The NVMe interface for NAND flash was an emerging technology for years at the annual FMS, but it is now common in flash storage arrays from leading vendors and startups.
Quad-level cell (QLC) NAND is moving into the enterprise, bringing the price down from the current leading flavor, triple-level cell (TLC) NAND. QLC should keep NAND flash in the mainstream for years, despite predictions that storage class memory (SCM) and persistent memory technologies will eventually replace NAND. SCM and persistent memory bridge the price and performance gap between NAND and the faster, more expensive DRAM.
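The density step from TLC to QLC is simple bit arithmetic. A back-of-the-envelope sketch -- the bits-per-cell figures are standard NAND definitions; everything else is illustrative:

```python
# Bits stored per NAND cell for each flavor; QLC packs 4 bits where TLC packs 3.
BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def density_gain(new: str, old: str) -> float:
    """Relative capacity gain from the same number of NAND cells."""
    return BITS_PER_CELL[new] / BITS_PER_CELL[old] - 1.0

# QLC stores one-third more data per cell than TLC, which is what pushes
# cost per gigabyte down (at the expense of endurance and write speed).
print(f"QLC vs TLC: {density_gain('QLC', 'TLC'):.0%} more bits per cell")
```

The same arithmetic explains why each new cell level yields diminishing returns: SLC to MLC doubled density, while TLC to QLC adds only a third.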
"The future of DRAM is DRAM. The future of NAND is NAND. Those aren't going anywhere," said Dave Eggleston, principal of Intuitive Cognition Consulting. "So the question really is, how do you get this tier in between and make good use of it?"
NVMe flash arrays now common
Dell EMC, IBM, NetApp, Pure Storage and Hewlett Packard Enterprise all sell NVMe flash arrays, as do startups Pavilion Data Systems, Apeiron Data Systems, Excelero and Exten Technologies. AWS last month acquired another NVMe flash startup, E8 Storage.
Drives using the NVMe protocol reduce latency and increase throughput over SAS/SATA drives. NVMe over Fabrics (NVMe-oF) connections for moving data over a network are far from mainstream, however. NVMe-oF supports Fibre Channel, RDMA and TCP/IP protocols, but storage vendors have been slow to add NVMe-oF support on the front end of arrays.
Ken Clipperton, lead analyst at DCIG, said "in the past year, there have been real advances in front-end connectivity -- I'm talking about speeds."
But the real advances have been in NVMe drives on the back end.
"The NVMe protocol is much lighter weight than traditional protocols," Clipperton said. "NVMe drives on the back end are more efficient; they use fewer CPU cycles in the all-flash array to deliver a given level of performance."
Intel Optane yet to bloom
Intel Optane -- based on 3D XPoint technology -- is the most immediate candidate to supplant NAND in flash storage systems but is still emerging. Intel started selling Optane solid-state drives in 2017 and added Optane DC persistent memory DIMM cards in early 2019, but uptake has been slow. Dell EMC PowerMax, Hewlett Packard Enterprise 3PAR and startup Vast Data Universal Storage arrays use Optane either as a storage tier or cache, but most system vendors are in no hurry to adopt it. Few applications have been written to take advantage of the technology.
Intel still loses money on Optane -- its memory group lost $284 million last quarter and has lost money in 12 of the last 16 quarters. Micron, its partner in developing 3D XPoint memory, has yet to bring out a product with the technology. Micron has given few details on its 3D XPoint plans, except to say it would ship products in 2019. Micron gave no 3D XPoint updates at FMS.
Optane adoption faces economic and technology barriers. Jim Handy, chief analyst at Objective Analysis, said Intel has a "chicken and egg problem" with Optane: It must keep the price below that of the faster DRAM to make it appealing, but it can't make a profit at that price until it hits mass adoption.
"So far, Intel has been subsidizing 3D XPoint," Handy said. "While everyone else has been making money on NAND and DRAM, Intel's memory group has been losing considerable sums."
Handy predicts Intel will continue to sell Optane. He said it will eventually get the volumes it needs for a profit, and it also makes more money on the processors required to run Optane.
"Optane processors are now $15,000 processors instead of $10,000 processors," he said. "It eventually will be a profitable business. Intel will get those losses back."
Handy is less certain about Micron's 3D XPoint future.
"I suspect Micron is waiting for 3D XPoint to be profitable before they make anything," he said. "It might not be profitable in 2020. The fact that they said they would ship something in 2019 doesn't mean anything, because they previously said they would ship something in 2017."
Getting the full benefit of Optane requires software vendors to modify their applications, which hasn't yet happened, for the most part.
"Intel has done a great job working with the software developers and creating models so the applications can become persistent-memory aware," Eggleston said. "Unfortunately, not many of the applications have been rewritten yet. I think where Intel struggled a little bit is, it was well understood when 3D XPoint came out that it was not particularly great in SSDs, but they started there because that was the fastest gain. But the value proposition was in memory. And that still exists."
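Making an application persistent-memory aware typically means mapping the persistent media directly into the address space and flushing stores explicitly, rather than going through a block I/O stack (Intel's PMDK libraries wrap this pattern for C programs). A minimal Python sketch of that load/store-plus-flush idiom, using an ordinary memory-mapped file as a stand-in for a real persistent-memory device:

```python
import mmap
import os

PATH = "demo.pmem"  # stand-in file; real PM-aware code would map a DAX device
SIZE = 4096

# Create and size the backing file, then map it into the address space.
fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)
buf = mmap.mmap(fd, SIZE)

# Update state with ordinary stores -- no read()/write() syscall per access.
buf[0:5] = b"hello"

# The store is not durable until explicitly flushed. On real persistent
# memory this is a cache-line flush instruction; here it is msync() via
# mmap.flush().
buf.flush()

buf.close()
os.close(fd)
```

The rewrite burden Eggleston describes comes from that last step: applications must know exactly which stores need flushing, and in what order, to keep their data structures consistent across a power loss.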
MRAM makes inroads
Magnetoresistive RAM (MRAM) is another SCM technology starting to gain enterprise adoption, although it is still early. IBM embeds MRAM chips from Everspin into its FlashCore media that runs in IBM FlashSystem arrays. Other MRAM vendors include Spin Memory, Crocus Technology, Applied Materials and Avalanche Technology.
"There are some neat things happening with MRAM," Clipperton said. "Manufacturers are now producing MRAM in meaningful quantities. It's not replacing flash memory at this point. In most cases, it's used to replace DRAM on SSDs. So instead of having DRAM and capacitors, IBM is using MRAM as a replacement for DRAM on SSD, so the whole thing is persistent. This is also going to be showing up even on network interface cards for persistent memory. That's slightly forward-looking."
"I'm particularly impressed with MRAM right now," said Rob Peglar, president of consultancy Advanced Computation and Storage LLC. "Not to discredit any other technologies, but MRAM seems to be moving ahead very quickly."
Emerging use cases
FMS 2019 highlighted use cases that bring flash beyond traditional enterprise storage. Flash is playing a big role in AI applications. All-flash unstructured data vendors Dell EMC, NetApp, Pure Storage, IBM, DataDirect Networks, Excelero and Vast Data have flash arrays designed to provide the latency, throughput and parallel access required for AI applications.
Computational storage is also advancing and can help bring flash into edge devices. Computational storage uses multi-core processors inside SSDs to offload CPU-intensive functions from the storage controller. NGD Systems is out front with a system-on-a-chip design that embeds an ARM processor into an SSD. ScaleFlux, Eideticom and Pliops also have computational storage products.
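The payoff of in-drive processing is that only results, not raw data, cross the host interface. A toy model of the idea, with a hypothetical drive class (all names here are illustrative, not any vendor's API):

```python
class ComputationalSSD:
    """Hypothetical drive with an embedded processor that can run a predicate."""

    def __init__(self, records):
        self.records = records          # data resident on flash
        self.bytes_transferred = 0      # traffic over the host interface

    def read_all(self):
        """Conventional path: ship every record to the host for processing."""
        self.bytes_transferred += sum(len(r) for r in self.records)
        return list(self.records)

    def query(self, predicate):
        """Computational path: filter in-drive, transfer only the matches."""
        hits = [r for r in self.records if predicate(r)]
        self.bytes_transferred += sum(len(r) for r in hits)
        return hits


# Scanning logs for errors: the in-drive filter moves only matching records.
drive = ComputationalSSD([b"error: disk", b"ok", b"error: net", b"ok"] * 1000)
hits = drive.query(lambda r: r.startswith(b"error"))
print(len(hits), "matches,", drive.bytes_transferred, "bytes over the bus")
```

For a bandwidth- and power-constrained edge site, that difference in bus traffic -- and in host CPU cycles spent scanning -- is the whole argument.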
These products can bring processing power and storage to edge devices.
"The leading edge is on the edge," Peglar said. "You think of a wind turbine or a cell tower, to use examples of the edge. You're not going to put 12 19-inch racks there. And the power distribution is very low; you don't have a lot of power. But if you're out there on the edge of a cell tower and you're trying to figure out the diagnostics on 170,000 5G connections, you don't have a lot of time to do it. So you must have really fast compute engines."