Ever since Intel and Micron Technology announced their 3D XPoint non-volatile memory in July 2015, IT pros have been talking about the price/performance implications. 3D XPoint's initial market reception was lukewarm at best.
But enthusiasm among OEM system vendors has grown as Intel and Micron have introduced 3D XPoint SSDs, known as storage class memory (SCM), and 3D XPoint DIMMs, known as Data Center Persistent Memory Modules (DCPMMs) or PMEM for short, along with actual benchmarks. User interest has lagged, however, even though the performance metrics are impressive.
SCM was the first implementation of 3D XPoint. SCM benchmarks commonly show much lower latencies and about two-and-a-half to three times better IOPS than the fastest NVMe flash SSDs, according to the ServeTheHome website. Large data transfer benchmarks have shown as much as 10 times improvement, according to AnandTech.
Perhaps the most important SCM performance advantage is its small write latency, something that's quite difficult for NAND flash to achieve. SCM is found primarily in high-performance storage arrays, although it's also available for servers.
The price/performance conundrum
For many IT professionals, the issue is price/performance. SCM typically costs four to five times more than the fastest NVMe flash SSDs of the same capacity. That makes it difficult to justify using it simply on price/performance. It comes down to the application in question and the "above-the-line" return on that performance improvement.
Above-the-line return is a new concept for many IT pros. It's revenue that comes from faster response times and earlier times to market -- revenue that wouldn't be there without that performance improvement. An excellent example is high-frequency trading where a 10-times reduction in latency can return millions in revenue. For other businesses, improved latencies and response times can result in faster times to market and greater market shares -- revenue and profits that wouldn't be possible at a later time to market.
So, which applications benefit the most from SCM? Those that require lower write and read latencies and large data transfers. Databases and various AI technologies, including machine learning, deep machine learning and neural networks, all benefit the most from reduced write and read latencies. Data warehouses, big data analytics and high-performance computing benefit the most from faster large data transfers.
How this all works
What about the newer DCPMM or PMEM? Understanding how to effectively use this technology requires some background.
As of this writing, PMEM is available only from Intel. It's 3D XPoint in a DIMM form factor based on the DDR4 standard, with capacities of 128 GB to 512 GB per DCPMM and a maximum of six DCPMMs per CPU socket. Each DCPMM is paired with a DRAM DIMM. DCPMM support requires second-generation Intel Xeon Scalable (Cascade Lake) processors or newer.
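Those module limits set a hard ceiling on PMEM capacity per socket, which is easy to work out. A quick sketch, using the capacities quoted above (the two-socket server is an illustrative assumption, not a figure from this article):

```python
# Per-socket PMEM ceiling implied by the figures above:
# up to six DCPMMs per socket, at up to 512 GB per module.
MAX_MODULE_GB = 512
MODULES_PER_SOCKET = 6

per_socket_gb = MAX_MODULE_GB * MODULES_PER_SOCKET  # 3,072 GB, i.e. 3 TB
two_socket_gb = per_socket_gb * 2  # 6,144 GB in an assumed two-socket server

print(per_socket_gb, two_socket_gb)  # 3072 6144
```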
DCPMMs cost about half as much as DRAM on a per-byte basis but are approximately 10 times slower. Both are byte-addressable. The key difference is data persistence: PMEM can retain its data even when power is lost. But wait: since it's based on non-volatile 3D XPoint technology, shouldn't the data always be persistent? The answer is yes and no.
PMEM has two modes: Memory Mode and Application Direct Mode (AppDirect). Memory Mode is how persistent memory is most often used, making it appear as if it were DRAM. In this mode, the system sees a larger DRAM allocation: the DCPMMs serve as the main memory store, with the DRAM DIMMs acting as a fast buffer for frequently needed data. In terms of performance, this is a better option than using DRAM to buffer NVMe NAND flash SSDs. The best part of Memory Mode is that it requires no changes to the application or file system to use PMEM. It's literally plug and play. However, the data isn't considered persistent. To be persistent requires AppDirect.
The persistent advantage
AppDirect makes a DCPMM look, feel and act like a RAM disk. Data persists even when the power is shut off, enabling fast restarts and minimizing downtime and data loss, although a DCPMM isn't by itself bootable. AppDirect is appealing for relational databases because it simplifies meeting their atomicity, consistency, isolation and durability (ACID) requirements when running in-memory. The much greater memory sizes also make running just about any database in-memory easier. But unlike Memory Mode, AppDirect isn't plug and play. It requires modifying the application and possibly the file system, which is not a trivial task and is potentially a huge one.
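To make the RAM disk analogy concrete, the access pattern AppDirect enables looks roughly like memory-mapping a region and updating it with ordinary loads and stores. The sketch below is a stand-in under stated assumptions: it uses a regular temporary file instead of a real DAX-mounted PMEM device (which would live at a path such as /mnt/pmem), and Python's mmap instead of a persistent-memory library such as Intel's PMDK, but the write, flush, reopen pattern is the same idea.

```python
import mmap
import os
import tempfile

# Stand-in for a file on a DAX-mounted PMEM filesystem (e.g. /mnt/pmem);
# an ordinary temp file is used here so the sketch runs anywhere.
fd, path = tempfile.mkstemp()
os.ftruncate(fd, 4096)  # size the "persistent" region

# Map the region and update it with ordinary byte-level stores --
# the access pattern AppDirect exposes to an application.
with mmap.mmap(fd, 4096) as region:
    region[0:5] = b"hello"
    region.flush()  # on real PMEM, flushing makes the stores durable
os.close(fd)

# Simulate a restart: reopen the region and the data is still there.
with open(path, "rb") as f:
    recovered = f.read(5)
os.remove(path)

print(recovered)  # b'hello'
```

On real hardware, the application would map the PMEM region directly and flush CPU caches at the right points, which is exactly the kind of modification that makes AppDirect adoption non-trivial.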
One well-known AppDirect implementation is the Oracle Database with Exadata X8M. Oracle used AppDirect with remote direct memory access over converged Ethernet to pool all the DCPMMs in the storage servers, so they appear as a single persistent memory pool of up to 27 TB per rack, for all of the database servers. The results are astonishing, delivering 19 µs or less latency and 16 million 8K SQL read IOPS. Oracle did this for the same price as its nonpersistent memory Exadata X8, which has 250 µs latency and 6.57 million 8K SQL read IOPS. That's more than 10 times lower latencies and approximately 2.5 times more IOPS for the same price.
Which brings us back to the original question: Are SCM and PMEM worth the price? The answer is: "It depends." It depends on whether lower latencies, faster response times and faster large data throughput will significantly improve productivity, time to market and revenue for your business. It also ultimately depends on the price/performance.