

How the CXL interconnect will affect enterprise storage

The Compute Express Link standard will shake up the enterprise storage market in several significant ways. Here's what admins need to know.

Compute Express Link (CXL) is an open industry-standard memory interconnect. It specifies how to deliver high-performance connections between memory and CPUs, GPUs, TPUs and other processors.

Architected for optimal speed, latency and shared resource coherency, CXL will have an effect on future data storage architectures. While that may take some time, admins can take steps now to better understand the CXL interconnect and how it works.

Some background on CXL

Vendors including Intel, AMD and Nvidia support CXL.

The technology became a trending topic when memory and storage vendor Micron decided to shift away from 3D XPoint -- a memory storage technology the company jointly developed with Intel -- to focus instead on CXL for its DRAM and NVDIMM business. Micron's stated rationale, per Sumit Sadana, the company's chief business officer, is that the bigger opportunity lies in the higher memory performance, larger capacities and greater bandwidth that CXL enables.

CXL uses PCIe physical and electrical interfaces. It improves performance over PCIe with three transactional protocols:

  1. CXL.io. The main CXL protocol, closely based on PCIe 5.0, handles virtualization, configuration, device discovery, interrupts, register access and bulk direct memory access.
  2. CXL.cache. An optional protocol that enables accelerators to coherently cache host system memory.
  3. CXL.memory. An optional protocol that grants host processors direct access to accelerator-attached memory. The CPU, GPU or TPU sees that accelerator-attached memory as an additional address space, which increases efficiency and reduces latency.
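The CXL.memory idea above can be illustrated with a toy sketch. This is purely conceptual -- CXL is a hardware protocol, not a software API, and every class and method name below is invented for illustration. The point it shows: accelerator-attached memory simply appears as another range in the host's flat address space.

```python
# Toy illustration of CXL.memory's unified address space.
# All names are invented; real CXL operates at the hardware level.

class MemoryRegion:
    """A range of physical addresses backed by some device's memory."""
    def __init__(self, name, base, size):
        self.name, self.base, self.size = name, base, size
        self.data = {}

    def contains(self, addr):
        return self.base <= addr < self.base + self.size


class Host:
    """Host CPU that routes loads and stores across all attached regions."""
    def __init__(self):
        self.regions = []

    def attach(self, region):
        # With CXL.memory, accelerator-attached memory is mapped into
        # the host's physical address space like any other region.
        self.regions.append(region)

    def store(self, addr, value):
        self._find(addr).data[addr] = value

    def load(self, addr):
        return self._find(addr).data.get(addr)

    def _find(self, addr):
        for r in self.regions:
            if r.contains(addr):
                return r
        raise ValueError(f"unmapped address {addr:#x}")


host = Host()
host.attach(MemoryRegion("local DRAM", base=0x0000, size=0x1000))
host.attach(MemoryRegion("GPU-attached memory", base=0x1000, size=0x1000))

# The host addresses both regions uniformly -- no separate driver
# path for the accelerator's memory.
host.store(0x0040, "in DRAM")
host.store(0x1040, "in GPU memory")
print(host.load(0x1040))  # -> "in GPU memory"
```

In real hardware, the routing done by `_find` happens in the memory controller, and coherency between the host's caches and the device's memory is handled by the protocol itself.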

The important problem CXL solves is eliminating proprietary memory interconnects. Without CXL, every CPU, GPU and TPU connects to memory over a proprietary link. CXL is the industry's agreed-upon open standard, which enables different processors to share pools of memory -- a capability that is especially important for AI, machine learning and deep learning systems that commonly combine CPUs, GPUs and TPUs.

The CXL interconnect is to memory what NVMe is to PCIe flash SSDs. Before the open industry-standard NVMe, every vendor's PCIe flash SSD required a proprietary driver that did not work with other vendors' drives. Similarly, Intel's Optane DC Persistent Memory (PMem) can only be used today with Intel CPUs, because the interconnect between the CPU and PMem is proprietary. Intel has stated, however, that it intends to support CXL for PMem in the future.

CXL's effects on storage

It's obvious that servers will significantly benefit from CXL. But will storage?

As with any technology, the answer depends on the storage software. CXL will enable storage systems to take advantage of much larger memory pools for caching, so any storage software or system that uses memory as a cache will benefit. As of right now, the largest DRAM cache in a commercial storage system is 3 TB. Given that multi-petabyte storage systems are now common, 3 TB is not much cache.
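Some back-of-the-envelope arithmetic makes the point. The 3 TB figure is from the text; the 4 PB system size is an assumed example of a "multi-petabyte" system:

```python
# Rough arithmetic: what fraction of a storage system's capacity
# can today's largest DRAM cache actually cover?

TB = 1
PB = 1000 * TB

cache = 3 * TB        # largest DRAM cache cited above
capacity = 4 * PB     # assumed multi-petabyte system (illustrative)

coverage = cache / capacity
print(f"Cache covers {coverage:.4%} of capacity")   # -> 0.0750%

# A CXL-pooled cache an order of magnitude larger:
print(f"10x cache would cover {10 * coverage:.4%}")  # -> 0.7500%
```

Even a tenfold larger cache covers well under 1% of such a system, which is why the working-set hit rate -- not raw coverage -- is what larger CXL memory pools are expected to improve.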


Some software-defined storage can use Intel Optane PMem -- via MemVerge or Formulus Black -- to extend that cache to 4.5 TB, but that is the current ceiling. Bigger memory pools translate into a higher percentage of read cache hits and better media utilization through write caching and write coalescing. The software-defined storage in hyper-converged infrastructure will also benefit from CXL memory pools.

But storage will likely benefit most from CXL through Intel Optane PMem. That's correct -- the same 3D XPoint technology Micron is abandoning to focus on CXL memory. CXL-enabled PMem will be able to work with CPUs other than Intel's, expanding the market, and will enable much bigger pools of this non-volatile memory.

As of the second quarter of 2021, the only storage system to use PMem is Oracle Exadata, whose architecture makes the PMem in its storage servers available to all database servers within the Exadata system. But each storage server is limited to 1.5 TB of PMem, or up to 27 TB per rack. CXL-enabled PMem can potentially expand that by an order of magnitude or more.
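The rack-level math, sketched below. The 1.5 TB and 27 TB figures are from the text; the server count is simply derived from them, and the tenfold multiplier is the article's "order of magnitude" scenario, not a product specification:

```python
# Exadata PMem capacity arithmetic from the figures cited above.
pmem_per_server_tb = 1.5
rack_total_tb = 27

servers_per_rack = rack_total_tb / pmem_per_server_tb
print(f"{servers_per_rack:.0f} storage servers per rack")  # -> 18

# If CXL-enabled PMem expands per-server capacity by an order of
# magnitude, the rack-level pool scales with it (hypothetical):
cxl_rack_tb = rack_total_tb * 10
print(f"Potential CXL-era rack pool: {cxl_rack_tb} TB")  # -> 270 TB
```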

The CXL interconnect is going to empower clever new storage architectures. But don't expect them in the near term. The latest CXL 2.0 specification takes advantage of PCIe 5.0 to radically improve memory bandwidth, and that is the catch: servers and storage systems based on PCIe 4.0 only became available in the second quarter of 2021. PCIe 5.0 storage systems are unlikely to ship in any quantity before the end of 2021 in a best-case scenario -- more likely, in 2022.

Still, there's significant potential for massive memory -- both volatile and non-volatile -- in servers and storage systems down the road. We just need to stay tuned.
