
MemVerge, XConn, Samsung, H3 unite to show benefits of CXL

Four vendors combine technologies to demonstrate CXL memory pooling at the Flash Memory Summit and tout its benefits for use cases such as generative AI.

A handful of storage and memory companies demonstrated the benefits of Compute Express Link (CXL) without requiring the adoption of new servers or CPUs, enabling testing of use cases such as generative AI workloads.

At this week's Flash Memory Summit in Santa Clara, Calif., MemVerge, XConn Technologies, Samsung and H3 Platform collectively demonstrated the memory pooling capabilities of CXL. The interconnect protocol is designed to increase speed and efficiency between CPUs, GPUs and devices such as DRAM, while also enabling memory pooling that pushes past the memory limitations of individual CPUs.

CXL is game-changing, but there is currently little appetite for it, according to Marc Staimer, president of Dragon Slayer Consulting, an IT analyst firm based in Beaverton, Ore. While there is vendor support for the specification and a handful of products, CXL has not yet made a commercial impact on the enterprise. The lack of appetite is tied less to the technology and more to the slow adoption of new servers with the latest generation of CPUs, he said. The latest Intel and AMD CPUs support CXL 2.0, but the hardware within the servers doesn't necessarily support it. New CPU and server adoption also takes time.

At the Flash Memory Summit, Samsung, MemVerge, H3 and XConn banded together to illustrate the benefits of memory pooling.

"It's a demonstration -- a testing of the waters to show what is possible," Staimer said.

Early days for CXL

The initial CXL specification debuted in 2019, but hardware support has been a roadblock to broader adoption. Current-generation server support for CXL remains incomplete, according to Charles Fan, founder and CEO of MemVerge, which was founded in 2017. There are three CXL device types with different capabilities: Type 1 accelerators that cache host memory, Type 2 accelerators with their own memory such as GPUs, and Type 3 memory expansion devices. Intel supports Type 2 devices, and AMD supports Type 3. However, the next generation of CPUs will offer more complete support.

"[The complete support next year] will give CXL a push," Fan said. "I think the first quarter of 2024 will see the actual CXL hardware become available … making 2024 the first year of CXL revenue and 2025 the first year of memory appliances."

CXL is still in its early stages, Staimer said, but the promise of lower latency and larger memory pools is getting closer. The vendor demonstration at the summit could help push system planning and development forward.

"This will be a big factor in both servers and storage, probably in a year from now," Staimer said.

Current-generation servers support the CXL 2.0 specification but lack the hardware and software necessary for the memory pooling, expansion and cache coherency showcased at the Flash Memory Summit.

CXL technologies combined

The demonstration consisted of a 2U rackmount H3 system, Samsung's 256 GB CXL memory modules, XConn's XC50256 CXL switch and MemVerge's Memory Machine X software.

Each of the four companies focuses on a different part of the stack to deliver CXL's benefits today. Established in 2014 and headquartered in Taipei City, Taiwan, H3 develops hardware and software for PCIe-related technology, the interface standard on which CXL is built.

Samsung was the first company to release a CXL memory module, a device with roughly the same form factor as an SSD but with DRAM inside instead of NAND, targeting lower-latency, higher-bandwidth use cases.

The CXL memory module allows for expansion beyond what the dual in-line memory module (DIMM) slots surrounding server CPUs can support, according to Ken Clipperton, a storage analyst at Data Center Intelligence Group. While not inexpensive, it can lower the total cost of memory at higher capacities.

"Part of the opportunity is around the idea that there are large-capacity, individual memory modules at a lower cost per gigabit than [double data rate] memories," Clipperton said.

Founded in 2020 and headquartered in San Jose, Calif., XConn makes switches that let Intel and AMD processors talk to CXL devices, a key component for memory pooling and sharing, according to experts.

Tying it all together is MemVerge's software, which decouples memory from compute, allowing memory to be added as needed.

"[We can] essentially liberate the memory from the compute, allowing you to scale memory independently," Fan said.

Before CXL 2.0 and MemVerge's software, this disaggregation wasn't possible, he said.
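
To picture what that decoupling looks like from the operating system's perspective: on recent Linux kernels, a CXL Type 3 memory expander typically appears as a CPU-less NUMA node, and software can steer allocations onto it explicitly. The following is a minimal sketch of that idea using libnuma, not MemVerge's implementation; the node ID is a placeholder assumption that would come from inspecting the machine with numactl --hardware.

```c
/*
 * Minimal sketch: placing an allocation on CXL-attached memory in Linux.
 * Assumes the CXL Type 3 expander is exposed as a CPU-less NUMA node;
 * node 1 is a placeholder ID. Build with: gcc cxl_alloc.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CXL_NODE   1              /* hypothetical NUMA node backed by CXL */
#define ALLOC_SIZE (1UL << 30)    /* 1 GiB */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA support is not available\n");
        return EXIT_FAILURE;
    }

    /* Bind this allocation to the CXL-backed node instead of local DRAM. */
    void *buf = numa_alloc_onnode(ALLOC_SIZE, CXL_NODE);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return EXIT_FAILURE;
    }

    /* Touch the pages so they are actually faulted in on the far node. */
    memset(buf, 0, ALLOC_SIZE);
    printf("1 GiB placed on NUMA node %d\n", CXL_NODE);

    numa_free(buf, ALLOC_SIZE);
    return EXIT_SUCCESS;
}
```

In practice, tiering software such as Memory Machine X is designed to automate this kind of placement, shuffling hot and cold pages between local DRAM and the CXL pool rather than pinning a single allocation to a single node.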

The CXL/AI crossroads

At the summit, MemVerge highlighted AI workloads as a specific use case for the expanded, disaggregated memory pool created through the combined technologies. Any use case where the database is too large to fit in the memory a CPU can typically attach directly, including generative AI, would benefit from expanded memory pools such as this, Clipperton said.


"[Processing larger databases in larger pools provides] memory speed and memory semantics, instead of disk speeds and disk semantics," Clipperton said.

The benefit of CXL for AI workloads depends on the model, according to Ray Lucchesi, president of Silverton Consulting, since AI workloads generally prioritize computation over memory.

"It's really more for large in-memory database solutions," Lucchesi said.

Memory is expensive and will remain so despite CXL. Expanded memory will be reserved for specific use cases such as high-frequency trading applications or the databases that feed generative AI models, Staimer said.

"The biggest impact will be on the databases that the generative AI is utilizing, especially structured but also unstructured databases," Staimer said.

These databases will benefit from CXL, with GPUs using the information from the databases for subsequent training, he said.

In terms of generative AI, MemVerge's Fan identified two areas of advantage: CXL could be used to create larger memory pools for staging data before training, and it could permit multiple hosts to share those pools. Today, memory is designed for a single host, but memory sharing would allow much quicker data transfer between hosts, he added.
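
To make the staging idea concrete: when a pooled CXL region is surfaced to a host as a device-DAX node, an application can map it directly and fill it with training data at memory speed. Below is a minimal sketch under that assumption; the /dev/dax0.0 path is hypothetical, and the demo's actual software stack wasn't described at this level.

```c
/*
 * Minimal sketch: staging data in a CXL memory region exposed as a
 * device-DAX character device. The path /dev/dax0.0 is a hypothetical
 * example; real names depend on how the region was configured.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define STAGE_SIZE (1UL << 30)    /* 1 GiB staging window */

int main(void)
{
    int fd = open("/dev/dax0.0", O_RDWR);    /* hypothetical CXL DAX device */
    if (fd < 0) {
        perror("open /dev/dax0.0");
        return EXIT_FAILURE;
    }

    /* Map the region; loads and stores now go straight to the far memory. */
    void *stage = mmap(NULL, STAGE_SIZE, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
    if (stage == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return EXIT_FAILURE;
    }

    /* A training pipeline would preprocess batches into this window;
       with CXL memory sharing, another host attached to the same pool
       could map and read the same region without a network copy. */
    memcpy(stage, "staged batch", sizeof("staged batch"));

    munmap(stage, STAGE_SIZE);
    close(fd);
    return EXIT_SUCCESS;
}
```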

Echoing this point, XConn CEO Gerry Fan said that constructing extensive memory systems for each generative AI use case isn't feasible.

"[Due to the high cost of memory,] it is nearly impossible for people to build a standalone system with petabytes of memory, when most of the time that memory is probably doing nothing," XConn's Fan said.

CXL's memory pooling and sharing could address this challenge, he said.

Adam Armstrong is a TechTarget Editorial news writer covering file and block storage hardware, and private clouds. He previously worked at StorageReview.com.
