
The promise of CXL still hangs in the balance

With AI pushing at the limits of hyperscaler environments, CXL offers a way to alleviate memory bottlenecks. But it still needs a killer app if it's going to gain ground.

The shift toward the AI data center and the promise of more Compute Express Link (CXL) products coming to market could make 2024 the year enterprises increasingly embrace the standard.

CXL, which emerged in 2019 as a standard interconnect between processors, accelerators and memory, has promised higher speeds, lower latency and cache coherence in the data center. But like fusion power or self-driving cars, CXL has seemed to be a technology perpetually on the horizon.

Until now, data centers have functioned in the x86 era, according to Charles Fan, CEO of MemVerge, referring to the ubiquity of the server architecture for CPUs in the enterprise. These servers operate as one of three pillars -- compute -- with the other two being networking and storage.

"AI is changing this," Fan said. "While the three [pillars] will continue to exist … a new center of gravity has emerged where AI workloads take place."

Vendors are beginning to turn to GPUs for compute, and to new protocols such as NVLink for networking. But as newer storage, compute and networking technologies enter the IT stack, they'll need a way to interconnect. CXL could potentially be this fabric, Fan said.

There's already interest in the potential of CXL, according to Jim Handy, general director and semiconductor analyst at Objective Analysis.

"There already is some adoption in hyperscale data centers," he said. "Those guys are quite capable of clicking over relatively suddenly to a new architecture."

AI pushes everything

With AI workloads, GPUs do the heavy lifting in compute for both training and inferencing, Fan said. The memory for these processors is primarily high-bandwidth memory that sits on the GPUs. Storage will continue to exist, but there will be a new memory-centric connection with a new AI fabric.

"For Nvidia systems, [this fabric] is NVLink, a point-to-point link interconnecting the different processors. [Alternatively,] there are emerging open system standards such as Ultra Ethernet and CXL," Fan said.

Using the CXL standard as an AI fabric makes sense, given the focus on GPUs, Handy said. CXL runs on top of the current generation of the Peripheral Component Interconnect Express protocol -- PCIe 5.0 -- which is used to connect the CPU to GPUs, storage and network interface cards.

"A GPU card with memory on it, [CXL allows] the memory to be accessed directly by the server, rather than having it go through a PCIe port," he said.

CXL allows large amounts of memory to be pooled, avoiding stranded memory -- capacity overprovisioned in each server so applications can get what they need, with the remainder sitting unused. Given the interest in generative AI, especially among hyperscalers, finding ways to use memory more efficiently is timely.

"[Hyperscalers will be able to] allocate some of that memory to whichever server happens to be running the big memory application," Handy said.

Overall, AI won't be a catalyst that propels wider adoption of CXL in the near term, according to Marc Staimer, president of Dragon Slayer Consulting. But generative AI will likely cause the biggest uptick in interest.

"Sharing memory with the CPU and GPUs makes a lot of sense [with generative AI]," Staimer said.

Growing market potential

While generative AI might cause an uptick in CXL interest, the revenue potential might create some interest as well, according to Thibault Grossi, senior technology and market analyst at Yole Group.


"In an overview of the forecast, [the CXL market] was a bit less than $2 million in 2022, and we project the market to reach almost $16 billion by 2028," Grossi said at the Memory Fabric Forum 2024.

Grossi said the limited revenue in 2022 reflected products still in the prototype phase, and that growth will come from expanding CXL 2.0 and 3.0 use cases.
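
Taking the quoted figures at face value, a quick back-of-the-envelope calculation shows how steep that trajectory is: going from roughly $2 million to $16 billion in six years implies multiplying revenue by about 4.5 times every year.

    # Implied compound annual growth rate from the quoted Yole figures.
    start, end, years = 2e6, 16e9, 6  # 2022 revenue, 2028 forecast, span
    cagr = (end / start) ** (1 / years) - 1
    print(f"{cagr:.0%} per year")  # about 347% per year, roughly 4.5x annually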

With the new CPUs that came out in January 2023, CXL 2.0 became available to vendors, but it will take time for products to reach users, Staimer said.

"It usually takes a year from the time where [new technology] comes out to where there are actual products available," he said.

Staimer added that he expects more CXL-based products later in 2024, which aligns with Micron's plans to release new memory modules and Astera Labs' plans for new CXL cabling.

Still challenges ahead

Aside from product availability, CXL faces other hurdles to wider adoption. When a new layer is created in the stack, such as between memory and SSDs, the question is whether applications can take advantage of it, according to Scott Sinclair, an analyst at TechTarget's Enterprise Strategy Group. The challenge is less whether the layer can be built and more about who writes software that targets it.

"If nobody's written any software to it, then nobody's taking advantage of it, and it's not going to take off," Sinclair said.

A lot of companies are talking about their upcoming CXL products, but the products aren't out yet, Staimer said.

"Like any new technology, it's going to succeed if there's an application that drives it," Staimer said. "If there is no application to drive it, it just becomes an interesting technology."

Adam Armstrong is a TechTarget Editorial news writer covering file and block storage hardware, and private clouds. He previously worked at StorageReview.com.
