
MemVerge, Liqid create composable memory pools before CXL

MemVerge and Liqid combined software with Intel Optane to make composable memory pools that can do today what the CXL standard will do in the future.

MemVerge and Liqid are combining their software with Intel Optane to build large memory pools that can be used on demand for data-intensive workloads.

MemVerge Memory Machine software virtualizes DRAM and storage class memory (SCM), specifically Intel Optane persistent memory (PMem), into software-defined memory pools. Liqid's Matrix composable disaggregated infrastructure software can separate resources such as RAM and pool them into a virtual bare-metal server as needed for application use. Combining the two vendors' technologies bypasses the immediate need for the Compute Express Link (CXL) standard and allows memory and SCM to be disaggregated for memory-intensive workloads.
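For readers unfamiliar with how persistent memory reaches applications, the sketch below shows the underlying mechanism in its simplest form: a file on a DAX-mounted PMem filesystem is memory-mapped so a process can load and store to it like ordinary DRAM. The path and size are placeholders chosen for illustration, and the example stands in for the general concept only; the MemVerge and Liqid software layers themselves are proprietary and are not shown.

import mmap
import os

# Illustrative only: /mnt/pmem0/pool.bin is a placeholder path for a file
# on a DAX-mounted persistent-memory filesystem.
POOL_PATH = "/mnt/pmem0/pool.bin"
POOL_SIZE = 1 << 30  # 1 GiB, chosen arbitrarily for the example

fd = os.open(POOL_PATH, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, POOL_SIZE)

# MAP_SHARED on a DAX mount gives the process direct load/store access to
# the persistent media, bypassing the page cache; that property is what
# lets software present PMem to applications as just more memory.
buf = mmap.mmap(fd, POOL_SIZE, flags=mmap.MAP_SHARED)
buf[:11] = b"hello, pmem"
print(buf[:11].decode())

buf.close()
os.close(fd)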

Organizations with workloads that live in memory, such as payment processing or specialized databases, will invest in a technology like this, according to Scott Sinclair, an analyst at Enterprise Strategy Group, a division of TechTarget. More memory can provide a competitive edge for companies focused on low-latency, high-speed processes, and establishing this new relationship between storage software and hardware could also open up new possibilities for in-memory workloads.

"We've been talking about persistent memory and even Optane for a long time," Sinclair said. This collaboration will give users "the ability to integrate [persistent memory] into your environment, cost-effectively, giving some of the benefits to [expanded memory] while also configuring it more granularly."

No longer waiting on CXL

MemVerge referred to working with Liqid and Intel as a bridge to CXL, a standard for connecting processors, memory, expanded memory and accelerators. Marc Staimer, president of Dragon Slayer Consulting, has described CXL as a technology that will let different processors share pools of memory, something today's architectures can't do. But the underlying infrastructure for CXL, which includes CPUs that support PCIe Gen 5, hasn't come to market yet.

[Image: The Liqid Matrix composable disaggregated infrastructure software composes resources across different servers.]

Although not true CXL, the MemVerge-Liqid partnership mimics how the standard works and can be used on existing protocols such as PCIe Gen 4, according to George Wagner, director of product and technical marketing at Liqid.

"We're composing [Optane] directly over Gen 4 now -- basically using MemVerge software to make it look like memory to the server on the host side," Wagner said.

True CXL is about two years away from broader adoption, Wagner said. The hardware, meaning enterprise CPUs and motherboards that support PCIe Gen 5, needs to catch up with the software. While Liqid has been able to disaggregate resources such as CPUs, GPUs, NVMe drives and network interface cards, it has been limited in how much memory could be broken up and moved around. CPUs support only so many DIMM slots for DRAM, but the partnership with MemVerge adds large pools of memory through MemVerge's ability to virtualize DRAM and SCM together.
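A back-of-the-envelope sketch of that DIMM-slot ceiling, using assumed slot counts and DIMM sizes rather than any specific server model:

# Back-of-the-envelope only: slot count and DIMM size are assumptions
# picked for illustration, not a particular vendor configuration.
SOCKETS = 2
DIMM_SLOTS_PER_SOCKET = 16
GB_PER_DIMM = 64

local_ceiling_gb = SOCKETS * DIMM_SLOTS_PER_SOCKET * GB_PER_DIMM
print(f"Per-server DRAM ceiling: {local_ceiling_gb} GB")  # 2048 GB

# A memory pool composed over the PCIe fabric is not bound to those slots,
# which is the limit the MemVerge-Liqid pairing is meant to work around.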

Mega memory

MemVerge software virtualizes memory and sits on the host side, according to Bernie Wu, vice president of business development at MemVerge. The partnership with Liqid gives MemVerge flexibility beyond sitting solely near the host; its pooled memory can now also supplement accelerators over the PCIe bus.

"One of the last frontiers to get disaggregated, in computer architecture, is memory," Wu said, referring to the limited number of memory channels per server. "The partnership with Liquid enabled us to do that."

MemVerge also provides a shared memory object architecture: shared storage held in memory instead of on disks, which delivers higher performance, concurrent memory-speed access and reduced I/O to storage, Wu said. Different applications can use the same shared memory objects, so different types of workloads can run in parallel. For example, users can collect data and run analytics on it at the same time using MemVerge's shared memory architecture.
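As a loose analogy for that shared memory object idea, the sketch below has one process write readings into a shared in-memory segment and a second process compute a statistic over the same segment, with no file I/O in between. It uses Python's standard multiprocessing.shared_memory module purely to illustrate the concept and is not MemVerge's implementation; the segment name is hypothetical.

from multiprocessing import Process, shared_memory
import struct

SEG_NAME = "telemetry_demo"  # hypothetical segment name
COUNT = 1000

def collector():
    # Write readings straight into the shared segment; nothing touches disk.
    shm = shared_memory.SharedMemory(name=SEG_NAME)
    for i in range(COUNT):
        struct.pack_into("d", shm.buf, i * 8, float(i))
    shm.close()

def analytics():
    # Read the same bytes the collector wrote, at memory speed.
    shm = shared_memory.SharedMemory(name=SEG_NAME)
    values = [struct.unpack_from("d", shm.buf, i * 8)[0] for i in range(COUNT)]
    print("mean =", sum(values) / len(values))
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(name=SEG_NAME, create=True, size=COUNT * 8)
    for step in (collector, analytics):
        p = Process(target=step)
        p.start()
        p.join()
    shm.close()
    shm.unlink()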

The total amount of memory can be expanded to support more workloads as needed. Wu cited the example of a node of all-Optane SSDs, creating a multi-terabyte pool of SCM. With Liqid and MemVerge's combined software, SSDs can act as memory to the application and be composed as needed for in-memory use cases.

"The application can look at [the memory pool] transparently," he said. "There's no refactoring application needed to use it."

The flexible memory pool can save costs in two ways, Wu said: the DRAM footprint can be smaller, and once a job is complete, memory that job no longer requires can be dynamically shifted to another job that needs it.

Liqid's Wagner also pointed to composing memory, saying flexibility is key, particularly for larger workloads.

"We could get an enclosure, fill it up [with memory and Optane], and then use the software to say, 'OK, you're running X workload, let's go ahead and move 10 TB in, move it out, move it somewhere else, move it around,'" Wagner said.

The first targeted use cases of composable memory pools will be high-performance computing and machine learning applications, which perform best on in-memory technology.

"All workloads are not created equal," Sinclair said. "And we can't treat them as such."

While the two companies may initially focus on a handful of specific use cases, Sinclair said he believes they will eventually open up the exploration to different workloads that will benefit from this type of architecture.

Next Steps

TGen saves time and expands research using MemVerge
