Compare DRAM vs. DRAM-less SSDs for cost, performance
Are DRAM-less SSDs the way of the future? While they're a great fit for hyperscale data centers, other organizations may also find benefits, such as lower cost.
Most SSDs use internal DRAM for various housekeeping functions. However, DRAM-less SSDs use some of the host server's DRAM to do everything that a typical SSD's internal DRAM usually does. Hyperscalers find that DRAM-less SSDs can provide cost, power and performance advantages over SSDs with internal DRAM.
Certain data center SSDs have no internal DRAM. It's important to understand how these drives work because DRAM-less SSDs can benefit some users more than others.
What is a DRAM-less SSD?
Most SSDs include DRAM chips -- sometimes, a lot of them.
DRAM helps the SSD to manage flash wear and the intricacies of NAND flash writes. DRAM also helps SSDs communicate with processors through a protocol that was designed around HDDs, which are altogether different from NAND flash.
The SSD's internal DRAM stores metadata, buffers write data, coalesces short writes into longer ones and buffers data that moves around internally to the SSD for garbage collection. It also holds the address translation table that maps drive addresses to flash addresses to ensure that each flash block undergoes its fair share of write activity, which maximizes the flash chips' lifetimes.
A typical enterprise SSD has a controller (silver), DRAM (yellow outline) and NAND flash (red outline).
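The role of the address translation table can be sketched with a toy model in Python. This is an illustrative simplification -- the class, its block-granular mapping and its wear-leveling policy are assumptions for the sketch, not any vendor's actual firmware:

```python
# Toy flash translation layer (FTL): maps logical addresses to physical
# blocks and levels wear by always reusing the least-erased free block.
# Real FTLs map at page granularity; this is a simplified illustration.

class SimpleFTL:
    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))   # physical blocks available
        self.erase_counts = [0] * num_blocks         # wear tracking per block
        self.l2p = {}                                # logical -> physical map

    def write(self, logical_addr):
        # Flash can't overwrite in place, so retire the old physical block.
        old = self.l2p.get(logical_addr)
        if old is not None:
            self.erase_counts[old] += 1              # erase before reuse
            self.free_blocks.append(old)
        # Wear leveling: place new data in the least-erased free block.
        new = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(new)
        self.l2p[logical_addr] = new
        return new

ftl = SimpleFTL(num_blocks=4)
for _ in range(8):
    ftl.write(0)  # repeated writes to one logical address spread wear
```

Every write consults and updates this table, which is why it benefits from residing in fast DRAM -- whether that DRAM sits inside the SSD or in the host's HMB.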
These differences between NAND flash and HDDs also require the SSD to occasionally perform some protracted background housekeeping operations. The host is unaware of these operations, and the SSD doesn't know when to expect higher or lower traffic from the host. As a result, there are instances when the SSD and the host get in each other's way, degrading overall system performance. The host and SSD are rarely in sync with each other.
The SSD's performance, cost and power consumption all increase with larger internal DRAM.
But this DRAM needs to be built into the SSD only if the SSD is trying to impersonate an HDD. All the functions listed above can be moved to the host's memory as long as the host takes responsibility for managing them, and there are distinct advantages to doing so.
Hyperscale data center users, originally led by Baidu, experimented with SSDs that had most of the control functions stripped out. These SSDs replaced the internal RAM with a portion of the host computer's RAM, which is now called the Host Memory Buffer (HMB).
Baidu's approach gave the host greater control over the SSD's timing and operation, eliminating the mismatch between the server's I/O requirements and the timing of the SSD's garbage collection routines. Users could get the best performance by tuning the application and system software to the SSD's internal architecture. This approach is reasonable for hyperscale data centers because they create and control nearly all their own software.
Baidu wasn't the first to use the host's DRAM instead of DRAM within the SSD. Fusion-io, the originator of the PCIe-based SSD, launched this business with DRAM-less SSDs in 2008. Many found this implementation unattractive at the time since the server's DRAM was used as a substitute, eroding the amount available to standard applications.
This configuration also meant that the server couldn't boot from a Fusion-io drive since a DRAM-less SSD's operation depended on the server being booted beforehand. But today's DRAM-less SSDs have overcome the bootstrap issue, and attitudes about server DRAM use have changed since then.
How do DRAM-less SSDs compare to DRAM SSDs?
A DRAM-less SSD has a lower bill of materials (BOM) cost to manufacture. The DRAM-less SSD itself consumes less power than an SSD with internal DRAM.
The function of the internal DRAM may have moved from the SSD to the server's HMB, but this function still requires the same number of DRAM bytes. This may, in some cases, slightly diminish the server's individual performance, but the system performance improvements that stem from tuning the software to a DRAM-less SSD more than offset any such performance reduction.
The BOM point is important. A conventional enterprise DRAM SSD might use $20-$100 worth of DRAM. If the DRAM-less SSD can be produced without using that $20-$100, then customers should expect some of that savings to flow through to them.
As Baidu found, an SSD that integrates more tightly with application software can also perform better than one that receives commands randomly while trying to manage its internal functions. However, the software must be tuned to take advantage of this closer coupling.
That closer coupling provides solid advantages to users who have control over their software. The host can use its understanding of the SSD's architecture to synchronize I/O requests to the status of the SSD's internal NAND flash chips. If one part of the SSD's flash is busy, the application can redirect the task to a different slice of the SSD's flash.
Best of all, users can put the SSD's internal housekeeping -- particularly the timing of the garbage collection -- under the host's command instead of the SSD initiating it by second-guessing the best time to run.
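That host-directed timing can be sketched as follows. The interface, page counts and threshold here are hypothetical, meant only to show housekeeping deferred to the host's known idle windows rather than any real NVMe command set:

```python
# Sketch of host-scheduled garbage collection: the host, which knows its
# own traffic pattern, triggers the SSD's housekeeping during idle windows
# instead of letting the drive guess. All numbers are illustrative.

class HostManagedSSD:
    def __init__(self, gc_threshold=5):
        self.stale_pages = 0              # pages holding stale (overwritten) data
        self.gc_threshold = gc_threshold  # stale-page count that makes GC worthwhile

    def write(self, n_pages):
        self.stale_pages += n_pages       # overwrites leave stale pages behind

    def needs_gc(self):
        return self.stale_pages >= self.gc_threshold

    def collect_garbage(self):
        self.stale_pages = 0              # reclaim stale blocks

def host_io_loop(ssd, workload):
    gc_runs = 0
    for burst in workload:                # burst == 0 means the host is idle
        if burst == 0 and ssd.needs_gc():
            ssd.collect_garbage()         # housekeeping only during idle windows
            gc_runs += 1
        else:
            ssd.write(burst)
    return gc_runs
```

Because the host drives the loop, garbage collection never collides with a pending I/O burst -- the collision the article describes when drive and host each guess at the other's timing.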
Why can hyperscalers use DRAM-less SSDs?
All this control comes at a cost. If the host is to synchronize its application software to the SSD, then the software must be written around the SSD's architecture.
For most businesses, this doesn't make sense since the bulk of their software is purchased off the shelf and not custom-created for the application. The cost of developing this software is prohibitive for these companies.
But, for a hyperscaler that is deploying the same software over tens of thousands of servers, a million-dollar development effort makes perfect financial sense if it leads to $100 in annual savings for each of its 50,000 servers. That's a total savings of $5 million per year.
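The amortization arithmetic works out as follows. The figures come from the example above; the first-year netting is an added illustration:

```python
# Hyperscaler economics: a one-time software-development cost
# amortized across a large server fleet. Figures from the example.
dev_cost = 1_000_000           # one-time tuning effort, in dollars
savings_per_server = 100       # annual savings per server
servers = 50_000

annual_savings = savings_per_server * servers   # fleet-wide savings per year
first_year_net = annual_savings - dev_cost      # savings after paying for development
```

Even in year one, the fleet-wide savings exceed the development cost several times over, and every following year the full $5 million accrues.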
Can all data center SSDs go DRAM-less? Not really. There are other hyperscale applications where removing the DRAM is not feasible. For example, certain new SSDs have been designed to communicate through the Compute Express Link (CXL) protocol, a memory rather than storage interface, to achieve persistence at a high bandwidth.
Just as DRAM SSDs try to mimic HDDs, CXL SSDs try to mimic the operation of memory. These CXL SSDs, which are also interesting to hyperscalers, have huge internal DRAMs to hide their speed deficiencies. They are designed to behave more like non-volatile DIMMs than storage.
DRAM-less SSDs appeal to budget systems, too
There's one other application area where DRAM-less SSDs are preferred over DRAM SSDs. Lower-end applications, which put few demands on the SSD, can also benefit from the cost savings of leaving the DRAM out of the SSD.
In this case, the SSD typically uses small static RAM within the controller chip to buffer a small number of writes and for limited mapping for the address translation tables. Other translation tables and metadata are stored in the NAND flash chips themselves. The performance of this kind of SSD is penalized by these differences, but clever architectural tradeoffs do a good job of hiding those penalties in many applications.
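One such tradeoff can be sketched as a small cache of translation entries held in controller SRAM, with misses served from the NAND-resident table. The class, sizes, eviction policy and stand-in translation below are illustrative assumptions, not a real controller design:

```python
# Sketch of a budget DRAM-less design: only a small slice of the address
# translation table fits in the controller's SRAM, so entries are
# demand-loaded from NAND (slow) and evicted least-recently-used.
from collections import OrderedDict

class SramMapCache:
    def __init__(self, capacity):
        self.capacity = capacity           # SRAM holds this many map entries
        self.cache = OrderedDict()         # logical addr -> physical addr
        self.nand_reads = 0                # each miss costs a slow NAND read

    def lookup(self, logical_addr):
        if logical_addr in self.cache:
            self.cache.move_to_end(logical_addr)   # refresh LRU position
            return self.cache[logical_addr]
        # Miss: fetch the entry from the NAND-resident table.
        self.nand_reads += 1
        physical = logical_addr * 2                # stand-in translation
        self.cache[logical_addr] = physical
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)         # evict least recently used
        return physical
```

The more a workload revisits recently used addresses, the fewer slow NAND lookups occur -- which is why light, localized workloads hide the missing-DRAM penalty well.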
The performance of budget DRAM-less SSDs is not always significantly lower than that of DRAM SSDs. Western Digital uses one of the clever tradeoffs mentioned above in its gaming SSDs, where the vendor replicates a subset of the hyperscalers' tools by placing an HMB in a PC's SSD driver. The benefit is clear in the table below, which compares two of these SSDs: the DRAM-less WD Blue SN550 and the DRAM-based WD Black SN750. The two SSDs have relatively similar architectures, except for the number of internal NAND channels and the NVMe revision. Even with those differences, they are more similar to each other than most SSDs.
The performance of the DRAM-less WD Blue SN550 is lower than that of the WD Black SN750, but the gap between the two is smaller than it would be without the HMB driver.
Naturally, standard application programs cannot be customized the way that the applications in hyperscaler data centers can be. However, applications with light I/O activity perform similarly whether paired with a DRAM SSD or a DRAM-less SSD, although the performance with the more economical DRAM-less SSD is typically slightly lower. In office applications, the difference is seldom noticed.
DRAM-less SSDs for the extremes, DRAM SSDs everywhere else
As a result, DRAM-less SSDs serve two ends of the spectrum:
Hyperscale and other systems that use customized software to squeeze out important performance gains while lowering BOM costs and energy consumption.
Budget systems that can tolerate slightly reduced performance in return for hardware cost savings.
Higher-end applications see the most benefit when a DRAM-less SSD can be paired with customized software. This might include certain scientific applications and highly customized applications that don't simply package commercial software and hardware together but require extensive code development. In these applications, a DRAM-less SSD can speed up performance and reduce costs.
Budget systems, particularly those with low I/O activity, like PCs and industrial controllers, may not notice the lower performance of a lower-cost DRAM-less SSD in day-to-day use.
In either case, the choice between a DRAM SSD and a DRAM-less SSD might not be the first consideration when configuring the system. But DRAM-less SSDs can provide important benefits if an organization applies them correctly.
Editor's note: This article was updated in August 2025 to add information and improve the reader experience.
Jim Handy is a semiconductor and SSD analyst at Objective Analysis in Los Gatos, Calif.