
Understand the Intel Optane shutdown

Intel's upcoming shutdown of Optane follows multibillion-dollar losses. However, some technology benefits came out of the IT giant's effort to improve storage performance.

All businesses must make big bets from time to time, and the successful ones know when to walk away from a gamble that didn't turn out the way they had hoped. Intel did just this at the end of July when the company announced the "wind down" of its Optane persistent memory products.

To get a better handle on the coming Intel Optane shutdown, let's dive into the reasons behind Intel's decision to launch the technology and how factors in the market ultimately led to its demise.

Rationale and expectations

Optane was an attempt to generate a sizable performance gap between Intel's processors and those from AMD.

Over the past decade, the vendor used what was commonly called "The Intel Treadmill" to keep ahead of other processor manufacturers. The model enabled Intel to differentiate itself from competitors based on the following guidelines:

  • Sell processors based on an advanced process technology at a premium. Reap big profits.
  • Reinvest those profits in new leading-edge technology to keep ahead of all other processor makers.
  • Use that new leading-edge process to reap premium profits once again, funding the next process technology.

But the economics of semiconductor production slowly changed, taking this model out of Intel's hands.

The number of wafers an economical leading-edge wafer fabrication plant must process has increased steadily, moving well past what Intel's processors require. Add the mushrooming cost of building one of these fabs and it's clear that Intel couldn't build a leading-edge fab and use only a fraction of its output -- that would render the company unprofitable. As a result, Intel's processor technology fell behind that of Taiwan Semiconductor Manufacturing Co. (TSMC), which produces wafers for AMD and other firms. Intel CEO Pat Gelsinger has since moved to address that shortfall, in part through the planned acquisition of chip foundry Tower Semiconductor.

In the meantime, Intel needed a way to widen the competitive gap with equivalent or better technology. Intel's daring plan was a significant architectural change built around a new memory technology, called 3D XPoint, sold under the Optane brand.

The Optane strategy

Intel designed Optane to replicate a shift that started around 2004. That year, NAND flash prices permanently fell below DRAM prices, which made it economical for computer systems to incorporate SSDs to improve their price/performance ratio. By adding an SSD, a system could achieve the same performance with less DRAM and less money. Server farms that adopted SSDs could often reduce the number of servers they ran.

Major performance improvements resulted from plugging a growing gap in the memory and storage hierarchy with a NAND flash SSD. A flash SSD fit neatly between HDDs and DRAM in both speed and price. As a result, SSDs rapidly became a key component in most data centers.

Intel decided to fill the growing gap between DRAM and NAND SSDs with a new memory technology. Emerging memories -- such as magnetoresistive RAM (MRAM), phase-change memory (PCM), resistive RAM and ferroelectric RAM -- fit the bill because they offered features Intel could add to the memory and storage hierarchy to improve price and performance. Most emerging memories have a smaller bit cell size than DRAM, so they should be cheaper to produce and purchase than DRAM. They're also faster than NAND flash. Plus, they're nonvolatile -- they bring persistence closer to the processor, which could streamline systems unable to tolerate data loss from a power failure.
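
To put rough numbers on where the new tier was meant to sit, here is a minimal sketch with ballpark latency figures commonly cited for each layer -- approximations for illustration, not data from the article. The roughly thousand-fold latency gap between DRAM and a NAND flash SSD is the hole Optane was designed to straddle:

    # Ballpark access latencies for the memory and storage hierarchy, in
    # microseconds. These are rough orders of magnitude for illustration,
    # not vendor specifications -- the point is the size of each gap.
    tiers = [
        ("DRAM",           0.1),      # ~100 ns
        ("Optane DIMM",    0.3),      # the new tier: near-DRAM speed, persistent
        ("Optane SSD",     10.0),
        ("NAND flash SSD", 100.0),
        ("HDD",            10_000.0),
    ]

    for (name, latency), (_, next_latency) in zip(tiers, tiers[1:]):
        gap = next_latency / latency
        print(f"{name:15} ~{latency:>9,.1f} us   next tier down is ~{gap:.0f}x slower")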

Intel had been researching PCM since the 1960s, and announced its first PCM chip in 1970, so the technology was a natural fit for that gap. If Intel could do this in a way that made Optane work only with Intel processors, it could drive a wedge between itself and its competition that might last a very long time.

Importance of DIMMs

The natural fit for this approach was to produce Optane with a proprietary interface -- not an SSD interface -- that Intel could use to thwart any efforts by competitors to use the technology themselves. Although SSDs were Intel's first Optane product, they were only intended to build volume production of 3D XPoint memory early in the game.

SSD users prefer faster SSDs, and Intel produced faster SSDs using Optane. These SSDs didn't tap into all the speed offered by 3D XPoint memory, though, because the SSD interface was too slow.

It's hard to put a price on speed, and users decided that a relatively minor speed increase wasn't worth the premium price Intel wanted to charge. As a result, the SSDs didn't sell in enough volume to drive the required production scale.

The larger plan was to create a module that ran at near-DRAM speeds. Intel chose to adapt the standard DDR4 memory bus to Optane's needs, and to keep the changes secret as a competitive edge.

To this end, the company developed the DDR-T interface, which was DDR4 with a few additional signals to support a transaction protocol. The bus required support at both the DIMM and CPU side of the interface, providing Intel with a walled garden.

These DIMMs enabled 3D XPoint memory to provide its entire speed advantage to the system. Intel released them during its second-generation Xeon Scalable Processor launch in early 2019.

But to gain adoption, these slower-than-DRAM DIMMs had to sell at lower-than-DRAM prices, and at the outset they cost more to make than the price Intel needed to charge for them.

Economics: A stumbling block

Any new memory technology can only reach the necessary cost target if it can ramp to production volumes like those of DRAM. That's how NAND flash prices crossed below DRAM prices.

A single-level cell NAND flash chip has always been approximately half the size of its DRAM counterpart, assuming both are built using the same process geometry and hold the same number of bits. Yet, NAND flash costs didn't compete with those of DRAM until 2004. NAND flash wafer production in 2004 reached one third that of DRAM, according to estimates from semiconductor market research firm Objective Analysis. That's when the economies of scale tipped in favor of NAND.
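
The same scale effect can be shown with a toy cost model. The sketch below uses hypothetical dollar figures -- not Objective Analysis data -- to amortize fixed fab and development costs over annual output; with these made-up numbers, a half-sized bit cell only reaches cost parity once its wafer volume climbs to roughly a third of the incumbent's, echoing the 2004 NAND-versus-DRAM crossover:

    # Toy cost model with hypothetical figures: a smaller bit cell lowers the
    # variable (per-die) cost, but fabs and process development are fixed
    # costs that must be spread over the wafers actually shipped.
    FIXED_COSTS = 2.5e9      # assumed annual fixed costs, dollars
    GB_PER_WAFER = 5_000     # assumed usable gigabytes per wafer

    def cost_per_gb(variable_cost_per_gb, wafers_per_year):
        annual_gb = wafers_per_year * GB_PER_WAFER
        return variable_cost_per_gb + FIXED_COSTS / annual_gb

    # Incumbent (DRAM-like): higher variable cost, enormous volume.
    print(f"incumbent at 1,000,000 wafers: ${cost_per_gb(2.00, 1_000_000):.2f}/GB")

    # Challenger with a half-sized cell: half the variable cost, but the
    # advantage only appears once volume approaches the incumbent's scale.
    for wafers in (50_000, 150_000, 333_000, 1_000_000):
        print(f"challenger at {wafers:>9,} wafers: ${cost_per_gb(1.00, wafers):.2f}/GB")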

Here is where Intel made its big gamble: subsidize the initial effort until Optane consumption rose high enough to deliver those economies of scale. So far, that hasn't happened.

As a result, Intel has lost more than $7 billion in its effort to squeeze Optane's costs down. Upper management appears to have decided that was as far as it wanted to go and discontinued the product.

Chart: Intel has lost more than $7 billion on Optane, according to estimates from Objective Analysis.

Could things have turned out differently? Probably not by much. Had Intel priced Optane SSDs more aggressively, they might have gained more popularity, but then the losses would have been steeper. If we assume management budgeted the gamble's losses at a fixed amount, then the Intel Optane shutdown would have happened that much earlier.

Optane's legacy and what comes next

Will current Optane users be left in the lurch? In a statement read at the Flash Memory Summit, Intel made it clear the company will support existing users, so that's not an issue.

However, companies that depended on Optane for fast storage will need to migrate to another, more expensive product for future designs; the easiest option is an NVDIMM. Companies that don't need persistence but took advantage of Optane's larger memory sizes will also need to pay more to use DRAM instead in their next system iteration. Neither move should break the bank for these companies, but their profits will be somewhat lower.

Optane does leave behind a positive legacy. The industry learned a lot from its introduction. The Compute Express Link (CXL) interconnect may have been designed with Optane in mind, and the Storage Networking Industry Association developed a Nonvolatile Memory Programming Model that promises to speed up many other forms of storage. This model will be a boon as processors move to new process technologies that incorporate nonvolatile MRAM caches. MRAM should become the norm as chip process technologies continue to shrink and static RAM fails to shrink along with them.
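
The programming model's central idea is that persistent memory is mapped into the address space and updated with ordinary loads and stores, with explicit flushes for durability, rather than driven through read() and write() system calls. Here is a minimal Python sketch of that access pattern using an ordinary memory-mapped file; on a real persistent-memory stack (a DAX filesystem plus a library such as PMDK), the flush would be a cache-line flush rather than the msync-backed call shown here:

    # Minimal sketch, in the spirit of the SNIA model: map a file, store to it
    # directly, then flush explicitly to make the update durable. The file name
    # is hypothetical; any writable path works.
    import mmap
    import os

    path = "pmem_demo.dat"
    with open(path, "wb") as f:
        f.truncate(4096)              # reserve one page of backing store

    fd = os.open(path, os.O_RDWR)
    try:
        buf = mmap.mmap(fd, 4096)     # map the file into the address space
        buf[0:5] = b"hello"           # update with a plain store, no write() call
        buf.flush()                   # push the change to stable media (msync)
        buf.close()
    finally:
        os.close(fd)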

Optane opened the industry's eyes to the idea that different memory speeds require a vastly different bus approach than the fixed-speed path the industry has followed since synchronous DRAM in the early 1990s. In addition, Optane taught the industry that servicing interrupts with slow context switches is inadequate for handling fast and slow memory.
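
A back-of-the-envelope calculation shows why. Assuming roughly five microseconds for an interrupt plus a context switch -- an assumed figure, not one from the article -- that overhead is noise next to a NAND flash read, a large fraction of an Optane SSD read, and many times longer than an Optane DIMM access, which is why the fastest tiers end up being accessed by loads and stores or by polling:

    # Rough, assumed figures showing how completion-handling overhead compares
    # with media latency as storage gets faster.
    OVERHEAD_US = 5.0     # assumed interrupt + context-switch cost, microseconds

    accesses = {
        "NAND flash SSD read": 100.0,   # approximate latencies in microseconds
        "Optane SSD read":      10.0,
        "Optane DIMM load":      0.3,
    }

    for name, latency_us in accesses.items():
        ratio = OVERHEAD_US / latency_us
        print(f"{name:20}: overhead is {ratio:.0%} of the access time")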

IT also started to reuse the term non-uniform memory access (NUMA) to describe memory systems that combine faster and slower memory types. This approach enables memory on both ends of a CXL link to map almost seamlessly into a processor's memory space.

Perhaps Intel will share some of what it learned about PCM manufacture or transactional DRAM interfaces that will help others in the industry. No matter what happens, more people are open to the notion of adding another memory layer to the memory and storage hierarchy.
