Why Intel Killed Its Optane Memory Business • The Register

Analysis Intel CEO Pat Gelsinger has confirmed that Intel will exit its Optane business, ending its attempt to create and promote a tier of memory that was somewhat slower than DRAM but offered the virtues of persistence and high IOPS.

The news should come as no surprise, however. The division has been on life support ever since Micron's decision in 2018 to end its joint venture with Intel and later sell off the factory where the 3D XPoint chips that go into Optane drives and modules were made. While Intel has signaled that it is open to using third-party foundries, without the means to manufacture its own Optane silicon, the writing was on the wall.

As our sister site Blocks and Files reported in May, the divestiture came only after Micron had sold Intel a glut of 3D XPoint memory — more than the chipmaker could sell. Estimates put Intel's inventory at about two years' supply.

In its poor second-quarter earnings report, Intel said exiting Optane would result in $559 million in inventory write-downs. In other words, the company is abandoning the project and writing off the inventory as a loss.

The move also marks the end of Intel's SSD business. Two years ago, Intel sold its NAND flash business and manufacturing plant to SK hynix to focus its efforts on Optane.

Announced in 2015, 3D XPoint memory arrived in the form of Intel's Optane SSDs two years later. Unlike competing NAND SSDs, however, Optane SSDs couldn't compete on capacity or sequential throughput. Instead, the devices offered some of the best random I/O performance on the market, a quality that made them particularly attractive in latency-sensitive applications where IOPS mattered more than throughput. Intel claimed that its PCIe 4.0-based P5800X SSDs could achieve up to 1.6 million IOPS.
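The distinction between IOPS and throughput is worth making concrete. Assuming a 4 KiB transfer size — a typical random-I/O benchmark block size, not a figure from the article — 1.6 million IOPS works out to only about 6.6 GB/sec of bandwidth, which is why an Optane drive could lead on IOPS without leading on raw throughput:

```python
# Rough sanity check: converting an IOPS figure to throughput.
# The 4 KiB block size is an assumption typical of random-I/O benchmarks;
# the 1.6M IOPS figure is Intel's claim for the P5800X, as cited above.
iops = 1_600_000
block_size = 4 * 1024  # bytes per random I/O

throughput_gb_s = iops * block_size / 1e9
print(f"{throughput_gb_s:.1f} GB/s")  # ~6.6 GB/s
```

By contrast, a drive optimized for sequential throughput can post much larger GB/sec numbers at far lower IOPS.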

Intel has also used 3D XPoint in its Optane persistent memory DIMMs, particularly around the launch of its second- and third-generation Xeon Scalable processors.

From a distance, Intel's Optane DIMMs looked no different from regular DDR4, except, perhaps, for the heat spreader. On closer inspection, however, the DIMMs offered capacities far beyond what is possible with DDR4 memory today: 512 GB per module was not uncommon.

The DIMMs slotted in alongside standard DDR4 and enabled a number of new use cases, including a tiered memory architecture that was essentially transparent to the operating system. When deployed this way, the DDR memory was treated as a large level-4 cache, with the Optane memory behaving like system memory.

While its performance was far from comparable to DRAM's, the approach allowed very large memory-intensive workloads, such as databases, to be deployed at a fraction of the cost of an equivalent amount of DDR4, without requiring software customization. That was the idea, anyway.

Optane DIMMs could also be configured to behave as a high-performance storage device, or as a combination of storage and memory.

And now?

Although DDR5 promises to solve some of the capacity issues that Optane persistent memory addressed, with 512GB DIMM capacities planned, it is unlikely to be price competitive.

DDR isn’t getting cheaper — at least not quickly — but NAND flash prices are falling as supply outstrips demand. All the while, SSDs are getting faster and faster.

Micron this week began volume production of 232-layer NAND that will push consumer SSDs into over-10GB/sec territory. That's still not fast or low-latency enough to replace Optane for large in-memory workloads, analysts say, but it comes awfully close to the 17GB/sec offered by a single channel of low-end DDR4.
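That 17GB/sec figure falls out of DDR4's bus geometry: a channel is 64 bits wide, so each transfer moves 8 bytes, and low-end DDR4-2133 performs 2,133 million transfers per second:

```python
# Single-channel bandwidth of low-end DDR4-2133.
transfers_per_sec = 2133e6   # 2133 MT/s
bytes_per_transfer = 64 // 8 # 64-bit channel = 8 bytes per transfer

bandwidth_gb_s = transfers_per_sec * bytes_per_transfer / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")  # ~17.1 GB/s
```

Note this is peak theoretical bandwidth per channel; servers multiply it across six or eight channels, which is part of why NAND still has ground to make up.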

So if NAND isn’t the answer, then what? Well, there is actually an Optane memory alternative on the horizon. It’s called Compute Express Link (CXL) and Intel has already invested heavily in the technology. Introduced in 2019, CXL defines a cache-coherent interface for connecting processors, memory, accelerators, and other peripherals.

CXL 1.1, which will ship with Intel's Sapphire Rapids Xeon Scalable processors and AMD's fourth-generation Epyc Genoa and Bergamo processors later this year, allows memory to be attached directly to the CPU over a PCIe 5.0 link.

Vendors like Samsung and Marvell are already planning memory expansion modules that slot into PCIe like a GPU and provide a large pool of extra capacity for memory-intensive workloads.

Marvell’s acquisition of Tanzanite this spring will allow the vendor to also offer Optane-like tiered memory functionality.

Additionally, since the memory is managed by a CXL controller on the expansion board, older, cheaper DDR4 or even DDR3 modules could be used alongside modern DDR5 DIMMs. In this regard, CXL-based memory tiering might be superior, because it does not rely on a specialized memory technology like 3D XPoint.

VMware is exploring software-defined memory that shares one server's memory with other boxes – an effort that will be far more practical if it uses a standard like CXL.

However, emulating some aspects of Intel's Optane persistent memory may have to wait until the first CXL 2.0-capable processors – which add support for memory pooling and switching – come to market. It also remains to be seen how software will interact with CXL memory modules in multi-tier memory applications. ®

Margie D. Carlisle