E-Handbook: The latest on emerging memory technology Article 2 of 3



Back-end storage considerations for in-memory processing

Storing hot data in main memory removes the latency associated with hard disk and solid-state storage operations; it's the best option for need-it-now data.

In-memory processing eliminates the latency that storage operations add to HDD- and SSD-based systems, but it comes with back-end storage considerations.

In-memory databases are much more than front-end caches for an on-disk database. Most are built to strip out the unnecessary I/O that caching and other processes add and to simply update the database entry in place. Read-after-write verification is very fast.

It's also possible to remove all the storage stack work associated with looking up disk locations and parsing directory trees. In-memory processing can accelerate a database by as much as 100x, which reduces response times and the number of servers needed in the database cluster.
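The contrast can be sketched in a few lines. The classes below are hypothetical and illustrative only, not from any real product: one store updates a DRAM-resident dict directly, while the other pushes every update through the filesystem's storage stack.

```python
import os

class InMemoryStore:
    """Entries live in a DRAM-resident dict: no block I/O, no
    directory-tree parsing, and read-after-write is immediate."""
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value        # one hash-table update

    def get(self, key):
        return self._data[key]

class OnDiskStore:
    """Every update walks the storage stack: resolve a path, write
    the data, then fsync to force it through to the media."""
    def __init__(self, directory):
        self._dir = directory

    def put(self, key, value):
        with open(os.path.join(self._dir, key), "w") as f:
            f.write(value)
            f.flush()
            os.fsync(f.fileno())       # wait for persistence

    def get(self, key):
        with open(os.path.join(self._dir, key)) as f:
            return f.read()
```

Timing a loop of `put` calls against each store makes the stack overhead visible: the in-memory path is a single dictionary assignment, while the disk path pays for the open, write and fsync on every update.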

NVDIMM to the rescue

However, in-memory databases present a new problem: Doing everything in dynamic RAM (DRAM) is great for performance, but when a server fails, the DRAM goes blank and data is lost. Having the option to persist some or all of the database entries is a useful extension of the in-memory database approach, but that raises the issue of what back-end storage is needed.

Early advice was to save any data to NVMe SSDs or to flash cards. These offered the fastest way to move data from DRAM to persistent storage.


Today, organizations should also consider NVDIMM as persistent storage for the DRAM pool. NVDIMM is a few terabytes of flash or similar storage mounted on a DIMM.

Because it sits on a fast bus and uses memory I/O methods, NVDIMM is very fast, perhaps four times faster than the fastest SSD. Used in conjunction with DRAM, it gives the in-memory database a seamless, much larger memory space and avoids sharding issues. More importantly, IT can maintain an image of the in-memory data or create a journal file of write transactions without affecting operations.
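The journaling idea can be sketched simply. The class below is a hypothetical illustration, not a real database engine: reads are served from a DRAM-resident dict, every write is appended to a journal file, and replaying that journal after a simulated restart rebuilds the in-memory image. On an NVDIMM-backed file, the flush on each write would be far cheaper than on disk.

```python
import json
import os
import tempfile

class JournaledStore:
    """Illustrative in-memory store with an append-only write journal."""
    def __init__(self, journal_path):
        self._data = {}
        if os.path.exists(journal_path):
            with open(journal_path) as f:
                for line in f:               # replay journal on restart
                    entry = json.loads(line)
                    self._data[entry["k"]] = entry["v"]
        self._journal = open(journal_path, "a")

    def put(self, key, value):
        self._data[key] = value              # serve reads from DRAM
        self._journal.write(json.dumps({"k": key, "v": value}) + "\n")
        self._journal.flush()                # near-free on NVDIMM-class media

    def get(self, key):
        return self._data[key]

    def close(self):
        self._journal.close()

# Usage: survive a simulated server restart.
path = os.path.join(tempfile.mkdtemp(), "journal.log")
store = JournaledStore(path)
store.put("user:1", "alice")
store.close()

restarted = JournaledStore(path)             # replay rebuilds the DRAM image
recovered = restarted.get("user:1")
restarted.close()
```

A production engine would add checkpointing and journal truncation so replay time stays bounded, but the recovery principle is the same.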

The NVDIMM extension has size limits compared with SSD, however. This is especially true with big data, where the data set is either a continuous stream or simply a huge amount of information, much of which is never accessed. In this case, there are two alternatives: IT administrators can build networked storage from NVMe drives, or they can use large numbers of slower SSDs. The latter option delivers perhaps 10% of NVMe drive performance, but it is much cheaper and operates with high parallelism. The choice is use case-dependent.
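A back-of-the-envelope calculation shows how parallelism can offset the per-drive gap. The numbers below are illustrative assumptions, not vendor specifications:

```python
# Illustrative figures only; real IOPS vary widely by drive and workload.
nvme_read_iops = 700_000       # one high-end NVMe SSD (assumed)
slow_ssd_iops = 70_000         # ~10% of NVMe, per the rule of thumb above
shelf_drives = 24              # drives in one dense SSD shelf (assumed)

# Aggregate throughput of the slower drives working in parallel.
aggregate_iops = slow_ssd_iops * shelf_drives
```

Under these assumptions the shelf of slower SSDs delivers more aggregate IOPS than the single NVMe drive, which is why the cheaper option remains viable for highly parallel, mostly cold big data workloads.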

High-bandwidth interfaces help a lot with in-memory processing. Admins can use remote direct memory access (RDMA) over Ethernet to share data between memory systems, and they can extend it to any drive-based storage.

What's next for the infrastructure?

The infrastructure for in-memory processing is evolving rapidly. The most interesting potential changes are in the NVDIMM and CPU architectures.

For example, Intel's Optane technology has byte addressability on the roadmap. This will enable a program to write a single byte to Optane memory with a CPU register-to-memory instruction. Compared with 4 KB block I/O, the current NVDIMM access method, this can be blindingly fast. Block I/O builds a block in the application, passes it through the storage stack software and transfers a minimum of 4 KB; the byte write is a single CPU operation that takes only one or two machine cycles.
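Ordinary memory-mapped files give a feel for what byte-addressable access looks like to software. The sketch below uses a plain file as a stand-in; a real persistent-memory deployment would map a DAX-mounted pmem device instead, and the file name here is invented. The key point is the single-byte store, with no 4 KB block assembled and no storage stack traversed on the write path:

```python
import mmap
import os
import tempfile

# A plain file standing in for byte-addressable persistent memory.
path = os.path.join(tempfile.mkdtemp(), "pmem_standin.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)          # reserve one page

with open(path, "r+b") as f:
    pm = mmap.mmap(f.fileno(), 4096)
    pm[42] = 0x7F                    # single-byte store, no block I/O issued
    pm.flush()                       # push the dirty page back to the media
    pm.close()

with open(path, "rb") as f:
    data = f.read()                  # the lone byte persisted
```

With true persistent memory, the flush becomes a cache-line writeback rather than a page-granularity filesystem operation, which is where the one-or-two-cycle figure comes from.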

Byte addressability requires changes to every piece of code in a system. But applying the idea to a database engine is much simpler than applying it to traditional monolithic apps, and this is where it will show up first. In theory, the concept can be extended to the cluster and to attached NVMe Optane drives using RDMA.

In big data environments, network bandwidth and local storage could be the key bottleneck in exploiting system power. The solution is data compression. Compression ratios depend on the data, but a 5x reduction is typical. On the CPU side, there is enormous attention on a flattened fabric topology inside the server, where memory, GPUs, the CPU and drives all share the same ultra-fast fabric, which can couple to the cluster fabric.
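Measuring a compression ratio takes only a few lines. The sample below is synthetic and highly repetitive, so it compresses far better than the roughly 5x typical of real log-like or columnar data; treat the numbers as an illustration of the measurement, not of the ratio:

```python
import zlib

# Synthetic, repetitive records: real data compresses much less well.
record = b"2024-06-01T12:00:00Z,sensor-17,temperature,21.4\n"
sample = record * 10_000

compressed = zlib.compress(sample, level=6)
ratio = len(sample) / len(compressed)    # bytes in vs. bytes out
```

The same arithmetic applied before data crosses the network or lands on local drives is what turns a bandwidth-bound big data pipeline into a CPU-bound one, which is why compression offload keeps attracting hardware support.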

Another CPU initiative is to increase DRAM capacity and performance with a much tighter coupling of CPU and memory. There are several initiatives, all based on the Hybrid Memory Cube concept. The aims are terabyte-per-second memory bandwidth, lower memory power consumption, and hybrid DRAM, flash and CPU module architectures, all of which can speed in-memory operations significantly.

These advanced initiatives may take different paths as they evolve, but the focus on a major leap in performance is clear, and the opportunity for in-memory systems and their storage will expand accordingly.

