First, some general background on how data moves between systems is needed. When an application running in a server needs to send data to an external location, it copies some of its data into a memory buffer, then makes a call to a network or storage driver and provides the address of that buffer. Depending on the type of connection (Ethernet, Fibre Channel, InfiniBand, etc.), the application memory buffer may be copied several times as the request works its way down through the various software stacks to the host adapter hardware. Once the data reaches the host adapter hardware, it can be put "on the wire," so to speak, to the destination device. The device at the other end follows a similar process in reverse, copying the data between buffers until it reaches its intended location within the destination system.
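The multi-copy path described above can be illustrated with a toy Python sketch. The layer names and the function are invented for illustration; the point is simply that the payload is duplicated once per layer on its way to the adapter:

```python
# Toy model of the traditional send path: the payload is copied
# at each layer of the software stack before reaching the adapter.

def traditional_send(payload: bytes) -> tuple[bytes, int]:
    """Return the data as it arrives at the adapter, plus the copy count."""
    stack_layers = ["socket buffer", "TCP/IP stack", "driver ring buffer"]
    buffer = bytes(payload)       # application copies data into its send buffer
    copies = 1
    for _layer in stack_layers:   # each layer copies the buffer again
        buffer = bytes(buffer)
        copies += 1
    return buffer, copies         # data is now at the host adapter

data, n = traditional_send(b"hello")  # n == 4 copies before the wire
```

Real stacks differ in how many copies they make, but each copy adds latency, which is the cost RDMA is designed to remove.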
With Remote Direct Memory Access (RDMA), the original host application puts its data into the memory buffer and calls the network or storage driver. The RDMA stack is built so that the hardware adapter gets access to the original buffer directly, bypassing much of the traditional software stack (TCP/IP, for example) and all of those memory buffer copy operations. This reduces latency and gets the data onto the wire as fast as possible. Assuming the device on the other end also uses RDMA, the entire conversation between the two systems completes much faster than it would between equivalent systems without RDMA.
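The RDMA case can be sketched in the same toy style. Everything here is hypothetical (real RDMA code would go through a verbs library such as libibverbs); the sketch only shows the key idea that the adapter reads the application's registered buffer directly, so no per-layer copies occur:

```python
# Toy model of the RDMA send path: the adapter is handed a reference
# to the application's registered buffer instead of a fresh copy.

def rdma_send(registered_buffer: memoryview) -> tuple[bytes, int]:
    """Return the data as it goes onto the wire, plus the stack copy count."""
    stack_copies = 0                          # software stack is bypassed
    wire_data = registered_buffer.tobytes()   # single DMA-style read onto the wire
    return wire_data, stack_copies

app_buffer = bytearray(b"hello")              # the application's own memory
data, n = rdma_send(memoryview(app_buffer))   # n == 0 intermediate copies
```

Passing a `memoryview` rather than the bytes themselves mirrors what RDMA does in hardware: the adapter operates on the original memory region, not on a chain of per-layer copies.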
All-flash storage systems deliver much faster performance than disk-based systems, including significantly lower latency. As flash storage technology keeps improving, the traditional software stack accounts for a growing share of overall latency, so changes must be made there to reduce it. RDMA is one of the technologies that can do this.
RDMA is the standard data transfer mechanism for high-speed InfiniBand connections. It is also available in some Ethernet network interface cards (NICs), primarily 10 Gigabit Ethernet (GbE) and 40 GbE NICs, through protocols such as RDMA over Converged Ethernet (RoCE) and iWARP.
RDMA technology is frequently found in supercomputing environments running scientific applications that need the absolute lowest latencies and highest transfer rates, and it is often used for connections between nodes in compute clusters. Latency-sensitive database workloads are another common fit, since they perform best with the lowest-latency interconnect available.