Non-Volatile Memory Express, or NVMe, is an interface protocol standard that allows solid-state storage devices to communicate directly with the CPU over PCIe, removing the SCSI translation layer designed for spinning disks. It allows a massive increase in parallel I/O, supporting up to 64K queues with up to 64K commands per queue, compared with the single queue of 32 commands available to a SATA device under AHCI. NVMe SSDs can drastically improve performance and decrease latency compared to SAS or SATA SSDs; however, NVMe alone only covers communication between attached devices and the CPU. Without a similarly effective network protocol, the storage network can become a bottleneck despite the high performance achieved by NVMe devices.
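The scale of that parallelism is easy to quantify. The sketch below compares the protocol ceilings; note that 64K here means 65,535 (the maximum the spec allows), and real controllers typically advertise far fewer queues than the maximum:

```python
# Outstanding-command capacity: NVMe protocol maximums vs. the single
# AHCI/SATA queue. These are protocol ceilings, not typical device values.

NVME_MAX_QUEUES = 65_535        # up to 64K I/O queues
NVME_MAX_QUEUE_DEPTH = 65_535   # up to 64K commands per queue
AHCI_QUEUES = 1                 # legacy SATA/AHCI: one queue
AHCI_QUEUE_DEPTH = 32           # 32 outstanding commands (NCQ)

nvme_outstanding = NVME_MAX_QUEUES * NVME_MAX_QUEUE_DEPTH
ahci_outstanding = AHCI_QUEUES * AHCI_QUEUE_DEPTH

print(f"NVMe ceiling: {nvme_outstanding:,} outstanding commands")
print(f"AHCI ceiling: {ahci_outstanding:,} outstanding commands")
print(f"Ratio: ~{nvme_outstanding // ahci_outstanding:,}x")
```

Even granting that no real device approaches these ceilings, the gap of roughly eight orders of magnitude explains why NVMe removes the queuing bottleneck that legacy interfaces impose on flash.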
NVMe over Fabrics – NVMe-oF – extends the functionality of the NVMe protocol over a network fabric. Just as NVMe provides a significant performance boost for device connectivity compared to SAS or SATA, NVMe-oF is capable of drastic performance increases between hosts and storage targets over a fabric when compared to other network protocols such as iSCSI. Combining NVMe storage with NVMe-oF provides the full performance benefit of NVMe technology in what is known as “end-to-end NVMe”.
NVMe-oF can be implemented in a number of ways. Commonly, NVMe-oF is implemented using Remote Direct Memory Access (RDMA). There are multiple types of RDMA-based NVMe-oF, including InfiniBand, RoCE (RDMA over Converged Ethernet), and iWARP. In addition to RDMA-based NVMe-oF, Fibre Channel is supported via NVMe/FC, and TCP is supported via NVMe/TCP. Each implementation has varying benefits and complexities and may be better suited to specific use cases or environments. This paper examines the benefits and potential issues of NVMe/TCP.
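To make the TCP variant concrete, the sketch below builds the 128-byte Initialize Connection Request (ICReq) PDU that an NVMe/TCP host sends as the first bytes of a new connection, following the field layout described in the NVMe/TCP transport specification. The chosen field values (no digests, default data alignment) are illustrative assumptions; the sketch only constructs bytes and does not connect to a real target:

```python
import struct

def build_icreq(hpda: int = 0, enable_digests: bool = False,
                maxr2t: int = 0) -> bytes:
    """Sketch of an NVMe/TCP ICReq PDU: an 8-byte common header followed
    by ICReq-specific fields, padded to a fixed 128-byte size."""
    pdu_type = 0x00            # PDU-Type 0x00 = ICReq
    flags = 0x00
    hlen = 128                 # header length: the entire 128-byte PDU
    pdo = 0                    # no data payload, so no PDU data offset
    plen = 128                 # total PDU length
    common_header = struct.pack("<BBBBI", pdu_type, flags, hlen, pdo, plen)

    pfv = 0                    # PDU format version 0
    dgst = 0x03 if enable_digests else 0x00  # header/data digest enables
    specific = struct.pack("<HBBI", pfv, hpda, dgst, maxr2t)

    reserved = bytes(112)      # pad to the fixed 128-byte ICReq size
    return common_header + specific + reserved

pdu = build_icreq()
assert len(pdu) == 128
```

The point of the exchange this PDU opens is that NVMe/TCP needs nothing beyond an ordinary TCP socket: no RDMA-capable NICs, lossless Ethernet configuration, or Fibre Channel infrastructure, which is a large part of its operational appeal.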
Vendors Mentioned: Dell, VMware, NetApp, Huawei, LightBits, InfiniDat, Pavillion
Download the full, free Technical Insight report now!