Research Library

Microsoft Storage Spaces Direct – Product Brief

Last updated December 30, 2018 (general updates). This concise four-page report explores the deployments, usage, highlights, and strengths of Microsoft Storage Spaces Direct.

Microsoft Storage Spaces Direct (S2D) is a software-defined storage solution that’s included with the Windows Server 2016 Datacenter Edition operating system. S2D can be run on industry-standard server hardware with commodity storage devices to create a scale-out storage system or can provide shared capacity for virtual machines in a Hyper-V environment. S2D runs in the Windows Server 2016 kernel so, unlike most other SDS solutions, it doesn’t require a hypervisor.

Clusters contain a minimum of two and a maximum of sixteen servers, with up to 416 drives, and all server nodes should be of the same make and model. S2D nodes can be configured in “Converged” or “Hyperconverged” modes, allowing either a scale-out storage system or hyperconverged clusters to be created (see “Usage and Deployment”).
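
As a quick illustration of those sizing limits, here is a minimal Python sketch (ours, not part of S2D or its tooling) that checks a proposed cluster against the 2-to-16-server and 416-drive boundaries described above.

```python
# Hypothetical helper illustrating the S2D cluster limits stated above:
# 2 to 16 servers per cluster, and up to 416 drives in total.

def validate_s2d_cluster(servers: int, drives_per_server: int) -> int:
    """Return the total drive count if the configuration fits S2D's limits."""
    if not 2 <= servers <= 16:
        raise ValueError("an S2D cluster needs between 2 and 16 servers")
    total_drives = servers * drives_per_server
    if total_drives > 416:
        raise ValueError("S2D supports at most 416 drives per cluster")
    return total_drives

# Example: a 4-node cluster with 24 drives per node (96 drives) is valid
print(validate_s2d_cluster(servers=4, drives_per_server=24))
```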

S2D uses SMB block mode, a technology Microsoft built into version 3.0 of Server Message Block, the protocol most often associated with file services. In block mode, each volume is divided into 256MB blocks or “slabs”, which are spread around the cluster along with the redundant blocks required to support the chosen data resiliency option.
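
To make the slab layout concrete, the following Python sketch (our illustration of the general idea, not Microsoft's placement algorithm) carves a volume into 256MB slabs and spreads each slab's copies across distinct nodes.

```python
# Illustrative only: round-robin placement of 256MB slabs and their mirror
# copies across cluster nodes; S2D's real placement logic is more involved.

SLAB_SIZE = 256 * 1024**2  # 256MB slabs, as described above

def place_slabs(volume_bytes, nodes, copies=2):
    """Map each slab of a volume to `copies` distinct nodes."""
    n_slabs = -(-volume_bytes // SLAB_SIZE)  # ceiling division
    layout = {}
    for slab in range(n_slabs):
        # Each copy goes to a different node, so a node failure loses one copy at most.
        layout[slab] = [nodes[(slab + c) % len(nodes)] for c in range(copies)]
    return layout

# Example: a 1GB volume (four slabs), two-way mirrored across a 4-node cluster
for slab, owners in place_slabs(1024**3, ["node1", "node2", "node3", "node4"]).items():
    print(f"slab {slab}: {owners}")
```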

S2D is built on the Software Storage Bus (SSB), a virtual bus architecture that connects nodes using the SMB Direct feature of SMB 3.0, which runs over RDMA NICs (iWARP or RoCE) on 10GbE or 40GbE networks. In addition to RDMA connectivity, Microsoft has apparently done significant development work to boost “east-west” (node-to-node) performance by reducing latency and lowering the CPU overhead of SMB Direct. Microsoft has published test results of 80GB/s of throughput to workloads running on local Hyper-V VMs, using a 4-node cluster with 4 x 8-lane NVMe PCIe devices.
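
A quick back-of-envelope check (our arithmetic, assuming the brief means four NVMe devices per node and an evenly spread load) shows why that published figure is at least self-consistent:

```python
# Rough sanity check of the published 80GB/s figure; our arithmetic only,
# assuming four NVMe devices per node and throughput spread evenly.
cluster_throughput_gbs = 80   # GB/s across the cluster, as published
nodes = 4
nvme_devices_per_node = 4     # assumption: 4 x 8-lane NVMe in each node

per_node = cluster_throughput_gbs / nodes        # 20 GB/s per node
per_device = per_node / nvme_devices_per_node    # 5 GB/s per device
print(f"{per_node:.0f} GB/s per node, {per_device:.0f} GB/s per NVMe device")
# ~5 GB/s per device is within the raw bandwidth of an 8-lane PCIe 3.0 slot
# (8 lanes x ~985 MB/s ≈ 7.9 GB/s).
```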

Storage Spaces Direct offers multiple options for creating data resiliency. One is a distributed RAID model, which writes two or three copies (mirrors) of each data block to different devices in different nodes. Alternatively, a distributed parity model (erasure coding) offers a more space-efficient way to protect against one or two simultaneous device failures.
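
The capacity trade-off between these options is easy to quantify. The sketch below is a simplification (real S2D pools also reserve capacity for rebuilds), comparing usable capacity for a hypothetical 100TB raw pool:

```python
# Simplified usable-capacity math for the resiliency schemes described above;
# actual S2D efficiency depends on pool layout and rebuild reserve.

def mirror_efficiency(copies: int) -> float:
    """N-copy mirroring stores every block `copies` times."""
    return 1 / copies

def parity_efficiency(data: int, parity: int) -> float:
    """Erasure coding stores `data` data symbols plus `parity` parity symbols."""
    return data / (data + parity)

raw_tb = 100  # hypothetical pool size
print(f"two-way mirror:   {raw_tb * mirror_efficiency(2):.0f} TB usable")     # 50 TB
print(f"three-way mirror: {raw_tb * mirror_efficiency(3):.0f} TB usable")     # 33 TB
print(f"4+2 erasure code: {raw_tb * parity_efficiency(4, 2):.0f} TB usable")  # 67 TB
```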

Product Brief Includes: 

  • Overview
  • Highlights
  • Usage and Deployment
  • EvaluScale Product Review Table
  • Evaluator Group’s Opinion
To download, contact us about subscriptions or purchase.