Azure Stack HCI is the name given to hyperconverged products running Windows Server 2019 and marketed by server vendors in partnership with Microsoft, including Dell EMC, HPE, Fujitsu, Lenovo, Axellio, Supermicro and others. HCI solutions using Windows Server 2016, offered by essentially the same vendors, still fall under the Windows Server Software Defined (WSSD) program.
While Storage Spaces Direct isn’t the name of an HCI product, per se, it does provide most of the differentiation between this and other products in this category. For that reason, S2D is mentioned throughout this Product Brief, sometimes interchangeably with Azure Stack HCI.
Azure Stack HCI clusters contain a minimum of two nodes and a maximum of sixteen, with a maximum raw capacity of 4PB. All servers must have the same number of drives, and Microsoft recommends that server nodes be of the same make and model. A cluster can be configured with storage and compute functions on separate nodes, enabling storage and compute resources to scale independently; Microsoft calls this the “converged” mode. S2D can also be set up with both storage and compute functions residing on the same nodes, called the “hyperconverged” mode.
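The sizing constraints above (2–16 nodes, identical drive counts per node, 4PB raw) can be sketched as simple arithmetic. This is an illustrative sketch only; the function name and the example drive configuration are assumptions, not part of any Microsoft tooling.

```python
# Rough sizing sketch for an Azure Stack HCI cluster. The limits checked
# here (2-16 nodes, 4 PB raw) come from the text; everything else is
# hypothetical and for illustration only.
PB = 10**15

def raw_capacity_bytes(nodes: int, drives_per_node: int, drive_bytes: int) -> int:
    """Raw pool capacity before any resiliency overhead is applied."""
    if not 2 <= nodes <= 16:
        raise ValueError("Azure Stack HCI clusters require 2 to 16 nodes")
    capacity = nodes * drives_per_node * drive_bytes
    if capacity > 4 * PB:
        raise ValueError("exceeds the 4 PB raw capacity limit")
    return capacity

# Example: 4 nodes, each with 8 x 4 TB drives -> 128 TB raw
print(raw_capacity_bytes(4, 8, 4 * 10**12))
```

Note that usable capacity will be lower than this raw figure once a resiliency option (mirroring or parity) is applied.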
S2D uses Cluster Shared Volumes (CSV) as a clustered file system layer. CSV performs metadata synchronization and I/O forwarding between nodes using the SMB (Server Message Block) protocol.
S2D is built on the Storage Bus Layer (SBL), a virtual bus architecture that creates a fabric connecting all disks across all nodes, using SMB as the transport protocol. With a feature called SMB Direct, SBL can use RDMA-enabled NICs (iWARP or RoCE) over 25GbE (recommended for clusters of four or more nodes), although SMB Direct also supports networks of lower and higher bandwidth.
In addition to RDMA connectivity, SBL supports persistent memory (Intel Optane DC/3D XPoint), providing extremely low-latency storage performance. S2D also derives performance benefits from running in the Windows Server kernel (“kernel embedded”), compared with HCI software-defined storage layers that run as a VM.
Storage Spaces Direct offers multiple options for creating data resiliency. One uses a distributed RAID model, writing two or three copies (mirrors) of each data block to different devices in different nodes. Another, “nested resiliency”, is similar to a RAID 5+1, providing protection against multiple hardware failures on a two-node cluster. A distributed parity model (erasure coding) offers a more space-efficient way to protect against one or two simultaneous device failures.
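The trade-off between the mirror and parity options above is storage efficiency. The sketch below compares usable capacity under each model; the symbol counts in the parity example (e.g. 4 data + 2 parity) are illustrative assumptions, not S2D's actual internal layout.

```python
# Illustrative usable-capacity comparison for S2D resiliency options.
# Mirror: usable = raw / number_of_copies.
# Parity (erasure coding): usable = raw * data / (data + parity).

def mirror_usable(raw_tb: float, copies: int) -> float:
    """Usable capacity when each block is written `copies` times."""
    return raw_tb / copies

def parity_usable(raw_tb: float, data: int, parity: int) -> float:
    """Usable capacity under a data+parity erasure-coding scheme."""
    return raw_tb * data / (data + parity)

raw = 100.0  # TB raw pool
print(round(mirror_usable(raw, 2), 1))     # two-way mirror
print(round(mirror_usable(raw, 3), 1))     # three-way mirror
print(round(parity_usable(raw, 4, 2), 1))  # assumed 4+2 dual parity
```

This illustrates why parity is described as more space-efficient: a 4+2 dual-parity layout survives two device failures while consuming far less overhead than a three-way mirror.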
S2D has post-process deduplication and compression to optimize capacity, plus asynchronous and synchronous volume replication with Storage Replica to support remote data protection and disaster recovery. Azure Stack HCI does not support stretched clustering. For DR, Storage Replica can provide crash-consistent copies to another cluster in a remote site or to the Azure cloud, but does not automate the failover process. A storage quality of service (QoS) feature allows the creation of policies that specify a minimum and/or maximum performance level for VMs or virtual hard disks.
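The QoS limits mentioned above are expressed in normalized IOPS: Windows Storage QoS charges each request in units of a normalization size (8KB by default), so one 32KB request counts as four normalized IOPS. The sketch below shows that accounting; it is a simplified model, not the actual implementation.

```python
import math

# Sketch of Storage QoS "normalized IOPS" accounting. Each I/O request
# is charged in units of a normalization size (8 KB by default), rounded
# up, so large requests consume more of a policy's IOPS budget.

def normalized_iops(io_size_bytes: int, unit_bytes: int = 8 * 1024) -> int:
    """Normalized I/O units charged for a single request."""
    return max(1, math.ceil(io_size_bytes / unit_bytes))

print(normalized_iops(4 * 1024))    # small I/Os still count as one unit
print(normalized_iops(8 * 1024))    # exactly one unit
print(normalized_iops(32 * 1024))   # charged as four units
```

This is why a workload issuing large sequential I/Os can hit a policy's maximum sooner than its raw request count would suggest.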
The Resilient File System (ReFS) is included in Windows Server 2019 and integrated with S2D. It can automatically repair detected data corruption using the appropriate data block copies maintained in the cluster. ReFS offers real-time tier optimization, block cloning and variable cluster sizes to improve performance, and also supports deduplication.
Product Brief Includes: