Research Library

Datrium – Industry Snapshot

Last updated April 17, 2017, with minor corrections. An Industry Snapshot discussing how Datrium's open convergence makes HCI more cloud-like.

Datrium’s Open Convergence makes HCI more cloud-like

Hyperconverged Infrastructures (HCIs) can help address the complexity and long deployment times of traditional 3-tier infrastructures by consolidating compute, storage and virtualization into a single appliance form factor. HCIs also allow the use of industry-standard server hardware and commodity storage devices to lower cost, much like the hyper-scale systems that large cloud and social media companies developed. But HCIs don’t provide the capacity, efficiency or performance at scale that enterprises and large clouds require. Datrium’s DVX platform is designed to address this issue with a unique “open convergence” architecture. The company also recently announced a new Blanket Encryption feature that doesn’t conflict with deduplication.

Open Convergence

Open convergence refers to the ability to leverage the benefits of scale-out, software-defined storage without being constrained by the HCI architecture. It decouples the IO processing function from the persistent storage function, providing more flexible scaling and the option to use (or reuse) almost any server hardware and solid state devices.

DVX Rackscale

Datrium’s DVX Rackscale system consists of the DVX Data Node, a highly available server that stores persistent data, and DVX Compute Nodes, which come pre-loaded with VMware vSphere and Datrium DVX Software to provide IO processing and data services. The Data Node handles data availability and resiliency functions and shares a pool of capacity with all compute nodes in the cluster via 10GbE. DVX Rackscale systems allow a “mix and match” of DVX Compute Nodes with third-party x86 servers, whether new or already installed in a customer’s environment.

The DVX Software runs on each compute node as a VMware Installable Bundle (VIB) and handles all data services, like deduplication, compression, erasure coding, cloning, etc. Workloads are served out of a large flash or NVMe cache, sized to hold the entire application data set on each host node.

Datrium sells the DVX Data Node as an appliance, with each 2U chassis containing 12 HDDs and 29 TB of usable capacity, yielding a range of 60 to 180 TB of effective capacity depending on data reduction. As persistent storage, each appliance has redundant, hot-swappable controllers with mirrored NVRAM, plus redundant power supplies and fans. The DVX Compute Node is a 1U, 16- or 28-core Intel Xeon server, configurable with up to 768GB of RAM and up to 8 SSDs. While the DVX Software is pre-loaded on DVX Compute Nodes, it installs as a VIB on ESXi, allowing deployments to use any appropriate hardware platform, even re-purposing existing assets, turning them into compute nodes within the overall DVX Rackscale system.
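
As a rough sanity check on those figures, effective capacity is simply usable capacity multiplied by the data reduction (deduplication plus compression) ratio a workload achieves. The ratios in the sketch below are illustrative assumptions, not Datrium specifications:

```python
USABLE_TB = 29  # usable capacity per 2U Data Node (from the spec above)

for ratio in (2.0, 4.0, 6.0):  # assumed reduction ratios, not Datrium figures
    print(f"{ratio:.0f}:1 reduction -> ~{USABLE_TB * ratio:.0f} TB effective")

# ~2:1 yields ~58 TB and ~6:1 yields ~174 TB, consistent with the quoted
# 60-180 TB effective range.
```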

Writes are mirrored to NVRAM in the Data Node, then coalesced and written to the Data Node drives using an 8+2 erasure coding scheme. Because caching and deduplication happen on the host, each node sends only unique blocks, already compressed and deduplicated locally, to the Data Node, where global deduplication across all host data then occurs.
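
For context on the 8+2 scheme, the capacity efficiency of any N+K erasure code is N/(N+K). The helper below is a generic sketch, not Datrium code:

```python
def ec_efficiency(data_segments: int, parity_segments: int) -> float:
    """Fraction of raw capacity available for data in an N+K scheme."""
    return data_segments / (data_segments + parity_segments)

# Datrium's 8+2 scheme writes 10 segments for every 8 data segments, so
# 80% of raw capacity holds data while tolerating two device failures.
print(ec_efficiency(8, 2))        # 0.8
# Compare with 2-way and 3-way mirroring: 50% and ~33% efficiency.
print(ec_efficiency(1, 1), ec_efficiency(1, 2))
```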

By having each Compute Node connect to a shared storage pool on the Data Node, DVX eliminates the distribution of data across nodes that most HCIs perform. This also eliminates the ‘east-west’ network traffic that results when nodes are added, data sets are recovered or VMs are moved, since with Datrium each Compute Node accesses a copy of all data directly from the Data Node. This aspect of the DVX helps provide more predictable performance when scaling, better support for mixed workloads and a more resilient server infrastructure, since hosts remain stateless.

Blanket Encryption

Datrium recently announced the availability of a “blanket encryption” technology that avoids the conflict with deduplication that other storage systems and HCIs face. Deduplication compares data blocks before they’re stored, using a hashing technique that generates a unique ‘fingerprint’ for each block. Encryption essentially randomizes these blocks, defeating fingerprint matching, so most storage systems rely on self-encrypting drives instead. But that approach only protects data at rest.
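
The toy sketch below shows why the two features conflict: identical blocks produce identical fingerprints, but encrypting first (simulated here with a fresh random keystream, a stand-in for a real cipher with per-write IVs) makes duplicates unrecognizable. This is a generic illustration, not Datrium’s implementation:

```python
import hashlib
import os

def fingerprint(block: bytes) -> str:
    # Content fingerprint: identical blocks always hash to the same value,
    # which is what lets deduplication detect duplicates.
    return hashlib.sha256(block).hexdigest()

def toy_encrypt(block: bytes) -> bytes:
    # Stand-in for a real cipher: a fresh random keystream per write, so
    # the same plaintext produces different ciphertext each time.
    key = os.urandom(len(block))
    return bytes(a ^ b for a, b in zip(block, key))

block = b"A" * 4096
print(fingerprint(block) == fingerprint(block))    # True: dedupe works
print(fingerprint(toy_encrypt(block)) ==
      fingerprint(toy_encrypt(block)))             # False: encrypt-first breaks it
```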

DVX runs the fingerprinting process and then compresses and deduplicates each block in RAM on the host. Then it encrypts the data blocks, but not the metadata fingerprints, before they’re written to flash or sent to the NetShelf. This assures the metadata used to control the deduplication process is unaffected by encryption and provides protection for data in-process and in-flight, not just at-rest.
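
A minimal sketch of that ordering (compression omitted for brevity, and all names hypothetical rather than Datrium interfaces): fingerprint and deduplicate the plaintext block first, then encrypt only the unique blocks before they leave the host:

```python
import hashlib
import os

def fingerprint(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def toy_encrypt(block: bytes) -> bytes:
    # Placeholder for a real cipher; see the previous sketch.
    key = os.urandom(len(block))
    return bytes(a ^ b for a, b in zip(block, key))

def write_block(block: bytes, dedupe_index: set, store: dict) -> str:
    fp = fingerprint(block)             # computed on plaintext; metadata stays clear
    if fp not in dedupe_index:          # duplicates are dropped before encryption
        dedupe_index.add(fp)
        store[fp] = toy_encrypt(block)  # only unique blocks are encrypted and sent
    return fp

index, store = set(), {}
write_block(b"A" * 4096, index, store)
write_block(b"A" * 4096, index, store)  # duplicate: nothing new stored
print(len(store))                       # 1
```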

Evaluator Group Comments

Datrium’s architecture reflects an evolution of software-defined storage (and converged infrastructure) technology that we’re seeing from a few other vendors as well, although none are doing it quite like this. Some SDS products are disaggregating the storage functions from the compute functions to provide greater efficiency and scale. Datrium has refined this by keeping the software-based I/O services and active data on the host/Compute Node, and moving only the persistent data storage functions to the shared Data Node. This eliminates the data movement between Compute Nodes, combining some of the benefits of a shared storage array with a hyperconverged infrastructure.

By separating the IO processing from persistent storage, Datrium eliminates the scaling inefficiency that HCIs can have when capacity expansion requires adding more compute resources with each node. Instead, Datrium’s architecture allows storage capacity to expand independently of the IO compute resources, while improving performance at scale. This also frees host resources from the inter-node data handling that HCIs must do. Overall, the Datrium approach brings HCI-like simplicity to large-scale, mixed-use environments, like those that support private clouds.
