How to Choose a Hyperconverged Infrastructure Solution – by Eric Slack

By Eric Slack, Wednesday, January 11th, 2017

Categories: Analyst Blogs


Evaluating products in the IT space is a complex process. A simple “feeds and speeds” comparison isn’t enough as features and functionality proliferate. This is especially true in the Hyperconverged Infrastructure (HCI) space, where the product evaluation now encompasses compute and management functions, not just storage.

To help with this process, Evaluator Group has created the Eval(u)Scale, a matrix of product criteria arranged in “top 10” fashion, with minimum requirements for each characteristic. The Product Briefs for vendors covered in the Evaluator Group HCI appliance research each contain an Eval(u)Scale with comments on how well they meet those requirements. For more information, see the Evaluation Guide for this research category.

In this and subsequent blogs we’ll cover the Eval(u)Scale, detailing these characteristics and mentioning a few products that provide good examples of each criterion.

Model Options

Since HCIs are pre-configured, the amount of resources they include in each model is somewhat fixed. This makes model options an important characteristic of each product. Also, hyperconverged solutions provide a turnkey compute infrastructure, so you’re making these resource choices upfront, to a large extent. Models are typically designed to support specific applications or functions, like the following:

Small, entry level, ROBO – low cost, lower resources, single CPU, lower core counts (sometimes able to run as a single-node)

VDI – more CPU and memory, often lots of flash capacity

Storage – high-capacity hybrid storage for content-rich use cases or as a way to expand storage more efficiently

General Purpose – balanced resource combinations, often in a higher-density frame (4-node chassis), designed for server consolidation and SMB/mid-market environments

High Performance – highest CPU and memory configurations, often have high storage capacity, including all-flash

Storage-Only – really “storage-heavy,” since these nodes must have some CPU and memory in order to participate in the cluster, although some are designed not to run compute VMs or virtual desktops

Most vendors are adding more nodes to their line cards, especially all-flash nodes. Nutanix has had one of the most complete lineups of appliance models, and has recently added all-flash options for most of them. Dell EMC released a new collection of VxRail models with their switch to Dell servers. They actually named these by use case (Entry, VDI, General Purpose, Performance and Storage), which simplifies the choice somewhat. They also have all-flash configurations for most models.

Scalability

Right after model selection is the ability to scale, and to scale efficiently, since resources are the biggest cost component. HCI solutions, as a category, offer simple, incremental scalability by adding individual nodes. But with this comes the criticism that they force resources to be added in lock-step, since each node aggregates compute and storage in the same box.

There’s also a need for scaling down, or starting small, since a common use case for HCIs is as a remote office solution. To address this, vendors are offering smaller nodes and the ability to create smaller clusters, even a “cluster” of one.

Some vendors get around the scaling efficiency issue, to an extent, by providing “storage only” nodes – really storage-heavy nodes since some CPU and memory is required to support a hypervisor and the SDS layer. One vendor, Cisco, allows UCS servers to be connected into the cluster, providing a “compute-only” node.
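To make the lock-step criticism concrete, here is a toy sketch of the arithmetic. The node specs are made up for illustration and don’t reflect any vendor’s actual configurations; the point is only that expanding storage with converged nodes drags unneeded compute along, while storage-heavy nodes don’t.

```python
# Illustrative node specs (hypothetical, not any vendor's actual models).
CONVERGED = {"cores": 24, "ram_gb": 256, "storage_tb": 20}
# Storage-heavy node: minimal CPU/RAM to run the hypervisor and SDS layer.
STORAGE_HEAVY = {"cores": 8, "ram_gb": 64, "storage_tb": 80}

def cluster_totals(nodes):
    """Sum each resource across a list of node-spec dicts."""
    totals = {}
    for node in nodes:
        for resource, amount in node.items():
            totals[resource] = totals.get(resource, 0) + amount
    return totals

# Goal: add 160 TB of storage capacity.
# Lock-step scaling: 8 converged nodes reach the target...
lockstep = cluster_totals([CONVERGED] * 8)
# ...but bring 192 cores and 2 TB of RAM along with them.
print(lockstep)

# Storage-heavy scaling: 2 nodes hit the same target with far
# less stranded compute (16 cores, 128 GB RAM).
print(cluster_totals([STORAGE_HEAVY] * 2))
```

The same kind of back-of-the-envelope comparison is useful when weighing compute-only nodes: the resource being expanded should dominate the cost of each node added.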

HPE’s StoreVirtual VSA scale-out SDS architecture underlies their HCI solution and allows external StoreVirtual VSA appliances to be incorporated into the cluster. Others, like Maxta’s MaxDeploy, allow direct-attached or even network-attached storage capacity to be accessed by VMs in the cluster.

Some SDS technologies are built on a disaggregated architecture, meaning nodes are configured purely for storage or compute and are only added as needed. Examples include Hedvig, EMC’s ScaleIO, and IBM’s Spectrum Accelerate, all of which could be used to create a hyperconverged cluster, although they’re currently not sold as HCI appliances, per se. Microsoft’s Storage Spaces Direct (S2D) is the newest SDS solution to come out with a disaggregated architecture.

In the next blog we’ll look at Economics and Performance.


