How to Measure Scalability in Hyperconverged Appliances – Eric Slack

By Eric Slack, Monday, May 9th 2016

Categories: Analyst Blogs

Tags: blogs, hyperconverged, IOmark, scalability, VDI, VM

Scalability in IT systems is a fundamental requirement. Most compute environments need to grow, or at least need the ability to grow as the business expands, so infrastructure choices are often made based on a product's ability to meet this demand for resources. But how do you measure scalability? In a storage system it's capacity, tempered with processing power. In a server running applications it's CPU speed and core count, backed by enough memory. Now, with the pervasiveness of server virtualization and the popularity of hyperconverged appliances, a more comprehensive measure of scalability is needed, one focused at the VM level.

Scaling a Storage System

Scalability in a storage system has historically meant capacity – the size and number of disk drives supported – and how easily they can be increased. “Scale-up” systems got bigger by adding more drive shelves but eventually hit a point where storage performance was limited by available processing power in the storage controllers.

Scale-out systems allowed that processing to expand, usually by combining storage capacity and storage controllers in the same physical modules. They made it possible to scale much larger, but could also be less efficient since linking capacity and storage processing restricted the independent growth of these resources. Now hyperconverged infrastructures are making the definition of scalability more complicated.

Scalability in a Hyperconverged Appliance

Hyperconverged systems combine the compute function with the storage function (both capacity and storage processing), as well as the networking that connects server nodes when multiple nodes are housed in the same chassis. These turnkey systems are almost always used in virtual server environments, and most vendors support multiple hypervisors. Hyperconverged Appliances (HCAs) are usually clustered architectures that physically expand by adding modules. But is the scale of these systems determined simply by the number of nodes or chassis that are connected in a cluster?

The typical HCA vendor literature lists a number of characteristics, below the module level, that factor into a system's ability to scale. These include the number and size of storage devices, the type of storage (HDDs and SSDs), and often both the raw and 'effective' capacities of these resources. They list the number of CPU cores, the amount of memory, the number and bandwidth of network interface cards, and even how many disk failures the system can sustain without losing data. But what's the best metric for comparing real scalability in a hyperconverged appliance?
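
The gap between raw and 'effective' capacity is a good example of why these spec-sheet characteristics need scrutiny. As a rough illustration only (the function and every figure below are hypothetical, not drawn from any vendor's data sheet), an effective-capacity claim is typically assembled from raw capacity, an assumed data-reduction ratio, and the overhead of data protection:

```python
# Illustrative sketch only -- figures are hypothetical, not from any vendor data sheet.

def effective_capacity_tb(raw_tb: float, data_reduction: float,
                          protection_overhead: float) -> float:
    """Usable capacity after applying an assumed data-reduction ratio
    (deduplication/compression) and subtracting the share of raw capacity
    consumed by data protection (replication or erasure coding)."""
    return raw_tb * (1.0 - protection_overhead) * data_reduction

# Example: 20 TB raw per node, a claimed 2.5:1 reduction ratio,
# and 2-way replication consuming half of the raw capacity.
print(effective_capacity_tb(raw_tb=20, data_reduction=2.5, protection_overhead=0.5))
# 25.0 -- the headline 'effective' number depends heavily on the assumed reduction ratio.
```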

The VM as a Measure of Scalability

The most pertinent measure of scale in a hyperconverged appliance is the number of virtual machines (or virtual desktops) the system will support. The requirement to expand a virtual server environment is driven by the applications running on each host, specifically by the need to spin up additional VMs. When the demand for more virtual machines exceeds the supply of resources to support them, administrators 'hit the scale button'. The question then becomes how to determine how many VMs each module, or node, can accommodate.
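
As a back-of-the-envelope illustration (all of the node and per-VM numbers below are hypothetical assumptions), a naive estimate simply divides each node resource by a per-VM profile and takes the most constrained result:

```python
# Naive rule-of-thumb: VMs per node is limited by the scarcest resource.
# Node and per-VM figures are hypothetical assumptions, not measured values.

NODE = {"vcpus": 48, "memory_gb": 512, "effective_tb": 25.0, "iops": 80_000}
PER_VM = {"vcpus": 2, "memory_gb": 8, "effective_tb": 0.25, "iops": 600}

def vms_per_node(node: dict, per_vm: dict) -> int:
    """Return the VM count implied by the bottleneck resource."""
    return int(min(node[r] / per_vm[r] for r in per_vm))

print(vms_per_node(NODE, PER_VM))  # 24 -- vCPU is the bottleneck in this example
```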

Users are most concerned with ensuring that current VMs are well supplied and that their infrastructure can support the new virtual machines they expect to add without impacting existing workloads. This kind of planning takes more than a rule-of-thumb estimate like the one above; it requires testing tools that can generate real-world workloads and produce repeatable results using industry-standard metrics.

Hyperconverged appliance vendors are now describing their systems in terms of the virtual machine, providing resource management at the VM level. VVols are VMware's effort to move the unit of management to the VM, away from resource-specific metrics such as GBs of capacity, CPU cores, etc. In addition to supporting VVols, some HCAs let policies be applied per VM governing where and how it stores data, which efficiency methods are applied (like deduplication), and even its data protection parameters. Now we're seeing HCA vendors start to provide accurate specs on how many virtual machines and virtual desktops their products can support, and one of the tools they're using is IOmark.
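
A per-VM policy of this kind might look something like the following sketch (the class and its fields are hypothetical illustrations, not any vendor's actual API):

```python
# Hypothetical illustration of VM-granular storage policy; field names are assumptions.
from dataclasses import dataclass

@dataclass
class VMStoragePolicy:
    vm_name: str
    replication_factor: int   # how many copies of the VM's data are kept across nodes
    deduplication: bool       # whether dedup is applied to this VM's data
    compression: bool
    snapshot_schedule: str    # e.g. "hourly", "daily"
    placement: str            # e.g. "all-flash" vs. "hybrid" tier

policy = VMStoragePolicy("sql-prod-01", replication_factor=2,
                         deduplication=True, compression=True,
                         snapshot_schedule="hourly", placement="all-flash")
```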

Getting Accurate VM and VDI Metrics

IOmark is a suite of storage workload generation tools and benchmarks designed to recreate accurate application workloads in order to test storage system performance. The suite includes IOmark-VM for virtual server applications and IOmark-VDI for virtual desktop applications. These workloads may be used in conjunction with other testing, or to run IOmark certified benchmarks. Using the concept of I/O capture and replay, IOmark is able to recreate application workloads efficiently, without a large-scale test environment and without requiring application software to be resident on the test servers.
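
The capture-and-replay idea can be sketched in a few lines (a conceptual illustration only, not IOmark's code): a captured trace of timestamped reads and writes is replayed against a test target, preserving the original access pattern and timing.

```python
# Conceptual sketch of I/O capture and replay -- not IOmark's implementation.
# Each trace record is (seconds_since_start, byte_offset, size_bytes, operation).
import os, time

TRACE = [            # hypothetical captured records
    (0.000, 0,        4096, "read"),
    (0.004, 1048576,  8192, "write"),
    (0.010, 524288,   4096, "read"),
]

def replay(trace, path="replay_target.bin", target_size=4 * 1024 * 1024):
    """Replay a captured I/O trace against a test file, honoring original timing."""
    with open(path, "w+b") as f:
        f.truncate(target_size)
        start = time.monotonic()
        for t_off, offset, size, op in trace:
            delay = t_off - (time.monotonic() - start)
            if delay > 0:
                time.sleep(delay)     # wait for the original inter-arrival gap
            f.seek(offset)
            if op == "read":
                f.read(size)
            else:
                f.write(os.urandom(size))

replay(TRACE)
```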

Storage system vendors can purchase IOmark to run in-house or set up testing with Evaluator Group. End users can license IOmark for 30 days at no cost. For more information contact IOmark.



 
