There’s a new word cropping up in the storage industry, more often in presentations than in published writing (possibly because spell check doesn’t work while one’s giving a presentation). That word is “performant” and, although not actually a word, it’s being used to describe a storage system as having good performance. Now storage performance is a complicated concept, one that doesn’t lend itself to single word descriptors, so there’s little to lose in cutting this mutant from the vocabulary. But understanding performance is important, especially as solid-state storage devices are taking over from disk drives and all-flash arrays become more common.
How to Measure Performance
In storage systems, performance used to mean throughput, or the speed at which data could be transferred into or out of a storage device. GB per second was used to calculate how long it would take to restore a backup from tape or disk, or how fast a big file could be copied from its repository. As virtualization, analytics and transaction-based computing grew, IOPS was added to the performance discussion. I/O Operations Per Second measures the number of reads or writes a system can complete each second. This metric is affected more by the time it takes to locate and start reading (or writing) the first byte of data, the access time, than by the time needed to move each successive byte.
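As a rough illustration of why the two metrics answer different questions, consider the back-of-envelope arithmetic below. All figures are hypothetical, chosen only to show the shape of the calculation, not drawn from any measured system:

```python
# Throughput answers: "how long to move a large amount of data?"
backup_size_gb = 10_000          # hypothetical 10 TB backup set
throughput_gb_per_s = 2.5        # hypothetical sustained transfer rate
restore_time_s = backup_size_gb / throughput_gb_per_s
print(f"Restore time: {restore_time_s:.0f} s (~{restore_time_s / 3600:.1f} h)")

# IOPS answers: "how many small, independent reads or writes per second?"
access_time_s = 5e-3             # hypothetical 5 ms to reach the first byte
transfer_time_s = 4e-6           # moving a 4 KB block is comparatively tiny
iops = 1 / (access_time_s + transfer_time_s)
print(f"Approx. IOPS per device: {iops:.0f}")
```

Note how the small-block I/O rate is dominated almost entirely by the access time, exactly as the paragraph above describes.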
Hard disk drive (HDD) access time, the time required to move the heads to the desired track (seek time) and spin the platter to the correct position (rotational latency), is in the low milliseconds. In solid-state drives (SSDs), access time is hundreds of times faster, typically in the tens of microseconds. This huge performance differential has forever changed the expectations of storage users and added another factor to the performance discussion.
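The scale of that differential is easy to see with representative (assumed, not measured) access times. Treating access time as the per-request cost with a single outstanding request gives a crude upper bound on small-block IOPS per device:

```python
# Assumed representative figures: ~5 ms HDD access vs ~50 µs SSD access.
hdd_access_s = 5e-3
ssd_access_s = 50e-6

# Crude upper bound on small-block IOPS with one request in flight at a time:
hdd_iops = 1 / hdd_access_s
ssd_iops = 1 / ssd_access_s
print(f"HDD: ~{hdd_iops:.0f} IOPS, SSD: ~{ssd_iops:.0f} IOPS "
      f"({ssd_iops / hdd_iops:.0f}x faster)")
```

Real devices exceed these single-request bounds through internal parallelism, but the two-orders-of-magnitude gap in access time is what reset user expectations.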
Solid state storage technologies removed the long access times inherent in hard disk drives. This has forced storage vendors to redesign their controller architectures to better take advantage of the medium’s capabilities. It has also put the focus on latency when we try to further increase storage performance. Where IOPS measures the number of I/O transactions a given system can process in a second, latency measures the time each individual transaction takes to complete. This is the round trip from the time a request for data is received by the storage system until that data object is on its way to the host making the request.
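The connection between the two metrics can be sketched with Little’s Law, a standard queueing-theory result (not something from this article): sustained IOPS equals the number of outstanding requests divided by per-request latency. The numbers below are illustrative only:

```python
# Little's Law applied to storage: IOPS = outstanding requests / latency.
def sustained_iops(queue_depth: int, latency_s: float) -> float:
    """Steady-state I/O rate for a given concurrency and per-request latency."""
    return queue_depth / latency_s

# With the same 100 µs per-request latency, deeper queues raise total IOPS,
# but each individual transaction still takes 100 µs to complete.
print(sustained_iops(1, 100e-6))    # one request in flight
print(sustained_iops(32, 100e-6))   # 32 requests in flight
```

This is why a system can post impressive IOPS numbers at high queue depths while individual transactions, the thing latency measures, are no faster at all.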
Latency is impacted by some hardware elements, such as connectors and the length of the physical data path, but it’s affected more by software. This includes the software stacks that must be processed as a request moves through the data path, such as data protection, error correction code and data reduction, as well as the load on the CPU doing that processing. Latency is also impacted by buffers, queues and the time spent waiting for serial processes from the previous transaction to complete.
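A simplified way to picture this is as a sum of per-stage delays along the data path. The stage names and microsecond figures below are invented for illustration; they do not model any real controller:

```python
# Hypothetical per-request delays along the data path, in microseconds.
data_path_us = {
    "connectors / physical path": 10,
    "protocol processing": 15,
    "data reduction": 40,
    "error correction / data protection": 20,
    "media access": 50,
    "queueing behind prior requests": 25,
}

total_us = sum(data_path_us.values())
hardware_us = data_path_us["connectors / physical path"] + data_path_us["media access"]
software_us = total_us - hardware_us
print(f"Total latency: {total_us} µs "
      f"(hardware: {hardware_us} µs, software and queueing: {software_us} µs)")
```

Even with these made-up numbers, the pattern matches the paragraph above: the software stack and queueing, not the wire or the media, account for most of the round trip.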
Performance in a Hyperconverged Infrastructure
Measuring storage performance isn’t an easy task, but it’s especially difficult in a complex system that includes compute and storage networking as well. While storage is often the performance bottleneck in hyperconverged infrastructures (HCIs), typical storage metrics like throughput, IOPS or even latency alone don’t provide a very accurate indicator for comparing the capabilities of one HCI with another. What’s more useful is a measure of VMs supported. Unfortunately, most HCI vendors use a “rule of thumb” estimate for determining how many VMs each configuration can accommodate.
A better solution is to use a method that’s designed to replicate actual workloads and run them against a given configuration of CPU cores, memory and storage. IOmark is a suite of storage workload generation tools and benchmarks designed to recreate accurate application workloads in order to test storage system performance. Using the concept of I/O capture and replay, IOmark is able to recreate application workloads efficiently, without a large-scale test environment and without requiring that application software be resident on the test servers. For more information contact IOmark.
Many products have long lists of features that sound the same but work very differently. It’s important to think outside of the checkbox of similar-sounding features and understand how technologies and products differ.