Software-defined storage, or SDS, is a somewhat ambiguous term that has been with us for over 20 years but is still at the forefront of storage technology. Initially, it described storage software that ran on commodity hardware, making it more flexible and cost-effective than traditional arrays. Then it became the storage foundation for hyperconverged infrastructure (HCI), creating an integrated compute platform for virtual machines. Now, SDS is working its magic in container environments running Kubernetes with container-native storage.
But software-defined storage architecture is also finding its way back into the storage world as companies look to pull the storage software away from the hardware in order to build storage systems that are cost-effective at scale. In this article, we’ll provide a quick history of software-defined storage and explore its evolution. We’ll also discuss two SDS products that embody this original concept of flexible, scalable storage that costs less.
A Short History of the (SDS) World
The concept of software-defined storage is tied to virtualization, which provides the ability to abstract physical resources in software and then commit or allocate them as needed. One of the first examples of software-defined storage was DataCore SANsymphony, a product that was released in about 2000 and is still the central technology for that company. Around the same time, VMware introduced another example of software-defined technology: its hypervisor, which virtualizes compute resources (along with networking and storage) to create virtual machines.
For the last two decades, the IT world has been focused on server virtualization as VMware built up its empire and companies virtualized more and more of their workloads. This was aided by the advent of multicore processors and the reduction in memory and storage prices. About 10 years ago, HCI came onto the scene and built on the acceptance of VMware by incorporating SDS into a combined server and storage virtualization platform. HCI has steadily grown to become a standard infrastructure choice for companies of all sizes. In fact, based on the latest Evaluator Group study, almost half of enterprise IT professionals surveyed said they would use HCI for any workload in their data center. We’ll have more information on the results from this study in the next article.
For the past five-plus years, most SDS interest and development has related to HCI. However, data requirements continue to grow, and companies are again looking at software-defined storage as a pure storage solution, one that can replace dedicated storage arrays. HCI has not proven cost-effective enough in larger deployments, due in part to its inability to scale storage capacity independently of compute (disaggregated designs notwithstanding). By separating the software from the hardware, SDS can offer a more affordable storage solution, especially at larger scale.
SolidFire, maker of one of the first all-flash arrays, was founded in 2009 and acquired by NetApp in 2016. At the heart of the SolidFire product is Element OS, an operating system developed to run all-flash storage. SolidFire’s technology was aimed at enterprises and service providers that needed consistent, highly available storage. The architecture is scale-out, meaning it comprises a cluster of standard server nodes, each with internal storage devices (solid-state drives, or SSDs) that are combined with an SDS layer.
A big part of SolidFire’s technology is associated with its Quality of Service (QoS) feature, which provides a consistent level of performance regardless of which workloads the system is running or how much of its capacity is consumed. SolidFire recognized that QoS could help an all-flash storage system leverage the high level of storage IO performance that SSDs offered. This resonated with service providers, companies that make up a large part of SolidFire’s customer base.
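To make the idea concrete, per-volume QoS of this kind is often described in terms of minimum, maximum and burst IOPS limits. The sketch below is a simplified, hypothetical model of that behavior (the class and field names are illustrative, not the Element OS API): a volume is capped at its maximum rate, but unused headroom accrues as credits that allow short bursts above the cap.

```python
# Hypothetical sketch of per-volume QoS with min/max/burst IOPS limits.
# Names and numbers are illustrative assumptions, not the Element OS API.

class VolumeQoS:
    def __init__(self, min_iops, max_iops, burst_iops):
        self.min_iops = min_iops      # floor the scheduler tries to guarantee
        self.max_iops = max_iops      # sustained ceiling per interval
        self.burst_iops = burst_iops  # absolute ceiling when credits exist
        self.credits = 0              # unused headroom banked below max_iops

    def allowed_iops(self, requested):
        """Cap requested IOPS for one interval, banking unused headroom."""
        ceiling = min(self.burst_iops, self.max_iops + self.credits)
        granted = min(requested, ceiling)
        # Bank headroom when the volume runs below its max, up to the
        # burst allowance; spend credits when it runs above its max.
        self.credits = max(0, min(self.credits + self.max_iops - granted,
                                  self.burst_iops - self.max_iops))
        return granted


vol = VolumeQoS(min_iops=500, max_iops=1000, burst_iops=2000)
print(vol.allowed_iops(300))   # quiet interval: well under the cap
print(vol.allowed_iops(5000))  # spike: allowed above max using credits
print(vol.allowed_iops(5000))  # credits spent: held to max_iops
```

The practical effect is the "noisy neighbor" protection the article describes: one tenant's spike can briefly borrow its own banked headroom, but it can never starve other volumes below their configured floors.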
SolidFire was originally sold as an appliance, but NetApp recently released it as a software-only solution called eSDS, short for Enterprise Software-Defined Storage. NetApp also created an HCI product with Element OS, called NetApp HCI, that enables SolidFire storage nodes to be connected to the HCI cluster.
StorONE is an Israeli company started by Gal Naor, the founder of Storwize, which IBM acquired in 2010. StorONE’s S1 is a scale-up storage system that runs on single- or dual-server controllers connected to standard rack-mounted chassis full of disk drives or SSDs. Like SolidFire, S1 was designed with the performance of SSDs in mind. In fact, StorONE spent six years developing its technology in stealth mode before release in 2017.
This development effort produced a multiprotocol storage system (block, file and object storage) with storage services redesigned for resource efficiency and for parallel execution across the multiple cores of modern CPUs. This reduces latency, resulting in the ability to utilize up to 85% of the IO capacity of an SSD, according to the company. S1 also writes directly to SSDs; there is no caching layer. With Optane and NVMe devices, the system provides fast acknowledgement of writes and eliminates potential issues with data coherency. StorONE has a tiering process that moves data out of the faster storage and into standard SAS-based QLC flash.
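The tiering step can be illustrated with a minimal sketch. This is not StorONE's actual implementation; it simply assumes the common pattern the article implies: writes land directly on the fast NVMe/Optane tier (no cache layer), and blocks that go cold are later demoted to QLC flash. All names and thresholds here are hypothetical.

```python
# Illustrative sketch of fast-tier-to-QLC demotion, assuming a simple
# last-access-time policy. Not StorONE's actual algorithm.
import time


class TieringEngine:
    def __init__(self, cold_after_s=3600):
        self.cold_after_s = cold_after_s
        self.fast_tier = {}    # block_id -> last write/access timestamp
        self.qlc_tier = set()  # blocks demoted to QLC flash

    def write(self, block_id, now=None):
        # Writes go straight to the fast tier; a rewritten block is
        # promoted back out of QLC.
        self.fast_tier[block_id] = now if now is not None else time.time()
        self.qlc_tier.discard(block_id)

    def demote_cold(self, now=None):
        # Move blocks untouched for cold_after_s down to QLC flash,
        # freeing fast-tier capacity for new writes.
        now = now if now is not None else time.time()
        cold = [b for b, t in self.fast_tier.items()
                if now - t >= self.cold_after_s]
        for b in cold:
            del self.fast_tier[b]
            self.qlc_tier.add(b)
        return cold


eng = TieringEngine(cold_after_s=10)
eng.write("a", now=0)
eng.write("b", now=5)
print(eng.demote_cold(now=12))  # "a" is cold and demoted; "b" stays hot
```

The design point this illustrates is the cost logic in the next paragraph: the fast tier only has to be big enough for the active working set, while the bulk of capacity sits on cheaper QLC media.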
StorONE’s focus is lower cost through capacity and performance efficiency. The logic is that a system that consumes less CPU power for storage processes can run on lower-cost servers. Also, if that same system doesn’t rely on redundant copies to protect data and can tier that data more effectively, you can spend less on capacity.