Categories: Analyst Blogs
Tags: blogs, Forbes, John Webster, software-defined storage, TCO
In a recent blog post on software-defined storage (SDS) I mentioned that, while the model is attractive for lowering the upfront cost of enterprise storage, pricing models across SDS vendors are still a work in progress. In that post, however, I was referring only to how SDS gets paid for at acquisition time. To my mind, the even bigger unknown is the three-to-five-year total cost of ownership (TCO) for SDS, including management and support costs and the value of its presumed IT flexibility.
Think back for a moment to the rise of client-server computing. At the time, monolithic mainframes were hugely expensive, while the client-server model promised a more user-accessible environment at substantially reduced cost. What resulted over time, however, was a perhaps equally expensive problem known as server sprawl. Ironically, VMware swooped in to solve that problem using the virtual machine concept borrowed from the mainframe. And in a world where the network is the computer, client-server also produced a management paradigm that turned IT administrators into computer engineers. More recently, vendors of converged computing appliances (VCE Vblock, Hitachi UCP, HP ConvergedSystem, IBM PureSystems, and others) have stepped up to capitalize on this opportunity.
Now comes software-defined storage. Before chasing this new buzzword butterfly, enterprise IT would be wise to step back and project the real versus assumed cost of this new deployment model, just as it would when evaluating the acquisition of a more traditional storage array in which software and hardware are integrated.
First, it should be noted that enterprise storage administrators are now keeping arrays on the raised floor for four and even five years, as opposed to the three-year product life cycles of the recent past. A total cost of ownership calculation for software-defined storage should therefore take this longer time horizon into account. It will affect the cost to maintain and support the underlying hardware. And if the commodity server on which the storage system is built can't stand up to a four-to-five-year horizon, factor in the not-insignificant costs of hardware and software refreshes, along with at least one data migration sometime during this period of productive use.
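As a back-of-envelope illustration of how the longer horizon changes the math, the sketch below totals those cost components over a configurable ownership period. Every figure is a hypothetical placeholder, not vendor pricing; the point is only that the refresh and migration line items appear once the horizon stretches past the typical commodity-hardware lifespan.

```python
# Illustrative multi-year TCO sketch for an SDS deployment.
# All dollar figures are hypothetical placeholders, not vendor pricing.

def sds_tco(years=5,
            software_license=50_000,    # upfront SDS software cost
            commodity_hw=30_000,        # initial commodity server/drive hardware
            annual_support=12_000,      # software + hardware support, per year
            annual_admin=20_000,        # staff time to tune and track compatibility
            hw_refresh_cost=25_000,     # mid-life commodity hardware refresh
            refresh_at_year=3,          # year the commodity gear needs replacing
            migration_cost=15_000):     # one data migration during the refresh
    """Total cost of ownership over the given horizon."""
    total = software_license + commodity_hw
    total += years * (annual_support + annual_admin)
    # The longer horizon is what pulls in the refresh and migration costs.
    if years >= refresh_at_year:
        total += hw_refresh_cost + migration_cost
    return total

print(sds_tco(years=2))  # short horizon: no refresh or migration
print(sds_tco(years=5))  # extended horizon: refresh + migration included
```

Under these placeholder numbers, stretching the horizon from two to five years adds not just three more years of support and administration but also the one-time refresh and migration costs that a three-year calculation quietly omits.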
Second is the cost to administer the total software-defined storage platform: software, commodity server, and add-ons. Like client-server before it, SDS turns IT administrators into storage systems engineers who will need to track hardware/software compatibility issues for each system over the aforementioned time horizon. They will also likely need to tune and retune each system for maximum performance (rinse and repeat with every hardware refresh). Add to this the likelihood that SDS administrators will become the only people who understand and can manage the storage environments they've built. If they leave the organization, that knowledge walks out the door.
On the opposite end of the spectrum from SDS is Oracle's ZFS-based ZS3 storage appliance. Here the entire storage platform is "co-engineered" and optimized for Oracle applications. Oracle calls it Application Engineered Storage and would hasten to add that the same mojo that makes the ZS3 fast for Oracle also makes it fast for other transaction-oriented and virtualized server environments. And "commodity" really isn't the word for Oracle's underlying hardware, which uses DRAM to cache virtual images and a symmetric multiprocessing (SMP) controller architecture for performance acceleration. While its competitors are disaggregating software and hardware, Oracle is converging them.
So enterprise storage buyers now have three deployment models to consider: software-defined (SDS), application integrated and optimized (Oracle ZS3), and the traditional general-purpose array. In this blog post I'm not advocating for or arguing against any of them. I'm simply saying that before jumping on the SDS bandwagon, IT administrators should evaluate this deployment model the same way they would the other two, and do so with a longer time horizon than the three-year calculation they are likely used to.