“Data Tsunami” is a trite expression meant to convey to enterprise IT a sense of impending doom: if you as an IT executive fail to manage it, unnamed bad things will happen – maybe you’ll drown? But the tsunami is here and now, and so is enterprise IT. IT organizations keep their ships afloat precisely because they have come up with the means to manage it. One of those coping mechanisms is a technology called object storage.
IT deals with two general categories of data: structured and unstructured. Transactional data resides in structured databases, while unstructured data – document and image files, for example – is managed by file systems. Unstructured data volumes grow four to five times faster than structured ones. Just after the turn of the century, it became apparent that enterprise storage arrays, straining to cope with the growth of unstructured data, would not be able to do two essential things: scale to the capacities required while protecting against data loss due to equipment failure. Object storage was created to deliver large scale coupled with data resiliency, and it is used extensively in hyperscale data centers (Amazon, Google, Facebook, etc.).
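The resiliency idea is simple to sketch. The toy model below – illustrative only, not any vendor’s actual implementation – replicates each object across several simulated storage nodes, so a single equipment failure does not lose data:

```python
# Toy sketch of object-storage resiliency via replication.
# All class and method names here are illustrative, not a real API.

class ReplicatedObjectStore:
    def __init__(self, num_nodes=3, replicas=2):
        # Each "node" is just a dict standing in for a storage server.
        self.nodes = [dict() for _ in range(num_nodes)]
        self.replicas = replicas

    def put(self, key, data):
        # Place copies on `replicas` distinct nodes, chosen by hashing the key.
        start = hash(key) % len(self.nodes)
        for i in range(self.replicas):
            self.nodes[(start + i) % len(self.nodes)][key] = data

    def get(self, key):
        # Any surviving replica can serve the read.
        for node in self.nodes:
            if key in node:
                return node[key]
        raise KeyError(key)

    def fail_node(self, index):
        # Simulate an equipment failure by wiping one node.
        self.nodes[index].clear()
```

With two replicas on three nodes, wiping any single node still leaves one readable copy – capacity traded for durability. Production systems use more sophisticated schemes (such as erasure coding) toward the same end.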
For the last few years, object storage has lived in the IT background. It is commonly thought of as “secondary” storage – storage that soaks up all that unstructured data but can’t match the performance demands served by the primary storage arrays where business-generating transactions live. However, advances in object storage technology are about to change that perception. With revenue from the sale of primary storage arrays declining, object storage can take over as a growth engine. Here’s why:
Stored objects are containers for data, similar to files managed by a file system. However, object containers are handled differently: they are managed by a construct called a namespace. Because the namespace can be distributed across arrays within a data center as well as across global regions, vendors often refer to their object management technologies as “global” namespaces. As a result, users can store data under a single construct that is global in scale.
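A minimal sketch can make the namespace idea concrete. Instead of hierarchical directories, every object lives under one flat, global namespace that maps a unique key to wherever the bytes happen to be stored – here, simulated “regions.” All names below are illustrative assumptions, not any vendor’s API:

```python
# Toy sketch of a flat, global object namespace (illustrative only).

class GlobalNamespace:
    def __init__(self, regions):
        # One backing store per region; the namespace spans all of them.
        self.stores = {region: {} for region in regions}
        self.index = {}  # object key -> region holding the object

    def put(self, key, data, region):
        # The key is globally unique; its "path" is just part of the name.
        self.stores[region][key] = data
        self.index[key] = region

    def get(self, key):
        # The caller supplies only the key; the namespace resolves location.
        return self.stores[self.index[key]][key]

ns = GlobalNamespace(["us-east", "eu-west"])
ns.put("reports/q3.pdf", b"%PDF", region="eu-west")
assert ns.get("reports/q3.pdf") == b"%PDF"
```

The point of the sketch: the reader of an object never navigates a directory tree or picks a location – the single namespace spans every region, which is what makes the construct “global.”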
The majority of enterprise IT organizations are now on a path to integrate their data centers with public clouds such as Amazon Web Services, Microsoft Azure, Google Cloud Platform, and IBM Bluemix. Because object storage can be distributed globally, it’s a perfect fit with enterprise hybrid cloud projects. Indeed, the need for a contiguous data layer has given rise to a new, quickly expanding category called Cloud Object Storage. Examples from the large players are IBM’s Cloud Object Storage (formerly Cleversafe) and Dell EMC’s Elastic Cloud Storage (ECS). Start-ups including Cohesity and Scality (re-sold by HPE) are also seeing significant sales growth. And then there’s Amazon’s global and hugely successful Simple Storage Service (S3), where objects are the fundamental storage entities. Ditto for Azure Storage Services and Google Cloud Storage.
Early on, object storage systems featured large-capacity rotating disks. Those are now being replaced by high-capacity flash modules. Because flash-backed object storage can support analytics and machine learning applications, this very significant advance will accelerate its use in a range of analytics workloads.
Object storage has lived in the shadows mostly because vendors shy away from mentioning it by name. The concept can be difficult to explain, which prompts salespeople to recall an old adage: when you’re explaining what you just said for more than five minutes, you’re losing. EMC’s Centera array, which was introduced in 2002 and remained an ILM fixture in enterprise data centers for years afterward, was an object-based array. Who knew? Now that enterprise IT is likely to see that object storage works for IoT, Customer 360, Industry 4.0, hybrid cloud, and other digital transformation initiatives, it will care less about how the underlying technology works and more about its business benefits.