Customers are Looking to Multi-Cloud Data Management for Improved Business Continuity
“Multi-cloud,” or the utilization of multiple clouds (including on-premises private cloud infrastructure and externally delivered cloud services), has become the norm for today’s enterprises. Multi-cloud architectures offer the flexibility to optimize application hosting and data placement according to cost, performance and compliance requirements. If not managed correctly, however, they create silos of data that make data protection, recovery and governance a challenge.
According to Evaluator Group’s Trends in Multi-Cloud Data Management study, due to be published in January 2020, most enterprise IT professionals believe that their organization is doing some form of multi-cloud data management, is in the process of investigating it, or has near-term plans to do so. The study revealed that the benefit customers most frequently receive, or expect to receive, from multi-cloud data management is improved business continuity (59%), defined as mitigating application downtime and data loss.
From a business continuity perspective, study participants were often using, or interested in using, the public cloud for disaster recovery. This includes the public cloud serving as a failover environment for an on-premises production data center (in fact, this is often an initial gateway to testing the viability of the public cloud). Additionally, study participants were wary of putting all of their eggs in one basket – that is to say, they wanted to be sure they could fail over from their primary to a secondary public cloud provider, in the event that their primary cloud provider were to experience a service outage. The problem is that public cloud providers’ environments are notoriously incompatible, due primarily to divergent application programming interfaces (APIs).
Enterprises require data migration capabilities, as well as data portability across on-premises environments and public clouds. They require migration and recovery processes to be orchestrated, as well as instant access to copy data stored in the cloud. At the same time, downtime must be minimized, performance requirements must be met, and costs must be controlled as much as possible.
Actifio’s upcoming 10c release adds features to the vendor’s copy data management (CDM) platform that can help to alleviate these pain points.
On the business continuity side, Actifio 10c facilitates on-premises-to-cloud and cloud-to-cloud data portability. It allows on-premises virtual machines (VMs) and physical servers, as well as cloud-based VMs, to be recovered in Amazon Web Services (AWS) and Google Cloud Platform (GCP). VMware-based VMs, specifically, can be backed up directly to, and recovered (according to Actifio, instantly) from, an Amazon, Microsoft, Google or IBM public cloud-based object store. Notably, Actifio 10c can back up VMware VMs to multiple targets, including on-premises object stores in addition to the public cloud storage services previously mentioned. To enhance its ability to centrally manage backup and recovery processes, Actifio 10c includes agentless, automated management of cloud-native VM snapshots. To further cut costs and boost elasticity, object storage load can be balanced across multiple Actifio instances.
Complementing these capabilities, 10c adds disaster recovery orchestration that, according to Actifio, allows thousands of systems to be recovered at once. Pre-defined disaster recovery plans are executed automatically and can occur on a scheduled or on an ad hoc basis. The key value proposition is reduced downtime and streamlined operations, especially for complex, multi-tier applications that have a number of interdependencies.
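Conceptually, orchestrating the recovery of a multi-tier application means bringing interdependent systems back up in the right order. The sketch below is purely illustrative — it is not Actifio’s implementation — and models a recovery plan as a dependency graph resolved in topological order:

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

def recovery_order(dependencies):
    """Return a valid recovery order for interdependent systems.

    dependencies maps each system to the set of systems it requires,
    e.g. an application tier that needs its database online first.
    """
    return list(TopologicalSorter(dependencies).static_order())

# Hypothetical three-tier plan: web depends on app, app depends on db.
plan = {"web": {"app"}, "app": {"db"}, "db": set()}
print(recovery_order(plan))  # ['db', 'app', 'web']
```

An orchestration engine executing such a plan at scale would additionally parallelize independent branches and verify each tier’s health before starting its dependents; the value proposition described above is automating exactly this kind of sequencing.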
Actifio 10c’s disaster recovery orchestration capabilities can also be used to migrate applications to the cloud. The product conducts an instant mount of backups to the cloud, and then migrates VMs, physical servers and databases (on the scale of multiple terabytes, according to Actifio) as cloud VMs. It also allows for sandbox testing, to determine, for example, how a machine or database will perform in the cloud.
To accelerate recovery performance and still control costs, Actifio 10c introduces an intelligent read cache in the public cloud. Copy data is mounted to an S3-compatible, public cloud-based object store and is presented as a virtual disk with read and write capability. Actifio 10c intelligently caches reads to a public cloud-based SSD cache; according to Actifio, 80% of reads will come from this cache, and only 20% will come from the object store. The cache eliminates the need to restore the entire data store to more expensive, faster-performing storage. According to Actifio, customers can achieve SSD levels of storage performance at only 20% of the cost.
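The economics of such a cache come down to simple blended-cost arithmetic. The sketch below illustrates the idea; the hit ratio follows Actifio’s 80% claim, but the latency and per-GB cost figures are illustrative assumptions, not Actifio’s published numbers:

```python
def effective_read_latency(hit_ratio, ssd_latency_ms, object_latency_ms):
    # Weighted average read latency: cache hits served from SSD,
    # misses served from the object store.
    return hit_ratio * ssd_latency_ms + (1 - hit_ratio) * object_latency_ms

def storage_cost(data_gb, cached_fraction, ssd_cost_gb, object_cost_gb):
    # SSD is provisioned only for the cached working set; the full
    # copy remains in cheaper object storage.
    return data_gb * cached_fraction * ssd_cost_gb + data_gb * object_cost_gb

# Illustrative figures: 80% hit ratio per Actifio's claim; 1 ms SSD reads
# vs 100 ms object-store reads; SSD at 5x the per-GB price of object storage.
print(effective_read_latency(0.8, 1.0, 100.0))  # 20.8 ms, vs 100 ms uncached
print(storage_cost(1000, 0.2, 0.10, 0.02))      # 40.0, vs 100.0 for all-SSD
```

Even with conservative assumptions, most reads land on fast media while only a fraction of the data set occupies premium storage, which is the mechanism behind the cost claim.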
Evaluator Group Comments
By the nature of its roots in copy data management, Actifio has always been focused on minimizing redundant copy data: improving storage infrastructure utilization, reducing storage-related administrative overhead, and ensuring that mission-critical data can be accessed and restored. With its 10c release, Actifio further brings these capabilities into the multi-cloud era. Its Instant Mount capabilities, its ability to facilitate cross-cloud portability, and its intelligent cache stand to help enterprises reduce operational burdens relating to disaster recovery and data protection, cut on-premises infrastructure requirements, and save on cloud costs by minimizing data transfer requirements. They also stand to accelerate restores to meet stringent recovery time objectives (RTOs). Policy-driven orchestration, meanwhile, stands to accelerate recoveries and migrations while helping to ensure that SLAs and compliance requirements are met, further reducing administrative requirements.