Storage considerations for VDI implementations — InfoStor Article by Russ Fellows and John Webster

Monday, July 12th 2010

Large enterprises have begun to move to a virtual desktop environment using a Virtual Desktop Infrastructure (VDI). Within the VDI environment, desktop applications and storage are centralized under IT operations, whether on-premises or outsourced. Virtual desktops are provided to application users as needed through the deployment of thin clients or through terminal services.

Bringing desktop applications, operating systems, and storage back under the control of centralized IT offers opportunities to reduce administration and system costs and to enhance the security of business data. However, supporting VDI environments also brings significant challenges. One of the most critical resources for implementing a VDI is storage.

There are various approaches for implementing VDI, including VMware View, Microsoft Terminal Services, and Citrix XenDesktop. Each of these solutions differs in its architecture and in the way it uses storage. Since there is no single standard today, storage administrators need to consider each approach and how it impacts storage: for instance, how clones are created from a golden image, how software updates are applied to desktop images, how storage can be shared, how desktops are provisioned, and how data protection is accomplished.

The storage architecture and its requirements have a significant impact on the success of a VDI implementation.  For this reason, joint planning with the VDI application and server team is critical.

VDI storage
Storage vendors typically characterize the performance of their systems in the context of a normal applications environment using statistical measurements such as throughput in MBps, GBps and I/Os per second (IOPS). Vendors also provide implementation guidelines to help administrators achieve the published results while optimizing the unique characteristics of their storage systems. These include size and use of cache, configurability of solid-state disk (SSD) drives, the number of internal data paths, and other vendor-specific features. However, VDI is different.  Efficiently and effectively supporting VDI requires a new paradigm and set of criteria.
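As a rough illustration of why familiar throughput figures can mislead here, the short sketch below (using assumed numbers, not vendor-published results) contrasts the IOPS implied by the same throughput at a large sequential I/O size versus the small, mostly random I/Os typical of desktop workloads:

def iops(throughput_mbps: float, io_size_kb: float) -> float:
    """IOPS implied by a given throughput at a given I/O size."""
    return throughput_mbps * 1024 / io_size_kb

# Assumed figures for illustration only
print(f"Sequential workload (256 KB I/Os at 400 MBps): {iops(400, 256):,.0f} IOPS")
print(f"VDI-like workload   (4 KB I/Os at 400 MBps):   {iops(400, 4):,.0f} IOPS")

The same 400 MBps that looks modest for a sequential workload implies a very different I/O load when the requests are small and random, which is closer to what a large population of virtual desktops generates.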

When reviewing storage for VDI, comparisons should be made with an understanding of how VDI will impact different storage systems. At the start of the evaluation, a simple set of questions needs to be asked to document the requirements. Below we enumerate the questions, with some commentary.

Capacity
How much storage space is required for each of the virtualized desktops and all virtual desktops to be supported by the storage system?
How will storage subsystem capacity scale as more virtual desktops are added?
Can the capacity allocated for a virtual desktop be reduced?
How will primary data deduplication affect capacity?
How long does it take to provision storage for a new desktop?
Is provisioning a simple clone operation or does it require additional effort?  (Scripting or automating the provisioning process is crucial because of the potential scale of the number of virtual desktops.)
Does the storage system provide a means for sharing common blocks of data between virtual desktops?

Protection and recovery
How is virtual desktop data protected and recovered?
Is it possible to back up changed data only?
Snapshots may serve as the primary protection mechanism, but how is disaster protection accomplished?
What are the RPO/RTO (recovery point/recovery time objective) requirements for full-system recovery, single virtual desktop recovery, or single-file recovery?
Is there a removable copy requirement for the protected data?  (Recovering after a disaster or an inadvertent deletion must be addressed much as in normal IT operations, except that the scale may be multiplied by thousands.)

Boot storm
How long does it take to initialize the virtual desktop environment from the perspective of accessing virtual desktop data?
Since there is a real-world likelihood that almost every virtual desktop will be booted at the same time, how will the storage system tolerate a “boot storm”?

Administration
How will the administration of virtual desktop storage be handled? This will be an additional task: the personal computers used previously were probably administered by a separate organization, so deciding how administration and storage management needs will be met requires planning and coordination. Understanding the tools and the time required to administer the storage for virtual desktops is critical.

Security
Are there adequate storage system security controls in place? Can they be extended to address users who may lose their access device (PC, iPad, thin client, etc.)?

Storage systems: Understanding different usage characteristics
As noted earlier, there are specific storage characteristics that are needed to effectively provide the resources for a VDI.  The next step is to map these needs to different vendors’ storage systems.

Capacity requirements – Using VDI desktop cloning capabilities, administrators can create thousands of virtual desktops in a relatively short period of time. Therefore, the use of storage capacity can grow much more quickly than under normal operational circumstances. Static capacity provisioning concepts, such as a volume with a fixed capacity, are wasteful for a single desktop; multiplied by thousands, they can be overwhelmingly expensive. A better approach is to look at storage systems that allocate only what is needed and can add capacity dynamically as demand grows.

One method to accomplish this is thin provisioning.  With thin provisioning, a storage system only allocates capacity in chunks as it is needed, reducing overall usage and waste of storage space. 
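The back-of-the-envelope sketch below (all figures are assumptions chosen for illustration) compares up-front fixed-size volume allocation against thin provisioning for a pool of virtual desktops:

# Assumed figures, for illustration only
desktops         = 2000   # number of virtual desktops
allocated_gb     = 40     # fixed volume size per desktop
actually_used_gb = 12     # average space a desktop really consumes

thick_tb = desktops * allocated_gb / 1024
thin_tb  = desktops * actually_used_gb / 1024

print(f"Fixed allocation : {thick_tb:,.1f} TB provisioned up front")
print(f"Thin provisioning: {thin_tb:,.1f} TB consumed, growing only as data is written")

Under these assumptions, fixed allocation commits roughly 78 TB on day one, while thin provisioning consumes only the capacity actually written and grows from there.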

Using the snapshot features of a storage system is another approach. However, not all snapshot features are equal. The snapshot must be able to share common data among desktop images. This will dramatically reduce capacity demand.  The snapshots must also be read/writeable and able to grow as each individual virtual desktop increases its storage requirement.
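The following minimal sketch (hypothetical data structures, not any vendor's implementation) illustrates how a writable snapshot can share a golden image's blocks among desktops while storing only each desktop's own changes:

# Hypothetical illustration of copy-on-write sharing between desktop clones
golden_image = {0: "os-block", 1: "app-block", 2: "config-block"}  # shared blocks

class DesktopClone:
    def __init__(self, base):
        self.base = base     # shared, read-only golden image
        self.delta = {}      # copy-on-write: only blocks this desktop changed

    def write(self, block_no, data):
        self.delta[block_no] = data                 # change lands in this clone's delta

    def read(self, block_no):
        return self.delta.get(block_no, self.base[block_no])  # fall back to shared data

vd1 = DesktopClone(golden_image)
vd2 = DesktopClone(golden_image)
vd1.write(2, "user1-config")                        # only vd1 pays for this change
print(vd1.read(2), vd2.read(2))                     # user1-config config-block

Each clone consumes capacity only for the blocks it changes; everything else is read from the shared golden image.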

Provisioning – Creating hundreds or potentially thousands of virtual desktops must be automated in some fashion. IT administrators can use custom scripting, or a utility process built into the VDI environment (VMware View Composer, for example), to create individual desktop images from a golden image. Another approach is to use the snapshot capabilities of the storage system to create additional snapshot copies of the golden image. The snapshot needs to be read/writable and, continuing with the capacity requirement discussed above, must be expandable as data changes or is added to a virtual desktop.
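The sketch below (a hypothetical interface, not VMware View Composer or any vendor's actual API) shows the kind of provisioning loop involved in creating many desktops as writable snapshot clones of one golden image:

class StorageArray:
    """Stand-in for an array that exposes writable snapshot clones (hypothetical)."""
    def snapshot(self, source: str, name: str, writable: bool = True) -> str:
        # A real array would create a space-efficient clone here.
        return f"{name} (clone of {source}, writable={writable})"

def provision_desktops(array: StorageArray, golden: str, count: int, prefix: str = "vdi"):
    """Create 'count' desktops named vdi-0000, vdi-0001, ... from the golden image."""
    return [array.snapshot(golden, f"{prefix}-{i:04d}") for i in range(count)]

desktops = provision_desktops(StorageArray(), "golden-win7", 1000)
print(len(desktops), desktops[0])

The names, class, and method here are placeholders; the point is that provisioning at this scale is a loop driven by automation, not a manual task.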

However, storage systems typically have a limit on the number of snapshots they support, and that limit may be less than the number of virtual desktops required. To overcome this, administrators may have to work at a level above the storage system or create multiple golden images on other storage devices. These golden images could be identical or could differ based on user roles.

Using snapshots can dramatically reduce the time required to provision many virtual desktops. 

Data protection – Virtual desktops can be backed up with standard backup software. However, the restoration process may be difficult if the backup is done on a per-volume basis and only the administrator can effect a restore. To overcome this limitation, backing up a set of snapshots may be more effective, since the storage system's own snapshot and recovery functions can be used. A number of vendors have well-integrated management tools for recovery of snapshot copies, providing ease of use and complete management of consistency groups.

In addition, a backup most likely copies all of the system images. A system that can deduplicate these images will reduce storage requirements significantly. Deduplication can be done at the source, eliminating some of the network traffic for backup, or at the target, which may be more efficient for global deduplication. (For more information on data deduplication, see the Evaluator Group Deduplication Buyer’s Guide, April 2010.)
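As a rough illustration (all figures are assumptions), the sketch below estimates the savings when largely identical system images are deduplicated in the backup set:

# Assumed figures, for illustration only
desktops           = 2000   # number of virtual desktops
image_gb           = 20     # OS/application image per desktop
unique_per_desktop = 1      # GB of data genuinely unique to each desktop

raw_backup_tb = desktops * image_gb / 1024
deduped_tb    = (image_gb + desktops * unique_per_desktop) / 1024

print(f"Without dedup: {raw_backup_tb:,.1f} TB")
print(f"With dedup   : {deduped_tb:,.1f} TB (~{raw_backup_tb / deduped_tb:,.0f}:1 reduction)")

Because the system images are nearly identical, most of the backup reduces to one copy of the common data plus each desktop's small unique portion.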

Another consideration is the deduplication of user data. Deduplication will have some impact here, but not as much as with system images.

Boot of the virtual desktop – If the entire volume on which a virtual desktop is configured must be loaded at boot time, a simultaneous booting event will overwhelm any storage system. However, if data elements shared between the virtual desktop images can be placed in read cache and loaded only once, a simultaneous booting event or “boot storm” can be handled effectively. In some instances, a combination of traditional cache and SSD cache can be used, but this varies by architectural implementation.
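The following back-of-the-envelope sketch (all numbers are assumptions) shows how much of a boot storm's aggregate read load a shared read cache could absorb when desktop images share most of their blocks:

# Assumed figures, for illustration only
desktops           = 1000   # desktops booting in the same window
boot_iops_each     = 50     # read IOPS per desktop during boot
shared_block_ratio = 0.90   # fraction of boot reads hitting shared image data

total_boot_iops   = desktops * boot_iops_each
served_from_cache = total_boot_iops * shared_block_ratio
hits_the_disks    = total_boot_iops - served_from_cache

print(f"Aggregate boot load: {total_boot_iops:,.0f} IOPS")
print(f"Served from cache  : {served_from_cache:,.0f} IOPS (shared blocks read once)")
print(f"Reaching the disks : {hits_the_disks:,.0f} IOPS")

Under these assumptions, caching the shared blocks turns a 50,000 IOPS burst into a 5,000 IOPS load on the back-end disks.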

The storage system must also be able to handle common data shared between the desktop images. Snapshot functions in many currently available storage systems retain the original (root) snapshot along with the subsequently created clones, which store only the differences from the original. The original data is brought into the storage system cache with the first boot and does not have to be brought into cache again for common images.

Performance – There will be considerable duplication of files introduced when migrating personal work files and reference information to the VDI environment. Primary storage deduplication is advantageous for reducing the amount of storage needed for each user and for enhancing performance as experienced by application users. 

Another approach is to implement automated tiered storage, where SSD is used as the top performance tier and dormant files are moved from high-performance disk to high-capacity disk to free space on the faster tier.
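As a simple illustration (a hypothetical policy, not any vendor's implementation), the sketch below shows the kind of last-access rule an automated tiering pass might apply when demoting dormant files:

import time

DORMANT_AFTER_DAYS = 90   # assumed policy threshold

def tier_for(last_access_epoch: float, now: float = None) -> str:
    """Pick a tier based on how long ago the file was last accessed."""
    now = now or time.time()
    idle_days = (now - last_access_epoch) / 86400
    return "capacity-disk" if idle_days > DORMANT_AFTER_DAYS else "performance-disk"

# Example: a file untouched for ~120 days would be demoted.
print(tier_for(time.time() - 120 * 86400))   # capacity-disk
print(tier_for(time.time() - 5 * 86400))     # performance-disk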

We recently reviewed new approaches that provide more automation for tiering, eliminating the administrative overhead found in past approaches (see Evaluator Group ES/OL Database: EMC FAST, the Announcement Analysis, Dec. 2009; 3PAR Product Analysis and Auto Tiering, March 2010; Technical Insight Paper on Automated Storage Tiering, March 2010; Evaluating IBM’s SVC and TPC for Server Virtualization, May 2010). 

Some vendors allow both deduplication and tiered storage to be used in combination to enhance performance and reduce the long-term costs of data retention.

Administration – As noted, there are existing and enhanced capabilities in storage systems to manage the storage part of VDI.  The virtualization software products listed earlier have controls that can exploit these features and provide the base for the administration of VDI. 

Examples of storage systems with VDI features include 3PAR’s InServ, Compellent’s Fluid Data, EMC’s V-Max, Hitachi Data Systems’ USP-V, IBM’s SVC and DS8700, and NetApp’s FAS.

Recommendations for IT administrators
In summary, the first step in evaluating VDI storage is to determine what will be required of the storage environment. Traditional networked storage approaches to siloed application environments should be set aside, at least in the beginning, in order to get a fresh, unbiased perspective. We dare say it will be worth revisiting the older-generation mainframe disciplines that emphasized expertise in storage system performance and capacity planning.

The next step is to match those perceived requirements to available storage technology. Some storage vendors are further along in their ability to support these environments than others, and some have taken more steps to integrate their storage products with VDI environments than others. Storage vendors also vary as to the level of in-house VDI expertise that can be leveraged by users.

Technology, integration, and in-house expertise are all critical vendor requirements as IT administrators approach VDI.
