Desktop virtualization and application virtualization have promised to ease centralized management of endpoint devices in enterprise settings. However, virtual desktop infrastructure is still affected by the same limitations as client/server computing. You can ease bottlenecks and provide for future growth with careful capacity planning that takes into account varying demands on storage and bandwidth. This tip will help administrators save money and improve performance by identifying the storage needs of desktop virtualization deployments.
How much storage do you need?
An organization needs to allocate the right amount of storage for a virtual desktop deployment -- too little storage capacity will impair the performance of virtual instances, while too much capacity wastes capital. Unfortunately, determining the appropriate amount of storage can be much harder than it appears.
Invest the time and effort to understand the storage needs of desktop users. Start by considering IOPS performance rather than capacity. Software tools can help you identify these performance characteristics for each user. You can then build a picture of the total, cumulative performance required from the storage system. Any evaluations should reflect different times of the day or month, as well as different users or user groups.
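The aggregation described above can be sketched in a few lines. The group names and per-user IOPS rates below are hypothetical placeholders; real figures should come from the monitoring tools mentioned earlier.

```python
# Sketch: summing per-group IOPS to size VDI storage.
# All group names and rates are hypothetical examples.
USER_GROUPS = {
    # group: (user_count, steady_iops_per_user, peak_iops_per_user)
    "task_workers": (400, 10, 25),
    "knowledge_workers": (300, 20, 50),
    "power_users": (50, 40, 100),
}

def total_iops(groups, peak=False):
    """Aggregate IOPS across all user groups.

    Use peak=True to size for boot/login storms, when every
    desktop hits the storage system at once.
    """
    total = 0
    for count, steady, peak_rate in groups.values():
        total += count * (peak_rate if peak else steady)
    return total

steady = total_iops(USER_GROUPS)            # 12,000 IOPS steady state
peak = total_iops(USER_GROUPS, peak=True)   # 30,000 IOPS during a storm
```

The gap between the steady-state and peak totals is exactly why evaluations should cover different times of day: sizing only for the average leaves the system undersized for login storms.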
Design the storage system to exceed those performance levels wherever possible, because that headroom determines how well the deployment scales. Admins often fail to account for the increased load as more users are added to a virtual desktop deployment.
Estimating the actual storage capacity requirements can be equally challenging. Virtual desktops are usually deployed from a limited number of standardized "golden images." For example, if 1,000 users were each given 50 GB of storage for unique desktops, an organization would need to add an astounding 50 terabytes (TB) of enterprise-class storage. A limited set of standardized images reduces those storage demands dramatically -- perhaps to just a few TB.
Don't forget user data and backups. Even though apps like Word and Excel may be part of the same golden image, the documents and spreadsheets created by each user need space to reside. A good ballpark estimate is to take the approximate user data space in a cross-section of individual PCs and then multiply that average by the total number of users.
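The arithmetic above can be combined into one rough estimate. All figures below (image sizes, user data averages, backup multiplier) are hypothetical illustrations, not recommendations:

```python
# Sketch: ballpark VDI capacity estimate in TB (all figures hypothetical).

def capacity_tb(num_users, image_gb, num_golden_images,
                avg_user_data_gb, backup_factor=2.0):
    """Shared golden images plus per-user data, with extra copies for backup."""
    image_space = num_golden_images * image_gb
    user_data = num_users * avg_user_data_gb
    return (image_space + user_data * backup_factor) / 1000  # decimal GB -> TB

# 1,000 unique 50 GB desktops: 1000 * 50 / 1000 = 50 TB of image storage alone.
full_clones_tb = 1000 * 50 / 1000
# Five shared golden images plus 10 GB of user data per user, kept twice:
shared_tb = capacity_tb(1000, 50, 5, 10)   # 20.25 TB, mostly user data
```

Note how standardizing on a few golden images shrinks the image portion from 50 TB to a fraction of a terabyte, leaving user data and backups as the dominant cost.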
Capacity planning is key for virtual desktop storage
Virtual desktop deployments almost always grow as new use cases are identified. The problem is that storage demands and network traffic multiply as virtual desktops proliferate. This makes monitoring and capacity planning particularly important. Establishing and maintaining solid storage and network performance can avert problems that might impair user productivity.
The combination of desktop golden images, user data stores and backup space can add up to a significant amount of storage that is costly to buy and manage. Certain user groups may benefit greatly from desktop virtualization, while other groups or individual users may need unique desktops. Eliminating unnecessary users from the virtualization project's scope can reduce demands for storage.
Also take steps to reduce the footprint of virtual desktop images. For example, remove unnecessary Windows components and limit the number of applications included in a golden image. This will shrink the golden image and provide faster load times, faster backups and lower backup storage needs.
Similarly, adjust backup schemes to reflect the needs of each user group. Agents on the sales floor may need frequent backups (such as snapshots), while back-office employees running different applications in a different golden image may need far less frequent backups of their user data.
In addition, reduce or avoid disk-intensive user activity that can hinder storage performance across large virtual desktop deployments. A prime source of trouble is antivirus software: simultaneous scans across hundreds of desktops create I/O storms, so relegate malware tools to the storage subsystem wherever possible rather than running full scans inside every desktop at once.
Allocating storage for VDI
Simply supplying storage for a virtual desktop environment isn't enough. Once storage is provisioned, it must be monitored and managed to ensure proper performance and availability to virtual desktop users. Here is an overview of storage-allocation tactics for desktop virtualization deployments.
Choosing storage for VDI deployments involves several variables. A main consideration is the choice of disk and disk subsystem, and administrators can select from high-end Fibre Channel, midrange Serial-Attached SCSI (SAS) and low-end Serial Advanced Technology Attachment (SATA) disks. If RAID is also implemented in the disk subsystem, administrators will need to consider the tradeoffs involved with RAID 1+0 versus RAID 4, RAID 5 and RAID 6 (dual parity), as well as vendor-specific RAID versions such as RAID-DP for NetApp arrays, MetaRAID in EMC Clariion or vRAID in HP EVA arrays. Larger deployments demand more capacity, performance and resilience; disk choices are less critical for early or limited deployments.
Thin provisioning is a means of creating a logical disk space that is actually larger than the amount of physical storage assigned to it. Thin provisioning stems from the notion that an application does not use all the space assigned to it right away, but once space is allocated, no other apps can use that space. The result is that "unused" space is essentially paid for and wasted until it's actually used (if ever). With thin provisioning, it's possible to create a LUN, but only assign a fraction of that actual disk space to start, and then add more physical space to the LUN as needed.
For example, it's possible to create a 100 GB LUN, but only provide 10 GB of storage up front. As that initial 10 GB fills, an administrator can add another 10 GB or 20 GB (up to the 100 GB size limit). The challenge with thin provisioning is the need for careful storage monitoring. Applications have no way to know the difference between the logical limit and the actual disk space available, so it's possible to run short of space and encounter serious write errors for virtual desktops relying on allocated space without the physical disks to cover it.
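The monitoring challenge described above boils down to a watermark check on each thin LUN. Here is a minimal sketch; the threshold and status names are hypothetical, and a real array would expose these figures through its management API:

```python
# Sketch: thin-provisioning watermark check (thresholds are hypothetical).

def thin_lun_status(logical_gb, physical_gb, used_gb, warn_pct=80):
    """Flag a thin LUN whose physical backing store is nearly full.

    Applications see logical_gb of space; writes fail once used_gb
    reaches physical_gb, so alert well before that point.
    """
    if used_gb >= physical_gb:
        return "out_of_space"          # writes to the LUN will fail
    if used_gb / physical_gb * 100 >= warn_pct:
        return "grow_backing_store"    # add physical capacity now
    return "ok"

# 100 GB logical LUN backed by only 10 GB of real disk:
status_ok = thin_lun_status(100, 10, 5)     # plenty of headroom
status_warn = thin_lun_status(100, 10, 9)   # 90% of backing store used
```

The key point the check captures: the alert must key off the physical allocation, not the logical LUN size, because the guest desktops only ever see the latter.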
Data deduplication can also save enormous amounts of storage space. The technology works by identifying and removing redundant data blocks and replacing them with simple placeholders. As a simple example, consider a virtual desktop environment where 50 users have the same 10 MB report in their user data stores -- that's 500 MB.
By removing redundant copies of that data and pointing to one working copy of that data left on disk, the amount of storage needed to hold redundant information is slashed. The same principle can be used to remove redundancy in other storage, such as snapshots, golden images and all elements of the organization's storage.
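The savings from the 50-user example above are easy to quantify. The placeholder size below is an assumption for illustration; real dedup metadata overhead varies by array:

```python
# Sketch: estimating deduplication savings for identical files.
# The 4 KB placeholder size is a hypothetical figure for illustration.

def dedup_savings_mb(file_mb, copies, pointer_kb=4):
    """Keep one working copy plus a small placeholder per duplicate."""
    before = file_mb * copies
    after = file_mb + (copies - 1) * pointer_kb / 1024  # KB -> MB
    return before - after

# 50 users each storing the same 10 MB report: ~500 MB shrinks to ~10 MB.
saved = dedup_savings_mb(10, 50)
```

The same calculation applies to golden images and snapshots, where block-level redundancy between near-identical desktops is even higher than in user data.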
Deduplication is handled within the storage array itself. It's not a virtualization feature, and desktop virtualization software like VMware View or Citrix XenDesktop won't know that deduplication is even in place. Deduplication, however, can affect storage performance.
Snapshots take a point-in-time copy of a LUN and protect virtual machines in operation. The snapshot can then be used to restore corrupted or non-operational virtual machines (VMs) or to create clones of VMs for new servers.
Desktop virtualization can also take advantage of the snapshot features included with many storage arrays by cloning virtual desktops and providing those snapshots for new VDI users. For example, the snapshot would be a read-only file, and any changes to that desktop would be written elsewhere in storage for that user. Administrators can quickly and conveniently deploy new desktops without having to create images from scratch.
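The read-only-base-plus-delta pattern described above is essentially copy-on-write. This toy model (not any vendor's actual implementation) shows why many desktops can share one snapshot safely:

```python
# Sketch: a toy copy-on-write model of snapshot-backed desktops.
# Class and block names are illustrative, not a real VDI API.

class LinkedCloneDesktop:
    """Reads fall through to a shared read-only base image;
    writes land in a private per-user delta."""

    def __init__(self, base_image):
        self.base = base_image   # shared snapshot, never modified
        self.delta = {}          # this user's changes only

    def read(self, block):
        return self.delta.get(block, self.base.get(block))

    def write(self, block, data):
        self.delta[block] = data   # base image stays untouched

base = {"os": "win10", "app": "office"}
d1 = LinkedCloneDesktop(base)
d2 = LinkedCloneDesktop(base)
d1.write("app", "custom")
# d1 sees its own change; d2 and the shared base are unaffected.
```

Because every desktop's writes go elsewhere, one read-only snapshot can back hundreds of users, which is what makes deploying a new desktop nearly instantaneous.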
VDI storage gotchas
Organizations enamored with the promise of VDI can easily incur a variety of lesser-known costs. A common mistake is settling for "fat" desktops, where each virtual desktop includes all of its own configuration files, operating system, applications and user data. This approach works, but it demands far more storage than a virtual desktop instance actually needs. When this oversized image is multiplied by the total number of "fat" desktops and then multiplied again by the additional storage needed for backups and disaster recovery, the storage requirements can easily overwhelm the purported cost benefits of VDI.
Desktop provisioning administration can also be problematic and costly. VDI makes sense only when an administrator can provision and update a large number of desktops using automated techniques such as scripts. Desktop provisioning involves creating a virtual machine, installing the OS, creating a template, customizing the template and then cloning the boot image to the production desktop on the VDI server.
It might take only 20 minutes or so to tackle these tasks manually, but multiplied by dozens or even hundreds or thousands of desktops, the administrative problems become insurmountable. Patching also requires manual processes that can be equally problematic and time-consuming.
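The scaling problem in the paragraph above is simple arithmetic. The 20-minute figure comes from the text; the desktop counts are illustrative:

```python
# Sketch: why manual desktop provisioning doesn't scale.
# 20 minutes per desktop is the article's estimate; counts are illustrative.

def manual_provisioning_hours(desktops, minutes_each=20):
    """Total admin time to hand-build every desktop."""
    return desktops * minutes_each / 60

small = manual_provisioning_hours(100)     # about 33 hours of admin time
large = manual_provisioning_hours(1000)    # over 330 hours -- untenable by hand
```

At hundreds of desktops the total crosses from days into weeks of admin time, which is why scripted or template-based provisioning is a precondition for VDI making economic sense.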
These two challenges can often be mitigated by creating "thin" desktops. For example, administrators can thin-provision a volume, build a golden image with an operating system and applications in a template, and then create writable snapshots to be assigned to each end user. User data is stored apart from the desktop image -- perhaps on a different storage system. The interrelated components can be used and reused without creating entirely new images, and it eliminates the manual cloning process.
Patching can also be accomplished automatically while leaving user settings and data alone. Virtualization tools such as VMware's View Composer allow administrators to make images that share virtual disks with a master image, using less storage. Meanwhile, VMware View Manager streamlines and automates virtual desktop provisioning and management.
Poorly implemented storage architectures and inefficient desktop provisioning can result in poor load times and unresponsive applications, as well as costly lost productivity and angry users. Proper storage system implementation is critical for adequate disk performance under random I/O workloads and network resilience to prevent access disruptions. Storage systems with advanced caching can share desktop images from the cache, radically improving load times for users where desktop images are almost identical.
ABOUT THE AUTHOR:
Stephen J. Bigelow, a senior technology editor in the Data Center and Virtualization Media Group at TechTarget Inc., has more than 20 years of technical writing experience in the PC/technology industry. He holds a bachelor of science in electrical engineering, along with CompTIA A+, Network+, Security+ and Server+ certifications and has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow’s PC Hardware Desk Reference and Bigelow’s PC Hardware Annoyances. Contact him at email@example.com.