Virtual desktops may make management easier, but proper planning is needed to reduce storage bottlenecks, ensure performance and accommodate growth. Storage subsystems can ease VDI deployments, whose costs can balloon if you don't follow best practices. In this first segment of a four-part e-book, we'll help you understand the storage limitations of desktop virtualization and how to overcome bottlenecks.
Desktop virtualization has been a promising technology for some years now, allowing organizations to centralize and control individual endpoints in ways that simply aren't possible with independent desktop or laptop computers. Application virtualization enables key applications to be deployed to users from a server.
In a virtual desktop infrastructure, the entire desktop environment is instanced on a server and delivered to a simple endpoint across the LAN. But desktop virtualization is subject to all of the limitations found in client/server computing, such as server and network disruptions, and organizations seeking to adopt the technology need to understand the most common causes of virtual desktop bottlenecks.
Storage and desktop virtualization limitations
Superficially, the appeal of technologies like virtual desktop infrastructure (VDI) is undeniable. Virtualization software such as VMware View or Citrix XenDesktop can provide complete desktop instances to relatively "dumb" and inexpensive endpoints from a central server in the data center. This allows administrators to exercise complete control over the desktop, quickly provisioning new desktops, limiting the software that can be installed, and handling operating system patches and application updates. In theory, an admin can manage hundreds or even thousands of desktop instances without ever leaving the data center.
The reality, however, is not quite so rosy. Desktop virtualization follows a client/server computing model and is subject to the same limitations. For example, problems with the network or servers can disrupt user sessions and even render endpoints unusable, resulting in significant lost productivity.
In terms of storage capacity, the potential for trouble is every bit as pronounced. Just consider an organization that provides a unique 50 GB desktop instance to 1,000 employees -- that's 50 TB of enterprise-class storage area network (SAN) storage added to the data center. Actual deployments are far more space-efficient, but the potential for enormous storage demands can't be overlooked.
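The arithmetic behind that warning, and behind the "far more space-efficient" caveat, can be sketched in a few lines. The image size and user count come from the example above; the 5 GB per-user delta for linked clones is an illustrative assumption, not a figure from any particular deployment.

```python
# Back-of-the-envelope VDI storage sizing. The 50 GB image and 1,000 users
# come from the article's example; the linked-clone delta is an assumption.

GB_PER_DESKTOP = 50      # full, unique desktop image per user
USERS = 1000
GB_PER_TB = 1000         # decimal TB, as storage is marketed

raw_tb = GB_PER_DESKTOP * USERS / GB_PER_TB
print(f"Full clones: {raw_tb:.0f} TB of SAN capacity")

# Linked clones or snapshots share one base image, so each user consumes
# only a small writable delta (assume 5 GB here).
BASE_IMAGE_GB = 50
DELTA_GB = 5
thin_tb = (BASE_IMAGE_GB + DELTA_GB * USERS) / GB_PER_TB
print(f"Linked clones: {thin_tb:.2f} TB")
```

Under these assumptions the full-clone design needs 50 TB while the linked-clone design needs roughly 5 TB, which is why space-efficient provisioning is the norm in real deployments.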
Beyond the enormous cost of that storage, there are serious performance concerns to contend with. One issue is storage access behavior. Unlike server-based applications, which tend to have very predictable storage needs, desktop computers typically exhibit much more random storage access.
For example, one user may be streaming audio while another user is watching video, another user is processing spreadsheet data, and yet another user is accessing files. With a multitude of users reading and writing to storage in such an unpredictable manner, the storage subsystem can easily be overwhelmed if it's not designed properly.
Another huge burden on the storage subsystem occurs when many users attempt to access storage simultaneously in a phenomenon called a "boot storm."
"Everybody shows up Monday at 8 o'clock and fires up their virtual desktops all at the same time," said Ray Lucchesi, president and founder of Silverton Consulting in Broomfield, Colo. "That can present quite a drastic performance load on a storage system."
A similar phenomenon, sometimes called a "resource storm," occurs when a large number of users attempt the same storage-intensive task at the same time during the day (such as watching a video clip that has gone viral).
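A quick way to see why a boot storm is so punishing is to compare the spindle count a storage array needs at steady state against what it needs when everyone boots at once. All of the per-desktop and per-drive IOPS figures below are assumed, order-of-magnitude rules of thumb, not measurements from any real system.

```python
# Rough boot-storm sizing: how many disk spindles a simultaneous
# Monday-morning boot would demand. All IOPS figures are assumptions.

import math

DESKTOPS = 1000
STEADY_IOPS_PER_DESKTOP = 8   # light office workload (assumed)
BOOT_IOPS_PER_DESKTOP = 30    # mostly-read burst while the OS boots (assumed)
IOPS_PER_DISK = 150           # one 10K RPM SAS drive (rule of thumb)

def spindles_needed(desktops: int, iops_each: float,
                    iops_per_disk: float = IOPS_PER_DISK) -> int:
    """Disks required to absorb the aggregate random-I/O demand."""
    return math.ceil(desktops * iops_each / iops_per_disk)

print(spindles_needed(DESKTOPS, STEADY_IOPS_PER_DESKTOP))  # steady state
print(spindles_needed(DESKTOPS, BOOT_IOPS_PER_DESKTOP))    # during the storm
```

With these numbers, an array sized comfortably for steady state (54 disks) would need nearly four times as many spindles (200) to ride out the boot storm, which is exactly the "drastic performance load" Lucchesi describes.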
Of course, potential problems extend well beyond storage, and user activity can easily tax the computing resources of even the most capable servers. As one example, anti-malware activity can increase the CPU demands for a virtual desktop and vastly increase its storage activity.
"Pop up the Resource Manager, look at what services are running in your own PC right now, start tracking CPU, memory and disk resource consumption," said Keith Norbie, vice president of sales at Nexus Information Systems in Minnetonka, Minn. "You can see what's taxed on it."
These demands translate directly to virtual desktop instances running on a server. A small proof-of-concept deployment may perform beautifully, making the technology look appealing, but multiply those demands by hundreds (or thousands) of virtual desktops and the picture changes.
Cumulative resource demands, however, can make scalability problematic. This is often overlooked, resulting in unsatisfactory performance, outright project failures, or unexpected server, storage and network investments that cast doubt on the value proposition of desktop virtualization.
Overcoming bottlenecks in desktop virtualization
Although the challenges of desktop virtualization are serious, there are numerous ways to address those concerns -- particularly when it comes to storage and performance. The most important strategy is adequate planning and research. Not all desktops (and desktop users) are created equal, and the chaotic demands of traditional desktop computing require careful consideration.
It's not just a matter of provisioning enough storage for each desktop image. The resource use of each independent PC must also be measured over time, including the peak periods when demand spikes. With this data in hand, virtualization planners can start piecing together a picture of the requirements for server-side computing, network bandwidth and storage performance.
Armed with this comprehensive picture, planners can make better architectural decisions in the design phase. Given the enormous variation in desktop computing demands, planners may determine that desktop virtualization is not appropriate for every user -- this step is often overlooked. In fact, the technology is most efficient when applied to relatively static "cookie cutter" desktop users.
For example, virtual desktops may be perfect for an army of order-entry clerks who use the same one or two applications all day in the corporate call center. By contrast, the creative teams in the marketing and graphic communication departments may simply need far too much computing power to make desktop virtualization practical. In other cases, users may require a handful of exotic applications that simply aren't worth virtualizing.
The architectural planning can then focus on meeting the computing needs of the targeted desktop virtualization clientele. Of course, the storage system must be optimized for random I/O and heavy bursts of demand such as boot storms, but storage subsystem cache can also be used if desktop images are essentially identical.
"When requests come in for those [desktop] images, if they're all effectively snapshots of one another, the first request loads the data into cache, and all the subsequent other requests just access the data in cache, so it performs fairly well," Lucchesi said.
One way to further boost storage performance for random I/O is through wide-striping, which utilizes a larger number of smaller hard drives -- effectively increasing the number of disk spindles that read and write data. Solid-state disk drives or hybrid drives (with a large solid-state memory space between the magnetic disks and interface) can also help boost performance.
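The benefit of wide-striping comes from spreading logical blocks across every spindle in the pool, so a random-I/O burst is shared by all drives instead of hammering one. The round-robin layout and the drive and block numbers below are illustrative assumptions; real arrays use more sophisticated placement.

```python
# Sketch of wide-striping: logical blocks are laid out round-robin across
# the pool, so random reads land on many spindles. Numbers are arbitrary.

def stripe(block_id: int, num_drives: int) -> int:
    """Which physical drive holds this logical block (round-robin layout)."""
    return block_id % num_drives

# A burst of 24 random reads against an 8-drive pool...
requests = [5, 17, 42, 3, 88, 19, 64, 7, 21, 33, 50, 9,
            71, 14, 26, 91, 38, 55, 2, 60, 13, 47, 80, 29]
per_drive = [0] * 8
for block in requests:
    per_drive[stripe(block, 8)] += 1

# ...is shared across the pool: no drive handles more than a few requests,
# instead of a single drive absorbing all 24.
print(per_drive)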
Finally, reducing the operating system's footprint and redirecting user data to network-based file shares can significantly reduce the size of each desktop instance and boost storage performance. Virtualization documentation often includes a myriad of best practices to help administrators manage storage demands.
In the next segment of a four-part e-book, we’ll help you understand your virtual desktop storage requirements for optimal performance.