VDI design guides tell you that scalability is a critical feature of a good architecture, but what does this actually mean?
How do you grow a virtual desktop infrastructure (VDI) deployment from a few dozen or a few hundred pilot users to a full population many times that size? To reach large scale without compromising the user experience, resources must scale as the user count scales.
Scaling compute is fairly simple: Just add more virtualization hosts when you need to accommodate more users. A good VDI pilot will help you identify the number of users you can accommodate on a single host or a cluster of hosts. When you run a pilot, base the users-per-host figure on real users doing normal work with the final desktop build. An automated synthetic workload will yield scaling numbers that don't represent real use, leading to a production deployment that doesn't perform as desired.
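The pilot-derived users-per-host figure translates directly into a production host count. A minimal sketch, with entirely hypothetical numbers standing in for your own pilot measurements:

```python
import math

# All figures are hypothetical; substitute the densities measured in your pilot.
users_per_host = 85     # real users comfortably hosted per server in the pilot
target_users = 2000     # planned production population
spare_hosts = 1         # N+1: keep one host's worth of failover capacity

hosts_for_load = math.ceil(target_users / users_per_host)
total_hosts = hosts_for_load + spare_hosts
print(f"{hosts_for_load} hosts for load, {total_hosts} including N+1 spare")
```

The ceiling division matters: rounding down would leave the last partial host's worth of users without capacity.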
Once you have identified the number of virtualization hosts, you also need to work out where your bottlenecks and single points of failure are in the infrastructure. These are unavoidable, so understanding where they lie will enable operational practices that minimize their impact.
If restarting a load balancer or patching a connection broker will cause an outage, it is best to know that before it happens. Similarly, it is important to know whether your VDI deployment can deliver usable desktops if all the VMs start up at 6 a.m. after a power outage, for example. Don't expect to identify all the possible impacts before rollout, though; plenty more will show up as the load increases.
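The 6 a.m. boot-storm scenario can be estimated on the back of an envelope before it ever happens. A sketch with hypothetical figures; boot-time I/O per desktop is typically several times the steady-state figure:

```python
# Hypothetical figures; measure boot I/O in your own pilot.
desktops = 2000
boot_iops_per_desktop = 50     # I/O burst while a desktop boots
array_headroom_iops = 20000    # IOPS the storage can spare for booting

# How many desktops can boot at once, and how many staggered waves that implies.
simultaneous = array_headroom_iops // boot_iops_per_desktop
waves = -(-desktops // simultaneous)   # ceiling division
print(f"Stagger startup into {waves} waves of up to {simultaneous} desktops")
```

If the wave count is unacceptably high, that is an argument for more storage performance, or for power-on policies that spread restarts over time.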
In a small VDI deployment, the unit of scale may be a single virtualization host. Should the next one go in the same rack as the last three, or on a different power circuit? In a large environment, the unit of scale may be a cluster built on a blade enclosure, with a new enclosure purchased for each new cluster as the environment grows. Is the new cluster in a different fire zone from the last one? Also think about distributing a cluster across multiple blade enclosures, so that when a whole enclosure must be taken down for an upgrade, some users can keep working.
Scaling storage for VDI
Scaling storage can be a serious challenge; a lot of the cost of a VDI infrastructure tends to be storage that delivers the required performance. VDI places a very high load on a storage system, concentrating the I/O of a large number of desktops in one location rather than spreading it across many PCs, each with its own disk. While the major VDI products have technologies that reduce the amount of storage capacity that is required, they don't reduce the number of storage transactions required.
This is generally where the scaling can cause problems. The storage that provided awesome performance for the 200 pilot users may hit the wall for 2,000 production users. You must carefully monitor the pilot and do the math for the production storage system before the rollout begins. A scalable solution will distribute the I/O and allow performance to be readily increased as user numbers grow.
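"Doing the math" is mostly multiplication: the per-user I/O rate observed in the pilot, scaled to the production population, compared against what the production array can deliver. A hypothetical sketch:

```python
# Hypothetical pilot measurements; use your own monitoring data.
pilot_users = 200
pilot_steady_iops = 2400          # steady-state IOPS across the whole pilot

iops_per_user = pilot_steady_iops / pilot_users
production_users = 2000
required_iops = iops_per_user * production_users

array_rated_iops = 15000          # hypothetical rating of the planned array
shortfall = max(0, required_iops - array_rated_iops)
print(f"Need {required_iops:.0f} IOPS; shortfall of {shortfall:.0f}")
```

A shortfall found on paper before rollout is far cheaper to fix than one discovered by 2,000 unhappy users.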
One scaling approach is to use a modular storage array, which uses a cluster of x86 server nodes, each loaded with a dozen disks to provide shared storage. As more capacity or performance is required, more nodes are added to the cluster. This sort of scaling allows the purchase of more storage nodes when more users are added, scaling performance with capacity and user count.
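Sizing such a cluster means buying enough nodes to satisfy whichever resource runs out first, performance or capacity. A minimal sketch with hypothetical per-node figures:

```python
import math

# Hypothetical per-node specifications for a scale-out storage cluster.
node_iops = 6000
node_capacity_gb = 10000
required_iops = 24000
required_capacity_gb = 30000   # after any clone/dedupe savings

nodes = max(math.ceil(required_iops / node_iops),
            math.ceil(required_capacity_gb / node_capacity_gb))
print(f"Cluster needs {nodes} nodes")
```

With these numbers, performance rather than capacity sets the node count, which is common in VDI.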
Another approach is caching software, which uses RAM or solid-state drive (SSD) storage in each virtualization host to handle the performance requirements for the desktops. The shared storage provides the persistent and shared location without such a high performance load. As new hosts are added, each one implements the caching, so storage performance scales as user count scales.
A hybrid of the two approaches is hyper-converged infrastructure, where the virtualization hosts provide the compute resources for the virtual desktops and also act as storage nodes. The hosts contain disks for capacity, with RAM and SSD as cache for performance, and VMs on those hosts provide the modular storage cluster, whose performance scales as new hosts are added for additional desktops.
You will notice one of the VM food groups missing: I have made no mention of scaling network capacity for VDI. That is because most data centers have an excess of network bandwidth. With a few hundred virtual desktops, a small number of 1 Gbps Ethernet adapters per host will provide enough bandwidth (provided you have a separate storage network). For thousands of users, or if you use IP-based storage, a small number of 10 Gbps Ethernet ports per host will provide ample bandwidth. You still need to think about WAN bandwidth, however; here again, a good pilot will give you a great idea of how much bandwidth is required to support each of your branches.
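Branch WAN sizing follows the same pilot-driven pattern: multiply the per-user display-protocol bandwidth you measured by the number of concurrently active users at each branch. All figures here are hypothetical:

```python
# Hypothetical figures; measure display-protocol traffic in your pilot.
kbps_per_user = 150      # average display-protocol bandwidth per active user
concurrency = 0.8        # fraction of branch users active at peak
branch_users = 120

peak_kbps = branch_users * concurrency * kbps_per_user
print(f"Plan roughly {peak_kbps / 1000:.1f} Mbps for this branch")
```

Remember to leave headroom for printing, USB redirection and multimedia, which can spike well above the average.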
Scaling a VDI environment requires an understanding of the workload your users will place on the desktops and the capacity of the infrastructure you provide. Roll out slowly and monitor your environment as the load increases, especially the storage workload.
Alastair Cooke asks:
What's the most important consideration when scaling a VDI deployment?