In some ways, measuring the performance of a virtual desktop is more difficult than measuring the same statistics for a conventional desktop. This is largely because you're not dealing with a single, isolatable computer -- you're dealing with a machine that runs side by side with other machines on the same hardware, "in context," so to speak. Because isolating the variables behind a single machine's performance is harder in that setting, testing methodologies for virtual desktops need to be tailored to their particular enterprise environment.
Measurements are all about metrics. As the old saying goes, you can't manage what you don't measure -- so you want to measure what matters. What sort of measurements matter to those running and managing virtual machines (VMs)?
Here's a quick rundown of the statistics that matter most.
- Memory utilization. Track both the physical and virtual memory used on each machine. If a given VM is paging a great deal, it almost certainly needs a larger memory allotment. Note that with a virtual desktop, excessive paging is generally something only an administrator can detect through monitoring -- the end user can't hear a hard drive grinding away as a warning sign.
- Disk throughput. "Throughput" is a generic term for reads or writes to a physical disk. Since disk access is one of the biggest bottlenecks in any system -- real or virtual -- you should keep tabs on transactions per second (TPS) to determine what kind of contentions are going on. Place different VMs on different physical disks whenever possible.
- Back-end CPU. As the name implies, this is the CPU utilization of the server that runs the virtual machine. This should be considered on a per-core and per-socket basis, not just as a single aggregate. It may also be useful to break out per-VM usage statistics, especially if you have hard caps on how much physical CPU usage each VM can take up.
- Transaction rate (number of users). This is a measurement of how many individual desktops are currently being virtualized. Depending on how things are set up in your organization, you might want to break this into two numbers: the number of physical users and the number of user connections. This is useful if, for instance, you often have scenarios where a given user is running more than one VM at a time.
- Responsiveness. This can be tough to measure because it could cover a number of different metrics at once. Generally, it'll be the amount of time a virtual machine takes to return a response to a user request such as a mouse click or a keyboard command.
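The disk throughput item above can be sketched concretely. Below is a minimal, illustrative example of deriving transactions per second from two snapshots of a disk's cumulative read/write counters; the snapshot values are made up, and in practice they would come from your hypervisor's statistics API or the OS (for example, /proc/diskstats on Linux).

```python
# Sketch: deriving transactions per second (TPS) from two snapshots
# of cumulative disk read/write counters. Snapshot values here are
# illustrative, not taken from a real system.

def tps(first, second):
    """Each snapshot is (seconds, read_count, write_count)."""
    elapsed = second[0] - first[0]
    ops = (second[1] - first[1]) + (second[2] - first[2])
    return ops / elapsed

# Two samples taken 10 seconds apart: 800 reads + 400 writes occurred.
print(tps((0.0, 1_000, 500), (10.0, 1_800, 900)))  # 120.0 ops/sec
```

Sampling over an interval like this, rather than reading an instantaneous rate, smooths out bursts and matches how most monitoring tools report TPS.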
A few third-party examples can provide some ideas. In one of its white papers on virtual desktop performance, IBM chose to measure the milliseconds it took to invoke an instance of a Web browser and access a file (see page 10).
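A responsiveness probe along those lines can be sketched as a simple stopwatch around a user-level operation. This example times writing and reading back a temporary file; it is only a stand-in for the browser-launch test described above, and the helper names are my own.

```python
# Sketch of a responsiveness probe: time a user-level operation in
# milliseconds. The file round-trip below stands in for a heavier
# operation such as launching a browser and opening a document.
import os
import tempfile
import time

def timed_ms(operation):
    """Run operation() and return its wall-clock duration in ms."""
    start = time.perf_counter()
    operation()
    return (time.perf_counter() - start) * 1000.0

def open_and_read_file():
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"probe")
        os.close(fd)
        with open(path, "rb") as f:
            f.read()
    finally:
        os.remove(path)

print(f"file round-trip: {timed_ms(open_and_read_file):.2f} ms")
```

Run periodically from inside a virtual desktop, a probe like this gives you a trend line for responsiveness that you can correlate with the host-side metrics above.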
Slicing and dicing
A number of these metrics can be gauged in more than one way. CPU usage, for instance, can be tallied three ways: the total processing power used by the entire aggregation of virtual machines running on the hardware in question; the maximum percentage of CPU allocated to each VM; and the average utilization of that maximum for each VM.
This variegated approach to analyzing resource usage has several benefits. For one, if most of your virtual machines use only a small fraction of their allotted resources while a few spike dramatically, you can restructure allocations according to actual need rather than projections. Total systemwide usage indicates how your hardware as a whole is handling the current load, and it lets you see which physical servers might be under- or overprovisioned.
The same sort of tallying can be applied to memory and disk utilization, TPS, or anything else measured as a percentage of its apportioned per-machine amount.
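The three-way tally can be sketched from per-VM samples. The VM names, field names, and numbers below are invented for illustration; a real collector would pull the cap and usage figures from the hypervisor.

```python
# Sketch of the three CPU views described above, computed from
# hypothetical per-VM samples: each VM has a CPU cap (percent of the
# host) and a current usage figure (percent of that cap).
vms = {
    "vm-a": {"cap_pct": 25.0, "usage_of_cap_pct": 80.0},
    "vm-b": {"cap_pct": 25.0, "usage_of_cap_pct": 10.0},
    "vm-c": {"cap_pct": 50.0, "usage_of_cap_pct": 30.0},
}

# 1. Total host CPU consumed by all VMs combined.
total_pct = sum(v["cap_pct"] * v["usage_of_cap_pct"] / 100.0
                for v in vms.values())

# 2. Maximum CPU allocated to each VM.
caps = {name: v["cap_pct"] for name, v in vms.items()}

# 3. Average utilization of that allocation across VMs.
avg_util_pct = sum(v["usage_of_cap_pct"] for v in vms.values()) / len(vms)

print(total_pct)     # 37.5  -- the host is just over a third loaded
print(avg_util_pct)  # 40.0  -- but caps are mostly underused
```

Note how the two bottom-line numbers tell different stories: the host has plenty of headroom overall, yet vm-a is pushing its individual cap, which is exactly the reallocation signal described above.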
Testing virtual desktop performance can be difficult because you're trying to measure something that is not normally measured in a virtual machine environment: user interactivity. Most of the experience you've had with VMs until now has probably been with servers, where user interaction is limited to serving up a database or Web pages (or both). Gauging the performance of virtual desktops is likely to be a challenge at first, but one well worth rising to.
ABOUT THE AUTHOR
Serdar Yegulalp is editor of the "Windows Power Users Newsletter." Check it out for the latest advice and musings on the world of Windows network administrators.