Most users don't currently need virtual GPU power to boost their virtual desktops' performance -- it's a luxury -- but as organizations shift to new operating systems and applications, vGPU will become necessary.
For a few years now, IT has had the ability to add hardware-accelerated graphics to virtual machines (VMs), which allows virtual desktops to run graphical applications with performance similar to that of physical PCs. Rather than running on an endpoint device, virtual GPUs (vGPUs) live on physical GPU cards in a VM server, and IT can distribute them to users to deliver higher-performance VDI.
The question is no longer whether VDI admins should use vGPUs at all -- hardware acceleration is now an effective and necessary strategy for delivering graphics-intensive applications. Instead, consider whether it's really essential to deliver vGPU to every end user. Some employees, such as engineers and designers, need hardware acceleration, but there are a few reasons why not everyone needs vGPUs -- yet.
Why IT should consider vGPU
Desktop VMs can share a single GPU in the physical virtualization host with each virtual desktop getting a slice of the resulting virtual GPU just like it gets a slice of CPU power. Adding a vGPU makes sure the operating system can run all the features in newer OS versions such as Windows 10. It also relieves some of the load from the CPU, making the virtual desktop more responsive. Even for a basic Microsoft Office user, a little vGPU power makes the virtual desktop more acceptable and even satisfying to use.
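To illustrate that slicing, here is a minimal sketch of how an admin might estimate virtual desktop density per GPU card. The card and per-desktop framebuffer figures below are assumptions for illustration only, not vendor-verified numbers; real profile sizes come from the GPU vendor's documentation.

```python
# Illustrative sketch: estimating how many virtual desktops can share one GPU card.
# All framebuffer figures are assumptions for illustration -- consult the
# vendor's vGPU profile documentation for real values.

CARD_FRAMEBUFFER_GB = 16  # assumed total framebuffer on one card

# Hypothetical vGPU profiles: name -> framebuffer slice each desktop gets (GB)
PROFILES = {
    "light":  0.5,   # basic Office-style use
    "medium": 1.0,   # knowledge workers
    "heavy":  4.0,   # designers and engineers
}

def desktops_per_card(profile: str) -> int:
    """Each desktop gets a fixed slice of the card's framebuffer,
    so density is simply total framebuffer divided by slice size."""
    return int(CARD_FRAMEBUFFER_GB // PROFILES[profile])

for name in PROFILES:
    print(f"{name}: {desktops_per_card(name)} desktops per card")
```

The trade-off is the same one admins already make with CPU: the richer the slice each user gets, the fewer users each card can support.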
User acceptance alone is a great reason to deploy virtual GPUs, because that extra power makes graphics-heavy applications practical. In the past, apps such as Adobe Photoshop and AutoCAD were reasons not to use virtual desktops. With vGPU power, these applications can run beautifully on VDI.
The trouble with GPU cards
The GPUs that go into virtualization hosts are usually workstation-class graphics cards -- most often NVIDIA GRID K1 or K2 cards. These are large PCI Express (PCIe) cards that draw a lot of power, and they do not come cheap, adding thousands of dollars to the cost of a VDI host.
One of the first things to know: Most blade servers don't support vGPUs. Blades seldom have PCIe slots or enough power to supply a GPU card. Even a rack-mount server may not have space for a GPU card; NVIDIA GRID cards are double width, so they take up two adjacent PCIe slots. Plus, many virtualization hosts already have storage adapters and network cards plugged into the PCIe slots where GPU cards would go.
Even if there are free slots, there might not be any available auxiliary power connectors. GPU cards use so much power that they need their own cables from the server's power supply. All this additional power consumption means additional heat, so servers running GPU cards need good airflow. These requirements mean vGPU vendors need to certify which cards can go in which servers; using vGPUs in servers that aren't on the approved list increases the risk of server crashes.
Given the power and space requirements that GPU cards bring, it is important to plan ahead. IT shops may not put a GPU card in every VDI host, but they should make sure it's possible to add cards later without having to replace any physical servers.
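The checks above can be sketched as a simple pre-purchase checklist. Everything here is an illustrative assumption -- the field names and thresholds are hypothetical, and real answers come from the server and GPU vendors' compatibility lists.

```python
# Illustrative planning sketch: can this host physically take a GPU card?
# Fields and thresholds are assumptions for illustration; the authoritative
# check is the vendor's certified-server list.

from dataclasses import dataclass

@dataclass
class Host:
    free_pcie_x16_slots: int        # double-width cards occupy two adjacent slots
    free_aux_power_connectors: int  # GPU cards need their own power feed
    on_vendor_approved_list: bool   # certified server/card combination

def can_add_gpu_card(host: Host) -> bool:
    """A double-width GRID-class card needs two adjacent slots' worth of space,
    an auxiliary power connector, and a certified server model."""
    return (host.free_pcie_x16_slots >= 2
            and host.free_aux_power_connectors >= 1
            and host.on_vendor_approved_list)

blade = Host(free_pcie_x16_slots=0, free_aux_power_connectors=0,
             on_vendor_approved_list=False)
rack = Host(free_pcie_x16_slots=2, free_aux_power_connectors=1,
            on_vendor_approved_list=True)
print(can_add_gpu_card(blade), can_add_gpu_card(rack))
```

The blade fails every check, which matches why most blade servers are ruled out; the rack-mount host with free adjacent slots and spare power passes.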
When to use virtual GPU
Most organizations likely won't add virtual GPU support to their existing VDI environments today. But when it comes time to refresh or build a new VDI deployment, vGPUs should be an option.
For cost-sensitive deployments, vGPU power is unlikely to make sense. For example, a call center with 2,000 employees, all using a single application, does not need NVIDIA GRID cards. This is particularly true with in-house applications, which developers usually design to consume minimal resources.
Knowledge workers are where vGPUs go a long way: employees using multiple applications often get more work done with low-power vGPUs in their virtual desktops. The easiest place to begin with hardware acceleration, though, is power users -- anyone working with rich graphical applications can't get work done without a healthy ration of vGPU power.
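That segmentation can be sketched as a simple lookup. The segment names and profile labels below are illustrative assumptions, not NVIDIA's actual profile names.

```python
# Illustrative sketch: mapping the user segments discussed above to a vGPU
# decision. Segment names and profile labels are assumptions for illustration.

from typing import Optional

SEGMENT_TO_PROFILE = {
    "task worker":      None,      # single in-house app: no vGPU needed
    "knowledge worker": "small",   # multiple apps: a low-power vGPU helps
    "power user":       "large",   # rich graphical apps: vGPU is essential
}

def vgpu_profile(segment: str) -> Optional[str]:
    """Return the assumed vGPU profile for a user segment, or None."""
    return SEGMENT_TO_PROFILE.get(segment)

print(vgpu_profile("knowledge worker"))
```

Starting with the power users on the right-hand end of this table gives the clearest return on the cost of the cards.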
Right now, most virtual desktops do not need vGPUs, but new OS versions and applications will soon require that almost every endpoint support graphics acceleration. NVIDIA lists more than 300 business applications that it says perform better with a vGPU than on the CPU alone, and that list will only grow as apps become more dependent on graphics and other media. Additionally, Windows 10 virtual desktops can consume more resources than previous Windows versions, so VDI admins have to keep pace by adding equivalent vGPU power to each VM.