Everything you need to know about GPU virtualization
To deliver maximum graphics performance to a VDI desktop, VMware vDGA dedicates an entire workstation-class GPU card to the desktop.
VMware DirectPath I/O passes a GPU card in the ESXi server into a virtual machine (VM). The whole card is dedicated to that VM, and it uses the card vendor's driver to access its full feature set.
Any features that card would give to a physical PC are also available in the VM. This allows applications using the Nvidia CUDA parallel computing platform to talk directly to the GPU for high-performance computing. It also allows for workstation-class graphics in the VM. This is important for high-value graphical work where rapid visualization of complex data is critical, as is the case with oil and gas exploration, for example.
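Passthrough is normally configured through the vSphere Client, but the resulting entries in the VM's .vmx file look roughly like the sketch below. The PCI address and device ID are placeholders for whatever your host actually reports; 0x10de is Nvidia's PCI vendor ID.

```
# DirectPath I/O passthrough entries -- a sketch, with placeholder values
pciPassthru0.present = "TRUE"
# PCI address of the GPU on this particular host (placeholder)
pciPassthru0.id = "0000:84:00.0"
# Device ID of the specific GPU model (placeholder)
pciPassthru0.deviceId = "0x1bb1"
# Nvidia's PCI vendor ID
pciPassthru0.vendorId = "0x10de"
# Often needed for GPUs with large memory apertures
pciPassthru.use64bitMMIO = "TRUE"
```

You can list the candidate devices and their IDs on the host with `esxcli hardware pci list`.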
Because each VM gets the whole GPU card, you need one GPU card for every VM that requires one. An ESXi server can hold up to eight GPU cards, so a single host can support eight VMs with VMware Virtual Dedicated Graphics Acceleration (vDGA).
If you're giving a VM a whole GPU card, that desktop is probably doing important work, so you'll also want to give it plenty of vCPUs and RAM. Keep in mind, though, that the eight-GPUs-per-server limit caps how far a single ESXi host can scale. Multiple ESXi servers, each with eight GPUs -- and eight vDGA VMs -- let you deliver high-quality graphics to more virtual desktop users. One other limitation of vDGA is that VMware DirectPath I/O prevents you from vMotioning VMs from one ESXi host to another, which can increase maintenance time.
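The scaling math is simple multiplication: one GPU card per vDGA VM, at most eight cards per host. A small sketch (the user counts are illustrative; only the eight-per-host figure comes from the article):

```python
# Estimate how many ESXi hosts are needed for a population of vDGA users,
# given that each vDGA VM consumes one whole GPU card and a host holds at
# most eight GPU cards (so at most eight vDGA VMs per host).

import math

GPUS_PER_HOST = 8  # vDGA/DirectPath I/O figure discussed above

def hosts_needed(gpu_users: int, gpus_per_host: int = GPUS_PER_HOST) -> int:
    """Return the minimum number of ESXi hosts to serve gpu_users vDGA VMs."""
    return math.ceil(gpu_users / gpus_per_host)

print(hosts_needed(8))   # one fully loaded host -> 1
print(hosts_needed(20))  # 20 users -> 3 hosts (8 + 8 + 4)
```

The same function also makes the ceiling effect obvious: a ninth vDGA user forces a second host even though seven of its GPU slots sit empty.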
Companies usually use VMware vDGA with multiple high-resolution monitors, and the GPU renders images quickly, but you'll still need a high-performance network to get the pixels to users' desks. You'll also want to make sure workers use a high-performance remote display protocol and client device.
Remember that you do not need to allocate a GPU card to every VM on the host. Only workers who need high-performance video and graphics require their own cards. You can run VMs with less-demanding graphical loads on the same ESXi server as the VMs with dedicated GPU cards; just be careful about resource management. GPU-assisted VMs tend to be much more important than those without GPUs, so it's critical to reserve enough CPU and RAM for them.
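One way to reason about those reservations: decide how much host RAM the GPU-assisted VMs must be guaranteed, and treat only the remainder as shared headroom for the lighter desktops. A back-of-the-envelope sketch, where all capacities and per-VM figures are hypothetical numbers, not VMware guidance:

```python
# Back-of-the-envelope check for mixing vDGA VMs and ordinary desktops on
# one host. All numbers below are illustrative assumptions.

def remaining_headroom(host_ram_gb: float, gpu_vm_count: int,
                       ram_reservation_gb_per_gpu_vm: float) -> float:
    """RAM left for non-GPU VMs after reserving memory for each GPU-assisted VM."""
    reserved = gpu_vm_count * ram_reservation_gb_per_gpu_vm
    if reserved > host_ram_gb:
        raise ValueError("reservations exceed host capacity")
    return host_ram_gb - reserved

# A hypothetical 512 GB host running 8 vDGA VMs reserved at 48 GB each
# leaves 128 GB of RAM for the less-demanding desktops sharing the host.
print(remaining_headroom(512, 8, 48))  # -> 128.0
```

The same arithmetic applies to CPU reservations; the point is that the GPU-assisted VMs' share is fixed first, and everything else contends for what is left.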
Related Q&A from Alastair Cooke