How do I use VMware vDGA?

VMware vDGA lets you dedicate a whole GPU card to a virtual desktop, but you can only support eight such VMs per ESXi host.

To deliver maximum graphics performance to a VDI desktop, VMware Virtual Dedicated Graphics Acceleration (vDGA) gives an entire workstation-class GPU card to the desktop.

VMware DirectPath I/O passes a GPU card in the ESXi server through to a virtual machine (VM). The whole card is dedicated to that VM, and the VM uses the card vendor's driver to access the card's full feature set.

Any features that card would give to a physical PC are also available in the VM. This allows applications using the Nvidia CUDA parallel computing platform to talk directly to the GPU for high-performance computing. It also allows for workstation-class graphics in the VM. This is important for high-value graphical work where rapid visualization of complex data is critical, as is the case with oil and gas exploration, for example.
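Because the guest talks to the card through the vendor's own driver, a standard CUDA device query run inside the VM should report the full card. The sketch below is one illustrative way to verify this; it assumes the Nvidia driver and CUDA toolkit are installed in the guest and is not part of vDGA itself.

// Minimal CUDA sketch (assumes the Nvidia driver and CUDA toolkit are
// installed in the guest). Build with: nvcc -o devquery devquery.cu
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable GPU is visible in this VM.\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("GPU %d: %s, %.1f GB, %d SMs, compute capability %d.%d\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.multiProcessorCount, prop.major, prop.minor);
    }
    return 0;
}

With vDGA, this should print the same device name and memory size you would see on a physical workstation with the same card.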

Because one VM gets the whole GPU card, there must be one GPU card for each VM that needs one. You can have eight GPU cards per ESXi server, so vDGA can support at most eight GPU-backed VMs per host.

If you're giving a VM a whole GPU card, the desktop is probably being used for important work, so you'll also want to give it plenty of vCPUs and RAM. But keep in mind that the eight-GPU-per-server limit caps how far you can scale a single ESXi host. Multiple ESXi servers, each with eight GPUs and eight GPU-backed VMs, let you deliver quality graphics to more virtual desktop users. One other limitation of vDGA is that using VMware DirectPath I/O prevents you from using vMotion to move VMs from one ESXi host to another, which can increase maintenance time.
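The scale-out arithmetic is simple. The short host-only sketch below (buildable with the same nvcc toolchain as the query above; the 60-desktop figure is purely illustrative) rounds up the number of hosts needed for a given number of vDGA desktops.

// Back-of-the-envelope sizing sketch. The eight-GPU ceiling is the vDGA
// limit discussed above; the desktop count is an illustrative assumption.
#include <cstdio>

int main() {
    const int gpus_per_host = 8;   // vDGA ceiling per ESXi host
    const int gpu_desktops  = 60;  // hypothetical number of users needing vDGA
    // Each vDGA desktop consumes one whole GPU card, so round up.
    int hosts = (gpu_desktops + gpus_per_host - 1) / gpus_per_host;
    std::printf("%d GPU-backed desktops require %d ESXi hosts\n",
                gpu_desktops, hosts);
    return 0;
}

For 60 GPU-backed desktops, that works out to eight ESXi hosts.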

Companies usually use VMware vDGA with multiple high-resolution monitors. The GPU renders images quickly, but you'll still need a high-performance network to get the pixels to users' desks. You'll also want to make sure workers use a high-performance remote display protocol and client device.

Remember that you do not need a GPU card allocated to every VM on the host; only workers who need high-performance video and graphics need their own cards. You can run VMs with less-demanding graphical loads on the same ESXi server as the VMs with their own GPU cards, but be careful about resource management: GPU-assisted VMs tend to be much more important than those without GPUs, so it's critical to reserve enough CPU and RAM for them.

This was last published in January 2015
