GPU vs. CPU: Three ways to run graphics-heavy apps on virtual desktops

It's a pain trying to accommodate graphics-intensive applications on a virtual desktop, but don't despair. There are ways to deliver these apps fast.

Some applications just don't play well with VDI.

In part one of this series on graphics-intensive applications, I explained the difficulties of giving virtual desktops to people who work with 3-D graphics and video rather than words and numbers, such as artists or designers. In part two, let's look into issues with getting the pixels efficiently to the desktop screen.

To understand the problem with graphics-heavy applications, you need to know how a graphics processing unit (GPU) differs from a central processing unit (CPU). Then I'll cover some solutions for providing GPU power to the applications that need it.

Understanding GPU vs. CPU

A GPU is a specialized processor that uses massively parallel processing to calculate many elements of an image at once, whereas a CPU works through only a handful of calculations at a time. On a desktop PC, 3-D rendering is done by a dedicated GPU on a graphics card, and high-performance 3-D applications need a high-performance GPU.
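To make the distinction concrete, here is an illustrative sketch in Python. The serial loop mimics the CPU approach of touching one pixel at a time, while the single vectorized operation mimics the GPU approach of applying the same calculation to every pixel at once. (NumPy vectorization only models the idea; a real GPU would run thousands of these per-pixel calculations in parallel hardware threads, and the function names here are invented for the example.)

```python
import numpy as np

def brighten_serial(pixels, amount):
    """One pixel at a time -- the 'CPU' style of processing."""
    out = np.empty_like(pixels)
    for i in range(pixels.size):
        # Each pixel is brightened and clamped individually, in sequence.
        out.flat[i] = min(pixels.flat[i] + amount, 255)
    return out

def brighten_parallel(pixels, amount):
    """All pixels in one data-parallel step -- the 'GPU' style."""
    return np.minimum(pixels + amount, 255)

# A small fake grayscale image: both approaches give the same result,
# but the data-parallel form is the one a GPU can accelerate.
image = np.random.randint(0, 200, size=(64, 64), dtype=np.int32)
```

The point is not that NumPy is a GPU, but that graphics work decomposes naturally into thousands of identical, independent per-pixel calculations, which is exactly the shape of work a GPU's parallel hardware is built for.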

More and more applications require 3-D graphics today. For instance, Windows 7's Aero Glass interface requires a GPU, and Internet Explorer can now use the GPU alongside the CPU for faster rendering than the CPU can manage alone. In addition, video production and computer-aided design (CAD) applications have spread into more organizations, making 3-D graphics a common requirement among customers deploying virtual desktop infrastructure (VDI).

The problem is that most VDI environments use virtual machines (VMs) on a shared virtualization host, but VMs haven't traditionally been able to use a GPU. Neither have remote desktop sessions. Those two limitations often rule 3-D workers out of the VDI equation.

Dedicated hardware can help

One way around this problem is to give each user a dedicated rack-mount or blade PC, each with its own GPU (along with dedicated CPU, disk and RAM). This setup isolates users from one another and guarantees each one plenty of resources. Essentially, it places the user's workstation in the data center and simply remotes the display.

However, this approach to GPU applications can be expensive. Rack and blade workstations cost at least as much as their desktop counterparts, and you would need to buy one for each user. The workstations are used by only one user at a time, so there’s also the potential for a lot of wasted resources. For this to be a viable approach, you should have a very good reason for needing user isolation.

What about shared hardware?

A while ago, Citrix's server virtualization hypervisor, XenServer, gained the ability to pass a PCIe GPU card through to a VM. This feature allows 3-D graphics acceleration on as many VMs as there are PCIe GPU cards in your virtualization hosts. Realistically, that means a handful of users per physical host -- a big step up from one user per host, but a long way from the 100 users per host you could achieve with users who seldom need 3-D applications.

This approach suits full-time 3-D users who don't need the level of isolation that dedicated hardware provides. VMware has announced an alliance with NVIDIA to bring the same capability to its VDI product, although no shipping product has appeared yet -- perhaps it has been overtaken by the virtualized-hardware approach.

Advancements in virtualized hardware

Over the last few years there has been a lot of engineering effort toward a shared GPU architecture for VMs and Remote Desktop Services. Microsoft's RemoteFX display protocol allows multiple users to share a GPU on a Remote Desktop Session Host (Terminal Services) or a Remote Desktop Virtualization Host (VM-based VDI).

With multiple users sharing each PCIe GPU card, you can scale to far more users per physical server. That makes it possible to run mainstream 3-D graphics, such as user interfaces, and to support occasionally used 3-D applications without dedicated hardware. Both Citrix and VMware are working to bring this capability into their VDI products as well.

CPU still matters

All these solutions to the graphics problem focus on getting a GPU into a virtual desktop so it can draw screen elements quickly. However, the job of determining which 3-D elements need to be drawn still falls to the CPU -- as does much of the math that CAD applications do. As a result, many users running 3-D applications on one virtualization host may hit CPU bottlenecks even after the GPU has eliminated the display-rendering bottleneck.

That means 3-D applications are likely to reduce the average user count per host: they generate heavy CPU load and typically require a lot of RAM as well.

It's harder to satisfy users of graphics-intensive applications with VDI than those with normal office workloads, but with a good design there are ways to accommodate many types of desktop requirements.

This was first published in December 2012
