The field of remote display protocols has improved over the years, but there are still limitations when delivering
graphics-intensive applications. To get decent application performance, you'll end up sacrificing the user experience, bandwidth or server resources.
As far as protocols go, at first there was only Citrix ICA (now called HDX, for the most part). That was joined over time by Microsoft's Remote Desktop Protocol and RemoteFX, VMware's PC-over-IP, Quest EOP, Ericom Blaze and others. A so-called remote display protocol war ensued as the companies tried to add features that got them closer to HDX -- the gold standard for VDI performance.
That field is fairly level now, but each remote display protocol has its shortcomings that affect application performance. One of those limitations comes into play with graphics-intensive applications.
The performance of these apps via a remote display protocol has certainly gotten better over the years, but that's not just because the protocols got better. The gains also depend on graphics-intensive apps consuming more bandwidth, on extensive tweaking or supporting hardware, and on a near-perfect network.
For instance, it's possible to deliver an application flawlessly to the user, but you often have to dedicate more CPU cycles to handle the graphics processing. In turn, that reduces the number of virtual machines or user sessions that you can fit on a given server, which means the incremental cost of a user goes up.
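To make that density-versus-cost relationship concrete, here's a back-of-the-envelope sketch using entirely hypothetical numbers (server core count, server price and per-session CPU budgets are illustrative assumptions, not figures from any vendor):

```python
# Hypothetical numbers: a 32-core host costing $8,000, with light sessions
# needing 0.5 cores and graphics-heavy sessions needing 2 cores each.

def sessions_per_server(server_cores: float, cpu_per_session: float) -> int:
    """How many sessions fit on one server at a given CPU budget per session."""
    return int(server_cores // cpu_per_session)

def cost_per_user(server_cost: float, sessions: int) -> float:
    """Server hardware cost spread across the sessions it can host."""
    return server_cost / sessions

SERVER_CORES = 32      # assumed host size
SERVER_COST = 8000.0   # assumed hardware cost in dollars

light = sessions_per_server(SERVER_CORES, cpu_per_session=0.5)
heavy = sessions_per_server(SERVER_CORES, cpu_per_session=2.0)

print(light, cost_per_user(SERVER_COST, light))  # 64 sessions at $125.0 per user
print(heavy, cost_per_user(SERVER_COST, heavy))  # 16 sessions at $500.0 per user
```

Quadrupling the CPU budget per session cuts density to a quarter and quadruples the hardware cost per user, which is the trade-off described above.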
The easiest way to visualize this never-ending trade-off is the old business adage that says, "You can have it fast, cheap or done correctly. Pick two." We can update this for a remote desktop protocol to say, "You can have low bandwidth, a good experience or low CPU usage. Pick two."
Basically, you can't have your cake and eat it, too. If you opt for low bandwidth and a good experience, you'll have to dedicate some CPU to the problem. You may be able to offload the work to GPU resources instead, but that adds cost and complexity.
Likewise, if you want low CPU and a good user experience, that will come at the cost of bandwidth. On a local area network, that's how most people get by because the bandwidth is more or less unlimited.
A third, less ideal model is to prioritize low bandwidth and low CPU, which comes at the expense of the experience. That's usually not the most desirable option for application performance, but there are plenty of one-off cases where that configuration matters most.
Of course, there are other things you can do that complicate the "pick two" mentality with graphics-intensive apps. Wide area network accelerators, GPU offload cards and even protocol enhancements can all narrow the gap and make it possible to achieve all three goals.
Still, they all come at a cost. You could add "cheap" as another dimension to the picture, but isn't that what we're almost always dealing with? If money were no object, we wouldn't need to worry about striking this balance with remote display protocols.
ABOUT THE AUTHOR:
Gabe Knuth is an independent industry analyst and blogger, known throughout the world as "the other guy" at BrianMadden.com. He has been in the application delivery space for over 12 years and has seen the industry evolve from the one-trick pony of terminal services to the application and desktop virtualization of today. Gabe's focus tends to lean more toward practical, real-world technology in the industry, essentially boiling off the hype and reducing solutions to their usefulness in today's corporate environments.