Remote display protocols for VDI: will RDP be enough?

It seems like hundreds of VDI solutions are popping up now. Some are more complete than others, but all share one common fact: they are server-based computing. In other words, they all involve the remote execution of a Windows instance that sends screen updates across a network to a client display device. For years, this protocol was either RDP or ICA. Moving forward, however, this might not work. Quoting Provision Networks’ Peter Ghostine from a recent conversation:

Historically, ICA and RDP were designed to flush the video framebuffer to the client roughly once every 100 milliseconds, which is fine for most Windows GDI apps, but not suitable for graphics-intensive apps, the Aero experience, 3D apps, and especially apps that require audio-video synchronization. We hardly had much use for more than this functionality until recently. But now that VDI is being promoted as a "desktop replacement," remote display protocols will have to rise to the occasion.

In this world of VDI desktops replacing traditional local desktops, how much does the protocol matter? Where are ICA and RDP today? Who are the other players? This is what we’ll look at in today’s article.

Desktop remoting techniques

Fundamentally, there are four different ways that a desktop running in one place can show up on a client's screen in another location:

  • The “screen scrape” method
  • Screen scrape + multimedia redirection
  • Server graphics system virtualization
  • Proprietary chipset-based hardware acceleration on the server and client


The “screen scrape” method

The general idea with “screen scraping” is that whatever graphical elements are painted to the “screen” on the host are then scraped by the protocol interface and sent down to the client. This can happen in two ways:

  • The client can contact the server and pull a new “snapshot” of the screen from the frame buffer. This is how VNC works.
  • The server can continuously push its screen activity to the client. This can be at the framebuffer level, the GDI / window manager level, or a combination of both. (This is how RDP and ICA work.)
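To make the "pull" model concrete, here's a minimal Python sketch of comparing two framebuffers and sending only the tiles that changed. This is purely illustrative: the 16x16 tile size and the flat-list framebuffer layout are assumptions of mine, not how VNC's actual RFB protocol is laid out on the wire.

```python
TILE = 16  # 16x16-pixel tiles, an arbitrary granularity for this sketch

def changed_tiles(prev, curr, width, height):
    """Compare two framebuffers (flat lists of pixel values) and return
    (x, y, pixels) for every tile whose contents differ."""
    updates = []
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            tile_prev, tile_curr = [], []
            for y in range(ty, min(ty + TILE, height)):
                row = slice(y * width + tx, y * width + min(tx + TILE, width))
                tile_prev.extend(prev[row])
                tile_curr.extend(curr[row])
            if tile_prev != tile_curr:
                updates.append((tx, ty, tile_curr))
    return updates

# A 32x32 "screen": one pixel changes in a corner, the rest is static,
# so only one of the four tiles needs to cross the network.
w = h = 32
before = [0] * (w * h)
after = before[:]
after[0] = 255  # a single changed pixel, inside the top-left tile

updates = changed_tiles(before, after, w, h)
print(len(updates))    # 1 of 4 tiles is dirty
print(updates[0][:2])  # its origin: (0, 0)
```

The point of the sketch is the economy: a one-pixel change costs one tile of traffic, not a full-screen snapshot.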

Login Consultants' Benny Tritsch adds a note of caution regarding the term "screen scraping":

Over the years, this screen-scraping has become very advanced. RDP, ICA, and other protocols don't simply look at pixels on the screen and compress them into graphical images. Instead, this process is enhanced by analyzing the screen content and identifying screen regions that are being reused (such as icons, fonts / glyphs, dialog boxes, etc.). Those graphics elements can be cached at the client side, so if the host needs to send one of these elements, it only transmits the reference number of the cached element and the new coordinates. This dramatically reduces the amount of data transmitted and thus increases performance and user experience. This cached information can even be used for enhanced local echo effects, like Citrix's SpeedScreen Local Text Echo for the standard GDI output.

So even though the specific term "screen scraping" is no longer an exact literal representation of what is happening, the term is used more broadly to describe this general concept.
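As a rough illustration of this caching idea, here's a small Python sketch. The message tuples and the SHA-1 keying are invented for this example; real RDP and ICA bitmap caches are negotiated between client and server and are far more sophisticated.

```python
import hashlib

class BitmapCacheChannel:
    """Server side of a hypothetical cached-bitmap channel: the first time a
    bitmap is sent, the client caches it in a numbered slot; afterwards the
    server sends only the slot number plus new coordinates."""

    def __init__(self):
        self.known = {}  # content hash -> cache slot (mirrored on the client)

    def encode(self, bitmap_bytes, x, y):
        key = hashlib.sha1(bitmap_bytes).hexdigest()
        if key in self.known:
            # A few bytes of reference instead of the whole bitmap.
            return ("CACHED", self.known[key], x, y)
        slot = len(self.known)
        self.known[key] = slot
        return ("BITMAP", slot, x, y, bitmap_bytes)

chan = BitmapCacheChannel()
icon = b"\x00\x01" * 512            # a 1 KB "icon"
first = chan.encode(icon, 10, 10)   # full payload goes down the wire once
again = chan.encode(icon, 200, 50)  # same icon elsewhere: reference only
print(first[0], again[0])           # BITMAP CACHED
```

The second message is a handful of bytes regardless of how big the icon is, which is exactly why reused screen elements are so cheap to remote.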

Screen scrape + multimedia redirection

As most people reading this article know, any screen scraping-like approach works fine for applications that don’t have a lot of graphically intensive screen elements or where relatively low frame rates (~10 fps) are acceptable. But these approaches are not good with multimedia content.

The “screen scrape” method can be combined with “multimedia redirection,” a technique whereby server-side multimedia elements are sent in their native formats down to the client devices. Then the client can play the multimedia streams locally and dynamically insert them back into the proper position on the screen.

This works well if (1) your client has the technical capability and hardware specs to render the multimedia, and (2) your client has the proper codec installed so that it knows how to render the multimedia content. In effect, this means that your clients can’t be “too thin.”

This is what Citrix does in ICA with their “SpeedScreen” multimedia acceleration enhancements. It’s also what Wyse does in RDP with their TCX enhancements.
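The decision logic behind multimedia redirection can be sketched in a few lines of Python. The capability names here are hypothetical, not any vendor's actual negotiation protocol; the sketch just illustrates the "your clients can't be too thin" requirement.

```python
def delivery_mode(stream_codec, client_caps):
    """Decide whether a media stream can be redirected to the client.

    Redirect only when the client has reported both the codec and the
    horsepower to decode locally; otherwise fall back to decoding on the
    host and scraping the rendered frames as ordinary screen updates."""
    can_decode = stream_codec in client_caps.get("codecs", ())
    fast_enough = client_caps.get("can_render_video", False)
    if can_decode and fast_enough:
        return "redirect"       # ship the native stream, client decodes it
    return "server_render"      # decode on the host, scrape frames as usual

# A "too thin" client versus one with local codecs and rendering power.
thin = {"codecs": (), "can_render_video": False}
rich = {"codecs": ("wmv", "mpeg2"), "can_render_video": True}
print(delivery_mode("wmv", thin))  # server_render
print(delivery_mode("wmv", rich))  # redirect
```

Note that the fallback path still works on any client; redirection is an optimization that only kicks in when the client can hold up its end.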

Server graphics system “virtualization”

"Virtualizing" the entire graphics system of the host was also explained to me in the conversation I had with Peter Ghostine, so I'll quote / paraphrase his explanation here:

In “virtualizing” the graphics system of the host, software on the host captures all possible graphical layers (GDI, WPF, DirectX, etc.) and renders them into a remote protocol stream (like RDP) where they’re sent down to the client as fast as possible. (Certainly much faster than the default of 10x per second.) This will give the client an experience which is very close to local performance, regardless of the client device (even on very low-end WinCE and Linux clients).

The challenge here is that GPU capabilities must exist on the server side where the rendering is taking place. This is fine if you plug a physical graphics card into physical hardware running a physical OS. But in a VDI scenario, your hypervisor must be able to virtualize the GPU just like any other piece of hardware. This means that the Windows desktop OS running inside the VM must be able to detect the “virtual” GPU so that it can enable all of its cool graphical features.

This is what Calista Technologies does today: full desktop-like remote experience to any RDP client, even low-end ones, over the regular RDP protocol.

In the future, it's even conceivable that you could somehow hook this into those GPU computing servers that are starting to hit the market now. (NVIDIA's Tesla series is basically 1U servers stuffed full of GPUs. And fans. Lots of fans.)

Proprietary chipset-based solution

The final remote desktop option requires special hardware on the host and on the client side. Screen and video content is captured on the host via a special chipset and sent across the network in a proprietary way to a client device with a matching special chipset.

This is what Teradici does. Today their solution works with physical blades (with their special TERA chips) and their clients (also with TERA chips), but in the future something like this might (in theory anyway) work with something like the NVidia Tesla GPU server (except with a Teradici chip server instead).

What about bandwidth?

This "server-based computing" technology is of course also known as "thin client computing" technology. But what does "thin" refer too? The client device? The protocol? (The LCD screen? :)

In the early days of RDP and ICA, it could be said that the protocol was the "thin" part, and in fact many people used Terminal Server and Citrix to make three-tiered apps work across WAN links. But now that we're talking about remoting full and true desktops, that whole "20kbps" per session thing can be thrown out the window.

Regardless of protocol, regardless of technique, a true “desktop-like” experience is only going to happen with bandwidth. Some of these approaches require more bandwidth than others. As Peter Ghostine said, “No one is going to be able to squeeze an elephant through the eye of a needle. While compression algorithms will always advance, if a user wants to watch a video at 24 frames per second, that’s a lot of data that needs to go across the network. Period.”
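To put some rough numbers behind that elephant, here's the back-of-the-envelope arithmetic. The 1024x768 resolution and the 50:1 compression ratio are assumptions for illustration only, not a claim about any specific protocol.

```python
# What raw, uncompressed 24 fps video costs, and what even an optimistic
# compression ratio leaves you with, per session.
width, height, bpp, fps = 1024, 768, 24, 24
raw_bps = width * height * bpp * fps       # bits per second, uncompressed
print(raw_bps / 1e6)                       # ~453 Mbps raw

compressed_bps = raw_bps / 50              # assume a generous 50:1 codec
print(compressed_bps / 1e6)                # still ~9 Mbps per session
```

Even with heavy compression, that's orders of magnitude beyond the "20kbps per session" that people associate with thin client computing.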

Delivering a few business apps in 1998 via RDP or ICA is very different than delivering a whole and completely functional desktop in 2008!

Where does this leave RDP and ICA?

Microsoft’s RDP protocol is sort of the standard for a lot of remote computing conversations since it’s been built into Windows for the past eight years. RDP is a good protocol. RDP version 6, built into Windows Server 2008, will support all those new-fangled features like seamless windows, RemoteApps, TS EasyPrint, etc.

Fundamentally, Citrix’s ICA protocol is not that different from Microsoft’s RDP protocol. In practical use, yes, connections over ICA typically have a better experience than the same connection over RDP, but that’s because Citrix has chosen to enable their advanced features (SpeedScreen, UPD printing, compression, virtual channel limits, etc.) only when using the ICA protocol. It’s not because ICA is fundamentally different from RDP. This is why RDP was classified above as "screen scraping," while ICA was classified as "screen scraping + multimedia enhancements."

Why does Citrix even bother with ICA today? Remember that Citrix actually developed ICA (and the multi-user kernel technology that eventually became Terminal Server) as an add-on to Windows on their own in the mid-1990s. When Citrix licensed their core “MultiWin” technology back to Microsoft in 1997, Citrix kept ICA for themselves. Microsoft went out and developed RDP on their own (actually based on some of the work they’d been doing with NetMeeting).

Sure Citrix could have just used RDP back then, but in 1997/1998 it was really important that Citrix had a hard-core feature to differentiate themselves from Terminal Server. (Remember this was before the days of application publishing.)

Over the years, RDP got better and better, but Citrix couldn’t ever “switch” because they spent so many years telling people how crappy RDP was. Plus, other companies like Provision and Ericom and Jetro came out with ICA-like extensions to RDP, and Citrix wanted to keep the ICA brand in order to discredit their competition as using “just” RDP.

The other (and lesser-known) driver that really required Citrix to hang on to ICA was that in versions of Windows before Server 2008 (even including Windows Server 2003), Microsoft didn’t expose everything that Citrix needed to integrate ICA with Terminal Server. This meant that Citrix had to find their own way of doing things, which in turn meant that the way ICA hooked into TS was very proprietary.

But in Windows Server 2008, Microsoft (with the help of Citrix I’m sure) finally created all the “proper” and fully-documented interfaces Citrix needs to snap ICA into Windows. This is great for Citrix! And it’s also great for Provision and Ericom and HOBlink and everyone else who wants to enhance RDP. (Ironically this also means that moving forward there is absolutely nothing holding Citrix to ICA except for marketing and backwards-compatibility.) But if they wanted to, Citrix could transfer all of their "+ multimedia" away from ICA and into RDP.

What will VMware, Microsoft and the other VDI vendors do?

By now I hope you understand that if you want to do VDI with a “real” local desktop experience, you need more than pure RDP to make it happen. (Well, perhaps I should phrase that as "I hope you understand that I think you need more than traditional RDP.")

Citrix has announced their VDI strategy based around their XenDesktop product—a combination of Citrix Desktop Server, Citrix Provisioning Server (Ardence), and XenServer. XenDesktop will use the ICA protocol with direct connections into workstation VMs. Citrix will leverage the same SpeedScreen multimedia acceleration technologies as Presentation Server to deliver a decent remote desktop experience beyond what “pure ICA” could do. So they’re all set.

Teradici and Calista Technologies are doing some interesting things with regards to server-side hardware and software, so they’re all set too.

Wyse and Provision Networks are enhancing the RDP protocol in much the same way that Citrix is enhancing the ICA protocol, so they’re all set.

Several other VDI vendors are doing interesting things in the protocol space too. Qumranet (creators of KVM, the "Kernel-based Virtual Machine" for Linux) have developed a purpose-built remote desktop protocol called SPICE, so they’re all set.

DeskTone is using something they call the “dynamic best fit” protocol for remoting desktops, so (one assumes) they’re all set too.

Who’s missing from this “all set” list? The two biggest vendors are VMware and Microsoft, who both (at least today) are basing their VDI solutions around an unmodified RDP.

I want to reiterate that I’m not suggesting that RDP is “bad” for VDI or that it can’t be used. My point is that RDP will work fine for a lot of line-of-business apps where 10fps is acceptable and not too much changes on the screen. But for companies looking to replace their desktops (and all of their apps), RDP by itself is just not going to cut it.

What will VMware do?

Good question.

There’s a new standard that’s working its way through the Video Electronics Standards Association (VESA) called “Net2Display.” The basic idea is that Net2Display will be a remote display protocol (like RDP, ICA, X, VNC, etc.) that’s purpose-built for remoting entire desktops to remote clients (with full USB support even!). This standard is being developed now (and should be ready very soon), and will be available for basically any company to use when they build their VDI software/server/client/whatever.

There’s not too much available on Net2Display right now. (There's a four-page overview that's very good. Direct link to PDF.) This paper is very recent and even talks about RDP 6. What’s interesting is that the people on the Net2Display committee work for companies we all know in this space: IBM, Teradici, DeskTone, Avocent (the IP-KVM people who are probably nervous about all this remoting), and...VMware!

Who knows where this Net2Display standard will go, but the people behind it are no dummies. The idea of an open standard for this kind of thing as opposed to a proprietary vendor-controlled protocol is extremely interesting.

Of course VMware could also just buy one of the other companies mentioned already instead of trying to develop something from scratch.

What will Microsoft do?

Maybe they’ll ditch RDP and modify Terminal Server and Vista to also meet the Net2Display standard?

Ok, maybe not. But they're also no dummies. Microsoft has a lot of developers and a lot of motivation to make sure users' Windows desktop (and especially Aero) experiences are as good as they can be. And you can bet they're not going to do that via an old RDP. Benny Tritsch suggests some things that Microsoft could do:

  1. Microsoft changed the window manager architecture in Vista and Server 2008. The new GDI framebuffer is embedded in the WPF hierarchy instead of being a single standalone layer of pixels. Today, Server 2008's RDP only uses the single GDI framebuffer instead of remoting the entire hierarchy. What if RDP were extended to remote the entire WPF framebuffer tree?
  2. .NET includes concepts like .NET Remoting which allows the communication of application components over the network. Will Microsoft sort of merge RDP and .NET remoting? (Today, .NET remoting does not include the transmission of graphics objects as we use them for SBC.)
  3. Microsoft uses real-time protocols (RTP) for their conferencing software. Could this be used to “tunnel” RDP in a better way? This reminds us a little bit of what Citrix does via Session Reliability / CGP for improving the stability of the ICA protocol when it’s being used on lower-quality networks. The same could be done to reduce delays by using a real-time protocol as a transport medium for RDP.

Fun times indeed! And big thanks to Peter Ghostine and Benny Tritsch for contributing their ideas and visions as to where we're headed.

Join the conversation

Thumbs up, this topic is very useful and not mentioned so often by the various vendors!
Nice overview! I don't see HP Remote Graphics Software mentioned, though. I know that they offer and support 2D, 3D and multimedia application access within a BladePC environment, and also in a VM (next month).
I wonder how the performance of RGS software within a VM will be, since there isn't a high-end graphics card within a VM.

Very decent article, Brian. I think it is indeed time to discuss one of the (upcoming) pitfalls of VDI. When I say VDI in this regard, I'm referring to VDI as the bulk of the people (will) use it: via a screen-scraping protocol. I think you are even being mild in stating that "now that we're talking about remoting full and true desktops, that whole "20kbps" per session thing can be thrown out the window." The fact that screen-scraping protocols (such as RDP and ICA) are, to many people, synonymous with low bandwidth usage is very concerning. This concern unfortunately isn't limited to the VDI train that is coming. Many, many current SBC implementations that offer a published (Terminal Server-based) desktop suffer from fatal latencies due to congested WAN links. Heck, I've even seen cases where this occurred in LAN environments (100Mbps).

The image of VDI as that flexible, just-like-your-home-PC solution that the vendors are sending out will probably make these issues even more common.

Let’s hope people are going to be aware of these possible issues when VDI really starts to take off.
Brian, you are a most excellent communicator, and I now have an easy to understand article for the "dummies" who want to go cheap and also want great graphics at 24fps.

Lauds and honours!
I'm not quite sure I understand your categories. When RDP is used with TS, there is only 1 graphics card, but there are 50-100 users. It is not scraping the stuff being sent to the graphics card; it is emulating a graphics card for each user. In a sense it is still scraping, but the difference is, instead of having to watch what is going on the screen and capture it (like true screen scraping), the programs contact their emulated graphics card and tell it what they want to change. So the graphics system has been virtualized.
Look at UltraVNC and TightVNC; they both have an emulated graphics card option as well.
As far as I can tell, your distinction for server graphics virtualization is that there is lower overhead because it is a more direct-path architecture.
I believe that RDP in a TS environment, or ICA, falls more on the side of graphics virtualization.
Matt Grab

Kudos on a great article that we can all use as a summary for those who aren't as involved with the whole industry (i.e. management, etc.). You clearly broke down the different technologies and options, and even alluded to where a lot of this might be going. The ONLY criticism I have would be input from other "experts" rather than just the two that were included. Of course too much input would only be likely to skew the information, and I by no means think that either of them created any slant on the content provided. Thanks again for putting this all together!
Hey Reuben,

By its nature, RGS is very processor-intensive. This is the main reason it's currently only deployed within HP's Blade PC products. That being said, however, it does work very well in low-latency blade workstation/PC environments. I don't think it would be a serious competitor to Net2Display, assuming that the standard can deliver on its goals.
Very concise and to the point - the most intriguing part for me was the final set of options you discussed that Microsoft might take.

Interestingly enough, I have heard some rumours that if anyone wants to run video and multimedia (at 24 fps?) from VDI they are essentially going to suffer from this scenario, and because of this they may only see as few as 2 sessions/users/VMs per core. This does tend to tear at the cost-benefits side of the VDI play. Any comments on this?
I believe that the school district in Collier county Florida has a large VDI deployment using HP Remote Graphics Software. I saw that at VMworld 2006.

- Robbie

It's this presentation:
As I understand it, they are still deploying and are making adjustments to their architecture along the way based on lessons learned. Prior to the latest RGS release, they offered a version that worked with VMs. If you closely read the requirements for this version, it states they do not support any hardware besides their own and do not support VMs. You might be able to get support as a one-off.
See the link below for info on their VDI solutions and direction:
Reuben, that's exactly in line with how I understand it. I think Collier County is a fairly unique model, and based on discussions with numerous individuals, my understanding is that RGS carries significant server (or workstation) side overhead compared to RDP or ICA and thus is not suitable for traditional VDI (or as IDC and Desktone refer to it, Server Hosted Desktop Virtualization). Of course, even if it did, it does not address the shared GPU issue which is another bottleneck much to Brian's points.
So my experience has been that RGS and RDP typically perform much worse in a WAN scenario. Also as this article correctly hints at, multimedia and other enhancements make a difference to the overall user experience. In this regard, based on my experience, ICA wins today with the Citrix solution. I like Provision, but they don't scale as well as Citrix today, but I'd encourage them to keep trucking and adding to RDP. Wyse TCX works well with both RDP and ICA. Callista breaks down really quickly on a WAN with latency from what I have read and heard. Teradici, well ok great, but unclear what they want to be when they grow up. Seems to me much of what they tout may be able to be built off the shelf by somebody using hardware from say Never tested Desktone. Net2Display is a standard, I have read it. Great concept, just like ThinC was, but the question is will this ever work well over a WAN, and how long before somebody builds a real high quality implementation. A lot of talk about open source protocol, but I doubt VMWare or MS are interested. SPICE looks really interesting, and would love to hear feedback from anybody whose done any WAN testing with it. I think VMWare has no choice but to buy a protocol, or perhaps this space will just become too confusing. Maybe if there was one best of breed protocol, that just worked, the vendors could compete on features as opposed to protocol. We don't have different HTTP versions and TCP/IP stacks do we. Interoperability is key, otherwise the cost of VDI goes up. Alternatively, perhaps the company that designs a connection broker that is protocol agnostic is the layer that we can all use as an integration layer. So today, and at least looking forward to the medium term, for WAN requirements and complex use cases I still believe ICA is the leader. 
However over time this will become less of an edge to Citrix as others innovate, and eventually I personally hope there is one protocol, and lots of features added by the vendors from which we choose as consumers as opposed to protocol.
Great point about there being too many potential protocols. Take a look at panologic as well. Zero client, really cool, but what protocol/technique will they use over a WAN? Will we just end up with a bunch of interfaced solutions as opposed to integrated ones?
Great article - short and concise wrt. where things are in the VDI space.

Wanted to comment on "Teradici, well ok great, but unclear what they want to be when they grow up. Seems to me much of what they tout may be able to be built off the shelf by somebody using hardware from say"

The Teradici solution is quite unique. It basically remotes all of the key aspects of a CPU - not at the 'driver' level but at the hardware level - giving you the 3 main connectivity points - Video, Audio, and USB.
The Video blows anything and everything out of the water - who else can do dual 1600x1200 (~4 megapixels) at 60 fps? That is twice the amount of data as the best chip (DL-160). Oh, and the Tera devices also dynamically adapt the throughput based on what is changing on the screen and how much bandwidth your network is capable of handling.
Audio - HDAudio compliant means (5.1 - if you ever need it) - more than enough power for your typical user, and enough for those in the high-end A/V users.
USB - anything and everything under the sun is supported - they have standard USB xHCI support, so if your OS has a generic USB driver it supports the Teradici chip set.

The host remotes all of these things - packages it up into IP packets and sends it on its merry way over ubiquitous Ethernet.
The client isn't really a client at all (why they call it a portal) as it doesn't have all of the management overhead of a thin-client (no OS or software to load on it - everything runs on the Host).

Because the hardware interfaces were used as the access point, there are no drivers to install on the OS making it OS agnostic - yes - run linux, windows, mac OSX whatever your heart desires.

There's lots more on the webpage. But it doesn't compare to seeing this thing in action.

So back to your comment - I think it should read:
"Unclear when everyone else realizes they want to be Teradici when they grow up."
I would not call ICA screen scraping. It leverages a Virtualized Display Driver loaded in Session Space. This display driver does not only remote bitmaps, but also remotes a rich set of GDI commands and associated ROP operations based on the capabilities of the client operating system.
From a local (non-WAN) perspective - what if one could (diskless) bootstrap the hypervisor and then the virtual machine directly - no screen scraping, RDP / ICA, no server-based computing, etc.?
I have been analyzing this subject for quite some time and completely agree with your conclusions. I must agree with the others that your synopsis is remarkable. This is a very complex subject that is very difficult to synthesize to a few paragraphs as you did. Well Done!

It is my opinion that most customers like standards. When given too many choices, particularly in an emerging technical area, customers will often hold off until the dust settles. I think that this may be the case for some companies now.

I think the Proposed Net2Display standard draft says it best… “One additional disadvantage of existing remoting approaches is that customers may be locked into one OS, remoting and/or virtualization and have limited flexibility on other options. This disadvantage becomes magnified when the customer-selected remoting protocol is acquired by another company, upgraded incompatibly or discontinued leaving future direction in doubt.”

I believe that despite the relative confusion of the future of the protocol, VDI (or whatever you want to call it) will march on for now, but major adoption will occur when standards emerge.

Michael Franke
I like this article very much. It discusses the idea of having a full desktop experience via VDI - VDI in the sense of running many virtual desktops on server/host-style hardware and having remote access to them. In a world of multiple media and blurring boundaries between private and business life, which desktop do you mean? The desktop I watch videos and listen to music on, or the desktop I play high-resolution, high-performance games on? OK, I overstate the use cases, but I want to lead up to my point.
Watching videos is not a core business use case. I'm reminded of the elephant and the needle. If you don't have the appropriate network bandwidth, send it out via satellite or cable network, or just send a DVD to the remote location. I know many enterprise customers using extra tools to switch off USB connectivity; it also seems not to be business critical. I also didn't understand the business benefits of a playful Aero-style user interface like MS Vista has.
But there are business-relevant applications that need high, real-color resolution and a high frame rate, like 20-30 fps.
I only know the Citrix ICA protocol running in huge business implementations with Citrix Presentation Server, where you get a restricted (business-like) desktop on everything from fat to thin clients in remote locations. Today, applications with lower display requirements are typically used to get a good cost/user ratio.
If you want to use all the benefits of true VDI data processing, start with server-based computing. Flexible access with different clients to your business applications, plus the advantages of centralized, secure operations, is a really good starter.
If you then have use cases for specialized apps and users like software development, DTP and CAD, add the true VDI concept to it. OK, this leads away from the main discussion. Network quality (priority, bandwidth and latency) and the protocol used to transfer data for the user interface and printer output are important for these concepts.
I'm a fan of standards. Without Ethernet, TCP/IP (4 and 6), HTTP (1, 2, 3 ...), Java 1.x, Kerberos, Windows and others, we would be tied to a single manufacturer for hardware and software and would pay a lot more than we do today in a standards-compatible world. Net2Display is a cute idea. X11 was too. Let's enhance RDP like Wyse and Citrix do. It works - now.
Great concept, but bandwidth bandwidth bandwidth... not every network is there yet.
Watching videos could be a core business case for different verticals. Education is one that comes to mind. Training is another specialized area where it could be relevant. Mailing DVDs to each location, student, etc. is not efficient enough to be effective. Not to be totally off base here, but why do you think companies like Netflix and Blockbuster are now allowing you to download videos rather than just mailing DVDs everywhere? :) Ok, maybe that was a stretch, but you get the idea.

Secondly, VDI does not equal CAD. You are still limited to the remoting protocol. This is one of the reasons why Citrix started looking at Project Pictor and changing ICA to be able to work well with CAD applications.

Net2Display is something that companies like IBM are looking heavily at and investing in. However, the one quandary they keep running up against is LAN vs. WAN. Very proficient across the LAN with some nice features and functionality, but horrible across the WAN.
Agreed, video information is certainly not limited to entertainment and non-business purposes. For example, at law firms witness depositions are distributed as traditional paper transcripts, but are now also accompanied by the video record. Courtroom proceedings are sometimes streamed live to the interested parties. And evidence can include security camera footage or published DVD media.

SBC technology is an excellent means for lawyers to collaborate on the same case around the world. But the video performance is not acceptable without huge investments in bandwidth.
I agree with the LAN/WAN bandwidth issues; remote WAN sites suffer if bandwidth is not sufficient. We noticed a marked improvement in bandwidth with Riverbed products compressing the normal WAN traffic. It didn't really affect the true RDP sessions, but it did help with any fat clients and printers that hung off the WAN.
Surely there could be some hope in terms of compression technologists doing something with the RDP and ICA traffic, or could TCP/IP be improved to cope? I.e., compress the traffic at the farm and uncompress it at the remote site?
You should try using Expand Networks instead. They're the only company to compress real-time IP traffic, including RDP. Expand's compression is typically 5 times better than the native RDP compression.
SPICE - Qumranet's VDI solution - is built from the ground up for VDI. Its performance is 10x better than RDP and will be 5x better than the new Net2Display specs.

SPICE works great over the WAN and has intelligence to deal with congestion.

I strongly suggest people try SPICE from Qumranet. You will be very surprised, like me, to see the great, true full-desktop experience.
Good as an overview and taxonomy of what is out there or just over the horizon. Some of the explanation of how RDP & ICA work is erroneous, though. Peter Ghostine is clearly mistaken about the 100ms refresh cycle in ICA and RDP. Surely Citrix or Microsoft can confirm or deny the validity of this assertion?

Having said that, Matt Grab's comment and the related responses address the discrepancies.
Many people here seem to treat FPS as a good indicator of performance. FPS is a nice metric, but it doesn't really tell you much on its own. What are you rendering, how fast can it be rendered, and what does the rendered output look like (meaning how well can it be compressed)? There is really a matrix of things to consider: the complexity of the scene you are rendering, how compressible the rendered output is, and the available bandwidth. You may find that you have a simple scene to render, but that rendering results in a complex image (not very compressible), so you need a lot of bandwidth to deliver all those frames. The only time the client will see the full frame rate is when the scene is relatively simple, the rendered output is fairly compressible, and you have the bandwidth to deliver all the rendered frames. What you really want to look at is FPS delivered to the client. Most solutions report FPS on the host side, not the client side.
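The "matrix of things to consider" above can be captured in a back-of-the-envelope model: the client-delivered FPS is the lesser of what the host can render and what the link can carry at the frame's compressed size. A minimal sketch with purely illustrative numbers (none taken from any real protocol):

```python
# Back-of-the-envelope model of client-delivered FPS. All inputs are
# illustrative assumptions, not measurements of any specific product.

def delivered_fps(host_fps, frame_bytes, compression_ratio, link_bps):
    """Frames per second the client can actually receive.

    host_fps          - frames the host can render per second
    frame_bytes       - uncompressed size of one frame update
    compression_ratio - e.g. 10.0 means the frame shrinks 10x on the wire
    link_bps          - link capacity in bits per second
    """
    wire_bits_per_frame = frame_bytes * 8 / compression_ratio
    link_limited_fps = link_bps / wire_bits_per_frame
    return min(host_fps, link_limited_fps)

# Simple GDI-style updates compress well: the host frame rate survives.
print(delivered_fps(30, 200_000, 40.0, 2_000_000))   # prints 30

# A complex, barely compressible full-screen frame: the link caps it.
print(delivered_fps(30, 2_300_000, 1.5, 2_000_000))  # well under 1 fps
```

The second case shows the point made above: a "simple" pipeline metric like host-side FPS says nothing about what actually reaches the client once compressibility and bandwidth enter the picture.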

Don't think so. OutBufDelay specifies the RDP output buffer transmission delay, and its default value is 100ms, as indicated in Brian's article.
Well, it's not as simple as black and white. Among many other things, there's also a parameter called InteractiveDelay (default = 50ms), which is relevant when the user is typing or clicking something.
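Taken at face value, these delays imply only an upper bound on how often the output buffer is flushed, not a frame rate as such, since each flush can batch many drawing operations. A trivial sketch of that ceiling, using the default values cited in the comments above:

```python
# Ceiling on screen flushes implied by an output buffer delay.
# Each flush may batch many drawing operations, so this bounds the
# perceived update rate; it is not itself a frame rate.

def max_updates_per_second(interval_ms):
    """Upper bound on buffer flushes per second for a given delay."""
    return 1000.0 / interval_ms

print(max_updates_per_second(100))  # OutBufDelay default: at most 10 flushes/s
print(max_updates_per_second(50))   # InteractiveDelay default: at most 20/s
```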
Not sure if anyone else can comment on their solution, particularly with ESX 3.0.1. I've just started evaluating it, and to be quite honest, we can all talk about video compression over long distances and new and interesting achievements, but in today's world (at least where I'm at), I'm simply looking to extend my basic business apps remotely to consultants, vendors, contractors, off-shore staff, etc.

What I think matters as an administrator is the ease of management and deployment of these virtual desktops, the ability to wrap access rules around them, etc., and this product so far seems perfect (it even has an SSL front end). Don't get me wrong, it could use a little work, but so far it does all of that and beyond. It is, by far, the most mature VDI access product in the space that I can see. VMware bought Dunes and then came out with Silverstone; I messed with that beta for two weeks and quickly figured out it is barely a gen-1 product. If Citrix's product were a little better (auto-provisioning of the VMs, etc.), then I could see making an argument, but it too (amazingly) is behind Provision. Maybe not with the next version, but if they're sticking with Xen, it will still be behind, because VMware is so far ahead of Xen. I told my VMware rep yesterday: buy Provision Networks now, before Virtual Iron does.

Does anyone else agree/disagree? Has anyone successfully implemented a large-scale VDI environment behind Provision VAS 5.8? If so, can you verify that you successfully published apps to an XP desktop or used SoftGrid successfully? I'm just curious, because I haven't been able to proceed that far yet. :)
I know of nobody who has implemented Provision/VMware/SoftGrid at scale across a diverse network. This is still a very immature industry. It's interesting to note a few of the news articles questioning VMware's dominance. I tend to agree; is it really founded? What do they really offer? HYPE-pervisor is much of what they do: a limited connection broker, no protocol of their own, and ESX runs on limited hardware. Provision looks great, but it just feels like a cheap Citrix rip-off. It seems OK for smaller, less complex shops, but when MS Server 2008 TS comes out, I think that will begin to eat away at this market as well. Cheap customers are CHEAP, and there is no money to be made there, so the Provision business model is questionable to me. I agree they seem to be a takeover target, but if VMware picks them up, there's so much overlap that I'm not sure what the real value is here.

Virtual Iron seems to be on business model 10.x, so it's really unclear what they are doing, period. SoftGrid? Well, zero innovation on that product since the MS buyout; all the brains have moved to other groups, so where is that going... "Please buy the MS System Center junk rip-off product." Thinstall has a lot more innovation in this space in terms of thinking, and a previous article mentioned a company called InstallFree that looked interesting as well.
Despite its name, Citrix's XenDesktop 2.0 (formerly Desktop Server) will work with VMware or XenServer as the Virtual Infrastructure for hosting the WinXP/Vista images. XenDesktop will also include Provisioning Server (formerly Ardence) technology, which no one else has.
In the time interval you aren't just sending bitmaps but possibly several other types of drawing operations. Thus the time interval does directly equate to frame rate. Scene complexity etc need to also be taken into account.
My prediction is that Citrix will be the ultimate champ in the VDI space. They already have the best protocol, and with the SpeedScreen 2 and Pictor stuff they are going to be hard to beat. They also have an ace up their sleeve: the acquisition of WANScaler is going to give them an opportunity to develop a branch office edge device with proprietary ICA enhancements that allow for an even better VDI experience. If you want your users to be able to reach their virtual desktop over the internet, you are going to want to use ICA.

A new protocol? Well, OK, yes please: I would like a latency- and congestion-resistant, secure, smooth, feature-rich protocol that supports 3D graphics and multimedia, with a heap of ice cream on top :)
I think you meant to say that the time interval does not directly equate to frame rate. I agree there are several other factors that come into play.

I work in an agency that had the chance to chat with the Microsoft developers behind Windows Server 2008.


Check out the vids and meet some colourful characters…

Really... are you sure? It is NOT in the beta, and I cannot get a definitive answer regarding this. Even if it is, it does not address most of the multimedia content on the web today: Flash! MMR is a fine solution for specific needs, but it is not a good strategy for a comprehensive desktop replacement. We'll need a more complete solution than running around sticking our fingers in the dam. GPU virtualization or hardware acceleration are really the only chance for this.

The Calista acquisition should reshape the picture. I hope MS will include Calista in one of the Server 2008 updates rather than waiting for the next major release.

Thank you for the great overview!

Alec Istomin

I don't agree. Many enterprises have plenty of ESX hosts deployed in their datacentre but no Citrix environment. Don't forget how costly Citrix licenses are; it would be nice to just hook into the existing ESX infrastructure with a new/extended protocol without being forced down the Citrix route.

It seems like everyone is stuck on the "protocol" being used. I would love to see a true comparison of the ICA and RDP protocols. From where I sit, where Citrix has the advantage is in the client. Consider operations like manipulating a static image, whether rotating it or scrolling while zoomed in. Side by side, we've seen ~50% differences in the speed to "refresh" the screen change. It's similar logic to the lossless graphics editors: why retransmit the same bits if the existing bits were simply relocated or turned on their side? More effort needs to be put into the client. I'd be willing to bet my lunch that the RDP and ICA transports compare very nicely.


I was just thinking this as I read the article again. It seems the answer to "What will MS do?" has been somewhat answered... I would really like to know what VMware has in store, since I fully agree that RDP "alone" will not be the mechanism to reach the 100% virtualized desktop reality.


Does RDP support IPv6? How is IPv6 enabled?