Desktop virtualization makes use of access virtualization, application virtualization, processing virtualization, and security and management software for virtualized environments. Let's review some of these technologies and their impact.
Access virtualization, also known as presentation services, encapsulates the display and input functions of an application and then projects it out over the network to a remote system. Special-purpose client-side software interprets this stream of data and presents it on the screen of the remote device. Since this process happens at an operating-system level, the application doesn't need to be aware of it.
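The encapsulate-and-project flow described above can be sketched as a toy client/server exchange. This is a minimal illustration only, using an invented JSON-over-TCP "protocol"; real presentation-services products (RDP, ICA and the like) use far more sophisticated encodings, and none of the names here come from any vendor's API.

```python
import json
import socket
import threading

def serve_application(listener: socket.socket) -> None:
    """Server side: run a trivial 'application' whose display output is
    encapsulated as JSON frames and projected to the remote client."""
    conn, _ = listener.accept()
    with conn:
        frame = {"type": "frame", "contents": "Hello from the server"}
        conn.sendall((json.dumps(frame) + "\n").encode())
        # Receive an input event (e.g. a keystroke) sent back by the client.
        event = json.loads(conn.makefile().readline())
        reply = {"type": "frame", "contents": f"You pressed {event['key']}"}
        conn.sendall((json.dumps(reply) + "\n").encode())

def run_client(port: int) -> list[str]:
    """Client side: interpret the stream and 'present' each frame locally.
    The application itself never sees any of this."""
    rendered = []
    with socket.create_connection(("127.0.0.1", port)) as sock:
        reader = sock.makefile()
        rendered.append(json.loads(reader.readline())["contents"])
        sock.sendall((json.dumps({"type": "input", "key": "Enter"}) + "\n").encode())
        rendered.append(json.loads(reader.readline())["contents"])
    return rendered

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # ephemeral port for the demo
port = listener.getsockname()[1]
listener.listen(1)
server = threading.Thread(target=serve_application, args=(listener,))
server.start()
frames = run_client(port)
server.join()
listener.close()
print(frames)  # ['Hello from the server', 'You pressed Enter']
```

Note that only display frames and input events cross the wire; the "application" logic stays entirely on the server, which is the essence of presentation services.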
Processing virtualization is a spectrum of technologies, ranging from software that lets a single workload span many systems for scalability or performance to software that lets a single system support multiple independent workloads. It's the latter that is combined with virtual access software to create the technology known as Virtual Desktop Infrastructure (VDI).
It is clear that security and management software play an important role when you use any of these technologies. Virtual desktop technologies may be used together or separately depending upon the goals of the organization.
Each of these technologies is useful when applied to the appropriate tasks. Since the technologies' suppliers all appear to use similar language when talking about their solutions, I often run into confused IT decision makers. It's not clear to them when access virtualization is enough, when they should add application virtualization, or when they need to add VDI to the mix. Let's take a look at these options.
Virtual access to an organization's applications is a useful tool when workers always have access to the network and have little need for video, sound or other types of unstructured content to do their work. Virtual access suppliers have made it possible for people to use thin clients, laptops, desktop systems and even some smartphones to access those applications. Suppliers such as Citrix, ClearCube, HP, IBM, Microsoft, Pano Logic and Sun have been working hard to remove those limitations (network dependence and limited support for rich media) through the use of new hardware and software technology.
When workers need to take an application and its data with them or when an application would conflict with other applications already in use, application virtualization is a useful tool. Applications can be copied or streamed down to a mobile system before the worker leaves the office. These applications are then available while the worker is at a customer site, a hotel, a conference or while flying from one location to another.
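The copy-or-stream idea can be illustrated with a toy bundle: the application and its private settings travel together in one archive, are extracted onto the mobile system, and run without installing anything system-wide, so nothing conflicts with locally installed software. The bundle format and file names below are invented for the sketch; real products such as ThinApp or App-V use their own packaging and isolation mechanisms.

```python
import io
import subprocess
import sys
import tempfile
import zipfile
from pathlib import Path

def build_bundle() -> bytes:
    """Package a trivial 'application' and its private config as one archive."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("app.py",
                    "from pathlib import Path\n"
                    "cfg = Path(__file__).with_name('settings.ini').read_text()\n"
                    "print('running with', cfg.strip())\n")
        zf.writestr("settings.ini", "theme=corporate\n")
    return buf.getvalue()

def stream_and_run(bundle: bytes, target: Path) -> str:
    """'Stream' the bundle down to the mobile system and run it in isolation."""
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(io.BytesIO(bundle)) as zf:
        zf.extractall(target)   # nothing is installed system-wide
    out = subprocess.run([sys.executable, str(target / "app.py")],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

with tempfile.TemporaryDirectory() as tmp:
    result = stream_and_run(build_bundle(), Path(tmp) / "laptop")
print(result)  # running with theme=corporate
```

Because the application carries its own settings and leaves the host untouched, the same bundle runs identically at a customer site, a hotel or on a plane.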
This approach has also been used when an organization is moving from one version of an operating system to another. Some suppliers, such as Citrix, InstallFree, Microsoft and VMware (with its Thinstall technology), have made it possible for Windows XP-specific applications to run on Windows Vista-based systems.
Virtual Desktop Infrastructure
VDI is the next step some organizations take on the journey to a virtualized environment. Entire workloads, including the operating system, the applications and other necessary components are encapsulated into a virtual machine image. You can run virtual systems on the client-side system, on a local PC-blade computer or back in the data center on a server. Workers can then access "virtual desktops." These virtual desktops could be running Windows, Linux or Unix and still reside on the same machine. Suppliers such as Citrix, Microsoft, Neocleus, Qumranet (now part of Red Hat) and VMware offer technology that makes this trick possible.
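The brokering idea behind VDI can be sketched in a few lines: each desktop is an encapsulated image (operating system plus applications), several images with different operating systems can run side by side on one host, and a broker connects each worker to their assigned virtual desktop. All class and field names here are invented for illustration and do not reflect any vendor's product.

```python
from dataclasses import dataclass, field

@dataclass
class DesktopImage:
    name: str
    os: str           # desktops with different OSes can share one host
    apps: list[str]   # the whole workload is encapsulated together

@dataclass
class Host:
    capacity: int
    running: list[DesktopImage] = field(default_factory=list)

    def start(self, image: DesktopImage) -> None:
        if len(self.running) >= self.capacity:
            raise RuntimeError("host is full")
        self.running.append(image)

class Broker:
    """Connects workers to virtual desktops running in the data center."""
    def __init__(self, host: Host):
        self.host = host
        self.assignments: dict[str, DesktopImage] = {}

    def connect(self, worker: str, image: DesktopImage) -> str:
        # First connection starts the desktop; later ones reuse it.
        if worker not in self.assignments:
            self.host.start(image)
            self.assignments[worker] = image
        desktop = self.assignments[worker]
        return f"{worker} -> {desktop.name} ({desktop.os})"

host = Host(capacity=2)
broker = Broker(host)
print(broker.connect("alice", DesktopImage("win-desktop", "Windows", ["Office"])))
print(broker.connect("bob", DesktopImage("linux-desktop", "Linux", ["LibreOffice"])))
print([d.os for d in host.running])  # ['Windows', 'Linux']
```

The last line shows the point made above: desktops running different operating systems reside on the same machine, while workers simply connect to "their" desktop.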
There's no pat answer to the question of which system to use. IT decision makers need to consider what each member of their team requires. Some will be well served by virtual access to applications running on a server. Others will be best served by a combination of local and remote execution. Some will need the ability to run a diverse workload they can best access using VDI.
Virtualization seems all the rage now, and many act as if it is something new, when it's really based on ideas implemented in mainframes 30 years ago and in the world of midrange systems 20 years ago. That technology is now finding its way onto industry-standard client and server systems. And, as with other areas of information technology, here, too, everything old is new again.
About the author: Daniel Kusnetzky, president of the Kusnetzky Group LLC, is responsible for research and analysis on the worldwide market for system software, open source software and virtualization software. He examines emerging technology trends, vendor strategies, research and development issues and end-user integration requirements. Mr. Kusnetzky has been involved with information technology since the late 1970s working for both end user organizations and IT equipment suppliers. Prior to founding the Kusnetzky Group, Mr. Kusnetzky was executive vice president of Corporate and Marketing Strategy for Open-Xchange. Prior to that, he worked for IT industry watcher IDC and Digital Equipment Corp. His comments and opinions have been published in the Boston Globe, Byte Magazine, ComputerWorld, Communications Week, eWeek, InfoWorld, Investor's Business Daily, Network World, New York Times, PC Week, PC World, San Jose Mercury News, Wall Street Journal, and many others. He has appeared on BBC, CNN, CNNfn, CNBC, MSNBC and NPR.