It's going to take someone longer to walk 10 miles than 10 feet. Similarly, it takes data a lot longer to travel across continents than it does to travel within a single office building.
The time it takes data to travel between two points is known as network latency, and it is a critical consideration for organizations using VDI or desktop as a service (DaaS), because both technologies put distance between users' screens and their applications. If that distance grows too large, latency becomes noticeable and the user experience suffers. And although remote display protocols are designed to tolerate fairly high network latency while remaining usable, other parts of a VDI or DaaS deployment may not tolerate latency as well.
The good news is that data travels through fiber at roughly two-thirds the speed of light, which is still very fast, so cross-city network latency shouldn't be much higher than the latency within a building. But as distances grow, so does latency. Introduce interstate or transcontinental distances and network latency can become significant. The best way to limit network latency is to bring data and desktops closer together.
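As a rough sketch of why distance matters, the theoretical minimum round-trip time over fiber can be estimated from distance alone. The distances below are illustrative, and the calculation deliberately ignores routing hops, queuing and processing delays, so real-world latency will be higher:

```python
# Estimate the theoretical minimum round-trip time (RTT) over fiber.
# Assumes signals in fiber travel at ~200,000 km/s (about 2/3 the speed
# of light in a vacuum); ignores routing, queuing and processing delays.

FIBER_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s expressed as km per ms

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round trip: there and back at fiber propagation speed."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# Illustrative distances, not measurements of any specific network
for label, km in [("Across a city", 50),
                  ("Interstate", 1_500),
                  ("Transcontinental", 4_000)]:
    print(f"{label} ({km} km): at least {min_rtt_ms(km):.1f} ms RTT")
```

Even this best-case math shows why a transcontinental round trip costs tens of milliseconds before any real network equipment gets involved.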
The facts of VDI and DaaS network latency
The applications inside a user's desktop expect fast, low-latency access to data. One of the great benefits of VDI is having desktops inside the same data center as the user's data, right next to the file servers and application servers. Data access inside the data center is extremely low latency and applications perform well.
Adopting DaaS means users' desktops are in a cloud data center, potentially a long way from the on-premises servers and data. Having users' desktops in a different data center than their data leads to higher network latency and slower applications. Higher latency can mean it takes users longer to log in because user profile data has to travel from one data center to another. It can also slow down applications because they must wait longer for data transfers. Some applications, such as Microsoft Outlook, have features to reduce the effect of slow networks, but with other applications the network latency issues are very visible to users.
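The login example above comes down to simple arithmetic: an operation that makes many small sequential requests pays the round-trip cost on every request, so latency compounds. A minimal sketch, using hypothetical request counts and RTT values chosen only for illustration:

```python
# Illustrate why chatty operations suffer at high latency: a logon that
# fetches many small profile files sequentially pays the round-trip
# cost for each one. All numbers here are hypothetical illustrations.

def latency_wait_s(round_trips: int, rtt_ms: float) -> float:
    """Total time attributable to latency alone (ignores bandwidth)."""
    return round_trips * rtt_ms / 1000.0

ROUND_TRIPS = 500  # e.g. profile files fetched one at a time at logon

for scenario, rtt_ms in [("Same data center", 1),
                         ("Cross-country cloud", 70)]:
    wait = latency_wait_s(ROUND_TRIPS, rtt_ms)
    print(f"{scenario} (RTT {rtt_ms} ms): {wait:.1f} s spent waiting")
```

The same 500 round trips that cost half a second inside a data center cost 35 seconds across the country, which is why the distance between desktops and data shows up so clearly at login time.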
For the best application performance, servers must be close to, or even inside, the same cloud data center as the desktops. Moving the application servers from on-premises data centers to infrastructure as a service (IaaS) platforms at the same cloud provider puts the servers close to the desktops. In practice, migration usually happens in the opposite order: the servers move to the cloud first, and then the desktops move to DaaS to get closer to the data. Either way, the cloud provider's data center ends up being the one data center containing both the servers and the desktops.
What about hybrid deployments?
A hybrid desktop deployment is the most challenging. In hybrid deployments some users have on-premises desktops delivered through VDI while others have cloud desktops delivered by a DaaS provider. As a result data cannot be close to all users. The on-premises data is closer to VDI users while cloud data is closer to DaaS users. Another common hybrid deployment features some users on VDI or DaaS and some users with physical PCs on premises. If the servers are all VMs on an IaaS platform, then they are remote from the physical desktops.
In a hybrid deployment, there must be some servers and data on premises for the users whose desktops are on premises. There are also servers in the cloud to keep some data close to the DaaS desktops.
Often the decision about using DaaS, VDI or physical desktops comes down to what data is required and where it must reside. Whether IT uses on-premises VDI or cloud-based DaaS, the desktops will be most responsive if the data they need is close by.