The answer to whether VDI deployments can benefit from high-density hardware is a not-so-simple "sometimes"; it all depends on the problem you need to solve.
High-density servers are likely to increase your cost per desktop, they don't support more advanced multimedia needs yet, and they require you to rethink your storage strategy. But if you have to support a large population of virtual desktop users and your data center floor space is limited, then the premium that comes with high-density hardware could easily be worth it.
A high-density server is one that packs more compute power into less space than traditional servers. For example, Dell's PowerEdge FX line of 2U servers can support up to 128 cores of compute. Vendors also offer high-density switching, so you can get 132 ports of 40 GbE connectivity on one switch, for instance. (This might sound a lot like converged infrastructure, and some high-density hardware does fall under the CI umbrella, but the two aren't mutually exclusive.) When you're talking about a highly dense virtual desktop deployment, the term refers to fitting many virtual machines (VMs) on one host. It would be logical to assume that using highly dense hardware to support many virtual desktops brings virtualization deployments to a new level of high density, but that's not necessarily the case.
Is high-density hardware right for VDI?
More vendors are making high-density compute and switching hardware options available. Some obvious use cases for highly dense hardware are database, big data and video applications, but what about VDI?
You might think, "High-density compute means more desktop VMs per host and that's a good thing, end of discussion." But that logic unravels the more you think about it. To really explore whether highly dense hardware is right to support VDI, you must first ask what the constraining factor in most VDI deployments is. It's rarely compute.
Most VDI deployments suffer from I/O constraint or reach their memory boundary long before they hit their CPU limit. Looking at high-density hardware platforms that I might build my VDI deployment on, I found that many pack a lot of CPU and bus horsepower into a very small form factor. As a result of the space constraint, you must typically purchase the largest memory chips on the market to keep pace with the available CPU. This increases the cost per VM of your deployments. Because VDI is a cost-per-desktop proposition, this reality probably won't help you get management on board with using high-density hardware for VDI.
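To see why the memory premium matters, consider a back-of-the-envelope cost-per-desktop comparison. The numbers below are purely hypothetical placeholders (not vendor pricing) chosen to illustrate the trade-off: a high-density node can host more desktops, but if it must be populated with the largest, most expensive DIMMs to keep pace with its CPUs, the cost per VM can still go up.

```python
def cost_per_desktop(server_cost, memory_cost, desktops_per_host):
    """Hardware cost per virtual desktop for a single host."""
    return (server_cost + memory_cost) / desktops_per_host

# Hypothetical figures for illustration only:
# a traditional 2U server with commodity DIMMs vs. a high-density
# node that requires top-of-the-line memory modules.
traditional = cost_per_desktop(server_cost=12000, memory_cost=4000,
                               desktops_per_host=100)   # 160.0 per desktop
high_density = cost_per_desktop(server_cost=15000, memory_cost=12000,
                                desktops_per_host=150)  # 180.0 per desktop
```

Even though the high-density node hosts 50% more desktops in this sketch, the memory premium pushes its cost per desktop higher, which is exactly the number management will scrutinize.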
The problem with high-density hardware for VDI
Many vendors underestimate the number of IOPS virtual desktops require. I have seen vendor IOPS recommendations for VMs range from 10 steady-state IOPS to 400 peak IOPS. These numbers come from theoretical calculations that vendors or resellers sometimes manipulate to sell hardware.
But I live in the real world -- my desktop PC ships with a SATA disk that delivers roughly 85 IOPS, and that disk often constrains my virtual desktop's performance. The desktop runs faster when I replace the disk with a solid-state drive.
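The aggregate math shows how quickly spinning disks fall behind when you stack many desktops on one dense host. The sketch below uses an assumed 25 IOPS per desktop (real profiles vary widely) and deliberately ignores RAID write penalties and caching, so treat it as a lower bound, not a sizing tool.

```python
def required_disks(desktops, iops_per_desktop, iops_per_disk):
    """Minimum disk count to satisfy aggregate desktop IOPS.

    Simplified: ignores RAID write penalties, read/write mix and caching.
    """
    total_iops = desktops * iops_per_desktop
    return -(-total_iops // iops_per_disk)  # ceiling division

# 150 desktops on one high-density host at an assumed 25 IOPS each:
sata_spindles = required_disks(150, 25, 85)     # ~85 IOPS per SATA spindle
ssds = required_disks(150, 25, 20000)           # a conservative SSD figure
```

At these assumed numbers, one dense host would need dozens of SATA spindles but only a single SSD to cover the same load, which is why the storage side, not compute, is usually where dense VDI designs break first.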
If you want to move to high-density hardware for VDI and you make the right storage-side upgrades, your performance won't suffer. But you can't simply plug your existing Fibre Channel links into a shared host bus adapter on dense hardware -- one that pushes far more traffic than any other port on your storage network -- and expect great performance.
Finally, let's not forget the most important part about virtual desktops: Users are on the other side of them. Some users will overload their VMs by keeping every single program in the image running at the same time. Others will demand better multimedia support, and they could need rendering capabilities. No problem, right? Just offload to an APEX or GPU card.
Not so fast -- many high-density hardware offerings don't support those cards, so buyer beware. If you invest in high-density computing for VDI, you might not be able to use hardware offloading to improve performance until vendors add support to their high-density servers.
Where hyper-converged infrastructure fits
Of course, there is one use case for high-density compute that I haven't touched on yet: hyper-converged infrastructure. Many high-density servers have a large in-chassis capacity for solid-state disks that are directly mapped to the compute node. They aren't shared storage, like a SAN.
In a hyper-converged platform where storage and networking are software-defined, these high-density servers create a nice, self-contained option for scale-out VDI. Because everything you need to run the VMs is in-chassis, you can simply add a node of storage, networking or compute anytime you need to scale the environment. This doesn't solve the problem of offload cards, however.