As administrators design and operate a VMware Horizon View deployment on hyper-converged infrastructure with VMware Virtual SAN, they must remain conscious of its functions and limitations.
Virtual desktop workloads have a few characteristics that suit hyper-converged infrastructure (HCI). For example, HCI delivers the storage performance and scale-out architecture VDI deployments need. As a result, VMware bundles its HCI offering with some licenses of its VDI product, Horizon View.
But VMware HCI, and particularly Virtual SAN (VSAN), is not the same as a standard storage array.
What is VMware VSAN?
VMware VSAN is storage software built into the ESXi hypervisor. It takes the local storage in a group of ESXi servers and makes it behave like a shared storage pool. VSAN mirrors virtual machine storage across multiple ESXi server nodes to achieve storage availability. It uses the Ethernet network between the ESXi servers and forms a storage cluster that matches a Distributed Resource Scheduler/High Availability cluster. VSAN uses two tiers of storage in each node -- a fast tier for performance and a cheaper tier for capacity. The fast tier must be solid state, while the capacity tier can be solid state or spinning hard disks.
Teaming VMware HCI with VSAN and View
The first thing to consider when teaming VSAN with View is that VSAN presents only a single data store per cluster. VMware says View can handle a maximum of 250 desktops on each Network File System (NFS) data store. The limit arises because all the VMs on a data store share the storage paths and queues in the storage controllers. As a result, traditional deployments need many data stores to support large desktop numbers; a deployment with 4,000 users might require 20 NFS data stores.
Like all storage products for hyper-converged infrastructure, VSAN effectively has a storage controller in each ESXi server. Consequently, the desktops on a node share only that node's storage path and queue, even though there is a single data store. Logically, this suggests a maximum of around 250 desktops per ESXi server with a single VSAN data store. The same VSAN data store could hold 250 desktops on each ESXi server in the cluster, leading to a much higher total desktop count across the cluster. In practice, most VDI deployments have far fewer than 250 desktops per ESXi server.
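The arithmetic above can be sketched in a few lines of Python. The 250-desktops-per-data-store figure and the 4,000-user example come from the article; the per-host numbers are the same guidance applied per node, since each VSAN node has its own storage controller.

```python
import math

# Assumed figures for illustration: 250 desktops per data store is the
# View guidance cited above; 4,000 users matches the article's example.
DESKTOPS_PER_DATASTORE = 250
total_desktops = 4000

# Traditional array: one data store per 250 desktops.
datastores_needed = math.ceil(total_desktops / DESKTOPS_PER_DATASTORE)
print(datastores_needed)  # -> 16 (the article's figure of 20 allows headroom)

# VSAN: a single data store, but the queue limit applies per host,
# so the same arithmetic bounds the host count, not the data stores.
desktops_per_host = 250
hosts_needed = math.ceil(total_desktops / desktops_per_host)
print(hosts_needed)  # -> 16
```

The point of the sketch is that the same number falls out either way: on a traditional array you shard desktops across data stores, while on VSAN you shard them across hosts.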
Consider the storage tiers
The fast tier in each node works as a write buffer and read cache. If the fast tier is large enough, then most desktop VM I/O comes from the fast tier. The fast tier is expensive, however, which limits how much of it organizations can afford -- and therefore its size.
Virtual desktops have always been a challenging workload for storage. Boot storms produce a huge amount of sequential read I/O. Login storms produce vast amounts of small writes. Recompose operations lead to plenty of sequential I/O -- both reads and writes. The steady state is usually a lot of small random writes.
VDI is not a single storage workload profile; at times it is many different profiles at once. Good storage design is critical to a good VDI deployment. The net result is that VSAN for VDI works best with an all-flash configuration, where the capacity tier is also solid state. Capacity-tier SSDs can be much cheaper than performance-tier ones and still deliver significantly better performance than hard disks.
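A rough cache-tier sizing calculation follows from the tiering model above. The 10% cache-to-capacity ratio is an often-cited VMware starting point, and the per-desktop consumed capacity is an assumption for illustration only; both should be validated against a real workload assessment.

```python
# All figures are assumptions for illustration; a commonly cited VMware
# starting point sizes the flash cache tier at roughly 10% of the
# anticipated consumed capacity -- validate against your own workload.
desktops = 1000
consumed_gb_per_desktop = 20   # assumed consumed space per desktop clone
cache_fraction = 0.10          # assumed cache-to-capacity ratio

consumed_tb = desktops * consumed_gb_per_desktop / 1024
cache_tb = consumed_tb * cache_fraction
print(f"~{cache_tb:.2f} TB of fast tier across the cluster")  # ~1.95 TB
```

Divide the cluster-wide figure by the node count to get the fast-tier device size each host needs.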
The importance of the Ethernet network
VSAN also places significant load on its Ethernet network, using it for most storage I/O. This network is critical to each ESXi server accessing the VSAN data store. Each host should have at least two network interface cards (NICs), each connecting to a separate physical switch. This way neither a cable fault nor a switch failure stops VSAN from working.
It's a great idea to dedicate a 10 gigabit Ethernet (GbE) network just for VSAN. Each ESXi server should have two 10 GbE NICs connected to redundant switches for VSAN traffic. In smaller deployments with around 100 desktops, it may be acceptable to use 1 GbE for the VSAN traffic. Admins should still dedicate this 1 GbE to VSAN and preferably have redundant switches. In addition, the desktop VMs need some networking for users to access the desktops and for applications to access servers. Generally, the VM networking requirements are modest and admins can easily accommodate them on a couple of 1 GbE NICs. Again, admins should have a redundant pair of switches to protect against failure.
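A quick sanity check supports the 1 GbE allowance for small deployments. All per-desktop figures below are assumptions chosen for illustration, and VSAN's mirroring adds further write traffic on top of this estimate -- which is why dedicated links and, at larger scale, 10 GbE are recommended.

```python
# Rough sanity check of the 1 GbE advice for ~100 desktops. The
# per-desktop figures are assumptions for illustration only, and VSAN
# mirroring adds extra write traffic on top of this estimate.
desktops = 100
iops_per_desktop = 15    # assumed steady-state IOPS per desktop
io_size_kb = 32          # assumed average I/O size

throughput_mbit = desktops * iops_per_desktop * io_size_kb * 8 / 1000
print(f"~{throughput_mbit:.0f} Mbit/s of storage traffic")  # ~384 Mbit/s
```

Under these assumptions the steady state fits within a 1 GbE link, but boot and login storms multiply the load, so the same arithmetic quickly argues for 10 GbE as desktop counts grow.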