Virtual Desktop Success – The Role of Performance Assurance in VDI Deployments – Part 2

[Read part 1]

Let’s continue our 3-part series about virtual desktop performance:

Multi-tier architecture makes VDI challenging
From a user’s perspective, VDI is a very simple service – the user logs in with their domain account and can access applications on their desktop. From an administrator’s perspective, VDI is far harder to manage than it appears to the user, because many tiers of software and hardware must work together to deliver the service. A user logging in connects first to a connection broker. The broker authenticates the user against Active Directory. Once the user is authenticated, the broker communicates with a virtualization platform (vSphere, XenServer, Hyper-V, etc.) and provisions a virtual desktop. The desktop OS may be streamed from a provisioning server, and the storage for the desktop is hosted on a SAN. For VDI to work, every tier of this infrastructure must work. If any tier falters, users experience slowness and complain about performance.
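The login path above can be sketched as a chain of tier checks – a minimal illustration of why one weak tier breaks the whole service. The tier names and the health-check dictionary are hypothetical, not any real broker’s API:

```python
# Minimal sketch of the VDI login path: a session succeeds only if
# every tier in the chain is healthy. Tier names are illustrative.

VDI_TIERS = [
    "connection_broker",
    "active_directory",         # authenticates the user
    "virtualization_platform",  # vSphere / XenServer / Hyper-V
    "provisioning_server",      # streams the desktop OS
    "san_storage",              # hosts the desktop image
]

def login(user, tier_health):
    """Walk the tiers in order; fail at the first unhealthy one."""
    for tier in VDI_TIERS:
        if not tier_health.get(tier, False):
            return f"{user}: login failed at {tier}"
    return f"{user}: desktop ready"

# Every tier healthy -> the service works.
all_up = {t: True for t in VDI_TIERS}
print(login("alice", all_up))   # alice: desktop ready

# One unhealthy tier anywhere breaks the service for the user.
san_down = dict(all_up, san_storage=False)
print(login("bob", san_down))   # bob: login failed at san_storage
```

The point of the sketch is the AND across tiers: the user-visible service is only as healthy as its weakest tier.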

The Role of Performance Assurance
In most VDI projects, performance assurance is not considered upfront. The initial focus is on the VDI technology – Which broker to use? Which protocols? Which thin clients? Only once the deployment is underway and users start complaining about slowness do performance monitoring, reporting and root-cause diagnosis get attention. The questions then are: Where is the bottleneck? Is it due to capacity? Is one user or application hogging resources? Or has the workload changed?

Often there are no baselines of normal usage across the infrastructure, so it is impossible to tell whether the workload has changed or differs from what was expected when the infrastructure was first planned. Diagnosis is made harder by the fact that different tiers are managed by different teams, and each team may be using a different toolset to monitor its slice of the infrastructure. Coordinating across these domain experts to pin down the exact cause of slowness can be a time-consuming and expensive exercise. And once the problem is finally isolated, the fix may be difficult (because the VDI architecture is already in place) or may entail significant changes to that architecture, making resolution lengthy and expensive.

Effective performance management doesn’t just help with the diagnosis of problems. It can also help you optimize your infrastructure, so you get more out of your investments. For instance, you may find that a few servers are handling most of your users while others sit nearly idle. Detecting such imbalances in load distribution lets you identify changes that make your infrastructure function more effectively.
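A load-imbalance check of this kind takes only a few lines. This is a sketch, not a monitoring product’s logic – the server names and the 50% deviation threshold are arbitrary choices for illustration:

```python
from statistics import mean

def find_imbalanced(users_per_server, tolerance=0.5):
    """Flag servers whose user count deviates from the fleet mean
    by more than `tolerance` (expressed as a fraction of the mean)."""
    avg = mean(users_per_server.values())
    return {
        server: count
        for server, count in users_per_server.items()
        if abs(count - avg) > tolerance * avg
    }

# One server carries far more than its share of users.
load = {"esx-01": 80, "esx-02": 45, "esx-03": 50, "esx-04": 25}
print(find_imbalanced(load))   # {'esx-01': 80}
```

With a baseline of normal usage in hand, the same comparison can be run per tier – against historical averages rather than the current fleet mean – to spot drift in the workload over time.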
Understanding the performance requirements of your users can also help you plan the virtual desktop infrastructure more efficiently. For example, if you know which users run CPU-intensive applications and which run memory-intensive ones, you can distribute the workload so that each server hosts a good mix of CPU-intensive and memory-intensive users, making the best use of the server’s resources. By contrast, if all your CPU-intensive users land on the same server, its CPU saturates long before its memory does, and your user density (number of users per server) is far lower.
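To see why mixing matters, consider a toy sizing model. All capacities and per-user resource costs here are made-up numbers for illustration, not measured VDI figures:

```python
# Hypothetical server: 100 units each of CPU and memory.
# A CPU-heavy user needs (20 CPU, 5 mem); a memory-heavy user
# needs (5 CPU, 20 mem). Numbers are invented for illustration.

CPU_CAP = MEM_CAP = 100
CPU_HEAVY = (20, 5)
MEM_HEAVY = (5, 20)

def fits(users):
    """True if this mix of users fits on a single server."""
    cpu = sum(u[0] for u in users)
    mem = sum(u[1] for u in users)
    return cpu <= CPU_CAP and mem <= MEM_CAP

# All CPU-heavy users on one server: CPU saturates at 5 users,
# while 75 units of memory sit unused.
assert fits([CPU_HEAVY] * 5) and not fits([CPU_HEAVY] * 6)

# A balanced mix uses both resources fully: 8 users per server.
mixed = [CPU_HEAVY] * 4 + [MEM_HEAVY] * 4
assert fits(mixed)
print(len(mixed), "users fit when workloads are mixed")
```

In this toy model, mixing workloads raises user density from 5 to 8 per server – the same intuition the paragraph describes, just made concrete.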

[Read part 3]