eG News

The latest news from eG Innovations

VDI success - the role of performance assurance in VDI deployments

Virtual Desktop Infrastructure (VDI) has been a hot topic for years - and every year one expert or another claims that this will be the year of VDI. Finally, 2013 may be the year that VDI goes mainstream. The cost of a VDI implementation is coming down to roughly the cost of managing PC endpoints, and many organizations are starting to move from pilot to production.

While the pilots worked fine, many organizations are finding that VDI projects fail in the rollout phase because of performance problems and poor user experience. One of the key reasons is that the pilot phase is often over-provisioned and far less complex than production.

Don't miss the infrastructure demands by focusing too much on the desktop

Very often, when an enterprise begins the virtual desktop journey, the focus is on the user desktop. This is only natural; after all, it is the desktop that is moving - from being on a physical system to a virtual machine. Therefore, once a decision to try VDI is made, the primary focus is often to benchmark the performance of physical desktops, model usage, predict the virtualized user experience and, based on the results, determine which desktops can be virtualized and which can't.

With VDI, the virtual desktops no longer have dedicated resources. The virtual desktops share the CPU, memory, disk and network resources of the physical machine on which they are hosted. While resource sharing provides several benefits, it also introduces new complications in the infrastructure. A single malfunctioning desktop can drain resources to the point that it impacts the performance of all the other desktops.

Resource sharing across virtual desktops also introduces other interesting side effects. In many early VDI deployments, administrators found that simply migrating physical desktops to VDI turned backup or antivirus software into a problem. These components were scheduled to run at the same time on all the desktops. When the desktops were physical, this didn't matter, because each desktop had dedicated hardware. With VDI, the synchronized demand for resources from all the desktops severely degraded the performance of every virtual desktop. This was not anticipated because most designs and plans focused on individual desktops rather than the virtual desktop infrastructure as a whole.
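To make the point concrete, here is a minimal Python sketch of one common mitigation - staggering the scheduled start of antivirus scans or backups across desktops so the shared host never sees the full synchronized load. The desktop names and the maintenance window are illustrative assumptions, not part of any product.

    # stagger_scans.py - illustrative sketch: spread a nightly scan window across desktops.
    # Desktop names and the maintenance window below are made-up examples.
    import zlib
    from datetime import datetime, timedelta

    def staggered_start(desktop_name: str, window_start: datetime,
                        window_minutes: int = 240) -> datetime:
        # A CRC of the desktop name gives a stable, roughly uniform offset,
        # so large groups of desktops never kick off their scan together.
        offset = zlib.crc32(desktop_name.encode()) % window_minutes
        return window_start + timedelta(minutes=offset)

    window = datetime(2013, 6, 1, 1, 0)  # 01:00 maintenance window
    for name in ("vdi-desktop-001", "vdi-desktop-002", "vdi-desktop-003"):
        print(name, staggered_start(name, window).strftime("%H:%M"))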

The cost of failure with VDI is much higher than with physical desktops

In the physical world, if a desktop failed, only one user was impacted - so the cost of a failure or slowdown was minimal. In the virtual world, a single malfunction is much more severe, because one failure can impact hundreds of desktops. The CTO of a large multi-national organization recently cited this as the single biggest reason he was averse to moving to virtual desktop technology.

Multi-tier architecture makes VDI challenging

From a user's perspective, VDI is a very simple service - the user logs in with their domain account and can access applications on their desktop. From an administrator's perspective, however, VDI is much harder to manage than it appears, because many tiers of software and hardware have to work together to deliver the service. A user logging in first connects to a connection broker. The broker authenticates the user against Active Directory. Once the user is authenticated, the broker communicates with a virtualization platform (vSphere, XenServer, Hyper-V, etc.) and provisions a virtual desktop. The desktop OS may be streamed from a provisioning server, and the storage for the desktop is hosted on a SAN. For VDI to work, every tier of this infrastructure must work; if any tier falters, users experience slowness and complain about performance.
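As a rough illustration of how many moving parts are involved, the following Python sketch sweeps each tier of a hypothetical deployment and reports whether it is even reachable. The host names and ports are assumptions made up for the example; a real health check would go far deeper than a TCP connect.

    # vdi_tier_check.py - illustrative sketch: basic reachability sweep across VDI tiers.
    # Hostnames and ports below are assumptions, not a real deployment.
    import socket

    TIERS = [
        ("Connection broker",      "broker.example.local",  443),
        ("Active Directory",       "dc01.example.local",    389),
        ("Virtualization manager", "vcenter.example.local", 443),
        ("Provisioning server",    "pvs01.example.local",   6910),
        ("Storage (iSCSI)",        "san01.example.local",   3260),
    ]

    def tier_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name, host, port in TIERS:
        status = "OK" if tier_reachable(host, port) else "UNREACHABLE"
        print(f"{name:<24} {host}:{port:<6} {status}")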

The Role of Performance Assurance

In most VDI projects, performance assurance is not considered upfront. The initial focus is on the VDI technology - which broker to use? Which protocols? Which thin clients? Only once the VDI deployment is underway and users start complaining about slowness do performance monitoring, reporting and root-cause diagnosis get attention. The questions then are: Where is the bottleneck? Is it due to capacity? To one user or application hogging resources? Or has the workload changed?

Often, there are no benchmarks of normal usage across the infrastructure, so it is impossible to tell whether the workload has changed or differs from what was expected when the infrastructure was planned. Making diagnosis harder, different tiers of the infrastructure are managed by different teams, and each team may use a different toolset to monitor its tier. Coordinating across the domain experts to determine the exact cause of slowness can be a time-consuming and expensive exercise. Even after the problem is finally isolated, it may be difficult to fix (because the VDI architecture is already in place), or the remedy may entail significant changes to the existing architecture, making problem resolution lengthy and expensive.
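One simple way to establish the missing benchmark is to record normal usage and flag deviations from it. The sketch below, with made-up numbers, treats the mean and standard deviation of historical samples as the baseline and flags an hour whose load falls well outside that range - a hint that the workload has changed.

    # baseline_check.py - illustrative sketch: flag hours whose load deviates from a learned baseline.
    # The sample data is invented; in practice the inputs would come from whatever
    # monitoring tool is already collecting metrics.
    from statistics import mean, stdev

    historical_cpu = [32, 35, 30, 38, 33, 36, 31, 34]  # % CPU for the same hour on past days
    today_cpu = 61                                      # % CPU observed for that hour today

    baseline = mean(historical_cpu)
    spread = stdev(historical_cpu)

    if abs(today_cpu - baseline) > 2 * spread:
        print(f"Workload change: {today_cpu}% vs baseline {baseline:.1f}% (+/- {spread:.1f}%)")
    else:
        print("Within normal range")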

Effective performance management doesn't just help with the diagnosis of problems. It can also help you optimize your infrastructure, so you get more out of your investment. For instance, you may find that a few of your servers are handling most of your users while others sit idle. By detecting such imbalances in load distribution, you can identify changes that make your infrastructure function more effectively.
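A sketch of what such an imbalance check might look like, using invented host names and session counts: each host's session count is compared with the farm average, and large skews are flagged.

    # load_balance_check.py - illustrative sketch: spot hosts carrying far more or fewer
    # sessions than the farm average. Host names and counts are made up.
    sessions_per_host = {"esx01": 92, "esx02": 88, "esx03": 11, "esx04": 9}

    average = sum(sessions_per_host.values()) / len(sessions_per_host)
    for host, sessions in sorted(sessions_per_host.items()):
        skew = (sessions - average) / average * 100
        flag = " <-- imbalanced" if abs(skew) > 50 else ""
        print(f"{host}: {sessions} sessions ({skew:+.0f}% vs average){flag}")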

Understanding the performance requirements of your users can also help you plan the virtual desktop infrastructure more efficiently. For example, if you know which users run CPU-intensive applications and which run memory-intensive ones, you can distribute your workload so that each server hosts a good mix of CPU-intensive and memory-intensive users and its resources are best utilized. By contrast, your user density (number of users per server) would be much lower if all your CPU-intensive users landed on the same server.
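The sketch below illustrates the mixing idea with made-up user profiles and two hypothetical hosts: each user is placed on the host that remains most balanced between CPU and memory after the placement, so CPU-heavy and memory-heavy users naturally end up side by side.

    # placement_sketch.py - illustrative sketch of the mixing idea: greedily place each user on
    # the host that stays most balanced between CPU and memory. Profiles are invented.
    users = [
        ("cad_user1", {"cpu": 4.0, "mem": 2.0}),  # CPU-heavy
        ("dev_user1", {"cpu": 1.0, "mem": 8.0}),  # memory-heavy
        ("cad_user2", {"cpu": 4.0, "mem": 2.0}),
        ("dev_user2", {"cpu": 1.0, "mem": 8.0}),
    ]
    hosts = {"esx01": {"cpu": 0.0, "mem": 0.0}, "esx02": {"cpu": 0.0, "mem": 0.0}}

    def imbalance(load):
        # A host is "balanced" when its CPU and memory loads are similar.
        return abs(load["cpu"] - load["mem"])

    for name, demand in users:
        # Choose the host that would remain most balanced after adding this user.
        best = min(hosts, key=lambda h: imbalance(
            {"cpu": hosts[h]["cpu"] + demand["cpu"],
             "mem": hosts[h]["mem"] + demand["mem"]}))
        hosts[best]["cpu"] += demand["cpu"]
        hosts[best]["mem"] += demand["mem"]
        print(f"{name} -> {best}")

    print(hosts)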

Best Practices for VDI Success

Based on the earlier discussion, we can conclude that for VDI success, you need to:

  • Build performance assurance early into the VDI life cycle. For successful deployments, focus on the virtual desktop infrastructure, not just the virtual desktop. Building performance assurance early into the deployment life cycle avoids costly issues and remediation downstream, and mitigates the risk of VDI failure during rollout. It is imperative that IT considers inter-desktop dependencies from the very beginning. When deploying VDI on a large scale, avoid slow, manual, ad-hoc processes, not only for the deployment itself but also for performance assurance. Automation is key to being alerted to problems well before users notice and complain.
  • 360-degree visibility is key. Service delivery today is more demanding than ever. Companies require 360-degree visibility into the VDI service, across every layer and tier of the infrastructure - from desktops to applications and from network to storage. The virtual desktop where user applications run is often a blind spot, and VDI service managers need the ability to look inside the virtual desktop to understand user activity and the performance users are seeing. It is critical to get this level of visibility without having to install agents on each and every desktop.

Often, different tools are used to monitor each of the VDI tiers, which leaves plenty of room for finger-pointing between domain administrators. The ability to monitor all the tiers from a single unified console and to analyze their performance across a common time window is key to effective management of the virtual desktop infrastructure.

  • Manage VDI as a service, not as many silos. Administrators need deep insight into the causes of VDI service performance issues so they can detect and fix root-cause problems. Monitoring individual silos is no longer useful given the complexity of today's infrastructures - there are simply too many places where problems can arise. Bear in mind, too, that VDI service managers are unlikely to be experts in every technology used in the infrastructure. It is therefore essential that they have management solutions that can intelligently analyze and correlate problems across tiers and help them quickly pinpoint where the root cause lies, as sketched below. Since the virtualization platform is an integral part of the infrastructure, the management solution must be virtualization-aware: it must not only be capable of monitoring the virtualization platform, but also intelligent enough to understand the interdependencies between the virtualization platform and the applications and desktops it hosts.
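As a toy illustration of such correlation, the sketch below walks a hypothetical dependency map: an alert on a tier is only treated as a probable root cause if none of the tiers it depends on is also alerting, so downstream symptoms are suppressed automatically. The tier names and dependencies are assumptions for the example.

    # correlate_alerts.py - illustrative sketch: use a (made-up) dependency map so that an
    # alert on a lower tier suppresses the symptomatic alerts on the tiers above it.
    DEPENDS_ON = {
        "virtual desktop": "virtualization host",
        "virtualization host": "storage",
        "connection broker": "active directory",
    }

    active_alerts = {"virtual desktop", "virtualization host", "storage"}

    def root_causes(alerts):
        # An alert is a likely root cause if nothing it depends on is also alerting.
        return {a for a in alerts if DEPENDS_ON.get(a) not in alerts}

    print("Probable root cause(s):", root_causes(active_alerts))
    # -> {'storage'}: the desktop and host alerts are treated as downstream symptoms.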

Conclusion

By proactively alerting administrators to problems and providing accurate root-cause diagnosis, a performance assurance solution ensures that the performance of a virtual desktop is comparable to that of a physical desktop. By providing deep insights into the performance of each VDI tier and identifying areas for optimization, it also ensures that the infrastructure is right-sized and delivers the best return on investment. For these reasons, enterprises deploying virtual desktop infrastructures should consider performance assurance early in the VDI life cycle - experience shows that such deployments have the best chance of success.

Reposted from: http://srinivasramanathan.ulitzer.com/node/2585263
 

Click here to view a short presentation on how eG Enterprise is tailored to address your Virtualization 2.0 monitoring requirements.
- Srinivas Ramanathan

All trademarks, service marks and company names are the property of their respective owners.