
How to evaluate VDI hardware requirements

VDI has specific hardware needs that servers hosting other virtualized workloads may not meet. Learn how to gauge VDI hardware requirements for servers, storage and more.

VDI deployments must start with a detailed consideration of server-side capabilities and an assessment of hardware upgrade needs.

VDI instance support is directly related to computing resources, but VDI hardware requirements vary depending on the complexity of desktop images and layered features such as personalization and application virtualization.

All these factors make it extremely challenging to determine the exact amount of resources needed for every desktop instance -- and the total number of instances that a given server will support.

If IT pros overestimate the number of virtual desktops a host can run, they will end up with poor performance and likely find themselves asking management for money for additional VDI hardware. Conversely, if they overestimate the hardware requirements, they will end up wasting a lot of money on equipment they don't need. The key is finding a happy medium.

All of this underscores the need for extensive system testing in well-planned proof-of-principle projects and limited deployments (such as select workgroups or departments) prior to general deployment across an enterprise.

Server requirements to support VDI

It's important to note that no single list of VDI hardware requirements exists. The issue is not a lack of support; VDI will operate on almost any current virtualized server. Rather, the number of VDI instances that may be deployed on a server is limited by that server's available computing resources.


As an example, a typical "white box" server for an enterprise-class VDI deployment might include dual eight-core processors and at least 192 GB of fast DDR3 memory. For storage, it is certainly possible to place VDI instances on centralized SAN storage, but to keep storage traffic and VDI traffic off the same LAN, the SAN should run on a separate network (such as Fibre Channel or a physically separate Ethernet LAN). The alternative is local storage on each VDI server to load and protect VDI instances, which means the server needs physical space for perhaps 16 high-performance 10,000-15,000 RPM 6 Gbps SAS hard drives -- in practice, a 2U or 3U rack chassis.

Larger and more powerful servers can support more VDI instances on the same box, while older or less-capable servers will support fewer instances. A server like the example above might be expected to host anywhere from 80 to 130 instances, though the exact number of VDI instances on any server depends on other details such as the size and complexity of the base image, the level of personalization, the number of virtualized applications, user and application activity across the LAN and so on.
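As a rough illustration of how host resources bound instance counts, the sketch below computes memory-bound and CPU-bound density limits for a server like the one described above. All figures -- 2 GB and 2 vCPUs per desktop, the hypervisor RAM reserve, the vCPU overcommit ratio -- are illustrative assumptions, not vendor guidance; real numbers must come from testing.

```python
def desktops_per_host(host_ram_gb, host_cores,
                      ram_per_desktop_gb, vcpus_per_desktop,
                      ram_reserve_gb=16, vcpu_overcommit=12):
    """Return the lower of the memory-bound and CPU-bound limits.

    ram_reserve_gb leaves headroom for the hypervisor itself;
    vcpu_overcommit reflects how many virtual CPUs are typically
    scheduled per physical core (an assumed planning ratio).
    """
    by_ram = (host_ram_gb - ram_reserve_gb) // ram_per_desktop_gb
    by_cpu = (host_cores * vcpu_overcommit) // vcpus_per_desktop
    return int(min(by_ram, by_cpu))

# Dual 8-core CPUs (16 cores), 192 GB RAM, 2 GB / 2 vCPUs per desktop
print(desktops_per_host(192, 16, 2, 2))  # -> 88 (memory-bound)
```

Note how the answer lands in the 80-130 range quoted above, and how changing a single assumption (say, 4 GB per desktop for heavier images) cuts the density sharply.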

This may seem like a lot of instances, but consider that an enterprise large enough to justify a VDI initiative may employ 1,000 people or more -- this means at least 10 such servers would be required for the deployment, along with additional servers to support growth and failover. An enterprise with 5,000 users would need roughly 50 such physical servers with the added costs of hypervisor and VDI platform licensing.
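The fleet-level arithmetic can be sketched the same way. The growth margin and N+1 spare count below are placeholder planning assumptions; each organization will set its own.

```python
import math

def servers_needed(users, desktops_per_host, growth=0.2, spares=1):
    """Hosts for the current user base plus growth headroom,
    plus N+1 spare capacity for failover.

    growth (20%) and spares (1) are illustrative assumptions."""
    base = math.ceil(users * (1 + growth) / desktops_per_host)
    return base + spares

print(servers_needed(1000, 100))  # -> 13 hosts for 1,000 users
print(servers_needed(5000, 100))  # -> 61 hosts for 5,000 users
```

The results track the article's "at least 10" and "roughly 50" figures once growth and failover headroom are added on top of the bare minimum.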

VDI server appliances

Server systems built to meet VDI hardware requirements are commercially available, though these should be considered more along the lines of pre-configured "packages" than specially designed systems. One example is Dell's DVS Simplified Appliance. The Desktop Virtualization Solutions (DVS) package is based on Dell's standard PowerEdge R720 or T620 servers bundled with Citrix XenServer or Microsoft Hyper-V and VDI management tools. Each appliance is reported to host up to 129 users, and additional appliances can easily be deployed to support more.

Other VDI appliances are also available, including VMware's Horizon Turnkey Appliance (formerly Rapid Desktop Appliance) based on VMware Horizon View, the Vertex VDI appliances from Tangent and the vSTAC VDI appliance from Pivot3, among others.

Since packages like the DVS rely on standard servers, there is no custom or specialized circuitry to differentiate the "appliance" from a conventional server. Features like N+1 redundancy, automatic failover, load balancing, desktop provisioning and desktop image management are all handled through software tools.

Predicting other VDI hardware requirements

When it comes to VDI hardware planning, IT pros should avoid the temptation to base their planning on the estimates they read online. Just because someone claims to be able to comfortably host 50 virtual desktops on hardware that is similar to what an IT pro plans to use does not necessarily mean that other organizations will have the same results. After all, one organization's users don't work with the exact same set of applications as another organization's users.

And, even if two organizations' users did work with identical applications, one set of users will be doing a different job from the users in the other company, which means their usage patterns will be unique.

Reading about another organization's experience with VDI hardware can give IT pros a rough idea of what to expect, but they shouldn't anticipate getting exactly the same results as someone else. Mileage will vary.

One step IT pros can take toward reliable VDI hardware projections is to look for planning tools from VDI vendors. Some vendors, including Microsoft, offer calculators that can help IT determine what hardware it will need.

The problem with VDI calculators is that IT pros cannot simply tell the calculator they have 250 users and expect an accurate projection of the hardware those users' virtual desktops will need. Typically, they need to know how the users actually work.

For instance, IT might need to know how many of those 250 users should be treated as power users and what the peak IOPS rate is for those users, as well as the duration for which they are at that peak. Similarly, IT pros might need to know the average amount of memory a typical knowledge worker consumes in their organization.

In any case, getting a good VDI hardware projection depends on providing the calculator with accurate information. If IT pros simply guess the values that the calculator asks for, then the calculator's projection will likely be inaccurate.
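A minimal sketch of the kind of arithmetic such a calculator performs appears below. The per-profile figures (peak IOPS, memory per user) are placeholders standing in for measured values; the point is that the output is only as good as these inputs.

```python
# Hypothetical user profiles -- these numbers must come from
# measurement in your own environment, not from guesses.
PROFILES = {
    "knowledge_worker": {"peak_iops": 15, "ram_gb": 2},
    "power_user":       {"peak_iops": 40, "ram_gb": 4},
}

def aggregate_demand(user_counts):
    """Sum peak IOPS and RAM across a mix of user profiles.

    user_counts maps a profile name to a head count,
    e.g. {"knowledge_worker": 200, "power_user": 50}."""
    iops = sum(n * PROFILES[p]["peak_iops"] for p, n in user_counts.items())
    ram = sum(n * PROFILES[p]["ram_gb"] for p, n in user_counts.items())
    return {"peak_iops": iops, "ram_gb": ram}

print(aggregate_demand({"knowledge_worker": 200, "power_user": 50}))
# -> {'peak_iops': 5000, 'ram_gb': 600}
```

Doubling the assumed power-user IOPS roughly doubles the storage requirement, which is exactly why guessed inputs produce useless projections.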

Another thing IT pros can do to get an accurate VDI hardware projection is to perform small-scale testing. Set up a few virtual desktops on unused hardware and ask a few users to try them out. At the same time, use performance monitoring tools to track the resources the virtual desktops consume.

The users working from the virtual desktops can tell IT pros whether the test desktops perform well, while the performance monitoring data can tell them about resource consumption. IT pros can then use this information to fine-tune the virtual desktops and to create a projection that will tell them what hardware they'll need to ensure that users have a good experience.
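Once pilot monitoring data exists, extrapolating it to the full deployment is straightforward. The sample readings below are hypothetical per-desktop RAM measurements from a small pilot; real figures would come from the hypervisor's performance counters, and the 25% headroom factor is an assumed safety margin.

```python
# Hypothetical per-desktop RAM readings (MB) from a 5-desktop pilot.
pilot_samples_mb = [1850, 2100, 1920, 2300, 1780]

def project_ram_gb(samples_mb, target_users, headroom=1.25):
    """Scale the pilot's average per-desktop RAM to the target
    user count, with a margin for peaks the pilot may have missed."""
    avg_mb = sum(samples_mb) / len(samples_mb)
    return round(avg_mb * target_users * headroom / 1024, 1)

print(project_ram_gb(pilot_samples_mb, 250))  # total GB for 250 users
```

The same pattern applies to CPU, IOPS and network figures; the pilot's job is to replace guesses with measurements before money is spent.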

Graphics co-processing support for a VDI server

VDI works by handling all the processing tasks within the server and using the endpoint device only as an I/O platform (e.g., video, mouse and keyboard). So, all the desktop and visual rendering work takes place within the host server's processor, and the resulting images are relayed to the endpoint across the LAN. This is often adequate for rendering basic Windows-type desktop dialogs and other elements, but advanced graphics tasks (such as streaming video or 3-D graphics) can pose a major processing problem.

The issue is hardware support. Servers often omit graphics processing units (GPUs) because traditional server-side workloads, such as file servers or Active Directory, do not use graphics. But when graphics-intensive work arrives, no GPU is available to offload the burden, leaving the CPU to grind through rendering in software. The result is a significant performance penalty that can affect every VDI instance on the affected CPU core. As VDI use matures and embraces more sophisticated visualization applications, it's important for VDI servers to include GPU support as a boost to system performance.

A GPU can be added to a server in several different ways. The most common approach is to install it as an expansion device such as a PCIe adapter card. Everyday desktop PCs routinely use this approach because PCIe slots are plentiful and readily accessible, and servers can use powerful server-class products such as Nvidia's Kepler-based GRID K1 and K2 adapters. However, servers may not provide enough PCIe slots to accommodate GPU adapters, which are usually quite large and sport several cooling fans, and the limited slots may already be occupied by other expansion devices such as multiport network adapters or storage accelerators.

An alternative is an external GPU system such as the Cubix GPU-Xpander, which uses a simple, low-profile PCIe adapter to connect an independently powered, self-standing GPU enclosure. This approach avoids taxing the server's limited power supply and PCIe slot space.

A third approach is to integrate the GPU directly into the processor package, so every CPU socket has access to its own GPU. Intel, for example, adds a GPU to the Xeon E3 family and continues to improve its transcode performance. RISC processors based on ARM architectures are also adding GPUs to handle graphics tasks. Integrated GPUs are arguably the most efficient approach because they neither strain the server's power supply nor consume a PCIe slot, but IT planners may need to wait for a future technology refresh to acquire servers with CPU/GPU integration.

