How to choose the best CPU for virtualization
Selecting the right CPU for virtual infrastructures depends on many factors, including feature sets and hardware specs. Proper configuration of processor resources is also crucial.
Choosing a CPU for virtualization and configuring it properly are just as important as choosing memory, storage and network resources. A processor that doesn't make sense for the infrastructure or one that's misconfigured could negatively affect the other three components.
VM performance also largely relies on properly configured CPU, memory, storage and network resources. Processor resources are often hyper-threaded and overprovisioned to a detrimental degree. However, before you can implement configuration best practices to avoid this, you must first select a CPU that meets your needs.
Much of the decision behind which CPU to select -- aside from cost -- depends on the types of workloads being run. Some processors are better suited to memory management optimization, whereas others offer better support for I/O devices.
What is CPU virtualization?
CPU virtualization is the process of abstracting the physical processor's resources into one or more logical representations that can be applied to different workloads. OSes have direct access to hardware resources, but with virtualization, software known as a hypervisor abstracts those resources so that IT teams can provision and use them more efficiently.
You can assign one or more virtual CPUs (vCPUs) to a VM, depending on how compute-intensive the workload is. The same underlying processor resources exist whether you use physical or virtual machines; the hypervisor abstracts the physical cores into vCPUs that it can schedule onto workloads, and a single physical core can back more than one vCPU. It's easier to assign and reallocate those resources when they are virtualized.
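Because a physical core can back more than one vCPU, a useful sanity check is the host's vCPU-to-physical-core overcommit ratio. The following sketch illustrates the arithmetic; the VM names and core count are hypothetical examples, not values from the article.

```python
# Sketch: estimate a host's vCPU-to-physical-core overcommit ratio.
# The VM inventory and host core count below are hypothetical.

def overcommit_ratio(vcpus_per_vm, physical_cores):
    """Return total assigned vCPUs divided by physical cores."""
    total_vcpus = sum(vcpus_per_vm.values())
    return total_vcpus / physical_cores

vms = {"web01": 2, "db01": 4, "mail01": 2}  # hypothetical VMs and vCPU counts
print(overcommit_ratio(vms, physical_cores=8))  # 8 vCPUs on 8 cores -> 1.0
```

A ratio above 1.0 means cores are shared between vCPUs, which is normal but worth monitoring as it grows.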
Following are some considerations for choosing and configuring a CPU for virtualization.
Compare processor types: AMD vs. Intel
Processor types, such as those offered by Intel and AMD, and their specific uses are differentiated by acronyms that represent the command and instruction sets developed to perform virtualization-centric processor tasks. For example, Intel VT implementations include VT-x, VT-i, VT-d and VT-c, and AMD offers AMD-V and AMD-Vi.
One of the best approaches to choosing the right CPU for virtualization lies in identifying the specific features you want to implement in your virtual environment. The choice of hypervisor -- and its support for specific processors -- might affect the choice of processor for virtualization.
One feature that Intel and AMD both provide is memory space isolation through the execute disable (XD) and no-execute (NX) bits, respectively, which protects VMs from malware. The XD and NX bits ensure that the CPU refuses to run code stored in protected memory areas.
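On Linux, the features above show up as flags in /proc/cpuinfo: "vmx" marks Intel VT-x, "svm" marks AMD-V, and "nx" marks the no-execute bit. The sketch below parses a flags line; the sample string is illustrative, not a dump from a real CPU.

```python
# Sketch: detect virtualization-related CPU flags from a Linux
# /proc/cpuinfo "flags" line. "vmx" = Intel VT-x, "svm" = AMD-V,
# "nx" = no-execute bit. The sample line is a made-up example.

def virtualization_flags(flags_line):
    flags = set(flags_line.split())
    return {
        "intel_vt_x": "vmx" in flags,
        "amd_v": "svm" in flags,
        "nx_bit": "nx" in flags,
    }

sample = "fpu vme nx vmx ept flexpriority"  # hypothetical flags line
print(virtualization_flags(sample))
```

On a real host, you would read the flags line from /proc/cpuinfo rather than a literal string.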
Other CPU features that are key to virtualization are the Load AH from Flags (LAHF) and Store AH into Flags (SAHF) instructions and virtualization extensions. LAHF and SAHF enable control over the flags register contents, while virtualization extensions provide better resource use. Choosing the right hardware is the first step to ensure virtual workloads run at peak performance.
Disable hyper-threading to boost performance
The next step is to ensure you put your CPU to good use. Identify which features to take advantage of and which might cause performance problems later. Hyper-threading is a CPU feature that presents each physical core as two logical processors, letting a second instruction stream fill execution resources the first would otherwise leave idle, but it isn't always the most efficient way to improve processor performance.
The problem with this approach is that the CPU core itself only has one execution engine, so resource contention can occur, which causes performance bottlenecks. Instead of using hyper-threading, consider spending the money upfront for more cores when purchasing a CPU for virtualization, if possible. It can be more efficient to buy more CPU cores and disable hyper-threading than to share fewer cores between workloads via hyper-threading.
Even if you don't experience performance problems due to hyper-threading, hosting multiple cores equates to more overall CPU power to prioritize and execute instruction sets.
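The tradeoff above can be made concrete with a little arithmetic: hyper-threading doubles the logical CPU count, but execution engines still equal the physical core count. The two purchase options below are hypothetical examples.

```python
# Sketch comparing two hypothetical purchase options per the advice above:
# fewer cores with hyper-threading on vs. more cores with it disabled.
# Logical CPUs double with SMT, but execution engines equal physical cores.

def logical_cpus(cores, smt_enabled):
    return cores * 2 if smt_enabled else cores

option_a = {"cores": 8, "smt": True}    # 16 logical CPUs, 8 execution engines
option_b = {"cores": 16, "smt": False}  # 16 logical CPUs, 16 execution engines

for name, opt in (("A", option_a), ("B", option_b)):
    print(name, logical_cpus(opt["cores"], opt["smt"]), "logical CPUs,",
          opt["cores"], "execution engines")
```

Both options expose 16 logical CPUs to the hypervisor, but option B backs each one with a dedicated execution engine, which is why more cores with hyper-threading disabled can outperform fewer, shared cores.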
Minimize virtualization overhead with SLAT
One common issue with virtualization is the amount of overhead it requires to continuously translate between physical and virtual memory space. The software layer brings overhead that can reduce the resources available to VMs. Second Level Address Translation (SLAT) -- a processor feature that Intel calls Extended Page Tables, and AMD calls Rapid Virtualization Indexing or Nested Page Tables -- reduces that overhead, which improves virtualization performance.
SLAT eliminates repetitive work by letting the processor itself translate guest-physical addresses to host-physical addresses through a second set of hardware page tables, rather than relying on hypervisor-maintained shadow page tables. Translations the CPU has already resolved are cached, so CPU and memory resources aren't spent repeating them. The level of overhead saved varies depending on the specific workload. Also, some systems require that you enable SLAT support in the BIOS to take advantage of the feature.
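On Linux, SLAT support also appears as a /proc/cpuinfo flag: "ept" for Intel Extended Page Tables and "npt" for AMD Nested Page Tables. This sketch checks a flags line for either; the sample strings are illustrative, not real CPU dumps.

```python
# Sketch: check whether a Linux /proc/cpuinfo flags line advertises SLAT.
# "ept" = Intel Extended Page Tables; "npt" = AMD Nested Page Tables.
# The sample lines below are made-up examples.

def slat_supported(flags_line):
    flags = set(flags_line.split())
    return "ept" in flags or "npt" in flags

print(slat_supported("fpu vme nx vmx ept"))  # Intel CPU with EPT -> True
print(slat_supported("fpu vme nx"))          # no SLAT flag -> False
```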
Provision enough vCPU resources to VM workloads
Demanding workloads require a creative approach to allocating processor resources and CPU virtualization. Every application -- and its computing requirements -- is different. Most VMs operate normally with one vCPU, but for more compute-intensive workloads, such as database or email servers, you might need as many as four vCPUs -- sometimes even more.
To start, provision the same number of virtual processors as the physical processor requirements the application dictates. If the application requires two physical CPUs, assign two vCPUs to it, and monitor the performance to see if it needs more. In addition, if the workload is particularly demanding, assign vCPUs from different cores to balance the load. Applying affinity and anti-affinity rules can help specify which CPUs a single machine should and shouldn't use.
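The start-small rule above can be sketched as a simple sizing function: give a VM as many vCPUs as the application's stated physical-CPU requirement, never more than the host has cores, and grow only after monitoring shows a need. The function and values are hypothetical illustrations.

```python
# Sketch of the start-small provisioning rule described above.
# initial_vcpus is a hypothetical helper, not part of any hypervisor API.

def initial_vcpus(required_pcpus, host_cores):
    """Start with the app's physical-CPU requirement, capped at host cores."""
    return min(required_pcpus, host_cores)

print(initial_vcpus(2, 8))   # app needs 2 physical CPUs -> start with 2 vCPUs
print(initial_vcpus(12, 8))  # requirement exceeds the host -> cap at 8
```

From there, monitor the VM and add vCPUs only if performance data justifies it, using your hypervisor's tooling to spread demanding workloads across cores with affinity and anti-affinity rules.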
Review hardware specs with goals in mind
When selecting a physical server for virtualization, consider CPU, memory, storage and network I/O. All four of these elements are key to server consolidation. In terms of CPU specs, evaluate the number of cores, internal cache size and clock speed.
Determine what you want to achieve with your virtual infrastructure and the types of workloads you run when choosing a CPU for virtualization. If footprint reduction is the main goal, go with a larger number of cores over faster clock speeds. However, if workload performance is a bigger concern, faster clock speeds and fewer cores might make more sense.
More memory and more storage help with server consolidation as well, but size memory and storage according to your needs to prevent wasted physical resources. Finally, ensure you have enough network bandwidth to accommodate your virtual workloads.
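The goal-based rule above -- core count for consolidation, clock speed for per-workload performance -- can be expressed as a small chooser. The candidate CPUs below are hypothetical, not products from the list that follows.

```python
# Sketch of the spec-selection rule above: favor core count when the goal
# is consolidation, clock speed when it's single-workload performance.
# The candidate CPUs are hypothetical examples.

def pick_spec(goal, candidates):
    key = (lambda c: c["cores"]) if goal == "consolidation" else (lambda c: c["ghz"])
    return max(candidates, key=key)

cpus = [
    {"name": "many-core", "cores": 32, "ghz": 2.4},
    {"name": "fast-clock", "cores": 16, "ghz": 3.6},
]
print(pick_spec("consolidation", cpus)["name"])  # many-core
print(pick_spec("performance", cpus)["name"])    # fast-clock
```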
Popular CPUs for virtualization
The three major processor vendors are Intel, AMD and IBM. The following are some of their most popular CPUs for enterprise virtualization:
- Intel 3rd Gen Xeon Scalable processors. The 5000 series processors have anywhere from six to 40 cores and 12 to 80 threads. Bronze is a good option for deploying basic servers in-house; Silver and Gold add increased memory speeds, more power and more security features; and Platinum is available for enterprise-level data centers.
- AMD 3rd Gen EPYC processors. The 7003 series processors have anywhere from eight to 64 cores and 16 to 128 threads. AMD also offers a Server Virtualization TCO Estimation Tool that can help you choose the processor that best meets your needs.
- IBM Power9 processors. Power9 processors have anywhere from four to 24 cores, and IBM offers scale-out and scale-up options. Power9 processors use I/O subsystem technology intended to boost off-chip I/O and are compatible with major I/O standards, including Nvidia NVLink 2.0, PCIe Gen4 and OpenCAPI.
This information will help you choose and configure a CPU, but there's much more to know about processor technology, such as the introduction of chiplets for scalability and recent improvements to existing CPU features.