GPU virtualization evolves with new chip types on the horizon
GPU-makers Intel, AMD and Nvidia all provide virtualization support; the next step is to merge CPU and GPU functionality to handle both application and math/graphics-related tasks.
There are several options for IT administrators looking to take advantage of GPU virtualization, including Intel Graphics Virtualization Technology, Advanced Micro Devices Inc. Multiuser GPUs and Nvidia Corp. virtual GPUs, but the concept of the GPU core is changing, and chipmakers are working on new core types.
A graphics processing unit (GPU) is essentially a specialized processor with its own instruction set -- its cores are often called shaders -- capable of tackling transforms and other complex math quickly and efficiently. GPUs were long absent from enterprise data centers, and virtualization technologies largely overlooked support for them.
But as computation-intensive workloads such as big data analytics, machine learning and business intelligence visualization gain acceptance, GPUs are boosting math processing and accelerating workload performance in the data center. Virtualization is expanding to include GPUs, enabling admins to provision GPU resources and share them among VMs.
Major GPU-makers Intel, Advanced Micro Devices (AMD) and Nvidia have each created software-based technologies intended to integrate with hypervisors to virtualize GPUs and make those virtual GPUs available to VMs. As of 2019, GPU vendors typically provide GPU virtualization, and new and more powerful GPU chips are coming out on a regular basis.
Intel Graphics Virtualization Technology (GVT) uses a software layer that supports hypervisors, such as KVM and Xen, and enables VMs to access virtualized GPU cores using OpenCL.
There are three principal variations of GVT. GVT-d enables admins to dedicate Intel GPUs to a VM. GVT-g enables admins to timeshare a virtualized GPU between multiple VMs using the native graphics driver. And finally, GVT-s enables admins to share a virtualized GPU between multiple VMs using a virtualized graphics driver.
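On a Linux KVM host, GVT-g vGPU instances are typically created through the kernel's mediated device (mdev) framework. The sketch below assumes a GVT-g-capable host; the PCI address and type name are illustrative and will differ on real hardware:

```shell
# List the vGPU types the integrated Intel GPU exposes.
# (The address 0000:00:02.0 and type names are examples; check your host.)
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types

# Create a GVT-g vGPU instance of a chosen type with a fresh UUID.
GVT_TYPE=i915-GVTg_V5_4   # example type name
UUID=$(uuidgen)
echo "$UUID" > /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/$GVT_TYPE/create

# The new mediated device can then be assigned to a VM, e.g. via QEMU:
#   -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$UUID
```

Each mdev instance corresponds to one timeshared vGPU in the GVT-g model described above.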
AMD Multiuser GPU (MxGPU) technology uses graphics cards with AMD MxGPU chips. Rather than a software layer, MxGPU relies on Single Root I/O Virtualization technology for GPU virtualization and supports VMware ESXi, KVM and Xen hypervisors using specialized drivers.
MxGPU enables up to 16 virtualized workloads to share a physical GPU and access the GPU using OpenCL. MxGPU is suited for VDI deployments where each VM generally corresponds to a user.
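Because MxGPU builds on SR-IOV, the physical GPU advertises virtual functions (VFs) that the hypervisor hands to VMs as if they were separate PCI devices. On ESXi this is managed through AMD's host driver; the generic Linux sysfs flow looks roughly like this (the PCI address and VF count are examples):

```shell
# Confirm the GPU at this example address advertises SR-IOV capability
# and see how many virtual functions it can expose.
lspci -vs 03:00.0 | grep -i "Single Root I/O Virtualization"
cat /sys/bus/pci/devices/0000:03:00.0/sriov_totalvfs

# Enable, say, 4 virtual functions; each VF then appears as its own
# PCI device that can be passed through to a VM.
echo 4 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
lspci | grep -i "Virtual Function"
```

Because the partitioning happens in hardware rather than in a host software layer, each VF gets an isolated slice of the GPU without a shared driver mediating every call.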
Nvidia virtual GPU (vGPU) technology also uses a software layer on top of the hypervisor to enable each VM to share the underlying physical GPU. Nvidia vGPU can operate in pass-through mode, essentially enabling one VM to access the GPU at a time. This provides the best performance because the VM can have complete control over the physical GPU at any given time.
However, when multiple VMs need GPU access, the server needs additional physical GPUs, which raises the cost of the deployment. Alternatively, vGPU can function in a more conventional mode, carving multiple vGPUs from a single physical GPU and provisioning those vGPUs to VM workloads.
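On a KVM host, Nvidia's vGPU manager also exposes its vGPU profiles through the mdev framework, so the admin workflow resembles the sysfs flow used for other mediated devices. A sketch, assuming the Nvidia vGPU host driver is installed (the PCI address and profile name are illustrative):

```shell
# With the vGPU host driver loaded, each physical GPU lists the
# vGPU profiles it supports (address and profile are examples).
ls /sys/bus/pci/devices/0000:3b:00.0/mdev_supported_types

# Inspect a profile's name and how many instances can still be created.
cat /sys/bus/pci/devices/0000:3b:00.0/mdev_supported_types/nvidia-63/name
cat /sys/bus/pci/devices/0000:3b:00.0/mdev_supported_types/nvidia-63/available_instances

# Create a vGPU instance of that profile to assign to a VM.
echo $(uuidgen) > /sys/bus/pci/devices/0000:3b:00.0/mdev_supported_types/nvidia-63/create
```

The choice of profile fixes how much frame buffer and how many display heads each VM receives, which is how the software layer divides the physical GPU among workloads.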
Note that the concept of the GPU core is blurring due to two driving forces: core count and core role. A GPU's core count differs radically from a traditional CPU's.
Where a CPU might have eight to 24 CPU cores, a GPU subsystem -- such as an expansion card in a server -- might provide hundreds or even thousands of GPU cores. One high-end example is the Nvidia Quadro RTX 8000, which features 4,608 Compute Unified Device Architecture cores and 576 Tensor cores with 48 GB of video RAM.
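Specifications like these can be read directly off a host, assuming the Nvidia driver and its bundled `nvidia-smi` utility are installed:

```shell
# Report each installed GPU's name and total video memory in CSV form.
nvidia-smi --query-gpu=name,memory.total --format=csv
# Example output (hardware dependent):
#   name, memory.total [MiB]
#   Quadro RTX 8000, 49152 MiB
```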
Core roles are also blending. Traditionally, CPU cores performed application-related tasks, while GPU cores performed math- and graphics-related tasks. In 2019, manufacturers such as AMD are working to merge CPU and GPU functionality into compute cores, which can handle both types of tasks on the same chip. While such concepts don't change the underlying benefits of GPU virtualization, they can change how resources are virtualized and made available to workloads in the future.
Part three of this three-part series will cover each of the major I/O chipset virtualization extensions.