
GPUs, FPGAs increase software-defined data center performance

Some applications in your SDDC might need special hardware so they can function at their best. GPUs and FPGAs are an effective way to increase performance.

Resource-hungry workloads such as virtual desktop infrastructure (VDI) or artificial intelligence applications often require GPUs or FPGAs because commodity servers don't always provide enough power for sufficient application performance.

Professionals designing and operating a software-defined data center (SDDC) should consider the requirements for hardware accelerators such as graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) if they want to increase data center performance.

Usually, an SDDC is most effective when it pools all of its x86 servers; maximum flexibility comes from being able to place any workload on any physical server. This means the central design decision is whether to add accelerators to the whole SDDC pool or only to a few hosts, which creates an island of acceleration within the SDDC.

Addressing data center scale

Before adding hardware, admins must determine the necessary scale of acceleration, because GPUs and FPGAs bring a trade-off: accelerated islands or potentially idle hardware. Accelerated islands reduce placement flexibility, and with it data center performance. But if every host has an accelerator and only 10% of workloads require acceleration, then most accelerators sit idle and the organization does not get value for its purchase price.
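
To make the trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. The host count, GPU price and the 10% acceleration share are illustrative assumptions, not vendor figures; the point is how quickly cost and utilization diverge between the two placement options.

```python
# Rough cost/utilization comparison of the two accelerator placement options.
# All numbers are illustrative assumptions, not vendor pricing.

HOSTS = 40                    # total x86 hosts in the SDDC pool (assumed)
GPU_COST = 4000               # cost of one midrange GPU (assumed)
ACCEL_WORKLOAD_SHARE = 0.10   # fraction of workloads that actually need a GPU

# Option A: put a GPU in every host -- maximum placement flexibility.
cost_everywhere = HOSTS * GPU_COST
utilization_everywhere = ACCEL_WORKLOAD_SHARE  # roughly 90% of GPUs sit idle

# Option B: build an accelerated island sized to the actual demand.
island_hosts = max(1, round(HOSTS * ACCEL_WORKLOAD_SHARE))
cost_island = island_hosts * GPU_COST
utilization_island = min(1.0, (HOSTS * ACCEL_WORKLOAD_SHARE) / island_hosts)

print(f"GPU in every host:     ${cost_everywhere:,}, utilization ~{utilization_everywhere:.0%}")
print(f"Island of {island_hosts} GPU hosts: ${cost_island:,}, utilization ~{utilization_island:.0%}")
```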

For example, an SDDC that serves office workers may have some virtual desktops that need a little GPU power to make them responsive. In that case, a midrange GPU in every virtualization host makes sense to allow any host to be part of the VDI environment.

If the SDDC serves a bank's fraud detection team that requires massive GPU power to train its AI system, then a few physical hosts with multiple high-end GPUs make sense. This creates a GPU-accelerated island within the SDDC for effective data center performance.
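
The practical effect of an island is that placement logic must steer GPU-demanding workloads onto the accelerated hosts while leaving everything else free to land anywhere. The sketch below shows that idea in the simplest possible form; the host names and flags are made up for illustration and do not represent any particular SDDC scheduler.

```python
# Minimal placement sketch: GPU workloads can only land on the island hosts,
# while general workloads can run anywhere. Hosts and flags are hypothetical.

hosts = {
    "host01": {"gpu": False}, "host02": {"gpu": False},
    "host03": {"gpu": True},  "host04": {"gpu": True},   # the GPU island
}

def candidate_hosts(workload_needs_gpu: bool) -> list[str]:
    """Return the hosts eligible to run a workload; GPU demand shrinks the pool."""
    return [name for name, h in hosts.items() if h["gpu"] or not workload_needs_gpu]

print(candidate_hosts(workload_needs_gpu=False))  # all four hosts
print(candidate_hosts(workload_needs_gpu=True))   # only host03 and host04
```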

Calculating SDDC maintenance


A second significant consideration is the rate of change of the accelerators versus the rate of replacement for the x86 servers. Part of the value of an SDDC is that it requires few hardware changes because the software layer abstracts the underlying hardware. If the GPU or FPGA needs to be replaced every two years to maintain data center performance but the x86 servers are on a three- or four-year cycle, then admins must make data center upgrades far more often.
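
A quick calculation makes the mismatch visible. The cycle lengths below are the assumptions from the paragraph above, and the planning horizon is arbitrary; the takeaway is simply how many more hardware maintenance events a shorter accelerator cycle adds.

```python
# How often does hardware maintenance touch the data center when accelerator
# and server refresh cycles diverge? Cycle lengths are assumptions.

YEARS = 12               # planning horizon (assumed)
server_cycle = 4         # x86 server refresh cycle, in years
accelerator_cycle = 2    # GPU/FPGA refresh cycle, in years

server_refreshes = YEARS // server_cycle
accelerator_refreshes = YEARS // accelerator_cycle

print(f"Server refresh events over {YEARS} years:      {server_refreshes}")
print(f"Accelerator refresh events over {YEARS} years: {accelerator_refreshes}")
# With a two-year GPU cycle, admins touch hardware twice as often as the
# server cycle alone would require.
```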

Larger SDDCs should plan for hardware refreshes annually, or even every six months, so that organizations can deploy newer accelerators without changing existing servers. This kind of rolling upgrade is more common in hyperscale environments and can be a challenge for organizations used to a single large upgrade every three years.

Looking ahead at data center performance

Computing hardware companies continue to improve accelerator technology. One goal is to make data center GPUs shared resources, rather than tying them to specific physical servers.

There is work underway at Dell to allow machines without a physical GPU to send Nvidia's compute unified device architecture (CUDA) instructions across a network to a pool of GPUs and get results just as if a GPU were installed locally. This technology allows GPUs to act as a pooled resource that software accesses across a network, much like shared data center storage today.
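
The appeal of such a scheme is that application code written against the GPU does not change. The sketch below uses CuPy as an example CUDA front end, which is our choice for illustration and is not mentioned in the article; under a GPU-remoting layer of the kind described above, the same code would run whether the GPU sits in the local server or in a pooled bank across the network. Running it requires CuPy and a reachable GPU.

```python
# Sketch: GPU application code stays the same whether the CUDA calls hit a
# local GPU or are forwarded to a pooled GPU by a remoting layer below this API.
import cupy as cp

a = cp.random.random((2048, 2048))
b = cp.random.random((2048, 2048))
c = a @ b                     # executed on whichever GPU the platform provides
print(float(c.sum()))
```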

Admins can also consider composable infrastructure options that use a Peripheral Component Interconnect Express (PCIe) fabric as a network, which allows the SDDC to connect specific physical hosts to a bank of accelerators for increased data center performance. Some of these options are cloud-based and might serve the business better than on-premises hardware.

Software that runs on commodity servers is the core of an SDDC, but plenty of applications require different types of compute. Before investing, IT professionals must understand their applications' need for accelerators, or risk business units bypassing the SDDC to get the application -- and data center -- performance they require.
