
Explore the benefits, tradeoffs of GPU instances in cloud

While GPU instances can give compute-intensive apps the processing boost they need, not all workloads -- or budgets -- will benefit from these cloud instance types.

The increasing adoption of AI, machine learning and big data analytics has enterprises looking for new cloud instance types.

As their data sets continue to grow, enterprises need more power to crunch all the numbers. A graphics processing unit (GPU) -- mainly used for graphics-heavy applications but suited for some compute-intensive ones as well -- can help solve that challenge.

Major cloud providers now offer GPU instances, but not all workloads are right for them -- and cost could be a deal-breaker. Here are three common questions to ask before you adopt cloud GPUs.

Which workloads benefit from GPU instances?

GPU instances aren't for every application or workload. Before you jump in, carefully evaluate your application needs, and remember that, in general, these instance types are best for compute-intensive workloads.

Some business analytics applications are well-suited for GPUs because of the processors' parallel computing capabilities, and AI applications can also be a good fit. Companies that use supercomputing for academic research can benefit from GPUs as well. Other compute-intensive applications, such as those used for video production, virtual desktop infrastructure and engineering simulation, can also get a boost from these instance types.
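The common thread in these workloads is data parallelism: the same operation applied independently to many data elements, which a GPU can spread across thousands of cores. As a rough CPU-side analogy only (Python threads do not deliver true parallel speedup for CPU-bound work, and the function name is hypothetical), the shape of the work looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def scale(x):
    # One independent, per-element operation -- the kind of work a GPU
    # parallelizes across thousands of cores at once.
    return x * 2.0

data = list(range(8))

# Serial version: one element at a time, as a single CPU core would run it.
serial = [scale(x) for x in data]

# Data-parallel version: the same operation mapped over all elements
# concurrently. On a GPU, each element would map to its own thread.
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(scale, data))

assert serial == parallel  # same result either way; only the scheduling differs
```

Workloads that fit this pattern (matrix math, frame rendering, model training) see the biggest gains; branchy, sequential logic generally does not.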

What challenges or risks come with GPU instances?

While GPU-based cloud instances might be less expensive than implementing the technology in-house, they tend to be more expensive than their virtual CPU-based counterparts. The types of workloads that run on GPUs might also come with a learning curve for some IT teams.

The GPU options from Amazon Web Services (AWS), Azure and Google range from $0.70 to $0.90 per GPU per hour. By comparison, AWS' general-purpose, virtual CPU-based instances start at $0.0058 per hour.
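The gap is easier to appreciate over a sustained billing period. A back-of-envelope comparison, using the per-hour rates quoted above and assuming a 730-hour month (8,760 hours per year divided by 12 -- that approximation is ours, not the providers'):

```python
# Rough monthly cost comparison from the per-hour rates quoted above.
HOURS_PER_MONTH = 730  # assumed approximation of one month of 24/7 use

gpu_rate_low, gpu_rate_high = 0.70, 0.90  # $ per GPU per hour
cpu_rate = 0.0058                         # $ per hour, entry-level vCPU instance

gpu_monthly_low = gpu_rate_low * HOURS_PER_MONTH
gpu_monthly_high = gpu_rate_high * HOURS_PER_MONTH
cpu_monthly = cpu_rate * HOURS_PER_MONTH

print(f"GPU instance:  ${gpu_monthly_low:.2f} - ${gpu_monthly_high:.2f} per month")
print(f"vCPU instance: ${cpu_monthly:.2f} per month")
print(f"Ratio: roughly {gpu_monthly_low / cpu_monthly:.0f}x at the low end")
```

At these rates, a single always-on GPU runs to several hundred dollars a month versus a few dollars for the smallest general-purpose instance, which is why GPU instances are usually reserved for workloads that genuinely need them and shut down when idle.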

Also, high-performance computing (HPC) apps, which typically use GPUs, often require customized development, as well as specialized tools and frameworks, such as Apache Spark, TensorFlow and Torch, which some IT teams are not familiar with. Enterprises might need to invest in training and certifications to ensure staff can build and manage these applications properly.

Which cloud providers offer GPU instances?

Leading cloud infrastructure providers have their own brands of GPU instances. AWS offers Elastic GPUs that attach to Amazon Elastic Compute Cloud (EC2) instances via a network. Users choose the EC2 instance type that fits their application's compute, memory and storage requirements and then select the GPU to attach. AWS provides four options that range from 1,024 to 8,192 MB of GPU memory.

Azure's GPU instances are categorized under its N-series VMs and come in two options: NC and NV. The NC sizes are geared toward compute- and network-intensive applications and are powered by Nvidia's Tesla K80 card. NV sizes are meant for visualization, gaming and encoding and use Nvidia's Tesla M60 GPU card and Nvidia Grid. To use these GPU instances, users must deploy them in Azure Resource Manager.

Google provides similar options through its Compute Engine platform. Users can add Nvidia Tesla P100 or K80 GPUs to non-shared-core predefined instances or to custom instances. There are a few restrictions users should note, such as a limit of 208 GB of system memory for GPU instances.
