Public cloud services grow HPC market, but work remains

HPC applications need a lot of resources, but not all enterprises can meet those requirements. However, new cloud services are emerging to address that gap in the market.

Public cloud providers are always searching for ways to extend their reach, and one of the ways they've advanced those efforts is with high-performance computing.

AWS, Microsoft and Google have found success with high-performance computing (HPC) services because of the convenience they offer enterprises. HPC requires sizable investments in data center equipment, business processes and maintenance, and few enterprises have the in-house processing power or employee skill sets required to meet those demands.

Public cloud lowers barrier to adoption in HPC market

HPC applications chew up lots of resources -- often requiring specially designed servers -- to run complex mathematical calculations, such as computational fluid dynamics and financial analysis simulations, according to Charles King, an analyst at Pund-IT Inc.
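For a concrete sense of the workload, here is a toy sketch of the kind of embarrassingly parallel calculation HPC systems scale out: a Monte Carlo estimate of a European option price. The figures and function name are illustrative only; a real HPC run would distribute billions of paths across many nodes.

```python
import numpy as np

def price_european_call(s0=100.0, strike=105.0, rate=0.01,
                        sigma=0.2, years=1.0, n_paths=10_000_000):
    """Toy Monte Carlo price for a European call under geometric
    Brownian motion -- a classic, embarrassingly parallel HPC workload."""
    rng = np.random.default_rng(seed=42)
    z = rng.standard_normal(n_paths)
    # Simulate the terminal asset price for each path.
    s_t = s0 * np.exp((rate - 0.5 * sigma**2) * years
                      + sigma * np.sqrt(years) * z)
    payoff = np.maximum(s_t - strike, 0.0)
    # The discounted mean payoff is the price estimate.
    return np.exp(-rate * years) * payoff.mean()

print(f"Estimated option price: {price_european_call():.4f}")
```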

Historically, HPC applications were also built in proprietary development environments with often obscure programming tools. Now, the public cloud provides organizations with the needed flexibility, as well as application development environments that are open source, standardized and widely adopted.

"Cloud computing has expanded the HPC market and has further democratized it by making HPC available to new adopters that lack on-premises HPC resources," said Steve Conway, senior research vice president at Hyperion Research in St. Paul, Minn.

Enterprises can now run more demanding HPC applications on the public cloud for use cases such as data analysis, automated driving systems, precision medicine, affinity marketing, business intelligence, cybersecurity, smart cities and the internet of things, Conway said.

Enterprises that already have HPC infrastructure on premises will find it impractical to migrate all of it to the cloud, but they can move parts of it.

"Application development can take place in the public cloud, and enterprises can run the code in their data centers," said Karl Freund, consulting lead for HPC and deep learning at Moor Insights & Strategy.

Many HPC applications have sporadic computing needs. For example, an oil company may need to determine the best location to drill, but it only needs to run that analysis once every six months. With cloud, organizations access the necessary computing services as needed.
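As a sketch of that on-demand pattern -- assuming boto3 and AWS credentials, with a placeholder AMI ID -- a periodic job might spin up a fleet, run, and tear it down:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Spin up a short-lived fleet for the twice-yearly analysis run.
# The AMI ID is a placeholder; c5n.18xlarge is one of AWS's
# compute-optimized instance types aimed at HPC workloads.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=8,
    MaxCount=8,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "job", "Value": "drill-site-analysis"}],
    }],
)
instance_ids = [i["InstanceId"] for i in resp["Instances"]]

# ... submit the job, wait for it to finish, copy results out ...

# Terminate the fleet so billing stops the moment the run is done.
ec2.terminate_instances(InstanceIds=instance_ids)
```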

Cloud vendors rush to fill gaps with HPC services

AWS, Microsoft and Google have moved aggressively to address the demand for HPC services. They also see it as a way to maximize the return on their data center investments, because it puts idle computing power to work, Freund said.

In October 2017, Microsoft signed an agreement with Cray, a longtime leader in the HPC market, to bring Cray's supercomputing systems to Azure. Earlier that year, Microsoft acquired Cycle Computing, which focused on HPC workload orchestration. In May 2019, Microsoft said its Azure HB-series virtual machines scaled an HPC job to 10,000 cores, a cloud high-water mark.

AWS' HPC offerings provide a variety of compute instance types that can be configured by each customer. They feature Intel Xeon processor-powered CPU instances, GPU-based instances and instances powered by field-programmable gate arrays.

In April 2019, Google expanded its Google Compute Engine VM offerings to include Compute- and Memory-Optimized VMs; a Google Cloud Platform Marketplace service for Lustre from DataDirect Networks; and support for running T4 GPUs on GCP.
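For a rough sense of how a T4-backed VM is provisioned programmatically, here is a minimal sketch using the google-cloud-compute Python client (which postdates these announcements); the project ID, zone and image are placeholders, and the T4 attaches to an N1 machine type rather than the compute-optimized family:

```python
from google.cloud import compute_v1

PROJECT = "example-project"   # placeholder project ID
ZONE = "us-central1-a"

def create_t4_vm(name: str) -> None:
    # Boot disk built from a public OS image family.
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12",
            disk_size_gb=50,
        ),
    )
    instance = compute_v1.Instance(
        name=name,
        # T4 GPUs attach to N1 machine types, not the C2 family.
        machine_type=f"zones/{ZONE}/machineTypes/n1-standard-8",
        disks=[boot_disk],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
        guest_accelerators=[
            compute_v1.AcceleratorConfig(
                accelerator_count=1,
                accelerator_type=f"zones/{ZONE}/acceleratorTypes/nvidia-tesla-t4",
            )
        ],
        # GPU instances must stop for host maintenance; they can't live-migrate.
        scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
    )
    operation = compute_v1.InstancesClient().insert(
        project=PROJECT, zone=ZONE, instance_resource=instance
    )
    operation.result()  # block until the create operation completes

create_t4_vm("hpc-gpu-node-1")
```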

Work remains

Despite the efforts of cloud providers to expand their HPC services, enterprises still face barriers to adoption. These applications require terabytes of data, so it's often impractical to transfer the information even over a high-speed WAN connection. Instead, cloud vendors offer offline migration services, such as AWS Snowball, in which they physically pack up the data and move it from the customer's data center to their own facilities.
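For illustration, an import job for such a device can be requested programmatically. This is a minimal sketch using boto3's Snowball client; the bucket ARN, role ARN and address ID are all placeholders.

```python
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")

# Request an import job: AWS ships a Snowball appliance to the address
# on file, the customer loads data onto it locally, and the contents
# are ingested into the target S3 bucket when the device is returned.
# The bucket ARN, role ARN and address ID below are all placeholders.
job = snowball.create_job(
    JobType="IMPORT",
    Resources={
        "S3Resources": [{"BucketArn": "arn:aws:s3:::example-hpc-dataset"}],
    },
    Description="Seismic survey data for reservoir modeling",
    AddressId="ADID1234ab12-3eec-4eb3-9be6-9374c10eb51b",
    RoleARN="arn:aws:iam::123456789012:role/example-snowball-import",
)
print("Snowball job created:", job["JobId"])
```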

However, there are several potential complications that could hinder this approach for the time being. For starters, packing up petabytes of data is extremely complicated, and users often spend considerable time staging data and tuning algorithms. It can take months to reconstruct the data to run jobs in a new environment post-migration. In short, the chance for problems is higher than normal, and the tolerance for risk is much lower -- not the best combination.

The broader concern is that HPC is at odds with the generic nature of the public cloud. These vendors excel at commodity services, so specialized HPC workloads fall a bit outside their comfort zone.

"Most HPC systems are designed to support specific types of projects and workloads," King said. "Dealing with projects from a wide variety of organizations and HPC applications is fairly unconventional. Vendors may find that they're writing the rule book as they go along." 

Still, while HPC services are making slow progress in the enterprise market, they may be approaching an inflection point. To date, the public cloud supports less than 10% of HPC jobs, but that number is expected to reach about 15% in the next two to three years, according to Hyperion Research.

"Cloud adoption among HPC sites is rounding an elbow in the growth curve, and the proportion of all HPC workloads run in the cloud will expand at a brisk rate," Conway said.
