HPE goes all-in on supercomputing in the cloud

Given the popularity of large language model technology, tech vendors such as HPE are now rolling out offerings to help customers train these models.

Hewlett Packard Enterprise (HPE) has long played a major role in the advancement of supercomputing. Even before acquiring Cray, its work sought to make supercomputers more powerful, energy-efficient and accessible. Now, the two companies' combined strengths and brands have positioned HPE to lead the supercomputing market.

With its recent announcements at HPE Discover, the company is doubling down on those strengths and that expertise in an effort to give customers faster, more cost-effective access to supercomputing capabilities.

HPE GreenLake for large language models

The biggest announcement was HPE GreenLake for large language models (LLMs). There is more to it than appears on the surface: the announcement contains several components that could pay dividends for HPE.

First, enterprises can privately train, tune and deploy large-scale AI on HPE's powerful supercomputing infrastructure stack. Additional elements -- the broad concept of supercomputing as a service, HPE's entry into the AI cloud market, the partnership with German AI startup Aleph Alpha and the sustainability story behind a partnered data center -- add significant substance to the announcement.

Large language models are all the rage right now, so it makes sense to target this particular use case out of the gate. With so many organizations needing help getting started, HPE GreenLake for LLMs will be a cloud-based offering designed specifically for large-scale AI training and simulation workloads. That focus matters because general-purpose cloud offerings typically run many tenants' workloads in parallel on shared infrastructure. HPE believes that by dedicating hundreds, if not thousands, of CPUs and/or GPUs at full computing capacity to a single workload, customers will see much higher levels of performance, efficiency and reliability when training LLMs.
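
To make that contrast concrete, here is a minimal, hypothetical sketch of a single training job claiming every GPU assigned to it, written with PyTorch's DistributedDataParallel. The model, hyperparameters and job sizes are placeholders of my own, not details HPE has disclosed about its stack.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; the launcher (e.g., torchrun) sets RANK,
    # WORLD_SIZE and LOCAL_RANK for every process in the job.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Stand-in model; a real LLM would also be sharded (e.g., with FSDP).
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        batch = torch.randn(32, 4096, device=local_rank)
        loss = model(batch).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()   # gradients are all-reduced across every GPU
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with something like torchrun --nnodes=64 --nproc_per_node=8 train.py, every GPU in the job works on the same training run rather than sharing the machines with other tenants -- which is exactly the single-workload, full-capacity model HPE is describing.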

But the big question here is whether this AI-native architecture is enough to lure folks away from competing cloud providers offering AI infrastructure to train models. If you're an existing supercomputing customer, this should be appealing: paying for access to thousands of GPUs for a week is far more cost-effective than purchasing eight cabinets that must be deployed, integrated and managed in a data center.

HPE's partnership with Aleph Alpha

Part of the announcement includes access to Aleph Alpha's Luminous, a pre-trained LLM that lets customers use their own data to train and fine-tune a customized model. This is important for accelerating the ramp-up of LLM usage. Customers can access Luminous immediately and quickly gain real-time insights based on their own proprietary, domain- or business-specific knowledge to use in a digital assistant.
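
As a rough illustration of what "access to Luminous" looks like today, here is a minimal sketch using Aleph Alpha's publicly available Python client (aleph-alpha-client). Whether HPE GreenLake for LLMs will expose this same interface is my assumption, and the prompt is a placeholder.

```python
import os
from aleph_alpha_client import Client, CompletionRequest, Prompt

# Authenticate against the Aleph Alpha API; the token is assumed to be
# provisioned separately (here read from an environment variable).
client = Client(token=os.environ["AA_TOKEN"])

# Ask Luminous a domain-specific question, as a digital assistant might.
request = CompletionRequest(
    prompt=Prompt.from_text(
        "Summarize the key warranty terms in this service contract:"
    ),
    maximum_tokens=128,
)
response = client.complete(request, model="luminous-extended")
print(response.completions[0].completion)
```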

This component of the announcement is important because it's unrealistic to expect customers to have their own LLMs at the start of their journeys, especially when it comes to training with private data. Many of the LLM announcements we've seen across the industry involve providers offering their own LLMs or partnering with others that have them. While customers may be able to bring their own models to HPE GreenLake, access to Luminous will significantly reduce their time to value.

A needed commitment to sustainability

Supercomputers are powerful systems that consume a lot of energy, and the same is true of all the AI infrastructure used to train LLMs. Vendors across the AI market maintain a continued focus on sustainability, improving energy efficiency and relying more heavily on renewable energy sources.

For customers, this means lower, more affordable operating costs, as well as smaller energy footprints and reduced emissions to combat climate change. HPE GreenLake for LLMs will initially run on supercomputers hosted in QScale's Quebec colocation facility, which draws 99.5% of its power from renewable sources.

Traditional supercomputing applications are next

LLMs are the first supercomputing application supported on HPE GreenLake, but customers can expect more traditional supercomputing applications soon, including climate modeling, life sciences, financial services and manufacturing. All applications will run on HPE Cray XD supercomputers deployed and managed by HPE experts. The HPE Cray Programming Environment will also be integrated and optimized to deliver a complete set of tools for developing, porting, debugging and tuning code. Customers already comfortable with HPE Cray environments will find it easier to move existing workloads to this service.
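
The Cray Programming Environment itself is a compiler, library and tools stack, not a Python API, but as a rough, hypothetical illustration of the tightly coupled workloads it targets -- the kind common to climate modeling and the other domains above -- here is a minimal MPI sketch using mpi4py. The library choice and problem are mine, not anything HPE has specified.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank computes a partial sum over its own slice of the problem,
# a pattern common to climate, life sciences and financial simulations.
local = np.arange(rank, 1_000_000, size, dtype=np.float64)
partial = local.sum()

# Combine the partial results across every rank in the job.
total = comm.allreduce(partial, op=MPI.SUM)
if rank == 0:
    print(f"{size} ranks computed total = {total}")
```

On a Cray system, a job like this would typically be compiled and launched through the environment's own wrappers and scheduler, which is where the porting and tuning tools HPE mentions come into play.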

Takeaways

This is a gamble for HPE. I completely recognize the emphasis on LLMs as the first use case, but the gem of this announcement is the underlying delivery mechanism: I've yet to see anyone else position supercomputing as a service. HPE knows supercomputers as well as anyone, and being able to deliver that capability to customers on demand in a cloud environment is significant. Factor in the sustainable data center component, and this concept could be embraced by customers currently using, or looking to use, supercomputers to support AI workloads.

And that will be the challenge. Are the promises of higher performance, lower power consumption and resource availability enough to sway customers away from other cloud providers that potentially have more LLM expertise and already support LLMs today? It's far too early to tell. And since this service won't be available until the end of 2023, other cloud providers have time to develop and deliver right-sized AI infrastructure for training LLMs. What has me most excited are the other areas HPE will focus on in the future, which align to more traditional supercomputing applications: climate, life sciences, financial services and manufacturing. These areas could resonate with customers that not only know supercomputing but are likely asking themselves how they can afford access to more resources to explore new AI workloads.

Enterprise Strategy Group is a division of TechTarget. Its analysts have business relationships with technology vendors.