
HPE GreenLake adds GenAI capabilities as on-premises PaaS

HPE GreenLake debuts a new PaaS offering for enterprise GenAI development, co-created with Nvidia. HPE also updated OpsRamp and server hardware refreshes for AI workloads.

LAS VEGAS -- Hewlett Packard Enterprise's and Nvidia's latest co-developed offering for the HPE GreenLake platform aims to bring the cloud-like experience of generative AI to on-premises enterprise data centers.

Nvidia AI Computing by HPE is a new portfolio of products and services developed by the two companies and sold by HPE or its reseller partners. The catalog, which will be fully available this fall, took center stage today during the keynote by HPE President and CEO Antonio Neri at HPE Discover 2024.

HPE Private Cloud AI is a platform-as-a-service offering that delivers HPE hardware and software, along with Nvidia GPUs, AI tools and AI models, as a managed private cloud service in a customer's data center through HPE GreenLake.

HPE executives said Private Cloud AI provides a turnkey service for generative AI (GenAI) development, enabling users to scale hardware as demand dictates while keeping data on premises and under the customer's control.

Creating the service has led to updates elsewhere in GreenLake, including a new GenAI copilot for OpsRamp, the IT ops management and visibility SaaS that HPE acquired last year, and a server hardware refresh to accommodate more powerful Nvidia GPUs.

Enterprises are still experimenting with GenAI capabilities and will continue to rely on hyperscaler services through AWS, Azure and others, according to Mike Leone, an analyst at TechTarget's Enterprise Strategy Group.

As those experiments mature and customers move them on premises and into production, however, many enterprises will lack the ability to build systems and networks capable of managing GenAI at scale, he said. HPE has an opportunity with those customers as GreenLake offers not only hardware akin to offerings from Dell Technologies and other on-premises hardware vendors but also mature data management and query software through its Ezmeral software suite.

"HPE has the technology in place if they put it together and execute well," Leone said. "Data gravity is real and is going to dictate what some organizations do."

Customer AI cloud

HPE Private Cloud AI will come in several hardware configurations depending on the size of AI workloads a customer plans to deploy.

Supporting these configurations are new servers, including the HPE ProLiant Compute DL384 Gen12 and HPE ProLiant Compute DL380a Gen12, which target small and medium-sized workloads and support Nvidia's H200 NVL GPUs.

Customers looking to train large language models, as well as AI service providers, can use the HPE Cray XD670 line for the most demanding workloads, with support for up to eight Nvidia H200 Tensor Core GPUs.

Nvidia's contributions to Private Cloud AI include GPUs, the Nvidia AI Enterprise cloud-native software platform and AI models delivered through Nvidia NIM inference microservices.

HPE's software contributions include HPE AI Essentials and HPE Data Lakehouse with HPE File Storage, recently updated to support GenAI workloads. All configurations will include liquid cooling installation options and HPE Sustainability Insight Center software for energy management and reporting.

Unlike a reference architecture, HPE Private Cloud AI will require no component assembly by the user, said Neil MacDonald, executive vice president and general manager of compute, HPC and AI at HPE.

OpsRamp copilot

The OpsRamp automation and visibility SaaS for IT ops teams now includes an operations copilot that enables customers to ask questions and get possible solutions to operations challenges.

Copilots have become an expected feature in IT ops automation tools, Leone said. But they are still immature. True innovation for copilots and other GenAI assistants will come as these services develop into predictable, analytical and self-acting agents within an IT environment. These agents should not only advise teams on how to fix a given problem but also know based upon past actions what should and could be automated, he said.


"It's a table stake now," Leone said. "When I think of IT ops, it will need to have AIOps. But I still don't think GenAI is the right approach. This has to be the first iteration as we work toward agents."

The addition of copilot into the GreenLake catalog might also send a conflicting message to buyers, said Mike Matchett, founder and analyst of Small World Big Data. GreenLake AI services promise to keep enterprise data isolated from the public cloud, but the OpsRamp copilot relies on cloud services and could affect that data.

Customers who are wary enough to keep their data on premises should question what third-party data is being used to support the copilot and what customer data could be fed back to those vendors, he said.

"Simply adding a chatbot to a tool does very little and may even be counterproductive," Matchett said. "There's some benefit to automation, but there's [a number of] questions you should ask to anyone patching GenAI into your operational tool."

Cost of business

Many enterprises aren't investing in GenAI infrastructure, which can quickly tally to hundreds of thousands of dollars, during the early experimentation stage, Matchett said.

As experiments mature and become production-ready, enterprises could be looking for ways to process workloads without running proprietary data in the cloud, he said.

"[GenAI] is going to touch a lot of our IP, and we need to keep control of it," Matchett said. "It's a security and a control need."

Bringing workloads on premises will also require a new set of skills to develop and oversee GenAI, another area where many enterprises lack expertise, according to Steve McDowell, founder and analyst at NAND Research. A PaaS approach could smooth deployment and reduce the need to reskill IT teams or hire new employees.

"AI doesn't look like anything we've touched in IT, and the IT guys are trying to figure it out," McDowell said.

Private cloud deployments and other on-premises configurations won't be cheap, Matchett said. But the combination of savings on new hires and greater data control might outweigh the potential sticker shock.

"Whatever the zeitgeist is, every board room has talked about AI and what they're going to do about it," he said. "There's probably a market there."

Tim McCarthy is a news writer for TechTarget Editorial covering cloud and data storage.
