Virtualization alternatives and AI: Lessons from HPE Discover
IT vendors continue to deliver VMware alternatives and expand the infrastructure systems they offer to support enterprise AI workloads.
At this year's HPE Discover conference, it was clear that IT pros' demands for virtualization alternatives and infrastructure for AI have been heard.
Here are three important facts from the event that IT leaders must be aware of as we enter the second half of 2025.
Demands for virtualization alternatives
When there is a massive uptick in demand, the vendor community will respond, though it might take a few quarters.
Since Broadcom adjusted VMware's licensing model 18 months ago, many organizations have investigated potential alternatives. While some businesses have started to integrate alternative options, the majority of organizations have decided to stand pat.
At HPE Discover, CEO Antonio Neri highlighted the availability of HPE Private Cloud Business Edition with HPE Morpheus VM Essentials, specifically calling out the potential to save 90% on virtualization costs.
Morpheus VM Essentials lets users provision and manage VMs running on HVM, HPE's own KVM-based hypervisor, alongside VMware-based VMs, all from a single interface. With HPE Private Cloud Business Edition, HPE provides Morpheus VM Essentials as part of a private cloud built on the HPE Alletra dHCI platform.
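For readers who want to picture what sits underneath a KVM-based offering like HVM, here is a minimal sketch using the open source libvirt Python bindings to define and boot a guest. This is not the Morpheus VM Essentials interface, which layers its own management console and APIs on top of the hypervisor; it only illustrates the kind of low-level provisioning call such a stack ultimately makes, and the VM name, disk path and sizing below are placeholder values.

import libvirt

# Minimal KVM guest definition (placeholder name, disk image and sizing).
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local KVM hypervisor
domain = conn.defineXML(DOMAIN_XML)     # register the guest definition
domain.create()                         # power on the VM
print([d.name() for d in conn.listAllDomains()])
conn.close()

Management layers such as Morpheus VM Essentials exist precisely so administrators do not have to work at this level across mixed KVM and VMware estates.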
Red Hat, Microsoft, Nutanix, and Verge.io also provide alternative hypervisor options. If your organization is interested in diversifying its hypervisor environment, it is important to recognize that this space is evolving quickly. Keep an eye out, as additional alternatives are likely to emerge, and their capabilities should increase over the next several quarters.
When evaluating alternatives, security, scalability, and cost are all key concerns. But more importantly, understand how easily an alternative can integrate into your existing environment while also providing greater agility to meet future demands, such as supporting hybrid cloud options and container-based workloads.
IT infrastructure for AI on the rise
According to 2025 hybrid cloud research from Enterprise Strategy Group, now part of Omdia, 91% of organizations say they are making or planning to make significant infrastructure investments to support new AI initiatives. This new wave of investment in AI is transforming vendor roadmaps and priorities.
One of the earliest signs of shifting vendor roadmaps appeared on the server side, with vendors adding the ability to integrate more GPU accelerators into a single system, such as the HPE Compute XD690, to improve deployment density for training and inference. On the storage side, nearly every vendor has added a high-performance, highly scalable storage option to its portfolio to support AI demands.
HPE highlighted the HPE Alletra MP X10000, which was announced late last year. This storage infrastructure delivers what you would expect: a highly scalable, high-performance software-defined storage system that provides object storage services to support the large volumes of unstructured data anticipated for AI training and inference. Beyond those foundational specifications, however, HPE adds the ability to integrate a pre-validated portfolio of generative AI models designed to tag the metadata of object data inline on ingest.
Quality data is essential to success in AI. Identifying and tagging the right data to train or augment models is a complex and time-consuming activity. With the ability to integrate pre-validated models, the X10000 should help simplify the data preparation process for internal AI projects. A comparable process could be run on external systems using their own generative AI models, but the integrated approach should simplify deployment and reduce network bandwidth once in place, since the tagging happens inline within the system itself.
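As a rough illustration of why inline tagging matters, here is a minimal sketch of a generic ingest path in which tags are generated and written as object metadata at the moment the object lands in the store, so no second pass over the data is required. This is not HPE's implementation: classify_content stands in for whatever generative AI model produces the tags, and the endpoint and bucket names are placeholders.

import boto3

# Placeholder S3-compatible endpoint for an on-premises object store.
s3 = boto3.client("s3", endpoint_url="https://objectstore.example.local")

def classify_content(data: bytes) -> dict:
    """Stand-in for a generative AI model that returns descriptive tags."""
    # A real system would run inference here; this stub returns fixed tags.
    return {"doc-type": "contract", "language": "en", "contains-pii": "false"}

def ingest_object(bucket: str, key: str, data: bytes) -> None:
    # Tag inline at ingest: the metadata is written with the object itself,
    # so no separate crawl-and-tag job has to re-read the data later.
    tags = classify_content(data)
    s3.put_object(Bucket=bucket, Key=key, Body=data, Metadata=tags)

ingest_object("ai-training-data", "contracts/2025/example.pdf", b"...raw bytes...")

Done externally, the same tagging would mean reading every object back across the network; done inline, the data is touched only once.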
Beyond the X10000, HPE also announced a new generation of its HPE Private Cloud AI, which provides turnkey AI factory infrastructure for enterprise environments. Importantly, this technology can integrate with the previous generation of HPE Private Cloud AI.
Given how new AI environments are, there is an open question: Is a turnkey approach with predefined configurations, which should make deployment simpler, superior, or is a more customized approach that tailors the hardware to the use case preferable for improving the return on investment? With that consideration in mind, HPE also offers options for a more customized deployment.
For IT decision-makers investing in AI, the takeaway is that success requires more than a GPU investment. How your organization manages its data to support AI workloads is a critical design factor in ensuring success. As businesses mature in their use of AI, their needs will move beyond compute and storage.
Networking is essential for AI success
In his keynote address, Neri listed networking along with AI and hybrid cloud as the three pillars of HPE's corporate strategy. The strategic importance of networking is likely fueling HPE's plans to acquire Juniper Networks to augment its networking portfolio.
Networking obviously plays a critical role in ensuring the distributed application environment operates properly. As organizations scale their internal AI initiatives, modernizing networking infrastructure has become increasingly vital to ensuring that the surrounding data pipeline infrastructure -- storage and networking -- can support the needs of the accelerator technology. In your AI architecture investment plans, networking should be a critical consideration.
Scott Sinclair is Practice Director with TechTarget's Enterprise Strategy Group, now part of Omdia, covering the storage industry.
Enterprise Strategy Group analysts have business relationships with technology providers.