Nvidia updated its AI Enterprise offering with production support for VMware vSphere with Tanzu, additional OEM servers and an integration with Domino Data Lab's platform.
Nvidia AI Enterprise 1.1 became generally available on Wednesday.
Chief among the new capabilities is the Tanzu integration that Nvidia added to its VMware support. The AI hardware and software vendor said the added Tanzu offering gives data scientists and developers the flexibility to run AI workloads on containers and VMs within their vSphere environments.
While Nvidia AI Enterprise suite users have had access to a variety of tools on VMware since the two partnered in 2020, the Tanzu integration was missing, said Anne Hecht, senior director of enterprise products at Nvidia.
The full integration with Tanzu is a positive step for both Nvidia and VMware, said Andy Thurai, an analyst at Constellation Research.
"While Nvidia has mastered the art of capturing [the] majority of containerized modern workloads, it has been a challenge for them to get into the mainstream old-school data center workloads dominated by CPUs," Thurai said.
With this addition, the vendor gives data scientists and data engineers the option to run their AI projects in private data centers rather than having to choose the cloud over on-premises deployments.
"Managing and scaling Kubernetes clusters is a monumental task that takes a lot of data engineering cycles to productionize AI workloads," Thurai said. "The combination of vGPUs, vSphere management of AI workloads across containers and virtual machines, and Nvidia Bluefield data processing unit optimization for private data centers makes this a compelling event to run AI workloads in a truly hybrid mode."
He added that the update can help enterprises that are starting to shape their metaverse workloads.
Partnership with Domino Data Lab
By accessing the Domino Data Lab layer of the suite, customers can fully automate their MLOps, Hecht said.
The Domino system gives enterprises self-service scaling and provisioning of workloads and Kubernetes clusters for data scientists. It is also fully integrated with Tanzu and enables collaboration between data scientists through the MLOps platform, according to Nvidia.
The choice to partner with Domino Data Lab is especially interesting since many companies play in the MLOps market, Thurai said.
Hecht said Nvidia chose the MLOps vendor because it had already integrated with and supported Nvidia's DGX platform.
Additional OEM servers
In addition to the Tanzu integration, Nvidia introduced two new certified servers from its partners. Enterprises using the AI suite can now choose additional Nvidia-certified servers from Cisco and Hitachi Vantara.
Pricing for Nvidia AI Enterprise software ranges from $2,000 to $8,090 per CPU socket, depending on subscription length and license support. Enterprises will have to pay separately for both the servers and the Domino Data Lab platform.