
Nvidia seeks to help enterprises with generative AI

The hardware and software vendor introduced new offerings to help enterprises work on models locally. It also updated its enterprise AI suite and Omniverse platform.

Nvidia continued its drive to incorporate generative AI into its products and support enterprises interested in the fast-growing technology.

The AI vendor on Aug. 8 unveiled new software and hardware offerings during SIGGRAPH 2023, a computer graphics conference, including Nvidia AI Workbench, a new workspace for developers.

Making it easier for enterprises

With Workbench, developers can create, test and customize pre-trained generative AI models on a small PC or workstation. They can then scale them to any data center, public cloud or Nvidia DGX Cloud.
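
As a rough illustration of that local-first workflow -- not Nvidia's own Workbench tooling, which is container-based -- the widely used Hugging Face Transformers library can load and test a small pre-trained model on a single workstation GPU before anything is scaled out. The model name below is only a placeholder.

    # Minimal sketch: test a pre-trained model locally before scaling it out.
    # This uses the Hugging Face Transformers library, not Workbench itself;
    # "gpt2" is only a placeholder for any model that fits on a workstation GPU.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.to("cuda" if torch.cuda.is_available() else "cpu")

    prompt = "Generative AI lets enterprises"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))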

"This is focused on developers early in the stage of their journey in getting going, and we think that will make it a lot easier for them," said Erik Pounds, Nvidia's senior director of enterprise computing.

Nvidia's strategy of trying to make it easy for enterprises to customize and run their generative AI projects appears to be on the right track, Futurum Group research director Mark Beccue said.

"Enterprises have had to scramble to collect a lot of different pieces to run AI projects," he said. Some of those pieces include models, frameworks and libraries. Enterprises have also had to consider using private or open source libraries when running their projects.

Having all these things in one place -- such as with AI Workbench -- should help enterprises develop models faster, Beccue said. But having the tools to build generative AI applications does not mean enterprises necessarily have the talent to do it.

"One of the biggest barriers right now is that there is not enough skill in the employment pool to build … applications the way people want to build them," Forrester analyst Rowan Curran said.

Therefore, vendors like Nvidia and others are helping enterprises reduce the complexities involved in developing these models, he said.

AI Workbench is a rebranding of what Nvidia is already doing for data scientists, said Anshel Sag, an analyst at Moor Insights & Strategy.

The vendor is working to enable developers to access high-performance computing at a local level and scale it as they see fit, Sag said.

"It's a way for them to have more control over how these systems are built and tested," he said. "It's very much like a refresh on some of their strategy."

Updating the AI suite

Nvidia also unveiled a new version of its AI platform, Nvidia AI Enterprise 4.0.

AI Enterprise 4.0 now includes Nvidia NeMo, the vendor's framework for building, customizing and deploying large language models (LLMs).

Also, Nvidia Triton Management Service, which helps automate and optimize model deployments in production, is now available in AI Enterprise 4.0.

Nvidia AI Enterprise is currently supported on Nvidia RTX workstations. Nvidia also introduced three new desktop workstation Ada Generation GPUs: RTX 5000, RTX 4500 and RTX 4000. The workstations have 48GB of memory and can be configured with either Nvidia AI Enterprise or Omniverse Enterprise, Nvidia's platform for metaverse applications.


"The enterprise suite is one of those things that they're trying to combine everything into one," Sag said.

Nvidia's goal is to give enterprises access to more AI capabilities.

"One of the reasons why Nvidia has been so successful with their hardware is because they have a very comprehensive software suite," Sag said. "The enterprise AI software suite is a big component of why they're going to be successful with most of the things they're announcing."

However, despite its success with the hardware, Nvidia's challenge may be balancing the new hardware it is announcing against the GPU supply it currently has, Sag said.

Nvidia also revealed a partnership with Hugging Face, the collaboration platform for machine learning model building.

Through the partnership, Nvidia users will have access to the Nvidia DGX Cloud AI supercomputer combined with the Hugging Face platform to train and tune AI models. Hugging Face also plans to release a new service called Training Cluster as a Service in the coming months. The service will help simplify the building of new and custom generative AI models.
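
For a sense of what that training and tuning looks like in the Hugging Face ecosystem, here is a minimal fine-tuning sketch using the Transformers Trainer API. The model and dataset names are placeholders, and DGX Cloud provisioning is not shown.

    # Hedged sketch of fine-tuning with Hugging Face libraries.
    # "distilbert-base-uncased" and the IMDB dataset are placeholders only;
    # they are not tied to the partnership described above.
    from datasets import load_dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    model_name = "distilbert-base-uncased"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    dataset = load_dataset("imdb")

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length")

    tokenized = dataset.map(tokenize, batched=True)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    args = TrainingArguments(output_dir="out", per_device_train_batch_size=8,
                             num_train_epochs=1)
    trainer = Trainer(model=model, args=args,
                      train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)))
    trainer.train()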

"Hugging Face has a lot of mindshare from within the industry in terms of creating new AI applications and prototyping new applications. And I think having Hugging Face as a partner is a big deal," Sag said.

Updating Omniverse with genAI

The vendor is also bringing generative AI to Nvidia Omniverse, its platform for creating metaverse applications.

Nvidia released an update to Omniverse and is offering new foundation applications and services for developers. The update includes advancements to Omniverse Kit, an engine for developing applications and extensions built on OpenUSD, the open source Universal Scene Description framework originally developed by Pixar. The kit includes an extension registry for accessing and sharing Omniverse extensions.
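
To give a sense of what OpenUSD content looks like, the snippet below builds a trivial scene with the open source usd-core Python bindings (the pxr module). It illustrates the USD scene description format itself, not the Omniverse Kit APIs.

    # Minimal OpenUSD sketch using the pxr Python bindings (pip install usd-core).
    # This shows the scene description format, not Omniverse Kit itself.
    from pxr import Usd, UsdGeom

    stage = Usd.Stage.CreateNew("scene.usda")
    world = UsdGeom.Xform.Define(stage, "/World")
    sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
    sphere.GetRadiusAttr().Set(2.0)

    stage.SetDefaultPrim(world.GetPrim())
    stage.GetRootLayer().Save()  # writes a human-readable .usda file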

Nvidia also introduced new Omniverse Cloud APIs, such as ChatUSD, an LLM copilot that answers USD knowledge questions; RunUSD, a cloud API that translates OpenUSD files into rendered images; and DeepSearch, an LLM agent that enables fast semantic search.

Infusing the Omniverse platform with generative AI makes sense, Gartner analyst Tuong Nguyen said.

The nature of generative AI is to create new content. Omniverse, as a platform for building metaverse applications such as 3D simulations and digital twins, similarly centers on creating new content, including avatars and simulated environments.

In the past, that would have required hiring someone to create specific parts of the simulation. Therefore, combining generative AI with Omniverse tools works well, Nguyen said.

Nvidia also revealed that stock image vendor Shutterstock is bringing generative AI to 3D scene backgrounds with Nvidia Picasso, a cloud service for developing generative AI models for visual design. Nvidia Picasso now includes a feature that uses simple text or image prompts to help artists enhance or light 3D models.

"What they're doing is by creating all these partnerships, they're kind of spelling it out for different types of enterprises," Nguyen said, adding that the vendor is showcasing a range of applications for how enterprises can use generative AI or other iterations of AI.

On the hardware side, Nvidia introduced the next generation of the Nvidia GH200 Grace Hopper Platform, which is based on a new Grace Hopper Superchip with HBM3e memory. The platform is meant to handle generative AI workloads such as recommender systems, LLMs and vector databases. It is set to be available in the second quarter of 2024 and is designed to scale out across data centers, according to Nvidia.

Esther Ajao is a news writer covering artificial intelligence software and systems.
