Nvidia unveils Omniverse Cloud for metaverse applications
The new service lets enterprises build metaverse applications. The vendor also introduced new large language model services and unveiled upgrades to its H100 hardware line.
For enterprise applications ranging from digital twins and simulation to robotics, Nvidia has raised its bet on the metaverse by turning to the cloud.
The AI hardware, software and gaming vendor on Tuesday unveiled a slate of new cloud services, including its first software-and-infrastructure-as-a-service offering, Omniverse Cloud. Nvidia's Omniverse is the vendor's platform for building and operating metaverse applications.
"The next wave of AI is where AI interacts with the physical world," CEO Jensen Huang said on Tuesday during a streamed keynote at Nvidia's GTC developer conference. "Omniverse is useful wherever digital and physical worlds meet."
Nvidia Omniverse Cloud is a suite of cloud services for artists, developers and enterprise teams to build metaverse applications.
Applications on Omniverse Cloud include Omniverse Farm, a system layer that enables users to scale 3D workloads; Omniverse Replicator, which helps users generate 3D synthetic data to train computer vision and robotics models; and Omniverse Nucleus Cloud, a system that lets 3D designers and teams work together using shared Universal Scene Description (USD) files.
"Omniverse Cloud is taking some of the best elements of what you can deploy in Omniverse and making them accessible over the internet," said J.P. Gownder, an analyst at Forrester Research.
The cloud platform gives users a starting point for advanced metaverse technologies and services that they can use without making a huge investment into building their own, according to Gownder.
"It should make it more democratized," he said. "It should make it cheaper, more scalable. But it probably won't do everything you would eventually necessarily want to do."
Omniverse Cloud also lets employees in organizations access tools and technologies for simulation and content creation, said Tuong Nguyen, an analyst at Gartner.
"Employees aren't limited to a specific device," like a high-powered desktop machine, he said. Omniverse Cloud also will enable collaboration between employees.
While Nvidia did not reveal the price of Omniverse Cloud, "enterprises will need to evaluate the costs of increased accessibility to these tools and technologies," Nguyen said, including the cost of cloud usage and connectivity and network time.
However, Nvidia is positioning itself as a leading vendor in the metaverse market, especially since there's not a lot of competition now, said Daniel Newman, an analyst at Futurum Research.
"Whether it's Meta or Microsoft, a lot of these companies that are trying to build 3D collaborative environments, digital twins, data replication, simulation -- they're going to be utilizing Nvidia's software," he said.
Nvidia said one current user of its Omniverse technology is WPP, the multinational advertising and communications company, which is using the platform to create a suite of services for personalized programmatic content for customers.
Lowe's, the home hardware retailer, is also testing Omniverse-based digital twins at two of its locations, letting sales associates visualize and interact with nearly all of a store's digital data.
Omniverse Farm and Replicator containers are available now on the Nvidia NGC cloud platform for self-service deployment on AWS using Amazon EC2 G5 instances with Nvidia A10G Tensor Core GPUs.
In addition, Omniverse Cloud will be available as Nvidia-managed services via early access by application.
Large language models
Beyond Omniverse Cloud, the vendor also unveiled two new large language model (LLM) cloud services: Nvidia NeMo Large Language Model Service and Nvidia BioNeMo LLM Service.
NeMo LLM lets developers customize large AI models and deploy them for inference. Developers can start from a pretrained model, such as NeMo Megatron 530B or GPT-3, and adapt it to a specific task using domain prompts provided by the user. The service then exposes the customized model through an API for the user to interact with.
"This service will help bring large language models to all sorts of different use cases," said Ian Buck, general manager and vice president of accelerated computing at Nvidia, in a media session before the conference.
Applications include generating summaries of product reviews, building technical Q&A systems for medical products, and generating answers for financial analysts.
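The prompt-based customization described above can be pictured as wrapping a pretrained model with a domain-specific prompt template. The sketch below is purely illustrative and does not use Nvidia's actual NeMo LLM API; the `pretrained_model` stub and the `make_domain_service` helper are hypothetical names invented for this example.

```python
# Illustrative sketch of prompt-based customization (NOT Nvidia's API).
# A domain prompt template is wrapped around the user's input before it
# reaches a pretrained model; the service exposes the combined call.

def pretrained_model(prompt: str) -> str:
    """Stand-in for a hosted LLM such as NeMo Megatron 530B (stubbed here)."""
    return f"[model output for {len(prompt)}-character prompt]"

def make_domain_service(template: str):
    """Build a task-specific 'service' from a prompt template alone."""
    def service(user_input: str) -> str:
        prompt = template.format(input=user_input)
        return pretrained_model(prompt)
    return service

# Example: a review-summarization service defined purely by its prompt.
summarize_review = make_domain_service(
    "Summarize the following product review in one sentence:\n{input}\nSummary:"
)

print(summarize_review("The headphones sound great but the battery dies fast."))
```

The point of the pattern is that no model weights change: the same pretrained model serves many tasks, each distinguished only by the prompt wrapped around the input.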
BioNeMo LLM provides researchers access to pretrained chemistry and biology language models. BioNeMo helps researchers find patterns in biological sequences and supports protein, DNA and biochemical data.
What makes NeMo LLM and BioNeMo stand out from other LLM services is that they are available on Nvidia-managed infrastructure and the Nvidia cloud, said Andy Thurai, an analyst at Constellation Research.
"This allows for the users to use their cloud and GPUs versus other variations which might drive the traffic to their competitors such as Microsoft or SambaNova," Thurai said.
BioNeMo is application-specific and used for drug discovery, protein experimentation and cancer research.
"This is a great example of where large language models could start to help solve these really significant problems," Newman said.
Due to the massive amounts of data, simulation and processing power needed to hypothesize and formulate drug combinations for experimentation, only a few vendors -- mostly high-performance computing providers -- have offered tools for such applications in the past. Some vendors have offered such HPC services without specific scientific knowledge, Thurai continued.
Since BioNeMo was trained on scientific data sets in fields such as biotech and chemistry, "this can allow for knowledge transfer and experimentation easily," he said.
Nvidia's system can also help enterprises that have had trouble training and running workloads with HPC vendors due to cost.
"With this subscription offering … and a software and hardware combination, Nvidia is going against the other HPC and pure cloud providers," Thurai added. "Only time will tell if they can win."
NeMo LLM and BioNeMo will become generally available in October, according to Nvidia.
Nvidia also revealed that its new H100 Tensor Core GPU, aimed at training and serving very large-scale AI models, is now in full production. Customers that want to try the technology can use Dell PowerEdge servers on Nvidia LaunchPad now. AWS, Google Cloud, Microsoft Azure and Oracle are expected to deploy the H100 in their clouds next year.
Nvidia also introduced the next iteration of its OVX system, originally launched in March. The update is powered by Nvidia's next-generation L40 GPU, which lets enterprises build industrial digital twins that demand high power and performance. The new OVX will be available next year.
The vendor also introduced CV-CUDA, an open source library for building computer vision and image processing pipelines. It accelerates AI-based special effects and image processing tasks for users building 3D environments.