
Nvidia aims at generative AI, joins with image stock firms

The AI hardware/software vendor offers services for enterprises to build generative models without starting anew each time. The vendor is also teaming with Adobe and Getty Images.

Nvidia is now offering generative AI cloud services and hardware for enterprises.

On Tuesday, the second day of its GTC (GPU Technology Conference) for developers, the independent AI hardware and software vendor introduced a generative AI platform and four new inferencing platforms.

The move by one of the most prominent AI specialists comes amid a dramatic upsurge in generative AI technology. Nvidia competitors -- which are also customers, in many cases -- have introduced widely popular text- and image-generating models, including OpenAI with GPT-4, ChatGPT and DALL-E, and Google with Bard.

AI Foundations

The new Nvidia AI Foundations platform consists of several model-making cloud services, including Nvidia NeMo, the Picasso image-generating cloud service, and the BioNeMo large language model service.

"The emphasis with our service is customization," said Manuvir Das, vice president of enterprise computing at Nvidia. "Any customer can either start from the ground up or they can take one of our pre-trained models in order to get a head start."

This approach will relieve enterprises of the time-consuming task of training their generative AI models anew each time they build one, said Gartner analyst Chirag Dekate.

"The models are incredibly large, and as a result incredibly expensive to train in many cases," he said. AI Foundations is "for enterprises seeking to customize generative AI experience to their enterprise context."

Nvidia initially introduced NeMo in the fall, but the vendor has since infused the system with new capabilities. Among them, according to the vendor, is a retriever and generator that enables an LLM to pull accurate information from data sources and compose conversational, human-like answers to users' questions.

Unlike ChatGPT, for example, NeMo can also cite sources for the language model's responses. Developers can also set up measures to control the LLM-generated responses to guard against inappropriate results.
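
The retrieve-then-generate pattern Nvidia describes is broadly similar to retrieval-augmented generation. The sketch below is a generic, minimal illustration of that idea, not NeMo's actual API; the corpus, scoring function and prompt format are all hypothetical placeholders.

```python
# Minimal, generic sketch of a retrieve-then-generate flow. This is NOT the
# NeMo service API; the document store, keyword scoring and prompt template
# are hypothetical stand-ins to show how an answer can be grounded in
# retrievable sources that the model is asked to cite.
from collections import Counter

# Hypothetical document store: (source_id, text) pairs.
CORPUS = [
    ("doc-1", "Nvidia AI Foundations bundles the NeMo, Picasso and BioNeMo cloud services."),
    ("doc-2", "Picasso generates images, video and 3D assets from text prompts."),
    ("doc-3", "BioNeMo targets drug discovery workflows with large language models."),
]

def retrieve(question: str, k: int = 2):
    """Rank documents by naive keyword overlap and return the top k matches."""
    q_words = Counter(question.lower().split())
    scored = sorted(
        ((sum(q_words[w] for w in text.lower().split()), sid, text) for sid, text in CORPUS),
        reverse=True,
    )
    return [(sid, text) for score, sid, text in scored[:k] if score > 0]

def build_prompt(question: str, passages) -> str:
    """Assemble a prompt that asks the LLM to answer only from the cited sources."""
    context = "\n".join(f"[{sid}] {text}" for sid, text in passages)
    return (
        "Answer the question using only the sources below and cite their IDs.\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    question = "What does Picasso generate?"
    prompt = build_prompt(question, retrieve(question))
    print(prompt)  # In a real pipeline, this prompt would be sent to the LLM.
```

A production service would swap the keyword lookup for vector search and add guardrails on the generated output, which is roughly the kind of control Nvidia says developers can configure.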

Enterprises interested in adding generative AI capabilities to their applications with Nvidia products can apply for early access to the NeMo service.

Picasso and partnerships

Picasso enables enterprises to produce images, videos and 3D models. The 3D assets can be produced in Pixar's Universal Scene Description format, imported into Nvidia's Omniverse platform and used to build any environment. With Picasso, Nvidia is partnering with image stock providers including Getty Images, Adobe and Shutterstock.
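
For context on what a Universal Scene Description asset involves, the sketch below authors a minimal .usda file with the open source pxr Python bindings (the usd-core package). The scene contents and file name are hypothetical and say nothing about Picasso's actual output beyond its use of USD.

```python
# Minimal sketch of authoring a Universal Scene Description (USD) asset using
# the open source pxr bindings (pip install usd-core). The scene and file name
# are hypothetical; the point is that a USD file like this can be opened by
# Omniverse or any other USD-aware tool.
from pxr import Usd, UsdGeom

# Create a new stage backed by an ASCII .usda file.
stage = Usd.Stage.CreateNew("generated_asset.usda")

# Define a root transform and make it the default prim so the asset can be referenced.
world = UsdGeom.Xform.Define(stage, "/World")
stage.SetDefaultPrim(world.GetPrim())

# Add a simple sphere as a stand-in for a generated 3D model.
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)

# Write the layer to disk.
stage.GetRootLayer().Save()
```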

Nvidia and Getty Images will train generative text-to-image and text-to-video foundation models on Getty Images' licensed assets, Nvidia said. Artists will receive royalties on any revenues generated from the models.

Meanwhile, CX and visual content tool vendor Adobe and Nvidia are working to develop generative AI models that focus on transparency, the vendors said. Content Credentials, Adobe's Content Authenticity Initiative-approved program, will power the models. Adobe said it plans to bring the models to market and integrate them into products such as Photoshop, Premiere Pro and After Effects, though it's unclear when.

The partnerships represent a strong direction for the generative AI market, especially for image content creation, said Forrester Research analyst Rowan Curran.

Legal turbulence

"Generative AI for images is becoming enterprise-ready," he said, adding that companies need both hardware and software to build their own generative AI models. "These stock photo, video provider companies getting into this space is a wonderful bellwether for businesses that were very interested in generative AI but were a little bit tense due to certain legal proceedings that were going on around it."

In the past few months, several artists have sued Stability AI for allegedly using their images to train its image-generating platform Stable Diffusion. Getty Images also sued Stability AI, alleging the company took some of its stock photos and used them for training.

These lawsuits have raised concerns in the art community about ownership and the ethics of vendors' methods for training large language models.

With its partnerships, Nvidia eliminates some ambiguity since its partners own the images they're using to train their models, Curran noted.

Enterprises "have clear assurances on both the inputs and the outputs of the model," he said.

However, the partnerships may not alleviate artists' concerns that generative AI models will easily reproduce the style of their work.

Adobe is trying to build protection against that, Curran said. Artists who don't want their images used to train generative models can choose not to opt in to the Content Authenticity Initiative.

"This is a first step toward developing some kind of model that we all agree upon -- socio-culturally, ethically -- on how to handle generative art," he said. "This all points to how we can have generative AI as part of our artistic landscape while still also being empathetic and supportive and inclusive of artists who don't want to be involved in the generative AI space."

But that sort of support does not alleviate concerns that artists' and illustrators' jobs will be eliminated.

"Art continues to evolve," Curran said. However, organizations developing the tools should strive to empathize with artists' concerns, he added.

Nvidia Picasso is available in private preview.

The third cloud service available in Nvidia AI Foundations is BioNeMo. BioNeMo enables enterprises to fine-tune generative AI applications and run AI model inferencing in a web browser or cloud application to speed drug discovery.

Generative AI inferencing

Along with Foundations, Nvidia on Tuesday launched four inference platforms for optimizing generative AI applications.

The platforms incorporate Nvidia's latest processors, including the new Nvidia L4 Tensor Core GPU and Nvidia H100 NVL GPU. The vendor also said H100s are now used by Meta and Stability AI, among others, for generative AI training and inference. OpenAI uses A100s to power ChatGPT and plans to use H100s on its Azure supercomputer.

Each inference platform contains a GPU specialized for running a specific generative AI inference workload.

For example, Nvidia L4 for AI Video offers video decoding and transcoding capabilities. Nvidia L40 for Image Generation is optimized for graphics and AI-enabled 2D, video and 3D image generation; it also powers Nvidia Omniverse, a platform for building and operating metaverse applications. Nvidia Grace Hopper for Recommendation Models is aimed at graph recommendation models, vector databases and graph neural networks.

Finally, Nvidia H100 NVL for Large Language Model Deployment is for scale deployment of LLMs such as ChatGPT. 

"They're creating a solution that targets extreme scale inferencing, especially when you start getting generative AI-type scenarios," Dekate said. He added that ChatGPT showed that going from zero to millions of users requires extreme throughput, low latency and performance.

Nvidia AI Foundations includes cloud services for generating AI models.

AI arms race and Nvidia

Nvidia is also partnering with Google Cloud.

The AI hardware and software vendor's new L4 inference platform, which runs its latest L4 GPU, is now available to Google Cloud customers in private preview. Google will also integrate L4 into its machine learning platform, Vertex AI.

"Nvidia is in an interesting place because they are a hardware provider to third-party cloud platforms," Curran said.

However, like Google and Microsoft, Nvidia also possesses generative AI capabilities. And some cloud providers have their own AI hardware, making them not just Nvidia's partners but also its competitors.

"It's one of these situations where Nvidia really wants to empower and energize the AI space overall," Curran added. "But the AI market is much bigger than Nvidia, and it's much bigger than Nvidia could ever be."

For Nvidia, the differentiating factor is its focus on businesses rather than consumers.

"We are providing APIs to enterprise developers to personalize and customize large language models for them," said Kari Briski, vice president for AI and HPC software development kits at Nvidia.

Ultimately, Nvidia's newly added products provide choice for enterprises looking to get into generative AI, beyond the cloud giants' offerings, Dekate said.

"Earlier, they were restricted to cloud service experiences. Now Nvidia also offers them choice," he said.

Esther Ajao is a news writer covering artificial intelligence software and systems.
