
Google updates Vertex AI with new models, expands reach

The cloud provider updates Vertex AI with models from Meta, a better PaLM 2 LLM, and adds features to its text-to-image product. It also expands an alliance with AI vendor Nvidia.

Google is moving to the next iteration of generative AI by offering new models in Vertex AI, updating its AI hardware offerings and partnering with other enterprises.

On Tuesday, the first day of its Google Cloud Next conference, the tech giant said it is expanding its machine learning and generative AI platform Vertex AI with new third-party models, updating its foundation models and releasing new tools to help enterprises use its models most effectively.

"We believe that making AI helpful for everyone is the most important way we'll deliver on our mission in the next decade," Google CEO Sundar Pichai said during a keynote.

"That's why we have invested in the very best tooling, foundation models and infrastructure across both TPUs and GPUs," he said, referring to Google's custom tensor processing units.

Google's investments and the new updates introduced during the conference show the ways the cloud provider is differentiating itself in the generative AI race, according to Gartner analyst Chirag Dekate.

New models in Model Garden

One way Google sets itself apart is with its array of foundation models, Dekate said. Not only is Google's foundation model, PaLM 2, available in Google's Model Garden, but models from generative AI startups such as Anthropic and Cohere are also included.

Google also revealed that Meta's open source models Llama 2 and Code Llama are now available in Model Garden. Google supports Llama 2 with both adapter tuning and reinforcement learning, letting organizations tune the open source model with their data while still controlling that data.
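Adapter tuning, one of the techniques Google supports for Llama 2, generally works by freezing the base model's weights and training only a small add-on module on the organization's data. The sketch below is a generic, illustrative NumPy version of a low-rank adapter; all names are hypothetical and it is not Vertex AI's actual tuning API.

```python
import numpy as np

# Illustrative adapter tuning: the base weight matrix W stays frozen,
# while a small low-rank adapter (A @ B) is the only trainable part.
rng = np.random.default_rng(0)
d, r = 8, 2                       # model dimension, adapter rank (r << d)
W = rng.normal(size=(d, d))       # frozen base weights
A = np.zeros((d, r))              # trainable adapter factor, init to zero
B = rng.normal(size=(r, d)) * 0.01  # trainable adapter factor

def forward(x):
    # The adapter's output is added on top of the frozen base projection.
    return x @ W + x @ A @ B

x = rng.normal(size=(1, d))
# Because A starts at zero, the untrained adapter changes nothing:
assert np.allclose(forward(x), x @ W)
```

The appeal of this setup is that the enterprise's tuning data only ever shapes the small adapter matrices, while the open source base weights remain untouched.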

Meanwhile, Google said the Abu Dhabi government-funded Technology Innovation Institute's Falcon -- an open source large language model for research and commercial use -- is now available in Model Garden.

Open source and choices

Google's choice to include open source models displays the power of open source in the generative AI race, according to Futurum Group analyst Steven Dickens.

"The power of open source development is going to trump anything one particular vendor can do on their own," Dickens said.

The AI and machine learning community gravitates toward open source and shared technology because constantly creating and building new models can be taxing, Constellation Research analyst Andy Thurai said.

"There is a reason why Hugging Face is so popular," he said, referring to the open source AI community and model-sharing platform.

While offering open source models is essential, the strategy of offering different types of models provides enterprises with choices, according to Dekate. This strategy is not unique to Google. AWS and Microsoft are also offering selections of different generative AI models.

Providing different model ecosystems enables enterprises to look at each model side by side and determine which ones work for their particular use case.

"At the end of the day, the largest model is actually not the right answer," Dekate said. "In many cases, smaller, purpose-specific models might actually be better served for the kinds of genuine problems that you are trying to solve in your organizational context."

To cater to that kind of need, Google has also introduced domain-specific foundation models such as Med-PaLM 2 for medical purposes.

Updates to PaLM 2 and Imagen

Besides infusing Model Garden with new models, Google updated its own PaLM 2 model. Support for 38 languages is now generally available, and the model's context window is now large enough to accept entire books as input.

Google's text-to-image diffusion model Imagen now has a new capability: Style Tuning for Imagen. The feature lets users align generated images with their brand guidelines using as few as 10 reference images, according to Google.

Imagen also includes a new digital watermarking capability.

Google Cloud CEO introduces new updates to Vertex AI during Google Cloud Next.

Extensions versus plugins

Within Vertex AI, the cloud vendor is introducing Vertex AI Extensions, a set of managed developer tools that connect models to APIs for real-time data and real-world actions.

With Extensions, developers can connect pre-built extensions to enterprise APIs or build their own. They can use the extensions to build generative AI applications like digital assistants, search engines and automated workflows.
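At its core, the extension pattern amounts to registering callable "tools" that a model-driven application can invoke for live data or actions. The sketch below is a minimal, hypothetical Python illustration of that registry idea; the names are invented and it is not the Vertex AI Extensions API.

```python
from typing import Callable, Dict

# A toy registry mapping extension names to callables.
registry: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that records a function as an available extension."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        registry[name] = fn
        return fn
    return wrap

@register("weather")
def weather(city: str) -> str:
    # A real extension would call a live weather API here.
    return f"Sunny in {city}"

def run_extension(name: str, arg: str) -> str:
    # The application (or model-planned workflow) dispatches by name.
    return registry[name](arg)

print(run_extension("weather", "Paris"))  # Sunny in Paris
```

A managed extension service layers authentication, schemas and hosting on top of this basic dispatch idea, so the model can safely reach enterprise APIs.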

Vertex AI has prebuilt extensions for Google Cloud services such as BigQuery and AlloyDB. It will also let developers integrate with LangChain, an open source framework for developing applications with generative AI.

Extensions are similar to Microsoft's approach to its agents and plugins, Dekate said. Earlier this year, Microsoft gave developers access to plugins that they can connect into consumer and business applications.

"Google is taking a thoughtful approach," Dekate said. "They are methodically thinking through what it means to build an ecosystem that is sustainable."

The vendor is doing this by extending the foundation model an enterprise uses through Google extensions while still keeping proprietary data private, he said.

Other Vertex updates

Vertex AI Search and Conversation is now generally available.

These capabilities were previously unveiled in preview as Enterprise Search on Generative AI App Builder and Conversational AI on Generative AI App Builder. Vertex AI Search and Vertex AI Conversation help organizations combine enterprise data with generative AI foundation models, conversational AI and information retrieval technologies.

The conversation function coordinates the creation of natural-sounding chatbots and voicebots powered by foundation models. Organizations can fine-tune the chat with data from websites, documents, FAQs, emails and agent conversation histories.

Meanwhile, Vertex AI Search helps organizations set up Google-quality, multimodal, multi-turn search applications powered by foundation models. Multi-turn search lets users ask follow-up questions without starting their interaction over.
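Multi-turn search depends on carrying earlier turns along with each follow-up question, so a pronoun like "it" can resolve against prior context. The following is a minimal, hypothetical sketch of that session pattern; the names are invented and it is not Vertex AI Search's API.

```python
from typing import Callable, Dict, List

def make_session(answer: Callable[[List[Dict[str, str]]], str]):
    """Returns an ask() function that accumulates conversation history."""
    history: List[Dict[str, str]] = []

    def ask(question: str) -> str:
        history.append({"role": "user", "text": question})
        # The backend receives the full history, not just the latest turn.
        reply = answer(history)
        history.append({"role": "model", "text": reply})
        return reply

    return ask

# A stub backend that just reports how much context it received.
ask = make_session(lambda h: f"answer with {len(h)} turn(s) of context")
print(ask("What is TPU v5e?"))      # answer with 1 turn(s) of context
print(ask("How does it compare?"))  # answer with 3 turn(s) of context
```

In a real system the accumulated history is what lets the second question omit its subject entirely, since the retrieval and generation layers see the whole exchange.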

Enterprises can ground their outputs in enterprise data or use the data to supplement the foundation model's initial training, Google said.

Key partnerships and customers

Other than expanding its model ecosystem, Google is focusing on its partnerships. Chief among them is an expanded partnership with Nvidia.

"Our two companies with two of the most talented, deepest computer science and computing teams in the world are joining forces to reinvent cloud infrastructure for generative AI," Nvidia CEO Jensen Huang said during a "fireside chat" session at Google Cloud Next.

The partnership includes the expected public release next month of Google's A3 supercomputer, based on Nvidia's H100 GPUs.

Moreover, Nvidia DGX Cloud, Nvidia's AI supercomputing service, is now available on Google Cloud. Google's framework for building LLMs, PaxML, is also now optimized for Nvidia GPUs.

The two vendors are also working on building what they said was the next generation of their processors and hardware infrastructure: an AI supercomputer called DGX GH200, based on Nvidia's new Grace Hopper superchip.

Google's partnership with Nvidia is not surprising due to Nvidia's unquestioned leading role in the AI hardware market, Dickens said.

"Everybody's going to have to partner with Nvidia," he said. "They're pretty much the only game in town for the next six quarters until everybody else catches up."

While Nvidia's GPUs lead the market, Google's own AI accelerators, tensor processing units (TPUs), also provide value, Dekate said.


Google revealed that TPU v5e is now available in preview and offers integration with Google Kubernetes Engine; Vertex AI; and frameworks such as PyTorch, JAX and TensorFlow. Google said v5e provides better training and inference performance for LLMs and AI models than previous versions. Startups such as AI21 Labs are also training their models using TPUs.

For some organizations, using TPUs might be a better option for training LLMs because the processors are cost-efficient, Dekate said.

"It's lowering the cost base for startups like AI21 Labs that want to innovate at the forefront," he said.

On the customer front, Google said organizations such as Bayer, Fox Sports and GE Appliances are using Google's generative AI technology. For example, Bayer uses Vertex AI Search and Med-PaLM 2, according to Google.

GE Appliances said it will use Vertex AI to offer users a feature called Flavorly AI. Flavorly AI generates custom recipes based on the food in users' kitchens. The company also plans to provide SmartHQ Assistant, a chatbot that answers consumers' questions about their registered appliances.

HR software vendor Workday is taking a hybrid approach to foundation models in that it's building its own while working with Google LLMs, according to the vendor's co-founder and co-CEO Aneel Bhusri.

Using Google's generative AI technology, Workday generates job descriptions and has built a skills inference API engine that customers can use to build apps for retention and upskilling.

The winners

Enterprises stand to benefit from what Google and its rivals Microsoft and AWS are doing with generative AI, Dekate said. The vendors are working hard to improve the technology, which means enterprises are seeing innovation at a rapid rate.

"The field is maturing faster than anybody can imagine it. But enterprises are now starting to see a lot of differentiated capabilities emerge," he added.
