
Google starts incorporating Gemini across AI software stack

The cloud provider revealed the model's API is now available in Google AI Studio and on its Vertex AI platform. It also introduced new Duet AI offerings and a partnership with Mistral AI.

Google has started incorporating its new foundation model, Gemini, into its software stack.

The tech giant introduced Gemini on Dec. 6 in three sizes: Ultra, the largest; Pro, the midsize model; and Nano, the smallest.

On Wednesday, the cloud provider revealed that the Gemini Pro API is now generally available to developers in Google AI Studio and in public preview to enterprises on Google Cloud's Vertex AI platform.
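As a rough sketch of how developers reach the model, the Gemini API at launch exposed a REST endpoint per model; the endpoint path and request shape below follow Google's public documentation at the time, and the prompt text is purely illustrative:

```python
import json

# Gemini API REST endpoint for the Pro model (v1beta at launch).
# The API key is passed separately as a "key" query parameter.
MODEL = "gemini-pro"
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)

def build_request(prompt: str) -> str:
    """Build the JSON body for a text-only generateContent call."""
    body = {"contents": [{"parts": [{"text": prompt}]}]}
    return json.dumps(body)

payload = build_request("Summarize Gemini's three model sizes.")
```

Sending `payload` as a POST body to the endpoint with a valid API key returns a `candidates` list containing the generated text.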

While Google plans to incorporate Gemini across its Duet AI portfolio in the next few weeks, it introduced two new Duet AI offerings: Duet AI for Developers and Duet AI in Security Operations. Duet AI is Google's generative AI (GenAI) assistant for Workspace.

Duet AI for Developers includes features to help users code faster with AI code completion, code generation and chat in multiple integrated development environments.

Google also revealed that 25 code-assist and knowledge base partners will contribute data sets specific to their platforms so that Duet AI for Developers users will receive AI assistance based on the partners' coding and data models.

Partners include Jasper, Labelbox and LangChain.

Duet AI in Security Operations enables users to protect their organizations from cyberattacks. Users can search large amounts of data within seconds with queries generated from natural language and can improve their response time to threats, according to Google.

Duet AI for Developers and Duet AI in Security Operations are now generally available.

Google's pricing model

Google's incorporation of Gemini across its software stack should be expected and is not surprising, said Futurum Research analyst Mark Beccue.

"It makes sense," Beccue said. "The logical progression is that you're going to add Gemini to the stack."


What stands out is Google's pricing model, Beccue continued.

Gemini Pro costs $0.00025 per 1,000 characters, or $0.0025 per image, for input, and $0.0005 per 1,000 characters for output. The released version of Pro has a 32,000-character context window for text.

Comparatively, OpenAI's GPT-4 model costs anywhere from $0.01 to $0.06 per 1,000 prompt tokens and from $0.03 to $0.12 per 1,000 sampled tokens, depending on the user's context window.

Also, Anthropic's Claude foundation models cost between $0.80 and $8 per million tokens for prompts, and between $2.40 and $24 per million tokens for completions.

A token is a unit of text used for billing; one token typically amounts to about four characters.
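Using the rates quoted above and the four-characters-per-token rule of thumb, a back-of-the-envelope comparison can be worked out directly. The workload sizes here (an 8,000-character prompt and a 2,000-character response) are assumptions chosen for illustration, and the GPT-4 figures use the low end of the quoted range:

```python
# Back-of-the-envelope cost comparison using the launch prices quoted above.
PROMPT_CHARS = 8_000     # hypothetical request size
OUTPUT_CHARS = 2_000     # hypothetical response size
CHARS_PER_TOKEN = 4      # rule-of-thumb conversion

# Gemini Pro is priced per 1,000 characters of text.
gemini_cost = (PROMPT_CHARS / 1_000) * 0.00025 + (OUTPUT_CHARS / 1_000) * 0.0005

# GPT-4 (low end of the quoted range) is priced per 1,000 tokens.
prompt_tokens = PROMPT_CHARS / CHARS_PER_TOKEN
output_tokens = OUTPUT_CHARS / CHARS_PER_TOKEN
gpt4_cost_low = (prompt_tokens / 1_000) * 0.01 + (output_tokens / 1_000) * 0.03

print(f"Gemini Pro: ${gemini_cost:.4f}, GPT-4 (low end): ${gpt4_cost_low:.4f}")
# → Gemini Pro: $0.0030, GPT-4 (low end): $0.0350
```

Even at the low end of the GPT-4 range, the per-request cost under these assumed rates comes out roughly an order of magnitude higher than Gemini Pro's.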

Google's pricing model is likely cheaper because the vendor has used its own AI hardware to build the foundation model, compared with other vendors that relied on GPUs, Gartner Research analyst Chirag Dekate said.

"If you were a cloud provider that relied on GPUs, those GPU cost structures are baked in," Dekate said. "With your own stack, you can now introduce pricing elasticity."

Google trained Gemini on its application-specific integrated circuit chip, the TPU. This allows the cloud provider to offer Gemini Pro at a notably lower cost than if the vendor had used GPUs to train the model.

Taking advantage of multimodal market share

Moreover, by incorporating Gemini into its AI software stack, Google is providing enterprises access to native multimodal capabilities, Dekate said.

Native multimodal models better understand the complexity of responses, he continued.

Unlike traditional single-task models, which produce only one type of output, Gemini can generate both images and text in response to a question. Models like Gemini can also learn from diverse data sources, including text, images and video.

The current release of Gemini Pro accepts text as input and generates text as output.

By introducing Gemini at the end of the year, Google is betting that while 2023 was focused on the GPT family of models, 2024 will be about multimodal models, Dekate added.

For Google to be successful with its generative AI strategy, it must show how models like Gemini will be applicable to enterprise business processes and workflows, he said.

Google has to figure out how to make multimodal AI seamlessly actionable through its stack.

"For Google, this is possibly one of the largest cloud market share creating opportunities in the last few years," Dekate continued. "If they managed to bring this closer to the enterprise client ... this could enable Google to grow their cloud market share. If they fail to do that, that is when it becomes an impeding factor for them in the long run."

More information on Ultra

Google also has to provide more information about the largest Gemini model: Ultra, Beccue said.

"The big gorilla in the room is the Ultra model," he said, adding that it would likely be comparable to OpenAI's yet-to-be-released GPT-4.5. "This could be one of the biggest LLMs there is."

Google did provide the size of Nano, the smallest Gemini model. The first version, Nano-1, has 1.8 billion parameters, making it smaller than Microsoft's newest small language model, Phi-2.

Phi-2 was released Tuesday and includes 2.7 billion parameters.

Nano-2 has 3.25 billion parameters.

Microsoft claims Phi-2 outperforms not only Nano-2 but also open source models like Mistral and Llama 2.

Google plans to launch Gemini Ultra early next year.

Other moves

The cloud provider also introduced the latest version of its image model, Imagen 2.

Now generally available on Vertex AI, Imagen 2 enables developers to generate higher-quality images, render text in multiple languages and generate logos.

Also, Vertex's AI indemnification commitment -- which reimburses customers for costs related to copyright infringement legal actions -- covers Imagen 2.

Google also revealed that Mistral AI, an open source AI startup that recently raised about $415 million, will use Google Cloud's infrastructure to distribute and commercialize its LLMs. The startup's 7 billion-parameter open source LLM is now available in Vertex AI's Model Garden.

In addition, IT consulting firm Accenture is partnering with Google to help enterprises adopt GenAI. The companies will create a joint generative AI Center of Excellence to help businesses scale generative AI models and applications.

Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.
