
Google updates AI model Gemini, adds 1M context window

The cloud provider's 1.5 Pro model has the largest context window on the market so far. Despite the innovation, Google still needs to show enterprises how the model applies to their business.

Google on Thursday introduced the next version of its Gemini large language model: Gemini 1.5.

Gemini 1.5 Pro is the first Gemini 1.5 model.

It is a mid-sized multimodal AI model that performs at a similar level to the 1.0 Ultra version released in early February, but uses less compute, according to the cloud vendor.

Gemini 1.5 Pro comes with a standard 128,000-token context window, which determines how much text a large language model (LLM) can process at once. However, developers and enterprise customers can try 1.5 Pro with a context window of up to 1 million tokens through AI Studio and Google's Vertex AI platform in a private preview.
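For developers who want a feel for what that access looks like, here is a minimal sketch of sending a long document to the model through the Gemini API. It assumes the google-generativeai Python SDK; the model ID "gemini-1.5-pro-latest" and the file name are placeholders, and 1 million-token access itself is gated through the AI Studio and Vertex AI preview.

```python
# Minimal sketch of trying Gemini 1.5 Pro with a long prompt.
# Assumptions: the google-generativeai Python SDK is installed, the model ID
# below is available to your account, and "large_codebase_dump.txt" stands in
# for whatever long document you want the model to read.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio
model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed model ID

with open("large_codebase_dump.txt") as f:  # placeholder long document
    big_document = f.read()

# Check the prompt against the context window before sending it.
print(model.count_tokens(big_document))

response = model.generate_content(
    ["Summarize the main components of this codebase:", big_document]
)
print(response.text)
```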

A large context window

That's the largest context window in the market so far. It's about eight times bigger than OpenAI's GPT-4 and five times bigger than Claude 2.1 from Anthropic.

The 1 million-token context window equates to about an hour of video, 11 hours of audio, 30,000 lines of code or 750,000 words.
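As a rough sanity check on those equivalences, the cited figures imply the commonly quoted ratio of about 1.3 tokens per English word. The short calculation below is purely illustrative; actual token counts depend on the tokenizer and the material.

```python
# Back-of-envelope check on the 1 million-token equivalences cited above.
context_window = 1_000_000             # tokens in the Gemini 1.5 Pro preview window
words = 750_000                        # word figure cited for that window
lines_of_code = 30_000                 # code figure cited for that window

print(context_window / words)          # ~1.33 tokens per word
print(context_window / lines_of_code)  # ~33 tokens per line of code
```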

Gemini 1.5 Pro can analyze, classify and summarize large amounts of content in a prompt. It can also perform highly sophisticated understanding and reasoning across different modalities, including video.

Google's update of Gemini comes a week after it rebranded its Bard AI chatbot as Gemini.

It also comes after a year in which competitor Microsoft appeared to lead the generative AI market, largely through its partnership with OpenAI.

With its recent Gemini developments, however, Google is showing it has the upper hand.

"This is now Google setting the pace of the future of GenAI," Gartner analyst Chirag Dekate said. "It is no longer a question of Google catching up to others. It is more about when will others catch up to Google."

The 1.5 Pro's context window targets one of the biggest limitations of generative AI systems today, Forrester Research analyst William McKeon-White said.

That limitation is the difficulty generative AI systems have understanding state, the collection of information that indicates where the elements of an AI system stand at a given time.

While retrieval-augmented generation (RAG) has been used to work around the problem, the limited context window has still proved problematic for LLMs.
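For readers unfamiliar with the technique, the sketch below shows the basic shape of retrieval-augmented generation under toy assumptions: split documents into chunks, score each chunk against the query, and pack only the most relevant chunks into the model's limited context window. The word-overlap scoring and rough token estimate here are stand-ins for the learned embeddings, vector databases and real tokenizers that production systems use.

```python
# Toy retrieval-augmented generation (RAG) sketch: rather than fitting every
# document into the context window, retrieve only the most relevant chunks
# and prepend them to the prompt. Illustrative only.

def score(query: str, chunk: str) -> float:
    """Crude relevance score: fraction of query words that appear in the chunk."""
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / max(len(q), 1)

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the query."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

def build_prompt(query: str, chunks: list[str], budget_tokens: int = 128_000) -> str:
    """Pack retrieved chunks into the prompt while staying under the context window."""
    context, used = [], 0
    for ch in retrieve(query, chunks):
        approx_tokens = int(len(ch.split()) * 1.3)  # rough tokens-per-word estimate
        if used + approx_tokens > budget_tokens:
            break
        context.append(ch)
        used += approx_tokens
    return "Context:\n" + "\n---\n".join(context) + f"\n\nQuestion: {query}"

docs = [
    "Gemini 1.5 Pro supports a context window of up to 1 million tokens in preview.",
    "Mixture-of-experts models route each input to a subset of expert networks.",
    "Retrieval-augmented generation fetches relevant documents at query time.",
]
print(build_prompt("How large is the Gemini 1.5 Pro context window?", docs))
```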

However, Google's large context window does not fully eliminate the challenge of state, McKeon-White said. AI models still struggle to store information in a way that can be updated over time but is not ephemeral.

The 1.5 Pro context window is also helpful because it brings Gemini closer to what end users think it should be able to do, McKeon-White added.

"It's able to maintain context, it's able to maintain previous interactions, relevant answers," he said. "It's able to get much more fine-tuned to the level of getting closer to just human passive perception of context, relevancy and understanding."

Google's large context window is also important for enterprises because Gemini 1.5's current 1 million-token window is expandable to 10 million tokens in research settings, and Google might be able to extend that to enterprise versions, Constellation Research founder R "Ray" Wang said.

"An enterprise user can improve personalization at scale and also move at a faster speed," Wang said. "Google delivered on this faster, better and hopefully cheaper with their efficient transformer and MoE architecture."

With a mixture-of-experts (MoE) architecture, a model is divided into smaller expert neural networks, and only the experts relevant to a given input are activated, which makes the model more efficient.
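A toy sketch of that routing idea, under heavily simplified assumptions (linear "experts," a random gating matrix and a single token vector), might look like the following. Real MoE layers sit inside transformer blocks and are trained jointly with the router; this only illustrates why most of the model's parameters stay idle for any one input.

```python
# Toy mixture-of-experts (MoE) routing sketch with NumPy.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is just a small linear layer in this sketch.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate = rng.standard_normal((d_model, n_experts))  # router weights

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route the input vector x to its top_k experts and mix their outputs."""
    logits = x @ gate                          # score every expert for this input
    top = np.argsort(logits)[-top_k:]          # keep only the best-scoring experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # normalize their scores
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.standard_normal(d_model)  # one token's hidden state
print(moe_layer(x).shape)         # (8,) -- only 2 of the 4 experts did any work
```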

Beyond the innovation

While Google's innovation is impressive and appears hard to beat or match, the cloud provider will still need to prove to enterprise customers how it translates into business use, Dekate said.

"What they need to learn how to do effectively is to connect the dots on behalf of the customer," he said.

Google will need to show how 1.5 Pro applies to industries such as insurance, finance, and manufacturing.


Microsoft has been able to succeed in this area because it has quickly made its generative AI technology useful for the enterprise.

"Google needs to make its innovation relevant for the enterprise," Dekate said. "If they managed to do that and innovate on behalf of the customers and create industry alliances, execution strategies, then they can create a market share changing moment."

Without that, Google's innovation with Gemini would be impressive but forgettable, Dekate added.

Google plans to introduce pricing tiers starting at the standard 128,000-token context window and scaling up to 1 million tokens as it improves the model.

Early testers can try the 1-million token context window at no cost.

Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems.
