
New Google Gemini AI tie-ins dig into local codebases

Google ties its latest Gemini AI model to company-specific data in a new code assistant and Vertex AI updates that also anticipate a coming wave of AI agent development.

Google's Gemini 1.5 Pro model forms the foundation for a new AI code assistant and for updates to Vertex AI that also add the context of companies' own data to enhance results.

The updates were part of a bevy of news out of the Google Cloud Next conference this week, most of which centered on generative AI. The new Gemini Code Assist for software developers and Cloud Assist for IT ops pros are based on the Gemini model launched in February. They replace Duet AI for Developers, which launched at last year's conference and was based on the older PaLM 2 large language model (LLM).

Google Gemini 1.5 Pro's claim to fame is its support for large context windows, which means it can summarize, analyze and classify up to approximately one hour of video, 11 hours of audio, 30,000 lines of code or 750,000 words at a time. Some analysts have predicted that such large context windows could eventually supplant retrieval-augmented generation (RAG) as a means of refining generative AI outputs.
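For developers, that capacity means an entire repository can often fit into a single request. The following sketch is an illustration only, showing how a codebase might be summarized in one call through the Vertex AI Python SDK; the project ID, region, model ID string and prompt are placeholder assumptions, not values from Google's announcement.

```python
# Minimal sketch: summarizing a codebase in a single Gemini 1.5 Pro request
# via the Vertex AI Python SDK. Project, region and model name are placeholders.
from pathlib import Path

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project
model = GenerativeModel("gemini-1.5-pro")  # exact model ID may differ by release

# Concatenate source files; the large context window is what makes this feasible
# without chunking, up to the model's token limit.
code = "\n\n".join(p.read_text() for p in Path("src").rglob("*.py"))

response = model.generate_content(
    ["Summarize the architecture of this codebase and flag unused modules:", code]
)
print(response.text)
```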

For now, however, RAG remains alive and well: this week's release adds support for the approach in Google's Vertex AI developer tools, along with grounding in enterprises' own data to improve the accuracy of AI analysis. Vertex now supports more LLMs in Google's Model Garden in addition to Gemini 1.5 Pro and has added features that let developers evaluate different models against one another for use in specific applications.
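Google did not detail Vertex AI's grounding APIs in terms of code, but the underlying RAG pattern is straightforward: retrieve the enterprise documents most relevant to a query and feed them to the model alongside the question. The generic sketch below (not Google's API; the embedding source and in-memory document store are assumptions) illustrates that flow.

```python
# Generic RAG sketch, not the Vertex AI grounding API: rank internal documents by
# similarity to the query and prepend the top matches to the prompt so the model's
# answer is grounded in enterprise data.
from dataclasses import dataclass

import numpy as np


@dataclass
class Doc:
    doc_id: str
    text: str
    embedding: np.ndarray  # produced offline by any embedding model


def retrieve(query_emb: np.ndarray, docs: list[Doc], k: int = 3) -> list[Doc]:
    """Return the k documents most similar to the query embedding (cosine similarity)."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(docs, key=lambda d: cosine(query_emb, d.embedding), reverse=True)[:k]


def build_grounded_prompt(question: str, query_emb: np.ndarray, docs: list[Doc]) -> str:
    """Stuff the retrieved context into the prompt sent to whichever LLM is in use."""
    context = "\n---\n".join(d.text for d in retrieve(query_emb, docs))
    return (
        "Answer using only the context below and cite the doc IDs you relied on.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```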

Finally, Google rolled out Vertex AI Agent Builder, a no-code console in which users can build generative-AI-based multi-step workflows executed by AI agents. Agent Builder is integrated with RAG and grounding tools, including Google Search, Workday and Salesforce data, as well as vector search tools and Google databases.
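Agent Builder itself is a no-code console, but the pattern it automates can be sketched in a few lines: the model proposes an action, the runtime executes the corresponding tool (a search, a CRM lookup, a vector query), and the observation is fed back until the model produces a final answer. The tool names and the `ask_model` wrapper below are hypothetical, included purely to show the loop.

```python
# Illustrative agent loop (not Agent Builder's own API): model proposes an action,
# the runtime runs a tool and feeds the observation back, until an answer emerges.
from typing import Callable

# Hypothetical tool registry; in a product like Agent Builder these would be
# grounding sources such as search, Workday or Salesforce connectors, or vector search.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_docs": lambda q: f"[top documents for '{q}']",
    "lookup_crm": lambda q: f"[CRM records matching '{q}']",
}


def run_agent(ask_model: Callable[[str], dict], goal: str, max_steps: int = 5) -> str:
    """Drive a simple plan-act loop. `ask_model` wraps any LLM call and returns
    either {'tool': name, 'input': str} or {'answer': str}."""
    transcript = f"Goal: {goal}"
    for _ in range(max_steps):
        decision = ask_model(transcript)
        if "answer" in decision:
            return decision["answer"]
        observation = TOOLS[decision["tool"]](decision["input"])
        transcript += (
            f"\nAction: {decision['tool']}({decision['input']})"
            f"\nObservation: {observation}"
        )
    return "No answer within step budget."
```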


Grounding generative AI applications in private codebases and making tools such as code assistants aware of a specific local context have become popular among developer tools vendors such as JetBrains, which last week added a code assistant that runs locally on developers' machines.

"It's early days, but I can see [context-aware AI tools] being valuable," said Devin Dickerson, an analyst at Forrester Research. "Sometimes the devil's in the details in terms of the specifics of [a user's] implementation, the coding style, the look and feel of the types of applications [they're] trying to deliver, or certain style guides and internal practices."

Generative AI's dizzying pace

Overall, cloud providers including Google offer a more cost-effective way for enterprises to experiment with incorporating generative AI into their applications than hosting models themselves. But LLMs are still rapidly developing -- so rapidly that enterprises that adopted Google's Duet products a year ago now face their replacement with another line of Gemini tools. Generative AI also remains prone to inaccurate, biased and sometimes fabricated results if not used correctly -- issues early adopters' platform engineering teams must take steps to deal with and for which tools such as RAG are still needed.


At the same time, most businesses face enormous pressure to add generative AI automation to software-based products and services, according to Dickerson. Such automation can make workflows from call centers to software development dramatically more efficient when it works as expected. As a result, many enterprises are struggling to figure out how to move forward safely, he said.

"We get a lot of questions that come in from our client base [not only on] how [they] can best position [themselves] to innovate but also around future-proofing because it's unclear what products and services will still be around in a few years," Dickerson said. "It's moving rapidly, and so I've received questions about reusable architectures for AI applications that don't necessarily depend on a particular implementation or product."

So far, Dickerson said, there aren't any general rules of thumb he can recommend.

"I don't think I can give a straightforward prescription on that," he said. "[The answer] usually involves me reviewing their actual architecture, where I go through and look at every service."

Gartner analyst predicts AI agent wave

One industry expert predicted that the rapidly evolving conversation around generative AI application development will soon shift again, moving beyond specific models and fine-tuning their outputs to how enterprises can string apps together into cohesive automation workflows. The executors of those workflows, called agents, are in the early stages of entering mainstream IT through projects such as LangChain's Agents and Microsoft's AutoGen -- and now Vertex AI Agent Builder, said Chirag Dekate, an analyst at Gartner.


Google still faces an uphill battle in AI code assistants against Microsoft Copilot, while Amazon's Bedrock AI service already offered model evaluation. But Google has a chance to beat its major cloud rivals in this next phase of generative AI development, Dekate said.

"The agent ecosystem that interconnects Vertex AI, the Model Garden, Gemini for Google Cloud and Workspace, along with third-party connected ecosystems, essentially enables [users] to bring generative AI to life in enterprises [with] workflow automation techniques that are truly transformative," he said. "Imagine a sales workflow that connects Salesforce to your existing prospect database and your existing set of revenue projection vectors into a simplified pipeline."

Agents will be the subject of "a wave of hype" over the next three to six months, Dekate said. But afterwards, agents will add real value to AI application development and software development lifecycles in general, he said.

"It goes beyond code generation, it goes beyond code completion," he said. "All of the grunt work is automated or integrated using agents, and you, as a software developer, are overseeing and correcting outputs that are coming out of this process and enabling the agent to deliver better, more accurate code."

Beth Pariseau, senior news writer for TechTarget Editorial, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.
