
Google adds Gemini CLI for GitHub Actions coding agent

The beta version of Google Gemini CLI for GitHub Actions starts simple and builds in security, but overall, the 'honeymoon phase' for coding agents might be ending.

Google updated its Gemini CLI this week with support for GitHub Actions workflows, looking to broaden the role of its terminal-based coding agent into software delivery pipelines.

After Gemini CLI was initially rolled out in late June, Google's development team for the product needed to expand workflow automation to handle the feature requests and issues being opened by users in its GitHub repository, according to Ryan J. Salva, senior director of product at Google, during a virtual press briefing last week.

"We were receiving so many ideas and contributions that we wanted to automate issue triage and code reviews inside of GitHub, and the community happened to take notice," Salva said. "They happened to see what we were doing and wanted to use those same tools for themselves."

The beta release of Gemini CLI for GitHub Actions is now available free to users with a Google AI Studio API key. The initial release encompasses three preconfigured workflows, in which Gemini CLI can:

  • Analyze, label and prioritize incoming issues;
  • Review pull requests (PRs) for quality, style and correctness;
  • Take on delegated tasks with the mention of @gemini-cli in GitHub issues or PRs.
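A workflow for the first of those tasks, issue triage, might look something like the following sketch. The `google-github-actions/run-gemini-cli` action reference, its version tag and its input names are assumptions for illustration; only the `issues` trigger, `permissions` block and secrets syntax are standard GitHub Actions constructs.

```yaml
# .github/workflows/gemini-triage.yml -- illustrative sketch
name: Gemini issue triage
on:
  issues:
    types: [opened, reopened]

jobs:
  triage:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write       # lets the agent apply labels and comment
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/run-gemini-cli@v0   # assumed action name
        with:
          gemini_api_key: ${{ secrets.GEMINI_API_KEY }}
          prompt: |
            Analyze this issue, apply the appropriate labels
            and assign a priority.
```

Scoping `permissions` to only `issues: write` and read-only repository contents keeps the workflow's default token narrow, in line with the least-privilege guidance Google describes below.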

"All you have to do is mention @gemini-cli anywhere comments are available in GitHub and tell it what to do," Salva said. "You need a bug filed, you need new tests written, you want to implement a change … and Gemini CLI will hop to it."
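The mention-driven delegation Salva describes could be wired up as a comment-triggered workflow along these lines. Again, the `run-gemini-cli` action reference and its inputs are assumptions; the `issue_comment` trigger and the `contains()` guard are standard GitHub Actions syntax.

```yaml
# .github/workflows/gemini-dispatch.yml -- illustrative sketch
name: Gemini CLI dispatch
on:
  issue_comment:
    types: [created]

jobs:
  dispatch:
    # Run only when a comment actually mentions the agent
    if: contains(github.event.comment.body, '@gemini-cli')
    runs-on: ubuntu-latest
    permissions:
      contents: write
      issues: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/run-gemini-cli@v0   # assumed action name
        with:
          gemini_api_key: ${{ secrets.GEMINI_API_KEY }}
          prompt: ${{ github.event.comment.body }}
```

Note that the `issue_comment` event fires for comments on both issues and pull requests, which matches the "anywhere comments are available" behavior described above.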

Salva also detailed the AI security safeguards available for the new coding agent. Vertex AI and Gemini Code Assist Standard and Enterprise users can use Google Cloud's Workload Identity Federation with Gemini CLI for GitHub Actions to avoid using long-lived API keys. Users can enforce the principle of least privilege with features such as command allowlisting, which requires explicit approval of every shell command the agent can execute, and by creating a custom identity for the agent with restricted access to resources. Finally, Gemini CLI for GitHub Actions integrates with OpenTelemetry to monitor usage and debug workflows.
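The keyless authentication piece relies on GitHub's OIDC token exchange. A sketch of how a job could authenticate via Workload Identity Federation using the `google-github-actions/auth` action is below; the project number, pool, provider and service-account names are placeholders, not values from the article.

```yaml
jobs:
  gemini:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write   # required to mint the short-lived OIDC token
    steps:
      - uses: actions/checkout@v4
      # Exchange the GitHub OIDC token for short-lived Google Cloud
      # credentials -- no long-lived API key stored in repo secrets.
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: projects/123456789/locations/global/workloadIdentityPools/github/providers/my-repo
          service_account: gemini-agent@my-project.iam.gserviceaccount.com
```

Because the credentials are minted per run and expire quickly, a leaked workflow log or compromised runner exposes far less than a static API key would.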

AI security issues give developers pause

Google's news comes amid an ongoing flood of coding agents from multiple vendors, including other parts of the cloud provider's own business. Google's Firebase Studio agentic mobile app development product, for example, added fresh AI features last month, including Gemini-driven services integration with Firebase Authentication and the Firestore data store via the Model Context Protocol. Google Cloud rolled out support for its Agent2Agent protocol in the Agent Development Kit July 31. GitHub, meanwhile, rolled out its own Copilot coding agent in May.

But along with the glut of coding agents and the rise of the "vibe coding" trend has come a mounting tide of concerns about their security and reliability following a set of high-profile incidents. Most relevant for Google was the public report of a security vulnerability for Gemini CLI that left it open to the silent execution of malicious shell commands. Google patched that vulnerability, but more high-profile AI security and reliability snafus soon followed from a Replit coding agent and Amazon Q's VS Code extension.


"The recent security issues have demonstrated impressively that AI agents represent excellent vectors for bad guys to exploit," said Torsten Volk, an analyst at Enterprise Strategy Group, now part of Omdia. "However, all this means is that organizations are going back to the drawing board, making sure to enforce standard best practices that were temporarily thrown overboard during the honeymoon phase of the adoption of AI agents. Logging, sandboxing, penetration testing and external audits are key principles that need to be taken as seriously when coding with AI agents as they were with human coders."

For now, developers are interested in experimenting with Gemini CLI for GitHub Actions but hesitant about delegating pull requests to it.

"The risk of malicious pull requests is so significant that GitHub defaults to requiring manual approval for CI/CD workflows on PRs from first-time contributors," said David Strauss, chief architect and co-founder at WebOps company Pantheon. "A project administrator needs to be careful before mixing untrusted PRs, from an AI or a person, with CI/CD, as they stress the same weak point: the CI/CD pipeline's broad access.

"These pipelines are classic vectors for lateral movement because they leverage high-level credentials and permissions, allowing a GitHub Actions job to pivot from a compromised build process to other critical systems," Strauss said. "Unfortunately, the expansive risk of attack means there's expansive risk from a well-aligned but mistaken AI tool operating without oversight."

Trusting a coding agent for PRs seems a long way off, according to another developer with Gemini experience.

"I've used Gemini for months for development, and even if the code generation is good, some details and miscellaneous tasks must be done to make it perfect and ready to push," said Guillaume Blaquiere, a Google Developer Expert since 2019. "Generating code from a feature request will produce incomplete code that must be reviewed and tweaked by a developer. … Instead of this, you could directly ask Gemini locally to produce the code, tune it and then push it. The step to generate on GitHub is totally superfluous [but] maybe in the future it will be possible."

Strauss and Blaquiere aren't alone in their distrust of coding agents for critical tasks. The 2025 Stack Overflow Developer Survey of more than 49,000 respondents found that while 84% were using or planned to use AI tools, an increase from 76% the previous year, positive sentiment toward those tools had decreased, from 70% in 2023 and 2024 to 60% in 2025. More developers said they actively distrust the accuracy of AI tools (46%) than trust it (33%). A majority of respondents (52%) don't use agents, and 38% said they had no plans to adopt them.

Beth Pariseau, a senior news writer for Informa TechTarget, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.
