A GitHub Copilot coding agent introduced this week performs asynchronous tasks in GitHub Actions, with built-in security guardrails and plans to expand user control over underlying AI models.
The GitHub Copilot coding agent is different from the GitHub Copilot agent mode introduced last month because it functions outside an individual developer's integrated development environment (IDE), and teams can delegate multiple tasks that it can perform in the background. Once given a task, the coding agent, which is available to GitHub Copilot Enterprise and Copilot Pro+ subscribers, establishes its own parallel workspace, according to a GitHub blog post.
"It uses GitHub Actions behind the scenes to boot a virtual machine, clones the repository, configures the environment and analyzes the codebase with advanced retrieval-augmented generation (RAG) [in] GitHub code search," the post reads. "As the agent works, it regularly pushes its changes to a draft pull request as git commits and updates the pull request's description. Along the way, you can see the agent's reasoning and validation steps in the session logs."
Suggested tasks for the GitHub Copilot coding agent include bugfixes, repairing security vulnerabilities, code reviews and writing tests -- in other words, toilsome tasks that developers would rather not do, said Matthew Flug, an analyst at IDC.
What [devs] don't want to do is test their code, fix their code, stand up environments to deploy it and allocate resources, and I think that we're starting to see more of that.
Matthew FlugAnalyst, IDC
"If you ask 100 developers, 99 of them will say that they want to write code," Flug said. "What they don't want to do is test their code, fix their code, stand up environments to deploy it and allocate resources, and I think that we're starting to see more of that in terms of GenAI and agents taking those kinds of tasks away from a developer."
GitHub adds model controls, with more to come
GitHub also rolled out GitHub Models this week during the Microsoft Build conference. In its initial release, GitHub Models will support developer evaluation and experimentation in the GitHub interface with large language models (LLMs) from OpenAI, Meta, Microsoft, Mistral, Cohere and others.
Bring-your-own-model support and model controls through GitHub Actions will come in future updates, according to Mario Rodriguez, chief product officer at GitHub, in an interview with Informa TechTarget before Build. This could include features similar to those added to Microsoft's Azure AI Foundry this week, such as model observability and automated model evaluation, routing and rollback. Some of those features have already become available for the Microsoft 365 Copilot Studio and as a GitHub extension for low-code developers this week.
"I could see in the future that if you run your own model from Foundry, there will be a one-button automatic rollback [feature for GitHub Copilot]," he said.
Further user control over underlying AI models might mitigate ongoing concerns in the industry about the accuracy and effectiveness of AI-generated answers, including for code, said IDC's Flug. Built-in model evaluation in GitHub could further that cause, he said, especially if it builds in the same automated model evaluation features released this week for Azure AI Foundry.
"Again, it all comes back to developer enablement, helping them choose a model that is best for their particular project with their particular requirements," he said. "That's huge, because they may just choose a model that they think is best, and they don't even realize the inefficiencies that result from their choice."
The new GitHub Copilot coding agent supports task delegation alongside human developers.
Azure AI Foundry updates beef up security
Amid industry concerns about AI security, including for Microsoft's Copilot, GitHub Copilot coding agent comes with strict guardrails by default. The agent can only push code to a branch it has created, which is kept separate from an organization's main branches; a developer that prompts the agent to open a pull request can't be the one to approve it; the agent has limited access to the GitHub Actions environment and requires human approval at each stage of its workflow. GitHub Copilot is granted special access tokens scoped to only the code repository in which it operates and information it needs to complete a task, Rodriguez said.
Flug said he hopes Azure AI Foundry security updates this week also portend a future direction for GitHub Copilot agents. For example, Microsoft previewed a new integration between its Defender for Cloud and Azure AI Foundry that "will bring real-time security recommendations and runtime alert monitoring into the AI development workflow," according to a Microsoft press release.
"With any agentic AI security capability, I like the thought -- whether the agent can execute that is a different story. But I really do like the idea of being able to address vulnerabilities in real time while the developer is coding," he said. "And then ideally, it says, 'Hey, here's the proposed fix. Do you want me to execute it?' And then it goes in and does it. That is the utopia."
Other AI Foundry security updates this week included generally available prompt shields, which intercept malicious LLM prompts, and a preview of spotlighting, which can identify malicious prompts embedded in external data sources.
Questions remain about lock-in and cost
Like many AI agent vendors, GitHub and Microsoft both tout support for third-party tools through Model Context Protocol servers, although identity controls and a standard registry remain works in progress for the project, including new Microsoft-contributed features as of this week.
But for the most part, Microsoft's AI tools are bound to its own cloud and product lines, which has some early AI adopters concerned about lock-in.
"Our current thinking is to go after the best fit-for-use AI tools [and] orchestrate multiple models -- LLMs, [small language models], etc. -- from several providers, as this is still a rapidly moving target, instead of tightly coupling to AI managed services like Foundry," said Nuno Guedes, cloud compute lead at Millennium BCP, Portugal's largest privately owned bank, headquartered in Lisbon. "That said, I'm sure that companies starting now or in less regulated industries might value the single pane of glass and unified tooling."
With many of the new Microsoft AI features still in preview without set pricing, it also remains to be seen whether users will be willing to pay for them, said Jason Wong, an analyst at Gartner.
"There are a lot of great [updates] from a technology perspective, but clients we talk to are still concerned over the pricing models," he said. "Microsoft is making it easier to embed these agents in tools, but ultimately it's going to come at a cost … that's the big unknown for a lot of organizations."
Beth Pariseau, a senior news writer for Informa TechTarget, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.