MCP OAuth update adds security for personalized AI
An impending update to Model Context Protocol marks an important step toward secure, personalized AI, but also shows that significant work remains to secure AI agents.
The next release of the Model Context Protocol project will include a mechanism to authorize AI agents' access to back-end tools and services on behalf of users, which its creators believe will open new possibilities for personalized AI.
Despite being less than a year old, Anthropic's Model Context Protocol (MCP) has already become a widely embraced means to facilitate AI agents' access to tools and services. Virtually every IT vendor now offers an MCP server, and some, such as GitHub and Microsoft, host MCP server registries and tools catalogs. The project also underpins a range of AI gateway products and open source projects that add security features, although the upstream version has gradually added its own enhancements. The most significant of these was support for AI agent authorization based on the OAuth 2.1 standard, added mid-year, which replaced a less secure API-key-based system.
However, according to Alex Salazar, co-founder and CEO at Arcade.dev, there was still an important aspect of AI agent authorization left to be addressed.
"The OAuth conversation up until now has been about, 'how does the client talk to the MCP server?'" Salazar said. "This is about making sure that the user is allowed to talk to some other service, and I can pull down permissions to ensure that they are actually allowed to do the different types of actions that might be available inside my MCP server."
Engineers from Arcade.dev proposed a mechanism for this authorization flow in July that operates out-of-band, away from the MCP client, without exposing sensitive data to a large language model (LLM). That proposal has been merged into the next major release of MCP, which is scheduled for Nov. 25.
"Now you can build agents that can be scoped to a user," Salazar said. "Think of a personal assistant. You're going to ask it to read your email, you're going to ask it to check your calendar. You're going to ask it to do all kinds of things. Well, it can't even access your email or your calendar without going through an authorization flow."
Outside MCP, that kind of authorization flow is typically initiated with a URL, according to Salazar. But under MCP there previously wasn't a means to send back a URL without going through the MCP client, potentially exposing sensitive data to an underlying LLM.
"The amount of engineering talent and expertise and conversation and negotiation at a technical level that went into this work is extraordinary, but the result is fairly mundane," Salazar said. "We discovered how to send a URL back in a secure, sensitive way, where the large language model isn't involved."
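The out-of-band flow Salazar describes resembles a standard OAuth authorization-code handoff, with the key difference that the URL and tokens stay outside the model's context. The sketch below illustrates the general pattern under stated assumptions: all names, endpoints, and the `PENDING` store are hypothetical, and the real mechanism is defined by the MCP specification, not by this code.

```python
# Illustrative sketch of an out-of-band OAuth authorization flow.
# Endpoints, client IDs, and helper names here are assumptions for
# illustration; they are not the MCP spec's actual API.
import secrets
from urllib.parse import urlencode

PENDING = {}  # state -> user_id, held server-side; never shown to the LLM


def build_authorization_url(user_id, scopes,
                            auth_endpoint="https://auth.example.com/authorize",
                            client_id="my-mcp-server",
                            redirect_uri="https://mcp.example.com/callback"):
    """Create a one-time authorization URL for the user to open in a browser.

    The URL travels out-of-band to the end user; the model only sees an
    opaque 'authorization required' notice, never the URL or any token.
    """
    state = secrets.token_urlsafe(32)  # anti-CSRF state, also the lookup key
    PENDING[state] = user_id
    query = urlencode({
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,
    })
    return f"{auth_endpoint}?{query}"


def handle_callback(state, code):
    """Validate the returned state; a real server would exchange `code`
    for an access token with the provider here."""
    user_id = PENDING.pop(state, None)
    if user_id is None:
        raise ValueError("unknown or replayed state")
    return user_id


url = build_authorization_url("alice", ["email.read", "calendar.read"])
```

Because the state value is single-use and the token exchange happens server-to-server, nothing sensitive ever passes through the MCP client or the underlying model.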
MCP upstream work continues
The new MCP OAuth mechanism, baked in upstream, might render some AI gateway features redundant in certain vendor products -- including Arcade's. But one AI developer said it's better to build this kind of feature directly into the protocol.
"Provided this standardization can be adopted broadly, granting an AI the ability to act in a limited way on your behalf can be made secure and scalable," said Kyler Middleton, principal software engineer at healthcare tech company Veradigm. "There are a lot of bad and incomplete implementations of this functionality, and the comprehensive discussions and consensus from the MCP team help bring us all forward."
However, discussion continued in the upstream community in late September about potential drawbacks to Arcade's MCP OAuth approach.
"Allowing servers to elicit arbitrary URLs may raise phishing/social engineering risk," read the notes from a Sept. 23 meeting of the MCP security interest group. "There is some discussion around servers potentially declaring approved domains that they can send to the client, and the client can make policy determination on which URIs it will even display to the end-user. This likely will be a follow-up exploration."
Middleton said she has similar concerns.
"For instance, I have administrative rights on several sensitive platforms where data is stored. If I interact with a bot, and it asks for a token, which I approve, will it automatically get all rights that I have, or will there be some mechanism by the agent or by the token granting process, by which I can grant only particular rights?" she said. "There needs to be, but that could easily be onerous to users, so it has to be designed carefully."
Securing AI agents still a work in progress
The new MCP OAuth mechanism, while promising, addresses just one layer of security for AI agents; users and developers must still address several more, according to Ian Beaver, chief data scientist at Verint, a contact center-as-a-service provider in Melville, N.Y. Elsewhere in the industry, further AI agent runtime controls have been proposed based on principles of the EU's General Data Protection Regulation, which Beaver said his company has also begun to implement.
"This is an important addition but doesn't absolve the agentic application designers from carefully logging all calls made to tools and APIs on behalf of the user," Beaver wrote of the MCP OAuth update in an email to Informa TechTarget.
"Even with this modification, the application won't know if changes are made by the user or an agent logged in by the user," he said. "Therefore, it is critical that agentic application developers take a security-first mindset, logging any actions for easy auditing if an unexpected application change is made."
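The audit trail Beaver recommends can be as simple as a wrapper that records every tool invocation along with who initiated it, so an unexpected change can be traced to a human or an agent after the fact. This is a minimal sketch under stated assumptions: the field names, actor labels, and `AUDIT_LOG` store are illustrative, not a prescribed format.

```python
# Illustrative audit wrapper: record every tool call made on a user's
# behalf, tagging the actor (human vs. agent) for later auditing.
# Field names and actor labels are assumptions for illustration.
import time

AUDIT_LOG = []


def audited(tool_name, actor):
    """Decorator that appends an audit entry before each tool call."""
    def wrap(fn):
        def inner(*args, **kwargs):
            AUDIT_LOG.append({
                "ts": time.time(),
                "tool": tool_name,
                "actor": actor,  # e.g. "agent:assistant" vs. "user:alice"
                "args": repr((args, kwargs)),
            })
            return fn(*args, **kwargs)
        return inner
    return wrap


@audited("calendar.create_event", actor="agent:assistant")
def create_event(title):
    # Stand-in for a real tool call to a calendar API.
    return f"created {title}"
```

In practice the log would go to durable, tamper-evident storage rather than an in-memory list, but the essential point stands: the application, not the model, is responsible for knowing which actor performed each change.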
Beth Pariseau, a senior news writer for Informa TechTarget, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.