
One year of MCP: Support a must for data management vendors

With most AI development initiatives focused on agentic AI, failure to provide the framework for easily connecting agents with relevant data is now a competitive disadvantage.

One year after its launch, support for the Model Context Protocol is a requirement for data management vendors. Without MCP support, they risk losing customers and struggling to appeal to new ones.

Developed by AI vendor Anthropic and released on Nov. 25, 2024, MCP is an open source protocol designed to simplify the otherwise complex process of developing agents and other AI applications.

AI tools, whether they're agents, generative AI (GenAI) chatbots or traditional machine learning applications, need an enterprise's proprietary data to understand the organization's unique characteristics and respond to prompts related to its operations. Once trained with relevant proprietary data, they can be used to inform strategic decisions and automate business processes.

MCP provides a standard for building MCP servers that connect AI models, including large language models (LLMs) such as the GPT series behind OpenAI's ChatGPT and Claude from Anthropic, with databases, data lakehouses and other data sources. Support for MCP means including either prebuilt MCP servers or MCP code in AI development environments.
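As a rough illustration, a minimal MCP server built with the protocol's official Python SDK can expose a data-backed tool in a few lines. The server name, tool and sample figures below are hypothetical:

```python
# A minimal sketch of an MCP server, assuming the official Python SDK
# (the "mcp" package from the modelcontextprotocol project).
# The tool and its sample data are hypothetical illustrations.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("orders-data")  # server name shown to connecting clients

@mcp.tool()
def count_orders(region: str) -> int:
    """Return the number of orders for a sales region."""
    # A real server would query a governed database here;
    # hardcoded values keep the sketch self-contained.
    sample = {"emea": 1240, "amer": 3310, "apac": 980}
    return sample.get(region.lower(), 0)

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, so any MCP client can connect
```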

Because data management vendors provide the capabilities that ingest and prepare the data for agents and other AI tools, MCP support quickly became common in their platforms after the protocol was launched. Now, one year after MCP's release, support for it is not merely common but instead critical for data management vendors, according to David Menninger, an analyst at ISG Software Research.

"We've reached that point," he said. "It's the way [Open Database Connectivity] was maybe 20 to 30 years ago. If you didn't support ODBC, you were less likely to have tools that worked with your technology. We've reached that point with MCP as well, that if you're a data management vendor, you've got to support MCP."

Sumeet Agrawal, vice president of product management at Informatica, similarly noted that MCP support isn't a luxury or potential differentiator. Instead, after only 12 months, it is a fundamental part of data management platforms, and vendors now stand out only by failing to provide MCP capabilities.

"It's a table-stakes functionality," he said. "It's like five years ago if you asked if API support was important. The answer is yes, it's fundamental. I believe that MCP is extremely fundamental for any data management provider, and if they don't do that, they will fall behind."

Addressing a need

Enterprises have substantially increased their investments in AI development in the three years since OpenAI's November 2022 launch of ChatGPT marked a significant improvement in GenAI technology.

Throughout 2023 and into 2024, most AI development initiatives focused on chatbots that simplify insight generation by enabling users to query and analyze data using natural language rather than code. In addition, development efforts included AI assistants that let technical experts automate certain time-consuming processes, such as code generation and documentation.

Data management vendors responded to the rising interest in GenAI development by creating environments within their platforms to simplify building pipelines that connect proprietary data with AI models and applications.

For example, Microsoft developed its Azure AI Studio environment, which is now called Microsoft Foundry, while AWS built Amazon Bedrock. Many others -- including Databricks, Domo, Informatica, Qlik and Snowflake -- also created GenAI development suites.

As 2024 progressed, agents emerged and quickly became the dominant trend in AI development. Unlike chatbots and assistants that require human prompts before acting, AI agents can be trained with contextual awareness and reasoning capabilities so they can act autonomously. Data management vendors again responded, adding agentic AI development capabilities to their development environments.

But despite all the interest in AI -- the investments enterprises were making, and the tools vendors were providing -- AI development initiatives continued to fail at an alarming rate.


It's complicated to discover relevant data, integrate and prepare it to ensure its quality, connect the data to appropriate models, feed it to an application and constantly monitor the data pipeline to ensure proper performance. Doing so repeatedly as each new application is developed only adds to the difficulty.

Something to simplify the process was needed, according to Michael Ni, an analyst at Constellation Research.

"MCP is the USB-C moment for AI agents," he said. "A year ago, enterprises were burning money on handcrafted agent projects, integrating the data and tools they needed in ways that wouldn't scale or govern themselves. [MCP] turned agent connectivity into an open port so vendors, startups and enterprises can all plug into the same decision fabric instead of rebuilding the plumbing every time."

Enterprises still need to discover, prepare and govern data for agents to perform properly. But that's the case with traditional BI reports and dashboards as well. MCP removes the burden of architecting pipelines that connect agents, models and data sources.

"MCP turns development from artisanal craftsmanship into platform engineering," Ni said. "As with new defined interfaces, frameworks and standards, developers can stop building plumbing and focus on building solutions. Once agents, models and tools plug into the same context fabric, innovation results in faster releases, safer deployments and enterprise-grade decision and process automation at scale."

As a data management vendor that has expanded to include an AI development environment, Snowflake has observed customers struggling to move AI projects from pilots to production.

Before MCP, the primary problem preventing many enterprises from successful development was connecting agents to proprietary data, according to Dwarak Rajagopal, Snowflake's vice president of AI engineering and research. Every time developers built an agent, they needed to customize integrations, which led to long development cycles.

MCP solves that primary problem.

"Enterprises needed a standardized, secure way to let AI agents access governed data without building everything from scratch," Rajagopal said. "That's why MCP was created, and why it's seen accelerated adoption across companies. It's a bridge that allows agents to understand and use enterprise data safely and securely, while preserving compliance and accelerating innovation."

A must for all

When Anthropic released MCP, AWS and Microsoft were among the first to add support for the framework, at least in preview. Snowflake also unveiled support for the protocol in late 2024 before making it generally available earlier this month.

Then, in spring and summer 2025, a wave of data management vendors added MCP support to either existing agent development frameworks or new ones, also mostly in preview. For example, Databricks revealed its support for MCP in a beta release in June. Among many others, Alation, Confluent, Oracle, SnapLogic, Starburst, StarTree and Teradata have all unveiled MCP support as part of development suites.

Now, it's reached a point where data management vendors that haven't introduced MCP support, at least in preview, or made it generally available are putting themselves in peril, according to Ni.

"In the rising noise around the agent economy, the platforms that market MCP become part of or form their own ecosystems," he said. "The ones that don't become perceived as dead ends. If a data platform can't be discovered, invoked and orchestrated by agents, it is not just behind from a protocol uptake, it is invisible. Invisible vendors get replaced."

MCP support, however, wouldn't mean much if the protocol didn't work as intended.

If it were just as difficult to build AI pipelines with MCP as without it -- or only marginally easier -- it would be a failure. The framework, however, is having its desired effect, making it easier for developers to build agents and other AI tools that require combining applications, models and proprietary data sources, according to Menninger.

"It's making development easier and more accurate in terms of the results that they're getting from their AI efforts," he said. "I have not encountered much in the way of negative feedback about MCP."

However, MCP support can mean different things, depending on the vendor, Menninger continued.

Some are in their first phase of adding support for the protocol, such as providing an MCP server or integrating MCP code in a development environment. Others are more advanced, embedding in-line calling to LLMs in SQL or other coding languages or making unstructured data accessible to agents in addition to more traditional structured data.
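For a sense of what in-line LLM calling from SQL can look like, the sketch below uses Snowflake Cortex's COMPLETE function as one example; the connection details and the support_tickets table are hypothetical placeholders:

```python
# A sketch of "in-line" LLM calling from within a SQL query, using
# Snowflake Cortex's COMPLETE function as one real-world example.
# Connection details and the support_tickets table are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
)
cur = conn.cursor()
# Each row's free-text field is summarized by an LLM inside the query
# itself, with no separate application-side pipeline.
cur.execute("""
    SELECT ticket_id,
           SNOWFLAKE.CORTEX.COMPLETE(
               'mistral-large2',
               'Summarize this support ticket in one sentence: ' || body
           ) AS summary
    FROM support_tickets
    LIMIT 10
""")
for ticket_id, summary in cur.fetchall():
    print(ticket_id, summary)
```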

"Many of the vendors have done a first-phase implementation and are planning on expanding their MCP support, so I think there's awareness that there's more they can do," Menninger said. "But I'm not hearing about failed MCP efforts that lead people to think, 'No, we're not going to do that anymore.'"

Informatica's Agrawal, though he now oversees product development, is also a developer with hands-on experience building agents.

One of the agents he built is designed to monitor a laptop. Using tools from Google and Salesforce, Agrawal relied on a series of APIs to build logic within the agent.

"It was not easy," he said. "It was very complicated prompting."

Had he used MCP, only one API would have been needed, Agrawal continued. And it would have required no coding.

"It used to take weeks to develop an agent, and now it's down to hours, and I firmly believe that is the power of MCP," Agrawal said.

[Graphic: the components of a Model Context Protocol workflow. One year after MCP's launch, support for the open framework is a must for data management vendors aiming to help customers build and deploy AI agents.]

Looking ahead

MCP is not the only open standard for AI development.

It standardizes the way agents connect to data sources but does not address how they interact with each other in multi-agent systems, including collaboration and exchanges of data. In May 2025, Google Cloud launched the Agent2Agent Protocol (A2A), which in a sense picks up where MCP leaves off, providing a standard for agent interactions once MCP has simplified development.
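A2A's discovery step illustrates the difference in scope: rather than connecting to data, agents advertise their capabilities in a JSON "agent card" served at a well-known URL that other agents can fetch. The sketch below shows the idea; the host is hypothetical, and the path follows the protocol's published convention:

```python
# A sketch of A2A discovery: an agent fetches another agent's "agent card"
# to learn its endpoint and skills before delegating work. The host is a
# hypothetical placeholder.
import json
import urllib.request

url = "https://agents.example.com/.well-known/agent.json"  # hypothetical host
with urllib.request.urlopen(url) as resp:
    card = json.load(resp)

# Typical card fields include the agent's name, endpoint and skills, which
# a calling agent uses to decide whether and how to hand off a task.
print(card.get("name"), card.get("url"))
for skill in card.get("skills", []):
    print("-", skill.get("id"), skill.get("description"))
```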

While MCP support has become a necessity for data management vendors, A2A has not -- at least, not yet. Nonetheless, many data management vendors that support MCP also provide A2A support. In addition to Google Cloud, fellow data platform vendors including AWS, Databricks, Microsoft and Snowflake all support A2A. So do other vendors, including Alation, Informatica and MongoDB.

However, with most enterprises still working to build and deploy individual agents rather than launch entire agentic AI ecosystems, A2A isn't yet as vital to users as MCP, according to Rajagopal.

"Today, most AI agents operate in walled gardens, unable to communicate or collaborate with agents from other platforms," he said.

But that is just the current state. Next year, interoperability will likely be a focus for many enterprises, necessitating more A2A adoption, Rajagopal continued.

"The next major frontier in enterprise AI will be interoperability, [requiring] the development of open standards and protocols that allow disparate AI agents to speak to one another," he said.

Menninger similarly noted that the popularity of MCP vs. A2A is a matter of timing, with many enterprises not yet at the stage when A2A is needed.

"We're still, as an industry, trying to figure out exactly what agentic AI is and how to properly build systems," he said. "We're [focused], as an industry, on developing agents, so the coordination of agents is a second-order effect. You have to have agents before coordinating them."

Meanwhile, there are ways MCP can be improved to more fully address the needs of enterprises building and deploying agents.

In particular, data and AI governance are still left up to individual organizations. If MCP included not only code that simplifies connecting agents with data sources but also capabilities that enable enterprises to easily add permissions and policies, it would further help AI development, according to Ni.

"MCP solved the tool integration problem, but it still lacks the depth of identity support, policy enforcement and … the shared semantics enterprises need to operationalize agents safely and at scale," he said. "Until MCP links users to agents … under governed policies, it will remain a strong developer standard but not an enterprise decisioning standard."

Agrawal similarly cited added governance as a way to improve MCP.

One way cybercriminals can attack enterprises is by replacing the JSON specifications that describe an agent's tools with malicious ones, so that when agents make data calls to inform decisions, they invoke tools that produce misinformed outputs. Governance capabilities that better secure retrieval-augmented generation pipelines and control how agents invoke tools are needed.
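One illustrative mitigation, not part of MCP itself, is to pin a fingerprint of each tool's specification at approval time and refuse to invoke any tool whose spec has since changed:

```python
# An illustrative mitigation sketch (not part of the MCP standard): record a
# hash of each tool's JSON specification when it is vetted, and block any
# invocation where the spec no longer matches, defeating silent spec swaps.
import hashlib
import json

def spec_fingerprint(tool_spec: dict) -> str:
    """Stable hash of a tool's JSON specification."""
    canonical = json.dumps(tool_spec, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Hashes recorded when each tool was approved; the value is a placeholder.
approved = {"count_orders": "3f9c..."}

def safe_to_invoke(name: str, tool_spec: dict) -> bool:
    """Allow a call only if the tool's current spec matches its vetted hash."""
    expected = approved.get(name)
    return expected is not None and spec_fingerprint(tool_spec) == expected
```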

"One key opportunity for MCP to improve is on security," Agrawal said. "I look at security as both authentication and authorization, so how you can enforce policies so you know who can invoke things [is critical]. … As a standard, this has to evolve, and there have to be more things so MCP is not vulnerable."

Eric Avidon is a senior news writer for Informa TechTarget and a journalist with more than 25 years of experience. He covers analytics and data management.
