Cisco-Splunk strategy shift unveiled with Data Fabric
Cisco Data Fabric emphasizes bringing Splunk analytics to data where it lives rather than moving data to a central ingestion point, and it will add more third-party data sources, such as Snowflake.
Cisco and Splunk revealed a fresh strategic direction this week to better support enterprise AI under the Cisco Data Fabric architecture.
A data fabric architecture provides a unified view of data across different systems, with each data source virtualized so data doesn't need to be moved into a central repository for analysis. Splunk has added data fabric features such as federated search and analytics in recent years, including support for federated search on data stored in Amazon S3 object storage. Last year, it also added a new Pipeline Builder and Ingest Processor for Splunk Cloud, along with federated analytics, which improved data management in Splunk and made it more inclusive of third-party data sources.
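To make the federation idea concrete, here's a minimal Python sketch of the general pattern, assuming hypothetical source adapters (the function names below are illustrative stand-ins, not Splunk APIs): the query is pushed down to each virtualized source in parallel, and only the results are merged, so raw data never has to be copied into a central index.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical source adapters -- stand-ins for connectors to an S3
# bucket, a Splunk index, a Snowflake table, etc. None of these names
# come from Splunk's actual APIs.
def search_s3(query):
    return [{"source": "s3", "event": f"matched {query}"}]

def search_splunk_index(query):
    return [{"source": "splunk", "event": f"matched {query}"}]

def federated_search(query, sources):
    """Push the query down to each source in parallel and merge the
    results, rather than ingesting raw data into a central repository."""
    with ThreadPoolExecutor() as pool:
        result_sets = pool.map(lambda fn: fn(query), sources)
    return [event for results in result_sets for event in results]

print(federated_search("status=500", [search_s3, search_splunk_index]))
```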
This week, Cisco took those federation efforts a step further with the unveiling of the Cisco Data Fabric, an architectural rework of Splunk Enterprise and Splunk Cloud Platform that will guide product releases into 2026. The new architecture will include further integrations between observability and security tools under the Cisco umbrella, including Splunk, AppDynamics, Isovalent and ThousandEyes.
Cisco Data Fabric isn't a new, separate product. Instead, it will be delivered to existing customers of Splunk Enterprise and Splunk Cloud Platform over the next year, according to a Splunk spokesperson. There will be changes behind the scenes that add connective tissue between data sets and analytical tools, such as automated field extraction that supports data discovery by detecting format drift, and self-healing pipelines that eliminate cumbersome work with regular expressions during setup. Splunk will also further unify analytics across machine and business data.
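As a rough illustration of what format-drift detection might involve (this sketch is conceptual and does not reflect Splunk's implementation), consider a pipeline whose fixed regular expression stops matching when a log format changes, and which falls back to a looser extraction rather than silently dropping events:

```python
import re

# Expected log shape: "<timestamp> <level> <message>".
PATTERN = re.compile(r"(?P<ts>\S+) (?P<level>\w+) (?P<msg>.*)")

def extract_fields(line):
    match = PATTERN.match(line)
    if match:
        return match.groupdict()
    # Drift detected: the line no longer fits the expected shape.
    # Fall back to key=value scraping and flag the event for review
    # instead of failing outright.
    fields = dict(re.findall(r"(\w+)=(\S+)", line))
    fields["_drift"] = True
    return fields

print(extract_fields("2025-10-08T12:00:00Z ERROR disk full"))
print(extract_fields("ts=2025-10-08 level=ERROR msg=disk_full"))  # drifted format
```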
While the new overall direction for Splunk and other Cisco properties was made clear, along with some aspects of the planned product integrations, much of what was discussed this week is at an early preview stage, and practical questions, such as the total operating cost of the new architecture, remain unanswered.
For one large Splunk customer, it remains to be seen whether the potential value of Cisco Data Fabric will outweigh the work required to implement it.
"Each tool works fine on its own, but when combined, you get faster insights and less time to value," said Steve Koelpin, principal AI observability engineer at a Fortune 50 company and a member of the SplunkTrust user community. "AppDynamics and Splunk can [already] share data through APIs or connectors, but there isn't a true native correlation layer. Data Fabric closes that gap by stitching telemetry from AppDynamics, Cisco infrastructure and external tools into one correlated fabric."
Data fabric expands federation focus
Cisco Data Fabric represents a further departure from Splunk's early roots in enterprise log management, which was predicated on ingesting and indexing data into a centrally managed system.
"All the data being ingested into Splunk is not a practical idea," said Kamal Hathi, senior vice president and general management of Cisco's Splunk business unit, during a press briefing last week. "And so what we're talking about is taking Splunk to the data, going where the data lives. Instead of trying to build this giant data lake, we're talking about going to these little ponds and puddles of data and combining it into one federated, distributed view."
That said, there will also be a new Splunk Machine Data Lake, Hathi said.
"It's a virtual lake, not a physical one, that spans all these federated sources, but it takes all the work we do in search and analytics -- the schematization, the correlations -- and creates persistent catalogs that … you can start using for AI processing," he said.
For example, "maybe you're a manufacturing company and you want to know what will happen over the next three days if the temperature outside the factory goes up by a degree," Hathi said. "It's very hard to correlate these kinds of things now. Or maybe you want to know what the meaning of, you know, a 3% cash rate increase overall on availability is. One is performance, one is availability, but they may be related … We can start combining these multivariate signals into this new class of AI."
Hathi said Cisco Data Fabric will also gather machine data from edge and network devices via Isovalent, including ambient sensors used in manufacturing, along with business context through a new partnership with Snowflake, to feed a broader set of information into new generative AI foundation models. These will include a new open-weight time series foundation model planned for release on Hugging Face in November.
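Once such a model is published, fetching its open weights from Hugging Face would typically look like the sketch below; the repo name is a placeholder, since Splunk had not announced one at the time of writing.

```python
from huggingface_hub import snapshot_download

# "splunk/time-series-fm" is a hypothetical placeholder -- the actual
# repo name for the open-weight model had not been published yet.
local_path = snapshot_download(repo_id="splunk/time-series-fm")
print(f"Model weights downloaded to {local_path}")
```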
Industry's AI gold rush intensifies
Observability and security competitor Datadog already has a new time-series model available in open source, and Dynatrace introduced its Grail data lakehouse in 2022, along with a data pipeline service in 2024.
But Cisco's large customer base in enterprise networking, where organizations generate high volumes of machine data, could offer it a unique advantage, according to industry analysts.
"The Cisco Data Fabric provides Splunk with access to machine data from all kinds of network equipment that its competitors cannot access, as a lot of that data never leaves its device for cost or compliance reasons," said Torsten Volk, an analyst with Enterprise Strategy Group, now part of Omdia. "Extracting signals from all of this data through edge processing and combining them with traditional operations data could help unlock signals that you could not otherwise get through centralized processing."
Splunk also has competitors in edge data processing and federated search, including longtime nemesis Cribl, as well as Elastic. But plans to bring the Cisco AI Canvas demonstrated at Cisco Live in June to Cisco Data Fabric by 2026 could turn enterprise heads, according to one analyst.
"AI Canvas is the best AIOps interface I've yet seen," said Steven Dickens, CEO and principal analyst at HyperFrame Research. "Mostly, what I liked about the AI Canvas piece was this multiplayer mode, where you build a canvas with [colleagues], and all of you look at things together … which [follows] how a major incident might happen -- a small ad hoc team of a network person, an observability person, maybe a server person, all come together to examine a problem for three or four hours to figure it out. Then that team dissolves, and people go back to their single-player mode."
Beth Pariseau, a senior news writer for Informa TechTarget, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.