
Data domain ownership, data mesh chart path to AI-ready data

That AI initiative won't get off the ground without timely, reliable data. Bringing technology and team practices into sync reduces delays and boosts data readiness.

Centralized data management teams have struggled to keep up with AI's insatiable appetite for data, but shifting data ownership closer to the business can spread the work and better align it with organizational goals.

A data mesh architecture with federated governance supports that shift by redefining who owns data and how standards are enforced. Rather than funneling all data management and governance work through a single team, responsibility moves to separate domain teams that build and maintain data products in departments and business units. Meanwhile, the federated governance platform applies company-wide rules for quality, security and interoperability. This model reduces delays, improves confidence in shared data and supports AI initiatives, while future-proofing the organization for emerging requirements, such as the EU AI Act.

Why AI exposes limits in traditional data management

Enterprises committed to centralized data management are increasingly finding that the approach now works against them, hindering business users who need timely access to data.

They've given it a good shot. Over the years, companies have rolled out an array of data warehouses, data lakes and now data lakehouses, both on-premises and in the cloud, but rapidly growing data volumes have swamped centralized organizations. Central data teams have become chokepoints as sales, marketing, finance and other departments clamor for more and better data.

The rise of AI only exacerbates the challenges, with data issues surfacing as a key obstacle to adoption. Gartner's 2025 AI Hype Cycle report said that 57% of surveyed organizations believe their data is not AI-ready, noting that leaders "must evolve their data management practices" to get there.

Against that backdrop, some enterprises are moving toward a data domain ownership model, in which individual departments own and maintain their data themselves. Centralized standards don't disappear in a data mesh, however. Federated governance sets enterprise-wide guardrails using common policies and automated enforcement.

Thoughtworks, a technology consultancy, introduced the data mesh concept in 2019. In the AI age, this model is vying with centralized data management for enterprise mindshare.

"The traditional approach is, 'Boil the ocean. Let's bring all our enterprise data and get it AI-ready.' It sounds great, but it isn't achievable," said John Spens, vice president of data modernization at Thoughtworks.

Alexey Utkin, head of the data and analytics lab at DataArt, a software engineering firm, said AI is exposing the limits of centralized methods.

"AI use cases move fast," he said in an email. "Organizations expect to deliver them in days or weeks, not months. When your data architecture relies on bringing everything to one centralized place before it can be used, that architecture becomes a bottleneck."

Finding the middle ground between control and sprawl

There's a broad data management spectrum in many organizations. At one end, Spens said, a centralized team endlessly toils to create completely interoperable data; at the other, spreadsheet jockeys work on their own to assemble ungoverned "point solutions" that support only particular parts of the business and lack interoperability.

"Data mesh tries to split that difference," he said.

Creating data products, a core component of data mesh, balances data domain ownership and enterprise standards. The idea is for domain owners to build data products that can be readily used by business teams across the organization. Because the products are interoperable and consistently structured, end users can make apples-to-apples comparisons.

A self-service data platform is central to this process. Spens said this "workbench" provides the templates and tooling for governance components, such as policy-as-code and automated enforcement. As a result, independent teams can produce compatible products, he noted.

"You are providing a paved path for doing the right thing," Spens said.

Thomas Squeo, CTO for the Americas at Thoughtworks, described data products as API-accessible data assets. At the API layer, a contract defines the data product's attributes, such as its semantics, quality checks, change rules and service-level objectives. The API also provides rules for using data products, including access methods and controls and the format for requests and responses, Spens added.
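The contract Squeo describes can be pictured as a small machine-readable document plus a check that responses honor it. The sketch below is a hypothetical example, not a real platform's format: the product name, fields and SLO values are invented to show the kinds of attributes a contract might carry (schema semantics, quality checks, change rules, service-level objectives and access rules).

```python
# Hypothetical data contract for an API-accessible data product.
# All names, endpoints and SLO values are illustrative assumptions.

contract = {
    "product": "finance.invoices",
    "version": "1.2.0",
    "schema": {"invoice_id": "string", "amount": "decimal", "issued_at": "date"},
    "quality_checks": ["invoice_id is unique", "amount >= 0"],
    "change_rules": "additive changes only within a major version",
    "slo": {"availability": "99.9%", "freshness": "1 hour"},
    "access": {
        "method": "GET /v1/data-products/finance.invoices",
        "auth": "OAuth2 client credentials",
    },
}


def conforms(contract: dict, row: dict) -> bool:
    """Check that a returned row carries exactly the fields the contract promises."""
    return set(row) == set(contract["schema"])


row = {"invoice_id": "A-100", "amount": "12.50", "issued_at": "2025-01-31"}
print(conforms(contract, row))  # True
```

Because the contract lives at the API layer, consumers in other domains can rely on it without knowing how the owning team stores or produces the underlying data.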

Utkin said the organizational shift to implement data mesh is often harder than the technical challenge of delivering a centralized data platform. Sales, investor relations and other business teams traditionally haven't been responsible for data products, he said.

"For most business domains, building and owning data products is fundamentally new," he said. "This is a significant cultural change."

A new operating model for AI-era data

A decentralized data mesh strengthens the data foundation for AI initiatives and helps organizations prepare for requirements, such as the EU AI Act, which applies in phases, with most provisions taking effect in August 2026.

 "There's a strong structural alignment between what the EU AI Act demands and what a data mesh approach naturally enables," Utkin said.

The act's requirements for high-risk AI systems include data quality, provenance, bias detection, transparency, documentation and ongoing monitoring, he said, noting that a well-executed data mesh can make those capabilities foundational.

Squeo said data mesh also offers governance, risk and compliance controls, which are useful regardless of the regulation.

"What data mesh brings to the table is governance evidence," he said. "Giving evidence is the thing that all of these regulations require."

Steps to accomplish data domain ownership and data mesh

Enterprises ready to pursue domain ownership and data mesh should begin with a pilot project. In its consulting work, Thoughtworks often uses a "show, shift and scale" approach.

The show phase involves launching a pilot to demonstrate "proof of value" for a particular use case, Spens said. The pilot must take place in an isolated data space to avoid affecting the rest of the organization.

"Because you're about to change not just your architecture but also your processes, it's best to create that little sandbox," Spens said.

Proof-of-value initiatives should be specific and tied to a business outcome, he noted. For example, in life sciences, a use case could consolidate data from multiple systems to analyze trial diversity. Thoughtworks' goal is to deliver the proof of value within 90 days.

A successful pilot builds momentum for the shift phase, where the one-off becomes the standard. This phase includes a rollout plan across the enterprise.

Change management becomes critical at this point, Spens said. Businesses must clarify employees' new data roles and responsibilities and how teams will collaborate. Effective change management also supports the scale phase, which requires company-wide buy-in as organizations extend data mesh across multiple domains.

"Plan your change effectively and invest in change management to make sure that people adopt it effectively," Spens advised.

John Moore is a freelance writer who has covered business and technology topics for 40 years. He focuses on enterprise IT strategy, AI adoption, data management and partner ecosystems.
