
Data and AI governance must team up for AI to succeed

AI applications won't produce reliable results -- and could create compliance and business ethics risks -- without strong data governance processes underpinning them.

Data governance and AI governance are distinct functions, but governance leaders must tightly integrate them to ensure AI applications produce trustworthy results.

Speakers at Dataversity's Enterprise Data Governance Online 2026 conference said effective data governance prevents users from building AI models on a shaky foundation or misusing data in ways that create regulatory compliance and business ethics risks. As a result, they said, it's imperative that data governance teams play a central role in AI initiatives.

When data scientists and analysts design AI models, they need to know they're using the right data and doing so in the proper context, said Shannon Fuller, data governance lead at supermarket company Ahold Delhaize USA. He added that as an organization's use of AI tools and models expands, it exposes immature or ineffective data governance practices, increasing the potential business risks.

Theresa Ancick, owner of consulting firm Accura Business Services LLC, agreed that an AI governance program aiming to monitor and control AI deployments must "plug into data governance." Organizations that don't do so learn the hard way that AI models built on poorly governed data generate inaccurate outputs in testing and can't be put into production use, said Ancick, who joined Fuller and other speakers in a panel discussion on data and AI governance.

Eric Riz, president of eMark Consulting, echoed their sentiments in a separate session on governing data for AI and machine learning initiatives. Riz said data readiness is critical to AI success. Ensuring data is ready for planned AI uses requires not only high data quality, he noted, but also accountability for maintaining data sets and comprehensive data lineage documentation to provide context and explainability for AI results. Those are all core elements of data governance programs.

Ignore data governance for AI at your own peril

"Every organization I work with," Riz said, "is asking the same question: 'How fast can we get AI into production?'" In the rush to deploy AI tools and applications, data governance is often a secondary consideration. But he warned that ignoring it is even riskier in AI applications than in conventional analytics use cases.

"In traditional systems, bad data creates localized damage," Riz said. He cited wrong information in a particular dashboard or report as an example. In AI systems, such issues compound themselves, he said, pointing to model drift, a common occurrence in which AI models become less accurate over time.

"The most dangerous AI failures don't crash systems," Riz said. "They slowly become wrong while still sounding right." Ultimately, preventing model drift is a data governance issue more than a data science one, he added.

On the other hand, traditional data governance processes with static policies reviewed annually and enforced manually can't keep up with AI development, according to Riz. He said AI requires a "living governance" approach that includes continuous workflow monitoring, automated enforcement and predefined paths for escalating issues to data governance managers, data stewards and data owners.
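As a rough illustration of what "living governance" might look like in a pipeline, the sketch below runs automated policy checks on a data set and routes failures along a predefined escalation path. The check names, roles and data fields are hypothetical assumptions, not details from Riz's talk:

```python
# Hypothetical "living governance" sketch: automated checks run
# continuously in a pipeline, with failures escalated along a
# predefined path instead of waiting for an annual policy review.

ESCALATION_PATH = ["data_steward", "data_owner", "governance_manager"]

def run_policy_checks(dataset, checks):
    """Run automated policy checks; return names of failed checks."""
    return [name for name, check in checks.items() if not check(dataset)]

def escalate(failures, attempt=0):
    """Route unresolved failures to the next role in the path."""
    role = ESCALATION_PATH[min(attempt, len(ESCALATION_PATH) - 1)]
    return {"notify": role, "issues": failures}

# Illustrative checks on an unowned but masked data set.
checks = {
    "no_missing_owner": lambda d: d.get("owner") is not None,
    "pii_masked": lambda d: d.get("pii_masked", False),
}
failures = run_policy_checks({"owner": None, "pii_masked": True}, checks)
if failures:
    print(escalate(failures))  # routes to the data steward first
```

The point of the pattern is that enforcement and escalation happen automatically on every run, so policy violations surface in hours rather than at the next manual review.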

Enable rapid AI development in a controlled way

A common user concern is that both traditional data governance and AI governance processes will slow them down or prevent them from doing their jobs effectively. To assuage those concerns, Fuller said, data and AI governance leaders should enable data science teams to experiment with new AI models at a fast pace.

His team puts that into practice at Ahold Delhaize USA. "We give them the freedom to go figure things out -- ask questions, try different things [in data sets]," he said. "You have to allow for that kind of sandbox mentality."

Fuller added that the governance team focuses on helping users accomplish what they want to do, rather than adopting a just-say-no attitude and locking down AI development. "Sometimes, the answer is no or not right now," he said. "But if that is the answer, then we have to work with them to figure out a different path to get to where they're trying to go."

It's a balancing act, though. Fuller said governance teams need to establish a controlled environment for AI model development, with "tight guardrails" to avoid potential missteps in complying with AI and data privacy regulations.

For example, as users at Ahold Delhaize USA create experimental models that show potential business value, Fuller's team ensures that appropriate security and privacy controls are applied to the data being used before development proceeds. If a model incorporates multiple data sets, the team also confirms that they're intended to be used together. In addition, Fuller said governance leaders must define what he called their organization's "ethical red lines" on the use of data in AI applications and then ensure that users don't cross them.

Craig Stedman is an industry editor at TechTarget who creates in-depth packages of content on data technologies and processes.
