Data governance for AI requires a cross-functional approach
AI systems create risks that span data, security and model integrity. A cross-functional governance model distributes ownership without creating silos.
Traditional governance models assumed data teams could own the full risk picture. AI systems break that assumption by creating interlocking risks across data, security and model integrity that no single function can see.
AI systems are not just consumers of data. They transform data into models that drive automated decisions and, with agentic AI, actions executed autonomously. A biased data set gives rise to a biased model, and a misconfigured model results in flawed decisions. The risk chain extends well beyond what traditional data governance was designed to address.
Security governance has historically focused on infrastructure, access and policy. That scope is necessary, but insufficient for AI-specific threats such as model manipulation, prompt injection, data poisoning and inference attacks. AI teams, meanwhile, can optimize for better model performance but cannot by themselves mitigate data misuse, adversarial threats or legal exposure.
The expanded risk surface of AI systems
AI systems create risks across data, security and AI domains simultaneously. Data risks include training data bias, data poisoning, lineage opacity and downstream misuse through models. Security risks span model misuse, adversarial manipulation, unauthorized inference and supply-chain compromise. On the AI side, explainability gaps, fairness violations, performance decay, silent failure and unsafe autonomous behavior all fall outside traditional governance structures.
These risks are intertwined. A security breach can corrupt training data. A drifting model can silently degrade until it causes regulatory or reputational damage. No single function can see the full picture.
Governance must move from overseeing siloed artifacts to overseeing outcomes. Regulation increasingly requires governance across the AI lifecycle, and disjointed ownership won’t meet audit requirements. All roads lead to data, security and AI co-owning governance.
Expanded scope of governance
A modern governance operating model must expand scope along three key dimensions:
- Lifecycle coverage. AI systems change over time through data drift, retraining and evolving usage. Governance must encompass the full lifecycle: design, data sourcing and preparation, model development and validation, deployment and monitoring, incident response and remediation, and decommissioning.
- Asset coverage. Cataloging needs to extend beyond data sets to include data pipelines and feature stores, models and model versions, prompts, configurations and policies, embeddings, retrieval corpora used in RAG systems, and AI agents with their associated tools.
- Evidence and auditability. Enterprises must produce auditable proof of why a system was built, what data it uses, how it was tested, who approved it, how it is monitored, and what happens when it fails. That evidence must be generated by design, not assembled after an incident occurs.
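The asset coverage described above can be modeled as a minimal central registry. This sketch is illustrative only; the class name, field names and asset types below are assumptions, not a standard schema or a specific product's data model.

```python
# Minimal sketch of a central AI asset inventory entry. Field names and
# asset types are illustrative assumptions, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    asset_id: str
    asset_type: str          # e.g. "dataset", "pipeline", "model", "prompt", "agent"
    owner_team: str          # accountable owner: "data", "security" or "ai"
    lineage: list[str] = field(default_factory=list)   # upstream asset_ids
    approvals: dict[str, str] = field(default_factory=dict)  # control -> approver

# A simple in-memory registry keyed by asset_id.
registry: dict[str, AIAsset] = {}

model = AIAsset(
    asset_id="model-credit-v3",
    asset_type="model",
    owner_team="ai",
    lineage=["dataset-loans-2024"],            # links model back to its training data
    approvals={"training_data": "data-steward"},
)
registry[model.asset_id] = model
```

Keeping lineage and approvals on the asset record itself is one way to generate audit evidence by design rather than reconstructing it after an incident.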
Role clarity without silos
Co-ownership does not have to mean ambiguity or diluted accountability. Roles, responsibilities and ownership domains must be clearly defined. The following delineation is practical and can serve as a starting point for enterprises.
Data teams own the following:
- Data quality, stewardship, lineage and cataloging.
- Legal and ethical data use.
- Training data approval and ongoing data drift monitoring.
Security teams own the following:
- Access control, identity and infrastructure security.
- Threat modeling and adversarial defense.
- Anomaly detection and incident response for AI systems.
AI teams own the following:
- Model development, validation and performance.
- Explainability, fairness testing and drift analysis.
- Safe deployment and controlled autonomy.
In practice, collaboration between functions is required. Training data approval involves the data and AI teams. Model deployment involves the AI and security teams. Incident response involves all three. The co-owned operating model assigns a single accountable owner per decision while mandating cross-functional consultation at defined control points. This avoids two failure modes: siloed veto power and collective neglect.
Building a shared governance operating model
The envisioned shared operating model has four defining characteristics.
Cross-functional decision rights
Policy is set jointly by a cross-functional governance body. Enforcement is executed by domain teams through embedded controls. Exceptions require explicit risk acceptance by senior leaders. If an AI system presents unacceptable risk, designated leaders can halt deployment or operation without negotiation.
Risk-tiered governance
Not all AI systems warrant the same level of scrutiny. AI use cases are classified as low, medium or high risk based on impact, autonomy, data sensitivity and regulatory exposure. High-risk systems require formal review, documentation, validation and continuous monitoring. Low-risk systems flow through automated guardrails. This balances enterprise agility against the risk of systemic harm.
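A risk-tiering rule of this kind can be sketched as a simple scoring function over the four factors named above. The 0-2 scoring scale and the tier thresholds here are illustrative assumptions; a real program would calibrate them to its own risk appetite.

```python
# Hypothetical risk-tier classifier. Each factor is scored 0 (low) to
# 2 (high); the thresholds below are illustrative, not a standard.

def classify_risk_tier(impact: int, autonomy: int,
                       data_sensitivity: int, regulatory_exposure: int) -> str:
    """Sum the factor scores and map the total to a governance tier."""
    score = impact + autonomy + data_sensitivity + regulatory_exposure
    if score >= 6:
        return "high"    # formal review, validation, continuous monitoring
    if score >= 3:
        return "medium"
    return "low"         # automated guardrails only

# Example: an autonomous agent acting on regulated personal data.
tier = classify_risk_tier(impact=2, autonomy=2,
                          data_sensitivity=2, regulatory_exposure=2)
print(tier)  # prints "high"
```

An additive score is only one option; some programs instead treat any single high-rated factor (for example, full autonomy) as sufficient to force the high tier.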
Integrated workflows
Oversight can be embedded into development and deployment pipelines through policy-as-code. Models cannot deploy unless pre-agreed controls are satisfied:
- Block deployment if required tests fail
- Require documentation before release
- Prevent the use of unapproved data
- Enforce monitoring configuration
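The four gates above can be expressed as a minimal policy-as-code check that a deployment pipeline runs before release. The control names and the shape of the deployment record are hypothetical, not any specific vendor's API.

```python
# Minimal policy-as-code deployment gate. Control names and the record
# shape are illustrative assumptions.

REQUIRED_CONTROLS = (
    "tests_passed",            # block deployment if required tests fail
    "documentation_complete",  # require documentation before release
    "data_approved",           # prevent the use of unapproved data
    "monitoring_configured",   # enforce monitoring configuration
)

def deployment_allowed(record: dict) -> tuple[bool, list[str]]:
    """Return whether deployment may proceed and which controls failed."""
    failures = [c for c in REQUIRED_CONTROLS if not record.get(c, False)]
    return (not failures, failures)

candidate = {
    "tests_passed": True,
    "documentation_complete": True,
    "data_approved": False,     # unapproved training data
    "monitoring_configured": True,
}
allowed, failures = deployment_allowed(candidate)
print(allowed, failures)  # prints False ['data_approved']
```

In practice this kind of check would run as a required step in the CI/CD pipeline, so a failed control blocks the release automatically rather than relying on manual review.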
Shared KPIs
Data, security and AI teams must share outcome-based metrics: governance coverage, incident rates, time-to-approval, remediation effectiveness, and compliance status. Incentives align around safe delivery, not functional optimization.
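One of these shared metrics, governance coverage, can be computed directly from the asset inventory as the fraction of AI assets that have passed all required controls. The record shape below is an illustrative assumption.

```python
# Sketch of one shared KPI: governance coverage. Assumes each inventoried
# asset carries a "controls_passed" flag; the record shape is hypothetical.

def governance_coverage(assets: list[dict]) -> float:
    """Fraction of inventoried AI assets that satisfy all required controls."""
    if not assets:
        return 0.0
    governed = sum(1 for a in assets if a.get("controls_passed"))
    return governed / len(assets)

assets = [
    {"name": "model-a", "controls_passed": True},
    {"name": "model-b", "controls_passed": False},
    {"name": "rag-corpus", "controls_passed": True},
]
print(governance_coverage(assets))  # prints 0.6666666666666666
```

Because the same number is reported to data, security and AI leadership alike, no single team can improve its standing by optimizing a local metric at the others' expense.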
How to operationalize the model
To put this model into practice, the governance tool set must support data and model lineage tracking, a central AI asset inventory, validation and testing frameworks, monitoring and alerting infrastructure, policy enforcement mechanisms, evidence repositories and dashboards. Few vendors cover every requirement, so enterprises should evaluate their existing stack for gaps before investing in new platforms.
Co-owned governance in practice
The co-owned model looks different depending on the AI use case, but the coordination pattern remains the same.
RAG. When a retrieval corpus includes misclassified documents, data teams curate content, security teams enforce access controls, and AI teams enforce source grounding and reduce hallucinations.
Traditional ML. Bias seeping in through proxy variables is a governance failure: data teams detect representational issues, AI teams test models and mitigate bias, and security teams ensure model integrity.
Agentic AI. When an autonomous IT agent acts outside safety limits, security teams constrain tool access, data teams control data access and AI teams build guardrails.
Implementing the new governance model
The shift to co-owned governance can happen in phases. Timelines will vary by enterprise context, but this roadmap provides a starting point.
- 90 days. Establish structure: governance council, asset inventory, interim policies, pilot enforcement and shared visibility.
- Six months. Institutionalize processes: risk tiering, integrated workflows, monitoring critical systems and refined documentation.
- One year. Achieve maturity: governance embedded in lifecycle, automated evidence generation, shared KPIs tied to performance, and readiness for audit and regulatory scrutiny.
Governance must move from a single-team responsibility to shared ownership. Data, security and AI each control distinct failure modes, but none can independently manage the full risk spectrum of AI systems.
Kashyap Kompella, founder of RPA2AI Research, is an AI industry analyst and advisor to leading companies across the U.S., Europe and the Asia-Pacific region. Kashyap is the co-author of three books, Practical Artificial Intelligence, Artificial Intelligence for Lawyers and AI Governance and Regulation.