
What CISOs need to know about AI governance frameworks

AI offers business benefits but poses legal, ethical and reputational risks. Governance programs manage these risks while ensuring responsible use and regulatory compliance.

CISOs understand that AI is rapidly transforming how companies do business, but the technology itself poses significant risks. Left unmanaged, these dangers can expose organizations to legal, ethical and reputational harm.

Among those risks are AI systems that inadvertently perpetuate bias, infringe on privacy or produce unpredictable outcomes that undermine stakeholder trust. CISOs can combat these hazards by establishing a comprehensive AI governance program. Designed properly, these programs can identify, assess and control risks while ensuring AI technologies are used responsibly, transparently and in alignment with evolving regulatory requirements. A careful AI governance approach enables companies to harness AI's full potential while safeguarding their operations, customers and brand.

Principles and components of an AI governance program

A reliable AI governance strategy is built on three important components:

  1. Manage risks. Identify and address AI-specific risks, including bias, privacy violations, safety issues and cybersecurity threats, to reduce the chance of harmful outcomes and costly failures. Assess the risks posed by third parties and partners as well. Risk management also covers the positive side of risk -- a.k.a. benefits. Companies would never add AI to already working systems if they couldn't clearly cite the benefits.
  2. Build trust. Demonstrate to customers, partners, regulators and investors that the organization prioritizes ethical, transparent and fair AI practices, thus strengthening brand reputation and stakeholder relationships.
  3. Enhance quality and reliability. Establish consistent standards for AI development, deployment and monitoring that meet compliance regulations. The goal is strong, reliable, maintainable and compliant AI systems.

Regulatory compliance requirements

Establishing an AI governance program yields clear risk management benefits, but compliance obligations are another driver. The following laws and regulations could apply to your organization.

United States

  • State-level regulations. New York City Local Law 144 requires bias audits for automated employment decision tools. The California Privacy Rights Act covers profiling and automated decisions involving personal data.
  • The Federal Trade Commission Act and the Fair Credit Reporting Act. The FTC Act applies to unfair or deceptive practices in automated decision-making, while the FCRA governs automated decisions based on consumer reports, such as those used in credit and employment screening.

European Union

  • AI Act. Applies a risk-based approach to AI -- prohibited, high-risk, limited-risk, minimal-risk -- with mandatory requirements for high-risk AI systems. Includes risk management, data governance, technical documentation, human oversight and post-market monitoring. It's the world's first comprehensive AI law.
  • GDPR. Applies if AI processes personal data. Relevant for data minimization, fairness, transparency, explainability and data subject rights -- e.g., the protections around solely automated decision-making under Article 22.
  • Digital Services Act and Digital Markets Act. Although not AI-specific, these regulations apply transparency and accountability obligations relevant to AI systems in online platforms.

Common AI governance frameworks

Standards and best practices are racing to keep pace with AI's rapid development, and a number of them have emerged within the last few years. The following frameworks help organizations achieve the three core components of an AI governance program:

  • OECD AI Principles (2019). The Organisation for Economic Co-operation and Development AI Principles, adopted in 2019 and updated in 2024, emphasize transparency, accountability and human-centric values in AI systems. The international standard has been endorsed by 47 countries.
  • ISO/IEC 42001:2023 Information technology -- Artificial intelligence -- Management system. This standard outlines requirements for establishing, implementing, maintaining and continually improving an AI management system. It is the first international AI management system standard.
  • NIST AI Risk Management Framework 1.0 (2023). NIST's AI RMF provides a comprehensive approach to identify, measure, manage and monitor AI risks through four core functions: govern, map, measure and manage. It is widely adopted throughout public and private sectors.
  • IEEE 7000 series. This series of standards focuses on ethical and governance considerations for AI -- e.g., IEEE 7001-2021 for transparency, IEEE 7003-2024 for algorithmic bias.

How to implement an AI governance program

There are several ways to establish an AI governance program and a number of steps to implement it. For our purposes, we'll use NIST Special Publication 800-221A as a foundational AI governance framework. The report, "Information and Communications Technology (ICT) Risk Outcomes: Integrating ICT Risk Management Programs with the Enterprise Risk Portfolio," might seem like a daunting mouthful, but it's actually a simple model, much like the NIST Cybersecurity Framework, Privacy Framework and AI RMF, that covers ICT risk from a more abstract perspective. Its risk outcomes help organizations get started with an AI governance initiative. Note: I've placed these outcomes slightly out of order to reflect my priorities.

The two main functions of NIST SP 800-221A are govern and manage. Within each function are categories, similar to those in the frameworks mentioned above. Those familiar with the NIST IR 8286 series will notice the overlap and commonalities in the manage function.

NIST SP 800-221A: Govern

  • Roles and responsibilities. Establish a single role for AI governance. Other roles might fall under this umbrella, but a single role with responsibility and authority over AI governance is key to accountability.
  • Context. Create clear performance goals for AI implementations, informed by organizational missions, goals and objectives. Tying into these enterprise-level inputs enables those overseeing AI governance to make strategically sound decisions.
  • Benchmarking. Create a risk register. A risk register -- described in the NIST IR 8286 series -- serves as a single point of reference for AI risk management. Track both positive risks (benefits) and negative risks. A minimal sketch of a risk register follows this list.
  • Policy. Create AI policies informed by risks (positive and negative). For example, institute a training policy in which employees agree not to use AI tools and systems until they are trained.
  • Communication. Establish clear lines of communication. These can be internal or external, such as for incident response or breach notification, and they can connect with other departments and teams, such as privacy and cybersecurity. Create templates for individual AI risk scenarios, response communications and other recurring issues.
  • Adjustment. Reevaluate the risk register as conditions shift. These changes include incidents, reorganizations, mission changes, market fluctuations, technology shifts or new threats.
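
To make the benchmarking outcome concrete, here is a minimal sketch of a risk register in Python. The field names, the 1-5 scales and the helper methods are illustrative assumptions, not prescriptions from NIST IR 8286 or SP 800-221A; adapt them to your organization's conventions.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskType(Enum):
    NEGATIVE = "negative"  # potential harm, e.g., bias or a privacy violation
    POSITIVE = "positive"  # potential benefit, e.g., a productivity gain


class Response(Enum):
    ACCEPT = "accept"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    AVOID = "avoid"


@dataclass
class RiskEntry:
    """One row in the AI risk register. All fields are illustrative."""
    risk_id: str
    description: str
    risk_type: RiskType
    likelihood: int  # assumed 1-5 ordinal scale
    impact: int      # assumed 1-5 ordinal scale
    owner: str       # the single accountable role from the govern function
    response: Response = Response.ACCEPT
    status: str = "open"


@dataclass
class RiskRegister:
    """Single point of reference for AI risks, positive and negative."""
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def open_risks(self, risk_type: RiskType) -> list[RiskEntry]:
        """Filter open entries for review at the regular AI risk meeting."""
        return [e for e in self.entries
                if e.status == "open" and e.risk_type is risk_type]
```

Keeping positive and negative risks in the same structure keeps AI's benefits visible alongside its hazards, which supports the benefit-tracking point made under the governance components above.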

NIST SP 800-221A: Manage

  • Risk identification. Establish regular AI risk meetings. More sophisticated risk identification methods will emerge, but a firm schedule of meetings dedicated to discussing AI risks is more than enough to start. Use the risks identified in NIST AI 600-1, "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile," to get started.
  • Risk analysis. Analyze each risk in the risk register and determine its impact on the organization.
  • Risk prioritization. Prioritize risks in the risk register. Some organizations rank risks by their impact; others rely on different strategies. Use the organization's specific performance goals to inform the prioritization strategy; a simple scoring sketch follows this list.
  • Risk response. Determine an action plan. The risk response could be as simple as accepting the risk and moving forward. Or, it might be more comprehensive and require a number of subject matter experts and stakeholders. Each risk must have a clear response strategy.
  • Risk monitoring, evaluation and adjustment. Monitor risks in the risk register. At the regular AI risk meetings, discuss the progress of each risk response, evaluate its effectiveness and adjust the response or response type altogether. The key is to constantly discuss risks.
  • Risk communication. Communicate risk status up the chain. Technical details might not be necessary; a simple status of "in progress" or "complete" could suffice. Ask for help or resources when facing a bottleneck in time or technology. These discussions should be easy to prioritize and resolve if risks are appropriately tied to enterprise strategy.
  • Risk improvement. Learn lessons from others. Some organizations may never see a risk realized; others will not be so fortunate. When an incident yields applicable lessons, evaluate their relevance to the organization and adjust the risk response or strategy as appropriate.
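
The following sketch illustrates the analysis, prioritization and monitoring steps in miniature. Scoring exposure as impact times likelihood is a common heuristic, not a NIST SP 800-221A requirement, and the example risks and scales are assumptions; substitute a scheme tied to your organization's performance goals.

```python
# A rough sketch of risk prioritization for the regular AI risk meeting.
# Exposure = impact x likelihood is a common heuristic, not a NIST mandate;
# the 1-5 scales and example risks below are illustrative assumptions.

risks = [
    {"id": "AI-001", "desc": "Bias in hiring model", "impact": 5,
     "likelihood": 3, "response": "mitigate", "status": "in progress"},
    {"id": "AI-002", "desc": "Prompt injection against support chatbot",
     "impact": 4, "likelihood": 4, "response": "mitigate", "status": "open"},
    {"id": "AI-003", "desc": "Vendor model deprecation", "impact": 2,
     "likelihood": 2, "response": "accept", "status": "open"},
]


def exposure(risk: dict) -> int:
    """Score one risk; swap in a scheme informed by your performance goals."""
    return risk["impact"] * risk["likelihood"]


def meeting_agenda(risks: list[dict]) -> list[str]:
    """Rank risks for discussion and produce a simple status line for each."""
    ranked = sorted(risks, key=exposure, reverse=True)
    return [
        f"{r['id']} (exposure {exposure(r)}): {r['desc']} -- "
        f"{r['response']}, {r['status']}"
        for r in ranked
    ]


for line in meeting_agenda(risks):
    print(line)
```

The status lines double as the upward communication described above: leadership sees priority and progress without technical detail.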

Future-proofing AI governance programs

While there are no crystal balls in any technology discipline, AI is clearly a rapidly evolving field that has exploded in capital expenditure, model size and capability. Organizations should future-proof their AI governance frameworks so they remain effective as conditions change. In practice, that means using the framework to accelerate the risk management cycle: the sooner a risk is identified, the sooner it can be mitigated.

Empower the AI governance lead with the authority to make decisions that affect many systems to ensure the organization can continue to reap the benefits of AI. Continue to collectively evaluate emerging AI risks to keep issues at bay and avoid making headlines for an incident.

Conclusion: Effective AI management unlocks benefits

AI is rapidly reshaping the competitive landscape. Establishing a strong AI governance program is no longer optional but a strategic imperative for CISOs.

Putting an effective governance framework and program in place helps organizations unlock the transformative benefits of AI with confidence, ensure compliance with rapidly evolving regulations and build trust with customers, employees and stakeholders.

Responsible AI governance will not just protect organizations from emerging risks, but also position them for long-term success in a future defined by ethical, transparent and human-centered innovation.

Matthew Smith is a vCISO and management consultant specializing in cybersecurity risk management and AI.
