kras99 - stock.adobe.com

CISO playbook for securing AI in the enterprise

CISOs must partner with executive leadership to adopt a business-aligned AI security strategy that protects the organization while enabling responsible AI adoption.

Having an artificial intelligence (AI) security strategy protects an organization while enabling responsible AI adoption.

AI is an integral part of our everyday lives, whether we are aware of it or not. Computers make decisions that significantly affect our lives, such as driver assistance or ambient scribing in a doctor's office. AI is also transforming enterprise operations across all sectors. While AI may offer innovation and help with productivity, it also introduces security, compliance and reputational risks.  

As AI becomes embedded in more software, platforms and processes, chief information security officers (CISOs) must mature their cybersecurity programs to address AI-related risks, enabling the business to use these capabilities. CISOS must consult with business leaders to adopt or establish a risk framework for AI adoption, rather than taking an outright ban. Shadow AI, or the unknown use of AI programs by employees, also needs to be addressed, especially if there is a risk of sharing sensitive information.

AI initiatives must be deployed both securely and ethically. Failing to do so can result in regulatory penalties, lawsuits, data breaches and reputational damage that directly affect shareholder value. Conversely, organizations that invest in AI governance and transparency early are better positioned to drive innovation with confidence and earn stakeholder trust.

A CISO can also use tools to manage these AI risks. Here are some fundamental questions CISOs must answer to securely onboard AI in support of the business. Here are some key AI risk objectives to understand as a starting point to help you build a more comprehensive cybersecurity strategy addressing the inherent risks of AI.

Understanding enterprise AI security risks

As organizations increasingly rely on AI to drive innovation and boost efficiency, understanding the associated security risks becomes crucial. From statistical variations that influence decision-making accuracy to the challenges of data permanence affecting privacy, AI introduces complexities that conventional security measures may not adequately address. Moreover, the software supply chain for AI tools frequently lacks thorough auditing, leaving vulnerabilities susceptible to exploitation.

The lack of universally accepted standards, coupled with the intricate copyright implications surrounding AI-generated content, further exacerbates the risks, necessitating increased vigilance and proactive risk mitigation strategies. IBM's risk atlas is a valuable tool for categorizing the risks associated with AI adoption.

Key risks include the following:

Input risks

  • Data privacy/confidential information/intellectual property.
  • Poor data quality.
  • Data reclassification.

Output risks

Non-technical risks

  • Regulatory/legal compliance.
  • Reputation.
  • Generated content IP and ownership.

CISOs need to identify and prioritize AI risks for their enterprise and industry, enabling them to determine the order in which those risks are mitigated or managed. Otherwise, a CISO may be addressing an expensive and time-consuming capability that does little to address the enterprise risks at hand.

AI security tools and frameworks

Organizations can use a growing list of AI security tools and frameworks to address risks effectively. Frameworks such as the NIST AI Risk Management Framework guide the identification, assessment and mitigation of AI-related risks. These frameworks emphasize aspects such as explainability, bias detection and model performance. Additionally, technical tools, such as AI model monitoring platforms and adversarial testing suites, help ensure that AI systems are resilient against attacks and function as intended. Automated tools for data validation and model transparency are also critical in maintaining the integrity and security of AI implementations.

Learn the AI regulations specific to your state and industry before implementing any programs. These regulations are fluid, as neither organizations nor legislatures fully understand the depth to which AI will be implemented. Currently, there is no comprehensive federal regulation beyond executive orders issued by the Biden administration, which were subsequently rescinded by the Trump administration. As AI becomes embedded in an organization's core business capabilities, ensure management is consulting with the legal team to ensure compliance with industry, state, federal and international laws.

Finally, technical tools that your organization must consider as it embarks on the path toward AI consumption and/or development to safeguard AI systems against vulnerabilities, threats and misuse, include the following:

Model Context Protocol (MCPs)

Model context protocol (MCPs) are structured methodologies designed to effectively regulate and monitor AI systems. These protocols ensure that AI models and implementations align with organizational risk mitigation strategies and adhere to established security standards.

Core features of MCPs

MCPs function as a set of predefined rules, workflows and safeguards that oversee AI systems throughout their lifecycle. Here are some of the core features of MCPs:

  • Access control. MCPs enable detailed authorization mechanisms, ensuring that only designated personnel or systems can interact with specific AI functionalities or datasets.
  • Audit trails. Comprehensive logging capabilities within MCPs allow organizations to track AI system activities, providing transparency and accountability.
  • Model validation. MCPs ensure AI models are periodically validated against performance benchmarks and ethical standards.
  • Incident response. Protocols include predefined response measures to identify, analyze, and mitigate threats or anomalies within AI systems promptly.

Benefits of MCPs

The integration of MCPs into AI usage provides several advantages:

  • Enhanced security. By enforcing access restrictions and monitoring AI interactions, MCPs reduce the likelihood of unauthorized usage or data breaches.
  • Operational consistency. MCPs ensure that AI systems function reliably, minimizing interruptions caused by technical failures or security incidents.
  • Risk mitigation. MCPs proactively identify vulnerabilities within AI systems and implement safeguards to mitigate these risks.

Artificial intelligence security platforms (AISP)

AISPs are holistic tools designed to monitor, analyze and secure AI systems in real time. They serve as a comprehensive suite of technologies that address various aspects of AI security, including threat detection, operational transparency and compliance management.

Core Features of AISPs

Some core features of AISPs include the following:

  • Threat detection. AISPs use advanced algorithms to identify and neutralize malicious activities targeting AI systems, including adversarial attacks and data manipulation.
  • Model explainability. By providing insights into AI decision-making processes, AISPs enable organizations to ensure the ethical and transparent use of AI.
  • Compliance monitoring. AISPs include tools to verify that AI implementations conform to industry, state, federal and international regulations.
  • Integration capabilities. AISPs are often designed to integrate seamlessly with existing IT security infrastructures, enhancing their effectiveness within an organization's ecosystem.

Benefits of AISPs

AISPs offer a wide range of benefits for safeguarding AI usage, including the following:

  • Real-time protection. Continuous monitoring ensures that AI systems remain secure against emerging threats and vulnerabilities.
  • Improved trust. AISPs foster trust among users and stakeholders in AI systems by delivering explainability and transparency.
  • Regulatory compliance. AISPs streamline the process of adhering to complex AI regulations, reducing the risk of legal liabilities.

Best practices for securing AI in the enterprise

The best practice for securing AI in the enterprise is ensuring that the organization's cybersecurity program is mature in capabilities (people, processes and technology) as measured against frameworks such as the NIST Cybersecurity Framework (NIST CSF). Cybersecurity practices, such as asset management, patch and vulnerability management and data classification, are fundamental in securing AI.

Additionally, CISOs must understand the business objectives of AI use within their organization, including the specific AI models that will be used and the data that will be ingested. Data classification and data quality are crucial in ensuring that sensitive data, such as protected health information, is not inadvertently disclosed.

Knowing the organization's current cybersecurity gaps and weaknesses is a crucial first step in developing an AI security strategy. If the organization has significant weaknesses in security best practices, it is imperative that those are addressed as part of the strategy to securely onboard AI.

Establishing an AI governance board or committee consisting of business, IT, and cybersecurity leaders, as well as legal experts, can ensure the ethical, accountable and secure onboarding of AI into an enterprise.

Key takeaways for CISOs

Securing AI in the enterprise requires a multifaceted approach that integrates established frameworks, anticipates regulatory changes and uses tools such as MCPs and AISPs.

These practices collectively enable enterprises to deploy AI securely, ensuring operations are ethical, transparent, and compliant with emerging regulations. Here are some steps CISOs should consider when determining their AI security protocols.

  • Adopt established risk frameworks. Use the NIST AI RMF to systematically assess and mitigate AI risks. This ensures a standardized baseline for safety, reliability and accountability.
  • Monitor regulatory changes. Stay proactive in tracking laws, such as the EU AI Act, and state-specific regulations. Appoint specialized compliance teams or use AI-driven monitoring tools to simplify adherence.
  • Implement model governance protocols. Use the MCP to govern the development, deployment, and monitoring of AI models. Metadata, such as intended use cases and data provenance, reinforces accountability and reduces the risk of misuse.
  • Enhance security with platforms. Deploy AISPs for real time threat detection, compliance support and explainability. These platforms integrate into IT infrastructures to increase the enterprise's security posture.
  • Prioritize explainability and trust. Ensure AI systems are transparent and explainable to foster trust among users and stakeholders. Employees should understand which AI tools are permitted and how they can be used, which also creates employee accountability. Be sure to prioritize continuing communication on the dangers of AI misuse.
  • Establish an AI oversight committee. This committee should have cross-functional representation that includes areas such as legal, human resources, IT security, data and compliance. The committee should regularly report to the CISO and CIO on any AI risks, uses and mitigations.

John Doan is the senior director of cybersecurity advisory and cybersecurity domain architect for a world-renowned healthcare organization.

Next Steps

Generative AI security best practices to mitigate risks

Dig Deeper on Security operations and management