
https://www.techtarget.com/searchsecurity/feature/CISO-playbook-for-securing-AI-in-the-enterprise

CISO playbook for securing AI in the enterprise

By John Doan

Having an artificial intelligence (AI) security strategy protects an organization while enabling responsible AI adoption.

AI is an integral part of our everyday lives, whether we are aware of it or not. AI systems already make decisions that significantly affect our lives, from driver assistance to ambient scribing in a doctor's office. AI is also transforming enterprise operations across all sectors. While AI may offer innovation and help with productivity, it also introduces security, compliance and reputational risks.

As AI becomes embedded in more software, platforms and processes, chief information security officers (CISOs) must mature their cybersecurity programs to address AI-related risks while enabling the business to use these capabilities. CISOs must consult with business leaders to adopt or establish a risk framework for AI adoption rather than imposing an outright ban. Shadow AI, the unsanctioned use of AI tools by employees, must also be addressed, especially when it risks exposing sensitive information.

AI initiatives must be deployed both securely and ethically. Failing to do so can result in regulatory penalties, lawsuits, data breaches and reputational damage that directly affect shareholder value. Conversely, organizations that invest in AI governance and transparency early are better positioned to drive innovation with confidence and earn stakeholder trust.

A CISO can also use tools to manage these AI risks. What follows are the fundamental questions CISOs must answer to onboard AI securely in support of the business, along with key AI risk concepts that serve as a starting point for building a cybersecurity strategy that addresses AI's inherent risks.

Understanding enterprise AI security risks

As organizations increasingly rely on AI to drive innovation and boost efficiency, understanding the associated security risks becomes crucial. From statistical variations that influence decision-making accuracy to the challenges of data permanence affecting privacy, AI introduces complexities that conventional security measures may not adequately address. Moreover, the software supply chain for AI tools frequently lacks thorough auditing, leaving vulnerabilities susceptible to exploitation.

The lack of universally accepted standards, coupled with the intricate copyright implications surrounding AI-generated content, further exacerbates the risks, necessitating increased vigilance and proactive risk mitigation strategies. IBM's risk atlas is a valuable tool for categorizing the risks associated with AI adoption.

Key risks include the following:

Input risks, such as poisoning of training data and prompt injection at inference time

Output risks, such as hallucinated or biased model output being acted on without review

Non-technical risks, such as legal, compliance and governance exposure

CISOs need to identify and prioritize AI risks for their enterprise and industry, enabling them to determine the order in which those risks are mitigated or managed. Otherwise, a CISO may be addressing an expensive and time-consuming capability that does little to address the enterprise risks at hand.
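To make this prioritization concrete, a simple likelihood-times-impact score over a risk register can set the order of mitigation. The sketch below is illustrative; the risk entries and scores are hypothetical examples, not entries from IBM's risk atlas:

```python
# Minimal AI risk register: rank risks by likelihood x impact so
# mitigation effort goes to the highest scores first.
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    category: str    # e.g. "input", "output", "non-technical"
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

def prioritize(risks):
    """Sort the register so the highest-scoring risks are addressed first."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

# Hypothetical register entries for illustration only.
register = [
    AIRisk("Prompt injection via chat interface", "input", 4, 4),
    AIRisk("Hallucinated content in reports", "output", 3, 3),
    AIRisk("Copyright exposure from generated media", "non-technical", 2, 4),
]

for risk in prioritize(register):
    print(f"{risk.score:>2}  {risk.category:<13}  {risk.name}")
```

A real register would add owners, mitigations and review dates, but even this ordering prevents spending on controls that do little for the top risks.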

AI security tools and frameworks

Organizations can use a growing list of AI security tools and frameworks to address risks effectively. Frameworks such as the NIST AI Risk Management Framework guide the identification, assessment and mitigation of AI-related risks. These frameworks emphasize aspects such as explainability, bias detection and model performance. Additionally, technical tools, such as AI model monitoring platforms and adversarial testing suites, help ensure that AI systems are resilient against attacks and function as intended. Automated tools for data validation and model transparency are also critical in maintaining the integrity and security of AI implementations.
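As one illustration of the automated data validation and model monitoring tools mentioned above, a monitoring job might compare live input statistics against a training-time baseline and alert on drift. The data and threshold below are invented for the sketch:

```python
# Drift check: flag when live inputs shift away from the training baseline.
import statistics

def drift_alert(baseline, live, max_shift=2.0):
    """Flag drift when the live mean moves more than `max_shift`
    baseline standard deviations away from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1e-9  # guard against zero spread
    shift = abs(statistics.mean(live) - mu) / sigma
    return shift > max_shift, shift

baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]  # training-time feature values
stable   = [10.0, 10.1, 9.9]                   # live traffic, in range
drifted  = [14.5, 15.2, 14.9]                  # live traffic, out of range

print(drift_alert(baseline, stable))
print(drift_alert(baseline, drifted))
```

Production monitoring platforms use richer statistics per feature, but the principle is the same: validate inputs continuously, not just at training time.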

Learn the AI regulations specific to your state and industry before implementing any programs. These regulations are fluid, as neither organizations nor legislatures fully understand the depth to which AI will be implemented. Currently, there is no comprehensive federal regulation beyond executive orders issued by the Biden administration, which were subsequently rescinded by the Trump administration. As AI becomes embedded in an organization's core business capabilities, ensure management is consulting with the legal team to ensure compliance with industry, state, federal and international laws.

Finally, as your organization moves toward AI consumption and development, consider the following technical tools to safeguard AI systems against vulnerabilities, threats and misuse:

Model context protocols (MCPs)

Model context protocols (MCPs) are structured methodologies designed to regulate and monitor AI systems effectively. These protocols ensure that AI models and implementations align with organizational risk mitigation strategies and adhere to established security standards.
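Under this framing of MCPs as predefined rules and safeguards, one such control might be sketched as a policy gate that every model request passes before execution. The safeguard names, patterns and model allowlist below are hypothetical, not part of any published MCP specification:

```python
# Policy gate: every proposed model call is checked against named safeguards.
import re

# Each safeguard is a named predicate over a proposed request (hypothetical).
SAFEGUARDS = {
    "no_ssn_in_prompt": lambda req: not re.search(r"\b\d{3}-\d{2}-\d{4}\b",
                                                  req["prompt"]),
    "approved_model_only": lambda req: req["model"] in {"internal-llm-v1"},
    "prompt_length_cap": lambda req: len(req["prompt"]) <= 4000,
}

def gate(request):
    """Return (allowed, violated_safeguards) for a proposed model call."""
    violated = [name for name, rule in SAFEGUARDS.items() if not rule(request)]
    return (not violated), violated

allowed, _ = gate({"model": "internal-llm-v1",
                   "prompt": "Summarize the Q3 incident report."})
blocked, reasons = gate({"model": "internal-llm-v1",
                         "prompt": "Patient SSN is 123-45-6789."})
```

Centralizing the rules in one gate, rather than in each application, is what makes the safeguards auditable across the AI lifecycle.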

Core features of MCPs

MCPs function as a set of predefined rules, workflows and safeguards that oversee AI systems throughout their lifecycle. Here are some of the core features of MCPs:

Benefits of MCPs

The integration of MCPs into AI usage provides several advantages:

Artificial intelligence security platforms (AISPs)

AISPs are holistic tools designed to monitor, analyze and secure AI systems in real time. They serve as a comprehensive suite of technologies that address various aspects of AI security, including threat detection, operational transparency and compliance management.
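One way to picture the real-time threat detection an AISP provides is a traffic monitor that scans prompts for suspicious patterns and raises an alert after repeated hits from the same user. The patterns and threshold below are illustrative only, not a vendor feature list:

```python
# Traffic monitor: count suspicious prompts per user, alert past a threshold.
from collections import Counter

# Substrings that suggest prompt-injection attempts (illustrative only).
SUSPICIOUS = ("ignore previous instructions", "reveal your system prompt")

class TrafficMonitor:
    """Track suspicious model traffic per user in real time."""

    def __init__(self, alert_after: int = 3):
        self.hits = Counter()
        self.alert_after = alert_after

    def observe(self, user: str, prompt: str) -> str:
        """Record one prompt; return 'alert' once a user crosses the threshold."""
        if any(pattern in prompt.lower() for pattern in SUSPICIOUS):
            self.hits[user] += 1
        return "alert" if self.hits[user] >= self.alert_after else "ok"
```

A full platform would correlate such signals with identity, data classification and model telemetry, but the monitoring loop is the core idea.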

Core features of AISPs

Some core features of AISPs include the following:

Benefits of AISPs

AISPs offer a wide range of benefits for safeguarding AI usage, including the following:

Best practices for securing AI in the enterprise

The best practice for securing AI in the enterprise is ensuring that the organization's cybersecurity program is mature in capabilities (people, processes and technology) as measured against frameworks such as the NIST Cybersecurity Framework (NIST CSF). Cybersecurity practices, such as asset management, patch and vulnerability management and data classification, are fundamental in securing AI.

Additionally, CISOs must understand the business objectives of AI use within their organization, including the specific AI models that will be used and the data that will be ingested. Data classification and data quality are crucial in ensuring that sensitive data, such as protected health information, is not inadvertently disclosed.
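As a sketch of how data classification can keep sensitive data out of an AI pipeline, a pre-ingestion check might flag records that match PHI-like patterns. The regexes and labels below are simplified illustrations, not a compliance-grade control:

```python
# Pre-ingestion classification: label text before it reaches an AI service.
import re

# Simplified PHI-like patterns; a real program would rely on a vetted
# data classification service, not two regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
}

def classify(text: str) -> str:
    """Return 'restricted' if any sensitive pattern matches, else 'general'."""
    if any(pattern.search(text) for pattern in PATTERNS.values()):
        return "restricted"
    return "general"
```

Records labeled "restricted" can then be blocked, masked or routed to an approved environment before any model ever sees them.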

Knowing the organization's current cybersecurity gaps and weaknesses is a crucial first step in developing an AI security strategy. If the organization has significant weaknesses in security best practices, it is imperative that those are addressed as part of the strategy to securely onboard AI.

Establishing an AI governance board or committee consisting of business, IT, and cybersecurity leaders, as well as legal experts, can ensure the ethical, accountable and secure onboarding of AI into an enterprise.

Key takeaways for CISOs

Securing AI in the enterprise requires a multifaceted approach that integrates established frameworks, anticipates regulatory changes and uses tools such as MCPs and AISPs.

These practices collectively enable enterprises to deploy AI securely, ensuring operations are ethical, transparent and compliant with emerging regulations. Here are some steps CISOs should consider when determining their AI security protocols.

John Doan is the senior director of cybersecurity advisory and cybersecurity domain architect for a world-renowned healthcare organization.

30 Jun 2025

All Rights Reserved, Copyright 2000 - 2025, TechTarget