Generative AI security risks: Best practices for enterprises How to calculate cybersecurity ROI with concrete metrics
X
Tip

How to craft an effective AI security policy for enterprises

Enterprises unable to manage AI risks face data breaches, algorithmic bias and adversarial attacks, among other risks. Learn how to implement a comprehensive AI security policy.

The rapid adoption of generative AI has spurred enterprises to examine closely how AI tools affect their security strategies and how to prevent these powerful technologies from causing more harm than good.

A key concern is the role GenAI might play in a security breach. To tackle this concern, enterprises must take a step back and look at AI as an integral part of their cybersecurity planning. Let's discuss some key security issues AI presents and examine what to include in an AI security policy.

How does AI affect cybersecurity measures?

AI introduces a host of cybersecurity risks to businesses. Organizations unable to manage AI-associated risks open themselves to data loss, system access by unauthorized users, and malware and ransomware attacks, among other threats.

Cyber-adversaries might use GenAI, for example, to craft convincing social engineering and phishing scams, including deepfakes. GenAI is also highly vulnerable to prompt injection attacks, in which malicious actors use specially crafted input to bypass a large language model's (LLM) normal restrictions.

Machine learning models and generative AI are susceptible to data poisoning attacks, in which attackers alter or corrupt the data set used for training; adversarial attacks, in which attackers make subtle changes to the input data in order to corrupt the output; and model inversion attacks, in which attackers try to infer sensitive information about the original training data by analyzing the outputs.

Other AI risks include employees exposing sensitive data, shadow AI use, vulnerabilities in AI tools and compliance obligation breaches.

Employees must recognize that hackers are using AI to develop their cybersecurity attacks and, therefore, must exercise due diligence when operating their systems and applications.

AI standards and frameworks

Standards and frameworks play a key role in helping organizations develop and deploy secure AI. Each of the following ISO standards and NIST frameworks addresses AI risk in varying degrees:

  • ISO/IEC 22989:2022 Information technology -- Artificial intelligence -- Artificial intelligence concepts and terminology.
  • ISO/IEC 23053:2022 Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML).
  • ISO/IEC 23894:2023 Information technology -- Artificial intelligence -- Guidance on risk management.
  • ISO/IEC DIS 27090 Cybersecurity -- Artificial Intelligence -- Guidance for addressing security threats to artificial intelligence systems. This standard is currently under development.
  • ISO/IEC 42001:2023 Information technology -- Artificial intelligence -- Management system. This standard provides a framework for establishing a management system that focuses on the development and deployment of AI-based systems.
  • NIST released the AI Risk Management Framework in 2023 as an essential document for organizations developing and deploying secure, trustworthy AI systems.
  • In March 2025, NIST released updated AI security guidelines -- NIST AI 100-2e2025, "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations" -- which provide guidance on responding to cyberattacks affecting predictive AI and generative AI systems.

Consult the above references for reducing risk when developing an AI-based system, especially when cybersecurity is a concern. These guidelines and their recommended activities can translate to controls that can be built into an AI security policy document. These can then, in turn, be translated into detailed procedures for managing AI-based cyberthreats.

What goes into an AI security policy?

The first decision is whether to update an existing cybersecurity policy to include AI or create a separate AI cybersecurity policy. For the purposes of this article, the goal is to develop a separate AI cybersecurity policy that references GenAI.

To start, consider the following steps:

  1. Accept that security breaches happen.
  2. Ensure that senior management acknowledges cyberthreats, their potential impact and the importance of preventing them.
  3. Review the standards and frameworks shown in this article to obtain initial guidance and direction.
  4. Establish procedures to identify suspicious activity -- and its source -- and address it.
  5. Collaborate with departments such as legal and HR, as well as business units whose activities could be threatened by cyberattacks.
  6. Initiate activities and procedures to reduce the likelihood of such security events occurring and mitigate their severity and impact on the organization.

A cybersecurity policy that focuses on AI must do the following:

  • Identify the risks, threats and vulnerabilities associated with AI-based systems.
  • Identify standards, regulations and frameworks that will be addressed for compliance.
  • Specify actions to take that will detect, analyze and mitigate cyberattacks that affect AI-based systems.

Once a policy has been developed and approved, follow the steps detailed in the "AI security policy implementation" section below.

The impact of AI security on business goals

The following sections identify the many components of the organization that must be addressed when developing an AI security policy and highlight the importance of aligning that policy with the company's business goals.

People

  • Establish a process for identifying suspicious activity that can be associated with AI activities.
  • Work with HR to set up procedures to identify and deal with employees suspected of AI security exploits.
  • Work with the legal department to address how to prosecute AI security breaches.
  • Establish how the company will respond to such activities -- e.g., reprimand or termination -- based on HR policies.
  • Determine legal implications if perpetrators fight legal action.
  • Identify outside expertise -- e.g., legal teams or insurance experts -- who can assist with AI security attacks.
  • Establish procedures to help prevent the accidental inputting of sensitive data into AI tools, such as ChatGPT, GitHub or Copilot.
  • Develop training programs for employees regularly using AI to prevent misconfiguration of AI tools that address enterprise systems.
  • Develop guidance and training for employees using AI to prevent unquestioned trust of AI-generated output, which could lead to security gaps or flawed decisions.

Process

  • Examine existing procedures for recovering and reestablishing disrupted IT operations to see if they can be used for AI-based breaches.
  • Examine existing technology disaster recovery (DR) and incident response plans to see if they can be used to recover operations from AI-based events.
  • Develop or update existing procedures to recover, replace and reactivate IT systems, networks, databases and data affected by AI-based security breaches.
  • Develop or update existing procedures to address the business impact -- e.g., lost revenue, reputational damage -- from AI-based security breaches.
  • Consider using external experts to assist in the aftermath of AI-based events.
  • Determine if any standards or regulations have been violated by AI-based cyberattacks, as well as how to reestablish compliance.

Technology operations

  • Examine technology that can identify and track cybersecurity activities with suspected AI signatures, whether they're within the firm's IT infrastructure or with outside firms, such as cloud services.
  • Establish methods to shut down AI-based activities once a cyberattack has been detected and verified. Quarantine affected resources until the issues have been resolved.
  • Review and update existing network security policies and procedures following AI-based attacks.
  • Update, patch or replace existing cybersecurity software and systems to be more effective in AI-based cyberattacks.
  • Repair or replace hardware devices that have been damaged by attacks.
  • Repair or replace systems and data affected by attacks.
  • Ensure critical systems, data, databases, network services and other assets are backed up.
  • Ensure encryption of data at rest and in motion is in effect.
  • Recover IT operations, applications and systems that might have been affected by AI-based attacks.
  • Ensure that security assets, such as a security operations center (SOC), are properly configured and security teams are trained.
  • If additional expertise is needed, consider retaining external vendors or consultants.

Security operations

  • Establish and regularly test procedures for dealing with physical and logical breaches caused by AI-based security events.
  • Establish and regularly test procedures to prevent theft of intellectual property and personally identifiable information.
  • Establish and regularly test procedures to ensure data privacy and protection, especially in light of statutes such as GDPR.
  • Establish and regularly test procedures to address attacks on physical security systems -- e.g., closed-circuit television cameras, building access systems -- from AI-based attacks.
  • Establish and regularly test an incident response plan that addresses all types of cybersecurity events, including those from AI-based breaches.
  • Ensure that all security activities are compliant with the required standards, regulations and frameworks.
  • If additional expertise is needed, consider retaining external vendors or consultants.

Facilities operations

  • Develop, document and regularly test procedures to repair, replace and reactivate data center and other facilities that might have been disrupted by AI-based security breaches.
  • Establish and regularly test procedures to address attacks on physical security systems, such as card entry systems, from AI-based attacks.
  • Establish and regularly test a technology DR plan that addresses all types of cybersecurity events, including AI-based attacks.
  • If additional expertise is needed, consider retaining external vendors or consultants.

Financial and legal considerations

  • Develop and regularly review procedures for evaluating the impact of AI-based security attacks on financial and general business operations.
  • Define potential legal and regulatory penalties for failure to comply with specific regulations as a result of AI-based security breaches.
  • Identify potential insurance implications of AI-based cybersecurity attacks with the company's insurance provider(s).
  • Identify potential legal implications of AI-based cybersecurity attacks with the company's legal department.
  • If additional expertise is needed, consider retaining external vendors or consultants.
  • Develop procedures to repair potential reputational and other damage from AI-based cyberattacks.
  • Develop procedures for responding to media inquiries about reported AI-based security breaches.

AI security policy implementation

Once an AI security policy has been approved, embedding it into the company culture becomes essential. Simply rolling out the policy without awareness and training activities is a surefire way for it to fail, which might increase the company's vulnerability to an attack. The following security best practices will help ensure a successful AI policy rollout:

  • Ensure that senior management is on board, as a message from the top can often get people's attention and encourage them to comply.
  • Collaborate with HR for guidance on how to effectively launch the policy; this can include training for existing employees and onboarding for new employees.
  • Collaborate with PR or a similar department for guidance on how to format and present the message to maximize its response; for example, they could help develop a training class.
  • Schedule training classes for all employees.
  • Use internal networks, such as email and SharePoint, to deliver the message and provide ongoing reminders; this is important for both in-house and remote workers.
  • Designate employees who can act as subject matter experts for employees who have questions.
  • Ensure that the existing help desk and SOC, if available, are fully briefed on the policy and can answer employee questions.
  • Periodically send out reminder messages on the importance of following the AI security policy and how it benefits the company.
  • Consider sending out a questionnaire to employees to gauge their awareness of the policy and how they are using AI.
AI security policy template thumbnailClick here to download
the Artificial Intelligence
Security Policy Template.

Policy template

An AI security policy template that covers AI-based attacks largely incorporates the same line items as a standard cybersecurity policy. It also recognizes that the organization must be able to identify security breaches that exhibit signatures that indicate something other than a "normal" attack.

Use the accompanying template as a starting point for creating a policy to address AI-based attacks and exploits. Again, the result could be a standalone policy or the addition of AI content to an established cybersecurity policy.

Editor's note: This article was expanded and updated in June 2025.

Paul Kirvan is an independent consultant, IT auditor, technical writer, editor and educator. He has more than 35 years of experience in business continuity, disaster recovery, security, enterprise risk management, telecom and IT auditing.

Next Steps

AI-powered attacks: What CISOs need to know now

RSAC Conference news and analysis

The advantages and disadvantages of AI in cybersecurity

AI model theft: Risk and mitigation in the digital era

How to develop a cybersecurity strategy: Step-by-step guide

Dig Deeper on Security operations and management