4 strategic approaches to effective shadow AI governance

Shadow AI is a technological risk and a governance challenge. Organizations must combine structured frameworks, continuous visibility and education to control these risks.

Software-as-a-service applications brought businesses convenience and productivity, but also an unintended consequence: the shadow IT problem. Now, a companion problem is emerging in the form of shadow AI.

Shadow IT is the use of any software program, hardware or online service without formal oversight from an organization's IT department. Shadow IT introduced several problems, including increased risk, inconsistency across workflows and a lack of control over tool use. After the public release of ChatGPT in 2022, businesses faced the risk that employees would use AI tools without appropriate oversight and approval. In fact, Boston Consulting Group's 2025 "AI at Work" study found that 54% of the more than 10,000 employees surveyed used unauthorized AI tools to complete their work, raising internal security risks for their businesses.

Strategies exist to prevent shadow IT software use, and now businesses must implement new safeguards to prevent shadow AI use across their workforce. Discover the risks shadow AI presents and learn how best to mitigate them.

What are the risks of shadow AI?

While the legitimate use of AI can support critical business functions, businesses must manage it carefully. Employees using unsanctioned AI-powered tools can pose serious threats to their organizations, including the following:

  • Data breaches. Employees who use AI-powered tools to facilitate work tasks could inadvertently expose sensitive customer- and business-related information to these tools. An employee could use ChatGPT to summarize a report containing sensitive business information. Future AI models could then train on that information, and it could resurface in responses to future user prompts. According to IBM's 2025 report, "Cost of a Data Breach," one in five organizations reported a data breach due to shadow AI incidents.
  • Compliance and regulatory concerns. Providing sensitive information to AI tools could result in regulatory fines and penalties. For example, exposing customers' personally identifiable information (PII) violates GDPR, HIPAA and the Payment Card Industry Data Security Standard (PCI DSS). This could cause businesses financial and reputational damage.
  • Inaccurate outputs. Basing critical business decisions, such as hiring or developing business plans, on AI output can lead to operational and strategic failures, because AI tools can hallucinate or use outdated information. Implementing AI decisions without verification can cause reputational damage, especially when they result in customer segmentation or the exclusion of prospective employees.
  • Increased attack surfaces. Some AI-powered tools are available as web browser add-ons. A compromised AI plugin increases a business's attack surface and creates new entry points for attackers to exploit.
  • Lack of auditability. Security teams can't track or audit the use of shadow AI. This makes it difficult to understand how a particular business decision was made or to investigate a security incident.

Examples of shadow AI

Shadow AI in business takes various forms. It typically involves using unsanctioned AI tools to do tasks, such as the following:

  • Using unauthorized chatbots to generate answers for customers' inquiries.
  • Pasting confidential company data into an unvetted AI tool to generate a summary report.
  • Using a publicly available AI tool to write emails that contain confidential business information.
  • Screening applicants' CVs with unsanctioned AI-powered tools.
  • Drafting contracts using unmanaged AI tools, exposing privileged information.

How to mitigate the risks from shadow AI

Despite the difficulties businesses face in tracking and auditing shadow AI use, there are still strategies to mitigate the risks it poses. Consider the following four approaches to prevent damage wrought by shadow AI.

The five-pillar approach

The five-pillar approach focuses on shifting unsanctioned employee AI use toward adoption by the organization itself, with specific controls and guidelines. This approach combines the following five elements:

1. Accept

The organization should first accept that employees are using AI tools to facilitate many daily tasks, such as generating ideas, summarizing lengthy reports and drafting emails to clients. Identifying such use cases helps security teams propose policies that enable safe use rather than banning the tools outright.

2. Enable

After accepting the use of AI tools to support specific workflows, the next step is to provide enterprise AI tools that are secure and safe to use. Businesses can deploy a private instance of an LLM or purchase a subscription from a popular enterprise platform that offers data privacy and regulatory compliance. Or they can use an integrated AI assistant within an existing productivity suite.

3. Assess

Establish a template to evaluate new AI tools that employees want to use. A rigid, slow approval process leads to more shadow AI use. Businesses need a rapid, agile intake approach to evaluating new AI tools efficiently. If a security team wants to use an AI-powered open source intelligence (OSINT) platform for dark web monitoring, the security and legal teams should assess the platform's compliance with applicable data protection regulations on data handling and check the data sources used for training before allowing its use.
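A rapid intake process can start as little more than a structured checklist that the security and legal teams fill in together. The following sketch shows one way to encode such a checklist; the criterion names are illustrative assumptions, not a standard:

```python
# Hypothetical intake checklist for evaluating a new AI tool.
# The criterion names below are illustrative, not an industry standard.
INTAKE_CRITERIA = [
    "data_protection_compliant",        # e.g. GDPR-compliant data handling
    "training_data_sources_documented",
    "enterprise_auth_supported",
    "data_retention_policy_acceptable",
]

def intake_decision(answers: dict[str, bool]) -> str:
    """Approve only when every criterion on the checklist is satisfied."""
    missing = [c for c in INTAKE_CRITERIA if not answers.get(c, False)]
    if not missing:
        return "approved"
    return "needs-review: " + ", ".join(missing)
```

Because the checklist is data rather than a document, every tool request gets the same questions, and the unmet criteria come back automatically as the reason for a "needs-review" outcome, which keeps the turnaround fast.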

4. Restrict

To prevent unauthorized disclosure of sensitive data to AI tools, businesses should ban employees from using personal AI accounts to process proprietary organization source code, customer data or any sensitive business information. Businesses can apply data loss prevention policies to prevent pasting confidential business data into web-based AI interfaces.
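At their core, DLP policies like the one described above amount to pattern matching on outbound text before it reaches a web-based AI interface. A minimal sketch in Python, with deliberately simplified PII patterns (real DLP products use far richer detection):

```python
import re

# Illustrative PII patterns; real DLP engines use many more detectors.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of PII patterns found in the text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def allow_paste(text: str) -> bool:
    """Block the paste into an AI tool if any sensitive pattern matches."""
    return not flag_sensitive(text)
```

For example, `allow_paste("Summarize Q3 revenue trends")` passes, while a paste containing a Social Security number is blocked.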

5. Eliminate

Data input from employees' personal AI tool accounts can remain within the tool indefinitely. According to 2025 research by Harmonic Security, a company that sells an AI governance and control platform, 45% of sensitive AI interactions came from personal email accounts. This means that employees who leave their jobs might still have access to confidential business data stored in the histories of these public AI tools. Reviewing personal AI accounts as part of the employee offboarding process helps ensure no sensitive data is left in them. Encouraging the use of enterprise AI tools for work-related tasks is another way to ensure that all data these tools process remains under enterprise purview.

AI amnesty programs

AI amnesty is a voluntary program that encourages employees to disclose their unauthorized AI use without punishment. These programs usually have a defined time frame and ask employees to provide information about the AI tools they use, for what purposes and what kinds of business data they feed into them. The aim is to transform shadow AI from a hidden problem into a governance issue that organizations can tackle.

Developing an AI amnesty program requires planning, strong sponsorship from management and a communications strategy to build trust among employees and ensure their participation.

Organizations can use the following seven steps to build an AI amnesty program:

1. Secure CEO approval

Upper management should approve and encourage the amnesty program. CEOs should clearly communicate the no-punishment concept to all employees. This encourages participation and honest answers.

2. Define amnesty timeframe

Define the program's time window, typically between one and two months. Having start and end dates creates a sense of urgency and encourages all employees to participate early.

3. Develop a questionnaire

Collect employees' responses regarding unauthorized AI tool use with a survey or questionnaire. The questionnaire should cover the following areas:

  • The specific AI tools used.
  • The purpose of the AI tool.
  • The type of data used in the AI tool.
  • The tool's benefits to the user.

4. Establish communication channels

Define the communication channels employees can use to submit their answers, such as a specific email address or an online submission form. The amnesty program can also provide anonymous submission forms if desired.

5. Form an amnesty committee

A committee of employees from relevant departments, including legal, human resources, IT and risk management, should run the amnesty program. The committee oversees the program's objectives, analyzes employees' disclosures and suggests strategies to restrict and govern the use of AI tools at work.

6. Analyze findings

After the amnesty program ends, the amnesty committee should gather to analyze the employees' disclosures. For high-risk disclosures, such as the use of customer PII, take immediate action. Formalize and adopt the disclosed practices that show how to use AI securely and innovatively.

7. Provide feedback

The amnesty committee provides feedback to employees on the decisions it has made. By doing so, the committee demonstrates that employees' answers led to corrective actions instead of punishment. The main decisions the amnesty committee will make are either to ban the use of shadow AI for high-risk use cases or to provide an approved list of AI tools with clear governance guardrails.

Continuous visibility

Continuous monitoring of employee AI use is critical to governing shadow AI. Organizations can't protect what they can't see. Without proper visibility, organizations remain vulnerable to many risks, including those associated with unauthorized AI use.

To gain effective visibility into both sanctioned and unsanctioned AI use, businesses should implement a layered detection approach that gives security teams a full picture of shadow AI use.

Security teams should monitor the following layers across the organization's IT environment:

  • Network layer. Businesses should deploy security measures, such as firewalls and security information and event management (SIEM) tools, to monitor web traffic and API calls to AI services.
  • Endpoint detection layer. Organizations can use endpoint detection and response (EDR) and extended detection and response (XDR) tools to detect AI use at the device level. When an employee connects to an AI service or installs a web browser add-on that interacts with an external AI service, these tools can trigger an alert or block the action.
  • SaaS layer. Many AI services integrate with common SaaS tools. Organizations should deploy cloud access security brokers (CASBs) and SaaS security posture management (SSPM) tools to identify unauthorized AI integrations within sanctioned applications like Slack, Salesforce and Microsoft 365.
  • Data protection layer. This layer focuses on monitoring the data sent to the AI tools. This lets organizations prevent exfiltration of sensitive data, such as PII, trade secrets and sensitive business information, before it leaves the organization's environment and enters an unauthorized AI service.
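One network-layer building block is matching outbound hostnames against a list of known AI service domains, then tagging traffic as sanctioned, shadow AI or unrelated. A minimal sketch, where the sanctioned domain name is a placeholder assumption and real deployments would pull both lists from firewall or CASB feeds:

```python
# Hypothetical domain lists; real deployments source these from
# firewall, CASB or threat-intelligence feeds.
SANCTIONED_AI_DOMAINS = {"copilot.contoso.com"}  # assumed enterprise tool
KNOWN_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com", "claude.ai",
}

def classify_ai_traffic(hostname: str) -> str:
    """Tag an outbound hostname as sanctioned, shadow AI or unrelated."""
    host = hostname.lower().rstrip(".")
    if host in SANCTIONED_AI_DOMAINS:
        return "sanctioned"
    if host in KNOWN_AI_DOMAINS or any(
        host.endswith("." + d) for d in KNOWN_AI_DOMAINS
    ):
        return "shadow-ai"
    return "other"
```

Classifying at the hostname level keeps this layer lightweight: it reveals who is reaching which AI service without inspecting the content of the traffic, which the data protection layer handles separately.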

Businesses must balance their continuous monitoring strategy with employee privacy. To achieve this, organizations should consider the following measures:

  • Monitor who's using which AI tool rather than capturing every employee's AI prompts.
  • Be transparent with employees about what teams are monitoring and why.
  • Implement policy-based triggers that escalate alerts to human review.
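The policy-based trigger in the last bullet can be sketched as a simple threshold rule: record which tool a user touched (never the prompt content) and escalate to human review only after repeated alerts. The threshold value is an illustrative assumption:

```python
from collections import Counter

# Hypothetical policy: escalate to human review only after repeated
# sensitive-data alerts; the threshold value is an assumption.
ESCALATION_THRESHOLD = 3

alert_counts: Counter[str] = Counter()

def record_alert(user: str, tool: str) -> dict:
    """Log who used which AI tool; escalate once the threshold trips.

    Note: only the tool name is recorded, never the prompt content,
    which keeps routine monitoring privacy-preserving.
    """
    alert_counts[user] += 1
    return {
        "user": user,
        "tool": tool,  # no prompt content captured
        "escalate": alert_counts[user] >= ESCALATION_THRESHOLD,
    }
```

A first or second alert stays in the automated log; only a pattern of repeated alerts for the same user pulls a human into the loop, which is what keeps monitoring proportionate.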

Education

To combat shadow AI effectively, organizations should think beyond security controls. Investing in AI literacy training enhances employees' capabilities so they can use AI tools to boost productivity while maintaining the security and privacy of business data.

AI literacy refers to an employee's ability to understand, use and evaluate AI technologies at work. It doesn't mean turning employees into AI experts; rather, it ensures they can use these tools effectively and understand the security and privacy issues surrounding AI when handling business data.

AI literacy is being incorporated into official regulatory frameworks. For example, Article 4 of the EU AI Act mandates that providers and deployers of AI systems undertake efforts to ensure there's adequate AI literacy among their personnel, as well as other individuals involved in the operation and application of AI systems on their behalf, taking into consideration their technical knowledge, expertise, education and training.

While the general understanding of AI is critical for employees, it's also essential to provide role-specific training that addresses the unique challenges of using AI in specific job functions and across different departments. Software developers should focus on best practices when using AI code assistance and understanding secure integration with AI tools via API and data residency. Upper management should understand all the risks of AI, including privacy, security, legal and ethical liabilities.

Nihad A. Hassan is an independent cybersecurity consultant, digital forensics and cyber-OSINT expert, online blogger and author with more than 15 years of experience in information security research. He has authored six books and numerous articles on information security. Nihad is highly involved in security training, education and motivation.
