
Shadow AI: How CISOs can regain control in 2026
Shadow AI threatens enterprises as employees increasingly use unauthorized AI tools. Discover the risks, governance strategies, and outlook for managing AI in today's workplace.
AI, and more specifically generative AI (GenAI), is more accessible now than ever. With just a few taps or keystrokes, anyone can use tools such as OpenAI's ChatGPT, Microsoft Copilot or Google Gemini. However, with that sort of unfettered availability, there are also risks. One of the prominent risks is the unauthorized use of AI in an enterprise setting, more commonly known as shadow AI.
Without proper enterprise AI governance and IT oversight, employees' use of AI could lead to data breaches, regulatory compliance failures or a decline in the quality (and reputation) of a business's offerings. With regulations growing more stringent and executive leadership held increasingly accountable for proper oversight, staying one step ahead of shadow AI is important as these tools proliferate.
Where does shadow AI come from?
Shadow AI can easily occur in an enterprise, especially given that GenAI tools are so prevalent and often available on convenient software-as-a-service platforms. AI is even built directly into some devices and web applications these days, making it no surprise that it's often used without consulting IT first.
Given AI's advertised efficiency gains, a common cause of shadow AI is employees' attempts to increase productivity and innovation. Relatedly, some employees may turn to AI to work around organizational inefficiencies or budgetary constraints.
Shadow AI is a more prevalent issue than many businesses may realize. According to a survey of more than 7,000 individuals conducted by CybSafe and the National Cybersecurity Alliance (NCA), over 38% of employees share sensitive information with AI tools without their employer's permission.
Another cause of shadow AI is a lack of proper governance and education. In the same CybSafe and NCA survey, 52% of employed participants said they have yet to receive training on safe AI use. Without awareness or explicit guardrails, the likelihood of AI misuse increases.
Business risks of shadow AI
While AI has the potential to increase productivity and efficiency, shadow AI comes with a wide spectrum of inherent risks that could end up doing a company more harm than good.
Functional risks
Functional risks arise when an AI tool fails to work properly; the content it produces can then be inaccurate and effectively useless. One example is model drift, where a machine learning model fails to adapt to changing data or environments, degrading its performance and yielding output that is misleading, obsolete or just plain wrong.
There are approaches that can help mitigate or prevent model drift – such as continuously monitoring the quality of data being fed into a large language model (LLM), a practice known as data observability. However, there's no guarantee that these techniques are consistently applied to the GenAI tools an average user reaches for.
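To make that concrete, here is a minimal sketch of one common drift check, under simple assumptions: the distribution of a live input feature is compared against a reference window using a two-sample Kolmogorov-Smirnov test. The feature, windows and threshold are illustrative, not taken from any particular monitoring product.

```python
# Minimal drift-check sketch (illustrative assumptions throughout):
# compare a live window of an input feature against a reference window
# from the era the model was trained on.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference: np.ndarray, live: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly
    from the reference distribution."""
    _statistic, p_value = ks_2samp(reference, live)
    return p_value < p_threshold

# Hypothetical example: prompt lengths have drifted upward over time.
rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=120, scale=30, size=5_000)  # training-era data
current = rng.normal(loc=180, scale=45, size=5_000)   # shifted live data

if drift_alert(baseline, current):
    print("Possible model drift: input distribution has shifted.")
```

Observability platforms typically run many checks like this across features and trigger retraining or rollback when alerts fire.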
Operational risks
Operational risks affect a company's ability to provide a quality product or service effectively and safely. For example, improper AI usage can increase the likelihood of a cyberattack, creating vulnerabilities (or complete system failures) that can be exploited. Similarly, following suspect advice from an AI tool could lead to poor operational or financial performance for a company.
Consider the different types of sensitive information a user could share with an AI tool, such as medical records, business strategies or personally identifiable information. An LLM doesn't know that any of this information is sensitive, so it could be surfaced to other users in response to their prompts or compromised in a cyberattack.
In addition to potential legal and compliance violations, these scenarios could cost a company customer trust and competitive advantage.
Compliance and legal risks
Unauthorized AI usage can also result in regulatory violations, with legal consequences to match. Fines and penalties are not the only potential legal risks of shadow AI, however.
Poor financial performance stemming from AI use could prompt shareholder lawsuits, and AI-generated content could violate copyright laws.
Businesses must also ensure compliance with the growing number of state and federal AI-specific regulations.
Resource risks
Unauthorized AI use can affect an organization's bottom line in several ways. Employees' efficiency may suffer, as shadow AI can lead to duplicated work, fragmented data and siloed teams that forgo collaboration in favor of these tools. Even if siloed teams and projects (initially sanctioned or not) are ultimately merged, the merging still costs time and money.
Shadow AI can also lead to endeavors that ultimately fail due to the lack of governance and quality control, a doubly costly outcome: the money lost on the failed project itself, plus the resources that could have been spent on successful operations instead.
It's also important to consider the basic cost of AI tools themselves. When employees follow proper company channels for approved AI use, they benefit from the organization's negotiated rates; unsanctioned tools, by contrast, are often bought piecemeal at full price.
Strategies for CISOs to manage shadow AI
Even though eliminating shadow AI entirely may be unrealistic, leadership can take several approaches to minimize risk and reduce its presence in the enterprise.
Monitor and audit
Some organizations may already have tools in place to monitor network activity, and, if they don't, it may be worth investing in them. AI usage monitoring tools can be used to increase awareness of when unsanctioned AI tools are being used and to block access to them if needed. Similarly, if it becomes apparent that shadow AI has infiltrated a business – or even if there is suspicion – an AI audit strategy can help identify not only when AI was used, but also how and by whom.
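As a rough illustration of what such monitoring might look like, the sketch below scans a web proxy log for requests to well-known GenAI endpoints and tallies them per user. The log schema, file name and domain list are assumptions; a real deployment would draw on the organization's own logging pipeline and a maintained URL-category feed.

```python
# Sketch: flag unsanctioned GenAI usage from a proxy log.
# The CSV schema ('user', 'destination_host') and the domain list
# are assumptions for illustration only.
import csv
from collections import Counter

GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def unsanctioned_ai_usage(log_path: str) -> Counter:
    """Count requests per user that went to known GenAI domains."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["destination_host"] in GENAI_DOMAINS:
                hits[row["user"]] += 1
    return hits

# Surface the heaviest users for follow-up or a formal audit.
for user, count in unsanctioned_ai_usage("proxy_log.csv").most_common(10):
    print(f"{user}: {count} GenAI requests")
```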
Establish a formal AI governance program
No business wants to be overly restrictive in its governance policies, especially when there is real benefit to be had from AI tools. Instead, it helps to have a formal program that establishes guardrails while remaining flexible. Employees should be instructed on accountability, which tools are permitted and how those tools should be used. This includes ethical AI usage, careful handling of sensitive data and compliance with applicable regulations whenever these tools are in use.
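One way to make such a program enforceable is to encode the approved-tools list as data that both systems and people can check against. A minimal sketch follows; the tool names and rules are hypothetical.

```python
# Hypothetical governance allowlist: which AI tools are approved,
# and under what conditions. Tool names and rules are illustrative.
APPROVED_AI_TOOLS = {
    "Microsoft Copilot": {"sensitive_data": False, "requires_sso": True},
    "Internal LLM Gateway": {"sensitive_data": True, "requires_sso": True},
}

def is_permitted(tool: str, handles_sensitive_data: bool) -> bool:
    """Check a proposed use against the allowlist; deny by default."""
    policy = APPROVED_AI_TOOLS.get(tool)
    if policy is None:
        return False  # unknown tool: not sanctioned
    if handles_sensitive_data and not policy["sensitive_data"]:
        return False  # tool not cleared for sensitive data
    return True

print(is_permitted("ChatGPT (personal)", handles_sensitive_data=False))   # False
print(is_permitted("Internal LLM Gateway", handles_sensitive_data=True))  # True
```

Deny-by-default keeps unknown tools out of scope until they have been formally reviewed, which supports flexibility without abandoning the guardrails.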
Implement AI model risk management (MRM)
Some of the steps above can be part of a broader MRM program for AI. In addition to fostering responsible AI usage, MRM establishes a clear-cut definition of the risks associated with AI and adapts existing risk frameworks to account for them. This helps anticipate risk and increase accountability while still encouraging safe AI adoption in the workplace.
Educate and collaborate
Similar to other forms of risk prevention, education and awareness are a big part of AI risk management for CISOs. Keeping employees up to date on the risks associated with unauthorized AI usage, especially as the technology continues to grow, can go a long way. In that same vein, transparency and communication between teams, such as legal and IT, can help employees educate one another while ensuring that they stay compliant with AI governance.
Key questions CISOs should ask to address shadow AI
Here are some key questions executives should ask to address shadow AI, and why the answers matter to the enterprise.
- Are employees educated about the dangers of shadow AI?
- Sometimes, shadow AI may simply be a matter of ignorance. Employees may be using AI tools without realizing their inherent risks or how to use them safely and ethically. Proper education and communication about AI can be helpful tools in heading off problematic usage.
- What policies or governance are in place to prevent shadow AI?
- To prevent the unsanctioned use of AI, an enterprise needs to define what it means to be "unsanctioned." Establishing a clear set of guardrails and a list of approved methods and tools can help employees understand what is or isn't allowed while still enjoying the benefits of GenAI.
- Where might shadow AI already be present in the business?
- Prevention can only do so much, and odds are good that shadow AI is already present in most businesses. So, the next step is to investigate where AI might already be in use. Teams and departments that would benefit most from automating tedious or repetitive tasks are usually a good place to start.
- Are we sufficiently monitoring or controlling network activity?
- The best way to answer this question is to use the right network and security tools to enforce governance policy. Blocking access to certain tools, monitoring network traffic and auditing workflows can help pinpoint unauthorized AI use in the enterprise and increase accountability.
What is the future of shadow AI in the enterprise?
The AI landscape is constantly shifting, and so are the tools and techniques used to combat shadow AI. Popular examples include specialized "app discovery" software to detect AI usage and data loss prevention (DLP) tools. DLP can help track whether sensitive information is being shared with GenAI tools and even block the transmission of that data if needed.
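For a sense of what a DLP-style check does under the hood, here is a minimal sketch that scans an outbound prompt for sensitive patterns before it leaves the network. The patterns and blocking behavior are illustrative assumptions and far cruder than what a commercial DLP product provides.

```python
# Sketch of a DLP-style prompt scan. The patterns below are
# illustrative; real DLP products use far richer detection logic.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

prompt = "Summarize this customer record: SSN 123-45-6789, plan tier gold."
findings = scan_prompt(prompt)
if findings:
    print(f"Blocked outbound prompt: contains {', '.join(findings)}")
```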
Meanwhile, there is no shortage of playbooks and guidelines on AI security for cybersecurity executives. The National Institute of Standards and Technology (NIST), for example, developed the AI Risk Management Framework. The NIST AI RMF was a collaborative effort involving input from both the public and private sectors and is intended to "improve the ability to incorporate trustworthiness considerations into the design, development, use and evaluation of AI products, services and systems." Likewise, the International Organization for Standardization established ISO/IEC 42001, the world's first AI management system standard, which aims to ensure "responsible development and use of AI systems."
Whatever approach enterprises take, though, they need to stay on top of the latest AI regulations, which are changing rapidly. The EU, for instance, is implementing the EU AI Act this year, banning certain AI systems and enforcing compliance with the act for "large AI models." Conversely, the U.S. could be headed toward deregulation: a 10-year moratorium on state and local AI regulation passed the House of Representatives in May 2025 before heading to the Senate.
In short, there's plenty more to come for this technology, which is still in its relative infancy. While eliminating shadow AI entirely is all but impossible for now, the proper tools, governance and awareness can help businesses stay safe and compliant as they enter the age of AI.
Grant Hatchimonji is a freelance writer and solutions architect who does software engineering and consulting.