
Prevent and manage cloud shadow AI with policies and tools
Unmanaged cloud-based AI tool use can result in data loss and reputational harm, among other risks. The time to detect and prevent cloud-based shadow AI use is now.
As AI becomes increasingly embedded in day-to-day business workflows, cybersecurity teams grapple with a growing blind spot: cloud shadow AI.
The adoption of unmanaged cloud-based AI tools continues to outpace security teams' ability to manage and protect those deployments. In fact, 70% of cloud workloads that use AI software have a critical vulnerability, according to the "Tenable Cloud AI Risk Report 2025."
Organizations must address cloud-based shadow AI now to prevent data breaches, avoid compliance violations and mitigate cyberattacks.
The problem with cloud-based shadow AI
Much like shadow IT in earlier phases of cloud adoption, shadow AI refers to the unauthorized or unmanaged use of AI-powered tools. In the cloud, this encompasses unmanaged AI services and AI used within cloud workloads, including large language models (LLMs) such as ChatGPT, public chatbots hosted in cloud SaaS deployments, training data storage, AI APIs and AI-assisted coding platforms.
Employees often don't use AI-enabled tools with malicious intent. Rather, users such as developers, marketing teams and data scientists adopt them for productivity gains without fully understanding their security implications.
The risks of these tools include the following:
- Users might upload sensitive information, such as source code, regulated data or intellectual property, which could result in data exposure, compliance violations and long-term reputational harm.
- AI tools might provide false or fabricated information that employees then act on, leading to poor decisions and investments that hurt the organization's bottom line.
- Organizations might incur unexpected costs, whether from running unsanctioned AI alongside managed AI tools or from later migrating workloads off shadow AI onto officially sanctioned ones.
The tools themselves also introduce challenges. According to Tenable's report, 14% of organizations using Amazon Bedrock left it publicly accessible, while 77% of organizations had at least one overprivileged Google Vertex AI Workbench notebook service account. Other cloud-based AI services were also implicated.
Because security teams lack visibility into which AI-enabled tools are in use and where, those tools fall outside the purview of data security processes, patching, monitoring and policies.
How to secure cloud-based AI tools
Addressing cloud-based shadow AI risks requires a combination of clear, enforceable policies around AI use and adequate security technologies and controls.
Policies for secure AI use
Two crucial policies to secure AI are acceptable use policies and allowlist policies.
Create an enterprise AI acceptable use policy -- especially in cloud environments, where scale and decentralization increase the risk surface -- that defines the following:
- Who is allowed to use AI tools.
- Under what conditions AI use is permitted.
- Which categories of data may -- or may not -- be processed by AI tools.
Ensure policies reflect regulations, such as GDPR, HIPAA and export control rules, which might affect whether data can be transmitted to certain LLMs or third-party services. Also, define distinctions between internal versus external AI systems, provide guidance on acceptable tools and require risk reviews before new AI services are adopted.
For example, a policy might prohibit uploading client data to public-facing LLMs but permit internal experimentation using self-hosted models in a protected cloud environment. Cloud-native policy enforcement -- using Azure Policy, AWS service control policies or Google Cloud Organization Policy Service -- can help automate these boundaries across different teams.
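To make that concrete, the following sketch uses AWS Organizations to register a service control policy (SCP) denying Amazon Bedrock and SageMaker API calls from every account except one approved AI workload account. The account ID and policy name are placeholders for illustration; Azure Policy and Google Cloud Organization Policy Service support equivalent guardrails.

```python
# Illustrative sketch: register an SCP that denies Bedrock and SageMaker
# API calls outside one approved account. The account ID and policy name
# are placeholders, not real values.
import json

import boto3

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnapprovedAIServices",
            "Effect": "Deny",
            "Action": ["bedrock:*", "sagemaker:*"],
            "Resource": "*",
            # Exempt the sanctioned AI workload account from the deny.
            "Condition": {
                "StringNotEquals": {"aws:PrincipalAccount": "111111111111"}
            },
        }
    ],
}

org = boto3.client("organizations")
org.create_policy(
    Name="deny-unapproved-ai-services",
    Description="Block AI service APIs outside the approved AI account",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
```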
Another effective way to manage AI risk is to adopt an allowlist-based approach. Authorize specific, vetted AI tools -- such as Microsoft Copilot, Google Gemini or enterprise-hosted LLMs -- and restrict access to all others.
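As a simple illustration, an egress gateway or proxy could enforce that allowlist with a check like the one below. The domain list and helper name are hypothetical; in practice, a CASB or secure web gateway would implement this logic.

```python
# Minimal sketch of allowlist enforcement at an egress gateway. The
# domain list and helper name are illustrative assumptions.
from urllib.parse import urlparse

APPROVED_AI_DOMAINS = {
    "copilot.microsoft.com",
    "gemini.google.com",
    "llm.internal.example.com",  # hypothetical enterprise-hosted LLM
}

def is_approved_ai_destination(url: str) -> bool:
    """Return True only if the request targets a vetted AI service."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_DOMAINS

# Requests to unvetted public AI endpoints get blocked at the gateway.
assert is_approved_ai_destination("https://gemini.google.com/app")
assert not is_approved_ai_destination("https://chat.unvetted-llm.ai/api")
```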
Integrate access controls through identity providers and cloud access security brokers (CASBs) to ensure only authorized users can interact with these tools and that usage can be logged and monitored.
Beyond general productivity platforms, development and DevOps teams often use AI-assisted coding tools such as GitHub Copilot or Tabnine. Evaluate such tools for security posture, data retention policies and model training implications before permitting use. In some cases, on-premises or private instances might be preferable to preserve confidentiality.
Tools and controls for secure AI use
To securely adopt cloud-based AI tools, implement the following core security controls:
- Data loss prevention. Enforce DLP policies at endpoints and cloud gateways to prevent sensitive data from being submitted to unauthorized AI tools, as in the first sketch after this list.
- CASB integration. Use CASBs to discover shadow AI usage, enforce access policies and block risky or unapproved services.
- Zero-trust access. Apply zero-trust principles to AI services to restrict access based on user identity, device health and contextual risk.
- Model and API hardening. Organizations that host their own AI models should use secure API gateways, authentication controls and rate limiting to prevent misuse or prompt injection; see the second sketch after this list.
- Auditing and logging. Maintain comprehensive logs of who is using AI tools, for what purpose and what data is being exchanged. This helps support forensic analysis, compliance and auditing efforts.
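The first sketch shows the DLP idea in miniature: scan an outbound prompt for sensitive patterns before it reaches an external AI service. The regexes are illustrative only; production DLP engines rely on classifiers, exact-data matching and document fingerprinting.

```python
# Minimal DLP-style sketch: scan an outbound AI prompt for sensitive
# patterns before it leaves the gateway. Patterns are illustrative only.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in an outbound prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

findings = scan_prompt("Summarize this config: AKIAABCDEFGHIJKLMNOP ...")
if findings:
    # Block the request and log the event for audit and compliance review.
    print(f"Blocked outbound AI request; matched patterns: {findings}")
```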
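The second sketch illustrates one element of model and API hardening: per-client rate limiting in front of a self-hosted model API, using a token bucket. The rates and client ID are assumptions for illustration; a real gateway would combine this with authentication and input validation.

```python
# Sketch of per-client token-bucket rate limiting in front of a
# self-hosted model API. RATE and BURST are illustrative values.
import time
from collections import defaultdict

RATE = 5    # tokens refilled per second, per client
BURST = 20  # maximum bucket size

_buckets = defaultdict(lambda: (float(BURST), time.monotonic()))

def allow_request(client_id: str) -> bool:
    """Consume one token for client_id; deny the call if the bucket is empty."""
    tokens, last = _buckets[client_id]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)
    if tokens < 1:
        _buckets[client_id] = (tokens, now)
        return False
    _buckets[client_id] = (tokens - 1, now)
    return True

# Demo: a rapid burst from one hypothetical client gets throttled after
# roughly BURST requests, since almost no refill occurs in between.
allowed = sum(allow_request("svc-analytics") for _ in range(50))
print(f"{allowed} of 50 requests allowed")
```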
Start working now to mitigate cloud shadow AI usage
Cloud-based AI adoption in the enterprise is inevitable, but unmanaged cloud-based AI use is a growing liability. Focus on detecting and eliminating shadow AI, defining secure usage policies and empowering users with approved tools that meet security standards. Building guardrails through policies, controls and visibility enables security teams to support innovation without sacrificing trust or compliance.
Getting started now means auditing current cloud AI usage -- both sanctioned and unsanctioned -- engaging stakeholders across business units and implementing a formal AI security framework. With the right foundations in place, enterprises can harness the power of AI responsibly and securely going forward.
Dave Shackleford is founder and principal consultant at Voodoo Security, as well as a SANS analyst, instructor and course author, and GIAC technical director.