With shadow AI, sometimes the cure is worse than the disease
Organizations need to implement policies and restrictions around AI productivity tools, but they also need to make sure the policy isn't causing more harm than good.
Published: 22 May 2025
A long time ago, there was another site here on the Informa TechTarget network called ConsumerizeIT.com that dealt with the consumerization of IT -- essentially, how IT was being affected by users who were savvier than ever before.
These were users who knew how they wanted to work, which devices, tools and apps they wanted to use, and more. Most importantly, they knew how to get what they needed with or without IT's help. We called this FUIT, pronounced "foo-it," which is Latin for "was," as in "IT was in charge." Really, it just meant "F U, IT."
This started with the introduction of the smartphone, as well as a younger workforce of millennials and Gen Z who never knew a world without the internet. And in the intervening years, we've managed to settle in.
More or less, users -- including the subsequent generations entering the workforce -- have been able to work harmoniously with IT. But the roots of FUIT go back to the earliest days of business computing, when IT took a heavy-handed approach to policies and tools. It was IT's way or the highway, so to speak.
That approach began simply because end users didn't know how to use the tools, and though things are far more end-user- and use-case-centric today, we tend to fall back on those draconian ways when something new emerges. As you might have guessed, one of those new things is AI.
Shadow AI is the new FUIT
The expansion of AI productivity tools represents a new wave of this trend, and AI usage by end users is a key focus area of mine. In fact, I've recently conducted research showing that 79% of organizations officially support and deploy AI services such as ChatGPT, Copilot or Gemini to their end users. Perhaps most interesting is the rampant use of shadow AI within those organizations.
Research from my study titled "AI at the Endpoint: Tracking the Impact of AI on End Users and Endpoints" shows that 53% of corporate knowledge workers admitted to using unsanctioned AI tools -- also known as shadow AI. And despite organizational efforts to monitor, manage and block shadow AI, 44% of users said they or their co-workers not only use shadow AI, but also put privileged, private or confidential data into these unauthorized tools.
Tying this back to FUIT, I find it interesting that IT still thinks it can outright block things and expect users to comply. Users will simply find another way around, like ants marching around a leaf dropped in front of their line.
This takes many shapes. Some organizations release their own, customized AI tools for internal use. This could be useful, especially if that tool is tied into corporate data, policies, HR and more. The main challenge here is keeping that customized tool up to date with rapid progression in large language model technology and features.
Others might standardize around a particular public model, so they can always be assured of having the latest capabilities.
This is done in the name of security, data loss prevention (DLP), corporate intellectual property (IP) protection and any number of other things. And a lot of those reasons have merit. The thing is, if a sanctioned tool is technologically behind or lacking in features compared to the tools end users are accustomed to, users will inevitably find ways around it. Those ways are often less secure and more problematic.
Take, for example, blocking a public model such as Google Gemini -- though this could apply to any model. If users were accustomed to this tool and didn't want to adapt to the new edict, blocking would have very little effect on them. Think of all the ways around it. Some of these approaches are ridiculous, but they also illustrate what IT's overreaction of straight-up blocking things invites:
Users can take a picture of the content and upload it to the Gemini app on their phone. Is it blocked at the network level? Fine -- just turn off Wi-Fi.
They can disable security controls on their device.
They can just type in what they read on their screen manually.
They can start using a personal device to do all their work, resulting in a potentially devastating DLP scenario that is likely far worse than the one the company is avoiding by blocking Gemini.
They can email a document to their personal email address.
They can do even crazier stuff, like uploading it to their personal Google Drive, where it could get pulled into Gemini anyway, then downloading it elsewhere. So now you have to block Google Drive, too, except lots of people use that for non-AI but still work-related things, so that will bring added complications.
The list goes on, and it ultimately reveals a cosmic truth about scorched-earth IT policies: You can't block everything. It also brings to mind something I was told many years ago: Sometimes the "solution" is worse than the problem. Ask yourself: What's worse? A model getting trained on my data in a way that might work some random corporate IP into a response in the distant future, or my user pasting that IP in its entirety into easily accessible, insecure and unmonitored locations? Both are bad, to be clear, but which is worse?
Proactive education will help with AI policy rollout
There has to be a middle ground. We can't just have a free-for-all, anything-goes scenario, right? Especially when there's such rapid change and an explosion of both good and bad tools that are nearly impossible to tell apart. Seriously, just search for ChatGPT in your phone's app store and see how many things look exactly the same.
The approach will have to be flexible as well as able to meet the needs of the business, IT and security teams, and the end users. It almost certainly includes a combination of the following:
Strategic blocking of sites and services that should be blocked -- such as knockoff ChatGPT middlemen -- or maybe an allowlist of reputable platforms, so you don't have to keep up with maintaining a blocklist. (A minimal sketch of the allowlist idea follows this list.)
A clear understanding of how end users have worked AI into their workflows. Data from my research shows that, for the most part, end users turn to shadow AI for the same reasons the business wants to use AI: productivity, automation and content quality. IT can build on that. Depending on the stance IT has taken thus far, that might require some amnesty from the powers that be. Better to offer that now than later, when bad habits could be even more entrenched.
Policies created with the needs of all involved parties in mind, not just blanket, heavy-handed edicts. They should state which platforms can be used for which purposes, what classifications of data can be used on those platforms, and other requirements, such as allowing only paid subscriptions with model training turned off.
Education of end users that explains why those policies exist and what they mean, so that users think twice about the data they post and how they interact with these tools.
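To make the allowlist item above concrete, here is a minimal sketch of what a default-deny check against a short list of approved AI platforms could look like. The domains, the SANCTIONED_AI_DOMAINS list and the is_sanctioned() helper are illustrative assumptions for this article, not a specific product configuration; in practice, this kind of logic would live in a secure web gateway, proxy or DNS filter.

```python
# Illustrative sketch only: a default-deny allowlist check for AI platforms.
# The domains below are examples; the real list would come from IT's vetting.
from urllib.parse import urlparse

SANCTIONED_AI_DOMAINS = {
    "chatgpt.com",
    "gemini.google.com",
    "copilot.microsoft.com",
}


def is_sanctioned(url: str) -> bool:
    """Return True only if the destination is an approved AI platform."""
    host = (urlparse(url).hostname or "").lower()
    # Allow the approved domain itself or any of its subdomains.
    return any(host == d or host.endswith("." + d) for d in SANCTIONED_AI_DOMAINS)


if __name__ == "__main__":
    for url in (
        "https://gemini.google.com/app",
        "https://totally-legit-chatgpt.example/upload",
    ):
        action = "allow" if is_sanctioned(url) else "block and log"
        print(f"{url} -> {action}")
```

The appeal of the default-deny shape is that the next knockoff ChatGPT site is blocked automatically, so IT maintains only the short list of platforms it has actually vetted rather than chasing an ever-growing blocklist.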
AI policies are just part of the path forward
Sometimes organizations think that policies are enough, and that once everyone reads them, they understand them. In reality, policies only go so far. They're great at giving organizations reasons to fire people "for cause," but it's not always clear to end users why a policy exists or what it means. Of course, that's an incentive to create broad, catch-all policies like "No AI tools but the one we make you use," but that's how we got into this situation in the first place.
The real key to success here is education, and once again, we have research on this. Just 19% of corporate knowledge workers said they were completely confident in their ability to assess the security, compliance and privacy risks of using unauthorized AI tools, which indicates that we can do more to train end users.
Similarly, 74% of knowledge workers said their organization had not done a thorough job of communicating the risks associated with AI, which again points to an opportunity for education.
Trying to block everything can often feel like the most secure approach, but it might actually make things worse. This is the reality in 2025, where end users are savvy enough to find ways around the blockages put in place. Heck, they can just ask AI how to circumvent the blockage. You can't keep up with that.
The path forward requires an understanding that, in most situations, you can't block everything. Policies are important. Blocking certain tools is important. Delivering company-specific, integrated tools is useful. But you must do all those things, and in a way that meets the needs of the end users in addition to those of the business and security teams.
It's understandable for IT to have a knee-jerk reaction and block everything. Just be sure you know that the unexpected consequences will outnumber the expected ones.
Gabe Knuth is the principal analyst covering end-user computing for Enterprise Strategy Group, now part of Omdia.
Enterprise Strategy Group is part of Omdia. Its analysts have business relationships with technology vendors.