RSAC 2026: End-user AI agents are here, but invisible

Agentic AI dominated RSAC 2026 as vendors grappled with invisible AI usage, shadow tools, unsecured end‑user agents and a renewed focus on the endpoint.

I've finally had some time to decompress after an incredibly busy RSAC 2026 -- not least recovering from the obligatory cold that comes from spending a week in close proximity to 40,000 people in and around Moscone Center in San Francisco. In all, I had nearly 30 meetings with vendors in areas like endpoint security, email security, device management and more. What struck me was the common thread that ran through all of them: agentic AI.

If you've been to RSAC (or any other major industry event) before, you probably know there's an unofficial theme each year. A few years ago, it was just "AI," and every vendor was scrambling to incorporate a chatbot into its booth demo. Last year, it was "agentic," but in a hand-wavy way that didn't really translate to what organizations were doing at the time. This year was agentic again, but with a bit more meat on the bone.

Building on the browser, shadow AI and second brain blog posts I've written, I went into the event looking for what organizations are doing about AI agents in the hands of end users. I was not disappointed. Here are some of the conversations I found myself having throughout the show.

Nobody can see what their employees are actually using

If there was one theme that came up again and again, it was visibility. You can't secure what you can't see. Or, to put it another way, organizations don't know what they don't know. Trend AI (formerly Trend Micro) has structured its entire security approach around four pillars, and the first one is visibility. LastPass built a SaaS app discovery capability using their browser extension because customers genuinely didn't know the scope of what employees were accessing. ESET is building AI observability into its XDR so administrators get a single view of which AI tools are in use across the environment.

The wrinkle is that it's not just ChatGPT or other chatbots anymore. AI is embedded in productivity apps now, and you can't just block those outright. Take, for example, Canva. If your organization allows Canva, you're allowing AI, because Canva has multiple foundation models underneath it and a growing enterprise data layer that feeds into all of them. The same is true for Microsoft Copilot, Google Workspace with Gemini, Adobe's integrations and dozens of other apps that have quietly added AI capabilities without making a big deal about it. Blocking ChatGPT.com is a 2024 response to a problem that has already moved past it.

My own research backs this up. The last time I polled both IT decision-makers and knowledge workers on AI usage, 72% of IT said they had an AI policy in place, but only 44% of end users said they had seen it. And 53% of end users admitted to using unsanctioned AI tools. The gap between what IT thinks is happening and what's actually happening is real, and it's only going to widen as AI becomes embedded in the tools people already use every day.

Blocking AI just creates more shadow AI

When a company blocks access to AI tools and replaces them with an (often inferior) internal tool, people find workarounds -- personal devices, screenshots of spreadsheets uploaded through mobile apps, copying and pasting into tools the network filter doesn't catch, etc. And these are all harder to track than the original problem.

I heard the same dynamic described by vendors across the board. Island's framing stuck with me. The company wants IT to be able to "say yes" instead of "say no," putting guardrails around AI usage rather than blocking it outright so the data stays protected without killing productivity. LastPass made a similar argument: Give employees good, sanctioned tools with real security controls, and they'll use them. Don't, and they'll find ways around whatever you built. Just like we learned with BYOD and mobile devices, the block-everything approach doesn't work. It's time to learn to live with these tools.

My favorite analogy: Saying you solved the DLP problem by blocking third-party AI and/or deploying an internal tool is like saying you solved your cockroach problem by flipping on the light. The cockroaches might have scattered, but they're still there. You just can't see them anymore.

We're quickly entering a situation where IT (including security teams) is holding back both the business and the end users. I'm not saying IT is bad. I'm saying efforts are better spent learning how to live in this new world than trying to play whack-a-mole.

End-user agents are here, but nobody knows how to secure them

Amid all the "agentic" chatter on the expo hall floor, a few companies stood out as understanding the impending agentic knowledge worker revolution. What happens when an end user runs an agent whose every behavior looks just like the end user's? How do we know whether it's the agent or the user acting? How do we know whether the scripts or apps it writes are secure and accurate? And given the productivity gains and impact on the business, how can IT allow this while still implementing guardrails and visibility?

Trend AI shared a story about a customer whose CEO mandated deploying 200 AI agents. The chief information security officer's response was essentially, "I know how to deal with ransomware. I have no idea how to deal with a situation where an AI agent gets compromised." That captured the state of things perfectly.

The solution, at least in part, will hinge on identity and granular controls for autonomous agents. Password managers like LastPass and 1Password are trying to solve this, but how do you let an autonomous agent use those credentials on your behalf? If there's a human in the loop who can approve access in real time, it's solvable. But in fully autonomous scenarios -- where the agent needs to book a flight or access a SaaS app while you're getting coffee -- there's no good answer yet. Nobody in the password management space has solved it, and there are no standards for it.

Then there are Model Context Protocol (MCP) servers, another facet of agentic AI that is already at "wild west" status. ESET has been scanning over 60,000 MCP skills and described the security landscape as "an absolute mess." Other vendors are wise to this as well. For example, Palo Alto recently announced an intent to purchase Koi Security specifically for this purpose.

As a final "holy smokes!" moment, I spoke with email security companies and learned that end-user agents are actually defeating API-based, post-delivery email security tools because they can process incoming mail instantly, before the email security tool can take action. This opens the door for AI-specific exploits, and it could potentially reshape what a complete email security platform should look like. (For example, does this make secure email gateways more important?)

Attention is turning back to the endpoint

Another development I want to quantify in research this year is something I kept noticing in the agentic conversations at RSAC: an apparent return of attention to the endpoint. For years, and despite my protests, the importance of the PC has taken a back seat in a world full of browser apps. AI PCs gave it some new life, but the killer use case never really showed up. End-user agents and token economics might be what finally does.

Both forces point in the same direction. The agents that hold up under real workloads aren't pure cloud constructs. They mix local data, scripts and small models on the device with frontier inference in the cloud (like second brains), which makes the endpoint a real participant rather than a swappable terminal. And as knowledge workers start consuming tokens at agentic rates, the cloud bill turns into a budget conversation, which makes the idle silicon already sitting on people's desks look a lot more interesting. OpenClaw and Nvidia's NemoClaw are early proof points worth watching, and Nvidia CEO Jensen Huang has been making the same case from the supply side at GTC 2026. There's a lot more to unpack here, so I'll come back to it in its own post soon.

And so much more

There were lots of other themes, too. Browser security is suddenly on everyone's radar now that CrowdStrike and Zscaler have bought Seraphic and SquareX, respectively, while Island is establishing itself as a workspace platform, not just an enterprise browser.

Human risk management is increasingly top of mind for email and messaging security companies. KnowBe4, Ironscales, Proofpoint, Abnormal and Mimecast all had something to offer around visibility, awareness and training of end-user behaviors.

There's also the overall convergence of endpoint management and security that's driving autonomous endpoint management and unifying the teams, tools and processes that are tasked with dealing with end users and their many devices, OSes and apps.

RSAC 2026 was an amazing show for me. End-user computing and digital workspace have never been this interesting.

Gabe Knuth is the principal analyst covering end-user computing for Enterprise Strategy Group, now part of Omdia.

Enterprise Strategy Group is part of Omdia. Its analysts have business relationships with technology vendors.
