At RSAC 2026, AI optimism and anxiety -- and an MIA U.S. government
According to its most ardent proponents, AI is well on its way to creating a new, nirvana-like SOC, in which exposure and threat detection windows are measured in seconds, and human operators are liberated from endless alert triage and chronic overwork.
Its fiercest detractors, on the other hand, warn that AI could create an apocalyptic cyber-hellscape in which organizations' ungoverned use of agentic AI exposes their sensitive data, and attackers find and exploit vulnerabilities at machine speed.
The truth likely lies somewhere in the murky middle. AI, like any powerful tool, can be a force for good or evil -- and without proper safety oversight, it can create more problems than it solves.
Programming at RSAC 2026 reflected this push and pull between AI optimism and concern. In this Reporters' Notebook video, Rob Wright, senior news director at Dark Reading; Eric Geller, senior reporter at Cybersecurity Dive; and Alissa Irei, senior site editor at TechTarget SearchSecurity, discussed what they saw and heard at the conference -- and what the federal government's notable absence might mean for an industry wrestling with questions about AI governance and compliance.
Watch the full discussion now, and check out the following related articles, all part of the Informa TechTarget editorial team's extensive coverage of the RSAC 2026 Conference.
For more on AI in cybersecurity:
- How AI coding tools crushed the endpoint security fortress
- How AI caught a malicious North Korean insider at Exabeam
- Agentic AI's role in amplifying and creating insider risks
- CISOs debate human role in AI-powered security
- 'Do not shift budgets to AI': How businesses should and shouldn't respond to evolving threats
- ISACs confront AI's promise and peril for threat intelligence-sharing
- AI in the SOC: What could go wrong?
For more on the U.S. federal government's absence from the conference and the CVE program's uncertain future:
- 'Missed opportunity': US government's absence from RSAC Conference leaves stark void
- At RSAC, the EU leads while US officials are sidelined
- The CVE Program, a bedrock of global cyber defense, is teetering on the brink
Transcript: At RSAC 2026, AI optimism and anxiety -- and an MIA federal government
Editor's note: The following transcript has been lightly edited for length and clarity by Informa TechTarget's internal AI assistant.
Dark Reading's Rob Wright: Hi, I'm Rob Wright with Dark Reading.
TechTarget SearchSecurity's Alissa Irei: I'm Alissa Irei with SearchSecurity.
Cybersecurity Dive's Eric Geller: And I'm Eric Geller with Cybersecurity Dive.
Wright: And we are here to talk about RSAC Conference 2026. Yes, RSAC, which happened last week. You both were there on the ground in San Francisco. I was covering it from afar. I have my own thoughts on this, but wanted to see what you thought of the show last week, what you heard, and how it stacked up against the theme of the conference, which stood out to all three of us. Alissa, why don't you take it away?
Irei: Sure. The theme of the conference was community, which was an interesting and pointed choice because the acronym on everyone's lips at the conference and in general is AI. The choice to underscore the importance of community seemed intentional. It emphasized the importance of human operators and human involvement in AI processes. There's anxiety, not just in our field but in every field, about job replacement and AI use. The organizers were making the point that we still need humans. Artificial intelligence is not intelligent without human operators, and for the safety of ourselves and others, humans need to be involved in these processes. Eric, what was your impression of the conference on the ground versus the theme?
Geller: Everywhere you looked, there was a focus on AI, particularly understanding the threat landscape and trying to get ahead of it with new defensive solutions. That was a common theme in many sessions, even if they weren't explicitly billed as AI talks. For me, the big theme was the tagline on all the posters, "The Power of Community." However, a major part of the community was missing -- the federal government, which pulled out of the conference a few weeks before it began. Every year, government representatives attend to listen to the community and discuss their own plans. This is one of the places where those conversations are the most fruitful, according to many people I spoke to before and during the conference.
There's anxiety about what this absence means. It raises questions about whether the government is as interested in participating in these events as it used to be. There have been cuts at agencies that work closely with the business community and security researchers who make up much of the attendance at RSAC and similar events. This absence was a striking contradiction to the emphasis on community. Many people wondered whether it sends a broader signal. We're looking for more information from the government about the cybersecurity strategy they recently released. Many felt RSAC would have been the perfect place to roll out details about what the strategy means in practice. That didn't happen, leaving a void in conversations typically stewarded by federal agencies.
Wright: That's interesting. My colleague Becky Bracken at Dark Reading wrote about how other governments, such as those in the EU, brought their cybersecurity experts to discuss developments in their regions. However, the gap left by the US government was noticeable. I wrote a story a few weeks ago about spyware policies and a potential shift in US policy. Many opponents of spyware, including civil society organizations, cybersecurity researchers and vendors specializing in this area, expressed concern about a lack of communication and cooperation with the government. They felt they were flying blind, with no clear strategy or direction. Eric, to your point, this absence has made a major impact.
Irei: It's an interesting moment of unprecedented change. Ideally, this would be a time for public-private partnerships, cooperation and input from the private sector on public regulations and legislation. The absence of the federal government is notable and unlikely to ease anyone's anxieties about AI, which are already plentiful.
Wright: My anxiety is off the charts. Let's talk about AI. Managing all the stories coming in and covering sessions, it was clear that AI was a major focus at the show. More than two-thirds of the sessions had some AI component or were solely focused on AI. One thing I found interesting was the split between C-level executives and researchers. Researchers emphasized the need for human oversight and caution with agentic AI rollouts and coding assistants. They called for more guardrails and oversight. On the other hand, some higher-ups argued that human oversight should be eliminated because it slows things down, and the whole point of AI is to speed things up. What were you seeing or hearing?
Irei: On the business side, there's enthusiasm for new AI use cases and experimentation, often with an "ask for forgiveness, not permission" attitude. And at least from what I saw and heard, this creates opportunities for bad outcomes. Eric, I think you wrote about a session discussing vulnerabilities introduced by vibe coding and the lack of oversight. It's troubling, to say the least. On the flip side, I attended a session with the CISO of Exabeam, who shared an example of agentic AI deployed in their SOC. It autonomously identified a North Korean malicious insider on his first day. According to the CISO, the AI flagged the activity within hours, if not minutes, of the individual logging into his account. Eric, I'll let you weigh in. I know you wrote about this topic.
Wright: That's a good point.
Geller: One of the quotes that stood out to me in that panel I covered was a guy who basically said, "If AI wrote your YARA rules, you should delete them now because they're probably crap." And it really speaks to this hunger for automation. And also, I think this hunger for, frankly, profit margins. The fewer people you can pay to do this work, the more money you're going to make, the better you're going to look to shareholders, the more venture funding you can raise. This is really only partly about security. It's largely about looking profitable by shedding some of that labor cost.
Of course, we've seen what happens when you let the AI run rampant. It miscategorizes things. It could cost you a lot of money if you let it do its thing without human supervision. The theme that emerged in a lot of these talks that focused on AI was not so much a balancing act, but kind of both at the same time. Yes, you want some kind of agentic solution taking those mundane tasks off the plate of your specialized expert human, but you also want some kind of governance framework in place so that there's a human periodically dropping in to review what's going on.
If you've got an AI agent that is out of control, you'll see the signs of that when you drop in and check on what it's doing. If it's mismanaging things, if it's mislabeling things, you're going to see evidence of that. And so I think that's where a lot of the conversations ended up: yes, there's a real reason why SOC managers, especially, are looking for ways to change the role of the analyst and bring AI more into the threat analysis part of the job. But at the same time, just as you need human supervisors for human workers, you're going to need human supervisors for AI workers because nothing human or machine is infallible.
Particularly at the scale at which some of these companies operate, the stakes involved in protecting the networks or leaving them defenseless are high. We're talking about a lot of money that can be made or lost, and so you do want a human being involved checking the work of the AI agent.
Wright: Yeah, and that makes sense to me. I know one of the sessions I covered last week, one of the stories I wrote, was from a Check Point session. The researchers basically said that we spent 20 years building up all these security measures to protect our networks, shore up defenses around the endpoint and move execution to the cloud where it's theoretically, or I guess in practice, a lot of times safer.
The AI coding assistants were basically punching holes through these defenses and setting security back. Literally, they said it was setting security back a decade because now it was giving attackers a route from their endpoint -- from an employee's endpoint -- to the crown jewels, to development environments, to really important data. That didn't used to be the case. All this work that was being done for the last 10 to 20 years is now just being thrown away.
The thing that shocked them was how many companies were rushing to these tools without any acknowledgment that, even without a vulnerability, even if you're not exploiting a critical flaw, you're still creating a tunnel from a simple workstation that's probably underprotected to some really important parts of the network that are highly privileged.
They were surprised that people were just going full steam ahead with this stuff and not taking a beat to say, "Hey, is this the best idea? Do we need to do more to protect this? Do we need to do more to oversee what the agents are doing and the privileges we're giving to these coding tools?"
I was surprised myself to hear their surprise. Based on what I was seeing and hearing at the show, I don't think that's going to change anytime soon. Even with all the research out there about the various vulnerabilities and the expanding attack surface that AI introduces, it doesn't seem like many organizations or people are going to suddenly say, "We need to take a step back." If anything, it feels like pressure is continually mounting to make the most of your investment in AI and, like Eric said, shed costs, save money and reduce workforces. That was the concerning thing for me -- just seeing that split and that dichotomy.
Irei: It's tricky too because we talk a lot on SearchSecurity about participating in the discourse around security culture and the importance of security being a business enabler -- not being the department of "no" -- and aligning yourself with business objectives.
Wright: Hmm.
Irei: Which is all true and important. On the other hand, the culture does seem, to your point, Rob, like it's going in that direction of full steam ahead. Don't ask questions. Don't say anything that's going to slow down the road to profits generated from AI.
Wright: Yeah.
Irei: Yeah, it's distressing, I guess.
Wright: Any closing thoughts from the show? Takeaways, surprises, anything that stuck out to you other than the stuff we've already talked about?
Geller: Well, I'll offer one that's sort of related to AI, which is about the CVE program. We've really been hearing a lot of warnings about this program for almost a year. I think it was April of last year when they almost lost their government funding. In the year since then, people have been saying that this is not sustainable.
People have been working in Europe to create alternatives to the CVE program. There are at least two of them in operation right now, one of them run by the European Union. In addition to the precariousness of not having a guaranteed government funding source, there's also the other problem battering this program right now: AI.
The vulnerability reports are coming in faster than they can handle. A panelist from GitHub in the CVE session said the volume of vulnerability reports submitted through their system is staggering. A lot of them are coming from AI agents looking for vulnerabilities. Many are low quality, and many hallucinate vulnerabilities where none exist.
That is an incredible amount of work to sort through. For a program already struggling to classify and label these vulnerabilities just to get them in and out the door and give them a number, AI is making it even harder. It's a tidal wave of reports, most of which are garbage. This is not what this program needed at this moment, but it is a trend that is only going to accelerate.
I think about the AI agent that jumped to the top of the HackerOne tables last year in terms of reporting the most vulnerabilities. We're not putting that genie back in the bottle. What that means for the CVE program, which is really the bedrock of everything in cyber defense, is something I'll be watching very closely.
Irei: Yep.
Wright: All right. I bet the AI companies love this because they're probably going to say, "Well, they're going to need AI to decipher all the AI slop that's coming in and sift through it all to find the good stuff."
Irei: That makes me think, Rob, about an informal conversation I had with Diana Kelley, the CISO at Noma Security. She gave a talk on model collapse and the inevitability of AI consuming its own content. The theme of the talk was the movie "Idiocracy." If the models keep consuming their own content, at some point, we all become very, very stupid.
That brings us back to the theme of community and the importance of human contributions and intelligence. I'll also add, to be the voice of optimism here, that there were moments in the conference -- like the CISO from Exabeam's talk I mentioned earlier -- where there are exciting examples of AI doing what it's supposed to in the SOC.
We know SOC analysts are overworked and overstressed. If these AI agents can alleviate some of that burden, sift through the noise and bubble up actionable items, that would be awesome. Is it the end of the world as we know it or a new level of nirvana in the SOC? Probably somewhere in between would be my guess.
Wright: I'll try to be optimistic. I like ending on an optimistic note, so we'll leave it there. The power of community and positive thinking about AI and its future applications for cybersecurity.
Irei: The power of community.
Wright: Yeah, there we go. Thanks so much, guys. Really appreciate it.