
AI security worries stall enterprise production deployments

From Big Tech executives at Cisco's AI Summit this week to market research, the industry is waking up to a major hindrance in enterprise AI adoption.

AI security has emerged as the single biggest impediment enterprise IT organizations face as they struggle to move AI deployments from proof of concept to production.

Industry leaders from Cisco, OpenAI, AWS and other major AI players, along with a new report this week from Black Duck, reached similar conclusions: enterprise trust in AI remains limited, a persistent problem that Cisco officials described as a "trust deficit."

"AI is moving at a pace much faster than organizations have the capacity to absorb it," said Jeetu Patel, president and chief product officer at Cisco, during a presentation at Cisco AI Summit on Feb. 3. "It's truly a paradox of progress. On one hand, every day, AI is solving harder problems. On the other hand, you can start to see that we are struggling with articulating a concrete impact on ROI with these technologies on a consistent basis."

This "paradox of progress" has led to slower enterprise AI adoption than some AI pioneers anticipated.

"I was just naive and didn't think about it that hard, and in retrospect, looking at the history, it shouldn't be surprising," said OpenAI CEO Sam Altman during a co-presentation with Patel. "It feels fast in some ways relative to other things … and yet, looking at what's … possible, it does feel sort of surprisingly slow in terms of figuring out how to set up enterprises [to] quickly absorb these new tools and not have a year or multiple years of [evaluation]."

Altman also named AI security, especially data access controls, as a top concern in this area.

"How are we going to balance security and data access versus the utility of all of these models?" he said. "I don't think anyone has a great answer to this yet. It feels to me like there is a new kind of security or data access paradigm that needs to be invented."


AI sovereignty has emerged as another hot topic in AI security, with the potential to hinder companies' purchasing and deployment decisions for AI services, Patel said.

"In this increasingly nationalistic world that we're starting to live in, [we're starting to ask], is sovereignty more important than raw intelligence?" Patel said. "Because every country, every company, might want to make sure that they're focused on resilience, and sovereignty is a proxy for resilience."

AI security as a prerequisite

"This is the first time that security is actually becoming a prerequisite for adoption," Patel said. "In the past, you'd always ask the question whether you want to be secure or you want to be productive, and those were kind of offsets of each other. And now what you're starting to see is if people don't trust these systems, they will never use them."

Jeetu Patel, president and chief product officer at Cisco, presents during the 2026 Cisco AI Summit.

Security as a prerequisite rather than an afterthought also emerged as a pattern in Black Duck's Building Security in Maturity Model (BSIMM) survey report published this week. For the first time in the survey's 16-year history, securing AI-generated code ranked as the top priority among the 128 application security activities BSIMM tracks.

Black Duck's assessment of 111 organizations' security activities last year showed a 12% rise in the number of teams conducting risk-ranking to decide where LLM-generated code can and can't be deployed. The survey also revealed a 10% increase in custom security rules designed specifically to catch AI-generated flaws. 

"[Organizations are] being very deliberate about bringing new technologies in, and when it comes to AI, we're seeing this more than we've seen with other technology evolutions," said Mike Lyman, principal consultant at Black Duck, in an interview with Informa TechTarget. "People are really trying to get ahead of the game and making sure the security is involved."

AI security and organizational change management

Delaying AI deployments carries its own security risks, as attackers have already begun using the technology to conduct faster, more intense attacks. Meanwhile, vendors are releasing tools designed to address AI security risks.

But the key to overcoming AI security hurdles and ROI challenges will be organizational change, including better defining goals and measuring the results of AI deployments, according to AWS CEO Matt Garman during a Cisco AI Summit presentation.

"One of the challenges many companies faced was that, when they started running a bunch of proofs of concept with AI, they didn't actually have good success criteria [at] the beginning," Garman said. "Knowing what that is and helping to find that is one of the first steps of really understanding which of those things to move to production."

One analyst who attended the Cisco AI Summit said he agreed that enterprises must place greater emphasis on well-defined goals.

"There was a general sense that organizations [at the event] need a more 'experiment-friendly' mindset to identify opportunities to deploy AI," said Fernando Montenegro, an analyst at The Futurum Group, in an interview with Informa TechTarget. "This includes finding the right projects -- mission-critical use cases or less important ones -- the right stakeholders -- both AI-savvy and domain experts -- and the right change management in place that allows experimentation."

Beth Pariseau, a senior news writer for Informa TechTarget, is an award-winning veteran of IT journalism.
