As the generative AI gold rush continues throughout the IT industry, VMware and Cisco are among the vendors planning applications for the technology in security operations.
This week, VMware rolled out tech previews for a group of generative AI add-ons under the name Intelligent Assist for its Tanzu, Workspace One and NSX+ product lines during its VMware Explore conference. NSX+ with Intelligent Assist ties in with network detection and response systems to help security analysts determine the relevance of security findings and more quickly remediate threats. Cisco officials, meanwhile, discussed plans to incorporate IP from the company's Armorblox acquisition into a new firewall policy assistant and security operations center (SOC) assistant over the next six months.
Security operations is an area ripe for renewal through generative AI, though machine learning and deep learning AI systems have been available in SecOps products for some time, according to analysts.
"Rather than a time-consuming analysis during an incident, using a generative AI-based interface to communicate in natural language to find the root cause, relevance or even possible solutions could help shave off minutes or hours when time is of the essence during major outages," said Andy Thurai, an analyst at Constellation Research.
"A human in the loop"
The phrase "a human in the loop" has become commonplace among IT vendors discussing generative AI tools. Virtually all vendors that use large language models to generate recommendations acknowledge the risk of their relative immaturity and the need to fact-check results with knowledgeable human operators before putting them to use.
"We were very deliberate in calling it Intelligent Assist," said Chris Wolf, chief research and innovation officer at VMware, during a VMware Explore press conference this week. "We firmly believe that the AI is there as a tool to assist the human, where the human is making the decision."
Generative AI has potentially strong benefits, but it's important to be selective at this stage about where to use it, said AJ Shipley, vice president of product for threat detection and response at Cisco, in an interview with TechTarget Editorial this month.
"The great thing about generative AI is that when it gets things right -- and it probably gets things right 99% of the time -- it's really good," Shipley said. "It's arguably better than a person -- certainly a person who's under a lot of duress because they're trying to respond [to an incident] in the heat of the moment. But the problem is when it gets things wrong, it gets them really, really wrong."
The technology may evolve beyond that limitation. But for now, Shipley said generative AI for SecOps is best applied "where it's OK to be 99% right."
For example, generative AI tends to summarize incidents well for reports from the SOC to upper management, he said. It can also recommend specific updates, including suggested code snippets, for firewall policies or responses to future incidents, both of which Cisco plans to add to its products between now and early 2024.
While there are risks when generative AI gets it wrong, there are also potentially unique benefits for SecOps as generative AI matures and distributed computing systems grow more complex, Shipley said.
"One of the leading causes of breaches [or] incidents in our organization is misconfiguration of devices, and specifically, differences in policies across multiple devices," he said. "You either have overlapping areas … or you have little, tiny cracks between different controls that adversaries could slip through. … AI could be a huge productivity improver by saying, 'This action in your firewall is potentially either going to complement or counter a capability that you have in another device' -- really helping to align those policies and eliminate those cracks."
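The kind of policy-alignment check Shipley describes can be illustrated with a minimal sketch: compare rules across two devices and flag pairs whose address ranges overlap with matching or opposing actions. The rule schema, field names and functions below are hypothetical examples for illustration, not Cisco's actual product or data model.

```python
# Illustrative sketch only: the rule format and helper names are
# hypothetical, not drawn from any Cisco product.
from ipaddress import ip_network

def rules_interact(rule_a, rule_b):
    """Classify how two rules relate: 'counter' if their source ranges
    overlap with opposite actions, 'complement' if they overlap with the
    same action, None if the ranges are disjoint."""
    net_a = ip_network(rule_a["src"])
    net_b = ip_network(rule_b["src"])
    if not net_a.overlaps(net_b):
        return None
    return "complement" if rule_a["action"] == rule_b["action"] else "counter"

def audit(policy_a, policy_b):
    """Cross-check every rule on one device against every rule on another."""
    findings = []
    for ra in policy_a:
        for rb in policy_b:
            relation = rules_interact(ra, rb)
            if relation:
                findings.append((ra["src"], rb["src"], relation))
    return findings

# Example: an edge firewall allows a /16 that a core firewall partly denies.
edge_fw = [{"action": "allow", "src": "10.0.0.0/16"}]
core_fw = [{"action": "deny",  "src": "10.0.5.0/24"},
           {"action": "allow", "src": "10.1.0.0/16"}]

for a, b, rel in audit(edge_fw, core_fw):
    print(f"{a} vs {b}: {rel}")  # prints: 10.0.0.0/16 vs 10.0.5.0/24: counter
```

A "counter" finding here is the sort of cross-device contradiction Shipley says an AI assistant could surface automatically; real policies would also have to account for ports, protocols, directions and rule ordering.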
VMware's Wolf cited another compelling impetus for applying generative AI to SecOps: Attackers are already using generative AI themselves.
"The threats that typical companies are facing are very sophisticated, and many of them are being driven by artificial intelligence," he said. "So now your defensive posture has to be equally -- if not even more [quickly] -- reactive than the threats you face."
SecOps humans prepare for the generative AI loop
While keeping "a human in the loop" ostensibly offers an insurance policy against false positives and other inaccurate results from generative AI tools, that insurance is only as effective as the human skills available to an enterprise organization, said Nathan Bennett, cloud practice lead at Sterling Computers, a value-added reseller in North Sioux City, S.D.
"You should have a skilled NSX engineer validating the different pieces of NSX+ [with Intelligent Assist] before implementation," Bennett said. "If not, then the organization should invest in a development environment where generative AI changes can be tested before they are implemented in production."
Not every enterprise IT organization is ready to take on such a workload -- but some are, Bennett said.
"I don't really see a difference between generative AI assistance and utilizing Stack Overflow for assistance in IT tasks," he said. "The only real difference is the trust that the operator has when implementing. And in that case, most operators double- and triple-check their work, on top of change advisory board approvals, etc."
Beth Pariseau, senior news writer at TechTarget Editorial, is an award-winning veteran of IT journalism. She can be reached at [email protected] or on Twitter @PariseauTT.