LAS VEGAS -- As expected, the theme at Black Hat USA 2023 was generative AI, led by product announcements and keynotes about a cultural explosion now approaching its first full year.
The annual security conference was kicked off with an introduction by Black Hat and DEF CON founder Jeff Moss as well as a keynote by Maria Markstedter, founder of Azeria Labs. Markstedter's keynote, titled "Guardians of the AI Era: Navigating the Cybersecurity Landscape of Tomorrow," was dedicated to the transformation and challenges generative AI is bringing to the security industry.
"When the world is shifting toward a new type of technology, corporations are racing to dominate the market," Markstedter said. On the vendor show floor at Black Hat, references to AI, generative AI and large language models (LLMs) were everywhere.
During the opening keynote, Moss and DARPA announced the "AI Cyber Challenge" (AIxCC), a two-year competition challenging computer scientists and software developers to build AI-powered cybersecurity tools.
The semifinals and finals for the competition will be held at Black Hat 2024 and 2025, respectively. Each of the top five teams in the semifinal will be awarded $2 million, while the top three winners in the final will be awarded $4 million, $3 million and $1.5 million, in order from first to third.
"In the past decade, we've seen the development of promising new AI-enabled capabilities. When used responsibly, we see significant potential for this technology to be applied to key cybersecurity issues. By automatically defending critical software at scale, we can have the greatest impact for cybersecurity across the country, and the world," said Perri Adams, DARPA's AIxCC program manager, in a press release.
Evolving or plateauing?
Following the cultural and technological explosion brought forth by the launch of OpenAI's ChatGPT last fall, the security industry has seen an enormous investment in LLM-based technology such as generative AI as well as multiple high-profile product announcements.
Google and IBM both announced generative AI-powered offerings back at RSA Conference 2023 in April -- a show that also heavily featured generative AI as a theme. Microsoft, meanwhile, announced its "Security Copilot" virtual assistant in March.
This week, Tenable announced its generative AI-powered offering named ExposureAI. The new platform will integrate LLM capabilities into the vendor's Tenable One platform and can offer customers capabilities such as prioritized mitigation advice, actionable insights and recommended actions. Tenable CTO Glen Pendley told TechTarget Editorial that the big differentiator between it and other recent LLM-powered offerings is the quality of the security vendor's data.
Approximately a year out from ChatGPT's emergence, some vendors have launched new generative AI products while others have integrated features into pre-existing products. Though each product has its own capabilities, there are also clear overlaps. Google Cloud Security AI Workbench and IBM's QRadar Suite, the respective vendors' recent AI-powered launches, both feature automated threat hunting and prioritized breach alerts.
Levi Gundert, CSO at Recorded Future, said he agreed that there's "a little bit of a learning curve already built in at this point." Gundert said he now spends a large part of his time considering how to utilize the technology beyond basic intelligence and toward second-order thinking -- the capacity to think long term and weigh secondary consequences.
Gundert's company in April announced Recorded Future AI, a tool based on OpenAI's GPT model that can make real-time, automatic threat assessments of a customer's environment. According to an accompanying press release, the model was trained on a decade of data from the vendor's Insikt Group threat intelligence team.
Brian Fox, CTO of supply chain security vendor Sonatype, said he believed one issue keeping generative AI capabilities at a kind of plateau is rights management -- the question of who owns the data fed into and created by AI models.
"It's that kind of concern that I know we have and I'm sure others have. We will get past those things, but as we sit right now, there are still those unanswerable questions," he said. "That's why I think everybody's gravitating toward saying, 'It's useful for these cases. It might be useful for these other cases. But we're not sure what the legal implications or the security implications of doing it are.'"
Eric Skinner, Trend Micro's vice president of market strategy and corporate development, told TechTarget Editorial he thought there would be a brief plateau in the technology's capabilities for the time being because some of the more obvious use cases have been "quickly knocked down."
"There was a quick assessment that this [technology] actually targets a real problem today. There are junior employees in security teams who are struggling with the bombardment of alerts, and so you can apply it to that problem," he said. "I think this plateau is short lived, because there are going to be the next wave of more innovative, more unusual use cases, for this technology. And the same thing is going to happen with the attackers, so we're going to see attackers figuring out some interesting ways to use generative AI."
Alexander Culafi is a writer, journalist and podcaster based in Boston.