Google to discuss LLM benefits for threat intelligence programs

Large language models are the backbone of generative AI products launching in the security space. Google will discuss how best to integrate the technology at this week's Black Hat USA.

LAS VEGAS -- At Black Hat USA 2023, Google will demonstrate how organizations can best utilize large language models, such as those used in generative AI products, to benefit their threat intelligence programs.

The Thursday session, titled "What Does an LLM-Powered Threat Intelligence Program Look Like?," will be hosted by Google Cloud data scientist Ron Graf and John Miller, Mandiant's head of intelligence analysis. Artificial intelligence technologies and LLMs such as Google's PaLM and OpenAI's ChatGPT are poised to be major focal points at this year's Black Hat conference, starting with an opening keynote Wednesday morning from Maria Markstedter, founder of infosec training firm Azeria Labs.

Google's session will, according to the conference website listing, "evaluate how this advancement aligns with a framework for CTI [cyber threat intelligence] program capabilities, and assess how security leadership can factor the emergence of LLMs into aligning their CTI functions' capabilities with their organizations' needs."

AI dominated RSA Conference 2023 in April, as a number of vendors launched generative AI-powered products and features. IBM, for example, announced QRadar Suite, a subscription service for AI-driven threat detection, while Google launched Google Cloud Security AI Workbench, a security suite that uses generative AI to enable services such as prioritized breach alerts and automatic threat hunting.

During a pre-briefing, Graf told TechTarget Editorial that to use LLM-based technologies effectively and see a return on investment, an organization must carefully consider implementation. If done well, "it can result in exploiting data sources that you're often overlooking," he said, such as translating log and packet data into something human-readable.

"The tasks that are best suited for LLMs are high volume of text-type tasks that require less critical thinking," Graf said. "Specific examples could be very basic malware reverse engineering reports, where instead of having an analyst pore over lines of assembly, you could engineer a process where the LLM processes the assembly from the malware sample and produces a report for humans."

Graf added that because LLMs can misinterpret input and hallucinate, organizations must apply critical thinking and a framework when using the technology. "If you're short on time and the LLM comes back with something completely fabricated, it won't result in some crazy repercussion where you've shut down your production network or something like that," he said.

Graf and Miller emphasized that LLMs are best deployed as a companion to existing workflows, in lower-stakes tasks where a quick initial analysis can speed up an organization's processing. Miller called it the "low-hanging fruit." Examples include reviewing log data and answering stakeholder questions in an accessible way.
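As an illustration of that companion pattern, the short Python sketch below batches raw log lines into a plain-language summary an analyst or stakeholder can skim. It reuses the hypothetical `call_llm` placeholder from the earlier example, and the prompt wording is an assumption rather than anything from the session.

```python
# Hypothetical low-stakes companion task: summarize raw log lines in
# plain English. The output only informs a human and triggers no action,
# so a hallucinated summary cannot shut anything down.
def summarize_logs(log_lines: list[str], call_llm) -> str:
    prompt = (
        "Summarize the following security log lines for a stakeholder who "
        "is not a security expert. Group repeated events and highlight "
        "anything unusual:\n\n" + "\n".join(log_lines)
    )
    return call_llm(prompt)
```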

Miller said he wants the audience to come away with the feeling that LLM implementation has been "demystified."

"What I hear now are people saying their senior leadership is asking if a product is going to save millions of dollars in the next budget. And the hopefully helpful takeaway is that they can confidently speak to what the answer is," he said. "And the answer is, there's a lot of opportunity right now for organizations to figure out how to deliver improved security outcomes with the resources they already have."

Miller cautioned that while LLMs can provide valuable assistance for existing CTI programs, they won't replace an organization's team of experts. But they may give infosec professionals the ability to show a higher return on investment for their existing security resources.

While the cybersecurity industry has rapidly embraced LLMs and generative AI following the launch of ChatGPT, there has been little insight thus far into how effective the technology can be for security functions within enterprises. In June, security experts shared their thoughts with TechTarget Editorial on the rise of generative AI and LLMs and debated whether emerging products are more the result of technological innovation or product messaging.

Alexander Culafi is a writer, journalist and podcaster based in Boston.

Next Steps

Google and Mandiant flex cybersecurity muscle at mWISE
