
Government officials: AI threat detection still needs humans

At the Ai4 Cybersecurity Summit, infosec professionals from CISA and the state of Tennessee discussed the promise and potential obstacles of AI for threat detection.

Artificial intelligence provides enormous benefits for cyber threat detection, but the technology can't do the job alone.

That was the primary message during a session at the Ai4 2022 Cybersecurity Summit featuring two government cybersecurity professionals -- Garfield Jones, associate chief of strategic technology for the Cybersecurity and Infrastructure Security Agency (CISA), and Peter Gallinari, data privacy officer for the state of Tennessee. The duo discussed the promise of AI threat detection and fielded questions about what they saw as the future of such technology, the potential challenges and how humans will fit into the picture.

Jones made it clear early in the panel that every cybersecurity system implementing AI will still require human involvement.

"My perspective on this is that AI definitely has a future in threat detection and response," Jones said. "I have to caveat that with, we are still going to be needing humans to be part of this. As we evolve these tools, and [they] start to learn and perform faster for detection and response, a human is definitely going to be needed in the loop.

"With the quick-changing dynamics of the threat, AI has really become more prevalent. The capability and the computing resources are definitely there to collect the data, to train the data, retrain the data, retrain the algorithm in the data to provide us with a strong solution when you're looking at the best course of action for the present detection."

Gallinari noted that while AI is already used for timely threat detection and response, one of its most valuable features is its ability to run practice tests against cybersecurity systems.

"The best thing to come to the table right away is how quickly to detect [and] how quickly can we remediate an issue and see if there are any downstream concerns," Gallinari said. "It also gives you the ability of AI pretending to be a hacker, which is really nice. [AI models] can pretend to be a hacker to see what's in the mindset of a hacker just by profiling the data, and they could get a better [incident response] report out of what they could see coming in and what's going out."

Jones said AI relies heavily on the data users feed it; once trained on that data, it can identify the most efficient responses to a given threat.

"Once it's learned the threats, it's able to give you the best response and tell the analysts, 'This is the best response for this type of threat,'" Jones said. "With machine learning, it's going to give you the best probability as far as which response will help mediate that threat."

Both speakers, however, pointed out that the data being fed to the AI can often create issues. Gallinari, for example, highlighted the issue of "poisoned data."

"[AI] is using data, a lot of data, from all different sources," Gallinari said. "You have to think about the security concerns around using AI from a data privacy standpoint. Think about autonomous cars. AI is telling the car where to go and how to go and when to stop. If that data was ever poisoned, or compromised, that car's not going to stop. It will run somebody over. You have to think about what are you doing with that data, how pure is that data, who controls that data, what's wrapped around it?"

While acknowledging the risks, Gallinari also discussed the advantages and threat detection efficiency that come with AI.

"We're seeing things change every day out there, not by the hour, but by the minute. They're getting hit with over 600 malware events per minute, on average -- who could handle that?" Gallinari said. "We will never get the info we need quick enough if AI and machine learning were not components of the environment.

"So, I think that it's going to play a great part in the soft rollover of being able to distribute meaningful information as long as it's correct information, and then using the right inputs and right outputs coming out. I think everyone benefits, and at the end of the day, we can react faster before it can corrupt the rest of our environment."

Jones talked about how AI is already being used to strengthen cybersecurity at CISA, specifically in terms of access control and authentication within a zero-trust network.

"For us, it's basically tracking and controlling every access request, if you're not familiar with a lot of the zero-trust principles," Jones said. "When you start looking at behavior and the dynamics of behavior, and where the bounds are, AI and machine learning is really helpful. It's going to give that probability of whether this person should have access to some part of that network, if they really do need access based on their profile.

"Yes, we're going to get some inaccuracies from the AI, we're going to get those false positives as I talked about, and we're going to start to get scores and everything else like that. But I think zero trust and AI, that's a marriage that's going to be together very soon."

Jones also noted that while AI can identify specific threats, the threat landscape changes constantly, and human analysts cannot be removed from the equation. Both he and Gallinari said AI threat detection systems need to be updated and maintained by human analysts to keep pace with emerging threats.

At the end of the panel, Jones said AI could be used not just to detect malware once it enters a network, but to predict attacks before they strike.

"I think it's going to go to the point where it's not just detection that we're looking at, but we also have AI when, once it learns enough, we're going to get to that prediction part of it, where it's threat prediction," he said. "I think that's where you'll see the bigger benefit."
