AI security concerns keeping infosec leaders up at night
Enterprises must manage an overabundance of cyberthreats, with new attack vectors emerging constantly. Machine learning and AI security concerns in particular keep security leaders up at night and further complicate an already daunting threat landscape.
The hand-wringing is not for nothing, either, according to Ariel Herbert-Voss, senior research scientist at OpenAI.
"The more popular machine learning gets as a tool to use for business, the more popular it gets [with] people who want to make money off of the exploitation of these kind of systems," Herbert-Voss said in a Black Hat USA 2020 conference session on adversarial AI.
Due to the hype surrounding this new technology, some decision-makers may assume they can resolve their organization's security risks by throwing AI at the problem. But it's not so simple, said Jessica Groopman, founder and analyst at Kaleido Insights. AI deserves the scrutiny of C-suite leaders, who, now more than ever, have their organization's security posture top of mind, Groopman said.
The best way to successfully implement AI in cybersecurity is to ensure CISOs factor in the potential drawbacks that could affect their organization's security posture. This context is critical to ensuring AI readiness.
In this video, explore the most significant enterprise AI security concerns -- and the potential implications of the technology on future infosec programs.
Editor's note: The following transcript has been lightly edited for clarity and brevity.
Fortifying security with AI has become critical for organizations to detect and respond to modern security threats. It's important to note: AI-enabled security tools serve to augment human security analyst roles, not replace them, Groopman said.
Groopman: You don't see cybersecurity AI agents taking over, you know, security operations centers today. It is very much kind of onboarding these tools, putting them into the hands of analysts, so that that use case of human augmentation or security analyst augmentation is really the most common starting point that we see.
Sophisticated AI capabilities in pattern detection -- for example, of user behavior or new attack types and vectors -- will be critical to information security programs, especially as the number of endpoints continues to increase. But AI cybersecurity deployment is not without its challenges. Just as organizations are incorporating AI into their security programs…
Groopman: … so are the various adversaries and bad actors. We cannot underestimate the sophistication of these new techniques, particularly when they are combined with social engineering.
Ensuring AI tool success will require significant skill-building. Analysts must be trained to use AI to triage and prioritize threat intelligence. But the need for education doesn't stop there.
Groopman: Something that gets lost in the discussion is the need for training the broader set of employees. If you think of any threat landscape as sort of the culmination of many different nodes -- whether people, devices, equipment, whatever -- the weakest link is the weakest node.
In addition to AI literacy, there is also the challenge of what Groopman called "AI explainability."
Groopman: This notion of AI explainability, which is sort of a term for how do we understand why machine learning models make the recommendations or assign the scores or deliver the outputs that they do.
Infosec pros can easily communicate suspicious patterns identified by AI, but they must also be able to explain how the technology reached its conclusions.
AI in cybersecurity is in its early stages, so not all the long-term security implications are clear yet. But, as with any emerging technology, the right training, communication and caution are key to a successful and secure deployment.