Cybersecurity strategies are increasingly tapping into artificial intelligence technology, including machine learning, to detect threats quickly and determine an accurate response.
As the products mature, AI security technology is playing a big role in helping organizations fight off cyberattacks.
"We're at an evolutionary inflection point with AI and machine learning, where there's enough horsepower and compute and data to start finding security risks in new ways," said Brian Johnson, a former CISO who is now co-founder and general partner of Silicon Valley-based information security firm Crucyble.
Organizations today use machine learning in SIEM software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and applying logic to spot similarities to known malicious code, these systems can flag new and emerging attacks much sooner than human analysts or previous generations of technology could.
As a result, AI security technology both dramatically lowers the number of false positives and gives organizations more time to counteract real threats before damage is done.
"Every company has tons of logs to go through, and what machine learning does is do smart things like prioritize," said John Pescatore, director of emerging security trends at the SANS Institute, a nonprofit that specializes in security and cybersecurity training. "So even with the same size security staff, the odds are much higher that they're able to find the most dangerous events."
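The anomaly detection and prioritization the experts describe can be illustrated with a minimal sketch. This is not taken from any SIEM product; the field names (host, failed_logins) and the choice of a simple z-score against a historical baseline are illustrative assumptions standing in for the far richer models real tools use.

```python
# Hedged sketch: scoring log events against a baseline so the most
# anomalous ones rise to the top of an analyst's queue.
# Field names and thresholds are illustrative assumptions only.
from statistics import mean, stdev

def anomaly_score(value, baseline):
    """Z-score of an observed value against a historical baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return 0.0 if sigma == 0 else abs(value - mu) / sigma

def prioritize(events, baseline):
    """Rank events so analysts see the most anomalous first."""
    scored = [(anomaly_score(e["failed_logins"], baseline), e) for e in events]
    return [e for _, e in sorted(scored, key=lambda p: p[0], reverse=True)]

# Typical failed logins per host per hour, learned from history.
baseline = [2, 3, 1, 2, 4, 3, 2]
events = [
    {"host": "web-01", "failed_logins": 3},
    {"host": "db-02", "failed_logins": 45},   # possible brute-force attempt
    {"host": "app-03", "failed_logins": 1},
]

ranked = prioritize(events, baseline)
print(ranked[0]["host"])  # the spike on db-02 ranks first
```

The same staff reviewing a queue sorted this way spends its time on the outliers rather than wading through every event, which is the prioritization gain Pescatore describes.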
AI security moves into the enterprise
Large organizations that can afford to hire the needed number of data scientists and employees with AI experience are building their own AI and machine learning capabilities, drawing on their own vast data sets to create algorithms customized to their unique needs, Pescatore said.
Most organizations, however, see such smart technologies coming into their enterprises in the security products they're buying.
CISOs are seeking out such technologies: The PwC "2018 AI Predictions" report found that 27% of its 9,500 business and executive respondents were planning investments in security tools with AI or machine learning capabilities last year.
But even as executives turned to smart technologies to boost their cyberdefenses, experts said organizations should understand how to both maximize the value of AI in their cybersecurity practices and recognize its limits.
To do that, experts said CISOs should keep in mind the following things.
AI security tools won't reduce the need for skilled security professionals. Experienced workers will remain crucial to robust security and compliance programs, as they'll be the ones to analyze the potential threats identified by the AI tools. For example, only humans can understand context when analyzing issues and threats, security experts said.
"AI is a decision-support tool -- it should take some of the drudgery away from humans, but you always need a human being around to make the final call," said Steve Wilson, vice president and principal analyst with Constellation Research.
While security pros shouldn't worry about being completely replaced by AI, CISOs will be able to make better use of their employees by allowing them to focus on higher-value work instead of chasing false positives and handling mundane repetitive tasks, Pescatore said. As such, their security staff will need to be ready with the right skills to handle those increasingly complex tasks.
Actual investments are needed. Some security leaders and their executive colleagues hesitate to invest in AI and machine learning capabilities because they feel their existing systems are adequate.
However, these new products provide more efficient and effective results that can quickly deliver a return on investment, said Crucyble co-founder Johnson, who is also part of the Center for Strategic and International Studies, a think tank in Washington, D.C. Moreover, these new capabilities will be essential to combat emerging cyberattacks that also use AI.
Data silos must be broken down. Organizations need to train AI and machine learning systems on their own data, yet that data often resides in different places and must be brought together to give the systems the most accurate and complete picture of operations, risks and security posture.
"If the security operations group doesn't have data from the network operations group or the business system running in the cloud, then the AI tools can't help," Pescatore said.
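As a rough sketch of what breaking down those silos means in practice, the snippet below joins per-host records from two hypothetical sources, network operations and cloud logs, into one feature record per host. The source names and fields are assumptions for illustration; real pipelines would pull from log aggregators and normalize schemas first.

```python
# Hedged sketch: merging siloed telemetry into a single per-host view
# suitable for training or scoring. Source and field names are
# illustrative assumptions, not a real integration.
network_ops = {
    "web-01": {"conn_per_min": 120},
    "db-02": {"conn_per_min": 15},
}
cloud_logs = {
    "web-01": {"api_errors": 2},
    "db-02": {"api_errors": 57},
}

def merge_sources(*sources):
    """Combine records from each silo into one record per host."""
    merged = {}
    for source in sources:
        for host, fields in source.items():
            merged.setdefault(host, {}).update(fields)
    return merged

training_view = merge_sources(network_ops, cloud_logs)
print(training_view["db-02"])  # both silos' signals in one record
```

A model trained on the merged view can correlate signals across silos, such as a spike in API errors on a host with unusual connection patterns, that neither source would reveal on its own.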
Not all products will produce the same results. Vendors are increasingly touting their products' AI security capabilities, but CISOs should ask their vendors to demonstrate how well their products actually work. "They need a vendor who has evidence in how these tools work in the wild, not in the lab," Wilson said.
It's also important to remember to manage your AI tools once they are in place. "Don't ever set it and forget it," Wilson said. "It's never going to work like that."
Security policies and processes should be reviewed and improved. AI and machine learning capabilities are usually paired with increased automation, so Pescatore said organizations need to ensure their processes are efficient and effective. Otherwise, the technology is built on broken systems.
It's also important to note that AI and machine learning capabilities should be only one part of a diverse security program. In other words, don't rely completely on emerging AI security tech to protect data.
"Good CISOs still need to take a layered approach with diversity in their tools," Wilson said, noting that organizations should always ensure they have a multipronged approach to security, risk and compliance. "No AI tool is ever going to be the silver bullet for all your problems, so always have some diversity."