Artificial intelligence, along with machine learning and other deep analytics strategies, is seeing a resurgence among enterprise technology practitioners. Companies are using AI to do everything from improving customer experience to optimizing supply chains, but one of the most widespread use cases for AI is improving an organization's cybersecurity stance.
That said, CIOs and other enterprise technology practitioners need to clearly understand where AI and ML can and can't assist cybersecurity initiatives. To clarify this, it makes sense to look at a range of use cases or scenarios in which AI and ML are good fits. For a fuller picture of potential cybersecurity scenarios, CIOs should have their teams assess the enterprise against a standard framework; one of the best is the Mitre ATT&CK framework, which provides guidance on the types of attacks to which an enterprise may be vulnerable.
Cybersecurity's AI use cases
There are some scenarios where AI and ML stand out as highly effective cybersecurity techniques:
Log analysis. AI is ideal for problems that require automated correlation and assessment of large volumes of data. The challenge for cybersecurity professionals is often to translate information (the output of device, network and system logs) into knowledge (security alerts). Human security analysts don't have the mental or physical bandwidth to process these high-volume data streams and determine which combinations of data points equate to security alerts or events.
AI tools can find commonalities across disparate data feeds and convert data points into actionable events for analysts, thereby reducing the time required to uncover and respond to attacks. Log analysis tools that rely on AI and ML include products from Splunk, SolarWinds and LogRhythm.
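To make the correlation idea concrete, here is a deliberately simplified sketch (not any vendor's implementation; the feeds, IPs and messages are hypothetical) that merges two log feeds and flags source IPs appearing in both within a short window -- turning two raw log lines into one actionable alert:

```python
from datetime import datetime, timedelta

# Two hypothetical log feeds: (timestamp, source_ip, message)
firewall_log = [
    (datetime(2023, 1, 1, 2, 14), "203.0.113.9", "blocked port scan"),
    (datetime(2023, 1, 1, 9, 30), "198.51.100.4", "allowed https"),
]
auth_log = [
    (datetime(2023, 1, 1, 2, 16), "203.0.113.9", "failed ssh login"),
    (datetime(2023, 1, 1, 9, 31), "192.0.2.7", "successful login"),
]

def correlate(feed_a, feed_b, window=timedelta(minutes=5)):
    """Flag source IPs seen in both feeds within a short time window --
    a single alert for an analyst instead of two disconnected log lines."""
    alerts = []
    for ts_a, ip_a, msg_a in feed_a:
        for ts_b, ip_b, msg_b in feed_b:
            if ip_a == ip_b and abs(ts_a - ts_b) <= window:
                alerts.append((ip_a, msg_a, msg_b))
    return alerts

print(correlate(firewall_log, auth_log))
```

Production tools replace this brute-force join with learned models over many more feeds and features, but the core task -- converting correlated data points into events -- is the same.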
SOC automation. Combining AI with robotic process automation (RPA) can reduce the time required to react to critical events. Essentially, AI plus RPA means that security analysts can preconfigure automated responses to ensure that if the AI uncovers a certain scenario (scenario X), the appropriate action (action Y) will be undertaken.
The action can be fully automated, meaning without requiring human intervention, or it can require a human to review and approve the action before it's taken. Many security orchestration, automation and response (SOAR) vendors leverage AI in their products to deliver this kind of functionality.
One benefit of using AI in this context is that it enables these tools to learn over time. Initially, a response may require human input, but over time SOAR functionalities can capture and codify incident response processes into dynamic playbooks. Vendors offering SOAR tools include IBM, Jask, Demisto (Palo Alto Networks), Siemplify and ThreatConnect.
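The scenario-X-to-action-Y pattern can be sketched as a playbook lookup with an optional human checkpoint. The scenarios, action names and approval flags below are illustrative assumptions, not any SOAR vendor's API:

```python
# Minimal sketch of a preconfigured response playbook.
# Each scenario maps to an action plus a human-approval requirement.
PLAYBOOK = {
    "credential_stuffing": ("lock_account", False),  # fully automated
    "possible_data_exfil": ("isolate_host", True),   # analyst must approve
}

def respond(scenario, approved=False):
    """Return the response for a detected scenario, or a pending state
    when a human must review and approve the action first."""
    if scenario not in PLAYBOOK:
        return ("escalate_to_analyst", "no playbook entry")
    action, needs_approval = PLAYBOOK[scenario]
    if needs_approval and not approved:
        return ("pending_approval", action)
    return ("execute", action)

print(respond("credential_stuffing"))                 # automated path
print(respond("possible_data_exfil"))                 # waits for a human
print(respond("possible_data_exfil", approved=True))  # approved, so executes
```

In a real SOAR tool, the learning step described above would amount to gradually moving entries from the approval-required column to the fully automated one as analysts gain confidence in the responses.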
Behavioral threat analytics. Many organizations ask, "Is any system or user in my environment behaving in ways that it, or they, shouldn't?" This question is harder to answer than it appears because the definition of "ways they shouldn't" can be ambiguous. For example, if an accountant normally accesses a specific database from an on-premises office during normal working hours, then logs in at 3 a.m. from a home office, that may or may not constitute inappropriate behavior. She could either be downloading proprietary information in preparation for a job switch or simply working late to prepare for closing the quarter.
The broad category of behavioral threat analytics (BTA) is an area in which AI provides a much-needed assist. Products that deliver BTA include those classified as user behavior analytics (UBA) or user and entity behavior analytics (UEBA), such as tools from Securonix, Exabeam, Gurucul and Splunk. They also include those classified as extended detection and response (XDR), such as eSentire, CrowdStrike, LogRhythm and Palo Alto Cortex.
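The 3 a.m. login example can be framed as a baseline comparison: rather than blocking outright, score how unusual the behavior is relative to the user's history and let an analyst (or downstream automation) decide. The data and scoring rule below are hypothetical; real UEBA products model many more features than login hour:

```python
from collections import Counter

# Hypothetical history of an accountant's login hours (24-hour clock).
history = [9, 9, 10, 11, 14, 15, 9, 10]

def login_anomaly_score(history_hours, hour):
    """Fraction of past logins that did NOT occur at this hour.
    1.0 means the hour has never been seen; lower means more typical."""
    counts = Counter(history_hours)
    return 1.0 - counts[hour] / len(history_hours)

print(login_anomaly_score(history, 9))  # most common hour -> 0.625
print(login_anomaly_score(history, 3))  # 3 a.m. never seen -> 1.0
```

A high score alone doesn't prove malice -- as the example above notes, the 3 a.m. login could be quarter-close overtime -- which is why these scores typically feed an analyst queue rather than an automatic block.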
Digital forensics and auditing. AI can also assist cybersecurity initiatives in digital forensics and auditing. These efforts require sorting through large volumes of data to find patterns that can uncover the anatomy of attacks and help identify perpetrators. AI-based digital forensics providers include Exterro Smart Investigator, IBM, FireEye, LogRhythm and Paraben.
Threat hunting and monitoring. Threat hunting and monitoring is another great application for AI within cybersecurity. As the name implies, threat hunting and monitoring solutions review a range of data sources -- such as logs, information about an enterprise environment and external threat monitoring or threat intelligence feeds -- to quickly determine whether an enterprise is at risk of attack.
Many of these tools use predictive analytics and automated profiling to give enterprise cybersecurity practitioners threat warnings in advance. Vendors and solutions in this area include Cybereason, Cylance, Anomali, White Ops, Darktrace and Sovereign Intelligence.
Benefits of AI in cybersecurity
In all these use cases, there are common benefits to using AI for cybersecurity. The top one is speed: AI can sort through vast amounts of data far faster than human analysts. When coupled with automation techniques and RPA, this faster analysis can deliver faster actions -- thus reducing the mean time to contain a breach, a key cybersecurity metric.
The second benefit is robustness. AI isn't always right, but its algorithms are highly consistent, which minimizes the kinds of inconsistency-driven errors that are common in any human activity.
Another key reason to deploy AI in cybersecurity environments is that the bad guys are already doing so. If your adversaries are using an advanced technology, it's risky not to use the same technology yourself. Hackers, particularly nation-state attackers, are already deploying advanced AI to uncover vulnerabilities and launch attacks. Any enterprise that's not deploying AI in response is at risk.
The Sorcerer's Apprentice, and the downsides to AI
All that said, AI has its downsides just like all technologies. The biggest one is often referred to as the "Sorcerer's Apprentice" issue. This refers to the risk of setting in motion actions and consequences that are no longer controllable by humans.
There are multiple examples of this outside cybersecurity. For starters, AI algorithms can foment violence among social media users by stirring up intense emotions -- particularly fear and anger -- with the goal of generating more engagement. AI algorithms can also perpetuate racial bias by unintentionally encoding it. For example, early AI-based automatic handwashing machines in airports did not recognize dark skin and wouldn't function for people with darker skin tones.
There are two main approaches to mitigating these risks. The first is to build in human checkpoints that limit the autonomous actions of AI algorithms. That is, before an "automatic" response to an AI algorithm kicks in, a human must approve the response. The second is training. Most AI algorithms require training, and one of the best ways to prevent unanticipated responses is to train algorithms in the widest possible set of scenarios.
For instance, testing the handwashing algorithms with people who have a range of skin tones would have uncovered the skin tone bias and enabled these algorithms to be adjusted.
The bottom line is that AI has become an indispensable tool in the cybersecurity toolbox. Enterprise technologists should be moving to select and deploy the AI tools that are most appropriate for their use cases and environments.