AI's growing cybersecurity role

Artificial intelligence capabilities are increasingly used to detect cybersecurity threats. As threats proliferate, AI cybersecurity capabilities will likely be the norm.

Cybersecurity teams are eager to deploy AI-powered tools, driven by factors including persistent hiring challenges. Vendors are just as eager to sell them, whether to position new products as more capable of meeting rising needs than older iterations or to stand out from competitors.

Machine learning and other AI techniques are applied to multiple aspects of cybersecurity, including anomaly detection, false-positive reduction and behavioral threat analytics. They can also drive rapid, accurate responses to compromises.
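
To make false-positive reduction concrete, here is a minimal sketch of supervised alert triage: a model scores incoming alerts by how likely they are to be true positives so analysts review the riskiest ones first. The features, training data and thresholds are invented for illustration, not drawn from any particular product.

    # Hypothetical alert-triage sketch: rank alerts by estimated probability
    # of being a true positive, so analysts review the riskiest ones first.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    # Assumed features per alert: [severity, bytes_out, failed_logins, off_hours]
    X_train = np.array([
        [3, 120_000, 0, 0],    # benign examples labeled 0
        [5, 9_500_000, 7, 1],  # confirmed incidents labeled 1
        [2, 80_000, 1, 0],
        [4, 4_200_000, 5, 1],
    ])
    y_train = np.array([0, 1, 0, 1])

    model = GradientBoostingClassifier().fit(X_train, y_train)

    new_alerts = np.array([[4, 6_000_000, 6, 1], [1, 30_000, 0, 0]])
    scores = model.predict_proba(new_alerts)[:, 1]  # estimated P(true positive)
    for alert, score in sorted(zip(new_alerts.tolist(), scores),
                               key=lambda pair: -pair[1]):
        print(f"score={score:.2f} features={alert}")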

AI for identifying threats

The basic challenge of cybersecurity is seeing enough of what is going on to determine when things are happening that shouldn't. This means detecting anomalies in the logs of system and network events that stream from every piece of an infrastructure, as well as from all the major application and cloud services and environments. AI tools never get bored or exhausted: they can pay unwavering attention to event data streams, correlate what they see across sources and learn from observations made in other environments and shared via threat feeds.
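
As an illustration of that kind of unattended watching, the sketch below trains an unsupervised anomaly detector (scikit-learn's IsolationForest) on features summarizing historical log activity and flags new activity that falls outside the learned pattern. The features and contamination rate are assumptions for the example.

    # Minimal sketch: unsupervised anomaly detection over log-derived features.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Assumed features per event window: [events_per_min, distinct_ips, auth_failures]
    baseline = rng.normal(loc=[200, 12, 1], scale=[20, 3, 1], size=(1000, 3))

    detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    new_windows = np.array([
        [210, 11, 0],    # looks like normal traffic
        [950, 140, 60],  # burst of failures from many IPs
    ])
    labels = detector.predict(new_windows)  # 1 = normal, -1 = anomalous
    for window, label in zip(new_windows, labels):
        status = "ANOMALY" if label == -1 else "ok"
        print(status, window)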

This makes AI central to the most important kind of security analysis: behavioral threat analytics (BTA). BTA, sometimes called user and entity behavior analytics (UEBA), looks at event streams with a focus on individual actors. Rather than only flagging behavior that is unusual for the environment as a whole, it analyzes how a specific person or system behaves at a specific time and in a specific context. As more organizations aim to implement a zero-trust architecture (ZTA), BTA will come to the forefront as the critical link between ongoing behavior in an environment and the enterprise trust map that ZTA relies on: BTA tells the environment when to reduce or remove an entity's right to operate within it.
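
A toy version of the per-entity idea behind BTA/UEBA is shown below: keep a rolling baseline for each user and score new activity by how far it deviates from that user's own history. The metric (megabytes downloaded), window size and z-score threshold are hypothetical.

    # Toy per-entity baseline: flag activity that deviates from a user's own history.
    from collections import defaultdict, deque
    import statistics

    WINDOW = 50      # events kept per user (assumed)
    THRESHOLD = 3.0  # z-score above which we flag (assumed)

    history = defaultdict(lambda: deque(maxlen=WINDOW))

    def score_event(user: str, mb_downloaded: float) -> bool:
        """Return True if this event is unusual *for this user*."""
        past = history[user]
        unusual = False
        if len(past) >= 10:  # need some history before judging
            mean = statistics.fmean(past)
            stdev = statistics.pstdev(past) or 1.0
            unusual = (mb_downloaded - mean) / stdev > THRESHOLD
        past.append(mb_downloaded)
        return unusual

    for mb in [5, 7, 6, 4, 8, 6, 5, 7, 6, 5, 900]:
        if score_event("alice", mb):
            print(f"alice downloaded {mb} MB -- far outside her baseline")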

Other aspects of anomaly detection include:

  • threat hunting;
  • fraud detection (not a cybersecurity threat per se);
  • malware discovery; and
  • phishing detection (see the sketch below).
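
As one concrete example from this list, a phishing detector can be sketched as a simple text classifier -- TF-IDF features feeding a logistic regression model. The tiny training set below is invented purely for illustration.

    # Toy phishing classifier: TF-IDF text features + logistic regression.
    # The training messages and labels below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    messages = [
        "Your account is locked, verify your password now",   # phish
        "Urgent: confirm your banking details at this link",  # phish
        "Team lunch is moved to 1pm on Thursday",              # legit
        "Here are the meeting notes from this morning",        # legit
    ]
    labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(messages, labels)

    # Probability that a new message is phishing
    print(clf.predict_proba(["Please verify your password immediately"])[:, 1])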

AI for containing threats

Automated responses to security compromises are quick, enterprise-scale and highly reliable. Because AI systems can act in ways that are sensitive to context, they can improve on traditional security automation.

The main reason cybersecurity professionals give for not leaning more heavily on fully automated responses to detected compromises is trust: they don't trust their own or their vendors' ability to automate responses that are both fully responsive and responsible. In their experience, it has been acceptable to automate a raft of security actions in response to ransomware spreading across company networks and to put that automation behind a metaphorical "big red button."

A security operations center (SOC) employee can press the button when that kind of attack is detected. It has not been OK, though, to close that loop by allowing software to decide to press the button. Cybersecurity professionals tell us they haven't been able to build in enough rules for deciding what to do in such a moment, and they aren't confident they can even know in advance all the conditions those rules would need to account for.
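
The "big red button" pattern can be pictured with a sketch like the following: the response playbook is fully scripted, but executing it still requires explicit human confirmation. The function names and actions are hypothetical stand-ins, not any specific SOAR product's API.

    # Sketch of a scripted ransomware playbook kept behind a human decision.
    # The action functions are hypothetical stand-ins for real integrations.

    def isolate_host(host: str) -> None:
        print(f"[action] isolating {host} from the network")

    def disable_account(user: str) -> None:
        print(f"[action] disabling account {user}")

    def snapshot_evidence(host: str) -> None:
        print(f"[action] capturing forensic snapshot of {host}")

    def ransomware_playbook(host: str, user: str) -> None:
        """The 'big red button': everything is automated except the decision."""
        snapshot_evidence(host)
        isolate_host(host)
        disable_account(user)

    def on_ransomware_detected(host: str, user: str) -> None:
        # Detection is automated; execution waits for a SOC analyst's approval.
        answer = input(f"Ransomware suspected on {host} ({user}). Run playbook? [y/N] ")
        if answer.strip().lower() == "y":
            ransomware_playbook(host, user)
        else:
            print("Playbook not executed; alert escalated for manual review.")

    if __name__ == "__main__":
        on_ransomware_detected("srv-042", "jsmith")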

The hope is that AI systems will have some baseline of broader understanding of how to respond to attacks while preserving as many normal operations as possible. They should be able to take actions ranging from temporarily blocking a single account's access to a system to fully quarantining nodes at the network level, in a way that fully contains a threat -- all with minimal impact on users, systems and the business.
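
One way to picture such graded responses is a policy that maps detection confidence and asset criticality to increasingly disruptive actions, preferring the least disruptive action that still contains the threat. The thresholds and action names below are assumptions for illustration.

    # Sketch: map detection confidence and asset criticality to a graded response.
    # Thresholds and action names are illustrative assumptions.
    from enum import Enum

    class Action(Enum):
        MONITOR = "log and keep watching"
        STEP_UP_AUTH = "force re-authentication for the account"
        BLOCK_ACCOUNT = "temporarily block the account"
        QUARANTINE_HOST = "quarantine the host at the network level"

    def choose_response(confidence: float, asset_critical: bool) -> Action:
        """Prefer the least disruptive action that still contains the threat."""
        if confidence < 0.5:
            return Action.MONITOR
        if confidence < 0.8:
            return Action.STEP_UP_AUTH
        # High confidence: contain harder, but weigh business impact.
        return Action.QUARANTINE_HOST if asset_critical else Action.BLOCK_ACCOUNT

    print(choose_response(0.65, asset_critical=True))   # Action.STEP_UP_AUTH
    print(choose_response(0.93, asset_critical=False))  # Action.BLOCK_ACCOUNT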

Why modern security will rely on AI

A common idea unites these use cases. For decades, IT has used security software to replace human hands in performing repetitive actions, but people -- with some help from tools -- have had to do all the event analysis, anomaly identification and event correlation required to discern indicators of real security problems from torrents of false alerts and other data. They have therefore been the only means of formulating and executing responses to breaches or attempted breaches. With the help of AI, software will be able to take over some of that human attention: it should be able to do some of the evaluation of events and enact the correct responses to maintain and, ideally, improve cybersecurity.

Human attention is the scarcest resource on any cybersecurity team. Teams are persistently challenged to find, train and retain staff, and, like most IT professionals, their members are typically overworked. The hope is that AI tools can relieve that burden by redirecting staff attention from low-level evaluation of myriad events to higher-level evaluation of anomalous ones. A well-executed zero-trust strategy should also help by producing an environment in which fewer anomalous events can take place.

AI cybersecurity platforms

Some AI is built into special-purpose security software under the hood. For example, RedSeal can analyze a network as built and find all the possible ways traffic could flow in it. Balbix, INFRA and Secureworks Taegis are all built to identify vulnerabilities in an environment. Other platforms, like Securiti, can uncover protected information at risk across multiple clouds.

Other systems provide AI assistance as a platform for broader security automation. Security orchestration, automation and response (SOAR) packages, as well as latter-day security information and event management (SIEM) and extended detection and response (XDR) systems, are being infused with AI to power anomaly detection. Vendors including Fortinet, Palo Alto Networks, Splunk and Swimlane have made AI a central feature of new and evolved products.

The infusion of AI into cybersecurity products and operations feels inevitable, given the realities of cybercrime, cyberactivism, the internet and the need to fix the cybersecurity staffing crunch. The speed of adoption will depend largely on how well the products deliver on their promises. Cybersecurity teams will need time to develop trust in the new capabilities, and significant misfires will likely turn into major delays in adoption.
