E-Handbook: Employ AI for cybersecurity, reap strong defenses faster Article 4 of 4



AI-driven cybersecurity teams are all about human augmentation

AI is often associated with technology replacing humans. In the case of AI-based cybersecurity teams, however, AI will augment its human counterparts, not supplant them.

Cyberthreats are accelerating and widening in scope, from enterprise assets and elections to health data and physical infrastructure -- not to mention the unforeseen effects of emerging technologies. With global infosec spending expected to reach $170 billion by 2022, all eyes are on the cybersecurity industry to deliver better and more resilient defenses.

These tools are increasingly powered by AI, an umbrella term for machine learning algorithms and related techniques that scale threat analysis and triage, better understand anomalies, automate response and, most importantly, develop proactive measures. Yet, unlike many other industries wherein process automation and AI foretell job losses, the future of threat intelligence is one of human augmentation, not displacement. Our research finds three main reasons for this.

1. To augment threat triage and prioritize threats

Today's AI-powered security tools use machine learning to augment security analysts and security operations centers (SOCs) in two important ways:

  1. By automating repetitive work, such as tedious data enrichment and the triage of low-risk alerts.
  2. As a result of the first, by raising the baseline for threat intelligence so that human analysts start with higher-order threats.

Taken together, these effects create a third benefit. Security analysts historically had to spend hours compiling threat analysis reports, which are more about awareness and understanding than about mitigating the risk itself. Offloading lower-risk tedium thus frees human analysts for higher-value decision-making, which is not only beneficial from a risk mitigation perspective, but essential given the growing scope and complexity of today's constantly recalibrating threat landscape. Although automated techniques are better than humans at managing the volume of potential threat vectors, human analysts remain essential arbiters of controls, context, knowledge and explainability.
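The triage pattern described above -- automatically closing low-risk noise and escalating only higher-order threats to humans -- can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the field names, scores and threshold are hypothetical stand-ins for whatever an ML model or enrichment pipeline would actually produce.

```python
def triage(alerts, low_risk_threshold=0.3):
    """Split alerts into auto-closed low-risk noise and
    escalated higher-order threats for human analysts."""
    escalated, auto_closed = [], []
    for alert in alerts:
        if alert["risk_score"] < low_risk_threshold:
            auto_closed.append(alert)  # repetitive, low-risk: handled automatically
        else:
            escalated.append(alert)    # human analysts start here
    # Present the riskiest alerts to analysts first
    escalated.sort(key=lambda a: a["risk_score"], reverse=True)
    return escalated, auto_closed

alerts = [
    {"id": "a1", "risk_score": 0.05},
    {"id": "a2", "risk_score": 0.92},
    {"id": "a3", "risk_score": 0.40},
]
escalated, auto_closed = triage(alerts)
print([a["id"] for a in escalated])    # → ['a2', 'a3']
print([a["id"] for a in auto_closed])  # → ['a1']
```

The point of the sketch is the division of labor: the machine absorbs the volume, while the sorted, filtered queue is where human context and judgment take over.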

2. To supplement the talent gap

Human augmentation is critical to addressing the cybersecurity skills shortage. With the accelerating and widening trajectory of cyber-risks comes a parallel need for experienced security analysts. ESG reported that more than 50% of businesses globally cite "problematic shortages" of security workers, and (ISC)2 predicted that the worldwide security workforce needs to grow by 145% to meet current demand.

AI-powered tools will never close this gap by themselves, but using them for automated big data analysis, reporting and triage is critical to mitigating an already dire talent shortage. In effect, such tools offer a force multiplier for the current and next generation of AI-driven cybersecurity analysts because they:

  • are now table stakes given exponential growth of data, endpoints and threat vectors;
  • free up existing analysts and focus next-gen analysts on higher-order investigative tasks that demand nuance, pattern recognition, creativity and expertise;
  • extend the reach of individual analysts -- less time is spent simply understanding what is going on and more time is spent mitigating and addressing risks;
  • increase productivity to free up senior analysts to mentor junior analysts; and
  • create threat analyses which, in aggregate, may help SOCs and national efforts toward stronger, multilateral, more proactive cyberthreat defense.

3. To extend 'democratized' security protection

The longer-term impact of human augmentation via AI-driven cybersecurity has far less to do with technology and more to do with people. It parallels an adjacent trend known as data democratization, in which organizations aim to better and more broadly activate enterprise data by empowering employees -- data specialists and average end users alike -- to contribute and extract insights without external assistance. Put simply, longer-term offense and resilience to attacks requires a culture of security in which every employee is trained, equipped and empowered. Outnumbering bad actors with a security-minded workforce may well be the best defense over time.

The supply side of the AI-powered security tools market is already evolving in this direction -- for example, in UX and UI, in integration with other software suites, in multilanguage support, and in incorporating explainability into SIEMs and security orchestration, automation and response (SOAR) platforms. Meanwhile, on the adoption side of the AI-enabled cybersecurity market, there is a growing shift toward security and privacy by design across product teams; emerging employee interfaces and safety investments, such as augmented reality and audio; and a rising awareness of how bad actors employ social engineering, which informs proper defense. Just as the broader democratization of data relies on both vendor support and enterprise culture, training and investments, so, too, does the democratization of security protection.

Enterprises must constantly work to expand their arsenals, efficacy and security strategies, just as nefarious actors are doing. But rather than joining a race to the bottom in cyberwarfare, organizations can play a vital role, one whose importance we may not yet fully grasp: augmenting people with tools for protection and resilience in the digital age.
