Calculating the ROI of AI in cybersecurity
Investing in AI tools can benefit an organization's security posture. Understanding and quantifying those improvements, however, poses a real challenge.
As with many technologies, AI and cybersecurity are becoming increasingly intertwined. An organization can expect AI to support the cybersecurity mission in multiple ways, including reducing overall risk, boosting efficiency and making security more cost-effective.
What's not easy to determine is the ROI of AI cybersecurity investments.
Measuring AI's ROI: Metrics matter
When it comes to AI investments in cybersecurity, the ROI conversation must begin with the right metrics. Not all value shows up on a balance sheet, so security leaders need to think across three distinct categories: efficiency gains, risk reduction and cost avoidance.
Efficiency gains are often the most immediate and measurable category. AI can effectively multiply the capacity of a security team without adding head count. Rather than asking how many people AI replaces, ask how many more actions your existing team can take with AI's assistance. The metric here is throughput: the number of incidents investigated, configurations reviewed or alerts triaged per analyst per day, before and after AI deployment.
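As a rough illustration, the before-and-after throughput comparison might be tallied like this. All figures and names below are hypothetical, not benchmarks:

```python
# Hypothetical sketch: comparing analyst throughput before and after
# an AI deployment. Every number here is an illustrative assumption.

def throughput_per_analyst(actions_handled: int, analysts: int, days: int) -> float:
    """Actions (alerts triaged, incidents investigated) per analyst per day."""
    return actions_handled / (analysts * days)

# Before AI: 10 analysts handled 4,500 alerts over a 30-day period.
before = throughput_per_analyst(4_500, analysts=10, days=30)  # 15.0 per analyst/day

# After AI: the same 10 analysts handled 9,000 alerts in 30 days.
after = throughput_per_analyst(9_000, analysts=10, days=30)   # 30.0 per analyst/day

uplift = (after - before) / before
print(f"Throughput uplift: {uplift:.0%}")  # Throughput uplift: 100%
```

The point of the sketch is that the denominator stays fixed: the team did not grow, so any uplift is attributable to capacity multiplication rather than hiring.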
Risk reduction is harder to quantify, but it is arguably more important for conversations with the board. Relevant metrics include mean time to detect (MTTD), mean time to respond (MTTR), reduction in the number of unaddressed vulnerabilities over a given period, and improvements in coverage across the attack surface. Security leaders should also track whether AI is closing the gap on configuration and patch management work that used to slip through the cracks. The familiar refrain, "We didn't catch that because we didn't have enough people," points to exactly the staffing gap these metrics can show AI closing.
The third category is cost avoidance. This includes avoided breach costs, reduced reliance on outside professional services for routine security hygiene and the cost differential between scaling AI capabilities and scaling head count to achieve the same outcomes. Reports from Gartner, IBM and others provide useful industry benchmarks about the costs of data breaches that CISOs can use to anchor these estimates.
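Putting the three categories together, a back-of-the-envelope annualized ROI can be computed with the classic formula, (benefit − cost) / cost. The sketch below is purely illustrative; every dollar figure is an assumed placeholder, not a benchmark:

```python
# Hypothetical sketch of a simple annualized ROI calculation for an AI
# security tool. Every dollar figure below is an illustrative assumption.

def simple_roi(total_benefit: float, total_cost: float) -> float:
    """Classic ROI: (benefit - cost) / cost, expressed as a fraction."""
    return (total_benefit - total_cost) / total_cost

# Assumed annual benefits (dollars):
avoided_breach_cost = 400_000     # expected-loss reduction, anchored to
                                  # industry breach-cost benchmarks
reduced_services_spend = 120_000  # less reliance on outside professional services
headcount_avoided = 180_000       # hires deferred by scaling AI instead of staff
benefits = avoided_breach_cost + reduced_services_spend + headcount_avoided

# Assumed annual costs: licensing plus the human review the tool requires.
costs = 250_000 + 75_000

print(f"ROI: {simple_roi(benefits, costs):.0%}")  # ROI: 115%
```

Note that the cost side deliberately includes the human review and validation overhead discussed later; omitting it is a common way ROI projections overstate the return.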
The challenges of calculating ROI
Even with the right metrics defined, calculating ROI for AI in cybersecurity is genuinely difficult.
When a breach does not occur, it's nearly impossible to prove definitively that AI prevented it. Security has always struggled with this counterfactual challenge, and AI doesn't solve it -- it inherits it. The best approach is to establish clear baselines before deployment and track directional improvement over time rather than claiming precision that simply is not achievable.
ROI calculations are also complicated by shadow AI. Measuring the return on sanctioned AI security tools without accounting for AI deployments that create risks elsewhere will yield misleading results. Creating a complete inventory of AI usage -- sanctioned and unsanctioned -- is a prerequisite for any credible ROI analysis.
Another challenge is that AI outputs are not always reliable enough to act on. Organizations are confronting this in real time. For security use cases where a bad recommendation could take down a manufacturing line or open an attack vector, reliability isn't optional. ROI calculations need to factor in the cost of human review and validation that responsible AI deployment requires.
AI tools perform based on the quality of the data, processes and people they operate against. Organizations that lack clean asset inventories, consistent logging or mature detection workflows will see lower returns than those that have done the foundational work. ROI projections that don't account for an organization's starting point tend to disappoint.
Best practices for calculating and maximizing ROI
Getting the numbers right matters, but so does ensuring that AI investments deliver. Here's how leading CISOs approach both.
Start with business outcomes, not technology
Before deploying any AI capability, define the specific security problem you intend to solve. Decide what success looks like in measurable terms. This discipline makes ROI measurement straightforward because the metrics are defined before deployment, not retrofitted later.
Design with a human-in-the-loop mindset
Organizations seeing the best results from AI in cybersecurity are not trying to remove humans from the equation. They use AI to make human judgment faster and better informed. This design is not just good risk management. It also makes ROI easier to measure because it becomes possible to track how often and how quickly AI-generated recommendations are acted on -- and to what effect.
Report ROI in the language of your audience
CISOs presenting to the board need to translate security metrics into business outcomes: reduced risk, avoided costs and improved competitive positioning. When presenting to their team, a security leader needs to show how AI is making the work more impactful -- not threatening people's roles. Tailoring the ROI story to the audience is as important as the underlying data.
Establish baselines before deployment
It is impossible to demonstrate ROI without a before-and-after comparison. Document the relevant metrics, such as MTTD, MTTR, analyst-to-alert ratios and open vulnerability counts, before turning on any AI capability. These baselines serve as the foundation for every subsequent ROI conversation.
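A minimal sketch of what baseline tracking can look like in practice: record the pre-deployment values, then report directional change after deployment. The metric names and numbers here are illustrative assumptions, not recommended targets:

```python
# Hypothetical sketch: pre-deployment baselines vs. current values for the
# metrics named above. All figures are illustrative assumptions.

baseline = {"mttd_hours": 18.0, "mttr_hours": 36.0,
            "alerts_per_analyst_day": 15.0, "open_vulns": 420}

current = {"mttd_hours": 9.0, "mttr_hours": 30.0,
           "alerts_per_analyst_day": 30.0, "open_vulns": 310}

def pct_change(before: float, after: float) -> float:
    """Directional change relative to the pre-deployment baseline."""
    return (after - before) / before

for metric in baseline:
    delta = pct_change(baseline[metric], current[metric])
    print(f"{metric}: {delta:+.0%}")
# mttd_hours: -50%
# mttr_hours: -17%
# alerts_per_analyst_day: +100%
# open_vulns: -26%
```

Reporting deltas rather than absolute figures reflects the earlier point: directional improvement against a documented baseline is defensible, while precise attribution is not.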
Revisit and recalibrate regularly
AI capabilities and the threat landscape they are designed to address evolve rapidly. An ROI framework that was relevant six months ago might need to be updated. Build quarterly reviews into AI investment governance processes and be willing to reallocate if certain tools underperform relative to their costs.
Ashwin Krishnan is the host and producer of StandOutIn90Sec, based in California, where he interviews tech leaders, employees and event speakers in short, high-impact conversations.