How to construct an effective security controls evaluation

Some CISOs believe their security controls are sufficient, but reach that conclusion without any method for measuring their effectiveness. There's a much better way.

I once received an ad from a company that promised to lower home energy costs by conducting a free energy audit. The audit, it said, could be done over the phone -- no home visit -- and would require absolutely "zero questions asked" -- i.e., about our current energy use, heating and cooling systems, insulation or anything else.

It struck me as objectively ridiculous. How can you reach a fact-based, evidence-driven conclusion without at least measuring something?

I bring this up because I see CISOs promising something similar with their security strategies. Namely, they say they can manage their security controls in the absence of important contextual knowledge, without information about control efficacy -- let alone efficiency -- and, in some cases, without any operational performance data at all. Yet, just like the information-free "energy audit," this approach undermines decision-making. Missing information means we pay more for an outcome that diminishes our control, makes no impact on reducing risk and yields poorer security overall.

By contrast, better measurement reduces risk. Contextualized performance information helps us understand how well controls perform relative to each other, which in turn makes investments more efficient and improves how we manage and operate those controls.

Let's take a look at how to better measure security controls, how to use the data collected to best effect and why a security controls evaluation matters in the first place.

Multiple angles of security control evaluation

To start, it's important to realize there are multiple dimensions, or vantage points, from which to measure controls -- and countless ways to measure control performance. The three dimensions I've found most helpful are:

  1. Effectiveness. Does the control work?
  2. Maturity. How reliable is the process supporting the control?
  3. Efficiency. How does the control perform economically?

The first area is perhaps the easiest to intuitively understand. Effectiveness assesses how well the control performs at its intended task. Is it implemented? Does it work? Is it appropriately scoped? Does it cover the portions of the environment we need it to?

If you were to conduct a compliance audit against a set of controls -- for example, the controls in ISO/IEC 27001:2022 Annex A, NIST Special Publication 800-53 or specific controls required by a regulatory framework such as PCI DSS -- this is the lens that would dominate the evaluation. Beyond measuring whether the control exists, though -- as you would for a regulatory compliance audit -- you also want to account for how well it performs. The specifics vary by control. Some controls might be measured by comparing rates of true and false positives against true and false negatives; others might be measured by remediated versus unremediated issues -- for example, quarantined versus unquarantined malware.
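As an illustrative sketch, effectiveness metrics like these can be computed directly from event counts gathered over a review period. The function and figures below are hypothetical, not a prescribed formula:

```python
# Illustrative effectiveness metrics for a detection control,
# computed from hypothetical event counts over a review period.

def effectiveness_metrics(tp: int, fp: int, fn: int,
                          remediated: int, total_issues: int) -> dict:
    """Summarize how well a control performs at its intended task."""
    detection_rate = tp / (tp + fn) if (tp + fn) else 0.0  # caught vs. missed events
    precision = tp / (tp + fp) if (tp + fp) else 0.0       # alerts that were real issues
    remediation_rate = remediated / total_issues if total_issues else 0.0
    return {
        "detection_rate": round(detection_rate, 3),
        "precision": round(precision, 3),
        "remediation_rate": round(remediation_rate, 3),
    }

# Example: a malware control that quarantined 940 of 1,000 confirmed samples
# while also raising 60 false alarms and missing 60 samples.
print(effectiveness_metrics(tp=940, fp=60, fn=60,
                            remediated=940, total_issues=1000))
```

Tracking these ratios over time, rather than as a one-off snapshot, is what turns a pass/fail audit result into a performance trend.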

The second dimension is the maturity of the implementation or processes that support the control's operation. Different processes -- even those designed to achieve the same or similar outcomes -- can have different levels of maturity. Consider two separate approaches to a single task -- for example, change management. One company might use a disorganized process for oversight, while another uses a well-documented, quantitatively measured one. Even if these processes perform equivalently, the more mature process has advantages that the less mature one does not -- for example, resilience to adverse events such as personnel attrition or process failure. This leads to, in aggregate, more predictable security outcomes.

How might you measure maturity? There are whole frameworks devoted specifically to this. For example, the Capability Maturity Model defines five levels of maturity:

  1. Initial. Unpredictable, ad hoc, reactive process.
  2. Managed. Planned with controlled requirements.
  3. Defined. Process documented and standardized.
  4. Quantitatively managed. Quantitative measurements -- i.e., metrics -- manage process.
  5. Optimizing. Continuous improvement loop in place.
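One lightweight way to operationalize this is a maturity register that maps each control's supporting process to a CMM level and flags the ones below a chosen threshold. The control names and levels here are hypothetical:

```python
# Hypothetical maturity register mapping each control's supporting
# process to a Capability Maturity Model level (1-5).

CMM_LEVELS = {1: "Initial", 2: "Managed", 3: "Defined",
              4: "Quantitatively managed", 5: "Optimizing"}

controls = {
    "change management": 4,
    "vulnerability scanning": 2,
    "access reviews": 3,
}

def below_defined(register: dict) -> list:
    """Flag processes below 'Defined' (level 3) -- these are less
    resilient to adverse events such as personnel attrition."""
    return sorted(name for name, level in register.items() if level < 3)

print(below_defined(controls))  # ['vulnerability scanning']
```

Even this coarse view gives you a prioritized list of processes to document and standardize first.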

The last dimension is efficiency -- more specifically, economic efficiency. As with maturity, how a company implements a control will yield different economic characteristics.

Once again, it's helpful to compare two implementations side by side. Take data discovery as an example. One company might use a software tool to find and flag files containing sensitive data, while another might pay hundreds of consultants to manually review individual files. Granted, no sane security program would use this second method. But this extreme example illustrates unambiguously that economic performance is not the same even if both approaches are equally effective and mature.

Indeed, the economic disparities are stark, ranging from initial startup costs and monthly and annual fees to Capex/Opex composition and total operating costs. To understand the economics of control performance, then, you need to understand and document each one. How? A useful starting point is the budget -- i.e., the actual hard dollars spent on any services or products involved in delivering a control, covering both year 1 and year n costs. Extend this to factor in soft costs -- e.g., head count required for support, staff time, etc. -- as well as any other required financial outlays, such as data center, compute, storage and bandwidth. Ultimately, the goal is to calculate the control's total cost of ownership (TCO) and use this as the unit cost in final risk/cost assessments.
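A minimal TCO sketch, assuming a one-time startup cost plus recurring fees, staff time and infrastructure over an n-year horizon (all figures below are hypothetical placeholders):

```python
# Sketch of a control TCO calculation over an n-year planning horizon:
# one-time startup cost plus recurring hard fees, soft costs (staff time)
# and infrastructure. All inputs are hypothetical placeholders.

def control_tco(startup: float, annual_fees: float,
                staff_hours_per_year: float, hourly_rate: float,
                annual_infra: float, years: int) -> float:
    """Total cost of ownership over `years`, year 1 included."""
    soft_costs = staff_hours_per_year * hourly_rate   # loaded staff time
    recurring = annual_fees + soft_costs + annual_infra
    return startup + years * recurring

# Example: $50K startup, $30K/year fees, 500 staff hours at $80/hour,
# $10K/year infrastructure, over a 3-year horizon.
print(control_tco(startup=50_000, annual_fees=30_000,
                  staff_hours_per_year=500, hourly_rate=80,
                  annual_infra=10_000, years=3))  # 290000.0
```

The point isn't the arithmetic, which is trivial, but forcing every control onto a comparable per-year cost basis so the risk/cost step later has a consistent denominator.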

Putting it all together

The next step in a security controls evaluation is to bring the information together. An approach I've used is to correlate these dimensions with quantitative risk scoring. This enables you to view controls through the lens of the amount of risk reduced -- quantitatively expressed -- per dollar invested, or, as security metrics guru Pete Lindstrom terms it, "risk reduced per unit cost."

This is valuable for a couple of reasons. First, it helps find underperforming controls and, once found, helps justify rescoping, realignment or even removal. If the idea of removing a control sounds scary or heretical, I get it. Security is, after all, a probabilistic discipline. Scenarios can arise where a legacy control -- even one that hasn't provided much value in years -- is precisely what would have prevented an attack.

But it's not realistic to keep every control around forever, which brings us to the second reason a risk/cost approach is valuable: opportunity cost. In this context, opportunity cost is what you could have done instead with the resources you invest in a given control. Unless you have an infinite budget -- and who does? -- every investment comes at the cost of other measures you could have taken but didn't.

Say an organization has an old legacy control offering little value in the current environment -- for example, a modem wardialer. For most organizations -- barring exceptional circumstances, such as industrial control networks or remote substation facilities -- a control like this provides negligible value in modern ecosystems. But consider how else the organization could invest those same resources. Container scanning? Secrets management? Cloud security posture management? A large language model gateway? These represent choices you could have made but didn't because the resources were already engaged.

Each of the three dimensions I covered earlier helps build your analysis. Effectiveness helps inform risk mitigation and, coupled with a quantitative risk modeling approach, can help you understand likelihood in a risk calculation. Likewise, maturity helps you understand impacts: lower maturity controls are less resilient, thereby increasing impact. Economic analysis helps you understand potential loss, opportunity cost and control selection.
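The three dimensions can be tied together in a back-of-the-envelope "risk reduced per unit cost" calculation. This is only a sketch under stated assumptions -- effectiveness discounts likelihood, low maturity (CMM below 3) discounts the reliability of the reduction, and TCO serves as the unit cost; the weighting scheme and all inputs are hypothetical, not Lindstrom's formula:

```python
# Hedged sketch of "risk reduced per unit cost." Assumptions:
# effectiveness discounts annual likelihood; immature processes (CMM < 3)
# are assumed to deliver only part of that reduction reliably; annualized
# TCO is the unit cost. All inputs below are hypothetical.

def risk_reduced_per_dollar(annual_likelihood: float, impact_dollars: float,
                            effectiveness: float, maturity_level: int,
                            tco_per_year: float) -> float:
    inherent_risk = annual_likelihood * impact_dollars       # ALE-style baseline
    residual_likelihood = annual_likelihood * (1 - effectiveness)
    residual_risk = residual_likelihood * impact_dollars
    reliability = min(1.0, maturity_level / 3)               # maturity discount
    risk_reduced = (inherent_risk - residual_risk) * reliability
    return risk_reduced / tco_per_year

# Example: a 40% annual likelihood of a $500K-impact event, a 90%-effective
# control backed by a level-3 process, costing $60K per year.
print(round(risk_reduced_per_dollar(
    annual_likelihood=0.4, impact_dollars=500_000,
    effectiveness=0.9, maturity_level=3, tco_per_year=60_000), 2))  # 3.0
```

A result above 1.0 means the control removes more quantified risk per year than it costs; comparing this ratio across controls is what surfaces the underperformers discussed above.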

Just as no reasonable person would accept an energy audit conducted without any actual data collection, security leaders shouldn't attempt to conduct a security controls evaluation without proper measurement. Yet, many enterprise security strategies do exactly this. They make decisions about control investments without understanding effectiveness, maturity or efficiency. That results in unnecessary risk and wasted resources.

The solution is straightforward: Evaluate controls by measuring their effectiveness -- does the control work as intended and cover what it needs to; their maturity -- how resilient and predictable are the processes supporting the control; and their efficiency -- what's the TCO.

When you bring these dimensions together, you can calculate, articulate and defend what truly matters: risk reduced per dollar invested. This gives you the ammunition to identify underperforming controls worth removing, leads you to better risk-based decision-making and reveals opportunity costs -- the better security investments you could have made with those same resources. In short, measurement is the key to building a security program that reduces risk efficiently.

Ed Moyle is a technical writer with more than 25 years of experience in information security. He is a partner at SecurityCurve, a consulting, research and education company.
