Return on security investment: The risky business of probability

You are better off with real numbers when it comes to measuring probability and the elements of security risk, even if they are wrong.

Pete Lindstrom

As most of us know, return on security investment is basically the amount of risk reduced, less the amount spent, divided by the amount spent on controls. Net amount of risk per amount of control is the essential formula for any "return on" ratio -- return on investment, equity, assets and so on. (It isn't like this stuff is just made up; there's history and an interest in consistency here.)
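The formula above can be sketched in a few lines of Python. The figures here are hypothetical, purely to show the arithmetic:

```python
def rosi(risk_reduced, cost):
    """Return on security investment: net risk reduction per dollar spent on controls."""
    return (risk_reduced - cost) / cost

# Hypothetical example: a control costing $50,000 that reduces
# annualized risk (probability x impact) by $120,000.
print(rosi(120_000, 50_000))  # 1.4
```

As with any "return on" ratio, a positive result means the control reduced more risk than it cost; zero is breakeven.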

The challenge for technology risk management professionals is really a gut check: Are we really, truly reducing risk by the amount we are spending on security? As I noted in my November column, first, realize that you are making that assertion every time you allocate resources to some function. So take a step back and verify that your recent spending -- salaries, operating expenses, capital investments -- actually bought at least that much risk reduction.

But breakeven is never good enough, and we really haven't gotten to the bottom of the individual values of probability and impact (the elements of risk). It's useful -- perhaps even crucial -- to have an objective understanding of these values, especially because risk can generate a lot of emotion.

Real probability numbers

The first thing to recognize when you are trying to predict the future of "badness" involving intelligent adversaries is that there is no way to perform these measurements with precision, so you should opt for accuracy. You are better off identifying "confidence intervals" that bound the upper and lower likelihoods as tightly as possible. The tighter the range, the more you can call yourself an "expert" -- that is, if it plays out in your favor.

It's common for organizations to use scales like "very low to very high" or 0-5 (you do include zero, right?) to create these intervals. You are better off with real numbers, even if they are wrong. Research has shown that when people interpret qualitative scales, their interpretations span broad, inconsistent ranges with gaps between them. Nowadays, it's not too difficult to find numbers to use as your guide in coming up with probability -- internal metrics, published data reports, surveys and so on.
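To make the point concrete, here is a minimal sketch of an interval estimate expressed in real numbers rather than a 0-5 scale. The probability bounds and impact figure are hypothetical placeholders:

```python
# Hypothetical confidence interval for the probability of a compromise
# in a given year, stated as real numbers instead of "medium" or "3".
low, high = 0.25, 0.50    # analyst's lower and upper bounds on likelihood
impact = 200_000          # estimated financial impact in dollars

# The interval on probability translates directly into an interval on
# expected loss, which a scale value like "3" never could.
expected_loss_range = (low * impact, high * impact)
print(expected_loss_range)  # (50000.0, 100000.0)
```

Two analysts can disagree about the bounds, but at least they are disagreeing about numbers rather than about what "medium" means.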

One challenge with probability estimates is how to determine what the population should be (that's the denominator). This can be as simple as the organization overall -- say, three out of every 10 companies in the population suffering an incident. But more likely, the probability is based on the percentage of assets -- users, systems, applications -- expected to be compromised over a defined period of time. Completely crazy people (like me) may want to select a population of event actions.

A smart way to deal with the difficulty of identifying pertinent populations is to use frequencies instead; simply estimate the number of unwanted outcomes per time period. This nifty little trick was introduced to technology risk management professionals in the FAIR model. Predicted frequencies of attack, or of compromised assets, focus on your specific organizational entity and are generally easier for people to work with. The downside comes when you are trying to normalize across organizations or to get more granular in actually addressing the risks by applying controls. (Hint: beware of the base rate fallacy.)

Bottom line impact

After determining the probability (or frequency), you then need to figure out how to measure impact. These measurements can develop through revealed preferences, but many folks believe you can't measure the impact on brand or reputation -- or of being on the proverbial front page of the Wall Street Journal. The truth is, this kind of concern is often more reflective of the embarrassment or reputation of senior executives than of an actual impact to the organization.

To the extent I'm wrong in this assertion (got a rise out of you, didn't I?), then you must agree that for economic entities like the organizations we protect, the only reason brand or reputation should matter is if it increases our short- or long-term costs, or decreases our revenue. And there's the rub: all estimates of impact can -- and should -- be translated into financial terms. I slyly added that "short- and long-term" qualifier because it's often easier to assess the short-term impact of an incident than the longer-term one.

The technology risk management field has good data to start with on the costs associated with response and recovery from breaches. The Ponemon Institute's U.S. Cost of a Data Breach Study comes to mind, but there are a few others as well. The best part is these reports give us some notion of the categories to consider when estimating losses. Just stay away from any sort of "per record" cost, as most costs and losses are fixed and don't vary with the number of records.
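The warning about "per record" costs is easy to demonstrate. Under the assumption stated above -- that response and recovery costs are largely fixed -- average cost per record falls as breach size grows, so multiplying a published per-record figure by your record count overstates losses for large breaches. The cost figures here are hypothetical:

```python
fixed_cost = 500_000       # hypothetical fixed response/recovery cost
variable_per_record = 2    # hypothetical small per-record cost (notification, etc.)

def breach_cost(records):
    """Total breach cost under a mostly-fixed cost structure."""
    return fixed_cost + variable_per_record * records

# Average cost per record shrinks dramatically with breach size.
for n in (10_000, 1_000_000):
    print(n, breach_cost(n) / n)  # 10000 -> 52.0, 1000000 -> 2.5
```

A single per-record multiplier cannot capture both of those averages at once, which is the point.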

As you consider your estimated risk for individual projects, remember that the probability x impact that you are addressing must be greater than the amount you are spending on it. With a proper understanding and use of confidence intervals and estimates, you can do a better job of getting to that ever-elusive return on security investment.
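Putting probability and impact together, the spend test described above can be sketched as a simple comparison. The function name and all figures here are hypothetical illustrations, not a standard formula from any framework:

```python
def worth_it(prob_before, prob_after, impact, control_cost):
    """True if a control's risk reduction exceeds what it costs."""
    risk_reduced = (prob_before - prob_after) * impact
    return risk_reduced > control_cost

# Hypothetical: a $30,000 control cuts compromise probability from
# 20% to 5% against a $500,000 impact -> reduces risk by about $75,000.
print(worth_it(0.20, 0.05, 500_000, 30_000))  # True
```

With confidence intervals rather than point estimates, you would run this check at both the lower and upper bounds to see whether the control clears the bar across your whole range.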

About the author:
Peter Lindstrom is principal and vice president of research for Spire Security. He has held similar positions at Burton Group and Hurwitz Group. Lindstrom has also worked as a security architect for Wyeth Pharmaceuticals and as an IT auditor for Coopers and Lybrand and GMAC Mortgage. Contact him via email at [email protected], on Twitter @SpireSec, or on his website.
