Security ROI

Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.

It’s become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.

It’s a good idea in theory, but it’s mostly bunk in practice.

Before I get into the details, there’s one point I have to make. “ROI” as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It’s an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn’t make sense in this context.

But as anyone who has lived through a company’s vicious end-of-year budget-slashing exercises knows, when you’re trying to make your numbers, cutting costs is the same as increasing revenues. So while security can’t produce ROI, loss prevention most certainly affects a company’s bottom line.

And a company should implement only security countermeasures that affect its bottom line positively. It shouldn’t spend more on a security problem than the problem is worth. Conversely, it shouldn’t ignore problems that are costing it money when there are cheaper mitigation alternatives. A smart company needs to approach security as it would any other business decision: costs versus benefits.

The classic methodology is called annualized loss expectancy (ALE), and it’s straightforward. Calculate the cost of a security incident in both tangibles like time and money, and intangibles like reputation and competitive advantage. Multiply that by the chance the incident will occur in a year. That tells you how much you should spend to mitigate the risk. So, for example, if your store has a 10 percent chance of getting robbed and the cost of being robbed is $10,000, then you should spend $1,000 a year on security. Spend more than that, and you’re wasting money. Spend less than that, and you’re also wasting money.

Of course, that $1,000 has to reduce the chance of being robbed to zero in order to be cost-effective. If a security measure cuts the chance of robbery by 40 percent—to 6 percent a year—then you should spend no more than $400 on it. If another security measure reduces it by 80 percent, it’s worth $800. And if two security measures both reduce the chance of being robbed by 50 percent and one costs $300 and the other $700, the first one is worth it and the second isn’t.
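
In code, the arithmetic is trivial. Here is a minimal sketch (using only the made-up robbery numbers above) of the ALE calculation and the resulting ceiling on what each countermeasure is worth:

```python
# A minimal ALE sketch, using the made-up robbery numbers from the example above.

def ale(incident_cost, annual_probability):
    """Annualized loss expectancy: expected loss per year."""
    return incident_cost * annual_probability

def max_worthwhile_spend(incident_cost, p_before, p_after):
    """The most a countermeasure is worth: the ALE reduction it buys."""
    return ale(incident_cost, p_before) - ale(incident_cost, p_after)

cost_of_robbery = 10_000   # dollars per incident
p_robbery = 0.10           # 10 percent chance per year

print(ale(cost_of_robbery, p_robbery))                    # 1000.0
print(max_worthwhile_spend(cost_of_robbery, 0.10, 0.06))  # 400.0 (40% risk cut)
print(max_worthwhile_spend(cost_of_robbery, 0.10, 0.02))  # 800.0 (80% risk cut)

# Two measures that each halve the risk are each worth $500 a year,
# so the $300 one pays off and the $700 one does not.
for price in (300, 700):
    worth = max_worthwhile_spend(cost_of_robbery, 0.10, 0.05)
    print(price, "worth it" if price <= worth else "not worth it")
```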

The Data Imperative

The key to making this work is good data; the term of art is “actuarial tail.” If you’re doing an ALE analysis of a security camera at a convenience store, you need to know the crime rate in the store’s neighborhood and maybe have some idea of how much cameras improve the odds of convincing criminals to rob another store instead. You need to know how much a robbery costs: in merchandise, in time and annoyance, in lost sales due to spooked patrons, in employee morale. You need to know how much not having the cameras costs in terms of employee morale; maybe you’re having trouble hiring salespeople to work the night shift. With all that data, you can figure out if the cost of the camera is cheaper than the loss of revenue if you close the store at night—assuming that the closed store won’t get robbed as well. And then you can decide whether to install one.

Cybersecurity is considerably harder, because there just isn’t enough good data. There aren’t good crime rates for cyberspace, and we have a lot less data about how individual security countermeasures—or specific configurations of countermeasures—mitigate those risks. We don’t even have data on incident costs.

One problem is that the threat moves too quickly. The characteristics of the things we’re trying to prevent change so quickly that we can’t accumulate data fast enough. By the time we get some data, there’s a new threat model for which we don’t have enough data. So we can’t create ALE models.

But there’s another problem, and it’s that the math quickly falls apart when it comes to rare and expensive events. Imagine you calculate the cost—reputational costs, loss of customers, etc.—of having your company’s name in the newspaper after an embarrassing cybersecurity event to be $20 million. Also assume that the odds are 1 in 10,000 of that happening in any one year. ALE says you should spend no more than $2,000 mitigating that risk.

So far, so good. But maybe your CFO thinks an incident would cost only $10 million. You can’t argue, since we’re just estimating. But he just cut your security budget in half. A vendor trying to sell you a product finds a Web analysis claiming that the odds of this happening are actually 1 in 1,000. Accept this new number, and suddenly a product costing 10 times as much is still a good investment.

It gets worse when you deal with even more rare and expensive events. Imagine you’re in charge of terrorism mitigation at a chlorine plant. What’s the cost to your company, in money and reputation, of a large and very deadly explosion? $100 million? $1 billion? $10 billion? And the odds: 1 in a hundred thousand, 1 in a million, 1 in 10 million? Depending on how you answer those two questions—and any answer is really just a guess—you can justify spending anywhere from $10 to $100,000 annually to mitigate that risk.
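
A short sketch makes the sensitivity explicit; the figures below are just the guesses from the breach and chlorine-plant examples, and the point is how wildly the ALE-derived budget swings with them:

```python
# Sketch: how an ALE budget swings with guessed inputs for rare, expensive events.
# All figures are the illustrative guesses from the two examples above.

def ale_budget(incident_cost, annual_probability):
    # The most ALE says you should spend per year to mitigate the risk.
    return incident_cost * annual_probability

# Embarrassing breach: $10M vs. $20M cost, 1-in-10,000 vs. 1-in-1,000 odds.
for cost in (10e6, 20e6):
    for p in (1 / 10_000, 1 / 1_000):
        print(f"breach: cost ${cost:,.0f}, odds {p:.4f}/yr -> budget ${ale_budget(cost, p):,.0f}")

# Chlorine plant: the same formula justifies anything from $10 to $100,000 a year.
for cost in (100e6, 1e9, 10e9):
    for p in (1 / 10_000_000, 1 / 1_000_000, 1 / 100_000):
        print(f"plant: cost ${cost:,.0f}, odds {p:.0e}/yr -> budget ${ale_budget(cost, p):,.0f}")
```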

Or take another example: airport security. Assume that all the new airport security measures increase the waiting time at airports by—and I’m making this up—30 minutes per passenger. There were 760 million passenger boardings in the United States in 2007. This means that the extra waiting time at airports has cost us a collective 43,000 years of extra waiting time. Assume a 70-year life expectancy, and the increased waiting time has “killed” 620 people per year—930 if you calculate the numbers based on 16 hours of awake time per day. So the question is: If we did away with increased airport security, would the result be more people dead from terrorism or fewer?
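
For what it is worth, the back-of-the-envelope arithmetic behind those figures looks like this (the 30-minute delay is, again, made up):

```python
# Sketch of the waiting-time arithmetic above (the 30-minute delay is made up).
boardings = 760_000_000        # US passenger boardings in 2007
extra_wait_hours = 0.5         # assumed extra wait per passenger

total_hours = boardings * extra_wait_hours
years_waited = total_hours / (24 * 365)        # ~43,000 collective years per year

lives_24h = years_waited / 70                  # ~620 lives at a 70-year lifespan
lives_16h = years_waited / (70 * 16 / 24)      # ~930 counting only waking hours

print(round(years_waited), round(lives_24h), round(lives_16h))
```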

Caveat Emptor

This kind of thing is why most ROI models you get from security vendors are nonsense. Of course their model demonstrates that their product or service makes financial sense: They’ve jiggered the numbers so that they do.

This doesn’t mean that ALE is useless, but it does mean you should 1) mistrust any analyses that come from people with an agenda and 2) use any results as a general guideline only. So when you get an ROI model from your vendor, take its framework and plug in your own numbers. Don’t even show the vendor your improvements; it won’t consider any changes that make its product or service less cost-effective to be an “improvement.” And use those results as a general guide, along with risk management and compliance analyses, when you’re deciding what security products and services to buy.

This essay previously appeared in CSO Magazine.

Posted on September 2, 2008 at 6:05 AM

Comments

Matthew September 2, 2008 6:40 AM

I disagree with the premise that ROI doesn’t account for cost savings. I understand the point you are trying to make, but I think you are fudging the question/setup in order to provide the insight. Cost savings due to security is a perfectly valid and critical factor to consider when determining ROI.

The more critical piece to discuss around security and investment is responsibility to customers – which you discuss often. I think of security like toxic waste – it’s cheaper to dump it, sometimes even if you get caught, but it’s not right and legislation needs to take it out of the ROI line and force it into the cost of doing business line.

Matthew September 2, 2008 6:44 AM

Slight further note – I am commenting on SW/consumer product security costs, not state/physical where legislation and “security” has especially impacted quality of life in the US and the UK.

CG September 2, 2008 7:02 AM

“Spend less than that, and you’re also wasting money.”

Not necessarily: when a countermeasure that costs only $100 mitigates the risk completely, you’ve earned yourself $900 per year.

Ian Eiloart September 2, 2008 7:58 AM

‘the increased waiting time has “killed” 620 people per year’. I don’t get it. Waiting is equivalent to being dead?

If I offered to reduce your waiting time to zero for every journey you make, and take half of that off the end of your life, would you take my offer?

You’d be better advised to learn to chat to the people you’re waiting with. Or buy a magazine, a book or an iPod. Heck, you’re about to board a plane where you’re expecting to wait a few hours before getting off. If that were equivalent to death by installments, you’d learn to do without flying.

Derob September 2, 2008 8:00 AM

I think this is just a very long argument about something very simple. If you have good data related to security measures then a good estimate of cost savings, ALE or ROI (or whatever you like) can be made.

If you have not, you cannot: for instance, if you did not keep statistics, if events are too few for meaningful statistics, if the problem is new, if the costs are difficult to express (e.g. deaths), or if the costs are external.

BTW, I think it should be much easier to compile meaningful statistics in the IT domain than about societal things such as the effect of security cameras on the streets to protect shopkeepers.

Phillip September 2, 2008 8:47 AM

@Ian Eiloart

Fly much? 🙂 You may as well be dead. If you look at it from a purely economical view, unless you are checking emails or doing something of “value” (to your company, not you) on your Smart Phone, you may as well be dead.

I’ve had the exact same thought process as Bruce just had shortly after all the new brilliant security measures were put in place since 2001.

Kashif September 2, 2008 8:59 AM

I just started to read Schneier’s old book (Secrets & Lies), and in the first few chapters he talks about changing security ROI by 1) forcing statutory charges on companies neglecting security, 2) which would force the CEOs to get some compensating insurance, 3) and these insurance companies would force the CEOs to invest in sensible security to reduce their premiums…

Sometimes regulation does help the market create the right incentives.

Of course we would all have to live with the resulting higher costs of products/services due to this enforced security, but I believe the net result would benefit us all in the end.

Wicked Lad September 2, 2008 9:05 AM

Good essay, Bruce. Thank you. I agree with Matthew about cost saves, but really, most of security ROI is indeed focused on tail events, and by definition you can’t get a good, usable data set on that.

I’ve been working in information security for over 20 years now, and the most successful way to sell security is fear, specifically fear of auditors, regulators and embarrassing lawsuits. Once you make that sale, you need to help management justify it by the everyone-else-is-doing-it argument and/or (embarrassingly) with an ROI that consists of wild guesses about the potential loss and wild guesses about the probability. When I have to provide that, I explain these are wild guesses, but management takes them seriously anyway. Go figure.

Martin September 2, 2008 9:17 AM

Some kind of “ROI” analysis is probably inevitable, but engineers and other professionals (MDs etc) have other yardsticks: professional standards, codes of ethics, “best practices”.

For example, is it OK to send your backup tapes out via UPS unencrypted? Playing the odds, the cost to develop, test, and maintain a secure strategy might exceed the expected “savings”. But I’d argue for it anyway, because it’s best practice, a place where we want to be.

At some level, we tell management that if you want to do that, you need to find another engineer.

Bill September 2, 2008 9:51 AM

@Ian Eiloart

“If I offered to reduce your waiting time to zero for every journey you make, and take half of that off the end of your life, would you take my offer?”

Half of zero? Yes, I’ll take your offer.

Humunculous September 2, 2008 9:56 AM

Sure the Postini calculator is nonsense, but the comment that comes back to the inside with the SMTP 354 is “Feed me” and a Little Shop of Horrors reference ought to count for something… 🙂

kangaroo September 2, 2008 9:56 AM

And what about secondary costs? Your security system ends up cutting productivity by 10% – a number which has to be pulled out of your *ss as well, since you can’t really account for the costs of a badly implemented security system: straight time wasted, opportunities lost, and so on. And who’s calculated these costs? Probably the same dim-wit you’ve hired for 45k to be your “security analyst” so you can cover your due-diligence *ss.

And what about all the costs of actually getting a somewhat reasonable estimation? Those quickly rise (information versus time is greater than exponential).

So the non-insane do as Martin says: do what’s “right” instead of trying to calculate a cost-benefit analysis of the unknowable.

Bill (again) September 2, 2008 10:03 AM

@martin
“…it’s best practice, a place where we want to be. ”

Who gets to decide ‘best practice’? How have they decided it? What makes them the authority? What makes you think I want to be anywhere near it?

‘Best practices’ is all too often the mantra of the uncritical, and lends itself too easily to tick-in-a-box security theatre.

Recent example – auditors said internally our VOIP should be encrypted. I asked “why?” They replied “best practice”. I got a quote: Cisco wanted $100,000 USD for a countermeasure that wouldn’t mitigate the 101 other ways Eve can listen in on Bob. Not worth it, not doing it. End of.

Mariner September 2, 2008 10:03 AM

I’m trying to figure out how this helps me, or is the point that I’m beyond help?

Let’s say I’m the manager of a computer-controlled water treatment facility for a large metropolitan area. My operations seem to put me in the “It gets worse when you deal with even more rare and expensive events…” paragraph, with any answer being a guess.

I would seem to be in a no-win situation. If I spend $100 of my million dollar budget and nothing ever happens, then it’s perceived that I’m wasting $100. If I spend three quarters of that million dollars and there’s a catastrophe, then I get hung out to dry for not asking for enough money.

SecureApps September 2, 2008 10:12 AM

At what point in time did things become more about the bottom line than about the quality and reputation of the product and the company being run?

I just wonder how many of these McMansions will be standing in 100 years like some of those old mansions are.

Brandioch Conner September 2, 2008 10:35 AM

@Bill (again)
“Who gets to decide ‘best practice’? How have they decided it? What makes them the authority? What makes you think I want to be anywhere near it?”

That depends upon what segment you are talking about.

“Recent example – auditors said internaly our VOIP should be encrypted. I asked “why?” they replied “best practice”. I got a quote, Cisco wanted $100,000 USD for a countermeasure that wouldn’t mitigate the 101 other ways Eve can listen in on Bob. Not worth it, not doing it. End of.”

There isn’t enough information there.

Are you sending unencrypted voice packets over the Internet? If you are, that is stupid.

Or is your VoIP system strictly internal and the external communications go over a regular PBX and T1?

You’re confusing “Best Practices” with what an auditor tells you. They aren’t the same.

If you’re complaining about the cost of Best Practices … why weren’t those costs included in the cost of deploying those services?

Patrick Cahalan September 2, 2008 10:59 AM

A coauthor gave a talk at IRMA two years ago, and the keynote speaker’s throwaway line in his speech regarding ROI analysis was, “We just need to stop doing this.”

His premise was that IT projects are inherently emergent, and as such it was impossible to truly calculate value. While I agree that ROI analysis on emergent projects is basically a smoke and mirrors game, a lot of IT is commodity work, and an ROI analysis is probably not only doable but worthwhile.

There’s also the additional factor that if you’re working on selling a project or defending a project to a bunch of MBAs, outright refusal to talk using their language is not a good way to establish a good working relationship.

When it comes to security analysis, I think the right place to start is finding catastrophic failure points and eliminating them from your organization. Such a large volume of security announcements include business practices that are simply mindbogglingly stupid from a risk-reward (as opposed to cost-return) standpoint. Cost-return is great for cutting costs, not so great for deciding “hey, this is really dumb, we need to stop doing this in the first place.”

If a data breach could cost your company $10 million, you can probably reduce your risk window substantially without spending any money at all -> rather than buying encrypted databases and laptops with FDE and whatnot… just… stop… giving… access… to the database… duh? If you only need to protect three machines, it’s not going to cost you much no matter what you’re doing. If you need to protect 50 laptops carried around by the sales and executive staff (who don’t actually need that data, and don’t take care of it)… it doesn’t really matter how much you spend and what you do, you’re going to lose control anyway. Full disk encryption isn’t going to help you; Murphy’s Law dictates that the laptop you lose is going to be the one with the password taped to the bottom of it. Lojack systems are going to help you, but you’re still in the news as having lost the laptop in the first place.

Pete September 2, 2008 11:02 AM

It may be worth distinguishing between the ROI that might be gained by reducing O&M costs, perhaps by outsourcing an existing security function, and the ROI gained by reducing ALE — often called ROSI for return on security investment.

The difference is crucial as the former is about efficiency and can be proven through a look at the financial records while the latter is about effectiveness and is extremely difficult to prove, even though most security buying decisions include a form of ad hoc ROSI.

neill September 2, 2008 11:49 AM

excellent math! 930 people per year probably does not include those who die from higher-than-normal stress while being searched/detained, or while trying to get off the no-fly list!
one should look at man-hours etc. to see the total loss of productivity and the total cost of those measures

Spider September 2, 2008 11:57 AM

Awesome. I just proposed looking at security in the exact same fashion as what Bruce just described, last week. I thought I was just pulling it out of a hat. Nice to see my idea is backed by some serious security pros.

Clive Robinson September 2, 2008 12:31 PM

Bruce, your view point appears to be aging sensibly 😉

However you have not yet mentioned the “M word” (metrics) which is the foundation of any sensible actuarial approach.

You can collect all the “good data” in the world and look at it all you want, but until you have a reliable way of measuring the data and of doing more than simple comparisons, you are not going to get anywhere in a hurry.

Industry “Best Practice” is, as some here have noted, not really worth a “hill of beans” unless it is based on “verifiable reality”, which means you need reliable “data gathering” and “measuring and comparison” systems.

The first issue is that, in general, data released to the public by private organisations is very unlikely to be either raw or reliable. The second is that, as we do not have any real metrics, we have no way of assessing reliably what data we do get.

jnarvey September 2, 2008 1:05 PM

As you say, measuring ROI for web security in particular is a pretty tricky endeavor. One tech blogger has attempted to square the circle with a blog post appropriately entitled: How To Calculate Return on Investment for Web Security. http://www.pcis.com/web/vvblog.nsf/dx/how-to-calculate-return-on-investment-roi-for-web-security

There’s also a white paper from PCIS on the topic (that quotes you in its second part): Calculating Return on Investment of Devfense for Web Application Security
http://www.boonbox.net/pdf/WP_DevfenseROI_2008August.pdf
The quote? “If the chance of you being attacked is one in a million and I change it to one in two million … I have halved the amount of money you should spend… Maybe your reputation is worth US$20 million, or maybe it is only worth US$10 million, or maybe it is worth US$40 million. Suddenly I can completely perturb your budget — because the numbers are so big and so small that minor changes … make huge changes to the product.”

You’re awfully consistent, Bruce. I always get an education from reading your blog. Cheers.

paul September 2, 2008 1:27 PM

Or just look at the waiting time from a purely economic standpoint. Figure a mean of $20/hr for airline passengers (probably low, but then kids travel too), and that’s $7.6 billion a year before the cost of the security apparatus itself, which would be enough to rebuild about one WTC annually…

That number is also bogus, of course, but it does a good job of pointing out that most such numbers are.

Fergie September 2, 2008 1:35 PM

My favorite quote on the topic of “Network Security ROI” is from Dennis Hoffman, who is (was?) RSA vice president of enterprise solutions, speaking at the IT Security Training Conference in Washington, D.C., in 2006, regarding the issues surrounding how to put a Return-on-Investment (ROI) value on security.

He said:

“The day before a breach, the ROI is zero. The day after, it is infinite.”

That about sums it up for me.

Ref: http://www.gcn.com/online/vol1_no1/42229-1.html

Anonymous September 2, 2008 4:01 PM

“And a company should implement only security countermeasures that affect its bottom line positively. It shouldn’t spend more on a security problem than the problem is worth.”

This is not quite correct when you consider externalities. While it will still make sense for the company to pay nothing for security if the incurred damage is an externality, it probably does not make sense for society as a whole, because the sum of all risk mitigation efforts made by individual customers is much higher than an equivalent risk mitigation effort made on the company’s side.

Think of government database security, for example. Those databases often get lost, fall into the wrong hands, etc. But the average citizen can’t or won’t get his lost money (e.g. due to identity theft) back from the government, so the government is lax about security since it poses no risk to them.

The same applies to banks and the securing of bank accounts when the customer has to pay, and to the aforementioned catastrophic high-risk, low-probability events where it’s usually the government that does the cleanup instead of the company itself.

Scott September 2, 2008 4:12 PM

If the airport waiting has “killed” fewer than 1,000 people (or bored them to death, perhaps), and if stopping the extra screening would actually allow another 9/11-style weapon-leads-to-hijacking-leads-to-crash-into-large-building disaster, then you accidentally make the TSA’s point, since more people died on 9/11 than that.

Crane September 2, 2008 4:24 PM

Maybe the only worthwhile reason for utilizing security ROI analysis is to compare available options for addressing a significant risk. Take the security camera example that Bruce used. It’s rare when a shop owner would start a risk analysis by looking at the ROI of the camera. More than likely the ROI process would start when the shop owner perceived robbery as a significant risk and decided to weigh the various options for reducing that risk. While it may be difficult to accurately predict the ROI for installing the camera, even a good guess would have value when used to compare the ROI from hiding a gun under the counter, hiring a security guard, installing a mantrap at the front door, etc.

Anonymous September 2, 2008 4:26 PM

Nice try, Scott, but that figure was only for 2007. Seeing as we’re into year 7 of this post 9-11 world, the cumulative figure would be closer to 7,000 people “killed”.

kats September 2, 2008 6:20 PM

Don’t forget to add in all the time wasted by TSA agents, too. If it weren’t for the stupid screenings, they would be able to spend their time more constructively, helping the economy rather than killing it. Assuming 1000 TSA agents, in the 7 years since 9/11, they’ve wasted 7000 person-years all by themselves.

Dean Petrovic September 2, 2008 8:40 PM

Interesting article – though I would like to point out that it isn’t -always- “loss prevention”; there can actually be earnings benefits from security.

Example: A financial institution that implements a fancy 2-factor auth token could benefit from increased revenues based on favourable customer perception.

Rob Lewis September 3, 2008 8:59 AM

Definitely agree with Matthew.

A customer told us that as well as eliminating the insider threat for a very valuable intellectual property (they had had two previous breaches that were embarrassing), they were pleasantly surprised that our products paid for themselves within a year. Some of those savings have continued for another 4 years, to the point that operational costs have been saved to the tune of 2 or 3 times the original purchase price. I would call that a real ROI.

bob!! September 3, 2008 3:19 PM

It’s worth noting that in many cases, the best expenditure of mitigation dollars is on insurance, rather than security.

MateFrio September 3, 2008 6:19 PM

After the business puts a value on the data, it’s all downhill from there finding the ROI, right?

/ 🙂

Filias Cupio September 4, 2008 12:29 AM

Another point: if the chlorine company is only worth $10 million, then the company can’t possibly lose more than this in the explosion. Although the damage may be $10 billion, $9.99 billion of that is an externality to the company.

Particularly if the company is struggling, it makes financial (but perhaps not ethical) sense to ignore low-chance, high-damage risks. A classic example would be a small airline cutting corners on safety.

Gabriel September 4, 2008 2:03 PM

“and the increased waiting time has “killed” 620 people per year”

This line prompted the thought–since people are usually killed after living a significant amount of their life, it may be more accurate to portray this as “killed” 620 newborn babies per year…

I know it was an example and not the main theme of THIS post… but that example really drives it home for me.

Kramer September 4, 2008 5:12 PM

@bob!!

“It’s worth noting that in many cases, the best expenditure of mitigation dollars is on insurance, rather than security.”

Good point, however, it’s also worth noting that insurance providers are in the business of making money. As such, many are now requiring artifacts proving implementation of “sound” security technologies/practices/policy/etc. before they agree to take on the liability…however, for a few dollars more….

crane September 4, 2008 6:17 PM

The problem with “best practices”, as hinted at by Bill (again), is that they’d be OK if they were “the best ideas that thinking people would apply in this situation”, but most often they’re “what the people who sit on the committee decided upon, often for political reasons”, and they’re viewed as a box that you have to tick rather than something that you should actually think about. Indeed, it’s revealing that people who actually DO things rarely talk about best practices; it’s people who TALK ABOUT DOING THINGS or who MANAGE things who are fondest of discussing “best practice”.

Tamara September 8, 2008 9:01 PM

Eureka! This article makes it very clear that one of Schneier’s greatest talents is teaching Logic. I wish more Logic was taught in schools, maybe skirting the “Math” word that scares so many. By articulately joining “common sense” with trained Logic, Schneier helps organize issues by size and threat, by significance. In Astronomy sometimes this is casually called ‘order of magnitude’– a way to get a grip on the significance of an issue without a napkin to write on to figure out the exact statistics.

Andrew S September 18, 2008 6:46 PM

In competitive markets you’ll generally get several acceptable options to mitigate your risk, all of which give you a high ROI.

Only in the world of monopolies and bad competition (i.e. government) do you end up with prices which are set based on the ROI that the seller thinks you need.

Mats October 30, 2008 9:51 AM

The article in CRYPTOGRAM on Security ROI merits a comment. [I agree with many others regarding the limitations of ROI calculations, and also on the problems of metrics.] However…

The notion that the loss reduction should be equal to the security costs is fundamentally wrong. The article says “if you have… a 10% chance of… [a] loss of $10,000, then you should spend $1,000 on security. Spend more… or less than that… you are wasting money.” Not so.

The best spending is when an additional, marginal dollar saved in reducing the expected loss (ALE) exactly balances the extra dollar spent on security. You may consider two increasing curves: one with the increasing gain in reduced ALE, one with the increasing costs for security. The optimal spending is where the difference between these two curves has its maximum. If the security-cost curve is a straight line, the optimum occurs where the reduced-ALE curve has the same slope as the security-cost line.

In fact, spending $1,000 to gain $1,000 does not seem to be a good choice. It will most probably be a “waste”.
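
A minimal sketch of the rule, with a made-up diminishing-returns curve for the ALE reduction (the curve and numbers are purely illustrative):

```python
# Sketch: pick the spend where marginal ALE reduction equals marginal cost.
# The diminishing-returns curve below is made up purely for illustration.
import math

def ale_reduction(spend):
    # Illustrative concave curve: dollars of expected loss avoided per year.
    return 1_000 * (1 - math.exp(-spend / 400))

# Maximize net benefit (reduction minus spend); with a $1-per-$1 cost line,
# the optimum is where the curve's slope drops to 1, well below $1,000.
best_spend = max(range(0, 2001), key=lambda s: ale_reduction(s) - s)
print(best_spend, round(ale_reduction(best_spend) - best_spend))
```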

Marcadel December 3, 2008 5:53 PM

Dealing with low-probability high-impact events has always been a problem in ROI calculations. Sophisticated risk assessment approaches that measure risk as a current price for future regret require multiple scenarios including some that oppose current wisdom. In my opinion without at least one scenario that is very optimistic and proposes some kind of breakthrough strategic gain from a security initiative, and another scenario that is very pessimistic and proposes a catastrophic loss or total failure of the organization, you cannot actually make a really informed decision.

The approach I developed for this was to gather one over-optimistic and one over-pessimistic scenario and keep refining them to be more extreme until literally no one in the room could accept that either one was realistic. In other words, if the scenarios had been presented to them cold, they’d have to assign them probability zero. However, since the scenarios weren’t presented cold but strongly resembled ones that they could accept at least as possible, it was relatively easy to get them to agree to nominal probabilities for these scenarios, on the grounds that someone more qualified than themselves might believe in them, or that they might be wrong. If any of these scenarios was assigned a probability above 3%, we’d keep going to make them more extreme until they were ideally at about 1%.

Then, the group was able to easily agree on very robust “best case” and “worst case” scenarios, stated in the terms that emerged from discussing extreme cases. Finally they could characterize the “status quo” or linear continuation as a fifth scenario and begin to understand just how fragile and assumption-prone it was – ideally assigning it a probability of maybe 33% of continuing through say five years.

These five scenarios could be augmented with others as new information emerged during the planning period, but would at least provide a properly probability-weighted ROI analysis that listed most of the major expected gains and regrets, and even gave some idea of which changes in conditions would warrant a re-examination of those probabilities.
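
A toy sketch of the weighting step (the scenario names, probabilities and payoffs are all invented for illustration):

```python
# Sketch: probability-weighted expected value across five scenarios as described above.
# Scenario probabilities and net payoffs are invented purely for illustration.
scenarios = {
    "extreme upside (strategic breakthrough)": (0.01,  50_000_000),
    "best case":                               (0.25,  10_000_000),
    "status quo / linear continuation":        (0.33,   1_000_000),
    "worst case":                              (0.40,  -8_000_000),
    "extreme downside (total failure)":        (0.01, -60_000_000),
}

assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

expected_value = sum(p * payoff for p, payoff in scenarios.values())
print(f"probability-weighted expected value: ${expected_value:,.0f}")
```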

I used this for strategic innovative projects inside companies and nonprofits.

If you use this approach in security ROI analysis, I’d like to hear how it went… 😉
