ANSI Cyberrisk Calculation Guide

Interesting:

In a nutshell, the guide advocates that organizations calculate cyber security risks and costs by asking questions of every organizational discipline that might be affected: legal, compliance, business operations, IT, external communications, crisis management, and risk management/insurance. The idea is to involve everyone who might be affected by a security breach and collect data on the potential risks and costs.

Once all of the involved parties have weighed in, the guide offers a mathematical formula for calculating financial risk: Essentially, it is a product of the frequency of an event multiplied by its severity, multiplied by the likelihood of its occurrence. If risk can be transferred to other organizations, that part of the risk can be subtracted from the net financial risk.
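
To make the arithmetic concrete, here is a minimal sketch of one common reading of that formula (expected annual loss per event as frequency times severity, summed over events, with any transferred portion subtracted). Every event name and figure below is invented for illustration and does not come from the guide.

```
# Hypothetical illustration of a net-financial-risk calculation:
# expected annual loss per event = annual frequency x loss per occurrence,
# summed over events, minus whatever portion is transferred (e.g. insured).
# Every figure here is made up for the example.

events = [
    # (name, expected occurrences per year, loss per occurrence in USD)
    ("laptop with customer data lost", 2.0, 50_000),
    ("web application breach", 0.1, 750_000),
    ("insider data theft", 0.05, 1_200_000),
]

gross_risk = sum(freq * loss for _, freq, loss in events)
transferred = 0.30 * gross_risk  # assume 30% of the exposure is insured

net_financial_risk = gross_risk - transferred
print(f"Gross annual risk: ${gross_risk:,.0f}")
print(f"Net annual risk:   ${net_financial_risk:,.0f}")
```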

Guide is here.


Posted on October 24, 2008 at 7:04 AM • 26 Comments

Comments

Carlo Graziani October 24, 2008 7:31 AM

“Essentially, it is a product of the frequency of an event multiplied by its severity, multiplied by the likelihood of its occurrence.”

Er, isn’t there an extra term there? Shouldn’t the risk be just the sum over events of severity times either “likelihood” or “frequency” (actually event probability)?
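
In symbols (notation just for illustration), I'd expect something more like

$$ \text{Risk} \;=\; \sum_{i \in \text{events}} p_i \cdot S_i $$

where p_i is the annual probability (or expected frequency) of event i and S_i is its loss severity. Multiplying a frequency by a separate "likelihood" on top of that looks like it counts the chance of occurrence twice.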

Clive Robinson October 24, 2008 7:41 AM

Sounds like another "sticking plaster for a broken bone" and about as useful.

When oh when are people going to start coming up with metrics…

It is pointless saying "ok your gut tells you that A is more of a risk than B", you might as well ask is that because you had pickle on your cheese sandwich…

What we need is first “raw data” then “proto-metrics” which can be reviewed and refined to produce real metrics.

Until then all of these systems are without doubt "security theater", so draw up your seats, sit down, and enjoy the entertainment…

BillF October 24, 2008 7:52 AM

This is a good start. Security breaches affect everyone in the organization, and since everyone is a stakeholder, everyone should be part of the risk equation. Also, a good risk analysis will combine the results of both qualitative and quantitative analyses to derive a more accurate and useful risk assessment.

Bob H October 24, 2008 9:33 AM

This sounds exactly like what CERT has been preaching for almost 10 years now. OCTAVE was designed by CERT to help businesses assess risk management. It is a highly complex strategy to allow management to do a risk assessment. The ANSI strategy outlined in this post reminds me of the OCTAVE approach.

Unix Ronin October 24, 2008 11:04 AM

“If risk can be transferred to other organizations, that part of the risk can be subtracted from the net financial risk.”

That looks an awful lot like “If it can be made into somebody else’s problem, screw it, it’s no longer /our/ problem.”

Adam October 24, 2008 11:06 AM

Matt,

You make up the numbers, like we’ve been doing for 30 years.

In The New School of Information Security, Andrew and I point out that all these ALE-derived formulas suffer the exact same problem: we have something between no data and awful data for both the loss magnitude and probability.
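
For readers who haven't met the acronym: the textbook ALE (annualized loss expectancy) calculation these formulas derive from is usually written

$$ \text{ALE} = \text{SLE} \times \text{ARO}, \qquad \text{SLE} = \text{Asset Value} \times \text{Exposure Factor} $$

where ARO is the annualized rate of occurrence. The point is that the loss magnitude (SLE) and the probability (ARO) are exactly the inputs we lack good data for.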

Pat Cahalan October 24, 2008 11:52 AM

You make up the numbers, like we’ve been doing for 30 years.

In many ways, this makes the most sense, actually. Well-trained security analysts who are familiar with an organization’s processes are probably decently equipped to establish SWAG numbers for this sort of thing. Well-trained security analysts who are familiar with an organization’s people are also going to be better equipped to judge when those numbers ought to be tweaked to encourage better behaviors (e.g., “The CEO loses his laptop every 4 months. We either need to encrypt it or keep him from carrying sensitive data on it. Frankly, we don’t want him carrying this data around anyway, so we’ll massage the numbers to show encrypting all the laptops is way too expensive and we ought to just have stricter data retention policies…”)

Unfortunately, there aren’t enough well-trained security analysts and most of those aren’t internal to an organization, so they don’t know either the people or the processes well. All of these “metric” solutions are trying to quantify security across industry and organizational boundaries, and since security is ultimately a human problem, they’re going to fail to establish a usable “industry standard”.

Ragnar Schierholz October 24, 2008 12:25 PM

Involving everyone who might possibly be affected certainly sounds very right, but it's probably also not that new.

I have a big concern though (and I'm surely not the first in history to bring it up): how do you come up with reasonable numbers for the likelihood of a cyber risk materializing? This is an approach that has been adopted from the insurance companies. But the insurers have long-term statistics about occurrences of events (floods, thunderstorms, avalanches, etc.), and the "threat landscape" of those events is fairly stable. First of all, there is little to nothing comparable to these statistics in the cyber security space, and second of all, the threat situation is nowhere near as static. So even if we had detailed incident statistics, how much of a probability can you derive from threat statistics from 5 years ago?

Clive Robinson October 24, 2008 12:28 PM

@ Pat Cahalan,

“All of these “metric” solutions are trying to quantify security across industry and organizational boundaries, and since security is ultimately a human problem, they’re going to fail to establish a usable “industry standard”.”

The problem with assuming it is a "human problem" is that it is effectively a cop out, that is I wave my arms and say it's too difficult.

The problem is that we have been doing that for thirty years and things are getting worse not better, and for the obvious reason I would not expect this trend to change.

As I have noted before, and others have above, the real issue is the lack of data.

More people's health has been improved by actuaries realising that there was a connection between being a given weight for your height and early death than by all the slimming pills and potions the medical and quasi-medical professions have come up with.

The advantage the actuaries had was raw data and lots of it to study and test ideas on.

We are only going to reverse the trend if we know, not guess, the best place to put scarce resources. Buying snake oil because it comes in a nice package and you think it's going to work is not going to help you…

Adam October 24, 2008 12:54 PM

Pat Cahalan,

You write “Well-trained security analysts who are familiar with an organization’s people are also going to be better equipped to judge when those numbers ought to be tweaked to encourage better behaviors.”

Someone looking at such advice might see it as the height of arrogance for technical people to be massaging the numbers to reach the outcomes they want.

Such behavior is exactly what drives people to quantification: a desire to stop people from manipulating outcomes.

Davi Ottenheimer October 24, 2008 1:29 PM

“You make up the numbers, like we’ve been doing for 30 years.”

In that sense, show me something that is not “made up”.

This reminds me of the tenets of Hume's treatise on empiricism. Matters of fact can be falsified (i.e. precedent, things that have happened in the past), but relations of ideas (e.g. algebraic formulas) are analytic truths.

Matters of fact are easy to debate but try looking at ALE as an exercise meant to be directed towards analytic truths.

So, the problem you are raising (albeit a philosophical one) is not really 30 years but actually 300 or more years old.

Pserp October 24, 2008 2:58 PM

“If risk can be transferred to other organizations, that part of the risk can be subtracted from the net financial risk.”

I agree with Unix Ronin… if we're not careful, the use of statements like this could lead to the complete abrogation of managing that risk!

A good example of this can be seen in the current global "credit crisis/economic downturn" debacle, where abrogation (at multiple levels) of managing and understanding the actual underlying credit risks allowed fraudulent activities to go unnoticed (at multiple levels) and brought us unstuck in a way far bigger than could have been anticipated by any of the risk models all those highly paid quants came up with.

All those companies just assumed that the risk had been hedged and so they were “safe as houses” (bad pun intended!)

Pat Cahalan October 24, 2008 4:36 PM

@ Clive

The problem with assuming it is a “human problem” is that it is
effectively a cop out, that is I wave my arms and say it's too difficult.

No, it's not. It's certainly possible to tackle human problems. You do it with policy, process, and enforcement. You can certainly quantify lots of security problems with metrics (sorry, re-reading my earlier comment, it lacks depth here), but you cannot metricize everything in security any more than you can metricize everything in IT… ITIL and other frameworks' attempts to do so notwithstanding.

The problem is that we have been doing that for thirty years and
things are getting worse not better, and for the obvious reason
I would not expect this trend to change.

I don’t see that we’ve been doing that for thirty years. We haven’t been doing anything in particular for thirty years. We’ve been broken into a bunch of small camps of people trying to solve different parts of a huge problem using wildly differing methods. Probably the best place to start is by examining the classes of problems and figuring out which approaches work best for which classes.

The advantage the actuaries had was raw data and lots of it to
study and test ideas on.

Sure. But this is actually most useful (a) for establishing correlations and (b) when you’re crunching quantitative data. Figuring out the causal part is another step. Crunching this type of data illuminates areas that need to be explored further, it doesn’t make the problems go away… and a lot of these types of investigations are all qualitative data that requires a lot of study by someone who knows how to assess qualitative data in order to do anything meaningful with it.

@ Adam

Someone looking at such advice might see it as the height of arrogance
for technical people to be massaging the numbers to reach
the outcomes they want.

Oh, absolutely. For one thing, it certainly is arrogance if you don’t know what the hell you’re doing (which, you’ll note, I imply is unfortunately very common in the second half of that comment). This wasn’t intended to be “advice”, actually… just an observation. And as far as observations go, I’ll stand by that assessment -> if someone is an expert at (a) security (b) the organization and (c) the people, they aren’t just “technical people”, and they certainly know better than anyone who is not an expert in all three of those areas.

Such behavior is exactly what drives people to quantification: a desire
to stop people from manipulating outcomes.

What Davi said, more or less. Guess what, this approach won’t stop people from manipulating outcomes.

Unlike actuarial tables that are taking quantifiable data and crunching numbers, what you're talking about here is qualitative data. So you can fold, spindle, and mutilate it to your heart's content, but if you don't know how to baseline it in the first place, you're going to have something of dubious value come out the other end.

What you actually need, if you want to have a reasonable security analysis done in your organization, is for a couple of social scientists (say, a sociologist and an organizational science expert), a knowledge manager, a couple of high-level security experts, a network guy, a couple of systems administrators, an accountant, and two legal analysts to come in and climb up your behind with a 10,000,000 candle-power flashlight. You need to give them the time and access to find out what you do, how you do it, and how important it is, and then you need to do what they tell you to do.

RIP October 25, 2008 11:10 AM

Anyone see Greenspan telling Congress how he modeled risk in financial markets for 18 years, while he dismantled the security checks that had previously been built in?
He overlooked the psychology of a world of hustlers talking to each other and getting further and further from reality in their frame of reference.
Security should not forget some basics of psychology: the disgruntled or opportunistic insiders, the enemy outside, etc.

risk assessor October 25, 2008 12:06 PM

Everyone should note a lesson from the current financial crisis: Systemic misestimation of risk can yield unpredictable and severe consequences.

At first draft here, I was going to write "Misestimation of risk can result in unforeseen severe consequences." But many people did see the housing bubble, and some of them did see that risk was being misestimated in the CDO and CDS markets, and a few of those people foresaw severe consequences, and a couple of them predicted a severe meltdown. All the same, I don't think anyone was able to give a detailed advance prediction of exactly what's been happening in the financial markets recently. Thus: systemic misestimation of risk can yield unpredictable and severe consequences.

I think that’s a general statement about the nature of risk–at least in large, complex systems.

Bottom line: WAGs about risk may be more dangerous than they appear at first glance.

Clive Robinson October 25, 2008 4:38 PM

@ Pat Cahalan,

You should view my comments about metrics from the opposite direction.

1, Firstly, we do not have unlimited resources.

2, Secondly, to be effective, resources have to be primarily focused where they will achieve the most bang for the buck.

3, Thirdly, humans are very very bad at making decisions about information at the best of times without being able to quantify and thereby qualify their thinking.

4, Fourthly, as can easily be seen with politicians, in the absence of quantifiable data humans are easily swayed by prejudice, by groupthink from those around them, or by other inducements (i.e. you tell the boss what you think he wants to hear for personal, not objective, reasons even though you might dress it up as objective…).

5, Fifthly, and importantly, some humans are well aware of this failing, especially where their money is directly involved (insurance, banking, etc.), and have seen the need to devote resources to developing effective methods of determining risk and mitigating it (actuarial science etc).

6, Sixthly, where individuals do not carry direct risk either personally or financially, "pet theories" will always come first. It can take well over a hundred years for people to stop waving their arms and chasing pet theories like a bunch of Greek philosophers ignoring the practicalities of reality, and actually get around to applying some measure of science to the problem and thereby killing "sacred cows".

Unlike modern man, the Greeks had some excuse: there was just the logic of the spoken word to guide them. It was not until Isaac Newton realised that your theories had to be based on verifiable experimentation that Natural Philosophy was born, and Newton's methods were sufficiently understood by the Victorians, such as Lord Kelvin, to enforce the methodology and coin the terms Science and Physics for what Newton and others did.

However the old Greek method still abounds; a modern(ish) example of supposed experts behaving like Greek philosophers is doctors. It took them around 150 years to wake up and realise what the life insurance industry had found out from assessing raw data: that in a large enough population there is a quantifiable relationship between a person's weight, height, and life expectancy (now called BMI). Although well known by actuaries as being an approximation, doctors still get it wrong and have not taken on board that it is just an approximation based on populations. They still slavishly attempt to apply it to individuals even though there is significant data to show otherwise (i.e. it is actually the quantity of fat, not muscle, etc.), which is why some top sports persons are considered morbidly obese…
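
For reference, the proto-metric in question is simply

$$ \text{BMI} = \frac{\text{mass (kg)}}{\text{height (m)}^2} $$

a single ratio whose link to life expectancy was established on population-level data, which is exactly why it breaks down when applied naively to individual, heavily muscled athletes.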

Further, doctors all used to puff away on cancer sticks like steam trains until a report based on sound statistical evidence (from insurance data again) showed that they were killing themselves. Considerably worse, it still took them a few more years to stop recommending smoking to pregnant women. And guess what, the Greek Health Ministry (due to political incentives) actually allows one cancer stick manufacturer to put an "approved by health ministry" label on its packets of cigarettes…

So as I have said there is a very clear need for raw data from security incidents to be analysed. Unfortunately, among other reasons, anti-monopoly legislation is stopping the information being collated, analysed, and used to produce metrics that can be used to target scarce resources where they will achieve the best value.

But you have to be careful: BMI is an example of a proto-metric, that is, a first approximation taken on a large population or data set. Further work was (and still is) required to show that it was not mass in general but mass of a particular type (fat) and where it is located (around the organs) that matters, and as we are currently finding out there are major differences not just across geographical areas but in people's heredity, and, frighteningly, trace levels of industrial chemicals used in first-world food packaging are currently building up in (white) fat, causing a form of insulin resistance that causes the body to lay down more fat…

Well, with the SANS top ten best practices we are just starting out on the BMI path; do we really want to wait 200 years before we actually get around to solving the issue?

Roger October 25, 2008 11:01 PM

The problem is not only that most of these numbers are guesswork, but also that the problems themselves are often ill-conditioned, or effectively so — that is, changes in the input guesses that are small relative to our uncertainty can result in quite different conclusions. This is perhaps one reason why, in the field of security, apparently intelligent people with roughly similar goals end up screaming at each other and name-calling. (Yes, I'm thinking of differing attitudes to the threat of terrorism, but it applies to many areas.)
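
A toy numerical example of that ill-conditioning (all figures invented): two probability estimates that differ by less than a plausible uncertainty band lead to opposite conclusions about the same countermeasure.

```
# Small changes in an input guess, well inside its uncertainty,
# flip the conclusion of a simple cost/benefit comparison.
loss_if_breached = 5_000_000     # dollars, invented
countermeasure_cost = 100_000    # dollars per year, invented

for annual_probability in (0.015, 0.025):  # both plausible if the estimate is 0.02 +/- 0.01
    expected_loss = annual_probability * loss_if_breached
    verdict = "worth it" if expected_loss > countermeasure_cost else "not worth it"
    print(f"p = {annual_probability:.3f}: expected loss ${expected_loss:,.0f} -> {verdict}")
```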

There seem to be perhaps three approaches to this problem, but none of them are entirely satisfactory:

A. We can identify specific areas where the risk matrix is not ill-conditioned, and where we can come to conclusions that nearly everyone can agree on. Unfortunately, most problems of this nature are trivial and were solved years ago.

B. We can try to collect sufficiently accurate data to arrive at a correct solution, even though the solution might be quite sensitive to errors. Unfortunately, it seems to be extremely difficult to come by data of sufficient accuracy. Further, if done on the cheap, this approach can easily suffer from what I call bean-counterism: the assumption that past trends provide a simple predictor for future risk, a view that is often dangerously misleading when dealing with an intelligent and malicious opponent. For example, just because the incidence of cat-burglary was very small for the last N years doesn't mean it won't change overnight if you tell people that there is no need to lock upper storey windows or keep ladders locked away.

C.(i) Forget about ALE type analysis and just adopt solutions that are broadly applicable, regardless of the specific threat. Unfortunately, in IT our arsenal of such techniques is fairly limited, and the ones with broadest application (e.g. auditing and surveillance, principle of least privilege) are — probably because of this very flexibility — the easiest to abuse for other purposes, making them also often unpopular with system users.

C. (ii) Forget about threat analysis at all, and adopt a grab-bag of defensive strategies, none of which are demonstrably necessary but which are very cheap and “might do some good.” Very popular in practice.

C. (iii) Forget about improving security, and just make yourself a less attractive target by aggressive counter-attacks (in the courtroom, if you are a civilian.) Popular with the lawyers, not so much with the security staff.

Clive Robinson October 26, 2008 7:04 AM

@ Roger,

Your C points are unfortunately where the various parts of the industry appear to be playing. And as you are no doubt well aware, by and large it's a bad place to be.

So for those who might think of playing in these areas, a quick counter-position / heads up as to why you should tread carefully if you want to play in the minefield.

In reverse order,

“C. (iii) Forget about improving security, and just make yourself a less attractive target by aggressive counter-attacks (in the courtroom, if you are a civilian.) Popular with the lawyers, not so much with the security staff.”

Hmm, and it is a strategy that seldom if ever works. When push comes to shove it makes you poor in the process and can damage you in other ways 8(

It really only works as a threat, and then only if nobody calls your bluff.

We have seen that aggressive behaviour and civil court action has done little or nothing for the various digital media IP holders. Sure, they have attacked some people who have worked on the fringes and put them out of business, but somebody else always takes up the baton and tries a different tactic or a different jurisdiction.

And contrary to what most people might be told by the press relations people of such organisations, punitive damages from the "little people" actually do not deter others. In fact it makes the little people look like victims, and the IP holders like jackbooted thugs. And as the little people invariably have few assets, the only people making money on such endeavors are the legal people. So the win is considerably less money and a very damaged reputation…

And also if you make the mistake of attacking those with assets and the ability to defend themselves you enter a war of attrition as SCO found to their cost.

Even if you win it can be a Pyrrhic victory at best, as Microsoft realised from its losses in courts in Europe and against other IP holders, which is why it (apparently) used SCO as a proxy in its battle against "Open Source".

And so onto number two,

“C. (ii) Forget about threat analysis at all, and adopt a grab-bag of defensive strategies, none of which are demonstrably necessary but which are very cheap and “might do some good.” Very popular in practice.”

Well, I tend to take a less optimistic view of the grab-bag approach. As many are finding, the measures may individually be cheap, but they seldom do any real good and a number are demonstrably harmful.

Bruce has his "50ft stake in the ground", and others have "vault door on a tent" etc. comments. It is worse than "security by obscurity" as a defense strategy. The only time it apparently works is when a technologically blind person bumps into the stake or door. However most "bean counters" are blind to the realities of both technology and security, as are a large number of others sitting in the walnut corridor. So you get the blind leading the blind, and in that place of pain the one-eyed man is the king of thieves, not just the prince.

Or less dramatically 8) if you don't know what you are doing you are deluding yourself with grab-bag measures. All they do at best is consume your resources and give you a false sense of security, which is not good.

So I guess you could call it "Security by delusion" (can I trademark this one? 😉)

This is because with the grab-bag approach the resources are going to product, not process, which as Bruce has pointed out in "Secrets and Lies" is at best doing things backwards.

IDC published a forecast report in 2002 that indicated security spending in the US was just about 200USD per person and was rising at a little over 20% year on year, so it is arguably now around the 750USD mark if the trend has followed the forecast.

Inflation alone, however, would have taken the figure to only about 225USD, whilst as we "know" (in our guts 😉) IT security in general has got worse. That is not a sustainable path in any person's view.
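
For what it's worth, the compounding roughly checks out; a quick back-of-the-envelope calculation (the rates are only the approximations used above) gives:

```
# Projecting the 2002 IDC baseline of $200/person forward six years
# under the growth and inflation rates mentioned in the comment.
base = 200   # USD per person in 2002
years = 6    # 2002 -> 2008

for label, rate in [("security-spend growth at 22%/yr", 0.22),
                    ("security-spend growth at 25%/yr", 0.25),
                    ("general inflation at ~2%/yr", 0.02)]:
    print(f"{label}: ${base * (1 + rate) ** years:,.0f} per person by 2008")
```

So "a little over 20%" lands somewhere between roughly 660USD and 760USD per person, against about 225USD if spending had merely tracked inflation.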

Which, as most do grab-bag security, tends to suggest it is not only not working, it's sucking the lifeblood out of those organisations.

So onwards and upwards on your list and we come in turn to,

“C.(i) Forget about ALE type analysis and just adopt solutions that are broadly applicable, regardless of the specific threat. Unfortunately, in IT our arsenal of such techniques is fairly limited, and the ones with broadest application (e.g. auditing and surveillance, principle of least privilege) are — probably because of this very flexibility — the easiest to abuse for other purposes, making them also often unpopular with system users.”

Surprise surprise, this is the option that gets you into the top ten best practice category. And as you noted, provided (and it's a big proviso) you do it with sensitivity and tact, and do not use it as a whip or hammer or, worse, a downsizing tool.

However, as noted by John Leach (if I remember it correctly),

“The Information Security Forum (ISF) collects a load of data about data breaches that companies suffer and the things they do to protect themselves… The ISF found a correlation that, after normalisation, the companies that suffer the fewest problems all tend to do the same ten controls well… So the ISF say if you do these controls you will significantly reduce your risk… ”

Which to my mind sounds like Doctors and BMI. However John went on with,

“I asked ISF why the controls did this… The short answer from the ISF was they had no idea”

So it is an observation over data that has shown a significant correlation to a desirable outcome. Now the ISF (like the insurance actuaries who rested on their laurels) appear not to have so far pursued it into proto-metrics. Which personally I think is the next step to be taken, and the sooner the better.

The ANSI guide, with its "consult the world and his dog, make up your own figures, and plug them into our ALE-derived formulation" approach, is actually worse than this.

As Bruce noted, its real message is: externalise your risk wherever you can. So back to the old "Pass the Hot Potato" game.

Which, to add a warning or salutary note, is the risk re-insurance game: the same game that is currently bringing banks around the world down, just as it nearly destroyed Lloyd's of London back in the last century with the "new names" and the LMX spiral.

Jason October 27, 2008 2:13 AM

This mathematical approach is as accurate and scientifically sound as the Drake Equation.

(That’s a criticism. It’s pointless to say that risk = probability * severity when you have no way of estimating probability.)

Vilhelm October 27, 2008 4:39 AM

@ Clive,

Thanks, but I was assuming most readers here were familiar with that, it having been written about before.

As for your comment on subjective judgements, it is hard to see how you could avoid that becoming a reality if there is a large-scale proposal to use risk as a security metric.

Security metrics or quantified risk are often thought to be a more objective approach to security, removing part of the need to rely on subjective expert judgement for your security actions. Just comparing some numbers doesn't require any intelligence, right? Well, it turns out that it does. So we have just moved the problem into a usability problem, and security usability faults often become security problems. So to some degree we are back with the experts.

Clive Robinson October 27, 2008 11:39 AM

@ Vilhelm,

First off, just about everything else in business apart from IT security is based on metrics which have cardinal (not ordinal) real units of measure (i.e. dollars / man-hours). And importantly they are usually easy to understand and based on data that is easy to gather automatically.

Why should we assume IT security is any different?

The big problem with security is that the probability of a "particular" event in any particular business time frame is close to zero. The cost, however, when the particular event occurs can be high either directly or indirectly.

So from that point of view IT security is not much different to physical security.

The infrequency and high cost make most "particular" events edge events, which are always very difficult to deal with rationally.

ALE, for instance, can really only start to deal with events that occur two or more times a year; otherwise it is meaningless.

Then there is the issue that "particular" events in IT security tend to be cascade events. That is, they do not have a normal distribution as most other predictable events do. To use the insurance industry as an analogy, they are not car or house thefts (i.e. 1 event, 1 claim, evenly distributed in time across a large population); they are hurricanes (1 event, many claims at once, that rarely happens).
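
A small invented example of why that distinction matters: two risks with identical ALE but completely different character. The first is the kind of frequent, evenly distributed event ALE handles well; the second is exactly the rare cascade event it hides.

```
# Same annualized loss expectancy, very different risk profiles.
# All figures are invented for the illustration.
risks = [
    ("car/house-theft analogue (frequent, small)", 10.0, 10_000),
    ("hurricane-style cascade event (rare, huge)", 0.01, 10_000_000),
]

for name, freq_per_year, loss_per_event in risks:
    ale = freq_per_year * loss_per_event
    print(f"{name}: ALE = ${ale:,.0f}, single occurrence = ${loss_per_event:,.0f}")
```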

Engineering acknowledges these edge events as 1/100/1000-year storms etc. Engineers know how to design and build systems for these events. They do this by making "engineering assumptions", which are generally anything but assumptions and have their feet firmly anchored in mathematics.

Engineering assumptions generally work by mitigating classes of events. The classes are based on their effects, not their causes; therefore distant blast effects are lumped in with high-intensity wind gusts etc. Further, if the effect cannot be objectively analysed then it is effectively ignored, as you cannot mitigate against the unknown.

Likewise an IT event class would be “loss of customer data” not “what if a virus got into our network”.

The basic approach would be that of science,

1, Gather raw data.
2, Analyse raw data.
3, Look for correlations.
4, Hypothesize on a correlation.
5, Find an objective measurable metric to test the hypothesis.
6, If a metric is not found, break the hypothesis up into sub-hypotheses until a measurable metric is found.
7, If no metric is found for an atomic hypothesis, then reject the hypothesis.

This is similar to a McKinsey & Co "diagnostic" process, and by and large it works and is reliable in its findings. Importantly, the metrics derived from the process usually give a good indication of what is required to efficiently mitigate any class of risk that has been examined.
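
As a sketch of what steps 1 to 3 might look like in practice (the data here is entirely hypothetical), one could start with something as simple as:

```
import numpy as np

# Hypothetical raw data: how well each organisation implements a given
# control (0-5 maturity score) and how many incidents it suffered that year.
control_score = np.array([1, 2, 2, 3, 4, 4, 5, 5])
incidents = np.array([9, 7, 8, 5, 4, 3, 2, 1])

# Step 3: look for a correlation worth turning into a hypothesis.
r = np.corrcoef(control_score, incidents)[0, 1]
print(f"correlation between control maturity and incident count: {r:.2f}")
# A strong negative correlation is a candidate proto-metric to test
# (steps 4-7), not proof that the control causes the reduction.
```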

Further, the metrics survive the "raised eyebrow" / "and how is that going to…" test, in that they are fairly clear to a manager etc.

In essence you need to know what objects you need to protect and identify how you protect each object. You do not need to know (nor can you ever know) every way the object can be attacked and then prevent each and every attack.

It still amazes me that very few people seem to know what it is they are trying to measure and importantly why.

Oh, and just to reiterate that "externalising risk/liability" is a bad idea. Look at it this way: if you hand over the customer database to a third party and they lose it, you may be able to claim damages or insurance. But your customers will still blame you and take their business elsewhere if they can, which usually they can.
