Does Risk Management Make Sense?

We engage in risk management all the time, but it only makes sense if we do it right.

“Risk management” is just a fancy term for the cost-benefit tradeoff associated with any security decision. It’s what we do when we react to fear, or try to make ourselves feel secure. It’s the fight-or-flight reflex that evolved in primitive fish and remains in all vertebrates. It’s instinctual, intuitive and fundamental to life, and one of the brain’s primary functions.

Some have hypothesized that humans have a “risk thermostat” that tries to maintain some optimal risk level. It explains why we drive our motorcycles faster when we wear a helmet, or are more likely to take up smoking during wartime. It’s our natural risk management in action.

The problem is our brains are intuitively suited to the sorts of risk management decisions endemic to living in small family groups in the East African highlands in 100,000 BC, and not to living in the New York City of 2008. We make systematic risk management mistakes — miscalculating the probability of rare events, reacting more to stories than data, responding to the feeling of security rather than reality, and making decisions based on irrelevant context. And that risk thermostat of ours? It’s not nearly as finely tuned as we might like it to be.

Like a rabbit that responds to an oncoming car with its default predator avoidance behavior — dart left, dart right, dart left, and at the last moment jump — instead of just getting out of the way, our Stone Age intuition doesn’t serve us well in a modern technological society. So when we in the security industry use the term “risk management,” we don’t want you to do it by trusting your gut. We want you to do risk management consciously and intelligently, to analyze the tradeoff and make the best decision.

This means balancing the costs and benefits of any security decision — buying and installing a new technology, implementing a new procedure or forgoing a common precaution. It means allocating a security budget to mitigate different risks by different amounts. It means buying insurance to transfer some risks to others. It’s what businesses do, all the time, about everything. IT security has its own risk management decisions, based on the threats and the technologies.
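
One standard way to make that balancing concrete is the annualized loss expectancy (ALE) calculation, a common industry convention rather than anything the essay itself prescribes. A sketch, with invented figures:

```python
# Hedged sketch of the classic annualized loss expectancy (ALE) tradeoff.
# All dollar figures and rates below are hypothetical, for illustration only.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Expected yearly loss from one risk."""
    return single_loss_expectancy * annual_rate_of_occurrence

# A breach expected to cost $200,000, occurring on average once every 4 years:
ale_without = ale(200_000, 0.25)    # $50,000/year

# A control that halves the occurrence rate, costing $15,000/year:
ale_with = ale(200_000, 0.125)      # $25,000/year
control_cost = 15_000

# The control is worth buying only if the loss it avoids exceeds its cost.
net_benefit = (ale_without - ale_with) - control_cost
print(net_benefit)  # 10000 -> worth it, on these assumptions
```

The same arithmetic, run across every control competing for the same budget, is what "allocating a security budget to mitigate different risks by different amounts" amounts to in practice.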

There’s never just one risk, of course, and bad risk management decisions often carry an underlying tradeoff. Terrorism policy in the U.S. is based more on politics than actual security risk, but the politicians who make these decisions are concerned about the risks of not being re-elected.

Many corporate security decisions are made to mitigate the risk of lawsuits rather than address the risk of any actual security breach. And individuals make risk management decisions that consider not only the risks to the corporation, but the risks to their departments’ budgets, and to their careers.

You can’t completely remove emotion from risk management decisions, but the best way to keep risk management focused on the data is to formalize the methodology. That’s what companies that manage risk for a living — insurance companies, financial trading firms and arbitrageurs — try to do. They try to replace intuition with models, and hunches with mathematics.

The problem in the security world is we often lack the data to do risk management well. Technological risks are complicated and subtle. We don’t know how well our network security will keep the bad guys out, and we don’t know the cost to the company if we don’t keep them out. And the risks change all the time, making the calculations even harder. But this doesn’t mean we shouldn’t try.

You can’t avoid risk management; it’s fundamental to business just as to life. The question is whether you’re going to try to use data or whether you’re going to just react based on emotions, hunches and anecdotes.

This essay appeared as the first half of a point-counterpoint with Marcus Ranum in Information Security magazine.

Posted on October 14, 2008 at 1:25 PM


Wicked Lad October 14, 2008 3:02 PM

Good column, Bruce. You wrote, “There’s never just one risk….”

Right. In the for-profit world, there are three: balance sheet risk, income statement risk and ethical risk.

Bruce also wrote, “The problem in the security world is we often lack the data to do risk management well.”

Right. And we often lack the data because the data don’t exist. Some of our greatest risks are from tail events (Nassim Nicholas Taleb’s “Black Swan”), where there’s no calculating the probability of a loss or the magnitude of the loss, so there’s no gauging the risk in any useful way. It’s a tough problem.

Davi Ottenheimer October 14, 2008 3:36 PM

“So when we in the security industry use the term “risk management,” we don’t want you to do it by trusting your gut. We want you to do risk management consciously and intelligently, to analyze the tradeoff and make the best decision.”

That’s so untrue I’m not even sure where to begin.

Management has to trust their gut on virtually every other business decision, despite conscious and intelligent design, and yet you think we should tell them that security is an exception?

I want execs to appreciate the good/bad/ugly info from the security dept, but I would never kid myself into thinking that good decisions are always purely rational…

nullbull October 14, 2008 3:37 PM

“The problem in the security world is we often lack the data to do risk management well.”

I’m not so sure. From the perspective of any other kind of risk management, even the most data-driven, like actuarial work, the data are always retrospective. And future conclusions are always driven by past data. Large organizations with significant information security departments have investigations and incident response teams. Those worth their salt track data about breaches, do root cause analysis, etc.

Similarly, costs can be estimated using a variety of weighting criteria. Is the data being protected subject to regulation? How is it classified in your data classification scheme? Have you conducted a business impact analysis? Does that feed your assessment of risk?

Many of these would require some kind of subjective application of a VALUE to a qualitative judgment. But picking those qualitative judgments apart, enumerating them, and assigning weight has merit. And it can be data driven, particularly if you are a large enough organization to have investigations data that supports pulling these issues apart.
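
That kind of enumeration and weighting might be sketched as follows (the criteria and weights here are invented for illustration, not drawn from any standard):

```python
# Hypothetical sketch: turning qualitative judgments into a weighted score.
# Criteria and weights are invented; a real scheme would be calibrated
# against an organization's own investigations data.

weights = {
    "regulated_data": 3.0,   # is the data subject to regulation?
    "classification": 2.0,   # data classification level (0-3)
    "bia_impact": 2.5,       # business impact analysis rating (0-3)
}

def risk_score(ratings):
    """Weighted sum of qualitative ratings, each on a small ordinal scale."""
    return sum(weights[k] * v for k, v in ratings.items())

customer_db = {"regulated_data": 3, "classification": 3, "bia_impact": 2}
wiki_server = {"regulated_data": 0, "classification": 1, "bia_impact": 1}

print(risk_score(customer_db))   # 20.0
print(risk_score(wiki_server))   # 4.5
```

The point is not the particular numbers but that the judgments are pulled apart, named, and weighted explicitly, so they can be argued about and revised.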

Clive Robinson October 14, 2008 3:38 PM

As I have said on previous occasions when Bruce has gone on about Risk Management and ROI on security spending,

Without metrics, all of what we do is meaningless, as there is no way to critically assess any measures taken.

It is not as though there is not raw data out there in abundance. However most of it is kept hidden to protect the guilty.

Until we develop reliable metrics, Bruce’s article will be just as relevant in ten years’ time…

Davi Ottenheimer October 14, 2008 3:48 PM

“The problem in the security world is we often lack the data to do risk management well.”

I used to think this might be true, but after years of consulting with execs I have found more often that people would rather fly blind and paint a positive picture (hope) than have to deal with data they do not understand (yet). I have had data in excess, but the problems you describe still linger. I call this a leadership issue, not one of data.

For example, consider The Palin Drone’s instructions to her staff:

“Staff, I believe if we look for the positive, that is what we will ultimately find. Conversely, look for the negative and you’ll find that, too. … Wasilla has tremendous assets and opportunities and we can all choose to be a part of contributing to the improvement of our community … or not. I encourage you to choose the prior because the train is a’moving forward! I realize this is an added chore, but at least it’s a positive one!”

It’s hard to face the truth, to learn, and to reason. Much easier in the short term to paint everything yellow with a smiley face and keep marching.

The problem is not a lack of data, it is a lack of emphasis from leaders on the need to normalize, centralize and crunch through a plethora of data to figure out what is really going on.

Tufte, DeMarco and Lister are all excellent resources on this issue of how to listen and visualize what’s already right in front of us.

Analogy Guy October 14, 2008 3:49 PM

I find it entertaining to make comparisons with the current financial meltdown. First off, I attribute the subprime and prime mortgage problem to bad risk management. The lenders assumed that their loans would be secured by the value of the underlying property in the mortgage. Their risk management models apparently never incorporated the risk of the property itself being devalued. The lesson being something like “live by the model, die by the model.”

The other half (or rather 90%) of current problems being CDSs (Credit Default Swaps), which are kind of like insurance policies on loans with one major exception: as a 3rd party you can purchase CDS ‘insurance’ on somebody else’s loan, which is essentially a bet that the borrower is going to default. And if you buy a CDS you can use it as collateral to write a CDS on the same company and sell it on to another party.

The market for Credit Default Swaps is almost completely unregulated resulting in a market which had very little transparency thus making it extremely difficult to properly evaluate the risk of buying a CDS itself. In other words the details of the company actually being insured against failure were pretty well understood, but the risks that the company under-writing the CDS could pay out were not. The lesson for this part of the story being something like “garbage in, garbage out.”

False Data October 14, 2008 4:02 PM

Analogy Guy makes a great point. The last couple decades are littered with insurance companies, investment banks, hedge funds and similar examples of the limits of the formal approach, even when they’re facing the relatively simple, one-dimensional tradeoffs of managing financial risk. Long Term Capital Management comes immediately to mind: their business model, at least initially, was all about managing risk using an exceptionally sophisticated mathematical risk model, computer-driven decision-making, and tremendous financial resources. That’s not to say that formal risk management is bad, but it does make me wonder if there’s a way you could quantitatively measure how effective different approaches to risk management are.

Davi Ottenheimer October 14, 2008 4:39 PM

“Their risk management models apparently never incorporated the risk of the property itself being devalued. ”

Yes, good analogy, but here too I saw massive amounts of data pointing to hyper-inflated prices and debates about a housing bubble, etc. and yet very little movement towards offset of the risks.

The PMI Mortgage Insurance Group’s Risk Index is a perfect example of the data. You can’t say this stuff is not available, let alone meaningful.

So the problem was not relative to data available, but instead a crisis of leadership.

Consider Germany’s critique:

“…Merkel said her government had tried in vain to win G8 support last year for tighter regulation of hedge funds and financial oversight of capital markets, hinting that she felt vindicated in her stance as a financial disaster unfolded on Wall Street in recent days.”

I suspect Basel II is likely to get far more attention soon, especially Pillars 1 and 2.

cgr October 15, 2008 4:07 AM

Wicked Lad noted that we are often faced with multiple threats/risks but “managing risk” also carries the implication that there are 2 or more candidate routes to follow, e.g. “What’s the risk of having this medical procedure?” implies that you can opt to have it – or not. The “opt out” may comprise several possibilities of course – do nothing; try medication; try the local shaman, etc.

There is a grotesque tendency in the media to focus only on the risks of a certain action and to ignore the risk of inaction. This leads to an uninformed debate about many issues, typically where the adoption of new technology (in the widest sense) is concerned. Of course, it’s difficult enough to write informatively about many risks because, as Clive noted, we have no, or at least few, metrics, and the ones we do have are difficult to truly understand even for some experts (see the note below). Nevertheless the risk of ‘A’ means nothing without knowing the risk of ‘not A’.

Note: There is an excellent book called “Reckoning with risk” by Gerd Gigerenzer which I can recommend. It’s a one-idea book but very readable and should be mandatory for all med students and strongly advised for anyone who is faced with difficult decisions about healthcare. The author spent time in Chicago, IIRC, and tested academic clinicians – right up to the most senior level – on their understanding of some medical risk probabilities. The results are striking enough to convince me that I will always do the risk analysis myself, using GG’s approach.
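
A hypothetical example in Gigerenzer’s natural-frequency style (the disease and test numbers below are invented for illustration, not taken from the book):

```python
# Hypothetical illustration of the natural-frequency approach to medical risk.
# Invented numbers: a condition with 1% prevalence, a test with 90%
# sensitivity and a 9% false-positive rate. Out of 1000 people:

population = 1000
sick       = 10                      # 1% prevalence
healthy    = population - sick       # 990

true_pos  = round(sick * 0.90)       # 9 sick people test positive
false_pos = round(healthy * 0.09)    # 89 healthy people also test positive

# Of everyone who tests positive, how many are actually sick?
p_sick_given_positive = true_pos / (true_pos + false_pos)
print(round(p_sick_given_positive, 2))   # ~0.09, far lower than most people guess
```

Stated as percentages and conditional probabilities, even clinicians routinely get this wrong; stated as counts of people, the answer is almost obvious.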

Bill October 15, 2008 4:16 AM

“the fight-or-flight reflex that evolved in primitive fish”

Primitive fish were around in the Devonian, but it’s likely this behavioural adaptation evolved independently many times over, and much earlier; cf. trace fossils from the Ediacaran. Arguably bacteria display it, and they likely go back 3 billion years or more.

Naveen JP October 15, 2008 5:00 AM

I think risk management makes sense, but the way it is approached is important (obvious!). I think there could be some thumb rules (not rules, as always) for approaching risk management. Some important items would be classification and preparedness. The preparedness would of course depend on the class of risk. The key here is to identify and anticipate classes of risk. This, in my opinion, is the tough part. Once this is done, different models would be applied to different classes.

The current financial situation is an example where risks were not classified right and hence preparedness was out of the question!

Clive Robinson October 15, 2008 8:11 AM

@ Analogy Guy,

You missed one point. Those lending the money were not those carrying the risk.

That is, the people selling knew full well that what they were doing was extremely high risk. However they also knew that they were only in it short term (90 days or so) before they got their sales commission. Those above them likewise knew, but their remuneration was based not on risk but on the performance of those below them.

Further up the chain, those supplying the money to the sellers were likewise in it short term. They had the harder job of parcelling up the bad risk in such a way as to make it look like good risk and thereby sell it on up the chain.

At each step people knew it was a “hot potato” and just dressed it up for the next “mug” up the line. Eventually, by reparcelling the high risk debt with other debt in “financial instruments/vehicles”, it became virtually invisible, and after a couple more steps it found the source of the money: pension funds and other large value funds such as insurance.

Which means a few days / weeks / months down the road they will probably start to collapse (again it’s the little guys’ money that will be stolen).

Further, the CDS problem is part of a well known class of problem, “re-insurance”, and we know what is going to happen from fairly recent experience. Go look up Lloyd’s of London and the LMX spiral of re-insurance that was created quite deliberately by senior people in Lloyd’s to suck in new money to gamble with, whilst fully protecting the “old money names” and “working names”.

The little people got very badly hurt, and guess who was given the task of deciding whether these new names, who were now destitute, should keep bleeding or not?

The wife of perjuring author and ex-Tory Lord Jeffrey Archer: the serial oppressor of and litigant against her employees, possible perjurer and PhD chemist Dr Mary Archer.

The future looks somewhat interesting.

Duncan Kinder October 15, 2008 9:11 AM

“That’s what companies that manage risk for a living — insurance companies, financial trading firms and arbitrageurs — try to do. They try to replace intuition with models, and hunches with mathematics.”

These are precisely the brilliant fellows who have brought us the recent financial meltdown.

Now if they had followed the Stone Age idea that treating a house as an investment is a jackass idea, then the current economic problems would not exist.

Apparently we need better number crunching.

Or something.

Orlando Stevenson October 18, 2008 12:08 AM

Ah, wouldn’t life be grand if measuring risk was always a simple and predictable calculation exercise. Rejoining reality, here’s an example of a pragmatic risk formula that Ira Winkler and others refer to, which may be useful when discussing the topic in mixed company:

   Risk = Threat * (Vulnerabilities / Countermeasures) * Value

   A: Threats are just there; the reality is that not much can reasonably be done to eliminate them.
   B: Value (or impact) should continue going up, or someone should be fired.
   C: The remaining variables left to help manage risk center on vulnerabilities and countermeasures.

Pragmatically addressing the remaining risk variables all helps: benchmarking with peers, meeting regulatory requirements, and clear-eyed assessments, especially to get consensus on value and the current state of security. The reality is that there is always a limited set of programmatic knobs and buttons to help ensure vulnerabilities are minimized and appropriate countermeasures are in place. All of which must be done well, continuously, and in concert to stem a dynamic threatscape that never sleeps. The bar keeps rising; we need to keep up or the risk goes through the roof.
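
For what it’s worth, the interaction of those knobs can be sketched numerically (the scores below are arbitrary ordinal values invented for illustration; the formula is only as meaningful as its inputs):

```python
# Toy illustration of Risk = Threat * (Vulnerabilities / Countermeasures) * Value.
# All scores are arbitrary, for illustration only.

def risk(threat, vulnerabilities, countermeasures, value):
    return threat * (vulnerabilities / countermeasures) * value

# Threat and value are largely fixed; vulnerabilities and countermeasures
# are the knobs a security program can actually turn.
before = risk(threat=5, vulnerabilities=8, countermeasures=2, value=9)  # 180.0
after  = risk(threat=5, vulnerabilities=4, countermeasures=8, value=9)  # 22.5
print(before, after)
```

The sketch shows the shape of the argument: with threat and value held constant, reducing vulnerabilities while adding countermeasures is the only lever that moves the number.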

One question I always think about, and sometimes ask out loud when a supported decision is needed about a particular solution path or mitigation: “Given where we’re at, what can we hold up {perhaps raising my hand} as evidence of doing the ‘right thing’ in hindsight to address {related potential problems or series of problems} if we don’t move forward?”

Not a cure-all, but one way to cut to the chase and get folks engaged in the bigger, longer-term picture, which is what risk management is ultimately about.

Bruce Grembowski October 18, 2008 6:17 PM

Risk management has colored everything I do in my computer career, starting with college. What is the risk of turning in a program that’s 80% functional on time vs. turning it in two days late but 100% functional?

Writing programs for a living on closed systems (stand-alone or with users connected via leased lines only) required much less attention to encryption than almost any work today, now that almost every computer is accessible via the web. User IDs and passwords were simple and never needed changing. But someone would have to physically break in to our office or into one of our clients’ offices to get physical access to the systems, before even having to worry about computer-based security. These systems were actually shipped with default accounts and passwords set up that even the tape jockeys had memorized (“1/18;backup” anyone?).

Instead, our focus was on the risk of the programs not responding coherently when presented with unanticipated input. The risks were unanticipated output not recognized as such by the user (incorrect statements) that could affect the user’s bottom line, or incoherent output (GIGO) that could cause the user to withhold payment to our company or seek another vendor.

The introduction of dial-up modems on the company server and stand-alone systems with modems in client offices created a new risk (e.g., teenagers with war-dialing programs), but the risk was not considered that great by the company I worked for.

It turns out that the greatest risk introduced by these modems was to the users of the stand-alone systems the company provided, which had modems set up for access by the company to perform maintenance, retrieve data for generating statements, etc. When one of these users got fed up with the monthly charges for “maintenance” not really needed on the stand-alone systems, they did not have the knowledge to prevent the company from continuing to access their system and their client data.

So, once I was no longer working for the company, one of the first services I provided was an account scrubbing, to prevent the company from accessing systems which they had no business accessing.

This was reactive risk management on my part–a particular user had found out that the company, from which they had severed ties, was still accessing their computer. Most security measures I have seen in my career have been based on plugging security holes like this–ones that have been exploited, not by a thorough analysis of what the threats might be.

For example, worries about the threat of SQL injection did not seem to become important until this technique had been used to compromise data.

In a larger sense, everything we do is about risk management, and as Mr. Schneier states, it is much better to try to formalize our approaches to security. But, in the end, I simply find it too hard to “think like a bad guy” and anticipate what techniques will be used to break security measures in the future; thus, I find myself hopelessly one week behind, reacting to yesterday’s attack, and closing the barn door after one horse has escaped (but keeping the other nine safely inside). This, of course, after having analyzed every possible security threat before implementing the system.

Thus, more than simply trying to formalize the approach to risk management, I find myself constantly re-formalizing my approach to risk management.

P.S.: This item was in the first copy of Mr. Schneier’s “CRYPTO-GRAM” newsletter that I received, and Yahoo! Mail promptly found it to be a risk and placed it in the spam folder 🙂

Alex October 19, 2008 7:58 AM

“here’s an example of a pragmatic risk formula…:”

Risk = Threat * ( Vulnerabilities / Countermeasures) * Value

NO, no, that’s not pragmatic at all! That’s pure nonsense. Just randomly multiplying things together is a big part of the problem (and much more so than a lack of data).

djb October 20, 2008 7:16 PM

For the most part, I find that risk management methods which fare so well in purely quantitative matters are woefully insufficient when applied to security.

Risk management relies upon information, and there simply isn’t enough good data for the task. In the absence of data, we make assumptions. I’m OK with that; however, in my experience the strategies that are produced as a result of assumption-based risk analysis are too often regarded as “written in stone” or unassailable. This is counterproductive to adaptable security. Because so much is unknown in any given security environment, a certain amount of trial and error, course correction, and qualitative assessment is not only valuable but absolutely necessary. The unknowns make numeric analysis difficult.

Some aspects of security are quantifiable, and rightly should be. Others are qualitative but, alas, are often assigned a subjective numeric value: Political pressures, brand reputation, the influence of competing agendas, biases, perceived predictive ability, etc. Having said that, a room full of people can and will argue all day about the values ascribed to these gray areas. That’s one of the reasons effective security is so elusive.

There is also the “respectable profession” bias to consider. Security professionals are under severe pressure to present their craft as a “serious” profession in the eyes of those who don’t understand security but, as a result of unhappy fate, happen to control the security budget. One of the easiest ways to do that is to adopt a facade of numeric rigor. The results look like solid analysis to quants, even if the process is full of unknowns and unstable variables. Therefore, we get SROI, ALE, and other metrics. These metrics are based on so many questionable assumptions and predictions that they are largely useless.

Let’s look at accounting and finance for an interesting example. Occasionally, quantitative numbers cannot be determined for a particular asset or transaction. What do accountants and CFOs do in such cases? They make assumptions based on the asset, risks, rewards, best practices, and their professional judgment. This reminds me of security professionals before the obsession with shoddy risk management tools.

Clive Robinson October 21, 2008 12:43 AM

@ djb,

Nice analysis of “present state of the art”…

However what gets me the most is that we know that there must be a very large amount of data out there, but we just don’t get to see it.

For three reasons,

1, Nobody likes to admit they have failed (especially to shareholders etc).

2, Those who have failed “to anticipate” don’t want to lose their jobs, so report things in a distorted way.

3, The raw data is seldom if ever available, as it is felt (probably correctly) that it would reveal other holes in security.

Until we have a way to get over these hurdles, real as opposed to faux metrics are going to be, at best, a long time coming.

There is of course a fourth hurdle, which is effectively the “elephant in the room”:

Paid-for investigations are often not really independent.

The reasons for this are many, and are often difficult to avoid.

The obvious one is that it is in the interest of those who are paying to provide as much information and time as possible to the investigator and, to minimise cost, suggest the direction of investigation.

The converse is not true, in that those either being investigated, or who feel as though they are being investigated, have no incentive to provide more than what the investigator asks for, and to minimise contact, as it is time away from other duties for which they are unlikely to see benefit. And if under investigation, they are likely to let a few hares of their own out to defocus the investigator’s attention.

There are solutions to these four problems, but they are costly to implement, and often only happen, ineffectively, when the loss involved due to inaction is large enough to attract not just the media’s but legislators’ spotlights. By which time it is “action this day” chaos, not thought through in a reasoned and non-adversarial manner.

In essence Newton’s scientific method (observe, reason, theorise, test, refine) needs to be applied.

The first step “observe” requires raw data unfiltered by preconception. The second step “reason” requires independence of thought and objectivity.

Until we have accomplished these two basic steps, we have no way of establishing what we need to measure or how. And without these proto-metrics the third step is meaningless, as is the fourth, where the strength of the metrics is tested on the forge of reality and their edge honed into a useful tool…

The first steps along the way were highlighted by the likes of Lord Kelvin (Sir William Thomson) with “When you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind”, which he expounded in the “Treatise on Natural Philosophy”, which provided the groundings for modern physics.

Which tends to suggest that if “security” wishes to be taken seriously, it had “better get with the (science) program”…

Orlando Stevenson October 22, 2008 9:33 PM

“Risk = Threat * (Vulnerabilities / Countermeasures) * Value … NO, no that’s not pragmatic at all! That’s pure nonsense. Just randomly multiplying things together is a big part of the problem (and much more so than a lack of data).”

My, the fun and games we can have with addressing risk in the real world. Even the best books will explain that there are almost as many ways to measure risk as there are folks measuring it. Yes, basic formulas are pragmatic from the perspective of explaining risk in a simple, constructive manner; at least they get “Risk” on the left side and the variables on the right (some folks mix that up as well). A key underlying point is that grading on some basis matters with whatever formulas are used, especially for comparison (however calculated, and with whatever values), to help assess risk in a reasonably consistent, repeatable manner. The methodology needs to help drive values from the grading (not “random”), and can certainly factor in both qualitative and quantitative factors.

I bet many of us have had the question “How much risk: low, medium, or high?” When challenged this way in an ad hoc manner, it helps to compare with examples of potential outcomes and also to add a temporal view (risk going up: get after it; risk bounded by planned improvements; etc.).

As for more mathematically accurate formulas, perhaps Risk = Function(var1, var2, var3 … var-n) resonates better. As an example, for more rigor in specific situational settings, the even “simpler” Risk = Function(Susceptibilities, Consequences) works well, with susceptibilities best understood by cyber security experts and consequences addressed by system experts, both table-topping and collaborating, following a process until risk is in the proper range. As for some real rigor, the folks that I admire when it comes to focused risk analysis are those that do Probabilistic Risk Assessments for specific systems, with well-understood and repeatable rigor, as a number of engineering consultants do.


What many of us use at a corporate level is even simpler:

Risk = Probability * Impact

P and I are each on their own scale (e.g. 1-10, with a description); Risk is the result of the multiplication.

Bottom line: simple formulas support what should be a defined methodology, addressing a range of risks, to help stakeholders assess, compare and prioritize investments.
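
A minimal sketch of that corporate-level version (the 1-10 scales come from the comment above; the bucket thresholds are an assumption added for illustration):

```python
# Minimal sketch of Risk = Probability * Impact, each scored 1-10.
# The High/Medium/Low thresholds are assumptions, for illustration only.

def risk(probability, impact):
    assert 1 <= probability <= 10 and 1 <= impact <= 10
    return probability * impact

def bucket(score):
    if score >= 50:
        return "High"
    if score >= 20:
        return "Medium"
    return "Low"

print(bucket(risk(8, 9)))   # High (score 72)
print(bucket(risk(3, 4)))   # Low (score 12)
```

The value of even this crude matrix is comparative: two stakeholders who disagree about absolute risk can still agree that one item outranks another on the same scale.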

Alex October 29, 2008 6:34 AM

“Bottom line: simple formulas support what should be a defined methodology, addressing a range of risks, to help stakeholders assess, compare and prioritize investments.”

This continued assertion by our industry, that wisdom is a by-product of a repeatable, structured process for gathering prior information, is just as absurd as useless “simple” models.

Until you can understand how informative a prior is, and until you have a rational model for how to:

1.) account for the noise and uncertainty in your data sets
2.) mathematically model how the various sub-component factors of risk interact

you’re screwed, regardless of how many Wikipedia articles you link to. What you’ve described in these comments isn’t capable of creating a useful State of Knowledge, much less a State of Wisdom (another problem with current IRM practices).

Patrick Florer October 29, 2008 10:31 AM

I think that Alex has hit the nail on the head:

There are lots of formulae, and there may very well be lots of data, regardless of whether we have access to them or not.

But, until we understand something about how informative the priors are, no formulae or data set can yield meaningful results.

To put it another way, until we understand how, if at all, the past predicts the future with regard to any given threat/risk/whatever, we are flying by the seat of our pants, however fancy our “airplane” may be.

And, if we conclude that the past does not or cannot predict the future in any meaningful way – which, in my opinion, is the current reality most of the time – then we have no choice but to recognize this uncertainty and accept the fact that we are, for the most part, “screwed”.

Maybe senior execs understand this intuitively?

Clive Robinson October 29, 2008 2:43 PM

@ Alex, Patrick Florer,

“Until you can understand how informative a prior is”

It sounds very much like you are putting the cart before the horse.

You need to ask “what assets do I have and what class of risks effects them”,

Not “there is a probability X of the yyy worm geting into my network I think it can do zzz damage what assets are at risk of zzz”.

The assets need to be individually recognised, the threats broadly categorised.

That is, wheels come in all shapes, sizes, materials and colours, but they usually all have hubs and roll. Your truck needs one that fits not only the hub but also under the mudguard, and should ideally have the same characteristics as the one on the other side. If you don't know how to change it, or cannot, then you had better plan how to shift its load on to its destination…

Assets are tangible; threats fall into three general types,

1, Known knowns (specific).
2, Known unknowns (class).
3, Unknown unknowns (black swans).

Those in class 1 can be recognised and dealt with using known methods.

Those in class 2 can usually be detected by their characteristics as they occur.

Those in class 3 generally can only be recognised after the risk event.

You MUST accept that you can deal with class 1 threats (punctures), some class 2 threats (blowouts, nut breakage), and assume class 3 threats will succeed (hub plate fractures).

It is this last point you have to deal with in some way, such as by insurance, PR, or preparation (i.e. having a second truck to take the load). You have no choice in this: some threats will happen, and you need to have reasonable contingency plans in place.

There are no mathematics or silver bullets that can deal with unknown unknowns; get over it and prepare for the consequences by various means. Likewise, you have to accept that some known unknowns will also happen. You deal with both by mitigation.

Generally, assets fall into a two-by-two matrix,

X – passive | active
Y – generic | bespoke

That is, if an asset is passive it can be attacked, stolen, etc., but it cannot cascade onto other assets; an active asset can be used as a bridgehead, or, by failing, cause other systems to fail.

Usually, active assets are more readily identified than passive ones in IT.

Generic assets are usually more susceptible to being attacked than bespoke assets; however, bespoke assets usually have considerably higher value.

It is this area which you should be concerned with evaluating by formula; it does not usually involve probability.
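The two-by-two matrix lends itself to a deterministic triage rule. A minimal sketch, with made-up asset names and priority wording; note there is no probability anywhere in it:

```python
# Hypothetical triage over the passive/active x generic/bespoke matrix.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    active: bool   # can cascade failures onto other assets
    bespoke: bool  # custom-built, typically higher value than generic

def review_priority(asset: Asset) -> str:
    """Deterministic triage from the two axes -- no probabilities involved."""
    if asset.active and asset.bespoke:
        return "highest: can cascade and is high value"
    if asset.active:
        return "high: can cascade onto other assets"
    if asset.bespoke:
        return "medium: high value, but failure is contained"
    return "routine: generic and passive"

print(review_priority(Asset("payment gateway", active=True, bespoke=True)))
```

The value of a rule like this is that two people applying it to the same asset inventory get the same answer, which is more than can be said for gut-feel probability estimates.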

Attackers generally come in three groups,

1, Opportunists
2, Amateurs
3, Professionals

They are almost directly analogous to the threat groups.

For most IT threats you can use the same methodology as the physical perimeter defense method of

A, deter
B, detect
C, delay
D, respond
E, inspect / evaluate / improve

That is, you put up a visible outer defense layer; this deters/defeats the opportunists using known threat vectors.

Inside the deterrent layer, you then monitor your choke points, or other areas of known and reasonably predictable behaviour, and alarm on unusual behaviour.

Within this alarm layer you have further mechanisms by which you slow down unexpected activity, for two reasons: first, you need to be able to identify it; second, you should try to identify the method and target so you can tailor your response.

The response should at the best of times be measured, not nuclear. There are three reasons for this: first, you want to do the minimum damage to your own systems; second, you want to reduce the impact on legitimate users; third, you really should not reveal your full strength to a probing patrol.

Last, but usually overlooked: this is an ongoing process.

You want to regularly inspect your assets and what legitimate users are doing. You need to keep abreast of what is happening out there (although this can be difficult). And you need to update your systems as unknown unknowns become known unknowns and they likewise become known knowns.
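The deter/detect/delay/respond loop can be sketched as a trivial pipeline. Everything here is hypothetical (the event fields, the response labels); the inspect/evaluate/improve step is the ongoing human process around the code, not in it:

```python
# Hypothetical sketch of a layered deter/detect/delay/respond flow.

def deter(event):
    # Visible outer layer: defeat opportunists using known threat vectors.
    return None if event["vector"] == "known" else event

def detect(event):
    # Alarm on unusual behaviour at monitored choke points.
    event["alarmed"] = event.get("unusual", False)
    return event

def delay(event):
    # Slow unexpected activity long enough to identify method and target.
    if event["alarmed"]:
        event["identified"] = True
    return event

def respond(event):
    # Measured response, not nuclear: quarantine only what was identified.
    return "quarantine" if event.get("identified") else "log"

def handle(event):
    event = deter(event)
    if event is None:
        return "blocked at perimeter"
    return respond(delay(detect(event)))

print(handle({"vector": "known"}))                   # blocked at perimeter
print(handle({"vector": "novel", "unusual": True}))  # quarantine
```

The layering is the point: each stage only has to do one job, and the later stages never see the traffic the earlier ones already stopped.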

None of this is actually difficult to comprehend and this is what any sensible exec is going to understand (and importantly want to hear).

This leaves the area of mitigation.

Some business activities are naturally high risk, and you and the execs must accept that things can and will go wrong and that there is nothing you can do beforehand to stop them. Worrying about probabilities is a waste of time and makes very little difference to the utilisation of resources. The only thing that matters is "What do I do now the truck cannot move on in time to deliver the load?" It does not matter a jot why it stopped (wheel, crash, engine failure, etc.); the only thing that matters is what you do next to either get the load there or reduce your losses.

You must make it clear to any exec planning a new activity which things you can reasonably be expected to deal with, and how, and which you cannot. It is then up to them to decide if the potential rewards are worth the risks. It is up to you to work with them, explain the risks (like cascade events), and show whether there are ways of limiting the damage should such an event occur.

So, in addition to your other activities, knowing the up-to-date mitigation methods and their strengths, weaknesses and applicability to the assets under your command is a must.

Doing anything else is outside the job remit (unless you are an exec). And part of that is knowing when to either “wallpaper your a55” or start looking in “Sits Vac”.

Leave the probability stuff to the actuaries, in exactly the same way a design engineer leaves the chemical science to the scientist: he just utilises the product’s published specifications.

LJT October 30, 2008 12:10 AM

How does one calculate the impact of greed? Anger? Frustration? Envy? Revenge? Aren’t these the biggest risks to assets, information and systems in any corporate setting?

Risk is inextricably tied to human emotion and trust, which is at the core of the human experience. Look at Alan Greenspan. He trusted financial institutions to govern themselves against fraud and greed in a free market system. He was wrong. Look at how a FICO score has been used to underwrite financial risk for creating insurance and financial products for consumers. A failed model that didn’t see the collapse of the sub-prime market.

So managing risk with information security, in my view, is really just as mature as the banking and financial models we’ve relied upon for 100 years.

Until we stop looking at data points and trying to use raw mathematics as determiners of truth, we will recreate the same misguided interpretation of risk as the finance industry. Only by embracing models of predictive human behaviors in psychological and emotional terms will we start to fully understand how to manage risk.

The best risk managers I know start with human process engineering, and work their way down to the systems level. How we feel determines how we act. How we act determines how we govern. How we govern is a direct reflection of how we feel. The best governance controls library in the world won’t stop greed or hate, and until we start looking at culture and the human condition in the work setting, risk will always be an elusive shadow with a mind of its own.

Alex November 6, 2008 8:54 PM


“It sounds very much like you are putting the cart before the horse.”

Nope! In fact, after reviewing your post explaining your particular risk model, I’m actually more convinced that you can build super-duper repeatable processes and still end up with crap if you haven’t thought risk through.
