Nonsecurity Considerations in Security Decisions

Security decisions are generally made for nonsecurity reasons. For security professionals and technologists, this can be a hard lesson. We like to think that security is vitally important. But anyone who has tried to convince the sales VP to give up her department’s Blackberries or the CFO to stop sharing his password with his secretary knows security is often viewed as a minor consideration in a larger decision. This issue’s articles on managing organizational security make this point clear.

Below is a diagram of a security decision. At its core are assets, which a security system protects. Security can fail in two ways: either attackers can successfully bypass it, or it can mistakenly block legitimate users. There are, of course, more users than attackers, so the second kind of failure is often more important. There’s also a feedback mechanism with respect to security countermeasures: both users and attackers learn about the security and its failings. Sometimes they learn how to bypass security, and sometimes they learn not to bother with the asset at all.
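The base-rate arithmetic behind that last point is worth making explicit. A back-of-the-envelope sketch, with purely illustrative numbers:

```python
# Illustrative numbers only: far more legitimate users than attackers,
# and a countermeasure that errs in both directions.
users, attackers = 10000, 10
false_positive_rate = 0.01  # legitimate users mistakenly blocked
false_negative_rate = 0.10  # attacks mistakenly let through

blocked_users = users * false_positive_rate           # 100 people inconvenienced
successful_attacks = attackers * false_negative_rate  # 1 attack still succeeds

print(blocked_users, successful_attacks)  # 100.0 1.0
```

Even with an error rate against attackers ten times the error rate against users, the sheer number of legitimate users means the blocking failures dominate the day-to-day experience of the system.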

Threats are complicated: attackers have certain goals, and they implement specific attacks to achieve them. Attackers can be legitimate users of assets, as well (imagine a terrorist who needs to travel by air, but eventually wants to blow up a plane). And a perfectly reasonable outcome of defense is attack diversion: the attacker goes after someone else’s asset instead.

Asset owners control the security system, but not directly. They implement security through some sort of policy—either formal or informal—that some combination of trusted people and trusted systems carries out. Owners are affected by risks … but really, only by perceived risks. They’re also affected by a host of other considerations, including those legitimate users mentioned previously, and the trusted people needed to implement the security policy.
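One way to read the diagram is as a cost sum: the owner weighs the perceived risk reduction against everything else the countermeasure costs. A minimal sketch of that sum, with entirely made-up numbers:

```python
# Hypothetical figures for a single countermeasure; every number here is an
# assumption for illustration. Note the owner works from perceived risk.
perceived_annual_loss = 50000  # what the owner believes the asset stands to lose
risk_reduction = 0.30          # fraction of that loss the measure prevents
purchase_and_upkeep = 8000     # direct cost of the countermeasure
lost_productivity = 20000      # legitimate users blocked or slowed down
policy_overhead = 5000         # trusted people and systems enforcing the policy

security_benefit = perceived_annual_loss * risk_reduction  # 15000.0
nonsecurity_cost = purchase_and_upkeep + lost_productivity + policy_overhead  # 33000

print(security_benefit - nonsecurity_cost)  # -18000.0: a net loss even when it "works"
```

The countermeasure can be perfectly effective in the repel-attacks sense and still be a bad decision; effectiveness is one term in the sum, not the sum itself.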

Looking over the diagram, it’s obvious that the effectiveness of security is only a minor consideration in an asset owner’s security decision. And that’s how it should be.

Whether a security countermeasure repels or allows attacks (green and red arrows, respectively) is just a small consideration when making a security trade-off.

This essay originally appeared in IEEE Security and Privacy.

Posted on June 7, 2007 at 11:25 AM

Comments

Clive Robinson June 7, 2007 11:47 AM

“Whether a security countermeasure repels or allows attacks (green and red arrows, respectively) is just a small consideration when making a security trade-off.”

Only if the risk analysis has been performed correctly, and invariably it has not, which is where a lot of the problems originate.

Pat Cahalan June 7, 2007 11:50 AM

Bruce ->

I like the diagram. This is a little shaky: “Owners are affected by risks … but really, only by perceived risks.”

Owners make their judgements based upon perceived risk, sure. But they are certainly affected by unperceived risks if a system is actually compromised 🙂

Brandioch Conner June 7, 2007 12:09 PM

“Looking over the diagram, it’s obvious that the effectiveness of security is only a minor consideration in an asset owner’s security decision. And that’s how it should be.”

I wouldn’t say that.

The effectiveness should be the standard by which DIFFERENT approaches are measured.

Not every security model is equal. What SPECIFICALLY are you getting for each potential vulnerability you are opening?

Is there a different way to achieve that? Does that different way have a better security model?

Damien Vessa June 7, 2007 12:19 PM

On a completely different subject: has M$ gone mad? They are deliberately showing everyone how to exploit a flaw in IIS5! And since it’s a feature, they are not going to fix it! The workaround? Buy WS2003 w/ IIS6… until they decide to reveal the problems that version of IIS also has, that is…

just check Knowledge Base article 328832… hack your own server and see how insecure you’ve been all this time! Everybody else will see that too… 😀

F P June 7, 2007 12:47 PM

@Damien Vessa

Nope; they’ve been “mad” for a long time, at least as far as security is concerned. The behavior you’re describing is a way to drive sales in the newer product.

David Totzke June 7, 2007 12:54 PM

@Damien:

Shouldn’t you be over on Slashdot? The “work-around,” aside from an upgrade, would be to simply apply ACL protection on the files: controls that should be in place already, as relying on IIS security assumes it is the only means by which the files can be accessed. Certainly not security in depth.

This is “by design” and nothing new. I’ve known about this for years.
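The layered-check idea, in sketch form: access should require every independent layer to agree, so a permissive web layer alone exposes nothing. (The functions and ACL table below are hypothetical stand-ins, not IIS’s actual logic.)

```python
# Defense in depth as independent layers: a request must pass every check.
# The web-layer check and the ACL table are hypothetical stand-ins.

FILE_ACLS = {"payroll.xls": {"cfo", "hr_admin"}}  # per-file ACLs (made up)

def web_layer_allows(user: str, path: str) -> bool:
    # Stand-in for the web server's own check; assume it can be bypassed,
    # so it is modeled as always saying yes.
    return True

def filesystem_allows(user: str, path: str) -> bool:
    # The file-level ACL still holds when the layer above fails.
    return user in FILE_ACLS.get(path, set())

def can_read(user: str, path: str) -> bool:
    return web_layer_allows(user, path) and filesystem_allows(user, path)

print(can_read("anonymous", "payroll.xls"))  # False: the ACL layer denies
print(can_read("cfo", "payroll.xls"))        # True
```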

@Bruce:
Great blog. I enjoy every article and entry. – Dave

dragonfrog June 7, 2007 1:08 PM

It took me a couple of reads to get the title right – I first saw it as “Nonsense considerations in Security Decisions”. The presence of a diagram with blocks and arrows somewhat supported this reading, as that is often a favourite format for illustrating nonsense arguments.

The classic such diagram, however, would have lines from every block, to every other block, with arrow-heads on both ends.

pOrn June 7, 2007 2:34 PM

All the arrows are the same color on my computer. Should I get another computer? What computer are you using?

derf June 7, 2007 2:47 PM

You can make any system completely secure. Remove power and network connectivity, stick it in a vault, bury the vault in layers of concrete and drop it in the ocean at an unmarked, unknown location. I can guarantee that the information contained within will not be available for at least several days.

However, your users might object just a tad when they can’t access their email, stocks, actual business material, or anything else they might have placed on that fileserver.

One of the points of the FBI studies is that insiders (legitimate users in your diagram) ARE the attackers who can cause the most damage (malicious or not). They’re also the ones who complain loudest to your boss, or bypass security measures in the name of expediency, when they don’t get their way with the company’s data.

Roy June 7, 2007 3:35 PM

@Pat Cahalan

I think the point about perceived risks is that only the perceived risks go into the decision making. Unperceived risks do not inform the process until after something fails and the risk involved is now newly perceived.

Of course this presumes the process can learn.

AC June 7, 2007 3:45 PM

Rather irrelevant and off-topic, but as a RIM employee I feel compelled to mention: it’s “BlackBerrys,” not “Blackberries.” 🙂

Mike June 7, 2007 4:55 PM

Unfortunately, I’ve often seen a much shorter model for security decisions:

Did the auditor ask about it? (yes/no)

That pretty much covers it.

Richard Braakman June 8, 2007 2:24 AM

Bruce, I think your diagram may be a bit too bleak. It shows the owner’s policy not affecting anything else at all! 🙂

p June 8, 2007 7:34 AM

I reckon the diagram borders on meaningless unless explanation is added.

For instance why are the trusted people and trusted systems not among the assets? And are the legitimate users not trusted people?

Mr Livingstone June 8, 2007 7:43 AM

I like the diagram. Although I can’t see the words ‘London’ or ‘2012’ in it. Are they encrypted?

Jay74 June 8, 2007 8:04 AM

@Mike “Unfortunately, I’ve often seen a much shorter model for security decisions:

Did the auditor ask about it? (yes/no)

That pretty much covers it.”

I’ve been an auditor for 10 years, and I’d have to say your model is in play too often. My brethren often don’t give enough consideration to feasibility and trade-offs, and as such sometimes make decisions that hinder rather than improve. Worse still are the “checklist auditors,” who always look under the same rocks, which does little more than waste time and divert resources.

A great example of this is disaster recovery. I cringe when I see an audit step that says “was a successful disaster recovery test performed.” That is the wrong question: if you really want to improve the odds of recovery, you can’t just test to pass, you have to test to find points of failure.

Anyways, I’ll step off my soap box. But sadly, you are often right about us auditors. 🙂

Rob Lewis June 8, 2007 8:45 AM

Any solution that replaces a problem with another, even a lesser one, is really not desirable. The bolt-on approach of security add-ons is not seamless, and it leaves gaps for those attacker bypasses. Further, it impedes business functionality by causing interoperability problems.

Very granular, enforceable access and audit policies at the data level can optimize business data flow while providing a high level of data assurance.

guvn'r June 8, 2007 12:01 PM

@Jay, I think Mike’s comment was aimed not at auditors but rather at the PHBs that consider whether an auditor asks about it as the litmus test for whether it’s an issue that has to be addressed in securing the environment. Sadly there’s a lot of that, which is why SOX and such things are forced into existence.

Jay74 June 8, 2007 1:22 PM

@guvn’r “I think Mike’s comment was aimed not at auditors but rather at the PHBs that consider whether an auditor asks about it as the litmus test for whether it’s an issue that has to be addressed in securing the environment. Sadly there’s a lot of that, which is why SOX and such things are forced into existence.”

I agree, and that’s how I took his comment (re-reading what I wrote, I can see where I gave the impression I took it another way). I was just adding my two cents that a yes/no audit question from a checklist auditor can make this problem far worse.

Best,
Jay

Vincent June 11, 2007 1:44 PM

Bruce,

Around 10% of the male population suffers from some form of color blindness. Using only the colors red and green to distinguish between two arrows isn’t very accessible for them. I would suggest labeling the arrows “successful attack” / “unsuccessful attack,” or something similar, in addition to using color. This would also make the chart clearer; it took me a minute to understand what the arrow to nowhere meant even though I could see the colors.

Also, I don’t know if it’s worth pointing out in the diagram that a security layer rejects some legitimate user access to the assets.

Jay74 June 11, 2007 4:09 PM

Just in case this helps while you are waiting (Bruce is a very busy man): there are two arrows that go down and leftward from “attacks” at the right. The one that passes into the security systems box and touches assets is the lone red arrow, and the one that deflects off the security systems box upward is the lone green arrow.

Hope this helps.

Best,
Jay

Jay74 June 11, 2007 4:10 PM

Oops: my post just above was intended to answer Vincent’s concern in his comment above it.

Sorry for any confusion.

ieee member June 14, 2007 8:52 PM

Bruce, I think you meant it was published in “IEEE Security and Privacy”.
