Four Irrefutable Security Laws

This list is from Malcolm Harkins, Intel’s chief information security officer, and it’s a good one (from a talk at Forrester’s Security Forum):

  1. Users want to click on things.
  2. Code wants to be wrong.
  3. Services want to be on.
  4. Security features can be used to harm.

His dig at open source software is just plain dumb, though:

Harkins cited mobile apps: “What kind of security do we think is in something that sells for 99 cents? Not much.”

Posted on September 20, 2010 at 6:20 AM

Comments

js September 20, 2010 6:51 AM

Err, according to the article that’s not what he said. He was specifically aiming at 99 cent mobile apps – in other words commercial, closed source software written cheaply and quickly to get a high volume of products onto the app stores to take advantage of the short-attention-span market.

I think it’s probably a fair comment that such a development cycle does not foster high security.

clvrmnky September 20, 2010 7:21 AM

The dig is dumb for two reasons:

  • The price of a mobile app has nothing to say about its security. Because of these four rules, we are told we can be sure that some enterprise app with a mobile connection could be just as insecure. Intel has shipped all sorts of expensive products; has that ever been any guarantee of security?
  • His rhetorical question is a query into the kind and form of the security in these mobile apps. His answer is in the form of a level or amount of security.

It’s sloppy writing, and sloppy thinking.

Clive Robinson September 20, 2010 7:55 AM

And let’s be honest here,

Intel are not exactly unknown for producing multi-hundred-dollar CPUs with bugs in them, some of which may well have affected the assurance of the systems they were put in.

As the old granny is fond of remarking, Pots and Kettles pots and kettles.

Jason September 20, 2010 8:22 AM

Say what you like about the “99 cent” gag, but I think the “users want to click on things” guideline is pretty good advice.

I got into an argument recently with someone about whether Windows’ “click to allow” UAC security model was significantly different than Mac and Linux’s “type password to allow” model. I argued that while there was little difference in principle, the act of typing in a password causes users to think about security in a way that button-pushing doesn’t.

Users are used to clicking buttons as a means to an end without thinking about what the buttons do. But when you type in your password, you know something serious is happening.

Dennis September 20, 2010 8:26 AM

About 99 cent apps: This is obviously a remark about Apple app store apps and possibly Android apps. These platforms are supposed to be (and marketed as being) sandboxed so that developers don’t have to worry about security.

If these apps cause any problems, it’s not the app developer’s fault, it’s the platform developer’s fault.

Nick Coghlan September 20, 2010 8:30 AM

In the “Users want to click on things” vein: Blizzard use this to cut down on accidental deletions of game items in WoW. For items above a certain level of rarity, the game makes you type “DELETE” into a box before it will actually destroy the item.

They’ve also put a lot of effort into providing “buyback” and “sellback” options on vendors for when you buy or sell items by mistake.

Not precisely security related (more “cut down on in-game support requests from users wanting their stuff back”) but a similar principle.
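As a rough illustration of that type-to-confirm pattern, here is a minimal Python sketch; the item name, rarity scale, and prompts are invented for illustration, and this is not Blizzard's actual implementation:

```python
def confirm_destroy(item_name, rarity, prompt=input):
    """Ask before destroying an item; rare items must be confirmed by typing DELETE.

    Hypothetical names and rarity scale, for illustration only.
    """
    if rarity < 3:
        # Common items: an ordinary yes/no click-through is acceptable.
        return prompt(f"Destroy {item_name}? (y/n) ").strip().lower() == "y"
    # Rare items: force a deliberate act that can't happen by reflex-clicking.
    typed = prompt(f'Type "DELETE" to destroy {item_name}: ').strip()
    return typed == "DELETE"

if __name__ == "__main__":
    if confirm_destroy("Epic Sword", rarity=5):
        print("Item destroyed.")
    else:
        print("Kept.")
```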

Nick Coghlan September 20, 2010 8:40 AM

There’s actually a 5th law mentioned in the article: “Information wants to be free and people want to share it” (it’s in the paragraph just before the list of the last 3 rules)

I agree with the others above that his “code wants to be wrong” dig wasn’t directed at open source but at the cheap closed source stuff.

You really want code that is one or more of:
– open source (so you can check it yourself, or get someone you trust to check it for you)
– backed by a company with a reputation to maintain (so you at least have some hope they have reasonable security practices and procedures when it comes to software development)
– covered by a warranty (so you can at least sue somebody to make yourself feel better when it inevitably fouls up)

js September 20, 2010 8:51 AM

@Dennis

You appear to have forgotten the zeroth Irrefutable Security Law: “There are no silver bullets.”

A sandbox will protect some set of things. A user will care about some set of things. The two sets are not necessarily the same. Even with a perfect sandbox, a broken app can allow the compromise of its own partition, which compromises any data you were trusting that app with. This is an app problem, not a platform problem.

careful September 20, 2010 9:06 AM

@Jason: “Users are used to clicking buttons as a means to an end without thinking about what the buttons do. But when you type in your password, you know something serious is happening.”

Yes, but you will still have that set of users that say “why can’t they make this program so I can just click a button instead of typing my stupid password in?”

Sasha van den Heetkamp September 20, 2010 9:35 AM

They’re all bogus if you ask me. Someone who is proficient in usability theory could have written that list too. You can have security. Or you can have freedom. Don’t ever count on having both at once.

Frank Ch. Eigler September 20, 2010 10:40 AM

“His dig at open source software is just plain dumb, though: [… it’s cheap therefore it’s insecure …]”

On the surface, one can sort of see the point. After all, if the revenue is minimal, then how can one afford to do a professional audit? I mean, they’d better, lest they be sued for contributory negligence. Don’t some famed security experts insist software vendors should be legally liable?

RH September 20, 2010 11:21 AM

Could all of these be traced back to “entropy increases”, which lets these rules hang out with things like “budgets lead to compromises, compromises lead to kludges, kludges lead to ‘goto’”?

billswift September 20, 2010 12:36 PM

Probably not all of them, but number 2 is obviously an “entropy increase”; I noticed that as soon as I read the list. Code “wants” to be wrong simply because there are more possible ways to be wrong than to be correct (like the million monkeys and Shakespeare), so people have to specifically target the correct code.

SNaK September 20, 2010 3:55 PM

What kind of security can we expect from cheap CPUs that have invalid 0xF00F instructions that crash them and defective FPUs?

Clearly, they have no budget for security….

Richard Steven Hack September 20, 2010 10:01 PM

Once again, the immutable First Law of Security was enumerated by Rutger Hauer in the terrorist movie “Nighthawks”. Whenever he blew something up, he would call the news media and say, “Remember – there is no security.”

I phrase that as: You can haz better security, you can haz worse security – but you can’t HAZ security.

Randall September 20, 2010 10:06 PM

Like other commenters said, his dig is about smartphone apps, and I think you can easily distinguish it from open source software.

App developers only have an incentive to sell 99-cent apps, not to worry about security or the user’s long-term well-being after they click “Buy”. Open source development, especially enterprise open source, is often directed by the users and tends to be pretty responsive.

And the right model for smartphone apps is Web apps with a few extra powers. (Apps are sandboxed, but they could be much more sandboxed. We rely on the screening for too much.) I can imagine, once hardware and tech like Google Native Client and browser engines and the badguys all get better, the current app distribution model will feel like Microsoft’s ActiveX model of yore.

Nick P September 21, 2010 1:47 AM

@ Sasha van den Heetkamp

“You can have security. Or you can have freedom. Don’t ever count on having both at once.”

It’s that kind of unsubstantiated, defeatist thinking that holds innovative ideas back. Have you checked out the capability-based designs like HP’s Polaris prototype or the CapDesk desktop? How about high assurance IP encryptors that are pretty much turned on, have a bit of data entered, and then work almost flawlessly? Is that so much harder than a regular (read: buggy/insecure) COTS VPN? What about encoding things like PDF in formats that aren’t interpreted scripting languages, like the PDF/A variant? Do we really need to use a complex interpreter for “every” PDF document, or just a few? How about Windows Mandatory Integrity Control on IE8, which requires consent for protected actions to occur but otherwise lets web surfing work as normal?

It seems to me that many well-designed security architectures and systems provide a significant to awesome increase in security with a much smaller decrease in usability, sometimes none at all. An OpenBSD system running… anything a Linux box runs… comes to mind. The admin has some usability problems, but the end users get the benefits for free. Sounds like a nice tradeoff. Likewise, the McAfee Sidewinder firewall has few if any serious vulnerabilities due to a well-engineered base OS. It’s just as usable as the average firewall and better than some. An NSA inline media encrypter is similar in operation to TrueCrypt whole-disk encryption. I could go on and on.

Security and usability aren’t directly inversely proportional. The situation is more complex. Each product, system, methodology, etc. must be judged by itself and against alternatives. Professionals must determine the usability costs, their justifiability, and their enforceability.

Nick P September 21, 2010 1:58 AM

@ Richard Steven Hack

“I phrase that as: You can haz better security, you can haz worse security – but you can’t HAZ security.”

Nice. That sums it up nicely. Always an odd exception to the rule: upon death, all threats are irrelevant. A dead person is 100% secure against all threats. A comatose person probably doesn’t know the difference and is practically secure. It’s the living that must worry (and have the capacity to). It is they that “can’t HAZ security.” Only cheeseburgers…

bob (the original bob) September 21, 2010 6:42 AM

‘…”What kind of security do we think is in something that sells for 99 cents? Not much.”…’

So, basically the same security as software that sells for [whatever Windows costs these days].

HJohn September 21, 2010 12:22 PM

@: “…”What kind of security do we think is in something that sells for 99 cents? Not much.”…”


Bruce is right, the dig was dumb.

As with anything, one must consider the context. A good example is PasswordSafe. One could argue that since Bruce provided it free, we shouldn’t expect security. But that isn’t true… Bruce has a reputation to uphold, which alone made it in his best interest to do it right.

The same could be said of a lot of freeware or open source products. Many provide free products to home users hoping to sell it to business users. Many provide free products and then sell added functionality (automatic updates as opposed to manual updates, for example). Some solicit donations. Some are building a customer base hoping to sell other products. In each case, the business has a stake in their quality and credibility that extends beyond the short-sighted sale of an individual bit of software.

Granted, there are some crappy free products out there. But there are also some good ones supported by people who care about the quality.

Nick P September 21, 2010 5:18 PM

@ HJohn

Good points. OpenBSD is always my favorite example. It’s free, and its security and quality are probably better than any comparable commercial or open-source offering. Qmail also comes to mind. In semi-high-assurance circles, the Perseus Security Architecture project built an open source kernel and stack to demonstrate their design. You should look at the Assurance section on their web site. They built quite a bit of assurance into the development processes, even though the end result was free. The OKL4 3.0 microkernel also comes to mind. Unfortunately, recent versions are proprietary.

moo September 21, 2010 7:10 PM

@Nick P: so when you’re dead, you can haz security, but it doesn’t do you any good.

It’s the standard tradeoff of usability vs. security. You can’t really haz both, because letting users do things (and letting their apps do things) always comes with some risks.

Nick P September 22, 2010 1:45 AM

@ moo

You actually can have both. See my post to Sasha for specifics. Usability vs Security is often talked about like it’s an all or nothing game. It isn’t. More of one often means less of the other, but the increase or decrease in usability for a given security measure may be small or large. Many security measures have little to no effect on usability, like link encrypters between a SCADA reporting tool and management. Usage hasn’t changed at all for the users: they still see the same numbers and probably at the same rate. This is just one example. The post to Sasha has numerous others.

I think blanket statements like “You can’t have both security and usability” are misinformed. The effects on usability must be decided on a case-by-case basis, and then their acceptability must be decided. The effects also vary by stakeholder: the administrator might experience a reduction in usability, while the users might experience no change, or even increased usability due to fewer virus infections and crashes. Usability and security are correlated, but not totally inversely proportional as many believe.

Woo September 22, 2010 6:11 AM

I love the last excerpt. By that principle, an enterprise encryption appliance that uses ROT13 but costs $9,999 must be ten thousand times more secure than an iPhone app that uses a good implementation of AES but costs 99 cents.
Is there a name for this misconception? “Security by portemonnaie”?
I wonder how someone can call himself a chief security officer without having heard that most good security mechanisms are free and available in open source libraries nowadays. Or is this corporate marketing speak? At least Intel sells security applications now.

Doug Coulter September 22, 2010 2:09 PM

I am liking Nick P’s comments here. I used to design intrusion alarms for high security installations…

OK, given a basic sensor, you can move a threshold around and change your ratio of missed detects and false alarms all over — but in that case it IS simply a trade.

So of course, what we did was investigate ways to improve the “dynamic range” (I’m an engineer and know I’m using this term really loosely).
Increasing the separability of the probability density curves is maybe a more accurate phrase, but one that probably blows right past most people.

In other words, reduce both sorts of errors, so it’s not just the old cost-balancing equation where it’s a pure trade of one for the other. For reference, the classic balance point is where:

Cost of missed detect * probability of missed detect == Cost of false alarm * probability of false alarm.

That’s the old-school way, and once you are stuck with a certain “dyn range” it’s the best you can do, which is the myth here. What if you could reduce both, or what about the case where the cost of a missed detect is the loss of a nuke and so can’t be calculated?

One explores other options at that point, whether it be better front end sensors, or better back end processing than a simple threshold, and it’s very do-able once you understand the problem.
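To make the cost-balance equation above concrete, here is a small Python sketch under assumed Gaussian signal and noise distributions with invented costs; it shows that moving the threshold only trades false alarms for missed detects, while widening the separation between the two distributions (the “dynamic range” improvement Doug describes) reduces both at once:

```python
import math

def norm_cdf(x, mu, sigma=1.0):
    """P(X <= x) for a normal(mu, sigma) random variable."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def error_rates(threshold, mu_noise, mu_signal):
    """False-alarm and missed-detect probabilities for a simple threshold detector."""
    p_false_alarm = 1.0 - norm_cdf(threshold, mu_noise)   # noise reading above threshold
    p_missed = norm_cdf(threshold, mu_signal)             # intruder reading below threshold
    return p_false_alarm, p_missed

# Invented costs, for illustration only: a missed detect is far more expensive.
COST_MISS, COST_FALSE = 1000.0, 1.0

def expected_cost(threshold, mu_noise, mu_signal):
    pfa, pmiss = error_rates(threshold, mu_noise, mu_signal)
    return COST_MISS * pmiss + COST_FALSE * pfa

# Moving the threshold only trades one error for the other (fixed separation of 2 sigma).
for t in (0.5, 1.0, 1.5):
    pfa, pmiss = error_rates(t, mu_noise=0.0, mu_signal=2.0)
    print(f"threshold {t}: false alarm {pfa:.3f}, missed detect {pmiss:.3f}, "
          f"expected cost {expected_cost(t, 0.0, 2.0):.1f}")

# A better sensor or better back-end processing widens the separation and shrinks BOTH.
print(error_rates(1.0, mu_noise=0.0, mu_signal=2.0))   # old separability
print(error_rates(2.0, mu_noise=0.0, mu_signal=4.0))   # improved separability
```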

The principle applies in a lot of other ways and not enough people take it to heart. For example, when writing an app for a PC, I’m thinking, how can I make this more useful to a user, without making it more complex for them? How can I make this more powerful without making it either too limited, or too dangerous? Concentrating on that is how great things are achieved, and to be honest, not much has been done along those lines, or not as much as I think should be. And of course it may not pan out as expected.

An example (some will disagree) is languages that have no pointers (since everything is a reference, and therefore a pointer!) but do “garbage collection”, so people who should probably stay away from programming can give it a go.

I think that one is an epic fail, as the machine then goes off “on demented errands of its own” and you no longer have control over timing, and you hope that the fancy garbage collection doesn’t leak (in one direction or the other). Not to mention the resulting cycle and memory bloat caused by allowing non-programmers to program.
(Yes, I’m talking about .NET and Java.) And since they can’t really program, they pull in huge libraries to use just one function, but wind up getting all the security risks of that whole code base in the bargain.

How about the latest attempt to solve DLL hell by just copying everything in duplicate? Good thing storage is cheap now.

Or “cleaning up” Windows INI files by encouraging people to stuff the registry so full that even MS changed the recommendation on that? (If there is such a thing, a cool DRM scheme checks for a registry entry that isn’t there if the software is unlocked.)

So people try, and mostly fail at improving this dyn range thing — hence the myth is perpetuated because it has a germ of truth in it.

I think my car is an example of success — it’s easy to drive, has lots of HP, fine control, good gas mileage, and is just better at being a car than the one I started out with 4 decades ago — but that is only incremental progress, lots of little things that add up.

Believe it or not, the two cars I’m comparing are a new Camaro SS (422 hp in stock trim) and a 1966 Plymouth Valiant 6 cyl.

Guess which one gets better gas mileage?

No guessing on which one has better brakes, handling, quiet in the cabin, is more fun to drive, and so forth. Yes, the 422 hp Camaro beats the ~90 hp Valiant by ~2 mpg… and will go about 100 mph faster while being more comfortable and carrying more cargo — while being more crashworthy and more likely to be able to avoid a crash in the first place — via better handling and better navigation (OnStar). So it’s possible. But it’s not easy. And neither are all attempts immune to unintended consequences, as in many things.

Nick P September 22, 2010 4:30 PM

“I am liking Nick P’s comments here.”

Appreciate the compliment. 😉 I liked some of your statements, too. Your car analogy was an awesome illustration of how quality and safety/security can be improved without a huge price increase or drop in usability. Hell, I’d call a Camaro much more usable: safety and fuel economy have never been so much fun. 😉

One of the other comments I give a +1 to:

“…when writing an app for a PC, I’m thinking, how can I make this more useful to a user, without making it more complex for them? How can I make this more powerful without making it either too limited, or too dangerous? Concentrating on that is how great things are achieved, and to be honest, not much has been done along those lines…”

That’s exactly it! We could formalize the point like this. The developer should start with the requirements for functionality, usability, integration, and security. Then, consider which architectures, tools, libraries, algorithms, etc. can implement those requirements. The next step is picking and properly utilizing the best components for the job that meet all requirements to a reasonable degree.

Often, a system without tons of bugs and obvious security holes isn’t less usable. Many times it’s barely less usable and many times there’s a usability loss that may or may not matter. It’s manageable for developers, though. They can influence it for the better. You illustrated that nicely in your post.

EricDP October 19, 2010 12:38 PM

Um, there are five laws in the linked article. The one you missed is “information wants to be free and … people want to share it”.
