The Unfalsifiability of Security Claims

Interesting research paper: Cormac Herley, “Unfalsifiability of security claims”:

There is an inherent asymmetry in computer security: things can be declared insecure by observation, but not the reverse. There is no observation that allows us to declare an arbitrary system or technique secure. We show that this implies that claims of necessary conditions for security (and sufficient conditions for insecurity) are unfalsifiable. This in turn implies an asymmetry in self-correction: while the claim that countermeasures are sufficient is always subject to correction, the claim that they are necessary is not. Thus, the response to new information can only be to ratchet upward: newly observed or speculated attack capabilities can argue a countermeasure in, but no possible observation argues one out. Further, when justifications are unfalsifiable, deciding the relative importance of defensive measures reduces to a subjective comparison of assumptions. Relying on such claims is the source of two problems: once we go wrong we stay wrong and errors accumulate, and we have no systematic way to rank or prioritize measures.

This is both true and not true.

Mostly, it’s true. It’s true in cryptography, where we can never say that an algorithm is secure. We can either show how it’s insecure, or say something like: all of these smart people have spent lots of hours trying to break it, and they can’t—but we don’t know what a smarter person who spends even more hours analyzing it will come up with. It’s true in things like airport security, where we can easily point out insecurities but are unable to similarly demonstrate that some measures are unnecessary. And this does lead to a ratcheting up on security, in the absence of constraints like budget or processing speed. It’s easier to demand that everyone take off their shoes for special screening, or that we add another four rounds to the cipher, than to argue the reverse.
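
To make the asymmetry concrete, here is one way to write it down (my own paraphrase in symbols, not the paper’s notation). Let S stand for “the system is secure” and C for “countermeasure C is deployed”:

```latex
\begin{align*}
\text{``$C$ is necessary'':}\quad & S \Rightarrow C \;\equiv\; \lnot C \Rightarrow \lnot S
  && \text{falsified only by observing } \lnot C \wedge S\\
\text{``$C$ is sufficient'':}\quad & C \Rightarrow S
  && \text{falsified by observing } C \wedge \lnot S
\end{align*}
```

Since S can never be established by observation, the falsifying instance for the necessity claim can never be produced, while the falsifying instance for the sufficiency claim is just an observed break of a system that deploys C.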

But it’s not entirely true. It’s difficult, but we can analyze the cost-effectiveness of different security measures. We can compare them with each other. We can make estimations and decisions and optimizations. It’s just not easy, and often it’s more of an art than a science. But all is not lost.

Still, a very good paper and one worth reading.

Posted on May 27, 2016 at 6:19 AM

Comments

Jeff Martin May 27, 2016 7:22 AM

“No possible observation argues one out” is not necessarily so. If the cost of a countermeasure exceeds the observable probability/impact, then the countermeasure will go away. For example, back in the day of self-spreading worms, a certain type of vulnerability was patched almost overnight. Now that malware authors are chasing money instead of noise, the incidence of worms is much, much lower. Nowadays “wormable” vulnerabilities go in the regular patch cycle.
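
A rough sketch of that cost comparison in Python (my own illustrative numbers, nothing from the thread):

```python
# Crude annualized-loss-expectancy comparison with made-up numbers.
incidents_per_year = 0.5        # expected worm-style incidents without emergency patching
impact_per_incident = 200_000   # cleanup cost per incident, in dollars
countermeasure_cost = 150_000   # yearly cost of keeping an out-of-band emergency patch process

expected_loss_avoided = incidents_per_year * impact_per_incident   # $100,000 per year
if countermeasure_cost > expected_loss_avoided:
    print("drop it: the countermeasure costs more than the expected loss it prevents")
```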

Ramriot May 27, 2016 7:32 AM

There is also an obvious case where you can say that removing a security measure will not decrease security: where the measure no longer supports the technology or security model in use.

You may say, “but that is absurd; who uses a security measure when it no longer applies?” Well, most government legislation includes such things, because it is often easier to keep adding a dumb measure than to update the law.

There is also the way computer users keep local email malware scanners installed even after they have switched to webmail.

Finally, there is the way companies use CLI and phone numbers to infer point-to-point intent, when VoIP use in the network makes that inference unprovable.

JimFIve May 27, 2016 7:33 AM

It seems to me that the problem as stated in the abstract is incorrect. “Secure” is not a binary condition; it is a spectrum. You can’t say that something is secure, but you might be able to say how secure something is relative to something else. Likewise, there is not necessarily one place on the continuum where one can say everything below this line is insecure. Yes, there are certainly things that are insecure (rot13), but there is a gray area in the middle where one has to ask “insecure to whom, your neighbor or the NSA?”

Tatütata May 27, 2016 8:08 AM

“The Unfalsifiability of Security Claims”

This problem is present in many fields.

For example, you can prove that a patent lacks novelty against a given piece of prior art, but if you state that it is indeed novel in view of it, you can very rarely be entirely sure that there isn’t other prior art somewhere that anticipates the alleged invention.

z May 27, 2016 8:11 AM

I agree with their conclusions. To make matters worse, the current political and media climate rewards added security theater and punishes decreased security theater.

If you are a policymaker and oppose a pointless and idiotic security-theater measure, your head will roll when there’s a successful attack, regardless of whether that policy would have worked. The renowned security experts of The Today Show will sit around discussing how you opposed this bill that surely would have stopped the attack. But if you support it, regardless of how stupid it is, at least you can say it wasn’t your fault. You even get to stand up in front of a crowd in the next election and say you were “tough on terrorism/crime/guns/machete-wielding-gerbils/whatever-they-don’t-like” and your position will be unassailable.

Probably 90% of our security measures are really designed to secure against litigation and/or political attacks rather than violent ones.

Daniel May 27, 2016 10:21 AM

Another area where this unfalsifiability problem rears its head is in airline accident prevention. The truth that things can be declared unsafe by observation, but not the reverse, is such a prevalent problem that the airline industry has a special name for it: “regulation by gravestone.” There is not a single major advancement in airline accident prevention, whether the autopilot or CRM, that occurred until after the inverse was proved false by observation. Yet even today we cannot show that these advancements caused a decrease in airline accidents, only that they correlate with such a reduction. That is to say, we cannot prove that the computer autopilot, for example, is necessary for a reduction in airline accidents, only that its existence appears to be a sufficient one.

Thus, the response to new information can only be to ratchet upward: newly observed or speculated attack capabilities can argue a countermeasure in, but no possible observation argues one out.

Now you know the frustration of that vocal minority of pilots who think that the autopilot causes as many accidents as it prevents. “Children of the magenta line,” they call it. All such people can do is argue for their own countermeasure (“more hands-on training, less fly-by-wire”), but there is no possible observation that argues the autopilot out entirely when it exists in every plane and is used every day. There no longer exists a functioning control group for comparison purposes.

oliver May 27, 2016 10:42 AM

But I think we can all agree that there are things that are clearly and unequivocally insecure; such as everything that the TSA is doing. That is pure Security Theatre. There is no doubt about that.

Just Passin' Thru May 27, 2016 11:01 AM

Expanding on Daniel’s point a bit… you can substitute the word Safety for Security, and the authors’ arguments are still true. The inability of people to agree on a common definition of Safe (as well as Secure) in objective, measurable terms leaves us with only observing outcomes. And I think that’s why we now often see the institutionalized idiocy of Zero Tolerance policies.

Which, I think, means that the burden of the ever-ratcheting-up of laws, policies, and requirements will always spread the cost onto innocent 3rd parties, eventually.

Do you suppose there might be a way to take 3rd party costs into the equation to help eliminate unfalsifiable countermeasures/conditions/policies?

David Leppik May 27, 2016 1:27 PM

This is also true of financial safety. Portfolio managers define safety in terms of standard deviation: if an investment’s returns don’t change much day-to-day or month-to-month, it’s considered safe, because there is no way for them to tell whether that particular investment has a 5% chance per year of becoming worthless if that has never happened.
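
A minimal sketch (my own illustration, not from the comment) of why a standard-deviation measure misses this kind of tail risk, in Python:

```python
import random
import statistics

random.seed(0)

# Three years of observed daily returns: small, stable fluctuations.
observed = [random.gauss(0.0002, 0.002) for _ in range(750)]
annualized_vol = statistics.stdev(observed) * (252 ** 0.5)
print("annualized volatility: %.1f%%" % (annualized_vol * 100))   # looks very "safe"

# The true distribution might also include a ~5%-per-year chance of a -100% return.
# If that event has never occurred in the sample, it contributes nothing to the
# measured standard deviation, so the investment scores as safe until the day it isn't.
```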

C May 27, 2016 1:31 PM

This seems to be ignoring a large class of security results, such as using programming languages with overflow checking and thereby guaranteeing absence of buffer overflows. More generally, this ignores any sort of formal verification. Of course, such measures don’t guarantee that something is “secure” (what would that mean anyway?), but they do guarantee that a system does not have certain flaws.
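
As a trivial illustration of the kind of language-level check being described here (sketched in Python rather than a systems language):

```python
# A runtime bounds check turns an out-of-range write into a clean, observable error
# instead of silent memory corruption.
buf = bytearray(8)
try:
    buf[8] = 0xFF              # one past the end
except IndexError as err:
    print("rejected:", err)    # the language guarantees this class of flaw is absent
```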

In other words, there are ways to throw time at a system that actually give concrete impossibility results. You can write a mathematical analysis of an algorithm and conclude that it is not easier to break than another algorithm, and that proof can be machine-checked. Of course, you’re still vulnerable to software implementation errors, but you still proved something.

In that sense, it’s certainly not impossible to argue a countermeasure out, if you set up an alternate system that subsumes it. That alternate system can have a lower cost, or a lower privacy impact, and so on.

MarkH May 27, 2016 1:32 PM

@Daniel,

While your point is well taken, and I happen to be skeptical toward heavy reliance on cockpit automation, in aviation it is possible (in principle, at least) to conduct a controlled experiment to test safety claims.

However, an experiment to test the safety effects of so much hands-off flying may now be beyond the bounds of what could practically be undertaken.

Adam Shostack May 27, 2016 2:08 PM

@C I think you’re looking at Cormac’s work the wrong way.

“programming languages which … guaranteeing absence of buffer overflows” is a falsifiable statement. “Prepared statements are less likely to exhibit SQL injection behavior than dynamic SQL” is falsifiable.

“No programming language is secure” is not. Cormac’s arguing against the 2nd type, and that we should prefer crisp versions of the former over less crisp.
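
For example, here is the falsifiable claim about prepared statements made concrete, using Python’s built-in sqlite3 module (the table and the hostile input are my own hypothetical example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # hostile input

# Dynamic SQL: the input is spliced into the query text and changes its meaning.
dynamic = "SELECT role FROM users WHERE name = '%s'" % user_input
print(conn.execute(dynamic).fetchall())                   # returns a row it should not

# Prepared/parameterized statement: the input is bound as data, never parsed as SQL.
prepared = "SELECT role FROM users WHERE name = ?"
print(conn.execute(prepared, (user_input,)).fetchall())   # returns nothing
```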

Lawrence D’Oliveiro May 27, 2016 7:12 PM

This is another version of the hoary old refrain “absence of evidence is not evidence of absence”.

It is if you’ve looked.

Clive Robinson May 27, 2016 8:50 PM

Firstly, I prefer “untestability” to “unfalsifiability,” because it tends to make more sense to people. But you have to have a very good reason for claiming it, because proving it can sometimes be difficult.

As for defensive measures and ratcheting up, this game is as old as the hills when it comes to appropriations.

The usual abstraction is “You can never know you have spent too much on defence, because you only find you have spent too little when you get attacked.” The real reason you get attacked has little to do with what you spend on defence but what your attacker spends on offence. The former suffers from the breadth issue whilst the latter has the advantage of depth.

Thus the attacker plays to their strengths and the defenders weaknesses, which offers a considerable cost advantage to the attacker. Which makes it an archetypal Taleb “black swan” issue. Because nobody gets rewarded for preventing an attack that never happens, only fired for not preventing the attack that does happen. In such a game the only sensible stratagem is “not to play” but you usually can not opt out of being a target for somebody else, so you don’t usually get the choice “not to play”.

However, whilst you might not have the choice, the rest does not follow axiomatically.

The reason for the “ratchet effect” is “not letting go” of previous defences, even though you have outgrown them. It’s the “erring on the side of caution” failure and it’s an expensive one, due to falling into the trap of “defending instances not classes of attack”.

The trick to avoiding the ratchet effect is an effective “class identification strategy.” That is, rather than treat each attack as unique, you need to identify what it really has in common with other attacks, then develop your defence to cover the commonalities to save expense.

9/11 was in effect a black swan event. Nobody even now would seriously consider training employees how to respond to an aircraft crashing into the building, for three reasons: firstly, it’s an extremely rare event when compared to other risks; secondly, there is little or nothing that can be done by a building owner to prevent such an attack happening; but thirdly and most importantly, the difference in response to an aircraft strike and a major fire is so minimal that ordinary “fire safety training” will cover the aspects you can do something about.

Thus when you identify a new risk, it could have a zero cost impact in certain areas it shares with older risk strategies. But the opposite also applies: a new risk may cover a number of older risks, so the new response strategy covers the older strategies. Which means the older strategies can be dropped, avoiding the ratchet effect that often applies needlessly and at an avoidable cost.

However, there is a fly in the ointment, and it’s the “tipping point” of the class size. That is, even though three risks may be in the same class, it may not be cost-effective to switch from three unique instance responses to a single class response. This is actually quite a common area of failure when rapid technological change occurs: the fast pace of change renders re-evaluation worse than a “Red Queen’s race.” It’s an area that gets little investigation for various reasons, which is a shame because it offers direct cost savings.

Cormac Herley May 27, 2016 9:28 PM

@C It’s as Adam says (thanks Adam). Of course you can prove possession of certain properties or absence of certain flaws. But you cannot falsify the claim that possessing those properties or avoiding those flaws is necessary. So if you decide that everything needs to have property X you don’t have a way of checking whether you’re right about that original assumption. That’s a bigger deal for some X than others. If the cost of X was a one-time verification effort, then maybe not so bad, if the cost of X is that 2 billion users carry out some action it’s a lot worse.

Cormac Herley May 27, 2016 10:07 PM

@bruce Thanks for the shoutout. All is not lost. I agree we can talk about outcomes and cost-effectiveness (and think we should a lot more than we do). But we need more effort on when failure to observe improved outcomes of benefit means something is a waste. If “given a sufficiently motivated attacker X is necessary” is allowed to trump the lack of any improvement due to X it’s hard to make progress.

Richard May 27, 2016 11:20 PM

Since this thread involves verifiable security claims, I’ll use it to get some comments on the security of a cryptographic structure I came up with a few years back.

Consider the following simple AES/RC4 super-encryption sequence:

Plain_Text >> AES-CBC (w/ Secret IV) >> RC4 >> Cypher_Text

… which obviously would be decrypted with:

Cypher_Text >> RC4 >> AES-CBC (w/ Secret IV) >> Plain_Text
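
For concreteness, a minimal sketch of that pipeline in Python, assuming the pycryptodome library; the keys and secret IV below are random placeholders, and this only illustrates the layering, not a hardened implementation:

```python
from Crypto.Cipher import AES, ARC4
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

aes_key   = get_random_bytes(32)   # 256-bit AES key
secret_iv = get_random_bytes(16)   # 128-bit IV, treated as extra secret key material
rc4_key   = get_random_bytes(16)   # 128-bit RC4 key

def super_encrypt(plaintext: bytes) -> bytes:
    inner = AES.new(aes_key, AES.MODE_CBC, iv=secret_iv).encrypt(pad(plaintext, 16))
    return ARC4.new(rc4_key).encrypt(inner)        # second layer masks the raw CBC output

def super_decrypt(ciphertext: bytes) -> bytes:
    inner = ARC4.new(rc4_key).decrypt(ciphertext)  # peel the RC4 layer first
    return unpad(AES.new(aes_key, AES.MODE_CBC, iv=secret_iv).decrypt(inner), 16)

assert super_decrypt(super_encrypt(b"attack at dawn")) == b"attack at dawn"
```

Note that this sketch reuses one fixed RC4 key and one fixed secret IV for every call; a real system would need fresh per-message randomness, which is a separate question from the layering argument below.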

Normally, using a secret IV with Cipher Block Chaining mode doesn’t buy you much: although it does unconditionally secure the first block, each remaining block simply uses the preceding block’s ciphertext (which the attacker can see) as its IV, so although Cipher-Block-Chaining does act as a randomizer, helping prevent known-plaintext attacks, it doesn’t otherwise enhance security.

But with the addition of a second layer of RC4 encryption, the unbreakable randomness of the first block cascades, resulting in an all-or-nothing structure that cannot be broken except by a brute force search of all possible key and IV values.

My assertion is that this concatenation sequence makes it basically IMPOSSIBLE to effectively attack either cypher with any attack more efficient than a brute force key search, giving a composite cypher with a total of 384 to 512 key bits (assuming a 128-bit RC4 key, a 128-bit secret IV, and a 128- to 256-bit AES key size).

My security claim for this construction is based on the fact that the random 128-bit CBC initialization vector for the first block is considered part of the key material and kept secret, so effectively the first block is one-time-pad encrypted and therefore unconditionally secure. Because of the CBC structure, the second block’s AES IV is based on the first block’s AES output, but this information isn’t available to the attacker because the second-layer RC4 encryption masks the raw AES cypher output used as the IV in the next block. So, because the first block is XOR’ed with an undisclosed random secret IV, it’s unconditionally secure (by way of the one-time-pad security proof), and since the next AES block depends on the first, it is also theoretically unpredictable by any method short of brute force, unless we can attack the RC4 cypher to unmask the previous layer.

  • And attacking the second-layer RC4 encryption is also NOT possible, because all attacks on RC4 theoretically require some plain text as a starting point in unwinding its internal state, and this would require knowing the input to RC4, which as shown above is impossible to determine without first knowing the AES key, the secret IV, and a guess of each block’s plain text.

RC4 does show minor statistical biases, but these are useless when the input to RC4 is already statistically randomized by the previous layer of AES encryption.

There are a few minor implementation details you have to get right for this security proof to hold true.

First, obviously, the secret IV and all the cypher keys should be different and as cryptographically random as possible.

Less obviously, the cryptographic implementation needs to avoid tacking on unencrypted headers between layers, which would then get encrypted by the next layer, because such a header would act as known plain text to the next layer of encryption, defeating the whole purpose (openssl does this if you don’t use the ‘nosalt’ option, for example).

I’d be interested in hearing any comments on any other possible vulnerabilities that might invalidate the above security assumptions.

One comment I would expect right off the bat is “why bother? Everybody knows AES is perfectly secure all by itself.”

… to which I would answer: the whole point is to assume that NO cipher is perfect, and AES may have some vulnerabilities (perhaps related to that oddball input-never-equals-output S-box and simple, regular round structure), and then come up with a construction that makes exploitation of any hypothetical weakness impossible.

Nate May 28, 2016 2:47 AM

C: “This seems to be ignoring a large class of security results, such as using programming languages with overflow checking and thereby guaranteeing absence of buffer overflows.”

In my (admittedly limited) experience, even discussing formal provability (or any kind of formal properties of code) with actual working programmers gets very negative responses. I don’t understand why, but the general sense in the software development community seems to be somewhere between “security flaws can’t be 100% eliminated, so it’s a waste of time for anyone to do anything to minimise them,” “algebra? that’s never going to be useful!,” “other programmers might not be smart enough to code without mistakes, but I’m better,” and “of course my software is going to be full of bugs; I’ll just patch it after it breaks, doesn’t everyone?”

Meanwhile, compiler developers appear to have declared themselves as in open war against both security administrators and software developers, by introducing “optimisations” that break around half of all previously working code, invisibly, without compile-time warnings, and particularly removing security code such as memory bounds checks.

Because on 2016’s Internet, apparently, raw speed is still more important than correctness and safety. Or at least it is to the paychecks of the compiler developers.

It appears that we have a deep-seated bug hidden in the social algorithms by which we decide what “acceptable software quality” means.

Nate May 28, 2016 2:55 AM

Source for ‘compiler developers have declared war’:

http://www.complang.tuwien.ac.at/kps2015/proceedings/KPS_2015_submission_29.pdf

Wang et al. [WZKSL13] have written a static program analyzer that tries to find code that may be “optimized” away in “C”, but not optimized away in C*. They found that 3,471 packages out of 8,575 packages in Debian Wheezy contain a total of about 70,000 such pieces of code (as far as their checker could determine). In most cases “optimizing” these pieces of code away would result in code different from what the programmer intended (programmers rarely write code that they intend to be optimized away). These numbers are pretty alarming, but probably far lower than the number of undefined behaviours and packages containing them.

Clive Robinson May 28, 2016 5:30 AM

@ Nate,

Because on 2016’s Internet, apparently, raw speed is still more important than correctness and safety. Or at least it is to the paychecks of the compiler developers.

This is an ages old observation I’m sad to say.

It goes back to the dim and distant past of the 1980’s to my knowledge on PC’s and probably a decade or so before that on mainframes. We used to call it “Spec-manship” and it was a sign that the limited view of the marketing department was being inflicted on the rest of a company almost always to the detriment of the products.

As an analogy, it kind of harks back to the ugly old days of the Ford Edsel car, if you have ever seen one, and is proof positive of the limitation of imagination in marketing droids in general 😉

The problem is the marketing droids set the specification, and throw the toys out of the pram with the best of conceited divas to get their often ill-informed ideas to market. Those whose job it is to produce the product either learn to kowtow to the marketing divas or go do something else in life.

Having run up against one or two of these marketing divas, they often have “sales management experience” and “large salaries,” which are apparently their sole reasons for arguing that their opinions are right. That shallowness of mentality, when combined with their often sociopathic behaviour, gives you an idea as to why we have the security issues we have.

Because the ability to “think hinky” is often an introspective, almost meditative one. It’s the sort of culture clash equivalent to the Titanic serenely and securely steaming along and meeting the unseen marketing iceberg in a way that, with hindsight, was all too predictable.

I’m far from saying all marketing people are like this; I’ve met a few young creatives with very good ideas working in marketing. However, they tend to go off and do other things after a few tangles and thefts of credit by the poisonous old queen divas.

Douglas Adams had similar views about such “marketing execs” and amazingly managed to find a funny side to them,

http://www.ipad-ebooks-online.com/106/Adams,%2520Douglas%2520-%2520Ultimate%2520Hitchhiker's%2520Guide%2520(All%25206%2520books)_split_3.htm

John S. Downing May 28, 2016 7:46 AM

I skimmed the synopsis (didn’t follow the link to the full article).

Long ago, when knights were knights and damsels were in distress, a knight set out on a quest. He vowed to be invincible. He wore his heaviest armour (so that he was protected from head-to-toe with no Achilles Heel). His horse wore its heaviest armour for similar protection. He carried his lance … his spear … his great sword … his short sword … his dagger … his mace … his war hammer … his axe … his great shield … and even a lighter buckler for other forms of combat … and tons of water and food to consume along the way. When they got to the first bridge, it had been washed away by a flood. So, they took the ford. The horse lost its footing … and they both drowned …

One can never have enough locks and keys, thick walls, and hefty doors to be totally secure … but then, with absolute security comes absolute inflexibility and inability to respond to unexpected dangers.

By and large, “security” is the art of making your place less hospitable to attack than your neighbors’. It’s the old adage, you don’t need to outrun the bear. You need to outrun the person next to you.

NickP May 28, 2016 11:38 AM

It’s a problem because the author is trying to solve the wrong problem: an abstract claim over all situations that something is secure. Instead, we should leverage what the formal verification community has said since Orange Book days: a mathematical model of the design is proven to embody or maintain specific properties that are themselves formalized. These specific properties are believed to make the system secure against specific threats. Our field is simply too new to say for sure that a system is secure. So we can only prove specific properties, with quite a few together achieving security against all known threats from software. That is, the most we can say is that a system is secure against software attacks if the hardware and software are built a certain way. Nothing further, though.

Nick P May 28, 2016 11:51 AM

@ Clive Robinson

The big risk there is that the APIs are copyrightable. The prior case established that. So, in this case, we seem to get a win on how it’s interpreted. Yet the prior case, and any case law going in that direction, could devastate the ability to avoid lock-in. That’s exactly why Oracle is pushing this. It has little to do with Java itself. Check out enterprisedb.com to see a company that could get slammed next, possibly with the same arguments, due to its compatibility layer.

So, what is the solution to this? Stallman gets a win with his long-term prediction that FOSS would be safe from these kinds of shenanigans. That’s the primary option. Another is having companies modify their licensing to allow data formats, protocols, or APIs to be cloned with clean-slate implementations for compatibility or competition. That’s not happening with big players, but startups would definitely consider it as a differentiator. Lots of customers will stick with the originator if they keep updating the product, as their developers’ experience with it keeps them the best.

I notice, though, that the two top platforms in safe programming are both owned by patent-suit lovers: .NET and Java. Google owns Go and it’s FOSS, but so was Android; we see where that went. Makes me consider rebooting Modula-3, Wirth’s languages from ETH (esp. Component Pascal), or Free Pascal. The ultimate argument for such languages might be that no greedy company owns them, plus they’re easy for C, Java, and C# developers to learn. Meanwhile, Rust is moving at a steady pace with an amazing year. That level of safety and activity rarely coincide. Mozilla, as far as I recall, hasn’t tried any legal shit that would stop a fork or modification. So it’s safer than the others.

So, Rust with permissive commercial or FOSS licenses is the current best bet. Its main type system has already been subjected to formal proofs that it works. The unsafe side is being modeled right now. The language can’t be used for high assurance given it can’t be modeled that way yet. However, one could verify algorithms or whatever in another language that matches a subset of Rust closely enough for the verification to be meaningful. I see Rust for raising the baseline, though, instead of high assurance. It’s doing that already.

Nate May 29, 2016 4:10 AM

I love the focus on high-assurance (or even just raising the baseline as a beginning eg with Rust) – but any amount of security at the language and OS level seems problematic if we’re rushing to the Cloud, which grants vast backdoor access to the RAM and CPU of the virtual machine to an opaque corporate-military entity who isn’t us. And probably doesn’t have interests that align 100% with ours.

How do we get a computing platform that isn’t backdoored from the beginning? Is there any way of getting any kind of cloudlike computing that preserves any kind of guarantee of privacy?

How do we deal with the new ‘fact on the ground’ that computing is now a two-tier enterprise split between those who own actual physical hardware (and can, theoretically, have security if their OS isn’t also broken) vs those who now only rent space on someone else’s hypervisor?

Richard May 29, 2016 11:47 PM

With apologies to the late great Edsger W. Dijkstra…

It is practically impossible to teach anything about secure programming to students that have had a prior exposure to the C language: so far as security matters go, they are mentally mutilated beyond hope of regeneration.

Favian Ray May 31, 2016 9:38 PM

There are a lot of valid points made in this comment thread. Security is often thought about as binary. Most people understand that security truly is a spectrum. I’ve encountered this with almost every client I have, even with tech companies. They believe that buzzwords are the key to the next generation of cryptography. It’s exhausting that we live in the age of people believing and demanding, “I want to be unhackable.” It’s amazing that people believe it’s that binary, not understanding that it extends to great depth beyond that.
