Schneier on Security
A blog covering security and security technology.
April 15, 2011
Back in 1998, I wrote:
Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break.
In 2004, Cory Doctorow called this Schneier's law:
...what I think of as Schneier's Law: "any person can invent a security system so clever that she or he can't think of how to break it."
The general idea is older than my writing. Wikipedia points out that in The Codebreakers, David Kahn writes:
Few false ideas have more firmly gripped the minds of so many intelligent men than the one that, if they just tried, they could invent a cipher that no one could break.
The idea is even older. Back in 1864, Charles Babbage wrote:
One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.
My phrasing is different, though. Here's my original quote in context:
Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break. It's not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis. And the only way to prove that is to subject the algorithm to years of analysis by the best cryptographers around.
And here's me in 2006:
Anyone can invent a security system that he himself cannot break. I've said this so often that Cory Doctorow has named it "Schneier's Law": When someone hands you a security system and says, "I believe this is secure," the first thing you have to ask is, "Who the hell are you?" Show me what you've broken to demonstrate that your assertion of the system's security means something.
And that's the point I want to make. It's not that people believe they can create an unbreakable cipher; it's that people create a cipher that they themselves can't break, and then use that as evidence they've created an unbreakable cipher.
EDITED TO ADD (4/16): This is an example of the Dunning-Kruger effect, named after the authors of this paper: "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments."
Abstract: People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.
EDITED TO ADD (4/18): If I have any contribution to this, it's to generalize it to security systems and not just to cryptographic algorithms. Because anyone can design a security system that he cannot break, evaluating the security credentials of the designer is an essential aspect of evaluating the system's security.
Posted on April 15, 2011 at 1:45 PM
• 63 Comments
From Phil Zimmermann's original PGP docs, at http://www.cl.cam.ac.uk/PGP/pgpdoc1/...
"When I was in college in the early seventies, I devised what I believed was a brilliant encryption scheme. A simple pseudorandom number stream was added to the plaintext stream to create ciphertext. This would seemingly thwart any frequency analysis of the ciphertext, and would be uncrackable even to the most resourceful Government intelligence agencies. I felt so smug about my achievement. So cock-sure.
Years later, I discovered this same scheme in several introductory cryptography texts and tutorial papers. How nice. Other cryptographers had thought of the same scheme. Unfortunately, the scheme was presented as a simple homework assignment on how to use elementary cryptanalytic techniques to trivially crack it. So much for my brilliant scheme."
"I remember a conversation with Brian Snow, a highly placed senior cryptographer with the NSA. He said he would never trust an encryption algorithm designed by someone who had not earned their bones by first spending a lot of time cracking codes. That did make a lot of sense. I observed that practically no one in the commercial world of cryptography qualified under this criterion. "Yes", he said with a self assured smile, "And that makes our job at NSA so much easier." A chilling thought. I didn't qualify either. "
This sort of hard-earned humility is, I believe, the hallmark of a competent cryptographer (of which Zimmermann is of course an excellent example).
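The scheme Zimmermann describes, adding a pseudorandom stream to the plaintext, can be sketched as below. This is a hypothetical reconstruction for illustration (using a linear congruential generator as the weak stream source), not his actual code; the comment at the end shows why it falls to elementary cryptanalysis:

```python
def lcg_stream(seed, n):
    """Keystream bytes from a linear congruential generator (cryptographically weak)."""
    state = seed
    out = []
    for _ in range(n):
        state = (1103515245 * state + 12345) % 2**31
        out.append(state & 0xFF)
    return out

def encrypt(plaintext: bytes, seed: int) -> bytes:
    # "A simple pseudorandom number stream was added to the plaintext stream"
    # -- here, byte-wise XOR, so decryption is the same operation.
    return bytes(p ^ k for p, k in zip(plaintext, lcg_stream(seed, len(plaintext))))

msg = b"attack at dawn"
ct = encrypt(msg, seed=42)
assert encrypt(ct, 42) == msg  # same function decrypts

# The homework-assignment break: XORing the ciphertext with any known or
# guessed plaintext recovers the keystream directly, and a few keystream
# bytes are enough to reconstruct the LCG's internal state.
recovered_keystream = bytes(c ^ p for c, p in zip(ct, msg))
assert recovered_keystream == bytes(lcg_stream(42, len(msg)))
```

The frequency analysis is indeed thwarted, which is exactly why the scheme feels clever; the known-plaintext attack that kills it is the part the inventor never thinks to try.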
Worse than that. When people believe they've created something perfect, they will overlook all sorts of obvious faults. Most cryptographic or security systems that people (well, novices anyway) build and believe to be unbreakable could probably be broken by the builders themselves, had someone else presented the same design to them.
Oh sorry, wrong law.
Reminds me of the joke about protecting yourself in bear attacks and only having to outrun your camping buddy...(addon) until you realize that Carl Lewis decided to pick the campsite right next to yours.
The odd thing perhaps is that those systems that are cryptographically secure have the illusion of being simple. For instance, the One Time Pad is a very simple system from the point of encryption or decryption (i.e., add bit by bit).
However the hard part is always there; in the case of the OTP it has simply been moved to the key handling and key generation.
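The bit-by-bit addition described above is just XOR; a minimal sketch follows. Note how short the cipher itself is — everything hard (a truly random pad as long as the message, never reused, securely distributed) sits outside the code:

```python
import secrets

def otp_encrypt(plaintext: bytes, pad: bytes) -> bytes:
    """One Time Pad: bit-by-bit addition modulo 2, i.e. XOR with the pad."""
    assert len(pad) >= len(plaintext), "pad must be at least as long as the message"
    return bytes(p ^ k for p, k in zip(plaintext, pad))

message = b"hello"
pad = secrets.token_bytes(len(message))  # key generation: the actual hard part
ciphertext = otp_encrypt(message, pad)
assert otp_encrypt(ciphertext, pad) == message  # decryption is the same operation
```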
In many ways this is true of all cipher systems: we use simple primitives that are relatively trivial to break, but when put together the right way, each trivial operation inherits strength from all the previous trivial steps. The hard part is finding a succession of steps that provides this inheritance efficiently.
And this is the point these days: it is relatively simple to design a strong system using established primitives, but it is, in most cases, going to be either strong or efficient, not both.
It takes a well practiced skill, a lot of knowledge, and a little luck to achieve both strong and efficient; it is rare but it can be done (we think ;)
Oops, I left a bit out of the first line in my comment above.
It should be,
"The odd thing perhaps is that those systems that are cryptographically or theoretically secure have the illusion of being simple."
This is a special case of the weakness of modus ponens, which is that it doesn't protect you from false premises.
If I can't break my code, then my code is secure. I can't break my code. Therefore, my code is secure.
Perfectly valid logic, and perfectly wrong.
I think all the surrounding comments miss a crucial nuance from the original quote...
"Show me what you've broken"
addresses the matter that if the goal is to come up with an UNBREAKABLE cipher, the sort of person capable of determining whether it's strong or not is one that has demonstrated aptitude at breaking ciphers.
If you have credentials at breaking ciphers, then your opinions about cipher strength carry some amount of weight. If you lack those credentials, that rather diminishes such weight.
Bruce, you might like this story:
Basically, it says that complaining about the TSA is one of the signals the TSA uses to decide who gets more invasive scrutiny. I'm sure this will end well.
@Carlos & Christopher
Taking into account the weight of credentials would require an affirmation of the consequent.
If my encryption is secure, Schneier cannot break it.
Schneier cannot break my encryption.
Therefore, it is secure.
I think what's gone largely overlooked here, is the coveted ability to create a cipher so powerful, that you yourself cannot break it and therefore are able to hide from yourself things which you should not know.
Oh come on! What pompous arrogant posturing! Almost anyone can come up with an unbreakable encryption. With very little experience in either making or breaking codes, I did it with only 30 seconds of casual thought. Take a reasonable message (50 words or more) and replace each word with the number 0. Voila! An encryption that can never be broken.
As for the special case set of encryptions that can only be decrypted by authorized second parties: well, admittedly, that's a trifle harder. Unfortunately, the margins of this email are not quite adequate to fully document my solution. So... I'll leave this piece of minutiae as an exercise for my readers, since I've already solved the general case.
If one makes cipher that remains unbreakable to ones self for quite some time and then manages to break that cipher does one cease to be one's self?
Frederick: I've usually heard the reverse case: "An earlier version of myself cracked this problem".
"The Dunning–Kruger effect is a cognitive bias in which unskilled people make poor decisions and reach erroneous conclusions, but their incompetence denies them the metacognitive ability to appreciate their mistakes."
Something like the converse of this law appears to have happened to the Kryptos sculpture at CIA headquarters (http://en.wikipedia.org/wiki/Kryptos), which was designed to be difficult but crackable but has, to everyone's frustration, eluded a complete solution so far.
@Woof: your code leaks information. I can trivially deduce how many words were in your message.
I think you have it wrong there.
Plenty of people can break stuff but can't put together a proposal that others can't break. Most often, people who are good at breaking can't come up with anything useful at all; they can tell you all the wrong ways to do something, but not the right way.
The real test should be to ask what people have built, who has tested it, who has deployed it and whether it worked within the expected parameters.
Plenty of folk have designed systems that were never broken because they were never used. And some of those systems cost hundreds of millions to develop.
This reminds me of a remark I've heard on one or two occasions: (approximately)
If you write a program as cleverly as you can, you will not be clever enough to debug it.
True. Your statements reflect the recent shift, especially in defense circles, from risk mitigation to risk management. Your statements are mostly true for high assurance systems. An example was the SCOMP system. The evaluation alone took 5 years. Then only 20 sites used it, because it lacked the feature and cost advantages of COTS systems. Nobody is willing to pay for high assurance COTS infrastructure, so breakable will always be the norm.
On the other hand, crypto algorithms don't have this issue. Implementing, designing and deploying a crypto algorithm or protocol is often cheaper than, say, a desktop OS or virtualization suite. Cryptosystem designers routinely use formal methods and other exotic techniques most projects can't afford. The only obstacle I see to designing invulnerable crypto is that the theoretical security models and properties aren't as straightforward as designing, say, a microkernel or filesystem.
I was perplexed by rot13 until I learnt to count to 26 ...
What about Quantum Cryptography? It is always claimed that Quantum Cryptography is unbreakable as of today. But with the increasing advancement of the electronics industry, will Quantum Cryptography stand the test of time? Does "Schneier's Law" apply here as well?
Some days I can't break ROT-13. That's an indication of my skills as a cryptographer, not the security of the encryption. (Some nights I can't break ROT-26, but that's different).
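For reference, ROT-13 shifts each letter 13 places around the 26-letter alphabet, so applying it twice restores the original, and a shift of 26 ("ROT-26") is the identity — which is the joke. A minimal sketch:

```python
import codecs  # Python ships ROT-13 as a text codec, but it's easy to write directly

def rot(text: str, n: int) -> str:
    """Shift alphabetic characters by n places, wrapping around at 26."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + n) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

assert rot("Hello", 13) == "Uryyb"
assert rot("Hello", 13) == codecs.encode("Hello", "rot13")  # matches the stdlib codec
assert rot(rot("Hello", 13), 13) == "Hello"  # ROT-13 is its own inverse
assert rot("Hello", 26) == "Hello"           # ROT-26 is the identity
```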
Brian Kernighan, in "The Elements of Programming Style", 2nd edition, chapter 2:
Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?
The question is how many people believe they can create some protocol or scheme utilizing cryptographic primitives and believe it to be secure.
Isn't this really a bit of a philosophical argument about omnipotence? Let me think how it's worded elsewhere.
Do you believe God is omnipotent? In other words, can God make a rock so heavy he himself cannot lift it?
"any person can invent a security system so clever that she or he can't think of how to break it."
I think the argument from a security standpoint is: can a system be built so secure that no one can break it?
I think not, because the person who built it would be able to break it. Then it boils down to trusting that this person will not sell or give away the secrets at some point, and thus the security so secure is broken.
As for God being omnipotent, is the answer "why would He?" I can't really remember it right now. But it seems this is almost the same analogy here.
I once had to devise a cipher for a low bandwidth, high latency connection with tight timing requirements on the overall exchange.
Being aware of Schneier's Law, I endeavoured to be as uncreative as possible (essentially lifting 802.11i and replacing transmission of the IV with a deterministic time-based nonce, and being utterly unforgiving when it came to poor time synchronisation to reduce the risk of attacks related to rolling back the clock).
Even so, I encouraged my management to get a real cryptographer in to analyse my approach for flaws. To this day, I still don't know if that ever happened :P
You should have made this a fully recursive post by finishing with, "And in 2011, I wrote: [quote the whole thing]" :-)
Quantum cryptography implementations have already been broken. So, they are part of the evidence that Schneier's law is true.
Don't all researchers understand the reasons for impartial peer-review, quality assurance, and testing?
A new Bruce Schneier fact:
"Anyone can create an algorithm that he himself can't break. Except for Bruce Schneier."
As Nick P notes, many "practical" implementations of QC are or have been broken.
QC is based on a "theoretical" axiom from a mathematical model that appears, so far, to align (for the most part) with reality.
And of course the old joke applies to QC as it does to many engineering issues:
In theory the difference between theory and practice is zero; in practice the difference is far from theoretical.
In the case of QC, people are beginning to realise that it's not just ensuring a single photon is sent that's important; it's ensuring that information does not leak via side channels, and that is a difficult task, generally requiring considerable experience in quite a few otherwise unrelated fields of endeavor.
Thus QC has a lot in common with practical implementations of AES etc. As Bruce has noted with software systems, the cryptographic algorithms are currently more than sufficient even where some weakness in the algorithm exists (one that could allow it to be broken in less than brute-force time). This is because, in general, the rest of the system is far, far weaker even when done well (as has been seen with a cryptographic library written by crypto-knowledgeable people).
Without going into the ins and outs of it, even people with a good engineering approach will generally muck up when it comes to implementing crypto systems (see WEP, for example).
As I keep saying, unless you take care you will find that in general it's "Efficiency-v-security" in a worse way than "Usability-v-Security".
That is, it is not impossible to have an efficient or usable system that is secure; it is just very, very improbable unless you really know what the ground rules are and how to properly apply them.
Which, oddly enough, is very much the same for designing crypto algorithms. For those who have a yen to learn, have a look at the history of the Fast Encryption Algorithm (FEAL) as an object lesson, then the history of stream algorithms in NESSIE, specifically SNOW.
As I said further up, with a little knowledge it is quite likely that many people could design a secure encryption algorithm, simply by cobbling together known primitives into an over-engineered solution. However, it is very, very unlikely to be even close to being both efficient and secure in all modes.
This is just an example of the general case: People are stupid and/or illogical.
"Anyone can create an algorithm that he himself can't break. Except for Bruce Schneier. Bruce Schneier can break any algorithm... with his bare fists!"
From everyday observation, it would seem to me that politicians are even more prone to Dunning-Kruger than cryptographers or other normal people.
I remember years ago being introduced, with my father, to someone who called himself an expert in something-or-other.
As we walked away, my father said that anyone who calls himself an expert, isn't. An expert knows that there is so much he doesn't know that he isn't an expert.
@FridayAfternoon: It doesn't matter whether Carl Lewis is camped next to you or not. You don't need to be the fastest in the pack, just faster than someone else.
That is of course unless the slower party is Chuck Norris, to whom any bear would first make a friendly bow before continuing the pursuit of your *ss. It is rumoured that Mr. Norris was pulled over for speeding a couple of days ago, and that - in an act of random kindness - he let the officers off with a warning.
"anyone who calls himself an expert, isn't."
This all harks back to Aristotle who insisted that to be wise is to know your ignorance (or so I remember). And that is again based on Socrates who is quoted as having said "I know nothing except the fact of my ignorance."
The cryptographic question is then one of the examples of "Everybody can formulate a question he himself cannot answer". A wise (wo)man will ask others the question.
I think it would be cool if the Dunning-Kruger effect were to be renamed the Ralph Wiggum effect, but that would be unpossible.
"Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities."
I remember an early stage in my career when I was very pleased with myself and was pretty sure I was about half way to knowing everything I'd need to understand. After my skills and knowledge improved by many orders of magnitude, I realized that what I knew barely scratched the surface. I'm still working on it..
"After my skills and knowledge improved by many orders of magnitude, I realized that what I knew barely scratched the surface. I'm still working on it..."
And one day you'll (metaphorically) trip over something and go "that's odd", and if you're lucky nobody else will have spotted it before you, and if you investigate properly and get a paper published then you might be a "scientist", and then your troubles will really begin ;)
Often attributed to Brian Kernighan:
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
Since we're sharing related aphorisms, one of my favorites in this area is, "Education is the progressive realization of one's own ignorance".
I saw it on the Powerpuff Girls cartoon, but most likely it's from elsewhere originally.
I wonder if there isn't even a stronger version of Bruce's statement, based on a more liberal sense of "break". If you can't come up with at least one attack on your own algorithm/protocol/defense that's better than brute force, you don't know what you're doing.
Oh, and I'd love to see an analysis of when moving the problem to other parts of the system, e.g. key distribution or plaintext movement, constitutes a substantial decrease in difficulty.
"Unskilled and Unaware of It" - In ultimate frisbee we refer to this as having a poor ego to skill ratio.
Hmm, I'm not a lawyer but I touched on this a bit from a different angle in my RSA presentation on Cloud forensics.
There are four phases to a Daubert test for expert testimony (from the US Supreme Court case Daubert v. Merrell Dow Pharmaceuticals, Inc.):
1) testable hypothesis
2) known or potential error rate
3) subject to peer review
4) generally accepted by relevant scientific community
Seems like Schneier's Law emphasizes the significance of #3.
Here's a corollary, which to be as egotistical as possible, I will call Leppik's Law:
Anyone, from a babbling baby to an experienced cryptographer, can create an algorithm that he himself can't break, nor can anyone else. But nor can any legitimate user decipher it.
there was a study a few years back which produced the disturbing result that a significant majority of people (~90%) considered themselves to be in the top tier (~top 10%) of intelligence. the outliers in the study were the functional but cognitively impaired and those *just* below the very top. they rated themselves at the bottom, or just above average respectively. (the theory being that the impaired that are clever enough to be functional and self aware know that they're not rocket scientists, and the folks who support rocket scientists hang out with the *really* smart ones, which skews their understanding of where they fall. of note, both of these groups rated themselves lower than they actually are)
the takeaway for all of us is: unless you know a number of *very very* smart people, a) you probably aren't nearly as clever as you think you are, b) most people who tell you they are smart are probably delusional, and c)you really are surrounded by idiots (as are the idiots around you).
D) the people who tested your IQ used a flawed system.
There is a downside to hanging around with the general conception of "*very very* smart people", they tend to need minders to look after them in public.
At one point I developed a "pet theory" about smart people and very smart people that was loosely based on a rubber band... in that,
Except for exceptional cases (ie very smart people), the more you stretch being smart in any given direction the less area it covers in any other direction.
That is in the general case (ie smart people) you can have depth but lack breadth, which kind of accounts for the "nutty professor" types with the lack of dress sense and sometimes social norms.
However with age and a little caution I realise there are some very very smart people who realise early on that the way to get through life peacefully is to act as those around them expect them to, just to make those people feel comfortable...
Knuth writes in the introduction to the PRNG chapter of Seminumerical Algorithms about an algorithm he wrote in his younger days that would pick which operation to use to generate the next item based on the inputs. He then started testing and found it repeated every 20 numbers or something like that. Somebody else posted Zimmermann's similar story. It never ceases to amuse me when the media finds some 16-year-old who claims to have done what Zimmermann and Knuth thought they did at some point (but inevitably won't release their source code).
we don't really want to give the delusional folks an out for them not being clever do we? ;)
the study (iirc) didn't only use iq, but included a variety of tests to deal with some of that issue.
we all have varied skills at various levels, and different sorts of smarts fall along those skills. while in theory most of us should have rolled up an average character, odds are that *someone* did manage to get all 18's on their 3d6 for every sort of clever. it's just less probable. sure you can get a feat or add a point through training but you're still starting behind.
if "jhkhjkdsfsdfbmnfewnwenb" = "this is the plaintext" but "this is the plaintext" is wrong.....
'Schneier's Law' seems a corollary to Gödel's theorem:
Security thinkers are inherently biased to find a coherent, self-consistent, internal operating system.
When they succeed, they fail, under Gödel's theorem, to account for all features of external reality.
Hence they cannot see how, even as they have consistently proved their system secure,
their system is, by that very definition, insecure.
What is bad about collisions in encryption programs?
I think the Dunning-Kruger Effect is _the_ most important problem of the human race, possibly even more than greed.
I'm reminded of Knuth, who, in trying to invent a "random algorithm", came up with an algorithm that quickly cycles itself in a few steps.
Please, please, please, don't be armchair cryptologists. Leave that to the experts. And if you're not already an expert, you aren't. :)
@Randal L. Schwartz, so why would collisions be bad for a cipher?
If each IV has collisions with every other one, but with different keys... wouldn't it be impossible to brute force it?
Are you from England?
@ Randal L. Schwartz,
"Please, please, please, don't be armchair cryptologists. Leave that to the experts."
Hmm, why would "experts" wish to be "armchair cryptologists"?
"And if you're not already an expert, you aren't"
Gives rise to,
"At what point do you become an expert, and by whose recognition?"
After all, it was not until about 15 years ago that formal training in cryptography became available in universities. Prior to that you had to be self-taught, or working for a government-funded organisation and thus sworn to secrecy...
I've read the Dunning-Kruger paper and I wonder how much it says about human nature and how much it simply says about Cornell undergraduate students, who were the test subjects.
I'd like to see a further study which tested a more general population.
Isn't this just the standard fallacy of taking absence of evidence as evidence of absence?
@Clive Robinson: It's like becoming a hacker: You are one when other hackers refer to you as one. Doesn't explain how the first one got his title, though...
@Steve: It's an old joke: Psychology is the study of behavior of undergrad students.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.