Amateurs Produce Amateur Cryptography

Anyone can design a cipher that he himself cannot break. This is why you should uniformly distrust amateur cryptography, and why you should only use published algorithms that have withstood broad cryptanalysis. All cryptographers know this, but non-cryptographers do not. And this is why we repeatedly see bad amateur cryptography in fielded systems.

The latest is the cryptography in the Open Smart Grid Protocol, which is so bad as to be laughable. From the paper:

Dumb Crypto in Smart Grids: Practical Cryptanalysis of the Open Smart Grid Protocol

Philipp Jovanovic and Samuel Neves

Abstract: This paper analyses the cryptography used in the Open Smart Grid Protocol (OSGP). The authenticated encryption (AE) scheme deployed by OSGP is a non-standard composition of RC4 and a home-brewed MAC, the “OMA digest”.

We present several practical key-recovery attacks against the OMA digest. The first and basic variant can achieve this with a mere 13 queries to an OMA digest oracle and negligible time complexity. A more sophisticated version breaks the OMA digest with only 4 queries and a time complexity of about 2^25 simple operations. A different approach only requires one arbitrary valid plaintext-tag pair, and recovers the key in an average of 144 message verification queries, or one ciphertext-tag pair and 168 ciphertext verification queries.

Since the encryption key is derived from the key used by the OMA digest, our attacks break both confidentiality and authenticity of OSGP.

My still-relevant 1998 essay: “Memo to the Amateur Cipher Designer.” And my 1999 essay on cryptographic snake oil.

ThreatPost article. BoingBoing post.

Note: That first sentence has been called “Schneier’s Law,” although the sentiment is much older.

Posted on May 12, 2015 at 5:41 AM

Comments

Hugo May 12, 2015 6:00 AM

I do indeed see this many times: companies or organisations that think that as long as they keep the algorithm secret, it is secure. Often, another reason for a ‘proprietary’ crypto protocol is vendor lock-in. That makes it all even more dangerous.

Marcos El Malo May 12, 2015 6:52 AM

@Hugo

What was secret or proprietary about the Open Smart Grid Protocol?

UK Dave May 12, 2015 7:41 AM

In light of these developments, no, nice Mr British Gas man and nice Mr E.On man, I would not like one of your groovy funky new so-called “Smart” Meters.

Call me paranoid but I think I’m better off with my old dumb meters in their white cupboards with the funny little yellow plastic keys.

eindgebruiker May 12, 2015 8:01 AM

“Oma” means grandmother in Dutch. I like that: the “granny digest.”

Scott Arciszewski May 12, 2015 8:23 AM

In fairness: A lot of us have written our own homegrown crypto (that we would hopefully never, in a million years, seriously consider deploying in a production environment or recommending to a standards organization). Especially as amateurs.

Their mistake was publishing and deploying unvetted crypto, not writing it in the first place.

I am glad that this story is getting the attention it deserves. But I am worried that the peanut gallery’s rhetorical responses lambasting them for daring to write their own crypto in the first place might instill harmful impressions in fledgling developers, who should instead be encouraged to learn how cryptography is broken and to write better algorithms (but not to deploy them).

OSGP should have been reviewed by a team of cryptography experts. Why wasn’t it? Is a shortage of professional cryptographers with the available time to review these standards the culprit? Was it a lack of funding to hire one to write it properly?

Dr. I. Needtob Athe May 12, 2015 8:37 AM

There used to be a frequent feature in Crypto-Gram called The Doghouse, where a bad security product was exposed. I just searched my archive and the last entry I found was about a product called Lock My PC in the May 15, 2010 issue.

The Doghouse was an interesting and entertaining feature but it seems to be gone now. What happened to it?

Martin Walsh May 12, 2015 9:27 AM

Once again you are cherry-picking this failure to make a grossly invalid point. “Peer-reviewed” and “withstood broad cryptanalysis” are utterly meaningless terms.

Andropause May 12, 2015 9:45 AM

The article is a shameless ad for older essays/blogs, etc. Most of it consists of re-posting the abstract of a paper, so if this blog entry has any value at all, it is so well hidden as to withstand all cryptanalysis.

__MPTR May 12, 2015 9:53 AM

Amateurs Produce Amateur Cryptography
Professionals Produce Amateur Cryptography

Yours truly, NSA

Martin Walsh May 12, 2015 10:00 AM

In 2015, here is what “peer-reviewed” means.

“If it’s not exactly like something we have seen before and we can just copy previous work, we don’t have time to evaluate it… unless, you give us a blank check. And, we can’t commit to anything EXCEPT THAT, if it fails you get the blame, and if it succeeds we get the credit.”

Only in the Security Industry. You will not find this in any other sector of industry, from Aviation to Medicine, even in places where people’s lives are at stake.

This isn’t about cryptography; it has nothing to do with key length or rounds or PRNGs. This is about MONEY.

John Macdonald May 12, 2015 10:02 AM

Using an amateur crypto algorithm is not necessarily bad. I did it once – using an algorithm I had invented. I knew that it would pose no huge challenge to break, but since it was being used to encrypt scripts, and the interpreter had to be able to decrypt the scripts, it was not difficult to decode the scripts without analyzing the algorithm enough to know how to break it (just run the script with the debugger turned on). The only reason for the encryption was to make it clear that the scripts were not intended for public viewing, so that if anyone was found to be using the code they must somewhere along the line have made a deliberate choice to ignore the copyright restrictions, and could not claim that they thought it was intended to be freely copied. Naturally, we never caught anyone doing that (and it probably never happened – this wasn’t rocket science).

Zenzero May 12, 2015 10:05 AM

@Martin Walsh

“Once again you are cherry-picking this failure to make a grossly invalid point. “Peer-reviewed” and “withstood broad cryptanalysis” are utterly meaningless terms.”

How is the point “grossly invalid”? It’s been proved again and again for many years.

I would argue that “peer-reviewed” and “withstood broad cryptanalysis” are very meaningful terms in the context of cryptography, and indeed in a broad range of fields. Peer review and broad safety testing have been ongoing in many industries for centuries. Would you fly in an airplane whose design wasn’t thoroughly tested and reviewed by experts in the field? The same applies to crypto just as much as it does to building, automotive, medical, etc. I really don’t see what point you are trying to make.

Zenzero May 12, 2015 10:12 AM

@Martin Walsh

Apologies, I see you had posted again while I was writing my last reply.

Where is that quote from? It’s not familiar to me. Or did you make it up?

Here’s a more relevant definition of Peer review from the Journal of Theoretical Physics and Cryptography:

Peer review (also known as refereeing) is the process of subjecting an author’s scholarly work, research, or ideas to the scrutiny of others who are experts in the same field. Journal of theoretical physics and cryptography have implemented a rigorous peer review process to ensure the high quality of its technical material. Referees are formal reviewers whose comments and opinions will form the basis upon which the Editor will decide whether or not to publish the paper, and with what changes. The Editor’s decision is always based on all the reviews received, but mixed reviews present the need for the exercise of editorial judgment. Thus, the final decision for acceptance or rejection lies with the Editor. The review process shall ensure that all authors have equal opportunity for publication of their papers.

http://www.ijtpc.org/

John Galt III May 12, 2015 10:26 AM

This is spot on the topic of “A man’s got to know his limitations:”

Opinionator – A Gathering of Opinion From Around the Web
http://opinionator.blogs.nytimes.com/2010/06/20/the-anosognosics-dilemma-1
June 20, 2010, 9:00 pm
The Anosognosic’s Dilemma: Something’s Wrong but You’ll Never Know What It Is (Part 1)

I remember from about 20 years ago, as if it were yesterday, “and that makes our job so much easier.” It was a spook commenting on all of the rookie mistakes in cryptography. I think that the spook was being quoted in an article by PRZ.

Feynman’s quote below is spot on the issue of how hard it is to get cryptography right, and on my constant refrain about being able to secure not only the software, but also the hardware and firmware, and all of the back doors. A very powerful concept for securing a processor is to make a complete working model of it in FPGA, using the data sheet, then run that model against the real code on a chip that has been exposed in the wild. Any time there is a divergence in the execution trajectory, it indicates undocumented features, either being exploited or an unintentional bug.

Feynman used the concept in physics and the social sciences, while Dylan Grice applies it here to central bankers and economists. My friend used to work at Thinking Machines in Boston, when Feynman would come around to their technical meetings. He was still razor sharp – that must have been in the 1980s.

Here’s a Feynman quote from Dylan Grice’s 2013 article. “He had ‘the advantage of having found out how difficult it is to really know something, how careful you have to be about checking…’ and that experts hadn’t ‘done the work necessary, they haven’t done the checks necessary, they haven’t taken the care necessary.’ He had ‘a great suspicion that they don’t know.'” Implicitly, they don’t know that they don’t know. The context is that Dylan Grice is comparing the lack of rigor in social sciences three decades ago to lack of rigor among central bank economists/policy makers right now. The idiots at the controls don’t realize that they are dealing with a nonlinear, time-varying system, or worse, they do understand, but pretend that they don’t. They think that (or act like) they are dialing a wall thermostat up and down to try and get the temperature just right. The problem is that they don’t know what they don’t know:

http://www.zerohedge.com/news/2013-07-31/dylan-grice-intrinsic-value-gold-and-how-not-be-turkey

Naïve intervention.
Richard Feynman instinctively distrusted social scientists. Not because he knew much about social science but, as he relayed once in a BBC interview, because he had “the advantage of having found out how difficult it is to really know something, how careful you have to be about checking…” and that experts hadn’t “done the work necessary, they haven’t done the checks necessary, they haven’t taken the care necessary.” He had “a great suspicion that they don’t know.” In each case above, the aim of policy makers was to make the world a better place. The problem, as Richard Feynman would have understood, was that they didn’t know what they were doing. They hadn’t taken the intellectual care necessary. Yet the central planners on the ground permitted themselves the delusion of thinking themselves that very great central planner in the sky. And in a way which seems Classically tragic they were punished for trying, with disaster inevitably ensuing.

I think that this is the right BBC interview. Well worth the time to watch.
http://www.feynmanphysicslectures.com/bbc-horizon-interview

My global comment:
The Keynesian model more or less worked for 70 years. That’s a pretty good run for a linear, first-order controller model applied to stabilizing a highly non-linear, time-varying system. The Austrian school of economics provides some good second-order corrections. But we always arrive eventually at the point where we recognize that all models are wrong and none are universally applicable.

My first notice of Dylan Grice and an early (late 2010) good experience with Zerohedge was this article, which is an absolutely brilliant piece of work:

http://www.zerohedge.com/article/5-black-swans-keep-dylan-grice-night-and-how-hedge-against-them-all

It probably still is the most sophisticated hedging article that I’ve ever read. His Japanese hedge is going to pay out quite well and now a lot sooner.

Every concept in this comment has a direct analog in cryptography and secure systems.

Questionman May 12, 2015 10:31 AM

How hard would it be to create a crypto that isn’t breakable even by its maker?

Like in a way to encrypt something forever?

J on the river Lethe May 12, 2015 11:03 AM

A good recurring point, in an article that gets dredged up from time to time.
Points:
1. Of course home brew is limited by how talented and careful the “cryptographer” is in the construction and execution of a system.
2. It’s a given that even experts can vet in poor faith or with sloppy effort.
3. Yes, crowd sourcing has value.

It’s like when your car breaks down and you try to fix it. Fail or succeed, your odds are better if someone who is an expert in automobile mechanics (and even software), or better yet a group of them, fixes it. Everyone knows the backyard mechanic who operates like a chimpanzee with a blowtorch, or the brother-in-law who says he learned how to make web pages and will set one up for your business.

I am not going to bust Bruce’s chops for a general statement by nitpicking with exceptions or lack of specificity.

Readers of this blog generally know what they don’t know. The general public doesn’t even know what they don’t know. The professionals are supposed to take the blowtorch out of their hands. Oh, and refill the nitrogen tanks. Apparently you actually can melt a cluster computer right down to the frame. Sigh.

Nick P May 12, 2015 11:09 AM

@ Martin Walsh

“Only in the Security Industry. You will not find this in any other sector of industry, from Aviation to Medicine, even in places where people’s lives are at stake.”

Those are regulated industries, often with specific guidelines on what to do. Quite the opposite of cryptography, and even IT.

“This isn’t about cryptography; it has nothing to do with key length or rounds or PRNGs. This is about MONEY.”

I agree: the fields you cited are very much about the money. Their rigor and profit motive result in products so expensive most can’t afford them. The result is oligopolies that screw consumers, or people not getting the product or service without insurance. Even more, many in those industries try to pay off reviewers to let killer stuff go through. Drug and medical device manufacturers come to mind immediately. A British journal changed its stance on “independent” reviewers taking bribes because it couldn’t find a single one taking less than $10,000 from manufacturers. So, they set the new standard at $10,000.

re peer-review

We have a number of algorithms with extensive peer review, and a number of cryptosystems with extensive peer review. Most of this peer-review work is in academia, where students try to make a name by making or breaking stuff. There was also peer review from diverse sources in the 90’s for things like PGP and remailers. There are people such as Matthew Green who post reviews of security tech on blogs. Little to no money is involved for any of these. There are experienced cryptographers who charge decent sums to review products for industry. Of course, one could ask: shouldn’t people get paid for their work, especially if “lives are at stake”?

Note: The companies and organizations involved in smart grid initiatives are pulling in a lot of money. I know a local city got $100+ million in grants for its initiative. Boeing gets millions in contracts just to speculate on how grids might get done. All this money flying around, and you expect the people doing the most safety/security-critical parts of the system to work for free? Make minimum wage? Not quite fair.

Jayson May 12, 2015 11:20 AM

Non-cryptographers (like me) are perfectly aware that they should not create their own solutions. It doesn’t take much effort to understand the wisdom of using standard protocols. Examples like this are likely the result of a unique mindset, groupthink, or time-to-market pressure pushing a proof of concept into production.

Amateurs are certainly capable of producing quality cryptography as well as cracking it. I’d like to point out that those who created the protocol in the article were not amateurs but professionals, since they were paid for their work.

@Zenzero
I’d have no problem flying in a plane thoroughly tested by non-experts. The more flights they took without crashing, the better. 🙂

fads0j9h0 May 12, 2015 11:52 AM

I’ve noticed a correlation between “second-hander” mentalities and a loud opposition to an individual making their own crypto.

It is obvious that there are valid crypto principles that can be learned and applied independently of others.

That this is an issue says more about the person who is running down self-made crypto than about making crypto systems. It says that they prefer dependence to independence. It says that they are prey to the appeal-to-authority fallacy. It says that they fail to observe the difference between essential and accidental properties.

If you want to make your own crypto, then the only requirement is that you know what you are doing.

Alan May 12, 2015 11:56 AM

What are the ramifications of this weak security? It sounds like malicious actors can shut down power systems — at least at the electric meter level. And change your electric bill, for better or worse. Other than that, not a risk. Right?

It seems like even when the risk appears low, there’s some additional risk that wasn’t immediately apparent. Ergo: encrypt by default. With proven encryption.

Daniel May 12, 2015 12:11 PM

In a related vein, after a humiliating incident I coined Daniel’s law. To wit:

Any security system you devise that you can detect can be detected by someone else.

Nick P May 12, 2015 12:41 PM

@ Jayson

“I’d have no problem flying in a plane thoroughly tested by non-experts. The more flights they took without crashing, the better. :)”

Lol. Nice one. Reliability can work that way but not security. The reason: you can’t easily detect when the crypto “crashed.” Especially if opponents (esp NSA) are willing to plant evidence of other attack vectors to maintain your trust in what they really attack.

@ John Macdonald

A DMCA situation seems like an acceptable use case for a custom algorithm. Even then, you’re better off as an engineer leveraging what’s already been made and shown to work. You could’ve simply used xorshift if you wanted something that looks like 128-bit crypto but is fast, at around 10 cycles per byte. If the ability to break it doesn’t matter, a substitution cipher that reuses the same one-byte key on the plaintext would do fine while being extremely simple, portable, fast, and space-efficient.
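For concreteness, a minimal sketch of that last idea, the single-byte XOR marker (hypothetical Python; the function name is made up, and this is obfuscation, not security):

```python
def xor_mark(data: bytes, key: int = 0x5A) -> bytes:
    # Repeating single-byte XOR. It offers zero confidentiality; its only
    # job is to signal that the content was not meant for casual copying.
    # XOR is an involution, so the same function encodes and decodes.
    return bytes(b ^ key for b in data)

script = b"print('internal use only')"
marked = xor_mark(script)
assert xor_mark(marked) == script  # applying it twice restores the original
```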

So, even in that use case, home-rolled solutions are still probably worse than using what’s already out there.

A Nonny Bunny May 12, 2015 12:53 PM

@Alan

What are the ramifications of this weak security? It sounds like malicious actors can shut down power systems — at least at the electric meter level. And change your electric bill, for better or worse. Other than that, not a risk. Right?

There are privacy risks as well. The easiest one is telling when someone’s home or not (and thus when you can rob the place). And analysis of the power usage can tell you things like what devices someone has (i.e. how many TVs there are to steal).

On the other hand, I doubt actual thieves are likely to bother. And those with an interest in violating someone’s privacy can probably resort to more effective means (like spear-phishing and getting into the browser history on the computer).

MarkH May 12, 2015 2:08 PM

Hmmm, do I notice a bit of hostility toward Bruce’s position? Maybe some of us are defensive about our original algorithms 😉

Anyway, one amateur’s perspectives:

1. Cryptanalysis is hard.

In general, to effectively attack a substantial cryptosystem may require a great deal of specialized learning; patience; insight; creativity; special-purpose tools (quite possibly constructed or customized to the target system) and understanding of how to apply them; and lots of time.

Learning cryptanalysis is very different from learning how ciphers, protocols and other infosec components work. I would guess that for every person who has learned enough cryptanalysis to be dangerous, there are at least 100 or 1000 who have learned enough cryptography to be dangerous 🙂

I suggest that it takes years of hard work to become well-versed in cryptanalysis. And unless you have discovered significant attacks against important cryptosystems, your competence is purely speculative. The number of people in the world who have demonstrated such competence is rather small.

2. If you aren’t a competent cryptanalyst, and you independently evaluate a cryptosystem to be strong, your evaluation is worthless.

See 1.

Note that most inventors of “home brew” crypto techniques haven’t gone deeper into cryptanalysis than solving newspaper puzzles, or doing a handful of classroom exercises.

3. The Most Dependable Indicator Available of Cryptosystem Strength is withstanding lengthy attack by competent cryptanalysts.

If the members of that small community of public cryptanalysts with demonstrated competence (see 1) have invested multiple person-years into attacking a system, and haven’t come close to a practical break, then trusting such a system is much, much safer than trusting an alternative that hasn’t withstood such analytic scrutiny.

As to the claim that “withstood broad cryptanalysis” is an “utterly meaningless” term: it has both a quite specific meaning and substantial real value.

MarkH May 12, 2015 2:35 PM

ABOUT THAT AIRPLANE METAPHOR…

First, I respect Jayson’s right to board any kind of aircraft he chooses 🙂

Second, I would hesitate to ride in an aircraft of proven design (that is, whose design was tested by experts) if the particular airframe had not been subjected to expert scrutiny. In the USA, that would at any event be illegal, as well as of dubious wisdom.

Third, I would be frightened to ride in an aircraft of unproven design, even if the vehicle had survived hundreds of flights. All of those flights might have been under conditions within some range of “normal” parameter values. The kind of design testing done by experts will answer a very long list of questions, including (for fixed wing aircraft):

• Can the aircraft be reliably recovered from a stall with a reasonable loss of altitude?

• Will the structure maintain integrity when subjected to a strong updraft?

• Can the aircraft be practically operated so that it remains controllable under all conditions of engine failure?

• Does the aircraft have a maintenance schedule ensuring that cracks (or other progressive failures of materials) will be detected before endangering safety?

What I want to convey is that “typical use case” testing is nowhere near enough to prevent a calamity.

Fourth, for cryptosystems, take the point above about abnormal-case testing and raise it to the 14th power (well, the exponent was chosen somewhat arbitrarily) … because civil aircraft are NOT tested against very smart, very persistent, and utterly malicious persons who are doing everything in their power to destroy them.

That is not a reasonable performance requirement for passenger aircraft. It is the fundamental performance requirement for strong cryptosystems.

Marcos El Malo May 12, 2015 2:57 PM

@Mark H

If the airplane was a fighter or bomber and you were going to fly in combat, you might well want it tested against very smart, very persistent, and utterly malicious persons who are doing everything in their power to destroy it.

(And indeed, some kind of parallel could be drawn with a tried and tested combat aircraft like the A10 Warthog, which has been retired in favor of an unproven design with “advanced capabilities*”.)

*capabilities such as keeping defense contractors awash in cash

Nick P May 12, 2015 3:13 PM

@ MarkH

Nice points. The difference and difficulty of security vs. safety was described in Programming Satan’s Computer. Beyond the good metaphor, it has a nice summary of the problem:

“The problem is the presence of a hostile opponent, who can alter messages at will. In effect, our task is to program a computer which gives answers which are subtly and maliciously wrong at the most inconvenient possible moment. This is a fascinating problem; and we hope that the lessons learned from programming Satan’s computer may be helpful in tackling the more common problem of programming Murphy’s.”

Ray Dillinger May 12, 2015 3:32 PM

I have learned enough cryptography to be dangerous. I can actually produce secure cryptosystems, and, given time and lots of effort, expose weaknesses in insecure ones. For example, I was one of the people who saw what was wrong with the key scheduling in RC4 before that weakness actually turned into a break.

But I have not learned enough to be good. The secure ciphers that I design rest on the security of known-good primitives which I’m applying in a different way, and do not constitute an advancement of the art. What I can say is something like, “If ((Well known cryptographic pseudorandom number generator)) is secure, then this block cipher which uses it in its keying schedule has a secure keying schedule. And because ((Well known non-cryptographic number sequence generator)) has period of N and produces every possible bit combination the length of its state before repeating, then using its output to mask other output will render correlations on any sequences less than the state length invisible. …” and so on.
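As a toy illustration of that compositional style (a sketch of the pattern, not one of Ray’s actual constructions): a stream cipher whose entire security claim reduces to a single vetted primitive, here SHA-256 assumed to behave as a PRF when keyed this way. Names like `keystream` are invented for the example.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # If SHA-256 keyed this way behaves as a PRF, the output blocks are
    # indistinguishable from random; the security argument rests entirely
    # on the vetted primitive, not on any new design.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # Encrypt (or decrypt) by XORing with the keystream; never reuse a nonce.
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))
```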

And, because I construct in this way, the products of these constructions ALWAYS take more CPU time and/or memory than a different cipher which is already known to be good.

So I can produce secure things, which there’s usually no point in publishing, because there are other secure things already that are more efficient, have fewer parts, and have the advantage of having already been subject to more scrutiny, and that already have code out there to be compatible with, test vectors that people can access, etc.

So what good am I? I can build things that fit spaces in protocols that have strange requirements. If you want something, for example, that produces a ciphertext exactly the same length as the plaintext, doesn’t require an IV or a block cipher mode, can be deciphered using any three of a set of four keys but does not allow recovery of the fourth key when doing so, and allows use of the same keys more than once without loss of security…. I can build it for you. It’ll be slow, but it’ll fit the weird-ass requirements your goofy protocol gave you.

And that puts me in a strange place. I’m at the upper edge of the tiers of crypto implementation designers, and I’m better than just amateur at cryptanalysis, but I still don’t have the chops to be a researcher. I can apply ciphers and other primitives to build new protocols, and I can even combine primitives to build new ciphers, but I can’t build new primitives. I can figure out how to apply existing attacks to a given protocol or cipher, but I can’t develop new attacks.

Aaand, that’s frustrating. I have no idea how to grasp the next rung of this ladder.

I would like to thank all the amateurs out there who designed insecure systems with holes that somebody could drive a truck through; if it hadn’t been for them, I would never have gotten enough experience with attacks and analysis to get this far. And it feels weird to say that nobody should do that, because I had to go through that stage to get where I am: if nobody makes these mistakes, there is no beginning- and intermediate-difficulty material to learn these skills from, and nobody learns them.

So there’s something of a catch-22. We need mistakes made by others to learn from, and we need to make mistakes of our own as part of the learning process. And it is only by doing that that we can develop the kind of expertise it takes to detect and avoid those mistakes.

Jonathan May 12, 2015 3:35 PM

@John Macdonald: That is obfuscation, not encryption. Which indeed has its uses – Windows shell does something even simpler when it keeps stuff in the registry, to avoid questionably-programmed apps messing with said stuff.

Wael May 12, 2015 3:38 PM

@Bruce,

My still-relevant 1998 essay: “Memo to the Amateur Cipher Designer.”

That was a great essay with several valuable “advices”. From what I see, no one listens, and every peepingTom, Dickhead, and dirtyHarry (emphasis on the middle name, as always) rolls out their own super secure crypto algorithm, stating it’s “secure”. In reality, such algorithms qualify as Crapography 🙂

We spent over 1000 man-hours analyzing Twofish, breaking simplified versions and variants, and studying modifications

Would you say the minimum amount of time needed to change an “unknown” to a “known” is 10,000 hours’ worth (four to five years of full-time work) of cryptanalysis?

Martin Walsh May 12, 2015 5:29 PM

Here’s a suggestion. Remember those border guards who can tell if someone is a terrorist just by looking at them? Boy, those guys are swell. Why don’t you hire those guys to evaluate security systems?

Justin May 12, 2015 5:33 PM

Hmm. A lot of proponents of weak homebrew crypto on this thread, it seems. It’s not about money. It’s about using crypto primitives that are known or believed to be very difficult to break, and avoiding known pitfalls when putting these together into a system.

This is the key right here: Anyone can design a cipher that he himself cannot break.

The right thing to do is to adapt something that is known to have withstood serious attempts by experts at cryptanalysis. Take AES, for example, or any of the finalists for the AES competition. Right off the bat, there is no need to use something weak or outdated like RC4 or DES. By using known good components and known best practices, there is no reason someone can’t become an expert at putting together systems that are very hard to crack, without necessarily being an expert cryptanalyst. People like Bruce do their best to teach this stuff, and no one wants to take the time to learn.

The people who designed Bitcoin and the other cryptocurrencies, for instance, weren’t necessarily expert cryptanalysts. They didn’t invent a new hash; they used well-known cryptographic primitives and best practices that had withstood a lot of serious attack. People who are not experts just want to design something novel, and they dismiss the wisdom of tried-and-true approaches.
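In code, that advice mostly amounts to reaching for a vetted authenticated-encryption interface instead of composing RC4 with a homemade MAC. A minimal sketch using AES-GCM from the pyca/cryptography package (one reasonable choice among several, assuming a recent version of the library):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # store and protect this out of band
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # 96-bit nonce; must never repeat under the same key
ciphertext = aesgcm.encrypt(nonce, b"meter reading: 42 kWh", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)  # raises InvalidTag if tampered with
```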

Anura May 12, 2015 5:38 PM

The very first actual block cipher I designed, I thought was pretty good. It used a 256-bit key and a 128-bit block size, with 64 rounds of encryption. I immediately realized that if you used an all-zero key, then the plaintext and the ciphertext were the same, but I thought, “Well, as long as you use a random key, you should be fine.” This should be a serious red flag to anyone who knows anything about cryptography. It wasn’t until a year or so later that I figured out how to perform a chosen-plaintext attack that recovered one bit of the key per chosen plaintext. At this point, I don’t have nearly as much confidence in myself. I now have the knowledge and confidence to write a cipher that I am pretty certain I will never be able to break, as well as the knowledge that the fact that I can’t break it means precisely nothing.

I can imagine what would have happened if I had worked for a company where I designed that cipher, thinking I was smart enough to roll my own, and then left before realizing how stupid it was.
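Anura’s all-zero-key symptom is easy to reproduce in miniature. Here is a deliberately bad toy (my sketch, not the cipher described above): when the key enters only through XOR and nothing key-independent breaks the linearity, a zero key turns the whole cipher into the identity map.

```python
def toy_encrypt(block: int, round_keys: list) -> int:
    # Each "round" is just an XOR with a round key: no S-box, no
    # key-independent permutation. With all-zero round keys the cipher
    # is the identity; worse, it is fully linear for any key.
    for rk in round_keys:
        block ^= rk
    return block

assert toy_encrypt(0xDEADBEEF, [0] * 64) == 0xDEADBEEF  # plaintext == ciphertext
```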

AC May 12, 2015 6:04 PM

@MarkH

“2. If you aren’t a competent cryptanalyst, and you independently evaluate a cryptosystem to be strong, your evaluation is worthless.”

More accurately, evaluating a new cipher is not a task for one person. That’s true even for an expert cryptanalyst.

Martin Walsh May 12, 2015 6:22 PM

Here’s one big difference between what you’re calling crypto and aviation. At least with commercial and military aircraft, the validity of a design can be exhaustively tested and proven before anyone even gets on board.

But you have NO WAY to measure the strength of your so-called primitives. That’s why “peer-review” always boils down to just waiting a long time to see if anyone finds something wrong with it.

Here’s the REAL SNAKE OIL. Anytime someone says “we recommend a couple extra rounds just to be sure”, run. They don’t know anything. That’s why their analysis always boils down to some bullshit paper on how something is built. Imagine an automobile manufacturer telling the gov’t “we didn’t do any crash tests because we bought our wiring assemblies from Acme Cable and because the metal is all such and such an alloy.” That’s what you’re getting. And you better believe there’s a lot of money and power being spent to keep anything from being changed. Just try and move the furniture and see what happens.

Anura May 12, 2015 6:41 PM

@Martin Walsh

Anytime someone says “we recommend a couple extra rounds just to be sure”, run. They don’t know anything.

Absolutely no cryptanalyst can give you a specific answer on how many rounds a cipher should have, just an arbitrary rule of thumb based on what they know they can break. We won’t be able to get provably secure block ciphers at least until we can prove P != NP. Until then, the number of rounds in a given cipher will remain arbitrary.

Clive Robinson May 12, 2015 6:56 PM

@ Martin Walsh,

Here’s a suggestion. Remember those border guards who can tell if someone is a terrorist just by looking at them? Boy, those guys are swell. Why don’t you hire those guys to evaluate security systems?

Err there is a little gem of awkward truth in that suggestion.

People who break systems of all kinds do it in one of three ways,

1, Dumb luck.
2, Having an itch and keep scratching.
3, Applying previous known failings to new systems.

The first two methods are how new attacks are found; they are also the way various types of criminal get caught, and to a certain extent they are encapsulated by Bruce’s “thinking hinky”.

@ Justin,

It’s about using crypto primitives that are known or believed to be very difficult to break, and avoiding known pitfalls when putting these together into a system.

Unfortunately there is a problem with this reasoning: it fails, as the history of FEAL shows.

There are specific instances of attack, and these fall into more general classes of attack; with a little bit of thought you can see that there are,

1, Known attacks within Known classes.
2, Unknown attacks within Known classes.
3, Unknown attacks within Unknown classes.

It is also possible to hypothesize new classes of attack where there are not yet specific instances of attack, and move forward to finding one via various methods.

The problem is that the design method you outline will only work against points one and two. You cannot design a system to be secure against point three, except by luck and by being conservative in your design, for the obvious reason that you are unaware of attacks or classes of attack yet to come.

Further, being conservative is generally not in the requirements document, because conservative is not what the industry wants; it’s seen as being “Old and Inefficient”…

I have a maxim of “Security -v- Efficiency”: whilst it is quite possible to design a secure algorithm, you also –unlike the AES competition– have to consider the practicalities of implementation. The problem is that, in the general case, trying to make secure algorithms efficient is a little like a bubble under wallpaper as you are putting it up: pressing on the bubble does not get rid of it, it just moves or fragments it.

In the case of AES, the competition emphasized “efficiency” in speed and hardware resources. The result was optimised implementations that opened up lots of timing and power-signature side channels. Designers then used these optimised designs without understanding them and thus put usable attack vectors in their products… many of which are still there and still get put in new products.

There are a myriad of similar pitfalls for designers, any of which can inadvertently throw security out of the window in practical implementations. The likes of the NSA are only too aware of this, as can be seen from their AES-based IME, which is only rated as secure for “data at rest”.

JustWnatToKnow May 12, 2015 7:14 PM

…how much of that old snake-oil crypto and this new offering comes from the spy agencies?

Parry Noir May 12, 2015 9:15 PM

@Anura

We won’t be able to get provably secure block ciphers at least until we can prove P != NP.

That’s only true if you define security in a certain way.

Figureitout May 12, 2015 9:52 PM

Anyone can design a cipher that he himself cannot break.
–Anyone can lob stones at the latest fail w/o trying to implement anything themselves in the pig pen we work in. And very VERY few people can actually enunciate what “properly implemented” crypto actually means w/o reverting to bullsht figures of speech. It’s “proper” until someone just side-steps it (it’s also less fun work reversing bullsht OTPs and weak ciphers), and then the crypto isn’t really the most important part of the security system anymore.

Crypto only works if you beat attackers and they assuredly can’t get a good entry point until there’s multiple layers of crypto, if they’re in your systems then it’s a facade. What we need more than ever is assured verification (no you can’t just encrypt something and call it “assured”, you can encrypt malware too; likewise just handing out a “checksum”, fairly strong but not good enough). I can’t see a way unless starting from a sane, clean, simple state (simple components, handwound and shielded transformers, etc.) but the reality is far from it and it’s too much logic to rebuild (that’s the biggest problems w/ small PC’s, too little logic that needs a conscious human following it all and checking it at least 4 times; and no compiler to hold your hand, you need to keep the logic straight yourself). How can you think when all your toolchains, and computers can be easily infected? They’re all relying more on internet-connectivity, and if you don’t keep up w/ the latest parts, guess what?–The old parts go EOL and you’re SOL if you want to make $$ and survive.

And after scaring all the potential crypto designers away as they copy older work, we have a state today where we still use crusty old ciphers, and all but a few don’t know how to do anything new or move on to implementing it on something besides a chalkboard. Long-term, we’re screwed again.

Slartibarslow May 12, 2015 10:38 PM

Out of curiosity, Bruce, what was your formal cryptologic background before Blowfish?

Thoth May 13, 2015 1:02 AM

I guess @Bruce Schneier got a bit too aggressive with the title, but other than that, it should be a lesson to us that security protocols must be handled with the most delicate care, with the knowledge that a break in a protocol’s security can be disastrous.

In a protocol’s security, well-known constructs must always be used, and homebrewed crypto/security should never be in the picture.

Probably the hostility is aimed at the somewhat aggressive title @Bruce Schneier used in this post, but the thought that unproven crypto (here, already broken) would be used in actual systems to protect Smart Grid devices is quite a wake-up call for anyone who wants to pen down their own crypto, not get it proven, and put it into critical systems for security purposes: re-think again.

It is good that the protocol is open for public analysis; that is why the weaknesses of the crypto were easily caught. Closed-source crypto and protocols like the MIFARE system by NXP, NXP’s proprietary crypto ciphers, the A5 GSM ciphers, and many other closed-source crypto protocols and systems contain hidden dangers of their own.

Thoth May 13, 2015 1:16 AM

@Slartibarslow
Everyone has to start their crypto “career” somewhere, whether as a cryptanalyst in the beginning or as a crypto creator.

If Blowfish was created without cryptanalysis experience, and Blowfish has stood the test of time for years, then @Bruce Schneier is really lucky. If he had some private (unpublished) cryptanalysis experience, then that does not mean that his creation of Blowfish is invalid either.

Using a well-established cipher (and using it properly) is what makes the difference between a secure protocol and an insecure one.

@all
Cobbling together known crypto primitives like PHTs, XORs, S-boxes, MDS matrices and whatnot in the hope of deriving a secure cryptosystem still does not make it secure unless proven over time.

Clive Robinson May 13, 2015 2:39 AM

@ Thoth,

the thought that unproven crypto (here, already broken) would be used in actual systems to protect Smart Grid devices is quite a wake-up call for anyone who wants to pen down their own crypto, not get it proven, and put it into critical systems for security purposes

This is not the first time that people who should know better have done this; the most public instance was WiFi WEP.

The issue is that engineers are bad at security, and that includes quite a few who call themselves security engineers. It’s due in large part to their training and a misunderstanding about what a proof is.

Engineers in general work on simplification and testing, using simplified models to establish things such as material strength under test conditions. They then build systems based on those models and tests.

Unfortunately, simplification for testing ignores complex interactions, and it’s in these untested shadows that vulnerabilities exist.

When such vulnerabilities bite, as they sometimes do (the Tacoma Narrows Bridge, etc.), it becomes clear that the testing was insufficient for the task.

Testing cannot eliminate such vulnerabilities, as our desire for increased complexity outpaces our discovery rate of complex interactions, let alone our ability to model them and develop new tests to warn of, mitigate, or eliminate them.

That is, just because a problem has not been seen before does not mean it does not exist, waiting to be seen under the right light.

But that is not the only issue: engineers lean on hard science and probability in equal measure, which is a real issue when it comes to security. In much engineering it’s generally assumed that events happen at random, and thus probabilities can in effect be multiplied. In security it’s fatal to assume things happen at random; an attacker will try to make the probabilities meaningless, aligning them not as one in a billion but as one in one.

Z.Lozinski May 13, 2015 6:43 AM

@Anon Techie,

There is a reason why professional cryptologists like William Friedman (NSA), Mary D’Imperio (NSA) and Brig. John Tiltman (GCHQ and NSA) spent years looking at the Voynich Manuscript. That someone in the C15, equipped with nothing more than pencil and paper, could write a text that passes many of the tests for being a natural language, and that we still can’t read or make sense of, falls into Clive’s “Unknown Classes”. Because that means someone in C20 Berlin, or C21 Afghanistan, equipped with nothing more than paper and pencil, could do the same…

z May 13, 2015 7:11 AM

This is what I have been unsuccessfully warning people about with regard to the Internet of Things. Security will be done wrong on a large scale. It’s hard enough to do with the regular internet, and at least everyone recognizes that it’s important there. I don’t think consumers and companies realize that people are actually going to try hacking these things. It’s very hard for the average person to think like an attacker and break out of the “why would anyone want to hack this?” mindset.

Martin Walsh May 13, 2015 11:03 AM

Just ask yourself, why would someone write this in one place

“We advocate research and experimentation on new design approaches for cryptographic standards”

then turn around and cultivate a climate of ridicule and contempt for anyone conducting research and development in secure systems design?

It’s interesting that Snowden makes statements about “properly implemented” encryption vs. “home brew” but doesn’t have any idea how to protect keys. It’s interesting that the same people who readily use terms like “peer-reviewed” were the same people who recommended TrueCrypt. Was it because so many people were using it for so long?

MarkH May 13, 2015 12:22 PM

Well … it’s been an interesting discussion, with energetic resistance to Bruce’s position. One reason this push-back is interesting to me is that Bruce has been making this point repeatedly, consistently, and relentlessly for at least a decade now. Naturally he posted this example, because it is a powerful exemplar of how dangerous cryptographic arrogance truly is!

I once heard an attorney in the field of criminal law say on TV that the profession has a sort of truism (paraphrased): “If you try to murder someone without being found out, there are 100 ways to get it wrong; if you are a genius, you will think of 50 of them.” Regardless of whether it’s an accurate statement about violent crime, I think it is the right mindset for the engineering of strong cryptosystems, and this is what Bruce has long been telling the world.

For the most part, engineers are pretty smart, have at least some amount of fairly rigorous education, and have some confidence based on a history of contributing to the design of systems that were considered successful. OK, we’re bright kids; what’s so hard about infosec?

John Galt III got it exactly right in his comment above, with a link to the fascinating Errol Morris article … to varying degrees, our intellectual limitations prevent us from knowing where those limits are. If you “roll your own” crypto, there are 100 ways to get it wrong, and if you are as big a genius as you suppose yourself to be, you will think of 50 of them.


@Martin: Bruce’s post evidently touched a very tender nerve; I haven’t yet decrypted what’s eating you. Your comment above accuses Bruce of cultivating “a climate of ridicule and contempt for anyone conducting research and development in secure systems design.”

First of all, that is demonstrably false. Schneier and his colleagues are sometimes active in R&D in secure systems design, and Bruce has made posts or other comments favorable to such work performed by others.

Second, we seem to have very different standards for what constitutes “research and development.”

There are probably thousands of “inventions” every day (many not original, but unknown to the inventor). Engineers are constantly cobbling together available components to solve some problem: we call this “design”. Every time somebody “invents” something, or puts pieces together, is that “research and development”? When a few rocket scientists at ETSI (owner of the tragically flawed OSGP) invented their own “MAC”, was that really research and development? Really? Seriously?

I once drove my car home after a radiator hose burst, by wrapping a dish-washing glove around the rupture and holding it in place with zip ties (those were handy materials in that late evening). I can assure you, I was NOT practicing R&D.

I don’t classify tinkering by people in a field where they are grossly ignorant and incompetent — as the ETSI engineers have proved themselves to be — as research and development.

Apropos of TrueCrypt: Bruce (and other security evangelists) make a very strong distinction between building your own crypto implementation (extremely risky, even if you do it with paranoid caution) and inventing your own crypto primitive (practically guaranteed suicide). The latter is what the folks at ETSI did: they planted a landmine in front of their headquarters, and then stomped on it forcefully.

As far as I understand TrueCrypt, it is based on widely used, standardized, recommended, and strongly vetted primitives. The way the containers themselves are designed seems to be an original (and therefore risky) piece of crypto design, but the confidentiality of data within a container would be secured by the standard primitives even if a container could be distinguished from a blob of random data. The designer(s) of TrueCrypt didn’t practice the pants-sh*ttingly stupid arrogance demonstrated by ETSI.

Anonymous Cow May 13, 2015 12:30 PM

…a tried and tested combat aircraft like the A10 Warthog, which has been retired in favor of an unproven design with “advanced capabilities*”…

The A10 is still flying today, albeit in reduced numbers. As capable as it is, age is catching up with it. The A10 is a 1970s-era aircraft, and its airframe gets a lot more stress than the 1950s-era B52s and KC135s. More importantly, its manufacturer – Republic Aviation – no longer exists, so parts are increasingly hard to get.

But your point on its replacement is valid. It’s not easy to make a combat aircraft that can do a lot of missions very well; previous attempts have not gone well. The F111 was proposed for both the Air Force and the Navy; it turned out too heavy for the Navy, so they never bought any. And the Navy/Marine Corps was looking at the F16 during its trials, but then the Air Force downgraded the air-to-air capabilities, and thus the Navy/Marine Corps went with the other proposed aircraft, which is now the F/A18.

Sancho_P May 13, 2015 1:42 PM

Um, thinking of our elite’s mindset, probably there were no amateurs involved?
“Get it right, have the golden key built in from the beginning!”

moo May 13, 2015 2:55 PM

I didn’t read all the comments yet, but there are several trollish comments in just the first half.

Regarding amateurs making their own crypto: there’s a fool born every minute, and anyone with no experience attacking crypto algorithms who tries to write their own definitely falls into this category.

To anyone tasked with building a crypto solution for their organization: First, read Bruce’s essay about amateur cryptography, from beginning to end, TWICE. Next, get the book “Practical Cryptography” (ISBN 0471223573) and design your protocol(s) using only the building blocks listed in that book. Then hire an actual cryptographer to review your design BEFORE you invest a bunch of effort in building it. And if you’re serious, you should probably hire them to try to attack it afterwards.
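For a sense of what “using only the building blocks” can look like, here is a minimal encrypt-then-MAC sketch built from vetted components (AES-CTR plus HMAC-SHA256 with independent keys, via the pyca/cryptography package). The function names and parameters are illustrative; treat it as the kind of design you would hand to that cryptographer for review, not as a finished product.

```python
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Encrypt-then-MAC: encrypt first, then authenticate nonce + ciphertext.
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ct = enc.update(plaintext) + enc.finalize()
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def unseal(enc_key: bytes, mac_key: bytes, sealed: bytes) -> bytes:
    # Verify the tag in constant time BEFORE decrypting anything.
    nonce, ct, tag = sealed[:16], sealed[16:-32], sealed[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
    return dec.update(ct) + dec.finalize()
```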

J n the river Lethe May 13, 2015 3:03 PM

The main point I have always gotten from Bruce’s statements on amateur cryptography is that in isolation it is dangerous. I never got the impression that people shouldn’t learn or tinker. He always emphasizes the need to test, and to submit your work to others to analyze.

To me it can be like the guy who thinks he has figured out the Grand Unified Theory of physics. Too often those hobbyists are middle-aged, frustrated, and undereducated, trying to prove their worth to the world. In the 70s it was backyard mechanics trying to prove the oil-company conspiracies. Now it is cryptography. Maybe?

If you worry that you may be nuts, you are probably boringly sane.
If you think you are the only one in world that is sane, you may very well be nuts.
If you know you may be a little nuts, (OCD, etc.), you are creative. But be careful! 😉

moo May 13, 2015 3:18 PM

There aren’t many non-rocket-scientists who believe they could easily design and build their own moon rocket. Why are there so many non-cryptographers who believe they can design their own cryptographic algorithms and implement their own cryptographic primitives? It’s like nobody ever told them that building good crypto is very hard. Or else they decided not to believe it.

One big difference: with a moon rocket, if the design or implementation is bad, the rocket will probably explode (or at least fail to get off the ground). Cryptography failures are usually less obvious, but in some ways even worse: you’ll think you are safe from the bad guys, and you’ll think your crypto “works” (because the app can communicate successfully through it), but the first time a well-organized criminal group or government agency or even a lone bored teenage hacker in his bedroom takes a crack at your system, they’ll find a way to break it and steal all your data, and you’ll be lucky if you even find out about it.

Don’t be a toddler playing with a loaded gun. Leave crypto implementation to professionals, or at least hire a professional to help you do it right.

mb May 13, 2015 3:43 PM

@Martin Walsh

That is why the rest – really everyone – is supposed to USE WHAT IS REVIEWED AND TESTED. There are more than enough implementation errors left to fuck up. No reason to take the grand tour of fuck-ups. And there really IS NO REASON.

Deploying unproven algorithms and researching them are two completely different horses. The first one is, with near certainty, a dead one.

The outcome is quite literally ALWAYS the same: total breakdown. Name ONE case in which someone decided to use a homebrew solution and it didn’t turn out to be straight from the book of bad ideas. These cases virtually do not exist. The numbers are so few that it is quite obviously not a good idea to copy.

If you need a cipher, pick one that is applicable and then make sure you implement it correctly.

Alain May 13, 2015 4:05 PM

I first read the comment from OSGP itself:

OSGP said “there have not been any reported security breaches” of any deployed smart metering or smart grid system built with the current OSGP specifications, and that systems built with these specifications include a “comprehensive multi-layer security system that has always been mandatory”.

This is not about amateur crypto, but just a money thing. OSGP probably doesn’t care a thing about its crypto strength. The encryption is just a legal annoyance they have to build in.

Bruce Schneier May 13, 2015 4:27 PM

@ Andropause:

“The article is a shameless ad for older essays/blogs, etc. Most of it consists of re-posting the abstract of a paper, so if this blog entry has any value at all, it is so well hidden as to withstand all cryptanalysis.”

Even worse, most of my blog posts consist of nothing more than a link and a sentence, and most of the time a quote from the thing linked to.

You’re right that my blog would have more value if I had something critical to say about everything I post. And, honestly, I would like to have time for that. But I don’t, so this is what you get.

There are lots of security sites out there, though. Good luck.

Bruce Schneier May 13, 2015 4:29 PM

“How hard would it be to create a crypto that isn’t breakable even by its maker? Like in a way to encrypt something forever?”

It’s trivially easy to create an unbreakable algorithm. AES-256, or Threefish-512, with like a thousand rounds: there, I just did it. What’s hard is designing secure algorithms within reasonable performance constraints. What’s even harder is designing secure systems that use cryptography.

Bruce Schneier May 13, 2015 4:32 PM

“Anytime someone says ‘we recommend a couple extra rounds just to be sure’, run.”

I disagree. That seems a prudent design decision to me.

fads0j9h0 May 13, 2015 6:04 PM

@John Galt III

The general thrust of your comments is in the direction of philosophic skepticism; in other words, that certain knowledge is impossible.

Feynman also had this orientation, as evidenced by his lectures on the character of physical law. His philosophy of science is logical positivism. That approach is math-heavy, and predictions are required for proof.

Anyone having that orientation is going to tend to line up against the idea that an amateur can produce a solid crypto solution, not because solving math problems is hard, but because certain knowledge in general is impossible.

And the pro-“peer-review” folks are going to take aim at lone individuals as a consequence of the collective subjectivism found in Kuhn’s philosophy of science.

So the matter of whether a man can solve a math problem is triggering a response colored by cultural bias.

Fascinating.

Zenzero May 13, 2015 6:08 PM

@Bruce Schneier

“You’re right that my blog would have more value if I had something critical to say about everything I post. And, honestly, I would like to have time for that. But I don’t, so this is what you get.”

I can only speak for myself, Bruce, but you’ve created a wonderful place for discussion of topics that matter. Your posts generate conversation, so they are golden. Keep up the good work.

Thank you

Ari Trachtenberg May 13, 2015 8:58 PM

Amateur != Unvetted. There is nothing wrong with amateurs making up cryptosystems – the problem is with unvetted cryptosystems being deployed (even by “professionals”).

Nick P May 13, 2015 9:14 PM

@ Bruce

re increasing rounds

I totally agree. It’s been one of my recommendations, too. The reason is incredibly obvious: published attacks on most well-designed algorithms needed exponentially more work to succeed as the rounds increased. Many fail to be practical even well under 10 rounds. Specific example: IDEA’s choice of rounds became its main security margin over time. Conceivably, adding some more to the same design would extend it for years more.

So, anybody implementing crypto should always (a) use a decent number of rounds and (b) keep it a tunable [upward] parameter for offsetting effect of cryptanalysis.
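To make (b) concrete, here’s a toy sketch in Python. The round function is a made-up stand-in, not a real cipher; the only point is the structure of a rounds parameter that can be raised but never lowered below the vetted baseline:

```python
# Toy sketch of a tunable-rounds design. The round function is a
# hypothetical ARX-style stand-in, NOT a real cipher.
def round_function(state: int, round_key: int) -> int:
    state = (state + round_key) & 0xFFFFFFFFFFFFFFFF                 # 64-bit add
    state ^= ((state << 13) | (state >> 51)) & 0xFFFFFFFFFFFFFFFF    # rotate-and-xor
    return state

BASELINE_ROUNDS = 32  # the designed-in security margin

def encrypt_block(block: int, round_keys: list[int], rounds: int = BASELINE_ROUNDS) -> int:
    # The parameter is tunable upward only: cryptanalytic progress can be
    # offset by raising it, but it can never drop below the baseline.
    if rounds < BASELINE_ROUNDS:
        raise ValueError("rounds may only be tuned upward")
    state = block
    for i in range(rounds):
        state = round_function(state, round_keys[i % len(round_keys)])
    return state
```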

@ Ari

There’s definitely no problem with amateurs making cryptosystems in general. The problem is any amateurs that build and field new crypto where peer-reviewed solutions already exist. Another problem is anyone else doing the same with those amateurs’ work. So, both the usage and lack of review are problems. Usage shouldn’t occur until a certain amount of review occurs.

The act of building itself can become an issue when we consider ethics. A new cryptosystem might be required for a given situation. Engineers without sufficient crypto experience should decline to build it and instead contract a professional to reduce risk. If they don’t, then they are intentionally endangering users by building the amateur cryptosystem with knowledge it will likely fail. They might also have acted fraudulently if they misrepresented their own skill in the area. If management forces them to, then it’s management’s fault. But one should only build what they’re qualified to build in safety- or security-critical use cases.

Figureitout May 13, 2015 11:29 PM

Clive Robinson
But that is not the only issue, engineers lean on hard science and probability in equal measure, which is a real issue when it comes to security.
–Believe me, we’re aware of the security issues. I don’t, but many scoff and will gladly wait for a hack and then fix it eventually. There’s more to life than security though, well, except for “job security” b/c otherwise you can’t eat and you die. I don’t see military or security agencies selling their “secure” products on the open market (as Wael loves to say way too much, “every tom, dick or harry” can have a looksy at your product and try to dump code, xray the board, etc.; reverse and copy it or just start hacking protocols), they keep them secret. B/c they would be broken too (oh they have a bunch of laws making commercial products insecure by law and will guillotine anyone probing theirs, ok.)…

They eliminate a huge chunk of that probability w/ that, and then having physical guards to facilities and again tech. centers (unless the guards are under-paid and under-appreciated and flip on you). In that regard, comparing leading edge commercial designers to military ones is unfair; maybe they should swap roles sometime.

moo
Why are there so many non-cryptographers who believe they can design their own cryptographic algorithms and implement their own cryptographic primitives?
–No one said that bullsh*t. We’re saying the “experts” can’t even enunciate what “proper” is, and it is in all likelihood “wrong”. And also that they dust their hands off on the algorithm stage but don’t see thru the finalized product; and don’t understand the problems that WE run into implementing your “perfect” algorithm. Sometimes it WILL NOT work on a platform or under our other design constraints unless we re-write firmware or layout a board, and at extremes, redesign the chip. Guess what happens when you upgrade to a new chip and “enhanced” toolchain?–Everything breaks and it’s digging thru finding these insane bugs again, I act like I hate it but I love it, but still, it takes time b/c you have to zoom in on the tiny problem area.

Leave crypto implementation to professionals
–What, all 3 of them? Seriously, look at GPG (one fcking guy lol, wow…) or the TrueCrypt audit. It wasn’t even that rigorous at all, it was like 3 consultants and Matt Green; I like his blog and all, but he’s a mathematician trying on the implementation hat and it looked shaky. It would take *years* to review TrueCrypt code due to the low-level aspect of it (where oh so many problems can bubble up if that code is not good).

What else is there? OTR and ZRTP? That’s about it eh? More ciphers that you have to adapt to proprietary drivers and chips, let’s see some cryptographers try that and we’ll laugh this time.

Ari Trachtenberg
There is nothing wrong with amateurs making up cryptosystems – the problem is with unvetted cryptosystems being deployed
–Amen, personal systems that basically no one will have access to anyway; if you do there isn’t a tool or script you can use, dig in lol. If it’s some large public system w/ a risk to you if it gets hacked, I guess you can unload the risk using public crypto. And tools and/or scripts/published attacks will run on those ciphers; and it’ll probably have implementation errors and side-channels galore still…but we’re secure now, b/c we used peer-reviewed crypto (is that like open source code peer review…?).

Clive Robinson May 14, 2015 3:12 AM

@ Figureitout,

I don’t see military or security agencies selling their “secure” products on the open market…

Actually, you might be surprised: you can buy “British Inter Departmental” (BID) systems hardware with only some minor hassle. Mind you, the stuff is a pig to work with and can give you some real nasty shocks if you don’t isolate and earth it properly. It’s also a real pain to service and repair, as I less than fondly remember.

One reason the hardware is fairly readily available is NATO. That is, although various nations have their own crypto, their involvement with NATO means they have to talk to other NATO nations securely, hence the standard hardware kit that interworks.

Further quite a few European nations have “outsourced” their military phone and data networks, and the demarcation can be a little more fluid than you might expect…

As we now know, in part from Ed Snowden’s little escapade, even the US outsources much of its communications networks, and crypto kit ends up being manufactured, installed, operated and decommissioned by private commercial organisations.

However having access to the hardware does not these days mean you get access to the more interesting crypto algorithms, which come as upgrades in various forms. That said being able to study the hardware does give you some insight into the work involved with practical implementations.

Wesley Parish May 14, 2015 5:49 AM

Puts me in mind of something I drew up in response to a comment of my CSc lecturer once, when he was referring to (moderately) secure communications. I thought, let’s rustle up something based on Vigenère for a quick and dirty solution, and discuss it with him after the lecture.

You can do Vigenère as a ROT[n], and I thought (naively) that [n] could be handled by hashing the date. Alternatively, I did think that the ASCII of the text could be used for a running value of [n].

A quick discussion with the lecturer made it clear that it wasn’t very well thought out at all. If I could hash the date, then so could the interceptor. And using the ASCII for [n] was breaking the cardinal law of predictability.

Very quick, and very dirty, and completely unusable. 🙂
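For the curious, here’s a toy Python reconstruction of roughly what I had in mind (details are from memory and purely illustrative; needless to say, do not use this for anything):

```python
# Toy reconstruction of the broken scheme described above: ROT[n] with n
# derived by hashing the date. Illustrative only.
import hashlib
from datetime import date

def rot_n(text: str, n: int) -> str:
    # Caesar/ROT shift over A-Z; anything else passes through unchanged.
    return "".join(
        chr((ord(c) - 65 + n) % 26 + 65) if "A" <= c <= "Z" else c
        for c in text.upper()
    )

# The flaw in one line: n is derived from public information,
# so any interceptor can derive it too.
n = int(hashlib.sha256(date.today().isoformat().encode()).hexdigest(), 16) % 26

ct = rot_n("ATTACK AT DAWN", n)
# The "attacker" just runs the same public computation and shifts back.
assert rot_n(ct, (26 - n) % 26) == "ATTACK AT DAWN"
```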

Thoth May 14, 2015 6:32 AM

@Nick P, Clive Robinson, Figureitout
You could technically buy the AIM II chips (http://www.gdc4s.com/advanced-infosec-machine-(aim).html) off the market if you want. It is unclassified as long as the algorithm and keymats are not of a classified nature, as it is a programmable security core. Red/Black separation and all that stuff… that is, if you can convince the sales people you are legit and harmless.

Some guy on Youtube with a bunch of radios (https://www.youtube.com/user/LifeIsTooShortForQRP) including military ones.

You want a catalog of NATO products from high assurance to no assurance ? Link: http://www.infosec.nato.int/niapc/

It’s good to look around for high assurance products and use them as inspiration and knowledge.

John Underhill May 14, 2015 10:49 PM

I think the responses to this article illustrate a more fundamental break within the community. There appear to be two camps: the engineers, who are driven towards a more timely evolution of crypto systems… and the mathematicians, who, for good reason, are quite guarded in their trust of new ideas.
Each requires the other; too much caution invites stagnation, just as haste and half measures have no place in protocol design.
Examples of failed crypto systems are plentiful, true enough, but not all of these ‘experiments’ are failures. Some, like TrueCrypt and Bouncy Castle, became quite popular and widely used. I wonder how many times the authors of these libraries had their emails ignored, or were forced to endure the ‘don’t roll your own crypto’ speech… and I wonder how many of the cryptographers who gave that advice made offers to review their work, or in some way assist in making better libraries that would end up being used by millions of people.
As an open source developer, I know that all I can do is put something on a public forum, label it experimental, and hope that if it has some merit, someone might notice; that’s about all the help I’m going to get… so you see, it cuts both ways.
Just my 2 cents, but… I don’t think we are prepared for the technological changes that will come in this century, and I don’t think we are moving nearly fast enough to address the implications those changes will have on our security, privacy, and way of life. Maybe we need more people looking at these problems, and accept that there will be failures, but there will be successes as well.

Figureitout May 14, 2015 11:27 PM

Clive Robinson
–How many terrorist watchlists would that put me on though if I don’t unload the risk onto someone else? Not that I care for one more, I’m such a persistent threat as you know, let me tell you. Better watch out or I’ll tickle-fight you to death lol…

The algorithms come later, I want to see w/ my eyes what humanity calls the most secure computers in the world and god forbid, use them a little.

Thoth
–Nice links, thanks. Couple concerns, regarding the AIM, it was “NSA Certified”. That doesn’t mean much to me anymore, it actually means it’s got a backdoor almost 100%. I’ll look at it from a distance. The youtube guy, lol, WTH, life isn’t too short for QRP; that’s really cool. Whatever, reminds me of this guy that has his whole house filled w/ radios (he’s basically like a hoarder, radios everywhere you can’t even walk besides a small path thru house), I thought he’d have my radio, almost…

RE: looking towards military for inspiration
–Look at their engineers too, if you get a chance, they are in fact rather brilliant (speaks monotone, but you have to listen to content, not the presentation). I got to hear one talk about payloads into space, that blows the amount of problems I run into out of the water. You need near perfection for these successful missions…failure can have such severe consequences, I can’t imagine the pressure. And he could quickly answer a problem w/ commercial satellite issues, it was the right answer lol, I was testing him.

Thoth May 15, 2015 5:59 AM

@Figureitout
Those chips might have a convenient “key escrow” (a.k.a. backdoor) for US Govt use and for foreign SIGINT, of course. Hard not to expect them to put that stuff inside. It is good not to approach the chips directly with bare hands but to use a 10-foot pole, or just look at the concepts and then re-think your designs or learn something from them.

The AIM II (version 2) chip is co-developed by Thales (French) and General Dynamics (US). I am guessing both nations have some form of SIGINT and COMSEC agreement of sorts to allow co-development of the AIM II chip.

Not only do the Govts have brilliant engineers, they have field experience too to back their knowledge.

fajensen May 15, 2015 6:39 AM

@moo
To anyone tasked to build a crypto solution for their organization:

Preparation is good, however, in this new & globalised world e-con-me, the prudent “anyone” will leave a flaw or a back door that is good enough to pass the review.

Because there is a market for that sort of thing; its value may be several years’ worth of wages. Which is handy for when “someone” is outplaced.

Nick P May 15, 2015 12:55 PM

@ Thoth

Maybe. A lot of these things are what’s called Controlled Cryptographic Items (CCIs). The CCIs are illegal to possess unless you’re given permission through a process and all sorts of regulations. I’m still not sure of all the details of that, given that I’d probably not make it past the security check. 😉 I sent GD an email listing my favorite stuff and asking which are available to all U.S. companies. I’ll post their answer when I get it.

AIM II and any NSA-certified GD product are typically anywhere from good to great. They’re gonna cost you a pretty penny, too. Checking out the site showed me that they’ve acquired Argus Systems: makers of the PitBull trusted OS extensions. So, they have the seL4 kernel, a VMWare-style MLS desktop, an Orange Book CMW MLS desktop, a bunch of NSA-certified crypto, an embedded security chip, and even custom cellular base-stations. All in one shop, for sure. I can’t wait to see them integrate seL4 into more of their stuff. Or see what they do with Argus.

The NATO catalog is neat because it’s all in one place. However, it’s not high assurance: that’s all the security products that are NATO certified. You can tell its assurance level by looking at classification. NATO-Restricted and NATO-Confidential are the weak stuff. NATO-Secret has pretty strong security. COSMIC Top Secret is in the high assurance ballpark.

“The AIM II (version 2) chip is co-developed by Thales (French) and General Dynamics (US). ”

I didn’t know that. The French and U.S. collaborating on a high assurance product is strange given that they spy on each other. The French are major I.P. thieves, especially. I wonder what that means for it in terms of backdoors or if there’s some agreement on defense products. Anyway, both companies have great security engineers so it just increases my confidence in its security against foreign threats.

“Not only do the Govts have brilliant engineers, they have field experience too to back their knowledge.”

Very true. It’s why I trust them on COMSEC more than most. At least, their high end stuff.

“It’s good to look around for high assurance products and use them as inspiration and knowledge.”

Also true. It’s what I do. Here’s an example I found today.

This isn’t as strong as NSA’s Inline Media Encryptor (IME): it’s only rated to Secret, doesn’t mention TEMPEST, and is non-CCI (maybe commercial). Yet, you can see considerable difference in how it’s done vs commercial stuff. It uses a Crypto Ignition Key for activation rather than just a PIN typed into possibly infected machine. A big, red zeroize button is on the front. Tamper-circuits activate it automatically. Onboard, unique key generation. Some kind of recovery ability. Serial port for administrative use, which all high assurance stuff uses (hint). Typical price for their stuff: $2500 + support and accessory cost.

On the other hand, customers using 40Gbps SONET might opt for less encryption at $189,950 per unit plus $44,950/yr for support/maintenance. Anything at 40Gbps is pretty expensive ($20k+) in general but… holy shit lol…

@ Figureitout

” That doesn’t mean much to me anymore, it actually means it’s got a backdoor almost 100%.”

Thing is, though, it might not even have a backdoor. The reason I’m considering that is that most of the NSA-certified items can only be used by defence and military. That they don’t want you having it implies they can’t break it. I’ve also seen no evidence, in or outside of the leaks, that NSA spies on our defense contractors. Further, most of their high assurance protocols get keys straight from NSA or escrow them. So a backdoor in them makes little sense from so many viewpoints.

So, it might be backdoored and it might not be. It’s still an understandable, default position to not trust them if NSA is in the threat profile. Like with my email scheme, you can combine the best of one country with the best of a competing country. They’d have to collaborate on bypassing your encryption while implying to each other that their respective gear is subverted. Greatly lowers the chance they’d use a backdoor.

Figureitout May 16, 2015 1:06 PM

Nick P
–Yeah it might not, but I don’t trust their certifications.

I’ve also seen no evidence, in or outside of the leaks, that NSA spies
–Yeah, but the FBI was investigating the CIA head, and even then that unprofessional investigator let her personal feelings get involved and she got obsessed (turns out the “boyscout” was f*cking his biographer and giving her classified material, but he’s not a traitor like Snowden in some people’s eyes… Aren’t they supposed to tell all their relationships too, b/c they’re prime targets for spies and don’t know “this world”, and yet that wasn’t an issue either; meh, dishonorable scumbags).

get keys straight from NSA or escrow them
–Key escrow is a backdoor. I hate that term; it’s just a buffed-up way to describe a backdoor.

Like with my email scheme
–Can’t even trust that. If Snowden hadn’t gone public, how much stuff would’ve been exfiltrated, unknown until it’s too late? All these agencies have penetrated each other; they’re all untrustworthy by default. I find it darkly humorous that eventually there won’t be any new IP to steal, since all everyone does is watch each other and cheat lol.

Unfortunately, we have to go back to around the 1950s/60s, when we were starting to chain up more and more logic from first principles and protect solid, beautiful logic from side channels. I can practically see this from using the mostly single-purpose ICs, where it would be insane to have chunks of memory in them; there’d most likely only be external attacks. And having serial ports, it wouldn’t be hard to interact w/ arduinos or any other micros, and they won’t have the logic for many attacks to work (and, importantly, nowhere to hide).

Nick P May 16, 2015 1:28 PM

@ Figureitout

“Yeah it might not, but I don’t trust their certifications.”

A basic Linux desktop has millions of lines of code. The devices that make it work have miles and miles of circuitry. Nobody can verify that themselves with any realistic assurance. Especially professionals with work to do. This is the point of certifications. Even you will need a third party review of any substantial system.

Might want to figure out which kind you’d trust. You’re already trusting the makers of the equipment you’ve been using… without them doing a security review.

“Yeah, but FBI was investigating CIA head,”

What’s that have to do with NSA analysts’ job function? Prior publications and leaks plus recent ones show no spying on defence contractors. Given how much we’ve seen, that’s a significant omission. Worth further investigation is all I’m saying.

“Key escrow is a backdoor”

No it isn’t. A protocol on a device that has key escrow means any use of that protocol sends a copy of the key to a third party. This doesn’t reveal prior communications, future communications, internal device state, and so on. A backdoor is much worse than one compromised session. So, a device with no backdoor but a compromised protocol is still usable: avoid the default software.
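A crude sketch of the distinction (toy code, hypothetical names):

```python
# Toy sketch: an escrowed protocol leaks per-session keys, not the device.
import os

long_term_secret = os.urandom(32)     # never leaves the device

def run_session_with_escrow():
    session_key = os.urandom(16)      # fresh key for this session only
    escrowed_copy = session_key       # what the protocol hands the third party
    return session_key, escrowed_copy

_, leaked = run_session_with_escrow()      # third party learns this session
next_key, _ = run_session_with_escrow()    # ...but not this one
# 'leaked' exposes exactly one session; next_key and long_term_secret are
# untouched. A device backdoor would expose all of them at once.
```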

“Can’t even trust that,”

It worked and for many people. End of story. Only question is, “would the same scheme work for devices for current threat profile?” If you need real functionality, you don’t really have many alternatives. A diverse set of devices checking each other for both security and reliability will be key.

“chain up more and more logic from first principles and protect solid, beautiful logic from side channels.”

It’s a nice idea that might help. How do you know, though, that your simple ICs weren’t subverted with nanoscale backdoors? Your verification tools can’t see them, you probably can’t detect them during use, and the power draw will be insignificant compared to your primitive circuits. You keep saying you won’t trust black boxes and then use black boxes in your designs. A little odd.

Figureitout May 16, 2015 2:36 PM

Nick P
A basic Linux desktop has millions of lines of code.
–I know, you won’t find any arguments from me there. I enjoy tracing out startup code on micros, all the init() functions. There’s a difference between trusting and just using, b/c I can’t be isolated for job skills or just isolate myself, and it’s fun. So yeah, I know; it didn’t take me very long to see, even w/ “simple” chips, how things can go south real quick; it’s a really bad feeling.

What’s that have to do with NSA analysts job function?
–They do a lot of inter-departmental work now compared to before; it’s highly feasible, if the FBI/CIA do this (as well as all the LEOs spying on girlfriends etc.), that they’ll do the same thing.

No it isn’t.
–It’s a backdoor in the protocol then, it’s bad for me. If I can’t trust comms between devices, that’s in some ways just as bad as a physical backdoor; all computers send signals around. Backdoored internet is really annoying too.

How do you know
–I don’t, I’ve said that before. Probability. Now is it kind of absurd to have these sophisticated backdoors in hex-inverters, op-amps, RTC modules, shift-registers, and linear regulators? It is to me. I have to trust small black boxes b/c of budget and pre-built measuring devices (multimeters, ‘scopes) b/c I can’t rebuild by hand logic to something we consider usable. I can’t handle that, so yeah I give up after that.

I find it odd that you make big claims on security when you really can’t based on what’s still possible beyond a shadow of doubt. And remember before, this isn’t even an argument. I want strongest computer security in the world, so my threat model includes all of them, the best.–As a hobbyist. I don’t care people saying it’s impossible, it probably is, still going to keep trying again and again and again.

If I’m in security industry then yeah, I’d sell products/services for different threat profiles which does matter cost-wise for many things.

Clive Robinson May 16, 2015 4:34 PM

@ Figureitout, Nick P,

Further, most of their high assurance protocols get keys straight from NSA or escrow them. So a backdoor in them makes little sense from so many viewpoints.

Err, no. As I’ve said before, backdoors have been in crypto kit in the past, and the fact that the national signals organisation issues the keys is a reason to suspect one may still be there.

It works on the simple idea of a large key space in which, say, 15% of keys are considered strong and 15-30% weak, via some subtle mechanism. Thus the likes of the NSA, with knowledge of the subtle mechanism, only ever issue strong keys to their own users.

The idea goes back to the days of mechanical field cipher systems, which are very likely to fall into enemy hands and thus be duplicated by a less knowledgeable signals agency. This has been seen with, amongst others, the Hagelin coin-counting mechanism. Such an unknowledgeable enemy would just randomly select keys from the entire key space, and thus would pick weak keys quite often. This allowed the NSA an “easy in” on some traffic, which revealed a lot of information to provide “probables” etc. that would help break all but the strong-key messages, which, due to the wealth of information, became considerably less relevant.

We know from the escrow war last time around that the NSA went a slightly different way, with a cipher that was extremely brittle, in that even an apparently insignificant change vastly weakened the system.

Thus the NSA have a lot of “form/previous” in backdooring their systems in a way that makes them strong only for themselves.

I know the NSA were not alone in this: electronic cipher systems designed in Britain (BID kit) that used standard MSI logic chips had issues with large numbers of weak keys. So much so that there was a well-known key setting on one type of equipment where putting an easily rememberable “known plaintext” in produced human-readable ciphertext that was equally rememberable… which was used by techs for testing.
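To make the weak/strong key-space idea concrete, here’s a toy sketch; the weakness predicate is deliberately silly, the real mechanisms being subtle properties of the machines:

```python
# Toy sketch of a partitioned key space. An agency that knows the weakness
# predicate rejection-samples strong keys; a copying enemy samples the whole
# space and hits weak keys at the base rate, giving the knowing agency
# its "easy in".
import os

def is_weak(key: bytes) -> bool:
    # Deliberately silly stand-in predicate: low Hamming weight in byte 0.
    return bin(key[0]).count("1") <= 2

def issue_key_knowing_agency() -> bytes:
    while True:
        k = os.urandom(16)
        if not is_weak(k):        # the knowing agency never issues weak keys
            return k

def issue_key_copying_enemy() -> bytes:
    return os.urandom(16)         # the enemy picks from the whole key space
```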

Nick P May 16, 2015 7:57 PM

@ Figureitout

Come to think of it, once you build the machine, I have a good estimate of how long it will take to put a good stack on it: 14 years. Theirs is coded in assembler, so they skip all kinds of tool verification and inefficiencies. You’ll likely have to use assembler or an embedded C compiler given the hardware constraints of what you intend to build. I still think you should write yourself a macro assembler like HLA to get C-like structure with simpler verification and more efficiency.

Note: His Write Great Code book was wonderful in teaching me how almost everything gets transformed by compiler so you can code in a way that helps it. Or implement the same strategy in a macro-assembler.

@ Clive Robinson

I agree that there are those and plenty of other issues in the crypto they advocate for the world market, Americans, and foreign governments. That’s most of your examples. I’m only talking about their NSA-evaluated COMSEC gear that they use themselves and let defense contractors use. It’s not supposed to enter foreign hands. So, the only target would be themselves and U.S. defense contractors. They could backdoor this, but it only weakens them and their most trusted partners. So, that’s the distinction I’m making when I claim they might not have backdoored it at all past the protocol level.

Now, if it ships to another country, then they have reasons to weaken it or do something tricky. This motivation makes me wonder about GD’s AIM product now that Thoth told me Thales was their partner on it. The French are heavily into industrial espionage, especially against America. America is heavily involved in subverting everything for political and economic intelligence collection, including against France. So, does this give it greater assurance via more eyes that there’s no backdoor? Or a backdoor that only works two ways? Or two sets of very sneaky backdoors each country aims at everyone else?

These games…

Buck May 17, 2015 12:12 AM

@Nick P

That is fantastic!! I see now that it went over my head last May… Though, if you happen to have any more links growing in that farm of yours, I’d love to see a collection of modern-day practical assembly projects!

Figureitout May 17, 2015 2:38 AM

Clive Robinson
–I’ll just assume anything from NSA is tainted and their “seal of approval” is poison, one which they have over the years given to my school (I have observed multiple attacks on me and others there). Don’t want any of their “seals of approval”.

Nick P
–Yeah, it’ll be a while for various reasons, one being I simply can’t do it now, too many fuzzy spots; the idea is not merely a simple PC, but a side-channel-resistant one. I can hardly read assembly now (hate how it’s different on different chips, too many “whys?”, and I can’t just “know” what’s needed in startup w/o referring to the datasheet, but that may work to my benefit eventually) and some C startup code is difficult for me. Like just cruising thru startup code and onwards, and looking at “disassembly” options and how the next layer looks (I would like to get as deep as I can before I’m 6ft under, like delve into HDL and microcode as well). MenuetOS is amazing, and looks fun. But mine won’t be like that, more boring, and I will intentionally make it as hard as possible for lots of things to run. I’m unsure what I’ll support (pretty sure on PS/2).

There’s lots more to do before that, though, and these are side projects I work on over the weekend and when I’m tired after work and working out.

Way OT now so…

Nick P May 17, 2015 8:18 AM

@ Buck

Glad you liked it. Nah, there isn’t much to show for assembly projects that I’m aware of. Maybe this web server done with 256 instructions. There’s also SymbOS, an amazing 8-bit OS. Most of the best projects use C or C++ in clever, low-level ways. One of my favorite examples a while back was .kkrieger.

Most of the developments in assembly are by the research community trying to improve it in various ways. A nice example aims to give the power of assembly with excellent safety properties.

Zenzero May 18, 2015 7:25 AM

@ Nick P, @ Figureitout

“You’ll likely have to use assembler or an embedded C compiler given hardware constraints of what you intend to build. I still think you should write yourself a macro assembler like HLA to get C-like structure with simpler verification and more efficiency.”

On a side note, here’s an article you might find interesting:

http://llvm.org/docs/tutorial/LangImpl1.html

It’s basically an excellent 9-part tutorial on writing your own language using LLVM (http://llvm.org/). The language implemented is obviously just a play language for the tutorial, but it covers many interesting aspects of language design and would provide an enterprising individual with a good jump-off point for writing/designing a stable/secure/verifiable framework which could be added to over time.

Linus January 6, 2016 1:01 PM

If anyone is still interested: we found that even the RC4 variant used for OSGP’s data confidentiality can be attacked with Klein’s WEP attack from 2006.

OSGP re-uses the public 8-byte digest of each message as the RC4 IV, XOR-ing it with the shared 16-byte secret key. Maybe they thought this prevents the already known WEP attacks, where the IV is concatenated instead of XOR-ed? Turns out it does not.

The bottleneck of the attack is to accumulate about 90,000 eavesdropped messages (for a 0.5 success chance), and the first plaintext bytes must be predictable, such that the first keystream bytes can be derived. The prediction can be done as certain header bytes are always the same for certain message types, and as there is no message padding the message length can give away its type.

If a message is sent every 15 seconds (which I was told is a realistic value) you have to eavesdrop on a device for about 2 weeks. However, as in OSGP all devices share the same secret key (!), you can speed this up by eavesdropping on several devices.
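For concreteness, here’s a sketch of the per-message keying in Python; how the 8-byte digest is aligned against the 16-byte key below is illustrative, see the paper for the exact construction:

```python
# Sketch of RC4 and the OSGP-style per-message keying described above.
# The digest alignment (repeated across the 16-byte key) is an assumption.
def rc4_keystream(key: bytes, n: int) -> bytes:
    S = list(range(256))                       # KSA
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0                       # PRGA
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def per_message_rc4_key(shared_key: bytes, digest: bytes) -> bytes:
    # 16-byte shared key XORed with the public 8-byte digest. Because the
    # digest is public, each message yields a known related key, which is
    # what lets a Klein-style attack recover the shared key from
    # predictable first plaintext bytes.
    return bytes(k ^ digest[i % len(digest)] for i, k in enumerate(shared_key))
```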

Full paper and slides can be found here:
https://www.acsac.org/2015/workshops/icss/

Best regards,
Linus

Clive Robinson January 7, 2016 4:43 AM

@ Linus,

Thanks for the link.

Oh, and for the write-up of the fault: hopefully people will “take it on board” and learn from it.
