Cryptography Is Harder Than It Looks

Writing a magazine column is always an exercise in time travel. I’m writing these words in early December. You’re reading them in February. This means anything that’s news as I write this will be old hat in two months, and anything that’s news to you hasn’t happened yet as I’m writing.

This past November, a group of researchers found some serious vulnerabilities in an encryption protocol that I, and probably most of you, use regularly. The group alerted the vendor, who is currently working to update the protocol and patch the vulnerabilities. The news will probably go public in the middle of February, unless the vendor successfully pleads for more time to finish their security patch. Until then, I’ve agreed not to talk about the specifics.

I’m writing about this now because these vulnerabilities illustrate two very important truisms about encryption and the current debate about adding back doors to security products:

  1. Cryptography is harder than it looks.
  2. Complexity is the worst enemy of security.

These aren’t new truisms. I wrote about the first in 1997 and the second in 1999. I’ve talked about them both in Secrets and Lies (2000) and Practical Cryptography (2003). They’ve been proven true again and again, as security vulnerabilities are discovered in cryptographic system after cryptographic system. They’re both still true today.

Cryptography is harder than it looks, primarily because it looks like math. Both algorithms and protocols can be precisely defined and analyzed. This isn’t easy, and there’s a lot of insecure crypto out there, but we cryptographers have gotten pretty good at getting this part right. However, math has no agency; it can’t actually secure anything. For cryptography to work, it needs to be written in software, embedded in a larger software system, managed by an operating system, run on hardware, connected to a network, and configured and operated by users. Each of these steps brings with it difficulties and vulnerabilities.

Although cryptography gives an inherent mathematical advantage to the defender, computer and network security are much more balanced. Again and again, we find vulnerabilities not in the underlying mathematics, but in all this other stuff. It’s far easier for an attacker to bypass cryptography by exploiting a vulnerability in the system than it is to break the mathematics. This has been true for decades, and it’s one of the lessons that Edward Snowden reiterated.

The second truism is that complexity is still the worst enemy of security. The more complex a system is, the more lines of code, interactions with other systems, configuration options, and vulnerabilities there are. Implementing cryptography involves getting everything right, and the more complexity there is, the more there is to get wrong.

Vulnerabilities come from options within a system, interactions between systems, interfaces between users and systems—everywhere. If good security comes from careful analysis of specifications, source code, and systems, then a complex system is more difficult and more expensive to analyze. We simply don’t know how to securely engineer anything but the simplest of systems.

I often refer to this quote, sometimes attributed to Albert Einstein and sometimes to Yogi Berra: “In theory, theory and practice are the same. In practice, they are not.”

These truisms are directly relevant to the current debate about adding back doors to encryption products. Many governments—from China to the US and the UK—want the ability to decrypt data and communications without users’ knowledge or consent. Almost all computer security experts have two arguments against this idea: first, adding this back door makes the system vulnerable to all attackers and doesn’t just provide surreptitious access for the “good guys,” and second, creating this sort of access greatly increases the underlying system’s complexity, exponentially increasing the possibility of getting the security wrong and introducing new vulnerabilities.

Going back to the new vulnerability that you’ll learn about in mid-February, the lead researcher wrote to me: “If anyone tells you that [the vendor] can just ‘tweak’ the system a little bit to add key escrow or to man-in-the-middle specific users, they need to spend a few days watching the authentication dance between [the client device/software] and the umpteen servers it talks to just to log into the network. I’m frankly amazed that any of it works at all, and you couldn’t pay me enough to tamper with any of it.” This is an important piece of wisdom.

The designers of this system aren’t novices. They’re an experienced team with some of the best security engineers in the field. If these guys can’t get the security right, just imagine how much worse it is for smaller companies without this team’s level of expertise and resources. Now imagine how much worse it would be if you added a government-mandated back door. There are more opportunities to get security wrong, and more engineering teams without the time and expertise necessary to get it right. It’s not a recipe for security.

Contrary to much of today’s political rhetoric, strong cryptography is essential for our information security. It’s how we protect our information and our networks from hackers, criminals, foreign governments, and terrorists. Security vulnerabilities, whether deliberate backdoor access mechanisms or accidental flaws, make us all less secure. Getting security right is harder than it looks, and our best chance is to make the cryptography as simple and public as possible.

This essay previously appeared in IEEE Security & Privacy, and is an update of something I wrote in 1997.

That vulnerability I alluded to in the essay is the recent iMessage flaw.

Posted on March 24, 2016 at 6:37 AM

Comments

FoLI March 24, 2016 8:50 AM

It is time for some expert in hardware-based proof of programs to invent:
– a CPU
– an OS
– a keyboard
– a screen
– a serial port

None of them complex. All with proofs. All open source. Ideally, the OS would be written in the native assembly language of the CPU. The performance would be very 1980s, but that is the price to pay.

FoLI March 24, 2016 8:52 AM

My post above is a proposed solution to the paragraph of the essay that I now quote:

“The second truism is that complexity is still the worst enemy of security. The more complex a system is, the more lines of code, interactions with other systems, configuration options, and vulnerabilities there are. Implementing cryptography involves getting everything right, and the more complexity there is, the more there is to get wrong.”

Green Squirrel March 24, 2016 9:24 AM

“I’m writing these words in early December. You’re reading them in February.”

And we are reading the repost-on-blog version at the end of March.

Gary Walsh March 24, 2016 9:43 AM

“we cryptographers have gotten pretty good at getting this part right.”

No you haven’t.

All you do is the easy part and then leave the hard part to software developers. Like an arrogant scalpel manufacturer who claims they did the hard part and stupid surgeons are the reason things go wrong. Funny, your so-called experts haven’t fixed a damn thing in twenty years. All you do is talk talk talk.

steve king March 24, 2016 10:01 AM

Did you hear about the cryptographers that designed an automobile?

If you listened to the wrong radio station while you made a right turn the windshield would blow out suddenly. If you ever lowered the passenger side window while traveling over 100kph, and it was the first Tuesday of the month, the engine blew a gasket.

Owners complained, people died.

The cryptographers reacted: “They shouldn’t do that. Designing automobiles is hard. Just don’t ever lower the passenger side window while traveling over 100kph, if it’s the first Tuesday of the month. We put it in the manual. Didn’t you read the manual? Stupid people.”

Then the cryptographers all went to a convention and told everyone how hard it was to design automobiles and how it was so complicated and impossible for anyone to get right except for them; even those rocket engineers weren’t good enough.

Cellinottoobright March 24, 2016 10:10 AM

Then again, you don’t have to be perfect, you just have to be too hard for worthless government tax parasites who try to interfere with your legal privacy rights.

Like the FBI, who cocked up their investigation, then lied through their teeth saying only Apple could help them. Nineteen times the criminal scumbags lied. Comey lied to Congress. His FBI goons lied in court.

https://www.emptywheel.net/2016/03/22/dojs-pre-ass-handing-capitulation/

You could stop these shitheads dead with ROT-13. Watch, sad sack Stirling David Allen’s gonna stump em now.

Comey doesn’t even care about this dead guy’s phone. He wants this court order so he can kill the leaders of Black Lives Matter, MLK-style.

steve king March 24, 2016 10:18 AM

If medical technology were governed by NIST, a hundred billion dollars would get you a manual on how to remove a Band-Aid. A few hundred billion dollars more and the NSA could deploy technology that prevented you from reading the manual on how to remove a Band-Aid.

Security Philosopher March 24, 2016 10:36 AM

RE: Backdoors to encryption
If good encryption is illegal, only criminals will have good encryption.

Jack March 24, 2016 11:03 AM

@Security Philosopher

To be clear, “good encryption” = “security”… so… if good encryption is illegal, only criminals will be secure, everyone else will be insecure.

@Gary Walsh

Wow, careful, your shoulder looks a bit out of joint there with that chip on it.

“No you haven’t.”

I guess you’re saying cryptographers haven’t even gotten the easy part right? Manufacturers still design scalpels horribly? Why don’t you do better then if it’s so easy?

Nick P March 24, 2016 11:21 AM

@ FoLi

Proof of CPUs has been done several times: FM9000, Burger’s Scheme CPU, Rockwell Collins’ AAMP5G and AAMP7G, and Verisoft’s VAMP processor. The other components can be unreliable so long as the system can catch that, so you need to prove the I/O interface; Yale’s FLINT group has done a lot of that. The only thing slowing this kind of work down is a lack of people doing it. There are plenty of worked examples to build on, so jump into it.

Jakub Narębski March 24, 2016 1:07 PM

Reading the history of the Clipper chip, you notice an additional thing, namely that any backdoor system must itself be secure against being bypassed… which (as the Clipper chip’s short history shows) is not easy to do.

So backdoors would: 1) make us all vulnerable, and 2) not help against terrorists / pedophiles / organized crime / criminals / tax dodgers / … anyway.

KT March 24, 2016 1:50 PM

They really should be using techniques from formal verification to write encryption software. At the very least, they should have an informal proof (or explanation) of why their software is correct.

We’ve seen that in OpenSSL (and others) they aren’t even trying to think of everything that can go wrong: they are merely trying to get it to work (which, in security software, means it only looks like it works).
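
To make the contrast concrete, here is a minimal Python sketch, an illustration only and not formal verification, assuming the third-party ‘cryptography’ package rather than any code from OpenSSL: the round trip is the “it works” test, while the loop asks the adversarial question of whether every single-bit corruption is rejected.

```python
# Minimal illustration: a round-trip test ("it works") versus an
# adversarial test (every single-bit corruption must be rejected).
# Uses the third-party 'cryptography' package; values are illustrative.
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
aead = AESGCM(key)

plaintext = b"attack at dawn"
ciphertext = aead.encrypt(nonce, plaintext, None)

# The "merely trying to get it to work" test:
assert aead.decrypt(nonce, ciphertext, None) == plaintext

# Thinking about what can go wrong: no tampered ciphertext may be accepted.
for i in range(len(ciphertext) * 8):
    tampered = bytearray(ciphertext)
    tampered[i // 8] ^= 1 << (i % 8)
    try:
        aead.decrypt(nonce, bytes(tampered), None)
        raise AssertionError(f"bit flip at position {i} was accepted")
    except InvalidTag:
        pass  # rejected, as required

print("round trip and all single-bit tamper tests passed")
```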

Clive Robinson March 24, 2016 1:58 PM

One of the problems with the maths is what it does and does not cover, both in terms of actual theoretical design and practical implementation.

For instance, AES appears to be theoretically secure in certain limited ways; it was, however, very much insecure in early implementations. This was because the theoretical work ignored time-based side channels, amongst other things.
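
As a toy illustration of that class of implementation flaw (data-dependent behaviour leaking through timing), here is a hypothetical Python sketch of an early-exit comparison; it stands in for, and greatly simplifies, the cache-timing attacks on table-based AES rather than reproducing them, and all names and numbers are illustrative.

```python
# Toy timing side channel: an early-exit comparison leaks how many leading
# bytes of a secret value matched. Illustrative only; names are hypothetical.
import hmac
import time

SECRET_TAG = bytes(32)  # stand-in for a secret MAC tag

def leaky_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # returns at the first mismatch: timing depends on data
            return False
    return True

def average_time(guess: bytes, trials: int = 50_000) -> float:
    start = time.perf_counter()
    for _ in range(trials):
        leaky_equal(SECRET_TAG, guess)
    return (time.perf_counter() - start) / trials

wrong_first = b"\x01" + bytes(31)   # mismatch at the first byte
wrong_last = bytes(31) + b"\x01"    # mismatch only at the last byte

# Both calls return False, but averaged over many trials the second one is
# measurably slower; that gap is the side channel the proofs never modelled.
print("mismatch at first byte:", average_time(wrong_first))
print("mismatch at last byte: ", average_time(wrong_last))

# The standard-library remedy is a constant-time comparison:
print(hmac.compare_digest(SECRET_TAG, wrong_last))
```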

But it goes further: most crypto algorithms are used as building blocks in crypto modes. Some modes do not work well with some crypto algorithms, in part because the crypto algorithm design and mathematical proofs did not consider the use of that type of mode.

But the problem gets worse as you move up the protocol stack: often the proofs don’t overlap, and thus they have inbuilt security gaps.

The simple fact is that currently we don’t know enough to have a chance of getting the maths side right, let alone a practical implementation.

And that’s before we start considering the hardware security issues.

Mark March 24, 2016 5:53 PM

Considering the impacts associated with the ‘Wassenaar Arrangement’ (http://www.wassenaar.org/participating-states/) globally, not to mention what is now additionally controlled here in Australia by the Australian Government (http://www.defence.gov.au/DECO/_Master/docs/Australian_Export_Controls_and_ICT.pdf), one can easily see that government influence and controls over software design and publication will, no doubt, kill secure security products once and for all.

Cryptologic software design is not about maths; it’s about correctly designing programmatic algorithms to ensure security.

This may be a moot point, but http://www.foocrypt.net/

or

https://www.gofundme.com/foocrypt needs funding to live.

J. March 25, 2016 5:00 AM

“an encryption protocol that I, and probably most of you, use regularly.”
My bet would be on Badlock, the upcoming Microsoft Windows/SAMBA issue.

Who? March 25, 2016 6:50 AM

@ Clive Robinson

“One of the problems with the maths is what it does and does not cover, both in terms of actual theoretical design and practical implementation.”

Completely agree.

“For instance, AES appears to be theoretically secure in certain limited ways; it was, however, very much insecure in early implementations. This was because the theoretical work ignored time-based side channels, amongst other things.”

The math of cryptography is strong; what usually fails is the implementation itself. As you note, timing attacks were a useful side-channel attack against the first AES implementations.

“But it goes further: most crypto algorithms are used as building blocks in crypto modes. Some modes do not work well with some crypto algorithms, in part because the crypto algorithm design and mathematical proofs did not consider the use of that type of mode.”

Like the weaknesses in Electronic Codebook (ECB) encryption mode.
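
A minimal sketch of that classic ECB weakness (assuming the third-party ‘cryptography’ package; the key and message are illustrative):

```python
# ECB leaks structure: identical plaintext blocks encrypt to identical
# ciphertext blocks, even though the underlying AES is mathematically sound.
# Sketch using the third-party 'cryptography' package; values are illustrative.
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
plaintext = b"SIXTEEN BYTE MSG" * 4  # four identical 16-byte blocks

def encrypt(mode) -> bytes:
    encryptor = Cipher(algorithms.AES(key), mode).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

def distinct_blocks(ciphertext: bytes) -> int:
    return len({ciphertext[i:i + 16] for i in range(0, len(ciphertext), 16)})

ecb_ct = encrypt(modes.ECB())
cbc_ct = encrypt(modes.CBC(os.urandom(16)))

print("distinct ECB ciphertext blocks:", distinct_blocks(ecb_ct))  # 1: structure leaks
print("distinct CBC ciphertext blocks:", distinct_blocks(cbc_ct))  # 4
```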

Do you think these were the reasons behind AES being adopted as a standard by NIST? I agree, AES is difficult to get right and, even when it is, there is a risk of making mistakes by choosing the wrong encryption modes. So we must fight against both broken implementations and bad design choices made when deploying AES encryption.

“But the problem gets worse as you move up the protocol stack: often the proofs don’t overlap, and thus they have inbuilt security gaps.”

I fear I do not fully understand what you are saying here.

“The simple fact is that currently we don’t know enough to have a chance of getting the maths side right, let alone a practical implementation.”

So, do you think we should mark AES with a large “here be dragons” note and avoid it whenever possible?

“And that’s before we start considering the hardware security issues.”

Don’t start talking about hardware security issues. I fear we have no chance of winning this battle at all. We should accept that most of our hardware is either backdoored or at least weak enough to be backdoored (even if unintentionally, from the point of view of hardware manufacturers).

Taking Bruce’s second point (“complexity is the worst enemy of security”) even if current hardware is not backdoored, its design is so complex that it must have multiple backdoor-like bugs that will be discovered (and exploited) at some point.

Our only chance, at the hardware level, is trusting simple devices (like Alix/APU boards, old personal computers, or perhaps ARM-based appliances) to build the perimeter of our networks.

A Patriot March 25, 2016 9:32 AM

The government thinks it can keep the backdoor keys secret but it can’t even keep my SSN secret.

Clive Robinson March 25, 2016 11:05 AM

@ Who?

“So, do you think we should mark AES with a large ‘here be dragons’ note and avoid it whenever possible?”

Yes, AES needs a “here be dragons” label, but then so does most crypto. Thus, just like a crate of nitroglycerin, all crypto should be treated with considerable caution and care.

Whilst AES is not the strongest of the original finalists, it is the one everyone tries to work with for various reasons, so avoiding it is neither desirable nor practical.

“I fear I do not fully understand what you are saying here.”

When you can visualise it, it’s easy to understand. We often talk about systems as consisting of layers; what we don’t mention as much is that the individual layers consist not of one unit but of many effectively independent units, almost like a layer of oranges or apples in a box. The assumption is that the layer is full and that there are no gaps. Unfortunately, that is often not the case. Further, when it comes to proofs on each independent unit, the coverage of the proof is often less than the functional coverage of the unit. Thus not only are there gaps from missing units, the proofs for the units that are there often do not overlap, or even come close. The result is often that a gap in one layer aligns, or can be made to align, with gaps in other layers. It’s not that the proofs are incorrect; it’s just that there are gaps in the coverage, and thus security holes.

We’ve just seen this with the iMessage issue. The compression function used a linear CRC, which provably detects communications errors of a certain type, but not all errors. What the CRC did not protect against was certain bit-flipping attacks which exploited the types of error that the CRC does not detect. This alone would not have allowed an attack to succeed; however, at another layer a side channel became possible because attachments were effectively out of band. And so on at other layers; the result was that a knowledgeable attacker could align the gaps and get through the security of the overall system.
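
As a rough sketch of why a linear CRC gives no protection against deliberate bit flipping (toy Python, not the actual iMessage code): CRC-32 is affine over GF(2), so an attacker who knows which bits were flipped can compute the tampered message’s checksum from the old checksum and the flip positions alone, without reading the message.

```python
# CRC-32 is affine over GF(2): for equal-length inputs,
#   crc(a XOR b) == crc(a) XOR crc(b) XOR crc(zeros).
# So a known flip pattern lets an attacker patch the checksum so it still
# verifies. Toy sketch, not iMessage code; the messages are illustrative.
import zlib

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

original = b"PAY 0100 DOLLARS TO ALICE"
tampered = b"PAY 9999 DOLLARS TO MALLO"  # same length, attacker-chosen bit flips
delta = xor_bytes(original, tampered)     # the flip pattern the attacker applies
zeros = bytes(len(original))

old_crc = zlib.crc32(original)            # checksum as produced by the sender

# From the old checksum and the flip pattern alone, the attacker computes
# the checksum of the tampered message:
patched_crc = old_crc ^ zlib.crc32(delta) ^ zlib.crc32(zeros)

assert patched_crc == zlib.crc32(tampered)  # the checksum still "verifies"
print("patched CRC matches:", hex(patched_crc))
```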

“We should accept that most of our hardware is either backdoored or at least weak enough to be backdoored…”

Yes, the simple fact is that the lower-level an attack is, the more devastating it usually is.

It’s a point I realised some years ago, and I set about finding ways of dealing with the issue, which I did successfully. Without going through all the details again, it is possible to set up mitigation strategies via the use of multiple hardware units and to detect “defecting hardware”.

dittybopper March 27, 2016 1:01 PM

Kinda hard to hack into a piece of paper you encrypted on and then incinerated. Just sayin’…..

Skeptical March 29, 2016 1:29 PM

“Almost all computer security experts have two arguments against this idea: first, adding this back door makes the system vulnerable to all attackers and doesn’t just provide surreptitious access for the ‘good guys,’ and second, creating this sort of access greatly increases the underlying system’s complexity, exponentially increasing the possibility of getting the security wrong and introducing new vulnerabilities.”

Well, a couple of things. First, I wonder whether the complexity issue is too heavy a hammer to use here (I’ll explain what I mean), and second, I wonder whether you fail to consider the full effect of a particular policy on security by failing to consider how that policy structures the incentives and behaviors of key actors.

So, first, part of the problem I have with the complexity argument – on its own – is that it proves too much. Sometimes we find ways to improve systems that also involve simplifying them – certainly this can be an important type of improvement to make to a system if complexity in itself is a problem – but frequently the improvements involve greater complexity.

Obviously complexity is a pretty damn ubiquitous problem, especially in the context of information technology.

And so we devise ways of managing that complexity – and in doing so, even though we make unforeseen mistakes along the way, we build up complex systems that nevertheless function safely and as intended.

You say that it’s hard. Sure, it’s hard. Lots of things we do to improve systems are hard. They’re not easy to figure out, and even when we think we have a solution, it may not be easy to design tests that might fully satisfy us.

But there’s an important difference between sounding a cautionary note (hey, we need to be really careful and deliberate in how we do this, we need to expect certain failures, we need to do lots of testing before implementing, etc.) or criticizing a specific proposal (e.g. this won’t work because in cold temperatures the O-rings will not expand to fill the changing volume of space they’re meant to fill), and simply taking a general stance against adding any complexity, regardless of how susceptible to testing, regardless of how limited the damage might be if certain failures occurred, regardless of the alternatives.

Hell, encryption itself adds enormous complexity to a system of communication – and I suppose that if we hadn’t investigated how we could design and conceptualize encryption in a manner that makes it susceptible to understanding and testing, and how we could limit the probability of mangled or lost signals to within acceptable ranges, etc., using it might be a bad idea in certain circumstances.

Indeed, on the morning of December 7, 1941, a Navy aircraft sank a submarine at 0700, approximately one mile distant from Pearl Harbor. The pilot communicated this information in code, which meant a delay (considerable, in that case) between the coded communication being received, and the information contained therein being decoded and distributed to parties who needed to know such information. And somewhat less dramatically, that tradeoff between the benefits and costs of using encryption and the type of encryption to be used is one that is obviously still a very active factor today.

Now, obviously strong encryption is a vital and a good thing in a great many circumstances. I don’t doubt that it’s quite hard to get right, and quite hard to implement. And we actually insist that some of our most precious information is strongly encrypted, despite the increased complexity and the risks that it poses. And just as encryption itself may make a system of communication more complex, so too will any design that seeks both to secure systems against certain types of access attempts and to allow access, in specified circumstances, to parties other than a particular user, be more complex as well.

There are different ways of going about ameliorating the undesired costs of strong encryption without sacrificing too much of the desired benefits. One might be to render the processes involved in a lawful access mechanism diverse, obscure, and the consistent focus of testing.

In many ways, I would far prefer that the US Government and various companies work together than work against each other. I’d rather Apple cooperate with requests for assistance, and that the US Government aid Apple in securing the device against unlawful access. Had Apple provided the assistance requested, both parties would have incentive to protect the mechanism devised, and to devise means of ascertaining when others may have discovered them as well.

That type of system facilitates information sharing and cooperation for a common purpose.

Otherwise – and you want to talk about introducing complexity – you have the US Government in the position of balancing the equities in deciding whether to disclose vulnerabilities to companies while simultaneously seeking to discover such vulnerabilities in order to fulfill its duties as a government.

And if we’re going to talk about the question of security overall, then that broader, systemic effect on the incentives and behavior of the various actors involved must be considered. If we incent the most powerful and knowledgeable actors NOT to work cooperatively on security, and to seek ways to crack security (rather than seeking to add a designed-in means of authorized access), then we’ve reduced the security of our devices, all else being equal.

So from an engineering perspective, I wonder whether the objection is just too broad; and from a strategic perspective, I wonder whether you’ve thought through how the moves and incentives of other players in reaction to stances like Apple’s will affect security, regardless of the challenges that the complexity of lawful access – whether limited to narrow conditions or not – brings to the security design of a device or system.

Dirk Praet March 29, 2016 7:27 PM

@ Skeptical

“So from an engineering perspective, I wonder whether the objection is just too broad; and from a strategic perspective, I wonder whether you’ve thought through how the moves and incentives of other players in reaction to stances like Apple’s will affect security”

As usual, you make another long and eloquent argument for your case of exceptional government access. So let me, once more, summarize the various reasons why many of us firmly believe it’s a genuinely bad idea.

  • Technical: The vast majority of subject matter experts agree that there is no way to provide a technically sound, secure and scalable way of doing so. I am also still waiting for the first government agency or supporter to come up with a valid proposal. Repeating calls for a new Alamo project or for everybody to think harder is not going to change that. You may just as well shout out to the mathematics community that they finally have to come up with a solution for the quadrature of the circle.
  • Philosophical: The entire idea of encryption, since the dawn of time, has been to keep busybodies, criminals and governments alike out of privileged communications. Even some of your own Founding Fathers used it to hide some of theirs from both the British and the Postmaster General. I guess no one will argue that the then British government had excellent reasons to ask for government backdoors too.
  • Legal: Although you keep arguing that the DoJ has the law on its side in using the AWA to compel private companies to aid the government in circumventing decryption, many of us – including judges, lawyers and legal scholars – see important statutory and constitutional hurdles to such exceptional access. Calling us biased idiots does not change the fact that it is ultimately not you but SCOTUS who will decide on the matter.
  • Economic: Mandating exceptional government access will inevitably hurt the US tech industry in more than one way and move strong encryption products out of the country.
  • Political: Unlike you, many of us aren’t supporters of governments and corporations working closely together to enable the 24/7 tracking of every move and every communication of the ordinary citizen upon presentation of a “valid” warrant. That’s a scenario right out of a dystopian novel and a prime enabler of tyranny if for whatever reason a leader or government ever goes off the deep end. No government in a democratic society should have such powers, and I’d rather take my chances with 3rd parties subverting security and encryption than allowing any government to do so.
  • Societal: Comey’s “going dark” allegations are hysterical. Never before in history has LE/IC had such a vast arsenal of legal and technical means to track, monitor and convict people. There is no denying that some investigations today are hampered by encryption, but I don’t know of any major case in which the breaking thereof was essential in preventing a clear and imminent threat to national security. The San Bernardino case certainly wasn’t, and what’s more, it eventually showed that the FBI – despite sworn statements to the contrary – didn’t even need Apple to get into that phone.

    Until LE comes up with credible statistics that encryption is thwarting significant amounts of investigations with a serious impact on the general public or national security, society as a whole remains more secure with unbreakable encryption than in accepting the many risks associated with encryption weakened by government mandated backdoors.

anon April 15, 2016 1:37 AM

I’m getting really sick of reading this lie: “adding this back door makes the system vulnerable to all attackers”.

Yes – it’s really useful to win arguments, but it’s just plain not true.

Also, if anyone wanted a back door without messing up complexity, all they need to do is build the PRNG using asymmetric techniques – magic – they can decrypt whatever they want, and nobody else can because they don’t have the necessary secret key.

Oh wait. They did that already. It’s called FIPS.
