"Surreptitiously Weakening Cryptographic Systems"

New paper: “Surreptitiously Weakening Cryptographic Systems,” by Bruce Schneier, Matthew Fredrikson, Tadayoshi Kohno, and Thomas Ristenpart.

Abstract: Revelations over the past couple of years highlight the importance of understanding malicious and surreptitious weakening of cryptographic systems. We provide an overview of this domain, using a number of historical examples to drive development of a weaknesses taxonomy. This allows comparing different approaches to sabotage. We categorize a broader set of potential avenues for weakening systems using this taxonomy, and discuss what future research is needed to provide sabotage-resilient cryptography.

EDITED TO ADD (3/3): News article.

Posted on February 25, 2015 at 6:09 AM • 25 Comments

Comments

SoWhatDidYouExpect February 25, 2015 7:23 AM

Is it possible that the people responsible for this outrage already have a stronger system in place for themselves, and therefore can afford to weaken everyone else’s use of cryptography? We need to find out what they are using and, if it is better, put it into general use.

Bob S. February 25, 2015 8:08 AM

Re: “they” might have better cryptography?

Not necessarily. But, they know “they” haven’t sabotaged it.

As for surreptitious weakening of cryptography, I don’t see any advantage or point in appeasing saboteurs. See: Neville Chamberlain (kids, don’t be like him).

I am curious how “they” managed to entirely corrupt, degrade and sabotage our democratic representative government to the point of impotence. We know the difference between right and wrong, why don’t our elected officials?

Obviously, “they” are very good at what they do, and tenacious.

Status: WE are losing.

Duck and Cover February 25, 2015 9:53 AM

NSA sabotage is a very, very touchy issue, because this illegal means of warfare involves financial consequences that the US can’t begin to pay. NSA sabotage is so touchy that the Obama administration has classified cyberspace sabotage as arms.

That gets their other tit caught in the wringer. Armed attack invokes the victims’ right to self-defense and the elements of the crime of aggression. What’s more, the clandestine command structure makes cyberwar a sneak attack, that is, conduct in breach of the Hague III Convention on the Opening of Hostilities. Fun fact: Anybody know the official justification of the nuclear bombing of Hiroshima and Nagasaki? Because Japan carried out a sneak attack.

The preeminent legal scholars of the Russian government know that very well. Under conventional and customary precedent, sneak attack is punishable by thermonuclear incineration of two cities.

Jessup and Langley. That would be fair.

name.withheld.for.obvious.reasons February 25, 2015 10:07 AM

@ Bruce Schneier

Nice work!!!

A quick read revealed a formal approach to classifying the issue(s) and an almost thorough exploration of the problem space. The readability and clarity of the paper require a moderate amount of technological skill; this paper will be lost on the idiots who hold the pens (and pencils) that will be needed to address the identified issue. I believe the paper is important; my fear is that it will be lost in the noise of the idiots yelling it down.

It is clear to me what your paper defines; what is unclear is the effect it will and must have.

Martin February 25, 2015 10:40 AM

Japanese leaders had no intention of surrendering after the two atomic bombs were dropped. They fully intended to continue fighting until the last man.

The ONLY reason Japan finally surrendered was because the Emperor decided to. The Emperor’s command to stop fighting was broadcast around Japan and that was the first time Japanese citizens had ever heard his voice.

No remark by any US Gov’t bureaucrat, written or otherwise, can serve as an official “reason” the two atomic bombs were dropped.

David Leppik February 25, 2015 11:56 AM

One area where I’d like to see more research is in developing encryption algorithms that can be verified in production systems. Developers rely on automated tests (primarily unit tests) and eyeballing to see that the system appears to work.

Unit tests typically verify that, given a known input, the code produces the expected output. For example, in all my code that checks the current date (e.g. “boolean doActionToday()”), I implement it via a version that takes an arbitrary date (e.g. boolean doActionOnDay(Date)).
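A minimal sketch of that pattern in C (the function names and the first-of-the-month rule are hypothetical, chosen only to mirror the commenter's Java-style example): the date-dependent logic takes the date as a parameter, and the production entry point is a thin wrapper that reads the clock.

```c
#include <stdbool.h>
#include <time.h>

/* Testable core: the date is passed in, so a unit test can construct
 * any fixed struct tm and assert on the result.
 * (Hypothetical rule: act only on the first day of the month.) */
bool do_action_on_day(const struct tm *day)
{
    return day->tm_mday == 1;
}

/* Production wrapper: reads the current date and delegates.
 * Deliberately, there is almost nothing left here to test. */
bool do_action_today(void)
{
    time_t now = time(NULL);
    struct tm today;
    localtime_r(&now, &today);
    return do_action_on_day(&today);
}
```

A test simply builds a struct tm for the date it cares about and calls do_action_on_day directly; as the next paragraph notes, the same injection trick is much riskier when the dependency is a secure PRNG.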

It’s dangerous to do that for code that requires a secure PRNG, since you don’t want to be one configuration error away from replacing your random number source with a mock random number source that always returns the same numbers. As a result, we are left completely in the dark when it comes to end-to-end testing to verify that our production systems are configured securely.

Anura February 25, 2015 11:59 AM

I agree that committee-designed standards have been a problem in the past, but I think there is room for the committee in some aspects: requirement building. Instead of designing every feature, they design the requirements, and then the proposals are submitted based on those requirements, and analyzed.

I’d also like to repeat something I’ve proposed in the past: modularizing the protocols. Instead of having a bunch of large standards, break everything up into small standards with modularized components. Instead of five standards averaging 50 pages apiece, you can have 20 standards that average 5 pages apiece, eliminating overlap and making it much easier to analyze the protocols and verify correctness of an implementation.

Shoemaker February 25, 2015 12:01 PM

To nitpick, you may want to change that link to https lest you surreptitiously weaken a cryptographic system yourself.

In fact several of the links in your sidebar, like the subscribe buttons, are unnecessarily http.

Nick P February 25, 2015 12:30 PM

@ David

The shortcut would be extending systems such as CRYPTOL and AnBx to handle new environments. That skirts around the issue entirely for many algorithm and protocol implementations. Plus, the people doing the porting don’t have to be experts on cryptography or protocols. They just need to know about issues such as timing channels that show up in implementations.

Nick P February 25, 2015 5:02 PM

@ Bruce

You should update the paper to include the two tools above so your large audience sees them. They’re open source, meet some of your requirements, and might get enhancements if publicized.

Dirk Praet February 25, 2015 7:46 PM

@ Anura

I agree that committee-designed standards have been a problem in the past, but I think there is room for the committee in some aspects: requirement building

In light of the recent statements by Daffyd Cameron and Mike Rogers, I think it would be wise to put a vetting process in place for who exactly is on the committee. I’m not convinced that folks with shady NSA or other TLA backgrounds belong there. Same goes for NIST.

Figureitout February 25, 2015 11:00 PM

Nice paper, Bruce. These kinds of “mistakes” make it really touchy for new people taking on new roles. I really don’t understand how someone could’ve seriously put a double goto fail after an un-bracketed if-statement; that’s probably the most suspicious one to me. Too glaring of a “whoopsie”.
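For readers who haven’t seen it, here is a simplified C sketch of the pattern being described (not the verbatim vendor code; the helper names are invented): an if-statement without braces followed by a duplicated goto, so the second goto always runs, the remaining check is skipped, and the stale “success” error code is returned.

```c
static int check_hash(void)      { return 0; }  /* stand-in: succeeds   */
static int check_signature(void) { return 1; }  /* stand-in: would fail */

int verify_params(void)
{
    int err;

    if ((err = check_hash()) != 0)
        goto fail;
        goto fail;              /* duplicated line: jumps unconditionally */

    err = check_signature();    /* never reached */

fail:
    return err;                 /* still 0 from check_hash(): "verified" */
}
```

Mandatory braces in the coding standard, or warnings along the lines of GCC’s -Wmisleading-indentation, would likely flag this sort of construct today.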

The best thing, though, is to truly understand and actually walk through everything it’s doing; if that’s never done systematically, then claiming it’s secure is a falsehood. So tricks and bugs in the languages themselves need to be well known to implementors; if they don’t know them, they’re incompetent and need to be removed from their role or watched closely by someone who “knows what they’re doing”. If the code’s written like sh*t, remove the developer; it’s a waste of time debugging poorly formatted code. We can’t have amateurs coding these super important areas.

Victor February 26, 2015 2:05 AM

Hello Bruce,

Is there a problem in this sentence?

“If the saboteur wishes to prevent collateral damage, the keys should first encrypted with the attacker’s public key.”

George Theodorakopoulos February 26, 2015 4:39 AM

In the case study on TLS (Section 7.1), the paper says it will discuss four potential weaknesses in detail (Secure randomness, Underlying primitives, Public-key infrastructure, Error oracles), but only the first two are discussed. Is there a reason for that or will the last two be added in the future?

Very useful paper, by the way.

Bent Schmidt-Nielsen February 26, 2015 8:23 AM

@Bruce, Very nice summary. But I am quite surprised that you did not mention the nice work by Nadia Heninger et al. on factoring weak RSA keys found widely on the web: https://factorable.net/
This is a good example of the real-world consequences of poor random number generation.

Mike Amling February 26, 2015 5:10 PM

“for case of symmetric encryption”
‘case of’ should probably be ‘the case of’, as it is for the paper’s five other cases of ‘case of’.

“Control measures the degree to which the saboteur can limit whom can exploit a weakness as an attacker, even after it becomes public.”
‘whom’ should be ‘who’

“the constant relating the NIST curves used to generate random bits”
If I understand it, the constant relates two points on the same curve, not two ‘curves’.

“took on only one of 32,767 key pairs”
‘took on one of only 32,767 key pairs’

“in a numerous applications”
‘in numerous applications’ or maybe ‘in a number of applications’

Mike Amling February 26, 2015 6:10 PM

“extract sensitive data 2 only in memory”
Huh?

“Being that misuse of these values”
Or ‘Since misuse of these values’

“high, It has”
‘high. It has’

“little attention have been given”
‘have’ should be ‘has’.

“formal frameworks aimed at understanding backdoors has been”
‘has’ should be ‘have’

“whomever chooses it”
‘whoever chooses it’

Robert Brown February 26, 2015 6:23 PM

On p 17 “Secure randomness”, third paragraph, you said, “which resulted in a source entropy
pool containing at most 2^15 bits, and thus 2^15 possible initial states for the PRNG.”

I think you meant to say “at most 15 bits”, not “at most 2^15 bits”.

If I am wrong, please explain why.

Thanks!
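For what it’s worth, the back-of-the-envelope arithmetic behind that reading (an editorial note, not something the paper states): a pool that can take on at most 2^15 distinct values carries at most 15 bits of entropy, so “2^15 bits” and “2^15 possible initial states” cannot both be right.

```latex
% At most N equally likely initial states give at most log2(N) bits of entropy:
H \le \log_2 N = \log_2\!\bigl(2^{15}\bigr) = 15 \text{ bits}
```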

Matt Hurd February 27, 2015 7:32 PM

Susceptibility to DPA, or equivalent leakage through vibration, audio, heat or other radiation, is missing from the paper, and it is perhaps one of the more difficult weaknesses to maintain proper awareness of.

Clive Robinson February 28, 2015 7:42 AM

@ Bruce,

As I read it, you only really cover the “weakening of standards” by the rather obvious “bull in a china shop” method used with the Dual EC random number generator.

There are more subtle methods that can be used.

For instance, how can you “fix” an open standards process carried out almost entirely by others?

An example was the AES process/competition.

It is known that the NSA likes side channels, especially when they leak KeyMat in a non-obvious way. The question for them is thus: how do you put the fix in without it being obvious, thereby giving plausible deniability?

Firstly, know how to herd cats, that is, get lots of mutually suspicious people to move in the desired direction by their own choice.

NIST used the NSA as advisors in setting up the competition, which allowed the NSA to put space between itself and the competition. The NSA then advised that the competition was about algorithm security, speed and efficiency, and ensured that the only implementation issues put in the specification were mainly, if not totally, irrelevant to its desired result.

The key thing to know is that making an algorithm fast or efficient in any given practical implementation usually reduces the “implementation security” considerably, not the “algorithm security”. This is because, amongst other things, it makes time-based side channels considerably more likely than not. And where there are time-based side channels in crypto, they are more likely to leak key data than plaintext data.
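To make that concrete, a hedged illustration (mine, not taken from the paper or from any competition entry): a table-driven S-box lookup of the kind used in speed-optimised block cipher code. The array index depends on secret key material, so which cache lines get touched, and therefore the timing, varies with the key; a slower, table-free (e.g. bit-sliced) implementation of the same algorithm avoids that particular leak.

```c
#include <stdint.h>

/* Placeholder table standing in for a real cipher S-box;
 * the actual values are irrelevant to the point being made. */
static const uint8_t SBOX[256] = { 0 };

/* Speed-optimised style: one memory lookup per byte.  The index
 * (pt ^ key) is secret-dependent, so cache behaviour and lookup
 * time leak information about the key, not about the algorithm. */
uint8_t first_round_byte(uint8_t pt, uint8_t key)
{
    return SBOX[pt ^ key];
}
```

On paper the algorithm is unchanged; only the implementation choice creates the channel, which is exactly the algorithm-security versus implementation-security distinction drawn above.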

The NSA most certainly knew this, as did one or two academic researchers who said so at the time, and it should have raised warning flags / rung bells with NIST. But for some reason it did not, then or for some time afterwards, even though a practical attack was demonstrated.

A worse part of the competition rules was the fact that all implementation code used for the entries had to be made publicly available.

The NSA must have been rolling around on the floor with laughter and glee in their offices when they got that one past NIST, because they must have known that the code for the competition winner would be downloaded and “used as is” by anyone and everyone, either directly or indirectly through another entity’s code library that had itself downloaded and “used as is” the competition code.

The result was the wide-scale adoption of a theoretically secure algorithm, implemented through code optimised for speed or efficiency that was far from secure.

So whilst “data at rest” was secure, “data in transit” was only secure if further steps were taken to remove or mitigate the side channels resulting from the competition code.

As we know, there are still quite a number of “finished products” out there which use insecurely coded versions of AES directly on the line without mitigation…

There is a reason they call this “finessing”: even though the information on this competition manipulation and its results has been known publicly for some considerable time, the majority of people just don’t want to believe it, for various quite human reasons.

This brings in another aspect of deniability, which is the difference between “proof” and “belief”. Whilst proof can be demonstrated, people will call it “conspiracy theory” etc. until the level of proof is way beyond that “which would hang a man”. We have seen this with the Ed Snowden revelations: there was considerably more than circumstantial evidence that the NSA et al were doing what they were doing long beforehand. However, whenever it was brought up it would at best meet scepticism and often be “shouted down”. But worse, even though the meta-evidence is now there for all to see, many are in either denial or “head in the sand” mode due to the cognitive dissonance between belief and evidence. And for these people I doubt there will ever be enough evidence to convince them to overcome their beliefs.

This also raises another point about what level of evidence is required in security design. In law there are two commonly quoted burdens of proof and the hurdles they must cross cleanly: the first, for prosecution of a crime, is “beyond reasonable doubt”; the second, when suing for damages, is “reasonable probability”.

When designing a system it is wise to regard even circumstantial evidence as damning unless it can be proven otherwise, and thus take steps to prevent the cause or, where that is not possible, mitigate the effect. It’s the reason why “air-gapping” was, until recently, held in high regard, and why the likes of TEMPEST / EmSec design rules are the way they are, even though they are usually ignored or avoided by commercial system designers.

NextSteps February 28, 2015 5:28 PM

Wow Clive, you took the words right out of my mouth. Not much more to add to that.
Gardening is only a decision to trade the keyboard away.

Lets seem try Hack our Ayn Rand luddite class gardening secrets then.
