Defending Against Crypto Backdoors

We already know the NSA wants to eavesdrop on the Internet. It has secret agreements with telcos to get direct access to bulk Internet traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to sift through it all. And it can identify ciphertext—encrypted information—and figure out which programs could have created it.

But what the NSA wants is to be able to read that encrypted information in as close to real-time as possible. It wants backdoors, just like the cybercriminals and less benevolent governments do.

And we have to figure out how to make it harder for them, or anyone else, to insert those backdoors.

How the NSA Gets Its Backdoors

The FBI tried to get backdoor access embedded in an AT&T secure telephone system in the mid-1990s. The Clipper Chip included something called a LEAF: a Law Enforcement Access Field. It was the key used to encrypt the phone conversation, itself encrypted in a special key known to the FBI, and it was transmitted along with the phone conversation. An FBI eavesdropper could intercept the LEAF and decrypt it, then use the data to eavesdrop on the phone call.
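
To make the LEAF mechanism concrete, here is a minimal sketch in Python. It uses a stand-in XOR keystream in place of the real Skipjack cipher, and the key names and escrow handling are illustrative rather than the actual Clipper design:

    import os
    import hashlib

    def xor_stream(data, key):
        # Stand-in "cipher": XOR with a keystream derived from the key.
        stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
        return bytes(d ^ s for d, s in zip(data, stream))

    ESCROW_KEY = os.urandom(16)   # known to the escrow agent (the FBI, in Clipper's case)

    def encrypt_call(session_key, plaintext):
        ciphertext = xor_stream(plaintext, session_key)
        # The LEAF: the session key, encrypted under the escrow key, travels
        # alongside the conversation it protects.
        leaf = xor_stream(session_key, ESCROW_KEY)
        return ciphertext, leaf

    def escrowed_eavesdrop(ciphertext, leaf):
        # Whoever holds the escrow key recovers the session key from the LEAF
        # and then reads the traffic.
        session_key = xor_stream(leaf, ESCROW_KEY)
        return xor_stream(ciphertext, session_key)

    session_key = os.urandom(16)
    ct, leaf = encrypt_call(session_key, b"hello, operator")
    assert escrowed_eavesdrop(ct, leaf) == b"hello, operator"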

But the Clipper Chip faced severe backlash, and became defunct a few years after being announced.

Having lost that public battle, the NSA decided to get its backdoors through subterfuge: by asking nicely, pressuring, threatening, bribing, or mandating through secret order. The general name for this program is BULLRUN.

Defending against these attacks is difficult. We know from subliminal channel and kleptography research that it’s pretty much impossible to guarantee that a complex piece of software isn’t leaking secret information. We know from Ken Thompson’s famous talk on “trusting trust” (first delivered as his ACM Turing Award lecture) that you can never be totally sure whether there’s a security flaw in your software.

Since BULLRUN became public last month, the security community has been examining security flaws discovered over the past several years, looking for signs of deliberate tampering. The Debian random number flaw was probably not deliberate, but the 2003 Linux security vulnerability probably was. The DUAL_EC_DRBG random number generator may or may not have been a backdoor. The SSL 2.0 flaw was probably an honest mistake. The GSM A5/1 encryption algorithm was almost certainly deliberately weakened. All the common RSA moduli out there in the wild: we don’t know. Microsoft’s _NSAKEY looks like a smoking gun, but honestly, we don’t know.
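
On those common RSA moduli: whether or not any particular one was planted, shared prime factors are trivially exposed once public keys are collected in bulk. Here is a toy sketch in Python, with tiny stand-in primes; real surveys run a batch GCD over millions of harvested keys:

    from math import gcd

    # Tiny stand-in primes; two moduli that (by accident or design) share one.
    p, q1, q2, q3 = 10007, 10009, 10037, 10039
    moduli = [p * q1, p * q2, q2 * q3]

    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if g > 1:
                # A nontrivial GCD factors both moduli, breaking both keys.
                print(f"moduli {i} and {j} share the factor {g}")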

How the NSA Designs Backdoors

While a separate program that sends our data to some IP address somewhere is certainly how any hacker—from the lowliest script kiddie up to the NSA—spies on our computers, it’s too labor-intensive to work in the general case.

For government eavesdroppers like the NSA, subtlety is critical. In particular, three characteristics are important:

  • Low discoverability. The less the backdoor affects the normal operations of the program, the better. Ideally, it shouldn’t affect functionality at all. The smaller the backdoor is, the better. Ideally, it should just look like normal functional code. As a blatant example, an email encryption backdoor that appends a plaintext copy to the encrypted copy is much less desirable than a backdoor that reuses most of the key bits in a public IV (initialization vector). (A sketch of the IV case follows this list.)
  • High deniability. If discovered, the backdoor should look like a mistake. It could be a single opcode change. Or maybe a “mistyped” constant. Or “accidentally” reusing a single-use key multiple times. This is the main reason I am skeptical about _NSAKEY as a deliberate backdoor, and why so many people don’t believe the DUAL_EC_DRBG backdoor is real: they’re both too obvious.
  • Minimal conspiracy. The more people who know about the backdoor, the more likely the secret is to get out. So any good backdoor should be known to very few people. That’s why the recently described potential vulnerability in Intel’s random number generator worries me so much; one person could make this change during mask generation, and no one else would know.
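
Here is a minimal Python sketch of the IV example from the first bullet. The fixed whitening pad is an illustrative assumption; the point is that the field looks like a random IV while silently carrying the key to anyone who knows the pad:

    import os
    import hashlib

    WHITENING_PAD = hashlib.sha256(b"innocuous-looking constant").digest()[:16]

    def backdoored_iv(session_key):
        # Looks like a fresh random 16-byte IV, but it is the session key XORed
        # with a fixed pad known only to whoever planted the backdoor.
        return bytes(k ^ p for k, p in zip(session_key, WHITENING_PAD))

    def recover_key(iv_seen_on_the_wire):
        return bytes(b ^ p for b, p in zip(iv_seen_on_the_wire, WHITENING_PAD))

    key = os.urandom(16)
    iv = backdoored_iv(key)         # transmitted in the clear with every message
    assert recover_key(iv) == key   # statistical tests on the IV reveal nothing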

These characteristics imply several things:

  • A closed-source system is safer to subvert, because an open-source system comes with a greater risk of that subversion being discovered. On the other hand, a big open-source system with a lot of developers and sloppy version control is easier to subvert.
  • If a software system only has to interoperate with itself, then it is easier to subvert. For example, a closed VPN encryption system only has to interoperate with other instances of that same proprietary system. This is easier to subvert than an industry-wide VPN standard that has to interoperate with equipment from other vendors.
  • A commercial software system is easier to subvert, because the profit motive provides a strong incentive for the company to go along with the NSA’s requests.
  • Protocols developed by large open standards bodies are harder to influence, because a lot of eyes are paying attention. Systems designed by closed standards bodies are easier to influence, especially if the people involved in the standards don’t really understand security.
  • Systems that send seemingly random information in the clear are easier to subvert. One of the most effective ways of subverting a system is by leaking key information—recall the LEAF—and modifying random nonces or header information is the easiest way to do that.

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of them is foolproof, but they are all useful. I’m sure there are more; this list isn’t meant to be exhaustive, nor is it the final word on the topic. It’s simply a starting place for discussion. But it won’t work unless customers start demanding software with this sort of transparency.

  • Vendors should make their encryption code public, including the protocol specifications. This will allow others to examine the code for vulnerabilities. It’s true we won’t know for sure if the code we’re seeing is the code that’s actually used in the application, but surreptitious substitution is hard to do, forces the company to outright lie, and increases the number of people required for the conspiracy to work.
  • The community should create independent compatible versions of encryption systems, to verify they are operating properly. I envision companies paying for these independent versions, and universities accepting this sort of work as good practice for their students. And yes, I know this can be very hard in practice.
  • There should be no master secrets. These are just too vulnerable.
  • All random number generators should conform to published and accepted standards. Breaking the random number generator is the easiest difficult-to-detect method of subverting an encryption system. A corollary: we need better published and accepted RNG standards.
  • Encryption protocols should be designed so as not to leak any random information. Wherever possible, nonces should either be treated as part of the key or be public, predictable counters. Again, the goal is to make it harder to subtly leak key bits in this information. (A minimal sketch follows this list.)
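
Here is a brief Python sketch of that last point, assuming an HMAC-based derivation (illustrative, not any specific standard). The only per-message value on the wire is a predictable counter, leaving no random-looking field in which to hide key bits:

    import hmac
    import hashlib

    def per_message_values(key, counter):
        # The transmitted nonce is a public, predictable counter; any secret
        # per-message material is derived from the key instead of being sent.
        public_nonce = counter.to_bytes(8, "big")
        derived_secret = hmac.new(key, public_nonce, hashlib.sha256).digest()
        return public_nonce, derived_secret

    key = bytes(32)   # placeholder key for the sketch
    n0, s0 = per_message_values(key, 0)
    n1, s1 = per_message_values(key, 1)
    assert n0 != n1 and s0 != s1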

This is a hard problem. We don’t have any technical controls that protect users from the authors of their software.

And the current state of software makes the problem even harder: Modern apps chatter endlessly on the Internet, providing noise and cover for covert communications. Feature bloat provides a greater “attack surface” for anyone wanting to install a backdoor.

In general, what we need is assurance: methodologies for ensuring that a piece of software does what it’s supposed to do and nothing more. Unfortunately, we’re terrible at this. Even worse, there’s not a lot of practical research in this area—and it’s hurting us badly right now.

Yes, we need legal prohibitions against the NSA trying to subvert authors and deliberately weaken cryptography. But this isn’t just about the NSA, and legal controls won’t protect against those who don’t follow the law and ignore international agreements. We need to make their job harder by increasing their risk of discovery. Against a risk-averse adversary, it might be good enough.

This essay previously appeared on Wired.com.

EDITED TO ADD: I am looking for other examples of known or plausible instances of intentional vulnerabilities for a paper I am writing on this topic. If you can think of an example, please post a description and reference in the comments below. Please explain why you think the vulnerability could be intentional. Thank you.

Posted on October 22, 2013 at 6:15 AM • 101 Comments

Comments

Brett October 22, 2013 6:38 AM

“It wants backdoors, just like the cybercriminals and less benevolent governments do.”

Should read…”It wants backdoors, just like the cybercriminals and other malevolent governments do.”

Great write-up; if only we could get the public to wake up and really see this as a very serious issue.

Cpragman October 22, 2013 6:43 AM

Seems like the white hats could test for the presence of back doors by running various honeypots and waiting for a visit from someone.

Mark October 22, 2013 6:49 AM

“Breaking the random number generator is the easiest difficult-to-detect method of subverting an encryption system.”

Isn’t it possible to verify the output of RNG functions to identify weak implementations?

JoachimS October 22, 2013 6:51 AM

A5/1 (and A5/2) was accepted by a committee, a standards body. Given the less than stellar result from ETSI/SAGE, we also need standards bodies that are themselves open and that produce standards which are neither secret nor inaccessible unless you join the club and can pony up big piles of money.

Nicholas Weaver October 22, 2013 7:00 AM

I disagree with your belief that Dual_EC_DRBG is not a deliberate backdoor.

1: The original article in the NY Times made reference to the Crypto rump talk.

2: It is an asymmetric backdoor, so even if known, it’s safe to deploy in the same way Clipper was.

3: It WORKED. For a “clumsy” backdoor, the commercial vendors certainly flocked to it because it was a standard that apparently became de facto mandated in some circles.

RSA BSafe library used it by default, which was almost certainly some form of overt or covert condition for sales or certification.

And I’m suspicious of BlackBerry: we know they are compromised heavily by the NSA/GCHQ, but their business model is security of communication. Their server software is FIPS certified, and the server software includes Dual_EC_DRBG too.

As for other products: Well, the Guardian & the Post know of at least one, but they refuse to tell the rest of the world which one (the blacked out chip name).

Mike the goat October 22, 2013 7:04 AM

whoops, the previous post was intended to go on the squid page.

I agree with you, Nicholas. The fact that RSA’s library used, by default, an algorithm widely known to be junk amongst those in the know… I think that speaks volumes.

AnonymousAtRisk October 22, 2013 7:07 AM

What protection can you provide those who give you examples of intentional vulnerabilities? Some of these vulnerabilities are introduced into software whose customers number in the dozens, all of which have signed NDAs and have license agreements with clauses against reverse-engineering (the main way we know about them).

Matt Palmer October 22, 2013 7:07 AM

@Mark: No, it’s not generally possible to determine if random number generators have weak implementations by looking at their output.

You can probably statistically identify poor implementations – for example:

http://xkcd.com/221/

But in general you can’t:

http://www.random.org/analysis/

The problem is magnified if you are trying to detect one which has been deliberately weakened, since it will still – deliberately – pass statistical randomness tests, but the sequence would be predictable by someone who knows how it has been weakened.
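
To make that concrete, here’s a toy sketch in Python (the construction is assumed purely for illustration): a generator that is completely predictable to anyone holding a secret constant, yet sails through a crude monobit test:

    import hashlib

    SECRET = b"only the attacker knows this"   # the backdoor

    def backdoored_stream(n_bytes):
        # Fully predictable to anyone holding SECRET, yet statistically "random".
        out, counter = bytearray(), 0
        while len(out) < n_bytes:
            out += hashlib.sha256(SECRET + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:n_bytes])

    data = backdoored_stream(1 << 20)
    ones = sum(bin(b).count("1") for b in data)
    print(f"fraction of one bits: {ones / (len(data) * 8):.4f}")   # about 0.5; test passes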

Rich October 22, 2013 7:11 AM

@ Cpragman
Seems like the white hats could test for the presence of back doors by running various honeypots and waiting for a visit from someone.

I think Putin was doing that when the plane flying Bolivian president Morales home was forced to land in Austria: mention Snowden will be on board over some channel you suspect is bugged and see what happens.

Twylite October 22, 2013 7:43 AM

@Mark: What Matt said; plus the weakness is usually in the seeding of the algorithm rather than the algorithm or implementation being weak.

If you have implemented a well-known algorithm then you can use a Known-Answer Test to verify that your implementation is correct. But that doesn’t mean your random number generator (RNG) as a whole is secure.

Most RNGs are actually deterministic random (unpredictable) bit generators, because it’s really hard (and quite slow) to generate truly random noise. So instead we generate tiny amounts of noise (the “seed”) and use it as a key in an iterated AES-CTR or HMAC-SHA2 or similar construct (the “generator”) and treat the output as “random”.

Assuming the seed has sufficient entropy, predicting the generator’s output is as hard as attacking AES or HMAC-SHA2.

But even if we put in a completely predictable seed, the output of the generator looks (to statistical analysis) completely random. The output tells us nothing about the quality of the seed.

The backdoor opportunity here is knowing that the seed is predictable or has limited entropy.
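
Here’s a small sketch in Python of that backdoor opportunity, assuming an HMAC-based generator of the kind described above: the output looks fine, but a seed with only 16 bits of entropy can simply be brute-forced:

    import hmac
    import hashlib
    import os

    def generator(seed, n_bytes):
        # Expand a small seed into a long "random-looking" output stream.
        out, counter = bytearray(), 0
        while len(out) < n_bytes:
            out += hmac.new(seed, counter.to_bytes(4, "big"), hashlib.sha256).digest()
            counter += 1
        return bytes(out[:n_bytes])

    weak_seed = os.urandom(2)            # a broken seeding step: 16 bits of entropy
    keystream = generator(weak_seed, 64)

    # The keystream passes statistical tests, but an attacker who knows the
    # seeding is weak just tries all 65536 possibilities.
    for guess in range(1 << 16):
        if generator(guess.to_bytes(2, "big"), 64) == keystream:
            print("recovered seed:", guess.to_bytes(2, "big").hex())
            break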

Mike the goat October 22, 2013 7:51 AM

Bruce: I thought it was interesting reading Slashdot last month; there was an article stating that Linus admitted he had allegedly been pressured to insert a backdoor into the kernel.

Twylite: the whitening done by, say, AES in counter mode also makes it impossible to prove that your seed data is legit. This is precisely the issue I have with Intel’s RDRAND.

Mark October 22, 2013 8:00 AM

@ Matt Palmer

Thank you for your explanation. I thought it would be easier, and that detecting manipulation would become easier as the quantity of generated numbers available to check for repetitive characteristics grows.

Mark October 22, 2013 8:11 AM

@ Twylite

Thank you, too 😉

Sounds really interesting and a bit confusing, because if it is manipulated I think there must be special characteristics that could be detected by automatic analysis.

If the manipulation isn’t based on the algorithm itself but instead on the underlying input (date, time, cooler rpm, serial numbers, power usage, or whatever the input is), I think it could be checked whether that input is used or altered to generate numbers within a particular spread (then it wouldn’t be identical and would be more difficult to detect, but could, if my view is correct, be reproduced more easily).

I think I have to learn much more about RNGs to understand why it is so difficult.

Mike the goat October 22, 2013 8:38 AM

Mark: the point is that you can pump anything you like through a block cipher and the output will likely pass statistical tests like ent or diehard. In the case of the Intel implementation – the chip is a “black box” and you can’t verify the quality of the seed.

Clive Robinson October 22, 2013 8:39 AM

@ Bruce,

I can give you an example of a quite deliberate back door in a product going back into the last century, as I wrote it.

I won’t mention the product because, having proved a point (about “code reviews”), I ensured it only ever went out in alpha and beta code, none of which should still be running, and as the company no longer exists the software is most definitely not supported 😉

It involves the use of a PRNG that can also do PubKey encryption, plus C pointers, mixed in with the misuse of C’s malloc functions.

Without going into too many details, one of the failings of C’s malloc is that it does not clear memory when making a block of memory available to a program, nor when putting it back on the heap. The result is that judicious use of free can let a variable be passed almost secretly if a programmer is not wide awake. Further, C pointers to memory that has been returned to the heap remain usable if you know exactly how the allocator works and avoid the freed memory being reused (if you want to know more, go read “Deep C Secrets”; it will make your life more interesting as well 😉)

The Blum Blum Shub PRNG has been around since the mid 1980s; as a CS-PRNG it has some really desirable features. However, it has a significant downside: it has a “magic number” that is used to perform a modulus operation. The magic number is the product of a pq pair, which can be used as a public key. In this manner it is just like other mathematically based CS-PRNGs built on RSA, elliptic curves or other PubKey algorithms.
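
A toy Blum Blum Shub in Python looks roughly like this (tiny illustrative primes; real parameters are thousands of bits, with p and q both 3 mod 4 and the pq pair kept secret by whoever generated the magic number):

    p, q = 10007, 10039        # both are 3 mod 4; n = p*q doubles as a public key
    n = p * q

    def bbs_bits(seed, count):
        x = (seed * seed) % n
        bits = []
        for _ in range(count):
            x = (x * x) % n    # square-and-reduce; security rests on factoring n
            bits.append(x & 1) # emit the least-significant bit each step
        return bits

    print(bbs_bits(123456, 16))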

As you now have embedded a PubKey as part of your random number generator you can use it for all sorts of things…

As you note, hiding symmetric keys in IVs and nonces is one: you simply make a minor programming error that converts the BBS PRNG into PubKey-encrypting the symmetric key, and to stop it being obvious you add an offset to the symmetric key by mangling sixteen bits of it (which means only 65536 tries max to find it).

Another is in generating PubKeys: you deliberately make one of the primes searched for start from a backdoored start point. The start point is generated as a smallish random number then shifted up to the appropriate size. You encrypt the start point using the PubKey in the BBS RNG and embed it at a fixed offset in the final public key. As you are the only person who knows the pq pair and thus the private key, you are the only person who can recover the start point, quickly find one of the primes, and thus quickly factor out the other prime and rebuild the private key.

One inefficient but simple method to do this is described in Adam Young and Moti Yung’s book “Malicious Cryptography: Exposing Cryptovirology”, John Wiley & Sons, 2004.

They also have a Cryptovirology web site and FAQ,

http://www.cryptovirology.com/cryptovfiles/cryptovirologyfaqver1.html

paul October 22, 2013 8:45 AM

“Too obvious” is a bad criterion for deciding whether something is deliberate or accidental. First, it assumes a very smart adversary who never makes mistakes or succumbs to a tempting target of opportunity. Second, it ignores offense in depth: the more weaknesses you can create, the more chance one of them will go undetected and/or unfixed.

Here’s a question for software engineers: if you find a security-related mistake in code and fix it, do you breathe a sigh of relief, or does it motivate you to redouble your examination of that and other pieces of code to find even more backdoors?

Vinzent October 22, 2013 8:45 AM

In general, what we need is assurance: methodologies for ensuring that a piece of software does what it’s supposed to do and nothing more. Unfortunately, we’re terrible at this. Even worse, there’s not a lot of practical research in this area — and it’s hurting us badly right now.

Um, that’s wrong. There is actually quite a lot of research going on, mostly in the safety-critical community, but the results are slowly spilling over to the security-critical stuff.

A pilot project at EAL5 was even sponsored by the NSA, the Tokeneer project.

From the paper:

As well as achieving EAL5 levels of assurance, we believe that the Correctness by Construction process is close to achieving EAL7.

From my experience I’d say that the problem of “only” providing a correctness proof for the source code of certain crypto algorithms isn’t even a hard problem anymore. (Actually it is probably easier than proving that the flight control laws are implemented correctly – and we actually do just that.)

The biggest problem for non-acceptance is that we would need to get rid of the still most used implementation language C, which is hard to verify due to a lot of semantic issues (and once you remove the semantic ambiguities, there’s not much language left).

So the hard thing is not missing development methodology, but the willingness to adopt it, the research has been there for more than 20 years now and there are usable results from it.

D October 22, 2013 9:15 AM

I’m somewhat less concerned about the possibility of a back door in the Intel HRNG, or at least one of the sort described in the recent research. It has a key problem, literally: someone who measures the relevant circuit’s properties sufficiently well could retrieve the fixed part of the AES key envisaged by the paper.

It’s really difficult to measure a few specific gates in a modern processor that well, but it’s possible; certainly the research instruments that are used to characterize the performance of modern gates could do this measurement. (E.g., a dual-probe SPM of some sort, or possibly just an STM used cleverly.) It’s just a matter of someone wanting to make the measurement enough.

Although I’m not sure whether, e.g., China has any labs sufficiently capable at present, I’m pretty sure that they would be willing to spend enough to acquire this capability. And China having the secret key to Intel’s HRNG would be catastrophic to US interests. (They would then be in the same position as the NSA to attack US government traffic involving Intel computers, i.e., pretty much all of it. And at least in the unclassified world, the recommendation seems to be to use the Intel HRNG for US government applications. But this includes a huge variety of traffic that requires protection, particularly w.r.t. economic espionage, which the Chinese are way more interested in than military espionage.)

So I really really really hope that the NSA hasn’t done something that stupid. If they have, they’ve screwed the American government to screw the American people.

(I wouldn’t be entirely shocked, however, if it turned out that Intel chips actually implement something asymmetric like DUAL-EC-DRBG. The fact that they’ve provided no diagnostic connections at all suggests that they don’t want anyone to know what they’ve really implemented. And I think it is realistic to think that Intel could keep this a secret: e.g., acquaintances and colleagues who work there often have no clue about how the work they’re doing relates to actual products. Intel is good at keeping trade secrets secret — maybe even better than the NSA, judging by recent events.)

[I’ve decided to leave this mostly unsigned because I used to collaborate on semiconductor-related projects with Intel and a US non-military scientific laboratory. NSA peeps: I have no special knowledge; this is merely speculation by someone who knows a lot about measuring semiconductors. But you probably already know that. 🙂 I’ve avoided providing specific measurement recipes for hostile governments because I am queasily uneasy about the possibility that you’ve done something this stupid.]

Alex October 22, 2013 9:24 AM

The real problem with ANY back door is that it can be used by any and all who become aware of it. While the US government may think these “tools” help law enforcement, once said back door enters enemy hands, they’ve just handed over the entire country to an unfriendly foreign power.

Anyone remember what happened with the Enigma machines? How quickly we forget history.

GentooAndroid October 22, 2013 10:00 AM

@D: “It’s really difficult to measure a few specific gates in a modern processor that well, but it’s possible”

Does your knowledge help, yes or no, to successfully detect https://www.schneier.com/blog/archives/2013/09/surreptitiously.html (linked in Bruce’s current post)?

@Bruce: “I am looking for other examples of known or plausible instances of intentional vulnerabilities”

I exposed highly deniable compromise of Debian and OpenBSD in https://www.schneier.com/blog/archives/2013/09/google_knows_ev.html#c1770312 ; for the case of OpenBSD, the compromise was considerably weaker than for Debian. The same post discusses the compromise of Gentoo’s servers.

Petter October 22, 2013 10:13 AM

A few years ago the export restrictions on non-military encryption were relaxed.

There is no reason why the US would set strong encryption free to this degree unless they no longer saw it as a threat.
Or, as I believe, they wanted to get the strong encryption used around the world replaced with encryption algorithms weakened by the NSA/NIST.

They had the power to recommend every crypto they found a weakness in, or the ones they were able to manipulate, and at the same time they could “disable” secure cryptos by not recommending them.

H. Meadows October 22, 2013 10:13 AM

@Mike the Goat
Linus admitted he had allegedly been pressured to insert a backdoor into the kernel.

I thought it was not possible to pressure Linus to do anything, considering how grumpy he can be;-P

On another note, there is another Finnish export that was in the news some time ago, accused of decrypting their users HTTPS sessions.

This led to a question I would like to ask anyone here…

How can Nokia decrypt a HTTPS session on a remote server (that is in between the source and target as per the article below) unless the encryption key is sent (in clear text) to Nokia?

Nokia: Yes, we decrypt your HTTPS data, but don’t worry about it
http://gigaom.com/2013/01/10/nokia-yes-we-decrypt-your-https-data-but-dont-worry-about-it/

Nokia has confirmed reports that its Xpress Browser decrypts data that flows through HTTPS connections – that includes the connections set up for banking sessions, encrypted email and more. However, it insists that there’s no need for users to panic because it would never access customers’ encrypted data.

The confirmation-slash-denial comes after security researcher Gaurang Pandya, who works for Unisys Global Services in India, detailed on his personal blog how browser traffic from his Series 40 ‘Asha’ phone was getting routed via Nokia’s servers.

“From the tests that were preformed, it is evident that Nokia is performing Man In The Middle Attack for sensitive HTTPS traffic originated from their phone and hence they do have access to clear text information which could include user credentials to various sites such as social networking, banking, credit card information or anything that is sensitive in nature.”

Mike the goat October 22, 2013 10:16 AM

D: That’s exactly what I was thinking. If you wanted to demonstrate that you had nothing up your sleeve so to speak then you would allow people to fully debug the thing via JTAG – and this means input and output from every process. This would allow verification. They haven’t done that and don’t seem interested in implementing a “window” into the black box, so to speak. This leads me to speculate that there is something underhanded going on or – and perhaps just as bad – they are covering up bad engineering.

Mike the goat October 22, 2013 10:22 AM

H: my understanding is that it functions much like Opera Mini’s “acceleration” technology, if you want to call it that. The way it works is essentially that Nokia’s servers act as a proxy – they download the page, strip out content that isn’t cellular friendly, then compress, encrypt (with a Nokia SSL key) and pump it to the user’s phone. This means that they are effectively MITMing everything.

Their servers make the SSL connection with the SSLized site. The cleartext HTML is then compressed and sent down a TLS tunnel to the mobile browser.

Yes, terrible engineering, and there is no excuse in this day and age. With small Symbian phones with tiny amounts of RAM you could argue that there just wasn’t enough space to properly render full web pages on the phones and this technology was necessary.

There is no excuse now.

Carpe October 22, 2013 10:26 AM

@Bruce

If you want a good example of intentional side-channel(s), look no further than a very close re-examination of the OpenBSD IPSec stack debacle. Not very many people are aware that since it died down there have still been a few new things about it posted by some of the key players. The audit was done in secret, and they used the excuse that they found some benign, probably accidental flaw(s) which fits all three of your first section requirements.

I’m willing to bet that realizing a strong crypto-comms suite like OpenBSD with IPsec was a high-value target got them involved, intentionally injecting side channels.

Mike the goat October 22, 2013 10:29 AM

H: oh, and re Linus’ famously bad attitude – tell me about it. I tried to submit a patch in the early days and he flamed me out over something trivial. I would love to know how Greg deals with him.

Mike the goat October 22, 2013 10:32 AM

Carpe: you could argue that the entire spec of IPSEC is a backdoor. Plenty of opportunities for implementation errors through unnecessary complexity, fallback attacks to less secure ciphers (there’s even a NULL “cipher” that does nothing – why bother? Why not just use l2tp or plain GRE or one of a handful of unencrypted tunneling protocols?)

Nick P October 22, 2013 10:46 AM

@ Bruce Schneier

If you haven’t read this one, you definitely should. It’s from a student at the Naval Postgraduate School: a good thesis on issues in subverting software, which performs an exemplary subversion of a Linux NFS system. The paper is often used to teach about subversion & illustrate why high assurance approaches are necessary.

http://www.dtic.mil/dtic/tr/fulltext/u2/a401762.pdf

“First, the artifice is small. In all, eleven statements in the C programming language are needed for this example. This small size in relation to the millions of lines of code in the Linux Kernel makes it highly unlikely that any but those involved in the development of the kernel would notice. The artifice itself is composed of two parts located in two unrelated areas of the kernel. A second characteristic is that it can be activated and deactivated. As a result, the functionality exists only when the attacker wills it. This will further complicate any attempt to discover the existence of the artifice.

Unlike some Trojan Horse attacks, there will be no suspicious processes running on the system. Even when activated, the functionality is embedded in the kernel and not in a user-space process so its observability is limited. Under these conditions, no amount of testing is likely to show the presence or absence of a well-hidden artifice.

Finally, it does not depend on the activities of any user on the system. The attacker can activate and deactivate the artifice at will as long as the system will process IP packets. He is therefore not subject to any permission constraints of a particular user on the system. Moreover, the fact that all users and administrators of the system may be trusted (for example in an environment where all users are cleared to the same level and the system is operated in system-high mode), has no effect on the attacker’s ability to exploit the system. Administrators make the system vulnerable simply by connecting it to a network.”

AlyssaB October 22, 2013 11:05 AM

The school-of-thought has traditionally been NOT to implement crypto yourself, even for well-known/published/reviewed algorithms, because vulnerabilities often get introduced in the implementation. That was supposed to hold true even for those of us who knew what we were doing. Does that still hold true now? Who knows anymore what might have backdoors and who works for whom.

For that matter, Bruce: How do I really know who you’re working for and whether you are really you or if this is really your blog? Now I’m paranoid that I’m getting too paranoid 🙂

Logical Security From Physical Security October 22, 2013 11:19 AM

One way or another, after you trace it back to the origin of security, logical security always reduces to physical security.

The ultimate basis of sound encryption is a natural source of randomness that is not subject to prediction.

One way to ensure security is to use one-time pad encryption and a hardware random number generator that you personally have validated. The one-time pad key gets physically moved from one endpoint to another using a covert side channel. Air gaps physically secure the endpoints.

Once established, the one-time pad channel can then be leveraged to pass long keys for other algorithms that are computationally intractable to crack.
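
A minimal sketch of the one-time pad step in Python (the pad itself is what gets moved physically; key distribution is the hard part):

    import os

    def otp_xor(data, pad):
        # One-time pad: XOR against a truly random pad of at least equal length,
        # used for exactly one message and then destroyed.
        assert len(pad) >= len(data)
        return bytes(d ^ p for d, p in zip(data, pad))

    pad = os.urandom(32)        # in this scheme, delivered by hand, not online
    msg = b"meet at the usual place"
    ct = otp_xor(msg, pad)
    assert otp_xor(ct, pad) == msg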

NobodySpecial October 22, 2013 11:28 AM

You can rely on customers with more power (ie bigger guns) than the NSA.

If the NSA approved a product/cipher/service as secure and the Army/USAF/Navy/CIA used and it turned out there was a deliberate back door – you would know about it from the smoking black crater where the NSA’s HQ used to be.

The NSA’s job is supposed to be to provide security for other government users, as well as spying on Belgians.

Nick P October 22, 2013 11:34 AM

@ Bruce Schneier

re example of product subverted

People rarely mention Lotus although it’s a classic (and obvious) example of subversion. I found that the specific vulnerability is even described in its IBM Redbook! (page 79 in PDF under “facilities for confidentiality” starting with “one variation is introduced…”) You could say the RNG weakening is just a sneakier example of their classic MO of getting commercial products to make weak keys.

re processor subversion possibility

Many people looking for subversions of Intel are talking about RNG’s, masks, etc. I think these are possible but there’s a better option if we use your criteria for a good subversion. If I were NSA, I would create processor errata that turns off certain protections such as MMU upon recognizing a certain unique sequence. Or that executes a chunk of code in kernel mode after a certain sequence of data or instructions.

There are a few reasons this is a good idea. One, it’s undetectable at software level and few people examining hardware would have the capability to detect it. Two, it can be used very selectively reducing its risk. Three, using it to drop a software rootkit that does further subversion will make defenders think the software was attacked somehow. Speaking from experience, such diversions are highly effective. 😉 Fourth and the best: many Intel chips have had over 100 errata. If it’s detected, it will look like “Yet Another Intel Errata” that resulted from complexity and time to market pressures.

Interestingly, Kris Kaspersky’s CPU bug presentation shows some flaws with exactly these traits. MMU failures, remote ring 0 execution, etc. They go back YEARS. The more recent Intel CPU’s have much fewer errata than others. However, if the backdoor is trigger activated, it would never activate unless it was intended to. (Assuming no errata in it lol.)

Another bonus idea is facilitating a DMA attack, potentially by disabling the IOMMU. You heard that one here first. 😉

In any case, processor errata are impossible to see in practice, can allow total control over a machine, are the norm on Intel, and are deniable in practice. Perfect subversion opportunity for the sophisticate, well-funded attacker.

Nick P October 22, 2013 12:06 PM

@ Vinzent

“Um, that’s wrong. There is actually quite a lot of research going on, mostly in the safety-critical community, but the results are slowly spilling over to the security-critical stuff.”

It actually went the other way around. Work that ended up being Orange Book’s A1 criteria mandated TCB’s with extremely rigorous development techniques. Some systems were produced. Years later we see safety-critical projects adopting a fraction of that for safety properties. Then, we see some of their methods being used in security projects. Yet, the methods to get it done for security have existed since the 70’s-80’s, were used commercially for security, and have slowly expanded in capability over time thanks to few doing research. These days, there’s good research in both areas with a certain amount of spillover going both ways.

“From my experience I’d say that the problem of “only” providing a correctness proof for the source code of a certain crypto-algorithms isn’t even a hard problem anymore. ”

Yeah, Galois’s CRYPTOL toolkit solved that problem a while back. SPARK was also used for a provably correct implementation of some SHA-3 candidates. The crypto algorithm is the easy part. The harder part is the software using it correctly & its TCB.

“The biggest problem for non-acceptance is that we would need to get rid of the still most used implementation language C, which is hard to verify due to a lot of semantic issues (and once you remove the semantic ambiguities, there’s not much language left).”

CompCert partly solves many of the semantic issues by verifying it down to machine code. However, it’s very easy to write something in C that’s incorrect/backdoor that someone won’t notice. So, I agree with you that C is the wrong language for “verifiable” software. OS’s and system software has been written in ML, Modula, Haskell, Ada, and so on. A few have good tool support and plenty of libraries. So, I’d say current dev’s have fewer BS technical excuses for them to use any more, eh? 😉

(Labor/talent pool is still a valid criticism. However, get enough people making good, secure projects with certain languages and we might see more people learn them.)

“So the hard thing is not missing development methodology, but the willingness to adopt it, the research has been there for more than 20 years now and there are usable results from it.”

100% agreement. I’ve been posting specific tools, methods and design concepts here for years. Others like Xavier Leroy’s INRIA group, Shapiro’s team, ETH Zurich’s Oberon stuff, etc. were working other angles. The common denominator is every advancement has been mostly ignored by mainstream developers. Occasionally, you will see some attributes copied by mainstream. (eg C# or Java getting Ocaml-like features, Windows mandatory integrity policy)

Not enough though. My theory has been that certain very specific issues account for vast majority of risk area. If we invest heavily into solving them (or often using existing solutions), then we can build on that to solve other problems without introducing same old vulnerabilities as before. It takes a certain willingness to ignore cool, but unproven approaches. Or to throw away legacy tech that’s holding us back. These choices can be done incrementally but so far mainstream isn’t trying at all. Aggravating…

Douglas Knight October 22, 2013 12:41 PM

DUAL_EC_DRBG:

How can you, today, say it was not a backdoor, when the New York Times says that a Snowden memo says it was? Have you seen that memo? Does it not actually mention getting caught by MS in 2007?

I am very concerned about the failure of journalists to publish slides and memos they quote, even in heavily redacted form. By quoting, they’re letting NSA know exactly what they’ve seen, so why not? And I don’t trust journalists to understand these memos. Of course I trust you to understand them, but I’d still like to see them myself.

I also don’t understand how anyone in 2007 could doubt it was a backdoor, but I’ll leave that to Nicholas Weaver.

Bob Robertson October 22, 2013 12:46 PM

When examining the PGP source code long ago and far away, in the early 1990s before GnuPG, and altering it to generate a 4096-bit key for the heck of it, the big deal was that the NSA had demanded that a limit of 1024 bits be put into the code.

And sure enough, it was named “NSA_HACK” or something equally obvious.

So in all seriousness, since I’m pretty sure that the Windows backdoor would have come from approx. the same time period, “_NSAKEY” doesn’t sound far fetched at all.

William A. Hamilton October 22, 2013 12:46 PM

Here is the verbatim text of a May 1985 letter from Bradford Reynolds, Counselor to Attorney General Meese, to William Weld, U.S. Attorney in Boston, on arrangements for the sale and distribution to governments in the Middle East of software equipped with a so-called trap-door, i.e., “special data retrieval capability.” Not surprisingly, the Department of Justice failed to produce a copy of this letter, whose authenticity the person who signed it has confirmed, in litigation discovery:

“As agreed Messrs. Manichur Ghorbanifar, Adnan Khashoggi, and Richard Armitage will broker the transaction of Promise [sic] software to Sheik Klahid bin Mahfouz for resale and general distribution as gifts in his region contingent upon the three conditions we last spoke of. Promise must have a soft arrival. No paperwork, customs, or delay. It must be equipped with the special data retrieval unit. As before, you must walk the financial aspects through Credit Suisse into National Commercial Bank. If you encounter any problems contact me directly.”

Mike the goat October 22, 2013 1:07 PM

Nick: interesting link, by the way. I posit that mainstream tech isn’t trying for a good reason. There are legacy systems everywhere and nobody wants to touch them. So they sit there, humming along, ludicrously past support contract end date and unpatched. I am sure we have seen systems like these. A prominent university I once did some work for still used a minicomputer for storing the grades of students. When we upgraded a lot of their IT assets we spoke of their grading system. The fools had full source for the system (obviously not ANSI C but that’s no biggie) and I suggested we do a baby step and just move it over to a *BSD so they could stop haemorrhaging cash to keep their ancient hardware running (getting replacement daughterboards is possible even now for the right amount of cash). This was shot down by the committee, who wanted to leave it the hell alone, so we instead pulled out the original terminals (about fifteen of them in the staff library) and instead got two modest i386 boxes, put an 8-port Cyclades in each of them and set it up so that upon authentication it would open up an available serial port and drop it on disconnect. We published two A records and used a Windows telnet client that was smart enough to try sequential A records if it gets a refused (all ports used) or the host is down. Anyway it worked and we were able to at least allow the profs to insert their grading info from the comfort of their offices rather than being forced to use dumb terminals from the library. (About six weeks after implementing this, one of the machines’ mobos died unexpectedly and things fell over gracefully to the other terminal server, so at least that design decision that was critiqued as unnecessary paid off for me.)

Anyway I am rambling a bit. I will get to the punchline. About five years later I had well and truly moved on and by chance went to said institution to see a friend in the comp sci dept. Unbelievably the 1980s era hardware had still not been replaced nor was it slated to be replaced soon. Even though they had full source, even though it could have been mothballed and run in a damn emulator if need be, it still wasn’t replaced.

So this is the mentality you have to deal with!

Mike the goat October 22, 2013 1:16 PM

Bob: I notice – at least with the version of gpg I have here – that it refuses to generate an 8192 bit key. I wonder why such arbitrary limits are even hardcoded.

Nick: my apologies for the second post but I thought I would also mention the state of our ATM machines which are supposedly meant to be kept at the bleeding edge. I had the opportunity to inspect an ATM that belonged to a major foreign based but American controlled financial institution. It was running …. Wait for it … OS/2 Warp. This was about ten years ago but I was in the mall the other day and an ATM was rebooting (saw the Phoenix logo from a distance and had to go for a look). Up comes the splash screen – eComStation (the mob that purchased the OS/2 source and is supposedly looking after patches). The ATM I had the pleasure of inspecting was designed to be sited at a gas station and had the option of using the PSTN, cellular network or Ethernet. You’ll be amused that the latter is encrypted using 3DES and the secret is displayed in the config page. I would hate to think what a malicious repairman could do with additional insider knowledge.

I hope to hell they’ve improved their tech since then (and there is good reason to believe they have… I hope)

Nobody October 22, 2013 1:37 PM

@all americans:
your “cybercriminals and less benevolent governments” is your government.

Andreas October 22, 2013 1:44 PM

Example of a possibly intentional vulnerability:
Although I don’t think this is intentional, it is an interesting case. The example LFSR code in Applied Cryptography (page 379 in the second edition) contains an error that makes the period much shorter than intended – the ^ and >> operations are reversed.
I noticed this error back in 2002 when auditing a BIND update and tracked it back to AC. Since this is an educational example it could affect more pieces of code than if it was a backdoor in some specific software.
(Yes, I did report it, but perhaps the email got stuck in a spam filter… Or was intercepted by NSA to keep it out of the errata?!? 🙂 )
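
Not the Applied Cryptography code itself, but here is a small Python illustration of why such a slip matters: with a correct maximal-length tap mask a 16-bit LFSR runs through 65535 states, while a “mistyped” mask (or a swapped ^ / >>) quietly shortens the period:

    def lfsr_period(taps=0xB400, start=1):
        # 16-bit Galois LFSR; 0xB400 is a maximal-length tap mask (period 65535).
        state, period = start, 0
        while True:
            lsb = state & 1
            state >>= 1
            if lsb:
                state ^= taps
            period += 1
            if state == start:
                return period

    print(lfsr_period())               # 65535 with the correct taps
    print(lfsr_period(taps=0xB401))    # a one-bit slip: the period drops below the maximum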

Nick P October 22, 2013 1:46 PM

@ Mike the Goat

I’m aware of these issues. The biggest version of it is mainframe and COBOL software. Here’s a link you might find interesting:

http://www.pcworld.com/article/249951/if_it_aint_broke_dont_fix_it_ancient_computers_in_use_today.html

Comforting to know that our ICBM system depends on VAX’s whose maintainers scrounge for parts on eBay, yeah? Anyway, I previously proposed source transformation, binary translation, or highly accurate emulation to solve these problems. Fortunately, in the VAX case, one company is doing exactly that. Good to see some progress. 🙂

Greg October 22, 2013 2:01 PM

“…commercial software system is easier to subvert, because the profit motive provides a strong incentive… ”

Could you not also argue that fear of a discovery of a backdoor destroying their market would be an incentive NOT to participate?

Mike the goat October 22, 2013 2:09 PM

Nick: thanks for the link, I hadn’t read the article before but I can personally confirm the OS/2 reference.

Re ICBM systems. That figures. I know the Russians had them all controlled by KREMVAX 😉

Lee October 22, 2013 3:35 PM

Is it me, or does it seem like something named NSAKEY fits both the HIGH DENIABILITY and the MINIMAL CONSPIRACY criteria?

Czerno October 22, 2013 6:12 PM

A deliberate, targeted backdoor in MS Windows crypto: I can’t remember if this was noted on this blog, but it was found by a French security firm analyzing crypto DLLs several years ago that the crypto in then-current Windows versions was deliberately weakened – I think, by /forcing/ several bits of the RNG output – on a very specific condition, viz. that the selected OS language be fr-FR!

It has not been revealed in whose interest that weakness was introduced. US spying on the French ? Or the French gov spying on its citizens ? (Or did MS charge BOTH governments for the same blob of code ?)

BTW the targeted vuln maybe still present in today’s windozes…

RobertT October 22, 2013 7:42 PM

So much focus on the now completely irrelevant WinTel world.

Nobody looking towards the future on personal communications cares about Intel, Microsoft or Nokia for that matter, they are all irrelevant. (except maybe on the server side)

IMHO the companies sitting in the driving seat are:
Google,
Samsung,
Apple,
Qualcom
Mediatek
Huawei
ARM

I’m not really sure of the order of importance, but that’s my short list and it’s the place where I’d be looking most closely for security vulnerabilities.

None of these companies is very open minded, and they generally don’t reveal anything about the inner workings of their products; in part this is to prevent patent infringement notices. Today a smartphone chip has over 60M transistors, so all manner of circuits and software/firmware are implemented which violate patents from just 5 or 10 years ago. As a consequence they’ve developed a defensive strategy where the companies simply don’t reveal what’s under the hood. This leaves the patent troll with the very expensive task of reverse engineering a huge chip, or alternatively signing a cross license effectively blind. Most big trolls take the small fee and move on; the problem trolls are the ones with single-point patents who want to make lots of money, so their job needs to be made as difficult as possible. (Look at the famous BlackBerry case…)

Many of these companies have VERY murky beginnings. Huawei and Mediatek are obvious examples, but even Qualcomm has obvious links back to the military secure radio communities and their close friends at Fort Meade.

Most of this list relies on ARM processors; you don’t need to scratch very deep to find many common links between ARM and the Cheltenham crowd.

Is it possible that Samsung is in some way answerable to the Korean gov’t? (Let’s see, there was some messy bail-out about 15 years ago without which Samsung might have gone belly-up.) Hmmm.

As you can see Intel and their implementation of the TRNG is not even in the top ten of my security backdoor list.

Today in the Android development world applications ask for AND are granted DIRECT access to almost any and every function of the phone. The attack surface for malicious applications is astounding, the iPhone is no better yet these are the personal computing devices upon which most of the world relies for their real life communication / data security.

RobertT October 22, 2013 8:28 PM

@D
I also have a lot of experience with semiconductors, none of it with Intel. I have a lot of experience analyzing chip function using backside optical emissions techniques which you are probably aware of if you work with Intel.

It is very difficult to find others that are both technically skilled and interested in this field, so maybe we should find a way to communicate offline.

Dirk Praet October 22, 2013 8:28 PM

@ RobertT

Nobody looking towards the future on personal communications cares about Intel, Microsoft or Nokia for that matter, they are all irrelevant.

Ceteris paribus, they may become irrelevant at some point, but they still have a huge user base today in the PC market. I don’t believe the NSA can ignore them just yet. It’s equally surprising that with only one exception (@ Nick P.), nobody even mentions Big Blue either.

IMHO the companies sitting in the driving seat are:

Given its ubiquitous presence in any (inter)network infrastructure, I would definitely add Cisco.

Baz October 22, 2013 8:39 PM

Dear Alice,
For years now, the NSA has been persuading people called Alice and Bob to be in charge of communication endpoints so that they can mount a known-plaintext attack using the header and footer of their messages.
Yours sincerely,
Bob

PS I have my suspicions about the skewing of the Solitaire cipher too. Was the designer cut a deal? http://www.ciphergoth.org/crypto/solitaire/

questions October 22, 2013 9:50 PM

hi all

I have a puzzle for you. Something seems paradoxical to me.

I see that there exist little hardware usb keys to top up the entropy pool in /dev/random and I presume this means better random numbers and that means better private keys. Correct?

The puzzle is this. If you cannot work out whether a random number generator is compromised by looking at its output, then how do you know there is more entropy to begin with?

RobertT October 22, 2013 10:51 PM

@Dirk Praet
I don’t have exact figures in front of me, but the worldwide PC market is about 250M units per year, whereas the worldwide phone market is about 1600M units per year. This phone unit volume used to be simple talk-and-message / feature phones but is quickly transitioning over to smartphones. And this base ignores the pad market (say another 150M), not to forget the smart TV market (about 250M units).

BTW the core CPUs/architectures for almost all these platforms are ARM- or MIPS-based.

Now what was that you were saying about huge user bases?

Figureitout October 22, 2013 10:57 PM

@RobertT: “Today in the Android development world applications ask for AND are granted DIRECT access to almost any and every function of the phone.”
–Yeah, that’s why I had no qualms ditching my Android. It kept doing things I told it not to; some form of malware would keep turning on Google Talk when I said no a million times. Basically it’s a walking open wifi network just sniffing for goodies.

“It is very difficult to find others that are both technically skilled and interested in this field”
–While not high-skilled, I have high interest. It’s also hard to find someone or something to really educate you in the subject matter; unless of course you “wear the green”, which I have a philosophical objection to. Not to mention the equipment necessary; I have high interest in how these work, how they’re made, etc.

cipherpunk October 23, 2013 12:40 AM

What about services that intentionally put a weak max limit on passwords? (e.g., email (MS Hotmail), banking (BofA), etc) What about services that intentionally put a weak max limit on passwords but do not inform the end user (e.g., you enter in X characters, and system truncates at Y characters and throws away the rest)?

I reported an issue to BofA with respect to their challenge-response system, and to date have yet to see a fix; right now I would consider it security theater at best.

Wael October 23, 2013 1:00 AM

@ RobertT

Today in the Android development world applications ask for AND are granted DIRECT access to almost any and every function of the phone.

That isn’t entirely accurate. We can talk about this in more detail later, since it’s a little OT. Perhaps you should look at the latest security enhancements in 4.2 – 4.3.

Mike the goat October 23, 2013 1:01 AM

RobertT: I would suspect that those who care about their privacy are more likely to shun mobile devices in favor of the common PC. I believe the NSA knows this, hence the apparent focus on WinTel. That said, I don’t believe for a second that ARM and MIPS aren’t compromised. Their program’s objective was to have maximum influence. No doubt they did their job thoroughly.

Paul H October 23, 2013 1:49 AM

Hey Bruce, you’re soliciting examples of probable backdoors? I’ll bring two, both SSL related (let’s leave aside the CA model of authentication/trust):

  1. The Debian OpenSSL “flaw” discovered 2 years after the code push (2006) in 2008. This filtered down to Ubuntu and any Debian derivative.

Why? Too many SSL derivatives to watch. It’s the one to pick since you get most of Linux when you do it (most servers and Ubuntu users). Attacking OpenSSL proper is loud. It’s easier to get at one of the maintainers through force or payment (probably the latter in this case) to hobble the very specific part of OSSL that he did. He commented out two lines; one was harmless and the other broke entropy on D-OSSL. In fact it seems now, after the leaks, as I’m sure you know Bruce, that the NSA’s MO is going after randomness.

He still works at Debian. Still maintains OpenSSL for Debian, and controls the build servers. He was not asked to leave for his actions (which at the least violated GPL, as he didn’t give his code back to the OSSL guys, and this is because they would have immediately spotted his sabotage, though to be fair he did tell them what he was going to do and they didn’t see an issue with it). (A toy illustration of the resulting keyspace appears at the end of this comment.)

  2. The recent discovery that Android’s SSL was downgraded from AES to RC4-MD5 in 2010 as the default cipher and is still this way. http://op-co.de/blog/posts/android_ssl_downgrade/

Why? It’s subtle. Three years of no one looking, and app developers who use SSL in their apps just sticking with RC4-MD5. Who audits the SSL on an Android device? Almost no one. It gives them a great ledge to stand on to break SSL.
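
To make the first point concrete, here’s a caricature in Python (not the actual OpenSSL code). The practical effect of the Debian change was that the process ID became essentially the only varying input, so the whole keyspace collapses to at most 32768 possibilities:

    import hashlib

    def debian_style_secret(pid):
        # After the patch, (essentially) only the process ID still varied
        # between runs of the seeding code.
        return hashlib.sha256(b"static seed material" + pid.to_bytes(2, "big")).hexdigest()

    # An attacker enumerates every possible secret in seconds.
    all_possible = {debian_style_secret(pid) for pid in range(32768)}
    print(len(all_possible), "secrets cover everything such a system can generate")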

Rolf Weber October 23, 2013 3:31 AM

@Deliberately_Missing, another question:

Do you know for which protocols Lavabit used PFS?
Just for HTTPS or all the STARTTLS stuff as well?

Andrew Hickey October 23, 2013 4:45 AM

Paul,
The OpenSSL maintainer’s actions didn’t violate the GPL at all. He made the full source code fully available. There’s no GPL requirement to feed changes back to the original author.

GentooAndroid October 23, 2013 6:34 AM

@Paul H: “http://op-co.de/blog/posts/android_ssl_downgrade/”

A comment on that page, authored by Cédric (2013-10-15 16:40:17), states:
“I ran a quick packet capture to see what cipher GMail would use when initiating a SSL connection from a fully-fledged browser/OS combination (Firefox/Ubuntu). Despite my client advertising “strong” ciphers (in the correct order), GMail opted for RC4+SHA…”

Another shocking smoking gun.

gonzo October 23, 2013 7:23 AM

It’s amazing to me that so few people remark upon the emergence of truly hidden folders in the Internet Explorer browsers from about version 5.5 onward.

Integrated with the Windows shell, the behavior I am describing is cache folders that remain invisible even when Windows is instructed to show all hidden and system files and folders.

I have never doubted this behavior was added at the instigation of the government. Windows provided no such protection for its own kernel. For the bootstrap. For the registry. Nor for the files associated with the most fundamental of OS processes.

Just for the cache of web pages visited. And at that time, Hotmail email messages landed in there in plain text.

It’s not a backdoor per se, but it is the accumulation of personal information at a location obfuscated from the user, inaccessible to the (average) user, and handled differently than all other user and OS files.

Snarki, child of Loki October 23, 2013 7:34 AM

I’m coming to suspect that the NSA has inserted an insidious backdoor into the widespread ROT13 algorithm.

FURRFU!

Clive Robinson October 23, 2013 8:20 AM

@ Mike the Goat,

You can actually blame Apple for alternative data streams; they were Micro$haft’s attempt to “keep up with the Joneses” in file systems for desktop metaphors.

I actually like them because most people have no idea how to use them, and they are a great place to put “alternative code” which can be called into use with ease and not be visible to other coders, testers, code reviewers and other not very helpful people nosing around in your “development patch”.

One use I put them to is “save your 455 time”: it is a matter of self protection to always have your code development three to five weeks ahead of where the team leader, project manager and other sundry management think you are. Thus when the inevitable cockup/disaster happens you have three to five weeks of hidden time you can use to pull things back on schedule and save your, and possibly some of your colleagues’, bacon.

For the many people out there who have not got a clue as to what MS NTFS “Alternative Data Streams” are, technically they are “File System Forks” (FS-F), which are not to be confused with process forks, and many file systems have them. They are kind of the opposite of File System Links (FS-L). An FS-L allows a file to be reached by many names, so is a many-to-one relationship; an FS-F, however, allows a single file reference to access several related files and is thus a one-to-many relationship. This allows files to be “objects” with varied attributes attached to them in recognised name spaces, which is useful for holding icons, ACLs and all sorts of other useful complex file attributes that file control bits (i.e. rwx etc.) cannot handle.

If you want to know more then have a look at,

http://en.m.wikipedia.org/wiki/Fork_(file_system)
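
To make the one-to-many relationship concrete, here is a minimal sketch, assuming a Windows machine and an NTFS volume, that attaches a named stream to an ordinary file via the Win32 API (CreateFileA/WriteFile); the file names are arbitrary examples. The stream does not appear in a plain “dir” listing, though “dir /R” will reveal it.

    /* Minimal sketch (Windows/NTFS only): "notes.txt" is the visible file,
     * "notes.txt:hidden" is a named stream attached to it.  A plain "dir"
     * listing omits the stream; "dir /R" will show it. */
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    static void write_stream(const char *path, const char *text)
    {
        HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("open failed: %s\n", path);
            return;
        }
        DWORD written = 0;
        WriteFile(h, text, (DWORD)strlen(text), &written, NULL);
        CloseHandle(h);
    }

    int main(void)
    {
        write_stream("notes.txt",        "nothing to see here\r\n");
        write_stream("notes.txt:hidden", "payload in the alternate stream\r\n");
        return 0;
    }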

gonzo October 23, 2013 8:28 AM

@Mike the goat,

ADS to my view falls into the “hybrid” backdoor situation.

It’s tech that had a legitimate, non-nefarious purpose when first adopted, but that we can surmise has been RETAINED long after it became about as useful as a human appendix for potentially exploitable reasons.

I think the only behavior I’ve ever seen with ADS from a legit OS perspective is where files are flagged, via an ADS, as trusted or untrusted (i.e., whether they originated on the net), but that’s it.

Also, there was malware in the past that hid alternate executable code in the ADS of legit files. That surely could be a useful vector for NSA-style shenanigans, particularly if there were an “undocumented” way to place or activate “hidden” ADS that would not be visible to virus or malware scanners calling up info on the ADS using the published APIs.

At bottom, we’re sort of victims of our own happy surplus of data storage and bandwidth. Back in the days of 56K modems with hardware UART chips on the motherboard (that would grab interrupts and impact performance), there was no way to “secretly” usurp a lot of user data, even if the attack vector waited only for times the user was connected.

Back when Windows 98 was out there, new systems were coming out with, what, 2 or 3 gigabyte hard drives?

And that matters. One of the reasons the secret “really hidden” IE cache files were discovered was that users saw massive amounts of lost space, did not remember setting IE to anything other than its default caching behavior, but could not find the missing space using Explorer or other file searches. There was a long post on this years back at the f**k microsoft site, and I’m pretty sure it was SCARCITY that led to the discovery.

Our always-on Internet connections and massive storage volumes make it hard to detect things like, for example, what would happen to the hard disk if, say, every single character that passed through the keyboard buffer were added to an ADS tied to the Windows page file, accessible only via an undocumented mechanism not exposed by the standard API.

Cervisia October 23, 2013 8:49 AM

@questions
The amount of entropy in a pool indeed cannot be derived from the random bits. In Linux, there is a separate variable entropy_count that gets incremented or decremented when data is written to or read from the pool.
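
A minimal way to watch that estimate from user space, assuming a Linux system: the input pool’s entropy_count is exported at /proc/sys/kernel/random/entropy_avail, so a sketch like the following just reads and prints it.

    /* Minimal sketch (Linux): print the kernel's current estimate of the
     * input pool's entropy (the entropy_count mentioned above), as exported
     * at /proc/sys/kernel/random/entropy_avail. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");
        if (!f) {
            perror("entropy_avail");
            return 1;
        }
        int bits = 0;
        if (fscanf(f, "%d", &bits) == 1)
            printf("kernel entropy estimate: %d bits\n", bits);
        fclose(f);
        return 0;
    }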

Dave Walker October 23, 2013 9:41 AM

A follow-up, rather than a statement (implied or otherwise) trying to “out” a suspected backdoor:

For a recent, detailed description of how a current commercial OS’ RNG uses and mixes CPU hardware RNG output with other sources, I’d recommend reading https://blogs.oracle.com/darren/entry/solaris_random_number_generation .

My own 2 penn’orth is that, if I wanted to nobble crypto, I’d go after the RNG hardware as described. However, points raised in the link above show that this would have a limited effect, and the open question is whether that effect can make the difference between cleartext recovery in feasible time, and not.

gonzo October 23, 2013 10:19 AM

Was discussing this with a colleague who pointed out the “new” behavior of the shadow copy service in recent versions of Windows.

Unlike the old System Restore, which kept copies of .dll and other system files by archiving them before they were overwritten, the shadow copy service backs up everything.

Everything.

That text file you prepare and encrypt and then wipe?

No mas. The original is probably still in the shadow archive.

Notably, or perhaps notoriously, there is no option to force Windows to shadow only system files.

Back in 2009 it didn’t generate too serious an angst factor here.

https://www.schneier.com/blog/archives/2009/12/the_security_im.html

Call this sort of thing “boiling the frog” erosion of basic security in plain view, and an end run around the need for backdoors in some cases.

Now, imagine some of the other things we now know about how the NSA operates. Think about combining the existence of the shadow copies with an exploit giving remote root, or some undocumented facility that pipes the shadowed information off as garbage packets. Even worse.

Add hardware-level compromises… motherboards designed to amplify and broadcast the DMA channels, for example, or the reported plans for processors with built-in 3G functionality! Good lord.

It amuses me that my old non-connected 486 33 MHz laptop, getting data via a floppy disk and running DOS, is really the only machine I actually feel comfortable with these days. Boot from four different flavors of DOS, and the directory structure and content of the drives look exactly the same. Nothing is hidden beyond reach from the OS perspective. I can create a RAM drive at boot, use a very lightweight editor (or even pipe command-line text into a file) to create plaintext messages on that drive, and I can PGP-encrypt the message and feel 100% confident that when I transfer the ciphertext to a floppy, there will be nothing left on my system of the original, and nothing additional on the floppy. I can even raw block scan the floppy to be sure.

There’s just not that sort of comfort on any modern system. And yet my system would be worthless for, for example, reviewing the Snowden documents. It’s a quandary to be sure.

Alan Braggins October 23, 2013 10:23 AM

“It was an accident” isn’t the only form of deniability. Microsoft’s explanations of _NSAKEY, while a bit incomplete and evasive, provide a plausible legitimate account: they didn’t think about backup in their original key-generation procedure; it took an NSA review to find that (“and how secure is your backup of the signing key, can we be sure that won’t be compromised?” “What backup?”); and they were trying to avoid admitting exact details to people they dismissed as mad conspiracy theorists anyway.
On the one hand, that supports the idea it’s not a backdoor. On the other hand, it’s exactly what the NSA would want Microsoft to say if it was a backdoor.

Dirk Praet October 23, 2013 12:35 PM

@ Dave Walker

“3. Additionally on x86 systems that have the RDRAND instruction we take entropy from there but assume only 10% entropic density from it. If the rdrand instruction is not available or the call to use it fails (CF=0) then the above two entropy sources are used.”

That sounds reassuring. You wouldn’t happen to know from Darren or any of the other guys to what extent the suspicions about Intel’s RDRAND contributed to this particular implementation of Solaris SWRAND?
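
As an aside, here is a minimal sketch of the CF=0 convention mentioned in the quote, assuming a compiler that provides the _rdrand64_step intrinsic (GCC/Clang, built with -mrdrnd): the intrinsic returns 0 when the carry flag is clear, which is the caller’s cue to retry or fall back to other entropy sources.

    /* Minimal sketch of the CF=0 convention: _rdrand64_step() returns 1 on
     * success (carry flag set) and 0 on failure, so callers retry a few
     * times and then fall back to other entropy sources.
     * Build with -mrdrnd on an x86-64 CPU that has the instruction. */
    #include <stdio.h>
    #include <immintrin.h>

    int main(void)
    {
        unsigned long long r = 0;
        int ok = 0;
        for (int tries = 0; tries < 10 && !ok; tries++)
            ok = _rdrand64_step(&r);

        if (ok)
            printf("rdrand: %016llx\n", r);   /* credited conservatively */
        else
            printf("rdrand unavailable; fall back to other sources\n");
        return 0;
    }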

@ Nick P.

Dave is a source of wisdom for MLS stuff.

Tim October 23, 2013 2:09 PM

The NSA’s actions create an “arbitrage” of information. The best defence is to spy on the NSA straight back, or is that not allowed? 🙂

RobertT October 23, 2013 4:24 PM

@Mike the Goat
” I would suspect that those who care about their privacy are more likely to shun mobile devices in favor of the common PC”

The problem with what you’re saying is that ALL worthwhile data/information/texts, whatever, must by definition have multiple recipients.

Now, we can be certain that the most security-paranoid will use PCs in air-gapped configurations, BUT for the 0.01% of the population that does this there will be 10 times that number who use some sort of hardened but still connected PC, and above this another 10 times who simply use an Internet-connected PC with anti-virus (and think themselves secure). At the bottom of this pyramid we have the vast majority of people who live in security la-la land; they’ll happily use their mobile phone/tablet for everything. (What’s really weird is that this majority will gladly vote for increased rights for law enforcement agencies, especially if they sense that their communications/data are not safe… clueless.)

Unfortunately this means that ANY data, texts, whatever, that you send to anyone else will in all likelihood be mopped up at one or more security weak points within the system. This fact makes other people’s security weaknesses my problem.

Nick P October 23, 2013 7:27 PM

@ Dirk Praet

“Dave is a source of wisdom for MLS stuff.”

I appreciate the tip. 🙂

@ Dave Walker

Are you the one whose name I’ve seen on many papers about component-, language-, and type-based security approaches? If so, quite a collection of interesting work. I might like to talk to you about a thing or two at some point.

Nick P October 23, 2013 7:36 PM

@ RobertT

“Unfortunately this means that ANY data, texts, whatever, that you send to anyone else will in all likelihood be mopped up at one or more security weak points within the system. This fact makes other people’s security weaknesses my problem.”

Exactly. It’s why whatever scheme is used must default to secure operation and put as little trust in outside sources as possible. This concept also causes trouble in two other areas:

  1. “Disappear in the crowd” anonymity schemes fail because most of the world remains non-anonymous.
  2. The most secure online service can be DDOS’d off the net because most people use insecure software, providing attacker’s fuel for their botnet’s fires.

Add to that the fact that most Internet protocols and endpoint APIs are insecure, and you get a nightmare for anyone wanting to prevent subversion while not living in a virtual cave. I mean, if you want to participate like the rest to some degree, you take on so much residual risk that their choices have created. That, and TLAs’ current levels of power, made me mostly stop caring about personal ITSEC against such entities. Case in point: I post here from a Linux laptop or Android smartphone. (Shrugs)

RobertT October 24, 2013 2:07 AM

@Nick P
What concerns me is not so much the actions of the NSA but rather the precedent that is being set by their now well documented behavior. It’ll be hard for anyone to argue that China does not have similar rights to sweep up as much data as they can get their hands on.

Now for the tricky part: How will that data be protected / used?

It is easy to imagine a case where Russia or maybe China collects internal emails from inside some (say, Indian) company and, based upon the evidence documented by the emails, decides that a crime has been committed under their law. It then implements its own extraordinary rendition process to ship the offending Indian executive off to some Gulag or Laogai without any recourse or legal due process. Unfortunately the behavior of US gov’t entities is setting a lot of dangerous international legal precedents.

That’s why I think data security is so important, and why I’m much more concerned about HUGE data leaks coming from mobile comms devices than anything else. It seems likely to me that we are entering a new period where the process of, and respect for, international law is failing the world’s citizens, and this failure will have consequences.

Mike the goat October 24, 2013 2:30 AM

RobertT: the US now just looks like a hypocrite when Kerry gets up and speaks of demanding a fair trial for some American held elsewhere in the world. I think the US has pretty much signed its own death warrant as far as its continued existence as world cop goes. Everyone knows that when the US dollar stops being king worldwide (and this process has already gone so far it can’t be averted), its political clout will diminish along with its ability to spend well beyond its means without any major repercussions. Call me an alarmist, but I believe in the next fifteen to twenty years we will look back on the halcyon days of the early 1970s and long for things to be just a little bit like how they used to be. That’s if we aren’t rotting in a FEMA camp, of course.

Nick: everyone will make tradeoffs between convenience and security – even those of us who know better. Of course this makes the spooks’ job that bit easier, as they can more readily identify those with hardened systems and/or procedures and know that they likely harbor interesting intelligence. That’s why red herrings like your average Joe PGP-encrypting his emails to grandma are so useful.

GentooAndroid October 24, 2013 3:33 AM

@gonzo: “secret “really hidden” IE cache files were discovered was that users saw massive amounts of lost space, did not remember setting IE to anything other than its default caching behavior, but could not find the missing space using Explorer or other file searches. There was a long post on this years back at the f**k microsoft site”

I can’t find the long post you are talking about, but this link might be useful to anyone interested in this really hidden browsing history (for IE and Outlook):
http://membrane.com/security/secure/Microsoft_Is_Unscrupulous.html

Clive Robinson October 24, 2013 3:46 AM

@ Gonzo,

The “boil the frog” [1] issue in ICT arises, in each case I’ve looked at, from management’s mistrust of humans with failings (even though management have the same failings and more).

Basically it starts from the question of “what do you do when things go wrong”. Pre-1960s you’d scrabble around for bits of paper to “reconstruct the past” and recover work or assign liability. However, the increasing use of both comms and info tech meant pieces of paper were not being generated, thus liability was increasingly difficult to assign and white-collar crime became more prevalent/noticeable.

So the need to log/test/audit came to the fore from engineering and science, with the drummed-in mantra of “if it is not written down it never happened”.

However, a major contributing secondary effect came into play. Prior to the mid-1970s, managers had secretaries either assigned to them or in a typing pool; they provided the “institutional memory” but were expensive to have. This was seen as both a choke point on the speed of business and an unnecessary expense. Info tech was seen as the problem solver (even though it all had to be typed in). The problem was that the execs did not have the ability to “type”, nor were they pre-disposed to “record keeping” and thus “info search/retrieval”, and did not in any way verify what went into the info tech systems, and thus could become “patsies” to take the fall…

Thus we have implicit requirements to automatically record everything typed in or communicated. It takes very little “self interest” for backup/business-continuity suppliers to whip up a firestorm of FUD around the very occasional disasters happening to those who did not buy their solutions, in order to sell these systems. The result: huge repositories of business information.

This created a secondary market of business management by “data warehousing” and, more recently, the legal attack of “electronic recovery”…

The fact that all of these systems were, and still are, “done on the cheap”, and that security does not get a place at the design table, means it’s like leaving the data in a great big pile in a company’s reception area, but without a guard, receptionist or locks on the doors to prevent others helping themselves… As was once observed, “It’s like stealing candy from a sleeping baby”.

[1] For those that don’t know about “boil the frog”, it comes from a supposed experiment in physiology. If you drop a live frog into hot water it will hop out almost immediately; however, if you put a frog in cold water and heat it slowly, the frog won’t hop out and will eventually “boil to death”. Thus the idea being the difference between install and instill: if you install new powers then the change is sufficient to cause a “knee jerk” reaction and “kick back”; if however you instill new powers with each “salami slice”, the change is too small to cause kick back, which also gives rise to the observation of “slowly slowly catch the monkey”.

gonzo October 24, 2013 10:05 AM

@Alan Braggins….

“…Adapting this version for metaphorical use is left as an exercise for the reader…”

Sorry, I’d like to give that some thought, but I have to update my facebook wall and send a few tweets.

🙂

Chris October 24, 2013 10:54 AM

Don’t forget Ross Anderson and co.’s “Newton Channel”, which showed that DSA and other digital signatures have covert channels.

Of course that doesn’t prove whether they are there by intent or accident, but simply that this sort of thing has been around since the early 90s.

Jason Ramsey October 24, 2013 12:07 PM

There is an email encryption plugin for Thunderbird that allows one to use PGP (gnupg in my case) to encrypt email.

It has the option (I believe it is on by default) to encrypt the email with both the recipient’s and the sender’s public key.

I can’t help but wonder how much easier it is to break the encryption if we have two different keys encrypting the exact same message in the same envelope.
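For context, OpenPGP-style encryption is hybrid: the message body is encrypted once under a random session key, and it is that session key, not the message, that gets wrapped separately for each recipient key. A toy sketch of that layout is below; toy_encrypt_body and toy_wrap_key are simplified stand-ins, not GnuPG’s actual code.

    /* Toy sketch of the OpenPGP hybrid layout (not GnuPG's code).  The
     * body is encrypted once with a random session key; only the session
     * key is wrapped per recipient, so "encrypt to self as well" adds a
     * second wrapped key, not a second ciphertext of the message. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for a real symmetric cipher (placeholder only). */
    static void toy_encrypt_body(uint8_t *buf, size_t n, uint8_t session_key)
    {
        for (size_t i = 0; i < n; i++)
            buf[i] ^= session_key;
    }

    /* Stand-in for public-key encryption of the session key to one
     * recipient's key. */
    static uint8_t toy_wrap_key(uint8_t session_key, const char *recipient)
    {
        printf("wrapping session key for %s\n", recipient);
        return session_key ^ (uint8_t)recipient[0];
    }

    int main(void)
    {
        uint8_t body[] = "the actual message body";
        uint8_t session_key = (uint8_t)(rand() & 0xff);

        toy_encrypt_body(body, sizeof body - 1, session_key); /* done once */

        /* One wrapped copy of the session key per recipient key. */
        uint8_t for_recipient = toy_wrap_key(session_key, "recipient");
        uint8_t for_sender    = toy_wrap_key(session_key, "sender");
        (void)for_recipient; (void)for_sender; (void)body;
        return 0;
    }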

Nick P October 24, 2013 12:09 PM

@ Jason Ramsey

“There is an email encryption plugin for Thunderbird that allows one to use PGP (gnupg in my case) to encrypt email.”

And there’s probably a zero-day in Thunderbird that allows NSA to decrypt PGP email. That’s how they’d look at it. Gotta think like them to beat them. 😉

Moderator October 24, 2013 3:13 PM

Jose, that’s more than enough. Dirk Praet has already answered your claim that he is some guy named Dirk Paehl, though he hardly needed to, since you have no evidence except a very vague similarity of names. Repeating the same accusation is not going to convince anyone, and it’s annoying and off-topic. Please stop.

Blog Reader One October 25, 2013 4:05 AM

In 2003, a user’s examination of an update to the Java Anonymous Proxy (JAP) software revealed a hidden function intended to record the IP addresses of users who accessed one particular Web site via the JAP anonymizing network. The JAP team confirmed the presence of the “crime detection” function and mentioned that it had been mandated by the courts in Germany, where the JAP software is produced. Later that year, the JAP team was allowed to suspend the monitoring. A writer for The Register, Thomas C. Greene, raised the issue of disclosure and stated that it is problematic for an anonymizing service to secretly introduce even limited exceptions. Though the JAP team fought the court order and stated that shutting down the service would have appeased the authorities who sought the backdoor function, Mr. Greene mentioned that the situation could have been handled better without silence, by leaking details to the press overseas or temporarily disabling the proxies affected by the backdoor, among other options.

Moderator October 25, 2013 1:39 PM

This may be the most ridiculous fight I’ve ever seen on Bruce’s blog.

Jose, you are derailing the thread and your claims make no sense, so you are no longer welcome on this blog. I’ve removed your comments in this thread and I’ll remove anything you post in the future. If you go away now and don’t try to comment here again, I’ll leave your first comment over on the “Can I Be Trusted?” thread intact. If you persist, that will be removed too, so please quit while you’re ahead.

The rest of you: if he does reappear, please don’t make it worse by fighting with him. If you’re the first to see it, you can briefly note that Jose is a banned troll so no one is misled, but leave it at that and the comment will be deleted. Also, Mike, if you find yourself starting a comment by admitting that it’s “in bad taste,” it’s time to close the tab and walk away without hitting submit.

gonnagetflamed October 25, 2013 5:47 PM

Firstly, this is a great write-up. I agree with most of it, but I do have some major bones to pick here:

“This is a hard problem. We don’t have any technical controls that protect users from the authors of their software.”

The premise you seem to suggest here (that such controls are feasible) is ridiculous. The same goes for the statement about the profit motive being ineffective at producing trust in this scenario.

There is more risk to the user in trying to solve the first problem than there is in leaving it unsolved. Any technical control, whether it’s a standard, technology or committee will take a system that currently has multiple points of failure and replace it with a system that has a single point of failure for corruption.

For the user, it does nothing. We substitute one 3rd party entity that he must trust for another, and truly pursuing absolute verification of source code as an empirical possibility is irrational. Let’s assume we create a delegation with volunteer funds who verifies source code. I don’t know them. You may not know them. Most users will not know them. On what basis should we consider them credible and not bought or coerced with NSA funds? How would this basis be any different than that which establishes trust for Certificate Authorities, NIST and any number of other entities which have been spectacularly demonstrated as untrustworthy by this year’s leaks?

Even if we can establish the source code with personally credible trust (which is already placing us well into utopia), where does this march for absolute verification end? Who’s to say hardware is trustworthy? Who’s to say operating systems are trustworthy? Who’s to say any of the comparatively underexamined routing and TCP/IP stack protocols that run the Internet (most of which were developed on contract for Uncle Sam) are trustworthy? How about high level languages? Has anyone done a security audit of the major compilers in use? What if GCC or any of the latter day x86 microcode extensions are fundamentally backdoored?

There is no end to that line of reasoning. You have to accept the risk of not knowing precisely how the software was developed, and no appointed board or revision process is going to fix that. This isn’t the food industry, where we can conduct a large array of 45-minute inspections and an occasional long-term study and establish any certainty about the safety of the product. What we’re wrestling with is a collection of products that are probably so complex that, even exhausting all available (and suitably qualified) manpower on the planet, we’d not be able to keep up with proving the standards of safety we’re aiming for, or even those expected in other industries.

So while I think moving towards making code voluntarily open source at least to selective (or even broad) review is a positive step, it’s largely security theater. A determined code obfuscation expert with sufficient resources could thwart such efforts without much difficulty, regardless of whether the process was formalized and mandated (which is how I interpret your statement) or simply volunteer. And if the process was formalized, it would do a great deal of damage to the industry by making it tremendously less efficient.

To get at the second statement, why should we believe that the profit motive doesn’t work? Have any of the companies implicated by these leaks come even close to promising protection from mass surveillance? Hardly! They wouldn’t have the stones to! Most of the industry seems to just roll along like none of this has even happened, judging by their PR!

The US technology market can provide more incentive, in sheer quantity, than Uncle Sam ever could, particularly as we dance with shutdown and austerity. The current budget battle is not going to be the last. The days of the NSA receiving $10 billion budgets are probably numbered, and even $10 billion is not enough to keep all of the major tech firms well fed compared to their normal revenues.

These companies have no reason to believe that they will lose money from behaving in this fashion. That’s why they do it. They are completely unafraid of the consequences.

Consumers will not fix that problem, either. No, that is a problem for developers to fix. We must make compelling products that at least aim for this level of provable security if we want to slay that giant. There will probably be no way of verifying whether they can beat the NSA, but right now nobody is even concerned with that requirement. Right now there are no good alternatives to the backdoored products for the most part.

And my last little secret is that FOSS isn’t going to solve that. FOSS will help the creative types get a demo ready to show to a VC, but profitable products will be the only ones polished and baked enough to be compelling alternatives to what is peddled by companies like Microsoft, Apple and Google. Certainly try to hold companies’ feet to the fire by asking for organized audits from trusted 3rd parties, but you have to let the market (and profits) drive this. Innovation requires investment, every time. And I mean execution, not just ideas. We already have the ideas to change this, if you’re curious enough to find them. We are lacking in well-executed examples.

Mike the goat October 26, 2013 4:16 AM

mod: yeah, point taken re feeding the troll and indeed my response wasn’t handled with maturity. Oh, and thank you for actioning the request the other day – appreciate it.

gonnagetflamed: surprisingly I don’t think there will be many who disagree with the supposition that if you can’t have open source then independently audited ‘closed’ source is a reasonable enough alternative. I have encountered proprietary software companies who will happily give you access to the source code for review so long as you are happy to sign an NDA. No doubt some will not be eager, especially if the code quality is embarrassing (uh, Redmond). Unfortunately many vendors still think that security through obscurity is a good idea.

David Johnston November 18, 2013 5:32 PM

FIPS 140-2 4.8.2

Never was there a process more obviously designed to lower the entropy from an RNG. Just mandate that conformant implementations throw away all the matching pairs.

“If each call to a RNG produces blocks of n bits (where n > 15), the first n-bit block generated after power-up, initialization, or reset shall not be used, but shall be saved for comparison with the next n-bit block to be generated. Each subsequent generation of an n-bit block shall be compared with the previously generated block. The test shall fail if any two compared n-bit blocks are equal.”

FWIW, the Intel RNG doesn’t do that. It’s stupid.
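
For anyone who hasn’t read that section of the standard, here is a minimal sketch of the quoted continuous test for 32-bit blocks; raw_rng is a placeholder for the generator under test, and a real module would enter an error state rather than simply returning a failure code.

    /* Minimal sketch of the quoted continuous test for 32-bit blocks:
     * the first block after startup is saved but not used, and every
     * later block is rejected if it equals its predecessor, which is
     * exactly why the output can never contain a matching pair. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the RNG under test (placeholder only). */
    static uint32_t raw_rng(void) { return (uint32_t)rand(); }

    static int have_prev = 0;
    static uint32_t prev;

    /* Returns 0 and sets *out on success, -1 on a continuous-test failure. */
    static int fips_continuous_rng(uint32_t *out)
    {
        uint32_t block = raw_rng();
        if (!have_prev) {                 /* first block: save, don't use */
            prev = block;
            have_prev = 1;
            block = raw_rng();
        }
        if (block == prev)
            return -1;                    /* matching pair: test fails */
        prev = block;
        *out = block;
        return 0;
    }

    int main(void)
    {
        uint32_t v;
        for (int i = 0; i < 5; i++)
            if (fips_continuous_rng(&v) == 0)
                printf("%08x\n", v);
        return 0;
    }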
