The NSA Is Breaking Most Encryption on the Internet

The new Snowden revelations are explosive. Basically, the NSA is able to decrypt most of the Internet. They’re doing it primarily by cheating, not by mathematics.

It’s joint reporting between the Guardian, the New York Times, and ProPublica.

I have been working with Glenn Greenwald on the Snowden documents, and I have seen a lot of them. These are my two essays on today’s revelations.

Remember this: The math is good, but math has no agency. Code has agency, and the code has been subverted.

EDITED TO ADD (9/6): Someone somewhere commented that the NSA’s “groundbreaking cryptanalytic capabilities” could include a practical attack on RC4. I don’t know one way or the other, but that’s a good speculation.

EDITED TO ADD (9/6): Relevant Slashdot and Reddit threads.

EDITED TO ADD (9/13): An opposing view to my call to action.

Posted on September 5, 2013 at 2:46 PM • 393 Comments

Comments

Alex September 5, 2013 2:59 PM

My biggest fear and complaint is that NO ONE in the US government seems to have any concern about, or intention of, stopping or severely limiting this.

No doubt today’s revelations violate HIPAA and probably many other laws, yet there’s no one looking to hold anyone accountable.

All of this just further confirms my decisions to stay with as much open-source software as possible for our office, maintain everything in-house, and work with internet providers and carriers who are a bit on the hippie/libertarian side of things. Does this make me immune? No, but it certainly makes us a much more difficult target when we’re not using the standard stuff.

Also, has anyone looked into the CIA’s quasi-private organization, In-Q-Tel? They work with a company called CallMiner which handles hundreds of thousands of phone calls a day. Probably something going on there too.

Daniel September 5, 2013 3:07 PM

Now it becomes clearer why Obama has gained such a sudden interest in Syria. Anything to control the narrative and deflect attention from the vital issues.

Give the man credit. No, he wasn’t going to make a stink about a “hacker,” but he’s going to do everything in his power to make sure that what the hacker reveals is promptly buried.

Shunra September 5, 2013 3:14 PM

So basically, the U.S. decided it didn’t need no stinkin’ trust in its corporations, governments, systems, and standards.

They really should have consulted with our host before destroying all the credibility of all U.S.-related entities.

Hanno September 5, 2013 3:15 PM

On the crypto bits in your Guardian piece, I found it especially interesting that you suggest classic discrete-log crypto over ECC.

I want to ask if you could elaborate more on that. Because other respectable cryptographers recommend the opposite:
http://blog.cryptographyengineering.com/2013/08/is-cryptopocalypse-nigh.html

Also, how does RSA play into this? With RSA vs. DSA vs. ECDSA, I’d still say RSA with long keys is the safest bet. How do you regard ECC with non-NSA-influenced curves? What about Curve25519? (I don’t think DJB is secretly an NSA spy.)

What I found especially troubling to hear about DSA is that it’s unsafe as soon as you have a single signature made with a broken RNG. Though I don’t know which other DLP/ECDLP-based algorithms suffer from that.
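Hanno’s point about DSA and broken RNGs is easy to make concrete. DSA’s per-signature nonce k must be secret and never repeat; if a failed RNG hands out the same k twice, anyone holding the two signatures can recover the private key with simple algebra. A toy sketch with tiny made-up parameters (real DSA uses a 2048-bit p and a 256-bit q):

```python
# Toy DSA parameters, for illustration only.
p, q, g = 607, 101, 64          # q divides p-1; g has order q mod p

x = 57                          # private key
y = pow(g, x, p)                # public key

def sign(h, k):
    """DSA signature of hash value h using per-signature nonce k."""
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (h + x * r) % q
    return r, s

# A broken RNG hands out the same nonce twice:
k = 23
h1, h2 = 42, 77                 # hashes of two different messages
r1, s1 = sign(h1, k)
r2, s2 = sign(h2, k)
assert r1 == r2                 # identical r values betray the reused nonce

# Anyone with both signatures can recover k, and from k the private key x:
k_rec = (h1 - h2) * pow(s1 - s2, -1, q) % q
x_rec = (s1 * k_rec - h1) * pow(r1, -1, q) % q
assert (k_rec, x_rec) == (k, x)
print("recovered private key:", x_rec)
```

The same recovery works against ECDSA; repeated nonces are how the PS3 code-signing key was extracted in 2010.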

Jack September 5, 2013 3:23 PM

Bruce,

You are writing about openness in your essays. You are writing about monitoring government and NSA activities. You are asking for people to tell their stories to you.

Why then do you and others take such care to not really be open and share all of the Snowden documents?

Richard September 5, 2013 3:29 PM

Hey Bruce,

Please expand on the undocumented command-line option in Password Safe.

Security by obscurity doesn’t work.

Felix September 5, 2013 3:30 PM

From the Guardian report:

“It shows the agency worked covertly to get its own version of a draft security standard issued by the US National Institute of Standards and Technology approved for worldwide use in 2006.”

Which standard is that?

charlie September 5, 2013 3:36 PM

We need a truth and reconciliation commission.

Anyone guilty of these loses their security clearance. Forever.

Tom September 5, 2013 3:38 PM

Please, whenever you have time/opportunity, tell us more about the ‘why’ of these comments:

“Prefer symmetric cryptography over public-key cryptography. Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.”

Thank you…a luta continua!! May the Force be with you!!

Dietrich September 5, 2013 3:38 PM

You know, we mathematicians go to ALL THIS WORK coming up with all these FANTASTIC structures that are all theoretically UNBREAKABLE.

And what happens? YOU HUMANS SCREW IT ALL UP! STOP MAKING BABY MATH JESUS CRY!

Mike September 5, 2013 3:40 PM

Someone owning the internet is inevitable. People should be happy that it’s the good guys and not the Russians or the Chinese. Does everyone here really want Iran, China, Russia, Syria, etc. to be able to do their business without the intelligence community being able to keep tabs on it?

This sort of pwnage is why people should be proud to be American, and they should be irate that Snowden is giving a huge leg up to our geopolitical adversaries. Hey, if you want to stop using American products, go right ahead. All technology should be considered to have little goodies from its host government hiding inside of it. Anarchists who think that we can enter some sort of stateless utopia through radical transparency are delusional.

William A. Hamilton September 5, 2013 3:41 PM

I just read today's Snowden-based articles in the New York Times and the Guardian on NSA actions to SIGINT-enable target communications. What the NSA and the CIA did with copies of the PROMIS database software, beginning in 1981, provided a robust learning experience for what the NSA appears to have been doing more recently, according to the latest documents furnished by Edward Snowden.

A CIA contractor, GE Aerospace, allegedly operated a PROMIS packaging facility in Herndon, Virginia, beginning in the early 1980s, to install unauthorized, copyright-infringing copies of INSLAW, Inc.'s PROMIS database software on computers into which the government had covertly inserted a replacement integrated circuit, before shipping the turnkey systems to buyers overseas. 

An NSA integrated circuit manufacturing facility in the Silicon Valley allegedly produced the “Petrie Chip” which the CIA contractor covertly inserted into the computers equipped with the PROMIS software.

These turnkey systems were allegedly sold to foreign intelligence and law enforcement agencies and banks through cutout companies. 

The Petrie Chip allegedly replaced an identical-looking integrated circuit and consumed an identical amount of power, making it virtually impossible to discover its presence in the computer.

The Petrie Chip allegedly automatically copied data tracked in the PROMIS system and periodically transmitted the data to a local NSA listening device, defeating any Tempest Shield-type protections.

The CIA allegedly continued to use this contractor-operated PROMIS packaging facility after GE Aerospace was acquired by Martin Marietta and Martin Marietta morphed into Lockheed Martin.

The CIA's Division D, comprising both CIA clandestine services officers and NSA engineers, allegedly managed this PROMIS/Petrie Chip operation until the creation of the successor Special Collection Service, which thereafter assumed the responsibility.

The government euphemistically referred to NSA's Petrie Chip as its "special data retrieval capability" for PROMIS systems designed to steal intelligence secrets.

NSA sold PROMIS through cutouts to banking sector entities to enable real-time electronic surveillance of wire transfers of money and letters of credit, and to foreign law enforcement and intelligence agencies to facilitate computer thefts of their intelligence secrets. In addition, the CIA deployed unauthorized, copyright-infringing copies of PROMIS to virtually every component of the U.S. intelligence community as the standard database software for the gathering and dissemination of U.S. intelligence information.

Neither NSA nor any of the other agencies of the U.S. intelligence community has ever paid copyright-infringement compensation to INSLAW despite two federal court rulings about the government’s theft of PROMIS “through trickery, fraud, and deceit,” and two Congressional investigations.

William A. Hamilton
President
INSLAW, Inc.

Felix September 5, 2013 3:42 PM

In “How to remain secure against NSA surveillance” you suggest a number of methods you have taken to protect your machine used for reviewing classified documents.

How about removing the wifi card and physically disabling the microphone and webcam?

As well as putting a screen filter on to reduce the angle of view, I suppose.

QnJ1Y2U September 5, 2013 3:47 PM

From one of the Guardian articles linked above:

Since I started working with the Snowden documents, I bought a new computer that has never been connected to the internet.

Coming next: NSA compromises all brand new computers.
(What’s scary is that this no longer sounds crazy).

Martin September 5, 2013 3:52 PM

This only works IF you can guarantee that 100% of the government employees and individuals entrusted with these tasks are 100% trustworthy, 100% infallible, and 100% non-subvertible.

Certainly the common assumption in the US is that there is always someone you know who seems to know someone who knows someone with enough clearance in a government agency to get you the “inside” skinny on any individual. No matter how careful an agency is, there will always be another individual, at some point in the future, who enables inappropriate access to the stuff we are asked to “trust” the government to protect.

Pretty sure you could substitute any other country’s government and state the same. So until then we should NEVER allow them the keys to our digital house especially when it’s our tax dollars paying for them to copy the keys!!

Felix September 5, 2013 3:54 PM

@Mike,

unfortunately the US can no longer be classed as ‘the good guys’ even by normal non-conspiracy theorists.

You can argue that you are not as bad as the other governments, but what with the government torture programs, total surveillance, the utter dismissal of constitutional rights in the name of national security, and a political system totally owned by the 1% and the military-industrial complex, your days of being the good guys are well and truly over.

Thomas September 5, 2013 3:58 PM

“People should be happy that it’s the good guys and not the Russians or the Chinese.”

Good guys? NSA? Serious perspective error or what? Patriotic blindness?

As far as we know, the US has started more wars in the last 30 years than the Russians and Chinese put together, and you call them “good guys”?

We call them war-mongering idiots, definitely not “good guys”.

Definitely not any better than Chinese or Russians, but much worse: Obama is already starting a war just to hide this NSA-scandal and UK is following.

Nice job.

Also, the NSA is directly attacking your own constitution, i.e. they are criminals at a very high level, and you are missing that too: how can a professional criminal be a “good guy”?

You have wrong role models, pal.

“This sort of pwnage is why people should be proud to be American”

Just like the Nazis were proud of their efficiency in handling the “Jew issue.” We do know that many were. Just like you are now.

Daniel September 5, 2013 3:59 PM

@Mike. I think you are trolling but I’ll take your comment seriously.

A key problem with the “someone has to own the internet” logic is that no one can. You say the good guys own it and Snowden is a traitor, but what about all the Chinese and Russian spies in the NSA? There is a long list of traitors in the security business. The problem is that when the NSA installs a backdoor for itself, it installs one for everyone else who can get access to that backdoor too. So the idea that the NSA is “on my side” is bogus. They are only on my side so long as they can keep a secret, and there is plenty of evidence they can’t do that. Once that secret is out, I have a backdoor in my computer that anyone can get to, friend and foe, and the NSA isn’t going to do squat for me.

So the idea that the NSA represents the “good guys” is very short-sighted. The NSA is the good guys right up until the NSA’s own ineptitude betrays me to the Russians and Chinese.

Andy September 5, 2013 4:04 PM

The second essay makes me fear Bruce might have an accident soon.

Lots of people assumed the NSA has a lollipop.
Snowden went half a globe away to be protected when he came forward and proved the NSA actually had a lollipop much larger than assumed.
Now you’re openly calling for engineers and coders to start taking that sucker away from the NSA…

I admire your courage.

Jan September 5, 2013 4:07 PM

I would be really interested to know if RSA or DSA is preferable where there is a choice. RSA is most likely a more interesting target; on the other hand, DSA fails horribly if you ever use a key on a system with a broken PRNG. Since PRNGs are obviously a prime target for subversion, my gut feeling would be not to touch DSA with a 10-foot pole.

Also, please spill the beans. Yes, it will suck for the US, but the US isn’t the only country in the world, and is it really a good thing to protect their interests and thus help them violate the rights of everyone else, including their own citizens?

Bruce Schneier September 5, 2013 4:07 PM

“On the crypto bits in your Guardian piece, I found it especially interesting that you suggest classic discrete-log crypto over ECC. I want to ask if you could elaborate more on that.”

I no longer trust the constants. I believe the NSA has manipulated them through their relationships with industry.

Bruce Schneier September 5, 2013 4:09 PM

“Why then do you and others take such care to not really be open and share all of the Snowden documents?”

I believe the Guardian and Greenwald have both written about this.

It’s not my show; I am not in charge of what gets released.

Bruce Schneier September 5, 2013 4:10 PM

“Please expand on the undocumented command line option in Password safe. Security by obscurity doesn’t work.”

I added it in an early version.

There’s no obscurity. It’s Blowfish — you can verify the implementation with any other implementation.

Bruce Schneier September 5, 2013 4:14 PM

“From the Guardian report: ‘It shows the agency worked covertly to get its own version of a draft security standard issued by the US National Institute of Standards and Technology approved for worldwide use in 2006.’ Which standard is that?”

I don’t know. DUAL_EC_DRBG, perhaps?

https://www.schneier.com/essay-198.html

Bruce Schneier September 5, 2013 4:18 PM

“I would be really interested to know if RSA or DSA is preferable where there is a choice. RSA is most likely a more interesting target; on the other hand, DSA fails horribly if you ever use a key on a system with a broken PRNG. Since PRNGs are obviously a prime target for subversion, my gut feeling would be not to touch DSA with a 10-foot pole.”

In general, I don’t think there is a difference. Cryptanalytic advances against one transfer to the other.

How Far September 5, 2013 4:22 PM

Could the NSA be intercepting downloads of open-source encryption software and silently replacing them with their own versions? And if the NSA had enough control over the communications channels to automatically replace both binaries and published hash lists, would there be any way to detect the interference?
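One partial defense, for what it’s worth: verify each download against a digest obtained over a different channel than the download itself. A minimal sketch (the filename and contents are hypothetical):

```python
import hashlib

def sha256_of(path, bufsize=1 << 20):
    """Stream a file through SHA-256 so big installers needn't fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(bufsize)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

# Stand-in for a downloaded installer.
with open("download.bin", "wb") as f:
    f.write(b"pretend this is an installer")

# The expected digest must come from somewhere the download path can't
# touch (a distro keyring, a phone call, print); a hash published next
# to the binary can be swapped along with the binary, which is exactly
# the attack being asked about.
expected = hashlib.sha256(b"pretend this is an installer").hexdigest()
if sha256_of("download.bin") != expected:
    raise SystemExit("digest mismatch: do not install")
```

A detached GPG signature from a key you already trust is stronger than a bare hash, but it moves the problem rather than solving it: if the adversary controls the channel that delivered the key, you are back where you started.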

Mike September 5, 2013 4:23 PM

Excellent article, Bruce. Was wondering about Linux though. Some people are reporting that the NSA added its own set of security features to SELinux in 2003. Many top Linux users say it has been reviewed many times with no findings of system compromise; but with recent events, some are saying it needs another look.
Keep up the good work sir!

jay c September 5, 2013 4:29 PM

“My biggest fear and complaint is that NO ONE in the US government seems to have any concern nor intention of stopping or severely limiting this.”

Mostly true, but not entirely. Ted Cruz and Rand Paul appear to want to stop it.

USB? Really? September 5, 2013 4:30 PM

From your article:

3) Assume that while your computer can be compromised, it would take work and risk on the part of the NSA – so it probably isn’t. If you have something really important, use an air gap. Since I started working with the Snowden documents, I bought a new computer that has never been connected to the internet. If I want to transfer a file, I encrypt the file on the secure computer and walk it over to my internet computer, using a USB stick. To decrypt something, I reverse the process. This might not be bulletproof, but it’s pretty good.

You should know as well as anybody else — USB 0days are a dime a dozen; USB drivers and their userland stacks are all terrible, filesystem drivers are worse, and so on. It doesn’t take that much development effort to own your internet-facing computer, load particular code onto your USB stick, and then, when you plug it into your airgapped machine, own that too, and so forth.

Please use caution. I think you’re better off just using a serial cable and moving data fairly slowly in a very primitive way.

SteveB September 5, 2013 4:33 PM

Just thinking about NSA inserting backdoors…

What’s the likelihood that an open-source security product (perhaps something like SELinux, which NSA contributed much of the code for…) contains vulnerabilities that are subtle enough to withstand non-expert scrutiny?

Bruce Schneier September 5, 2013 4:35 PM

“Could the NSA be intercepting downloads of open-source encryption software and silently replacing these with their own versions?”

Yes, I believe so.

Bruce Schneier September 5, 2013 4:37 PM

“What’s the likelihood that an open-source security product (perhaps something like SELinux, which NSA contributed much of the code for…) contains vulnerabilities that are subtle enough to withstand non-expert scrutiny?”

Less likely than a closed source product.

All we’re doing here is playing the odds.

Brad Hicks (@jbradhicks) September 5, 2013 4:38 PM

I worked in computer security for 14 years, Mr. Schneier, and I want you to know something, even if it only matters to me that you know it:

You are the man I wish I had become, the man who has found a way to make it his job to be right about security. My fondest wish is that I could have found a way to get your job; my deepest hope is that, if I had gotten it, I could have done it as well as you have.

You are a hero.

Tin foil hat September 5, 2013 4:40 PM

I am now even more convinced than before that we should start using one-time pads when it is feasible. More important messages should be encrypted with the OTP operations done outside the PC, either manually or with some simple microcontroller-based solution. You shouldn’t use the PC keyboard for input or use the PC to display those messages, because it is far too easy to remotely install a backdoor on a PC that is connected to the Internet.

And if you are very afraid of possible biased random numbers when creating the OTP, you can always cascade encrypt that data with some more common methods.

If you aren’t able to use OTP, maybe we should start using cascade encryption with unrelated keys and encryption algorithms, and demanding signing with more than one method and key. Traditionally this has been objected to, but I think the NSA leaks give a good reason to change that tradition.
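For readers unfamiliar with the mechanics being proposed: a one-time pad is just XOR against a truly random pad, at least as long as the message and never reused. A minimal sketch, which also shows why the algorithm is the easy part and pad generation and distribution are the whole problem:

```python
import secrets

def otp(data: bytes, pad: bytes) -> bytes:
    """XOR one-time pad.  The pad must be truly random, at least as long
    as the data, used once, and then destroyed.  Violate any of those
    conditions and the perfect-secrecy guarantee evaporates."""
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

msg = b"meet at dawn"
pad = secrets.token_bytes(len(msg))   # pad must come from a true RNG
ct = otp(msg, pad)
assert otp(ct, pad) == msg            # XOR is its own inverse
```

Cascading an OTP over a conventional cipher, as suggested above, can’t weaken the inner cipher; but moving pads around securely is exactly the key-distribution problem that public-key cryptography was invented to avoid.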

Bruce Schneier September 5, 2013 4:43 PM

“I am now even more convinced than before that we should start using one-time pads when it is feasible.”

No.

Don’t break what isn’t broken.

Mike September 5, 2013 4:47 PM

@Thomas Oh that Obama, be careful cause I bet he’s itching to take your gun away.

The fact that the tin foil hat crowd seems to be the only one up in arms here makes me feel better about the public’s ability to perceive risk.

The Big Bad Government already pays for your healthcare, processes all of your private tax information, and can do all sorts of other nasty things, but guess what: 99.999…% of the time it does its fucking job.

China and Russia make no bones about emptying your bank account and stealing your intellectual property. Come on, take a lesson from Bruce and see who is actually going to screw you over rather than believe some fantasy story that validates your world view.

Curious September 5, 2013 4:49 PM

In one of the Guardian articles you said: “Since I started working with Snowden’s documents, I have been using GPG, Silent Circle, Tails, OTR, TrueCrypt, BleachBit, and a few other things I’m not going to write about.”

Why are you not going to write about those “other few things”? Can you write about them here please?

Thanks.

anonymouser September 5, 2013 4:50 PM

Ahh… but what exactly is not broken? How far can we trust the maths?
I never believed in the magic numbers of ECC, and I still don’t understand how the exponent 3 (three) is acceptable for RSA… there are many things which seem amiss.
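On the e = 3 question: the exponent itself is not the problem; proper padding (e.g. OAEP) is what makes it safe, because padding ensures the input is nearly as large as the modulus. A sketch of what goes wrong with no padding (the modulus below is a size stand-in, not a product of verified primes; only its size matters here):

```python
def icbrt(c):
    """Integer cube root by binary search."""
    lo, hi = 0, 1 << (c.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < c:
            lo = mid + 1
        else:
            hi = mid
    return lo

e = 3
n = (2**1024 - 105) * (2**1024 - 1239)          # stand-in 2048-bit modulus

m = int.from_bytes(b"launch code 0000", "big")  # short, unpadded message
c = pow(m, e, n)
assert c == m ** 3      # m**3 < n: the "encryption" never wrapped around

# No private key needed -- just take the integer cube root.
m_rec = icbrt(c)
print(m_rec.to_bytes(16, "big"))                # b'launch code 0000'
```

With OAEP, the padded message is random and nearly modulus-sized, so m**3 wraps around n many times and the cube root shortcut disappears.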

bf skinner September 5, 2013 4:51 PM

@Jack “Why then do you and others take such care to not really be open and share all of the Snowden documents?”

In addition to what Bruce writes above – Snowden and Greenwald have said repeatedly they are making narrow disclosures so as to not possibly jeopardize anyone.

In addition… did you notice anything particular about the Bradley Manning dump on Wikileaks? There was SO much material that public analysis choked, and the story went away as an issue.

I’m not saying the public needs to be spoon-fed, but the stories are just that, stories; the larger public needs things explained, correlated, and put into context so that it can understand “what the big freaking deal is.”

Concerned about ECDSA September 5, 2013 4:52 PM

You encourage us to prefer older discrete log systems over elliptic curve systems. I know about the dual ec prng vulnerability. But what about P-521 and that family of NIST curves? Are these magic numbers a legitimate cause of concern?

Bruce Schneier September 5, 2013 4:58 PM

“Why are you not going to write about those ‘other few things’? Can you write about them here please?”

I want to keep some secrets in my back pocket.

Jeff September 5, 2013 4:58 PM

@Bruce

Great essays and I really respect your stand on this.

Are you still working for BT and do you have anything to say about their involvement in these abuses?

Bruce Schneier September 5, 2013 4:59 PM

“You encourage us to prefer older discrete log systems over elliptic curve systems. I know about the dual ec prng vulnerability. But what about P-521 and that family of NIST curves? Are these magic numbers a legitimate cause of concern?”

I personally am concerned about any constant whose origins I don’t personally trust.

Bruce Schneier September 5, 2013 5:00 PM

“Are you still working for BT and do you have anything to say about their involvement in these abuses?”

Yes. And because of the first answer, no.

It’s okay; lots of other people are talking about BT’s involvement. I don’t have anything to add.

Stephen September 5, 2013 5:05 PM

There was once an age where we feared the Soviets and the Orwellian police state they represented.

50 years later, we became it.

Nietzsche would be proud.

Sandy Harris September 5, 2013 5:10 PM

Over a decade ago I worked on a project whose main goal was preventing massive surveillance:
http://www.freeswan.org/freeswan_trees/freeswan-1.97/doc/intro.html#goals

A more detailed rationale is here:
http://www.freeswan.org/freeswan_trees/freeswan-1.97/doc/politics.html#policestate

& a description of technology we developed is here:
http://en.citizendium.org/wiki/Opportunistic_encryption

Our project failed; we never got wide enough adoption to have it take off. Should it, or at least the basic notion, be resurrected?

Argon September 5, 2013 5:24 PM

There is a mention in the NYT article about backdoors in hardware: do you know whether they are firmware/microcode key backdoors, or backdoors actually embedded in the silicon?

gonzo September 5, 2013 5:28 PM

Hi Bruce,

I see you’re still comfortable working with TrueCrypt. Is it too much to ask whether you’re using the pre-compiled executable available for download, or whether your comfort level attaches only to a version you’ve compiled yourself?

Jose September 5, 2013 5:37 PM

WinRAR and 7-Zip are already compromised; the owners don’t publish hashes of the original installers… OTP encryption it is, back to the stone age again, hell…

GhostIn(Your)Machine September 5, 2013 5:43 PM

There are, as there always have been, three points of attack:

Key Management
Implementation
Sources of Random

The algorithms we have are, overall, good. Those three areas, however, pay huge rewards. I was a student at an NSA Center of Excellence and had (one of) the “best educations money can’t buy.”

My personal favorite to attack during the time I was involved and studying: sources of random. Let’s just say there is a surprising number of implementations that can be defeated with large enough database lookup tables.

I sleep better now that I am no longer involved though. I saw what my work turned into and decided I wanted no part of it, left before I got in too deep to get out.
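The lookup-table remark generalizes: when key material is derived from a PRNG seeded with something low-entropy (a Unix timestamp, a process ID), the attacker doesn’t break the cipher, they enumerate the seeds. A hypothetical sketch (the seeding scheme and numbers are invented for illustration):

```python
import hashlib
import random

def keygen(seed):
    """Stand-in for an app that derives a key from a badly seeded PRNG."""
    rng = random.Random(seed)
    return hashlib.sha256(bytes(rng.randrange(256) for _ in range(16))).digest()

victim_key = keygen(1378404000)     # seeded with a Unix timestamp

# Attacker: a timestamp has far too little entropy, so just walk the
# plausible range (here, roughly a day's worth of seconds).
recovered_seed = next(s for s in range(1378350000, 1378450000)
                      if keygen(s) == victim_key)
assert keygen(recovered_seed) == victim_key
```

The precomputed-table variant simply stores keygen(s) for every plausible seed ahead of time, trading disk for an instant lookup.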

Ninho September 5, 2013 5:45 PM

Mr Schneier, congrats and respect !

Guys, remember the ‘NSAKEY’ apparent in, was it, Windows 95 binaries? At the time many people assumed it had nothing to do with the NSA, the funny naming being rather some unimaginative programmer’s idea of a joke.

In retrospect, knowing what we learnt today, that may have been the actual key to an NSA-sanctioned backdoor.

Unless it was just a decoy used to hide the real NSA backdoor from view :=)

cody September 5, 2013 5:48 PM

“On the crypto bits in your Guardian piece, I found it especially interesting that you suggest classic discrete-log crypto over ECC.”

“I no longer trust the constants. I believe the NSA has manipulated them through their relationships with industry.”

In essay-198 you wrote “It’s possible to implement Dual_EC_DRBG in such a way as to protect it against this backdoor, by generating new constants with another secure random-number generator and then publishing the seed. This method is even in the NIST document, in Appendix A.”

If we’re using constants that can be verified with such a technique, is there still a reason to avoid ECDSA?

I’m also wondering whether curve25519 is safe, given that the Tor project is planning to use it.
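For context on the verification cody quotes: a “verifiably random” constant is derived from a published seed through a one-way hash, so anyone can re-run the derivation, while nobody can start from a chosen (trapdoored) constant and work backwards to a seed. A minimal sketch (the seed string is invented):

```python
import hashlib

seed = b"published before the standard was finalized"   # hypothetical seed

# Deriving the constant through SHA-256 commits the designer to the seed:
# re-running this line reproduces the constant, while inverting it to
# land on a pre-chosen constant would require breaking the hash.
constant = int.from_bytes(hashlib.sha256(seed).digest(), "big")
print(hex(constant))
```

The residual worry is seed grinding: whoever picks the seed can still try many seeds until the derived constant has a property they like, so this narrows the trust question without eliminating it.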

Steve September 5, 2013 5:57 PM

@b f skinner: “In addition to what Bruce writes above – Snowden and Greenwald have said repeatedly they are making narrow disclosures so as to not possibly jeopardize anyone.”

Or the forthcoming book deal.

Greenwald is a self-aggrandizing little toad who mishandled his source.

Just sayin’.

Clive Robinson September 5, 2013 5:57 PM

@ Bruce,

    I want to keep some secrets in my back pocket

Welcome to my world 😉

What really surprises me is people’s apparent shock at this news; all of it has been discussed before on this blog by the likes of Nick P, RobertT, one or two others, and myself for some years now.

For quite a while I’ve said the NSA’s priority list was,

1, Known plaintext in common file formats (MS office etc)
2, Getting weaknesses into protocols.
3, Getting weaknesses into standards.

I’ve pointed out “end runs” around security via device shims etc. publicly since the late 1990s.

I’ve also developed “backdoors” in crypto software and provided some details to this blog.

I’ve even given details about research by Adam Young and Moti Yung on cryptovirology and kleptography, and details of how to use it to make PK certs that reveal one of the P/Q primes.

I’ve also pointed out the advantages of “opportunistic harvesting” -v- targeted attacks in the way such agencies work, due to it being a lot more efficient.

I’ve also repeatedly pointed out the dangers of random number generators with too little entropy in software apps and embedded systems (such as routers and switches).

I’ve also repeatedly warned of “magic pixie dust” issues with hardware RNGs in the likes of Intel’s chips.

I’ve also warned of other attacks such as “protocol fallback” which we know have and probably still are used by the likes of the NSA and GCHQ (oddly not mentioned in the articles).

So most regular readers should be totally unsurprised by these revelations.

But as they say “You can lead a horse to water…”.

Bryan September 5, 2013 6:06 PM

It seems to me that, by shoe-horning backdoors into all communications hardware, the NSA has provided its adversaries with the same capabilities.

Israel, China, and Russia all have competent mathematicians who know how to cryptanalyze components and PRNGs. Hardware pieces that look identical and use the same power can still be reverse-engineered and/or sliced apart and microscopically analyzed.

It’s not just that the USG/NSA can decrypt it all; if the NSA can, then all of these governments can.

Does the USG/NSA not care that they’ve provided these capabilities to potential adversaries? Is it really thought that the recently publicized USG cyberwarfare initiative can protect us?

Pogo was right: “We have met the enemy and he is us.”

George September 5, 2013 6:11 PM

” I feel I can provide some advice for keeping secure against such an adversary.”

That’s the most telling (and horrifying) sentence in the Guardian article. Something is seriously wrong if we have to regard an agency of our own government, an agency that supposedly exists to protect our Homeland from dangerous enemy threats, as an “adversary.”

Tin foil hat September 5, 2013 6:22 PM

“No.
Don’t break what isn’t broken.”

Why put all the eggs in one basket and leave the secrets vulnerable if the NSA manages to find a method of breaking the symmetric (or public-key) encryption you are using? Using OTP alongside other encryption methods probably won’t make the security any worse, especially when there is reason to suspect that some of those other methods might be broken. Wouldn’t it be preferable to have some security left even if some encryption method is later found to be weak?

bf skinner September 5, 2013 6:22 PM

@Steve “just saying”

What, that someone’s going to write a book?
Well, that IS how we pass information from one of us to the next, right?
A formal, researched presentation of masses of data IN CONTEXT.
It’s not like tweets and blogs provide much in the way of knowledge, let alone wisdom.

Have you read the Pentagon Papers? No? Few have.
The only people who read it were McNamara, probably (may he burn), and Ellsberg and his insider circle of DoD analysts.

Why did it become such a huge issue? Because Nixon (who never read it) and Kissinger (who never read it) moved heaven and earth, from prior restraint to attempted murder, to keep anyone from reading it. All they and everyone else ultimately learned was that it was proof the war was unwinnable and had been for years. But had they ignored it, it would’ve dropped off the radar soon enough.

Another commenter September 5, 2013 6:35 PM

Bruce, Given your opinion on what happened to ecc constants, could it be that the “new capabilities” against Google which apparently came on stream in 2012 were as a result of their adoption of ECDHE – which may well have been Google’s attempt (via forward secrecy) to thwart the NSA?

Lorenzo September 5, 2013 6:48 PM

I’m interested to speculate on what forces can actually balance the NSA’s power over the entire Internet. I am talking about actual economics, not just people boycotting Facebook for a month or two. What if US-based corporations were to show a dent in their profits due to a dip in public trust? Imagine if Google saw their users decline – would they be powerful enough to lobby Congress to curb the NSA? What if Facebook, Google, and Yahoo joined forces for that?

It’s all a game of balancing forces; in the current cloud-frenzy situation, it was inevitable for the NSA to be on top of it and take advantage of the situation. I wonder what’s next – today’s story is probably a few years old. I am worried about mobile telephony (who cares about breaking 4G when you can pwn Android AND iOS?), the fabled Internet of things, RFIDs, etc.

Furthermore, today’s documents show how there’s no silver bullet to breaking security, especially at large deployment scale: it’s a combination of slow-moving tactics (e.g. infiltrating telecoms), political pressure (on standards bodies), commercial pressure (cash to tech giants), lawful pressure (security letters), luck (the 0day), and who knows what else. As such, there’s no silver bullet to counter it either: we (as tech-savvy people with freedom in mind) must be ready to counteract the political pressures, the economic pressures, the technological advantages, etc.

Lastly, I bet some sort of quantum computing has been deployed successfully in 2010; and that actually gave access to the loads of encrypted data that have been collected over the years. We shall see in 15-20 years how things actually went, but I have a strong feeling about this.

ps: where do I sign in on your call for geeks to bring back the Internet to what it works best, e.g. a free and openly interconnected system? I’m interested 🙂

anon September 5, 2013 6:51 PM

@Bruce: I think one of the more surprising revelations in one of your articles today is that you still use Windows for most things. Seriously?!?
How and why?

Solstate September 5, 2013 6:59 PM

What the NSA and the US Government don’t seem to get is that whatever hacking information they assemble WILL eventually leak out to the wider hacker community and the script kiddies. The NSA is using US taxpayer funds to permanently reduce the effectiveness of the Internet.

Some of these vulnerabilities would be discovered sooner or later anyway, but it is quite possible that many of them, especially the ones engineered by the NSA coercing businesses into adding backdoors, would not have been possible without the immense wealth of the US taxpayer to fund them.

Tango September 5, 2013 7:06 PM

TrueCrypt is an NSA program. You offer a free program that works on almost all personal computers. Millions download and use it. When your agents or assets get caught with TC on their laptop, it doesn’t mean they are working intelligence. Everyone uses TC. But is it secure, or is there an NSA backdoor in the program?

MarkH September 5, 2013 7:09 PM

Bruce,

I’ve been wanting to write this for a few weeks now, and feel it ever so much more today:

I’m deeply grateful for the strong public light you have shed on threats to privacy and liberty — those inseparable companions! — ever since I started following your blog not long after the 2001 terrorist attacks, but most acutely since the “Snowden affair” broke.

I observe that your perspective on true security is not that of those “doctrinaire libertarians” who deny the legitimacy of almost all government power. Rather, the valuable and necessary exercise of that power must be rigorously monitored and constrained.

My distress about the cancerous growth of the “national security state” has grown near to agony in recent months: not only the naked contempt for the United States Constitution displayed from the Oval Office down to the lowest functionaries in such agencies as NSA, but also the slavish acquiescence of the dozens of elected lickspittles on Capitol Hill (and others in high echelons of government) who not only knew about and enabled this criminality, but still continue to defend their cowardice without hint of shame or remorse.

By birth, I received the unmerited honor of descent from men who faced grave danger to life and limb … ineffable horror … and in one case the spilling of his last drops of lifeblood … on battlefields where the fundamental values underlying the American idea were at issue.

Today, my grief and discouragement about my once courageous country are greater even than during the tragedy of US military involvement in Viet Nam.

“Those who would give up Essential Liberty to purchase a little Temporary Safety, deserve neither Liberty nor Safety.” These United States now permit our liberty to bleed out, casting off dignity and honor in the face of the terrorist bogeyman. Having done so, we shall neither deserve liberty and safety, nor can we expect them.

Bruce, your work in resisting the flood tide of cowardly surrender to fear is of immeasurable value, especially now that the public voices of resistance are so few. Your patient, persistent, ever calm and reasoned arguments counter to the mainstream have been a great comfort to me in a time of near despair — extending to your generous expenditure of time responding to this comment thread.

As you carry on, please know that you have the gratitude and best wishes of many!

nano September 5, 2013 7:25 PM

anon • September 5, 2013 6:51 PM

@Bruce: I think one of the more surprising revelations in one of your articles today is that you still use Windows for most things. Seriously?!?
How and why?

I would think they are all compromised. Name one that you think is not.

Bruce Schneier September 5, 2013 7:26 PM

“thanks for the reply. since we’re on the topic of trust, do you trust PBKDF2?”

I don’t know it.

Bruce Schneier September 5, 2013 7:28 PM

“There is a mention in NYT article about backdoors in hardware – do you know if they are firmware/microcode key backdoors, or are they actually backdoors embedded in the silicon?”

I don’t know.

I assume the NSA aims for robustness, and does everything it can.

James September 5, 2013 7:29 PM

@Tango

Department of Justice FAQ on Encryption Policy
April 24, 1998

Still accessible here:

http://web.archive.org/web/20091120073126/http://www.justice.gov/criminal/cybercrime/cryptfaq.htm

Excerpt:

“5. Why does law enforcement oppose the use of encryption? Don’t you realize that it will make your job easier by stopping crime?

We do not oppose the use of encryption — just the opposite, because strong encryption can be an extraordinary tool to prevent crime. We believe that the use of strong cryptography is critical to the development of the “Global Information Infrastructure,” or the GII. We agree that communications and data must be protected — both in transit and in storage — if the GII is to be used for personal communications, financial transactions, medical care, the development of new intellectual property, and other applications.
The widespread use of unrecoverable encryption by criminals, however, poses a serious risk to public safety. Encryption may be used by terrorist groups, drug cartels, foreign intelligence agents, and other criminals to secure their data and communications, thus nullifying the effectiveness of search warrants and wiretap orders.

The Department’s goal — and the Administration’s policy — is to promote the development and use of strong encryption that enhances the privacy of communications and stored data while also preserving law enforcement’s current ability to gain access to evidence as part of a legally authorized search or surveillance.

At bottom, it is important to recognize that society has an important choice to make. On the one hand, it can promote the use of unrecoverable encryption, and give a powerful tool to the most dangerous elements of our global society. On the other hand, it can promote the use of recoverable encryption and other techniques, achieve all of the benefits, and help protect society from these criminals. Faced with this choice, there is only one responsible solution.”

(Other parts of this extensive document are also very interesting, especially in light of the new debate about encryption and snooping).

Nick P September 5, 2013 7:30 PM

My reactions to guardian article

“The NSA spends $250m a year on a program which, among other goals, works with technology companies to “covertly influence” their product designs.”

This was to be assumed ever since Crypto AG, Lotus and the fact that subversion is the best method (more on that later). This kind of thing is why I’ve griped about the fact that there are only six fabs for top mobile chip sets. Just six organizations to subvert with a portion of a $250m/yr budget. I wish my job was that easy. 😉

“A GCHQ team has been working to develop ways into encrypted traffic on the “big four” service providers, named as Hotmail, Google, Yahoo and Facebook.”

At first I was surprised because I thought those were already compromised. The key point might be GCHQ, rather than NSA, is still trying to crack them. Or something else.

“Among other things, the program is designed to “insert vulnerabilities into commercial encryption systems”. These would be known to the NSA, but to no one else, including ordinary customers, who are tellingly referred to in the document as “adversaries”.”

If they don’t know everything, you’re an “adversary.” Ha!

Many times on this blog in the past I pushed for designs at EAL5-7 level with review by mutually distrusting parties for subversion resistance. The reason is that subverting the producer of software means you can’t trust them, and therefore the software. It’s the most powerful attack as it can bootstrap others.

Even OSS isn’t totally safe, as games have been hidden in the likes of OpenOffice without people’s knowledge, and the NSA’s M.O. is ideal: insert subtle vulnerabilities that look like bugs. FOSS programmers are often volunteers without all the domain expertise we would like, who at least put in the time and effort to give us the features. This can be true for security features too. Who’s going to ban or accuse a hardworking FOSS developer because they picked a bad exponent? I mean, “seriously, who even knows about all that stuff?” 😉

“The document reveals that the agency has capabilities against widely used online protocols, such as HTTPS, voice-over-IP and Secure Sockets Layer (SSL), used to protect online shopping and banking.”

Their implementations have had plenty of issues and the protocols often allow weak choices during the negotiation phase. Either could be what the quote refers to. Of course, going back to subversion, they could get companies to build vulnerable knockoffs of SSL or insert taps on the side with the server or SSL offload engines. Or just offer businesses cheap SSL engines that also leak the keys. An old idea I came up with that led me to stop using them.

” Documents show that Edgehill’s initial aim was to decode the encrypted traffic certified by three major (unnamed) internet companies and 30 types of Virtual Private Network (VPN) – used by businesses to provide secure remote access to their systems. By 2015, GCHQ hoped to have cracked the codes used by 15 major internet companies, and 300 VPNs.”

This is entirely unsurprising. Others and I who are into high-robustness security have consistently pointed out that most VPNs and OSes max out at Common Criteria EAL4, with many below that. I’m less concerned about the certification than the development process it implies: EAL4 certifiably produces shit. It (mostly) corresponds to C2 in Orange Book days, and even then you had to go up two levels before a system was self-protecting enough. Those levels correspond to EAL5-7 with some extra, critical features. There are VPNs and network protections developed like that, but hardly anyone uses them and a few are hard to obtain.

“This GCHQ team was, according to an internal document, “responsible for identifying, recruiting and running covert agents in the global telecommunications industry.””

THERE IT IS! I’ve been waiting for confirmation. The subversion threat in its most powerful form: active, malicious insiders whom the customers trust. Even if the company is pro-customer, this type of compromise can be disastrous. For this, I’ve included a list at the end of this post showing all the ways it can mess you up.

My reactions to Bruce’s essay

“Each individual problem – recovering electronic signals from fiber, keeping up with the terabyte streams as they go by, filtering out the interesting stuff – has its own group dedicated to solving it. Its reach is global.”

Bamford’s Puzzle Palace said the same kind of thing in another time. The resources into these efforts were specialized, massive, and cutting edge. I’m sure the specifics you read on their 21st century version were pretty amazing. Of course, commercial organizations such as Facebook do the same thing so govt no longer has a monopoly on tech for massive data movement and analyses. There’s potential for cooperation and competition. And probably other implications we’ve yet to think of.

“The NSA also attacks network devices directly: routers, switches, firewalls, etc. Most of these devices have surveillance capabilities already built in; the trick is to surreptitiously turn them on. This is an especially fruitful avenue of attack; routers are updated less frequently, tend not to have security software installed on them, and are generally ignored as a vulnerability.”

I’ve often told people to use hardened, minimized versions of OpenBSD for routers or security appliances. This strategy’s worth just went up 1,000%. The other option was MILS-type kernels with plenty of partitioning and info-flow control on system components. Those companies are tight with government, though, so might be backdoored by now. There are still at least open-source microkernel-type platforms to build on, such as OKL4, TUD:OS, Turaya, NOVA, Minix and Genode. And CHACS at the Navy has a paper on how to break a basic network stack into pieces a separation kernel can manage. Do the same for the other functions, etc.

“Basically, the NSA asks companies to subtly change their products in undetectable ways: making the random number generator less random, leaking the key somehow, adding a common exponent to a public-key exchange protocol, and so on. ”

Subversions that look like accidents. It’s been the gold standard of compromises for years. Keeps proving itself out. Deniability works for them just as well as for crooks.

“Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on.”

The encryption part ideally should be a black box the average developer can initialize and run traffic through. The box should work with pipeline-type designs. There are a few OSS crypto libraries that do this already, and they have had a decent amount of scrutiny over time. I’d advise people wanting a more standardized approach to start with those libraries.

For more safety, it’s best if the specific algorithms, IVs, or other security-critical parameters are both randomized and transmitted secretly like the key. The simple version is to program all these things to send 512 bits of material. You can squeeze all kinds of keys, IVs, salts, algorithm choices, etc. into 512 bits. Combine that with fixed message size and fixed transmission rates for a tunnel that will look the same for many types of traffic. And put 50-80% of the effort into handling error conditions safely during initiation, processing, or cleanup. And only allow safe arguments or configurations: don’t even code them if they’re not safe.

(Certain people I’ve told about removing all weak options and unneeded code claimed to have edited proprietary binaries in the part that handles weaker options to just freeze the app and signal a problem.)
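
The 512-bit parameter blob described above could be sketched like this. This is a toy illustration only, not anyone's actual design: the field layout, sizes, and cipher menu are all invented for the example.

```python
import os

# Hypothetical layout for a 64-byte (512-bit) parameter blob: the cipher
# choice, session key, IV, and salt all travel together as one opaque,
# fixed-size secret, so the wire format reveals nothing about the choices.
CIPHERS = ["aes-256-gcm", "serpent-cbc", "twofish-ctr"]  # invented menu

def make_blob():
    blob = bytearray(os.urandom(64))      # start with 512 random bits
    blob[0] = blob[0] % len(CIPHERS)      # byte 0: algorithm selector
    return bytes(blob)

def parse_blob(blob):
    assert len(blob) == 64, "blob must be exactly 512 bits"
    return {
        "cipher": CIPHERS[blob[0]],
        "key":    blob[1:33],    # 32-byte session key
        "iv":     blob[33:49],   # 16-byte IV/nonce
        "salt":   blob[49:64],   # 15 bytes left over for salts etc.
    }
```

In a real system the blob itself would of course be exchanged encrypted, exactly as the key material it replaces.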

Points on Bruce’s Advice

” Implement hidden services. Use Tor to anonymize yourself. Yes, the NSA targets Tor users, but it’s work for them. The less obvious you are, the safer you are.”

This is a nice idea for a few reasons although I’d add I2P and Freenet to the mix. They could use the extra scrutiny. More people on these networks is like a denser, busier crowd. Easy to get lost in. However, using Tor will be impractical for many as it’s slow, exit nodes can be blocked, or it creates extra scrutiny on the individual.

So, an alternative here is for us to disguise traffic as ordinary traffic. Let’s use HTTPS as an example. When they go to attack it, it doesn’t work. They come to realize that it might not even be a web browser communicating with a web server. It may be a peer to peer app that speaks a limited amount of HTTP. And the crypto wasn’t really AES with SHA2. And the weak failure modes or algorithms in SSL cause an instant connection failure with optional IP block. And some implementations were done in C++, some in Java via GCJ, some in Python, and… one Genera LISP machine (?) with apparent security modifications.

NSA director internal memo: “Deceptive protocols with diverse/deceptive implementations and non-standard usage… who ARE these people? Even the Chinese don’t give us this much trouble! ”

“Encrypt your communications. Use TLS. Use IPsec. Again, while it’s true that the NSA targets encrypted connections – and it may have explicit exploits against these protocols – you’re much better protected than if you communicate in the clear.”

Good advice. Add in my simple, strong algorithms/protocols with safe defaults and obscure defaults, and you get plenty of headaches for them.

” Assume that while your computer can be compromised, it would take work and risk on the part of the NSA – so it probably isn’t.”

I’ve also said that before. It makes sense. The more work and risk an op takes, the more they must justify [to their bosses] doing it. That makes it non-default for them.

“Be suspicious of commercial encryption software, especially from large vendors.”

New assumption: this product is safe against everyone but the NSA.

“Try to use public-domain encryption that has to be compatible with other implementations. ”

Good too. Allows diversity, peer review and subversion resistance.

Appendix to this below. Now, I can read the comments to this article and maybe respond to them. I can also link to old posts of mine breaking down subversion resistant software engineering and all the levels of attacks they have, if anyone wants.

A Subverted Organization, Role-by-Role, Attack-by-Attack

  1. The person handling the supply chain can buy compromised parts from NSA.
  2. The systems architect can weaken total system security with a bad design choice or with an obscure interaction between components.
  3. The software engineer coding a security feature can sabotage it.
  4. The auditors/testers looking for problems can ignore specific vulnerabilities.
  5. The service reps that help the customer choose the product that provides the necessary level of protection can mislead them into buying a weaker product.

  6. The project managers can declare certain hard to exploit vulnerabilities as “theoretical” or “not cost effective to fix,” then tell the NSA about them. One could argue that this is exactly what all NSA assessments with source code do. 😉

  7. The people that write policies on detecting problems or compliance issues can leave out something.

  8. System administrators can use logical or physical access to pull details on systems or backdoor them.

  9. Maintenance personnel can do any of the above if they have access to the computers or customer data.

  10. The company’s head lawyer can create fake NSL’s sent to his or her department to request information or force backdoor implementation.

  11. A member of IT staff might accidentally give a partner organization with intranet access too much privilege. And they do the attack.

  12. People maintaining the firewall or access controls might slip up.

The common denominator: all of these involve insiders and each was “probably an accident…. well that’s all that’s provable.” That’s the dark beauty of well-executed subversion. 😉

Bruce Schneier September 5, 2013 7:30 PM

“I see you’re still comfortable working with Truecrypt. Is it too much to ask if you’re using the pre-compiled executable available for download, or only if your comfort level attaches only to a version you’ve compiled yourself?”

I don’t compile.

Basically, I’m just playing the odds here. I think TrueCrypt is less likely to be backdoored than either PGP Disk (what I was previously using) or BitLocker.

Bruce Schneier September 5, 2013 7:32 PM

“You recommended to ‘Prefer symmetric cryptography over public-key cryptography.’ Can you elaborate on why?”

It is more likely that the NSA has some fundamental mathematical advance in breaking public-key algorithms than symmetric algorithms.

Bruce Schneier September 5, 2013 7:34 PM

“Using OTP with other encryption methods probably won’t make the security any worse, especially when there is reason to suspect that some of those other encryption methods might be broken.”

One-time pads are worse than symmetric algorithms. Don’t let the theoretical security fool you.

https://www.schneier.com/crypto-gram-0210.html#7

hoodathunkit September 5, 2013 7:34 PM

Bruce wrote:

“Could the NSA be intercepting downloads of open-source encryption software and silently replacing these with their own versions?”

Yes, I believe so.

Of course they could . . . and have. They did it with PGP back in ’91-’93 (MSDOS 5&6). The PGP software was usually safe, but it was entirely command-line, and a lot of people used a shell to drive it. Somebody had dozens of shells written that deliberately left ‘bits’ on the hard drive, pretended to use PGP but encoded using ROT13, thrashed hard drives (you could do that back then), and caused assorted mayhem.

Bruce Schneier September 5, 2013 7:37 PM

“I think one of the more surprising revelations in one of your articles today is that you still use Windows for most things. Seriously?!?
How and why?”

Yeah, well.

Gweihir September 5, 2013 7:45 PM

Thank you for working on this with the Guardian and others! Having a true and recognized crypto expert who also has sound judgment on the non-technical issues in there is incredibly valuable.

GhostIn(Your)Machine September 5, 2013 7:51 PM

@name.withheld.for.obvious.reasons: I got “involved” in late 2003 or so; I was just a student with an interest in security who had broken into a few school systems and found some (looking back on them) pretty crappy holes. Still, there was a program at my school, and it got attention from them. In said program, I must say I had some of the best teachers I have ever met, and I wish I could still be studying under them. Many of them have forgotten more about computer security than most practitioners will ever know, and have their names in the thank-you sections of many of the basic texts in the field.

I had a talent for finding bad assumptions made in network devices, and that is what my research was based around, but classes covered everything from the Shannon papers and mathematical modeling to quantum crypto theory, forensics, and counter-forensics. I was always clearly taught that the US agencies, particularly the NSA, are dual-mission, charged with both protecting US government and commercial computer systems as well as ensuring the capability to penetrate other systems on demand.

Around 2008 I started to see a very disturbing trend: the focus shifted from a mix of offense and defense to almost pure offense, both in teaching and in the direction projects were heading (and what my fellow students were being prepared for). It went as far as sitting at dinner with professors from the program one night and having them actually try to convince their pupils that some classes of security vulnerabilities we were discovering should NOT be disclosed to the vendors or open-source communities, because it would make it likely that not only would those issues be fixed, but similar vulnerabilities in other systems might be fixed as well.

That thought disgusted me, and when I asked the head of the program about it, it was made clear that was the direction that leadership from the very top had decided to go, and there was little to no hope of changing it. More than any other event, that was the one that drove me out.

I joined to learn to break systems, but I also discovered it was possible to construct systems that could not be broken, and could be proven to be secure. During my time there I worked on many exotic systems I had never encountered before, including a TCSEC A1-rated system. There was one project with complete formal modeling; I wasn’t directly involved, but I got to watch fellow students go through it, be taught everything step by step, and see their results. They built a provably secure server for a modern protocol and had it validated! (I was doing a different project, more offensive than defensive, in another lab, so sadly I was not hands-on for that.)

Seeing it done, knowing it could be done, and knowing enough to understand how a provably secure system with modern networking and protocol support was built from the ground up, I determined that what I wanted to do was provide that. Later, seeing the decision to stop all development on it, and even discussions about the government silencing public research in the field to prevent that information getting out, as it would pose a threat to the “primary mission”… I left, thinking “to hell with them all”.

Now I work for a private company, not US based, in the computer security field. We make software that touches many of these areas, and I am pleased to say the quality of the software has improved since I joined, a lot. Still, I must admit it is a long way from what I hold in my mind as “secure” after seeing it done right.

Maybe some day it will be done right, but for now I am having to settle for trying to prioritize which vulnerabilities and potential vulnerabilities must be fixed first. I have recently come to the realization that my employer will never release software I would consider absolutely secure; for all they say about providing security against national-level threat actors, the way they work, they will never compete at that level, and I am pretty sure they do not want to.

Right now I am debating how to proceed; what I want to do I have become convinced will never occur at my current employer, but I can’t think of anywhere outside a government it is practiced, and I don’t have the means to start my own company yet. How I will proceed, I do not know, I just know what I feel must be done.

Nick P September 5, 2013 8:02 PM

@ William A Hamilton

+1 for mentioning PROMIS. I should have had that one in my list of precedents. It was one of my early inspirations for worrying about subversions and software companies being front organizations.

@ How Far

“Could the NSA be intercepting downloads of open-source encryption software and silently replacing these with their own versions? Is there any way to detect such interference had the NSA enough control over communications channels to automatically replace binaries and published hash lists?”

Yes. That’s called a Man in the Middle attack and swapping out executables in transit was even used by clever black hats closer to 2000. File integrity checks are the main proposed method of validating the download. However, if the source is compromised, they might put compromised hashes on the site. So, using trustworthy sources and validating them is the most important defense. One can also use separate PC’s for acquiring the file and checking/using it.
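
Validating a download against an out-of-band digest can be sketched with the standard library. The file path and trusted digest here are placeholders; the crucial point, as discussed above, is that the trusted digest must arrive over a channel the attacker doesn't control (a face-to-face meeting, a separate machine), otherwise a compromised source can simply publish matching hashes for its compromised binary.

```python
import hashlib
import hmac

def sha256_file(path, chunk_size=1 << 20):
    """Hash a file in chunks so large downloads don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

def download_matches(path, trusted_hex_digest):
    """Compare against a digest obtained out-of-band (constant-time compare)."""
    return hmac.compare_digest(sha256_file(path), trusted_hex_digest.lower())
```

If the hash list traveled over the same compromised channel as the binary, a match proves nothing; the comparison is only as strong as the channel the trusted digest came through.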

@ Mike Doherty

“You recommended to “Prefer symmetric cryptography over public-key cryptography.” Can you elaborate on why?”

I totally missed that line when I read the article. I’ve been pushing people to do that on this blog for a long time. The reason is that symmetric algorithms are safer from their codebreakers, interchangeable thanks to the large number of good ones available, easy to implement on many chips, accelerated on some chips, faster in about every case, use less bandwidth for integrity protection than public key, and usable for many extra things such as authorization (e.g., Kerberos) or proof-of-work schemes. You can do a high-assurance implementation of a few primitives for many CPU types and FPGAs, then leverage them in countless designs for both standalone and distributed, servers and desktops, general purpose and embedded.

Symmetric crypto kicks public key’s ass except for its signature (no pun intended) use case: trading a secret without pre-sharing a secret. Even so, its use can be limited to that part, with symmetric used for everything else.
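
The bandwidth claim above is easy to see with the standard library's HMAC: a symmetric authentication tag is just 32 bytes, versus hundreds of bytes for a typical public-key signature. A minimal sketch, assuming the two parties already share a key (key distribution being exactly the part symmetric crypto doesn't solve):

```python
import hashlib
import hmac
import os

key = os.urandom(32)  # pre-shared 256-bit secret (distribution not shown)
msg = b"example message between two parties sharing the key"

# Sender: append a 32-byte HMAC-SHA256 tag to authenticate the message.
tag = hmac.new(key, msg, hashlib.sha256).digest()

# Receiver: recompute the tag and compare in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())
```

A tampered message, or a tag computed under a different key, fails the `compare_digest` check; only holders of the shared key can produce valid tags.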

@ Tin Foil Hat

“Why put all the eggs in the same basket and make the secrets vulnerable if NSA manages to find a method to break the symmetric encryption (or public key encryption) you are using?”

That’s why I’ve always pushed for us having a bunch of different primitives that are each heavily cryptanalyzed. The classic example is in block ciphers, where we have Blowfish, IDEA, Triple DES, and all the AES candidates. That’s not one basket: that’s about nine algorithms. Then quite a few different ways of implementing and using them. Or was I using one of the eSTREAM stream ciphers? The data always looks the same scrambled… 😉

@ Tango

“TrueCrypt is an NSA program. You offer a free program that works on almost all personal computers. Millions download and use it. When your agents or assets get caught with TC on their laptop, it doesn’t mean they are working intelligence. Everyone uses TC. But is it secure, or is there an NSA backdoor in the program?”

Honestly, if it were, I’d call it one of the best investments of tax dollars into INFOSEC the government ever made. I’d even praise them for it. OF COURSE, there could be a subtle flaw the NSA can see but nobody else can. That’s true even if TrueCrypt is built by trustworthy people, yeah? However, the NSA would have also protected our data at rest from every OTHER attacker, including the FBI and, it seems, DOD when investigating their opponents. It would be a win for us (mostly) and an epic fail for the majority of government eavesdroppers.

M. September 5, 2013 8:06 PM

You mention you still primarily use Windows. But why? Do you have things you need to do that you feel you would be unable to do in Linux? Do you plan to fully switch to Linux?

It seems contradictory to assume Windows is compromised and guess that another system is likely to be more secure but still not switch.

Another question: do you see any reason to assume that OpenBSD (or some other security-oriented OS) is any safer than Linux?

Werner Almesberger September 5, 2013 8:13 PM

How Far: “automatically replace binaries and published hash lists?”

Some ad hoc-ish thoughts:

Let’s assume they can. To detect this, the recipient could contact the originator through a secure channel (e.g., a face to face meeting, if they already know each other) and verify the hash.

Countermeasures to this could include: 1) surreptitiously changing the hash(es) they carry, 2) suppressing the meeting or the reporting of the detection of a mismatch, 3) selectively providing uncompromised versions to likely verifiers, 4) compromising the binary even before the originator gets access to it, 5) compromising the originator.

1) may be too difficult, especially if people get creative or have good memories. 2) may be an option, particularly through extortion. Likewise, 5) could work by extortion, or simply by “turning” (or having turned) the originator. 3) would depend on the NSA’s capability to compromise mirrors in foreign countries.

3) would be particularly effective if a small number of downloaders could be singled out for receiving the compromised version, while all the rest (probably including all likely verifiers) would get the uncompromised version. Furthermore, this could be randomized or the tampering could be done only on the first access, so that inconsistencies that get detected would vanish when repeating the operation, get blamed on “natural” corruption, and not trigger an investigation.

4) would be the most elegant solution and could be implemented by compromising the build system or the communication to/from it. This may get particularly feasible if the build system is some compile farm or cloud service.

4) could also be implemented by compromising something that enters the binary but is not considered to be part of the source proper, e.g., libc or some other library or header.

Regarding 3), it may make sense to carry lists of hashes with you whenever you go to a meeting with like-minded individuals, then compare them over a few beers. Depending on the scenario, also peer-to-peer comparisons (as opposed to the more difficult recipient-to-originator) could be useful. Chances are there’s nothing to detect, but it may still be fun to play that game for a while.

I would worry about 5). To detect tampering at the binary’s origin, one would need fully reproducible build processes, so that the same source compiled in a well-defined environment would yield the same binary.

4) would be the hardest to defeat. Getting rid of untrusted compile platforms and communication paths would be a good and easy first step.

To ensure that also the trusted environment is safe, one would basically have to audit the source one has downloaded, all the code that gets pulled in, verify that the compiler generated the correct assembler (and that the assembler translated it correctly, that the linker didn’t mess with it either, etc.), and also have some verification that the result doesn’t get compromised on its way to the release site(s).

If the goal is to determine whether such tampering exists at all, the best place to start would be old material, released before the Snowden incident, so that NSA and friends shutting down any tampering equipment now would have no effect.
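Werner's out-of-band hash check is easy to script. A minimal sketch (the file path and expected digest are hypothetical; the digest would arrive over a separate trusted channel, such as the face-to-face meeting described above):

```python
import hashlib
import hmac

def sha256_of_file(path, chunk_size=1 << 16):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path, expected_hex):
    """Compare against a digest obtained through a separate trusted channel."""
    actual = sha256_of_file(path)
    # constant-time comparison; avoids leaking where the mismatch occurs
    return hmac.compare_digest(actual, expected_hex.lower())
```

Note that this only moves the trust problem: the check is worthless if the attacker also controls the channel the expected digest arrived over, which is countermeasure 1) above.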

  • Werner

Steve September 5, 2013 8:13 PM

@b f skinner:

Yes, I understand the mechanics of how information is passed along and, yes, I’ve read (some of) the “Pentagon Papers.”

I also understand the mechanics of celebrity and how literary fortunes are made.

While I will grant that Mr Greenwald has done a great service to the world by telling us what most reasonably observant people already knew in principle (that governments spy on their citizens), he is also a terrible human being who will sacrifice others to reach his goal, which is largely notoriety for one G. Greenwald.

He (and the Guardian) badly burned Edward Snowden by not getting him to a safe place before publishing, and he exposed his husband/domestic partner to inexcusable danger by using him as a courier/mule (a story, by the way, which has been played for maximum histrionic effect by the selective shading of facts in Mr Greenwald's reporting).

I don’t believe in “killing the messenger” but in this case the messenger does need a good swift kick in the pants for the people he’s wronged in the process of telling his story.

If I were Bruce Schneier, I’d be very careful around Mr Greenwald, since I believe he’d happily toss Dr Schneier under the cliched bus if it were to his advantage.

Petter September 5, 2013 8:25 PM

@Bruce
In your opinion, would you describe signed and encrypted email using CA-issued 2048-bit certs as broken, or is it still secure enough?

Carlo Graziani September 5, 2013 8:25 PM

OK, Bruce. How about making the challenge to the IETF a bit more specific: could you name some titles of new RFCs that should be written? What are we replacing here?

SMTP obviously needs to be replaced by a messaging protocol that minimizes exposed metadata and routinizes application-layer encryption.

Do we need to go lower down the stack? Is DNSSEC good enough? Routing protocols? TCP/IP?

Give those guys a specific agenda. You read the documents, you’re in a good position to draft one.

Britt September 5, 2013 8:27 PM

You have confirmed my deepest fear…
Thanks for your Call to action.

(I’m thinking; what could I do?)

Zooko September 5, 2013 8:34 PM

Bruce:

It’s too bad that people will read your warning about elliptic curve crypto and then stay away from all elliptic curve crypto, even Curve25519, which is not subject to the possible backdoors which you are warning against. This might cause people to distrust tools that use Curve25519, such as future versions of Tor and future versions of my project — Tahoe-LAFS.

We need to move to elliptic curves because RSA and integer discrete log are so inefficient at the desired security level that they make our tools less usable in practice. For example, in LAFS we generate a new public-private keypair whenever a user creates a directory. Currently this is a 2048-bit RSA keypair, so this is a real performance issue. I believe similar efficiency issues (especially the size of the public key) are pushing Tor to move to Curve25519.

Regards,

Zooko

Maxfactorx September 5, 2013 8:34 PM

Bravo! Bravo! Bravo! Bruce, you are truly a national treasure. Thank you for taking this position publicly and being such a strong voice for liberty and for progressive, cutting-edge technical and moral analysis. You're a real patriot! And we love you, brother!

underscore September 5, 2013 8:44 PM

Thank you Bruce for your work on reviewing the documents and especially the advice and additional viewpoints you have provided.

Here is a question I wanted to ask anyone who might know: when discussing the encryption that Laura Poitras would need to have configured, Edward Snowden told her to "assume that your adversary is capable of a trillion guesses per second."

My question is, is this realistic? Can it be realistically assumed that NSA has the capability to go through a trillion guesses per second? Or was Ed Snowden perhaps exaggerating to ensure that Laura selects a really strong password?

David September 5, 2013 8:44 PM

Bruce,

In your article about taking back the internet, you talk about needing more whistleblowers.

Given the current high profile of this topic, do you have any recommendations on how to even start the process cold? How does one, without just exposing their jugular, start to pass along a tale? With whom would you recommend starting the conversation, especially since there seem to be so many conflicts of interest, so much unreliable reporting, and such laser-like focus on the current actors (Greenwald, Poitras, etc.)?

I’m not asking about specific techniques, or a how-to; rather, I think the biggest block to telling the world is that those who know have a hard time answering the question, “How do I even start?”

Douglas Knight September 5, 2013 8:49 PM

Does "the 2006 NIST standard with a backdoor found by two Microsoft cryptographers in 2007" uniquely identify Dual_EC_DRBG? If so, why not just say so?

Buck September 5, 2013 8:54 PM

@Bruce

“Could the NSA be intercepting downloads of open-source encryption software and silently replacing these with their own versions?”
Yes, I believe so.

I am also interested in learning more about this particular revelation. I (along with many others here) have assumed the existence of these types of interception techniques for years… Surely the NSA isn't the only entity that has been developing such capabilities.

I do wonder for whom, how, when, and why these listening posts become active in their MITM attacks. Obviously, open-source software wouldn't be the only bits and bytes subject to surreptitious manipulation of information streams…

Should public leaders have any reason to believe that their digital communications have not been altered before receipt?
Can the intelligence community make the same claim?

These backdoors are in place, and someone must have the keys… Who’s to say that they’ve kept these secrets better than some of their others?

Seems that these surveillance capabilities have vastly expanded vulnerabilities to national security. We decided yes, we can spy on everyone; never stopping to consider whether or not we should… I’d bet that money would have been much more beneficial to security if it had been put towards developing secure systems!

Of course some of it has been… Apparently, Los Alamos National Laboratory has been running a quantum network for almost 3 years! So perhaps institutional communications have been secured to a reasonable degree… Still, though, I'd have to assume that their agents remain vulnerable. Do they not use 4G, Facebook, and Gmail in their personal lives? If not, that would seem to be a great way to blow one's cover! 😛

Also of great interest will be the hardware-based backdoors…
Are IC manufacturers reasonably secured?
How easy would it be to plant some small secret circuits (in an incredibly complex chip) just prior to production and remain undetected for lengthy periods of time?
What kinds of transmissions can be used to defeat air-gapping via subverted silicon?

Figureitout September 5, 2013 8:59 PM

What a monumental task, rebuilding the internet. I really don't know if we're all up for it, or how we could organize the project. Surely you all know that you would very likely be working alongside an agent trying to subvert your project. Where would you secure your work? Physical compromise, if you've never seen it, is now an issue. This will scare away a lot of people and ruin the atmosphere of working on the project.

Almost like a NASA problem or making your computer almost entirely from scratch, just using many basic parts you can’t fab but they will be visible.

Anyway, this just sucks. How quickly the internet and all its infrastructure went from "Holy cow!" to "That's creepy and scary."

tazman September 5, 2013 9:09 PM

I find it humorous that you express concern over government spying, and yet link directly to a facebook page for readers to follow…
The irony is immense.
There are social networking platforms built on Free/OSS that are decentralized, federated, secure, and offer varying degrees of privacy. They respect users: they do not spy on them, collect their information, or spam them with advertisements, because they are user owned and operated rather than run by the corporate behemoths that have a stranglehold on so much of the rest of the internet. Examples include friendica, diaspora, pump.io, statusnet, libertree, and, most promising, the upcoming redmatrix.

Sharon Kramer September 5, 2013 9:14 PM

Thank you for all your hard work.

I’m waiting on the info to come to light of the US Chamber of Commerce’s involvement/access to private information in this debacle.

If you remember, Anonymous hacked HBGary Federal in 2011 and found evidence of the US DOD introducing them to the Chamber as a potential client to spy on US activists. The intent was to use the info to set up fake social media identities and character-assassinate US citizens, to cast doubt on the validity of their words, which are typically adverse to the interests of US industries.

HBGary was purchased by ManTech International, of Fairfax, Va in 2012. It ranks No. 22 on Washington Technology’s 2011 Top 100 list of the largest federal government contractors. The Chamber is the first lobbying group to top $1B in DC lobbying dollars.

I'm betting it's just a matter of time before the dots are all connected back to the NSA aiding US industries in spying on and discrediting US citizens.

Good article from the Nation on the subject.
http://www.thenation.com/blog/174741/how-spy-agency-contractors-have-already-abused-their-power#axzz2e4fFih2B

None September 5, 2013 9:16 PM

Hi Bruce – any suggestions on a BleachBit alternative for OSX? Likewise, TrueCrypt seems to be abandonware on that platform – thoughts on how secure FileVault 2 is?

underscore September 5, 2013 9:18 PM

@tazman:

There are social networking platforms built on Free/OSS, that are decentralized, federated, secure, and with varying degrees of privacy, that respect users and do not spy on them

That brings to mind the “de-centralized version of Facebook” called Diaspora that received some good Kickstarter funding a couple of years ago.

What came out of it? Nothing much. One of the developers (supposedly) committed suicide and the others released v0.10 which they then left in the hands of the community…

Jonathan September 5, 2013 9:39 PM

Encrypt your communications. Use TLS. Use IPsec.

Loved the two articles. I just hope people who read the articles know what TLS and IPsec are/stand for.

I know what TLS is but to be honest I’m not sure if I use IPsec. If I use OpenVPN am I using IPsec? I’m not sure.

Keep the articles coming!

Engy September 5, 2013 9:48 PM

If they are in fact installing backdoors, this leaves huge vulnerabilities in everything. This isn’t going to end well for anyone, including the NSA.

CivLib September 5, 2013 9:52 PM

Bruce,

Adding my voice to the chorus, and enthusiastically echoing MarkH’s sentiments, I simply say :

Thank you, Sir.

Let us know when we need to initiate your legal defense fund.

Julien Couvreur September 5, 2013 9:58 PM

Bruce Schneier says: "Basically, I'm just playing the odds here. I think TrueCrypt is less likely to be backdoored than either PGP Disk (what I was previously using) or BitLocker."

Here’s a proposal to improve the odds: chain encryption.

Use a PGP disk inside a TrueCrypt archive on a BitLocker drive. If any of them is not backdoored (and your OS isn’t compromised), then the result should be secure.
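The layering idea can be illustrated in a few lines. This sketch uses a toy hash-based stream cipher purely to show the structure; it is not the actual products named above, and a toy cipher like this must never be used in a real system:

```python
import hashlib

def keystream_xor(key, data):
    """Toy stream cipher: XOR data with SHA-256(key || counter) blocks.
    For illustration only -- real layers would use vetted ciphers (AES, etc.)."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(8, "big")).digest()
        chunk = data[i:i + 32]
        out.extend(b ^ k for b, k in zip(chunk, block))
    return bytes(out)

def chain_encrypt(keys, plaintext):
    """Apply several independent encryption layers in sequence."""
    data = plaintext
    for key in keys:
        data = keystream_xor(key, data)
    return data

def chain_decrypt(keys, ciphertext):
    """Peel the layers off in reverse order.
    (An XOR stream cipher is its own inverse, so the same routine undoes it.)"""
    data = ciphertext
    for key in reversed(keys):
        data = keystream_xor(key, data)
    return data
```

The security argument is that an attacker must defeat every layer: a backdoor in one product still leaves the others intact, provided the keys are independent and the machine doing the layering isn't itself compromised.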

para.noid September 5, 2013 10:04 PM

So if they can replace executables during a download, then I guess they can replace web pages (or PDF files) too? I mean, if they want a person to get information other than what s/he thinks they are getting.

Johnston September 5, 2013 10:05 PM

Worth noting that the author of Curve25519 does not trust NIST curves.

Hopefully we see Curve25519 more and more. It has been selected for use in DNSCurve, CurveCP, Tor, Tahoe-LAFS, and Google’s new QUIC protocol.

gonzo September 5, 2013 10:13 PM

@julien courveur

I would be reticent to layer three pieces of software, each of which does its best to grab all low-level disk accesses and convince the operating system that its encrypted containers are logical disks.

Ever see how computers (mis) behave when two virus detection suites are installed at the same time and don’t know how to play nice?

Gomer September 5, 2013 10:20 PM

Bruce, you didn't even get the signals intelligence directorate's name right in your Guardian article. What other things do you assume you know that you botched up? Pompous and presumptuous.

Dan September 5, 2013 10:35 PM

Hello
Bruce Schneier,

One simple question:

As a simple user who doesn't implement security on websites (or VPN systems) himself, there isn't much I can do about it.

The only thing I control is my use of cloud hosting. Until today I mostly used WinRAR's (from RARLAB) password encryption on documents stored in the cloud that I didn't want dug into.

Now I'm thinking of changing the software I use to encrypt personal files, because WinRAR isn't open source and could have backdoors.

Should I change to 7-Zip? Do you have a better suggestion?

TrueCrypt actually sounds too good to be true ("plausible deniability", as if it's aimed at people doing illegal things). The developers are even anonymous, so after the latest NSA leaks it's even plausible that the NSA is behind it.
I think it's even easier to backdoor an open-source project like TrueCrypt than to force a company to implement a backdoor (considering the gigantic budget the NSA has).
What do you think?

BTW: TrueCrypt was founded about 9.5 years ago, right after the NSA started financing its anti-encryption programs.

GregW September 5, 2013 10:58 PM

I think the comment that “Remember this: The math is good, but math has no agency. Code has agency, and the code has been subverted” underplays the likely implementation vulnerabilities that are neither subverted code nor math.

E.g. known plaintext in protocols and documents (e.g. HTTP headers encrypted within SSL, known HTTP text within specific target HTTPS websites, known byte sequences within Word documents, etc).

floppy September 5, 2013 10:58 PM

Remember the old days when we had to transfer data by putting it on a floppy disk and handing it to a friend? Guess what: what's old is new again. Now the Internet is only good for sharing pictures of our cats.

I just hope that old computer in the closet and the stack of floppies still work.

DB September 5, 2013 10:59 PM

I wonder if the NSA has forced a designed-in weakness in things like Intel’s digital random number generator built into every modern intel computer (Intel’s Ivy Bridge DRNG)…. which would therefore weaken every encryption key generated on every modern intel computer, for example…
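One standard hedge against a suspect hardware generator is never to use its output directly, but to hash it together with independent entropy sources; the mix is at least as unpredictable as the best input. A sketch (the sources shown are illustrative stand-ins, not real RDRAND output):

```python
import hashlib
import os
import time

def mixed_seed(*sources):
    """Condense several entropy sources into one 256-bit seed with SHA-256.
    If at least one source is unpredictable, the output is too."""
    h = hashlib.sha256()
    for s in sources:
        h.update(len(s).to_bytes(8, "big"))  # length-prefix to avoid ambiguity
        h.update(s)
    return h.digest()

# Example: combine OS randomness, a (possibly backdoored) hardware value,
# and a timing sample; no single source has to be trusted on its own.
seed = mixed_seed(os.urandom(32),
                  b"\x00" * 32,  # stand-in for a hardware RNG read
                  time.perf_counter_ns().to_bytes(8, "big"))
```

This is roughly the design philosophy of OS entropy pools: a backdoored input can at worst contribute nothing, but it cannot subtract unpredictability contributed by the other inputs.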

GregW September 5, 2013 11:00 PM

So Bruce, should true hackers boycott the Obfuscated C contest, since its value for adding to the NSA playbook of “accidental mistakes” outweighs the honor of displaying one’s cleverness?

I wanna Fight September 5, 2013 11:01 PM

How do we fight back, how do we take the internet back from this over-reach? How can we trust anyone ever again? How do we get people above-the-law to be bound by those laws again? How do we scorch the neck stumps of this hydra? Well first, we have to cut a few heads off…

Michael Moser September 5, 2013 11:22 PM

So the next time we see a report on a broken random number generator, we'll know the "feature" is by design.

Johnston September 5, 2013 11:28 PM

@Bruce:

You write, "…whether you're running Windows, Mac OS, Linux, iOS, or something else – and a variety of tricks to get them on to your computer… These are hacker tools designed by hackers with an essentially unlimited budget. What I took away from reading the Snowden documents was that if the NSA wants in to your computer, it's in. Period."

Have you seen documents about exploitation of CPU bugs? Anything about Sparc64? What about OpenBSD? I’m talking about unknown vulnerabilities of course.

Thank you dearly for all you do.

Long Time Lurker September 5, 2013 11:29 PM

“I have been working with Glenn Greenwald on the Snowden documents, and I have seen a lot of them. These are my two essays on today’s revelations.”

I hadn’t realized this. It explains a lot about the bias (and ignorance) you have been showing in your articles since this whole affair broke, Bruce.

I am a little surprised here that nobody seems to be concerned that the NSA actually has a mission: counter-intelligence, protecting US government systems, counter-terrorism, etc….

Does anybody here realize how seriously the release of this information about sources and methods will compromise that mission? There is a very good reason this stuff is classified. And all your paranoia aside, it’s not because they don’t want Americans to find out — it’s because they want the methods to still work against the “bad guys”.

I also find it ironic that everybody is dog-piling on the NSA at the same time they anxiously wait “intel” proving who was behind the chemical attacks in Syria. You can’t have it both ways, folks. There’s not going to be any useful intelligence in the future if you take away all the tools.

Anyway, I hope nobody here complains the next time there is an “intelligence failure” and we don’t manage to thwart the next foreign-grown threat, whatever it may be.

Michael Moser September 5, 2013 11:35 PM

What happens if OpenSSL now removes weak ciphers from the library? How is the NSA going to punish them?

gonzo September 5, 2013 11:39 PM

@long time lurker

Stick it in your ear.

The NSA can accomplish its mission without “capturing the entire haystack.”

Part of the problem with devoting ever-increasing resources to a surveillance-state dragnet is that the security apparatus forgets to put due focus on the actual points of risk.

Two words: Boston Bombing.

For all the Americans' stuff being caught up in the "collect the whole haystack" philosophy, it somehow was not obvious to prioritize and actually use subject-specific tools for a guy who traveled to a radicalized area and, ahem, about whom we had actually been WARNED through official channels by another nation-state.

Finally, please explain to me how sniffing all the encrypted packets between my computer and my online banking institution provides a material benefit to the acquisition of intelligence regarding the Syrian chemical attacks. Seriously, you're reaching.

Figureitout September 5, 2013 11:53 PM

Long Time Lurker
–I speak for myself as an American citizen in saying: "I don't give a >>>> who carried out those heinous chemical attacks. It happened in their country, thus it is their problem to solve."

If you or the other politicos who want to go to “war” yet again want to do something about it, suit up and go get killed.

Our critical systems should be shielded and isolated and localized and there are other ways to encourage this security. Centralized control will fail and I don’t trust it.

anonymous September 6, 2013 12:07 AM

The NSA also devotes considerable resources to attacking endpoint computers. This kind of thing is done by its TAO – Tailored Access Operations – group. TAO has a menu of exploits it can serve up against your computer – whether you’re running Windows, Mac OS, Linux, iOS, or something else – and a variety of tricks to get them on to your computer. Your anti-virus software won’t detect them, and you’d have trouble finding them even if you knew where to look. These are hacker tools designed by hackers with an essentially unlimited budget. What I took away from reading the Snowden documents was that if the NSA wants in to your computer, it’s in. Period.

http://www.theguardian.com/world/2013/sep/05/nsa-how-to-remain-secure-surveillance

This seems like it might be more problematic than the NSA cheating to defeat global network security protocols. Can you share more about this, even if you don’t want to reveal everything? Can the average user either do things or avoid things to address this?

MarkH September 6, 2013 12:37 AM

@”long time lurker”:

Let me see if I got this right…

…we need NSA to determine who ordered chemical warfare attacks in Syria? That’s news to me. I thought that observations (coinciding with the time of the poison gas casualties) that ordnance was fired from government-controlled territory into the rebel-controlled district where people were poisoned, made the case fairly well.

…and it is necessary for US officials to betray their oaths of office, and shred the 4th Amendment to the US Constitution by illegally spying on US citizens in the United States — in order to determine who ordered chemical warfare attacks in Syria?

Are you suggesting that the orders for the poison gas attack were issued from Rapid City, South Dakota? Perhaps Modesto, California? Maybe Bowling Green, Kentucky? Or was it Terre Haute, Indiana?

Do I understand correctly that you make no distinction between the interception of foreign signals intelligence (NSA’s lawful mission) and domestic spying, forbidden to NSA by both the United States Constitution and federal statutes?

Did I follow your logic correctly, or did I miss something?

Alain from Switzerland September 6, 2013 12:41 AM

Kudos.

Very comforting to see this coming from a US American; it gives hope. It corresponds pretty much to the expectations I have held over the years; I never felt any of these systems were without backdoors etc.

One crucial question, I think, remains unanswered: how much data is collected and stored, and how much could be stored once the new facility in Utah goes operational? How much data is kept, on average, per average (typically completely innocent) citizen of any country in the world…?

Rob September 6, 2013 12:52 AM

From the Guardian article:

“Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it. That’s how you can remain secure even in the face of the NSA.”

I initially felt that maybe you knew something from looking at the documents that would have backed this up. Reading your responses above though, it seems that you do not know which algorithms are safe.

Maybe reword it a bit, since some will end up quoting you as saying that "you can remain secure even in the face of the NSA" if you follow Bruce's five-point plan and use Diffie-Hellman. I think we are in the same position we have always been in: we have no idea what the NSA is truly capable of and have no reason to believe we are secure against them. Now we just know that they likely wield the kind of power we always imagined/feared.

Love your blog though… just feel maybe we should be more explicit that we do not have any clue yet how to protect ourselves from the NSA or state powers.

Jason Green September 6, 2013 1:14 AM

Bruce,

First, I join many here in offering my deep thanks. For more than a decade you have been an all too rare voice of sanity and reason about security, helping to make important issues understandable to the rest of us.

I do have a couple of questions. In describing your own practice, you mention that you don’t use all the security tools all the time, and that you use a machine with an air gap for highly sensitive material. I wonder how this squares with the notion of trying to improve everyone’s security by getting as many people as possible to encrypt as much as possible. By doing this, you make it economically/computationally impractical to dig wanted data out of a huge encrypted “haystack” . That kind of large scale encryption won’t happen if every encrypted tweet and email has to cross an air gap.

Finally, I would like to encrypt most of my internet traffic, as I see that as something an "ordinary person" can do to protest the surveillance state. Unfortunately, nobody I communicate with on the web shares my concerns, so apart from signing my Facebook wall posts to make a point, encrypting will be a hassle. Since I'm unlikely to convert my social graph to the cause of encryption, what ought I do?

Johannes September 6, 2013 1:22 AM

Remember this: The math is good, but math has no agency. Code has agency, and the code has been subverted.

I don't mean to offend, but at this point I think this naïveté is either wishful thinking or something bordering on stupidity.

The NSA has been involved in every major standard released by NIST, so they should all be assumed backdoored. I mean, sure, there's review from academia (whose integrity, yours included, is not in doubt), but do you really think you stand a chance against all the intellectual capacity accumulated by the NSA's limitless budget? I hardly think so.

It might be difficult to admit, but they've been ahead of academia forever. GCHQ/NSA had RSA long before we did, and academia only discovered differential cryptanalysis around 1990, while the NSA had already incorporated resistance to it when designing the DES S-boxes.

The only valid conclusion is that they are lightyears ahead by now. I feel we are facing an adversary that will not be defeated easily.

gonzo September 6, 2013 1:29 AM

@Jason Green

I think the key reveal is in the indications re: Snowden’s comments on end point security, and the fact as Bruce says, that if the NSA wants to get into your computer they will.

An air gap is therefore crucial at least for decryption operations — for your most secure communications, you do not want to be keying your pass phrase into a machine where the very OS, nay, even the hardware, could be compromised.

Winter September 6, 2013 2:03 AM

On the source of randomness: what exactly is wrong with putting a microphone next to a fan, compressing the sound (bzip2?), and hashing it?

Air turbulence is inherently chaotic. I must be missing something important here.
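Winter's suggestion is essentially entropy conditioning: raw microphone samples are biased and correlated, so rather than using them directly you hash many of them down to a short uniform seed. A sketch with simulated samples (a real implementation would read the sound device and conservatively estimate entropy per sample):

```python
import hashlib

def condition(samples, entropy_bits_per_sample=1):
    """Hash raw 16-bit noise samples down to a 256-bit seed.
    Only safe if the samples jointly contain >= 256 bits of real entropy,
    hence the deliberately conservative per-sample estimate."""
    assert len(samples) * entropy_bits_per_sample >= 256, "not enough entropy"
    h = hashlib.sha256()
    for s in samples:
        h.update(s.to_bytes(2, "big", signed=True))
    return h.digest()

# Simulated 16-bit audio samples; a real source would be microphone input.
fake_samples = [(i * 7919) % 32768 - 16384 for i in range(1024)]
seed = condition(fake_samples)
```

The compression step isn't actually needed for security: the hash already condenses whatever entropy is present. The compression ratio is at best a rough sanity check that the samples aren't constant.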

@How Far
“Could the NSA be intercepting downloads of open-source encryption software and silently replacing these with their own versions?”

Git was designed to prevent corruption of the source tree, and I always understood that you could trust a Git repository to detect a corrupted download. The underlying SHA-1 is not considered secure anymore, but would it really be possible to replace files with functionally identical code that has the same SHA-1 hash?

With a secure Git tree, you can rebuild the original system. And we already know how to defend against Trusting-Trust attacks:
http://www.dwheeler.com/trusting-trust/

Selective replacement of web pages and message digests could be easily thwarted by looking at the pages using Tor and open VPNs, changing the identity between reads.
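For reference, Git's object IDs are SHA-1 over a typed, length-prefixed header plus the content, so silently substituting a file requires a SHA-1 second preimage, which is far harder than the known collision attacks (those require the attacker to craft both files). A sketch reproducing the blob ID:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    """Compute the SHA-1 object ID Git assigns to a blob:
    sha1(b"blob <length>\\0" + content)."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The well-known ID of the empty blob:
print(git_blob_id(b""))  # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Because tree and commit objects embed the IDs of the objects below them, changing any tracked file also changes every tree and commit hash above it, which is why a single remembered commit ID pins the whole history.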

Winter September 6, 2013 2:11 AM

About covert Tor use

A long time ago (I did not check it recently), the Tor pages would say that if you ran a Tor node, all your Tor requests would be directly fed into the Tor network. So your Tor use would be hidden, and you would not be vulnerable to timing correlation attacks.

I always found that a rather dubious claim. Running a Tor node makes you visible as a “suspect” anyhow.

With Bruce's advice to hide in the network, I am still curious whether running a Tor node would help you hide in the crowd.

Jan September 6, 2013 2:17 AM

@Petter: “would you describe signed and encrypted email using CA-issued 2048-bit certs for broken or are they still secure enough”

I’m not Bruce, but still: If you are relying solely on the CA to verify that you got the right key, I’d consider it broken. If they really want to, they’ll simply get a CA to issue a false cert, one way or the other. It is still a hassle and a risk for them, though (you could store the message and later notice the key mismatch), so you should still use it. If they want in, they’ll probably break into your or the sender’s computer.

@underscore: “My question is, is this realistic? Can it be realistically assumed that NSA has the capability to go through a trillion guesses per second?”

GPUs used for Bitcoin mining can calculate hundreds of millions of hashes per second. Commercially available ASICs are in the tens-of-billions hashes per second range.

For unstrengthened passwords, I would consider a trillion guesses per second a low estimate. With strengthening, it depends on the strengthening scheme and how many resources they dedicate to it, but my personal guess is that there might be a small safety margin.
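Jan's estimate is easy to sanity-check with back-of-the-envelope arithmetic (the 10^12 guesses/second rate is Snowden's assumption, not a measured figure):

```python
RATE = 10**12            # guesses per second, per Snowden's assumption
YEAR = 365 * 24 * 3600   # seconds per year

def seconds_to_exhaust(alphabet_size, length):
    """Worst-case time to try every password of a given length."""
    return alphabet_size ** length / RATE

print(seconds_to_exhaust(26, 8))          # 8 lowercase letters: ~0.2 s
print(seconds_to_exhaust(94, 12) / YEAR)  # 12 printable ASCII chars: ~15,000 years
```

Key stretching (PBKDF2, bcrypt, scrypt) multiplies the cost of every guess by a tunable factor, which is the "strengthening" referred to above.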

@Douglas Knight “Does NIST 2006 standard with a backdoor found by 2 MS cryptographers in 2007 uniquely identify Dual_EC_DRBG? If so, why not just say it?”

My guess: Because “Dual_EC_DRBG” confuses regular readers, and the readers who are familiar with the matter will know what they are talking about.

Jim September 6, 2013 2:33 AM

"Something is seriously wrong if we have to regard an agency of our own government … as an 'adversary.'"

It is more wrong than you think. Because this agency is just part of the US government, but not part of the government(s) of the rest of the world.

uwe September 6, 2013 2:38 AM

Let's ignore the commercial software aspect of this. What about open-source software like GnuPG? Did anyone with sufficient knowledge of the topic check the sources/commits, which may point to some "crypto sabotage"? I think under normal conditions the maintainer of the software in question should check commits/submitted patches before they are applied to a release, but what if the NSA gained direct repository access? I mean, you can possibly eliminate backdoors in commercial software by introducing new laws, because the developers know what they did, but who will do this for open-source software? And, hey, that is one of the ideas behind open source: everyone can check it. But in my opinion this is not only a right but also a duty.

Jan September 6, 2013 2:46 AM

@uwe: Since the Debian OpenSSL debacle went undetected for years, the answer is “no”. Everyone can check the source in theory, in practice, nobody does.

Wayne September 6, 2013 3:01 AM

Regarding the command line options in Password Safe, what are the command line arguments? I’ve been using Password Safe since its first incarnation – a truly excellent (and portable!) product; I would love to be able to use any extra functionality that is built-in.

Just me September 6, 2013 3:06 AM

Great, now I have to write my own software to know what is going on. I need to invent my own hardware and the software that makes it work, with no trust in open software or even in libraries; maybe I shouldn't trust books about encryption if I don't completely understand the mathematics they are based on.
I don't want to start a new conspiracy theory, but what if the manufacturers of CPUs undermine the strength of encryption by calculating wrongly, or by omitting support for large intermediate results, to help the NSA break it?

Any suggestions for firewall hardware or software that isn't manipulated and would really secure my network, without backdoors, just doing what I pay for?

Thank you, Mr. Snowden: you ruined my day, made me more paranoid than before, and perhaps saved me from living on as a stupid optimist.

Aspie September 6, 2013 3:30 AM

Looking forward to more details Bruce, now that you’re in the loop.

My memory is hazy, but didn’t this begin with the Clipper chip back in the crazy 1990s? And I seem to recall that DES was deliberately weakened in civilian applications.

knuth September 6, 2013 3:44 AM

I wouldn’t trust Silent Circle. One thing I would add to your security advice is using open-source software. One of the biggest lessons we should take from all these leaked documents is that trust is too big a thing to spend without investigating first. On that basis, I never did trust software that is not open source (and now I have proof I was right). Who tells me that Silent Circle is trustworthy? I trust Moxie Marlinspike’s open-source RedPhone and TextSecure from Whisper Systems more than that, just as I would trust LUKS encryption more than Symantec’s PGP Desktop. Open source is in the open; you can’t hide NSA bugs in it.
Having said that, I’m still a bit suspicious about TrueCrypt because of the allegations that compiling the TrueCrypt source code doesn’t give you the same hash as their binary executable…

Trust is becoming more and more important (it always was, but now we realize it more). So be careful who you pick as your allies 😉
Keep on

Outtanames999 September 6, 2013 3:53 AM

I would just clamp on to the pipes going into and out of CIA and NSA contractors and data centers. I mean it’s what they do, right? Grab the data from the stream. They’ve already done the heavy lifting. They’re really the anti-internet because they have re-centralized everything. How hard could it be to piggy back off them?

All the NSA has done is create the world’s largest honeypot of data, ripe for the picking. But it’s not immune to data-poisoning attacks, so be on the lookout for those. And what they’ve ended up with are, in effect, single points of failure and easy targets for any missile. Two or three and the NSA is left stone deaf. Next!

We will not be free until we know as much about what they are doing as they know about us.

David Jao September 6, 2013 3:55 AM

The advice to avoid elliptic curve cryptography is, I think, a bridge too far. Elliptic curve cryptography is likely safer than the alternatives.

There is evidence of one particular elliptic curve-based random number generator with a magic constant backdoor. This example is an exception, not the norm. My research specialty is elliptic curve discrete logarithms. I know as much about this subject as anyone in the world, and I am regularly in contact with other experts. For plain vanilla public-key encryption, I don’t see any plausible way to backdoor the curve. There do exist known families of weak curves or curves with trapdoors, but the curves used in practice are very different curves, and there is not even a remote connection between the two.

The NIST FIPS 186-3 curves such as P-521 are generated from standard hash constructions. The seeds for the constructions are provided in the text of the FIPS 186-3 standard. The origins of the constants are therefore known. In order to compromise the constants, the NSA would need two independent things: an elliptic curve trapdoor over prime fields, and a preimage attack on SHA-1 (a collision attack is probably not good enough for this application). Either of these would represent a major breakthrough. One could certainly imagine such a scenario, but I think the paranoia dial is pretty high here. It’s much, much easier to compromise a private exponent or RSA key by way of a bad RNG. I suspect it would be easier for the NSA to build a quantum computer than to subvert ECC via some hidden means.
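The verifiable-seed idea Jao describes can be illustrated in a few lines of Python. This is a toy sketch of the principle only, not the actual FIPS 186-3 curve-generation procedure, and the seed value below is made up for illustration (the real seeds are printed in the standard itself):

```python
import hashlib

# Hypothetical seed, made up for illustration; the real seeds for the
# NIST curves appear in the text of FIPS 186-3.
published_seed = bytes.fromhex("00112233445566778899aabbccddeeff01234567")

# A curve constant is derived by hashing the published seed. To plant a
# chosen (backdoored) constant, a designer would need a preimage attack
# on the hash: a seed that hashes to that exact value.
candidate_constant = int.from_bytes(hashlib.sha1(published_seed).digest(), "big")

# Anyone can redo the derivation and confirm the standard's constant
# matches; that is the point of publishing the seed.
assert candidate_constant == int.from_bytes(hashlib.sha1(published_seed).digest(), "big")
```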

Just me September 6, 2013 3:59 AM

@knuth

Open source is in the open, you can’t hide nsa bugs in it.

Even open-source software grows, and I don’t think many people check and understand every line of code before compiling it; and the more functionality is offered as open source and simply built into other open-source products, the less grip anyone can have on it.

I’m with you that the chance of having software without backdoors is most likely higher with open source than with commercial products, but I am quite sure the NSA has bugs built into some interesting pieces of code.

David September 6, 2013 4:05 AM

One thing I’ve always wondered about is Windows Update.

If the NSA has access to Windows Update, it could inject an update with a rootkit built into it onto any computer it desires.

Other software with automatic update features could be targeted the same way.

anonymouser September 6, 2013 4:22 AM

http://en.wikipedia.org/wiki/PBKDF2
is the key-derivation algorithm used by TrueCrypt and many other things.

It “stretches” the limited randomness available in a password by hashing it repeatedly.

You are using it, Mr. Schneier; you should know what it is and say what you think about it.
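For readers unfamiliar with it, here is a minimal sketch of PBKDF2 using Python’s standard library. The parameters below are illustrative, not the settings TrueCrypt actually uses:

```python
import hashlib
import os

# Illustrative parameters only; real applications tune these differently.
password = b"correct horse battery staple"
salt = os.urandom(16)  # random salt, stored alongside the ciphertext

# Each of the 100,000 iterations is an HMAC-SHA256 call, so every
# password guess costs an attacker the same stretched amount of work.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)

# The same password + salt + parameters always yield the same key.
assert key == hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)
```

The iteration count is the tunable cost: doubling it doubles the attacker’s per-guess work at the price of a slower unlock for the legitimate user.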

Outtanames999 September 6, 2013 4:24 AM

@David

“Other software with automatic update features could be targeted the same way.”

Yes like antivirus software. So that the very things we trust the most are now the most untrustworthy.

N September 6, 2013 4:28 AM

Is there any proof of economic spying? The NSA says it’s only for counterterrorism or foreign policy, but do you have documents about such things?

knuth September 6, 2013 4:45 AM

@David
Linux uses hashes to check whether an update is good or not, so one cannot simply inject a rootkit into an update without changing its hash. I suppose Windows has a similar mechanism; otherwise every script kiddie would be able to inject a rootkit into Windows Update with a simple Metasploit module. But the difference with Microsoft is that Microsoft will cooperate, and might well share the encryption keys or carry an NSA bug (for the “ol’ country’s sake”).

@Just me
Of course, very few people check the code. But still, the code is in the open, so especially with open-source cryptography it’s a very risky business to try to hide something there. Although some things are better hidden in the open, I guess 😉

So still, if you don’t want to go back to the caves or use paper-and-pencil cryptography, you have to trust some program for this job. When you do, be sure to do some thorough checking beforehand, keep up to date with news, versions, and criticism about it, and cross your fingers. You have better chances of achieving good results with open source than without it.

Adam September 6, 2013 4:55 AM

It’s things like this that make me think CAs are utterly worthless. It just takes an agency going to a compliant CA and obtaining a bogus signed cert, and now they can impersonate a bank or whatever. How can you pay a company for “trust” when they can totally subvert that trust if they wish?

I wonder how secure the internet could have been if it used a web of trust similar to PGP instead of CAs. It wouldn’t stop CAs from signing a key as part of a web, but it would also allow individuals and companies with business and personal relationships to sign each other’s keys, making it far more difficult to impersonate or falsify a key.

Personally I think the whole CA signing of certs should be deprecated immediately for something more resilient.

Bruce Schneier September 6, 2013 5:12 AM

“In your opinion, would you describe signed and encrypted email using CA-issued 2048-bit certs for broken or are they still secure enough?”

I would assume that the CA is either colluding with the NSA, or that NSA hackers have broken into the CA and stolen the master private key.

I don’t know about 2048-bit RSA, but when I made a new key recently I chose 4096 bits — just to be safe.

Bruce Schneier September 6, 2013 5:13 AM

“How about making the challenge to the IETF a bit more specific: could you name some titles of new RFCs that should be written? What are we replacing here?”

I don’t know. I’m hoping you all figure it out.

Carlos September 6, 2013 5:14 AM

But! But! But!

Syria!

Chemical weapons!

Terrorists!

Also, pay no attention to the man behind the curtain.

knuth September 6, 2013 5:14 AM

I agree with Adam. Gods in security are a bad thing, and CAs are just that. You trust someone as your God and say, “if the God says he’s given this certificate to Schneier, then I trust him that he hasn’t given the same one to the NSA.”
And if you lose trust in your God, how can you trust him again?
Although (correct me if I’m wrong) there hasn’t been any evidence so far that the NSA has “backdoor copies” of certificates from CAs.

Bruce Schneier September 6, 2013 5:15 AM

“My question is, is this realistic? Can it be realistically assumed that NSA has the capability to go through a trillion guesses per second? Or was Ed Snowden perhaps exaggerating to ensure that Laura selects a really strong password?”

I have no special knowledge, but it seems like an exaggeration to me.

Bruce Schneier September 6, 2013 5:16 AM

“Given the current high profile of this topic, do you have any recommendations on how to even start the process cold? How does one, without just exposing their jugular, start to pass along a tale? With Whom would you recommend starting the conversation, especially since there seem to be so many conflicts of interest, unreliable reporting, and laser-like focus on the current actors (Greenwald, Poitras, etc)?”

I don’t know. Someone — other than me, I think — needs to write an up-to-the-minute guide for potential whistleblowers.

Bruce Schneier September 6, 2013 5:19 AM

“Does NIST 2006 standard with a backdoor found by 2 MS cryptographers in 2007 uniquely identify Dual_EC_DRBG? If so, why not just say it?”

I don’t know.

And I was not involved at all in the writing of the news story. I thought it was weird, too.

Bruce Schneier September 6, 2013 5:20 AM

“I find it humorous that you express concern over government spying, and yet link directly to a facebook page for readers to follow… The irony is immense.”

Oh, I assure you, I know.

Bruce Schneier September 6, 2013 5:23 AM

“Bruce, you didn’t even get the signals intelligence directorate name right in your guardian article. What other things don you assume you know that u botched up? Pompous and assuming.”

I did not write the Guardian article. I only wrote the two essays with my name on them.

I hope that I didn’t botch up anything. I’m sure I did.

Bruce Schneier September 6, 2013 5:25 AM

“‘I have been working with Glenn Greenwald on the Snowden documents, and I have seen a lot of them. These are my two essays on today’s revelations.’ I hadn’t realized this. It explains a lot about the bias (and ignorance) you have been showing in your articles since this whole affair broke, Bruce.”

I might be biased and ignorant, but this doesn’t actually explain it. Most of my writings since this affair broke were written before I got involved with the documents.

Bruce Schneier September 6, 2013 5:27 AM

“‘Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it. That’s how you can remain secure even in the face of the NSA.’ I initially felt that maybe you knew something from looking at the documents that would have backed this up. Reading your responses above though, it seems that you do not know which algorithms are safe.”

Correct. I have no special knowledge about any specific algorithms.

Bruce Schneier September 6, 2013 5:29 AM

“In describing your own practice, you mention that you don’t use all the security tools all the time, and that you use a machine with an air gap for highly sensitive material. I wonder how this squares with the notion of trying to improve everyone’s security by getting as many people as possible to encrypt as much as possible.”

Good security is too annoying to do all the time. Life is compromise, and it’s no different in security.

Carlos September 6, 2013 5:37 AM

@QnJ1Y2U
“Coming next: NSA compromises all brand new computers.”

They’re already compromised, and have been for some time. That is kind of the point of the articles.

aaaa September 6, 2013 5:49 AM

@knuth If I can give you a fake Linux update, then I can probably give you a fake hash for that update too.

Danilo D'Antonio September 6, 2013 6:03 AM

In the U.S. the state men have a budget of $52,000,000,000 just for secret activities. Well, I do not think we can reduce all responsibility for this to the U.S. Government. A sum of $52,000,000,000 means a lot of POWER and INCOME for the state men concerned: public careerists, public employees for life. The public careerists indeed have a power even greater than the highest governmental authorities. The public careerists are tremendous!

For this reason I dedicate to Mr. Schneier:

To the Kings of Cypherpunk
http://www.hyperlinker.com/ars/en/_ttkoc.htm

Greetings, Danilo D’Antonio

knuth September 6, 2013 6:04 AM

@aaaa
You would also have to trick the user into downloading fake keys for the signature check, and convince him to fetch those new fake keys from the keyserver so that the fake updates pass the check. And after doing that you would have to keep feeding him fake updates; otherwise, on the first real update, the signature check would kick in because of the faulty keys.

Yes, Linux downloads updates over HTTP, not HTTPS. But the signature check after that is based on the keys the user already has. Now, whether those keys are trustworthy or not is a very good question…
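The integrity step described above can be sketched like this. It is a toy illustration of the principle only: real package managers such as apt verify a GPG signature over a release manifest, a step elided here:

```python
import hashlib
import hmac

def verify_package(package_bytes: bytes, expected_sha256_hex: str) -> bool:
    # Compare the downloaded package's hash against the value taken from
    # a manifest that was itself checked with keys already on the machine.
    # (The manifest-signature step, e.g. apt's GPG check, is elided here.)
    actual_hex = hashlib.sha256(package_bytes).hexdigest()
    return hmac.compare_digest(actual_hex, expected_sha256_hex)

good = b"pretend this is a package file"
manifest_hash = hashlib.sha256(good).hexdigest()

# The transport (plain HTTP) doesn't matter if this check passes:
assert verify_package(good, manifest_hash)
# An attacker who swaps the package must also forge the signed manifest:
assert not verify_package(b"tampered package", manifest_hash)
```

This is why the trustworthiness of the pre-installed keys, not the transport, is the real question.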

fridz September 6, 2013 6:06 AM

@Bruce
Excellent articles in the Guardian.

However I found your tacit endorsement of Silent Circle a little out of character for you.

On their homepage they say that “By making everything proprietary — from the network and servers to the fiber-optic cables — Silent Circle aims to keep your online communication private…”

Also on their homepage, it says that the company is run by ex US Navy Seal and British SAS security experts.

Closed source, security by obscurity, ex US and British Military… somehow this does not sound trustworthy…

Curious September 6, 2013 6:13 AM

@N

I guess, if I were a bureaucrat or a trusted lieutenant of some kind, I would be tempted to come up with ways to keep such aspirations secret, or to make the enterprise look so trivial that one wouldn’t otherwise notice it.

ATN September 6, 2013 6:16 AM

One undetectable way to defeat perfect random number generators is to copy/send their output to some spying host.
I find it “strange” that you can’t get most/all WiFi source drivers on Linux and have to load pre-compiled modules; the hardware providers do not really benefit from the added complexity.
I would not be surprised to see non-Ethernet frames going over the Internet, frames never displayed by any Ethernet protocol analyzer.

The security of Linux loaded with binary drivers (WiFi, graphics card, …) is equal to the security you get from Windows or OS X.

Wesley Parish September 6, 2013 6:26 AM

So we finally know who to blame for the attempted Linux kernel subversion that BitKeeper caught back in 2003.

Curious September 6, 2013 6:33 AM

@Curious “In one of the Guardian articles you said…”

Oh hey, an imposter 🙂 Someone else being curious.

Clive Robinson September 6, 2013 6:40 AM

@ Albert,

With regards,

    Ok, on to the big question. Is AES safe?

The answer is “you know not what you ask”, and because of that the reply of “maybe, probably not, and definitely not” is going to look odd at best 🙂

I’m not trying to be rude or to pick on you, or on the many others who are asking the same question in their heads but out of fear etc. won’t ask it.

The problem is in part the way we use “AES” as shorthand and assume that others will know, from the context of what we are saying, which of the myriad possible meanings we intend.

AES is the standard that resulted from a competition to find a replacement for the previous US encryption standard, DES.
At its heart is a cryptographic primitive: the block-cipher algorithm itself.

So the first of my three answers, “maybe”, applies to the algorithm. That is, we believe the algorithm is secure in the light of knowledge current in the open community at the time the algorithm was selected. But we also know, by the same measure, that other algorithms submitted were probably more secure and almost certainly still are. What we don’t know is what is yet to be discovered in the way of breaking the algorithms, or what the closed community of the NSA knows but is not saying (we know they certainly rigged the competition process).

On to my second answer of “probably not”: any encryption algorithm is fairly useless on its own. A block cipher like the AES algorithm is in effect a glorified “substitution cipher”, and used that way it can be incredibly weak. Thus the algorithm is used inside other algorithms known as modes, hence you see AES-CBC, AES-CTR, where the mode type appears after the hyphen. Now, most modes are designed for protecting data “on the move” as opposed to “data at rest”; while some modes are good for the former, they may be terrible for the latter. Within the mode algorithms are “magic numbers” and assumptions such as “nonces”, or even other algorithms in the case of “tweakable” modes. One such is XTS, used in TrueCrypt, and that algorithm is known to be brittle in use due to its underlying assumptions, which is why there are recommendations people need to follow if the use is going to match the assumptions. This causes real-world issues when using flash drives, for instance.
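The “glorified substitution cipher” point can be demonstrated in a few lines. A keyed hash stands in for the block cipher here, since the pattern leak is a property of the mode, not of any particular cipher (note the hash stand-in is one-way, so this toy “ECB” cannot actually be decrypted; it only shows the leak):

```python
import hashlib
import hmac

KEY = b"a fixed secret key"

def prf_block(data: bytes) -> bytes:
    # Stand-in for one block-cipher call under a fixed key (16-byte
    # output, like AES). Illustration only: HMAC is not invertible.
    return hmac.new(KEY, data, hashlib.sha256).digest()[:16]

plaintext_blocks = [b"ATTACK AT DAWN!!"] * 3  # three identical 16-byte blocks

# "ECB": each block enciphered independently, so repetition shows through.
ecb = [prf_block(b) for b in plaintext_blocks]
assert ecb[0] == ecb[1] == ecb[2]

# "CTR": encipher a counter and XOR with the plaintext; repetition is hidden.
def ctr_encrypt(blocks):
    out = []
    for i, block in enumerate(blocks):
        keystream = prf_block(i.to_bytes(16, "big"))
        out.append(bytes(p ^ k for p, k in zip(block, keystream)))
    return out

ctr = ctr_encrypt(plaintext_blocks)
assert ctr[0] != ctr[1] and ctr[1] != ctr[2]
```

The famous ECB-encrypted penguin image is exactly this effect at scale: identical plaintext blocks produce identical ciphertext blocks, and the picture’s outline survives encryption.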

Which brings me to my third answer of “definitely not”, which relates to real-world implementations and use. The AES algorithm and the mode algorithms have to be turned into lists of instructions for the system they run on. In most cases this means a high-level programming language that is compiled and run-time linked into the OS, producing CPU-level instructions called machine code. In turn, the machine code is interpreted in the CPU as micro-code, which drives the data through CPU latches and around the various parts of the ALU, register files, etc. within the CPU. These parts of the CPU are in turn made of logic gates, which have power-consumption signatures and time delays, and these can and do open up side channels. I mentioned earlier that the NSA rigged the AES competition: that is, they advised NIST on the selection criteria, which meant that all the candidates had to “post their code” and also had to meet certain criteria such as speed, the number of gates used, etc. These criteria led the candidates to optimise their code to be as “efficient” as possible against each criterion. Now, as the NSA, GCHQ, et al. know very well, the more efficient you make the implementation of crypto code, the more side channels it has, unless extreme caution is observed. One thing we do know is that code optimised for speed with a minimised number of gates is an almost certain guarantee of side channels, no matter how clever you are. The NSA also knew that developers would not write their own code; they would simply download and use the competition candidates’ code.

As was pointed out, and exploited with demonstration code, implementations of AES were subject to timing attacks that could fully leak the key across a network connection due to “cache hits” on the Intel x86 platform, from base-level Pentiums upwards, within weeks of the winning candidate being announced. Even today there are very side-channel-susceptible AES implementations in use; in fact the majority of implementations on the likes of routers and switches are timing-channel cursed, as are most application-level software implementations more than a year or so old…

It’s why I tell people never to encrypt or decrypt data on a computer that is connected to an external network: do it just once and the key may well be compromised.

I hope that answers your question satisfactorily.

@ David Jao,

You say the seed is published, but you forgot to mention that unless the standard uses a seed that is verifiably independent (say, the first hundred digits of Pi, or some other well-known constant that has been known for more than 100 years), it is just another unknown magic number, irrespective of how many times it’s been hashed.

Further, we know that hash collisions can be computed much faster than by “brute force” for at least one hash construct, and we also know that the NSA or some other US agency knew at least one way to arrive at collisions that the open community didn’t, before it popped up in some network malware. It would be foolish to think they do not know shortcuts to collisions for other hash constructs, simply because they allowed this one’s existence to become known.

@ Knuth,

    Open source is in the open, you can’t hide nsa bugs in it

Yes you can: you only have to be marginally smarter than those doing the code review. I’ve mentioned this several times in the past, and it’s one of several reasons I think “code signing” has little if any real security value.

@ Dave,

I think you will find that Microsoft’s software updates have already been compromised in the past.

Albert September 6, 2013 6:59 AM

Thanks Clive for the explanation of protocols and modes. I was really just curious whether they had figured out some shortcut to break, let’s say, AES-128 in 2^60 operations or fewer (or something like that). In that case many applications would be broken, since AES is used everywhere.

If I remember correctly Rijndael was a bit special among the finalists for AES since it was not based on a Feistel network. But it seems most experts now agree that it is safe enough anyway. It has received a lot of scrutiny since then.

It doesn’t look like there was any information about AES though in these documents. Bruce says he trusts it. That’s enough for me. Then I trust it too 🙂

Bruce Schneier September 6, 2013 7:02 AM

“However I found your tacit endorsement of Silent Circle a little out of character for you.”

As I keep saying, I’m playing the odds. I don’t know what is strong and what has been compromised. I only know what sorts of attacks the NSA tends to prefer, and what sorts of targets are more vulnerable to those attacks.

I know some of the people behind Silent Circle very well. And I don’t have anything better for my iPhone.

Bruce Schneier September 6, 2013 7:03 AM

“It doesn’t look like there was any information about AES though in these documents. Bruce says he trusts it. That’s enough for me. Then I trust it too :)”

Clearly I need to write an essay about how to figure out what to trust in a world where you can’t trust anything.

thefunnierthebetter80 September 6, 2013 7:05 AM

@knuth • September 6, 2013 3:44 AM

“Having said that i’m still a bit suspicious about truecrypt because of the allegations that compiling truecrypt source code doesn’t give you the same hash as their binary executable…”

That’s quite obvious: TrueCrypt binaries are digitally signed by the developers; otherwise they wouldn’t work on Windows (Win7 64-bit doesn’t allow unsigned drivers). You’ll never get a binary with the same hash as theirs if you compile from source.

@ Dan • September 5, 2013 10:35 PM

“I think that it’s even easier to backdoor opensource like Truecrypt than actually force company to implement it. (considerate the gigantic budget NSA has)

BTW: Truecrypt founded before 9.5 years, right after NSA start finance the anti-encryption programs..”

Yeah, right. That’s why I only trust closed-source software, released more than 10 years ago, and I feel even safer if it’s full of spyware and ads.

knuth September 6, 2013 7:05 AM

@Clive I also said that hiding things in plain sight is sometimes better, because no one expects it 😉
And open-source security has as a prerequisite that the code reviewers are “gods” who can spot any buggy changes someone adds (another weak link in trust).

Can you provide more info about the weakness of AES when run on a machine connected to an external network?

David September 6, 2013 7:07 AM

@knuth

I mean that if they have access to the fundamental infrastructure of Windows Update or a similar system, they can inject anything they want targeting anybody.

The updates would be signed by Microsoft, so you would have to verify the hash of every update manually against some generally available source. I can’t imagine anybody doing that, let alone the general public.

Think about how many sources we trust on a generic Windows PC with all those automatic updaters running. It’s an absolute nightmare.

I agree that it would be more difficult to do this on Linux or another OSS OS, but still, who knows what they have access to. If I ever want to keep something a secret I’m probably not going to store it on a computer.

It’s also important to note that I don’t mean a rootkit that everybody would install because the repository is compromised; that would be too conspicuous. I mean specifically targeting individual computers/people with a rootkit by identifying their machine (some sort of ID, IP address, etc.), and offering them a different (perfectly signed) update than everybody else is receiving.

aaaa September 6, 2013 7:20 AM

@thefunnierthebetter80 There is that issue with truecrypt behaving differently on windows and linux. Linux uses 0 for padding and windows uses random data. That would be a nice place to store some additional data.

knuth September 6, 2013 7:26 AM

@David Unfortunately, if you want to keep something secret you have to use a PC; otherwise it’s paper and pencil and working the algorithms by hand. How would you keep it secret otherwise? Storing it somewhere locked with a key, when you know that every lock can be broken? Storing it in a bank (haha)?
Obviously you would do something like Bruce suggested and use an air gap (a PC that never meets a network), although even this has a flaw: you would end up using an outdated system with every bug it has accumulated since its installation date.
When you use a computer you take a risk (as you do every second you live in this world). The goal is to minimize the risk…

@thefunnierthebetter80
I’m talking about linux not windows

ToldYaSo September 6, 2013 7:32 AM

Let’s begin listing the colluding/complacent parties and boycotting them. Microsoft did themselves no favors with Windows 8 already, so that’s an easy boycott…
What I find scariest of all is that there has been no disavowal of any of these programs or tactics from the NSA/government at all. They just say it’s necessary, please move on. They want to show everyone how big their balls are and how badly they can break shit. It’s a bluff, a last-ditch effort to look strong when they have been weakened to the core.

David S. September 6, 2013 7:34 AM

@knuth

You’re absolutely right, I clicked a little too soon; I meant a networked computer.

A non-networked computer with encrypted drives is probably the safest thing I can think of.

Apologies for using the name David, I just noticed somebody else is already using that.

thefunnierthebetter80 September 6, 2013 8:01 AM

@knuth

If you’re talking about Linux, then it might depend on the version of the compiler (every distro has a different version of gcc), the versions of every single library needed for compiling, etc. It’s very difficult to get the very same binaries on two different computers; try it with other open-source software.

@aaaa

Honestly, I didn’t know that. Anyway, the source code is freely available; I guess that if that were a backdoor, someone would have already found it. TrueCrypt has been out for almost 10 years.

Clive Robinson September 6, 2013 8:01 AM

@ Bruce,

You mentioned in your update RC4 as potentialy having more problems.

We already know that there is some bias in the output, and that the key schedule really needs to be run a few more times around the state array after loading in the key.

Also, with ARC256/8 the output one-way function is to add two bytes from the state array mod 256, where the length of the state array is 256 bytes. This is a potentially fragile construct.

Since 97/98 I’ve used ARC1024/8 with a key-schedule run-on of five times around the 1024-byte S-array.
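A sketch of that generalized construction, as I read Clive’s description (this is not his actual code): an RC4-style cipher with a configurable state size and a configurable number of key-schedule passes. With state_size=256 and passes=1 it reduces to plain RC4, which lets the sketch be checked against the classic “Key”/“Plaintext” test vector:

```python
def arc_keystream(key: bytes, length: int, state_size: int = 256, passes: int = 1) -> bytes:
    """RC4-style keystream with a configurable state size and number of
    key-schedule passes; state_size=256, passes=1 is plain RC4."""
    S = list(range(state_size))
    j = 0
    # Key schedule: multiple passes over the state array diffuse the key
    # further before any output is taken (j carries across passes here,
    # which is one possible design choice).
    for _ in range(passes):
        for i in range(state_size):
            j = (j + S[i] + key[i % len(key)]) % state_size
            S[i], S[j] = S[j], S[i]
    # Output generation (PRGA); the final mod 256 reduces wider state
    # entries down to a byte, as in the ARC1024/8 naming.
    i = j = 0
    out = bytearray()
    for _ in range(length):
        i = (i + 1) % state_size
        j = (j + S[i]) % state_size
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % state_size] % 256)
    return bytes(out)

# Plain-RC4 check: key "Key" encrypting "Plaintext" (classic test vector).
ks = arc_keystream(b"Key", 9)
ct = bytes(k ^ p for k, p in zip(ks, b"Plaintext"))
assert ct.hex().upper() == "BBF316E8D940AF0AD3"

# Clive-style variant: 1024-entry state, five key-schedule passes.
wide = arc_keystream(b"Key", 16, state_size=1024, passes=5)
assert len(wide) == 16
```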

Further, for some applications I’ve modified the array-update algorithm by adding a variable to the J pointer, or added a variable to the output value prior to the mod 256.

I did this originally as a method of generating high-speed random numbers, where the variable in use was actually the output of a slow TRNG.

However, during the testing phase I used either a BBS PRBS generator, 10 bits at a time, or a variation on the “Mitchell-Moore” [1] variation of an LFSR with 16-bit output, where I dropped the bottom six bits to give me the 10-bit value.

Both gave rather pleasing results on testing, and I’ve since used them in a number of products (the Mitchell-Moore output passed through a nonlinear circuit, similar to those found in some BID stream generators).

[1] For those who want to know more about the Mitchell-Moore generator, it can be found in Donald Knuth’s trilogy, in the section on random number generators. Put simply, it uses an obvious observation to turn an LFSR into a byte-wide operation suitable for efficient CPU usage. As many people are taught, an LFSR is a bit-wide array with “taps” that are XORed together, the result fed back to the shift-register input. The simple observation is that when you add two integers together, the bottom bits are actually XORed; thus if you instead use an array of unsigned integers and add from the tap points, you get the same effect as a maximal-length LFSR on the LSB, but the addition evolves through the integer towards the MSB. For obvious reasons, both the SUB and MAD (multiply-and-add) instructions can be used as well. It’s since been shown that the maximal length of the output is in effect equivalent to 2^nm, where n is the number of integers in the array and m is the bit width of the unsigned integers. If you look at the basic design of the SNOW stream cipher, you will see it’s in effect a Mitchell-Moore generator that uses a cellular automaton as the non-linear feedback element.
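A minimal sketch of the additive lagged-Fibonacci generator described in the footnote, using Knuth’s lags of 24 and 55. The seeding and parameters here are illustrative, not Clive’s actual implementation (and note this construction alone is not cryptographically secure):

```python
from collections import deque

def mitchell_moore(seed_words, word_bits=16, lags=(24, 55)):
    """Additive lagged-Fibonacci generator in the Mitchell-Moore style:
    X[n] = (X[n-24] + X[n-55]) mod 2^word_bits. The LSBs behave like an
    XOR feedback (a carry-free add), while carries propagate structure
    up toward the MSB."""
    short_lag, long_lag = lags
    assert len(seed_words) >= long_lag
    mask = (1 << word_bits) - 1
    state = deque(w & mask for w in seed_words[:long_lag])
    while True:
        x = (state[-short_lag] + state[-long_lag]) & mask
        state.append(x)
        state.popleft()  # keep the window exactly long_lag entries wide
        yield x

# Deterministic seed for illustration; a real use would seed from entropy.
seed = [(i * 2654435761) & 0xFFFF for i in range(1, 56)]
gen = mitchell_moore(seed)
sample = [next(gen) for _ in range(5)]
assert all(0 <= x < 2**16 for x in sample)
```

Dropping the bottom six bits of each 16-bit word, as described above, would then yield the 10-bit values Clive mentions.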

speedie September 6, 2013 8:38 AM

Bruce,
I see you avoid questions about the OS you’re using (well, vagueness is a sort of avoidance).
I would presume you’re biased towards Linux, on the basis that with open source anyone can check for malicious bits added to it. Regarding that, would you advise against using a certain North American company’s product, which has NSA code deeply integrated into its OS?
Great work, btw 🙂

Dan September 6, 2013 8:40 AM

Hello
Bruce Schneier,

One simple question:

As a simple user who doesn’t implement the security on websites (or VPN systems) myself,
there isn’t much I can do about that side of things.

The only thing I have control over is my use of cloud hosting:
until today, I mostly used WinRAR’s (RARLAB) encryption password on documents stored in the cloud that I didn’t want dug through.

Now I’m thinking of changing the software I use to encrypt personal files (because WinRAR, not being open source, could have backdoors).

Should I change to 7-Zip? Do you have a better suggestion?

TrueCrypt actually sounds too good to be true (“plausible deniability”: as if they want people doing illegal things to use it). The developers are even anonymous, so it’s even rational (after the latest NSA leaks) to suspect that the NSA is behind it.
I think it may even be easier to backdoor open source like TrueCrypt than to actually force a company to implement a backdoor (considering the gigantic budget the NSA has).
What do you think?

BTW: TrueCrypt was founded 9.5 years ago, right after the NSA started financing its anti-encryption programs…

dan September 6, 2013 8:48 AM

@thefunnierthebetter80

No.
I’m sure, for example, that BitLocker has a backdoor. 100% sure.
But TrueCrypt? I don’t know.

David Jao September 6, 2013 9:03 AM

Clive Robinson, I said in my comment that a collision attack is probably not good enough for what you’re suggesting. A collision attack lets you generate two identical curves, with little control over what the curve is. I can’t see how to build a backdoor into that situation. What you would need to do is fix a curve in advance, which has a backdoor, and use the hash to generate that particular fixed curve. This attack requires computing preimages for SHA-1, which is way beyond what the academic community can accomplish. Without an available preimage attack, hashing DOES help, a lot, and you are wrong to ignore it.

It is possible that the NSA has a backdoor. Anything is possible. But I don’t consider it likely.
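The “verifiably random” idea Jao describes can be sketched as follows. This is a simplified illustration, not the actual ANSI X9.62 procedure (which hashes the seed in counter-mode chunks and runs further checks); the function name is mine:

```python
import hashlib

def derive_param(seed: bytes, p: int) -> int:
    """Derive a field element from a published seed via SHA-1 (simplified sketch).

    To plant a *chosen*, backdoored parameter here, an attacker would need a
    SHA-1 preimage: a seed hashing to that exact value. A collision attack
    (two seeds with the same hash) gives no control over what the value is,
    which is why a collision alone doesn't buy a backdoor.
    """
    return int.from_bytes(hashlib.sha1(seed).digest(), "big") % p

# Anyone can re-run the derivation from the published seed and check that it
# matches the standardized parameter -- that is the whole point of the scheme.
```

The residual worry, of course, is how the seed itself was chosen, which is a separate question from whether the hash step can be inverted.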

JohnE September 6, 2013 9:14 AM

My Google HTTPS connections (in the US) for the last year(s) have used RC4, so today I wanted to see what cipher suite is being used…AES_128_GCM…coincidence?

peter September 6, 2013 9:35 AM

Bruce has talked quite a bit about security theatre. Who knew that the NSA had reduced almost all internet security itself to theatre. A big show.

Bruce Schneier September 6, 2013 9:42 AM

“I see you avoid questions about the OS you’re using (well, vagueness is a sort of avoidance).”

What? I said it in the Guardian piece. I use Windows (unfortunately).

Bruce Schneier September 6, 2013 9:46 AM

“However I found your tacit endorsement of Silent Circle a little out of character for you.”

I’m not endorsing it. I’m using it. If you know of a better solution for iPhone secure voice and messaging, please tell me.

Jacob September 6, 2013 9:49 AM

I have been following the Truecrypt story for some years, and my conclusion is that this program can not be trusted:

  1. It is unique in the open-source world: the developers of a very successful program, one which is actively maintained, runs well on multiple platforms, and is well documented, refuse to show themselves to the public, with the organization (“the foundation”) behind them shrouded in secrecy.
    To further stress the point: I’ve never encountered a highly capable coder, financially uncompensated, who would produce such well-written documentation (and not boast about it, to boot).

  2. There is strong anecdotal evidence that the developers do not respond to comments other than a direct challenge to the cryptographic implementation in the program. Highly unusual for such a project.

  3. A crypto review of the program found an anomaly in the Windows source code, in critical security code, that is highly suspect. No such anomaly was found in the Linux source code, though. Consequently, the authors recommend using only the Linux version, compiled by the user.

Skeptical September 6, 2013 9:54 AM

But this has nothing to do with what information the NSA is permitted to access under law.

In our society, you do not have an unlimited right to privacy. Instead, where allowed by our law, the government may search your papers and property, and it may compel you to provide keys to encrypted material.

Your right to privacy in the United States is secured by good law and good institutional design and culture. If a court issues a warrant to allow my belongings to be searched, or my communications to be intercepted, that’s okay.

I am more concerned about entities other than the US government eavesdropping. Companies. Criminals. Bored script kiddies. So long as encryption works against them, I’m fine.

These leaks have wandered far from the initial disclosure of bulk metadata collection of domestic phone calls. We’re now disclosing capabilities of the NSA vis-a-vis certain forms of protocols. We’re now disclosing methods.

This actually harms national security – and not in the dubious way that “national security” is paraded out whenever a leak occurs.

Honest question Bruce. I know nothing of your politics, but are you not a little concerned about this? Are you not a little concerned that so many media organizations have access to 50,000+ highly classified documents? How secure are those documents, at this point?

I have the sense that a line is being crossed with some of this reporting, and I don’t mean that in a “let me argue on the government’s behalf/Fox News” sense.

MarkH September 6, 2013 10:10 AM

@Jan (re Debian SSL):

No doubt all sorts of mistakes have slipped through open source review, but that ghastly security failure is not a true example.

The OpenSSL developers had correct code in the relevant module.

A participant in the bundling of the Debian distribution — who obviously had no concept of cryptography — modified that module, without informing the OpenSSL developers. This was a violation of good distribution software practice: for example, at that time the RedHat developers had a standing policy that all changes MUST be sent to the software’s custodians for review.

I think it quite likely that if the OpenSSL group had been invited to look at what the Debian person did — or, better still, if the module change had been submitted as a proposed OpenSSL revision — they would quite promptly have screamed bloody murder.

So the Debian catastrophe was the consequence of “bad hygiene,” rather than failure of open source developers to catch a defect in review.

Dave O September 6, 2013 10:11 AM

Bruce: “New talking point: The NSA has deliberately made us all less safe.”

There may be data here. Are a percentage of vulnerabilities fixed by vendors deliberately introduced by the NSA (or GCHQ, or Russia or China or France or Israel’s equivalent entity) and later found by hackers?

Finding which were deliberately introduced would probably require a lot more success on your whistleblower strategy, mind… would there be mileage in looking at who originally introduced later-fixed exploitable bugs into open-source software and looking for patterns?

More generally, does this also explain why the US has made noises about not liking the proliferation of Huawei gear in the West – that it’s simply a case of them assuming that what they do to vendors in the US, the Chinese government does to Chinese vendors?

Bruce Schneier September 6, 2013 10:12 AM

“Honest question Bruce. I know nothing of your politics, but are you not a little concerned about this? Are you not a little concerned that so many media organizations have access to 50,000+ highly classified documents? How secure are those documents, at this point?”

Yes. I’m concerned with it. Their opsec is worlds different than the NSA’s.

Part of the problem is that everyone is guessing what is really worth protecting. “Will publishing the article on this cause someone who is now using it to switch to another method? Is that someone bad and planning actions that could harm Americans?”

Those with the documents are doing their best to make reasonable judgements. If the government was more open to a meta-discussion on what should or should not be made public, then there might be ways to better protect the important things. But by reflexively pushing back and refusing to engage at all, they lose that chance.

The underlying problem is overclassification, and a “secrecy above all” mindset.

Winter September 6, 2013 10:21 AM

“New talking point: The NSA has deliberately made us all less safe.”

We have one contractor, Snowden, putting his life at risk to inform us about the NSA crimes.

All these back-doors and other security holes are worth quite a lot of money. All that data stored in the NSA repositories is also worth a lot. How many contractors and others with access have used their knowledge for personal gain?

I do not even want to guess how many colleagues of Snowden have sold their knowledge to criminals, spies, and commercial parties. And how many of these back-doors are circulating among criminals?

Bruce Schneier September 6, 2013 10:25 AM

“I do not even want to guess how many colleagues of Snowden have sold their knowledge to criminals, spies, and commercial parties? And how many of these back-doors are circulating among criminals?”

Yes. The fact that the NSA still has no idea what Snowden took, and would not have known he took anything at all had he not gone public, strongly implies that he is not the first.

Random832 September 6, 2013 10:42 AM

@thefunnierthebetter80

“That’s quite obvious: TrueCrypt binaries are digitally signed by the developers, otherwise they wouldn’t work on windows (win7 64 bit doesn’t allow unsigned drivers). You’ll never get a binary with the same hash as theirs if you compile from source.”

Is there a way to “dis-sign” a binary to prove that the original binary is identical to some other unsigned binary? It seems like there should be.
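There is, roughly: the Authenticode signature lives in a certificate blob appended to the PE file and pointed to by the security data directory, so both can be removed before comparing hashes. A minimal sketch follows — the function name is mine, the offsets follow Microsoft’s PE/COFF format, and note that full Authenticode hashing also excludes the checksum field, and compilers rarely produce bit-identical builds anyway, so in practice this compares best against the vendor’s own unsigned build:

```python
import struct

def strip_authenticode(data: bytes) -> bytes:
    """Remove the appended certificate blob and zero the security directory."""
    pe = struct.unpack_from("<I", data, 0x3C)[0]        # e_lfanew -> "PE\0\0"
    assert data[pe:pe + 4] == b"PE\0\0", "not a PE file"
    magic = struct.unpack_from("<H", data, pe + 24)[0]  # optional header magic
    dd = pe + 24 + (96 if magic == 0x10B else 112)      # PE32 vs PE32+ directories
    sec = dd + 4 * 8                                    # entry 4: SECURITY directory
    cert_off, cert_len = struct.unpack_from("<II", data, sec)
    out = bytearray(data)
    struct.pack_into("<II", out, sec, 0, 0)             # zero the directory entry
    if cert_off and cert_off + cert_len == len(out):
        del out[cert_off:]                              # blob sits at end of file
    return bytes(out)
```

After stripping, hash the result (e.g. with SHA-256) and compare it against a hash of your own compiled binary.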

Terry Crews September 6, 2013 10:46 AM

Don’t be fixated on the NSA. Does Albert Wenger propose we sue everyone else that is breaking the law to steal information? He’s living in Candy Land. I guess when all you have is a hammer, everything starts to look like a nail.

MarkH September 6, 2013 10:49 AM

@charles:

According to a published report (not handy, I read this years ago when it was still news), a Debian team member ran OpenSSL through a source-code checker (along the lines of lint, but probably much fancier), and then “fixed” all of the source code statements flagged by the checker.

One of these “fixes” had the effect of zeroing out randomized data used in key generation (the automated source checker flagged the statement as referencing an uninitialized variable).
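The effect of that “fix” is easy to model: with the general-purpose entropy mixing gone, the process ID was essentially the only seed material left, and a PID is at most 15 bits on a typical Linux of that era. A toy model of brute-forcing the resulting keyspace — this is not OpenSSL’s actual code, just an illustration of how small 32,768 possibilities are:

```python
import hashlib

def toy_keygen(pid: int) -> bytes:
    """Toy model: the 'random' key depends only on the process ID."""
    return hashlib.sha256(b"debian-openssl:" + str(pid).encode()).digest()

def recover_pid(key: bytes) -> int:
    """Enumerate every possible PID (0..32767) until the key matches."""
    for pid in range(32768):
        if toy_keygen(pid) == key:
            return pid
    raise ValueError("not a PID-seeded key")
```

A “victim” key generated this way falls to exhaustive search in well under a second, which is why every key generated on an affected Debian system had to be treated as compromised.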

NSA has crossed several lines to become a criminal organization — but when it comes to infosec technology, they are the best in the world. I’m sure that their inserted backdoors are clever and subtle. The intention of an NSA backdoor is to enable NSA to spy: the Debian bug enabled EVERYONE to spy.

knuth September 6, 2013 10:54 AM

Until the recent news I was just sceptical about TrueCrypt and said to myself, “nah, it’s just conspiracy theories.” But with the latest news I agree with Jacob. TrueCrypt is not to be trusted. Would you trust someone who won’t even show his/her/their identity? Who has never made their presence known?
A team that makes a program that has no rival in its features. No other program has plausible deniability; no other program has so many features (keyfiles, disk encryption, etc.) for free, for two different OSes, with a very user-friendly GUI too. Too damn good to be good, I say…

Enigmatic September 6, 2013 11:04 AM

@Jacob

Following your comment, I would like to mention a few other facts about Truecrypt which raise eyebrows.

If we look at the “company structure” of Truecrypt, we discover an international operation based within the Czech Republic as well as within the USA. Also, we find that there is a “non-profit” element as well as a “profit” element involved.

It is a fact that Truecrypt has filed trademarks in the Czech Republic as well as in the United States.

Czech Republic:

http://tm.kurzy.cz/truecrypt-developers-association-lc/truecrypt-p439088z290125u.htm

USA (federal trademark):

http://www.trademarkia.com/truecrypt-78860644.html

http://www.trademarkia.com/company-truecrypt-developers-association-lc-3533880-page-1-2

From the application in the Czech Republic we can see that the name “David Tesařík” is connected with it.

If we look at the company structure, we find that there are two entities, both registered in Nevada, via an “anonymous” registration company.

There is first the “Truecrypt Foundation” which is a “Domestic non-profit corporation”:

http://nvsos.gov/sosentitysearch/CorpDetails.aspx?lx8nvq=djRu2RWGpIESdKlMBbSrDw%253d%253d&nt7=0

Then there is the “Truecrypt Developers Association, LC” which actually is a “Domestic Limited-Liability Company”:

http://nvsos.gov/sosentitysearch/CorpDetails.aspx?lx8nvq=VP9XIL826QISlsINaYdB5g%253d%253d

The company details at the website of the Nevada Secretary of State reveal another name, which I haven’t seen mentioned anywhere else yet: “Ondrej Tesarik”, and this person heads both entities.

From these facts alone, a number of questions should be raised – unless of course one prefers not to ask any questions, as this could diminish the trust in this wonderful and versatile product, which is being offered for free.

I would raise for example the following questions, and there are certainly many more:

  1. Why do citizens of the Czech Republic feel the need to create a non-profit as well as a for-profit organization in the USA, which as a consequence also means that they have to abide by US laws? After all, the USA is not exactly a “crypto-friendly” place, and this was already known in 2003/2004, when Truecrypt started to appear on the “market.”
  2. What exactly is their “profit” here? After all, they are developing an incredibly complex program for free. It has been mentioned in various discussions about Truecrypt in the past that other companies employ large teams to develop similar products.
  3. The people behind Truecrypt need administrative backup to maintain companies and to file trademarks (which of course also costs money). Where does the “backup” and the money come from?
  4. Is the Truecrypt story simply too good to be true?

In discussions, you often find fervent defenders of Truecrypt. However, I think those people also need to ask themselves the following question:

Would the US-government leave a US-company (yes, it’s a US-company!) like Truecrypt in “peace” and do nothing while they develop “uncrackable” encryption?

Would they really…?

jonathon September 6, 2013 11:16 AM

@Mike

Calling the US the good guys means that you are an active supporter of terrorist activity.

Or did you really not know that the United States was:
* The first country to be convicted of being a terrorist state by the United Nations;
* The first country to be convicted of being a terrorist state by the United Nations twice;
* The first country to be charged with being a terrorist state, three times, by the United Nations;

Dirk Praet September 6, 2013 11:29 AM

There must be some kind of way out of here, said the joker to the thief
There’s too much confusion, I can’t get no relief
Businessmen, they drink my wine, plowmen dig my earth
None of them along the line, know what any of it is worth

No reason to get excited, the thief, he kindly spoke
There are many here among us who feel that life is but a joke
But you and I, we’ve been through that, and this is not our fate
So let us not talk falsely now, the hour is getting late

All along the watchtower, princes kept the view
While all the women came and went, barefoot servants, too
Outside in the distance a wildcat did growl
Two riders were approaching, the wind began to howl

Bob Dylan

Bluetooth September 6, 2013 11:29 AM

Skeptical • September 6, 2013 9:54 AM


This actually harms national security – and not in the dubious way that “national security” is paraded out whenever a leak occurs.

The NSA (supposedly) did not even know which documents Snowden had taken with him (the NSA seems to “audit” everyone except themselves), and would not have known until Snowden went public.

For this reason it could be argued that Snowden did the NSA a great favor: they obviously need to do something to secure their own document management.

Figureitout September 6, 2013 11:33 AM

To everyone calling out Bruce’s opsec: I and everyone else would probably get a good laugh at your attempts at true opsec for no more than a week. You would give up, I guarantee it. I don’t envy his situation at all, and most anyone who has had to do it for real doesn’t like to talk about it or relive it.

And it’s goddamn pathetic that civilians are having to start thinking with this brain-numbing paranoia that will make everyone crazy and anti-social. In 2013, no one can hand me a piece of hardware and convince me there aren’t machine-code backdoors or hidden commands.

You don’t even have a secure computer in the first place to program the next piece of infected hardware. Infected, all of it.

Skeptical September 6, 2013 12:01 PM

Bruce, thanks for taking part in this thread, and for your posts in general. Sites like this, where an expert provides facts and insight in a way intended to reach the non-expert, and where the discussion proceeds largely free of vitriol, are true gems. They’re the internet at its best.

re: over-classification and government refusal of a “meta-discussion”

Both fair points in general.

But specifically with respect to the Bullrun material, I have a serious doubt. The Bullrun material was not perfunctorily stamped Top Secret.

The briefing material, itself intended for a small circle of persons, stresses that even disclosure of the very FACT of the capabilities would compromise the program’s effectiveness. The program is described as among the most fragile, and most important signals intelligence operation in existence. Procedures are given for obscuring the program’s existence when information collected by the program is passed on to other groups, including those with TS clearance.

This doesn’t strike me as run of the mill, typical “when in doubt, classify” language.

It also seems clear – but maybe I’m missing something – why this program would be so fragile.

For any operation, there will be some frontier at which opsec and other capabilities must be traded. If a terrorist group wants to coordinate an attack in the US, but not use any telecommunications, their opsec will be enhanced, but their capability will be degraded. If a terrorist group wants to mount an attack, but wants to avoid any rehearsals, their opsec will be enhanced, but their capability will be degraded.

Where and how to make that tradeoff is a judgment call, and it’s tough for me to discern how certain groups are making those calls. However, if a program is briefed like Bullrun is, then it seems likely to me that some important groups were making judgment calls that rendered them vulnerable to Bullrun collection. And I’d suspect that at least some of those groups are now changing their calculus.

Anyway – I don’t mean to impugn your intentions (have to say I have some doubts about Greenwald at this point, but maybe I’m wrong) with any of this. But the warnings on the Bullrun briefing – to me – set it apart from many other TS material. I have misgivings about it being reported at all, and the idea of this material relying on the security of the NY Times, ProPublica, The Guardian, and their journalists (many skilled and experienced, who I read and respect) frankly is slightly terrifying.

Orgdoctor September 6, 2013 12:06 PM

I just finished reading today’s NY Times front page article on Snowden’s latest revelations about the NSA’s massive and successful effort to destroy the concept of privacy in this country. I was upset at the earlier revelations, but this one has put me over the top. To have met with a very public and official rejection of their efforts to gain authorization to insert a backdoor into the then-current encryption scheme (DES, as I remember), the NSA then proceeded to achieve the same ends through covert, extra-legal, and in many cases, coercive means. And they call Edward Snowden a “rogue” employee? What do you call a “rogue” agency?

I cannot believe that people are not taking to the streets over this. Is there no one left who has read “1984”? How long will it be before sufficient money is proffered to our political “leaders” to gain access to these communications by advertisers, credit agencies, private investigators, insurance companies, and … you can fill in the others. Moreover, has there ever been a case where the data a government has acquired on its citizens has not been used to increase that government’s control over its citizens? I think not.

Wake up, people. This is the “red line” that has been crossed in this country — to hell with Syria. If we don’t rise up to put a halt to this NOW, we’ll remember this as the time when we really lost our freedom by acquiescing to the emergence of a surveillance state.

unimportant September 6, 2013 12:10 PM

@random832

There is a method for proving/disproving TrueCrypt binaries of being built from the published source code: Just use the MS disassembler which comes with Visual Studio and compare the signed disassembled binary code with a self-compiled version. Ideally they should be identical.

Enigmatic September 6, 2013 12:31 PM

The comment about Truecrypt I tried to leave is now also available here:

p a s t e b i n (dot) com/7LNQUsrA

In the comment, I ask some inconvenient questions and examine the structure of the “Truecrypt-company.”

absinth September 6, 2013 12:33 PM

So now here is a “solution,” at least as far as cell phones go… a supposedly unbreakable cell phone (it uses proprietary cryptography) called the Quasar IV.

“World’s most secure smartphone” looks like snake oil, experts say

The funny thing in this story? Last year, the company published an ad in the New York Times, containing a message encrypted using their proprietary system. Now they claim that the fact that no one has decrypted this message is proof of the Quasar IV’s effectiveness.

The problem with this is that that long string of characters may not even contain any real message.

Gordon Davisson September 6, 2013 12:34 PM

I’m surprised there isn’t more discussion about resurrecting the TCPCrypt project (opportunistic encryption at the TCP layer), since it seems like a good option to make ubiquitous network snooping less useful. The main criticism I’ve seen is that it doesn’t necessarily authenticate the server, making it vulnerable to MitM attack, but it should help a great deal against passive snooping. It also has the advantage that it sets up encryption first, then (optionally) authenticates the other end of the connection — meaning that an attacker has to decide whether to try an active MitM before knowing if it’ll be detected.
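The “encrypt first, authenticate later” ordering described above can be sketched with an unauthenticated Diffie-Hellman exchange followed by an optional check of a transcript hash. This is a toy model of the idea, not tcpcrypt’s actual wire protocol, and the group parameters are deliberately tiny and unsafe, for illustration only:

```python
import hashlib
import secrets

# Toy DH group -- illustration only, NOT safe parameters for real use.
P = 0xFFFFFFFFFFFFFFC5
G = 5

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

# 1. Encrypt first: both ends do anonymous DH and derive a session key with
#    no idea who the peer is. A purely passive snooper learns nothing useful.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
a_secret = pow(b_pub, a_priv, P)
b_secret = pow(a_pub, b_priv, P)
assert a_secret == b_secret

# 2. Authenticate later (optionally): each side hashes its view of the
#    handshake into a session ID; comparing or signing it afterwards exposes
#    a MitM, whose presence makes the two ends' transcripts disagree.
session_id = hashlib.sha256(f"{a_pub}:{b_pub}:{a_secret}".encode()).hexdigest()
```

The design point Gordon mentions falls out of the ordering: an active attacker must commit to mounting the MitM during step 1, before knowing whether step 2 will ever happen and catch them.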

speedie September 6, 2013 12:47 PM

Didn’t read the piece yet; my comment was regarding your answers in this thread.
I remember you discarding the possibility of a backdoor in SELinux; do you still find it implausible? My comment about your OS choice was more of a troll, but I think the SELinux question is important, since Red Hat is the de facto Linux standard in corporate business. Should we trust a company with such ties to the NSA?

Squyd September 6, 2013 12:51 PM

@Bruce,

So, Bruce, it seems from much of what has been published about these recent revelations, and comments on this thread, we can summarize with the following observations and conclusions…

A. For users concerned about securing their computers against theft, TrueCrypt is likely the best choice (over PGP Whole Disk Encryption). TrueCrypt does NOT protect against network intrusion.

B. For encrypting backups, files uploaded to cloud services, and email attachments, compiling the downloaded source code yourself (where available) may offer more confidence than the pre-compiled installer. If the end user can’t or won’t compile the source code himself, then use the pre-compiled installer, but realize that a backdoor (which might be discovered if it were in the source code) cannot be checked for if it’s in the installer.

C. Free, open-source software such as GPG and TrueCrypt may offer more confidence than PGP and PGP Whole Disk Encryption, because it’s more likely that large companies are cooperating with the N.S.A., whereas the developers of free software have less to lose and therefore may be less likely to cooperate. This also applies to anti-spyware, anti-virus, and password management software. PGP was recently re-branded “Symantec Encryption.” Remember when Peter Norton sold Norton Anti-Virus to Symantec back in 1990?

[NOTE: This also implies that free software developers can be more easily “disappeared” or otherwise harassed by an organization with a multi-billion-dollar budget. Also, governments may be able to routinely, covertly substitute a compiled installer (supposedly compiled from published source code) with one into which they have inserted a backdoor.]

Such developers should also publish the hash of the installer which THEY compile themselves. If they refuse…

D. Symmetric/conventional encryption with keys of sufficient length (at least 24 characters) AND complexity (lower case, upper case, numbers, special characters) is preferable to asymmetric (public-key) encryption, but if users MUST use public keys, make sure they are at least 4096 bits… with a passphrase of… wait for it… at least 24 characters with complexity.

E. TLS/SSL/SSH/HTTPS are probably compromised by the N.S.A., and any protection they offer is ONLY viable against small-to-medium B2B corporate espionage and relatively inexperienced hackers; not against governments with multi-billion-dollar budgets.

F. Any OS whose source code is not available is likely to be from a very large company cooperating with a government to insert vulnerabilities and backdoors. OSes are expensive to produce, and thus only a large company is able to produce (and market) such a product.

The solution here is to have a FREE OS which is built with security in mind by an open-source collaborative and which can EASILY (the key word is EASILY) be installed over the factory OS on a computer. However, it would take years to develop, and it would be VERY difficult to identify and catch any fellow contributor who was secretly working for a government to actually ADD vulnerabilities to the free OS.

G. As far as vulnerabilities and backdoors designed and installed at the hardware level, it’s not likely that end users can EASILY detect and eliminate them. I suppose a diagnostic program could be developed that would deep-scan and test EVERY component of the hardware to detect behavioral capabilities and such, but ONLY if it were installed on a computer running the free OS, and ONLY if that free OS did not itself have vulnerabilities, could users be reasonably confident that the hardware was not performing odd, unknown actions behind the user’s back. Until an end user can build his own chips, disks, and all other components of a computer, and assemble them in less time and with less effort and money than it takes to go to a big-box store or order one from the web, this is NOT something an end user can do very much about.

Thoughts, anyone?
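Point D’s length-and-complexity advice can be made quantitative: a passphrase of n characters drawn uniformly from a pool of c symbols carries about n·log2(c) bits of entropy. A quick sketch (the pool sizes below are illustrative assumptions):

```python
import math

def passphrase_bits(length: int, pool: int) -> float:
    """Entropy in bits of a passphrase of `length` characters drawn
    uniformly at random from a `pool`-symbol alphabet: length * log2(pool)."""
    return length * math.log2(pool)

# 24 chars over the full printable-ASCII pool (~94 symbols) gives ~157 bits,
# comfortably past the strength of a 128-bit symmetric key.
full = passphrase_bits(24, 94)

# The same length over lowercase letters only (26 symbols) drops to ~113 bits.
weak = passphrase_bits(24, 26)
```

Note the uniformity assumption is doing all the work: a human-chosen 24-character phrase of dictionary words carries far fewer bits than this formula suggests.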

MarkH September 6, 2013 1:01 PM

@ATN:

I would not be surprised to see non-ethernet frames going over the Internet – frames never displayed by any Ethernet protocol analyzers.

Where the physical transport is Ethernet, any content that doesn’t conform to the Ethernet PHY format will in general be lost, because usually it must be relayed by various pieces of equipment along the way, originating from a variety of vendors in several countries.

Unless NSA has subverted almost all of this equipment, such a technique would fail. For example, routers must remap packet addresses; this address translation is generally done in software (in many cases, open source software).

Fiddling the Ethernet hardware on a router wouldn’t solve the packet forwarding problem, because the Ethernet module does not (in general) contact the “other side” of the router; and even if it did, it wouldn’t know what address to map, this being a question of router configuration managed by the software.

There’s no question that network devices can be “fiddled” so as to leak all sorts of information (undoubtedly, they leak plenty of information purely by accident). However, I don’t see non-standard local network frames as a very practical way to get internal information out on the public internet.

unimportant September 6, 2013 1:14 PM

@Alan Kaminsky

I see your point 😉 However, if you suspect the MS disassembler of producing false output on certain patterns, then the disassembler itself would also be recognizably huge. And a disassembly of the disassembler would also reveal the suspicious code patterns which are specially treated.

Bryan September 6, 2013 1:56 PM

Hi Bruce

Since you’ve mentioned you’ve accessed some of the Snowden materials, you might have received an NSL, or might receive one in the future.

Have you?

–Bryan

P.S. Someone might ask again in the future, and if you then say you cannot disclose any such thing, then we’ll know.

Quartermain September 6, 2013 2:35 PM

@unimportant

@Alan Kaminsky

I see your point 😉 However, if you suspect the MS disassembler of producing false output on certain patterns, then the disassembler itself would also be recognizably huge.

Maybe it is not the disassembler but the Win OS that filters out “certain specially marked” patterns.

“Just kidding” of course…

Charlemagne September 6, 2013 3:28 PM

Wayne • September 6, 2013 3:01 AM


Regarding the command line options in Password Safe, what are the command line arguments?

I would also like to know this…but maybe it is Top Secret?

Nick P September 6, 2013 3:47 PM

@ Wayne and Charlemagne

re password safe command line

I was curious too. A quick search turned up nothing on the project web site, so I downloaded the source code zip and looked in there. Here are some commands I found in the help folder (copied verbatim):

“Invoking Password Safe with no arguments will cause the application to prompt you for the combination of the last database that was opened, or for the combination of a new database if none was previously opened on your machine (e.g., the first time you use Password Safe). It is, however, possible to modify this by invoking Password Safe as follows:

pwsafe database
This will open the specified database file, instead of the last one opened. If just a filename is given, without a path, it will be searched for in the directory in which the application was invoked. Note that if the filename and/or path has spaces, it should be enclosed in double quotes.
pwsafe -r [database]
This will open the specified database in read-only mode. If a database is not specified, then the application will prompt the user for a database, which will be opened in read-only mode.
pwsafe -e filename
This will prompt the user for a passphrase, and encrypt the file with a key derived from the passphrase. Note: The file can be any file. The encrypted file will have the same name as the original file, with ".PSF" appended to it.
pwsafe -d filename
This will prompt the user for a passphrase, and decrypt the file with a key derived from the passphrase. Note: This will work only on files that were encrypted by invoking pwsafe with the '-e' option (see above).
pwsafe -c
This will start the application closed, that is, with no database, and without the initial opening dialog (To access a database, use the File menu).
pwsafe -s [database]
This will start the application "silently", that is, minimized and with no database (unless one is specified). When the application is restored, the user is presented with the opening dialog box (This option is meant for starting the application upon login, via a shortcut in the user's Startup folder). Note: This implicitly puts the application in the system tray.
pwsafe -m
This is the same as the '-c' option, with the addition that the application is started as minimized.

In addition, the following options are accepted and may be useful if you wish to share the same preferences across several machines, for example, when running Password Safe from a disk-on-key.

-u username
This will cause the application to read and write preferences under the specified username, instead of under the login name.
-h hostname
This will cause the application to read and write preferences under the specified hostname, instead of under the machine's name.
-g config_file
This will cause the specified file to be used for loading and storing preferences, instead of the default pwsafe.cfg. If just a filename is given, without a path, it will be searched for in the directory in which the application was invoked. Note that if the filename and/or path has spaces, it should be enclosed in double quotes.

Finally, there are some special command line flags beginning with “–“. They are:

--novalidate
This will prevent Password Safe validating databases automatically when they are opened. Some validation is always required e.g. uniqueness of the entry's ID and the group/title/username combination.
--testdump
This allows testers to verify the mini-dump production when Password Safe has a problem, to help developers resolve the issue.
--cetreeview
A new feature allows two entries to be selected and compared, either via the Edit menu or by right-clicking on either. Selecting more than one entry is natively supported in the List view but not in the Tree view. This flag enables "Compare Entries" in the Tree view via an extra dialog to select the "other" entry. Supporting multiple selection in the Tree view is under development. Once supported, this flag will be ignored.

END OF COPY/PASTE

I didn’t do a source audit so there might be more commands. I would guess that “pwsafe -e filename” is the method Bruce referred to when he said he uses PWSafe to encrypt files.

DownInIt September 6, 2013 3:56 PM

@David

I’m not asking about specific techniques, or a how-to; rather, I think the biggest block to telling the world is that those who know have a hard time answering the question, “How do I even start?”

One example is the collaboration between Aaron Swartz and the New Yorker to create Strongbox, a server application intended to let news organizations and others set up an online drop box for sources.

Code is available here: http://deaddrop.github.io/

Jacob September 6, 2013 4:01 PM

@Nick P :
The -e command is well documented on the project site. I also used it in the past (limited to ~1+ GB file size, at least under W32, due to in-mem encryption (i.e. not disk-based temp file method)).
Bruce mentioned that he used an old version with blowfish. The old version might have other (undocumented) commands. Note that prior to PWSafe V.3 the program used blowfish, and since V.3 the maintainer changed the algorithm to twofish.

Stuart R September 6, 2013 4:19 PM

I’ve read all the comments here… and I’m just saddened by the whole thing. I’m so very disappointed in my own (UK) government and that of the USA.

We live in interesting times.

Nicholas Weininger September 6, 2013 4:50 PM

Bruce, you say you’ve seen a lot of the Snowden documents, and presumably that means you have or had full unredacted copies of them, and you’ve alluded to controversies over how much to release. What leads you not to release those documents you have/had in full? Why not give others a chance to pore over the details and glean things you might have missed about how to improve security? Is it simply that you feel bound by an agreement with the Guardian and/or Greenwald? If so, are you pushing back on the terms of that agreement or questioning its justification? Or do you believe, for example, that unredacted releases might jeopardize Snowden himself or his ability to release further newsworthy information?

Honest questions all, not intended as criticisms. What you’ve done so far is already incredibly important and helpful. It just sure looks to me like, given what we know about what the NSA has done, we ought to question the usual justifications for not revealing these sort of documents in full whenever possible.

Jonathan Morton September 6, 2013 5:03 PM

Personally, I’m not so worried about BIOS- and firmware-resident viruses in my PCs. I am mildly concerned about my ADSL modem’s firmware, but I’ve been meaning to switch it to CeroWRT anyway, and since it operates in bridge mode with the wireless interface disabled, it’s a low priority in terms of attack surface.

If I was sufficiently paranoid to worry about such things, then I would most likely rely on my Acorn RiscPC for all supersecret, air-gapped stuff. That is the newest machine I own that has never been connected to the Internet (there is no TCP/IP stack installed) or, indeed, by cable to any other computer – at least since it left the factory in 1994. It is even capable of booting to a fully functional desktop entirely from ROM, but that is unnecessary since the hard disk has never been physically removed from the case. The hard disk means that I can still use the old software installed on it, which is just as useful now as it was in the mid-1990s; a half-decent word processor is there, for example.

That doesn’t mean that I would have to forego encryption, either. It has a C compiler installed, which is even (nearly) ANSI-compliant. Crypto algorithms tend to be small enough that I could manually type them in, referring to a conventionally trustworthy source on another machine, and keeping an eye out for obvious traps as I went. I could verify the algorithms by transferring files out by floppy disk – I still have a few good ones left – and decrypting them using an existing implementation on a normal machine. The 30MHz ARM CPU would be pretty slow for modern crypto – on the order of a really good 486 – but it would work.

Entropy for an RNG would be problematic since the hardware is very simple and deterministic, but the old waggle-the-mouse and rattle-the-keyboard tricks should allow generating that; another approach would be to scan an image using the handheld scanner which happens to be installed, and use the low-order bits of that as a source of noise.
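
For what it's worth, the scanner idea is easy to sketch. Here's a minimal Python illustration, assuming the raw 8-bit sample bytes are already in hand; the von Neumann step removes simple bias from the collected bit stream. This is a sketch of the principle, not a vetted entropy source:

```python
# Sketch: harvest entropy from the low-order bits of raw scanner samples.

def lsb_bits(samples: bytes):
    """Yield the least-significant bit of each sample byte."""
    for b in samples:
        yield b & 1

def von_neumann(bits):
    """Debias a bit stream: pair 01 -> 0, pair 10 -> 1, 00/11 discarded."""
    it = iter(bits)
    for a, b in zip(it, it):
        if a != b:
            yield a

def harvest(samples: bytes) -> bytes:
    """Pack the debiased bits into bytes (dropping any partial byte)."""
    out, acc, n = bytearray(), 0, 0
    for bit in von_neumann(lsb_bits(samples)):
        acc = (acc << 1) | bit
        n += 1
        if n == 8:
            out.append(acc)
            acc, n = 0, 0
    return bytes(out)
```

In practice you'd feed the result into a hash-based extractor rather than using it raw, since scanner noise is unlikely to be uniformly random even after debiasing.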

But, since I’m not that paranoid, my more practical air-gap solution would involve a redundant m/board-CPU combination that has been sitting around, unpowered, for a few years. Build a fresh machine around that, using old but serviceable hard disks (a trio of 120GB units, for example) in RAID for reliability, perhaps even harden it against physical compromise by encrypting everything except /boot. Disable the Ethernet ports in the BIOS and password-protect that too. Install Debian Linux straight from the DVDs, no Internet connection required – and verify the DVDs using the public-key signatures before I start. That gives me a thoroughly modern AMD64 machine with all the software I could want, and I think a USB drive is good enough to get the data off it for wider distribution – or else I could go old-school again and use floppies or Zip disks.

This is, however, not a very man-portable solution. If I’m paranoid enough to want an air-gapped machine, I’m paranoid enough to want an air-gapped laptop to take with me on the run. That would be somewhat more difficult to arrange, but not impossible. I have a couple of old PowerBooks that could quite easily be adapted for the purpose, for example by physically removing the Airport cards; for one of them, I can also still obtain a pair of new batteries which would give it a respectable 5-hour battery life.

As a side bonus, their USB 1.1 hardware might involve a slightly smaller attack surface than modern USB 2 or 3 hardware. There’s just one major problem – their built-in Firewire interfaces might permit direct access to RAM, and are thus potentially a major disaster area in the event of physical compromise. The best defence, without physically removing the Firewire interfaces from the machine, is probably to never leave it switched on or in sleep under circumstances where physical compromise is likely (eg. while in an airport).

Ultimately, what you do to secure yourself depends on how paranoid you are, how paranoid you need to be, and precisely what threats you need to defend against. In some cases, reading your communications or files might be an acceptable breach, but manipulating them would not be – so you could work on an air-gapped machine without whole-disk encrypting it, and attach GPG signatures to your e-mails rather than encrypting them completely. In other cases where reading files and/or communications would be unacceptable, the more hardcore defences start to come into play – and if you also need to protect your communications’ metadata, really exotic means of communication are currently required.

John Gilmore September 6, 2013 5:35 PM

Regarding leaking information in nonstandard Ethernet packets:

Tsutomu Shimomura built a patch for an old Sun operating system many years ago, which leaked information using the trailing bytes of short IP packets. At the time he was collaborating with NSA some of the time, so they undoubtedly know of this technique.

See, IP packets such as acknowledgements or TCP open packets are often shorter than the minimum length of Ethernet packets. IP packets can be as short as 20 bytes. When transmitted, the IP packet is padded out to the minimum Ethernet packet size of 64 bytes (46 bytes of payload).

That padding is supposed to be zeroes on transmission (see RFC 894), but under the standard Jon Postel rules of “Be conservative in what you send, and liberal in what you accept”, recipients don’t check to see if this padding is actually zeroes. So, subverted sending sites are free to put anything they want in there — like passwords, private keys or random number seeds useful to NSA.

Now, you might expect that as soon as such a nonzero-padded Ethernet frame was received at an IP router, the frame itself would be discarded, and only the shorter IP packet contained within it would be forwarded on toward the destination. This would cause the loss of this information, since if the router isn’t subverted, the new frame containing that IP packet would be padded with zeroes. For a large number of routers, this expectation would be wrong. High speed routers tend to route frames rather than IP packets; for example, the first packet to a destination sometimes causes a software lookup, then the hardware is configured to automatically forward the frame for each subsequent packet. Low speed software-based routers may reuse the received packet buffer for the transmitted packet. The padding bytes could be copied over by “accident” or by intent. In your own network, check it yourself by patching the padding values in your kernel’s short packets, and seeing how far along your networks the padding persists (perhaps in your network it gets all the way to the destination of the IP packet).

For NSA’s purposes the extra data only has to get as far as one of the places that NSA has subverted — say, for example, AT&T — where they can snarf up the secrets from the padding data.
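
The mechanics are simple enough to demonstrate without touching a network. Here's an illustrative Python sketch of the padding trick (frame sizes per RFC 894; the helper names are invented for illustration):

```python
# Sketch: pad a short IP packet up to the 46-byte minimum Ethernet
# payload, but fill the pad with secret bytes instead of zeroes.
# A compliant receiver strips the pad without ever checking it.

ETH_MIN_PAYLOAD = 46  # 64-byte minimum frame, minus 14-byte header and 4-byte FCS

def build_payload(ip_packet: bytes, leak: bytes = b"") -> bytes:
    """Pad ip_packet to the Ethernet minimum, smuggling `leak` in the pad."""
    pad_len = max(0, ETH_MIN_PAYLOAD - len(ip_packet))
    pad = (leak + bytes(pad_len))[:pad_len]  # leaked bytes first, zeroes after
    return ip_packet + pad

def extract_pad(payload: bytes, ip_len: int) -> bytes:
    """What a subverted tap sees: everything past the declared IP length."""
    return payload[ip_len:]
```

A 20-byte packet leaves 26 bytes of pad per frame — small, but a persistent implant could leak a private key across a handful of ACKs.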

MarkH September 6, 2013 5:46 PM

@John Gilmore:

Thanks for passing along Shimomura’s work.

Note that that particular method of covertly sending data would not meet ATN’s criterion of invisibility to protocol analyzers. No doubt, there are Ethernet sniffers that would miss such “piggybacking,” but it wouldn’t be difficult to find or construct systems that would reveal such unusual padding bytes.

nycman September 6, 2013 5:55 PM

So in a world where one can’t trust many of these products or implementations, and there’s no easy way to verify the secure operation of these, perhaps we should fall back to the old adage: The enemy of my enemy is…. So, if your adversary is the NSA and not, say, the Russian gov, use Russian encryption and cloud services. Sure, the Russians may be able to read your stuff, but if they are not your adversary, they won’t care about you. And they certainly won’t share anything with the NSA. And vice versa, if Russia is your adversary, go American for as much as you can.

I’m seeing a lot of paranoia and hysteria in these posts. But the NSA is not an adversary to anyone on this board (OK, maybe Bruce, after the latest articles). And even if the NSA identifies you as some sort of adversary, what can they do? Send a reaper drone to your suburban house on Wallingford lane, Ohio? Mere harassment at the airports is only for those who’ve publicly embarrassed the USG/NSA or some high level functionary. An NSA “adversary” would be very lucky to only get that treatment.

Perhaps this will come soon in the disclosures…what actions have the NSA and USG actually taken as a result of this broad snooping? How does all this data get processed into actionable intelligence, what agencies is it shared with, and what actions does the gov take? Perhaps there are Americans being disappeared from American soil, secretly arrested, tried in secret courts with secret evidence. Because that’s what will have to happen for the NSA to protect its tradecraft. And even then, they can only get away with so much before they’re noticed. The NSA does not have a history of sharing much with other gov agencies. Heck, the president reports directly to them, and their priorities are at the national and international level in scope. They do not even care about big financial frauds or “mass murderers” (differentiated from “terrorists” because their weapon of choice is a gun rather than exploding contraptions). They are certainly not gonna work with the FBI and risk disclosing their capabilities because you dropped by an Occupy demonstration, downloaded some kiddie porn, cheat on your taxes, or show anarchist tendencies.

travis September 6, 2013 6:33 PM

Some people suspect the NSA is running man-in-the-middle attacks, but one thing I haven’t seen mentioned much is that these aren’t very covert. Projects like the EFF SSL Observatory can help us catch these attacks if they’re really happening. We could use similar projects for SSH, DNS, software distribution mechanisms (like apt-get), WiFi logins, Kerberos/Active Directory-type systems, VOIP, VPNs, …

It’s also worth mentioning that the NLnet Foundation has a history of funding network security software.

Gordon Davisson September 6, 2013 6:58 PM

@travis: The impression I got was that they were running the MitM attacks using the real server certificate (with private keys stolen from the real server), which means they’re very hard to detect.

Nick P September 6, 2013 7:48 PM

@ Jacob

“The -e command is well documented on the project site. ”

Where? I’m not saying it isn’t there. I checked pwsafe.org first. The “Quickstart Guide” was GUI stuff and there’s no manual pinned in “discussion forum.” I also did a search for “command line” and “command” with results restricted to sites pwsafe.org and passwordsafe.sourceforge.net [to see if I missed the page] with no useful results. The history pages occasionally mentioned adding or fixing a command.

Normally, I’d then Google 3rd party sites for the info. Bruce’s claim about “undocumented” got me curious (as zero doc’s so far) so I decided to grab the source. Downloaded it, saw a “help” folder, and it had the goods in a manual-grade format. If it’s on the site, they should link it to Quickstart guide or place it in another spot that’s obvious, at least to Google. I’ve rarely had this problem looking for CLI doc’s.

Contrast this with KeePass’s web site you click Help on the left and “Command Line options” is on the very next page in the navigation menu. I found it in under 10 seconds no joke.

“Bruce mentioned that he used an old version with blowfish. The old version might have other (undocumented) commands. Note that prior to PWSafe V.3 the program used blowfish, and since V.3 the maintainer changed the algorithm to twofish.”

The -e option is in the 2.x versions of password safe that used Blowfish. It’s sufficient for encrypting files with blowfish via a password. I guess it depends on what he meant by “undocumented.” He might be using another command but -e does what he described. And, as you pointed out, it uses the stronger Twofish in current versions.

E.V. September 6, 2013 8:09 PM

@nycman: We already know they were sharing information with the DEA. What makes you so sure they aren’t sharing with other domestic agencies?

What really troubles me is what future uses this massive hive of data could be put to, especially given the tendency of the country toward moral panics. The Red Scare would have been ten times worse if McCarthy could have just keyword searched everyone’s mail.

0design September 6, 2013 8:24 PM

Does the NSA actively censor, or hide from the media and public, details of illegal hacker activity, on the grounds that revealing exploits and techniques used illegally by hackers against citizens would also reveal technology deemed a state secret?

Imagine, as seems likely, that the same technology is reverse engineered or used by hackers or identity thieves against innocent citizens. If NBC does a news story telling people they aren’t safe from hackers because hackers know about a back door built by the NSA, then NBC is in violation of revealing state secrets. If NBC also mentions how to protect yourself against the exploits those hackers are using, then it is also giving aid and comfort to the enemy and is guilty of treason.

On the other hand, if the government does not censor or protect illegal hackers using the same exploits, then doesn’t the use of such techniques by hackers put the knowledge and details of the techniques in the public domain? If hacker X is convicted of using NSA exploit #32, doesn’t the public need to be protected and told, by law? That would let everybody protect themselves from hacker X’s followers, but it would also deny the NSA reliable use of that exploit, since it would be fixed.

Has one of these scenarios already happened?

Odesign September 6, 2013 8:39 PM

@ nycman • September 6, 2013 5:55 PM

You’re right, sort of. There are enough laws on the books that they’ll find one you’ve broken and prosecute you for that, not for the reason they discovered by breaking your security. The point is they keep the exploits secret and catch you and convict you of an unrelated crime. Think of all the times police officers ignore crimes like speeding, or don’t look too hard for evidence of financial wrongdoing, because the perpetrator didn’t get flagged for attention. That’s probably why they keep the do-not-fly lists secret and people wind up on them “accidentally”. Accidentally = we cannot give a good reason without revealing we broke your security, so we’ll exaggerate and say “oops, it was this other email you posted publicly that we misunderstood” that put you on the do-not-fly list.

The purpose of a red list is that it stays secret: you target the people on it without letting them know they’re on it or how they got on it.

Sean September 6, 2013 9:29 PM

The Maginot line is easily defeated by Blitzkrieg tactics.

All you have to do is attack the end points, the pipeline can be made of Astronomicum-Unobtainium alloy wrapped with Niven molecular strand material and Fullerene armor plate, but there’s always some place where exposure occurs.

Then you could always just use http://xkcd.com/538/ techniques on the right people and you’re in like Flynn anywhere you want with a proper tap for that pipeline.

Rob September 6, 2013 11:57 PM

Once a long time back, I sent an email to you, Bruce, for clarification. In “Applied Cryptography” there is a section that mentions using multiple encryption algorithms together (http://en.wikipedia.org/wiki/Multiple_encryption). I don’t have your book available at the moment, but there was a slightly vague mention in there that there was potentially a weakness in stacking three algorithms together. I emailed you for clarification, but you essentially said that the weakness most likely does not lie in the symmetric encryption algorithm but in the implementation or perhaps what are now called side-channel attacks.

I never really figured out what precisely the weakness was… but if we, as engineers, are to try to prevent attacks on the general folk of the world… it seems that using multiple algorithms for both the symmetric portion as well as the asymmetric portion of encryption would be a decent idea.

The recent revelations point to the NSA wanting to attack the implementations rather than the actual algorithms themselves… but that does not prove that they have not cracked AES, DH key exchange, or whatever else we regularly hope protects us.

Perhaps Twofish and Serpent as well as elliptic and lattice based public key in serial is what is needed to prevent attacks.

It is true that the software will always be a particularly weak link. All software has bugs and exploits will always be found. But the core algorithms which are based on presumably difficult math are the real blockers to NSA or other state actors. Perhaps stacking algorithms is key to hoping to defeat them.

When AES was being chosen… I always felt it odd that they seemed to focus on speed of implementation vs. security. The Serpent team, as I remember, seemed to feel that speed was less important than security. Stacking algorithms could slow things down, but when security is the most important aspect of your communication, being able to run it on a 16-bit processor or a smart phone should perhaps be the least important design choice.

I feel the same about Tor vs. I2P. Low latency networks have a tragic flaw when we are looking at state-level adversaries. I think if it takes a week to send a safe email… that would be acceptable to many. Encryptions taking two minutes for a 100 MB file would also be acceptable I think, if not longer.

Cloudy Blue September 7, 2013 1:33 AM

I’m more than a bit surprised that most commenters’ reactions revolve about the NSA violating US citizens’ rights and the US constitution. It would seem more fitting to me to mention the Universal Declaration of Human Rights, for example article 12:

“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”

Discuss “arbitrary” 😉

dontknow September 7, 2013 5:15 AM

I am not into encryption technologies, but just one thought about the RSA code:
they admitted that they have been hacked/broken into.
What if that did not actually happen, but somebody wanted to signal that the generated security algorithms aren’t secure anymore because some agencies demanded the code, or parts of it?

Clive Robinson September 7, 2013 5:22 AM

@ Jonathan Morton,

Like you I have some very old (and some very modern) hardware that has never been connected to the internet, much of which I have built myself. The oldest personal computer is an Apple ][, plus a couple of Z80-based CP/M SBCs and several 6502 systems of various vintages (including modem chipsets that have 6502s embedded).

The reason I mention this is that it has a lot to do with air gaps and how you cross them safely.

I was developing cross-platform code on the Apple back in 1980, which was before the term “sneaker net” existed as a name, and as for “data diodes”, you’d have been carted off if you’d muttered such terms.

But they actually existed in practical reality in that I used RS232 serial lines to get working code stored on the Apple disks to the targets.

For my sins I still do quite a bit of development across serial lines, as it’s generally very easy to hook up and requires only two pads on a PCB. Importantly, it’s very easy to make data analysers using common microcontrollers, and there are plenty of hobbyist books with code for “bit banging” serial interfaces through to TCP/IP stacks.

Also in my “dead tree cave” of a library I have a number of Linux books which go back to the early days of Slackware and have CDs in the back with all the source code. These I know will run quite happily on those PC/104 industrial control cards [1] with a nice rugged ISA bus, to which you can connect old PC-ISA cards. The joy of the ISA bus is of course that it’s easy to develop your own hardware for, as you don’t need specialised controller chips, just a few TTL chips…

Crossing the air gap properly by reading off one terminal and typing into another is a pain that nobody wants to suffer, but using memory sticks, hard drives or even floppies is asking for trouble. Using a serial data diode with monitoring software you’ve built yourself allows you to do things safely without the grind of tippy-tapping for endless hours of pain.

As for the Apple ][, it’s now way too valuable for general use. Many years ago I purchased an Amstrad PPC640, an 8086-based battery-powered luggable with built-in modem. Likewise it is too valuable for general use, but its software is not fragile and sees almost daily use on other hardware, one of which is a 486 PC/104 card with dual floppy and quad serial port card. It also quite happily runs good old Turbo-Basic, Turbo-C and my hand-cut-and-crafted version of Small-C, likewise a version of Fig-Forth; it sees a lot of development work go through it still.

Likewise I’ve another 486 PC/104 card system; this has a hard disk and runs Consensis Unix (AT&T Sys 5 R4.2) and DOS Merge. It has the advantage of being able to run multiple copies of quite old DOS-based software that drives in-circuit emulators with serial connections, which you can’t do with MessDross on its own.

Whilst I’m not suggesting people replicate my development systems, I am suggesting you dig around and use stuff from pre-Clipper-chip days to build “air gap bridges”. Serial lines might be slow, but they are easily monitorable, and you can do hardware development with them using little more than a soldering iron and a volt meter.

[1]. For info on PC/104 SBC industrial systems have a look at, http://en.m.wikipedia.org/wiki/PC/104

Johann Gevers September 7, 2013 6:54 AM

@GhostIn(Your)Machine: We’re looking for people like you. We’re building a completely decentralized system for financial and legal transactions, with high security and privacy. If you’re interested, get in touch.

RonK September 7, 2013 8:32 AM

Been thinking what might be effective against the described “Tailored Access Operations” group. My suggestion is as follows:

  • Refactor the application into many independent modules (this step requires a bit of artistry — the modules cannot be so low-level that the interconnection of the modules has its own vulnerabilities independent of the functionality of the modules themselves, yet one needs to have a fairly large number of such modules).
  • Each module must have a well-defined API (this is good practice, anyway)
  • Have independent groups code at least two alternatives to every module
  • If you’re compiling the application yourself (recommended), the build system uses a source of reliable entropy to randomly select one out of every set of alternatives for each module.
  • For the more technically challenged who need mass-distributed binaries, an executable with every module linked in is generated, but a reliably random number generated upon first execution is used to determine the actual execution path with respect to choices between alternative modules.

For added security and (greater) possibility to detect attacks, duplicate the input to the executable and distribute the duplicate inputs to two independent versions running in separate VMs, and then compare the states of the VMs afterwards.

The work to do this is only a linear factor greater than the work to code the original, but it seems to me that the complexity of a TAO attack that cannot be detected by comparing VMs is increased by a much larger factor.
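
The build-time selection step might look something like this (a Python sketch; the module names are purely hypothetical):

```python
# Sketch: randomly select one implementation per module at build time,
# using OS-level entropy so the choice isn't predictable from the
# build script alone.
import secrets

# Hypothetical module alternatives, each coded by an independent team.
ALTERNATIVES = {
    "parser":  ["parser_team_a.c", "parser_team_b.c"],
    "crypto":  ["crypto_team_a.c", "crypto_team_b.c", "crypto_team_c.c"],
    "network": ["net_team_a.c", "net_team_b.c"],
}

def select_build(alternatives: dict) -> dict:
    """Pick one alternative per module with a CSPRNG."""
    return {mod: secrets.choice(impls) for mod, impls in alternatives.items()}

build = select_build(ALTERNATIVES)
```

With two alternatives per module and N modules, an attacker who has subverted one implementation of each module still only lands a complete compromise on 2^-N of the builds.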

Nick P September 7, 2013 9:03 AM

@ Rob

“When AES was being chosen… I always felt odd that they seemed to focus on speed of implementation vs. security. The Serpent team as I remember seemed to feel that speed was less important than security. Stacking algorithms could slow things down but when security is the most important aspect of your communication.. being able to run it on a 16 bit processor or a smart phone should perhaps be the least important design choice.”

Performance is very important. One could even argue it’s more important than the security. The reason is that a cipher won’t get used if it makes operations too slow. Businesses focus on the bottom line and personal use often shifts for convenience. So, anything they approved would need to be unnoticeable in many cases and not introduce too much delay in others. Then, there was how well it can be implemented in hardware for similar reasons.

Far as multiple ciphers go, you can use multiple ciphers. Some people on this blog tell me there’s theoretical problems with it. I haven’t seen them in practice in my use case but who knows. I tell people to make sure each individual cipher is top notch (e.g. AES candidate) and use different keys/IV’s with each one. TrueCrypt has this style of multiple encryption as an option for volumes. I’ve never heard of it being hacked. Bonus tip: make sure the ciphers use different internal structures or techniques just in case.
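
To illustrate the keying discipline (not the real primitives), here is a toy Python sketch of a two-layer cascade: two keystreams with different internal structures (HMAC-SHA-256 and HMAC-BLAKE2b counters, standing in for, say, AES and Twofish) and independently derived subkeys. It's a sketch of the principle, not a vetted construction:

```python
# Sketch: two-cipher cascade with independently derived keys.
import hashlib
import hmac

def keystream(hash_name: str, key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy counter-mode keystream built from HMAC over the named hash."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hmac.new(key, nonce + counter.to_bytes(8, "big"),
                         getattr(hashlib, hash_name)).digest()
        out.extend(block)
        counter += 1
    return bytes(out[:length])

def xor(data: bytes, stream: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, stream))

def cascade_encrypt(master: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Encrypt under two layers, each with its own independent subkey."""
    k1 = hashlib.pbkdf2_hmac("sha256", master, b"layer1" + nonce, 10_000)
    k2 = hashlib.pbkdf2_hmac("sha256", master, b"layer2" + nonce, 10_000)
    inner = xor(plaintext, keystream("sha256", k1, nonce, len(plaintext)))
    return xor(inner, keystream("blake2b", k2, nonce, len(plaintext)))
```

Because both layers are stream XORs, running `cascade_encrypt` again with the same master key and nonce decrypts. The point being illustrated is the advice above: separate keys per layer, derived so that compromising one layer's key reveals nothing about the other's.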

Bruce Schneier September 7, 2013 9:48 AM

“To everyone calling out Bruce’s opsec; I and everyone else would probably get a good laugh at your attempts of true opsec for no more than a week. You would give up, guarantee it.”

Agreed. This, in itself, is an essay.

Bruce Schneier September 7, 2013 9:50 AM

“Since you’ve mentioned you’ve accessed some of the Snowden materials, you might have received a NSL, or might receive one in the future. Have you?”

I have not.

They’re not really intended for someone like me. They are intended for data intermediaries.

If I get one, I will have to decide if I want to challenge the (I believe unconstitutional) secrecy requirement.

Bruce Schneier September 7, 2013 9:53 AM

“Regarding the command line options in Password Safe, what are the command line arguments?”

pwsafe [-e|-d] filename

That’s how you get to the undocumented encryption feature. A dialog box will prompt you for the password. (Yes, we know that asking for the password twice for decryption makes no sense.)

Bruce Schneier September 7, 2013 9:55 AM

“Who are the companies? Which companies implemented back doors for NSA as stated in the leaked documents?”

I do not know.

Bruce Schneier September 7, 2013 9:58 AM

“Bruce, you say you’ve seen a lot of the Snowden documents, and presumably that means you have or had full unredacted copies of them, and you’ve alluded to controversies over how much to release. What leads you not to release those documents you have/had in full? Why not give others a chance to pore over the details and glean things you might have missed about how to improve security? Is it simply that you feel bound by an agreement with the Guardian and/or Greenwald? If so, are you pushing back on the terms of that agreement or questioning its justification? Or do you believe, for example, that unredacted releases might jeopardize Snowden himself or his ability to release further newsworthy information?”

These are not my documents to release. I was allowed to see them under a set of rules, and I agreed to abide by those rules. The reporters who are working with these documents are putting themselves in considerable personal danger — the UK has much more onerous laws about this than the US — and I will not make things worse for them by publishing anything on my own. Nor will I make things worse for Snowden, or our ability to publish further.

Jacob September 7, 2013 10:01 AM

@Nick P – I apologize – it is not on the project site proper but in the on-line help of the program (in the pwsafe.chm file in the archive on the site):

pwsafe -r [database]
This will open the specified database in read-only mode. …
pwsafe -e filename
This will prompt the user for a passphrase, and encrypt the file with a key derived from the passphrase. Note: The file can be any file. The encrypted file will have the same name as the original file, with “.PSF” appended to it.

pwsafe -d filename
This will prompt the user for a passphrase, and decrypt the file with a key derived from the passphrase. Note: This will work only on files that were encrypted by invoking pwsafe with the ‘-e’ option (see above).
pwsafe -c
This will start the application closed, …
pwsafe -s [database]
This will start the application “silently”, …
pwsafe -m
This is the same as the ‘-c’ option, with the addition

Finally, there are some special command line flags beginning with "--". They are:

--novalidate
(clipped)

QnJ1Y2U September 7, 2013 10:45 AM

These are not my documents to release. I was allowed to see them under a set of rules, and I agreed to abide by those rules.

There’s an obvious irony here, in that Edward Snowden made a similar agreement for these exact documents.

And there are obvious differences, too, with the scope of the groups that the agreements were made with, and the issues related to those groups. But those differences won’t fit well in a sound bite or tweet; expect to see this statement used against you.

It may be useful to prepare a comment or essay on why you’ve chosen to conform to the requests of the journalists, but have chosen to be an outlier with respect to the wishes of government officials.

Richard George September 7, 2013 1:09 PM

Bruce, what are the new directions in public-key cryptography away from Diffie-Hellman/RSA and ECC? Are there new mathematical principles “orthogonal” to the existing work that could underpin a future public-key cryptosystem?

e.g. Integer multiplication and addition of points on an elliptic curve are both abelian operations, in the sense that a·b = b·a; is there a sensible way to extend public-key cryptosystems to non-abelian groups, and would that add enough extra complexity to foil existing attacks?

If the NSA has spent ~$1B to build custom hardware capable of taking out one specific, widely deployed scheme (1024-bit RSA keys, RC4, etc.), then it seems that an answer would be to build and implement as many different algorithms as possible, to diversify the target away from their existing investment. The NSA couldn’t spend $100B on custom hardware, but it seems plausible that the academic and open-source communities together could create a suite of 100 different symmetric-key ciphers.

gitarr September 7, 2013 2:34 PM

The reddit discussion link you posted is not relevant anymore, as it has been removed by the moderators of /r/netsec, due to “lack of technical details” and “speculation”.

One of the moderators sent me the following postscript: “P.S. This message was sponsored by the letters N, S, & A”. I guess they find that funny.

Mike Delta September 7, 2013 3:24 PM

“These are not my documents to release. I was allowed to see them under a set of rules, and I agreed to abide by those rules.”

Dear Bruce,

I think this is perfectly respectable. Although maybe a meta-level discussion about what kind of information should/should not be redacted (by Mr. Snowden, Mr. Greenwald and his colleagues) could be useful.

I have a feeling that the possible dangers for the US of releasing some information are greatly overestimated. As the examples show, mass surveillance capabilities are mostly effective against a few lunatics (and even some of them slip through). I would be surprised if a serious adversary with adequate resources (financial and educational) couldn’t circumvent the surveillance. This means that disclosing as much of the capabilities as possible lets the public see more clearly what is definitely broken and what may be trusted. The intelligence community should accept that there may be channels that are completely inaccessible to them; anyone is entitled to reliable encryption. Adversaries/terrorists using these means is just a side effect, which should be fought with “conventional” methods. (I mostly agree with the opinion of the Tor project regarding this issue.)

Although I think that trying to break encryption isn’t bad in itself: the whole point of cryptography is to create algorithms that withstand attacks. Of course it would be surprising if agencies disclosed their findings (although, from another perspective, the intelligence services could defend the nation’s interests better by improving the algorithms and protocols: a defensive strategy instead of the current offensive one).

The real sin, however, is the weakening and backdooring. It has been shown numerous times throughout the history of computing that the safety provided by obscurity or lack of information is only temporary. A weakness can eventually be discovered by other parties, which means that the real enemies may also gain the capability of breaking weak systems while the public still relies on their supposed safety. (This is why I prefer the full-disclosure policy of the open-source world.)

As a non-US citizen, I think that the debate shouldn’t focus only on the implications for US residents. Mass surveillance should be debated on the basis of human rights, not just US laws and the US constitution.

Anyway, thanks for all the efforts done in this topic.

(Sidenote: I’m not a cryptography expert; however, I have an academic background in computer science and some basic knowledge of the topic.)

Just.Out.Of.Interest September 7, 2013 3:27 PM

There is something I would like to get on the record.

A few days ago, a “mysterious” presentation called “Computer Forensics for Prosecutors” (dated February 2013) had been mentioned here in the comments. The presentation was originally published by the website “Tech ARP”, on April 1, 2013.

Article:

http://www.techarp.com/showarticle.aspx?artno=770&pgno=0

Download:

http://www.techarp.com/article/LEA/Encryption_Backdoor/Computer_Forensics_for_Prosecutors_(2013)_Part_1.pdf

The presentation “made some waves”, because it claims that commercial encryption programs like “Bitlocker” have backdoors (for the authorities), including the program “Truecrypt.”

However, the presentation was quickly exposed as a “hoax”, as it contained some “silly” parts as well, for example silly names for detectives. So it quickly became apparent that it was not “real.”

But I was still intrigued, because the presentation was very detailed, and I wondered about the real “background.” So I wrote to Dr. Adrian Wong, the owner of the website “Tech ARP” (based in Malaysia)…

http://en.wikipedia.org/wiki/Tech_ARP

…and asked him about the presentation and the article from April 1, 2013. I told him that the presentation had been discussed here on this blog (and was exposed as a “hoax”). He gave the following explanation:

Hello xxx,

It’s not a hoax but it is an April Fool’s joke. We intentionally inserted many clues for our readers to figure out, which MarkH picked out. There are more actually.

That said, it was actually based on a genuine presentation – not the presentation we based our joke on (which is genuine), but another presentation that I happened to come across a year or two years ago (as I recall!) on Tor.

The premise of the presentation was exactly the same – how law enforcement officials can gain access to encrypted partitions and files. If I remember correctly, it was quite detailed about how to get into Bitlocker, which is why I would never use Bitlocker myself.

Sadly, I seemed to have lost the presentation and could not find it even on Tor after my recent discussion with Marauderz about its contents. But I stumbled across this other presentation, and thought it would make good material for an April Fool’s joke.

So there you have it – the article is a joke, but the first part was true – we did have a discussion about this other presentation, and it did exist. I just cannot find it. If you do stumble upon it, please let me know.

Thanks!

Dr. Adrian Wong

+++

So I just wanted to get the “real story” on the record.

Cosmicbrat September 7, 2013 4:50 PM

This whole page reads like it is the government doing the crime…
maybe Mr. Obama should legislate them, or delete their departments, if they can’t operate in our culture and society without doing evil…

New ECC Handshake in Tor September 7, 2013 5:34 PM

Bruce, in one of your linked essays: http://www.theguardian.com/world/2013/sep/05/nsa-how-to-remain-secure-surveillance

you caution against ECC, saying:

5) Try to use public-domain encryption that has to be compatible with other implementations…
Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.

If you’ve been keeping up with news in the Tor world, you’ll know there has been a very large influx of users (a suspected botnet), and the Tor devs are moving to a new ECC-based handshake (ntor), as documented here: https://blog.torproject.org/blog/how-to-handle-millions-new-tor-clients

Do you believe this is a mistake on the part of the Tor devs? I know an additional consideration here is the computational cost of the operation, but in terms of security, do you believe this is a more insecure handshake than what is currently in place?

Ken September 8, 2013 1:41 AM

1st:
I think it is time we all switch to OpenBSD (which is specifically designed to be secure). We can all jump on it, and (in our spare time) make it more user friendly, so that even grandma can install it and use it.

2nd:
Encrypt everything! For now, sometimes only log-ins are encrypted, and bank sessions, (etc.). If we encrypt everything, it just makes their jobs harder.

3rd:
In every email you send, down at the bottom (in very small type that is colored to be invisible) place 100 to 500 “keywords” that you know the NSA will be scanning for. These keywords should be in plain-text. This will flood their systems with “red flags”– (so much so that they will not be able to see the forest for the trees)… Kind of like a Denial Of Service attack– only in this case “DOS” means “Denial Of Snooping”…

Prohias September 8, 2013 11:09 AM

For years you’ve been teaching us that, to do it right, security needs to be built into a system from its inception. It should be an explicit goal, and experts in the area need to be involved early to foster that mindset. In Practical Cryptography you stated that the book was a consequence of people’s ad hoc and careless use of what you taught us in Applied Cryptography. Today everyone serious about building secure software systems understands this as axiomatic.

Now, shouldn’t we apply the same criterion when reviewing the state of security on the Internet and the havoc the NSA has unleashed? When a lot of these basic systems and protocols were established, security was an afterthought rather than a guiding principle. Haven’t the chickens come home to roost?

I read your calls for engineers to claim it back and establish new standards. Where must we start – redefining base protocols and assumptions we’ve taken for granted? What about the silicon and the OS itself?

anyone September 8, 2013 12:08 PM

Just.Out.Of.Interest

There is something I would like to get on the record.

A few days ago, a “mysterious” presentation called “Computer Forensics for Prosecutors” (dated February 2013) had been mentioned here in the comments. The presentation was originally published by the website “Tech ARP”, on April 1, 2013…

and just why are you telling us this? To get people not to stop using TrueCrypt and other tools?

Czerno September 8, 2013 1:24 PM

@New ECC Handshake in Tor :
“Tor devs are moving to a new ECC based handshake in (nTor) …
Do you believe this is a mistake on the part of the Tor devs?”

I’m not Bruce, and not an expert, but as I understand it, ECC per se is not suspect; only /certain/ standard curves, using sets of constants “manipulated” by the NSA, are. The Tor devs are well aware of the background, and they chose a particular curve that is thought to be secure.

Like you I hope to learn more details from Bruce and the ‘resident’ schneier.com experts…

MarkH September 8, 2013 2:23 PM

@Just.Out.of.Interest:

The letter purportedly from Dr. Adrian Wong refers to a presentation he found online, of which he is not presently able to find a copy.

It reports his recollection of seeing instructions for breaking into BitLocker encrypted drives, but says nothing about TrueCrypt.

It admits the creation of a sham presentation, published on TechARP as a joke.

If you had important news of great consequence to report, do you think you would publish it that way?

speedie September 8, 2013 2:37 PM

Someone above pointed to the fact that Tor is 60% US-government funded.
Bruce himself repeated in comments not to trust anything. I don’t think I’ll be using Tor anytime soon.

MarkH September 8, 2013 2:47 PM

Regular readers of this blog will already know about this, but as a reminder, and for the benefit of anyone else concerned about confidentiality:

Software encryption provides no protection against an adversary able to physically access your computer while an encrypted drive is mounted (open), whether the computer is running or in hibernation.

The reason for this is that while an encrypted drive is mounted, the encryption key is stored in the computer’s RAM. Contrary to what you might expect, finding such a key is not difficult.

ElcomSoft sells a software package that extracts the keys from either a RAM dump, or a hibernation file on the disk. It works against the most popular disk encryption products, including TrueCrypt.

This vulnerability is not due to a design fault in these disk encryption systems — it is a practical limitation of software disk encryption.

Practical protection measures against this:

(a) avoid hibernation with an encrypted drive mounted whenever the computer is vulnerable to physical access by an adversary

(b) ensure that all encrypted drives are dismounted whenever the computer is vulnerable to physical access by an adversary

Note that well-designed disk encryption software will “erase” the key from RAM when dismounting the drive. If you’re not sure that the software does this, then you need a stronger condition:

(c) ensure that the computer has been powered off (NOT suspended) for at least several minutes at any time it is vulnerable to physical access by an adversary. If insufficient time has elapsed since power down, the adversary can recover RAM data.

NOTE: I haven’t checked whether any particular encryption software erases the key on dismount. If you use TrueCrypt’s key caching feature, you are obviously asking it to keep key information in RAM, making the encrypted drive(s) vulnerable at least until TrueCrypt is shut down.
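To make the point concrete, here is a toy sketch of my own (not any real product) of how key-recovery software locates AES-128 keys in a RAM image: it searches for a 176-byte region that forms a valid AES key schedule. Real tools such as aeskeyfind from the cold-boot research are far faster and also tolerate bit errors in decayed RAM.

```python
# Toy illustration: find AES-128 keys in a memory dump by searching for
# a 176-byte region that is a valid expanded key schedule.

def xtime(a):
    # multiply by x in GF(2^8), reduced by the AES polynomial 0x11B
    a <<= 1
    return (a ^ 0x11B) & 0xFF if a & 0x100 else a

def make_sbox():
    # build the AES S-box: multiplicative inverse, then the affine map
    exp, log = [0] * 256, [0] * 256
    x = 1
    for i in range(255):
        exp[i], log[x] = x, i
        x ^= xtime(x)                      # generator 3 = x + 1
    sbox = [0x63]                          # "inverse" of 0 is defined as 0
    for a in range(1, 256):
        b = exp[(255 - log[a]) % 255]      # multiplicative inverse of a
        s = 0x63
        for _ in range(5):                 # s = 0x63 ^ b ^ rot1(b) ^ ... ^ rot4(b)
            s ^= b
            b = ((b << 1) | (b >> 7)) & 0xFF
        sbox.append(s)
    return sbox

SBOX = make_sbox()

def expand_key(key):
    # AES-128 key expansion: 16-byte key -> 176-byte round-key schedule
    w, rcon = list(key), 1
    while len(w) < 176:
        t = w[-4:]
        if len(w) % 16 == 0:               # RotWord + SubWord + Rcon
            t = [SBOX[t[1]] ^ rcon, SBOX[t[2]], SBOX[t[3]], SBOX[t[0]]]
            rcon = xtime(rcon)
        w += [w[len(w) - 16 + i] ^ t[i] for i in range(4)]
    return bytes(w)

def find_aes128_keys(dump):
    # report every offset whose next 176 bytes form a valid key schedule
    return [off for off in range(len(dump) - 175)
            if expand_key(dump[off:off + 16]) == dump[off:off + 176]]
```

The 160 derived bytes are a very strong check, so false positives are essentially impossible; this is exactly why a mounted-volume key sitting in RAM is so easy to pick out.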

gonzo September 8, 2013 3:41 PM

@MarkH, et al.

I think the reality at this point is that we must assume that even if disk encryption software like TrueCrypt is not itself compromised, it must be considered whether it can ever be used securely on a net connected machine.

Such a machine, whether via a backdoor written into the OS, served up by update routines, or some other malicious vector, can easily send encryption keys, keystrokes recorded while entering a passphrase, or other information over the net connection to an awaiting server. Alternatively, the OS or other software can put this data surreptitiously onto an unused sector of an attached USB flash drive, or broadcast a malformed packet to a backdoored consumer router, where it would sit in a buffer so long as power were still applied.

True security at this point must consider use of drive encryption, PGP, and other endpoint crypto only on an air-gapped computer that receives and delivers ciphertext via antiquated technology such as 1.44 MB floppy disks, Zip disks, or scanned and OCR’d printouts.

Physical security means nothing when the OS software itself could be (and probably is) compromised, as are the consumer router and the carrier who provides the ’net connection.

I suppose one can consider doing crypto on a connected machine if one were to boot into a highly secure live-CD environment; but even then, what shenanigans exist at this point in the hardware itself?

NSASUX September 8, 2013 6:46 PM

@Ken

As to your #3: “In every email you send, down at the bottom (in very small type that is colored to be invisible) place 100 to 500 ‘keywords’ that you know the NSA will be scanning for.”

Establish random VMs in the Middle East that send random e-mail messages: encrypted, but with weak encryption, and with subject lines that leak the contents.

Have those VMs communicate with other VMs in the USA, plus an Asterisk PBX that calls random disposable phones and plays OBL’s voice.

In the end, I believe that the recent revelations are going to push sysadmins to up their game and implement SSL with PFS, IMAPS, POP3S, etc. Google has already announced that they have ramped up the encryption of their transport data (grain of salt inserted).

خدا کی تعریف تعالی شہید شیخ اسامہ بن لادن. ہم عظیم شیطان پر اینتھریکس کے ایٹمی آگ کی برسات ہوگی. ہم سنٹری فیوجز اور افریقہ سے پیلے رنگ کے کیک کے ساتھ بڑی کامیابی تھی. انشاء اللہ ہم برائی امریکیوں کو ایٹمی موت میں جلا دیکھتے ہیں. خدا ہوائی جہاز پر شہیدوں کی حفاظت کریں

MarkH September 8, 2013 7:25 PM

@NSASUX:

Just ran the Arabic through Google Translate.

As angry as I’ve been feeling about all this news … I needed the belly laugh.

I know what my new email sig is going to be.

Henry Hertz Hobbit September 8, 2013 9:32 PM

I do not know exactly what you meant by “security software” on either a router or a switch, but I am assuming you mean AV software. Routers and switches have their own OS, and it doesn’t make sense to have AV software on them unless they also have something like a PIX firewall. One of the major concerns for a router is to prevent IP spoofing: you don’t want somebody “out there” pretending they are “in here.”

Edge firewalls frequently do have AV software, but not for the firewall itself; they are guarding all the machines inside against web-based malware. SMTP/POP agents also have their own malware-scanning software.

You do not want to turn on logging on very busy routers or switches; the drop in throughput will be noticed. Logging on most of these network devices is used primarily for debug purposes. I will blog about it later on my blog. Thousands of Windows bot machines all over the globe sending me spam and malware directly should be entertaining metadata for the NSA. For me it is an onerous burden.

But if you ask me, most of what is going on here is fear of the unknown.

Buck September 9, 2013 12:14 AM

Is there any actual math/science/theory behind the “fact” that security through obscurity is useless? It seems more likely to me that this is simply a commandment handed down in InfoSec101 with little more thought required… I’ll agree that choosing an obscure encryption algorithm essentially offers no additional security, but then we’d be conflating security by obscurity with security through minority…

Obviously, those with whom one is communicating would have to agree ahead of time on which encryption technique to utilize, and (seeing as there are a limited number of reliable algorithms) there’s not much more than a few milliseconds of “obscurity” involved. Sure, one could concoct one’s own unique encryption algorithm, but countless pitfalls abound for crypto beginners! Luckily, that doesn’t appear to be the dilemma here (if we are to believe “the math is good”).

The problem appears to be systems being subverted prior to any encryption ever actually taking place. Here’s a brief thought experiment:
Say you’ve learned that your house has been broken into by (professional) thieves whilst you were out on vacation. Which possessions do you think are more liable to remain: your grandmother’s engagement ring sitting in your “uncrackable” safe in the bedroom closet, or your emergency cash stash hiding in your hollowed out edition of “Applied Cryptography” lying amongst your large collection of books?

Even Mr. Schneier himself has admitted, “Sometimes security through obscurity works.”
https://www.schneier.com/blog/archives/2008/06/security_throug_1.html

Though this all depends on our definition of obscurity… Of course, if the general public starts to use identical methods, said methods cease to be obscure. If trying to remain invisible to prying eyes with deep (practically unlimited) pockets, we must take advantage of what we do know about penetration practices. As an example, take Nmap: why appear to be running the operating system and services that we are actually running? Chances are we’ve missed an update, or an SSHD 6.2 0day is lurking right around the corner! Why not disguise your machine as a lowly out-of-service XP box? Run your SSH service on port 139 with the same public capabilities as a vulnerable NetBIOS instance (minus the vulnerabilities, of course)! Keep your adversaries frustrated with the “low hanging fruit” that should really just be a quick hack, all the while leading them down an ever-deepening rabbit hole…

Keep the TAOs busy… Assume the daily “script-kiddy” attacks are more than they appear… When large-scale attacks and specific vulnerabilities are on the offensive, have an additional layer of security!
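The simplest piece of that disguise idea, putting a service on an unexpected port, can be sketched as a tiny TCP relay (my own illustration, with hypothetical names; note it does not emulate NetBIOS semantics, so a banner-grabbing scanner will still see the SSH greeting):

```python
# Minimal sketch: accept connections on a "decoy" port and relay them to
# the real service listening elsewhere (e.g. sshd bound to localhost).
import socket
import threading

def _pipe(src, dst):
    # copy bytes one direction until EOF, then signal the peer
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def start_relay(decoy_host, decoy_port, real_host, real_port):
    # returns the listening socket; decoy_port 0 picks a free port
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((decoy_host, decoy_port))
    srv.listen(5)

    def accept_loop():
        while True:
            try:
                client, _ = srv.accept()
            except OSError:              # listener was closed
                return
            upstream = socket.create_connection((real_host, real_port))
            threading.Thread(target=_pipe, args=(client, upstream), daemon=True).start()
            threading.Thread(target=_pipe, args=(upstream, client), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv
```

For example, start_relay("0.0.0.0", 139, "127.0.0.1", 22) with sshd bound only to localhost would make SSH answer on the NetBIOS port; you would still firewall the real port externally.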

Commenter “RonK” offers a novel solution:
https://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html#c1688784

Sounds difficult to develop deployable applications for systems using such a framework, but not impossible; especially if the modules are small and provide sensible output for all possible inputs.

The alternative seems to be that which “GhostIn(Your)Machine” alludes to here:
https://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html#c1676641
Having some (a very small amount of) experience with mathematical proofs, however, I am unable to conceive how “provably secure” systems can avoid issues with Turing’s halting problem…

RobertT September 9, 2013 1:09 AM

I’m shocked, genuinely shocked. In my world a gentleman simply would not read another’s private correspondence; it’s simply bad form, bad form I say!

On a more serious note, isn’t the rise and rise of the NSA just symptomatic of a declining US empire? As real power slips from their grasp, they make ever more elaborate plans to hang on by engaging in thoroughly immoral processes. This sweet contradiction reminds me of the writings of Bertolt Brecht, in “Lob des Zweifels” (“In Praise of Doubt”):

“Schönster aller Zweifel aber / Wenn die verzagten Geschwächten den Kopf heben und / An die Stärke ihrer Unterdrücker / Nicht mehr glauben” (“But the most beautiful of all doubts is when the downtrodden and disheartened raise their heads and no longer believe in the strength of their oppressors”)

John Yardly September 9, 2013 3:12 AM

I love how people like this Enigmatic character, who keeps posting his supposedly damning questions about TrueCrypt (as if they all haven’t been asked for years already), like to play both sides of the coin.

You spend all this time trying to claim it’s a honeypot because the developers want to remain secret, and then turn around and ask “Would the US-government leave a US-company like Truecrypt in peace and do nothing while they develop ‘uncrackable’ encryption?”

Hmm. You think maybe developers thought about that? Is it at all possible that that could be a reason said developers would have an interest in being anonymous?

eyeroll

unimportant September 9, 2013 3:33 AM

@John Yardly

It is reasonable to question things, especially with the NSA’s aggressive surveillance behavior in mind. The verification process for open-source software should be as transparent (and thorough) as possible, though.

Clive Robinson September 9, 2013 3:41 AM

@ Robert T,

I’ve not seen you post for a while, how long since you first tried to post unsuccessfully?

I know Bruce has been changing things to push HTTPS over HTTP so that might be causing probs.

Also, if I remember correctly, you tend to be a bit mobile, so you might have been burned on somebody’s firewall.

Failing that were you trying to post links?

Anyway, please keep trying; a number of questions have popped up over the past couple of days that you would have an interesting angle on.

Clive Robinson September 9, 2013 4:00 AM

@ Buck,

    Is there any actual math/science/theory behind the “fact” that security through obscurity is useless? It seems more likely to me that this is simply a commandment handed down in InfoSec101.

Yes, it does get handed down from day one, because it’s been found too often that breaking the rule leads to a world of hurt.

The rule is the inverse of “the enemy knows the system” and predates any kind of math/science/theory in what we would now call information theory. And as has been pointed out before, “keeping the key secret” is itself a form of “obscurity.”

But before you can make the expression amenable to analysis, you need to give “obscurity” a definition that is actually constrained in some way, to make it of use.

Z.Lozinski September 9, 2013 5:51 AM

@Buck,

Is there any actual math/science/theory behind the “fact” that security through obscurity is useless? It seems more likely to me that this is simply a commandment handed down in InfoSec101.

Have a look at Kerckhoffs’s six principles for the design of military ciphers, from 1883. The second principle, the one people usually refer to, is:

“It must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience;”

The mathematical version is Claude Shannon’s “perfect secrecy” in “Communication Theory of Secrecy Systems” (the classified version is from 1945, the published version from 1949). In essence, having the ciphertext and knowing the system should make no difference to the message probabilities, and hence should not help in breaking the system.

http://netlab.cs.ucla.edu/wiki/files/shannon1949.pdf
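Shannon’s condition is easy to see concretely for a one-time pad. Here is a throwaway sketch of my own, for a single byte: for any observed ciphertext, every possible plaintext is produced by exactly one key, so a uniformly random key leaves the message probabilities untouched.

```python
# Toy demonstration of perfect secrecy for a one-byte one-time pad:
# for a fixed ciphertext c, each message m corresponds to exactly one
# key k = m XOR c, so uniform keys give P(M = m | C = c) = P(M = m).
from collections import Counter

def posterior_counts(c):
    # for ciphertext byte c, count how many keys decrypt it to each message
    counts = Counter()
    for k in range(256):          # enumerate the uniform key space
        counts[c ^ k] += 1        # decryption: m = c XOR k
    return counts

counts = posterior_counts(0x5A)
# every one of the 256 possible messages is produced by exactly one key,
# so observing the ciphertext reveals nothing about the message
assert len(counts) == 256 and set(counts.values()) == {1}
```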

There are statistical tests that can be applied to a text in an unknown cipher that will reveal a surprising amount of information. William Friedman developed many of these at the Riverbank Labs before he became Chief Cryptographer of the US Army Security Agency (the forerunner of the NSA).

As Clive says too many people have relied on security by obscurity. Sometimes with fatal results.

David B September 9, 2013 6:12 AM

Regarding the suspicion that the spooks may be ‘light years ahead’, yet must tread a fine line: protecting society from the crims, but not protecting the crims from law enforcement.

Well, if that were the case, then it would make a lot of sense to run a periodic competition to encourage disclosure of the best thinking available in civilian encryption. Aside from the opportunity to influence standards towards algorithms which displayed the correct ‘balance’ (as opposed, perhaps, to simply being the strongest cryptographically), the more urgent goal would be to assess what level of capability is ‘out there’.

It would make less sense to hold the competition if they didn’t at least have a comfortable lead. It would not be in their interests to encourage the development of cryptographic understanding if it threatened to compromise their ability to carry out their mission.

unimportant September 9, 2013 6:21 AM

@Z.Lozinski — Secrecy through obscurity

But what if a hidden algorithm is performed after the publicly known cipher (with independent keys)? Especially in situations where encryption is legally restricted (e.g. to 56-bit key sizes).

Z.Lozinski September 9, 2013 8:39 AM

@unimportant,

Combining ciphers does not always give the improvement in security you might expect. The devil is in the details, and these details vary depending on the cipher, so you need to do the mathematical analysis.

A simple example may help. Take the Caesar cipher, “a publicly known cipher,” with key 3, so A->D, B->E, etc. Now combine this with a Caesar cipher with key 7 (“independent keys”): A->H, B->I. The result is equivalent to a single Caesar cipher with key 10, where A->K, B->L. Oops.
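That collapse is easy to check in code (a throwaway sketch of mine, uppercase letters only):

```python
# Composing two Caesar ciphers collapses to a single Caesar cipher whose
# key is the sum of the two keys (mod 26). Uppercase A-Z only.
def caesar(text, key):
    return "".join(chr((ord(c) - ord("A") + key) % 26 + ord("A")) for c in text)

msg = "ATTACKATDAWN"
# encrypting with key 3 and then key 7 is indistinguishable from key 10
assert caesar(caesar(msg, 3), 7) == caesar(msg, 10)
print(caesar(msg, 3))   # prints DWWDFNDWGDZQ
```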

Yes, I know this is an incredibly simplistic example, and my apologies for that, but the point holds for more complex ciphers too. The reason for the creation and use of Triple-DES is that the simpler Double-DES is barely stronger than single DES, thanks to the meet-in-the-middle attack.

There are lots of techniques for combining ciphers and crypto-primitives. Bruce’s Applied Cryptography is a good place to start.

algol September 9, 2013 9:30 AM

Security through obscurity may be considered useless, but then again, no security should be viewed as 100% foolproof anyway.

Besides that, security through obscurity is hardly ever used alone.

Actually, the NSA uses security through obscurity as one of its own security components: the agency is not particularly open about what it does or how it does it.

So I guess in the end it depends on:
1. What are we obscuring? (If we are obscuring some complex encryption scheme, obscurity can be a component in the overall security scheme.)
2. How good is the obscurity? (Some things have still not been “figured out,” such as the supposed location of Captain Kidd’s treasure.)

unimportant September 9, 2013 9:49 AM

@Z.Lozinski

I am aware of the nastiness of combining structurally similar ciphers. I was thinking of carefully chosen, simple, undisclosed algorithms like swapping bit locations or selective inverting (where it makes sense).
Like: E_{K56⊕K0}(M ⊕ M0), with cipher E, message M, 56-bit key K56, and 128-bit constants K0 and M0, where M0 is different from 0 only for an enforced K0 = 0.

Nick P September 9, 2013 12:34 PM

@ Buck

Obscurity is A Good Thing, if Done Right

I think what originally made it into a commandment was relying on obscurity INSTEAD OF good security engineering. This led to many closed or proprietary solutions that had little peer review and got into widespread use before obvious flaws were found. And now they’re still with us. Also, Kerckhoffs’s principle states that the part that remains secret should be tiny and easy to swap. We see this in crypto, where the algorithms are put in the open but the key is kept secret. I’m sure this has factored into “no obscurity” becoming a sort of commandment.

That said, my stance is that good security engineering PLUS obscurity can have a benefit. The idea is to increase the burden of the attacker. Many modern designs are reusable kits on the black market or exploits that work due to extensive standardization. Obfuscating and diversifying key details eliminates the one-size fits all, scalable and ignorant types of attacks. It turns the attack into a targeted one requiring more sophistication. If the system is black box enough, the attacker might need to be physically there. The points to diversify include chipset, OS, middleware, compilers, code security strategies, languages, services, architecture, and so on. I call my approach Boosting Security Through Diversity.

(Note: Complexity is the enemy of security. Yet, the irony is that my approach provides security by massively boosting complexity on the enemies’ side. But, to me, it’s all a simple design with low complexity components, connection rules, and transformation techniques.)

So, you use good security engineering principles. You get review by qualified people (maybe under NDA). If no review, each component or piece in your design should be proven on its own. Now, from here, you’re mainly worried about 0-day issues. The diversity/obfuscation kicks in as you use nonstandard components in your design and mix it up in other ways. And the 0-days are prevented in many cases unless they’re logic attacks (eg bad protocol design or requirement).

So, it definitely has value. We tend to call it “obfuscation” so it’s not confused with blind security by obscurity. Just remember to use proven security strategies b/c if obscurity is your only protection you’re screwed the moment you’re targeted.

I missed RonK’s comment. His recommendations are about the same as some I’ve given here before. They will improve things; I’ve prevented attacks plenty of times that way.

Modular redundancy + diversity for batch processing. I’m not saying this is practical in the general sense. I had it in mind for certain applications that required mutually untrusting parties/countries and can be done in batches.

Here is (almost) everything you must consider in engineering a secure system in one comment. It helps to know each layer or component that can be attacked. Your application’s TCB will be composed of one or more of these layers, attacks will come through them, and therefore most security engineering effort should be focused on those layers. That includes the obfuscation too.

MarkH September 9, 2013 4:12 PM

On “security by obscurity”, I agree with Nick P’s assessment.

To use obscurity in place of a strong security system (for example, a well-designed cipher) is extremely risky. Obscurity can supplement the protection afforded by strong systems, but cannot replace that protection.

For most people, using a good open-development security tool (like GNU privacy guard) is absolutely the best choice — it’s been studied by many pairs of eyes and exposed to a lot of analysis and testing.

One of the dangers of making an obfuscated security system is that it is (of necessity) “home grown” to a significant extent. Even a Really Smart Person can make some Really Dumb Mistake. For example, when putting together several “layers” of security technology — the idea being to have stronger security than that of the components standing alone — it is not at all difficult to accidentally and unknowingly create a leakage path for secret data that spoils the whole construction.

So, I suggest:

1) Unless you consider your risk to be exceptional, use trustworthy (open-source) security tools in accordance with recommended best practice. Follow all of the instructions and advice, including the annoying and time-consuming steps: they are given for good reason!

2) Don’t mess about with security tools unless you Seriously Know What You Are Doing … and as Nick sagely recommends, get a review by somebody else who Seriously Knows What They Are Doing, because you can easily have missed something important.

Dirk Praet September 9, 2013 5:11 PM

@ Bruce, @Curious, @ Fridz

However I found your tacit endorsement of Silent Circle a little out of character for you.

In a situation where every US-based product/service needs to be considered compromised or “insecure by law”, it all boils down to trust and reputation, preferably first-hand. Although contrary to Bruce I don’t know anyone at Silent Circle personally, in shutting down Silent Mail the company made a controversial stand that at least in me inspires more trust than a lot of others out there denying collaboration with carefully crafted word games or marketing their stuff as NSA-proof.

A company inspiring similar trust is Wickr. They equally offer a “secure” text, picture, audio and video message service for iOS, with an Android version in the making. A while ago, CEO Nico Sell publicly stated that she had been asked by the FBI to provide a backdoor, but that she had refused to do so. According to a Guardian article by Cory Doctorow, the company is looking into ways to implement a warrant canary system. These are the kind of companies I believe deserve at least the benefit of the doubt, and I would very much like to see a lot more of them.

RobertT September 9, 2013 5:58 PM

@Clive Robinson
It seems that I can post if I do it directly, but I can’t post using my usual source-obfuscating techniques.

Oh well, imagine that: the NSA prefers system end-run solutions over cryptanalysis. Whodathunkit. I didn’t see anything in the recent releases that I didn’t already know, except maybe the magnitude of the effort, but to be honest I suspected that also, especially given the insane growth of their physical footprint (buildings, secure compounds, etc.)

The really interesting game starts when they try to translate the metadata surveillance into physical surveillance. It reminds me of the third Bourne movie, where they identify the leak source as the one who turned off their cell phone. I wonder how long it’ll be before we close the loop and dedicate surveillance resources to tracking everyone who is identified as NOT having a broadcasting device on their person. The belief that we can close the loop will simply be too strong a force to resist.

MarkH September 9, 2013 6:22 PM

@RobertT:

Your gloomy projection reminds me of a Ray Bradbury story, about a man who takes walks at night. From the windows of every house he passes, he can see the bluish glow from TV screens. He is the only one who isn’t staring at television all evening.

Eventually, he is arrested (into a custody from which it is implied he will not return) for his crime of deviation from the norm.

RobertT September 9, 2013 6:46 PM

@MarkH
“2) Don’t mess about with security tools unless you Seriously Know What You Are Doing … and as Nick sagely recommends, get a review by somebody else who Seriously Knows What They Are Doing, ”

The problem with this is that there are very few people who really know what they are doing, and of that small set there exists but a minuscule subset that BOTH know what they are doing AND are not somehow part of the great game.

PS most members of this minuscule subset are so paranoid about their own physical security that they elect to have absolutely no further involvement with cyber security.

Dirk Praet September 9, 2013 8:46 PM

@ Daniel Larsen

Who are the companies? Which companies implemented back doors for NSA as stated in the leaked documents?

This looks like a good summary of the companies currently known/alleged to have links to the NSA surveillance. It doesn’t include any crypto specific stuff (yet), but it’s a good start.

That Jerk September 10, 2013 12:33 AM

I can’t wait for the NSA’s escrow keys to all of our financial institutions to get leaked. The first thing I’m gonna do is drain Barack Obama’s bank account 🙂

Figureitout September 10, 2013 12:45 AM

MarkH
–I used to take pretty late-night runs (11pm), just something I did because most everyone would be in bed and I had a late schedule. I liked having my running routes all to myself. It one time allowed me to witness, with absolutely no doubt, military drones flying over the suburbs. I’ve mentioned it at least 2-3 times here, but I gave one coming right at me the “double-bird” and it nearly hit the power lines as it flew over me. In the future, I would be dead. Had I a gun or a nice stone, I would’ve tried to be the 1st American to down a drone on American soil.

Z.Lozinski September 10, 2013 4:19 AM

@unimportant,

OK, so I think there are several interesting observations about a second layer of encryption/obfuscation, if it is chosen carefully:

  1. Assuming that the signatures of well-known cryptosystems are themselves well-known, high-quality obfuscation that makes your communications look statistically uniform immediately highlights your traffic as “interesting”.

  2. It makes the task of breaking your communications harder, which means it is probably only worth doing if you are a Person of Interest, or as the Chinese curse has it, “you come to the attention of those in authority”. But if they store all encrypted traffic forever, someone can always come back to it later.

  3. The implementation details matter. Are there header blocks or padding blocks that get obfuscated and will leak information about the obfuscation? The header with the key indicator was one of the production breaks into the Enigma. One of the earlier (unclassified, published) analyses of DES by IBM shows some of the issues with disk encryption and known plaintext in disk blocks (IBM Systems Journal, late 1970s).

  4. Another implementation detail. The source(s) of randomness used in the obfuscation function are probably important.

unimportant September 10, 2013 5:27 AM

@Z.Lozinski

Thanks — the application is firmware encryption. The US exempts medical devices and DRM from encryption export restrictions, but Europe is officially bound by the Wassenaar dual-use arrangement, which limits symmetric encryption to 56-bit key sizes. My train of thought is to use AES-128 with a constant key K0, which is then part of the encryption algorithm, XORed with a flexible 56-bit key. If the legal language does not allow this, then I would use K0=0 combined with a mild form of obfuscation (swapping and selective inversion of plaintext and ciphertext bits).
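As an illustration only, the key construction described above might look like the sketch below. The constant `K0` and the helper name are hypothetical, not taken from any real product; note also that XORing a public constant into a 56-bit key adds no entropy against an attacker who knows the algorithm, which is presumably the point of the legal exercise.

```python
# Hypothetical sketch: a fixed 128-bit constant K0 (treated as part of
# the algorithm) XORed with a flexible 56-bit key to form the AES-128 key.
# K0 here is an arbitrary illustrative value.

K0 = bytes.fromhex("00112233445566778899aabbccddeeff")

def derive_aes128_key(k56: bytes) -> bytes:
    """XOR a 56-bit (7-byte) flexible key into the low bytes of K0."""
    if len(k56) != 7:
        raise ValueError("k56 must be exactly 7 bytes (56 bits)")
    padded = k56 + bytes(16 - 7)            # zero-pad to 128 bits
    return bytes(a ^ b for a, b in zip(K0, padded))

key = derive_aes128_key(bytes.fromhex("01020304050607"))
assert len(key) == 16                        # a full AES-128 key
```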

Clive Robinson September 10, 2013 6:39 AM

@ Z.Lozinski,

Your point 3 sort of covers one of the most important parts of practical systems that just does not get covered these days, which is “what do you do with the plaintext before you encrypt”.

The normal argument is to compress it, but this alone does not deal with issues like “known-position plaintext”.

Even when using OTPs the Russians employed various steps to prevent known position plaintext and methods to flatten the plaintext statistics.

Two simple examples from hand-encoded ciphers were “Russian coupling” and a variation on the Nihilist cipher, the “straddling checkerboard”, used to turn letters into numbers and back again.

The straddling checkerboard compresses the message by giving the eight high-frequency letters (“eat on irish”) single digits and the others double digits. This also produces fractionation, which helps considerably with short-group transposition to hide other plaintext statistics (with hand ciphers such as VIC you would then encipher the digits with some form of running key, then turn them back into letters using another checkerboard in reverse).

The ciphertext would then be given long-group transposition by Russian coupling. In essence you break the text one or more times to give two or more different-sized blocks, then transpose them in some pattern so that they are not in the same place or order they were originally in. A key to this process was then put into the resulting ciphertext at a known place, arranged by some key schedule.

The two points to note are the fractionation of individual characters and the transposition across the entire message (by short and long transposition). This fairly effectively flattens the plaintext character statistics and spreads letter-contact statistics across the whole message, achieving in some measure Shannon’s diffusion and confusion.
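The fractionation step Clive describes can be sketched in a few lines of Python. The checkerboard layout below is arbitrary, not a historical one, and this toy omits Y and Z for simplicity:

```python
# A minimal straddling checkerboard encoder. The eight high-frequency
# letters ETAONRIS get single digits; the rest get two digits whose
# first digit (2 or 6) is reserved, so decoding is unambiguous.

TOP = {"E": "0", "T": "1", "A": "3", "O": "4",
       "N": "5", "R": "7", "I": "8", "S": "9"}
ROW2 = "BCDFGHJK"   # encoded 20..27
ROW6 = "LMPQUVWX"   # encoded 60..67 (Y and Z omitted in this toy layout)

TABLE = dict(TOP)
TABLE.update({c: "2" + str(i) for i, c in enumerate(ROW2)})
TABLE.update({c: "6" + str(i) for i, c in enumerate(ROW6)})

def encode(text: str) -> str:
    """Fractionate a message into a digit stream, skipping unknown chars."""
    return "".join(TABLE[c] for c in text.upper() if c in TABLE)

# High-frequency letters compress to one digit each:
print(encode("ATTACK"))  # A T T A C K -> 3 1 1 3 21 27 -> "31132127"
```

Because common letters shrink to one digit and rare ones stretch to two, character boundaries no longer line up with digit boundaries, which is exactly what makes the subsequent transposition so effective at destroying contact statistics.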

underhanded September 10, 2013 7:12 AM

@GregW : “So Bruce, should true hackers boycott the Obfuscated C contest, since its value for adding to the NSA playbook of “accidental mistakes” outweighs the honor of displaying one’s cleverness?”

The NSA has enough money to enlarge that playbook without participation of true hackers.

Maybe you wanted to refer to the Underhanded C contest, not the International Obfuscated C Code Contest.

This Underhanded C contest should continue to exist, to convince programmers of open-source OSes to switch from C/C++ to another language (ParaSail? Go?): this is the only way to get out of reach of the NSA (you also need to lock down the motherboard OS, a.k.a. Intel AMT / Intel IPMI / Dell iDRAC / Hewlett-Packard iLO … https://www.schneier.com/blog/archives/2013/01/the_eavesdroppi.html …).

sshdoor September 10, 2013 7:24 AM

@Carlo Graziani: SMTP obviously needs to be replaced by a messaging protocol that minimizes exposed metadata and routinizes application-layer encryption.

Or, better, the IETF should write a standard specifying how protocols can be TLA-resistant and DoS-resistant:

  • Do not transmit anything not needed.
  • And hence, do not reject requests from TOR, if the IP is not needed.
  • Limit the length of an answer packet to the length of its request packet (think of recent DoS attack using DNS).

Andrew September 10, 2013 6:31 PM

Hi Bruce, you had mentioned in “NSA surveillance: A guide to staying secure” in The Guardian (http://www.theguardian.com/world/2013/sep/05/nsa-how-to-remain-secure-surveillance) that TLS and IPSec are suitable encryption systems; however documentation seems to indicate that the NSA seems to have “some capability against encryption used in . . . IPSEC, TLS/SSL”. (http://cryptome.org/2013/09/nsa-bullrun-brief-propublica-13-0905.pdf)

I know you’ve mentioned that we should trust the math, and that the issues often lie in the implementation and at the end-points, but is there a specific reason why you can recommend using IPsec/TLS, despite the fact that the documents indicate these are both protocols they have some effectiveness against, and in the same article indicate that they have put forth “aggressive effort” in the “development of advanced mathematical techniques”? I’m assuming it may just be that there is evidence of specific vendors compromising the protocols (i.e. Microsoft regarding IPsec, based on their history: http://www.heise.de/tp/artikel/5/5263/1.html); if so, in what circumstances could we expect these protocols to be secure/insecure?

tinfoil hatter September 11, 2013 4:45 AM

Could someone explain what impact bugged ECC would have on SSL/TLS? Bruce advised not to use ECC but didn’t explain. Should we avoid ECC no matter the occasion?

Would that mean that someone could sniff the key exchange between client and server?

phred14 September 11, 2013 10:57 AM

I run Gentoo Linux, where pretty much everything is built from source on my own computer; all source packages, as well as the package system, have at least checksums, sometimes signatures. The source code is obtained from the original source. So before I feel too smug…

Obviously there are places where this chain can be subverted, though why anyone would bother with a minority geeky distribution like Gentoo is a different question.

The other point of subversion would be the C compiler itself. Long ago, Ken Thompson’s “Reflections on Trusting Trust” described subverting the C compiler to detect the login source and insert malicious code into the generated object. Further, it also detected when the compiler was being used to compile itself, and inserted the malicious insertion code. There is a later essay, on “diverse double-compiling”, that describes how to get around this problem.

Question… Has such a subverted compiler ever been found in the wild and reported?

Will Hill September 11, 2013 1:01 PM

Surely someone in your life can help you dump Windows for a nice free software desktop. I use Debian for all my work and see it as a reasonable compromise because the community is large and careful. I use tsclient to talk to unfriendly older systems when work demands it. After 15 years of it, I don’t see how anyone can stand those other OS and I’m glad that other people are stuck making them work.

Science, freedom and community are the only things you can trust in a world where you can’t trust anything. The free software community would be happy to have you. Why waste your time with systems whose owners have always demanded the power we now know they are abusing?

JTHutch September 11, 2013 2:55 PM

@Bruce Schneier • September 6, 2013 7:03 AM

“Clearly I need to write an essay about how to figure out what to trust in a world where you can’t trust anything.”

This. Please. I’d imagine it could all be reduced to some variant of Pascal’s wager, but we all need to figure out our options going forward. I suspect that neither “head in the sand” nor “cabin in the woods recluse” is a viable response for most of your readers.

Gilbert September 11, 2013 4:23 PM

Bruce, why don’t we have cryptographers develop a cipher with a very high number of bits, instead of raising it from time to time? We could jump to symmetric 1024 bits directly. Same for computers: we spent a hell of a long time at 32 bits before moving to 64 bits. Why not go directly to 128 or 512 bits?

We raise the number of bits a little, spend a lot of time working on ciphers, then a decade or more later raise it again and start the same work all over. Why not target a very high value, with a key space several orders of magnitude greater than the estimated number of particles in our universe, so that the more time spent working on it, the more benefit accrues?

name.withheld.for.obvious.reasons September 11, 2013 8:41 PM

@JHutch

I know the national corporate military services complex has relocated…

You can spot them quite easily, just look for the person(s) with their head up their arse.

Clerambart Adrien September 12, 2013 4:52 AM

Schneier’s and other academics’ claims are pitiful. They should just follow most of the work presented at hacking conferences (Black Hat, DEF CON, CCC, CanSecWest…). Most of the attacks used in BULLRUN have been known for a very long time, and control over cryptography has been enforced for years. Have a look at http://www.concise-courses.com/infosec/20121220/# (a talk formerly presented at CanSecWest 2011) and at https://www.hackinparis.com/talk-eric-filiol. You will see that the academic world and decision-makers are just blind and deaf.
A.C

time to update September 12, 2013 8:13 AM

The Off-the-Record plug-in (by cypherpunks) uses DH-1536, and they plan to switch to ECC using NIST curves, which is not good. They should use their own curves. And meanwhile, update from DH-1536 to at least DH-2048…

BullRun September 12, 2013 9:51 AM

@Clerambart Adrien

The news is that the evil is NSA, with a very high budget.

Before this information, these talks were merely academic, disconnected from reality and from decision-makers.

Compromised September 12, 2013 10:34 AM

@phred14 “all source packages as well as the package system have at least checksums, sometimes signatures”

http://www.zdnet.com/patch-fixes-flaw-behind-gentoo-attack-3039118330/
These checksums were nearly compromised in 2003.

Debian was compromised for two years (from 2006), and recompiling was not a solution; the fiasco was in a Debian patch to the source of openssl/openssh: http://research.swtch.com/openssl . I am now confident it was part of Bullrun.

I am confident that Bullrun has compromised other Linux distributions as well, Gentoo included, because they have millions of lines of open-source code without OpenBSD-style audits.

@phred14 “Question… Has such a subverted compiler ever been found in the wild and reported?”

The precise vulnerability you are talking about has been found in the wild outside Berkeley: http://www.catb.org/jargon/html/B/back-door.html

MarkH September 12, 2013 2:22 PM

@Compromised:

I think it exceedingly unlikely that the Debian SSL disaster had anything to do with NSA. For a start, see my comment above.

Then, read the excellent swtch.com article, the link for which you kindly provided in your post. From it, one can discover the identity of the guy who made the mistake, the perfectly innocent reason he made it, and the still-visible record of his asking for advice.

Third, I believe that NSA people are smart enough to do their work with considerable subtlety. Though the Debian error went long undetected, this was only because nobody checked; it would have been quite easy to find if anyone had bothered to audit.

Finally, it is obviously in NSA’s interest to weaken most cryptosystems enough that NSA can break them. But if you consider the agency’s overall mission, it is definitely NOT in its interest to weaken cryptosystems so that ANYBODY can break them. The OpenSSL keys from the Debian disaster are accessible to any 12-year-old who knows the basics of programming.


I have no doubt that NSA has done its best to compromise open-source cryptography software; with how much success, we don’t know yet. Mistaking Bowery bums for master criminals won’t help us in uncovering the truth.

Z.Lozinski September 12, 2013 2:52 PM

@unimportant,

Interesting problem. Given that the plaintext is the firmware binary, there is already a lot of structure in there to aid an opponent. I would also look at the key management aspects – how to manage the distribution of your two keys (K56 and K0) to the end system? At some point, you need an executable firmware image, and that means running AES and feeding it the keys. If that decryption step is done in a general purpose processor, outside some trusted computing module, that is the obvious target. If the decryption step is done inside a trusted computing module then you need to have a way to load the keys. Loading at manufacturing time means you are dependent on tamper resistance (and can’t change keys if there is a break). Loading dynamically means you are in the key management business.

The guys who designed IBM’s original DES products for the mainframe used to say that the hard part was designing all the key management infrastructure, not the basic HSMs.

The way you turn a 56-bit key into the 128-bit Rijndael key is critical. While getting out a sheet of graph paper and a pencil is tempting, I’d suggest finding a starving cryptographer at a local university and seeing if they would like to develop some working attacks on your proposed approach.

Of course, how much of this you actually do depends on the value of the firmware.

Clive Robinson September 12, 2013 4:04 PM

@ Z.Lozinski,

Out of curiosity do you work for IBM?

It’s just curiosity as a paper I have from IBM has the same name on it.

Raouf September 12, 2013 4:19 PM

@Bruce Schneier

“Clearly I need to write an essay about how to figure out what to trust in a world where you cannot trust anything”

This is a real problem to solve. While an essay would be greatly welcome, the problem needs to be addressed in a formal way, similar to provable-security methods.
It would go something like this:
If you trust conditions A, B and C, then you can trust method D.

I would hope that we as a community can quickly find some baselines for trusting some restricted forms of SSL and IPsec with existing transforms. (Notice this would be about the protocols themselves, not about implementations, at this point.)

Z.Lozinski September 12, 2013 4:28 PM

@Clive,

Yes I do. If the paper is on telecoms or networks, I’m probably the guilty party.

The first time I posted here I was in a bit of a rush, and didn’t bother spelling out my first name. Ever since then maintaining consistency seemed more important.

Raouf September 12, 2013 4:30 PM

@Gilbert

AES can be done with 512- or 1024-bit keys; however, there is one step that is not clearly defined in the key-scheduling algorithm when using keys larger than 256 bits: the key expansion can be done in different ways, resulting in incompatible implementations.

You can dig into the details in the following document if you wish.
http://goo.gl/rXAb7Y

It is not a major hurdle, but to my knowledge there is no consensus on the best method of key expansion for keys larger than 256 bits.

unimportant September 12, 2013 5:24 PM

@Z.Lozinski

The platform is a micro-controller. The new firmware is run through a simple diffuser and then encrypted with an AE which uses a session key (= K56). K56 is first encrypted with a constant hardware key (which sounds a bit redundant, though). K0 is constant and part of the encryption algorithm. Unilateral key management is based on time-stamp encryption. The ciphertext is: E(K56) || AE(diffuser(firmware))

Compromised September 13, 2013 1:29 AM

@MarkH: “The OpenSSL keys from the Debian disaster are accessible to any 12-year-old who knows basics of programming.”

You are contradicting yourself: you claim that a Debian maintainer in charge of many packages did not understand, and yet he made it through the Debian selection of maintainers, and still is a Debian maintainer.

Bruce Schneier, about that OpenSSL bug, stated that “Back when the NSA was routinely weakening commercial cryptography, their favorite technique was reducing the entropy of the random number generator.” https://www.schneier.com/blog/archives/2008/05/random_number_b.html

As operation BullRun collected and tested lots of private keys, they would obviously have noticed the collisions, in the improbable case that they did not cause that compromise themselves.

Hence, in that case, the overall mission of the NSA was brilliantly achieved.

@MarkH: “one can discover the identity of the guy who made the mistake; the perfectly innocent reason he made it; and the still-visible record of his asking for advice.”

Deniability.

This guy, Kurt Roeckx, may be working for the NSA, or may have been blackmailed by the NSA, as any congressman may have been. Remember the NSA had and has routine access to your love status (see LOVEINT), and to the love status of all congressmen, which to me is the biggest threat.

Czerno September 13, 2013 12:55 PM

Regarding the NSA’s presumed “groundbreaking cryptanalytic capabilities”, here is a copied remark recently blogged by the Tor project’s head, Roger Dingledine, who once worked there (a summer internship):

Quote Roger :
Almost all the people there [at NSA] are typical government employees (not very motivated and not very competent).
Ends quote.

Well, I don’t doubt the NSA owns (in whatever sense) /a number of/ remarkably talented scientists; on the other hand, Roger’s note seems to imply it would be adventurous to think of the NSA globally as a terrific collection of super-brains… although they’d like us to be in awe of their (assumed, undisclosed) abilities :=)

They /are/ military types, after all…

John September 15, 2013 7:58 AM

What do you think about these quotes from the chat logs of Manning?

(7:55:26 AM) bradass87: DES / Triple DES… you’re doomed in minutes
(7:55:46 AM) bradass87: AES variants… take brute force
(7:56:06 AM) bradass87: days to weeks to break
(7:56:24 AM) bradass87: its about securing the keys, using complex enough keys…
(7:56:42 AM) bradass87: and sticking to Rijndael variants
– SNIP –
(7:58:06 AM) bradass87: RSA 1024 takes a few weeks… university of michigan finally broke it with a partial
(7:59:00 AM) bradass87: 2048… never heard of it being broken publicly… NSA can feasibly do it, if they want to allocate national level “number-crunching” time to do it…

obtained from http://cablegatesearch.net/manning-logs-diff.php

MarkH September 15, 2013 12:53 PM

@John:

Interesting, I hadn’t seen these before.

I’m skeptical of most of the claims there. Consider the last: Unless somebody found the magic shortcut for factoring semiprimes, even applying “national level ‘number-crunching’,” NSA would be lucky to factor one 2048-bit semiprime in a year.

For another perspective, I estimated (for a post on this site) that using the best publicly known equipment and techniques, the electric power to factor a 1024-bit semiprime would cost at least $100,000. Because of the way the cost of factoring scales up with size (using the best known algorithm), factoring a 2048-bit semiprime would run up an electric bill far exceeding NSA’s annual budget, and assembling the hardware to perform one such factoring per year would consume all of NSA’s budget for a number of years.
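The scaling claim can be sanity-checked against the heuristic run-time of the General Number Field Sieve, the best publicly known factoring algorithm. The script below is a back-of-envelope estimate only; the absolute numbers are meaningless, and only the ratio between the two key sizes matters:

```python
# Heuristic GNFS complexity: L(n) = exp(c * (ln n)^(1/3) * (ln ln n)^(2/3)),
# with c = (64/9)^(1/3). We compare 1024-bit and 2048-bit moduli.

import math

def gnfs_cost_log(bits: int) -> float:
    """Natural log of the heuristic GNFS cost for a `bits`-bit modulus."""
    ln_n = bits * math.log(2)
    return (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)

ratio = math.exp(gnfs_cost_log(2048) - gnfs_cost_log(1024))
print(f"2048-bit factoring is roughly 10^{math.log10(ratio):.0f} "
      f"times harder than 1024-bit")
```

The ratio comes out around a billion, which is why a 2048-bit factorization sits far outside even a nation-state's electric bill while 1024-bit is merely very expensive.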

The claims about AES seem even more far-fetched: the best known attacks* — and there’s been a lot of analysis of this cipher! — are still practically brute-force in cost, and the entire resources of planet Earth wouldn’t be enough to mount such an attack. Even if usable quantum computers existed, finding an AES key would still take an enormous number of computer years.

[Aside: I expect to see quantum computers solving crypto-sized problems about the same year that I can go the car dealer and buy a fusion-powered DeLorean.]

Of course, we can’t eliminate the possibility that NSA achieved some super technology nobody knows about. Personally, I doubt that they are that far ahead of the world’s academia and the developments of the computing industry.

For me the big question about the claims in the chat log: what was Manning’s source? As far as I know, nothing about his work would have given him access to NSA inner workings. Secret crypto breaks are exactly the kind of thing NSA wants to guard most zealously; they will advise interested parties (like the Pentagon) “use this system,” or “avoid that system,” and the like … but they don’t want to advertise (for example) that they’ve broken AES. Why would that be disclosed to an army private?

One can imagine that Manning received some ultra-sensitive briefing, but does that make sense? Might the statements in the chat have been some mixture of reading blogs like this one, and his own personal surmise?

*AES related-key attacks are much better than brute force (though still incredibly costly). However, these attacks require very special conditions that will rarely exist in practice, and don’t apply at all to the use of AES for securing personal files, unless the attacker has subverted your encryption software, in which case they don’t need cryptanalysis!

Martin September 22, 2013 11:42 AM

Doesn’t access to cables and the ability to mess with CA certificates mean they have total practical control of HTTPS through man-in-the-middle attacks?

Jonathan McCain September 30, 2013 10:19 AM

People speculate that the RDRAND instruction on Ivy Bridge processors has been compromised. If anyone has a spare CPU and motherboard lying around, this can be tested.

The RDRAND internals put the entropy through a deterministic random-bit generator before sending the results to the user. This is similar to how rand() works: a single “seed” with limited entropy will generate a long list of seemingly random output, but because there is only one seed, the output is predictable and can be reproduced.

To get around this, check the RDRAND data at reset time.

If you had access to a spare CPU and motherboard, you could install your own program in lieu of the BIOS which would catch the RESET vector, get the RDRAND information, initialize a serial port, log the results to a 2nd computer, and force the CPU into RESET.

(For clarity, glossing over some obvious stuff such as storing results in memory and dumping blocks, or dumping to a faster device than a serial port.)

All of the RDRAND tests I’ve seen have looked at continuously generated data, which, due to the internal hashing algorithm, would pass even if started from a low-entropy seed. To the best of my knowledge, no one has checked whether different machines generate the same string of random numbers, or whether the starting seed has good entropy.

With a terabyte drive on the logging computer, it should be possible to see whether RDRAND has at least 32 bits of entropy: log 4 billion rounds and look for collisions.

RDRAND probably has at least this much entropy, but if not, boy, would that paper hit like a bombshell!
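The collision test proposed above can be illustrated with a scaled-down simulation. Here a seeded software PRNG stands in for the hypothetical weak hardware (no real RDRAND data is involved), and the sample counts are reduced so it runs in moments:

```python
# Toy simulation of the boot-time collision test: if the reset-time
# "random" value is a DRBG seeded with only k bits of true entropy,
# outputs collected across many machines collide at the birthday rate.

import random

def sample_machines(n_machines: int, entropy_bits: int) -> int:
    """Count distinct reset-time outputs across machines whose
    underlying seed has only `entropy_bits` bits of entropy."""
    values = set()
    for _ in range(n_machines):
        seed = random.getrandbits(entropy_bits)          # the only true entropy
        values.add(random.Random(seed).getrandbits(64))  # DRBG output
    return len(values)

# With 16-bit seeds and 10,000 machines, collisions are plentiful;
# with 64-bit seeds they are essentially impossible at this scale.
weak = sample_machines(10_000, 16)
strong = sample_machines(10_000, 64)
print(f"distinct outputs: weak={weak}, strong={strong}")
```

The same logic at full scale (4 billion logged boots, terabyte drive) would distinguish a 32-bit seed from a genuinely high-entropy one, exactly as the comment suggests.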

0x7a69 September 30, 2013 10:20 AM

RDRAND backdoor more subtle than that.

Only few chips backdoored, most not. Brazilian mission to the United Nations in New York had computer spied on. Botnet uplink was to 177.135.198.244, still online, very big.

Hardware reversing of CPU: Masks normal to optical anaylsis. But transistor doping tampered with on feed from CBC-MAC whitener to CTR cascade DRBG. All but 32 read constant. Microcode tampered with on sample to shortcut AES-NI after XORing in RDRAND.

If known constant and mask, CTR(n+1)-CTR(n) with 2^32 search. Sounds familiar. Recent publish.

Sorry for poor language: Identity disguise.

You stole our revolution. Now we’re stealing it back. 0x7a69

Brian October 25, 2013 7:47 PM

Why don’t we put together enough IT programmers with the great skills we have and start a new OS that will be untouchable? I am very skilled in C++ and information technology. Let’s get a team and build our own OS dedicated to the freedom we fought for, and in so many cases died for. Let’s rise up now and work together in an effort to protect and secure the privacy we are due. I guess they think that for them to have privacy, we can have none. I would gladly work countless hours to produce an untouchable OS that “we the People” can trust. I can be reached at cing4us@gmail. Anyone with the right resources who is willing to put our foot down, draw the line, and put in the work to create this OS, please contact me. WE DESERVE TO FEEL SAFE AND PROTECTED, not intruded upon and invaded. This stance will be epic and triumphant, patriotically strengthened throughout the world of information technology. A great man once said, “All that is necessary for the triumph of evil is that good men do nothing.” Do not allow evil to triumph. Do not sit by and do nothing. Stand united.

Write In Vote December 13, 2013 11:36 AM

Open source does not discourage the NSA. They just put their people to work on it.

Google spokeswoman Gina Scigliano confirms that the company has already inserted some of the NSA’s programming in Android OS.

Also look up Linux on Wikipedia.

Mark RIchards February 3, 2014 5:45 PM

Great work Bruce, but even as you document this, your website is happy to negotiate RC4 connections in Firefox.
Is this a vote for shrugging and accepting the situation? It seems that even within the technical/privacy community (never mind Joe Public), we don’t know what to do about this problem yet.

"Germanium" Jack May 31, 2014 1:21 PM

Question about one time pad key

If you pre-share a large set of truly random keys (in the gigabytes) and surmount the hurdles of physical security at both ends, making sure that the one-time pad data is deleted byte by byte so that it can never be accidentally reused, is the one-time pad algorithm itself secure, even if a nuisance given the requirements? This is assuming that the one-time pad data comes from something like a noise diode, or op-amps creating pulses from resistor and shot noise, sampled by a cosmic ray detector trigger.

Pretty ugly requirements compared to a good crypto algorithm! Just wondered if it would be secure in a case like that. Pad reuse or using a pseudorandom key generator would of course render it pointless. Silly question; I just wondered. Of course any flash with wear leveling for key storage would be out, because there is no immediate overwrite or delete. Possibly a custom flash device.

I did read the page about one time pads. It seemed like MAYBE if ALL of the security requirements were met…(?) IF I understood. The last “if” is why I’m asking.

THANKS! – I really enjoy reading you techno-stuff. 🙂

Thanks for helping this techie learn (and relearn) the fun of computer science while working to recover from some health stuff that dropped anchor on my career for a while. Some days I still miss the days of minis and having to oil the terminal. It was fun, but today’s stuff is awesome!

Mr. Pragma May 31, 2014 1:36 PM

“Germanium” Jack (May 31, 2014 1:21 PM)

Theoretically OTPs are perfectly secure.

But practically there are lots of ifs, some of them mentioned by you, some of them related to being random (or not), etc.
Reading this blog, btw, you will find that some of the conditions you so lightheartedly assumed to be resolved are actually major problem zones in terms of implementation, verification, and other factors.

Clive Robinson May 31, 2014 6:03 PM

@ “Germanium” Jack,

As @Mr. Pragma says, theoretically OTPs are secure.

However in practice there are all sorts of problems with them.

The reason that OTPs are “theoretically secure” is because “all messages of the same length are equiprobable”, and this is because “the state of the next bit of the keymat is independent of all preceding and following bits and has no bias, so is equiprobable”.

The problem is that this theoretical security does not consider certain types of attack that affect all stream ciphers (of which the OTP is one).

Assume for a moment the plaintext is in a very stylised and rigid format, especially the message header containing the “to” and “from” fields. If you know where a message originated from, then you may very well know exactly what the plaintext is for the “from” field and exactly where it is in the ciphertext.

If you are able to intercept the message between the sender and the recipient, then by the process of “bit flipping” you can change the content of the “from” field to something else you choose. To do this you simply find the difference between what you know the “from” field says and what you want it to say. This produces a bit mask that you use to “bit flip” the bits in the intercepted ciphertext. You then forward this changed message to the recipient, who will on decoding the message see what you want in the “from” field, not what the sender actually put there.
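A minimal Python sketch of the bit-flipping attack described above, using a hypothetical rigid message format with the “from” field at a known offset (the message, names, and offset here are invented for illustration; the attacker never sees the pad):

```python
import os

# The sender encrypts with a one-time pad (keymat).
pad = os.urandom(10)
plaintext = b"FROM:ALICE"
ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))

# Attacker: the difference between the known field value and the desired
# forgery is the bit mask; XOR it into the ciphertext at the field's offset.
mask = bytes(a ^ b for a, b in zip(b"ALICE", b"BOBBY"))
tampered = bytearray(ciphertext)
for i, m in enumerate(mask):
    tampered[5 + i] ^= m

# The recipient decrypts normally and sees the forged "from" field.
decrypted = bytes(c ^ k for c, k in zip(tampered, pad))
print(decrypted)  # b'FROM:BOBBY'
```

Note that the attacker needed no knowledge of the pad, only of the plaintext format.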

To stop this you have to reliably “armour the plaintext” prior to encrypting with the OTP. There are various ways this can be done, such as message checksums, but the checksums have to be produced by a cryptographically secure method. If ordinary checksums are used, then you can compute the difference in the checksum and bit flip it to correct it. Thus you must take care to use message authentication codes that are “fit for the purpose”.

This is but one issue that makes the use of OTPs way more challenging than most books would have you believe. When coupled with keymat management issues, OTPs are generally regarded as more trouble than they are worth.

In general the only use they see these days is emergency key transfer or emergency “flash” engineering order wire (EOW) traffic. That is, if an “out station” is surrounded or bounced etc., then the process of “bugging out” usually involves the loss of current session keys. An OTP with appropriate plaintext encoding and armouring, which can be done using pencil and paper, can be used to transfer a new emergency session key from the “home station” or send engineering information. However, with modern integrated systems, where the communications and encryption kit is in a single man-transportable or even hand-held unit, such emergency procedures are very much deprecated.

Wael May 31, 2014 9:56 PM

Some unsorted thoughts…

Re OTP:

One time pads are not breakable given the following:
1- Length of the message = length of the pad
2- Pad is confidential (of course)
3- Pad is not predictable

The resulting enciphered message is indistinguishable from a random sequence — theoretically.

This is the “theoretical” part. When it comes to implementations, the attack surface is, more often than not, increased. This increase in attack surface is proportional to the complexity of the system (this is empirical), and it is one of the reasons to adhere to another security principle: KISS, AKA “Economy of Mechanism”, “Reduction of Complexity”, etc. The problem is that in practice, when a weakness is found in the system, designers tend to add yet another “layer of complexity”, which is something that was discussed here more than once. The designer should not be designing the system wearing the hat of the attacker, although attackers’ methods should be kept in mind (one aspect of awareness).

From an implementation point of view, the pad should be random. Reason being: (Random) XOR (Message) = Random, and that applies even in the trivial case where Random = Message, where the resultant output becomes zero. This is very secure; neither the attacker nor the recipient will be able to decipher it (the recipient already has the message before it was sent, a case of non-causal systems 😉, where the output is produced before the input is fed in).

The challenge is mainly in three areas:
1- Agreeing on an OTP — how to exchange it, if it’s random and not a particular “book”?
2- Exchanging it with multiple parties.
In its most elementary scheme, OTP suffers from a lack of non-repudiation, breaches require replacing the group pad, and the quality of the random number generator must be “pretty good”.

On top of that, if the system is somehow, although hard to imagine ;), insecure, the OTP can be extracted, and one can kiss “perfect security” goodbye. If the OTP is discovered to be not really random, for example the output of an LFSR, then it’s possible to deduce it with better than brute-force effort, through frequency analysis coupled with chosen-plaintext attacks and/or determination of the sequence logic of the LFSR used.

Now on a separate vein: I mentioned previously that I use FreeBSD for some purposes. Last night I upgraded it to FreeBSD 10, and found out that gcc was replaced with LLVM and Clang. I don’t understand the rationale (although I read the pros and cons of each). I am starting to wonder about the effects on security, performance issues aside. Now almost every Tom, Dick, and Harry is jumping on the LLVM bandwagon, and using it for various purposes of “security”. Not sure how that’ll pan out; it remains to be seen. Kinda not happy FreeBSD took such a direction, and I hope I am proven wrong in the future, because I liked the OS, even when other “distributions” are much simpler to set up, for example PC-BSD, or any of the Linux distributions. Any thoughts on LLVM and its chances of getting subverted through the LLVM backend? This will take some time to evaluate, I think.

Different topic:

I noticed something strange on my network so I fired up Wireshark (the new “native” Mac version that uses Qt rather than GTK looks much better and doesn’t require X11 or XQuartz, but is not quite functional). Turns out it’s not my hot spot that was causing the “issue”. During the exercise, I found out some things that I was subconsciously aware of; looking at Wireshark traces confirmed some of them…

Was reading some parts of a Kindle book [1] (turned out it’s on the boring side)…
So not only do they know what book you purchased; they also know what page you are on, what you highlighted, and what words you searched for. This was evident when I continued reading a book on a different device, and it asked me if I would like to go to the last page I was at on the other device. Also, I think I noticed a new feature when I searched for something in the book. It asked me: would you like to view “popular highlights”?

I see a problem with Siri, too. Can you imagine searching for someone’s address and then something “bad” happens to him or her? A Google search may be “deniable”. When one’s voice is captured, it makes it a bit harder to deny…

[1] Between Silk and Cyanide: A Codemaker’s War; Leo Marks

Chris June 5, 2014 11:37 AM

“Clearly I need to write an essay about how to figure out what to trust in a world where you can’t trust anything.”

I love it! Please write this essay!

Mike June 8, 2014 7:16 AM

I would ignore the spin doctors that seem to live here. You have a brain; use it. The source has been available since “EM4” days and has never been broken. You can keep using TrueCrypt 7.1a safely, of that I have little doubt. Besides, if you are worried, then write your own version from their 7.1a source. I am sure some new team will take it up and rewrite it under a new license, most probably the GPL. It would have been much better had they done it themselves, but it seems they didn’t want to. But it doesn’t matter; it will happen regardless of how they feel about it. They have abandoned it, and openly said so. That means it now falls under “orphan works”, and even though they still have copyrights, they have no right to go after anyone who changes their work under a new license. Maybe a lawyer here can correct me, but I seem to remember that is how it works.

Clive Robinson June 8, 2014 9:46 AM

@ Mike,

Before talking about works derived from TC, one needs to consider whether TC is itself a primary work or a derivative work, and in what areas.

We know for instance that the encryption methods used by TC are prior art, and likewise much of the way it works with the host OSes is prior art; in both cases plenty of documentation supports this. Which leaves the question of what is new or sufficiently novel so as not to be either obviously and easily shown to be derived, or sufficiently similar to be likely judged a derived work.

I don’t know about later versions of TC, but the early versions I looked at had, to my eye, nothing new or novel about them, and at best could be considered a pooling of clearly derived work.

This leaves the software source code, in which those writing it may have a degree of originality which is therefore copyright to them, but their use of standard APIs and algorithms renders much of it suspiciously aggregated code from other authors. Thus they are like editors and publishers releasing a compendium of others’ work; they can claim originality in the aggregation but little else.

Thus a “clean room” approach, where one team takes the TC source and derives an above-source-code abstraction of it, will produce an original work even though the work of others is included, just as a painter painting a picture of the inside of an art gallery does.

Thus this new original work can be used by another team to produce a new source-code-level work, which may not even be in the same language as the TC source. Provided the “look and feel” is sufficiently different, or the code is compliant with some other recognisable standard, there is little the TC writers could do legally, even if minded to do so and of sufficient resources to make the effort viable as anything other than patent trolling.

Claude Petit September 13, 2014 2:15 PM

I use TrueCrypt to protect my employers’ data, and also to protect my personal data (bank account data, government data exchanges, …). If the NSA can decrypt it easily, very soon others will do the same. Obfuscation has never been safe, and the same rule applies to backdoors. Some day, somewhere, someone will discover or learn how to do it, and that’s what worries me. For sure, I’ll not move to Microsoft’s cryptographic solutions as suggested!

Sancho_P September 13, 2014 6:55 PM

@ Claude Petit:

AFAIK it is not known that they “can decrypt it easily”, or that anybody could at all.
It may depend on how you treat your keywords, though.
Also, you should avoid their latest 7.2 update.

Maybe one of the experts could hint at what “easily” would mean?

John G. November 4, 2014 12:23 PM

OTP, if done correctly, is unbreakable against all attacks. Whether or not the message is stylized or has certain patterns doesn’t matter; these are effectively erased when processed through the XOR with the random numbers. If the attacker does not have access to the random numbers used, then in essence it is unbreakable.

Many have speculated as to how to break an OTP-encrypted message, but no real specific methods that will guarantee success in breaking the encryption have been described. Other than gaining access, through other means, to the random numbers, the message itself, or the person him/herself, OTP is unbreakable.

So if you are a secret government employee, or want to somehow show off, sorry, it won’t work. Tough luck.

John G. November 4, 2014 12:32 PM

So here is how to properly encrypt a message using OTP:

  1. Make a random number generator; there are several designs online, and if truly paranoid, you can make a good random number generator simply using ping-pong balls (make sure they are nearly exactly the same) and a bingo tumbler. Ensure you are not under visual, audio, or EEG surveillance.
  2. Encrypt message and send to target.
  3. Share the encryption pad with the target. Ensure the target is careful not to reveal the numbers; in fact, burn or destroy all remaining copies of the random numbers, such that only the target has them. The random numbers are encased in a container that must be physically broken to access or read them (other than by using X-rays or the like).
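The steps above can be sketched in a few lines (using the OS random source in place of the bingo tumbler; the hard parts, distributing and destroying the pad, are exactly what the replies pick at):

```python
import os

# Step 1: truly random pad, exactly as long as the message.
message = b"MEET AT DAWN"
pad = os.urandom(len(message))

# Step 2: encrypt by XORing message and pad byte by byte.
ciphertext = bytes(m ^ k for m, k in zip(message, pad))

# Step 3: the target, holding the only other copy of the pad, decrypts.
recovered = bytes(c ^ k for c, k in zip(ciphertext, pad))
print(recovered)  # b'MEET AT DAWN'
```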

So how exactly would this encryption be broken?

Wael November 4, 2014 12:56 PM

@John G,

OTP, if done correctly, is unbreakable against all attacks.

I don’t believe anyone will argue against OTP being theoretically unbreakable. Reason being: Message XOR Random OTP = Random, i.e., the encrypted message is indistinguishable from a random sequence. However, the “if done correctly” is the point of contention.


Share encryption pad with target.

This is one challenge of an OTP implementation… How do you exchange an OTP “correctly” with someone:

  • You don’t know
  • Far away from you; a different continent, for example

Ensure Target is careful not to reveal numbers

How exactly do you do that? OTP suffers from two points of attack: the sender and the receiver. You have no control over the “receiver”, and as such, you have to “trust” their protection mechanisms. This weakness doesn’t exist in a PKI solution where you share a public key, because… it’s public. An OTP must necessarily be private.


So how exactly would this encryption be broken?

A better question is how exactly would one implement OTP “correctly”? Propose a method, and you’ll get a list of “attacks”… Or should I say: Propose a “design”, and you’ll get a list of “methods”?

Clive Robinson November 4, 2014 5:14 PM

@ John G,

There are a whole load of problems with OTPs besides the issue of “issuing, securing, and auditing” KeyMat.

First off, the OTP is just another “stream cipher” and suffers many of the same ills as stream ciphers that use deterministic key generators.

One mistake people make is over the definition of an OTP’s secrecy, which is “all messages are equiprobable”, not that “it keeps the message hidden”. In some cases OTPs can be broken without the message being revealed, the classic being the “yes or no answer”; thus other measures have to be taken, which is why OTPs have in many cases been reserved for “super encryption” of another ciphertext and never plaintext.

Because, as with all stream ciphers, an OTP on plaintext falls foul of plaintext structure. In many communications the message has a standard rigid format like “Time&Date, To, From, Classification, …”

By a process of “bit flipping” you can change information in the fields from one thing to another without having any knowledge of the keymat. Thus you can “resend” an enemy’s message with new and seemingly correct Time&Date, To, From fields etc. To prevent this the plaintext needs to be “armoured” in some way that does not involve “simple checksums” (because they can be bit flipped as well).

Then there are issues to do with “keymat run length”: a true random source could churn out one hundred zero characters in succession, or any other regular pattern. This means that the ciphertext becomes a simple substitution over that length, and if the plaintext has significant statistics they will be recognizable.

Thus there are rules when generating OTPs about limiting run lengths down to about a word (five characters) and no more.
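The run-length rule can be expressed as a simple check over candidate keymat (the five-character limit is taken from the comment above; actual generation rules vary):

```python
import itertools

def has_long_run(keymat: bytes, limit: int = 5) -> bool:
    # True if any byte value repeats more than `limit` times in a row.
    return any(len(list(group)) > limit
               for _, group in itertools.groupby(keymat))

print(has_long_run(b"\x00" * 100))      # True: reject this pad material
print(has_long_run(bytes(range(256))))  # False: no long runs
```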

Thus, contrary to many people’s claims, there is one heck of a set of rules about using OTPs which, had the Russians followed them, would have meant VENONA did not work to their disadvantage, even though they had reused their pads.

Anura November 4, 2014 5:51 PM

Ah, authenticating OTP is easy… Just hash the ciphertext, append it to the message, and encrypt it using the OTP!

I saw someone do this with a stream cipher once. Seriously, don’t roll your own; something like this, which sounds fine on the surface, becomes trivial to crack in reality.
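A sketch of why that construction fails, under one plausible reading of the scheme (hypothetical: C = M XOR pad, tag = SHA-256(C) XOR extra pad bytes); the attacker patches both the ciphertext and the encrypted tag using only public values:

```python
import hashlib
import os

# Sender: encrypt, then "authenticate" by encrypting the ciphertext hash.
msg = b"PAY 100 TO ALICE"
pad = os.urandom(len(msg) + 32)
ct = bytes(m ^ k for m, k in zip(msg, pad))
tag = bytes(h ^ k for h, k in zip(hashlib.sha256(ct).digest(), pad[len(msg):]))

# Attacker: flip "100" to "999" (offset known from the rigid format),
# then repair the tag with H(old ct) XOR H(new ct) -- no pad needed.
mask = bytes(a ^ b for a, b in zip(b"100", b"999"))
ct2 = bytearray(ct)
for i, m in enumerate(mask):
    ct2[4 + i] ^= m
ct2 = bytes(ct2)
fix = bytes(a ^ b for a, b in zip(hashlib.sha256(ct).digest(),
                                  hashlib.sha256(ct2).digest()))
tag2 = bytes(t ^ f for t, f in zip(tag, fix))

# Receiver: decrypts the forged message, and the tag still verifies.
pt = bytes(c ^ k for c, k in zip(ct2, pad))
ok = bytes(t ^ k for t, k in zip(tag2, pad[len(msg):])) == hashlib.sha256(ct2).digest()
print(pt, ok)  # b'PAY 999 TO ALICE' True
```

An unkeyed hash gives no authentication here because anyone can recompute it over a modified ciphertext.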

Figureitout November 5, 2014 10:17 PM

John G.
–Agreed that OTP won’t be broken easily, if at all, especially if you pre-encrypt messages, which is trivial; and if you practice good OPSEC and keep mouths shut, there will be little proof of what the message actually is.

People saying that OTP “is trivial” to break don’t, in my view, have enough practical experience, and don’t appreciate the unspeakable amounts of data created every day.

For all you OTP-crackers out there, I created a “trivial” OTP in less than a minute, with practically zero design considerations and practically nil OPSEC. I’ll keep the pad and check back here in case someone actually breaks it (you could get a few letters with old-school analysis and probably work out a good guess; I didn’t fill the message with garbage, which would make it harder). Also, the initial message is meaningless; what it actually says to someone is an entirely different code word, and one can continue this game to the Nth degree. I also made it easy by using correct spelling (for the most part) and plain English.

End of story: if you’re to the point of OTPs (which isn’t even paranoid today, just reasonable), they’ll be worth less than the effort of solving them (be sure to drink your Ovaltine).

MLJJV PGHYJ ACVBA SMLJ4 KXDSA 0JUIV PLG3M

Wael November 5, 2014 10:37 PM

@Figureitout,

MLJJV PGHYJ ACVBA SMLJ4 KXDSA 0JUIV PLG3M

This looks like my Windows activation key 🙂

Figureitout November 5, 2014 11:13 PM

Wael
–Who said it isn’t? :p Sorry, no mention of “pumpkin butts”, I know…tear down the cheek, not that cheek!

RedRedMane November 12, 2014 11:28 PM

If you really think you can be COMPLETELY anonymous online, get random generating encryption codes, get behind proxies, (never mind the TOR browser! They have “merged” with StartPage and its twin sister Ixquick. This seems to have been a “condition” placed on TOR by the authorities. Now, with TOR being “one” with StartPage, (there ARE no versions of the TOR browser that DON’T have StartPage, now), you can’t do anything “controversial” — i.e. “illegal!” StartPage is acting much like a TOR “NANNY” that restricts the user from doing anything that might offend law enforcement! So, again, FORGET about using TOR. There ARE absolutely “NO” existing versions of the TOR browser, now, that do NOT have StartPage guarding the gateway.)
So, forgetting about TOR, if you (again) think that it is possible to “ghostify” yourself with proxies, randomly generating/reconfiguring encryption codes, etc., go ahead and TEST that theory of yours by typing in a search query (after having established all those proxies and random encryption codes) for child pornography!!!! I mean, after all, you ARE a literal “Cyber-GHOST!” ..right? Why not order a few PIPE BOMBS while you’re at it!?! Upon getting all that “CyberGhostery” set up, why not “really” TEST IT OUT?!!? Just go to the top of your search engine and, in the search space, type in “alt.kidsex!” Then type in “alt.bomb.building kits!”
Do that, and if you think that you’re some kind of invisible Cyber-Ghost, you’ll get one very RUDE awakening when you find a SWAT TEAM crouched outside your front door!!!! Dude, the NSA, Homeland Security, the ICAC ( Internet crimes Against Children taskforce), the CIA, the FBI, and even many local Police detectives use advanced, ultra-secret forensic-based “stealth” cyber technologies that “sift” right through proxy nodes and crack “encryption codes” like professional chefs crack EGGS!!! If you think encryption codes and proxies will save you from the authorities, you need to do a real honest to God REALITY CHECK!!!

Wael November 13, 2014 12:07 AM

@RedRedMane,

If you really think you can be COMPLETELY anonymous online, get random generating encryption codes, get behind proxies

On the flip side, your recipe can be used as an anonymizer, too! It will quickly anonymize the identity of the person! Instead of johnsmith@gmail.com, the new anonymous identity will become inmate378@singsing.gov 🙂 Don’t try this at home!

Justin November 13, 2014 1:11 AM

@RedRedMane

Are you talking about Ghostery? That just blocks advertising trackers. It has nothing to do with “random generating encryption codes” or proxies. I like to block ads and trackers as much as possible, just to keep my computer from getting infected with adware, malware, and spyware.

Figureitout November 13, 2014 1:38 AM

RedRedMane
–Lol… yeah, first thing I’m doing is typing up kiddie porn and pipe bombs once I get an anonymous connection. Sh*t’s disgusting and bombs aren’t that fun (too loud). That’s the only purpose I seek for anonymous browsing; not computer security research and tipping off other designs and systems I’m working on. Yeah, I’m sure they can wade thru the torrents of data leading to nowhere, just like they could find a commercial plane 8 months after it disappeared. They also locate and identify every single attacker penetrating their networks; oh, and they were able to stop the one nutjob that just jumped a fence and literally ran unimpeded into the White House…

And yeah, you could have a SWAT team at your door at any time anyway, b/c emergency calls from citizens aren’t secured, and the military, I mean police, get involved and get led around and actually turn into an attack weapon for someone else… http://www.nbcphiladelphia.com/news/local/NJ-Family-Swatted-by-Video-Gamers-280455482.html

Crack this egg (and again, drink up on your ovaltine). Aaaaannnnd the key’s gone.

xc67L vq?sq 56#vJ 94Tna OH/Bd !PQ9T TTpfx

RedRedMane November 13, 2014 1:42 AM

Like I said, if you really think that proxies and randomly changing encryption codes can transform you into some kind of cyber-ghost-BATMAN, why not TEST your faith!? By searching for the most ILLEGAL THING ON THE PLANET, you become one of the most “pursued” people on Earth!!!! …by those who are paid SIX to SEVEN FIGURE INCOMES, 80% of which goes toward catching stupid folks who thought that if they were extremely “clever”, they wouldn’t get busted!!! People who, like you, thought that they were “Cyber-Ghosts!”
To TEST your cyber “magic”, all you have to DO is commit the most furiously “fought” cyber crime of all!! If, after 20 years, the authorities STILL haven’t found out about you, you can “congratulate” yourself!!! You really WERE a “GHOST” in the internet!!! Don’t try it! Your chances of NOT getting caught are about 1 in 20,000!!! And IF you somehow manage to survive years in the pen, you’ll probably end up being LOCKED UP for LIFE due to the existing “CIVIL COMMITMENT” laws!!!
Man, you talk about NOT WORTH it!!!!

Figureitout November 13, 2014 2:15 AM

RedRedMane
–Perhaps you’re on the verge of “figuring it out”. And for those that have, I think they learned a good lesson; while probably remaining bitter for the rest of their lives and hating me for it.

And like I said, there’s no purpose in looking up stupid things; people seeking actual real security (which involves anonymous searching of information) become “terrorists” for seeking to fully protect themselves. How can a normal citizen protect their credit card these days when even the damn president is getting his card hacked? And good to know money is being spent wisely, with people earning lots of $$$ being lazy, catching idiots and not actual criminals; the country’s going broke anyway due to wealthy elites sucking funds out and the gov’t not having the balls/competence to tax and prosecute them and normalize the economy, to get a normal middle class again and income mobility, so people feel like they actually have a chance at a decent life.

This is just you making up these scenarios; and thanks to the insecurity of nearly everything and no solid proof existing anymore, all the evidence could be fabricated anyway (especially by breaking into people’s homes when they go to work/school, ignoring all laws, and creating secret ones with secret courts and secret interpretations).

Oh check out one more code you can probably crack: http://i.imgur.com/cEdPi.gif

Nick P November 13, 2014 10:33 AM

@ RedRedMane

That’s a dumb idea, and you’d know how dumb if you read the Snowden leaks. They’ve had the ability to catch all kinds of crooks the whole time despite their false security efforts. That includes the pedophiles the FBI always mentions. Yet they didn’t employ their tech to do that. They let the crimes continue. The only ones they pursued outside terrorism were fraud and drug charges, that I’m aware of. Even with those, they used a parallel construction technique to make it appear like the bust happened a different way. So, people following your advice would (a) get ignored or (b) get busted without realizing their technical security was broken.

Hence, doing a bunch of crimes on the Internet doesn’t test anything at all except their most basic LEO capabilities. The higher-level capabilities are a secret that they’re willing to sacrifice kids to protect. Too bad for them that Snowden leaked them, but they are still effective against most, from what I see.

malak February 15, 2015 10:00 PM

Well, the Republicans would do something to change this with some pressure from their constituents, most likely from Tea Party or Libertarian constituents, but the Democrats will fight changing this tooth and nail, because the reality is that this fits in with their agenda of controlling and monitoring the populace.

Dave623 February 28, 2015 10:30 AM

The NSA can decrypt anything that is encrypted (one-way encryption is the hardest).
Why don’t they provide American businesses with the technology to keep prying eyes out?
All Americans’ personal and private information is available on the internet for free if you ever did any one of the following:

  1. Paid a phone bill
  2. Paid US Gov’t taxes.
  3. Owned your own house.
  4. Posted anything on social media or another internet blog (including this one)
  5. Got a traffic ticket.
  6. Obvious other reasons including if you have ever been arrested.

The personal and private information includes:
Your complete first, last, and middle name.
Your complete current address.
Your car including make, model, year – if you own a car.
Your age and date of birth.
Your social security number (fee is charged)

Trust me, the bad guys have everything they need to know about all of us to do each of us harm.
Can’t the US Federal Government provide US businesses with the tools and support they desperately need to keep OUR personal and private information out of the hands of the bad guys?

I know it’s already too late for me, but what about my grandchildren, and yours?
Don’t they deserve privacy?

Keep our business transactions safe!
-Dave623

Charlie February 28, 2015 11:31 AM

Right on, Dave!
The only thing keeping us safe is that the odds are in our favor.
There are 318.9 million Americans (2014), so the odds of a bad guy getting to me are one in 318.9 million; the chances of winning the lottery with a single ticket are better, at one in 175 million.

Every day there are thousands of Americans attacked and injured by criminals using personal and private information obtained from the internet.

The Federal Trade Commission’s Consumer Sentinel Network (CSN) received over 2 million consumer complaints in 2013. Identity theft complaints accounted for 14%, or approximately 280,000 complaints. That averages to 767 people injured every day (of those that were reported).

Let’s demand that government work for us.
Keep internet use safe from the criminals.

Your mom March 7, 2015 10:10 PM

This guy who made the post “Mike • September 5, 2013 3:40 PM” should be shot on sight

Keith Lockstone March 21, 2015 2:21 PM

That sort of “backdoor constant” trick is possible in SHA, during the production of the constants generated from the square and cube roots of prime numbers.

I have not seen any explanation as to why this should occur.

The fractional part is scaled up to a 32-bit integer, ignoring the remaining fractional part, i.e. rounded down.

Example: the fractional part of root 2 is 0.4142135623730950;
scaled up by 2^32 this is 1779033703.95,
and the hex version used in SHA is 6A09E667.
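This arithmetic can be checked directly; the result matches SHA-256’s first initial hash value:

```python
import math

# Fractional part of sqrt(2), scaled to 32 bits and truncated (rounded down).
frac = math.sqrt(2) - int(math.sqrt(2))   # 0.4142135623...
h0 = int(frac * 2**32)                    # 1779033703
print(hex(h0))  # 0x6a09e667
```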

Keith.

Olaf September 4, 2015 4:40 AM

I believe that RC4 is unbreakable if properly used. If you remove the bias (dropping the first 1024 keystream bytes) and use a large key size (RC4 allows up to 2048 bits), no one can break it in a feasible amount of time.
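A sketch of the “RC4-drop” construction described above, i.e. standard RC4 with the first keystream bytes discarded (this removes the known early-keystream biases, though it is not by itself an endorsement of the unbreakability claim):

```python
def rc4_keystream(key: bytes, n: int, drop: int = 1024) -> bytes:
    # Key-scheduling algorithm (KSA).
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (PRGA), discarding the first `drop` bytes.
    i = j = 0
    out = bytearray()
    for k in range(drop + n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        if k >= drop:
            out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)
```

With drop=0 and key b"Key" this reproduces the widely published RC4 test vector, where the plaintext "Plaintext" encrypts to BBF316E8D940AF0AD3.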

Tom February 21, 2017 12:30 PM

I have to say, after reading some of the comments here, those against the NSA cracking encryption are blind to the benefits of what they are trying to do, and yes, the ability to break encryption IS a big benefit to being able to do what they do.

Encryption isn’t just protecting the patriots, those that want to simply keep proprietary, financial, medical, or otherwise private and personal information safe from prying eyes. Despite those that say they should already have all the information on anyone they need through social media, encryption is also protecting the bad guys; they encrypt files, too. ISIS uses encryption over chat apps to organize military-style attacks and terrorist bombings, and they aren’t posting their home addresses on their FB accounts! Plans to attack institutions will likely be stored, and transmitted to partners, in encrypted form by the criminals plotting them.

This forum seems to be pretty one-sided, and people overlook the “police” aspect of this, where police and security firms need the ability to see if they can pre-empt some of these plots before they cause 9/11-style damage and loss of life. If even one life is saved because the NSA’s decryption algorithms were used by the police or FBI to stop such a plot, it is well worth their doing it.

If your records are so personal to you that you think even the potential that the NSA could see them is not worth this to you, I would strongly encourage you to re-think your position. No one cares that you have a birthmark on your bum or a house in the Cayman Islands you don’t want your wife to know about. That is not why the NSA needs to crack encryption, and they aren’t looking through the average citizen’s medical or financial records. They are subject to getting warrants just like the FBI or any other agency, which means they need probable cause, a reason that a federal magistrate can agree with, to look into what they are looking into. Don’t want them looking at you? Don’t give them a reason!

And if someone were to steal and use these NSA tools for nefarious purposes, they would be guilty of cyber-intrusion and could be arrested and charged as such. Even the guy who stole some NSA tools was charged with theft without ever having used them. Obviously the NSA can figure out when its tools are copied, as shown by the fact that they found and arrested that guy. This should make people feel safe, but instead they love building webs of conspiracy and spies, with a paranoid attitude I’ll never understand.

Dirk Praet February 21, 2017 1:23 PM

@ Tom

Any particular reason for revisiting a thread from 3.5 years ago? Like a school assignment or something?

Do revisit the archives of this blog for a full debunking of all your arguments.

Clive Robinson February 21, 2017 4:42 PM

@ Tom,

This forum seems to be pretty one-sided and people overlook the “police” aspect of this, where police and security firms need the ability to see if they can pre-empt some of these plots before they cause 9-11-style damage and loss of life.

Firstly, “security firms” is a very dangerous way to go when it comes to policing; history shows it almost invariably goes bad. Have a look at the history of London’s “thief-takers”, the origins of the term “straw man”, and how Robert Peel was able to overcome what we would now call “the lobbyists” to get the “Peelers”/“Bobbies” set up in the first place.

Secondly, you appear not to understand the difference between privacy of communications content (encryption) and privacy of communications association (traffic analysis) and other forms of intelligence.
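[Editor’s note: a minimal illustration of that distinction, not from the comment itself. Even when every payload is encrypted, the metadata of who talks to whom remains visible, which is what traffic analysis exploits. The names and flow records below are entirely made up.]

```python
# Traffic analysis sketch: payloads are opaque ciphertext and are never
# inspected; only connection metadata (source, destination) is used.
from collections import Counter

# Hypothetical flow records: (source, destination, bytes transferred).
flows = [
    ("alice", "bob", 4096),
    ("alice", "bob", 1024),
    ("alice", "carol", 512),
    ("bob", "carol", 2048),
]

# Count contacts per (source, destination) pair without decrypting anything.
associations = Counter((src, dst) for src, dst, _ in flows)
print(associations.most_common(1))  # [(('alice', 'bob'), 2)]
```

The point: encryption protects the 4096 bytes; it does nothing to hide the fact that alice contacted bob twice, which is often the more valuable intelligence.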

One of the problems with US Intelligence is the over reliance on “Signals Intelligence” (SigInt) and lack of resources devoted to Human Intelligence (HumInt) also known as “Boots on the ground”.

Historians have pointed to two moments in time for this: the first being the downing of a U-2 during the Cold War, the second being a choice not to train US armed forces in policing.

When it comes to the minor danger of terrorism[1] in the US and other Western nations, history shows that HumInt, not SigInt, is what actually counts, by a very long way.

I could go on at length about your other points, but one important thing to note is bias, in the form of “reporting bias” and “confirmation bias”. By definition, “news” is something rare that activates our irrational fears and our sense of tribal superiority over those we see as “different” or “not one of us”[2].

Bias is one of the major reasons that “terrorism” and its offshoots are hot-button political ticket items. Easy to punch the button as a sound bite, virtually impossible to solve as a problem with money or policy. The simple fact is that any action taken against groups of people is actually going to do considerably more harm than good, and this is not a liberal mantra but a well-established financial and economic principle, as well as being even more fundamental in terms of “hybrid vigor”. The more you polarize a society, the more fragile it becomes and the more inward it turns; it’s a downward spiral. Which, as history shows, causes empires to fall, the only real question being the level of violence associated with the collapse.

As @Dirk Praet has pointed out, your arguments have been rebutted here in the past. Contrary to the bias many implicitly feel due to news and hot-button reporting, we try in general to be as balanced as we can, and to find solid root arguments backed by valid data, not bias and assumptions.

[1] Compare the number of deaths and injuries from direct terrorist acts not just to the number of deaths from vehicle / fire / disease / workplace / home incidents, but also to the secondary deaths from terrorism due to people’s changed behaviours. That is, compare the number of expected deaths among those who would have flown to the number of expected deaths among those who instead made the very much more dangerous choice to drive.

[2] There are quite a few examples of this, but the one you most often get to hear about is “child abduction”. It is very, very rare and mostly carried out by members of the victim’s family or at the behest of a family member. But because the very rare gets widely reported, the reporting makes the crime appear many times more prevalent than it really is. Combined with the brain’s primitive inbuilt responses concerning our children, this magnifies the effect; thus the reporting bias. Another primitive part of the brain then kicks in, the tribal effect: we look for reasons that make us different, which brings out the “isms” in people almost as a defence mechanism. This gives rise to the superiority effect, which causes further biases such as confirmation bias, where the brain tries to find patterns in what is in effect random data.

NATHAN August 1, 2017 9:59 PM

i have an original hard copy photo on my fb photo section of the Sep 7, 2001 worldwide caution warning ..

wanted to chime in as the thread is interesting but surprisingly brazen in its criticism of the fed govt.
not everything done online by our govt. is always done the best, in my ostentatious simple critique .. they have a large workload to handle .. pardon my saying so but the frankly ubiquitous masses could possibly handle some nonsensical but pertinent shared and ..
yet often probably ignorant comm. to “additional degrees” of ambiguity from time to time .. just my two c 🙂

curiouser and curiouser October 30, 2017 1:18 AM

@Bruce

I want to keep some secrets in my back pocket.

This concerns me. While I know that not everything that could be called “security through obscurity” is always bad (anti-forensics often relies on it, quite successfully), I am extremely curious what sort of things you mean. Are you just saying that you use some tools/techniques that are already public, but rather underrated? For example, a TCP injection monitoring tool like honeybadger? Things like avoiding certain libraries in which you know of 0days (not too uncommon among friends of exploit brokers)? Hardware protections such as setting rather obscure MSRs which may improve security?

Though this is an older blog post, I hope you have time to reply. Thanks.

Clive Robinson October 30, 2017 1:51 AM

@ curiouser and curiouser,

Remember, this blog entry was written by Bruce when he still had access to the Ed Snowden trove archive.

It was only a short while later that access was taken away by Glenn Greenwald, and the Intercept effectively became the only window through which the general public gained glimpses of the archive, from a political rather than technical viewpoint.

Understandably this has upset quite a few people, as it in effect holds back potentially valuable information that could limit the activities of the NSA and the other SigInt agencies of the Five Eyes nations against their citizens…

