Ed Felten on the NSA Disclosures

Ed Felten has an excellent essay on the damage caused by the NSA secretly breaking the security of Internet systems:

In security, the worst case—the thing you most want to avoid—is thinking you are secure when you’re not. And that’s exactly what the NSA seems to be trying to perpetuate.

Suppose you’re driving a car that has no brakes. If you know you have no brakes, then you can drive very slowly, or just get out and walk. What is deadly is thinking you have the ability to stop, until you stomp on the brake pedal and nothing happens. It’s the same way with security: if you know your communications aren’t secure, you can be careful about what you say; but if you think mistakenly that you’re safe, you’re sure to get in trouble.

So the problem is not (only) that we’re unsafe. It’s that “the N.S.A. wants to keep it that way.” The NSA wants to make sure we remain vulnerable.

Posted on September 12, 2013 at 6:05 AM • 45 Comments

Comments

Mike B September 12, 2013 6:35 AM

That’s a bad analogy, because people are never safe. Security is always about tradeoffs. Everybody lives in a house with a lock that can be picked in under 15 minutes, but those locks serve the purpose of defeating the attackers they are designed to defeat. Consumer-grade security isn’t about keeping you safe from top-tier nation-state actors, any more than the locks on your door are designed to keep out a SWAT team. What consumers need to worry about are two-bit criminals in Russia and Romania.

It’s baffling why the threat of someone out there in the vast universe knowing what you do online freaks people out, when the threat is no more serious than someone being able to find out what you do in an offline context, or the way things used to work before there was an “online,” when all communications were carried out using blatantly “insecure” means.

The real joke is that the interactions that most people seem to want to protect are with other people that they don’t know anything about.

SeriousThreat September 12, 2013 6:47 AM

@Mike B: “the threat is no more serious than someone being able to find out what you do”

The threat is serious because NSA is spying on my own government.

The threat is serious because the NSA does its best to design bad locks for my door (and windows) while criminals in Russia and Romania are at large.

Mike B’s comment seems to be damage control paid for by the NSA.

musical September 12, 2013 7:07 AM

With very sincere apologies to Creedence Clearwater Revival:

Just bugged someone from Illinois – I thought he was from Luxor –
Got to sit down, take a look through his mail.
His encryption sets in, but pretty soon I’m singing,
Doo, doo, doo, looking out my back door.

They try to hide their ID,
Ciphered asymmetrically,
Look at all the happy creatures browsing on the porn!
Thinking they’re anonymous, thinking they can hide from us;
Doo, doo, doo, looking out my back door.

Elliptic curves don’t worry us; we stacked the numbers right.
Took the standard setters for a ride, doo do doo.
Legal justifyin’ by secret rubber stampin’,
Doo, doo, doo, looking out my back door.

Elliptic curves don’t worry us; we stacked the numbers right.
Took the standard setters for a ride, doo do doo.
They might fix it tomorrow; today, I’ll snoop on one and all,
Doo, doo, doo, looking out my back door.

Ain’t no trouble Illinois, secret courtroom, oh boy!
Look at all the happy creatures browsing on the porn.
Might fix it tomorrow; today, I’ll snoop on one and all,
Doo, doo, doo, looking out my back door.

Peter A. September 12, 2013 7:17 AM

@Mike B.

You are making a very bad analogy between physical security in the real world and digital security in the virtual world of information.

As many commenters here have pointed out (Clive Robinson most prominently), there are huge differences between those two worlds, and a major one is the nonlocality of the virtual world.

Following your analogy, having a weak lock, or even no lock at all, on a house in the middle of nowhere, where hardly anybody goes, is fine: there’s a very low probability anyone is going to rob you there. Even having a reasonably good lock to which local firemen or sheriffs have a master key, and knowing it, is fine: there is a low chance the master key will be lost or abused, and even then you can change your lock or the key.

The virtual world is different. When your lock has a known weakness, in a world where every crook on the planet can instantly teleport to your doorstep at no cost and lockpicking tools tailored to your lock cost nothing and are freely available, you’re going to be robbed multiple times a day; but at least you have a chance to learn about it and change your lock or secure your door in some other way. When, in that same world of instant free teleporting and instant free procurement of tools, your lock has a weakness engineered in from the start by some spooks, and virtually all lock manufacturers are in bed with them, you’re doomed.

Moreover, in the real world, when someone breaks into your house and steals something, you’ll notice that goods are missing. In the virtual world, you won’t. In the real world, when someone is occupying your house against your will, you will know it and kick him out or call the police. In the virtual world, you may never notice that someone has taken over half of your house and is moving tons of illegal stuff through it, putting the blame on you. Etc., etc.

Albert September 12, 2013 7:29 AM

“if you know your communications aren’t secure, you can be careful about what you say”

Yes, that is exactly what happens. I have already noticed that my friends on Facebook never say anything about politics anymore. Lately they have just posted pictures of food, children, and mundane activities. The same thing happened in Eastern Europe during the Cold War: if people dared to criticize the system, they did it in very subtle ways, hoping the authorities wouldn’t take notice. I think this sort of self-censorship is sad and has no place in a western democratic system.

Jon Eliot September 12, 2013 7:43 AM

@SeriousThreat
”@Mike B: “the threat is no more serious than someone being able to find out what you do”

Mike B’s comment seems to be damage control paid for by the NSA.”

I thought exactly the same thing, before reading your comment. But no, NSA spinners cannot be that misinformed about the mental faculties of the readers and commenters on this blog. Or can they? Hell, this new mental environment of distrust and suspicion makes it hard to think!

Raouf September 12, 2013 7:44 AM

The biggest damage is the destruction of trust in the standards that we depend on for security. Because of NSA activities, much of NIST’s work is now suspect. The suspicion has also been extended to the IETF and other bodies.
While it is good to put things under scrutiny, it is a tragedy to dismiss the honest work done by many sincere individuals at NIST and the IETF because of the dark clouds that now hover over them.
Sorting through this situation will take tremendous effort, and the environment for doing so will be contentious.

Recently there have been many references to an article by John Gilmore casting suspicion on the IPsec protocol. While much criticism can be leveled against that protocol, there is nothing that indicates any security weakness in it or in any of the current transforms approved for use with it.

Karl September 12, 2013 7:47 AM

I wonder if we can’t lay some of the blame at the feet of the NSA for the recent news about the thorough extent to which US businesses, and in particular defense contractors, have been penetrated by Chinese and other foreign-sponsored hackers. By knowingly sitting on problems with cryptographic systems, has the NSA passively permitted the largest string of security breaches in US history? Proper notification of the community might have prevented some of the damage. It can, I think, be fairly presumed that the NSA are not the only ones who have figured out how to exploit the current situation.

Jupp Müller September 12, 2013 8:22 AM

Mr. Schneier, I agree wholeheartedly. That’s why the Snowden documents on which all these revelations are based have to be made public. The public has a right to know which algorithms are compromised, which companies are purposely integrating backdoors into their products, and which standards have been damaged.

Of course, this will reduce the NSA’s ability to spy on genuine terrorist conversations, but it will also be a chance for more conservative people to rethink open versus closed software and their relative security value. Transparency is the only way we can benefit from this situation in the long run.

AlanS September 12, 2013 8:37 AM

@Raouf

Agree. The NSA’s role in the creation of security standards involves an obvious conflict of interest. Unfortunately this role is embedded in the Computer Security Act (for discussion and a link to the law see https://epic.org/crypto/csa/ ).

NIST has now re-issued the draft SP 800-90:
http://csrc.nist.gov/publications/nistbul/itlbul2013_09_supplemental.pdf

“NIST strongly recommends that, pending the resolution of the security concerns and the re-issuance of SP 800-90A, the Dual_EC_DRBG, as specified in the January 2012 version of SP 800-90A, no longer be used.”
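For readers wondering what a backdoored random number generator even looks like, here is a minimal sketch of the suspected Dual_EC_DRBG trapdoor, following what Shumow and Ferguson presented publicly in 2007 (the notation is added here for illustration and is not NIST’s). The generator keeps a secret state $s_i$ and uses two fixed curve points $P$ and $Q$:

$$r_i = \mathrm{trunc}\big(x(s_i Q)\big) \ \text{(published output)}, \qquad s_{i+1} = x(s_i P) \ \text{(next state)}$$

If whoever chose the constants also knows a secret $e$ with $P = eQ$, then from a single output block they can lift $r_i$ back to the curve point $R = s_i Q$ (brute-forcing the few truncated bits) and compute

$$eR = e\,s_i Q = s_i(eQ) = s_i P,$$

whose $x$-coordinate is the next internal state, and with it every future output.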

ramriot September 12, 2013 8:53 AM

The NSA’s mandate is twofold: first, to devise methods of breaking foreign codes, and second, to devise codes to protect national interests. Given that, for the most part, in a post-9/11 heterogeneous threat model the foreign and the local overlap in both threats and codes:

Acquiring a break into a code that is used by friend and foe alike, and not revealing it to friends, is failing 50% of their mandate.

Storing signals for breaking, without proof they are from a threat actor, is failing the other 50%, and is also outside their core mandate.

So IMHO this organisation has proved it is failing in at least 50% of its requirements. If it were any other federal government organisation, the auditor general would have shut it down for gross financial waste long ago.

Also, in this modern open-source data-protection economy, are they even needed to assist? They can only do harm through the perception of their actions.

Our only recourse is to declare this organisation an enemy of the people and act with that foreknowledge to undermine their efforts in all civil and legal ways we can, i.e. reworked open encryption standards used for all communications, digital dead-man’s handles on all corporate actors (see Cory Doctorow), and removal of the NSA’s budget, to be used to pay reparations to all harmed by their actions.

Nick P September 12, 2013 9:41 AM

I think the article is wrong. I posted this rebuttal to it on his blog:

“Many users assume — or have been assured by Internet companies — that their data is safe from prying eyes, including those of the government, and the N.S.A. wants to keep it that way. ”

This was a bad assumption for users to make in the first place. Even many lay people assumed over the years that if the NSA wanted them, it was game over. Movies such as Enemy of the State kept that in their minds. Additionally, during consults, I reminded people that the security required to beat TLA’s basically wasn’t going to happen. And that most security vendors push a false sense of security for their products, meaning one must carefully evaluate them. All in all, these NSA revelations just confirm what I told people for over a decade, and they change little about how people do security because they don’t change the nature of their tradeoffs.

“Suppose you’re driving a car that has no brakes. If you know you have no brakes, then you can drive very slowly, or just get out and walk. What is deadly is thinking you have the ability to stop, until you stomp on the brake pedal and nothing happens. It’s the same way with security: if you know your communications aren’t secure, you can be careful about what you say; but if you think mistakenly that you’re safe, you’re sure to get in trouble.”

That’s a bad analogy. A car without brakes has a high likelihood of killing or severely injuring you. NSA tapping your SSL doesn’t. Matter of fact, if you ran the numbers, you’d probably find a low rate of harm across the total number of internet users. Also, NSA isn’t completely disabling the protection: they design it with subtle, secret problems so they can break it. It works in most scenarios, giving us protective value, and can be weakened under certain circumstances for the one organization that legally can do that. (Schell warned us of this subversion threat, but people didn’t listen…) So the analogy is shattered beyond any comparison.

Now, I certainly don’t want my crypto being weakened or implementation flaws left in. Guess what, though? Weak security, security-defeating complexity, and tons of residual vulnerabilities are what the software market has given us (and consumers have implicitly demanded) to keep features up and prices down. The situation already existed. There are proven methods for producing software with low defects in general and low amounts of trusted code. There are very secure ways of operating information systems. Businesses haven’t been interested in any of that. So serious problems stay in our software, awaiting NSA’s (and others’) bug hunters. It’s hard for me to see how we’d be more secure without NSA’s current shenanigans.

And, lastly, Congress and the US taxpayers kept voting to give them this power. Over and over. It’s a consequence of the will of the public. It’s really their mess. They will have to clean it up. Know why? If you’re a US citizen, no amount of technical security will save you from the courts or the SWAT teams when you become a threat. The public has to pressure Congress to change the laws or the NSA can just say “it’s legal” or “we’re just doing our authorized job.”

Brian September 12, 2013 9:52 AM

One of the more fascinating aspects of this whole discussion is the expectation expressed in the article, that it is (or should be) possible to casually use widely available programs to technologically protect your information even from billion dollar intelligence agencies. This is such an article of faith in the tech community that the suggestion that it’s not true immediately became a massive issue for discussion.

I’m not saying it’s a bad expectation or anything, just that it’s quite extraordinary when you step back and really think about it. There is no other example of the relationship between government power and individual power that comes even close. In the US you can own a gun, but there’s no serious expectation that you could use it to fight off the US military (or even the local police). But we expect to be able to download, say, a free, easy to use, encrypted chat program and talk with a friend and be completely protected from even the NSA listening in.

Milo M. September 12, 2013 9:58 AM

Wonder what brands and models of new slot machines use NIST SP 800-90?

Maybe that lottery-winning “reclusive maths professor” in the article works for NSA.

BCross September 12, 2013 10:16 AM

The liberties of a people never were, nor ever will be, secure, when the transactions of their rulers may be concealed from them.
Patrick Henry, Speech to the Virginia Ratifying Convention, June 9, 1788

Arclight September 12, 2013 10:40 AM

The idea that “complete protection from state-level actors” is the only relevant standard of security is not correct. Even state-level actors rarely get this.

What we should instead be focused on is making surveillance more expensive. Sending our data through even a flawed encryption system automatically makes it much more time-consuming and expensive to sift through than if we sent those same data in the clear.

Money still has to be spent recovering keys, bribing telcos, cracking the now weakened ciphertext, etc. It’s certainly not as good as a system with no intentional flaws. But in the absence of better, people should use the most practical options available to them right now.
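A back-of-the-envelope illustration of that economics (the numbers are invented purely for scale, not sourced): if scanning a plaintext message costs on the order of $0.000000001, and recovering even a deliberately weakened encrypted session costs, say, $0.01 in compute and key-recovery effort, then a dragnet over a billion messages goes from roughly $1 to roughly $10,000,000. Flawed crypto doesn’t stop a targeted adversary, but it can price bulk collection out of reach.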

And of course, we should continue to pressure our governments to back away from this nonsense for a variety of reasons already stated.

Chief among them being that leaving such an expansive domestic surveillance apparatus in the hands of our government is analogous to leaving a loaded gun on the table in a house full of idiots. It’s a question of when, not if, it will be misused.

Arclight

G September 12, 2013 11:16 AM

For me, one impact this article illustrates:

“This is going to put U.S. companies at a competitive disadvantage, because people will believe that U.S. companies lack the ability to protect their customers—and people will suspect that U.S. companies may feel compelled to lie to their customers about security.”

NobodySpecial September 12, 2013 11:35 AM

@Nick P: the difference is that “enemies of the state” used to mean a shadowy set of expensive agents specifically targeting you.
What’s more dangerous about this is that they are targeting everybody.

Regularly get off transit at the same stop as a known drug dealer? The DEA now has you logged as a possible accomplice thanks to the phone companies handing over your cell location.

Been on an anti-war march? The cop pulling you over for speeding has the info flashed up on his computer before he gets out of the car.

That next IRS audit? They have your online QuickBooks and all the emails to your accountant.

Complain about your child’s school? The education authority has a list of your website visits in case you want to make too much of a public fuss.

Work for an oil/aerospace/computer company that has American competitors? Then you can bet you are “of-interest”

Gweihir September 12, 2013 11:45 AM

Good analysis by Felten, as usual.
I think that, except for the extremely gullible, the cat is out of the bag:

The NSA has intentionally and systematically sabotaged critical infrastructure.

Due to the scope, they must be considered several orders of magnitude more dangerous than anything terrorism could ever hope to achieve. The current NSA can only be regarded as a very dangerous, nihilistic organization that has managed to evade any meaningful oversight.

It seems NIST is pretty pissed at them too: http://csrc.nist.gov/publications/nistbul/itlbul2013_09_supplemental.pdf

Michael Brady September 12, 2013 12:08 PM

@ Nick P

That’s a bad analogy. A car without brakes has a high likelihood of killing or severely injuring you. NSA tapping your SSL doesn’t.

You might be right. But when such tools are put to use by an executive branch that uses drones to kill people who engage in suspicious behaviors, engages in extraordinary rendition, applies enhanced interrogation techniques, ignores the constitution and the law, and lies repeatedly and without repentance to congress and the judiciary, it’s much more like a hacked throttle-control module. With or without your knowledge or consent, such a compromised chip might be used to operate your vehicle in a manner contrary to your interests and those of the people around you, all while telling you you’re doing 30 in a 35 mph zone…

cowbert September 12, 2013 12:15 PM

@Nick P.
The problem is that it still comes down to “security by obscurity.” It works as long as knowledge of the flaws stays within the NSA. Of course, you only need one person to leak it, and then anybody else has a chance to break it. People put faith in cryptosystems because they have faith in the math, and because they could use certain implementations that did the math correctly. Well, guess what? The NSA screwed with the math too (EC), and that’s part of what is scary. In terms of SSL, if the NSA did a runaround and obtained the private keys of CAs, then this is just a case of commercial exploitation. Unless the NSA used a “legal” method to obtain them, whatever other actions they took to achieve this goal can also be taken by non-NSA actors to some extent.

Daniel September 12, 2013 12:33 PM

@Nick P: I normally agree with you, but I do not here…

“And that most security vendors push a false sense of security for their products…”

You’re missing the point. The question is not whether the vendors are pushing a false sense of security; the question is whether people are in fact believing that falsehood. They are. There was a recent study showing that 85% of people believe they can disguise their identity by clearing their browser cache. You and I know that is false, but people think it’s true.

“A car without brakes has a high likelihood of killing or severely injuring you. NSA tapping your SSL doesn’t. ”

That is simply a ridiculous assertion. In fact, the harm begins with merely /knowing/ that one is being watched, or is capable of being watched. In Free Speech law that is called the “chilling effect.” People’s behavior changes when they think someone is watching them, or has the potential to watch them. That alone is a tangible harm.

Your final paragraph I agree with in its entirety, however.

Clive Robinson September 12, 2013 12:53 PM

@ Gweihir,

    The NSA has intentionally and systematically sabotaged critical infrastructure.

Agreed, but the follow-on question is, “Have the likes of a foreign entity, governmental or otherwise, used these weaknesses to the detriment of a US entity?”

If the answer to that is yes, even once, then Gen. Alexander and his subordinates are guilty of treason, which I believe still carries the equivalent punishment of a capital crime…

Who would like to put their name forward for pulling the lever?

gurrfield September 12, 2013 1:26 PM

Sure, this all may create a market for non-US security hardware and software (if there isn’t one already).

Alex T September 12, 2013 1:47 PM

Here is how HW back doors work. There is a secret that gets entered but is first rejected as incorrect. It is re-entered, perhaps several times. So it’s not like in the movies, where they hammer away until they get lucky. You have to already know what the secret is, or you will just get locked out.
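A minimal sketch of the pattern Alex T describes, in Python; the names and the magic value here are hypothetical, not taken from any known product. The point is that a scanner which tries each candidate once never observes the backdoor succeeding:

```python
# Hypothetical "reject first, accept on re-entry" backdoor.
# A brute-forcer that tries each candidate once never sees MAGIC work,
# because the first entry is rejected like any other wrong guess.

MAGIC = "correct horse battery staple"  # hypothetical baked-in secret

_last_attempt = None

def check_credential(entered: str, real_password: str) -> bool:
    global _last_attempt
    if entered == real_password:
        return True  # normal, documented path
    if entered == MAGIC and _last_attempt == MAGIC:
        return True  # backdoor opens only on the second consecutive entry
    _last_attempt = entered  # the first magic entry is remembered but rejected
    return False
```

Here check_credential(MAGIC, real) returns False the first time and True the second, exactly the “rejected, then reentered” behavior described above.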

J.R. September 12, 2013 1:55 PM

Can’t speak to the ins and outs of security, but on a scale of 0 to 1, my faith/trust in the security of online transactions has gone from about .85 to under .20, maybe under .10, in the last 4 months.

Clive Robinson September 12, 2013 2:51 PM

@ Michael Brady, Daniel,

Guys, you take exception to the first two sentences of Nick P’s paragraph,

    A car without brakes has a high likelihood of killing or severely injuring you. NSA tapping your SSL doesn’t.

But you fail to take into account the third, qualifying sentence,

    Matter of fact, if you ran the numbers, you’d probably find a low rate of harm across the total number of internet users.

Which is a little unfair.

Salach September 12, 2013 3:08 PM

Come on people, grow up!

Anyone who thinks that a computer connected to the whole world can still be secure is either naive or stupid, or both.

Thomas September 12, 2013 3:12 PM

@ Nick P

That’s a bad analogy.

I agree (what is it about computers and car analogies?).

If my brakes are bad, at least I find out about it fairly quickly, costing nothing more than a new mailbox at the end of the driveway.

Bad crypto can fester for years (just ask any of the UK’s allies about Enigma).

Bad brakes cause random accidents. Bad crypto enables targeted attacks, and the damage doesn’t even have to be done in real-time.

This is more like inventing a time machine to retroactively disable anyone’s brakes at the worst possible moment.

Michael Brady September 12, 2013 4:24 PM

@ Clive Robinson

I didn’t disagree with all of Nick P’s comments primarily in the interest of brevity, but since you accuse me of lack of charity…

Matter of fact, if you ran the numbers, you’d probably find a low rate of harm across total number of internet users.

By that measure the fatalities, injuries, and trauma resulting from drone war signature strikes, extraordinary rendition, and enhanced interrogation have resulted in a very low rate of harm across all of humanity.

Also, NSA isn’t completely disabling the protection: they design it with subtle, secret problems so they can break it. It works in most scenarios, giving us protective value, and can be weakened under certain circumstance for one organization that legally can do that.

If back doors, trap doors, false floors, crawl spaces, hidey holes, wire taps, and booby traps can be activated legally history assures us they can and will be activated illegally as well.

And, lastly, Congress and the US taxpayers kept voting to give them this power. Over and over. It’s a consequence of the will of the public…The public has to pressure Congress to change the laws or the NSA can just say “it’s legal” or “we’re just doing our authorized job.”

Our lawfully elected representatives in congress have been lied to repeatedly about what the NSA has done, is doing, and intends to do. The FISA courts have been lied to or simply ignored by the alphabet agencies whenever the executive branch chooses. If the American electorate convinced congress to revise the NSA’s ambit – let alone revoke its charter – we have no reason to believe its behavior would change or that the administrators of an unrestrained security state would bother to tell the public the truth about its history, actions, or plans. If abiding by the constitution is optional according to those in authority then we do not actually live under the rule of law.

Bruce’s analogy is not quite strong enough. We are driving a car with brakes that work only when some technician in the basement of Ft. Meade secretly decides to let them.

Alex R. September 12, 2013 4:50 PM

I wonder if we can’t lay some of the blame at the feet of the NSA for the recent news about the thorough extent to which US businesses, and in particular defense contractors, have been penetrated by Chinese and other foreign-sponsored hackers.

I think this is a good point. Consider the following: your nation’s intelligence agency knows that another nation’s intelligence agency has been weakening various security products, whether it be Intel’s random number generator or a common security standard. At that point it’s possible to take back-bearings and (for example) compare the code built into version 1.21 of a chip versus 1.22 of a chip, or read the minutes of a standards meeting with careful attention to who works/has worked/does consulting for the NSA and what they said.

Once the back-bearings have been taken, it should be obvious what has been altered and how. Then turn your hackers loose on the problem and voila! All your base are belong to us!

Keccak September 12, 2013 8:55 PM

Something I have been wondering about: During the AES competition, the winner was selected by a public vote of the experts involved in the process. NIST made it clear that no matter how the vote went down, they had the power to select the algorithm they wanted anyway (most likely with a lot of NSA input). But that didn’t matter as it turns out that the vote was for Rijndael and NIST agreed with the results. Thus there was no conflict and everyone seemed happy that the whole process was pretty transparent.

But what about SHA-3? I can’t find anywhere in the NIST SHA-3 documents that a vote was held to select Keccak. From everything I have read, NIST made the decision internally after “considering the input” from the community at large. So, for those in the know: was there a vote? Did the cryptologic community at large agree that Keccak was the best of the 5 finalists?

NIST’s rationalization seems to be that “all 5 finalists were good enough,” but that Keccak had a unique, elegant, and simple design that made analysis easy. They also mentioned that Keccak is based on different primitives than SHA-2, which means that if SHA-2 is ever broken, Keccak would probably be immune to the same techniques. (Skein and BLAKE are based on a similar ARX design to SHA-2.)
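As a small illustration of the “different primitives” point, both families now sit side by side in Python’s standard library (sha3_256 assumes Python 3.6 or later); a structural break in SHA-2’s design would not automatically carry over to Keccak’s sponge:

```python
import hashlib

msg = b"attack at dawn"

# SHA-2: Merkle-Damgard construction with an ARX-style compression function
print(hashlib.sha256(msg).hexdigest())

# SHA-3 (Keccak): sponge construction, deliberately different internals
print(hashlib.sha3_256(msg).hexdigest())
```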

NIST’s rationalization seems logical to me, and nothing about their reasoning seems amiss, but with what we now know about the NSA attempting to “steer” NIST in various directions, we have to look at any NIST recommendation with a skeptical eye.

Also, it is worth noting that Joan Daemen was a co-designer of both Rijndael and Keccak. NIST really seems to like his work. Maybe he and his teams are the best, or maybe their designs are fragile enough that the NSA sees “potential” in exploiting them. As you can see, these latest revelations from Snowden are making a lot of people rightfully paranoid.

Nick P September 12, 2013 9:34 PM

NSA isn’t to blame. It’s really that simple.

I think the situation has turned into such an NSA-bashing party that everyone’s forgotten why the situation exists, who the majority of stakeholders are, and what can and can’t be done against a surveillance-state actor. Not to mention all the figurative comparisons making it seem like the NSA is massively robbing, injuring and/or murdering American citizens. I mean, come on… I’ll address the people that replied to me in the next post, after I set the foundation in this one.

The Why: NSA’s Mission

I’m not going to totally blame the NSA here like everyone else is doing, nor make analogies that paint an unrealistic, deadly picture of the results. Before anyone talks legal or ethics, they need to look at what the NSA was required to do. Here is the plain-English version of their mission.

  1. Gather every piece of vital information they can on foreign targets of interest.

(Ridiculously huge number, from crooks to business to government, across many mediums in many countries with a variety of security approaches.)

  2. Do the same if their communications cross into US territory.

(Previously, there were very strict rules about exactly how to do this.)

[So far, we just have two requirements, a massive number of technicians, little oversight, and a huge budget. I said that was a bad idea, but I was in the minority. Moving on. Even back then, they were working toward intercepting everything, with filters and automated analyses to make sense of it all; Echelon being a prime example that leaked. Yet, even with massive spying and alleged abuses in the 90’s, neither the American people nor Congress pushed hard to rein in their power or establish strong oversight of expanding SIGINT activities. And the federal courts did about nothing. Strike 1.]

  3. Acquire intelligence on anything that might lead to another 9/11 happening on US soil, even if it means spying on Americans.

[This was post-9/11. This let them turn their technology on Americans more often, although they had certain legal issues. Congress begins passing laws that remove legal obstacles and create plenty of secrecy. The American public majority supports the activity as being in our interests for our safety, as they like trading off liberty to sleep better. Strike 2.]

  4. Expanding on 3, more pushes for cyberwar readiness, streamlined intelligence sharing, greater visibility, rapid response that requires fewer filters, and long-term analysis capabilities similar to the commercial sector’s business intelligence apps.

[The Business Intelligence type apps and proposals go way back, with software demos from CIA’s In-Q-Tel on the specifics of a few apps for government use. That was public. Even if it’s a Special Access Program, a significant number of people in Congress would know they were tapping into phone, carrier and/or encrypted networks. Some of this leaked publicly many years ago. The American public fails to apply the common-sense test of “is more secrecy, money, spying, lying and lack of due process a good thing for citizens now or later?” Congress continues authorizing and funding the operations. Strike 3.]

I say that’s already all you need for a situation to occur much like recent revelations. This situation was the logical conclusion of a continuous series of events going back 10-20 years. Maybe more depending on who you asked. Every step of the way, the NSA’s modern activities got the blessing of Congress, the public, the important courts, Federal LEO’s, state LEO’s that could use the data, and the media. Sounds legal and endorsed enough to me. So, certain claims people are making don’t hold any water despite how often they’re repeated.

NSA didn’t do this: everyone else did with NSA acting as their authorized agent.

The How: Accomplishing the Mission

People have also been saying they hate all these methods NSA uses. I’m not going into them right now. You’ve probably heard of them. Yet, I’ve just illustrated their mission requirements. And they could be pretty sure if they missed another 9/11 the public wouldn’t say, “Well, they were using SSL and you people are only the NSA so it’s all good. Shit happens. Don’t worry about it.” Please… They were under enormous pressure over the decades to solve equally enormous problems associated with targets using crypto, domestically or foreign. 9/11 just added to it. And they tried several options.

  1. They tried to get backdoors into systems. That was strongly opposed by the public.
  2. They tried banning export of strong crypto and keeping useful systems from being published while allowing US use of ciphers that would stop most attackers. DES cracking got cheaper and the cypherpunks beat them at moving crypto out of the states.
  3. They tried to allow stronger crypto, but with built-in escrow to provide them access. Cryptographers were also working on secure escrow back in the day. That was all shot down. (Maybe partly due to Bruce’s paper on the problems with it.)
  4. They tried subverting crypto software by both big software companies and some crypto companies. Each subversion was for tech with widespread use by governments and companies. That worked to quite a degree till a few became public. Yet, subversion proved to be quite successful and each of those companies are still in the same business. (And I’m not talking about the recent leaks either: these were a long time ago.)

So, the NSA had a tough job mandated by the public and Congress. The job kept getting harder. EVERY proposal they made to deal with their problems through legal or technical means was shot down. The only solution was subversion, the one I’ve written here about for years. Matter of fact, NSA knew that the widespread use of low assurance software implementations and business processes meant subversion was easy for their organization. So, it worked in the past, it kept working, those in Congress cleared for it apparently kept OKing it, Americans kept demanding high effectiveness, and they saw opportunity to expand on the same strategy.

(Note: They also realized if they did it carefully they could embed subversions into otherwise effective security tech. So, public would get good ciphers, online banking, secure purchasing, safer email, network encryption, trusted boot, etc. It would stop most attackers. And NSA could still get in. In a dual-mission spy & stop spies organization, this would be seen as a Win-Win approach to subversion. And subversion was their only effective approach. And so 2+2 =…)

The Result

And so they subverted… everything. And they did it with “open” standards, too. If anything, they were only doing the exact job they were asked to do and did an extremely impressive job. I figured over the years they had done plenty of this stuff. Yet, after the leaks, even I was a bit impressed at how much they accomplished. They certainly had the budget for it but their management sucked. They must have really done a 180 over the past decade. They fixed big organizational issues first, then expanded capabilities, and then accomplished Mission Impossible.

I didn’t think they’d pull it off. I give them props and respect. But… what if “they” come after “us”?

The “Risk” To Individuals + Who Are The Stakeholders?

Practically nonexistent in over 99% of cases. (Yes, that stat is as made up as the recent analogies.) The majority of Americans are the kind of people The Machines weren’t built to look into. The Machines were even used over time to benefit American activities and companies, as far as I know. They will use their capabilities in self-defence if government power is threatened by a minority player without public support. NSA also knows the majority public pays their checks. That public is mostly OK with the groups NSA has targeted so far. Many even demanded it at one point.

And the public gets hit by lightning more often than the NSA injures or kills them. And brake failures, as in the recent analogy? MUCH more dangerous than the NSA to people in this country. THAT is something Joe Public might have experienced personally or heard about via friends’ horror stories. Overall, the NSA is one of the public majority’s… lesser concerns.

The Real Risk: A Fake Democracy

Many claim it’s been a fake democracy all along. Let’s ignore that angle for this debate. 😉 The biggest risk was (and is) that The Machines one party builds are used for evil reasons, by them or another party, with a huge negative impact on the safety of Americans, the economy, the voting process, etc. Like nukes, capabilities like this are better never invented, and a country will never “unbuild” them once they’re working. Further, they will be expanded continuously, both technically and legally.

The very second Americans asked for NSA/LEO’s to develop near God-like omniscience in their intelligence capabilities, they brought all of this on themselves willingly. The long history of government power grabs, corruption and abuses of power should have clued them in. They weren’t cautious enough. Congress knew a bit better with all the dirt people could find on them. Those morons asked for an exemption for themselves, as if unaccountable watchers keep their promises over time. Now, The Machines are a risk to our democracy itself because they can be used against us in many ways and it would take huge effort by Congress to limit that. And Congress is apparently on their side.

Conclusion: NSA isn’t guilty or going overboard. They’re doing exactly what they’ve been required to do. And they’ve done it too well. They succeeded to the point that NSA’s legal, technical and HUMINT capabilities will make most of the INFOSEC defenses people are proposing right now impractical for US citizens. Congress and the courts, who possess checks and balances, are partly responsible for this mess. The other part goes to the American people. If the three really want to, a combination of private and public action can change the status quo. Otherwise, NSA will just keep doing what they’ve been paid to do for decades.

Robert Thille September 12, 2013 10:05 PM

It’s funny, but I learned to drive as a child on a go-kart that had brakes at one point, until the master cylinder failed or something, so I learned to drive it with no brakes. We never had any trouble with it, until we let a friend drive it who didn’t listen and panicked. Not really pertinent to the discussion, but the brake example brought it to mind.

Nick P September 12, 2013 10:40 PM

Necessary clarification to original post: To people not familiar with my style, I’m an opponent of surveillance states and quite pro-democracy/privacy. I’m playing devil’s advocate on this debate as I have in some past debates. Those led to people coming forward with points that contributed plenty to the discussion. And to the disappearance of some points that were only harming it. Same goal here. My motivation is explained in the essay above. It also covers some things in the replies people gave me but I’ll respond to each anyway.

@ Arclight

Good points. I agree. Yet, the legal aspect can’t be ignored. If we don’t build in LEO capability, it will be forced on us. And existing practice allows them to force about anything on us. And maybe charge us if we don’t cooperate. Who knows. The legal issues must be resolved even as we make it technically harder on them.

@ NobodySpecial

Re: targeting everyone

They’ve been doing that for a long time, mostly foreigners, and information got out on it. They were limited mainly by technical capability rather than ethical restrictions. As I said in above essay, almost every legal decision and public choice pushed them in their natural direction of more SIGINT over time. And more domestic capabilities. And more automation. And more storage. The historical precedent is that such organizations only expand their power. It was to be expected.

And if any of those scenarios happens, those who enabled them should share in the responsibility.

@ Michael Brady

Re: drones, rendition, etc., and the NSA being like a (flying?) car on the road with a false speedometer reading

Again, how many Americans, in a country of over 300 million, have been physically harmed or killed by NSA’s crypto-defeating activities? And how many are expected to be this year? None? Almost none? Fewer than by lightning strikes and bee stings? Then why is it a cornerstone of a debate about individuals’ risk?

We can’t afford wild speculation in a debate this big. It will lose the public’s confidence in us quickly. We must focus on solid risks that the public can understand. NobodySpecial was onto something mentioning the IRS; he just forgot to add “targeting conservatives.” It will take examples like that, which scare or anger the public, to get them to rein in the NSA. If they will…

“Our lawfully elected representatives in congress have been lied to repeatedly about what the NSA has done, is doing, and intends to do. The FISA courts have been lied to or simply ignored by the alphabet agencies whenever the executive branch chooses.”

This is a fair point. It should have led to laws being passed by Congress repealing certain authorities, increasing monitoring, and maybe even giving a group like the GAO power to rein them in. After all, NSA activities were kept in check (to quite an extent) for years. And Hoover’s FBI got reined in. And the CIA’s MKULTRA and other stunts got them reined in quite a bit. And so on.

Also, the more the public was behind it and votes were threatened, the more the Congress acted. And we see many cases in local and state government corruption where courts were a big help. Mandating long prison sentences for proven corruption and using them would probably go a long way.

@ cowbert

re security by obscurity, risks of weakening standards

Yes, I agree. It’s why I’m not a fan of this activity, although I don’t totally blame NSA (see essay). The more dangerous of their methods leave or introduce vulnerabilities of the kind black hats can find. That’s dangerous. However, esoteric stuff embedded into hardware, root of trust subversions, and other such stuff isn’t so risky for us. They can be done in a way where the protections are pretty solid and the risk is very low for the public. I’m leaving out key details on that just to keep them from getting more ideas.

Matter of fact, I’d venture to guess two things:

  1. The public is protected from harm by NSA-endorsed methods vastly more than it is harmed by using them.
  2. The public, through both its purchasing and operating choices, causes vastly more harm to itself than the NSA ever would.

Re: faith in mathematics

Oh come on, we both know that’s not true. Most people believe in certain security products and protocols because these were recommended by a source they trusted to some degree. Or they had no better options. Or a combo of both. Most people don’t know an elliptic curve from a discrete logarithm. The majority hates math. They are social, listen to their friends, and they like looking up reviews/recommendations.

@ Daniel

“I normally agree with you but I do not here”

I appreciate the compliment on the old posts. 😉 Far as this debate, I am playing devil’s advocate to stir things up as I said. It’s all good. Fire away.

“The question is not whether the vendors are pushing a false sense of security the question is whether people are in fact believing that falsehood. They are. ”

Actually, most of the issue has been that NSA and vendors are pushing subverted products. What individuals and companies believed about their security is an important, but secondary, attribute. And you’re right: they bought into it. And guess what, they did it for the security industry too.

The kind of complaint you have could be directed at the entire marketing industry. They’ve been playing psychological games to get people to trust products forever. Politicians in US government are known for it, especially during elections. That the NSA wouldn’t try to do the same thing to accomplish their protective and mandated mission (from their perspective) would be… incompetent? 😉

The solution is better critical thinking by the general public. That requires better education strategies and investment. And the lack of it is also not the NSA’s (or the INFOSEC community’s) fault.

Re: chilling effect:

” In Free Speech issues that is called the “chilling effect”. People’s behavior changes when they think someone is watching them or has the potential to watch them. That alone is a tangible harm. ”

Yes. This doesn’t negate my claim about where the responsibility for this mess lies, but it is a REAL problem. It’s also quite intangible. How do you quantify it? How much effect do the NSA revelations have on it over the long term?

Remember, this is the country that loves plaintext email, convenience, credit card purchases, location-enabled cellphones, social networking, and trading personal info for free apps. The oldest generation barely uses the computer and doesn’t trust it much with highly private information. The youngest generation, as Bruce pointed out, has lived much of its life in public. The middle groups are… in the middle, I guess. If there’s really a chilling effect, I have no idea how to measure it. And I don’t see a resistance forming against it, which is my measure of how much the public cares in a way that creates change.

So, it’s a real worry, but I can’t say how much of it exists in this country and its cultures.

@ Clive Robinson

“But you fail to take into account the third, qualifying sentence, … which is a little unfair”

I appreciate it. That sentence does make all the difference. If the risk was as great as what it was compared to, then you’d be seeing effects as great as what it’s compared to. We don’t. So, bad comparison. QED. 🙂

@ Thomas

The key failure of these analogies is that they compare (a) an “accidental” failure of a “safety” device, causing “physical harm and damage” that “has affected” huge numbers of people, with (b) an “intentional” subversion of a “security” system that does its job, has done no “physical/economic harm” to huge numbers of people, and can “selectively target” in ways that “might harm” (e.g. drones, sabotage) and “might not harm” (e.g. precautionary intel gathering).

At least, these sound entirely different to me. And one causes so many provable problems there’s a bunch of shops nearby for preventative maintenance and post-repair trouble.

@ Alex R

I’ve often linked to a Bell paper [1] where he traces the process by which the NSA/DOD got strong computer security started, incentivized the private market to make many high assurance products, and then through stupidity (I think that was it…) killed off the entire market. Nowadays, they push medium assurance solutions. I’ve often bashed them here, pointing out that their own Orange Book, even though outdated, shows that the architecture and development methods of today’s endorsed solutions can’t even meet the self-protection requirement, much less prove the absence of subversion.

They also had export restrictions back then on high security products. So, there was hardly a ROI. Low features, slow time to market, plus uncertain market = both NSA and commercial profit incentives killed off such products. I often joked they probably liked that they could hack into each tech and that most “security” they push is about “control.” Then, years later certain documents show they’ve subverted almost every system including in ways I’ve pointed out as risks. Go figure…

Worse, Schell, one of the originators of the Orange Book, has been presenting subversion as the greatest risk for years and promoting A1-class tech against it. Mainly his own (GEMSOS). Regardless of his debatable marketing material, he turned out to be right about the need for a strong TCB and subversion-resistant development.

In a nutshell, the NSA is partly to thank for creating the INFOSEC industry and funding many good methods/principles for security engineering. These are in papers the public can read and use against them. They also later pushed solutions that were pretty good in general, but that they could hack. So, they’re kind of heroes and villains in INFOSEC. It’s why I often claim the NSA is comparable to someone with multiple personality disorder or paranoid schizophrenic behavior. They’re just weird like that.

[1] For some reason, none of the links to his paper work on this blog after they go through preview. Weird. Just open Google, type these words: Bell Looking Back Addendum. PDF should be on top.

response to Nick P September 13, 2013 2:39 AM

Nick P:

> I certainly don’t want my crypto being weakened or implementation flaws left in. Guess what, though? Weak security, security-defeating complexity, and tons of residual vulnerabilities are what the software market has given us (and consumers have implicitly demanded) to keep features up and prices down.

Except there’s a clear difference. If a bug causing a weakness is introduced by carelessness, it can be discovered and fixed, due to marketplace pressure to improve one’s product. Or it might never be discovered and exploited by anyone. If a weakness is intentionally introduced by the NSA, it’s probably NOT going to be fixed, due to NSA pressure to keep the weakness, and it IS going to be exploited by the NSA and eventually by others.

> Congress and the US taxpayers kept voting to give them this power. Over and over. It’s a consequence of the will of the public.

I agree with you that “the public” voted irrationally in their desire for magic safety after 9/11, etc., but no one voted FOR the NSA to do all these things the NSA is doing, or wanted these activities. Citizens and congress were consistently lied to. No one voted FOR the NSA to have all this power. They were voting naively for magic safety, without being cognizant of the possible results of that desire.

If people vote for someone who secretly does something you don’t like without telling you that’s their plan, how can you say that the people voted “for” that secret action? This is like asserting that Republican voters voted for Nixon to do the Watergate break-in simply because they voted to make Nixon president.

> NSA didn’t do this: everyone else did with NSA acting as their authorized agent.

Aha… the NSA bears no responsibility for their illegal and unethical actions; they were “just following orders”, the classic defense for evil actions in a power hierarchy. Sorry, but for me the “just following orders” defense doesn’t hold much water here.

Clive Robinson September 13, 2013 3:49 AM

@ Nick P,

Hmm you are getting as long in the presentation as I used to be 🙂

If you remember back, I made a list of where I would focus if I were the NSA, and that was,

1, Plaintext
2, Protocols
3, Standards

Whilst I did consider “black bag jobs” for private keys, and the placing of operatives “upstream of code signing” in both software and hardware (which we’ve discussed in the past), I’d assumed that they would stay with the old “Never Say Anything” stance and avoid either covert or overt action against organisations at the management level and above. But then, like many others, I was not aware of secret legislation and rulings, though I should have considered them after the PATRIOT rumpus. Which is why I was so shocked about the gambling software writer getting the SWAT treatment for declining to put back doors in his code (I still think Bruce missed an opportunity by not blogging about it at the time). The “sign posts,” as they say, were out there; I just didn’t connect the dots on the tips of the icebergs.

However, of the three I listed, only “protocols” has not yet come up in Ed Snowden’s releases, and I think this is something people should think about quite seriously.

Because security protocols are notoriously difficult to get right (probably more so than basic crypto algorithms), and as history with WEP and others has shown, we are not yet even moderately OK at them, let alone very good.
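WEP illustrates the point well: it was a protocol failure, not a cipher failure. Its 24-bit IVs repeat quickly, and once two packets share a keystream, their XOR leaks plaintext structure without the key ever being attacked. A toy sketch of that keystream-reuse property (a random byte string stands in for RC4 output under a repeated IV):

```python
# Toy demonstration of stream-cipher keystream reuse (the WEP IV-collision flaw).
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)  # stands in for RC4(key || IV) with a reused IV
p1 = b"ATTACK AT DAWN!!"
p2 = b"RETREAT AT DUSK!"
c1, c2 = xor(p1, keystream), xor(p2, keystream)

# An eavesdropper who never learns the key still recovers p1 XOR p2,
# which is often enough to peel apart both plaintexts:
assert xor(c1, c2) == xor(p1, p2)
```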

It might be that the NSA only went after the elliptic curve random generator as a “toe in the water” first run at direct manipulation of a standard, as it was so obscure (although I really don’t think so). As I’ve said, and others likewise, I firmly believe that the NSA has been pulling NIST’s strings in less direct ways; hence the fact that AES is only good for data at rest unless extreme care is exercised.

As protocols are such a fruitful hunting ground, I really think we should look very carefully at them; they are such a tempting target that I really cannot see the NSA resisting the temptation to tamper with them.

Jonathan Morton September 13, 2013 4:21 AM

The analogy of the car with suddenly failed brakes reminds me irresistibly of the Armagh Disaster of 1889 – the railway accident that finally convinced the British government to enact the “Lock, Block & Brake” regulations that fundamentally define railway safety culture to this day.

Before 1889, railways chose technology – including safety technology – according to whether it gave them an economic advantage or not. Usually it didn’t, especially for smaller railways. In this case, the Great Northern Railway (of Ireland) had chosen to fit trains with a continuous brake because it allowed running fast, heavy trains with sufficient brake power without the need to employ brakemen on each vehicle – but they had then chosen the very cheapest form of continuous brake, the “simple vacuum” brake.

The fatal flaw in the simple vacuum brake was that if the train became divided in motion, the entire train would lose its benefit. Further, if a train were intentionally divided, the portion of the train now detached from the locomotive would lose vacuum braking power. This set the scene.

One fine day, a school excursion train departed Armagh, full to bursting and with more vehicles than the locomotive could easily haul up the steep gradient on the first few miles of the line. An experienced driver could have done it, but the driver of this train was relatively inexperienced. Before starting, he had complained that if he had been told in advance of the weight of the train, he would have brought a bigger locomotive. He managed to get within a few yards of the summit, but the train stalled and could not be restarted.

Following him on the time-interval system was a local train on the normal timetable, with a locomotive of slightly less power but with a much lighter train. Due to the steep gradient, this train would have soon caught up to the excursion, but would have easily stopped in time due to the necessarily low speed of a climbing train.

The excursion driver was later criticised for not waiting for the local train to provide banking assistance, which would have been sufficient to crest the summit. Instead, he elected to detach two-thirds of the carriages and leave them on the hill, taking the remainder forward to a siding which would allow him to come back for them. Bearing in mind the limitations of the simple vacuum brake, the order was given to screw down the handbrake in the rearmost vehicle as hard as possible, and to place stones behind some of the wheels as chocks.

All concerned were confident that these precautions would be sufficient to hold the detached portion of the train on the hill. They were wrong. The front portion of the train shifted backwards by about a foot while attempting to start, causing the improvised chocks to be crushed beneath the wheels of the carriages, and the rear portion began to run down the hill, eventually reaching perhaps 40mph before smashing into the local train behind it. Dozens of passengers, many of whom were children, died as a result.

A second part of this disaster was narrowly averted. The locomotive of the local train was overturned by the collision and detached from its tender – severing and disabling the simple vacuum brake on this train as well – and another coupling also parted, leaving two new trains running back down the hill in close proximity to each other. Fortunately, the handbrake of this train’s rear vehicle was strong enough to bring it to a halt after a considerable distance, and the driver had managed to hang onto the tender and applied the handbrake there, halting the other portion.

The “Lock, Block & Brake” regulations aimed to prevent anything like this – as well as many other classes of easily preventable accidents – from happening again, by mandating several technologies that had already been available to railways for many years.

The most relevant to this accident was the continuous automatic brake; the automatic vacuum brake was only slightly more expensive than the simple one. Automatic brakes are so called because, when inadvertently disabled, they apply rather than release. Today, compressed-air and electro-pneumatic systems are much more common than vacuum, but they work on the same energise-to-release principle and offer the same basic level of safety.
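In software terms, energise-to-release is the fail-safe-defaults principle: losing the control signal must leave the system in its safe state. A toy sketch in Python (all names invented for illustration, not drawn from any real control system):

    class Brake:
        def __init__(self):
            self.applied = True          # safe state by default: brakes on

        def update(self, pipe_energised):
            # The brake releases only while the control signal is actively
            # maintained; losing the signal re-applies it automatically.
            self.applied = not pipe_energised

    brake = Brake()
    brake.update(True)     # locomotive maintains the vacuum: brakes release
    assert not brake.applied
    brake.update(False)    # train divides, pipe severed: brakes apply by themselves
    assert brake.applied

The simple vacuum brake inverted this: the "safe" state required the signal, so losing it meant losing the brakes.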

What has this got to do with cryptography? Perhaps the analogy between commercial interests compromising safety and national-security interests compromising everyday security has something to do with it.

Nick P September 13, 2013 2:01 PM

@ “response to nick p”

“Except there’s a clear difference. If a bug causing a weakness is introduced by carelessness, it can be discovered and fixed, due to marketplace pressure to improve one’s product. Or it might never be discovered and exploited by anyone. If a weakness is intentionally introduced by the NSA, it’s probably NOT going to be fixed, due to NSA pressure to keep the weakness, and it IS going to be exploited by the NSA and eventually by others.”

Does it make us more vulnerable?

You’re mixing two things there. One class of bugs the NSA introduces (or leaves in) is the same kind developers produce on their own, and they can be detected with effort. That an attacker can use these against us is a legitimate gripe and a risk to consider. However, a system with 4 vulnerabilities and a system with 8 are equally vulnerable to talented black hats who need only 1 “sploit.” So, the software norm vs. NSA’s contributions = zero difference in practice to high-end black hats. Kind of depressing, eh?

So, my main challenge to the security community on this angle is: do we have data saying it happens often? There is the Vodafone screwup, and it’s great to use in this debate; I count it in favor of the anti-govt-subversion side. However, most people are just assuming a massive risk is there. I’m saying: prove it with examples of it happening. I’d like to see more data so we can say empirically how bad NSA practice is or isn’t as far as risk to users goes. Otherwise, although the concern is legitimate, most of what people say about it will be pure speculation. (read: “bulls***”)

Myth: Each NSA subversion = widely vulnerable system

The other class of NSA subversions is relevant to the claim that each NSA subversion = a sploit for black hats. This is a common misconception. The ECC RNG (Dual_EC_DRBG) and CA subversions are exemplars of subversion that maximizes benefit to NSA while posing little risk to users. That ECC RNG would only be exploitable by an NSA group holding the magic numbers. It was a backdoor, but it didn’t create a vulnerability that hackers everywhere could use. Likewise, a CA with good security practices presents a decent security layer against all sorts of black hats. If it gives certs to NSA, that doesn’t negate the CA’s protection against other attackers or let others forge certificates at a whim: it just gives NSA access. An NSA rootkit signed for a TPM and customized for a device would give them access, but not allow compromise by black hats in the wild. And so on.
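To make the “magic numbers” point concrete, here’s a simplified sketch following the well-known Shumow–Ferguson analysis of Dual_EC_DRBG (notation compressed; treat it as an illustration, not the full spec). The generator iterates over fixed curve points P and Q:

    $$ s_{i+1} = x(s_i P), \qquad o_i = \mathrm{trunc}\big(x(s_{i+1} Q)\big) $$

Whoever chose the constants can know a secret $e$ with $P = eQ$. From one output block, they brute-force the few truncated bits to recover the point $A = s_{i+1} Q$, then compute $eA = s_{i+1}(eQ) = s_{i+1} P$, whose x-coordinate is the next internal state, so all future output becomes predictable. Without $e$, state recovery is believed to be as hard as the elliptic-curve discrete-log problem, which is exactly what makes it a NOBUS (“nobody but us”) backdoor.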

A subversion != a vulnerability to black hats. It must be judged on a case-by-case basis. Even though I’d rather they not be subverting everything, I’m not going to make up crap about how every subversion they do negates security or gives black hats a door into the system. That’s provably false in the specific examples I gave and many more. If anything, I prefer that they use such low-risk subversion methods, because at least the security still works against the people it was intended to stop.

Guess what? NSA told us how to stop them. And we didn’t.

High assurance security engineering and development practices can prevent most bugs like this regardless of their source, while making it easier to prove that to reviewers. And their economic properties make sense for a slow-moving, high-priority target like IPsec or S/MIME. I mean, why wouldn’t we be using such high-security practices for our most security-critical stuff?

The irony is that “businessmen,” not INFOSEC, taught me what quality/security effects you get in these situations: “fast, good and cheap; choose two.” They should have known better than to go cheap on the most critical stuff. Yet the market continually chose roots of trust that were thrown together cheaply by unknowns, overcomplicated, and barely vetted. That low-assurance practices and risky configurations led to constant hacks didn’t deter them. I’ve always said on this blog (and to businesses) that, without high-security processes, security-critical systems would end up with many residual flaws, esoteric attacks, or subversion. And what happened? All three, across the board. This time I hate to say I told them so.

“No one voted FOR the NSA to have all this power. They were voting naively for magic safety without being cognizant of possible results of this desire.”

Well said. Yet it’s less true underneath than it appears, for two reasons: mission and character. More on that below.

“If people vote for someone who secretly does something you don’t like without telling you that’s their plan, how can you say that the people voted “for” that secret action? This is like asserting that Republican voters voted for Nixon to do the Watergate break-in simply because they voted to make Nixon president.”

“Aha… the NSA bears no responsibility for their illegal and unethical actions; they were “just following orders”, the classic defense for evil actions in a power hierarchy. Sorry, but for me the “just following orders” defense doesn’t hold much water here.”

What They Are

With most organizations I’d agree with you about the “following orders” excuse. NSA is an exception: it’s a quasi-military intelligence organization serving the executive branch, overseen somewhat by Congress, with laws and EO’s determining its mission. They were ordered to find any information pertinent to an attack on America (or beneficial to us) in communications flowing through America and foreign countries, across many providers and using crypto. That’s the mission, and a legal requirement. To me, that mission sounds exactly like “hack stuff all over the place to find important stuff, but filter and delete non-essential things.” That’s what they did for a while, until post-9/11 requirements added mission creep. And now we’re here.

I think the NSA’s mission requirements are entirely relevant to a discussion of “did they go too far?” People wanted NSA to catch just about anything of value. You can’t catch “everything of value” unless you intercept and analyse “everything.” Got a different set of methods for them that would be effective? By all means, propose them to NSA and Congress. I don’t, so I think the mission itself is the root cause of the problems.

So, rather than just crying out for The Machines to be torn down, I say we re-evaluate NSA’s mission requirements, what we would blame them for, etc., and set in stone certain limits, oversight, and even prison sentences for intentional violations. And, SOX-style, put the risk on the directors and senior management. If NSA’s current operational requirements lead to our risks, then we need to change those requirements rather than just gripe about how they meet them. That’s all I’m saying.

Character

It’s good that you used the Nixon example. US LEO’s and TLA’s have a long history of bullshitting for more power and abusing it. NSA, in particular, tried to subvert America’s systems in public and semi-public ways several times before 9/11. They got negative press plenty of times. They were also investigated for Echelon, which to me sounds like the same allegations and worries as today, just less numerous and pervasive. Then this often-guilty party says “just give us free rein, lots of money, more spy gear, and less accountability, and we’ll protect you,” and Congress and the people say OK. And my jaw drops.

So, they start doing their mission. Then a leak comes out showing potential overreach. It’s tolerated. Then again, and tolerated again. And so on, until today, with massive power, subversion and reach. Using your Nixon example: he would have done Watergate two or three times, then been re-elected with more power. Would you hold him totally responsible for his 3rd or 4th major act of misconduct? Or tell the people voting for him that they should have known better after Watergate, CryptoAG-gate, and Echelon-gate? 😉

Coming full circle: problem and solution

So, I blame Congress, taxpayers and the courts for causing this. They gave NSA a mission that would require doing devious stuff on a widespread basis. Every overreach was tolerated far too much. This was done despite the organization’s poor ethical character (comparable to Nixon, indeed). Just as they caused the problem, it will take a strong effort by The People and the Legislative and Judicial branches to change it. The People can even influence the Executive branch, which runs NSA, via their vote. The primary solution is there, and anyone pretending otherwise needs to revisit the history of how this country successfully dealt with major problems in government. Hint: it wasn’t by focusing on crypto, faxes, typewriters or telephones.

Security community’s role in solving this

The security community has a role in this. I charge them to stop BSing and using low-assurance methods. I know high-assurance software development is a painstaking, boring thing to do. It also dictates using proven, old solutions (read: more boring) to avoid creating new problems. I myself have posted many design ideas for them to build on. Both academics and quality/security-centered companies have created concepts, designs, prototypes and products to build on (or emulate). So, there’s no excuse for NSA’s job being so easy except laziness (and perhaps a lack of knowledge sharing by veterans).

The most important things are that INFOSEC professionals do these:

  1. Use a high-assurance development model.
  2. Use decentralized development with peer review and signed vetting (see the sketch below this list).
  3. Build a solution to each critical problem (e.g. signing, network encryption, endpoints, individual content distribution).
  4. Gather and enhance the best existing tech as an interim solution.
  5. Standardize behavior and interfaces; diversify implementations and hardware.
  6. Ensure every activity involves people from several mutually suspicious countries, especially during the review step.
  7. Host in many countries with privacy protections that aren’t buddy-buddy with the main spying nations.
  8. Rotate roles often among coders, reviewers, testers, etc.
  9. Rotate among boring and more interesting jobs. Perhaps an incentive where a certain amount of high-quality work on boring jobs nets community assignment to more interesting work.
  10. A strong reputation system attaching to public keys things like location, specific problems found, specific problems created, etc. This will let people gauge trust better, although the process itself kills off the majority of problems.
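For point 2, here is a minimal sketch of what “signed vetting” could look like, assuming the PyNaCl library (Ed25519 signatures); the function names and JSON fields are illustrative, not a spec:

    import json, hashlib
    from nacl.signing import SigningKey

    def attest_review(signing_key, artifact_bytes, verdict):
        """A reviewer signs a statement binding their key to a specific
        artifact hash and verdict; anyone can verify it later."""
        statement = json.dumps({
            "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
            "verdict": verdict,                       # e.g. "reviewed-ok"
        }, sort_keys=True).encode()
        return signing_key.sign(statement)            # signature + message

    def verify_attestation(verify_key, signed_statement, artifact_bytes):
        # verify() raises BadSignatureError on forgery; then check the hash
        statement = json.loads(verify_key.verify(signed_statement))
        return statement["sha256"] == hashlib.sha256(artifact_bytes).hexdigest()

    # Usage: reviewers publish their verify keys; attestations accumulate
    # into the reputation records described in point 10.
    reviewer = SigningKey.generate()
    release = b"release tarball bytes ..."
    att = attest_review(reviewer, release, "reviewed-ok")
    assert verify_attestation(reviewer.verify_key, att, release)

Because the attestation binds to a hash of the exact artifact, a vetting signature can’t be silently transferred to a tampered build.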

So, there you go, have at it.

Nick P September 13, 2013 2:22 PM

@ Clive Robinson

“Hmm you are getting as long in the presentation as I used to be :-)”

It was the best way of identifying you: length of post, then content quality. As for me, I figure recent debates justify a bit more mental effort and typing.

“Which is why I was so shocked about the gambling software writer getting the SWAT treatment for declining to put back doors in his code (which I still think Bruce missed an opportunity by not blogging about at the time). The ‘sign posts,’ as they say, were out there; only I didn’t connect the dots on the tips of the icebergs.”

Yes, as with past governments getting devious, the overt warning signs were starting to appear. People probably thought of them as isolated or unusual incidents. Maybe they didn’t want to connect the dots.

“However of the three I listed only “protocols” have not yet come out in Ed Snowden’s releases, and I think this is something people should think about quite seriously.”

I actually thought that was in the releases. We’ve been debating how they weakened crypto standards. SSL and IPsec are major protocols. The wireless protocols have had issues that might have been obvious to NSA, so they’re suspect there too.

Re AES

It’s probably OK. I recommend multi-ciphers (minimum: AES + another), Salsa, and other options anyway; that should knock it out. But I think they were fine with AES being strong because they knew it would be hosted or constructed in weak ways. That proved true plenty of times, and they’d probably have known it in advance. So, I think AES is safe as long as we use it properly. And we have options like Bernstein’s Salsa and NaCl if we don’t trust AES. And if we can’t trust Bernstein’s open code, who can we trust? 😉
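For the multi-cipher idea, here is a minimal sketch of a two-layer cascade with independent keys, assuming the Python “cryptography” and PyNaCl libraries; the layering and function names are illustrative, not a vetted construction:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from nacl.secret import SecretBox

    def cascade_encrypt(msg, k_aes, k_salsa):
        # Layer 1: AES-256-GCM with a fresh random 96-bit nonce
        nonce = os.urandom(12)
        inner = nonce + AESGCM(k_aes).encrypt(nonce, msg, None)
        # Layer 2: XSalsa20-Poly1305; SecretBox generates its own nonce
        return bytes(SecretBox(k_salsa).encrypt(inner))

    def cascade_decrypt(ct, k_aes, k_salsa):
        inner = SecretBox(k_salsa).decrypt(ct)
        nonce, body = inner[:12], inner[12:]
        return AESGCM(k_aes).decrypt(nonce, body, None)

    # Independent keys: a total break of either cipher alone reveals nothing
    k_aes, k_salsa = os.urandom(32), os.urandom(32)
    ct = cascade_encrypt(b"attack at dawn", k_aes, k_salsa)
    assert cascade_decrypt(ct, k_aes, k_salsa) == b"attack at dawn"

The keys must be generated independently; deriving one from the other would collapse the cascade to the strength of the weaker layer’s key handling.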

“As protocols are such a fruitful hunting ground, I really think we should look very carefully at them; they are such a tempting target I really cannot see the NSA resisting the indulgence of tampering with them.”

Absolutely. I’ll add that there are few people who can really design and evaluate protocols; Bruce has pointed this out too. This means there’s less experience in doing them right AND fewer eyes spotting problems. So, they’re a can of worms. I propose:

  1. Millions of dollars poured into improving existing methods of embedding security in, or analysing the security properties of, protocols. There are quite a few toolkits, from formal analysis all the way to auto-generation of safe implementations from specs. I want more of these and more accessibility.
  2. A strong simplification of each important protocol (e.g. SSL/IPsec).
  3. Application of every useful tool to widespread protocols, especially after simplification.
  4. A strong focus on integrating protocols as black boxes that can be swapped out when problems are found (see the sketch after this list).
  5. Simple, secure methods of doing that in many types of devices.
  6. Optional: standardize primitives and certain key steps, but let the rest be customized. Essentially, make protocols interpreters whose steps can be randomized a bit per session, making one-size-fits-all eavesdropping and attacks impossible.
  7. A bunch of investment in secure, lightweight interpreters and automated tools for transforming software into different but functionally equivalent code. Combined, these can realize point 6 and make exploitation of legacy code more difficult. We’ve already seen academics prove each capability in isolation in research papers; I think they could be combined for more effect. And the risk now justifies it.
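For point 4, one way to structure the black-box idea, sketched in Python; the interface and registry names are hypothetical, just to show the shape:

    from abc import ABC, abstractmethod

    class SecureChannel(ABC):
        """Applications code against this interface, never against a
        concrete protocol, so a broken protocol can be swapped out."""

        @abstractmethod
        def handshake(self, peer_identity: bytes) -> None: ...

        @abstractmethod
        def seal(self, plaintext: bytes) -> bytes: ...

        @abstractmethod
        def open(self, ciphertext: bytes) -> bytes: ...

    REGISTRY = {}

    def register(name, impl):
        REGISTRY[name] = impl

    def open_channel(name, **config):
        # Swapping "tls12" for "tls13" (or an IPsec-style module) becomes a
        # one-line configuration change, not an application rewrite.
        return REGISTRY[name](**config)

The same registry indirection helps with point 5: constrained devices ship whichever implementations they can support, all behind the one interface.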

Also, veterans can start doing interim work tweaking protocols and removing risky constructs. Every single design decision must trace to a requirement, have a good argument for existing, and be removable if a developer so chooses. Crypto and protocol experts need to get all the necessary rules together in one place for developers, categorized by topic or development phase. That way, developers can use existing protocols as building blocks, eliminate risky aspects, customize them with less risk of introducing errors, implement and test them properly, and we get more diversity from their use. (And obfuscation tools give us even more, automatically.)

Long road ahead.

potatohead September 13, 2013 2:44 PM

So what can we see from the leaked NSA papers? This:

  1. Even though the government has employed tens of thousands of people in surveillance work and given Top Secret access to over 100K people, this surveillance system was up and running for years and no one said much of anything, until Ed Snowden came along – and he first had to leave for another country.
  2. Considering how much this surveillance runs against American “ideals,” it is surprising that we have not had more whistleblowers during the last 5 years. So even though the US system is not openly “dictatorial,” it manages to control its populace about as well as any country that is.
  3. Considering how much the US government and large enterprises (MS, Google, etc.) have lied to the public about “we have nothing to do with anything like that,” it is surprising that people believe the official explanations about other things.

So if they lied to us about the surveillance, what other things have they lied to us about?

How about 9/11? If it is far-fetched that the official story is a lie, why is it far-fetched? Because the government says so?

There are reasons why these lawyers have now signed a petition asking the government to look into the 9/11 testimony provided during the 9/11 investigation:
September 11, 2001: Legal Scholars Question Government’s 9/11 Testimony

The lawyers are joining others who have already raised concerns about the official story. An interesting book about facets of that story, by the way, is “Towers of Deception: The Media Cover-up of 9/11” by Zwicker.

Clive Robinson September 13, 2013 9:59 PM

@ Nick P,

Of your 7-point plan, we have discussed several points before.

For instance, point 1 is the set of methodologies you’ve been talking about for several years now, but it’s only recently that the tools have been appearing in useful form.

Points 2 & 3 are really the same process; they follow on from 1 and, as such, are what goes into making plug-and-play modules for 4.

Point 4 is the “framework” standard I have repeatedly said NIST should be producing rather than running algorithm competitions. If done properly, the NSA shouldn’t be able to cow-bird it.

Points 5 & 6 are really part of what goes into step 4. The idea of the interpreter is one I tried several years ago, when I modified a public-domain BASIC to sit on top of a well-known crypto library that I had put wrappers around to give a more uniform API. I kind of forgot about finishing it when people started formalising the idea of a Cryptographers’ Workbench.

Point 7 is part of what is behind the idea of a “scripting language” that we discussed as part of C-v-P. The simple fact is that ordinary programmers are not going to be capable of writing secure code in the current crop of medium-level languages. It’s better for them to use a high-level scripting language where the primitives have been written by crypto specialists. Whilst I would still like to use the “prison-cell” method of running and hypervising each primitive, with certain safeguards it can be done in the “banqueting-hall,” but with a certain loss of supervision.

Perhaps it’s time I dragged down a copy of the NaCl library you mention and lashed together a series of wrappers and a BASIC interpreter again, though these days a fully abstracted stack language would be better, as the code would be smaller and virtually architecture-neutral. Forth comes to mind, but it has some issues with the way the program dictionary works, and LISP would be a better high-level language.
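As a rough illustration of the shape of that idea – in Python rather than BASIC or Forth, assuming the PyNaCl bindings, with a primitive set invented purely for the example – a toy stack interpreter whose crypto primitives are written once by specialists:

    from nacl.secret import SecretBox
    from nacl.utils import random as randombytes

    # Primitives pop their arguments off the stack and push results;
    # application authors never touch keys, nonces or cipher internals.
    PRIMS = {
        "keygen":  lambda s: s.append(randombytes(SecretBox.KEY_SIZE)),
        "encrypt": lambda s: s.append(bytes(SecretBox(s.pop()).encrypt(s.pop()))),
        "decrypt": lambda s: s.append(SecretBox(s.pop()).decrypt(s.pop())),
        "dup":     lambda s: s.append(s[-1]),
    }

    def run(program, stack=None):
        """Execute a program: literals are pushed, known words are applied."""
        stack = [] if stack is None else stack
        for token in program:
            if token in PRIMS:
                PRIMS[token](stack)
            else:
                stack.append(token)
        return stack

    key = run(["keygen"]).pop()
    ct  = run([b"attack at dawn", key, "encrypt"]).pop()
    assert run([ct, key, "decrypt"]).pop() == b"attack at dawn"

A real version would run each primitive under supervision (the “prison-cell” model); this sketch only shows the division of labour between specialists and application authors.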

As for crypto algorithms: in addition to Salsa, if you look back at the AES finalists, I would consider both Bruce’s team’s entry (Twofish) and Ross J. Anderson’s team’s entry (Serpent) over the AES winner, for a number of reasons, not least because their constructs are conservative in nature and have been well studied.
