Internet Subversion

In addition to turning the Internet into a worldwide surveillance platform, the NSA has surreptitiously weakened the products, protocols, and standards we all use to protect ourselves. By doing so, it has destroyed the trust that underlies the Internet. We need that trust back.

Trust is inherently social. It is personal, relative, situational, and fluid. It is not uniquely human, but it is the underpinning of everything we have accomplished as a species. We trust other people, but we also trust organizations and processes. The psychology is complex, but when we trust a technology, we basically believe that it will work as intended.

This is how we technologists trusted the security of the Internet. We didn’t have any illusions that the Internet was secure, or that governments, criminals, hackers, and others couldn’t break into systems and networks if they were sufficiently skilled and motivated. We didn’t trust that the programmers were perfect, that the code was bug-free, or even that our crypto math was unbreakable. We knew that Internet security was an arms race, and the attackers had most of the advantages.

What we trusted was that the technologies would stand or fall on their own merits.

We now know that trust was misplaced. Through cooperation, bribery, threats, and compulsion, the NSA—and the United Kingdom’s GCHQ—forced companies to weaken the security of their products and services, then lie about it to their customers.

We know of a few examples of this weakening. The NSA convinced Microsoft to make some unknown changes to Skype in order to make eavesdropping on conversations easier. The NSA also inserted a degraded random number generator into a common standard, then worked to get that generator used more widely.
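Why a weakened random number generator is so damaging can be sketched in a few lines. The following is a toy illustration, not the actual standard in question: a generator whose effective state is small enough that anyone who knows the weakness can recover it from a single output and predict every "random" value that follows. All the names and parameters here are invented for the sketch.

```python
# Toy weakened PRNG: only ~16 bits of effective state (hypothetical
# parameters chosen for illustration; not any real standard).
class WeakPRNG:
    def __init__(self, seed):
        self.state = seed % 65537

    def next(self):
        # Lehmer-style step; the tiny modulus is the deliberate weakness
        self.state = (self.state * 75 + 74) % 65537
        return self.state

def clone_from_output(observed):
    """Attacker: brute-force the entire state space from ONE output."""
    for seed in range(65537):
        candidate = WeakPRNG(seed)
        if candidate.next() == observed:
            return candidate  # now in lockstep with the victim
    return None

victim = WeakPRNG(12345)
leak = victim.next()                    # a single value seen on the wire
attacker = clone_from_output(leak)      # recovered in at most 65537 tries
assert attacker.next() == victim.next() # every future "key" predicted
```

The suspected weakness in the standardized generator is mathematically different (a trapdoor in its constants rather than a small state), but the consequence is the same: whoever holds the shortcut can predict the output stream and, with it, any keys derived from it.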

I have heard engineers working for the NSA, FBI, and other government agencies delicately talk around the topic of inserting a “backdoor” into security products to allow for government access. One of them told me, “It’s like going on a date. Sex is never explicitly mentioned, but you know it’s on the table.” The NSA’s SIGINT Enabling Project has a $250 million annual budget; presumably it has more to show for itself than the fragments that have become public. Reed Hundt calls for the government to support a secure Internet, but given its history of installing backdoors, why would we trust claims that it has turned the page?

We also have to assume that other countries have been doing the same things. We have long believed that networking products from the Chinese company Huawei have been backdoored by the Chinese government. Do we trust hardware and software from Russia? France? Israel? Anywhere?

This mistrust is poison. Because we don’t know, we can’t trust any of them. Internet governance was largely left to the benign dictatorship of the United States because everyone more or less believed that we were working for the security of the Internet instead of against it. But now that system is in turmoil. Foreign companies are fleeing US suppliers because they don’t trust American firms’ security claims. Far worse governments are using these revelations to push for a more isolationist Internet, giving them more control over what their citizens see and say.

All so we could eavesdrop better.

There is a term in the NSA: “nobus,” short for “nobody but us.” The NSA believes it can subvert security in such a way that only it can take advantage of that subversion. But that is hubris. There is no way to determine if or when someone else will discover a vulnerability. These subverted systems become part of our infrastructure; the harms to everyone, once the flaws are discovered, far outweigh the benefits to the NSA while they are secret.

We can’t both weaken the enemy’s networks and protect our own. Because we all use the same products, technologies, protocols, and standards, we either allow everyone to spy on everyone, or prevent anyone from spying on anyone. By weakening security, we are weakening it against all attackers. By inserting vulnerabilities, we are making everyone vulnerable. The same vulnerabilities used by intelligence agencies to spy on each other are used by criminals to steal your passwords. It is surveillance versus security, and we all rise and fall together.

Security needs to win. The Internet is too important to the world—and trust is too important to the Internet—to squander it like this. We’ll never get every power in the world to agree not to subvert the parts of the Internet they control, but we can stop subverting the parts we control. Most of the high-tech companies that make the Internet work are US companies, so our influence is disproportionate. And once we stop subverting, we can credibly devote our resources to detecting and preventing subversion by others.

This essay previously appeared in the Boston Review.

Posted on May 12, 2014 at 6:26 AM • 93 Comments


kashmarek May 12, 2014 7:08 AM

My observation is that this point needs to become a primary discussion for the political future of this country. Every political candidate needs to take a stand on this. Accountability has to be restored.

Just an Australian May 12, 2014 7:22 AM

I think you misunderstand the empire – it will do whatever it can to protect itself. The fact that it will destroy itself doing that, and take us all with it – well, that’s just history repeating itself.

Renato Golin May 12, 2014 7:32 AM

“Internet governance was largely left to the benign dictatorship of the United States because everyone more or less believed that we were working for the security of the Internet instead of against it.”

Sorry Bruce, that’s not even remotely true.

We (as in everyone else) only “trusted” the US because we didn’t have the power to change it. No other nation on Earth could, on its own, sponsor the backbones between other countries.

That trust was never in the benevolence of the US state, which everyone else in the world knows doesn’t exist, but in the economic interest of “what doesn’t bite me in the arse doesn’t need changing”. That, and only that, is what changed with Snowden.

Do you think Angela is remotely interested in what Obama is doing to Dilma? Or vice versa? Nobody cares about anyone else. That will never change.

Do we need trust? No we don’t! We need international standards, we need international stewardship, we need international investment in strengthening the infrastructure, security, and anonymity. While one country rules it all, we will never have a true internet.

The internet that Americans dream of never existed and never will while one country rules it. There is no such thing as a “benign dictatorship”. There never was, and never will be.

yesme May 12, 2014 7:42 AM

For even a fraction of this $250 million you could write a UNIX-like OS in Ada that is fast, secure, readable, extendable, portable and formally verified.

Add to that an internet that is sane and simple (layer 7 without the crap and duplication, and filesystem-based) and it could be quite trustworthy.

Will it ever get to this?

Steve May 12, 2014 7:58 AM

(Renato Golin) Based on your comment starting with “Internet governance”, I think you misunderstood who the article says left it to the benign dictatorship. It surely did not refer to individual citizens. It was largely left to the US until relatively recently. As it is, the whole internet grew out of the US, and the NSA’s actions aside, I believe it stands a much bigger chance of being less corrupted while under US control, in spite of all the flaws of the US. The US also has a better constitution than any other country. The future will tell…

I can’t quite understand how you can say we don’t need trust and then go on to say we need international standards. Who’s going to use them if they are not trusted? Only those forced to, and only if there are no alternatives. Many players who make a difference will not use untrusted standards. I’m a very small player, but I sure have stopped using a number of standards that I no longer trust.

As far as “benign dictatorship” goes, they do exist; my company is an example, and I know of others as well. I’m sure you meant that anyone running a country could not be benign. Certainly the ones we hear about have not been. To me it looks like you might not have enough faith in humanity (with all its bad history), which is understandable, but that does not necessarily make it so.

Wilgum May 12, 2014 8:07 AM

We can only hope that there is something left in the Snowden cache that will give the debate more weight on the pro-security/anti-NSA side. Much of what has been reported has been depressingly glossed over by the politicos and the public at large. This is largely because the NSA still enjoys a measure of trust from much of the public and the political class, despite what has already been released.

What was Hayden referring to about Alexander going too far when the leaks started and before he changed his tone? Large-scale domestic warrantless Internet wiretapping happened on Hayden’s watch before it was legalized. It’s infinitely clear that the NSA doesn’t actually care about the phone metadata program since they let it fall into technical irrelevance internally before Snowden came into the picture. Was he referring to Bullrun or something else that has yet to be released?

Mike the goat May 12, 2014 8:16 AM

yesme: I think so. You only need a ’cause’ compelling enough to attract the support of someone with the finances to make it happen. That said, perhaps our best bet is crowd-funding.

There would have to be concessions made; it is always a trade-off between security and usability. I think we could have a super-simple kernel (think Mach or even MINIX) and bolt anything else that is required on top of it. TCP/IP support is ubiquitous but unnecessary this low in the stack (ignoring performance reasons): have a completely independent process handle it, and we can likely reuse the already heavily audited Berkeley code. Keep as much of the OS as possible in userspace, even filesystem support.

Perhaps I am crazy, but this is the direction I would potentially go in.
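The "bolt it on in userspace" separation described above can be sketched abstractly: all parsing of untrusted bytes happens on one side of a narrow boundary, and only a small validated message ever reaches the trusted core. In a real microkernel design the parser would be a separate unprivileged process; a plain function boundary, and a toy request format invented for this sketch, keep the example self-contained and runnable.

```python
def parse_request(raw: bytes):
    """Untrusted side: touches raw network bytes, may fail, is never trusted."""
    try:
        verb, path = raw.decode("ascii").strip().split(" ")
        if verb not in {"GET", "PUT"}:
            raise ValueError("unknown verb")
        return {"verb": verb, "path": path}      # narrow, typed message
    except (UnicodeDecodeError, ValueError):
        return None                              # malformed input is dropped

def core_dispatch(msg):
    """Trusted side: sees only validated messages, never raw bytes."""
    assert msg is None or set(msg) == {"verb", "path"}
    return "rejected" if msg is None else f"serving {msg['path']}"

print(core_dispatch(parse_request(b"GET /index.html\r\n")))  # serving /index.html
print(core_dispatch(parse_request(b"\xff\xfe garbage")))     # rejected
```

The design point is that a crash or exploit in the parser is contained: the core never sees anything but the narrow validated structure, which is exactly what process isolation buys in a Mach- or MINIX-style system.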

Of course all of this is irrelevant if your hardware (and the firmware/etc. that runs it) has been poorly implemented, vulnerable or even deliberately compromised. The latter wouldn’t surprise me in the age of mass surveillance.

Mike the goat May 12, 2014 10:07 AM

Nelson: you’re right – trust never existed on the Internet, but way back in the early days your adversaries weren’t well funded governments capable of analyzing pretty much all traffic. DNSSEC sucks, many are not even using TLS to protect their POP/IMAP/etc traffic and traffic routinely leaks from misconfigured VPN appliances. We – the sysadmins and security professionals – need to change the whole culture of security in IT. What was previously viewed as an unnecessary annoyance is now critical especially for businesses where corporate espionage is a threat. The US gov’t has opened Pandora’s box. Soon every country in the world will be intercepting and analyzing traffic. Oh wait, it is already happening… What a sad, sorry state of affairs.
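Closing the unprotected-POP/IMAP gap mentioned above is mostly a matter of defaults. A minimal sketch using Python's standard library, with placeholder hostname and credentials: the default context refuses unverified certificates, so a man-in-the-middle presenting a self-signed certificate fails loudly instead of silently succeeding.

```python
import ssl
import imaplib

# The default context verifies the server certificate chain and hostname.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

def open_mailbox(host, user, password):
    """Connect over IMAPS (port 993); raises ssl.SSLError on a bad certificate."""
    conn = imaplib.IMAP4_SSL(host, 993, ssl_context=ctx)
    conn.login(user, password)
    return conn

# open_mailbox("imap.example.com", "alice", "hunter2")  # placeholders, not run here
```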

Rogers blowing networks up May 12, 2014 10:15 AM

The focus on trust makes it awfully touchy-feely. This is destruction, not just wounded feelings. Let’s call it what it is. NSA’s engaged in sabotage, “une activité préjudiciable à la sécurité de l’Etat” in the Geneva Conventions. An irregular act of war in breach of US obligations. The US government will be responsible for reparations, compensation, or satisfaction under law.

NSA’s other mission is espionage, the other activité préjudiciable à la sécurité de l’Etat. This too makes the US government responsible for reparations, compensation or satisfaction under law. As the International Law Commission puts it, “Non-material damage is generally understood to encompass the affront to sensibilities associated with an intrusion on the person, home or private life. No less than material injury sustained by the injured State, non-material damage is financially assessable and may be the subject of a claim of compensation.”

Na ga happen? The World Court is already looking at espionage.

NSA’s out-of-control saboteurs and spies are gonna cost ya.

Celos May 12, 2014 10:31 AM

Indeed. All that all the terrorists could ever hope to achieve pales before this massive corrosion of the very fabric of society. It does not get any more evil than this.

RSaunders May 12, 2014 10:40 AM

@ yesme,

Let’s suppose somebody makes a from-scratch computer, with operating system and drivers, that is free of security vulnerabilities. Putting aside, for the sake of argument, the cost, let’s presume that some wealthy person funded this effort to make Earth a nicer planet to live on.

Would the NSA like to buy and use these computers? Perhaps, but probably under the nobus principle the “somebody” would have to agree not to tell anybody that such a trustworthy computer existed. Our beneficent patron might not buy into that.

Would the NSA like other people to use those computers? Perhaps, if evildoers trusted the computers and put their evil plans in them. Lots of software besides the OS would allow other vulnerabilities to be exploited. If law enforcement found an actual evildoer with such good hygiene that no vulnerable software touched their raw ASCII text file of evil plans, they always have TEMPEST and physical means to extract the information. If they are inconvenienced, they simply use the courts to force “somebody” to let them in, at the penalty of being Lavabitten.

So yes, money could be spent that way, but it’s not clear it would make a difference. As Bruce has said many times, the really critical thing Government can do, and the US Government has done, is gagging people to prevent them from disclosing what they have been ordered in court to do.

The whole issue with national security letters and the related data collection is that the companies involved are forced to lie to their customers. If I sue my neighbor to try and get him to fork over his cheese, he can complain to his friends and the “free press”. This transparency in the use of the courts is a significant, important benefit, protecting society from trolls and other sorts of tortious interference. Government orders that are kept secret have been around for ages: wiretaps for the whole history of commercial telephones, secret arrest warrants back into the Middle Ages. When they were only applied to evildoers, society was mostly OK with it. Now the orders are being applied to everyone, in order to make sure the evildoers are included in the collection, and that’s causing all the issues.

Everyone is OK with issuing an arrest warrant for an evildoer and not telling the evildoer the cops are coming. Perhaps there is even a time limit, after which the evildoer (who presumably hasn’t been found because they’ve figured out the cops are on their trail) gets on America’s Most Wanted or gets their picture in the Post Office. That’s the transparency we need to restore. Perhaps after some company has been ordered to do something against their customers, and the government has had time to accomplish its investigation, they should be able to disclose what they were asked and what they turned over. When something isn’t going to be a secret forever, Government has to plan for the firestorm that might come up when the clock runs out. That’s the only thing likely to cause them to be more circumspect in their court orders.

Evan May 12, 2014 10:58 AM

@Renato Golin:
Re: “international standards” & co…

Don’t be naive. There is no international actor that anyone could actually trust more than the US government – which isn’t to say that the US government is particularly trustworthy. The UN has proven itself completely porous to espionage and bribery. ISO is slow and only functions well when the finalized standard is unlikely to be controversial. The ITU and the IEC have the expertise, but they also exist more as fora for discussion and consensus rather than administrative organs in their own right. The European Commission is too politicized and beholden to competing interests, and even Europeans don’t trust it very much.

That doesn’t leave much left to go on, which is the problem. The trust in the US was not trust that the Americans would do the right thing for everyone, but that more internet and secure cryptography were, at the very least, good for America. As long as Americans had access to strong cryptography, more or less everyone did, and that was good enough. Now? Thanks to NSA meddling, potentially nobody has access to strong cryptography; even the systems military contractors use to secure source code for weapons are of dubious value. That’s not just bad for the world in general; it’s potentially very bad for the United States.

Sam May 12, 2014 11:04 AM

One of them told me, “It’s like going on a date. Sex is never explicitly mentioned, but you know it’s on the table.”

Rape might be a better metaphor, given that the people being penetrated aren’t there to give their consent.

Renato Golin May 12, 2014 11:12 AM


I didn’t mean citizens, I meant government. Yes, the “network” was born in the US, and for a good part of the time the “internet” was mainly an American thing, but in the last 15 years it became an integrated part of the world’s infrastructure (technological, scientific, educational, etc.).

It’s the same as if I said that only England can control “capitalism”, or only Germany can control “socialism”, or only China can control pasta and gunpowder. They’re international concepts, like maths, art, water and electricity.


The only people who truly believe that American control is better and more trustworthy than any other are the Americans. The rest of the world knows that any government or international standard will be as corrupt as any other. Your constitution means nothing to the rest of the world, and all movement in the US government about surveillance is towards the “American people”, not the Internet.

The US government doesn’t care about the Internet and, from what it seems, it doesn’t care about its people either. Newsflash: nor does any other government.

When I talk about “international standards”, I mean corrupted international standards, which at least have corruption coming from all sides rather than from a single country of origin, and where policy makers will have to deal with more lobbying than just paying Republicans or Democrats, Labour or Conservatives.

Imagine the US government trying to legalise NSA surveillance if the Russians were in the law-making process… Better still, purposely weakening the encryption algorithms would be considered an act of war.

David Leppik May 12, 2014 11:39 AM

Reminds me of “smart dust” in Vernor Vinge’s A Deepness in the Sky.

In that book, the crew of a trading spaceship are enslaved by people from a different ship. The new rulers discover that the small tracking devices (think RFID stickers) used by the enslaved crew are in fact dust-sized computers wrapped in crippling security hardware. The crew didn’t trust the smart dust, since they didn’t know where the technology came from. Auditing the smart dust takes time, so they must decide whether the powerful surveillance capabilities (having ubiquitous cameras, microphones, and other sensors in the air) are worth the security risk.

The Internet is the same way. Auditing is difficult and expensive, and you can’t always audit the camera that’s pointed at you. You won’t get anywhere if you don’t trust anyone or anything. So it’s a question of how much trust, and where. And sometimes to get what you want, you need to place trust where you’d rather not.

The same is true within the NSA. In any secret organization, especially where not everyone knows everyone else personally, people need to trust questionable people and systems in order to get their jobs done. It’s crazy to think that they haven’t been infiltrated by foreign spies; it’s just a question of where the spies have managed to infiltrate. So NSA-only back doors in security software are really a game of chicken: will the back doors help to discover spies, or will the spies discover the back doors?

The only viable security solution is to have secure software and protocols collaboratively designed by parties that don’t trust each other.

Magnus Cartwright III May 12, 2014 12:00 PM

The western world is corrupt to the extreme. It’s time for us folks to put our faith in crypto instead of politics. I see a lot of potential in wireless mesh networks and Bitcoin’s blockchain technology to shift this paradigm out of the dark. Innovation > Protest

Nick P May 12, 2014 12:07 PM

@ Mike the goat

That was the model QNX chose. They integrated a bunch of NetBSD code and opened their source to volunteers for a while. Then they closed it back up. In any case, they ended up having a vulnerability or two because of the NetBSD networking stack: certain assumptions or design choices that were fine in NetBSD were a problem in QNX.

So, if I used your approach, I’d modify it to not directly use the BSD code where possible. Instead, developers would use the BSD code to understand the problem (e.g. doing TCP). They would produce formal specifications, a reference implementation in a safe language, and a test suite as they went through the BSD code. They would also think through anything they implement in terms of their OS design, assumptions, etc. This approach, while slower than reusing BSD code as-is, should eliminate vulnerabilities due to incorrect assumptions about, or bad integration with, the existing code.
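As a concrete, hypothetical example of that workflow, the first artifact of the TCP work might be a short, readable reference implementation of the Internet checksum, written from RFC 1071 rather than transliterated from the BSD C code, together with the RFC's own worked example as a test vector that any later optimized version must reproduce.

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, per RFC 1071."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# Test vector worked through in RFC 1071.
assert internet_checksum(bytes([0x00, 0x01, 0xF2, 0x03,
                                0xF4, 0xF5, 0xF6, 0xF7])) == 0x220D
```

A C version tuned for speed would then be checked against this reference and its test suite, rather than against the original BSD source.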

@ RSaunders

“Would the NSA like to buy and use these computers? Perhaps, but probably under the nobus principle the “somebody” would have to agree not to tell anybody that such a trustworthy computer existed. Our beneficent patron might not buy into that.”

The original ones in the Orange Book were for DOD to purchase. And while they didn’t hide their existence, they did try to restrict them just as you said. And then they killed off the market with a different purchasing policy. It would be wise for the next effort not to target government use at all, but to allow for it if they want it. Not focusing on their requirements, certifications, appeasement, etc. can only benefit a clean-slate effort not focused on profit.

“they always have TEMPEST and physical means to extract the information. If they are inconvenienced, they simply use the courts to force “somebody” to let them in; at the penalty of being Lavabitten.”

Absolutely. Yet automagically exploiting these systems, via the automated tools now surveilling and hacking the entire internet, might not be feasible. They would also be forced to focus resources on individual targets in ways that can be detected, or that might force a suspicious change in their behavior. Altogether, the situation would be far better than what currently exists.

“As Bruce has said many times, the really critical thing Government can do, and the US Government has done, is gagging people to prevent them from disclosing what they have been ordered in court to do.”

The best I’ve been able to do is come up with designs that choke the attackers out, with physical tamper resistance, and with the whole thing designed and built outside of US control. There are schemes where the operator of these servers couldn’t use them for anything but the stated application with the stated permissions, and users could get an attestation. Operators trying to change the scheme, or to run services on more easily subverted systems, simply wouldn’t be trusted by the security-conscious.

Yet the legal attack you mentioned is still very powerful. If the service provider gives customers the boxes, then nearly invisible implants might be added to them; the USB and VGA cable implants come to mind. If the risk is within an app, then a “temporary security hotfix” might be used by the provider to backdoor the app. There are potential app backdoors that exist on top of the otherwise secure system. There’s the physical stuff. They might even get carriers to modify their infrastructure to make EMSEC attacks on the line feasible remotely. Many possibilities, which might or might not be beaten, stemming from their power of legal coercion.

That’s why I kept telling Bruce it was a political problem, not a technical one. Yet the tech can accomplish a lot, so long as it’s hosted and developed by the right kind of people, way the hell away from the Five Eyes’ influence.

paul May 12, 2014 12:39 PM

Maybe the “free” market will ultimately take on secure computing? There are tens or hundreds of billions of dollars that aren’t going to be made on the Next Big Thing (be it smart TVs or wearables or body trackers or augmented reality or whatever), because some set of early adopters are going to find their personal lives splattered all over the usual media, and no one is going to want that liability.

Daniel May 12, 2014 1:20 PM

Security may need to win, but I am pessimistic that it is going to. It requires a radical change in American culture, and one that we simply are not prepared to make. Not in my lifetime.

Anura May 12, 2014 1:29 PM


I don’t hold out much hope for the free market. They have shown that they are either incapable of or unwilling to resist power. The other problem is that it’s cheaper to do things badly, so you end up with a ton of insecure, difficult-to-update devices. From where I sit, unless governments themselves decide to stop subverting technology and take an active role in making everything open and secure, I don’t think this will change any time soon. The open source movement may create a secure software platform, but I don’t expect to see secure hardware take hold without major world governments requiring it.

Personally, if I was President of the US, I would make the goal of the NSA purely to secure our infrastructure, and let the CIA handle SIGINT. I would then start a public, open source project to design a new platform from the ground up, redesigning protocols, hardware, and software to be secure and modular. Essentially, I would literally design everything, from programming languages, to protocols, to PCs, to servers, to routers, switches, and other appliances, to mobile phones and tablets, as completely, 100% open-source products to meet the requirements of government, business, and the public, by allowing all three to contribute, while enforcing coding standards to make everything readable and verifiable.

Then again, I won’t be elected President of the US: I’m middle class, unmarried and childless (and very happily so), atheist, have long hair, and don’t wear suits; all of these pretty much make me unelectable in the US. Also, I’m probably way too progressive for most Americans. I’m also too idealistic, and my convictions are too strong. You basically need to be manipulative and flexible about your beliefs (flexible, as in, changing them depending on who you are talking to).

u May 12, 2014 1:52 PM

The NSA did and does aid the enemy by not disclosing the critical security vulnerabilities it knows about, so they can’t get fixed, opening the door to Russian and Chinese hackers.

The NSA did and does kill people by not disclosing the critical security vulnerabilities it knows about, helping hackers break into hospitals and emergency systems.

The NSA did and does help terrorists by not disclosing the critical security vulnerabilities it knows about, so they can more easily break into airplanes, flight-assistance systems and nuclear power plants.

The NSA did and does help foreign countries spy on US companies, by making it easier for remote hackers to hack those systems.

NSA, we’ve uncovered you: you must be the best Russian-Chinese cooperation for subverting the US; it’s the only logical conclusion!

Act now, arrest everyone at the NSA, to prevent further damage to us! Act quickly; the Russians and Chinese are laughing at us right now.

The Last Stand of Frej May 12, 2014 2:22 PM

@Magnus Cartwright III

While corruption is certainly rampant, it’s absolutely not confined to the Western world. We’re just not as non-corrupt as our government would have everyone believe.

name.withheld.for.obvious.reasons May 12, 2014 2:33 PM

@ Just an Australian

I think you misunderstand the empire – it will do whatever it can to protect itself.

I would make this statement the frame of reference, the locus, of any efforts to “fix” whatever it is we “believe” is broken. Our societies are not cohesive enough to bring about the type of change that many here express needs to happen. The Internet was not part of some grass-roots community, but one did spring from it, and for the most part private corporations are now using these systems to affect everything from what you hear, see, and touch to what you wear, eat, and drink. Your dystopia will not be what you expect, nor what anyone expects, but it will, from you, expect much. “Give unto Caesar…”, fool!

Magnus Cartwright III May 12, 2014 3:12 PM

@The Last Stand of Frej

I completely agree, and I like your wording better… We can only hope there are people like us in places like China as well.

Mr. Pragma May 12, 2014 4:14 PM

At the risk of sounding blunt:

The NSA explosion is not only a reason for non-US entities to avoid US products and services. It’s also a powerful argument against the not-at-all benevolent dictatorial US regime in general.

But there is more to it, way more.

It’s not just “Oh no, Mum, look, they’ve corrupted and subverted the internet!”

The USA has tainted major parts of IT in general. Examples? Et voilà:

x86 – an architecture that wasn’t the best to begin with, and that has experienced a series of “extensions” by makeshift gluing (rather than proper design). It’s a major money machine, though (which, again, explains pretty much everything in the USA).

Microsoft Windows

Unix – a beautiful system. Then, at that time.

a gazillion Unix derivatives, some of them trying to enhance Unix implementations to halfway reasonable levels (BSD), and some of them playing democracy, meritocracy, bazaar, and GPL terrorism games, “enhancing” an overaged system with chaos, politics, and sheer idiocy.
(While we’re at it: if you *really* believe that large US corporations “benevolently support” Linux with billions, you should see a shrink, and rather soon.
And the joke of the decade: the NSA graciously providing “security”.)

C, C++, Java, … – No need to elaborate on those any more.

IPv6 – sheer insanity, unnecessary, bluntly and brutally “market driven”, less than half-heartedly implemented. For good reasons.

… and the list could go on

Happily, we have some excellent alternatives here in Europe although, unfortunately, most of them are slowly decaying ruins or very poorly nurtured adolescents.

If Mr. Putin wanted, he could, within less than 2 years, have a highly reliable, safe, and secure OS with lots of highly reliable programs. And I’d be more than glad to help them create it.

Why Russia, why not France, Germany, etc., you ask? Because basically all of western Europe (“NATO members”, read: “vassals”) is infested and remote-controlled by US interests.

Have a nice — and well observed — day.

Adjuvant May 12, 2014 4:28 PM

@Celos: “Indeed. All that all the terrorists could ever hope to achieve pales before this massive corrosion of the very fabric of society. It does not get any more evil than this.”
You’re halfway there. Now, let’s consider the possibility that intelligence agencies and networks — formal and less formal — might in fact be behind a significant portion of the terrorist activity that occurs. We could probe this question quite deeply, but for most it seems to be a non-starter. However, just as a proof of concept, let’s start with an example that is relatively non-controversial.

There’s a documentary the BBC put out in 1992, post-Glasnost, the publication of which I’m certain is bitterly regretted in certain quarters. Directed by the redoubtable Allan Francovich, this three-part BBC Timewatch series entitled Gladio clearly lays out how false-flag terrorism, as a means to manipulate political perceptions, was systematically deployed in Europe at the direction of US and NATO interests. Examples included the Bologna railway station bombing of 1980, and public court and parliamentary proceedings in Italian and other European jurisdictions have established these connections beyond serious dispute. I could expand at great length on the theme, but the discussion would tend to exceed the scope of this venue. I’ve posted relevant material in the past.

The crucial point here is to recognize that terrorism can be and has been clandestinely and cynically deployed as a means to various political ends, by institutions and interests that are ostensibly dedicated to combating it. It is reasonable, then, to suppose that patterns of effective clandestine strategy that have been documented in the past may continue into the present and future, in the absence of compelling disincentives to their continued practice.

Next steps for investigating this theme might include a review of this 2004 piece by Prof. Ola Tunander of PRIO (Peace Research Institute Oslo) entitled “Securitization, Dual State and US-European Geopolitical Divide — or, The Use of Terrorism to Construct World Order”. Also worth reviewing would be the work of Prof. Daniele Ganser (previously of ETH-Zurich, now at Basel) and, as always, the works of Prof. Peter Dale Scott (UC-Berkeley emeritus). Finally, given that I’ve begun by recommending a 1992 BBC documentary, this (rather more reticent) BBC piece from 2005 might be apropos as well: The Power of Nightmares, dir. Adam Curtis.

Adjuvant May 12, 2014 5:09 PM

If you will indulge me, a short excerpt from a powerful poem by Scott…

The Tao of 9/11

At the First Emperor’s Tomb
the Chinese People’s Republic
shows you a preliminary movie
in which this monument of empire

is seen through the eyes of peasants
who rose up in rebellion
and smashed the terra cotta statues
we have come so far to see.

I tried asking whether the government
is more in favor of the tomb
or of its being smashed? The guide answered
Both! We think the tao of history

contains both the bright yang of order
and the dark yin of revolt.
So I said, Does that mean
that in the present phase of history

the yin is the Falun Gong?
A short silence. Then
You must understand that in China
there are some things we do not think about.

I know why I’m remembering this.
There are things we don’t think about in America
things I don’t want to think about myself…

kashmarek May 12, 2014 5:21 PM


It was once speculated that 9/11 was fomented by agencies within the U.S. for the purpose of provoking a “new Pearl Harbor”, as stated in the agenda of PNAC (Project for a New American Century).

What has played out since 9/11 seems to fit the pattern. All that remains are the internment camps (the “gulag”, so to speak, run by the for-profit prison groups), show-me-your-papers (hell, they already have them), the show trials (whoa, these might already be underway), and the routine commitment of people deemed unfit for (someone’s) society.

Quoting Bart Simpson to Homer, “are we there yet, are we there yet, are we there yet…”

kashmarek May 12, 2014 5:23 PM

I forgot to add:

Skinner’s Box has long been the bane of the new world order. Before, we couldn’t see it, but now it is in plain view in front of us.

MingoV May 12, 2014 5:33 PM

There is no way to regain trust short of completely destroying every snooping agency in the world and rebuilding the internet with superb security features. Since this is la-la land thinking, the internet will never be secure.

I gave up on internet privacy years ago, before the revelations about the NSA. I just threw up my hands, figuratively, and decided that if the government wants to know what brand of tea I bought last month, it’s welcome to the info. The only difference between a dozen years ago and now is that the government now collects info without a warrant. Since judges hand out warrants like the Easter Bunny hands out chocolate eggs, I don’t see much difference between then and now.

Alex May 12, 2014 6:00 PM

“Because we all use the same products, technologies, protocols, and standards…” — do they?

As far as I know, governments use very special products, technologies, and protocols. They are techno-segregationists: one product for the mass market, another for “us”. For example, the DoD uses SIPRNet.

NobodySpecial May 12, 2014 6:45 PM

re: none but us.

The ACM just announced that its award for the best security PhD dissertation goes to Sanjam Garg, a graduate of the Indian Institute of Technology, Delhi; second place went to Shayan Oveis Gharan, a graduate of Sharif University of Technology in Tehran.

Silicon Vichey May 12, 2014 7:31 PM

There’s an implied nationalization of U.S. private industry in the legal setup that allows the government to force companies to install backdoors in their products.

This has eliminated the possibility of buying or owning purely commercial software or hardware.

So the choice is not between commercial and non-commercial IT products anymore, the choice is between government-controlled products and products free of government control.

bws May 12, 2014 7:37 PM

Maybe I’m just naive, but obviously the NSA is not the only intelligence entity out there employing divisions of brilliant individuals to comb over every piece of code they can get their mitts on. I’m going to go out on a limb and simply suggest that other nations have their own programs, irrespective of their involvement with the US government… Ironically, the phrase “trust but verify” comes to mind… Judging by traffic patterns, former Soviet states, China, the Koreas, and even developing countries in Africa and Latin America all have skin in this game, NOT just the NSA. Since we seem to be focused on the presumed guilt of the USA, I’m going to use an analogy that most Americans will be able to relate to: the NSA is the Richard Nixon (or Bill Clinton) of the intelligence community. Every country with an advanced cyber intelligence infrastructure is just as guilty and double-edged as the NSA, but ONLY the NSA was dumb enough to get caught (and impeached) with its pants down. Or is that implausible?
While I am rather disappointed about all of that, I’m also rather surprised by the underwhelming world response to the amount of counterfeit network gear, manufactured in an unnamed eastern nation for a company whose name sounds eerily similar to Sysco. It’s duplicated almost perfectly: often the only way a unit can be identified as counterfeit is that the real manufacturer has no record of the device in its master product manifest. Though the chance is remote, could some of that very gear have found its way into the NSA? It has been found on US military posts.
Anyhow, with governments making a half-hearted attempt at fortifying their defenses, why focus most of your cyber intelligence attacks or probes on the perimeter of the government you’re trying to infiltrate? Why not simply target the weakest element of any given society: the citizens, and the services they frequent most, i.e. social networking? Facebook, Google, et al. have all shown a rather limited amount of effort in protecting their users’ privacy, and in some cases are more than happy to sell it to anybody willing to pay enough. Here’s a theory: both were born in the US but are now multinational corporations that hide behind US law when convenient, and behind others’ when not. But what the hell, this has already been beaten to death, no?

65535 May 12, 2014 7:54 PM

“…it’s not clear it would make a difference. As Bruce has said many times, the really critical thing Government can do, and the US Government has done, is gagging people to prevent them from disclosing what they have been ordered in court to do.” –Rsaunders

This is a huge problem. I really don’t believe the founders of the USA intended to permit secret courts and gag orders. These gag orders rest on dubious legal groundwork; they should be challenged and brought to light.

“There is a term in the NSA: “nobus,” short for “nobody but us.” The NSA believes it can subvert security in such a way that only it can take advantage of that subversion. But that is hubris. There is no way to determine if or when someone else will discover vulnerability. These subverted systems become part of our infrastructure; the harms to everyone, once the flaws are discovered, far outweigh the benefits to the NSA while they are secret.” –Bruce S.

That is the main problem. There is an arms race to see who can hack the internet the fastest and gain from it. I believe that back-door hacks will lead to ruin and loss of business in the American cloud computing sector. The NSA is sacrificing American business to further its mission-creep-filled agenda.

NobodySpecial May 12, 2014 10:25 PM

@bws – yes other countries do it too.
But I don’t have an Iranian operating system installed on my computer and my (non-US) company doesn’t use a Chinese email provider or a Libyan cloud service.

Leon Wolfeson May 12, 2014 10:57 PM

65535 – It’s not just the NSA, I’m afraid.

It’s the entire streak of exceptionalism throughout the American government.
I’m referring to this:

The new proposed EU data protection law, which has widespread support, would thus effectively make companies choose between doing business in America and doing business in the EU. I’ll be backing it to the hilt. (And intend to campaign for it via 38 Degrees if necessary)

Clive Robinson May 13, 2014 12:06 AM

@Nick P, RSaunders,

One problem with ICT security is the very few people capable of getting it right from first thought through to packaged product.

This scarcity makes them the weak link in the chain.

If you think back to the Iraq “Big Gun”, there were only a handful of people in the world who could have designed it, and only one who was looking for a sponsor.

Supposedly he was “warned off” but ignored the warnings before a Mossad hit squad executed him outside his flat. Personally, I doubt Mossad issued any warning, as it would have made executing him far more difficult and very much more public.

Thus all countries and many organisations have this sanction –albeit illegal for most– to execute the weakest link…

Brian Dell May 13, 2014 12:53 AM

“The NSA also inserted a degraded random number generator into a common standard, then worked to get that generator used more widely.”

Where’s the proof, Bruce? Where is the academic paper revealing the backdoor – and I don’t mean one revealing the POSSIBILITY of such?

Benni May 13, 2014 1:07 AM

News for all non-US persons who bought a router from an American company: “The NSA routinely receives – or intercepts – routers, servers and other computer network devices being exported from the US before they are delivered to the international customers.”

And that is why they issued such dramatic warnings against the Chinese router manufacturer Huawei: it is more difficult for the NSA to place bugs in those products. “An important motive seems to have been preventing Chinese devices from supplanting American-made ones, which would have limited the NSA’s own reach”.

By the way, which routers do Google or Deutsche Telekom use?

name.withheld.for.obvious.reasons May 13, 2014 2:20 AM

I don’t know how many of you looked at the slides that were released by Snowden describing the PRISM program. If you look carefully, the coterminous intercontinental connections have bandwidth allocations that allow QoS and OSPF to route calls across different geographic boundaries. The diagram, from the data provided, suggests that outside the U.S. all communications will see a path (which may appear virtual) that crosses the U.S. border. I imagine all long-haul and NAP (network access provider) nets are configured this way.
This suggests that the telcos are way more into the network than would be implied. And how did the NSA get access to Google’s private link? There is typically a contract (and I’m sure Google pays big bucks) for a private virtual circuit that guarantees bandwidth and availability. If I were Google, I’d be pissed off like a MOFO… wonder what other long-haul/NAP networks are compromised?

yesme May 13, 2014 3:51 AM

@Mike the goat

Talking about technical details is fun, but that was not what I meant.

I was talking more about the big picture. Since WW2 (and especially since 9/11), “they” would rather spend billions playing the “cop vs. bad guy” game than even try to address the cause of the “evil doing” at its core.

That is not politics in the sense of working toward long-term solutions; that is short-sighted narrow-mindedness and firefighting.

But somehow that is what they like to do.

yesme May 13, 2014 4:04 AM

I would like to add to my previous post that as long as they fail to recognise their polarizing mindset, they are not making the world a safer place. In fact, the opposite is true. Just take a look at the zero-days that they buy and actively misuse.

Clive Robinson May 13, 2014 4:33 AM

@ YesMe,

You forgot to mention that while the US etc are fighting these wars certain “favoured sons of the republic” are becoming even wealthier, gaining more power and getting more control on those you vote for.

Remember the only cheap thing in modern war are the lives taken, the rest just makes nice profits and puppet governments.

name.withheld.for.obvious.reasons May 13, 2014 7:30 AM

I wonder whether Aaron Swartz’s work was more significant at the time than others may have realized: he was working on a dead drop for the New Yorker, an application to allow whistleblowers to disclose information to the press. It sounds as if Aaron was aware of a couple of underlying problems that “we” had missed. Given his circumstances, I don’t understand why the application was so important to him–he was already dealing with a fecal storm being generated by the DoJ, and there seems to have been no pay-off given his case before the court. And why wouldn’t other similar services be sufficient for contacting journalists?

Not that I like to speculate, but if Aaron was actively working on a project that would be a direct threat to the DoJ, the Secret Service, and the NSA, it must have raised his visibility beyond the prosecutable course of action. I am unaware of the scrutiny surrounding his activity prior to his demise–it seems too coincidental to dismiss outright.

The pressure and potential blowback for developers has to be considered (I’ve been advised) when writing code for secured comms and messaging. Several have warned of the need to keep one’s liability to a minimum; this of course is what the IC wants–keep ’em in fear. As one of those who has been involved in similar development, my experience includes a series of incidents that caused a colleague to back out of the project (not by choice). The possibility of a deliberate attempt to undermine our work was never a consideration–but we did make contingencies, knowing that things could go bad. And that was in April of 2013–so we were not naive about the environment.

Development may require a shell programming scheme (and I don’t mean Korn or Bourne).

Leon Wolfeson May 13, 2014 7:43 AM

I don’t see the need, “name.withheld.for.obvious.reasons”

There’s an existing answer, which is highly resistant to attacks on individual developers. It’s that good anarchist solution, open source.

popom May 13, 2014 8:04 AM


While open source is not perfect as is illustrated by heartbleed going undetected for years, it’s the best we got.

Gweihir May 13, 2014 9:29 AM

@Brian Dell: That argument is BS, and I suspect you know that. The point is that it is known and proven that the design of the CSPRNG is compromised. It is also known and proven that it is compromised in such a way that an actual implementation compromise cannot be proven without a secret the attacker holds. The design compromise is quite enough for Bruce’s statement.

Incidentally, the Intel RDRAND compromise is also a design compromise, that may or may not be paired with an implementation compromise.

Now, some really clueless people could argue that a design compromise is not actually a compromise, because a compromised design of this type can still be implemented securely. But that is naive. A fundamental principle is that a non-compromised design of any crypto-primitive must make compromising the implementation as hard as possible and must make the detection of a compromised implementation easy. Hence, for all practical purposes, a compromised design is as bad or worse than a compromised implementation. To use a non-specialist term, a compromised design is “highly dangerous” and only a complete fool would see nothing seriously wrong with it.

This is a bit like the completely bogus claim that snooping on metadata only does not violate privacy. By now we know that people are killed on the basis of metadata only.

Nick P May 13, 2014 10:28 AM

@ Leon Wolfeson

I’m sure Wikileaks would’ve called BS on that. The opponents need to be able to compromise machines to get information, to gain control of an operation, or to sabotage it. Giving them information that makes that easier, as open source does, just… makes that easier. They have zero-days on almost every mainstream offering, whether closed or open. Goes to show that open source has zero effect on protecting against attacks by a TLA. Matter of fact, it makes attacks easier, since there’s more money (six to seven figures) in flaw-finding for the bad guys, and open source makes their job much easier.

What does work is picking the right tools, physical environment, processes, and independent reviewers. And throw in OPSEC for every aspect of the operation (e.g. Special Access Program guidelines). Assume opponents are doing traffic analysis, trying to hack machines, controlling the networks, might pay you a visit, etc. Then do the security from there. The solutions are far from open, as they’re essentially obfuscation: they can’t hit you if they don’t know where to aim, and the target is always moving.

The only time I’d consider opening the source of such an effort is if widespread distribution of the source were an actual goal, or as a last-ditch effort to distribute a previously secret tool if I were getting shut down or attacked by someone. The tool would only be partially done in such a situation, but it might still help other designers accomplish what the opponent doesn’t want accomplished.

Leon Mingo May 13, 2014 12:07 PM

I am not a technical expert in these matters, so please excuse what might be a silly question…

Is it feasible to successfully combat governmental and corporate subversion of the internet by essentially overwhelming government and corporate actors with information?

That is, can internet users willfully spread misinformation and irrelevant information to the point that government and corporate data are so error-ridden as to be useless?

This is a tactic that would be largely independent of the technology, I would think, and it would be relatively easy to explain to non-technical internet users how to go about employing it.

In that regard, bomb terror plot NSA al-Qaeda airport bomb bomb bomb gun attack ricin anthrax. 😉

Jacob May 13, 2014 1:43 PM

The mistrust in the US government runs so high that I now have second thoughts about the Intel AES-NI instruction set extension.

We used to think that if you compiled a good implementation, or used a solid open-source crypto lib, on a clean air-gapped computer, then AES could be highly secure. But since many well-respected libraries, or even a clean compiler, can delegate the encryption primitives to AES-NI, who can guarantee that the microcode there has not been subverted? There is a lot of noise about whether Intel RDRAND can be trusted or not – why does nobody talk about AES-NI?

Do I have to forbid AES-NI delegation, or even go to the extreme of compiling for, and running on, an obscure processor (e.g. from one of the European/Japanese embedded-app vendors) that can run C code?

Anura May 13, 2014 2:31 PM


One difference between AES-NI and RDRAND is that the output from RDRAND can’t be verified; you have to take it on faith that it is secure. With AES-NI, you can actually verify that the output is correct; if it was incorrect, then it would break anything that you tried to use it with. So while it’s not impossible that there is an attack against it, it would be extremely difficult to do.

Jacob May 13, 2014 2:59 PM

I agree that RDRAND is much easier to subvert, but AES-NI could have code embedded in it that triggers a weakness when certain conditions are met, e.g. a specific time zone, OS language, or even a special packet received by the network adapter. True, they can subvert any instruction in the CPU, but a user activating AES-NI must be an attention-grabber for the NSA.

Besides, many libraries use the AES-NI. Not many use RDRAND due to the suspicion surrounding it.

That’s what Bruce talks about: when you lose trust, everything is suspect.

Mr. Pragma May 13, 2014 3:31 PM

Jacob (and probably others)

The first problem is confusion. Basically one can break down the problem into three issues:

  • quality of the algorithm (“security”)
    This is about DES, AES, etc, and this includes Pseudo Random Generators. It’s about the algorithms as such and typically about some quantitative properties (like key length).
    Usually the problems are not in this area IF one uses algorithms that have enjoyed a high level of peer scrutiny (Translation: AES is a solid algorithm, XYZ from Corp abc probably is not).
    If, however, an algorithm blessed as official by some standards body like nist is rotten (or intentionally weakened), that’s a major problem. And some from nist have been; worse, nist itself is to be considered rotten and untrustworthy due to its entanglement with the nsa.

Rule of thumb: Stay away from anything a) us-american (Exceptions apply, e.g. AES, Blowfish, et al.) and b) not very widely peer-audited.

  • quality of the algorithms implementation
    No matter how solid and trustworthy an algorithm is, it can be — and often is — watered down, tainted, or outright backdoored or crippled in the implementation, both in software and hardware, intentionally or not.
    While there is a (usually more theoretical) chance that such weaknesses can be detected in software, they basically can’t in hardware. (Yes, I know, one can detect them but then, who has the equipment and know-how to skillfully analyse nanometer structures …). Another problem with hardware implementations is that it’s so delicate and complex that even without any bad intentions whatsoever errors can be (and probably are) introduced.
  • safety, “not being crackable”
    Now, this is the big fat issue that doesn’t get the attention it merits.
    You may have a trustworthy processor, a fine algorithm incl. a high quality implementation but you are still fu**ed if you are on windows or linux. Using software written in C, C++, java and accomplices you’re finally lost, no matter how great your algorithm is.
    Post heartbleed we need not discuss that point but we can know it for certain.

A sidenote ad closed vs open source:
As a rule of thumb one should assume slightly higher code quality in properly managed open source projects. This, however, quickly turns ugly with the many open source projects that are not properly and skillfully guided and managed.
Obviously, open source has the big advantage that the source code can be checked (and repaired), though, alas, that rarely happens, as heartbleed amply demonstrates.

Sad, sad summary:
It seems there are no, or next to no, reliable, safe, and secure network system components out there. We’ll have to make do with compromises. There are some side routes in the lower-bandwidth area (< some Gb/s), but they usually require lots of know-how and work. As for the network core, one is – and will for some time be – limited to very few players, all of them large corps and possibly (read: assumed to be) linked to state interests and services.

Pragmatic Rule of thumb for network equipment in < some Gb range:
No windows, no linux, no just-plug-in boxes, particularly not anything from the usa.
Openbsd, open solaris (pre-oracle!), or the like, stripped down to the absolutely necessary (no X, LibreOffice, or other bloated junk). Necessary and well-proven daemons only. So, no bind but nsd; no sendmail but postfix or qmail; etc.
If you have the know-how available (and the nerves), avoid intel and rather get and use an old UltraSparc T (1 or 2) box. (Don’t worry about its age. That box will blow any intel quad-core blabla thingy out of the water as a router, firewall, DNS server, and the like.)

In case you are more the intel-boxes-and-cisco-routers type, don’t forget to set up and establish a quick route to the nsa. After all, it’s the taxpayers paying for any burden the nsa has to carry in analysing your traffic. So, if you want to support the usa and nsa, at least do it properly and consistently.

Mr. Pragma May 13, 2014 3:39 PM

Apologies, I got trapped using less-than characters (html).

So, above (paragraph “sad sad summary”), it should read:

There are some side routes in lower bandwith area (less than some Gb/s)

And somewhat down it should read:

Pragmatic Rule of thumb for network equipment in the less than some Gb/s range: No windows …


Wesley Parish May 13, 2014 7:27 PM


I think the various governments, Nation-State and Feral, in the Washington Consensus have Free Air For Sale.

The more we learn about the degradation of the Washington Consensus into a Feral Empire, the more I feel that Philip Kindred Dick (Horselover Fat to the cognoscenti) hit the nail right on the head.

Thank VALIS for Snowden! 🙂

Clive Robinson May 13, 2014 7:51 PM

@ Jacob,

With regard to backdooring AES in CPU hardware: as @Anura points out, it cannot fritz with the actual ciphertext or plaintext data, as you could verify correct operation by “playing computer” with the published algorithm and pencil and paper (tedious, but easily doable).

What could be messed around with is covert timing channels: say, every 1024th operation causes a cache hit or equivalent that would, in effect, pulse-code-modulate the data stream with the key value, RSA-encrypted (or equivalent) with the public key Intel keeps for microcode updates (to which the NSA more than likely has the private key).

The trick would be spotting any such subversion: other parts of the CPU –like memory page info– would, like as not, tell the CPU whether it was being used ordinarily or whether someone was running test code, and thus in theory Intel could turn off any such covert timing channel mechanism.

Artur Nankran May 13, 2014 9:48 PM

Here is an interesting effort by the IETF; it has released a Best Current Practice (RFC 7258) on this:

Pervasive Monitoring Is an Attack
Pervasive monitoring is a technical attack that should be mitigated
in the design of IETF protocols, where possible.

mike~acker May 14, 2014 6:46 AM

there is only one way I know of to keep anything honest: get all the cards face up on the table.

right now i’m thinking Open Source Software is in good shape, generally. the #1 Issue of Concern at this point is in the silicon…

still, I would suspect that subversive code in silicon is probably implemented in what used to be called ‘microcode’ — which is really just a different kind of ‘slate’ on which to write software…

by putting the subversive code into microcode, chips coming off the assembly line can be ‘clean’ — and then later compromised. even the compromise can be variable.

Bruce: Superb Essay !!!!!

Steve Besch May 14, 2014 8:54 AM

This is a fantastic essay – and it clarifies all of the vague ideas that have been kicking around in my head for some time now. To wit: almost all government security experts are naive, many so-called professional security types are out to lunch, and virtually all the claims of security companies are exaggerated! I also believe that someday Snowden will be considered an American hero. I can see it now – the first day after the winter solstice will be Edward Snowden Day! This essay gives me hope that that day will come sooner rather than later.

John Suffolk May 14, 2014 11:30 AM

Bruce, thanks for the essay, but you shouldn’t really tar everyone with the same NSA brush. Let me declare an interest: I am the CSO for Huawei, having previously been the UK Government CIO under three Prime Ministers.

I left Government specifically to focus on cyber security and endeavoured to have a foot in the east and a foot in the west. If we are all passionate about the way technology has improved and will continue to improve people’s lives, we must collectively collaborate to make technology in all its guises safer – pointing fingers never solved any problem.

Whether we like it or not at Huawei, we are a Chinese company. We are proud of our roots as we would be proud if our roots were in America, or the UK or almost any other country.

We recognise that we have to be open and transparent; we recognise that we have to show any customer, any reporter, any Government official everything we do – sales, R&D, manufacturing, supply chain, legal, et al. – and this is exactly what we do.

We welcome and positively encourage every kind of audit and inspection. We are probably the most poked, prodded, probed, investigated, audited (including hardware and software, including source code) and reviewed company in the world, and whilst this is painful at times, it is the only way forward when it comes to security and trust – we just believe that it shouldn’t only be Huawei that does this.

We were the first company, and I believe still the only large company to make the emphatic statement:

“We can confirm that we have never received any instructions or requests from any Government or their agencies to change our positions, policies, procedures, hardware, software or employment practices or anything else, other than suggestions to improve our end-to-end cyber security capability. We can confirm that we have never been asked to provide access to our technology, or provide any data or information on any citizen or organization to any Government, or their agencies.
We confirm our company’s unswerving commitment to continuing to work with all stakeholders to enhance our capability and effectiveness in designing, developing and deploying secure technology.”

From our perspective this is the time to bring industry together – industry that is not tethered to government whims and hidden laws, and not pressured by governments’ commercial power – and to drive forward to a new level of security.

We all have to be more demanding of vendors, more demanding of users, more demanding of lawmakers and more demanding as individuals.

Finally I would be more than happy to host a symposium on security at our HQ in Shenzhen and open up what we do to experts from around the world.
Best wishes

Leon Wolfeson May 14, 2014 8:36 PM

NickP – Thank you for that lovely call for security through obscurity.


With open source software, you can verify what you have. Closed source software inherently relies on trust, which is exactly what has been broken here in the first place. Wikileaks had, and has, a problem: a man who managed to drive off two complete technical teams with his behavior, then acted carelessly and got himself into a situation which… well, a man without that sort of ego would have avoided it… and led journalists to prefer not to continue working with him.

The name of the problem is, of course, Julian Assange, who has now done a lot of damage to the cause he espoused. There’s a reason media attention has moved on to Mr. Snowden, who is not like that.

Anyway – I’d also mention Facebook’s open hardware project here, which I think has interesting potential, especially if it has built-in integrity checking.

Mr. Pragma May 14, 2014 9:57 PM

Leon Wolfeson (May 14, 2014 8:36 PM)

NickP – Thank you for that lovely call for security through obscurity.

To stay in your diction: Thank you for that lovely demonstration of your lack of understanding the basics (and your attempt to compensate by attacking Nick P).

The correct version of that on-dit you allude to – one as often repeated as it is wrong – would be “security through only the simple hiding type of obscurity is no security”.

A door handle is a non-obscure mechanism to open a door. A door lock is an obscurity mechanism. Encryption is a form of obscurity mechanism.

And obscurity is basically always a part of security. That’s why you don’t publish your password, and why, when using public key mechanisms, you keep your private key secret.

Before you allow your hubris to attempt attacking people like Nick P, who has often amply demonstrated his knowledge, you might want to consider employing obscurity for your thoughts rather than stripping yourself …

Nick P May 15, 2014 12:05 AM

@ Leon Wolfeson

“NickP – Thank you for that lovely call for security through obscurity.”

All security is through obscurity, addition of attacker effort, or actual risk prevention (rare). I call for all three.

“With open source software, you can verify what you have. Closed source software inherently relies on trust, which is what is broken here in the first place. ”

Which open source software have you verified? Against which of the dozens of forms of code injection? And the drivers? And firmware/BIOS? And device firmware? And processor microcode? And middleware and protocols? It sounds like your solution has plenty of very high-privilege components that are closed source and produced in a subversion-loving country, even if you totally verified the open code to be free of vulnerabilities. (A BIG if…)

All the most popular open source projects have had vulnerabilities. Many of them. One from 2009 allowing potential code execution in the Linux kernel was only found in 2014. The NSA and its contractors have plenty of 0-day hunters looking for this stuff, along with a $200+ million annual subversion budget. Plus, their TAO catalog has attacks on almost everything I mentioned above. It isn’t theory. Open source, proprietary, they rip it all to pieces if it’s in their sights. The Russians, Chinese, and others also tear those systems apart easily. So, given the success of our enemies, recommending what they rip apart provides no security benefit. Telling them what you use and how it works only benefits them, given their vast resources for finding vulnerabilities, which probably outnumber the volunteer effort at finding flaws in the open code.

Note: If it’s mainstream, they might even have automated attacks built into their QUANTUM tools.

Wikileaks started with some proven software. Upon that they added all kinds of obfuscation layers and techniques. This did cause the TLAs serious problems. They couldn’t do a damned thing. So, we know the approach worked. The eventual demise of Wikileaks happened when the banks cut off its funding… a devastating attack the techies were unprepared for. Julian’s insanity also tore his team apart. Yet, the lesson is still there: even the smartest, most powerful, most well-funded enemy can’t hit what they can’t see. Wikileaks’ clever obfuscations worked against the most capable, remote attackers where many open source approaches failed (and are still failing) against basic ones.

Note: Modern example would be Tor which is obfuscation on steroids and one of NSA’s biggest headaches per Snowden leaks.

I also referenced the SAP protection methods government uses to protect its most sensitive projects. These methods successfully protected plenty of information from all but the most capable opponents. If they hadn’t half-assed it, they might have stopped many of those too. This combo of secrecy, obfuscation, and good security controls worked for me too. One early example of mine was using a *NIX on non-x86 and Windows NT on Alpha. My network guard totally hid the nature of the systems on the network, forcing all traffic to look like mainstream clients & servers. It was also pretty secure itself & didn’t run x86 either. All attacks, including really clever ones, failed. All assumed x86 and/or mainstream operating systems. Meanwhile, people with Red Hat Linux etc. were getting hacked rather easily by the same types of people.

Trustworthy people, reliable tools/methods, obfuscation, a bankroll, a good physical location, and disaster planning are the only methods that can stop a TLA-type opponent. Still no guarantees, but there are examples of success here. Merely running open source software can’t do the job. It never has. It probably never will. Only the concept of blind faith can explain people’s continued reliance on it for their security. It’s useful for improving the auditability of a given layer, but that’s all it achieves. Real security comes from denying the enemy knowledge of what’s there, how to exploit it, the ability to exploit it (if lucky), and/or a way to avoid detection while doing the deed.

Wael May 15, 2014 12:40 AM

@ Leon Wolfeson,

With open source software, you can verify what you have. Closed source software inherently relies on trust, which is what is broken here in the first place.

Open source software verification inherently relies on trust as well! I take that back! It relies on the trust that someone verified it, simply because it’s open source. It’s a double-edged sword as well: the fact that it is open source means that adversaries can subvert it with relative ease, contrasted with closed source. I think you may have misread what @ Nick P wrote. As Nick P said, you are not going to verify it by yourself, so you must trust someone else to verify it for you. And since trust is broken, open source and closed source share that broken common denominator of trust.

Regarding obscurity, it can be referenced in two ways.
Bad: the security of a crypto-system depends on the secrecy of its implementation and/or design; if either is revealed, the system is broken.
Not bad: a shared secret, for example, needs to be “obscured.”

Don’t mix up the two situations. If you hear “security through obscurity,” understand that it refers to the “bad” definition.
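The distinction can be sketched in code. A hypothetical Python illustration (the function names and the toy XOR scheme are mine, purely for contrast; real systems use vetted ciphers, not XOR):

```python
# Toy contrast between the two senses of "obscurity" described above.

def bad_encrypt(data: bytes) -> bytes:
    """'Bad' obscurity: the hard-coded constant IS the whole secret.
    Once this source leaks, every message ever protected is readable."""
    return bytes(b ^ 0x5A for b in data)

def keyed_encrypt(data: bytes, key: bytes) -> bytes:
    """'Not bad' obscurity: the algorithm is public and only the key is
    secret (Kerckhoffs's principle). The XOR stream here is illustrative
    only; a real system would use a vetted cipher such as AES-GCM."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

With the keyed version, publishing the source costs nothing as long as the key stays secret; applying it twice with the same key round-trips the message.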

Mr. Pragma May 15, 2014 1:03 AM

The credo “open source == verified == secure” has lost its power with Heartbleed.

If even the security crown jewels of open source obviously haven’t been properly verified, then what open source has been?

Similarly major open-source cornerstones have failed.

The bazaar model has failed. Reality shows a clear correlation between code quality and the cathedral model in the form of professional, highly qualified benevolent dictators like Linus or Theo.

The meritocracy model also has failed insofar as the lower end is the very basis for new top generations rather than being excluded.

The highly politicized copyleft model of the GPL has failed insofar as it has driven uncounted potential users into the arms of closed source, because paying in money is so much more attractive than paying in legal risks and intellectual property.

About the only more or less accepted, widespread, and reliable open source software is not GPL and/or not bazaar.

Again, there are advantages to the open source model, but we should have learned by now to stop dreaming and to accept that the open source model has disadvantages too, in particular where social and political issues and values are de facto valued more highly than technical ones.

For the open-source model to deliver we urgently need to really add verification to it rather than just blabbering about all those eyes.

And just btw, the very fact that those statements will quite probably bring up more open source proponents against me than it will induce them to critically reflect on their model and to considerably enhance (or even fully understand) it, already *is* of a political, social, and psychological nature.

Clive Robinson May 15, 2014 7:06 AM

@Mr. Pragma, @Leon Wolfeson,

At heart the issue is one of “trust” and how to establish it, or deal with its loss.

The problem with trust is “the layer below” in the computing stack: if it is not both trustworthy and secure, none of our current methods will be subversion-free, no matter how far up the stack you go with formal methods, type-safe languages, etc.

From what I can tell, all layers down to and including the device physics can be subverted in some way, so arguably none of our current solutions can be both trustworthy and secure, and thus free from subversion…

The old argument, when device layout was still in the optical range, was that you could have a trusted development process followed by an inspection verification process, so trust could be established. Well, device physics, and the fact that you cannot trust the development processes available to consumers, mean that method is no longer available.

Other methods have been proposed but they all suffer from the fact that you cannot establish trust, so some other method has to be found.

And there is a way: it’s called “mitigation,” and it’s how high-reliability systems establish reliability figures above and beyond those of any of the component parts.

It works on the assumption that all parts are unreliable and will fail at some point, but importantly not at the same time nor in a way that cannot be detected.

What I realised was that if you replaced “reliability” with “trustworthiness” in the above statement, and ensured the two conditions held, then you had a mitigation solution to the layer-below issue.

When I first started thinking about this some years ago there were no published papers or other information I could find that other people were thinking along similar lines.

I have discussed how to do this in a practical way on this blog before, and it would appear others have taken it on board and have started actually researching in that direction and publishing their results.
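The mitigation idea can be sketched as majority voting over diverse implementations. This is a hypothetical Python illustration of the principle, not any specific design from the comment above; the stand-in functions are invented:

```python
# Sketch of "mitigation": assume any single component may be unreliable
# or subverted, run diverse implementations of the same computation, and
# accept a result only when a majority agree. The three stand-in
# functions model independently sourced parts (different vendors,
# compilers, CPUs); a lone subverted unit is outvoted.
from collections import Counter
from typing import Callable, List, Optional

def vote(impls: List[Callable[[int], int]], x: int) -> Optional[int]:
    """Return the majority result, or None when no majority exists
    (a detected failure rather than a silent one)."""
    tally = Counter(f(x) for f in impls)
    value, count = tally.most_common(1)[0]
    return value if count > len(impls) // 2 else None

def impl_a(x: int) -> int: return x * x      # healthy unit
def impl_b(x: int) -> int: return x * x      # healthy unit
def impl_c(x: int) -> int: return x * x + 1  # subverted/faulty unit
```

Here `vote([impl_a, impl_b, impl_c], 7)` still yields the correct answer despite the bad unit, and when all units disagree the failure is detected rather than silently trusted, matching the two conditions: components must not all fail at once, and failures must be detectable.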

anonymous May 15, 2014 9:02 AM

@David Leppik: The only viable security solution is to have secure software and protocols collaboratively designed by parties that don’t trust each other.

…or software and protocols written and reviewed by parties that do not trust each other. This model works, and it is known as “open source”.

anonymous May 15, 2014 9:11 AM

@Wael: “Open source software verification inherently relies on trust as well! I take that back! It relies on the trust that someone verified it, simply because it’s open source. It’s a double-edged sword as well: the fact that it is open source means that adversaries can subvert it with relative ease, contrasted with closed source. I think you may have misread what @ Nick P wrote. As Nick P said, you are not going to verify it by yourself, so you must trust someone else to verify it for you. And since trust is broken, open source and closed source share that broken common denominator of trust.”

Open source can be verified, but closed source cannot. This one alone is a big difference.

Believe it or not, open source code is audited not only by developers but also by clever users that suggest patches to address bugs. Do you think OpenBSD cannot be trusted?

Regarding obscurity, it can be referenced in two ways.
Bad: the security of a crypto-system depends on the secrecy of its implementation and/or design; if either is revealed, the system is broken.
Not bad: a shared secret, for example, needs to be “obscured.”

Sorry, I do not think you understand what security by obscurity means.

yesme May 15, 2014 10:07 AM


“Believe it or not, open source code is audited not only by developers but also by clever users that suggest patches to address bugs. Do you think OpenBSD cannot be trusted?”

OpenBSD cannot be trusted.


  • There have been quite a lot of serious leaks in the OpenBSD code[1].
  • BSD is Unix-based. Although Unix has security built in, it’s a pre-Internet ’70s OS[2].
  • Besides the OS, the Internet is THE biggest problem. The IETF is broken by design.
  • It is written in C, a language that is impossible to formally verify[3].

Today’s computing is in a sorry state of affairs. That is the inconvenient truth.


Wael May 15, 2014 1:53 PM

@ anonymous,

Open source can be verified, but closed source cannot

First of all, I am not against open source, and I don’t wish to be dragged into an open-vs.-closed source debate. To each his own… Secondly, your statement is factually incorrect. Closed source can be verified as well. Who verifies it is the question. You will not verify it any more than you will verify closed source: that’s the point.

Believe it or not, open source code is audited not only by developers but also by clever users that suggest patches to address bugs. Do you think OpenBSD cannot be trusted?

I believe that open source code is audited not only by developers, but also by clever users. This is essentially “trust”. I am not an opponent of open source. I use FreeBSD myself for purposes I see fit. I do not trust that every component of it is verified, and I don’t know the level of verification done, etc… I do build the kernel, the OS and libraries, and applications from source to customize some aspects, but I don’t have the time to look at millions of lines of code. Bugs can be at the code level, in the design and architecture, in bad implementations of a specification, etc… No single person has the time or the skill set to do all that. You must “trust” that someone did it for you. Closed source software is also verified, and you are asked to trust that a different set of people did the verification. My point is that “trust” is not a strong differentiator between open source and closed source in this context.

Sorry, I do not think you understand what security by obscurity means.

Yea? Why don’t you enlighten me then? And while you’re at it, choose a name so I know which anonymous person I am talking to. Then try to put some objectiveness in your statements rather than shifting to personal, subjective, dismissive ones.

Mr. Pragma May 15, 2014 3:16 PM

Putting it somewhat bluntly, it seems that many open source advocates, at least as far as security and the like are concerned, do not understand the religion they themselves preach. (No aggression intended.)

The decisive point is not that open source (safety, security) can be verified!

The decisive point is that open source takes a major step toward building trust!

This step lies in proactively (without being asked to) offering an important, commonly accessible, and relatively simple way to verify the software. This builds trust (as gazillions of blindly trusting open source fan(atic)s demonstrate: “open source is open and hence it’s secure!”).

This incarnation of a behavioural pattern is generally, and possibly across all mankind, understood. It’s the simple, and powerful, pattern of saying and showing: “Look! Nothing to hide, nothing hidden. See for yourself!”

That does not mean that open source is secure. Not even that open source is of a certain quality.
Importantly, it also does not mean that there were no malevolent intentions during design and development. It is important to keep this in mind when talking about larger projects.

A very similar and comparable step is often made by closed source projects when they offer to have their code inspected by a trusted third party or to open their code to major customers (which then, if at all, again hire a trusted third party to check the code).

Actually – this should be seen and understood – this model can (not “is” but “can”) be better than open source for some reasons. One important and attractive reason is that typically the trusted third party has a rather high competence in the areas of code quality, safety, security – unlike 99% of the often mentioned open source “eyes”.

I’d like to see a “bridge” built mentally, a bridge connecting trust and end result. Because those two points are the starting and end points, the latter probably being even more important in practical considerations.
And that end point is to be reasonably capable of saying that some soft (or hard) ware can be assumed to be reliable, safe, and secure to a relevant degree.

And here open source usually doesn’t deliver (neither does closed source, but then, they don’t evangelize about it). The reason, funnily, is that open source’s blue-eyed credo and assurance is an empty promise if it isn’t actively followed up, but rather is widely ignored and shrugged off: “We’ve done our part. It’s open source, so someone can look at it.”

In that context it should also be noted that open source quite typically ignores or outright rejects any and all responsibility, assurances, or binding statements about its quality.

Open source has delivered a lot, it has often been an enabler, it is often quite desirable and it has advantages, no doubt.
But “open source” by itself has no significant security value. That might change once at least some of the “many eyes” become a widely understood must-have part of open source projects.

Nick P May 15, 2014 4:48 PM

@ anonymous

“…or software and protocols written and reviewed by parties that do not trust each other. This model works, and it is known as “open source”.”

That is not how open source, especially FOSS, works. The assumption in the FOSS world is that each code contributor isn’t trying to design ingenious backdoors: they are just sending improvements that might have flaws in them. Some projects don’t really review the submissions at all. Some just glance at them. Some look for the most common defects (and still miss many). Some, like the OpenBSD team, are more proactive at looking at the code and do what I’d consider a real review.

So, open source doesn’t imply a review, distrusting developers, etc. It only tells us the source is available for potential reviews. The CVE list shows that potential is virtually never reached. OpenBSD, the strongest OSS review team, has had hundreds of bugs and who knows how many vulnerabilities that were fixed after code was in production machines. One can only imagine how many vulnerabilities are in the other projects that focus on features more than quality.

Note: I’ve posted multiple schemes here in the past that leverage mutually distrusting parties. An early one was to use Israelis and Arabs for the development process. The reviewer vs coder role would be switched regularly. Both would be thoroughly trained in finding and preventing vulnerabilities. Software could be open or closed with quality guaranteed by the review process, whose participants reduced likelihood of subversion by both parties.

@ next anonymous

“Open source can be verified, but closed source cannot. This one alone is a big difference.”

That’s entirely wrong. The first security evaluations were done on closed source code. The only high assurance security evaluations ever done were on closed code. There are many models that allow it without widespread dissemination of the code. The OSS competition, on the other hand, has never been evaluated above medium assurance, despite hundreds of millions of dollars’ worth of effort being put into Linux.

Closed source can be verified. You just have to trust the verifier. Just like with open source. There’s more potential for the latter, yet I’ve already shown with empirical evidence that open projects rarely get strong reviews or quality. And they’ve never achieved high security. The use of mutually distrusting parties in private evaluations can boost the trustworthiness of a closed source audit. High assurance design & review takes a team of pros with time, money, specialist knowledge, mandatory reviews, willingness to throw away all work due to a single flaw, and willingness to throw away legacy compatibility. This is seriously lacking in software development in general, and only a handful of people in IT history have been involved in open source, highly secure systems work.

“Sorry, I do not think you understand what security by obscurity means.”

It doesn’t mean anything actually. It’s a phrase tossed around by many that has about as many meanings. Some use it if entire design isn’t totally open, some use it if there’s any secret aspects of usage, and so on. Security engineers typically use it to refer to designs that entirely (or almost entirely) rely for protection on the secrecy of the mechanism used, while also not putting any real effort into proving mechanism’s security benefit. If there are good, proven mechanisms used in ways that cloak or obscure them, we call that “obfuscation” as a form of “defense in depth.”

My method can’t be security through obscurity, as it uses many proven methods in combination with one another, with the interactions and configurations themselves secret. Therefore, in security engineering terms, I gave a blueprint for an organization that secures its operations with proven techniques and a ton of obfuscation throughout, to force enemies to give themselves away if they try to undermine those techniques. Keeping tactics secret and fluid is a practice with proven effectiveness in defense via thousands of years of human wars. It helps secure information, too, if applied properly.

@ yesme

Good points re OpenBSD. Here’s another essay on that for you:

The Insecurity of OpenBSD

I also found this 2014 demo that includes a crack of OpenBSD. So, nothing has changed.

@ Wael

“…but I don’t have the time to look at millions of lines of code. Bugs can be at the code level, in the design and architecture, in bad implementations of a specification, etc… No single person has the time or the skill set to do all that. You must “trust” that someone did it for you. Closed source software is also verified, and you are asked to trust that a different set of people did the verification. My point is that “trust” is not a strong differentiator between open source and closed source in this context.”

Excellent point. You put it better than me. It’s mind-boggling to think that anyone could consider one of these projects vetted, as they’re so huge. The security (past) and separation (current) kernels were pushing the limit of verification technology at low tens of KLOC, with small modules, layering, and simple interfaces. Tens of thousands of lines of well-written code took years for security pros to rigorously verify. And people think (insert average FOSS project here) is getting reviews that mean something for security against TLAs? (laughs) No more than the for-profit developers.

@ Mr Pragma

“This step lies in proactively (without being asked to) offering an important, commonly accessible, and relatively simple way to verify the software. This builds trust.”

Very true. It’s why I give open source a higher rating on subversion resistance, if not penetration resistance in general. It has that potential and makes a nice first step in the overall process.

yesme May 15, 2014 6:44 PM

@Nick P

@Anonymous took OpenBSD as a holy, secure operating system. Although I think OpenBSD is today one of the most secure OSes there is, it’s the bigger picture that I am questioning.

The problem with the Internet is that holes can be exploited on a massive scale, with really serious consequences. And the fact that there are parties with multi-million dollar funding, expertise, resources, and motivation actively searching for these holes, that’s what makes it scary.

I think that open source (not FOSS; more BSD-like) is best when there is funding and a professional team. But the problem with that model is that the industry doesn’t pick it up, even though it has serious benefits. It’s probably that the competitive attitude doesn’t allow cooperation. And that is a fundamental flaw.

Btw, offtopic: I think I will change my nickname. It was kind of a joke, but it just doesn’t make sense anymore. Now I’m thinking about “Your Honor” 😉
Or something Dutch, such as windmill, wooden shoes, tulip…

yesme May 15, 2014 6:55 PM

On a second thought, I think it’s better to quit commenting at all. I just lack the fundamental expertise about security.

Wael May 15, 2014 7:44 PM


On a second thought, I think it’s better to quit commenting at all. I just lack the fundamental expertise about security.

Oh, no! Don’t do that! Go with your first thought and spawn a decipherable sockpuppet.

Mr. Pragma May 16, 2014 4:51 AM


I for one would miss your often insightful and well-reasoned comments.

So, please stay, cheese-mill *g*

Anorlunda May 16, 2014 7:13 AM

I work in critical infrastructure (the power grid). We too must use products that have been compromised by the NSA. That means the biggest threat vector could be the day when bad guys learn how to exploit those back doors.

Perhaps the event to tame the beast will come after a Chernobyl-scale disaster in the USA where blame can credibly be laid at the feet of government snoops. Then the public would learn that the risks due to weakened security exceed the risks of terrorist attacks. But everything I just said is BS, because government is so good at obfuscating blame.

It always comes back to what Bruce keeps telling us: the only thing that matters is trust. If trust is broken, we are screwed. Metaphorically speaking, should I just shut down the grid, go home, and tell the world to learn to live without electricity (because the grid, and everything else, can never be trusted)? The world economy still ticks, so trust can’t be broken that badly yet, but we are playing with very dangerous fire.

Steve Besch May 16, 2014 10:14 AM

All of this debate and opining about security has been going on for years. As has been pointed out, security is hard – really hard. But this most recent re-intensification, fueled by the Heartbleed back door, illustrates another point: you can never get security when there are people in positions of power and authority who are actively working to compromise it.

I am a programmer and have been, in one capacity or another, for 40+ years. Recently, I took a really hard look at the Heartbleed code. Now, keep in mind that this is an opinion, but something smells of “intentional” in this code. To better understand why I get this feeling, I tried to analyze the overall situation. First, this is code to implement a “heartbeat” in SSL, the sole purpose of which is to transmit a tiny bit of data back and forth so that each end of a communication knows the other is still alive. The idea is to keep the connection open. By definition, this should NEVER be more than a few bytes; 8 is more than enough. Second, any rational design would never allow any variation of this. The byte length would be set and always be exactly the same no matter what. Third, it turns out that this is also the most code-efficient way to write this feature. The whole need for any bounds checking simply goes away. In fact, for ultimate efficiency, nothing really needs to be sent back and forth at all except an empty packet.

The fact that this code is explicitly written to allow for asymmetrical, open data requests of arbitrary length directly from the SSL memory allocation, where all manner of sensitive data is stored, smacks of “INTENTIONAL”. The programmer himself has claimed that it was a simple mistake, and honestly, while this may be true, I have a hard time believing it. To be sure, if it came out that the NSA had a hand in this code fragment, even to the extent of having written it themselves, I would not be in the least surprised. American hero Ed Snowden’s documents do reveal that the NSA did in fact have an intense interest in cracking SSL, at just about the time this code was committed. It also seems that they have been exploiting it already for several years. I am not big on conspiracy theories, and a mere few weeks ago I would have laughed at this suggestion, but, sorry folks, it does smell a lot like an NSA rat. Here’s a suggestion for the media folks; to quote Deep Throat: follow the money!
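The missing bounds check can be modeled in a few lines. This is a toy Python sketch of the pattern, not the actual OpenSSL C code; the buffer contents and function names are invented for illustration:

```python
# Toy model of the Heartbleed pattern. A heartbeat record carries a
# claimed payload length plus the payload; the bug was echoing back
# 'claimed_len' bytes without checking the claim against the bytes
# actually received, so adjacent memory leaked out.
from typing import Optional

# Simulated process memory: a 5-byte payload sitting next to secrets.
MEMORY = bytearray(b"hello" + b"SECRET-KEY-MATERIAL")

def heartbeat_vulnerable(payload_len: int, claimed_len: int) -> bytes:
    """Trusts the attacker-supplied length and ignores the real one:
    reads past the payload into neighboring memory. (payload_len is
    unused; that is precisely the bug.)"""
    return bytes(MEMORY[:claimed_len])

def heartbeat_fixed(payload_len: int, claimed_len: int) -> Optional[bytes]:
    """The fix: drop any record whose claimed length exceeds the bytes
    actually received, instead of echoing whatever memory follows."""
    if claimed_len > payload_len:
        return None  # silently discard the malformed record
    return bytes(MEMORY[:claimed_len])
```

A request that honestly claims 5 bytes gets 5 bytes back from either version; a Heartbleed-style request claiming far more than it sent gets the adjacent secrets from the vulnerable version and nothing from the fixed one.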

yesme May 16, 2014 11:40 AM

@Steve Besch

(snipping lots of text) … “I am not big on conspiracy theories, and a mere few weeks ago I would have laughed at this suggestion, but, Sorry folks, it does smell a lot like an NSA rat. Here’s a suggestion for the Media folks – to quote DeepThroat: Follow the Money!”

Does it matter whether the guy who wrote Heartbleed is “guilty” or whether he made a mistake? Does it really matter? I don’t care about that at all.

I do care, in the case he is guilty, about who ordered it. And I don’t mean the poor little fish. I mean the guy on top. The guy in the White House who put his signature on Executive Order 13233[1] and the Patriot Act[2] that allowed BULLRUN[3] to happen. And who is still a free man btw.


Nick P May 16, 2014 12:02 PM

@ yesme

“Oh, no! Don’t do that! Go with your first thought and spawn a decipherable sockpuppet.” (Wael)

Haha his suggestion is a good one.

The thing about comments on technical matters is that one doesn’t need to be an expert in the domain in question to add value. If you make suggestions about what you don’t understand, then that’s probably detrimental. Yet any security initiative has many components, ranging from actually designing/building it to managing it to maintaining it to using it. People with expertise or experience at each level will look at a proposal with a different perspective and perhaps offer insight. This is especially valuable after a new effort is launched, with the “lessons learned” teaching us how to do it better. It’s uncommon for the technical stuff to make or break a project, so the lessons learned accordingly draw on the wisdom of non-technical people (esp. users or mgmt).

So, stick around and just focus on what you know or understand if you’re pushing something. If you’re trying to learn something, then there’s plenty more opportunities as the blog attracts knowledgeable people in many fields. I’ve learned quite a lot from many.

yesme May 16, 2014 12:15 PM

@Nick P

Someone was questioning the quality of this blog. That got me thinking (out loud).

But I guess we can’t help ourselves so I posted another message already 😉

Steve Besch May 16, 2014 12:31 PM


Does it matter whether the guy who wrote Heartbleed is “guilty” or whether he made a mistake? Does it really matter? I don’t care about that at all. I do care, in the case he is guilty, about who ordered it. And I don’t mean the poor little fish. I mean the guy on top.

I’m sorry. I assumed that everyone would have known about the reference to “Deep Throat”. When the Washington Post broke the Watergate scandal, that’s exactly what they were after: the guy calling the shots. The bit players (in this case the programmer) were only starting points from which to track back to the origin of the evil. That’s the relevance of the “follow the money” quote. If NSA money had any part in this, you have to start where the money wound up and follow it back to where it started (the NSA, maybe?). And, in fact, the only relevance of any “guilt” is in the sense you mention: if there is none, then look no further. On the other hand, if there is, then we need to trace back to the source.

yesme May 16, 2014 1:13 PM

@Steve Besch

I am aware of “Deep Throat” and Nixon’s little scheme.

Maybe you are right, and “follow the money” could work. But somehow I think these guys are too experienced to make such beginners’ mistakes.

The funny difference between Heartbleed and Watergate is that we know that the NSA (US gov) is behind the active weakening of Internet security but we don’t know whether Heartbleed was part of that, whereas with Watergate it was the other way around.

Steve Besch May 16, 2014 1:53 PM


“we know that the NSA (US gov) is behind the active weakening of Internet security but we don’t know whether Heartbleed was part of that”

Alas, for the time being this seems to be the case. But then, the lameness with which the NSA seems to proceed and the sheer volume of the Snowden documents at least leaves some hope that this may yet hit the fan.

Adjuvant May 16, 2014 2:02 PM

@yesme, Steve Besch:
If we’re going to bring up Watergate, I’ll have to share this excerpt from Russ Baker’s book. There was much more going on than the accepted narrative would suggest. The title of the post will be quite jarring, I realize, but sit it out: I think people will find the additional background both counterintuitive and supremely enlightening.

So I don’t lose everybody right off the bat, a few accolades regarding the book Family of Secrets, from which this is excerpted:
“One of the most important books of the past ten years” – Gore Vidal

“A tour de force….Family of Secrets has made me rethink even those events I witnessed with my own eyes” – Dan Rather

“Russ Baker’s work stands out for its fierce independence, fact-based reporting, and concern for what matters most to our democracy…A lot of us look to Russ to tell us what we didn’t know” – Bill Moyers

“This is the book people will be mining for years to come” – David Margolick, Newsweek and Vanity Fair

“Shocking in its disclosures, elegantly crafted, and faultlessly measured in its judgments…. Russ Baker’s Family of Secrets is sure to take its place as one of the most startling and influential works of American history and journalism.” – Roger Morris, author of Richard Milhous Nixon: The Rise of an American Politician and Partners in Power: The Clintons and Their America

Mr. Pragma May 16, 2014 2:30 PM

You ignited a funny curiosity there.

So, let’s assume that there are the evil grey suits (gov.) and the good casuals (the ICT guys).
Now, I happen to have dealt with diverse grey-suit guys from diverse countries: regional governments, federal agents, even some secret service guys.

My impression has pretty much always been that they know their laws, their paragraphs, and some relevant details about their field. I also found quite some of them, say, intellectually sharp in some ways (which probably translates to “not low-level guys”). But I would also feel very confident about correctly guessing whether any guy, state or private, is an ICT guy (tech) or a grey suit, because one could fit worlds in between the two: in terms of mindset, of approaching problems, of major personality factors, and more.

But we are to believe that the grey-suit drones pretty much each and every time f*ck the techies? Sorry, no way. That must be a fairy tale.

Not because the grey suits are stupid (they often are not at all), but because they are drone-minded, ignorant in many respects. And because their whole mindset quite reliably chooses enforcing over seducing; even if they somehow understood that seduction often delivers far more, and better, than coercion, they wouldn’t know how to do it.

I have been wondering about that aspect for quite some time now. Are we really to believe that they f*ck us and we just politely obey and smile? I don’t think so.

OK, that Heartbleed student may have been pushed to do it. Personally, btw., I rather assume he acted on Ferengi motivation.

But where are all those little mines and traps that we, the techies, installed? There must be thousands and thousands of them. And you bet that the grey suits are batshit paranoid about those.

Or, to say it in other words:
Do not consider the other side to be stupid. Rather, try to understand them and how they tick.
Do not concentrate almost exclusively on the bombs of the other side. Quite some bombs may be ours. But they’re still bombs, and you might want them not to explode in your server or network.
It’s not as simple as “they are evil and we are good”, particularly not for us, who are way less hive-minded than the grey suits. Or are we?

Steve Besch May 16, 2014 3:10 PM

@Mr. Pragma.

You make some interesting points. Nevertheless, it behooves us to remember that there is an arrogance to power that not infrequently inspires actions that are “above the law”. I also believe it is often the case that those who act above the law have the hubris to believe that they are in fact above the law. The potential danger in this type of thinking is, I think, self-evident. I only suggest that we all must remain vigilant and committed to the rule of law.

The NSA appears to be flagrantly ignoring the fact that the law prohibits a lot of what it is doing. Our history is not devoid of comparisons; J. Edgar Hoover comes immediately to mind. The horror in such examples is that these people often act as lone wolves, promoting their own agenda, without the permission, or indeed even the knowledge, of their superiors. If the book by Russ Baker recommended above teaches us anything at all, it is that one of the greatest responsibilities of a democratic nation’s public is never-ending vigilance, and a calling to task of those who would act above the law.

Mr. Pragma May 16, 2014 4:07 PM

Steve Besch (May 16, 2014 3:10 PM)

Up front: please do not take the following in any way as personal, OK?

“Above the law” is a hive-minded model, sorry. By this I do not mean to say that hive-mindedness, to a degree, is per se bad; it isn’t. Actually it’s needed, because for a society to work a “we” is needed, as well as some commonly agreed, “we”-based rules.

But that doesn’t change the fact that, no matter how much we are driven to believe, or even want to believe, otherwise, in the end we are individuals, and nature (instincts, …) is stronger than politics or social concepts.

One very basic problem is that while groups may be strong, the individuals in groups are by definition weakened. Just look at a bank robbery. Quite often the bad guys are outnumbered by the clients and employees. So why can the bad guys win and enforce their will? Because they have weapons? No! The guards have weapons, too. The reason, on a certain level, is simple and shocking: the clients and employees are bound by groupthink; the robbers aren’t. The guard will drop his weapon when a hostage is threatened, because according to group rules loss of life must be prevented at any cost (unless “the group” decides otherwise, e.g. war). Chances are the bad guys don’t care; if shooting is useful they will do it. In other words, they are not bound by common group rules.

To cut a long story short: the group is stronger than the individual by definition. A non-group individual, however, is stronger than both the group and its individuals.

That’s the underlying problem with politicians.
Actually, they play the most brutal version of that game: not caring about group rules themselves while at the same time enforcing them on the vast majority (knowing well that this very much weakens the citizens).

Another error often made is to confuse actual motivation and reason with stated motivation and reason. Of course the “powerful individuals” will proclaim group rules and their importance the loudest and shrillest; of course the “powerful individuals” (e.g. politicians) will dress whatever they do in group-rule clothes, like “to protect the people”.

Looking at it realistically: if that “above the law arrogance” stuff were true, why is it that the enforcing power is always with the powerful individuals rather than with “us”, the large group, the “sovereign” in the democratic concept?

Maybe, after all, it’s not that they are arrogant but that we are stupid, for having believed in their fairy tales of social constructs being stronger than nature (and their greed)?

Well noted, my point isn’t about politics; my point is about proper analysis.

Now, having the answer to how the 1% control and force their will upon the 99%, we may better understand that we, too, have the liberty to act individually, i.e. outside of the hive mind, anytime. Even worse (for them), their “kingdom” needs us to have and use some level of individuality, for instance to invent, design, and build new and better things (that can then feed the industry and finally the “powerful individuals”).
A major part of that whole NSA sh*t is about fearfully making sure that our individuality stays useful (to them) but never threatening.

Just consider the following: the powers that be classically paint hackers as (at least very close to) crackers, i.e. they paint curious, playful individuals as (at least potentially) evil, malevolent criminals. Why on earth would they do that, considering that a major part of their riches and power is built on some individuals’ creativity and thinking outside the box? Why don’t they rather welcome and groom hackers? Simple. Because to them the very combination of knowledge and a tendency to act outside of hive norms is bloody f*cking dangerous and threatening.

Another funny mind game: just look at the USA-instigated colour revolutions; technically, I mean, not politically. Now replace Maidan terrorists and nazis with hackers, and loud public actions with silent actions, say, in an operating system or network code …
For you and me that might be a funny mind game, an interesting thought experiment. For politicians and the NSA and the like, it’s a nightmare they would do anything and everything, incl. curfews and wars, to avoid … and to detect at the earliest stage possible. Et voilà, there you have the whole NSA, FBI, etc. sh*tload again.

Steve Besch May 19, 2014 9:09 AM

@Mr Pragma.

I hate to tell you, but I fear your point got lost somewhere in your overblown verbiage. I don’t intend to waste my time, or the time of anyone else, addressing the silliness of some of your half-baked ideas. Many of them seem to be based on half-truths, and others on badly misinterpreted notions that appear to have been derived from marginal social-science theory. In all, the glibness is just too thick to wade through.

Nevertheless, what I do perceive from your response is that you do not believe in the rule of law or in the essential evil that is embodied in ignoring it. This leads me to believe that at heart you may be an anarchist. Had I known this, I would never have wasted my time commenting on your post in the first place.

Mr. Pragma May 19, 2014 5:28 PM

Steve Besch (May 19, 2014 9:09 AM)

Sticking to the simple rule that “if someone doesn’t discuss the content but the person, or, even worse, ad hominems the person, there isn’t much left but to accept his total capitulation”, I refrain from commenting any further on your capitulation.

Thanks anyway.
