Blog: June 2013 Archives

My Talk at Google

Last week, I gave a talk at Google. It’s another talk about power and security, my continually evolving topic-of-the-moment that could very well become my next book. This installment is different than the previous talks and interviews, but not different enough that you should feel the need to watch it if you’ve seen the others.

There are things I got wrong. There are contradictions. There are questions I couldn’t answer. But that’s my process, and I’m okay with doing it semi-publicly. As always, I appreciate comments, criticisms, reading suggestions, and so on.

EDITED TO ADD (6/30): Two commentaries on the talk.

EDITED TO ADD (8/1): To date, 14,000 people have watched the talk.

Posted on June 28, 2013 at 2:42 PM • 18 Comments

Preventing Cell Phone Theft through Benefit Denial

Adding a remote kill switch to cell phones would deter theft.

Here we can see how the rise of the surveillance state permeates everything about computer security. On the face of it, this is a good idea. Assuming it works—that 1) it’s not possible for thieves to resurrect phones in order to resell them, and 2) it’s not possible to turn this system into a denial-of-service attack tool—it would deter crime. The general category of security is “benefit denial,” like ink tags attached to garments in retail stores and car radios that no longer function if removed. But given what we now know, do we trust that the government wouldn’t abuse this system and kill phones for other reasons? Do we trust that media companies won’t kill phones they decide are sharing copyrighted materials? Do we trust that phone companies won’t kill the phones of delinquent customers? What might have been a straightforward security system becomes a dangerous tool of control when you don’t trust those in power.

Posted on June 28, 2013 at 1:37 PM • 25 Comments

Pre-9/11 NSA Thinking

This quote is from the Spring 1997 issue of CRYPTOLOG, the internal NSA newsletter. The writer is William J. Black, Jr., the Director’s Special Assistant for Information Warfare.

Specifically, the focus is on the potential abuse of the Government’s applications of this new information technology that will result in an invasion of personal privacy. For us, this is difficult to understand. We are “the government,” and we have no interest in invading the personal privacy of U.S. citizens.

This is from a Seymour Hersh New Yorker interview with NSA Director General Michael Hayden in 1999:

When I asked Hayden about the agency’s capability for unwarranted spying on private citizens—in the unlikely event, of course, that the agency could somehow get the funding, the computer scientists, and the knowledge to begin making sense out of the Internet—his response was heated. “I’m a kid from Pittsburgh with two sons and a daughter who are closet libertarians,” he said. “I am not interested in doing anything that threatens the American people, and threatens the future of this agency. I can’t emphasize enough to you how careful we are. We have to be so careful—to make sure that America is never distrustful of the power and security we can provide.”

It’s easy to assume that both Black and Hayden were lying, but I believe them. I believe that, 15 years ago, the NSA was entirely focused on intercepting communications outside the US.

What changed? What caused the NSA to abandon its non-US charter and start spying on Americans? From what I’ve read, and from a bunch of informal conversations with NSA employees, it was the 9/11 terrorist attacks. That’s when everything changed, the gloves came off, and all the rules were thrown out the window. That the NSA’s interests coincided with the business model of the Internet is just a—lucky, in their view—coincidence.

Posted on June 27, 2013 at 11:49 AM • 46 Comments

Lessons from Biological Security

Nice essay:

The biological world is also open source in the sense that threats are always present, largely unpredictable, and always changing. Because of this, defensive measures that are perfectly designed for a particular threat leave you vulnerable to other ones. Imagine if our immune system were designed to deal only with a single strain of flu. In fact, our immune system works because it looks for the full spectrum of invaders—low-level viral infections, bacterial parasites, or virulent strains of a pandemic disease. Too often, we create security measures—such as the Department of Homeland Security’s BioWatch program—that spend too many resources to deal specifically with a very narrow range of threats on the risk spectrum.

Advocates of full-spectrum approaches for biological and chemical weapons argue that weaponized agents are really a very small part of the risk and that we are better off developing strategies—like better public-health-response systems—that can deal with everything from natural mutations of viruses to lab accidents to acts of terrorism. Likewise, cyber crime is likely a small part of your digital-security risk spectrum.

A full-spectrum approach favors generalized health over specialized defenses, and redundancy over efficiency. Organisms in nature, despite being constrained by resources, have evolved multiply redundant layers of security. DNA has multiple ways to code for the same proteins so that viral parasites can’t easily hack it and disrupt its structure. Multiple data-backup systems are a simple method that most sensible organizations employ, but you can get more clever than that. For example, redundancy in nature sometimes takes the form of leaving certain parts unsecure to ensure that essential parts can survive attack. Lizards easily shed their tails to predators to allow the rest of the body (with the critical reproductive machinery) to escape. There may be sacrificial systems or information you can offer up as a decoy for a cyber-predator, in which case an attack becomes an advantage, allowing your organization to see the nature of the attacker and giving you time to add further security in the critical part of your information infrastructure.

I recommend his book, Learning from the Octopus: How Secrets from Nature Can Help Us Fight Terrorist Attacks, Natural Disasters, and Disease.

Posted on June 27, 2013 at 6:34 AM • 11 Comments

Secrecy and Privacy

Interesting article on the history of, and the relationship between, secrecy and privacy.

As a matter of historical analysis, the relationship between secrecy and privacy can be stated in an axiom: the defense of privacy follows, and never precedes, the emergence of new technologies for the exposure of secrets. In other words, the case for privacy always comes too late. The horse is out of the barn. The post office has opened your mail. Your photograph is on Facebook. Google already knows that, notwithstanding your demographic, you hate kale.

Posted on June 26, 2013 at 12:35 PM • 6 Comments

US Offensive Cyberwar Policy

Today, the United States is conducting offensive cyberwar actions around the world.

More than passively eavesdropping, we’re penetrating and damaging foreign networks, both to conduct espionage and to ready them for attack. We’re creating custom-designed Internet weapons, pretargeted and ready to be “fired” against some piece of another country’s electronic infrastructure on a moment’s notice.

This is much worse than what we’re accusing China of doing to us. We’re pursuing policies that are both expensive and destabilizing and aren’t making the Internet any safer. We’re reacting from fear, and causing other countries to counter-react from fear. We’re ignoring resilience in favor of offense.

Welcome to the cyberwar arms race, an arms race that will define the Internet in the 21st century.

Presidential Policy Directive 20, issued last October and released by Edward Snowden, outlines US cyberwar policy. Most of it isn’t very interesting, but there are two paragraphs about “Offensive Cyber Effect Operations,” or OCEO, that are intriguing:

OCEO can offer unique and unconventional capabilities to advance US national objectives around the world with little or no warning to the adversary or target and with potential effects ranging from subtle to severely damaging. The development and sustainment of OCEO capabilities, however, may require considerable time and effort if access and tools for a specific target do not already exist.

The United States Government shall identify potential targets of national importance where OCEO can offer a favorable balance of effectiveness and risk as compared with other instruments of national power, establish and maintain OCEO capabilities integrated as appropriate with other US offensive capabilities, and execute those capabilities in a manner consistent with the provisions of this directive.

These two paragraphs, and another paragraph about OCEO, are the only parts of the document classified “top secret.” And that’s because what they’re saying is very dangerous.

Cyberattacks have the potential to be both immediate and devastating. They can disrupt communications systems, disable national infrastructure, or, as in the case of Stuxnet, destroy nuclear centrifuges; but only if they’ve been created and targeted beforehand. Before launching cyberattacks against another country, we have to go through several steps.

We have to study the details of the computer systems they’re running and determine the vulnerabilities of those systems. If we can’t find exploitable vulnerabilities, we need to create them: leaving “back doors,” in hacker speak. Then we have to build new cyberweapons designed specifically to attack those systems.

Sometimes we have to embed the hostile code in those networks—these are called “logic bombs”—to be unleashed in the future. And we have to keep penetrating those foreign networks, because computer systems always change and we need to ensure that the cyberweapons are still effective.

Like our nuclear arsenal during the Cold War, our cyberweapons arsenal must be pretargeted and ready to launch.

That’s what Obama directed the US Cyber Command to do. We can see glimpses of how effective we are in Snowden’s allegations that the NSA is currently penetrating foreign networks around the world: “We hack network backbones—like huge Internet routers, basically—that give us access to the communications of hundreds of thousands of computers without having to hack every single one.”

The NSA and the US Cyber Command are basically the same thing. They’re both at Fort Meade in Maryland, and they’re both led by Gen. Keith Alexander. The same people who hack network backbones are also building weapons to destroy those backbones. At a March Senate briefing, Alexander boasted of creating more than a dozen offensive cyber units.

Longtime NSA watcher James Bamford reached the same conclusion in his recent profile of Alexander and the US Cyber Command (written before the Snowden revelations). He discussed some of the many cyberweapons the US purchases:

According to Defense News’ C4ISR Journal and Bloomberg Businessweek, Endgame also offers its intelligence clients—agencies like Cyber Command, the NSA, the CIA, and British intelligence—a unique map showing them exactly where their targets are located. Dubbed Bonesaw, the map displays the geolocation and digital address of basically every device connected to the Internet around the world, providing what’s called network situational awareness. The client locates a region on the password-protected web-based map, then picks a country and city—say, Beijing, China. Next the client types in the name of the target organization, such as the Ministry of Public Security’s No. 3 Research Institute, which is responsible for computer security—or simply enters its address, 6 Zhengyi Road. The map will then display what software is running on the computers inside the facility, what types of malware some may contain, and a menu of custom-designed exploits that can be used to secretly gain entry. It can also pinpoint those devices infected with malware, such as the Conficker worm, as well as networks turned into botnets and zombies—the equivalent of a back door left open…

The buying and using of such a subscription by nation-states could be seen as an act of war. ‘If you are engaged in reconnaissance on an adversary’s systems, you are laying the electronic battlefield and preparing to use it’ wrote Mike Jacobs, a former NSA director for information assurance, in a McAfee report on cyberwarfare. ‘In my opinion, these activities constitute acts of war, or at least a prelude to future acts of war.’ The question is, who else is on the secretive company’s client list? Because there is as of yet no oversight or regulation of the cyberweapons trade, companies in the cyber-industrial complex are free to sell to whomever they wish. “It should be illegal,” said the former senior intelligence official involved in cyberwarfare. “I knew about Endgame when I was in intelligence. The intelligence community didn’t like it, but they’re the largest consumer of that business.”

That’s the key question: How much of what the United States is currently doing is an act of war by international definitions? Already we’re accusing China of penetrating our systems in order to map “military capabilities that could be exploited during a crisis.” What PPD-20 and Snowden describe is much worse, and certainly China, and other countries, are doing the same.

All of this mapping of vulnerabilities and keeping them secret for offensive use makes the Internet less secure, and these pretargeted, ready-to-unleash cyberweapons are destabilizing forces on international relationships. Rooting around other countries’ networks, analyzing vulnerabilities, creating back doors, and leaving logic bombs could easily be construed as acts of war. And all it takes is one overachieving national leader for this all to tumble into actual war.

It’s time to stop the madness. Yes, our military needs to invest in cyberwar capabilities, but we also need international rules of cyberwar, more transparency from our own government on what we are and are not doing, international cooperation between governments, and viable cyberweapons treaties. Yes, these are difficult. Yes, it’s a long, slow process. Yes, there won’t be international consensus, certainly not in the beginning. But even with all of those problems, it’s a better path to go down than the one we’re on now.

We can start by taking most of the money we’re investing in offensive cyberwar capabilities and spending it on national cyberspace resilience. MAD, mutually assured destruction, made sense because there were two superpowers opposing each other. On the Internet there are all sorts of different powers, from nation-states to much less organized groups. An arsenal of cyberweapons begs to be used, and, as we learned from Stuxnet, there’s always collateral damage to innocents when they are. We’re much safer with a strong defense than with a counterbalancing offense.

This essay originally appeared on CNN.com. It had the title “Has U.S. Started an Internet War?”—which I had nothing to do with. Almost always, editors choose titles for my essays without asking my opinion—or telling me beforehand.

EDITED TO ADD: Here’s an essay on the NSA’s—or Cyber Command’s—TAO: the Office of Tailored Access Operations. This is the group in charge of hacking China.

According to former NSA officials interviewed for this article, TAO’s mission is simple. It collects intelligence information on foreign targets by surreptitiously hacking into their computers and telecommunications systems, cracking passwords, compromising the computer security systems protecting the targeted computer, stealing the data stored on computer hard drives, and then copying all the messages and data traffic passing within the targeted email and text-messaging systems. The technical term of art used by NSA to describe these operations is computer network exploitation (CNE).

TAO is also responsible for developing the information that would allow the United States to destroy or damage foreign computer and telecommunications systems with a cyberattack if so directed by the president. The organization responsible for conducting such a cyberattack is US Cyber Command (Cybercom), whose headquarters is located at Fort Meade and whose chief is the director of the NSA, Gen. Keith Alexander.

None of this is new. Read this Seymour Hersh article on this subject from 2010.

Posted on June 21, 2013 at 11:43 AM • 45 Comments

The Japanese Response to Terrorism

Lessons from Japan’s response to Aum Shinrikyo:

Yet what’s as remarkable as Aum’s potential for mayhem is how little of it, on balance, they actually caused. Don’t misunderstand me: Aum’s crimes were horrific, not merely the terrible subway gassing but their long history of murder, intimidation, extortion, fraud, and exploitation. What they did was unforgivable, and the human cost, devastating. But at no point did Aum Shinrikyo represent an existential threat to Japan or its people. The death toll of Aum was several dozen; again, a terrible human cost, but not an existential threat. At no time was the territorial integrity of Japan threatened. At no time was the operational integrity of the Japanese government threatened. At no time was the day-to-day operation of the Japanese economy meaningfully threatened. The threat to the average Japanese citizen was effectively nil.

Just as important was what the Japanese government and people did not do. They didn’t panic. They didn’t make sweeping changes to their way of life. They didn’t implement a vast system of domestic surveillance. They didn’t suspend basic civil rights. They didn’t begin to capture, torture, and kill without due process. They didn’t, in other words, allow themselves to be terrorized. Instead, they addressed the threat. They investigated and arrested the cult’s leadership. They tried them in civilian courts and earned convictions through due process. They buried their dead. They mourned. And they moved on. In every sense, it was a rational, adult, mature response to a terrible terrorist act, one that remained largely in keeping with liberal democratic ideals.

Posted on June 21, 2013 at 6:25 AM • 26 Comments

New Details on Skype Eavesdropping

This article, on the cozy relationship between the commercial personal-data industry and the intelligence industry, has new information on the security of Skype.

Skype, the Internet-based calling service, began its own secret program, Project Chess, to explore the legal and technical issues in making Skype calls readily available to intelligence agencies and law enforcement officials, according to people briefed on the program who asked not to be named to avoid trouble with the intelligence agencies.

Project Chess, which has never been previously disclosed, was small, limited to fewer than a dozen people inside Skype, and was developed as the company had sometimes contentious talks with the government over legal issues, said one of the people briefed on the project. The project began about five years ago, before most of the company was sold by its parent, eBay, to outside investors in 2009. Microsoft acquired Skype in an $8.5 billion deal that was completed in October 2011.

A Skype executive denied last year in a blog post that recent changes in the way Skype operated were made at the behest of Microsoft to make snooping easier for law enforcement. It appears, however, that Skype figured out how to cooperate with the intelligence community before Microsoft took over the company, according to documents leaked by Edward J. Snowden, a former contractor for the N.S.A. One of the documents about the Prism program made public by Mr. Snowden says Skype joined Prism on Feb. 6, 2011.

Reread that Skype denial from last July, knowing that at the time the company knew it was giving the NSA access to customer communications. Notice how it is precisely worded to be technically accurate, yet leave the reader with the wrong conclusion. This is where we are with all the tech companies right now; we can’t trust their denials, just as we can’t trust the NSA—or the FBI—when it denies programs, capabilities, or practices.

Back in January, we wondered whom Skype lets spy on its users. Now we know.

Posted on June 20, 2013 at 2:42 PM • 36 Comments

The US Uses Vulnerability Data for Offensive Purposes

Companies allow US intelligence to exploit vulnerabilities before they patch them:

Microsoft Corp. (MSFT), the world’s largest software company, provides intelligence agencies with information about bugs in its popular software before it publicly releases a fix, according to two people familiar with the process. That information can be used to protect government computers and to access the computers of terrorists or military foes.

Redmond, Washington-based Microsoft (MSFT) and other software or Internet security companies have been aware that this type of early alert allowed the U.S. to exploit vulnerabilities in software sold to foreign governments, according to two U.S. officials. Microsoft doesn’t ask and can’t be told how the government uses such tip-offs, said the officials, who asked not to be identified because the matter is confidential.

No word on whether these companies would delay a patch if asked nicely—or if there’s any way the government can require them to. Anyone feel safer because of this?

Posted on June 20, 2013 at 6:04 AM • 40 Comments

Petition the NSA to Subject its Surveillance Program to Public Comment

I have signed a petition calling on the NSA to “suspend its domestic surveillance program pending public comment.” This is what’s going on:

In a request today to National Security Agency director Keith Alexander and Defense Secretary Chuck Hagel, the group argues that the NSA’s recently revealed domestic surveillance program is “unlawful” because the agency neglected to request public comments first. A federal appeals court previously ruled that was necessary in a lawsuit involving airport body scanners.

“In simple terms, a line has been crossed,” Marc Rotenberg, executive director of the Electronic Privacy Information Center, told CNET. “The agency’s function has been transformed, and we think the public should have an opportunity to say something about that.”

It’s an ambitious—and untested—legal argument. No court appears to have ever ruled that the Administrative Procedure Act, which can require agencies to solicit public comment, has applied to the supersecret intelligence community. The APA explicitly excludes from judicial review, for instance, “military authority exercised in the field in time of war.”

EPIC is relying on a July 2011 decision (PDF) it obtained from the U.S. Court of Appeals for the D.C. Circuit dealing with installing controversial full-body scanners at airports. The Transportation Security Agency, the court said, was required to obtain comment on a rule that “substantively affects the public.”

This isn’t an empty exercise. While it’s unlikely that a judge will order the NSA to suspend the program pending public approval, the process will put pressure on Washington to subject the NSA to more oversight, and pressure the NSA into more transparency. We’ve used these tactics before. Two decades ago, EPIC launched a similar petition against the Clipper Chip, a process that eventually led to the Clinton administration and the FBI abandoning the effort. And EPIC’s more recent action against TSA full-body scanners is one of the reasons we have privacy safeguards on the millimeter wave scanners they are still using.

The more people who sign this petition, the clearer the message it sends to Washington: a message that people care about the privacy of their telephone records, Internet transactions, and online communications. Secret judges should not be allowed to use secret interpretations of secret laws to authorize the NSA to engage in domestic surveillance. Sooner or later, a court is going to recognize that. Until then, the more noise the better.

Add your voice here. It just might work.

Posted on June 19, 2013 at 2:18 PM • 29 Comments

Finding Sociopaths on Facebook

On his blog, Scott Adams suggests that it might be possible to identify sociopaths based on their interactions on social media.

My hypothesis is that science will someday be able to identify sociopaths and terrorists by their patterns of Facebook and Internet use. I’ll bet normal people interact with Facebook in ways that sociopaths and terrorists couldn’t duplicate.

Anyone can post fake photos and acquire lots of friends who are actually acquaintances. But I’ll bet there are so many patterns and tendencies of “normal” use on Facebook that a terrorist wouldn’t be able to successfully fake it.

Okay, but so what? Imagine you had such an amazingly accurate test…then what? Do we investigate those who test positive, even though there’s no suspicion that they’ve actually done anything? Do we follow them around? Subject them to additional screening at airports? Throw them in jail because we know the streets will be safer because of it? Do we want to live in a Minority Report world?

The problem isn’t just that such a system is wrong, it’s that the mathematics of testing makes this sort of thing pretty ineffective in practice. It’s called the “base rate fallacy.” Suppose you have a test that’s 90% accurate in identifying both sociopaths and non-sociopaths. If you assume that 4% of people are sociopaths, then the chance of someone who tests positive actually being a sociopath is only about 27%. (For every thousand people tested, 90% of the 40 sociopaths will test positive, but so will 10% of the 960 non-sociopaths.) You have to postulate a test with an amazing 99% accuracy—only a 1% false positive rate—even to have an 80% chance of someone testing positive actually being a sociopath.
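
Here is a minimal sketch of that arithmetic in Python; the prevalence and accuracy figures are the hypothetical ones from the paragraph above, not real-world estimates:

    def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
        # Chance that someone who tests positive really is a sociopath.
        true_positives = prevalence * sensitivity
        false_positives = (1 - prevalence) * false_positive_rate
        return true_positives / (true_positives + false_positives)

    # A "90% accurate" test against a 4% base rate: only about a 27% chance
    # that a positive result points at an actual sociopath.
    print(positive_predictive_value(0.04, 0.90, 0.10))  # ~0.27

    # Even a 99%-accurate test (1% false positives) only gets to about 80%.
    print(positive_predictive_value(0.04, 0.99, 0.01))  # ~0.80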

This fallacy isn’t new. It’s the same thinking that caused us to intern Japanese-Americans during World War II, stop people in their cars because they’re black, and frisk them at airports because they’re Muslim. It’s the same thinking behind massive NSA surveillance programs like PRISM. It’s one of the things that scares me about police DNA databases.

Many authors have written stories about thoughtcrime. Who has written about genecrime?

BTW, if you want to meet an actual sociopath, I recommend this book (review here) and this blog.

Posted on June 19, 2013 at 11:19 AM • 57 Comments

Details of NSA Data Requests from US Corporations

Facebook (here), Apple (here), and Yahoo (here) have all released details of US government requests for data. They each say that they’ve turned over user data for about 10,000 people, although the time frames are different. The exact number isn’t important; what’s important is that it’s much lower than the millions implied by the PRISM document.

Now the big question: do we believe them? If we don’t, what would it take before we did believe them?

Posted on June 18, 2013 at 4:00 PM • 47 Comments

NSA Secrecy and Personal Privacy

In an excellent essay about privacy and secrecy, law professor Daniel Solove makes an important point. There are two types of NSA secrecy being discussed. It’s easy to confuse them, but they’re very different.

Of course, if the government is trying to gather data about a particular suspect, keeping the specifics of surveillance efforts secret will decrease the likelihood of that suspect altering his or her behavior.

But secrecy at the level of an individual suspect is different from keeping the very existence of massive surveillance programs secret. The public must know about the general outlines of surveillance activities in order to evaluate whether the government is achieving the appropriate balance between privacy and security. What kind of information is gathered? How is it used? How securely is it kept? What kind of oversight is there? Are these activities even legal? These questions can’t be answered, and the government can’t be held accountable, if surveillance programs are completely classified.

This distinction is also becoming important as Snowden keeps talking. There are a lot of articles about Edward Snowden cooperating with the Chinese government. I have no idea if this is true—Snowden denies it—or if it’s part of an American smear campaign designed to change the debate from the NSA surveillance programs to the whistleblower’s actions. (It worked against Assange.) In anticipation of the inevitable questions, I want to change a previous assessment statement: I consider Snowden a hero for whistleblowing on the existence and details of the NSA surveillance programs, but not for revealing specific operational secrets to the Chinese government. Charles Pierce wishes Snowden would stop talking. I agree; the more this story is about him the less it is about the NSA. Stop giving interviews and let the documents do the talking.

Back to Daniel Solove, this excellent 2011 essay on the value of privacy is making the rounds again. And it should.

Many commentators had been using the metaphor of George Orwell’s 1984 to describe the problems created by the collection and use of personal data. I contended that the Orwell metaphor, which focuses on the harms of surveillance (such as inhibition and social control) might be apt to describe law enforcement’s monitoring of citizens. But much of the data gathered in computer databases is not particularly sensitive, such as one’s race, birth date, gender, address, or marital status. Many people do not care about concealing the hotels they stay at, the cars they own or rent, or the kind of beverages they drink. People often do not take many steps to keep such information secret. Frequently, though not always, people’s activities would not be inhibited if others knew this information.

I suggested a different metaphor to capture the problems: Franz Kafka’s The Trial, which depicts a bureaucracy with inscrutable purposes that uses people’s information to make important decisions about them, yet denies the people the ability to participate in how their information is used. The problems captured by the Kafka metaphor are of a different sort than the problems caused by surveillance. They often do not result in inhibition or chilling. Instead, they are problems of information processing—the storage, use, or analysis of data—rather than information collection. They affect the power relationships between people and the institutions of the modern state. They not only frustrate the individual by creating a sense of helplessness and powerlessness, but they also affect social structure by altering the kind of relationships people have with the institutions that make important decisions about their lives.

The whole essay is worth reading, as is—I hope—my essay on the value of privacy from 2006.

I have come to believe that the solution to all of this is regulation. And it’s not going to be the regulation of data collection; it’s going to be the regulation of data use.

EDITED TO ADD (6/18): A good rebuttal to the “nothing to hide” argument.

Posted on June 18, 2013 at 11:02 AM • 35 Comments

Evidence that the NSA Is Storing Voice Content, Not Just Metadata

Interesting speculation that the NSA is storing everyone’s phone calls, and not just metadata. Definitely worth reading.

I expressed skepticism about this just a month ago. My assumption had always been that everyone’s compressed voice calls are just too much data to move around and store. Now, I don’t know.

There’s a bit of a conspiracy-theory air to all of this speculation, but underestimating what the NSA will do is a mistake. General Alexander has told members of Congress that the NSA can record the contents of phone calls. And it has the technical capability.

Earlier reports have indicated that the NSA has the ability to record nearly all domestic and international phone calls—in case an analyst needed to access the recordings in the future. A Wired magazine article last year disclosed that the NSA has established “listening posts” that allow the agency to collect and sift through billions of phone calls through a massive new data center in Utah, “whether they originate within the country or overseas.” That includes not just metadata, but also the contents of the communications.

William Binney, a former NSA technical director who helped to modernize the agency’s worldwide eavesdropping network, told the Daily Caller this week that the NSA records the phone calls of 500,000 to 1 million people who are on its so-called target list, and perhaps even more. “They look through these phone numbers and they target those and that’s what they record,” Binney said.

Brewster Kahle, a computer engineer who founded the Internet Archive, has vast experience storing large amounts of data. He created a spreadsheet this week estimating that the cost to store all domestic phone calls a year in cloud storage for data-mining purposes would be about $27 million per year, not counting the cost of extra security for a top-secret program and security clearances for the people involved.
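
To get a feel for how an estimate like that is built, here is a rough back-of-the-envelope sketch in Python. The call volume, codec bitrate, and storage price are illustrative assumptions of mine, not figures from Kahle’s spreadsheet:

    # Rough estimate of storing a year of US domestic phone calls.
    # All inputs are illustrative assumptions, not Kahle's actual numbers.
    call_minutes_per_day = 3e9           # assumed US call volume (minutes/day)
    codec_kbps = 8                       # assumed compressed-voice bitrate
    storage_cost_per_pb_year = 400_000   # assumed all-in cloud cost (USD/PB/year)

    bytes_per_minute = codec_kbps * 1000 / 8 * 60   # 60 kB per call-minute
    pb_per_year = call_minutes_per_day * bytes_per_minute * 365 / 1e15

    print(f"~{pb_per_year:.0f} PB/year, "
          f"~${pb_per_year * storage_cost_per_pb_year / 1e6:.0f}M/year to store")

The exact figure isn’t the point; the point is that plausible inputs put the cost in the tens of millions of dollars a year.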

I believe that, to the extent that the NSA is analyzing and storing conversations, they’re doing speech-to-text as close to the source as possible and working with that. Even if you have to store the audio for conversations in foreign languages, or for snippets of conversations the conversion software is unsure of, it’s a lot fewer bits to move around and deal with.
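
A quick comparison, again with assumed round numbers for bitrate and speaking pace, shows why transcripts are so much cheaper to handle than audio:

    # Size of one minute of compressed voice vs. one minute of transcript.
    # Bitrate and speaking rate are assumed round numbers for illustration.
    audio_bytes_per_min = 8_000 / 8 * 60    # 8 kbps voice codec -> 60 kB/min
    words_per_min = 150                     # typical conversational pace
    text_bytes_per_min = words_per_min * 6  # roughly 6 bytes per word

    print(f"audio: {audio_bytes_per_min / 1000:.0f} kB/min, "
          f"text: {text_bytes_per_min / 1000:.1f} kB/min, "
          f"ratio ~{audio_bytes_per_min / text_bytes_per_min:.0f}:1")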

And, by the way, I hate the term “metadata.” What’s wrong with “traffic analysis,” which is what we’ve always called that sort of thing?

Posted on June 18, 2013 at 5:57 AM • 82 Comments

Blowback from the NSA Surveillance

There’s one piece of blowback that isn’t being discussed—aside from the fact that Snowden has killed the chances of any liberal arts major getting a DoD job for at least a decade—and that’s how the massive NSA surveillance of the Internet affects the US’s role in Internet governance.

Ron Deibert makes this point:

But there are unintended consequences of the NSA scandal that will undermine U.S. foreign policy interests—in particular, the “Internet Freedom” agenda espoused by the U.S. State Department and its allies.

The revelations that have emerged will undoubtedly trigger a reaction abroad as policymakers and ordinary users realize the huge disadvantages of their dependence on U.S.-controlled networks in social media, cloud computing, and telecommunications, and of the formidable resources that are deployed by U.S. national security agencies to mine and monitor those networks.

Writing about the new Internet nationalism, I talked about the ITU meeting in Dubai last fall, and the attempt of some countries to wrest control of the Internet from the US. That movement just got a huge PR boost. Now, when countries like Russia and Iran say the US is simply too untrustworthy to manage the Internet, no one will be able to argue.

We can’t fight for Internet freedom around the world, then turn around and destroy it back home. Even if we don’t see the contradiction, the rest of the world does.

Posted on June 17, 2013 at 6:13 AM • 71 Comments

Sixth Annual Movie-Plot Threat Contest Semifinalists

On April 1, I announced the Sixth Annual Movie Plot Threat Contest:

I want a cyberwar movie-plot threat. (For those who don’t know, a movie-plot threat is a scare story that would make a great movie plot, but is much too specific to build security policy around.) Not the Chinese attacking our power grid or shutting off 911 emergency services—people are already scaring our legislators with that sort of stuff. I want something good, something no one has thought of before.

Submissions are in, and—apologies that this is a month late, but I completely forgot about it—here are the semifinalists.

  1. Crashing satellites, by Chris Battey.
  2. Attacking Dutch dams, by Russell Thomas.
  3. Attacking a drug dispensing system, by Dave.
  4. Attacking cars through their diagnostic ports, by RSaunders.
  5. Embedded kill switches in chips, by Shogun.

Cast your vote by number; voting closes at the end of the month.

Posted on June 14, 2013 at 12:20 PM

Ricin as a Terrorist Tool

This paper (full paper behind paywall)—from Environment International (2009)—does a good job of separating fact from fiction:

Abstract: In recent years there has been an increased concern regarding the potential use of chemical and biological weapons for mass urban terror. In particular, there are concerns that ricin could be employed as such an agent. This has been reinforced by recent high profile cases involving ricin, and its use during the cold war to assassinate a high profile communist dissident. Nevertheless, despite these events, does it deserve such a reputation? Ricin is clearly toxic, though its level of risk depends on the route of entry. By ingestion, the pathology of ricin is largely restricted to the gastrointestinal tract where it may cause mucosal injuries; with appropriate treatment, most patients will make a full recovery. As an agent of terror, it could be used to contaminate an urban water supply, with the intent of causing lethality in a large urban population. However, a substantial mass of pure ricin powder would be required. Such an exercise would be impossible to achieve covertly and would not guarantee success due to variables such as reticulation management, chlorination, mixing, bacterial degradation and ultra-violet light. By injection, ricin is lethal; however, while parenteral delivery is an ideal route for assassination, it is not realistic for an urban population. Dermal absorption of ricin has not been demonstrated. Ricin is also lethal by inhalation. Low doses can lead to progressive and diffuse pulmonary oedema with associated inflammation and necrosis of the alveolar pneumocytes. However, the risk of toxicity is dependent on the aerodynamic equivalent diameter (AED) of the ricin particles. The AED, which is an indicator of the aerodynamic behaviour of a particle, must be of sufficiently low micron size as to target the human alveoli and thereby cause major toxic effects. To target a large population would also necessitate a quantity of powder in excess of several metric tons. The technical and logistical skills required to formulate such a mass of powder to the required size is beyond the ability of terrorists who typically operate out of a kitchen in a small urban dwelling or in a small ill-equipped laboratory. Ricin as a toxin is deadly but as an agent of bioterror it is unsuitable and therefore does not deserve the press attention and subsequent public alarm that has been created.

This paper lists all known intoxication attempts, including the famous Markov assassination.

Posted on June 14, 2013 at 7:15 AM • 20 Comments

Trading Privacy for Convenience

Ray Wang makes an important point about trust and our data:

This is the paradox. The companies contending to win our trust to manage our digital identities all seem to have complementary (or competing) business models that breach that trust by selling our data.

…and by turning it over to the government.

The current surveillance state is a result of a government/corporate partnership, and our willingness to give up privacy for convenience.

If the government demanded that we all carry tracking devices 24/7, we would rebel. Yet we all carry cell phones. If the government demanded that we deposit copies of all of our messages to each other with the police, we’d declare their actions unconstitutional. Yet we all use Gmail and Facebook messaging and SMS. If the government demanded that we give them access to all the photographs we take, and that we identify all of the people in them and tag them with locations, we’d refuse. Yet we do exactly that on Flickr and other sites.

Ray Ozzie was right when he said that we got what we asked for when we told the government we were scared and that they should do whatever they wanted to make us feel safer. But we also got what we asked for when we traded our privacy for convenience, trusting these corporations to look out for our best interests.

We’re living in a world of feudal security. And if you watch Game of Thrones, you know that feudalism benefits the powerful—at the expense of the peasants.

Last night, I was on All In with Chris Hayes (parts one and two). One of the things we talked about after the show was over is how technological solutions only work around the margins. That’s not a cause for despair. Think about technological solutions to murder. Yes, they exist—wearing a bullet-proof vest, for example—but they’re not really viable. The way we protect ourselves from murder is through laws. This is how we’re also going to protect our privacy.

EDITED TO ADD (6/18): The Onion nailed it back in 2011.

Posted on June 13, 2013 at 4:06 PM • 32 Comments

More on Feudal Security

Facebook regularly abuses the privacy of its users. Google has stopped supporting its popular RSS reader. Apple prohibits all iPhone apps that are political or sexual. Microsoft might be cooperating with some governments to spy on Skype calls, but we don’t know which ones. Both Twitter and LinkedIn have recently suffered security breaches that affected the data of hundreds of thousands of their users.

If you’ve started to think of yourself as a hapless peasant in a Game of Thrones power struggle, you’re more right than you may realize. These are not traditional companies, and we are not traditional customers. These are feudal lords, and we are their vassals, peasants, and serfs.

Power has shifted in IT, in favor of both cloud-service providers and closed-platform vendors. This power shift affects many things, and it profoundly affects security.

Traditionally, computer security was the user’s responsibility. Users purchased their own antivirus software and firewalls, and any breaches were blamed on their inattentiveness. It’s kind of a crazy business model. Normally we expect the products and services we buy to be safe and secure, but in IT we tolerated lousy products and supported an enormous aftermarket for security.

Now that the IT industry has matured, we expect more security “out of the box.” This has become possible largely because of two technology trends: cloud computing and vendor-controlled platforms. The first means that most of our data resides on other networks: Google Docs, Salesforce.com, Facebook, Gmail. The second means that our new Internet devices are both closed and controlled by the vendors, giving us limited configuration control: iPhones, ChromeBooks, Kindles, BlackBerry PDAs. Meanwhile, our relationship with IT has changed. We used to use our computers to do things. We now use our vendor-controlled computing devices to go places. All of these places are owned by someone.

The new security model is that someone else takes care of it—without telling us any of the details. I have no control over the security of my Gmail or my photos on Flickr. I can’t demand greater security for my presentations on Prezi or my task list on Trello, no matter how confidential they are. I can’t audit any of these cloud services. I can’t delete cookies on my iPad or ensure that files are securely erased. Updates on my Kindle happen automatically, without my knowledge or consent. I have so little visibility into the security of Facebook that I have no idea what operating system they’re using.

There are a lot of good reasons why we’re all flocking to these cloud services and vendor-controlled platforms. The benefits are enormous, from cost to convenience to reliability to security itself. But it is inherently a feudal relationship. We cede control of our data and computing platforms to these companies and trust that they will treat us well and protect us from harm. And if we pledge complete allegiance to them—if we let them control our email and calendar and address book and photos and everything—we get even more benefits. We become their vassals; or, on a bad day, their serfs.

There are a lot of feudal lords out there. Google and Apple are the obvious ones, but Microsoft is trying to control both user data and the end-user platform as well. Facebook is another lord, controlling much of the socializing we do on the Internet. Other feudal lords are smaller and more specialized—Amazon, Yahoo, Verizon, and so on—but the model is the same.

To be sure, feudal security has its advantages. These companies are much better at security than the average user. Automatic backup has saved a lot of data after hardware failures, user mistakes, and malware infections. Automatic updates have increased security dramatically. This is also true for small organizations; they are more secure than they would be if they tried to do it themselves. For large corporations with dedicated IT security departments, the benefits are less clear. Sure, even large companies outsource critical functions like tax preparation and cleaning services, but large companies have specific requirements for security, data retention, audit, and so on—and that’s just not possible with most of these feudal lords.

Feudal security also has its risks. Vendors can, and do, make security mistakes affecting hundreds of thousands of people. Vendors can lock people into relationships, making it hard for them to take their data and leave. Vendors can act arbitrarily, against our interests; Facebook regularly does this when it changes people’s defaults, implements new features, or modifies its privacy policy. Many vendors give our data to the government without notice, consent, or a warrant; almost all sell it for profit. This isn’t surprising, really; companies should be expected to act in their own self-interest and not in their users’ best interest.

The feudal relationship is inherently based on power. In Medieval Europe, people would pledge their allegiance to a feudal lord in exchange for that lord’s protection. This arrangement changed as the lords realized that they had all the power and could do whatever they wanted. Vassals were used and abused; peasants were tied to their land and became serfs.

It’s the Internet lords’ popularity and ubiquity that enable them to profit; laws and government relationships make it easier for them to hold onto power. These lords are vying with each other for profits and power. By spending time on their sites and giving them our personal information—whether through search queries, e-mails, status updates, likes, or simply our behavioral characteristics—we are providing the raw material for that struggle. In this way we are like serfs, tilling the land for our feudal lords. If you don’t believe me, try to take your data with you when you leave Facebook. And when war breaks out among the giants, we become collateral damage.

So how do we survive? Increasingly, we have little alternative but to trust someone, so we need to decide who we trust—and who we don’t—and then act accordingly. This isn’t easy; our feudal lords go out of their way not to be transparent about their actions, their security, or much of anything. Use whatever power you have—as individuals, none; as large corporations, more—to negotiate with your lords. And, finally, don’t be extreme in any way: politically, socially, culturally. Yes, you can be shut down without recourse, but it’s usually those on the edges that are affected. Not much solace, I agree, but it’s something.

On the policy side, we have an action plan. In the short term, we need to keep circumvention—the ability to modify our hardware, software, and data files—legal and preserve net neutrality. Both of these things limit how much the lords can take advantage of us, and they increase the possibility that the market will force them to be more benevolent. The last thing we want is the government—that’s us—spending resources to enforce one particular business model over another and stifling competition.

In the longer term, we all need to work to reduce the power imbalance. Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad hoc and one-sided. We have no choice but to trust the lords, but we receive very few assurances in return. The lords have a lot of rights, but few responsibilities or limits. We need to balance this relationship, and government intervention is the only way we’re going to get it. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people.

We need a similar process to rein in our Internet lords, and it’s not something that market forces are likely to provide. The very definition of power is changing, and the issues are far bigger than the Internet and our relationships with our IT providers.

This essay originally appeared on the Harvard Business Review website. It is an update of this earlier essay on the same topic. “Feudal security” is a metaphor I have been using a lot recently; I wrote this essay without rereading my previous essay.

EDITED TO ADD (6/13): There is another way the feudal metaphor applies to the Internet. There is no commons; every part of the Internet is owned by someone. This article explores that aspect of the metaphor.

Posted on June 13, 2013 at 11:34 AM • 33 Comments

Essays Related to NSA Spying Documents

Here’s a quick list of some of my older writings that are related to the current NSA spying documents:

Much more here.

EDITED TO ADD (6/14): More essays by others: Claims that PRISM foiled a terrorist attack have been debunked. A collection of headlines. Interesting comments by someone who thinks Snowden is a well-intentioned fool. The Economist speculates on the political factors that would lead Obama to allow this.

Posted on June 13, 2013 at 6:09 AM • 9 Comments

Prosecuting Snowden

Edward Snowden broke the law by releasing classified information. This isn’t under debate; it’s something everyone with a security clearance knows. It’s written in plain English on the documents you have to sign when you get a security clearance, and it’s part of the culture. The law is there for a good reason, and secrecy has an important role in military defense.

But before the Justice Department prosecutes Snowden, there are some other investigations that ought to happen.

We need to determine whether these National Security Agency programs are themselves legal. The administration has successfully barred anyone from bringing a lawsuit challenging these laws, on the grounds of national secrecy. Now that we know those arguments are without merit, it’s time for those court challenges.

It’s clear to me that some of the NSA programs exposed by Snowden violate the Constitution and others violate existing laws. Other people have an opposite view. The courts need to decide.

We need to determine whether classifying these programs is legal. Keeping things secret from the people is a very dangerous practice in a democracy, and the government is permitted to do so only under very specific circumstances. Reading the documents leaked so far, I don’t see anything that needs to be kept secret. The argument that exposing these documents helps the terrorists doesn’t even pass the laugh test; there’s nothing here that changes anything any potential terrorist would do or not do. But in any case, now that the documents are public, the courts need to rule on the legality of their secrecy.

And we need to determine how we treat whistle-blowers in this country. We have whistle-blower protection laws that apply in some cases, particularly when exposing fraud and other illegal behavior. NSA officials have repeatedly lied to Congress about the existence, and details, of these programs.

Only after all of these legal issues have been resolved should any prosecution of Snowden move forward. Because only then will we know the full extent of what he did, and how much of it is justified.

I believe that history will hail Snowden as a hero—his whistle-blowing exposed a surveillance state and a secrecy machine run amok. I’m less optimistic about how the present day will treat him, and hope that the debate right now is less about the man and more about the government he exposed.

This essay was originally published on the New York Times Room for Debate blog, as part of a series of essays on the topic.

EDITED TO ADD (6/13): There’s a big discussion of this on Reddit.

Posted on June 12, 2013 at 6:16 AM • 128 Comments

The Psychology of Conspiracy Theories

Interesting.

Crazy as these theories are, those propagating them are not—they’re quite normal, in fact. But recent scientific research tells us this much: if you think one of the theories above is plausible, you probably feel the same way about the others, even though they contradict one another. And it’s very likely that this isn’t the only news story that makes you feel as if shadowy forces are behind major world events.

“The best predictor of belief in a conspiracy theory is belief in other conspiracy theories,” says Viren Swami, a psychology professor who studies conspiracy belief at the University of Westminster in England. Psychologists say that’s because a conspiracy theory isn’t so much a response to a single event as it is an expression of an overarching worldview.

[…]

Our access to high-quality information has not, unfortunately, ushered in an age in which disagreements of this sort can easily be solved with a quick Google search. In fact, the Internet has made things worse. Confirmation bias—the tendency to pay more attention to evidence that supports what you already believe—is a well-documented and common human failing. People have been writing about it for centuries. In recent years, though, researchers have found that confirmation bias is not easy to overcome. You can’t just drown it in facts.

Posted on June 11, 2013 at 12:30 PM • 37 Comments

Trust in IT

Ignore the sensationalist headline. This article is a good summary of the need for trust in IT, and provides some ideas for how to enable more of it.

Virtually everything we work with on a day-to-day basis is built by someone else. Avoiding insanity requires trusting those who designed, developed and manufactured the instruments of our daily existence.

All these other industries we rely on have evolved codes of conduct, regulations, and ultimately laws to ensure minimum quality, reliability and trust. In this light, I find the modern technosphere’s complete disdain for obtaining and retaining trust baffling, arrogant and at times enraging.

Posted on June 11, 2013 at 6:21 AM • 18 Comments

Government Secrets and the Need for Whistle-blowers

Yesterday, we learned that the NSA received all calling records from Verizon customers for a three-month period starting in April. That’s everything except the voice content: who called whom, where they were, how long the call lasted—for millions of people, both Americans and foreigners. This “metadata” allows the government to track the movements of everyone during that period, and build a detailed picture of who talks to whom. It’s exactly the same data the Justice Department collected about AP journalists.

The Guardian delivered this revelation after receiving a copy of a secret memo about this—presumably from a whistle-blower. We don’t know if the other phone companies handed data to the NSA too. We don’t know if this was a one-off demand or a continuously renewed demand; the order started a few days after the Boston bombers were captured by police.

We don’t know a lot about how the government spies on us, but we know some things. We know the FBI has issued tens of thousands of ultra-secret National Security Letters to collect all sorts of data on people—we believe on millions of people—and has been abusing them to spy on cloud-computer users. We know it can collect a wide array of personal data from the Internet without a warrant. We also know that the FBI has been intercepting cell-phone data, all but voice content, for the past 20 years without a warrant, and can use the microphone on some powered-off cell phones as a room bug—presumably only with a warrant.

We know that the NSA has many domestic-surveillance and data-mining programs with codenames like Trailblazer, Stellar Wind, and Ragtime—deliberately using different codenames for similar programs to stymie oversight and conceal what’s really going on. We know that the NSA is building an enormous computer facility in Utah to store all this data, as well as faster computer networks to process it all. We know the U.S. Cyber Command employs 4,000 people.

We know that the DHS is also collecting a massive amount of data on people, and that local police departments are running “fusion centers” to collect and analyze this data, and covering up its failures. This is all part of the militarization of the police.

Remember in 2003, when Congress defunded the decidedly creepy Total Information Awareness program? It didn’t die; it just changed names and split into many smaller programs. We know that corporations are doing an enormous amount of spying on behalf of the government: all parts of it.

We know all of this not because the government is honest and forthcoming, but mostly through three backchannels—inadvertent hints or outright admissions by government officials in hearings and court cases, information gleaned from government documents received under FOIA, and government whistle-blowers.

There’s much more we don’t know, and often what we know is obsolete. We know quite a bit about the NSA’s ECHELON program from a 2000 European investigation, and about the DHS’s plans for Total Information Awareness from 2002, but much less about how these programs have evolved. We can make inferences about the NSA’s Utah facility based on the theoretical amount of data from various sources, the cost of computation, and the power requirements from the facility, but those are rough guesses at best. For a lot of this, we’re completely in the dark.

And that’s wrong.

The U.S. government is on a secrecy binge. It overclassifies more information than ever. And we learn, again and again, that our government regularly classifies things not because they need to be secret, but because their release would be embarrassing.

Knowing how the government spies on us is important. Not only because so much of it is illegal—or, to be as charitable as possible, based on novel interpretations of the law—but because we have a right to know. Democracy requires an informed citizenry in order to function properly, and transparency and accountability are essential parts of that. That means knowing what our government is doing to us, in our name. That means knowing that the government is operating within the constraints of the law. Otherwise, we’re living in a police state.

We need whistle-blowers.

Leaking information without getting caught is difficult. It’s almost impossible to maintain privacy in the Internet Age. The WikiLeaks platform seems to have been secure—Bradley Manning was caught not because of a technological flaw, but because someone he trusted betrayed him—but the U.S. government seems to have successfully destroyed it as a platform. None of the spin-offs have risen to become viable yet. The New Yorker recently unveiled its Strongbox platform for leaking material, which is still new but looks good. This link contains the best advice on how to leak information to the press via phone, email, or the post office. The National Whistleblowers Center has a page on national-security whistle-blowers and their rights.

Leaking information is also very dangerous. The Obama Administration has embarked on a war on whistle-blowers, pursuing them—both legally and through intimidation—further than any previous administration has done. Mark Klein, Thomas Drake, and William Binney have all been persecuted for exposing technical details of our surveillance state. Bradley Manning has been treated cruelly and inhumanely—and possibly tortured—for his more-indiscriminate leaking of State Department secrets.

The Obama Administration’s actions against the Associated Press, its persecution of Julian Assange, and its unprecedented prosecution of Manning on charges of “aiding the enemy” demonstrate how far it’s willing to go to intimidate whistle-blowers—as well as the journalists who talk to them.

But whistle-blowing is vital, even more broadly than in government spying. It’s necessary for good government, and to protect us from abuse of power.

We need details on the full extent of the FBI’s spying capabilities. We don’t know what information it routinely collects on American citizens, what extra information it collects on those on various watch lists, and what legal justifications it invokes for its actions. We don’t know its plans for future data collection. We don’t know what scandals and illegal actions—either past or present—are currently being covered up.

We also need information about what data the NSA gathers, either domestically or internationally. We don’t know how much it collects surreptitiously, and how much it relies on arrangements with various companies. We don’t know how much it uses password cracking to get at encrypted data, and how much it exploits existing system vulnerabilities. We don’t know whether it deliberately inserts backdoors into systems it wants to monitor, either with or without the permission of the communications-system vendors.

And we need details about the sorts of analysis the organizations perform. We don’t know what they quickly cull at the point of collection, and what they store for later analysis—and how long they store it. We don’t know what sort of database profiling they do, how extensive their CCTV and surveillance-drone analysis is, how much they perform behavioral analysis, or how extensively they trace friends of people on their watch lists.

We don’t know how big the U.S. surveillance apparatus is today, either in terms of money and people or in terms of how many people are monitored or how much data is collected. Modern technology makes it possible to monitor vastly more people than could ever be done manually—yesterday’s NSA revelations demonstrate that they could easily surveil everyone.

Whistle-blowing is the moral response to immoral activity by those in power. What’s important here are government programs and methods, not data about individuals. I understand I am asking people to engage in illegal and dangerous behavior. Do it carefully and do it safely, but—and I am talking directly to you, person working on one of these secret and probably illegal programs—do it.

If you see something, say something. There are many people in the U.S. who will appreciate and admire you.

For the rest of us, we can help by protesting this war on whistle-blowers. We need to force our politicians not to punish them—to investigate the abuses and not the messengers—and to ensure that those unjustly persecuted can obtain redress.

Our government is putting its own self-interest ahead of the interests of the country. That needs to change.

This essay originally appeared on the Atlantic.

EDITED TO ADD (6/10): It’s not just phone records. Another secret program, PRISM, gave the NSA access to e-mails and private messages at Google, Facebook, Yahoo!, Skype, AOL, and others. And in a separate leak, we now know about the Boundless Informant NSA data mining system.

The leaker for at least some of this is Edward Snowden. I consider him an American hero.

EFF has a great timeline of NSA spying. And this and this contain some excellent speculation about what PRISM could be.

Someone needs to write an essay parsing all of the precisely worded denials. Apple has never heard the word “PRISM,” but could have known of the program under a different name. Google maintained that there is no government “back door,” but left open the possibility that the data could have been just handed over. Obama said that the government isn’t “listening to your telephone calls,” ignoring 1) the metadata, 2) the fact that computers could be doing all of the listening, and 3) that speech-to-text means phone calls can be read rather than listened to. And so on and on and on.

Here are people defending the programs. And here’s someone criticizing my essay.

Four more good essays.

I’m sure there are lots more things out there that should be read. Please include the links in comments. Not only essays I would agree with; intelligent opinions from the other sides are just as important.

EDITED TO ADD (6/10): Two essays discussing the policy issues.

My original essay is being discussed on Reddit.

EDITED TO ADD (6/11): Three more good articles: “The Irrationality of Giving Up This Much Liberty to Fight Terror,” “If the NSA Trusted Edward Snowden with Our Data, Why Should We Trust the NSA?” and “Using Metadata to Find Paul Revere.”

EDITED TO ADD (6/11): NSA surveillance reimagined as children’s books.

EDITED TO ADD (7/1): This essay has been translated into Russian and French.

EDITED TO ADD (10/2): This essay has also been translated into Finnish.

Posted on June 10, 2013 at 6:12 AM147 Comments

A Really Good Article on How Easy it Is to Crack Passwords

Ars Technica gave three experts a 16,000-entry file of hashed passwords and asked them to crack as many as they could. The winner recovered 90% of them, the loser 62%—in a few hours.

The list of “plains,” as many crackers refer to deciphered hashes, contains the usual list of commonly used passcodes that are found in virtually every breach involving consumer websites. “123456,” “1234567,” and “password” are there, as is “letmein,” “Destiny21,” and “pizzapizza.” Passwords of this ilk are hopelessly weak. Despite the additional tweaking, “p@$$word,” “123456789j,” “letmein1!,” and “LETMEin3” are equally awful….

As big as the word lists that all three crackers in this article wielded—close to 1 billion strong in the case of Gosney and Steube—none of them contained “Coneyisland9/,” “momof3g8kids,” or the more than 10,000 other plains that were revealed with just a few hours of effort. So how did they do it? The short answer boils down to two variables: the website’s unfortunate and irresponsible use of MD5 and the use of non-randomized passwords by the account holders.

The article goes on to explain how dictionary attacks work, how well they do, and the sorts of passwords they find.

Steube was able to crack “momof3g8kids” because he had “momof3g” in his 111-million-word dictionary and “8kids” in a smaller one.

“The combinator attack got it! It’s cool,” he said. Then referring to the oft-cited xkcd comic, he added: “This is an answer to the batteryhorsestaple thing.”

What was remarkable about all three cracking sessions were the types of plains that got revealed. They included passcodes such as “k1araj0hns0n,” “Sh1a-labe0uf,” “Apr!l221973,” “Qbesancon321,” “DG091101%,” “@Yourmom69,” “ilovetofunot,” “windermere2313,” “tmdmmj17,” and “BandGeek2014.” Also included in the list: “all of the lights” (yes, spaces are allowed on many sites), “i hate hackers,” “allineedislove,” “ilovemySister31,” “iloveyousomuch,” “Philippians4:13,” “Philippians4:6-7,” and “qeadzcwrsfxv1331.” “gonefishing1125” was another password Steube saw appear on his computer screen. Seconds after it was cracked, he noted, “You won’t ever find it using brute force.”
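
To make the mechanics concrete, here is a rough sketch, in Python, of what a combinator attack against unsalted MD5 hashes looks like. The wordlists and the target hash are invented for illustration; real crackers run GPU-optimized tools such as hashcat over dictionaries hundreds of millions of words long, not a Python loop.

    # Rough illustration of a combinator attack against unsalted MD5 hashes.
    # The wordlists and "leaked" hash below are made up for this example.
    import hashlib

    def md5_hex(candidate):
        return hashlib.md5(candidate.encode()).hexdigest()

    big_dict = ["momof3g", "letmein", "ilove"]    # stand-in for a huge wordlist
    small_dict = ["8kids", "you", "2013"]         # stand-in for a smaller wordlist

    # Unsalted, fast MD5 is what makes this cheap: each guess costs one hash.
    leaked_hashes = {md5_hex("momof3g8kids")}     # stand-in for the stolen file

    for front in big_dict:
        for back in small_dict:
            guess = front + back
            if md5_hex(guess) in leaked_hashes:
                print("cracked:", guess)

The point isn’t the code; it’s that gluing two common words together, or a word and a date, buys very little against an attacker who can try billions of combinations per second.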

Great reading, but nothing theoretically new. Ars Technica wrote about this last year, and Joe Bonneau wrote an excellent commentary.

Password cracking can be evaluated on two nearly independent axes: power (the ability to check a large number of guesses quickly and cheaply using optimized software, GPUs, FPGAs, and so on) and efficiency (the ability to generate large lists of candidate passwords accurately ranked by real-world likelihood using sophisticated models).

I wrote about this same thing back in 2007. The news in 2013, such as it is, is that this kind of thing is getting easier faster than people think. Pretty much anything that can be remembered can be cracked.

If you need to memorize a password, I still stand by the Schneier scheme from 2008:

So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like “This little piggy went to market” might become “tlpWENT2m”. That nine-character password won’t be in anyone’s dictionary. Of course, don’t use this one, because I’ve written about it. Choose your own sentence—something personal.

Until this very moment, these passwords were still secure:

  • WIw7,mstmsritt… = When I was seven, my sister threw my stuffed rabbit in the toilet.
  • Wow…doestcst::amazon.cccooommm = Wow, does that couch smell terrible.
  • Ltime@go-inag~faaa! = Long time ago in a galaxy not far away at all.
  • uTVM,TPw55:utvm,tpwstillsecure = Until this very moment, these passwords were still secure.

You get the idea. Combine a personally memorable sentence with some personally memorable tricks for turning it into a password, and you end up with a long password that’s still easy to remember.
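
Purely as an illustration, here’s what one such transformation might look like if you wrote it down, using the already-published “little piggy” example. Don’t actually use any rule that has been published (including this one); written-down tricks end up in crackers’ rule sets, and the scheme only works if the sentence and the tricks are yours alone.

    # Hypothetical illustration of one sentence-to-password transform.
    # This exact rule is public now, so never use it; invent your own tricks.
    def sentence_to_password(sentence):
        words = sentence.split()
        parts = [w[0].lower() for w in words]   # first letter of each word
        parts[3] = words[3].upper()             # personal trick: one word in caps
        parts[4] = "2"                          # personal trick: "to" becomes "2"
        return "".join(parts)

    print(sentence_to_password("This little piggy went to market"))  # tlpWENT2m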

Better, though, is to use random unmemorable alphanumeric passwords (with symbols, if the site will allow them), and a password manager like Password Safe to store them. (If anyone wants to port it to the Mac, iPhone, iPad, or Android, please contact me.) This article does a good job of explaining the same thing. David Pogue likes Dashlane, but doesn’t know if it’s secure.
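
If you’re wondering what “random and unmemorable” looks like in practice, the sketch below (plain Python, drawing from the operating system’s cryptographic random number generator) produces the sort of password a manager would generate and store for you. The length and symbol set are arbitrary choices for the example.

    # Minimal sketch: a random 20-character password from the OS's CSPRNG.
    # A password manager does this for you and remembers the result.
    import string
    from random import SystemRandom

    rng = SystemRandom()
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    password = "".join(rng.choice(alphabet) for _ in range(20))
    print(password)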

In related news, Password Safe is a candidate for July’s project-of-the-month on SourceForge. Please vote for it.

EDITED TO ADD (6/7): As a commenter noted, none of this is useful advice if the site puts artificial limits on your password.

EDITED TO ADD (6/14): Various ports of Password Safe. I know nothing about them, nor can I vouch for their security.

Analysis of the xkcd scheme.

Posted on June 7, 2013 at 6:41 AM144 Comments

The Cost of Terrorism in Pakistan

This study claims “terrorism has cost Pakistan around 33.02% of its real national income” between the years 1973 and 2008, or about 1% per year.

The St. Louis Fed puts the real gross national income of the U.S. at about $13 trillion total, hand-waving an average over the past few years. The best estimate I’ve seen for the increased cost of homeland security in the U.S. in the ten years since 9/11 is $100 billion per year. So that puts the cost of terrorism in the US at about 0.8% ($100 billion divided by $13 trillion is roughly 0.77%)—surprisingly close to the Pakistani number.

The interesting thing is that the expenditures are completely different. In Pakistan, the cost is primarily “a fall in domestic investment and lost workers’ remittances from abroad.” In the US, it’s security measures, including the invasion of Iraq.

I remember reading somewhere that about a third of all food spoils. In poor countries, that spoilage primarily happens during production and transport. In rich countries, that spoilage primarily happens after the consumer buys the food. Same rate of loss, completely different causes. This reminds me of that.

Posted on June 6, 2013 at 5:58 AM23 Comments

Security and Human Behavior (SHB 2013)

I’m at the Sixth Interdisciplinary Workshop on Security and Human Behavior (SHB 2013). This year we’re in Los Angeles, at USC—hosted by CREATE.

My description from last year still applies:

SHB is an invitational gathering of psychologists, computer security researchers, behavioral economists, sociologists, law professors, business school professors, political scientists, anthropologists, philosophers, and others—all of whom are studying the human side of security—organized by Alessandro Acquisti, Ross Anderson, and me. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

It is still the most intellectually stimulating conference I attend all year. The format has remained unchanged since the beginning. Each panel consists of six people. Everyone has ten minutes to talk, and then we have half an hour of questions and discussion. The format maximizes interaction, which is really important in an interdisciplinary conference like this one.

The conference website contains a schedule and a list of participants, which includes links to writings by each of them. Both Ross Anderson and Vaibhav Garg have liveblogged the event.

Here are my posts on the first, second, third, fourth, and fifth SHB workshops. Follow those links to find summaries, papers, and audio recordings of the workshops.

Posted on June 5, 2013 at 7:20 AM2 Comments

The Problems with CALEA-II

The FBI wants a new law that will make it easier to wiretap the Internet. Although its claim is that the new law will only maintain the status quo, it’s really much worse than that. This law will result in less-secure Internet products and create a foreign industry in more-secure alternatives. It will impose costly burdens on affected companies. It will assist totalitarian governments in spying on their own citizens. And it won’t do much to hinder actual criminals and terrorists.

As the FBI sees it, the problem is that people are moving away from traditional communication systems like telephones onto computer systems like Skype. Eavesdropping on telephones used to be easy. The FBI would call the phone company, which would bring agents into a switching room and allow them to literally tap the wires with a pair of alligator clips and a tape recorder. In the 1990s, the government forced phone companies to provide an analogous capability on digital switches; but today, more and more communication happens over the Internet.

What the FBI wants is the ability to eavesdrop on everything. Depending on the system, this ranges from easy to impossible. E-mail systems like Gmail are easy. The mail resides in Google’s servers, and the company has an office full of people who respond to requests for lawful access to individual accounts from governments all over the world. Encrypted voice systems like Silent Circle are impossible to eavesdrop on—the calls are encrypted from one computer to the other, and there’s no central node to eavesdrop from. In those cases, the only way to make the system eavesdroppable is to add a backdoor to the user software. This is precisely the FBI’s proposal. Companies that refuse to comply would be fined $25,000 a day.
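
To see why there’s no central node to tap, here’s a minimal sketch of end-to-end encryption using the PyNaCl library. It illustrates the general principle only, not how Silent Circle or any particular product is built: the private keys live on the two endpoints, so a relay in the middle sees nothing but ciphertext and has nothing useful to hand over unless the endpoint software itself is backdoored.

    # Minimal end-to-end encryption sketch using PyNaCl (illustrative only).
    # The private keys never leave the endpoints; a relay sees only ciphertext.
    from nacl.public import PrivateKey, Box

    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Each side needs only the other's public key.
    alice_box = Box(alice_key, bob_key.public_key)
    bob_box = Box(bob_key, alice_key.public_key)

    ciphertext = alice_box.encrypt(b"meet at noon")  # what a relay would see
    plaintext = bob_box.decrypt(ciphertext)          # only Bob's key can do this
    assert plaintext == b"meet at noon"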

The FBI believes it can have it both ways: that it can open systems to its eavesdropping, but keep them secure from anyone else’s eavesdropping. That’s just not possible. It’s impossible to build a communications system that allows the FBI surreptitious access but doesn’t allow similar access by others. When it comes to security, we have two options: We can build our systems to be as secure as possible from eavesdropping, or we can deliberately weaken their security. We have to choose one or the other.

This is an old debate, and one we’ve been through many times. The NSA even has a name for it: the equities issue. In the 1980s, the equities debate was about export control of cryptography. The government deliberately weakened U.S. cryptography products because it didn’t want foreign groups to have access to secure systems. Two things resulted: fewer Internet products with cryptography, to the insecurity of everybody, and a vibrant foreign security industry based on the unofficial slogan “Don’t buy the U.S. stuff—it’s lousy.”

In 1993, the debate was about the Clipper Chip. This was another deliberately weakened security product, an encrypted telephone. The FBI convinced AT&T to add a backdoor that allowed for surreptitious wiretapping. The product was a complete failure. Again, why would anyone buy a deliberately weakened security system?

In 1994, the Communications Assistance for Law Enforcement Act mandated that U.S. companies build eavesdropping capabilities into phone switches. These were sold internationally; some countries liked having the ability to spy on their citizens. Of course, so did criminals, and there were public scandals in Greece (2005) and Italy (2006) as a result.

In 2012, we learned that every phone switch sold to the Department of Defense had security vulnerabilities in its surveillance system. And just this May, we learned that Chinese hackers breached Google’s system for providing surveillance data for the FBI.

The new FBI proposal will fail in all these ways and more. The bad guys will be able to get around the eavesdropping capability, either by building their own security systems—not very difficult—or buying the more-secure foreign products that will inevitably be made available. Most of the good guys, who don’t understand the risks or the technology, will not know enough to bother and will be less secure. The eavesdropping functions will 1) result in more obscure—and less secure—product designs, and 2) be vulnerable to exploitation by criminals, spies, and everyone else. U.S. companies will be forced to compete at a disadvantage; smart customers won’t buy the substandard stuff when there are more-secure foreign alternatives. Even worse, there are lots of foreign governments who want to use these sorts of systems to spy on their own citizens. Do we really want to be exporting surveillance technology to the likes of China, Syria, and Saudi Arabia?

The FBI’s shortsighted agenda also works against the parts of the government that are still working to secure the Internet for everyone. Initiatives within the NSA, the DOD, and DHS to do everything from securing computer operating systems to enabling anonymous web browsing will all be harmed by this.

What to do, then? The FBI claims that the Internet is “going dark,” and that it’s simply trying to maintain the status quo of being able to eavesdrop. This characterization is disingenuous at best. We are entering a golden age of surveillance; there’s more electronic communications available for eavesdropping than ever before, including whole new classes of information: location tracking, financial tracking, and vast databases of historical communications such as e-mails and text messages. The FBI’s surveillance department has it better than ever. With regard to voice communications, yes, software phone calls will be harder to eavesdrop upon. (Although there are questions about Skype’s security.) That’s just part of the evolution of technology, and one that on balance is a positive thing.

Think of it this way: We don’t hand the government copies of our house keys and safe combinations. If agents want access, they get a warrant and then pick the locks or bust open the doors, just as a criminal would do. A similar system would work on computers. The FBI, with its increasingly non-transparent procedures and systems, has failed to make the case that this isn’t good enough.

Finally there’s a general principle at work that’s worth explicitly stating. All tools can be used by the good guys and the bad guys. Cars have enormous societal value, even though bank robbers can use them as getaway cars. Cash is no different. Both good guys and bad guys send e-mails, use Skype, and eat at all-night restaurants. But because society consists overwhelmingly of good guys, the good uses of these dual-use technologies greatly outweigh the bad uses. Strong Internet security makes us all safer, even though it helps the bad guys as well. And it makes no sense to harm all of us in an attempt to harm a small subset of us.

This essay originally appeared in Foreign Policy.

Posted on June 4, 2013 at 12:44 PM71 Comments

The Security Risks of Unregulated Google Search

Someday I need to write an essay on the security risks of secret algorithms that become part of our infrastructure. This paper gives one example of that. Could Google tip an election by manipulating what comes up in search results about the candidates?

The study’s participants, selected to resemble the US voting population, viewed the results for two candidates on a mock search engine called Kadoodle. By front-loading Kadoodle’s results with articles favoring one of the candidates, Epstein shifted enough of his participants’ voter preferences toward the favored candidate to simulate the swing of a close election. But here’s the kicker: in one round of the study, Epstein configured Kadoodle so that it hid the manipulation from 100 percent of the participants.

Turns out that it could. And it wouldn’t even be illegal for Google to do it.

The author thinks that government regulation is the only reasonable solution.

Epstein believes that the mere existence of the power to fix election outcomes, wielded or not, is a threat to democracy, and he asserts that search engines should be regulated accordingly. But regulatory analogies for a many-armed, ever-shifting company like Google are tough to pin down. For those who see search results as a mere passive relaying of information, like a library index or a phone book, there is precedent for regulation. In the past, phone books—with a monopoly on the flow of certain information to the public—were prevented from not listing businesses even when paid to do so. In the 1990s, similar reasoning led to the “must carry” rule, which required cable companies to carry certain channels to communities where they were the only providers of those channels.

As I said, I need to write an essay on the broader issue.

Posted on June 4, 2013 at 6:19 AM54 Comments

The Problems with Managing Privacy by Asking and Giving Consent

New paper from the Harvard Law Review by Daniel Solove: “Privacy Self-Management and the Consent Dilemma“:

Privacy self-management takes refuge in consent. It attempts to be neutral about substance—whether certain forms of collecting, using, or disclosing personal data are good or bad—and instead focuses on whether people consent to various privacy practices. Consent legitimizes nearly any form of collection, use, or disclosure of personal data. Although privacy self-management is certainly a laudable and necessary component of any regulatory regime, I contend that it is being tasked with doing work beyond its capabilities. Privacy self-management does not provide people with meaningful control over their data. First, empirical and social science research demonstrates that there are severe cognitive problems that undermine privacy self-management. These cognitive problems impair individuals’ ability to make informed, rational choices about the costs and benefits of consenting to the collection, use, and disclosure of their personal data.

Second, and more troubling, even well-informed and rational individuals cannot appropriately self-manage their privacy due to several structural problems. There are too many entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity. Moreover, many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities. It is virtually impossible for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses, further limiting the effectiveness of the privacy self-management framework.

Posted on June 3, 2013 at 6:15 AM24 Comments
