Blog: March 2015 Archives

Survey of Americans' Privacy Habits Post-Snowden

Pew Research has a new survey on Americans' privacy habits in a post-Snowden world.

The 87% of respondents who had heard at least something about the programs were asked follow-up questions about their own behaviors and privacy strategies:

34% of those who are aware of the surveillance programs (30% of all adults) have taken at least one step to hide or shield their information from the government. For instance, 17% changed their privacy settings on social media; 15% use social media less often; 15% have avoided certain apps and 13% have uninstalled apps; 14% say they speak more in person instead of communicating online or on the phone; and 13% have avoided using certain terms in online communications.


25% of those who are aware of the surveillance programs (22% of all adults) say they have changed the patterns of their own use of various technological platforms "a great deal" or "somewhat" since the Snowden revelations. For instance, 18% say they have changed the way they use email "a great deal" or "somewhat"; 17% have changed the way they use search engines; 15% say they have changed the way they use social media sites such as Twitter and Facebook; and 15% have changed the way they use their cell phones.

Also interesting are the people who have not changed their behavior because they're afraid that it would lead to more surveillance. From pages 22-23 of the report:

Still, others said they avoid taking more advanced privacy measures because they believe that taking such measures could make them appear suspicious:

"There's no point in inviting scrutiny if it's not necessary."

"I didn't significantly change anything. It's more like trying to avoid anything questionable, so as not to be scrutinized unnecessarily."

"[I] don't want them misunderstanding something and investigating me."

There's also data about how Americans feel about government surveillance:

This survey asked the 87% of respondents who had heard about the surveillance programs: "As you have watched the developments in news stories about government monitoring programs over recent months, would you say that you have become more confident or less confident that the programs are serving the public interest?" Some 61% of them say they have become less confident the surveillance efforts are serving the public interest after they have watched news and other developments in recent months and 37% say they have become more confident the programs serve the public interest. Republicans and those leaning Republican are more likely than Democrats and those leaning Democratic to say they are losing confidence (70% vs. 55%).

Moreover, there is a striking divide among citizens over whether the courts are doing a good job balancing the needs of law enforcement and intelligence agencies with citizens' right to privacy: 48% say courts and judges are balancing those interests, while 49% say they are not.

At the same time, the public generally believes it is acceptable for the government to monitor many others, including foreign citizens, foreign leaders, and American leaders:

  • 82% say it is acceptable to monitor communications of suspected terrorists
  • 60% believe it is acceptable to monitor the communications of American leaders
  • 60% think it is okay to monitor the communications of foreign leaders
  • 54% say it is acceptable to monitor communications from foreign citizens

Yet, 57% say it is unacceptable for the government to monitor the communications of U.S. citizens. At the same time, majorities support monitoring of those particular individuals who use words like "explosives" and "automatic weapons" in their search engine queries (65% say that) and those who visit anti-American websites (67% say that).


Overall, 52% describe themselves as "very concerned" or "somewhat concerned" about government surveillance of Americans' data and electronic communications, compared with 46% who describe themselves as "not very concerned" or "not at all concerned" about the surveillance.

It's worth reading these results in detail. Overall, these numbers are consistent with a worldwide survey from December. The press is spinning this as "Most Americans' behavior unchanged after Snowden revelations, study finds," but I see something very different. I see a sizable percentage of Americans not only concerned about government surveillance, but actively doing something about it. A more accurate headline would be "Third of Americans shield data from government." Edward Snowden's goal was to start a national dialog about government surveillance, and these surveys show that he has succeeded in doing exactly that.

More news.

Posted on March 31, 2015 at 2:49 PM | 32 Comments

Australia Outlaws Warrant Canaries

In the US, certain types of warrants can come with gag orders preventing the recipient from disclosing the existence of the warrant to anyone else. A warrant canary is basically a legal hack of that prohibition. Instead of saying "I just received a warrant with a gag order," the potential recipient keeps repeating "I have not received any warrants." If the recipient stops saying that, the rest of us are supposed to assume that he has been served one.

Lots of organizations maintain them. Personally, I have never believed this trick would work. It relies on the fact that a prohibition against speaking doesn't prevent someone from not speaking. But courts generally aren't impressed by this sort of thing, and I can easily imagine a secret warrant that includes a prohibition against triggering the warrant canary. And for all I know, there are right now secret legal proceedings on this very issue.
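The logic of a canary is simple enough to sketch in a few lines. This is an illustrative toy, not any organization's actual practice; the wording and the republication interval are assumptions:

```python
from datetime import date, timedelta

# Toy sketch of a warrant canary (illustrative only; the statement text
# and 90-day republication interval are assumptions, not anyone's policy).
CANARY_TEXT = "As of {d}, we have received no warrants of any kind."
INTERVAL = timedelta(days=90)  # how often the statement is republished

def publish_canary(today: date) -> str:
    """The operator keeps reasserting the negative on every cycle."""
    return CANARY_TEXT.format(d=today.isoformat())

def canary_tripped(last_published: date, today: date) -> bool:
    """Readers infer a warrant only from the absence of a fresh statement."""
    return today - last_published > INTERVAL

print(canary_tripped(date(2015, 1, 1), date(2015, 3, 31)))   # False: 89 days, still fresh
print(canary_tripped(date(2014, 12, 1), date(2015, 3, 31)))  # True: silence past the deadline
```

The point of the hack is visible in the code: `publish_canary` never says anything prohibited, and `canary_tripped` draws its inference purely from silence.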

Australia has sidestepped all of this by outlawing warrant canaries entirely:

Section 182A of the new law says that a person commits an offense if he or she discloses or uses information about "the existence or non-existence of such a [journalist information] warrant." The penalty upon conviction is two years imprisonment.

Expect that sort of wording in future US surveillance bills, too.

Posted on March 31, 2015 at 7:14 AM | 72 Comments

Brute-Forcing iPhone PINs

This is a clever attack, using a black box that attaches to the iPhone via USB:

As you know, an iPhone keeps a count of how many wrong PINs have been entered, in case you have turned on the Erase Data option on the Settings | Touch ID & Passcode screen.

That's a highly-recommended option, because it wipes your device after 10 passcode mistakes.

Even if you only set a 4-digit PIN, that gives a crook who steals your phone just a 10 in 10,000 chance, or 0.1%, of guessing your unlock code in time.

But this Black Box has a trick up its cable.

Apparently, the device uses a light sensor to work out, from the change in screen intensity, when it has got the right PIN.

In other words, it also knows when it gets the PIN wrong, as it will most of the time, so it can kill the power to your iPhone when that happens.

And the power-down happens quickly enough (it seems you need to open up the iPhone and bypass the battery so you can power the device entirely via the USB cable) that your iPhone doesn't have time to subtract one from the "PIN guesses remaining" counter stored on the device.

Because every wrong guess requires a power-cut and reboot, the process takes about five days. Still, a very clever attack.
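The arithmetic behind those numbers is easy to check. The per-guess cycle time below is my assumption, back-solved from the reported five-day total:

```python
# Back-of-the-envelope numbers for the attack described above. The cycle
# time per guess (enter PIN, read light sensor, cut power, reboot) is an
# assumed figure chosen to be consistent with the reported five-day total.
SECONDS_PER_ATTEMPT = 43
PIN_SPACE = 10_000        # all 4-digit PINs, 0000-9999
GUESSES_ALLOWED = 10      # before the Erase Data option wipes the phone

# Without the power-cut trick, a thief gets only ten tries:
print(GUESSES_ALLOWED / PIN_SPACE)   # 0.001, i.e. the 0.1% quoted above

# With it, the retry counter never decrements, so every PIN can be tried:
worst_case_days = PIN_SPACE * SECONDS_PER_ATTEMPT / 86_400
print(round(worst_case_days, 1))     # about five days
```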

More details.

Posted on March 30, 2015 at 6:47 AM | 44 Comments

Friday Squid Blogging: Using Squid Proteins for Commercial Camouflage Products

More research.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on March 27, 2015 at 4:03 PM | 99 Comments

Yet Another Computer Side Channel

Researchers have managed to get two computers to communicate using heat and thermal sensors. It's not a viable communications channel -- the bit rate is eight bits per hour over a distance of fifteen inches -- but it's neat.
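At that rate, even a tiny payload takes a long time to move, which a quick calculation makes concrete (the example messages are mine):

```python
# How long the thermal channel described above needs at the reported
# rate of eight bits per hour.
BITS_PER_HOUR = 8

def hours_to_send(message: bytes) -> float:
    return len(message) * 8 / BITS_PER_HOUR

print(hours_to_send(b"password"))  # 8 bytes = 64 bits -> 8.0 hours
print(hours_to_send(b"x" * 1024))  # one kilobyte -> 1024 hours, over six weeks
```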

EDITED TO ADD (4/13): The paper. Similar research.

Posted on March 27, 2015 at 7:01 AM | 22 Comments

New Zealand's XKEYSCORE Use

The Intercept and the New Zealand Herald have reported that New Zealand spied on communications about the World Trade Organization director-general candidates. I'm not sure why this is news; it seems like a perfectly reasonable national intelligence target. More interesting to me is that the Intercept published the XKEYSCORE rules. It's interesting to see how primitive the keyword targeting is, and how broadly it collects e-mails.

The second really important point is that Edward Snowden's name is mentioned nowhere in the stories. Given how scrupulous the Intercept is about identifying him as the source of his NSA documents, I have to conclude that this is from another leaker. For a while, I have believed that there are at least three leakers inside the Five Eyes intelligence community, plus another CIA leaker. What I have called Leaker #2 has previously revealed XKEYSCORE rules. Whether this new disclosure is from Leaker #2 or a new Leaker #5, I have no idea. I hope someone is keeping a list.

Posted on March 26, 2015 at 9:46 AM | 28 Comments

Capabilities of Canada's Communications Security Establishment

There's a new story about the hacking capabilities of Canada's Communications Security Establishment (CSE), based on the Snowden documents.

Posted on March 25, 2015 at 6:55 AM | 25 Comments

Reforming the FISA Court

The Brennan Center has a long report on what's wrong with the FISA Court and how to fix it.

At the time of its creation, many lawmakers saw constitutional problems in a court that operated in total secrecy and outside the normal "adversarial" process.... But the majority of Congress was reassured by similarities between FISA Court proceedings and the hearings that take place when the government seeks a search warrant in a criminal investigation. Moreover, the rules governing who could be targeted for "foreign intelligence" purposes were narrow enough to mitigate concerns that the FISA Court process might be used to suppress political dissent in the U.S. -- or to avoid the stricter standards that apply in domestic criminal cases.

In the years since then, however, changes in technology and the law have altered the constitutional calculus. Technological advances have revolutionized communications. People are communicating at a scale unimaginable just a few years ago. International phone calls, once difficult and expensive, are now as simple as flipping a light switch, and the Internet provides countless additional means of international communication. Globalization makes such exchanges as necessary as they are easy. As a result of these changes, the amount of information about Americans that the NSA intercepts, even when targeting foreigners overseas, has exploded.

Instead of increasing safeguards for Americans' privacy as technology advances, the law has evolved in the opposite direction since 9/11.... While surveillance involving Americans previously required individualized court orders, it now happens through massive collection programs...involving no case-by-case judicial review. The pool of permissible targets is no longer limited to foreign powers -- such as foreign governments or terrorist groups -- and their agents. Furthermore, the government may invoke the FISA Court process even if its primary purpose is to gather evidence for a domestic criminal prosecution rather than to thwart foreign threats.

...[T]hese developments...have had a profound effect on the role exercised by the FISA Court. They have caused the court to veer off course, departing from its traditional role of ensuring that the government has sufficient cause to intercept communications or obtain records in particular cases and instead authorizing broad surveillance programs. It is questionable whether the court's new role comports with Article III of the Constitution, which mandates that courts must adjudicate concrete disputes rather than issuing advisory opinions on abstract questions. The constitutional infirmity is compounded by the fact that the court generally hears only from the government, while the people whose communications are intercepted have no meaningful opportunity to challenge the surveillance, even after the fact.

Moreover, under current law, the FISA Court does not provide the check on executive action that the Fourth Amendment demands. Interception of communications generally requires the government to obtain a warrant based on probable cause of criminal activity. Although some courts have held that a traditional warrant is not needed to collect foreign intelligence, they have imposed strict limits on the scope of such surveillance and have emphasized the importance of close judicial scrutiny in policing these limits. The FISA Court's minimal involvement in overseeing programmatic surveillance does not meet these constitutional standards.


Fundamental changes are needed to fix these flaws. Congress should end programmatic surveillance and require the government to obtain judicial approval whenever it seeks to obtain communications or information involving Americans. It should shore up the Article III soundness of the FISA Court by ensuring that the interests of those affected by surveillance are represented in court proceedings, increasing transparency, and facilitating the ability of affected individuals to challenge surveillance programs in regular federal courts. Finally, Congress should address additional Fourth Amendment concerns by narrowing the permissible scope of "foreign intelligence surveillance" and ensuring that it cannot be used as an end-run around the constitutional standards for criminal investigations.

Just Security post -- where I copied the above excerpt. Lawfare post.

Posted on March 24, 2015 at 9:04 AM | 15 Comments

BIOS Hacking

We've learned a lot about the NSA's abilities to hack a computer's BIOS so that the hack survives reinstalling the OS. Now we have a research presentation about it.

From Wired:

The BIOS boots a computer and helps load the operating system. By infecting this core software, which operates below antivirus and other security products and therefore is not usually scanned by them, spies can plant malware that remains live and undetected even if the computer's operating system were wiped and re-installed.


Although most BIOS have protections to prevent unauthorized modifications, the researchers were able to bypass these to reflash the BIOS and implant their malicious code.


Because many BIOS share some of the same code, they were able to uncover vulnerabilities in 80 percent of the PCs they examined, including ones from Dell, Lenovo and HP. The vulnerabilities, which they're calling incursion vulnerabilities, were so easy to find that they wrote a script to automate the process and eventually stopped counting the vulns it uncovered because there were too many.

From ThreatPost:

Kallenberg said an attacker would need to already have remote access to a compromised computer in order to execute the implant and elevate privileges on the machine through the hardware. Their exploit turns down existing protections in place to prevent re-flashing of the firmware, enabling the implant to be inserted and executed.

The devious part of their exploit is that they've found a way to insert their agent into System Management Mode, which is used by firmware and runs separately from the operating system, managing various hardware controls. System Management Mode also has access to memory, which puts supposedly secure operating systems such as Tails in the line of fire of the implant.

From the Register:

"Because almost no one patches their BIOSes, almost every BIOS in the wild is affected by at least one vulnerability, and can be infected," Kovah says.

"The high amount of code reuse across UEFI BIOSes means that BIOS infection can be automatic and reliable.

"The point is less about how vendors don't fix the problems, and more how the vendors' fixes are going un-applied by users, corporations, and governments."

From Forbes:

Though such "voodoo" hacking will likely remain a tool in the arsenal of intelligence and military agencies, it's getting easier, Kallenberg and Kovah believe. This is in part due to the widespread adoption of UEFI, a framework that makes it easier for the vendors along the manufacturing chain to add modules and tinker with the code. That's proven useful for the good guys, but also made it simpler for researchers to inspect the BIOS, find holes and create tools that find problems, allowing Kallenberg and Kovah to show off exploits across different PCs. In the demo to FORBES, an HP PC was used to carry out an attack on an ASUS machine. Kovah claimed that in tests across different PCs, he was able to find and exploit BIOS vulnerabilities across 80 per cent of machines he had access to and he could find flaws in the remaining 10 per cent.

"There are protections in place that are supposed to prevent you from flashing the BIOS and we've essentially automated a way to find vulnerabilities in this process to allow us to bypass them. It turns out bypassing the protections is pretty easy as well," added Kallenberg.

The NSA has a term for vulnerabilities it thinks are exclusive to it: NOBUS, for "nobody but us." Turns out that NOBUS is a flawed concept. As I keep saying: "Today's top-secret programs become tomorrow's PhD theses and the next day's hacker tools." By continuing to exploit these vulnerabilities rather than fixing them, the NSA is keeping us all vulnerable.

Two Slashdot threads. Hacker News thread. Reddit thread.

EDITED TO ADD (3/31): Slides from the CanSecWest presentation. The bottom line is that there are some pretty huge BIOS insecurities out there. We as a community and industry need to figure out how to regularly patch our BIOSes.

Posted on March 23, 2015 at 7:07 AM | 108 Comments

New Paper on Digital Intelligence

David Omand -- GCHQ director from 1996-1997, and the UK's security and intelligence coordinator from 2000-2005 -- has just published a new paper: "Understanding Digital Intelligence and the Norms That Might Govern It."

Executive Summary: This paper describes the nature of digital intelligence and provides context for the material published as a result of the actions of National Security Agency (NSA) contractor Edward Snowden. Digital intelligence is presented as enabled by the opportunities of global communications and private sector innovation and as growing in response to changing demands from government and law enforcement, in part mediated through legal, parliamentary and executive regulation. A common set of organizational and ethical norms based on human rights considerations are suggested to govern such modern intelligence activity (both domestic and external) using a three-layer model of security activity on the Internet: securing the use of the Internet for everyday economic and social life; the activity of law enforcement -- both nationally and through international agreements -- attempting to manage criminal threats exploiting the Internet; and the work of secret intelligence and security agencies using the Internet to gain information on their targets, including in support of law enforcement.

I don't agree with a lot of it, but it's worth reading.

My favorite Omand quote is this, defending the close partnership between the NSA and GCHQ in 2013: "We have the brains. They have the money. It's a collaboration that's worked very well."

Posted on March 20, 2015 at 1:51 PM | 19 Comments

Cisco Shipping Equipment to Fake Addresses to Foil NSA Interception

Last May, we learned that the NSA intercepts equipment being shipped around the world and installs eavesdropping implants. There were photos of NSA employees opening up a Cisco box. Cisco's CEO John Chambers personally complained to President Obama about this practice, which is not exactly a selling point for Cisco equipment abroad. Der Spiegel published the more complete document, along with a broader story, in January of this year:

In one recent case, after several months a beacon implanted through supply-chain interdiction called back to the NSA covert infrastructure. The call back provided us access to further exploit the device and survey the network. Upon initiating the survey, SIGINT analysis from TAO/Requirements & Targeting determined that the implanted device was providing even greater access than we had hoped: We knew the devices were bound for the Syrian Telecommunications Establishment (STE) to be used as part of their internet backbone, but what we did not know was that STE's GSM (cellular) network was also using this backbone. Since the STE GSM network had never before been exploited, this new access represented a real coup.

Now Cisco is taking matters into its own hands, offering to ship equipment to fake addresses in an effort to avoid NSA interception.

I don't think we have even begun to understand the long-term damage the NSA has done to the US tech industry.

Slashdot thread.

Posted on March 20, 2015 at 6:56 AM | 42 Comments

More Data and Goliath News

Right now, the book is #6 on the New York Times best-seller list in hardcover nonfiction, and #13 in combined print and e-book nonfiction. This is the March 22 list, and covers sales from the first week of March. The March 29 list -- covering sales from the second week of March -- is not yet on the Internet. On that list, I'm #11 on the hardcover nonfiction list, and not at all on the combined print and e-book nonfiction list.

Marc Rotenberg of EPIC tells me that Vance Packard's The Naked Society made it to #7 on the list during the week of July 12, 1964, and -- by that measure -- Data and Goliath is the most popular privacy book of all time. I'm not sure I can claim that honor yet, but it's a nice thought. And two weeks on the New York Times best-seller list is super fantastic.

For those curious to know what sorts of raw numbers translate into those rankings, this is what I know. Nielsen Bookscan tracks retail sales across the US, and captures about 80% of the book market. It reports that my book sold 4,706 copies during the first week of March, and 2,339 copies in the second week. Adjusting for that 80% coverage, that means I sold roughly 6,000 copies the first week and 3,000 the second.
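For what it's worth, the extrapolation above is just division by the coverage fraction:

```python
# Recomputing the totals above: Bookscan captures about 80% of the US
# retail market, so divide its reported counts by 0.8 to estimate totals.
COVERAGE = 0.80
week1_reported, week2_reported = 4706, 2339

week1_total = week1_reported / COVERAGE   # just under 5,900
week2_total = week2_reported / COVERAGE   # just over 2,900
print(week1_total, week2_total)           # consistent with "6,000 and 3,000"
```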

My publisher tells me that Amazon sold 650 hardcovers and 600 e-books during the first week, and 400 hardcovers and 500 e-books during the second week. The hardcover sales ranking was 865, 949, 611, 686, 657, 602, 595 during the first week, and 398, 511, 693, 867, 341, 357, 343 during the second. The book's rankings during those first few days don't match sales, because Amazon records a sale for the rankings when a person orders a book, but only counts the sale when it actually ships it. So all of my preorders sold on that first day, even though they were calculated in the rankings during the days and weeks before the publication date.

There are a few new book reviews. There's one from the Dealbook blog at the New York Times that treats the book very seriously, but doesn't agree with my conclusions. (A rebuttal to that review is here.) A review from the Wall Street Journal was even less kind. This review from InfoWorld is much more positive.

All of this, and more, is on the book's website.

There are several book-related videos online. The first is the talk I gave at the Harvard Bookstore on March 4th. The second and third are interviews of me on Democracy Now. I also did a more general Q&A with Gizmodo.

Note to readers. The book is 80,000 words long, which is a normal length for a book like this. But the book's size is much larger, because it contains a lot of references. They're not numbered, but if they were, there would be over 1,000 numbers. I counted all the links, and there are 1,622 individual citations. That's a lot of text. This means that if you're reading the book on paper, the narrative ends on page 238, even though the book continues to page 364. If you're reading it on the Kindle, you'll finish the book when the Kindle says you're only 44% of the way through. The difference between pages and percentages is because the references are set in smaller type than the body. I warn you of this now, so you know what to expect. It always annoys me that the Kindle calculates percent done from the end of the file, not the end of the book.

And if you've read the book, please post a review on the book's Amazon page or on Goodreads. Reviews are important on those sites, and I need more of them.

Posted on March 19, 2015 at 2:35 PM | 15 Comments

Understanding the Organizational Failures of Terrorist Organizations

New research: Max Abrahms and Philip B.K. Potter, "Explaining Terrorism: Leadership Deficits and Militant Group Tactics," International Organization.

Abstract: Certain types of militant groups -- those suffering from leadership deficits -- are more likely to attack civilians. Their leadership deficits exacerbate the principal-agent problem between leaders and foot soldiers, who have stronger incentives to harm civilians. We establish the validity of this proposition with a tripartite research strategy that balances generalizability and identification. First, we demonstrate in a sample of militant organizations operating in the Middle East and North Africa that those lacking centralized leadership are prone to targeting civilians. Second, we show that when the leaderships of militant groups are degraded from drone strikes in the Afghanistan-Pakistan tribal regions, the selectivity of organizational violence plummets. Third, we elucidate the mechanism with a detailed case study of the al-Aqsa Martyrs Brigade, a Palestinian group that turned to terrorism during the Second Intifada because pressure on the leadership allowed low-level members to act on their preexisting incentives to attack civilians. These findings indicate that a lack of principal control is an important, underappreciated cause of militant group violence against civilians.

I have previously blogged Max Abrahms's work here, here, and here.

Posted on March 19, 2015 at 8:09 AM | 32 Comments

How We Become Habituated to Security Warnings on Computers

New research: "How Polymorphic Warnings Reduce Habituation in the Brain -- Insights from an fMRI Study."

Abstract: Research on security warnings consistently points to habituation as a key reason why users ignore security warnings. However, because habituation as a mental state is difficult to observe, previous research has examined habituation indirectly by observing its influence on security behaviors. This study addresses this gap by using functional magnetic resonance imaging (fMRI) to open the "black box" of the brain to observe habituation as it develops in response to security warnings. Our results show a dramatic drop in the visual processing centers of the brain after only the second exposure to a warning, with further decreases with subsequent exposures. To combat the problem of habituation, we designed a polymorphic warning that changes its appearance. We show in two separate experiments using fMRI and mouse cursor tracking that our polymorphic warning is substantially more resistant to habituation than conventional warnings. Together, our neurophysiological findings illustrate the considerable influence of human biology on users' habituation to security warnings.


EDITED TO ADD (3/21): News article.

Posted on March 18, 2015 at 6:48 AM | 26 Comments

Details on Hacking Team Software Used by Ethiopian Government

The Citizen Lab at the University of Toronto published a new report on the use of spyware from the Italian cyberweapons arms manufacturer Hacking Team by the Ethiopian intelligence service. We previously learned that the government used this software to target US-based Ethiopian journalists.

News articles. Human Rights Watch press release.

Posted on March 17, 2015 at 10:07 AM | 10 Comments

How the CIA Might Target Apple's XCode

The Intercept recently posted a story on the CIA's attempts to hack the iOS operating system. Most interesting was the speculation that it hacked Xcode, which would mean that any apps developed using that tool would be compromised.

The security researchers also claimed they had created a modified version of Apple's proprietary software development tool, Xcode, which could sneak surveillance backdoors into any apps or programs created using the tool. Xcode, which is distributed by Apple to hundreds of thousands of developers, is used to create apps that are sold through Apple's App Store.

The modified version of Xcode, the researchers claimed, could enable spies to steal passwords and grab messages on infected devices. Researchers also claimed the modified Xcode could "force all iOS applications to send embedded data to a listening post." It remains unclear how intelligence agencies would get developers to use the poisoned version of Xcode.

Researchers also claimed they had successfully modified the OS X updater, a program used to deliver updates to laptop and desktop computers, to install a "keylogger."

It's an application of Ken Thompson's classic 1984 paper, "Reflections on Trusting Trust," and a very nasty attack. Dan Wallach speculates on how this might work.
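The shape of a trusting-trust attack can be shown with a toy build step. Everything here is hypothetical and deliberately simplified; the point is only that the compromise lives in the tool, not in any source the developer ever sees:

```python
# Toy illustration of a trojaned build tool (nothing to do with Apple's
# actual toolchain). The "compiler" splices a payload into every program
# it builds; the function and payload names are invented for this sketch.
PAYLOAD = "\nsend_to_listening_post(collect_embedded_data())  # injected\n"

def trojaned_compile(source: str) -> str:
    """Return the 'built' program with the backdoor spliced in."""
    return source + PAYLOAD

app_source = "def main():\n    print('hello')\n"
built = trojaned_compile(app_source)

print(built.startswith(app_source))  # True: the developer's code is intact
print("injected" in built)           # True: but the backdoor rode along
```

In Thompson's version the compromise also reproduces itself whenever the compiler compiles its own source, which is what makes it so hard to audit away.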

Posted on March 16, 2015 at 7:38 AM | 61 Comments

Fall Seminar on Catastrophic Risk

I am planning a study group at Harvard University (in Cambridge, MA) for the Fall semester, on catastrophic risk.

Berkman Study Group -- Catastrophic Risk: Technologies and Policy

Technology empowers, for both good and bad. A broad history of "attack" technologies shows trends of empowerment, as individuals wield ever more destructive power. The natural endgame is a nuclear bomb in everybody's back pocket, or a bio-printer that can drop a species. And then what? Is society even possible when the most extreme individual can kill everyone else? Is totalitarian control the only way to prevent human devastation, or are there other possibilities? And how realistic are these scenarios, anyway? In this class, we'll discuss technologies like cyber, bio, nanotech, artificial intelligence, and autonomous drones; security technologies and policies for catastrophic risk; and more. Is the reason we've never met any extraterrestrials that natural selection dictates that any species achieving a sufficiently advanced technology level inevitably exterminates itself?

The study group may serve as a springboard for an independent paper and credit, in conjunction with faculty supervision from your program.

All disciplines and backgrounds welcome, students and non-students alike. This discussion needs diverse perspectives. We also ask that you commit to preparing for and participating in all sessions.

Six sessions, Mondays, 5:00-7:00 PM, Location TBD
9/14, 9/28, 10/5, 10/19, 11/2, 11/16

Please respond to Bruce Schneier with a resume and statement of interest. Applications due August 14. Bruce will review applications and aim for a seminar size of roughly 16-20 people with a diversity of backgrounds and expertise.

Please help me spread the word far and wide. The description is only on a Berkman page, so students won't see it in their normal perusal of fall classes.

Posted on March 13, 2015 at 2:36 PM | 86 Comments

Threats to Information Integrity

Every year, the Director of National Intelligence publishes an unclassified "Worldwide Threat Assessment." This year's report was published two weeks ago. "Cyber" is the first threat listed, and includes most of what you'd expect from a report like this.

More interesting is this comment about information integrity:

Most of the public discussion regarding cyber threats has focused on the confidentiality and availability of information; cyber espionage undermines confidentiality, whereas denial-of-service operations and data-deletion attacks undermine availability. In the future, however, we might also see more cyber operations that will change or manipulate electronic information in order to compromise its integrity (i.e. accuracy and reliability) instead of deleting it or disrupting access to it. Decisionmaking by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.

This speaks directly to the need for strong cryptography to protect the integrity of information.
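A message authentication code is the standard cryptographic tool for exactly this kind of integrity protection. A minimal sketch using Python's standard library (the key and message are, of course, invented):

```python
import hashlib
import hmac

def tag(key: bytes, message: bytes) -> bytes:
    """HMAC-SHA256 tag binding the message to a shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, mac: bytes) -> bool:
    """Constant-time comparison, to avoid timing side channels."""
    return hmac.compare_digest(tag(key, message), mac)

key = b"shared secret key"  # invented example values
report = b"status: all systems OK"
mac = tag(key, report)

print(verify(key, report, mac))                       # True: untampered
print(verify(key, b"status: all systems DOWN", mac))  # False: manipulation detected
```

Anyone who can read the data but doesn't know the key cannot alter it without the change being detected.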

Posted on March 13, 2015 at 6:05 AM • 29 Comments

Data and Goliath Makes New York Times Best-Seller List

The March 22 best-seller list from the New York Times will list me as #6 in the hardcover nonfiction category, and #13 in the combined print/e-book category. This is amazing, really. The book just barely crossed #400 on Amazon this week, so it seems that other booksellers sold more.

There are new reviews from the LA Times, Lawfare, EFF, and Slashdot.

The Internet Society recorded a short video of me talking about my book. I've given longer talks, and videos should be up soon. "Science Friday" interviewed me about my book.

Amazon has it back in stock. And, as always, more information on the book's website.

Posted on March 12, 2015 at 2:05 PM • 21 Comments

The Changing Economics of Surveillance

Cory Doctorow examines the changing economics of surveillance and what it means:

The Stasi employed one snitch for every 50 or 60 people it watched. We can't be sure of the size of the entire Five Eyes global surveillance workforce, but there are only about 1.4 million Americans with Top Secret clearance, and many of them don't work at or for the NSA, which means that the number is smaller than that (the other Five Eyes states have much smaller workforces than the US). This million-ish person workforce keeps six or seven billion people under surveillance -- a ratio approaching 1:10,000. What's more, the US has only ("only"!) quadrupled its surveillance budget since the end of the Cold War: tooling up to give the spies their toys wasn't all that expensive, compared to the number of lives that gear lets them pry into.

IT has been responsible for a 2-3 order of magnitude productivity gain in surveillance efficiency. The Stasi used an army to surveil a nation; the NSA uses a battalion to surveil a planet.

I am reminded of this paper on the changing economics of surveillance.

Posted on March 12, 2015 at 6:22 AM • 44 Comments

Equation Group Update

More information about the Equation Group, aka the NSA.

Kaspersky Labs has published more information about the Equation Group -- that's the NSA -- and its sophisticated malware platform.

Ars Technica article.

Posted on March 11, 2015 at 2:14 PM • 25 Comments

Hardware Bit-Flipping Attack

The Project Zero team at Google has posted details of a new attack that targets a computer's DRAM. It's called Rowhammer. Here's a good description:

Here's how Rowhammer gets its name: In the Dynamic Random Access Memory (DRAM) used in some laptops, a hacker can run a program designed to repeatedly access a certain row of transistors in the computer's memory, "hammering" it until the charge from that row leaks into the next row of memory. That electromagnetic leakage can cause what's known as "bit flipping," in which transistors in the neighboring row of memory have their state reversed, turning ones into zeros or vice versa. And for the first time, the Google researchers have shown that they can use that bit flipping to actually gain unintended levels of control over a victim computer. Their Rowhammer hack can allow a "privilege escalation," expanding the attacker's influence beyond a certain fenced-in portion of memory to more sensitive areas.


When run on a machine vulnerable to the rowhammer problem, the process was able to induce bit flips in page table entries (PTEs). It was able to use this to gain write access to its own page table, and hence gain read-write access to all of physical memory.

The cause is simply the super dense packing of chips:

This works because DRAM cells have been getting smaller and closer together. As DRAM manufacturing scales down chip features to smaller physical dimensions, to fit more memory capacity onto a chip, it has become harder to prevent DRAM cells from interacting electrically with each other. As a result, accessing one location in memory can disturb neighbouring locations, causing charge to leak into or out of neighbouring cells. With enough accesses, this can change a cell's value from 1 to 0 or vice versa.
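The disturbance effect can be illustrated with a toy simulation. To be clear, everything here is invented for illustration -- the flip threshold, the charge model -- and real DRAM behavior is analog and far messier:

```python
import random

FLIP_THRESHOLD = 100_000  # invented: activations of a neighbor before a cell flips

class DRAMRow:
    def __init__(self, bits=64):
        self.bits = [1] * bits
        self.disturbance = 0  # charge leaked in from adjacent-row activations

def hammer(activations, victim):
    """Model an aggressor row being activated repeatedly; every FLIP_THRESHOLD
    activations, one bit in the adjacent victim row flips."""
    victim.disturbance += activations
    flips = 0
    while victim.disturbance >= FLIP_THRESHOLD:
        i = random.randrange(len(victim.bits))
        victim.bits[i] ^= 1  # 1 -> 0 or 0 -> 1
        victim.disturbance -= FLIP_THRESHOLD
        flips += 1
    return flips

random.seed(0)
victim = DRAMRow()
print(hammer(50_000, victim))   # 0: below the disturbance threshold
print(hammer(250_000, victim))  # 3: accumulated charge crosses the threshold three times
```

The real attack's cleverness is in what comes after the flip: steering those corrupted bits into page table entries.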

Very clever, and yet another example of the security interplay between hardware and software.

This kind of thing is hard to fix, although the Google team gives some mitigation techniques at the end of their analysis.

Slashdot thread.

EDITED TO ADD (3/12): Good explanation of the vulnerability.

Posted on March 11, 2015 at 6:16 AM • 38 Comments

Can the NSA Break Microsoft's BitLocker?

The Intercept has a new story on the CIA's -- yes, the CIA, not the NSA -- efforts to break encryption. These are from the Snowden documents, and talk about a conference called the Trusted Computing Base Jamboree. There are some interesting documents associated with the article, but not a lot of hard information.

There's a paragraph about Microsoft's BitLocker, the encryption system used to protect MS Windows computers:

Also presented at the Jamboree were successes in the targeting of Microsoft's disk encryption technology, and the TPM chips that are used to store its encryption keys. Researchers at the CIA conference in 2010 boasted about the ability to extract the encryption keys used by BitLocker and thus decrypt private data stored on the computer. Because the TPM chip is used to protect the system from untrusted software, attacking it could allow the covert installation of malware onto the computer, which could be used to access otherwise encrypted communications and files of consumers. Microsoft declined to comment for this story.

This implies that the US intelligence community -- I'm guessing the NSA here -- can break BitLocker. The source document, though, is much less definitive about it.

Power analysis, a side-channel attack, can be used against secure devices to non-invasively extract protected cryptographic information such as implementation details or secret keys. We have employed a number of publically known attacks against the RSA cryptography found in TPMs from five different manufacturers. We will discuss the details of these attacks and provide insight into how private TPM key information can be obtained with power analysis. In addition to conventional wired power analysis, we will present results for extracting the key by measuring electromagnetic signals emanating from the TPM while it remains on the motherboard. We will also describe and present results for an entirely new unpublished attack against a Chinese Remainder Theorem (CRT) implementation of RSA that will yield private key information in a single trace.

The ability to obtain a private TPM key not only provides access to TPM-encrypted data, but also enables us to circumvent the root-of-trust system by modifying expected digest values in sealed data. We will describe a case study in which modifications to Microsoft's Bitlocker encrypted metadata prevents software-level detection of changes to the BIOS.

Differential power analysis is a powerful cryptanalytic attack. Basically, it examines a chip's power consumption while it performs encryption and decryption operations and uses that information to recover the key. What's important here is that this is an attack to extract key information from a chip while it is running. If the chip is powered down, or if it doesn't have the key inside, there's no attack.
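Here's a toy version of the correlation attack against simulated traces. The leakage model -- Hamming weight of an intermediate value, plus noise -- is the standard textbook one, but all the numbers are invented, and a real attack works from oscilloscope measurements of an actual chip:

```python
import random

def hw(x):
    """Hamming weight -- the usual power-leakage model."""
    return bin(x).count("1")

random.seed(1)
SECRET_KEY = 0x3A

# Simulated acquisition: each "trace" is one power sample that leaks the
# Hamming weight of an intermediate value (plaintext XOR key) plus noise.
plaintexts = [random.randrange(256) for _ in range(500)]
traces = [hw(p ^ SECRET_KEY) + random.gauss(0, 0.5) for p in plaintexts]

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The attack: predict the leakage under every key guess and correlate the
# prediction with the measured traces. The correct key correlates best.
best = max(range(256),
           key=lambda k: correlation([hw(p ^ k) for p in plaintexts], traces))
print(hex(best))  # the recovered key byte
```

Note that the attacker never sees the key directly; it falls out of the statistics of a few hundred measurements.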

I don't take this to mean that the NSA can take a BitLocker-encrypted hard drive and recover the key. I do take it to mean that the NSA can perform a bunch of clever hacks on a BitLocker-encrypted hard drive while it is running. So I don't think this means that BitLocker is broken.

But who knows? We do know that the FBI pressured Microsoft to add a backdoor to BitLocker in 2005. I believe that was unsuccessful.

More than that, we don't know.

EDITED TO ADD (3/12): Starting with Windows 8, Microsoft removed the Elephant Diffuser from BitLocker. I see no reason to remove it other than to make the encryption weaker.

Posted on March 10, 2015 at 2:34 PM • 69 Comments

Geotagging Twitter Users by Mining Their Social Graphs

New research: "Geotagging One Hundred Million Twitter Accounts with Total Variation Minimization," by Ryan Compton, David Jurgens, and David Allen.

Abstract: Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data.

Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors.

Leave-many-out evaluation shows that our method is able to infer location for 101,846,236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.
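The intuition can be sketched in a few lines: place an unknown user at the point minimizing total distance to their located friends -- the geometric median, which is robust to outlying friends. The coordinates below are invented, and the paper's actual algorithm jointly optimizes over the whole network rather than one user at a time:

```python
def geometric_median(points, iters=100):
    """Weiszfeld's algorithm: the point minimizing summed distance to points."""
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        ws = [1.0 / max(((x - px) ** 2 + (y - py) ** 2) ** 0.5, 1e-9)
              for px, py in points]
        total = sum(ws)
        x = sum(w * px for w, (px, _) in zip(ws, points)) / total
        y = sum(w * py for w, (_, py) in zip(ws, points)) / total
    return x, y

# Invented friend graph: four friends in New York, one outlier in Los Angeles.
friends = [(40.7, -74.0), (40.8, -73.9), (40.6, -74.1),
           (40.7, -73.95), (34.1, -118.2)]
lat, lon = geometric_median(friends)
print(round(lat, 2), round(lon, 2))  # lands in the New York cluster, ignoring the outlier
```

The point is that your location leaks from your social graph even if you never tag a single tweet.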

Posted on March 10, 2015 at 6:50 AM • 6 Comments

Identifying When Someone Is Operating a Computer Remotely

Here's an interesting technique to detect Remote Access Trojans, or RATs: differences in how local and remote users use the keyboard and mouse:

By using biometric analysis tools, we are able to analyze cognitive traits such as hand-eye coordination, usage preferences, as well as device interaction patterns to identify a delay or latency often associated with remote access attacks. Simply put, a RAT's keyboard typing or cursor movement will often cause delayed visual feedback which in turn results in delayed response time; the data is simply not as fluent as would be expected from standard human behavior data.
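As a crude sketch of the idea, a detector could flag sessions whose input-to-feedback latency sits far above a local-user baseline. All the timings and the threshold below are invented; a real system models many more behavioral features than a single latency statistic:

```python
from statistics import mean

def is_remote_session(latencies_ms, baseline_mean=35.0, baseline_std=8.0, z=3.0):
    """Flag a session whose average input latency is more than z standard
    deviations above the local-user baseline (all numbers invented)."""
    return mean(latencies_ms) > baseline_mean + z * baseline_std

local_session = [28, 31, 40, 33, 36, 30, 38, 34]     # direct, sub-50ms feedback
remote_session = [95, 140, 110, 180, 125, 160, 150]  # a RAT adds network round trips

print(is_remote_session(local_session))   # False
print(is_remote_session(remote_session))  # True
```
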

No data on false positives vs. false negatives, but interesting nonetheless.

Posted on March 9, 2015 at 1:03 PM • 19 Comments

Attack Attribution and Cyber Conflict

The vigorous debate after the Sony Pictures breach pitted the Obama administration against many of us in the cybersecurity community who didn't buy Washington's claim that North Korea was the culprit.

What's both amazing -- and perhaps a bit frightening -- about that dispute over who hacked Sony is that it happened in the first place.

But what it highlights is the fact that we're living in a world where we can't easily tell the difference between a couple of guys in a basement apartment and the North Korean government with an estimated $10 billion military budget. And that ambiguity has profound implications for how countries will conduct foreign policy in the Internet age.

Clandestine military operations aren't new. Terrorism can be hard to attribute, especially the murky edges of state-sponsored terrorism. What's different in cyberspace is how easy it is for an attacker to mask his identity -- and the wide variety of people and institutions that can attack anonymously.

In the real world, you can often identify the attacker by the weaponry. In 2007, Israel attacked a Syrian nuclear facility. It was a conventional attack -- military airplanes flew over Syria and bombed the plant -- and there was never any doubt who did it. That shorthand doesn't work in cyberspace.

When the US and Israel attacked an Iranian nuclear facility in 2010, they used a cyberweapon and their involvement was a secret for years. On the Internet, technology broadly disseminates capability. Everyone from lone hackers to criminals to hypothetical cyberterrorists to nations' spies and soldiers is using the same tools and the same tactics. Internet traffic doesn't come with a return address, and it's easy for an attacker to obscure his tracks by routing his attacks through some innocent third party.

And while it now seems that North Korea did indeed attack Sony, the attack it most resembles was conducted by members of the hacker group Anonymous against a company called HBGary Federal in 2011. In the same year, other members of Anonymous threatened NATO, and in 2014, still others announced that they were going to attack ISIS. Regardless of what you think of the group's capabilities, it's a new world when a bunch of hackers can threaten an international military alliance.

Even when a victim does manage to attribute a cyberattack, the process can take a long time. It took the US weeks to publicly blame North Korea for the Sony attacks. That was relatively fast; most of that time was probably spent trying to figure out how to respond. Attacks by China against US companies have taken much longer to attribute.

This delay makes defense policy difficult. Microsoft's Scott Charney makes this point: When you're being physically attacked, you can call on a variety of organizations to defend you -- the police, the military, whoever does antiterrorism security in your country, your lawyers. The legal structure justifying that defense depends on knowing two things: who's attacking you, and why. Unfortunately, when you're being attacked in cyberspace, the two things you often don't know are who's attacking you, and why.

Whose job was it to defend Sony? Was it the US military's, because it believed the attack to have come from North Korea? Was it the FBI, because this wasn't an act of war? Was it Sony's own problem, because it's a private company? What about during those first weeks, when no one knew who the attacker was? These are just a few of the policy questions that we don't have good answers for.

Certainly Sony needs enough security to protect itself regardless of who the attacker was, as do all of us. For the victim of a cyberattack, who the attacker is can be academic. The damage is the same, whether it's a couple of hackers or a nation-state.

In the geopolitical realm, though, attribution is vital. And not only is attribution hard, providing evidence of any attribution is even harder. Because so much of the FBI's evidence was classified -- and probably provided by the National Security Agency -- it was not able to explain why it was so sure North Korea did it. As I recently wrote: "The agency might have intelligence on the planning process for the hack. It might, say, have phone calls discussing the project, weekly PowerPoint status reports, or even Kim Jong-un's sign-off on the plan." Making any of this public would reveal the NSA's "sources and methods," something it regards as a very important secret.

Different types of attribution require different levels of evidence. In the Sony case, we saw the US government was able to generate enough evidence to convince itself. Perhaps it had the additional evidence required to convince North Korea it was sure, and provided that over diplomatic channels. But if the public is expected to support any government retaliatory action, they are going to need sufficient evidence made public to convince them. Today, trust in US intelligence agencies is low, especially after the 2003 Iraqi weapons-of-mass-destruction debacle.

What all of this means is that we are in the middle of an arms race between attackers and those that want to identify them: deception and deception detection. It's an arms race in which the US -- and, by extension, its allies -- has a singular advantage. We spend more money on electronic eavesdropping than the rest of the world combined, we have more technology companies than any other country, and the architecture of the Internet ensures that most of the world's traffic passes through networks the NSA can eavesdrop on.

In 2012, then US Secretary of Defense Leon Panetta said publicly that the US -- presumably the NSA -- has "made significant advances in ... identifying the origins" of cyberattacks. We don't know if this means they have made some fundamental technological advance, or that their espionage is so good that they're monitoring the planning processes. Other US government officials have privately said that they've solved the attribution problem.

We don't know how much of that is real and how much is bluster. It's actually in America's best interest to confidently accuse North Korea, even if it isn't sure, because it sends a strong message to the rest of the world: "Don't think you can hide in cyberspace. If you try anything, we'll know it's you."

Strong attribution leads to deterrence. The detailed NSA capabilities leaked by Edward Snowden help with this, because they bolster an image of an almost-omniscient NSA.

It's not, though -- which brings us back to the arms race. A world where hackers and governments have the same capabilities, where governments can masquerade as hackers or as other governments, and where much of the attribution evidence intelligence agencies collect remains secret, is a dangerous place.

So is a world where countries have secret capabilities for deception and deception detection, and are constantly trying to get the best of each other. This is the world of today, though, and we need to be prepared for it.

This essay previously appeared in the Christian Science Monitor.

Posted on March 9, 2015 at 7:09 AM • 34 Comments

Friday Squid Blogging: Biodegradable Thermoplastic Inspired by Squid Teeth

There's a new 3D-printable biodegradable thermoplastic:

Pennsylvania State University researchers have synthesized a biodegradable thermoplastic that can be used for molding, extrusion, 3D printing, as an adhesive, or a coating using structural proteins from the ring teeth on squid tentacles.

Another article:

The researchers took genes from a squid and put them into E. coli bacteria. "You can insert genes into this organism and while it produces its own genes, [it] produces this extra protein," Demirel explains. He compares the process to making wine or beer, except that instead of the fermentation process producing alcohol, it produces more of the synthesized squid protein.

They began producing the material in a 1-liter tank, but by now have started using a 300-liter tank and can make 30-40 grams a day. In addition, they've made several changes to make the production process cheaper, whittling the cost down from $50 per gram to $100 per kilogram. Demirel says they are looking at using algae instead of bacteria to cut down costs further.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on March 6, 2015 at 4:21 PM • 130 Comments

Data and Goliath's Big Idea

Data and Goliath is a book about surveillance, both government and corporate. It's an exploration in three parts: what's happening, why it matters, and what to do about it. This is a big and important issue, and one that I've been working on for decades now. We've been on a headlong path of more and more surveillance, fueled by fear -- of terrorism mostly -- on the government side, and convenience on the corporate side. My goal was to step back and say "wait a minute; does any of this make sense?" I'm proud of the book, and hope it will contribute to the debate.

But there's a big idea here too, and that's the balance between group interest and self-interest. Data about us is individually private, and at the same time valuable to all us collectively. How do we decide between the two? If President Obama tells us that we have to sacrifice the privacy of our data to keep our society safe from terrorism, how do we decide if that's a good trade-off? If Google and Facebook offer us free services in exchange for allowing them to build intimate dossiers on us, how do we know whether to take the deal?

There are a lot of these sorts of deals on offer. Waze gives us real-time traffic information, but does it by collecting the location data of everyone using the service. The medical community wants our detailed health data to perform all sorts of health studies and to get early warning of pandemics. The government wants to know all about you to better deliver social services. Google wants to know everything about you for marketing purposes, but will "pay" you with free search, free e-mail, and the like.

Here's another one I describe in the book: "Social media researcher Reynol Junco analyzes the study habits of his students. Many textbooks are online, and the textbook websites collect an enormous amount of data about how -- and how often -- students interact with the course material. Junco augments that information with surveillance of his students' other computer activities. This is incredibly invasive research, but its duration is limited and he is gaining new understanding about how both good and bad students study -- and has developed interventions aimed at improving how students learn. Did the group benefit of this study outweigh the individual privacy interest of the subjects who took part in it?"

Again and again, it's the same trade-off: individual value versus group value.

I believe this is the fundamental issue of the information age, and solving it means careful thinking about the specific issues and a moral analysis of how they affect our core values.

You can see that in some of the debate today. I know hardened privacy advocates who think it should be a crime for people to withhold their medical data from the pool of information. I know people who are fine with pretty much any corporate surveillance but want to prohibit all government surveillance, and others who advocate the exact opposite.

When possible, we need to figure out how to get the best of both: how to design systems that make use of our data collectively to benefit society as a whole, while at the same time protecting people individually.

The world isn't waiting; decisions about surveillance are being made for us -- often in secret. If we don't figure this out for ourselves, others will decide what they want to do with us and our data. And we don't want that. I say: "We don't want the FBI and NSA to secretly decide what levels of government surveillance are the default on our cell phones; we want Congress to decide matters like these in an open and public debate. We don't want the governments of China and Russia to decide what censorship capabilities are built into the Internet; we want an international standards body to make those decisions. We don't want Facebook to decide the extent of privacy we enjoy amongst our friends; we want to decide for ourselves."

In my last chapter, I write: "Data is the pollution problem of the information age, and protecting privacy is the environmental challenge. Almost all computers produce personal information. It stays around, festering. How we deal with it -- how we contain it and how we dispose of it -- is central to the health of our information economy. Just as we look back today at the early decades of the industrial age and wonder how our ancestors could have ignored pollution in their rush to build an industrial world, our grandchildren will look back at us during these early decades of the information age and judge us on how we addressed the challenge of data collection and misuse."

That's it; that's our big challenge. Some of our data is best shared with others. Some of it can be 'processed' -- anonymized, maybe -- before reuse. Some of it needs to be disposed of properly, either immediately or after a time. And some of it should be saved forever. Knowing what data goes where is a balancing act between group and self-interest, a trade-off that will continually change as technology changes, and one that we will be debating for decades to come.

This essay previously appeared on John Scalzi's blog Whatever.

EDITED TO ADD (3/7): Hacker News thread.

Posted on March 6, 2015 at 2:10 PM • 39 Comments

FREAK: Security Rollback Attack Against SSL

This week, we learned about an attack called "FREAK" -- "Factoring Attack on RSA-EXPORT Keys" -- that can break the encryption of many websites. Basically, some sites' implementations of secure sockets layer technology, or SSL, contain both strong encryption algorithms and weak encryption algorithms. Connections are supposed to use the strong algorithms, but in many cases an attacker can force the website to use the weaker encryption algorithms and then decrypt the traffic. From Ars Technica:

In recent days, a scan of more than 14 million websites that support the secure sockets layer or transport layer security protocols found that more than 36 percent of them were vulnerable to the decryption attacks. The exploit takes about seven hours to carry out and costs as little as $100 per site.

This is a general class of attack I call "security rollback" attacks. Basically, the attacker forces the system users to revert to a less secure version of their protocol. Think about the last time you used your credit card. The verification procedure involved the retailer's computer connecting with the credit card company. What if you snuck around to the back of the building and severed the retailer's phone lines? Most likely, the retailer would have still accepted your card, but defaulted to making a manual impression of it and maybe looking at your signature. The result: you'll have a much easier time using a stolen card.

In this case, the security flaw was designed in deliberately. Matthew Green writes:

Back in the early 1990s when SSL was first invented at Netscape Corporation, the United States maintained a rigorous regime of export controls for encryption systems. In order to distribute crypto outside of the U.S., companies were required to deliberately "weaken" the strength of encryption keys. For RSA encryption, this implied a maximum allowed key length of 512 bits.

The 512-bit export grade encryption was a compromise between dumb and dumber. In theory it was designed to ensure that the NSA would have the ability to "access" communications, while allegedly providing crypto that was still "good enough" for commercial use. Or if you prefer modern terms, think of it as the original "golden master key."

The need to support export-grade ciphers led to some technical challenges. Since U.S. servers needed to support both strong and weak crypto, the SSL designers used a "cipher suite" negotiation mechanism to identify the best cipher both parties could support. In theory this would allow "strong" clients to negotiate "strong" ciphersuites with servers that supported them, while still providing compatibility to the broken foreign clients.

And that's the problem. The weak algorithms are still there, and can be exploited by attackers.
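To see why export-grade key sizes matter, here's a toy demonstration of the endgame: once an attacker has downgraded a connection to a weak RSA modulus, factoring it recovers the private key. The modulus below is comically small so it runs instantly; a real 512-bit export key takes about $100 of rented compute, per the Ars Technica figure above:

```python
import math
import random

def pollard_rho(n):
    """Pollard's rho: find a nontrivial factor of composite n."""
    while True:
        x = y = random.randrange(2, n)
        c = random.randrange(1, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n
            y = (y * y + c) % n
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:
            return d

# Toy "export-grade" key: two primes just above a million, vastly smaller
# than even the 512-bit export limit.
p, q, e = 1000003, 1000033, 65537
n = p * q
ciphertext = pow(42, e, n)  # an intercepted RSA-encrypted message

random.seed(0)
f = pollard_rho(n)                   # the attacker factors the weak modulus...
p2, q2 = f, n // f
d = pow(e, -1, (p2 - 1) * (q2 - 1))  # ...derives the private exponent...
print(pow(ciphertext, d, n))         # ...and recovers the plaintext: prints 42
```

At real export sizes the factoring step uses the number field sieve rather than Pollard's rho, but the consequence is identical: the "weakened-for-export" key yields up everything it protected.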

Fixes are coming. Companies like Apple are quickly rolling out patches. But the vulnerability has been around for over a decade, and has almost certainly been used by national intelligence agencies and criminals alike.

This is the generic problem with government-mandated backdoors, key escrow, "golden keys," or whatever you want to call them. We don't know how to design a third-party access system that checks for morality; once we build in such access, we then have to ensure that only the good guys can do it. And we can't. Or, to quote the Economist: "...mathematics applies to just and unjust alike; a flaw that can be exploited by Western governments is vulnerable to anyone who finds it."

This essay previously appeared on the Lawfare blog.

EDITED TO ADD: Microsoft Windows is vulnerable.

Posted on March 6, 2015 at 10:46 AM • 40 Comments

The TSA's FAST Personality Screening Program Violates the Fourth Amendment

New law journal article: "A Slow March Towards Thought Crime: How the Department of Homeland Security's FAST Program Violates the Fourth Amendment," by Christopher A. Rogers. From the abstract:

FAST is currently designed for deployment at airports, where heightened security threats justify warrantless searches under the administrative search exception to the Fourth Amendment. FAST scans, however, exceed the scope of the administrative search exception. Under this exception, the courts would employ a balancing test, weighing the governmental need for the search versus the invasion of personal privacy of the search, to determine whether FAST scans violate the Fourth Amendment. Although the government has an acute interest in protecting the nation's air transportation system against terrorism, FAST is not narrowly tailored to that interest because it cannot detect the presence or absence of weapons but instead detects merely a person's frame of mind. Further, the system is capable of detecting an enormous amount of the scannee's highly sensitive personal medical information, ranging from detection of arrhythmias and cardiovascular disease, to asthma and respiratory failures, physiological abnormalities, psychiatric conditions, or even a woman's stage in her ovulation cycle. This personal information warrants heightened protection under the Fourth Amendment. Rather than target all persons who fly on commercial airplanes, the Department of Homeland Security should limit the use of FAST to where it has credible intelligence that a terrorist act may occur and should place those people scanned on prior notice that they will be scanned using FAST.

Posted on March 6, 2015 at 6:28 AM • 35 Comments

Now Corporate Drones are Spying on Cell Phones

The marketing firm Adnear is using drones to track cell phone users:

The capture does not involve conversations or personally identifiable information, according to director of marketing and research Smriti Kataria. It uses signal strength, cell tower triangulation, and other indicators to determine where the device is, and that information is then used to map the user's travel patterns.

"Let's say someone is walking near a coffee shop," Kataria said by way of example.

The coffee shop may want to offer in-app ads or discount coupons to people who often walk by but don't enter, as well as to frequent patrons when they are elsewhere. Adnear's client would be the coffee shop or other retailers who want to entice passersby.


The system identifies a given user through the device ID, and the location info is used to flesh out the user's physical traffic pattern in his profile. Although anonymous, the user is "identified" as a code. The company says that no name, phone number, router ID, or other personally identifiable information is captured, and there is no photography or video.
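For the curious, the tower triangulation mentioned above is simple geometry: three distance estimates pin down a position. A sketch with invented tower coordinates and distances (real systems work from noisy signal-strength estimates, so they solve a least-squares version of this):

```python
def trilaterate(towers, distances):
    """Locate a device from distances to three towers by linearizing the
    three circle equations (subtract the first from the other two)."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    r1, r2, r3 = distances
    # Two linear equations A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical towers on a km grid; the phone is actually at (3, 4).
towers = [(0, 0), (10, 0), (0, 10)]
distances = [5.0, 8.0622577, 6.7082039]  # sqrt(25), sqrt(65), sqrt(45)
x, y = trilaterate(towers, distances)
print(round(x, 3), round(y, 3))  # recovers (3.0, 4.0)
```
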

Does anyone except this company believe that device ID is not personally identifiable information?

Posted on March 5, 2015 at 6:33 AM • 56 Comments

Tom Ridge Can Find Terrorists Anywhere

One of the problems with our current discourse about terrorism and terrorist policies is that the people entrusted with counterterrorism -- those whose job it is to surveil, study, or defend against terrorism -- become so consumed with their role that they literally start seeing terrorists everywhere. So it comes as no surprise that if you ask Tom Ridge, the former head of the Department of Homeland Security, about potential terrorism risks at a new LA football stadium, of course he finds them everywhere.

From a report he prepared -- paid, I'm sure -- about the location of a new football stadium:

Specifically, locating an NFL stadium at the Inglewood-Hollywood Park site needlessly increases risks for existing interests: LAX and tenant airlines, the NFL, the City of Los Angeles, law enforcement and first responders as well as the citizens and commercial enterprises in surrounding areas and across global transportation networks and supply chains. That risk would be expanded with the additional stadium and "soft target" infrastructure that would encircle the facility locally.

To be clear, total risk cannot be eliminated at any site. But basic risk management principles suggest that the proximity of these two sites creates a separate and additional set of risks that are wholly unnecessary.

In the post 9/11 world, the threat of terrorism is a permanent condition. As both a former governor and secretary of homeland security, it is my opinion that the peril of placing a National Football League stadium in the direct flight path of LAX -- layering risk -- outweigh any benefits over the decades-long lifespan of the facility.

If a decision is made to move forward at the Inglewood/Hollywood Park site, the NFL, state and local leaders, and those they represent, must be willing to accept the significant risk and the possible consequences that accompany a stadium at the location. This should give both public and private leaders in the area some pause. At the very least, an open, public debate should be enabled so that all interests may understand the comprehensive and interconnected security, safety and economic risks well before a shovel touches the ground.

I'm sure he can't help himself.

I am reminded of Glenn Greenwald's essay on the "terrorist expert" industry. I am also reminded of this story about a father taking pictures of his daughters.

On the plus side, now we all have a convincing argument against development. "You can't possibly build that shopping mall near my home, because OMG! terrorism."

Posted on March 4, 2015 at 6:40 AM • 45 Comments

Data and Goliath: Reviews and Excerpts

On the net right now, there are excerpts from the Introduction on Scientific American, Chapter 5 on the Atlantic, Chapter 6 on the Blaze, Chapter 8 on Ars Technica, Chapter 15 on Slate, and Chapter 16 on Motherboard. That might seem like a lot, but it's only 9,000 of the book's 80,000 words: just over 10%.

There are also a few reviews: from Boing Boing, Booklist, Kirkus Reviews, and Nature. More reviews coming.

Amazon claims to be temporarily out of stock, but that'll only be for a day or so. There are many other places to buy the book, including Indie Bound, which serves independent booksellers.

Book website is here.

Posted on March 3, 2015 at 1:03 PM • 17 Comments

Google Backs Away from Default Lollipop Encryption

Lollipop device encryption by default is still in the future. No conspiracy here; it seems that hardware makers simply don't have the appropriate drivers yet. But while relaxing the requirement might make sense technically, it's not a good public relations move.

Android compatibility document. Slashdot story.

Posted on March 3, 2015 at 5:46 AM • 35 Comments

The Democratization of Cyberattack

The thing about infrastructure is that everyone uses it. If it's secure, it's secure for everyone. And if it's insecure, it's insecure for everyone. This forces some hard policy choices.

When I was working with the Guardian on the Snowden documents, the one top-secret program the NSA desperately did not want us to expose was QUANTUM. This is the NSA's program for what is called packet injection--basically, a technology that allows the agency to hack into computers.
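Packet injection is conceptually simple: an attacker who can see a victim's request forges a response that impersonates the real server, and wins if the forgery arrives first. The hard part for the victim is that a well-formed forgery is indistinguishable from the genuine packet. As a minimal illustration (standard library only; all addresses and ports are hypothetical, and nothing is actually sent), here is what constructing such a forged TCP segment involves:

```python
import struct

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    """Internet checksum over the TCP pseudo-header plus segment (RFC 793)."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
    data = pseudo + segment
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:  # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def forge_segment(src_ip: bytes, dst_ip: bytes, sport: int, dport: int,
                  seq: int, ack: int, payload: bytes) -> bytes:
    """Build a TCP segment that impersonates the real server's reply.

    An injector with a privileged network vantage point observes the
    client's sequence numbers, then races this forgery against the
    legitimate response; whichever arrives first is accepted.
    """
    offset_flags = (5 << 12) | 0x018  # 20-byte header, PSH+ACK flags
    header = struct.pack("!HHIIHHHH", sport, dport, seq, ack,
                         offset_flags, 65535, 0, 0)  # checksum 0 for now
    csum = tcp_checksum(src_ip, dst_ip, header + payload)
    header = header[:16] + struct.pack("!H", csum) + header[18:]
    return header + payload

# A forged "server" reply carrying attacker-chosen content:
forged = forge_segment(bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]),
                       80, 54321, 1000, 2000, b"injected")
```

The technique needs no cryptographic break at all, which is exactly why it is available to anyone with the right network position, from intelligence agencies down to criminals with off-the-shelf tools.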

Turns out, though, that the NSA was not alone in its use of this technology. The Chinese government uses packet injection to attack computers. The cyberweapons manufacturer Hacking Team sells packet injection technology to any government willing to pay for it. Criminals use it. And there are hacker tools that give the capability to individuals as well.

All of these existed before I wrote about QUANTUM. By using its knowledge to attack others rather than to build up the internet's defenses, the NSA has worked to ensure that anyone can use packet injection to hack into computers.

This isn't the only example of once-top-secret US government attack capabilities being used against US government interests. StingRay is a particular brand of IMSI catcher, and is used to intercept cell phone calls and metadata. This technology was once the FBI's secret, but not anymore. There are dozens of these devices scattered around Washington, DC, as well as the rest of the country, run by who-knows-what government or organization. By accepting the vulnerabilities in these devices so the FBI can use them to solve crimes, we necessarily allow foreign governments and criminals to use them against us.

Similarly, vulnerabilities in phone switches--SS7 switches, for those who like jargon--have long been used by the NSA to locate cell phones. This same technology is sold by the US company Verint and the UK company Cobham to third-world governments, and hackers have demonstrated the same capabilities at conferences. An eavesdropping capability that was built into phone switches to enable lawful intercepts was used by still-unidentified unlawful interceptors in Greece between 2004 and 2005.

These are the stories you need to keep in mind when thinking about proposals to ensure that all communications systems can be eavesdropped on by government. Both the FBI's James Comey and UK Prime Minister David Cameron recently proposed limiting secure cryptography in favor of cryptography they can have access to.

But here's the problem: technological capabilities cannot distinguish based on morality, nationality, or legality; if the US government is able to use a backdoor in a communications system to spy on its enemies, the Chinese government can use the same backdoor to spy on its dissidents.

Even worse, modern computer technology is inherently democratizing. Today's NSA secrets become tomorrow's PhD theses and the next day's hacker tools. As long as we're all using the same computers, phones, social networking platforms, and computer networks, a vulnerability that allows us to spy also allows us to be spied upon.

We can't choose a world where the US gets to spy but China doesn't, or even a world where governments get to spy and criminals don't. We need to choose, as a matter of policy, communications systems that are secure for all users, or ones that are vulnerable to all attackers. It's security or surveillance.

As long as criminals are breaking into corporate networks and stealing our data, as long as totalitarian governments are spying on their citizens, as long as cyberterrorism and cyberwar remain a threat, and as long as the beneficial uses of computer technology outweigh the harmful uses, we have to choose security. Anything else is just too dangerous.

This essay previously appeared on Vice Motherboard.

EDITED TO ADD (3/4): Slashdot thread.

Posted on March 2, 2015 at 6:49 AM • 145 Comments

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.