Blog: June 2015 Archives

Twitter Followers: Please Use the Correct Feed

The official Twitter feed for my blog is @schneierblog. The account @Bruce_Schneier also mirrors my blog, but it is not mine. I have nothing to do with it, and I don’t know who owns it.

Normally I wouldn’t mind, but the unofficial feed fails intermittently. Also, @Bruce_Schneier follows people who then think I’m following them. I’m not; I never log in to Twitter and I don’t follow anyone there.

So if you want to read my blog on Twitter, please make sure you’re following @schneierblog. If you are the person who runs the @Bruce_Schneier account—if anyone is even running it anymore—please e-mail me at the address on my Contact page.

And if anyone from the Twitter fraud department is reading this, please contact me. I know I can get the @Bruce_Schneier account deleted, but I don’t want to lose the 27,300 followers on it. What I want is to consolidate them with the 67,700 followers on my real account. There’s no way to explain this on the form to report Twitter impersonation. (Although maybe I should just delete the account. I didn’t do it 18 months ago when there were only 16,000 followers on that account, and look what happened. It’ll only be worse next year.)

EDITED TO ADD (7/2): It’s done. @Bruce_Schneier is gone.

Posted on June 30, 2015 at 1:16 PM • 25 Comments

Tracking the Psychological Effects of the 9/11 Attacks

Interesting research from 2012: “The Dynamics of Evolving Beliefs, Concerns, Emotions, and Behavioral Avoidance Following 9/11: A Longitudinal Analysis of Representative Archival Samples”:

Abstract: September 11 created a natural experiment that enables us to track the psychological effects of a large-scale terror event over time. The archival data came from 8,070 participants of 10 ABC and CBS News polls collected from September 2001 until September 2006. Six questions investigated emotional, behavioral, and cognitive responses to the events of September 11 over a five-year period. We found that heightened responses after September 11 dissipated and reached a plateau at various points in time over a five-year period. We also found that emotional, cognitive, and behavioral reactions were moderated by age, sex, political affiliation, and proximity to the attack. Both emotional and behavioral responses returned to a normal state after one year, whereas cognitively-based perceptions of risk were still diminishing as late as September 2006. These results provide insight into how individuals will perceive and respond to future similar attacks.

Posted on June 30, 2015 at 6:27 AM • 34 Comments

TEMPEST Attack

There’s a new paper on a low-cost TEMPEST attack against PC cryptography:

We demonstrate the extraction of secret decryption keys from laptop computers, by nonintrusively measuring electromagnetic emanations for a few seconds from a distance of 50 cm. The attack can be executed using cheap and readily-available equipment: a consumer-grade radio receiver or a Software Defined Radio USB dongle. The setup is compact and can operate untethered; it can be easily concealed, e.g., inside pita bread. Common laptops, and popular implementations of RSA and ElGamal encryptions, are vulnerable to this attack, including those that implement the decryption using modern exponentiation algorithms such as sliding-window, or even its side-channel resistant variant, fixed-window (m-ary) exponentiation.

We successfully extracted keys from laptops of various models running GnuPG (popular open source encryption software, implementing the OpenPGP standard), within a few seconds. The attack sends a few carefully-crafted ciphertexts, and when these are decrypted by the target computer, they trigger the occurrence of specially-structured values inside the decryption software. These special values cause observable fluctuations in the electromagnetic field surrounding the laptop, in a way that depends on the pattern of key bits (specifically, the key-bits window in the exponentiation routine). The secret key can be deduced from these fluctuations, through signal processing and cryptanalysis.
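
The attack works because the sequence of operations inside the exponentiation routine depends on the secret key, and those operations have measurably different electromagnetic signatures. The principle is easiest to see in textbook square-and-multiply; the paper’s targets use windowed variants, where the crafted ciphertexts make the key-bit windows leak instead. Here is a minimal Python sketch of the principle (mine, not the researchers’ code):

```python
# Illustrative sketch, not the paper's code: a left-to-right
# square-and-multiply loop performs a key-dependent sequence of
# operations, so any channel that distinguishes "multiply" from
# "square" (timing, power draw, EM emanations) reveals the secret
# exponent bit by bit.

def square_and_multiply(base: int, exponent: int, modulus: int):
    result = 1
    trace = []                     # what a side-channel observer effectively sees
    for bit in bin(exponent)[2:]:  # exponent bits, most significant first
        result = (result * result) % modulus
        trace.append("S")          # square: happens for every bit
        if bit == "1":
            result = (result * base) % modulus
            trace.append("M")      # multiply: happens only for 1-bits
    return result, trace

_, trace = square_and_multiply(7, 0b101101, 1000003)
print("".join(trace))  # "SMSSMSMSSM": each "SM" is a 1-bit, each lone "S" a 0-bit
```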

From Wired:

Researchers at Tel Aviv University and Israel’s Technion research institute have developed a new palm-sized device that can wirelessly steal data from a nearby laptop based on the radio waves leaked by its processor’s power use. Their spy bug, built for less than $300, is designed to allow anyone to “listen” to the accidental radio emanations of a computer’s electronics from 19 inches away and derive the user’s secret decryption keys, enabling the attacker to read their encrypted communications. And that device, described in a paper they’re presenting at the Workshop on Cryptographic Hardware and Embedded Systems in September, is both cheaper and more compact than similar attacks from the past—so small, in fact, that the Israeli researchers demonstrated it can fit inside a piece of pita bread.

Another article. NSA article from 1972 on TEMPEST. Hacker News thread. Reddit thread.

Posted on June 29, 2015 at 1:38 PM • 34 Comments

Other GCHQ News from Snowden

There are two other Snowden stories this week about GCHQ: one about its hacking practices, and the other about its propaganda and psychology research. The second is particularly disturbing:

While some of the unit’s activities are focused on the claimed areas, JTRIG also appears to be intimately involved in traditional law enforcement areas and U.K.-specific activity, as previously unpublished documents demonstrate. An August 2009 JTRIG memo entitled “Operational Highlights” boasts of “GCHQ’s first serious crime effects operation” against a website that was identifying police informants and members of a witness protection program. Another operation investigated an Internet forum allegedly “used to facilitate and execute online fraud.” The document also describes GCHQ advice provided “to assist the UK negotiating team on climate change.”

Particularly revealing is a fascinating 42-page document from 2011 detailing JTRIG’s activities. It provides the most comprehensive and sweeping insight to date into the scope of this unit’s extreme methods. Entitled “Behavioral Science Support for JTRIG’s Effects and Online HUMINT [Human Intelligence] Operations,” it describes the types of targets on which the unit focuses, the psychological and behavioral research it commissions and exploits, and its future organizational aspirations. It is authored by a psychologist, Mandeep K. Dhami.

Among other things, the document lays out the tactics the agency uses to manipulate public opinion, its scientific and psychological research into how human thinking and behavior can be influenced, and the broad range of targets that are traditionally the province of law enforcement rather than intelligence agencies.

Posted on June 26, 2015 at 12:12 PM • 54 Comments

NSA and GCHQ Attacked Antivirus Companies

On Monday, the Intercept published a new story from the Snowden documents:

The spy agencies have reverse engineered software products, sometimes under questionable legal authority, and monitored web and email traffic in order to discreetly thwart anti-virus software and obtain intelligence from companies about security software and users of such software. One security software maker repeatedly singled out in the documents is Moscow-based Kaspersky Lab, which has a holding registered in the U.K., claims more than 270,000 corporate clients, and says it protects more than 400 million people with its products.

British spies aimed to thwart Kaspersky software in part through a technique known as software reverse engineering, or SRE, according to a top-secret warrant renewal request. The NSA has also studied Kaspersky Lab’s software for weaknesses, obtaining sensitive customer information by monitoring communications between the software and Kaspersky servers, according to a draft top-secret report. The U.S. spy agency also appears to have examined emails inbound to security software companies flagging new viruses and vulnerabilities.

Wired has a good article on the documents:

The documents…don’t describe actual computer breaches against the security firms, but instead depict a systematic campaign to reverse-engineer their software in order to uncover vulnerabilities that could help the spy agencies subvert it.

[…]

An NSA slide describing “Project CAMBERDADA” lists at least 23 antivirus and security firms that were in that spy agency’s sights. They include the Finnish antivirus firm F-Secure, the Slovakian firm Eset, Avast software from the Czech Republic, and Bit-Defender from Romania. Notably missing from the list are the American anti-virus firms Symantec and McAfee as well as the UK-based firm Sophos.

But antivirus wasn’t the only target of the two spy agencies. They also targeted their reverse-engineering skills against CheckPoint, an Israeli maker of firewall software, as well as commercial encryption programs and software underpinning the online bulletin boards of numerous companies. GCHQ, for example, reverse-engineered both the CrypticDisk program made by Exlade and the eDataSecurity system from Acer. The spy agency also targeted web forum systems like vBulletin and Invision Power Board—used by Sony Pictures, Electronic Arts, NBC Universal and others—as well as CPanel, a software used by GoDaddy for configuring its servers, and PostfixAdmin, for managing the Postfix email server software. But that’s not all. GCHQ reverse-engineered Cisco routers, too, which allowed the agency’s spies to access “almost any user of the internet” inside Pakistan and “to re-route selective traffic” straight into the mouth of GCHQ’s collection systems.

There’s also this article from Ars Technica. Slashdot thread.

Kaspersky recently announced that it was the victim of Duqu 2.0, probably from Israel.

Posted on June 26, 2015 at 6:59 AM • 20 Comments

Yet Another Leaker—with the NSA's French Intercepts

Wikileaks has published some NSA SIGINT documents describing intercepted French government communications. This seems not to be from the Snowden documents. It could be one of the other NSA leakers, or it could be someone else entirely.

As leaks go, this isn’t much. As I’ve said before, spying on foreign leaders is the kind of thing we want the NSA to do. I’m sure French Intelligence does the same to us.

EDITED TO ADD (6/25): To me, more interesting than the intercepts is the spreadsheet of NSA surveillance targets. That spreadsheet gives us a glimpse into the US process of surveillance: what US government office initially asked for the surveillance, what NSA office is tasked with analyzing the intelligence collected, where a particular target is on the priorities list, and so on.

Posted on June 25, 2015 at 12:51 PM • 77 Comments

What is the DoD's Position on Backdoors in Security Systems?

In May, Admiral James A. Winnefeld, Jr., vice-chairman of the Joint Chiefs of Staff, gave an address at the Joint Service Academies Cyber Security Summit at West Point. After he spoke for twenty minutes on the importance of Internet security and a good national defense, I was able to ask him a question (32:42 mark) about security versus surveillance:

Bruce Schneier: I’d like to hear you talk about this need to get beyond signatures and the more robust cyber defense and ask the industry to provide these technologies to make the infrastructure more secure. My question is, the only definition of “us” that makes sense is the world, is everybody. Any technologies that we’ve developed and built will be used by everyone—nation-state and non-nation-state. So anything we do to increase our resilience, infrastructure, and security will naturally make Admiral Rogers’s both intelligence and attack jobs much harder. Are you okay with that?

Admiral James A. Winnefeld: Yes. I think Mike’s okay with that, also. That’s a really, really good question. We call that IGL. Anyone know what IGL stands for? Intel gain-loss. And there’s this constant tension between the operational community and the intelligence community when a military action could cause the loss of a critical intelligence node. We live this every day. In fact, in ancient times, when we were collecting actual signals in the air, we would be on the operational side, “I want to take down that emitter so it’ll make it safer for my airplanes to penetrate the airspace,” and they’re saying, “No, you’ve got to keep that emitter up, because I’m getting all kinds of intelligence from it.” So this is a familiar problem. But I think we all win if our networks are more secure. And I think I would rather live on the side of secure networks and a harder problem for Mike on the intelligence side than very vulnerable networks and an easy problem for Mike. And part of that—it’s not only the right thing to do, but part of that goes to the fact that we are more vulnerable than any other country in the world, on our dependence on cyber. I’m also very confident that Mike has some very clever people working for him. He might actually still be able to get some work done. But it’s an excellent question. It really is.

It’s a good answer, and one firmly on the side of not introducing security vulnerabilities, backdoors, key-escrow systems, or anything that weakens Internet systems. It speaks to what I have seen as a split in the Second Crypto War, between the NSA and the FBI on building secure systems versus building systems with surveillance capabilities.

I have written about this before:

But here’s the problem: technological capabilities cannot distinguish based on morality, nationality, or legality; if the US government is able to use a backdoor in a communications system to spy on its enemies, the Chinese government can use the same backdoor to spy on its dissidents.

Even worse, modern computer technology is inherently democratizing. Today’s NSA secrets become tomorrow’s PhD theses and the next day’s hacker tools. As long as we’re all using the same computers, phones, social networking platforms, and computer networks, a vulnerability that allows us to spy also allows us to be spied upon.

We can’t choose a world where the US gets to spy but China doesn’t, or even a world where governments get to spy and criminals don’t. We need to choose, as a matter of policy, communications systems that are secure for all users, or ones that are vulnerable to all attackers. It’s security or surveillance.

NSA Director Admiral Mike Rogers was in the audience (he spoke earlier), and I saw him nodding at Winnefeld’s answer. Two weeks later, at CyCon in Tallinn, Rogers gave the opening keynote, and he seemed to be saying the opposite.

“Can we create some mechanism where within this legal framework there’s a means to access information that directly relates to the security of our respective nations, even as at the same time we are mindful we have got to protect the rights of our individual citizens?”

[…]

Rogers said a framework to allow law enforcement agencies to gain access to communications is in place within the phone system in the United States and other areas, so “why can’t we create a similar kind of framework within the internet and the digital age?”

He added: “I certainly have great respect for those that would argue that the most important thing is to ensure the privacy of our citizens and we shouldn’t allow any means for the government to access information. I would argue that’s not in the nation’s best long term interest, that we’ve got to create some structure that should enable us to do that mindful that it has to be done in a legal way and mindful that it shouldn’t be something arbitrary.”

Does Winnefeld know that Rogers is contradicting him? Can someone ask JCS about this?

Posted on June 24, 2015 at 7:42 AM • 58 Comments

Hayden Mocks NSA Reforms

Former NSA Director Michael Hayden mocked the NSA reforms in the recently passed USA Freedom Act:

If somebody would come up to me and say, “Look, Hayden, here’s the thing: This Snowden thing is going to be a nightmare for you guys for about two years. And when we get all done with it, what you’re going to be required to do is that little 215 program about American telephony metadata—and by the way, you can still have access to it, but you got to go to the court and get access to it from the companies, rather than keep it to yourself.” I go: “And this is it after two years? Cool!”

The thing is, he’s right. And Peter Swire is also right when he calls the law “the biggest pro-privacy change to U.S. intelligence law since the original enactment of the Foreign Intelligence Surveillance Act in 1978.” I supported the bill not because it was the answer, but because it was a step in the right direction. And Hayden’s comments demonstrate how much more work we have to do.

Posted on June 23, 2015 at 1:39 PM • 25 Comments

Why We Encrypt

Encryption protects our data. It protects our data when it’s sitting on our computers and in data centers, and it protects it when it’s being transmitted around the Internet. It protects our conversations, whether video, voice, or text. It protects our privacy. It protects our anonymity. And sometimes, it protects our lives.

This protection is important for everyone. It’s easy to see how encryption protects journalists, human rights defenders, and political activists in authoritarian countries. But encryption protects the rest of us as well. It protects our data from criminals. It protects it from competitors, neighbors, and family members. It protects it from malicious attackers, and it protects it from accidents.

Encryption works best if it’s ubiquitous and automatic. The two forms of encryption you use most often—https URLs on your browser, and the handset-to-tower link for your cell phone calls—work so well because you don’t even know they’re there.

Encryption should be enabled for everything by default, not a feature you turn on only if you’re doing something you consider worth protecting.

This is important. If we only use encryption when we’re working with important data, then encryption signals that data’s importance. If only dissidents use encryption in a country, that country’s authorities have an easy way of identifying them. But if everyone uses it all of the time, encryption ceases to be a signal. No one can distinguish simple chatting from deeply private conversation. The government can’t tell the dissidents from the rest of the population. Every time you use encryption, you’re protecting someone who needs to use it to stay alive.

It’s important to remember that encryption doesn’t magically convey security. There are many ways to get encryption wrong, and we regularly see them in the headlines. Encryption doesn’t protect your computer or phone from being hacked, and it can’t protect metadata, such as e-mail addresses that need to be unencrypted so your mail can be delivered.

But encryption is the most important privacy-preserving technology we have, and one that is uniquely suited to protect against bulk surveillance—the kind done by governments looking to control their populations and criminals looking for vulnerable victims. By forcing both to target their attacks against individuals, we protect society.

Today, we are seeing government pushback against encryption. Many countries, from states like China and Russia to more democratic governments like the United States and the United Kingdom, are either talking about or implementing policies that limit strong encryption. This is dangerous, because it’s technically impossible, and the attempt will cause incredible damage to the security of the Internet.

There are two morals to all of this. One, we should push companies to offer encryption to everyone, by default. And two, we should resist demands from governments to weaken encryption. Any weakening, even in the name of legitimate law enforcement, puts us all at risk. Even though criminals benefit from strong encryption, we’re all much more secure when we all have strong encryption.

This originally appeared in Securing Safe Spaces Online.

EDITED TO ADD: Last month, I blogged about a UN report on the value of encryption technologies to human freedom worldwide. This essay is the foreword to a companion document:

To support the findings contained in the Special Rapporteur’s report, Privacy International, the Harvard Law School’s International Human Rights Law Clinic and ARTICLE 19 have published an accompanying booklet, Securing Safe Spaces Online: Encryption, online anonymity and human rights, which explores the impact of measures to restrict online encryption and anonymity in four particular countries—the United Kingdom, Morocco, Pakistan and South Korea.

EDITED TO ADD (7/8): this essay has been translated into Russian.

EDITED TO ADD (4/29/2019): this essay has been translated into Spanish.

EDITED TO ADD (11/13/2019): this essay has been translated into Bosnian.

EDITED TO ADD (10/28/2020): this essay has been translated into French and Hungarian.

EDITED TO ADD: this essay has been translated into German.

Posted on June 23, 2015 at 6:02 AM • 44 Comments

History of the First Crypto War

As we’re all gearing up to fight the Second Crypto War over governments’ demands to be able to back-door any cryptographic system, it pays for us to remember the history of the First Crypto War. The Open Technology Institute has written the story of those years in the mid-1990s.

The act that truly launched the Crypto Wars was the White House’s introduction of the “Clipper Chip” in 1993. The Clipper Chip was a state-of-the-art microchip developed by government engineers which could be inserted into consumer hardware telephones, providing the public with strong cryptographic tools without sacrificing the ability of law enforcement and intelligence agencies to access unencrypted versions of those communications. The technology relied on a system of “key escrow,” in which a copy of each chip’s unique encryption key would be stored by the government. Although White House officials mobilized both political and technical allies in support of the proposal, it faced immediate backlash from technical experts, privacy advocates, and industry leaders, who were concerned about the security and economic impact of the technology in addition to obvious civil liberties concerns. As the battle wore on throughout 1993 and into 1994, leaders from across the political spectrum joined the fray, supported by a broad coalition that opposed the Clipper Chip. When computer scientist Matt Blaze discovered a flaw in the system in May 1994, it proved to be the final death blow: the Clipper Chip was dead.

Nonetheless, the idea that the government could find a palatable way to access the keys to encrypted communications lived on throughout the 1990s. Many policymakers held onto hopes that it was possible to securely implement what they called “software key escrow” to preserve access to phone calls, emails, and other communications and storage applications. Under key escrow schemes, a government-certified third party would keep a “key” to every device. But the government’s shift in tactics ultimately proved unsuccessful; the privacy, security, and economic concerns continued to outweigh any potential benefits. By 1997, there was an overwhelming amount of evidence against moving ahead with any key escrow schemes.
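
The mechanics of escrow are simple; the fights were over policy and implementation. A toy sketch of the key-splitting idea in Python (my illustration, not the actual Clipper design, which used the classified Skipjack cipher with each 80-bit unit key divided between two government escrow agents):

```python
# Toy illustration of key escrow, not the actual Clipper design:
# the device key is split into two shares that are individually
# useless; law enforcement with a court order obtains both shares
# from the two escrow agents and XORs them back into the key.
import secrets

def escrow_split(device_key: bytes):
    share1 = secrets.token_bytes(len(device_key))              # random pad
    share2 = bytes(a ^ b for a, b in zip(device_key, share1))  # key XOR pad
    return share1, share2                                      # one per escrow agent

def escrow_recover(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

unit_key = secrets.token_bytes(10)  # Clipper's unit keys were 80 bits
s1, s2 = escrow_split(unit_key)
assert escrow_recover(s1, s2) == unit_key
```

Note that Blaze’s 1994 result didn’t break this splitting math; it showed that the chip’s law-enforcement access field could be forged, so users could keep the strong encryption while defeating the escrow.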

The Second Crypto War is going to be harder and nastier, and I am less optimistic that strong cryptography will win in the short term.

Posted on June 22, 2015 at 1:35 PM • 31 Comments

The Secrecy of the Snowden Documents

Last weekend, the Sunday Times published a front-page story (full text here), citing anonymous British sources claiming that both China and Russia have copies of the Snowden documents. It’s a terrible article, filled with factual inaccuracies and unsubstantiated claims about both Snowden’s actions and the damage caused by his disclosure, and others have thoroughly refuted the story. I want to focus on the actual question: Do countries like China and Russia have copies of the Snowden documents?

I believe the answer is certainly yes, but that it’s almost certainly not Snowden’s fault.

Snowden has claimed that he gave nothing to China while he was in Hong Kong, and brought nothing to Russia. He has said that he encrypted the documents in such a way that even he no longer has access to them, and that he did this before the US government stranded him in Russia. I have no doubt he did as he said, because A) it’s the smart thing to do, and B) it’s easy. All he would have had to do was encrypt the file with a long random key, break the encrypted text up into a few parts and mail them to trusted friends around the world, then forget the key. He probably added some security embellishments, but—regardless—the first sentence of the Times story simply makes no sense: “Russia and China have cracked the top-secret cache of files…”
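
The scheme he describes takes only a few lines of code. A minimal sketch (my reconstruction of the general idea, not Snowden’s actual procedure), using the Python cryptography library:

```python
# "Encrypt, split, forget the key" -- a sketch of the scheme described
# above, not Snowden's actual procedure.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal_and_split(plaintext: bytes, parts: int):
    key = AESGCM.generate_key(bit_length=256)    # long random key
    nonce = os.urandom(12)
    ciphertext = nonce + AESGCM(key).encrypt(nonce, plaintext, None)
    del key   # "forget the key" (truly scrubbing memory takes more care)
    size = -(-len(ciphertext) // parts)          # ceiling division
    return [ciphertext[i:i + size] for i in range(0, len(ciphertext), size)]

# Mail each piece to a different trusted friend; without the discarded
# key, even the complete set of pieces is just AES-GCM ciphertext.
pieces = seal_and_split(b"top-secret cache of files", 3)
```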

But while cryptography is strong, computer security is weak. The vulnerability is not Snowden; it’s everyone who has access to the files.

First, the journalists working with the documents. I’ve handled some of the Snowden documents myself, and even though I’m a paranoid cryptographer, I know how difficult it is to maintain perfect security. It’s been open season on the computers of the journalists Snowden shared documents with since this story broke in June 2013. And while they have been taking extraordinary pains to secure those computers, it’s almost certainly not enough to keep out the world’s intelligence services.

There is a lot of evidence for this belief. We know from other top-secret NSA documents that as far back as 2008, the agency’s Tailored Access Operations group has had extraordinary capabilities to hack into and “exfiltrate” data from specific computers, even if those computers are highly secured and not connected to the Internet.

These NSA capabilities are not unique, and it’s reasonable to assume both that other countries had similar capabilities in 2008 and that everyone has improved their attack techniques in the seven years since then. Last week, we learned that Israel had successfully hacked a wide variety of networks, including that of a major computer antivirus company. We also learned that China successfully hacked US government personnel databases. And earlier this year, Russia successfully hacked the White House’s network. These sorts of stories are now routine.

Which brings me to the second potential source of these documents to foreign intelligence agencies: the US and UK governments themselves. I believe that both China and Russia had access to all the files that Snowden took well before Snowden took them because they’ve penetrated the NSA networks where those files reside. After all, the NSA has been a prime target for decades.

Those government hacking examples above were against unclassified networks, but the nation-state techniques we’re seeing work against classified and unconnected networks as well. In general, it’s far easier to attack a network than it is to defend the same network. This isn’t a statement about willpower or budget; it’s how computer and network security work today. A former NSA deputy director recently said that if we were to score cyber the way we score soccer, the tally would be 462-456 twenty minutes into the game. In other words, it’s all offense and no defense.

In this kind of environment, we simply have to assume that even our classified networks have been penetrated. Remember that Snowden was able to wander through the NSA’s networks with impunity, and that the agency had so few controls in place that the only way they can guess what has been taken is to extrapolate based on what has been published. Does anyone believe that Snowden was the first to take advantage of that lax security? I don’t.

This is why I find allegations that Snowden was working for the Russians or the Chinese simply laughable. What makes you think those countries waited for Snowden? And why do you think someone working for the Russians or the Chinese would go public with their haul?

I am reminded of a comment made to me in confidence by a US intelligence official. I asked him what he was most worried about, and he replied: “I know how deep we are in our enemies’ networks without them having any idea that we’re there. I’m worried that our networks are penetrated just as deeply.”

Seems like a reasonable worry to me.

The open question is which countries have sophisticated enough cyberespionage operations to mount a successful attack against one of the journalists or against the intelligence agencies themselves. And while I have my own mental list, the truth is that I don’t know. But certainly Russia and China are on the list, and it’s just as certain they didn’t have to wait for Snowden to get access to the files. While it might be politically convenient to blame Snowden because, as the Sunday Times reported an anonymous source saying, “we have now seen our agents and assets being targeted,” the NSA and GCHQ should first take a look into their mirrors.

This essay originally appeared on Wired.com.

EDITED TO ADD: I wrote about this essay on Lawfare:

A Twitter user commented: “Surely if agencies accessed computers of people Snowden shared with then is still his fault?”

Yes, that’s right. Snowden took the documents out of the well-protected NSA network and shared with people who don’t have those levels of computer security. Given what we’ve seen of the NSA’s hacking capabilities, I think the odds are zero that other nations were unable to hack at least one of those journalists’ computers. And yes, Snowden has to own that.

The point I make in the article is that those nations didn’t have to wait for Snowden. More specifically, GCHQ claims that “we have now seen our agents and assets being targeted.” One, agents and assets are not discussed in the Snowden documents. Two, it’s two years after Snowden handed those documents to reporters. Whatever is happening, it’s unlikely to be related to Snowden.

EDITED TO ADD: Slashdot thread. Hacker News thread.

EDITED TO ADD (7/13): Two threads on Reddit.

EDITED TO ADD (7/14): Another refutation.

Posted on June 22, 2015 at 6:13 AM • 56 Comments

Hacking Drug Pumps

When you connect hospital drug pumps to the Internet, they’re hackable. This is only surprising to people who aren’t paying attention.

Rios says when he first told Hospira a year ago that hackers could update the firmware on its pumps, the company “didn’t believe it could be done.” Hospira insisted there was “separation” between the communications module and the circuit board that would make this impossible. Rios says technically there is physical separation between the two. But the serial cable provides a bridge to jump from one to the other.

“From an architecture standpoint, it looks like these two modules are separated,” he says. “But when you open the device up, you can see they’re actually connected with a serial cable, and they’re connected in a way that you can actually change the core software on the pump.”

An attacker wouldn’t need physical access to the pump. The communication modules are connected to hospital networks, which are in turn connected to the Internet. “You can talk to that communication module over the network or over a wireless network,” Rios warns.

Hospira knows this, he says, because this is how it delivers firmware updates to its pumps. Yet despite this, he says, the company insists that “the separation makes it so you can’t hurt someone. So we’re going to develop a proof-of-concept that proves that’s not true.”

One of the biggest conceptual problems we have is that something is believed secure until demonstrated otherwise. We need to reverse that: everything should be believed insecure until demonstrated otherwise.

Posted on June 17, 2015 at 2:02 PM • 65 Comments

Research on The Trade-off Between Free Services and Personal Data

New report: “The Tradeoff Fallacy: How marketers are misrepresenting American consumers and opening them up to exploitation.”

New Annenberg survey results indicate that marketers are misrepresenting a large majority of Americans by claiming that Americans give out information about themselves as a tradeoff for benefits they receive. To the contrary, the survey reveals most Americans do not believe that ‘data for discounts’ is a square deal.

The findings also suggest, in contrast to other academics’ claims, that Americans’ willingness to provide personal information to marketers cannot be explained by the public’s poor knowledge of the ins and outs of digital commerce. In fact, people who know more about ways marketers can use their personal information are more likely rather than less likely to accept discounts in exchange for data when presented with a real-life scenario.

Our findings, instead, support a new explanation: a majority of Americans are resigned to giving up their data—and that is why many appear to be engaging in tradeoffs. Resignation occurs when a person believes an undesirable outcome is inevitable and feels powerless to stop it. Rather than feeling able to make choices, Americans believe it is futile to manage what companies can learn about them. Our study reveals that more than half do not want to lose control over their information but also believe this loss of control has already happened.

By misrepresenting the American people and championing the tradeoff argument, marketers give policymakers false justifications for allowing the collection and use of all kinds of consumer data often in ways that the public find objectionable. Moreover, the futility we found, combined with a broad public fear about what companies can do with the data, portends serious difficulties not just for individuals but also—over time—for the institution of consumer commerce.

Some news articles.

Posted on June 17, 2015 at 6:44 AM • 42 Comments

Encrypting Windows Hard Drives

Encrypting your Windows hard drives is trivially easy; choosing which program to use is annoyingly difficult. I still use Windows—yes, I know, don’t even start—and have intimate experience with this issue.

Historically, I used PGP Disk. I used it because I knew and trusted the designers. I even used it after Symantec bought the company. But big companies are always suspect, because there are a lot of ways for governments to manipulate them.

Then, I used TrueCrypt. I used it because it was open source. But the anonymous developers weirdly abdicated in May 2014, around the time Microsoft ended support for Windows XP. I stuck with the program for a while, saying:

For Windows, the options are basically BitLocker, Symantec’s PGP Disk, and TrueCrypt. I choose TrueCrypt as the least bad of all the options.

But soon after that, despite the public audit of TrueCrypt, I bailed for BitLocker.

BitLocker is Microsoft’s native file encryption program. Yes, it’s from a big company. But it was designed by my colleague and friend Niels Ferguson, whom I trust. (Here’s Niels’s statement from 2006 on back doors.) It wasn’t a snap decision; much had changed since 2006. (Here I am in March speculating about an NSA back door in BitLocker.) Specifically, Microsoft made a bunch of changes in BitLocker for Windows 8, including removing something Niels designed called the “Elephant Diffuser.”

The Intercept’s Micah Lee recently recommended BitLocker and got a lot of pushback from the security community. Last week, he published more research and explanation about the trade-offs. It’s worth reading. Microsoft told him they removed the Elephant Diffuser for performance reasons. And I agree with his ultimate conclusion:

Based on what I know about BitLocker, I think it’s perfectly fine for average Windows users to rely on, which is especially convenient considering it comes with many PCs. If it ever turns out that Microsoft is willing to include a backdoor in a major feature of Windows, then we have much bigger problems than the choice of disk encryption software anyway.

Whatever you choose, if trusting a proprietary operating system not to be malicious doesn’t fit your threat model, maybe it’s time to switch to Linux.

Micah also nicely explains how TrueCrypt is becoming antiquated, and not keeping up with Microsoft’s file system changes.

Lately, I am liking an obscure program called BestCrypt, by a Finnish company called Jetico. Micah quotes me:

Considering Schneier has been outspoken for decades about the importance of open source cryptography, I asked if he recommends that other people use BestCrypt, even though it’s proprietary. “I do recommend BestCrypt,” Schneier told me, “because I have met people at the company and I have a good feeling about them. Of course I don’t know for sure; this business is all about trust. But right now, given what I know, I trust them.”

I know it’s not a great argument. But, again, I’m trying to find the least bad option. And in the end, you either have to write your own software or trust someone else to write it for you.

But, yes, this should be an easier decision.

Posted on June 15, 2015 at 6:31 AM • 143 Comments

Eighth Movie-Plot Threat Contest Winner

On April 1, I announced the Eighth Movie-Plot Threat Contest:

I want a movie-plot threat that shows the evils of encryption. (For those who don’t know, a movie-plot threat is a scary-threat story that would make a great movie, but is much too specific to build security policies around. Contest history here.) We’ve long heard about the evils of the Four Horsemen of the Internet Apocalypse—terrorists, drug dealers, kidnappers, and child pornographers. (Or maybe they’re terrorists, pedophiles, drug dealers, and money launderers; I can never remember.) Try to be more original than that. And nothing too science fictional; today’s technology or presumed technology only.

On May 14, I announced the five semifinalists. The votes are in, and the winner is TonyK:

November 6 2020, the morning of the presidential election. This will be the first election where votes can be cast from smart phones and laptops. A record turnout is expected.

There is much excitement as live results are being displayed all over the place. Twitter, television, apps and websites are all displaying the vote counts. It is a close race between the leading candidates until about 9 am when a third candidate starts to rapidly close the gap. He was an unknown independent that had suspected ties to multiple terrorist organizations. There was outrage when he got on to the ballot, but it had quickly died down when he put forth no campaign effort.

By 11 am the independent was predicted to win, and the software called it for him at 3:22 pm.

At 4 the CEO of the software maker was being interviewed on CNN. There were accusations of everything from bribery to bugs to hackers being responsible for the results. Demands were made for audits and recounts. Some were even asking for the data to be made publicly available. The CEO calmly explained that there could be no audit or recount. The system was encrypted end to end and all the votes were cryptographically anonymized.

The interviewer was stunned and sat there in silence. When he eventually spoke, he said “We just elected a terrorist as the President of the United States.”

For the record, Nick P was a close runner-up.

Congratulations, TonyK. Contact me by e-mail, and I’ll send you your fabulous prizes.

Previous contests.

EDITED TO ADD (6/14): Slashdot thread.

Posted on June 13, 2015 at 12:11 PM • 13 Comments

Uh Oh—Robots Are Getting Good with Samurai Swords

It’s Iaido, not sword fighting, but still.

Of course, the two didn’t battle each other, but competed in Iaido tests like cutting mats and flowers in various cross-sectional directions. A highlight was when the robot horizontally sliced string beans measuring just 1cm in thickness! At the end, the ultimate test unfolds: the famous 1,000 iaido sword cut challenge. Ultimately, both man and machine end up victorious, leaving behind a litter of straw and sweat as testament to the very first “Senbongiri battle between the pinnacle of robotics and the peak of humanity.”

Posted on June 12, 2015 at 1:38 PM • 28 Comments

Duqu 2.0

Kaspersky Lab has discovered and publicized details of a new nation-state surveillance malware system, called Duqu 2.0. It’s being attributed to Israel.

There are a lot of details, and I recommend reading them. There was probably a Kerberos zero-day vulnerability involved, allowing the attackers to send updates to Kaspersky’s clients. There’s code specifically targeting anti-virus software, both Kaspersky and others. The system includes anti-sniffer defenses and packet-injection code. It’s designed to reside in RAM so that it can better avoid detection. This is all very sophisticated.

Eugene Kaspersky wrote an op-ed condemning the attack—and making his company look good—and almost, but not quite, comparing attacking his company to attacking the Red Cross:

Historically companies like mine have always played an important role in the development of IT. When the number of Internet users exploded, cybercrime skyrocketed and became a serious threat to the security of billions of Internet users and connected devices. Law enforcement agencies were not prepared for the advent of the digital era, and private security companies were alone in providing protection against cybercrime—both to individuals and to businesses. The security community has been something like a group of doctors for the Internet; we even share some vocabulary with the medical profession: we talk about ‘viruses’, ‘disinfection’, etc. And obviously we’re helping law enforcement develop its skills to fight cybercrime more effectively.

One thing that struck me from a very good Wired article on Duqu 2.0:

Raiu says each of the infections began within three weeks before the P5+1 meetings occurred at that particular location. “It cannot be coincidental,” he says. “Obviously the intention was to spy on these meetings.”

Initially Kaspersky was unsure all of these infections were related, because one of the victims appeared not to be part of the nuclear negotiations. But three weeks after discovering the infection, Raiu says, news outlets began reporting that negotiations were already taking place at the site. “Somehow the attackers knew in advance that this was one of the [negotiation] locations,” Raiu says.

Exactly how the attackers spied on the negotiations is unclear, but the malware contained modules for sniffing WiFi networks and hijacking email communications. But Raiu believes the attackers were more sophisticated than this. “I don’t think their style is to infect people connecting to the WiFi. I think they were after some kind of room surveillance—to hijack the audio through the teleconference or hotel phone systems.”

Those meetings are talks about Iran’s nuclear program, which we previously believed Israel spied on. Look at the details of the attack, though: hack the hotel’s Internet, get into the phone system, and turn the hotel phones into room bugs. Very clever.

Posted on June 12, 2015 at 6:18 AM • 59 Comments

Security and Human Behavior (SHB 2015)

Earlier this week, I was at the eighth Workshop on Security and Human Behavior.

This is a small invitational gathering of people studying various aspects of the human side of security. The fifty people in the room include psychologists, computer security researchers, sociologists, behavioral economists, philosophers, political scientists, lawyers, biologists, anthropologists, business school professors, neuroscientists, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

I call this the most intellectually stimulating two days of my year. The goal is discussion amongst the group. We do that by putting everyone on panels, but only letting each person talk for 10 minutes. The rest of the 90-minute panel is left for discussion.

Ross Anderson liveblogged the talks. Bob Sullivan wrote a piece on some of the presentations on family surveillance.

Here are my posts on the first, second, third, fourth, fifth, sixth, and seventh SHB workshops. Follow those links to find summaries, papers, and audio recordings of the workshops.

Posted on June 11, 2015 at 1:24 PM • 3 Comments

Reassessing Airport Security

News that the Transportation Security Administration missed a whopping 95% of guns and bombs in recent airport security “red team” tests was justifiably shocking. It’s clear that we’re not getting value for the $7 billion we’re paying the TSA annually.

But there’s another conclusion, inescapable and disturbing to many, but good news all around: we don’t need $7 billion worth of airport security. These results demonstrate that there isn’t much risk of airplane terrorism, and we should ratchet security down to pre-9/11 levels.

We don’t need perfect airport security. We just need security that’s good enough to dissuade someone from building a plot around evading it. If you’re caught with a gun or a bomb, the TSA will detain you and call the FBI. Under those circumstances, even a medium chance of getting caught is enough to dissuade a sane terrorist. A 95% failure rate is too high, but a 20% one isn’t.

For those of us who have been watching the TSA, the 95% number wasn’t that much of a surprise. The TSA has been failing these sorts of tests since its inception: failures in 2003, a 91% failure rate at Newark Liberty International in 2006, a 75% failure rate at Los Angeles International in 2007, more failures in 2008. And those are just the public test results; I’m sure there are many more similarly damning reports the TSA has kept secret out of embarrassment.

Previous TSA excuses were that the results were isolated to a single airport, or not realistic simulations of terrorist behavior. That almost certainly wasn’t true then, but the TSA can’t even argue that now. The current test was conducted at many airports, and the testers didn’t use super-stealthy ninja-like weapon-hiding skills.

This is consistent with what we know anecdotally: the TSA misses a lot of weapons. Pretty much everyone I know has inadvertently carried a knife through airport security, and some people have told me about guns they mistakenly carried on airplanes. The TSA publishes statistics about how many guns it detects; last year, it was 2,212. This doesn’t mean the TSA missed 44,000 guns last year; a weapon that is mistakenly left in a carry-on bag is going to be easier to detect than a weapon deliberately hidden in the same bag. But we now know that it’s not hard to deliberately sneak a weapon through.
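
Where does the 44,000 come from? It is the naive extrapolation from the two numbers above, shown here as back-of-the-envelope arithmetic (my reconstruction of the implied calculation):

```python
# If the 2,212 guns the TSA detected represent only the ~5% that get
# caught (per the 95% red-team failure rate), the naive extrapolation
# for the total number carried through checkpoints is:
detected = 2212
detection_rate = 0.05
print(round(detected / detection_rate))  # 44240, i.e. roughly 44,000
```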

So why is the failure rate so high? The report doesn’t say, and I hope the TSA is going to conduct a thorough investigation as to the causes. My guess is that it’s a combination of things. Security screening is an incredibly boring job, and almost all alerts are false alarms. It’s very hard for people to remain vigilant in this sort of situation, and sloppiness is inevitable.

There are also technology failures. We know that current screening technologies are terrible at detecting the plastic explosive PETN—that’s what the underwear bomber had—and that a disassembled weapon has an excellent chance of getting through airport security. We know that some items allowed through airport security make excellent weapons.

The TSA is failing to defend us against the threat of terrorism. The only reason they’ve been able to get away with the scam for so long is that there isn’t much of a threat of terrorism to defend against.

Even with all these actual and potential failures, there have been no successful terrorist attacks against airplanes since 9/11. If there were lots of terrorists just waiting for us to let our guard down to destroy American planes, we would have seen attacks—attempted or successful—after all these years of screening failures. No one has hijacked a plane with a knife or a gun since 9/11. Not a single plane has blown up due to terrorism.

Terrorists are much rarer than we think, and launching a terrorist plot is much more difficult than we think. I understand this conclusion is counterintuitive, and contrary to the fearmongering we hear every day from our political leaders. But it’s what the data shows.

This isn’t to say that we can do away with airport security altogether. We need some security to dissuade the stupid or impulsive, but any more is a waste of money. The very rare smart terrorists are going to be able to bypass whatever we implement or choose an easier target. The more common stupid terrorists are going to be stopped by whatever measures we implement.

Smart terrorists are very rare, and we’re going to have to deal with them in two ways. One, we need vigilant passengers—that’s what protected us from both the shoe and the underwear bombers. And two, we’re going to need good intelligence and investigation—that’s how we caught the liquid bombers in their London apartments.

The real problem with airport security is that it’s only effective if the terrorists target airplanes. I generally am opposed to security measures that require us to correctly guess the terrorists’ tactics and targets. If we detect solids, the terrorists will use liquids. If we defend airports, they bomb movie theaters. It’s a lousy game to play, because we can’t win.

We should demand better results out of the TSA, but we should also recognize that the actual risk doesn’t justify their $7 billion budget. I’d rather see that money spent on intelligence and investigation—security that doesn’t require us to guess the next terrorist tactic and target, and works regardless of what the terrorists are planning next.

This essay previously appeared on CNN.com.

Posted on June 11, 2015 at 6:10 AM • 50 Comments

Should Companies Do Most of Their Computing in the Cloud? (Part 3)

Cloud computing is the future of computing. Specialization and outsourcing make society more efficient and scalable, and computing isn’t any different.

But why aren’t we there yet? Why don’t we, in Simon Crosby’s words, “get on with it”? I have discussed some reasons: loss of control, new and unquantifiable security risks, and—above all—a lack of trust. It is not enough to simply discount them, as the number of companies not embracing the cloud shows. It is more useful to consider what we need to do to bridge the trust gap.

A variety of mechanisms can create trust. When I outsourced my food preparation to a restaurant last night, it never occurred to me to worry about food safety. That blind trust is largely created by government regulation. It ensures that our food is safe to eat, just as it ensures our paint will not kill us and our planes are safe to fly. It is all well and good for Mr. Crosby to write that cloud companies “will invest heavily to ensure that they can satisfy complex…regulations,” but this presupposes that we have comprehensive regulations. Right now, it is largely a free-for-all out there, and it can be impossible to see how security in the cloud works. When robust consumer-safety regulations underpin outsourcing, people can trust the systems.

This is true for any kind of outsourcing. Attorneys, tax preparers and doctors are licensed and highly regulated, by both governments and professional organizations. We trust our doctors to cut open our bodies because we know they are not just making it up. We need a similar professionalism in cloud computing.

Reputation is another big part of trust. We rely on both word-of-mouth and professional reviews to decide on a particular car or restaurant. But none of that works without considerable transparency. Security is an example. Mr. Crosby writes: “Cloud providers design security into their systems and dedicate enormous resources to protect their customers.” Maybe some do; many certainly do not. Without more transparency, as a cloud customer you cannot tell the difference. Try asking either Amazon Web Services or Salesforce.com to see the details of their security arrangements, or even to indemnify you for data breaches on their networks. It is even worse for free consumer cloud services like Gmail and iCloud.

We need to trust cloud computing’s performance, reliability and security. We need open standards, rules about being able to remove our data from cloud services, and the assurance that we can switch cloud services if we want to.

We also need to trust who has access to our data, and under what circumstances. One commenter wrote: “After Snowden, the idea of doing your computing in the cloud is preposterous.” He isn’t making a technical argument: a typical corporate data center isn’t any better defended than a cloud-computing one. He is making a legal argument. Under American law—and similar laws in other countries—the government can force your cloud provider to give up your data without your knowledge and consent. If your data is in your own data center, you at least get to see a copy of the court order.

Corporate surveillance matters, too. Many cloud companies mine and sell your data or use it to manipulate you into buying things. Blocking broad surveillance by both governments and corporations is critical to trusting the cloud, as is eliminating secret laws and orders regarding data access.

In the future, we will do all our computing in the cloud: both commodity computing and computing that requires personalized expertise. But this future will only come to pass when we manage to create trust in the cloud.

This essay previously appeared on the Economist website, as part of a debate on cloud computing. It’s the third of three essays. Here are Parts 1 and 2. Visit the site for the other side of the debate and other commentary.

Posted on June 10, 2015 at 3:27 PM • 20 Comments

Should Companies Do Most of Their Computing in the Cloud? (Part 2)

Let me start by describing two approaches to the cloud.

Most of the students I meet at Harvard University live their lives in the cloud. Their e-mail, documents, contacts, calendars, photos and everything else are stored on servers belonging to large internet companies in America and elsewhere. They use cloud services for everything. They converse and share on Facebook and Instagram and Twitter. They seamlessly switch among their laptops, tablets and phones. It wouldn’t be a stretch to say that they don’t really care where their computers end and the internet begins, and they are used to having immediate access to all of their data on the closest screen available.

In contrast, I personally use the cloud as little as possible. My e-mail is on my own computer—I am one of the last Eudora users—and not at a web service like Gmail or Hotmail. I don’t store my contacts or calendar in the cloud. I don’t use cloud backup. I don’t have personal accounts on social networking sites like Facebook or Twitter. (This makes me a freak, but highly productive.) And I don’t use many software and hardware products that I would otherwise really like, because they force you to keep your data in the cloud: Trello, Evernote, Fitbit.

Why don’t I embrace the cloud in the same way my younger colleagues do? There are three reasons, and they parallel the trade-offs that corporations facing the same decisions are going to have to make.

The first is control. I want to be in control of my data, and I don’t want to give it up. I have the ability to keep control by running my own services my way. Most of those students lack the technical expertise, and have no choice. They also want services that are only available on the cloud, and have no choice. I have deliberately made my life harder, simply to keep that control. Similarly, companies are going to decide whether or not they want to—or even can—keep control of their data.

The second is security. I talked about this at length in my opening statement. Suffice it to say that I am extremely paranoid about cloud security, and think I can do better. Lots of those students don’t care very much. Again, companies are going to have to make the same decision about who is going to do a better job, and depending on their own internal resources, they might make a different decision.

The third is the big one: trust. I simply don’t trust large corporations with my data. I know that, at least in America, they can sell my data at will and disclose it to whomever they want. It can be made public inadvertently by their lax security. My government can get access to it without a warrant. Again, lots of those students don’t care. And again, companies are going to have to make the same decisions.

Like any outsourcing relationship, cloud services are based on trust. If anything, that is what you should take away from this exchange. Try to do business only with trustworthy providers, and put contracts in place to ensure their trustworthiness. Push for government regulations that establish a baseline of trustworthiness for cases where you don’t have that negotiation power. Fight laws that give governments secret access to your data in the cloud. Cloud computing is the future of computing; we need to ensure that it is secure and reliable.

Despite my personal choices, my belief is that, in most cases, the benefits of cloud computing outweigh the risks. My company, Resilient Systems, uses cloud services both to run the business and to host our own products that we sell to other companies. For us it makes the most sense. But we spend a lot of effort ensuring that we use only trustworthy cloud providers, and that we are a trustworthy cloud provider to our own customers.

This essay previously appeared on the Economist website, as part of a debate on cloud computing. It’s the second of three essays. Here are Parts 1 and 3. Visit the site for the other side of the debate and other commentary.

Posted on June 10, 2015 at 11:27 AM31 Comments

Should Companies Do Most of Their Computing in the Cloud? (Part 1)

Yes. No. Yes. Maybe. Yes. Okay, it’s complicated.

The economics of cloud computing are compelling. For companies, the lower operating costs, the lack of capital expenditure, the ability to quickly scale and the ability to outsource maintenance are just some of the benefits. Computing is infrastructure, like cleaning, payroll, tax preparation and legal services. All of these are outsourced. And computing is becoming a utility, like power and water. Everyone does their power generation and water distribution “in the cloud.” Why should IT be any different?

Two reasons. The first is that IT is complicated: it is more like payroll services than like power generation. What this means is that you have to choose your cloud providers wisely, and make sure you have good contracts in place with them. You want to own your data, and be able to download that data at any time. You want assurances that your data will not disappear if the cloud provider goes out of business or discontinues your service. You want reliability and availability assurances, tech support assurances, whatever you need.

The downside is that you will have limited customization options. Cloud computing is cheaper because of economies of scale, and—like any outsourced task—you tend to get what you get. A restaurant with a limited menu is cheaper than a personal chef who can cook anything you want. Fewer options at a much cheaper price: it’s a feature, not a bug.

The second reason that cloud computing is different is security. This is not an idle concern. IT security is difficult under the best of circumstances, and security risks are one of the major reasons it has taken so long for companies to embrace the cloud. And here it really gets complicated.

On the pro-cloud side, cloud providers have the potential to be far more secure than the corporations whose data they are holding. The reason is the same economies of scale. For most companies, the cloud provider is likely to have better security than they do—by a lot. All but the largest companies benefit from the concentration of security expertise at the cloud provider.

On the anti-cloud side, the cloud provider might not meet your legal needs. You might have regulatory requirements that the cloud provider cannot meet. Your data might be stored in a country with laws you do not like—or cannot legally use. Many foreign companies are thinking twice about putting their data inside America, because of laws allowing the government to get at that data in secret. Other countries around the world have even more draconian government-access rules.

Also on the anti-cloud side, a large cloud provider is a juicier target. Whether or not this matters depends on your threat profile. Criminals already steal far more credit card numbers than they can monetize; they are more likely to go after the smaller, less-defended networks. But a national intelligence agency will prefer the one-stop shop a cloud provider affords. That is why the NSA broke into Google’s data centers.

Finally, the loss of control is a security risk. Moving your data into the cloud means that someone else is controlling that data. This is fine if they do a good job, but terrible if they do not. And for free cloud services, that loss of control can be critical. The cloud provider can delete your data on a whim, if it believes you have violated some term of service that you never even knew existed. And you have no recourse.

As a business, you need to weigh the benefits against the risks. And that will depend on things like the type of cloud service you’re considering, the type of data that’s involved, how critical the service is, how easily you could do it in house, the size of your company and the regulatory environment, and so on.

This essay previously appeared on the Economist website, as part of a debate on cloud computing. It’s the first of three essays. Here are Parts 2 and 3. Visit the site for the other side of the debate and other commentary.

Posted on June 10, 2015 at 6:43 AM45 Comments

The Effects of Near Misses on Risk Decision-Making

This is interesting research: “How Near-Miss Events Amplify or Attenuate Risky Decision Making,” by Catherine H. Tinsley, Robin L. Dillon, and Matthew A. Cronin.

In the aftermath of many natural and man-made disasters, people often wonder why those affected were underprepared, especially when the disaster was the result of known or regularly occurring hazards (e.g., hurricanes). We study one contributing factor: prior near-miss experiences. Near misses are events that have some nontrivial expectation of ending in disaster but, by chance, do not. We demonstrate that when near misses are interpreted as disasters that did not occur, people illegitimately underestimate the danger of subsequent hazardous situations and make riskier decisions (e.g., choosing not to engage in mitigation activities for the potential hazard). On the other hand, if near misses can be recognized and interpreted as disasters that almost happened, this will counter the basic “near-miss” effect and encourage more mitigation. We illustrate the robustness of this pattern across populations with varying levels of real expertise with hazards and different hazard contexts (household evacuation for a hurricane, Caribbean cruises during hurricane season, and deep-water oil drilling). We conclude with ideas to help people manage and communicate about risk.

Another paper.

Posted on June 9, 2015 at 8:15 AM23 Comments

Surveillance Law and Surveillance Studies

Interesting paper by Julie Cohen:

Abstract: The dialogue between law and Surveillance Studies has been complicated by a mutual misrecognition that is both theoretical and temperamental. Legal scholars are inclined to consider surveillance simply as the (potential) subject of regulation, while scholarship in Surveillance Studies often seems not to grapple with the ways in which legal processes and doctrines are sites of contestation over both the modalities and the limits of surveillance. Put differently, Surveillance Studies takes notice of what law does not—the relationship between surveillance and social shaping—but glosses over what legal scholarship rightly recognizes as essential—the processes of definition and compromise that regulators and other interested parties must navigate, and the ways that legal doctrines and constructs shape those processes. This article explores the fault lines between law and Surveillance Studies and considers the potential for more productive confrontation and dialogue in ways that leverage the strengths of each tradition.

Posted on June 8, 2015 at 12:48 PM8 Comments

Tracking People By Smart Phone Accelerometers

Interesting research: “We Can Track You If You Take the Metro: Tracking Metro Riders Using Accelerometers on Smartphones”:

Abstract: Motion sensors (e.g., accelerometers) on smartphones have been demonstrated to be a powerful side channel for attackers to spy on users’ inputs on touchscreen. In this paper, we reveal another motion accelerometer-based attack which is particularly serious: when a person takes the metro, a malicious application on her smartphone can easily use accelerometer readings to trace her. We first propose a basic attack that can automatically extract metro-related data from a large amount of mixed accelerometer readings, and then use an ensemble interval classifier built from supervised learning to infer the riding intervals of the user. While this attack is very effective, the supervised learning part requires the attacker to collect labeled training data for each station interval, which is a significant amount of effort. To improve the efficiency of our attack, we further propose a semi-supervised learning approach, which only requires the attacker to collect labeled data for a very small number of station intervals with obvious characteristics. We conduct real experiments on a metro line in a major city. The results show that the inferring accuracy could reach 89% and 92% if the user takes the metro for 4 and 6 stations, respectively.

The Internet of Things is the Internet of sensors. I’m sure all kinds of surveillance are possible from all kinds of sensing inputs.
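
The machine-learning core of the attack is nothing exotic. Here is a minimal sketch in Python of the supervised-learning stage, with synthetic traces standing in for real accelerometer data; the paper’s full pipeline also has to extract metro segments from mixed readings, and its semi-supervised variant needs far fewer labeled rides. The names and features here are illustrative, not the authors’ code:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def interval_features(mag):
        """Summarize one ride interval of accelerometer magnitudes."""
        return [mag.mean(), mag.std(), mag.max(),
                np.abs(np.diff(mag)).mean()]  # crude smoothness measure

    # Training data: labeled rides between known station pairs. Each
    # synthetic trace's noise level stands in for the vibration profile
    # of that stretch of track.
    rng = np.random.default_rng(0)
    X_train = [interval_features(rng.normal(9.8, 0.1 * (i + 1), 500))
               for i in range(3) for _ in range(20)]
    y_train = [f"station {i}-{i + 1}" for i in range(3) for _ in range(20)]

    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # The malicious app records a victim's ride and asks: which interval?
    victim_trace = rng.normal(9.8, 0.2, 500)
    print(clf.predict([interval_features(victim_trace)]))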

Posted on June 8, 2015 at 6:09 AM34 Comments

Friday Squid Blogging: Giant Squid Lore

Legends of giant squid go back centuries:

In his book “The Search for the Giant Squid” marine biologist Richard Ellis notes that “There is probably no apparition more terrifying than a gigantic, saucer-eyed creature of the depths… Even the man-eating shark pales by comparison to such a horror… An animal that can reach a length of 60 feet is already intimidating, and if it happens to have eight squirmy arms, two feeding tentacles, gigantic unblinking eyes, and a gnashing beak, it becomes the stuff of nightmares.”

[…]

It’s a Lovecraftian horror that resonates in the human psyche, though the giant squid are not aggressive against humans and typically feed on other squid and deep-sea fish.

It’s likely that the giant squid served as the basis for centuries of sea monster reports. Ancient sea stories told of the fearsome Kraken, a huge many-tentacled beast, said to attack ships and sailors on the high seas (known to modern audiences in Liam Neeson’s “Clash of the Titans” command to “Release the Kraken!”).

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on June 5, 2015 at 4:51 PM174 Comments

NSA Running a Massive IDS on the Internet Backbone

The latest story from the Snowden documents, co-published by the New York Times and ProPublica, shows that the NSA is operating a signature-based intrusion detection system on the Internet backbone:

In mid-2012, Justice Department lawyers wrote two secret memos permitting the spy agency to begin hunting on Internet cables, without a warrant and on American soil, for data linked to computer intrusions originating abroad—including traffic that flows to suspicious Internet addresses or contains malware, the documents show.

The Justice Department allowed the agency to monitor only addresses and “cybersignatures”—patterns associated with computer intrusions—that it could tie to foreign governments. But the documents also note that the N.S.A. sought to target hackers even when it could not establish any links to foreign powers.

To me, the big deal here is 1) the NSA is doing this without a warrant, and 2) that the policy change happened in secret, without any public policy debate.

The effort is the latest known expansion of the N.S.A.’s warrantless surveillance program, which allows the government to intercept Americans’ cross-border communications if the target is a foreigner abroad. While the N.S.A. has long searched for specific email addresses and phone numbers of foreign intelligence targets, the Obama administration three years ago started allowing the agency to search its communications streams for less-identifying Internet protocol addresses or strings of harmful computer code.

[…]

To carry out the orders, the F.B.I. negotiated in 2012 to use the N.S.A.’s system for monitoring Internet traffic crossing “chokepoints operated by U.S. providers through which international communications enter and leave the United States,” according to a 2012 N.S.A. document. The N.S.A. would send the intercepted traffic to the bureau’s “cyberdata repository” in Quantico, Virginia.
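
To picture what such a system does mechanically, here is a minimal sketch of signature-based detection in Python. The signatures and payloads are invented for illustration; a real deployment matches classified signatures against raw packets at backbone scale, but the core logic is just pattern matching:

    import re

    # Hypothetical signature set: a flagged address range and a made-up
    # malware check-in URL. Real "cybersignatures" would be classified.
    SIGNATURES = {
        "flagged-address": re.compile(r"\b198\.51\.100\.\d{1,3}\b"),
        "implant-beacon":  re.compile(r"/gate\.php\?id=[0-9a-f]{16}", re.I),
    }

    def inspect(payload):
        """Return the names of all signatures this payload matches."""
        return [name for name, sig in SIGNATURES.items() if sig.search(payload)]

    # Scan a stream of (made-up) payloads and alert on matches.
    for payload in ["GET /index.html HTTP/1.1",
                    "GET /gate.php?id=00ff00ff00ff00ff HTTP/1.1",
                    "POST /upload X-Forwarded-For: 198.51.100.7"]:
        hits = inspect(payload)
        if hits:
            print("ALERT:", ", ".join(hits), "->", payload)

The matching itself is mundane; the policy questions (whose traffic gets scanned, and under what authority) are what make this news.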

Ninety pages of NSA documents accompany the article. Here is a single OCRed PDF of them all.

Jonathan Mayer was consulted on the article. He gives more details on his blog, which I recommend you all read.

In my view, the key takeaway is this: for over a decade, there has been a public policy debate about what role the NSA should play in domestic cybersecurity. The debate has largely presupposed that the NSA’s domestic authority is narrowly circumscribed, and that DHS and DOJ play a far greater role. Today, we learn that assumption is incorrect. The NSA already asserts broad domestic cybersecurity powers. Recognizing the scope of the NSA’s authority is particularly critical for pending legislation.

This is especially important for pending information sharing legislation, which Mayer explains.

The other big news is that ProPublica’s Julia Angwin is working with Laura Poitras on the Snowden documents. I expect that this isn’t the last article we’re going to see.

EDITED TO ADD: Others are writing about these documents. Shane Harris explains how the NSA and FBI are working together on Internet surveillance. Benjamin Wittes says that the story is wrong, that “combating overseas cybersecurity threats from foreign governments” is exactly what the NSA is supposed to be doing, and that they don’t need a warrant for any of that. And Marcy Wheeler points out that she has been saying for years that the NSA has been using Section 702 to justify Internet surveillance.

EDITED TO ADD (6/5): Charlie Savage responds to Ben Wittes.

Posted on June 5, 2015 at 7:42 AM45 Comments

Yet Another New Biometric: Brainprints

New research:

In “Brainprint,” a newly published study in academic journal Neurocomputing, researchers from Binghamton University observed the brain signals of 45 volunteers as they read a list of 75 acronyms, such as FBI and DVD. They recorded the brain’s reaction to each group of letters, focusing on the part of the brain associated with reading and recognizing words, and found that participants’ brains reacted differently to each acronym, enough that a computer system was able to identify each volunteer with 94 percent accuracy. The results suggest that brainwaves could be used by security systems to verify a person’s identity.

I have no idea what the false negatives are, or how robust this biometric is over time, but the article makes the important point that unlike most biometrics this one can be updated.

“If someone’s fingerprint is stolen, that person can’t just grow a new finger to replace the compromised fingerprint—the fingerprint for that person is compromised forever. Fingerprints are ‘non-cancellable.’ Brainprints, on the other hand, are potentially cancellable. So, in the unlikely event that attackers were actually able to steal a brainprint from an authorized user, the authorized user could then ‘reset’ their brainprint,” Laszlo said.

Presumably the resetting involves a new set of acronyms.
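
As a guess at the general shape of such a system (this sketch is mine, not the paper’s method): enroll each user by recording an averaged brain-response template to the acronym list, then identify by correlating a fresh recording against the enrolled templates.

    import numpy as np

    rng = np.random.default_rng(1)
    USERS = ["alice", "bob", "carol"]

    # Enrollment: one averaged ERP template per user. Synthetic here; a
    # real system would average many trials and reject noisy recordings.
    templates = {user: rng.normal(0.0, 1.0, 256) for user in USERS}

    def identify(recording):
        """Return the enrolled user whose template best correlates."""
        scores = {user: np.corrcoef(recording, t)[0, 1]
                  for user, t in templates.items()}
        return max(scores, key=scores.get)

    # A noisy re-recording of bob's response should still match bob.
    probe = templates["bob"] + rng.normal(0.0, 0.3, 256)
    print(identify(probe))

In this framing, cancelling a brainprint just means discarding the old templates and re-enrolling the user on a new acronym set.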

Author’s self-archived version of the paper (pdf).

Posted on June 4, 2015 at 10:36 AM29 Comments

2015 EPIC Champions of Freedom Dinner

Monday night, EPIC—that’s the Electronic Privacy Information Center—had its annual Champions of Freedom Dinner. I tell you this for two reasons. One, I received a Lifetime Achievement Award. (I was incredibly honored to receive this, and I thank EPIC profusely.) And two, Apple’s CEO Tim Cook received a Champion of Freedom Award. His acceptance speech, delivered remotely, was amazing.

Posted on June 3, 2015 at 4:27 PM30 Comments

TSA Not Detecting Weapons at Security Checkpoints

This isn’t good:

An internal investigation of the Transportation Security Administration revealed security failures at dozens of the nation’s busiest airports, where undercover investigators were able to smuggle mock explosives or banned weapons through checkpoints in 95 percent of trials, ABC News has learned.

The series of tests were conducted by Homeland Security Red Teams who pose as passengers, setting out to beat the system.

According to officials briefed on the results of a recent Homeland Security Inspector General’s report, TSA agents failed 67 out of 70 tests, with Red Team members repeatedly able to get potential weapons through checkpoints.

The Acting Director of the TSA has been reassigned:

Homeland Security Secretary Jeh Johnson said in a statement Monday that Melvin Carraway would be moved to the Office of State and Local Law Enforcement at DHS headquarters “effective immediately.”

This is bad. I have often made the point that airport security doesn’t have to be 100% effective in detecting guns and bombs. Here I am in 2008:

If you’re caught at airport security with a bomb or a gun, the screeners aren’t just going to take it away from you. They’re going to call the police, and you’re going to be stuck for a few hours answering a lot of awkward questions. You may be arrested, and you’ll almost certainly miss your flight. At best, you’re going to have a very unpleasant day.

This is why articles about how screeners don’t catch every—or even a majority—of guns and bombs that go through the checkpoints don’t bother me. The screeners don’t have to be perfect; they just have to be good enough. No terrorist is going to base his plot on getting a gun through airport security if there’s a decent chance of getting caught, because the consequences of getting caught are too great.

A 95% failure rate is bad, because you can build a plot around sneaking something past the TSA.
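
To see why, run the numbers. The Red Team results imply a per-checkpoint detection rate of about 4% (3 catches in 70 tries), at which point a single smuggling attempt almost certainly succeeds. A quick back-of-the-envelope calculation, with illustrative detection rates:

    # P(at least one of n independent smuggling attempts gets through),
    # given a per-checkpoint detection rate. Rates are illustrative; the
    # IG report's raw numbers (3 caught out of 70) imply about 4%.
    def plot_succeeds(detection_rate, attempts):
        return 1 - detection_rate ** attempts

    for rate in (0.04, 0.50, 0.95):
        print(f"detection {rate:.0%}: "
              f"1 attempt {plot_succeeds(rate, 1):.1%}, "
              f"3 attempts {plot_succeeds(rate, 3):.1%}")

At a 95% detection rate, a three-attempt plot still fails about 86% of the time; at 4%, even one attempt succeeds 96% of the time. That is the difference between a checkpoint that deters and one you can plan around.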

I don’t know the details, or what failed. Was it the procedures or training? Was it the technology? Was it the PreCheck program? I hope we’ll learn details, and this won’t be swallowed in the great maw of government secrecy.

EDITED TO ADD: Quip:

David Burge @iowahawkblog

At $8 billion per year, the TSA is the most expensive theatrical production in history.

Posted on June 2, 2015 at 7:37 AM58 Comments

US Also Tried Stuxnet Against North Korea

According to a Reuters article, the US military tried to launch Stuxnet against North Korea in addition to Iran:

According to one U.S. intelligence source, Stuxnet’s developers produced a related virus that would be activated when it encountered Korean-language settings on an infected machine.

But U.S. agents could not access the core machines that ran Pyongyang’s nuclear weapons program, said another source, a former high-ranking intelligence official who was briefed on the program.

The official said the National Security Agency-led campaign was stymied by North Korea’s utter secrecy, as well as the extreme isolation of its communications systems.

Posted on June 1, 2015 at 6:33 AM29 Comments
