
The Effectiveness of Privacy Audits

This study concludes that there is a benefit to forcing companies to undergo privacy audits: “The results show that there are empirical regularities consistent with the privacy disclosures in the audited financial statements having some effect. Companies disclosing privacy risks are less likely to incur a breach of privacy related to unintentional disclosure of privacy information; while companies suffering a breach of privacy related to credit cards are more likely to disclose privacy risks afterwards. Disclosure after a breach is negatively related to privacy breaches related to hacking, and disclosure before a breach is positively related to breaches concerning insider trading.”

Posted on July 9, 2013 at 12:17 PM

Another Perspective on the Value of Privacy

A philosophical perspective:

But while Descartes’s overall view has been rightly rejected, there is something profoundly right about the connection between privacy and the self, something that recent events should cause us to appreciate. What is right about it, in my view, is that to be an autonomous person is to be capable of having privileged access (in the two senses defined above) to information about your psychological profile: your hopes, dreams, beliefs and fears. A capacity for privacy is a necessary condition of autonomous personhood.

To get a sense of what I mean, imagine that I could telepathically read all your conscious and unconscious thoughts and feelings—I could know about them in as much detail as you know about them yourself—and further, that you could not, in any way, control my access. You don’t, in other words, share your thoughts with me; I take them. The power I would have over you would of course be immense. Not only could you not hide from me, I would know instantly a great amount about how the outside world affects you, what scares you, what makes you act in the ways you do. And that means I could not only know what you think, I could to a large extent control what you do.

That is the political worry about the loss of privacy: it threatens a loss of freedom. And the worry, of course, is not merely theoretical. Targeted ad programs, like Google’s, which track your Internet searches for the purpose of sending you ads that reflect your interests, can create deeply complex psychological profiles—especially when one conducts searches for emotional or personal advice information: Am I gay? What is terrorism? What is atheism? If the government or some entity should request the identity of the person making these searches for national security purposes, we’d be on the way to having a real-world version of our thought experiment.

But the loss of privacy doesn’t just threaten political freedom. Return for a moment to our thought experiment where I telepathically know all your thoughts whether you like it or not. From my perspective—the perspective of the knower—your existence as a distinct person would begin to shrink. Our relationship would be so lopsided that there might cease to be, at least to me, anything subjective about you. As I learn what reactions you will have to stimuli, why you do what you do, you will become like any other object to be manipulated. You would be, as we say, dehumanized.

Posted on July 9, 2013 at 6:24 AM

Big Data Surveillance Results in Bad Policy

Evgeny Morozov makes a point about surveillance and big data: it just looks for useful correlations without worrying about causes, and leads people to implement “fixes” based simply on those correlations—rather than understanding and correcting the underlying causes.

As the media academic Mark Andrejevic points out in Infoglut, his new book on the political implications of information overload, there is an immense—but mostly invisible—cost to the embrace of Big Data by the intelligence community (and by just about everyone else in both the public and private sectors). That cost is the devaluation of individual and institutional comprehension, epitomized by our reluctance to investigate the causes of actions and jump straight to dealing with their consequences. But, argues Andrejevic, while Google can afford to be ignorant, public institutions cannot.

“If the imperative of data mining is to continue to gather more data about everything,” he writes, “its promise is to put this data to work, not necessarily to make sense of it. Indeed, the goal of both data mining and predictive analytics is to generate useful patterns that are far beyond the ability of the human mind to detect or even explain.” In other words, we don’t need to inquire why things are the way they are as long as we can affect them to be the way we want them to be. This is rather unfortunate. The abandonment of comprehension as a useful public policy goal would make serious political reforms impossible.

Forget terrorism for a moment. Take more mundane crime. Why does crime happen? Well, you might say that it’s because youths don’t have jobs. Or you might say that’s because the doors of our buildings are not fortified enough. Given some limited funds to spend, you can either create yet another national employment program or you can equip houses with even better cameras, sensors, and locks. What should you do?

If you’re a technocratic manager, the answer is easy: Embrace the cheapest option. But what if you are that rare breed, a responsible politician? Just because some crimes have now become harder doesn’t mean that the previously unemployed youths have finally found employment. Surveillance cameras might reduce crime—even though the evidence here is mixed—but no studies show that they result in greater happiness of everyone involved. The unemployed youths are still as stuck as they were before—only that now, perhaps, they displace anger onto one another. On this reading, fortifying our streets without inquiring into the root causes of crime is a self-defeating strategy, at least in the long run.

Big Data is very much like the surveillance camera in this analogy: Yes, it can help us avoid occasional jolts and disturbances and, perhaps, even stop the bad guys. But it can also blind us to the fact that the problem at hand requires a more radical approach. Big Data buys us time, but it also gives us a false illusion of mastery.

Posted on July 8, 2013 at 11:50 AM

Protecting E-Mail from Eavesdropping

In the wake of the Snowden NSA documents, reporters have been asking me whether encryption can solve the problem. Leaving aside the fact that much of what the NSA is collecting can’t be encrypted by the user—telephone metadata, e-mail headers, phone calling records, e-mail you’re reading from a phone or tablet or cloud provider, anything you post on Facebook—it’s hard to give good advice.

In theory, an e-mail encryption program will protect you, but the reality is much more complicated.

  • The program has to be vulnerability-free. If there is some back door in the program that bypasses, or weakens, the encryption, it’s not secure. It’s very difficult, almost impossible, to verify that a program is vulnerability-free.
  • The user has to choose a secure password. Luckily, there’s advice on how to do this.
  • The password has to be managed securely. The user can’t store it in a file somewhere. If he’s worried about security for after the FBI has arrested him and searched his house, he shouldn’t write it on a piece of paper, either.
  • Actually, he should understand the threat model he’s operating under. Is it the NSA trying to eavesdrop on everything, or an FBI investigation that specifically targets him—or a targeted attack, like dropping a Trojan on his computer, that bypasses e-mail encryption entirely?

This is simply too much for the poor reporter, who wants an easy-to-transcribe answer.

We’ve known how to send cryptographically secure e-mail since the early 1990s. Twenty years later, we’re still working on the security engineering of e-mail programs. And if the NSA is eavesdropping on encrypted e-mail, and if the FBI is decrypting messages from suspects’ hard drives, they’re both breaking the engineering, not the underlying cryptographic algorithms.

On the other hand, the two adversaries can be very different. The NSA has to process a ginormous amount of traffic. It’s the “drinking from a fire hose” problem; they cannot afford to devote a lot of time to decrypting everything, because they simply don’t have the computing resources. There’s just too much data to collect. In these situations, even a modest level of encryption is enough—until you are specifically targeted. This is why the NSA saves all encrypted data it encounters; it might want to devote cryptanalysis resources to it at some later time.

Posted on July 8, 2013 at 6:43 AM

How Apple Continues to Make Security Invisible

Interesting article:

Apple is famously focused on design and human experience as their top guiding principles. When it comes to security, that focus created a conundrum. Security is all about placing obstacles in the way of attackers, but (despite the claims of security vendors) those same obstacles can get in the way of users, too.

[…]

For many years, Apple tended to choose good user experience at the expense of leaving users vulnerable to security risks. That strategy worked for a long time, in part because Apple’s comparatively low market share made its products less attractive targets. But as Apple products began to gain in popularity, many of us in the security business wondered how Apple would adjust its security strategies to its new position in the spotlight.

As it turns out, the company not only handled that change smoothly, it has embraced it. Despite a rocky start, Apple now applies its impressive design sensibilities to security, playing the game its own way and in the process changing our expectations for security and technology.

EDITED TO ADD (7/11): iOS security white paper.

Posted on July 5, 2013 at 1:33 PM

Sixth Movie-Plot Threat Contest Winner

On April 1, I announced the Sixth Mostly-Annual Movie-Plot Threat Contest:

For this year’s contest, I want a cyberwar movie-plot threat. (For those who don’t know, a movie-plot threat is a scare story that would make a great movie plot, but is much too specific to build security policy around.) Not the Chinese attacking our power grid or shutting off 911 emergency services—people are already scaring our legislators with that sort of stuff. I want something good, something no one has thought of before.

On May 15, I announced the five semi-finalists. Voting continued through the end of the month, and the winner is Russell Thomas:

It’s November 2015 and the United Nations Climate Change Conference (UNCCC) is underway in Amsterdam, Netherlands. Over the past year, ocean level rise has done permanent damage to critical infrastructure in Maldives, killing off tourism and sending the economy into freefall. The Small Island Developing States are demanding immediate relief from the Green Climate Fund, but action has been blocked. Conspiracy theories flourish. For months, the rhetoric between developed and developing countries has escalated to veiled and not-so-veiled threats. One person in the elite of the Small Island Developing States sees an opportunity to force action.

He’s Sayyid Abdullah bin Yahya, an Indonesian engineer and construction magnate with interests in Bahrain, Bangladesh, and Maldives, all directly threatened by recent sea level rise. Bin Yahya’s firm installed industrial control systems on several flood control projects, including in the Maldives, but these projects are all stalled and unfinished for lack of financing. He also has a deep, abiding enmity against Holland and the Dutch people, rooted in the 1947 Rawagede massacre that killed his grandfather and father. Like many Muslims, he declared that he was personally insulted by Queen Beatrix’s gift to the people of Indonesia on the 50th anniversary of the massacre—a Friesian cow. “Very rude. That’s part of the Dutch soul, this rudeness”, he said at the time. Also like many Muslims, he became enraged and radicalized in 2005 when the Dutch newspaper Jyllands-Posten published cartoons of the Prophet.

Of all the EU nations, Holland is most vulnerable to rising sea levels. It has spent billions on extensive barriers and flood controls, including the massive Oosterscheldekering storm surge barrier, designed and built in the 80s to protect against a 10,000-year storm surge. While it was only used 24 times between 1986 and 2010, in the last two years the gates have been closed 46 times.

As the UNCCC conference began in November 2015, the Oosterscheldekering was closed yet again to hold off the surge of an early winter storm. Even against low expectations, the first day’s meetings went very poorly. A radicalized and enraged delegation from the Small Island Developing States (SIDS) presented an ultimatum, leading to denunciations and walkouts. “What can they do—start a war?” asked the Dutch Minister of Infrastructure and the Environment in an unguarded moment. There was talk of canceling the rest of the conference.

Overnight, there are a series of news stories in China, South America, and the United States reporting malfunctions of dams that resulted in flash floods and the deaths of tens or hundreds of people in several cases. Web sites associated with the dams were all defaced with the text of the SIDS ultimatum. In the morning, all over Holland there were reports of malfunctions of control equipment associated with flood monitoring and control systems. The winter storm was peaking that day with an expected surge of 7 meters (22 feet), larger than the Great Flood of 1953. With the Oosterscheldekering working normally, this is no worry. But at 10:43am, the storm gates unexpectedly open.

Microsoft Word claims it’s 501 words, but I’m letting that go.

This is the first professional—a researcher—who has won the contest. Be sure to check out his blogs, and his paper at WEIS this year.

Congratulations, Russell Thomas. Your box of fabulous prizes will be on its way to you soon.

History: The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner. The Third Movie-Plot Threat Contest rules, semifinalists, and winner. The Fourth Movie-Plot Threat Contest rules and winner. The Fifth Movie-Plot Threat Contest rules, semifinalists, and winner.

Posted on July 5, 2013 at 12:08 PM

Is Cryptography Engineering or Science?

Responding to a tweet by Thomas Ptacek saying, “If you’re not learning crypto by coding attacks, you might not actually be learning crypto,” Colin Percival published a well-thought-out rebuttal, saying in part:

If we were still in the 1990s, I would agree with Thomas. 1990s cryptography was full of holes, and the best you could hope for was to know how your tools were broken so you could try to work around their deficiencies. This was a time when DES and RC4 were widely used, despite having well-known flaws. This was a time when people avoided using CTR mode to convert block ciphers into stream ciphers, due to concern that a weak block cipher could break if fed input blocks which shared many (zero) bytes in common. This was a time when people cared about the “error propagation” properties of block ciphers, that is, how much of the output would be mangled if a small number of bits in the ciphertext are flipped. This was a time when people routinely advised compressing data before encrypting it, because that “compacted” the entropy in the message, and thus made it “more difficult for an attacker to identify when he found the right key”. It should come as no surprise that SSL, designed during this era, has had a long list of design flaws.

Cryptography in the 2010s is different. Now we start with basic components which are believed to be highly secure (e.g., block ciphers which are believed to be indistinguishable from random permutations) and which have been mathematically proven to be secure against certain types of attacks (e.g., AES is known to be immune to differential cryptanalysis). From those components, we then build higher-order systems using mechanisms which have been proven to not introduce vulnerabilities. For example, if you generate an ordered sequence of packets by encrypting data using an indistinguishable-from-random-permutation block cipher (e.g., AES) in CTR mode using a packet sequence number as the CTR nonce, and then append a weakly-unforgeable MAC (e.g., HMAC-SHA256) of the encrypted data and the packet sequence number, the packets both preserve privacy and do not permit any undetected tampering (including replays and reordering of packets). Life will become even better once Keccak (a.k.a. SHA-3) becomes more widely reviewed and trusted, as its “sponge” construction can be used to construct—with provable security—a very wide range of important cryptographic components.

He recommends a more modern approach to cryptography: “studying the theory and designing systems which you can prove are secure.”
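The packet construction Percival describes—CTR-mode encryption with the sequence number as nonce, followed by a MAC over the sequence number and ciphertext—can be sketched in a few lines. This is a toy illustration, not his code: to keep it self-contained, HMAC-SHA256 stands in for AES as the pseudorandom function generating the CTR keystream, and all names are invented for the example.

```python
# Encrypt-then-MAC per packet, keyed by a sequence number.
# HMAC-SHA256 stands in for AES as the CTR-keystream PRF (illustration only).
import hmac, hashlib, struct

def _keystream(key: bytes, seq: int, length: int) -> bytes:
    """CTR keystream: PRF(key, seq || block_counter) for successive blocks."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, struct.pack(">QQ", seq, counter),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, seq: int, plaintext: bytes) -> bytes:
    ks = _keystream(enc_key, seq, len(plaintext))
    ct = bytes(p ^ k for p, k in zip(plaintext, ks))
    # The MAC covers the sequence number too, so replayed or
    # reordered packets fail verification.
    tag = hmac.new(mac_key, struct.pack(">Q", seq) + ct,
                   hashlib.sha256).digest()
    return ct + tag

def open_(enc_key: bytes, mac_key: bytes, seq: int, packet: bytes) -> bytes:
    ct, tag = packet[:-32], packet[-32:]
    expected = hmac.new(mac_key, struct.pack(">Q", seq) + ct,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):   # verify before decrypting
        raise ValueError("MAC check failed")
    ks = _keystream(enc_key, seq, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))
```

Note the order of operations in `open_`: the MAC is verified before any decryption happens, which is exactly the property that makes encrypt-then-MAC the preferred composition.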

I think both statements are true—and not contradictory at all. The apparent disagreement stems from differing definitions of cryptography.

Many years ago, on the Cryptographer’s Panel at an RSA conference, then-chief scientist for RSA Bert Kaliski talked about the rise of something he called the “crypto engineer.” His point was that the practice of cryptography was changing. There was the traditional mathematical cryptography—designing and analyzing algorithms and protocols, and building up cryptographic theory—but there was also a more practice-oriented cryptography: taking existing cryptographic building blocks and creating secure systems out of them. It’s this latter group he called crypto engineers. It’s the group of people I wrote Applied Cryptography, and, most recently, co-wrote Cryptography Engineering, for. Colin knows this, directing his advice to “developers”—Kaliski’s crypto engineers.

Traditional cryptography is a science—applied mathematics—and applied cryptography is engineering. I prefer the term “security engineering,” because it necessarily encompasses a lot more than cryptography—see Ross Anderson’s great book of that name. And mistakes in engineering are where a lot of real-world cryptographic systems break.

Provable security has its limitations. Cryptographer Lars Knudsen once said: “If it’s provably secure, it probably isn’t.” Yes, we have provably secure cryptography, but those proofs take very specific forms against very specific attacks. They reduce the number of security assumptions we have to make about a system, but we still have to make a lot of security assumptions.

And cryptography has its limitations in general, despite the apparent strengths. Cryptography’s great strength is that it gives the defender a natural advantage: adding a single bit to a cryptographic key increases the work to encrypt by only a small amount, but doubles the work required to break the encryption. This is how we design algorithms that—in theory—can’t be broken until the universe collapses back on itself.
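The arithmetic behind that claim is easy to check. A minimal sketch, assuming a hypothetical attacker who can test 10^12 keys per second (the rate is arbitrary; the doubling with each bit is the point):

```python
# Back-of-the-envelope brute-force cost: each extra key bit doubles the
# attacker's work, while barely changing the defender's cost.
def years_to_exhaust(bits: int, keys_per_second: float = 1e12) -> float:
    """Expected time to search half of an n-bit keyspace, in years."""
    seconds = (2 ** bits / 2) / keys_per_second
    return seconds / (365.25 * 24 * 3600)

for bits in (56, 64, 128):
    print(f"{bits}-bit key: ~{years_to_exhaust(bits):.3g} years on average")
```

A 56-bit key (DES) falls in under a day at this rate; a 128-bit key takes on the order of 10^18 years, which is the "until the universe collapses" regime.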

Despite this, cryptographic systems are broken all the time: well before the heat death of the universe. They’re broken because of software mistakes in coding the algorithms. They’re broken because the computer’s memory management system left a stray copy of the key lying around, and the operating system automatically copied it to disk. They’re broken because of buffer overflows and other security flaws. They’re broken by side-channel attacks. They’re broken because of bad user interfaces, or insecure user practices.

Lots of people have said: “In theory, theory and practice are the same. But in practice, they are not.” It’s true about cryptography. If you want to be a cryptographer, study mathematics. Study the mathematics of cryptography, and especially cryptanalysis. There’s a lot of art to the science, and you won’t be able to design good algorithms and protocols until you gain experience in breaking existing ones. If you want to be a security engineer, study implementations and coding. Take the tools cryptographers create, and learn how to use them well.

The world needs security engineers even more than it needs cryptographers. We’re great at mathematically secure cryptography, and terrible at using those tools to engineer secure systems.

After writing this, I found a conversation between the two where they both basically agreed with me.

Posted on July 5, 2013 at 7:04 AM

The Office of the Director of National Intelligence Defends NSA Surveillance Programs

Here’s a transcript of a panel discussion about NSA surveillance. There’s a lot worth reading here, but I want to quote Bob Litt’s opening remarks. He’s the General Counsel for ODNI, and he has a lot to say about the programs revealed so far in the Snowden documents.

I’m reminded a little bit of a quote that, like many quotes, is attributed to Mark Twain but in fact is not Mark Twain’s, which is that a lie can get halfway around the world before the truth gets its boots on. And unfortunately, there’s been a lot of misinformation that’s come out about these programs. And what I would like to do in the next couple of minutes is actually go through and explain what the programs are and what they aren’t.

I particularly want to emphasize that I hope you come away from this with the understanding that neither of the programs that have been leaked to the press recently are indiscriminate sweeping up of information without regard to privacy or constitutional rights or any kind of controls. In fact, from my boss, the director of national intelligence, on down through the entire intelligence community, we are in fact sensitive to privacy and constitutional rights. After all, we are citizens of the United States. These are our rights too.

So as I said, we’re talking about two types of intelligence collection programs. I want to start discussing them by making the point that in order to target the emails or the phone calls or the communications of a United States citizen or a lawful permanent resident of the United States, wherever that person is located, or of any person within the United States, we need to go to court, and we need to get an individual order based on probable cause, the equivalent of an electronic surveillance warrant.

That does not mean—and nobody has ever said—that we never acquire the contents of an email or telephone call to which a United States person is a party. Whenever you’re doing any collection of information, you can’t avoid some incidental acquisition of information about nontargeted persons. Think of a wiretap in a criminal case. You’re wiretapping somebody, and you intercept conversations that are innocent as well as conversations that are inculpatory. If we seize somebody’s computer, there’s going to be information about innocent people on that. This is just a necessary incident.

What we do is we impose controls on the use of that information. But what we cannot do—and I’m repeating this—is go out and target the communications of Americans for collection without an individual court order.

So the first of the programs that I want to talk about that was leaked to the press is what’s been called Section 215, or business record collection. It’s called Section 215 because that was the section of the Patriot Act that put the current version of that statute into place. And under this statute, we collect telephone metadata, using a court order which is authorized by the Foreign Intelligence Surveillance Act, under a provision which allows the government to obtain business records for intelligence and counterterrorism purposes. Now, by metadata, in this context, I mean data that describes the phone calls, such as the telephone number making the call, the telephone number dialed, the date and time the call was made and the length of the call. These are business records of the telephone companies in question, which is why they can be collected under this provision.

Despite what you may have read about this program, we do not collect the content of any communications under this program. We do not collect the identity of any participant to any communication under this program. And while there seems to have been some confusion about this as recently as today, I want to make perfectly clear we do not collect cellphone location information under this program, either GPS information or cell site tower information. I’m not sure why it’s been so hard to get people to understand that because it’s been said repeatedly.

When the court approves collection under this statute, it issues two orders. One order, which is the one that was leaked, is an order to providers directing them to turn the relevant information over to the government. The other order, which was not leaked, is the order that spells out the limitations on what we can do with the information after it’s been collected, who has access, what purposes they can access it for and how long it can be retained.

Some people have expressed concern, which is quite a valid concern in the abstract, that if you collect large quantities of metadata about telephone calls, you could subject it to sophisticated analysis, and using those kind of analytical tools, you can derive a lot of information about people that would otherwise not be discoverable.

The fact is, we are specifically not allowed to do that kind of analysis of this data, and we don’t do it. The metadata that is acquired and kept under this program can only be queried when there is reasonable suspicion, based on specific, articulable facts, that a particular telephone number is associated with specified foreign terrorist organizations. And the only purpose for which we can make that query is to identify contacts. All that we get under this program, all that we collect, is metadata. So all that we get back from one of these queries is metadata.
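To make the shape of such a contact query concrete, here is a hypothetical sketch of contact chaining over call metadata. The record layout, field names, and functions are invented for illustration and say nothing about how NSA's actual systems work.

```python
# Hypothetical contact-chaining over call-detail metadata: given
# (caller, callee) pairs and a seed number, return the identifiers
# within a given number of hops.  Illustrative only.
from collections import defaultdict

def build_index(records):
    """records: iterable of (caller, callee) pairs from call metadata."""
    contacts = defaultdict(set)
    for caller, callee in records:
        contacts[caller].add(callee)
        contacts[callee].add(caller)   # contact is symmetric for chaining
    return contacts

def chain(contacts, seed, hops=1):
    """All numbers within `hops` contacts of the seed identifier."""
    frontier, seen = {seed}, {seed}
    for _ in range(hops):
        frontier = {n for f in frontier for n in contacts[f]} - seen
        seen |= frontier
    return seen - {seed}
```

Note that both the input and the output are pure metadata, which is the point Litt is making: the query returns contact numbers, not names or content.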

Each determination of a reasonable suspicion under this program must be documented and approved, and only a small portion of the data that is collected is ever actually reviewed, because the vast majority of that data is never going to be responsive to one of these terrorism-related queries.

In 2012 fewer than 300 identifiers were approved for searching this data. Nevertheless, we collect all the data because if you want to find a needle in the haystack, you need to have the haystack, especially in the case of a terrorism-related emergency, which is—and remember that this database is only used for terrorism-related purposes.

And if we want to pursue any further investigation as a result of a number that pops up in one of these queries, we have to do so pursuant to other authorities. In particular, if we want to conduct electronic surveillance of any number within the United States, as I said before, we have to go to court and get an individual order based on probable cause.

That’s one of the two programs.

The other program is very different. This is a program that’s sometimes referred to as PRISM, which is a misnomer. PRISM is actually the name of a database. The program is collection under Section 702 of the Foreign Intelligence Surveillance Act, which is a public statute that is widely known to everybody. There’s really no secret about this kind of collection.

This permits the government to target a non-U.S. person, somebody who’s not a citizen or a permanent resident alien, located outside of the United States, for foreign intelligence purposes without obtaining a specific warrant for each target, under the programmatic supervision of the FISA Court.

And it’s important here to step back and note that historically, at the time FISA was originally passed in 1978, this particular kind of collection, targeting non-U.S. persons outside of the United States for foreign intelligence purposes, was not intended to be covered by FISA at all. It was totally outside of the supervision of the FISA Court and totally within the prerogative of the executive branch. So in that respect, Section 702 is properly viewed as an expansion of FISA Court authority, rather than a contraction of that authority.

So Section 702, as I said, is limited to targeting foreigners outside the United States to acquire foreign intelligence information. And there is a specific provision in this statute that prohibits us from making an end run around this requirement, because we are expressly prohibited from targeting somebody outside of the United States in order to obtain some information about somebody inside the United States. That is to say, if we know that somebody outside of the United States is communicating with Spike Bowman, and we really want to get Spike Bowman’s communications, we’ve got to get an electronic surveillance order on Spike Bowman. We cannot target the person outside of the United States to collect on Spike.

In order to use Section 702, the government has to obtain approval from the FISA Court for the plan it intends to use to conduct the collection. This plan includes, first of all, identification of the foreign intelligence purposes of the collection; second, the plan and the procedures for ensuring that the individuals targeted for collection are in fact non-U.S. persons who are located outside of the United States. These are referred to as targeting procedures. And in addition, we have to get approval of the government’s procedures for what it will do with information about a U.S. person or someone inside the United States if we get that information through this collection. These procedures, which are called minimization procedures, determine what we can keep and what we can disseminate to other government agencies and impose limitations on that. And in particular, dissemination of information about U.S. persons is expressly prohibited unless that information is necessary to understand foreign intelligence or to assess its importance or is evidence of a crime or indicates an imminent threat of death or serious bodily harm.

And again, these procedures, the targeting and minimization procedures, have to be approved by the FISA court as consistent with the statute and consistent with the Fourth Amendment. And that’s what the Section 702 collection is.

The last thing I want to talk about a little bit is the myth that this is sort of unchecked authority, because we have extensive oversight and control over the collection, which involves all three branches of government. First, NSA has extensive technological processes, including segregated databases, limited access and audit trails, and they have extensive internal oversight, including their own compliance officer, who oversees compliance with the rules.

Second, the Department of Justice and my office, the Office of the Director of National Intelligence, are specifically charged with overseeing NSA’s activities to make sure that there are no compliance problems. And we report to the Congress twice a year on the use of these collection authorities and compliance problems. And if we find a problem, we correct it. Inspectors general, independent inspectors general, who, as you all know, also have an independent reporting responsibility to Congress, also are charged with undertaking a review of how these surveillance programs are carried out.

Any time that information is collected in violation of the rules, it’s reported immediately to the FISA court and is also reported to the relevant congressional oversight committees. It doesn’t matter how small the—or technical the violation is. And information that’s collected in violation of the rules has to be purged, with very limited exceptions.

Both the FISA court and the congressional oversight committees, which are Intelligence and Judiciary, take a very active role in overseeing this program and ensuring that we adhere to the requirements of the statutes and the court orders. And let me just stop and say that the suggestion that the FISA court is a rubber stamp is a complete canard, as anybody who’s ever had the privilege of appearing before Judge Bates or Judge Walton can attest.

Now, this is a complex system, and like any complex system, it’s not error free. But as I said before, every time we have found a mistake, we’ve fixed it. And the mistakes are self-reported. We find them ourselves in the exercise of our oversight. No one has ever found that there has ever been—and by no one, I mean the people at NSA, the people at the Department of Justice, the people at the Office of the Director of National Intelligence, the inspectors general, the FISA court and the congressional oversight committees, all of whom have visibility into this—nobody has ever found that there has ever been any intentional effort to violate the law or any intentional misuse of these tools.

As always, the fundamental issue is trust. If you believe Litt, this is all very comforting. If you don’t, it’s more lies and misdirection. Taken at face value, it explains why so many tech executives were able to say they had never heard of PRISM: it’s the internal NSA name for the database, and not the name of the program. I also note that Litt uses the word “collect” to mean what it actually means, and not the way his boss, Director of National Intelligence James Clapper, Jr., used it to deliberately lie to Congress.

Posted on July 4, 2013 at 7:07 AM

Privacy Protests

Interesting law journal article: “Privacy Protests: Surveillance Evasion and Fourth Amendment Suspicion,” by Elizabeth E. Joh.

Abstract: The police tend to think that those who evade surveillance are criminals. Yet the evasion may only be a protest against the surveillance itself. Faced with the growing surveillance capacities of the government, some people object. They buy “burners” (prepaid phones) or “freedom phones” from Asia that have had all tracking devices removed, or they hide their smartphones in ad hoc Faraday cages that block their signals. They use anonymizing software to surf the internet. They identify tracking devices with GPS detectors. They avoid credit cards and choose cash, prepaid debit cards, or bitcoins. They burn their garbage. At the extreme end, some “live off the grid” and cut off all contact with the modern world.

These are all examples of what I call privacy protests: actions individuals take to block or to thwart government surveillance for reasons that are unrelated to criminal wrongdoing. Those engaged in privacy protests do so primarily because they object to the presence of perceived or potential government surveillance in their lives. How do we tell the difference between privacy protests and criminal evasions, and why does it matter? Surprisingly scant attention has been given to these questions, in part because Fourth Amendment law makes little distinction between ordinary criminal evasions and privacy protests. This article discusses the importance of these ordinary acts of resistance, their place in constitutional criminal procedure, and their potential social value in the struggle over the meaning of privacy.

Read this while thinking about the lack of any legal notion of civil disobedience in cyberspace.

Posted on July 3, 2013 at 12:30 PM
