Blog: November 2015 Archives

A History of Privacy

This New Yorker article traces the history of privacy from the mid 1800s to today:

As a matter of historical analysis, the relationship between secrecy and privacy can be stated in an axiom: the defense of privacy follows, and never precedes, the emergence of new technologies for the exposure of secrets. In other words, the case for privacy always comes too late. The horse is out of the barn. The post office has opened your mail. Your photograph is on Facebook. Google already knows that, notwithstanding your demographic, you hate kale.

Posted on November 30, 2015 at 12:47 PM • 24 Comments

Cryptanalysis of Algebraic Eraser

Algebraic Eraser is a public-key key-agreement protocol that’s patented and being pushed by a company for the Internet of Things, primarily because it is efficient on small low-power devices. There’s a new cryptanalytic attack.

This is yet another demonstration of why you should not choose proprietary encryption over public algorithms and protocols. The good stuff is not patented.

News article.

Posted on November 30, 2015 at 6:05 AM • 26 Comments

NSA Lectures on Communications Security from 1973

Newly declassified: “A History of U.S. Communications Security (Volumes I and II),” the David G. Boak Lectures, National Security Agency (NSA), 1973. (The document was initially declassified in 2008. We just got a whole bunch of additional material declassified. Both versions are in the document, so you can compare and see what was kept secret seven years ago.)

Posted on November 25, 2015 at 7:06 AM • 21 Comments

Policy Repercussions of the Paris Terrorist Attacks

In 2013, in the early days of the Snowden leaks, Harvard Law School professor and former Assistant Attorney General Jack Goldsmith reflected on the increase in NSA surveillance post 9/11. He wrote:

Two important lessons of the last dozen years are (1) the government will increase its powers to meet the national security threat fully (because the People demand it), and (2) the enhanced powers will be accompanied by novel systems of review and transparency that seem to those in the Executive branch to be intrusive and antagonistic to the traditional national security mission, but that in the end are key legitimating factors for the expanded authorities.

Goldsmith is right, and I think about this quote as I read news articles about surveillance policies with headlines like “Political winds shifting on surveillance after Paris attacks?”

The politics of surveillance are the politics of fear. As long as the people are afraid of terrorism—regardless of how realistic their fears are—they will demand that the government keep them safe. And if the government can convince them that it needs this or that power in order to keep the people safe, the people will willingly grant them those powers. That’s Goldsmith’s first point.

Today, in the wake of the horrific and devastating Paris terror attacks, we’re at a pivotal moment. People are scared, and already Western governments are lining up to authorize more invasive surveillance powers. The US wants to back-door encryption products in some vain hope that the bad guys are 1) naive enough to use those products for their own communications instead of more secure ones, and 2) too stupid to use the back doors against the rest of us. The UK is trying to rush the passage of legislation that legalizes a whole bunch of surveillance activities that GCHQ has already been doing to its own citizens. France just gave its police a bunch of new powers. It doesn’t matter that mass surveillance isn’t an effective anti-terrorist tool: a scared populace wants to be reassured.

And politicians want to reassure. It’s smart politics to exaggerate the threat. It’s smart politics to do something, even if that something isn’t effective at mitigating the threat. The surveillance apparatus has the ear of the politicians, and the primary tool in its box is more surveillance. There’s minimal political will to push back on those ideas, especially when people are scared.

Writing about our country’s reaction to the Paris attacks, Tom Engelhardt wrote:

…the officials of that security state have bet the farm on the preeminence of the terrorist ‘threat,’ which has, not so surprisingly, left them eerily reliant on the Islamic State and other such organizations for the perpetuation of their way of life, their career opportunities, their growing powers, and their relative freedom to infringe on basic rights, as well as for that comfortably all-embracing blanket of secrecy that envelops their activities.

Goldsmith’s second point is more subtle: when these power increases are made in public, they’re legitimized through bureaucracy. Together, the scared populace and their scared elected officials serve to make the expanded national security and law enforcement powers normal.

Terrorism is singularly designed to push our fear buttons in ways completely out of proportion to the actual threat. And as long as people are scared of terrorism, they’ll give their governments all sorts of new powers of surveillance, arrest, detention, and so on, regardless of whether those powers actually combat the threat. This means that those who want those powers need a steady stream of terrorist attacks to enact their agenda. It’s not that these people are actively rooting for the terrorists, but they know a good opportunity when they see it.

We know that the PATRIOT Act was largely written before the 9/11 terrorist attacks, and that the political climate was right for its introduction and passage. More recently:

Although “the legislative environment is very hostile today,” the intelligence community’s top lawyer, Robert S. Litt, said to colleagues in an August e-mail, which was obtained by The Post, “it could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement.”

The Paris attacks could very well be that event.

I am very worried that the Obama administration has already secretly told the NSA to increase its surveillance inside the US. And I am worried that there will be new legislation legitimizing that surveillance and granting other invasive powers to law enforcement. As Goldsmith says, these powers will be accompanied by novel systems of review and transparency. But I have no faith that those systems will be effective in limiting abuse any more than they have been over the last couple of decades.

EDITED TO ADD (12/14): Trevor Timm is all over this issue. Dan Gillmor wrote something good, too.

Posted on November 24, 2015 at 6:32 AM • 152 Comments

Voter Surveillance

There hasn’t been that much written about surveillance and big data being used to manipulate voters. In Data and Goliath, I wrote:

Unique harms can arise from the use of surveillance data in politics. Election politics is very much a type of marketing, and politicians are starting to use personalized marketing’s capability to discriminate as a way to track voting patterns and better “sell” a candidate or policy position. Candidates and advocacy groups can create ads and fund-raising appeals targeted to particular categories: people who earn more than $100,000 a year, gun owners, people who have read news articles on one side of a particular issue, unemployed veterans…anything you can think of. They can target outraged ads to one group of people, and thoughtful policy-based ads to another. They can also fine-tune their get-out-the-vote campaigns on Election Day, and more efficiently gerrymander districts between elections. Such use of data will likely have fundamental effects on democracy and voting.

A new research paper looks at the trends:

Abstract: This paper surveys the various voter surveillance practices recently observed in the United States, assesses the extent to which they have been adopted in other democratic countries, and discusses the broad implications for privacy and democracy. Four broad trends are discussed: the move from voter management databases to integrated voter management platforms; the shift from mass-messaging to micro-targeting employing personal data from commercial data brokerage firms; the analysis of social media and the social graph; and the decentralization of data to local campaigns through mobile applications. The de-alignment of the electorate in most Western societies has placed pressures on parties to target voters outside their traditional bases, and to find new, cheaper, and potentially more intrusive, ways to influence their political behavior. This paper builds on previous research to consider the theoretical tensions between concerns for excessive surveillance, and the broad democratic responsibility of parties to mobilize voters and increase political engagement. These issues have been insufficiently studied in the surveillance literature. They are not just confined to the privacy of the individual voter, but relate to broader dynamics in democratic politics.

Posted on November 23, 2015 at 12:03 PM • 33 Comments

Friday Squid Blogging: Squid Spawning in South Australian Waters

Divers are counting them:

Squid gather and mate with as many partners as possible, then die, in an annual ritual off Rapid Head on the Fleurieu Peninsula, south of Adelaide.

Department of Environment divers will check the waters and gather data on how many eggs are left by the spawning squid.

No word on how many are expected. Ten? Ten billion? I have no idea.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on November 20, 2015 at 4:30 PM • 132 Comments

Reputation in the Information Age

Reputation is a social mechanism by which we come to trust one another, in all aspects of our society. I see it as a security mechanism. The promise and threat of a change in reputation entices us all to be trustworthy, which in turn enables others to trust us. In a very real sense, reputation enables friendships, commerce, and everything else we do in society. It’s old, older than our species, and we are finely tuned to both perceive and remember reputation information, and broadcast it to others.

The nature of how we manage reputation has changed in the past couple of decades, and Gloria Origgi alludes to the change in her remarks. Reputation now involves technology. Feedback and review systems, whether they be eBay rankings, Amazon reviews, or Uber ratings, are reputational systems. So is Google PageRank. Our reputations are, at least in part, based on what we say on social networking sites like Facebook and Twitter. Basically, what were wholly social systems have become socio-technical systems.
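PageRank itself is a reputational computation: a page’s score is, roughly, a damped sum of the scores of the pages that link to it. A toy sketch over a hypothetical three-page link graph (illustrative only, not Google’s production algorithm):

```python
# Minimal PageRank power iteration over a tiny, made-up link graph.
# A page's rank is the damped sum of rank contributed by pages linking to it.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
damping = 0.85
ranks = {page: 1 / len(links) for page in links}

for _ in range(50):  # iterate until the scores settle
    new = {}
    for page in links:
        # Each inbound neighbor splits its rank evenly among its outlinks.
        inbound = sum(ranks[p] / len(outs)
                      for p, outs in links.items() if page in outs)
        new[page] = (1 - damping) / len(links) + damping * inbound
    ranks = new

# "C" collects links from both "A" and "B", so it ends up ranked highest.
```

The point of the sketch is the reputational feedback loop: a link from an already-reputable page is worth more, which is exactly the property that makes the system both useful and worth gaming.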

This change is important, for both the good and the bad of what it allows.

An example might make this clearer. In a small town, everyone knows each other, and lenders can make decisions about whom to loan money to, based on reputation (like in the movie It’s a Wonderful Life). The system isn’t perfect; it is prone to “old-boy network” preferences and discrimination against outsiders. The real problem, though, is that the system doesn’t scale. To enable lending on a larger scale, we replaced personal reputation with a technological system: credit reports and scores. They work well, and allow us to borrow money from strangers halfway across the country—and lending has exploded in our society, in part because of it. But the new system can be attacked technologically. Someone could hack the credit bureau’s database and enhance her reputation by boosting her credit score. Or she could steal someone else’s reputation. All sorts of attacks that just weren’t possible with a wholly personal reputation system become possible against a system that works as a technological reputation system.

We like socio-technical systems of reputation because they empower us in so many ways. People can achieve a level of fame and notoriety much more easily on the Internet. Totally new ways of making a living—think of Uber and Airbnb, or popular bloggers and YouTubers—become possible. But the downsides are considerable. The hacker tactic of social engineering involves fooling someone by hijacking the reputation of someone else. Most social media companies make their money leeching off our activities on their sites. And because we trust the reputational information from these socio-technical systems, anyone who can figure out how to game those systems can artificially boost their reputation. Amazon, eBay, Yelp, and others have been trying to deal with fake reviews for years. And you can buy Twitter followers and Facebook likes cheap.

Reputation has always been gamed. It’s been an eternal arms race between those trying to artificially enhance their reputation and those trying to detect those enhancements. In that respect, nothing is new here. But technology changes the mechanisms of both enhancement and enhancement detection. There’s power to be had on either side of that arms race, and it’ll be interesting to watch each side jockeying for the upper hand.

This essay is part of a conversation with Gloria Origgi entitled “What is Reputation?”

Posted on November 20, 2015 at 7:04 AM • 89 Comments

RFID-Shielded, Ultra-Strong Duffel Bags

They’re for carrying cash through dangerous territory:

SDR Traveller caters to people who, for one reason or another, need to haul huge amounts of cash money through dangerous territory. The bags are made from a super strong, super light synthetic material designed for yacht sails, are RFID-shielded, and are rated by how much cash in US$100 bills each can carry….

Posted on November 19, 2015 at 6:16 AM • 37 Comments

Paris Terrorists Used Double ROT-13 Encryption

That is, no encryption at all. The Intercept has the story:

Yet news emerging from Paris—as well as evidence from a Belgian ISIS raid in January—suggests that the ISIS terror networks involved were communicating in the clear, and that the data on their smartphones was not encrypted.

European media outlets are reporting that the location of a raid conducted on a suspected safe house Wednesday morning was extracted from a cellphone, apparently belonging to one of the attackers, found in the trash outside the Bataclan concert hall massacre. Le Monde reported that investigators were able to access the data on the phone, including a detailed map of the concert hall and an SMS message saying “we’re off; we’re starting.” Police were also able to trace the phone’s movements.

The obvious conclusion:

The reports note that Abdelhamid Abaaoud, the “mastermind” of both the Paris attacks and a thwarted Belgium attack ten months ago, failed to use encryption whatsoever (read: existing capabilities stopped the Belgium attacks and could have stopped the Paris attacks, but didn’t). That’s of course not to say batshit religious cults like ISIS don’t use encryption, and won’t do so going forward. Everybody uses encryption. But the point remains that to use a tragedy to vilify encryption, push for surveillance expansion, and pass backdoor laws that will make everybody less safe—is nearly as gruesome as the attacks themselves.

And what is it about this “mastermind” label? Why do we have to make them smarter than they are?
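For anyone who missed the joke in the headline: ROT-13 shifts each letter 13 places around the 26-letter alphabet, so it is its own inverse, and applying it twice hands you back the plaintext. A minimal sketch:

```python
def rot13(text: str) -> str:
    """Rotate each ASCII letter 13 places; everything else passes through."""
    out = []
    for ch in text:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

msg = "attack at dawn"
assert rot13(rot13(msg)) == msg  # "double ROT-13" is the identity: no encryption
```

Since 13 + 13 = 26, a full trip around the alphabet, “double ROT-13 encryption” is exactly what the headline says it is: nothing at all.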

EDITED TO ADD: More information.

EDITED TO ADD: My previous blog post on this.

Posted on November 18, 2015 at 3:35 PM • 149 Comments

Ads Surreptitiously Using Sound to Communicate Across Devices

This is creepy and disturbing:

Privacy advocates are warning federal authorities of a new threat that uses inaudible, high-frequency sounds to surreptitiously track a person’s online behavior across a range of devices, including phones, TVs, tablets, and computers.

The ultrasonic pitches are embedded into TV commercials or are played when a user encounters an ad displayed in a computer browser. While the sound can’t be heard by the human ear, nearby tablets and smartphones can detect it. When they do, browser cookies can now pair a single user to multiple devices and keep track of what TV commercials the person sees, how long the person watches the ads, and whether the person acts on the ads by doing a Web search or buying a product.

Related: a Chrome extension that broadcasts URLs over audio.
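The underlying mechanism is easy to sketch: encode an identifier as a near-ultrasonic tone that a speaker can emit and a nearby phone’s microphone can detect. Here is a hypothetical toy encoding, assuming a simple one-frequency-per-ID scheme; real beacon systems are more elaborate than this:

```python
import math
import struct

SAMPLE_RATE = 44100  # standard audio rate; Nyquist limit is 22.05 kHz

def beacon_tone(beacon_id: int, duration: float = 0.5) -> bytes:
    """Encode a small ID as a near-ultrasonic sine tone (16-bit mono PCM)."""
    # Map IDs to distinct pitches starting at 17.5 kHz: hard for most adults
    # to hear, but below the Nyquist limit and within microphone range.
    freq = 17500 + beacon_id * 100
    n_samples = int(SAMPLE_RATE * duration)
    frames = bytearray()
    for i in range(n_samples):
        # Low amplitude (20% of full scale) keeps the tone unobtrusive.
        sample = int(32767 * 0.2 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
        frames += struct.pack("<h", sample)  # little-endian signed 16-bit
    return bytes(frames)

# A receiver would run an FFT or Goertzel filter over microphone input and
# look for energy at one of the known beacon frequencies.
pcm = beacon_tone(7)
```

The asymmetry is what makes this creepy: the TV or browser ad only needs a speaker, and any app on a nearby device with microphone permission can listen.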

EDITED TO ADD (12/14): More here.

Posted on November 18, 2015 at 6:59 AM • 62 Comments

On CISA

I have avoided writing about the Cybersecurity Information Sharing Act (CISA), largely because the details kept changing. (For those not following closely, similar bills were passed by both the House and the Senate. They’re now being combined into a single bill which will be voted on again, and then almost certainly signed into law by President Obama.)

Now that it’s pretty solid, I find that I don’t have to write anything, because Danny Weitzner did such a good job, writing about how the bill encourages companies to share personal information with the government, allows them to take some offensive measures against attackers (or innocents, if they get it wrong), waives privacy protections, and gives companies immunity from prosecution.

Information sharing is essential to good cybersecurity, and we need more of it. But CISA is a really bad law.

This is good, too.

Posted on November 17, 2015 at 12:03 PM • 31 Comments

Refuse to Be Terrorized

Paul Krugman has written a really good update of my 2006 essay.

Krugman:

So what can we say about how to respond to terrorism? Before the atrocities in Paris, the West’s general response involved a mix of policing, precaution, and military action. All involved difficult tradeoffs: surveillance versus privacy, protection versus freedom of movement, denying terrorists safe havens versus the costs and dangers of waging war abroad. And it was always obvious that sometimes a terrorist attack would slip through.

Paris may have changed that calculus a bit, especially when it comes to Europe’s handling of refugees, an agonizing issue that has now gotten even more fraught. And there will have to be a post-mortem on why such an elaborate plot wasn’t spotted. But do you remember all the pronouncements that 9/11 would change everything? Well, it didn’t—and neither will this atrocity.

Again, the goal of terrorists is to inspire terror, because that’s all they’re capable of. And the most important thing our societies can do in response is to refuse to give in to fear.

Me:

But our job is to remain steadfast in the face of terror, to refuse to be terrorized. Our job is to not panic every time two Muslims stand together checking their watches. There are approximately 1 billion Muslims in the world, a large percentage of them not Arab, and about 320 million Arabs in the Middle East, the overwhelming majority of them not terrorists. Our job is to think critically and rationally, and to ignore the cacophony of other interests trying to use terrorism to advance political careers or increase a television show’s viewership.

The surest defense against terrorism is to refuse to be terrorized. Our job is to recognize that terrorism is just one of the risks we face, and not a particularly common one at that. And our job is to fight those politicians who use fear as an excuse to take away our liberties and promote security theater that wastes money and doesn’t make us any safer.

This crass and irreverent essay was written after January’s Paris terrorist attack, but is very relevant right now.

Posted on November 17, 2015 at 6:36 AM • 79 Comments

Paris Attacks Blamed on Strong Cryptography and Edward Snowden

Well, that didn’t take long:

As Paris reels from terrorist attacks that have claimed at least 128 lives, fierce blame for the carnage is being directed toward American whistleblower Edward Snowden and the spread of strong encryption catalyzed by his actions.

Now the Paris attacks are being used as an excuse to demand back doors.

CIA Director John Brennan chimed in, too.

Of course, this was planned all along. From September:

Privately, law enforcement officials have acknowledged that prospects for congressional action this year are remote. Although “the legislative environment is very hostile today,” the intelligence community’s top lawyer, Robert S. Litt, said to colleagues in an August e-mail, which was obtained by The Post, “it could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement.”

There is value, he said, in “keeping our options open for such a situation.”

I was going to write a definitive refutation to the meme that it’s all Snowden’s fault, but Glenn Greenwald beat me to it.

EDITED TO ADD: It wasn’t fair for me to characterize Ben Wittes’s Lawfare post as agitating for back doors. I apologize.

Better links are these two New York Times stories.

EDITED TO ADD (11/17): These two essays are also good.

EDITED TO ADD (11/18): The New York Times published a powerful editorial against mass surveillance.

EDITED TO ADD (11/19): The New York Times deleted a story claiming the attackers used encryption. Because it turns out they didn’t use encryption.

Posted on November 16, 2015 at 2:39 PM • 120 Comments

Did Carnegie Mellon Attack Tor for the FBI?

There’s pretty strong evidence that the team of researchers from Carnegie Mellon University who cancelled their scheduled 2015 Black Hat talk deanonymized Tor users for the FBI.

Details are in this Vice story and this Wired story (and these two follow-on Vice stories). And here’s the reaction from the Tor Project.

Nicholas Weaver guessed this back in January.

The behavior of the researchers is reprehensible, but the real issue is that CERT Coordination Center (CERT/CC) has lost its credibility as an honest broker. The researchers discovered this vulnerability and submitted it to CERT. Neither the researchers nor CERT disclosed this vulnerability to the Tor Project. Instead, the researchers apparently used this vulnerability to deanonymize a large number of hidden service visitors and provide the information to the FBI.

Does anyone still trust CERT to behave in the Internet’s best interests?

EDITED TO ADD (12/14): I was wrong. CERT did disclose to Tor.

Posted on November 16, 2015 at 6:19 AM • 128 Comments

Personal Data Sharing by Mobile Apps

Interesting research:

“Who Knows What About Me? A Survey of Behind the Scenes Personal Data Sharing to Third Parties by Mobile Apps,” by Jinyan Zang, Krysta Dummit, James Graves, Paul Lisker, and Latanya Sweeney.

We tested 110 popular, free Android and iOS apps to look for apps that shared personal, behavioral, and location data with third parties.

73% of Android apps shared personal information such as email address with third parties, and 47% of iOS apps shared geo-coordinates and other location data with third parties.

93% of Android apps tested connected to a mysterious domain, safemovedm.com, likely due to a background process of the Android phone.

We show that a significant proportion of apps share data from user inputs such as personal information or search terms with third parties without Android or iOS requiring a notification to the user.

EDITED TO ADD: News article.

Posted on November 13, 2015 at 6:08 AM • 24 Comments

Testing the Usability of PGP Encryption Tools

“Why Johnny Still, Still Can’t Encrypt: Evaluating the Usability of a Modern PGP Client,” by Scott Ruoti, Jeff Andersen, Daniel Zappala, and Kent Seamons.

Abstract: This paper presents the results of a laboratory study involving Mailvelope, a modern PGP client that integrates tightly with existing webmail providers. In our study, we brought in pairs of participants and had them attempt to use Mailvelope to communicate with each other. Our results show that more than a decade and a half after Why Johnny Can’t Encrypt, modern PGP tools are still unusable for the masses. We finish with a discussion of pain points encountered using Mailvelope, and discuss what might be done to address them in future PGP systems.

I have recently come to the conclusion that e-mail is fundamentally unsecurable. The things we want out of e-mail, and an e-mail system, are not readily compatible with encryption. I advise people who want communications security to not use e-mail, but instead use an encrypted message client like OTR or Signal.

Posted on November 12, 2015 at 2:28 PM • 75 Comments

The Effects of Surveillance on the Victims

Last month, the Cato Institute held its Second Annual Cato Surveillance Conference. It was an excellent event, with many interesting talks and panels. But there was one standout: a panel by victims of surveillance. Titled “The Feeling of Being Watched,” it consisted of Assia Boundaoui, Faisal Gill, and Jumana Musa. It was very powerful and moving to hear them talk about what it’s like to live under the constant threat of surveillance.

Watch the video or listen to the audio.

By the way, I gave the closing keynote (video and audio).

Posted on November 5, 2015 at 6:16 AM • 72 Comments

Analyzing Reshipping Mule Scams

Interesting paper: “Drops for Stuff: An Analysis of Reshipping Mule Scams.” From a blog post:

A cybercriminal (called operator) recruits unsuspecting citizens with the promise of a rewarding work-from-home job. This job involves receiving packages at home and having to re-ship them to a different address, provided by the operator. By accepting the job, people unknowingly become part of a criminal operation: the packages that they receive at their home contain stolen goods, and the shipping destinations are often overseas, typically in Russia. These shipping agents are commonly known as reshipping mules (or drops for stuff in the underground community).

[…]

Studying the management of the mules led us to some surprising findings. When applying for the job, people are usually required to send the operator copies of their ID cards and passport. After they are hired, mules are promised to be paid at the end of their first month of employment. However, from our data it is clear that mules are usually never paid. After their first month expires, they are never contacted back by the operator, who just moves on and hires new mules. In other words, the mules become victims of this scam themselves, by never seeing a penny. Moreover, because they sent copies of their documents to the criminals, mules can potentially become victims of identity theft.

Posted on November 4, 2015 at 1:54 PM • 33 Comments

$1M Bounty for iPhone Hack

I don’t know whether to believe this story. Supposedly the startup Zerodium paid someone $1M for an iOS 9.1 and 9.2b hack.

Bekrar and Zerodium, as well as its predecessor VUPEN, have a different business model. They offer higher rewards than what tech companies usually pay out, and keep the vulnerabilities secret, revealing them only to certain government customers, such as the NSA.

I know startups like publicity, but certainly an exploit like this is more valuable if it’s not talked about.

So this might be real, or it might be a PR stunt. But companies selling exploits to governments is certainly real.

Another news article.

Posted on November 3, 2015 at 2:31 PM • 25 Comments

Australia Is Testing Virtual Passports

Australia is going to be the first country to have virtual passports. Presumably, the passport data will be in the cloud somewhere, and you’ll access it with an app or a URL or maybe just the passport number.

On the one hand, all a passport needs to be is a pointer into a government database with all the relevant information and biometrics. On the other hand, not all countries have access into all databases. When I enter the US with my US passport, I’m sure no one really needs the paper document—it’s all on the officers’ computers. But when I enter a random country, they don’t have access to the US government database; they need the physical object.

Australia is trialing this with New Zealand. Presumably both countries will have access into each other’s databases.

Posted on November 3, 2015 at 6:20 AM • 49 Comments

The Rise of Political Doxing

Last week, CIA director John O. Brennan became the latest victim of what’s become a popular way to embarrass and harass people on the Internet. A hacker allegedly broke into his AOL account and published e-mails and documents found inside, many of them personal and sensitive.

It’s called doxing—sometimes doxxing—from the word “documents.” It emerged in the 1990s as a hacker revenge tactic, and has since been used as a tool to harass and intimidate people, primarily women, on the Internet. Someone would threaten a woman with physical harm, or try to incite others to harm her, and publish her personal information as a way of saying “I know a lot about you—like where you live and work.” Victims of doxing talk about the fear that this tactic instills. It’s very effective, by which I mean that it’s horrible.

Brennan’s doxing was slightly different. Here, the attacker had a more political motive. He wasn’t out to intimidate Brennan; he simply wanted to embarrass him. His personal papers were dumped indiscriminately, fodder for an eager press. This doxing was a political act, and we’re seeing this kind of thing more and more.

Last year, the government of North Korea did this to Sony. Hackers the FBI believes were working for North Korea broke into the company’s networks, stole a huge amount of corporate data, and published it. This included unreleased movies, financial information, company plans, and personal e-mails. The reputational damage to the company was enormous; the company estimated the cost at $41 million.

In July, hackers stole and published sensitive documents from the cyberweapons arms manufacturer Hacking Team. That same month, different hackers did the same thing to the infidelity website Ashley Madison. In 2014, hackers broke into the iCloud accounts of over 100 celebrities and published personal photographs, most containing some nudity. In 2013, Edward Snowden doxed the NSA.

These aren’t the first instances of politically motivated doxing, but there’s a clear trend. As people realize what an effective attack this can be, and how an individual can use the tactic to do considerable damage to powerful people and institutions, we’re going to see a lot more of it.

On the Internet, attack is easier than defense. We’re living in a world where a sufficiently skilled and motivated attacker will circumvent network security. Even worse, most Internet security assumes it needs to defend against an opportunistic attacker who will attack the weakest network in order to get—for example—a pile of credit card numbers. The notion of a targeted attacker, who wants Sony or Ashley Madison or John Brennan because of what they stand for, is still new. And it’s even harder to defend against.

What this means is that we’re going to see more political doxing in the future, against both people and institutions. It’s going to be a factor in elections. It’s going to be a factor in anti-corporate activism. More people will find their personal information exposed to the world: politicians, corporate executives, celebrities, divisive and outspoken individuals.

Of course they won’t all be doxed, but some of them will. Some of them will be doxed directly, like Brennan. Some of them will be inadvertent victims of a doxing attack aimed at a company where their information is stored, like those celebrities with iCloud accounts and every customer of Ashley Madison. Regardless of the method, lots of people will have to face the publication of personal correspondence, documents, and information they would rather keep private.

In the end, doxing is a tactic that the powerless can effectively use against the powerful. It can be used for whistleblowing. It can be used as a vehicle for social change. And it can be used to embarrass, harass, and intimidate. Its popularity will rise and fall on this effectiveness, especially in a world where prosecuting the doxers is so difficult.

There’s no good solution for this right now. We all have the right to privacy, and we should be free from doxing. But we’re not, and those of us who are in the public eye have no choice but to rethink our online data shadows.

This essay previously appeared on Vice Motherboard.

EDITED TO ADD: Slashdot thread.

Posted on November 2, 2015 at 6:47 AM • 80 Comments
