Blog: May 2013 Archives

Why We Lie

This, by Judge Kozinski, is from a Federal court ruling about false statements and First Amendment protection:

Saints may always tell the truth, but for mortals living means lying. We lie to protect our privacy (“No, I don’t live around here”); to avoid hurt feelings (“Friday is my study night”); to make others feel better (“Gee you’ve gotten skinny”); to avoid recriminations (“I only lost $10 at poker”); to prevent grief (“The doc says you’re getting better”); to maintain domestic tranquility (“She’s just a friend”); to avoid social stigma (“I just haven’t met the right woman”); for career advancement (“I’m sooo lucky to have a smart boss like you”); to avoid being lonely (“I love opera”); to eliminate a rival (“He has a boyfriend”); to achieve an objective (“But I love you so much”); to defeat an objective (“I’m allergic to latex”); to make an exit (“It’s not you, it’s me”); to delay the inevitable (“The check is in the mail”); to communicate displeasure (“There’s nothing wrong”); to get someone off your back (“I’ll call you about lunch”); to escape a nudnik (“My mother’s on the other line”); to namedrop (“We go way back”); to set up a surprise party (“I need help moving the piano”); to buy time (“I’m on my way”); to keep up appearances (“We’re not talking divorce”); to avoid taking out the trash (“My back hurts”); to duck an obligation (“I’ve got a headache”); to maintain a public image (“I go to church every Sunday”); to make a point (“Ich bin ein Berliner”); to save face (“I had too much to drink”); to humor (“Correct as usual, King Friday”); to avoid embarrassment (“That wasn’t me”); to curry favor (“I’ve read all your books”); to get a clerkship (“You’re the greatest living jurist”); to save a dollar (“I gave at the office”); or to maintain innocence (“There are eight tiny reindeer on the rooftop”)….

An important aspect of personal autonomy is the right to shape one’s public and private persona by choosing when to tell the truth about oneself, when to conceal, and when to deceive. Of course, lies are often disbelieved or discovered, and that, too, is part of the push and pull of social intercourse. But it’s critical to leave such interactions in private hands, so that we can make choices about who we are. How can you develop a reputation as a straight shooter if lying is not an option?

Two related books on the evolutionary psychology of lying: David Livingstone Smith’s Why We Lie, and Dan Ariely’s The Honest Truth About Dishonesty.

Posted on May 30, 2013 at 6:31 AM • 38 Comments

Are We Finally Thinking Sensibly About Terrorism?

This article wonders if we are:

Yet for pretty much the first time there has been a considerable amount of media commentary seeking to put terrorism in context—commentary that concludes, as a Doyle McManus article in the Los Angeles Times put it a day after the attack, “We’re safer than we think.”

Similar tunes were sung by Tom Friedman of the New York Times, Jeff Jacoby of the Boston Globe, David Rothkopf writing for CNN.com, Josh Barro at Bloomberg, John Cassidy at the New Yorker, and Steve Chapman in the Chicago Tribune, even as the Washington Post told us “why terrorism is not scary” and published statistics on its rarity. Bruce Schneier, who has been making these arguments for over a decade, got 360,000 hits doing so for The Atlantic. Even neoconservative Max Boot, a strong advocate of the war in Iraq as a response to 9/11, argues in the Wall Street Journal, “we must do our best to make sure that the terrorists don’t achieve their objective—to terrorize us.”

James Carafano of the conservative Heritage Foundation noted in a radio interview that “the odds of you being killed by a terrorist are less than you being hit by a meteorite.” Carafano’s odds may be a bit off, but his basic point isn’t. At present rates, an American’s chance of being killed by a terrorist is about one in 3.5 million per year—compared, for example, to a yearly chance of dying in an automobile crash of one in 8,200. That could change, of course, if terrorists suddenly become vastly more capable of inflicting damage—as much commentary on terrorism has predicted over the past decade. But we’re not hearing much of that anymore.

In a 60 Minutes interview a decade ago filmmaker Michael Moore noted, “The chances of any of us dying in a terrorist incident is very, very, very small.” Bob Simon, his interlocutor, responded, “No one sees the world like that.”

Both statements were pretty much true then. However, the unprecedented set of articles projecting a more restrained, and broader, perspective suggests that Simon’s wisdom may need some updating, and that Moore is beginning to have some company.
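
The quoted figures are easy to sanity-check. Here is a minimal back-of-the-envelope comparison in Python, using only the two annual odds cited above:

    # Annual risk figures quoted above; both are per-person, per-year odds.
    p_terrorism = 1 / 3_500_000  # chance an American is killed by a terrorist
    p_car_crash = 1 / 8_200      # chance of dying in an automobile crash

    ratio = p_car_crash / p_terrorism
    print(f"A car crash is roughly {ratio:,.0f} times more likely to kill you "
          f"in a given year than a terrorist.")  # roughly 427 times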

There’s also this; and this, by Andrew Sullivan; and this, by John Cole. And these two polls.

And, of course, President Obama himself declared that “Americans refuse to be terrorized.”

Posted on May 29, 2013 at 11:22 AM • 28 Comments

Nassim Nicholas Taleb on Risk Perception

From his Facebook page:

An illustration of how the news are largely created, bloated and magnified by journalists. I have been in Lebanon for the past 24h, and there were shells falling on a suburb of Beirut. Yet the news did not pass the local *social filter* and did [not] reach me from social sources…. The shelling is the kind of thing that is only discussed in the media because journalists can use it self-servingly to weave a web-worthy attention-grabbing narrative.

It is only through people away from the place discovering it through Google News or something even more stupid, the NYT, that I got the information; these people seemed impelled to inquire about my safety.

What kills people in Lebanon: cigarettes, sugar, coca cola and other chemical monstrosities, iatrogenics, hypochondria, overtreatment (Lipitor etc.), refined wheat pita bread, fast cars, lack of exercise, angry husbands (or wives), etc., things that are not interesting enough to make it to Google News.

A Roman citizen 2000 years ago was more calibrated in his risk assessment than an internet user today….

Posted on May 28, 2013 at 12:52 PM • 25 Comments

The Politics of Security in a Democracy

Terrorism causes fear, and we overreact to that fear. Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are, and we fear them more than probability indicates we should.

Our leaders are just as prone to this overreaction as we are. But aside from basic psychology, there are other reasons that it’s smart politics to exaggerate terrorist threats, and security threats in general.

The first is that we respond to a strong leader. Bill Clinton famously said: “When people feel uncertain, they’d rather have somebody that’s strong and wrong than somebody who’s weak and right.” He’s right.

The second is that doing something—anything—is good politics. A politician wants to be seen as taking charge, demanding answers, fixing things. It just doesn’t look as good to sit back and claim that there’s nothing to do. The logic is along the lines of: “Something must be done. This is something. Therefore, we must do it.”

The third is that the “fear preacher” wins, regardless of the outcome. Imagine two politicians today. One of them preaches fear and draconian security measures. The other is someone like me, who tells people that terrorism is a negligible risk, that risk is part of life, and that while some security is necessary, we should mostly just refuse to be terrorized and get on with our lives.

Fast-forward 10 years. If I’m right and there have been no more terrorist attacks, the fear preacher takes credit for keeping us safe. But if a terrorist attack has occurred, my government career is over. Even if the incidence of terrorism is as ridiculously low as it is today, there’s no benefit for a politician to take my side of that gamble.

The fourth and final reason is money. Every new security technology, from surveillance cameras to high-tech fusion centers to airport full-body scanners, has a for-profit corporation lobbying for its purchase and use. Given the three other reasons above, it’s easy—and probably profitable—for a politician to make them happy and say yes.

For any given politician, the implications of these four reasons are straightforward. Overestimating the threat is better than underestimating it. Doing something about the threat is better than doing nothing. Doing something that is explicitly reactive is better than being proactive. (If you’re proactive and you’re wrong, you’ve wasted money. If you’re proactive and you’re right but no longer in power, whoever is in power is going to get the credit for what you did.) Visible is better than invisible. Creating something new is better than fixing something old.

Those last two maxims are why it’s better for a politician to fund a terrorist fusion center than to pay for more Arabic translators for the National Security Agency. No one’s going to see the additional appropriation in the NSA’s secret budget. On the other hand, a high-tech computerized fusion center is going to make front page news, even if it doesn’t actually do anything useful.

This leads to another phenomenon about security and government. Once a security system is in place, it can be very hard to dislodge it. Imagine a politician who objects to some aspect of airport security: the liquid ban, the shoe removal, something. If he pushes to relax security, he gets the blame if something bad happens as a result. No one wants to roll back a police power and have the lack of that power cause a well-publicized death, even if it’s a one-in-a-billion fluke.

We’re seeing this force at work in the bloated terrorist no-fly and watch lists; agents have lots of incentive to put someone on the list, but absolutely no incentive to take anyone off. We’re also seeing this in the Transportation Security Administration’s attempt to reverse the ban on small blades on airplanes. Twice it tried to make the change, and twice fearful politicians prevented it from going through with it.

Lots of unneeded and ineffective security measures are perpetrated by a government bureaucracy that is primarily concerned about the security of its members’ careers. They know the voters will punish them more if they fail to secure against a repetition of the last attack, and less if they fail to anticipate the next one.

What can we do? Well, the first step toward solving a problem is recognizing that you have one. These are not iron-clad rules; they’re tendencies. If we can keep these tendencies and their causes in mind, we’re more likely to end up with sensible security measures that are commensurate with the threat, instead of a lot of security theater and draconian police powers that are not.

Our leaders’ job is to resist these tendencies. Our job is to support politicians who do resist.

This essay originally appeared on CNN.com.

EDITED TO ADD (6/4): This essay has been translated into Swedish.

EDITED TO ADD (6/14): A similar essay, on the politics of terrorism defense.

Posted on May 28, 2013 at 5:09 AM • 39 Comments

Friday Squid Blogging: Eating Giant Squid

How does he know this?

Chris Cosentino, the Bay Area’s “Offal Chef” at Incanto in San Francisco and PIGG at Umamicatessen in Los Angeles, opted for the most intimidating choice of all—giant squid. “When it comes to underutilized fish, I wish the public wasn’t so afraid of different shapes and sizes outside of the standard fillet,” he said.

“I think the giant squid is a perfect example of an undervalued ocean creature. Everyone isn’t afraid of squid but the size and flavor of the giant squid scares people because it has a very intense flavor but it is quite delicious.”

I am surprised he has tasted giant squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on May 24, 2013 at 4:54 PM • 45 Comments

Training Baggage Screeners

The research in G. Giguère and B.C. Love, “Limits in decision making arise from limits in memory retrieval,” Proceedings of the National Academy of Sciences 110(19) (2013), has applications in training airport baggage screeners.

Abstract: Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.
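
The paper’s central effect is easy to simulate. Below is a toy sketch in Python, not the authors’ actual model: a classifier that stochastically retrieves a few similar exemplars from memory, trained either on the actual overlapping category distributions or on an “idealized” version with the ambiguous boundary items removed. Every distribution and parameter here is invented for illustration; with retrieval limited to a handful of noisy samples, the idealized training set should yield noticeably higher test accuracy.

    import random
    random.seed(0)

    MEANS = {"benign": -1.0, "threat": 1.0}  # overlapping 1-D categories

    def draw(label):
        return random.gauss(MEANS[label], 1.0)

    def training_set(n, idealized):
        """Actual: items drawn as-is. Idealized: ambiguous items discarded."""
        items = []
        while len(items) < n:
            label = random.choice(["benign", "threat"])
            x = draw(label)
            if idealized and abs(x) < 1.0:  # drop hard boundary exemplars
                continue
            items.append((x, label))
        return items

    def classify(x, memory, k=3):
        """Stochastic, selective retrieval: sample k similar exemplars, vote."""
        weights = [1.0 / (0.1 + abs(x - mx)) for mx, _ in memory]
        retrieved = random.choices(memory, weights=weights, k=k)
        votes = sum(1 if lbl == "threat" else -1 for _, lbl in retrieved)
        return "threat" if votes > 0 else "benign"

    def accuracy(memory, trials=20_000):
        hits = 0
        for _ in range(trials):
            label = random.choice(["benign", "threat"])
            hits += classify(draw(label), memory) == label
        return hits / trials

    for idealized in (False, True):
        mem = training_set(200, idealized)
        name = "idealized" if idealized else "actual"
        print(f"trained on {name} distribution: {accuracy(mem):.1%} correct")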

Posted on May 24, 2013 at 12:17 PM • 11 Comments

New Report on Teens, Social Media, and Privacy

Interesting report from the Pew Internet and American Life Project:

Teens are sharing more information about themselves on their social media profiles than they did when we last surveyed in 2006:

  • 91% post a photo of themselves, up from 79% in 2006.
  • 71% post their school name, up from 49%.
  • 71% post the city or town where they live, up from 61%.
  • 53% post their email address, up from 29%.
  • 20% post their cell phone number, up from 2%.

60% of teen Facebook users set their Facebook profiles to private (friends only), and most report high levels of confidence in their ability to manage their settings.

danah boyd points out something interesting in the data:

My favorite finding of Pew’s is that 58% of teens cloak their messages either through inside jokes or other obscure references, with more older teens (62%) engaging in this practice than younger teens (46%)….

While adults are often anxious about shared data that might be used by government agencies, advertisers, or evil older men, teens are much more attentive to those who hold immediate power over them—parents, teachers, college admissions officers, army recruiters, etc. To adults, services like Facebook may seem “private” because you can use privacy tools, but they don’t feel that way to youth who feel like their privacy is invaded on a daily basis. (This, btw, is part of why teens feel like Twitter is more intimate than Facebook. And why you see data like Pew’s that show that teens on Facebook have, on average, 300 friends while, on Twitter, they have 79 friends.) Most teens aren’t worried about strangers; they’re worried about getting in trouble.

Over the last few years, I’ve watched as teens have given up on controlling access to content. It’s too hard, too frustrating, and technology simply can’t fix the power issues. Instead, what they’ve been doing is focusing on controlling access to meaning. A comment might look like it means one thing, when in fact it means something quite different. By cloaking their accessible content, teens reclaim power over those they know are surveilling them. This practice is still only really emerging en masse, so I was delighted that Pew could put numbers to it. I should note that, as Instagram grows, I’m seeing more and more of this. A picture of a donut may not be about a donut. While adults worry about how teens’ demographic data might be used, teens are becoming much more savvy at finding ways to encode their content and achieve privacy in public.

Posted on May 24, 2013 at 8:40 AM • 18 Comments

One-Shot vs. Iterated Prisoner's Dilemma

This post by Aleatha Parker-Wood is very applicable to the things I wrote in Liars & Outliers:

A lot of fundamental social problems can be modeled as a disconnection between people who believe (correctly or incorrectly) that they are playing a non-iterated game (in the game theory sense of the word), and people who believe (correctly or incorrectly) that they are playing an iterated game.

For instance, mechanisms such as reputation mechanisms, ostracism, shaming, etc., are all predicated on the idea that the person you’re shaming will reappear and have further interactions with the group. Legal punishment is only useful if you can catch the person, and if the cost of the punishment is more than the benefit of the crime.

If it is possible to act as if the game you are playing is a one-shot game (for instance, you have a very large population to hide in, you don’t need to ever interact with people again, or you can be anonymous), your optimal strategies are going to be different than if you will have to play the game many times, and live with the legal or social consequences of your actions. If you can make enough money as CEO to retire immediately, you may choose to do so, even if you’re so terrible at running the company that no one will ever hire you again.

Social cohesion can be thought of as a manifestation of how “iterated” people feel their interactions are, how likely they are to interact with the same people again and again and have to deal with long term consequences of locally optimal choices, or whether they feel they can “opt out” of consequences of interacting with some set of people in a poor way.
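
The distinction is easy to see in a toy simulation. This sketch uses standard prisoner’s-dilemma payoffs (invented numbers, with the usual ordering: temptation > reward > punishment > sucker) and a retaliating tit-for-tat partner. Defection wins the single round; cooperation wins the repeated game.

    # (my move, their move) -> my payoff, standard prisoner's dilemma values
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def play(my_strategy, their_strategy, rounds):
        my_total, history = 0, []
        for _ in range(rounds):
            mine = my_strategy(history)
            theirs = their_strategy([(b, a) for a, b in history])  # their view
            my_total += PAYOFF[(mine, theirs)]
            history.append((mine, theirs))
        return my_total

    def always_defect(history):
        return "D"

    def always_cooperate(history):
        return "C"

    def tit_for_tat(history):
        return history[-1][1] if history else "C"  # copy opponent's last move

    # One-shot: defecting against a cooperative partner pays 5, cooperating pays 3.
    print("one round, defect:     ", play(always_defect, tit_for_tat, 1))
    print("one round, cooperate:  ", play(always_cooperate, tit_for_tat, 1))
    # Iterated: tit-for-tat retaliates, so cooperation wins over 100 rounds.
    print("100 rounds, defect:    ", play(always_defect, tit_for_tat, 100))     # 104
    print("100 rounds, cooperate: ", play(always_cooperate, tit_for_tat, 100))  # 300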

Posted on May 23, 2013 at 9:18 AM • 25 Comments

"The Global Cyber Game"

This 127-page report was just published by the UK Defence Academy. I have not read it yet, but it looks really interesting.

Executive Summary: This report presents a systematic way of thinking about cyberpower and its use by a variety of global players. The urgency of addressing cyberpower in this way is a consequence of the very high value of the Internet and the hazards of its current militarization.

Cyberpower and cyber security are conceptualized as a ‘Global Game’ with a novel ‘Cyber Gameboard’ consisting of a nine-cell grid. The horizontal direction on the grid is divided into three columns representing aspects of information (i.e. cyber): connection, computation and cognition. The vertical direction on the grid is divided into three rows representing types of power: coercion, co-option, and cooperation. The nine cells of the grid represent all the possible combinations of power and information, that is, forms of cyberpower.

The Cyber Gameboard itself is also an abstract representation of the surface of cyberspace, or C-space as defined in this report. C-space is understood as a networked medium capable of conveying various combinations of power and information to produce effects in physical or ‘flow space,’ referred to as F-space in this report. Game play is understood as the projection via C-space of a cyberpower capability existing in any one cell of the gameboard to produce an effect in F-space vis-a-vis another player in any other cell of the gameboard. By default, the Cyber Game is played either actively or passively by all those using network connected computers. The players include states, businesses, NGOs, individuals, non-state political groups, and organized crime, among others. Each player is seen as having a certain level of cyberpower when its capability in each cell is summed across the whole board. In general states have the most cyberpower.

The possible future path of the game is depicted by two scenarios, N-topia and N-crash. These are the stakes for which the Cyber Game is played. N-topia represents the upside potential of the game, in which the full value of a globally connected knowledge society is realized. N-crash represents the downside potential, in which militarization and fragmentation of the Internet cause its value to be substantially destroyed. Which scenario eventuates will be determined largely by the overall pattern of play of the Cyber Game.

States have a high level of responsibility for determining the outcome. The current pattern of play is beginning to resemble traditional state-on-state geopolitical conflict. This puts the civil Internet at risk, and civilian cyber players are already getting caught in the crossfire. As long as the civil Internet remains undefended and easily permeable to cyber attack it will be hard to achieve the N-topia scenario.

Defending the civil Internet in depth, and hardening it by re-architecting will allow its full social and economic value to be realized but will restrict the potential for espionage and surveillance by states. This trade-off is net positive and in accordance with the espoused values of Western-style democracies. It does however call for leadership based on enlightened self-interest by state players.
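
The gameboard itself is compact enough to sketch as a data structure. A minimal Python sketch follows; the two axes are taken from the summary above, while the players and example capabilities are invented for illustration:

    INFO_LAYERS = ("connection", "computation", "cognition")  # from the report
    POWER_TYPES = ("coercion", "co-option", "cooperation")    # from the report

    # Each of the nine cells holds the players with a capability in it.
    gameboard = {(power, info): set()
                 for power in POWER_TYPES for info in INFO_LAYERS}

    gameboard[("coercion", "connection")].add("state A")   # e.g., a DDoS capability
    gameboard[("cooperation", "cognition")].add("NGO B")   # e.g., open standards work

    def cyberpower(player):
        """A player's cyberpower: capability summed across the whole board."""
        return sum(player in cell for cell in gameboard.values())

    print(cyberpower("state A"))  # => 1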

Posted on May 22, 2013 at 12:05 PM • 11 Comments

DDOS as Civil Disobedience

For a while now, I have been thinking about what civil disobedience looks like in the Internet Age. Certainly DDOS attacks, and politically motivated hacking in general, are a part of that. This is one of the reasons I found Molly Sauter’s recent thesis, “Distributed Denial of Service Actions and the Challenge of Civil Disobedience on the Internet,” so interesting:

Abstract: This thesis examines the history, development, theory, and practice of distributed denial of service actions as a tactic of political activism. DDOS actions have been used in online political activism since the early 1990s, though the tactic has recently attracted significant public attention with the actions of Anonymous and Operation Payback in December 2010. Guiding this work is the overarching question of how civil disobedience and disruptive activism can be practiced in the current online space. The internet acts as a vital arena of communication, self expression, and interpersonal organizing. When there is a message to convey, words to get out, people to organize, many will turn to the internet as the zone of that activity. Online, people sign petitions, investigate stories and rumors, amplify links and videos, donate money, and show their support for causes in a variety of ways. But as familiar and widely accepted activist tools—petitions, fundraisers, mass letter-writing, call-in campaigns and others—find equivalent practices in the online space, is there also room for the tactics of disruption and civil disobedience that are equally familiar from the realm of street marches, occupations, and sit-ins? This thesis grounds activist DDOS historically, focusing on early deployments of the tactic as well as modern instances to trace its development over time, both in theory and in practice. Through that examination, as well as tool design and development, participant identity, and state and corporate responses, this thesis presents an account of the development and current state of activist DDOS actions. It ends by presenting an analytical framework for the analysis of activist DDOS actions.

One of the problems with the legal system is that it doesn’t differentiate between civil disobedience and “normal” criminal activity on the Internet, though it does in the real world.

Posted on May 22, 2013 at 6:24 AM • 49 Comments

Surveillance and the Internet of Things

The Internet has turned into a massive surveillance tool. We’re constantly monitored on the Internet by hundreds of companies—both familiar and unfamiliar. Everything we do there is recorded, collected, and collated—sometimes by corporations wanting to sell us stuff and sometimes by governments wanting to keep an eye on us.

Ephemeral conversation is over. Wholesale surveillance is the norm. Maintaining privacy from these powerful entities is basically impossible, and any illusion of privacy we maintain is based either on ignorance or on our unwillingness to accept what’s really going on.

It’s about to get worse, though. Companies such as Google may know more about your personal interests than your spouse, but so far it’s been limited by the fact that these companies only see computer data. And even though your computer habits are increasingly being linked to your offline behavior, it’s still only behavior that involves computers.

The Internet of Things refers to a world where much more than our computers and cell phones is Internet-enabled. Soon there will be Internet-connected modules on our cars and home appliances. Internet-enabled medical devices will collect real-time health data about us. There’ll be Internet-connected tags on our clothing. In its extreme, everything can be connected to the Internet. It’s really just a matter of time, as these self-powered wireless-enabled computers become smaller and cheaper.

Lots has been written about the “Internet of Things” and how it will change society for the better. It’s true that it will make a lot of wonderful things possible, but the “Internet of Things” will also allow for an even greater amount of surveillance than there is today. The Internet of Things gives the governments and corporations that follow our every move something they don’t yet have: eyes and ears.

Soon everything we do, both online and offline, will be recorded and stored forever. The only question remaining is who will have access to all of this information, and under what rules.

We’re seeing an initial glimmer of this from how location sensors on your mobile phone are being used to track you. Of course your cell provider needs to know where you are; it can’t route your phone calls to your phone otherwise. But most of us broadcast our location information to many other companies whose apps we’ve installed on our phone. Google Maps certainly, but also a surprising number of app vendors who collect that information. It can be used to determine where you live, where you work, and who you spend time with.

Another early adopter was Nike, whose Nike+ shoes communicate with your iPod or iPhone and track your exercising. More generally, medical devices are starting to be Internet-enabled, collecting and reporting a variety of health data. Wiring appliances to the Internet is one of the pillars of the smart electric grid. Yes, there are huge potential savings associated with the smart grid, but it will also allow power companies—and anyone they decide to sell the data to—to monitor how people move about their house and how they spend their time.

Drones are another “thing” moving onto the Internet. As their price continues to drop and their capabilities increase, they will become a very powerful surveillance tool. Their cameras are powerful enough to see faces clearly, and there are enough tagged photographs on the Internet to identify many of us. We’re not yet up to a real-time Google Earth equivalent, but it’s not more than a few years away. And drones are just a specific application of CCTV cameras, which have been monitoring us for years, and will increasingly be networked.

Google’s Internet-enabled glasses—Google Glass—are another major step down this path of surveillance. Their ability to record both audio and video will bring ubiquitous surveillance to the next level. Once they’re common, you might never know when you’re being recorded in both audio and video. You might as well assume that everything you do and say will be recorded and saved forever.

In the near term, at least, the sheer volume of data will limit the sorts of conclusions that can be drawn. The invasiveness of these technologies depends on asking the right questions. For example, if a private investigator is watching you in the physical world, she or he might observe odd behavior and investigate further based on that. Such serendipitous observations are harder to achieve when you’re filtering databases based on pre-programmed queries. In other words, it’s easier to ask questions about what you purchased and where you were than to ask what you did with your purchases and why you went where you did. These analytical limitations also mean that companies like Google and Facebook will benefit more from the Internet of Things than individuals—not only because they have access to more data, but also because they have more sophisticated query technology. And as technology continues to improve, the ability to automatically analyze this massive data stream will improve.

In the longer term, the Internet of Things means ubiquitous surveillance. If an object “knows” you have purchased it, and communicates via either Wi-Fi or the mobile network, then whoever or whatever it is communicating with will know where you are. Your car will know who is in it, who is driving, and what traffic laws that driver is following or ignoring. No need to show ID; your identity will already be known. Store clerks could know your name, address, and income level as soon as you walk through the door. Billboards will tailor ads to you, and record how you respond to them. Fast food restaurants will know what you usually order, and exactly how to entice you to order more. Lots of companies will know whom you spend your days—and nights—with. Facebook will know about any new relationship status before you bother to change it on your profile. And all of this information will be saved, correlated, and studied. Even now, it feels a lot like science fiction.

Will you know any of this? Will your friends? It depends. Lots of these devices have, and will have, privacy settings. But these settings are remarkable not in how much privacy they afford, but in how much they deny. Access will likely be similar to your browsing habits, your files stored on Dropbox, your searches on Google, and your text messages from your phone. All of your data is saved by those companies—and many others—correlated, and then bought and sold without your knowledge or consent. You’d think that your privacy settings would keep random strangers from learning everything about you, but they only keep out random strangers who don’t pay for the privilege—or who don’t work for the government and have the ability to demand the data. Power is what matters here: you’ll be able to keep the powerless from invading your privacy, but you’ll have no ability to prevent the powerful from doing it again and again.

This essay originally appeared on the Guardian.

EDITED TO ADD (6/14): Another article on the subject.

Posted on May 21, 2013 at 6:15 AM • 55 Comments

Security Risks of Too Much Security

All of the anti-counterfeiting features of the new Canadian $100 bill are resulting in people not bothering to verify the bills.

The fanfare about the security features on the bills may be part of the problem, said RCMP Sgt. Duncan Pound.

“Because the polymer series’ notes are so secure … there’s almost an overconfidence among retailers and the public in terms of when you sort of see the strip, the polymer looking materials, everybody says ‘oh, this one’s going to be good because you know it’s impossible to counterfeit,'” he said.

“So people don’t actually check it.”

Posted on May 20, 2013 at 6:34 AM • 40 Comments

Bluetooth-Controlled Door Lock

Here is a new lock that you can control via Bluetooth and an iPhone app.

That’s pretty cool, and I can imagine all sorts of reasons to get one of those. But I’m sure there are all sorts of unforeseen security vulnerabilities in this system. And even worse, a single vulnerability can affect all the locks. Remember that vulnerability found last year in hotel electronic locks?

Anyone care to guess how long before some researcher finds a way to hack this one? And how well the maker anticipated the need to update the firmware to fix the vulnerability once someone finds it?

I’m not saying that you shouldn’t use this lock, only that you should understand that new technology brings new security risks, and electronic technology brings new kinds of security risks. Security is a trade-off, and the trade-off is particularly stark in this case.

Posted on May 16, 2013 at 8:45 AM • 65 Comments

Transparency and Accountability

As part of the fallout of the Boston bombings, we’re probably going to get some new laws that give the FBI additional investigative powers. As with the Patriot Act after 9/11, the debate over whether these new laws are helpful will be minimal, but the effects on civil liberties could be large. Even though most people are skeptical about sacrificing personal freedoms for security, it’s hard for politicians to say no to the FBI right now, and it’s politically expedient to demand that something be done.

If our leaders can’t say no—and there’s no reason to believe they can—there are two concepts that need to be part of any new counterterrorism laws, and investigative laws in general: transparency and accountability.

Long ago, we realized that simply trusting people and government agencies to always do the right thing doesn’t work, so we need to check up on them. In a democracy, transparency and accountability are how we do that. It’s how we ensure that we get both effective and cost-effective government. It’s how we prevent those we trust from abusing that trust, and protect ourselves when they do. And it’s especially important when security is concerned.

First, we need to ensure that the stuff we’re paying money for actually works and has a measurable impact. Law-enforcement organizations regularly invest in technologies that don’t make us any safer. The TSA, for example, could devote an entire museum to expensive but ineffective systems: puffer machines, body scanners, FAST behavioral screening, and so on. Local police departments have been wasting lots of post-9/11 money on unnecessary high-tech weaponry and equipment. The occasional high-profile success aside, police surveillance cameras have been shown to be a largely ineffective police tool.

Sometimes honest mistakes lead organizations to invest in these technologies. Sometimes there’s self-deception and mismanagement—and far too often lobbyists are involved. Given the enormous amount of security money post-9/11, you inevitably end up with an enormous amount of waste. Transparency and accountability are how we keep all of this in check.

Second, we need to ensure that law enforcement does what we expect it to do and nothing more. Police powers are invariably abused. Mission creep is inevitable, and it results in laws designed to combat one particular type of crime being used for an ever-widening array of crimes. Transparency is the only way we have of knowing when this is going on.

For example, that’s how we learned that the FBI is abusing National Security Letters. Traditionally, we use the warrant process to protect ourselves from police overreach. It’s not enough for the police to want to conduct a search; they also need to convince a neutral third party—a judge—that the search is in the public interest and will respect the rights of those searched. That’s accountability, and it’s the very mechanism that NSLs were exempted from.

When laws are broken, accountability is how we punish those who abused their power. It’s how, for example, we correct racial profiling by police departments. And it’s a lack of accountability that permits the FBI to get away with massive data collection until exposed by a whistleblower or noticed by a judge.

Third, transparency and accountability keep both law enforcement and politicians from lying to us. The Bush Administration lied about the extent of the NSA’s warrantless wiretapping program. The TSA lied about the ability of full-body scanners to save naked images of people. We’ve been lied to about the lethality of tasers, when and how the FBI eavesdrops on cell-phone calls, and about the existence of surveillance records. Without transparency, we would never know.

Two decades ago, the FBI was heavily lobbying Congress for a law to give it new wiretapping powers: a law known as CALEA. One of its key justifications was that existing law didn’t allow it to perform speedy wiretaps during kidnapping investigations. It sounded plausible—and who wouldn’t feel sympathy for kidnapping victims?—but when civil-liberties organizations analyzed the actual data, they found that it was just a story; there were no instances of wiretapping in kidnapping investigations. Without transparency, we would never have known that the FBI was making up stories to scare Congress.

If we’re going to give the government any new powers, we need to ensure that there’s oversight. Sometimes this oversight is before action occurs. Warrants are a great example. Sometimes it’s after action occurs: public reporting, audits by inspectors general, open hearings, notice to those affected, or some other mechanism. Too often, law enforcement tries to exempt itself from this principle by supporting laws that are specifically excused from oversight…or by establishing secret courts that just rubber-stamp government wiretapping requests.

Furthermore, we need to ensure that mechanisms for accountability have teeth and are used.

As we respond to the threat of terrorism, we must remember that there are other threats as well. A society without transparency and accountability is the very definition of a police state. And while a police state might have a low crime rate—especially if you don’t define police corruption and other abuses of power as crime—and an even lower terrorism rate, it’s not a society that most of us would willingly choose to live in.

We already give law enforcement enormous power to intrude into our lives. We do this because we know they need this power to catch criminals, and we’re all safer thereby. But because we recognize that a powerful police force is itself a danger to society, we must temper this power with transparency and accountability.

This essay previously appeared on TheAtlantic.com.

Posted on May 14, 2013 at 5:48 AM • 39 Comments

The Onion on Browser Security

Wise advice:

At Chase Bank, we recognize the value of online banking—it’s quick, convenient, and available any time you need it. Unfortunately, though, the threats posed by malware and identity theft are very real and all too common nowadays. That’s why, when you’re finished with your online banking session, we recommend three simple steps to protect your personal information: log out of your account, close your web browser, and then charter a seafaring vessel to take you 30 miles out into the open ocean and throw your computer overboard.

And while we’re talking about the Onion, they were recently hacked by Syria (either the government or someone on their side). They responded in their own way.

EDITED TO ADD (5/11): How The Onion got hacked.

Posted on May 10, 2013 at 1:49 PM • 28 Comments

Mail Cover

From a FOIAed Department of Transportation document on investigative techniques:

A “mail cover” is the process by which the U.S. Postal Service records any data appearing on the outside cover of any class of mail, sealed or unsealed, or by which a record is made of the contents of unsealed (second-, third-, or fourth-class) mail matter as allowed by law. This “mail cover” is done to obtain information in the interest of protecting national security, locating a fugitive, or obtaining evidence of commission or attempted commission of a felony crime, or assist in the identification of property, proceeds, or assets forfeitable under law.

Seems to be the paper mail equivalent of a pen register. I’d never heard of the term before.

EDITED TO ADD (5/11): Here is a 2002 NPR interview on mail cover, based on these two articles.

Posted on May 10, 2013 at 6:47 AM • 38 Comments

The Economist on Guantanamo

Maybe the tide is turning:

America is in a hole. The last response of the blowhards and cowards who have put it there is always: “So what would you do: set them free?” Our answer remains, yes. There is clearly a risk that some of them would then commit some act of violence—in Yemen, elsewhere in the Middle East or even in America itself. That risk can be lessened by surveillance. But even if another outrage were to happen, the evil of “Gitmo” has recruited far more people to terrorism than a mere 166. Mr Obama should think about America’s founding principles, take out his pen and end this stain on its history.

I agree 100%.

This isn’t the first time people have pointed out that our politics are creating more terrorists than they’re killing—especially our drone strikes—but I don’t expect this sort of security trade-off analysis from the Economist.

Posted on May 9, 2013 at 5:16 AM • 72 Comments

Reidentifying Anonymous Data

Latanya Sweeney has demonstrated how easy it can be to identify people from their birth date, gender, and zip code. The anonymous data she reidentified happened to be DNA data, but that’s not relevant to her methods or results.

Of the 1,130 volunteers Sweeney and her team reviewed, about 579 provided zip code, date of birth and gender, the three key pieces of information she needs to identify anonymous people combined with information from voter rolls or other public records. Of these, Sweeney succeeded in naming 241, or 42% of the total. The Personal Genome Project confirmed that 97% of the names matched those in its database if nicknames and first name variations were included.

Her results are described here.
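
The underlying technique is nothing more exotic than a database join on quasi-identifiers. Here is a minimal sketch in Python; the records and names are invented, and Sweeney’s actual procedure (nickname handling, first-name variants) is more involved:

    # Join an "anonymized" dataset to a public voter roll on the
    # (zip, date of birth, gender) quasi-identifier. All data is made up.
    anonymous_records = [
        {"zip": "02138", "dob": "1965-07-14", "sex": "F", "dna": "sample-17"},
    ]
    voter_roll = [
        {"zip": "02138", "dob": "1965-07-14", "sex": "F", "name": "Jane Doe"},
        {"zip": "02139", "dob": "1971-01-02", "sex": "M", "name": "John Roe"},
    ]

    def quasi_id(record):
        return (record["zip"], record["dob"], record["sex"])

    voters_by_qid = {}
    for voter in voter_roll:
        voters_by_qid.setdefault(quasi_id(voter), []).append(voter)

    for record in anonymous_records:
        matches = voters_by_qid.get(quasi_id(record), [])
        if len(matches) == 1:  # a unique match reidentifies the record
            print(f'{record["dna"]} is probably {matches[0]["name"]}')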

Posted on May 8, 2013 at 1:54 PM • 10 Comments

Evacuation Alerts at the Airport

Last week, an employee error caused the monitors at LAX to display a building evacuation order:

At a little before 9:47 p.m., the message read: “An emergency has been declared in the terminal. Please evacuate.” An airport police source said officers responded to the scene at the Tom Bradley International Terminal, believing the system had been hacked. But an airport spokeswoman said it was an honest mistake.

I think the real news has nothing to do with how susceptible those systems are to hacking. It’s this line:

Castles said there were no reports of passengers evacuating the terminal and the problem was fixed within about 10 minutes.

So now we know: building evacuation announcements on computer screens are ineffective.

She said airport officials are looking into ways to ensure a similar problem does not occur again.

That probably means that they’re going to make sure an erroneous evacuation message doesn’t appear on the computer screens again, not that they’re going to make sure people don’t ignore the evacuation message when there is an actual emergency.

Posted on May 8, 2013 at 6:32 AM • 35 Comments

Is the U.S. Government Recording and Saving All Domestic Telephone Calls?

I have no idea if “former counterterrorism agent for the FBI” Tom Clemente knows what he’s talking about, but that’s certainly what he implies here:

More recently, two sources familiar with the investigation told CNN that Russell had spoken with Tamerlan after his picture appeared on national television April 18.

What exactly the two said remains under investigation, the sources said.

Investigators may be able to recover the conversation, said Tom Clemente, a former counterterrorism agent for the FBI.

“We certainly have ways in national security investigations to find out exactly what was said in that conversation,” he told CNN’s Erin Burnett on Monday, adding that “all of that stuff is being captured as we speak whether we know it or like it or not.”

“It’s not necessarily something that the FBI is going to want to present in court, but it may help lead the investigation and/or lead to questioning of her,” he said.

I’m very skeptical about Clemente’s comments. He left the FBI shortly after 9/11, and he didn’t have any special security clearances. My guess is that he is speaking more about what the NSA and FBI could potentially do, and not about what they are doing right now. And I don’t believe that the NSA could save every domestic phone call, not at this time. Possibly after the Utah data center is finished, but not now. They could be saving all the metadata now, but I’m skeptical about that too.

Other commentary.

EDITED TO ADD (5/7): Interesting comments. I think it’s worth going through the math. There are two possible ways to do this. The first is to collect, compress, transport, and store. The second is to collect, convert to text, transport, and store. So, what data rates, processing requirements, and storage sizes are we talking about?
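
As a starting point, here is one version of that math in Python. Every input is an assumption I have made up for illustration; change the numbers and the conclusions move accordingly:

    # Back-of-the-envelope storage math for recording all U.S. domestic calls.
    # All inputs below are assumptions for illustration, not sourced figures.
    population      = 300e6   # people making calls
    minutes_per_day = 10      # average telephone minutes per person per day
    voice_rate_bps  = 8_000   # heavily compressed speech, ~8 kbit/s
    words_per_min   = 150     # speaking rate, for the speech-to-text variant
    bytes_per_word  = 6       # average English word plus a space

    call_seconds = population * minutes_per_day * 60

    audio_per_day = call_seconds * voice_rate_bps / 8  # bytes
    text_per_day = population * minutes_per_day * words_per_min * bytes_per_word

    PB = 1e15
    print(f"compressed audio: {audio_per_day / PB:.2f} PB/day, "
          f"{audio_per_day * 365 / PB:.0f} PB/year")  # ~0.18 PB/day, ~66 PB/year
    print(f"text transcripts: {text_per_day / PB:.4f} PB/day, "
          f"{text_per_day * 365 / PB:.1f} PB/year")   # ~0.003 PB/day, ~1 PB/year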

Posted on May 7, 2013 at 12:57 PM • 84 Comments

Intelligence Analysis and the Connect-the-Dots Metaphor

The FBI and the CIA are being criticized for not keeping better track of Tamerlan Tsarnaev in the months before the Boston Marathon bombings. How could they have ignored such a dangerous person? How do we reform the intelligence community to ensure this kind of failure doesn’t happen again?

It’s an old song by now, one we heard after the 9/11 attacks in 2001 and after the Underwear Bomber’s failed attack in 2009. The problem is that connecting the dots is a bad metaphor, and focusing on it makes us more likely to implement useless reforms.

Connecting the dots in a coloring book is easy and fun. They’re right there on the page, and they’re all numbered. All you have to do is move your pencil from one dot to the next, and when you’re done, you’ve drawn a sailboat. Or a tiger. It’s so simple that 5-year-olds can do it.

But in real life, the dots can only be numbered after the fact. With the benefit of hindsight, it’s easy to draw lines from a Russian request for information to a foreign visit to some other piece of information that might have been collected.

In hindsight, we know who the bad guys are. Before the fact, there are an enormous number of potential bad guys.

How many? We don’t know. But we know that the no-fly list had 21,000 people on it last year. The Terrorist Identities Datamart Environment, also known as the watch list, has 700,000 names on it.

We have no idea how many potential “dots” the FBI, CIA, NSA and other agencies collect, but it’s easily in the millions. It’s easy to work backwards through the data and see all the obvious warning signs. But before a terrorist attack, when there are millions of dots—some important but the vast majority unimportant—uncovering plots is a lot harder.

Rather than thinking of intelligence as a simple connect-the-dots picture, think of it as a million unnumbered pictures superimposed on top of each other. Or a random-dot stereogram. Is it a sailboat, a puppy, two guys with pressure-cooker bombs, or just an unintelligible mess of dots? You try to figure it out.

It’s not a matter of not enough data, either.

Piling more data onto the mix makes it harder, not easier. The best way to think of it is a needle-in-a-haystack problem; the last thing you want to do is increase the amount of hay you have to search through. The television show Person of Interest is fiction, not fact.
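
One way to make the haystack point concrete is base-rate arithmetic; this is my illustration, not the essay’s. Even an implausibly accurate dot-connecting system buries its analysts in false positives, because actual plotters are so rare:

    # Base-rate arithmetic for the needle-in-a-haystack problem. The detector's
    # accuracy figures are invented and wildly optimistic; plotters are assumed.
    population  = 300e6   # people generating "dots"
    plotters    = 100     # assumed number of actual plotters among them
    hit_rate    = 0.99    # chance the system flags a real plotter
    false_alarm = 0.001   # chance it flags an innocent person (0.1%)

    true_hits  = plotters * hit_rate
    false_hits = (population - plotters) * false_alarm

    print(f"flagged: {true_hits:.0f} plotters among {false_hits:,.0f} innocents")
    print(f"odds a flagged person is a plotter: 1 in {false_hits / true_hits:,.0f}")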

There’s a name for this sort of logical fallacy: hindsight bias. First explained by psychologists Daniel Kahneman and Amos Tversky, it’s surprisingly common. Since what actually happened is so obvious once it happens, we overestimate how obvious it was before it happened.

We actually misremember what we once thought, believing that we knew all along that what happened would happen. It’s a surprisingly strong tendency, one that has been observed in countless laboratory experiments and real-world examples of behavior. And it’s what all the post-Boston-Marathon bombing dot-connectors are doing.

Before we start blaming agencies for failing to stop the Boston bombers, and before we push “intelligence reforms” that will shred civil liberties without making us any safer, we need to stop seeing the past as a bunch of obvious dots that need connecting.

Kahneman, a Nobel prize winner, wisely noted: “Actions that seemed prudent in foresight can look irresponsibly negligent in hindsight.” Kahneman calls it “the illusion of understanding,” explaining that the past is only so understandable because we have cast it as simple, inevitable stories, leaving out the rest.

Nassim Taleb, an expert on risk engineering, calls this tendency the “narrative fallacy.” We humans are natural storytellers, and the world of stories is much more tidy, predictable and coherent than the real world.

Millions of people behave strangely enough to warrant the FBI’s notice, and almost all of them are harmless. It is simply not possible to find every plot beforehand, especially when the perpetrators act alone and on impulse.

We have to accept that there always will be a risk of terrorism, and that when the occasional plot succeeds, it’s not necessarily because our law enforcement systems have failed.

This essay previously appeared on CNN.

EDITED TO ADD (5/7): The hindsight bias was actually first discovered by Baruch Fischhoff: “Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty,” Journal of Experimental Psychology: Human Perception and Performance, 1(3), 1975, pp. 288-299.

Posted on May 7, 2013 at 6:10 AM • 39 Comments

Michael Chertoff on Google Glass

Interesting op-ed by former DHS head Michael Chertoff on the privacy risks of Google Glass.

Now imagine that millions of Americans walk around each day wearing the equivalent of a drone on their head: a device capable of capturing video and audio recordings of everything that happens around them. And imagine that these devices upload the data to large-scale commercial enterprises that are able to collect the recordings from each and every American and integrate them together to form a minute-by-minute tracking of the activities of millions.

That is almost precisely the vision of the future that lies directly ahead of us. Not, of course, with wearable drones but with wearable Internet-connected equipment. This new technology—whether in the form of glasses or watches—may unobtrusively capture video data in real time, store it in the cloud and allow for it to be analyzed.

It’s not unusual for government officials—the very people we disagree with regarding civil liberties issues—to agree with us on consumer privacy issues. But don’t forget that this person advocated for full-body scanners at airports while on the payroll of a scanner company.

One of the points he makes, that the data collected from Google Glass will become part of Google’s vast sensory network, echoes something I’ve heard Marc Rotenberg at EPIC say: this whole thing would be a lot less scary if the glasses were sold by a company like Brookstone.

The ACLU comments on the essay.

Posted on May 6, 2013 at 1:17 PM • 46 Comments

The Public/Private Surveillance Partnership

Our government collects a lot of information about us. Tax records, legal records, license records, records of government services received—it’s all in databases that are increasingly linked and correlated. Still, there’s a lot of personal information the government can’t collect. Either they’re prohibited by law from asking without probable cause and a judicial order, or they simply have no cost-effective way to collect it. But the government has figured out how to get around the laws, and collect personal data that has been historically denied to them: ask corporate America for it.

It’s no secret that we’re monitored continuously on the Internet. Some of the company names you know, such as Google and Facebook. Others hide in the background as you move about the Internet. There are browser plugins that show you who is tracking you. One Atlantic editor found 105 companies tracking him during one 36-hour period. Add data from your cell phone (who you talk to, your location), your credit cards (what you buy, from whom you buy it), and the dozens of other times you interact with a computer daily, and we live in a surveillance state beyond the dreams of Orwell.

It’s all corporate data, compiled and correlated, bought and sold. And increasingly, the government is doing the buying. Some of this is collected using National Security Letters (NSLs). These give the government the ability to demand an enormous amount of personal data about people for very speculative reasons, with neither probable cause nor judicial oversight. Data on these secretive orders is obviously scant, but we know that the FBI has issued hundreds of thousands of them in the past decade—for reasons that go far beyond terrorism.

NSLs aren’t the only way the government can get at corporate data. Sometimes they simply purchase it, just as any other company might. Sometimes they can get it for free, from corporations that want to stay on the government’s good side.

CISPA, a bill currently wending its way through Congress, codifies this sort of practice even further. If signed into law, CISPA will allow the government to collect all sorts of personal data from corporations, without any oversight at all, and will protect corporations from lawsuits based on their handing over that data. Without hyperbole, it’s been called the death of the 4th Amendment. Right now, it’s mainly the FBI and the NSA who are getting this data, but all sorts of government agencies have administrative subpoena power.

Data on this scale has all sorts of applications. From finding tax cheaters by comparing data brokers’ estimates of income and net worth with what’s reported on tax returns, to compiling a list of gun owners from Web browsing habits, instant messaging conversations, and locations—did you have your iPhone turned on when you visited a gun store?—the possibilities are endless.

Government photograph databases form the basis of any police facial recognition system. They’re not very good today, but they’ll only get better. But the government no longer needs to collect photographs. Experiments demonstrate that the Facebook database of tagged photographs is surprisingly effective at identifying people. As more places follow Disney’s lead in fingerprinting people at its theme parks, the government will be able to use that to identify people as well.

In a few years, the whole notion of a government-issued ID will seem quaint. Among facial recognition, the unique signature from your smart phone, the RFID chips in your clothing and other items you own, and whatever new technologies will broadcast your identity, no one will have to ask to see ID. When you walk into a store, they’ll already know who you are. When you interact with a policeman, she’ll already have your personal information displayed on her Internet-enabled glasses.

Soon, governments won’t have to bother collecting personal data. We’re willingly giving it to a vast network of for-profit data collectors, and they’re more than happy to pass it on to the government without our knowledge or consent.

This essay previously appeared on TheAtlantic.com.

EDITED TO ADD: This essay has been translated into French.

Posted on May 3, 2013 at 6:15 AM • 43 Comments

Risks of Networked Systems

Interesting research:

Helbing’s publication illustrates how cascade effects and complex dynamics amplify the vulnerability of networked systems. For example, just a few long-distance connections can largely decrease our ability to mitigate the threats posed by global pandemics. Initially beneficial trends, such as globalization, increasing network densities, higher complexity, and an acceleration of institutional decision processes may ultimately push human-made or human-influenced systems towards systemic instability, Helbing finds. Systemic instability refers to a system that will get out of control sooner or later, even if everybody involved is well skilled, highly motivated and behaving properly. Crowd disasters are shocking examples illustrating that many deaths may occur even when everybody tries hard not to hurt anyone.

Posted on May 2, 2013 at 1:09 PM • 16 Comments

More on FinSpy/FinFisher

FinFisher (also called FinSpy) is a commercially sold spyware package that is used by governments worldwide, including the U.S. There’s a new report that has a bunch of new information:

Our new findings include:

  • We have identified FinFisher Command & Control servers in 11 new countries: Hungary, Turkey, Romania, Panama, Lithuania, Macedonia, South Africa, Pakistan, Nigeria, Bulgaria, and Austria.
  • Taken together with our previous research, we can now assert that FinFisher Command & Control servers are currently active, or have been present, in 36 countries.
  • We have also identified a FinSpy sample that appears to be specifically targeting Malay language speakers, masquerading as a document discussing Malaysia’s upcoming 2013 General Elections.
  • We identify instances where FinSpy makes use of Mozilla’s Trademark and Code. The latest Malay-language sample masquerades as Mozilla Firefox in both file properties and in manifest. This behavior is similar to samples discussed in some of our previous reports, including a demo copy of the product, and samples targeting Bahraini activists.

Mozilla has sent them a cease and desist letter for using their name and code.

News story.

Here’s my previous post on the spyware.

Posted on May 2, 2013 at 6:50 AM • 14 Comments

Details of a Cyberheist

Really interesting article detailing how criminals steal from a company’s accounts over the Internet.

The costly cyberheist was carried out with the help of nearly 100 different accomplices in the United States who were hired through work-at-home job scams run by a crime gang that has been fleecing businesses for the past five years.

Basically, the criminals break into the bank account, move money into a bunch of other bank accounts, and use unwitting accomplices to launder the money.

The publication said the attack occurred on Apr. 19, and moved an estimated $1.03 million out of the hospital’s payroll account into 96 different bank accounts, mostly at banks in the Midwest and East Coast.

Posted on May 1, 2013 at 10:26 AM • 13 Comments
