CRYPTO-GRAM

July 15, 2007
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0707.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
In this issue:
- Correspondent Inference Theory and Terrorism
- TSA and the Sippy Cup Incident
- News
- Ubiquity of Communication
- 4th Amendment Rights Extended to E-Mail
- Credit Card Gas Limits
- Schneier/BT Counterpane News
- Designing Voting Machines to Minimize Coercion
- Risks of Data Reuse
- Comments from Readers
Correspondent Inference Theory and Terrorism

Two people are sitting in a room together: an experimenter and a subject. The experimenter gets up and closes the door, and the room becomes quieter. The subject is likely to believe that the experimenter’s purpose in closing the door was to make the room quieter.
This is an example of correspondent inference theory. People tend to infer the motives—and also the disposition—of someone who performs an action based on the effects of his actions, and not on external or situational factors. If you see someone violently hitting someone else, you assume it’s because he wanted to—and is a violent person—and not because he’s play-acting. If you read about someone getting into a car accident, you assume it’s because he’s a bad driver and not because he was simply unlucky. And—more importantly for this column—if you read about a terrorist, you assume that terrorism is his ultimate goal.
It’s not always this easy, of course. If someone chooses to move to Seattle instead of New York, is it because of the climate, the culture, or his career? Edward Jones and Keith Davis, who advanced this theory in the 1960s and 1970s, proposed a theory of “correspondence” to describe the extent to which this effect predominates. When an action has a high correspondence, people tend to infer the motives of the person directly from the action: e.g., hitting someone violently. When the action has a low correspondence, people tend not to make the assumption: e.g., moving to Seattle.
Like most cognitive biases, correspondent inference theory makes evolutionary sense. In a world of simple actions and base motivations, it’s a good rule of thumb that allows a creature to rapidly infer the motivations of another creature. (He’s attacking me because he wants to kill me.) Even in sentient and social creatures like humans, it makes a lot of sense most of the time. If you see someone violently hitting someone else, it’s reasonable to assume that he’s a violent person. Cognitive biases aren’t bad; they’re sensible rules of thumb.
But like all cognitive biases, correspondent inference theory fails sometimes. And one place it fails pretty spectacularly is in our response to terrorism. Because terrorism often results in the horrific deaths of innocents, we mistakenly infer that the horrific deaths of innocents is the primary motivation of the terrorist, and not the means to a different end.
I found this interesting analysis in a paper by Max Abrahms in “International Security.” “Why Terrorism Does Not Work” analyzes the political motivations of 28 terrorist groups: the complete list of “foreign terrorist organizations” designated by the U.S. Department of State since 2001. He lists 42 policy objectives of those groups, and finds that they achieved them only 7 percent of the time.
According to the data, terrorism is more likely to work if 1) the terrorists attack military targets more often than civilian ones, and 2) they have minimalist goals like evicting a foreign power from their country or winning control of a piece of territory, rather than maximalist objectives like establishing a new political system in the country or annihilating another nation. But even so, terrorism is a pretty ineffective means of influencing policy.
There’s a lot to quibble about in Abrahms’ methodology, but he seems to be erring on the side of crediting terrorist groups with success. (Hezbollah’s objective of expelling both peacekeepers and Israel from Lebanon counts as a success, and so does the Tamil Tigers’ “limited success” in establishing a Tamil state.) Still, he provides good data to support what was until recently common knowledge: Terrorism doesn’t work.
This is all interesting stuff, and I recommend that you read the paper for yourself. But to me, the most insightful part is when Abrahms uses correspondent inference theory to explain why terrorist groups that primarily attack civilians do not achieve their policy goals, even if they are minimalist. Abrahms writes:
“The theory posited here is that terrorist groups that target civilians are unable to coerce policy change because terrorism has an extremely high correspondence. Countries believe that their civilian populations are attacked not because the terrorist group is protesting unfavorable external conditions such as territorial occupation or poverty. Rather, target countries infer the short-term consequences of terrorism—the deaths of innocent civilians, mass fear, loss of confidence in the government to offer protection, economic contraction, and the inevitable erosion of civil liberties—[are] the objects of the terrorist groups. In short, target countries view the negative consequences of terrorist attacks on their societies and political systems as evidence that the terrorists want them destroyed. Target countries are understandably skeptical that making concessions will placate terrorist groups believed to be motivated by these maximalist objectives.”
In other words, terrorism doesn’t work, because it makes people less likely to acquiesce to the terrorists’ demands, no matter how limited they might be. The reaction to terrorism has an effect completely opposite to what the terrorists want; people simply don’t believe those limited demands are the actual demands.
This theory explains, with a clarity I have never seen before, why so many people make the bizarre claim that al Qaeda terrorism—or Islamic terrorism in general—is “different”: that while other terrorist groups might have policy objectives, al Qaeda’s primary motivation is to kill us all. This is something we have heard from President Bush again and again—Abrahms has a page of examples in the paper—and is a rhetorical staple in the debate.
In fact, bin Laden’s policy objectives have been surprisingly consistent. Abrahms lists four; here are six from former CIA analyst Michael Scheuer’s book “Imperial Hubris”:
* End U.S. support of Israel
* Force American troops out of the Middle East, particularly Saudi Arabia
* End the U.S. occupation of Afghanistan and (subsequently) Iraq
* End U.S. support of other countries’ anti-Muslim policies
* End U.S. pressure on Arab oil companies to keep prices low
* End U.S. support for “illegitimate” (i.e., moderate) Arab governments, like Pakistan
Although bin Laden has complained that Americans have completely misunderstood the reason behind the 9/11 attacks, correspondent inference theory postulates that he’s not going to convince people. Terrorism, and 9/11 in particular, has such a high correspondence that people use the effects of the attacks to infer the terrorists’ motives. In other words, since bin Laden caused the death of a couple of thousand people in the 9/11 attacks, people assume that must have been his actual goal, and he’s just giving lip service to what he *claims* are his goals. Even bin Laden’s actual objectives are ignored as people focus on the deaths, the destruction, and the economic impact.
Perversely, Bush’s misinterpretation of terrorists’ motives actually helps prevent them from achieving their goals.
None of this is meant to either excuse or justify terrorism. In fact, it does the exact opposite, by demonstrating why terrorism doesn’t work as a tool of persuasion and policy change. But we’re more effective at fighting terrorism if we understand that it is a means to an end and not an end in itself; it requires us to understand the true motivations of the terrorists and not just their particular tactics. And the more our own cognitive biases cloud that understanding, the more we mischaracterize the threat and make bad security trade-offs.
This essay originally appeared on Wired.com:
TSA and the Sippy Cup Incident

This story is pretty disgusting: “I demanded to speak to a TSA [Transportation Security Administration] supervisor who asked me if the water in the sippy cup was ‘nursery water or other bottled water.’ I explained that the sippy cup water was filtered tap water. The sippy cup was seized as my son was pointing and crying for his cup. I asked if I could drink the water to get the cup back, and was advised that I would have to leave security and come back through with an empty cup in order to retain the cup. As I was escorted out of security by TSA and a police officer, I unscrewed the cup to drink the water, which accidentally spilled because I was so upset with the situation.
“At this point, I was detained against my will by the police officer and threatened to be arrested for endangering other passengers with the spilled 3 to 4 ounces of water. I was ordered to clean the water, so I got on my hands and knees while my son sat in his stroller with no shoes on since they were also screened and I had no time to put them back on his feet. I asked to call back my fiancé, who I could still see from afar, waiting for us to clear security, to watch my son while I was being detained, and the officer threatened to arrest me if I moved. So I yelled past security to get the attention of my fiancé.
“I was ordered to apologize for the spilled water, and again threatened arrest. I was threatened several times with arrest while detained, and while three other police officers were called to the scene of the mother with the 19 month old. A total of four police officers and three TSA officers reported to the scene where I was being held against my will. I was also told that I should not disrespect the officer and could be arrested for this too. I apologized to the officer and she continued to detain me despite me telling her that I would miss my flight. The officer advised me that I should have thought about this before I ‘intentionally spilled the water!’”
This story portrays the TSA as jack-booted thugs. The story hit the Internet in mid-June, and quickly made the rounds. I saw it on BoingBoing. But, as it turns out, it’s not entirely true.
The TSA has a webpage up, with both the incident report and video.
“TSO [REDACTED] took the female to the exit lane with the stroller and her bag. When she got past the exit lane podium she opened the child’s drink container and held her arm out and poured the contents (approx. 6 to 8 ounces) on the floor. MWAA Officer [REDACTED] was manning the exit lane at the time and observed the entire scene and approached the female passenger after observing this and stopped her when she tried to re-enter the sterile area after trying to come back through after spilling the fluids on the floor. The female passenger flashed her badge and credentials and told the MWAA officer ‘Do you know who I am?’ An argument then ensued between the officer and the passenger of whether the spilling of the fluid was intentional or accidental. Officer [REDACTED] asked the passenger to clean up the spill and she did.”
Watch the second video. TSO [REDACTED] is partially blocking the scene, but at 2:01:00 PM it’s pretty clear that Monica Emmerson—that’s the female passenger—spills the liquid on the floor on purpose, as a deliberate act of defiance. What happens next is more complicated; you can watch it for yourself, or you can read BoingBoing’s somewhat sarcastic summary.
In this instance, the TSA is clearly in the right.
But there’s a larger lesson here. Remember the Princeton professor who was put on the watch list for criticizing Bush? That was also untrue. Why is it that we all—myself included—believe these stories? Why are we so quick to assume that the TSA is a bunch of jack-booted thugs, officious and arbitrary and drunk with power?
It’s because everything seems so arbitrary, because there’s no accountability or transparency in the DHS. Rules and regulations change all the time, without any explanation or justification. Of course this kind of thing induces paranoia. It’s the sort of thing you read about in history books about East Germany and other police states. It’s not what we expect out of 21st century America.
The problem is larger than the TSA, but the TSA is the part of “homeland security” that the public comes into contact with most often—at least the part of the public that writes about these things most. They’re the public face of the problem, so of course they’re going to get the lion’s share of the finger pointing.
It was smart public relations on the TSA’s part to get the video of the incident on the Internet quickly, but it would be even smarter for the government to restore basic constitutional liberties to our nation’s counterterrorism policy. Accountability and transparency are basic building blocks of any democracy; and the more we lose sight of them, the more we lose our way as a nation.
News

Direct marketing meets wholesale surveillance: a $100K National Science Foundation grant:

Remote sensing of meth labs, another NSF grant:

Ridiculous “age verification” for online movie trailers: “It seems like ‘We want to protect children’ really means, ‘We want to give the appearance that we’ve made an effort to protect children.’ If they really wanted to protect children, they wouldn’t use the honor system as the sole safeguard standing between previews filled with sex and violence and Internet-savvy kids who can, in a matter of seconds, beat the impotent little system.”
In 1748, the painter William Hogarth was arrested as a spy for sketching fortifications at Calais.
Sounds familiar, doesn’t it?
Someone claims to have hacked the Bloomsbury Publishing network, and has posted what he says is the ending to the last Harry Potter book. I don’t believe it, actually. Sure, it’s possible—probably even easy. But the posting just doesn’t read right to me. And I would expect someone who really got their hands on a copy of the manuscript to post the choice bits of text, not just a plot summary. It’s easier, and it’s more proof.
The French government wants to ban BlackBerry e-mail devices, because of worries of eavesdropping by U.S. intelligence.
Vulnerabilities in the DHS network:
The Onion on terrorist cell apathy:
“Cocktail condoms” are protective covers that go over your drink and “protect” against someone trying to slip a Mickey Finn (or whatever they’re called these days). I’m sure there are many ways to defeat this security device if you’re so inclined: a syringe, affixing a new cover after you tamper with the drink, and so on. And this is exactly the sort of rare risk we’re likely to overreact to. But to me, the most interesting aspect of this story is the agenda. If these things become common, it won’t be because of security. It will be because of advertising.
Does this cell phone stalking story seem real to anyone?
There’s something going on here, but I just don’t believe it’s entirely cell phone hacking. Something else is going on.
Really good “Washington Post” article on secrecy:
Back in 2002 I wrote about the relationship between secrecy and security.
Surveillance cameras that obscure faces, an interesting privacy-enhancing technology.
At the beach, sand is more deadly than sharks. And this is important enough to become someone’s crusade?
Essay: “The only thing we have to fear is the ‘culture of fear’ itself,” by Frank Furedi.
Making invisible ink printer cartridges: a covert channel.
Bioterrorism detection systems and false alarms:
Airport security: Israel vs. the United States
Why an ATM PIN has four digits:
Security cartoon: it’s always a trade-off:
Look at the last line of this article, about an Ohio town considering mandatory school uniforms in lower grades: “For Edgewood, the primary motivation for adopting uniforms would be to enhance school security, York said.” What is he talking about? Does he think that school uniforms enhance security because it would be easier to spot non-uniform-wearing non-students in the school building and on the grounds? (Of course, non-students with uniforms would have an easier time sneaking in.) Or something else? Or is security just an excuse for any random thing these days?
http://news.enquirer.com/apps/pbcs.dll/article?AID=/… or http://tinyurl.com/2yr2z8
Good commentaries on the UK terrorist plots:
In former East Germany, the Stasi kept samples of people’s smells.
The Millwall brick: an improvised weapon made out of newspaper, favored by football (i.e., soccer) hooligans.
When coins are worth more as metal than as coins.
This guy has a bottle taken away from him, then he picks it out of the trash and takes it on the plane anyway. I’m not sure whether this is more gutsy or stupid. If he had been caught, the TSA would have made his day pretty damn miserable. I’m not even sure bragging about it online is a good idea. Too many idiots in the FBI.
I’ve written about this Greek wiretapping scandal before. A system to allow the police to eavesdrop on conversations was abused (surprise, surprise). There’s a really good technical analysis in IEEE Spectrum this month.
Police don’t overreact to strange object. What’s sad is that it feels like an exception.
I’m sure glad the Australian Federal Police have their priorities straight: “Technology such as cloned part-robot humans used by organised crime gangs pose the greatest future challenge to police, along with online scamming, Australian Federal Police (AFP) Commissioner Mick Keelty says.”
Dan Solove comments on the recent ACLU vs. NSA decision regarding the NSA’s illegal wiretapping activities.
Dan Solove on privacy and the “nothing to hide” argument:
Funny airport-security photo:
Ubiquity of Communication

In an essay, Randy Farmer, a pioneer of virtual online worlds, describes communication in something called Disney’s ToonTown. Designers of online worlds for children wanted to severely restrict the communication that users could have with each other, lest somebody say something that’s inappropriate for children to hear.
Randy discusses various approaches to this problem that were tried over the years. The ToonTown solution was to restrict users to something called “Speedchat,” a menu of pre-constructed sentences, all innocuous. They also gave users the ability to conduct unrestricted conversations with each other, provided they both knew a secret code string. The designers presumed the code strings would be passed only to people a user knew in real life, perhaps on a school playground or among neighbors.
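The design Randy describes amounts to a whitelist plus an out-of-band handshake. Here is a minimal sketch; the class, phrases, and codes are invented for illustration, but the mechanism follows the description above: canned phrases always go through, and freeform chat is unlocked only when both users present a secret code exchanged outside the game.

```python
# A sketch of ToonTown-style restricted chat. Names and phrases are
# hypothetical; the mechanism follows the description above: canned
# phrases are always delivered, freeform text only after both users
# present the same secret code exchanged outside the game.

SPEEDCHAT_PHRASES = {"Hi!", "Bye!", "Let's race!", "Good game!"}

class ChatSession:
    def __init__(self):
        self.unrestricted = False  # True after a successful code exchange

    def exchange_code(self, my_code: str, their_code: str) -> bool:
        # Both parties must present the same out-of-band secret.
        if my_code == their_code:
            self.unrestricted = True
        return self.unrestricted

    def send(self, message: str) -> bool:
        # Freeform text is silently dropped unless the handshake succeeded.
        return self.unrestricted or message in SPEEDCHAT_PHRASES

session = ChatSession()
assert session.send("Let's race!")         # canned phrase: delivered
assert not session.send("meet me at 5pm")  # freeform: filtered
session.exchange_code("purple-42", "purple-42")
assert session.send("meet me at 5pm")      # now delivered
```

The hole in any such scheme is the handshake itself: nothing in the filter stops users from smuggling the code through the game's other channels.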
Users found ways to pass code strings to strangers anyway. Users invented several protocols, using gestures, canned sentences, or movement of objects in the game.
Randy writes: “By hook, or by crook, customers will always find a way to connect with each other.”
4th Amendment Rights Extended to E-Mail

This is a great piece of news in the U.S. For the first time, e-mail has been granted the same constitutional protections as telephone calls and personal papers: the police need a warrant to get at it. Granted, it’s only a circuit court decision—from the Sixth U.S. Circuit Court of Appeals in Ohio—it’s pretty narrowly defined based on the attributes of the e-mail system, and it has a good chance of being overturned by the Supreme Court…but it’s still great news.
The way to think of the warrant system is as a security device. The police still have the ability to get access to e-mail in order to investigate a crime. But in order to prevent abuse, they have to convince a neutral third party—a judge—that accessing someone’s e-mail is necessary to investigate that crime. That judge, at least in theory, protects our interests.
Clearly e-mail deserves the same protection as our other personal papers, but—like phone calls—it might take the courts decades to figure that out. But we’ll get there eventually.
Credit Card Gas Limits

Here’s an interesting phenomenon: rising gas costs have pushed a lot of legitimate transactions up to the “anti-fraud” ceiling.
Security is a trade-off, and now the ceiling is annoying more and more legitimate gas purchasers. But to me the real question is: does this ceiling have any actual security purpose?
In general, credit card fraudsters like making gas purchases because the system is automated: no signature is required, and there’s no need to interact with any other person. In fact, buying gas is the most common way a fraudster tests that a recently stolen card is valid. The anti-fraud ceiling doesn’t actually prevent any of this, but limits the amount of money at risk.
But so what? How many perps are actually trying to get more gas than is permitted? Are credit-card-stealing miscreants also swiping cars with enormous gas tanks, or merely filling up the passenger cars they regularly drive? I’d love to know how many times, prior to the run-up in gas prices, a triggered cutoff actually coincided with a subsequent report of a stolen card. And what’s the effect of a ceiling, apart from a gas shut-off? Surely the smart criminals know about smurfing, if they need more gas than the ceiling will allow.
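As a sketch of the reasoning, the ceiling amounts to nothing more than a per-transaction cap. The `authorize` rule and the $75 figure below are hypothetical, but they show both what the cap buys and how trivially smurfing evades it:

```python
# The "anti-fraud ceiling" modeled as a per-transaction cap. The
# authorize() rule and the $75 figure are hypothetical. The cap limits
# the money at risk in any single swipe, but splitting a purchase
# ("smurfing") evades it entirely.

CEILING = 75.00  # hypothetical per-transaction gas limit, in dollars

def authorize(amount: float) -> bool:
    """Approve or refuse a single pay-at-the-pump transaction."""
    return amount <= CEILING

# One $120 fill-up is refused at the pump...
assert not authorize(120.00)

# ...but two $60 swipes, minutes apart, sail through.
assert authorize(60.00) and authorize(60.00)
```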
The Visa spokesperson said, “We get more calls, questions, when gas prices increase.” He/she didn’t say: “We *make* more calls to see if fraud is occurring.” So the only inquiries made may be in the cases where fraud isn’t occurring.
Schneier/BT Counterpane News

Slate wrote an article on my movie-plot threat contest.
Designing Voting Machines to Minimize Coercion

If someone wants to buy your vote, he’d like some proof that you’ve delivered the goods. Camera phones are one way for you to prove to your buyer that you voted the way he wants. Belgian voting machines have been designed to minimize that risk.
“Once you have confirmed your vote, the next screen doesn’t display how you voted. So if one is coerced and has to deliver proof, one just has to take a picture of the vote one was coerced into, and then back out from the screen and change one’s vote. The only workaround I see is for the coercer to demand a video of the complete voting process, instead of a picture of the ballot.”
The author is wrong that this is an advantage electronic ballots have over paper ballots. Paper voting systems can be designed with the same security features.
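The quoted design can be modeled as a tiny state machine (the class and method names here are invented, not the actual Belgian software) to see why the coercer's photo is worthless:

```python
# A state-machine sketch of the quoted design (class and method names
# invented): the confirmation screen never redisplays the choice, and
# the voter can back out and revote afterward, so a photo of that
# screen proves nothing to a vote-buyer or coercer.

class VotingMachine:
    def __init__(self):
        self._choice = None

    def select(self, candidate: str) -> str:
        self._choice = candidate
        return f"You selected: {candidate}"  # the only screen naming the vote

    def confirm(self) -> str:
        return "Vote confirmed."             # reveals nothing about the choice

    def back_out(self) -> None:
        self._choice = None                  # revoting is still possible

    def cast(self) -> str:
        return self._choice                  # the final, recorded ballot

m = VotingMachine()
m.select("Coercer's Candidate")
photo = m.confirm()            # what the coerced voter photographs
m.back_out()                   # ...then quietly changes the vote
m.select("Voter's Real Choice")
m.confirm()
assert photo == "Vote confirmed."         # the photo proves nothing
assert m.cast() == "Voter's Real Choice"
```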
Risks of Data Reuse

We learned the news in March: Contrary to decades of denials, the U.S. Census Bureau used individual records to round up Japanese-Americans during World War II.
The Census Bureau normally is prohibited by law from revealing data that could be linked to specific individuals; the law exists to encourage people to answer census questions accurately and without fear. And while the Second War Powers Act of 1942 temporarily suspended that protection in order to locate Japanese-Americans, the Census Bureau had maintained that it only provided general information about neighborhoods.
New research proves they were lying.
The whole incident serves as a poignant illustration of one of the thorniest problems of the information age: data collected for one purpose and then used for another, or “data reuse.”
When we think about our personal data, what bothers us most is generally not the initial collection and use, but the secondary uses. I personally appreciate it when Amazon.com suggests books that might interest me, based on books I have already bought. I like it that my airline knows what type of seat and meal I prefer, and my hotel chain keeps records of my room preferences. I don’t mind that my automatic road-toll collection tag is tied to my credit card, and that I get billed automatically. I even like the detailed summary of my purchases that my credit card company sends me at the end of every year. What I don’t want, though, is any of these companies selling that data to brokers, or for law enforcement to be allowed to paw through those records without a warrant.
There are two bothersome issues about data reuse. First, we lose control of our data. In all of the examples above, there is an implied agreement between the data collector and me: It gets the data in order to provide me with some sort of service. Once the data collector sells it to a broker, though, it’s out of my hands. It might show up on some telemarketer’s screen, or in a detailed report to a potential employer, or as part of a data-mining system to evaluate my personal terrorism risk. It becomes part of my data shadow, which always follows me around but which I can never see.
This, of course, affects our willingness to give up personal data in the first place. The reason U.S. census data was declared off-limits for other uses was to placate Americans’ fears and assure them that they could answer questions truthfully. How accurate would you be in filling out your census forms if you knew the FBI would be mining the data, looking for terrorists? How would it affect your supermarket purchases if you knew people were examining them and making judgments about your lifestyle? I know many people who engage in data poisoning: deliberately lying on forms in order to propagate erroneous data. I’m sure many of them would stop that practice if they could be sure that the data was only used for the purpose for which it was collected.
The second issue about data reuse is error rates. All data has errors, and different uses can tolerate different amounts of error. The sorts of marketing databases you can buy on the web, for example, are notoriously error-filled. That’s OK; if the database of ultra-affluent Americans of a particular ethnicity you just bought has a 10 percent error rate, you can factor that cost into your marketing campaign. But that same database, with that same error rate, might be useless for law enforcement purposes.
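Some back-of-the-envelope arithmetic makes the asymmetry concrete (the numbers below are hypothetical):

```python
# Back-of-the-envelope arithmetic (hypothetical numbers) for why the
# same error rate is tolerable in marketing and ruinous in law
# enforcement. Integer math keeps the figures exact.

records = 100_000
bad_records = records * 10 // 100  # a 10 percent error rate

# Marketing: each bad record wastes one $0.50 mailing.
wasted_dollars = bad_records * 0.50
assert wasted_dollars == 5000.0    # a predictable campaign line item

# Law enforcement: each bad record is an innocent person flagged.
assert bad_records == 10_000       # ten thousand people wrongly on a list
```

The same database, the same 10 percent: a $5,000 marketing write-off versus ten thousand wrongly flagged people.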
Understanding error rates and how they propagate is vital when evaluating any system that reuses data, especially for law enforcement purposes. A few years ago, the Transportation Security Administration’s follow-on watch list system, Secure Flight, was going to use commercial data to give people a terrorism risk score and determine how much they were going to be questioned or searched at the airport. People rightly rebelled against the thought of being judged in secret, but there was much less discussion about whether the commercial data from credit bureaus was accurate enough for this application.
An even more egregious example of error-rate problems occurred in 2000, when the Florida Division of Elections contracted with Database Technologies (since merged with ChoicePoint) to remove convicted felons from the voting rolls. The databases used were filled with errors and the matching procedures were sloppy, which resulted in thousands of disenfranchised voters—mostly black—and almost certainly changed a presidential election result.
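To see how sloppy matching disenfranchises people, consider a deliberately loose rule—invented here for illustration, not the actual criteria the contractor used—that matches voters to felons on last name and first initial:

```python
# How loose matching criteria create false positives. This rule (same
# last name, same first initial) is invented for illustration; it is
# not the actual procedure Florida's contractor used.

def sloppy_match(voter: str, felon: str) -> bool:
    v_first, v_last = voter.split()
    f_first, f_last = felon.split()
    return v_last == f_last and v_first[0] == f_first[0]

felon_list = ["James Smith"]

# John Smith has no criminal record, but the loose rule matches him
# to felon James Smith and strikes him from the voter rolls.
assert any(sloppy_match("John Smith", f) for f in felon_list)
assert not any(sloppy_match("Mary Jones", f) for f in felon_list)
```

With common surnames, a rule this loose guarantees false positives at scale, and every false positive is a voter turned away.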
Of course, there are beneficial uses of secondary data. Take, for example, personal medical data. It’s personal and intimate, yet valuable to society in aggregate. Think of what we could do with a database of everyone’s health information: massive studies examining the long-term effects of different drugs and treatment options, different environmental factors, different lifestyle choices. There’s an enormous amount of important research potential hidden in that data, and it’s worth figuring out how to get at it without compromising individual privacy.
This is largely a matter of legislation. Technology alone can never protect our rights. There are just too many reasons not to trust it, and too many ways to subvert it. Data privacy ultimately stems from our laws, and strong legal protections are fundamental to protecting our information against abuse. But at the same time, technology is still vital.
Both the Japanese internment and the Florida voting-roll purge demonstrate that laws can change—and sometimes change quickly. We need to build systems with privacy-enhancing technologies that limit data collection wherever possible. Data that is never collected cannot be reused. Data that is collected anonymously, or deleted immediately after it is used, is much harder to reuse. It’s easy to build systems that collect data on everything—it’s what computers naturally do—but it’s far better to take the time to understand what data is needed and why, and only collect that.
History will record what we, here in the early decades of the information age, did to foster freedom, liberty and democracy. Did we build information technologies that protected people’s freedoms even during times when society tried to subvert them? Or did we build technologies that could easily be modified to watch and control? It’s bad civic hygiene to build an infrastructure that can be used to facilitate a police state.
Individual data and the Japanese internment:
Florida disenfranchisement in 2000:
This article originally appeared on Wired.com:
Comments from Readers

There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of BT Counterpane, and is a member of the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
BT Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. BT Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT or BT Counterpane.
Copyright (c) 2007 by Bruce Schneier.