May 15, 2012
by Bruce Schneier
Chief Security Technology Officer, BT
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1205.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively comment section. An RSS feed is available.
In this issue:
- The Trouble with Airport Profiling
- Hawley Channels His Inner Schneier
- TSA Behavioral Detection Statistics
- Stolen Phone Database
- “Liars & Outliers” Update
- Overreacting to Potential Bombs
- A Foiled Terrorist Plot
- Schneier News
- Fear and the Attention Economy
- Amazing Round of “Split or Steal”
Why do otherwise rational people think it’s a good idea to profile people at airports? Recently, neuroscientist and best-selling author Sam Harris related a story of an elderly couple being given the twice-over by the TSA, pointed out how these two were obviously not a threat, and recommended that the TSA focus on the actual threat: “Muslims, or anyone who looks like he or she could conceivably be Muslim.”
This is a bad idea. It doesn’t make us any safer—and it actually puts us all at risk.
The right way to look at security is in terms of cost-benefit trade-offs. If adding profiling to airport checkpoints allowed us to detect more threats at a lower cost, then we should implement it. If it didn’t, we’d be foolish to do so. Sometimes profiling works. Consider a sheep in a meadow, happily munching on grass. When he spies a wolf, he’s going to judge that individual wolf based on a bunch of assumptions related to the past behavior of its species. In short, that sheep is going to profile…and then run away. This makes perfect sense, and is why evolution produced sheep—and other animals—that react this way. But this sort of profiling doesn’t work with humans at airports, for several reasons.
First, in the sheep’s case the profile is accurate, in that all wolves are out to eat sheep. Maybe a particular wolf isn’t hungry at the moment, but enough wolves are hungry enough of the time to justify the occasional false alarm. However, it isn’t true that almost all Muslims are out to blow up airplanes. In fact, almost none of them are. Post-9/11, we’ve had 2 Muslim terrorists on U.S. airplanes: the shoe bomber and the underwear bomber. If you assume 0.8% (that’s one estimate of the percentage of Muslim Americans) of the 630 million annual airplane fliers are Muslim and triple it to account for others who look Semitic, then the chances any profiled flier will be a Muslim terrorist is 1 in 80 million. Add the 19 9/11 terrorists—arguably a singular event—and that number drops to 1 in 8 million. Either way, because the number of actual terrorists is so low, almost everyone selected by the profile will be innocent. This is called the “base rate fallacy,” and it dooms any type of broad terrorist profiling, including the TSA’s behavioral profiling.
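The arithmetic behind those odds can be sketched in a few lines. The 630 million annual fliers and the 0.8% figure come from the paragraph above; the roughly 10.5 years of post-9/11 flying is my assumption to square the totals:

```python
# Back-of-the-envelope base-rate calculation for airport profiling.
# Figures from the essay: 630M annual fliers, 0.8% Muslim-American share,
# tripled to cover everyone who "looks Muslim." The ~10.5-year window
# (Sept. 2001 to May 2012) is an assumption, not from the essay.
ANNUAL_FLIERS = 630_000_000
MUSLIM_SHARE = 0.008
YEARS_SINCE_911 = 10.5

profiled_per_year = ANNUAL_FLIERS * MUSLIM_SHARE * 3
profiled_total = profiled_per_year * YEARS_SINCE_911

odds_two = profiled_total / 2         # shoe bomber + underwear bomber
odds_twentyone = profiled_total / 21  # add the 19 9/11 hijackers

print(f"1 in {odds_two:,.0f}")        # roughly 1 in 80 million
print(f"1 in {odds_twentyone:,.0f}")  # roughly 1 in 8 million
```

Whatever the exact inputs, the conclusion is robust: with so few actual terrorists, nearly every profiled flier is a false positive.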
Second, sheep can safely ignore animals that don’t look like the few predators they know. On the other hand, to assume that only Arab-appearing people are terrorists is dangerously naive. Muslims are black, white, Asian, and everything else—most Muslims are not Arab. Recent terrorists have been European, Asian, African, Hispanic, and Middle Eastern; male and female; young and old. Underwear bomber Umar Farouk Abdulmutallab was Nigerian. Shoe bomber Richard Reid was British with a Jamaican father. One of the London subway bombers, Germaine Lindsay, was Afro-Caribbean. Dirty bomb suspect Jose Padilla was Hispanic-American. The 2002 Bali terrorists were Indonesian. Both Timothy McVeigh and the Unabomber were white Americans. The Chechen terrorists who blew up two Russian planes in 2004 were female. Focusing on a profile increases the risk that TSA agents will miss those who don’t match it.
Third, wolves can’t deliberately try to evade the profile. A wolf in sheep’s clothing is just a story, but humans are smart and adaptable enough to put the concept into practice. Once the TSA establishes a profile, terrorists will take steps to avoid it. The Chechens deliberately chose female suicide bombers because Russian security was less thorough with women. Al Qaeda has tried to recruit non-Muslims. And terrorists have given bombs to innocent—and innocent-looking—travelers. Randomized secondary screening is more effective, especially since the goal isn’t to catch every plot but to create enough uncertainty that terrorists don’t even try.
And fourth, sheep don’t care if they offend innocent wolves; the two species are never going to be friends. At airports, though, there is an enormous social and political cost to the millions of false alarms. Beyond the societal harms of deliberately harassing a minority group, singling out Muslims alienates the very people who are in the best position to discover and alert authorities about Muslim plots before the terrorists even get to the airport. This alone is reason enough not to profile.
I too am incensed—but not surprised—when the TSA singles out four-year-old girls, children with cerebral palsy, pretty women, the elderly, and wheelchair users for humiliation, abuse, and sometimes theft. Any bureaucracy that processes 630 million people per year will generate stories like this. When people propose profiling, they are really asking for a security system that can apply judgment. Unfortunately, that’s really hard. Rules are easier to explain and train. Zero tolerance is easier to justify and defend. Judgment requires better-educated, more expert, and much-higher-paid screeners. And the personal career risks to a TSA agent of being wrong when exercising judgment far outweigh any benefits from being sensible.
The proper reaction to screening horror stories isn’t to subject only “those people” to it; it’s to subject no one to it. (Can anyone even explain what hypothetical terrorist plot could successfully evade normal security, but would be discovered during secondary screening?) Invasive TSA screening is nothing more than security theater. It doesn’t make us safer, and it’s not worth the cost. Even more strongly, security isn’t our society’s only value. Do we really want the full power of government to act out our stereotypes and prejudices? Have we Americans ever done something like this and not been ashamed later? This is what we have a Constitution for: to help us live up to our values and not down to our fears.
This essay previously appeared on Forbes.com and Sam Harris’s blog.
Proponents of profiling:
Base rate fallacy:
Gaming profiling systems:
Al Qaeda recruiting non-Muslims:
Islamic terrorists using an innocent European:
TSA horror stories:
Zero-tolerance security policies:
Kip Hawley wrote an essay for the “Wall Street Journal” on airport security. In it, he says so many sensible things that people have been forwarding it to me with comments like “did you ghostwrite this?” and “it looks like you won an argument” and “how did you convince him?”
Any effort to rebuild TSA and get airport security right in the U.S. has to start with two basic principles:
First, the TSA’s mission is to prevent a catastrophic attack on the transportation system, not to ensure that every single passenger can avoid harm while traveling. Much of the friction in the system today results from rules that are direct responses to how we were attacked on 9/11. But it’s simply no longer the case that killing a few people on board a plane could lead to a hijacking. Never again will a terrorist be able to breach the cockpit simply with a box cutter or a knife. The cockpit doors have been reinforced, and passengers, flight crews and air marshals would intervene.
This sounds a lot like me in 2005:
Exactly two things have made airline travel safer since 9/11: reinforcement of cockpit doors, and passengers who now know that they may have to fight back.
I’m less into sky marshals than he is.
Second, the TSA’s job is to manage risk, not to enforce regulations. Terrorists are adaptive, and we need to be adaptive, too. Regulations are always playing catch-up, because terrorists design their plots around the loopholes.
Me in 2008:
It’s this fetish-like focus on tactics that results in the security follies at airports. We ban guns and knives, and terrorists use box-cutters. We take away box-cutters and corkscrews, so they put explosives in their shoes. We screen shoes, so they use liquids. We take away liquids, and they’re going to do something else. Or they’ll ignore airplanes entirely and attack a school, church, theatre, stadium, shopping mall, airport terminal outside the security area, or any of the other places where people pack together tightly.
These are stupid games, so let’s stop playing.
He disses Trusted Traveler programs, where known people are allowed to bypass some security measures:
I had hoped to advance the idea of a Registered Traveler program, but the second that you create a population of travelers who are considered “trusted,” that category of fliers moves to the top of al Qaeda’s training list, whether they are old, young, white, Asian, military, civilian, male or female. The men who bombed the London Underground in July 2005 would all have been eligible for the Registered Traveler cards we were developing at the time. No realistic amount of prescreening can alleviate this threat when al Qaeda is working to recruit “clean” agents. TSA dropped the idea on my watch—though new versions of it continue to pop up.
Me in 2004:
What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.
Hawley’s essay ends with a list of recommendations for change, and they are mostly good:
What would a better system look like? If politicians gave the TSA some political cover, the agency could institute the following changes before the start of the summer travel season:
1. No more banned items: Aside from obvious weapons capable of fast, multiple killings—such as guns, toxins and explosive devices—it is time to end the TSA’s use of well-trained security officers as kindergarten teachers to millions of passengers a day. The list of banned items has created an “Easter-egg hunt” mentality at the TSA. Worse, banning certain items gives terrorists a complete list of what not to use in their next attack. Lighters are banned? The next attack will use an electric trigger.
Me in 2009:
Return passenger screening to pre-9/11 levels.
2. Allow all liquids: Simple checkpoint signage, a small software update and some traffic management are all that stand between you and bringing all your liquids on every U.S. flight. Really.
This is referring to a point he makes earlier in his essay:
I was initially against a ban on liquids as well, because I thought that, with proper briefing, TSA officers could stop al Qaeda’s new liquid bombs. Unfortunately, al Qaeda’s advancing skill with hydrogen-peroxide-based bombs made a total liquid ban necessary for a brief period and a restriction on the amount of liquid one could carry on a plane necessary thereafter.
Existing scanners could allow passengers to carry on any amount of liquid they want, so long as they put it in the gray bins. The scanners have yet to be used in this way because of concern for the large number of false alarms and delays that they could cause. When I left TSA in 2009, the plan was to designate “liquid lanes” where waits might be longer but passengers could board with snow globes, beauty products or booze. That plan is still sitting on someone’s desk.
I have been complaining about the liquids ban for years, but Hawley’s comment confuses me. He says that hydrogen-peroxide-based bombs—these are the bombs that are too dangerous to bring on board in 4-oz. bottles, but perfectly fine in four 1-oz. bottles combined after the checkpoints—can be detected with *existing scanners*, not with new scanners using new technology. Does anyone know what he’s talking about?
3. Give TSA officers more flexibility and rewards for initiative, and hold them accountable: No security agency on earth has the experience and pattern-recognition skills of TSA officers. We need to leverage that ability. TSA officers should have more discretion to interact with passengers and to work in looser teams throughout airports. And TSA’s leaders must be prepared to support initiative even when officers make mistakes. Currently, independence on the ground is more likely to lead to discipline than reward.
This is a great idea, but it’s going to cost money. Being a TSA screener is a pretty lousy job. Morale is poor: “In surveys on employee morale and job satisfaction, TSA often performs poorly compared to other government agencies. In 2010 TSA ranked 220 out of 224 government agency subcomponents for employee satisfaction.” Pay is low: “The men and women at the front lines of the battle to keep the skies safe are among the lowest paid of all federal employees, and they have one of the highest injury rates.” And there is traditionally a high turnover: 20% in 2008. The 2011 decision allowing TSA workers to unionize will help this somewhat, but for it to really work, the rules can’t be this limiting: “the paper outlining his decision precludes negotiations on security policies, pay, pensions and compensation, proficiency testing, job qualifications and discipline standards. It also will prohibit screeners from striking or engaging in work slowdowns.”
TSA workers who are smart, flexible, and show initiative will cost money, and that’ll be difficult when the TSA’s budget is being cut.
4. Eliminate baggage fees: Much of the pain at TSA checkpoints these days can be attributed to passengers overstuffing their carry-on luggage to avoid baggage fees. The airlines had their reasons for implementing these fees, but the result has been a checkpoint nightmare. Airlines might increase ticket prices slightly to compensate for the lost revenue, but the main impact would be that checkpoint screening for everybody will be faster and safer.
Another great idea, but I don’t see how we can do it without passing a law forbidding airlines to charge those fees. Over the past few years, airlines have drastically increased fees as a revenue source. Sneaking in extra charges allows them to advertise lower prices, and I don’t see that changing anytime soon.
5. Randomize security: Predictability is deadly. Banned-item lists, rigid protocols—if terrorists know what to expect at the airport, they have a greater chance of evading our system.
This would be a disaster. Actually, I’m surprised Hawley even mentions it, given that he wrote this a few paragraphs earlier:
One brilliant bit of streamlining from the consultants: It turned out that if the outline of two footprints was drawn on a mat in the area for using metal-detecting wands, most people stepped on the feet with no prompting and spread their legs in the most efficient stance. Every second counts when you’re processing thousands of passengers a day.
Randomization would slow checkpoints down to a crawl, as well as anger passengers. Do I have to take my shoes off or not? Does my computer go in the bin or not? (Even the weird but mostly consistent rule about laptops vs. iPads is annoying people.) Yesterday, liquids were allowed—today they’re banned. But at this airport, the TSA is confiscating anything with more than two ounces of aluminum and questioning people carrying Tom Clancy novels.
I’m not even convinced this would be a hardship for the terrorists. I’ve gotten really good at avoiding lanes with full-body scanners, and presumably the terrorists will simply assume that all security regulations are in force at all times. I’d like to see a cost-benefit analysis of this sort of thing first.
Hawley’s concluding paragraph:
In America, any successful attack—no matter how small—is likely to lead to a series of public recriminations and witch hunts. But security is a series of trade-offs. We’ve made it through the 10 years after 9/11 without another attack, something that was not a given. But no security system can be maintained over the long term without public support and cooperation. If Americans are ready to embrace risk, it is time to strike a new balance.
I agree with this. Sadly, I’m not optimistic for change anytime soon. There’s one point Hawley makes, but I don’t think he makes it strongly enough. He says:
I wanted to reduce the amount of time that officers spent searching for low-risk objects, but politics intervened at every turn. Lighters were untouchable, having been banned by an act of Congress. And despite the radically reduced risk that knives and box cutters presented in the post-9/11 world, allowing them back on board was considered too emotionally charged for the American public.
This is the fundamental political problem of airport security: it’s in nobody’s self-interest to take a stand for what might appear to be reduced security. Imagine that the TSA management announces a new rule that box cutters are now okay, and that they respond to critics by explaining that the current risks to airplanes don’t warrant prohibiting them. Even if they’re right, they’re open to attacks from political opponents that they’re not taking terrorism seriously enough. And if they’re wrong, their careers are over.
It’s even worse when it’s elected officials who have to make the decision. Which congressman is going to jeopardize his political career by standing up and saying that the cigarette lighter ban is stupid and should be repealed? It’s all political risk, and no political gain.
We have the same problem with the no-fly list: Congress mandates that the TSA match passengers against these lists. Rolling this back is politically difficult at the best of times, and impossible in today’s climate, even if the TSA decided it wanted to do so.
I am very impressed with Hawley’s essay. I do wonder where it came from. This wasn’t the same argument Hawley made when I debated him last month on the “Economist” website. This definitely wasn’t the same argument he made when I interviewed him in 2007, when he was still head of the TSA. But it’s great to read today.
Hopefully, someone is listening. And hopefully, our social climate will change so that these sorts of changes become politically possible.
Me in 2005:
Me in 2008:
Me in 2009:
Me complaining about the liquids ban:
New technology to scan liquids:
TSA employment, salary, and morale:
Airlines and baggage fees:
Confusing laptop rules:
Hawley and I debate in 2012:
Hawley and I debate in 2007:
Interesting data from the U.S. Government Accountability Office:
But congressional auditors have questions about other efficiencies as well, like having 3,000 “behavior detection” officers assigned to question passengers. The officers sidetracked 50,000 passengers in 2010, resulting in the arrests of 300 passengers, the GAO found. None turned out to be terrorists.
Yet in the same year, behavior detection teams apparently let at least 16 individuals allegedly involved in six subsequent terror plots slip through eight different airports. GAO said the individuals moved through protected airports on at least 23 different occasions.
I don’t believe the second paragraph. We haven’t had six terror plots between 2010 and today. And even if we did, how would the auditors know? But I’m sure the first paragraph is correct: the behavioral detection program is 0% effective at preventing terrorism.
The rest of the article is pretty depressing. The TSA refuses to back down on any of its security theater measures. At the same time, its budget is being cut and more people are flying. The result: longer waiting times at security.
From the CIA journal “Studies in Intelligence”: “Capturing the Potential of Outlier Ideas in the Intelligence Community.”
“Forever-day bugs”: a nice turn of phrase.
Password security at Linode.
It’s nice to see some companies implementing these sorts of security measures.
Brian Krebs writes about smart meter hacks.
A burglar was identified by his dance moves, captured on security cameras.
GCHQ, the UK government’s communications headquarters, has released two new—well, 70 years old, but new to us—cryptanalysis documents by Alan Turing.
Last year, I wrote about how social media sites are making it harder than ever for undercover police officers. This story talks about how biometric passports are making it harder than ever for undercover CIA agents.
Last year’s story:
At the RSA Conference this year, I noticed a trend of companies that have products and services designed to help victims recover from attacks. Kelly Jackson Higgins noticed the same thing: “Damage Mitigation as the New Defense.”
Army General Martin E. Dempsey, the chairman of the Joint Chiefs of Staff, said: “A cyber attack could stop our society in its tracks.” Gadzooks. A scared populace is much more willing to pour money into the cyberwar arms race.
I’ve long advocated investigation, intelligence, and emergency response as the places where we can most usefully spend our counterterrorism dollars. Here’s an example where that didn’t work.
Two very interesting points in this essay on cybercrime. The first is that cybercrime isn’t as big a problem as conventional wisdom makes it out to be. The second is that exaggerating the effects of cybercrime is a direct result of how the estimates are generated.
The reports are still early, but it seems that a bunch of terrorist planning documents were found embedded in a digital file of a porn movie.
I’ve often written about the base rate fallacy and how it makes tests for rare events—like airplane terrorists—useless because the false positives vastly outnumber the real positives. This essay uses that argument to demonstrate why the TSA’s FAST program is useless.
Facial recognition of avatars; I suppose this sort of thing might be useful someday.
This vendor is selling a tampon-shaped USB drive. Although it’s less secure now that there are blog posts about it.
With all the talk about airborne drones like the Predator, it’s easy to forget that drones can be in the water as well. Meet the Common Unmanned Surface Vessel (CUSV).
Funny security fail.
MobileScope looks like a great tool for monitoring and controlling what information third parties get from your smart phone apps.
The U.S. is exporting terrorism fears. “United States Secretary of Homeland Security Janet Napolitano has warned the New Zealand Government about the latest terrorist threat known as ‘body bombers.'” Further in the article: “Do we have specific credible evidence of a [body bomb] threat today? I would not say that we do, however, the importance is that we all lean forward.” Why the headline of this article is “NZ warned over ‘body bombers,'” and not “Napolitano admits ‘no credible evidence’ of body bomber threat” is beyond me.
This article talks about a database of stolen cell phone IDs that will be used to deny service. While I think this is a good idea, I don’t know how much it would deter cell phone theft. As long as there are countries that don’t implement blocking based on the IDs in the databases—and surely there always will be—there will be a market for stolen cell phones.
Plus, think of the possibilities for a denial-of-service attack. Can I report your cell phone as stolen and have it turned off? Surely no political party will think of doing that to the phones of all the leaders of a rival party the weekend before a major election.
“Liars and Outliers” has been available for about two months, and is selling well both in hardcover and e-book formats. More importantly, I’m very pleased with the book’s reception. The reviews I’ve gotten have been great, and I read a lot of tweets from people who have enjoyed the book. My goal was to give people new ways to think about trust and society—and by extension security and society—and it looks like I’ve succeeded.
InfoWorld: “The fact that ‘Liars and Outliers’ prompted me to go back and update my own thinking is truly the measure of Schneier’s latest book.”
ComputerWeekly.com: “I used to think that Bruce Schneier was out of touch with industry CISOs, but now I think that they are out of touch with him.”
Slashdot: “the reader will find that Schneier is one of the most original thinkers around.”
CSO: “If you get a chance to read Schneier’s book (beg, borrow or steal a copy—although I’m not sure what that says about trust if you steal it), you should do so…trust me!”
I’m really proud of the book. I think it’s the best thing I’ve written. If you haven’t read the book yet, please give it a look. It’s the synthesis of a lot of my security thinking to date. I really believe you will enjoy it, and that you’ll think differently after you read it.
So far, though, my readership has mostly been within the security community: people who already know my writing. What I need help with is getting the word out to people outside the circles of computer security or this blog. Anyone who has read the book, I would really appreciate a review somewhere. On your blog if you have one, on Amazon, anywhere. If you know of a venue that reviews, or otherwise discusses books and authors, I would appreciate an introduction.
Also, if there are any companies that would like me to do a book signing at their booth or reception at a conference—surely there’s someone hosting a reception at the Gartner Security Expo next month, or Black Hat in July—please contact me. Giving away copies of my book is a very effective lead generation tool.
This is a ridiculous overreaction:
The police bomb squad was called to 2 World Financial Center in lower Manhattan at midday when a security guard reported a package that seemed suspicious. Brookfield Properties, which runs the property, ordered an evacuation as a precaution.
That’s the entire building, a 44-story, 2.5-million-square-foot office building. And why?
The bomb squad determined the package was a fake explosive that looked like a 1940s-style pineapple grenade. It was mounted on a plaque that said “Complaint department: Take a number,” with a number attached to the pin.
It was addressed to someone at one of the financial institutions housed there and discovered by someone in the mail room.
If the grenade had been real, it could have destroyed—what?—a room. Of course, there’s no downside to Brookfield Properties overreacting.
We don’t know much, but here were my predictions from May 8:
1. There’s a lot more hyperbole to this story than reality.
2. The explosive would have either 1) been caught by pre-9/11 security, or 2) not been caught by post-9/11 security.
3. Nonetheless, it will be used to justify more invasive airport security.
Since then, we’ve learned that the plot was foiled by a police informant. So we don’t even know to what extent the informant created the plot, and whether it could have ever reached fruition were it not for the informant.
I’m speaking at Hack-in-the-Box in Amsterdam on May 25.
danah boyd is thinking about—in a draft essay, and as a recording of a presentation—fear and the attention economy. Basically, she is making the argument that the attention economy magnifies the culture of fear because fear is a good way to get attention, and that this is being made worse by the rise of social media.
A lot of this isn’t new. Fear has been used to sell products and policy (“Remember the Maine!” “Remember the Alamo!” “Remember 9/11!”) since forever. Newspapers have used fear to attract readers since there were readers. Long before there were child predators on the Internet, irrational panics swept society. Shark attacks in the 1970s. Marijuana in the 1950s. boyd relates a story from Glassner’s “The Culture of Fear” about elderly women being mugged in the 1990s.
These fears have largely been driven from the top down: from political leaders, from the news media. What’s new today—and I agree this is very interesting—is that in addition to these traditional top-down fears, we’re also seeing fears come from the bottom up. Social media are allowing all of us to sow fear and, because fear gets attention, is enticing us to do so. Rather than fostering empathy and bringing us all together, social media might be pushing us further apart.
A lot of this is related to my own writing about trust. Fear causes us to mistrust a group we’re fearful of, and to more strongly trust the group we’re a part of. It’s natural, and it can be manipulated. It can be amplified, and it can be dampened. How social media are both enabling and undermining trust is a really important thing for us to understand. As boyd says: “What we design and how we design it matters. And how our systems are used also matters, even if those uses aren’t what we intended.”
In “Liars and Outliers,” I use the metaphor of the Prisoner’s Dilemma to exemplify the conflict between group interest and self-interest. There are a gazillion academic papers on the Prisoner’s Dilemma from a good dozen different academic disciplines, but the weirdest dataset on real people playing the game is from a British game show called “Golden Balls.”
In the final round of the game, called “Split or Steal,” two contestants play a one-shot Prisoner’s Dilemma—technically, it’s a variant—choosing to either cooperate (and split a jackpot) or defect (and try to steal it). If one steals and the other splits, the stealer gets the whole jackpot. And, of course, if both contestants steal then both end up with nothing. There are lots of videos from the show on YouTube. (There are even two papers that analyze data from the game.) The videos are interesting to watch, not just to see how players cooperate and defect, but to watch their conversation beforehand and their reactions afterwards. I wrote a few paragraphs about this game for “Liars and Outliers,” but I ended up deleting them.
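The “technically, it’s a variant” aside is because in “Split or Steal” stealing only *weakly* dominates splitting: against a stealer you walk away with nothing either way, which is what opens the door to the maneuver described below. A minimal sketch of the payoff structure (the jackpot figure is illustrative, not from any particular episode):

```python
# Payoff structure of "Split or Steal," the final round of "Golden Balls."
# Returns (player1_payout, player2_payout) for choices "split" or "steal".
def payoff(choice1, choice2, jackpot):
    if choice1 == "split" and choice2 == "split":
        return (jackpot / 2, jackpot / 2)   # cooperate: share the pot
    if choice1 == "steal" and choice2 == "steal":
        return (0, 0)                       # mutual defection: both lose
    # One steals, one splits: the stealer takes everything.
    return (jackpot, 0) if choice1 == "steal" else (0, jackpot)

J = 10_000  # illustrative jackpot

# Stealing weakly dominates: it never pays less than splitting,
# and pays strictly more against a splitter.
for opponent in ("split", "steal"):
    assert payoff("steal", opponent, J)[0] >= payoff("split", opponent, J)[0]
```

In a strict Prisoner’s Dilemma, defecting against a defector still beats cooperating; here the two tie at zero, which is the wrinkle the maneuver below exploits.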
This is the weirdest, most surreal round of “Split or Steal” I have ever seen. The more I think about the psychology of it, the more interesting it is. I want you to watch it before I say more. Really.
Think about Nick’s strategy. He can’t trust that Ibrahim will split. More importantly, he can’t trust that Ibrahim will do what he said, because it’s in Ibrahim’s best interest to say one thing and do another. So he changes the game. He offers to split the pot outside the game—set up a meta-game of sorts—and removes Ibrahim’s incentive to lie.
In effect, Nick turns the Prisoner’s Dilemma, where both players make their decisions simultaneously, into a sort of Trust game: where one player makes a decision, and then the other does. In a classic Trust game, Player A gets a pot of money. He gives some percentage of it to Player B. It is then multiplied by some amount, and Player B gives some percentage of it back to Player A. In a classic rational self-interest model, it makes no sense for Player B to give any of the money back to Player A. Given that, it makes no sense for Player A to give any of the money to Player B in the first place. But if Player A gives Player B 100%, and Player B gives Player A back 50% of the increased pot, they both end up the happiest.
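The classic Trust game mechanics can be made concrete with numbers. The $10 stake and 3x multiplier below are illustrative assumptions; experimenters use various values:

```python
# Classic Trust game: A sends a fraction of a stake to B, the sent amount
# is multiplied, then B returns a fraction of the multiplied pot to A.
def trust_game(stake, fraction_sent, multiplier, fraction_returned):
    sent = stake * fraction_sent
    pot = sent * multiplier
    returned = pot * fraction_returned
    a_total = stake - sent + returned   # A keeps the rest plus the return
    b_total = pot - returned            # B keeps whatever isn't returned
    return a_total, b_total

# "Rational" play: B returns nothing, so A sends nothing; A keeps the stake.
assert trust_game(10, 0.0, 3, 0.0) == (10, 0)

# Full trust: A sends 100%, B returns 50% of the tripled pot.
# Both walk away with $15 -- better than the $10/$0 of rational play.
a_total, b_total = trust_game(10, 1.0, 3, 0.5)
```

The point of the game is exactly the gap between those two outcomes: mutual trust leaves both players strictly better off than the self-interested equilibrium.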
Nick sets himself up as Player B, promising to give Ibrahim 50% of the jackpot outside of the game. Ibrahim is now Player A, deciding whether to give Nick the money in the first place. But unlike a classic Trust game, Ibrahim can’t keep the money if he doesn’t give it to Nick. So he might as well give the money to Nick. The game is turned on its head; trusting Nick now means letting him have all the money. Not trusting Nick means…well, it doesn’t mean anything anymore. All that’s left is not letting Nick have the money out of spite—and that emotion seems out of place in the conversation. Ibrahim decides to trust Nick, because it’s the only option that makes any sense. (Although the process is funny to watch. Ibrahim can’t figure out what’s going on. He tries to steer the conversation back to mutual trust—how important his word is—but it’s no longer relevant. And, at the end, he first picks up the “Steal” ball before saying “okay, I’ll tell you what, I’m going to go with you” and changing his mind. That bit of honesty demonstrates how effectively subterfuge was removed from Ibrahim’s game.)
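Ibrahim’s transformed decision can be sketched as follows. The sketch takes Nick’s announcement that he will steal at face value, as the paragraph above does, and lets p stand for Ibrahim’s belief that Nick will honor the side deal (jackpot and probabilities are illustrative):

```python
# Ibrahim's expected payout once Nick has committed to "steal, then split
# the winnings outside the game." p = Ibrahim's estimate of the probability
# that Nick actually honors the side deal.
def ibrahim_payout(choice, p, jackpot):
    if choice == "steal":
        return 0.0                 # steal vs. steal: both get nothing
    return p * (jackpot / 2)       # split: Nick takes all, shares half if honest

J = 10_000  # illustrative jackpot

# Splitting is never worse than stealing, no matter how little Ibrahim
# trusts Nick -- which is why refusing is pure spite, not strategy.
for p in (0.0, 0.1, 0.5, 1.0):
    assert ibrahim_payout("split", p, J) >= ibrahim_payout("steal", p, J)
```

Once Nick’s announcement is believed, Ibrahim’s “Steal” ball can only ever produce zero, so trusting Nick is, as the essay says, the only option that makes any sense.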
Nick, for his part, having removed the subterfuge from the game, can safely choose “Split.” Although notice that he does so before Ibrahim chooses, so he’s sure his psychological manipulation worked. I wouldn’t have been so cocky.
Videos of “Split or Steal”:
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2012 by Bruce Schneier.