Crypto-Gram

April 15, 2011

by Bruce Schneier
Chief Security Technology Officer, BT
schneier@schneier.com
http://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-1104.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively comment section. An RSS feed is available.


In this issue:
      Detecting Cheaters
      Ebook Fraud
      Unanticipated Security Risk of Keeping Your Money in a Home Safe
      News
      Changing Incentives Creates Security Risks
      Euro Coin Recycling Scam
      Security Fears of Wi-Fi in London Underground
      Schneier News
      Epsilon Hack
      Schneier's Law
      How did the CIA and FBI Know that Australian Government Computers Were Hacked?


Detecting Cheaters

Our brains are specially designed to deal with cheating in social exchanges. The evolutionary psychology explanation is that we evolved brain heuristics for the social problems that our prehistoric ancestors had to deal with. Once humans became good at cheating, they then had to become good at detecting cheating—otherwise, the social group would fall apart.

Perhaps the most vivid demonstration of this can be seen with variations on what’s known as the Wason selection task, named after the psychologist who first studied it. Back in the 1960s, it was a test of logical reasoning; today, it’s used more as a demonstration of evolutionary psychology. But before we get to the experiment, let’s get into the mathematical background.

Propositional calculus is a system for deducing conclusions from true premises. It uses variables for statements because the logic works regardless of what the statements are. College courses on the subject are taught by either the mathematics or the philosophy department, and they’re not generally considered to be easy classes. Two particular rules of inference are relevant here: *modus ponens* and *modus tollens*. Both allow you to reason from a statement of the form, “if P, then Q.” (If Socrates was a man, then Socrates was mortal. If you are to eat dessert, then you must first eat your vegetables. If it is raining, then Gwendolyn had Crunchy Wunchies for breakfast. That sort of thing.) *Modus ponens* goes like this:

If P, then Q. P. Therefore, Q.

In other words, if you assume the conditional rule is true, and if you assume the antecedent of that rule is true, then the consequent is true. So,

If Socrates was a man, then Socrates was mortal. Socrates was a man. Therefore, Socrates was mortal.

*Modus tollens* is more complicated:

If P, then Q. Not Q. Therefore, not P.

If Socrates was a man, then Socrates was mortal. Socrates was not mortal. Therefore, Socrates was not a man.

This makes sense: if Socrates was not mortal, then he’d be a demigod or a stone statue or something.

Both are valid forms of logical reasoning. If you know “if P, then Q” and “P,” then you know “Q.” If you know “if P, then Q” and “not Q,” then you know “not P.” (The other two similar forms don’t work. If you know “if P, then Q” and “Q,” you don’t know anything about “P.” And if you know “if P, then Q” and “not P,” then you don’t know anything about “Q.”)
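
If you want to check these forms mechanically, a few lines of Python will do it. Here’s a minimal sketch (my illustration, not anything from the psychology literature) that brute-forces every truth assignment: an argument form is valid exactly when no assignment makes all the premises true and the conclusion false.

from itertools import product

# "If P, then Q" is false only when P is true and Q is false.
def implies(p, q):
    return (not p) or q

# An argument form is valid if no truth assignment makes every
# premise true while leaving the conclusion false.
def valid(premises, conclusion):
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens: if P then Q; P; therefore Q.
print(valid([implies, lambda p, q: p], lambda p, q: q))         # True
# Modus tollens: if P then Q; not Q; therefore not P.
print(valid([implies, lambda p, q: not q], lambda p, q: not p)) # True
# The two forms that don't work:
print(valid([implies, lambda p, q: q], lambda p, q: p))         # False
print(valid([implies, lambda p, q: not p], lambda p, q: not q)) # False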

If I explained this in front of an audience full of normal people, not mathematicians or philosophers, most of them would be lost. Unsurprisingly, they would have trouble either explaining the rules or using them properly. Just ask any grad student who has had to teach a formal logic class; people have trouble with this.

Consider the Wason selection task. Subjects are presented with four cards next to each other on a table. Each card represents a person, with each side listing some statement about that person. The subject is then given a general rule and asked which cards he would have to turn over to ensure that the four people satisfied that rule. For example, the general rule might be, “If a person travels to Boston, then he or she takes a plane.” The four cards might correspond to travelers and have a destination on one side and a mode of transport on the other. On the side facing the subject, they read: “went to Boston,” “went to New York,” “took a plane,” and “took a car.” Formal logic states that the rule is violated if someone goes to Boston without taking a plane. Translating into propositional calculus, there’s the general rule: “if P, then Q.” The four cards are “P,” “not P,” “Q,” and “not Q.” To verify that “if P, then Q” is a valid rule, you have to verify *modus ponens* by turning over the “P” card and making sure that the reverse says “Q.” To verify *modus tollens*, you turn over the “not Q” card and make sure that the reverse doesn’t say “P.”

Shifting back to the example, you need to turn over the “went to Boston” card to make sure that person took a plane, and you need to turn over the “took a car” card to make sure that person didn’t go to Boston. You don’t—as many people think—need to turn over the “took a plane” card to see if it says “went to Boston,” because you don’t care: the person might have been flying to Boston, New York, San Francisco, or London. The rule only says that people going to Boston fly; it doesn’t break the rule if someone flies elsewhere.

If you’re confused, you aren’t alone. When Wason first did this study, fewer than 10 percent of his subjects got it right. Others replicated the study and got similar results. The best result I’ve seen is “fewer than 25 percent.” Training in formal logic doesn’t seem to help very much. Neither does ensuring that the example is drawn from events and topics with which the subjects are familiar. People are just bad at the Wason selection task.

This isn’t just another “math is hard” story. There’s a point to this. The one variation of this task that people are surprisingly good at getting right is when the rule has to do with cheating and privilege. For example, change the four cards to children in a family—“gets dessert,” “doesn’t get dessert,” “ate vegetables,” and “didn’t eat vegetables”—and change the rule to “If a child gets dessert, he or she ate his or her vegetables.” Many people—65 to 80 percent—get it right immediately. They turn over the “gets dessert” card, making sure the child ate his vegetables, and they turn over the “didn’t eat vegetables” card, making sure the child didn’t get dessert. Another way of saying this is that they turn over the “benefit received” card to make sure the cost was paid. And they turn over the “cost not paid” card to make sure no benefit was received. They look for cheaters.
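
Mechanically, the two versions of the task are identical. Here’s a toy Python sketch (mine, purely illustrative) that makes the point: for any rule “if P, then Q,” the only cards worth flipping are the visible “P” and the visible “not Q,” whatever words happen to be printed on them.

# For a rule "if P, then Q," a card can violate the rule only if it
# shows P on one side and not-Q on the other. So the only cards worth
# flipping are the visible "P" card and the visible "not Q" card.
def cards_to_flip(faces, p, not_q):
    return [face for face in faces if face in (p, not_q)]

travel = ["went to Boston", "went to New York", "took a plane", "took a car"]
print(cards_to_flip(travel, p="went to Boston", not_q="took a car"))
# -> ['went to Boston', 'took a car']

dessert = ["gets dessert", "doesn't get dessert",
           "ate vegetables", "didn't eat vegetables"]
print(cards_to_flip(dessert, p="gets dessert", not_q="didn't eat vegetables"))
# -> ['gets dessert', "didn't eat vegetables"]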

The difference is startling. Subjects don’t need formal logic training. They don’t need math or philosophy. When asked to explain their reasoning, they say things like the answer “popped out at them.”

Researchers, particularly evolutionary psychologists Leda Cosmides and John Tooby, have run this experiment with a variety of wordings and settings and on a variety of subjects: adults in the US, UK, Germany, Italy, France, and Hong Kong; Ecuadorian schoolchildren; and Shiwiar tribesmen in Ecuador. The results are the same: people are bad at the Wason selection task, except when the wording involves cheating.

In the world of propositional calculus, there’s absolutely no difference between a rule about traveling to Boston by plane and a rule about eating vegetables to get dessert. But in our brains, there’s an enormous difference: the first is an arbitrary rule about the world, and the second is a rule of social exchange. It’s of the form “If you take Benefit B, you must first satisfy Requirement R.”

Our brains are optimized to detect cheaters in a social exchange. We’re good at it. Even as children, we intuitively notice when someone gets a benefit he didn’t pay the cost for. Those of us who grew up with a sibling have experienced how one child not only knew that the other had cheated, but felt compelled to announce it to the rest of the family. As adults, we might have learned that life isn’t fair, but we still know who among our friends cheats in social exchanges. We know who doesn’t pay his or her fair share of a group meal. At an airport, we might not notice the rule “If a plane is flying internationally, then it boards 15 minutes earlier than domestic flights.” But we’ll certainly notice who breaks the “If you board first, then you must be a first-class passenger” rule.

This essay was originally published in IEEE Security & Privacy
http://www.schneier.com/essay-337.html
It is an excerpt from the draft of my new book.
https://www.schneier.com/blog/archives/2011/02/…

Another explanation of the Wason Selection Task, with a possible correlation with psychopathy.
http://thelastpsychiatrist.com/2010/12/…


Ebook Fraud

There is an interesting post—and discussion—on the blog Making Light about ebook fraud. Currently there are two types of fraud. The first is content farming, discussed in two interesting blog posts linked to below. People are creating automatically generated content, web-collected content, or fake content, turning it into a book, and selling it on an ebook site like Amazon.com. Then they use multiple identities to give it good reviews. (If it gets a bad review, the scammer just relists the same content under a new name.) That second blog post contains a screen shot of something called “Autopilot Kindle Cash,” which promises to teach people how to post dozens of ebooks to Amazon.com per day.

The second type of fraud is stealing a book and selling it as an ebook. So someone could scan a real book and sell it on an ebook site, even though he doesn’t own the copyright. It could be a book that isn’t already available as an ebook, or it could be a “low cost” version of a book that is already available. Amazon doesn’t seem particularly motivated to deal with this sort of fraud. And it too is suitable for automation.

Broadly speaking, there’s nothing new here. All complex ecosystems have parasites, and every open communications system we’ve ever built gets overrun by scammers and spammers. Far from making editors superfluous, systems that democratize publishing have an even greater need for editors. The solutions are not new, either: reputation-based systems, trusted recommenders, white lists, takedown notices. Google has implemented a bunch of security countermeasures against content farming; ebook sellers should implement them as well. It’ll be interesting to see what particular sort of mix works in this case.
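
What might a reputation-based countermeasure look like? Here’s one deliberately naive sketch (entirely hypothetical; the data model and thresholds are invented) that flags review rings, i.e., clusters of accounts whose review histories overlap almost completely:

from itertools import combinations

# Hypothetical review-ring detector. reviews maps a reviewer ID to
# the set of title IDs that account has reviewed. Pairs of accounts
# whose review histories overlap almost completely are suspicious.
def suspicious_pairs(reviews, min_shared=5, min_jaccard=0.8):
    flagged = []
    for a, b in combinations(sorted(reviews), 2):
        shared = reviews[a] & reviews[b]
        jaccard = len(shared) / len(reviews[a] | reviews[b])
        if len(shared) >= min_shared and jaccard >= min_jaccard:
            flagged.append((a, b))
    return flagged

ring = {"acct1": {"b1", "b2", "b3", "b4", "b5"},
        "acct2": {"b1", "b2", "b3", "b4", "b5"},
        "normal": {"b1", "x9"}}
print(suspicious_pairs(ring))   # -> [('acct1', 'acct2')]

A real system would also weight account age, purchase history, and review timing; the point is only that this kind of signal is cheap to compute from data the ebook sellers already have.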

http://nielsenhayden.com/makinglight/archives/…

http://www.impactmedia.co.uk//search-marketing/…
http://www.publishingtrends.com/2011/03/…

“All complex ecosystems have parasites”:
http://craphound.com/complexecosystems.txt


Unanticipated Security Risk of Keeping Your Money in a Home Safe

In Japan, lots of people—especially older people—keep their life savings in cash in their homes. (The country’s banks pay very low interest rates, so the incentive to deposit that money into bank accounts is lower than in other countries.) This is all well and good, until a tsunami destroys your home and washes your safe out to sea. Then, when it washes up onto the beach, the police collect it.

They have thousands of safes, and—in most cases—no way of determining who owns them.

After three months, the money goes to the government.

http://news.yahoo.com/s/ap/20110411/ap_on_bi_ge/…


News

Hacking cars with MP3 files: “By adding extra code to a digital music file, they were able to turn a song burned to CD into a Trojan horse. When played on the car’s stereo, this song could alter the firmware of the car’s stereo system, giving attackers an entry point to change other components on the car.”
http://www.itworld.com/security/139794/…
http://www.technologyreview.com/computing/35094/
http://www.nytimes.com/2011/03/10/business/10hack.html

Hacking ATM users by gluing down keys.
http://www.sfexaminer.com/local/crime/2011/03/…

Zombie fungus: as far as natural-world security stories go, this one is pretty impressive.
https://www.schneier.com/blog/archives/2011/03/…

RSA Security, Inc. hacked. The company, not the algorithm.
https://www.schneier.com/blog/archives/2011/03/…

I didn’t post the video of a Times Square video screen being hacked with an iPhone when I first saw it because I suspected a hoax. Turns out, I was right. It wasn’t even two guys faking hacking a Times Square video screen. It was a movie studio faking two guys faking hacking a Times Square video screen.
http://movies.yahoo.com/…

This is a really interesting paper: “Folk Models of Home Computer Security,” by Rick Wash. It was presented at SOUPS, the Symposium on Usable Privacy and Security, last year.
http://www.rickwash.com/papers/…

I found this article on the difference between threats and vulnerabilities to be very interesting. I like his taxonomy.
http://jps.anl.gov/Volume4_iss2/Paper3-RGJohnston.pdf

Technology for transmitting data through steel.
https://www.schneier.com/blog/archives/2011/03/…
What’s interesting is that this technology can be used to transmit through TEMPEST shielding.

Detecting words and phrases in encrypted VoIP calls.
http://portal.acm.org/citation.cfm?doid=1880022.1880029
http://www.cs.unc.edu/~fabian/papers/tissec2010.pdf
I wrote about this in 2008.
https://www.schneier.com/blog/archives/2008/06/…

Interesting research: “One Bad Apple Spoils the Bunch: Exploiting P2P Applications to Trace and Profile Tor Users”
http://hal.inria.fr/inria-00574178/en/

A really interesting article on how to authenticate a launch order in a nuclear command and control system.
http://www.slate.com/id/2286735

Interesting article on William Friedman and biliteral ciphers.
http://www.cabinetmagazine.org/issues/40/sherman.php

Nice infographic on detecting liars.
http://s.westword.com/showandtell/2011/03/…

New paper by Ross Anderson: “Can We Fix the Security Economics of Federated Authentication?”
http://spw.stca.herts.ac.uk/2.pdf
Also a blog post on the research.
http://www.lightbluetouchpaper.org/2011/03/24/…

In this amusing story of a terrorist plotter using pencil-and-paper cryptography instead of actually secure cryptography, there’s this great paragraph: “Despite urging by the Yemen-based al Qaida leader Anwar Al Awlaki, Karim also rejected the use of a sophisticated code program called ‘Mujhaddin Secrets’, which implements all the AES candidate cyphers, ‘because “kaffirs”, or non-believers, know about it so it must be less secure’.” Actually, that’s not how peer review works.
http://www.theregister.co.uk/2011/03/22/…

The FBI asks for cryptanalysis help from the community.
http://www.fbi.gov/news/stories/2011/march/…
http://www.networkworld.com/community//…
http://news.yahoo.com/s/yblog_thelookout/20110329/…

Comodo Group issues bogus SSL certificates. This isn’t good.
http://www.wired.com/threatlevel/2011/03/…
http://www.wired.com/threatlevel/2008/08/…
http://threatpost.com/en_us/s/…
http://www.bbc.co.uk/news/technology-12847072
http://www.zdnet.com//security/…
https://www.comodo.com/…
Fake certs for Google, Yahoo, and Skype? Wow. This isn’t the first time Comodo has screwed up with certificates. The safest thing for us users to do would be to remove the Comodo root certificate from our browsers so that none of their certificates work, but we don’t have the capability to do that. The browser companies—Microsoft, Mozilla, Opera, etc.—could do that, but my guess is they won’t. The economic incentives don’t work properly. Comodo is likely to sue any browser company that takes this sort of action, and Comodo’s customers might sue as well. So it’s smarter for the browser companies to just ignore the issue and pass the problem to us users.

34 SCADA vulnerabilities published. It’s hard to tell how serious this is.
http://www.wired.com/threatlevel/2011/03/…

Here’s some very clever thinking from India’s chief economic adviser. In order to reduce bribery, he proposes legalizing the giving of bribes. The idea is that after a bribe is given, both the briber and the bribee are partners in the crime, and neither wants the bribe to become public. However, if it is legal to give a bribe but illegal to take one, then after the bribe is paid the giver is much more likely to cooperate with the police. He notes that this only works for a certain class of bribes: when you have to bribe officials for something you are already entitled to receive. It won’t work for any long-term bribery relationship, or in any situation where the briber would otherwise not want the bribe to become public.
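
To see the logic, consider a toy payoff model (mine, with made-up numbers). Under the current law, reporting a bribe convicts the giver too, so silence dominates. Under the proposed asymmetric law, reporting costs the giver nothing, so the official can no longer count on silence.

# Toy payoff model of the proposal (all numbers invented).
BRIBE = 100    # what the giver pays the official
FINE = 500     # penalty for anyone convicted of bribery
REFUND = 100   # bribe returned to the giver on a successful report

def giver_payoff(reports, giving_is_illegal):
    if not reports:
        return -BRIBE
    penalty = FINE if giving_is_illegal else 0
    return -BRIBE + REFUND - penalty

# Current law: reporting costs the giver 500, so nobody reports.
print(giver_payoff(reports=True, giving_is_illegal=True))    # -500
print(giver_payoff(reports=False, giving_is_illegal=True))   # -100
# Proposed law: reporting is free, so silence can't be assumed.
print(giver_payoff(reports=True, giving_is_illegal=False))   # 0
print(giver_payoff(reports=False, giving_is_illegal=False))  # -100
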
http://finmin.nic.in/WorkingPaper/…
http://s.wsj.com/indiarealtime/2011/03/30/…

“Terror, Security, and Money: Balancing the Risks, Benefits, and Costs of Homeland Security,” by John Mueller and Mark Stewart. Of course, it’s not cost effective.
http://polisci.osu.edu/faculty/jmueller/MID11TSM.PDF

An optical stun ray has been patented; no idea if it actually works.
http://www.scientificamerican.com/article.cfm?…

New research allows a computer to be pinpointed to within 690 meters, from over the Internet. Impressive, and scary.
http://www.newscientist.com/article/…

Terrorist alerts on Facebook and Twitter. What could possibly go wrong?
https://www.schneier.com/blog/archives/2011/04/…

The former CIA general counsel, John A. Rizzo, talks about his agency’s assassination program, which has increased dramatically under the Obama administration.
http://www.newsweek.com/2011/02/13/…
And the ACLU Deputy Legal Director comments on the interview.
http://www.latimes.com/news/opinion/commentary/…

New French law mandates sites save personal information even when it is no longer required.
https://www.schneier.com/blog/archives/2011/04/…

Israel is thinking about creating a counter-cyberterrorism unit. You’d think the country would already have one of those.
http://www.theregister.co.uk/2011/04/06/…


Changing Incentives Creates Security Risks

One of the things I am writing about in my new book is how security equilibriums change. They often change because of technology, but they sometimes change because of incentives.

An interesting example of this is the recent scandal in the Washington, DC, public school system over teachers changing their students’ test answers.

In the U.S., under the No Child Left Behind Act, students have to pass certain tests; otherwise, schools are penalized. In the District of Columbia, things went further. Michelle Rhee, chancellor of the public school system from 2007 to 2010, offered teachers $8,000 bonuses—and threatened them with termination—for improving test scores. Scores did increase significantly during the period, and the schools were held up as examples of how incentives affect teaching behavior.

It turns out that a lot of those score increases were faked. In addition to teaching students, teachers cheated on their students’ tests by changing wrong answers to correct ones. That’s how the cheating was discovered; researchers looked at the actual test papers and found more erasures than usual, and many more erasures from wrong answers to correct ones than could be explained by anything other than deliberate manipulation.
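
The detection method is worth sketching, because it’s a nice example of a statistical audit. Here’s a toy version in Python (mine; every number is invented): compare each classroom’s average wrong-to-right erasure count against the other classrooms in the district, and flag the implausible outliers.

from statistics import mean, stdev

# Illustrative sketch (all numbers invented): flag classrooms whose
# wrong-to-right (WTR) erasure rate is implausibly far above that of
# the other classrooms in the district.
def flagged(counts, threshold=4.0):
    """counts: dict mapping classroom -> mean WTR erasures per test."""
    out = {}
    for room, value in counts.items():
        others = [v for k, v in counts.items() if k != room]
        z = (value - mean(others)) / stdev(others)
        if z > threshold:
            out[room] = round(z, 1)
    return out

district = {"room 101": 1.2, "room 102": 0.9, "room 103": 1.4,
            "room 104": 1.1, "room 105": 12.7}
print(flagged(district))   # -> {'room 105': 55.5}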

Teachers were always able to manipulate their students’ test answers, but before, there wasn’t much incentive to do so. With Rhee’s changes, there was a much greater incentive to cheat.

The point is that whatever security measures were in place to prevent teacher cheating before the financial incentives and threats of firing weren’t sufficient to prevent teacher cheating afterwards. Because Rhee significantly increased the costs of cooperation (by threatening to fire teachers of poorly performing students) and increased the benefits of defection ($8,000), she created a security risk. And she should have increased security measures to restore balance to those incentives.

http://www.usatoday.com/news/education/…
This is not isolated to DC. It has happened elsewhere as well.
https://www.schneier.com/blog/archives/2010/06/…


Euro Coin Recycling Scam

This story is just plain weird. Regularly, damaged coins are taken out of circulation. They’re destroyed and then sold to scrap metal dealers. That makes sense, but it seems that one- and two-euro coins aren’t destroyed very well. They’re both bi-metal designs, and they’re just separated into an inner core and an outer ring and then sold to Chinese scrap metal dealers. The dealers, being no dummies, put the two parts back together and sold them back to a German bank at face value. That bank was chosen because it accepts damaged coins and doesn’t inspect them very carefully.

Is this not entirely predictable? If you’re going to take coins out of circulation, you had better use a metal shredder. (Except for U.S. pennies, whose component metals are worth more than their face value.)

http://www.spiegel.de/international/germany/…

Pennies.
http://www.snopes.com/business/money/pennycost.asp


Security Fears of Wi-Fi in London Underground

The London Underground is getting Wi-Fi. Of course there are security fears. The article below worries that this will enable people to use their laptops as cell phones, and notes that in Afghanistan and Iraq, bombs have been detonated using cell phones. Also, eavesdropping software exists.

This is just silly. We could have a similar conversation regarding any piece of our infrastructure. Yes, the bad guys could use it, just as they use telephones and automobiles and all-night restaurants. If we didn’t deploy technologies because of this fear, we’d still be living in the Middle Ages.

http://www.bbc.co.uk/news/uk-england-london-12856289


Schneier News

I’m speaking at InfoSec Europe in London on April 20.
http://www.infosec.co.uk/page.cfm/Link=687

I’m also speaking at the Activate conference in New York on April 28.
http://www.guardian.co.uk/activate/new-york-programme


Epsilon Hack

I have no idea why the Epsilon hack is getting so much press.

Yes, millions of names and e-mail addresses might have been stolen. Yes, other customer information might have been stolen, too. Yes, this personal information could be used to create more personalized and better targeted phishing attacks.

So what? These sorts of breaches happen all the time, and even more personal information is stolen.

I get that over 50 companies were affected, and some of them are big names. But the hack of the century? Hardly.

http://www.bloomberg.com/news/2011-04-04/…
http://www.infosecurity-magazine.com/view/17088/…
http://articles.cnn.com/2011-04-04/tech/…
http://www.usatoday.com/tech/news/…
http://www.huffingtonpost.com/2011/04/03/…

“The hack of the century”:
http://s.computerworld.com/18079/…


Schneier’s Law

Back in 1998, I wrote: “Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break.”

In 2004, Cory Doctorow called this Schneier’s law: “…what I think of as Schneier’s Law: ‘any person can invent a security system so clever that she or he can’t think of how to break it.'”

The general idea is older than my writing. Wikipedia points out that in The Codebreakers, David Kahn writes: “Few false ideas have more firmly gripped the minds of so many intelligent men than the one that, if they just tried, they could invent a cipher that no one could break.”

The idea is even older. Back in 1864, Charles Babbage wrote: “One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.”

My phrasing is different, though. Here’s my original quote in context: “Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break. It’s not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis. And the only way to prove that is to subject the algorithm to years of analysis by the best cryptographers around.”

And here’s me in 2006: “Anyone can invent a security system that he himself cannot break. I’ve said this so often that Cory Doctorow has named it ‘Schneier’s Law’: When someone hands you a security system and says, ‘I believe this is secure,’ the first thing you have to ask is, ‘Who the hell are you?’ Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.”

And that’s the point I want to make. It’s not that people believe they can create an unbreakable cipher; it’s that people create a cipher that they themselves can’t break, and then use that as evidence they’ve created an unbreakable cipher.

Me in 1998:
http://www.schneier.com/…

Me in 2006:
http://www.schneier.com/crypto-gram-0608.html#7

Cory Doctorow:
http://craphound.com/msftdrm.txt

Charles Babbage:
http://www-history.mcs.st-and.ac.uk/history/Extras/…


How did the CIA and FBI Know that Australian Government Computers Were Hacked?

Newspapers are reporting that, for about a month, hackers had access to computers “of at least 10 federal ministers including the Prime Minister, Foreign Minister and Defence Minister.”

That’s not much of a surprise. What is odd is the statement that “Australian intelligence agencies were tipped off to the cyber-spy raid by US intelligence officials within the Central Intelligence Agency and the Federal Bureau of Investigation.”

How did the CIA and the FBI know? Did they see some intelligence traffic and assume that those computers were where the stolen e-mails were coming from? Or something else?

http://www.dailytelegraph.com.au/news/national/…


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2011 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.