Crypto-Gram

September 15, 2008

by Bruce Schneier
Chief Security Technology Officer, BT
schneier@schneier.com
http://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-0809.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      New Book: Schneier on Security
      Identity Farming
      BT, Phorm, and Me
      Security ROI
      Diebold Finally Admits its Voting Machines Drop Votes
      News
      Full Disclosure and the Boston Fare Card Hack
      Contest: Cory Doctorow’s Cipher Wheel Rings
      Schneier/BT News
      Photo ID Checks at Airport
      Mental Illness and Murder
      Movie-Plot Threats
      Comments from Readers


New Book: Schneier on Security

I have a new book coming out: “Schneier on Security.” It’s a collection of my essays, all written from June 2002 to June 2008. They’re all on my website, so regular readers won’t have missed anything if they don’t buy this book. But for those of you who want my essays in one easy-to-read place, or are planning to be shipwrecked on a desert island without Web access and would like to spend your time there pondering the sorts of questions I discuss in my essays, or want to give copies of my essays to friends and relatives as gifts, this book is for you. There are only 90 shopping days before Christmas.

The hardcover book retails for $30, but Amazon is already selling it for $20. If you want a signed copy, e-mail me. I’ll send you a signed copy for $30, including U.S. shipping, or $40, including shipping overseas. Yes, Amazon is cheaper—and you can always find me at a conference and ask me to sign the book.

Book:
http://www.schneier.com/book-sos.html

Essays:
http://www.schneier.com/essays.html

Order on Amazon.com:
http://www.amazon.com/exec/obidos/ASIN/0470395354/…


Identity Farming

Let me start off by saying that I’m making this whole thing up.

Imagine you’re in charge of infiltrating sleeper agents into the United States. The year is 1983, and the proliferation of identity databases is making it increasingly difficult to create fake credentials. Ten years ago, someone could have just shown up in the country and gotten a driver’s license, Social Security card and bank account—possibly using the identity of someone roughly the same age who died as a young child—but it’s getting harder. And you know that trend will only continue. So you decide to grow your own identities.

Call it “identity farming.” You invent a handful of infants. You apply for Social Security numbers for them. Eventually, you open bank accounts for them, file tax returns for them, register them to vote, and apply for credit cards in their name. And now, 25 years later, you have a handful of identities ready and waiting for some real people to step into them.

There are some complications, of course. Maybe you need people to sign their names as parents—or, at least, mothers. Maybe you need doctors to fill out birth certificates. Maybe you need to fill out paperwork certifying that you’re home-schooling these children. You’ll certainly want to exercise their financial identity: depositing money into their bank accounts and withdrawing it from ATMs, using their credit cards and paying the bills, and so on. And you’ll need to establish some sort of addresses for them, even if it is just a mail drop.

You won’t be able to get driver’s licenses or photo IDs in their name. That isn’t critical, though; in the U.S., more than 20 million adult citizens don’t have photo IDs. But other than that, I can’t think of any reason why identity farming wouldn’t work.

Here’s the real question: Do you actually have to show up for any part of your life?

Again, I made this all up. I have no evidence that anyone is actually doing this. It’s not something a criminal organization is likely to do; twenty-five years is too distant a payoff horizon. The same logic holds true for terrorist organizations; it’s not worth it. It might have been worth it to the KGB—although perhaps harder to justify after the Soviet Union broke up in 1991—and might be an attractive option for existing intelligence adversaries like China.

Immortals could also use this trick to perpetuate themselves, inventing their own children and gradually assuming their identities, then killing their parents off. They could even show up for their own driver’s license photos, wearing a beard as the father and blue spiked hair as the son. I’m told this is a common idea in Highlander fan fiction.

The point isn’t to create another movie plot threat, but to point out the central role that data has taken on in our lives. Previously, I’ve said that we all have a data shadow that follows us around, and that more and more institutions interact with our data shadows instead of with us. We only intersect with our data shadows once in a while—when we apply for a driver’s license or passport, for example—and those interactions are authenticated by older, less-secure interactions. The rest of the world assumes that our photo IDs glue us to our data shadows, ignoring the rather flimsy connection between us and our plastic cards. (And, no, REAL-ID won’t help.)

It seems to me that our data shadows are becoming increasingly distinct from us, almost with a life of their own. What’s important now is our shadows; we’re secondary. And as our society relies more and more on these shadows, we might even become unnecessary.

Our data shadows can live a perfectly normal life without us.

Data shadow essay:
http://www.schneier.com/essay-219.html

Interesting commentary.
http://www.examiner.com/…
This essay previously appeared on Wired.com.
http://www.wired.com/politics/security/commentary/…


BT, Phorm, and Me

Over the past year I have gotten many requests, both public and private, to comment on the BT and Phorm incident.

I was not involved with BT and Phorm, then or now. Everything I know about Phorm and BT’s relationship with Phorm came from the same news articles you read. I have not gotten involved as an employee of BT. But anything I say is—by definition—said by a BT executive. That’s not good.

So I’m sorry that I can’t write about Phorm. But—honestly—lots of others have been giving their views on the issue.

https://www.schneier.com/blog/archives/2008/09/…


Security ROI

Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.

It’s become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.

It’s a good idea in theory, but it’s mostly bunk in practice.

Before I get into the details, there’s one point I have to make. “ROI” as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It’s an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn’t make sense in this context.

But as anyone who has lived through a company’s vicious end-of-year budget-slashing exercises knows, when you’re trying to make your numbers, cutting costs is the same as increasing revenues. So while security can’t produce ROI, loss prevention most certainly affects a company’s bottom line.

And a company should implement only security countermeasures that affect its bottom line positively. It shouldn’t spend more on a security problem than the problem is worth. Conversely, it shouldn’t ignore problems that are costing it money when there are cheaper mitigation alternatives. A smart company needs to approach security as it would any other business decision: costs versus benefits.

The classic methodology is called annualized loss expectancy (ALE), and it’s straightforward. Calculate the cost of a security incident in both tangibles like time and money, and intangibles like reputation and competitive advantage. Multiply that by the chance the incident will occur in a year. That tells you how much you should spend to mitigate the risk. So, for example, if your store has a 10 percent chance of getting robbed and the cost of being robbed is $10,000, then you should spend $1,000 a year on security. Spend more than that, and you’re wasting money. Spend less than that, and you’re also wasting money.
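In code, that’s a one-line calculation. Here’s a minimal Python sketch (the function name is mine, just for illustration):

    def annualized_loss_expectancy(incident_cost, annual_probability):
        # Expected yearly loss: the cost of one incident times the chance
        # that it happens in a given year.
        return incident_cost * annual_probability

    # The store example from above: a 10 percent chance of a $10,000 robbery.
    print(annualized_loss_expectancy(10_000, 0.10))  # 1000.0 -- spend at most about $1,000 a year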

Of course, that $1,000 has to reduce the chance of being robbed to zero in order to be cost-effective. If a security measure cuts the chance of robbery by 40 percent—to 6 percent a year—then you should spend no more than $400 on it. If another security measure reduces it by 80 percent, it’s worth $800. And if two security measures both reduce the chance of being robbed by 50 percent and one costs $300 and the other $700, the first one is worth it and the second isn’t.
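Continuing that sketch, a countermeasure is worth at most the expected loss it removes, so the comparison is just as mechanical (again, the function name is mine):

    def mitigation_value(incident_cost, annual_probability, risk_reduction):
        # A countermeasure is worth at most the expected loss it eliminates.
        return incident_cost * annual_probability * risk_reduction

    # The examples from above, against a baseline 10 percent chance of a $10,000 robbery:
    print(mitigation_value(10_000, 0.10, 0.40))  # 400.0 -- worth at most $400
    print(mitigation_value(10_000, 0.10, 0.80))  # 800.0 -- worth at most $800

    # Two measures that each cut the risk in half, priced differently:
    for name, cost in [("measure A", 300), ("measure B", 700)]:
        worth = mitigation_value(10_000, 0.10, 0.50)  # each removes at most $500 of expected loss
        print(name, "is worth buying" if cost <= worth else "is not worth buying")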

The key to making this work is good data; the term of art is “actuarial tail.” If you’re doing an ALE analysis of a security camera at a convenience store, you need to know the crime rate in the store’s neighborhood and maybe have some idea of how much cameras improve the odds of convincing criminals to rob another store instead. You need to know how much a robbery costs: in merchandise, in time and annoyance, in lost sales due to spooked patrons, in employee morale. You need to know how much not having the cameras costs in terms of employee morale; maybe you’re having trouble hiring salespeople to work the night shift. With all that data, you can figure out if the cost of the camera is cheaper than the loss of revenue if you close the store at night—assuming that the closed store won’t get robbed as well. And then you can decide whether to install one.

Cybersecurity is considerably harder, because there just isn’t enough good data. There aren’t good crime rates for cyberspace, and we have a lot less data about how individual security countermeasures—or specific configurations of countermeasures—mitigate those risks. We don’t even have data on incident costs.

One problem is that the threat moves too quickly. The characteristics of the things we’re trying to prevent change so quickly that we can’t accumulate data fast enough. By the time we get some data, there’s a new threat model for which we don’t have enough data. So we can’t create ALE models.

But there’s another problem, and it’s that the math quickly falls apart when it comes to rare and expensive events. Imagine you calculate the cost—reputational costs, loss of customers, etc.—of having your company’s name in the newspaper after an embarrassing cybersecurity event to be $20 million. Also assume that the odds are 1 in 10,000 of that happening in any one year. ALE says you should spend no more than $2,000 mitigating that risk.

So far, so good. But maybe your CFO thinks an incident would cost only $10 million. You can’t argue, since we’re just estimating. But he just cut your security budget in half. A vendor trying to sell you a product finds a Web analysis claiming that the odds of this happening are actually 1 in 1,000. Accept this new number, and suddenly a product costing 10 times as much is still a good investment.

It gets worse when you deal with even more rare and expensive events. Imagine you’re in charge of terrorism mitigation at a chlorine plant. What’s the cost to your company, in money and reputation, of a large and very deadly explosion? $100 million? $1 billion? $10 billion? And the odds: 1 in a hundred thousand, 1 in a million, 1 in 10 million? Depending on how you answer those two questions—and any answer is really just a guess—you can justify spending anywhere from $10 to $100,000 annually to mitigate that risk.
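To see how sensitive the arithmetic is, plug the guesses from the two examples above into the same one-liner; the numbers below are the ones from the text, not new data:

    def ale(incident_cost, annual_probability):
        return incident_cost * annual_probability

    # Data-breach example: the inputs are guesses, and the budget swings with them.
    print(ale(20_000_000, 1 / 10_000))  # 2000.0  -- a $2,000-a-year mitigation budget
    print(ale(10_000_000, 1 / 10_000))  # 1000.0  -- the CFO's lower cost estimate halves it
    print(ale(20_000_000, 1 / 1_000))   # 20000.0 -- the vendor's odds justify a product costing 10 times as much

    # Chlorine-plant example: plausible guesses span four orders of magnitude.
    print(ale(100_000_000, 1 / 10_000_000))  # 10.0
    print(ale(10_000_000_000, 1 / 100_000))  # 100000.0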

Or take another example: airport security. Assume that all the new airport security measures increase the waiting time at airports by—and I’m making this up—30 minutes per passenger. There were 760 million passenger boardings in the United States in 2007. This means that the extra waiting time at airports has cost us a collective 43,000 years of extra waiting time. Assume a 70-year life expectancy, and the increased waiting time has “killed” 620 people per year—930 if you calculate the numbers based on 16 hours of awake time per day. So the question is: If we did away with increased airport security, would the result be more people dead from terrorism or fewer?
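Here is that back-of-the-envelope arithmetic as a short Python sketch, using only the figures and assumptions stated above:

    boardings = 760_000_000   # U.S. passenger boardings in 2007
    extra_minutes = 30        # assumed extra wait per passenger

    total_years = boardings * extra_minutes / 60 / 24 / 365
    print(round(total_years))   # about 43,000 collective years of waiting

    lifetimes = total_years / 70                     # 70-year life expectancy
    awake_lifetimes = total_years / (70 * 16 / 24)   # counting only 16 awake hours a day
    print(round(lifetimes), round(awake_lifetimes))  # roughly 620 and 930 "lives" per year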

This kind of thing is why most ROI models you get from security vendors are nonsense. Of course their model demonstrates that their product or service makes financial sense: They’ve jiggered the numbers so that they do.

This doesn’t mean that ALE is useless, but it does mean you should 1) mistrust any analyses that come from people with an agenda and 2) use any results as a general guideline only. So when you get an ROI model from your vendor, take its framework and plug in your own numbers. Don’t even show the vendor your improvements; it won’t consider any changes that make its product or service less cost-effective to be an “improvement.” And use those results as a general guide, along with risk management and compliance analyses, when you’re deciding what security products and services to buy.

Articles:
http://communities.intel.com/openport/s/it/2008/…
http://communities.intel.com/openport/s/it/2007/…
https://buildsecurityin.us-cert.gov/daisy/bsi/…
http://taosecurity.blogspot.com/2007/07/…
http://www.bloginfosec.com/2007/07/13/…
http://blog.vorant.com/2007/07/…
http://taosecurity.blogspot.com/2007/07/…
http://chuvakin.blogspot.com/2007/07/…
http://taosecurity.blogspot.com/2007/07/…
http://www.pcis.com/web/vvblog.nsf/dx/…
An example to laugh at:
http://www.postini.com/services/roi_calculator.html

This essay previously appeared in CSO Magazine.
http://www.csoonline.com/article/446866/…


Diebold Finally Admits its Voting Machines Drop Votes

Premier Election Solutions, formerly called Diebold Election Systems, has finally admitted that a ten-year-old error has caused votes to be dropped.

It’s unclear if this error is random or systematic. If it’s random—a small percentage of all votes are dropped—then it is highly unlikely that this affected the outcome of any election. If it’s systematic—a small percentage of votes for a particular candidate are dropped—then it is much more problematic.

Ohio is trying to sue.

In other news, election officials sometimes take voting machines home for the night.

http://www.networkworld.com/news/2008/…
http://www.theregister.co.uk/2008/08/26/…
http://www.engadget.com/2008/08/23/…
http://voices.washingtonpost.com/the-trail/2008/08/…
http://www.mcclatchydc.com/election2008/story/…

http://thelede.blogs.nytimes.com/2008/08/19/…
My 2004 essay on election technology:
http://www.schneier.com/crypto-gram-0411.html#1


News

The provisional, 8,000-man Cyber Command has been ordered to stop all activities, just weeks before it was supposed to be declared operational.
http://blog.wired.com/defense/2008/08/…

The continuing cheapening of the word “terrorism.” Illegally diverting water is terrorism:
http://www.abc.net.au/news/stories/2008/08/15/…
Anonymously threatening people with messages on playing cards, like the Joker in The Dark Knight, is terrorism:
http://www.wsls.com/sls/news/local/new_river_valley/…
Walking on a bicycle path is terrorism:
http://www.timesonline.co.uk/tol/news/uk/…
I’ve written about this sort of thing before:
https://www.schneier.com/blog/archives/2008/04/…
https://www.schneier.com/blog/archives/2008/07/…

Cyberattack against Georgia preceded real attack:
http://www.nytimes.com/2008/08/13/technology/…

Adi Shamir gave an invited talk at the Crypto 2008 conference about a new type of cryptanalytic attack called “cube attacks.” He claims very broad applicability to block ciphers, stream ciphers, hash functions, etc. In general, anything that can be described with a low-degree polynomial equation is vulnerable: that’s pretty much every LFSR scheme. The attack doesn’t apply to any block cipher—DES, AES, Blowfish, Twofish, anything else—in common use; their degree is much too high. (The paper was rejected from Asiacrypt, demonstrating yet again that the conference review process is broken.)
https://www.schneier.com/blog/archives/2008/08/…
http://www.theregister.co.uk/2008/08/26/…
http://arstechnica.com/news.ars/post/…
http://groups.google.com/group/sci.crypt/msg/…
http://www.mail-archive.com/…
http://www.mail-archive.com/…
Paper is online:
http://eprint.iacr.org/2008/385

A security assessment of the Internet Protocol:
http://www.cpni.gov.uk/Docs/InternetProtocol.pdf

Nice article on personal surveillance from the London Review of Books.
http://www.lrb.co.uk/v30/n16/soar01_.html

Ah, the TSA. They break planes:
http://www.aero-news.net/index.cfm?…
Then they try to blame someone else:
http://abcnews.go.com/Blotter/story?…
They harass innocents, and it’s easy to sneak by them:
http://edition.cnn.com/2008/US/08/19/tsa.watch.list/…
How to sneak lock picks past the TSA:
http://www.i-hacked.com/content/view/267/48

Here’s some good TSA news: “A federal appeals court ruled this week that individuals who are blocked from commercial flights by the federal no-fly list can challenge their detention in federal court.”
http://arstechnica.com/news.ars/post/…

MI5 on terrorist profiling: there is no profile.
http://www.guardian.co.uk/uk/2008/aug/20/…

Interesting paper—”Challenges and Directions for Monitoring P2P File Sharing Networks or Why My Printer Received a DMCA Takedown Notice”:
http://dmca.cs.washington.edu/dmca_hotsec08.pdf
http://dmca.cs.washington.edu/

Red light cameras don’t work: the solution to one problem causes another:
https://www.schneier.com/blog/archives/2008/08/…

How to doctor photographs without Photoshop: it’s all about the captions.
http://morris.blogs.nytimes.com/2008/08/11/…

Laptops aboard the International Space Station have been infected with the W32.Gammima.AG worm. And it’s not the first time this sort of thing has happened.
http://www.spaceref.com/news/viewnews.html?id=1305
http://blog.wired.com/27bstroke6/2008/08/…
http://news.bbc.co.uk/2/hi/technology/7583805.stm

An airplane was forced to land when one of the passengers had an extreme allergic reaction to a jar of mushroom soup that was leaking in the cabin. See, the TSA told you that liquids were dangerous.
http://www.examiner.ie/breaking/ireland/mhqlojkfidql/

Border Gateway Protocol (BGP) attacks are serious stuff: they are man-in-the-middle attacks. “The Internet’s Biggest Security Hole” (the title of the first link) is that interior relays have always been trusted even though they are not trustworthy.
http://blog.wired.com/27bstroke6/2008/08/…
http://blog.wired.com/27bstroke6/2008/08/…
http://www.doxpara.com/?p=1231

A British bank bans a man’s password:
http://news.bbc.co.uk/2/hi/uk_news/england/hereford/…

Voting machine comic. You know your industry has problems when mainstream comic strips make fun of you.
http://www.mycomicspage.com/features/68/…

Software to facilitate retail tax fraud:
http://www.nytimes.com/2008/08/30/technology/…

Here’s how to suck data off cell phones. Moral: don’t give someone your phone unless you trust him.
http://news.cnet.com/8301-1009_3-10028589-83.html
http://www.physorg.com/news139460365.html

Throughout history, many diaries have been written in code.
http://news.bbc.co.uk/today/hi/today/newsid_7586000/…

Here’s a new paper on the perception and reality of privacy policies: “What Californians Understand About Privacy Online,” by Chris Jay Hoofnagle and Jennifer King.
http://papers.ssrn.com/sol3/papers.cfm?…

Using shredded checks as packaging material seems like a really dumb idea.
http://consumerist.com/5040975/…

Bumblebees making security trade-offs:
http://news.bbc.co.uk/1/hi/sci/tech/7596808.stm

Identifying people using gait analysis, from overhead cameras and even from satellite:
https://www.schneier.com/blog/archives/2008/09/…
http://technology.newscientist.com/channel/tech/…

The Rock Phish Gang is improving its fraud software:
http://www.theregister.co.uk/2008/09/05/…
http://www.rsa.com/blog/blog_entry.aspx?id=1338

On 60 Minutes, in an interview with Scott Pelley, reporter Bob Woodward claimed that the U.S. military has a new secret technique that’s so revolutionary, it’s on par with the tank and the airplane.
https://www.schneier.com/blog/archives/2008/09/…

A Mythbusters episode on RFID security was killed by lawyers under pressure from the credit card industry. Or maybe not; the person who started this rumor has retracted his comments. Or maybe those same lawyers made him retract his comments. Don’t they know that security by gag order never works, except temporarily?
http://www.tomshardware.com/news/…
http://news.cnet.com/8301-13772_3-10030509-52.html
http://consumerist.com/5043831/…
http://www.youtube.com/watch?v=-St_ltH90Oc

Good essay on DNA matching and the birthday paradox:
http://freakonomics.blogs.nytimes.com/2008/08/19/…

Turning off fire hydrants in the name of terrorism:
https://www.schneier.com/blog/archives/2008/09/…

“The terrifying cost of feeling safer,” from the Sydney Morning Herald:
http://business.smh.com.au/business/…

The Doghouse: Tornado Plus Encrypted USB Drive
http://blogs.techrepublic.com.com/security/?…

NSA snooping on cell phone calls without a warrant.
http://news.cnet.com/8301-13739_3-10030134-46.html

The UK Ministry of Defense loses a memory stick with military secrets on it. It’s not the first time this has happened.
http://news.bbc.co.uk/2/hi/uk_news/england/cornwall/…
I’ve written about this general problem before: we’re storing ever more data in ever smaller devices.
http://www.schneier.com/essay-105.html
The solution? Encrypt them.
http://www.schneier.com/essay-199.html


Full Disclosure and the Boston Fare Card Hack

In eerily similar cases in the Netherlands and the United States, courts have recently grappled with the computer-security norm of “full disclosure,” asking whether researchers should be permitted to disclose details of a fare-card vulnerability that allows people to ride the subway for free.

The “Oyster card” used on the London Tube was at issue in the Dutch case, and a similar fare card used on the Boston “T” was at the center of the U.S. case. The Dutch court got it right, and the American court, in Boston, got it wrong from the start—despite facing an open-and-shut case of First Amendment prior restraint.

The U.S. court has since seen the error of its ways—but the damage is done. The MIT security researchers who were prepared to discuss their Boston findings at the DefCon security conference were prevented from giving their talk.

The ethics of full disclosure are intimately familiar to those of us in the computer-security field. Before full disclosure became the norm, researchers would quietly disclose vulnerabilities to the vendors—who would routinely ignore them. Sometimes vendors would even threaten researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers started disclosing the existence of a vulnerability but not the details. Vendors responded by denying the security holes’ existence, or calling them just theoretical. It wasn’t until full disclosure became the norm that vendors began consistently fixing vulnerabilities quickly. Now that vendors routinely patch vulnerabilities, researchers generally give them advance notice to allow them to patch their systems before the vulnerability is published. But even with this “responsible disclosure” protocol, it’s the threat of disclosure that motivates them to patch their systems. Full disclosure is the mechanism by which computer security improves.

Outside of computer security, secrecy is much more the norm. Some security communities, like locksmiths, behave much like medieval guilds, divulging the secrets of their profession only to those within it. These communities hate open research, and have responded with surprising vitriol to researchers who have found serious vulnerabilities in bicycle locks, combination safes, master-key systems, and many other security devices.

Researchers have received a similar reaction from other communities more used to secrecy than openness. Researchers—sometimes young students—who discovered and published flaws in copyright-protection schemes, voting-machine security and now wireless access cards have all suffered recriminations and sometimes lawsuits for not keeping the vulnerabilities secret. When Christopher Soghoian created a website allowing people to print fake airline boarding passes, he got several unpleasant visits from the FBI.

This preference for secrecy comes from confusing a vulnerability with information *about* that vulnerability. Using secrecy as a security measure is fundamentally fragile. It assumes that the bad guys don’t do their own security research. It assumes that no one else will find the same vulnerability. It assumes that information won’t leak out even if the research results are suppressed. These assumptions are all incorrect.

The problem isn’t the researchers; it’s the products themselves. Companies will only design security as good as what their customers know to ask for. Full disclosure helps customers evaluate the security of the products they buy, and educates them in how to ask for better security. The Dutch court got it exactly right when it wrote: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”

In a world of forced secrecy, vendors make inflated claims about their products, vulnerabilities don’t get fixed, and customers are no wiser. Security research is stifled, and security technology doesn’t improve. The only beneficiaries are the bad guys.

If you’ll forgive the analogy, the ethics of full disclosure parallel the ethics of not paying kidnapping ransoms. We all know why we don’t pay kidnappers: It encourages more kidnappings. Yet in every kidnapping case, there’s someone—a spouse, a parent, an employer—with a good reason why, in this one case, we should make an exception.

The reason we want researchers to publish vulnerabilities is because that’s how security improves. But in every case there’s someone—the Massachusetts Bay Transportation Authority, the locksmiths, an election machine manufacturer—who argues that, in this one case, we should make an exception.

We shouldn’t. The benefits of responsibly publishing attacks greatly outweigh the potential harm. Disclosure encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers. It’s how we learn about security, and how we improve future security.

http://blog.wired.com/27bstroke6/2008/08/…

London’s Oyster Card:
http://www.schneier.com/essay-229.html
http://zoeken.rechtspraak.nl/resultpage.aspx?…
Boston’s fare card:
http://blog.wired.com/27bstroke6/2008/08/…
http://blog.wired.com/27bstroke6/2008/08/…
http://blog.wired.com/27bstroke6/2008/08/…
http://www.groklaw.net/article.php?…

Full disclosure:
http://www.schneier.com/essay-146.html
http://www.schneier.com/crypto-gram-0111.html#1
http://www.eff.org/files/filenode/MBTA_v_Anderson/…

Locks and full disclosure:
http://news.cnet.com/8301-1009_3-10002138-83.html?…
http://www.slate.com/id/2195862/
http://www.theglobeandmail.com/servlet/story/…
http://www.schneier.com/crypto-gram-0302.html#1
http://www.crypto.com/papers/kiss.html
http://www.crypto.com/papers/flattery.html
http://www.wired.com/culture/lifestyle/news/2004/09/…
http://www.crypto.com/papers/safelocks.pdf
http://www.crypto.com/masterkey.html
http://blog.wired.com/27bstroke6/2008/08/…
http://en.wikipedia.org/wiki/Lock_bumping

Other reactions to full disclosure:
http://compsci.ca/blog/…
http://www.freedom-to-tinker.com/?p=1265
https://www.schneier.com/blog/archives/2006/11/…

Secrecy and security:
http://www.schneier.com/crypto-gram-0205.html#1

Matt Blaze has a good comment on the topic.
http://www.crypto.com/blog/…

This essay previously appeared on Wired.com.
http://www.wired.com/politics/security/commentary/…


Contest: Cory Doctorow’s Cipher Wheel Rings

Cory Doctorow wanted a secret decoder wedding ring, and he asked me to help design it. I wanted something more than the standard secret decoder ring, so this is what I asked for: “I want each wheel to be the alphabet, with each letter having either a dot above, a dot below, or no dot at all. The first wheel should have alternating above, none, below. The second wheel should be the repeating sequence of above, above, none, none, below, below. The third wheel should be the repeating sequence of above, above, above, none, none, none, below, below, below.” (I know it sounds confusing, but look at the chart.)

So that’s what he asked for, and that’s what he got. And now it’s time to create some cryptographic applications for the rings. Cory and I are holding an open contest for the cleverest application.

I don’t think we can invent any encryption algorithms that will survive computer analysis—there’s just not enough entropy in the system—but we can come up with some clever pencil-and-paper ciphers that will serve them well if they’re ever stuck back in time. And there are certainly other cryptographic uses for the rings.

Here’s a way to use the rings as a password mnemonic: First, choose a two-letter key. Align the three wheels according to the key. For example, if the key is “EB” for eBay, align the three wheels AEB. Take the common password “PASSWORD” and encrypt it. For each letter, find it on the top wheel. Count one letter to the left if there is a dot over the letter, and one letter to the right if there is a dot under it. Take that new letter and look at the letter below it (in the middle wheel). Count two letters to the left if there is a dot over it, and two letters to the right if there is a dot under it. Take that new letter (in the middle wheel), and look at the letter below it (in the lower wheel). Count three letters to the left if there is a dot over it, and three letters to the right if there is a dot under it. That’s your encrypted letter. Do that with every letter to get your password.

“PASSWORD” and the key “EB” becomes “NXPPVVOF.”
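For anyone who wants to experiment, here is a short Python sketch of that procedure. I’m assuming two things the description leaves open: each wheel’s dot pattern starts at the letter A, and “left” means one step backward in the alphabet. If the engraved rings are laid out differently, the output will differ from the hand-worked example above.

    A = ord("A")

    def dot(letter, wheel):
        # Returns -1 for a dot above, 0 for no dot, +1 for a dot below.
        # Assumes each wheel's repeating pattern starts at the letter A.
        i = ord(letter) - A
        period = {1: 3, 2: 6, 3: 9}[wheel]
        return (i % period) // (period // 3) - 1

    def encrypt(password, key):
        # The two-letter key gives the middle- and bottom-wheel letters that
        # sit below the top wheel's A, e.g. "EB" aligns the wheels as AEB.
        mid_off = ord(key[0]) - A
        bot_off = ord(key[1]) - A
        out = []
        for p in password.upper():
            i = ord(p) - A + dot(p, 1) * 1          # move 1 on the top wheel per its dot
            m = (i + mid_off) % 26                  # read the middle-wheel letter below it
            m = (m + dot(chr(m + A), 2) * 2) % 26   # move 2 on the middle wheel per its dot
            b = (m - mid_off + bot_off) % 26        # read the bottom-wheel letter below that
            b = (b + dot(chr(b + A), 3) * 3) % 26   # move 3 on the bottom wheel per its dot
            out.append(chr(b + A))
        return "".join(out)

    print(encrypt("PASSWORD", "EB"))  # may not match the hand-worked result exactly
                                      # if the real rings' dots are placed differently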

It’s not very good; can anyone see why? (Ignore for now whether or not publishing this on a blog makes it no longer secure.)

How can I do that better? What else can we do with the rings? Can we incorporate other elements—a deck of playing cards as in Solitaire, different-sized coins to make the system more secure?

Post your contest entries as comments to Cory’s blog post or send them to cryptocontest@craphound.com. Deadline is October 1st.

Good luck, and have fun with this.

Decoder rings:
http://en.wikipedia.org/wiki/Secret_decoder_ring

Chart and photo:
http://www.flickr.com/photos/doctorow/2816467273/
http://www.flickr.com/photos/doctorow/2817314740/

Solitaire:
http://www.schneier.com/solitaire.html

Entries:
http://www.boingboing.net/2008/09/05/…
mailto:cryptocontest@craphound.com


Schneier/BT News

Schneier will be speaking at the World Economic Forum Annual Meeting of the New Champions, in Tianjin, China on 27 September.
http://www.weforum.org/en/events/…


Photo ID Checks at Airport

The TSA is tightening its photo ID rules at airport security. Previously, people with expired IDs or who claimed to have lost their IDs were subjected to secondary screening. Then the Transportation Security Administration realized that meant someone on the government’s no-fly list—the list that is supposed to keep our planes safe from terrorists—could just fly with no ID.

Now, people without ID must also answer personal questions from their credit history to ascertain their identity. The TSA will keep records of who those ID-less people are, too, in case they’re trying to probe the system.

This may seem like an improvement, except that the photo ID requirement is a joke. Anyone on the no-fly list can easily fly whenever he wants. Even worse, the whole concept of matching passenger names against a list of bad guys has negligible security value.

How to fly, even if you are on the no-fly list: Buy a ticket in some innocent person’s name. At home, before your flight, check in online and print out your boarding pass. Then, save that web page as a PDF and use Adobe Acrobat to change the name on the boarding pass to your own. Print it again. At the airport, use the fake boarding pass and your valid ID to get through security. At the gate, use the real boarding pass in the fake name to board your flight.

The problem is that it is unverified passenger names that get checked against the no-fly list. At security checkpoints, the TSA just matches IDs to whatever is printed on the boarding passes. The airline checks boarding passes against tickets when people board the plane. But because no one checks ticketed names against IDs, the security breaks down.

This vulnerability isn’t new. It isn’t even subtle. I wrote about it in 2003, and again in 2006. I asked Kip Hawley, who runs the TSA, about it in 2007. Today, any terrorist smart enough to Google “print your own boarding pass” can bypass the no-fly list.

This gaping security hole would bother me more if the very idea of a no-fly list weren’t so ineffective. The system is based on the faulty notion that the feds have this master list of terrorists, and all we have to do is keep the people on the list off the planes.

That’s just not true. The no-fly list—a list of people so dangerous they are not allowed to fly yet so innocent we can’t arrest them—and the less dangerous “watch list” contain a combined 1 million names representing the identities and aliases of an estimated 400,000 people. There aren’t that many terrorists out there; if there were, we would be feeling their effects.

Almost all of the people stopped by the no-fly list are false positives. It catches innocents such as Ted Kennedy, whose name is similar to someone’s on the list, and Yusuf Islam (formerly Cat Stevens), who was on the list but no one knew why.

The no-fly list is a Kafkaesque nightmare for the thousands of innocent Americans who are harassed and detained every time they fly. Put on the list by unidentified government officials, they can’t get off. They can’t challenge the TSA about their status or prove their innocence. (The U.S. 9th Circuit Court of Appeals decided this month that no-fly passengers can sue the FBI, but that strategy hasn’t been tried yet.)

But even if these lists were complete and accurate, they wouldn’t work. Timothy McVeigh, the Unabomber, the D.C. snipers, the London subway bombers and most of the 9/11 terrorists weren’t on any list before they committed their terrorist acts. And if a terrorist wants to know if he’s on a list, the TSA has approved a convenient, $100 service that allows him to figure it out: the Clear program, which issues IDs to “trusted travelers” to speed them through security lines. Just apply for a Clear card; if you get one, you’re not on the list.

In the end, the photo ID requirement is based on the myth that we can somehow correlate identity with intent. We can’t. And instead of wasting money trying, we would be far safer as a nation if we invested in intelligence, investigation and emergency response—security measures that aren’t based on a guess about a terrorist target or tactic.

That’s the TSA: Not doing the right things. Not even doing right the things it does.

My previous articles on the subject:
http://www.schneier.com/crypto-gram-0308.html#6
https://www.schneier.com/blog/archives/2006/11/…
http://www.schneier.com/interview-hawley.html

This article originally appeared in the L.A. Times:
http://www.latimes.com/news/opinion/…


Mental Illness and Murder

Contrary to popular belief, homicide due to mental illness is declining, at least in England and Wales: “The rate of total homicide and the rate of homicide due to mental disorder rose steadily until the mid-1970s. From then there was a reversal in the rate of homicides attributed to mental disorder, which declined to historically low levels, while other homicides continued to rise.”

Remember this the next time you read a newspaper article about how scared everyone is because some patients escaped from a mental institution: “We are convinced by the media that people with serious mental illnesses make a significant contribution to murders, and we formulate our approach as a society to tens of thousands of people on the basis of the actions of about 20. Once again, the decisions we make, the attitudes we have, and the prejudices we express are all entirely rational, when analysed in terms of the flawed information we are fed, only half chewed, from the mouths of morons.”

Articles:
http://bjp.rcpsych.org/cgi/content/abstract/193/2/130
http://www.badscience.net/2008/08/…

Paper and press release:
http://www.scribd.com/doc/4805076/…
http://www.rcpsych.ac.uk/pressparliament/…


Movie-Plot Threats

We spend far more effort defending our countries against specific movie-plot threats than against the real, broad threats. In the US during the months after the 9/11 attacks, we feared terrorists with scuba gear, terrorists with crop dusters and terrorists contaminating our milk supply. Both the UK and the US fear terrorists with small bottles of liquid. Our imaginations run wild with vivid specific threats. Before long, we’re envisioning an entire movie plot, without Bruce Willis saving the day. And we’re scared.

It’s not just terrorism; it’s any rare risk in the news. The big fear in Canada right now, following a particularly gruesome incident, is random decapitations on intercity buses. In the US, fears of school shootings are much greater than the actual risks. In the UK, it’s child predators. And people all over the world mistakenly fear flying more than driving. But the very definition of news is something that hardly ever happens. If an incident is in the news, we shouldn’t worry about it. It’s when something is so common that it’s no longer news—car crashes, domestic violence—that we should worry. But that’s not the way people think.

Psychologically, this makes sense. We are a species of storytellers. We have good imaginations and we respond more emotionally to stories than to data. We also judge the probability of something by how easy it is to imagine, so stories that are in the news feel more probable—and ominous—than stories that are not. As a result, we overreact to the rare risks we hear stories about, and fear specific plots more than general threats.

The problem with building security around specific targets and tactics is that it’s only effective if we happen to guess the plot correctly. If we spend billions defending the Underground and terrorists bomb a school instead, we’ve wasted our money. If we focus on the World Cup and terrorists attack Wimbledon, we’ve wasted our money.

It’s this fetish-like focus on tactics that results in the security follies at airports. We ban guns and knives, and terrorists use box-cutters. We take away box-cutters and corkscrews, so they put explosives in their shoes. We screen shoes, so they use liquids. We take away liquids, and they’re going to do something else. Or they’ll ignore airplanes entirely and attack a school, church, theatre, stadium, shopping mall, airport terminal outside the security area, or any of the other places where people pack together tightly.

These are stupid games, so let’s stop playing. Some high-profile targets deserve special attention and some tactics are worse than others. Airplanes are particularly important targets because they are national symbols and because a small bomb can kill everyone aboard. Seats of government are also symbolic, and therefore attractive, targets. But targets and tactics are interchangeable.

The following three things are true about terrorism. One, the number of potential terrorist targets is infinite. Two, the odds of the terrorists going after any one target are zero. And three, the cost to the terrorist of switching targets is zero.

We need to defend against the broad threat of terrorism, not against specific movie plots. Security is most effective when it doesn’t require us to guess. We need to focus resources on intelligence and investigation: identifying terrorists, cutting off their funding and stopping them regardless of what their plans are. We need to focus resources on emergency response: lessening the impact of a terrorist attack, regardless of what it is. And we need to face the geopolitical consequences of our foreign policy.

In 2006, UK police arrested the liquid bombers not through diligent airport security, but through intelligence and investigation. It didn’t matter what the bombers’ target was. It didn’t matter what their tactic was. They would have been arrested regardless. That’s smart security. Now we confiscate liquids at airports, just in case another group happens to attack the exact same target in exactly the same way. That’s just illogical.

This essay originally appeared in The Guardian. Nothing I haven’t already said elsewhere.
http://www.guardian.co.uk/technology/2008/sep/04/…


Comments from Readers

There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is the Chief Security Technology Officer of BT (BT acquired Counterpane in 2006), and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2008 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.