November 15, 2008
by Bruce Schneier
Chief Security Technology Officer, BT
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0811.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
In this issue:
- The Skein Hash Function
- Me and the TSA
- Quantum Cryptography
- The Economics of Spam
- Schneier/BT News
- The Psychology of Con Men
- Movie-Plot Threat: Terrorists Using Twitter
- Giving Out Replacement Hotel Room Keys
- P = NP?
- Comments from Readers
NIST is holding a competition to replace the SHA family of hash functions, which have been increasingly under attack.
Skein is our submission (myself and seven others: Niels Ferguson, Stefan Lucks, Doug Whiting, Mihir Bellare, Tadayoshi Kohno, Jon Callas, and Jesse Walker). This is our executive summary:
“Skein is a new family of cryptographic hash functions. Its design combines speed, security, simplicity, and a great deal of flexibility in a modular package that is easy to analyze.
“Skein is fast. Skein-512—our primary proposal—hashes data at 6.1 clock cycles per byte on a 64-bit CPU. This means that on a 3.1 GHz x64 Core 2 Duo CPU, Skein hashes data at 500 MBytes/second per core—almost twice as fast as SHA-512 and three times faster than SHA-256. An optional hash-tree mode speeds up parallelizable implementations even more. Skein is fast for short messages, too; Skein-512 hashes short messages in about 1000 clock cycles.
“Skein is secure. Its conservative design is based on the Threefish block cipher. Our current best attack on Threefish-512 is on 25 of 72 rounds, for a safety factor of 2.9. For comparison, at a similar stage in the standardization process, the AES encryption algorithm had an attack on 6 of 10 rounds, for a safety factor of only 1.7. Additionally, Skein has a number of provably secure properties, greatly increasing confidence in the algorithm.
“Skein is simple. Using only three primitive operations, the Skein compression function can be easily understood and remembered. The rest of the algorithm is a straightforward iteration of this function.
“Skein is flexible. Skein is defined for three different internal state sizes—256 bits, 512 bits, and 1024 bits—and any output size. This allows Skein to be a drop-in replacement for the entire SHA family of hash functions. A completely optional and extendable argument system makes Skein an efficient tool to use for a very large number of functions: a PRNG, a stream cipher, a key derivation function, authentication without the overhead of HMAC, and a personalization capability. All these features can be implemented with very low overhead. Together with the Threefish large-block cipher at Skein’s core, this design provides a full set of symmetric cryptographic primitives suitable for most modern applications.
“Skein is efficient on a variety of platforms, both hardware and software. Skein-512 can be implemented in about 200 bytes of state. Small devices, such as 8-bit smart cards, can implement Skein-256 using about 100 bytes of memory. Larger devices can implement the larger versions of Skein to achieve faster speeds.
“Skein was designed by a team of highly experienced cryptographic experts from academia and industry, with expertise in cryptography, security analysis, software, chip design, and implementation of real-world cryptographic systems. This breadth of knowledge allowed them to create a balanced design that works well in all environments.”
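The “three primitive operations” the summary mentions are 64-bit modular addition, bit rotation, and XOR. A minimal Python sketch of a Threefish-style MIX step shows the idea; the rotation constant used below is illustrative, not one of Skein’s actual per-round constants:

```python
MASK64 = (1 << 64) - 1  # Threefish operates on 64-bit words

def rotl64(x, r):
    """Rotate a 64-bit word left by r bits."""
    return ((x << r) | (x >> (64 - r))) & MASK64

def mix(x0, x1, r):
    """ARX-style MIX: add, rotate, XOR -- the three primitive operations."""
    y0 = (x0 + x1) & MASK64   # modular addition
    y1 = rotl64(x1, r) ^ y0   # rotation followed by XOR
    return y0, y1
```

Because addition, rotation, and XOR are all invertible, this step can be undone exactly, which is what makes Threefish a block cipher rather than a one-way function by itself; Skein’s one-wayness comes from how the chaining mode feeds data forward around the cipher.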
NIST’s deadline was the end of October. It seems as if everyone—including many amateurs—is working on a hash function. I predicted that NIST would receive at least 80 submissions; they actually received 64. (Compare this to the fifteen NIST submissions received for the AES competition in 1998.) Somewhat more than a third are public at this time.
The selection process will take around four years. I’ve previously called this sort of thing a cryptographic demolition derby—last one left standing wins—but that’s only half true. Certainly all the groups will spend the next couple of years trying to cryptanalyze each other, but in the end there will be a bunch of unbroken algorithms; NIST will select one based on performance and features.
NIST has stated that the goal of this process is not to choose the best standard but to choose a good standard. I think that’s smart of them; in this process, “best” is the enemy of “good.” My advice is this: immediately sort them based on performance and features. Ask the cryptographic community to focus its attention on the top dozen, rather than spread its attention across all 64—although I also expect that many of the amateur submissions will be rejected by NIST for not being “complete and proper.” Otherwise, people will break the easy ones and the better ones will go unanalyzed.
Source code is available on that site.
NIST’s SHA-3 website:
SHA-3 submissions (the 27 of them that are public so far):
Attacks against SHA-1:
My liveblogging of a previous NIST hash workshop:
There was a great article in The Atlantic about me helping its author evade airport security. We printed fake boarding passes, explained how anyone on the no-fly list could get through security, and brought more liquids through the checkpoint than are allowed.
Kip Hawley, head of the TSA, has responded to the article on his blog.
Unfortunately, there’s not really anything to his response. It’s obvious he doesn’t want to admit that they’ve been checking ID’s all this time to no purpose whatsoever, so he just emits vague generalities like a frightened squid filling the water with ink. Yes, some of the stunts in the article are silly (who cares if people fly with Hezbollah T-shirts?), so that gives him an opportunity to minimize the real issues.
Hawley says: “Watch-lists and identity checks are important and effective security measures. We identify dozens of terrorist-related individuals a week and stop No-Flys regularly with our watch-list process.”
It is simply impossible that the TSA catches dozens of terrorists every week. If it were true, the administration would be trumpeting this all over the press—it would be an amazing success story in their war on terrorism. But note that Hawley doesn’t exactly say that; he calls them “terrorist-related individuals.” Which means exactly what? People so dangerous they can’t be allowed to fly for any reason, yet so innocent they can’t be arrested—even under the provisions of the Patriot Act.
And if Secretary Chertoff is telling the truth when he says that there are only 2,500 people on the no-fly list and fewer than 16,000 people on the selectee list—they’re the ones that get extra screening—and that most of them live outside the U.S., then it is just plain impossible that the TSA identifies “dozens” of these people every week. The math just doesn’t make sense.
And I also don’t believe this: “Behavior detection works and we have 2,000 trained officers at airports today. They alert us to people who may pose a threat but who may also have items that could elude other layers of physical security.”
It does work, but I don’t see the TSA doing it properly. (Fly El Al if you want to see it done properly.) I think what Hawley is doing here is engaging in a bit of psychological manipulation. As with sky marshals, the real benefit of behavior detection isn’t whether or not you do it but whether or not the bad guys *believe* you’re doing it. If they think you’re doing behavior detection at security checkpoints, or have sky marshals on every airplane, then you don’t actually have to do it. It’s the threat that’s the deterrent, not the actual security system.
This doesn’t impress me, either: “Items carried on the person, be they a ‘beer belly’ or concealed objects in very private areas, are why we are buying over 100 whole body imagers in upcoming months and will deploy more over time. In the meantime, we use hand-held devices that detect hydrogen peroxide and other explosives compounds as well as targeted pat-downs that require private screening.”
Optional security measures don’t work, because the bad guys will opt not to use them. It’s like those air-puff machines at some airports now. They’re probably great at detecting explosive residue on clothing, but every time I have seen the machines in operation, the passengers could choose whether to go through the lane with them or through another lane. What possible good is that?
The closest thing to a real response from Hawley is that the terrorists might get caught stealing credit cards. “Using stolen credit cards and false documents as a way to get around watch-lists makes the point that forcing terrorists to use increasingly risky tactics has its own security value.”
He’s right about that. And, truth be told, that was my sloppiest answer during the original interview. Thinking about it afterwards, it’s far more likely that someone with a clean record and a legal credit card will buy the various plane tickets.
This is new: “Boarding pass scanners and encryption are being tested in eight airports now and more will be coming.”
Ignoring for a moment that “eight airports” nonsense—unless you do it at every airport, the bad guys will choose the airport where you don’t do it to launch their attack—this is an excellent idea. The reason my attack works, the reason I can get through TSA checkpoints with a fake boarding pass, is that the TSA never confirms that the information on the boarding pass matches a legitimate reservation. If all TSA checkpoints had boarding pass scanners that connected to the airlines’ computers, this attack would not work. (Interestingly enough, I noticed exactly this system at the Dublin airport earlier this month.)
And finally: “Stopping the ‘James Bond’ terrorist is truly a team effort and I whole-heartedly agree that the best way to stop those attacks is with intelligence and law enforcement working together.”
This isn’t about “Stopping the ‘James Bond’ terrorist,” it’s about stopping terrorism. And if all this focus on airports, even assuming it starts working, shifts the terrorists to other targets, we haven’t gotten a whole lot of security for our money.
Chertoff on the no-fly list:
Hawley responds to my comments in my blog. Yes, it’s really him.
My interview with Hawley from last year:
In other news, Kip Hawley says that the TSA may loosen size restrictions on liquids. You’ll still have to take them out of your bag, but they can be larger than three ounces. The reasons—so he states—are that technologies are getting better, not that the threat is reduced.
I’m skeptical, of course. But read his post; it’s interesting.
The Atlantic is holding a contest, based on Hawley’s comment that the TSA is basically there to catch stupid terrorists: “And so, a contest: How would the Hawley Principle of Federally-Endorsed Mediocrity apply to other government endeavors?”
Not the same as my movie-plot threat contest, but fun all the same.
And lastly, what would the TSA make of this?
From the LEET ’08 conference: “Designing and implementing malicious hardware,” by Samuel T. King, Joseph Tucek, Anthony Cozzie, Chris Grier, Weihang Jiang, and Yuanyuan Zhou.
Warning poster: “In Case of Terrorist Attack, Do Not Discard Brain.”
While I am strongly opposed to a national ID, I have consistently said that giving strongly secured ID cards to groups like port workers is a good idea.
Me on national ID cards:
In northern British Columbia, there were two pipeline bombings. I found this quote heartening: “Investigators are treating the explosions as acts of vandalism, not terrorism, Shields said. ‘Under the Criminal Code, it would be characterized as mischief, which is an intentional vandalism. We don’t want to characterize this as terrorism. They were very isolated locations and there would seem there was no intent to hurt people,’ he said.”
On the other hand, in Philadelphia, a subway car design was criticized because people can see out the front. And, um, terrorists will be able to see out the front too, and we all know how dangerous terrorists are.
Seems like the engineers have another agenda—the cabs in the new trains are too small—and they’re just using security as an excuse:
And there’s still considerable terrorist fear mongering in the UK:
Fear-inducing story of terrorists hiding their communications in child porn pictures.
Terrorists and strangers preying on our children are two of the things that cause the most fear in people. Put them together, and there’s no limit to what sorts of laws you can get passed. Comment from my blog: “Why would terrorists hide incriminating messages inside incriminating photographs? That would be like drug smugglers hiding kilos of cocaine in bales of marijuana.”
Remotely eavesdropping on keyboards, from 30 feet away in another room:
I generally avoid commenting on election politics—that’s not what Crypto-Gram is about—but this comment by Barack Obama on security and trade-offs is worth discussing:
Cryptographers have long joked about rubber-hose cryptanalysis: basically, beating the keys out of someone. Seems that this might have actually happened in Turkey:
Chilling story of a death-row inmate with a contraband cell phone.
If we can’t keep contraband out of prisons, how can we possibly hope to keep it out of airports?
This is a story of how smart people can be neutralized through stupid procedures.
It’s not a new scam to switch bar codes and buy merchandise for a lower value, but how do you get away with over $1M worth of merchandise with this scam? That requires a lot of really clueless checkout clerks.
Video of talk on barcode hacks:
Keeping America safe from terrorism by monitoring distillery webcams: a bizarre story that ended up being rather mundane.
“A Look at Terrorist Behavior: How They Prepare, Where They Strike,” by Brent Smith, National Institute of Justice Journal, No. 260, 2008.
How Terrorist Groups End: Lessons for Countering al Qa’ida, by Seth G. Jones and Martin C. Libicki, RAND Corporation, 2008.
Duplicating keys from photographs:
A U.S. court ruled that hashing equals searching. Good, and interesting, ruling.
India has experienced an ill effect of banning security research. Terrorists have figured out how to clone cell phone SIM cards. The good guys didn’t know this was possible, because they can’t do the research: “The experts said no one has actually done any research on SIM card cloning because the activity is illegal in the country.”
If the good guys can’t even participate, the bad guys will always win.
More anti-terror law mission creep in the U.K. The laws are being used to catch people putting trash cans out on the wrong day.
Aspidistra, a fascinating story of a man-in-the-middle attack using radio from World War II.
Censorship in Dubai is transparent, and includes an appeals process:
Reading a letter from the envelope it was in:
Using the incremental update feature of PDF files to watch a malware author create his exploit:
Reducing the risk of human extinction:
Quantum cryptography is back in the news, and the basic idea is still unbelievably cool, in theory, and nearly useless in real life.
The idea behind quantum crypto is that two people communicating using a quantum channel can be absolutely sure no one is eavesdropping. Heisenberg’s uncertainty principle requires anyone measuring a quantum system to disturb it, and that disturbance alerts legitimate users as to the eavesdropper’s presence. No disturbance, no eavesdropper—period.
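This effect can be illustrated without any actual physics. The toy simulation below is a rough sketch of BB84-style key exchange, modeling only the measurement statistics (a bit read in the wrong basis comes out random); it shows how an intercept-and-resend eavesdropper pushes the error rate on the sifted key to about 25%, while an undisturbed channel shows none:

```python
import random

def measure(bit, prep_basis, meas_basis):
    """Measuring in the wrong basis yields a random bit -- the 'disturbance'."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def bb84_error_rate(n, eavesdrop):
    """Simulate n rounds of BB84-style exchange; return sifted-key error rate."""
    errors = sifted = 0
    for _ in range(n):
        bit = random.randint(0, 1)
        a_basis = random.randint(0, 1)      # Alice's preparation basis
        if eavesdrop:                       # Eve: intercept-and-resend attack
            e_basis = random.randint(0, 1)
            bit_seen = measure(bit, a_basis, e_basis)
            bit_sent, basis_sent = bit_seen, e_basis  # resent in Eve's basis
        else:
            bit_sent, basis_sent = bit, a_basis
        b_basis = random.randint(0, 1)      # Bob's measurement basis
        b_bit = measure(bit_sent, basis_sent, b_basis)
        if b_basis == a_basis:              # keep only matching-basis rounds
            sifted += 1
            errors += (b_bit != bit)
    return errors / sifted
```

Alice and Bob compare a sample of their sifted bits over an open channel: a near-zero error rate means no one was listening; a rate near 25% means someone was.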
This month we’ve seen reports on a new working quantum key-distribution network in Vienna, and a new quantum key-distribution technique out of Britain. Great stuff, but headlines like the BBC’s “‘Unbreakable’ encryption unveiled” are a bit much.
The basic science behind quantum crypto was developed, and prototypes built, in the early 1980s by Charles Bennett and Gilles Brassard, and there have been steady advances in engineering since then. I describe basically how it all works in Applied Cryptography, 2nd Edition (pages 554-557). At least one company already sells quantum-key distribution products.
Note that this is totally separate from quantum computing, which also has implications for cryptography. Several groups are working on designing and building a quantum computer, which is fundamentally different from a classical computer. If one were built—and we’re talking science fiction here—then it could factor numbers and solve discrete-logarithm problems very quickly. In other words, it could break all of our commonly used public-key algorithms. For symmetric cryptography it’s not that dire: A quantum computer would effectively halve the key length, so that a 256-bit key would be only as secure as a 128-bit key today. Pretty serious stuff, but years away from being practical. I think the best quantum computer today can factor the number 15.
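The “halve the key length” arithmetic is worth making concrete. Grover’s algorithm searches an unstructured space of N keys in on the order of sqrt(N) steps, so—ignoring constants and the cost of the quantum oracle—a 2^n keyspace costs about 2^(n/2) quantum operations. A back-of-the-envelope sketch:

```python
import math

def classical_search_cost(key_bits):
    """Classical exhaustive search may have to try every key."""
    return 2 ** key_bits

def grover_search_cost(key_bits):
    """Grover's quantum search needs on the order of sqrt(N) oracle queries."""
    return math.isqrt(2 ** key_bits)

# A 256-bit key attacked with Grover costs roughly what a 128-bit key
# costs a classical attacker -- hence "effectively halve the key length."
```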
While I like the science of quantum cryptography—my undergraduate degree was in physics—I don’t see any commercial value in it. I don’t believe it solves any security problem that needs solving. I don’t believe that it’s worth paying for, and I can’t imagine anyone but a few technophiles buying and deploying it. Systems that use it don’t magically become unbreakable, because the quantum part doesn’t address the weak points of the system.
Security is a chain; it’s as strong as the weakest link. Mathematical cryptography, as bad as it sometimes is, is the strongest link in most security chains. Our symmetric and public-key algorithms are pretty good, even though they’re not based on much rigorous mathematical theory. The real problems are elsewhere: computer security, network security, user interface and so on.
Cryptography is the one area of security that we can get right. We already have good encryption algorithms, good authentication algorithms and good key-agreement protocols. Maybe quantum cryptography can make that link stronger, but why would anyone bother? There are far more serious security problems to worry about, and it makes much more sense to spend effort securing those.
As I’ve often said, it’s like defending yourself against an approaching attacker by putting a huge stake in the ground. It’s useless to argue about whether the stake should be 50 feet tall or 100 feet tall, because either way, the attacker is going to go around it. Even quantum cryptography doesn’t “solve” all of cryptography: The keys are exchanged with photons, but a conventional mathematical algorithm takes over for the actual encryption.
I’m always in favor of security research, and I have enjoyed following the developments in quantum cryptography. But as a product, it has no future. It’s not that quantum cryptography might be insecure; it’s that cryptography is already sufficiently secure.
Quantum cryptography bibliography:
More commentary on news articles here:
This essay previously appeared on Wired.com.
Researchers infiltrated the Storm worm and monitored its doings.
“After 26 days, and almost 350 million e-mail messages, only 28 sales resulted—a conversion rate of well under 0.00001%. Of these, all but one were for male-enhancement products and the average purchase price was close to $100. Taken together, these conversions would have resulted in revenues of $2,731.88—a bit over $100 a day for the measurement period or $140 per day for periods when the campaign was active. However, our study interposed on only a small fraction of the overall Storm network—we estimate roughly 1.5 percent based on the fraction of worker bots we proxy. Thus, the total daily revenue attributable to Storm’s pharmacy campaign is likely closer to $7000 (or $9500 during periods of campaign activity). By the same logic, we estimate that Storm self-propagation campaigns can produce between 3500 and 8500 new bots per day.
“Under the assumption that our measurements are representative over time (an admittedly dangerous assumption when dealing with such small samples), we can extrapolate that, were it sent continuously at the same rate, Storm-generated pharmaceutical spam would produce roughly 3.5 million dollars of revenue in a year. This number could be even higher if spam-advertised pharmacies experience repeat business. A bit less than “millions of dollars every day,” but certainly a healthy enterprise.”
Of course, the authors point out that it’s dangerous to make these sorts of generalizations: “We would be the first to admit that these results represent a single data point and are not necessarily representative of spam as a whole. Different campaigns, using different tactics and marketing different products will undoubtedly produce different outcomes. Indeed, we caution strongly against researchers using the conversion rates we have measured for these Storm-based campaigns to justify assumptions in any other context.”
Spam is all about economics. When sending junk mail costs a dollar in paper, list rental, and postage, a marketer needs a reasonable conversion rate to make the campaign worthwhile. When sending junk mail is almost free, a one in ten million conversion rate is acceptable.
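The extrapolation in the quoted study is straightforward arithmetic. Here is a sketch using the numbers quoted above; the small gaps from the paper’s published $7,000 and $9,500 figures presumably come from rounding in the “roughly 1.5 percent” visibility estimate:

```python
emails_sent  = 350_000_000  # messages interposed on, over the 26-day window
conversions  = 28           # completed pharmacy sales observed
revenue      = 2731.88      # dollars over the measurement period
days_total   = 26
active_daily = 140.0        # $/day while the campaign was actually running
visible      = 0.015        # the study proxied ~1.5% of Storm's worker bots

rate = conversions / emails_sent         # ~0.000008%: "well under 0.00001%"
slice_daily = revenue / days_total       # ~$105/day for the observed slice
botnet_daily = slice_daily / visible     # ~$7,000/day for all of Storm
botnet_active = active_daily / visible   # ~$9,300/day during active periods
yearly = botnet_active * 365             # roughly $3.4M/year if sustained
```

A conversion rate that low only works because each additional message costs the spammer essentially nothing.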
Book review of “Schneier on Security”:
Schneier interview from Dr. Dobb’s Journal.
Way back before the first edition of Applied Cryptography, Dr. Dobb’s Journal published my first writings about cryptography.
Schneier interview from Datamation:
Schneier audio interview about my talk at the RSA Conference in London last month:
An article of mine on choosing good passwords appeared in the Guardian.
Nothing I haven’t said before.
Great story: “My all-time favourite [short con] only makes the con artist a few dollars every time he does it, but I absolutely love it. These guys used to go door-to-door in the 1970s selling lightbulbs and they would offer to replace every single lightbulb in your house, so all your old lightbulbs would be replaced with a brand new lightbulb, and it would cost you, say $5, so a fraction of the cost of what new lightbulbs would cost. So the man comes in, he replaces each lightbulb, every single one in the house, and does it, you can check, and they all work, and then he takes all the lightbulbs that he’s just taken from the person’s house, goes next door and then sells them the same lightbulbs again. So it’s really just moving lightbulbs from one house to another and charging people a fee to do it.”
The notion that it is somehow worrisome that terrorists might use Twitter is ridiculous. Of course the bad guys will use all the communications tools available to the rest of us. They have to communicate, after all. They’ll also use cars, water faucets, and all-you-can-eat buffet lunches. So what?
This commentary is dead on: “Steven Aftergood, a veteran intelligence analyst at the Federation of American Scientists, doesn’t dismiss the Army presentation out of hand. But nor does he think it’s tackling a terribly serious threat. ‘Red-teaming exercises to anticipate adversary operations are fundamental. But they need to be informed by a sense of what’s realistic and important and what’s not,’ he tells Danger Room. ‘If we have time to worry about ‘Twitter threats’ then we’re in good shape. I mean, it’s important to keep some sense of proportion.'”
It’s a tough security trade-off. Guests lose their hotel room keys, and the hotel staff needs to be accommodating. But at the same time, they can’t be giving out hotel room keys to anyone claiming to have lost one. Generally, hotels ask to see some ID before giving out a replacement key and, if the guest doesn’t have his wallet with him, have someone walk to the room with the key and check his ID there.
This normally works pretty well, but there’s a court case in Brisbane right now about a hotel giving a room key to someone who ended up sexually attacking the woman who had rented the room. “In civil action launched yesterday, the woman alleges the man was given the spare access key to her room by a hotel staffer.”
The article doesn’t say what kind of authentication the hotel requested or received.
People have been sending me a paper that “proves” that P != NP. These sorts of papers make the rounds regularly, and my advice is to not pay attention to any of them. G.J. keeps a list of these papers—he has 43 so far—and points out: “The following paragraphs list many papers that try to contribute to the P-versus-NP question. Among all these papers, there is only a single paper that has appeared in a peer-reviewed journal, that has thoroughly been verified by the experts in the area, and whose correctness is accepted by the general research community: The paper by Mihalis Yannakakis. (And this paper does not settle the P-versus-NP question, but ‘just’ shows that a certain approach to settling this question will never work out.)”
Of course, there’s a million-dollar prize for resolving the question—so expect the flawed proofs to continue.
The latest paper:
The Millennium Prize:
There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is the Chief Security Technology Officer of BT (BT acquired Counterpane in 2006), and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2008 by Bruce Schneier.