Crypto-Gram

February 15, 2009

by Bruce Schneier
Chief Security Technology Officer, BT
schneier@schneier.com
http://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-0902.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      Helping the Terrorists
      Monster.com Data Breach
      News
      The Doghouse: Raidon’s Staray-S Encrypted Hard Drives
      The Exclusionary Rule and Security
      BitArmor’s No-Breach Guarantee
      Schneier News
      Breach Notification Laws
      Comments from Readers


Helping the Terrorists

It regularly comes as a surprise to people that our own infrastructure can be used against us. And in the wake of terrorist attacks or plots, there are fear-induced calls to ban, disrupt, or control that infrastructure. According to officials investigating the Mumbai attacks, the terrorists used images from Google Earth to help learn their way around. This isn’t the first time Google Earth has been charged with helping terrorists: in 2007, Google Earth images of British military bases were found in the homes of Iraqi insurgents. Incidents such as these have led many governments to demand that Google remove or blur images of sensitive locations: military bases, nuclear reactors, government buildings, and so on. An Indian court has been asked to ban Google Earth entirely.

This isn’t the only way our information technology helps terrorists. Last year, a U.S. Army intelligence report worried that terrorists could plan their attacks using Twitter, and there are unconfirmed reports that the Mumbai terrorists read the Twitter feeds about their attacks to get real-time information they could use. British intelligence is worried that terrorists might use voice over IP services such as Skype to communicate. Terrorists might recruit on Second Life and World of Warcraft. We already know they use websites to spread their message and possibly even to recruit.

Of course, all of this is exacerbated by open wireless access, which has been repeatedly labeled a terrorist tool and which has been the object of attempted bans.

Mobile phone networks help terrorists, too. The Mumbai terrorists used them to communicate with each other. This has led some cities, including New York and London, to propose turning off mobile phone coverage in the event of a terrorist attack.

Let’s all stop and take a deep breath. By its very nature, communications infrastructure is general. It can be used to plan both legal and illegal activities, and it’s generally impossible to tell which is which. When I send and receive e-mail, it looks exactly the same as a terrorist doing the same thing. To the mobile phone network, a call from one terrorist to another looks exactly the same as a mobile phone call from one victim to another. Any attempt to ban or limit infrastructure affects everybody. If India bans Google Earth, a future terrorist won’t be able to use it to plan; nor will anybody else. Open Wi-Fi networks are useful for many reasons, the large majority of them positive, and closing them down affects all those reasons. Terrorist attacks are very rare, and it is almost always a bad trade-off to deny society the benefits of a communications technology just because the bad guys might use it too.

Communications infrastructure is especially valuable during a terrorist attack. Twitter was the best way for people to get real-time information about the attacks in Mumbai. If the Indian government shut Twitter down—or London blocked mobile phone coverage—during a terrorist attack, the lack of communications for everyone, not just the terrorists, would increase the level of terror and could even increase the body count. Information lessens fear and makes people safer.

None of this is new. Criminals have used telephones and mobile phones since they were invented. Drug smugglers use airplanes and boats, radios and satellite phones. Bank robbers have long used cars and motorcycles as getaway vehicles, and horses before then. I haven’t seen it talked about yet, but the Mumbai terrorists used boats as well. They also wore boots. They ate lunch at restaurants, drank bottled water, and breathed the air. Society survives all of this because the good uses of infrastructure far outweigh the bad uses, even though the good uses are—by and large—small and pedestrian and the bad uses are rare and spectacular. And while terrorism turns society’s very infrastructure against itself, we only harm ourselves by dismantling that infrastructure in response—just as we would if we banned cars because bank robbers used them too.

Google Earth helps the terrorists:
http://news.nationalgeographic.com/news/2007/03/…
http://technology.timesonline.co.uk/tol/news/…
http://news.cnet.com/…
Twitter helps the terrorists:
http://www.inquisitr.com/9863/…
http://bit.ly/terror4

Skype helps the terrorists:
http://www.computerweekly.com/Articles/2008/09/15/…
Second Life and World of Warcraft help the terrorists:
http://www.news.com.au/story/0,23599,22163811-2,00.html

Open wireless helps the terrorists:
http://blog.wired.com/defense/2009/01/…
https://www.schneier.com/blog/archives/2008/01/…

Cell phones help the terrorists:
http://www.foxnews.com/politics/2009/01/08/…
http://www.guardian.co.uk/technology/2008/dec/04/…
http://www.washingtonpost.com/wp-dyn/content/…
Cars help the terrorists:
http://www.guardian.co.uk/technology/2008/sep/04/…
Library computers help the terrorists:
http://www.washingtonpost.com/wp-dyn/content/…
Anonymous chat rooms help the terrorists:
http://query.nytimes.com/gst/fullpage.html?…
Commercial databases help the terrorists:
http://www.computerworld.com/printthis/2005/…

Biomedical research helps the terrorists:
http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/…
In-flight Internet helps the terrorists:
http://www.upi.com/Top_News/2009/02/07/…
How soon before the people making this remote fireworks launcher are accused of helping the terrorists?
http://www.maplin.co.uk/module.aspx?moduleno=226037

This essay originally appeared in The Guardian.
http://www.guardian.co.uk/technology/2009/jan/29/…


Monster.com Data Breach

Monster.com was hacked, and people’s personal data was stolen. Normally I wouldn’t bother even writing about this—it happens all the time—but an AP reporter called me to comment. I said: “Monster’s latest breach ‘shouldn’t have happened,’ said Bruce Schneier, chief security technology officer for BT Group. ‘But you can’t understand a company’s network security by looking at public events—that’s a bad metric. All the public events tell you are, these are attacks that were successful enough to steal data, but were unsuccessful in covering their tracks.’”

Thinking about it, it’s even more complex than that. To assess an organization’s network security, you need to actually analyze it. You can’t get a lot of information from the list of attacks that were successful enough to steal data but not successful enough to cover their tracks, and which the company’s attorneys couldn’t figure out a reason not to disclose to the public.

http://www.google.com/hostednews/ap/article/…
http://www.telegraph.co.uk/scienceandtechnology/…
http://www.usatoday.com/money/industries/technology/…
http://www.itpro.co.uk/609662/…
http://technology.timesonline.co.uk/tol/news/…


News

In December, then-DHS Secretary Michael Chertoff claimed that airplane hijackings were routine prior to 9/11:
https://www.schneier.com/blog/archives/2009/01/…

Top eleven reasons why lists of top 10 bugs don’t work:
http://www.informit.com/articles/article.aspx?p=1322398

Excellent essay on “The Cost of Fearing Strangers”:
http://freakonomics.blogs.nytimes.com/2009/01/06/…
Nothing I haven’t said before. Remember, if it’s in the news, don’t worry about it. The very definition of news is “something that almost never happens.” When something is so common that it’s no longer news—car crashes, domestic violence—that’s when you should worry about it.
http://www.schneier.com/essay-171.html

In-person credit card scam relies on tricking a clerk into calling a fake credit-card company employee.
http://www.hattiesburgamerican.com/article/20090112/…

Dognapping—or, at least, the fear of dognapping—is on the rise. So people are no longer leaving their dogs tied up outside stores, and are buying leashes that can’t be easily cut through.
http://www.newyorker.com/talk/2009/01/05/…

Another recently declassified NSA document, on the discovery of TEMPEST, from 1972.
http://www.nsa.gov/public_info/_files/…
http://blog.wired.com/27bstroke6/2008/04/…

Good essay on why identity, authentication, and authorization must remain distinct. I spent a chapter on this in Beyond Fear.
http://technet.microsoft.com/en-us/library/…

In Queensland, Australia, policemen are arresting fewer people because their new data-entry system is too annoying.
http://www.news.com.au/couriermail/story/…
This is a good example of how non-security incentives affect security decisions.

Story of voting machine audit logs that don’t actually help in figuring out what happened.
http://blog.wired.com/27bstroke6/2009/01/…

Long article from the New York Times Magazine on Wall Street’s risk management, and where it went wrong. The most interesting part explains how the incentives for traders encouraged them to take asymmetric risks: trade-offs that would work out well 99% of the time but fail catastrophically the remaining 1%. So of course, this is exactly what happened.
http://www.nytimes.com/2009/01/04/magazine/…

Good points about teaching risk analysis in school:
http://www.timesonline.co.uk/tol/news/uk/education/…

Some parents of children with peanut allergies are *not* asking their school to ban peanuts. They consider it more important that teachers know which children are likely to have a reaction, and how to deal with it when it happens; i.e., how to use an EpiPen. This is a much more resilient response to the threat. It works even when the peanut ban fails. It works whether the child has an anaphylactic reaction to nuts, fruit, dairy, gluten, or whatever. It’s so rare to see rational risk management when it comes to children and safety.
http://www.todaysparent.com/shared/print.jsp?…

Fascinating interview with an adware developer.
http://philosecurity.org/2009/01/12/…
Good commentary on the interview, showing how it whitewashes history.
http://www.vitalsecurity.org/2009/01/…
http://www.vitalsecurity.org/2009/01/…
http://www.vitalsecurity.org/2009/01/…
http://www.vitalsecurity.org/2009/01/…

Jeffrey Rosen on the Department of Homeland Security:
http://www.tnr.com/politics/story.html?…

Jon Stewart on closing Guantanamo and movie-plot threats:
http://www.thedailyshow.com/video/index.jhtml?…

Safe Quick Undercarriage Immobilization Device (SQUID):
http://www.dhs.gov/xres/programs/…

This Los Angeles Times story, about the airlines defining anyone disruptive as a terrorist, seems to be much more hype than reality.
http://www.latimes.com/news/nationworld/world/…
http://www.popehat.com/2009/01/22/2793/
http://blog.simplejustice.us/2009/01/23/…

Academic paper about evaluating the risks of low-probability high-cost events:
http://arxiv.org/pdf/0810.5515v1

There’s a bill in Congress—unlikely to go anywhere—to force digital cameras to go “click.” The idea is that this will make surreptitious photography harder. “The bill’s text says that Congress has found that ‘children and adolescents have been exploited by photographs taken in dressing rooms and public places with the use of a camera phone.'” This is so silly it defies comment.
http://arstechnica.com/tech-policy/news/2009/01/…
Apparently this is already law in Japan:
http://news.bbc.co.uk/2/hi/asia-pacific/3031716.stm

Someone did the analysis and came up with a cost estimate for the U.S. no-fly list: “As will be analyzed below, it is estimated that the costs of the no-fly list, since 2002, range from approximately $300 million (a conservative estimate) to $966 million (an estimate on the high end). Using those figures as low and high potentials, a reasonable estimate is that the U.S. government has spent over $500 million on the project since the September 11, 2001 terrorist attacks. Using annual data, this article suggests that the list costs taxpayers somewhere between $50 million and $161 million a year, with a reasonable compromise of those figures at approximately $100 million.”
http://www.hsaj.org/?fullarticle=5.1.6

People confess to crimes they don’t commit. They do it a lot. What’s interesting about it is that confessions—whether false or true—corrupt other eyewitnesses.
http://www3.interscience.wiley.com/journal/…
http://www.sciam.com/podcast/episode.cfm?…

Some serious research to back up the point that racial profiling is no better than random screening:
http://arstechnica.com/science/news/2009/02/…
http://www.sciam.com/article.cfm?…
http://www.nytimes.com/2009/02/03/science/…
http://www.nature.com/news/2009/090202/full/…
Me on racial profiling:
https://www.schneier.com/blog/archives/2005/07/…

There’s a new hard drive encryption standard, which will make it easier for manufacturers to build encryption into drives. Honestly, I don’t think this is really needed. I use PGP Disk, and I haven’t noticed any slowdown due to having encryption done in software. And I worry about yet another standard with its inevitable flaws and security vulnerabilities.
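
As a rough sanity check on that claim, here is a benchmark sketch of my own (it assumes the third-party pyca/cryptography Python package; the numbers will vary by machine) that measures raw software AES throughput:

import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Encrypt 256 MB of random data with AES-256 in CTR mode and time it.
key, nonce = os.urandom(32), os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

chunk = os.urandom(1 << 20)  # 1 MB buffer standing in for disk sectors
start = time.perf_counter()
for _ in range(256):
    enc.update(chunk)
elapsed = time.perf_counter() - start
print(f"{256 / elapsed:.0f} MB/s of software AES")

On any modern CPU the measured rate comfortably exceeds a hard disk’s transfer speed, which is consistent with software encryption like PGP Disk adding no noticeable slowdown.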
http://www.computerworld.com/action/article.do?…
http://arstechnica.com/hardware/news/2009/01/…
http://www.theregister.co.uk/2009/01/30/…
Perceptive comment about how the real benefit is regulatory compliance:
https://www.schneier.com/blog/archives/2009/02/…

It’s easy to hack electronic road signs: cheap locks and a default password. And it’s fun.
http://www.i-hacked.com/content/view/274/48/
http://www.kxan.com/dpp/news/Road_signs_warn_of_zombies
http://www.theindychannel.com/news/18620871/detail.html
http://hacks.mit.edu/Hacks/by_year/2008/sign_factory/

This list of NSA Video Courses from 1991 is interesting, at least to me. It helps if you know the various code names and the names of the different equipment.
http://www.governmentattic.org/2docs/…

Good xkcd comic on the difference between theoretical and practical cryptanalysis.
http://xkcd.com/538/

Some, but not many, details about the presidential limousine.
http://www.latimes.com/classified/automotive/…
http://www.msnbc.msn.com/id/28697417/
http://features.csmonitor.com/wp-content/themes/csm/…
http://jalopnik.com/5131380/…
Info about the Gatling gun-equipped SUV that follows Cadillac One.
http://jalopnik.com/5134488/…

The U.S. House of Representatives approved a bill creating a whitelist of people who are on the no-fly blacklist, but shouldn’t be. No word yet about what they’re going to do about people who are on the whitelist, but shouldn’t be. Perhaps they’ll create a second blacklist for them. Then we’ll all be safe from terrorists, for sure.
http://blog.wired.com/27bstroke6/2009/02/…

A man was arrested by Amtrak police for taking photographs for an Amtrak photography contest. You can’t make this stuff up. He’s since taken down his webpage about the incident, so see my blog entry for details:
https://www.schneier.com/blog/archives/2009/02/…
Even Stephen Colbert made fun of it.
http://www.colbertnation.com/…
This isn’t the first time Amtrak police have been idiots.
https://www.schneier.com/blog/archives/2008/06/…

In related news, in the UK it soon might be illegal to photograph the police.
http://www.bjp-online.com/public/showPage.html?…

Self-propelled semi-submersibles are used to smuggle drugs into the U.S. But let’s not forget the terrorism angle: “What worries me [about the SPSS] is if you can move that much cocaine, what else can you put in that semi-submersible. Can you put a weapon of mass destruction in it?” said Navy Adm. Jim Stavridis, Commander, U.S. Southern Command.
http://www.southcom.mil/AppsSC/factFiles.php?id=83

Chris Paget is able—from a distance—to clone Western Hemisphere Travel Initiative (WHTI) compliant documents such as the passport card and Enhanced Drivers License (EDL). He doesn’t clone passports, as many of the press reports claim.
http://video.google.com/videoplay?…
http://www.engadget.com/2009/02/02/…
http://hackaday.com/2009/02/02/mobile-rfid-scanning/
http://it.slashdot.org/article.pl?sid=09/02/04/1320223
https://www.schneier.com/blog/archives/2009/02/…

Creepy billboards that watch you back:
http://www.physorg.com/news152544159.html

Privacy on Facebook: excellent advice.
http://www.allfacebook.com/2009/02/facebook-privacy/

Interesting discussion of different ways to cheat and skip the lines at Disney theme parks. Most of the tricks involve their FastPass system for virtual queuing.
http://miceage.micechat.com/kevinyee/ky020309b.htm

Measuring browser patch rates worldwide:
http://www.techzoom.net/publications/…


The Doghouse: Raidon’s Staray-S Encrypted Hard Drives

Turns out the encryption algorithm is linear, which means it’s trivially breakable.
http://www.heise-online.co.uk/security/…

When you’re buying security products, you have to trust the vendor. That’s why I don’t buy any of these hardware-encrypted drives. I don’t trust the vendors.
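
To see why linearity is fatal, here is a minimal sketch. It assumes the simplest possible linear scheme, XORing every block with a fixed keystream; that is an illustration, not necessarily Raidon’s exact design. One known plaintext block recovers the keystream, and the keystream then decrypts everything else on the drive:

# A toy linear "cipher": XOR each block with a fixed secret keystream.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(16))  # the drive's secret, fixed per device

def encrypt(block: bytes) -> bytes:
    return xor_bytes(block, keystream)

# The attacker knows one plaintext block (say, a standard file header)
# and its ciphertext...
known_plain = b"KNOWN HEADER 1.0"
known_cipher = encrypt(known_plain)

# ...XORs them together to recover the keystream...
recovered = xor_bytes(known_plain, known_cipher)
assert recovered == keystream

# ...and can now decrypt every other block on the drive.
secret_block = encrypt(b"credit card 1234")
print(xor_bytes(secret_block, recovered))  # b'credit card 1234'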


The Exclusionary Rule and Security

Earlier this month, the Supreme Court ruled that evidence gathered as a result of errors in a police database is admissible in court. The narrow decision is wrong, and will only ensure that police databases remain error-filled in the future.

The specifics of the case are simple. A computer database said there was a felony arrest warrant pending for Bennie Herring when there actually wasn’t. When the police came to arrest him, they searched his home and found illegal drugs and a gun. The Supreme Court was asked to rule whether the police had the right to arrest him for possessing those items, even though there was no legal basis for the search and arrest in the first place.

What’s at issue here is the exclusionary rule, which basically says that unconstitutionally or illegally collected evidence is inadmissible in court. It might seem like a technicality, but excluding what is called “the fruit of the poisonous tree” is a security system designed to protect us all from police abuse.

We have a number of rules limiting what the police can do: rules governing arrest, search, interrogation, detention, prosecution, and so on. And one of the ways we ensure that the police follow these rules is by forbidding the police to receive any benefit from breaking them. In fact, we design the system so that the police actually harm their own interests by breaking them, because all evidence that stems from breaking the rules is inadmissible.

And that’s what the exclusionary rule does. If the police search your home without a warrant and find drugs, they can’t arrest you for possession. Since the police have better things to do than waste their time, they have an incentive to get a warrant.

The Herring case is more complicated, because the police thought they did have a warrant. The error was not a police error, but a database error. And, in fact, Chief Justice Roberts wrote for the majority: “The exclusionary rule serves to deter deliberate, reckless, or grossly negligent conduct, or in some circumstances recurring or systemic negligence. The error in this case does not rise to that level.”

Unfortunately, Roberts is wrong. Government databases are filled with errors. People often can’t see data about themselves, and have no way to correct the errors if they do learn of any. And more and more databases are trying to exempt themselves from the Privacy Act of 1974, and specifically the provisions that require data accuracy. The legal argument for excluding this evidence was best made by an amicus curiae brief filed by the Electronic Privacy Information Center, but in short, the court should exclude the evidence because it’s the only way to ensure police database accuracy.

We are protected from becoming a police state by limits on police power and authority. This is not a trade-off we make lightly: we deliberately hamper law enforcement’s ability to do its job because we recognize that these limits make us safer. Without the exclusionary rule, your only remedy against an illegal search is to bring legal action against the police—and that can be very difficult. We, the people, would rather have you go free than motivate the police to ignore the rules that limit their power.

By not applying the exclusionary rule in the Herring case, the Supreme Court missed an important opportunity to motivate the police to purge errors from their databases. Constitutional lawyers have written many articles about this ruling, but the most interesting idea comes from George Washington University professor Daniel J. Solove, who proposes this compromise: “If a particular database has reasonable protections and deterrents against errors, then the Fourth Amendment exclusionary rule should not apply. If not, then the exclusionary rule should apply. Such a rule would create an incentive for law enforcement officials to maintain accurate databases, to avoid all errors, and would ensure that there would be a penalty or consequence for errors.”

Increasingly, we are being judged by the trail of data we leave behind us. Increasingly, data accuracy is vital to our personal safety and security. And if errors made by police databases aren’t held to the same legal standard as errors made by policemen, then more and more innocent Americans will find themselves the victims of incorrect data.

http://www.nytimes.com/2009/01/15/washington/…
http://www.supremecourtus.gov/opinions/08pdf/07-513.pdf
http://epic.org/privacy/herring

Government database errors:
http://www.usdoj.gov/oig/reports/INS/e9708/index.htm
http://www.usdoj.gov/oig/reports/INS/e0206/index.htm
http://www.usdoj.gov/oig/reports/INS/e0301/final.pdf
http://www.gao.gov/new.items/d05813.pdf
http://www.usdoj.gov/oig/reports/FBI/a0527/final.pdf
http://www.ssa.gov/oig/ADOBEPDF/A-08-06-26100.pdf

EPIC amicus curiae brief:
http://epic.org/privacy/herring/07-513tsac_epic.pdf

Other commentary on this ruling:
http://www.concurringopinions.com/archives/2009/01/…
http://www.scotusblog.com/wp/…
http://volokh.com/posts/1231961926.shtml
http://alicublog.blogspot.com/2009/01/…

Me on our trail of data:
http://www.schneier.com/essay-219.html

More on the assault on the exclusionary rule.
http://www.nytimes.com/2009/01/31/washington/…

Here’s another recent court case involving the exclusionary rule, and a thoughtful analysis by Orin Kerr.
http://www.ajc.com/services/content/metro/dekalb/…
http://volokh.com/posts/1233720663.shtml

This essay originally appeared on the Wall Street Journal website:
http://online.wsj.com/article/SB123301316511017419.html


BitArmor’s No-Breach Guarantee

BitArmor now comes with a security guarantee. They even use me to tout it: “‘We think this guarantee is going to encourage others to offer similar ones. Bruce Schneier has been calling on the industry to do something like this for a long time,’ [BitArmor’s CEO] says.”

Sounds good, until you read the fine print: “If your company has to publicly report a breach while your data is protected by BitArmor, we’ll refund the purchase price of your software. It’s that simple. No gimmicks, no hassles.”

And: “BitArmor cannot be held accountable for data breaches, publicly or otherwise.”

So if BitArmor fails and someone steals your data, and then you get ridiculed in the press, sued, and lose your customers to competitors—BitArmor will refund the purchase price.
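
A back-of-the-envelope comparison makes the mismatch concrete. Every number below is my own invention, purely for illustration:

# Illustrative figures only: compare the guarantee's payout (a refund
# of the software's purchase price) to a breach's real cost.
software_price = 50_000        # assumed purchase price of the software
cost_per_record = 200          # assumed total cost per exposed record
records_exposed = 1_000_000    # assumed size of the breach

real_damage = cost_per_record * records_exposed  # $200,000,000
print(f"refund covers {software_price / real_damage:.3%} of the harm")
# -> refund covers 0.025% of the harm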

Bottom line: PR gimmick, nothing more.

Yes, I think that software vendors need to accept liability for their products, and that we won’t see real improvements in security until then. But it has to be real liability, not this sort of token liability. And it won’t happen without the insurance companies; that’s the industry that knows how to buy and sell liability.

http://www.bitarmor.com/guarantee
http://www.darkreading.com/security/attacks/…
BitArmor responds:
https://www.schneier.com/blog/archives/2009/01/…
Me on liability:
https://www.schneier.com/blog/archives/2004/11/…


Schneier News

Interview with me from Reason:
http://www.reason.com/news/show/131103.html

Cato recorded a podcast with me. If you’re a regular reader of Crypto-Gram, there’s nothing here you haven’t heard before.
http://www.cato.org/dailypodcast/…

Interview with me on Paul Harris’s Chicago radio show.
http://paulharrisonline.blogspot.com/2009/02/…

Another interview with me:
http://www.privacysummit.org/index.php?…

I am presenting Skein at the First SHA-3 Candidate Conference in Leuven, Belgium, on February 25-28:
http://csrc.nist.gov/groups/ST/hash/sha-3/Round1/…

I am speaking at the International Association of Privacy Professionals Summit in Washington DC on March 13:
http://www.privacysummit.org/


Breach Notification Laws

There are three reasons for breach notification laws. One, it’s common politeness that when you lose something of someone else’s, you tell him. The prevailing corporate attitude before the law—”They won’t notice, and if they do notice they won’t know it’s us, so we are better off keeping quiet about the whole thing”—is just wrong. Two, it provides statistics to security researchers as to how pervasive the problem really is. And three, it forces companies to improve their security.

That last point needs a bit of explanation. The problem with companies protecting your data is that it isn’t in their financial best interest to do so. That is, the companies are responsible for protecting your data, but bear none of the costs if your data is compromised. You suffer the harm, but you have no control—or even knowledge—of the company’s security practices. The idea behind such laws, and how they were sold to legislators, is that they would increase the cost—both in bad publicity and the actual notification—of security breaches, motivating companies to spend more to prevent them. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.
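
As a toy illustration of that externality argument (all numbers are invented for the example), consider a firm that chooses whatever security spend minimizes its own expected cost:

# Hypothetical numbers: probability of a breach as a function of
# security spend, in $ millions.
breach_prob = {0: 0.10, 1: 0.05, 2: 0.03, 3: 0.02}

def optimal_spend(cost_borne_by_firm):
    # The firm minimizes: spend + P(breach) * cost it actually bears.
    return min(breach_prob, key=lambda s: s + breach_prob[s] * cost_borne_by_firm)

print(optimal_spend(0))   # no law: customers bear the harm -> firm spends $0M
print(optimal_spend(30))  # law shifts $30M of breach cost onto the firm -> $1M

Once notification costs land on the firm, the cost-minimizing spend is no longer zero. Whether the shifted costs are large enough to matter in practice is the empirical question.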

So how has it worked?

Earlier this year, three researchers at the Heinz School of Public Policy and Management at Carnegie Mellon University—Sasha Romanosky, Rahul Telang and Alessandro Acquisti—tried to answer that question. They looked at reported data breaches and rates of identity theft from 2002 to 2007, comparing states with a law to states without one. If these laws had their desired effects, people in states with notification laws should experience a lower incidence of identity theft. The result: not so much. The researchers found data breach notification laws reduced identity theft by just 2% on average.

I think there’s a combination of things going on. Identity theft is being reported far more today than five years ago, so it’s difficult to compare identity theft rates before and after the state laws were enacted. Most identity theft occurs when someone’s home or work computer is compromised, not from theft of large corporate databases, so the effect of these laws is small. And the security improvements companies made in response may simply not have been very effective.

The laws rely on public shaming. It’s embarrassing to have to admit to a data breach, and companies should be willing to spend to avoid this PR expense. The problem is, in order for this to work well, public shaming needs the cooperation of the press. And there’s an attenuation effect going on. The first major breach disclosed under California’s law—the first state disclosure law—came in February 2005, when ChoicePoint sold personal data on 145,000 people to criminals. The event was big news, ChoicePoint’s stock tanked, and it was shamed into improving its security.

Next, LexisNexis exposed personal data on 300,000 individuals, and then Citigroup lost data on 3.9 million. The law worked; the only reason we knew about these security breaches was because of the law. But the breaches kept coming, in greater numbers and larger sizes. Data breach stories began to feel like “crying wolf,” and soon data breaches were no longer news.

Today, the remaining cost is that of the direct mail campaign to notify customers, which often turns into a marketing opportunity.

I’m still a fan of these laws, if only for the first two reasons I listed. Disclosure is important, but it’s not going to solve identity theft. As I’ve written previously, the reason theft of personal information is common is that the data is valuable once stolen. The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it’s to make it difficult to use.

Disclosure laws only deal with the economic externality of data owners protecting your personal information. What we really need are laws prohibiting financial institutions from granting credit to someone using your name with only a minimum of authentication.

Carnegie Mellon paper:
http://ssrn.com/abstract=1268926

Me on identity theft:
https://www.schneier.com/blog/archives/2005/04/…

This is the second half of a point/counterpoint with Marcus Ranum.
http://searchsecurity.techtarget.com/…
Marcus’s essay:
http://searchsecurity.techtarget.com/…


Comments from Readers

There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.

Copyright (c) 2009 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.