February 15, 2009
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0902.html>. These same essays appear in the "Schneier on Security" blog: <http://www.schneier.com/blog>. An RSS feed is available.
It regularly comes as a surprise to people that our own infrastructure can be used against us. And in the wake of terrorist attacks or plots, there are fear-induced calls to ban, disrupt, or control that infrastructure. According to officials investigating the Mumbai attacks, the terrorists used images from Google Earth to help learn their way around. This isn't the first time Google Earth has been charged with helping terrorists: in 2007, Google Earth images of British military bases were found in the homes of Iraqi insurgents. Incidents such as these have led many governments to demand that Google remove or blur images of sensitive locations: military bases, nuclear reactors, government buildings, and so on. An Indian court has been asked to ban Google Earth entirely.
This isn't the only way our information technology helps terrorists. Last year, a U.S. Army intelligence report worried that terrorists could plan their attacks using Twitter, and there are unconfirmed reports that the Mumbai terrorists read the Twitter feeds about their attacks to get real-time information they could use. British intelligence is worried that terrorists might use voice over IP services such as Skype to communicate. Terrorists might recruit on Second Life and World of Warcraft. We already know they use websites to spread their message and possibly even to recruit.
Of course, all of this is exacerbated by open wireless access, which has been repeatedly labeled a terrorist tool and which has been the object of attempted bans.
Mobile phone networks help terrorists, too. The Mumbai terrorists used them to communicate with each other. This has led some cities, including New York and London, to propose turning off mobile phone coverage in the event of a terrorist attack.
Let's all stop and take a deep breath. By its very nature, communications infrastructure is general. It can be used to plan both legal and illegal activities, and it's generally impossible to tell which is which. When I send and receive e-mail, it looks exactly the same as a terrorist doing the same thing. To the mobile phone network, a call from one terrorist to another looks exactly the same as a mobile phone call from one victim to another. Any attempt to ban or limit infrastructure affects everybody. If India bans Google Earth, a future terrorist won't be able to use it to plan; nor will anybody else. Open Wi-Fi networks are useful for many reasons, the large majority of them positive, and closing them down affects all those reasons. Terrorist attacks are very rare, and it is almost always a bad trade-off to deny society the benefits of a communications technology just because the bad guys might use it too.
Communications infrastructure is especially valuable during a terrorist attack. Twitter was the best way for people to get real-time information about the attacks in Mumbai. If the Indian government shut Twitter down -- or London blocked mobile phone coverage -- during a terrorist attack, the lack of communications for everyone, not just the terrorists, would increase the level of terror and could even increase the body count. Information lessens fear and makes people safer.
None of this is new. Criminals have used telephones and mobile phones since they were invented. Drug smugglers use airplanes and boats, radios and satellite phones. Bank robbers have long used cars and motorcycles as getaway vehicles, and horses before then. I haven't seen it talked about yet, but the Mumbai terrorists used boats as well. They also wore boots. They ate lunch at restaurants, drank bottled water, and breathed the air. Society survives all of this because the good uses of infrastructure far outweigh the bad uses, even though the good uses are -- by and large -- small and pedestrian and the bad uses are rare and spectacular. And while terrorism turns society's very infrastructure against itself, we only harm ourselves by dismantling that infrastructure in response -- just as we would if we banned cars because bank robbers used them too.
Google Earth helps the terrorists:
Skype helps the terrorists:
Cell phones help the terrorists:
Biomedical research helps the terrorists:
This essay originally appeared in The Guardian.
Monster.com was hacked, and people's personal data was stolen. Normally I wouldn't bother even writing about this -- it happens all the time -- but an AP reporter called me to comment. I said: "Monster's latest breach 'shouldn't have happened,' said Bruce Schneier, chief security technology officer for BT Group. 'But you can't understand a company's network security by looking at public events -- that's a bad metric. All the public events tell you are, these are attacks that were successful enough to steal data, but were unsuccessful in covering their tracks.'"
Thinking about it, it's even more complex than that. To assess an organization's network security, you need to actually analyze it. You can't get a lot of information from the list of attacks that were successful enough to steal data but not successful enough to cover their tracks, and which the company's attorneys couldn't figure out a reason not to disclose to the public.
In December, then-DHS Secretary Michael Chertoff claimed that airplane hijackings were routine prior to 9/11:
Top eleven reasons why lists of top 10 bugs don't work:
Excellent essay on "The Cost of Fearing Strangers":
In-person credit card scam relies on tricking a clerk into calling a fake credit-card company employee.
Dognapping -- or, at least, the fear of dognapping -- is on the rise. So people are no longer leaving their dogs tied up outside stores, and are buying leashes that can't be easily cut through.
Good essay on why identity, authentication, and authorization must remain distinct. I spent a chapter on this in Beyond Fear.
In Queensland, Australia, policemen are arresting fewer people because their new data-entry system is too annoying.
Story of voting machine audit logs that don't actually help in figuring out what happened.
Long article from the New York Times Magazine on Wall Street's risk management, and where it went wrong. The most interesting part explains how the incentives for traders encouraged them to take asymmetric risks: trade-offs that would work out well 99% of the time but fail catastrophically the remaining 1%. So of course, this is exactly what happened.
Good points about teaching risk analysis in school:
Some parents of children with peanut allergies are *not* asking their school to ban peanuts. They consider it more important that teachers know which children are likely to have a reaction, and how to deal with it when it happens; i.e., how to use an EpiPen. This is a much more resilient response to the threat. It works even when the peanut ban fails. It works whether the child has an anaphylactic reaction to nuts, fruit, dairy, gluten, or whatever. It's so rare to see rational risk management when it comes to children and safety.
Jeffrey Rosen on the Department of Homeland Security:
This Los Angeles Times story, about the airlines defining anyone disruptive as terrorists, seems to be much more hype than reality.
There's a bill in Congress -- unlikely to go anywhere -- to force digital cameras to go "click." The idea is that this will make surreptitious photography harder. "The bill's text says that Congress has found that 'children and adolescents have been exploited by photographs taken in dressing rooms and public places with the use of a camera phone.'" This is so silly it defies comment.
Someone did the analysis and came up with a cost of the U.S. no-fly list: "As will be analyzed below, it is estimated that the costs of the no-fly list, since 2002, range from approximately $300 million (a conservative estimate) to $966 million (an estimate on the high end). Using those figures as low and high potentials, a reasonable estimate is that the U.S. government has spent over $500 million on the project since the September 11, 2001 terrorist attacks. Using annual data, this article suggests that the list costs taxpayers somewhere between $50 million and $161 million a year, with a reasonable compromise of those figures at approximately $100 million."
People confess to crimes they don't commit. They do it a lot. What's interesting is that confessions -- whether false or true -- corrupt other eyewitnesses.
There's a new hard drive encryption standard, which will make it easier for manufacturers to build encryption into drives. Honestly, I don't think this is really needed. I use PGP Disk, and I haven't noticed any slowdown due to having encryption done in software. And I worry about yet another standard with its inevitable flaws and security vulnerabilities.
This list of NSA Video Courses from 1991 is interesting, at least to me. It helps if you know the various code names and the names of the different equipment.
Good xkcd comic on the difference between theoretical and practical cryptanalysis.
Some, but not many, details about the presidential limousine.
A man was arrested by Amtrak police for taking photographs for an Amtrak photography contest. You can't make this stuff up. He's since taken down his webpage about the incident, so see my blog entry for details:
In related news, in the UK it soon might be illegal to photograph the police.
Self-propelled semi-submersibles are used to smuggle drugs into the U.S. But let's not forget the terrorism angle: "What worries me [about the SPSS] is if you can move that much cocaine, what else can you put in that semi-submersible. Can you put a weapon of mass destruction in it?" -- Navy Adm. Jim Stavridis, Commander, U.S. Southern Command.
Chris Paget is able -- from a distance -- to clone Western Hemisphere Travel Initiative (WHTI) compliant documents such as the passport card and Enhanced Drivers License (EDL). He doesn't clone passports, as many of the press reports claim.
Creepy billboards that watch you back:
Privacy on Facebook: excellent advice.
Interesting discussion of different ways to cheat and skip the lines at Disney theme parks. Most of the tricks involve their FastPass system for virtual queuing.
Measuring browser patch rates worldwide:
The Doghouse: Raidon's Staray-S Encrypted Hard Drives
Earlier this month, the Supreme Court ruled that evidence gathered as a result of errors in a police database is admissible in court. The narrow decision is wrong, and will only ensure that police databases remain error-filled in the future.
The specifics of the case are simple. A computer database said there was a felony arrest warrant pending for Bennie Herring when there actually wasn't. When the police came to arrest him, they searched his home and found illegal drugs and a gun. The Supreme Court was asked to rule whether the police had the right to arrest him for possessing those items, even though there was no legal basis for the search and arrest in the first place.
What's at issue here is the exclusionary rule, which basically says that unconstitutionally or illegally collected evidence is inadmissible in court. It might seem like a technicality, but excluding what is called "the fruit of the poisonous tree" is a security system designed to protect us all from police abuse.
We have a number of rules limiting what the police can do: rules governing arrest, search, interrogation, detention, prosecution, and so on. And one of the ways we ensure that the police follow these rules is by forbidding the police to receive any benefit from breaking them. In fact, we design the system so that the police actually harm their own interests by breaking them, because all evidence that stems from breaking the rules is inadmissible.
And that's what the exclusionary rule does. If the police search your home without a warrant and find drugs, they can't arrest you for possession. Since the police have better things to do than waste their time, they have an incentive to get a warrant.
The Herring case is more complicated, because the police thought they did have a warrant. The error was not a police error, but a database error. And, in fact, Chief Justice Roberts wrote for the majority: "The exclusionary rule serves to deter deliberate, reckless, or grossly negligent conduct, or in some circumstances recurring or systemic negligence. The error in this case does not rise to that level."
Unfortunately, Roberts is wrong. Government databases are filled with errors. People often can't see data about themselves, and have no way to correct the errors if they do learn of any. And more and more databases are trying to exempt themselves from the Privacy Act of 1974, and specifically the provisions that require data accuracy. The legal argument for excluding this evidence was best made by an amicus curiae brief filed by the Electronic Privacy Information Center, but in short, the court should exclude the evidence because it's the only way to ensure police database accuracy.
We are protected from becoming a police state by limits on police power and authority. This is not a trade-off we make lightly: we deliberately hamper law enforcement's ability to do its job because we recognize that these limits make us safer. Without the exclusionary rule, your only remedy against an illegal search is to bring legal action against the police -- and that can be very difficult. We, the people, would rather have you go free than motivate the police to ignore the rules that limit their power.
By not applying the exclusionary rule in the Herring case, the Supreme Court missed an important opportunity to motivate the police to purge errors from their databases. Constitutional lawyers have written many articles about this ruling, but the most interesting idea comes from George Washington University professor Daniel J. Solove, who proposes this compromise: "If a particular database has reasonable protections and deterrents against errors, then the Fourth Amendment exclusionary rule should not apply. If not, then the exclusionary rule should apply. Such a rule would create an incentive for law enforcement officials to maintain accurate databases, to avoid all errors, and would ensure that there would be a penalty or consequence for errors."
Increasingly, we are being judged by the trail of data we leave behind us. Increasingly, data accuracy is vital to our personal safety and security. And if errors made by police databases aren't held to the same legal standard as errors made by policemen, then more and more innocent Americans will find themselves the victims of incorrect data.
Government database errors:
EPIC amicus curiae brief:
Other commentary on this ruling:
Me on our trail of data:
More on the assault on the exclusionary rule.
Here's another recent court case involving the exclusionary rule, and a thoughtful analysis by Orin Kerr.
This essay originally appeared on the Wall Street Journal website:
BitArmor now comes with a security guarantee. They even use me to tout it: "'We think this guarantee is going to encourage others to offer similar ones. Bruce Schneier has been calling on the industry to do something like this for a long time,' [BitArmor's CEO] says."
Sounds good, until you read the fine print: "If your company has to publicly report a breach while your data is protected by BitArmor, we'll refund the purchase price of your software. It's that simple. No gimmicks, no hassles."
And: "BitArmor cannot be held accountable for data breaches, publicly or otherwise."
So if BitArmor fails and someone steals your data, and then you get ridiculed in the press, sued, and lose your customers to competitors -- BitArmor will refund the purchase price.
Bottom line: PR gimmick, nothing more.
Yes, I think that software vendors need to accept liability for their products, and that we won't see real improvements in security until then. But it has to be real liability, not this sort of token liability. And it won't happen without the insurance companies; that's the industry that knows how to buy and sell liability.
Interview with me from Reason:
Cato recorded a podcast with me. If you're a regular reader of Crypto-Gram, there's nothing here you haven't heard before.
Interview with me on Paul Harris's Chicago radio show.
Another interview with me:
I am speaking at the International Association of Privacy Professionals Summit in Washington DC on March 13:
There are three reasons for breach notification laws. One, it's common politeness that when you lose something of someone else's, you tell him. The prevailing corporate attitude before the law -- "They won't notice, and if they do notice they won't know it's us, so we are better off keeping quiet about the whole thing" -- is just wrong. Two, it provides statistics to security researchers as to how pervasive the problem really is. And three, it forces companies to improve their security.
That last point needs a bit of explanation. The problem with companies protecting your data is that it isn't in their financial best interest to do so. That is, the companies are responsible for protecting your data, but bear none of the costs if your data is compromised. You suffer the harm, but you have no control -- or even knowledge -- of the company's security practices. The idea behind such laws, and how they were sold to legislators, is that they would increase the cost -- both in bad publicity and the actual notification -- of security breaches, motivating companies to spend more to prevent them. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.
So how has it worked?
Earlier this year, three researchers at the Heinz School of Public Policy and Management at Carnegie Mellon University -- Sasha Romanosky, Rahul Telang and Alessandro Acquisti -- tried to answer that question. They looked at reported data breaches and rates of identity theft from 2002 to 2007, comparing states with a law to states without one. If these laws had their desired effects, people in states with notification laws should experience a lower incidence of identity theft. The result: not so much. The researchers found data breach notification laws reduced identity theft by just 2% on average.
I think there's a combination of things going on. Identity theft is reported far more today than it was five years ago, so it's difficult to compare rates before and after the state laws were enacted. Most identity theft occurs when someone's home or work computer is compromised, not when large corporate databases are breached, so these laws address only a small slice of the problem. And most of the security improvements companies made in response didn't make much of a difference, further reducing the laws' effect.
The laws rely on public shaming. It's embarrassing to have to admit to a data breach, and companies should be willing to spend to avoid this PR expense. The problem is, in order for this to work well, public shaming needs the cooperation of the press. And there's an attenuation effect going on. The first major breach after the first state disclosure law was in February 2005 in California, when ChoicePoint sold personal data on 145,000 people to criminals. The event was big news, ChoicePoint's stock tanked, and it was shamed into improving its security.
Next, LexisNexis exposed personal data on 300,000 individuals, and then Citigroup lost data on 3.9 million. The law worked; the only reason we knew about these security breaches was because of the law. But the breaches came in increasing numbers, and in larger quantities. Data breach stories felt more like "crying wolf" and soon, data breaches were no longer news.
Today, the remaining cost is that of the direct mail campaign to notify customers, which often turns into a marketing opportunity.
I'm still a fan of these laws, if only for the first two reasons I listed. Disclosure is important, but it's not going to solve identity theft. As I've written previously, the reason theft of personal information is common is that the data is valuable once stolen. The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it's to make it difficult to use.
Disclosure laws only deal with the economic externality of data owners protecting your personal information. What we really need are laws prohibiting financial institutions from granting credit to someone using your name with only a minimum of authentication.
Carnegie Mellon paper:
Me on identity theft:
There are hundreds of comments -- many of them interesting -- on these topics on my blog. Search for the story you want to comment on, and join in.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2009 by Bruce Schneier.