April 15, 2012
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1204.html>. These same essays and news items appear in the "Schneier on Security" blog at <http://www.schneier.com/blog>, along with a lively comment section. An RSS feed is available.
I debated former TSA Administrator Kip Hawley on the "Economist" website. I didn't bother reposting my opening statement and rebuttal, because -- even though I thought I did a really good job with them -- they were largely things I've said before. In my closing statement, I talked about specific harms post-9/11 airport security has caused. This is mostly new, so here it is, British spelling and punctuation and all.
In my previous two statements, I made two basic arguments about post-9/11 airport security. One, we are not doing the right things: the focus on airports at the expense of the broader threat is not making us safer. And two, the things we are doing are wrong: the specific security measures put in place since 9/11 do not work. Kip Hawley doesn't argue with the specifics of my criticisms, but instead provides anecdotes and asks us to trust that airport security -- and the Transportation Security Administration (TSA) in particular -- knows what it's doing.
He wants us to trust that a 400-ml bottle of liquid is dangerous, but transferring it to four 100-ml bottles magically makes it safe. He wants us to trust that the butter knives given to first-class passengers are nevertheless too dangerous to be taken through a security checkpoint. He wants us to trust the no-fly list: 21,000 people so dangerous they're not allowed to fly, yet so innocent they can't be arrested. He wants us to trust that the deployment of expensive full-body scanners has nothing to do with the fact that the former secretary of homeland security, Michael Chertoff, lobbies for one of the companies that makes them. He wants us to trust that there's a reason to confiscate a cupcake (Las Vegas), a 3-inch plastic toy gun (London Gatwick), a purse with an embroidered gun on it (Norfolk, VA), a T-shirt with a picture of a gun on it (London Heathrow) and a plastic lightsaber that's really a flashlight with a long cone on top (Dallas/Fort Worth).
At this point, we don't trust America's TSA, Britain's Department for Transport, or airport security in general. We don't believe they're acting in the best interests of passengers. We suspect their actions are the result of politicians and government appointees making decisions based on their concerns about the security of their own careers if they don't act tough on terror, and capitulating to public demands that "something must be done."
In this final statement, I promised to discuss the broader societal harms of post-9/11 airport security. This loss of trust -- in both airport security and counterterrorism policies in general -- is the first harm. Trust is fundamental to society. There is an enormous amount written about this; high-trust societies are simply happier and more prosperous than low-trust societies. Trust is essential for both free markets and democracy. This is why open-government laws are so important; trust requires government transparency. The secret policies implemented by airport security harm society because of their very secrecy.
The humiliation, the dehumanisation and the privacy violations are also harms. That Mr Hawley dismisses these as mere "costs in convenience" demonstrates how out of touch the TSA is with the people it claims to be protecting. Additionally, there's actual physical harm: the radiation from full-body scanners, still not publicly tested for safety. And there is mental harm, suffered by both abuse survivors and children: the things screeners tell them as they touch their bodies are uncomfortably similar to what child molesters say.
In 2004, the average extra waiting time due to TSA procedures was 19.5 minutes per person. That's a total economic loss -- in America -- of $10 billion per year, more than the TSA's entire budget. The increase in automobile deaths due to people deciding to drive instead of fly is 500 per year. Both of these numbers are for America only, and by themselves demonstrate that post-9/11 airport security has done more harm than good.
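The $10 billion figure is easy to sanity-check. Here's a minimal back-of-the-envelope sketch; the 19.5-minute delay comes from the text, but the passenger count and value-of-time figures are my own illustrative assumptions, not from the source:

```python
# Back-of-the-envelope check of the annual economic cost of TSA wait times.
# The 19.5-minute average delay is from the text; the passenger count and
# value-of-time figures below are illustrative assumptions, not sourced.

extra_minutes_per_passenger = 19.5
annual_us_passengers = 700e6      # assumed: roughly 2004-era US enplanements
value_of_time_per_hour = 44.0     # assumed: dollars per passenger-hour

hours_lost = annual_us_passengers * extra_minutes_per_passenger / 60
annual_loss = hours_lost * value_of_time_per_hour

print(f"hours lost: {hours_lost:.3g}")     # roughly 2.3e8 passenger-hours
print(f"annual loss: ${annual_loss:.3g}")  # on the order of $1e10
```

With these assumed inputs the loss lands at roughly $10 billion a year; the point is that the conclusion doesn't hinge on the exact value assigned to a passenger-hour.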
The current TSA measures create an even greater harm: loss of liberty. Airports are effectively rights-free zones. Security officers have enormous power over you as a passenger. You have limited rights to refuse a search. Your possessions can be confiscated. You cannot make jokes, or wear clothing, that airport security does not approve of. You cannot travel anonymously. (Remember when we would mock Soviet-style "show me your papers" societies? That we've become inured to the very practice is a harm.) And if you're on a certain secret list, you cannot fly, and you enter a Kafkaesque world where you cannot face your accuser, protest your innocence, clear your name, or even get confirmation from the government that someone, somewhere, has judged you guilty. These police powers would be illegal anywhere but in an airport, and we are all harmed -- individually and collectively -- by their existence.
In his first statement, Mr Hawley related a quote predicting "blood running in the aisles" if small scissors and tools were allowed on planes. That was said by Corey Caldwell, an Association of Flight Attendants spokesman, in 2005. It was not the statement of someone who is thinking rationally about airport security; it was the voice of irrational fear.
Increased fear is the final harm, and its effects are both emotional and physical. By sowing mistrust, by stripping us of our privacy -- and in many cases our dignity -- by taking away our rights, by subjecting us to arbitrary and irrational rules, and by constantly reminding us that this is the only thing between us and death at the hands of terrorists, the TSA and its ilk are sowing fear. And by doing so, they are playing directly into the terrorists' hands.
The goal of terrorism is not to crash planes, or even to kill people; the goal of terrorism is to cause terror. Liquid bombs, PETN, planes as missiles: these are all tactics designed to cause terror by killing innocents. But terrorists can only do so much. They cannot take away our freedoms. They cannot reduce our liberties. They cannot, by themselves, cause that much terror. It's our reaction to terrorism that determines whether or not their actions are ultimately successful. That we allow governments to do these things to us -- to effectively do the terrorists' job for them -- is the greatest harm of all.
Return airport security checkpoints to pre-9/11 levels. Get rid of everything that isn't needed to protect against random amateur terrorists and won't work against professional al-Qaeda plots. Take the savings thus earned and invest them in investigation, intelligence, and emergency response: security outside the airport, security that does not require us to play guessing games about plots. Recognise that 100% safety is impossible, and also that terrorism is not an "existential threat" to our way of life. Respond to terrorism not with fear but with indomitability. Refuse to be terrorised.
Here's the whole "Economist" debate.
Demands that "something must be done."
Full-body scanners and radiation.
Personal story of someone on the no-fly list.
Quote about small knives and scissors.
Investigation, intelligence, and emergency response:
BoingBoing on the debate:
I was supposed to testify on March 26 about the TSA in front of the House Committee on Oversight and Government Reform. I was informally invited a couple of weeks earlier, and formally invited the Tuesday before.
The hearing will examine the successes and challenges associated with Advanced Imaging Technology (AIT), the Screening of Passengers by Observation Techniques (SPOT) program, the Transportation Worker Identification Credential (TWIC), and other security initiatives administered by the TSA.
On the Friday before, at the request of the TSA, I was removed from the witness list. The excuse was that I am involved in a lawsuit against the TSA, trying to get them to suspend their full-body scanner program. But it's pretty clear that the TSA is afraid of public testimony on the topic, and especially of being challenged in front of Congress. They want to control the story, and it's easier for them to do that if I'm not sitting next to them pointing out all the holes in their position. Unfortunately, the committee went along with them.
The committee said it would try to invite me back for another hearing, but with my busy schedule, I don't know if I will be able to make it. And it would be far less effective for me to testify without forcing the TSA to respond to my points.
I was there in spirit, though. The title of the hearing was "TSA Oversight Part III: Effective Security or Security Theater?"
The U.S. military has a non-lethal heat ray. No details on what "non-lethal" means in this context.
Jon Callas talks about Bitcoin's security model, and how susceptible it would be to a Goldfinger-style attack (destroying everyone else's bitcoins).
"Empirical Analysis of Data Breach Litigation," Sasha Romanosky, David Hoffman, and Alessandro Acquisti.
Last month was the 2012 SHARCS (Special-Purpose Hardware for Attacking Cryptographic Systems) conference. The presentations are online.
Normally I just delete these as spam, but this Summer School in Cryptography and Software Security at Penn State for graduate students 1) looks interesting, and 2) has some scholarship money available.
XRY forensics tool against smart phones.
Paul Ceglia's lawsuit against Facebook is fascinating, but that's not the point of this news entry. As part of the case, there are allegations that documents and e-mails have been electronically forged. I found this story about the forensics done on Ceglia's computer to be interesting.
Symantec deliberately "lost" a bunch of smart phones with tracking software on them, just to see what would happen. "Some 43 percent of finders clicked on an app labeled 'online banking.' And 53 percent clicked on a file named 'HR salaries.' A file named 'saved passwords' was opened by 57 percent of finders. Social networking tools and personal e-mail were checked by 60 percent. And a folder labeled 'private photos' tempted 72 percent."
The National Academies Press has published "Crisis Standards of Care: A Systems Framework for Catastrophic Disaster Response."
The "New York Times" tries to make sense of the TSA's policies on computers. Why do you have to take your tiny laptop out of your bag, but not your iPad? Their conclusion: security theater.
I read "Raise the Crime Rate" a couple of months ago, and I'm still not sure what I think about it. It's definitely one of the most thought-provoking essays I've read this year. The author argues that the only moral thing for the U.S. to do is to accept a slight rise in the crime rate while vastly reducing the number of people incarcerated. While I might not agree with his conclusion -- as I said above, I'm not sure whether I do or I don't -- it's very much the sort of trade-off I talk about in "Liars and Outliers." And Steven Pinker has an extensive argument about violent crime in modern society that he makes in "The Better Angels of Our Nature."
Interesting video of Brian Snow speaking from last November. (Brian used to be the Technical Director of NSA's Information Assurance Directorate.) About a year and a half ago, I complained that his words were being used to sow cyber-fear. This talk -- about 30 minutes -- is a better reflection of what he really thinks.
Disguising Tor traffic as Skype video calls, to prevent national firewalls from blocking it.
The University of Pittsburgh has been the recipient of over 80 bomb threats in the past two months (over 30 during the last week). Each time, the university evacuates the threatened building, searches it top to bottom -- one of the threatened buildings is the 42-story Cathedral of Learning -- finds nothing, and eventually resumes classes. This seems to be nothing more than a very effective denial-of-service attack.
Police have no leads. The threats started out as handwritten messages on bathroom walls, but are now being sent via e-mail and anonymous remailers.
The University is implementing some pretty annoying security theater in response:
To enter secured buildings, we all will need to present a University of Pittsburgh ID card. It is important to understand that book bags, backpacks and packages will not be allowed. There will be single entrances to buildings so there will be longer waiting times to get into the buildings. In addition, non-University of Pittsburgh residents will not be allowed in the residence halls.
I can't see how this will help, but what else can the University do? Their incentives are such that they're stuck overreacting. If they ignore the threats and they're wrong, people will be fired. If they overreact to the threats and they're wrong, they'll be forgiven. There's no incentive to do an actual cost-benefit analysis of the security measures.
For the attacker, though, the cost-benefit payoff is enormous. E-mails are cheap, and the response they induce is very expensive.
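That asymmetry can be made concrete with a rough model. Every number here is an illustrative assumption except the threat count, which the text puts at over 80:

```python
# Rough cost asymmetry of a bomb-threat denial-of-service campaign.
# All figures are illustrative assumptions; only the threat count (80+)
# comes from the text.

attacker_cost_per_threat = 0.01   # assumed: an anonymous e-mail is nearly free
occupants_per_building = 2000     # assumed: a large academic building
hours_per_evacuation = 3.0        # assumed: evacuate, search, resume classes
value_per_person_hour = 25.0      # assumed: dollars per displaced person-hour
threats = 80

defender_cost = (threats * occupants_per_building *
                 hours_per_evacuation * value_per_person_hour)
attacker_cost = threats * attacker_cost_per_threat

print(f"defender cost: ${defender_cost:,.0f}")
print(f"cost ratio: {defender_cost / attacker_cost:,.0f} to 1")
```

Even with conservative assumptions the defender's costs run into the millions of dollars against essentially zero attacker cost, which is exactly the incentive structure that makes this attack so effective.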
If you have any information about the bomb threatener, contact the FBI. There's a $50,000 reward waiting for you. For the university, paying that would be a bargain.
In an excellent article in "Wired," James Bamford talks about the NSA's codebreaking capability.
According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: "Everybody's a target; everybody with communication is a target."
Bamford has been writing about the NSA for decades, and people tell him all sorts of confidential things. Reading the above, the obvious question to ask is: can the NSA break AES?
My guess is that they can't. That is, they don't have a cryptanalytic attack against the AES algorithm that allows them to recover a key from known or chosen ciphertext with a reasonable time and memory complexity. I believe that what the "top official" was referring to is attacks that focus on the implementation and bypass the encryption algorithm: side-channel attacks, attacks against the key generation systems (either exploiting bad random number generators or sloppy password creation habits), attacks that target the endpoints of the communication system and not the wire, attacks that exploit key leakage, attacks against buggy implementations of the algorithm, and so on. These attacks are likely to be much more effective against computer encryption.
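For scale, here's a back-of-the-envelope sketch of why exhaustive key search against AES-128 is the wrong place to look, even for the NSA. The trial rate is a deliberately generous assumption:

```python
# Why brute-forcing AES-128 is implausible, even with extravagant hardware.
# The trial rate below is a generous assumption, not a known capability.

keys = 2 ** 128                   # size of the AES-128 key space
trials_per_second = 1e18          # assumed: a dedicated exa-scale key-search machine
seconds_per_year = 365.25 * 24 * 3600

years = keys / trials_per_second / seconds_per_year
print(f"expected search time: about {years:.2g} years")
```

The answer comes out at over ten trillion years, which is why the smart money is on side channels, key management, and implementation bugs rather than on the algorithm itself.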
Another option is that the NSA has built dedicated hardware capable of factoring 1024-bit numbers. There's quite a lot of RSA-1024 out there, so that would be a fruitful project. So, maybe.
The NSA denies everything.
This is a neat story:
A pair of rare Enigma machines used in the Spanish Civil War have been given to the head of GCHQ, Britain's communications intelligence agency. The machines -- only recently discovered in Spain -- fill in a missing chapter in the history of British code-breaking, paving the way for crucial successes in World War II.
A non-commissioned officer found the machines almost by chance, only a few years ago, in a secret room at the Spanish Ministry of Defence in Madrid.
"Nobody entered there because it was very secret," says Felix Sanz, the director of Spain's intelligence service.
"And one day somebody said 'Well if it is so secret, perhaps there is something secret inside.' They entered and saw a small office where all the encryption was produced during not only the civil war but in the years right afterwards."
Liars and Outliers: IT World published an excerpt from Chapter 4.
I'll be speaking at InfoShare in Gdansk, Poland, April 19-20.
I'll be speaking to the New Zealand Internet Task Force in Wellington, New Zealand, on May 1.
I'll be speaking at Identity Conference 2012 in Wellington, New Zealand, also on May 1.
I'll be speaking at the Privacy Forum in Wellington, New Zealand on May 2.
A Forbes article talks about legitimate companies buying zero-day exploits, including the fact that "an undisclosed U.S. government contractor recently paid $250,000 for an iOS exploit."
The price goes up if the hack is exclusive, works on the latest version of the software, and is unknown to the developer of that particular software. Also, more popular software results in a higher payout. Sometimes, the money is paid in installments, which keep coming as long as the hack does not get patched by the original software developer.
Yes, I know that vendors will pay bounties for exploits. And I'm sure there are a lot of government agencies around the world who want zero-day exploits for both espionage and cyber-weapons. But I just don't see that much value in buying an exploit from random hackers around the world.
These things only have value until they're patched, and a known exploit -- even if it is just known by the seller -- is much more likely to get patched. I can much more easily see a criminal organization deciding that the exploit has significant value before that happens. Government agencies are playing a much longer game.
And I would expect that most governments have their own hackers who are finding their own exploits. One, it's cheaper. And two, the exploits remain known only within that government.
An otherwise uninteresting article on Internet threats to public infrastructure contains this paragraph:
At a closed-door briefing, the senators were shown how a power company employee could derail the New York City electrical grid by clicking on an e-mail attachment sent by a hacker, and how an attack during a heat wave could have a cascading impact that would lead to deaths and cost the nation billions of dollars.
Why isn't the obvious solution to this to take those critical electrical grid computers off the public Internet?
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Schneier on Security," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2012 by Bruce Schneier.