Schneier on Security
A blog covering security and security technology.
March 2012 Archives
The squid use two closely spaced organs called statocysts to sense sound.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Normally I just delete these as spam, but this summer program for graduate students 1) looks interesting, and 2) has some scholarship money available.
As I posted previously, I have been debating former TSA Administrator Kip Hawley on the Economist website. I didn't bother reposting my opening statement and rebuttal, because -- even though I thought I did a really good job with them -- they were largely things I've said before. In my closing statement, I talked about specific harms post-9/11 airport security has caused. This is mostly new, so here it is, British spelling and punctuation and all.
In my previous two statements, I made two basic arguments about post-9/11 airport security. One, we are not doing the right things: the focus on airports at the expense of the broader threat is not making us safer. And two, the things we are doing are wrong: the specific security measures put in place since 9/11 do not work. Kip Hawley doesn’t argue with the specifics of my criticisms, but instead provides anecdotes and asks us to trust that airport security—and the Transportation Security Administration (TSA) in particular—knows what it’s doing.
He wants us to trust that a 400-ml bottle of liquid is dangerous, but transferring it to four 100-ml bottles magically makes it safe. He wants us to trust that the butter knives given to first-class passengers are nevertheless too dangerous to be taken through a security checkpoint. He wants us to trust the no-fly list: 21,000 people so dangerous they’re not allowed to fly, yet so innocent they can’t be arrested. He wants us to trust that the deployment of expensive full-body scanners has nothing to do with the fact that the former secretary of homeland security, Michael Chertoff, lobbies for one of the companies that makes them. He wants us to trust that there’s a reason to confiscate a cupcake (Las Vegas), a 3-inch plastic toy gun (London Gatwick), a purse with an embroidered gun on it (Norfolk, VA), a T-shirt with a picture of a gun on it (London Heathrow) and a plastic lightsaber that’s really a flashlight with a long cone on top (Dallas/Fort Worth).
At this point, we don’t trust America’s TSA, Britain’s Department for Transport, or airport security in general. We don’t believe they’re acting in the best interests of passengers. We suspect their actions are the result of politicians and government appointees making decisions based on their concerns about the security of their own careers if they don’t act tough on terror, and capitulating to public demands that “something must be done”.
In this final statement, I promised to discuss the broader societal harms of post-9/11 airport security. This loss of trust—in both airport security and counterterrorism policies in general—is the first harm. Trust is fundamental to society. There is an enormous amount written about this; high-trust societies are simply happier and more prosperous than low-trust societies. Trust is essential for both free markets and democracy. This is why open-government laws are so important; trust requires government transparency. The secret policies implemented by airport security harm society because of their very secrecy.
The humiliation, the dehumanisation and the privacy violations are also harms. That Mr Hawley dismisses these as mere “costs in convenience” demonstrates how out of touch the TSA is with the people it claims to be protecting. Additionally, there’s actual physical harm: the radiation from full-body scanners, still not publicly tested for safety; and the mental harm suffered by both abuse survivors and children: the things screeners tell them as they touch their bodies are uncomfortably similar to what child molesters say.
In 2004, the average extra waiting time due to TSA procedures was 19.5 minutes per person. That’s a total economic loss, in America, of $10 billion per year, more than the TSA’s entire budget. The increased automobile deaths due to people deciding to drive instead of fly amount to 500 per year. Both of these numbers are for America only, and by themselves demonstrate that post-9/11 airport security has done more harm than good.
The current TSA measures create an even greater harm: loss of liberty. Airports are effectively rights-free zones. Security officers have enormous power over you as a passenger. You have limited rights to refuse a search. Your possessions can be confiscated. You cannot make jokes, or wear clothing, that airport security does not approve of. You cannot travel anonymously. (Remember when we would mock Soviet-style “show me your papers” societies? That we’ve become inured to the very practice is a harm.) And if you’re on a certain secret list, you cannot fly, and you enter a Kafkaesque world where you cannot face your accuser, protest your innocence, clear your name, or even get confirmation from the government that someone, somewhere, has judged you guilty. These police powers would be illegal anywhere but in an airport, and we are all harmed—individually and collectively—by their existence.
In his first statement, Mr Hawley related a quote predicting “blood running in the aisles” if small scissors and tools were allowed on planes. That was said by Corey Caldwell, an Association of Flight Attendants spokesman, in 2005. It was not the statement of someone who is thinking rationally about airport security; it was the voice of irrational fear.
Increased fear is the final harm, and its effects are both emotional and physical. By sowing mistrust, by stripping us of our privacy—and in many cases our dignity—by taking away our rights, by subjecting us to arbitrary and irrational rules, and by constantly reminding us that this is the only thing between us and death by the hands of terrorists, the TSA and its ilk are sowing fear. And by doing so, they are playing directly into the terrorists’ hands.
The goal of terrorism is not to crash planes, or even to kill people; the goal of terrorism is to cause terror. Liquid bombs, PETN, planes as missiles: these are all tactics designed to cause terror by killing innocents. But terrorists can only do so much. They cannot take away our freedoms. They cannot reduce our liberties. They cannot, by themselves, cause that much terror. It’s our reaction to terrorism that determines whether or not their actions are ultimately successful. That we allow governments to do these things to us—to effectively do the terrorists’ job for them—is the greatest harm of all.
Return airport security checkpoints to pre-9/11 levels. Get rid of everything that isn’t needed to protect against random amateur terrorists and won’t work against professional al-Qaeda plots. Take the savings thus earned and invest them in investigation, intelligence, and emergency response: security outside the airport, security that does not require us to play guessing games about plots. Recognise that 100% safety is impossible, and also that terrorism is not an “existential threat” to our way of life. Respond to terrorism not with fear but with indomitability. Refuse to be terrorized.
EDITED TO ADD (3/20): Cory Doctorow on the exchange:
All of Hawley's best arguments sum up to "Someone somewhere did something bad, and if he'd tried it on us, we would have caught him." His closing clincher? They heard a bad guy was getting on a plane somewhere. They figured out which plane, stopped it from taking off, and "resolved" the situation. Seeing as there were no recent reports of foiled terrorist plots, I'm guessing the "resolution" was "it turned out we made a mistake." But Hawley's takeaway is: "look at how fast our mistake was!"
EDITED TO ADD (4/19): German translation of the closing statement.
"Empirical Analysis of Data Breach Litigation," Sasha Romanosky, David Hoffman, and Alessandro Acquisti:
Abstract: In recent years, a large number of data breaches have resulted in lawsuits in which individuals seek redress for alleged harm resulting from an organization losing or compromising their personal information. Currently, however, very little is known about those lawsuits. Which types of breaches are litigated, which are not? Which lawsuits settle, or are dismissed? Using a unique database of manually-collected lawsuits from PACER, we analyze the court dockets of over 230 federal data breach lawsuits from 2000 to 2010. We use binary outcome regressions to investigate two research questions: Which data breaches are being litigated in federal court? Which data breach lawsuits are settling? Our results suggest that the odds of a firm being sued in federal court are 3.5 times greater when individuals suffer financial harm, but over 6 times lower when the firm provides free credit monitoring following the breach. We also find that defendants settle 30% more often when plaintiffs allege financial loss from a data breach, or when faced with a certified class action suit. While the compromise of financial information appears to lead to more federal litigation, it does not seem to increase a plaintiff's chance of a settlement. Instead, compromise of medical information is more strongly correlated with settlement.
The full paper is available by using the one-click download button.
I was supposed to testify today about the TSA in front of the House Committee on Oversight and Government Reform. I was informally invited a couple of weeks ago, and formally invited last Tuesday:
The hearing will examine the successes and challenges associated with Advanced Imaging Technology (AIT), the Screening of Passengers by Observation Techniques (SPOT) program, the Transportation Worker Identification Credential (TWIC), and other security initiatives administered by the TSA.
On Friday, at the request of the TSA, I was removed from the witness list. The excuse was that I am involved in a lawsuit against the TSA, trying to get them to suspend their full-body scanner program. But it's pretty clear that the TSA is afraid of public testimony on the topic, and especially of being challenged in front of Congress. They want to control the story, and it's easier for them to do that if I'm not sitting next to them pointing out all the holes in their position. Unfortunately, the committee went along with them. (They tried to pull the same thing last year and it failed -- video at the 10:50 mark.)
The committee said it would try to invite me back for another hearing, but with my busy schedule, I don't know if I will be able to make it. And it would be far less effective for me to testify without forcing the TSA to respond to my points.
I'm there in spirit, though. The title of the hearing is "TSA Oversight Part III: Effective Security or Security Theater?"
This is a neat story:
A pair of rare Enigma machines used in the Spanish Civil War have been given to the head of GCHQ, Britain's communications intelligence agency. The machines - only recently discovered in Spain - fill in a missing chapter in the history of British code-breaking, paving the way for crucial successes in World War II.
A non-commissioned officer found the machines almost by chance, only a few years ago, in a secret room at the Spanish Ministry of Defence in Madrid.
On The Economist website, I am currently debating Kip Hawley on airplane security. On Tuesday we posted our initial statements, and today (London time) we posted our rebuttals. We have one more round to go.
I've set it up to talk about the myriad of harms airport security has caused: loss of trust in government, increased fear, creeping police state, loss of liberty in the "rights free zone," and so on. Suggestions of what to say next are appreciated.
In an excellent article in Wired, James Bamford talks about the NSA's codebreaking capability.
According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: "Everybody's a target; everybody with communication is a target."
Bamford has been writing about the NSA for decades, and people tell him all sorts of confidential things. Reading the above, the obvious question to ask is: can the NSA break AES?
My guess is that they can't. That is, they don't have a cryptanalytic attack against the AES algorithm that allows them to recover a key from known or chosen ciphertext with a reasonable time and memory complexity. I believe that what the "top official" was referring to is attacks that focus on the implementation and bypass the encryption algorithm: side-channel attacks, attacks against the key generation systems (either exploiting bad random number generators or sloppy password creation habits), attacks that target the endpoints of the communication system and not the wire, attacks that exploit key leakage, attacks against buggy implementations of the algorithm, and so on. These attacks are likely to be much more effective against computer encryption.
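To see why a generic key-recovery attack is the implausible option, a back-of-the-envelope calculation helps. This sketch is my own illustration; the attacker hardware numbers are invented, and it says nothing about the implementation attacks listed above, which sidestep the key space entirely:

```python
# Back-of-the-envelope: exhaustively searching a 128-bit AES key space.
# Hypothetical attacker: a billion machines, each testing a billion
# keys per second -- far beyond anything publicly known.
keys = 2 ** 128
keys_per_second = 10 ** 9 * 10 ** 9        # 1e18 keys/sec, invented
seconds_per_year = 365 * 24 * 3600

years = keys / (keys_per_second * seconds_per_year)
print(f"{years:.1e} years to sweep the key space")  # about 1e13 years
```

Even with those absurdly generous assumptions, the search takes on the order of ten trillion years, which is why the smart money is on attacks against implementations, keys, and endpoints rather than the algorithm itself.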
EDITED TO ADD (3/22): Another option is that the NSA has built dedicated hardware capable of factoring 1024-bit numbers. There's quite a lot of RSA-1024 out there, so that would be a fruitful project. So, maybe.
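For a sense of scale, the standard heuristic running time of the general number field sieve can be evaluated directly. This is a ballpark sketch of the textbook complexity formula, ignoring the o(1) term and all constant factors; it is not a claim about any actual NSA hardware:

```python
import math

def gnfs_bits(modulus_bits):
    """Heuristic GNFS cost L_n[1/3, (64/9)^(1/3)], expressed in bits
    (log2 of the operation count); o(1) term and constants ignored."""
    ln_n = modulus_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

print(round(gnfs_bits(1024)))  # 87
print(round(gnfs_bits(2048)))  # 117
```

Roughly 2^87 operations for a 1024-bit modulus is uncomfortably close to feasible for a very well-funded adversary with custom hardware; 2^117 for 2048 bits is not, which is one reason the community has been pushing longer RSA keys for years.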
EDITED TO ADD (4/13): The NSA denies everything.
IT World published an excerpt from Chapter 4.
A way to securely erase paper:
"The key idea was to find a laser energy level that is high enough to ablate - or vaporise - the toner that at the same time is lower than the destruction threshold of the paper substrate. It turns out the best wavelength is 532 nanometres - that's green visible light - with a pulse length of 4 nanoseconds, which is quite long," Leal-Ayala told New Scientist.
EDITED TO ADD (3/21): More than one reader has pointed out that this system is not secure, nor do its inventors make any claims of security.
An otherwise uninteresting article on Internet threats to public infrastructure contains this paragraph:
At a closed-door briefing, the senators were shown how a power company employee could derail the New York City electrical grid by clicking on an e-mail attachment sent by a hacker, and how an attack during a heat wave could have a cascading impact that would lead to deaths and cost the nation billions of dollars.
Why isn't the obvious solution to this to take those critical electrical grid computers off the public Internet?
Avi Rubin has a TEDx talk on hacking various computer devices: medical devices, automobiles, police radios, smart phones, etc.
I like the quote at the end of this excerpt:
Aviation officials have questioned the need for such a strong permanent police presence at airports, suggesting they were there simply "to make the government look tough on terror".
It looks great.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Jon Callas talks about Bitcoin's security model, and how susceptible it would be to a Goldfinger-style attack (destroy everyone else's bitcoins).
The U.S. military has a non-lethal heat ray. No details on what "non-lethal" means in this context.
EDITED TO ADD (4/13): Here's an older article on the same topic.
Third, I take the Page 99 Test.
Good article by Thomas Rid on the hype surrounding cyberwar. It's well worth reading.
And in a more academic paper, published in the RUSI Journal, Thomas Rid and Peter McBurney argue that cyber-weapons aren't all that destructive and that we've been misled by some bad metaphors.
Some fundamental questions on the use of force in cyberspace are still unanswered. Worse, they are still unexplored: What are cyber 'weapons' in the first place? How is weaponised code different from physical weaponry? What are the differences between various cyber-attack tools? And do the same dynamics and norms that govern the use of weapons on the conventional battlefield apply in cyberspace?
And from the conclusion:
Two findings contravene the debate's received wisdom. One insight concerns the dominance of the offence. Most weapons may be used defensively and offensively. But the information age, the argument goes since at least 1996, has 'offence-dominant attributes.' A 2011 Pentagon report on cyberspace again stressed 'the advantage currently enjoyed by the offense in cyberwarfare.' But when it comes to cyber-weapons, the offence has higher costs, a shorter shelf-life than the defence, and a very limited target set. All this drastically reduces the coercive utility of cyber-attacks. Any threat relies on the offender's credibility to attack, or to repeat a successful attack. Even if a potent cyber-weapon could be launched successfully once, it would be highly questionable if an attack, or even a salvo, could be repeated in order to achieve a political goal. At closer inspection cyber-weapons do not seem to favour the offence.
Here's an article on the paper.
One final paper by Rid: "Cyber-War Will Not Take Place" (2012), Journal of Strategic Studies. I have not read it yet.
This person didn't like it at all.
It'll go up on the book's webpage, along with all the positive reviews.
We found about 8,000 phrases using a 20,000 phrase dictionary. Using a very rough estimate for the total number of phrases and some probability calculations, this produced an estimate that passphrase distribution provides only about 20 bits of security against an attacker trying to compromise 1% of available accounts. This is far better than passwords, which are usually under 10 bits by this same metric, but not high enough to make online guessing impractical without proper rate-limiting. Curiously, it's close to estimates made using Kuo et al.'s published numbers on mnemonic phrases. It also shows that significant numbers of people will blatantly ignore security advice about choosing nonsense phrases and choose things like "Manchester United" or "Harry Potter."
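The "bits against an attacker compromising 1% of accounts" idea can be sketched in a few lines. This is a toy version of a partial-guessing metric, not the researchers' exact methodology, and the distributions below are invented for illustration; real passphrase data is what drives their 20-bit figure:

```python
import math

def effective_bits(freqs, beta=0.01):
    """Effective key length against an attacker who guesses phrases in
    popularity order until a fraction beta of accounts is compromised
    (a rough partial-guessing metric, not the paper's exact method)."""
    covered, guesses = 0.0, 0
    for p in sorted(freqs, reverse=True):
        guesses += 1
        covered += p
        if covered >= beta:
            break
    return math.log2(guesses / covered)

# Sanity check: a uniform choice among 2**10 phrases yields 10 bits.
uniform = [1 / 1024] * 1024
print(effective_bits(uniform))  # 10.0

# A skewed distribution -- half of all users pick the same phrase --
# collapses to a single bit, no matter how large the dictionary is.
skewed = [0.5] + [0.5 / 1023] * 1023
print(effective_bits(skewed))
```

The skewed case is the whole story of "Manchester United" and "Harry Potter": the metric is dominated by how many users pile onto the most popular choices, not by the theoretical size of the phrase space.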
The Internet is buzzing about this video, showing a blogger walking through two different types of full-body scanners with metal objects. Basically, by placing the object on your side, the black image is hidden against the scanner's black background. This isn't new, by the way. This vulnerability was discussed in a paper published last year by the Journal of Transportation Security. And here's a German TV news segment from 2010 that shows someone sneaking explosives past a full-body scanner.
The TSA's response is pretty uninformative. I'd include a quote, but it really doesn't say anything. And the original blogger is now writing that the TSA is pressuring journalists not to cover the story.
These full-body scanners have been a disaster since they were introduced. But, as I wrote in 2010, I don't think the TSA will back down. It would be too embarrassing if they did.
This is cool:
The idea is simple. Psychologists have known for some years that it is almost impossible to speak when your words are replayed to you with a delay of a fraction of a second.
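The core of the device is just a delay line: play the speaker's own voice back to them a fraction of a second late. A minimal sketch of that signal path (my own illustration; sample rate and delay values are arbitrary, and real hardware would stream from a microphone rather than process a buffer):

```python
def delayed_feedback(samples, sample_rate=8000, delay_s=0.2):
    """Return a copy of the input delayed by delay_s seconds, padded
    with silence at the start (output has the same length as input)."""
    d = int(sample_rate * delay_s)
    if d >= len(samples):
        return [0.0] * len(samples)
    return [0.0] * d + list(samples[:len(samples) - d])

voice = [float(i + 1) for i in range(16000)]  # two seconds of fake audio
echo = delayed_feedback(voice)
# For the first 0.2 s the feedback channel is silent; after that, the
# speaker hears their own words from 0.2 s earlier -- which is what
# makes continued speech nearly impossible.
```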
Yet another impressive Humboldt squid feat:
"We've seen them make really impressive dives up to a kilometre and a half deep, swimming straight through a zone where there's really low oxygen," the Hopkins Marine Station researcher said.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Gizmodo published the beginning of Chapter 17: the last chapter.
This essay uses the interesting metaphor of the man-in-the-middle attacker to describe cloud providers like Facebook and Google. Basically, they get in the middle of our interactions with others and eavesdrop on the data going back and forth.
The NSA has released its specification for a secure Android.
One of the interesting things it's requiring is that all data be tunneled through a secure VPN:
Inter-relationship to Other Elements of the Secure VoIP System
The more I look at mobile security, the more I think a secure tunnel is essential.
Security is a tradeoff, a balancing act between attacker and defender. Unfortunately, that balance is never static. Changes in technology affect both sides. Society uses new technologies to decrease what I call the scope of defection -- what attackers can get away with -- and attackers use new technologies to increase it. What's interesting is the difference between how the two groups incorporate new technologies.
Changes in security systems can be slow. Society has to implement any new security technology as a group, which implies agreement and coordination and -- in some instances -- a lengthy bureaucratic procurement process. Meanwhile, an attacker can just use the new technology. For example, at the end of the horse-and-buggy era, it was easier for a bank robber to use his new motorcar as a getaway vehicle than it was for a town's police department to decide it needed a police car, get the budget to buy one, choose which one to buy, buy it, and then develop training and policies for it. And if only one police department did this, the bank robber could just move to another town. Defectors are more agile and adaptable, making them much better at being early adopters of new technology.
We saw it in law enforcement's initial inability to deal with Internet crime. Criminals were simply more flexible. Traditional criminal organizations like the Mafia didn't immediately move onto the Internet; instead, new Internet-savvy criminals sprung up. They set up websites like CardersMarket and DarkMarket, and established new organized crime groups within a decade or so of the Internet's commercialization. Meanwhile, law enforcement simply didn't have the organizational fluidity to adapt as quickly. Cities couldn't fire their old-school detectives and replace them with people who understood the Internet. The detectives' natural inertia and tendency to sweep problems under the rug slowed things even more. They spent the better part of a decade playing catch-up.
There's one more problem: defenders are in what military strategist Carl von Clausewitz calls "the position of the interior." They have to defend against every possible attack, while the defector only has to find one flaw that allows one way through the defenses. As systems get more complicated due to technology, more attacks become possible. This means defectors have a first-mover advantage; they get to try the new attack first. Consequently, society is constantly responding: shoe scanners in response to the shoe bomber, harder-to-counterfeit money in response to better counterfeiting technologies, better antivirus software to combat new computer viruses, and so on. The attacker's clear advantage increases the scope of defection even further.
Of course, there are exceptions. There are technologies that immediately benefit the defender and are of no use at all to the attacker -- for example, fingerprint technology allowed police to identify suspects after they left the crime scene and didn't provide any corresponding benefit to criminals. The same thing happened with immobilizing technology for cars, alarm systems for houses, and computer authentication technologies. Some technologies benefit both but still give more advantage to the defenders. The radio allowed street policemen to communicate remotely, which increased our level of safety more than the corresponding downside of criminals communicating remotely endangers us.
Still, we tend to be reactive in security, and only implement new measures in response to an increased scope of defection. We're slow about doing it and even slower about getting it right.
This essay originally appeared in IEEE Security & Privacy. It was adapted from Chapter 16 of Liars and Outliers.
According to this document, received by EPIC under the Freedom of Information Act, the U.S. Department of Homeland Security is combing through the gazillions of social media postings looking for terrorists. A partial list of keywords is included in the document (pages 20–23), and is reprinted in this blog post.
EDITED TO ADD (3/13): It's hard to tell what the DHS is doing with this program. EPIC says that they're monitoring "dissent," and the document talks about monitoring news stories.
Last week was the big RSA Conference in San Francisco: something like 20,000 people. From what I saw, these were the major themes on the show floor:
Who else went to RSA? What did you notice?
Some squid can see aspects of light that are invisible to humans, including polarized light.
My big idea is a big question. Every cooperative system contains parasites. How do we ensure that society's parasites don't destroy society's systems?
It's all about trust, really. Not the intimate trust we have in our close friends and relatives, but the more impersonal trust we have in the various people and systems we interact with in society. I trust airline pilots, hotel clerks, ATMs, restaurant kitchens, and the company that built the computer I'm writing this short essay on. I trust that they have acted and will act in the ways I expect them to. This type of trust is more a matter of consistency or predictability than of intimacy.
Of course, all of these systems contain parasites. Most people are naturally trustworthy, but some are not. There are hotel clerks who will steal your credit card information. There are ATMs that have been hacked by criminals. Some restaurant kitchens serve tainted food. There was even an airline pilot who deliberately crashed his Boeing 767 into the Atlantic Ocean in 1999.
My central metaphor is the Prisoner's Dilemma, which nicely exposes the tension between group interest and self-interest. And the dilemma even gives us a terminology to use: cooperators act in the group interest, and defectors act in their own selfish interest, to the detriment of the group. Too many defectors, and everyone suffers -- often catastrophically.
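The dilemma's structure is compact enough to write down. Here is the canonical payoff matrix (years in prison, so lower is better; the specific numbers are the textbook ones, chosen for illustration):

```python
# Canonical Prisoner's Dilemma payoffs for (row player, column player).
# "C" = cooperate (stay silent), "D" = defect (betray the other).
PAYOFF = {
    ("C", "C"): (1, 1),   # both stay silent: light sentences
    ("C", "D"): (3, 0),   # I stay silent, you betray me
    ("D", "C"): (0, 3),   # I betray you, you stay silent
    ("D", "D"): (2, 2),   # both betray: moderate sentences
}

def best_response(opponent_move):
    """The move that minimizes my own sentence, given the opponent's move."""
    return min("CD", key=lambda me: PAYOFF[(me, opponent_move)][0])

# Whatever the other player does, defecting is individually better...
print(best_response("C"), best_response("D"))  # D D
# ...yet mutual defection (2, 2) leaves both worse off than mutual
# cooperation (1, 1). Self-interest defeats the group interest.
```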
The Prisoner's Dilemma is not only useful in describing the problem, but also serves as a way to organize solutions. We humans have developed four basic mechanisms for ways to limit defectors: what I call societal pressure. We use morals, reputation, laws, and security systems. It's all coercion, really, although we don't call it that. I'll spare you the details; it would require a book to explain. And it did.
This book marks another chapter in my career's endless series of generalizations. From mathematical security -- cryptography -- to computer and network security; from there to security technology in general; then to the economics of security and the psychology of security; and now to -- I suppose -- the sociology of security. The more I try to understand how security works, the more of the world I need to encompass within my model.
When I started out writing this book, I thought I'd be talking a lot about the global financial crisis of 2008. It's an excellent example of group interest vs. self-interest, and how a small minority of parasites almost destroyed the planet's financial system. I even had a great quote by former Federal Reserve Chairman Alan Greenspan, where he admitted a "flaw" in his worldview. The exchange, which took place when he was being questioned by Congressman Henry Waxman at a 2008 Congressional hearing, was once the opening paragraphs of my book. I called the defectors "the dishonest minority," which was my original title.
That unifying example eventually faded into the background, to be replaced by a lot of separate examples. I talk about overfishing, childhood immunizations, paying taxes, voting, stealing, airplane security, gay marriage, and a whole lot of other things. I dumped the phrase "dishonest minority" entirely, partly because I didn't need it and partly because a vocal few early readers were reading it not as "the small percentage of us that are dishonest" but as "the minority group that is dishonest" -- not at all the meaning I was trying to convey.
I didn't even realize I was talking about trust until most of the way through. It was a couple of early readers who -- coincidentally, on the same day -- told me my book wasn't about security, it was about trust. More specifically, it was about how different societal pressures, security included, induce trust. This interplay between cooperators and defectors, trust and security, compliance and coercion, affects everything having to do with people.
In the book, I wander through a dizzying array of academic disciplines: experimental psychology, evolutionary psychology, sociology, economics, behavioral economics, evolutionary biology, neuroscience, game theory, systems dynamics, anthropology, archeology, history, political science, law, philosophy, theology, cognitive science, and computer security. It sometimes felt as if I were blundering through a university, kicking down doors and demanding answers. "You anthropologists: what can you tell me about early human transgressions and punishments?" "Okay neuroscientists, what's the brain chemistry of cooperation? And you evolutionary psychologists, how can you explain that?" "Hey philosophers, what have you got?" I downloaded thousands -- literally -- of academic papers. In pre-Internet days I would have had to move into an academic library.
What's really interesting to me is what this all means for the future. We've never been able to eliminate defections. No matter how much societal pressure we bring to bear, we can't bring the murder rate in society to zero. We'll never see the end of bad corporate behavior, or embezzlement, or rude people who make cell phone calls in movie theaters. That's fine, but it starts getting interesting when technology makes each individual defection more dangerous. That is, fishermen will survive even if a few of them defect and overfish -- until defectors can deploy driftnets and single-handedly collapse the fishing stock. The occasional terrorist with a machine gun isn't a problem for society in the overall scheme of things; but a terrorist with a nuclear weapon could be.
Also -- and this is the final kicker -- not all defectors are bad. If you think about the notions of cooperating and defecting, they're defined in terms of the societal norm. Cooperators are people who follow the formal or informal rules of society. Defectors are people who, for whatever reason, break the rules. That definition says nothing about the absolute morality of the society or its rules. When society is in the wrong, it's defectors who are in the vanguard for change. So it was defectors who helped escaped slaves in the antebellum American South. It's defectors who are agitating to overthrow repressive regimes in the Middle East. And it's defectors who are fueling the Occupy Wall Street movement. Without defectors, society stagnates.
We simultaneously need more societal pressure to deal with the effects of technology, and less societal pressure to ensure an open, free, and evolving society. This is our big challenge for the coming decade.
This essay originally appeared on John Scalzi's blog, Whatever.
Great movie-plot threat:
Financial institutions depend on timing that is accurate to the microsecond on a global scale so that stock exchanges in, say, London and New York are perfectly synchronised. One of the main ways of doing this is through GPS, and major financial institutions will have a GPS antenna on their main buildings. "They are always visible because they need a clear view of the sky," Humphreys told Wired.co.uk.
The ACLU filed a FOIA request for a bunch of cables that Wikileaks had already released complete versions of. This is what happened:
The agency released redacted versions of 11 and withheld the other 12 in full.
Click on the link to see what was redacted.
EDITED TO ADD (3/2): Commentary:
The Freedom of Information Act provides exceptions for a number of classes of information, but the State Department’s declassification decisions appear to be based not on the criteria specified in the statute, but rather on whether the documents embarrass the US or portray the US in a negative light.