Crypto-Gram

August 15, 2007

by Bruce Schneier
Founder and CTO
BT Counterpane
schneier@schneier.com
http://www.schneier.com
http://www.counterpane.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <http://www.schneier.com/crypto-gram-0708.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.


In this issue:
      Assurance
      More Voting News
      New Harry Potter Book Leaked on BitTorrent
      News
      Avian Flu and Disaster Planning
      TSA Warns of Terrorist Dry Runs
      Security-Theater Cameras Coming to New York
      Schneier/BT Counterpane News
      Airport Security Breach
      Details on the UK Liquid Terrorist Plot
      House of Lords on Computer Security
      Conversation with Kip Hawley

Assurance

Over the past several months, the state of California conducted the most comprehensive security review yet of electronic voting machines. People I consider to be security experts analyzed machines from three different manufacturers, performing both a red-team attack analysis and a detailed source code review. Serious flaws were discovered in all machines and, as a result, the machines were all decertified for use in California elections.

The reports are worth reading, as is much of the commentary on the topic. The reviewers were given an unrealistic timetable and had trouble getting needed documentation. The fact that major security vulnerabilities were found in all machines is a testament to how poorly they were designed, not to the thoroughness of the analysis. Yet California Secretary of State Debra Bowen has conditionally recertified the machines for use, as long as the makers fix the discovered vulnerabilities and adhere to a lengthy list of security requirements designed to limit future security breaches and failures.

While this is a good effort, it has security completely backward. It begins with a presumption of security: If there are no known vulnerabilities, the system must be secure. If there is a vulnerability, then once it’s fixed, the system is again secure. How anyone comes to this presumption is a mystery to me. Is there any version of any operating system anywhere where the last security bug was found and fixed? Is there a major piece of software anywhere that has been, and continues to be, vulnerability-free?

Yet again and again we react with surprise when a system has a vulnerability. Last weekend at the hacker convention DefCon, I saw new attacks against supervisory control and data acquisition (SCADA) systems—those are embedded control systems found in infrastructure systems like fuel pipelines and power transmission facilities—electronic badge-entry systems, MySpace, and the high-security locks used in places like the White House. I will guarantee you that the manufacturers of these systems all claimed they were secure, and that their customers believed them.

Earlier this month, the government disclosed that the computer system of the US-Visit border control system is full of security holes. Weaknesses existed in all control areas and computing device types reviewed, the report said. How exactly is this different from any large government database? I’m not surprised that the system is so insecure; I’m surprised that anyone is surprised.

We’ve been assured again and again that RFID passports are secure. When researcher Lukas Grunwald successfully cloned one last year at DefCon, industry experts told us there was little risk. This year, Grunwald revealed that he could use a cloned passport chip to sabotage passport readers. Government officials are again downplaying the significance of this result, although Grunwald speculates that this or another similar vulnerability could be used to take over passport readers and force them to accept fraudulent passports. Anyone care to guess who’s more likely to be right?

It’s all backward. Insecurity is the norm. If any system—whether a voting machine, operating system, database, badge-entry system, RFID passport system, etc.—is ever built completely vulnerability-free, it’ll be the first time in the history of mankind. It’s not a good bet.

Once you stop thinking about security backward, you immediately understand why the current software security paradigm of patching doesn’t make us any more secure. If vulnerabilities are so common, finding a few doesn’t materially reduce the quantity remaining. A system with 100 patched vulnerabilities isn’t more secure than a system with 10, nor is it less secure. A patched buffer overflow doesn’t mean that there’s one less way attackers can get into your system; it means that your design process was so lousy that it permitted buffer overflows, and there are probably thousands more lurking in your code.
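
To make that arithmetic concrete, here is a toy calculation; the counts are illustrative assumptions, not measurements from any real system:

    # Toy model: all numbers are assumed, purely for illustration.
    latent = 10000    # vulnerabilities shipped in a large system
    patched = 100     # vulnerabilities found and fixed so far

    remaining = latent - patched
    print("%.1f%% of the vulnerabilities remain" % (100.0 * remaining / latent))
    # Output: 99.0% of the vulnerabilities remain

Whatever the real numbers are, the ratio is the point: patching chips away at a population it barely dents.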

Diebold Election Systems has patched a certain vulnerability in its voting-machine software twice, and each patch contained another vulnerability. Don’t tell me it’s my job to find another vulnerability in the third patch; it’s Diebold’s job to convince me it has finally learned how to patch vulnerabilities properly.

Several years ago, former National Security Agency technical director Brian Snow began talking about the concept of “assurance” in security. Snow, who spent 35 years at the NSA building systems at security levels far higher than anything the commercial world deals with, told audiences that the agency couldn’t use modern commercial systems with their backward security thinking. Assurance was his antidote:

“Assurances are confidence-building activities demonstrating that:
“1. The system’s security policy is internally consistent and reflects the requirements of the organization,
“2. There are sufficient security functions to support the security policy,
“3. The system functions to meet a desired set of properties and *only* those properties,
“4. The functions are implemented correctly, and
“5. The assurances *hold up* through the manufacturing, delivery and life cycle of the system.”

Basically, demonstrate that your system is secure, because I’m just not going to believe you otherwise.

Assurance is less about developing new security techniques than about using the ones we have. It’s all the things described in books like “Building Secure Software,” “Software Security,” and “Writing Secure Code.” It’s some of what Microsoft is trying to do with its Security Development Lifecycle (SDL). It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it puts a piece of software in a critical role on an aircraft. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems; we just tend not to bother.

And most of the time, we don’t care. Commercial software, as insecure as it is, is good enough for most purposes. And while backward security is more expensive over the life cycle of the software, it’s cheaper where it counts: at the beginning. Most software companies are short-term smart to ignore the cost of never-ending patching, even though it’s long-term dumb.

Assurance is expensive, in terms of money and time for both the process and the documentation. But the NSA needs assurance for critical military systems; Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be common in IT contracts, not rare. It’s time we stopped thinking backward and pretending that computers are secure until proven otherwise.

California reports:
http://www.sos.ca.gov/elections/elections_vsr.htm

Commentary and blog posts:
http://www.freedom-to-tinker.com/?p=1181
http://blog.wired.com/27bstroke6/2007/07/…
https://www.schneier.com/blog/archives/2007/07/…
http://www.freedom-to-tinker.com/?p=1184
http://blog.wired.com/27bstroke6/2007/08/…
http://avi-rubin.blogspot.com/2007/08/…
http://www.crypto.com/blog/ca_voting_report/
http://twistedphysics.typepad.com/…
https://www.schneier.com/blog/archives/2007/08/…

California’s recertification requirements:
http://arstechnica.com/news.ars/post/…

DefCon reports:
http://www.defcon.org/
http://www.physorg.com/news105533409.html
http://blog.wired.com/27bstroke6/2007/08/…
http://www.newsfactor.com/news/…
http://blog.wired.com/27bstroke6/2007/08/…

US-VISIT database vulnerabilities:
http://www.washingtonpost.com/wp-dyn/content/…

RFID passport hacking:
http://www.engadget.com/2006/08/03/…
http://www.rfidjournal.com/article/articleview/2559/…
http://www.wired.com/politics/security/news/2007/08/…
http://money.cnn.com/2007/08/03/news/rfid/?…

How common are bugs:
http://www.rtfm.com/bugrate.pdf

Diebold patch:
https://www.schneier.com/blog/archives/2007/08/…

Brian Snow on assurance:
http://www.acsac.org/2005/papers/Snow.pdf

Books on secure software development:
http://www.amazon.com/…
http://www.amazon.com/…
http://www.amazon.com/…

Microsoft’s SDL:
http://www.microsoft.com/MSPress/books/8753.asp

DHS’s Build Security In program:
https://buildsecurityin.us-cert.gov/daisy/bsi/home.html

This essay originally appeared on Wired.com.
http://www.wired.com/politics/security/commentary/…


More Voting News

California Secretary of State Bowen’s certification decisions are online. She has totally decertified the ES&S Inkavote Plus system, used in L.A. County, because of ES&S noncompliance with the Top to Bottom Review. The Diebold and Sequoia systems have been decertified and conditionally recertified. The same was done with one Hart Intercivic system (system 6.2.1). (Certification of the Hart system 6.1 was voluntarily withdrawn.) To those who thought she was staging this review as security theater, this seems like evidence to the contrary: she wants to do the right thing.
http://www.sos.ca.gov/elections/elections_vsr.htm
http://www.nytimes.com/2007/08/05/us/05vote.html?…

Florida recently released another study of the Diebold voting machines. As with the California study, the reviewers were real security researchers, not posers. They studied v4.6.5 of the Diebold TSx and v1.96.8 of the Diebold Optical Scan. (California studied older versions: v4.6.4 of the TSx and v1.96.6 of the Optical Scan.)
http://www.sait.fsu.edu/news/2007-07-31.shtml
http://election.dos.state.fl.us/pdf/SAITreport.pdf
The most interesting issues are (1) Diebold’s apparent “find-then-patch” approach to computer security, and (2) Diebold’s lousy use of cryptography. More here:
https://www.schneier.com/blog/archives/2007/08/…

The UK Electoral Commission released a report on the 2007 e-voting and e-counting pilots. The results are none too good.
http://www.electoralcommission.org.uk/elections/…
http://www.lightbluetouchpaper.org/2007/08/02/…
And the Brennan Center released a report on post-election audits:
http://www.brennancenter.org/dynamic/subpages/…

My previous essays on electronic voting, from 2004:
http://www.schneier.com/crypto-gram-0411.html#1
http://www.schneier.com/crypto-gram-0411.html#2

My previous essay on electronic voting, from 2000:
http://www.schneier.com/crypto-gram-0012.html#1


New Harry Potter Book Leaked on BitTorrent

A week before publication, digital photographs of every page were available on BitTorrent.

I fielded a lot of press calls on this, mostly from reporters asking me what the publisher could have done differently. Honestly, I don’t think it was possible to keep the book under wraps. Millions of copies of the book were headed to all four corners of the globe. There were simply too many people who had to be trusted for the security to hold. And all it took was one untrustworthy person—one truck driver, one bookstore owner, one warehouse worker—to leak the book.

But conversely, I don’t think the publishers should care. Anyone fan-crazed enough to read digital photographs of the pages a few days before the real copy comes out is also someone who is going to buy a real copy. And anyone who will read the digital photographs *instead* of the real book would have borrowed a copy from a friend. My guess is that the publishers lost zero sales, and that the pre-release simply increased the press frenzy.

I’m kind of amazed the book hadn’t leaked sooner.

And that was just the first leak. Shortly thereafter, versions in Word, plain text, and formatted PDF were likewise available via BitTorrent.

http://machinist.salon.com/blog/2007/07/17/…
http://www.publicradio.org/columns/futuretense/

Some of the security measures the publisher took with the manuscript:
http://www.crn.com.au/story.aspx?…

The camera embedded its unique serial number in each of the digital photos, which might be used to track down the leaker. (A sketch of how such metadata is read follows the links below.)
http://arstechnica.com/news.ars/post/…
https://www.schneier.com/blog/archives/2006/03/…
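
For the curious, here is roughly how such metadata is recovered: a minimal sketch using the third-party Python package exifread. The filename is hypothetical, and which tags actually appear depends on the camera:

    import exifread  # pip install exifread

    # "page_photo.jpg" is a hypothetical leaked page image.
    with open("page_photo.jpg", "rb") as f:
        tags = exifread.process_file(f, details=False)

    # Serial numbers, when present, appear under manufacturer-dependent
    # tags; these are common candidates.
    for name in ("EXIF BodySerialNumber", "MakerNote SerialNumber",
                 "Image Make", "Image Model"):
        if name in tags:
            print(name, "=", tags[name])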


News

Interesting essay on security and return on investment (ROI):
http://taosecurity.blogspot.com/2007/07/…

In this article about a do-it-yourself anti-satellite missile system, this bit of reality only appears at the end: “While it may be true that, when it comes to nuts and bolts, things may not be quite as simple as they sound here, the bare fact remains—it can be done.”
http://www.spacewar.com/reports/…

Here’s a clip from an Australian TV programme called “The Chaser”. A Trojan Horse (full of appropriately attired soldiers) finds its way past security everywhere except the Turkish consulate. At least they remember their history.
http://www.youtube.com/watch?v=Xs3SfNANtig

Some common sense from Canada: passengers are once again allowed to say “bomb” in the airport.
http://www.reuters.com/article/oddlyEnoughNews/…

“Hut 33”: a new BBC Radio comedy about three codebreakers at Bletchley Park during World War II:
http://www.bbc.co.uk/radio4/comedy/hut33.shtml

In London, the system that was built to collect the congestion charge is now being used for counterterrorism. This sort of function creep happens all the time. I’ll bet you anything that, soon after this data is used for antiterrorism purposes, more exceptions are put in place for more routine police matters.
http://news.bbc.co.uk/1/hi/uk_politics/6902543.stm

Woman registers a dog to vote. No word about whether or not the dog would have actually been able to vote.
http://timesunion.com/AspStories/story.asp?…
http://www.freedom-to-tinker.com/?p=1172

Most computer security products deliberately do not detect police spyware.
http://news.com.com/…

A TSA screener doesn’t like the look of a homemade battery charger, and refuses to let it on an airplane. Interesting story, both for the escalation procedure the TSA screener followed and for the author’s observation. Basically, people with no expertise in what’s normal are being asked to determine what’s normal. Something that’s allowed one time might not be allowed the next. And that’s the problem: the TSA is both arbitrary and capricious, and it’s impossible to follow the rules because no one knows how they will be applied.
http://www.natch.net/stuff/TSA/

There are buildings in the DC area that you can’t photograph for security reasons, but you can’t get a list of those buildings—for security reasons. Very Kafkaesque.
http://blog.washingtonpost.com/rawfisher/2007/07/…
https://www.schneier.com/blog/archives/2007/07/…

An Enigma machine sold on eBay for $30K:
https://www.schneier.com/blog/archives/2007/07/…

U.S. drug enforcement agents use key loggers to bypass both PGP and Hushmail encryption. I’ve been saying this for a while: the easiest way to get at someone’s communications is not to intercept them in transit, but to access them on the sender’s or recipient’s computer.
http://www.boingboing.net/2007/07/13/…

I’ve written about forged credentials before, and how hard a problem it is to solve. Here’s another story illustrating the problem: an aide to ex-governor Mitt Romney created a phony law-enforcement badge.
http://news.bostonherald.com/politics/view.bg?…
Here’s the problem: When faced with a badge, most people assume it’s legitimate. And even if they wanted to verify the badge, there’s no real way for them to do so.
https://www.schneier.com/blog/archives/2006/01/…

Computer security people have been talking about ransomware for years, but only recently are we seeing it in the wild: software that encrypts your data, and then charges you for the decryption key.
http://arstechnica.com/news.ars/post/…

In 2006, there were 20,000 false alarms from the terrorist watch list. How do I know they were all false alarms? Because this administration makes a press splash with every arrest, no matter how scant the evidence is. Do you really think they would pass up a chance to tout how good the watch list is?
http://www.wired.com/politics/security/news/2007/07/…

Definitely read this. It’s by Dave Mackett, the president of the Airline Pilots Security Alliance, talking about airplane security and terrorism.
http://hotair.com/archives/2007/07/16/…

A really interesting essay on truth and photographs.
http://morris.blogs.nytimes.com/

Long and interesting article on fMRI lie detectors, including a discussion about why we’re so bad at detecting lies:
http://www.newyorker.com/reporting/2007/07/02/…

Intel security music video, directed by Christopher Guest. Hardware vs. software security: I can’t believe the actors kept a straight face while filming this.
http://www.youtube.com/watch?v=12Icxthmpis

Geek Squad computer repair technicians were accused of copying customer files. We all know it’s possible, but we assume that technicians don’t do it. In this case, they were just ogling photos and the like, but how much are these people paid, and how much could they make with a few good identity thefts?
http://blog.wired.com/gadgets/2007/04/…

Poodle identity theft: fake pedigrees.
http://news.bbc.co.uk/1/hi/wales/north_east/6914119.stm

Security analysis of a 13th century Venetian election protocol.
http://www.hpl.hp.com/techreports/2007/…
Venice was very clever about working to avoid the factionalism that tore apart a lot of its Italian rivals, while making the various factions feel represented.
Blog entry:
https://www.schneier.com/blog/archives/2007/07/…

See-through backpacks required at school. It’s a security measure, you see.
http://www.philly.com/philly/education/8655757.html
If you don’t like that, how about bulletproof backpacks for children?
http://www.thebostonchannel.com/news/13860078/…
That goes with the bulletproof textbooks:
https://www.schneier.com/blog/archives/2006/11/…

Funny article listing some movie-plot-threat presidential debate questions:
http://www.slate.com/id/2169275

Transporting a $1.9M rare coin is a perfect opportunity for effective security by obscurity.
https://www.schneier.com/blog/archives/2007/07/…

A year ago, I wrote about a bank hack at the center of a French national scandal. Well, the case has taken an interesting turn. Law enforcement experts managed to retrieve incriminating evidence from the hard disk of General Rondot, a senior intelligence officer, after about a year of work. Wouldn’t we all like to know the technical details of both the data-shredding and forensic technologies?
http://www.economist.com/world/europe/…
http://www.theage.com.au/news/world/…
https://www.schneier.com/blog/archives/2006/07/…

Movie-plot threats in Second Life. This idiotic story has taken on a life of its own: terrorists training in virtual worlds like Second Life.
http://www.news.com.au/story/0,23599,22163811-2,00.html
http://www.techcrunch.com/2007/07/30/…
http://www.theaustralian.news.com.au/story/…
Do we all need to take our shoes off before logging in now?

Earlier this month, I wrote about a library of people’s smells kept by the former East German police. It seems that the current German police are still doing it.
http://www.spiegel.de/international/germany/…
https://www.schneier.com/blog/archives/2007/07/…

A recent article talked about a security hole at the Phoenix Airport: an hours-long window when any airport employee can bring anything into the airport, without any screening at all. I’m not impressed. On the one hand, it’s a big security hole that not everyone knew was there. On the other hand, airport employees are allowed to bring stuff in and out of airports without screening all the time. So yes, the airports aren’t secure—but they never have been, so what’s the big deal? The real issue here is that people don’t understand that an airport is a complex system and that securing it means more than passenger screening.
http://www.abc15.com/news/local/story.aspx?…

For a few months, German police tested a face recognition system. Two hundred frequent travelers volunteered to have their faces recorded, and three different systems tried to recognize the faces in the crowds of a train station. Results: 60% recognition at best, 30% on average (depending on light and other factors). I’m not impressed.
http://www.n24.de/politik/article.php?articleId=133703

How do you get a password out of an IRS agent? Just ask:
http://www.msnbc.msn.com/id/20108530/

Another biometric: vein patterns.
http://www.heise-security.co.uk/articles/93233
I don’t know about the details of the technology, but the discussions of false positives, false negatives, and forgeability are the right ones to have; the toy calculation below shows how quickly false positives can swamp true ones. Remember, though, that while biometrics are an effective security technology, they’re not a panacea.
http://www.schneier.com/…
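
A quick base-rate sketch in Python. Every number here is an assumption chosen for illustration, not a measurement of any vein-pattern product:

    fpr = 0.001            # assumed: 1 in 1,000 legitimate users wrongly flagged
    tpr = 0.99             # assumed: 99% of impostors correctly flagged
    impostor_rate = 1e-6   # assumed: one impostor per million attempts

    # Bayes' theorem: probability that a flagged user is really an impostor.
    p_flag = tpr * impostor_rate + fpr * (1 - impostor_rate)
    print("P(impostor | flagged) = %.3f%%" % (100 * tpr * impostor_rate / p_flag))
    # Output: about 0.098% -- nearly every alarm is a false alarm.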

Gun-shaped laptop battery:
http://www.forta.com/blog/index.cfm/2007/7/18/…

Steven D. Levitt, a blogger for The New York Times, is having a movie-plot threat contest. Far more interesting than the suggested attacks are the commenters who accuse him of helping the terrorists. Not that I’m surprised; there were people who accused me of helping the terrorists.
http://freakonomics.blogs.nytimes.com/2007/08/08/…
http://freakonomics.blogs.nytimes.com/2007/08/09/…
My contests:
https://www.schneier.com/blog/archives/2006/04/…
https://www.schneier.com/blog/archives/2007/04/…
It’s one thing for this to happen on my blog; it’s another for it to happen on a mainstream blog on The New York Times website.

Two weeks ago Congress gave President Bush new wiretapping powers. I was going to write an essay on the security implications of this, but Susan Landau beat me to it. This op-ed is a must-read.
http://www.washingtonpost.com/wp-dyn/content/…
And here’s more about the Greek wiretapping scandal:
https://www.schneier.com/blog/archives/2007/07/…
And I would be remiss if I didn’t mention the excellent book by Whitfield Diffie and Susan Landau on the subject: “Privacy on the Line: The Politics of Wiretapping and Encryption.”
http://www.amazon.com/…

It’s nice to find an example of the police using data mining correctly: not as security theater, but more as a business-intelligence tool.
http://www.cbc.ca/news/background/tech/data-mining.html

Security Problem Excuse Bingo. Very funny:
http://www.crypto.com/bingo/pr
http://www.crypto.com/blog/bingo/

This is a good article about the use of paid informants in Muslim communities, and how they are both creating potential terrorists where none existed before and sowing mistrust among people.
http://www.infocusnews.net/content/view/15942/135/

Fascinating “New Scientist” article on conspiracy theories and why we believe them. Lots of good stuff in the article, including instructions on how to create your own conspiracy theory.
http://www.newscientist.com/channel/being-human/…
http://www.therazor.org/?p=855

Two interesting phishing studies:
http://arstechnica.com/news.ars/post/…
http://www.webwereld.nl/articles/47539/…

Interesting article on security-aware consumer items. I especially liked the chair design with a place to hang a purse. Seems like a better idea than the “Chelsea clip.”
http://news.bbc.co.uk/1/hi/uk/6940485.stm
http://www.selectamark.co.uk/product_chelseaclip.html

Nice article on security theater:
http://www.govexec.com/features/0807-01/0807-01s3.htm

How to escape from plastic police handcuffs:
http://www.metacafe.com/watch/545672/…

How to make a taser out of a cheap camera:
http://www.techeblog.com/index.php/tech-gadget/…


Avian Flu and Disaster Planning

If an avian flu pandemic broke out tomorrow, would your company be ready for it?

“Computerworld” published a series of articles on that question last year, prompted by a presentation that analyst firm Gartner gave at a conference last November. Among Gartner’s recommendations: “Store 42 gallons of water per data center employee—enough for a six-week quarantine—and don’t forget about food, medical care, cooking facilities, sanitation and electricity.”

And Gartner’s conclusion, over half a year later: Pretty much no organizations are ready.

This doesn’t surprise me at all. It’s not that organizations don’t spend enough effort on disaster planning, although that’s true; it’s that this really isn’t the sort of disaster worth planning for.

Disaster planning is critically important for individuals, families, organizations large and small, and governments. For the individual, it can be as simple as spending a few minutes thinking about how he or she would respond to a disaster. For example, I’ve spent a lot of time thinking about what I would do if I lost the use of my computer, whether by equipment failure, theft or government seizure. As a result, I have a pretty complex backup and encryption system, ensuring that 1) I’d still have access to my data, and 2) no one else would. On the other hand, I haven’t given any serious thought to family disaster planning, although others have.
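
As a sketch of the data half of that kind of plan, here is a minimal encrypt-then-copy backup in Python, using the third-party "cryptography" package. The paths are hypothetical, and this illustrates the idea only; it is not my actual setup:

    import shutil
    import tarfile
    from cryptography.fernet import Fernet  # pip install cryptography

    # Generate a key once; store it somewhere safe, separate from the backup.
    key = Fernet.generate_key()
    with open("backup.key", "wb") as f:
        f.write(key)

    # Bundle the data, then encrypt the archive before it leaves the machine.
    with tarfile.open("data.tar", "w") as tar:
        tar.add("documents")  # hypothetical directory to protect

    with open("data.tar", "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open("data.tar.enc", "wb") as f:
        f.write(ciphertext)

    shutil.copy("data.tar.enc", "/mnt/offsite/")  # hypothetical offsite copy

Goal 1 is met by the offsite copy; goal 2 is met because that copy is useless without the key.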

For an organization, disaster planning can be much more complex. What would it do in the case of fire, flood, earthquake, and so on? How would its business survive? The resultant disaster plan might include backup data centers, temporary staffing contracts, planned degradation of services, and a host of other products and services—and consultants to tell you how to use it all.

And anyone who does this kind of thing knows that planning isn’t enough: Testing your disaster plan is critical. Far too often the backup software fails when it has to do an actual restore, or the diesel-powered emergency generator fails to kick in. That’s also the flaw with the emergency kit links given below; if you don’t know how to use a compass or first-aid kit, having one in your car won’t do you much good.

But testing isn’t just valuable because it reveals practical problems with a plan. It also has enormous ancillary benefits for your organization in terms of communication and team building. There’s nothing like a good crisis to get people to rely on each other. Sometimes I think companies should forget about those team-building exercises that involve climbing trees and building fires, and instead pretend that a flood has taken out the primary data center.

It really doesn’t matter what disaster scenario you’re testing. The real disaster won’t be like the test, regardless of what you do, so just pick one and go. Whether you’re an individual trying to recover from a simulated virus attack, or an organization testing its response to a hypothetical shooter in the building, you’ll learn a lot about yourselves and your organization, as well as your plan.

There is a sweet spot, though, in disaster preparedness. Some disasters are too small or too common to worry about. (“We’re out of paper clips!? Call the Crisis Response Team together. I’ll get the Paper Clip Shortage Readiness Program Directive Manual Plan.”) And others are too large or too rare.

It makes no sense to plan for total annihilation of the continent, whether by nuclear or meteor strike: that’s obvious. But depending on the size of the planner, many other disasters are also too large to plan for. People can stockpile food and water to prepare for a hurricane that knocks out services for a few days, but not for a Katrina-like flood that knocks out services for months. Organizations can prepare for losing a data center due to a flood, fire, or hurricane, but not for a Black-Death-scale epidemic that would wipe out a third of the population. No one can fault bond trading firm Cantor Fitzgerald, which lost two thirds of its employees in the 9/11 attack on the World Trade Center, for not having a plan in place to deal with that possibility.

Another consideration is scope. If your corporate headquarters burns down, it’s actually a bigger problem for you than a citywide disaster that does much more damage. If the whole San Francisco Bay Area were taken out by an earthquake, customers of affected companies would be far more likely to forgive lapses in service, or would go the extra mile to help out. Think of the nationwide response to 9/11; the human “just deal with it” social structures kicked in, and we all muddled through.

In general, you can only reasonably prepare for disasters that leave your world largely intact. If a third of the country’s population dies, it’s a different world. The economy is different, the laws are different—the world is different. You simply can’t plan for it; there’s no way you can know enough about what the new world will look like. Disaster planning only makes sense within the context of existing society.

What all of this means is that any bird flu pandemic will very likely fall outside the corporate disaster-planning sweet spot. We’re just guessing about its infectiousness, of course, but despite the alarmism of two and three years ago, the likely scenarios are either moderate to severe absenteeism because people stay home for a few weeks—any organization ought to be able to deal with that—or a major disaster of proportions that dwarf the concerns of any organization. There’s not much in between.

Honestly, if you think you’re heading toward a world where you need to stash six weeks’ worth of food and water in your company’s closets, do you really believe that it will be enough to see you through to the other side?

A blogger commented on what I said in one article: “Schneier is using what I would call the nuclear war argument for doing nothing. If there’s a nuclear war nothing will be left anyway, so why waste your time stockpiling food or building fallout shelters? It’s entirely out of your control. It’s someone else’s responsibility. Don’t worry about it.”

Almost. Bird flu, pandemics, and disasters in general—whether man-made like 9/11, natural like bird flu, or a combination like Katrina—are definitely things we should worry about. The proper place for bird flu planning is at the government level. (These are also the people who should worry about nuclear and meteor strikes.) But real disasters don’t exactly match our plans, and we are best served by a bunch of generic disaster plans and a smart, flexible organization that can deal with anything.

The key is preparedness. Much more important than planning, preparedness is about setting up social structures so that people fall into doing something sensible when things go wrong. Think of all the wasted effort—and even more wasted *desire*—to do something after Katrina because there was no way for most people to help. Preparedness is about getting people to react when there’s a crisis. It’s something the military trains its soldiers for.

This advice holds true for organizations, families, and individuals as well. And remember, despite what you read about nuclear accidents, suicide terrorism, genetically engineered viruses, and mutant man-eating badgers, you live in the safest society in the history of mankind.

http://www.computerworld.com/action/article.do?…
http://www.computerworld.com/action/article.do?…
http://www.computerworld.com/action/article.do?…
http://www.computerworld.com/blogs/node/5854

Family disaster planning:
http://nielsenhayden.com/makinglight/archives/…
http://nielsenhayden.com/makinglight/archives/…
http://www.sff.net/people/doylemacdonald/emerg_kit.htm

Disaster Recovery Journal:
http://www.drj.com/

Bird flu:
http://www.cdc.gov/flu/avian/
http://infectiousdiseases.about.com/od/faqs/f/…
http://www.msnbc.msn.com/id/6861065/
http://news.bbc.co.uk/2/hi/health/4295649.stm
http://www.cnn.com/2004/HEALTH/11/25/…

Blogger comments:
http://www.computerworld.com/blogs/node/5854

Man-eating badgers:
http://news.bbc.co.uk/1/hi/world/middle_east/…

A good rebuttal to this essay:
http://www.computerweekly.com/blogs/david_lacey/…

This essay originally appeared on Wired.com:
http://www.wired.com/print/politics/security/…


TSA Warns of Terrorist Dry Runs

A leaked TSA memo warns screeners to be on the lookout for terrorists staging dry runs through airport security. (The TSA issued a short statement following the leak.)

Honestly, the four incidents described, with photos, sure sound suspicious to me:

“San Diego, July 7. A U.S. person—either a citizen or a foreigner legally here—checked baggage containing two ice packs covered in duct tape. The ice packs had clay inside them rather than the normal blue gel.

“Milwaukee, June 4. A U.S. person’s carryon baggage contained wire coil wrapped around a possible initiator, an electrical switch, batteries, three tubes and two blocks of cheese. The bulletin said block cheese has a consistency similar to some explosives.

“Houston, Nov. 8, 2006. A U.S. person’s checked baggage contained a plastic bag with a 9-volt battery, wires, a block of brown clay-like minerals and pipes.

“Baltimore, Sept. 16, 2006. A couple’s checked baggage contained a plastic bag with a block of processed cheese taped to another plastic bag holding a cellular phone charger.”

The cheese and clay are stand-ins for plastic explosive. And honestly, I don’t care if someone is carrying a water bottle, wearing a head scarf, or buying a one-way ticket, but if someone has a block of cheese with wires and a detonator—I want the FBI to be called in.

Note that profiling didn’t seem to help here. Three of the incidents involved U.S. persons, and one is unspecified. Also, according to the report: “Individuals involved in these incidents were of varying gender, and initial investigations do not link them with criminal or terrorist organizations. However, most passengers’ explanations for carrying the suspicious items were questionable, and some investigations are still ongoing.”

I wish I had more information on what the “questionable” explanations were; either these people are innocent or they should be investigated pretty heavily. Later news reports said that the San Diego incident was bogus, and maybe all four were.

I’m skeptical. I can’t think of a valid explanation for “wire coil wrapped around a possible initiator, an electrical switch, batteries, three tubes and two blocks of cheese.” I’d like to know what it was.

Flagging suspicious items is what the TSA is supposed to do. Unfortunately, “suspicious” is a subjective term, and problems arise when screeners aren’t competent enough to distinguish between “potentially dangerous” and “just plain strange.” If bulletins like these are accompanied by real training, then we’re getting some actual security out of the TSA.

http://msnbcmedia.msn.com/i/msnbc/sections/NEWS/…
http://www.tsa.gov/press/happenings/…
http://www.usatoday.com/news/washington/…
http://www.signonsandiego.com/news/metro/…
http://rawstory.com/news/2007/…


Security-Theater Cameras Coming to New York

In this otherwise lopsided article about security cameras, this one quote stands out:

“But Steve Swain, who served for years with the London Metropolitan Police and its counter-terror operations, doubts the power of cameras to deter crime.

“‘I don’t know of a single incident where CCTV has actually been used to spot, apprehend or detain offenders in the act,’ he said, referring to the London system. Swain now works for Control Risk, an international security firm.

“Asked about their role in possibly stopping acts of terror, he said pointedly: ‘The presence of CCTV is irrelevant for those who want to sacrifice their lives to carry out a terrorist act.’”

And:

“Swain does believe the cameras have great value in investigation work. He also said they are necessary to reassure the public that law enforcement is being aggressive.

“‘You need to do this piece of theater so that if the terrorists are looking at you, they can see that you’ve got some measures in place,’ he said.”

Did you get that? Swain doesn’t believe that cameras deter crime, but he wants cities to spend millions on them so that the terrorists “can see that you’ve got some measures in place.”

Anyone have any idea why we’re better off doing this than other things that may actually deter crime and terrorism?

http://www.cnn.com/2007/TECH/08/01/nyc.surveillance/…


Schneier/BT Counterpane News

Schneier was interviewed for The Command Line podcast:
http://thecommandline.net/2007/08/01/bruce_schneier/

“BT Counterpane Securing your Network” video:
http://www.youtube.com/watch?v=IwiPH_s0x3M

“Schneier on Security” video:
http://www.youtube.com/watch?v=IoXoHlI86rQ


Airport Security Breach

One of the problems with airport security checkpoints is that the system is a single point of failure. If someone slips through, the only way to regain security is for the entire airport to be emptied and everyone searched again. This happens rarely, but when it does, it can close an airport for hours.

It happened last week at the Charlotte airport.

One sentence in the news report struck me: “Passengers on another 15 planes that took off after the breach will have to go through screening again when they reach their destinations, the TSA said.”

It’s understandable why the TSA would want to screen everybody once someone evades security: that person could give his contraband to someone else. And since the entire airport system is a single secure area—once you go through security at one airport, you are considered to be inside security at all airports—it makes sense for those passengers to be screened if they’re changing planes.

But it must feel weird to have to go through screening after flying, before being able to leave the airport.

http://edition.cnn.com/2007/US/08/10/charlotte.airport/


Details on the UK Liquid Terrorist Plot

U.S. Homeland Security Secretary Michael Chertoff is releasing details about last summer’s liquid-bomb plot:

“The components of that explosives mixture can be bought at any drugstore or supermarket; however, there is some question whether the potential terrorists would have had the skill to properly mix and detonate their explosive cocktails in-flight.

“But they can work—scientists at Sandia National Laboratory conducted a test using the formula, and when a small amount of liquid in a container was hit with a tiny burst of electrical current, a large explosion followed. (Click on the video player on the right side of this page to view the video.)

“The test results were reviewed today by ABC terrorism consultant Richard Clarke, who said that while frequent travelers are upset by the current limits on liquids in carry-on baggage, ‘when they see this film, they ought to know it’s worth going through those problems.'”

There has been a lot of speculation since last year about the plausibility of the plot, with most chemists falling on the “unrealistic” side.

I’m still skeptical, especially because the liquid ban doesn’t actually ban liquids. If they’re so dangerous, why can anyone take 12 ounces of any liquid on any plane at any time? That’s the real question, which TSA Administrator Kip Hawley deftly didn’t answer in my conversation with him—see below. (I brought it on a plane again yesterday: an opaque 12-ounce bottle labeled “saline,” emptied and filled with another liquid, and then resealed. I held it up to the TSA official and made sure it was okay. It was.)

Another quote from the same article:

“One official who briefed ABC News said explosives and security experts who examined the plot were ‘stunned at the extent that the suspects had gamed the system to exploit its weaknesses.’

“‘There’s no question that they had given a lot of thought to how they might smuggle containers with liquid explosives onto airplanes,’ Chertoff said. ‘Without getting into things that are still classified, they obviously paid attention to the ways in which they thought they might be able to disguise these explosives as very innocent types of everyday articles.'”

Well, yeah. That’s the game you’re stuck playing. From my conversation with Hawley (that’s me talking):

“But you’re playing a game you can’t win. You ban guns and bombs, so the terrorists use box cutters. You ban small blades and knitting needles, and they hide explosives in their shoes. You screen shoes, so they invent a liquid explosive. You restrict liquids, and they’re going to do something else. The terrorists are going to look at what you’re confiscating, and they’re going to design a plot to bypass your security.”

Stop focusing on the tactics; focus on the broad threats.

http://abcnews.go.com/WN/story?id=3451976&page=1

Previous speculation:
https://www.schneier.com/blog/archives/2006/08/…


House of Lords on Computer Security

The Science and Technology Committee of the UK House of Lords has issued a report on “Personal Internet Security.” It’s 121 pages long. Richard Clayton, who helped the committee, has a good summary of the report on his blog. Among other things, the Lords recommend various consumer notification standards, a data-breach disclosure law, and a liability regime for software.

The Register writes that the report recommends the UK government:

“Increase the resources and skills available to the police and criminal justice system to catch and prosecute e-criminals.
“Establish a centralised and automated system, administered by law enforcement, for the reporting of e-crime.
“Provide incentives to banks and other companies trading online to improve the data security by establishing a data security breach notification law.
“Improve standards of new software and hardware by moving towards legal liability for damage resulting from security flaws.
“Encourage Internet Service Providers to improve the security offered to customers by establishing a “kite mark” for internet services.”

If that sounds like a lot of the things I’ve been saying for years, there’s a reason for that. Earlier this year, I testified before the committee, where I recommended some of these things. (Sadly, I didn’t get to wear a powdered wig.)

Report:
http://www.publications.parliament.uk/pa/ld200607/…
http://www.publications.parliament.uk/pa/ld200607/…
Summaries:
http://www.lightbluetouchpaper.org/2007/08/10/…
http://www.theregister.com/2007/08/10/…

Transcript of my testimony:
http://www.publications.parliament.uk/pa/ld/…

The entire body of evidence:
http://www.publications.parliament.uk/pa/ld200607/…
http://www.publications.parliament.uk/pa/ld200607/…
I don’t recommend reading it; it’s absolutely huge, and a lot of it is corporate drivel.


Conversation with Kip Hawley

In April, Kip Hawley, the head of the Transportation Security Administration (TSA), invited me to Washington for a meeting. Despite some serious trepidation, I accepted. And it was a good meeting. Most of it was off the record, but he asked me how the TSA could overcome its negative image. I told him to be more transparent, and stop ducking the hard questions. He said that he wanted to do that. He did enjoy writing a guest blog post for “Aviation Daily,” but having a blog himself didn’t work within the bureaucracy. What else could he do?

This interview, conducted in May and June via e-mail, was one of my suggestions.

Bruce Schneier: By today’s rules, I can carry on liquids in quantities of three ounces or less, unless they’re in larger bottles. But I can carry on multiple three-ounce bottles. Or a single larger bottle with a non-prescription medicine label, like contact lens fluid. It all has to fit inside a one-quart plastic bag, except for that large bottle of contact lens fluid. And if you confiscate my liquids, you’re going to toss them into a large pile right next to the screening station—which you would never do if anyone thought they were actually dangerous.

Can you please convince me there’s not an Office for Annoying Air Travelers making this sort of stuff up?

Kip Hawley: Screening ideas are indeed thought up by the Office for Annoying Air Travelers and vetted through the Directorate for Confusion and Complexity, and then we review them to insure that there are sufficient unintended irritating consequences so that the blogosphere is constantly fueled. Imagine for a moment that TSA people are somewhat bright, and motivated to protect the public with the least intrusion into their lives, not to mention travel themselves. How might you engineer backwards from that premise to get to three ounces and a baggie?

We faced a different kind of liquid explosive, one that was engineered to evade then-existing technology and process. Not the old Bojinka formula or other well-understood ones—TSA already trains and tests on those. After August 10, we began testing different variants with the national labs, among others, and engaged with other countries that have sophisticated explosives capabilities to find out what is necessary to reliably bring down a plane.

We started with the premise that we should prohibit only what’s needed from a security perspective. Otherwise, we would have stuck with a total liquid ban. But we learned through testing that no matter what someone brought on, if it was in a small enough container, it wasn’t a serious threat. So what would the justification be for prohibiting lip gloss, nasal spray, etc.? There was none, other than for our own convenience and the sake of a simple explanation.

Based on the scientific findings and a don’t-intrude-unless-needed-for-security philosophy, we came up with a container size that eliminates an assembled bomb (without having to determine what exactly is inside the bottle labeled “shampoo”), limits the total liquid any one person can bring (without requiring Transportation Security Officers (TSOs) to count individual bottles), and allows for additional security measures relating to multiple people mixing a bomb post-checkpoint. Three ounces and a baggie in the bin gives us a way for people to safely bring on limited quantities of liquids, aerosols and gels.

BS: How will this foil a plot, given that there are no consequences to trying? Airplane contraband falls into two broad categories: stuff you get in trouble for trying to smuggle onboard, and stuff that just gets taken away from you. If I’m caught at a security checkpoint with a gun or a bomb, you’re going to call the police and really ruin my day. But if I have a large bottle of that liquid explosive, you confiscate it with a smile and let me through. So unless you’re 100% perfect in catching this stuff—which you’re not—I can just try again and again until I get it through.

This isn’t like contaminants in food, where if you remove 90% of the particles, you’re 90% safer. None of those false alarms—none of those innocuous liquids taken away from innocent travelers—improve security. We’re only safer if you catch the one explosive liquid amongst the millions of containers of water, shampoo, and toothpaste. I have described two ways to get large amounts of liquids onto airplanes—large bottles labeled “saline solution” and trying until the screeners miss the liquid—not to mention combining multiple little bottles of liquid into one big bottle after the security checkpoint.

I want to assume the TSA is both intelligent and motivated to protect us. I’m taking your word for it that there is an actual threat—lots of chemists disagree—but your liquid ban isn’t mitigating it. Instead, I have the sinking feeling that you’re defending us against a terrorist smart enough to develop his own liquid explosive, yet too stupid to read the rules on TSA’s own website.

KH: I think your premise is wrong. There are consequences to coming to an airport with a bomb and having some of the materials taken away at the checkpoint. Putting aside our layers of security for the moment, there are things you can do to get a TSO’s attention at the checkpoint. If a TSO finds you or the contents of your bag suspicious, you might get interviewed and/or have your bags more closely examined. If the TSO throws your liquids in the trash, they don’t find you a threat.

I often read blog posts about how someone could just take all their three-ounce bottles—or take bottles from others on the plane—and combine them into a larger container to make a bomb. I can’t get into the specifics, but our explosives research shows this is not a viable option.

The current system is not the best we’ll ever come up with. In the near future, we’ll come up with an automated system to take care of liquids, and everyone will be happier.

In the meantime, we have begun using hand-held devices that can recognize threat liquids through factory-sealed containers (we will increase their number through the rest of the year) and we have different test strips that are effective when a bottle is opened. Right now, we’re using them on exempt items like medicines, as well as undeclared liquids TSOs find in bags. This will help close the vulnerability and strengthen the deterrent.

BS: People regularly point to security checkpoints missing a knife in their handbag as evidence that security screening isn’t working. But that’s wrong. Complete effectiveness is not the goal; the checkpoints just have to be effective enough so that the terrorists are worried their plan will be uncovered. But in Denver earlier this year, testers sneaked 90% of weapons through. And other tests aren’t much better. Why are these numbers so poor, and why didn’t they get better when the TSA took over airport security?

KH: Your first point is dead on and is the key to how we look at security. The stories about 90% failures are wrong or extremely misleading. We do many kinds of effectiveness tests at checkpoints daily. We use them to guide training and decisions on technology and operating procedures. We also do extensive and very sophisticated Red Team testing, and one of their jobs is to observe checkpoints and go back and figure out—based on inside knowledge of what we do—ways to beat the system. They isolate one particular thing: for example, a particular explosive, made and placed in a way that exploits a particular weakness in technology; our procedures; or the way TSOs do things in practice. Then they will test that particular thing over and over until they identify what corrective action is needed. We then change technology or procedure, or plain old focus on execution. And we repeat the process—forever.

So without getting into specifics on the test results, of course there are times that our evaluations can generate high failure rate numbers on specific scenarios. Overall, though, our ability to detect bomb components is vastly improved and it will keep getting better. (Older scores you may have seen may be “feel good” numbers based on old, easy tests. Don’t go for the sound-bite; today’s TSOs are light-years ahead of even where they were two years ago.)

BS: I hope you’re telling the truth; screening is a difficult problem, and it’s hard to discount all of those published tests and reports. But a lot of the security around these checkpoints is about perception—we want potential terrorists to think there’s a significant chance they won’t get through the checkpoints—so you’re better off maintaining that the screeners are better than reports indicate, even if they’re not.

Backscatter X-ray is another technology that is causing privacy concerns, since it basically allows you to see people naked. Can you explain the benefits of the technology, and what you are doing to protect privacy? Although the machines can distort the images, we know that they can store raw, unfiltered images; the manufacturer Rapiscan is quite proud of the fact. Are the machines you’re using routinely storing images? Can they store images at the screener’s discretion, or is that capability turned off at installation?

KH: We’re still evaluating backscatter and are in the process of running millimeter wave portals right alongside backscatter to compare their effectiveness and the privacy issues. We do not now store images for the test phase (function disabled), and although we haven’t officially resolved the issue, I fully understand the privacy argument and don’t assume that we will store them if and when they’re widely deployed.

BS: When can we keep our shoes on?

KH: Any time after you clear security. Sorry, Bruce, I don’t like it either, but this is not just something leftover from 2002. It is a real, current concern. We’re looking at shoe scanners and ways of using millimeter wave and/or backscatter to get there, but until the technology catches up to the risk, the shoes have to go in the bin.

BS: This feels so much like “cover your ass” security: you’re screening our shoes because everyone knows Richard Reid hid explosives in them, and you’ll be raked over the coals if that particular plot ever happens again. But there are literally thousands of possible plots.

So when does it end? The terrorists invented a particular tactic, and you’re defending against it. But you’re playing a game you can’t win. You ban guns and bombs, so the terrorists use box cutters. You ban small blades and knitting needles, and they hide explosives in their shoes. You screen shoes, so they invent a liquid explosive. You restrict liquids, and they’re going to do something else. The terrorists are going to look at what you’re confiscating, and they’re going to design a plot to bypass your security.

That’s the real lesson of the liquid bombers. Assuming you’re right and the explosive was real, it was an explosive that none of the security measures at the time would have detected. So why play this slow game of whittling down what people can bring onto airplanes? When do you say: “Enough. It’s not about the details of the tactic; it’s about the broad threat”?

KH: In late 2005, I made a big deal about focusing on Improvised Explosives Devices (IEDs) and not chasing all the things that could be used as weapons. Until the liquids plot this summer, we were defending our decision to let scissors and small tools back on planes and trying to add layers like behavior detection and document checking, so it is ironic that you ask this question—I am in vehement agreement with your premise. We’d rather focus on things that can do catastrophic harm (bombs!) and add layers to get people with hostile intent to highlight themselves. We have a responsibility, though, to address known continued active attack methods like shoes and liquids and, unfortunately, have to use our somewhat clunky process for now.

BS: You don’t have a responsibility to screen shoes; you have one to protect air travel from terrorism to the best of your ability. You’re picking and choosing. We know the Chechen terrorists who downed two Russian planes in 2004 got through security partly because different people carried the explosive and the detonator. Why doesn’t this count as a continued, active attack method?

I don’t want to even think about how much C4 I can strap to my legs and walk through your magnetometers. Or search the Internet for “BeerBelly.” It’s a device you can strap to your chest to smuggle beer into stadiums, but you can also use it to smuggle 40 ounces of dangerous liquid explosive onto planes. The magnetometer won’t detect it. Your secondary screening wandings won’t detect it. Why aren’t you making us all take our shirts off? Will you have to find a printout of the webpage in some terrorist safe house? Or will someone actually have to try it? If that doesn’t bother you, search the Internet for “cell phone gun.”

It’s “cover your ass” security. If someone tries to blow up a plane with a shoe or a liquid, you’ll take a lot of blame for not catching it. But if someone uses any of these other, equally known, attack methods, you’ll be blamed less because they’re less public.

KH: Dead wrong! Our security strategy assumes an adaptive terrorist, and that looking backwards is not a reliable predictor of the next type of attack. Yes, we screen for shoe bombs and liquids, because it would be stupid not to directly address attack methods that we believe to be active. Overall, we are getting away from trying to predict what the object looks like and looking more for the other markers of a terrorist. (Don’t forget, we see two million people a day, so we know what normal looks like.) What he/she does; the way they behave. That way we don’t put all our eggs in the basket of catching them in the act. We can’t give them free rein to surveil or do dry-runs; we need to put up obstacles for them at every turn. Working backwards, what do you need to do to be successful in an attack? Find the decision points that show the difference between normal action and action needed for an attack. Our odds are better with this approach than by trying to take away methods, annoying object by annoying object. Bruce, as for blame, that’s nothing compared to what all of us would carry inside if we failed to prevent an attack.

BS: Let’s talk about ID checks. I’ve called the no-fly list a list of people so dangerous they cannot be allowed to fly under any circumstance, yet so innocent we can’t arrest them even under the Patriot Act. Except that’s not even true; anyone, no matter how dangerous they are, can fly without an ID—or by using someone else’s boarding pass. And the list itself is filled with people who shouldn’t be on it—dead people, people in jail, and so on—and primarily catches innocents with similar names. Why are you bothering?

KH: Because it works. We just completed a scrub of every name on the no-fly list and cut it in half—essentially cleaning out people who were no longer an active terror threat. We do not publicize how often the no-fly system stops people you would not want on your flight. Several times a week would low-ball it.

Your point about the no-ID and false boarding pass people is a great one. We are moving people who have the tools and training onto that problem. The bigger issue is that TSA is moving in the direction of security that picks up on behavior versus just keying on what we see in your bag. It really would be security theater if all we did was try to find possible weapons in that crunched fifteen seconds and fifteen feet after you anonymously walk through the magnetometer. We do a better job, with less aggravation of ordinary passengers, if we put people-based layers further ahead in the process—behavior observation based on involuntary, observable muscle movements, canine teams, document verification, etc.

BS: We’ll talk about behavioral profiling later; no fair defending one security measure by pointing to another, completely separate, one. How can you claim ID checks work? Like the liquid ban, all they do is annoy innocent travelers while doing no more than inconveniencing any future terrorists. Is it really good enough for you to defend me from terrorists too dumb to Google “print your own boarding pass”?

KH: We are getting at the fake boarding pass and ID issues with our proposal to Congress that would allow us to replace existing document checkers with more highly trained people, equipped with tools that would close those gaps. Without effective identity verification, watch lists don’t do much, so this is a top priority.

Having highly trained TSOs performing the document checking function closes a security gap, adds another security layer, and pushes TSA’s security program out in front of the checkpoint.

BS: Let’s move on. Air travelers think you’re capricious. Remember in April when the story went around about the Princeton professor being on a no-fly list because he spoke out against President Bush? His claims were easily debunked, but the real story is that so many people believed it. People believe political activity puts them on the list. People are afraid to complain about being mistreated at checkpoints because they’re afraid it puts them on a list. Is there anything you can do to make this process more transparent?

KH: We need some help on this one. This is the biggest public pain point, dwarfing shoes and baggies.

First off, TSA does not add people to the watch lists, no matter how cranky you are at a checkpoint. Second, political views have nothing to do with no-flys or selectees. These myths have taken on urban legend status. There are very strict criteria, and they are reviewed by lots of separate people in separate agencies: it is for live terror concerns only. The problem comes from random selectees (literally mathematically random) or people who have the same name and birth date as real no-flys. If you can get a boarding pass, you are not on the no-fly list. This problem will go away when Secure Flight starts in 2008, but we can’t seem to shake the false impression that ordinary Americans get put on a “list.” I am open to suggestions on how to make the public “get it.”

BS: It’s hard to believe that there could be hundreds of thousands of people meeting those very strict criteria, and that’s after the list was cut in half! I know the TSA does not control the no-fly and watch lists, but you’re the public face of those lists. You’re the aspect of homeland security that people come into direct contact with. Some people might find out they’re on the list by being arrested, or being shipped off to Syria for torture, but most people find out they’re on the list by being repeatedly searched and questioned for hours at airports.

The main problem with the list is that it’s secret. Who is on the list is secret. Why someone’s on it is secret. How someone can get off it is secret. There’s no accountability and there’s no transparency. Of course this kind of thing induces paranoia. It’s the sort of thing you read about in history books about East Germany and other police states.

The best thing you can do to improve the problem is redress. People need the ability to see the evidence against them, challenge their accuser, and have a hearing in a neutral court. If they’re guilty of something, arrest them. And if they’re innocent, stop harassing them. It’s basic liberty.

I don’t actually expect you to fix this; the problem is larger than the TSA. But can you tell us something about redress? It’s been promised to us for years now.

KH: Redress issues are divided into two categories: people who are actually on the no-fly list and people whose names are similar to theirs.

In our experience, the first group is not a heavy user of the redress process. They typically don’t want anything to do with the U.S. government. Still, if someone is either wrongly put on or kept on, the Terrorist Screening Center (TSC) removes him or her immediately. In fact, TSA worked with the TSC to review every name, and that review cut the no-fly list in half. Having said that, once someone is really on the no-fly list, I totally agree with what you said about appeal rights. This is true across the board, not just with no-flys. DHS has recently consolidated redress for all DHS activities into one process called DHS TRIP. If you are mistaken for a real no-fly, you can let TSA know and we provide your information to the airlines, who right now are responsible for identifying no-flys trying to fly. Each airline uses its own system, so some can get you cleared to use kiosks, while others still require a visit to the ticket agent. When Secure Flight is operating, we’ll take that in-house at TSA and the problem should go away.

BS: I still don’t see how that will work, as long as the TSA doesn’t have control over who gets on or off the list.

What about Registered Traveler? When TSA first started talking about the program, the plan was to divide people into two categories: more trusted people who get less screening, and less trusted people who get more screening. This opened an enormous security hole; whenever you create an easy way and a hard way through security, you invite the bad guys to take the easier way. Since then, it’s transformed into a way for people to pay for better screening equipment and faster processing—a great idea with no security downsides. Given that, why bother with the background checks at all? What else is it besides a way for a potential terrorist to spend $60 and find out if the government is on to them?

KH: Registered Traveler (RT) is a promising program but suffers from unrealistic expectations. The idea—that you and I aren’t really risks and should be screened less so that TSA can apply scarce resources to the more likely terrorist—makes sense and got branded as RT. The problem is that with two million people a day, how can we tell them apart in an effective way? We know terrorists use people who are not on watch lists and who don’t have criminal convictions, so we can’t use those criteria alone. Right now, I’ve said that RT is behind Secure Flight in priority and that TSA is open to working with private sector entities to facilitate RT, but we will not fund it, reduce overall security, or inconvenience regular travelers. As private companies deploy extra security above what TSA does, we can change the screening process accordingly. It has to be more than a front-of-the-line pass, and I think there are some innovations coming out in the year ahead that will better define what RT can become.

BS: Let’s talk about behavioral profiling. I’ve long thought that most of airline security could be ditched in favor of well-trained guards, both in and out of uniform, wandering the crowds looking for suspicious behavior. Can you talk about some of the things you’re doing along those lines, and especially ways to prevent this from turning into just another form of racial profiling?

KH: Moving security out from behind the checkpoint is a big priority for us. First, it gives us the opportunity to pick up a threat a lot earlier. Taking away weapons or explosives at the checkpoint is stopping the plot at nearly the last possible moment. Obviously, a good security system aims at stopping attacks well before that. That’s why we have many layers of security (intel, law enforcement, behavior detection, etc.) to get to that person well before the security checkpoint. When a threat gets to the checkpoint, we’re operating on their terms—they pick when, where, and how they present themselves to us. We want to pick up the cues on our terms, before they’re ready, even if they’re just at the surveillance stage.

We use a system of behavior observation based on science demonstrating that certain involuntary, subconscious actions can betray a person’s hostile intent. For instance, there are tiny—but noticeable to the trained person—movements in a person’s facial muscles when they feel certain emotions. It is very different from the stress we all show when we’re anxious about missing a flight due to, say, a long security line. This is true across race, gender, age, ethnicity, etc. It is our way of not falling into the trap of predicting what a terrorist is going to look like. We know they use people who “look like” terrorists, but they also use people who do not, perhaps thinking that we cue only off of what the 9/11 hijackers looked like.

Our Behavior Detection teams routinely—and quietly—identify problem people just through observable behavior cues. More than 150 people have been identified by our teams, turned over to law enforcement, and subsequently arrested. This layer is invisible to the public, but don’t discount it, because it may be the most effective. We publicize non-terrorist-related successes like a murder suspect caught in Minneapolis and a bank robber caught in Philadelphia.

Most common are people showing phony documents, but we have even picked out undercover operatives—including our own. One individual, identified by a TSO in late May and not allowed to fly, was killed in a police shoot-out five days later. Additionally, several individuals have been of interest from the counterterrorism perspective. With just this limited deployment of Behavior Detection Officers (BDOs), we have identified more people of counterterrorism interest than all the people caught with prohibited items combined. Look for us to keep finding ways to highlight problem people rather than just problem objects.

BS: That’s really good news, and I think it’s the most promising new security measure you’ve got. Although, honestly, bragging about capturing a guy for wearing a fake military uniform just makes you look silly.

So far, we’ve only talked about passengers. What about airport workers? Nearly one million workers move in and out of airports every day without ever being screened. The JFK plot, as laughably unrealistic as it was, highlighted the security risks of airport workers. As with any security problem, we need to secure the weak links, rather than make already strong links stronger. What about airport employees, delivery vehicles, and so on?

KH: I totally agree with your point about a strong base level of security everywhere and not creating large gaps by over-focusing on one area. This is especially true with airport employees. We do background checks on all airport employees who have access to the sterile area. These employees are in the same places doing the same jobs day after day, so when someone does something out of the ordinary, it immediately stands out. They serve as an additional set of eyes and ears throughout the airport.

Even so, we should do more on airport employees and my House testimony of April 19 gives details of where we’re heading. The main point is that everything you need for an attack is already inside the perimeter of an airport. For example, why take lighters from people who work with blowtorches in facilities with millions of gallons of jet fuel?

You could perhaps feel better by setting up employee checkpoints at entry points, but you’d hassle a lot of people at great cost with minimal additional benefit, and a smart, patient terrorist could find a way to beat you. Today’s random, unpredictable screenings, which can and do occur everywhere, all the time (including delivery vehicles, etc.), are harder to defeat. With random screening, you make it impossible to engineer an attack; with fixed checkpoints, you give the blueprint for exactly that.

BS: There’s another reason to screen pilots and flight attendants: they go through the same security lines as passengers. People have to remember that it’s not pilots being screened, it’s people dressed as pilots. You either have to implement a system to verify that people dressed as pilots are actual pilots, or just screen everybody. The latter choice is far easier.

I want to ask you about general philosophy. Basically, there are three broad ways of defending airplanes: preventing bad people from getting on them (ID checks), preventing bad objects from getting on them (passenger screening, baggage screening), and preventing bad things from happening on them (reinforcing the cockpit door, sky marshals). The first seems to be a complete failure, and the second is spotty at best. I’ve always been a fan of the third. Any future developments in that area?

KH: You are too eager to discount the first—stopping bad people from getting on planes. That is the most effective! Don’t forget about all the intel work done partnering with other countries to stop plots before they get here (UK liquids, NY subway), all the work done to keep them out either through no-flys (at least several times a month) or by Customs & Border Protection on their way in, and law enforcement once they are here (Ft. Dix). Then, you add the behavior observation (both uniformed and not) and identity validation (as we take that on) and that’s all before they get to the checkpoint.

The screening-for-things part, we’ve discussed, so I’ll jump to in-air measures. Reinforced, locked cockpit doors and air marshals are indeed huge upgrades since 9/11. Along the same lines, you have to consider the role of the engaged flight crew and passengers—they are quick to give a heads-up about suspicious behavior and they can, and do, take decisive action when threatened. Also, there are thousands of flights covered by pilots who are qualified as law enforcement and are armed, as well as the agents from other government entities like the Secret Service and FBI who provide coverage as well. There is also a fair amount of communications with the flight deck during flights if anything comes up en route—either in the aircraft or if we get information that would be of interest to them. That allows “quiet” diversions or other preventive measures. Training is, of course, important too. Pilots need to know what to do in the event of a missile sighting or other event, and need to know what we are going to do in different situations. Other things coming: better air-to-ground communications for air marshals and flight information, including, possibly, video.

So, when you boil it down, keeping the bomb off the plane is the number one priority. A terrorist has to know that once that door closes, he or she is locked into a confined space with dozens, if not hundreds, of zero-tolerance people, some of whom may be armed, not to mention the memory of United Flight 93.

BS: I’ve read repeated calls to privatize airport security: to return it to the way it was pre-9/11. Personally, I think it’s a bad idea, but I’d like your opinion on the question. And regardless of what you think should happen, do you think it will happen?

KH: From an operational security point of view, I think it works both ways. So it is not a strategic issue for me.

SFO, the largest of our privately screened airports, has excellent security and is on a par with its federalized counterparts (in fact, I am on a flight from there as I write this). One current federalized advantage is that we can surge resources around the system with no notice: essentially, the ability to move from anywhere to anywhere and mix TSOs with federal air marshals in different force packages. We would need to be sure we don’t lose that interchangeability if we were to expand privatized screening.

I don’t see a major security or economic driver that would push us to large-scale privatization. Economically, the current cost-plus model makes it a better deal for the government in smaller airports than in bigger. So, maybe more small airports will privatize. If Congress requires collective bargaining for our TSOs, that will impose an additional overhead cost of about $500 million, which would shift the economic balance significantly toward privatized screening. But unless that happens, I don’t see major change in this area.

BS: Last question. I regularly criticize overly specific security measures, because forcing the terrorists to make minor modifications in their tactics doesn’t make us any safer. We’ve talked about specific airline threats, but what about airplanes as a specific threat? On the one hand, if we secure our airlines and the terrorists all decide instead to bomb shopping malls, we haven’t improved our security very much. On the other hand, airplanes make particularly attractive targets for several reasons. One, they’re considered national symbols. Two, they’re a common and important mode of travel, deeply embedded throughout our economy. Three, they travel to distant places where the terrorists are. And four, the failure mode is severe: a small bomb drops the plane out of the sky and kills everyone. I don’t expect you to give back any of your budget, but when do we have “enough” airplane security as compared with the rest of our nation’s infrastructure?

KH: Airplanes are a high-profile target for terrorists for all the reasons you cited. The reason we have the focus we do on aviation is the effect the airline system has on our country, both economically and psychologically. We do considerable work (through grants and voluntary agreements) to ensure the safety of surface transportation, but it’s less visible to the public because people other than those in TSA uniforms are taking care of that responsibility.

We look at the aviation system as one component in a much larger network that also includes freight rail, mass transit, highways, etc. And that’s just in the U.S. Then you add the world’s transportation sectors—it’s all about the network.

The only components that require specific security measures are the critical points of failure—and they have to be protected at virtually any cost. It doesn’t matter which individual part of the network is attacked; what matters is that the network as a whole is resilient enough to operate even after losing one or more components.

The network approach allows various transportation modes to benefit from our layers of security. Take our first layer: intel. It is fundamental to our security program to catch terrorists long before they get to their target, and even better if we catch them before they get into our country. Our intel operation works closely with other international and domestic agencies, and that information and analysis benefits all transportation modes.

Dogs have proven very successful at detecting explosives. They work in airports and they work in mass transit venues as well. As we test and pilot technologies like millimeter wave in airports, we assess their viability in other transportation modes, and vice versa.

To get back to your question, we’re not at the point where we can say “enough” for aviation security. But we’re also aware of the attractiveness of other modes and continue to use the network to share resources and lessons learned.

BS: Thank you very much. I appreciate both your time and your candor.

KH: I enjoyed the exchange and appreciated your insights. Thanks for the opportunity.

URL for this entire conversation:
http://www.schneier.com/interview-hawley.html

Hawley’s bio:
http://www.tsa.gov/who_we_are/people/bios/…

Hawley’s “Aviation Daily” blog post:
http://aviationweek.typepad.com/airports/2007/03/…

TSA liquid rules:
http://www.tsa.gov/311/index.shtm

Airport security tests:
http://www.9news.com/news/article.aspx?storyid=67166
http://www.rawstory.com/showoutarticle.php?…
https://www.schneier.com/blog/archives/2006/03/…

Problems with screening:
http://www.schneier.com/essay-110.html

Backscatter X-ray:
http://www.epic.org/privacy/surveillance/spotlight/…
http://www.tsa.gov/approach/tech/backscatter.shtm

The multitude of threats and CYA security:
https://www.schneier.com/blog/archives/2007/02/…
https://www.schneier.com/blog/archives/2007/04/…
http://www.schneier.com/essay-087.html
https://www.schneier.com/blog/archives/2006/08/…

BeerBelly:
http://www.thebeerbelly.com/

Cell phone gun:
http://cellular.co.za/phones/gunphone/gun-phone.htm
http://urbanlegends.about.com/library/…
http://www.strategypage.com/military_photos/…
http://www.safetyproductsunlimited.com/…

Terrorists doing dry runs:
http://msnbcmedia.msn.com/i/msnbc/sections/NEWS/…

ID checks:
http://www.schneier.com/essay-096.html
https://www.schneier.com/blog/archives/2006/03/…
https://www.schneier.com/blog/archives/2006/11/…
https://www.schneier.com/blog/archives/2006/10/…

Princeton professor:
http://rawstory.com/news/2007/…
http://blog.wired.com/27bstroke6/2007/04/…

People arrested and tortured:
http://www.cbsnews.com/stories/2006/11/29/national/…
https://www.schneier.com/blog/archives/2006/09/…

Redress:
http://www.epic.org/privacy/surveillance/spotlight/…
http://www.tsa.gov/travelers/customer/redress/…

Registered Traveler:
http://www.schneier.com/essay-130.html

Behavior detection successes:
http://www.tsa.gov/press/happenings/man_spotted.shtm
http://www.tsa.gov/press/happenings/…
http://www.tsa.gov/press/happenings/…

Stupid terrorists:
https://www.schneier.com/blog/archives/2007/06/…

Hawley’s House testimony:
http://www.tsa.gov/press/speeches/…

Overly specific security:
http://www.schneier.com/essay-121.html
http://www.schneier.com/essay-173.html


Comments from Readers

There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.

http://www.schneier.com/


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of BT Counterpane, and is a member of the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

BT Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. BT Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT or BT Counterpane.

Copyright (c) 2007 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.