November 15, 2006
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0611.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
In this issue:
- Voting Technology and Security
- More on Electronic Voting Machines
- The Inherent Inaccuracy of Voting
- The Need for Professional Election Officials
- Perceived Risk vs. Actual Risk
- Crypto-Gram Reprints
- Total Information Awareness Is Back
- Forge Your Own Boarding Pass
- The Death of Ephemeral Conversation
- Airline Passenger Profiling for Profit
- Counterpane News
- Architecture and Security
- The Doghouse: Skylark Utilities
- Heathrow Tests Biometric ID
- Please Stop My Car
- Air Cargo Security
- Cheyenne Mountain Retired
- Comments from Readers
Last week in Florida’s 13th Congressional district, the victory margin was only 386 votes out of 153,000. There’ll be a mandatory lawyered-up recount, but it won’t include the almost 18,000 votes that seem to have disappeared. The electronic voting machines didn’t include them in their final tallies, and there’s no backup to use for the recount. The district will pick a winner to send to Washington, but it won’t be because they are sure the majority voted for him. Maybe the majority did, and maybe it didn’t. There’s no way to know.
Electronic voting machines represent a grave threat to fair and accurate elections, a threat that every American—Republican, Democrat or independent—should be concerned about. Because they’re computer-based, the deliberate or accidental actions of a few can swing an entire election. The solution: Paper ballots, which can be verified by voters and recounted if necessary.
To understand the security of electronic voting machines, you first have to consider election security in general. The goal of any voting system is to capture the intent of each voter and collect them all into a final tally. In practice, this occurs through a series of transfer steps. When I voted last week, I transferred my intent onto a paper ballot, which was then transferred to a tabulation machine via an optical scan reader; at the end of the night, the individual machine tallies were transferred by election officials to a central facility and combined into a single result I saw on television.
All election problems are errors introduced at one of these steps, whether it’s voter disenfranchisement, confusing ballots, broken machines or ballot stuffing. Even in normal operations, each step can introduce errors. Voting accuracy, therefore, is a matter of 1) minimizing the number of steps, and 2) increasing the reliability of each step.
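Those two levers can be made concrete with a toy model of my own (not from the essay), assuming each transfer step independently preserves a vote's intent with some probability:

```python
# Toy model (not from the essay): treat each transfer step in an election
# as preserving a vote's intent with some probability. Overall accuracy is
# the product of the per-step reliabilities, so both fewer steps and more
# reliable steps improve the final tally. All numbers are illustrative.

def overall_accuracy(step_reliabilities):
    acc = 1.0
    for r in step_reliabilities:
        acc *= r
    return acc

four_steps = [0.999] * 4   # mark ballot, scan, machine tally, central tally
six_steps = [0.999] * 6    # the same system with two extra transfer steps

print(overall_accuracy(four_steps))
print(overall_accuracy(six_steps))
```

Even with 99.9%-reliable steps, every step added to the chain drags the end-to-end accuracy down; that is the whole argument for minimizing steps.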
Much of our election security is based on “security by competing interests.” Every step, with the exception of voters completing their single anonymous ballots, is witnessed by someone from each major party; this ensures that any partisan shenanigans—or even honest mistakes—will be caught by the other observers. This system isn’t perfect, but it’s worked pretty well for a couple hundred years.
Electronic voting is like an iceberg; the real threats are below the waterline where you can’t see them. Paperless electronic voting machines bypass that security process, allowing a small group of people—or even a single hacker—to affect an election. The problem is software—programs that are hidden from view and cannot be verified by a team of Republican and Democrat election judges, programs that can drastically change the final tallies. And because all that’s left at the end of the day are those electronic tallies, there’s no way to verify the results or to perform a recount. Recounts are important.
This isn’t theoretical. In the U.S., there have been hundreds of documented cases of electronic voting machines distorting the vote to the detriment of candidates from both political parties: machines losing votes, machines swapping the votes for candidates, machines registering more votes for a candidate than there were voters, machines not registering votes at all. I would like to believe these are all mistakes and not deliberate fraud, but the truth is that we can’t tell the difference. And these are just the problems we’ve caught; it’s almost certain that many more problems have escaped detection because no one was paying attention.
This is both new and terrifying. For the most part, and throughout most of history, election fraud on a massive scale has been hard; it requires very public actions or a highly corrupt government—or both. But electronic voting is different: a lone hacker can affect an election. He can do his work secretly before the machines are shipped to the polling stations. He can affect an entire area’s voting machines. And he can cover his tracks completely, writing code that deletes itself after the election.
And that assumes well-designed voting machines. The actual machines being sold by companies like Diebold, Sequoia Voting Systems and Election Systems & Software are much worse. The software is badly designed. Machines are “protected” by hotel minibar keys. Vote tallies are stored in easily changeable files. Machines can be infected with viruses. Some voting software runs on Microsoft Windows, with all the bugs and crashes and security vulnerabilities that introduces. The list of inadequate security practices goes on and on.
The voting machine companies counter that such attacks are impossible because the machines are never left unattended (they’re not), the memory cards that hold the votes are carefully controlled (they’re not), and everything is supervised (it isn’t). Yes, they’re lying, but they’re also missing the point.
We shouldn’t—and don’t—have to accept voting machines that might someday be secure only if a long list of operational procedures are followed precisely. We need voting machines that are secure regardless of how they’re programmed, handled and used, and that can be trusted even if they’re sold by a partisan company, or a company with possible ties to Venezuela.
Sounds like an impossible task, but in reality, the solution is surprisingly easy. The trick is to use electronic voting machines as ballot-generating machines. Vote by whatever automatic touch-screen system you want: a machine that keeps no records or tallies of how people voted, but only generates a paper ballot. The voter can check it for accuracy, then process it with an optical-scan machine. The second machine provides the quick initial tally, while the paper ballot provides for recounts when necessary. And absentee and backup ballots can be counted the same way.
You can even do away with the electronic vote-generation machines entirely and hand-mark your ballots like we do in Minnesota. Or run a 100% mail-in election like Oregon does. Again, paper ballots are the key.
Paper? Yes, paper. A stack of paper is harder to tamper with than a number in a computer’s memory. Voters can see their vote on paper, regardless of what goes on inside the computer. And most important, everyone understands paper. We get into hassles over our cell phone bills and credit card mischarges, but when was the last time you had a problem with a $20 bill? We know how to count paper. Banks count it all the time. Both Canada and the U.K. count paper ballots with no problems, as do the Swiss. We can do it, too. In today’s world of computer crashes, worms and hackers, a low-tech solution is the most secure.
Secure voting machines are just one component of a fair and honest election, but they’re an increasingly important part. They’re where a dedicated attacker can most effectively commit election fraud (and we know that changing the results can be worth millions). But we shouldn’t forget other voter suppression tactics: telling people the wrong polling place or election date, taking registered voters off the voting rolls, having too few machines at polling places, or making it onerous for people to register. (Oddly enough, ineligible people voting isn’t a problem in the U.S., despite political rhetoric to the contrary; every study shows their numbers to be so small as to be insignificant. And photo ID requirements actually cause more problems than they solve.)
Voting is as much a perception issue as it is a technological issue. It’s not enough for the result to be mathematically accurate; every citizen must also be confident that it is correct. Around the world, people protest or riot after an election not when their candidate loses, but when they think their candidate lost unfairly. It is vital for a democracy that an election both accurately determine the winner and adequately convince the loser. In the U.S., we’re losing the perception battle.
The current crop of electronic voting machines fails on both counts. The results from Florida’s 13th Congressional district are neither accurate nor convincing. As a democracy, we deserve better. We need to refuse to vote on electronic voting machines without a voter-verifiable paper ballot, and to continue to pressure our legislatures to implement voting technology that works.
This essay originally appeared on Forbes.com.
How to Steal an Election:
Value of stolen elections:
Avi Rubin wrote a good essay on voting for “Forbes” as well.
Florida 13 is turning out to be a bigger problem than I described:
“The Democrat, Christine Jennings, lost to her Republican opponent, Vern Buchanan, by just 373 votes out of a total 237,861 cast—one of the closest House races in the nation. More than 18,000 voters in Sarasota County, or 13 percent of those who went to the polls Tuesday, did not seem to vote in the Congressional race when they cast ballots, a discrepancy that Kathy Dent, the county elections supervisor, said she could not explain.
“In comparison, only 2 percent of voters in one neighboring county within the same House district and 5 percent in another skipped the Congressional race, according to The Herald-Tribune of Sarasota. And many of those who did not seem to cast a vote in the House race did vote in more obscure races, like for the hospital board.”
And the absentee ballots collected for the same race show only a 2.5% difference in the number of voters that voted for candidates in other races but not for Congress.
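A back-of-the-envelope check makes the discrepancy vivid. The figures below come from the quoted reports; the calculation itself is mine:

```python
# Back-of-the-envelope check on the quoted Sarasota figures (the 18,000
# undervotes, the 13% rate, and the neighbors' 2-5% rates are from the
# quote above; the arithmetic is mine). If Sarasota's Congressional
# undervote rate had matched its neighbors, far fewer than 18,000 ballots
# would have skipped the race.

sarasota_ballots = 18_000 / 0.13   # total ballots implied by "18,000 is 13%"
for neighbor_rate in (0.02, 0.05):
    expected = round(sarasota_ballots * neighbor_rate)
    print(f"expected undervotes at {neighbor_rate:.0%}: {expected}")
```

At the neighboring counties' rates, Sarasota would have seen roughly 2,800 to 6,900 undervotes, not 18,000—an excess of more than 11,000 missing votes in a race decided by 373.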
There’ll be a recount, and with that close a margin it’s pretty random who will eventually win. But because so many votes were not recorded—and I don’t see how anyone who has any understanding of statistics can look at this data and not conclude that votes were not recorded—we’ll never know who should really win this district.
In Pennsylvania, the Republican State Committee is asking the Secretary of State to impound voting machines because of potential voting errors. According to KDKA:
“Pennsylvania GOP officials claimed there were reports that some machines were changing Republican votes to Democratic votes. They asked the state to investigate and said they were not ruling out a legal challenge.
“According to Santorum’s camp, people are voting for Santorum, but the vote either registered as invalid or a vote for Casey.”
RedState.com describes some of the problems:
“RedState is getting widespread reports of an electoral nightmare shaping up in Pennsylvania with certain types of electronic voting machines.
“In some counties, machines are crashing. In other counties, we have enough reports to treat as credible the fact that some Rendell votes are being tabulated by the machines for Swann and vice versa. The same is happening with Santorum and Casey. Reports have been filed with the Pennsylvania Secretary of State, but nothing has happened.”
I’m happy to see a Republican at the receiving end of the problems.
Actually, that’s not true. I’m not happy to see anyone at the receiving end of voting problems. But I am sick and tired of this being perceived as a partisan issue, and I hope some high-profile Republican losses that might be attributed to electronic voting-machine malfunctions (or even fraud) will change that perception. This is a serious problem that affects everyone, and it is in everyone’s interest to fix it.
FL-13 was the big voting-machine disaster, but there were other electronic voting-machine problems reported. EFF wrote: “The types of machine problems reported to EFF volunteers were wide-ranging in both size and scope. Polls opened late for machine-related reasons in polling places throughout the country, including Ohio, Florida, Georgia, Virginia, Utah, Indiana, Illinois, Tennessee, and California. In Broward County, Florida, voting machines failed to start up at one polling place, leaving some citizens unable to cast votes for hours. EFF and the Election Protection Coalition sought to keep the polling place open late to accommodate voters frustrated by the delays, but the officials refused. In Utah County, Utah, more than 100 precincts opened one to two hours late on Tuesday due to problems with machines. Both county and state election officials refused to keep polling stations open longer to make up for the lost time, and a judge also turned down a voter’s plea for extended hours brought by EFF.”
And there’s an election for mayor, where one of the candidates received zero votes—even though that candidate is sure he voted for himself.
ComputerWorld is also reporting problems across the country, as is “The New York Times”. Avi Rubin, whose writings on electronic voting security are always worth reading, writes about a problem he witnessed in Maryland:
“The voter had made his selections and pressed the “cast ballot” button on the machine. The machine spit out his smartcard, as it is supposed to do, but his summary screen remained, and it did not appear that his vote had been cast. So, he pushed the smartcard back in, and it came out saying that he had already voted. But, he was still in the screen that showed he was in the process of voting. The voter then pressed the “cast ballot” again, and an error message appeared on the screen that said that he needs to call a judge for assistance. The voter was very patient, but was clearly taking this very seriously, as one would expect. After discussing the details about what happened with him very carefully, I believed that there was a glitch with his machine, and that it was in an unexpected state after it spit out the smartcard. The question we had to figure out was whether or not his vote had been recorded. The machine said that there had been 145 votes cast. So, I suggested that we count the voter authority cards in the envelope attached to the machine. Since we were grouping them into bundles of 25 throughout the day, that was pretty easy, and we found that there were 146 authority cards. So, this meant that either his vote had not been counted, or that the count was off for some other reason. Considering that the count on that machine had been perfect all day, I thought that the most likely thing is that this glitch had caused his vote not to count. Unfortunately, because while this was going on, all the other voters had left, other election judges had taken down and put away the e-poll books, and we had no way to encode a smartcard for him. We were left with the possibility of having the voter vote on a provisional ballot, which is what he did. He was gracious, and understood our predicament.
“The thing is, that I don’t know for sure now if this voter’s vote will be counted once or twice (or not at all if the board of election rejects his provisional ballot). In fact, the purpose of counting the voter authority cards is to check the counts on the machines hourly. What we had done was to use the number of cards to conclude something about whether a particular voter had voted, and that is not information that these cards can provide. Unfortunately, I believe there are an unimaginable number of problems that could crop up with these machines where we would not know for sure if a voter’s vote had been recorded, and the machines provide no way to check on such questions. If we had paper ballots that were counted by optical scanners, this kind of situation could never occur.”
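The cross-check Rubin describes is simple to state: compare the machine's reported count against the authority cards in the envelope. A minimal sketch (mine; names and numbers are illustrative, matching his 145-vs-146 situation):

```python
# A minimal sketch of the cross-check Rubin describes: compare a machine's
# reported vote count against the voter authority cards in its envelope.
# A mismatch shows *that* something is off but, as he notes, not *which*
# voter's ballot was or wasn't recorded. Numbers match his anecdote;
# the function and its names are my own illustration.

def reconcile(machine_count, authority_cards):
    if machine_count == len(authority_cards):
        return "counts match"
    return f"mismatch: machine={machine_count}, cards={len(authority_cards)}"

cards = ["authority card"] * 146   # 146 cards found in the envelope
print(reconcile(145, cards))       # machine reported 145 votes cast
```

The check can only raise a flag; with no paper ballots behind the electronic tally, there is nothing to reconcile the flag against.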
How many hundreds of these stories do we need before we conclude that electronic voting machines aren’t accurate enough for elections?
On the plus side, the FL-13 problems have convinced some previous naysayers in that district: “Supervisor of Elections Kathy Dent now says she will comply with voters who want a new voting system—one that produces a paper trail…. Her announcement Friday marks a reversal for the elections supervisor, who had promoted and adamantly defended the touch-screen system the county purchased for $4.5 million in 2001.”
One of the dumber comments I hear about electronic voting goes something like this: “If we can secure multi-million-dollar financial transactions, we should be able to secure voting.” Most financial security comes through audit: names are attached to every transaction, and transactions can be unwound if there are problems. Voting requires an anonymous ballot, which means that most of our anti-fraud systems from the financial world don’t apply to voting. (I first explained this back in 2001.)
In Minnesota, we use paper ballots counted by optical scanners, and we have some of the most well-run elections in the country. To anyone reading this who needs to buy new election equipment, this is what to buy.
On the other hand, I am increasingly of the opinion that an all mail-in election—like Oregon has—is the right answer. Yes, there are authentication issues with mail-in ballots, but these are issues we have to solve anyway, as long as we allow absentee ballots. And yes, there are vote-buying issues, but almost everyone considers them to be secondary. The combined benefits of 1) a paper ballot, 2) no worries about long lines due to malfunctioning or insufficient machines, 3) increased voter turnout, and 4) a dampening of the last-minute campaign frenzy make Oregon’s election process very appealing.
E-voting state by state:
HBO’s “Hacking Democracy” documentary:
Avi Rubin on voting:
David Wagner and Ed Felten design a better voting machine.
My previous writings on electronic voting, as far back as 2000:
Voting vs. e-commerce:
In a “New York Times” op-ed, New York University sociology professor Dalton Conley points out that vote counting is inherently inaccurate:
“The rub in these cases is that we could count and recount, we could examine every ballot four times over and we’d get—you guessed it—four different results. That’s the nature of large numbers—there is inherent measurement error. We’d like to think that there is a “true” answer out there, even if that answer is decided by a single vote. We so desire the certainty of thinking that there is an objective truth in elections and that a fair process will reveal it.
“But even in an absolutely clean recount, there is not always a sure answer. Ever count out a large jar of pennies? And then do it again? And then have a friend do it? Do you always converge on a single number? Or do you usually just average the various results you come to? If you are like me, you probably settle on an average. The underlying notion is that each election, like those recounts of the penny jar, is more like a poll of some underlying voting population.”
He’s right, but it’s more complicated than that.
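Conley's penny-jar analogy is easy to simulate. A quick sketch (my construction, with an assumed 0.1% per-penny miscount rate):

```python
import random

# Simulating Conley's penny-jar point: repeated counts of the same jar,
# each with a tiny per-item miscount probability, rarely agree exactly.
# The jar size and the 0.1% error rate are assumptions for illustration.

def count_jar(true_total, miscount_prob, rng):
    count = 0
    for _ in range(true_total):
        if rng.random() < miscount_prob:
            count += rng.choice([0, 2])  # skip a penny, or count it twice
        else:
            count += 1
    return count

rng = random.Random(0)
recounts = [count_jar(150_000, 0.001, rng) for _ in range(4)]
print(recounts)  # four careful "recounts" of the same 150,000-penny jar
```

Each count lands near the truth, but the counts scatter around it—which is exactly why a handful of votes' difference is within the measurement error of the process itself.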
There are two basic types of voting errors: random errors and systemic errors. Random errors are just that, random. Votes intended for A that mistakenly go to B are just as likely as votes intended for B that mistakenly go to A. This is why, traditionally, recounts in close elections are unlikely to change things. The recount will find the few percent of the errors in each direction, and they’ll cancel each other out. But in a very close election, a careful recount will yield a more accurate—but almost certainly not perfectly accurate—result.
Systemic errors are more important, because they will cause votes intended for A to go to B at a different rate than the reverse. Those can make a dramatic difference in an election, because they can easily shift thousands of votes from A to B without any counterbalancing shift from B to A. These errors can either be a particular problem in the system—a badly designed ballot, for example—or a random error that only occurs in precincts where A has more supporters than B.
Here’s where the problems of electronic voting machines become critical: they’re more likely to be systemic problems. Vote flipping, for example, seems to generally affect one candidate more than another. Even individual machine failures are going to affect supporters of one candidate more than another, depending on where the particular machine is. And if there are no paper ballots to fall back on, no recount can undo these problems.
Conley proposes to nullify any election where the margin of victory is less than 1%, and have everyone vote again. I agree, but I think his margin is too large. In the Virginia Senate race, Allen was right not to demand a recount. Even though his 7,800-vote loss was only 0.33%, in the absence of systemic flaws it is unlikely that a recount would change things. I think an automatic revote if the margin of victory is less than 0.1% makes more sense.
“Yes, it costs more to run an election twice, but keep in mind that many places already use runoffs when the leading candidate fails to cross a particular threshold. If we are willing to go through all that trouble, why not do the same for certainty in an election that teeters on a razor’s edge? One counter-argument is that such a plan merely shifts the realm of debate and uncertainty to a new threshold—the 99 percent threshold. However, candidates who lose by the margin of error have a lot less rhetorical power to argue for redress than those for whom an actual majority is only a few votes away.
“It may make us existentially uncomfortable to admit that random chance and sampling error play a role in our governance decisions. But in reality, by requiring a margin of victory greater than one, seemingly arbitrary vote, we would build in a buffer to democracy, one that offers us a more bedrock sense of security that the ‘winner’ really did win.”
This is a good idea, but it doesn’t address the systemic problems with voting. If there are systemic problems, there should be another election day limited to only those precincts that had the problem and only those people who can prove they voted—or tried to vote and failed—during the first election day. (Although I could be persuaded that another re-voting protocol would make more sense.)
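The Virginia numbers are easy to sanity-check. A quick sketch (the 7,800-vote margin and 0.33% figure come from above; the derived totals are mine):

```python
# Sanity-checking the Virginia arithmetic above (the margin and percentage
# are from the text; the derived totals are mine). A 7,800-vote margin at
# 0.33% implies roughly 2.4 million votes cast, so a 0.1% automatic-revote
# threshold in that race would be a margin of about 2,400 votes.

margin = 7_800
margin_pct = 0.0033
total_votes = margin / margin_pct
print(round(total_votes))          # ballots implied by the quoted figures
print(round(total_votes * 0.001))  # the 0.1% revote threshold, in votes
```

Allen's loss was more than three times the proposed 0.1% threshold, which is why a recount (or revote) was unlikely to change the outcome absent systemic flaws.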
But most importantly, we need better voting machines and better voting procedures.
In the U.S., elections are run by an army of hundreds of thousands of volunteers. These are both Republicans and Democrats, and the idea is that the one group watches the other: security by competing interests. But at the top are state-elected or -appointed officials, and many election shenanigans in the past several years have been perpetrated by them.
In yet another “New York Times” op-ed, Loyola Law School professor Richard Hansen argues for professional, non-partisan election officials: “The United States should join the rest of the world’s advanced democracies and put nonpartisan professionals in charge. We need officials whose ultimate allegiance is to the fairness, integrity and professionalism of the election process, not to helping one party or the other gain political advantage. We don’t need disputes like the current one in Florida being resolved by party hacks.”
And: “To improve the chances that states will choose an independent and competent chief elections officer, states should enact laws making that officer a long-term gubernatorial appointee who takes office only upon confirmation by a 75 percent vote of the legislature—a supermajority requirement that would ensure that a candidate has true bipartisan support. Nonpartisanship in election administration is no dream. It is how Canada and Australia run their national elections.”
To me, this is easier said than done. Where are these hundreds of thousands of disinterested election officials going to come from? And how do we ensure that they’re disinterested and fair, and not just partisans in disguise? I actually like security by competing interests.
But I do like his idea of a supermajority-confirmed chief elections officer for each state. And at least he’s starting the debate about better election procedures in the U.S.
I’ve written repeatedly about the difference between perceived and actual risk, and how it explains many seemingly perverse security trade-offs. Here’s a “Los Angeles Times” op-ed that does the same. The author is Daniel Gilbert, psychology professor at Harvard. (I just recently finished his book “Stumbling on Happiness,” which is not a self-help book but instead about how the brain works. Strongly recommended.)
The op-ed is about the public’s reaction to the risks of global warming and terrorism, but the points he makes are much more general. He gives four reasons why some risks are perceived to be more or less serious than they actually are:
1. We over-react to intentional actions, and under-react to accidents, abstract events, and natural phenomena. “That’s why we worry more about anthrax (with an annual death toll of roughly zero) than influenza (with an annual death toll of a quarter-million to a half-million people). Influenza is a natural accident, anthrax is an intentional action, and the smallest action captures our attention in a way that the largest accident doesn’t. If two airplanes had been hit by lightning and crashed into a New York skyscraper, few of us would be able to name the date on which it happened.”
2. We over-react to things that offend our morals. “When people feel insulted or disgusted, they generally do something about it, such as whacking each other over the head, or voting. Moral emotions are the brain’s call to action.”
He doesn’t say it, but it’s reasonable to assume that we under-react to things that don’t.
3. We over-react to immediate threats and under-react to long-term threats. “The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That’s what brains did for several hundred million years—and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened. Our ability to duck that which is not yet coming is one of the brain’s most stunning innovations, and we wouldn’t have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.”
4. We under-react to changes that occur slowly and over time. “The human brain is exquisitely sensitive to changes in light, sound, temperature, pressure, size, weight and just about everything else. But if the rate of change is slow enough, the change will go undetected. If the low hum of a refrigerator were to increase in pitch over the course of several weeks, the appliance could be singing soprano by the end of the month and no one would be the wiser.”
It’s interesting to compare this to what I wrote in “Beyond Fear” (pages 26-27) about perceived vs. actual risk:
” * People exaggerate spectacular but rare risks and downplay common risks. They worry more about earthquakes than they do about slipping on the bathroom floor, even though the latter kills far more people than the former. Similarly, terrorism causes far more anxiety than common street crime, even though the latter claims many more lives. Many people believe that their children are at risk of being given poisoned candy by strangers at Halloween, even though there has been no documented case of this ever happening.
” * People have trouble estimating risks for anything not exactly like their normal situation. Americans worry more about the risk of mugging in a foreign city, no matter how much safer it might be than where they live back home. Europeans routinely perceive the U.S. as being full of guns. Men regularly underestimate how risky a situation might be for an unaccompanied woman. The risks of computer crime are generally believed to be greater than they are, because computers are relatively new and the risks are unfamiliar. Middle-class Americans can be particularly naive and complacent; their lives are incredibly secure most of the time, so their instincts about the risks of many situations have been dulled.
” * Personified risks are perceived to be greater than anonymous risks. Joseph Stalin said, ‘A single death is a tragedy, a million deaths is a statistic.’ He was right; large numbers have a way of blending into each other. The final death toll from 9/11 was less than half of the initial estimates, but that didn’t make people feel less at risk. People gloss over statistics of automobile deaths, but when the press writes page after page about nine people trapped in a mine—complete with human-interest stories about their lives and families—suddenly everyone starts paying attention to the dangers with which miners have contended for centuries. Osama bin Laden represents the face of Al Qaeda, and has served as the personification of the terrorist threat. Even if he were dead, it would serve the interests of some politicians to keep him “alive” for his effect on public opinion.
” * People underestimate risks they willingly take and overestimate risks in situations they can’t control. When people voluntarily take a risk, they tend to underestimate it. When they have no choice but to take the risk, they tend to overestimate it. Terrorists are scary because they attack arbitrarily, and from nowhere. Commercial airplanes are perceived as riskier than automobiles, because the controls are in someone else’s hands—even though they’re much safer per passenger mile. Similarly, people overestimate even more those risks that they can’t control but think they, or someone, should. People worry about airplane crashes not because we can’t stop them, but because we think as a society we should be capable of stopping them (even if that is not really the case). While we can’t really prevent criminals like the two snipers who terrorized the Washington, DC, area in the fall of 2002 from killing, most people think we should be able to.
“Last, people overestimate risks that are being talked about and remain an object of public scrutiny. News, by definition, is about anomalies. Endless numbers of automobile crashes hardly make news like one airplane crash does. The West Nile virus outbreak in 2002 killed very few people, but it worried many more because it was in the news day after day. AIDS kills about 3 million people per year worldwide—about three times as many people each day as died in the terrorist attacks of 9/11. If a lunatic goes back to the office after being fired and kills his boss and two coworkers, it’s national news for days. If the same lunatic shoots his ex-wife and two kids instead, it’s local news…maybe not even the lead story.”
The Security of RFID Passports:
Liabilities and Software Vulnerabilities:
The Zotob Worm:
Why Election Technology is Hard:
Electronic Voting Machines:
The Security of Checks and Balances:
Security Information Management Systems (SIMS):
Technology and Counterterrorism:
The Trojan Defense:
Why Digital Signatures are Not Signatures:
Programming Satan’s Computer: Why Computers Are Insecure
Elliptic Curve Public-Key Cryptography
The Future of Fraud: Three reasons why electronic commerce is different
Software Copy Protection: Why copy protection does not work
Crypto-Gram is currently in its ninth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram-back.html>. These are a selection of articles that appeared in this calendar month in other years.
Remember Total Information Awareness (TIA), the massive database on everyone that was supposed to find terrorists? The public found it so abhorrent, and objected so forcefully, that Congress killed funding for the program in September 2003.
None of us thought that meant the end of TIA, only that it would turn into a classified program and be renamed. Well, the program is now called Tangram, and it is classified.
The “National Journal” writes:
“The government’s top intelligence agency is building a computerized system to search very large stores of information for patterns of activity that look like terrorist planning. The system, which is run by the Office of the Director of National Intelligence, is in the early research phases and is being tested, in part, with government intelligence that may contain information on U.S. citizens and other people inside the country.
“It encompasses existing profiling and detection systems, including those that create ‘suspicion scores’ for suspected terrorists by analyzing very large databases of government intelligence, as well as records of individuals’ private communications, financial transactions, and other everyday activities.”
The information about Tangram comes from a government document looking for contractors to help design and build the system.
DefenseTech writes: “The document, which is a description of the Tangram program for potential contractors, describes other, existing profiling and detection systems that haven’t moved beyond so-called ‘guilt-by-association models,’ which link suspected terrorists to potential associates, but apparently don’t tell analysts much about why those links are significant. Tangram wants to improve upon these methods, as well as investigate the effectiveness of other detection links such as ‘collective inferencing,’ which attempt to create suspicion scores of entire networks of people simultaneously.”
Data mining for terrorists has always been a dumb idea. And the existence of Tangram illustrates the problem with Congress trying to stop a program by killing its funding; it just comes back under a different name.
Last week Christopher Soghoian created a Fake Boarding Pass Generator website, allowing anyone to create a fake Northwest Airlines boarding pass: any name, airport, date, flight. This action got him visited by the FBI, who later came back, smashed open his front door, and seized his computers and other belongings. It resulted in calls for his arrest, the most visible from Rep. Edward Markey (D-Massachusetts), who has since recanted. And it’s gotten him more publicity than he ever dreamed of.
All for demonstrating a known and obvious vulnerability in airport security involving boarding passes and IDs.
This vulnerability is nothing new. There was an article on CSOonline from February 2006. There was an article on Slate from February 2005. Sen. Chuck Schumer spoke about it in 2005 as well. I wrote about it in the August 2003 issue of Crypto-Gram. It’s possible I was the first person to publish it, but I certainly wasn’t the first person to think of it.
It’s kind of obvious, really. If you can make a fake boarding pass, you can get through airport security with it. Big deal; we know.
You can also use a fake boarding pass to fly on someone else’s ticket. The trick is to have two boarding passes: one legitimate, in the name the reservation is under, and another phony one that matches the name on your photo ID. Use the fake boarding pass in your name to get through airport security, and the real ticket in someone else’s name to board the plane.
This means that a terrorist on the no-fly list can get on a plane: He buys a ticket in someone else’s name, perhaps using a stolen credit card, and uses his own photo ID and a fake ticket to get through airport security. Since the ticket is in an innocent’s name, it won’t raise a flag on the no-fly list.
You can also use a fake boarding pass instead of your real one if you have the “SSSS” mark and want to avoid secondary screening, or if you don’t have a ticket but want to get into the gate area.
Historically, forging a boarding pass was difficult. It required special paper and equipment. But since Alaska Airlines started the trend in 1999, most airlines now allow you to print your boarding pass using your home computer and bring it with you to the airport. This program was temporarily suspended after 9/11, but was quickly brought back because of pressure from the airlines. People who print the boarding passes at home can go directly to airport security, and that means fewer airline agents are required.
Airline websites generate boarding passes as graphics files, which means anyone with a little bit of skill can modify them in a program like Photoshop. All Soghoian’s website did was automate the process with a single airline’s boarding passes.
Soghoian claims that he wanted to demonstrate the vulnerability. You could argue that he went about it in a stupid way, but I don’t think what he did is substantively worse than what I wrote in 2003. Or what Schumer described in 2005. Why is it that the person who demonstrates the vulnerability is vilified while the person who describes it is ignored? Or, even worse, the organization that causes it is ignored? Why are we shooting the messenger instead of discussing the problem?
As I wrote in 2005: “The vulnerability is obvious, but the general concepts are subtle. There are three things to authenticate: the identity of the traveler, the boarding pass and the computer record. Think of them as three points on the triangle. Under the current system, the boarding pass is compared to the traveler’s identity document, and then the boarding pass is compared with the computer record. But because the identity document is never compared with the computer record—the third leg of the triangle—it’s possible to create two different boarding passes and have no one notice. That’s why the attack works.”
The way to fix it is equally obvious: Verify the accuracy of the boarding passes at the security checkpoints. If passengers had to scan their boarding passes as they went through screening, the computer could verify that the boarding pass already matched to the photo ID also matched the data in the computer. Close the authentication triangle and the vulnerability disappears.
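The fix described above can be sketched in a few lines. This is a hypothetical illustration, not any airline’s or the TSA’s actual system; the data fields and the reservation lookup are assumptions made for the example.

```python
# Toy model of "closing the authentication triangle" at the checkpoint.
# All names, fields, and the reservation table are illustrative.
from dataclasses import dataclass

@dataclass
class BoardingPass:
    passenger_name: str
    flight: str
    record_locator: str  # pointer into the airline's reservation system

@dataclass
class PhotoID:
    name: str

# Stand-in for the airline reservation database: record locator -> name.
RESERVATIONS = {"ABC123": "ALICE EXAMPLE"}

def checkpoint_ok(bp: BoardingPass, id_doc: PhotoID) -> bool:
    # Leg 1: boarding pass vs. photo ID (this is all that is checked today).
    if bp.passenger_name != id_doc.name:
        return False
    # Legs 2 and 3: boarding pass vs. the computer record. Because the pass
    # already matched the ID, this transitively compares the ID to the
    # record and closes the triangle.
    return RESERVATIONS.get(bp.record_locator) == bp.passenger_name

# A forged pass that matches the ID but no real reservation now fails:
fake = BoardingPass("MALLORY MALLET", "NW100", "ABC123")
assert not checkpoint_ok(fake, PhotoID("MALLORY MALLET"))

real = BoardingPass("ALICE EXAMPLE", "NW100", "ABC123")
assert checkpoint_ok(real, PhotoID("ALICE EXAMPLE"))
```

The two-pass attack works precisely because today’s checkpoint implements only the first comparison; adding the database lookup is the entire fix.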
But before we start spending time and money and Transportation Security Administration agents, let’s be honest with ourselves: The photo ID requirement is no more than security theater. Its only security purpose is to check names against the no-fly list, which would still be a joke even if it weren’t so easy to circumvent. Identification is not a useful security measure here.
Interestingly enough, while the photo ID requirement is presented as an antiterrorism security measure, it is really an airline-business security measure. It was first implemented after the explosion of TWA Flight 800 over the Atlantic in 1996. The government originally thought a terrorist bomb was responsible, but the explosion was later shown to be an accident.
Unlike every other airplane security measure—including reinforcing cockpit doors, which could have prevented 9/11—the airlines didn’t resist this one, because it solved a business problem: the resale of non-refundable tickets. Before the photo ID requirement, these tickets were regularly advertised in classified pages: “Round trip, New York to Los Angeles, 11/21-30, male, $100.” Since the airlines never checked IDs, anyone of the correct gender could use the ticket. Airlines hated that, and tried repeatedly to shut that market down. In 1996, the airlines were finally able to solve that problem and blame it on the FAA and terrorism.
So business is why we have the photo ID requirement in the first place, and business is why it’s so easy to circumvent it. Instead of going after someone who demonstrates an obvious flaw that is already public, let’s focus on the organizations that are actually responsible for this security failure and have failed to do anything about it for all these years. Where’s the TSA’s response to all this?
The problem is real, and the Department of Homeland Security and TSA should either fix the security or scrap the system. What we’ve got now is the worst security system of all: one that annoys everyone who is innocent while failing to catch the guilty.
This is my 30th essay for Wired.com:
Older mentions of the vulnerability:
This article argues that most of the $44 billion spent in the U.S. on bioterrorism defense has been wasted.
Interview with a pickpocket expert:
Swiss police considering using Trojans for VoIP tapping:
Lousy home security installation. (Yes, it’s an advertisement. But there are still important security lessons in the blog post.)
I don’t think I’ve ever read anyone talking about class issues as they relate to security before.
This interesting article in “The New York Times” illustrates that the problem of agricultural safety and security mirrors the security issues in computer networks, especially with the monoculture in operating systems and network protocols.
Interesting speculation: “Warning Signs for Tomorrow.”
Good essay on perceived vs. actual risk. The hook is Mayor Daley of Chicago demanding a no-fly-zone over Chicago in the wake of the New York City airplane crash.
And, on the same topic, why it doesn’t make sense to ban small aircraft from cities as a terrorism defense.
Blog entry URL:
Doonesbury on terrorism and fear:
Really interesting article about online hacker forums, especially the politics that goes on in them.
Real-world social engineering crime:
Here’s another social-engineering story (link in Turkish). The police receive an anonymous emergency call from someone claiming to have planted an explosive in the Haydarpasa Numune Hospital. They evacuate the hospital (100 patients plus doctors, staff, visitors, etc.) and search the place for two hours. They find nothing. When patients and visitors return, they realize that their valuables were stolen.
Paramedic stopped at airport security for nitroglycerine residue. (At least we know those chemical-residue detectors are working.)
If you have control of a network of computers—by infecting them with some sort of malware—the hard part is controlling that network. Traditionally, these computers (called zombies) are controlled via IRC. But IRC can be detected and blocked, so the hackers have adapted:
The trick here is to not let the computer’s legitimate owner know that someone else is controlling it. It’s an arms race between attacker and defender.
Microsoft’s Privacy Guidelines for Developing Software and Services. It’s actually pretty good:
Canadian “Guidelines for Identification and Authentication,” released by the Canadian Privacy Commissioner, is a good document discussing both privacy risks and security threats.
And here’s a longer document published in 2004 by Industry Canada: “Principles for Electronic Authentication.”
Blog entry URL:
Surveillance as performance art:
This is extreme, but the level of surveillance is likely to be the norm. It won’t be on a public website available to everyone, but it will be available to governments and corporations.
“Mother Jones” article on Google and privacy:
They may be great at keeping you from taking your bottle of water onto the plane, but when it comes to catching actual bombs and guns they’re not very good: “Screeners at Newark Liberty International Airport, one of the starting points for the Sept. 11 hijackers, failed 20 of 22 security tests conducted by undercover U.S. agents last week, missing concealed bombs and guns at checkpoints throughout the major air hub’s three terminals, according to federal security officials.”
As I’ve written before, this is actually a very hard problem to solve:
Remember this truism: We can’t keep weapons out of prisons. We can’t possibly keep them out of airports.
The Data Privacy and Integrity Advisory Committee of the Department of Homeland Security recommended against putting RFID chips in identity cards. It’s only a draft report, but what it says is so controversial that a vote on the final report is being delayed.
Online ID theft hyped, to no one’s surprise:
CEO arrested for stealing the identities of his employees:
This guy wants to give students bullet-proof textbooks to help in the case of school shootings. You can’t make this stuff up.
New U.S. Customs database on trucks and travelers. It’s yet another massive government surveillance program:
Classical crypto with lasers. I simply don’t have the physics background to evaluate it:
On August 18 of last year, the Zotob worm badly infected computers at the Department of Homeland Security, particularly the 1,300 workstations running the US-VISIT application at border crossings. Wired News filed a Freedom of Information Act request for details, which was denied. So they sued. Eventually the government was forced to cough up the documents. They say nothing about the technical workings of the computer systems, and only point to the incompetence of the DHS in handling the incident.
Seagate has announced a product called DriveTrust, which provides hardware-based encryption on the drive itself. The technology is proprietary, but they use standard algorithms: AES and triple-DES, RSA, and SHA-1. Details on the key management are sketchy, but the system requires a pre-boot password, a biometric, or some combination of the two to access the disk. And Seagate is working on some sort of enterprise-wide key management system to make it easier to deploy the technology company-wide. The first target market is laptop computers. No computer manufacturer has announced support for DriveTrust yet.
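Since the key-management details are sketchy, here is only the general shape such systems usually take: a random disk key encrypts the sectors, and a key derived from the pre-boot password merely wraps it. This is a toy sketch, not Seagate’s design; the parameters are assumptions, and the XOR “wrap” stands in for a real key-wrap algorithm purely for brevity.

```python
# Illustrative sketch of password-wrapped full-disk-encryption keys.
# NOT Seagate's actual (proprietary) scheme.
import hashlib
import os

def derive_kek(password: str, salt: bytes) -> bytes:
    # Stretch the pre-boot password into a key-encrypting key (KEK).
    # Iteration count and hash are assumptions for illustration.
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, 100_000)

def wrap_key(disk_key: bytes, kek: bytes) -> bytes:
    # Toy wrap: XOR with a keystream derived from the KEK. A real drive
    # would use an authenticated key-wrap mode; this only shows the shape.
    stream = hashlib.sha256(kek).digest()[: len(disk_key)]
    return bytes(a ^ b for a, b in zip(disk_key, stream))

salt = os.urandom(16)
disk_key = os.urandom(32)  # the key that actually encrypts sectors
kek = derive_kek("pre-boot password", salt)
blob = wrap_key(disk_key, kek)          # only the wrapped blob is stored
assert wrap_key(blob, kek) == disk_key  # correct password recovers the key
assert wrap_key(blob, derive_kek("wrong", salt)) != disk_key
```

The point of this indirection is that changing the password only requires re-wrapping one 32-byte key, not re-encrypting the whole disk, and an enterprise key-management system can keep escrow copies of the wrapped key.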
It’s easy to skim personal information off an RFID credit card.
Why management doesn’t get IT security:
“Keyboards and Covert Channels.” Interesting research.
“Deconstructing Information Warfare”
The Future of Identity in the Information Society (FIDIS) hates RFID passports:
Good essay on data mining.
Cryptography comic: Alice, Bob, and Eve. (I get a mention, too.)
A classified Wikipedia for the U.S. intelligence services:
The political firestorm over former U.S. Rep. Mark Foley’s salacious instant messages hides another issue, one about privacy. We are rapidly turning into a society where our intimate conversations can be saved and made public later. This represents an enormous loss of freedom and liberty, and the only way to solve the problem is through legislation.
Everyday conversation used to be ephemeral. Whether face-to-face or by phone, we could be reasonably sure that what we said disappeared as soon as we said it. Of course, organized crime bosses worried about phone taps and room bugs, but that was the exception. Privacy was the default assumption.
This has changed. We now type our casual conversations. We chat in e-mail, with instant messages on our computer and SMS messages on our cell phones, and in comments on social networking Web sites like Friendster, LiveJournal, and MySpace. These conversations—with friends, lovers, colleagues, fellow employees—are not ephemeral; they leave their own electronic trails.
We know this intellectually, but we haven’t truly internalized it. We type on, engrossed in conversation, forgetting that we’re being recorded.
Foley’s instant messages were saved by the young men he talked to, but they could have also been saved by the instant messaging service. There are tools that allow both businesses and government agencies to monitor and log IM conversations. E-mail can be saved by your ISP or by the IT department in your corporation. Gmail, for example, saves everything, even if you delete it.
And these conversations can come back to haunt people—in criminal prosecutions, divorce proceedings or simply as embarrassing disclosures. During the 1998 Microsoft anti-trust trial, the prosecution pored over masses of e-mail, looking for a smoking gun. Of course they found things; everyone says things in conversation that, taken out of context, can prove anything.
The moral is clear: If you type it and send it, prepare to explain it in public later.
And voice is no longer a refuge. Face-to-face conversations are still safe, but we know that the National Security Agency is monitoring everyone’s international phone calls. (They said nothing about SMS messages, but one can assume they were monitoring those too.) Routine recording of phone conversations is still rare—certainly the NSA has the capability—but will become more common as telephone calls continue migrating to the IP network.
If you find this disturbing, you should. Fewer conversations are ephemeral, and we’re losing control over the data. We trust our ISPs, employers and cell phone companies with our privacy, but again and again they’ve proven they can’t be trusted. Identity thieves routinely gain access to these repositories of our information. Paris Hilton and other celebrities have been the victims of hackers breaking into their cell phone providers’ networks. Google reads our Gmail and inserts context-dependent ads.
Even worse, normal constitutional protections don’t apply to much of this. The police need a court-issued warrant to search our papers or eavesdrop on our communications, but can simply issue a subpoena—or ask nicely or threateningly—for data of ours that is held by a third party, including stored copies of our communications.
The Justice Department wants to make this problem even worse, by forcing ISPs and others to save our communications—just in case we’re someday the target of an investigation. This is not only bad privacy and security, it’s a blow to our liberty as well. A world without ephemeral conversation is a world without freedom.
We can’t turn back technology; electronic communications are here to stay. But as technology makes our conversations less ephemeral, we need laws to step in and safeguard our privacy. We need a comprehensive data privacy law, protecting our data and communications regardless of where it is stored or how it is processed. We need laws forcing companies to keep it private and to delete it as soon as it is no longer needed.
And we need to remember, whenever we type and send, we’re being watched.
Foley is an anomaly. Most of us do not send instant messages in order to solicit sex with minors. Law enforcement might have a legitimate need to access Foley’s IMs, e-mails and cell phone calling logs, but that’s why there are warrants supported by probable cause—they help ensure that investigations are properly focused on suspected pedophiles, terrorists and other criminals. We saw this in the recent UK terrorist arrests; focused investigations on suspected terrorists foiled the plot, not broad surveillance of everyone without probable cause.
Without legal privacy protections, the world becomes one giant airport security area, where the slightest joke—or comment made years before—lands you in hot water. The world becomes one giant market-research study, where we are all life-long subjects. The world becomes a police state, where we all are assumed to be Foleys and terrorists in the eyes of the government.
This essay originally appeared on Forbes.com:
I have previously written and spoken about the privacy threats that come from the confluence of government and corporate interests. It’s not the deliberate police-state privacy invasions from governments that worry me, but the normal-business privacy invasions by corporations—and how corporate privacy invasions pave the way for government privacy invasions and vice versa.
The U.S. government’s airline passenger profiling system was called Secure Flight, and I’ve written about it extensively. At one point, the system was going to perform automatic background checks on all passengers based on both government and commercial databases—credit card databases, phone records, whatever—and assign everyone a “risk score” based on the data. Those with a higher risk score would be searched more thoroughly than those with a lower risk score. It’s a complete waste of time, and a huge invasion of privacy, and the last time I paid attention it had been scrapped.
But the very same system that is useless at picking terrorists out of passenger lists is probably very good at identifying consumers. So what the government rightly decided not to do, the start-up corporation Jetera is doing instead:
“Jetera would start with an airline’s information on individual passengers on board a given flight, drawing the name, address, credit card number and loyalty club status from reservations data. Through a process, for which it seeks a patent, the company would match the passenger’s identification data with the mountains of information about him or her available at one of the mammoth credit bureaus, which maintain separately managed marketing as well as credit information. Jetera would tap into the marketing side, showing consumer demographics, purchases, interests, attitudes and the like.
“Jetera’s data manipulation would shape the entertainment made available to each passenger during a flight. The passenger who subscribes to a do-it-yourself magazine might be offered a video on woodworking. Catalog purchase records would boost some offerings and downplay others. Sports fans, known through their subscriptions, credit card ticket-buying or booster club memberships, would get ‘The Natural’ instead of ‘Pretty Woman.'”
The article is dated August 21, 2006, and is subscriber-only. Most of it talks about the revenue potential of the model, the funding the company received, and the talks it has had with anonymous airlines. No airline has signed up for the service yet, which would include not only in-flight personalization but also pre- and post-flight mailings and other personalized services. Privacy is dealt with at the end of the article:
“Jetera sees two legal issues regarding privacy and resolves both in its favor. Nothing Jetera intends to do would violate federal law or airline privacy policies as expressed on their websites. In terms of customer perceptions, Jetera doesn’t intend to abuse anyone’s privacy and will have an ‘opt-out’ opportunity at the point where passengers make inflight entertainment choices.
“If an airline wants an opt-out feature at some other point in the process, Jetera will work to provide one, McChesney says. Privacy and customer service will be an issue for each airline, and Jetera will adapt specifically to each.”
The U.S. government already collects data from the phone company, from hotels and rental-car companies, and from airlines. How long before it piggybacks onto this system?
The other side to this is in the news, too: commercial databases using government data:
“Records once held only in paper form by law enforcement agencies, courts and corrections departments are now routinely digitized and sold in bulk to the private sector. Some commercial databases now contain more than 100 million criminal records. They are updated only fitfully, and expunged records now often turn up in criminal background checks ordered by employers and landlords.”
BT Acquires Counterpane:
On October 25, British Telecom announced that it acquired Counterpane Internet Security, Inc.
This is something I’ve been working on for about a year, and I’m thrilled that it has finally come to pass.
Trade and news media:
Best blog comment ever:
Commentary from one of our investors:
Blog entry URL:
You’ve seen them: those large concrete blocks in front of skyscrapers, monuments and government buildings, designed to protect against car and truck bombs. They sprang up like weeds in the months after 9/11, but the idea is much older. The prettier ones doubled as planters; the uglier ones just stood there.
Form follows function. From medieval castles to modern airports, security concerns have always influenced architecture. Castles appeared during the reign of King Stephen of England because they were the best way to defend the land and there wasn’t a strong king to put any limits on castle-building. But castle design changed over the centuries in response to both innovations in warfare and politics, from motte-and-bailey to concentric design in the late medieval period to entirely decorative castles in the 19th century.
These changes were expensive. The problem is that architecture tends toward permanence, while security threats change much faster. Something that seemed a good idea when a building was designed might make little sense a century—or even a decade—later. But by then it’s hard to undo those architectural decisions.
When Syracuse University built a new campus in the mid-1970s, the student protests of the late 1960s were fresh on everybody’s mind. So the architects designed a college without the open greens of traditional college campuses. It’s now 30 years later, but Syracuse University is stuck defending itself against an obsolete threat.
Similarly, hotel entries in Montreal were elevated above street level in the 1970s, in response to security worries about Quebecois separatists. Today the threat is gone, but those older hotels continue to be maddeningly difficult to navigate.
Also in the 1970s, the Israeli consulate in New York built a unique security system: a two-door vestibule that allowed guards to identify visitors and control building access. Now this kind of entryway is widespread, and buildings with it will remain unwelcoming long after the threat is gone.
The same thing can be seen in cyberspace as well. In his book, “Code and Other Laws of Cyberspace,” Lawrence Lessig describes how decisions about technological infrastructure—the architecture of the internet—become embedded and then impracticable to change. Whether it’s technologies to prevent file copying, limit anonymity, record our digital habits for later investigation or reduce interoperability and strengthen monopoly positions, once technologies based on these security concerns become standard it will take decades to undo them.
It’s dangerously shortsighted to make architectural decisions based on the threat of the moment without regard to the long-term consequences of those decisions.
Concrete building barriers are an exception: They’re removable. They started appearing in Washington, D.C., in 1983, after the truck bombing of the Marine barracks in Beirut. After 9/11, they were a sort of bizarre status symbol: They proved your building was important enough to deserve protection. In New York City alone, more than 50 buildings were protected in this fashion.
Today, they’re slowly coming down. Studies have found they impede traffic flow, turn into giant ashtrays and can pose a security risk by becoming flying shrapnel if exploded.
We should be thankful they can be removed, and did not end up as permanent aspects of our cities’ architecture. We won’t be so lucky with some of the design decisions we’re seeing about internet architecture.
This essay originally appeared in Wired.com.
Concrete barriers coming down in New York:
Activism-restricting architecture at the University of Texas:
Commentary from the Architectures of Control in Design Blog.
I’ll just quote this bit: “Files are encrypted in place using the 524,288 Bit cipher SCC, better know [sic] as the king of ciphers.”
For reference, here’s my snake oil guide from 1999.
Heathrow airport is testing an iris scan biometric machine to identify passengers at customs.
I’ve written previously about biometrics: when they work and when they fail: “Biometrics are powerful and useful, but they are not keys. They are useful in situations where there is a trusted path from the reader to the verifier; in those cases all you need is a unique identifier. They are not useful when you need the characteristics of a key: secrecy, randomness, the ability to update or destroy. Biometrics are unique identifiers, but they are not secrets.”
The system under trial at Heathrow is a good use of biometrics. There’s a trusted path from the person through the reader to the verifier; attempts to use fake eyeballs will be immediately obvious and suspicious. The verifier is being asked to match a biometric with a specific reference, and not to figure out who the person is from his or her biometric. There’s no need for secrecy or randomness; it’s not being used as a key. And it has the potential to really speed up customs lines.
Residents of Prescott Valley are being invited to register their cars if they don’t drive them in the middle of the night. Police will then stop those cars if they are on the road at that time, under the assumption that they’re stolen:
“The Watch Your Car decal program is a voluntary program whereby vehicle owners enroll their vehicles with the AATA. The vehicle is then entered into a special database, developed and maintained by the AATA, which is directly linked to the Motor Vehicle Division (MVD).
“Participants then display the Watch Your Car decals in the front and rear windows of their vehicle. By displaying the decals, vehicle owners convey to law enforcement officials that their vehicle is not usually in use between the hours of 1:00 AM and 5:00 AM, when the majority of thefts occur.
“If a police officer witnesses the vehicle in operation between these hours, they have the authority to pull it over and question the driver. With access to the MVD database, the officer will be able to determine if the vehicle has been stolen, or not. The program also allows law enforcement officials to notify the vehicle’s owner immediately upon determination that it is being illegally operated.”
This program is entirely optional, but there’s a serious externality. If the police spend time chasing false alarms, they’re not available for other police business. If the town charged car owners a fine for each false alarm, I would have no problems with this program. It doesn’t have to be a large fine, but it has to be enough to offset the cost to the town. It’s no different than police departments charging homeowners for false burglar alarms, when the alarm systems are automatically hooked into the police stations.
The BBC is reporting a “major” hole in air cargo security. Basically, cargo is being flown on passenger planes without being screened. A would-be terrorist could therefore blow up a passenger plane by shipping a bomb via FedEx.
In general, cargo deserves much less security scrutiny than passengers. Here’s the reasoning:
Cargo planes are much less of a terrorist risk than passenger planes, because terrorism is about innocents dying. Blowing up a planeload of FedEx packages is annoying, but not nearly as terrorizing as blowing up a planeload of tourists. Hence, the security around air cargo doesn’t have to be as strict.
Given that, if most air cargo flies around on cargo planes, then it’s okay for some small amount of cargo to fly as baggage on passenger planes—assuming the selection is random, and assuming the shipper doesn’t know beforehand which packages will be chosen. A would-be terrorist would be better off taking his bomb and blowing up a bus than shipping it and hoping it might possibly be put on a passenger plane.
At least, that’s the theory. But theory and practice are different.
The British system involves “known shippers”:
“Under a system called ‘known shipper’ or ‘known consignor,’ companies which have been security vetted by government appointed agents can send parcels by air, which do not have to be subjected to any further security checks.
“Unless a package from a known shipper arouses suspicion or is subject to a random search it is taken on trust that its contents are safe.”
“Captain Gary Boettcher, president of the US Coalition Of Airline Pilots Associations, says the ‘known shipper’ system ‘is probably the weakest part of the cargo security today.’
“‘There are approx 1.5 million known shippers in the US. There are thousands of freight forwarders. Anywhere down the line packages can be intercepted at these organisations,’ he said.
“‘Even reliable respectable organisations, you really don’t know who is in the warehouse, who is tampering with packages, putting parcels together.'”
This system has already been exploited by drug smugglers:
“Mr Adeyemi brought pounds of cocaine into Britain unchecked by air cargo, transported from the US by the Federal Express courier company. He did not have to pay the postage.
“This was made possible because he managed to illegally buy the confidential Fed Ex account numbers of reputable and security cleared companies from a former employee.
“An accomplice in the US was able to put the account numbers on drugs parcels which, as they appeared to have been sent by known shippers, arrived unchecked at Stansted Airport.
“When police later contacted the companies whose accounts and security clearance had been so abused they discovered they had suspected nothing.”
And it’s not clear that a terrorist can’t figure out which shipments are likely to be put on passenger aircraft:
“However several large companies such as FedEx and UPS offer clients the chance to follow the progress of their parcels online.
“This is a facility that Chris Yates, an expert on airline security for Jane’s Transport, says could be exploited by terrorists.
“‘From these you can get a fair indication when that package is in the air, if you are looking to get a package into New York from Heathrow at a given time of day.'”
And the BBC reports that 70% of cargo is shipped on passenger planes. That seems like too high a number.
If we had infinite budget, of course we’d screen all air cargo. But we don’t, and it’s a reasonable trade-off to ignore cargo planes and concentrate on passenger planes. But there are some awfully big holes in this system.
Cheyenne Mountain was the United States’ underground command post, designed to survive a direct hit from a nuclear warhead. It’s a Cold War relic—built in the 1960s—and retiring the site is probably a good idea. But this paragraph gives me pause:
“Keating said the new control room, in contrast, could be damaged if a terrorist commandeered a jumbo jet and somehow knew exactly where to crash it. But ‘how unlikely is that? We think very,’ Keating said.”
I agree that this is an unlikely terrorist target, but still.
There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Comments on CRYPTO-GRAM should be sent to email@example.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.
Copyright (c) 2006 by Bruce Schneier.