Entries Tagged "risk assessment"

CYA Security

Since 9/11, we’ve spent hundreds of billions of dollars defending ourselves from terrorist attacks. Stories about the ineffectiveness of many of these security measures are common, but less so are discussions of why they are so ineffective. In short: much of our country’s counterterrorism security spending is not designed to protect us from the terrorists, but instead to protect our public officials from criticism when another attack occurs.

Boston, January 31: As part of a guerrilla marketing campaign, a series of amateur-looking blinking signs depicting characters from Aqua Teen Hunger Force, a show on the Cartoon Network, were placed on bridges, near a medical center, underneath an interstate highway, and in other crowded public places.

Police mistook these signs for bombs and shut down parts of the city, eventually spending over $1M sorting it out. Authorities blasted the stunt as a terrorist hoax, while others ridiculed the Boston authorities for overreacting. Almost no one looked beyond the finger pointing and jeering to discuss exactly why the Boston authorities overreacted so badly. They overreacted because the signs were weird.

If someone left a backpack full of explosives in a crowded movie theater, or detonated a truck bomb in the middle of a tunnel, no one would demand to know why the police hadn’t noticed it beforehand. But if a weird device with blinking lights and wires turned out to be a bomb—what every movie bomb looks like—there would be inquiries and demands for resignations. It took the police two weeks to notice the Mooninite blinkies, but once they did, they overreacted because their jobs were at stake.

This is “Cover Your Ass” security, and unfortunately it’s very common.

Airplane security seems to forever be looking backwards. Pre-9/11, it was bombs, guns, and knives. Then it was small blades and box cutters. Richard Reid tried to blow up a plane, and suddenly we all have to take off our shoes. And after last summer’s liquid plot, we’re stuck with a series of nonsensical bans on liquids and gels.

Once you think about this in terms of CYA, it starts to make sense. The TSA wants to be sure that if there’s another airplane terrorist attack, it’s not held responsible for letting it slip through. One year ago, no one could blame the TSA for not detecting liquids. But since everything seems obvious in hindsight, it’s basic job preservation to defend against what the terrorists tried last time.

We saw this kind of CYA security when Boston and New York randomly checked bags on the subways after the London bombing, or when buildings started sprouting concrete barriers after the Oklahoma City bombing. We also see it in ineffective attempts to detect nuclear bombs; authorities employ CYA security against the media-driven threat so they can say “we tried.”

At the same time, we’re ignoring threat possibilities that don’t make the news as much—against chemical plants, for example. But if there were ever an attack, that would change quickly.

CYA also explains the TSA’s inability to take anyone off the no-fly list, no matter how innocent. No one is willing to risk his career on removing someone from the no-fly list who might—no matter how remote the possibility—turn out to be the next terrorist mastermind.

Another form of CYA security is the overly specific countermeasures we see during big events like the Olympics and the Oscars, or in protecting small towns. In all those cases, those in charge of the specific security don’t dare return the money with a message “use this for more effective general countermeasures.” If they were wrong and something happened, they’d lose their jobs.

And finally, we’re seeing CYA security on the national level, from our politicians. We might be better off as a nation funding intelligence gathering and Arabic translators, but it’s a better re-election strategy to fund something visible but ineffective, like a national ID card or a wall between the U.S. and Mexico.

Securing our nation against weird threats, against threats that either happened before or captured the media’s imagination, and against overly specific threats: these are all examples of CYA security. It happens not because the authorities involved—the Boston police, the TSA, and so on—are not competent or not doing their jobs. It happens because there isn’t sufficient national oversight, planning, and coordination.

People and organizations respond to incentives. We can’t expect the Boston police, the TSA, the guy who runs security for the Oscars, or local public officials to balance their own security needs against the security of the nation. They’re all going to respond to the particular incentives imposed from above. What we need is a coherent antiterrorism policy at the national level: one based on real threat assessments, instead of fear-mongering, re-election strategies, or pork-barrel politics.

Sadly, though, there might not be a solution. All the money is in fear-mongering, re-election strategies, and pork-barrel politics. And, like so many things, security follows the money.

This essay originally appeared on Wired.com.

EDITED TO ADD (2/23): Interesting commentary, and a Slashdot thread.

Posted on February 22, 2007 at 5:52 AM

In Praise of Security Theater

While visiting some friends and their new baby in the hospital last week, I noticed an interesting bit of security. To prevent infant abduction, all babies had RFID tags attached to their ankles by a bracelet. There are sensors on the doors to the maternity ward, and if a baby passes through, an alarm goes off.

Infant abduction is rare, but still a risk. In the last 22 years, about 233 such abductions have occurred in the United States. About 4 million babies are born each year, which means that a baby has a 1-in-375,000 chance of being abducted. Compare this with the infant mortality rate in the U.S.—one in 145—and it becomes clear where the real risks are.

And the 1-in-375,000 chance is not today’s risk. Infant abduction rates have plummeted in recent years, mostly due to education programs at hospitals.
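
For the curious, the arithmetic behind that 1-in-375,000 figure is easy to check. Here is a minimal sketch in Python, using the rounded numbers above and treating each birth as one independent chance of abduction, which is an approximation (and it ignores the recent decline just mentioned):

    # Back-of-the-envelope check of the abduction numbers quoted above.
    abductions_over_22_years = 233
    births_per_year = 4_000_000

    abductions_per_year = abductions_over_22_years / 22    # roughly 10.6 per year
    odds = births_per_year / abductions_per_year            # roughly 1 in 378,000

    print(f"Abduction risk: about 1 in {round(odds):,}")
    print("Infant mortality risk: about 1 in 145")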

So why are hospitals bothering with RFID bracelets? I think they’re primarily there to reassure the mothers. Many times during my friends’ stay at the hospital, the doctors had to take the baby away for this or that test. Millions of years of evolution have forged a strong bond between new parents and new baby; the RFID bracelets are a low-cost way to ensure that the parents are more relaxed when their baby is out of their sight.

Security is both a reality and a feeling. The reality of security is mathematical, based on the probability of different risks and the effectiveness of different countermeasures. We know the infant abduction rates and how well the bracelets reduce those rates. We also know the cost of the bracelets, and can thus calculate whether they’re a cost-effective security measure or not. But security is also a feeling, based on individual psychological reactions to both the risks and the countermeasures. And the two things are different: You can be secure even though you don’t feel secure, and you can feel secure even though you’re not really secure.
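
To make that calculation concrete, here is the kind of expected-value sketch it implies. Only the abduction rate comes from the numbers above; the effectiveness, the per-baby cost, and the dollar value assigned to a prevented abduction are made-up placeholders, so this shows the shape of the reasoning rather than any hospital's actual figures:

    # Toy cost-effectiveness sketch; every input except the first is an
    # illustrative placeholder, not real data.
    risk_per_birth = 1 / 375_000        # abduction risk per birth, from above
    effectiveness = 0.9                 # hypothetical: fraction of abductions the bracelets stop
    cost_per_birth = 10.0               # hypothetical: dollars per baby for tags, sensors, staff time
    value_of_prevention = 1_000_000     # hypothetical: dollars assigned to one prevented abduction

    expected_benefit = risk_per_birth * effectiveness * value_of_prevention
    print(f"Expected benefit per birth: ${expected_benefit:.2f}")   # about $2.40 with these inputs
    print(f"Cost per birth:             ${cost_per_birth:.2f}")

Whether the bracelets pencil out depends entirely on those inputs; the point is only that the reality side of security is something you can, at least in principle, compute.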

The RFID bracelets are what I’ve come to call security theater: security primarily designed to make you feel more secure. I’ve regularly maligned security theater as a waste, but it’s not always, and not entirely, so.

It’s only a waste if you consider the reality of security exclusively. There are times when people feel less secure than they actually are. In those cases—like with mothers and the threat of baby abduction—a palliative countermeasure that primarily increases the feeling of security is just what the doctor ordered.

Tamper-resistant packaging for over-the-counter drugs started to appear in the 1980s, in response to some highly publicized poisonings. As a countermeasure, it’s largely security theater. It’s easy to poison many foods and over-the-counter medicines right through the seal—with a syringe, for example—or to open and replace the seal well enough that an unwary consumer won’t detect it. But in the 1980s, there was a widespread fear of random poisonings in over-the-counter medicines, and tamper-resistant packaging brought people’s perceptions of the risk more in line with the actual risk: minimal.

Much of the post-9/11 security can be explained by this as well. I’ve often talked about the National Guard troops in airports right after the terrorist attacks, and the fact that they had no bullets in their guns. As a security countermeasure, it made little sense for them to be there. They didn’t have the training necessary to improve security at the checkpoints, or even to be another useful pair of eyes. But to reassure a jittery public that it’s OK to fly, it was probably the right thing to do.

Security theater also addresses the ancillary risk of lawsuits. Lawsuits are ultimately decided by juries, or settled because of the threat of jury trial, and juries are going to decide cases based on their feelings as well as the facts. It’s not enough for a hospital to point to infant abduction rates and rightly claim that RFID bracelets aren’t worth it; the other side is going to put a weeping mother on the stand and make an emotional argument. In these cases, security theater provides real security against the legal threat.

Like real security, security theater has a cost. It can cost money, time, concentration, freedoms and so on. It can come at the cost of reducing the things we can do. Most of the time security theater is a bad trade-off, because the costs far outweigh the benefits. But there are instances when a little bit of security theater makes sense.

We make smart security trade-offs—and by this I mean trade-offs for genuine security—when our feeling of security closely matches the reality. When the two are out of alignment, we get security wrong. Security theater is no substitute for security reality, but, used correctly, security theater can be a way of raising our feeling of security so that it more closely matches the reality of security. It makes us feel more secure handing our babies off to doctors and nurses, buying over-the-counter medicines, and flying on airplanes—closer to how secure we should feel if we had all the facts and did the math correctly.

Of course, too much security theater and our feeling of security becomes greater than the reality, which is also bad. And others—politicians, corporations and so on—can use security theater to make us feel more secure without doing the hard work of actually making us secure. That’s the usual way security theater is used, and why I so often malign it.

But to write off security theater completely is to ignore the feeling of security. And as long as people are involved with security trade-offs, that’s never going to work.

This essay appeared on Wired.com, and is dedicated to my new godson, Nicholas Quillen Perry.

EDITED TO ADD: This essay has been translated into Portuguese.

Posted on January 25, 2007 at 5:50 AM

Debating Full Disclosure

Full disclosure—the practice of making the details of security vulnerabilities public—is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers (See The Vulnerability Disclosure Game: Are We More Secure?). The problem, according to this position, is less the vulnerability itself and more the information about the vulnerability.

But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

To understand why the second assumption isn’t true, you need to understand the underlying economics. To a software company, vulnerabilities are largely an externality. That is, they affect you—the user—much more than they affect it. A smart vendor treats vulnerabilities less as a software problem, and more as a PR problem. So if we, the user community, want software vendors to patch vulnerabilities, we need to make the PR problem more acute.

Full disclosure does this. Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

It wasn’t until researchers published complete details of the vulnerabilities that the software companies started fixing them.

Of course, the software companies hated this. They received bad PR every time a vulnerability was made public, and the only way to get some good PR was to quickly release a patch. For a large company like Microsoft, this was very expensive.

So a bunch of software companies, and some security researchers, banded together and invented “responsible disclosure” (See “The Chilling Effect”). The basic idea was that the threat of publishing the vulnerability is almost as good as actually publishing it. A responsible researcher would quietly give the software vendor a head start on patching its software, before releasing the vulnerability to the public.

This was a good idea—and these days it’s normal procedure—but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.

The moral here doesn’t just apply to software; it’s very general. Public scrutiny is how security improves, whether we’re talking about software or airport security or government counterterrorism measures. Yes, there are trade-offs. Full disclosure means that the bad guys learn about the vulnerability at the same time as the rest of us—unless, of course, they knew about it beforehand—but most of the time the benefits far outweigh the disadvantages.

Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security, and inhibits security education that leads to improvements. Secrecy doesn’t improve security; it stifles it.

I’d rather have as much information as I can to make an informed decision about security, whether it’s a buying decision about a software product or an election decision about two political parties. I’d rather have the information I need to pressure vendors to improve security.

I don’t want to live in a world where companies can sell me software they know is full of holes or where the government can implement security measures without accountability. I much prefer a world where I have all the information I need to assess and protect my own security.

This essay originally appeared on CSOOnline, as part of a series of essays on the topic. Marcus Ranum wrote against the practice of disclosing vulnerabilities, and Mark Miller of Microsoft wrote in favor of responsible disclosure. These are on-line-only sidebars to a very interesting article in CSO Magazine, “The Chilling Effect,” about the confluence of forces that are making it harder to research and disclose vulnerabilities in web-based software:

“Laws say you can’t access computers without permission,” she [attorney Jennifer Granick] explains. “Permission on a website is implied. So far, we’ve relied on that. The Internet couldn’t work if you had to get permission every time you wanted to access something. But what if you’re using a website in a way that’s possible but that the owner didn’t intend? The question is whether the law prohibits you from exploring all the ways a website works,” including through vulnerabilities.

All the links are worth reading in full.

A Simplified Chinese translation by Xin LI is available on Delphij’s Chaos.

Posted on January 23, 2007 at 6:45 AM

Automated Targeting System

If you’ve traveled abroad recently, you’ve been investigated. You’ve been assigned a score indicating what kind of terrorist threat you pose. That score is used by the government to determine the treatment you receive when you return to the U.S. and for other purposes as well.

Curious about your score? You can’t see it. Interested in what information was used? You can’t know that. Want to clear your name if you’ve been wrongly categorized? You can’t challenge it. Want to know what kind of rules the computer is using to judge you? That’s secret, too. So is when and how the score will be used.

U.S. customs agencies have been quietly operating this system for several years. Called the Automated Targeting System, it assigns a “risk assessment” score to people entering or leaving the country, or engaging in import or export activity. This score, and the information used to derive it, can be shared with federal, state, local and even foreign governments. It can be used if you apply for a government job, grant, license, contract or other benefit. It can be shared with nongovernmental organizations and individuals in the course of an investigation. In some circumstances private contractors can get it, even those outside the country. And it will be saved for 40 years.

Little is known about this program. Its bare outlines were disclosed in the Federal Register in October. We do know that the score is partially based on details of your flight record—where you’re from, how you bought your ticket, where you’re sitting, any special meal requests—or on motor vehicle records, as well as on information from crime, watch-list and other databases.

Civil liberties groups have called the program Kafkaesque. But I have an even bigger problem with it. It’s a waste of money.

The idea of feeding a limited set of characteristics into a computer, which then somehow divines a person’s terrorist leanings, is farcical. Uncovering terrorist plots requires intelligence and investigation, not large-scale processing of everyone.

Additionally, any system like this will generate so many false alarms as to be completely unusable. In 2005 Customs & Border Protection processed 431 million people. Assuming an unrealistic model that identifies terrorists (and innocents) with 99.9% accuracy, that’s still 431,000 false alarms annually.
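
That false-alarm figure is the classic base-rate problem, and the arithmetic is worth spelling out. In the sketch below, the 99.9% accuracy is the deliberately generous assumption from the paragraph above, and the number of actual terrorists among those travelers is a placeholder invented for illustration:

    # Base-rate arithmetic behind the 431,000 false alarms.
    travelers = 431_000_000          # people processed by Customs & Border Protection in 2005
    false_positive_rate = 0.001      # the essay's generous "99.9% accurate" model
    real_terrorists = 10             # placeholder; the true number is presumably tiny

    false_alarms = travelers * false_positive_rate
    caught = real_terrorists * 0.999                 # same generous accuracy on the detection side

    print(f"False alarms per year: {false_alarms:,.0f}")                        # 431,000
    print(f"Fraction of alarms that are real: {caught / (caught + false_alarms):.6%}")

Even under that absurdly optimistic model, essentially every alarm is a false alarm.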

The number of false alarms will be much higher than that. The no-fly list is filled with inaccuracies; we’ve all read about innocent people named David Nelson who can’t fly without hours-long harassment. Airline data, too, are riddled with errors.

The odds of this program’s being implemented securely, with adequate privacy protections, are not good. Last year I participated in a government working group to assess the security and privacy of a similar program developed by the Transportation Security Administration, called Secure Flight. After five years and $100 million spent, the program still can’t achieve the simple task of matching airline passengers against terrorist watch lists.

In 2002 we learned about yet another program, called Total Information Awareness, under which the government would collect information on every American and assign him or her a terrorist risk score. Congress found the idea so abhorrent that it halted funding for the program. Two years ago, and again this year, Secure Flight was also banned by Congress until it could pass a series of tests for accuracy and privacy protection.

In fact, the Automated Targeting System is arguably illegal, as well (a point several congressmen made recently); all recent Department of Homeland Security appropriations bills specifically prohibit the department from using profiling systems against persons not on a watch list.

There is something un-American about a government program that uses secret criteria to collect dossiers on innocent people and shares that information with various agencies, all without any oversight. It’s the sort of thing you’d expect from the former Soviet Union or East Germany or China. And it doesn’t make us any safer from terrorism.

This essay, without the links, was published in Forbes. They also published a rebuttal by William Baldwin, although it doesn’t seem to rebut any of the actual points.

Here’s an odd division of labor: a corporate data consultant argues for more openness, while a journalist favors more secrecy.

It’s only odd if you don’t understand security.

Posted on December 22, 2006 at 11:38 AM

Cybercrime Hype Alert

It seems to be the season for cybercrime hype. First, we have this article from CNN, which seems to have no actual news:

Computer hackers will open a new front in the multi-billion pound “cyberwar” in 2007, targeting mobile phones, instant messaging and community Web sites such as MySpace, security experts predict.

As people grow wise to email scams, criminal gangs will find new ways to commit online fraud, sell fake goods or steal corporate secrets.

And next, this article, which claims that criminal organizations are paying student members to get IT degrees:

The most successful cyber crime gangs were based on partnerships between those with the criminal skills and contacts and those with the technical ability, said Mr Day.

“Traditional criminals have the ability to move funds and use all of the background they have,” he said, “but they don’t have the technical expertise.”

As the number of criminal gangs looking to move into cyber crime expanded, it got harder to recruit skilled hackers, said Mr Day. This has led criminals to target university students all around the world.

“Some students are being sponsored through their IT degree,” said Mr Day. Once qualified, the graduates go to work for the criminal gangs.

[…]

The aura of rebellion the name conjured up helped criminals ensnare children as young as 14, suggested the study.

By trawling websites, bulletin boards and chat rooms that offer hacking tools, cracks or passwords for pirated software, criminal recruiters gather information about potential targets.

Once identified, young hackers are drawn in by being rewarded for carrying out low-level tasks such as using a network of hijacked home computers, a botnet, to send out spam.

The low risk of being caught and the relatively high rewards on offer helped the criminal gangs to paint an attractive picture of a cyber criminal’s life, said Mr Day.

As youngsters are drawn in, the stakes are raised and they are told to undertake increasingly risky jobs.

Criminals targeting children—that’s sure to peg anyone’s hype-meter.

To be sure, I don’t want to minimize the threat of cybercrime. Nor do I want to minimize the threat of organized cybercrime. There are more and more criminals prowling the net, and more and more cybercrime has gone up the food chain—to large organized crime syndicates. Cybercrime is big business, and it’s getting bigger.

But I’m not sure if stories like these help or hurt.

Posted on December 14, 2006 at 2:36 PM

The Square Root of Terrorist Intent

I’ve already written about the DHS’s database of top terrorist targets and how dumb it is. Important sites are not on the list, and unimportant ones are. The reason is pork, of course; states get security money based on this list, so every state wants to make sure they have enough sites on it. And over the past five years, states with Republican congressmen got more money than states without.

Here’s another article on this general topic, centering around an obscure quantity: the square root of terrorist intent:

The Department of Homeland Security is the home of many mysteries. There is, of course, the color-coded system for gauging the threat of an attack. And there is the department database of national assets to protect against a terrorist threat, which includes Old MacDonald’s Petting Zoo in Woodville, Ala., and the Apple and Pork Festival in Clinton, Ill.

And now Jim O’Brien, the director of the Office of Emergency Management and Homeland Security in Clark County, Nev., has discovered another hard-to-fathom DHS notion: a mathematical value purporting to represent the square root of terrorist intent. The figure appears deep in the mind-numbingly complex risk-assessment formulas that the department used in 2006 to decide the likelihood that a place is or will become a terrorist target—an all-important estimate outside the Beltway, because greater slices of the federal anti-terrorism pie go to the locations with the highest scores. Overall, the department awarded $711 million in high-risk urban counterterrorism grants last year.

[…]

As O’Brien reviewed the risk-assessment formulas—a series of calculations that runs into the billions—he found himself unable to account for several factors, the terrorist-intent notion principal among them. “I have a Ph.D. I think I understand formulas,” he says. “Take the square root of terrorist intent? Now, give me a break.” The whole notion, O’Brien says, is a contradiction in terms: “How can you quantify what somebody is thinking?”

Other designations for variables in the formula are almost befuddling, O’Brien says, such as the “attractiveness factor,” which seeks to establish how terrorists might prefer one sort of target over another, and the “chatter factor,” which tries to gauge the intent of potential terror plotters based on communication intercepts.

“One man’s garbage is another man’s treasure,” he says. “So I don’t know how you measure attractiveness.” The chatter factor, meanwhile, leaves O’Brien entirely in the dark: “I’m not sure what that means.”

What I said last time still applies:

We’re never going to get security right if we continue to make it a parody of itself.

Posted on December 11, 2006 at 12:18 PM

New U.S. Customs Database on Trucks and Travellers

It’s yet another massive government surveillance program:

US Customs and Border Protection issued a notice in the Federal Register yesterday which detailed the agency’s massive database that keeps risk assessments on every traveler entering or leaving the country. Citizens who are concerned that their information is inaccurate are all but out of luck: the system “may not be accessed under the Privacy Act for the purpose of contesting the content of the record.”

The system in question is the Automated Targeting System, which is associated with the previously-existing Treasury Enforcement Communications System. TECS was built to screen people and assets that moved in and out of the US, and its database contains more than one billion records that are accessible by more than 30,000 users at 1,800 sites around the country. Customs has adapted parts of the TECS system to its own use and now plans to screen all passengers, inbound and outbound cargo, and ships.

The system creates a risk assessment for each person or item in the database. The assessment is generated from information gleaned from federal and commercial databases, provided by people themselves as they cross the border, and the Passenger Name Record information recorded by airlines. This risk assessment will be maintained for up to 40 years and can be pulled up by agents at a moment’s notice in order to evaluate potential threats against the US.

If you leave the country, the government will suddenly know a lot about you. The Passenger Name Record alone contains names, addresses, telephone numbers, itineraries, frequent-flier information, e-mail addresses—even the name of your travel agent. And this information can be shared with plenty of people:

  • Federal, state, local, tribal, or foreign governments
  • A court, magistrate, or administrative tribunal
  • Third parties during the course of a law enforcement investigation
  • Congressional office in response to an inquiry
  • Contractors, grantees, experts, consultants, students, and others performing or working on a contract, service, or grant
  • Any organization or person who might be a target of terrorist activity or conspiracy
  • The United States Department of Justice
  • The National Archives and Records Administration
  • Federal or foreign government intelligence or counterterrorism agencies
  • Agencies or people when it appears that the security or confidentiality of their information has been compromised.

That’s a lot of people who could be looking at your information and your government-designed risk assessment. The one person who won’t be looking at that information is you. The entire system is exempt from inspection and correction under provision 552a (j)(2) and (k)(2) of US Code Title 5, which allows such exemptions when the data in question involves law enforcement or intelligence information.

This means you can’t review your data for accuracy, and you can’t correct any errors.

But the system can be used to give you a risk assessment score, which presumably will affect how you’re treated when you return to the U.S.

I’ve already explained why data mining does not find terrorists or terrorist plots. So have actual math professors. And we’ve seen this kind of “risk assessment score” idea and the problems it causes with Secure Flight.

This needs some mainstream press attention.

EDITED TO ADD (11/4): More commentary here, here, and here.

EDITED TO ADD (11/5): It’s buried in the back pages, but at least The Washington Post wrote about it.

Posted on November 4, 2006 at 9:19 AM

Perceived Risk vs. Actual Risk

I’ve written repeatedly about the difference between perceived and actual risk, and how it explains many seemingly perverse security trade-offs. Here’s a Los Angeles Times op-ed that does the same. The author is Daniel Gilbert, psychology professor at Harvard. (I just recently finished his book Stumbling on Happiness, which is not a self-help book but instead about how the brain works. Strongly recommended.)

The op-ed is about the public’s reaction to the risks of global warming and terrorism, but the points he makes are much more general. He gives four reasons why some risks are perceived to be more or less serious than they actually are:

  1. We over-react to intentional actions, and under-react to accidents, abstract events, and natural phenomena.

    That’s why we worry more about anthrax (with an annual death toll of roughly zero) than influenza (with an annual death toll of a quarter-million to a half-million people). Influenza is a natural accident, anthrax is an intentional action, and the smallest action captures our attention in a way that the largest accident doesn’t. If two airplanes had been hit by lightning and crashed into a New York skyscraper, few of us would be able to name the date on which it happened.

  2. We over-react to things that offend our morals.

    When people feel insulted or disgusted, they generally do something about it, such as whacking each other over the head, or voting. Moral emotions are the brain’s call to action.

    He doesn’t say it, but it’s reasonable to assume that we under-react to things that don’t.

  3. We over-react to immediate threats and under-react to long-term threats.

    The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That’s what brains did for several hundred million years—and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.

    Our ability to duck that which is not yet coming is one of the brain’s most stunning innovations, and we wouldn’t have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.

  4. We under-react to changes that occur slowly and over time.

    The human brain is exquisitely sensitive to changes in light, sound, temperature, pressure, size, weight and just about everything else. But if the rate of change is slow enough, the change will go undetected. If the low hum of a refrigerator were to increase in pitch over the course of several weeks, the appliance could be singing soprano by the end of the month and no one would be the wiser.

It’s interesting to compare this to what I wrote in Beyond Fear (pages 26-27) about perceived vs. actual risk:

  • People exaggerate spectacular but rare risks and downplay common risks. They worry more about earthquakes than they do about slipping on the bathroom floor, even though the latter kills far more people than the former. Similarly, terrorism causes far more anxiety than common street crime, even though the latter claims many more lives. Many people believe that their children are at risk of being given poisoned candy by strangers at Halloween, even though there has been no documented case of this ever happening.
  • People have trouble estimating risks for anything not exactly like their normal situation. Americans worry more about the risk of mugging in a foreign city, no matter how much safer it might be than where they live back home. Europeans routinely perceive the U.S. as being full of guns. Men regularly underestimate how risky a situation might be for an unaccompanied woman. The risks of computer crime are generally believed to be greater than they are, because computers are relatively new and the risks are unfamiliar. Middle-class Americans can be particularly naïve and complacent; their lives are incredibly secure most of the time, so their instincts about the risks of many situations have been dulled.
  • Personified risks are perceived to be greater than anonymous risks. Joseph Stalin said, “A single death is a tragedy, a million deaths is a statistic.” He was right; large numbers have a way of blending into each other. The final death toll from 9/11 was less than half of the initial estimates, but that didn’t make people feel less at risk. People gloss over statistics of automobile deaths, but when the press writes page after page about nine people trapped in a mine—complete with human-interest stories about their lives and families—suddenly everyone starts paying attention to the dangers with which miners have contended for centuries. Osama bin Laden represents the face of Al Qaeda, and has served as the personification of the terrorist threat. Even if he were dead, it would serve the interests of some politicians to keep him “alive” for his effect on public opinion.
  • People underestimate risks they willingly take and overestimate risks in situations they can’t control. When people voluntarily take a risk, they tend to underestimate it. When they have no choice but to take the risk, they tend to overestimate it. Terrorists are scary because they attack arbitrarily, and from nowhere. Commercial airplanes are perceived as riskier than automobiles, because the controls are in someone else’s hands—even though they’re much safer per passenger mile. Similarly, people overestimate even more those risks that they can’t control but think they, or someone, should. People worry about airplane crashes not because we can’t stop them, but because we think as a society we should be capable of stopping them (even if that is not really the case). While we can’t really prevent criminals like the two snipers who terrorized the Washington, DC, area in the fall of 2002 from killing, most people think we should be able to.
  • Last, people overestimate risks that are being talked about and remain an object of public scrutiny. News, by definition, is about anomalies. Endless numbers of automobile crashes hardly make news like one airplane crash does. The West Nile virus outbreak in 2002 killed very few people, but it worried many more because it was in the news day after day. AIDS kills about 3 million people per year worldwide—about three times as many people each day as died in the terrorist attacks of 9/11. If a lunatic goes back to the office after being fired and kills his boss and two coworkers, it’s national news for days. If the same lunatic shoots his ex-wife and two kids instead, it’s local news…maybe not even the lead story.

Posted on November 3, 2006 at 7:18 AM

Air Cargo Security

BBC is reporting a “major” hole in air cargo security. Basically, cargo is being flown on passenger planes without being screened. A would-be terrorist could therefore blow up a passenger plane by shipping a bomb via FedEx.

In general, cargo deserves much less security scrutiny than passengers. Here’s the reasoning:

Cargo planes are much less of a terrorist risk than passenger planes, because terrorism is about innocents dying. Blowing up a planeload of FedEx packages is annoying, but not nearly as terrorizing as blowing up a planeload of tourists. Hence, the security around air cargo doesn’t have to be as strict.

Given that, if most air cargo flies around on cargo planes, then it’s okay for some small amount of cargo to fly as baggage on passenger planes, assuming the selection is random and the shipper doesn’t know beforehand which packages will be chosen. A would-be terrorist would be better off taking his bomb and blowing up a bus than shipping it and hoping it might possibly be put on a passenger plane.

At least, that’s the theory. But theory and practice are different.

The British system involves “known shippers”:

Under a system called “known shipper” or “known consignor,” companies which have been security vetted by government-appointed agents can send parcels by air, which do not have to be subjected to any further security checks.

Unless a package from a known shipper arouses suspicion or is subject to a random search it is taken on trust that its contents are safe.

But:

Captain Gary Boettcher, president of the US Coalition Of Airline Pilots Associations, says the “known shipper” system “is probably the weakest part of the cargo security today”.

“There are approx 1.5 million known shippers in the US. There are thousands of freight forwarders. Anywhere down the line packages can be intercepted at these organisations,” he said.

“Even reliable respectable organisations, you really don’t know who is in the warehouse, who is tampering with packages, putting parcels together.”

This system has already been exploited by drug smugglers:

Mr Adeyemi brought pounds of cocaine into Britain unchecked by air cargo, transported from the US by the Federal Express courier company. He did not have to pay the postage.

This was made possible because he managed to illegally buy the confidential Fed Ex account numbers of reputable and security cleared companies from a former employee.

An accomplice in the US was able to put the account numbers on drugs parcels which, as they appeared to have been sent by known shippers, arrived unchecked at Stansted Airport.

When police later contacted the companies whose accounts and security clearance had been so abused they discovered they had suspected nothing.

And it’s not clear that a terrorist can’t figure out which shipments are likely to be put on passenger aircraft:

However several large companies such as FedEx and UPS offer clients the chance to follow the progress of their parcels online.

This is a facility that Chris Yates, an expert on airline security for Jane’s Transport, says could be exploited by terrorists.

“From these you can get a fair indication when that package is in the air, if you are looking to get a package into New York from Heathrow at a given time of day.

And BBC reports that 70% of cargo is shipped on passenger planes. That seems like too high a number.

If we had infinite budget, of course we’d screen all air cargo. But we don’t, and it’s a reasonable trade-off to ignore cargo planes and concentrate on passenger planes. But there are some awfully big holes in this system.

Posted on October 24, 2006 at 6:11 AM

Perceived Risk vs. Actual Risk

Good essay on perceived vs. actual risk. The hook is Mayor Daley of Chicago demanding a no-fly zone over Chicago in the wake of the New York City airplane crash.

Other politicians (with the spectacular and notable exception of New York City Mayor Michael Bloomberg) and self-appointed “experts” are jumping on the tragic accident—repeat, accident—in New York to sound off again about the “danger” of light aircraft, and how they must be regulated, restricted, banned.

OK, for all of those ranting about “threats” from GA aircraft, we’ll believe that you’re really serious about controlling “threats” when you call for:

  • Banning all vans within cities. A small panel van was used in the first World Trade Center attack. The bomb, which weighed 1,500 pounds, killed six and injured 1,042.
  • Banning all box trucks from cities. Timothy McVeigh’s rented Ryder truck carried a 5,000-pound bomb that killed 168 in Oklahoma City.
  • Banning all semi-trailer trucks. They can carry bombs weighing more than 50,000 pounds.
  • Banning newspapers on subways. That’s how the terrorists hid packages of sarin nerve gas in the Tokyo subway system. They killed 12.
  • Banning backpacks on all buses and subways. That’s how the terrorists got the bombs into the London subway system. They killed 52.
  • Banning all cell phones on trains. That’s how they detonated the bombs in backpacks placed on commuter trains in Madrid. They killed 191.
  • Banning all small pleasure boats on public waterways. That’s how terrorists attacked the USS Cole, killing 17.
  • Banning all heavy or bulky clothing in all public places. That’s how suicide bombers hide their murderous charges. Thousands killed.

Number of people killed by a terrorist attack using a GA aircraft? Zero.

Number of people injured by a terrorist attack using a GA aircraft? Zero.

Property damage from a terrorist attack using a GA aircraft? None.

So Mr. Mayor (and Mr. Governor, Ms. Senator, Mr. Congressman, and Mr. “Expert”), if you’re truly serious about “protecting” the public, advocate all of the bans I’ve listed above. Using the “logic” you apply to general aviation aircraft, you’re forced to conclude that newspapers, winter coats, cell phones, backpacks, trucks, and boats all pose much greater risks to the public.

So be consistent in your logic. If you are dead set on restricting a personal transportation system that carries more passengers than any single airline, reaches more American cities than all the airlines combined, provides employment for 1.3 million American citizens and $160 billion in business “to protect the public,” then restrict or control every other transportation system that the terrorists have demonstrated they can use to kill.

And, on the same topic, why it doesn’t make sense to ban small aircraft from cities as a terrorism defense.

Posted on October 23, 2006 at 10:01 AM
