Entries Tagged "cost-benefit analysis"


Talking to Strangers

In Beyond Fear I wrote: “Many children are taught never to talk to strangers, an extreme precaution with minimal security benefit.”

In talks, I’m even more direct. I think “don’t talk to strangers” is just about the worst possible advice you can give a child. Most people are friendly and helpful, and if a child is in distress, asking a stranger for help is probably the best possible thing he can do.

That advice, to ask a stranger for help, would have helped Brennan Hawkins, the 11-year-old boy who was lost in the Utah wilderness for four days.

The parents said Brennan had seen people searching for him on horseback and ATVs, but avoided them because of what he had been taught.

“He stayed on the trail, he avoided strangers,” Jody Hawkins said. “His biggest fear, he told me, was that someone would steal him.”

They said they hadn’t talked to Brennan and his four siblings about what they should do about strangers if they were lost. “This may have come to a faster conclusion had we discussed that,” Toby Hawkins said.

In a world where good guys are common and bad guys are rare, assuming a random person is a good guy is a smart security strategy. We need to help children develop their natural intuition about risk, and not give them overbroad rules.

Also in Beyond Fear, I wrote:

As both individuals and a society, we can make choices about our security. We can choose more security or less security. We can choose greater impositions on our lives and freedoms, or fewer impositions. We can choose the types of risks and security solutions we’re willing to tolerate and decide that others are unacceptable.

As individuals, we can decide to buy a home alarm system to make ourselves more secure, or we can save the money because we don’t consider the added security to be worth it. We can decide not to travel because we fear terrorism, or we can decide to see the world because the world is wonderful. We can fear strangers because they might be attackers, or we can talk to strangers because they might become friends.

Posted on June 23, 2005 at 2:40 PM

Risks of Pointy Knives

An article in the British Medical Journal recommends that long pointy knives be banned because they’re a stabbing risk.

Of course it’s ridiculous. (I wrote about this kind of thing two days ago, in the context of cell phones on airplanes. Banning something with good uses just because there are also bad uses is rarely a good security trade-off.)

But the researchers actually have a point—so to speak—when they say that there’s no good reason for long knives to be pointy. From the BBC:

The researchers said there was no reason for long pointed knives to be publicly available at all.

They consulted 10 top chefs from around the UK, and found such knives have little practical value in the kitchen.

None of the chefs felt such knives were essential, since the point of a short blade was just as useful when a sharp end was needed.

I do a lot of cooking, and have all my life. I never use a long knife to stab. I never use the point of a chef’s knife, or the point of any other long knife. I rarely stab at all, and when I do, I’m using a small utility knife or a petty knife.

Okay, then. Why are so many large knives pointy? Carving knives aren’t pointy. Bread knives aren’t pointy. I can rock my chef’s knife just as easily on a rounded end.

Anyone know?

Posted on June 10, 2005 at 1:17 PM

Backscatter X-Ray Technology

Backscatter X-ray technology is a method of using X rays to see inside objects. The science is complicated, but the upshot is that you can see people naked:

The application of this new x-ray technology to airport screening uses high energy x-rays that are more likely to scatter than penetrate materials as compared to lower-energy x-rays used in medical applications. Although this type of x-ray is said to be harmless, it can move through other materials, such as clothing.

A passenger is scanned by rastering or moving a single high energy x-ray beam rapidly over their form. The signal strength of detected backscattered x-rays from a known position then allows a highly realistic image to be reconstructed. Since only Compton scattered x-rays are used, the registered image is mainly that of the surface of the object/person being imaged. In the case of airline passenger screening it is her nude form. The image resolution of the technology is high, so details of the human form of airline passengers present privacy challenges.
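Stripped of the physics, the quoted description is a simple raster-and-reconstruct loop: sweep a single beam over a grid of positions, record the backscattered signal strength at each position, and assemble those readings into an image. Here is a minimal sketch of that loop, with a placeholder measure_backscatter() standing in for the detector; everything below is illustrative and reflects no vendor’s actual processing.

```python
import numpy as np

def measure_backscatter(x, y):
    """Stand-in for the detector reading at beam position (x, y).  In a
    real scanner this would be the strength of the Compton-scattered
    signal; here it is just a placeholder value."""
    return np.random.random()

def raster_scan(width, height):
    """Sweep one beam over every (x, y) position and assemble the
    per-position backscatter intensities into an image."""
    image = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            image[y, x] = measure_backscatter(x, y)
    return image

image = raster_scan(width=64, height=128)  # finer raster steps mean a sharper image
```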

EPIC’s “Spotlight on Security” page is an excellent resource on this issue.

The TSA has recently announced a proposal to use these machines to screen airport passengers.

I’m not impressed with this security trade-off. Yes, backscatter X-ray machines might be able to detect things that conventional screening might miss. But I already think we’re spending too much effort screening airplane passengers at the expense of screening luggage and airport employees…to say nothing of the money we should be spending on non-airport security.

On the other side, these machines are expensive and the technology is incredibly intrusive. I don’t think that people should be subjected to strip searches before they board airplanes. And I believe that most people would be appalled by the prospect of security screeners seeing them naked.

I believe that there will be a groundswell of popular opposition to this idea. Aside from the usual list of pro-privacy and pro-liberty groups, I expect fundamentalist Christian groups to be appalled by this technology. I think we can get a bevy of supermodels to speak out against the invasiveness of the search.

News article

Posted on June 9, 2005 at 1:04 PM

Risks of Cell Phones on Airplanes

Everyone—except those who like peace and quiet—thinks it’s a good idea to allow cell phone calls on airplanes, and the technical details are being worked out. But the U.S. government is worried that terrorists might make telephone calls from airplanes.

If the mobile phone ban were lifted, law enforcement authorities worry an attacker could use the device to coordinate with accomplices on the ground, on another flight or seated elsewhere on the same plane.

If mobile phone calls are to be allowed during flights, the law enforcement agencies urged that users be required to register their location on a plane before placing a call and that officials have fast access to call identification data.

“There is a short window of opportunity in which action can be taken to thwart a suicidal terrorist hijacking or remedy other crisis situations on board an aircraft,” the agencies said.

This is beyond idiotic. Again and again, we hear the argument that a particular technology can be used for bad things, so we have to ban or control it. The problem is that when we ban or control a technology, we also deny ourselves some of the good things it can be used for. Security is always a trade-off. Almost all technologies can be used for both good and evil; in Beyond Fear, I call them “dual use” technologies. Most of the time, the good uses far outweigh the evil uses, and we’re much better off as a society embracing the good uses and dealing with the evil uses some other way.

We don’t ban cars because bank robbers can use them to get away faster. We don’t ban cell phones because drug dealers use them to arrange sales. We don’t ban money because kidnappers use it. And finally, we don’t ban cryptography because the bad guys use it to keep their communications secret. In all of these cases, the benefit to society of having the technology is much greater than the benefit to society of controlling, crippling, or banning the technology.

And, of course, security countermeasures that force the attackers to make a minor modification in their tactics aren’t very good trade-offs. Banning cell phones on airplanes only makes sense if the terrorists are planning to use cell phones on airplanes, and will give up and not bother with their attack because they can’t. If their plan doesn’t involve air-to-ground communications, or if it doesn’t involve air travel at all, then the security measure is a waste. And even worse, we denied ourselves all the good uses of the technology in the process.

Security officials are also worried that personal phone use could increase the risk that a remotely controlled bomb will be used to down an airliner. But they acknowledged that simple radio-controlled explosive devices have been used in the past on planes and that the first line of defence was security checks at airports.

Still, they said that “the departments believe that the new possibilities generated by airborne passenger connectivity must be recognized.”

That last sentence got it right. New possibilities, both good and bad.

Posted on June 8, 2005 at 2:40 PM

Billions Wasted on Anti-Terrorism Security

Recently there have been a bunch of news articles about how lousy counterterrorism security is in the United States, how billions of dollars have been wasted on security since 9/11, and how much of what was purchased doesn’t work as advertised.

The first is from the May 8 New York Times (available at the website for pay, but there are copies here and here):

After spending more than $4.5 billion on screening devices to monitor the nation’s ports, borders, airports, mail and air, the federal government is moving to replace or alter much of the antiterrorism equipment, concluding that it is ineffective, unreliable or too expensive to operate.

Many of the monitoring tools—intended to detect guns, explosives, and nuclear and biological weapons—were bought during the blitz in security spending after the attacks of Sept. 11, 2001.

In its effort to create a virtual shield around America, the Department of Homeland Security now plans to spend billions of dollars more. Although some changes are being made because of technology that has emerged in the last couple of years, many of them are planned because devices currently in use have done little to improve the nation’s security, according to a review of agency documents and interviews with federal officials and outside experts.

From another part of the article:

Among the problems:

  • Radiation monitors at ports and borders that cannot differentiate between radiation emitted by a nuclear bomb and naturally occurring radiation from everyday material like cat litter or ceramic tile.
  • Air-monitoring equipment in major cities that is only marginally effective because not enough detectors were deployed and were sometimes not properly calibrated or installed. They also do not produce results for up to 36 hours—long after a biological attack would potentially infect thousands of people.
  • Passenger-screening equipment at airports that auditors have found is no more likely than before federal screeners took over to detect whether someone is trying to carry a weapon or a bomb aboard a plane.
  • Postal Service machines that test only a small percentage of mail and look for anthrax but no other biological agents.

The Washington Post had a series of articles. The first lists some more problems:

  • The contract to hire airport passenger screeners grew to $741 million from $104 million in less than a year. The screeners are failing to detect weapons at roughly the same rate as shortly after the attacks.
  • The contract for airport bomb-detection machines ballooned to at least $1.2 billion from $508 million over 18 months. The machines have been hampered by high false-alarm rates.
  • A contract for a computer network called US-VISIT to screen foreign visitors could cost taxpayers $10 billion. It relies on outdated technology that puts the project at risk.
  • Radiation-detection machines worth a total of a half-billion dollars deployed to screen trucks and cargo containers at ports and borders have trouble distinguishing between highly enriched uranium and common household products. The problem has prompted costly plans to replace the machines.

The second is about border security.

And more recently, a New York Times article on how lousy port security is.

There are a lot of morals here: the problems of believing companies that have something to sell you, the difficulty of making technological security solutions work, the problems with making major security changes quickly, the mismanagement that comes from any large bureaucracy like the DHS, and the wastefulness of defending potential terrorist targets instead of broadly trying to deal with terrorism.

Posted on June 3, 2005 at 8:17 AM

Surveillance Cameras in U.S. Cities

From EPIC:

The Department of Homeland Security (DHS) has requested more than $2 billion to finance grants to state and local governments for homeland security needs. Some of this money is being used by state and local governments to create networks of surveillance cameras to watch over the public in the streets, shopping centers, at airports and more. However, studies have found that such surveillance systems have little effect on crime, and that it is more effective to place more officers on the streets and improve lighting in high-crime areas. There are significant concerns about citizens’ privacy rights and misuse or abuse of the system. A professor at the University of Nevada at Reno has alleged that the university used a homeland security camera system to surreptitiously watch him after he filed a complaint alleging that the university abused its research animals. Also, British studies have found there is a significant danger of racial discrimination and stereotyping by those monitoring the cameras.

Posted on May 16, 2005 at 9:00 AM

Combating Spam

Spam is back in the news, and it has a new name. This time it’s voice-over-IP spam, and it has the clever name of “spit” (spam over Internet telephony). Spit has the potential to completely ruin VoIP. No one is going to install the system if they’re going to get dozens of calls a day from audio spammers. Or, at least, they’re only going to accept phone calls from a white list of previously known callers.

VoIP spam joins the ranks of e-mail spam, Usenet newsgroup spam, instant message spam, cell phone text message spam, and blog comment spam. And, if you think broadly enough, these computer-network spam delivery mechanisms join the ranks of computer telemarketing (phone spam), junk mail (paper spam), billboards (visual space spam), and cars driving through town with megaphones (audio spam). It’s all basically the same thing—unsolicited marketing messages—and only by understanding the problem at this level of generality can we discuss solutions.

In general, the goal of advertising is to influence people. Usually it’s to influence people to purchase a product, but it could just as easily be to influence people to support a particular political candidate or position. Advertising does this by implanting a marketing message into the brain of the recipient. The mechanism of implantation is simply a tactic.

Tactics for unsolicited marketing messages rise and fall in popularity based on their cost and benefit. If the benefit is significant, people are willing to spend more. If the benefit is small, people will only do it if it is cheap. A 30-second prime-time television ad costs 1.8 cents per adult viewer, a full-page color magazine ad about 0.9 cents per reader. A highway billboard costs 0.21 cents per car. Direct mail is the most expensive, at over 50 cents per third-class letter mailed. (That’s why targeted mailing lists are so valuable; they increase the per-piece benefit.)

Spam is such a common tactic not because it’s particularly effective (the response rates for spam are very low) but because it’s ridiculously cheap. Typically, spammers charge less than a hundredth of a cent per e-mail. (And that number is just what spamming houses charge their customers to deliver spam; if you’re a clever hacker, you can build your own spam network for much less money.) If it is worth $10 for you to successfully influence one person—to buy your product, vote for your guy, whatever—then you only need a 1-in-100,000 success rate. You can market really marginal products with spam.
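To make that arithmetic explicit: at roughly $0.0001 per message, a $10 payoff per conversion breaks even at one success in 100,000 e-mails, while direct mail at 50 cents a letter needs one success in 20. A rough back-of-the-envelope calculation, using the approximate per-impression costs quoted above and an arbitrary $10 value per conversion:

```python
# Break-even response rate = cost per impression / value of one conversion.
# The per-impression costs are the approximate figures quoted above; the
# $10 value per conversion is an arbitrary illustrative number.
value_per_conversion = 10.00      # dollars earned per person successfully influenced

cost_per_impression = {           # dollars per recipient/viewer
    "spam e-mail":       0.0001,  # about a hundredth of a cent
    "highway billboard": 0.0021,
    "magazine ad":       0.009,
    "prime-time TV ad":  0.018,
    "direct mail":       0.50,
}

for medium, cost in sorted(cost_per_impression.items(), key=lambda kv: kv[1]):
    breakeven = cost / value_per_conversion
    print(f"{medium:18s} breaks even at 1 success per {1 / breakeven:,.0f} recipients")
```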

So far, so good. But the cost/benefit calculation is missing a component: the “cost” of annoying people. Everyone who is not influenced by the marketing message is annoyed to some degree. The advertiser pays part of that cost, since annoyed people might boycott his product. But most of the time he doesn’t pay it, and the cost of the advertising falls on the person annoyed: the beauty of the landscape is ruined by the billboard, dinner is disrupted by a telemarketer, spam costs money to ship around the Internet and time to wade through, and so on. (Note that I am using “cost” very generally here, and not just monetarily. Time and happiness are both costs.)

This is why spam is so bad. For each e-mail, the spammer pays a cost and receives a benefit. But there is an additional cost, paid by the e-mail recipient, and because so much spam is unwanted, that additional cost is huge—and it’s a cost the spammer never sees. If spammers could be made to bear the total cost of spam, then its level would be more along the lines of what society would find acceptable.

This economic analysis is important, because it’s the only way to understand how effective different solutions will be. This is an economic problem, and the solutions need to change the fundamental economics. (The analysis is largely the same for VoIP spam, Usenet newsgroup spam, blog comment spam, and so on.)

The best solutions raise the cost of spam. Spam filters raise the cost by increasing the amount of spam that someone needs to send before someone will read it. If 99% of all spam is filtered into trash, then sending spam becomes 100 times more expensive. This is also the idea behind white lists—lists of senders a user is willing to accept e-mail from—and blacklists: lists of senders a user is not willing to accept e-mail from.
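In code, a white list and a blacklist are just precedence rules applied to the sender before the content filter’s verdict is consulted. A toy sketch, with made-up function and parameter names:

```python
def accept_message(sender, looks_like_spam, whitelist, blacklist):
    """Toy sender-based filter: the white list always wins, the blacklist
    always loses, and everything else falls through to the content
    filter's verdict (looks_like_spam)."""
    if sender in whitelist:
        return True              # known-good senders always get through
    if sender in blacklist:
        return False             # known-bad senders are always dropped
    return not looks_like_spam   # otherwise trust the content filter

# If the content filter catches 99% of spam, a spammer must send roughly
# 100 times as many messages for one to be read; that is how filtering
# raises the cost of spam.
print(accept_message("friend@example.com", True, {"friend@example.com"}, set()))  # True
```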

Filtering doesn’t just have to be at the recipient’s e-mail. It can be implemented within the network to clean up spam, or at the sender. Several ISPs are already filtering outgoing e-mail for spam, and the trend will increase.

Anti-spam laws raise the cost of spam to an intolerable level; no one wants to go to jail for spamming. We’ve already seen some convictions in the U.S. Unfortunately, this only works when the spammer is within the reach of the law, and is less effective against criminals who are using spam as a mechanism to commit fraud.

Other proposed solutions try to impose direct costs on e-mail senders. I have seen proposals for e-mail “postage,” either for every e-mail sent or for every e-mail above a reasonable threshold. I have seen proposals where the sender of an e-mail posts a small bond, which the receiver can cash if the e-mail is spam. There are other proposals that involve “computational puzzles”: time-consuming tasks the sender’s computer must perform, unnoticeable to someone who is sending e-mail normally, but too much for someone sending e-mail in bulk. These solutions generally involve re-engineering the Internet, something that is not done lightly, and hence are in the discussion stages only.
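The computational-puzzle proposals are essentially Hashcash-style proof-of-work: the sender must find a token whose hash has a certain number of leading zero bits, which costs a noticeable amount of CPU time per message but only a single hash to verify. A minimal sketch of the idea, assuming SHA-256 and a made-up stamp format rather than any particular proposal’s wire format:

```python
import hashlib
from itertools import count

DIFFICULTY = 20  # leading zero bits required; higher means more work per message

def mint_stamp(recipient: str) -> str:
    """Sender's side: grind counters until the stamp's hash has
    DIFFICULTY leading zero bits."""
    for counter in count():
        stamp = f"{recipient}:{counter}"
        digest = hashlib.sha256(stamp.encode()).hexdigest()
        if int(digest, 16) >> (256 - DIFFICULTY) == 0:
            return stamp

def verify_stamp(stamp: str, recipient: str) -> bool:
    """Receiver's side: a single hash checks that the sender did the work."""
    if not stamp.startswith(recipient + ":"):
        return False
    digest = hashlib.sha256(stamp.encode()).hexdigest()
    return int(digest, 16) >> (256 - DIFFICULTY) == 0

stamp = mint_stamp("alice@example.com")          # slow: about a million hashes on average
assert verify_stamp(stamp, "alice@example.com")  # fast: one hash
```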

All of these solutions work to a degree, and we end up with an arms race. Anti-spam products block a certain type of spam. Spammers invent a tactic that gets around those products. Then the products block that spam. Then the spammers invent yet another type of spam. And so on.

Blacklisting spammer sites forced the spammers to disguise the origin of spam e-mail. People recognizing e-mail from people they knew, and other anti-spam measures, forced spammers to hack into innocent machines and use them as launching pads. Scanning millions of e-mails looking for identical bulk spam forced spammers to individualize each spam message. Semantic spam detection forced spammers to design even more clever spam. And so on. Each defense is met with yet another attack, and each attack is met with yet another defense.

Remember that when you think about host identification, or postage, as an anti-spam measure. Spammers don’t care about tactics; they want to send their e-mail. Techniques like this will simply force spammers to rely more on hacked innocent machines. As long as the underlying computers are insecure, we can’t prevent spammers from sending.

This is the problem with another potential solution: re-engineering the Internet to prohibit the forging of e-mail headers. This would make it easier for spam detection software to detect spamming IP addresses, but spammers would just use hacked machines instead of their own computers.

Honestly, there’s no end in sight for the spam arms race. Even so, spam is one of computer security’s success stories. The current crop of anti-spam products works. I get almost no spam, and very few legitimate e-mails end up in my spam trap. I wish they would work better—Crypto-Gram is occasionally classified as spam by one service or another, for example—but they’re working pretty well. It’ll be a long time before spam stops clogging up the Internet, but at least we don’t have to look at it.

Posted on May 13, 2005 at 9:47 AM

REAL ID

The United States is getting a national ID card. The REAL ID Act (text of the bill and the Congressional Research Services analysis of the bill) establishes uniform standards for state driver’s licenses, effectively creating a national ID card. It’s a bad idea, and is going to make us all less safe. It’s also very expensive. And it’s all happening without any serious debate in Congress.

I’ve already written about national IDs. I’ve written about the fallacies of identification as a security tool. I’m not going to repeat myself here, and I urge everyone who is interested to read those two essays (and even this older essay). A national ID is a lousy security trade-off, and everyone needs to understand why.

Aside from those generalities, there are specifics about REAL ID that make for bad security.

The REAL ID Act requires driver’s licenses to include a “common machine-readable technology.” This will, of course, make identity theft easier. Assume that this information will be collected by bars and other businesses, and that it will be resold to companies like ChoicePoint and Acxiom. It actually doesn’t matter how well the states and federal government protect the data on driver’s licenses, as there will be parallel commercial databases with the same information.

Even worse, the same specification for RFID chips embedded in passports includes details about embedding RFID chips in driver’s licenses. I expect the federal government will require states to do this, with all of the associated security problems (e.g., surreptitious access).

REAL ID requires that driver’s licenses contain actual addresses, and no post office boxes. There are no exceptions made for judges or police—even undercover police officers. This seems like a major unnecessary security risk.

REAL ID also prohibits states from issuing driver’s licenses to illegal aliens. This makes no sense, and will only result in these illegal aliens driving without licenses—which isn’t going to help anyone’s security. (This is an interesting insecurity, and is a direct result of trying to take a document that is a specific permission to drive an automobile, and turning it into a general identification device.)

REAL ID is expensive. It’s an unfunded mandate: the federal government is forcing the states to spend their own money to comply with the act. I’ve seen estimates that the cost to the states of complying with REAL ID will be $120 million. That’s $120 million that can’t be spent on actual security.

And the wackiest thing is that none of this is required. In December 2004, the Intelligence Reform and Terrorism Prevention Act of 2004 was signed into law. That law included stronger security measures for driver’s licenses, the security measures recommended by the 9/11 Commission Report. That’s already done. It’s already law.

REAL ID goes way beyond that. It’s a huge power-grab by the federal government over the states’ systems for issuing driver’s licenses.

REAL ID doesn’t go into effect until three years after it becomes law, but I expect things to be much worse by then. One of my fears is that this new uniform driver’s license will bring a new level of “show me your papers” checks by the government. Already you can’t fly without an ID, even though no one has ever explained how that ID check makes airplane terrorism any harder. I have previously written about Secure Flight, another lousy security system that tries to match airline passengers against terrorist watch lists. I’ve already heard rumblings about requiring states to check identities against “government databases” before issuing driver’s licenses. I’m sure Secure Flight will be used for cruise ships, trains, and possibly even subways. Combine REAL ID with Secure Flight and you have an unprecedented system for broad surveillance of the population.

Is there anyone who would feel safer under this kind of police state?

Americans overwhelmingly reject national IDs in general, and there’s an enormous amount of opposition to the REAL ID Act. This is from the EPIC page on REAL ID and National IDs:

More than 600 organizations have expressed opposition to the Real ID Act. Only two groups—Coalition for a Secure Driver’s License and Numbers USA—support the controversial national ID plan. Organizations such as the American Association of Motor Vehicle Administrators, National Association of Evangelicals, American Library Association, Association for Computing Machinery (pdf), National Council of State Legislatures, American Immigration Lawyers Association (pdf), and National Governors Association are among those against the legislation.

And this site is trying to coordinate individual action against the REAL ID Act, although time is running short. It’s already passed in the House, and the Senate votes tomorrow.

If you haven’t heard much about REAL ID in the newspapers, that’s not an accident. The politics of REAL ID is almost surreal. It was voted down last fall, but has been reintroduced and attached to legislation that funds military actions in Iraq. This is a “must-pass” piece of legislation, which means that there has been no debate on REAL ID. No hearings, no debates in committees, no debates on the floor. Nothing.

Near as I can tell, this whole thing is being pushed by Wisconsin Rep. Sensenbrenner primarily as an anti-immigration measure. The huge insecurities this will cause to everyone else in the United States seem to be collateral damage.

Unfortunately, I think this is a done deal. The legislation REAL ID is attached to must pass, and it will pass. Which means REAL ID will become law. But it can be fought in other ways: via funding, in the courts, etc. Those seriously interested in this issue are invited to attend an EPIC-sponsored event in Washington, DC, on the topic on June 6th. I’ll be there.

Posted on May 9, 2005 at 9:06 AM

New Risks of Automatic Speedtraps

Every security system brings about new threats. Here’s an example:

The RAC Foundation yesterday called for an urgent review of the first fixed motorway speed cameras.

Far from improving drivers’ behaviour, the cameras have left motorists bunching at high speeds between junctions 14-18 on the M4 in Wiltshire, said Edmund King, the foundation’s executive director.

The cameras were introduced by the Wiltshire and Swindon Safety Camera Partnership in an attempt to reduce accidents on a stretch of the motorway. But most motorists are now travelling at just under 79mph, the speed at which they face being fined.

In response to automated speedtraps, drivers are adopting the obvious tactic of driving just below the trigger speed for the cameras, presumably on cruise control. So instead of cars on the road traveling at a spectrum of speeds with reasonable gaps between them, we are seeing “pelotons” of cars traveling closely bunched together at the same high speed, presenting unfamiliar hazards to each other and to law-abiding slower road-users.
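A toy simulation makes the effect easy to see: replace a spread of speeds with a pile-up just under the trigger threshold and the average goes up while the spread collapses. All of the numbers below are invented for illustration:

```python
import random
import statistics

random.seed(1)
TRIGGER = 79  # mph at which the cameras issue a fine

# Before the cameras: speeds spread around 72 mph with plenty of variation.
before = [random.gauss(72, 6) for _ in range(10_000)]

# After: most drivers sit on cruise control just under the trigger speed.
after = [min(random.gauss(78, 1), TRIGGER - 0.5) for _ in range(10_000)]

for label, speeds in [("before cameras", before), ("after cameras", after)]:
    print(f"{label}: mean {statistics.mean(speeds):.1f} mph, "
          f"spread {statistics.stdev(speeds):.1f} mph")
```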

The result is that average speeds are going up, and not down.

Posted on April 25, 2005 at 3:12 PM

Security Trade-Offs

An essay by an anonymous CSO. This is how it begins:

On any given day, we CSOs come to work facing a multitude of security risks. They range from a sophisticated hacker breaching the network to a common thug picking a lock on the loading dock and making off with company property. Each of these scenarios has a probability of occurring and a payout (in this case, a cost to the company) should it actually occur. To guard against these risks, we have a finite budget of resources in the way of time, personnel, money and equipment—poker chips, if you will.

If we’re good gamblers, we put those chips where there is the highest probability of winning a high payout. In other words, we guard against risks that are most likely to occur and that, if they do occur, will cost the company the most money. We could always be better, but as CSOs, I think we’re getting pretty good at this process. So lately I’ve been wondering—as I watch spending on national security continue to skyrocket, with diminishing marginal returns—why we as a nation can’t apply this same logic to national security spending. If we did this, the war on terrorism would look a lot different. In fact, it might even be over.

The whole thing is worth reading.
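The poker-chip analogy is ordinary expected-loss ranking: multiply each risk’s probability by its cost and put the budget where the product is largest. A small illustrative sketch, with made-up risks and figures:

```python
# Rank risks by expected annual loss = probability of occurrence x cost if
# it occurs.  All names and figures are invented for illustration.
risks = [
    ("network breach by a sophisticated hacker", 0.05, 2_000_000),
    ("theft from the loading dock",              0.40,    50_000),
    ("laptop lost with customer data",           0.20,   300_000),
]

for name, probability, cost in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name:42s} expected loss: ${probability * cost:>9,.0f}/year")
```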

Posted on April 22, 2005 at 12:32 PM
