Entries Tagged "cost-benefit analysis"


Biometric Passports in the UK

The UK government tried, and failed, to introduce a national ID card. Now they’re adding biometrics to their passports.

Financing for the Passport Office is planned to rise from £182 million a year to £415 million a year by 2008 to cope with the introduction of biometric information such as fingerprints.

A Home Office spokesman said the aim was to cut out the 1,500 fraudulent applications found through the postal system last year alone.

Okay, let’s do the math. Eliminating 1,500 instances of fraud will cost £233 million a year. That comes to £155,000 per instance of fraud.
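A minimal sketch of that arithmetic, using only the figures quoted above (the fraud count and the budget increase are taken at face value):

```python
# Back-of-the-envelope cost per prevented fraud, using the figures quoted above.
current_budget = 182_000_000      # £ per year
planned_budget = 415_000_000      # £ per year by 2008
fraudulent_applications = 1_500   # caught via the postal system last year

added_cost = planned_budget - current_budget            # £233 million a year
cost_per_fraud = added_cost / fraudulent_applications   # roughly £155,000 each

print(f"Added cost: £{added_cost:,.0f} per year")
print(f"Cost per instance of fraud: £{cost_per_fraud:,.0f}")
```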

Does this kind of security trade-off make sense to anyone? Is there absolutely nothing better the UK government can do to ensure security and safety with £233 million a year?

Yes, adding additional biometrics to passports—there’s already a picture—will make them more secure. But I don’t think that the additional security is worth the money and the additional risks. It’s a bad security trade-off.

And I’m not a fan of national IDs.

Posted on April 21, 2005 at 1:18 PM

Processing Exit Visas

From Federal Computer Week:

The Homeland Security Department will choose in the next 60 days which of three procedures it will use to track international visitors leaving the United States, department officials said today.

A report evaluating the three methods under consideration is due in the next few weeks, said Anna Hinken, spokeswoman for US-VISIT, the program that screens foreign nationals entering and exiting the country to weed out potential terrorists.

The first process uses kiosks located throughout an airport or seaport. An “exit attendant”—who would be a contract worker, Hinken said—checks the traveler’s documents. The traveler then steps to the station, scans both index fingers and has a digital photo taken. The station prints out a receipt that verifies the passenger has checked out.

The second method requires the passenger to present the receipt when reaching the departure gate. An exit attendant will scan the receipt and one of the passenger’s index fingers using a wireless handheld device. If the passenger’s fingerprint matches the identity on the receipt, the attendant returns the receipt and the passenger can board.

The third procedure uses just the wireless device at the gate. The screening officer scans the traveler’s fingerprints and takes a picture with the device, which is similar in size to tools that car-rental companies use, Hinken said. The device wirelessly checks the US-VISIT database. Once the traveler’s identity is confirmed as safe, the officer prints out a receipt and the visitor can pass.

Properly evaluating this trade-off would look at the relative ease of attacking the three systems, the relative costs of the three systems, and the relative speed and convenience—to the traveller—of the three systems. My guess is that the system that requires the least amount of interaction with a person when boarding the plane is best.
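As an illustration only, here is one way such a comparison could be structured. The weights and scores below are invented for the example; they are not taken from any US-VISIT evaluation, and a real analysis would have to measure attack resistance, cost, and traveler delay.

```python
# Hypothetical multi-criteria comparison of the three exit procedures.
# All weights and scores are made up for illustration purposes.
criteria = {"attack_resistance": 0.4, "cost": 0.3, "traveler_convenience": 0.3}

# Scores from 0 (worst) to 10 (best) for each option, per criterion.
options = {
    "kiosk_plus_receipt":    {"attack_resistance": 6, "cost": 5, "traveler_convenience": 4},
    "receipt_check_at_gate": {"attack_resistance": 7, "cost": 4, "traveler_convenience": 5},
    "handheld_at_gate_only": {"attack_resistance": 7, "cost": 6, "traveler_convenience": 7},
}

for name, scores in options.items():
    total = sum(criteria[c] * scores[c] for c in criteria)
    print(f"{name}: {total:.1f}")
```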

Posted on April 20, 2005 at 8:16 AM

Security as a Trade-Off

The Economist has an excellent editorial on security trade-offs. You need to subscribe to read the whole thing, but here’s my favorite paragraph:

The second point is that all technologies have both good and bad uses. There is currently a debate about whether it is safe to install mobile antennas in underground stations, for example, for fear that terrorists will use mobile phones to detonate bombs. Last year’s bombs in Madrid were detonated by mobile phones, but it was the phones’ internal alarm-clock function, not a call, that was used as the trigger mechanism. Nobody is suggesting that alarm clocks be outlawed, however; nor does anyone suggest banning telephones, even though kidnappers can use them to make ransom demands. Rather than demonising new technologies, their legitimate uses by good people must always be weighed against their illegitimate uses by bad ones. New technologies are inevitable, but by learning the lessons of history, needless scares need not be.

Posted on April 11, 2005 at 1:05 PM

More Uses for Airline Passenger Data

I’ve been worried about the government getting comprehensive data on airline passengers in order to check their names against a terrorist “watch list.” Turns out that the government has another reason for wanting passenger data.

Although privacy experts worry about the government gathering personal information on airline travelers, Delta Airlines is handing over electronic lists of passengers from some flights to help stop the spread of deadly infectious diseases.

The lists will allow health officials to notify more quickly those travelers who might have been exposed to illnesses such as dengue fever, flu, plague, SARS and biological agents, the Centers for Disease Control and Prevention told a congressional panel on Wednesday.

It’s the same story: a massive privacy violation of everybody just in case something happens to a few.

As an example of the CDC’s notification efforts, Schuchat cited the case of a New Jersey resident who returned from a trip to Sierra Leone in September with Lassa fever. The patient flew to Newark via London and took a train home. Only after he died a few days later did the CDC confirm the disease.

CDC worked with the state, the airline, the railroad, the hospital and others to identify 188 people who had been near the patient. Nineteen were deemed at-risk and 16 were contacted; none of those contacted came down with the disease. It took more than five days to notify some passengers, Schuchat said.

It’s unclear how this program would reduce that “five days” problem. I think it’s a better trade-off for the airlines to be ready to send the CDC the data in the event of a problem, rather than sending the CDC all the data—just in case—before there is any problem.

Posted on April 8, 2005 at 9:14 AM

Sandia on Terrorism Security

I have very mixed feelings about this report:

Anticipating attacks from terrorists, and hardening potential targets against them, is a wearying and expensive business that could be made simpler through a broader view of the opponents’ origins, fears, and ultimate objectives, according to studies by the Advanced Concepts Group (ACG) of Sandia National Laboratories.

“Right now, there are way too many targets considered and way too many ways to attack them,” says ACG’s Curtis Johnson. “Any thinking person can spin up enemies, threats, and locations it takes billions [of dollars] to fix.”

That makes a lot of sense, and this way of thinking is sorely needed. As is this kind of thing:

“The game really starts when the bad guys are getting together to plan something, not when they show up at your door,” says Johnson. “Can you ping them to get them to reveal their hand, or get them to turn against themselves?”

Better yet is to bring the battle to the countries from which terrorists spring, and beat insurgencies before they have a foothold.

“We need to help win over the as-yet-undecided populace to the view it is their government that is legitimate and not the insurgents,” says the ACG’s David Kitterman. Data from Middle East polls suggest, perhaps surprisingly, that most respondents are favorable to Western values. Turbulent times, however, put that liking under stress.

A nation’s people and media can be won over, says Yonas, through global initiatives that deal with local problems such as the need for clean water and affordable energy.

Says Johnson, “U.S. security already is integrated with global security. We’re always helping victims of disaster like tsunami victims, or victims of oppressive governments. Perhaps our ideas on national security should be redefined to reflect the needs of these people.”

Remember right after 9/11, when that kind of thinking would get you vilified?

But the article also talks about security mechanisms that won’t work, cost too much in freedoms and liberties, and have dangerous side effects.

People in airports voluntarily might carry smart cards if the cards could be sweetened to perform additional tasks like helping the bearer get through security, or to the right gate at the right time.

Mall shoppers might be handed a sensing card that also would help locate a particular store, a special sale, or find the closest parking space through cheap distributed-sensor networks.

“Suppose every PDA had a sensor on it,” suggests ACG researcher Laura McNamara. “We would achieve decentralized surveillance.” These sensors could report by radio frequency to a central computer any signal from contraband biological, chemical, or nuclear material.

Universal surveillance to improve our security? Seems unlikely.

But the most chilling quote of all:

“The goal here is to abolish anonymity, the terrorist’s friend,” says Sandia researcher Peter Chew. “We’re not talking about abolishing privacy—that’s another issue. We’re only considering the effect of setting up an electronic situation where all the people in a mall, subway, or airport ‘know’ each other—via, say, Bluetooth—as they would have, personally, in a small town. This would help malls and communities become bad targets.”

Anonymity is now the terrorist’s friend? I like to think of it as democracy’s friend.

Security against terrorism is important, but it’s equally important to remember that terrorism isn’t the only threat. Criminals, police, and governments are also threats, and security needs to be viewed as a trade-off with respect to all the threats. When you analyze terrorism in isolation, you end up with all sorts of weird answers.

Posted on April 5, 2005 at 9:26 AM

Security Risks of Biometrics

From the BBC:

Police in Malaysia are hunting for members of a violent gang who chopped off a car owner’s finger to get round the vehicle’s hi-tech security system.

The car, a Mercedes S-class, was protected by a fingerprint recognition system.

What interests me about this story is the interplay between attacker and defender. The defender implements a countermeasure that causes the attacker to change his tactics. Sometimes the new tactics are more harmful, and it’s not obvious whether or not the countermeasure was worth it.

I wrote about something similar in Beyond Fear (p. 113):

Someone might think: “I am worried about car theft, so I will buy an expensive security device that makes ignitions impossible to hot-wire.” That seems like a reasonable thought, but countries such as Russia, where these security devices are commonplace, have seen an increase in carjackings. A carjacking puts the driver at a much greater risk; here the security countermeasure has caused the weakest link to move from the ignition switch to the driver. Total car thefts may have declined, but drivers’ safety did, too.

Posted on April 1, 2005 at 9:12 AM

Anonymity and the Internet

From Slate:

Anonymice on Anonymity

Wendy.Seltzer.org (“Musings of a techie lawyer”) deflates the New York Times’ breathless Saturday (March 19) piece about the menace posed by anonymous access to Wi-Fi networks (“Growth of Wireless Internet Opens New Path for Thieves” by Seth Schiesel). Wi-Fi pirates around the nation are using unsecured hotspots to issue anonymous death threats, download child pornography, and commit credit card fraud, Schiesel writes. Then he plays the terrorist card.

But unsecured wireless networks are nonetheless being looked at by the authorities as a potential tool for furtive activities of many sorts, including terrorism. Two federal law enforcement officials said on condition of anonymity that while they were not aware of specific cases, they believed that sophisticated terrorists might also be starting to exploit unsecured Wi-Fi connections.

Never mind the pod of qualifiers swimming through those two sentences—”being looked at”; “potential tool”; “not aware of specific cases”; “might”—look at the sourcing. “Two federal law enforcement officials said on condition of anonymity. …” Seltzer points out the deep-dish irony of the Times citing anonymous sources about the imagined threats posed by anonymous Wi-Fi networks. Anonymous sources of unsubstantiated information, good. Anonymous Wi-Fi networks, bad.

This is the post from wendy.seltzer.org:

The New York Times runs an article in which law enforcement officials lament, somewhat breathlessly, that open wifi connections can be used, anonymously, by wrongdoers. The piece omits any mention of the benefits of these open wireless connections—no-hassle connectivity anywhere the “default” community network is operating, and anonymous browsing and publication for those doing good, too.

Without a hint of irony, however:

Two federal law enforcement officials said on condition of anonymity that while they were not aware of specific cases, they believed that sophisticated terrorists might also be starting to exploit unsecured Wi-Fi connections.

Yes, even law enforcement needs anonymity sometimes.

Open WiFi networks are a good thing. Yes, they allow bad guys to do bad things. But so do automobiles, telephones, and just about everything else you can think of. I like it when I find an open wireless network that I can use. I like it when my friends keep their home wireless network open so I can use it.

Scare stories like the New York Times one don’t help any.

Posted on March 25, 2005 at 12:49 PM

Secrecy and Security

In my previous entry, I wrote about the U.S. government’s SSI classification. I meant it to be an analysis of the procedures of secrecy, not an analysis of secrecy as security.

I’ve previously written about the relationship between secrecy and security. I think secrecy hurts security in all but a few well-defined circumstances.

In recent years, the U.S. government has pulled a veil of secrecy over much of its inner workings, using security against terrorism as an excuse. The Director of the National Security Archive recently gave excellent testimony on the topic. It is worth reading both for his general conclusions and for his specific data.

The lesson of 9/11 is that we are losing protection by too much secrecy. The risk is that by keeping information secret, we make ourselves vulnerable. The risk is that when we keep our vulnerabilities secret, we avoid fixing them. In an open society, it is only by exposure that problems get fixed. In a distributed information networked world, secrecy creates risk—risk of inefficiency, ignorance, inaction, as in 9/11. As the saying goes in the computer security world, when the bug is secret, then only the vendor and the hacker know—and the larger community can neither protect itself nor offer fixes.

Posted on March 9, 2005 at 7:46 AM

Banning Matches and Lighters on Airplanes

According to the Washington Post:

When Congress voted last year to prohibit passengers from bringing lighters and matches aboard commercial airplanes, it sounded like a reasonable idea for improving airline security.

But as airports and government leaders began discussing how to create flame-free airport terminals, the task became more complicated. Would newsstands and other small airport stores located beyond the security checkpoint have to stop selling lighters? Would airports have to ban smoking and close smoking lounges? How would security screeners detect matches in passengers’ pockets or carry-on bags when they don’t contain metal to set off the magnetometers? And what about arriving international travelers, who might have matches and lighters with them as they walk through the terminal?

It’s the silly security season out there. Given all of the things to spend money on to improve security, how this got to the top of anyone’s list is beyond me.

Posted on March 4, 2005 at 3:00 PM

Garbage Cans that Spy on You

From The Guardian:

Though he foresaw many ways in which Big Brother might watch us, even George Orwell never imagined that the authorities would keep a keen eye on your bin.

Residents of Croydon, south London, have been told that the microchips being inserted into their new wheely bins may well be adapted so that the council can judge whether they are producing too much rubbish.

I call this kind of thing “embedded government”: hardware and/or software put inside a device to make sure that we conform to the law.

And there are security risks.

If, for example, computer hackers broke in to the system, they could see sudden reductions in waste in specific households, suggesting the owners were on holiday and the house vacant.

To me, this is just another example of those implementing policy not being the ones who bear the costs. How long would the policy last if it were made clear to those implementing it that they would be held personally liable, even if only via their departmental budgets or careers, for any losses to residents if the database did get hacked?

Posted on March 4, 2005 at 10:32 AM
