Entries Tagged "cost-benefit analysis"


Airport Security Study

Surprising nobody, a new study concludes that airport security isn’t helping:

A team at the Harvard School of Public Health could not find any studies showing whether the time-consuming process of X-raying carry-on luggage prevents hijackings or attacks.

They also found no evidence to suggest that making passengers take off their shoes and confiscating small items prevented any incidents.

[…]

The researchers said it would be interesting to apply medical standards to airport security. Screening programs for illnesses like cancer are usually not broadly instituted unless they have been shown to work.

Note the defense by the TSA:

“Even without clear evidence of the accuracy of testing, the Transportation Security Administration defended its measures by reporting that more than 13 million prohibited items were intercepted in one year,” the researchers added. “Most of these illegal items were lighters.”

This is where the TSA has it completely backwards. The goal isn’t to confiscate prohibited items. The goal is to prevent terrorism on airplanes. When the TSA confiscates millions of lighters from innocent people, that’s a security failure. The TSA is reacting to non-threats. The TSA is reacting to false alarms. Now you can argue that this level of failure is necessary to make people safer, but it’s certainly not evidence that people are safer.
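The arithmetic behind this point is the classic base-rate problem: when actual attackers are vanishingly rare among passengers, even a reasonably accurate screening process produces almost nothing but false alarms. A minimal sketch of that calculation (all figures are hypothetical illustrations chosen for the example, not TSA statistics):

```python
# Base-rate sketch: why nearly every screening "hit" is a false alarm.
# All numbers below are hypothetical, for illustration only.

passengers = 700_000_000       # passengers screened per year (assumed)
true_threats = 1               # actual attackers among them (assumed, generous)
detection_rate = 0.99          # P(flagged | threat), assumed
false_positive_rate = 0.02     # P(flagged | harmless), assumed -- lighters, pies

true_alarms = true_threats * detection_rate
false_alarms = (passengers - true_threats) * false_positive_rate

# Probability that a flagged passenger is actually a threat (positive
# predictive value): tiny, because harmless passengers dominate the base rate.
ppv = true_alarms / (true_alarms + false_alarms)

print(f"{false_alarms:,.0f} false alarms per year")
print(f"P(threat | alarm) = {ppv:.2e}")
```

Under these assumed rates the screeners generate on the order of fourteen million alarms a year while the chance that any given alarm is a real threat is around one in ten million; counting confiscations measures the false-alarm volume, not the security delivered.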

For example, does anyone think that the TSA’s vigilance regarding pies is anything other than a joke?

Here’s the actual paper from the British Medical Journal:

Of course, we are not proposing that money spent on unconfirmed but politically comforting efforts to identify and seize water bottles and skin moisturisers should be diverted to research on cancer or malaria vaccines. But what would the National Screening Committee recommend on airport screening? Like mammography in the 1980s, or prostate specific antigen testing and computer tomography for detecting lung cancer more recently, we would like to open airport security screening to public and academic debate. Rigorously evaluating the current system is just the first step to building a future airport security programme that is more user friendly and cost effective, and that ultimately protects passengers from realistic threats.

I talked about airport security at length with Kip Hawley, the head of the TSA, here.

Posted on December 27, 2007 at 6:28 AM

Security-Breach Notification Laws

Interesting study on the effects of security-breach notification laws in the U.S.:

This study surveys the literature on changes in the information security world and significantly expands upon it with qualitative data from seven in-depth discussions with information security officers. These interviews focused on the most important factors driving security investment at their organizations and how security breach notification laws fit into that list. Often missing from the debate is that, regardless of the risk of identity theft and alleged consumer apathy towards notices, the simple fact of having to publicly notify causes organizations to implement stronger security standards that protect personal information.

The interviews showed that security breaches drive information exchange among security professionals, causing them to engage in discussions about information security issues that may arise at their and others’ organizations. For example, we found that some CSOs summarize news reports from breaches at other organizations and circulate them to staff with “lessons learned” from each incident. In some cases, organizations have a “that could have been us” moment, and patch systems with similar vulnerabilities to the entity that had a breach.

Breach notification laws have significantly contributed to heightened awareness of the importance of information security throughout all levels of a business organization and to development of a level of cooperation among different departments within an organization that resulted from the need to monitor data access for the purposes of detecting, investigating, and reporting breaches. CSOs reported that breach notification duties empowered them to implement new access controls, auditing measures, and encryption. Aside from the organization’s own efforts at complying with notification laws, reports of breaches at other organizations help information officers maintain that sense of awareness.

Posted on December 12, 2007 at 1:53 PM

The War on the Unexpected

We’ve opened up a new front on the war on terror. It’s an attack on the unique, the unorthodox, the unexpected; it’s a war on different. If you act different, you might find yourself investigated, questioned, and even arrested—even if you did nothing wrong, and had no intention of doing anything wrong. The problem is a combination of citizen informants and a CYA attitude among police that results in a knee-jerk escalation of reported threats.

This isn’t the way counterterrorism is supposed to work, but it’s happening everywhere. It’s a result of our relentless campaign to convince ordinary citizens that they’re the front line of terrorism defense. “If you see something, say something” is how the ads read in the New York City subways. “If you suspect something, report it” urges another ad campaign in Manchester, UK. The Michigan State Police have a seven-minute video. Administration officials from then-attorney general John Ashcroft to DHS Secretary Michael Chertoff to President Bush have asked us all to report any suspicious activity.

The problem is that ordinary citizens don’t know what a real terrorist threat looks like. They can’t tell the difference between a bomb and a tape dispenser, electronic name badge, CD player, bat detector, or trash sculpture; or the difference between terrorist plotters and imams, musicians, or architects. All they know is that something makes them uneasy, usually based on fear, media hype, or just something being different.

Even worse: after someone reports a “terrorist threat,” the whole system is biased towards escalation and CYA instead of a more realistic threat assessment.

Watch how it happens. Someone sees something, so he says something. The person he says it to—a policeman, a security guard, a flight attendant—now faces a choice: ignore or escalate. Even though he may believe that it’s a false alarm, it’s not in his best interests to dismiss the threat. If he’s wrong, it’ll cost him his career. But if he escalates, he’ll be praised for “doing his job” and the cost will be borne by others. So he escalates. And the person he escalates to also escalates, in a series of CYA decisions. And before we’re done, innocent people have been arrested, airports have been evacuated, and hundreds of police hours have been wasted.

This story has been repeated endlessly, both in the U.S. and in other countries. Someone—these are all real—notices a funny smell, or some white powder, or two people passing an envelope, or a dark-skinned man leaving boxes at the curb, or a cell phone in an airplane seat; the police cordon off the area, make arrests, and/or evacuate airplanes; and in the end the cause of the alarm is revealed as a pot of Thai chili sauce, or flour, or a utility bill, or an English professor recycling, or a cell phone in an airplane seat.

Of course, by then it’s too late for the authorities to admit that they made a mistake and overreacted, that a sane voice of reason at some level should have prevailed. What follows is the parade of police and elected officials praising each other for doing a great job, and prosecuting the poor victim—the person who was different in the first place—for having the temerity to try to trick them.

For some reason, governments are encouraging this kind of behavior. It’s not just the publicity campaigns asking people to come forward and snitch on their neighbors; they’re asking certain professions to pay particular attention: truckers to watch the highways, students to watch campuses, and scuba instructors to watch their students. The U.S. wanted meter readers and telephone repairmen to snoop around houses. There’s even a new law protecting people who turn in their travel mates based on some undefined “objectively reasonable suspicion,” whatever that is.

If you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.

We need to do two things. The first is to stop urging people to report their fears. People have always come forward to tell the police when they see something genuinely suspicious, and should continue to do so. But encouraging people to raise an alarm every time they’re spooked only squanders our security resources and makes no one safer.

We don’t want people to never report anything. A store clerk’s tip led to the unraveling of a plot to attack Fort Dix last May, and in March an alert Southern California woman foiled a kidnapping by calling the police about a suspicious man carting around a person-sized crate. But these incidents only reinforce the need to realistically assess, not automatically escalate, citizen tips. In criminal matters, law enforcement is experienced in separating legitimate tips from unsubstantiated fears, and allocating resources accordingly; we should expect no less from them when it comes to terrorism.

Equally important, politicians need to stop praising and promoting the officers who get it wrong. And everyone needs to stop castigating, and prosecuting, the victims just because they embarrassed the police by their innocence.

Causing a city-wide panic over blinking signs, a guy with a pellet gun, or stray backpacks is not evidence of doing a good job: it’s evidence of squandering police resources. Even worse, it causes its own form of terror, and encourages people to be even more alarmist in the future. We need to spend our resources on things that actually make us safer, not on chasing down and trumpeting every paranoid threat anyone can come up with.

This essay originally appeared on Wired.com.

EDITED TO ADD (11/1): Some links didn’t make it into the original article. There’s this creepy “if you see a father holding his child’s hands, call the cops” campaign, this story of an iPod found on an airplane, and this story of an “improvised electronics device” trying to get through airport security. This is a good essay on the “war on electronics.”

EDITED TO ADD (11/25): More examples of ridiculous non-terrorism overreactions, and a story about recruiting firefighters to snoop around in people’s houses:

Unlike police, firefighters and emergency medical personnel don’t need warrants to access hundreds of thousands of homes and buildings each year, putting them in a position to spot behavior that could indicate terrorist activity or planning.

Posted on November 1, 2007 at 4:42 AM

House of Lords on the Liquid Ban

From the UK:

“We continuously monitor the effectiveness of, in particular, the liquid security measures…”

How, one might ask? But hold on:

“The fact that there has not been a serious incident involving liquid explosives indicates, I would have thought, that the measures that we have put in place so far have been very effective.”

Ah, that’s how. On which basis the measures against asteroid strike, alien invasion and unexplained nationwide floods of deadly boiling custard have also been remarkably effective.

Posted on October 31, 2007 at 2:52 PM

Future of Malware

Excellent three-part series on trends in criminal malware:

When Jackson logged in, the genius of 76service became immediately clear. 76service customers weren’t paying for already-stolen credentials. Instead, 76service sold subscriptions or “projects” to Gozi-infected machines. Usually, projects were sold in 30-day increments because that’s a billing cycle, enough time to guarantee that the person who owns the machine with Gozi on it will have logged in to manage their finances, entering data into forms that could be grabbed.

Subscribers could log in with their assigned user name and password any time during the 30-day project. They’d be met with a screen that told them which of their bots was currently active, and a side bar of management options. For example, they could pull down the latest drops—data deposits that the Gozi-infected machines they subscribed to sent to the servers, like the 3.3 GB one Jackson had found.

A project was like an investment portfolio. Individual Gozi-infected machines were like stocks and subscribers bought a group of them, betting they could gain enough personal information from their portfolio of infected machines to make a profit, mostly by turning around and selling credentials on the black market. (In some cases, subscribers would use a few of the credentials themselves).

Some machines, like some stocks, would underperform and provide little private information. But others would land the subscriber a windfall of private data. The point was to subscribe to several infected machines to balance that risk, the way Wall Street fund managers invest in many stocks to offset losses in one company with gains in another.

[…]

That’s why the subscription prices were steep. “Prices started at $1,000 per machine per project,” says Jackson. With some tinkering and thanks to some loose database configuration, Jackson gained a view into other people’s accounts. He mostly saw subscriptions that bought access to only a handful of machines, rarely more than a dozen.

The $1K figure was for “fresh bots”—new infections that hadn’t been part of a project yet. Used bots that were coming off an expired project were available, but worth less (and thus, cost less) because of the increased likelihood that personal information gained from that machine had already been sold. Customers were urged to act quickly to get the freshest bots available.

This was another advantage for the seller. Providing the self-service interface freed up the sellers to create ancillary services. 76service was extremely customer-focused. “They were there to give you services that made it a good experience,” Jackson says. You want us to clean up the reports for you? Sure, for a small fee. You want a report on all the credentials from one bank in your drop? Hundred bucks, please. For another $150 a month, we’ll create secure remote drops for you. Alternative packaging and delivery options? We can do that. Nickel and dime. Nickel and dime.

And about banks not caring:

As much as the HangUp Team has relied on distributed pain for its success, financial institutions have relied on transferred risk to keep the Internet crime problem from becoming a consumer cause and damaging their businesses. So far, it has been cheaper to follow regulations enough to pass audits and then pay for the fraud rather than implement more serious security. “If you look at the volume of loss versus revenue, it’s not horribly bad yet,” says Chris Hoff, with a nod to the criminal hacker’s strategy of distributed pain. “The banks say, ‘Regulations say I need to do these seven things, so I do them and let’s hope the technology to defend against this catches up.'”

“John,” the security executive at the bank and one of the only security professionals from financial services who agreed to speak for this story, says: “If you audited a financial institution, you wouldn’t find many out of compliance. From a legal perspective, banks can spin that around and say there’s nothing else we could do.”

The banks know how much data Lance James at Secure Science is monitoring; some of them are his clients. The researcher with expertise on the HangUp Team calls consumers’ ability to transfer funds online “the dumbest thing I’ve ever seen. You can’t walk into the branch of a bank with a mask on and no ID and make a transfer. So why is it okay online?”

And yet banks push online banking to customers with one hand while the other hand pushes problems like Gozi away, into acceptable loss budgets and insurance—transferred risk.

As long as consumers don’t raise a fuss, and thus far they haven’t in any meaningful way, the banks have little to fear from their strategies.

But perhaps the only reason consumers don’t raise a fuss is because the banks have both overstated the safety and security of online banking and downplayed negative events around it, like the existence of Gozi and 76service.

The whole thing is worth reading.

Posted on October 17, 2007 at 1:07 PM

Chlorine and Cholera in Iraq

Excellent blog post:

So cholera has now reached Baghdad. That’s not much of a surprise given the utter breakdown of infrastructure. But there’s a reason the cholera is picking up speed now. From the NYT:

“We are suffering from a shortage of chlorine, which is sometimes zero,” Dr. Ameer said in an interview on Al Hurra, an American-financed television network in the Middle East. “Chlorine is essential to disinfect the water.”

So why is there a shortage? Because insurgents have laced a few bombs with chlorine and the U.S. and Iraq have responded by making it darn hard to import the stuff. From the AP:

[A World Health Organization representative in Iraq] also said some 100,000 tons of chlorine were being held up at Iraq’s border with Jordan, apparently because of fears the chemical could be used in explosives. She urged authorities to release it for use in decontaminating water supplies.

I understand why Iraq would put restrictions on dangerous chemicals. And I’m sure nobody intended for the restrictions to be so burdensome that they’d effectively cut off Iraq’s clean water supply. But that’s what looks to have happened. What makes it all the more tragic is that chlorine—for all the hype and worry—is actually a very ineffective booster for bombs. Of the dozen or so chlorine-laced bombings in Iraq, it appears the chlorine has killed exactly nobody.

In other words, the biggest damage from chlorine bombs—as with so many terrorist attacks—has come from overreaction to it. Fear operates as a “force multiplier” for terrorists, and in this case has helped them cut off Iraq’s clean water. Pretty impressive feat for some bombs that turned out to be close to duds.

I couldn’t have said it better. In this case, the security countermeasure is worse than the threat. Same thing could be said about a lot of the terrorism countermeasures in the U.S.

Another article on the topic.

Posted on September 25, 2007 at 12:23 PM

London's Security Cameras Don't Help

Interesting article. London’s 10,000 security cameras don’t reduce crime:

A comparison of the number of cameras in each London borough with the proportion of crimes solved there found that police are no more likely to catch offenders in areas with hundreds of cameras than in those with hardly any.

In fact, four out of five of the boroughs with the most cameras have a record of solving crime that is below average.

EDITED TO ADD (10/11): This is a follow-up to a 2005 article.

Posted on September 20, 2007 at 2:03 PM

On the Ineffectiveness of Security Cameras

Information from San Francisco public housing developments:

The 178 video cameras that keep watch on San Francisco public housing developments have never helped police officers arrest a homicide suspect even though about a quarter of the city’s homicides occur on or near public housing property, city officials say.

Nobody monitors the cameras, and the videos are seen only if police specifically request them from San Francisco Housing Authority officials. The cameras have occasionally managed to miss crimes happening in front of them because they were trained in another direction, and footage is particularly grainy at night when most crime occurs, according to police and city officials.

Similar concerns have been raised about the 70 city-owned cameras located at high-crime locations around San Francisco.

[…]

Four homicides have occurred in the past 12 months at the intersection of Laguna and Eddy streets—at the corner of the Plaza East public housing development—including the daytime killing of a 19-year-old in May. A security camera is trained on that corner but so far has not proven useful in making any arrests, Mirkarimi said.

Both the Housing Authority and city have many security cameras in the area, and it wasn’t clear Monday whether the camera in question was purchased by the Housing Authority or city. In any case, the camera hasn’t helped make arrests in the crimes, Mirkarimi said.

“They’re feeling strongly that they don’t work,” Mirkarimi said of Western Addition residents’ views of the security cameras. “They’re just apoplectic why they can’t figure out why nothing comes of this.”

He added that he thinks the cameras may have “a scarecrow effect” in that they give residents the feeling they are safer when they actually have little impact on crime.

That’s not a scarecrow effect. A scarecrow is security theater that works: something that doesn’t actually prevent crime, but deters it by scaring off criminals. Mirkarimi is saying that they have the opposite effect; the cameras make victims feel safer than they really are.

Posted on August 17, 2007 at 1:25 PM

Security ROI

Interesting essay on security and return on investment (ROI):

Let’s get back to ROI. The major problem the ROSI crowd has is they are trying to speak the language of their managers who select projects based on ROI. There is no problem with selecting projects based on ROI, if the project is a wealth creation project and not a wealth preservation project.

Security managers shouldn’t be afraid to drop the term ROI altogether, and instead say “My project will cost $1,000 but save the company $10,000.” Saving money / wealth preservation / loss avoidance is good.
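One common way to make that sentence quantitative is the return-on-security-investment (ROSI) framing, which compares annualized loss expectancy before and after a control against the control’s cost. A minimal sketch, using the essay’s own $1,000-cost / $10,000-saved example (the zero residual-loss figure is an assumption for illustration):

```python
# ROSI sketch: loss avoidance expressed as a return. Numbers match the
# essay's hypothetical example; the residual-loss figure is assumed.

def rosi(ale_before: float, ale_after: float, cost: float) -> float:
    """Return on security investment: (loss avoided - cost) / cost."""
    return (ale_before - ale_after - cost) / cost

ale_before = 10_000.0   # expected annual loss without the control
ale_after = 0.0         # expected annual loss with it (assumed)
cost = 1_000.0          # what the project costs

print(f"ROSI = {rosi(ale_before, ale_after, cost):.0%}")  # 900%
```

The point of the distinction in the quoted essay survives the formula: this is a return on avoided losses (wealth preservation), not on new revenue (wealth creation), so it shouldn’t compete head-to-head with revenue projects on the same ROI scale.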

Posted on July 14, 2007 at 6:54 AM

