Entries Tagged "cost-benefit analysis"


Interview with TSA Administrator John Pistole

He’s more realistic than one normally hears:

So if they get through all those defenses, they get to Reagan [National Airport] over here, and they’ve got an underwear bomb, they got a body cavity bomb—what’s reasonable to expect TSA to do? Hopefully our behavior detection people will see somebody sweating, or they’re dancing on their shoes or something, or they’re fiddling with something. Our explosives specialists, they’ll do something – they do hand swabs at random, unpredictably. If that doesn’t work then they go through (the enhanced scanner). And these machines give the best opportunity to detect a non-metallic device, but they’re not foolproof.

[…]

We’re not in the risk elimination business. The only way you can eliminate car accidents from happening is by not driving. OK, that’s not acceptable. The only way you can eliminate the risk of planes blowing up is nobody flies.

He still ducks some of the hard questions.

I am reminded of my own interview from 2007 with then-TSA Administrator Kip Hawley.

Posted on December 22, 2010 at 12:27 PM

"Architecture of Fear"

I like the phrase:

Németh said the zones not only affect the appearance of landmark buildings but also reflect an ‘architecture of fear’ as evidenced, for example, by the bunker-like appearance of embassies and other perceived targets.

Ultimately, he said, these places impart a dual message—simultaneously reassuring the public while causing a sense of unease.

And in the end, their effect could be negligible.

“Indeed, overt security measures may be no more effective than covert intelligence techniques,” he said. “But the architecture aims to comfort both property developers concerned with investment risk and residents and tourists with the notion that terror threats are being addressed and that daily life will soon ‘return to normal.’”

My own essay on architecture and security from 2006.

EDITED TO ADD (1/13): Here’s the full paper. And some stuff from the Whole Building Design Guide site. Also see the planned U.S. embassy in London, which includes a moat.

Posted on December 20, 2010 at 5:55 AM

Sometimes CCTV Cameras Work

Sex attack caught on camera.

Hamilton police have arrested two men after a sex attack on a woman early today was caught on the city’s closed circuit television (CCTV) cameras.

CCTV operators contacted police when they became concerned about the safety of a woman outside an apartment block near the intersection of Victoria and Collingwood streets about 5am today.

Remember, though, that the test for whether the surveillance cameras are worth it is whether or not this crime would have been solved without them. That is, were the cameras necessary for arrest or conviction?

My previous writing on cameras.

EDITED TO ADD (12/17): When I wrote “remember, though, that the test for whether the surveillance cameras are worth it is whether or not this crime would have been solved without them,” I was being sloppy. That’s the test as to whether or not they had any value in this case.

Posted on December 13, 2010 at 2:01 PM

Internet Quarantines

Last month, Scott Charney of Microsoft proposed that infected computers be quarantined from the Internet. Using a public health model for Internet security, the idea is that infected computers spreading worms and viruses are a risk to the greater community and thus need to be isolated. Internet service providers would administer the quarantine, and would also clean up and update users’ computers so they could rejoin the greater Internet.

This isn’t a new idea. Already there are products that test computers trying to join private networks, and only allow them access if their security patches are up-to-date and their antivirus software certifies them as clean. Computers denied access are sometimes shunned to a limited-capability sub-network where all they can do is download and install the updates they need to regain access. This sort of system has been used with great success at universities and end-user-device-friendly corporate networks. They’re happy to let you log in with any device you want—this is the consumerization trend in action—as long as your security is up to snuff.
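The admission-control decision these products make can be sketched roughly as follows. This is a minimal illustration, not any vendor's actual API; the field names and network labels are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    patches_current: bool   # OS and application patches up to date
    av_clean: bool          # antivirus scan reports no infection

def admit(endpoint: Endpoint) -> str:
    """Decide which network segment a connecting device is placed on."""
    if endpoint.patches_current and endpoint.av_clean:
        return "full-access"
    # Shunned devices land on a limited sub-network from which they can
    # reach only the patch and antivirus update servers.
    return "remediation"
```

A healthy laptop gets full access; one with stale patches is shunted to the remediation network until it updates and reconnects.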

Charney’s idea is to do that on a larger scale. To implement it we have to deal with two problems. There’s the technical problem—making the quarantine work in the face of malware designed to evade it, and the social problem—ensuring that people don’t have their computers unduly quarantined. Understanding the problems requires us to understand quarantines in general.

Quarantines have been used to contain disease for millennia. In general several things need to be true for them to work. One, the thing being quarantined needs to be easily recognized. It’s easier to quarantine a disease if it has obvious physical characteristics: fever, boils, etc. If there aren’t any obvious physical effects, or if those effects don’t show up while the disease is contagious, a quarantine is much less effective.

Similarly, it’s easier to quarantine an infected computer if that infection is detectable. As Charney points out, his plan is only effective against worms and viruses that our security products recognize, not against those that are new and still undetectable.

Two, the separation has to be effective. The leper colonies on Molokai and Spinalonga both worked because it was hard for the quarantined to leave. Quarantined medieval cities worked less well because it was too easy to leave, or—when the diseases spread via rats or mosquitoes—because the quarantine was targeted at the wrong thing.

Computer quarantines have been generally effective because the users whose computers are being quarantined aren’t sophisticated enough to break out of the quarantine, and find it easier to update their software and rejoin the network legitimately.

Three, the quarantine must cover only a small fraction of the population. The solution works only if it's a minority of the population that's affected, whether with physical diseases or computer diseases. If most people are infected, overall infection rates aren't going to be slowed much by quarantining. Similarly, a quarantine that tries to isolate most of the Internet simply won't work.

Four, the benefits must outweigh the costs. Medical quarantines are expensive to maintain, especially if people are being quarantined against their will. Determining who to quarantine is either expensive (if it's done correctly) or arbitrary, authoritarian and abuse-prone (if it's done badly). It could even be both. The value to society must be worth it.

It’s the last point that Charney and others emphasize. If Internet worms were only damaging to the infected, we wouldn’t need a societally imposed quarantine like this. But they’re damaging to everyone else on the Internet, spreading and infecting others. At the same time, we can implement systems that quarantine cheaply. The value to society far outweighs the cost.

That makes sense, but once you move quarantines from isolated private networks to the general Internet, the nature of the threat changes. Imagine an intelligent and malicious infectious disease: that's what malware is. The current crop of malware ignores quarantines because they're still too few and far between to affect its spread.

If we tried to implement Internet-wide—or even countrywide—quarantining, worm-writers would start building in ways to break the quarantine. So instead of nontechnical users not bothering to break quarantines because they don’t know how, we’d have technically sophisticated virus-writers trying to break quarantines. Implementing the quarantine at the ISP level would help, and if the ISP monitored computer behavior, not just specific virus signatures, it would be somewhat effective even in the face of evasion tactics. But evasion would be possible, and we’d be stuck in another computer security arms race. This isn’t a reason to dismiss the proposal outright, but it is something we need to think about when weighing its potential effectiveness.
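One well-known behavioral heuristic of the kind an ISP could apply (this sketch is illustrative, not drawn from any actual deployment, and the threshold is purely hypothetical) flags a host that suddenly contacts far more distinct destinations than normal, which is characteristic of a scanning worm whatever its signature:

```python
from collections import defaultdict

# Hypothetical threshold: an ordinary host rarely initiates connections
# to more than a few dozen new destinations per minute; a scanning worm
# initiates hundreds or thousands.
NEW_DESTINATIONS_PER_MINUTE = 50

def suspicious_hosts(connection_log):
    """connection_log: iterable of (minute, source_ip, dest_ip) tuples.

    Returns the set of source addresses whose distinct-destination count
    in any one minute exceeds the threshold."""
    seen = defaultdict(set)              # (minute, source) -> distinct dests
    for minute, src, dst in connection_log:
        seen[(minute, src)].add(dst)
    return {src for (_minute, src), dests in seen.items()
            if len(dests) > NEW_DESTINATIONS_PER_MINUTE}
```

A detector like this catches novel worms that signature scanners miss, but it is exactly the kind of mechanism worm-writers would learn to evade, for instance by spreading slowly enough to stay under the threshold.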

Additionally, there’s the problem of who gets to decide which computers to quarantine. It’s easy on a corporate or university network: the owners of the network get to decide. But the Internet doesn’t have that sort of hierarchical control, and denying people access without due process is fraught with danger. What are the appeal mechanisms? The audit mechanisms? Charney proposes that ISPs administer the quarantines, but there would have to be some central authority that decided what degree of infection would be sufficient to impose the quarantine. Although this is being presented as a wholly technical solution, it’s these social and political ramifications that are the most difficult to determine and the easiest to abuse.

Once we implement a mechanism for quarantining infected computers, we create the possibility of quarantining them in all sorts of other circumstances. Should we quarantine computers that don’t have their patches up to date, even if they’re uninfected? Might there be a legitimate reason for someone to avoid patching his computer? Should the government be able to quarantine someone for something he said in a chat room, or a series of search queries he made? I’m sure we don’t think it should, but what if that chat and those queries revolved around terrorism? Where’s the line?

Microsoft would certainly like to quarantine any computers it feels are not running legal copies of its operating system or applications software. The music and movie industries will want to quarantine anyone they decide is downloading or sharing pirated media files—they're already pushing similar proposals.

A security measure designed to keep malicious worms from spreading over the Internet can quickly become an enforcement tool for corporate business models. Charney addresses the need to limit this kind of function creep, but I don’t think it will be easy to prevent; it’s an enforcement mechanism just begging to be used.

Once you start thinking about implementation of quarantine, all sorts of other social issues emerge. What do we do about people who need the Internet? Maybe VoIP is their only phone service. Maybe they have an Internet-enabled medical device. Maybe their business requires the Internet to run. The effects of quarantining these people would be considerable, even potentially life-threatening. Again, where’s the line?

What do we do if people feel they are quarantined unjustly? Or if they are using nonstandard software unfamiliar to the ISP? Is there an appeals process? Who administers it? Surely not a for-profit company.

Public health is the right way to look at this problem. This conversation—between the rights of the individual and the rights of society—is a valid one to have, and this solution is a good possibility to consider.

There are some applicable parallels. We require drivers to be licensed and cars to be inspected not because we worry about the danger of unlicensed drivers and uninspected cars to themselves, but because we worry about their danger to other drivers and pedestrians. The small number of parents who don’t vaccinate their kids have already caused minor outbreaks of whooping cough and measles among the greater population. We all suffer when someone on the Internet allows his computer to get infected. How we balance that with individuals’ rights to maintain their own computers as they see fit is a discussion we need to start having.

This essay previously appeared on Forbes.com.

EDITED TO ADD (11/15): From an anonymous reader:

In your article you mention that for quarantines to work, you must be able to detect infected individuals. It must also be detectable quickly, before the individual has the opportunity to infect many others. Quarantining an individual after they’ve infected most of the people they regularly interact with is of little value. You must quarantine individuals when they have infected, on average, less than one other person.

Just as worm-writers would respond to the technical mechanisms to implement a quarantine by investing in ways to get around them, they would also likely invest in outpacing the quarantine. If a worm is designed to spread fast, even the best quarantine mechanisms may be unable to keep up.

Another concern with quarantining mechanisms is the damage that attackers could do if they were able to compromise the mechanism itself. This is of especially great concern if the mechanism were to include code within end-users' TCBs to scan computers: essentially a built-in rootkit. Without a scanner in the end-user's TCB, it's hard to see how you could reliably detect infections.
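The reader's first point is the classic epidemic-threshold condition: if each infected machine passes the infection to an average of R others before being quarantined, an outbreak started by one machine has a finite expected size only when R < 1, since the generations sum as the geometric series 1 + R + R² + … = 1/(1 − R). A quick illustration (the values of R here are purely hypothetical):

```python
def expected_outbreak_size(r: float) -> float:
    """Expected total infections from one seed machine when each
    infected machine infects an average of r others before it is
    quarantined.  Sums the generations 1 + r + r^2 + ... = 1/(1 - r)
    for r < 1; for r >= 1 the outbreak grows without bound."""
    if r >= 1:
        return float("inf")
    return 1.0 / (1.0 - r)
```

Quarantining fast enough to hold R at 0.5 caps the expected outbreak at two machines; letting R creep up to 0.95 raises it to twenty; at R = 1 or above, quarantine speed, not just detection accuracy, becomes the binding constraint.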

Posted on November 15, 2010 at 4:55 AM

Questioning Terrorism Policy

Worth reading:

…what if we chose to accept the fact that every few years, despite all reasonable precautions, some hundreds or thousands of us may die in the sort of ghastly terrorist attack that a democratic republic cannot 100-percent protect itself from without subverting the very principles that make it worth protecting?

Is this thought experiment monstrous? Would it be monstrous to refer to the 40,000-plus domestic highway deaths we accept each year because the mobility and autonomy of the car are evidently worth that high price? Is monstrousness why no serious public figure now will speak of the delusory trade-off of liberty for safety that Ben Franklin warned about more than 200 years ago? What exactly has changed between Franklin’s time and ours? Why now can we not have a serious national conversation about sacrifice, the inevitability of sacrifice—either of (a) some portion of safety or (b) some portion of the rights and protections that make the American idea so incalculably precious?

Posted on September 18, 2010 at 6:05 AM

Consumerization and Corporate IT Security

If you’re a typical wired American, you’ve got a bunch of tech tools you like and a bunch more you covet. You have a cell phone that can easily text. You’ve got a laptop configured just the way you want it. Maybe you have a Kindle for reading, or an iPad. And when the next new thing comes along, some of you will line up on the first day it’s available.

So why can’t work keep up? Why are you forced to use an unfamiliar, and sometimes outdated, operating system? Why do you need a second laptop, maybe an older and clunkier one? Why do you need a second cell phone with a new interface, or a BlackBerry, when your phone already does e-mail? Or a second BlackBerry tied to corporate e-mail? Why can’t you use the cool stuff you already have?

More and more companies are letting you. They're giving you an allowance and allowing you to buy whatever laptop you want, and to connect into the corporate network with whatever device you choose. They're allowing you to use whatever cell phone you have, whatever portable e-mail device you have, whatever you personally need to get your job done. And the security office is freaking out.

You can’t blame them, really. Security is hard enough when you have control of the hardware, operating system and software. Lose control of any of those things, and the difficulty goes through the roof. How do you ensure that the employee devices are secure, and have up-to-date security patches? How do you control what goes on them? How do you deal with the tech support issues when they fail? How do you even begin to manage this logistical nightmare? Better to dig your heels in and say “no.”

But security is on the losing end of this argument, and the sooner it realizes that, the better.

The meta-trend here is consumerization: cool technologies show up for the consumer market before they’re available to the business market. Every corporation is under pressure from its employees to allow them to use these new technologies at work, and that pressure is only getting stronger. Younger employees simply aren’t going to stand for using last year’s stuff, and they’re not going to carry around a second laptop. They’re either going to figure out ways around the corporate security rules, or they’re going to take another job with a more trendy company. Either way, senior management is going to tell security to get out of the way. It might even be the CEO, who wants to get to the company’s databases from his brand new iPad, driving the change. Either way, it’s going to be harder and harder to say no.

At the same time, cloud computing makes this easier. More and more, employee computing devices are nothing more than dumb terminals with a browser interface. When corporate e-mail is all webmail, corporate documents are all on GoogleDocs, and all the specialized applications have a web interface, it's easier to allow employees to use any up-to-date browser. It's what companies are already doing with their partners, suppliers, and customers.

Also on the plus side, technology companies have woken up to this trend and—from Microsoft and Cisco on down to the startups—are trying to offer security solutions. Like everything else, it’s a mixed bag: some of them will work and some of them won’t, most of them will need careful configuration to work well, and few of them will get it right. The result is that we’ll muddle through, as usual.

Security is always a tradeoff, and security decisions are often made for non-security reasons. In this case, the right decision is to sacrifice security for convenience and flexibility. Corporations want their employees to be able to work from anywhere, and they're going to have to loosen control over the tools they allow in order to get it.

This essay first appeared as the second half of a point/counterpoint with Marcus Ranum in Information Security Magazine. You can read Marcus’s half here.

Posted on September 7, 2010 at 7:25 AM

"The Fear Tax"

Good essay by Seth Godin:

We pay the fear tax every time we spend time or money seeking reassurance. We pay it twice when the act of seeking that reassurance actually makes us more anxious, not less.

We pay the tax when we cover our butt instead of doing the right thing, and we pay the tax when we take away someone’s dignity because we’re afraid.

We should quantify the tax. The government should publish how much of our money they’re spending to create fear and then spending to (apparently) address fear. Corporations should add to their annual reports how much they spent just-in-case. Once we know how much it costs, we can figure out if it’s worth it.

Posted on August 18, 2010 at 3:48 PM

Internet Kill Switch

Last month, Sen. Joe Lieberman, I-Conn., introduced a bill (text here) that might—we’re not really sure—give the president the authority to shut down all or portions of the Internet in the event of an emergency. It’s not a new idea. Sens. Jay Rockefeller, D-W.Va., and Olympia Snowe, R-Maine, proposed the same thing last year, and some argue that the president can already do something like this. If this or a similar bill ever passes, the details will change considerably and repeatedly. So let’s talk about the idea of an Internet kill switch in general.

It’s a bad one.

Security is always a trade-off: costs versus benefits. So the first question to ask is: What are the benefits? There is only one possible use of this sort of capability, and that is in the face of a warfare-caliber enemy attack. It’s the primary reason lawmakers are considering giving the president a kill switch. They know that shutting off the Internet, or even isolating the U.S. from the rest of the world, would cause damage, but they envision a scenario where not doing so would cause even more.

That reasoning is based on several flawed assumptions.

The first flawed assumption is that cyberspace has traditional borders, and we could somehow isolate ourselves from the rest of the world using an electronic Maginot Line. We can’t.

Yes, we can cut off almost all international connectivity, but there are lots of ways to get out onto the Internet: satellite phones, obscure ISPs in Canada and Mexico, long-distance phone calls to Asia.

The Internet is the largest communications system mankind has ever created, and it works because it is distributed. There is no central authority. No nation is in charge. Plugging all the holes isn’t possible.

Even if the president ordered all U.S. Internet companies to block, say, all packets coming from China, or restrict non-military communications, or just shut down access in the greater New York area, it wouldn’t work. You can’t figure out what packets do just by looking at them; if you could, defending against worms and viruses would be much easier.

And packets that come with return addresses are easy to spoof. Remember the cyberattack July 4, 2009, that probably came from North Korea, but might have come from England, or maybe Florida? On the Internet, disguising traffic is easy. And foreign cyberattackers could always have dial-up accounts via U.S. phone numbers and make long-distance calls to do their misdeeds.

The second flawed assumption is that we can predict the effects of such a shutdown. The Internet is the most complex machine mankind has ever built, and shutting down portions of it would have all sorts of unforeseen ancillary effects.

Would ATMs work? What about the stock exchanges? Which emergency services would fail? Would trucks and trains be able to route their cargo? Would airlines be able to route their passengers? How much of the military’s logistical system would fail?

That’s to say nothing of the variety of corporations that rely on the Internet to function, let alone the millions of Americans who would need to use it to communicate with their loved ones in a time of crisis.

Even worse, these effects would spill over internationally. The Internet is international in complex and surprising ways, and it would be impossible to ensure that the effects of a shutdown stayed domestic and didn’t cause similar disasters in countries we’re friendly with.

The third flawed assumption is that we could build this capability securely. We can’t.

Once we engineered a selective shutdown switch into the Internet, and implemented a way to do what Internet engineers have spent decades making sure never happens, we would have created an enormous security vulnerability. We would make the job of any would-be terrorist intent on bringing down the Internet much easier.

Computer and network security is hard, and every Internet system we’ve ever created has security vulnerabilities. It would be folly to think this one wouldn’t as well. And given how unlikely the risk is, any actual shutdown would be far more likely to be a result of an unfortunate error or a malicious hacker than of a presidential order.

But the main problem with an Internet kill switch is that it’s too coarse a hammer.

Yes, the bad guys use the Internet to communicate, and they can use it to attack us. But the good guys use it, too, and the good guys far outnumber the bad guys.

Shutting the Internet down, either the whole thing or just a part of it, even in the face of a foreign military attack, would do far more damage than it could possibly prevent. And it would hurt others whom we don't want to hurt.

For years we’ve been bombarded with scare stories about terrorists wanting to shut the Internet down. They’re mostly fairy tales, but they’re scary precisely because the Internet is so critical to so many things.

Why would we want to terrorize our own population by doing exactly what we don’t want anyone else to do? And a national emergency is precisely the worst time to do it.

Just implementing the capability would be very expensive; I would rather see that money going toward securing our nation’s critical infrastructure from attack.

Defending his proposal, Sen. Lieberman pointed out that China has this capability. It’s debatable whether or not it actually does, but it’s actively pursuing the capability because the country cares less about its citizens.

Here in the U.S., it is both wrong and dangerous to give the president the power and ability to commit Internet suicide and terrorize Americans in this way.

This essay was originally published on AOL.com News.

Posted on July 12, 2010 at 7:07 AM

Security Trade-Offs in Crayfish

Interesting:

The experiments offered the crayfish stark decisions—a choice between finding their next meal and becoming a meal for an apparent predator. In deciding on a course of action, they carefully weighed the risk of attack against the expected reward, Herberholz says.

Using a non-invasive method that allowed the crustaceans to freely move, the researchers offered juvenile Louisiana Red Swamp crayfish a simultaneous threat and reward: ahead lay the scent of food, but also the apparent approach of a predator.

In some cases, the “predator” (actually a shadow) appeared to be moving swiftly, in others slowly. To up the ante, the researchers also varied the intensity of the odor of food.

How would the animals react? Did the risk of being eaten outweigh their desire to feed? Should they “freeze”—in effect, play dead, hoping the predator would pass by, while the crayfish remained close to its meal—or move away from both the predator and food?

To make a quick escape, the crayfish flip their tails and swim backwards, an action preceded by a strong, measurable electric neural impulse. The specially designed tanks could non-invasively pick up and record these electrical signals. This allowed the researchers to identify the activation patterns of specific neurons during the decision-making process.

Although tail-flipping is a very effective escape strategy against natural predators, it adds critical distance between a foraging animal and its next meal.

The crayfish took decisive action in a matter of milliseconds. When faced with very fast shadows, they were significantly more likely to freeze than tail-flip away.

The researchers conclude that there is little incentive for retreat when the predator appears to be moving too rapidly for escape, and the crayfish would lose its own opportunity to eat. This was also true when the food odor was the strongest, raising the benefit of staying close to the expected reward. A strong predator stimulus, however, was able to override an attractive food signal, and crayfish decided to flip away under these conditions.

It's not that this surprises anyone; it's that researchers can now try to figure out the exact brain processes that enable the crayfish to make these decisions.

Posted on June 25, 2010 at 6:53 AM

How Much Counterterrorism Can We Afford?

In an article on using terahertz rays (is that different from terahertz radar?) to detect biological agents, we find this quote:

“High-tech, low-tech, we can’t afford to overlook any possibility in dealing with mass casualty events,” according to center head Donald Sebastian. “You need multiple methods of detection and response. Terrorism comes in many forms; you have to see, smell, taste and analyze everything.”

He’s got it completely backwards. I think we can easily afford not to do what he’s saying, and can’t afford to do it.

The technology to detect traces of chemical and biological agents is neat, though. And I am very much in favor of research along these lines.

Posted on June 23, 2010 at 6:00 AM

