Entries Tagged "cost-benefit analysis"


Trying to Value Online Privacy

Interesting paper: “The Value of Online Privacy,” by Scott Savage and Donald M. Waldman.

Abstract: We estimate the value of online privacy with a differentiated products model of the demand for Smartphone apps. We study the apps market because it is typically necessary for the consumer to relinquish some personal information through “privacy permissions” to obtain the app and its benefits. Results show that the representative consumer is willing to make a one-time payment for each app of $2.28 to conceal their browser history, $4.05 to conceal their list of contacts, $1.19 to conceal their location, $1.75 to conceal their phone’s identification number, and $3.58 to conceal the contents of their text messages. The consumer is also willing to pay $2.12 to eliminate advertising. Valuations for concealing contact lists and text messages for “more experienced” consumers are also larger than those for “less experienced” consumers. Given the typical app in the marketplace has advertising, requires the consumer to reveal their location and their phone’s identification number, the benefit from consuming this app must be at least $5.06.
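The abstract's closing figure is just the sum of the valuations attached to the typical app's characteristics. A quick sketch of the arithmetic (figures taken directly from the abstract):

```python
# Per-app willingness-to-pay figures from Savage and Waldman's abstract.
valuations = {
    "browser history": 2.28,
    "contacts list": 4.05,
    "location": 1.19,
    "phone ID number": 1.75,
    "text message contents": 3.58,
    "no advertising": 2.12,
}

# The typical app has advertising and reveals location and phone ID,
# so its benefit must at least offset those three valuations.
typical_app_cost = sum(
    valuations[k] for k in ("no advertising", "location", "phone ID number")
)
print(f"${typical_app_cost:.2f}")  # $5.06
```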

Interesting analysis, though we know that the point of sale is not the best place to capture the privacy preferences of people. There are too many other factors at play, and privacy isn’t the most salient thing going on.

Posted on January 29, 2014 at 12:26 PM

The Changing Cost of Surveillance

From Ashkan Soltani’s blog post:

The Yale Law Journal Online (YLJO) just published an article that I co-authored with Kevin Bankston (first workshopped at the Privacy Law Scholars Conference last year) entitled “Tiny Constables and the Cost of Surveillance: Making Cents Out of United States v. Jones.” In it, we discuss the drastic reduction in the cost of tracking an individual’s location and show how technology has greatly reduced the barriers to performing surveillance. We estimate the hourly cost of location tracking techniques used in landmark Supreme Court cases Jones, Karo, and Knotts and use the opinions issued in those cases to propose an objective metric: if the cost of the surveillance using the new technique is an order of magnitude (ten times) less than the cost of the surveillance without using the new technique, then the new technique violates a reasonable expectation of privacy. For example, the graph above shows that tracking a suspect using a GPS device is 28 times cheaper than assigning officers to follow him.
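The proposed metric reduces to a simple cost ratio. A minimal sketch of the test (the function name is mine; the 10x threshold and the 28x GPS figure come from the excerpt above):

```python
def violates_reasonable_expectation(cost_without: float, cost_with: float) -> bool:
    """Bankston/Soltani heuristic: a new surveillance technique crosses the
    line when it is at least an order of magnitude (ten times) cheaper than
    performing the same surveillance without it."""
    return cost_without / cost_with >= 10

# Illustrative numbers only: the post says GPS tracking is 28 times
# cheaper than assigning officers to follow a suspect.
print(violates_reasonable_expectation(cost_without=28.0, cost_with=1.0))  # True
```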

Posted on January 15, 2014 at 6:23 AM

The NSA's New Risk Analysis

As I recently reported in the Guardian, the NSA has secret servers on the Internet that hack into other computers, code-named FOXACID. These servers provide an excellent demonstration of how the NSA approaches risk management, and expose flaws in how the agency thinks about the secrecy of its own programs.

Here are the FOXACID basics: By the time the NSA tricks a target into visiting one of those servers, it already knows exactly who that target is, who wants him eavesdropped on, and the expected value of the data it hopes to receive. Based on that information, the server can automatically decide what exploit to serve the target, taking into account the risks associated with attacking the target, as well as the benefits of a successful attack. According to a top-secret operational procedures manual provided by Edward Snowden, an exploit named Validator might be the default, but the NSA has a variety of options. The documentation mentions United Rake, Peddle Cheap, Packet Wrench, and Beach Head — all delivered from a FOXACID subsystem called Ferret Cannon. Oh how I love some of these code names. (On the other hand, EGOTISTICALGIRAFFE has to be the dumbest code name ever.)

Snowden explained this to Guardian reporter Glenn Greenwald in Hong Kong. If the target is a high-value one, FOXACID might run a rare zero-day exploit that it developed or purchased. If the target is technically sophisticated, FOXACID might decide that there’s too much chance for discovery, and keeping the zero-day exploit a secret is more important. If the target is a low-value one, FOXACID might run an exploit that’s less valuable. If the target is low-value and technically sophisticated, FOXACID might even run an already-known vulnerability.

We know that the NSA receives advance warning from Microsoft of vulnerabilities that will soon be patched; there’s not much of a loss if an exploit based on that vulnerability is discovered. FOXACID has tiers of exploits it can run, and uses a complicated trade-off system to determine which one to run against any particular target.
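The trade-off described in the last two paragraphs can be sketched as a simple decision table. The tier labels are paraphrased from the essay; the function itself is my invention, not anything from the Snowden documents:

```python
def choose_exploit(high_value: bool, sophisticated: bool) -> str:
    """Sketch of the FOXACID exploit-selection trade-off: weigh the value
    of the target against the risk of burning a valuable exploit."""
    if high_value and not sophisticated:
        return "rare zero-day"               # worth risking discovery
    if high_value and sophisticated:
        return "less valuable exploit"       # keep the zero-day secret
    if not high_value and sophisticated:
        return "already-known vulnerability" # nothing lost if detected
    return "less valuable exploit"           # low-value target

print(choose_exploit(high_value=True, sophisticated=False))  # rare zero-day
```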

This cost-benefit analysis doesn’t end at successful exploitation. According to Snowden, the TAO — that’s Tailored Access Operations — operators running the FOXACID system have a detailed flowchart, with tons of rules about when to stop. If something doesn’t work, stop. If they detect a PSP, a personal security product, stop. If anything goes weird, stop. This is how the NSA avoids detection, and also how it takes mid-level computer operators and turns them into what it calls “cyberwarriors.” It’s not that they’re skilled hackers; it’s that the procedures do the work for them.
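The stop-rules amount to a conservative gate at every step. A loose sketch, with the rule names taken from the description above and everything else invented for illustration:

```python
def should_continue(step_succeeded: bool, psp_detected: bool, anomaly: bool) -> bool:
    """Sketch of the TAO operator flowchart: abort on any failure, any
    detected personal security product, or anything that goes weird."""
    return step_succeeded and not psp_detected and not anomaly

print(should_continue(True, False, False))  # True: proceed to next step
print(should_continue(True, True, False))   # False: PSP detected, stop
```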

And they’re super cautious about what they do.

While the NSA excels at performing this cost-benefit analysis at the tactical level, it’s far less competent at doing the same thing at the policy level. The organization seems to be good enough at assessing the risk of discovery — for example, the risk that the target of an intelligence-gathering effort discovers that effort — but seems to have completely ignored the risk of those efforts becoming front-page news.

It’s not just in the U.S., where newspapers are heavy with reports of the NSA spying on every Verizon customer, spying on domestic e-mail users, and secretly working to cripple commercial cryptography systems, but also around the world, most notably in Brazil, Belgium, and the European Union. All of these operations have caused significant blowback — for the NSA, for the U.S., and for the Internet as a whole.

The NSA spent decades operating in almost complete secrecy, but those days are over. As the corporate world learned years ago, secrets are hard to keep in the information age, and openness is a safer strategy. The tendency to classify everything means that the NSA won’t be able to sort what really needs to remain secret from everything else. The younger generation is more used to radical transparency than secrecy, and is less invested in the national security state. And whistleblowing is the civil disobedience of our time.

At this point, the NSA has to assume that all of its operations will become public, probably sooner than it would like. It has to start taking that into account when weighing the costs and benefits of those operations. And it now has to be just as cautious about new eavesdropping operations as it is about using FOXACID exploits against users.

This essay previously appeared in the Atlantic.

Posted on October 9, 2013 at 6:28 AM

Excess Automobile Deaths as a Result of 9/11

People commented about a point I made in a recent essay:

In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.

Yes, that’s wrong. Where I said “months,” I should have said “years.”

I got the sound bite from John Mueller and Mark G. Stewart’s book, Terror, Security, and Money. This is footnote 19 from Chapter 1:

The inconvenience of extra passenger screening and added costs at airports after 9/11 caused many short-haul passengers to drive to their destination instead, and, since airline travel is far safer than car travel, this has led to an increase of 500 U.S. traffic fatalities per year. Using the DHS-mandated value of a statistical life of $6.5 million, this equates to a loss of $3.2 billion per year, or $32 billion over the period 2002 to 2011 (Blalock et al. 2007).
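The footnote's dollar figures follow directly from the fatality count and the value-of-statistical-life figure it cites. Reproducing the arithmetic (the book rounds $3.25 billion down to $3.2 billion):

```python
extra_fatalities_per_year = 500
value_of_statistical_life = 6.5e6  # DHS-mandated figure, in dollars
years = 10                         # 2002 through 2011

annual_loss = extra_fatalities_per_year * value_of_statistical_life
total_loss = annual_loss * years

print(f"${annual_loss / 1e9:.2f} billion per year")  # $3.25 billion per year
print(f"${total_loss / 1e9:.1f} billion total")      # $32.5 billion total
```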

The authors make the same point in this earlier (and shorter) essay:

Increased delays and added costs at U.S. airports due to new security procedures provide incentive for many short-haul passengers to drive to their destination rather than flying, and, since driving is far riskier than air travel, the extra automobile traffic generated has been estimated in one study to result in 500 or more extra road fatalities per year.

The references are:

  • Garrick Blalock, Vrinda Kadiyali, and Daniel H. Simon. 2007. “The Impact of Post-9/11 Airport Security Measures on the Demand for Air Travel.” Journal of Law and Economics 50(4) November: 731–755.
  • Garrick Blalock, Vrinda Kadiyali, and Daniel H. Simon. 2009. “Driving Fatalities after 9/11: A Hidden Cost of Terrorism.” Applied Economics 41(14): 1717–1729.

Business Week makes the same point here.

There’s also this reference:

  • Michael Sivak and Michael J. Flannagan. 2004. “Consequences for road traffic fatalities of the reduction in flying following September 11, 2001.” Transportation Research Part F: Traffic Psychology and Behavior 7 (4).

Abstract: Gigerenzer (Gigerenzer, G. (2004). Dread risk, September 11, and fatal traffic accidents. Psychological Science, 15, 286–287) argued that the increased fear of flying in the U.S. after September 11 resulted in a partial shift from flying to driving on rural interstate highways, with a consequent increase of 353 road traffic fatalities for October through December 2001. We reevaluated the consequences of September 11 by utilizing the trends in road traffic fatalities from 2000 to 2001 for January through August. We also examined which road types and traffic participants contributed most to the increased road fatalities. We conclude that (1) the partial modal shift after September 11 resulted in 1018 additional road fatalities for the three months in question, which is substantially more than estimated by Gigerenzer, (2) the major part of the increased toll occurred on local roads, arguing against a simple modal shift from flying to driving to the same destinations, (3) driver fatalities did not increase more than in proportion to passenger fatalities, and (4) pedestrians and bicyclists bore a disproportionate share of the increased fatalities.

This is another analysis.

Posted on September 9, 2013 at 6:20 AM

Our Newfound Fear of Risk

We’re afraid of risk. It’s a normal part of life, but we’re increasingly unwilling to accept it at any level. So we turn to technology to protect us. The problem is that technological security measures aren’t free. They cost money, of course, but they cost other things as well. They often don’t provide the security they advertise, and — paradoxically — they often increase risk somewhere else. This problem is particularly stark when the risk involves another person: crime, terrorism, and so on. While technology has made us much safer against natural risks like accidents and disease, it works less well against man-made risks.

Three examples:

  1. We have allowed the police to turn themselves into a paramilitary organization. They deploy SWAT teams multiple times a day, almost always in nondangerous situations. They tase people at minimal provocation, often when it’s not warranted. Unprovoked shootings are on the rise. One result of these measures is that honest mistakes — a wrong address on a warrant, a misunderstanding — result in the terrorizing of innocent people, and more death in what were once nonviolent confrontations with police.
  2. We accept zero-tolerance policies in schools. This results in ridiculous situations, where young children are suspended for pointing gun-shaped fingers at other students or drawing pictures of guns with crayons, and high-school students are disciplined for giving each other over-the-counter pain relievers. The cost of these policies is enormous, both in the dollars spent implementing them and in their long-lasting effects on students.
  3. We have spent over one trillion dollars and thousands of lives fighting terrorism in the past decade — including the wars in Iraq and Afghanistan — money that could have been better used in all sorts of ways. We now know that the NSA has turned into a massive domestic surveillance organization, and that its data is also used by other government organizations, which then lie about it. Our foreign policy has changed for the worse: we spy on everyone, we trample human rights abroad, our drones kill indiscriminately, and our diplomatic outposts have either closed down or become fortresses. In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.

There are lots more examples, but the general point is that we tend to fixate on a particular risk and then do everything we can to mitigate it, including giving up our freedoms and liberties.

There’s a subtle psychological explanation. Risk tolerance is both cultural and dependent on the environment around us. As we have advanced technologically as a society, we have reduced many of the risks that have been with us for millennia. Fatal childhood diseases are things of the past, many adult diseases are curable, accidents are rarer and more survivable, buildings collapse less often, death by violence has declined considerably, and so on. All over the world — among the wealthier of us who live in peaceful Western countries — our lives have become safer.

Our notions of risk are not absolute; they’re based more on how far they are from whatever we think of as “normal.” So as our perception of what is normal gets safer, the remaining risks stand out more. When your population is dying of the plague, protecting yourself from the occasional thief or murderer is a luxury. When everyone is healthy, it becomes a necessity.

Some of this fear results from imperfect risk perception. We’re bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are — and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.

Some of this fear stems from the fact that we put people in charge of just one aspect of the risk equation. No one wants to be the senior officer who didn’t approve the SWAT team for the one subpoena delivery that resulted in an officer being shot. No one wants to be the school principal who didn’t discipline — no matter how benign the infraction — the one student who became a shooter. No one wants to be the president who rolled back counterterrorism measures, just in time to have a plot succeed. Those in charge will be naturally risk averse, since they personally shoulder so much of the burden.

We also expect that science and technology should be able to mitigate these risks, as they mitigate so many others. There’s a fundamental problem at the intersection of these security measures with science and technology; it has to do with the types of risk they’re arrayed against. Most of the risks we face in life are against nature: disease, accident, weather, random chance. As our science has improved — medicine is the big one, but other sciences as well — we become better at mitigating and recovering from those sorts of risks.

Security measures combat a very different sort of risk: a risk stemming from another person. People are intelligent, and they can adapt to new security measures in ways nature cannot. An earthquake isn’t able to figure out how to topple structures constructed under some new and safer building code, and an automobile won’t invent a new form of accident that undermines medical advances that have made existing accidents more survivable. But a terrorist will change his tactics and targets in response to new security measures. An otherwise innocent person will change his behavior in response to a police force that compels compliance at the threat of a Taser. We will all change, living in a surveillance state.

When you implement measures to mitigate the effects of the random risks of the world, you’re safer as a result. When you implement measures to reduce the risks from your fellow human beings, the human beings adapt and you get less risk reduction than you’d expect — and you also get more side effects, because we all adapt.

We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society. The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.

This essay previously appeared on Forbes.com.

EDITED TO ADD (8/5): Slashdot thread.

Posted on September 3, 2013 at 6:41 AM

Management Issues in Terrorist Organizations

Terrorist organizations have the same management problems as other organizations, and new ones besides:

Terrorist leaders also face a stubborn human resources problem: Their talent pool is inherently unstable. Terrorists are obliged to seek out recruits who are predisposed to violence — that is to say, young men with a chip on their shoulder. Unsurprisingly, these recruits are not usually disposed to following orders or recognizing authority figures. Terrorist managers can craft meticulous long-term strategies, but those are of little use if the people tasked with carrying them out want to make a name for themselves right now.

Terrorist managers are also obliged to place a premium on bureaucratic control, because they lack other channels to discipline the ranks. When Walmart managers want to deal with an unruly employee or a supplier who is defaulting on a contract, they can turn to formal legal procedures. Terrorists have no such option. David Ervine, a deceased Irish Unionist politician and onetime bomb maker for the Ulster Volunteer Force (UVF), neatly described this dilemma to me in 2006. “We had some very heinous and counterproductive activities being carried out that the leadership didn’t punish because they had to maintain the hearts and minds within the organization,” he said….

EDITED TO ADD (9/13): More on the economics of terrorism.

Posted on August 16, 2013 at 7:31 AM

NSA Increasing Security by Firing 90% of Its Sysadmins

General Keith Alexander thinks he can improve security by automating sysadmin duties such that 90% of them can be fired:

Using technology to automate much of the work now done by employees and contractors would make the NSA’s networks “more defensible and more secure,” as well as faster, he said at the conference, in which he did not mention Snowden by name.

Does anyone know a sysadmin anywhere who believes it’s possible to automate 90% of his job? Or who thinks any such automation will actually improve security?

He’s stuck. Computerized systems require trusted people to administer them. And any agency with all that computing power is going to need thousands of sysadmins. Some of them are going to be whistleblowers.

Leaking secret information is the civil disobedience of our age. Alexander has to get used to it.

Posted on August 12, 2013 at 2:33 PM

Prosecuting Snowden

I generally don’t like stories about Snowden as a person, because they distract from the real story of the NSA surveillance programs, but this article on the costs and benefits of the US government prosecuting Edward Snowden is worth reading.

Additional concerns relate to the trial. Snowden would no doubt obtain high-powered lawyers. Protesters would ring the courthouse. Journalists would camp out inside. As proceedings dragged on for months, the spotlight would remain on the N.S.A.’s spying and the administration’s pursuit of leakers. Instead of fading into obscurity, the Snowden affair would continue to grab headlines, and thus to undermine the White House’s ability to shape political discourse.

A trial could turn out to be much more than a distraction: It could be a focal point for domestic and international outrage. From the executive branch’s institutional perspective, the greatest danger posed by the Snowden case is not to any particular program. It is to the credibility of the secrecy system, and at one remove the ideal of our government as a force for good.

[…]

More broadly, Snowden’s case may clash with certain foreign policy goals. The United States often wants other countries’ dissidents to be able to find refuge abroad; this is a longstanding plank of its human rights agenda. The United States also wants illiberal regimes to tolerate online expression that challenges their authority; this is the core of its developing Internet freedom agenda.

Snowden’s prosecution may limit our soft power to lead and persuade in these areas. Of course, U.S. officials could emphasize that Snowden is different, that he’s not a courageous activist but a reckless criminal. But that is what the repressive governments say about their prisoners, too.

EDITED TO ADD (7/22): Related is this article on whether Snowden can manage to avoid arrest. Here’s the ending:

Speaking of movies, near the end of the hit film “Catch Me If You Can,” there’s a scene that Snowden might do well to watch while he’s killing time in the airport lounge (or wherever he is) pondering his fate. The young forger, Frank Abagnale, who has been staying a step ahead of the feds, finally grows irritated and fatigued. Not because they are particularly skilled in their hunting, nor because they are getting closer, but simply because they won’t give up. In a fit of pique, he blurts into the phone, “Stop chasing me!” On the other end, the dogged, bureaucratic Treasury agent, Carl Hanratty, answers, “I can’t stop. It’s my job.”

Ultimately, this is why many people who have been involved in such matters believe Snowden will be caught. Because no matter how much he may love sticking it to the U.S. government and waving the banner of truth, justice, and freedom of speech, that mission will prove largely unsustainable without serious fundraisers, organizers and dedicated allies working on his behalf for a long time.

They’ll have to make Edward Snowden their living, because those who are chasing him already have. Government agents will be paid every minute of every day for as long as it takes. Seasons may change and years may pass, but the odds say that one morning, he’ll look out of a window, go for a walk or stop for a cup of coffee, and the trap will spring shut. It will be almost like a movie.

Posted on July 22, 2013 at 1:04 PM
