Entries Tagged "mitigation"


Our Newfound Fear of Risk

We’re afraid of risk. It’s a normal part of life, but we’re increasingly unwilling to accept it at any level. So we turn to technology to protect us. The problem is that technological security measures aren’t free. They cost money, of course, but they cost other things as well. They often don’t provide the security they advertise, and—paradoxically—they often increase risk somewhere else. This problem is particularly stark when the risk involves another person: crime, terrorism, and so on. While technology has made us much safer against natural risks like accidents and disease, it works less well against man-made risks.

Three examples:

  1. We have allowed the police to turn themselves into a paramilitary organization. They deploy SWAT teams multiple times a day, almost always in nondangerous situations. They tase people at minimal provocation, often when it’s not warranted. Unprovoked shootings are on the rise. One result of these measures is that honest mistakes—a wrong address on a warrant, a misunderstanding—result in the terrorizing of innocent people, and more death in what were once nonviolent confrontations with police.
  2. We accept zero-tolerance policies in schools. This results in ridiculous situations, where young children are suspended for pointing gun-shaped fingers at other students or drawing pictures of guns with crayons, and high-school students are disciplined for giving each other over-the-counter pain relievers. The cost of these policies is enormous, both in the dollars it takes to implement them and in their long-lasting effects on students.
  3. We have spent over one trillion dollars and thousands of lives fighting terrorism in the past decade—including the wars in Iraq and Afghanistan—money that could have been better used in all sorts of ways. We now know that the NSA has turned into a massive domestic surveillance organization, and that its data is also used by other government organizations, which then lie about it. Our foreign policy has changed for the worse: we spy on everyone, we trample human rights abroad, our drones kill indiscriminately, and our diplomatic outposts have either closed down or become fortresses. In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.

There are lots more examples, but the general point is that we tend to fixate on a particular risk and then do everything we can to mitigate it, including giving up our freedoms and liberties.

There’s a subtle psychological explanation. Risk tolerance is both cultural and dependent on the environment around us. As we have advanced technologically as a society, we have reduced many of the risks that have been with us for millennia. Fatal childhood diseases are things of the past, many adult diseases are curable, accidents are rarer and more survivable, buildings collapse less often, death by violence has declined considerably, and so on. All over the world—among the wealthier of us who live in peaceful Western countries—our lives have become safer.

Our notions of risk are not absolute; they’re based more on how far they are from whatever we think of as “normal.” So as our perception of what is normal gets safer, the remaining risks stand out more. When your population is dying of the plague, protecting yourself from the occasional thief or murderer is a luxury. When everyone is healthy, it becomes a necessity.

Some of this fear results from imperfect risk perception. We’re bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are—and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.

Some of this fear stems from the fact that we put people in charge of just one aspect of the risk equation. No one wants to be the senior officer who didn’t approve the SWAT team for the one subpoena delivery that resulted in an officer being shot. No one wants to be the school principal who didn’t discipline—no matter how benign the infraction—the one student who became a shooter. No one wants to be the president who rolled back counterterrorism measures, just in time to have a plot succeed. Those in charge will be naturally risk averse, since they personally shoulder so much of the burden.

We also expect that science and technology should be able to mitigate these risks, as they mitigate so many others. There’s a fundamental problem at the intersection of these security measures with science and technology; it has to do with the types of risk they’re arrayed against. Most of the risks we face in life come from nature: disease, accident, weather, random chance. As our science has improved—medicine is the big one, but other sciences as well—we have become better at mitigating and recovering from those sorts of risks.

Security measures combat a very different sort of risk: a risk stemming from another person. People are intelligent, and they can adapt to new security measures in ways nature cannot. An earthquake isn’t able to figure out how to topple structures constructed under some new and safer building code, and an automobile won’t invent a new form of accident that undermines medical advances that have made existing accidents more survivable. But a terrorist will change his tactics and targets in response to new security measures. An otherwise innocent person will change his behavior in response to a police force that compels compliance at the threat of a Taser. We will all change, living in a surveillance state.

When you implement measures to mitigate the effects of the random risks of the world, you’re safer as a result. When you implement measures to reduce the risks from your fellow human beings, the human beings adapt and you get less risk reduction than you’d expect—and you also get more side effects, because we all adapt.

We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society. The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.

This essay previously appeared on Forbes.com.

EDITED TO ADD (8/5): Slashdot thread.

Posted on September 3, 2013 at 6:41 AM

How Apple Continues to Make Security Invisible

Interesting article:

Apple is famously focused on design and human experience as their top guiding principles. When it comes to security, that focus created a conundrum. Security is all about placing obstacles in the way of attackers, but (despite the claims of security vendors) those same obstacles can get in the way of users, too.

[…]

For many years, Apple tended to choose good user experience at the expense of leaving users vulnerable to security risks. That strategy worked for a long time, in part because Apple’s comparatively low market share made its products less attractive targets. But as Apple products began to gain in popularity, many of us in the security business wondered how Apple would adjust its security strategies to its new position in the spotlight.

As it turns out, the company not only handled that change smoothly, it has embraced it. Despite a rocky start, Apple now applies its impressive design sensibilities to security, playing the game its own way and in the process changing our expectations for security and technology.

EDITED TO ADD (7/11): iOS security white paper.

Posted on July 5, 2013 at 1:33 PM

Preventive vs. Reactive Security

This is kind of a rambling essay on the need to spend more on infrastructure, but I was struck by this paragraph:

Here’s a news flash: There are some events that no society can afford to be prepared for to the extent that we have come to expect. Some quite natural events—hurricanes, earthquakes, tsunamis, derechos—have such unimaginable power that the destruction they wreak will always take days, or weeks, or months to fix. No society can afford to harden the infrastructure that supports it to make that infrastructure immune to such destructive forces.

Add terrorism to that list and it sounds like something I would say. Sometimes it makes more sense to spend money on mitigation than it does to spend it on prevention.

Posted on August 13, 2012 at 12:41 PM

Attack Mitigation

At the RSA Conference this year, I noticed a trend of companies that have products and services designed to help victims recover from attacks. Kelly Jackson Higgins noticed the same thing: “Damage Mitigation as the New Defense.”

That new reality, which has been building for several years starting in the military sector, has shifted the focus from trying to stop attackers at the door to instead trying to lessen the impact of an inevitable hack. The aim is to try to detect an attack as early in its life cycle as possible and to quickly put a stop to any damage, such as extricating the attacker from your data server—or merely stopping him from exfiltrating sensitive information.
It’s more about containment now, security experts say. Relying solely on perimeter defenses is now passé—and naively dangerous. “Organizations that are only now coming to the realization that their network perimeters have been compromised are late to the game. Malware ceased being obvious and destructive years ago,” says Dave Piscitello, senior security technologist for ICANN. “The criminal application of collected/exfiltrated data is now such an enormous problem that it’s impossible to avoid.”

Attacks have become more sophisticated, and social engineering is a powerful, nearly sure-thing tool for attackers to schmooze their way into even the most security-conscious companies. “Security traditionally has been a preventative game, trying to prevent things from happening. What’s been going on is people realizing you cannot do 100 percent prevention anymore,” says Chenxi Wang, vice president and principal analyst for security and risk at Forrester Research. “So we figured out what we’re going to do is limit the damage when prevention fails.”

Posted on April 27, 2012 at 6:53 AM

A Systems Framework for Catastrophic Disaster Response

The National Academies Press has published Crisis Standards of Care: A Systems Framework for Catastrophic Disaster Response.

When a nation or region prepares for public health emergencies such as pandemic influenza, a large-scale earthquake, or any major disaster scenario in which the health system may be destroyed or stressed to its limits, it is important to describe how standards of care would change due to shortages of critical resources. At the 17th World Congress on Disaster and Emergency Medicine, the IOM Forum on Medical and Public Health Preparedness sponsored a session that focused on the promise of and challenges to integrating crisis standards of care principles into international disaster response plans.

Posted on April 6, 2012 at 11:03 AM

Allocating Security Resources to Protect Critical Infrastructure

Alan T. Murray and Tony H. Grubesic, “Critical Infrastructure Protection: The Vulnerability Conundrum,” Telematics & Informatics, 29 (February 2012): 56–65 (full article behind paywall).

Abstract: Critical infrastructure and key resources (CIKR) refer to a broad array of assets which are essential to the everyday functionality of social, economic, political and cultural systems in the United States. The interruption of CIKR poses significant threats to the continuity of these systems and can result in property damage, human casualties and significant economic losses. In recent years, efforts to both identify and mitigate systemic vulnerabilities through federal, state, local and private infrastructure protection plans have improved the readiness of the United States for disruptive events and terrorist threats. However, strategies that focus on worst-case vulnerability reduction, while potentially effective, do not necessarily ensure the best allocation of protective resources. This vulnerability conundrum presents a significant challenge to advanced disaster planning efforts. The purpose of this paper is to highlight the conundrum in the context of CIKR.

Posted on January 2, 2012 at 12:33 PM

Domodedovo Airport Bombing

I haven’t written anything about the suicide bombing at Moscow’s Domodedovo Airport because I didn’t think there was anything to say. The bomber was outside the security checkpoint, in the area where family and friends wait for arriving passengers. From a security perspective, the bombing had nothing to do with airport security. He could have just as easily been in a movie theater, stadium, shopping mall, market, or anywhere else lots of people are crowded together with limited exits. The large death and injury toll indicates the bomber chose his location well.

I’ve often written that security measures that are only effective if the implementers guess the plot correctly are largely wastes of money—at best they would have forced this bomber to choose another target—and that our best security investments are intelligence, investigation, and emergency response. This latest terrorist attack underscores that even more. “Critics say” that the TSA couldn’t have detected this sort of attack. Of course; the TSA can’t be everywhere. And that’s precisely the point.

Many reporters asked me about the likely U.S. reaction. I don’t know; it could range from “Moscow is a long way off and that doesn’t concern us” to “Oh my god we’re all going to die!” The worry, of course, is that we will need to “do something,” even though there is no “something” that should be done.

I was interviewed by the Esquire politics blog about this. I’m not terribly happy with the interview; I was rushed and sloppy on the phone.

Posted on January 28, 2011 at 3:15 PM
