Entries Tagged "fear"


The Politics of Fear

This is very good:

…one might suppose that modern democratic states, with the lessons of history at hand, would seek to minimize fear — or at least minimize its effect on deliberative decision-making in both foreign and domestic policy.

But today the opposite is frequently true. Even democracies founded in the principles of liberty and the common good often take the path of more authoritarian states. They don’t work to minimize fear, but use it to exert control over the populace and serve the government’s principal aim: consolidating power.

[…]

However, since 9/11 leaders of both political parties in the United States have sought to consolidate power by leaning not just on the danger of a terrorist attack, but on the fact that the possible perpetrators are frightening individuals who are not like us. As President George W. Bush put it before a joint session of Congress in 2001: “They hate our freedoms: our freedom of religion, our freedom of speech, our freedom to vote and assemble and disagree with each other.” Last year President Obama brought the enemy closer to home, arguing in a speech at the National Defense University that “we face a real threat from radicalized individuals here in the United States” — radicalized individuals who were “deranged or alienated individuals — often U.S. citizens or legal residents.”

The Bush fear-peddling is usually considered the more extreme, but is it? The Obama formulation puts the “radicalized individuals” in our midst. They could be American citizens or legal residents. And the subtext is that if we want to catch them we need to start looking within. The other is among us. The pretext for the surveillance state is thus established.

Posted on January 29, 2014 at 6:24 AM

Our Newfound Fear of Risk

We’re afraid of risk. It’s a normal part of life, but we’re increasingly unwilling to accept it at any level. So we turn to technology to protect us. The problem is that technological security measures aren’t free. They cost money, of course, but they cost other things as well. They often don’t provide the security they advertise, and — paradoxically — they often increase risk somewhere else. This problem is particularly stark when the risk involves another person: crime, terrorism, and so on. While technology has made us much safer against natural risks like accidents and disease, it works less well against man-made risks.

Three examples:

  1. We have allowed the police to turn themselves into a paramilitary organization. They deploy SWAT teams multiple times a day, almost always in nondangerous situations. They tase people with minimal provocation, often when it’s not warranted. Unprovoked shootings are on the rise. One result of these measures is that honest mistakes — a wrong address on a warrant, a misunderstanding — result in the terrorizing of innocent people, and more death in what were once nonviolent confrontations with police.
  2. We accept zero-tolerance policies in schools. This results in ridiculous situations, where young children are suspended for pointing gun-shaped fingers at other students or drawing pictures of guns with crayons, and high-school students are disciplined for giving each other over-the-counter pain relievers. The cost of these policies is enormous, both in the dollars spent to implement them and in their long-lasting effects on students.
  3. We have spent over one trillion dollars and thousands of lives fighting terrorism in the past decade — including the wars in Iraq and Afghanistan — money that could have been better used in all sorts of ways. We now know that the NSA has turned into a massive domestic surveillance organization, and that its data is also used by other government organizations, which then lie about it. Our foreign policy has changed for the worse: we spy on everyone, we trample human rights abroad, our drones kill indiscriminately, and our diplomatic outposts have either closed down or become fortresses. In the months after 9/11, so many people chose to drive instead of fly that the resulting deaths dwarfed the deaths from the terrorist attack itself, because cars are much more dangerous than airplanes.

There are lots more examples, but the general point is that we tend to fixate on a particular risk and then do everything we can to mitigate it, including giving up our freedoms and liberties.

There’s a subtle psychological explanation. Risk tolerance is both cultural and dependent on the environment around us. As we have advanced technologically as a society, we have reduced many of the risks that have been with us for millennia. Fatal childhood diseases are things of the past, many adult diseases are curable, accidents are rarer and more survivable, buildings collapse less often, death by violence has declined considerably, and so on. All over the world — among the wealthier of us who live in peaceful Western countries — our lives have become safer.

Our notions of risk are not absolute; they’re based more on how far they are from whatever we think of as “normal.” So as our perception of what is normal gets safer, the remaining risks stand out more. When your population is dying of the plague, protecting yourself from the occasional thief or murderer is a luxury. When everyone is healthy, it becomes a necessity.

Some of this fear results from imperfect risk perception. We’re bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are — and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.

Some of this fear stems from the fact that we put people in charge of just one aspect of the risk equation. No one wants to be the senior officer who didn’t approve the SWAT team for the one subpoena delivery that resulted in an officer being shot. No one wants to be the school principal who didn’t discipline — no matter how benign the infraction — the one student who became a shooter. No one wants to be the president who rolled back counterterrorism measures, just in time to have a plot succeed. Those in charge will be naturally risk averse, since they personally shoulder so much of the burden.

We also expect that science and technology should be able to mitigate these risks, as they mitigate so many others. There’s a fundamental problem at the intersection of these security measures with science and technology; it has to do with the types of risk they’re arrayed against. Most of the risks we face in life are against nature: disease, accident, weather, random chance. As our science has improved — medicine is the big one, but other sciences as well — we become better at mitigating and recovering from those sorts of risks.

Security measures combat a very different sort of risk: a risk stemming from another person. People are intelligent, and they can adapt to new security measures in ways nature cannot. An earthquake isn’t able to figure out how to topple structures constructed under some new and safer building code, and an automobile won’t invent a new form of accident that undermines medical advances that have made existing accidents more survivable. But a terrorist will change his tactics and targets in response to new security measures. An otherwise innocent person will change his behavior in response to a police force that compels compliance at the threat of a Taser. We will all change, living in a surveillance state.

When you implement measures to mitigate the effects of the random risks of the world, you’re safer as a result. When you implement measures to reduce the risks from your fellow human beings, the human beings adapt and you get less risk reduction than you’d expect — and you also get more side effects, because we all adapt.

We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings. We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society. The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.

This essay previously appeared on Forbes.com.

EDITED TO ADD (8/5): Slashdot thread.

Posted on September 3, 2013 at 6:41 AM

A Problem with the US Privacy and Civil Liberties Oversight Board

I haven’t heard much about the Privacy and Civil Liberties Oversight Board. They recently held hearings regarding the Snowden documents.

This particular comment stood out:

Rachel Brand, another seemingly unsympathetic board member, concluded: “There is nothing that is more harmful to civil liberties than terrorism. This discussion here has been quite sterile because we have not been talking about terrorism.”

If terrorism harms civil liberties, it’s because elected officials react in panic and revoke them.

I’m not optimistic about this board.

Posted on July 16, 2013 at 7:11 AM

The Psychology of Conspiracy Theories

Interesting.

Crazy as these theories are, those propagating them are not — they’re quite normal, in fact. But recent scientific research tells us this much: if you think one of the theories above is plausible, you probably feel the same way about the others, even though they contradict one another. And it’s very likely that this isn’t the only news story that makes you feel as if shadowy forces are behind major world events.

“The best predictor of belief in a conspiracy theory is belief in other conspiracy theories,” says Viren Swami, a psychology professor who studies conspiracy belief at the University of Westminster in England. Psychologists say that’s because a conspiracy theory isn’t so much a response to a single event as it is an expression of an overarching worldview.

[…]

Our access to high-quality information has not, unfortunately, ushered in an age in which disagreements of this sort can easily be solved with a quick Google search. In fact, the Internet has made things worse. Confirmation bias—the tendency to pay more attention to evidence that supports what you already believe—is a well-documented and common human failing. People have been writing about it for centuries. In recent years, though, researchers have found that confirmation bias is not easy to overcome. You can’t just drown it in facts.

Posted on June 11, 2013 at 12:30 PM

The Politics of Security in a Democracy

Terrorism causes fear, and we overreact to that fear. Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are, and we fear them more than probability indicates we should.

Our leaders are just as prone to this overreaction as we are. But aside from basic psychology, there are other reasons that it’s smart politics to exaggerate terrorist threats, and security threats in general.

The first is that we respond to a strong leader. Bill Clinton famously said: “When people feel uncertain, they’d rather have somebody that’s strong and wrong than somebody who’s weak and right.” He’s right.

The second is that doing something — anything — is good politics. A politician wants to be seen as taking charge, demanding answers, fixing things. It just doesn’t look as good to sit back and claim that there’s nothing to do. The logic is along the lines of: “Something must be done. This is something. Therefore, we must do it.”

The third is that the “fear preacher” wins, regardless of the outcome. Imagine two politicians today. One of them preaches fear and draconian security measures. The other is someone like me, who tells people that terrorism is a negligible risk, that risk is part of life, and that while some security is necessary, we should mostly just refuse to be terrorized and get on with our lives.

Fast-forward 10 years. If I’m right and there have been no more terrorist attacks, the fear preacher takes credit for keeping us safe. But if a terrorist attack has occurred, my government career is over. Even if the incidence of terrorism is as ridiculously low as it is today, there’s no benefit for a politician to take my side of that gamble.

The fourth and final reason is money. Every new security technology, from surveillance cameras to high-tech fusion centers to airport full-body scanners, has a for-profit corporation lobbying for its purchase and use. Given the three other reasons above, it’s easy — and probably profitable — for a politician to make them happy and say yes.

For any given politician, the implications of these four reasons are straightforward. Overestimating the threat is better than underestimating it. Doing something about the threat is better than doing nothing. Doing something that is explicitly reactive is better than being proactive. (If you’re proactive and you’re wrong, you’ve wasted money. If you’re proactive and you’re right but no longer in power, whoever is in power is going to get the credit for what you did.) Visible is better than invisible. Creating something new is better than fixing something old.

Those last two maxims are why it’s better for a politician to fund a terrorist fusion center than to pay for more Arabic translators for the National Security Agency. No one’s going to see the additional appropriation in the NSA’s secret budget. On the other hand, a high-tech computerized fusion center is going to make front page news, even if it doesn’t actually do anything useful.

This leads to another phenomenon about security and government. Once a security system is in place, it can be very hard to dislodge it. Imagine a politician who objects to some aspect of airport security: the liquid ban, the shoe removal, something. If he pushes to relax security, he gets the blame if something bad happens as a result. No one wants to roll back a police power and have the lack of that power cause a well-publicized death, even if it’s a one-in-a-billion fluke.

We’re seeing this force at work in the bloated terrorist no-fly and watch lists; agents have lots of incentive to put someone on the list, but absolutely no incentive to take anyone off. We’re also seeing this in the Transportation Security Administration’s attempt to reverse the ban on small blades on airplanes. Twice it tried to make the change, and twice fearful politicians prevented it from doing so.

Lots of unneeded and ineffective security measures are perpetrated by a government bureaucracy that is primarily concerned about the security of its members’ careers. They know that voters will punish them more for failing to secure against a repetition of the last attack than for failing to anticipate the next one.

What can we do? Well, the first step toward solving a problem is recognizing that you have one. These are not iron-clad rules; they’re tendencies. If we can keep these tendencies and their causes in mind, we’re more likely to end up with sensible security measures that are commensurate with the threat, instead of a lot of security theater and draconian police powers that are not.

Our leaders’ job is to resist these tendencies. Our job is to support politicians who do resist.

This essay originally appeared on CNN.com.

EDITED TO ADD (6/4): This essay has been translated into Swedish.

EDITED TO ADD (6/14): A similar essay, on the politics of terrorism defense.

Posted on May 28, 2013 at 5:09 AM

More Links on the Boston Terrorist Attacks

Max Abrahms has two sensible essays.

Probably the ultimate in security theater: Williams-Sonoma stops selling pressure cookers in the Boston area “out of respect.” They say it’s temporary. (I bought a Williams-Sonoma pressure cooker last Christmas; I wonder if I’m now on a list.)

A tragedy: Sunil Tripathi, whom Reddit and other sites wrongly identified as one of the bombers, was found dead in the Providence River. I hope it’s not a suicide.

And worst of all, New York Mayor Bloomberg scares me more than the terrorists ever could:

In the wake of the Boston Marathon bombings, Mayor Michael Bloomberg said Monday the country’s interpretation of the Constitution will “have to change” to allow for greater security to stave off future attacks.

“The people who are worried about privacy have a legitimate worry,” Mr. Bloomberg said during a press conference in Midtown. “But we live in a complex world where you’re going to have to have a level of security greater than you did back in the olden days, if you will. And our laws and our interpretation of the Constitution, I think, have to change.”

Terrorism’s effectiveness doesn’t come from the terrorist acts themselves; it comes from our reactions to them. We need leaders who aren’t terrorized.

EDITED TO ADD (4/29): Only indirectly related, but the Kentucky Derby is banning “removable lens cameras” for security reasons.

EDITED TO ADD (4/29): And a totally unscientific CNN opinion poll: 57% say no to the question “Is it justifiable to violate certain civil liberties in the name of national security?”

EDITED TO ADD (4/29): It seems that Sunil Tripathi died well before the Boston bombing. So while his family was certainly affected by the false accusations, he wasn’t.

EDITED TO ADD (4/29): On the difference between mass murder and terrorism:

What the United States means by terrorist violence is, in large part, “public violence some weirdo had the gall to carry out using a weapon other than a gun.”

EDITED TO ADD (5/14): On fear fatigue — and a good modeling of how to be indomitable. On the surprising dearth of terrorists. Why emergency medical response has improved since 9/11. What if the Boston bombers had been shooters instead. More on Williams-Sonoma: Shortly thereafter, they released a statement apologizing to anyone who might be offended. Don’t be terrorized. “The new terrorism” — from 2011 (in five parts, and this is the first one). This is kind of wordy, but it’s an interesting essay on the nature of fear…and cats. Glenn Greenwald on reactions to the bombing. How a 20-year-old Saudi victim of the bombing was instantly, and baselessly, converted by the US media and government into a “suspect.” Four effective responses to terrorism. People being terrorized. On not letting the bad guys win. Resilience. More resilience. Why terrorism works. Data shows that terrorism has declined. Mass hysteria as a terrorist weapon.

Posted on April 29, 2013 at 10:27 AM

Random Links on the Boston Terrorist Attack

Encouraging poll data says that maybe Americans are starting to have realistic fears about terrorism, or at least are refusing to be terrorized.

Good essay by Scott Atran on terrorism and our reaction.

Reddit apologizes. I think this is a big story. The Internet is going to help in everything, including trying to identify terrorists. This will happen whether or not the help is needed, wanted, or even helpful. I think this took the FBI by surprise. (Here’s a good commentary on this sort of thing.)

Facial recognition software didn’t help. I agree with this, though; it will only get better.

EDITED TO ADD (4/25): “Hapless, Disorganized, and Irrational”: John Mueller and Mark Stewart describe the Boston — and most other — terrorists.

Posted on April 25, 2013 at 6:42 AM

Initial Thoughts on the Boston Bombings

I rewrote my “refuse to be terrorized” essay for the Atlantic. David Rothkopf (author of the great book Power, Inc.) wrote something similar, and so did John Cole.

It’s interesting to see how much more resonance this idea has today than it did a dozen years ago. If other people have written similar essays, please post links in the comments.

EDITED TO ADD (4/16): Two good essays.

EDITED TO ADD (4/16): I did a Q&A on the Washington Post blog. And — I can hardly believe it — President Obama said “the American people refuse to be terrorized” in a press briefing today.

EDITED TO ADD (4/16): I did a podcast interview and another press interview.

EDITED TO ADD (4/16): This, on the other hand, is pitiful.

EDITED TO ADD (4/17): Another audio interview with me.

EDITED TO ADD (4/19): I have done a lot of press this week. Here’s a link to a “To the Point” segment, and two Huffington Post Live segments. I was on The Steve Malzberg Show, which I didn’t realize was a shouting conservative talk-radio show until it was too late.

EDITED TO ADD (4/20): That Atlantic essay had 40,000 Facebook likes and 6800 Tweets. The editor told me it had about 360,000 hits. That makes it the most popular piece I’ve ever written.

EDITED TO ADD (5/14): More links here.

Posted on April 16, 2013 at 9:19 AM

Elite Panic

I hadn’t heard of this term before, but it’s an interesting one. The excerpt below is from an interview with Rebecca Solnit, author of A Paradise Built in Hell: The Extraordinary Communities That Arise in Disaster:

The term “elite panic” was coined by Caron Chess and Lee Clarke of Rutgers. From the beginning of the field in the 1950s to the present, the major sociologists of disaster — Charles Fritz, Enrico Quarantelli, Kathleen Tierney, and Lee Clarke — proceeding in the most cautious, methodical, and clearly attempting-to-be-politically-neutral way of social scientists, arrived via their research at this enormous confidence in human nature and deep critique of institutional authority. It’s quite remarkable.

Elites tend to believe in a venal, selfish, and essentially monstrous version of human nature, which I sometimes think is their own human nature. I mean, people don’t become incredibly wealthy and powerful by being angelic, necessarily. They believe that only their power keeps the rest of us in line and that when it somehow shrinks away, our seething violence will rise to the surface — that was very clear in Katrina. Timothy Garton Ash and Maureen Dowd and all these other people immediately jumped on the bandwagon and started writing commentaries based on the assumption that the rumors of mass violence during Katrina were true. A lot of people have never understood that the rumors were dispelled and that those things didn’t actually happen; it’s tragic.

But there’s also an elite fear — going back to the 19th century — that there will be urban insurrection. It’s a valid fear. I see these moments of crisis as moments of popular power and positive social change. The major example in my book is Mexico City, where the ’85 earthquake prompted public disaffection with the one-party system and, therefore, the rebirth of civil society.

Posted on April 8, 2013 at 1:30 PM

Government Use of Hackers as an Object of Fear

Interesting article about the perception of hackers in popular culture, and how the government uses the general fear of them to push for more power:

But these more serious threats don’t seem to loom as large as hackers in the minds of those who make the laws and regulations that shape the Internet. It is the hacker — a sort of modern folk devil who personifies our anxieties about technology — who gets all the attention. The result is a set of increasingly paranoid and restrictive laws and regulations affecting our abilities to communicate freely and privately online, to use and control our own technology, and which puts users at risk for overzealous prosecutions and invasive electronic search and seizure practices. The Computer Fraud and Abuse Act, the cornerstone of domestic computer-crime legislation, is overly broad and poorly defined. Since its passage in 1986, it has created a pile of confused caselaw and overzealous prosecutions. The Departments of Defense and Homeland Security manipulate fears of techno-disasters to garner funding and support for laws and initiatives, such as the recently proposed Cyber Intelligence Sharing and Protection Act, that could have horrific implications for user rights. In order to protect our rights to free speech and privacy on the internet, we need to seriously reconsider those laws and the shadowy figure used to rationalize them.

[…]

In the effort to protect society and the state from the ravages of this imagined hacker, the US government has adopted overbroad, vaguely worded laws and regulations which severely undermine internet freedom and threaten the Internet’s role as a place of political and creative expression. In an effort to stay ahead of the wily hacker, laws like the Computer Fraud and Abuse Act (CFAA) focus on electronic conduct or actions, rather than the intent of or actual harm caused by those actions. This leads to a wide range of seemingly innocuous digital activities potentially being treated as criminal acts. Distrust for the hacker politics of Internet freedom, privacy, and access abets the development of ever-stricter copyright regimes, or laws like the proposed Cyber Intelligence Sharing and Protection Act, which if passed would have disastrous implications for personal privacy online.

Note that this was written last year, before any of the recent overzealous prosecutions.

Posted on April 8, 2013 at 6:34 AM
