Entries Tagged "psychology of security"


Bin Laden's Death Causes Spike in Suspicious Package Reports

It’s not that the risk is greater, it’s that the fear is greater. Data from New York:

There were 10,566 reports of suspicious objects across the five boroughs in 2010. So far this year, the total was 2,775 as of Tuesday compared with 2,477 through the same period last year.

[…]

The daily totals typically spike when a terrorist plot makes headlines here or overseas, NYPD spokesman Paul Browne said Tuesday. The false alarms themselves sometimes get breaking cable news coverage or feed chatter online, fueling further fright.

On Monday, with news of the dramatic military raid of bin Laden’s Pakistani lair at full throttle, there were 62 reports of suspicious packages. The previous Monday, the 24-hour total was 18. All were deemed non-threats.

Despite all the false alarms, the New York Police Department still wants to hear them:

“We anticipate that with increased public vigilance comes an increase in false alarms for suspicious packages,” Kelly said at the Monday news conference. “This typically happens at times of heightened awareness. But we don’t want to discourage the public. If you see something, say something.”

That slogan, oddly enough, is owned by New York’s transit authority.

I have a different opinion: “If you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.”

People have always come forward to tell the police when they see something genuinely suspicious, and should continue to do so. But encouraging people to raise an alarm every time they’re spooked only squanders our security resources and makes no one safer.

“Refuse to be terrorized,” people.

Posted on May 5, 2011 at 6:43 AM

Social Solidarity as an Effect of the 9/11 Terrorist Attacks

It’s standard sociological theory that a group experiences social solidarity in response to external conflict. This paper studies the phenomenon in the United States after the 9/11 terrorist attacks.

Conflict produces group solidarity in four phases: (1) an initial few days of shock and idiosyncratic individual reactions to attack; (2) one to two weeks of establishing standardized displays of solidarity symbols; (3) two to three months of high solidarity plateau; and (4) gradual decline toward normalcy in six to nine months. Solidarity is not uniform but is clustered in local groups supporting each other’s symbolic behavior. Actual solidarity behaviors are performed by minorities of the population, while vague verbal claims to performance are made by large majorities. Commemorative rituals intermittently revive high emotional peaks; participants become ranked according to their closeness to a center of ritual attention. Events, places, and organizations claim importance by associating themselves with national solidarity rituals and especially by surrounding themselves with pragmatically ineffective security ritual. Conflicts arise over access to centers of ritual attention; clashes occur between pragmatists deritualizing security and security zealots attempting to keep up the level of emotional intensity. The solidarity plateau is also a hysteria zone; as a center of emotional attention, it attracts ancillary attacks unrelated to the original terrorists as well as alarms and hoaxes. In particular historical circumstances, it becomes a period of atrocities.

This certainly makes sense as a group survival mechanism: self-interest giving way to group interest in the face of a threat to the group. It’s the kind of thing I am talking about in my new book.

Paper also available here.

Posted on April 27, 2011 at 9:10 AM

Detecting Cheaters

Our brains are specially designed to deal with cheating in social exchanges. The evolutionary psychology explanation is that we evolved brain heuristics for the social problems that our prehistoric ancestors had to deal with. Once humans became good at cheating, they then had to become good at detecting cheating—otherwise, the social group would fall apart.

Perhaps the most vivid demonstration of this can be seen with variations on what’s known as the Wason selection task, named after the psychologist who first studied it. Back in the 1960s, it was a test of logical reasoning; today, it’s used more as a demonstration of evolutionary psychology. But before we get to the experiment, let’s get into the mathematical background.

Propositional calculus is a system for deducing conclusions from true premises. It uses variables for statements because the logic works regardless of what the statements are. College courses on the subject are taught by either the mathematics or the philosophy department, and they’re not generally considered to be easy classes. Two particular rules of inference are relevant here: modus ponens and modus tollens. Both allow you to reason from a statement of the form, "if P, then Q." (If Socrates was a man, then Socrates was mortal. If you are to eat dessert, then you must first eat your vegetables. If it is raining, then Gwendolyn had Crunchy Wunchies for breakfast. That sort of thing.) Modus ponens goes like this:

If P, then Q. P. Therefore, Q.

In other words, if you assume the conditional rule is true, and if you assume the antecedent of that rule is true, then the consequent is true. So,

If Socrates was a man, then Socrates was mortal. Socrates was a man. Therefore, Socrates was mortal.

Modus tollens is more complicated:

If P, then Q. Not Q. Therefore, not P.

If Socrates was a man, then Socrates was mortal. Socrates was not mortal. Therefore, Socrates was not a man.

This makes sense: if Socrates was not mortal, then he was a demigod or a stone statue or something.

Both are valid forms of logical reasoning. If you know "if P, then Q" and "P," then you know "Q." If you know "if P, then Q" and "not Q," then you know "not P." (The other two similar forms don’t work. If you know "if P, then Q" and "Q," you don’t know anything about "P." And if you know "if P, then Q" and "not P," then you don’t know anything about "Q.")
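These four inference patterns can be checked mechanically by brute-forcing the truth table of the conditional. Here is a minimal Python sketch (the function name and string labels are mine, purely for illustration):

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff the conclusion is true in every
    truth assignment that makes all the premises true."""
    for p, q in product([True, False], repeat=2):
        env = {
            "P": p, "Q": q,
            "not P": not p, "not Q": not q,
            "if P then Q": (not p) or q,  # truth table of the conditional
        }
        if all(env[s] for s in premises) and not env[conclusion]:
            return False  # found a counterexample assignment
    return True

print(valid(["if P then Q", "P"], "Q"))          # modus ponens: True
print(valid(["if P then Q", "not Q"], "not P"))  # modus tollens: True
print(valid(["if P then Q", "Q"], "P"))          # affirming the consequent: False
print(valid(["if P then Q", "not P"], "not Q"))  # denying the antecedent: False
```

The two invalid forms fail because there is a counterexample assignment for each: P false and Q true satisfies the premises but not the conclusion.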

If I explained this in front of an audience full of normal people, not mathematicians or philosophers, most of them would be lost. Unsurprisingly, they would have trouble either explaining the rules or using them properly. Just ask any grad student who has had to teach a formal logic class; people have trouble with this.

Consider the Wason selection task. Subjects are presented with four cards next to each other on a table. Each card represents a person, with each side listing some statement about that person. The subject is then given a general rule and asked which cards he would have to turn over to ensure that the four people satisfied that rule. For example, the general rule might be, "If a person travels to Boston, then he or she takes a plane." The four cards might correspond to travelers and have a destination on one side and a mode of transport on the other. On the side facing the subject, they read: "went to Boston," "went to New York," "took a plane," and "took a car." Formal logic states that the rule is violated if someone goes to Boston without taking a plane. Translating into propositional calculus, there’s the general rule: if P, then Q. The four cards are "P," "not P," "Q," and "not Q." To verify that "if P, then Q" is a valid rule, you have to verify modus ponens by turning over the "P" card and making sure that the reverse says "Q." To verify modus tollens, you turn over the "not Q" card and make sure that the reverse doesn’t say "P."

Shifting back to the example, you need to turn over the "went to Boston" card to make sure that person took a plane, and you need to turn over the "took a car" card to make sure that person didn’t go to Boston. You don’t—as many people think—need to turn over the "took a plane" card to see if it says "went to Boston" because you don’t care. The person might have been flying to Boston, New York, San Francisco, or London. The rule only says that people going to Boston fly; it doesn’t break the rule if someone flies elsewhere.
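The card-selection bookkeeping can be written out the same way: only a card showing "P" or "not Q" can possibly conceal a rule violation, so only those two need turning over. A small sketch, using the travel example above (the labels and helper name are mine):

```python
# Visible face of each card, classified against the rule
# "If a person travels to Boston (P), then he or she takes a plane (Q)."
cards = {
    "went to Boston": "P",
    "went to New York": "not P",
    "took a plane": "Q",
    "took a car": "not Q",
}

def must_turn(cards):
    # A card can falsify "if P, then Q" only if its hidden side could
    # combine P with not-Q: the "P" card (hidden side might be not-Q)
    # and the "not Q" card (hidden side might be P).
    return [face for face, role in cards.items() if role in ("P", "not Q")]

print(must_turn(cards))  # ['went to Boston', 'took a car']
```

The "Q" and "not P" cards drop out for exactly the reason in the text: whatever is on their hidden sides, they cannot violate the rule.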

If you’re confused, you aren’t alone. When Wason first did this study, fewer than 10 percent of his subjects got it right. Others replicated the study and got similar results. The best result I’ve seen is "fewer than 25 percent." Training in formal logic doesn’t seem to help very much. Neither does ensuring that the example is drawn from events and topics with which the subjects are familiar. People are just bad at the Wason selection task. They also tend to take college logic classes only when required to.

This isn’t just another "math is hard" story. There’s a point to this. The one variation of this task that people are surprisingly good at getting right is when the rule has to do with cheating and privilege. For example, change the four cards to children in a family—"gets dessert," "doesn’t get dessert," "ate vegetables," and "didn’t eat vegetables"—and change the rule to "If a child gets dessert, he or she ate his or her vegetables." Many people—65 to 80 percent—get it right immediately. They turn over the "gets dessert" card, making sure the child ate his vegetables, and they turn over the "didn’t eat vegetables" card, making sure the child didn’t get dessert. Another way of saying this is that they turn over the "benefit received" card to make sure the cost was paid. And they turn over the "cost not paid" card to make sure no benefit was received. They look for cheaters.

The difference is startling. Subjects don’t need formal logic training. They don’t need math or philosophy. When asked to explain their reasoning, they say things like the answer "popped out at them."

Researchers, particularly evolutionary psychologists Leda Cosmides and John Tooby, have run this experiment with a variety of wordings and settings and on a variety of subjects: adults in the US, UK, Germany, Italy, France, and Hong Kong; Ecuadorian schoolchildren; and Shiriar tribesmen in Ecuador. The results are the same: people are bad at the Wason selection task, except when the wording involves cheating.

In the world of propositional calculus, there’s absolutely no difference between a rule about traveling to Boston by plane and a rule about eating vegetables to get dessert. But in our brains, there’s an enormous difference: the first is an arbitrary rule about the world, and the second is a rule of social exchange. It’s of the form "If you take Benefit B, you must first satisfy Requirement R."

Our brains are optimized to detect cheaters in a social exchange. We’re good at it. Even as children, we intuitively notice when someone gets a benefit he didn’t pay the cost for. Those of us who grew up with a sibling have experienced how one child not only knew that the other cheated, but felt compelled to announce it to the rest of the family. As adults, we might have learned that life isn’t fair, but we still know who among our friends cheats in social exchanges. We know who doesn’t pay his or her fair share of a group meal. At an airport, we might not notice the rule "If a plane is flying internationally, then it boards 15 minutes earlier than domestic flights." But we’ll certainly notice who breaks the "If you board first, then you must be a first-class passenger" rule.

This essay was originally published in IEEE Security & Privacy, and is an excerpt from the draft of my new book.

EDITED TO ADD (4/14): Another explanation of the Wason Selection Task, with a possible correlation with psychopathy.

Posted on April 7, 2011 at 1:10 PM

Folk Models in Home Computer Security

This is a really interesting paper: “Folk Models of Home Computer Security,” by Rick Wash. It was presented at SOUPS, the Symposium on Usable Privacy and Security, last year.

Abstract:

Home computer systems are frequently insecure because they are administered by untrained, unskilled users. The rise of botnets has amplified this problem; attackers can compromise these computers, aggregate them, and use the resulting network to attack third parties. Despite a large security industry that provides software and advice, home computer users remain vulnerable. I investigate how home computer users make security-relevant decisions about their computers. I identify eight ‘folk models’ of security threats that are used by home computer users to decide what security software to use, and which security advice to follow: four different conceptualizations of ‘viruses’ and other malware, and four different conceptualizations of ‘hackers’ that break into computers. I illustrate how these models are used to justify ignoring some security advice. Finally, I describe one reason why botnets are so difficult to eliminate: they have been cleverly designed to take advantage of gaps in these models so that many home computer users do not take steps to protect against them.

I’d list the models, but it’s more complicated than that. Read the paper.

Posted on March 22, 2011 at 7:12 AM

Societal Security

Humans have a natural propensity to trust non-kin, even strangers. We do it so often, so naturally, that we don’t even realize how remarkable it is. But except for a few simplistic counterexamples, it’s unique among life on this planet. Because we are intelligently calculating and value reciprocity (that is, fairness), we know that humans will be honest and nice: not for any immediate personal gain, but because that’s how they are. We also know that doesn’t work perfectly; most people will be dishonest some of the time, and some people will be dishonest most of the time. How does society—the honest majority—prevent the dishonest minority from taking over, or ruining society for everyone? How is the dishonest minority kept in check? The answer is security—in particular, something I’m calling societal security.

I want to divide security into two types. The first is individual security. It’s basic. It’s direct. It’s what normally comes to mind when we think of security. It’s cops vs. robbers, terrorists vs. the TSA, Internet worms vs. firewalls. And this sort of security is as old as life itself or—more precisely—as old as predation. And humans have brought an incredible level of sophistication to individual security.

Societal security is different. At the tactical level, it also involves attacks, countermeasures, and entire security systems. But instead of A vs. B, or even Group A vs. Group B, it’s Group A vs. members of Group A. It’s security for individuals within a group from members of that group. It’s how Group A protects itself from the dishonest minority within Group A. And it’s where security really gets interesting.

There are many types—I might try to estimate the number someday—of societal security systems that enforce our trust of non-kin. They’re things like laws prohibiting murder, taxes, traffic laws, pollution control laws, religious intolerance, Mafia codes of silence, and moral codes. They enable us to build a society that the dishonest minority can’t exploit and destroy. Originally, these security systems were informal. But as society got more complex, the systems became more formalized, and eventually were embedded into technologies.

James Madison famously wrote: “If men were angels, no government would be necessary.” Government is just the beginning of what wouldn’t be necessary. Currency, that paper stuff that’s deliberately made hard to counterfeit, wouldn’t be necessary, as people could just keep track of how much money they had. Angels never cheat, so nothing more would be required. Door locks, and any barrier that isn’t designed to protect against accidents, wouldn’t be necessary, since angels never go where they’re not supposed to go. Police forces wouldn’t be necessary. Armies: I suppose that’s debatable. Would angels—not the fallen ones—ever go to war against one another? I’d like to think they would be able to resolve their differences peacefully. If people were angels, every security measure that isn’t designed to be effective against accident, animals, forgetfulness, or legitimate differences between scrupulously honest angels could be dispensed with.

Security isn’t just a tax on the honest; it’s a very expensive tax on the honest. It’s the most expensive tax we pay, regardless of the country we live in. If people were angels, just think of the savings!

It wasn’t always like this. Security—especially societal security—used to be cheap. It used to be an incidental cost of society.

In a primitive society, informal systems are generally good enough. When you’re living in a small community, and objects are both scarce and hard to make, it’s pretty easy to deal with the problem of theft. If Alice loses a bowl, and at the same time, Bob shows up with an identical bowl, everyone knows Bob stole it from Alice, and the community can then punish Bob as it sees fit. But as communities get larger, as social ties weaken and anonymity increases, this informal system of theft prevention—detection and punishment leading to deterrence—fails. As communities get more technological and as the things people might want to steal get more interchangeable and harder to identify, it also fails. In short, as our ancestors made the move from small family groups to larger groups of unrelated families, and then to a modern form of society, the informal societal security systems started failing and more formal systems had to be invented to take their place. We needed to put license plates on cars and audit people’s tax returns.

We had no choice. Anything larger than a very primitive society couldn’t exist without societal security.

I’m writing a book about societal security. I will discuss human psychology: how we make security trade-offs, why we routinely trust non-kin (an evolutionary puzzle, to be sure), why the majority of us are honest, and why a minority of us are dishonest. That dishonest minority are the free riders of societal systems, and security is how we protect society from them. I will model the fundamental trade-off of societal security—individual self-interest vs. societal group interest—as a group prisoner’s dilemma problem, and use that metaphor to examine the basic mechanics of societal security. A lot falls out of this: free riders, the Tragedy of the Commons, the subjectivity of both morals and risk trade-offs.
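The group prisoner’s dilemma dynamic can be sketched in a few lines as a public-goods game. This is only an illustration of the trade-off described above, not anything from the book; the function name and the payoff numbers are my own assumptions:

```python
def payoff(cooperators, n, benefit=3.0, cost=2.0):
    """Public-goods form of the group prisoner's dilemma: each
    cooperator pays `cost` into a pool that returns `benefit` per
    contribution, shared equally by all n players (including defectors).
    Returns (cooperator payoff, defector payoff)."""
    share = benefit * cooperators / n
    return share - cost, share

n = 10
for k in (10, 9, 0):  # everyone cooperates; one free rider; everyone defects
    coop, defect = payoff(k, n)
    print(f"{k} cooperators: cooperator={coop:.1f}, defector={defect:.1f}")
```

With these numbers, universal cooperation pays each player 1.0 and universal defection pays 0.0, yet any single player does better by defecting (2.7 versus 1.0 when the other nine cooperate). That gap is the free-rider incentive, and societal security is the mechanism that closes it.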

Using this model, I will explore the security systems that protect—and fail to protect—market economics, corporations and other organizations, and a variety of national systems. I think there’s a lot we can learn about security by applying the prisoner’s dilemma model, and I’ve only recently started. Finally, I want to discuss modern changes to our millennia-old systems of societal security. The Information Age has changed a number of paradigms, and it’s not clear that our old security systems are working properly now or will work in the future. I’ve got a lot of work to do yet, and the final book might look nothing like this short outline. That sort of thing happens.

Tentative title: The Dishonest Minority: Security and its Role in Modern Society. I’ve written several books on the how of security. This book is about the why of security.

I expect to finish my first draft before summer. Throughout 2011, expect to see bits from the book here. They might not make sense as a coherent whole at first—especially because I don’t write books in strict order—but by the time the book is published, it’ll all be part of a coherent and (hopefully) compelling narrative.

And if I write fewer extended blog posts and essays in the coming year, you’ll know why.

Posted on February 15, 2011 at 5:43 AM

Terrorist Targets of Choice

This makes sense.

Generally, militants prefer to attack soft targets where there are large groups of people, that are symbolic and recognizable around the world and that will generate maximum media attention when attacked. Some past examples include the World Trade Center in New York, the Taj Mahal Hotel in Mumbai and the London Underground. The militants’ hope is that if the target meets these criteria, terror magnifiers like the media will help the attackers produce a psychological impact that goes far beyond the immediate attack site, a process we refer to as “creating vicarious victims.” The best-case scenario for the attackers is that this psychological impact will also produce an adverse economic impact against the targeted government.

Unlike hard targets, which frequently require attackers to use large teams of operatives with elaborate attack plans or very large explosive devices in order to breach defenses, soft targets offer militant planners an advantage in that they can frequently be attacked by a single operative or small team using a simple attack plan. The failed May 1, 2010, attack against New York’s Times Square and the July 7, 2005, London Underground attacks are prime examples of this, as was the Jan. 24 attack at Domodedovo airport. Such attacks are relatively cheap and easy to conduct and can produce a considerable propaganda return for very little investment.

Posted on February 4, 2011 at 6:00 AM

James Fallows on Political Shootings

Interesting:

So the train of logic is:

  1. anything that can be called an “assassination” is inherently political;
  2. very often the “politics” are obscure, personal, or reflecting mental disorders rather than “normal” political disagreements. But now a further step,
  3. the political tone of an era can have some bearing on violent events. The Jonestown/Ryan and Fromme/Ford shootings had no detectable source in deeper political disagreements of that era. But the anti-JFK hate-rhetoric in Dallas before his visit was so intense that for decades people debated whether the city was somehow “responsible” for the killing. (Even given that Lee Harvey Oswald was an outlier in all ways.)

Posted on January 10, 2011 at 7:04 AM

The Social Dynamics of Terror

Good essay:

Nineteenth-century anarchists promoted what they called the “propaganda of the deed,” that is, the use of violence as a symbolic action to make a larger point, such as inspiring the masses to undertake revolutionary action. In the late 1960s and early 1970s, modern terrorist organizations began to conduct operations designed to serve as terrorist theater, an undertaking greatly aided by the advent and spread of broadcast media. Examples of attacks designed to grab international media attention are the September 1972 kidnapping and murder of Israeli athletes at the Munich Olympics and the December 1975 raid on OPEC headquarters in Vienna. Aircraft hijackings followed suit, changing from relatively brief endeavors to long, drawn-out and dramatic media events often spanning multiple continents.

Today, the proliferation of 24-hour television news networks and the Internet have allowed the media to broadcast such attacks live and in their entirety. This development allowed vast numbers of people to watch live as the World Trade Center towers collapsed on Sept. 11, 2001, and as teams of gunmen ran amok in Mumbai in November 2008.

This exposure not only allows people to be informed about unfolding events, it also permits them to become secondary victims of the violence they have watched unfold before them. As the word indicates, the intent of “terrorism” is to create terror in a targeted audience, and the media allow that audience to become far larger than just those in the immediate vicinity of a terrorist attack. I am not a psychologist, but even I can understand that on 9/11, watching the second aircraft strike the South Tower, seeing people leap to their deaths from the windows of the World Trade Center Towers in order to escape the ensuing fire and then watching the towers collapse live on television had a profound impact on many people. A large portion of the United States was, in effect, victimized, as were a large number of people living abroad, judging from the statements of foreign citizens and leaders in the wake of 9/11 that “We are all Americans.”

Posted on January 7, 2011 at 6:30 AM

"Architecture of Fear"

I like the phrase:

Németh said the zones not only affect the appearance of landmark buildings but also reflect an ‘architecture of fear’ as evidenced, for example, by the bunker-like appearance of embassies and other perceived targets.

Ultimately, he said, these places impart a dual message—simultaneously reassuring the public while causing a sense of unease.

And in the end, their effect could be negligible.

“Indeed, overt security measures may be no more effective than covert intelligence techniques,” he said. “But the architecture aims to comfort both property developers concerned with investment risk and residents and tourists with the notion that terror threats are being addressed and that daily life will soon ‘return to normal.'”

My own essay on architecture and security from 2006.

EDITED TO ADD (1/13): Here’s the full paper. And some stuff from the Whole Building Design Guide site. Also see the planned U.S. embassy in London, which includes a moat.

Posted on December 20, 2010 at 5:55 AM

