Entries Tagged "psychology of security"


Risk Tolerance and Culture

This is an interesting study on cultural differences in risk tolerance.

The Cultures of Risk Tolerance

Abstract: This study explores the links between culture and risk tolerance, based on surveys conducted in 23 countries. Altogether, more than 4,000 individuals participated in the surveys. Risk tolerance is associated with culture. Risk tolerance is relatively low in countries where uncertainty avoidance is relatively high and in countries which are relatively individualistic. Risk tolerance is also relatively low in countries which are relatively egalitarian and harmonious. And risk tolerance is relatively high in countries where trust is relatively high. Culture is also associated with risk tolerance indirectly, through the association between culture and income-per-capita. People in countries with relatively high income-per-capita tend to be relatively individualistic, egalitarian, and trusting. Risk tolerance is relatively high in countries with relatively low income-per-capita.

Posted on September 14, 2011 at 2:02 PM

Human Pattern-Matching Failures in Airport Screening

I’ve written about this before: the human brain just isn’t suited to finding rare anomalies in a screening situation.

The Role of the Human Operator in Image-Based Airport Security Technologies

Abstract: Heightened international concerns relating to security and identity management have led to an increased interest in security applications, such as face recognition and baggage and passenger screening at airports. A common feature of many of these technologies is that a human operator is presented with an image and asked to decide whether the passenger or baggage corresponds to a person or item of interest. The human operator is a critical component in the performance of the system and it is of considerable interest to not only better understand the performance of human operators on such tasks, but to also design systems with a human operator in mind. This paper discusses a number of human factors issues which will have an impact on human operator performance in the operational environment, as well as highlighting the variables which must be considered when evaluating the performance of these technologies in scenario or operational trials based on Defence Science and Technology Organisation’s experience in such testing.

Posted on September 13, 2011 at 1:46 PM

Steven Pinker on Terrorism

It’s almost time for a deluge of “Ten Years After 9/11” essays. Here’s Steven Pinker:

The discrepancy between the panic generated by terrorism and the deaths generated by terrorism is no accident. Panic is the whole point of terrorism, as the root of the word makes clear: “Terror” refers to a psychological state, not an enemy or an event. The effects of terrorism depend completely on the psychology of the audience.

[…]

Cognitive psychologists such as Amos Tversky, Daniel Kahneman, Gerd Gigerenzer, and Paul Slovic have shown that the perceived danger of a risk depends on two factors: fathomability and dread. People are terrified of risks that are novel, undetectable, delayed in their effects, and poorly understood. And they are terrified about worst-case scenarios, the ones that are uncontrollable, catastrophic, involuntary, and inequitable (that is, the people exposed to the risk are not the ones who benefit from it).

These psychologists suggest that cognitive illusions are a legacy of ancient brain circuitry that evolved to protect us against natural risks such as predators, poisons, storms, and especially enemies. Large-scale terrorist plots are novel, undetectable, catastrophic, and inequitable, and thus maximize both unfathomability and dread. They give the terrorists a large psychological payoff for a small investment in damage.

[…]

Audrey Cronin nicely captures the conflicting moral psychology that defines the arc of terrorist movements: “Violence has an international language, but so does decency.”

Posted on August 18, 2011 at 1:32 PM

Revenge Effects of Too-Safe Playground Equipment

Sometimes too much security isn’t good.

After observing children on playgrounds in Norway, England and Australia, Dr. Sandseter identified six categories of risky play: exploring heights, experiencing high speed, handling dangerous tools, being near dangerous elements (like water or fire), rough-and-tumble play (like wrestling), and wandering alone away from adult supervision. The most common is climbing heights.

“Climbing equipment needs to be high enough, or else it will be too boring in the long run,” Dr. Sandseter said. “Children approach thrills and risks in a progressive manner, and very few children would try to climb to the highest point for the first time they climb. The best thing is to let children encounter these challenges from an early age, and they will then progressively learn to master them through their play over the years.”

[…]

By gradually exposing themselves to more and more dangers on the playground, children are using the same habituation techniques developed by therapists to help adults conquer phobias, according to Dr. Sandseter and a fellow psychologist, Leif Kennair, of the Norwegian University of Science and Technology.

“Risky play mirrors effective cognitive behavioral therapy of anxiety,” they write in the journal Evolutionary Psychology, concluding that this “anti-phobic effect” helps explain the evolution of children’s fondness for thrill-seeking. While a youthful zest for exploring heights might not seem adaptive—why would natural selection favor children who risk death before they have a chance to reproduce?—the dangers seemed to be outweighed by the benefits of conquering fear and developing a sense of mastery.

Posted on July 25, 2011 at 1:06 PM

Fourth SHB Workshop

I’m at SHB 2011, the fourth Interdisciplinary Workshop on Security and Human Behavior, at Carnegie Mellon University. This is a two-day invitational gathering of computer security researchers, psychologists, behavioral economists, sociologists, political scientists, anthropologists, philosophers, and others—all of whom are studying the human side of security—organized by Alessandro Acquisti, Ross Anderson, and me. It’s not just an interdisciplinary conference; most of the people here are individually interdisciplinary. For the past four years, this has been the most intellectually stimulating conference I have attended.

Here is the program. The list of attendees contains links to readings from each of them—definitely a good place to browse for more information on this topic.

Ross Anderson is liveblogging this event. Matt Blaze is taping the sessions; I’ll link to them if he puts them up on the Internet.

Here are links to my posts on the first, second, and third SHB workshops. Follow those links to find summaries, papers, and audio recordings of the workshops.

Posted on June 18, 2011 at 1:06 PM

Open-Source Software Feels Insecure

At first glance, this seems like a particularly dumb opening line of an article:

Open-source software may not sound compatible with the idea of strong cybersecurity, but….

But it’s not. Open source does sound like a security risk. Why would you want the bad guys to be able to look at the source code? They’ll figure out how it works. They’ll find flaws. They’ll—in extreme cases—sneak back-doors into the code when no one is looking.

Of course, these statements rely on the erroneous assumptions that security vulnerabilities are easy to find, that proprietary source code makes them harder to find, and that secrecy is somehow aligned with security. I’ve written about this several times in the past, and there’s no need to rehash the arguments here.

Still, we have to remember that the popular wisdom is that secrecy equals security, and open-source software doesn’t sound compatible with the idea of strong cybersecurity.

Posted on June 2, 2011 at 12:11 PM

Keeping Sensitive Information Out of the Hands of Terrorists Through Self-Restraint

In my forthcoming book (available February 2012), I talk about various mechanisms for societal security: how we as a group protect ourselves from the “dishonest minority” within us. I have four types of societal security systems:

  • moral systems—any internal rewards and punishments;
  • reputational systems—any informal external rewards and punishments;
  • rule-based systems—any formal system of rewards and punishments (mostly punishments); laws, mostly;
  • technological systems—things like walls, door locks, cameras, and so on.

We spend most of our effort in the third and fourth category. I am spending a lot of time researching how the first two categories work.

Given that, I was very interested to see an article by Dallas Boyd in Homeland Security Affairs, “Protecting Sensitive Information: The Virtue of Self-Restraint,” in which he argues that, as a matter of moral responsibility (he calls it “civic duty”), people should not publish information that terrorists could use. Ignore for a moment the debate about whether publishing information that could give terrorists ideas is actually a bad idea (I think it’s not); what Boyd is proposing is still very interesting. He specifically says that censorship is bad and won’t work, and instead wants to see voluntary self-restraint along with public shaming of offenders.

As an alternative to formal restrictions on communication, professional societies and influential figures should promote voluntary self-censorship as a civic duty. As this practice is already accepted among many scientists, it may be transferrable to members of other professions. As part of this effort, formal channels should be established in which citizens can alert the government to vulnerabilities and other sensitive information without exposing it to a wide audience. Concurrent with this campaign should be the stigmatization of those who recklessly disseminate sensitive information. This censure would be aided by the fact that many such people are unattractive figures whose writings betray their intellectual vanity. The public should be quick to furnish the opprobrium that presently escapes these individuals.

I don’t think it will work, and I don’t even think it’s possible in this international day and age, but it’s interesting to read the proposal.

Slashdot thread on the paper. Another article.

Posted on May 31, 2011 at 6:34 AM

The Normalization of Security

TSA-style security is now so normal that it’s part of a Disney ride:

The second room of the queue is now a security check area, similar to a TSA checkpoint. The two G-series droids are still there, G2-9T scanning luggage and G2-4T scanning passengers. For those attraction junkies, you’ll remember that the G-series droids are so named because in the original Disneyland Park version of the ride, they were created by removing the “skins” from two of the goose animatronics from the soon-to-close America Sings attraction (Goose = “G” series). While we won’t tell you why, you’ll enjoy paying a lot of attention to what the scans of the luggage show is inside. When it’s your turn to go through the passenger scan (a thermal body scan), you may be verbally accosted by a security droid. Also, keep an eye out in the queue for an earlier version of RX-24 (“Captain Rex”) from the original Star Tours; he’s labeled “defective” and has some familiar dialogue.

This is the new Star Tours ride at Walt Disney World in Orlando.

Posted on May 20, 2011 at 2:43 PM

