Entries Tagged "psychology of security"


Wanted: Trust Detector

It’s good to dream:

IARPA’s five-year plan aims to design experiments that can measure trust with high certainty—a tricky proposition for a psychological study. Developing such experimental protocols could prove very useful for assessing levels of trust within one-on-one talks, or even during group interactions.

A second part of the IARPA proposal might involve using new types of sensors and software to gauge human facial, language or body signals that might help predict trustworthiness. Perhaps facial recognition technology that could deduce emotions or facial tics might help, not to mention better lie detectors.

IARPA is the Intelligence Advanced Research Projects Activity, the U.S. intelligence community’s answer to DARPA.

Posted on March 11, 2010 at 6:17 AM

The Limits of Visual Inspection

Interesting research:

Target prevalence powerfully influences visual search behavior. In most visual search experiments, targets appear on at least 50% of trials. However, when targets are rare (as in medical or airport screening), observers shift response criteria, leading to elevated miss error rates. Observers also speed target-absent responses and may make more motor errors. This could be a speed/accuracy tradeoff with fast, frequent absent responses producing more miss errors. Disproving this hypothesis, our experiment one shows that very high target prevalence (98%) shifts response criteria in the opposite direction, leading to elevated false alarms in a simulated baggage search. However, the very frequent target-present responses are not speeded. Rather, rare target-absent responses are greatly slowed. In experiment two, prevalence was varied sinusoidally over 1000 trials as observers’ accuracy and reaction times (RTs) were measured. Observers’ criterion and target-absent RTs tracked prevalence. Sensitivity (d’) and target-present RTs did not vary with prevalence. These results support a model in which prevalence influences two parameters: a decision criterion governing the series of perceptual decisions about each attended item, and a quitting threshold that governs the timing of target-absent responses. Models in which target prevalence only influences an overall decision criterion are not supported.

This has implications for searching for contraband at airports.
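
The finding can be illustrated with a minimal signal-detection sketch (mine, not the authors' code): with sensitivity (d') held fixed, shifting to a more conservative response criterion, as observers do when targets are rare, raises the miss rate on target-present trials. All parameter values below are illustrative.

```python
import random

random.seed(0)

def miss_rate(d_prime, criterion, n_targets=10_000):
    """Fraction of target-present trials judged 'absent'.

    Equal-variance signal detection: evidence on a target trial is
    drawn from N(d', 1); the observer responds 'present' only when
    the evidence exceeds the criterion.
    """
    misses = sum(
        1 for _ in range(n_targets)
        if random.gauss(d_prime, 1.0) <= criterion
    )
    return misses / n_targets

d_prime = 2.0  # sensitivity held fixed, as the paper reports
neutral = miss_rate(d_prime, criterion=1.0)       # ~50% prevalence
conservative = miss_rate(d_prime, criterion=2.0)  # rare-target shift
print(f"neutral criterion:      miss rate = {neutral:.2f}")
print(f"conservative criterion: miss rate = {conservative:.2f}")
```

The point is that the elevated misses at low prevalence need no loss of sensitivity at all; moving the criterion alone roughly triples the miss rate in this toy setup.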

Posted on February 8, 2010 at 1:54 PM

Nate Silver on the Risks of Airplane Terrorism

Over at fivethirtyeight.com, Nate Silver crunches the numbers and concludes that, at least as far as terrorism is concerned, air travel is safer than it’s ever been:

In the 2000s, a total of 469 passengers (including crew and terrorists) were killed worldwide as the result of Violent Passenger Incidents, 265 of which were on 9/11 itself. No fatal incidents have occurred since the nearly simultaneous bombings of two Russian aircraft on 8/24/2004; this makes for the longest streak without a fatal incident since World War II. The overall death toll during the 2000s is about the same as it was during the 1960s, and substantially less than in the 1970s and 1980s, when violent incidents peaked. The worst individual years were 1985, 1988 and 1989, in that order; 2001 ranks fourth.

Of course, there is a lot more air travel now than there was a couple of decades ago. Although worldwide data is difficult to obtain, U.S. air travel generally expanded at rates of 10-15% per year from the 1930s through 9/11. If we assume that U.S. air traffic represents about a third of the worldwide total (the U.S. share of global GDP, which is probably a reasonable proxy, has fairly consistently been between 26-28% during this period), we can estimate the number of deaths from Violent Passenger Incidents per one billion passenger boardings. By this measure, the 2000s tied the 1990s for being the safest on record, each of which were about six times safer than any previous decade. About 22 passengers per one billion enplanements were killed as the result of VPIs during the 2000s; this compares with a rate of about 191 deaths per billion enplanements during the 1960s.

Why? Because over the past decade, the risk of airplane terrorism has been very low:

Over the past decade, according to BTS, there have been 99,320,309 commercial airline departures that either originated or landed within the United States. Dividing by six [the number of terrorist incidents over the decade], we get one terrorist incident per 16,553,385 departures.

These departures flew a collective 69,415,786,000 miles. That means there has been one terrorist incident per 11,569,297,667 miles flown. This distance is equivalent to 1,459,664 trips around the diameter of the Earth, 24,218 round trips to the Moon, or two round trips to Neptune.

Assuming an average airborne speed of 425 miles per hour, these airplanes were aloft for a total of 163,331,261 hours. Therefore, there has been one terrorist incident per 27,221,877 hours airborne. This can also be expressed as one incident per 1,134,245 days airborne, or one incident per 3,105 years airborne.

There were a total of 674 passengers, not counting crew or the terrorists themselves, on the flights on which these incidents occurred. By contrast, there have been 7,015,630,000 passenger enplanements over the past decade. Therefore, the odds of being on a given departure which is the subject of a terrorist incident have been 1 in 10,408,947 over the past decade. By contrast, the odds of being struck by lightning in a given year are about 1 in 500,000. This means that you could board 20 flights per year and still be less likely to be the subject of an attempted terrorist attack than to be struck by lightning.
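
Silver's arithmetic is easy to check. Here's a back-of-envelope reproduction using the figures from the excerpt; the count of six incidents and the 425 mph average speed are his inputs, not mine.

```python
# Figures taken from the excerpt above.
departures = 99_320_309         # BTS commercial departures, 2000s
miles = 69_415_786_000          # total miles flown
hours = miles / 425             # assumed 425 mph average airborne speed
enplanements = 7_015_630_000    # passenger enplanements, 2000s
incidents = 6                   # terrorist incidents over the decade
passengers_on_those_flights = 674

print(f"departures per incident: {departures / incidents:,.0f}")
print(f"miles per incident:      {miles / incidents:,.0f}")
print(f"hours per incident:      {hours / incidents:,.0f}")
print(f"odds per enplanement: 1 in "
      f"{enplanements / passengers_on_those_flights:,.0f}")
```

The quotients match the figures in the quote: one incident per 16,553,385 departures, per 11,569,297,667 miles, per 27,221,877 airborne hours, and per-passenger odds of 1 in 10,408,947.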

In 2008, 37,000 people died in automobile accidents—the lowest number since 1961. Even so, that’s more than a 9/11 worth of fatalities every month, month after month, year after year.

There are all sorts of psychological biases that cause us to both misjudge risk and overreact to rare risks, but we can do better than that if we stop and think rationally.

Posted on January 6, 2010 at 2:59 PM

Emotional Epidemiology

This, from The New England Journal of Medicine, sounds familiar:

This is the story line for most headline-grabbing illnesses—HIV, Ebola virus, SARS, typhoid. These diseases capture our imagination and ignite our fears in ways that more prosaic illnesses do not. These dramatic stakes lend themselves quite naturally to thriller books and movies; Dustin Hoffman hasn’t starred in any blockbusters about emphysema or dysentery.

When the inoculum of dramatic illness is first introduced into society, the public psyche rapidly becomes infected. Almost like an IgE-mediated histamine release, there is an immediate flooding of fear, even if the illness—like Ebola—is infinitely less likely to cause death than, say, a run-in with the Second Avenue bus. This immediate fear of the unknown was what had all my patients demanding the as-yet-unproduced H1N1 vaccine last spring.

As the novel disease establishes itself within society, a certain amount of emotional tolerance is created. H1N1 infection waxed and waned over the summer, and my patients grew less anxious. There was, of course, no medical basis for this decreased vigilance. Unusual risk groups and atypical seasonality should, in fact, have raised concern. By late summer, the perceived mysteriousness of H1N1 had receded, and the number of messages on my clinic phone followed suit.

But emotional epidemiology does not remain static. As autumn rolled around, I sensed a peeved expectation from my patients that this swine flu problem should have been solved already. The fact that it wasn’t “solved,” that the medical profession seemed somehow to be dithering, created an uneasy void. Not knowing whether to succumb to panic or to indifference, patients instead grew suspicious.

Posted on December 9, 2009 at 6:43 AM

The Psychology of Being Scammed

This is a very interesting paper: “Understanding scam victims: seven principles for systems security,” by Frank Stajano and Paul Wilson. Paul Wilson produces and stars in the British television show The Real Hustle, which does hidden camera demonstrations of con games. (There’s no DVD of the show available, but there are bits of it on YouTube.) Frank Stajano is at the Computer Laboratory of the University of Cambridge.

The paper describes a dozen different con scenarios—entertaining in itself—and then lists and explains six general psychological principles that con artists use:

1. The distraction principle. While you are distracted by what retains your interest, hustlers can do anything to you and you won’t notice.

2. The social compliance principle. Society trains people not to question authority. Hustlers exploit this “suspension of suspiciousness” to make you do what they want.

3. The herd principle. Even suspicious marks will let their guard down when everyone next to them appears to share the same risks. Safety in numbers? Not if they’re all conspiring against you.

4. The dishonesty principle. Anything illegal you do will be used against you by the fraudster, making it harder for you to seek help once you realize you’ve been had.

5. The deception principle. Things and people are not what they seem. Hustlers know how to manipulate you to make you believe that they are.

6. The need and greed principle. Your needs and desires make you vulnerable. Once hustlers know what you really want, they can easily manipulate you.

It all makes for very good reading.

Two previous posts on the psychology of conning and being conned.

EDITED TO ADD (12/12): Some of the episodes of The Real Hustle are available on the BBC site, but only to people with UK IP addresses—or people with a VPN tunnel to the UK.

Posted on November 30, 2009 at 6:17 AM

Fear and Public Perception

This 1996 interview with psychiatrist Robert DuPont was part of a Frontline program called “Nuclear Reaction.”

He’s talking about the role fear plays in the perception of nuclear power. It’s a lot of the sorts of things I say, but particularly interesting is this bit on familiarity and how it reduces fear:

You see, we sited these plants away from metropolitan areas to “protect the public” from the dangers of nuclear power. What we did when we did that was move the plants away from the people, so they became unfamiliar. The major health effect, adverse health effect of nuclear power is not radiation. It’s fear. And by siting them away from the people, we insured that that would be maximized. If we’re serious about health in relationship to nuclear power, we would put them in downtown, big cities, so people would see them all the time. That is really important, in terms of reducing the fear. Familiarity is the way fear is reduced. No question. It’s not done intellectually. It’s not done by reading a book. It’s done by being there and seeing it and talking to the people who work there.

So, among other reasons, terrorism is scary because it’s so rare. When it’s more common—England during the Troubles, Israel today—people have a more rational reaction to it.

My recent essay on fear and overreaction.

Posted on November 27, 2009 at 8:25 AM

Users Rationally Rejecting Security Advice

This paper, by Cormac Herley at Microsoft Research, sounds like me:

Abstract: It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users' rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.
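
The "two orders of magnitude" claim is worth seeing in numbers. Here's an illustrative calculation in the spirit of Herley's argument; every figure below is an assumption for the sketch, not the paper's exact input.

```python
# Illustrative cost-benefit arithmetic. All figures are assumed.
users = 180e6            # assumed US online population
minutes_per_day = 1.0    # time spent inspecting URLs
hourly_wage = 15.0       # assumed value of a user's time, $/hour
phishing_losses = 60e6   # assumed annual US phishing losses, $

annual_effort_cost = users * (minutes_per_day / 60) * hourly_wage * 365
ratio = annual_effort_cost / phishing_losses
print(f"annual cost of the advice: ${annual_effort_cost / 1e9:.1f} billion")
print(f"cost/benefit ratio: {ratio:,.0f}x")
```

Even with generous slack in every assumption, the aggregate cost of a trivial daily effort dwarfs the losses it could at best prevent, which is Herley's point.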

Sounds like me.

EDITED TO ADD (12/12): Related article on usable security.

Posted on November 24, 2009 at 12:40 PM

