Entries Tagged "psychology of security"


Neighborhood Security: Feeling vs. Reality

Research on why some neighborhoods feel safer:

Salesses and collaborators Katja Schechtner and César A. Hidalgo built an online comparison tool using Google Street View images to identify the often unseen triggers of our perception of place. Have enough people compare paired images of streets in New York or Boston, for instance, choosing which scenes look more “safe” or “upper-class,” and eventually some patterns start to emerge.

“We found images with trash in it, and took the trash out, and we noticed a 30 percent increase in perception of safety,” Salesses says. “It’s surprising that something that easy had that large an effect.”

This also means some fairly cost-effective government interventions—collecting trash—could have a significant impact on how safe people feel in a neighborhood. “It’s like bringing a data source to something that’s always been subjective,” Salesses says.
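A minimal sketch of how pairwise “which looks safer?” votes like these might be turned into per-image scores. The vote data and the simple win-rate tally below are illustrative assumptions, not the researchers’ actual scoring method:

```python
from collections import defaultdict

# Hypothetical pairwise votes: (left image, right image, image chosen as "safer").
votes = [
    ("clean_street", "littered_street", "clean_street"),
    ("littered_street", "park", "park"),
    ("clean_street", "park", "clean_street"),
    ("littered_street", "clean_street", "clean_street"),
]

wins = defaultdict(int)
appearances = defaultdict(int)

for left, right, winner in votes:
    appearances[left] += 1
    appearances[right] += 1
    wins[winner] += 1

# Score each image by the fraction of its comparisons it "won".
scores = {img: wins[img] / appearances[img] for img in appearances}

for img, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{img}: perceived-safety score {score:.2f}")
```

With many votes per image, differences in scores like these can then be compared against visible features of the scenes, such as the presence of trash.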

I’ve written about the feeling and reality of security, and how they’re different. (That’s also the subject of this TEDx talk.) Yes, it’s security theater: things that make a neighborhood feel safer rather than actually safer. But when the neighborhood is actually safer than people think it is, this sort of security theater has value.

Original paper.

EDITED TO ADD (8/14): Two related links.

Posted on July 30, 2013 at 1:44 PM

Secret Information Is More Trusted

This is an interesting, if slightly disturbing, result:

In one experiment, we had subjects read two government policy papers from 1995, one from the State Department and the other from the National Security Council, concerning United States intervention to stop the sale of fighter jets between foreign countries.

The documents, both of which were real papers released through the Freedom of Information Act, argued different sides of the issue. Depending on random assignment, one was described as having been previously classified, the other as being always public. Most people in the study thought that whichever document had been “classified” contained more accurate and well-reasoned information than the public document.

In another experiment, people read a real government memo from 1978 written by members of the National Security Council about the sale of fighter jets to Taiwan; we then explained that the council used the information to make decisions. Again, depending on random assignment, some people were told that the document had been secret and for exclusive use by the council, and that it had been recently declassified under the Freedom of Information Act. Others were told that the document had always been public.

As we expected, people who thought the information was secret deemed it more useful, important and accurate than did those who thought it was public. And people judged the National Security Council’s actions based on the information as more prudent and wise when they believed the document had been secret.

[…]

Our study helps explain the public’s support for government intelligence gathering. A recent poll by the Pew Research Center for the People and the Press reported that a majority of Americans thought it was acceptable for the N.S.A. to track Americans’ phone activity to investigate terrorism. Some frustrated commentators have concluded that Americans have much less respect for their own privacy than they should.

But our research suggests another conclusion: the secret nature of the program itself may lead the public to assume that the information it gathers is valuable, without even examining what that information is or how it might be used.

Original paper abstract; the full paper is behind a paywall.

Posted on July 26, 2013 at 6:25 AM

F2P Monetization Tricks

This is a really interesting article about something I never even thought about before: how games (“F2P” means “free to play”) trick players into paying for stuff.

For example:

This is my favorite coercive monetization technique, because it is just so powerful. The technique involves giving the player some really huge reward that makes them really happy, and then threatening to take it away if they do not spend. Research has shown that humans like getting rewards, but they hate losing what they already have much more than they value the same item as a reward. To be effective with this technique, you have to tell the player they have earned something, and then later tell them that they did not. The longer you allow the player to have the reward before you take it away, the more powerful the effect.

This technique is used masterfully in Puzzle and Dragons. In that game the play primarily centers around completing “dungeons.” To the consumer, a dungeon appears to be a skill challenge, and initially it is. Of course, once the customer has had enough time to get comfortable with the idea that this is a skill game, the difficulty goes way up and it becomes a money game. What is particularly effective here is that the player has to go through several waves of battles in a dungeon, with rewards given after each wave. The last wave is a “boss battle” where the difficulty becomes massive, and if the player is in the dungeon recommended for them, they typically fail here. They are then told that all of the rewards from the previous waves are going to be lost, in addition to the stamina used to enter the dungeon (this can be four or more real hours’ worth of stamina).

At this point the user must choose to either spend about $1 or lose their rewards, lose their stamina (which they could get back for another $1), and lose their progress. To the brain this is not just a loss of time. If I spend an hour writing a paper and then something happens and my writing gets erased, this is much more painful to me than the loss of an hour. The same type of achievement loss is in effect here. Note that in this model the player could be defeated multiple times in the boss battle and in getting to the boss battle, thus spending several dollars per dungeon.

This technique alone is effective enough to make consumers of any developmental level spend. Just to be safe, PaD uses the same technique at the end of each dungeon again in the form of an inventory cap. The player is given a number of “eggs” as rewards, the contents of which have to be held in inventory. If your small inventory space is exceeded, again those eggs are taken from you unless you spend to increase your inventory space. Brilliant!

It really is a piece about security. These games use all sorts of mental tricks to coerce money from people who would not have spent it otherwise. Tricks include misdirection, sunk costs, withholding information, cognitive dissonance, and prospect theory.
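As a rough illustration of the prospect-theory asymmetry at work in the “pay or lose your rewards” moment, here is the Kahneman-Tversky value function with the commonly cited parameter estimates (alpha = beta = 0.88, lambda = 2.25); the reward value of 10 is made up for illustration:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function: gains are valued concavely,
    losses are weighted roughly 2.25 times as heavily."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

gain = prospect_value(10)   # subjective value of earning the reward
loss = prospect_value(-10)  # subjective value of having it taken away
print(f"gain: {gain:.2f}, loss: {loss:.2f}, ratio: {abs(loss) / gain:.2f}")
# Losing the reward feels more than twice as bad as gaining it felt good,
# which is why "spend $1 or lose it all" is so much more coercive than
# "spend $1 to get a bonus."
```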

I am reminded of the cognitive tricks scammers use. And, of course, much of the psychology of security.

Posted on July 12, 2013 at 6:37 AM

The Psychology of Conspiracy Theories

Interesting.

Crazy as these theories are, those propagating them are not—they’re quite normal, in fact. But recent scientific research tells us this much: if you think one of the theories above is plausible, you probably feel the same way about the others, even though they contradict one another. And it’s very likely that this isn’t the only news story that makes you feel as if shadowy forces are behind major world events.

“The best predictor of belief in a conspiracy theory is belief in other conspiracy theories,” says Viren Swami, a psychology professor who studies conspiracy belief at the University of Westminster in England. Psychologists say that’s because a conspiracy theory isn’t so much a response to a single event as it is an expression of an overarching worldview.

[…]

Our access to high-quality information has not, unfortunately, ushered in an age in which disagreements of this sort can easily be solved with a quick Google search. In fact, the Internet has made things worse. Confirmation bias—the tendency to pay more attention to evidence that supports what you already believe—is a well-documented and common human failing. People have been writing about it for centuries. In recent years, though, researchers have found that confirmation bias is not easy to overcome. You can’t just drown it in facts.

Posted on June 11, 2013 at 12:30 PM

Security and Human Behavior (SHB 2013)

I’m at the Sixth Interdisciplinary Workshop on Security and Human Behavior (SHB 2013). This year we’re in Los Angeles, at USC—hosted by CREATE.

My description from last year still applies:

SHB is an invitational gathering of psychologists, computer security researchers, behavioral economists, sociologists, law professors, business school professors, political scientists, anthropologists, philosophers, and others—all of whom are studying the human side of security—organized by Alessandro Acquisti, Ross Anderson, and me. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

It is still the most intellectually stimulating conference I attend all year. The format has remained unchanged since the beginning. Each panel consists of six people. Everyone has ten minutes to talk, and then we have half an hour of questions and discussion. The format maximizes interaction, which is really important in an interdisciplinary conference like this one.

The conference website contains a schedule and a list of participants, which includes links to writings by each of them. Both Ross Anderson and Vaibhav Garg have liveblogged the event.

Here are my posts on the first, second, third, fourth, and fifth SHB workshops. Follow those links to find summaries, papers, and audio recordings of the workshops.

Posted on June 5, 2013 at 7:20 AM

Training Baggage Screeners

The research in G. Giguère and B.C. Love, “Limits in decision making arise from limits in memory retrieval,” Proceedings of the National Academy of Sciences, v. 110, no. 19 (2013), has applications in training airport baggage screeners.

Abstract: Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.
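Here is a toy simulation loosely in the spirit of the paper’s exemplar-retrieval account (not the authors’ actual model; the distributions, retrieval rule, and parameters are all illustrative). With only a few exemplars retrieved per decision, training on an “idealized” set with the exception items removed tends to beat training on the actual overlapping distributions; with many exemplars retrieved, the gap shrinks:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training(n, idealized):
    """Two overlapping categories: A ~ N(-1, 1), B ~ N(+1, 1).
    Idealized training drops the 'exception' items that fall on the
    wrong side of the optimal decision boundary at 0."""
    xs = np.concatenate([rng.normal(-1, 1, n), rng.normal(+1, 1, n)])
    labels = np.array([0] * n + [1] * n)
    if idealized:
        keep = ((labels == 0) & (xs < 0)) | ((labels == 1) & (xs >= 0))
        xs, labels = xs[keep], labels[keep]
    return xs, labels

def classify(x, xs, labels, k):
    """Stochastically retrieve k stored exemplars, weighted by
    similarity to the probe, and take a majority vote."""
    sims = np.exp(-np.abs(xs - x))
    idx = rng.choice(len(xs), size=k, p=sims / sims.sum())
    return int(labels[idx].mean() >= 0.5)

def test_accuracy(idealized, k, n_train=200, n_test=2000):
    xs, labels = make_training(n_train, idealized)
    test_x = np.concatenate([rng.normal(-1, 1, n_test), rng.normal(+1, 1, n_test)])
    test_y = np.array([0] * n_test + [1] * n_test)
    preds = np.array([classify(x, xs, labels, k) for x in test_x])
    return (preds == test_y).mean()

for k in (2, 50):
    print(f"k={k:2d}  actual: {test_accuracy(False, k):.3f}  "
          f"idealized: {test_accuracy(True, k):.3f}")
```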

Posted on May 24, 2013 at 12:17 PM

Security Risks of Too Much Security

All of the anti-counterfeiting features on the new Canadian $100 bills are resulting in people not bothering to verify them.

The fanfare about the security features on the bills may be part of the problem, said RCMP Sgt. Duncan Pound.

“Because the polymer series’ notes are so secure … there’s almost an overconfidence among retailers and the public in terms of when you sort of see the strip, the polymer looking materials, everybody says ‘oh, this one’s going to be good because you know it’s impossible to counterfeit,'” he said.

“So people don’t actually check it.”

Posted on May 20, 2013 at 6:34 AM

Intelligence Analysis and the Connect-the-Dots Metaphor

The FBI and the CIA are being criticized for not keeping better track of Tamerlan Tsarnaev in the months before the Boston Marathon bombings. How could they have ignored such a dangerous person? How do we reform the intelligence community to ensure this kind of failure doesn’t happen again?

It’s an old song by now, one we heard after the 9/11 attacks in 2001 and after the Underwear Bomber’s failed attack in 2009. The problem is that connecting the dots is a bad metaphor, and focusing on it makes us more likely to implement useless reforms.

Connecting the dots in a coloring book is easy and fun. They’re right there on the page, and they’re all numbered. All you have to do is move your pencil from one dot to the next, and when you’re done, you’ve drawn a sailboat. Or a tiger. It’s so simple that 5-year-olds can do it.

But in real life, the dots can only be numbered after the fact. With the benefit of hindsight, it’s easy to draw lines from a Russian request for information to a foreign visit to some other piece of information that might have been collected.

In hindsight, we know who the bad guys are. Before the fact, there are an enormous number of potential bad guys.

How many? We don’t know. But we know that the no-fly list had 21,000 people on it last year. The Terrorist Identities Datamart Environment, also known as the watch list, has 700,000 names on it.

We have no idea how many potential “dots” the FBI, CIA, NSA and other agencies collect, but it’s easily in the millions. It’s easy to work backwards through the data and see all the obvious warning signs. But before a terrorist attack, when there are millions of dots—some important but the vast majority unimportant—uncovering plots is a lot harder.
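A back-of-the-envelope illustration of the combinatorics (the dot count below is an assumption standing in for “easily in the millions,” not a real estimate):

```python
from math import comb

dots = 1_000_000                   # illustrative stand-in for the collected "dots"
pairs = comb(dots, 2)              # possible two-dot connections
five_dot_patterns = comb(dots, 5)  # possible five-dot "plots"

print(f"possible pairs of dots:     {pairs:,}")
print(f"possible five-dot patterns: {five_dot_patterns:,}")
# In hindsight there is exactly one line worth drawing; in foresight,
# every one of these combinations is a picture you might have to consider.
```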

Rather than thinking of intelligence as a simple connect-the-dots picture, think of it as a million unnumbered pictures superimposed on top of each other. Or a random-dot stereogram. Is it a sailboat, a puppy, two guys with pressure-cooker bombs, or just an unintelligible mess of dots? You try to figure it out.

It’s not a matter of not enough data, either.

Piling more data onto the mix makes it harder, not easier. The best way to think of it is a needle-in-a-haystack problem; the last thing you want to do is increase the amount of hay you have to search through. The television show Person of Interest is fiction, not fact.

There’s a name for this sort of logical fallacy: hindsight bias. First explained by psychologists Daniel Kahneman and Amos Tversky, it’s surprisingly common. Since what actually happened is so obvious once it happens, we overestimate how obvious it was before it happened.

We actually misremember what we once thought, believing that we knew all along that what happened would happen. It’s a surprisingly strong tendency, one that has been observed in countless laboratory experiments and real-world examples of behavior. And it’s what all the post-Boston-Marathon bombing dot-connectors are doing.

Before we start blaming agencies for failing to stop the Boston bombers, and before we push “intelligence reforms” that will shred civil liberties without making us any safer, we need to stop seeing the past as a bunch of obvious dots that need connecting.

Kahneman, a Nobel prize winner, wisely noted: “Actions that seemed prudent in foresight can look irresponsibly negligent in hindsight.” Kahneman calls it “the illusion of understanding,” explaining that the past is only so understandable because we have cast it as simple, inevitable stories and left out the rest.

Nassim Taleb, an expert on risk engineering, calls this tendency the “narrative fallacy.” We humans are natural storytellers, and the world of stories is much more tidy, predictable and coherent than the real world.

Millions of people behave strangely enough to warrant the FBI’s notice, and almost all of them are harmless. It is simply not possible to find every plot beforehand, especially when the perpetrators act alone and on impulse.

We have to accept that there always will be a risk of terrorism, and that when the occasional plot succeeds, it’s not necessarily because our law enforcement systems have failed.

This essay previously appeared on CNN.

EDITED TO ADD (5/7): The hindsight bias was actually first discovered by Baruch Fischhoff: “Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty,” Journal of Experimental Psychology: Human Perception and Performance, 1(3), 1975, pp. 288-299.

Posted on May 7, 2013 at 6:10 AM

Random Links on the Boston Terrorist Attack

Encouraging poll data says that maybe Americans are starting to have realistic fears about terrorism, or at least are refusing to be terrorized.

Good essay by Scott Atran on terrorism and our reaction.

Reddit apologizes. I think this is a big story. The Internet is going to help in everything, including trying to identify terrorists. This will happen whether or not the help is needed, wanted, or even helpful. I think this took the FBI by surprise. (Here’s a good commentary on this sort of thing.)

Facial recognition software didn’t help. I agree with this, though; it will only get better.

EDITED TO ADD (4/25): “Hapless, Disorganized, and Irrational“: John Mueller and Mark Stewart describe the Boston—and most other—terrorists.

Posted on April 25, 2013 at 6:42 AM

Initial Thoughts on the Boston Bombings

I rewrote my “refuse to be terrorized” essay for the Atlantic. David Rothkopf (author of the great book Power, Inc.) wrote something similar, and so did John Cole.

It’s interesting to see how much more resonance this idea has today than it did a dozen years ago. If other people have written similar essays, please post links in the comments.

EDITED TO ADD (4/16): Two good essays.

EDITED TO ADD (4/16): I did a Q&A on the Washington Post blog. And—I can hardly believe it—President Obama said “the American people refuse to be terrorized” in a press briefing today.

EDITED TO ADD (4/16): I did a podcast interview and another press interview.

EDITED TO ADD (4/16): This, on the other hand, is pitiful.

EDITED TO ADD (4/17): Another audio interview with me.

EDITED TO ADD (4/19): I have done a lot of press this week. Here’s a link to a “To the Point” segment, and two Huffington Post Live segments. I was on The Steve Malzberg Show, which I didn’t realize was shouting conservative talk radio until it was too late.

EDITED TO ADD (4/20): That Atlantic essay had 40,000 Facebook likes and 6,800 tweets. The editor told me it had about 360,000 hits. That makes it the most popular piece I’ve ever written.

EDITED TO ADD (5/14): More links here.

Posted on April 16, 2013 at 9:19 AM

