Entries Tagged "psychology of security"


Peter Watts on the Harms of Surveillance

Biologist Peter Watts makes some good points:

Mammals don’t respond well to surveillance. We consider it a threat. It makes us paranoid, and aggressive and vengeful.

[…]

“Natural selection favors the paranoid,” Watts said. Those who run away. In the earliest days of man on the savannah, when we roamed among the predatory, wild animals, someone realized pretty quickly that lions stalked their prey from behind the tall, untamed grass. And so anyone hoping to keep on breathing developed a healthy fear of the lions in the grass and listened for the rustling in the brush in order to avoid becoming lunch for an animal more powerful than themselves. It was instinct. If the rustling, the perceived surveillance, turns out to just be the wind? Well, no harm done.

“For a very long time, people who don’t see agency have a disproportionate tendency to get eaten,” Watts noted.

And so, we’ve developed those protective instincts. “We see faces in the clouds; we hear ghosts and monsters in the stairs at night,” Watts said. “The link between surveillance and fear is a lot deeper than the average privacy advocate is willing to admit.”

[…]

“A lot of critics say blanket surveillance treats us like criminals, but it’s deeper than that,” he said. “It makes us feel like prey. We’re seeing stalking behavior in the illogical sense,” he said.

This is interesting. People accept government surveillance out of fear: fear of the terrorists, fear of the criminals. If Watts is right, then there's a conflict of fears. Because the fear of terrorists and criminals—kidnappers, child pornographers, drug dealers, whatever—is more evocative than the nebulous fear of being stalked, it wins.

EDITED TO ADD (5/23): His own post is better than the write-up.

EDITED TO ADD (5/24): Peter Watts has responded to this post, complaining about the misquotes in the article I quoted. He will post a transcript of his talk, so we can see what he actually said. My guess is that I will still agree with it.

He also recommended this post of his, which is well worth reading.

EDITED TO ADD (5/27): Here is the transcript.

Posted on May 23, 2014 at 6:42 AM

Retelling of Stories Increases Bias

Interesting experiment shows that the retelling of stories increases conflict and bias.

For their study, which featured 196 undergraduates, the researchers created a narrative about a dispute between two groups of young people. It described four specific points of tension, but left purposely ambiguous the issue of which party was the aggressor, and “depicted the groups as equally blameworthy.”

Half of the participants read a version of the story in which the two hostile groups were from two Maryland cities. The other half read a version in which one group was from the city of Gaithersburg, but the other was identified as “your friends.”

Participants were each assigned one of four positions. Those in the first position read the initial version of the story, and then "re-told" it in their own words by writing their version of the events. This was passed on to the person in the second position, who did the same.

The procedure was repeated until all four people had created their own versions of the story. Each new version was then examined for subtle shifts in emphasis, blame, and wording.

The results: Each “partisan communicator”—that is, each student who wrote about the incident involving his or her “friends”—”contributed small distortions that, when accumulated, produced a highly biased, inaccurate representation of the original dispute,” the researchers write.

Standard disclaimer—that American undergraduates might not be the best representatives of our species—applies. But the results are not surprising. We tend to play up the us vs. them narrative when we tell stories. The result is particularly interesting in light of the echo chamber that Internet-based politics has become.
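The accumulation mechanism the researchers describe, small per-narrator distortions compounding down a four-person chain, can be illustrated with a toy simulation (the bias and noise parameters below are hypothetical, not taken from the study):

```python
import random

def retell(blame, bias=0.1, noise=0.05, rng=random):
    """One partisan retelling: shift blame slightly toward the
    out-group, plus a little random drift. `blame` is the share
    of blame the narrator assigns to the other side (0..1)."""
    shifted = blame + bias + rng.uniform(-noise, noise)
    return min(1.0, max(0.0, shifted))

def chain(n_retellings=4, start=0.5, seed=0):
    """Pass the story down a chain of partisan narrators, like
    the four-position chain in the experiment."""
    rng = random.Random(seed)
    blame, history = start, [start]
    for _ in range(n_retellings):
        blame = retell(blame, rng=rng)
        history.append(blame)
    return history

history = chain()
# The story starts balanced (0.5) and ends noticeably skewed,
# even though each individual shift was small.
```

With these (invented) parameters, four retellings are enough to turn an evenly balanced account into a clearly one-sided one, which is the paper's point about accumulated small distortions.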

The actual paper is behind a paywall.

EDITED TO ADD (5/14): The paper.

Posted on May 8, 2014 at 7:32 AM

Consumer Manipulation

Tim Harford talks about consumer manipulation:

Consider, first, confusion by design: Las Vegas casinos are mazes, carefully crafted to draw players to the slot machines and to keep them there. Casino designers warn against the “yellow brick road” effect of having a clear route through the casino. (One side effect: it takes paramedics a long time to find gamblers in cardiac arrest; as Ms Schüll also documents, it can be tough to get the slot-machine players to assist, or even to make room for, the medical team.)

Most mazes in our economy are metaphorical: the confusion of multi-part tariffs for mobile phones, cable television or electricity. My phone company regularly contacts me to assure me that I am on the cheapest possible plan given my patterns of usage. No doubt this claim can be justified on some narrow technicality but it seems calculated to deceive. Every time I have put it to the test it has proved false.

I recently cancelled a contract with a different provider after some gizmo broke. The company first told me the whole thing was my problem, then at the last moment offered me hundreds of pounds to stay. When your phone company starts using the playbook of an emotionally abusive spouse, this is not a market in good working order.

This is a security story: manipulation vs. manipulation defense. One of my worries about our modern market system is that the manipulators have gotten too good. We need better security—either technical defenses or legal prohibitions—against this manipulation.
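Harford's point about multi-part tariffs can be made concrete with a toy comparison (the plans and prices here are invented): which plan is cheaper depends on the usage pattern in a way no single number on the price list reveals.

```python
def monthly_cost(plan, minutes, gigabytes):
    """Total cost of a multi-part tariff: base fee, included
    allowances, then per-unit overage charges."""
    extra_min = max(0, minutes - plan["incl_minutes"])
    extra_gb = max(0.0, gigabytes - plan["incl_gb"])
    return (plan["base"]
            + extra_min * plan["per_minute"]
            + extra_gb * plan["per_gb"])

plan_a = {"base": 20.0, "incl_minutes": 100, "per_minute": 0.30,
          "incl_gb": 5.0, "per_gb": 10.0}
plan_b = {"base": 35.0, "incl_minutes": 500, "per_minute": 0.10,
          "incl_gb": 2.0, "per_gb": 4.0}

light = (monthly_cost(plan_a, 80, 1.0), monthly_cost(plan_b, 80, 1.0))
heavy = (monthly_cost(plan_a, 400, 8.0), monthly_cost(plan_b, 400, 8.0))
# Plan A is cheaper for the light user, Plan B for the heavy one.
# Neither answer is obvious from the price list alone, which is
# exactly the confusion the tariff structure exploits.
```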

EDITED TO ADD (1/23): More about how cellphone companies rip you off.

Posted on January 23, 2014 at 7:03 AM

Neighborhood Security: Feeling vs. Reality

Research on why some neighborhoods feel safer:

Salesses and collaborators Katja Schechtner and César A. Hidalgo built an online comparison tool using Google Street View images to identify these often unseen triggers of our perception of place. Have enough people compare paired images of streets in New York or Boston, for instance, for the scenes that look more “safe” or “upper-class,” and eventually some patterns start to emerge.

“We found images with trash in it, and took the trash out, and we noticed a 30 percent increase in perception of safety,” Salesses says. “It’s surprising that something that easy had that large an effect.”

This also means some fairly cost-effective government interventions—collecting trash—could have a significant impact on how safe people feel in a neighborhood. "It's like bringing a data source to something that's always been subjective," Salesses says.

I’ve written about the feeling and reality of security, and how they’re different. (That’s also the subject of this TEDx talk.) Yes, it’s security theater: things that make a neighborhood feel safer rather than actually safer. But when the neighborhood is actually safer than people think it is, this sort of security theater has value.
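As a rough sketch of how a comparison tool like theirs can turn "which looks safer?" votes into scores, here is a minimal win-rate tally (the researchers' actual scoring is more sophisticated, and the image IDs below are made up):

```python
from collections import defaultdict

def score_images(votes):
    """Turn pairwise "which looks safer?" votes into per-image
    scores: wins divided by total appearances."""
    wins = defaultdict(int)
    seen = defaultdict(int)
    for winner, loser in votes:
        wins[winner] += 1
        seen[winner] += 1
        seen[loser] += 1
    return {img: wins[img] / seen[img] for img in seen}

votes = [("street_a", "street_b"), ("street_a", "street_c"),
         ("street_c", "street_b"), ("street_a", "street_b")]
scores = score_images(votes)
# street_a wins every comparison it appears in; street_b loses
# every one. With enough raters, patterns like "trash present"
# start to separate the high scorers from the low ones.
```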

Original paper.

EDITED TO ADD (8/14): Two related links.

Posted on July 30, 2013 at 1:44 PM

Secret Information Is More Trusted

This is an interesting, if slightly disturbing, result:

In one experiment, we had subjects read two government policy papers from 1995, one from the State Department and the other from the National Security Council, concerning United States intervention to stop the sale of fighter jets between foreign countries.

The documents, both of which were real papers released through the Freedom of Information Act, argued different sides of the issue. Depending on random assignment, one was described as having been previously classified, the other as being always public. Most people in the study thought that whichever document had been “classified” contained more accurate and well-reasoned information than the public document.

In another experiment, people read a real government memo from 1978 written by members of the National Security Council about the sale of fighter jets to Taiwan; we then explained that the council used the information to make decisions. Again, depending on random assignment, some people were told that the document had been secret and for exclusive use by the council, and that it had been recently declassified under the Freedom of Information Act. Others were told that the document had always been public.

As we expected, people who thought the information was secret deemed it more useful, important and accurate than did those who thought it was public. And people judged the National Security Council’s actions based on the information as more prudent and wise when they believed the document had been secret.

[…]

Our study helps explain the public’s support for government intelligence gathering. A recent poll by the Pew Research Center for the People and the Press reported that a majority of Americans thought it was acceptable for the N.S.A. to track Americans’ phone activity to investigate terrorism. Some frustrated commentators have concluded that Americans have much less respect for their own privacy than they should.

But our research suggests another conclusion: the secret nature of the program itself may lead the public to assume that the information it gathers is valuable, without even examining what that information is or how it might be used.

Original paper abstract; the full paper is behind a paywall.

Posted on July 26, 2013 at 6:25 AM

F2P Monetization Tricks

This is a really interesting article about something I had never even thought about before: how free-to-play (“F2P”) games trick players into paying for stuff.

For example:

This is my favorite coercive monetization technique, because it is just so powerful. The technique involves giving the player some really huge reward, that makes them really happy, and then threatening to take it away if they do not spend. Research has shown that humans like getting rewards, but they hate losing what they already have much more than they value the same item as a reward. To be effective with this technique, you have to tell the player they have earned something, and then later tell them that they did not. The longer you allow the player to have the reward before you take it away, the more powerful is the effect.

This technique is used masterfully in Puzzle and Dragons. In that game the play primarily centers around completing “dungeons.” To the consumer, a dungeon appears to be a skill challenge, and initially it is. Of course once the customer has had enough time to get comfortable with the idea that this is a skill game the difficulty goes way up and it becomes a money game. What is particularly effective here is that the player has to go through several waves of battles in a dungeon, with rewards given after each wave. The last wave is a “boss battle” where the difficulty becomes massive and if the player is in the recommended dungeon for them then they typically fail here. They are then told that all of the rewards from the previous waves are going to be lost, in addition to the stamina used to enter the dungeon (this can be 4 or more real hours of time worth of stamina).

At this point the user must choose to either spend about $1 or lose their rewards, lose their stamina (which they could get back for another $1), and lose their progress. To the brain this is not just a loss of time. If I spend an hour writing a paper and then something happens and my writing gets erased, this is much more painful to me than the loss of an hour. The same type of achievement loss is in effect here. Note that in this model the player could be defeated multiple times in the boss battle and in getting to the boss battle, thus spending several dollars per dungeon.

This technique alone is effective enough to make consumers of any developmental level spend. Just to be safe, PaD uses the same technique at the end of each dungeon again in the form of an inventory cap. The player is given a number of “eggs” as rewards, the contents of which have to be held in inventory. If your small inventory space is exceeded, again those eggs are taken from you unless you spend to increase your inventory space. Brilliant!

It really is a piece about security. These games use all sorts of mental tricks to coerce money from people who would not have spent it otherwise. Tricks include misdirection, sunk costs, withholding information, cognitive dissonance, and prospect theory.
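The "losing earned rewards hurts more than winning them felt good" asymmetry is prospect theory's loss aversion. A standard Kahneman-Tversky value function (textbook parameters; applying it to this particular game is my gloss) shows the size of the effect:

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky prospect-theory value function: gains
    are concave, losses are convex and weighted about 2.25x
    more heavily (the loss-aversion coefficient lam)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

gain = pt_value(100)    # subjective value of winning 100 points
loss = pt_value(-100)   # subjective pain of losing the same 100
# abs(loss) is 2.25x gain: threatening to take away rewards the
# player already "owns" is far more motivating than offering the
# same rewards up front, which is why the $1 save-your-progress
# offer converts so well.
```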

I am reminded of the cognitive tricks scammers use. And, of course, much of the psychology of security.

Posted on July 12, 2013 at 6:37 AM

The Psychology of Conspiracy Theories

Interesting.

Crazy as these theories are, those propagating them are not—they’re quite normal, in fact. But recent scientific research tells us this much: if you think one of the theories above is plausible, you probably feel the same way about the others, even though they contradict one another. And it’s very likely that this isn’t the only news story that makes you feel as if shadowy forces are behind major world events.

“The best predictor of belief in a conspiracy theory is belief in other conspiracy theories,” says Viren Swami, a psychology professor who studies conspiracy belief at the University of Westminster in England. Psychologists say that’s because a conspiracy theory isn’t so much a response to a single event as it is an expression of an overarching worldview.

[…]

Our access to high-quality information has not, unfortunately, ushered in an age in which disagreements of this sort can easily be solved with a quick Google search. In fact, the Internet has made things worse. Confirmation bias—the tendency to pay more attention to evidence that supports what you already believe—is a well-documented and common human failing. People have been writing about it for centuries. In recent years, though, researchers have found that confirmation bias is not easy to overcome. You can’t just drown it in facts.

Posted on June 11, 2013 at 12:30 PM

Security and Human Behavior (SHB 2013)

I’m at the Sixth Interdisciplinary Workshop on Security and Human Behavior (SHB 2013). This year we’re in Los Angeles, at USC—hosted by CREATE.

My description from last year still applies:

SHB is an invitational gathering of psychologists, computer security researchers, behavioral economists, sociologists, law professors, business school professors, political scientists, anthropologists, philosophers, and others—all of whom are studying the human side of security—organized by Alessandro Acquisti, Ross Anderson, and me. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

It is still the most intellectually stimulating conference I attend all year. The format has remained unchanged since the beginning. Each panel consists of six people. Everyone has ten minutes to talk, and then we have half an hour of questions and discussion. The format maximizes interaction, which is really important in an interdisciplinary conference like this one.

The conference website contains a schedule and a list of participants, which includes links to writings by each of them. Both Ross Anderson and Vaibhav Garg have liveblogged the event.

Here are my posts on the first, second, third, fourth, and fifth SHB workshops. Follow those links to find summaries, papers, and audio recordings of the workshops.

Posted on June 5, 2013 at 7:20 AM

Training Baggage Screeners

The research in G. Giguère and B.C. Love, “Limits in decision making arise from limits in memory retrieval,” Proceedings of the National Academy of Sciences v. 19 (2013) has applications in training airport baggage screeners.

Abstract: Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers.
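The paper's central claim can be caricatured in a toy simulation (my sketch under simplified assumptions, not the authors' model): an exemplar classifier that answers by retrieving a single stored memory is perfect when trained on idealized, deterministic labels, but stuck near the base rate when trained on the actual noisy distribution.

```python
import random

def train(rng, idealized, n=200):
    """Store (feature, label) exemplars. Feature is 0 or 1, and
    by construction the modal label equals the feature. In the
    actual world the stored label matches the modal one only 70%
    of the time; idealized training always stores the modal label."""
    memory = []
    for _ in range(n):
        x = rng.randint(0, 1)
        if idealized:
            y = x
        else:
            y = x if rng.random() < 0.7 else 1 - x
        memory.append((x, y))
    return memory

def predict(rng, memory, x):
    """Limited stochastic retrieval: sample ONE matching exemplar."""
    matches = [y for (f, y) in memory if f == x]
    return rng.choice(matches)

def accuracy(idealized, trials=2000, seed=1):
    """How often the retrieved answer matches the modal label."""
    rng = random.Random(seed)
    memory = train(rng, idealized)
    hits = sum(predict(rng, memory, x) == x
               for x in (rng.randint(0, 1) for _ in range(trials)))
    return hits / trials

# Idealized training yields perfect accuracy; actual training
# hovers near 0.7, because noise in the stored labels cannot be
# averaged away when only one memory is retrieved per decision.
```

More training does not help the single-retrieval learner on the actual distribution, which mirrors the paper's argument for training screeners on idealized rather than realistic distributions.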

Posted on May 24, 2013 at 12:17 PM

Security Risks of Too Much Security

All of the anti-counterfeiting features of the new Canadian $100 bill are resulting in people not bothering to verify the bills.

The fanfare about the security features on the bills may be part of the problem, said RCMP Sgt. Duncan Pound.

“Because the polymer series’ notes are so secure … there’s almost an overconfidence among retailers and the public in terms of when you sort of see the strip, the polymer looking materials, everybody says ‘oh, this one’s going to be good because you know it’s impossible to counterfeit,'” he said.

“So people don’t actually check it.”

Posted on May 20, 2013 at 6:34 AM

