Entries Tagged "psychology of security"


Security and Human Behavior (SHB 2012)

I’m at the Fifth Interdisciplinary Workshop on Security and Human Behavior, SHB 2012. Google is hosting this year, at its offices in lower Manhattan.

SHB is an invitational gathering of psychologists, computer security researchers, behavioral economists, sociologists, law professors, business school professors, political scientists, anthropologists, philosophers, and others—all of whom are studying the human side of security—organized by Alessandro Acquisti, Ross Anderson, and me. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

This is the best and most intellectually stimulating conference I attend all year. I told that to one of the participants yesterday, and he said something like: “Of course it is. You’ve specifically invited everyone you want to listen to.” Which is basically correct. The workshop is organized into panels of 6-7 people. Each panelist gets ten minutes to talk about what he or she is working on, and then we spend the rest of the hour and a half in discussion.

Here is the list of participants. The list contains links to readings from each of them—definitely a good place to browse for more information on this topic. Ross Anderson, who has far more discipline than I, is liveblogging this event. Go to the comments of that blog post to see summaries of the individual sessions.

Here are links to my posts on the first, second, third, and fourth SHB workshops. Follow those links to find summaries, papers, and audio recordings of the workshops.

Posted on June 5, 2012 at 1:16 PM

The Unreliability of Eyewitness Testimony

Interesting article:

The reliability of witness testimony is a vastly complex subject, but legal scholars and forensic psychologists say it’s possible to extract the truth from contradictory accounts and evolving memories. According to Barbara Tversky, professor emerita of psychology at Stanford University, the bottom line is this: “All other things equal, earlier recountings are more likely to be accurate than later ones. The longer the delay, the more likely that subsequent information will get confused with the target memory.”

[…]

Memory is a reconstructive process, says Richard Wise, a forensic psychologist at the University of North Dakota. “When an eyewitness recalls a crime, he or she must reconstruct his or her memory of the crime.” This, he says, is an unconscious process. To reconstruct a memory, the eyewitness draws upon several sources of information, only one being his or her actual recollection.

“To fill in gaps in memory, the eyewitness relies upon his or her expectation, attitudes, prejudices, bias, and prior knowledge. Furthermore, information supplied to an eyewitness after a crime (i.e., post-event information) by the police, prosecutor, other eyewitnesses, media, etc., can alter an eyewitness’s memory of the crime,” Wise said in an email.

That external input is what makes eyewitness testimony so unreliable. Eyewitnesses are generally unaware that their memory has been altered by post-event information, and feel convinced they’re recalling only the incident itself. “Once an eyewitness’s memory of the crime has been altered by post-event information, it is difficult or impossible to restore the eyewitness’s original memory of the crime,” Wise told Life’s Little Mysteries.

Posted on June 4, 2012 at 6:36 AM

The Psychology of Immoral (and Illegal) Behavior

When I talk about Liars and Outliers to security audiences, one of the things I stress is that our traditional security focus—on technical countermeasures—is much narrower than it could be. Leveraging moral, reputational, and institutional pressures is likely to be much more effective in motivating cooperative behavior.

This story illustrates the point. It’s about the psychology of fraud, “why good people do bad things.”

There is, she says, a common misperception that at moments like this, when people face an ethical decision, they clearly understand the choice that they are making.

“We assume that they can see the ethics and are consciously choosing not to behave ethically,” Tenbrunsel says.

This, generally speaking, is the basis of our disapproval: They knew. They chose to do wrong.

But Tenbrunsel says that we are frequently blind to the ethics of a situation.

Over the past couple of decades, psychologists have documented many different ways that our minds fail to see what is directly in front of us. They’ve come up with a concept called “bounded ethicality”: That’s the notion that cognitively, our ability to behave ethically is seriously limited, because we don’t always see the ethical big picture.

One small example: the way a decision is framed. “The way that a decision is presented to me,” says Tenbrunsel, “very much changes the way in which I view that decision, and then eventually, the decision it is that I reach.”

Essentially, Tenbrunsel argues, certain cognitive frames make us blind to the fact that we are confronting an ethical problem at all.

Tenbrunsel told us about a recent experiment that illustrates the problem. She got together two groups of people and told one to think about a business decision. The other group was instructed to think about an ethical decision. Those asked to consider a business decision generated one mental checklist; those asked to think of an ethical decision generated a different mental checklist.

Tenbrunsel next had her subjects do an unrelated task to distract them. Then she presented them with an opportunity to cheat.

Those cognitively primed to think about business behaved radically different from those who were not—no matter who they were, or what their moral upbringing had been.

“If you’re thinking about a business decision, you are significantly more likely to lie than if you were thinking from an ethical frame,” Tenbrunsel says.

According to Tenbrunsel, the business frame cognitively activates one set of goals—to be competent, to be successful; the ethics frame triggers other goals. And once you’re in, say, a business frame, you become really focused on meeting those goals, and other goals can completely fade from view.

Also:

Typically when we hear about large frauds, we assume the perpetrators were driven by financial incentives. But psychologists and economists say financial incentives don’t fully explain it. They’re interested in another possible explanation: Human beings commit fraud because human beings like each other.

We like to help each other, especially people we identify with. And when we are helping people, we really don’t see what we are doing as unethical.

The article even has some concrete security ideas:

Now if these psychologists and economists are right, if we are all capable of behaving profoundly unethically without realizing it, then our workplaces and regulations are poorly organized. They’re not designed to take into account the cognitively flawed human beings that we are. They don’t attempt to structure things around our weaknesses.

Some concrete proposals to do that are on the table. For example, we know that auditors develop relationships with clients after years of working together, and we know that those relationships can corrupt their audits without them even realizing it. So there is a proposal to force businesses to switch auditors every couple of years to address that problem.

Another suggestion: A sentence should be placed at the beginning of every business contract that explicitly says that lying on this contract is unethical and illegal, because that kind of statement would get people into the proper cognitive frame.

Along similar lines, some years ago Ross Anderson suggested that the webpages of people’s online bank accounts include their photographs, based on research showing that it’s harder to commit fraud against someone you identify with as a person.

Two excellent papers on this topic:

Abstract of the second paper:

Dishonesty plays a large role in the economy. Causes for (dis)honest behavior seem to be based partially on external rewards, and partially on internal rewards. Here, we investigate how such external and internal rewards work in concert to produce (dis)honesty. We propose and test a theory of self-concept maintenance that allows people to engage to some level in dishonest behavior, thereby benefiting from external benefits of dishonesty, while maintaining their positive view about themselves in terms of being honest individuals. The results show that (1) given the opportunity to engage in beneficial dishonesty, people will engage in such behaviors; (2) the amount of dishonesty is largely insensitive to either the expected external benefits or the costs associated with the deceptive acts; (3) people know about their actions but do not update their self-concepts; (4) causing people to become more aware of their internal standards for honesty decreases their tendency for deception; and (5) increasing the “degrees of freedom” that people have to interpret their actions increases their tendency for deception. We suggest that dishonesty governed by self-concept maintenance is likely to be prevalent in the economy, and understanding it has important implications for designing effective methods to curb dishonesty.

Posted on May 30, 2012 at 12:54 PM

The Problem of False Alarms

The context is tornado warnings:

The basic problem, Smith says, is that sirens are sounded too often in most places. Sometimes they sound in an entire county for a warning that covers just a sliver of it; sometimes for other thunderstorm phenomena like large hail and/or strong straight-line winds; and sometimes for false alarm warnings: warnings for tornadoes that were incorrectly detected.

The residents of Joplin, Smith contends, were numbed by the too frequent blaring of sirens. As a result of too many past false alarms, he writes: “The citizens of Joplin were unwittingly being trained to NOT act when the sirens sounded.”
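To make the numbing effect concrete, here is a quick back-of-the-envelope calculation. The numbers are hypothetical, not taken from Smith's analysis; they only illustrate how a high false-alarm rate drives down the chance that any particular siren signals a real tornado.

```python
# Hypothetical figures, for illustration only; they are not from Smith's analysis.
sirens_per_year = 20        # how often the siren sounds in a given town
real_tornado_events = 2     # how many of those soundings involved an actual tornado nearby

p_real_given_siren = real_tornado_events / sirens_per_year
print(f"P(real threat | siren) = {p_real_given_siren:.0%}")
# -> 10%. Once residents internalize that a siren almost always means nothing,
#    they stop acting on it: the "training people NOT to act" effect described above.
```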

Posted on May 30, 2012 at 6:44 AM

The Ubiquity of Cyber-Fears

A new study concludes that more people are worried about cyber threats than terrorism.

…the three highest priorities for Americans when it comes to security issues in the presidential campaign are:

  1. Protecting government computer systems against hackers and criminals (74 percent)
  2. Protecting our electric power grid, water utilities and transportation systems against computer or terrorist attacks (73 percent)
  3. Homeland security issues such as terrorism (68 percent)

Posted on May 24, 2012 at 11:31 AM

Racism as a Vestigial Remnant of a Security Mechanism

“Roots of Racism,” by Elizabeth Culotta in Science:

Our attitudes toward outgroups are part of a threat-detection system that allows us to rapidly determine friend from foe, says psychologist Steven Neuberg of ASU Tempe. The problem, he says, is that like smoke detectors, the system is designed to give many false alarms rather than miss a true threat. So outgroup faces alarm us even when there is no danger.

Lots of interesting stuff in the article. Unfortunately, it requires registration to access.

Posted on May 22, 2012 at 1:10 PM

Rules for Radicals

It was written in 1971, but this still seems like a cool book:

For an elementary illustration of tactics, take parts of your face as the point of reference; your eyes, your ears, and your nose. First the eyes: if you have organized a vast, mass-based people’s organization, you can parade it visibly before the enemy and openly show your power. Second the ears; if your organization is small in numbers, then do what Gideon did: conceal the members in the dark but raise a din and clamor that will make the listener believe that your organization numbers many more than it does. Third, the nose; if your organization is too tiny even for noise, stink up the place.

Always remember the first rule of power tactics: Power is not only what you have but what the enemy thinks you have.

The second rule is: Never go outside the experience of your people. When an action or tactic is outside the experience of the people, the result is confusion, fear, and retreat. It also means a collapse of communication, as we have noted.

The third rule is: Wherever possible go outside the experience of the enemy. Here you want to cause confusion, fear, and retreat.

The fourth rule is: Make the enemy live up to their own book of rules. You can kill them with this, for they can no more obey their own rules than the Christian church can live up to Christianity.

The fourth rule carries within it the fifth rule: Ridicule is man’s most potent weapon. It is almost impossible to counterattack ridicule. Also it infuriates the opposition, who then react to your advantage.

The sixth rule is: A good tactic is one that your people enjoy. If your people are not having a ball doing it, there is something very wrong with the tactic.

The seventh rule: A tactic that drags on too long becomes a drag.

[…]

The twelfth rule: The price of a successful attack is a constructive alternative. You cannot risk being trapped by the enemy in his sudden agreement with your demand and saying “You’re right—we don’t know what to do about this issue. Now you tell us.”

The thirteenth rule: Pick the target, freeze it, personalize it, and polarize it.

Posted on May 17, 2012 at 7:20 AM

Fear and the Attention Economy

danah boyd is thinking about—in a draft essay, and as a recording of a presentation—fear and the attention economy. Basically, she is making the argument that the attention economy magnifies the culture of fear because fear is a good way to get attention, and that this is being made worse by the rise of social media.

A lot of this isn’t new. Fear has been used to sell products (I’ve written about that here) and policy (“Remember the Maine!” “Remember the Alamo!” “Remember 9/11!”) since forever. Newspapers have used fear to attract readers since there were readers. Long before there were child predators on the Internet, irrational panics swept society. Shark attacks in the 1970s. Marijuana in the 1950s. boyd relates a story from Glassner’s The Culture of Fear about elderly women being mugged in the 1990s.

These fears have largely been driven from the top down: from political leaders, from the news media. What’s new today—and I agree this is very interesting—is that in addition to these traditional top-down fears, we’re also seeing fears come from the bottom up. Social media are allowing all of us to sow fear and, because fear gets attention, are enticing us to do so. Rather than fostering empathy and bringing us all together, social media might be pushing us further apart.

A lot of this is related to my own writing about trust. Fear causes us to mistrust a group we’re fearful of, and to more strongly trust the group we’re a part of. It’s natural, and it can be manipulated. It can be amplified, and it can be dampened. How social media are both enabling and undermining trust is a really important thing for us to understand. As boyd says: “What we design and how we design it matters. And how our systems are used also matters, even if those uses aren’t what we intended.”

Posted on April 25, 2012 at 6:51 AM

Amazing Round of "Split or Steal"

In Liars and Outliers, I use the metaphor of the Prisoner’s Dilemma to exemplify the conflict between group interest and self-interest. There are a gazillion academic papers on the Prisoner’s Dilemma from a good dozen different academic disciplines, but the weirdest dataset on real people playing the game is from a British game show called Golden Balls.

In the final round of the game, called “Split or Steal,” two contestants play a one-shot Prisoner’s Dilemma—technically, it’s a variant—choosing to either cooperate (and split a jackpot) or defect (and try to steal it). If one steals and the other splits, the stealer gets the whole jackpot. And, of course, if both contestants steal then both end up with nothing. There are lots of videos from the show on YouTube. (There are even two papers that analyze data from the game.) The videos are interesting to watch, not just to see how players cooperate and defect, but to watch their conversation beforehand and their reactions afterwards. I wrote a few paragraphs about this game for Liars and Outliers, but I ended up deleting them.
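For readers who want the payoff structure spelled out, here is a minimal sketch in Python of the rules as described above. The function name and the jackpot amount are my own hypothetical choices; only the payoff rules come from the post, not the show's actual stakes or the papers' data.

```python
# Minimal sketch of the "Split or Steal" payoffs described in the post.
# The jackpot amount is hypothetical; only the rules come from the text above.

def split_or_steal(choice_a: str, choice_b: str, jackpot: float = 100_000.0):
    """Return (payoff_a, payoff_b) for one round.

    Rules: both split -> each gets half; one steals -> the stealer takes
    everything; both steal -> both get nothing.
    """
    if choice_a == "split" and choice_b == "split":
        return jackpot / 2, jackpot / 2
    if choice_a == "steal" and choice_b == "split":
        return jackpot, 0.0
    if choice_a == "split" and choice_b == "steal":
        return 0.0, jackpot
    return 0.0, 0.0  # both steal

if __name__ == "__main__":
    for a in ("split", "steal"):
        for b in ("split", "steal"):
            print(f"{a:5s} vs {b:5s} -> {split_or_steal(a, b)}")
```

The sketch also shows why it's a variant rather than the classic Prisoner's Dilemma: if your opponent steals, you get nothing whether you split or steal, so stealing only weakly dominates splitting instead of strictly dominating it.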

This is the weirdest, most surreal round of “Split or Steal” I have ever seen. The more I think about the psychology of it, the more interesting it is. I’ll save my comments for the comments, because I want you to watch it before I say more. Really.

For consistency’s sake in the comments, here are their names. The man on the left is Ibrahim, and the man on the right is Nick.

EDITED TO ADD (5/14): Economic analysis of the episode.

Posted on April 24, 2012 at 6:43 AM

James Randi on Magicians and the Security Mindset

Okay, so he doesn’t use that term. But he explains how a magician’s inherent ability to detect deception can be useful to science.

We can’t make magicians out of scientists—we wouldn’t want to—but we can help scientists “think in the groove”—think like a magician. And we should.

We are not scientists—with a few rare but important exceptions, like Ray Hyman and Richard Wiseman. But our highly specific expertise comes from knowledge of the ways in which our audiences can be led to quite false conclusions by calculated means: psychological, physical and especially sensory, visual being rather paramount since it has such a range of variety.

The fact that ours is a concealed art as well as one designed to confound persons of average and advanced thinking skills—our typical audience—makes it rather immune to ordinary analysis or solutions.

I’ve observed that scientists tend to think and perceive logically by using their training and observational skills—of course—and are thus often psychologically insulated from the possibility that there might be chicanery at work. This is where magicians can come in. No matter how well educated, or how basically intelligent, trained, or observant a scientist may be, s/he may be a poor judge of a methodology employed in deliberate deception.

Here’s my essay on the security mindset.

Posted on April 6, 2012 at 5:35 AM

