Entries Tagged "psychology of security"


The Neuroscience of Cons

Fascinating:

The key to a con is not that you trust the conman, but that he shows he trusts you. Conmen ply their trade by appearing fragile or needing help, by seeming vulnerable. Because of THOMAS [The Human Oxytocin Mediated Attachment System], the human brain makes us feel good when we help others—this is the basis for attachment to family and friends and cooperation with strangers. “I need your help” is a potent stimulus for action.

This is interesting. They say that all cons rely on the mark’s greed to work. But this short essay implies that greed is only a secondary factor.

Posted on November 18, 2008 at 6:32 AM

Interview on Nuclear Terror

With Brian Michael Jenkins from Rand Corp. I like his distinction between “terrorism” and “terror”:

NJ: Why did you decide to delve so deeply into the psychological underpinnings of nuclear terror?

Jenkins: Well, I couldn’t write about the history of nuclear terrorism, because at least as of yet there hasn’t been any. So that would have been a very short book. Nonetheless, the U.S. government has stated that it is the No. 1 threat to the national security of the United States. In fact, according to public opinion polls, two out of five Americans consider it likely that a terrorist will detonate a nuclear bomb in an American city within the next five years. That struck me as an astonishing level of apprehension.

NJ: To what do you attribute that fear?

Jenkins: I concluded that there is a difference between nuclear terrorism and nuclear terror. Nuclear terrorism is about the possibility that terrorists will acquire and detonate a nuclear weapon. Nuclear terror, on the other hand, concerns our anticipation of such an attack. It’s about our imagination. And while there is no history of nuclear terrorism, there is a rich history of nuclear terror. It’s deeply embedded in our popular culture and in policy-making circles.

This is also good:

NJ: How do you break this chain reaction of fear?

Jenkins: The first thing we have to do is truly understand the threat. Nuclear terrorism is a frightening possibility but it is not inevitable or imminent, and there is no logical progression from truck bombs to nuclear bombs. Some of the steps necessary to a sustainable strategy we’ve already begun. We do need better intelligence-sharing internationally and enhanced homeland security and civil defense, and we need to secure stockpiles of nuclear materials around the world.

Nations that might consider abetting terrorists in acquiring nuclear weapons should also be made aware that we will hold them fully responsible in the event of an attack. We need to finish the job of eliminating Al Qaeda, not only to prevent another attack but also to send the message to others that if you go down this path, we will hunt you down relentlessly and destroy you.

NJ: What should political leaders tell the American people?

Jenkins: Rather than telling Americans constantly to be very afraid, we should stress that even an event of nuclear terrorism will not bring this Republic to its knees. Some will argue that fear is useful in galvanizing people and concentrating their minds on this threat, but fear is not free. It creates its own orthodoxy and demands obedience to it. A frightened population is intolerant. It trumpets a kind of “lapel pin” patriotism rather than the real thing. A frightened population is also prone both to paralysis—we’re doomed!—and to dangerous overreaction.

I believe that fear gets in the way of addressing the issue of nuclear terrorism in a sustained and sensible way. Instead of spreading fear, our leaders should speak to the American traditions of courage, self-reliance, and resiliency. Heaven forbid that an act of nuclear terrorism ever actually occurs, but if it does, we’ll get through it.

Posted on November 11, 2008 at 6:26 AM

The Psychology of Con Men

Interesting:

My all-time favourite [short con] only makes the con artist a few dollars every time he does it, but I absolutely love it. These guys used to go door-to-door in the 1970s selling lightbulbs and they would offer to replace every single lightbulb in your house, so all your old lightbulbs would be replaced with a brand new lightbulb, and it would cost you, say $5, so a fraction of the cost of what new lightbulbs would cost. So the man comes in, he replaces each lightbulb, every single one in the house, and does it, you can check, and they all work, and then he takes all the lightbulbs that he’s just taken from the person’s house, goes next door and then sells them the same lightbulbs again. So it’s really just moving lightbulbs from one house to another and charging people a fee to do it.

Posted on October 20, 2008 at 5:57 AM

Does Risk Management Make Sense?

We engage in risk management all the time, but it only makes sense if we do it right.

“Risk management” is just a fancy term for the cost-benefit tradeoff associated with any security decision. It’s what we do when we react to fear, or try to make ourselves feel secure. It’s the fight-or-flight reflex that evolved in primitive fish and remains in all vertebrates. It’s instinctual, intuitive and fundamental to life, and one of the brain’s primary functions.

Some have hypothesized that humans have a “risk thermostat” that tries to maintain some optimal risk level. It explains why we drive our motorcycles faster when we wear a helmet, or are more likely to take up smoking during wartime. It’s our natural risk management in action.

The problem is our brains are intuitively suited to the sorts of risk management decisions endemic to living in small family groups in the East African highlands in 100,000 BC, and not to living in the New York City of 2008. We make systematic risk management mistakes—miscalculating the probability of rare events, reacting more to stories than data, responding to the feeling of security rather than reality, and making decisions based on irrelevant context. And that risk thermostat of ours? It’s not nearly as finely tuned as we might like it to be.

Like a rabbit that responds to an oncoming car with its default predator avoidance behavior—dart left, dart right, dart left, and at the last moment jump—instead of just getting out of the way, our Stone Age intuition doesn’t serve us well in a modern technological society. So when we in the security industry use the term “risk management,” we don’t want you to do it by trusting your gut. We want you to do risk management consciously and intelligently, to analyze the tradeoff and make the best decision.

This means balancing the costs and benefits of any security decision—buying and installing a new technology, implementing a new procedure or forgoing a common precaution. It means allocating a security budget to mitigate different risks by different amounts. It means buying insurance to transfer some risks to others. It’s what businesses do, all the time, about everything. IT security has its own risk management decisions, based on the threats and the technologies.

There’s never just one risk, of course, and bad risk management decisions often carry an underlying tradeoff. Terrorism policy in the U.S. is based more on politics than actual security risk, but the politicians who make these decisions are concerned about the risks of not being re-elected.

Many corporate security decisions are made to mitigate the risk of lawsuits rather than address the risk of any actual security breach. And individuals make risk management decisions that consider not only the risks to the corporation, but the risks to their departments’ budgets, and to their careers.

You can’t completely remove emotion from risk management decisions, but the best way to keep risk management focused on the data is to formalize the methodology. That’s what companies that manage risk for a living—insurance companies, financial trading firms and arbitrageurs—try to do. They try to replace intuition with models, and hunches with mathematics.
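The formalized methodology described above can be sketched with the standard annualized-loss-expectancy (ALE) calculation used in risk assessment. This is a common textbook model rather than one the essay specifies, and every figure below is hypothetical, for illustration only:

```python
def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Expected yearly loss: cost per incident times incidents per year."""
    return single_loss_expectancy * annual_rate_of_occurrence

def worth_buying(annual_cost: float, ale_before: float, ale_after: float) -> bool:
    """A countermeasure is justified only if the expected loss it
    prevents exceeds what it costs to run."""
    return (ale_before - ale_after) > annual_cost

# Hypothetical scenario: a breach costs $500,000 and occurs about once
# a decade; a $20,000-per-year control halves the rate of occurrence.
before = ale(500_000, 0.10)  # $50,000 expected loss per year
after = ale(500_000, 0.05)   # $25,000 expected loss per year
print(worth_buying(20_000, before, after))  # True: saves $25,000/yr for $20,000/yr
```

The point of a model like this is exactly the one made above: it forces a hunch ("this feels risky") into numbers that can be compared, audited, and argued about.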

The problem in the security world is we often lack the data to do risk management well. Technological risks are complicated and subtle. We don’t know how well our network security will keep the bad guys out, and we don’t know the cost to the company if we don’t keep them out. And the risks change all the time, making the calculations even harder. But this doesn’t mean we shouldn’t try.

You can’t avoid risk management; it’s fundamental to business just as to life. The question is whether you’re going to try to use data or whether you’re going to just react based on emotions, hunches and anecdotes.

This essay appeared as the first half of a point-counterpoint with Marcus Ranum in Information Security magazine.

Posted on October 14, 2008 at 1:25 PM

The Seven Habits of Highly Ineffective Terrorists

Most counterterrorism policies fail, not because of tactical problems, but because of a fundamental misunderstanding of what motivates terrorists in the first place. If we’re ever going to defeat terrorism, we need to understand what drives people to become terrorists.

Conventional wisdom holds that terrorism is inherently political, and that people become terrorists for political reasons. This is the “strategic” model of terrorism, and it’s basically an economic model. It posits that people resort to terrorism when they believe—rightly or wrongly—that terrorism is worth it; that is, when they believe the political gains of terrorism minus the political costs are greater than if they engaged in some other, more peaceful form of protest. It’s assumed, for example, that people join Hamas to achieve a Palestinian state; that people join the PKK to attain a Kurdish national homeland; and that people join al-Qaida to, among other things, get the United States out of the Persian Gulf.

If you believe this model, the way to fight terrorism is to change that equation, and that’s what most experts advocate. Governments tend to minimize the political gains of terrorism through a no-concessions policy; the international community tends to recommend reducing the political grievances of terrorists via appeasement, in hopes of getting them to renounce violence. Both advocate policies to provide effective nonviolent alternatives, like free elections.

Historically, none of these solutions has worked with any regularity. Max Abrahms, a predoctoral fellow at Stanford University’s Center for International Security and Cooperation, has studied dozens of terrorist groups from all over the world. He argues that the model is wrong. In a paper published this year in International Security that—sadly—doesn’t have the title “Seven Habits of Highly Ineffective Terrorists,” he discusses, well, seven habits of highly ineffective terrorists. These seven tendencies are seen in terrorist organizations all over the world, and they directly contradict the theory that terrorists are political maximizers:

Terrorists, he writes, (1) attack civilians, a policy that has a lousy track record of convincing those civilians to give the terrorists what they want; (2) treat terrorism as a first resort, not a last resort, failing to embrace nonviolent alternatives like elections; (3) don’t compromise with their target country, even when those compromises are in their best interest politically; (4) have protean political platforms, which regularly, and sometimes radically, change; (5) often engage in anonymous attacks, which precludes the target countries making political concessions to them; (6) regularly attack other terrorist groups with the same political platform; and (7) resist disbanding, even when they consistently fail to achieve their political objectives or when their stated political objectives have been achieved.

Abrahms has an alternative model to explain all this: People turn to terrorism for social solidarity. He theorizes that people join terrorist organizations worldwide in order to be part of a community, much like the reason inner-city youths join gangs in the United States.

The evidence supports this. Individual terrorists often have no prior involvement with a group’s political agenda, and often join multiple terrorist groups with incompatible platforms. Individuals who join terrorist groups are frequently not oppressed in any way, and often can’t describe the political goals of their organizations. People who join terrorist groups most often have friends or relatives who are members of the group, and the great majority of terrorists are socially isolated: unmarried young men or widowed women who weren’t working prior to joining. These things are true for members of terrorist groups as diverse as the IRA and al-Qaida.

For example, several of the 9/11 hijackers planned to fight in Chechnya, but they didn’t have the right paperwork so they attacked America instead. The mujahedeen had no idea whom they would attack after the Soviets withdrew from Afghanistan, so they sat around until they came up with a new enemy: America. Pakistani terrorists regularly defect to another terrorist group with a totally different political platform. Many new al-Qaida members say, unconvincingly, that they decided to become a jihadist after reading an extreme, anti-American blog, or after converting to Islam, sometimes just a few weeks before. These people know little about politics or Islam, and they frankly don’t even seem to care much about learning more. The blogs they turn to don’t have a lot of substance in these areas, even though more informative blogs do exist.

All of this explains the seven habits. It’s not that they’re ineffective; it’s that they have a different goal. They might not be effective politically, but they are effective socially: They all help preserve the group’s existence and cohesion.

This kind of analysis isn’t just theoretical; it has practical implications for counterterrorism. Not only can we now better understand who is likely to become a terrorist, but we can also engage in strategies specifically designed to weaken the social bonds within terrorist organizations. Driving a wedge between group members—commuting prison sentences in exchange for actionable intelligence, planting more double agents within terrorist groups—will go a long way toward weakening those bonds.

We also need to pay more attention to the socially marginalized, such as unassimilated communities in Western countries, than to the politically downtrodden. We need to support vibrant, benign communities and organizations as alternative ways for potential terrorists to get the social cohesion they need. And finally, we need to minimize collateral damage in our counterterrorism operations and to clamp down on bigotry and hate crimes, both of which create more dislocation and social isolation, and the inevitable calls for revenge.

This essay previously appeared on Wired.com.

EDITED TO ADD (10/9): Interesting rebuttal.

Posted on October 7, 2008 at 5:48 AM

Taleb on the Limitations of Risk Management

Nice paragraph on the limitations of risk management in this occasionally interesting interview with Nicholas Taleb:

Because then you get a Maginot Line problem. [After World War I, the French erected concrete fortifications to prevent Germany from invading again—a response to the previous war, which proved ineffective for the next one.] You know, they make sure they solve that particular problem, the Germans will not invade from here. The thing you have to be aware of most obviously is scenario planning, because typically if you talk about scenarios, you’ll overestimate the probability of these scenarios. If you examine them at the expense of those you don’t examine, sometimes it has left a lot of people worse off, so scenario planning can be bad. I’ll just take my track record. Those who did scenario planning have not fared better than those who did not do scenario planning. A lot of people have done some kind of “make-sense” type measures, and that has made them more vulnerable because they give the illusion of having done your job. This is the problem with risk management. I always come back to a classical question. Don’t give a fool the illusion of risk management. Don’t ask someone to guess the number of dentists in Manhattan after asking him the last four digits of his Social Security number. The numbers will always be correlated. I actually did some work on risk management, to show how stupid we are when it comes to risk.

Posted on October 3, 2008 at 7:48 AM

Movie-Plot Threats in the Guardian

We spend far more effort defending our countries against specific movie-plot threats than against real, broad threats. In the US during the months after the 9/11 attacks, we feared terrorists with scuba gear, terrorists with crop dusters and terrorists contaminating our milk supply. Both the UK and the US fear terrorists with small bottles of liquid. Our imaginations run wild with vivid specific threats. Before long, we’re envisioning an entire movie plot, without Bruce Willis saving the day. And we’re scared.

It’s not just terrorism; it’s any rare risk in the news. The big fear in Canada right now, following a particularly gruesome incident, is random decapitations on intercity buses. In the US, fears of school shootings are much greater than the actual risks. In the UK, it’s child predators. And people all over the world mistakenly fear flying more than driving. But the very definition of news is something that hardly ever happens. If an incident is in the news, we shouldn’t worry about it. It’s when something is so common that it’s no longer news – car crashes, domestic violence – that we should worry. But that’s not the way people think.

Psychologically, this makes sense. We are a species of storytellers. We have good imaginations and we respond more emotionally to stories than to data. We also judge the probability of something by how easy it is to imagine, so stories that are in the news feel more probable – and ominous – than stories that are not. As a result, we overreact to the rare risks we hear stories about, and fear specific plots more than general threats.

The problem with building security around specific targets and tactics is that it’s only effective if we happen to guess the plot correctly. If we spend billions defending the Underground and terrorists bomb a school instead, we’ve wasted our money. If we focus on the World Cup and terrorists attack Wimbledon, we’ve wasted our money.

It’s this fetish-like focus on tactics that results in the security follies at airports. We ban guns and knives, and terrorists use box-cutters. We take away box-cutters and corkscrews, so they put explosives in their shoes. We screen shoes, so they use liquids. We take away liquids, and they’re going to do something else. Or they’ll ignore airplanes entirely and attack a school, church, theatre, stadium, shopping mall, airport terminal outside the security area, or any of the other places where people pack together tightly.

These are stupid games, so let’s stop playing. Some high-profile targets deserve special attention and some tactics are worse than others. Airplanes are particularly important targets because they are national symbols and because a small bomb can kill everyone aboard. Seats of government are also symbolic, and therefore attractive, targets. But targets and tactics are interchangeable.

The following three things are true about terrorism. One, the number of potential terrorist targets is infinite. Two, the odds of the terrorists going after any one target are effectively zero. And three, the cost to the terrorist of switching targets is zero.

We need to defend against the broad threat of terrorism, not against specific movie plots. Security is most effective when it doesn’t require us to guess. We need to focus resources on intelligence and investigation: identifying terrorists, cutting off their funding and stopping them regardless of what their plans are. We need to focus resources on emergency response: lessening the impact of a terrorist attack, regardless of what it is. And we need to face the geopolitical consequences of our foreign policy.

In 2006, UK police arrested the liquid bombers not through diligent airport security, but through intelligence and investigation. It didn’t matter what the bombers’ target was. It didn’t matter what their tactic was. They would have been arrested regardless. That’s smart security. Now we confiscate liquids at airports, just in case another group happens to attack the exact same target in exactly the same way. That’s just illogical.

This essay originally appeared in The Guardian. Nothing I haven’t already said elsewhere.

Posted on September 4, 2008 at 5:56 AM

Pentagon Consulting Social Scientists on Security

This seems like a good idea:

Eager to embrace eggheads and ideas, the Pentagon has started an ambitious and unusual program to recruit social scientists and direct the nation’s brainpower to combating security threats like the Chinese military, Iraq, terrorism and religious fundamentalism.

The article talks a lot about potential conflicts of interest and such, and less on what sorts of insights the social scientists can offer. I think there is a lot of potential value here.

Posted on June 30, 2008 at 12:13 PM

Security and Human Behavior

I’m writing from the First Interdisciplinary Workshop on Security and Human Behavior (SHB 08).

Security is both a feeling and a reality, and they’re different. There are several different research communities: technologists who study security systems, and psychologists who study people, not to mention economists, anthropologists and others. Increasingly these worlds are colliding.

  • Security design is by nature psychological, yet many systems ignore this, and cognitive biases lead people to misjudge risk. For example, a key in the corner of a web browser makes people feel more secure than they actually are, while people feel far less secure flying than they actually are. These biases are exploited by various attackers.

  • Security problems relate to risk and uncertainty, and the way we react to them. Cognitive and perception biases affect the way we deal with risk, and therefore the way we understand security—whether that is the security of a nation, of an information system, or of one’s personal information.

  • Many real attacks on information systems exploit psychology more than technology. Phishing attacks trick people into logging on to websites that appear genuine but actually steal passwords. Technical measures can stop some phishing tactics, but stopping users from making bad decisions is much harder. Deception-based attacks are now the greatest threat to online security.

  • In order to be effective, security must be usable—not just by geeks, but by ordinary people. Research into usable security invariably has a psychological component.

  • Terrorism is perceived to be a major threat to society. Yet the actual damage done by terrorist attacks is dwarfed by the secondary effects as target societies overreact. There are many topics here, from the manipulation of risk perception to the anthropology of religion.

  • There are basic research questions; for example, about the extent to which the use and detection of deception in social contexts may have helped drive human evolution.
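The phishing bullet above notes that technical measures can stop some phishing tactics. One such measure can be sketched in a few lines: flagging a domain that is confusingly close to, but not the same as, a known brand name, using edit distance. This is an illustrative toy, not how any particular browser works, and the brand list is hypothetical:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the standard dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete from a
                            curr[j - 1] + 1,             # insert into a
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

KNOWN_BRANDS = ["paypal.com", "google.com"]  # hypothetical allowlist

def looks_like_phish(domain: str) -> bool:
    """Suspicious: near, but not equal to, a known brand domain."""
    return any(0 < edit_distance(domain, brand) <= 2 for brand in KNOWN_BRANDS)

print(looks_like_phish("paypa1.com"))  # True: one character from paypal.com
print(looks_like_phish("paypal.com"))  # False: exact match is the real site
```

Real defenses combine many such signals (reputation lists, certificate checks, interface cues), which is why the harder problem, as the bullet says, is the user’s decision rather than the string comparison.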

The dialogue between researchers in security and in psychology is rapidly widening, bringing in more and more disciplines—from security usability engineering, protocol design, privacy, and policy on the one hand, and from social psychology, evolutionary biology, and behavioral economics on the other.

About a year ago Ross Anderson and I conceived this conference as a way to bring together computer security researchers, psychologists, behavioral economists, sociologists, philosophers, and others—all of whom are studying the human side of security. I’ve read a lot—and written some—on psychology and security over the past few years, and have been continually amazed by some of the research that people outside my field have been doing on topics very relevant to my field. Ross and I both thought that bringing these diverse communities together would be fascinating to everyone. So we convinced behavioral economists Alessandro Acquisti and George Loewenstein to help us organize the workshop, invited the people we all have been reading, and also asked them who else to invite. The response was overwhelming. Almost everyone we wanted was able to attend, and the result was a 42-person conference with 35 speakers.

We’re most of the way through the morning, and it’s been even more fascinating than I expected. (Here’s the agenda.) We’ve talked about detecting deception in people, organizational biases in making security decisions, building security “intuition” into Internet browsers, different techniques to prevent crime, complexity and failure, and the modeling of security feeling.

I had high hopes of liveblogging this event, but it’s far too fascinating to spend time writing posts. If you want to read some of the more interesting papers written by the participants, this is a good page to start with.

I’ll write more about the conference later.

EDITED TO ADD (6/30): Ross Anderson has a blog post, where he liveblogs the individual sessions in the comments. And I should add that this was an invitational event—which is why you haven’t heard about it before—and that the room here at MIT is completely full.

EDITED TO ADD (7/1): Matt Blaze has posted audio. And Ross Anderson—link above—is posting paragraph-long summaries for each speaker.

EDITED TO ADD (7/6): Photos of the speakers.

EDITED TO ADD (7/7): MSNBC article on the workshop. And L. Jean Camp’s notes.

Posted on June 30, 2008 at 11:17 AM

