Entries Tagged "societal security"


One-Shot vs. Iterated Prisoner's Dilemma

This post by Aleatha Parker-Wood is very applicable to the things I wrote in Liars & Outliers:

A lot of fundamental social problems can be modeled as a disconnection between people who believe (correctly or incorrectly) that they are playing a non-iterated game (in the game theory sense of the word), and people who believe (correctly or incorrectly) that they are playing an iterated game.

For instance, mechanisms such as reputation mechanisms, ostracism, shaming, etc., are all predicated on the idea that the person you’re shaming will reappear and have further interactions with the group. Legal punishment is only useful if you can catch the person, and if the cost of the punishment is more than the benefit of the crime.

If it is possible to act as if the game you are playing is a one-shot game (for instance, you have a very large population to hide in, you don’t need to ever interact with people again, or you can be anonymous), your optimal strategies are going to be different than if you will have to play the game many times, and live with the legal or social consequences of your actions. If you can make enough money as CEO to retire immediately, you may choose to do so, even if you’re so terrible at running the company that no one will ever hire you again.

Social cohesion can be thought of as a manifestation of how “iterated” people feel their interactions are, how likely they are to interact with the same people again and again and have to deal with long term consequences of locally optimal choices, or whether they feel they can “opt out” of consequences of interacting with some set of people in a poor way.
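To make the payoff difference concrete, here is a minimal Python sketch (my own toy illustration, not part of the quoted post) using the standard textbook prisoner's dilemma payoffs. In a single round, defection dominates; against the same partner over many rounds, a conditional cooperator such as tit-for-tat ends up far ahead of mutual defection.

```python
# Toy illustration (not from the quoted post): standard prisoner's dilemma
# payoffs, showing how the best strategy changes between one-shot and
# iterated play. C = cooperate, D = defect.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror whatever the opponent did last round.
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds):
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(always_defect, tit_for_tat, rounds=1))      # (5, 0): defecting wins the one-shot game
print(play(tit_for_tat, tit_for_tat, rounds=100))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect, rounds=100))  # (100, 100): everyone loses
```

The specific numbers don't matter; the point is that the rational strategy flips as soon as you expect to face the same players, and the same reputational consequences, again.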

Posted on May 23, 2013 at 9:18 AM

Keeping Sensitive Information Out of the Hands of Terrorists Through Self-Restraint

In my forthcoming book (available February 2012), I talk about various mechanisms for societal security: how we as a group protect ourselves from the “dishonest minority” within us. I have four types of societal security systems:

  • moral systems—any internal rewards and punishments;
  • reputational systems—any informal external rewards and punishments;
  • rule-based systems—any formal system of rewards and punishments (mostly punishments); laws, mostly;
  • technological systems—everything like walls, door locks, cameras, and so on.

We spend most of our effort in the third and fourth category. I am spending a lot of time researching how the first two categories work.

Given that, I was very interested to see an article by Dallas Boyd in Homeland Security Affairs: “Protecting Sensitive Information: The Virtue of Self-Restraint,” in which he basically argues that, out of moral responsibility (he calls it “civic duty”), people should not publish information that terrorists could use. Ignore for a moment the debate about whether publishing information that could give terrorists ideas is actually a bad idea (I think it’s not); what Boyd is proposing is interesting in its own right. He specifically says that censorship is bad and won’t work, and wants to see voluntary self-restraint along with public shaming of offenders.

As an alternative to formal restrictions on communication, professional societies and influential figures should promote voluntary self-censorship as a civic duty. As this practice is already accepted among many scientists, it may be transferrable to members of other professions. As part of this effort, formal channels should be established in which citizens can alert the government to vulnerabilities and other sensitive information without exposing it to a wide audience. Concurrent with this campaign should be the stigmatization of those who recklessly disseminate sensitive information. This censure would be aided by the fact that many such people are unattractive figures whose writings betray their intellectual vanity. The public should be quick to furnish the opprobrium that presently escapes these individuals.

I don’t think it will work, and I don’t even think it’s possible in this international day and age, but it’s interesting to read the proposal.

Slashdot thread on the paper. Another article.

Posted on May 31, 2011 at 6:34 AM

RFID Tags Protecting Hotel Towels

The stealing of hotel towels isn’t a big problem in the scheme of world problems, but it can be expensive for hotels. Sure, we have moral prohibitions against stealing—that’ll prevent most people from stealing the towels. Many hotels put their name or logo on the towels. That works as a reputational societal security system; most people don’t want their friends to see obviously stolen hotel towels in their bathrooms. Sometimes, though, this has the opposite effect: making towels and other items into souvenirs of the hotel and thus more desirable to steal. It’s against the law to steal hotel towels, of course, but with the exception of large-scale thefts, the crime will never be prosecuted. (This might be different in third world countries. In 2010, someone was sentenced to three months in jail for stealing two towels from a Nigerian hotel.) The result is that more towels are stolen than hotels want. And for expensive resort hotels, those towels are expensive to replace.

The only thing left for hotels to do is take security into their own hands. One system that has become increasingly common is to set prices for towels and other items—this is particularly common with bathrobes—and charge the guest for them if they disappear from the rooms. This works with some things, but it’s too easy for the hotel to lose track of how many towels a guest has in his room, especially if piles of them are available at the pool.

A more recent system, still not widespread, is to embed washable RFID chips into the towels and track them that way. The one data point I have for this is an anonymous Hawaii hotel that claims they’ve reduced towel theft from 4,000 a month to 750, saving $16,000 in replacement costs monthly.
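As a rough sanity check on those numbers (the article doesn't state a per-towel replacement cost, so the figure below is only what the reported numbers imply):

```python
# Back-of-the-envelope check of the hotel's reported figures.
towels_before, towels_after = 4000, 750   # stolen per month, before and after tagging
monthly_savings = 16000                   # dollars, as reported

towels_saved = towels_before - towels_after           # 3,250 fewer towels stolen per month
implied_cost_per_towel = monthly_savings / towels_saved
print(round(implied_cost_per_towel, 2))               # about 4.92 dollars per towel
```

In other words, each prevented theft is worth roughly five dollars to the hotel.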

Assuming the RFID tags are relatively inexpensive and don’t wear out too quickly, that’s a pretty good security trade-off.

Posted on May 11, 2011 at 11:01 AM

Status Report: The Dishonest Minority

Three months ago, I announced that I was writing a book on why security exists in human societies. This is basically the book’s thesis statement:

All complex systems contain parasites. In any system of cooperative behavior, an uncooperative strategy will be effective—and the system will tolerate the uncooperatives—as long as they’re not too numerous or too effective. Thus, as a species evolves cooperative behavior, it also evolves a dishonest minority that takes advantage of the honest majority. If individuals within a species have the ability to switch strategies, the dishonest minority will never be reduced to zero. As a result, the species simultaneously evolves two things: 1) security systems to protect itself from this dishonest minority, and 2) deception systems to successfully be parasitic.

Humans evolved along this path. The basic mechanism can be modeled simply. It is in our collective group interest for everyone to cooperate. It is in any given individual’s short-term self interest not to cooperate: to defect, in game theory terms. But if everyone defects, society falls apart. To ensure widespread cooperation and minimal defection, we collectively implement a variety of societal security systems.

Two of these systems evolved in prehistory: morals and reputation. Two others evolved as our social groups became larger and more formal: laws and technical security systems. What these security systems do, effectively, is give individuals incentives to act in the group interest. But none of these systems, with the possible exception of some fanciful science-fiction technologies, can ever bring that dishonest minority down to zero.

In complex modern societies, many complications intrude on this simple model of societal security. Decisions to cooperate or defect are often made by groups of people—governments, corporations, and so on—and there are important differences because of dynamics inside and outside the groups. Much of our societal security is delegated—to the police, for example—and becomes institutionalized; the dynamics of this are also important. Power struggles over who controls the mechanisms of societal security are inherent: “group interest” rapidly devolves to “the king’s interest.” Societal security can become a tool for those in power to remain in power, with the definition of “honest majority” being simply the people who follow the rules.

The term “dishonest minority” is not a moral judgment; it simply describes the minority who does not follow societal norms. Since many societal norms are in fact immoral, sometimes the dishonest minority serves as a catalyst for social change. Societies without a reservoir of people who don’t follow the rules lack an important mechanism for societal evolution. Vibrant societies need a dishonest minority; if society makes its dishonest minority too small, it stifles dissent as well as common crime.

At this point, I have most of a first draft: 75,000 words. The tentative title is still “The Dishonest Minority: Security and its Role in Modern Society.” I have signed a contract with Wiley to deliver a final manuscript in November for February 2012 publication. Writing a book is a process of exploration for me, and the final book will certainly be a little different—and maybe even very different—from what I wrote above. But that’s where I am today.

And it’s why my other writings continue to be sparse.

Posted on May 9, 2011 at 7:02 AM

Social Solidarity as an Effect of the 9/11 Terrorist Attacks

It’s standard sociological theory that a group experiences social solidarity in response to external conflict. This paper studies the phenomenon in the United States after the 9/11 terrorist attacks.

Conflict produces group solidarity in four phases: (1) an initial few days of shock and idiosyncratic individual reactions to attack; (2) one to two weeks of establishing standardized displays of solidarity symbols; (3) two to three months of high solidarity plateau; and (4) gradual decline toward normalcy in six to nine months. Solidarity is not uniform but is clustered in local groups supporting each other’s symbolic behavior. Actual solidarity behaviors are performed by minorities of the population, while vague verbal claims to performance are made by large majorities. Commemorative rituals intermittently revive high emotional peaks; participants become ranked according to their closeness to a center of ritual attention. Events, places, and organizations claim importance by associating themselves with national solidarity rituals and especially by surrounding themselves with pragmatically ineffective security ritual. Conflicts arise over access to centers of ritual attention; clashes occur between pragmatists deritualizing security and security zealots attempting to keep up the level of emotional intensity. The solidarity plateau is also a hysteria zone; as a center of emotional attention, it attracts ancillary attacks unrelated to the original terrorists as well as alarms and hoaxes. In particular historical circumstances, it becomes a period of atrocities.

This certainly makes sense as a group survival mechanism: self-interest giving way to group interest in face of a threat to the group. It’s the kind of thing I am talking about in my new book.

Paper also available here.

Posted on April 27, 2011 at 9:10 AM

Detecting Cheaters

Our brains are specially designed to deal with cheating in social exchanges. The evolutionary psychology explanation is that we evolved brain heuristics for the social problems that our prehistoric ancestors had to deal with. Once humans became good at cheating, they then had to become good at detecting cheating—otherwise, the social group would fall apart.

Perhaps the most vivid demonstration of this can be seen with variations on what’s known as the Wason selection task, named after the psychologist who first studied it. Back in the 1960s, it was a test of logical reasoning; today, it’s used more as a demonstration of evolutionary psychology. But before we get to the experiment, let’s get into the mathematical background.

Propositional calculus is a system for deducing conclusions from true premises. It uses variables for statements because the logic works regardless of what the statements are. College courses on the subject are taught by either the mathematics or the philosophy department, and they’re not generally considered to be easy classes. Two particular rules of inference are relevant here: modus ponens and modus tollens. Both allow you to reason from a statement of the form, "if P, then Q." (If Socrates was a man, then Socrates was mortal. If you are to eat dessert, then you must first eat your vegetables. If it is raining, then Gwendolyn had Crunchy Wunchies for breakfast. That sort of thing.) Modus ponens goes like this:

If P, then Q. P. Therefore, Q.

In other words, if you assume the conditional rule is true, and if you assume the antecedent of that rule is true, then the consequent is true. So,

If Socrates was a man, then Socrates was mortal. Socrates was a man. Therefore, Socrates was mortal.

Modus tollens is more complicated:

If P, then Q. Not Q. Therefore, not P.

If Socrates was a man, then Socrates was mortal. Socrates was not mortal. Therefore, Socrates was not a man.

This makes sense: if Socrates was not mortal, then he was a demigod or a stone statue or something.

Both are valid forms of logical reasoning. If you know "if P, then Q" and "P," then you know "Q." If you know "if P, then Q" and "not Q," then you know "not P." (The other two similar forms don’t work. If you know "if P, then Q" and "Q," you don’t know anything about "P." And if you know "if P, then Q" and "not P," then you don’t know anything about "Q.")
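If you'd rather check this mechanically than take it on faith, here is a short sketch (my own illustration, not part of the original essay) that brute-forces the truth table for "if P, then Q" and tests all four argument forms:

```python
from itertools import product

def implies(p, q):
    # Material implication: "if P, then Q" is false only when P is true and Q is false.
    return (not p) or q

def valid(premise, conclusion):
    # A form is valid if the conclusion holds in every truth assignment
    # where both "if P, then Q" and the extra premise hold.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if implies(p, q) and premise(p, q))

print(valid(lambda p, q: p,     lambda p, q: q))      # modus ponens: True
print(valid(lambda p, q: not q, lambda p, q: not p))  # modus tollens: True
print(valid(lambda p, q: q,     lambda p, q: p))      # affirming the consequent: False
print(valid(lambda p, q: not p, lambda p, q: not q))  # denying the antecedent: False
```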

If I explained this in front of an audience full of normal people, not mathematicians or philosophers, most of them would be lost. Unsurprisingly, they would have trouble either explaining the rules or using them properly. Just ask any grad student who has had to teach a formal logic class; people have trouble with this.

Consider the Wason selection task. Subjects are presented with four cards next to each other on a table. Each card represents a person, with each side listing some statement about that person. The subject is then given a general rule and asked which cards he would have to turn over to ensure that the four people satisfied that rule. For example, the general rule might be, "If a person travels to Boston, then he or she takes a plane." The four cards might correspond to travelers and have a destination on one side and a mode of transport on the other. On the side facing the subject, they read: "went to Boston," "went to New York," "took a plane," and "took a car." Formal logic states that the rule is violated if someone goes to Boston without taking a plane. Translating into propositional calculus, there’s the general rule: if P, then Q. The four cards are "P," "not P," "Q," and "not Q." To verify that "if P, then Q" is a valid rule, you have to verify modus ponens by turning over the "P" card and making sure that the reverse says "Q." To verify modus tollens, you turn over the "not Q" card and make sure that the reverse doesn’t say "P."

Shifting back to the example, you need to turn over the "went to Boston" card to make sure that person took a plane, and you need to turn over the "took a car" card to make sure that person didn’t go to Boston. You don’t—as many people think—need to turn over the "took a plane" card to see if it says "went to Boston" because you don’t care. The person might have been flying to Boston, New York, San Francisco, or London. The rule only says that people going to Boston fly; it doesn’t break the rule if someone flies elsewhere.
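Another way to see it: for each card, ask whether anything that could be on its hidden side would break the rule. Only two cards can possibly reveal a violation. A quick sketch (again mine, reusing the Boston example above):

```python
# For each visible face, enumerate what the hidden side could say and check
# whether any combination violates "if a person travels to Boston, they take a plane."

def violates(destination, transport):
    return destination == "Boston" and transport != "plane"

cards = {
    "went to Boston":   [("Boston", t) for t in ("plane", "car")],
    "went to New York": [("New York", t) for t in ("plane", "car")],
    "took a plane":     [(d, "plane") for d in ("Boston", "New York")],
    "took a car":       [(d, "car") for d in ("Boston", "New York")],
}

for face, hidden_possibilities in cards.items():
    worth_turning = any(violates(d, t) for d, t in hidden_possibilities)
    print(face, "->", "must turn over" if worth_turning else "can ignore")
# Only "went to Boston" and "took a car" can possibly reveal a violation.
```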

If you’re confused, you aren’t alone. When Wason first did this study, fewer than 10 percent of his subjects got it right. Others replicated the study and got similar results. The best result I’ve seen is "fewer than 25 percent." Training in formal logic doesn’t seem to help very much (and most people take a logic class only when it’s required anyway). Neither does ensuring that the example is drawn from events and topics with which the subjects are familiar. People are just bad at the Wason selection task.

This isn’t just another "math is hard" story. There’s a point to this. The one variation of this task that people are surprisingly good at getting right is when the rule has to do with cheating and privilege. For example, change the four cards to children in a family—"gets dessert," "doesn’t get dessert," "ate vegetables," and "didn’t eat vegetables"—and change the rule to "If a child gets dessert, he or she ate his or her vegetables." Many people—65 to 80 percent—get it right immediately. They turn over the "gets dessert" card, making sure the child ate his vegetables, and they turn over the "didn’t eat vegetables" card, making sure the child didn’t get dessert. Another way of saying this is that they turn over the "benefit received" card to make sure the cost was paid. And they turn over the "cost not paid" card to make sure no benefit was received. They look for cheaters.

The difference is startling. Subjects don’t need formal logic training. They don’t need math or philosophy. When asked to explain their reasoning, they say things like the answer "popped out at them."

Researchers, particularly evolutionary psychologists Leda Cosmides and John Tooby, have run this experiment with a variety of wordings and settings and on a variety of subjects: adults in the US, UK, Germany, Italy, France, and Hong Kong; Ecuadorian schoolchildren; and Shiwiar tribesmen in Ecuador. The results are the same: people are bad at the Wason selection task, except when the wording involves cheating.

In the world of propositional calculus, there’s absolutely no difference between a rule about traveling to Boston by plane and a rule about eating vegetables to get dessert. But in our brains, there’s an enormous difference: the first is an arbitrary rule about the world, and the second is a rule of social exchange. It’s of the form "If you take Benefit B, you must first satisfy Requirement R."

Our brains are optimized to detect cheaters in a social exchange. We’re good at it. Even as children, we intuitively notice when someone gets a benefit he didn’t pay the cost for. Those of us who grew up with a sibling have experienced how the one child not only knew that the other cheated, but felt compelled to announce it to the rest of the family. As adults, we might have learned that life isn’t fair, but we still know who among our friends cheats in social exchanges. We know who doesn’t pay his or her fair share of a group meal. At an airport, we might not notice the rule "If a plane is flying internationally, then it boards 15 minutes earlier than domestic flights." But we’ll certainly notice who breaks the "If you board first, then you must be a first-class passenger" rule.

This essay was originally published in IEEE Security & Privacy, and is an excerpt from the draft of my new book.

EDITED TO ADD (4/14): Another explanation of the Wason Selection Task, with a possible correlation with psychopathy.

Posted on April 7, 2011 at 1:10 PM

Societal Security

Humans have a natural propensity to trust non-kin, even strangers. We do it so often, so naturally, that we don’t even realize how remarkable it is. But except for a few simplistic counterexamples, it’s unique among life on this planet. Because we are intelligently calculating and value reciprocity (that is, fairness), we know that humans will be honest and nice: not for any immediate personal gain, but because that’s how they are. We also know that doesn’t work perfectly; most people will be dishonest some of the time, and some people will be dishonest most of the time. How does society—the honest majority—prevent the dishonest minority from taking over, or ruining society for everyone? How is the dishonest minority kept in check? The answer is security—in particular, something I’m calling societal security.

I want to divide security into two types. The first is individual security. It’s basic. It’s direct. It’s what normally comes to mind when we think of security. It’s cops vs. robbers, terrorists vs. the TSA, Internet worms vs. firewalls. And this sort of security is as old as life itself or—more precisely—as old as predation. And humans have brought an incredible level of sophistication to individual security.

Societal security is different. At the tactical level, it also involves attacks, countermeasures, and entire security systems. But instead of A vs. B, or even Group A vs. Group B, it’s Group A vs. members of Group A. It’s security for individuals within a group from members of that group. It’s how Group A protects itself from the dishonest minority within Group A. And it’s where security really gets interesting.

There are many types—I might try to estimate the number someday—of societal security systems that enforce our trust of non-kin. They’re things like laws prohibiting murder, taxes, traffic laws, pollution control laws, religious intolerance, Mafia codes of silence, and moral codes. They enable us to build a society that the dishonest minority can’t exploit and destroy. Originally, these security systems were informal. But as society got more complex, the systems became more formalized, and eventually were embedded into technologies.

James Madison famously wrote: “If men were angels, no government would be necessary.” Government is just the beginning of what wouldn’t be necessary. Currency, that paper stuff that’s deliberately made hard to counterfeit, wouldn’t be necessary, as people could just keep track of how much money they had. Angels never cheat, so nothing more would be required. Door locks, and any barrier that isn’t designed to protect against accidents, wouldn’t be necessary, since angels never go where they’re not supposed to go. Police forces wouldn’t be necessary. Armies: I suppose that’s debatable. Would angels—not the fallen ones—ever go to war against one another? I’d like to think they would be able to resolve their differences peacefully. If people were angels, every security measure that isn’t designed to be effective against accident, animals, forgetfulness, or legitimate differences between scrupulously honest angels could be dispensed with.

Security isn’t just a tax on the honest; it’s a very expensive tax on the honest. It’s the most expensive tax we pay, regardless of the country we live in. If people were angels, just think of the savings!

It wasn’t always like this. Security—especially societal security—used to be cheap. It used to be an incidental cost of society.

In a primitive society, informal systems are generally good enough. When you’re living in a small community, and objects are both scarce and hard to make, it’s pretty easy to deal with the problem of theft. If Alice loses a bowl, and at the same time, Bob shows up with an identical bowl, everyone knows Bob stole it from Alice, and the community can then punish Bob as it sees fit. But as communities get larger, as social ties weaken and anonymity increases, this informal system of theft prevention—detection and punishment leading to deterrence—fails. As communities get more technological and as the things people might want to steal get more interchangeable and harder to identify, it also fails. In short, as our ancestors made the move from small family groups to larger groups of unrelated families, and then to a modern form of society, the informal societal security systems started failing and more formal systems had to be invented to take their place. We needed to put license plates on cars and audit people’s tax returns.

We had no choice. Anything larger than a very primitive society couldn’t exist without societal security.

I’m writing a book about societal security. I will discuss human psychology: how we make security trade-offs, why we routinely trust non-kin (an evolutionary puzzle, to be sure), and how the majority of us are honest while a minority of us are dishonest. That dishonest minority are the free riders of societal systems, and security is how we protect society from them. I will model the fundamental trade-off of societal security—individual self-interest vs. societal group interest—as a group prisoner’s dilemma problem, and use that metaphor to examine the basic mechanics of societal security. A lot falls out of this: free riders, the Tragedy of the Commons, the subjectivity of both morals and risk trade-offs.
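For readers who haven't seen the group version, here is a minimal sketch of the payoff structure I have in mind (the numbers are arbitrary and purely illustrative): each cooperator pays a cost that benefits the whole group, every individual does better by defecting no matter what the others do, and yet the all-cooperate group beats the all-defect group.

```python
# Illustrative public-goods game (a group prisoner's dilemma). Arbitrary numbers:
# each cooperator pays 10 into a pot, the pot is tripled, and the result is split
# evenly among all members, cooperators and defectors alike.

def payoffs(cooperators, group_size, contribution=10, multiplier=3):
    share = cooperators * contribution * multiplier / group_size
    return share - contribution, share   # (cooperator's payoff, defector's payoff)

N = 10
print(payoffs(N, N)[0])    # 20.0: everyone cooperates, each member nets 20
print(payoffs(N - 1, N))   # (17.0, 27.0): a lone defector nets more than the cooperators
print(payoffs(0, N)[1])    # 0.0: everyone defects, everyone nets nothing
```

With these numbers, switching from cooperation to defection always gains an individual exactly 7, whatever the rest of the group does; morals, reputation, laws, and technical security are, in this framing, the mechanisms that make that switch unattractive.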

Using this model, I will explore the security systems that protect—and fail to protect—market economics, corporations and other organizations, and a variety of national systems. I think there’s a lot we can learn about security by applying the prisoner’s dilemma model, and I’ve only recently started. Finally, I want to discuss modern changes to our millennia-old systems of societal security. The Information Age has changed a number of paradigms, and it’s not clear that our old security systems are working properly now or will work in the future. I’ve got a lot of work to do yet, and the final book might look nothing like this short outline. That sort of thing happens.

Tentative title: The Dishonest Minority: Security and its Role in Modern Society. I’ve written several books on the how of security. This book is about the why of security.

I expect to finish my first draft before Summer. Throughout 2011, expect to see bits from the book here. They might not make sense as a coherent whole at first—especially because I don’t write books in strict order—but by the time the book is published, it’ll all be part of a coherent and (hopefully) compelling narrative.

And if I write fewer extended blog posts and essays in the coming year, you’ll know why.

Posted on February 15, 2011 at 5:43 AM

Psychopaths and Security

I have been thinking a lot about security against psychopaths. Or, at least, how we have traditionally secured social systems against these sorts of people, and how we can secure our socio-technical systems against them. I don’t know if I have any conclusions yet, only a short reading list.

EDITED TO ADD (12/12): Good article from 2001. The sociobiology of sociopathy. Psychopathic fraudsters and how they function in bureaucracies.

Posted on November 26, 2010 at 1:52 PM

Wrasse Punish Cheaters

Interesting:

The bluestreak cleaner wrasse (Labroides dimidiatus) operates an underwater health spa for larger fish. It advertises its services with bright colours and distinctive dances. When customers arrive, the cleaner eats parasites and dead tissue lurking in any hard-to-reach places. Males and females will sometimes operate a joint business, working together to clean their clients. The clients, in return, dutifully pay the cleaners by not eating them.

That’s the basic idea, but cleaners sometimes violate their contracts. Rather than picking off parasites, they’ll take a bite of the mucus that lines their clients’ skin. That’s an offensive act—it’s like a masseuse having an inappropriate grope between strokes. The affronted client will often leave. That’s particularly bad news if the cleaners are working as a pair because the other fish, who didn’t do anything wrong, still loses out on future parasite meals.

Males don’t take this sort of behaviour lightly. Nichola Raihani from the Zoological Society of London has found that males will punish their female partners by chasing them aggressively, if their mucus-snatching antics cause a client to storm out.

[…]

At first glance, the male cleaner wrasse behaves oddly for an animal, in punishing an offender on behalf of a third party, even though he hasn’t been wronged himself. That’s common practice in human societies but much rarer in the animal world. But Raihani’s experiments clearly show that the males are actually doing themselves a favour by punishing females on behalf of a third party. Their act of apparent altruism means they get more food in the long run.

Posted on January 20, 2010 at 1:26 PM

Second SHB Workshop Liveblogging (6)

The first session of the morning was “Foundations,” which is kind of a catch-all for a variety of things that didn’t really fit anywhere else. Rachel Greenstadt moderated.

Terence Taylor, International Council for the Life Sciences (suggested video to watch: Darwinian Security; Natural Security), talked about the lessons evolution teaches about living with risk. Successful species didn’t survive by eliminating the risks of their environment; they survived by adaptation. Adaptation isn’t always what you think. For example, you could view the collapse of the Soviet Union as a failure to adapt, but you could also view it as successful adaptation. Risk is good. Risk is essential for the survival of a society, because risk-takers are the drivers of change. In the discussion phase, John Mueller pointed out a key difference between human and biological systems: humans tend to respond dramatically to anomalous events (the anthrax attacks), while biological systems respond to sustained change. And David Livingstone Smith asked about the difference between biological adaptation that affects the reproductive success of an organism’s genes, even at the expense of the organism, and security adaptation. (I recommend the book he edited: Natural Security: A Darwinian Approach to a Dangerous World.)

Andrew Odlyzko, University of Minnesota (suggested reading: Network Neutrality, Search Neutrality, and the Never-Ending Conflict between Efficiency and Fairness in Markets, Economics, Psychology, and Sociology of Security), discussed human-space vs. cyberspace. People cannot build secure systems—we know that—but people also cannot live with secure systems. We require a certain amount of flexibility in our systems. And finally, people don’t need secure systems. We survive with an astounding amount of insecurity in our world. The problem with cyberspace is that it was originally conceived of as separate from the physical world, and as something that could correct for the inadequacies of the physical world. Really, the two are intertwined, and human space more often corrects for the inadequacies of cyberspace. Lessons: build messy systems, not clean ones; create a web of ties to other systems; create permanent records.

danah boyd, Microsoft Research (suggested reading: Taken Out of Context—American Teen Sociality in Networked Publics), does ethnographic studies of teens in cyberspace. Teens tend not to lie to their friends in cyberspace, but they lie to the system. From an early age, they’ve been taught that they need to lie online to be safe. Teens regularly share their passwords: with their parents when forced, or with their best friend or significant other. This is a way of demonstrating trust. It’s part of the social protocol for this generation. In general, teens don’t use social media in the same way as adults do. And when they grow up, they won’t use social media in the same way as today’s adults do. Teens view privacy in terms of control, and take their cues about privacy from celebrities and how they use social media. And their sense of privacy is much more nuanced and complicated. In the discussion phase, danah wasn’t sure whether the younger generation would be more or less susceptible to Internet scams than the rest of us—they’re not nearly as technically savvy as we might think they are. “The only thing that saves teenagers is fear of their parents”; they try to lock them out, and lock others out in the process. Socio-economic status matters a lot, in ways that she is still trying to figure out. There are three different types of social networks: personal networks, articulated networks, and behavioral networks, and they’re different.

Mark Levine, Lancaster University (suggested reading: The Kindness of Crowds; Intra-group Regulation of Violence: Bystanders and the (De)-escalation of Violence), does social psychology. He argued against the common belief that groups are bad (mob violence, mass hysteria, peer group pressure). He collects data from UK CCTV cameras, searches it for aggressive behavior, and studies when and how bystanders either help escalate or de-escalate the situations. Results: as groups get bigger, there is no increase in anti-social acts and a significant increase in pro-social acts. He has much more analysis and results, too complicated to summarize here. One key finding: when a third party intervenes in an aggressive interaction, it is much more likely to de-escalate. Basically, groups can act against violence. “When it comes to violence (and security), group processes are part of the solution—not part of the problem?”

Jeff MacKie-Mason, University of Michigan (suggested reading: Humans are smart devices, but not programmable; Security when people matter; A Social Mechanism for Supporting Home Computer Security), is an economist: “Security problems are incentive problems.” He discussed motivation, and how to design systems to take motivation into account. Humans are smart devices; they can’t be programmed, but they can be influenced through the sciences of motivational behavior: microeconomics, game theory, social psychology, psychodynamics, and personality psychology. He gave a couple of general examples of how these theories can inform security system design.

Joe Bonneau, Cambridge University, talked about social networks like Facebook, and privacy. People misunderstand why privacy and security are important on social networking sites like Facebook. People underestimate what Facebook really is; it really is a reimplementation of the entire Internet. “Everything on the Internet is becoming social,” and that makes security different. Phishing is different; 419-style scams are different. Social context makes some scams easier; social networks are fun, noisy, and unpredictable. “People use social networking systems with their brain turned off.” But social context can be used to spot frauds and anomalies, and can be used to establish trust.

Three more sessions to go. (I am enjoying liveblogging the event. It’s helping me focus and pay closer attention.)

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 9:54 AM
