Entries Tagged "audio"


Second SHB Workshop Liveblogging (7)

Session Six—“Terror”—chaired by Stuart Schechter.

Bill Burns, Decision Research (suggested reading: The Diffusion of Fear: Modeling Community Response to a Terrorist Strike), studies social reaction to risk. He discussed his theoretical model of how people react to fear events, and data from the 9/11 attacks, the 7/7 bombings in the UK, and the 2008 financial collapse. Basically, we can’t remain fearful. No matter what happens, fear spikes immediately after an event and subsides over the following 45 or so days. He believes that the greatest mistake we made after 9/11 was labeling the event as terrorism instead of an international crime.

Chris Cocking, London Metropolitan University (suggested reading: Effects of social identity on responses to emergency mass evacuation), looks at the group behavior of people responding to emergencies. Traditionally, most emergency planning is based on the panic model: people in crowds are prone to irrational behavior and panic. There’s also a social attachment model that predicts that social norms don’t break down in groups. He prefers a self-categorization approach: disasters create a common identity, which results in orderly and altruistic behavior among strangers. The greater the threat, the greater the common identity, and spontaneous resilience can occur. He displayed a photograph of “panic” in New York on 9/11 and showed how it wasn’t panic at all. Panic seems to be more a myth than a reality. This has policy implications during an event: provide people with information. People are more likely to underreact than to overreact; when overreaction does occur, it’s because people are acting as individuals rather than as a group, so those in authority should encourage a sense of collective identity. “Crowds can be part of the solution rather than part of the problem.”

Richard John, University of Southern California (suggested reading: Decision Analysis by Proxy for the Rational Terrorist), talked about the process of social amplification of risk (with respect to terrorism). Events result in relatively small losses; it’s the changes in behavior following an event that result in much greater losses. There’s a dynamic of risk perception, and it’s very contextual. He uses vignettes to study how risk perception changes over time, and discussed some of the studies he’s conducting and ideas for future studies.

Mark Stewart, University of Newcastle, Australia (suggested reading: A risk and cost-benefit assessment of United States aviation security measures; Risk and Cost-Benefit Assessment of Counter-Terrorism Protective Measures to Infrastructure), examines infrastructure security and whether the costs exceed the benefits. He talked about cost/benefit trade-off, and how to apply probabilistic terrorism risk assessment; then, he tried to apply this model to the U.S. Federal Air Marshal Service. His result: they’re not worth it. You can quibble with his data, but the real value is a transparent process. During the discussion, I said that it is important to realize that risks can’t be taken in isolation, that anyone making a security trade-off is balancing several risks: terrorism risks, political risks, the personal risks to his career, etc.

John Adams, University College London (suggested reading: Deus e Brasileiro?; Can Science Beat Terrorism?; Bicycle bombs: a further inquiry), applies his risk thermostat model to terrorism. He presented a series of amusing photographs of overreactions to risk, most of them not really about risk aversion but more about liability aversion. He talked about bureaucratic paranoia, as well as bureaucratic incitements to paranoia, and how this is beginning to backfire. People treat risks differently, depending on whether they are voluntary, impersonal, or imposed, and whether people have total control, diminished control, or no control.

Dan Gardner, Ottawa Citizen (suggested reading: The Science of Fear: Why We Fear the Things We Shouldn’t—and Put Ourselves in Greater Danger), talked about how the media covers risks, threats, attacks, etc. He talked about the various ways the media screws up, all of which were familiar to everyone. His thesis is not that the media gets things wrong in order to increase readership/viewership and therefore profits, but that the media gets things wrong because reporters are human. Bad news bias is not a result of the media hyping bad news, but the natural human tendency to remember the bad more than the good. The evening news is centered around stories because people—including reporters—respond to stories, and stories with novelty, emotion, and drama are better stories.

Some of the discussion was about the nature of panic: whether and where it exists, and what it looks like. Someone from the audience questioned whether panic was related to proximity to the event; someone else pointed out that people very close to the 7/7 bombings took pictures and made phone calls—and that there was no evidence of panic. Also, on 9/11 pretty much everyone below where the airplanes struck the World Trade Center got out safely; everyone above couldn’t get out and died. Angela Sasse pointed out that the previous terrorist attack against the World Trade Center, and the changes made in evacuation procedures afterwards, contributed to the lack of panic on 9/11. Bill Burns said that the purest form of panic is a drowning person. Jean Camp asked whether the recent attacks against women’s health providers should be classified as terrorism, or whether we are better off framing them as crime. There was also talk about sky marshals and their effectiveness. I said that it isn’t sky marshals that are a deterrent, but the idea of sky marshals. Terence Taylor said that increasing uncertainty on the part of the terrorists is, in itself, a security measure. There was also a discussion about how risk-averse terrorists are; they seem to want to believe they have an 80% or 90% chance of success before they will launch an attack.

Next, lunch—and two final sessions this afternoon.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 12:01 PM

Second SHB Workshop Liveblogging (6)

The first session of the morning was “Foundations,” which is kind of a catch-all for a variety of things that didn’t really fit anywhere else. Rachel Greenstadt moderated.

Terence Taylor, International Council for the Life Sciences (suggested video to watch: Darwinian Security; Natural Security), talked about the lessons evolution teaches about living with risk. Successful species didn’t survive by eliminating the risks of their environment; they survived by adaptation. Adaptation isn’t always what you think. For example, you could view the collapse of the Soviet Union as a failure to adapt, but you could also view it as successful adaptation. Risk is good. Risk is essential for the survival of a society, because risk-takers are the drivers of change. In the discussion phase, John Mueller pointed out a key difference between human and biological systems: humans tend to respond dramatically to anomalous events (the anthrax attacks), while biological systems respond to sustained change. And David Livingstone Smith asked about the difference between biological adaptation, which affects the reproductive success of an organism’s genes even at the expense of the organism, and security adaptation. (I recommend the book he edited: Natural Security: A Darwinian Approach to a Dangerous World.)

Andrew Odlyzko, University of Minnesota (suggested reading: Network Neutrality, Search Neutrality, and the Never-Ending Conflict between Efficiency and Fairness in Markets, Economics, Psychology, and Sociology of Security), discussed human-space vs. cyberspace. People cannot build secure systems—we know that—but people also cannot live with secure systems. We require a certain amount of flexibility in our systems. And finally, people don’t need secure systems. We survive with an astounding amount of insecurity in our world. The problem with cyberspace is that it was originally conceived as separate from the physical world, and as something that could correct for the physical world’s inadequacies. Really, the two are intertwined, and it is human space that more often corrects for the inadequacies of cyberspace. Lessons: build messy systems, not clean ones; create a web of ties to other systems; create permanent records.

danah boyd, Microsoft Research (suggested reading: Taken Out of Context—American Teen Sociality in Networked Publics), does ethnographic studies of teens in cyberspace. Teens tend not to lie to their friends in cyberspace, but they lie to the system. From an early age, they’ve been taught that they need to lie online to be safe. Teens regularly share their passwords: with their parents when forced, or with their best friend or significant other. This is a way of demonstrating trust. It’s part of the social protocol for this generation. In general, teens don’t use social media in the same way as adults do. And when they grow up, they won’t use social media in the same way as today’s adults do. Teens view privacy in terms of control, and take their cues about privacy from celebrities and how they use social media. And their sense of privacy is much more nuanced and complicated. In the discussion phase, danah wasn’t sure whether the younger generation would be more or less susceptible to Internet scams than the rest of us—they’re not nearly as technically savvy as we might think they are. “The only thing that saves teenagers is fear of their parents”; they try to lock their parents out, and lock others out in the process. Socio-economic status matters a lot, in ways that she is still trying to figure out. There are three different types of social networks (personal, articulated, and behavioral), and they should not be conflated.

Mark Levine, Lancaster University (suggested reading: The Kindness of Crowds; Intra-group Regulation of Violence: Bystanders and the (De)-escalation of Violence), does social psychology. He argued against the common belief that groups are bad (mob violence, mass hysteria, peer group pressure). He collects data from UK CCTV cameras, searches it for aggressive behavior, and studies when and how bystanders either help escalate or de-escalate the situations. Results: as groups get bigger, there is no increase in anti-social acts and a significant increase in pro-social acts. He has much more analysis and results, too complicated to summarize here. One key finding: when a third party intervenes in an aggressive interaction, it is much more likely to de-escalate. Basically, groups can act against violence. “When it comes to violence (and security), group processes are part of the solution—not part of the problem?”

Jeff MacKie-Mason, University of Michigan (suggested reading: Humans are smart devices, but not programmable; Security when people matter; A Social Mechanism for Supporting Home Computer Security), is an economist: “Security problems are incentive problems.” He discussed motivation, and how to design systems to take motivation into account. Humans are smart devices; they can’t be programmed, but they can be influenced through the sciences of motivational behavior: microeconomics, game theory, social psychology, psychodynamics, and personality psychology. He gave a couple of general examples of how these theories can inform security system design.

Joe Bonneau, Cambridge University, talked about social networks like Facebook, and privacy. People misunderstand why privacy and security are important in social networking sites like Facebook. People underestimate what Facebook really is; it is effectively a reimplementation of the entire Internet. “Everything on the Internet is becoming social,” and that makes security different. Phishing is different, 419-style scams are different. Social context makes some scams easier; social networks are fun, noisy, and unpredictable. “People use social networking systems with their brain turned off.” But social context can be used to spot frauds and anomalies, and can be used to establish trust.

Three more sessions to go. (I am enjoying liveblogging the event. It’s helping me focus and pay closer attention.)

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 9:54 AM

Second SHB Workshop Liveblogging (5)

David Livingstone Smith moderated the fourth session, about (more or less) methodology.

Angela Sasse, University College London (suggested reading: The Compliance Budget: Managing Security Behaviour in Organisations; Human Vulnerabilities in Security Systems), has been working on usable security for over a dozen years. As part of a project called “Trust Economics,” she looked at whether people comply with security policies and why they either do or do not. She found that there is a limit to the amount of effort people will make to comply—this is less actual cost and more perceived cost. Strict and simple policies will be complied with more than permissive but complex policies. Compliance detection, and reward or punishment, also affect compliance. People justify noncompliance by “frequently made excuses.”

Bashar Nuseibeh, Open University (suggested reading: A Multi-Pronged Empirical Approach to Mobile Privacy Investigation; Security Requirements Engineering: A Framework for Representation and Analysis), talked about mobile phone security; specifically, Facebook privacy on mobile phones. He did something clever in his experiments. Because he wasn’t able to interview people at the moment they did something—he worked with mobile users—he asked them to provide a “memory phrase” that allowed him to effectively conduct detailed interviews at a later time. This worked very well, and resulted in all sorts of information about why people made privacy decisions at that earlier time.

James Pita, University of Southern California (suggested reading: Deployed ARMOR Protection: The Application of a Game Theoretic Model for Security at the Los Angeles International Airport), studies security personnel who have to guard a physical location. In his analysis, there are limited resources—guards, cameras, etc.—and a set of locations that need to be guarded. An example would be the Los Angeles airport, where a finite number of K-9 units need to guard eight terminals. His model uses a Stackelberg game to minimize predictability (otherwise, the adversary will learn it and exploit it) while maximizing security. There are complications—observational uncertainty and bounded rationality on the part of the attackers—which he tried to capture in his model.
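To make the Stackelberg framing concrete, here is a rough sketch (a toy illustration with invented payoff numbers, not the ARMOR system, and a zero-sum simplification of the model he described): the defender commits to randomized coverage of the terminals, assumes the attacker will observe that randomization and hit the most attractive target, and chooses coverage probabilities that minimize the attacker’s best expected payoff.

```python
# Toy Stackelberg security game: the defender publishes coverage
# probabilities; the attacker attacks the target with the highest expected
# payoff.  We binary-search for the lowest attacker payoff achievable with
# the available (fractional) resources.  All numbers are invented.

def defender_coverage(uncovered, covered, resources, tol=1e-6):
    """uncovered[i]: attacker's payoff if target i is attacked while unguarded
    covered[i]:   attacker's payoff if target i is attacked while guarded"""
    lo, hi = max(covered), max(uncovered)

    def coverage_needed(v):
        # Guarding probability needed on each target to push the attacker's
        # expected payoff there down to v.
        need = []
        for u, c in zip(uncovered, covered):
            if u <= v:
                need.append(0.0)              # already unattractive enough
            elif c >= v:
                need.append(1.0)              # full coverage is the best we can do
            else:
                need.append((u - v) / (u - c))
        return need

    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(coverage_needed(mid)) <= resources:
            hi = mid                          # feasible: try a lower attacker payoff
        else:
            lo = mid
    return coverage_needed(hi), hi


if __name__ == "__main__":
    # Eight terminals, two canine units (illustrative numbers only).
    uncovered = [10, 8, 8, 6, 5, 5, 4, 2]     # payoff to an unopposed attacker
    covered = [0] * 8                         # payoff if the attacker is caught
    cov, value = defender_coverage(uncovered, covered, resources=2)
    for i, c in enumerate(cov):
        print(f"terminal {i}: guard with probability {c:.2f}")
    print(f"attacker's best expected payoff: {value:.2f}")
```

The real model also has to handle the complications he mentioned, observational uncertainty and bounded rationality, which this simplification ignores.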

Markus Jakobsson, Palo Alto Research Center (suggested reading: Male, late with your credit card payment, and like to speed? You will be phished!; Social Phishing; Love and Authentication; Quantifying the Security of Preference-Based Authentication), pointed out that auto insurers ask people if they smoke in order to get a feeling for whether they engage in high-risk behaviors. In his experiment, he selected 100 people who had been victims of online fraud and 100 people who had not. He then asked them to complete a survey about different physical risks such as mountain climbing and parachute jumping, financial risks such as buying stocks and real estate, and Internet risks such as visiting porn sites and using public wi-fi networks. He found significant correlations between different risks, but I didn’t see an overall pattern emerge. And in the discussion phase, several people had questions about the data. More analysis, and probably more data, is required. To be fair, he was still in the middle of his analysis.

Rachel Greenstadt, Drexel University (suggested reading: Practical Attacks Against Authorship Recognition Techniques (pre-print); Reinterpreting the Disclosure Debate for Web Infections), discussed ways in which humans and machines can collaborate in making security decisions. These decisions are hard for several reasons: because they are context dependent, require specialized knowledge, are dynamic, and require complex risk analysis. And humans and machines are good at different sorts of tasks. Machine-style authentication: This guy I’m standing next to knows Jake’s private key, so he must be Jake. Human-style authentication: This guy I’m standing next to looks like Jake and sounds like Jake, so he must be Jake. The trick is to design systems that get the best of these two authentication styles and not the worst. She described two experiments examining two decisions: should I log into this website (the phishing problem), and should I publish this anonymous essay or will my linguistic style betray me?

Mike Roe, Microsoft, talked about crime in online games, particularly in Second Life and Metaplace. There are four classes of people in online games: explorers, socializers, achievers, and griefers. Griefers try to annoy socializers in social worlds like Second Life, or annoy achievers in competitive worlds like World of Warcraft. Crime is not necessarily economic; criminals trying to steal money are much less of a problem in these games than people just trying to be annoying. In the question session, Dave Clark said that griefers are a constant, but economic fraud grows over time. I responded that the two types of attackers are different people, with different personality profiles. I also pointed out that there is another kind of attacker: achievers who use illegal mechanisms to assist themselves.

In the discussion, Peter Neumann pointed out that safety is an emergent property, and requires security, reliability, and survivability. Others weren’t so sure.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Conference dinner tonight at Legal Seafoods. And four more sessions tomorrow.

Posted on June 11, 2009 at 4:50 PM

Second SHB Workshop Liveblogging (4)

Session three was titled “Usability.” (For the record, the Stata Center is one ugly building.)

Andrew Patrick, NRC Canada until he was laid off four days ago (suggested reading: Fingerprint Concerns: Performance, Usability, and Acceptance of Fingerprint Biometric Systems), talked about biometric systems and human behavior. Biometrics are used everywhere: for gym membership, at Disneyworld, at international borders. The government of Canada is evaluating iris recognition at a distance for events like the 2010 Olympics. There are two different usability issues: with respect to the end user, and with respect to the authenticator. People’s acceptance of biometrics is very much dependent on the context. And of course, biometrics are not secret. Patrick suggested that to defend ourselves against this proliferation of biometrics for authentication, individuals should publish their own biometrics. The rationale is that we’re publishing them anyway, so we might as well do it knowingly.

Luke Church, Cambridge University (suggested reading: SHB Position Paper; Usability and the Common Criteria), talked about what he called “user-centered design.” There’s an economy of usability: “in order to make some things easier, we have to make some things harder”—so it makes sense to make the commonly done things easier at the expense of the rarely done things. This has a lot of parallels with security. The result is “appliancisation” (with a prize for anyone who comes up with a better name): the culmination of security behaviors and of what the system can do, embedded in a series of user choices. Basically, giving users meaningful control over their security. Luke discussed several benefits and problems with the approach.

Diana Smetters, Palo Alto Research Center (suggested reading: Breaking out of the browser to defend against phishing attacks; Building secure mashups; Ad-hoc guesting: when exceptions are the rule), started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs. (For example, phishing is a mismatch problem between what’s in the user’s head and where the URL is actually going.) SSL doesn’t work, but how should websites authenticate themselves to users? Her solution is protected links: a set of secure bookmarks in protected browsers. She went on to describe a prototype and tests run with user subjects.
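Here’s a toy sketch of the protected-links idea as described (not PARC’s prototype; the bookmark names and URLs are invented): sensitive sites are reached only through a pre-registered bookmark store, so the user never has to judge a URL that arrives in an e-mail.

```python
# Toy "protected links" store: sensitive destinations are reached by name,
# through bookmarks registered ahead of time, never by a pasted or clicked
# URL.  Names and URLs are invented for illustration.

PROTECTED_BOOKMARKS = {
    "bank": "https://www.examplebank.com/login",
    "webmail": "https://mail.example.com/",
}

def open_sensitive_site(name):
    """Return the destination for a named bookmark; refuse to guess otherwise."""
    try:
        return PROTECTED_BOOKMARKS[name.lower()]
    except KeyError:
        raise ValueError(
            f"No protected bookmark named {name!r}; refusing to guess a URL."
        ) from None

if __name__ == "__main__":
    print(open_sensitive_site("bank"))   # the only way this sketch reaches the bank
```

The design point is that the user’s intent (“my bank”) is mapped to a destination by the browser, not by whatever link happens to show up in a message.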

Jon Callas, PGP Corporation (suggested reading: Improving Message Security with a Self-Assembling PKI), used the metaphor of the “security cliff”: you have to keep climbing until you get to the top and that’s hard, so it’s easier to just stay at the bottom. He wants more of a “security ramp,” so people can reasonably stop somewhere in the middle. His idea is to have a few policies—e-mail encryption, rules about USB drives—and enforce them. This works well in organizations, where IT has dictatorial control over user configuration. If we can’t teach users much, we need to enforce policies on users.

Rob Reeder, Microsoft (suggested reading: Expanding Grids for Visualizing and Authoring Computer Security Policies), presented a possible solution to the secret questions problem: social authentication. The idea is to use people you know (trustees) to authenticate who you are, and have them attest to the fact that you lost your password. He went on to describe how the protocol works, as well as several potential attacks against the protocol and defenses, and experiments that tested the protocol. In the question session he talked about people designating themselves as trustees, and how that isn’t really a problem.
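The flavor of the scheme is easy to sketch. Here’s a toy k-of-n version (an illustration only, not the protocol Reeder presented; real designs also have to worry about trustee collusion, social engineering of trustees, and how the codes are delivered):

```python
# Toy trustee-based account recovery: the account holder designates n
# trustees in advance; to recover, the locked-out user must collect valid
# one-time codes from at least k of them.

import secrets

class TrusteeRecovery:
    def __init__(self, trustees, threshold):
        self.trustees = set(trustees)     # people the account holder designated
        self.threshold = threshold        # how many must vouch (k of n)
        self.pending = {}                 # trustee -> one-time code

    def start_recovery(self):
        """Issue a fresh one-time code per trustee (delivered out of band in practice)."""
        self.pending = {t: secrets.token_hex(4) for t in self.trustees}
        return self.pending               # in reality, sent to trustees, not the user

    def attempt_recovery(self, collected):
        """collected: {trustee: code} gathered by the locked-out user."""
        valid = sum(
            1 for t, code in collected.items()
            if self.pending.get(t) == code
        )
        return valid >= self.threshold


if __name__ == "__main__":
    recovery = TrusteeRecovery(["alice", "bob", "carol", "dave"], threshold=3)
    codes = recovery.start_recovery()
    # The locked-out user phones three of the four trustees and reads back codes.
    print(recovery.attempt_recovery({t: codes[t] for t in ["alice", "bob", "carol"]}))
```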

Lorrie Cranor, Carnegie Mellon University (suggested reading: A Framework for Reasoning about the Human in the Loop; Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators; School of Phish: A Real-World Evaluation of Anti-Phishing Training; You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings), talked about security warnings. The best option is to fix the hazard; the second best is to guard against it—but far too often we just warn people about it. And since the hazards being warned about are generally not very hazardous, most people just ignore the warnings. “Often, software asks the user and provides little or no information to help user make this decision.” Better is to use some sort of automated analysis to assist the user in responding to warnings. For websites, for example, the system should block sites with a high probability of danger, not bother users if there is a low probability of danger, and help the user make the decision in the grey area. She went on to describe a prototype and user studies done with the prototype; her paper will be presented at USENIX Security in August.
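That triage is simple to write down; the hard part is the automated analysis that produces the danger estimate. A minimal sketch (the thresholds are invented, purely for illustration):

```python
# Toy warning triage: act automatically on the clear cases, and only bring
# the user into the loop for the genuinely ambiguous ones.

def handle_site(danger_score, block_above=0.9, allow_below=0.1):
    """danger_score: estimated probability (0..1) that the site is malicious."""
    if danger_score >= block_above:
        return "block"              # high confidence: don't ask, just stop it
    if danger_score <= allow_below:
        return "allow"              # low risk: don't interrupt the user
    return "warn-with-context"      # grey area: give the user a real decision aid

for score in (0.95, 0.5, 0.02):
    print(score, "->", handle_site(score))
```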

Much of the discussion centered on how bad the problem really is, and how much security is good enough. The group also talked about economic incentives companies have to either fix or ignore security problems, and whether market approaches (or, as Jean Camp called it, “the happy Libertarian market pony”) are sufficient. Some companies have incentives to convince users to do the wrong thing, or at the very least to do nothing. For example, social networking sites are more valuable if people share their information widely.

Further discussion was about whitelisting, and whether it worked or not. There’s the problem of the bad guys getting on the whitelist, and the risk that organizations like the RIAA will use the whitelist to enforce copyright, or that large banks will use the whitelist as a tool to block smaller start-up banks. Another problem is that the user might not understand what a whitelist signifies.

Dave Clark from the audience: “It’s not hard to put a seat belt on, and if you need a lesson, take a plane.”

Kind of a one-note session. We definitely need to invite more psych people.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 11, 2009 at 2:56 PM

Second SHB Workshop Liveblogging (3)

The second session was about fraud. (These session subjects are only general. We tried to stick related people together, but there was the occasional oddball—and scheduling constraint—to deal with.)

Julie Downs, Carnegie Mellon University (suggested reading: Behavioral Response to Phishing Risk; Parents’ vaccination comprehension and decisions; The Psychology of Food Consumption), is a psychologist who studies how people make decisions, and talked about phishing. To determine how people respond to phishing attempts—what e-mails they open and when they click on links—she watched as people interacted with their e-mail. She found that most people’s strategies to deal with phishing attacks might have been effective 5-10 years ago, but are no longer sufficient now that phishers have adapted. She also found that educating people about phishing didn’t make them more effective at spotting phishing attempts, but made them more likely to be afraid of doing anything online. She found this same overreaction among people who were recently the victims of phishing attacks, but again people were no better at separating real e-mail from phishing attempts. What does make a difference is contextual understanding: how to parse a URL, how and why the scams happen, what SSL does and doesn’t do.
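On the URL-parsing point: the contextual knowledge that matters is that the hostname, not the familiar words that appear somewhere in the link, determines which site you actually reach. A small illustration (the URLs are invented):

```python
# Extract the part of a link that actually identifies the server.  A phisher's
# favorite trick is to put the trusted name in a subdomain or in the path,
# where it has no bearing on the destination.

from urllib.parse import urlparse

links = [
    "https://www.examplebank.com/login",                      # really the bank
    "https://www.examplebank.com.security-check.biz/login",   # impersonation via subdomain
    "https://login.example.net/examplebank.com/",             # bank name only in the path
]

for link in links:
    host = urlparse(link).hostname
    print(f"{link}\n  actual host: {host}\n")
```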

Jean Camp, Indiana University (suggested reading: Experimental Evaluation of Expert and Non-expert Computer Users’ Mental Models of Security Risks), studies people taking risks online. Four points: 1) “people create mental models from internal narratives about risk,” 2) “risk mitigating action is taken only if the risk is perceived as relevant,” 3) “contextualizing risk can show risks as relevant,” and 4) “narrative can increase desire and capacity to use security tools.” Stories matter: “people are willing to wash out their cat food cans and sweep up their sweet gum balls to be a good neighbor, but allow their computers to join zombie networks” because there’s a good story in the former and none in the latter. She presented two experiments to demonstrate this. One was a video experiment watching business majors try to install PGP. No one was successful: there was no narrative, and the mixed metaphor of physical and cryptographic “key” confused people.

Matt Blaze, University of Pennsylvania (his blog), talked about electronic voting machines and fraud. He related this anecdote about actual electronic voting machine vote fraud in Kentucky. In the question session, he speculated about the difficulty of having a security model that would have captured the problem, and how to know whether that model was complete enough.

Jeffrey Friedberg, Microsoft (suggested reading: Internet Fraud Battlefield; End to End Trust and the Trust User Experience; Testimony on “spyware”), discussed research at Microsoft around the Trust User Experience (TUX). He talked about the difficulty of verifying SSL certificates. Then he talked about how Microsoft added a “green bar” to signify trusted sites, and how people who learned to trust the green bar were fooled by “picture-in-picture attacks,” where a hostile site embedded a green-bar browser window in its page. Most people don’t understand that the information inside the browser window is arbitrary, but that the stuff around it is not. The user interface, the user experience, and mental models all matter. Designing and evaluating TUX is hard. From the questions: training doesn’t help much, because given a plausible story, people will do things counter to their training.

Stuart Schechter, Microsoft, presented this research on secret questions. Basically, secret questions don’t work. They’re easily guessable based on the most common answers; friends and relatives of people can easily predict unique answers; and people forget their answers. Even worse, the more memorable the question/answers are, the easier they are to guess. Having people write their own questions is no better: “What’s my blood type?” “How tall am I?”

Tyler Moore, Harvard University (suggested reading: The Consequences of Non-Cooperation in the Fight against Phishing; Information Security Economics—and Beyond), discussed his empirical studies on online crime and defense. Fraudsters are good at duping users, but they’re also effective at exploiting failures among IT professionals to perpetuate the infrastructure necessary to carry out these exploits on a large scale (hosting fake web pages, sending spam, laundering the profits via money mules, and so on). There is widespread refusal among the defenders to cooperate with each other, and attackers exploit these limitations. We are better at removing phishing websites than we are at defending against the money mules. Defenders tend to fix immediate problems, but not underlying problems.

In the discussion phase, there was a lot of talk about the relationships between websites, like banks, and users—and how that affects security for both good and bad. Jean Camp doesn’t want a relationship with her bank, because that unduly invests her in the bank. (Someone from the audience pointed out that, as a U.S. taxpayer, she is already invested in her bank.) Angela Sasse said that the correct metaphor is “rules of engagement,” rather than relationships.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments.

Matt Blaze is taping the sessions—except for the couple of presenters who would rather not be taped. I’ll post his links as soon as the files are online.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 11:42 AM

Second SHB Workshop Liveblogging (2)

The first session was about deception, moderated by David Clark.

Frank Stajano, Cambridge University (suggested reading: Understanding victims: Six principles for systems security), presented research with Paul Wilson, who films actual scams for “The Real Hustle.” His point is that we build security systems based on our “logic,” but users don’t always follow our logic. It’s fraudsters who really understand what people do, so we need to understand what the fraudsters understand. Things like distraction, greed, unknown accomplices, and social compliance are important.

David Livingstone Smith, University of New England (suggested reading: Less than human: self-deception in the imagining of others; Talk on Lying at La Ciudad de Las Ideas; a subsequent discussion; Why War?), is a philosopher by training, and goes back to basics: “What are we talking about?” A theoretical definition of deception—“that which something has to have to fall under a term”—is difficult to construct. “Cause to have a false belief,” from the Oxford English Dictionary, is inadequate. “To deceive is to intentionally cause someone to have a false belief” also doesn’t work. “Intentionally causing someone to have a false belief that the speaker knows to be false” still isn’t good enough. The fundamental problem is that these are anthropocentric definitions. Deception is not unique to humans; it gives organisms an evolutionary edge. For example, the mirror orchid fools a wasp into landing on it by looking like a female wasp and giving off chemicals that mimic one. This example shows that we need a broader definition of “purpose.” His formal definition: “For systems A and B, A deceives B iff A possesses some character C with proper function F, and B possesses a mechanism C* with the proper function F* of producing representations, such that the proper function of C is to cause C* to fail to perform F* by causing C* to form false representations, and C does so in virtue of performing F, and B’s falsely representing enables some feature of A to perform its proper function.”

I spoke next, about the psychology of Conficker, how the human brain buys security, and why science fiction writers shouldn’t be hired to think about terrorism risks (to be published on Wired.com next week).

Dominic Johnson, University of Edinburgh (suggested reading: Paradigm Shifts in Security Strategy; Perceptions of victory and defeat), talked about his chapter in the book Natural Security: A Darwinian Approach to a Dangerous World. Life has 3.5 billion years of experience in security innovation; let’s look at how biology approaches security. Biomimicry, ecology, paleontology, animal behavior, evolutionary psychology, immunology, epidemiology, selection, and adaptation are all relevant. Redundancy is a very important survival tool for species. Here’s an adaptation example: The 9/11 threat was real and we knew about it, but we didn’t do anything. His thesis: Adaptation to novel security threats tends to occur after major disasters. There are many historical examples of this; Pearl Harbor, for example. Causes include sensory biases, psychological biases, leadership biases, organizational biases, and political biases—all pushing us towards maintaining the status quo. So it’s natural for us to adapt poorly to security threats in the modern world. A questioner from the audience asked whether control theory had any relevance to this model.

Jeff Hancock, Cornell University (suggested reading: On Lying and Being Lied To: A Linguistic Analysis of Deception in Computer-Mediated Communication; Separating Fact From Fiction: An Examination of Deceptive Self-Presentation in Online Dating Profiles), studies interpersonal deception: how the way we lie to each other intersects with communications technologies, how technologies change the way we lie, and whether technology can be used to detect lying. Despite new technology, people lie for traditional reasons. For example: on dating sites, men tend to lie about their height and women tend to lie about their weight. The recordability of the Internet also changes how we lie. The use of the first person singular tends to go down the more people lie. He verified this in many spheres, such as how people describe themselves in chat rooms, and true versus false statements that the Bush administration made about 9/11 and Iraq. The effect was more pronounced when administration officials were answering questions than when they were reading prepared remarks.
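The first-person-singular cue is easy to compute, which is part of its appeal. A toy version (the sample sentences are invented, and the finding is a statistical tendency across many texts, not a per-sentence lie detector):

```python
# Rate of first-person-singular pronouns in a text, the kind of simple
# linguistic feature used in deception studies.

import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text):
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FIRST_PERSON_SINGULAR)
    return hits / len(words)

truthful = "I went to the gym this morning and then I met my sister for lunch."
evasive = "The gym was visited this morning, and lunch happened downtown later."

print(f"truthful sample: {first_person_rate(truthful):.2f}")
print(f"evasive sample:  {first_person_rate(evasive):.2f}")
```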

EDITED TO ADD (6/11): Adam Shostack liveblogged this session, too. And Ross’s liveblogging is in his blog post’s comments.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 9:37 AM

Here Comes Everybody Review

In 1937, Ronald Coase answered one of the most perplexing questions in economics: if markets are so great, why do organizations exist? Why don’t people just buy and sell their own services in a market instead? Coase, who won the 1991 Nobel Prize in Economics, answered the question by noting a market’s transaction costs: buyers and sellers need to find one another, then reach agreement, and so on. The Coase theorem implies that if these transaction costs are low enough, direct markets of individuals make a whole lot of sense. But if they are too high, it makes more sense to get the job done by an organization that hires people.

Economists have long understood the corollary concept of Coase’s ceiling, a point above which organizations collapse under their own weight—where hiring someone, however competent, means more work for everyone else than the new hire contributes. Software projects often bump their heads against Coase’s ceiling: recall Frederick P. Brooks Jr.’s seminal study, The Mythical Man-Month (Addison-Wesley, 1975), which showed how adding another person onto a project can slow progress and increase errors.

What’s new is something consultant and social technologist Clay Shirky calls "Coase’s Floor," below which we find projects and activities that aren’t worth their organizational costs—things so esoteric, so frivolous, so nonsensical, or just so thoroughly unimportant that no organization, large or small, would ever bother with them. Things that you shake your head at when you see them and think, "That’s ridiculous."

Sounds a lot like the Internet, doesn’t it? And that’s precisely Shirky’s point. His new book, Here Comes Everybody: The Power of Organizing Without Organizations, explores a world where organizational costs are close to zero and where ad hoc, loosely connected groups of unpaid amateurs can create an encyclopedia larger than the Britannica and a computer operating system to challenge Microsoft’s.

Shirky teaches at New York University’s Interactive Telecommunications Program, but this is no academic book. Sacrificing rigor for readability, Here Comes Everybody is an entertaining as well as informative romp through some of the Internet’s signal moments—the Howard Dean phenomenon, Belarusian protests organized on LiveJournal, the lost cellphone of a woman named Ivanna, Meetup.com, flash mobs, Twitter, and more—which Shirky uses to illustrate his points.

The book is filled with bits of insight and common sense, explaining why young people take better advantage of social tools, how the Internet affects social change, and how most Internet discourse falls somewhere between dinnertime conversation and publishing.

Shirky notes that "most user-generated content isn’t ‘content’ at all, in the sense of being created for general consumption, any more than a phone call between you and a sibling is ‘family-generated content.’ Most of what gets created on any given day is just the ordinary stuff of life—gossip, little updates, thinking out loud—but now it’s done in the same medium as professionally produced material. Unlike professionally produced material, however, Internet content can be organized after the fact."

No one coordinates Flickr’s 6 million to 8 million users. Yet Flickr had the first photos from the 2005 London Transport bombings, beating the traditional news media. Why? People with cellphone cameras uploaded their photos to Flickr. They coordinated themselves using tools that Flickr provides. This is the sort of impromptu organization the Internet is ideally suited for. Shirky explains how these moments are harbingers of a future that can self-organize without formal hierarchies.

These nonorganizations allow for contributions from a wider group of people. A newspaper has to pay someone to take photos; it can’t be bothered to hire someone to stand around London underground stations waiting for a major event. Similarly, Microsoft has to pay a programmer full time, and Encyclopedia Britannica has to pay someone to write articles. But Flickr can make use of a person with just one photo to contribute, Linux can harness the work of a programmer with little time, and Wikipedia benefits if someone corrects just a single typo. These aggregations of millions of actions that were previously below the Coasean floor have enormous potential.

But a flash mob is still a mob. In a world where the Coasean floor is at ground level, all sorts of organizations appear, including ones you might not like: violent political organizations, hate groups, Holocaust deniers, and so on. (Shirky’s discussion of teen anorexia support groups makes for very disturbing reading.) This has considerable implications for security, both online and off.

We never realized how much our security could be attributed to distance and inconvenience—how difficult it is to recruit, organize, coordinate, and communicate without formal organizations. That inadvertent measure of security is now gone. Bad guys, from hacker groups to terrorist groups, will use the same ad hoc organizational technologies that the rest of us do. And while there has been some success in closing down individual Web pages, discussion groups, and blogs, these are just stopgap measures.

In the end, a virtual community is still a community, and it needs to be treated as such. And just as the best way to keep a neighborhood safe is for a policeman to walk around it, the best way to keep a virtual community safe is to have a virtual police presence.

Crime isn’t the only danger; there is also isolation. If people can segregate themselves into increasingly specialized groups, then they’re less likely to be exposed to alternative ideas. We see a mild form of this in the current political trend of rival political parties having their own news sources, their own narratives, and their own facts. Increased radicalization is another danger lurking below the Coasean floor.

There’s no going back, though. We’ve all figured out that the Internet makes freedom of speech a much harder right to take away. As Shirky demonstrates, Web 2.0 is having the same effect on freedom of assembly. The consequences of this won’t be fully seen for years.

Here Comes Everybody covers some of the same ground as Yochai Benkler’s Wealth of Networks. But when I had to explain to one of my corporate attorneys how the Internet has changed the nature of public discourse, Shirky’s book is the one I recommended.

This essay previously appeared in IEEE Spectrum.

EDITED TO ADD (12/13): Interesting Clay Shirky podcast.

Posted on November 25, 2008 at 7:39 AM
