Entries Tagged "audio"

Liars and Outliers Update

Liars and Outliers is available. Amazon and Barnes & Noble have been shipping the book since the beginning of the month. Both the Kindle and the Nook versions are available for download. I have received 250 books myself. Everyone who read and commented on a draft will get a copy in the mail. And as of today, I have shipped books to everyone who ordered a signed copy.

I’ve seen five more reviews. And there’s one print interview and one audio interview about the book (the audio interview also has a transcript).

A bunch of people on Twitter have announced that they’re enjoying the book. Right now, there are only three reviews on Amazon, so please leave one if you’ve read the book. (I’ll write about the problem of fake reviews on these sorts of sites in another post.)

I’m not sure, but I think the Kindle price is going to increase. So if you want the book at the current $10 price, now is the time to buy it.

Posted on February 13, 2012 at 2:53 PM

Fourth SHB Workshop

I’m at SHB 2011, the fourth Interdisciplinary Workshop on Security and Human Behavior, at Carnegie Mellon University. This is a two-day invitational gathering of computer security researchers, psychologists, behavioral economists, sociologists, political scientists, anthropologists, philosophers, and others—all of whom are studying the human side of security—organized by Alessandro Acquisti, Ross Anderson, and me. It’s not just an interdisciplinary conference; most of the people here are individually interdisciplinary. For the past four years, this has been the most intellectually stimulating conference I have attended.

Here is the program. The list of attendees contains links to readings from each of them—definitely a good place to browse for more information on this topic.

Ross Anderson is liveblogging this event. Matt Blaze is taping the sessions; I’ll link to them if he puts them up on the Internet.

Here are links to my posts on the first, second, and third SHB workshops. Follow those links to find summaries, papers, and audio recordings of the workshops.

Posted on June 18, 2011 at 1:06 PM

Reading Me

The number of different ways to read my essays, commentaries, and links has grown recently. Here’s the rundown:

I think that about covers it for useful distribution formats right now.

EDITED TO ADD (6/20): One more; there’s a Crypto-Gram podcast.

Posted on June 15, 2010 at 1:05 PM

Second SHB Workshop Liveblogging (9)

The eighth and final session of SHB09 was optimistically titled “How Do We Fix the World?” I moderated, which meant that my liveblogging was spottier than usual, especially during the discussion.

David Mandel, Defense Research and Development Canada (suggested reading: Applied Behavioral Science in Support of Intelligence Analysis; Radicalization: What Does It Mean?; The Role of Instigators in Radicalization to Violent Extremism), is part of the Thinking, Risk, and Intelligence Group at DRDC Toronto. His first observation: “Be wary of purported world-fixers.” His second observation: when you claim that something is broken, it is important to specify the respects in which it’s broken and what “fixed” would look like. His third observation: it is also important to analyze the consequences of any potential fix. An analysis of the way things are is perceptually based, but an analysis of the way things should be is value-based. He also presented data showing that predictions made by intelligence analysts (at least in one Canadian organization) were pretty good.

Ross Anderson, Cambridge University (suggested reading: Database State; book chapters on psychology and terror), asked “Where’s the equilibrium?” Both privacy and security are moving targets, but he expects that someday soon there will be a societal equilibrium. Incentives to price discriminate go up, and the cost to do so goes down. He gave several examples of database systems that reached very different equilibrium points, depending on corporate lobbying, political realities, public outrage, etc. He believes that privacy will be regulated, the only question being when and how. “Where will the privacy boundary end up, and why? How can we nudge it one way or another?”

Alma Whitten, Google (suggested reading: Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0), presented a set of ideals about privacy (very European in flavor) and some of the engineering challenges they present. “Engineering challenge #1: How to support access and control to personal data that isn’t authenticated? Engineering challenge #2: How to inform users about both authenticated and unauthenticated data? Engineering challenge #3: How to balance giving users control over data collection versus detecting and stopping abuse? Engineering challenge #4: How to give users fine-grained control over their data without overwhelming them with options? Engineering challenge #5: How to link sequential actions while preventing them from being linkable to a person? Engineering challenge #6: How to make the benefits of aggregate data analysis apparent to users? Engineering challenge #7: How to avoid or detect inadvertent recording of data that can be linked to an individual?” (Note that Alma requested not to be recorded.)

John Mueller, Ohio State University (suggested reading: Reacting to Terrorism: Probabilities, Consequences, and the Persistence of Fear; Evaluating Measures to Protect the Homeland from Terrorism; Terrorphobia: Our False Sense of Insecurity), talked about terrorism and the Department of Homeland Security. Terrorism isn’t a threat; it’s a problem and a concern, certainly, but the word “threat” is too extreme. Al Qaeda isn’t a threat, even though it’s the most serious potential attacker against the U.S. and Western Europe. And terrorists are overwhelmingly stupid. Meanwhile, the terrorism issue “has become a self-licking ice cream cone.” In other words, it’s now an ever-perpetuating government bureaucracy. There are a virtually infinite number of targets; the odds of any one target being attacked are effectively zero; terrorists pick targets largely at random; if you protect one target, you make other targets less safe; most targets are vulnerable in the physical sense, but invulnerable in the sense that they can be rebuilt relatively cheaply (even something like the Pentagon); some targets simply can’t be protected; and if you’re going to protect some targets, you need to determine whether they should really be protected. (I recommend his book, Overblown.)

Adam Shostack, Microsoft (his blog), pointed out that even figuring out which part of the problem to work on first is difficult. One of the issues is shame: we don’t want to talk about what’s wrong, so we can’t use that information to decide where to go. We make excuses—customers will flee, people will sue, stock prices will go down—even though those excuses have been demonstrated to be false.

During the discussion, there was a lot of talk about the choice between informing users and bombarding them with information they can’t understand. And lots more that I couldn’t transcribe.

And that’s it. SHB09 was a fantastic workshop, filled with interesting people and interesting discussion. Next year in the other Cambridge.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 4:55 PM

Second SHB Workshop Liveblogging (8)

The penultimate session of the conference was “Privacy,” moderated by Tyler Moore.

Alessandro Acquisti, Carnegie Mellon University (suggested reading: What Can Behavioral Economics Teach Us About Privacy?; Privacy in Electronic Commerce and the Economics of Immediate Gratification), presented research on how people value their privacy. He started by listing a variety of cognitive biases that affect privacy decisions: illusion of control, overconfidence, optimism bias, endowment effect, and so on. He discussed two experiments. The first demonstrated a “herding effect”: if a subject believes that others reveal sensitive behavior, the subject is more likely to also reveal sensitive behavior. The second examined the “frog effect”: do privacy intrusions alert or desensitize people to revealing personal information? What he found is that people tend to set their privacy level at the beginning of a survey, and don’t respond well to being asked easy questions at first and then sensitive questions at the end. In the discussion, Joe Bonneau asked him about the notion that people’s privacy protections tend to ratchet up over time; he didn’t have conclusive evidence, but gave several possible explanations for the phenomenon.

Adam Joinson, University of Bath (suggested reading: Privacy, Trust and Self-Disclosure Online; Privacy concerns and privacy actions), also studies how people value their privacy. He talked about expressive privacy—privacy that allows people to express themselves and form interpersonal relationships. His research showed that differences between how people use Facebook in different countries depend on how much people trust Facebook as a company, rather than how much people trust other Facebook users. Another study looked at posts from Secret Tweet and Twitter. He found 16 markers that allowed him to automatically determine which tweets contain sensitive personal information and which do not, with high probability. Then he tried to determine if people with large Twitter followings post fewer secrets than people who are only twittering to a few people. He found absolutely no difference.
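
To make the marker approach concrete, here is a minimal sketch of how such a detector might work. The phrases and threshold below are invented for illustration; the talk didn’t enumerate the actual sixteen markers.

```python
# Hypothetical marker-based detector in the spirit of Joinson's study.
# The phrases below are illustrative stand-ins, not his real markers.
SENSITIVE_MARKERS = [
    "i've never told", "don't tell anyone", "my secret",
    "i'm so ashamed", "confession time",
]

def looks_sensitive(tweet: str, threshold: int = 1) -> bool:
    """Flag a tweet as likely containing sensitive personal
    information if it matches at least `threshold` markers."""
    text = tweet.lower()
    hits = sum(marker in text for marker in SENSITIVE_MARKERS)
    return hits >= threshold

print(looks_sensitive("Confession time: i've never told anyone this."))  # True
print(looks_sensitive("Great coffee this morning."))                     # False
```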

Peter Neumann, SRI (suggested reading: Holistic systems; Risks; Identity and Trust in Context), talked about lack of medical privacy (too many people have access to your data), about voting (the privacy problem makes the voting problem a lot harder, and the end-to-end voting security/privacy problem is much harder than just securing voting machines), and privacy in China (the government is requiring all computers sold in China to be sold with software allowing them to eavesdrop on the users). Any would-be solution needs to reflect the ubiquity of the threat. When we design systems, we need to anticipate what the privacy problems will be. Privacy problems are everywhere you look, and ordinary people have no idea of the depth of the problem.

Eric Johnson, Dartmouth College (suggested reading: Access Flexibility with Escalation and Audit; Security through Information Risk Management), studies the information access problem from a business perspective. He’s been doing field studies in companies like retail banks and investment banks, and found that role-based access control fails because companies can’t determine who has what role. Even worse, roles change quickly, especially in large complex organizations. For example, one business group of 3,000 people experienced 1,000 role changes within three months. The result is that organizations do access control badly, either over-entitling or under-entitling people. But since getting the job done is the most important thing, organizations tend to over-entitle: give people more access than they need. His current work is to find the right set of incentives and controls to set access more appropriately. The challenge is to do this without making people risk-averse. In the discussion, he agreed that a perfect access control system is not possible, and that organizations should probably allow a certain number of access control violations—similar to the idea of posting a 55 mph speed limit but not ticketing people unless they go over 70 mph.
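
His speed-limit analogy suggests an “allow, audit, and escalate” design rather than hard denial. Here is a minimal sketch of that idea, with hypothetical roles and a made-up tolerance threshold; it illustrates the analogy, not Johnson’s actual system.

```python
# Sketch: grant out-of-role requests so work gets done, but log them
# and flag users who exceed a tolerance -- the 55 mph speed limit
# with tickets only above 70 mph.
from collections import defaultdict

ROLE_ENTITLEMENTS = {          # hypothetical roles and resources
    "teller": {"accounts"},
    "analyst": {"accounts", "trades"},
}
TOLERANCE = 3                  # out-of-role accesses allowed before review

audit_log = []
violations = defaultdict(int)

def request_access(user: str, role: str, resource: str) -> bool:
    if resource in ROLE_ENTITLEMENTS.get(role, set()):
        return True            # within role: normal access
    violations[user] += 1      # outside role: grant it, but record it
    audit_log.append((user, role, resource))
    if violations[user] > TOLERANCE:
        print(f"flagging {user} for access review")  # the "ticket"
    return True
```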

Christine Jolls, Yale Law School (suggested reading: Rationality and Consent in Privacy Law; Employee Privacy), made the point that people regularly share their most private information with their intimates—so privacy is not about secrecy, it’s more about control. There are moments when people make pretty big privacy decisions. For example, they grant employers the rights to monitor their e-mail, or to test their urine without notice. In general, courts hold that blanket signing away of privacy rights—”you can test my urine on any day in the future”—is not valid, but immediate signing away of privacy rights—”you can test my urine today”—is. Jolls believes that this is reasonable for several reasons, such as optimism bias and an overfocus on the present at the expense of the future. Without realizing it, the courts have implemented the system that behavioral economics would find optimal. During the discussion, she talked about how coercion figures into this; the U.S. legal system tends not to be concerned with it.

Andrew Adams, University of Reading (suggested reading: Regulating CCTV), also looks at attitudes toward privacy on social networking services. His results are preliminary, based on interviews with university students in Canada, Japan, and the UK, and are very concordant with what danah boyd and Joe Bonneau said earlier. From the UK: People join social networking sites to increase their level of interaction with people they already know in real life. Revealing personal information is okay, but revealing too much is bad. Even more interestingly, it’s not okay to reveal more about others than they reveal themselves. From Japan: People are more open to making friends online. There’s more anonymity. It’s not okay to reveal information about others, but “the fault of this lies as much with the person whose data was revealed in not choosing friends wisely.” This victim responsibility is a common theme with other privacy and security elements in Japan. Data from Canada is still being compiled.

Great phrase: the “laundry belt”—close enough for students to go home on weekends with their laundry, but far enough away so they don’t feel as if their parents are looking over their shoulder—typically two hours by public transportation (in the UK).

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 3:01 PM

Second SHB Workshop Liveblogging (7)

Session Six—”Terror”—chaired by Stuart Schechter.

Bill Burns, Decision Research (suggested reading: The Diffusion of Fear: Modeling Community Response to a Terrorist Strike), studies social reaction to risk. He discussed his theoretical model of how people react to fear events, along with data from the 9/11 attacks, the 7/7 bombings in the UK, and the 2008 financial collapse. Basically, we can’t remain fearful: no matter what happens, fear spikes immediately after an event and recovers over the following 45 or so days. He believes that the greatest mistake we made after 9/11 was labeling the event as terrorism instead of an international crime.
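
As a toy illustration of that pattern (my own sketch, not Burns’s actual model), the fear response can be drawn as a spike that decays back toward baseline over roughly 45 days:

```python
import math

def fear_level(day: float, baseline: float = 1.0, spike: float = 9.0,
               recovery_days: float = 45.0) -> float:
    """Fear at `day` days after an event: baseline plus an exponentially
    decaying spike that is ~95% gone by recovery_days."""
    rate = 3.0 / recovery_days   # exp(-3) ~ 0.05 at recovery_days
    return baseline + spike * math.exp(-rate * day)

for day in (0, 15, 45, 90):
    print(f"day {day:3d}: fear {fear_level(day):.2f}")
```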

Chris Cocking, London Metropolitan University (suggested reading: Effects of Social Identity on Responses to Emergency Mass Evacuation), looks at the group behavior of people responding to emergencies. Traditionally, most emergency planning is based on the panic model: people in crowds are prone to irrational behavior and panic. There’s also a social attachment model that predicts that social norms don’t break down in groups. He prefers a self-categorization approach: disasters create a common identity, which results in orderly and altruistic behavior among strangers. The greater the threat, the greater the common identity, and spontaneous resilience can occur. He displayed a photograph of “panic” in New York on 9/11 and showed how it wasn’t panic at all. Panic seems to be more a myth than a reality. This has policy implications during an event: provide people with information, and they are more likely to underreact than overreact. If there is overreaction, it’s because people are acting as individuals rather than as groups, so those in authority should encourage a sense of collective identity. “Crowds can be part of the solution rather than part of the problem.”

Richard John, University of Southern California (suggested reading: Decision Analysis by Proxy for the Rational Terrorist), talked about the process of social amplification of risk (with respect to terrorism). Events result in relatively small losses; it’s the changes in behavior following an event that result in much greater losses. There’s a dynamic of risk perception, and it’s very contextual. He uses vignettes to study how risk perception changes over time, and discussed some of the studies he’s conducting and ideas for future studies.

Mark Stewart, University of Newcastle, Australia (suggested reading: A Risk and Cost-Benefit Assessment of United States Aviation Security Measures; Risk and Cost-Benefit Assessment of Counter-Terrorism Protective Measures to Infrastructure), examines infrastructure security and whether the costs exceed the benefits. He talked about the cost/benefit trade-off and how to apply probabilistic terrorism risk assessment, and then tried to apply this model to the U.S. Federal Air Marshal Service. His result: they’re not worth it. You can quibble with his data, but the real value is the transparent process. During the discussion, I said that it’s important to realize that risks can’t be considered in isolation: anyone making a security trade-off is balancing several risks—terrorism risks, political risks, the personal risks to his career, etc.
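
The underlying arithmetic is simple, which is part of why the process is transparent. Here’s a minimal sketch with placeholder numbers (not Stewart’s data): a measure is cost-effective only if the expected losses it averts exceed its cost.

```python
def expected_benefit(p_attack: float, losses: float, risk_reduction: float) -> float:
    """Expected annual losses averted: chance of an otherwise-successful
    attack, times the losses it would cause, times the fraction of that
    risk the measure eliminates."""
    return p_attack * losses * risk_reduction

# Placeholder inputs for illustration only.
annual_cost = 1_000_000_000      # what the measure costs per year
p_attack = 0.1                   # annual chance of an otherwise-successful attack
losses = 5_000_000_000           # losses from such an attack
risk_reduction = 0.1             # fraction of the risk the measure removes

benefit = expected_benefit(p_attack, losses, risk_reduction)
verdict = "worth it" if benefit >= annual_cost else "not worth it"
print(f"benefit ${benefit:,.0f} vs. cost ${annual_cost:,.0f}: {verdict}")
```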

John Adams, University College London (suggested reading: Deus e Brasileiro?; Can Science Beat Terrorism?; Bicycle bombs: a further inquiry), applies his risk thermostat model to terrorism. He presented a series of amusing photographs of overreactions to risk, most of them not really about risk aversion but more about liability aversion. He talked about bureaucratic paranoia, as well as bureaucratic incitements to paranoia, and how this is beginning to backfire. People treat risks differently, depending on whether they are voluntary, impersonal, or imposed, and whether people have total control, diminished control, or no control.

Dan Gardner, Ottawa Citizen (suggested reading: The Science of Fear: Why We Fear the Things We Shouldn’t—and Put Ourselves in Greater Danger), talked about how the media covers risks, threats, attacks, etc. He talked about the various ways the media screws up, all of which were familiar to everyone. His thesis is not that the media gets things wrong in order to increase readership/viewership and therefore profits, but that the media gets things wrong because reporters are human. Bad news bias is not a result of the media hyping bad news, but the natural human tendency to remember the bad more than the good. The evening news is centered around stories because people—including reporters—respond to stories, and stories with novelty, emotion, and drama are better stories.

Some of the discussion was about the nature of panic: whether and where it exists, and what it looks like. Someone from the audience questioned whether panic was related to proximity to the event; someone else pointed out that people very close to the 7/7 bombings took pictures and made phone calls—and that there was no evidence of panic. Also, on 9/11 pretty much everyone below where the airplanes struck the World Trade Center got out safely, while everyone above couldn’t get out and died. Angela Sasse pointed out that the previous terrorist attack against the World Trade Center, and the changes made to evacuation procedures afterwards, contributed to the lack of panic on 9/11. Bill Burns said that the purest form of panic is a drowning person. Jean Camp asked whether the recent attacks against women’s health providers should be classified as terrorism, or whether we are better off framing them as crime. There was also talk about sky marshals and their effectiveness. I said that it isn’t the sky marshals themselves that are a deterrent, but the idea of sky marshals. Terence Taylor said that increasing uncertainty on the part of the terrorists is, in itself, a security measure. There was also a discussion about how risk-averse terrorists are; they seem to want an 80% or 90% chance of success before they will launch an attack.

Next, lunch—and two final sessions this afternoon.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 12:01 PM
