The Psychological Impact of Doing Classified Intelligence Work
Richard Thieme gave a talk on the psychological impact of doing classified intelligence work. Summary here.
Interesting research that shows we exaggerate the risks of something when we find it morally objectionable.
From an article about and interview with the researchers:
To get at this question experimentally, Thomas and her collaborators created a series of vignettes in which a parent left a child unattended for some period of time, and participants indicated the risk of harm to the child during that period. For example, in one vignette, a 10-month-old was left alone for 15 minutes, asleep in the car in a cool, underground parking garage. In another vignette, an 8-year-old was left for an hour at a Starbucks, one block away from her parent’s location.
To experimentally manipulate participants’ moral attitude toward the parent, the experimenters varied the reason the child was left unattended across a set of six experiments with over 1,300 online participants. In some cases, the child was left alone unintentionally (for example, in one case, a mother is hit by a car and knocked unconscious after buckling her child into her car seat, thereby leaving the child unattended in the car seat). In other cases, the child was left unattended so the parent could go to work, do some volunteering, relax or meet a lover.
Not surprisingly, the parent’s reason for leaving a child unattended affected participants’ judgments of whether the parent had done something immoral: Ratings were over 3 on a 10-point scale even when the child was left unattended unintentionally, but they skyrocketed to nearly 8 when the parent left to meet a lover. Ratings for the other cases fell in between.
The more surprising result was that perceptions of risk followed precisely the same pattern. Although the details of the cases were otherwise the same - that is, the age of the child, the duration and location of the unattended period, and so on - participants thought children were in significantly greater danger when the parent left to meet a lover than when the child was left alone unintentionally. The ratings for the other cases, once again, fell in between. In other words, participants’ factual judgments of how much danger the child was in while the parent was away varied according to the extent of their moral outrage concerning the parent’s reason for leaving.
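To see the claimed pattern in minimal form, here is a toy Python sketch. It is not the study's data or analysis code; the per-condition numbers are invented for illustration (loosely matching the reported range of roughly 3 to 8 on the moral-wrongness scale) and simply show perceived risk rising with moral outrage rather than staying flat.

```python
# A toy illustration (not the study's data or analysis code) of the pattern
# described above: per-condition mean "risk to the child" ratings tracking mean
# "moral wrongness" ratings. All numbers are invented for illustration.

from statistics import correlation  # Python 3.10+

# Hypothetical mean ratings on a 10-point scale, by reason the child was left alone.
conditions = {
    "unintentional": {"moral_wrongness": 3.1, "perceived_risk": 5.2},
    "work":          {"moral_wrongness": 5.5, "perceived_risk": 6.0},
    "volunteering":  {"moral_wrongness": 5.8, "perceived_risk": 6.1},
    "relaxing":      {"moral_wrongness": 6.7, "perceived_risk": 6.6},
    "meeting lover": {"moral_wrongness": 7.9, "perceived_risk": 7.4},
}

moral = [v["moral_wrongness"] for v in conditions.values()]
risk = [v["perceived_risk"] for v in conditions.values()]

# If risk judgments were purely factual, they should be flat across conditions;
# instead, in this illustrative data, they rise with moral outrage.
print(f"correlation(moral wrongness, perceived risk) = {correlation(moral, risk):.2f}")
```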
Scott Atran has done some really interesting research on why ordinary people become terrorists.
Academics who study warfare and terrorism typically don’t conduct research just kilometers from the front lines of battle. But taking the laboratory to the fight is crucial for figuring out what impels people to make the ultimate sacrifice to, for example, impose Islamic law on others, says Atran, who is affiliated with the National Center for Scientific Research in Paris.
Atran’s war zone research over the last few years, and interviews during the last decade with members of various groups engaged in militant jihad (or holy war in the name of Islamic law), give him a gritty perspective on this issue. He rejects popular assumptions that people frequently join up, fight and die for terrorist groups due to mental problems, poverty, brainwashing or savvy recruitment efforts by jihadist organizations.
Instead, he argues, young people adrift in a globalized world find their own way to ISIS, looking to don a social identity that gives their lives significance. Groups of dissatisfied young adult friends around the world, often with little knowledge of Islam but yearning for lives of profound meaning and glory, typically choose to become volunteers in the Islamic State army in Syria and Iraq, Atran contends. Many of these individuals connect via the internet and social media to form a global community of alienated youth seeking heroic sacrifice, he proposes.
Preliminary experimental evidence suggests that not only global terrorism, but also festering state and ethnic conflicts, revolutions and even human rights movements—think of the U.S. civil rights movement in the 1960s—depend on what Atran refers to as devoted actors. These individuals, he argues, will sacrifice themselves, their families and anyone or anything else when a volatile mix of conditions is in play. First, devoted actors adopt values they regard as sacred and nonnegotiable, to be defended at all costs. Then, when they join a like-minded group of nonkin that feels like a family (a band of brothers), a collective sense of invincibility and special destiny overwhelms feelings of individuality. As members of a tightly bound group that perceives its sacred values under attack, devoted actors will kill and die for each other.
EDITED TO ADD (8/13): Related paper, also by Atran.
Adam Conover interviewed me on his podcast.
If you remember, I was featured on his “Adam Ruins Everything” TV episode on security.
Ronald V. Clarke argues for more situational crime prevention. It turns out that if you make crime harder to commit, there is less of it. And this has profound policy implications.
Whatever the benefits for Criminology, the real benefits of a greater focus on crime than criminality would be for crime policy. The fundamental attribution error is the main impediment to formulating a broader set of policies to control crime. Nearly everyone believes that the best way to control crime is to prevent people from developing into criminals in the first place or, failing that, to use the criminal justice system to deter or rehabilitate them. This has led directly to overuse of the system at vast human and economic cost.
Hardly anyone recognizes—whether politicians, public intellectuals, government policy makers, police or social workers—that focusing on the offender is dealing with only half the problem. We need also to deal with the many and varied ways in which society inadvertently creates the opportunities for crime that motivated offenders exploit by (i) manufacturing crime-prone goods, (ii) practicing poor management in many spheres of everyday life, (iii) permitting poor layout and design of places, (iv) neglecting the security of the vast numbers of electronic systems that regulate our everyday lives and, (v) enacting laws with unintended benefits for crime.
Situational prevention has accumulated dozens of successes in chipping away at some of the problems created by these conditions, which attests to the principles formulated so many years ago in Home Office research. Much more surprising, however, is that the same thing has been happening in every sector of modern life without any assistance from governments or academics. I am referring to the security measures that hundreds, perhaps thousands, of private and public organizations have been taking in the past 2-3 decades to protect themselves from crime.
Earlier this week, I was at the ninth Workshop on Security and Human Behavior, hosted at Harvard University.
SHB is a small invitational gathering of people studying various aspects of the human side of security. The fifty or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, philosophers, neuroscientists, lawyers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.
These are the most intellectually stimulating two days of my year; this year someone called it “Bruce’s brain in conference form.”
The goal is maximum interaction and discussion. We do that by putting everyone on panels. There are eight six-person panels over the course of the two days. Everyone gets to talk for ten minutes about their work, and then there’s half an hour of discussion in the room. Then there are lunches, dinners, and receptions—all designed so people meet each other and talk.
This page lists the participants and gives links to some of their work. As usual, Ross Anderson liveblogged the talks.
Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, and eighth SHB workshops. Follow those links to find summaries, papers, and audio recordings of the workshops.
Interesting research: “Third-party punishment as a costly signal of trustworthiness,” by Jillian J. Jordan, Moshe Hoffman, Paul Bloom, and David G. Rand, Nature:
Abstract: Third-party punishment (TPP), in which unaffected observers punish selfishness, promotes cooperation by deterring defection. But why should individuals choose to bear the costs of punishing? We present a game theoretic model of TPP as a costly signal of trustworthiness. Our model is based on individual differences in the costs and/or benefits of being trustworthy. We argue that individuals for whom trustworthiness is payoff-maximizing will find TPP to be less net costly (for example, because mechanisms that incentivize some individuals to be trustworthy also create benefits for deterring selfishness via TPP). We show that because of this relationship, it can be advantageous for individuals to punish selfishness in order to signal that they are not selfish themselves. We then empirically validate our model using economic game experiments. We show that TPP is indeed a signal of trustworthiness: third-party punishers are trusted more, and actually behave in a more trustworthy way, than non-punishers. Furthermore, as predicted by our model, introducing a more informative signal—the opportunity to help directly—attenuates these signalling effects. When potential punishers have the chance to help, they are less likely to punish, and punishment is perceived as, and actually is, a weaker signal of trustworthiness. Costly helping, in contrast, is a strong and highly used signal even when TPP is also possible. Together, our model and experiments provide a formal reputational account of TPP, and demonstrate how the costs of punishing may be recouped by the long-run benefits of signalling one’s trustworthiness.
More accessible essay.
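To make the signalling logic concrete, here is a minimal Python simulation sketch. It is not the authors' model or code; the cost, benefit, and noise parameters are assumptions chosen only to illustrate the mechanism: if punishing selfishness is less net costly for trustworthy types, then observing punishment raises the probability that the punisher is trustworthy.

```python
# A minimal sketch (not the paper's model or code) of third-party punishment
# as a costly signal: punishing is assumed to be cheaper, net of benefits, for
# trustworthy types, so punishment becomes an informative cue for observers.
# All parameter values below are illustrative assumptions.

import random

random.seed(0)

N = 10_000                       # number of simulated actors
PUNISH_COST_TRUSTWORTHY = 1.0    # assumed net cost of punishing for trustworthy types
PUNISH_COST_SELFISH = 3.0        # assumed (higher) net cost for selfish types
SIGNAL_BENEFIT = 2.0             # assumed future benefit of being trusted by observers
NOISE = 0.1                      # chance an actor decides at random instead

def chooses_to_punish(trustworthy: bool) -> bool:
    """Punish selfishness if the signalling benefit outweighs the actor's
    type-dependent cost, with a little decision noise so the signal is
    informative but not perfect."""
    if random.random() < NOISE:
        return random.random() < 0.5
    cost = PUNISH_COST_TRUSTWORTHY if trustworthy else PUNISH_COST_SELFISH
    return SIGNAL_BENEFIT > cost

# Half the population is trustworthy, half selfish.
actors = [random.random() < 0.5 for _ in range(N)]    # True = trustworthy
punishers = [chooses_to_punish(t) for t in actors]

# From an observer's point of view: how diagnostic is punishment?
p_trustworthy_overall = sum(actors) / N
p_trustworthy_given_punish = (
    sum(t for t, p in zip(actors, punishers) if p) / max(1, sum(punishers))
)
print(f"P(trustworthy)            = {p_trustworthy_overall:.2f}")
print(f"P(trustworthy | punishes) = {p_trustworthy_given_punish:.2f}")
# Because punishing is cheaper for trustworthy types under these assumptions,
# observing punishment raises the odds that the actor is trustworthy.
```

In the paper's terms, adding a cheaper and more informative signal, such as the opportunity to help directly, would weaken punishment as a signal; the sketch omits that extension for brevity.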
More psychological research on our reaction to terrorism and mass violence:
The researchers collected posts on Twitter made in response to the 2012 shooting attack at Sandy Hook Elementary School in Newtown, Connecticut. They looked at tweets about the school shooting over a five-and-a-half-month period to see whether people used different language in connection with the event depending on how geographically close they were to Newtown, or how much time had elapsed since the tragedy. The analysis showed that the further away people were from the tragedy in either space or time, the less they used words related to sadness (loss, grieve, mourn), suggesting that feelings of sorrow waned with growing psychological distance. But words related to anxiety (crazy, fearful, scared) showed the opposite pattern, increasing in frequency as people gained distance in either time or space from the tragic events. For example, within the first week of the shootings, words expressing sadness accounted for 1.69 percent of all words used in tweets about the event; about five months later, these had dwindled to 0.62 percent. In contrast, anxiety-related words went up from 0.27 percent to 0.62 percent over the same time.
Why does psychological distance mute sadness but incubate anxiety? The authors point out that as people feel more remote from an event, they shift from thinking of it in very concrete terms to more abstract ones, a pattern that has been shown in a number of previous studies. Concrete thoughts highlight the individual lives affected and the horrific details of the tragedy. (Images have particular power to make us feel the loss of individuals in a mass tragedy.) But when people think about the event abstractly, they’re more apt to focus on its underlying causes, which is anxiety inducing if the cause is seen as arising from an unresolved issue.
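For readers curious how such a word-category analysis works mechanically, here is a minimal Python sketch. It is not the researchers' pipeline; the tiny sadness and anxiety word lists (seeded with the example words quoted above) and the sample tweets are invented for illustration, while the real study used a full dictionary over months of Twitter data.

```python
# A rough sketch of the kind of word-frequency analysis described above
# (not the researchers' actual pipeline): count how often sadness- and
# anxiety-related words appear as a share of all words in tweets from two
# time windows. The word lists and sample tweets are illustrative assumptions.

import re

SADNESS_WORDS = {"loss", "grieve", "mourn", "sad", "heartbroken"}
ANXIETY_WORDS = {"crazy", "fearful", "scared", "afraid", "worried"}

def category_rates(tweets: list[str]) -> tuple[float, float]:
    """Return (sadness %, anxiety %) of all words across the given tweets."""
    words = [w for t in tweets for w in re.findall(r"[a-z']+", t.lower())]
    total = len(words) or 1
    sad = sum(w in SADNESS_WORDS for w in words)
    anx = sum(w in ANXIETY_WORDS for w in words)
    return 100 * sad / total, 100 * anx / total

# Hypothetical examples standing in for tweets collected in week 1 vs. month 5.
week_one = ["We mourn such a terrible loss", "So heartbroken for the families"]
month_five = ["Still scared this could happen anywhere", "The world feels crazy and fearful"]

for label, tweets in [("week 1", week_one), ("month 5", month_five)]:
    sad_pct, anx_pct = category_rates(tweets)
    print(f"{label}: sadness {sad_pct:.2f}%, anxiety {anx_pct:.2f}%")
```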
This is related.
Professional pilot Ron Rapp has written a fascinating article on a 2014 Gulfstream plane that crashed on takeoff. The accident was 100% human error and entirely preventable—the pilots ignored procedures and checklists and warning signs again and again. Rapp uses it as an example of what systems theorists call the “normalization of deviance,” a term coined by sociologist Diane Vaughan:
Social normalization of deviance means that people within the organization become so much accustomed to a deviant behaviour that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety. But it is a complex process with some kind of organizational acceptance. The people outside see the situation as deviant whereas the people inside get accustomed to it and do not. The more they do it, the more they get accustomed. For instance in the Challenger case there were design flaws in the famous “O-rings,” although they considered that by design the O-rings would not be damaged. In fact it happened that they suffered some recurrent damage. The first time the O-rings were damaged the engineers found a solution and decided the space transportation system to be flying with “acceptable risk.” The second time damage occurred, they thought the trouble came from something else. Because in their mind they believed they fixed the newest trouble, they again defined it as an acceptable risk and just kept monitoring the problem. And as they recurrently observed the problem with no consequence they got to the point that flying with the flaw was normal and acceptable. Of course, after the accident, they were shocked and horrified as they saw what they had done.
The point is that normalization of deviance is a gradual process that leads to a situation where unacceptable practices or standards become acceptable, and flagrant violations of procedure become normal—despite the fact that everyone involved knows better.
I think this is a useful term for IT security professionals. I have long said that the fundamental problems in computer security are not about technology; instead, they’re about using technology. We have lots of technical tools at our disposal, and if technology alone could secure networks we’d all be in great shape. But, of course, it can’t. Security is fundamentally a human problem, and there are people involved in security every step of the way. We know that people are regularly the weakest link. We have trouble getting people to follow good security practices and not undermine them as soon as they’re inconvenient. Rules are ignored.
As long as the organizational culture turns a blind eye to these practices, the predictable result is insecurity.
None of this is unique to IT. Looking at the healthcare field, John Banja identifies seven factors that contribute to the normalization of deviance.
Dan Luu has written about this, too.
I see these same factors again and again in IT, especially in large organizations. We constantly battle this culture, and we’re regularly cleaning up the aftermath of people getting things wrong. The culture of IT relies on single expert individuals, with all the problems that come along with that. And false positives can wear down a team’s diligence, bringing about complacency.
I don’t have any magic solutions here. Banja’s suggestions are good, but general.
The normalization of deviance is something we have to face, especially in areas like incident response where we can’t get people out of the loop. People believe they know better and deliberately ignore procedure, and invariably forget things. Recognizing the problem is the first step toward solving it.
This essay previously appeared on the Resilient Systems blog.
Fascinating New Yorker article about Samantha Azzopardi, serial con artist and deceiver.
The article is really about how our brains allow stories to deceive us:
Stories bring us together. We can talk about them and bond over them. They are shared knowledge, shared legend, and shared history; often, they shape our shared future. Stories are so natural that we don’t notice how much they permeate our lives. And stories are on our side: they are meant to delight us, not deceive us—an ever-present form of entertainment.
That’s precisely why they can be such a powerful tool of deception. When we’re immersed in a story, we let down our guard. We focus in a way we wouldn’t if someone were just trying to catch us with a random phrase or picture or interaction. (“He has a secret” makes for a far more intriguing proposition than “He has a bicycle.”) In those moments of fully immersed attention, we may absorb things, under the radar, that would normally pass us by or put us on high alert. Later, we may find ourselves thinking that some idea or concept is coming from our own brilliant, fertile minds, when, in reality, it was planted there by the story we just heard or read.