US and China in Cyberspace
This article on US/China cooperation and competition in cyberspace offers an interesting lens through which to examine security policy.
I spend a lot of time in my book Liars and Outliers on cooperating versus defecting. Cooperating is good for the group at the expense of the individual. Defecting is good for the individual at the expense of the group. Given that natural selection acts on individuals, there has been a lot of controversy over how altruism might have evolved.
Here’s one possible answer: it’s favored by chance:
The key insight is that the total size of population that can be supported depends on the proportion of cooperators: more cooperation means more food for all and a larger population. If, due to chance, there is a random increase in the number of cheats then there is not enough food to go around and total population size will decrease. Conversely, a random decrease in the number of cheats will allow the population to grow to a larger size, disproportionally benefitting the cooperators. In this way, the cooperators are favoured by chance, and are more likely to win in the long term.
Dr George Constable, soon to join the University of Bath from Princeton, uses the analogy of flipping a coin, where heads wins £20 but tails loses £10:
“Although the odds [of] winning or losing are the same, winning is more good than losing is bad. Random fluctuations in cheat numbers are exploited by the cooperators, who benefit more than they lose out.”
EDITED TO ADD (8/12): Journal article.
Related article.
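The coin-flip analogy is easy to check numerically. Here's a minimal sketch (the trial count and seed are arbitrary choices of mine, not anything from the paper): even odds, asymmetric payoffs, and the average outcome per flip comes out positive.

```python
import random

def average_gain_per_flip(trials=100_000, seed=0):
    """Heads wins 20, tails loses 10, each with probability 1/2.
    The odds are even but the payoffs are not, so the average
    outcome per flip is positive: 0.5 * 20 - 0.5 * 10 = +5."""
    rng = random.Random(seed)
    total = sum(20 if rng.random() < 0.5 else -10 for _ in range(trials))
    return total / trials

print(average_gain_per_flip())  # close to 5.0
```

In the population model, an equally likely dip or rise in cheat numbers plays the role of the coin: a dip lets the whole population expand, which benefits the cooperators more than an equal-sized rise hurts them, so the same even-odds asymmetry pushes cooperators ahead over the long run.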
This is an interesting paper—the full version is behind a paywall—about how we as humans can motivate people to cooperate with future generations.
Abstract: Overexploitation of renewable resources today has a high cost on the welfare of future generations. Unlike in other public goods games, however, future generations cannot reciprocate actions made today. What mechanisms can maintain cooperation with the future? To answer this question, we devise a new experimental paradigm, the ‘Intergenerational Goods Game’. A line-up of successive groups (generations) can each either extract a resource to exhaustion or leave something for the next group. Exhausting the resource maximizes the payoff for the present generation, but leaves all future generations empty-handed. Here we show that the resource is almost always destroyed if extraction decisions are made individually. This failure to cooperate with the future is driven primarily by a minority of individuals who extract far more than what is sustainable. In contrast, when extractions are democratically decided by vote, the resource is consistently sustained. Voting is effective for two reasons. First, it allows a majority of cooperators to restrain defectors. Second, it reassures conditional cooperators that their efforts are not futile. Voting, however, only promotes sustainability if it is binding for all involved. Our results have implications for policy interventions designed to sustain intergenerational public goods.
Here’s a Q&A with, and an essay by, the author. Article on the research.
EDITED TO ADD (12/10): A low-res version of the full article can be viewed here.
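The “Intergenerational Goods Game” is simple enough to sketch in code. This toy version uses my own stand-in parameters (group size, pool size, sustainability threshold, and defector mix are not the experiment's exact values), but it reproduces the qualitative result: individual extraction usually destroys the resource quickly, while a binding median vote restrains the defecting minority.

```python
import random

def run(mode, n_generations=10, group=5, threshold=50, p_defector=0.3, seed=1):
    """Each generation of `group` players extracts from a common pool.
    The pool regenerates for the next generation only if total
    extraction stays at or below the sustainable threshold."""
    rng = random.Random(seed)
    for g in range(n_generations):
        # Cooperators propose the sustainable share (10); defectors the max (20).
        proposals = [20 if rng.random() < p_defector else 10 for _ in range(group)]
        if mode == "individual":
            extractions = proposals  # everyone takes what they want
        else:  # binding vote: the median proposal applies to everyone
            extractions = [sorted(proposals)[group // 2]] * group
        if sum(extractions) > threshold:
            return g  # resource destroyed in this generation
    return n_generations  # survived every generation

for mode in ("individual", "vote"):
    lifetimes = [run(mode, seed=s) for s in range(1000)]
    print(mode, sum(lifetimes) / len(lifetimes))  # mean generations survived
```

With these numbers, a generation survives individual extraction only when it happens to contain no defectors, while the vote fails only when defectors are a majority of the group, which is exactly the restraining effect the paper describes.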
I’ve talked about plant security systems, both here and in Beyond Fear. Specifically, I’ve talked about tobacco plants that call in air strikes against the insects that eat them, by releasing a scent that attracts predators to those insects. Here’s another defense: the plants also tag caterpillars for predators by feeding them a sweet snack (full episode here) that makes them give off a strong scent.
I hadn’t heard of this term before, but it’s an interesting one. The excerpt below is from an interview with Rebecca Solnit, author of A Paradise Built in Hell: The Extraordinary Communities That Arise in Disaster:
The term “elite panic” was coined by Caron Chess and Lee Clarke of Rutgers. From the beginning of the field in the 1950s to the present, the major sociologists of disaster—Charles Fritz, Enrico Quarantelli, Kathleen Tierney, and Lee Clarke—proceeding in the most cautious, methodical, and clearly attempting-to-be-politically-neutral way of social scientists, arrived via their research at this enormous confidence in human nature and deep critique of institutional authority. It’s quite remarkable.
Elites tend to believe in a venal, selfish, and essentially monstrous version of human nature, which I sometimes think is their own human nature. I mean, people don’t become incredibly wealthy and powerful by being angelic, necessarily. They believe that only their power keeps the rest of us in line and that when it somehow shrinks away, our seething violence will rise to the surface—that was very clear in Katrina. Timothy Garton Ash and Maureen Dowd and all these other people immediately jumped on the bandwagon and started writing commentaries based on the assumption that the rumors of mass violence during Katrina were true. A lot of people have never understood that the rumors were dispelled and that those things didn’t actually happen; it’s tragic.
But there’s also an elite fear—going back to the 19th century—that there will be urban insurrection. It’s a valid fear. I see these moments of crisis as moments of popular power and positive social change. The major example in my book is Mexico City, where the ’85 earthquake prompted public disaffection with the one-party system and, therefore, the rebirth of civil society.
In Liars and Outliers, I talk a lot about social norms and when people follow them. This research uses survival data from shipwrecks to measure it.
The authors argue that shipwrecks can actually tell us a fair bit about human behavior, since everyone stuck on a sinking ship has to do a bit of cost-benefit analysis. People will weigh their options—which will generally involve helping others at great risk to themselves—amidst a backdrop of social norms and, at least in the case of the Titanic, direct orders from authority figures. “This cost-benefit logic is fundamental in economic models of human behavior,” the authors write, suggesting that a shipwreck could provide a real-world test of ideas derived from controlled experiments.
Eight ideas, to be precise. That’s how many hypotheses the authors lay out, ranging from “women have a survival advantage in shipwrecks” to “women are more likely to survive on British ships, given the UK’s strong sense of gentility.” They tested them using a database of ship sinkings that encompasses over 15,000 passengers and crew, and provides information on everything from age and sex to whether the passenger had a first-class ticket.
For the most part, the lessons provided by the Titanic simply don’t hold. Excluding the two disasters mentioned above, crew members had a survival rate of over 60 percent, far higher than any other group analyzed. (Although they didn’t consistently survive well—in about half the wrecks, there was no statistical difference between crew and passengers). Rather than going down with the ship, captains ended up coming in second, with just under half surviving. The authors offer a number of plausible reasons for crew survival, including better fitness, a thorough knowledge of the ship that’s sinking, and better training for how to handle emergencies. In any case, however, they’re not clearly or consistently sacrificing themselves to save their passengers.
At the other end of the spectrum, nearly half the children on the Titanic survived, but figures for the rest of the shipwrecks were down near 15 percent. About a quarter of women survived other sinkings, but roughly three times that made it through the Titanic alive. If you exclude the Titanic, female survival was 18 percent, or about half the rate at which males came through alive.
What about social factors? Having the captain order “women and children first” did boost female survival, but only by about 10 percentage points. Most of the other ideas didn’t pan out. For example, the speed of sinking, which might give the crew more time to get vulnerable passengers off first, made no difference whatsoever to female survival. Neither did the length of voyage, which might give passengers more time to get to know both the boat and each other. The fraction of passengers that were female didn’t seem to make a difference either.
One social factor that did play a role was the price of the ticket: “there is a class gradient in survival benefitting first class passengers.” Another is being on a British ship, where (except on the Titanic) women actually had lower rates of survival.
Paper here (behind a paywall):
Abstract: Since the sinking of the Titanic, there has been a widespread belief that the social norm of “women and children first” (WCF) gives women a survival advantage over men in maritime disasters, and that captains and crew members give priority to passengers. We analyze a database of 18 maritime disasters spanning three centuries, covering the fate of over 15,000 individuals of more than 30 nationalities. Our results provide a unique picture of maritime disasters. Women have a distinct survival disadvantage compared with men. Captains and crew survive at a significantly higher rate than passengers. We also find that: the captain has the power to enforce normative behavior; there seems to be no association between duration of a disaster and the impact of social norms; women fare no better when they constitute a small share of the ship’s complement; the length of the voyage before the disaster appears to have no impact on women’s relative survival rate; the sex gap in survival rates has declined since World War I; and women have a larger disadvantage in British shipwrecks. Taken together, our findings show that human behavior in life-and-death situations is best captured by the expression “every man for himself.”
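The paper's headline comparisons are differences in survival proportions across groups. Here's a minimal sketch of that kind of test (a plain two-proportion z-test; the counts are illustrative numbers I picked to mirror the roughly 18 percent versus 36 percent rates quoted above, not the paper's actual data):

```python
import math

def survival_gap(survivors_a, n_a, survivors_b, n_b):
    """Two-proportion z-test for a difference in survival rates
    between two groups (e.g. women vs. men, or crew vs. passengers)."""
    p_a, p_b = survivors_a / n_a, survivors_b / n_b
    pooled = (survivors_a + survivors_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_a - p_b) / se

# Illustrative counts only, chosen to mirror the quoted rates.
p_women, p_men, z = survival_gap(270, 1500, 1980, 5500)
print(f"women {p_women:.0%}, men {p_men:.0%}, z = {z:.1f}")  # large negative z
```

The actual study is more careful than this: it tests each wreck separately and controls for age, class, and crew status. But the underlying question in each case is the same kind of proportion comparison.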
I write a lot about altruism, fairness, and cooperation in my new book (out in February!), and this sort of thing interests me a lot:
In a new study, researchers had 15-month-old babies watch movies of a person distributing crackers or milk to two others, either evenly or unevenly. Babies look at things longer when they’re surprised, so measuring looking time can be used to gain insight into what babies expect to happen. In the study, the infants looked longer when the person in the video distributed the foods unevenly, suggesting surprise, and perhaps even an early perception of fairness.
But the team also say they established a link between fairness and altruism. In a second part of the experiment, the babies chose between two toys, and were then asked to share one of the toys with an experimenter. About a third of the babies were “selfish sharers”: they shared the toy they hadn’t chosen. Another third were “altruistic sharers”: they shared their chosen toy. (The rest chose not to share. They may have been inhibited by the unfamiliarity of the experimenter, or maybe they just weren’t that into sharing.)
What’s interesting about the second half of the study was that by and large it was the babies who had previously been surprised by the unfair cracker and milk distribution who tended to share the preferred toy with the experimenter (the altruistic sharers). The babies who shared the rejected toy hadn’t expressed much surprise over unequal distribution. This led the researchers to suggest that there’s a fundamental link between altruism and a sense of equity.
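Looking time is the measurement doing all the work here, so the analysis comes down to comparing looking-time distributions across the fair and unfair conditions. Here is a sketch of how such a comparison might be run (the looking times below are made-up numbers, not the study's data, and the study's own statistics may differ):

```python
import random
import statistics

def permutation_test(a, b, trials=10_000, seed=0):
    """One-sided permutation test: how often does random relabelling
    produce a mean difference at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    combined = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(combined)
        diff = statistics.mean(combined[:len(a)]) - statistics.mean(combined[len(a):])
        if diff >= observed:
            hits += 1
    return observed, hits / trials

# Made-up looking times in seconds: unfair (unequal split) vs. fair outcomes.
unfair = [14.2, 16.8, 12.5, 18.1, 15.0, 13.7]
fair = [9.1, 11.4, 8.7, 12.2, 10.5, 9.8]
print(permutation_test(unfair, fair))  # longer looking at the unfair outcome
```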
Both psychology and neuroscience have a lot to say about these topics, and the resulting debate reads like a subset of the “Is there such a thing as free will?” debate. I think those who believe there is no free will are misdefining the term.
What does this have to do with security? Everything. It’s not until we understand the natural human tendencies of fairness and altruism that we can really understand people who take advantage of those tendencies, and build systems to prevent them from taking advantage.
EDITED TO ADD (12/14): Related research with dogs.
New paper: Dengpan Liu, Yonghua Ji, and Vijay Mookerjee (2011), “Knowledge Sharing and Investment Decisions in Information Security,” Decision Support Systems, in press.
Abstract: We study the relationship between decisions made by two similar firms pertaining to knowledge sharing and investment in information security. The analysis shows that the nature of information assets possessed by the two firms, either complementary or substitutable, plays a crucial role in influencing these decisions. In the complementary case, we show that the firms have a natural incentive to share security knowledge and no external influence to induce sharing is needed. However, the investment levels chosen in equilibrium are lower than optimal, an aberration that can be corrected using coordination mechanisms that reward the firms for increasing their investment levels. In the substitutable case, the firms fall into a Prisoners’ Dilemma trap where they do not share security knowledge in equilibrium, despite the fact that it is beneficial for both of them to do so. Here, the beneficial role of a social planner to encourage the firms to share is indicated. However, even when the firms share in accordance with the recommendations of a social planner, the level of investment chosen by the firms is sub-optimal. The firms either enter into an “arms race” where they over-invest or reenact the under-investment behavior found in the complementary case. Once again, this sub-optimal behavior can be corrected using incentive mechanisms that penalize for over-investment and reward for increasing the investment level in regions of under-investment. The proposed coordination schemes, with some modifications, achieve the socially optimal outcome even when the firms are risk-averse. Implications for information security vendors, firms, and the social planner are discussed.
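The “Prisoners’ Dilemma trap” in the substitutable case has the familiar payoff structure. A sketch with illustrative payoffs (my numbers, not the paper's model): whatever the rival does, withholding knowledge pays better, so both firms withhold even though mutual sharing would leave both better off.

```python
# Row player's and column player's payoffs for each pair of moves.
# Illustrative numbers only; they encode the Prisoner's Dilemma ordering.
PAYOFFS = {
    ("share", "share"): (3, 3),
    ("share", "withhold"): (0, 5),
    ("withhold", "share"): (5, 0),
    ("withhold", "withhold"): (1, 1),
}

def best_response(rival_move):
    """A firm's payoff-maximizing reply to the rival's move."""
    return max(("share", "withhold"),
               key=lambda move: PAYOFFS[(move, rival_move)][0])

for rival in ("share", "withhold"):
    print(f"if the rival plays {rival}, best reply: {best_response(rival)}")
# Both replies are 'withhold': (withhold, withhold) is the equilibrium,
# even though (share, share) pays more to both firms.
```

This is why the abstract invokes a social planner: some outside mechanism has to change the payoffs before sharing becomes individually rational.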
Three months ago, I announced that I was writing a book on why security exists in human societies. This is basically the book’s thesis statement:
All complex systems contain parasites. In any system of cooperative behavior, an uncooperative strategy will be effective—and the system will tolerate the uncooperatives—as long as they’re not too numerous or too effective. Thus, as a species evolves cooperative behavior, it also evolves a dishonest minority that takes advantage of the honest majority. If individuals within a species have the ability to switch strategies, the dishonest minority will never be reduced to zero. As a result, the species simultaneously evolves two things: 1) security systems to protect itself from this dishonest minority, and 2) deception systems to successfully be parasitic.
Humans evolved along this path. The basic mechanism can be modeled simply. It is in our collective group interest for everyone to cooperate. It is in any given individual’s short-term self-interest not to cooperate: to defect, in game theory terms. But if everyone defects, society falls apart. To ensure widespread cooperation and minimal defection, we collectively implement a variety of societal security systems.
Two of these systems evolved in prehistory: morals and reputation. Two others evolved as our social groups became larger and more formal: laws and technical security systems. What these security systems do, effectively, is give individuals incentives to act in the group interest. But none of these systems, with the possible exception of some fanciful science-fiction technologies, can ever bring that dishonest minority down to zero.
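The incentive claim in that paragraph reduces to a one-line inequality: defection is individually rational only when its gain exceeds the expected sanction. A deliberately simple sketch (the numbers are arbitrary; real sanctions, moral costs, and reputational costs are harder to quantify):

```python
def defection_pays(gain, penalty, detection_prob):
    """Defection is worth it only if the gain exceeds the expected
    cost imposed by a societal security system: gain > p * penalty."""
    return gain > detection_prob * penalty

# With no enforcement, a defection worth 10 always pays.
print(defection_pays(gain=10, penalty=0, detection_prob=0.0))    # True
# A fine of 100, caught 20% of the time: expected cost 20 > 10.
print(defection_pays(gain=10, penalty=100, detection_prob=0.2))  # False
```

Morals and reputation fit the same framing; they just impose the expected cost internally or socially rather than through formal law.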
In complex modern societies, many complications intrude on this simple model of societal security. Decisions to cooperate or defect are often made by groups of people—governments, corporations, and so on—and there are important differences because of dynamics inside and outside the groups. Much of our societal security is delegated—to the police, for example—and becomes institutionalized; the dynamics of this are also important. Power struggles over who controls the mechanisms of societal security are inherent: “group interest” rapidly devolves to “the king’s interest.” Societal security can become a tool for those in power to remain in power, with the definition of “honest majority” being simply the people who follow the rules.
The term “dishonest minority” is not a moral judgment; it simply describes the minority who do not follow societal norms. Since many societal norms are in fact immoral, sometimes the dishonest minority serves as a catalyst for social change. Societies without a reservoir of people who don’t follow the rules lack an important mechanism for societal evolution. Vibrant societies need a dishonest minority; if society makes its dishonest minority too small, it stifles dissent as well as common crime.
At this point, I have most of a first draft: 75,000 words. The tentative title is still “The Dishonest Minority: Security and its Role in Modern Society.” I have signed a contract with Wiley to deliver a final manuscript in November for February 2012 publication. Writing a book is a process of exploration for me, and the final book will certainly be a little different—and maybe even very different—from what I wrote above. But that’s where I am today.
And it’s why my other writings continue to be sparse.
It’s standard sociological theory that a group experiences social solidarity in response to external conflict. This paper studies the phenomenon in the United States after the 9/11 terrorist attacks.
Conflict produces group solidarity in four phases: (1) an initial few days of shock and idiosyncratic individual reactions to attack; (2) one to two weeks of establishing standardized displays of solidarity symbols; (3) two to three months of high solidarity plateau; and (4) gradual decline toward normalcy in six to nine months. Solidarity is not uniform but is clustered in local groups supporting each other’s symbolic behavior. Actual solidarity behaviors are performed by minorities of the population, while vague verbal claims to performance are made by large majorities. Commemorative rituals intermittently revive high emotional peaks; participants become ranked according to their closeness to a center of ritual attention. Events, places, and organizations claim importance by associating themselves with national solidarity rituals and especially by surrounding themselves with pragmatically ineffective security ritual. Conflicts arise over access to centers of ritual attention; clashes occur between pragmatists deritualizing security and security zealots attempting to keep up the level of emotional intensity. The solidarity plateau is also a hysteria zone; as a center of emotional attention, it attracts ancillary attacks unrelated to the original terrorists as well as alarms and hoaxes. In particular historical circumstances, it becomes a period of atrocities.
This certainly makes sense as a group survival mechanism: self-interest giving way to group interest in the face of a threat to the group. It’s the kind of thing I am talking about in my new book.
Paper also available here.