Entries Tagged "reputation"


The Court of Public Opinion

Recently, Elon Musk and the New York Times took to Twitter and the Internet to argue the data — and their grievances — over a failed road test and car review. Meanwhile, an Applebee’s server is part of a Change.org petition to get her job back after posting a pastor’s no-tip receipt comment online. And when he wasn’t paid quickly enough, a local Fitness SF web developer rewrote the company’s webpage to air his complaint.

All of these “cases” are seeking their judgments in the court of public opinion. The court of public opinion has a full docket; even brick-and-mortar establishments aren’t immune.

More and more individuals — and companies — are augmenting, or even entirely bypassing, the traditional legal process in the hope of getting a more favorable hearing in public.

Every day we have to interact with thousands of strangers, from people we pass on the street to people who touch our food to people we enter short-term business relationships with. Even though most of us don’t have the ability to protect our interests with physical force, we can all be confident when dealing with these strangers because — at least in part — we trust that the legal system will intervene on our behalf in case of a problem. Sometimes that problem involves people who break the rules of society, and the criminal courts deal with them; when the problem is a disagreement between two parties, the civil courts do. Courts are an ancient system of justice, and modern society cannot function without them.

What matters in this system are the facts and the laws. Courts are intended to be impartial and fair in doling out their justice, and societies flourish based on the extent to which we approach this ideal. When courts are unfair — when judges can be bribed, when the powerful are treated better, when more expensive lawyers produce more favorable outcomes — society is harmed. We become more fearful and less able to trust each other. We are less willing to enter into agreement with strangers, and we spend more effort protecting our own because we don’t believe the system is there to back us up.

The court of public opinion is an alternative system of justice. It’s very different from the traditional court system: This court is based on reputation, revenge, public shaming, and the whims of the crowd. Having a good story is more important than having the law on your side. Being a sympathetic underdog is more important than being fair. Facts matter, but there are no standards of accuracy. The speed of the Internet exacerbates this; a good story spreads faster than a bunch of facts.

This court delivers reputational justice. Arguments are measured in relation to reputation. If one party makes a claim against another that seems plausible, based on both of their reputations, then that claim is likely to be received favorably. If someone makes a claim that clashes with the reputations of the parties, then it’s likely to be disbelieved. Reputation is, of course, a commodity, and loss of reputation is the penalty this court imposes. In that respect, it less often recompenses the injured party and more often exacts revenge or retribution. And while those losses may be brutal, the effects are usually short-lived.

The court of public opinion has significant limitations. It works better for revenge and justice than for dispute resolution. It can punish a company for unfairly firing one of its employees or lying in an automobile test drive, but it’s less effective at unraveling a complicated patent litigation or navigating a bankruptcy proceeding.

In many ways, this is a return to a medieval notion of “fama,” or reputation. In other ways, it’s like mob justice: sometimes benign and beneficial, sometimes terrible (think French Revolution). Trial by public opinion isn’t new; remember Rodney King and O.J. Simpson?

Mass media has enabled this system for centuries. But the Internet, and social media in particular, has changed how it’s being used.

Now it’s being used more deliberately, more often, by more and more powerful entities as a redress mechanism. Perhaps it’s perceived to be more efficient, or perhaps one of the parties feels they can get a more favorable hearing in this new court; either way, it’s being used instead of lawsuits. Instead of being a sideshow to actual legal proceedings, it is turning into an alternate system of dispute resolution and justice.

Part of the reason for this trend is that the Internet makes taking a case in front of the court of public opinion so much easier. It used to be that the injured party had to convince a traditional media outlet to make his case public; now he can take his case directly to the people. And while it’s still a surprise when some cases go viral while others languish in obscurity, it’s simply more effective to present your case on Facebook or Twitter.

Another reason is that the traditional court system is increasingly viewed as unfair. Today, money can buy justice: not by directly bribing judges, but by hiring better lawyers and forcing the other side to spend more money than they are able to. We know that the courts treat the rich and the poor differently, that corporations can get away with crimes individuals cannot, and that the powerful can lobby to get the specific laws and regulations they want — irrespective of any notions of fairness.

Smart companies have already prepared for battles in the court of public opinion. They’ve hired policy experts. They’ve hired firms to monitor Facebook, Twitter, and other Internet venues where these battles originate. They have response strategies and communications plans in place. They’ve recognized that while this court is very different from the traditional legal system, money and power still count, and that there are ways to tip the outcomes in their favor: for example, fake grassroots movements can be just as effective on the Internet as they are in the offline world.

It’s time we recognize the court of public opinion for what it is — an alternative crowd-enabled system of justice. We need to start discussing its merits and flaws; we need to understand when it results in justice, and how it can be manipulated by the powerful. We also need to have a frank conversation about the failings of the traditional justice system, and why people are motivated to take their grievances to the public. Despite 24-hour PR firms and incident-response plans, this is a court where corporations and governments are at an inherent disadvantage. And because the weak will continue to run ahead of the powerful, those in power will prefer to use the more traditional mechanisms of government: police, courts, and laws.

Social-media researcher danah boyd had it right when she wrote in Wired: “In a networked society, who among us gets to decide where the moral boundaries lie? This isn’t an easy question and it’s at the root of how we, as a society, conceptualize justice.” It’s not an easy question, but it’s the key question. The moral and ethical issues surrounding the court of public opinion are the real ones, and ones that society will have to tackle in the decades to come.

This essay originally appeared on Wired.com.

Posted on February 28, 2013 at 2:40 PM

Public Shaming as a Security Measure

In Liars and Outliers, I talk a lot about the more social forms of security. One of them is reputational. This post is about that squishy sociological security measure: public shaming as a way to punish bigotry (and, by extension, to reduce the incidence of bigotry).

It’s a pretty rambling post, first listing some of the public shaming sites, then trying to figure out whether they’re a good idea or not, and finally coming to the conclusion that shaming doesn’t do very much good and — in many cases — unjustly rewards the shamer.

I disagree with a lot of this. I do agree with this:

I do think that shame has a role in the way we control our social norms. Shame is a powerful tool, and it’s something that we use to keep our own actions in check all the time. The source of that shame varies immensely. Maybe we are shamed before God, or our parents, or our boss.

But I disagree with the author’s insistence that “shame, ultimately, has to come from ourselves. We cannot be forced to feel shame.” While technically it’s true, operationally it’s not. Shame comes from others’ reactions to our actions. Yes, we feel it inside — but it originates from our lifelong inculcation into the norms of our social group. And throughout the history of our species, social groups have used shame to effectively punish those who violate social norms. No one wants a bad reputation.

It’s also true that we all have defenses against shame. One of them is to have an alternate social group for whom the shameful behavior is not shameful at all. Another is to simply not care what the group thinks. But none of this makes shame a less valuable tool of societal pressure.

Like all forms of security that society uses to control its members, shame is both useful and valuable. And I’m sure it is effective against bigotry. It might not be obvious how to deploy it effectively in the international and sometimes anonymous world of the Internet, but that’s another discussion entirely.

Posted on December 27, 2012 at 6:21 AM

Rudyard Kipling on Societal Pressures

In the short story “A Wayside Comedy,” published in 1888 in Under the Deodars, Kipling wrote:

You must remember, though you will not understand, that all laws weaken in a small and hidden community where there is no public opinion. When a man is absolutely alone in a Station he runs a certain risk of falling into evil ways. This risk is multiplied by every addition to the population up to twelve — the Jury number. After that, fear and consequent restraint begin, and human action becomes less grotesquely jerky.

Interesting commentary on how reputational pressure scales. If I had found this quote last year, I would have included it in my book.

Posted on August 16, 2012 at 1:52 PM

The Psychology of Immoral (and Illegal) Behavior

When I talk about Liars and Outliers to security audiences, one of the things I stress is that our traditional security focus — on technical countermeasures — is much narrower than it could be. Leveraging moral, reputational, and institutional pressures is likely to be much more effective in motivating cooperative behavior.

This story illustrates the point. It’s about the psychology of fraud, “why good people do bad things.”

There is, she says, a common misperception that at moments like this, when people face an ethical decision, they clearly understand the choice that they are making.

“We assume that they can see the ethics and are consciously choosing not to behave ethically,” Tenbrunsel says.

This, generally speaking, is the basis of our disapproval: They knew. They chose to do wrong.

But Tenbrunsel says that we are frequently blind to the ethics of a situation.

Over the past couple of decades, psychologists have documented many different ways that our minds fail to see what is directly in front of us. They’ve come up with a concept called “bounded ethicality”: That’s the notion that cognitively, our ability to behave ethically is seriously limited, because we don’t always see the ethical big picture.

One small example: the way a decision is framed. “The way that a decision is presented to me,” says Tenbrunsel, “very much changes the way in which I view that decision, and then eventually, the decision it is that I reach.”

Essentially, Tenbrunsel argues, certain cognitive frames make us blind to the fact that we are confronting an ethical problem at all.

Tenbrunsel told us about a recent experiment that illustrates the problem. She got together two groups of people and told one to think about a business decision. The other group was instructed to think about an ethical decision. Those asked to consider a business decision generated one mental checklist; those asked to think of an ethical decision generated a different mental checklist.

Tenbrunsel next had her subjects do an unrelated task to distract them. Then she presented them with an opportunity to cheat.

Those cognitively primed to think about business behaved radically different from those who were not — no matter who they were, or what their moral upbringing had been.

“If you’re thinking about a business decision, you are significantly more likely to lie than if you were thinking from an ethical frame,” Tenbrunsel says.

According to Tenbrunsel, the business frame cognitively activates one set of goals — to be competent, to be successful; the ethics frame triggers other goals. And once you’re in, say, a business frame, you become really focused on meeting those goals, and other goals can completely fade from view.

Also:

Typically when we hear about large frauds, we assume the perpetrators were driven by financial incentives. But psychologists and economists say financial incentives don’t fully explain it. They’re interested in another possible explanation: Human beings commit fraud because human beings like each other.

We like to help each other, especially people we identify with. And when we are helping people, we really don’t see what we are doing as unethical.

The article even has some concrete security ideas:

Now if these psychologists and economists are right, if we are all capable of behaving profoundly unethically without realizing it, then our workplaces and regulations are poorly organized. They’re not designed to take into account the cognitively flawed human beings that we are. They don’t attempt to structure things around our weaknesses.

Some concrete proposals to do that are on the table. For example, we know that auditors develop relationships with clients after years of working together, and we know that those relationships can corrupt their audits without them even realizing it. So there is a proposal to force businesses to switch auditors every couple of years to address that problem.

Another suggestion: A sentence should be placed at the beginning of every business contract that explicitly says that lying on this contract is unethical and illegal, because that kind of statement would get people into the proper cognitive frame.

Along similar lines, some years ago Ross Anderson suggested that the webpages of people’s online bank accounts should include their photographs, based on research showing that it’s harder to commit fraud against someone you identify with as a person.

Two excellent papers on this topic:

Abstract of the second paper:

Dishonesty plays a large role in the economy. Causes for (dis)honest behavior seem to be based partially on external rewards, and partially on internal rewards. Here, we investigate how such external and internal rewards work in concert to produce (dis)honesty. We propose and test a theory of self-concept maintenance that allows people to engage to some level in dishonest behavior, thereby benefiting from external benefits of dishonesty, while maintaining their positive view about themselves in terms of being honest individuals. The results show that (1) given the opportunity to engage in beneficial dishonesty, people will engage in such behaviors; (2) the amount of dishonesty is largely insensitive to either the expected external benefits or the costs associated with the deceptive acts; (3) people know about their actions but do not update their self-concepts; (4) causing people to become more aware of their internal standards for honesty decreases their tendency for deception; and (5) increasing the “degrees of freedom” that people have to interpret their actions increases their tendency for deception. We suggest that dishonesty governed by self-concept maintenance is likely to be prevalent in the economy, and understanding it has important implications for designing effective methods to curb dishonesty.

Posted on May 30, 2012 at 12:54 PM

Selling a Good Reputation on eBay

Here’s someone who is selling positive feedback on eBay:

Hello, for sale is a picture of a tree. This tree is an original and was taken by me. I have gotten nothing but 100% feedback from people from this picture. Great Picture! Once payment is made I will send you picture via email. Once payment is made and I send picture through email 100% feedback will be given to the buyer!!!! Once you pay for the item send me a ebay message with your email and I will email you the picture!

Posted on June 24, 2011 at 1:59 PM

Keeping Sensitive Information Out of the Hands of Terrorists Through Self-Restraint

In my forthcoming book (available February 2012), I talk about various mechanisms for societal security: how we as a group protect ourselves from the “dishonest minority” within us. I have four types of societal security systems:

  • moral systems — any internal rewards and punishments;
  • reputational systems — any informal external rewards and punishments;
  • rule-based systems — any formal system of rewards and punishments (mostly punishments), laws primarily;
  • technological systems — physical and technical measures: walls, door locks, cameras, and so on.

We spend most of our effort in the third and fourth category. I am spending a lot of time researching how the first two categories work.

Given that, I was very interested to see an article by Dallas Boyd in Homeland Security Affairs: “Protecting Sensitive Information: The Virtue of Self-Restraint,” in which he basically argues that, out of moral responsibility (he calls it “civic duty”), people should not publish information that terrorists could use. Ignore for a moment the debate about whether publishing information that could give the terrorists ideas is actually a bad idea — I think it’s not; what Boyd is proposing is still very interesting. He specifically says that censorship is bad and won’t work, and wants to see voluntary self-restraint along with public shaming of offenders.

As an alternative to formal restrictions on communication, professional societies and influential figures should promote voluntary self-censorship as a civic duty. As this practice is already accepted among many scientists, it may be transferrable to members of other professions. As part of this effort, formal channels should be established in which citizens can alert the government to vulnerabilities and other sensitive information without exposing it to a wide audience. Concurrent with this campaign should be the stigmatization of those who recklessly disseminate sensitive information. This censure would be aided by the fact that many such people are unattractive figures whose writings betray their intellectual vanity. The public should be quick to furnish the opprobrium that presently escapes these individuals.

I don’t think it will work, and I don’t even think it’s possible in this international day and age, but it’s interesting to read the proposal.

Slashdot thread on the paper. Another article.

Posted on May 31, 2011 at 6:34 AM

RFID Tags Protecting Hotel Towels

The stealing of hotel towels isn’t a big problem in the scheme of world problems, but it can be expensive for hotels. Sure, we have moral prohibitions against stealing — that’ll prevent most people from stealing the towels. Many hotels put their name or logo on the towels. That works as a reputational societal security system; most people don’t want their friends to see obviously stolen hotel towels in their bathrooms. Sometimes, though, this has the opposite effect: making towels and other items into souvenirs of the hotel and thus more desirable to steal. It’s against the law to steal hotel towels, of course, but with the exception of large-scale thefts, the crime will never be prosecuted. (This might be different in third world countries. In 2010, someone was sentenced to three months in jail for stealing two towels from a Nigerian hotel.) The result is that more towels are stolen than hotels want. And for expensive resort hotels, those towels are expensive to replace.

The only thing left for hotels to do is take security into their own hands. One system that has become increasingly common is to set prices for towels and other items — this is particularly common with bathrobes — and charge the guest for them if they disappear from the rooms. This works with some things, but it’s too easy for the hotel to lose track of how many towels a guest has in his room, especially if piles of them are available at the pool.

A more recent system, still not widespread, is to embed washable RFID chips into the towels and track them that way. The one data point I have for this is an anonymous Hawaii hotel that claims they’ve reduced towel theft from 4,000 a month to 750, saving $16,000 in replacement costs monthly.

Assuming the RFID tags are relatively inexpensive and don’t wear out too quickly, that’s a pretty good security trade-off.
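To see how that trade-off pencils out, here’s a quick back-of-envelope sketch in Python. The theft counts and the $16,000 monthly savings are the hotel’s reported figures; the per-tag cost and the size of the towel inventory are my own guesses, marked as assumptions in the comments.

```python
# Back-of-envelope check of the hotel's reported RFID numbers.
# Reported: theft fell from 4,000 to 750 towels a month, saving
# $16,000 a month. Tag cost and inventory size are assumptions.

towels_stolen_before = 4000      # per month (reported)
towels_stolen_after = 750        # per month (reported)
monthly_savings = 16000          # dollars per month (reported)

towels_saved = towels_stolen_before - towels_stolen_after
implied_cost_per_towel = monthly_savings / towels_saved
print(f"Implied replacement cost per towel: ${implied_cost_per_towel:.2f}")
# -> about $4.92, plausible for a resort-quality towel

tag_cost = 1.50                  # dollars per washable tag (assumed)
towel_inventory = 20000          # towels to tag (assumed)
payback_months = (tag_cost * towel_inventory) / monthly_savings
print(f"Months to recoup the tagging investment: {payback_months:.1f}")
# -> under two months, under these assumptions
```

Under those assumptions the tags pay for themselves in about two months; even if the real per-tag cost were several times higher, the investment would still recoup within the year.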

Posted on May 11, 2011 at 11:01 AM

Medieval Tally Stick Discovered in Germany

Interesting:

The well-preserved tally stick was used in the Middle Ages to count the debts owed by the holder in a time when most people were unable to read or write.

“Debts would have been carved into the stick in the form of small notches. Then the stick would have been split lengthways, with the creditor and the borrower each keeping a half,” explained Hille.

The two halves would then be put together again on the day repayment was due in order to compare them, with both sides hoping that they matched.

Note the security built into this primitive contract system. Neither side can cheat — alter the notches — because if they do, the two sides won’t match. I wonder what the dispute resolution system was: what happened when the two sides didn’t match.

EDITED TO ADD (5/14): In comments, lollardfish answers my question: “One then gets accused of fraud in court. In most circumstances, local power/reputation wins in fraud cases, since it’s not about finding of fact but who do you trust.”
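The split stick is, in essence, a two-party integrity check: each side holds a record the other can’t alter without detection. Here’s a minimal sketch of that logic in Python; the function names and the notch encoding are mine, purely for illustration.

```python
# Toy model of the split tally stick: carving the debt and splitting
# the stick gives each party a matching record, and repayment day is
# a simple equality check. Names and encoding are illustrative only.

def split_tally(notches):
    """Carve the debt, then split the stick: each party keeps a half."""
    return tuple(notches), tuple(notches)

def halves_match(creditor_half, debtor_half):
    """Rejoin the halves on repayment day and compare them."""
    return creditor_half == debtor_half

creditor_half, debtor_half = split_tally([1, 1, 1])  # a three-notch debt

assert halves_match(creditor_half, debtor_half)      # honest case: they match

shaved = debtor_half[:-1]                            # debtor shaves off a notch
assert not halves_match(creditor_half, shaved)       # the fraud is detected
```

What the physical stick adds, and what a pair of digital copies can only approximate, is unforgeability: the wood grain across the split has to line up, so neither party can fabricate a fresh half that matches the other.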

Posted on May 10, 2011 at 1:47 PM

Status Report: The Dishonest Minority

Three months ago, I announced that I was writing a book on why security exists in human societies. This is basically the book’s thesis statement:

All complex systems contain parasites. In any system of cooperative behavior, an uncooperative strategy will be effective — and the system will tolerate the uncooperatives — as long as they’re not too numerous or too effective. Thus, as a species evolves cooperative behavior, it also evolves a dishonest minority that takes advantage of the honest majority. If individuals within a species have the ability to switch strategies, the dishonest minority will never be reduced to zero. As a result, the species simultaneously evolves two things: 1) security systems to protect itself from this dishonest minority, and 2) deception systems to successfully be parasitic.

Humans evolved along this path. The basic mechanism can be modeled simply. It is in our collective group interest for everyone to cooperate. It is in any given individual’s short-term self interest not to cooperate: to defect, in game theory terms. But if everyone defects, society falls apart. To ensure widespread cooperation and minimal defection, we collectively implement a variety of societal security systems.
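To make that model concrete, here is a minimal simulation sketch. It’s my own construction, not a model from the book: individuals drift toward whichever strategy currently pays better, and a “security penalty” imposed on defectors determines where the population settles.

```python
# Toy model of cooperation vs. defection (my construction, not the
# book's). Defecting pays more than cooperating unless society's
# security penalty is large enough; a small noise floor of habitual
# defectors means the dishonest minority never reaches zero.

def simulate(security_penalty, rounds=500, step=0.05, noise_floor=0.01):
    defector_share = 0.10            # start with a 10% dishonest minority
    for _ in range(rounds):
        cooperator_payoff = 1.0
        defector_payoff = 1.5 - security_penalty
        if defector_payoff > cooperator_payoff:
            defector_share += step * (1.0 - defector_share)  # defection spreads
        else:
            defector_share -= step * defector_share          # defection shrinks
        defector_share = max(noise_floor, min(1.0, defector_share))
    return defector_share

for penalty in (0.0, 0.75, 2.0):
    print(f"security penalty {penalty:.2f} -> defectors ~ {simulate(penalty):.0%}")
# With no penalty, defection takes over; with any sufficient penalty,
# it shrinks to the noise floor -- but never to zero.
```

However large the penalty, the simulated population settles at the noise floor rather than at zero, which matches the claim that no security system can bring the dishonest minority all the way down.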

Two of these systems evolved in prehistory: morals and reputation. Two others evolved as our social groups became larger and more formal: laws and technical security systems. What these security systems do, effectively, is give individuals incentives to act in the group interest. But none of these systems, with the possible exception of some fanciful science-fiction technologies, can ever bring that dishonest minority down to zero.

In complex modern societies, many complications intrude on this simple model of societal security. Decisions to cooperate or defect are often made by groups of people — governments, corporations, and so on — and there are important differences because of dynamics inside and outside the groups. Much of our societal security is delegated — to the police, for example — and becomes institutionalized; the dynamics of this are also important. Power struggles over who controls the mechanisms of societal security are inherent: “group interest” rapidly devolves to “the king’s interest.” Societal security can become a tool for those in power to remain in power, with the definition of “honest majority” being simply the people who follow the rules.

The term “dishonest minority” is not a moral judgment; it simply describes the minority that does not follow societal norms. Since many societal norms are in fact immoral, sometimes the dishonest minority serves as a catalyst for social change. Societies without a reservoir of people who don’t follow the rules lack an important mechanism for societal evolution. Vibrant societies need a dishonest minority; if society makes its dishonest minority too small, it stifles dissent as well as common crime.

At this point, I have most of a first draft: 75,000 words. The tentative title is still “The Dishonest Minority: Security and its Role in Modern Society.” I have signed a contract with Wiley to deliver a final manuscript in November for February 2012 publication. Writing a book is a process of exploration for me, and the final book will certainly be a little different — and maybe even very different — from what I wrote above. But that’s where I am today.

And it’s why my other writings continue to be sparse.

Posted on May 9, 2011 at 7:02 AM
