Entries Tagged "incentives"

Paying People to Infect their Computers

Research paper: “It’s All About The Benjamins: An empirical study on incentivizing users to ignore security advice,” by Nicolas Christin, Serge Egelman, Timothy Vidas, and Jens Grossklags.

Abstract: We examine the cost for an attacker to pay users to execute arbitrary code—potentially malware. We asked users at home to download and run an executable we wrote without being told what it did and without any way of knowing it was harmless. Each week, we increased the payment amount. Our goal was to examine whether users would ignore common security advice—not to run untrusted executables—if there was a direct incentive, and how much this incentive would need to be. We observed that for payments as low as $0.01, 22% of the people who viewed the task ultimately ran our executable. Once increased to $1.00, this proportion increased to 43%. We show that as the price increased, more and more users who understood the risks ultimately ran the code. We conclude that users are generally unopposed to running programs of unknown provenance, so long as their incentives exceed their inconvenience.

The experiment was run on Mechanical Turk, which means we don’t know who these people were or even if they were sitting at computers they owned (as opposed to, say, computers at an Internet cafe somewhere). But if you want to build a fair-trade botnet, this is a reasonable way to go about it.

Two articles.

Posted on June 19, 2014 at 6:28 AM

EU Might Raise Fines for Data Breaches

This makes a lot of sense.

Viviane Reding dismissed recent fines for Google as “pocket money” and said the firm would have had to pay $1bn under her plans for privacy failings.

Ms Reding said such punishments were necessary to ensure firms took the use of personal data seriously.

And she questioned how Google was able to take so long getting round to changing its policy.

“Is it surprising to anyone that two whole years after the case emerged, it is still unclear whether Google will amend its privacy policy or not?” she said in a speech.

Ms Reding, who is also vice-president of the European Commission, wants far tougher laws that would introduce fines of up to 5% of the global annual turnover of a company for data breaches.

If fines are intended to change corporate behavior, they need to be large enough so that avoiding them is a smarter business strategy than simply paying them.

Posted on January 28, 2014 at 6:47 AM

The Effect of Money on Trust

Money reduces trust in small groups, but increases it in larger groups. Basically, the introduction of money allows society to scale.

The team devised an experiment where subjects in small and large groups had the option to give gifts in exchange for tokens.

They found that there was a social cost to introducing this incentive. When all tokens were “spent”, a potential gift-giver was less likely to help than they had been in a setting where tokens had not yet been introduced.

The same effect was found in smaller groups, who were less generous when there was the option of receiving a token.

“Subjects basically latched on to monetary exchange, and stopped helping unless they received immediate compensation in a form of an intrinsically worthless object [a token].

“Using money does help large societies to achieve larger levels of co-operation than smaller societies, but it does so at a cost of displacing norms of voluntary help that are the bread and butter of smaller societies, in which everyone knows each other,” said Prof Camera.

But he said that this negative result was not found in larger anonymous groups of 32; instead, co-operation increased with the use of tokens.

“This is exciting because we introduced something that adds nothing to the economy, but it helped participants converge on a behaviour that is more trustworthy.”

He added that the study reflected monetary exchange in daily life: “Global interaction expands the set of trade opportunities, but it dilutes the level of information about others’ past behaviour. In this sense, one can view tokens in our experiment as a parable for global monetary exchange.”

Posted on September 5, 2013 at 1:57 PM

Snowden's Dead Man's Switch

Edward Snowden has set up a dead man’s switch. He’s distributed encrypted copies of his document trove to various people, and has set up some sort of automatic system to distribute the key, should something happen to him.

Dead man’s switches have a long history, both for safety (the machinery automatically stops if the operator’s hand goes slack) and for security. WikiLeaks did the same thing with the State Department cables.
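
The basic mechanism is easy to sketch. What follows is a minimal, hypothetical illustration—nothing here reflects Snowden’s actual setup—of a periodic job that releases a decryption key once the owner’s regular check-ins stop:

    # Hypothetical sketch of a dead man's switch, assuming the document trove
    # is already encrypted and the key sits in KEY_FILE. The file names and
    # the release mechanism are invented for illustration; a real setup would
    # distribute key shares to several people rather than trust one machine.
    import time

    CHECKIN_FILE = "last_checkin.txt"   # the owner updates this regularly
    KEY_FILE = "archive_key.bin"        # key protecting the encrypted documents
    DEADLINE = 7 * 24 * 3600            # a week of silence triggers release

    def check_in():
        """Called by the owner to prove they are still in control."""
        with open(CHECKIN_FILE, "w") as f:
            f.write(str(time.time()))

    def publish(key: bytes):
        """Placeholder: in practice, send the key to the document holders."""
        print("Releasing key:", key.hex())

    def maybe_release():
        """Run periodically (e.g., from cron); fires once check-ins stop."""
        with open(CHECKIN_FILE) as f:
            last_seen = float(f.read())
        if time.time() - last_seen > DEADLINE:
            with open(KEY_FILE, "rb") as f:
                publish(f.read())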

“It’s not just a matter of, if he dies, things get released, it’s more nuanced than that,” he said. “It’s really just a way to protect himself against extremely rogue behavior on the part of the United States, by which I mean violent actions toward him, designed to end his life, and it’s just a way to ensure that nobody feels incentivized to do that.”

I’m not sure he’s thought this through, though. I would be more worried that someone would kill me in order to get the documents released than I would be that someone would kill me to prevent the documents from being released. Any real-world situation involves multiple adversaries, and it’s important to keep all of them in mind when designing a security system.

Posted on July 18, 2013 at 8:37 AM

The Politics of Security in a Democracy

Terrorism causes fear, and we overreact to that fear. Our brains aren’t very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are, and we fear them more than probability indicates we should.

Our leaders are just as prone to this overreaction as we are. But aside from basic psychology, there are other reasons that it’s smart politics to exaggerate terrorist threats, and security threats in general.

The first is that we respond to a strong leader. Bill Clinton famously said: “When people feel uncertain, they’d rather have somebody that’s strong and wrong than somebody who’s weak and right.” He’s right.

The second is that doing something—anything—is good politics. A politician wants to be seen as taking charge, demanding answers, fixing things. It just doesn’t look as good to sit back and claim that there’s nothing to do. The logic is along the lines of: “Something must be done. This is something. Therefore, we must do it.”

The third is that the “fear preacher” wins, regardless of the outcome. Imagine two politicians today. One of them preaches fear and draconian security measures. The other is someone like me, who tells people that terrorism is a negligible risk, that risk is part of life, and that while some security is necessary, we should mostly just refuse to be terrorized and get on with our lives.

Fast-forward 10 years. If I’m right and there have been no more terrorist attacks, the fear preacher takes credit for keeping us safe. But if a terrorist attack has occurred, my government career is over. Even if the incidence of terrorism is as ridiculously low as it is today, there’s no benefit for a politician to take my side of that gamble.

The fourth and final reason is money. Every new security technology, from surveillance cameras to high-tech fusion centers to airport full-body scanners, has a for-profit corporation lobbying for its purchase and use. Given the three other reasons above, it’s easy—and probably profitable—for a politician to make them happy and say yes.

For any given politician, the implications of these four reasons are straightforward. Overestimating the threat is better than underestimating it. Doing something about the threat is better than doing nothing. Doing something that is explicitly reactive is better than being proactive. (If you’re proactive and you’re wrong, you’ve wasted money. If you’re proactive and you’re right but no longer in power, whoever is in power is going to get the credit for what you did.) Visible is better than invisible. Creating something new is better than fixing something old.

Those last two maxims are why it’s better for a politician to fund a terrorist fusion center than to pay for more Arabic translators for the National Security Agency. No one’s going to see the additional appropriation in the NSA’s secret budget. On the other hand, a high-tech computerized fusion center is going to make front page news, even if it doesn’t actually do anything useful.

This leads to another phenomenon about security and government. Once a security system is in place, it can be very hard to dislodge it. Imagine a politician who objects to some aspect of airport security: the liquid ban, the shoe removal, something. If he pushes to relax security, he gets the blame if something bad happens as a result. No one wants to roll back a police power and have the lack of that power cause a well-publicized death, even if it’s a one-in-a-billion fluke.

We’re seeing this force at work in the bloated terrorist no-fly and watch lists; agents have lots of incentive to put someone on the list, but absolutely no incentive to take anyone off. We’re also seeing this in the Transportation Security Administration’s attempt to reverse the ban on small blades on airplanes. Twice it tried to make the change, and twice fearful politicians prevented it from going through with it.

Lots of unneeded and ineffective security measures are perpetrated by a government bureaucracy that is primarily concerned about the security of its members’ careers. They know the voters are more likely to punish them if they fail to secure against a repetition of the last attack, and less likely if they fail to anticipate the next one.

What can we do? Well, the first step toward solving a problem is recognizing that you have one. These are not iron-clad rules; they’re tendencies. If we can keep these tendencies and their causes in mind, we’re more likely to end up with sensible security measures that are commensurate with the threat, instead of a lot of security theater and draconian police powers that are not.

Our leaders’ job is to resist these tendencies. Our job is to support politicians who do resist.

This essay originally appeared on CNN.com.

EDITED TO ADD (6/4): This essay has been translated into Swedish.

EDITED TO ADD (6/14): A similar essay, on the politics of terrorism defense.

Posted on May 28, 2013 at 5:09 AM

One-Shot vs. Iterated Prisoner's Dilemma

This post by Aleatha Parker-Wood is very applicable to the things I wrote in Liars & Outliers:

A lot of fundamental social problems can be modeled as a disconnection between people who believe (correctly or incorrectly) that they are playing a non-iterated game (in the game theory sense of the word), and people who believe (correctly or incorrectly) that they are playing an iterated game.

For instance, mechanisms such as reputation mechanisms, ostracism, shaming, etc., are all predicated on the idea that the person you’re shaming will reappear and have further interactions with the group. Legal punishment is only useful if you can catch the person, and if the cost of the punishment is more than the benefit of the crime.

If it is possible to act as if the game you are playing is a one-shot game (for instance, you have a very large population to hide in, you don’t need to ever interact with people again, or you can be anonymous), your optimal strategies are going to be different than if you will have to play the game many times, and live with the legal or social consequences of your actions. If you can make enough money as CEO to retire immediately, you may choose to do so, even if you’re so terrible at running the company that no one will ever hire you again.

Social cohesion can be thought of as a manifestation of how “iterated” people feel their interactions are, how likely they are to interact with the same people again and again and have to deal with long term consequences of locally optimal choices, or whether they feel they can “opt out” of consequences of interacting with some set of people in a poor way.
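
To make the one-shot/iterated distinction concrete, here is a minimal sketch using standard textbook payoffs (the numbers are illustrative, not from the post): in a single round, defection strictly dominates, but over many rounds against a tit-for-tat opponent, consistent cooperation earns far more.

    # Minimal prisoner's dilemma sketch with standard textbook payoffs.
    # Each entry maps (my_move, their_move) to (my_points, their_points).
    PAYOFF = {
        ("C", "C"): (3, 3),  # mutual cooperation
        ("C", "D"): (0, 5),  # I cooperate, they defect
        ("D", "C"): (5, 0),  # I defect, they cooperate
        ("D", "D"): (1, 1),  # mutual defection
    }

    def iterated_score(my_strategy, rounds=100):
        """Play repeatedly against tit-for-tat: the opponent copies my last move."""
        total, their_move = 0, "C"
        for _ in range(rounds):
            my_move = my_strategy(their_move)
            total += PAYOFF[(my_move, their_move)][0]
            their_move = my_move  # tit-for-tat mirrors whatever I just did
        return total

    always_defect = lambda _their_move: "D"
    always_cooperate = lambda _their_move: "C"

    # One-shot: defection strictly dominates (5 > 3 against "C", 1 > 0 against "D").
    # Iterated: cooperation earns ~3 per round; defection locks in ~1 per round.
    print(iterated_score(always_cooperate), iterated_score(always_defect))  # 300 vs 104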

Posted on May 23, 2013 at 9:18 AM

Security Externalities and DDOS Attacks

Ed Felten has a really good blog post about the externalities that the recent Spamhaus DDOS attack exploited:

The attackers’ goal was to flood Spamhaus or its network providers with Internet traffic, to overwhelm their capacity to handle incoming network packets. The main technical problem faced by a DoS attacker is how to amplify the attacker’s traffic-sending capacity, so that the amount of traffic arriving at the target is much greater than the attacker can send himself. To do this, the attacker typically tries to induce many computers around the Internet to send large amounts of traffic to the target.

The first stage of the attack involved the use of a botnet, consisting of a large number of software agents surreptitiously installed on the computers of ordinary users. These bots were commanded to send attack traffic. Notice how this amplifies the attacker’s traffic-sending capability: by sending a few commands to the botnet, the attacker can induce the botnet to send large amounts of attack traffic. This step exploits our first externality: the owners of the bot-infected computers might have done more to prevent the infection, but the harm from this kind of attack activity falls onto strangers, so the computer owners had a reduced incentive to prevent it.

Rather than having the bots send traffic directly to Spamhaus, the attackers used another step to further amplify the volume of traffic. They had the bots send queries to DNS proxies across the Internet (which answer questions about how machine names like www.freedom-to-tinker.com related to IP addresses like 209.20.73.44). This amplifies traffic because the bots can send a small query that elicits a large response message from the proxy.

Here is our second externality: the existence of open DNS proxies that will respond to requests from anywhere on the Internet. Many organizations run DNS proxies for use by their own people. A well-managed DNS proxy is supposed to check that requests are coming from within the same organization; but many proxies fail to check this—they’re “open” and will respond to requests from anywhere. This can lead to trouble, but the resulting harm falls mostly on people outside the organization (e.g. Spamhaus) so there isn’t much incentive to take even simple steps to prevent it.

To complete the attack, the DNS requests were sent with false return addresses, saying that the queries had come from Spamhaus—which causes the DNS proxies to direct their large response messages to Spamhaus.

Here is our third externality: the failure to detect packets with forged return addresses. When a packet with a false return address is injected, it’s fairly easy for the originating network to detect this: if a packet comes from inside your organization, but it has a return address that is outside your organization, then the return address must be forged and the packet should be discarded. But many networks fail to check this. This causes harm but—you guessed it—the harm falls outside the organization, so there isn’t much incentive to check. And indeed, this kind of packet filtering has long been considered a best practice but many networks still fail to do it.
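
The check Felten describes in that third point is simple to express. A minimal sketch of the filtering rule, using a documentation prefix purely as an example of an organization’s address block:

    # Sketch of the source-address check Felten describes: a packet leaving
    # your network whose source address isn't one of yours must be forged
    # and should be dropped. The /24 below is an illustrative example.
    from ipaddress import ip_address, ip_network

    OUR_PREFIX = ip_network("203.0.113.0/24")  # the organization's address block

    def should_forward(source_address: str) -> bool:
        """Forward an outbound packet only if it claims an address we own."""
        return ip_address(source_address) in OUR_PREFIX

    print(should_forward("203.0.113.17"))   # True: legitimate source
    print(should_forward("198.51.100.9"))   # False: spoofed, discard it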

I’ve been writing about security externalities for years. They’re often much harder to solve than technical problems.

By the way, a lot of the hype surrounding this attack was media manipulation.

Posted on April 10, 2013 at 12:46 PM

Getting Security Incentives Right

One of the problems with motivating proper security behavior within an organization is that the incentives are all wrong. It doesn’t matter how much management tells employees that security is important; employees know when it really isn’t—when getting the job done cheaply and on schedule is much more important.

It seems to me that his co-workers understand the risks better than he does. They know what the real risks are at work, and that they all revolve around not getting the job done. Those risks are real and tangible, and employees feel them all the time. The risks of not following security procedures are much less real. Maybe the employee will get caught, but probably not. And even if he does get caught, the penalties aren’t serious.

Given this accurate risk analysis, any rational employee will regularly circumvent security to get his or her job done. That’s what the company rewards, and that’s what the company actually wants.

“Fire someone who breaks security procedure, quickly and publicly,” I suggested to the presenter. “That’ll increase security awareness faster than any of your posters or lectures or newsletters.” If the risks are real, people will get it.

Similarly, there’s supposedly an old Chinese proverb that goes “hang one, warn a thousand.” Or to put it another way, we’re really good at risk management. And there’s John Byng, whose execution gave rise to the Voltaire quote (originally in French): “in this country, it is good to kill an admiral from time to time, in order to encourage the others.”

I thought of all this when I read about the new security procedures surrounding the upcoming papal election:

According to the order, which the Vatican made available in English on Monday afternoon, those few who are allowed into the secret vote to act as aides will be required to take an oath of secrecy.

“I will observe absolute and perpetual secrecy with all who are not part of the College of Cardinal electors concerning all matters directly or indirectly related to the ballots cast and their scrutiny for the election of the Supreme Pontiff,” the oath reads.

“I declare that I take this oath fully aware that an infraction thereof will make me subject to the penalty of excommunication ‘latae sententiae’, which is reserved to the Apostolic See,” it continues.

Excommunication is like being fired, only it lasts for eternity.

I’m not optimistic about the College of Cardinals being able to maintain absolute secrecy during the election, because electronic devices have become so small, and electronic communications so ubiquitous. Unless someone wins on one of the first ballots—a 2/3 majority is required to elect the next pope, so if the various factions entrench they could be at it for a while—there are going to be leaks. Perhaps accidental, perhaps strategic: these cardinals are fallible men, after all.

Posted on March 4, 2013 at 6:38 AM
