Entries Tagged "incentives"

Page 6 of 14

IT Security: Blaming the Victim

Blaming the victim is common in IT: users are to blame because they don’t patch their systems, choose lousy passwords, fall for phishing attacks, and so on. But, while users are, and will continue to be, a major source of security problems, focusing on them is an unhelpful way to think.

People regularly don’t do things they are supposed to: changing the oil in their cars, going to the dentist, replacing the batteries in their smoke detectors. Why? Because people learn from experience. If something is immediately harmful, e.g., touching a hot stove or petting a live tiger, they quickly learn not to do it. But if someone skips an oil change, ignores a computer patch, or chooses a lousy password, it’s unlikely to matter. No feedback, no learning.

We’ve tried to solve this in several ways. We give people rules of thumb: oil change every 5,000 miles; secure password guidelines. Or we send notifications: smoke alarms beep at us, dentists send postcards, Google warns us if we are about to visit a website suspected of hosting malware. But, again, the effects of ignoring these aren’t generally felt immediately.

This makes security primarily a hindrance to the user. It’s a recurring obstacle: something that interferes with the seamless performance of the user’s task. And it’s human nature, wired into our reasoning skills, to remove recurring obstacles. So, if the consequences of bypassing security aren’t obvious, then people will naturally do it.

This is the problem with Microsoft’s User Account Control (UAC). Introduced in Vista, the idea is to improve security by limiting the privileges applications have when they’re running. But the security prompts pop up too frequently, and there’s rarely any ill effect from ignoring them. So people do ignore them.

This doesn’t mean user education is worthless. On the contrary, user education is an important part of any corporate security program. And at home, the more users understand security threats and hacker tactics, the more secure their systems are likely to be. But we should also recognise the limitations of education.

The solution is to design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.

For example, we all know that backups are a good thing. But if you forget to do a backup this week, nothing terrible happens. In fact, nothing terrible happens for years on end when you forget. So, despite what you know, you start believing that backups aren’t really that important. Apple got the solution right with its backup utility Time Machine. Install it, plug in an external hard drive, and you are automatically backed up against hardware failure and human error. It’s easier to use it than not.

For its part, Microsoft has made great strides in securing its operating system, providing default security settings in Windows XP and even more in Windows Vista to ensure that, when a naive user plugs a computer in, it’s not defenceless.

Unfortunately, blaming the user can be good business. Mobile phone companies save money if they can bill their customers when a calling card number is stolen and used fraudulently. British banks save money by blaming users when they are victims of chip-and-pin fraud. This is continuing, with some banks going so far as to accuse the victim of perpetrating the fraud, despite evidence of large-scale fraud by organised crime syndicates.

The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.

This essay originally appeared in The Guardian.

Posted on March 12, 2009 at 12:39 PM

More European Chip and Pin Insecurity

“Optimised to Fail: Card Readers for Online Banking,” by Saar Drimer, Steven J. Murdoch, and Ross Anderson.

Abstract

The Chip Authentication Programme (CAP) has been introduced by banks in Europe to deal with the soaring losses due to online banking fraud. A handheld reader is used together with the customer’s debit card to generate one-time codes for both login and transaction authentication. The CAP protocol is not public, and was rolled out without any public scrutiny. We reverse engineered the UK variant of card readers and smart cards and here provide the first public description of the protocol. We found numerous weaknesses that are due to design errors such as reusing authentication tokens, overloading data semantics, and failing to ensure freshness of responses. The overall strategic error was excessive optimisation. There are also policy implications. The move from signature to PIN for authorising point-of-sale transactions shifted liability from banks to customers; CAP introduces the same problem for online banking. It may also expose customers to physical harm.
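CAP itself is proprietary, and the sketch below is not its actual algorithm. But to illustrate the "freshness" property the authors say the readers get wrong, here is a standard HOTP-style one-time code (RFC 4226), where an ever-incrementing counter is what guarantees each code is used only once:

```python
import hashlib
import hmac
import struct

def one_time_code(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP-style one-time code (RFC 4226): HMAC-SHA1 over a counter,
    dynamically truncated to a short decimal code."""
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"card-issuer-shared-key"  # hypothetical key for illustration
# A fresh counter yields a fresh code; reusing a counter repeats the code.
print(one_time_code(secret, 1))
print(one_time_code(secret, 2))
print(one_time_code(secret, 1) == one_time_code(secret, 1))  # True
```

If the counter (or a nonce in a challenge-response variant) is ever reused, the code repeats and can be replayed, which is the class of design error the paper describes as failing to ensure freshness of responses.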

EDITED TO ADD (3/12): More info.

Posted on March 5, 2009 at 12:45 PM

Perverse Security Incentives

An employee of Whole Foods in Ann Arbor, Michigan, was fired in 2007 for apprehending a shoplifter. More specifically, he was fired for touching a customer, even though that customer had a backpack filled with stolen groceries and was running away with them.

I regularly see security decisions that, like the Whole Foods incident, seem to make absolutely no sense. However, in every case, the decisions actually make perfect sense once you understand the underlying incentives driving the decision. All security decisions are trade-offs, but the motivations behind them are not always obvious: They’re often subjective, and driven by external incentives. And often security trade-offs are made for nonsecurity reasons.

Almost certainly, Whole Foods has a no-touching-the-customer policy because its attorneys recommended it. “No touching” is a security measure as well, but it’s security against customer lawsuits. The cost of these lawsuits would be much, much greater than the $346 worth of groceries stolen in this instance. Even applied to suspected shoplifters, the policy makes sense: The cost of a lawsuit resulting from tackling an innocent shopper by mistake would be far greater than the cost of letting actual shoplifters get away. As perverse as it may seem, the result is completely reasonable given the corporate incentives—Whole Foods wrote a corporate policy that benefited itself.
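The back-of-the-envelope arithmetic behind such a policy is easy to sketch. Only the $346 figure comes from the incident; the lawsuit cost and probability below are hypothetical numbers chosen for illustration:

```python
# Expected-cost comparison behind a no-touching policy.
# Only the $346 grocery figure is from the incident; the lawsuit
# cost and probability are hypothetical.
stolen_goods = 346          # loss if the shoplifter simply gets away
lawsuit_cost = 500_000      # hypothetical cost of one injury lawsuit
p_lawsuit = 0.01            # hypothetical chance a physical stop ends in a suit

expected_cost_of_stopping = p_lawsuit * lawsuit_cost   # 5000.0
expected_cost_of_letting_go = stolen_goods             # 346

print(expected_cost_of_stopping > expected_cost_of_letting_go)  # True
```

Under any numbers in this ballpark, letting the shoplifter go is the cheaper option for the store, which is the whole point: the policy is rational given the incentives.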

At least, it works as long as the police and other factors keep society’s shoplifter population down to a reasonable level.

Incentives explain much that is perplexing about security trade-offs. Why does King County, Washington, require one form of ID to get a concealed-carry permit, but two forms of ID to pay for the permit by check? Making a mistake on a gun permit is an abstract problem, but a bad check actually costs some department money.

In the decades before 9/11, why did the airlines fight every security measure except the photo-ID check? Increased security annoys their customers, but the photo-ID check solved a security problem of a different kind: the resale of nonrefundable tickets. So the airlines were on board for that one.

And why does the TSA confiscate liquids at airport security, on the off chance that a terrorist will try to make a liquid explosive instead of using the more common solid ones? Because the officials in charge opted for CYA security: measures that defend against specific, known tactics rather than broad, general threats.

The same misplaced incentives explain the ongoing problem of innocent prisoners spending years in places like Guantanamo and Abu Ghraib. The solution might seem obvious: Release the innocent ones, keep the guilty ones, and figure out whether the ones we aren’t sure about are innocent or guilty. But the incentives are more perverse than that. Who is going to sign the order releasing one of those prisoners? Which military officer is going to accept the risk, no matter how small, of being wrong?

I read almost five years ago that prisoners were being held by the United States far longer than they should be, because “no one wanted to be responsible for releasing the next Osama bin Laden.” That incentive to do nothing hasn’t changed. It might have even gotten stronger, as these innocents languish in prison.

In all these cases, the best way to change the trade-off is to change the incentives. Look at why the Whole Foods case works. Store employees don’t have to apprehend shoplifters, because society created a special organization specifically authorized to lay hands on people the grocery store points to as shoplifters: the police. If we want more rationality out of the TSA, there needs to be someone with a broader perspective willing to deal with general threats rather than specific targets or tactics.

For prisoners, society has created a special organization specifically entrusted with the role of judging the evidence against them and releasing them if appropriate: the judiciary. It’s only because the George W. Bush administration decided to remove the Guantanamo prisoners from the legal system that we are now stuck with these perverse incentives. Our country would be smart to move as many of these people through the court system as we can.

This essay originally appeared on Wired.com.

Posted on March 2, 2009 at 7:10 AM

BitArmor's No-Breach Guarantee

BitArmor now comes with a security guarantee. They even use me to tout it:

“We think this guarantee is going to encourage others to offer similar ones. Bruce Schneier has been calling on the industry to do something like this for a long time,” he [BitArmor’s CEO] says.

Sounds good, until you read the fine print:

If your company has to publicly report a breach while your data is protected by BitArmor, we’ll refund the purchase price of your software. It’s that simple. No gimmicks, no hassles.

[…]

BitArmor cannot be held accountable for data breaches, publicly or otherwise.

So if BitArmor fails and someone steals your data, and then you get ridiculed in the press, sued, and lose your customers to competitors—BitArmor will refund the purchase price.

Bottom line: PR gimmick, nothing more.

Yes, I think that software vendors need to accept liability for their products, and that we won’t see real improvements in security until then. But it has to be real liability, not this sort of token liability. And it won’t happen without the insurance companies; that’s the industry that knows how to buy and sell liability.

EDITED TO ADD (2/13): BitArmor responds.

Posted on January 23, 2009 at 10:35 AM

New Police Computer System Impeding Arrests

In Queensland, Australia, policemen are arresting fewer people because their new data-entry system is too annoying:

He said police were growing reluctant to make arrests following the latest phased roll-out of QPRIME, or Queensland Police Records Information Management Exchange.

“They are reluctant to make arrests and they’re showing a lot more discretion in the arrests they make because QPRIME is so convoluted to navigate,” Mr Leavers said. He said minor street offences, some traffic offences and minor property matters were going unchallenged, but not serious offences.

However, Mr Leavers said there had been occasions where offenders were released rather than kept in custody because of the length of time it now took to prepare court summaries.

“There was an occasion where two people were arrested on multiple charges. It took six detectives more than six hours to enter the details into QPRIME,” he said. “It would have taken even longer to do the summary to go to court the next morning, so basically the suspects were released on bail, rather than kept in custody.”

He said jobs could now take up to seven hours to process because of the amount of data entry involved.

This is a good example of how non-security incentives affect security decisions.

Posted on January 22, 2009 at 1:51 PM

Breach Notification Laws

There are three reasons for breach notification laws. One, it’s common politeness that when you lose something of someone else’s, you tell him. The prevailing corporate attitude before the law—”They won’t notice, and if they do notice they won’t know it’s us, so we are better off keeping quiet about the whole thing”—is just wrong. Two, it provides statistics to security researchers as to how pervasive the problem really is. And three, it forces companies to improve their security.

That last point needs a bit of explanation. The problem with companies protecting your data is that it isn’t in their financial best interest to do so. That is, the companies are responsible for protecting your data, but bear none of the costs if your data is compromised. You suffer the harm, but you have no control—or even knowledge—of the company’s security practices. The idea behind such laws, and how they were sold to legislators, is that they would increase the cost—both in bad publicity and the actual notification—of security breaches, motivating companies to spend more to prevent them. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.

So how has it worked?

Earlier this year, three researchers at the Heinz School of Public Policy and Management at Carnegie Mellon University—Sasha Romanosky, Rahul Telang and Alessandro Acquisti—tried to answer that question. They looked at reported data breaches and rates of identity theft from 2002 to 2007, comparing states with a law to states without one. If these laws had their desired effect, people in states with notification laws should experience fewer incidents of identity theft. The result: not so much. The researchers found data breach notification laws reduced identity theft by just 2 percent on average.

I think there’s a combination of things going on. Identity theft is reported far more today than it was five years ago, making it difficult to compare rates before and after the state laws were enacted. Most identity theft occurs when someone’s home or work computer is compromised, not through theft from large corporate databases, so these laws address only a small part of the problem. And many of the security improvements companies made in response didn’t do much to reduce identity theft.

The laws rely on public shaming. It’s embarrassing to have to admit to a data breach, and companies should be willing to spend to avoid this PR expense. The problem is, in order for this to work well, public shaming needs the cooperation of the press. And there’s an attenuation effect going on. The first major breach after the first state disclosure law was in February 2005 in California, when ChoicePoint sold personal data on 145,000 people to criminals. The event was big news, ChoicePoint’s stock tanked, and it was shamed into improving its security.

Next, LexisNexis exposed personal data on 300,000 individuals, and then Citigroup lost data on 3.9 million. The law worked; the only reason we knew about these security breaches at all was the law. But the breaches came in increasing numbers and in larger quantities, data breach stories began to feel like “crying wolf,” and soon data breaches were no longer news.

Today, the remaining cost is that of the direct mail campaign to notify customers, which often turns into a marketing opportunity.

I’m still a fan of these laws, if only for the first two reasons I listed. Disclosure is important, but it’s not going to solve identity theft. As I’ve written previously, the reason theft of personal information is common is that the data is valuable once stolen. The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it’s to make it difficult to use.

Disclosure laws only deal with the economic externality of data owners protecting your personal information. What we really need are laws prohibiting financial institutions from granting credit to someone using your name with only a minimum of authentication.

This is the second half of a point/counterpoint with Marcus Ranum. Marcus’s essay is here.

Posted on January 21, 2009 at 6:59 AM

Economic Distress and Fear

This was the Quotation of the Day from January 12:

Part of the debtor mentality is a constant, frantically suppressed undercurrent of terror. We have one of the highest debt-to-income ratios in the world, and apparently most of us are two paychecks from the street. Those in power—governments, employers—exploit this, to great effect. Frightened people are obedient—not just physically, but intellectually and emotionally. If your employer tells you to work overtime, and you know that refusing could jeopardize everything you have, then not only do you work the overtime, but you convince yourself that you’re doing it voluntarily, out of loyalty to the company; because the alternative is to acknowledge that you are living in terror. Before you know it, you’ve persuaded yourself that you have a profound emotional attachment to some vast multinational corporation: you’ve indentured not just your working hours, but your entire thought process. The only people who are capable of either unfettered action or unfettered thought are those who—either because they’re heroically brave, or because they’re insane, or because they know themselves to be safe—are free from fear.

The quote is from The Likeness, a novel set in Ireland, by Tana French.

Posted on January 15, 2009 at 6:14 AM

RIAA Lawsuits May Be Unconstitutional

Harvard law professor Charles Nesson is arguing, in court, that the Digital Theft Deterrence and Copyright Damages Improvement Act of 1999 is unconstitutional:

He makes the argument that the Digital Theft Deterrence and Copyright Damages Improvement Act of 1999 is very much unconstitutional, in that its hefty fines for copyright infringement (misleadingly called “theft” in the title of the bill) show that the bill is effectively a criminal statute, yet for a civil crime. That’s because it really focuses on punitive damages, rather than making private parties whole again. Even worse, it puts the act of enforcing the criminal statute in the hands of a private body (the RIAA) who uses it for profit motive in being able to get hefty fines.

Imagine a statute which, in the name of deterrence, provides for a $750 fine for each mile-per-hour that a driver exceeds the speed limit, with the fine escalating to $150,000 per mile over the limit if the driver knew he or she was speeding. Imagine that the fines are not publicized, and most drivers do not know they exist. Imagine that enforcement of the fines is put in the hands of a private, self-interested police force, that has no political accountability, that can pursue any defendant it chooses at its own whim, that can accept or reject payoffs in exchange for not prosecuting the tickets, and that pockets for itself all payoffs and fines. Imagine that a significant percentage of these fines were never contested, regardless of whether they had merit, because the individuals being fined have limited financial resources and little idea of whether they can prevail in front of an objective judicial body.

Another news story.

Posted on November 19, 2008 at 1:33 PM

