Entries Tagged "incentives"

Changing Incentives Creates Security Risks

One of the things I am writing about in my new book is how security equilibriums change. They often change because of technology, but they sometimes change because of incentives.

An interesting example of this is the recent scandal in the Washington, DC, public school system over teachers changing their students’ test answers.

In the U.S., under the No Child Left Behind Act, students have to pass certain tests; otherwise, schools are penalized. In the District of Columbia, things went further. Michelle Rhee, chancellor of the public school system from 2007 to 2010, offered teachers $8,000 bonuses—and threatened them with termination—for improving test scores. Scores did increase significantly during the period, and the schools were held up as examples of how incentives affect teaching behavior.

It turns out that a lot of those score increases were faked. In addition to teaching students, teachers cheated on their students’ tests by changing wrong answers to correct ones. That’s how the cheating was discovered; researchers looked at the actual test papers and found more erasures than usual, and many more erasures from wrong answers to correct ones than could be explained by anything other than deliberate manipulation.
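The forensic signal here is statistical. A minimal sketch of such an outlier test (the data, function name, and threshold are invented for illustration; the actual investigations used more sophisticated analyses):

```python
from statistics import median

def flag_anomalous_classrooms(wtr_counts, threshold=3.5):
    """Flag classrooms whose wrong-to-right (WTR) erasure counts are extreme
    outliers, using the median-based modified z-score -- a single cheating
    classroom can't skew the median the way it skews a mean/stdev test."""
    values = list(wtr_counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)  # median absolute deviation
    if mad == 0:
        return {}
    scores = {room: 0.6745 * (count - med) / mad
              for room, count in wtr_counts.items()}
    return {room: round(z, 1) for room, z in scores.items() if z > threshold}

# Hypothetical counts: most classrooms show a handful of WTR erasures;
# one shows far more than chance can explain.
counts = {"room_101": 4, "room_102": 6, "room_103": 5,
          "room_104": 3, "room_105": 41, "room_106": 5}
print(flag_anomalous_classrooms(counts))   # {'room_105': 24.3}
```

The robust (median-based) statistic matters: a classroom with wholesale answer-changing would inflate an ordinary standard deviation enough to hide itself.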

Teachers were always able to manipulate their students’ test answers, but before, there wasn’t much incentive to do so. With Rhee’s changes, there was a much greater incentive to cheat.

The point is that whatever security measures were in place to prevent teacher cheating before the financial incentives and threats of firing weren't sufficient to prevent it afterwards. Because Rhee significantly increased the costs of cooperation (by threatening to fire teachers of poorly performing students) and increased the benefits of defection ($8,000), she created a security risk. And she should have increased security measures to restore balance to those incentives.

This is not isolated to DC. It has happened elsewhere as well.

Posted on April 14, 2011 at 6:36 AM

Reducing Bribery by Legalizing the Giving of Bribes

Here’s some very clever thinking from India’s chief economic adviser. In order to reduce bribery, he proposes legalizing the giving of bribes:

Under the current law, discussed in some detail in the next section, once a bribe is given, the bribe giver and the bribe taker become partners in crime. It is in their joint interest to keep this fact hidden from the authorities and to be fugitives from the law, because, if caught, both expect to be punished. Under the kind of revised law that I am proposing here, once a bribe is given and the bribe giver collects whatever she is trying to acquire by giving the money, the interests of the bribe taker and bribe giver become completely orthogonal to each other. If caught, the bribe giver will go scot free and will be able to collect his bribe money back. The bribe taker, on the other hand, loses the booty of bribe and faces a hefty punishment.

Hence, in the post-bribe situation it is in the interest of the bribe giver to have the bribe taker caught. Since the bribe giver will cooperate with the law, the chances are much higher of the bribe taker getting caught. In fact, it will be in the interest of the bribe giver to have the taker get caught, since that way the bribe giver can get back the money she gave as bribe. Since the bribe taker knows this, he will be much less inclined to take the bribe in the first place. This establishes that there will be a drop in the incidence of bribery.
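The proposal's incentive shift is easy to see with toy numbers (the bribe size, fine, and detection probabilities below are invented for illustration): the only thing the law change alters is how likely the taker is to be caught, because the giver now has a reason to report.

```python
def taker_expected_payoff(bribe, p_caught, fine):
    """Expected value to the official of taking the bribe: keep the bribe,
    but with probability p_caught lose it and pay a fine."""
    return bribe - p_caught * (bribe + fine)

# Under the current law the giver is also punishable, so neither party
# reports and detection stays rare.
current = taker_expected_payoff(bribe=100, p_caught=0.05, fine=500)

# Under the proposed law the giver goes scot-free and recovers the bribe
# by reporting, so detection becomes far more likely.
proposed = taker_expected_payoff(bribe=100, p_caught=0.60, fine=500)

print(round(current), round(proposed))   # 70 -260
```

With a higher chance of being caught, taking the bribe flips from profitable to strongly negative, which is exactly the deterrent the adviser is counting on.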

He notes that this only works for a certain class of bribes: when you have to bribe officials for something you are already entitled to receive. It won’t work for any long-term bribery relationship, or in any situation where the briber would otherwise not want the bribe to become public.

News article.

Posted on April 5, 2011 at 8:46 AM

Ebook Fraud

Interesting post—and discussion—on Making Light about ebook fraud. Currently there are two types of fraud. The first is content farming, discussed in these two interesting blog posts. People are creating automatically generated content, web-collected content, or fake content, turning it into a book, and selling it on an ebook site like Amazon.com. Then they use multiple identities to give it good reviews. (If it gets a bad review, the scammer just relists the same content under a new name.) That second blog post contains a screen shot of something called “Autopilot Kindle Cash,” which promises to teach people how to post dozens of ebooks to Amazon.com per day.

The second type of fraud is stealing a book and selling it as an ebook. So someone could scan a real book and sell it on an ebook site, even though he doesn’t own the copyright. It could be a book that isn’t already available as an ebook, or it could be a “low cost” version of a book that is already available. Amazon doesn’t seem particularly motivated to deal with this sort of fraud. And it too is suitable for automation.

Broadly speaking, there’s nothing new here. All complex ecosystems have parasites, and every open communications system we’ve ever built gets overrun by scammers and spammers. Far from making editors superfluous, systems that democratize publishing have an even greater need for editors. The solutions are not new, either: reputation-based systems, trusted recommenders, white lists, takedown notices. Google has implemented a bunch of security countermeasures against content farming; ebook sellers should implement them as well. It’ll be interesting to see what particular sort of mix works in this case.
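As a toy example of one such reputation-based check (the data model and threshold here are invented for illustration), a seller could flag books whose reviews come mostly from accounts with no review history beyond that one author:

```python
def suspicious_books(book_reviews, reviewer_history, max_ring_ratio=0.5):
    """Flag books where most reviewers have no review history beyond this
    book's author -- a classic signature of sock-puppet accounts.

    book_reviews:     book title -> (author, list of reviewer ids)
    reviewer_history: reviewer id -> set of all authors they have reviewed
    """
    flagged = {}
    for book, (author, reviewers) in book_reviews.items():
        if not reviewers:
            continue
        ring = sum(1 for r in reviewers
                   if reviewer_history.get(r, set()) <= {author})
        ratio = ring / len(reviewers)
        if ratio > max_ring_ratio:
            flagged[book] = round(ratio, 2)
    return flagged

# Hypothetical accounts: "bot1" and "bot2" have reviewed nothing but the
# scammer's own catalog.
history = {"alice": {"AuthorX", "AuthorY"}, "bob": {"AuthorY", "Scammer"},
           "bot1": {"Scammer"}, "bot2": {"Scammer"}}
books = {"Real Novel": ("AuthorX", ["alice"]),
         "Autogen Junk": ("Scammer", ["bot1", "bot2", "bob"])}
print(suspicious_books(books, history))   # {'Autogen Junk': 0.67}
```

A real deployment would of course weigh account age, purchase history, and review timing as well; the point is only that the signals exist in data the ebook sellers already hold.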

Posted on April 4, 2011 at 9:18 AM

Comodo Group Issues Bogus SSL Certificates

This isn’t good:

The hacker, whose March 15 attack was traced to an IP address in Iran, compromised a partner account at the respected certificate authority Comodo Group, which he used to request eight SSL certificates for six domains: mail.google.com, www.google.com, login.yahoo.com, login.skype.com, addons.mozilla.org and login.live.com.

The certificates would have allowed the attacker to craft fake pages that would have been accepted by browsers as the legitimate websites. The certificates would have been most useful as part of an attack that redirected traffic intended for Skype, Google and Yahoo to a machine under the attacker’s control. Such an attack can range from small-scale Wi-Fi spoofing at a coffee shop all the way to global hijacking of internet routes.

At a minimum, the attacker would then be able to steal login credentials from anyone who entered a username and password into the fake page, or perform a “man in the middle” attack to eavesdrop on the user’s session.

More news articles. Comodo announcement.

Fake certs for Google, Yahoo, and Skype? Wow.

This isn’t the first time Comodo has screwed up with certificates. The safest thing for us users to do would be to remove the Comodo root certificate from our browsers so that none of their certificates work, but we don’t have the capability to do that. The browser companies—Microsoft, Mozilla, Opera, etc.—could do that, but my guess is they won’t. The economic incentives don’t work properly. Comodo is likely to sue any browser company that takes this sort of action, and so might Comodo’s customers. So it’s smarter for the browser companies to just ignore the issue and pass the problem to us users.

Posted on March 31, 2011 at 7:00 AM

Biometric Wallet

Not an electronic wallet, a physical one:

Virtually indestructible, the dunhill Biometric Wallet will open only with a touch of your fingerprint.

It can be linked via Bluetooth to the owner’s mobile phone, sounding an alarm if the two are separated by more than 5 metres! This provides a brilliant warning if either the phone or wallet is stolen or misplaced. The exterior of the wallet is constructed from highly durable carbon fibre that will resist all but the most concerted effort to open it, while the interior features a luxurious leather credit card holder and a strong stainless steel money clip.

Only $825. News article.

I don’t think I understand the threat model. If your wallet is stolen, you’re going to replace all your ID cards and credit cards and you’re not going to get your cash back—whether it’s a normal wallet or this wallet. I suppose this wallet makes it less likely that someone will use your stolen credit cards quickly, before you cancel them. But you’re not going to be liable for that delay in any case.

Posted on February 18, 2011 at 1:45 PM

Cheating on Tests, by the Teachers

If you give people enough incentive to cheat, people will cheat:

Of all the forms of academic cheating, none may be as startling as educators tampering with children’s standardized tests. But investigations in Georgia, Indiana, Massachusetts, Nevada, Virginia and elsewhere this year have pointed to cheating by educators. Experts say the phenomenon is increasing as the stakes over standardized testing ratchet higher—including, most recently, taking student progress on tests into consideration in teachers’ performance reviews.

Posted on June 21, 2010 at 12:01 PM

Externalities and Identity Theft

Chris Hoofnagle has a new paper: “Internalizing Identity Theft.” Basically, he shows that one of the problems is that lenders extend credit even when credit applications are sketchy.

From an article on the work:

Using a 2003 amendment to the Fair Credit Reporting Act that allows victims of ID theft to ask creditors for the fraudulent applications submitted in their names, Mr. Hoofnagle worked with a small sample of six ID theft victims and delved into how they were defrauded.

Of 16 applications presented by imposters to obtain credit or medical services, almost all were rife with errors that should have suggested fraud. Yet in all 16 cases, credit or services were granted anyway.

In the various cases described in the paper, which was published on Wednesday in The U.C.L.A. Journal of Law and Technology, one victim found four of six fraudulent applications submitted in her name contained the wrong address; two contained the wrong phone number and one the wrong date of birth.

Another victim discovered that his imposter was 70 pounds heavier, yet successfully masqueraded as him using what appeared to be his stolen driver’s license, and in one case submitted an incorrect Social Security number.

This is a textbook example of an economic externality. Because most of the cost of identity theft is borne by the victim—even when a lender, if pushed, reimburses direct losses—the lenders make the trade-off that’s best for their business, and that means issuing credit even in marginal situations. They make more money that way.

If we want to reduce identity theft, the only solution is to internalize that externality. Either give victims the ability to sue lenders who issue credit in their names to identity thieves, or pass a law with penalties if lenders do this.

Among the ways to move the cost of the crime back to issuers of credit, Mr. Hoofnagle suggests that lenders contribute to a fund that will compensate victims for the loss of their time in resolving their ID theft problems.
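The externality argument reduces to expected-value arithmetic (all numbers below are invented for illustration): approving marginal applications is profitable exactly as long as the fraud costs land on someone else.

```python
def lender_profit_per_marginal_app(revenue, fraud_rate, fraud_loss_to_lender):
    """Expected profit from approving one marginal, error-ridden application."""
    return (1 - fraud_rate) * revenue - fraud_rate * fraud_loss_to_lender

# Illustrative numbers: a 10% chance the marginal application is an identity
# thief. Today most of the fraud's cost (the victim's time and cleanup) is
# externalized, so the lender bears little of it.
externalized = lender_profit_per_marginal_app(revenue=200, fraud_rate=0.10,
                                              fraud_loss_to_lender=300)

# If victims could sue, each fraud would cost the lender, say, $3,000 more.
internalized = lender_profit_per_marginal_app(revenue=200, fraud_rate=0.10,
                                              fraud_loss_to_lender=3300)

print(round(externalized), round(internalized))   # 150 -150
```

Once the victim's costs are moved onto the lender's books, the same marginal application flips from profitable to loss-making, and the lender starts checking addresses and birth dates.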

Posted on April 14, 2010 at 6:57 AM

Online Credit/Debit Card Security Failure

Ross Anderson reports:

Online transactions with credit cards or debit cards are increasingly verified using the 3D Secure system, which is branded as “Verified by VISA” and “MasterCard SecureCode”. This is now the most widely-used single sign-on scheme ever, with over 200 million cardholders registered. It’s getting hard to shop online without being forced to use it.

In a paper I’m presenting today at Financial Cryptography, Steven Murdoch and I analyse 3D Secure. From the engineering point of view, it does just about everything wrong, and it’s becoming a fat target for phishing. So why did it succeed in the marketplace?

Quite simply, it has strong incentives for adoption. Merchants who use it push liability for fraud back to banks, who in turn push it on to cardholders. Properly designed single sign-on systems, like OpenID and InfoCard, can’t offer anything like this. So this is yet another case where security economics trumps security engineering, but in a predatory way that leaves cardholders less secure. We conclude with a suggestion on what bank regulators might do to fix the problem.

Posted on February 1, 2010 at 6:26 AM

Matt Blaze on the New "Unpredictable" TSA Screening Measures

Interesting:

“Unpredictable” security as applied to air passenger screening means that sometimes (perhaps most of the time), certain checks that might detect terrorist activity are not applied to some or all passengers on any given flight. Passengers can’t predict or influence when or whether they will be subjected to any particular screening mechanism. And so, the strategy assumes, the would-be terrorist will be forced to prepare for every possible mechanism in the TSA’s arsenal, effectively narrowing his or her range of options enough to make any serious mischief infeasible.

But terrorist organizations—especially those employing suicide bombers—have very different goals and incentives from those of smugglers, fare beaters and tax cheats. Groups like Al Qaeda aim to cause widespread disruption and terror by whatever means they can, even at great cost to individual members. In particular, they are willing and able to sacrifice—martyr—the very lives of their soldiers in the service of that goal. The fate of any individual terrorist is irrelevant as long as the loss contributes to terror and disruption.

Paradoxically, the best terrorist strategy (as long as they have enough volunteers) under unpredictable screening may be to prepare a cadre of suicide bombers for the least rigorous screening to which they might be subjected, and not, as the strategy assumes, for the most rigorous. Sent on their way, each will either succeed at destroying a plane or be caught, but either outcome serves the terrorists’ objective.

The problem is that catching someone under a randomized strategy creates a terrible dilemma for the authorities. What do we do when we detect a bomb-wielding terrorist whose device was discovered through the enhanced, randomly applied screening procedure?
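Blaze’s paradox is simple arithmetic: if the rigorous screen is applied with probability p, an attacker prepared only for the lax screen still succeeds with probability 1 − p. A sketch with invented numbers:

```python
def expected_successes(n_attackers, p_rigorous):
    """Expected number of attackers who get through if each prepares only
    for the lax screen: they succeed whenever the rigorous screen happens
    not to be applied to them."""
    return n_attackers * (1 - p_rigorous)

# Illustrative: even if the rigorous screen is applied 80% of the time,
# ten volunteers yield about two expected successes -- and the eight who
# are caught still generate disruption and publicity, which also serves
# the attackers' goals.
print(round(expected_successes(10, 0.8), 2))   # 2.0
```

Randomization raises the attacker’s cost per success only linearly in volunteers, and this attacker, unlike a smuggler, counts a caught volunteer as a partial win.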

EDITED TO ADD (1/5): In this blog post, a reader of Andrew Sullivan’s blog argues that the terrorist didn’t care if he blew the plane up or not, that he went back to his seat instead of detonating the explosive in the toilet precisely because he wanted his fellow passengers to see his attempt—just in case it failed.

Posted on January 5, 2010 at 11:41 AM