Entries Tagged "incentives"


Land Title Fraud

There seems to be a small epidemic of land title fraud in Ontario, Canada.

What happens is someone impersonates the homeowner and then sells the house out from under him. The former owner is still liable for the mortgage, but can’t get into his former house. Cleaning up the mess takes a lot of time and energy.

The problem is one of economic incentives. If banks were held liable for fraudulent mortgages, then the problem would go away really quickly. But as long as they’re not, they have no incentive to ensure that this fraud doesn’t occur. (They have some incentive, because the fraud costs them money, but as long as the few fraud cases cost less than ensuring the validity of every mortgage, they’ll just ignore the problem and eat the losses when fraud occurs.)
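A back-of-the-envelope calculation makes the incentive problem concrete. Here is a minimal sketch in Python; every number in it is invented for illustration:

    # Hypothetical numbers: does it pay a bank to verify every mortgage?
    mortgages_per_year = 100_000
    verification_cost = 50       # per-mortgage cost of validating identity and title
    fraud_rate = 0.0001          # fraction of mortgages that are fraudulent
    loss_per_fraud = 300_000     # bank's loss when a fraudulent mortgage surfaces

    cost_to_verify = mortgages_per_year * verification_cost
    expected_fraud_loss = mortgages_per_year * fraud_rate * loss_per_fraud
    print(f"Verify everything: ${cost_to_verify:,.0f}")       # $5,000,000
    print(f"Eat the losses:    ${expected_fraud_loss:,.0f}")  # $3,000,000
    # As long as the second number is smaller, a profit-maximizing bank
    # ignores the fraud. Liability changes the numbers, not the logic.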

EDITED TO ADD (9/8): Another article.

Posted on September 8, 2006 at 6:43 AM

Microsoft and FairUse4WM

If you really want to see Microsoft scramble to patch a hole in its software, don’t look to vulnerabilities that impact countless Internet Explorer users or give intruders control of thousands of Windows machines. Just crack Redmond’s DRM.

Security patches used to be rare. Software vendors were happy to pretend that vulnerabilities in their products were illusory—and then quietly fix the problem in the next software release.

That changed with the full disclosure movement. Independent security researchers started going public with the holes they found, making vulnerabilities impossible for vendors to ignore. Then worms became more common; patching—and patching quickly—became the norm.

But even now, no software vendor likes to issue patches. Every patch is a public admission that the company made a mistake. Moreover, the process diverts engineering resources from new development. Patches annoy users by making them update their software, and piss them off even more if the update doesn’t work properly.

For the vendor, there’s an economic balancing act: how much more will your users be annoyed by unpatched software than they will be by the patch, and is that reduction in annoyance worth the cost of patching?

Since 2003, Microsoft’s strategy to balance these costs and benefits has been to batch patches: instead of issuing them one at a time, it’s been issuing them all together on the second Tuesday of each month. This decreases Microsoft’s development costs and increases the reliability of its patches.

The user pays for this strategy by remaining open to known vulnerabilities for up to a month. On the other hand, users benefit from a predictable schedule: Microsoft can test all the patches that are going out at the same time, which means that patches are more reliable and users are able to install them faster with more confidence.
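Incidentally, the schedule itself is trivial to compute. A minimal Python sketch of finding the second Tuesday of a month, using nothing beyond the standard library:

    import datetime

    def patch_tuesday(year, month):
        """Return the second Tuesday of the given month."""
        first = datetime.date(year, month, 1)
        # weekday(): Monday == 0 ... Sunday == 6, so Tuesday == 1.
        days_until_tuesday = (1 - first.weekday()) % 7
        return first + datetime.timedelta(days=days_until_tuesday + 7)

    print(patch_tuesday(2006, 9))  # 2006-09-12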

In the absence of regulation, software liability, or some other mechanism to make unpatched software costly for the vendor, “Patch Tuesday” is the best users are likely to get.

Why? Because it makes near-term financial sense to Microsoft. The company is not a public charity, and if the internet suffers, or if computers are compromised en masse, the economic impact on Microsoft is still minimal.

Microsoft is in the business of making money, and keeping users secure by patching its software is only incidental to that goal.

There’s no better example of this principle in action than Microsoft’s behavior around the vulnerability in its digital rights management software, PlaysForSure.

Last week, a hacker developed an application called FairUse4WM that strips the copy protection from Windows Media DRM 10 and 11 files.

Now, this isn’t a “vulnerability” in the normal sense of the word: digital rights management is not a feature that users want. Being able to remove copy protection is a good thing for some users, and completely irrelevant for everyone else. No user is ever going to say: “Oh no. I can now play the music I bought for my computer in my car. I must install a patch so I can’t do that anymore.”

But to Microsoft, this vulnerability is a big deal. It affects the company’s relationship with major record labels. It affects the company’s product offerings. It affects the company’s bottom line. Fixing this “vulnerability” is in the company’s best interest; never mind the customer.

So Microsoft wasted no time; it issued a patch three days after learning about the hack. There’s no month-long wait for copyright holders who rely on Microsoft’s DRM.

This clearly demonstrates that economics is a much more powerful motivator than security.

It should surprise no one that the system didn’t stay patched for long. FairUse4WM 1.2 gets around Microsoft’s patch, and also circumvents the copy protection in Windows Media DRM 9 and 11beta2 files.

That was Saturday. Any guess on how long it will take Microsoft to patch Media Player once again? And then how long before the FairUse4WM people update their own software?

Certainly much less time than it will take Microsoft and the recording industry to realize they’re playing a losing game, and that trying to make digital files uncopyable is like trying to make water not wet.

If Microsoft abandoned this Sisyphean effort and put the same development effort into building a fast and reliable patching system, the entire internet would benefit. But simple economics says it probably never will.

This essay originally appeared on Wired.com.

EDITED TO ADD (9/8): Commentary.

EDITED TO ADD (9/9): Microsoft released a patch for FairUse4WM 1.2 on Thursday, September 7th.

EDITED TO ADD (9/13): BSkyB halts download service because of the breaks.

EDITED TO ADD (9/16): Microsoft is threatening legal action against people hosting copies of FairUse4WM.

Posted on September 7, 2006 at 8:33 AM

Call Forwarding Credit Card Scam

This is impressive:

A fraudster contacts an AT&T service rep and says he works at a pizza parlor and that the phone is having trouble. Until things get fixed, he requests that all incoming calls be forwarded to another number, which he provides.

Pizza orders are thus routed by AT&T to the fraudster’s line. When a call comes in, the fraudster pretends to take the customer’s order but says payment must be made in advance by credit card.

The unsuspecting customer gives his or her card number and expiration date, and before you can say “extra cheese,” the fraudster is ready to go on an Internet shopping spree using someone else’s money.

Those of us who know security have been telling people not to trust incoming phone calls—that you should call the company if you are going to divulge personal information to them. Seems like that advice isn’t foolproof.

The problem is the phone company, of course. They’re forwarding calls based on an unauthenticated request. AT&T doesn’t really want to talk about details:

He was reluctant to discuss the steps AT&T has taken to improve its call-forwarding system so this sort of thing doesn’t happen again. What, for example, is to prevent someone from convincing AT&T to forward all calls to a local flower store or some other business that takes orders by phone?

“We had some guidelines in place that we believe were effective,” Britton said. “Now we have extra precautions.”

It seems to me that AT&T would solve this problem more quickly if it were liable. Shouldn’t a pizza customer who has been scammed be allowed to sue AT&T? After all, the phone company didn’t route the customer’s calls properly. Does the credit card company have a basis for a suit? Certainly the pizza parlor does, but the effects of AT&T’s sloppy authentication are much greater than a few missed pizza orders.

Posted on August 21, 2006 at 1:35 PM

Doping in Professional Sports

The big news in professional bicycle racing is that Floyd Landis may be stripped of his Tour de France title because he tested positive for a banned performance-enhancing drug. Sidestepping the entire issue of whether professional athletes should be allowed to take performance-enhancing drugs, how dangerous those drugs are, and what constitutes a performance-enhancing drug in the first place, I’d like to talk about the security and economic issues surrounding doping in professional sports.

Drug testing is a security issue. Various sports federations around the world do their best to detect illegal doping, and players do their best to evade the tests. It’s a classic security arms race: improvements in detection technologies lead to improvements in drug detection evasion, which in turn spur the development of better detection capabilities. Right now, it seems that the drugs are winning; in places, these drug tests are described as “intelligence tests”: if you can’t get around them, you don’t deserve to play.

But unlike many security arms races, the detectors have the ability to look into the past. Last year, a laboratory tested Lance Armstrong’s urine and found traces of the banned substance EPO. What’s interesting is that the urine sample tested wasn’t from 2005; it was from 1999. Back then, there weren’t any good tests for EPO in urine. Today there are, and the lab took a frozen urine sample—who knew that labs save urine samples from athletes?—and tested it. He was later cleared—the lab procedures were sloppy—but I don’t think the real ramifications of the episode were ever well understood. Testing can go back in time.

This has two major effects. One, doctors who develop new performance-enhancing drugs may know exactly what sorts of tests the anti-doping laboratories are going to run, and they can test their ability to evade drug detection beforehand. But they cannot know what sorts of tests will be developed in the future, and athletes cannot assume that just because a drug is undetectable today it will remain so years later.

Two, athletes accused of doping based on years-old urine samples have no way of defending themselves. They can’t resubmit to testing; it’s too late. If I were an athlete worried about these accusations, I would deposit my urine “in escrow” on a regular basis to give me some ability to contest an accusation.

The doping arms race will continue because of the incentives. It’s a classic Prisoner’s Dilemma. Consider two competing athletes: Alice and Bob. Both Alice and Bob have to individually decide if they are going to take drugs or not.

Imagine Alice evaluating her two options:

“If Bob doesn’t take any drugs,” she thinks, “then it will be in my best interest to take them. They will give me a performance edge against Bob. I have a better chance of winning.

“Similarly, if Bob takes drugs, it’s also in my interest to agree to take them. At least that way Bob won’t have an advantage over me.

“So even though I have no control over what Bob chooses to do, taking drugs gives me the better outcome, regardless of his action.”

Unfortunately, Bob goes through exactly the same analysis. As a result, they both take performance-enhancing drugs and neither has the advantage over the other. If they could just trust each other, they could refrain from taking the drugs and maintain the same non-advantage status—without any legal or physical danger. But competing athletes can’t trust each other, and everyone feels he has to dope—and continues to search out newer and more undetectable drugs—in order to compete. And the arms race continues.
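The game-theoretic logic is easy to spell out. Here is the dilemma as a toy payoff matrix in Python; the numbers are arbitrary, and only their ordering matters:

    # Payoffs are (Alice, Bob); higher is better.
    payoffs = {
        ("clean", "clean"): (3, 3),  # level field, no health or legal risk
        ("clean", "dope"):  (0, 4),  # Bob gains the edge
        ("dope",  "clean"): (4, 0),  # Alice gains the edge
        ("dope",  "dope"):  (1, 1),  # level field, but both bear the risk
    }
    for bob in ("clean", "dope"):
        clean = payoffs[("clean", bob)][0]
        dope = payoffs[("dope", bob)][0]
        print(f"If Bob is {bob}: Alice gets {dope} doping, {clean} clean")
    # Doping wins in both cases (4 > 3 and 1 > 0), so it is Alice's dominant
    # strategy; by symmetry it is Bob's too. The equilibrium is (dope, dope),
    # even though (clean, clean) leaves both athletes better off.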

Some sports are more vigilant about drug detection than others. European bicycle racing is particularly vigilant; so are the Olympics. American professional sports are far more lenient, often trying to give the appearance of vigilance while still allowing athletes to use performance-enhancing drugs. They know that their fans want to see beefy linebackers, powerful sluggers, and lightning-fast sprinters. So, with a wink and a nod, they only test for the easy stuff.

For example, look at baseball’s current debate on human growth hormone: HGH. They have serious tests, and penalties, for steroid use, but everyone knows that players are now taking HGH because there is no urine test for it. There’s a blood test in development, but it’s still some time away from working. The way to stop HGH use is to take blood tests now and store them for future testing, but the players’ union has refused to allow it and the baseball commissioner isn’t pushing it.

In the end, doping is all about economics. Athletes will continue to dope because the Prisoner’s Dilemma forces them to do so. Sports authorities will either improve their detection capabilities or continue to pretend to do so—depending on their fans and their revenues. And as technology continues to improve, professional athletes will become more like deliberately designed racing cars.

This essay originally appeared on Wired.com.

Posted on August 10, 2006 at 5:18 AM

iPod Thefts

What happens if you distribute 50 million small, valuable, and easily sellable objects into the hands of men, women, and children all over the world, and tell them to walk around the streets with them? Why, people steal them, of course.

“Rise in crime blamed on iPods”, yells the front page of London’s Metro. “Muggers targeting iPod users”, says ITV. This is the reaction to the government’s revelation that robberies across the UK have risen by 8 per cent in the last year, from 90,747 to 98,204. The Home Secretary, John Reid, attributes this to the irresistible lure of “young people carrying expensive goods, such as mobile phones and MP3 players”. A separate British Crime Survey, however, suggests robbery has risen by 22 per cent, to 311,000.

This shouldn’t come as a surprise, just as it wasn’t a surprise in the 1990s when there was a wave of high-priced sneaker thefts. Or that there is also a wave of laptop thefts.

What to do about it? Basically, there’s not much you can do except be careful. Muggings have long been a low-risk crime, so it makes sense that we’re seeing an increase in them as the value of what people are carrying on their person goes up. And people carrying portable music players have an unmistakable indicator: those ubiquitous ear buds.

The economics of this crime are such that it will continue until one of three things happens. One, portable music players become much less valuable. Two, the costs of the crime become much higher. Three, society deals with its underclass and gives them a better career option than iPod thief.

And on a related topic, here’s a great essay by Cory Doctorow on how Apple’s iTunes copy protection screws the music industry.

EDITED TO ADD (8/5): Eric Rescorla comments.

Posted on July 31, 2006 at 7:05 AM

Memoirs of an Airport Security Screener

This person worked as an airport security screener years before 9/11, before the TSA, so hopefully things are different now. It’s a pretty fascinating read, though.

Two things pop out at me. One, as I wrote, it’s a mind-numbingly boring task. And two, the screeners were trained not to find weapons, but to find the particular example weapons that the FAA would test them on.

“How do you know it’s a gun?” he asked me.

“It looks like one,” I said, and was immediately pounded on the back.

“Goddamn right it does. You get over here,” yelled Mike to Will.

“How do you know it’s a gun?”

“I look for the outline of the cartridge and the…” Will started.

“What?”

“The barrel you can see right here,” Will continued, oblivious to his pending doom.

“What the hell are you talking about? That’s not how you find this gun.”

“No sir. It’s how you find any gun, sir,” said Will. I knew right then that this was a disaster.

“Any gun? Any gun? I don’t give a fuck about any gun, dipshit. I care about this gun. The FAA will not test you with another gun. The FAA will never put any gun but this one in the machine. I don’t care if you are a fucking gun nut who can tell the caliber by sniffing the barrel, you look for this gun. THIS ONE.” Mike strode to the test bag and dumped it out at the feet of the metal detector, sending the machine into a frenzy.

“THIS bomb. This knife. I don’t care if you miss a goddamn bazooka and some son of a bitch cuts your throat with a knife you let through as long as you find THIS GUN.”

“But we’re supposed to find,” Will insisted.

“You find what I trained you to find. The other shit doesn’t get taken out of my paycheck when you miss it,” said Mike.

Not exactly the result we’re looking for, but one that makes sense given the economic incentives that were at work.

I sure hope things are different today.

Posted on July 28, 2006 at 6:22 AM

Sky Marshals Name Innocents to Meet Quota

One news source is reporting that sky marshals are naming innocent people as suspicious in order to meet a quota:

The air marshals, whose identities are being concealed, told 7NEWS that they’re required to submit at least one report a month. If they don’t, there’s no raise, no bonus, no awards and no special assignments.

“Innocent passengers are being entered into an international intelligence database as suspicious persons, acting in a suspicious manner on an aircraft … and they did nothing wrong,” said one federal air marshal.

[…]

These unknowing passengers who are doing nothing wrong are landing in a secret government document called a Surveillance Detection Report, or SDR. Air marshals told 7NEWS that managers in Las Vegas created and continue to maintain this potentially dangerous quota system.

“Do these reports have real life impacts on the people who are identified as potential terrorists?” 7NEWS Investigator Tony Kovaleski asked.

“Absolutely,” a federal air marshal replied.

[…]

What kind of impact would it have for a flying individual to be named in an SDR?

“That could have serious impact … They could be placed on a watch list. They could wind up on databases that identify them as potential terrorists or a threat to an aircraft. It could be very serious,” said Don Strange, a former agent in charge of air marshals in Atlanta. He lost his job attempting to change policies inside the agency.

This is so insane, it can’t possibly be true. But I have been stunned before by the stupidity of the Department of Homeland Security.

EDITED TO ADD (7/27): This is what Brock Meeks said on David Farber’s IP mailing list:

Well, it so happens that I was the one that BROKE this story… way back in 2004. There were at least two offices, Miami and Las Vegas, that had this quota system for writing up and filing “SDRs.”

The requirement was totally renegade and NOT endorsed by Air Marshal officials in Washington. The Las Vegas Air Marshal field office was run at the time by a real cowboy (I think he’s retired now), someone that caused a lot of problems for the Washington HQ staff. (That official once grilled an Air Marshal for three hours in an interrogation room because he thought the air marshal was a source of mine on another story. The air marshal was then taken off flight status and made to wash the office cars for two weeks… I broke that story, too. And no, the punished air marshal was never a source of mine.)

Air marshals told me they were filing false reports, as they did below, just to hit the quota.

When my story hit, those in the offices of Las Vegas and Miami were reprimanded and the practice was ordered stopped by Washington HQ.

I suppose the biggest question I have for this story is the HYPE of what happens to these reports. They do NOT place the person mentioned on a “watch list.” These reports, filed on Palm Pilot PDAs, go into an internal Air Marshal database that is rarely seen and pretty much ignored by other intelligence agencies, from all sources I talked to.

Why? Because the air marshals are seen as little more than “sky cops” and these SDRs considered little more than “field interviews” that cops sometimes file when they question someone loitering at a 7-11 too late at night.

The quota system, if it is still going on, is heinous, but it hardly results in the big spooky data collection scare that this cheapjack Denver “investigative” TV reporter makes it out to be.

The quoted former field official from Atlanta, Don Strange, did, in fact, lose his job over trying to change internal policies. He was the most well-liked official among the rank and file, and the Atlanta office, under his command, had the highest morale in the nation.

Posted on July 25, 2006 at 9:55 AM

Click Fraud and the Problem of Authenticating People

Google’s $6 billion-a-year advertising business is at risk because it can’t be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.

With network click fraud, you host Google AdSense advertisements on your own website. Google pays you every time someone clicks on its ad on your site. It’s fraud if you sit at the computer and repeatedly click on the ad or—better yet—write a computer program that repeatedly clicks on the ad. That kind of fraud is easy for Google to spot, so the clever network click fraudsters simulate different IP addresses, or install Trojan horses on other people’s computers to generate the fake clicks.
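The naive version of that fraud really is easy to spot. Here is a minimal sketch of the kind of check an ad network might run; it's hypothetical, and real detection is far more sophisticated:

    from collections import Counter

    def suspicious_ips(click_log, threshold=20):
        """Flag IPs with an implausible number of clicks on a single ad.
        click_log is an iterable of (ip, ad_id) pairs, a stand-in for
        whatever log an ad network actually keeps."""
        counts = Counter(click_log)
        return {ip: n for (ip, _ad), n in counts.items() if n >= threshold}

    log = [("10.0.0.1", "ad-42")] * 50 + [("10.0.0.2", "ad-42")] * 3
    print(suspicious_ips(log))  # {'10.0.0.1': 50}
    # This is exactly the check that simulated IP addresses and Trojaned
    # machines defeat: one click each from fifty addresses looks just
    # like legitimate traffic.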

The other kind of click fraud is competitive. You notice your business competitor has bought an ad on Google, paying Google for each click. So you use the above techniques to repeatedly click on his ads, forcing him to spend money—sometimes a lot of money—on nothing. (Here’s a company that will commit click fraud for you.)

Click fraud has become a classic security arms race. Google improves its fraud-detection tools, so the fraudsters get increasingly clever … and the cycle continues. Meanwhile, Google is facing multiple lawsuits from those who claim the company isn’t doing enough. My guess is that everyone is right: It’s in Google’s interest both to solve and to downplay the importance of the problem.

But the overarching problem is both hard to solve and important: How do you tell if there’s an actual person sitting in front of a computer screen? How do you tell that the person is paying attention, hasn’t automated his responses, and isn’t being assisted by friends? Authentication systems are big business, whether based on something you know (passwords), something you have (tokens) or something you are (biometrics). But none of those systems can secure you against someone who walks away and lets another person sit down at the keyboard, or a computer that’s infected with a Trojan.

This problem manifests itself in other areas as well.

For years, online computer game companies have been battling players who use computer programs to assist their play: programs that allow them to shoot perfectly or see information they normally couldn’t see.

Playing is less fun if everyone else is computer-assisted, but unless there’s a cash prize on the line, the stakes are small. Not so with online poker sites, where computer-assisted players—or even computers playing without a real person at all—have the potential to drive all the human players away from the game.

Look around the internet, and you see this problem pop up again and again. The whole point of CAPTCHAs is to ensure that it’s a real person visiting a website, not just a bot on a computer. Standard testing doesn’t work online, because the tester can’t be sure that the test taker doesn’t have his book open, or a friend standing over his shoulder helping him. The solution in both cases is a proctor, of course, but that’s not always practical and obviates the benefits of internet testing.
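Reduced to its essence, a CAPTCHA is a challenge-response round that is supposed to be cheap for a human and expensive for a program. A minimal sketch; real CAPTCHAs render the challenge as a distorted image, so this plain-text version is trivially beatable by a bot:

    import random
    import string

    def make_captcha(length=6):
        """Return a (challenge, answer) pair."""
        answer = "".join(random.choices(string.ascii_uppercase, k=length))
        return f"Type these letters: {answer}", answer

    challenge, answer = make_captcha()
    print(challenge)
    if input("> ").strip().upper() == answer:
        print("Probably a person. Or a person's bot.")
    else:
        print("Try again.")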

This problem has even come up in court cases. In one instance, the prosecution demonstrated that the defendant’s computer committed some hacking offense, but the defense argued that it wasn’t the defendant who did it—that someone else was controlling his computer. And in another case, a defendant charged with a child porn offense argued that, while it was true that illegal material was on his computer, his computer was in a common room of his house and he hosted a lot of parties—and it wasn’t him who’d downloaded the porn.

Years ago, talking about security, I complained about the link between computer and chair. The easy part is securing digital information: on the desktop computer, in transit from computer to computer or on massive servers. The hard part is securing information from the computer to the person. Likewise, authenticating a computer is much easier than authenticating a person sitting in front of the computer. And verifying the integrity of data is much easier than verifying the integrity of the person looking at it—in both senses of that word.

And it’s a problem that will get worse as computers get better at imitating people.

Google is testing a new advertising model to deal with click fraud: cost-per-action ads. Advertisers don’t pay unless the customer performs a certain action: buys a product, fills out a survey, whatever. It’s a hard model to make work—Google would become more of a partner in the final sale instead of an indifferent displayer of advertising—but it’s the right security response to click fraud: Change the rules of the game so that click fraud doesn’t matter.
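A couple of lines of arithmetic show why the rule change works; the traffic numbers below are invented:

    real_clicks, fake_clicks, purchases = 100, 900, 10
    cost_per_click, cost_per_action = 0.50, 5.00

    cpc_bill = (real_clicks + fake_clicks) * cost_per_click  # fraud inflates this
    cpa_bill = purchases * cost_per_action                   # fraud can't touch this
    print(f"Cost-per-click bill:  ${cpc_bill:.2f}")   # $500.00, mostly for fraud
    print(f"Cost-per-action bill: ${cpa_bill:.2f}")   # $50.00, all for real sales
    # Under cost-per-action, a fake click earns the fraudster nothing,
    # so the incentive to generate one disappears.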

That’s how to solve a security problem.

This essay appeared on Wired.com.

EDITED TO ADD (7/13): Click Monkeys is a hoax site.

EDITED TO ADD (7/25): An evaluation of Google’s anti-click-fraud efforts, as part of the Lane Gifts case. I’m not sure if this expert report was done for Google, for Lane Gifts, or for the judge.

Posted on July 13, 2006 at 5:22 AM

Unreliable Programming

One response to software liability:

Now suppose that there was a magical wand for taking snapshots of computer states just before crashes. Or that the legal system would permit claims on grounds of only the second part of the proof. Then there would be a strong positive incentive to write software that fails unreproducibly: “If our software’s errors cannot be demonstrated reliably in court, we will never lose money in product liability cases.”

Follow the link for examples.
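To make the perverse incentive vivid, here is a contrived Python sketch of a failure designed to be unreproducible; it's an illustration only, not a technique from the linked piece:

    import random

    def flaky_sort(data):
        """Sorts a list, but fails about one time in a thousand.
        The failure depends on a throwaway random draw, not on the input,
        so rerunning with identical inputs almost always succeeds, and a
        pre-crash snapshot of program state proves nothing."""
        if random.random() < 0.001:
            raise RuntimeError("transient internal error")
        return sorted(data)

    print(flaky_sort([3, 1, 2]))  # almost always [1, 2, 3]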

Posted on July 11, 2006 at 7:47 AM

Economics and Information Security

I’m sitting in a conference room at Cambridge University, trying to simultaneously finish this article for Wired News and pay attention to the presenter onstage.

I’m in this awkward situation because 1) this article is due tomorrow, and 2) I’m attending the fifth Workshop on the Economics of Information Security, or WEIS: to my mind, the most interesting computer security conference of the year.

The idea that economics has anything to do with computer security is relatively new. Ross Anderson and I seem to have stumbled upon the idea independently: he, in his brilliant 2001 article, “Why Information Security Is Hard—An Economic Perspective” (.pdf); I, in various essays and presentations from that same period.

WEIS began a year later at the University of California at Berkeley and has grown ever since. It’s the only workshop where technologists get together with economists and lawyers and try to understand the problems of computer security.

And economics has a lot to teach computer security. We generally think of computer security as a problem of technology, but often systems fail because of misplaced economic incentives: The people who could protect a system are not the ones who suffer the costs of failure.

When you start looking, economic considerations are everywhere in computer security. Hospitals’ medical-records systems provide comprehensive billing-management features for the administrators who specify them, but are not so good at protecting patients’ privacy. Automated teller machines suffered from fraud in countries like the United Kingdom and the Netherlands, where poor regulation left banks without sufficient incentive to secure their systems, and allowed them to pass the cost of fraud along to their customers. And one reason the internet is insecure is that liability for attacks is so diffuse.

In all of these examples, the economic considerations of security are more important than the technical considerations.

More generally, many of the most basic security questions are at least as much economic as technical. Do we spend enough on keeping hackers out of our computer systems? Or do we spend too much? For that matter, do we spend appropriate amounts on police and Army services? And are we spending our security budgets on the right things? In the shadow of 9/11, questions like these have a heightened importance.

Economics can actually explain many of the puzzling realities of internet security. Firewalls are common, e-mail encryption is rare: not because of the relative effectiveness of the technologies, but because of the economic pressures that drive companies to install them. Corporations rarely publicize information about intrusions; that’s because of economic incentives against doing so. And an insecure operating system is the international standard, in part, because its economic effects are largely borne not by the company that builds the operating system, but by the customers that buy it.

Some of the most controversial cyberpolicy issues also sit squarely between information security and economics. For example, the issue of digital rights management: Is copyright law too restrictive—or not restrictive enough—to maximize society’s creative output? And if it needs to be more restrictive, will DRM technologies benefit the music industry or the technology vendors? Is Microsoft’s Trusted Computing initiative a good idea, or just another way for the company to lock its customers into Windows, Media Player and Office? Any attempt to answer these questions becomes rapidly entangled with both information security and economic arguments.

WEIS encourages papers on these and other issues in economics and computer security. We heard papers presented on the economics of digital forensics of cell phones (.pdf)—if you have an uncommon phone, the police probably don’t have the tools to perform forensic analysis—and the effect of stock spam on stock prices: It actually works in the short term. We learned that more-educated wireless network users are not more likely to secure their access points (.pdf), and that the best predictor of wireless security is the default configuration of the router.

Other researchers presented economic models to explain patch management (.pdf), peer-to-peer worms (.pdf), investment in information security technologies (.pdf) and opt-in versus opt-out privacy policies (.pdf). There was a field study that tried to estimate the cost to the U.S. economy for information infrastructure failures (.pdf): less than you might think. And one of the most interesting papers looked at economic barriers to adopting new security protocols (.pdf), specifically DNS Security Extensions.

This is all heady stuff. In the early years, there was a bit of a struggle as the economists and the computer security technologists tried to learn each other’s languages. But now it seems that there’s a lot more synergy, and more collaboration between the two camps.

I’ve long said that the fundamental problems in computer security are no longer about technology; they’re about applying technology. Workshops like WEIS are helping us understand why good security technologies fail and bad ones succeed, and that kind of insight is critical if we’re going to improve security in the information age.

This essay originally appeared on Wired.com.

Posted on June 29, 2006 at 4:31 PM

