Entries Tagged "incentives"

Bug Bounties Are Not Security

Paying people rewards for finding security flaws is not the same as hiring your own analysts and testers. It’s a reasonable addition to a software security program, but no substitute.

I’ve said this before, but Moshe Yudkowsky said it better:

Here’s an outsourcing idea: get rid of your fleet of delivery trucks, toss your packages out into the street, and offer a reward to anyone who successfully delivers a package. Sound like a good idea, or a recipe for disaster?

Red Herring has an article about the bounties that some software companies offer for bugs. That is, if you’re an independent researcher and you find a bug in their software, some companies will offer you a cash bonus when you report the bug.

As the article notes, “in a free market everything has value,” and therefore information that a bug exists should logically result in some sort of market. However, I think it’s misleading to call this practice “outsourcing” of security, any more than calling the practice of tossing packages into the street a “delivery service.” Paying someone to tell you about a bug may or may not be a good business practice, but that practice alone certainly does not constitute a complete security policy.

Posted on December 27, 2005 at 7:46 AM

Cell Phone Companies and Security

This is a fascinating story of cell phone fraud, security, economics, and externalities. Its moral is obvious: economic considerations drive security decisions.

Susan Drummond was a customer of Rogers Wireless, a large Canadian cell phone company. Her phone was cloned while she was on vacation, and she got a $12,237.60 phone bill (her typical bill was $75). Rogers maintains that there is nothing to be done, and that Drummond has to pay.

Like all cell phone companies, Rogers has automatic fraud detection systems that detect this kind of abnormal cell phone usage. They don’t turn the cell phones off, though, because they don’t want to annoy their customers.

Ms. Hopper [a manager in Rogers’ security department] said terrorist groups had identified senior cellphone company officers as perfect targets, since the company was loath to shut off their phones for reasons that included inconvenience to busy executives and, of course, the public-relations debacle that would take place if word got out.

As long as Rogers can get others to pay for the fraud, this makes perfect sense. Shutting off a phone based on an automatic fraud-detection system costs the phone company in two ways: people inconvenienced by false alarms, and bad press. But the major cost of not shutting off a phone remains an externality: the customer pays for it.

In fact, there seems to be some evidence that Rogers decides whether or not to shut off a suspicious phone based on the customer’s ability to pay:

Ms. Innes [a vice-president with Rogers Communications] said that Rogers has a policy of contacting consumers if fraud is suspected. In some cases, she admitted, phones are shut off automatically, but refused to say what criteria were used. (Ms. Drummond and Mr. Gefen believe that the company bases the decision on a customer’s creditworthiness. “If you have the financial history, they let the meter run,” Ms. Drummond said.) Ms. Drummond noted that she has a salary of more than $100,000, and a sterling credit history. “They knew something was wrong, but they thought they could get the money out of me. It’s ridiculous.”

Makes sense from Rogers’ point of view. High-paying customers are 1) more likely to pay, and 2) more damaging if pissed off in a false alarm. Again, economic considerations trump security.
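To make that concrete, here is a minimal sketch of the shutoff decision as an expected-cost comparison. Every number, and the idea of parameterizing on creditworthiness, is my illustrative assumption, not Rogers’ actual model:

```python
# Hypothetical expected-cost model of the "shut off a flagged phone?" decision.
# All numbers are invented for illustration, not Rogers' actual figures.

def cost_of_shutoff(p_fraud: float, false_alarm_cost: float) -> float:
    """Expected cost to the carrier of shutting off on an alert."""
    # A false alarm annoys a (possibly high-value) customer and risks bad
    # press; if the alert is real fraud, shutting off costs the carrier nothing.
    return (1 - p_fraud) * false_alarm_cost

def cost_of_waiting(p_fraud: float, fraud_loss: float, p_customer_pays: float) -> float:
    """Expected cost to the carrier of letting the meter run."""
    # Whatever can be collected from the customer is an externality:
    # the carrier only eats the uncollectible fraction of the fraud.
    return p_fraud * fraud_loss * (1 - p_customer_pays)

P_FRAUD = 0.5        # alert says 50/50 the phone is cloned
FRAUD_LOSS = 12_000  # a runaway bill, roughly Ms. Drummond's

scenarios = [
    # (label, cost of a false alarm, chance the customer pays the bill)
    ("high-income customer", 2_000, 0.9),  # costly to annoy, likely to pay
    ("poor-credit customer",   200, 0.2),  # cheap to annoy, unlikely to pay
]

for label, false_alarm_cost, p_pays in scenarios:
    shutoff = cost_of_shutoff(P_FRAUD, false_alarm_cost)
    wait = cost_of_waiting(P_FRAUD, FRAUD_LOSS, p_pays)
    decision = "shut off" if shutoff < wait else "let the meter run"
    print(f"{label}: shutoff ${shutoff:,.0f} vs. wait ${wait:,.0f} -> {decision}")
```

Under these invented numbers, the carrier’s cheapest move is exactly the pattern Drummond alleges: cut off the poor-credit customer immediately, and let the creditworthy one rack up charges.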

Rogers is defending itself in court, and shows no signs of backing down:

In court filings, the company has made it clear that it intends to hold Ms. Drummond responsible for the calls made on her phone. “. . . the plaintiff is responsible for all calls made on her phone prior to the date of notification that her phone was stolen,” the company says. “The Plaintiff’s failure to mitigate deprived the Defendant of the opportunity to take any action to stop fraudulent calls prior to the 28th of August 2005.”

The solution here is obvious: Rogers should not be able to charge its customers for telephone calls they did not make. Ms. Drummond’s phone was cloned; there is no possible way she could notify Rogers of this before she saw calls she did not make on her bill. She is also completely powerless to affect the anti-cloning security in the Rogers phone system. To make her liable for the fraud is to ensure that the problem never gets fixed.

Rogers is the only party in a position to do something about the problem. The company can implement automatic fraud-detection software, and according to the article it already has.

Rogers customers will pay for the fraud in any case. If they are responsible for the loss, either they’ll take their chances and pay a lot only if they are the victims, or there’ll be some insurance scheme that spreads the cost over the entire customer base. If Rogers is responsible for the loss, then the customers will pay in the form of slightly higher prices. But only if Rogers is responsible for the loss will they implement security countermeasures to limit fraud.

And if they do that, everyone benefits.

There is a Slashdot thread on the topic.

Posted on December 19, 2005 at 1:10 PM

Korea Solves the Identity Theft Problem

South Korea gets it:

The South Korean government is introducing legislation that will make it mandatory for financial institutions to compensate customers who have fallen victim to online fraud and identity theft.

The new laws will require financial firms in the country to compensate customers for virtually all financial losses resulting from online identity theft and account hacking, even if the banks are not directly responsible.

Of course, by itself this action doesn’t solve identity theft. But in a vibrant capitalist economy, it paves the way for technical security improvements that will effectively deal with identity theft.

The good news for the rest of us is that we can watch what happens now.

Posted on December 14, 2005 at 7:14 AM

Most Stolen Identities Never Used

This is something I’ve been saying for a while, and it’s nice to see some independent confirmation:

A new study suggests consumers whose credit cards are lost or stolen or whose personal information is accidentally compromised face little risk of becoming victims of identity theft.

The analysis, released on Wednesday, also found that even in the most dangerous data breaches—where thieves access social security numbers and other sensitive information on consumers they have deliberately targeted—only about 1 in 1,000 victims had their identities stolen.

The reason is that thieves are stealing far more identities than they need. Two years ago, if someone asked me about protecting against identity theft, I would tell them to shred their trash and be careful giving information over the Internet. Today, that advice is obsolete. Criminals are not stealing identity information in ones and twos; they’re stealing identity information in blocks of hundreds of thousands and even millions.

If a criminal ring wants a dozen identities for some fraud scam, and they steal a database with 500,000 identities, then—as a percentage—almost none of those identities will ever be the victims of fraud.

Some other findings from their press release:

A significant finding from the research is that different breaches pose different degrees of risk. In the research, ID Analytics distinguishes between “identity-level” breaches, where names and Social Security numbers were stolen and “account-level” breaches, where only account numbers—sometimes associated with names—were stolen. ID Analytics also discovered that the degree of risk varies based on the nature of the data breach, for example, whether the breach was the result of a deliberate hacking into a database or a seemingly unintentional loss of data, such as tapes or disks being lost in transit.

And:

ID Analytics’ fraud experts believe the reason for the minimal use of stolen identities is based on the amount of time it takes to actually perpetrate identity theft against a consumer. As an example, it takes approximately five minutes to fill out a credit application. At this rate, it would take a fraudster working full-time (averaging 6.5 hours a day, five days a week, 50 weeks a year) over 50 years to fully utilize a breached file consisting of one million consumer identities. If the criminal outsourced the work at a rate of $10 an hour in an effort to use a breached file of the same size in one year, it would cost that criminal about $830,000.

Another key finding indicates that in certain targeted data breaches, notices may have a deterrent effect. In one large-scale identity-level breach, thieves slowed their use of the data to commit identity theft after public notification. The research also showed how the criminals who stole the data in the breaches used identity data manipulation, or “tumbling” to avoid detection and to prolong the scam.
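The press release’s arithmetic checks out; here it is as a short Python sanity check:

```python
# Reproducing the ID Analytics arithmetic quoted above.

identities = 1_000_000
minutes_per_application = 5
hours_per_year = 6.5 * 5 * 50             # 6.5 h/day, 5 days/wk, 50 wks/yr = 1,625

total_hours = identities * minutes_per_application / 60
years_for_one_fraudster = total_hours / hours_per_year
cost_to_outsource = total_hours * 10      # at $10/hour, to burn through it in a year

print(f"{total_hours:,.0f} hours of form-filling")          # ~83,333 hours
print(f"{years_for_one_fraudster:.0f} years working solo")  # ~51 years
print(f"${cost_to_outsource:,.0f} to outsource")            # ~$833,000
```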

The finding about notification’s deterrent effect is interesting, and it makes this recommendation even more surprising:

The company suggests, for instance, that companies shouldn’t always notify consumers of data breaches because they may be unnecessarily alarming people who stand little chance of being victimized.

I agree with them that all this notification is having a “boy who cried wolf” effect on people. I know people living in California who get disclosure notifications in the mail regularly, and who have stopped paying attention to them.

But remember, the main security value of notification requirements is the cost. By increasing the cost to companies of data thefts, the goal is for them to increase their security. (The main security value used to be the public shaming, but these breaches are now so common that the press no longer writes about them.) Direct fines would be a better way of dealing with the economic externality, but the notification law is all we’ve got right now. I don’t support eliminating it until there’s something else in its place.

Posted on December 12, 2005 at 9:50 AM

Airplane Security

My seventh Wired.com column is online. Nothing you haven’t heard before, except for this part:

I know quite a lot about this. I was a member of the government’s Secure Flight Working Group on Privacy and Security. We looked at the TSA’s program for matching airplane passengers with the terrorist watch list, and found a complete mess: poorly defined goals, incoherent design criteria, no clear system architecture, inadequate testing. (Our report was on the TSA website, but has recently been removed—”refreshed” is the word the organization used—and replaced with an “executive summary” (.doc) that contains none of the report’s findings. The TSA did retain two (.doc) rebuttals (.doc), which read like products of the same outline and dismiss our findings by saying that we didn’t have access to the requisite information.) Our conclusions match those in two (.pdf) reports (.pdf) by the Government Accountability Office and one (.pdf) by the DHS inspector general.

That’s right; the TSA is disappearing our report.

I also wrote an op-ed for the Sydney Morning Herald on “weapons”—like the metal knives distributed with in-flight meals—aboard aircraft, based on this blog post. Again, nothing you haven’t heard before. (And I stole some bits from your comments to the blog posting.)

There is new news, though. The TSA is relaxing the rules for bringing pointy things on aircraft:

The summary document says the elimination of the ban on metal scissors with a blade of four inches or less and tools of seven inches or less – including screwdrivers, wrenches and pliers – is intended to give airport screeners more time to do new types of random searches.

Passengers are now typically subject to a more intensive, so-called secondary search only if their names match a listing of suspected terrorists or because of anomalies like a last-minute ticket purchase or a one-way trip with no baggage.

The new strategy, which has been tested in Pittsburgh, Indianapolis and Orange County, Calif., will mean that a certain number of passengers, even if they are not identified by these computerized checks, will be pulled aside and subject to an added search lasting about two minutes. Officials said passengers would be selected randomly, without regard to ethnicity or nationality.

What happens next will vary. One day at a certain airport, carry-on bags might be physically searched. On the same day at a different airport, those subject to the random search might have their shoes screened for explosives or be checked with a hand-held metal detector. “By design, a traveler will not experience the same search every time he or she flies,” the summary said. “The searches will add an element of unpredictability to the screening process that will be easy for passengers to navigate but difficult for terrorists to manipulate.”

The new policy will also change the way pat-down searches are done to check for explosive devices. Screeners will now search the upper and lower torso, the entire arm and legs from the mid-thigh down to the ankle and the back and abdomen, significantly expanding the area checked.

Currently, only the upper torso is checked. Under the revised policy, screeners will still have the option of skipping pat-downs in certain areas “if it is clear there is no threat,” like when a person is wearing tight clothing making it obvious that there is nothing hidden. But the default position will be to do the more comprehensive search, in part because of fear that a passenger could be carrying plastic explosives that might not set off a handheld metal detector.
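As a minimal sketch of why random selection is hard to game: if secondary screening is a coin flip that ignores every attribute of the passenger, there is no profile an attacker can fit, or avoid fitting, to dodge the search. The selection rate below is my invention; the TSA hasn’t published one.

```python
import random

SELECTION_RATE = 0.05  # hypothetical: roughly 1 in 20 passengers

def selected_for_secondary(_passenger) -> bool:
    # Deliberately ignores who the passenger is: no name, itinerary,
    # ethnicity, or nationality enters the decision, so nothing an
    # attacker controls changes the odds.
    return random.random() < SELECTION_RATE

passengers = ["alice", "bob", "carol", "dave"]
print([p for p in passengers if selected_for_secondary(p)])
```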

I don’t know if they will still make people take laptops out of their cases, make people take off their shoes, or confiscate pocket knives. (Different articles have said different things about the last one.)

This is a good change, and it’s long overdue. Airplane terrorism has been a movie-plot threat, not a real one, for a while now.

The most amazing reaction to this is from Corey Caldwell, spokeswoman for the Association of Flight Attendants:

When weapons are allowed back on board an aircraft, the pilots will be able to land the plane safely but the aisles will be running with blood.

How’s that for hyperbole?

In Beyond Fear and elsewhere, I’ve written about the notion of “agenda” and how it informs security trade-offs. From the perspective of the flight attendants, subjecting passengers to onerous screening requirements is a perfectly reasonable trade-off. They’re safer—albeit only slightly—because of it, and it doesn’t cost them anything. The cost is an externality to them: the passengers pay it. Passengers have a broader agenda: safety, but also cost, convenience, time, etc. So it makes perfect sense that the flight attendants object to a security change that the passengers are in favor of.

EDITED TO ADD (12/2): The SFWG report hasn’t been removed from the TSA website, just unlinked.

EDITED TO ADD (12/20): The report seems to be gone from the TSA website now, but it’s available here.

Posted on December 1, 2005 at 10:14 AM

Fraud and Western Union

Western Union has been the conduit of a lot of fraud. But since they’re not the victim, they don’t care much about security. It’s an externality to them. It took a lawsuit to convince them to take security seriously.

Western Union, one of the world’s most frequently used money transfer services, will begin warning its customers against possible fraud in their transactions.

Persuading consumers to send wire transfers, particularly to Canada, has been a popular method for con artists. Recent scams include offering consumers counterfeit cashier’s checks, advance-fee loans and phony lottery winnings.

More than $113 million was swindled in 2002 from U.S. residents through wire transfer fraud to Canada alone, according to a survey conducted by investigators in seven states.

Washington was one of 10 states that negotiated an $8.5 million settlement with Western Union. Most of the settlement would fund a national program to counsel consumers against telemarketing fraud.

In addition to the money, the company has agreed to increase fraud awareness at more than 50,000 locations, develop a computer program that would spot likely fraud-induced transfers before they are completed and block transfers from specific consumers to specific recipients when the company receives fraud information from state authorities.

Posted on November 18, 2005 at 11:06 AM

Fraudulent Stock Transactions

From a Business Week story:

During July 13-26, stocks and mutual funds had been sold, and the proceeds wired out of his account in six transactions of nearly $30,000 apiece. Murty, a 64-year-old nuclear engineering professor at North Carolina State University, could only think it was a mistake. He hadn’t sold any stock in months.

Murty dialed E*Trade the moment its call center opened at 7 a.m. A customer service rep urged him to change his password immediately. Too late. E*Trade says the computer in Murty’s Cary (N.C.) home lacked antivirus software and had been infected with code that enabled hackers to grab his user name and password.

The cybercriminals, pretending to be Murty, directed E*Trade to liquidate his holdings. Then they had the brokerage wire the proceeds to a phony account in his name at Wells Fargo Bank. The New York-based online broker says the wire instructions appeared to be legit because they contained the security code the company e-mailed to Murty to execute the transaction. But the cyberthieves had gained control of Murty’s e-mail, too.

E*Trade recovered some of the money from the Wells Fargo account and returned it to Murty. In October, the Indian-born professor reached what he calls a satisfactory settlement with the firm, which says it did nothing wrong.

That last clause is critical. E*Trade insists it did nothing wrong. It executed $174,000 in fraudulent transactions, but it did nothing wrong. It sold stocks without the knowledge or consent of the owner of those stocks, but it did nothing wrong.

Now quite possibly, E*Trade did nothing wrong legally. There may very well be a paragraph buried in whatever agreement this guy signed that says something like: “You agree that any trade request that comes to us with the right password, whether it came from you or not, will be processed.” But there’s the market failure. Until we fix that, these losses are an externality to E*Trade. They’ll only fix the problem up to the point where customers aren’t leaving them in droves, not to the point where the customers’ stocks are secure.

Posted on November 10, 2005 at 2:40 PM

Howard Schmidt on Software Vulnerabilities

Howard Schmidt was misquoted in the article that spurred my rebuttal.

This essay outlines what he really thinks:

Like it or not, the hard work of developers often takes the brunt of malicious hacker attacks.

Many people know that developers are often under intense pressure to deliver more features on time and under budget. Few developers get the time to review their code for potential security vulnerabilities. When they do get the time, they often don’t have secure-coding training and lack the automated tools to prevent hackers from using hundreds of common exploit techniques to trigger malicious attacks.

So what can software vendors do? In a sense, a big part of the answer is relatively old fashioned; the developers need to be accountable to their employers and provided with incentives, better tools and proper training.

He’s against making vendors liable for defects in their products, unlike every other industry:

I always have been, and continue to be, against any sort of liability actions as long as we continue to see market forces improve software. Unfortunately, introducing vendor liability to solve security flaws hurts everybody, including employees, shareholders and customers, because it raises costs and stifles innovation.

After all, when companies are faced with large punitive judgments, a frequent step is often to cut salaries, increase prices or even reduce employees. This is not good for anyone.

And he closes with:

In the end, what security requires is the same attention any business goal needs. Employers should expect their employees to take pride in and own a certain level of responsibility for their work. And employees should expect their employers to provide the tools and training they need to get the job done. With these expectations established and goals agreed on, perhaps the software industry can do a better job of strengthening the security of its products by reducing software vulnerabilities.

That first sentence, I think, nicely sums up what’s wrong with his argument. If security is to be a business goal, then it needs to make business sense. Right now, it makes more business sense not to produce secure software products than it does to produce secure software products. Any solution needs to address that fundamental market failure, instead of simply wishing it were true.

Posted on November 8, 2005 at 7:34 AM

Preventing Identity Theft: The Living and the Dead

A company called Metacharge has rolled out an e-commerce security service in the United Kingdom. For about $2 per name, website operators can verify their customers against the UK Electoral Roll, the British Telecom directory, and a mortality database.

That’s not cheap, and the company is mainly targeting customers in high-risk industries, such as online gaming. But the economics behind this system are interesting to examine. They illustrate externalities associated with fraud and identity theft, and why leaving matters to the companies won’t fix the problem.

The mortality database is interesting. According to Metacharge, “the fastest growing form of identity theft is not phishing; it is taking the identities of dead people and using them to get credit.”

For a website, the economics are straightforward. It costs $2 to verify that a customer is alive. If the probability the customer is actually dead (and therefore fraudulent) times the average losses due to this dead customer is more than $2, this service makes sense. If it is less, then the service doesn’t. For example, if dead customers are one in ten thousand, and they cost $15,000 each, then the service is not worth it. If they cost $25,000 each, or if they occur twice as often, then it is worth it.

Imagine now that there is a similar service that identifies identity fraud among living people. The same economic analysis would also hold. But in this case, there’s an externality: there is an additional cost of fraud borne by the victim and not by the website. So if fraud using the identity of living customers occurs at a rate of one in ten thousand, and each one costs $15,000 to the website and another $10,000 to the victim, the website will conclude that the service is not worthwhile, even though paying for it is cheaper overall. This is why legislation is needed: to raise the cost of fraud to the websites.
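Here is the break-even arithmetic from the last two paragraphs as a small sketch; the function and its numbers simply restate the examples above:

```python
# Break-even arithmetic for a $2-per-name verification check,
# using the illustrative numbers from the paragraphs above.

CHECK_COST = 2.00

def worth_checking(p_fraud: float, loss_to_site: float,
                   loss_to_victim: float = 0.0,
                   count_externality: bool = False) -> bool:
    """Does the expected loss avoided exceed the cost of the check?"""
    loss = loss_to_site + (loss_to_victim if count_externality else 0.0)
    return p_fraud * loss > CHECK_COST

p = 1 / 10_000

# Dead-customer fraud: the website bears the whole loss.
assert not worth_checking(p, 15_000)      # $1.50 expected loss < $2.00
assert worth_checking(p, 25_000)          # $2.50 > $2.00
assert worth_checking(2 * p, 15_000)      # $3.00 > $2.00

# Living-victim fraud: $15,000 to the site, $10,000 externalized to the victim.
assert not worth_checking(p, 15_000, 10_000)                      # site says no
assert worth_checking(p, 15_000, 10_000, count_externality=True)  # society says yes
```

The gap between the last two lines is the externality: legislation that makes the website eat the victim’s loss flips its answer to match society’s.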

There’s another economic trade-off. Websites have two basic opportunities to verify customers using services such as these. The first is when they sign up the customer, and the second is after some kind of non-payment. Most of the damages to the customer occur after the non-payment is referred to a credit bureau, so it would make sense to perform some extra identification checks at that point. It would certainly be cheaper to the website, as far fewer checks would be paid for. But because this second opportunity comes after the website has suffered its losses, it has no real incentive to take advantage of it. Again, economics drives security.

Posted on October 28, 2005 at 8:08 AM

Scandinavian Attack Against Two-Factor Authentication

I’ve repeatedly said that two-factor authentication won’t stop phishing, because the attackers will simply modify their techniques to get around it. Here’s an example where that has happened:

Scandinavian bank Nordea was forced to shut down part of its Web banking service for 12 hours last week following a phishing attack that specifically targeted its paper-based one-time password security system.

According to press reports, the scam targeted customers that access the Nordea Sweden Web banking site using a paper-based single-use password security system.

A blog posting by Finnish security firm F-Secure says recipients of the spam e-mail were directed to bogus Web sites but were also asked to enter their account details along with the next password on their list of one-time passwords issued to them by the bank on a “scratch sheet”.

From F-Secure’s blog:

The fake mails were explaining that Nordea is introducing new security measures, which can be accessed at www.nordea-se.com or www.nordea-bank.net (fake sites hosted in South Korea).

The fake sites looked fairly real. They were asking the user for his personal number, access code and the next available scratch code. Regardless of what you entered, the site would complain about the scratch code and asked you to try the next one. In reality the bad boys were trying to collect several scratch codes for their own use.

The Register also has a story.

Two-factor authentication won’t stop identity theft, because identity theft is not an authentication problem. It’s a transaction-security problem. I’ve written about that already. Solutions need to address the transactions directly, and my guess is that they’ll be a combination of things. Some transactions will become more cumbersome. It will definitely be more cumbersome to get a new credit card. Back-end systems will be put in place to identify fraudulent transaction patterns. Look at credit card security; that’s where you’re going to find ideas for solutions to this problem.
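As a toy illustration of what such a back-end might look like, here is a crude rule-based score. The rules and thresholds are invented for the example; real systems score many more signals (velocity, geography, device, merchant history):

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    amount: float           # size of this transaction
    new_destination: bool   # first time funds move to this payee/account
    daily_total: float      # running total moved out of the account today

def fraud_score(t: Transfer, typical_daily: float) -> float:
    """Crude additive score; anything above 0.6 gets held for confirmation."""
    score = 0.0
    if t.new_destination:
        score += 0.4        # money going somewhere it has never gone before
    if t.amount > 10 * typical_daily:
        score += 0.4        # wildly atypical transaction size
    if t.daily_total > 3 * typical_daily:
        score += 0.2        # unusual outflow velocity
    return score

# Something like the E*Trade story above: a string of ~$30,000 wires from an
# account that normally moves a few hundred dollars a day.
wire = Transfer(amount=30_000, new_destination=True, daily_total=150_000)
if fraud_score(wire, typical_daily=500) >= 0.6:
    # Out-of-band, because in that story the attackers controlled e-mail too.
    print("hold transfer for out-of-band confirmation")
```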

Unfortunately, until financial institutions are liable for all the losses associated with identity theft, and not just their direct losses, we’re not going to see a lot of these solutions. I’ve written about this before as well.

We got those countermeasures for credit cards because Congress mandated that the banks were liable for all but the first $50 of fraudulent transactions.

EDITED TO ADD: Here’s a related story. The Bank of New Zealand suspended Internet banking because of phishing concerns. Now there’s a company that is taking the threat seriously.

Posted on October 25, 2005 at 12:49 PM
