Entries Tagged "externalities"

Security Screening for New York Helicopters

There’s a helicopter shuttle that runs from Lower Manhattan to Kennedy Airport. It’s basically a luxury item: for $139 you can avoid the drive to the airport. But, of course, security screeners are required for passengers, and that’s causing some concern:

At the request of U.S. Helicopter’s executives, the federal Transportation Security Administration set up a checkpoint, with X-ray and bomb-detection machines, to screen passengers and their luggage at the heliport.

The security agency is spending $560,000 this year to operate the checkpoint with a staff of eight screeners and is considering adding a checkpoint at the heliport at the east end of 34th Street. The agency’s involvement has drawn criticism from some elected officials.

“The bottom line here is that there are not enough screeners to go around,” said Senator Charles E. Schumer, Democrat of New York. “The fact that we are taking screeners that are needed at airports to satisfy a luxury market on the government’s dime is a problem.”

This is not a security problem; it’s an economics problem. And it’s a good illustration of the concept of “externalities.” An externality is an effect of a decision not borne by the decision-maker. In this example, U.S. Helicopter made a business decision to offer this service at a certain price. And customers will make a decision about whether or not the service is worth the money. But there is more to the cost than the $139. The cost of that checkpoint is an externality to both U.S. Helicopter and its customers, because the $560,000 spent on the security checkpoint is paid for by taxpayers. Taxpayers are effectively subsidizing the true cost of the helicopter trip.

The only way to solve this is for the government to bill the airline passengers for the cost of security screening. It wouldn’t be much per ticket, maybe $15. And it would be much less at major airports, because the economies of scale are so much greater.
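The $15 figure is easy to sanity-check. Here is a rough sketch of the arithmetic; the passenger volume is my assumption for illustration, not a number from the article:

```python
# Back-of-envelope check on the per-ticket cost of the checkpoint.
# The $560,000/year figure is from the article; the passenger volume
# is an assumed value, chosen only to illustrate the calculation.
ANNUAL_CHECKPOINT_COST = 560_000   # dollars per year (from the article)
ASSUMED_PASSENGERS_PER_DAY = 100   # assumption: a plausible shuttle volume

annual_passengers = ASSUMED_PASSENGERS_PER_DAY * 365
cost_per_ticket = ANNUAL_CHECKPOINT_COST / annual_passengers

print(f"Implied screening cost per ticket: ${cost_per_ticket:.2f}")
```

At about a hundred passengers a day, the checkpoint works out to roughly $15 a ticket, which is where a surcharge in that range comes from.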

The article even points out that customers would gladly pay the extra $15 because of another externality: the people who decide whether or not to take the helicopter trip are not the people actually paying for it.

Bobby Weiss, a self-employed stock trader and real estate broker who was U.S. Helicopter’s first paying customer yesterday, said he would pay $300 for a round trip to Kennedy, and he expected most corporate executives would, too.

“It’s $300, but so what? It goes on the expense account,” said Mr. Weiss, adding that he had no qualms about the diversion of federal resources to smooth the path of highfliers. “Maybe a richer guy may save a little time at the expense of a poorer guy who spends a little more time in line.”

What Mr. Weiss is saying is that the costs — both the direct cost and the cost of the security checkpoint — are externalities to him, so he really doesn’t care. Exactly.

Posted on April 4, 2006 at 7:51 AM

Credit Card Companies and Agenda

This has been making the rounds on the Internet. Basically, a guy tears up a credit card application, tapes it back together, fills it out with someone else’s address and a different phone number, and sends it in. He still gets a credit card.

Imagine that some fraudster is rummaging through your trash and finds a torn-up credit card application. That’s why this is bad.

To understand why it’s happening, you need to understand the trade-offs and the agenda. From the point of view of the credit card company, the benefit of giving someone a credit card is that he’ll use it and generate revenue. The risk is that it’s a fraudster who will cost the company revenue. The credit card industry has dealt with the risk in two ways: they’ve pushed a lot of the risk onto the merchants, and they’ve implemented fraud detection systems to limit the damage.

All other costs and problems of identity theft are borne by the consumer; they’re an externality to the credit card company. They don’t enter into the trade-off decision at all.

We can laugh at this kind of thing all day, but it’s actually in the best interests of the credit card industry to mail cards in response to torn-up and taped-together applications without doing much checking of the address or phone number. If we want that to change, we need to fix the externality.

Posted on March 13, 2006 at 2:18 PM

Security, Economics, and Lost Conference Badges

Conference badges are an interesting security token. They can be very valuable — a full conference registration at the RSA Conference this week in San Jose, for example, costs $1,985 — but their value decays rapidly with time. By tomorrow afternoon, they’ll be worthless.

Counterfeiting badges is one security concern, but an even bigger concern is people losing their badge or having their badge stolen. It’s way cheaper to find or steal someone else’s badge than it is to buy your own. People could do this sort of thing on purpose, pretending to lose their badge and giving it to someone else.

A few years ago, the RSA Conference charged people $100 for a replacement badge, which is far cheaper than a second registration. So the fraud remained. (At least, I assume it did. I don’t know anything about how prevalent this kind of fraud was at RSA.)

Last year, the RSA Conference tried to further limit these types of fraud by putting people’s photographs on their badges. Clever idea, but difficult to implement.

For this to work, though, guards need to match photographs with faces. This means that either 1) you need a lot more guards at entrance points, or 2) the lines will move a lot slower. Actually, far more likely is 3) no one will check the photographs.

And it was an expensive solution for the RSA Conference. They needed the equipment to put the photos on the badges. Registration was much slower. And pro-privacy people objected to the conference keeping their photographs on file.

This year, the RSA Conference solved the problem through economics:

If you lose your badge and/or badge holder, you will be required to purchase a new one for a fee of $1,895.00.

Look how clever this is. Instead of trying to solve this particular badge fraud problem through security, they simply moved the problem from the conference to the attendee. The badges still have that $1,895 value, but now if it’s stolen and used by someone else, it’s the attendee who’s out the money. As far as the RSA Conference is concerned, the security risk is an externality.

Note that from an outside perspective, this isn’t the most efficient way to deal with the security problem. It’s likely that the cost to the RSA Conference for centralized security is less than the aggregate cost of all the individual security measures. But the RSA Conference gets to make the trade-off, so they chose a solution that was cheaper for them.

Of course, it would have been nice if the conference provided a slightly more secure attachment point for the badge holder than a thin strip of plastic. But why should they? It’s not their problem anymore.

Posted on February 16, 2006 at 7:16 AM

Cell Phone Companies and Security

This is a fascinating story of cell phone fraud, security, economics, and externalities. Its moral is obvious, and it demonstrates how economic considerations drive security decisions.

Susan Drummond was a customer of Rogers Wireless, a large Canadian cell phone company. Her phone was cloned while she was on vacation, and she got a $12,237.60 phone bill (her typical bill was $75). Rogers maintains that there is nothing to be done, and that Drummond has to pay.

Like all cell phone companies, Rogers has automatic fraud detection systems that detect this kind of abnormal cell phone usage. They don’t turn the cell phones off, though, because they don’t want to annoy their customers.

Ms. Hopper [a manager in Rogers’ security department] said terrorist groups had identified senior cellphone company officers as perfect targets, since the company was loath to shut off their phones for reasons that included inconvenience to busy executives and, of course, the public-relations debacle that would take place if word got out.

As long as Rogers can get others to pay for the fraud, this makes perfect sense. Shutting off a phone based on an automatic fraud-detection system costs the phone company in two ways: people inconvenienced by false alarms, and bad press. But the major cost of not shutting off a phone remains an externality: the customer pays for it.
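The kind of automatic detection being described can be sketched in a few lines. This is a toy illustration only: the 10× threshold is invented, and real systems look at call destinations, velocity, and patterns, not just the bill total.

```python
def looks_fraudulent(current_bill: float, typical_bill: float,
                     threshold: float = 10.0) -> bool:
    """Flag a bill wildly out of line with the customer's history.

    The 10x threshold is an invented illustration; real fraud-detection
    systems use far richer signals than the bill total alone.
    """
    return current_bill > threshold * typical_bill

# Ms. Drummond's case from the article: a $12,237.60 bill against a
# typical bill of about $75 -- any sane threshold flags it immediately.
print(looks_fraudulent(12_237.60, 75.00))  # True
```

The point isn’t that the detection is hard; it’s that once the alarm fires, shutting the phone off costs Rogers something and leaving it on costs the customer.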

In fact, there seems to be some evidence that Rogers decides whether or not to shut off a suspicious phone based on the customer’s ability to pay:

Ms. Innes [a vice-president with Rogers Communications] said that Rogers has a policy of contacting consumers if fraud is suspected. In some cases, she admitted, phones are shut off automatically, but refused to say what criteria were used. (Ms. Drummond and Mr. Gefen believe that the company bases the decision on a customer’s creditworthiness. “If you have the financial history, they let the meter run,” Ms. Drummond said.) Ms. Drummond noted that she has a salary of more than $100,000, and a sterling credit history. “They knew something was wrong, but they thought they could get the money out of me. It’s ridiculous.”

Makes sense from Rogers’ point of view. High-paying customers are 1) more likely to pay, and 2) more damaging if pissed off in a false alarm. Again, economic considerations trump security.

Rogers is defending itself in court, and shows no signs of backing down:

In court filings, the company has made it clear that it intends to hold Ms. Drummond responsible for the calls made on her phone. “. . . the plaintiff is responsible for all calls made on her phone prior to the date of notification that her phone was stolen,” the company says. “The Plaintiff’s failure to mitigate deprived the Defendant of the opportunity to take any action to stop fraudulent calls prior to the 28th of August 2005.”

The solution here is obvious: Rogers should not be able to charge its customers for telephone calls they did not make. Ms. Drummond’s phone was cloned; there is no possible way she could notify Rogers of this before she saw calls she did not make on her bill. She is also completely powerless to affect the anti-cloning security in the Rogers phone system. To make her liable for the fraud is to ensure that the problem never gets fixed.

Rogers is the only party in a position to do something about the problem. The company can implement automatic fraud-detection software, and according to the article it already has.

Rogers customers will pay for the fraud in any case. If they are responsible for the loss, either they’ll take their chances and pay a lot only if they are the victims, or there’ll be some insurance scheme that spreads the cost over the entire customer base. If Rogers is responsible for the loss, then the customers will pay in the form of slightly higher prices. But only if Rogers is responsible for the loss will they implement security countermeasures to limit fraud.

And if they do that, everyone benefits.

There is a Slashdot thread on the topic.

Posted on December 19, 2005 at 1:10 PM

Korea Solves the Identity Theft Problem

South Korea gets it:

The South Korean government is introducing legislation that will make it mandatory for financial institutions to compensate customers who have fallen victim to online fraud and identity theft.

The new laws will require financial firms in the country to compensate customers for virtually all financial losses resulting from online identity theft and account hacking, even if the banks are not directly responsible.

Of course, by itself this action doesn’t solve identity theft. But in a vibrant capitalist market, this action is going to pave the way for technical security improvements that will effectively deal with identity theft.

The good news for the rest of us is that we can watch what happens now.

Posted on December 14, 2005 at 7:14 AM

Airplane Security

My seventh Wired.com column is online. Nothing you haven’t heard before, except for this part:

I know quite a lot about this. I was a member of the government’s Secure Flight Working Group on Privacy and Security. We looked at the TSA’s program for matching airplane passengers with the terrorist watch list, and found a complete mess: poorly defined goals, incoherent design criteria, no clear system architecture, inadequate testing. (Our report was on the TSA website, but has recently been removed — “refreshed” is the word the organization used — and replaced with an “executive summary” (.doc) that contains none of the report’s findings. The TSA did retain two (.doc) rebuttals (.doc), which read like products of the same outline and dismiss our findings by saying that we didn’t have access to the requisite information.) Our conclusions match those in two (.pdf) reports (.pdf) by the Government Accountability Office and one (.pdf) by the DHS inspector general.

That’s right; the TSA is disappearing our report.

I also wrote an op ed for the Sydney Morning Herald on “weapons” — like the metal knives distributed with in-flight meals — aboard aircraft, based on this blog post. Again, nothing you haven’t heard before. (And I stole some bits from your comments to the blog posting.)

There is new news, though. The TSA is relaxing the rules for bringing pointy things on aircraft:

The summary document says the elimination of the ban on metal scissors with a blade of four inches or less and tools of seven inches or less – including screwdrivers, wrenches and pliers – is intended to give airport screeners more time to do new types of random searches.

Passengers are now typically subject to a more intensive, so-called secondary search only if their names match a listing of suspected terrorists or because of anomalies like a last-minute ticket purchase or a one-way trip with no baggage.

The new strategy, which has been tested in Pittsburgh, Indianapolis and Orange County, Calif., will mean that a certain number of passengers, even if they are not identified by these computerized checks, will be pulled aside and subject to an added search lasting about two minutes. Officials said passengers would be selected randomly, without regard to ethnicity or nationality.

What happens next will vary. One day at a certain airport, carry-on bags might be physically searched. On the same day at a different airport, those subject to the random search might have their shoes screened for explosives or be checked with a hand-held metal detector. “By design, a traveler will not experience the same search every time he or she flies,” the summary said. “The searches will add an element of unpredictability to the screening process that will be easy for passengers to navigate but difficult for terrorists to manipulate.”

The new policy will also change the way pat-down searches are done to check for explosive devices. Screeners will now search the upper and lower torso, the entire arm and legs from the mid-thigh down to the ankle and the back and abdomen, significantly expanding the area checked.

Currently, only the upper torso is checked. Under the revised policy, screeners will still have the option of skipping pat-downs in certain areas “if it is clear there is no threat,” like when a person is wearing tight clothing making it obvious that there is nothing hidden. But the default position will be to do the more comprehensive search, in part because of fear that a passenger could be carrying plastic explosives that might not set off a handheld metal detector.

I don’t know if they will still make people take laptops out of their cases, make people take off their shoes, or confiscate pocket knives. (Different articles have said different things about the last one.)

This is a good change, and it’s long overdue. Airplane terrorism hasn’t been the movie-plot threat that everyone worries about for a while.

The most amazing reaction to this is from Corey Caldwell, spokeswoman for the Association of Flight Attendants:

When weapons are allowed back on board an aircraft, the pilots will be able to land the plane safely but the aisles will be running with blood.

How’s that for hyperbole?

In Beyond Fear and elsewhere, I’ve written about the notion of “agenda” and how it informs security trade-offs. From the perspective of the flight attendants, subjecting passengers to onerous screening requirements is a perfectly reasonable trade-off. They’re safer — albeit only slightly — because of it, and it doesn’t cost them anything. The cost is an externality to them: the passengers pay it. Passengers have a broader agenda: safety, but also cost, convenience, time, etc. So it makes perfect sense that the flight attendants object to a security change that the passengers are in favor of.

EDITED TO ADD (12/2): The SFWG report hasn’t been removed from the TSA website, just unlinked.

EDITED TO ADD (12/20): The report seems to be gone from the TSA website now, but it’s available here.

Posted on December 1, 2005 at 10:14 AM

Fraud and Western Union

Western Union has been the conduit of a lot of fraud. But since they’re not the victim, they don’t care much about security. It’s an externality to them. It took a lawsuit to convince them to take security seriously.

Western Union, one of the world’s most frequently used money transfer services, will begin warning its customers against possible fraud in their transactions.

Persuading consumers to send wire transfers, particularly to Canada, has been a popular method for con artists. Recent scams include offering consumers counterfeit cashier’s checks, advance-fee loans and phony lottery winnings.

More than $113 million was swindled in 2002 from U.S. residents through wire transfer fraud to Canada alone, according to a survey conducted by investigators in seven states.

Washington was one of 10 states that negotiated an $8.5 million settlement with Western Union. Most of the settlement would fund a national program to counsel consumers against telemarketing fraud.

In addition to the money, the company has agreed to increase fraud awareness at more than 50,000 locations, develop a computer program that would spot likely fraud-induced transfers before they are completed and block transfers from specific consumers to specific recipients when the company receives fraud information from state authorities.

Posted on November 18, 2005 at 11:06 AM

Fraudulent Stock Transactions

From a Business Week story:

During July 13-26, stocks and mutual funds had been sold, and the proceeds wired out of his account in six transactions of nearly $30,000 apiece. Murty, a 64-year-old nuclear engineering professor at North Carolina State University, could only think it was a mistake. He hadn’t sold any stock in months.

Murty dialed E*Trade the moment its call center opened at 7 a.m. A customer service rep urged him to change his password immediately. Too late. E*Trade says the computer in Murty’s Cary (N.C.) home lacked antivirus software and had been infected with code that enabled hackers to grab his user name and password.

The cybercriminals, pretending to be Murty, directed E*Trade to liquidate his holdings. Then they had the brokerage wire the proceeds to a phony account in his name at Wells Fargo Bank. The New York-based online broker says the wire instructions appeared to be legit because they contained the security code the company e-mailed to Murty to execute the transaction. But the cyberthieves had gained control of Murty’s e-mail, too.

E*Trade recovered some of the money from the Wells Fargo account and returned it to Murty. In October, the Indian-born professor reached what he calls a satisfactory settlement with the firm, which says it did nothing wrong.

That last clause is critical. E*Trade insists it did nothing wrong. It executed $174,000 in fraudulent transactions, but it did nothing wrong. It sold stocks without the knowledge or consent of the owner of those stocks, but it did nothing wrong.

Now quite possibly, E*Trade did nothing wrong legally. There may very well be a paragraph buried in whatever agreement this guy signed that says something like: “You agree that any trade request that comes to us with the right password, whether it came from you or not, will be processed.” But there’s the market failure. Until we fix that, these losses are an externality to E*Trade. They’ll only fix the problem up to the point where customers aren’t leaving them in droves, not to the point where the customers’ stocks are secure.

Posted on November 10, 2005 at 2:40 PM

Preventing Identity Theft: The Living and the Dead

A company called Metacharge has rolled out an e-commerce security service in the United Kingdom. For about $2 per name, website operators can verify their customers against the UK Electoral Roll, the British Telecom directory, and a mortality database.

That’s not cheap, and the company is mainly targeting customers in high-risk industries, such as online gaming. But the economics behind this system are interesting to examine. They illustrate externalities associated with fraud and identity theft, and why leaving matters to the companies won’t fix the problem.

The mortality database is interesting. According to Metacharge, “the fastest growing form of identity theft is not phishing; it is taking the identities of dead people and using them to get credit.”

For a website, the economics are straightforward. It costs $2 to verify that a customer is alive. If the probability the customer is actually dead (and therefore fraudulent) times the average losses due to this dead customer is more than $2, this service makes sense. If it is less, then the service doesn’t. For example, if dead customers are one in ten thousand, and they cost $15,000 each, then the service is not worth it. If they cost $25,000 each, or if they occur twice as often, then it is worth it.

Imagine now that there is a similar service that identifies identity fraud among living people. The same economic analysis would also hold. But in this case, there’s an externality: there is an additional cost of fraud borne by the victim and not by the website. So if fraud using the identity of living customers occurs at a rate of one in ten thousand, and each one costs $15,000 to the website and another $10,000 to the victim, the website will conclude that the service is not worthwhile, even though paying for it is cheaper overall. This is why legislation is needed: to raise the cost of fraud to the websites.

There’s another economic trade-off. Websites have two basic opportunities to verify customers using services such as these. The first is when they sign up the customer, and the second is after some kind of non-payment. Most of the damages to the customer occur after the non-payment is referred to a credit bureau, so it would make sense to perform some extra identification checks at that point. It would certainly be cheaper to the website, as far fewer checks would be paid for. But because this second opportunity comes after the website has suffered its losses, it has no real incentive to take advantage of it. Again, economics drives security.

Posted on October 28, 2005 at 8:08 AM

Scandinavian Attack Against Two-Factor Authentication

I’ve repeatedly said that two-factor authentication won’t stop phishing, because the attackers will simply modify their techniques to get around it. Here’s an example where that has happened:

Scandinavian bank Nordea was forced to shut down part of its Web banking service for 12 hours last week following a phishing attack that specifically targeted its paper-based one-time password security system.

According to press reports, the scam targeted customers that access the Nordea Sweden Web banking site using a paper-based single-use password security system.

A blog posting by Finnish security firm F-Secure says recipients of the spam e-mail were directed to bogus Web sites but were also asked to enter their account details along with the next password on their list of one-time passwords issued to them by the bank on a “scratch sheet”.

From F-Secure’s blog:

The fake mails were explaining that Nordea is introducing new security measures, which can be accessed at www.nordea-se.com or www.nordea-bank.net (fake sites hosted in South Korea).

The fake sites looked fairly real. They were asking the user for his personal number, access code and the next available scratch code. Regardless of what you entered, the site would complain about the scratch code and asked you to try the next one. In reality the bad boys were trying to collect several scratch codes for their own use.

The Register also has a story.

Two-factor authentication won’t stop identity theft, because identity theft is not an authentication problem. It’s a transaction-security problem. I’ve written about that already. Solutions need to address the transactions directly, and my guess is that they’ll be a combination of things. Some transactions will become more cumbersome. It will definitely be more cumbersome to get a new credit card. Back-end systems will be put in place to identify fraudulent transaction patterns. Look at credit card security; that’s where you’re going to find ideas for solutions to this problem.

Unfortunately, until financial institutions are liable for all the losses associated with identity theft, and not just their direct losses, we’re not going to see a lot of these solutions. I’ve written about this before as well.

We got them for credit cards because Congress mandated that the banks were liable for all but the first $50 of fraudulent transactions.

EDITED TO ADD: Here’s a related story. The Bank of New Zealand suspended Internet banking because of phishing concerns. Now there’s a company that is taking the threat seriously.

Posted on October 25, 2005 at 12:49 PM
