Entries Tagged "externalities"

Ars Technica on Liabilities and Computer Security

Good article:

Halderman argued that secure software tends to come from companies that have a culture of taking security seriously. But it’s hard to mandate, or even to measure, “security consciousness” from outside a company. A regulatory agency can force a company to go through the motions of beefing up its security, but it’s not likely to be effective unless management’s heart is in it.

This is a key advantage of using liability as the centerpiece of security policy. By making companies financially responsible for the actual harms caused by security failures, lawsuits give management a strong motivation to take security seriously without requiring the government to directly measure and penalize security problems. Sony allegedly laid off security personnel ahead of this year’s attacks. Presumably it thought this would be a cost-saving move; a big class action lawsuit could ensure that other companies don’t repeat that mistake in future.

I’ve been talking about liabilities for about a decade now. Here are essays I’ve written in 2002, 2003, 2004, and 2006.

Posted on July 27, 2011 at 6:44 AM

Court Ruling on "Reasonable" Electronic Banking Security

One of the pleasant side effects of being too busy to write longer blog posts is that—if I wait long enough—someone else writes what I would have wanted to.

The ruling in the Patco Construction vs. People’s United Bank case is important, because the judge basically ruled that the bank’s substandard security was good enough—and Patco is stuck paying for the fraud that resulted from that substandard security. The details matter, and Brian Krebs has written an excellent summary.

EDITED TO ADD (7/13): Krebs also writes about a case going in the opposite direction in a Michigan court.

Posted on June 17, 2011 at 12:09 PM

Interview with Me About the Sony Hack

This is what I get for giving interviews when I’m in a bad mood. For the record, I think Sony did a terrible job with its customers’ security. I also think that most companies do a terrible job with customers’ security, simply because there isn’t a financial incentive to do better. And that most of us are pretty secure, despite that.

One of my biggest complaints with these stories is how little actual information we have. We often don’t know if any data was actually stolen, only that hackers had access to it. We rarely know how the data was accessed: what sort of vulnerability was used by the hackers. We rarely know the motivations of the hackers: were they criminals, spies, kids, or someone else? We rarely know if the data is actually used for any nefarious purposes; it’s generally impossible to connect a data breach with a corresponding fraud incident. Given all of that, it’s impossible to say anything useful or definitive about the attack. But the press always wants definitive statements.

Posted on May 13, 2011 at 11:29 AM

The Era of "Steal Everything"

Good comment:

“We’re moving into an era of ‘steal everything’,” said David Emm, a senior security researcher for Kaspersky Labs.

He believes that cyber criminals are now no longer just targeting banks or retailers in the search for financial details, but instead going after social and other networks which encourage the sharing of vast amounts of personal information.

As both data storage and data processing become cheaper, more and more data is collected and stored. An unanticipated effect of this is that more and more data can be stolen and used. As the article says, data minimization is the most effective security tool against this sort of thing. But—of course—it’s not in the database owner’s interest to limit the data it collects; it’s in the interests of those whom the data is about.

Posted on May 10, 2011 at 6:20 AM

Comodo Group Issues Bogus SSL Certificates

This isn’t good:

The hacker, whose March 15 attack was traced to an IP address in Iran, compromised a partner account at the respected certificate authority Comodo Group, which he used to request eight SSL certificates for six domains: mail.google.com, www.google.com, login.yahoo.com, login.skype.com, addons.mozilla.org and login.live.com.

The certificates would have allowed the attacker to craft fake pages that would have been accepted by browsers as the legitimate websites. The certificates would have been most useful as part of an attack that redirected traffic intended for Skype, Google and Yahoo to a machine under the attacker’s control. Such an attack can range from small-scale Wi-Fi spoofing at a coffee shop all the way to global hijacking of internet routes.

At a minimum, the attacker would then be able to steal login credentials from anyone who entered a username and password into the fake page, or perform a “man in the middle” attack to eavesdrop on the user’s session.

More news articles. Comodo announcement.

Fake certs for Google, Yahoo, and Skype? Wow.

This isn’t the first time Comodo has screwed up with certificates. The safest thing for us users to do would be to remove the Comodo root certificate from our browsers so that none of their certificates work, but we don’t have the capability to do that. The browser companies—Microsoft, Mozilla, Opera, etc.—could do that, but my guess is they won’t. The economic incentives don’t work properly. Comodo is likely to sue any browser company that takes this sort of action, and Comodo’s customers might, too. So it’s smarter for the browser companies to just ignore the issue and pass the problem to us users.
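For what it’s worth, “removing a root certificate” just means deleting it from the trust store a TLS client checks certificate chains against. Here’s a minimal sketch of the effect, assuming a locally edited CA bundle (the filename is made up) from which the Comodo root has been stripped:

```python
import socket, ssl

HOST = "login.yahoo.com"  # one of the domains covered by the bogus certificates

# Trust only a locally edited CA bundle (hypothetical file) instead of the
# system defaults; any chain ending at a root missing from it will fail.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations(cafile="ca-bundle-without-comodo.pem")

try:
    with socket.create_connection((HOST, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("chain verified:", tls.getpeercert()["subject"])
except ssl.SSLCertVerificationError as err:
    print("certificate rejected:", err)
```

Any server certificate whose chain ends at the removed root then fails verification, which is exactly why pulling a root is such a heavy hammer for the browser vendors to swing.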

Posted on March 31, 2011 at 7:00 AM

Externalities and Identity Theft

Chris Hoofnagle has a new paper: “Internalizing Identity Theft.” Basically, he shows that one of the problems is that lenders extend credit even when credit applications are sketchy.

From an article on the work:

Using a 2003 amendment to the Fair Credit Reporting Act that allows victims of ID theft to ask creditors for the fraudulent applications submitted in their names, Mr. Hoofnagle worked with a small sample of six ID theft victims and delved into how they were defrauded.

Of 16 applications presented by imposters to obtain credit or medical services, almost all were rife with errors that should have suggested fraud. Yet in all 16 cases, credit or services were granted anyway.

In the various cases described in the paper, which was published on Wednesday in The U.C.L.A. Journal of Law and Technology, one victim found four of six fraudulent applications submitted in her name contained the wrong address; two contained the wrong phone number and one the wrong date of birth.

Another victim discovered that his imposter was 70 pounds heavier, yet successfully masqueraded as him using what appeared to be his stolen driver’s license, and in one case submitted an incorrect Social Security number.

This is a textbook example of an economic externality. Because most of the cost of identity theft is borne by the victim—even when the lender eventually reimburses the victim under pressure—the lenders make the trade-off that’s best for their business, and that means issuing credit even in marginal situations. They make more money that way.

If we want to reduce identity theft, the only solution is to internalize that externality. Either give victims the ability to sue lenders who issue credit in their names to identity thieves, or pass a law that penalizes lenders who do.

Among the ways to move the cost of the crime back to issuers of credit, Mr. Hoofnagle suggests that lenders contribute to a fund that will compensate victims for the loss of their time in resolving their ID theft problems.

Posted on April 14, 2010 at 6:57 AM

Eliminating Externalities in Financial Security

This is a good thing:

An Illinois district court has allowed a couple to sue their bank on the novel grounds that it may have failed to sufficiently secure their account, after an unidentified hacker obtained a $26,500 loan on the account using the customers’ user name and password.

[…]

In February 2007, someone with a different IP address than the couple gained access to Marsha Shames-Yeakel’s online banking account using her user name and password and initiated an electronic transfer of $26,500 from the couple’s home equity line of credit to her business account. The money was then transferred through a bank in Hawaii to a bank in Austria.

The Austrian bank refused to return the money, and Citizens Financial insisted that the couple was liable for the funds and began billing them for it. When they refused to pay, the bank reported them as delinquent to the national credit reporting agencies and threatened to foreclose on their home.

The couple sued the bank, claiming violations of the Electronic Funds Transfer Act and the Fair Credit Reporting Act, alleging, among other things, that the bank reported them as delinquent to credit reporting agencies without telling the agencies that the debt in question was under dispute and was the result of a third-party theft. The couple wrote 19 letters disputing the debt, but began making monthly payments to the bank for the stolen funds in late 2007 following the bank’s foreclosure threats.

In addition to these claims, the plaintiffs also accused the bank of negligence under state law.

According to the plaintiffs, the bank had a common law duty to protect their account information from identity theft and failed to maintain state-of-the-art security standards. Specifically, the plaintiffs argued, the bank used only single-factor authentication for customers logging into its server (a user name and password) instead of multi-factor authentication, such as combining the user name and password with a token the customer possesses that authenticates the customer’s computer to the bank’s server or dynamically generates a single-use password for logging in.
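The token that “dynamically generates a single-use password,” as the plaintiffs describe it, is essentially a one-time-password generator. As a rough illustration only—not anything this particular bank deployed—here is a minimal time-based one-time password function along the lines of RFC 6238; the shared secret is made up:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password, roughly per RFC 6238 (illustrative sketch)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret; both the customer's token and the bank's server hold it.
print(totp("JBSWY3DPEHPK3PXP"))
```

The server holds the same secret and accepts only the code for the current time step, so a phished static password alone is no longer enough to log in.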

As I’ve previously written, this is the only way to mitigate this kind of fraud:

Fraudulent transactions have nothing to do with the legitimate account holders. Criminals impersonate legitimate users to financial institutions. That means that any solution can’t involve the account holders. That leaves only one reasonable answer: financial institutions need to be liable for fraudulent transactions. They need to be liable for sending erroneous information to credit bureaus based on fraudulent transactions.

They can’t claim that the user must keep his password secure or his machine virus free. They can’t require the user to monitor his accounts for fraudulent activity, or his credit reports for fraudulently obtained credit cards. Those aren’t reasonable requirements for most users. The bank must be made responsible, regardless of what the user does.

If you think this won’t work, look at credit cards. Credit card companies are liable for all but the first $50 of fraudulent transactions. They’re not hurting for business; and they’re not drowning in fraud, either. They’ve developed and fielded an array of security technologies designed to detect and prevent fraudulent transactions. They’ve pushed most of the actual costs onto the merchants. And almost no security centers around trying to authenticate the cardholder.

It’s an important security principle: ensure that the person who has the ability to mitigate the risk is responsible for the risk. In this case, the account holders had nothing to do with the security of their account. They could not audit it. They could not improve it. The bank, on the other hand, has the ability to improve security and mitigate the risk, but because they pass the cost on to their customers, they have no incentive to do so. Litigation like this has the potential to fix the externality and improve security.

Posted on September 23, 2009 at 7:13 AM

Small Business Identity Theft and Fraud

The sorts of crimes we’ve been seeing perpetrated against individuals are starting to be perpetrated against small businesses:

In July, a school district near Pittsburgh sued to recover $700,000 taken from it. In May, a Texas company was robbed of $1.2 million. An electronics testing firm in Baton Rouge, La., said it was bilked of nearly $100,000.

In many cases, the advisory warned, the scammers infiltrate companies in a similar fashion: They send a targeted e-mail to the company’s controller or treasurer, a message that contains either a virus-laden attachment or a link that—when opened—surreptitiously installs malicious software designed to steal passwords. Armed with those credentials, the crooks then initiate a series of wire transfers, usually in increments of less than $10,000 to avoid banks’ anti-money-laundering reporting requirements.

The alert states that these scams typically rely on help from “money mules”—willing or unwitting individuals in the United States—often hired by the criminals via popular Internet job boards. Once enlisted, the mules are instructed to set up bank accounts, withdraw the fraudulent deposits and then wire the money to fraudsters, the majority of which are in Eastern Europe, according to the advisory.

This has the potential to grow into a very big problem. Even worse:

Businesses do not enjoy the same legal protections as consumers when banking online. Consumers typically have up to 60 days from the receipt of a monthly statement to dispute any unauthorized charges.

In contrast, companies that bank online are regulated under the Uniform Commercial Code, which holds that commercial banking customers have roughly two business days to spot and dispute unauthorized activity if they want to hold out any hope of recovering unauthorized transfers from their accounts.

And, of course, the security externality means that the banks care much less:

“The banks spend a lot of money on protecting consumer customers because they owe money if the consumer loses money,” Litan said. “But the banks don’t spend the same resources on the corporate accounts because they don’t have to refund the corporate losses.”

Posted on August 26, 2009 at 5:46 AM

Risk Intuition

People have a natural intuition about risk, and in many ways it’s very good. It fails at times due to a variety of cognitive biases, but for normal risks that people regularly encounter, it works surprisingly well: often better than we give it credit for.

This struck me as I listened to yet another conference presenter complaining about security awareness training. He was talking about the difficulty of getting employees at his company to actually follow his security policies: encrypting data on memory sticks, not sharing passwords, not logging in from untrusted wireless networks. “We have to make people understand the risks,” he said.

It seems to me that his co-workers understand the risks better than he does. They know what the real risks are at work, and that they all revolve around not getting the job done. Those risks are real and tangible, and employees feel them all the time. The risks of not following security procedures are much less real. Maybe the employee will get caught, but probably not. And even if he does get caught, the penalties aren’t serious.

Given this accurate risk analysis, any rational employee will regularly circumvent security to get his or her job done. That’s what the company rewards, and that’s what the company actually wants.

“Fire someone who breaks security procedure, quickly and publicly,” I suggested to the presenter. “That’ll increase security awareness faster than any of your posters or lectures or newsletters.” If the risks are real, people will get it.

You see the same sort of risk intuition on motorways. People are less careful about posted speed limits than they are about the actual speeds police issue tickets for. It’s also true on the streets: people respond to real crime rates, not public officials proclaiming that a neighbourhood is safe.

The warning stickers on ladders might make you think the things are considerably riskier than they are, but people have a good intuition about ladders and ignore most of the warnings. (This isn’t to say that some people don’t do stupid things around ladders, but for the most part they’re safe. The warnings are more about the risk of lawsuits to ladder manufacturers than risks to people who climb ladders.)

As a species, we are naturally tuned in to the risks inherent in our environment. Throughout our evolution, our survival depended on making reasonably accurate risk management decisions intuitively, and we’re so good at it, we don’t even realise we’re doing it.

Parents know this. Children have surprisingly perceptive risk intuition. They know when parents are serious about a threat and when their threats are empty. And they respond to the real risks of parental punishment, not the inflated risks based on parental rhetoric. Again, awareness training lectures don’t work; there have to be real consequences.

It gets even weirder. The University College London professor John Adams popularised the metaphor of a mental risk thermostat. We tend to seek some natural level of risk, and if something becomes less risky, we tend to make it more risky. Motorcycle riders who wear helmets drive faster than riders who don’t.

Our risk thermostats aren’t perfect (that newly helmeted motorcycle rider will still decrease his overall risk) and will tend to remain within the same domain (he might drive faster, but he won’t increase his risk by taking up smoking), but in general, people demonstrate an innate and finely tuned ability to understand and respond to risks.

Of course, our risk intuition fails spectacularly and often with regard to rare risks, unknown risks, voluntary risks, and so on. But when it comes to the common risks we face every day—the kinds of risks our evolutionary survival depended on—we’re pretty good.

So whenever you see someone in a situation who you think doesn’t understand the risks, stop first and make sure you understand the risks. You might be surprised.

This essay previously appeared in The Guardian.

EDITED TO ADD (8/12): Commentary on risk thermostat.

Posted on August 6, 2009 at 5:08 AM

Regulating Chemical Plant Security

The New York Times has an editorial on regulating chemical plants:

Since Sept. 11, 2001, experts have warned that an attack on a chemical plant could produce hundreds of thousands of deaths and injuries. Public safety and environmental advocates have fought for strong safety rules, but the chemical industry used its clout in Congress in 2006 to ensure that only a weak law was enacted.

That law sunsets this fall, and the moment is right to move forward. For the first time in years, there is a real advocate for chemical plant security in the White House. As a senator, President Obama co-sponsored a strong bill, and he raised the issue repeatedly in last year’s campaign. Both chambers of Congress are controlled by Democrats who have been far more supportive than Republicans of tough safety rules.

A good bill is moving through the House. It would require the highest-risk chemical plants to switch to less dangerous chemicals only in limited circumstances, but Republicans have still been fighting it. In the House Homeland Security Committee, the Republicans recently succeeded in adding several weakening amendments, including one that could block implementation of safer-chemical rules if they cost jobs. Saving jobs is important, but not if it means putting large numbers of Americans at risk of a deadly attack.

The Obama administration needs to come out forcefully for a clean bill that contains strong safety rules without the Republican loopholes. Janet Napolitano, the secretary of homeland security, said last week that she considers chemical plants a major vulnerability and promised that the administration will be speaking out on the subject in the days ahead.

It is looking increasingly likely that Congress will extend the current inadequate law for another year to take more time to come up with an alternative. That would be regrettable. There is no excuse for continuing to expose the nation to attacks that could lead to mass casualties.

The problem is a classic security externality, which I wrote about in 2007:

Any rational chemical plant owner will only secure the plant up to its value to him. That is, if the plant is worth $100 million, then it makes no sense to spend $200 million on securing it. If the odds of it being attacked are less than 1 percent, it doesn’t even make sense to spend $1 million on securing it. The math is more complicated than this, because you have to factor in such things as the reputational cost of having your name splashed all over the media after an incident, but that’s the basic idea.

But to society, the cost of an actual attack can be much, much greater. If a terrorist blows up a particularly toxic plant in the middle of a densely populated area, deaths could be in the tens of thousands and damage could be in the hundreds of millions. Indirect economic damage could be in the billions. The owner of the chlorine plant would pay none of these potential costs.

Sure, the owner could be sued. But he’s not at risk for more than the value of his company, and—in any case—he’d probably be smarter to take the chance. Expensive lawyers can work wonders, courts can be fickle, and the government could step in and bail him out (as it did with airlines after Sept. 11). And a smart company can often protect itself by spinning off the risky asset in a subsidiary company, or selling it off completely. The overall result is that our nation’s chemical plants are secured to a much smaller degree than the risk warrants.
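The argument in that excerpt is just expected-value arithmetic. A back-of-the-envelope sketch with illustrative numbers (the societal figure is invented; the essay says only “hundreds of millions” direct plus “billions” indirect):

```python
# Back-of-the-envelope expected-loss arithmetic (illustrative numbers only).
attack_probability = 0.01             # assumed chance of an attack on this plant
plant_value        = 100_000_000      # loss the owner actually bears
societal_damage    = 2_000_000_000    # deaths, injuries, indirect costs borne by others (invented)

owner_expected_loss   = attack_probability * plant_value      # $1M: caps the owner's rational spending
society_expected_loss = attack_probability * societal_damage  # $20M: the cost the owner externalizes

print(f"Owner rationally spends at most about ${owner_expected_loss:,.0f} on security")
print(f"Society's expected loss is roughly    ${society_expected_loss:,.0f}")
```

The gap between those two expected losses is the externality: the owner rationally stops spending at the first number, while society would want security sized to the second.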

Posted on August 4, 2009 at 12:52 PM
