Entries Tagged "externalities"

Tax Breaks for Good Security

Congress is talking—it’s just talking, but at least it’s talking—about giving tax breaks to companies with good cybersecurity.

The devil is in the details, and this could turn into a meaningless handout, but the idea is sound. A rational company will protect an asset only up to that asset’s value to the company. The problem is that many of the security risks to digital assets are not risks to the company that owns them. This is an externality. So if we all need a company to protect its digital assets to some higher level, then we need to pay for that extra protection. (At least we do in a capitalist society.) We can pay through regulation or liability, which translates into higher prices for whatever the company sells. Or we can pay by directly funding that extra security, either by writing a check or by reducing taxes. But we can’t expect a company to spend the extra money out of the goodness of its heart.

Posted on October 13, 2005 at 8:02 AM

RFID Car Keys

RFID car keys (subscription required) are becoming more popular. Since these devices broadcast a unique serial number, it’s only a matter of time before a significant percentage of the population can be tracked with them.

Lexus has made what it calls the “SmartAccess” keyless-entry system standard on its new IS sedans, designed to compete with German cars like the BMW 3 series or the Audi A4, as well as rivals such as the Infiniti G35 or the U.S.-made Cadillac CTS. BMW offers what it calls “keyless go” as an option on the new 3 series, and on its higher-priced 5, 6 and 7 series sedans.

Volkswagen AG’s Audi brand offers keyless-start systems on its A6 and A8 sedans, but not yet on U.S.-bound A4s. Cadillac’s new STS sedan, big brother to the CTS, also offers a pushbutton start.

Starter buttons have a racy flair—European sports cars and race cars used them in the past. The proliferation of starter buttons in luxury sedans has its roots in theft protection. An increasing number of cars now come with theft-deterrent systems that rely on a chip in the key fob that broadcasts a code to a receiver in the car. If the codes don’t match, the car won’t start.

Cryptography can be used to make these devices anonymous, but there’s no business reason for automobile manufacturers to field such a system. Once again, the economic barriers to security are far greater than the technical ones.
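For concreteness, here is a minimal sketch of the kind of anonymous scheme I mean, assuming a shared secret provisioned into both car and fob. Everything here is hypothetical, not any manufacturer’s actual protocol. Instead of broadcasting a fixed serial number, the fob answers a fresh random challenge with a keyed hash, so an eavesdropper sees nothing linkable across sessions:

```python
# Hypothetical sketch of an anonymous challenge-response key protocol.
# Instead of broadcasting a fixed serial number, the fob proves knowledge
# of a per-car secret; an eavesdropper sees only a fresh random challenge
# and an HMAC value, neither of which links two sessions together.
import hmac, hashlib, os

SHARED_SECRET = os.urandom(32)  # provisioned into both car and fob (assumed)

def car_issue_challenge() -> bytes:
    """Car broadcasts a fresh random challenge for each unlock attempt."""
    return os.urandom(16)

def fob_respond(secret: bytes, challenge: bytes) -> bytes:
    """Fob answers with an HMAC over the challenge; no static ID is sent."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def car_verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = car_issue_challenge()
response = fob_respond(SHARED_SECRET, challenge)
assert car_verify(SHARED_SECRET, challenge, response)
```

Nothing the fob transmits is constant, so there is no serial number to track. The barrier, again, is economic rather than technical.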

Posted on October 5, 2005 at 8:13 AM

Combating Spam

Spam is back in the news, and this time it has a new name: “spit” (spam over Internet telephony), or voice-over-IP spam. Spit has the potential to completely ruin VoIP. No one is going to install a VoIP system if it means getting dozens of calls a day from audio spammers. Or, at least, they’re only going to accept phone calls from a white list of previously known callers.

VoIP spam joins the ranks of e-mail spam, Usenet newsgroup spam, instant message spam, cell phone text message spam, and blog comment spam. And, if you think broadly enough, these computer-network spam delivery mechanisms join the ranks of computer telemarketing (phone spam), junk mail (paper spam), billboards (visual space spam), and cars driving through town with megaphones (audio spam). It’s all basically the same thing—unsolicited marketing messages—and only by understanding the problem at this level of generality can we discuss solutions.

In general, the goal of advertising is to influence people. Usually it’s to influence people to purchase a product, but it could just as easily be to influence people to support a particular political candidate or position. Advertising does this by implanting a marketing message into the brain of the recipient. The mechanism of implantation is simply a tactic.

Tactics for unsolicited marketing messages rise and fall in popularity based on their cost and benefit. If the benefit is significant, people are willing to spend more. If the benefit is small, people will only do it if it is cheap. A 30-second prime-time television ad costs 1.8 cents per adult viewer, a full-page color magazine ad about 0.9 cents per reader. A highway billboard costs 0.21 cents per car. Direct mail is the most expensive, at over 50 cents per third-class letter mailed. (That’s why targeted mailing lists are so valuable; they increase the per-piece benefit.)

Spam is such a common tactic not because it’s particularly effective; the response rates for spam are very low. It’s common because it’s ridiculously cheap. Typically, spammers charge less than a hundredth of a cent per e-mail. (And that number is just what spamming houses charge their customers to deliver spam; if you’re a clever hacker, you can build your own spam network for much less money.) If it’s worth $10 to you to successfully influence one person—to buy your product, vote for your guy, whatever—then you need only a 1-in-100,000 success rate. You can market really marginal products with spam.
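You can check these break-even numbers yourself. Here is a back-of-the-envelope sketch in Python; the per-contact costs are the figures quoted above, and the $10 value per conversion is the assumption from the text:

```python
# Break-even response rates for different advertising tactics.
# Costs per contact are the figures quoted above, in dollars.
channels = {
    "prime-time TV ad": 0.018,
    "magazine ad": 0.009,
    "highway billboard": 0.0021,
    "direct mail": 0.50,
    "spam e-mail": 0.0001,  # "less than a hundredth of a cent"
}

VALUE_PER_CONVERSION = 10.00  # assumed value of influencing one person

for channel, cost in channels.items():
    # A tactic breaks even when cost per contact equals value times response rate.
    breakeven_rate = cost / VALUE_PER_CONVERSION
    print(f"{channel:>20}: break-even at 1 in {1 / breakeven_rate:,.0f}")
```

Spam breaks even at 1 in 100,000; direct mail needs 1 in 20. That asymmetry is the whole story.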

So far, so good. But the cost/benefit calculation is missing a component: the “cost” of annoying people. Everyone who is not influenced by the marketing message is annoyed to some degree. The advertiser pays only a partial cost for that annoyance; people might boycott his product, but most of the time he pays nothing, and the cost of the advertising falls on the recipient: the beauty of the landscape is ruined by the billboard, dinner is disrupted by a telemarketer, spam costs money to ship around the Internet and time to wade through, and so on. (Note that I am using “cost” very generally here, and not just monetarily. Time and happiness are both costs.)

This is why spam is so bad. For each e-mail, the spammer pays a cost and receives a benefit. But there is an additional cost, paid by the e-mail recipient, and because so much spam is unwanted, that additional cost is huge—and it’s a cost the spammer never sees. If spammers could be made to bear the total cost of spam, its level would fall to something closer to what society finds acceptable.
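To see how lopsided the ledger is, here is a toy calculation. The per-message price is the one quoted above; the time-wasted and wage figures are pure assumptions for illustration:

```python
# Toy numbers illustrating the externality: the spammer's private cost
# per message versus the cost the recipients collectively bear.
messages       = 1_000_000
private_cost   = messages * 0.0001   # spammer pays ~$0.0001 per message
seconds_wasted = 5                   # assumed time to delete one spam
value_of_time  = 20 / 3600           # assumed $20/hour, expressed per second
social_cost    = messages * seconds_wasted * value_of_time

print(f"spammer pays ${private_cost:,.0f}; recipients pay ${social_cost:,.0f}")
# spammer pays $100; recipients pay $27,778
```

Even with these invented figures, the cost borne by everyone else dwarfs the cost the spammer actually pays.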

This economic analysis is important, because it’s the only way to understand how effective different solutions will be. This is an economic problem, and the solutions need to change the fundamental economics. (The analysis is largely the same for VoIP spam, Usenet newsgroup spam, blog comment spam, and so on.)

The best solutions raise the cost of spam. Spam filters raise the cost by increasing the amount of spam that someone needs to send before anyone will read it. If 99% of all spam is filtered into the trash, then sending spam becomes 100 times more expensive. This is also the idea behind white lists—lists of senders a user is willing to accept e-mail from—and blacklists—lists of senders a user is not willing to accept e-mail from.
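Here is a sketch of both ideas, with made-up addresses, scores, and thresholds: the first function shows the cost arithmetic behind the 100x figure, and the second shows where white lists and blacklists sit relative to content filtering.

```python
# How filtering changes the spammer's economics, plus where white lists
# and blacklists fit. Addresses, scores, and thresholds are illustrative.

def cost_multiplier(filter_rate: float) -> float:
    # To land one message past a filter that discards `filter_rate` of
    # spam, a spammer must send 1 / (1 - filter_rate) messages.
    return 1.0 / (1.0 - filter_rate)

print(cost_multiplier(0.99))  # 100.0 -- the 100x figure from the text

WHITELIST = {"friend@example.com"}   # senders always accepted
BLACKLIST = {"spammer@example.net"}  # senders always rejected

def accept(sender: str, spam_score: float, threshold: float = 0.9) -> bool:
    if sender in WHITELIST:
        return True
    if sender in BLACKLIST:
        return False
    return spam_score < threshold    # otherwise, fall back to content filtering
```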

Filtering doesn’t have to happen only at the recipient’s end. It can be implemented within the network to clean up spam, or at the sender. Several ISPs are already filtering outgoing e-mail for spam, and the practice is likely to spread.

Anti-spam laws raise the cost of spam to an intolerable level; no one wants to go to jail for spamming. We’ve already seen some convictions in the U.S. Unfortunately, this only works when the spammer is within the reach of the law, and is less effective against criminals who are using spam as a mechanism to commit fraud.

Other proposed solutions try to impose direct costs on e-mail senders. I have seen proposals for e-mail “postage,” either for every e-mail sent or for every e-mail above a reasonable threshold. I have seen proposals where the sender of an e-mail posts a small bond, which the receiver can cash if the e-mail is spam. There are other proposals that involve “computational puzzles”: time-consuming tasks the sender’s computer must perform, unnoticeable to someone who is sending e-mail normally, but too much for someone sending e-mail in bulk. These solutions generally involve re-engineering the Internet, something that is not done lightly, and hence are in the discussion stages only.
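The best-known computational-puzzle proposal is hashcash-style proof-of-work. Here is a minimal sketch (the mail-header string and difficulty values are invented for the demo): the sender must find a nonce whose hash falls below a target, which costs a fraction of a second for one e-mail but adds up ruinously across millions.

```python
# Minimal hashcash-style computational puzzle: the sender must find a
# nonce whose hash has a required number of leading zero bits. Cheap for
# one e-mail, prohibitively expensive in bulk.
import hashlib
from itertools import count

def solve(message: bytes, difficulty_bits: int = 20) -> int:
    """Find a nonce such that sha256(message || nonce) starts with
    `difficulty_bits` zero bits. Expected work grows as 2**difficulty_bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(message + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(message: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
    """Verification takes a single hash, so the recipient's cost is trivial."""
    digest = hashlib.sha256(message + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

header = b"mail-from:alice@example.com"        # hypothetical mail header
nonce = solve(header, difficulty_bits=16)       # low difficulty for a fast demo
assert verify(header, nonce, difficulty_bits=16)
```

The asymmetry is the point: solving is expensive, verifying is one hash.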

All of these solutions work to a degree, and we end up with an arms race. Anti-spam products block a certain type of spam. Spammers invent a tactic that gets around those products. Then the products block that spam. Then the spammers invent yet another type of spam. And so on.

Blacklisting spammer sites forced spammers to disguise the origin of spam e-mail. People accepting e-mail only from senders they knew, along with other anti-spam measures, forced spammers to hack into innocent machines and use them as launching pads. Scanning millions of e-mails for identical bulk spam forced spammers to individualize each spam message. Semantic spam detection forced spammers to design even cleverer spam. And so on. Each defense is met with yet another attack, and each attack is met with yet another defense.

Remember that when you think about host identification, or postage, as an anti-spam measure. Spammers don’t care about tactics; they want to send their e-mail. Techniques like this will simply force spammers to rely more on hacked innocent machines. As long as the underlying computers are insecure, we can’t prevent spammers from sending.

This is the problem with another potential solution: re-engineering the Internet to prohibit the forging of e-mail headers. This would make it easier for spam detection software to detect spamming IP addresses, but spammers would just use hacked machines instead of their own computers.

Honestly, there’s no end in sight for the spam arms race. Even so, spam is one of computer security’s success stories. The current crop of anti-spam products works: I get almost no spam, and very few legitimate e-mails end up in my spam trap. I wish the products worked better—Crypto-Gram is occasionally classified as spam by one service or another, for example—but they’re working pretty well. It’ll be a long time before spam stops clogging up the Internet, but at least we don’t have to look at it.

Posted on May 13, 2005 at 9:47 AM

Regulation, Liability, and Computer Security

For a couple of years I have been arguing that liability is a way to fix the economics that underlie our computer security problems. At the RSA Conference this year, I was on a panel on that very topic.

This essay argues that regulation, not liability, is the correct way to solve the underlying economic problems, using the analogy of high-pressure steam engines in the 1800s.

Definitely worth thinking about some more.

Posted on February 25, 2005 at 8:00 AM

ChoicePoint

The ChoicePoint fiasco has been news for over a week now, and there are only a few things I can add. For those who haven’t been following along, ChoicePoint mistakenly sold personal credit reports for about 145,000 Americans to criminals.

This story would never have been made public were it not for SB 1386, a California law requiring companies to notify California residents if any of a specific set of personal information is leaked.

ChoicePoint’s behavior is a textbook example of how to be a bad corporate citizen. The information leakage occurred in October, but the company didn’t tell any victims until February. At first, ChoicePoint notified only 30,000 Californians and said it would not notify anyone who lived outside California (since the law didn’t require it). Only after public outcry did it announce that it would notify everyone affected.

The clear moral here is twofold. First, SB 1386 needs to be a national law, since without it ChoicePoint would have covered up its mistakes forever. And second, the national law needs to force companies to disclose these sorts of privacy breaches immediately, and not allow them to hide for four months behind the “ongoing FBI investigation” shield.

More is required. Note the difference between ChoicePoint’s public marketing slogans and its private reality.

From “Identity Theft Puts Pressure on Data Sellers,” by Evan Perez, in the 18 Feb 2005 Wall Street Journal:

The current investigation involving ChoicePoint began in October when the company found the 50 accounts it said were fraudulent. According to the company and police, criminals opened the accounts, posing as businesses seeking information on potential employees and customers. They paid fees of $100 to $200, and provided fake documentation, gaining access to a trove of personal data including addresses, phone numbers, and social security numbers.

From ChoicePoint Chairman and CEO Derek V. Smith:

ChoicePoint’s core competency is verifying and authenticating individuals and their credentials.

The reason there is a difference is purely economic. Identity theft is the fastest-growing crime in the U.S., and an enormous problem elsewhere in the world. It’s expensive—both in money and time—to the victims. And there’s not much people can do to stop it, as much of their personal identifying information is not under their control: it’s in the computers of companies like ChoicePoint.

ChoicePoint protects its data, but only to the extent that it values it. The hundreds of millions of people in ChoicePoint’s databases are not ChoicePoint’s customers. They have no power to switch credit agencies. They have no economic pressure that they can bring to bear on the problem. Maybe they should rename the company “NoChoicePoint.”

The upshot is that ChoicePoint doesn’t bear the costs of identity theft, so it doesn’t take those costs into account when deciding how much money to spend on data security. In economic terms, this is an “externality.”

The point of regulation is to make externalities internal. SB 1386 did that to some extent, since ChoicePoint must now figure the cost of public humiliation into its decisions about how much money to spend on security. But the actual cost of ChoicePoint’s security failure is much, much greater.

Until ChoicePoint feels those costs—whether through regulation or liability—it has no economic incentive to reduce them. Capitalism works, not through corporate charity, but through the free market. I see no other way of solving the problem.

Posted on February 23, 2005 at 3:19 PM

Computer Security and Liability

Information insecurity is costing us billions. We pay for it in theft: information theft, financial theft. We pay for it in productivity loss, both when networks stop working and in the dozens of minor security inconveniences we all have to endure. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for security, year after year.

The problem is that all the money we spend isn’t fixing the problem. We’re paying, but we still end up with insecurities.

The problem is insecure software. It’s bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the effects of insecure software.

And that’s the problem. We’re not paying to improve the security of the underlying software. We’re paying to deal with the problem rather than to fix it.

The only way to fix this problem is for vendors to fix their software, and they won’t do it until it’s in their financial best interests to do so.

Today, the costs of insecure software aren’t borne by the vendors that produce the software. In economics, this is known as an externality, the cost of a decision that’s borne by people other than those making the decision.

There are no real consequences to the vendors for having bad security or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality.

If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security—especially the security of their customers—it also needs to be in their financial best interests.

Liability law is a way to make it in those organizations’ best interests. Raising the risk of liability raises the costs of doing it wrong and therefore increases the amount of money a CEO is willing to spend to do it right. Security is risk management; liability fiddles with the risk equation.
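Here is that risk equation in toy form, with invented figures, to show why liability changes the CEO’s arithmetic: spending on security is rational only up to the expected loss it avoids, and liability inflates the loss side of the equation.

```python
# Toy version of the risk equation described above. A rational vendor
# spends on security only while the spending reduces its own expected
# loss; liability raises the loss per breach the vendor sees, so the
# break-even security budget rises with it. All figures are invented.

def max_rational_spend(breach_prob: float, vendor_loss: float,
                       liability: float, risk_reduction: float) -> float:
    """Largest security budget that still pays for itself: spending that
    cuts breach probability by `risk_reduction` is worth at most the
    expected loss it avoids."""
    return breach_prob * risk_reduction * (vendor_loss + liability)

# Without liability, the vendor sees only its own $1M loss ...
print(max_rational_spend(0.10, 1_000_000, 0, 0.5))           # $50,000
# ... with $20M of liability, far more security is worth buying.
print(max_rational_spend(0.10, 1_000_000, 20_000_000, 0.5))  # $1,050,000
```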

Basically, we have to tweak the risk equation so the CEO cares about actually fixing the problem, and putting pressure on his balance sheet is the best way to do that.

Clearly, this isn’t all or nothing. There are many parties involved in a typical software attack. There’s the company that sold the software with the vulnerability in the first place. There’s the person who wrote the attack tool. There’s the attacker himself, who used the tool to break into a network. There’s the owner of the network, who was entrusted with defending that network. One hundred percent of the liability shouldn’t fall on the shoulders of the software vendor, just as 100% shouldn’t fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.

We will always pay for security. If software vendors have liability costs, they’ll pass those on to us. It might not be cheaper than what we’re paying today. But as long as we’re going to pay, we might as well pay to fix the problem. Forcing the software vendor to pay to fix the problem and then pass those costs on to us means that the problem might actually get fixed.

Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they’re entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.

Information security isn’t a technological problem. It’s an economics problem. And the way to improve information security is to fix the economics problem. Do that, and everything else will follow.

This essay originally appeared in Computerworld.

An interesting rebuttal of this piece is here.

Posted on November 3, 2004 at 3:00 PM
