Entries Tagged "externalities"

Software Liabilities and Free Software

Whenever I write about software liabilities, many people ask about free and open source software. The worry is that if people who write free software, like Password Safe, were forced to assume liability, they simply couldn’t afford to, and free software would disappear.

Don’t worry, they won’t be.

The key to understanding this is that this sort of contractual liability is part of a contract, and with free software—or free anything—there’s no contract. Free software wouldn’t fall under a liability regime because the writer and the user have no business relationship; they are not seller and buyer. I would hope the courts would realize this without any prompting, but we could always pass a Good Samaritan-like law that would protect people who distribute free software. (The opposite would be an Attractive Nuisance-like law—that would be bad.)

There would be an industry of companies that provide liability protection for free software. If Red Hat, for example, sold Linux commercially, it would have to provide some liability protection. Yes, this would mean charging more for Linux; the extra money would go toward insurance premiums. The same sort of insurance protection would be available to companies that use other free software packages.

The insurance industry is key to making this work. Luckily, they’re good at protecting people against liabilities. There’s no reason to think they won’t be able to do it here.

I’ve written more about liabilities and the insurance industry here.

Posted on July 28, 2008 at 2:42 PM

Information Security and Liabilities

In my fourth column for the Guardian last Thursday, I talk about information security and liabilities:

Last summer, the House of Lords Science and Technology Committee issued a report on “Personal Internet Security.” I was invited to give testimony for that report, and one of my recommendations was that software vendors be held liable when they are at fault. Their final report included that recommendation. The government rejected the recommendations in that report last autumn, and last week the committee issued a report on their follow-up inquiry, which still recommends software liabilities.

Good for them.

I’m not implying that liabilities are easy, or that all the liability for security vulnerabilities should fall on the vendor. But the courts are good at partial liability. Any automobile liability suit has many potential responsible parties: the car, the driver, the road, the weather, possibly another driver and another car, and so on. Similarly, a computer failure has several parties who may be partially responsible: the software vendor, the computer vendor, the network vendor, the user, possibly another hacker, and so on. But we’re never going to get there until we start. Software liability is the market force that will incentivise companies to improve their software quality—and everyone’s security.

Posted on July 23, 2008 at 3:09 PM

Chemical Plant Security and Externalities

It’s not true that no one worries about terrorists attacking chemical plants; it’s just that our politics seem to leave us unable to deal with the threat.

Toxins such as ammonia, chlorine, propane and flammable mixtures are constantly being produced or stored in the United States as a result of legitimate industrial processes. Chlorine gas is particularly toxic; in addition to bombing a plant, someone could hijack a chlorine truck or blow up a railcar. Phosgene is even more dangerous. According to the Environmental Protection Agency, there are 7,728 chemical plants in the United States where an act of sabotage—or an accident—could threaten more than 1,000 people. Of those, 106 facilities could threaten more than a million people.

The problem of securing chemical plants against terrorism—or even accidents—is actually simple once you understand the underlying economics. Normally, we leave the security of something up to its owner. The basic idea is that the owner of each chemical plant 1) best understands the risks, and 2) is the one who loses out if security fails. Any outsider—i.e., regulatory agency—is just going to get it wrong. It’s the basic free-market argument, and in most instances it makes a lot of sense.

And chemical plants do have security. They have fences and guards (which might or might not be effective). They have fail-safe mechanisms built into their operations. For example, many large chemical companies use hazardous substances like phosgene, methyl isocyanate and ethylene oxide in their plants, but don’t ship them between locations. They minimize the amounts that are stored as process intermediates. In rare cases of extremely hazardous materials, no significant amounts are stored; instead they are only present in pipes connecting the reactors that make them with the reactors that consume them.

This is all good and right, and what free-market capitalism dictates. The problem is, that isn’t enough.

Any rational chemical plant owner will only secure the plant up to its value to him. That is, if the plant is worth $100 million, then it makes no sense to spend $200 million on securing it. If the odds of it being attacked are less than 1 percent, it doesn’t even make sense to spend $1 million on securing it. The math is more complicated than this, because you have to factor in such things as the reputational cost of having your name splashed all over the media after an incident, but that’s the basic idea.
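
An illustrative back-of-the-envelope version of that calculation, in a few lines of Python with entirely made-up numbers, makes the asymmetry concrete. The figures below are assumptions for a hypothetical plant, not data from the essay:

    # Illustrative sketch only: hypothetical numbers for one plant owner's decision.
    attack_probability = 0.01         # assumed annual odds of a successful attack
    plant_value = 100_000_000         # what the owner himself stands to lose
    societal_cost = 5_000_000_000     # assumed deaths, property and indirect economic damage

    owner_expected_loss = attack_probability * plant_value        # $1 million
    society_expected_loss = attack_probability * societal_cost    # $50 million

    # A rational owner caps his security spending near his own expected loss,
    # not society's; the gap between the two numbers is the externality.
    print(owner_expected_loss, society_expected_loss)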

But to society, the cost of an actual attack can be much, much greater. If a terrorist blows up a particularly toxic plant in the middle of a densely populated area, deaths could be in the tens of thousands and damage could be in the hundreds of millions. Indirect economic damage could be in the billions. The owner of the chlorine plant would pay none of these potential costs.

Sure, the owner could be sued. But he’s not at risk for more than the value of his company, and—in any case—he’d probably be smarter to take the chance. Expensive lawyers can work wonders, courts can be fickle, and the government could step in and bail him out (as it did with airlines after Sept. 11). And a smart company can often protect itself by spinning off the risky asset in a subsidiary company, or selling it off completely. The overall result is that our nation’s chemical plants are secured to a much smaller degree than the risk warrants.

In economics, this is called an externality: an effect of a decision not borne by the decision maker. The decision maker in this case, the chemical plant owner, makes a rational economic decision based on the risks and costs to him.

If we—whether we’re the community living near the chemical plant or the nation as a whole—expect the owner of that plant to spend money for increased security to account for those externalities, we’re going to have to pay for it. And we have three basic ways of doing that. One, we can do it ourselves, stationing government police or military or contractors around the chemical plants. Two, we can pay the owners to do it, subsidizing some sort of security standard.

Or three, we could regulate security and force the companies to pay for it themselves. There’s no free lunch, of course. “We,” as in society, still pay for it in increased prices for whatever the chemical plants are producing, but the cost is paid for by the product’s consumers rather than by taxpayers in general.

Personally, I don’t care very much which method is chosen: that’s politics, not security. But I do know we’ll have to pick one, or some combination of the three. Asking nicely just isn’t going to work. It can’t; not in a free-market economy.

We taxpayers pay for airport security, and not the airlines, because the overall effects of a terrorist attack against an airline are far greater than the effects on the particular airline targeted. We pay for port security because the effects of bringing a large weapon into the country are far greater than the concerns of the port’s owners. And we should pay for chemical plant, train and truck security for exactly the same reasons.

Thankfully, after years of hoping the chemical industry would do it on its own, this April the Department of Homeland Security started regulating chemical plant security. Some complain that the regulations don’t go far enough, but at least it’s a start.

This essay previously appeared on Wired.com.

Posted on October 18, 2007 at 7:26 AM

Debating Full Disclosure

Full disclosure—the practice of making the details of security vulnerabilities public—is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers (See The Vulnerability Disclosure Game: Are We More Secure?). The problem, according to this position, is less the vulnerability itself and more the information about the vulnerability.

But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

To understand why the second assumption isn’t true, you need to understand the underlying economics. To a software company, vulnerabilities are largely an externality. That is, they affect you—the user—much more than they affect it. A smart vendor treats vulnerabilities less as a software problem, and more as a PR problem. So if we, the user community, want software vendors to patch vulnerabilities, we need to make the PR problem more acute.

Full disclosure does this. Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

It wasn’t until researchers published complete details of the vulnerabilities that the software companies started fixing them.

Of course, the software companies hated this. They received bad PR every time a vulnerability was made public, and the only way to get some good PR was to quickly release a patch. For a large company like Microsoft, this was very expensive.

So a bunch of software companies, and some security researchers, banded together and invented “responsible disclosure” (See “The Chilling Effect”). The basic idea was that the threat of publishing the vulnerability is almost as good as actually publishing it. A responsible researcher would quietly give the software vendor a head start on patching its software, before releasing the vulnerability to the public.

This was a good idea—and these days it’s normal procedure—but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.

The moral here doesn’t just apply to software; it’s very general. Public scrutiny is how security improves, whether we’re talking about software or airport security or government counterterrorism measures. Yes, there are trade-offs. Full disclosure means that the bad guys learn about the vulnerability at the same time as the rest of us—unless, of course, they knew about it beforehand—but most of the time the benefits far outweigh the disadvantages.

Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security, and inhibits security education that leads to improvements. Secrecy doesn’t improve security; it stifles it.

I’d rather have as much information as I can to make an informed decision about security, whether it’s a buying decision about a software product or an election decision about two political parties. I’d rather have the information I need to pressure vendors to improve security.

I don’t want to live in a world where companies can sell me software they know is full of holes or where the government can implement security measures without accountability. I much prefer a world where I have all the information I need to assess and protect my own security.

This essay originally appeared on CSOOnline, as part of a series of essays on the topic. Marcus Ranum wrote against the practice of disclosing vulnerabilities, and Mark Miller of Microsoft wrote in favor of responsible disclosure. These are on-line-only sidebars to a very interesting article in CSO Magazine, “The Chilling Effect,” about the confluence of forces that are making it harder to research and disclose vulnerabilities in web-based software:

“Laws say you can’t access computers without permission,” she [attorney Jennifer Granick] explains. “Permission on a website is implied. So far, we’ve relied on that. The Internet couldn’t work if you had to get permission every time you wanted to access something. But what if you’re using a website in a way that’s possible but that the owner didn’t intend? The question is whether the law prohibits you from exploring all the ways a website works,” including through vulnerabilities.

All the links are worth reading in full.

A Simplified Chinese translation by Xin LI is available on Delphij’s Chaos.

Posted on January 23, 2007 at 6:45 AM

Information Security and Externalities

Information insecurity is costing us billions. There are many different ways in which we pay for information insecurity. We pay for it in theft, such as information theft, financial theft and theft of service. We pay for it in productivity loss, both when networks stop functioning and in the dozens of minor security inconveniences we all have to endure on a daily basis. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for the lack of security, year after year.

Fundamentally, the issue is insecure software. It is a result of bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the myriad effects of insecure software. Unfortunately, the money spent does not improve the security of that software. We are paying to mitigate the risk rather than fix the problem.

The only way to fix the problem is for vendors to improve their software. They need to design security into their products from the start and not as an add-on feature. Software vendors also need to institute good security practices and improve the overall quality of their products. But they will not do this until it is in their financial best interests to do so. And so far, it is not.

The reason is easy to explain. In a capitalist society, businesses are profit-making ventures, so they make decisions based on both short- and long-term profitability. This holds true for decisions about product features and sale prices, but it also holds true for software. Vendors try to balance the costs of more secure software—extra developers, fewer features, longer time to market—against the costs of insecure software: expense to patch, occasional bad press, potential loss of sales.

So far, so good. But what the vendors do not look at is the total costs of insecure software; they only look at what insecure software costs them. And because of that, they miss a lot of the costs: all the money we, the software product buyers, are spending on security. In economics, this is known as an externality: the cost of a decision that is borne by people other than those taking the decision.

Normally, you would expect users to respond by favouring secure products over insecure products—after all, users are also making their buying decisions based on the same capitalist model. Unfortunately, that is not generally possible. In some cases software monopolies limit the available product choice; in other cases, the ‘lock-in effect’ created by proprietary file formats or existing infrastructure or compatibility requirements makes it harder to switch; and in still other cases, none of the competing companies have made security a differentiating characteristic. In all cases, it is hard for an average buyer to distinguish a truly secure product from an insecure product with a ‘trust us’ marketing campaign.

Because of all these factors, there are no real consequences to the vendors for having insecure or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality. The result is what we have all witnessed: insecure software. Companies find that it is cheaper to weather the occasional press storm, spend money on PR campaigns touting good security and fix public problems after the fact, than to design security in from the beginning.

And so the externality remains…

If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security—especially the security of their customers—it also needs to be in their financial best interests.

Liability law is one way to make it in those organisations’ best interests. If end users could sue software manufacturers for product defects, then the cost of those defects to the software manufacturers would rise. Manufacturers would then pay the true economic cost for poor software, and not just a piece of it. So when they balance the cost of making their software secure versus the cost of leaving their software insecure, there would be more costs on the latter side. This would provide an incentive for them to make their software more secure.
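
To see how liability tips that balance, here is a minimal sketch, again in Python and again with hypothetical numbers invented for illustration; the point is only that shifting some of the users’ losses back onto the vendor can flip the decision:

    # Illustrative sketch only: a vendor's ship-insecure-vs-build-secure decision.
    cost_to_build_secure = 20_000_000      # extra developers, fewer features, delay
    vendor_cost_of_insecure = 5_000_000    # patching, occasional bad press, lost sales
    user_cost_of_insecure = 60_000_000     # the externality: what buyers spend cleaning up

    # Without liability, the vendor compares only its own costs; insecurity wins.
    ships_insecure_without_liability = vendor_cost_of_insecure < cost_to_build_secure

    # With liability, some assumed share of user losses comes back to the vendor.
    liability_share = 0.5
    vendor_cost_with_liability = vendor_cost_of_insecure + liability_share * user_cost_of_insecure
    ships_insecure_with_liability = vendor_cost_with_liability < cost_to_build_secure

    print(ships_insecure_without_liability)   # True: shipping insecure software is cheaper
    print(ships_insecure_with_liability)      # False: now securing the software is cheaper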

Basically, we have to tweak the risk equation in such a way that the Chief Executive Officer (CEO) of a company cares about actually fixing the problem—and putting pressure on the balance sheet is the best way to do that. Security is risk management; liability fiddles with the risk equation.

Clearly, liability is not all or nothing. There are many parties involved in a typical software attack. The list includes:

  • the company that sold the software with the vulnerability in the first place
  • the person who wrote the attack tool
  • the attacker himself, who used the tool to break into a network
  • and finally, the owner of the network, who was entrusted with defending that network.

100% of the liability should not fall on the shoulders of the software vendor, just as 100% should not fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.

Certainly, making software more secure will cost money, and manufacturers will have to pass those costs on to users in the form of higher prices. But users are already paying extra costs for insecure software: the costs of third-party security products, of consultants and security services companies, and the direct and indirect costs of losses. As long as users are going to pay anyway, it would be better to pay to fix the problem. Forcing the software vendor to pay to fix the problem, and then passing those costs on to users, means that the actual problem might get fixed.

Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature, without any regard to security. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they are entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.

Information security is not a technological problem. It is an economics problem. And the way to improve information security is to fix the economics problem. If this is done, companies will come up with the right technological solutions, and vendors will happily implement them. Fail to solve the economics problem, and vendors will not bother implementing or researching any security technologies, regardless of how effective they are.

This essay previously appeared in the European Network and Information Security Agency quarterly newsletter, and is an update to a 2004 essay I wrote for Computerworld.

Posted on January 18, 2007 at 7:04 AM

Wal-Mart Stays Open During Bomb Scare

This is interesting: A Wal-Mart store in Mitchell, South Dakota, receives a bomb threat. The store managers decide not to evacuate while the police search for the bomb. Presumably, they decided that the loss of revenue from an evacuation was not worth the additional security it would have provided:

During the nearly two-hour search Wal-Mart officials opted not to evacuated the busy discount store even though police recomended [sic] they do so. Wal-Mart officials said the call was a hoax and not a threat.

I think this is a good sign. It shows that people are thinking rationally about security trade-offs, and not thoughtlessly being terrorized.

Remember, though: security trade-offs are based on agenda. From the perspective of the Wal-Mart managers, the store’s revenues are the most important; most of the risks of the bomb threat are externalities.

Of course, the store employees have a different agenda—there is no upside to staying open, and only a downside due to the additional risk—and they didn’t like the decision:

The incident has family members of Wal-Mart employees criticizing store officials for failing to take police’s recommendation to evacuate.

Voorhees has worked at the Mitchell discount chain since Wal-Mart Supercenter opened in 2001. Her daughter, Charlotte Goode, 36, said Voorhees called her Sunday, crying and upset as she relayed the story.

“It’s right before Christmas. They were swamped with people,” she said. “To me, they endangerd [sic] the community, customers and associates. They put making a buck ahead of public safety.”

Posted on December 28, 2006 at 1:32 PM

Tracking Automobiles Through their Tires

Automobile tires are now being outfitted with RFID transmitters:

Schrader Bridgeport is the market leader in direct Tire Pressure Monitoring Systems. Direct TPMS use pressure sensors inside each tire to transmit data to a dashboard display alerting drivers to tire pressure problems.

I’ll bet anything you can track cars with them, just as you can track some joggers by their sneakers.

As I said before, the people who are designing these systems are putting “zero thought into security and privacy issues. Unless we enact some sort of broad law requiring companies to add security into these sorts of systems, companies will continue to produce devices that erode our privacy through new technologies. Not on purpose, not because they’re evil—just because it’s easier to ignore the externality than to worry about it.”

Posted on December 27, 2006 at 7:44 AM

Tracking People by their Sneakers

Researchers at the University of Washington have demonstrated a surveillance system that automatically tracks people through the Nike+iPod Sport Kit. Basically, the kit contains a transmitter that you stick in your sneakers and a receiver you attach to your iPod. This allows you to track things like time, distance, pace, and calories burned. Pretty clever.

However, it turns out that the transmitter in your sneaker can be read up to 60 feet away. And because it broadcasts a unique ID, you can be tracked by it. In the demonstration, the researchers built a surveillance device (at a cost of about $250) and interfaced their surveillance system with Google Maps. Details are in the paper. Very scary.

This is a great demonstration for anyone who is skeptical that RFID chips can be used to track people. It’s a good example because the chips have no personal identifying information, yet can still be used to track people. As long as the chips have unique IDs, those IDs can be used for surveillance.
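
A small hypothetical sketch shows why a bare unique ID is enough: a few timestamped sightings of the same ID, collected by receivers in different places, already form a movement profile. The log format, tag ID, and locations below are invented for illustration:

    # Illustrative sketch only: correlating sightings of an anonymous but unique tag ID.
    from collections import defaultdict

    # (tag_id, location, time) tuples, as a cheap receiver near a doorway might log them.
    sightings = [
        ("3F7A", "gym entrance", "2006-12-01 07:05"),
        ("3F7A", "coffee shop", "2006-12-01 08:10"),
        ("3F7A", "office lobby", "2006-12-01 08:45"),
    ]

    # Group sightings by tag ID; each group is a track of one person's movements,
    # even though the tag itself carries no name or other personal information.
    tracks = defaultdict(list)
    for tag_id, place, when in sightings:
        tracks[tag_id].append((when, place))

    for tag_id, track in tracks.items():
        print(tag_id, sorted(track))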

To me, the real significance of this work is how easy it was. The people who designed the Nike/iPod system put zero thought into security and privacy issues. Unless we enact some sort of broad law requiring companies to add security into these sorts of systems, companies will continue to produce devices that erode our privacy through new technologies. Not on purpose, not because they’re evil—just because it’s easier to ignore the externality than to worry about it.

Posted on December 12, 2006 at 1:11 PM

Please Stop My Car

Residents of Prescott Valley are being invited to register their cars if they don’t drive in the middle of the night. Police will then stop those cars if they are on the road at that time, under the assumption that they’re stolen.

The Watch Your Car decal program is a voluntary program whereby vehicle owners enroll their vehicles with the AATA. The vehicle is then entered into a special database, developed and maintained by the AATA, which is directly linked to the Motor Vehicle Division (MVD).

Participants then display the Watch Your Car decals in the front and rear windows of their vehicle. By displaying the decals, vehicle owners convey to law enforcement officials that their vehicle is not usually in use between the hours of 1:00 AM and 5:00 AM, when the majority of thefts occur.

If a police officer witnesses the vehicle in operation between these hours, they have the authority to pull it over and question the driver. With access to the MVD database, the officer will be able to determine if the vehicle has been stolen, or not. The program also allows law enforcement officials to notify the vehicle’s owner immediately upon determination that it is being illegally operated.

This program is entirely optional, but there’s a serious externality. If the police spend time chasing false alarms, they’re not available for other police business. If the town charged car owners a fine for each false alarm, I would have no problems with this program. It doesn’t have to be a large fine, but it has to be enough to offset the cost to the town. It’s no different than police departments charging homeowners for false burglar alarms, when the alarm systems are automatically hooked into the police stations.
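
For what it’s worth, the break-even fine is easy to estimate. The numbers below are assumptions made up for illustration, not figures from the program:

    # Illustrative sketch only: roughly what one false-alarm stop costs the town.
    officer_cost_per_hour = 60.0      # assumed fully loaded cost of an officer's time
    minutes_per_stop = 20             # assumed time spent on the stop and paperwork
    overhead_multiplier = 1.5         # assumed dispatch and opportunity-cost overhead

    break_even_fine = officer_cost_per_hour * (minutes_per_stop / 60) * overhead_multiplier
    print(round(break_even_fine, 2))  # the smallest fine that offsets the cost of a stop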

Posted on October 16, 2006 at 6:30 AM
