Entries Tagged "software liability"

Software Liabilities and Free Software

Whenever I write about software liabilities, many people ask about free and open source software. Their worry is that if people who write free software, like Password Safe, are forced to assume liabilities, they simply won’t be able to, and free software will disappear.

Don’t worry, they won’t be.

The key to understanding this is that this sort of contractual liability is part of a contract, and with free software—or free anything—there’s no contract. Free software wouldn’t fall under a liability regime because the writer and the user have no business relationship; they are not seller and buyer. I would hope the courts would realize this without any prompting, but we could always pass a Good Samaritan-like law that would protect people who distribute free software. (The opposite would be an Attractive Nuisance-like law—that would be bad.)

There would be an industry of companies that provide liability protection for free software. If Red Hat, for example, sold free Linux, it would have to provide some liability protection. Yes, this would mean charging more for Linux; the extra would go toward insurance premiums. The same sort of insurance protection would be available to companies that use other free software packages.

The insurance industry is key to making this work. Luckily, they’re good at protecting people against liabilities. There’s no reason to think they won’t be able to do it here.

I’ve written more about liabilities and the insurance industry here.

Posted on July 28, 2008 at 2:42 PM

Information Security and Liabilities

In my fourth column for the Guardian last Thursday, I talk about information security and liabilities:

Last summer, the House of Lords Science and Technology Committee issued a report on “Personal Internet Security.” I was invited to give testimony for that report, and one of my recommendations was that software vendors be held liable when they are at fault. Their final report included that recommendation. The government rejected the recommendations in that report last autumn, and last week the committee issued a report on their follow-up inquiry, which still recommends software liabilities.

Good for them.

I’m not implying that liabilities are easy, or that all the liability for security vulnerabilities should fall on the vendor. But the courts are good at partial liability. Any automobile liability suit has many potential responsible parties: the car, the driver, the road, the weather, possibly another driver and another car, and so on. Similarly, a computer failure has several parties who may be partially responsible: the software vendor, the computer vendor, the network vendor, the user, possibly a hacker, and so on. But we’re never going to get there until we start. Software liability is the market force that will incentivise companies to improve their software quality—and everyone’s security.

Posted on July 23, 2008 at 3:09 PM

House of Lords on Computer Security

The Science and Technology Committee of the UK House of Lords has issued a report (pdf here) on “Personal Internet Security.” It’s 121 pages long. Richard Clayton, who helped the committee, has a good summary of the report on his blog. Among other things, the Lords recommend various consumer notification standards, a data-breach disclosure law, and a liability regime for software.

Another summary lists:

  • Increase the resources and skills available to the police and criminal justice system to catch and prosecute e-criminals.
  • Establish a centralised and automated system, administered by law enforcement, for the reporting of e-crime.
  • Provide incentives to banks and other companies trading online to improve data security by establishing a data security breach notification law.
  • Improve standards of new software and hardware by moving towards legal liability for damage resulting from security flaws.
  • Encourage Internet Service Providers to improve the customer security they offer by establishing a “kite mark” for internet services.

If that sounds like a lot of the things I’ve been saying for years, there’s a reason for that. Earlier this year, I testified before the committee (transcript here), where I recommended some of these things. (Sadly, I didn’t get to wear a powdered wig.)

This report is a long way from anything even closely resembling a law, but it’s a start. Clayton writes:

The Select Committee reports are the result of in-depth study of particular topics, by people who reached the top of their professions (who are therefore quick learners, even if they start by knowing little of the topic), and their careful reasoning and endorsement of convincing expert views, carries considerable weight. The Government is obliged to formally respond, and there will, at some point, be a few hours of debate on the report in the House of Lords.

If you’re interested, the entire body of evidence the committee considered is here (pdf version here). I don’t recommend reading it; it’s absolutely huge, and a lot of it is corporate drivel.

EDITED TO ADD (8/13): I have written about software liabilities before, here and here.

EDITED TO ADD (8/22): Good article here:

They agreed ‘wholeheartedly’ with security guru, and successful author, Bruce Schneier, that the activities of ‘legitimate researchers’ trying to ‘break things to learn to think like the bad guys’ should not be criminalized in forthcoming UK legislation, and they supported the pressing need for a data breach reporting law; in drafting such a law, the UK government could learn from lessons learnt in the US states that have such laws. Such a law should cover the banks, and other sectors, and not simply apply to “communication providers”—a proposal presently under consideration by the EU Commission, which the peers clearly believed would be ineffective in creating incentives to improve security across the board.

Posted on August 13, 2007 at 6:35 AM

Information Security and Externalities

Information insecurity is costing us billions. There are many different ways in which we pay for information insecurity. We pay for it in theft, such as information theft, financial theft and theft of service. We pay for it in productivity loss, both when networks stop functioning and in the dozens of minor security inconveniences we all have to endure on a daily basis. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for the lack of security, year after year.

Fundamentally, the issue is insecure software. It is a result of bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the myriad effects of insecure software. Unfortunately, the money spent does not improve the security of that software. We are paying to mitigate the risk rather than fix the problem.

The only way to fix the problem is for vendors to improve their software. They need to design security into their products from the start, not as an add-on feature. Software vendors also need to institute good security practices and improve the overall quality of their products. But they will not do this until it is in their financial best interests to do so. And so far, it is not.

The reason is easy to explain. In a capitalist society, businesses are profit-making ventures, so they make decisions based on both short- and long-term profitability. This holds true for decisions about product features and sale prices, but it also holds true for software. Vendors try to balance the costs of more secure software—extra developers, fewer features, longer time to market—against the costs of insecure software: expense to patch, occasional bad press, potential loss of sales.

So far, so good. But what the vendors do not look at is the total costs of insecure software; they only look at what insecure software costs them. And because of that, they miss a lot of the costs: all the money we, the software product buyers, are spending on security. In economics, this is known as an externality: the cost of a decision that is borne by people other than those taking the decision.

Normally, you would expect users to respond by favouring secure products over insecure products—after all, users are also making their buying decisions based on the same capitalist model. Unfortunately, that is not generally possible. In some cases software monopolies limit the available product choice; in other cases, the ‘lock-in effect’ created by proprietary file formats or existing infrastructure or compatibility requirements makes it harder to switch; and in still other cases, none of the competing companies have made security a differentiating characteristic. In all cases, it is hard for an average buyer to distinguish a truly secure product from an insecure product with a ‘trust us’ marketing campaign.

Because of all these factors, there are no real consequences to the vendors for having insecure or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality. The result is what we have all witnessed: insecure software. Companies find that it is cheaper to weather the occasional press storm, spend money on PR campaigns touting good security and fix public problems after the fact, than to design security in from the beginning.

And so the externality remains…

If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security—especially the security of their customers—it also needs to be in their financial best interests.

Liability law is one way to make it in those organisations’ best interests. If end users could sue software manufacturers for product defects, then the cost of those defects to the software manufacturers would rise. Manufacturers would then pay the true economic cost for poor software, and not just a piece of it. So when they balance the cost of making their software secure versus the cost of leaving their software insecure, there would be more costs on the latter side. This would provide an incentive for them to make their software more secure.
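
To make that concrete, here is a rough back-of-the-envelope sketch. All of the numbers are invented for illustration; the point is only how a liability share changes the vendor’s own calculation.

    # Hypothetical figures for a vendor deciding whether to invest in secure development.
    SECURE_DEVELOPMENT_COST = 10_000_000    # extra developers, longer schedule, fewer features
    EXPECTED_BREACH_DAMAGE = 50_000_000     # total losses a serious vulnerability imposes on users

    def vendor_cost_of_shipping_insecure(liability_share):
        """Cost the vendor itself bears if it skips the security investment.

        liability_share: fraction of the breach damage courts assign to the vendor
        (roughly zero today; greater than zero under a liability regime).
        """
        patching_and_pr = 2_000_000         # what insecurity costs the vendor today
        return patching_and_pr + liability_share * EXPECTED_BREACH_DAMAGE

    # Today: insecurity costs the vendor $2M, so the $10M investment looks like a bad deal.
    print(vendor_cost_of_shipping_insecure(0.0))   # 2,000,000

    # With even partial liability (say 30%), insecurity costs the vendor $17M,
    # and the $10M investment becomes the cheaper option: the externality is internalized.
    print(vendor_cost_of_shipping_insecure(0.3))   # 17,000,000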

Basically, we have to tweak the risk equation in such a way that the Chief Executive Officer (CEO) of a company cares about actually fixing the problem—and putting pressure on the balance sheet is the best way to do that. Security is risk management; liability fiddles with the risk equation.

Clearly, liability is not all or nothing. There are many parties involved in a typical software attack. The list includes:

  • the company that sold the software with the vulnerability in the first place
  • the person who wrote the attack tool
  • the attacker himself, who used the tool to break into a network
  • and finally, the owner of the network, who was entrusted with defending that network.

100% of the liability should not fall on the shoulders of the software vendor, just as 100% should not fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.
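
As a toy illustration of what partial liability might look like in practice (the split below is entirely made up; a court would apportion fault case by case):

    # Apportioning a hypothetical $1M loss among the parties listed above.
    loss = 1_000_000
    fault_shares = {
        "software vendor": 0.40,      # shipped the vulnerable product
        "attack-tool author": 0.10,
        "attacker": 0.35,             # often never caught, or judgment-proof
        "network owner": 0.15,        # inadequate defenses
    }
    assert abs(sum(fault_shares.values()) - 1.0) < 1e-9
    for party, share in fault_shares.items():
        print(f"{party}: ${share * loss:,.0f}")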

Certainly, making software more secure will cost money, and manufacturers will have to pass those costs on to users in the form of higher prices. But users are already paying extra costs for insecure software: costs of third-party security products, costs of consultants and security services companies, direct and indirect costs of losses. But as long as one is going to pay anyway, it would be better to pay to fix the problem. Forcing the software vendor to pay to fix the problem and then passing those costs on to users means that the actual problem might get fixed.

Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature, without any regard to security. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they are entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.

Information security is not a technological problem. It is an economics problem. And the way to improve information security is to fix the economics problem. If this is done, companies will come up with the right technological solutions that vendors will happily implement. Fail to solve the economics problem, and vendors will not bother implementing or researching any security technologies, regardless of how effective they are.

This essay previously appeared in the European Network and Information Security Agency quarterly newsletter, and is an update to a 2004 essay I wrote for Computerworld.

Posted on January 18, 2007 at 7:04 AM

Unreliable Programming

One response to software liability:

Now suppose that there was a magical wand for taking snapshots of computer states just before crashes. Or that the legal system would permit claims on grounds of only the second part of the proof. Then there would be a strong positive incentive to write software that fails unreproducibly: “If our software’s errors cannot be demonstrated reliably in court, we will never lose money in product liability cases.”

Follow the link for examples.
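
Here’s a contrived sketch of my own, not one of the linked examples, of what a deliberately unreproducible failure might look like:

    import os
    import random

    def transfer(amount):
        """Toy example of the perverse incentive described above: the bug fires
        rarely, is driven by a non-seedable entropy source, and exits without
        raising, so there is no stack trace and no reliable way to reproduce
        the failure on demand."""
        if random.SystemRandom().random() < 1e-6:
            os._exit(1)                  # abrupt, silent exit: nothing to replay in court
        return round(amount * 1.02, 2)   # normal path: apply a 2% fee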

Posted on July 11, 2006 at 7:47 AM

Aligning Interest with Capability

Have you ever been to a retail store and seen this sign on the register: “Your purchase free if you don’t get a receipt”? You almost certainly didn’t see it in an expensive or high-end store. You saw it in a convenience store, or a fast-food restaurant. Or maybe a liquor store. That sign is a security device, and a clever one at that. And it illustrates a very important rule about security: it works best when you align interests with capability.

If you’re a store owner, one of your security worries is employee theft. Your employees handle cash all day, and dishonest ones will pocket some of it for themselves. The history of the cash register is mostly a history of preventing this kind of theft. Early cash registers were just boxes with a bell attached. The bell rang when an employee opened the box, alerting the store owner—who was presumably elsewhere in the store—that an employee was handling money.

The register tape was an important development in security against employee theft. Every transaction is recorded in write-only media, in such a way that it’s impossible to insert or delete transactions. It’s an audit trail. Using that audit trail, the store owner can count the cash in the drawer, and compare the amount with what the register recorded. Any discrepancies can be docked from the employee’s paycheck.
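
The register tape is, in effect, an append-only, tamper-evident log. As a rough software analogue (my sketch, not part of the original register design), each entry can commit to a hash of the previous one, so inserting, deleting, or altering a transaction breaks every hash that follows:

    import hashlib

    class RegisterTape:
        """Append-only log: each entry chains to the hash of the entry before it."""

        def __init__(self):
            self.entries = []            # list of (transaction, chained_hash)
            self._last_hash = "genesis"

        def record(self, transaction: str) -> None:
            h = hashlib.sha256((self._last_hash + transaction).encode()).hexdigest()
            self.entries.append((transaction, h))
            self._last_hash = h

        def verify(self) -> bool:
            prev = "genesis"
            for transaction, h in self.entries:
                if hashlib.sha256((prev + transaction).encode()).hexdigest() != h:
                    return False         # tampering detected
                prev = h
            return True

    tape = RegisterTape()
    tape.record("sale: $4.99")
    tape.record("sale: $12.50")
    assert tape.verify()
    tape.entries[0] = ("sale: $0.99", tape.entries[0][1])   # cook the books
    assert not tape.verify()                                 # the audit catches it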

If you’re a dishonest employee, you have to keep transactions off the register. If someone hands you money for an item and walks out, you can pocket that money without anyone being the wiser. And, in fact, that’s how employees steal cash in retail stores.

What can the store owner do? He can stand there and watch the employee, of course. But that’s not very efficient; the whole point of having employees is so that the store owner can do other things. The customer is standing there anyway, but the customer doesn’t care one way or another about a receipt.

So here’s what the employer does: he hires the customer. By putting up a sign saying “Your purchase free if you don’t get a receipt,” the employer is getting the customer to guard the employee. The customer makes sure the employee gives him a receipt, and employee theft is reduced accordingly.

There is a general rule in security to align interest with capability. The customer has the capability of watching the employee; the sign gives him the interest.

In Beyond Fear I wrote about ATM fraud; you can see the same mechanism at work:

“When ATM cardholders in the US complained about phantom withdrawals from their accounts, the courts generally held that the banks had to prove fraud. Hence, the banks’ agenda was to improve security and keep fraud low, because they paid the costs of any fraud. In the UK, the reverse was true: The courts generally sided with the banks and assumed that any attempts to repudiate withdrawals were cardholder fraud, and the cardholder had to prove otherwise. This caused the banks to have the opposite agenda; they didn’t care about improving security, because they were content to blame the problems on the customers and send them to jail for complaining. The result was that in the US, the banks improved ATM security to forestall additional losses—most of the fraud actually was not the cardholder’s fault—while in the UK, the banks did nothing.”

The banks had the capability to improve security. In the US, they also had the interest. But in the UK, only the customer had the interest. It wasn’t until the UK courts reversed themselves and aligned interest with capability that ATM security improved.

Computer security is no different. For years I have argued in favor of software liabilities. Software vendors are in the best position to improve software security; they have the capability. But, unfortunately, they don’t have much interest. Features, schedule, and profitability are far more important. Software liabilities will change that. They’ll align interest with capability, and they’ll improve software security.

One last story… In Italy, tax fraud used to be a national hobby. (It may still be; I don’t know.) The government was tired of retail stores not reporting sales and not paying taxes, so it passed a law regulating the customers. Any customer who had just purchased an item and was stopped within a certain distance of a retail store had to produce a receipt or be fined. Just as in the “Your purchase free if you don’t get a receipt” story, the law turned the customers into tax inspectors. They demanded receipts from merchants, which in turn forced the merchants to create a paper audit trail for the purchase and pay the required tax.

This was a great idea, but it didn’t work very well. Customers, especially tourists, didn’t like to be stopped by police. People started demanding that the police prove they just purchased the item. Threatening people with fines if they didn’t guard merchants wasn’t as effective an enticement as offering people a reward if they didn’t get a receipt.

Interest must be aligned with capability, but you need to be careful how you generate interest.

This essay originally appeared on Wired.com.

Posted on June 1, 2006 at 6:27 AM

Man Sues Compaq for False Advertising

Convicted felon Michael Crooker is suing Compaq (now HP) for false advertising. He bought a computer promised to be secure, but the FBI got his data anyway:

He bought it in September 2002, expressly because it had a feature called DriveLock, which freezes up the hard drive if you don’t have the proper password.

The computer’s manual claims that “if one were to lose his Master Password and his User Password, then the hard drive is useless and the data cannot be resurrected even by Compaq’s headquarters staff,” Crooker wrote in the suit.

Crooker has a copy of an ATF search warrant for files on the computer, which includes a handwritten notation: “Computer lock not able to be broken/disabled. Computer forwarded to FBI lab.” Crooker says he refused to give investigators the password, and was told the computer would be broken into “through a backdoor provided by Compaq,” which is now part of HP.

It’s unclear what was done with the laptop, but Crooker says a subsequent search warrant for his e-mail account, issued in January 2005, showed investigators had somehow gained access to his 40 gigabyte hard drive. The FBI had broken through DriveLock and accessed his e-mails (both deleted and not) as well as lists of websites he’d visited and other information. The only files they couldn’t read were ones he’d encrypted using Wexcrypt, a software program freely available on the Internet.

I think this is great. It’s about time that computer companies were held liable for their advertising claims.

But his lawsuit against HP may be a long shot. Crooker appears to face strong counterarguments to his claim that HP is guilty of breach of contract, especially if the FBI made the company provide a backdoor.

“If they had a warrant, then I don’t see how his case has any merit at all,” said Steven Certilman, a Stamford attorney who heads the Technology Law section of the Connecticut Bar Association. “Whatever means they used, if it’s covered by the warrant, it’s legitimate.”

If HP claimed DriveLock was unbreakable when the company knew it was not, that might be a kind of false advertising.

But while documents on HP’s web site do claim that without the correct passwords, a DriveLock’ed hard drive is “permanently unusable,” such warnings may not constitute actual legal guarantees.

According to Certilman and other computer security experts, hardware and software makers are careful not to make themselves liable for the performance of their products.

“I haven’t heard of manufacturers, at least for the consumer market, making a promise of computer security. Usually you buy naked hardware and you’re on your own,” Certilman said. In general, computer warranties are “limited only to replacement and repair of the component, and not to incidental consequential damages such as the exposure of the underlying data to snooping third parties,” he said. “So I would be quite surprised if there were a gaping hole in their warranty that would allow that kind of claim.”

That point meets with agreement from the noted computer security skeptic Bruce Schneier, the chief technology officer at Counterpane Internet Security in Mountain View, Calif.

“I mean, the computer industry promises nothing,” he said last week. “Did you ever read a shrink-wrapped license agreement? You should read one. It basically says, if this product deliberately kills your children, and we knew it would, and we decided not to tell you because it might harm sales, we’re not liable. I mean, it says stuff like that. They’re absurd documents. You have no rights.”

My final quote in the article:

“Unfortunately, this probably isn’t a great case,” Schneier said. “Here’s a man who’s not going to get much sympathy. You want a defendant who bought the Compaq computer, and then, you know, his competitor, or a rogue employee, or someone who broke into his office, got the data. That’s a much more sympathetic defendant.”

Posted on May 3, 2006 at 9:26 AM

Howard Schmidt on Software Vulnerabilities

Howard Schmidt was misquoted in the article that spurred my rebuttal.

This essay outlines what he really thinks:

Like it or not, the hard work of developers often takes the brunt of malicious hacker attacks.

Many people know that developers are often under intense pressure to deliver more features on time and under budget. Few developers get the time to review their code for potential security vulnerabilities. When they do get the time, they often don’t have secure-coding training and lack the automated tools to prevent hackers from using hundreds of common exploit techniques to trigger malicious attacks.

So what can software vendors do? In a sense, a big part of the answer is relatively old fashioned; the developers need to be accountable to their employers and provided with incentives, better tools and proper training.

He’s against making vendors liable for defects in their products, unlike every other industry:

I always have been, and continue to be, against any sort of liability actions as long as we continue to see market forces improve software. Unfortunately, introducing vendor liability to solve security flaws hurts everybody, including employees, shareholders and customers, because it raises costs and stifles innovation.

After all, when companies are faced with large punitive judgments, a frequent step is often to cut salaries, increase prices or even reduce employees. This is not good for anyone.

And he closes with:

In the end, what security requires is the same attention any business goal needs. Employers should expect their employees to take pride in and own a certain level of responsibility for their work. And employees should expect their employers to provide the tools and training they need to get the job done. With these expectations established and goals agreed on, perhaps the software industry can do a better job of strengthening the security of its products by reducing software vulnerabilities.

That first sentence, I think, nicely sums up what’s wrong with his argument. If security is to be a business goal, then it needs to make business sense. Right now, it makes more business sense not to produce secure software products than it does to produce secure software products. Any solution needs to address that fundamental market failure, instead of simply wishing it were true.

Posted on November 8, 2005 at 7:34 AM

Liabilities and Software Vulnerabilities

My fourth column for Wired discusses liability for software vulnerabilities. Howard Schmidt argued that individual programmers should be liable for vulnerabilities in their code. (There’s a Slashdot thread on Schmidt’s comments.) I say that it is the software vendors that should be liable, not the individual programmers.

Click on the essay for the whole argument, but here’s the critical point:

If end users can sue software manufacturers for product defects, then the cost of those defects to the software manufacturers rises. Manufacturers are now paying the true economic cost for poor software, and not just a piece of it. So when they’re balancing the cost of making their software secure versus the cost of leaving their software insecure, there are more costs on the latter side. This will provide an incentive for them to make their software more secure.

To be sure, making software more secure will cost money, and manufacturers will have to pass those costs on to users in the form of higher prices. But users are already paying extra costs for insecure software: costs of third-party security products, costs of consultants and security-services companies, direct and indirect costs of losses. Making software manufacturers liable moves those costs around, and as a byproduct causes the quality of software to improve.

This is why Schmidt’s idea won’t work. He wants individual software developers to be liable, and not the corporations. This will certainly give pissed-off users someone to sue, but it won’t reduce the externality and it won’t result in more-secure software.

EDITED TO ADD: Dan Farber has a good commentary on my essay. He says I got Schmidt wrong, that Schmidt wants programmers to be accountable but not liable. Be that as it may, I still think that making software vendors liable is a good idea.

There has been some confusion about this in the comments: that somehow this means software vendors will be expected to achieve perfection, and that they will be 100% liable for anything short of that. Clearly that’s ridiculous, and that’s not the way liabilities work. But equally ridiculous is the notion that software vendors should be 0% liable for defects. Somewhere in the middle there is a reasonable amount of liability, and that’s what I want the courts to figure out.

EDITED TO ADD: Howard Schmidt writes: “It is unfortunate that my comments were reported inaccurately; at least Dan Farber has been trying to correct the inaccurate reports with his blog. I do not support PERSONAL LIABILITY for the developers NOR do I support liability against vendors. Vendors are nothing more then people (employees included) and anything against them hurts the very people who need to be given better tools, training and support.”

Howard wrote an essay on the topic.

Posted on October 20, 2005 at 5:19 AM
