Schneier on Security
A blog covering security and security technology.
July 28, 2008
Software Liabilities and Free Software
Whenever I write about software liabilities, many people ask about free and open source software. The worry goes like this: if people who write free software, like Password Safe, are forced to assume liabilities, they simply won't be able to afford them, and free software will disappear.
Don't worry, they won't be.
The key to understanding this is that this sort of contractual liability is part of a contract, and with free software -- or free anything -- there's no contract. Free software wouldn't fall under a liability regime because the writer and the user have no business relationship; they are not seller and buyer. I would hope the courts would realize this without any prompting, but we could always pass a Good Samaritan-like law that would protect people who distribute free software. (The opposite would be an Attractive Nuisance-like law -- that would be bad.)
There would be an industry of companies that provide liability protection for free software. If Red Hat, for example, sold free Linux, it would have to provide some liability protection. Yes, this would mean charging more for Linux; the extra would go to insurance premiums. The same sort of insurance protection would be available to companies that use other free software packages.
The insurance industry is key to making this work. Luckily, they're good at protecting people against liabilities. There's no reason to think they won't be able to do it here.
I've written more about liabilities and the insurance industry here.
Posted on July 28, 2008 at 2:42 PM
Not sure I agree, Bruce.
IANAL but take an example like drugs. If someone was to give out free cancer drugs that did not work, would they be liable for the outcomes?
I suspect they would be. A business makes an offer to treat and the customer accepts or rejects that offer. It makes no substantive difference whether the offer is $1000 or $0, the transaction is based on whether or not the representation of the product is valid.
If I represent that my software will protect you against viruses and it fails in some material way, I suspect that I will be liable.
I disagree strongly on the insurance industry.
The insurance industry is good at dealing with disjoint, random liabilities. It is very bad at dealing with concentrated liabilities. E.g., at $1,000 per affected user, Blaster would have been a $16B hit!
@anonymous, re drugs example.
There are different kinds of liability, and strict liability is only one. Some products you pay for disclaim nearly all liabilities, e.g. the retail license fee for Windows Vista. Read the EULA, folks.
But no comment on the general economic consequences, which were the focus of many of the comments the other day? Disappointing.
btw, how can you know better than the market on this? People can choose to buy strict liability now, but don't.
And what liability does Counterpane offer?
I don't think the Red Hat example works. Red Hat does not sell Linux. They sell support.
Maybe different in other countries, but in the US - a contract must include the exchange of goods / services / promises.
If I give you software, and you don't promise me anything in return - that is a gift, not a contract.
Also, most folks who are smart enough to write any code worth downloading - are probably smart enough to include a disclaimer of liability along with a copyleft or GNU GPL.
@anonymous re drugs...
That would sound more like negligent harm, or the "attractive nuisance" that BS mentioned. If you provide drugs for people with terrible diseases, and you know the drugs don't help, then you have caused harm. Those folks might have sought out other help if they hadn't trusted you or your drug.
Besides that, here in the US free cancer drugs are imaginary & probably illegal.
@grahame: Monopolistic competition, imperfect information.
The software market is not even close to perfect competition. Most software is protected by either patent or copyright, so perfect substitutes don't exist. This is good, as under perfect competition price drops to marginal cost, which kills innovation. But it makes simplistic free market arguments harder.
Imperfect information exists on several fronts. Most obviously, EULAs are so obscenely complex that no one reads them, so the "terms" of the contract are not clearly understood by the consenting parties.
Add to that the uncertainties the legal system introduces. The American tort system makes ANY lawsuit expensive to run, and unpredictable damage awards make "bet the company" decisions all too common. For this reason, companies would price strict liability very highly, even though the costs for the median consumer might be fairly low.
Also, there's the agency problem. If an individual decides to offer customers an option to retain liability, and the company gets sued, that individual is out on their ass; the upside (for that person) of offering that option is small: a slightly better job performance.
All the herbal remedies available on the market would seem to indicate that you can get away with *selling* ineffective anti-cancer agents and still not get sued.
"Free software wouldn't fall under a liability regime because the writer and the user have no business relationship"
So if I give away free calculators and they malfunction, I have no responsibility for it?
"we could always pass a Good Samaritan-like law that would protect people who distribute free software. "
So you're saying that any software-liability law should include an exception for open source?
I wish I could remember their name but there is a company (I think out of Arizona) doing something very similar to this. They look at open source software programs, widgets, add-ins etc. They then validate them for security and function. They provide support for them and more importantly they provide a level of indemnity if something goes wrong and you, as the service provider using the tools, get sued.
Two points from a law student:
1) There can be liability without a contract. The entire domain of tort law is about defining the scope of extra-contractual liability between parties. Some of the factors that might give rise to liability include the foreseeability of harm, detrimental reliance by one of the parties, the nature of the relationship between the parties, etc.
2) A contract can exist in the absence of monetary payment. The issue is not the exchange of money for service, but the exchange of consideration, which is a much broader category. You are probably correct that no contract arises between the developer and the user of free software in the typical case, but there may be special circumstances in which a contract does arise.
Overall, though, I agree with your conclusion. No liability arises for the developer of free software except under special circumstances. However, developers of free software would be well-advised to take steps to reduce the likelihood of liability arising such as expressly disclaiming any warranty as to the quality or fitness of the software. That said, no disclaimer can guarantee that liability will not arise.
Would it be possible for you to write up example legislation which we could actually read over and understand? I personally think that some kind of liability is a good idea. Or rather, I find the immunity that currently exists weird. Software manufacturers continue to sell products with defects without informing customers. With cars, if a serious defect is found (yes, I have seen Fight Club) the model is withdrawn from sale.
What should the basis for liability be? I'd propose a limit per customer of ten times the sale price. But how would it be triggered in the case of infected PCs, for example? What if you use software for something it's not designed for (e.g., Windows as a router, or Ubuntu for a pacemaker)? How about spot fines, as for bad parking (e.g., 50 euros for leaving a zombie computer connected to a network)? If you followed the vendor's instructions, or those instructions were unreasonably arduous, you get to pass the liability on to the vendor.
I really am not sure, though, how you would avoid free software becoming a loophole (free MS Veesta2040; just 10000USD for the hardware dongle), how you would handle shareware, etc. As I said, a real-life example would be good.
This is a difficult subject. A good samaritan law would protect people who write free software against liability lawsuits, but then it might also protect those who write spyware programs that are embedded in useful tools (for instance, spyware shipped with popular peer-to-peer file sharing programs).
Of course, the spyware vendors use deceptive practices, unlike "normal" free software writers, but a combination of "good samaritan" protection and enough disclaimers in the installation process might hold water in court.
So in Bruce's universe:
1) Free software that's "good", but destroys your PC has no liability for the creator.
2) Paid for software, good or bad, that destroys your PC carries liability for the creator.
What about free, but bad software (malware) that destroys your PC? How do we separate good from bad software? What about the "gee, I didn't know there was a virus in the installer" defense?
Thanks, Bruce, for explaining this. I had wondered what your position here would be.
To those who talk about liability disclaimers: Such disclaimers are often ineffective against statutory liabilities. It depends on how the statute is written, of course, but it is not uncommon to see laws specify that consumers cannot waive their rights under the scheme (such as by accepting an EULA). If a disclaimer is written by a somewhat honest lawyer, it will contain language like "to the extent permitted by applicable law", to remind the user of this. But even if it doesn't say so, courts will only enforce an EULA to that extent anyway.
> The insurance industry is good at dealing with disjoint, random liabilities. They are very bad at dealing with condensed liabilities. EG: if its $1000/bug-user, Blaster would have been a $16B hit!
Unfortunately, Blaster-like events happen frequently enough that it does make sense to use insurance to smooth out their economic burden.
$16B is pocket change for the insurance industry as a whole. Anyone who insures something with the market penetration of Microsoft's flagship products would make sure to re-insure heavily. That would drive up premiums a bit for widely-used programs, but that makes good economic sense. We know that a monoculture is more risky than a diversity of implementations (up to a point). If an insurance requirement had the effect of slowing down the most popular implementations of things, that is a feature not a bug!
There was a case last year (JMRI) where a US District Court ruled that the Artistic License was a contract. If this becomes accepted by the courts (although I'm sure there'll be fights about it) it would rather change the landscape.
derf: Isn't releasing malware generally already illegal (at least in the US)?
So, while I suppose it's correct that free malware would presumably not be affected by this, I don't think that matters. And, technically, it seems like a sensible legal interpretation: the whole point about malware is that people don't exactly seek it out and start running it themselves. So where is the contract that civil liability could arise from?
Rather, malware gets in there without permission or authorization, and as such is something entirely different; however, it wouldn't be getting a free pass under this sort of scheme, because it would still fall under the purview of criminal law. And I could be mistaken, but I'm pretty sure some of the stories about malware authors I've read indicate that there are already provisions for damages to be levied against malware authors.
I don't agree with the idea that software liability should be compulsory: sooner or later the market will get to it and developers will start offering it.
1. Free software is the easier case. A liability statute would exclude software for which the author receives no remuneration and include provisions to address whether people can use barter to work around liability. Excluding shareware is likely to be tougher since the author is trying to receive a small cash payment--it would require drawing lines (how small a payment is "small"?) and dealing with alternative business models like subscription arrangements (where it's a recurring small payment.)
2. I disagree that there is no contract with free software. For example, the GPL is very much a contract. The consideration exchanged isn't cash, but the user gains access to certain of the author's rights under copyright law in exchange for a commitment to publish any modifications made to the code. If there weren't a contract, the author couldn't enforce the GPL's provisions.
3. Tort liability, as opposed to contract liability, doesn't require the existence of a contract. It requires that the author owe a duty to the user, which is similar, but that duty can arise independently of a contract. For example, if I'm driving and accidentally bump your car, I can be liable for fixing your car even though there's no contract between us. There may be some arguments that, say, products liability wouldn't apply to freeware because the author isn't in the business of "selling" the freeware, but in general it'd be much cleaner to address the matter in a piece of legislation so everyone's on the same page.
I think you need to be more precise in describing exactly what sort of liability you are proposing. "Liability" is a broad term encompassing many theories, including breach of contract, intentional torts, negligent torts, strict liability, property conflicts, intellectual property infringement, and so on. Each of these theories of liability has its own justifications and limitations; it's hard to evaluate the merits of "software liability" without knowing what you think the nature of that liability should be.
As Devin Johnston points out, many of these forms of liability don't require a contract. And as AJ mentions, a software license—even for free software—may create a contract. So if software does get some kind of liability for security defects, I wouldn't bet on free software getting an exemption.
If any form of liability is valid, what kind should we be talking about? I see three main possibilities:
1. Contractual liability. This could be breach of implied warranty, for example. The problem with contractual liability is that it's easy to avoid through the contract language.
2. Negligence. This is a tort liability that arises when someone fails to act with reasonable care.
3. Strict liability. This is also a tort theory, which gets applied when something goes wrong without asking whether the party incurring liability actually did anything wrong—in strict liability, if it breaks you bought it.
What you're talking about is basically a specialized case of product liability, where the product is software. In the product liability space, negligence is used for design defects, on the theory that greater care in design would have prevented the problem. Manufacturing defects are evaluated according to strict liability, because they happen no matter how careful a manufacturer is, and making the manufacturer liable spreads the cost of defects.
So when talking about software liability, the question is: are software defects avoidable by reasonable care, or are they unavoidable? If they're unavoidable, does it make sense to force the software company to shoulder the cost of the defect?
There's another problem: the economic loss doctrine. The states that follow this doctrine say you can't get recovery in tort if your only loss is economic. If you're physically injured, or your property is damaged, you can sue in tort. If all you lost is money, you can't. So, if a software defect only costs money, some states won't recognize tort liability. If a software defect gets someone hurt or destroys property, there could be liability—but there's probably that already. If software in a car or running big industrial equipment had a flaw that did lots of non-economic damage, I'd guess that existing product liability theories would work just fine.
Which leaves us with some form of liability for software defects for economic harm—a less compelling case.
In any event, I think it's important when delving into discussions of the law to be as precise as possible. In this case, that means talking about exactly what form of liability you'd like to see.
On top of the issues that others have raised, the legal concepts of "proximate cause" and "intervening cause" should be remembered. In many states, the default rule is that a product designer cannot reasonably foresee a criminal act by a third party.
Free as in beer, right? I hope I wasn't too pointed in my comment the other day:
I think I see the shift in where you are going with this, and I have to agree.
As much as I wish the summer of love could have proven that we can live in a free and open system, the real lesson is that American cities were so full of waste that a portion of the population could sustain themselves on the byproducts of others. In retrospect this was more about learning a new form of social efficiency than any revolutionary thinking, and it seems the best of it came crumbling down under a massive weight of liabilities when takers outnumbered producers and overall quality declined.
I thus agree a better comparison/model is something akin to treatment of free food in the US. Oregon, for example, says that "gleaners or good-faith donors of any food, apparently fit for human consumption, shall not be subject to criminal penalty or civil damages arising from the condition of the food, unless injury is caused by the gross negligence, recklessness or intentional misconduct of the donor."
Likewise, although you give away your security software, if you are grossly negligent, reckless, etc. then I suppose you could still face liability even without formal contract. Transferring liability to an insurance company makes sense, if you think that you can not find another means of reducing risk.
I disagree with your analysis, and with all the above comments. The point in product liability (and warranties) is NOT whether or not the product was Free in any sense of the word. The presence of a contract (or payment) matters, but the absence matters a lot less.
Product liabilities arise when users are unable to predict the features and behavior of a product and would not have used it had they known everything.
Warranties and liabilities arise when there is an information imbalance between producer and user. In general, the producer knows (and can control) the quality and working of a product. The user largely has to rely on the information of the producer when selecting the product.
In proprietary software, if a product behaves in unanticipated ways, the producer is liable, as they should have known and informed the user. At the same time, the user had no (reasonable) way of knowing how the product would behave and might not have used it, had she known the possibility of this behavior.
With FOSS, there is no such imbalance. The user can know as much (or more) of the product as the producer. If the user wants, she can hire a firm to check (and change) everything she (dis-)likes about the product.
It can even be argued that the user has more information than the producer, as only the user really knows how the product will be used.
The absence of a contract and payment of any kind absolves the producer from any trade or contractual requirement.
In this light, there is no need for liability. The user can get to know everything there is to know about the product before she employs it for free. There is no information (or power) imbalance.
These points are driven home in the NO WARRANTY clause of the GPL. The producer does not claim any usefulness of the application. If the user thinks she can use the product, she must do her homework (due diligence).
All assuming there was no evil intent on the producer's side.
However, if you get paid to produce a certain product, you can of course be held accountable about whether it actually performs as advertised.
(I read this analysis somewhere, but cannot find the link anymore)
Obviously, there are countries where consumer protection laws and product liability are different. They might clash with the GPL NO Warranty clause. However, in general, the courts consider users who do not do due diligence in a harsh light. Especially, if the users get a free ride on valuable goods.
I found the reference for the analysis I referred to in the previous comment:
And Bryan Pfaffenberger formulates it better than I can.
"Why? It's all in the nature of the deal. With open-source software, you don't need warranty protection (and indeed, it would arguably be bad faith to demand it) because you are, in principle, walking into the deal with your eyes wide open. You know what you're getting, and if you don't, you can find someone who does. Open-source licenses enable the community of users to inspect the code for flaws and to trade knowledge about such flaws, which they most assuredly do. Such licenses allow users to create derivative versions of the code that repair potentially hazardous problems the author couldn't foresee. They let users determine whether the program contains adequate safeguards against safety or security risks. In contrast, the wealthy software firms pushing UCITA are asking us to buy closed-source code that may well contain flaws, and even outright hazards attributable to corporate negligence--but they won't let us see the code, let alone modify it. You don't know what you're getting. And that's why it's not worth giving up your right to sue the bastards, if they've been negligent and stuck you with something that hurts or kills somebody."
There's a thread running through several of the comments above that I'm still trying to understand: the idea that there should be less liability for open source code because the user is always free to inspect the source. The end user of a car is always free to buy a car and perform crash testing on it, or to disassemble it to determine whether the structural engineering was done properly. The reason we still impose product liability on automobile manufacturers is that we think it's more reasonable to incur the cost of testing cars once, with the manufacturer, than to impose it several times on many different consumers. (The consumers still pay the manufacturer's testing cost, in the form of higher prices, but at least they can spread it across a lot of different consumers.) Why doesn't that same logic apply to open source code? After all, even though a copy of the sources is free, there would still be a significant investment in the time to learn the structure of the code, analyze it, and even to get the training necessary to read code and spot security holes.
I'm not saying that we should impose liability on FOSS--it's likely that the social benefit of free software and open sources outweighs the social benefit to be gained from imposing liability in that case--but it's not clear that it makes sense not to impose liability because anyone could, in principle, inspect the sources and discover the vulnerabilities.
This argument should really be passed along to PJ of the GrokLaw.net website.
"There's a thread running through several of the comments above that I'm still trying to understand: the idea that there should be less liability for open source code because the user is always free to inspect the source. The end user of a car is always free to buy a car and perform crash testing on it, or to disassemble it to determine whether the structural engineering was done properly."
You cannot do that with a car, because you would have to test the very steel of the engine. The engineering is much too subtle to be verified after assembly. And most of the tests would be destructive and would not apply to the next car produced. So you would have to do destructive tests on a sample of cars. This would be beyond almost all car users, even professional ones.
In a car, the maker knows what steel was used, and how it was treated. Information that is extremely difficult to extract later. In software, you can know as much as the authors.
In software, you can do non-destructive tests. You can inspect the source, no hidden parts, analyze it, compile it, stress test it, and still use it again and again.
And as all copies are the same, you can exchange your information with all other users of the same program. Which does happen in real life.
So in that respect, FOSS is different from peanut butter, computers, and cars.
André wrote: "The GPL, like other open source licenses, is a license, not a contract."
So is the Artistic License, but that didn't stop the US District Court from treating it as a contract in the JMRI case (see my lwn link above).
The insurance industry is good at protecting people from liabilities? Ha ha, good one. I'll keep that in mind the next time I am refused a health care, car or house insurance claim.
Thanks. I completely agree.
No one is forcing people to use free open-source software, so it makes sense that these software writers providing the free stuff shouldn't be held liable. On the other hand, perhaps more people need to be reminded about the hazards and liabilities of using these products (as free, convenient and cutting-edge as they are). Many people don't even think about the security vulnerabilities involved in using open source web applications. There's an interesting post on this phenomenon of misplaced trust (What Are Strangers Doing With All of Your Information) at:
In the previous article you point to, you write:
"Think about why firewalls succeeded in the marketplace. It's not because they're effective [...]. Firewalls are ubiquitous because auditors started demanding firewalls."
Surely this demolishes your argument: auditors and insurers demanded firewalls as part of a "safe harbour" of demonstrated best practice. The auditors could demonstrate that they were bringing people up to best practice, and companies could demonstrate that they were compliant. Whether the firewalls actually improved security was largely irrelevant.
I believe that the same thing would happen, although on a much larger and more expensive scale, if suppliers were liable for security defects in software products: there would be lots of sound and fury about "best practice", but it would merely be about the creation of a safe harbour for vendors that would have no real impact on software quality.
Not sure why we are already looking for exceptions to an already-bad idea...
See http://spiresecurity.typepad.com/... for more on why software liability is a bad idea.
Chief among them - what is the liability measurement?
What sort of liability are we talking about? Strict liability and serious damages is going to kill any software company that gets unlucky. Certainly a free software developer shouldn't go bankrupt because he released something with a bug, but should the developer who tries to sell something and hasn't incorporated yet?
The correct answer is to put some sort of liability on the people who actually cause damage, and let the market push the liability back. Suddenly, being able to buy an operating system with some sort of indemnification would be very attractive, and software would be rated for security by insurance companies.
Obviously, we're going to wind up with some bad solutions here, as in all forms of liability. Does anybody think we'd do better with government-mandated liability? If there's no way out, software is dead as a profession. If there is, the big companies will find (and, if necessary, lobby for) the loopholes and exploit them to the detriment of the small guy.
"In software, you can know as much as the authors."
You've obviously put a lot of thought into your analysis, but I think you are starting to pick and choose your points to support an outcome here.
As a software architect, I can tell you that having access to 100% of the source code *does not* equal "knowing as much as the authors." In fact, any amount of documentation (design documents, etc.) is not the same as being involved in the development process as the application evolves. This is especially true in many of today's "agile" projects where comparatively little is planned out ahead of time. These projects are *not* like Boeing designing avionics software, where there is more documentation than code.
One of Bruce's recurring themes is that you have to engineer security into software from the start. You can't go back and tack it on at the end. I think the same holds true for anyone doing an analysis. If it were indeed that easy, then software companies would be doing a much better job of patching right now.
So, IMHO, *if* the argument that the car manufacturer has access to information that it would be unreasonable to assume the consumer has access to, *then* I think you have to allow the same argument for software. You can't have it both ways...
"So, IMHO, *if* the argument that the car manufacturer has access to information that it would be unreasonable to assume the consumer has access to, *then* I think you have to allow the same argument for software. You can't have it both ways..."
I think you are confusing things. It is obvious that those who constructed a program know more about it than those who try to catch on later. But any user has ACCESS to all information needed to determine the risks of using a program. The producers of a program don't HIDE vulnerabilities (if they did, they would be liable). They might have cut corners or be completely incompetent. But both "features" can be easily spotted from the code.
The same holds for a market maker at the stock exchange who will obviously know more about the specifics of the stock he trades than J Random Buyer. However, J Random Buyer has ACCESS to the same information (if not, your stock broker is breaking the law).
The question to ask is, could the user be reasonably expected to have been able to evaluate the risks of using the application. With FOSS, the user could easily have been informed about everything she needed herself, or for a fee from others (on-line, IRC, manuals, code).
My interpretation is, that a user can be required to do due diligence before using a free product for important work. The more important the task, the more work can be expected from the user. For some tasks, it is even required to use only certified programs.
The same way you cannot sue the producer because your $0.25 corkscrew broke off and ruined your $1500 bottle of wine, you cannot sue the coders of some obscure FOSS backup program because it lost your company's complete archive. If it is that important, you should have done your homework.
With proprietary (or commercial) products, this level of due diligence is either illegal or unrealistic. With FOSS products, this is both theoretically possible and practical.
Btw, this is not my analysis, but that of Bryan Pfaffenberger (see above)
I very much like some of the comments made so far that Free/OSS software can gain some immunity due to the fact that the recipient gets full source code. However, I don't think it's true that if you have the source you know what the program does. Have a look at http://underhanded.xcott.com/ (key quote "You can hide a semi truck in 300 lines of C"). I guess the user would have responsibility to check.
How about the following (since Bruce was recommending this to the House of Lords, we should gradually create both a Scottish and an English version of the legislation). Note, this probably doesn't do what I would want it to do at all. I haven't thought about its influence on reality much. It's just an initial proposal.
a) Individual users of software shall be liable for damage caused by their own software up to a value of 100 pounds per other system, to a maximum of 5000 pounds in total.
b) Corporate users of software shall be liable for damage caused by that software up to a value of 500 pounds per other system, to a maximum of 5000 pounds or 1% of corporate turnover, whichever is larger.
c) Where a user has purchased software and suffers damage or liability as mentioned above, and
c.1) the damage can be shown to be due to flaws in that software, or
c.2) the damage can be shown to be due to the lack of a reasonable protection mechanism in that software, or
c.3) the damage cannot be correctly attributed due to the lack of a mechanism for attributing damage built into the software,
then the manufacturer of the software shall be liable for twenty times the purchase price of the software, or 5000 pounds, whichever is larger.
d) Limits of liability:
d.1) liability is limited where software is used as a component in a system in a threat environment for which it was not designed;
d.2) invented limitations do not limit liability: d.1 shall not apply if no intended environment is clearly stated;
d.3) limitations are only applicable if they are clearly stated in all advertisement of the software.
e) Sale of software:
Any sale of software in the UK must have a liability representative who will accept liability. Where a purchase is made by an electronic fund transfer system such as a credit card, the fund transfer company must act as a proxy for the liability representative or accept the liability responsibility themselves.
f) Assessment of liability:
f.1) the UK Inspectorate of Software (UKIS) will make an initial assessment of liability;
f.2) where all information (including any proprietary code used in the creation of the software and a complete threat model and security assessment for the software) cannot be provided to the UKIS within five days, liability will automatically be assessed against the vendor;
f.3) appeals may be mounted against UKIS decisions through the court system; if such an appeal is lost, costs should normally be awarded against the appealing party.
(feel free to rewrite and remix according to the CC-SA license) ... now there's something for the anti-lawyers to bite on ... I am especially sure the idea of an Inspectorate of Software will drive the libertarians here wild :-)
"How about the following (since Bruce was recommending this to the House of Lords we should gradually create both a Scottish and an English version of the legislation)."
I think it is generally a bad idea to create completely new laws to handle exceptions.
To me, it seems to be better to strengthen existing legislation. With all liabilities and insurances, the user/consumer is required to minimize damages or their likelihood.
So, users are required to do due diligence to an extent that reflects the value of the task and possible damages. Critical tools will require extensive precautions.
A FOSS project can make evaluation easy with public discussion lists (e.g., lkml), help forums, documentation, and well-documented code. With the bar for evaluation low, it will be hard to argue that a user could not have predicted the likelihood of problems. The user took a calculated risk in using the FOSS product and is therefore likely not entitled to damages, especially as the coders never promised that the product would work.
Even bad FOSS projects will be protected by this. A user that applies a product for which she cannot find documentation or code evaluations should know that the project is immature and act accordingly.
So, I think the courts only have to apply "due diligence" (you can see what you get, so act accordingly) to ward off liability attacks on FOSS.
Well, it's an interesting suggestion. However, I think your interpretation is ruled out by "strict liability," which is a requirement for all product liability in the EU. Even if the user failed to do reasonable checking, the manufacturer will end up liable. (See the Consumer Protection Act 1987, http://www.statutelaw.gov.uk/content.aspx?... and also http://www.tradeangles.fsbusiness.co.uk/articles/...)
My objection, however, is different. I think that software is fundamentally different from "products" for the same reason that voting machines are different from paper ballots. If a child seat fails, there are bits of broken metal and plastic. You can normally see exactly which component failed and why. Failures in software are very difficult to "see" and information about them may self destruct. For this reason, I think you might be better discussing professional liability clauses, where mere lack of a paper trail is sufficient to prove liability.
If I install my child seat wrongly, it's very unlikely to influence your car on the other side of the world. If I install Windows wrongly, on the other hand, that can be an indirect cause of money being lost from your bank account.
Personally I don't think software for a general computer, a more or less pure mathematical abstraction with real world side effects, really fits into either the category "product" or "information".
Very interesting discussion (which unfortunately I have only had time to skim...)
How about the free speech angle? If I publish source code on my web site, it seems pretty much the same as publishing any kind of text with meaning. I can make many (not any!) non-software statements and not have to be liable for how people receive and use those statements.
Every piece of open-source software I've installed (for instance, GNU software) has a "clickwrap license" limiting what you can do with it and limiting the liability of the various people and organizations that took part in writing and distributing it.
Is that a contract, or not? If it is, then why exempt open-source from liability?
JDG: Because free software does not provide an income stream that the developers can use to insure against liability suits.
@John David Galt
Because with "open-source software," support and responsibility are normally decoupled from, and sold separately from, the software itself. With proprietary software, nobody but the vendor can do repairs or even verify the quality of the software. With open source, maintenance can be done by any number of companies, who often have much greater resources for problem fixing than the original author. These companies are a much better place to put liability; it stays with the user in the case they don't engage such a company.
I personally think that the company that does the install should also be liable for proprietary software. E.g., if you get an HP PC with Windows installed as a package, HP should take responsibility.
If you give legal advice (even free!) in many places you can actually go to prison, let alone be held liable. It all depends what information you choose to share. If you give "financial advice" you can certainly be held responsible if your advice is defective although that's often difficult to define.
Bruce, you still don't seem to know how a lot of free software is developed. Do some research. Assuming there's no business relationship makes you sound like you're stuck in the 1990s, before Cygnus Solutions was formed. Stop it already. Today, a lot of free software is developed precisely _because_ of the business relationships formed around it.
You also still haven't addressed the subsidy problem, or the fact that not everyone needs "secure" software. Software "security" can't even be defined without looking at the context in which the software is used, and there are plenty of places where buffer overflows just aren't important.
On your last post, people commented about how when insurance drives the industry, most of the focus is on auditing that "the process" was followed so that everyone can cover their own asses. Think about it: there are no metrics for looking at a completed piece of software to determine whether it's secure, so the focus must be on the process used to create the software. However, we don't know any good processes for developing secure software, so what do you think will happen?
Instead of blaming "vendors" for not using processes that don't exist for writing secure software, why don't you help come up with those processes? Andrew Tanenbaum has been working on the problem for years and could probably use some help.
"Yes, this would mean that they would charge more for Linux; that extra would go to the insurance premiums."
- not to improved engineering! I think this spoils Bruce's argument. Any effort by open-source distribution vendors towards improved engineering will be pushed upstream to the non-commercial projects, and the "free rider" problem will become more acute - but only for open source projects, obviously not for closed-source software.
Any legal solution must favor those with the most money to spend on lawyers. A class action suit against MS for poor security might succeed, but MS's cash reserves can buy an awful lot of delay and undue process.
Also the law would need to be carefully written not to destroy open-source - e.g. as Bruce writes, by making the provisions an aspect of contract law. Instead, lobbyists for the software industry will push for a law that better serves their own interests. How many legislators understand the issues well enough to get the law right?
I think it's better to live with the present mess than change to a worse mess.
Unlike other products, an item of software might be produced and/or released largely by a single individual. This could apply regardless of whether the software was shareware, freeware, or free software. Any rules would ideally take this into account.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.