An Interesting Software Liability Proposal

This proposal is worth thinking about.

Clause 1. If you deliver software with complete and buildable source code and a license that allows disabling any functionality or code by the licensee, then your liability is limited to a refund.

This clause addresses how to avoid liability: license your users to inspect and chop off any and all bits of your software they do not trust or do not want to run, and make it practical for them to do so.

The word disabling is chosen very carefully. This clause grants no permission to change or modify how the program works, only to disable the parts of it that the licensee does not want. There is also no requirement that the licensee actually look at the source code, only that it was received.
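To make the distinction concrete, here is a minimal sketch (in C, with an invented feature and an invented ENABLE_TELEMETRY build flag, both assumptions for illustration) of what clause-1-style disabling could look like: the distrusted code is compiled out entirely, and nothing else is modified.

    /* telemetry.c -- hypothetical feature a licensee might not trust. */
    #include <stdio.h>

    #ifndef ENABLE_TELEMETRY
    #define ENABLE_TELEMETRY 1      /* vendor default: feature compiled in */
    #endif

    void report_usage(const char *event)
    {
    #if ENABLE_TELEMETRY
        /* The distrusted functionality lives entirely inside this guard. */
        printf("telemetry: %s\n", event);
    #else
        (void)event;                /* disabled: deliberately a no-op */
    #endif
    }

    int main(void)
    {
        report_usage("app_start");
        return 0;
    }

Rebuilding with cc -DENABLE_TELEMETRY=0 telemetry.c switches the feature off without changing how the rest of the program works, which is the line the clause tries to draw between disabling and modifying.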

All other copyrights are still yours to control, and your license can contain any language and restriction you care to include, leaving the situation unchanged with respect to hardware locking, confidentiality, secrets, software piracy, magic numbers, etc. Free and open source software is obviously covered by this clause, and the clause does not change its legal situation in any way.

Clause 2. In any other case, you are liable for whatever damage your software causes when used normally.

If you do not want to accept the information sharing in Clause 1, you would fall under Clause 2 and have to live with normal product liability, just as manufacturers of cars, blenders, chainsaws, and hot coffee do.

Posted on September 23, 2011 at 5:22 AM

Comments

Andrew Gumbrell September 23, 2011 5:44 AM

That will hurt software houses, but give the lawyers plenty of work.
To keep software from piracy would mean cast-iron definitions of ‘normal’ use.

Michael P September 23, 2011 5:55 AM

How would this help people with devices that will only run code that has an approved digital signature? They could build a reduced version of the software but couldn’t actually run it.

How would one make it practical to disable certain features without breaking the rest of the software? They could trim out features but might not be able to build or run the software without making changes that are not clearly “disabling functionality”.

This also seems like it would enable copyright infringement on an unprecedented scale.

Flemming Frandsen September 23, 2011 6:00 AM

Getting rid of the current liability-free software industry is not a bug, it’s a feature.

If you make your living only through keeping bits uncopyable and screwing over your users without liability, then you deserve to go out of business.

Meanwhile those of us in responsible software houses that already warrant our software and provide service for the money we’re paid will keep going as we always have.

So: +1

Ben September 23, 2011 6:18 AM

Disabling functionality is insufficient – you also need to be able to fix bugs that exist in the functionality you require (stack overflow, twice-freed block, etc.).
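For example, a twice-freed block is a defect that no amount of switching features off will remove; the code itself has to change. A contrived C sketch (the bug and all names are invented for illustration):

    #include <stdlib.h>
    #include <string.h>

    /* Contrived double-free: both the error path and the caller free buf. */
    static int process(char *buf)
    {
        if (strlen(buf) == 0) {
            free(buf);          /* BUG: the caller frees buf again below */
            return -1;
        }
        /* ... use buf ... */
        return 0;
    }

    int main(void)
    {
        char *buf = calloc(1, 64);   /* zeroed, so strlen(buf) == 0 */
        if (buf == NULL)
            return 1;
        process(buf);
        free(buf);                   /* undefined behavior: second free */
        return 0;
    }

The repair is a one-line change (drop the free() in the error path, or fix the ownership convention), but under the clause as quoted that is a modification, not a disabling.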

Tiago September 23, 2011 6:44 AM

So, you put in some annoying functionality that all your users will disable, and your liability is limited to a refund?

PiP September 23, 2011 6:55 AM

I like this idea; it lays down a good framework. There’s still some quirks that need to be addressed. If a $10 media player causes a bank worker to lose $250,000, how much is the media player company liable for? If someone is running a 30-day trial of a $10 malware scanner that conflicts with a $200 full-disk-encryption product and the company loses $100,000 of data, who is responsible and for how much? If I spill a $1.25 coffee on a $20,000 rack unit, does McDonald’s owe me a replacement coffee, or replacement rack unit? (Do they only owe me a coffee because I had the ability to opt-out of the cream and sugar?)

Anonymous 1 September 23, 2011 7:03 AM

Requiring that a version with features removed from it still be compilable and runnable should fix the problem Michael P identified (this may require the company that produced it to sign any such customer-built software, or simply not to use hardware that requires a signature the hardware owner doesn’t have).

Somehow I can’t see the big software companies being willing to live under those kinds of rules, though (even though it’d be better for the rest of us).

PiP: Normal use would probably require that the end user do things the way the manual says and take at least some degree of caution (like not spilling your drink on a server, or even having it close enough to a server to potentially damage it, though that wouldn’t be a software problem).

Jonathan September 23, 2011 7:47 AM

If big software companies suddenly become liable for all the bugs in their existing software, the resulting liability lawsuits could bankrupt them. This proposal could destroy the software industry as we know it.

There would probably be negative consequences too.

Duane Gran September 23, 2011 8:27 AM

The solution isn’t more liability, it is more cooperation between producers and users of software. In that vein, the spirit of releasing the source is a step in the right direction but tying it to liability is a mess. For a variety of reasons we can’t affordably build software the way we build bridges and apply the same liability standards.

If such a proposal became law then software development would become more expensive and become the province of larger companies. The innovation of many software entrepreneurs would be stifled by liability concerns. I can’t envision any way that this would be a net gain for end users or purchasers of software.

Fred P September 23, 2011 8:42 AM

Some of us are already in highly regulated industries; in every industry I’ve worked in, major defects that get to the field cost significantly more than simple direct-damage liability.

My surprise is that many software industries are so lackadaisical. In my opinion, if you aren’t willing to ship with liability, you’re admitting that your software has little or no value.

As for clause 1, I think that the same applies – however, clause 1 is useful for sharing software for development, it’s useful for beta testing, it’s useful for hobbyists who don’t charge for their code in the first place, and in a number of other cases that have nothing to do with open source or free software.

So, overall, I like this suggestion.

James Grimmelmann September 23, 2011 8:52 AM

Despite what the author says, this would impose vastly more liability on software vendors than on those who make almost any other kind of product. Under U.S. law, the makers of “cars, blenders, chainsaws, and hot coffee” are typically responsible only for the damage caused by “defective” products. The legal system accepts that every product will occasionally cause some damage, even when used correctly. In the words of one influential codification of the law, a product is defective “when the foreseeable risks of harm posed by the product could have been reduced or avoided by the adoption of a reasonable alternative design.” That last condition is very important. It makes no sense to make a software vendor liable unless there was a different way to write the software, AND writing it in that way would have cost less than the harm done by the mistake. We don’t require people who make blenders to do an absolutely perfect job, and we shouldn’t require it of software vendors, either.

SparkyGSX September 23, 2011 8:56 AM

I’d think it would be a good thing to hold software companies liable for engineering flaws, lack of proper testing and quality assurance procedures, and especially intentional sloppy engineering and marketing lies.

I don’t see how this would bankrupt any company that produces software of reasonably good quality, because I don’t see how it is different from companies who design and sell physical products.

The only companies that would really fight this are the ones who know they’ll be screwed, because they have sold us defective crap for ages.

However, I do see at least one major flaw in this proposal, which is the “when used normally” clause. A company could simply state that their software should be used only on PCs with a very specific configuration, with a very specific version of an operating system, while not connected to the outside world, and with absolutely no other software installed, aside from the operating system.

Obviously, this would be an unworkable situation, and most if not all customers would not use the software “normally”, thus relieving the company of any liability.

Also, when considering clause 1, I wonder what “disabling functionality” would mean. I’d think the customer would not be allowed to fix any bugs. If the software is vulnerable to an exploit in its input handling (TCP/IP or otherwise), the only thing the user could do is disable the source of the input (the ability to communicate using TCP/IP) altogether, probably rendering the software completely useless in most cases.

Does “disabling functionality” mean you are only allowed to remove characters from the existing code, but not add a single new character to it? How could this ever be enforced without full access to the user’s machine?

Also, I’d think it would be illegal to distribute the “fixed” version of a certain piece of software (with features disabled), because that is usually the exclusive right of the copyright owner. Would it be legal to publish detailed instructions or patchfiles to fix a specific program?

What I’d like is very simple:
1) If you sell the product (meaning someone paid you for it), you can be held liable for flaws.
2) If you give it away for free (as in beer), you are not liable in any way, except perhaps when the misbehavior of the software is intentional and malicious.

Glenn Maynard September 23, 2011 9:13 AM

Do you really want to be sued because your program crashed and a file was corrupted, and have to spend tens of thousands of dollars in court to somehow prove that the crash was caused by normal non-ECC memory errors, or by a filesystem bug, or a bad video driver, or a bug in the Win32 API, or a bad sector on the HDD causing a bad block of code to be swapped in, or a virus, or a misbehaving virus scanner?

It’s hard to believe that anyone who lives in the United States, and has seen how badly and regularly the legal system is abused, would suggest anything like this.

We don’t need source code to engage in copyright infringement.

You don’t need a knife to stab someone, either.

If you think it isn’t a massive help, you’re either deep in denial or still in college.

damjanev September 23, 2011 9:34 AM

@Jonathan
If big software companies suddenly become liable for all the bugs in their existing software, the resulting liability lawsuits could bankrupt them. This proposal could destroy the software industry as we know it.

It is not in Bruce’s quote, but the original article states:
“The majority of today’s commercial software would fall under Clause 2. To give software houses a reasonable chance to clean up their acts and/or to fall under Clause 1, a sunrise period would make sense, but it should be no longer than five years, as the laws would be aimed at solving a serious computer security problem.”

Andrew Philips September 23, 2011 9:44 AM

This is a really, really stupid idea.

Clearly, the author never worked for any length of time writing commercial software. There’s no way a software company with any significant revenue (read: deep pockets worth suing) would do this.

  1. Beyond the simplest of programs, most commercial s/w is very complex. You often can’t just snip out functionality you don’t like.
  2. The testing and certification process is huge. On the largest projects, make one change and you might have to run 100,000s of tests on a product to ensure nothing broke. Aside from those tests not being delivered, most end-users won’t have the platform to run the tests.
  3. Customer support is thrown out the window (and that’s an area where many s/w companies make money). How can any engineer expect to support a customer sliced and diced product at the code level?
  4. S/W functionality that is cleanly removable from a compile is often written that way to support multiple versions (different compilations). Providing source code allows end users to (more easily) upgrade to more powerful products (one code base, multiple compiles).

Talk about impractical design in a vacuum.

Pete September 23, 2011 10:19 AM

Yes, it is worth thinking about for maybe two minutes because it is confused and impractical.

Here’s what I said on the original post at ACM:

Can you imagine if we sued a car manufacturer every time someone jimmied a lock? or glass manufacturers when someone breaks a window? You are completely ignoring the intelligent adversary, which makes all the difference. There are many, many flaws in today’s real-world environment that do not fall under product liability law because they require exploitation by others.

I really don’t understand the example. Who is supposed to be liable to whom in this scenario? Usually, when discussing software liability folks talk about vulnerabilities, not malware.

Glenn Maynard September 23, 2011 10:26 AM

There’s no way a software company with any significant revenue (read: deep pockets worth suing)

Smaller companies are “worth suing”, too – by larger companies wanting to put them out of business. It doesn’t matter if you’re innocent if you’re bankrupted proving it. This is the reality of the modern legal system.

grumpy September 23, 2011 11:04 AM

No, it’s not worth thinking about for even a second.

There is no practical way to define “normal use” of a computer or even the simplest program if we want to have machines capable of simulating everything and flexible software capable of actually doing something. Sure, if we want something as dumb as an old tee-vee, no problem. But a PC with an office suite? Forget it. No one is going to pay nuclear-power-plant-level prices for software that isn’t going to power a nuclear power plant. Risk vs. cost, people. This proposal is quite a bit sillier than what the TSA is currently engaging in in the physical world.

Besides, we have to assume users actually engage their brains before using software if such a liability clause is going to transform the market. This will not happen before the sun cools down. People do not engage their brains unless pain is an option, so I quite favor the current regime, thankyouverymuch. At least the half of users who don’t think will experience pain.

Yeah, I’m a sysadmin, why do you ask? 🙂

Evan D. September 23, 2011 11:07 AM

My company produces desktop software that uses a single installer for all editions; specific features are then unlocked by the license type.

It’s worked out very well for us (16 years) and also makes migration paths to higher editions very simple.

Petréa Mitchell September 23, 2011 11:26 AM

I’m a programmer and I’m all for this. Right now companies have an incentive to produce buggy, hard-to-use software because the end user is unable to judge software quality by what it says on the box, and the company that spends more time making sure its product works is the one that is second to market and doesn’t get to lock in as many users.

I think this would help push better software design, too. Even if the jury finds that it’s the user’s own fault they clicked the “Shoot myself in the foot” button in a particular program, the expense of a court case will probably still encourage the company to remove that button.

The original reason software was exempted from liability laws was that IT was a delicate, fragile new industry that could be crushed by the expectation that its products actually work reliably, right? Shouldn’t we be past that point by now?

Mike Scott September 23, 2011 11:50 AM

As worded, this doesn’t protect most open source software, unless open source authors change their practices to significantly increase the size of downloads. If you download and install Ubuntu Linux, you don’t get the source code — it’s there to download if you want it, but it’s not delivered with the software as required by this proposal.

Jordan Brown September 23, 2011 12:02 PM

Plus… source code is absolutely worthless to 99% of customers, and 99% of the remainder really would rather not have to look at it.

(and don’t say “the 0.01% will fix the software for the 99.99%”… today’s open source software has no shortage of bugs.)

Although I think that some level of software liability is appropriate, I think that the primary thing that has to change to get higher-quality software is that the customers have to demand it. The customers have to say (with their checkbooks) “stop adding features until you fix all of the bugs”. Unfortunately, observed reality is that most customers would rather have more features than fewer bugs… or at least that’s the situation that software vendors perceive.

Matt B September 23, 2011 1:09 PM

@James Grimmelmann: You are ignoring a ton of modern products liability jurisprudence. There are three modern theories of product liability: design defects that leave your product inherently dangerous (what you are discussing); manufacturing defects that make the particular article the user used dangerous (probably not an issue in software); and marketing defects, which usually refer to a failure to warn the user of a dangerous condition that they could not likely discern on their own. For instance, if you label your chainsaw as being safe for juggling and it cuts a performer’s arm off, then even though the chainsaw was not defective for its normal use, you can be made to answer in products liability.

Most software products liability probably relates to the last category. If software came with a warning on the box that said something like, “WARNING: This product contains a dangerous, uncorrected race condition that could be exploited by a malicious attacker to take control of your computer, obtain access to all of your data, erase all of your data, and/or conduct and implicate you in criminal misconduct. There are no workaround steps that you may employ to avoid this dangerous condition.” then perhaps the manufacturer would avoid liability.

The first clause is more or less a codification of this standard (and thus, perhaps, redundant), except that it is probably narrower than it need be. If manufacturers are uncomfortable releasing code, they should not be compelled to do so. Instead, it would be enough for them to disclose the defects in their product in a way that can be understood by the user, together with clear instructions for avoiding the danger. This would be true no matter how the software is delivered – whether SaaS or shrinkwrap.

Note that many of the problems with this proposal would be ironed out in the legal process. For instance, some have asked what “normal” use would be. This question arises in cases all the time, and is answered just as often. It is probably prohibitively difficult to simply list every normal use of any product, but the jury readily can figure out whether a use appears normal or not. The manufacturer’s claim about what is “normal” use might be interesting to the jury, but it would not be the only evidence offered.

Marcos September 23, 2011 2:31 PM

This would be a great idea if a software security cycle could guarantee zero bugs and flaws. In the real world, this zero-bugs philosophy is impractical, as Capers Jones has shown. It would break all software houses.

I think that software security is a “have to”. Software houses should always apply a software security cycle. But I think that your proposal is not economically viable.

kashmarek September 23, 2011 4:20 PM

Manufacturers of cars are not subject to the Clause 2 liabilities. Cars are now sold and leased with no-sue arbitration agreements (else no car), and the arbitration hearings are generally weighted against consumer complaints.

pfogg September 23, 2011 6:32 PM

The major disconnect I see is the court’s ability to understand the software well enough to properly adjudicate several of the cases mentioned. Deciding what constitutes ‘damage when used normally’ or ‘allows disabling of any function or code’, or possibly even ‘buildable source code’, will be very difficult for a court composed of people who don’t really know any more about computer programming than what dueling expert testimony tells them.

I’ve seen some really wild decisions in patent law and copyright litigation, and even in the narrowly constrained field of voting machines, software issues get muddled pretty regularly.

Under these circumstances, it’s not clear how a software producer could limit risk regardless of the procedures followed, and the suggestion that the current industry is so worthless that it would be a net win to shut it down completely and start over strikes me as completely disconnected from reality.

whims September 23, 2011 7:06 PM

@Jordan Brown: “Plus… source code is absolutely worthless to 99% of customers, and 99% of the remainder really would rather not have to look at it.”

Right. They’d have to take their code to a service guy for a check-up, an estimate on the work needed, and suitable modifications.

Nick P September 23, 2011 9:21 PM

@ Marcos & Grimmelmann

As I’ve posted many times, there are many methodologies and software engineering practices that consistently produce low-defect software. For methodologies, Cleanroom and the Fagan Software Inspection Process are the most cost-effective & perform great. Praxis “Correct by Construction” and Green Hills’ PHASE are costly methods to produce very low defect software. Then, there are more formal software engineering processes (like EAL6-7). Regardless of which is chosen, each has a track record of consistently making software that just works. And some of these companies warranty their software for a certain bug count.

Past that, there are software development strategies that can reduce problems. For instance, there are quite a few issues that come from low level memory management, buffer overflows, etc. Very usable “safe” languages, libraries, etc. have existed for a long time with ways to reuse legacy libraries. These went largely unused. OS’s with the right functionality & few severe vulnerabilities have existed for a long time, but they go largely unused. Quite secure comms protocols have existed for a long time & can be cost effective, but these are rarely used. Low risk file formats with plenty of support have been available for a long time, but high risk formats are used by default. The problem isn’t a lack of techniques to produce correct, reliable, high quality software.
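To illustrate the buffer-overflow point with a minimal example (an assumed sketch, not anything from a particular product): the unsafe idiom and a bounded alternative, side by side in C:

    #include <stdio.h>

    void greet(const char *name)
    {
        char buf[16];

        /* Unsafe idiom: strcpy(buf, name) overflows buf for long names. */
        /* Bounded alternative: snprintf truncates instead of overflowing. */
        snprintf(buf, sizeof buf, "%s", name);
        printf("hello, %s\n", buf);
    }

    int main(void)
    {
        greet("a deliberately over-long, attacker-controlled string");
        return 0;
    }

The bounded call has been in the standard library since C99; as with the safe languages and protocols above, the technique is old and cheap, it just has to be used.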

The problem is a lack of effort to use existing techniques, however cheap or productive, to produce such high quality, safe software. It’s an issue of intent, not capability. That’s why liability legislation is the only solution. It must be worded very carefully, though, I will agree.

(Note: The great bug reductions we’ve seen in Windows after adoption of Microsoft’s SDL argue my point. And they still wrote many key portions in unsafe languages, constructs, etc. w/out fully using what tools and strategies are available to prevent security issues. Legacy played a large role in that, though, so we can’t knock them too much & new software vendors applying SDL would experience even greater results.)

ron September 23, 2011 10:18 PM

Bruce, I have agreed with your basic premise for years.
Whether it is these terms, or some other, we as a society and consumers have to DEMAND that software manufacturers face the same liability rules as other manufacturers.

Just because they deal with electrons in the form of bits and bytes rather than steel and plastic does not mean they should be exempt from the accepted normal rules of responsibility for their products.

Existing shrink-wrap EULAs have to end. Currently, defective software can cause havoc costing thousands of dollars to recover from, and at best the software manufacturer is only liable for the purchase price of the software. In what way is that just?

We need a modern “Ralph Nader” for software.

[snip]
If I spill a $1.25 coffee on a $20,000 rack unit, does McDonald’s owe me a replacement coffee, or replacement rack unit? (Do they only owe me a coffee because I had the ability to opt-out of the cream and sugar?)
[/snip]

PiP: in what way do you figure McD owes you or anyone for any part of this problem? It was your own stupidity for bringing liquids within range of the device, probably expressly against corporate policy/rules, and your clumsiness for actually spilling it.

You f’d up, YOU PAY for the broken device and for the replacement coffee too! McD’s only mistake was selling you the coffee in the first place! You sound like the granny who actually won the legal lottery when she was stupid enough to hold a cup of hot coffee between her legs while sitting in a moving sports car (stiff suspension). Is it any surprise she spilled it and got burnt? But some idiot court agreed with her.

I knew a guy who was shafted in exactly this type of situation. He was a tow truck driver. A guy died running into his tow cable, which was extended across the road. He had the flashing lights on etc. The dead guy was:
A) DRUNK (illegal)
B) SPEEDING (illegal)
C) driving a snowmobile on the ROAD (illegal)
D) driving without a helmet (illegal)

The dead idiot’s wife won a multi-million-dollar settlement; my friend was stuck for 10%, his company the rest. Where is the justice?

Anonymous 1 September 24, 2011 1:20 AM

Mike Scott: Not really much of a problem; just require that the source code be available to the customer at no charge, or a small charge for media (which basically everyone distributing a Linux distro has to offer already).

Jordan Brown: Today’s open source software tends to have fewer bugs than proprietary stuff, and the bugs that do get discovered in the major packages tend to get fixed a lot quicker as well.

Anomymoooooooossss September 24, 2011 7:53 AM

  1. Beyond the simplest of programs, most commercial s/w is very complex. You often can’t just snip out functionality you don’t like.

You can if it’s designed well, at least for major parts. Plugins are one way. Simple interfaces, encapsulation, “black box” design…
You know what? This proposal would push things towards the use of these techniques, and they also happen to make more classes of bugs less likely (see the sketch below).
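A toy C sketch of the plugin point (all names invented for illustration): when every feature sits behind the same small interface, any one of them can be switched off without touching the others.

    #include <stdio.h>

    /* Each feature hides behind one tiny interface ("black box"). */
    struct feature {
        const char *name;
        int         enabled;      /* a licensee flips this to 0 to disable */
        void      (*run)(void);
    };

    static void run_spellcheck(void) { puts("spellcheck running"); }
    static void run_autoupdate(void) { puts("auto-update running"); }

    static struct feature features[] = {
        { "spellcheck",  1, run_spellcheck },
        { "auto-update", 0, run_autoupdate },  /* disabled; code untouched */
    };

    int main(void)
    {
        for (size_t i = 0; i < sizeof features / sizeof features[0]; i++)
            if (features[i].enabled)
                features[i].run();
        return 0;
    }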

  2. The testing and certification process is huge. On the largest projects, make one change and you might have to run 100,000s of tests on a product to ensure nothing broke. Aside from those tests not being delivered, most end-users won’t have the platform to run the tests.

I see no requirement that the modified version needs to be as stable as the original one. Code designed to try to be less stable than the original when modified would likely fall under “sabotage” and be legally actionable.
This leaves “normal” breakage, which I assume you meant. Which, if the user wants, brings us to:

  3. Customer support is thrown out the window (and that’s an area where many s/w companies make money). How can any engineer expect to support a customer sliced and diced product at the code level?

Surely not? There’s a huge gold mine here, waiting to be tapped. Think of all the customers suddenly wanting a few modifications to software they run, but without the technical means to do it themselves. Aren’t they going to contract someone else to do it for them? This already exists in the free software and open source business: companies such as the old Cygnus (Cygnus, Your GNU Support – does the name/acronym make sense now?), or a number of newer ones. A huge opportunity here. Freedom and money to be made.

  4. S/W functionality that is cleanly removable from a compile is often written that way to support multiple versions (different compilations). Providing source code allows end users to (more easily) upgrade to more powerful products (one code base, multiple compiles).

You say this as if it’s a bad thing? That companies that built their software properly in the first place would be disadvantaged? On the contrary, those will be the least hindered by such a proposal, as they’ll already be ready for it.

David September 24, 2011 9:43 AM

Who decides what a feature is? To my sorrow, I’ve spent thousands of hours trying to make sense out of customer requests. As soon as you get a customer who disagrees with the software developer about what chunks should be capable of being disabled, this whole notion breaks down. My guess? That’s generally gonna happen by the time customer #3 gets involved.

Kind of a neat idea, maybe, but is it really necessary? I’m very much not a lawyer, but can’t this sort of thing be handled in a EULA? And/or negotiated on a case-by-case basis for those who want something special?

Nick P September 24, 2011 10:25 AM

@ ron

“You sound like the granny who actually won the legal lottery when she was stupid enough to hold a cup of hot coffee between her legs while sitting in a moving sports car (stiff suspension). Is it any surprise she spilled it and got burnt? But some idiot court agreed with her.”

Your statement is unfair to her & shows utter ignorance of the case. It’s forgivable, because many people don’t know the important details of that case. I got to debate it in college back in the day. This was not a case of someone spilling an average cup of coffee on themselves and experiencing discomfort. Far from it.

McDonald’s was experiencing an issue with their coffee: it would over time drop below the ideal temperature, forcing them to brew more constantly. They wanted to cut costs. The way they did this was brewing the coffee extremely hot, & there wasn’t anything intuitive about it for a first-time drinker. The first time I drank McDonald’s coffee I couldn’t taste anything for the whole day, my tongue was burned so badly. Likewise, when this lady spilled what she thought was normal “hot” coffee on herself, she experienced third-degree burns!

So, the courts had to decide if making coffee hot enough to cause third-degree burns, without an adequate warning, was acceptable. The courts also looked at the intent, which was pure profit. The court decided that McDonald’s acted unreasonably & that the woman had no way of knowing that their coffee could burn several layers of her skin off in seconds. (I can’t overstate third-degree burns.) So, she won & McDonald’s rightly paid the woman they scarred for life. Q.E.D.

Side note: What happened with the tow truck driver was totally ridiculous & an example of the US legal system gone horribly wrong. This happens too often.

Anonymous 1 September 24, 2011 2:24 PM

David:

Kind of a neat idea, maybe, but is it really necessary? I’m very much not a lawyer, but can’t this sort of thing be handled in a EULA? And/or negotiated on a case-by-case basis for those who want something special?

EULAs are usually used for removing liability from software companies, not adding it (for many software companies you’ll need to make it mandatory as they won’t do it on their own).

Besides, is every small business and home customer really going to be negotiating the details of the software licence? Big companies and governments have the ability to actually negotiate those things (governments can go a step further) so if they think it important that a vendor be liable for crappy software they can ensure that happens.

Oh, and in reference to the McD too-hot coffee case: all the plaintiff had originally asked for was payment of medical expenses, which McD weren’t willing to pay (so they ended up paying a lot more in the end).

Clive Robinson September 25, 2011 1:57 AM

@ Nick P,

“Side note: What happened with the tow truck driver was totally ridiculous & an example of the US legal system gone horribly wrong. This happens too often”

I don’t know the case in question, but from the description it sounds like not-too-dissimilar cases I’ve heard of.

And in some of those cases it revolves around the idea of “best practice”; the game works as follows:

As the injured party, your lawyer has to demonstrate that the other party was in some way “negligent” in putting out the warning signage etc. It then falls to the other party’s lawyer to show “reasonable practice” as a defence to the charge of negligence. The injured party’s lawyer then has to show that what they are suggesting is “reasonable” because “other people do it” and thus it is obviously a “known issue” (sometimes a requirement for negligence) which has been “mitigated by others” and is thus better practice that the other party should have followed; it follows that they were negligent.

It does not matter if the mitigation is completely useless or inappropriate in the situation; they only have to show that the other party did not do it and was thus negligent…

It’s this sort of “slam dunk” type lawyer technique that is responsible for much of the complaint about “Health and Safety gone mad” in modern society.

Sadly it’s a real issue, in that people need to understand at a real level that what they are doing is dangerous, and that it does not matter what safety systems are in place, because they can and do fail. Thus if you know something has risk, you should proceed with the degree of caution required by the situation, not that suggested by the perceived risk of the safety system failing.

In simple terms, machine tools have guards to help prevent incidents caused by a chain of individually highly unlikely but foreseeable classes of events, such as tool breakage. They are not there to mitigate stupid behaviour.

That is, your work practice should be based around the idea that the guards are not there: you don’t push work into the power saw with just your bare hand; you wear a glove and use a “pusher piece” to distance your hand from the blade. When you do get an unlikely event (an unseen nail or stone buried in a piece of wood you are cutting), the guard limits the possible side effects (bits of nail/stone/blade flying around).

Poul-Henning Kamp September 25, 2011 8:49 AM

I’m the author of the original piece.

I think a number of the commenters could have benefited from actually reading my article before commenting on just the bits Bruce cited; you would find a lot of your questions answered there.

But yes, there is a lot of hard work and border-drawing to be done before the proposal becomes law anywhere. We have people for that, though: they are called legislators, lawyers, judges and juries.

The important message in my piece is that it is possible to impose software liability and still leave software houses economically viable avenues to continue in business.

Randall September 25, 2011 8:16 PM

You need something like the credit card industry’s PCI rules: specific rules that increase as the stakes get higher and limit liability. ‘Nother idea: sellers of security-critical software have to pay, like, a 1-3% tax into a fund for compensating victims of breaches, and your tax can go up or down based on some kind of outside review of your practices and code or your past security record or something like that.

Nick Coghlan September 25, 2011 9:22 PM

Definitely worth thinking about – having source availability as a prerequisite for “buyer beware” disclaimers sounds perfectly reasonable to me.

It means software vendors have a stark choice:
1. Say “trust us”, keep the source code secret and pony up the cash when they inevitably screw up
2. Provide the source code to customers to allow them to do their own due diligence

The success of open-source-based companies (and the fact that the internet itself relies heavily on open source software) assures us that this wouldn’t be the death of the industry as a whole.

vasiliy pupkin September 26, 2011 11:25 AM

Any electrical equipment has UL approval on its label, i.e. approval by an independent safety tester with proper tools, people, skills & procedures.
Is it possible to have the same for S/W?
Just asking for input.

Nick P September 26, 2011 2:32 PM

@ vasiliy pupkin

“Any electrical equipment has UL approval on its label, i.e. approval by an independent safety tester with proper tools, people, skills & procedures.
Is it possible to have the same for S/W?”

Well, it’s somewhat complex due to the complexity of software. Code review, static analysis, and certain formal methods have been shown to catch more bugs than testing & even prevent them before execution. As Dijkstra said, “testing only proves the presence [not the absence] of bugs.”

The real trick, though, is that programs must be designed for verification. The rigorous DO-178B process does this by requiring traceability between requirements, design & coding documents. Additionally, the program should be modular & written with safe constructs whose behavior is predictable. For instance, recursion & goto statements should be avoided. Exception handling should be available during any access to a resource that can cause failure. File & protocol header formats should be easy to parse.
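As a sketch of what “designed for verification” can mean at the code level (a hypothetical 8-byte record format, invented for illustration): no recursion, no goto, a fixed layout, and every failure path explicit, so a reviewer or static analyzer can enumerate the behaviors.

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical header: 4-byte magic "HDR1", then a little-endian
     * 2-byte version and 2-byte payload length. */
    enum parse_result { PARSE_OK, PARSE_SHORT, PARSE_BAD_MAGIC, PARSE_BAD_LEN };

    struct header { uint16_t version; uint16_t len; };

    enum parse_result parse_header(const uint8_t *p, size_t n, struct header *out)
    {
        if (n < 8)                      return PARSE_SHORT;     /* truncated */
        if (p[0] != 'H' || p[1] != 'D' ||
            p[2] != 'R' || p[3] != '1') return PARSE_BAD_MAGIC; /* wrong magic */

        out->version = (uint16_t)(p[4] | (p[5] << 8));
        out->len     = (uint16_t)(p[6] | (p[7] << 8));

        if ((size_t)out->len > n - 8)   return PARSE_BAD_LEN;   /* must fit */
        return PARSE_OK;
    }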

A company that doesn’t design software for verification is essentially designing software for black-box testing. Black-box testing may consist of functional use cases, failure-attempt cases, or fuzzing. In any case, they are relying on testing because the source isn’t good enough to analyze by itself. If we do independent certification authorities, like the DO-178B groups, then we must have development standards that increase the verifiability of the application. Then, an independent group of reviewers checks it for flaws. If they find nothing significant, it’s stamped approved. This would be an improvement over current industry standards.

AppSec September 26, 2011 2:56 PM

I’m coming into this late, IANAL, and I haven’t read the full text (nor all the comments)… So with that said, ignore this or read on.

That said, this seems infeasible due to issues of consistency, the reasonable usage/abilities of the end user, etc.

How many people are really going to be able to disable a piece of functionality? (How fine-grained does this functionality have to be? I could give you the source and a config that says: Build product (Y/N).)

What about a web application? Does that mean I have to provide my server-side code, application server, web server, and database?

Oh, and by the way, my web page is already “buildable”: I use JavaScript, and you have the ability to turn it on or off. And it costs you nothing. So I guess I don’t have to worry?

I just don’t see how this can work.

Clive Robinson September 26, 2011 5:36 PM

@ Vasiliy Pupkin,

“Is it possible to have the same for S/W?”

The simple answer is yes, and I suggested it some considerable time ago.

UL was originally set up to reduce costs to insurance companies, and as such it worked.

The main problem is complexity and the number of failure modes. With locks and other mechanical items it is relatively easy to take them apart and test the individual parts and the way they work with each other.

Importantly, with mechanical devices there is a limit (adjacency) on how many other parts each part interacts with.

Not so with software: with the likes of memory leaks and freeing malloc’d memory more than once, the interactions can be almost impossible to predict.

Whilst, as Nick P has pointed out, there are methods to limit these sorts of problems and keep interactions recognisable, most software companies don’t want to go down that road.

A lot of this is actually down to “legacy” and “code reuse” issues, where earlier code that is relied upon has literally no quality control of any kind. Oh, and the mantra is “if it ain’t broke…”

Anonymous 1 September 27, 2011 6:28 AM

AppSec: So what if most people can’t understand the source code? It’d be for those few who can, and who really need to be able to audit the code.

As for web apps: if someone is paying you to use it as a SaaS system, then I think they have every right to expect you to follow the same liability rules as someone who sells software to run on the customer’s own computers.

Marian Kechlibar September 27, 2011 12:04 PM

This proposal is crazy beyond belief.

Software does not run in a vacuum. It runs on a buggy and ill-documented OS, which runs on buggy and even less documented firmware, which runs on buggy and often-failing hardware.

This is even more visible in mobile devices (ah, the future!). My software corp. (full disclosure: co-owner, 33 per cent share) has written about 2 million lines of code for Symbian OS alone, and about half of the serious production bugs were traceable either to the underlying OS or to specific hardware, often half-baked bricks from some Indonesian factory.

Anon September 27, 2011 2:35 PM

Then if you can trace the bugs that caused damage to the OS, you should be able to get off and have the OS vendor held liable; the same goes if you can prove it’s buggy firmware or hardware.

Clive Robinson September 27, 2011 5:32 PM

For those interested in why liability legislation is unlikely to work, think about “audit legislation” like SOX (Sarbanes-Oxley) and the effects that has had.

Rather than me writing it all up (as I’ve done in the past on this blog), take a look at the Financial Cryptography web site for an up-to-date take on it:

https://financialcryptography.com/mt/archives/001331.html

The point being, if you look closely at what has happened in audit, you will see exactly what companies will do: basically, take quite a few quick steps backwards, leaving things less secure than they currently are.

Marian Kechlibar September 28, 2011 9:17 AM

Anon: and can you imagine the cost and paperwork thereof…?

And if the vendor of the OS disagrees – hello litigation, court expenses etc.?

Expensive as hell, and a perfect deadweight killer for any non-OSS small corporation.

Nick P September 28, 2011 2:36 PM

@ Clive Robinson

I disagree with that position. There are certain ways to do it that work. The DO-178B situation is an example (although at the higher end of things). Any critical software that runs on an aircraft must go through this rigorous certification process & meet any stated requirements. Serious flaws found = start from scratch. The highest levels of DO-178B certification can cost up to $10,000 per line of code. On the bright side, if you pass, you can make a decent amount of money. If that blog post were right, you’d think we’d get a few toy apps/systems & hardly anyone in the market.

Reality worked differently: there exist around a dozen DO-178B software vendors. We have OS’s, middleware, graphics drivers (reliable ATI drivers? FINALLY!?), sound, GUI’s, rigorous development tools, traceability, and on and on. The liability was failure to be certified and each company faced a ton of it. The result was they each applied rigorous development processes & invested heavily in ensuring the systems met their stated goals. We’ve seen similar results in other high-level safety- and security-critical certifications. The vendors even made their products try to accomplish many at once to increase ROI.

So, a suitable scheme in the US must produce a payoff of some kind (greatly reduced liability, govt contracts, etc.). The scheme should also allow for relatively fast certification by qualified independent groups (i.e. no useless redtape bs) and government certification of those groups. The development process and the product would be certified by a lab. In a lawsuit, the lab would just look at the development records to see if they were still following the process. If the company isn’t pre-certified, a lab can be used to see if it’s following the new baseline standard of quality practices or if it’s producing uncertifiable garbage. If it fails either test, then the company gets the big liability.

Zingus September 28, 2011 3:45 PM

Problem: liability cannot arrive in software ONE COUNTRY at a time.

I know, Americans with their huge domestic market… blah blah, but NO. Being the first to adopt that kind of legislation would simply KILL your software industry.

(The chinese would be delighted.)

Unless you plan to introduce a combined embargo – but would that ever work? And would we want to live in such a world?

RobertT September 29, 2011 12:03 AM

@NickP
“Well, it’s somewhat complex due to the complexity of software. Code review, static analysis, and certain formal methods have been shown to catch more bugs than testing & even prevent them before execution. As Dijkstra said, “testing only proves the presence [not the absence] of bugs.””

I’m not wishing to pick a fight, but in my experience the best place to hide a bug/exploit is in plain sight. By this I mean: make the bug/backdoor/exploit a critical part of the system protocol. This way every implementation contains the problem. The formally verified solutions just contain a perfect implementation of the problem.

It is a little difficult to sneak these exploits into the logic function itself, but below the logic function there is always a real-world implementation; focusing on the real world and working backwards to the protocol (logic) is a good methodology for building in exploits that even “perfect code” must contain.

HynekK September 29, 2011 8:17 AM

Moreover, hardly any software today is programmed from scratch, and the software company does not have the complete source code available. You always use third-party components and toolkits. This usually goes down several layers, and a lot of companies would be involved.
Getting the “complete” source code would be nearly impossible, and compiling everything from scratch would be impossible for any customer.

Hi September 30, 2011 4:06 AM

The same courts that currently sort out patents would sort out liability disputes. Do you trust them enough? I do not.

Basically, it is a “the guy with the most money wins” type of game. And even if it were not, litigation fees are too big. Small businesses cannot afford to defend themselves in court.

vasiliy pupkin September 30, 2011 8:33 AM

@ Marian Kechlibar.
All components should be tested by an independent lab and get UL-type certification, including H/W, F/W and OS.
Why not?
Second posting.

Marian Kechlibar October 2, 2011 7:56 AM

Vasiliy: Can you imagine the world-wide migration from the current status quo to the one you’re proposing?

vasiliy pupkin October 3, 2011 8:28 AM

Marian,
Thank you for the input.
When anything (hardware) is attached to a phone line, that unit usually has FCC certification (as best I know), like UL approval on an electric device; that is, when something is attached to public infrastructure, existing practice requires independent verification of safety. For the aviation industry, everything (H/W, S/W, F/W & O/S) already goes through such a process.
The existing status quo with S/W is not okay.
Doing something wrong 1000 times does not make it right.
Changes (migration) can be applied gradually, with priority based on different aspects of risk assessment and cost-benefit analysis for existing/legacy systems, and, from a particular moment on, for all new systems.

