Unreliable Programming

One response to software liability:

Now suppose that there was a magical wand for taking snapshots of computer states just before crashes. Or that the legal system would permit claims on grounds of only the second part of the proof. Then there would be a strong positive incentive to write software that fails unreproducibly: “If our software’s errors cannot be demonstrated reliably in court, we will never lose money in product liability cases.”

Follow the link for examples.
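For a flavor of what that would mean in practice, here is a minimal C sketch of deliberately irreproducible failure (hypothetical, not taken from the linked article): every time the buggy path is hit, the failure mode itself is randomized, so no two crashes of the same bug look alike.

```c
/* Hypothetical sketch: an error handler that randomizes its own
 * failure mode.  Nothing here is from the linked article. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void fail_unreproducibly(void)
{
    switch (rand() % 3) {
    case 0:  abort();                               /* hard crash   */
    case 1:  exit(1);                               /* silent exit  */
    default: fprintf(stderr, "transient error\n");  /* limp onward  */
             break;
    }
}

int main(void)
{
    srand((unsigned)time(NULL));
    fail_unreproducibly();   /* imagine this guarding a real bug site */
    return 0;
}
```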

Posted on July 11, 2006 at 7:47 AM • 26 Comments

Comments

DevOne July 11, 2006 8:40 AM

I’d like to see the added development time for building that in, and how many software companies would be willing to QA that the failure modes really are random.

Of course, I’d then like to see the court orders to view the source code, and what would happen when they find those deliberate random errors.

Writing reliable code is hard enough… writing reliably random code failures… wow…

a. July 11, 2006 8:59 AM

Maybe that would be easier to make a reality if EULA-type agreements could also be used everywhere in life?

If you work for a company / get married / use company X for your flights, you agree to this kind of stuff:

[your employer/spouse/travel agent] reserves the right, at any time and from time to time, to update, revise, supplement, and otherwise modify this Agreement and to impose new or additional rules, policies, terms, or conditions on your use of the Service. Such updates, revisions, supplements, modifications, and additional rules, policies, terms, and conditions (collectively referred to in this Agreement as “Additional Terms”) will be effective immediately and incorporated into this Agreement. Your continued use of the (work/marriage, etc.) following will be deemed to constitute your acceptance of any and all such Additional Terms. All Additional Terms are hereby incorporated into this Agreement by this reference.

Thus, whatever the agreement, all the details can always be updated by the other party to avoid any responsibility of any kind. Maybe realizing how ridiculous that would be in real life could get it changed for software too…

Fred Page July 11, 2006 9:25 AM

Very humorous, but (from the proof of fault description) if a user error causes a crash, that sounds like a defect to me – at least in every domain I’ve ever worked in.

D July 11, 2006 9:29 AM

This sounds a bit ridiculous. Put more engineering into the software to defer or shift liability – if they can’t get the core functionality of an app correct, how could they possibly get these additional “features” correct?

Assuming software liability became law, I suspect that the first time this is attempted and that attempt fails (poorly engineered blame shifting), the law would be amended to increase penalties for companies found to be making such attempts. (Along the same lines that crypto isn’t illegal, but using it while committing a crime compounds the punishment.)

–D

TOMBOT July 11, 2006 9:45 AM

Attorney for Ford: “Your honor, these fatal rear-end collisions were a result of user error on the part of drivers traveling behind the victims. The Pinto is built to the same safety specifications as every other car in the market today. Our client cannot be held liable for these lethal explosions, because our model shows that without an error on the driver behind the victim, these accidents cannot be replicated!”

I believe this blogger is mistaking his particular map for the territory, as it were. This doesn’t bear much of a resemblance at all to how liability is found.

Ian July 11, 2006 9:47 AM

This assumes defects are indeed a crime, or at least that they incur a cost. So “going to the effort” of hiding the fact that your code sucks may be less costly than fixing it. Maybe your custom linker just adds this “special feature” for you and you can go on with your crappy code.

Chase Venters July 11, 2006 9:49 AM

I still think that vendor liability for security faults in their software is a bad idea. It’s like charging a front door vendor with liability because some big guy broke down someone’s front door, entered their home, and stole their jewelry.

There is of course a need to protect consumer data. The liability belongs with the people who hold it, because then they can make a calculated decision to determine:

  1. What vendor will supply my software?
  2. How much consumer data will I retain, given that a breach could make my company extremely liable?

When a number of these parties get burned, people will start to determine who vends secure software and who doesn’t (and some of those who don’t might perhaps take real steps to improve).

It might have an additional side effect of discouraging businesses from retaining data they shouldn’t.

robber.baron July 11, 2006 9:52 AM

I have no problem believing that companies would be interested in a solution like this.

My parents were involved in a car accident in the late 90s where the car’s cruise control failed to disengage while braking on the highway. My father was forced to use the emergency brake and good driving skills to avoid rear-ending a car at 65 MPH going into a construction zone.

The car company tried every trick they could think of to suggest that it was driver error or otherwise not my parents’ fault. The irony is that they knew this was a problem, because within a year they had recalled the car model over other, similar problems.

It took the threat of a really vicious lawsuit for them to offer anything to my parents who were nice enough to accept a new car and medical bills payment.

I could see a company trying anything to avoid responsibility for faulty software.

This also poses a problem for open source projects. If gaim crashes and causes monetary damages, who do I sue? There isn’t a gaim foundation, there aren’t employees; there are only the individual developers. Do I sue those who contributed the code that led to my specific crash? Can I take their personal fortunes [or lack thereof] in compensation?

Although I like the idea of financially motivating companies to write better code [especially the one I work for] I just don’t see it working out as intended.

TOMBOT July 11, 2006 9:57 AM

@Chase:

That depends entirely on the assurances the front door vendor makes to their customers. What doormakers and lockmakers have that the software industry doesn’t yet are UL stickers and ISO standards – instead we get stuck with the Common Criteria, which vendors get to make up as they go along.

@robber.baron:

You wouldn’t sue anybody, because by using free OSS you’ve already signed the bungee jump waiver. That’s why Red Hat is more of an insurance company than a software one.

Chase Venters July 11, 2006 10:03 AM

The critical difference I see here is between ‘accidental’ breakage and ‘intentional’ breakage.

If your car gets rear-ended and explodes (when no other cars on the market explode when rear-ended at that speed), then yeah – I think you should be held liable.

But if someone is actively trying to cause your car to explode, how could you be held liable?

So in other words, if someone uses Windows NT for Boeing autopilot, and it causes the plane to crash and burn, hold Boeing liable. But if the pilot was hex-editing kernel memory at the time of the crash, hold the pilot liable.

roy July 11, 2006 10:04 AM

Another sneaky idea:

The vendor creates two versions of source code, with negligible, but readily identifiable, differences. The official version never gets released. All releases are the undocumented version.

In a product liability suit, the vendor will discover the customer was using an unauthorized counterfeit, which it will prove in court, thanks to the official source code.

I think this scheme would work. No buyer ever gets to compare what he bought to the original official source code, and counterfeiting is improving astonishingly.
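A toy illustration of what such a “negligible, but readily identifiable” difference might look like (every name here is invented; this sketches the scheme, not anyone’s actual practice): the official build, compiled only in the vendor’s vault with -DOFFICIAL_BUILD, differs from every shipped binary by a single watermark constant that the vendor can later point to in court.

```c
/* Hypothetical: official vs. released build differ only by a
 * watermark.  All names invented for illustration. */
#include <stdio.h>

#ifdef OFFICIAL_BUILD
static const char build_tag[] = "OFFICIAL-7f3a";   /* never shipped     */
#else
static const char build_tag[] = "RETAIL-7f3a";     /* what buyers get   */
#endif

int main(void)
{
    printf("build tag: %s\n", build_tag);
    return 0;
}
```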

Prohias July 11, 2006 10:31 AM

I don’t believe the idea of failing unreproducibly will protect the program authors. One would have to prove that the program could fail unreproducibly (even if in different ways), and that may not be too difficult with automation, multiple machines, and a long enough time window. Moreover, the affected party can assert that the unreliable failure was purposely tacked on, making the authors even more culpable. All I see is scope for people in the audit business, if this becomes a reality.
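A rough sketch of that kind of automation (hypothetical; `suspect_app` and `fixed_input.bin` are invented names): replay one fixed input many times and tally the outcomes. Identical input producing a spread of different exit statuses is itself evidence of nondeterministic failure.

```c
/* Hypothetical replay harness: run the same input 1000 times and
 * count distinct exit statuses.  POSIX only (sys/wait.h). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>   /* WIFEXITED, WEXITSTATUS */

int main(void)
{
    int counts[256] = {0};

    for (int run = 0; run < 1000; run++) {
        int status = system("./suspect_app < fixed_input.bin >/dev/null 2>&1");
        if (status != -1 && WIFEXITED(status))
            counts[WEXITSTATUS(status)]++;   /* crashes via signal are simply
                                                not counted in this sketch */
    }
    for (int code = 0; code < 256; code++)
        if (counts[code] > 0)
            printf("exit status %3d: %4d runs\n", code, counts[code]);
    return 0;
}
```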

Jeremiah Blatz July 11, 2006 10:32 AM

Amusing, but fallacious. Even in closed-source software, there’s no lack of smart, obsessive teenagers willing to decompile apps and look for dirty tricks. Especially if doing so gives them the ability to stick it to the Man and be a hero (as opposed to a terrist).

Ian Woollard July 11, 2006 11:16 AM

I’ve actually seen code in test copies of production systems that randomly deallocated and reallocated memory if it ran out. In other words, a piece of memory owned by another application would suddenly get deallocated and reallocated without warning.

The reason they did this was that the guys who put it in had once been blamed when the system ran out of a memory resource. Even though it wasn’t their fault, their code logged the error, so they got stuck with the bug investigation. So instead of tracking down the memory leaks in hundreds of different places in the code, they introduced code that would, on busy systems, randomly crash or do very strange things, but whatever happened, nothing would be logged in their section of the code!
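A reconstruction of that trick as it might look in C (a sketch built from the description above, not the actual code):

```c
/* Sketch: when allocation fails, free a randomly chosen block that
 * some other component still owns, then retry.  The eventual crash
 * surfaces far away in the victim's code, and nothing is logged here. */
#include <stdlib.h>

#define MAX_BLOCKS 1024
static void *block_table[MAX_BLOCKS];  /* live blocks owned elsewhere */
static int   block_count;

void *desperate_alloc(size_t n)
{
    void *p = malloc(n);
    while (p == NULL && block_count > 0) {
        int victim = rand() % block_count;           /* pick a block    */
        free(block_table[victim]);                   /* seize it back   */
        block_table[victim] = block_table[--block_count];
        p = malloc(n);                               /* retry quietly   */
    }
    return p;
}
```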

JakeS July 11, 2006 11:37 AM

Software fails unreproducibly anyway, and has done for decades.  IBM’s MVT* used to crash every now and then in the old days;  usually (a) we couldn’t figure out from the core dump why it crashed, and (b) if we did the same thing again it worked.  Same with modern systems.  Software is just so complex that often you can’t reproduce the chain of events that causes a crash.  You just shrug, reboot and carry on.

  • MVT, Multiprogramming with a Variable number of Tasks, was an operating system on IBM mainframes in the late 1960s (if you don’t know what a mainframe was, ask grandpa).

Thom July 11, 2006 12:02 PM

Any willful hiding or forcing of errors would require considerable collusion that I’m fairly sure would eventually be discovered. For it to succeed, every instance of similar deceit, everywhere, would need to be successful, so that no one started looking for it elsewhere. People would need to be paid off on a regular basis, or to disappear at a disconcerting rate. That’s not pure faith in the “good guys” catching things, just faith in the “bad guys” behaving badly.

Depending on what the software does and what it’s embedded in, it has the potential for inflicting death, damage, and mayhem if it goes wrong.

Some level of liability and consumer recourse needs to exist for any commercial product.

Roger Binns July 11, 2006 12:53 PM

There are many more ways around this. For example sell two versions of the software – one for $99.99 and one for $100m, with the latter providing liability payments. Customers who choose the cheaper version have voluntarily decided to forgo liability.

Or don’t charge for the software at all, only charge for “service”. The customer will end up paying the same amount due to the economics of making software but now it will be in a different column.

Or agree to liability, but only on certified configurations: for example, an exact stepping of a particular processor on a particular motherboard at a particular speed, running exact versions of the operating system and auxiliary programs. Anyone using any other configuration is not covered. This has already been the reaction of the ear-bud/Q-tip industry: now they say they are for anything but sticking in your ears, yet that is exactly what everyone does.

There is already a solution today for people who want “assured” software. Almost all that has been Common Criteria certified has had a certain amount of due diligence applied – go out and buy more of that!

CubeDweller July 11, 2006 2:12 PM

@Ian – Yours is the first post I’ve seen that I can guarantee comes from a real world developer. That’s exactly how it works.

Grahame July 11, 2006 2:48 PM

@roger.binns: I have a friend who works for IBM on a program with “mathematically verified” routines (not the whole package). When I look at the price of that, I think your $100 million is not that far wrong for truly reliable software.

The car analogy is just wrong. Would you buy a safer car if it cost a million times more?

another_bruce July 11, 2006 3:14 PM

i am not impressed by the ideas to avoid liability suggested in the link, because they depend on keeping the source code absolutely secret; if it is revealed, the writer will have to explain in court why he inserted “sabotage code”.
one thing i learned practicing law is that it’s almost impossible to keep important things secret. here’s how this might go:
plaintiff’s attorney issues deposition subpoena duces tecum to software company (in this case duces tecum means “show code”). software company moves for protective order against showing code, citing proprietary interest/potential damage of disclosure. judge appoints special master, an expert in the field trusted by the court, to examine the code and issue report. special master’s report shows several malicious bogeys in the code. software company ceo forced to testify at trial that he put the bogeys in to cause the system to crash irreproducibly. counsel waxes wroth, jury rolls eyes.
wanna try to fool the special master by submitting another version without the bogeys? your submission will be under oath. you better be universally popular in your office and hope that nobody can decompile the original version…
in sum, following the author’s suggestions is just plain stupid.

Shachar Shemesh July 11, 2006 3:35 PM

I don’t buy the argument. It centers around one important factor – that a user can only sue for a crash.

There have been successful suits against Microsoft (settled out of court, naturally) over crappy Hebrew support in Word. A bug, as far as software liability goes, is simply not something the software vendor CAN hide.

Very much related to the above is the tactic of detecting when a “liability incurring situation” happens. 80% of the time, if you can detect that, you can also solve the bug, which has the added effect of not only releasing you from the liability, but also producing better software.

On another note altogether, I think the liability should be apportioned not to the people who introduced the bug, but to the people who can fix it. This is the real reason that FOSS should not be held liable while free (as in beer) software should. It’s also the reason why a Red Hat package does carry liability, and it’s Red Hat’s, not the upstream package author’s.

Shachar

João Neves July 12, 2006 4:15 AM

I wonder why this article made me recall this one:

Kernel Hacker’s Bookshelf: Failure-oblivious computing
http://lwn.net/Articles/188059/

Of course, this just covers ignoring faults in a code path that is short and isn’t supposed to affect the global state, but I’ll bet someone will try to use something like it while ignoring those initial assumptions.
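For readers who don’t follow the link, a toy C version of the failure-oblivious idea (illustrative only; the real work instruments compiled code rather than using explicit accessors like these): invalid writes are silently dropped and invalid reads return a manufactured value, so a short code path can keep going instead of crashing.

```c
/* Toy failure-oblivious accessors; names invented for illustration. */
#include <stddef.h>

static int fabricate(size_t i) { return (int)(i % 7); /* arbitrary */ }

int oblivious_read(const int *buf, size_t len, size_t i)
{
    return (i < len) ? buf[i] : fabricate(i);  /* bad read: make one up */
}

void oblivious_write(int *buf, size_t len, size_t i, int v)
{
    if (i < len)
        buf[i] = v;                            /* bad write: drop it */
}
```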

Clive Robinson July 12, 2006 8:26 AM

@Chase Venters

“It’s like charging a front door vendor with liability because some big buy broke down someone’s front door, entered their home, and stole their jewelry. There is of course a need to protect consumer data. The liability belongs with the people who hold it,”

There is a fly in the ointment of your argument: when the door is broken down and the “goods” stolen, they have a fixed and calculable value (it’s what the insurance industry has been reasonably good at for the past hundred years or so). And more importantly, there is usually no future liability on you.

However, when your personal data is stolen, it has no real attributable cost for the purposes of judging the loss to you as a potential victim (i.e. it is maybe 8 cents to the collecting organisation). But as it is almost infinitely reproducible, and you the victim pick up the tab from now onwards, you are facing an unknown and potentially infinite future loss.

Civil courts the world over reduce just about everything to pounds, shillings, and pence (OK, dollars and cents 😉 so how do they pay compensation on an unknown and potentially infinite future loss? There is little or no case law on compensation for data/identity theft, and this is unlikely to change as there are not really any specific laws by which it can be judged…

The real solution to the problem of ID theft is “Disposable IDs”, which is just too horrible to contemplate for any government (or currently sane person)…

In essence the Disposable ID process would be simple: you get issued with a secure (by whatever means) ID token. If you find your ID token has been used invalidly, you go to a court and make a representation. If the judge agrees, either the transaction or your old ID token is revoked. If it is the ID token that is revoked, it is revoked as of a certain date in the past, and you get issued with a new ID token. Any open or pending transactions against the old ID are either judged invalid or are transferred by court order to the new ID token.

It sounds deceptively simple; however, two questions immediately arise:

1) Does a Disposable ID sound like a digital certificate?
2) Are there any known problems with Digital Certificates?

Both answers are unfortunately yes, which means you have problems to start with. Then you need to add the human dimension with all its attendant problems…
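To make the revocation mechanics concrete, a back-of-envelope sketch of what a disposable-ID record might carry (every field name here is invented):

```c
/* Hypothetical record for a disposable-ID scheme, for illustration. */
#include <stdint.h>
#include <time.h>

struct disposable_id {
    uint8_t token_id[32];      /* public identifier of this token       */
    time_t  issued_at;         /* when the token was issued             */
    int     revoked;           /* nonzero once a court revokes it       */
    time_t  revoked_from;      /* revocation date, possibly in the past */
    uint8_t successor_id[32];  /* token that pending transactions are
                                  transferred to by court order         */
};
```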

Birp July 12, 2006 10:05 AM

As JakeS says, “Software is just so complex that often you can’t reproduce the chain of events that causes a crash”.

Even the author of the article recognizes this: “Now suppose that there was a magical wand for taking snapshots of computer states just before crashes”.

I think that some responsibility must be demanded of software vendors.
But I also recognize that it’s very difficult to guarantee a small part of a complex system.

Legally, it’s possible that the only way could be to market only fully integrated systems: hardware, OS, and application together. Or, as Roger Binns suggested, to certify each piece of software for use only on specific platforms.

Practically, the only way is active demand from us, the users, on the software companies: refusing to use unreliable software? Sending a formal complaint every time a crash happens? …?

dhasenan July 12, 2006 2:08 PM

You could just use a debugger on every application you run. Granted, the program has to be compiled with the appropriate options, which means you need the cooperation of the software’s creator.
