Computer Security and Liability

Information insecurity is costing us billions. We pay for it in theft: information theft, financial theft. We pay for it in productivity loss, both when networks stop working and in the dozens of minor security inconveniences we all have to endure. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for security, year after year.

The problem is that all the money we spend isn’t fixing the problem. We’re paying, but we still end up with insecurities.

The problem is insecure software. It’s bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the effects of insecure software.

And that’s the problem. We’re not paying to improve the security of the underlying software. We’re paying to deal with the problem rather than to fix it.

The only way to fix this problem is for vendors to fix their software, and they won’t do it until it’s in their financial best interests to do so.

Today, the costs of insecure software aren’t borne by the vendors that produce the software. In economics, this is known as an externality, the cost of a decision that’s borne by people other than those making the decision.

There are no real consequences to the vendors for having bad security or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality.

If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security—especially the security of their customers—it also needs to be in their financial best interests.

Liability law is a way to make it in those organizations’ best interests. Raising the risk of liability raises the costs of doing it wrong and therefore increases the amount of money a CEO is willing to spend to do it right. Security is risk management; liability fiddles with the risk equation.

Basically, we have to tweak the risk equation so the CEO cares about actually fixing the problem, and putting pressure on his balance sheet is the best way to do that.

Clearly, this isn’t all or nothing. There are many parties involved in a typical software attack. There’s the company that sold the software with the vulnerability in the first place. There’s the person who wrote the attack tool. There’s the attacker himself, who used the tool to break into a network. There’s the owner of the network, who was entrusted with defending that network. One hundred percent of the liability shouldn’t fall on the shoulders of the software vendor, just as 100% shouldn’t fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.

We will always pay for security. If software vendors have liability costs, they’ll pass those on to us. It might not be cheaper than what we’re paying today. But as long as we’re going to pay, we might as well pay to fix the problem. Forcing the software vendor to pay to fix the problem and then pass those costs on to us means that the problem might actually get fixed.

Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they’re entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.

Information security isn’t a technological problem. It’s an economics problem. And the way to improve information technology is to fix the economics problem. Do that, and everything else will follow.

This essay originally appeared in Computerworld.

An interesting rebuttal of this piece is here.

Posted on November 3, 2004 at 3:00 PM • 18 Comments

Comments

Eric November 3, 2004 3:22 PM

I agree with the author of the rebuttal piece linked in this post. Passing more laws involves costs too: enforcement on the part of the government, and the sticker price of software, which will go up as companies pass the additional cost of liability protection on to the consumer. In the end, you get less variety of software, and what is left costs more.

What we really need is for consumers of insecure software to start caring more about the security of the software they buy, and voting with their dollars. People complain about insecure software, then turn right around and buy those products, willingly making the tradeoff between security and convenience.

If we the consumers of software decide that security is our top priority, the marketplace will put a value on security, and that will drive companies to write better software.

israel torres November 3, 2004 3:48 PM

… build your church on sand and only faith can hold it from falling apart… for it is the faithless that get blamed and such must stop.

Israel Torres

Tom November 3, 2004 5:07 PM

Any software liability scheme needs a provision for Free software developers. It seems like a shame to penalize these developers, especially when so many of their software projects have good security track records.

Lorrin Nelson November 3, 2004 5:11 PM

A market externality is a very specific thing; I believe Bruce is using the term too loosely. See http://en.wikipedia.org/wiki/Externality.

In the case of a company A that purchases a server, gets hacked, suffers downtime, and loses profits, there is no externality. The entity bearing the cost (company A) is the same as the one that made the decision (to purchase the server). We can wonder why they were short-sighted, and we can claim other types of market failure (such as non-perfect information), some of which may warrant regulation, but it’s not an externality.

But suppose the hack results not in downtime but in stolen customer data. In that case it’s the customers who bear the cost, and there is an externality: their costs weren’t taken into account when the decision to purchase the server was made.

Generally speaking, the government should step in when the market can’t solve the problem. In the former case this isn’t clear. I would argue that the server manufacturer’s liability should be limited to the extent to which it made false claims about the server’s security.

In the second case, however, it seems less likely that the market can solve the problem. Free-market types might quip that customers shouldn’t do business with a company that runs an insecure server. This is hogwash; the customer can’t figure that out, and we can’t afford to have all our citizens wasting their time trying to do so. Company A should be liable for the damage it inflicted on its customers. If it wants, it can purchase insurance. The insurance company can take on the role of assigning different premiums based on whether company A is running an insecure server or not. This is much more efficient than the customers each trying to do this analysis on their own.

Steve Wildstrom November 3, 2004 5:31 PM

The answer to this pretty much has to be in better contract terms when software is purchased.

The law of product liability already applies to software, but–and it’s a big but–product liability law only comes into play when the publisher’s actions cause property damage or personal injury, and economic loss doesn’t count as property damage.

If product liability does not come into play, contract law applies. In the absence of specific contract language covering damages, the liability of the seller is generally limited to the value of the sale. But the parties can specify any damages they agree to, provided that they are generally consistent with law and public policy (i.e., a contract giving a wronged buyer the right to cut off the seller’s hand would not be valid).

Bottom line is that it is up to buyers to demand better contract terms. Of course, this is very hard for individuals or small businesses to do, but I continue to be amazed at the contract and licensing terms that enterprises accept when buying from Microsoft.

Jan Egil Kristiansen November 4, 2004 3:29 AM

I remember Y2k. My impression is that most of the effort then went into placing liability elsewhere, not into actually solving the problem.

Still, the problem was solved.

But why were managers scared by Y2k liability, when normal security problems don’t scare them?

jay November 4, 2004 10:26 AM

This is bad in several ways.

Such legislation will only distort the scene, as principals reposition themselves to shed the liability onto someone else.

As was also pointed out, liability cost is not free money; ultimately the customers are the ones who pay.

It can make freeware, open software, and small independent developer projects financially untenable even if they are not in themselves security problems. Insurance for the independent developer could go through the roof (and no one could afford to produce anything as a hobby product). Of course, the established major players might prefer it that way, keeping the cost of entry so high that they are not easily challenged by garage-shop startups.

John David Galt November 5, 2004 3:21 PM

As the original article points out, an attack is the result of multiple actions and omissions, but that doesn’t mean that none of the persons involved should be held liable for 100% of losses. The actual attacker, and the writer of any attack tool, are not merely negligent but malicious, and should therefore be jointly and severally liable for the entire loss plus punitive damages.

As for the publishers of operating-system and network software, I somewhat agree. The big software publishers have spent years and megabucks successfully lobbying to get themselves excluded from principles like the “warranty of merchantability” which apply to all other products in the marketplace. That’s unfair and needs to be overturned. If a consumer buys an expensive product such as a word processing program and it doesn’t work, he should have an enforceable right to receive a patch that fixes it, rather than be told “we no longer support that, go buy the new version”. (And it ought to be banned as an unfair trade practice for a vendor to continually change its software so that consumers are forced to keep re-buying it in order to keep it working.)

But there are degrees of vulnerability, and there will always be new forms of attack that existing software cannot stop. So it would be excessive and unworkable to make the vendor liable for all attacks; no one would dare stay in the business of publishing software. The “warranty of merchantability” should only be read as requiring best practice as of when the product was written.

As for Steve’s comments about licensing terms: that’s true as far as it goes, but most software today is in fact purchased outright, not licensed. Under the Uniform Commercial Code, when a written contract is signed — or in the absence of a signed, written contract, when the money and goods change hands — the deal is complete and cannot be modified afterward without the consent of both parties. Therefore any so-called “license agreement” inside a software package is legally and morally null and void, regardless of any sealed envelope or “click-wrap” mechanism. (UCITA would have changed the law on this point, but fortunately only Maryland and Virginia ever ratified UCITA.) The law needs to put a stop to the fraud software producers are now committing by pretending that these “licenses” mean something.

Adrian Lopez November 6, 2004 11:47 PM

I’ve become interested in this topic since learning of Microsoft’s End-of-Life policies for its operating system products. Under Microsoft’s Product Life-Cycle policy, Microsoft will stop releasing patches for Windows XP once it reaches the end of mainstream support (for Windows XP Home) or the end of extended support (for Windows XP Professional), effectively forcing a migration to newer versions of Windows and the higher-end hardware needed to run them. Worse yet, it’s possible (although I can’t quite tell from reading their website) that Microsoft will remove existing patches from circulation, making life more difficult for those who need to reinstall unsupported versions of Microsoft Windows.

Something should be done about such practices. I am not too fond of government intervention, but who is protecting consumers from this blatant dismissal of responsibility?

Dido Sevilla November 7, 2004 7:02 PM

The only problem with this scheme is that, as it stands, it renders all Free Software development untenable. This is the reason most Free Software/Open Source Software partisans are so opposed to any legislation of this kind. Perhaps a better tack would be to waive the liability requirements for software whose source is freely licensed. If you want to keep your software proprietary, then you must be prepared to eat the liability for security breaches. I think this is not a totally unreasonable distinction for any prospective software lemon law to make, because Free Software would empower people to make security fixes themselves whenever they become necessary.

Andrew Tappert November 10, 2004 6:22 AM

Bruce Schneier recently posted a provocative article entitled “Computer Security and Liability”. He is upset that the money being poured into computer security–billions of dollars a year–is not spent productively.

The problem is insecure software, but rather than spending to correct the problem, he says, we are spending to contain it and deal with its ramifications. He thinks it would be more cost-effective to actually correct the problem: make software more secure (i.e., better designed, better implemented, and better tested). As the title implies, he proposes to make software more secure by imposing liabilities on insecure software. He sees computer security not as a technical challenge, but as merely a case of misplaced incentives.

Imposing liability implies government intervention. The test to determine whether intervention is justified is to ask, “Could the market take care of the problem on its own?”.

Gregory Haase posted a rebuttal to Schneier’s article entitled “Security Liability Laws are NOT the Answer”. He argues that consumers choosing secure software over insecure software will drive software vendors to correct the problem of computer security. One has to ask, why then does the problem loom so large today? To some extent buyers have been unaware of, or have ignored, the risks they face. Also to some extent security concerns have been outweighed by concerns over cost, efficiency, and compatibility. The Microsoft monopoly over desktop operating systems and office suites has reduced the flexibility of buyers to respond to security concerns. Awareness is increasing (the computer security industry is making sure of that). Choice is increasing (Apple and Linux are more viable alternatives now than ever before). In big business, the regulatory acts HIPAA and Sarbanes-Oxley are providing an additional kick in the pants. Is all this enough to push the market to correct the problem of insecure software?

To avoid answering that difficult question, let’s step back and pose another, even more fundamental one. Schneier claimed that it is more efficient to make software secure than to deal with the problems of insecure software. This might seem unarguably correct, but in fact there is some room to argue. There is the possibility of making secure systems out of insecure software. This might be considered just playing with terms, since his idea of securing software includes better design, which might be understood to include mechanisms for containing and coping with imperfect implementations. However, at a higher level, where there is no playing with terms, isn’t it true that we’ve created some marvelous technology and achieved some amazing results with today’s quite insecure software? Does the world need more secure software, or can we muddle along just fine with essentially the level of security provided by market conditions as they are now, whether or not the factors cited in the paragraph above cause things to get better?

Schneier’s company, Counterpane, is in the computer security business. (They’re a “managed security provider”.) Is his message that their work is doomed to failure in a world without liability for software? Or that his company’s services wouldn’t be needed then? Which is it?

Clive Robinson November 11, 2004 10:56 AM

With regard to security and liability, making new laws will not really change much; it will just make for more litigants. There have been parallels drawn to the automobile industry (lemon laws), yet cars still crash and people are still maimed and killed needlessly due to manufacturing defects. Why?

The main problem with computer security and automobile safety is actually market forces. There are two obvious areas where this causes problems:

1. Time to market.
2. Product complexity.

As long as the focus of a development team is “time to market”, the design will contain defects, as there will never be sufficient testing carried out to find the defects and eliminate them (see below about defects).

As long as new features are added to a product, its complexity will rise, at around 0.5(N^2), where N is the number of distinct features in the product that interact with each other. Testing tends to be a process that requires a fixed amount of time per interaction. So if you have ten features and it required a thousand man-hours to test them thoroughly, adding one more feature will add around another 200 man-hours of testing.
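
As a rough worked check of those figures (a back-of-the-envelope sketch that assumes only the fixed cost per pairwise interaction described above):

```latex
% Pairwise feature interactions grow roughly quadratically in the feature count N.
\[ I(N) \approx \tfrac{1}{2} N^{2} \]
% Ten features: 50 interactions, so 1000 man-hours / 50 = 20 man-hours per interaction.
\[ I(10) = 50 \qquad 1000 / 50 = 20 \ \text{man-hours per interaction} \]
% Eleven features: 60.5 interactions, so 60.5 x 20 = about 1210 man-hours, roughly 200 more.
\[ I(11) = 60.5 \qquad 60.5 \times 20 \approx 1210 \ \text{man-hours} \]
```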

Then there is the nature of defects themselves: defects fall into two generic classes, “known” and “unknown”. A defect starts in the unknown class and eventually moves into the known class as its side effects become apparent and in-depth analysis finds the root cause. The usual forms of testing can only find defects in the known class; random and black-box testing only show side effects, and further analysis is then required to determine the class and root cause.

A classic example of this was metal fatigue in aircraft: the side effect was aircraft falling out of the sky, and the then very new vibration and temperature cycling tests facilitated the in-depth analytical investigation that found the root problem. The results were then passed on to aircraft manufacturers so that the effects could be allowed for in new designs.

How many software bugs get this level of treatment? (Few.) And are open and closed source bugs treated differently? (Yes.)

One of the most significant bugs out there, responsible for a large number of the security failings of modern software, is the ANSI C89 standard and the way it deals with passing variable numbers of parameters (the ellipsis … syntax and the va_arg macro). It is probably on equal terms with metal fatigue in its effects, yet few software developers are aware of it or its effects.
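
A minimal C sketch of the hazard being described (the logging function names here are purely illustrative, not from any particular codebase): a variadic function such as printf has no way to know how many arguments the caller actually supplied, or of what types, so untrusted text used as a format string makes its va_arg loop pull values that were never passed.

```c
#include <stdio.h>

/* Unsafe: untrusted text is used as the format string, so input
 * containing "%x %x %s" makes printf() fetch (via va_arg) arguments
 * that were never supplied, leaking memory contents or crashing. */
static void log_message(const char *user_supplied)
{
    printf(user_supplied);
}

/* Safe: the untrusted text is passed as data, never as a format. */
static void log_message_safe(const char *user_supplied)
{
    printf("%s\n", user_supplied);
}

int main(void)
{
    log_message("plain text\n");          /* fine only because there are no % specifiers */
    log_message_safe("hello %x %x %s");   /* prints the literal text safely */
    /* log_message("hello %x %x %s");        would read values that were never passed */
    return 0;
}
```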

One of the solutions to the “time to market” problem has been “code re-use”; at first glance it appears to be an obvious solution to the problem. However, it became entangled with object-oriented programming and fourth-generation languages in many people’s eyes, and RAD tools became its commercial face.

The hidden soft underbelly of RAD tools was that developers never actually looked at the code that was generated for them, and many would not have understood it if they had been able to see it. Often RAD-tool-generated code had serious functionality bugs that took time to be fixed by the tool supplier (if there were enough complaints). I occasionally wonder how many security bugs slipped through, then and now.

Object-oriented programming was another way that code re-use was supposed to happen, to the benefit of developers. The downside of OOP is generally that the developer does not have the time to understand what somebody else’s object does, so they abstract it downwards. The result is that most OO projects have too many tiers, and this gives rise to multiple hidden interactions…

In reality, the majority of today’s code is produced in a way where it is not possible to perform more than moderate testing of functionality, and most senior software developers cross their fingers each time a release goes out the door. Most also know that the majority of the code cut is unknown, often written by (junior?) software developers who lack the experience to perform even a half-hearted code audit.

The majority of current software development is not carried out as a science, or even an engineering task; it is practiced in the same way that Elizabethan wheelwrights, coach makers, and other artisans carried out their activities.

An attitude of “if it doesn’t break too often then it must be OK” and “if it breaks, bolt another bit on” is not conducive to the production of reliable, let alone secure, code.

The major advantage that open source development has is that the code is available for peer review and education; the “collective commons” often distils the code to get the best spirit from it. Experienced eyes look over code, helpful suggestions are made and incorporated, and the code ages and improves. Other developers with less experience look at the code and study it; they learn from it and usually improve their own code. This does not happen in a closed source world, as there is no “business case” for it.

I fail to see how laws will improve the current situation; in fact I can see many, many ways they will make it worse (patent law is a good example of how ill-conceived laws do the opposite of what was intended).

The solution, I suspect, lies in education of users, developers, and most importantly businesses; only then will people be able to look at the problem rationally and make the appropriate laws, if required.

Vin McLellan November 12, 2004 1:32 PM

I suspect that the goal of any new liability paradigm for software — in law, regulation, contract, or even some collective “UL” product review group — will only be to require that some evolving array of QA measures be used to qualify code prior to it being offered to the public as reliable.

Products or tools which have not (yet) been subject to this minimal level of quality assurance testing could be clearly labelled as such. (While there are obvious costs involved, Open Source development efforts could distribute and share the effort necessary to qualify their production code, just as they currently share their development efforts.)

The question, then, is not how to make the code perfect, but rather how best to define (for both commercial products and development tools) a minimal level of QA testing necessary to preclude known vulnerabilities, at least, and constrain unbounded risks.

If we can’t set standards for code that will last for a century, like the Hoover Dam, surely we can define a minimal level of care in QA that, if ignored, would be considered irresponsible behavior on the part of software developers who urge others to use their code for mission-critical functions?

Think of it as SOX signoffs for the members of a product’s development team, or their manager or coordinator.

Of course, for 30 years I’ve heard talk of the need for an Underwriters Laboratories for commercial software, and it hasn’t happened — so obviously the problem is difficult: technically, politically, and legally.

Nevertheless, if industry best practices can be made explicit and collectively endorsed, and minimal QA requirements can be defined, surely we could bull past this primitive market where buffer overflows and other common security coding flaws continue to plague us.

I suspect that too many organizations today — because they can; because they have found that ignorance is an effective defense against liability — prefer not to test their code with appropriate QA procedures and tools to identify weaknesses and vulnerabilities.

Can’t we define minimal requirements for an execution profile of what functions get called by a specific program, and then define a set of functional goals for static code checkers, runtime code checkers, and environment-specific application scanning tools?

I don’t pretend to hold the answers here, but I suspect that only legislation will make it clear that software developers are expected to hold to a standard of responsible professional behavior, and to define “responsible behavior” to include sufficient pre-release QA testing of code offered to others for mission critical apps.

In a world where no one is considered responsible for not holding developers to some minimal standard of care in QA, aren’t we all doomed to more of the same?

_Vin

Nobody You'd Know November 24, 2004 12:25 AM

Liability for software sounds good when you look at the way it shifts risk around. There are two fundamental problems though, which have as symptoms many of the things people have said here.

First of all, it is an infringement upon freedom of association. You may want liability on the part of your vendor, but are you willing to pay for it? If so, then do so – but by what right can you tell others that THEY have to pay whether they want it or not? If they’re willing to put that risk on themselves, what business is it of yours or anyone else’s?

Second, like most government intrusions into the free market, it creates barriers to entry. Do you really want a world without free software, and with no small businesses playing either? Do you want to pay exorbitant prices to buy stagnant products from IBM and Microsoft, provided that those two companies don’t look at your proposed legislation and decide that cashing out and investing in something less risky is the better part of valor? Worse, do you want to see the entire US tech business just move to some place with more forgiving laws?

This sounds great as long as you’re only thinking about security. A security mechanism that ends up costing more than what it is designed to prevent is not a good deal, though.

Jack Gregory September 10, 2008 10:11 PM

I believe that stricter regulations should be applied to software production to limit the possibility of errors in the merchandise. I also feel that there should be some limits on costs as they apply to the software, but not as stringent as the laws and regulations that I feel should be set on the merchandise itself. Something must be done to organize the system of software development. I understand that there is a free market and people have the right to produce and set prices, but there must be newer regulations, because after all it is a continuously changing and growing market.
