Regulation, Liability, and Computer Security

For a couple of years I have been arguing that liability is a way to solve the economic problems underlying our computer security problems. At the RSA conference this year, I was on a panel on that very topic.

This essay argues that regulation, not liability, is the correct way to solve the underlying economic problems, using the analogy of high-pressure steam engines in the 1800s.

Definitely worth thinking about some more.

Posted on February 25, 2005 at 8:00 AM • 13 Comments

Comments

Davi Ottenheimer February 25, 2005 12:20 PM

Bruce, panels are great but perhaps you will find more traction in open venues such as the Secure Software forum where the (large) audience can participate and help set the tone for the group the panel is meant to represent.

Thanks for the link. A very thoughtful essay. I think this quote boils the problem down nicely:
“Most designs for engines and safety features were based on the assumption that owners and operators would behave rationally, conscientiously, and capably. But operators and maintainers were poorly trained, and economic incentives existed to override the safety devices in order to get more work done. Owners and operators had little understanding of the workings of the engine and the limits of its operation.”

Perhaps as a fitting example of Microsoft’s mockery of the seriousness of this point, the Microsoft Certified Systems Engineer program has always been widely regarded as a marketing/revenue system, and has absolutely nothing to do with real training or with understanding the workings of distributed systems and their security. Twelve-year-olds were being awarded MCSEs in the late 1990s…

Also, note that the article claims it took over 120 years, thousands of deaths, and millions in damages before regulation started to appear at the turn of the century. Watt’s astute warnings in the late 1700s did not create the market for safer boilers — extensive death and destruction did.

Why are we predisposed to wait for a major disaster before we start regulating? “Predictable Surprises” by Bazerman and Watkins (2004) has a pretty good answer. They claim the following general characteristics of predictable surprises:

  • Leaders know a problem exists — and that the problem will not solve itself
  • People recognize that the problem is worsening over time
  • Fixing the problem will certainly cost money, while the reward is an avoided cost that is uncertain (but likely to be large)
  • The up-front costs will be significant, but the benefits will be delayed
  • And last, but not least, a small but vocal minority benefits from inaction and is motivated to lobby for its private gain

So based on the lesson of steam engines, and the theory of predictable disasters, I figure we could have as many as ten to twenty years (with many more ChoicePoint fiascos to come) before the general public will force regulation from international leadership. That is, unless elected officials and public figures such as yourself start actively pushing for real regulation right now. Thank you, California.

Is Counterpane ready to make bold statements like Watt did to help advance the industry? Without active pressure from experts for safer boilers, steam engines might have suffered a much greater public backlash that would have stopped progress altogether…we already see reports wondering about a serious slowdown in people willing to use the Internet for commerce.

Bruce, it is not clear what you see as your role in all this but I hope you accelerate your work with legislators on regulation and campaign widely for federal adoption. And the next time you run into public figures like Howard Schmidt, please remind him that he is actually “pro-regulation” when he says that we need fair and balanced laws.

Mark Johnson February 25, 2005 1:30 PM

Great essay. Captain James T. Kirk once spoke of the human “ability to leap beyond logic.” This ability is evident when vastly powerful chess software is defeated by a human. The ability to process myriad possibilities and outcomes every second is still not enough to replace a highly skilled human in many situations. Flying is one such area. Would you rather have Chuck Yeager piloting your plane when it develops problems or an IBM?

jshea February 25, 2005 2:40 PM

Mark, I’d say that depends on the problem the plane has developed. If the problem is a bunch of terrorists on board, then I’d rather fly with the IBM.

Davi Ottenheimer February 25, 2005 6:15 PM

@Mark
Interesting point. I just heard Jeff Hawkins talk about this issue. He just published a book that claims to explain why technology fails to achieve real intelligence. Today we can confidently say “computers don’t make mistakes, people do”, but Jeff seemed to think that computers might someday be able to correlate data more accurately and thereby make predictions…

Robert I. Eachus February 25, 2005 8:48 PM

Nancy’s paper brings up some good points, but as a thirty year or so veteran of fighting these issues on both industrial and government projects, I know where the problems are, and they aren’t in software engineering. Davi Ottenheimer’s discussion of predictable surprises is on the right track, but in general the problem can be described in two parts:

1) The software engineer is the systems engineer of last resort. (Thanks to Dave Emery) In other words, the most difficult parts of any system design tend to get pushed into the software, especially when managers and financial people make systems level decisions.

2) A hierarchical system (company, project, etc.) means that someone up the chain of command will be making decisions who doesn’t understand the software–and has probably never read it. The classic case of this was the Ariane 501 disaster.

The final report glosses over just how horrific this case was. The flight guidance software was actually developed as part of an Ariane 4 upgrade, and the developers were not told (or even allowed to know when they asked) what the system parameters for the Ariane 5 were. So they put helpful notes in the code for whoever adapted it for the Ariane 5. Then some bean counter decided that since the flight control system hadn’t changed, the software could be reused unchanged–without review or testing.

Many people know that the guidance computers shut down (as instructed) when, within 40 seconds of launch, the Ariane 5 reached a point that the Ariane 4 could never reach. That caused an overflow which the software treated as a hardware error: “we can’t be there.” Of course, both computers reached the same conclusion a few hundredths of a second apart.
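The failure mode is easier to see in code. Here is a minimal, hypothetical sketch in C (the real flight software was written in Ada, and all names and values below are invented for illustration): a horizontal-bias value sized for Ariane 4 trajectories is narrowed to 16 bits, and the out-of-range case is handled as an unrecoverable hardware fault that shuts the guidance channel down.

/* Hypothetical illustration only -- the actual code was Ada, not C,
 * and these names and values are invented. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Narrow a 64-bit bias value to 16 bits, as the alignment code did. */
static int16_t convert_bias(double horizontal_bias)
{
    if (horizontal_bias > INT16_MAX || horizontal_bias < INT16_MIN) {
        /* Out of range: treated as a hardware fault, so dump diagnostics
         * and shut this guidance channel down. */
        fprintf(stderr, "operand error: bias %.0f out of range\n",
                horizontal_bias);
        exit(EXIT_FAILURE);
    }
    return (int16_t)horizontal_bias;
}

int main(void)
{
    printf("%d\n", convert_bias(12000.0));   /* plausible on an Ariane 4 trajectory */
    printf("%d\n", convert_bias(900000.0));  /* Ariane 5 trajectory: overflow, shutdown */
    return 0;
}

On the Ariane 4 trajectory the input could never exceed the 16-bit range, so the shutdown path looked safe; on the Ariane 5 it fired on both redundant computers within a fraction of a second.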

But that is not the worst of it. The debugging information was sent to the engine controllers, which read it as guidance information and selected for maximum deflection of the engines. Of course, the maximum deflection selected was based on the moments of the Ariane 4. The Ariane 5 stack came apart in mid-air.

Following the careful analysis in the report, you might think that this was a sequence of unanticipated events. Well, it was. However, if the course selected for the first Ariane 5 had been slightly different, or if there had been a bit more wind, the engines could have deflected beyond the Ariane 5 structural limits anyway.

A similar set of disasters involved the Airbus 320. I don’t want to go into all the details, but the (multi-version) software implemented the requirements, and the requirements forgot to include staying above the ground. (The deadly case was when the glide path for the runway was underground at the last waypoint. The waypoint would be crossed at the specified altitude, and the autopilot would then try to put the plane on the glide path as quickly as possible.)

As I see it the only way to avoid these types of disasters is to have software (or safety) engineers completely outside the project/company structure. And the only way that is going to happen is if the companies can’t get liability insurance otherwise.

Alexander Hammer February 25, 2005 9:35 PM

The subway in Vienna could operate entirely automatically, without a driver – but passengers refuse to board the train if they see that nobody is sitting in the cab.

In a newsgroup, a subway driver pointed out that there are some rare situations in which a human driver can react more appropriately than a computer. Consider a fire in a tunnel: if the train stops automatically, there is a risk that it stops in the middle of the fire. A human driver can choose to speed up and reach a safe place instead.

Stuart Berman February 26, 2005 2:57 PM

I noticed very different points in the steam engine article:

  • Public pressure resulted in more safety devices, then government regulation and finally self regulation through associations and insurance initiatives. (Public response is key, limited regulation is helpful, standards are valuable and fair liability is effective.)
  • Operators were blamed instead of the engineers. “It is unfortunately very common to blame the operators for accidents when they have been put into a situation where human error is inevitable.” (When do we stop blaming end users?)
  • “Just as overly strict regulations unnecessarily inhibited electrical technology development in Britain in the last century, so poorly-written standards can inhibit the development of computer technology. Worse, standards can inadvertently shift responsibility away from the manufacturers and developers to government agencies that have much less effective and direct control over the safety of the final product. And poorly written standards may have no effect or even increase risk.” (The downside of too much regulation or misplaced legislation.)
  • This is premised on the risk of real damage to the public. (Versus all of the ridiculous legislation that is designed to simply benefit a group.)

I still come back to asking myself why Microsoft needs to be legally liable for their crap. The damage to individuals is not severe, and the market (the public) will move to a different platform if they are bothered too much. The crux of most of the problems is not the technology but the model: in the US we don’t have an adequate identity system that can be authenticated. Pete Lindstrom on his site promotes publishing Social Security numbers – why should knowing my SSN create any risk at all? That is the question we should be asking. Then ChoicePoint, Bank of America, and T-Mobile won’t be issues.

David Mohring February 26, 2005 5:08 PM

I made the same argument in favour of regulation and a minimum set of expectations back in July of 2002, using the Plimsoll Line/Mark, the Ford Pinto, and the Ford Explorer’s tires as examples.

http://groups.google.com/groups?selm=slrnaghlie.1h4.heretic@heretic.ihug.co.nz

“… Bruce Schneier claimed that for change to occur, the software industry must become liable for damages from ‘unsecure’ software; however, historically this has not always been the case, since most businesses can insure against damages and pass the cost along to the consumer.”

Chung Leong February 27, 2005 3:11 PM

The analogy used in the essay is weak. Steam engines weren’t exploding because malicious people were deliberately making them explode. Even the most zealous of consumer advocates would not argue that manufacturers should be liable for acts of sabotage.

Thomas Sprinkmeier February 27, 2005 8:59 PM

Boilers were exploding, in part, because of the corrosive environment they were put in.

It could be argued that the internet is a similarly corrosive environment.

The corrosive agent is malware rather than oxidation, but the end-result is the same.

Products should be fit for purpose: able to survive in the environment they will be deployed in. A tank needs to withstand enemy fire, a boiler needs to withstand corrosion, and an internet-connected computer needs to fend off malware.

If you don’t want to armour plate your tank that’s fine, but then don’t sell it as a tank. If your software can’t survive more than a few minutes on the ‘net, that’s OK too, but stop pretending it’s “internet ready”.

http://isc.sans.org/survivalhistory.php

Davi Ottenheimer February 27, 2005 9:34 PM

@Chung

Imagine that a large consumer group finds that they are at great risk due to false promises and flawed design (resulting from sabotage or otherwise). They will demand manufacturer liability, even if they signed a waiver. If the risk is high enough, consumers will suspect fraudulent activity and demand redress. Do you want to run out and buy a car described as “unsafe at any speed”?

Just a minor nit: it was the boilers that were exploding, not the steam engines. Steam engines benefited from the demand for innovation, while the lowly boiler was ignored. Liability was put on boiler manufacturers after their foolish and inadequate designs had killed thousands of people and caused millions in property damage. But whether the failure was caused by sabotage or shoddy engineering is irrelevant to the people getting blown off the boat. The danger is simply a function of an overall risk equation (Risk = Asset x Vulnerability x Threat).

A consumer looks for ways to minimize risk, but all alone they rarely have the level of technical expertise or influence necessary. It also might be noted that the maturity of the law and the likelihood of litigation were probably lower at the turn of the century. In any case, if a million people have their consumer records stolen through sabotage of a database, and only one thousand of those million actually experience ID theft, the direct damage to consumers still totals three million dollars. That is not to mention damage to merchants and providers due to fraudulent charges and other criminal activity. Yet the average individual consumer loss is still only USD $3,000.
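As a rough sketch of that arithmetic (illustrative numbers taken from the hypothetical scenario above, not real breach data):

/* Back-of-the-envelope sketch of the breach arithmetic above.
 * All figures are illustrative, from the hypothetical scenario. */
#include <stdio.h>

int main(void)
{
    double records_exposed  = 1000000.0;  /* consumer records stolen        */
    double actual_victims   = 1000.0;     /* consumers who suffer ID theft  */
    double total_damage_usd = 3000000.0;  /* direct damage to those victims */

    printf("average loss per actual victim:   $%.0f\n",
           total_damage_usd / actual_victims);   /* $3000 */
    printf("loss averaged per exposed record: $%.2f\n",
           total_damage_usd / records_exposed);  /* $3.00 */
    return 0;
}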

Sadly, it seems that damages probably have to be in the trillion-dollar range before they register as a real enough disaster, or “Predictable Surprise,” that the market is willing to correct itself. What is three million dollars of property damage in 1900 worth in today’s terms?

Back to the angle of your statement, if a consumer is going into an area where sabotage (threat) is high, then they most definitely will look for assurances of safety or a trustworthy merchant.

In other words, would you buy a padlock to protect a valuable asset and say to yourself, “who cares if this lock is vulnerable to being opened without my permission”? Or do you say, “who cares if every other person here is a professional thief (threat)”? If you care about your assets, I seriously doubt you will keep all the liability to yourself. You will surely seek out products, calculate your risks, and hope that manufacturers accept a reasonable amount of limited liability (e.g. a warranty or guarantee).

Funny, I doubt you would call yourself a “zealous consumer advocate” just because you want to buy products you can trust.

Davi Ottenheimer February 27, 2005 10:13 PM

@Thomas
Nice point. The survivability curve you linked is another way of saying “what risk does a consumer run by connecting a computer to the network”?

That demonstrates a blend of threat/vulnerability, so I just wanted to add some info on exposed assets and value (http://www.privacyrights.org/ar/idtheftsurveys.htm):
— 68.2% of information was obtained off-line versus 11.6% obtained online
— $52.6 Billion total U.S. annual identity fraud cost

So, 11.6 percent of 52.6 billion is just over 6 billion in total annual identity fraud cost related to computer security.

I do not know if this cost includes the average time spent by victims to clear their record. One survey said people have to spend about 600 hours, which is an increase of more than 300 percent from previous surveys.

Perhaps most interesting is the Privacy Rights recommendation on how to reduce identity theft:
“Cancel your paper bills and statements wherever possible and instead check your statements and pay bills online. Monitor your account balances and activity electronically (at least once per week).”

And here we are worrying about the state of computer security.

I take their point to mean that consumers should reasonably expect that their statements and bills online are more secure than the equivalent sitting in a standard mailbox or trash bin on the street. It is that assumption about “reasonable” security, so hard to define universally among online corporations that house identity information, which I believe needs to be federally regulated.

Clive Robinson March 1, 2005 7:23 AM

The argument between liability and regulation is a chicken-and-egg situation. You cannot have liability without legislation (which is usually based on regulation) to say what you are liable for.

So it’s a moot point, one that industry discussion groups can endlessly debate (sorry Bruce) and politicians can exploit for inaction (look at global warming for an example of where that gets you).

Two of the basic areas that are usually ignored in these group discussions, but are mentioned in the article, are education and understanding through the application of scientifically evaluated methods.

If you have been involved with training graduate-level students in software design over the past 20 to 30 years, you may have noticed how few courses these days teach the fundamentals of computing (set theory, logic, fundamental data types and their storage under differing architectures, etc.) and other fundamental knowledge that is independent of whichever programming methodology is the current “flavour of the month” with employers.

Without appropriate teaching at a fundamental level you cannot get the understanding required to apply the knowledge that rigorous testing or science produces. Or, to put it another way: if all you have been taught is how to drop on-screen widgets into a framework, how can you debug a problem with the widgets? And you effectively become redundant when your framework gets replaced by another, bigger and better one that uses new concepts.

Victorian engineers originally worked on the “if it breaks bolt another bit on” philosophy, until the only course left open to them by the laws of physics was through science and a fundamental understanding of the problems and issues facing them.

The joy of software is that it does not really have fundamental physical laws affecting it. No matter how much corrective software you bolt on people usually cannot see it. And due to the rapid improvements in technology the few laws that do apply usually get hidden from the user.

I have said before on a number of occasions that the expression “software engineer” is silly, as the majority of software development is still at the “bolt another bit on” stage. Therefore the majority of practitioners should really be called “software artisans” or “apprentice code cutters”.

Back in the early 90’s I used the example of the development of the cartwheel to show why this is true of software. To my horror people have incorporated that type of working practice as a methodology (software patterns).

Where I have been involved in the employment of lead programming staff, I have generally tried to employ those who have a verifiable engineering or scientific background (electronics or physics), or who can show several years of “safety critical” development in embedded/real-time systems, and I tend to reject those with object-oriented experience (Unix system development was, and still is, a major plus ;)

There are quotes going around today about programmers producing 500,000 lines of code a year using XYZ’s latest tools. This really impresses management when the developers want to buy a new toy to play with.

However, that works out to about 250 lines an hour over an average work year. Now, I don’t know about you folks out there, but the only way I could do that is by copying existing code without really understanding it.

At the lowest level of “high level languages”, this copying is done by using library functions. As many of you know, it is problems in the C library (via the standard), and programmers not understanding their limitations, that gave rise to most of the security exploitation issues we have seen.
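A classic illustration of that point (a hypothetical fragment, not taken from any real product): the unbounded copy that the standard C library makes so easy, next to a bounded idiom.

/* Unbounded versus bounded copying. Hypothetical fragment, shown only
 * to illustrate the library-limitation point above. */
#include <stdio.h>
#include <string.h>

static void risky(const char *input)
{
    char buf[16];
    strcpy(buf, input);   /* no length check: any input longer than 15 bytes
                             plus the terminator overruns buf */
    printf("%s\n", buf);
}

static void safer(const char *input)
{
    char buf[16];
    snprintf(buf, sizeof buf, "%s", input);  /* truncates instead of overflowing */
    printf("%s\n", buf);
}

int main(void)
{
    risky("short enough");   /* fits in the buffer, so it happens to work */
    safer("this string is far longer than sixteen bytes");
    /* Calling risky() with the longer string is undefined behaviour --
       exactly the class of bug behind most buffer-overflow exploits. */
    return 0;
}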

Even when told, many programmers do not have the fundamental understanding to appreciate the information and then apply it in other areas (i.e. they do not realise that the language they are programming in was either written in C or developed using tools written in it).

My point is this: even if legislation came in, it might be too late to do anything about it. The majority of programmers in lead positions are too young to have a good grip on the problems. Also, they don’t have the time to get it, and management sees no advantage in paying for it (companies with a fast time to market tend to prosper, whilst those with secure, reliable products tend not to).

This is why in Europe the focus in the past has been on regulation and standardisation, not on free market forces. To name just a few benefits from this that the US has seen: the Plimsoll Line and Lloyd’s Register, scalable mobile phone technology (GSM etc.), and workable quality and safety standards (ISO 9000, BS 7799, etc.).
