Information Security and Externalities

Information insecurity is costing us billions. There are many different ways in which we pay for information insecurity. We pay for it in theft, such as information theft, financial theft and theft of service. We pay for it in productivity loss, both when networks stop functioning and in the dozens of minor security inconveniences we all have to endure on a daily basis. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for the lack of security, year after year.

Fundamentally, the issue is insecure software. It is a result of bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the myriad effects of insecure software. Unfortunately, the money spent does not improve the security of that software. We are paying to mitigate the risk rather than fix the problem.

The only way to fix the problem is for vendors to improve their software. They need to design security into their products from the start, not as an add-on feature. Software vendors also need to institute good security practices and improve the overall quality of their products. But they will not do this until it is in their financial best interests to do so. And so far, it is not.

The reason is easy to explain. In a capitalist society, businesses are profit-making ventures, so they make decisions based on both short- and long-term profitability. This holds true for decisions about product features and sale prices, but it also holds true for software. Vendors try to balance the costs of more secure software—extra developers, fewer features, longer time to market—against the costs of insecure software: expense to patch, occasional bad press, potential loss of sales.

So far, so good. But what the vendors do not look at is the total costs of insecure software; they only look at what insecure software costs them. And because of that, they miss a lot of the costs: all the money we, the software product buyers, are spending on security. In economics, this is known as an externality: the cost of a decision that is borne by people other than those taking the decision.

Normally, you would expect users to respond by favouring secure products over insecure products—after all, users are also making their buying decisions based on the same capitalist model. Unfortunately, that is not generally possible. In some cases software monopolies limit the available product choice; in other cases, the ‘lock-in effect’ created by proprietary file formats or existing infrastructure or compatibility requirements makes it harder to switch; and in still other cases, none of the competing companies have made security a differentiating characteristic. In all cases, it is hard for an average buyer to distinguish a truly secure product from an insecure product with a ‘trust us’ marketing campaign.

Because of all these factors, there are no real consequences to the vendors for having insecure or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality. The result is what we have all witnessed: insecure software. Companies find that it is cheaper to weather the occasional press storm, spend money on PR campaigns touting good security and fix public problems after the fact, than to design security in from the beginning.

And so the externality remains…

If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security—especially the security of their customers—it also needs to be in their financial best interests.

Liability law is one way to make it in those organisations’ best interests. If end users could sue software manufacturers for product defects, then the cost of those defects to the software manufacturers would rise. Manufacturers would then pay the true economic cost for poor software, and not just a piece of it. So when they balance the cost of making their software secure versus the cost of leaving their software insecure, there would be more costs on the latter side. This would provide an incentive for them to make their software more secure.

Basically, we have to tweak the risk equation in such a way that the Chief Executive Officer (CEO) of a company cares about actually fixing the problem—and putting pressure on the balance sheet is the best way to do that. Security is risk management; liability fiddles with the risk equation.
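To make the arithmetic concrete, here is a toy sketch of that risk equation (every number is invented purely for illustration, not taken from any real vendor). Without liability the vendor weighs only its own cleanup costs against the cost of secure development; with liability, the users' losses land back on its side of the comparison:

```c
/* Toy sketch of the vendor's risk equation.  Every number here is a
 * made-up assumption for illustration only. */
#include <stdio.h>

int main(void) {
    double p_breach        = 0.30;  /* assumed chance a shipped flaw is exploited  */
    double vendor_cleanup  = 2e6;   /* vendor's own costs: patching, PR, support   */
    double user_losses     = 50e6;  /* losses borne by customers (the externality) */
    double secure_dev_cost = 10e6;  /* assumed extra cost of building it securely  */

    double without_liability = p_breach * vendor_cleanup;                 /* $0.6M  */
    double with_liability    = p_breach * (vendor_cleanup + user_losses); /* $15.6M */

    printf("Ship insecure, no liability:   expected cost $%.1fM\n", without_liability / 1e6);
    printf("Ship insecure, with liability: expected cost $%.1fM\n", with_liability / 1e6);
    printf("Build it securely up front:    cost          $%.1fM\n", secure_dev_cost / 1e6);

    /* $0.6M < $10M, so shipping insecure is the 'rational' choice today;
       $15.6M > $10M, so liability flips the decision. */
    return 0;
}
```

Under these made-up numbers, secure development only wins once the externalised losses are pulled back onto the vendor's balance sheet.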

Clearly, liability is not all or nothing. There are many parties involved in a typical software attack. The list includes:

  • the company that sold the software with the vulnerability in the first place
  • the person who wrote the attack tool
  • the attacker himself, who used the tool to break into a network
  • and finally, the owner of the network, who was entrusted with defending that network.

100% of the liability should not fall on the shoulders of the software vendor, just as 100% should not fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.

Certainly, making software more secure will cost money, and manufacturers will have to pass those costs on to users in the form of higher prices. But users are already paying extra costs for insecure software: costs of third-party security products, costs of consultants and security services companies, direct and indirect costs of losses. But as long as one is going to pay anyway, it would be better to pay to fix the problem. Forcing the software vendor to pay to fix the problem and then passing those costs on to users means that the actual problem might get fixed.

Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature, without any regard to security. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they are entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.

Information security is not a technological problem. It is an economics problem. And the way to improve information security is to fix the economics problem. If this is done, companies will come up with the right technological solutions that vendors will happily implement. Fail to solve the economics problem, and vendors will not bother implementing or researching any security technologies, regardless of how effective they are.

This essay previously appeared in the European Network and Information Security Agency quarterly newsletter, and is an update to a 2004 essay I wrote for Computerworld.

Posted on January 18, 2007 at 7:04 AM

Comments

Reader X January 18, 2007 7:19 AM

All well and good, but doesn’t this analysis assume that the costs of fixing the problem don’t outweigh the costs of mitigation?

This is not as flip as it sounds. Remember that interoperability, ease of use, etc. all create economic benefit for the consumer. Couldn’t the speed of the software development cycle outweigh the drag of bug-fixing? I’m not saying you’re wrong, Bruce, just that the devil is in the details and I believe the economic tangle is more complex than this essay might lead the reader to believe.

And, the fixing of bugs is not the end-all of enterprise security. One still has to monitor effective preventive controls, and the span of those controls is far from complete even if all the software is bug-free. So what are the savings, really?

Andy January 18, 2007 7:23 AM

Some truth in that. However, when we’ve been bidding for work we’ve had situations where bidding for the cost of doing things correctly (securely, or even just ‘properly’) would have had us laughed out of the building.

It’s one thing for software firms to increase their costs to reach (what I believe to be) an acceptable quality; it’s quite another thing for companies to accept that.

alabamatoy January 18, 2007 8:02 AM

As long as we continue to accept EULAs that exonerate the producer/seller/developer from all blame, the security posture of network products is not going to change.

Tanuki January 18, 2007 8:06 AM

Most of the security-failures I come across are not actually software-failings; they’re human-failings. People setting global read/write perms on filesystems because it’s easier; people leaving default passwords when they install gear; people cutting corners and taking the easy route by doing ‘just enough’ to fix a problem rather than taking time and getting to the bottom of it.

Carroll Smith [who managed Ford’s GT40 race program in the 1960s] had a saying “machines don’t fail but humans do”. This is still true. Don’t blame the software, blame what people do with it!

David Thornley January 18, 2007 8:16 AM

There are problems in mandating liability.

Most importantly, there is no such thing as completely secure software, so all software producers would be at risk. They could hire good security people, and do things right, and their software would still have some bugs and security holes.

If free/open-source software developers were held liable for problems with their software, that would kill the movement dead. Most contributors do this as individuals, and individuals usually cannot afford the risk. They contribute time and effort to free software, and most will not accept the liability in addition.

Currently, free/open-source software generally comes with a complete denial of liability. Every software company I’ve checked also denies liability, but they’re often less clear about it. If free/open-source software can get away with no liability, how do we impose it on the industry? Remember that free/open-source software is often sold, so a line between commercial and non-commercial software doesn’t work.

Assuming we restrict liability to proprietary software, we’re still putting small companies at great risk. They often don’t have the financial reserves to withstand a large legal judgment.

This means that we’d be giving the larger software companies less competition. I don’t think this is what we want.

Rich January 18, 2007 8:21 AM

Microsoft has stated that building secure code is a priority (whether you believe they really care or are capable are separate issues). The business case is interesting with respect to Bruce’s comments. If you are a shareholder, do you think it is good business for the company to invest in building secure software? Is there a payback in the long run?

Martin January 18, 2007 8:22 AM

Well, it’s always very easy to propose how the laws should be changed to accommodate one person’s point of view.
I would personally like to make the recording, broadcasting and radio industries legally liable to pay me damages whenever I’m exposed to a piece of music I don’t like. After all, it’s an externality to me that I have to change the channel or even buy an iPod to be able to listen to what I want.

Secure software is available, just look at the medical, aerospace and military IT industries. Billions are paid for 1980’s technology, because that is what’s been proven to work.

End users want the cutting edge. Yes, it would be nice if everybody just understood how to make secure software, but it’s coming. Even Microsoft is actively targeting security in their newer products – if that’s not a sign of market demand, tell me what is.

The “regulate” knee-jerk outcry whenever anything is wrong is a symptom of how democracy is failing us. Politicians try to do everything, thus taking responsibility for everything and thus creating an expectation that they should solve everything.

Oh, if just once in a while a politician would stand up and say “That is not for the government to solve. Have a good day.” instead of “That is a very interesting problem. I will schedule a hearing in a committee and draft a bill (oh, and do you have your checkbook on you?)”

denis bider January 18, 2007 8:27 AM

I support your general reasoning, but please don’t call for governments to pass laws in this respect. We all know what kind of laws they’re going to come up with. They’re going to be onerous, they are going to be stupid, and they’re going to impose a bureaucracy that will raise entry costs so that it will become nearly impossible to start up a small software business without a serious amount of venture capital to begin with.

It’s good to call for the culture to change, but the requirement for change should come from the users. No one likes insecure software. No one likes it when their computer is hijacked. Microsoft has made some serious progress in the security area lately – IIS6 is much better than IIS5; Windows XP was better than 2000, and Windows Vista is way better than XP.

This is evidence that market pressures are working. People are already deciding what software to use on the basis of what is more secure. The more reliable, more secure, more trustworthy options are prevailing. In 10 years, the landscape in software security will be much better than it is today, and the standards will have become higher; progress is already taking place. What we DON’T need is legislation to rush this change and destroy the software market by transforming it into a highly regulated industry for everyone.

Clive Robinson. January 18, 2007 8:36 AM

@Bruce,

“Basically, we have to tweak the risk equation in such a way that the Chief Executive Officer (CEO) of a company cares about actually fixing the problem — and putting pressure on the balance sheet is the best way to do that. Security is risk management; liability fiddles with the risk equation.”

And this is incompatible with an Open development environment…

And I am not talking Open Source -v- Closed Source here.

If you look at Microsoft OSs, they originally offered very, very little in return for a large hunk of money; however, they were pretty much the only kid on the block offering an OS.

Third-party developers produced innovative products to make up for the deficiencies or add value to these MS OSs.

MS would let market forces decide what was a good feature or a bad one. MS would then either develop an in-house solution or just buy up a smaller (second-line) company with a good product (think Memory Management / Financial software).

If you brought in LEMON LAWS for software, then only large players with deep pockets would be able to play.

One reason for this is liability cover: a small company would not as easily be able to show “due diligence” and the hundred and one other things required for “product liability” insurance.

There are many, many more, but the end result is the same: the very small entities that have been the traditional startups behind new, innovative software ideas would not be able to get started. So only “safe” ideas would be implemented, and the software industry would almost overnight look like the telephone and post industries of the 1980s and 90s: no real innovation, and patent-rich large organisations keeping the market closed…

Basically, “innovation = high risk”, “no change = fixed risk”.

If you want an open and innovative software industry, then don’t saddle it with stifling risk; look for another method which would still allow innovation by very small organisations.

Perhaps you could try the UL-type solution, where an independent test house certifies software to give you some (slightly) “qualified purchase choice”. That being said, however, useful software is beyond any kind of real or systematic test.

Steve January 18, 2007 8:42 AM

What you say about security is true of software defects in general, isn’t it?

If I want “defect-free” software for a safety critical system, then I can pay millions of dollars to have the software carefully specified, written by experts using proof-carrying tools, and tested by experts in safety-critical testing. There will be only a very small number of defects in the resulting system, provided it’s fairly simple.

But software written to safety-critical standards is pretty rare, because almost all customers tolerate defects, even without liability on the part of the software vendor.

Likewise, customers tolerate security flaws in products. This is not just because they can’t tell secure products from insecure ones (although there’s an element of that); it’s also because most customers simply don’t mind taking the risk.

Liability will not solve that part of the problem, it will just mean that when customers take the risk of using a particular piece of software with possible security flaws, they can do so knowing that they’re covered for the damages.

Anonymous January 18, 2007 8:45 AM

@Martin

“Secure software is available, just look at the medical, aerospace and military IT industries. Billions are paid for 1980’s technology, because that is what’s been proven to work.”

Oh, if only that was even remotely true…

Think of the number of aircraft in the not too distant past where the “fly by wire” systems have caused “flight critical” issues…

The number of medical incidents involving incorrect ionizing radiation doses which were due to software failure.

As for the military, just how many Patriot missiles worked?

The secure radio sets that are not, etc, etc, etc…

And these are just the ones that come to light…

Benny January 18, 2007 8:50 AM

Weren’t most of these anti-legislation arguments also put forth by the auto industry when the government made them liable for making bad cars? Was the government wrong to do that? If not, what do you believe differentiates the auto industry from the software industry w.r.t. liability?

Gavin January 18, 2007 9:21 AM

Alan Cox recently spoke to the Science and Technology Committee of the House of Lords, and made the exact opposite case about liability:

Cox said that closed-source companies could not be held liable for their code because of the effect this would have on third-party vendor relationships: “[Code] should not be the [legal] responsibility of software vendors, because this would lead to a combinatorial explosion with third-party vendors. When you add third-party applications, the software interaction becomes complex. Rational behaviour for software vendors would be to forbid the installation of any third-party software.” This would not be feasible, as forbidding the installation of third-party software would contravene anti-competition legislation, he noted.

http://news.zdnet.co.uk/security/0,1000000189,39285532,00.htm

Davi Ottenheimer January 18, 2007 9:35 AM

@ Clive

“If you look at Microsoft OSs, they originally offered very, very little in return for a large hunk of money; however, they were pretty much the only kid on the block offering an OS.”

Maybe I’m missing something, but when were they the only OS on the block?

Peter Pearson January 18, 2007 9:56 AM

Bruce –

I think you’re overusing the word “externality”. If I buy insecure software because security is less important to me than being able to put animated dancing pigs in my interoffice memos, and I’m oblivious to the other costs that I’ll incur as a result of that insecure software, those other costs are not an externality. External costs are those imposed on others because of my choice of software; but those external costs result from many choices I make — browser settings, browsing habits, use of a firewall — and not just my fondness for dancing-pig software. The real motivational deficiency in this picture is the fact that I am not held responsible for damage done by my computer when criminals take control of it.

liability January 18, 2007 9:59 AM

I generally agree with Bruce on this. In most industries, federal regulation is needed where it is not necessarily in the best interest of corporations to willfully take action to protect consumers, think consumer safety, environmental protection, etc.

However, without creating new laws, I think existing legislation can be extended into the software industry. Why can’t consumer protection laws apply here? I also like the suggestion of UL-type certifications and oversight.

Software (and the computer it runs on) could easily be treated as a true appliance. This would likely create a different class of hardware/software device that would be “sealed” (both the hardware and software would be untouchable by the consumer) and completely managed by whoever sold it, the same way any other consumer electronics device is manufactured, sold, and maintained. In this case, the hardware/software manufacturer would be the only one that could install software applications, update the software, change system settings, etc. on the device. The software manufacturer would also be completely liable if that software did anything “bad”. Breaking the “seal” on the software would void the warranty and remove any liability from the software manufacturer.

There could still be a market for the DIY crowd, but liability for that class of software would fall solely on the DIYer, not the software manufacturer.

Don Marti January 18, 2007 10:18 AM

Liability would be a disaster for research and for collaboratively developed software. And any liability law that a lobbyist-infested Congress came up with would either make a too-narrow research/collaboration exemption, or one broad enough to make the whole law meaningless.

Why not discourage lock-in directly, by limiting the extent to which EULAs may legally restrict interoperability and reverse engineering, and with something like the proposed law at righttorepair.org for software customers?

Clive Robinson January 18, 2007 10:36 AM

@ Davi Ottenheimer

MS were one of only a couple of OS suppliers for the Intel 8086/80286, back around the time Dave Cutler joined to write an OS “better than Unix”.

I know it’s a long time ago, but MS’s business model has not really changed in this particular aspect (think firewalls and virus detection more recently). Basically, they let other people do the bleeding edge and open the market, and then they take over.

Marc January 18, 2007 10:36 AM

Sounds like Software Environmentalism.

In Washington State, we have a bunch of dams on the Columbia River that were built long before anyone paid attention to the long-term environmental effects. Now, we are paying a bunch of money to mitigate the damage they cause to salmon runs, bird habitats, etc. Some people want to just blow the dams up.

On the other hand, they provide us cheap electricity.

Last year, the Seattle area broke ground on a new sewage treatment plant. It took over 10 years to approve the plans, and required hundreds of millions of extra dollars to be spent – either (for no good reason) or (to protect us from environmental disaster down the road), depending on how you look at it.

Because of the thousands of environmental laws involved, a dam as powerful as the Grand Coulee will probably never be built again in this country. If “Software Environmentalism” creates a bunch of new laws, will anyone be able to afford the kind of powerful software we’re used to?

Andrew2 January 18, 2007 10:39 AM

Yes, but…

Mandating that all software vendors are liable for the security issues in their software is not the right answer either. There is room in the world for cheap, less-secure software. Remember, security is a continuum of trade-offs, not a yes-or-no proposition.

The only thing I can think of is to impose liability for security failures only on vendors who claim their software is secure.

False Data January 18, 2007 10:45 AM

@Bruce: Why liability, in particular?

There are several mechanisms that could internalize the cost of security flaws. I’ve listed some suggestions below. Each has its strengths and weaknesses, as does a liability rule. So, why do you feel liability is the proper approach?

Here are some alternatives that have been used in other industries, translated to the software world:

  1. regulate the process of software development
  2. address the consumer’s information gap by providing independent evaluation of each software product’s security
  3. provide permits to manufacturers, revoking those for manufacturers who consistently produce insecure products
  4. develop a “software architecture code” analogous to building codes; essentially, formalize certain best practices in software architecture
  5. require software to meet security standards, as verified by testing laboratories, before it may be sold commercially

A liability rule provides flexibility by essentially shifting the regulation to the insurance companies. On the other hand, it imposes large transaction costs and, potentially, multi-year delays in recovery. Given the low typical price of a piece of software, and the fairly low losses per person, it would probably be necessary to allow class actions to make it work economically.

I agree that we need improvements in software security, and I’m not saying liability is necessarily the wrong way to do it, but it’s going to face stiff resistance and, without more analysis, I’m not convinced it’d do the job better than the alternatives.

mikeb January 18, 2007 10:46 AM

As others have stated there will always be risk associated with software no matter how much money & time are spent trying to secure it.

Business value is what drives software purchases, not security. If X piece of software will help me cut 25% off of my operating expenses but comes with some unknown, unquantifiable security risk, then heck yeah I’m going to go for the cut in OpEx and manage the security issues as they arise.

Managing security is an expensive proposition, and the cost of that management typically scales with the size of the company. But the benefits of using the software that works for me outweigh the costs of managing security.

Simple business decision making 101. If the benefits outweigh the risks the benefits win.

Perfect software doesn’t exist.

Yozons January 18, 2007 10:59 AM

@Andrew2 said it very well.

For one thing, not every bug is a security threat. Equating a bug in software to a security threat is overblown.

Even when the bug is serious enough to allow a hacker to do something, the “something” rarely involves the broken piece of software. Most such bugs cause programs to abort, and a good OS ensures this. Yes, sometimes these bugs pose security issues, mostly related to privacy, by divulging information that should not have been leaked.

However, we all know that purposeful human actions are the biggest culprit.

Just like cars can be used for crime, bad software can be exploited by criminals. There is much good to be had from both cars and software, so assuming Bruce’s analysis and remedy were correct, we’d need to hold car manufacturers responsible in part for crimes committed using them. Why can a car be carjacked, when they could engineer it so that it won’t drive unless the owner’s hands are on the steering wheel (or any of a myriad of other “solutions” to the “problem”)?

Obviously, truly insecure software that creates real security threats needs to be fixed. The industry already does more rigorous testing of control systems software and the like.

Holding a software developer responsible for misuse by criminals is insanity, unless that flaw is really designed to allow the misuse or clearly invites it.

Should home builders be responsible for flaws in the design that allow criminals to break through windows? How about allowing the owner to open the door to a stranger who then forces his way in? How about Santa coming down the chimney?

Criminal negligence is one thing, but assuming that a bug is criminal because someone exploits it is plain silly.

If you were to go this route, the economy would stop as it would need to be applied to all industries in that crime takes place with lots of things: cars, homes, hammers, screwdrivers, guns, rental trucks, airplanes, politicians, schools, telephones, electricity…..

As a software development company, we try to resolve bugs. Bugs make us look bad and often cause us trouble in that we have to respond to them. Sometimes the fix requires more software to resolve data issues. Many bugs are found that are not known to have caused any problem in the field.

We don’t leave bugs in because we’re lazy, don’t care, etc. But customers do want their products and features sooner rather than later, and for less money rather than more. Do you know any consumer or business that wants it any other way? Would you really rather pay $100k for a bathroom remodel rather than $25k, and wait 2 years rather than 2 months, so that it could be inspected and fully tested before you could get it?

Besides, there is no well established way to remove all such bugs, and testing by itself never can find them all. We don’t know of any bugs that have caused a security problem, though it’s possible they could have been exploited to reach other software (such as the OS or the web server). Most bugs we see cause DoS in that the software fails to operate, on occasion crashes.

Have you ever read a book with zero typos? Found a product with zero flaws?

Just like with most products, I’m okay with being accountable for flaws that create havoc on their own, but not flaws that are exploited by criminals who purposely attack weak points for their criminal activity. The criminals need to be prosecuted.

You guys should read some Milton Friedman. Learn how the markets and freedom will result in a better world than one of regulation, legislation and making criminals out of people who are not harming others.

Maltheos January 18, 2007 11:01 AM

I think that, in general, mandating liability for undiscovered bugs is a bad thing (too much risk for the small players). On the other hand, mandating that liability for bugs that have been disclosed for a given time without a working patch falls to the vendor seems more reasonable. A reasonable standard of disclosure can be established, and a reasonable interval established.

Madman January 18, 2007 11:05 AM

There’s always someone who thinks the solution is to involve the government in more regulation. Totalitarianism doesn’t work, and the more control you give the government, the worse a country becomes. It becomes less secure, less productive, less innovative, more afraid and more corrupt.

We don’t need more laws.

We don’t need more regulation.

If you want to certify, that’s fine, since the market can determine if they will pay more for a certified product/process, but history has shown that they do not.

Most customers determine if the benefits of the software outweigh the costs. Risk is everywhere, and bad companies generally get found out and go away. The market does it better than legislation ever will.

Paul January 18, 2007 11:11 AM

mikeb wrote: “Business value is what drives software purchases…”

Now there’s some wishful thinking!

We’re not even at that point. Politics, fear, reckless hopefulness, and ignorance drive software purchases — especially in business. Companies regularly spend millions of dollars on massive software systems that have little or no business value, and frequently do not even reach completion.

Often it’s because there’s a perception that everybody else is doing it, and they’ll be left behind. Very often it’s because a company has been unable to solve something that is essentially a human problem, and spending a few million is less painful than making substantive changes — even though the software doesn’t solve the problem. Most often of all, it’s because the people who make purchase decisions are not the ones who have to live with the software.

Whatever the case, Bruce’s article is actually rather optimistic compared to what I’ve seen. If companies actually let software quality and utility of all kinds — not just security — drive their purchase decisions, it would reshape the software industry.

David Thomas January 18, 2007 11:19 AM

It sounds to me like this can be done at a company level rather than a market level. If I seek secure software, I should do business with those companies whose economic interests align with providing me secure software. If a company is pushing a “trust us” marketing campaign, and their EULA still waives all or virtually all liability, they should be laughed at, scorned, ridiculed – they don’t even trust themselves, and yet they expect us to trust them. Obviously, software for which the developer assumes liability will cost more than software for which the developer does not, but who would you rather do business with?

Durable Alloy January 18, 2007 12:55 PM

@David Thomas

“[if] their EULA still waives all or virtually all liability…”

Point to me one single EULA that does not do this.

Have you ever read the GPL, the BSD license, or the Mozilla Public License? These are the licenses used by Linux, FreeBSD, and Firefox, all of which are supposed to be more “secure” than the alternatives. Yet all of them waive developer’s liability. What developer in their right mind would add such a clause?

I agree with the remarks posted here that the business model needed is more like that of the insurance industry.

Preston L. Bannister January 18, 2007 1:00 PM

In this instance, the comments are better than the original article. 🙂

Liability law (i.e. suing software folks) as a solution is a really bad idea. Every dollar you spend on lawyers is a dollar less you spend making the software better. For an extreme case you can look at light aviation. Liability lawsuits pretty much killed off the industry back in the 1970s, without any real difference in safety.

Security is not something readily measured. Risk of company-crippling lawsuits is pretty much undefined. Likely only software companies with deep pockets and lots of lawyers would survive.

If the market in general is not asking for more reliable software from vendors, then perhaps the costs for greater security should be forced on the general market. Put differently, if you need better security, you should expect to pay more.

To make software more secure will take time and money. Spending money on lawyers is not an efficient solution. How could we spend money more efficiently?

One model would be to perform an extra cost “audit” on software after a general release. Audited releases would come out later and less frequently than general releases. This makes security a market choice. Companies that need the better security of audited releases pay extra for the added value. Anything found in the audit gets folded into a later general release.

We have software tools that can look for exploitable problems in compiled code.
Not perfect, but it helps.

We have software tools that can look for exploitable problems in source code.
Not perfect, but it helps.

We have methodologies for writing and reviewing source code with an eye to security. If done in-house the cost is minimal, but the customer cannot be sure of the result. If done externally (by perhaps the software equivalent of Underwriters Laboratory) the cost is (much!) higher, but the customer can have greater confidence in the result.
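As a small, hypothetical illustration (the example and its function names are invented here, not taken from any particular tool or codebase), this is the kind of defect such analysers and reviews routinely flag, together with its one-line fix:

```c
/* Illustrative example only: a classic defect that source-code analysers
 * and security reviews flag.  Passing untrusted input as the format
 * argument lets an attacker smuggle in %x/%n directives. */
#include <stdio.h>

static void log_comment_unsafe(const char *comment) {
    printf(comment);          /* flagged: user-controlled format string */
}

static void log_comment_safe(const char *comment) {
    printf("%s", comment);    /* fixed: the format string is a constant */
}

int main(void) {
    const char *attacker_supplied = "%x %x %x %n";
    log_comment_safe(attacker_supplied);   /* prints the literal text, harmlessly */
    (void)log_comment_unsafe;              /* unsafe variant deliberately left uncalled */
    return 0;
}
```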

Better to let the market decide. Customers with need for better security should expect to pay more.

This fits with your question as to who should be responsible for security. The answer is and always has been: “It depends”. Security requirements are different depending on use. A server exposed to the Internet has different requirements than a server on an intranet, or a server inside a firewalled and physically protected room.

Liability is a bad solution. Lawsuits are a crapshoot. You never really know when you are safe, or how badly you are exposed. Better to have repeatable procedures that yield consistent results, and that will actually help improve the software.

Test rather than sue – seems a better choice.

Tom Davis January 18, 2007 1:52 PM

Increasing the liability of manufacturers and developers would not necessarily increase the security of available options. You only have to look to the small-plane market to see that increased liability costs sometimes simply raise the costs of new products astronomically, making the continued use of old products financially desirable while reducing the size of the overall market at the same time.

This is especially true given the extremely complex and unpredictable nature of software development and deployment. There exists a software project whose primary objective is security (OpenBSD) and which does in fact produce a very secure product. However, they continue to produce new versions with security patches as new classes of vulnerabilities are discovered. Because of their proactive stance, the OpenBSD developers often discover vulnerabilities before crackers do, but there are no guarantees. Because the OpenBSD software can be distributed without charge, if the liability for security vulnerabilities were dumped in the developers’ laps, development of what is undoubtedly the most secure operating system in the world would stop instantly.

Antiapathy January 18, 2007 1:59 PM

@Andy

“when we’ve been bidding for work we’ve had situations where bidding for the cost of doing things correctly (securely, or even just ‘properly’) would have had us laughed out of the building.”

That implies that your company know something about software security and are compromising quality for market share; if this is so then your company is probably much better than average.

My bug is with software vendors who do not know or care about security at all. I am sorry to say that every company representative and most software developers I have dealt with just want to get the job done as easily as possible then disappear. There is a lot of apathy out there.

@Bruce

There is an unspoken assumption in your article that companies could significantly tighten up the security of their offerings if only they were given the incentive.

I offer the more gloomy view that a lot of them aren’t up to the job. I don’t think I could produce any significant secure software because I have been studying various aspects of software security for years. The more I get to know, the more I realise just how hard (and expensive) good design and implementation is.

Qian Wang January 18, 2007 1:59 PM

I believe that this is something the market is able to take care of without external interference. All industries go through a predictable progression. When the industry is young, the focus is on adding functionality to its products because they are not yet “good enough” to fulfill the needs of the user. As the industry matures, users become satisfied with the basic functionality of the products and the focus shifts to other attributes. This is where quality (including security) or price or design become the chief differentiators.

We are already starting to see this in certain software segments. Certainly Microsoft’s claim of being focused on security and Microsoft, Apple, and the OSS community’s highlighting of the security of their respective products indicate that the shift is well underway in the operating systems market. Other software segments will go through the same process at their own pace. To impose a liability requirement on all software will effectively kill those segments that have not yet reached the “good enough” point by prematurely shifting the focus to security.

Martin M January 18, 2007 2:01 PM

Mandatory liability leads to greater risks which many small companies will not be able to handle. So it would lead to less competition and serious problems for the open source movement. Talking about externalities, wouldn’t mandatory liability be a gold mine for security companies, while the large-scale costs mentioned above remain an externality to them? 😉

Donald January 18, 2007 2:03 PM

Perhaps the term “computer security” rather than “information security” would be a bit more accurate.

Aside from my nit-picking, I wonder how this applies to non-capitalist societies or environments. Would you still consider computer security an economics problem?

Murphy's Amendment January 18, 2007 3:04 PM

@Gavin, who said:
“Cox said that closed-source companies could not be held liable for their code because of the effect this would have on third-party vendor relationships: …”

Interesting. I have respect for Alan Cox’s technical prowess as a Linux kernel hacker who works for Red Hat, which sells Linux commercially.

But doesn’t this argument presume that the source of a security fault (and therefore the liability) could not be accurately assessed?

Complexities do arise with the addition of third-party software. For example, badly written third-party device drivers have long been the bane of MS Windows stability.

But in the event of a security breach leading to a liability lawsuit, it would first be determined whether the flaw was in Windows itself (liability: Microsoft), in a third party component (liability: 3rd party), or in a reasonably unpredictable emergence of the two together (liability: Murphy’s Law).

So taking Cox’s argument into consideration, it seems like the right thing to do is to write any software liability law to take this possibility into account, and require an expert determination of fault, if any.

Jose Curvey January 18, 2007 3:20 PM

@benny

The creation of the NHTSA seems to have retarded both technical progress and safety improvements in the automobile industry in the USA. Thus, if the American auto industry is anything close to a predictive model, it would suggest very poor results:

http://en.wikipedia.org/wiki/NHTSA

Jose Curvey January 18, 2007 3:29 PM

My own spin on liability: if you were to limit the liability to some multiple of what the developer was directly paid by the injured party, you would have little impact on the development of Free software, since most users pay nothing to the developers in the first place.

Benny January 18, 2007 3:43 PM

Jose Curvey,

The Wiki article you linked is slapped with a big “Neutrality disputed” disclaimer at the top, and if you go to the “talk page”, it’s not hard to see why. Could you provide more objective support for your assertion that the NHTSA is responsible for retarding technical progress and safety improvements in the auto industry? Thanks.

Eric Crampton January 18, 2007 5:06 PM

Bruce,

Strictly speaking, there’s no externality if I suffer costs for having bought inadequate software, but there is an externality where other users suffer costs because my computer’s turned into a spam zombie.

We can easily say that consumers undervalue security because many of the costs of poor security are imposed on other people; relative to a social optimum, consumers would not demand enough computer security.

Where liability should lie is a really interesting question. We could say that the software companies should be liable, but a more interesting solution might be to make the computer user liable. Again, consumers pay too little attention to computer security because the bulk of the costs are borne by other people. Suppose that a computer user would be found liable if his machine were used in zombie attacks. Would consumers start paying a bit more attention to security? Seems pretty likely to me. They’d then demand products with proper security features. Set the liability rule such that the user is required to exercise due care.

Mike January 18, 2007 5:15 PM

No, No, No!

As brilliantly illustrated by both the DMCA and the SCO v. IBM case, legal liability and the consequent litigation associated with it do not produce a result that benefits the customer.

It may make software more secure but it will not reduce the need to guard your own borders!

Sum of suggestion: more money for the lawyers.

derf January 18, 2007 5:39 PM

Let’s say I create a car that blows up fairly frequently when it gets into a minor accident. I expect I’ll pay quite a bit of money in lawsuits to the families of folks who died in my car flambes. Car accidents are a reasonable expectation for car manufacturers to anticipate, even though they really are outside of the scope of design of a vehicle that is supposed to get you from point A to point B without encountering other vehicles or obstacles directly.

In the same vein, Microsoft’s OS is supposed to function as an interface between the person at the keyboard and the underlying systems and software on the PC. It is absolutely a foreseeable event that there will be some software that will maliciously try to manipulate the systems and software without the permission of the person at the keyboard. As an example, if my personal banking details are broadcast to someone outside my home without my knowledge or permission, who is at fault?

We live in a litigious society, and fault normally lies with the primary negligent party regardless of the economics of the industry. In this case, I was using my PC within the guidelines put forth by Microsoft, yet my PC would still have been infected by malicious software simply because I used the built-in Microsoft browser to view a valid website that had a banner advertisement that specifically targeted holes in Microsoft’s browser for which no patches had yet been issued. Again – who would you judge to be at fault?

If the user is the one at fault, then I submit that no one without training in computer security should be allowed to own a computer. If Microsoft is at fault, then why should this company have diplomatic immunity from their liability?

Pat Cahalan January 18, 2007 6:40 PM

For the most part, I agree with Bruce. I find it astonishing that a company can produce a product, offer it for sale to customers, withhold the ability to examine the product (indeed, take legal steps to prevent people from reviewing the product’s safety and security standards), and make advertised claims as to the quality and security of the product, and yet at the same time disclaim all warranties and liability through a EULA.

I don’t know that liability is the best way to solve this problem, and I don’t know that regulation is the best way to solve this problem, but most software companies are trying to have their cake and eat it too, which is just disgusting.

Lots of the comments here reflect comments that have come up on similar threads before on this blog (there’s 185 posts about software liability)

http://www.schneier.com/cgi-bin/search/search.pl?Terms=liability&Realm=blog

@ Don Marti

Liability would be a disaster for research and for collaboratively
developed software.

I disagree (well, it could be, but…). To those that say “adding software liability will kill free software”: it’s been noted several times that software that is provided free should be exempt from liability law, as it is not itself being offered as a commercial product. Moreover, open source software’s code is by definition available for perusal, so presumably the “buyer” has the opportunity to examine the code him/herself for security bugs and defects, which is not the case with closed-source software.

@ MartinM

Mandatory liability leads to greater risks which many small
companies will not be able to handle.

Not necessarily. For example, if a small company uses a library provided by another vendor in their product, the original vendor should be liable if the vulnerability is in the library.

@ Preston

Every dollar you spend on lawyers is a dollar less you spend
making the software better.

No, this is an absurd proposition. Every dollar you spend on lawyers is a dollar more in capital expenditures. I highly doubt most software companies have a development budget that equals their gross receipts. I would imagine you have to spend a great many dollars on lawyers before you get anywhere near touching the dev budget.

@ derf

The problem with our “litigious society” is not that it is litigious -> resolving differences in a courtroom is infinitely preferable to shooting it out at 10 paces. The problem is that we have two main bases for penalty -> what regulations impose (fines) and what courts can award (damages).

Far too often, in the case of regulation, the fines are too little to provide adequate incentive for the institution to change, and the court awards are not determined in a manner that encourages institutions to change, either. They’re either grossly punitive (very rare but well publicized), or they’re limited to provable damages, which can be incredibly difficult to quantify.

Mark Ennis January 18, 2007 8:40 PM

I agree with Eric Crampton when he says that a better place for the liability is with the users. As a developer who works for a small software company, it is my opinion that Bruce’s suggested liability proposal would put open source and small software vendors out of the market. The reason for the lack of security in software products is the lack of willingness in the market to bear the cost of a secure product. A combination of better education of users about the security risks and more substantial financial penalties for users who take insufficient care in securing their computers and networks would seem to be a more useful approach. Put the onus on the users to demand secure products, because their financial costs when they suffer security compromises are substantial enough to affect their balance sheet.

I accept that where there is a lack of reasonable choice in the market due to effective monopoly, there is a problem in bringing market forces to bear to effect change such as better security. In this case, legislation may be required to address this, but I would prefer a solution that required software vendors to support multiple operating systems and open file formats and protocols rather than impose liability on the software vendors. Address the cause of the problem, effective monopoly, not the effect.

Erik N January 19, 2007 3:40 AM

While I agree that liability will change the picture, I think it is very difficult to hold vendors liable for general purpose software. The problem being that it is impossible to take into account all the possible scenarios of use of a general purpose product.

However, one could claim that a product should be “secure by default”, and that the vendor is liable for failure in a default setup. Users must also be held liable for their misconfiguration, and obviously assume all responsibility in the case of unlicensed software. And, despite errors in programs, or misconfiguration by users, it is and should be the intruder who abuses a security problem who is ultimately liable for all damage caused.

The liability you introduce on users and vendors should serve to pass it on: if one is attacked, they only need to search one link up the chain of the attack to claim compensation. If so, everyone will make sure that anything they may be held liable for can be passed on.

That is, the end user is not in power to do very much, but if he can pass on liability on the vendor, the vendor will have an incentive to avoid problems in the first place, and to track down the offenders.

Finally, I think, that in case of open source liability remains with the user since all information to perform an audit is available. Otherwise, open source would cease to exist.

Ben Liddicott January 19, 2007 3:48 AM

“100% [liability] should not fall on the attacker”

Are you sure you meant that? What proportion of the damages should the attacker pay?

In reality blame is not a zero-sum game. Of course the attacker should have 100% liability. That doesn’t mean no-one else has any.

Ben Liddicott January 19, 2007 4:00 AM

@Peter Pearson

Spot on. Bruce needs to re-read that old Economics textbook.

If I choose insecure software from z-corp, and I suffer thereby, that is NOT an externality.

If I choose insecure software from z-corp, and Joe Bloggs suffers, THAT is an externality…

…but why should z-corp be any more liable than me?

There is an answer, but it is not the one Bruce gave. Z-Corp should be liable because they can mitigate the problem at the lowest cost.

Duckling January 19, 2007 4:45 AM

Bruce’s article seems to rely on the assumption that fixing the initial problem is cheaper than fixing the consequences.

This might be true in many cases, but is it always so in this area? After all, often when you buy security solutions, you tend to buy solutions that cover many systems (i.e. firewalls, IDSs, centralized management systems), thereby spreading the cost.

I’m not entirely convinced that fixing all software problems is always better…

Bruce Schneier January 19, 2007 8:48 AM

“If I choose insecure software from z-corp, and I suffer thereby, that is NOT an externality. If I choose insecure software from z-corp, and Joe Bloggs suffers, THAT is an externality…”

It depends. The former is an externality to z-corp: they choose to sell software with security problems, and they do not suffer the consequences from the resultant breaches. In a fluid market, of course, customers can penalize vendors for this kind of behavior. But software seems not to be such a market.

“..but why should z-corp be any more liable than me? There is an answer, but it is not the one Bruce gave. Z-Corp should be liable because they can mitigate the problem at the lowest cost.”

Exactly. That’s the answer I give all the time.

Scott Carpenter January 19, 2007 8:56 AM

Code is speech. I think I should be able to post free software on my web site and disclaim liability for its use.

Bruce, I love your essays, but here I agree with the commenters who are pointing out what a massive problem this would cause for small shops and free software development.

markm January 19, 2007 4:10 PM

“The only thing I can think of is to impose liability for security failures only on vendors who claim their software is secure.”

I agree with that, but it needs further explanation. The huge problem is that, for software (and for very few other consumer products), a company can advertise widely that their product is good for X, but avoid all legal liability by burying a statement that they are not responsible for it actually doing X well, or at all, deep in the fine print of a “contract” that the purchaser can’t even read until he gets the product home.

E.g., a few years ago Microsoft ran many ads showing Windows servers running unattended and implying that a Windows (NT or at best 2000) server needed very little human attention. Everyone I’ve ever talked to who has used both Windows and Unix/Linux says that Windows will need more attention. However, if some computer-naive businessman who bought Windows for his servers because he believed the ads were to sue Microsoft, whether for just the return of the purchase price or for the huge support costs incurred, Microsoft’s lawyers would just point at the EULA and the case would be dismissed.

If we had a Congress that actually cared about the welfare of their constituents more than their campaign contributors, writing a contract that contradicts one’s advertising would be evidence of fraudulent intent.

Jose Curvey January 19, 2007 5:06 PM

@Benny

The neutrality is disputed, but the numbers are not. One is free to make one’s own judgment based on the numbers. They are not normalized per mile traveled, but they are normalized to ratios, which is arguably a better method, as not all miles of travel carry the same level of risk.

neighborcat January 19, 2007 9:25 PM

Holy shit, I actually disagree with Mr. Schneier! Although his assessment of the current situation is accurate, and his solution is sound, I dispute the necessity and sustainability of government intervention. The availability of secure software can’t force users to make better decisions.
In short, I see poor software security as just one symptom of a larger lack of critical thinking skills in the population.

Clive Robinson January 20, 2007 2:42 AM

@Bruce,

If your assumption is that the software vendor,

“can mitigate the problem at the lowest cost.”

Then are you assuming that a security fault has a single, well-defined point of origin in the vendor’s code that is remotely fixable at negligible cost?

This is probably not true in the majority of cases. If you look a little more carefully, I think you will find that trying to fix the problem (properly, as opposed to patching around it) is going to be extremely costly.

In my experience, most software is reliant on other software, and so on downwards through the code libraries -> OS -> drivers -> microcode -> state logic -> system architecture…

Changing anything below the OS level is likely to require hardware changes, so the cost there is very definitely non-negligible.

Likewise, most closed-source code is effectively “statically linked”, which means changing the code libraries is going to be non-negligible in cost.

Also, as most code is written in a high-level language which tries to abstract the underlying specifics out of the design (code libraries downwards), the vendor probably has little or no idea of where the problem is, nor the ability to fix it (without breaking something else).

As an example, a large amount of existing software is written using ISO Standard C and its libraries. It is now accepted that a large number of exploited security faults are due in large part to faults within the C Standard. What do you think would be the cost of fixing the C Standard?
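
To make that concrete, here is a minimal sketch, purely illustrative and not taken from any real product, of the sort of standard-library idiom that keeps producing exploitable faults, next to a bounded alternative:

    #include <stdio.h>
    #include <string.h>

    /* Unsafe: strcpy() copies until it hits the terminating NUL and has no
       idea how big the destination is, so attacker-controlled input longer
       than 15 characters overflows buf (a classic stack smash). */
    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);
        printf("Hello, %s\n", buf);
    }

    /* Safer: bound the copy to the destination size and guarantee NUL
       termination; over-long input is truncated instead of overflowing. */
    void greet_bounded(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);
        printf("Hello, %s\n", buf);
    }

Both functions lean on the same standard library; nothing in the language or the Standard stops a vendor shipping the first one.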

Effectively, even a security-conscious vendor has to “patch around” problems at the lower levels over which they have absolutely no control. Unfortunately, the very act of “patching around” known security flaws in and of itself gives rise to the potential for new security flaws.

If you look back to the Victorian era, mechanical design was done by “gentlemen” and carried out by “artisans”, not engineers or scientists (as we would understand them today). When something broke, they patched it where it broke.

Often the weight of this patch would inflict stress on other parts of the mechanism, which would then in turn break. Even if it did not, there was a secondary problem: the patches decreased efficiency, so savings were often made by cutting out other parts in current or subsequent designs.

We still see this sort of behaviour in mechanical engineering today. As an example, the Space Shuttle disaster, where an O-ring burnt through and vented burning gases onto the main fuel tank, involved a “tang and clevis” joint design taken from an earlier solid-fuel rocket that had not given problems. However, symptoms showed up in the early stages of the Space Shuttle programme and changes were made, but crucially they were not tested in any meaningful manner. The result was that the changes were at best ineffective and may well have actually contributed to the disaster.

At the moment, most software is developed in almost exactly the same way that Victorian engineering was carried out. The main difference is that instead of “gentlemen” deciding what was to be built, you now have marketing departments and boards of directors.

The “real work” is still carried out by artisans (code cutters), using their jigs and “patterns” based on “what worked last time”, put together in “code libraries” which few understand or even inspect (where that is even possible).

Problems are usually addressed at the visible break by “bolting on a patch” or modification, not, unfortunately, where the problem really is (think metal fatigue in North Sea penta-leg oil platforms for a modern engineering equivalent).

In these modern times there is, in the main, no engineering practice carried out by “software engineers” and no science carried out by “software scientists”; these are at best lip-service titles that gloss over the sausage-machine production of modern software development.

If you look at the history of engineering science and legal regulation that grew out of Victorian steam engines and their boiler explosions, you might just wonder whether we have learnt nothing from our past and are therefore condemned to relive it…

David Thomas January 20, 2007 5:14 AM

@Durable Alloy ‘What developer in their right mind would add such a clause?’

Those wishing to do business with those who look for such a clause.

Software published under the various F/OSS licenses often is more secure, but for reasons unrelated to liability, although not, perhaps, entirely unrelated to the tradeoffs facing a profit-seeking entity. First, there’s the many-eyes effect. Second, there’s (typically) less motivation to rush something to market. I wouldn’t be surprised if there were other factors as well…

There are several reasons liability doesn’t apply well to F/OSS, and the lack of profit to balance the risk is not necessarily the most significant. There are also issues of actually determining liability amongst the developers, and again, I’m sure there are others.

Basically what it comes down to is that while this kind of thing works well in some circumstances, it is not applicable to all. That shouldn’t really come as too terribly large a surprise, and honestly it reinforces my desire to see this done with education and popular awareness rather than legislation.

My point was that when a company asks for our money whilst claiming security, we should expect them to put their money where their mouth is.

In discussing the issue with my father, sometime after my original post, we actually decided that the appropriate thing is to make those who deploy the technology legally liable, then allow developers to assume some of that burden in contracts. It is, ultimately, the person deploying the system who knows (or should know) the full scope of the problem being addressed; they’re the ones capable of putting in backup systems, redundancy, external firewalls, etc., and they’re the ones who know the full extent of the value of any targets being threatened.

Anyway, it sounded good to us, what do others think?

Antiapathy January 20, 2007 6:59 AM

@David Thomas

“I wouldn’t be surprised if there were other factors as well… ”

How about taking pride in your work? It’s not easy to recruit and retain such people (as opposed to some bullshitter who tells you what you want to hear at the job interview). I suspect that OSS tends to attract a lot of people who really care about what they produce.

Antiapathy January 20, 2007 8:19 AM

@Bruce

Your essay and the comments have tended to talk of “secure” and “not secure”. I wonder if part of the problem is that we do not have a suitable vocabulary to describe software quality.

There seems to be lots of information available about software security standards, but it is focused on organisations and working practices, not individual software products.

For example, when looking for a new car, I can get safety rating information for different vehicle models. When I want a washing machine I can get information about energy efficiency. Nobody sells a car that is absolutely safe and nobody sells a washing machine that is absolutely efficient.

Now suppose I wanted to buy a new web commerce program for my business and could choose from:
Class A: $100,000
Class B: $7,500
Class C: $250
Uncertified: $25

Criteria for deciding what classification a piece of software should be given might be based on factors such as: initial security design, independent design review, formal methods used for coding, automated code checking for known exploits, independent testing, and so on.

A vendor would have to submit documentary evidence to the standards-awarding organisation that the work required for the claimed classification had been completed. Note that I am not suggesting that the documentation is checked; I just want a trusted third party to keep documentation relating to the code release. In the event of a dispute, the documentation could be given to a court for verification.
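
As a purely hypothetical sketch of what such a lodged record might contain (the field names below are my own invention, not any existing standard):

    /* Hypothetical record a vendor might lodge with the awarding body
       for one code release; illustrative only. */
    struct cert_record {
        char product[64];                /* product name                           */
        char release[32];                /* version / build being certified        */
        char cert_class;                 /* 'A', 'B', 'C' or 'U' for uncertified   */
        int  security_design_doc;        /* 1 = documentary evidence lodged        */
        int  independent_design_review;
        int  formal_methods_used;
        int  automated_exploit_scanning;
        int  independent_testing;
        char evidence_archive_ref[128];  /* where the third party keeps the paperwork */
    };

The point is only that the claimed class and the evidence behind it are recorded somewhere a court could later inspect.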

I realise that there are quite a few issues with this idea, but as long as we keep talking in extreme absolutes like “secure”, which nobody can achieve yet, we will have a problem.

Anonymous January 20, 2007 6:46 PM

@Antiapathy “How about taking pride in your work? It’s not easy to recruit and retain such people (as opposed to some bullshitter who tells you what you want to hear at the job interview). I suspect that OSS tends to attract a lot of people who really care about what they produce.”

Absolutely. Also, they typically are developing it in part because they have a stronger-than-average interest in using it themselves, so flaws affect them disproportionately and there is less of an externality.

Marko January 21, 2007 12:11 PM

The profit motive is a red herring. Some examples of non-profit-driven development environments: software developed by universities, software developed by the government, custom software for internal use only, and free software.

Mr Schneier proposes to fix the problem by making software manufacturers liable, but this solution would not help any of the non-profit examples I have listed.

Michael January 21, 2007 1:56 PM

One commenter says that while vendors might increase costs to cover their liability, “[I]t’s quite another thing for companies to accept that.”

(And many others echo this same sentiment.)

Malarkey. All that second company is going to do is figure out a way to avoid its own liability, passing the externalities further down the chain. Eventually somebody is going to get stuck with the cost of the extra risk. Any of the freepers above who point to the capacity of players in the chain to dodge their own responsibilities as purported evidence that liability is not necessary are simply hiding their heads in the sand.

Oh, and not to be a bully, but…

“If you want an open and inovative (sic) software industry then don’t saddel (sic) it with stifeling (sic) risk, look for another method which would still alow (sic) inovation (sic) by very small organisations. (sic)”

Frankly, if open and innovative software means quality control such as that exhibited in the above sentence, I for one would just as soon ask for boring and functional. And for those who seem to believe that small business is the be-all and end-all of joy, I only suggest that those same players should quit their whining when asked to pay the fare to ride the boat. If you haven’t built the capacity to pay that fare into your calculation, you don’t have a successful small business; you have a leech.

Dickh January 22, 2007 1:33 PM

I am astonished that none of the comments above mentions that security awareness is still (largely) voluntary on the part of new graduates. I hired a new CS grad this year who had taken only a few hours of elective classes on security issues.

When MS decided to take their stand on security, they decided to put their entire development staff through remedial training for several days at astronomical cost.

Until we, in the security business, teach more classes, make it impossible to graduate without a thorough grounding in the issues, and raise the bar on what we consider acceptable, we are part of the problem.

RobL January 22, 2007 5:12 PM

Seeing as we are likely going to have to live with buggy code for years, would it not be better to focus on efforts that might prevent vulnerabilities from escalating into executable threats?

Since all attacks have as their goal some form of kernel-level behavior, such as data access or executing some function, there should be a way to prevent such behaviors.
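
One existing mechanism in roughly this spirit is Linux’s strict seccomp mode, which confines a process to a handful of system calls. A minimal sketch, illustrative only (real sandboxes are considerably more involved):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/seccomp.h>

    int main(void) {
        /* After this call the kernel allows the process only read(),
           write(), exit() and sigreturn(); any other system call is fatal. */
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
            perror("prctl");
            return 1;
        }
        /* Even if a bug below were exploited, the payload could not open
           files, spawn processes or talk to the network. */
        write(STDOUT_FILENO, "sandboxed\n", 10);
        syscall(SYS_exit, 0);  /* raw exit(); glibc wrappers may use
                                  exit_group(), which strict mode forbids */
    }

It does nothing for attacks that only need the calls left open, but it shows that confining behavior after the fact is possible without fixing every bug first.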

PHE August 11, 2008 11:27 AM

Yes, everything mentioned above is the natural result of the technology. Technology is never something we can use without any problems, as we have seen in the past. That is why we need to accept all its problems before we start to use any new technology in place of the older one.

Jacob July 31, 2009 10:57 PM

I don’t think liability is the answer here. It’s a battle of wits between those who write exploits and those who write patches. You’re saying people should be financially liable for failure to be smarter than others.

Failing to care about security in a sensitive application is one thing. (Diebold is a perfect example of this; they entrusted Windows with upholding democracy and forgot to password-protect their source code on a public FTP site.) Making an honest mistake is quite another.
