Schneier on Security
A blog covering security and security technology.
January 23, 2009
BitArmor's No-Breach Guarantee
BitArmor now comes with a security guarantee. They even use me to tout it:
"We think this guarantee is going to encourage others to offer similar ones. Bruce Schneier has been calling on the industry to do something like this for a long time," he [BitArmor's CEO] says.
Sounds good, until you read the fine print:
If your company has to publicly report a breach while your data is protected by BitArmor, we'll refund the purchase price of your software. It's that simple. No gimmicks, no hassles.
BitArmor cannot be held accountable for data breaches, publicly or otherwise.
So if BitArmor fails and someone steals your data, and then you get ridiculed in the press, sued, and lose your customers to competitors -- BitArmor will refund the purchase price.
Bottom line: PR gimmick, nothing more.
Yes, I think that software vendors need to accept liability for their products, and that we won't see real improvements in security until then. But it has to be real liability, not this sort of token liability. And it won't happen without the insurance companies; that's the industry that knows how to buy and sell liability.
EDITED TO ADD (2/13): BitArmor responds.
Posted on January 23, 2009 at 10:35 AM
Let me see if I have this straight...
If our software fails, we'll give you back your money, but we're not liable. So, they never really lose money; at worst they break even.
Sounds like a great guarantee--for BitArmor's profits.
I offer the same guarantee on my $50,000 rot13 encryption scheme. If anyone using it has to publicly report a breach during the 3 year license, I'll refund the *entire* $50,000, not just a prorated part of it like BitArmor does.
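[For anyone who missed the joke: rot13 is a fixed letter substitution, not encryption -- there is no key, and applying it twice restores the plaintext. A minimal Python sketch, just to show why a refund guarantee on it would be worthless:]

```python
import codecs

def rot13(text: str) -> str:
    # Shift each letter 13 places in the alphabet; rot13 is its own inverse.
    return codecs.encode(text, "rot_13")

secret = "Attack at dawn"
ciphertext = rot13(secret)          # "Nggnpx ng qnja"
assert rot13(ciphertext) == secret  # anyone can "decrypt" without any key
```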
Uh, you mean like AIG knew how to price and sell risk?
I agree with your point about software liability but I don't think the insurance industry is up to supporting that market.
Last October, the world financial system nearly imploded in part because major insurers (e.g. AIG) didn't know how to buy and sell liability on mortgage-backed securities. Some of these jokers used risk models which assumed houses could _never_ decline in value.
Data security products are even harder to assess than mortgages; if the insurance industry can't get the latter right they're certainly not going to have a hope of understanding the former.
I really don't think the market will support the cost of software which assumes liability for data breaches. Developing non-trivial secure software is a very time consuming task, a cost which would have to be passed on to the consumers. I think while most businesses would like to hold the vendor accountable for security breaches, it is probably less expensive for them to just buy insurance than to pay what it would cost to develop secure software.
How about a call for personal responsibility rather than pushing things back on the vendor? You shouldn't be able to not think about security. You must know how it works, not just write purchase orders and look the other way.
I also think you (Bruce) should give them more credit. No business will reimburse you for consequential damages from their products, ever, unless you count insurance companies.
@HJohn It's not a guarantee they will break even. It's a guarantee they won't profit if their product doesn't do its job. Even then, I suspect some idiot user will compromise security somehow and blame it on the vendor.
That said, it's closed source, so, obviously it's just a rot13 with a twist, and riddled with back doors. There's no other reason to keep the source code secret.
AIG knew very well how to price risk; what it stopped caring about was how it paid the employees who sold that risk.
Holding software companies accountable for "derivative" loss is a stretch - I know no industry that covers it.
Software liability would be a weapon against hard/expensive-to-crack systems.
It is very naive to think that money and legal judgments can bring about a positive result in security.
Business is war, software liability would become a dangerous black market.
Would be interesting to read more about software liability and how bad it would be for open source.
Pinning liabilities to software would be a great way to kill innovation and make sure that open source and hobbyist programmers get eliminated, and make producing software a licensed profession, and sole preserve of huge multinational corporations with the specialized ability to navigate bureaucratic certification schemes, and lobby regulatory organizations.
Let's not go there.
"And it won't happen without the insurance companies; that's the industry that knows how to buy and sell liability."
Given the state of the current financial crisis, I think serious questions can be asked regarding insurance companies' ability to assess risk.
There has to be liability, but putting it at the software vendor won't work.
Assume that BitArmor products work very well. It's still unreasonable for BitArmor to take on more liability, unless they can control how people are going to use it.
Suppose an employee loses a laptop with confidential information, and the information gets out. Was the password on a sticky note on the bottom of the laptop? Was it "sexsexsex"? Guessable in other ways? BitArmor can't afford to be liable for other people's lack of knowledge and/or stupidity.
Conversely, if all I need to nullify the effect of data breaches is X many BitArmor licenses, why should I take time and expense to secure anything myself?
Liability needs to start at the organization with the information. That organization can determine its needs, buy insurance, and acquire and implement software appropriately. Companies like BitArmor will then succeed or fail based on whether they are useful in providing security.
The problem with data security is that the costs of a data breach are largely an externality to the organization with the data. Give the organization sufficient financial penalties in case of a data breach, and they'll start behaving as if data security was important. They'll try to make sure their systems are secure, and will try to buy secure software. Only then will businesses take it seriously.
Sounds like free to me.
"Data breaches must be publicly disclosed under US state breach notification laws to qualify for a refund."
According to some states a public report of a breach is just one letter sent to one customer, no? A news article or story is an option only to reduce the cost of direct mail for large cases.
Buy their stuff, send a letter to a customer that there may have been a breach, get it registered in a public database that no one cares about, and you get your money back.
Automaker: "Gee, that's too bad our faulty gas tank blew up and your family died. Here, we'll give you back the price of the vehicle. No gimmicks, no hassles."
If you have a portable device stolen with personal data on it, you SHOULD do a data breach report. You can't know that it wasn't broken into, you must assume that it was. However, as part of the report you should indicate that the data was encrypted and it's unlikely that the data is actually in the wrong hands.
Rich: It is much easier to show that a car or device was engineered in a bad / unsafe way compared to a piece of software. People can take cars / equipment apart and see where the design flaws are and how significant they are. Unless you have the code you really can't do that well with software.
Though I do think that there needs to be some sort of liability for SW firms, the hard part is finding the right level. As mentioned above, the SW firm usually has no control over the installation, updates, etc.
You most certainly can tell if there was a flaw in application design. Reverse engineer it.
Companies need to be held liable for the software they develop. If their choice is to use inexperienced or cheap labor to simply get a product out the door quickly and something goes wrong, then they should pay for the damages.
Personally, if I were running a company I would be more than happy to have insurance for that protection and fight the battle in court. Just like auto manufacturers do all the time.
If somebody used my name without my permission to sell a (bad) product, I would sue the bleeding shit out of them.
@ Rich Wilson,
"Gee, that's too bad our faulty gas tank blew up and your family died. Here, we'll give you back the price of the vehicle. No gimmicks, no hassles."
A small but highly relevant point about your argument.
When you buy a vehicle you are buying the whole package.
When you build a modern IT system you have many vendors products involved.
I suspect (irrespective of licence terms) that the vendors would happily point the finger at each other until the cows come home rather than own up to any deficiency in their product...
Also the average software product is orders of magnitude more complex than your average vehicle.
And complexity is an ideal place to hide information beyond reasonable recovery.
I'm curious, what liability does Counterpane/BT accept for any of their product or services?
These sorts of guarantee can only ever be a gimmick. No matter how good the solution, there's always the layer-8 vulnerability.
How does one assess the risk of a piece of software containing a vulnerability?
Unless you can statically verify your whole code base and know there are no vulnerabilities then you have no way of knowing if there are or not.
You either know you have vulnerabilities or you don't know the risk. Thus no insurance company would take this up.
Also, if this company offered more than just a refund...imagine the cost of one vulnerability for this company.
A friend of mine lived in a bad neighborhood. He kept his car in a guarded carpark. When the car got stolen, they paid back the price of the parking for the last day. So the strategy works :-o
A strict liability approach required by law for consequential pure economic loss would be bizarre and anomalous. Liability for any breach would be like imposing liability upon the manufacturer of a bullet-proof vest if the wearer gets a bullet wound. Firstly, it is possible to work around security products, just as it is possible to work around a bullet-proof vest. As such, this would be anomalous to our idea of appropriate product liability in that it would make manufacturers liable for breaches over which they have no control.
Secondly, it gives insufficient weight to tradeoffs. Confronted with the spectre of indeterminate liability for consequential pure economic loss, manufacturers would either cease to make security products, or make bullet-proof comprehensive systems. Whilst one might argue that the latter is precisely what they should be doing, that isn't the case.
As I mentioned, with security you make tradeoffs. With computer security you make tradeoffs between safety and usability, safety and speed, and safety and price. It should, as a matter of market freedom, remain open to people to compromise between conflicting objectives as they choose, and to purchase partial security systems based on their personal risk analysis.
In addition, your suggestion would be procedurally unworkable. Indeed, as you recognise, this would be a form of insurance. The consequence of this would, of course, be twofold. The first consequence is that product pricing would become quote-based. This would prevent easy comparison of pricing, and cause market failure. The second consequence would be that certain high-risk operations would not be sold security software at all. This would be a perverse result, since these would be the operations who needed it most.
As such, I do not support a legal framework which holds software companies liable in such broad terms.
However, that is not to say that they should not be liable under any circumstances. Indeed, if it can be shown that the product which they sold is defective, and the defect has caused loss, it seems that they should indeed be liable. This would be harmonious with product liability laws in most developed countries.
There comes the issue of what, in such a situation, the liability should extend to. For example, should it extend to all consequential loss? I think that it should not. For the same reasons as I mentioned above, indeterminate liability is a bad thing. A software company will not know, under this approach, whether they will be liable for the loss caused by the leaking of my email address, or the loss caused by the leaking of national secrets. Again, a quote-based system would result.
Instead, I would support liability for damage to the extent that it is reasonably foreseeable. The reason for this is clear if one looks at the inverse of the proposition. I would not support liability for damage which would not have been reasonable to foresee: unforeseeable damage. This is simply a practical necessity, and desirable in many aspects of our life.
That's the problem with security. The risk is either 0% or 100% of loss. It's never in between.
The guarantee is an interesting one.
I am aware of exactly zero state notification laws that require notification when the data are encrypted and the key is not revealed as part of the breach.
I wonder under what conditions it is even possible to ask for a refund, given this feature of the various states' laws?
Disclaimer: I work for BitArmor. Interesting to see the initial post by Bruce and the comments made on it. Here are my thoughts.
I think it is bad form for Bruce to ask for more responsibility from vendors and, when one does take on some responsibility, put them down heavily! You say software vendors should take on real responsibility, but in the next statement you tell them to effectively become insurance companies. Without security products that do what they claim to do, how can an insurance company even begin to understand the risks?
There obviously is a PR element to this, but without product capability to back it up, no company can do this. It would have been nice of you to have at least acknowledged that possibility and asked insurance companies to step up, instead of “pooh-poohing” the whole thing.
"I think it is bad form for Bruce to ask for more responsibility from vendors and, when one does take on some responsibility, put them down heavily!"
And what responsibility, specifically, are you taking on?
I can provide the same guarantee that you do for my "tiger attack prevention" rock. If you should ever be attacked by a tiger, I will cheerfully refund the $10 purchase price.
"There obviously is a PR element to this, but without product capability to back it up, no company can do this."
Of course they can. Even if their product does NOTHING. See my rock example above.
"Without security products that do what they claim to do, how can an insurance company even begin to understand the risks?"
Okay, that makes no sense. I used to work at an insurance company. The risks they deal with are well known. Even without any "security products".
I think this gnashing of teeth over software liability misses the point. Liability should only rest on those companies who wish to assume it. The whole "incidental / consequential" designation misses the point that some vendors embrace this type of risk as a way to differentiate the product. The Club offers (capped) compensation to users whose vehicles are stolen with the product in place. Surge suppressor manufacturers will repay you (again, a capped amount) if your stereo equipment is damaged due to a failure of their device. Bottom line is that software companies should only be held liable if they voluntarily designate a product as a "security solution" or as implementing "secure features." This could be further subdivided into specific classes (A, B, C, etc.), each with a maximum statutory liability ($100, $1000, $100,000, etc.). Liability would be incurred for each instance of product failure that results in loss of data that should have been secured by the secure software or its designated feature(s). The cost of the software would obviously reflect its designated class and allow buyers to titrate protection to a comfortable level. Many software vendors are incidental to overall security, will not bother to self-designate their products as "secure," and thus would not suffer any ill effect from increased development or insurance costs.
As for how to properly insure against liability, I'd say that insurance markets have been writing policies against the unforeseen for centuries. There's no reason to think that they wouldn't develop reliable models that incorporate reasonable metrics, like characters of source code / block design complexity ratings, networked function count, seniority and headcount of development staff, customer financials /data profiling, implementation restrictions in software license, breach history / response lag, and results of random source audits to come up with reasonable risk estimates. It's no more sophisticated than running my credit score to see how likely I am to crash my car in the next 6 months...
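[As a toy illustration of the kind of pricing the comment above describes -- expected payout plus a loading for expenses and profit -- here is a minimal sketch; all figures are invented, not anything an actual insurer uses:]

```python
def annual_premium(breach_prob: float, capped_payout: float, loading: float = 0.3) -> float:
    """Toy actuarial pricing: expected annual payout scaled up by a loading factor."""
    return breach_prob * capped_payout * (1.0 + loading)

# Invented figures: a 2% annual breach probability against a $500k capped payout.
premium = annual_premium(0.02, 500_000)  # roughly $13,000/year
```

The metrics listed above (code size, complexity ratings, breach history, audit results) would feed into estimating `breach_prob`, which is exactly the number the skeptics in this thread doubt can be estimated for software.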
The problem is that any security software is then operated by "people", who have lots of security bugs themselves: they sleep badly, argue with their partners, are asked to tunnel the CEO's games and porn browsing around the security controls ("you can't see this data")... So it's not always the security software itself.
Same as every other ISP or security company: they just cannot, unless they operate all of it and the client is willing to pay the right money for it.
Stephen, I agree totally, liability should indeed rest upon those who assume it. That's hardly controversial. If I promise to paint your house if you buy my software, I will be held to my promise. It's just a matter of contract.
However, there are issues, even such a scheme, of quantifying loss. Surge protectors indeed come with guarantees, as do good bike locks. However, the crucial distinction between those and security software is that whilst the loss in the former is physical property, the loss in the latter is not. Indeed, a surge protector manufacturer won't compensate you for the loss you suffer if a power surge kills your mobile phone, thus preventing you from getting out of a short-selling arrangement in time. That would be consequential loss. The problem of quantifying those losses is even more severe with security software because there is no physical property to tie the loss to. For example, if a data leak occurs, in actual fact, I have not suffered any direct loss. I presumably still have the data. The loss is entirely consequential, and is largely from loss of goodwill. It's very difficult to quantify the value of goodwill.
If, for example, I run a business, and the data I store gets leaked, I'll lose customers. But how much do I claim for? It's very difficult to tell.
Sure, you could avoid that difficulty by simply promising to pay 100 dollars for every security breach. The problem with that is simply that it will suit no one's needs. It won't adequately compensate some users, and it will create an incentive to crack the software for others!
I'm not sure that the Insurance Industry will want to take on "software risk" in any real form.
Physical events tend to happen in different places at different times and, importantly, when viewed from 30,000ft average out to a fairly predictable value.
For risks such as "act of God" or "warfare / terrorism / civil unrest" you are either not covered or have to go to the insurer of last resort (i.e. the Government in most cases) for relief.
The nature of software security and its vulnerabilities is such that there is, even at 100,000ft, no sufficiently predictable measure to assess risk. This puts it firmly outside the usual business model for insurance.
Should the legislators force it down the insurance industry's throat, you would expect the insurers to lay the risk off onto other insurers or take out "stop loss" insurance.
Unfortunately, as seen at Lloyd's of London back in the 80s/90s, this has the effect of creating liability spirals whereby the risk is actually not spread across the industry but dressed up as something different and sold back to the original insurers.
Thus when things go wrong, as eventually they do, the spiral has to unwind, in the process attracting huge administrative and other costs that multiply the loss many fold. But importantly this cost is borne not by the whole industry but just a small percentage of it ("old names" -v- "new names").
If and when the software industry develops reliable metrics (Function Point Analysis was one attempt), then and only then will the insurance industry be likely to treat it as a market as opposed to a subset of "exceptional risks"...
As I was once told by somebody in the industry,
"Only a fool or a gambler takes on unknown risk, and as the old proverb has it, A fool and his money...".
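[The point about software risk defeating the usual insurance model can be made concrete with a small simulation (all parameters invented): physical losses are roughly independent and average out across policyholders, while a single shared software vulnerability hits every install at once.]

```python
import random

def total_annual_loss(n_policies: int, p: float, loss: float,
                      correlated: bool, rng: random.Random) -> float:
    """Total payout across a book of policies in one year.
    Independent: each policy fails on its own coin flip.
    Correlated: one shared vulnerability hits everyone or no one."""
    if correlated:
        return n_policies * loss if rng.random() < p else 0.0
    return sum(loss for _ in range(n_policies) if rng.random() < p)

rng = random.Random(0)
years = 1000
indep = [total_annual_loss(1000, 0.02, 100.0, False, rng) for _ in range(years)]
corr = [total_annual_loss(1000, 0.02, 100.0, True, rng) for _ in range(years)]
# Both books have the same expected payout (~2000/year), but the correlated
# book swings between zero and total ruin -- a risk no premium can smooth.
```

The independent book's annual payout clusters tightly around its mean; the correlated book pays nothing most years and the entire insured value at once in the rest, which is exactly the "unknown risk" the proverb warns about.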
"I can provide the same guarantee that you do for my "tiger attack prevention" rock. If you should ever be attacked by a tiger, I will cheerfully refund the $10 purchase price."
And if someone were to buy that rock after doing research and make that choice, that is their decision. They would know the worthiness of that protection.
Just as someone who would be selective in buying some application which provides security.
The problem with security software liability is that software is complex and users are great at misusing, misunderstanding, and unknowingly shooting themselves in the foot when it comes to using computers. I wouldn't want to take responsibility for my users either.
Thank god the economic meltdown has created some more naysayers. This is very nearly the only idea I've ever heard Bruce tout so wholeheartedly that I absolutely disagree with. Frankly, with all the op-eds and blogging I've read of his on the subject, I still find that many of the things you've all brought up today have largely been skirted, and I find them to be very real concerns.
Certainly, something more can be done, but on this issue, I would actually fall in line more with Bruce's own statement regarding disclosure laws when he said:
"The reason theft of personal information is common is that the data is valuable once stolen. The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it's to make it difficult to use."
If there is enough incentive for a breach, there will be a breach. All other arguments aside (as relevant as I feel them to be), I would stress this point. Even simply diving back into Bruce's own blog archives for the last week, we see an article on the presidential limo, a beacon of paranoia... yet even in the article linked it states:
"Meanwhile, military anti-vehicle weapons such as rocket-propelled grenades, anti-tank weapons and shaped explosives are easily available on the world weapons market. If terrorists were to score a direct hit on the presidential limo with modern weapons, no amount of armor would save the occupants."
Liken it to a bullet-proof vest. A manufacturer claiming it's Kevlar, but providing nothing more than canvas, isn't gonna last long. Sure, someone may have to find out the hard way, but that's the price we pay for free market capitalism. Holding them, and all other manufacturers, accountable for deaths/injuries incurred in the coverage area of said vests would be ludicrous, since there will always be a bullet that can get through, or if the job really needed to be done, a grenade, missile, headshot, et al.
Not to mention Clive's remark regarding the complexity and interoperability of all parts of your average IT system, and the inevitable rabbit hole of litigation and finger pointing that would occur in even the smallest of breach complaints.
This is just a terrible solution on so many levels in my mind, although I can't offer up anything better aside from Bruce's own plea for removing the incentives altogether (which, understandably, is impossible in plenty of situations; in those situations, I would say the responsibility lies with the people in whose best interest it is to keep the data safe, since, generally speaking, in cases where the street value of the data itself cannot be minimized, they ought to have the resources to do some good ole' fashioned R&D on the subject).
Not to mention the fact that, simply speaking, there is a huge market for insurance in this country. It's worked for everything from acts of god to carpal tunnel, and frankly, if the government is about to start tacking on more regulations and liabilities for corporations/industries, they ought to start back where they stripped them away before diving into software. Asking the government to deal with regulating the software industry (or the internet, et al) is like asking an elephant to be your 500 meter relay partner.
As others have commented here, unless the vendor is in charge of the deployment, it would be very difficult to apportion liability to the various parties. In addition, you would have the very difficult proposition of assigning a value to the customer's data.
Nevertheless, the aforementioned "guarantee" is a nifty bit of PR that ultimately does not reflect well on our vendor community.
I saw some other comments above, and I wanted to point out there are in fact federal laws that require the reporting of data breaches. For example, "The Fair and Accurate Credit Transactions Act 2003" (FAIR AND ACCURATE CREDIT TRANSACTIONS ACT OF 2003, Public Law 108-159, 108th Congress), available for your reading pleasure at:
More information on reporting and reports:
Google red flag rules for more information.
There are also various state laws that require different levels of reporting breaches.
Let's start with Bruce's horrific exit at Counterpane as a data point; his diatribe here appears to be fueled by less than laudatory motives... I personally think that BitArmor's move is brilliant; anyone who has been around security in the last five years knows the team is first-rate. Let's see what value they exit at -- I guarantee it will be more than Counterpane's.
It is a terrible thing to put into a product agreement -- terrible company, don't purchase:
"BitArmor cannot be held accountable for data breaches, publicly or otherwise."
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.