Schneier on Security
A blog covering security and security technology.
October 13, 2005
Tax Breaks for Good Security
Congress is talking -- it's just talking, but at least it's talking -- about giving tax breaks to companies with good cybersecurity.
The devil is in the details, and this could be a meaningless handout, but the idea is sound. Rational companies are going to protect their assets only up to their value to that company. The problem is that many of the security risks to digital assets are not risks to the company who owns them. This is an externality. So if we all need a company to protect its digital assets to some higher level, then we need to pay for that extra protection. (At least we do in a capitalist society.) We can pay through regulation or liabilities, which translates to higher prices for whatever the company does. We can pay through directly funding that extra security, either by writing a check or reducing taxes. But we can't expect a company to spend the extra money out of the goodness of its heart.
Posted on October 13, 2005 at 8:02 AM
"we can't expect a company to spend the extra money out of the goodness of its heart."
Of course not, but that's what laws and regulations are for: to force them to do what they would not do otherwise. If lax security is hurting the nation as a whole, Congress would be within its rights to force businesses to conform to higher standards without compensating them for doing so. Are companies getting handouts to comply with the Sarbanes-Oxley Act?
How about taxing those with poor security more?
This looks problematic. There is no universal standard (and any standard that could be drawn up would be in constant technological flux) that can differentiate a 'good security' company from a 'bad security' company without a deep audit. Do we need government-sponsored security audits now?
A better choice would be to structure liability so that security failures are expensive.
I think this is a great idea, but it is going to open up a can of worms. Who gets to decide what 'good' security is?
Furthermore, who is going to maintain those standards? SHA-1 was considered a good enough algorithm last year; this year it is being phased out of upcoming products in favor of stronger algorithms. NIST was planning to do this by 2010. How long would it take a less technically competent department to change standards?
There is also the entire issue of having a government baseline for 'good security'; this could become a defense in a liability suit where a company earned the stamp of approval and then got compromised because the minimum standard was not good enough to stop a certain type of attack. The average juror is hardly qualified to comment on security, and having a government supported or established baseline is going to give jurors an easy way out.
Who are going to be the people doing the assessments of security controls, and what degree of audit is required?
There are so many ways this could go wrong (or right), it is amazing to think that politicians are thinking about it.
The reality is that companies that don't want to bother will go through just enough of the motions to get approved. But in many cases companies have employees who want good security but cannot cost-justify it. This kind of measure allows those who want good security to get a budget and to take the measures they believe will achieve their goals.
Bah. It smells to me like a deal to give companies a "due diligence" defense against liability if they can show they qualified for the tax break (whether or not what they did was actually effective). And of course, another part of the sweetheart deal is that the bureaucracy gets to grow to determine who qualifies and how. And the politicians get to pose as having "done something" about security.
Bottom line: The classic "Iron Triangle" once again.
"could" be a meaningless handout? this administration is all about direct transfers of tax monies to large corporations. of course it will be a meaningless handout.
Also, there is an even more telling section in that article:
"Lawmakers also plan to address liability concerns, he said, as they want to allow companies to take some risks in coming up with new cybersecurity tools without having to worry about being sued if they fall short."
So in other words, they want to reduce liability if people create flaky systems. This would be like the FCC coming out with a regulation that says "If you come up with an innovative drug that can have a potential positive effect, you can't be sued if it goes horribly wrong". The makers of Thalidomide would have loved such legislation!
In order for security to advance, people (and companies) need to be able to trust the products. There is already enough snake oil out there; if these types of protections are put into place, it will only become more prolific.
I really can't see that working. As others have said, you can't quantify security. For example, say one of the requirements is that the company use a firewall. Who says what firewalls are allowed to be used? Who audits that the firewall has been correctly set up? Who ensures that data aren't taken out on a USB pen drive instead?
"I think this is a great idea, but it is going to open up a can of worms."
Well, good ideas usually don't open up a can of worms ;-)
"I am a CMM Level 5 organization"
Substitute SCC ("secure computing certified") for CMM.
Sure, there will be problems with this... companies will buy ratings from unscrupulous auditors, for example, but the underlying principle is fine. I agree with Bruce that the underlying problem is externality. You can remove it with regulation, you can remove it with liability laws, and you can lessen it by providing a financial incentive (essentially, turning a cost center into a profit center). I suspect that in the long run we'll wind up using all three in the IS/IT industry; we use all three everywhere else...
when the corporate welfare gravy train approaches your station, you can hear it a mile away. as an american taxpayer, i don't want to pay more for your good security; i want you to get hammered in court if your poor security is breached. let me put this in terms the proponents may be able to relate to: i am also an american driver. should i get a tax break for not running over you in the crosswalk? after all, your life is an externality to my economic decision-making calculus.
I don't like it much. First, we all know that security is as much about people as about technology. The story of the guy 'taking' €5M out of European banks a few days ago is just one example.
Unfortunately, I fear that the people who analyze these security levels will most probably look only at the technological aspect.
Let's make insurance companies more aware of what is security and make it so that they increase their fees to companies who don't have good security.
Let's force companies to release security notices when they have security issues to make the public aware of the issues. If they hide these notices, they should be punished heavily.
Let the market decide. Don't make the government involved too much.
"How about taxing those with poor security more?"
That would accomplish the same thing.
I'm not a political expert. The exact mechanism of the system doesn't matter to me. It could be a tax break, an extra tax, a fine, a jail term, whatever. I just want the externality dealt with.
Certainly different political parties prefer different mechanisms for dealing with externalities, and I'm okay with them fighting it out.
"Better choice would be to structure liability so that security failures are expensive."
I started out believing liability is better than regulation, but now I think a combination of the two is best. But honestly, this is not my area of expertise.
"I think this is a great idea, but it is going to open up a can of worms. Who gets to decide what 'good' security is?"
100% true. This is indeed a problem. But we seem to manage with fire codes and the like. True, it's a sloppy system -- OSHA could use some improvement -- but in the main it works.
This would create a very brittle system, because it would create a de facto security system nationwide.
1) Congress and the IRS specify what is secure
2) All companies default to that system to qualify for the tax break
3) Any vulnerability in that one system eventually gets uncovered and exploited on a nationwide basis.
The article jumps off with "'My fear is if we do [heavy-handed regulations], we'll stifle innovation,' [Rep. Dan Lungren] said. 'How can we predict what the best way will be (to manage cybersecurity) in most of these instances?'"
I agree 100% that the externalities must be addressed, and incentives seem to be the path most favored by American companies (just watch "The Corporation" for an interesting profile).
But I am curious why the current American ruling party would treat security liabilities any different than, say, environmental concerns? Both are external and cause residual harm to citizens, no?
I also disagree 100% with the idea that regulation stifles innovation. Quite the opposite: innovation traditionally comes from incentives, whether they are regulation-based reasons for thinking about things differently or otherwise.
Cars are an excellent example of this phenomenon, since there have been amazing innovations in gas engine design directly as a result of regulations on emissions, not to mention safety. On the other hand, diesel has been less regulated until now, and innovation there has accordingly been much less impressive. In fact, I suspect the next generation of diesel engines (2007) will be far superior to their predecessors in every way due to the recent clean-air acts in several states that banned high particulate matter and excessive NOx emissions (for passenger cars, of course, since large "work" vehicle manufacturers were able to lobby against the regulations). But I digress...
The latest boom in "data privacy" technology is a direct result of heavy-handed regulation of personal identity information, including the health care and financial data of citizens handled by corporations. As long as the regulations exist the technology will continue to evolve and benefit not only the citizens with data at risk, but the companies who come to better understand their information management/risk.
It's still a far better idea to do as (I believe) Bruce has often proposed: make sure liability accrues to the negligent.
Wielding the tax stick is commendable, but it has the undesirable side effect from which all well-meaning (or impressive-looking) but ineffective security measures (which Bruce likes to call "security theatre") suffer: great inconvenience to everyone except the criminal.
Nothing but legal binding of financial liability to the negligent or malfeasant will serve justice exclusively, and lest you be concerned that legal measures operate too late to be of concern, remember that businesses already have evolved a very effective strategy to deal with financial liability: insurance. And because insurance is a business - a big business, too, I might add - then the job of properly assessing risks is a clear bottom-line issue for them, and the oldest law in the business world (next to the law of supply and demand, which is its progenitor) is: if there's a buck in it, it will happen.
And, of course, the societal benefit of insurance - a by-product of its direct financial stake in its customers' risk profiles - is that it sharpens the insured's focus on preventive measures that mitigate or avoid risk, by means of its rate structure. In effect, beyond the pooled-risk business model, the service that insurers perform is the calculation of the conversion of costs that are unquantifiable and unpredictable to those that are quantified and fixed. Needless to say, if this were not such a valuable service, it would not be such a big business.
An excellent point. Indeed, one of the most memorable points made by an architect describing his craft is that limitations liberate by defining the problem.
Nevertheless, remember that innovation costs. For the innocent, these are extra costs; for the guilty, they're just a cost of doing business.
"I started out believing liability is better than regulation, but now I think a combination of the two is best."
Regulations are meant to remap liabilities and ensure they are dealt with by the most appropriate decision agent. In other words, the point of regulation is to clarify who officially bears the liabilities, rather than letting them be dumped by the most clever (legally savvy?) onto the least. For example, if a vendor sells you software without source, should this be taken to imply a certain level of quality, or should they be allowed to disclaim that nothing is implied and they hold no liability? This could also be handled by a certification process, where the source has been reviewed and approved and therefore does not need to be disclosed because a third-party attestation comes with it.
Punishing those with lax security would also be nice. For example, you let someone's unencrypted SSN out into the wild? $10K per SSN lost: 50% to the government, 50% (tax-free) to the person whose SSN it is.
The contrast between regulation of security practices on the one hand, and increased liability for security breaches on the other, is a good example of predictive vs. reactive security.
I think that the latter will be much easier for the government to get right. It's easier to tell whether a security breach has happened, than to tell how likely it is for one to happen.
This would leave prediction to the corporations. They would have to study good security practices in order to avoid the liability. I think they'll need the flexibility to try various approaches to this, because we know it's a hard problem.
I think we're overthinking this. I believe no tax is fair. The only way to make it less unfair is to have everybody pay equally. If the tax is 30%, then that's the tax. No funny stuff. Using taxes to punish or reward people barely stops short of blackmail or favoritism. Simply allow the courts to levy fines against poor performers and allow insurance companies to give credit to policyholders who take good measures.
Ok, this is what I was talking about earlier. Environmental decisions are similar to information security decisions in that they often handle issues of externalized and residual harm:
"The Bush administration on Thursday proposed changing environmental rules to give U.S. coal-fired power plants more leeway to expand aging facilities without installing expensive equipment to cut air pollution. Utility industry officials applauded the plan, but it drew fire from environmental groups and some states, who said it would make the air dirtier and give energy companies a benefit they failed to get in congressional legislation last week."
So, is it fair to say that even if the US Congress does decide to federally regulate security in order to help secure information (and fix the market by shifting liabilities) the Bush Administration will swing into action for corporations and try to block the measure?
Okay, I'm sorry. But this smells like it has Microsoft all over it.
As in... "Hey Dub'ya (et al.), set up some kind of mandate forcing companies to comply and get a nice tax break that we'll work out in the wash later. We'll get them dang upgrades and OneCare subscribers! We'll turn all them NoOneCares to We'dBetterCare!"
1) Numerous people have suggested liability as a better solution. Question: How does a company show that it was not negligent in a security breach, and is the process any different than proving that it should qualify for a security tax break?
2) A tax break is not the same as a handout. Reducing tax liability is not the same as handing out already collected tax money.
I think there is another point to this -- security doesn't come cheap, and we either pay up front (through paying an increased cost for the goods and services the company produces and sells), or we pay through tax breaks. I, for one, prefer the former -- let them factor the cost of security into the cost of doing business. After all, that is what I as a home user have to do (security is part of the cost of owning a home PC).
The FCC (Federal Communications Commission) isn't in the business of regulating drugs in the USA -- that would be the FDA (Food and Drug Administration).
> as an american taxpayer, i don't want to pay more for your good security, i
> want you to get hammered in court if your poor security is breached.
If you have a 401k or 403b, or invest in mutual funds (or have money in a bank, for that matter), you may very well wind up paying *lots* more for someone else's bad security than you think.
In fact, if you live in the U.S. and some city- or state-owned computer system winds up being defended in court, you're hemorrhaging your taxpayer money all the livelong day.
If your bank gets hammered in court for having its poor security breached, it may wind up declaring bankruptcy. Then you get only up to $100,000 of your money back (which, by the way, would come from the FDIC, which you as an American taxpayer subsidize).
The car analogy doesn't really work, because the fault/liability equation really only applies to the driver and the victim of the accident. When companies are sued and have to shell out big payouts, the money comes from the stockholders, most of whom had nothing to do with the decision making process.
> I believe no tax is fair. The only way to make it less unfair is to let
> everybody pay equally.
That's pretty simplified. How do you define "equally"? By a strict ratio of how much they pay vs. how much they earn? How about how much they pay vs the actual benefit they get from chipping into the system? Does the benefit need to be realized or just potential? Is it unfair to have everyone pay into a disaster relief fund if some people never have a disaster? Etc.
Flat taxes are emotionally cuddly ("Everybody pays the same!") but intellectually very suspect.
"money comes from the stockholders, most of whom had nothing to do with the decision making process"
Yes, those poor people who don't read the EULA when they buy something and find out it's horribly broken. Wait, am I on the right blog entry?
Stockholders buy stock, usually through a decision-making process that involves evaluating a company's performance metrics and past reports, as well as its appetite for risk.
Transparency of that company, however, is a very separate issue as the Enron debacle demonstrated.
"the fault/liability equation really only applies to the driver and the victim of the accident"
I have one word for you: Firestone
Ok, two words: Ford
"'It’s about externalities – like a chemical company polluting a river – they don’t live downstream and they don’t care what happens. You need regulation to make it bad business for them not to care. You need to raise the cost of doing it wrong.' Schneier said there was a parallel with the success of the environmental movement – protests and court cases made it too expensive to keep polluting and made it better business to be greener."
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.