Schneier on Security
A blog covering security and security technology.
January 21, 2009
Breach Notification Laws
There are three reasons for breach notification laws. One, it's common politeness that when you lose something of someone else's, you tell him. The prevailing corporate attitude before the law—"They won't notice, and if they do notice they won't know it's us, so we are better off keeping quiet about the whole thing"—is just wrong. Two, it provides statistics to security researchers as to how pervasive the problem really is. And three, it forces companies to improve their security.
That last point needs a bit of explanation. The problem with companies protecting your data is that it isn't in their financial best interest to do so. That is, the companies are responsible for protecting your data, but bear none of the costs if your data is compromised. You suffer the harm, but you have no control—or even knowledge—of the company's security practices. The idea behind such laws, and how they were sold to legislators, is that they would increase the cost—both in bad publicity and the actual notification—of security breaches, motivating companies to spend more to prevent them. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.
So how has it worked?
Earlier this year, three researchers at the Heinz School of Public Policy and Management at Carnegie Mellon University—Sasha Romanosky, Rahul Telang and Alessandro Acquisti—tried to answer that question. They looked at reported data breaches and rates of identity theft from 2002 to 2007, comparing states with a law to states without one. If these laws had their desired effects, people in states with notification laws should experience fewer incidences of identity theft. The result: not so much. The researchers found data breach notification laws reduced identity theft by just 2 percent on average.
I think there's a combination of things going on. Identity theft is being reported far more today than five years ago, so it's difficult to compare identity theft rates before and after the state laws were enacted. Most identity theft occurs when someone's home or work computer is compromised, not from theft of large corporate databases, so the effect of these laws is small. Most of the security improvements companies made didn't make much of a difference, reducing the effect of these laws.
The laws rely on public shaming. It's embarrassing to have to admit to a data breach, and companies should be willing to spend to avoid this PR expense. The problem is, in order for this to work well, public shaming needs the cooperation of the press. And there's an attenuation effect going on. The first major breach after the first state disclosure law was in February 2005 in California, when ChoicePoint sold personal data on 145,000 people to criminals. The event was big news, ChoicePoint's stock tanked, and it was shamed into improving its security.
Next, LexisNexis exposed personal data on 300,000 individuals, and then Citigroup lost data on 3.9 million. The law worked; the only reason we knew about these security breaches was because of the law. But the breaches came in increasing numbers, and in larger quantities. Data breach stories felt more like "crying wolf" and soon, data breaches were no longer news.
Today, the remaining cost is that of the direct mail campaign to notify customers, which often turns into a marketing opportunity.
I'm still a fan of these laws, if only for the first two reasons I listed. Disclosure is important, but it's not going to solve identity theft. As I've written previously, the reason theft of personal information is common is that the data is valuable once stolen. The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it's to make it difficult to use.
Disclosure laws only deal with the economic externality of data owners protecting your personal information. What we really need are laws prohibiting financial institutions from granting credit to someone using your name with only a minimum of authentication.
This is the second half of a point/counterpoint with Marcus Ranum. Marcus's essay is here.
Posted on January 21, 2009 at 6:59 AM
A very timely blog, considering the Heartland Processing Systems breach.
They claim that issuing a press release stating that probably tens of millions of credit card transactions were sniffed from their systems on the same day as the inauguration was just coincidence... mmkay guys, whatever you say.
It was a coincidence in the sense that, coincidentally, an invasion from Mars didn't happen the previous day, so they needed to bump the disclosure by 24 hours.
It might be important to note that in many cases companies are still not disclosing data loss because in some cases it's cheaper to pay the fine _if_ they get caught than to pay the cost in loss of business if they disclose.
I write security policies for companies, and in some cases when I get to the disclosure section, I end up having a quiet discussion with their legal counsel where certain 'adjustments' are made to the policy.
We need someone to tell the UK government about this sort of thing. We hear about sensitive information leaks from government departments on a weekly basis at least but I bet it's only the tip of the iceberg that gets out. Without people hearing more about this they'll never understand the issues around it. If the government would just say "we lost x data today and this is what we're going to change to ensure it doesn't happen again" I'd be a little happier.
Many years ago, my wife's wallet was stolen. Someone then used her credentials to do several things, including open a Best Buy charge account and buy $2500 worth of stuff. We filed a police report, wrote letters to everybody who said we owed them money, and that was the end of it. No trace of it on our credit reports. The loss of the pictures in the wallet was more annoying.
So, it isn't necessary to forbid companies from extending credit to people with shaky credentials. It's necessary to give the theft victim quick and easy ways to demonstrate that he or she didn't ask for or use the credit, and isn't responsible. (It seemed odd to me that Best Buy would give credit freely, then write off $2500 of charges easily, but I figured they knew their own business best.)
The harm that impersonation does is when the victim has trouble establishing lack of debt, or can't get negative references out of his or her credit report, or winds up dunned by collection agencies, or has their identifying information sitting in a police file for later misinterpretation. That specifically is what needs to be addressed, not corporate decisions on the optimal freedom in extending credit.
I haven't read the CMU study, but my first thought was about endogeneity. Perhaps the states that did pass laws had a bigger problem with identity theft in the first place. If this is true, and they didn't use instrumental variables, then we don't really know the real effect of the disclosure laws on identity theft. We can never know what would have happened had they not passed the laws.
It seems evident that the incentive structure is not right yet.
Legal liability towards customers whose information has been compromised, to the tune of (say) $100/customer/(month elapsed between compromise and disclosure), might focus more assiduous boardroom attention on security.
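As a back-of-the-envelope sketch of how fast that proposed formula scales (the $100/customer/month rate is the commenter's suggestion; the breach size below is purely illustrative):

```python
# Sketch of the liability formula proposed above: $100 per affected
# customer for every month elapsed between compromise and disclosure.
# All figures are illustrative, not taken from any actual statute.

def breach_liability(customers, months_undisclosed, rate=100):
    """Total liability in dollars under the proposed scheme."""
    return customers * months_undisclosed * rate

# A breach of 145,000 records, disclosed three months after compromise:
print(breach_liability(145_000, 3))  # 43500000, i.e. $43.5 million
```

Even at modest per-customer rates, a delay of a few months on a large breach would dwarf the cost of the notification mailing itself, which is the point of tying the rate to elapsed time.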
Is data localized enough today to show State-level granularity? Or is so much stuff shared across so many databases among so many companies as to nullify the effect of State laws?
Or make the company that suffered the breach liable for any damages sustained by individuals because of identity theft caused by said breach. This is probably hard to prove in court, but even if a few people sue and win, it will incentivise companies to invest in better security.
The followers of this blog, of all people, should understand the difference between identity theft and credit card theft. Having your CC # stolen is a real hassle, but nothing compared to having your identity stolen.
What surprises me about the effect of breach notification laws is that in many cases, they seem to have the opposite of the intended effect. A public relations disaster actually motivates people in administrative capacities to passive-aggressively undermine their own systems of accountability, because breaches that go undetected need not be reported. The current laws in many states rely upon self-policing, and that implies a cost that must be borne by the company itself, a significant disincentive to do it right. There is no such thing as a popular, money-losing department.
Because no system is entirely secure, and delegation of responsibility places the responsibility for data loss on the shoulders of people who, in many cases, have no practical understanding of how to properly secure a data system, you'll find instances of middle managers willfully undermining their own audit trails, refusing to observe security best practices, engaging in security theater, making massive expenditures on nonsense products, doing everything they can to outsource liability, and generally walling themselves off from reality. There's nothing worse than working in a security role for someone who sees reality as a threat to their own livelihood.
"The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it's to make it difficult to use."
-Amen to that!
So I just had my identity stolen, and it amazes me that people got $10,000-plus in instant credit using inaccurate data about me (wrong DoB, wrong phone).
A lot of identity theft could be stopped if there were some waiting periods for "instant" credit (24 hours) and/or more verification at the time of grant (e.g. take a photo + thumbprint of the requestor). Even though the thief used my data relatively locally, law enforcement cannot (or will not) follow up since it is "outside their jurisdiction" and they have nothing to go on from the banks to find the person.
Better (self-)regulation of instant credit would reduce a lot of fraud and identity theft.
Our head of IT Security (of a major telecom) told us once, "We have one key metric: Don't show up in the Wall Street Journal for a security breach."
Unfortunately, this is how many executives think. We had been pushing to implement laptop encryption for years. Guess what it took to get it implemented? Some HR moron losing their laptop with all employees' info on it, and having to disclose that publicly.
These laws work because they get attention to security issues at very senior levels.
In a land of no responsibility and multi-billion dollar government bailouts for the largest failures in history, why would anyone waste money on data security? There is no shame and no consequence, so why bother?
If we really wanted a change, we'd have to play RIAA style or HIPAA style - a rather high dollar penalty per record lost. That would motivate companies a bit more and possibly pay off the national debt in the process.
Identity theft, loss of personal data, or password cracking - like many other security related issues - are also used by insiders as a means for playing corporate politics.
Two clowns at a place I worked who were getting tired of my shutting down their clandestine "warez" sites and installing TCP wrappers on the systems (making their activities more trackable) helpfully offered to run "crack" on the password file. Then, before I was able to do anything formally about all the cracked passwords (of which there were hundreds), they "anonymously" notified the press.
After some private conversations with the author of the resulting article, the damage was suitably undone, but it took time and effort I shouldn't have had to spend; and nothing was done at the corporate level to discipline the creeps in question.
Note well that the agenda did not take into account potential damage to the employers or the particular users; all that mattered was that the sysadmin would be given a black eye in the press.
"They looked at reported data breaches and rates of identity theft from 2002 to 2007, comparing states with a law to states without one. If these laws had their desired effects, people in states with notification laws should experience fewer incidences of identity theft."
Not sure I understand this. You might expect the number of *actual* security breaches to go down if the law achieves its intended effect by forcing companies to take security more seriously.
However, you would also expect the number of *reported* breaches to go up, as companies are now legally required to report things when they may have kept silent before.
The net effect could go in either direction, but I would not be at all surprised by an increase in the number of reported breaches.
Oh sorry, I see it now. Ideally, you would hope for the number of self-reported breaches of company security to go up, at least relative to the number of actual breaches, but you would hope for the number of identity thefts reported by individuals to go down.
The law wasn't intended to make fraud incidents go down, but rather to limit their impact by making consumers more vigilant and able to detect incidents faster.
I think that the paper does an embarrassingly poor job of laying out that what's being studied is not the intent of the law, but what they're able to look at.
Nearly all data in commercial and government systems are "exposed" or "compromised" to one degree or another virtually all the time. Should each citizen therefore be mailed 100 breach notices every day? Legally and ethically speaking, we do not have a competent definition of what is and is not a security breach. The result is confusion and excessive anxiety on the part of data holders, data subjects, legal authorities and the media. When essentially all data holders are shamed all the time, the shame becomes meaningless and the notices are just noise. http://hack-igations.blogspot.com/2007/09/... –Ben
This is an extension of Bruce's oft-made point that when a threat becomes routine, that's when it's really dangerous (i.e. car crashes vs terrorist attacks).
Losing millions of people's data is now routine.
How strong is the correlation between "state in which someone lives" and "state under whose laws that person's data is held"?
It seems to me that comparing ID theft rates between states with different laws isn't really going to measure the effectiveness of those laws in reducing ID theft so much as the correlation between which state's laws the data is held under and where the person is actually located.
I agree with Bruce's point,
"The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it's to make it difficult to use."
And I have some suggestions on that score.
1. Give ownership of the data back to the individual, so that they can make changes to their credit files etc.
2. Make the organisations responsible for the data loss not just post a letter to the person, but also put an indicator to that effect in the person's credit file, for every occasion, with all major credit-file organisations.
3. Make all credit-offering organisations check applications against the credit files prior to making an offer.
4. For all credit applications, have a postal warning/verification process.
5. Make insurance mandatory for all organisations holding personal data, and remove the ability for organisations to hide behind cut-outs and offshore companies etc.
6. Remove "Crown Privilege" from all Gov Depts holding data on individuals.
7. Make compensation for personal loss compulsory, with additional damages equivalent to the cost of repairing a person's credit.
8. Make the process of claiming compensation simple and low-cost.
The effects of these will hopefully make organisations take the liability back in house.
And if they find that their method of business is not cost effective under these rules then "tough" they need to either change it so it is, or change to some other mode of business.
I agree that the best solution would be better authentication, but it raises a bunch of different questions.
I wonder if one problem is that companies won't rationally improve their security unless the incentives are just gigantic. If so, one approach is regulation like Sarbanes-Oxley or the PCI DSS that forces the company to take specific good measures. Another is to make security cheaper. Develop software right, get it shipped in a secure default configuration, teach students security principles and IT people security practice.
The US has a giant information security agency that could probably help the private sector a lot. No joke -- fraud is a multi-billion-dollar problem; it could be worth dedicating a tiny slice of the INFOSEC budget to private INFOSEC.
The study also ignores that the California notification law might be reducing breaches nationwide since it provides an incentive to any company doing business in California -- in other words, any big company.
Shifting some liability from merchants is still interesting. Even if we had better authentication for CC purchases, big insecure databases would still be bad thing. And it's intuitively unjust for the merchant to get stuck with a loss caused by the bad security of a data-breacher and a credit-card company.
Maybe the cost of notifying people of security breaches will act as a bit of a deterrent. Postage to millions of customers adds up. When they all call the customer service line, that costs even more.
The timeliness is impressive.
There is a way to help the disclosure rules gain weight: campaign against them. I know I'm alerting all of my friends to Heartland's failure.
The idea is patterned after China, where a single verbal naysayer can replicate their message through the mass of cellphones and drive a company under overnight. If sufficient word-of-mouth gets around, it will get to the people who can choose to work with Heartland. If they stop choosing Heartland, the company will take notice.
@DigitalDoc: That's definitely a problem. I consulted with a health care company that had absolutely no data protection in place, and when I pointed that out, the engineers just shook their heads and told me management had decided that the cost of protecting the data was more than getting fined every single day for the next ten years...
"the cost of protecting the data was more than getting fined every single day for the next ten years"
That's quickly changing. California SB 541 and AB 211 are a good example of the public response to this kind of thinking. There are now fines of up to $25,000 for each patient record accessed, used or disclosed in an unauthorized manner, plus administrative penalties of up to $100,000. Disclosure of a breach is required within five days, or there's a fine of $100 per violation for each day late, up to a $250,000 maximum. Illegal use of patient information for financial gain can bring $250,000 per violation. And finally, patients can claim up to $1,000 in damages even if a breach has not yet been proven harmful to them.
These penalties are obviously a response to the management that tried to calculate risk the way you described.
In fact, if you look at the comments by the head of UCLA Medical after the last couple of years' privacy fiascos, you can understand why Schwarzenegger dropped the hammer with these new fines.
I'll present more details in my webinar on health-care privacy and compliance next week. :)
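To make the arithmetic of that late-disclosure penalty concrete, here is a minimal sketch. It assumes, as a simplification, that "per violation" means per affected record and that the $250,000 cap applies to the total; both are interpretations for illustration, not legal analysis:

```python
# Hypothetical sketch of the late-disclosure fine described above:
# $100 per violation for each day past the five-day reporting deadline,
# capped at $250,000 total. Treating "per violation" as per affected
# record is an assumption made for illustration only.

def late_disclosure_fine(days_late, violations=1, per_day=100, cap=250_000):
    """Total fine in dollars for a disclosure that is days_late overdue."""
    return min(days_late * violations * per_day, cap)

print(late_disclosure_fine(days_late=30))                  # 3000
print(late_disclosure_fine(days_late=30, violations=500))  # 250000 (cap hit)
```

Note how quickly the cap is reached once more than a handful of records are involved, which is presumably the intent: past a modest breach size, delaying disclosure buys the company nothing.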
New website at NSA.gov. Alas, same as the old site, just prettier and a bit better organized.
any comment on what Russell Tice had to say wiretapping (and will talk about again Thursday night) on MSNBC's Countdown with Keith Olbermann?
Bruce, you're missing the point in a very common way -- shaming only has a long lasting effect when it reduces buying of the product.
"Buying" is done by customers. In these cases, the consumer who bears the cost IS NOT the customer. The decision on which company is used to warehouse data is not done by the person who pays the price for breaches -- the user of the credit card.
Which makes this whole exercise pointless. Either the markets have to be regulated in order to align the customer with the consumer (make them the same person), or we must create a costly legal liability for the vendor -- either a civil liability, or a regulatory one.
"Shaming" is only of value when reputation among the customer is affected. Consumer is not generally equal to customer -- it's the mistake everyone makes, and why so many analyses are out of whack, left, right and technocratic.
Great post. Very interesting comparison results for states with/without breach notification laws.
You're right that the real result we want isn't better reporting so much as better security and compliance so we don't need so much reporting.
There's a good blog post deconstructing the myth that security processes are necessarily difficult to achieve here:
Your comment 'The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it's to make it difficult to use.' is spot on.
Unfortunately, in the area of credit specifically the industry has become very aggressive in giving out cheap, easy, credit. This makes it easier for the identity thieves to take advantage of the system. It probably has also contributed to a lot of the problems with bad debt in the industry.
I think as an industry of security professionals, we need to start providing more specific, constructive guidance on the types of steps we need to take in terms of identity, authentication, securing of information, etc. Often it is very easy for us to say 'that was a bad idea' but we need to be saying more 'do it this way'.
Breach laws are fine for what they are: an after-the-fact response. There needs to be an effort to prevent breaches, or at the very least make them harder to pull off. Clearly retailers would rather clean up afterward than build in measures to protect data and secure systems. Many breaches occur because companies are NOT doing the basics.
Are there any sites that provide notification of data breaches? I have to wonder how such sites would keep their information up to date (e.g., official company disclosures, hearsay).
I wonder if lawyers could get involved. A class-action lawsuit could be a deterrent.
I recently heard something on NPR about the rates of (1) identity theft and (2) unemployment. #2 might have been "economic downturns" or something like that. Apparently there was a lot of ID theft back in the dot-bomb days, and it declined pretty much until the current economic collapse. The guy on the radio seemed to think that criminals have a harder time with the usual kinds of money-making activities, so they have to work a little harder -- ID theft is apparently more difficult than peddling drugs? This also makes it more difficult to determine the effect of various laws on ID theft rates.
I have had a look at the share value of TJX. It seems their well-known security breach didn't impact the share value. Does anybody know about any research in the correlation between security breach publication and share value?
After some additional conversations with the authors, I still have issues with some of what's in the paper, but would like to apologize for calling it "embarrassing." I had missed an important element in my analysis.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.