Schneier on Security
A blog covering security and security technology.
June 15, 2009
The "Hidden Cost" of Privacy
Forbes ran an article talking about the "hidden" cost of privacy. Basically, the point was that privacy regulations are expensive to comply with, and a lot of that expense gets eaten up by the mechanisms of compliance and doesn't go toward improving anyone's actual privacy. This is a valid point, and one that I make in talks about privacy all the time. It's particularly bad in the United States, because we have a patchwork of different privacy laws covering different types of information and different situations and not a single comprehensive privacy law.
The meta-problem is simple to describe: those entrusted with our privacy often don't have much incentive to respect it. Examples include: credit bureaus such as TransUnion and Experian, who don't have any business relationship at all with the people whose data they collect and sell; companies such as Google who give away services -- and collect personal data as a part of that -- as an incentive to view ads, and make money by selling those ads to other companies; medical insurance companies, who are chosen by a person's employer; and computer software vendors, who can have monopoly powers over the market. Even worse, it can be impossible to connect an effect of a privacy violation with the violation itself -- if someone opens a bank account in your name, how do you know who was to blame for the privacy violation? -- so even when there is a business relationship, there's no clear cause-and-effect relationship.
What this all means is that protecting individual privacy remains an externality for many companies, and that basic market dynamics won't work to solve the problem. Because the efficient market solution won't work, we're left with inefficient regulatory solutions. So now the question becomes: how do we make regulation as efficient as possible? I have some suggestions:
- Broad privacy regulations are better than narrow ones.
- Simple and clear regulations are better than complex and confusing ones.
- It's far better to regulate results than methodology.
- Penalties for bad behavior need to be expensive enough to make good behavior the rational choice.
We'll never get rid of the inefficiencies of regulation -- that's the nature of the beast, and why regulation only makes sense when the market fails -- but we can reduce them.
Posted on June 15, 2009 at 6:45 AM
My opinion is the battle/war is lost. How about we enable an apparatus such that we can "proxy" our identity at all times ... a grown-up avatar that we use. That is the identity we use, and is linked to the real person with PKI. That identity establishes a credit rating, and hence we protect it, but never has to be traced to us, yet we can operate it through PKI.
Could you describe a couple of examples (hypothetical is okay) of "broad" and "narrow" privacy regulations for us?
# It's far better to regulate results than methodology.
# Penalties for bad behavior need to be expensive enough to make good behavior the rational choice.
Take these two together and imagine a small business that, with good intentions, chooses a methodology that fails to produce required results. Are the penalties large enough to bankrupt the company? Do we declare that if you can't devote $100K/yr to info security, you have no right to be in business? Does the regulation, in effect, require all small businesses to contract with info security specialists to handle all info processing?
All regulations tend to have the effect of screwing the small business, unless small businesses are specifically excluded.
I don't pretend to know the answer, but there are a number of possible "answers" that simply move the pain from one victim to another.
If someone opens a bank account in my name, and I find out about it, I'll get the address corrected and have the replacement card sent to me. And thank them very much.
@the dude: and later be prosecuted for receiving stolen goods?
The problem is that in this day and age, a small business can leverage an enormous amount of private data, so it makes no sense to specifically exclude them. It seems to me that you could make this work by making the penalty a function of the size of the breach and the company's revenue.
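A hypothetical sketch of what that could look like (the per-record fine and revenue-cap fraction are invented numbers for illustration, not from any actual regulation):

```python
# Hypothetical sketch: scale a privacy-breach penalty by both the
# size of the breach and the company's revenue, so small and large
# businesses both feel it proportionally. All constants are made up.

def breach_penalty(records_exposed: int, annual_revenue: float,
                   per_record_fine: float = 25.0,
                   revenue_cap_fraction: float = 0.04) -> float:
    """Penalty = per-record fine, capped at a fraction of annual revenue."""
    raw_fine = records_exposed * per_record_fine
    cap = annual_revenue * revenue_cap_fraction
    return min(raw_fine, cap)

# A small shop leaking 1,000 records pays the full per-record fine;
# a giant leaking 10 million records instead hits its revenue-based cap.
print(breach_penalty(1_000, 2_000_000))           # 25000.0
print(breach_penalty(10_000_000, 1_000_000_000))  # 40000000.0
```

The revenue cap is what keeps a single breach from automatically bankrupting a small business while still making the penalty painful for a large one.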
"Does the regulation, in effect, require all small businesses to contract with info security specialists to handle all info processing?"
If that's what's needed to get the result, then yes.
Why should my privacy be compromised because some company can't "afford" to keep that data private? That's the cost of doing business with private data!
I am in this situation. It's easy to deal with. All personal data (CC details etc.) is handled through a 3rd party. I am in the EU; I can't afford not to comply with the privacy laws, no matter how small I am.
while i'm usually a fan of state-centered laws, for privacy we should have a general, national-level standard that doesn't rely on industry self-regulation.
europe's opt-in laws are a much easier to enforce/maintain method than what we have here.
what MA just did with privacy (mandating specific techniques to protect pii) is crazy over-compensation and is entirely un-maintainable. what CA is doing is much better, but is still a patchwork.
Bethan above is correct in that specifying how it should be done by law (the USA way) is going to be a failure almost before the consultation has started.
The European method is usually to specify a legal and technical framework into which specific types of requirement are put. It is then up to standards organisations to make more definite "tests" etc. to verify compliance.
Also I will hark on about insurance (as I usually do): it has not happened because there is no effective market for it as such. Legislation with upper-limit penalties per offence will make the possibility of such a market possible.
I feel that it is only with such a market that it is going to be possible for even large companies to take rational decisions about how to keep personal records and other identifying data secure.
We in the U.S. do that as well (on the national level). Congress will pass a fairly broad law, and the so-called "implementing regulations" created by outfits such as the Federal Trade Commission or the sundry banking regulatory bodies will - so the theory goes - fit into it and provide particular guidance as to what is and is not permissible.
And what's the meta-meta problem?
Individuals don't take part in most markets. We are objects being bought and sold between larger entities. So we can only discuss patches to a system which is structured in such a way as to consider us human beings as resources to be bought and sold.
We see some patches in the EU where there still exists some "market" where individual interests are still reflected -- most EU countries still have political parties to some extent, instead of the captured entities called political parties in the US.
I doubt that the US is capable of implementing any kind of meaningful privacy laws -- structurally, individuals are almost completely irrelevant.
Making the penalty a function of the revenue would only lead to big companies outsourcing their data to dedicated small data-farming or -selling businesses that mysteriously don't make any profit at all. I think the penalty should depend on the breach alone. If you can't operate your machinery safely, then don't operate it at all.
IT security doesn't cost the world, especially if you consider that a small business should have a smaller IT footprint that is easier to maintain, as well as less sensitive data that can leak. It shouldn't be the customer's problem if you can't afford decent infrastructure.
If the risk of getting sued is too great, then just avoid saving private data altogether. Information that you don't have in the first place can't leak. And that's the best possible security one can wish for.
The Hartford in Conn. actually offers "data privacy coverage." The limits seem to be a bit low, and coverage details aren't all that apparent in the news release, but this might benefit small to medium-sized business:
"Reimbursement for expenses related to responding to a major privacy event. Some examples include notification of affected parties, costs for managing the crisis, data privacy regulatory fines, cost associated with credit monitoring, and investigation of the event by outside experts."
"Do we declare that if you can't devote $100K/yr to info security, you have no right to be in business? Does the regulation, in effect, require all small businesses to contract with info security specialists to handle all info processing?"
Yes. Absolutely. If that is required to protect information and customer data.
If someone opens a bank account in my name, it's the bank's problem, not mine.
Why should the bank be allowed to pass off this fraud, due to their own incompetence, to other customers? This is part of the whole problem. Opening a "bad" bank account is no different from cashing a "bad" check. The bank is responsible for verifying the identity of the person opening a bank account the same way they are responsible for verifying the identity of a person cashing a check. If the bank gets it wrong (in either case), then they are liable for the bad account or the bad check.
The problem is that (for some unknown reason), banks have been allowed to pass their fraud losses for bad accounts directly onto their customers.
When banks are truly held responsible for verifying the identity of anyone opening a bank account, then this problem will be reduced to acceptable levels (will never be eliminated). Will there be a "cost" to this? Absolutely. Banks will need to spend more to verify their customer's identity. Customers may have a waiting period when opening a bank account, and in-person verification may be a requirement.
The same should hold for other businesses that store/use personal information. When this is no longer an externality, and businesses are held responsible, the problem will go away.
Unfortunately, the only way I see this being fulfilled is through government regulation, by forcing the financial incentives (i.e. fines) necessary to change this from an externality.
As Samsam points out, any regulations hit small businesses harder than large ones, because the cost of complying with regulations isn't linear with the size of the business, and in a results-oriented regime with significant penalties there is no good way to manage the risk.
Moreover, I've been reading a few stories about people with low-end websites screwing up due to basic misunderstandings.
Possibly, what we need to do is encourage hosted solutions, which will be run by people who actually know what they're doing. I don't know the best way of encouraging that, or keeping the cost low enough for very small businesses and/or hobby websites, but I don't see a better way out of the problems.
there are plenty of methods to control costs of pii protection, especially considering that most pii wrecks occur from inside the business. seems like it would be easier for a small business to maintain a secure pii system, as there would be fewer employees and machines to monitor for screw-ups.
"We'll never get rid of the inefficiencies of regulation -- that's the nature of the beast, and why regulation only makes sense when the market fails -- but we can reduce them."
Bruce... markets fail because of regulation. Regulation distorts market equilibrium.
Markets don't fail - they are the result of thousands or millions of economic transactions. Central planning does fail, because it doesn't have any price signals to operate on.
There are some simple, low-cost solutions a small business can do to protect data.
Last year, as I was going through the process of buying a home, I thought about all the personal information my mortgage broker was dragging around on his laptop. Was his drive encrypted? No.
I tried to help him understand the value of Truecrypt... but his response was non-committal.
A simple requirement - such as whole disk encryption- should be mandatory... regardless of the size of the company.
I'm a fan of simple non-gooferment solutions. It would seem that "copyright" is an already existing "solution". If your stuff (e.g., your NAME, your SSN, your PHONE NUMBER, and your EMAIL NAME) was "yours" (i.e., copyrighted), then each entity that wanted to "copy" it (i.e., save it in a database) should have to have your permission by some type of written agreement. Seems that would prevent Credit Reporting Agencies, Google, and such from being disinterested in our collective satisfaction. imho!
@Adam M.: While it may be true that "markets do not fail", they might reach an equilibrium that is not desirable.
@ Andre LePlume,
Thanks for the update. Most of the US legislation I've had to deal with is in the Petro-Chem industry, and some of that handed down by the EPA was a bit of a joke, specifying equipment that was no longer for sale in one mandated test.
"IT security doesn't cost the world, especially if you consider that a small business should have a smaller IT that is easier to maintain as well as less sensitive data that can leak."
I'm not sure that is true. First off, in the UK most of the data losses have been due to equipment loss, or malware.
The cost of physical security is often related to perimeter security; therefore the actual cost of this for a low-rise office block would be the same as for a high-rise block with the same basic ground-floor area.
Likewise, large organisations tend not to share their premises with other businesses, so that again tends to reduce security costs.
When you talk about "malware", most of it these days arrives via the internet. The skills and equipment needed to deal with this are not something most small to medium-size businesses can easily justify as a proportion of operating costs.
However, the up-and-coming infection vector these days, although still quite small, is USB drives that people use at work and home, and for which encryption provides absolutely no protection.
Likewise, for "road warriors" encryption is often only of use if they turn the machine off. Many don't, and either let it "sleep" or put it into hibernation, which leaves the data vulnerable.
For DBs of sensitive data, the "old iron" mainframe-and-terminal model is actually very good. Up to date would be diskless clients to local servers with no external connectivity and encryption on the wire, with no USB or other ports on the diskless clients.
"The Hartford in Conn. actually offers "data privacy coverage." The limits seem to be a bit low, and coverage details aren't all that apparent in the news release"
Hmm, from what you quote, "expenses" may well not cover fines or other liability that might arise, so it might be little more than "legal costs" cover (which in and of itself might be worthwhile).
The problem that worries a lot of insurance organisations is "a small fine for each and every record lost". Let's say 10,000,000 records at $25 each; that would make a significant hole in any organisation's budget. Then of course there might well be a class action to follow from those ten million people...
As a % of the annual insurance payout these figures are small. However, unlike normal risks that spread evenly with time, you have "zero day" issues where many, many organisations could be hit badly within a couple of days, and that is scary stuff for most insurance organisations.
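The arithmetic behind that worry is quick to check (the $25 per-record figure is the commenter's hypothetical, not an actual statutory fine):

```python
# Back-of-the-envelope: a small per-record fine becomes enormous
# at breach scale, which is exactly what spooks insurers about
# correlated "zero day" losses.
records_lost = 10_000_000
fine_per_record = 25  # dollars, hypothetical

total_fine = records_lost * fine_per_record
print(f"${total_fine:,}")  # $250,000,000
```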
Ah, this is why the study of political science, economics and philosophy are a good foundation for managing technology today...
"1. Broad privacy regulations are better than narrow ones."
A balance of broad and narrow is best, as managed through different representative levels of governance.
In the US this is often enacted as a mix of State and Federal. HIPAA/HITECH, for example, are not only compatible but meant to work together with laws like MA 201 CMR 17 and CA AB 211 and SB 541. You want representation as near to the problem as possible, with roll-ups to the macro level only when efficiencies are obtainable through forms of consensus.
"2. Simple and clear regulations are better than complex and confusing ones."
Who wants confusing anything? The issue here is the level of expertise necessary for fair representation. If a professional is always required, then the cost of managing the regulations is higher. This is not something that is easily dictated away, however. Privacy in many areas is straightforward, but experts are often still required to decipher new behavior and rules especially in tech. The simple stuff should be made simple, but you need the rest referred to a forum for some kind of precedent to be established.
"3. It's far better to regulate results than methodology."
Ah, here I go back to the example of glass manufacturing in the USSR, which I've posted before. Just measuring output, without any perspective of inputs and limitations on the market, can create bad incentives and unhappy outcomes for everyone except those being measured.
"4. Penalties for bad behavior need to be expensive enough to make good behavior the rational choice."
Expensive is a great relative term, but what's the real measure/penalty? Same with the term rational...you think people are rational about doing things such that they always want what is inexpensive? I have one word for you: purses.
The bottom line is behavioral economics is slowly proving that markets operate based on very "irrational" things like fad, fashion, pride, hope, jealousy, prejudice, etc.. Did people start buying the Hummer because gas prices deflated and the weather prediction was for zero headwind? No, people often spend beyond their means and can do very surprising calculations of what is "expensive" to them.
Those who say markets do not fail are the modern equivalent of saying that kings do not fail. Brilliant, we can say that someone wins in a contest. This is the absolute lowest standard of measuring success. Once you add in things like freedom, liberty, and more modern concepts of fairness including privacy then it really matters not just whether your market is succeeding but how it operates.
@ Adam M.,
"Markets don't fail - they are the result of thousands or millions of economic transactions."
Not quite true: markets do fail, for various reasons, such as becoming no longer of value.
"Central planning does fail, because it doesn't have any price signals to operate on."
True but "central planning" and regulation are not the same.
Regulation is mostly just an additional risk in any given market, there to prevent certain undesirable features that would distort the market (price fixing, cartels) and to provide desirable features (human/animal/environmental safety, etc.).
Central planning is actually participating actively in the market by manipulating resources or tariffs. Many nations in the past have attempted to control all markets by way of the money supply and taxation.
And you could argue that since the deregulation of the money markets during the Thatcher/Reagan era, it has had the exact opposite effect of what a free market is supposed to achieve.
Anyway, I'm not keen on "economic science", as it is a term that appears to be, well... let's just say wrong, for the sake of peace and quiet ;)
"The European method is to usually specify a legal and technical framework into which specific types of requirment are put. It is then upto standards organisations to make more definate "tests" etc to verify compliance."
An interesting version of this is going on with utilities in the US today.
The Federal regulatory body for energy (FERC) has tried to regulate cyber security with a short deadline. FERC turned to a private industry organization (NERC) to publish specific guidelines. At the same time, the federal standards group (NIST) is trying to publish guidance (SP 800-82) specific to utilities. NERC is currently ahead of NIST's guidelines because of the review cycle required for public acceptance. The same happened with PCI: they pushed through the DSS regulations very quickly, and most governing bodies (except the state of MN) have let PCI DSS stand alone.
EU attempts at SOX-like legislation is another good study. It seems that the EU has let each member country come up with vastly different interpretations of what entities are regulated and how. Aside from the many different languages, which lead to variations of their own, it does not seem likely that Denmark and France will ever see a "public" company as the same thing. This makes for a very challenging test.
the NERC/FERC example is interesting. Some are apparently not all too impressed with progress, and want to exert some more federal control:
Homeland Security Committee Introduces HR 2195, a Bill to Secure the Nation’s Electric Grid
"The findings were disturbing. Most of the electric industry had not completed the recommended mitigations, despite being advised to do so by the Federal Energy Regulatory Commission and the North American Electric Reliability Corporation. This effectively left many utilities vulnerable to attacks. Furthermore, in spite of existing mandatory cybersecurity standards, the North American Electric Reliability Corporation (“NERC”) recently reported that many utilities are underreporting their critical cyber assets, potentially to avoid compliance requirements."
Business would like to paint this picture as a privacy initiative that is encroaching on normal commercial activities. That is not what is happening. The commercial activities are the item in transformation. Products and companies are being formed that use personal data for profit. Some of these products may be well made and protective of PII, others are not. Right now, the only defense for the citizen is to make these efforts as expensive as possible. Even if the privacy protection method is absurd, the very cost of it may prevent an Experian-type from investing in some horrible database. Once data has been shared between an Experian and another entity, it is unlikely to be expunged until some damage is done.
I am not anti-business, but if a company changes what it is doing with data, they rarely notify anybody. If you are lucky they will at least review the legality internally, but there will be no advocate for privacy present.
I think the way out of this mess is to create a standard way to share marketing data that protects all parties, instead of making laws and hoping that companies follow the spirit over the letter.
If you can't run your business competently then you have no business in the business.
If you are ignorant of IT, then running a business in IT is plain stupid.
@ Clive Robinson says:
"Up to date would be diskless clients to local servers with no external connectivity and encryption on the wire with no USB or other ports on the diskless clients."
Yeah, I can make my workstations perfectly secure--I just pour concrete over them until they are completely encased. Perfect security!
Oh, you wanted to actually USE the machines? Then neither my solution NOR YOURS works in the real world.
"Regulate results over methodology"
@greg: How many things should a business have to monitor and/or regulate? Each additional thing is a drag on small businesses (and it happens that, for reasons irrelevant here, I would rather favor small than large businesses). Bear in mind that computers and the Internet have been becoming more important in lots of unrelated businesses, so that you appear to be suggesting that all entrepreneurs should be well aware of IT.
I wouldn't want to deal with an ISP that didn't "get IT, but I might want to deal with a mechanic, or restaurant, or small vendor. (One of my hobbies would be in serious trouble without the ability to order over the web from what are essentially garage operations.) Some of these places really can't last with an additional $50K/year regulatory burden.
Therefore, I think it's in our interest to look for a solution that isn't going to hurt small businesses that much. It certainly is in mine.
Basic market dynamics don't work because they've been preempted by the government mandates and intrusions like social security and income tax.
Two brief points.
1. Copyright is regulation. It's an external constraint placed on an unregulated market.
2. Instead of presuming that only regulation can work, examine the thesis making customer privacy a functioning part of an efficient market. In short, give it a role just like any other marketable commodity. Or does digital technology fundamentally undermine that role, in the same way it undermines the reproduction of recorded audio and video? Be sure to show your work.
"I think the way out of this mess is to create a standard way to share marketing data that protects all parties, instead of making laws and hoping that companies follow the spirit over the letter."
One minor problem: without a law to enforce its use, not everybody will use it.
There is the old engineering joke:
"Standards are like toothbrushes: everyone agrees you need one, but nobody else wants to use yours..."
And as is often forgotten these days, as long as there is one apple in the barrel going soft and brown, the rest will surely follow in short order unless it is removed and the other apples and barrel cleaned.
One aspect of certain markets is that their efficiency is all too often better expressed as "lowest common denominator", due to the perception that this reduces cost and therefore increases profitability.
Whilst there are some markets where this appears to work (think of goods in a $1 store), the old adage about "you get what you pay for" still applies in 99% of cases.
Further, the sort of standards required need to be "built in" from the start, not added as an afterthought. So the current business models would have to change fairly drastically.
I can see the simple calculation of cost by the companies dealing in PII, and then watch the amount of money they will spend on spin and lobbying to prevent any such proposals becoming law.
It will take a fairly major event to stop the politicos and joe average listening to such spin and lobbying and do as the people (should) wish.
Relying on policies and compliance with those policies does not empower the individual at all. All it does is create artificial incentives for organizations to 'protect' privacy, or give the individual some legal handle after the damage is done. Let's all hope that works. One of the earlier commenters suggested hosted solutions, which is one way to empower the individual whose privacy is at stake: being able to choose your own data center/provider.
Technology/innovation will probably do more for you and me than a ton of legislation can.
@David: How much damage should a small business be allowed to cause when they choose not to monitor and/or regulate? Each additional thing is a drag on all the rest of us.
Between Efficiency and Complexity, the small business can do more - good and bad - than at any time in history. And this means that you can be small and good, but you can't be ignorant or uncaring, and you probably have to be pretty smart.
They aren't the only ones - it's pretty much impossible for a hunter-gatherer to survive in modern society; they need to be part of a larger group, if only in the way they participate in society, because there's no place left to just go hunt the deer, pick the berries, and set up a lean-to.
Similarly, the modern small business just has to be better than the ones in the past. And that includes how to deal with the private information of both customers and non-customers. If they were mostly getting it right, I don't think there would be any push to regulate. But .. they're not getting it right.
"Markets don't fail - they are the result of thousands or millions of economic transactions. Central planning does fail, because it doesn't have any price signals to operate on."
Now, that's just silly dogma. Have you not been paying attention to the last few years' worldwide banking/ mortgage/ derivative/ corporate collapses?
Did you oppose the "central planning" of the US banking bailout, or did you prefer the entire banking system to fail as well?
It seems everyone is pointing the finger at the other. I am getting tired of people acting like small children and refusing to take responsibility for their own actions. Yes, banks or businesses (including small ones) must be seriously penalized if they do not protect your most private data well. That must be their responsibility if they decide to maintain that data. But you have to meet them halfway. You are also responsible for protecting your data. There are infinite resources out there to help you do so (including this site). You cannot just let your child sit around the whole day playing video games, eating chips and pizza, then later blame the fast-food industry for his substantial weight. It is time to take matters into your own hands and start protecting yourself, and educating others about protecting themselves (like this site does). If you cannot even do the basics to protect your identity, I do not think the businesses/banks/etc. will help you very much.
"(e.g., your NAME, your SSN, your PHONE NUMBER, and your EMAIL NAME) was "yours" (i.e., copyrighted)"
copyright... to be PRIVATELY enforced by pinkertons.. or by 7 year old's from chad with ak-47s.
and the "not-so-hidden" cost of lack of privacy? Where is Forbes on that issue? Or am I being naive?
First off my apologies for not getting back to you sooner but a storm over London broke the mobile broadband where I am.
With regard to your comment,
"Oh, you wanted to actually USE the machines? Then neither my solution NOR YOURS works in the real world."
Whilst I would agree that your solution (embedding in concrete) makes physical access to the machines more than a little difficult, I would completely disagree with the rest of your statement.
If you go and look up most of the available studies on "office efficiency", you will see that IT in general has reduced efficiency in the major interactive task computers are used for by humans (i.e. producing documents).
In fact there are some studies that indicate office-worker efficiency peaked in the early to mid 1970s and has declined ever since.
The mid to late 1970s was the start of computers being used for general business-related tasks (i.e. not just accounting and payroll).
Which at first sight would appear to support your argument. However, the reason I disagree with you is that dealing with PII is not generally something humans have contact with after data entry (which can very efficiently be done on a terminal or diskless client, and often is).
By far the bulk of such data is dealt with in an automated fashion, and as such the machines doing the work (headless servers) often do run in a concrete box (the server room) without much human interaction (which again can mostly be done remotely).
It's horses for courses, and your argument is a little like saying to the owner of a donkey "it will never win a horse race", when all the owner actually wants to do with it is continue to take vegetables to market...
You're a good fellow and a pretty good security pro. But I believe your conclusions here are exactly backwards.
Let me try to set you straight as to why.
First you suggest that " Broad privacy regulations are better than narrow ones."
This is only partly true. PII changes, and as such specific regulations focused on that information being private and kept that way would be far better than some broad regulation that leaves too much to interpretation, especially in, say, a court proceeding.
Second you contend: "It's far better to regulate results than methodology." This is a non starter as without some means or method by which to set a privacy regulation, you cannot logically have good knowledge of any results.
@NoSuchThingAsIdentityTheft: Exactly. I've been saying just that for years. If a bank gives someone money, it is the LENDER's responsibility to get the money back from THAT PERSON. If that person gave them MY information fraudulently and the bank didn't bother to verify it, well then the bank is defective and needs to fail and be survived by a fitter organization that can confirm identity competently.
Information about a person should legally be the property of the person it refers to, not the property of someone who happens to be entrusted with it (temporarily). If I let you use my chainsaw to cut down a tree for me, it does not become your chainsaw. If I let you use my car to drive me to and from a Dr. appointment, it is not somehow now magically your car.
Similarly if I loan you (you being a credit bureau in this instance) my name + residence info + work info in order to validate financial transactions, when you have completed your task you are not allowed to give (or sell) it to someone else without my permission.
Furthermore if you (still a credit bureau) copy down my info wrong and then tell others that false info even when I tell you it is wrong, you should be financially liable for any inconvenience I incur through your ineptitude.
@bob: "If a bank gives someone money, it is the LENDER's responsibility to get the money back from THAT PERSON. If that person gave them MY information fraudulently and the bank didn't bother to verify it, well then the bank is defective and needs to fail and be survived by a fitter organization that can confirm identity competently."
A few days ago, someone in front of me in a checkout lane used an ancient form of payment; if I remember correctly, it is called a check. They IDed the person, made sure they were the person authorized to use the checkbook, and wrote down their driver's license number to make sure they could track them down if the check was bad. Why? Because it was their problem if it bounced and they would have to eat it.
Then I checked out, and handed them my credit card. They swiped it and handed it back. They didn't check the signature. And if they had, they wouldn't have seen a signature; they would have read the words "ASK FOR PHOTO ID." Why didn't they? Because it wasn't their problem, someone else had to eat it.
If there were more incentive to authenticate people or transactions when they occurred, not only would the problem improve and fraudsters get caught more frequently, but we could quit wasting so much time and money on protecting information -- no matter how much protection you give information, once it is disclosed, it is disclosed, and good luck finding out whose fault it is.
Adam: "Markets don't fail - they are the result of thousands or millions of economic transactions."
You've got to love dogmatism -- or defining terms tautologically.
Bridges don't fail -- they are the result of billions of electron interactions.
Ecologies don't fail -- they are the result of billions of energy exchanges.
Governments don't fail -- they are the result of millions of social transactions.
Dot, dot, dot.
@ Bruce, et al.
"Because the efficient market solution won't work, we're left with inefficient regulatory solutions."
Perhaps this is just parsing the word "regulatory" (though I don't see it that way), but giving our personal data the same legal status as private property would likely help. Firms holding my data without consent would be committing theft. Firms holding my data with consent would be in breach of contract if there was a data spill. Even in cases of identity theft, the identity thief's acquisition of one's personal info would be viewed as a separate act of theft and could be investigated as such.
I might be mistaken, but I thought I got the basis for that idea from you, Bruce. As I recall you were arguing for consumers to "own" their credit files, only consenting to opening the file when applying for credit.
Nobody complains about the incredible expense we endure to keep our intimate body parts private, so why the fuss about data? Well-designed systems and procedures, adhered to with discipline, should easily do the job.
I've had a solution for some time and have tried to tell anyone who will listen about it:
1. Pass a law making individuals the owners of their personal information.
2. Require anyone who stores a person's information to pay them a small fee, say 0.01 to 0.1 cents per piece of information, into a Social Security-type account. Anyone who uses personal information without paying is now in possession of stolen property, and legal action can be taken.
3. RESULTS, not method.
4. Penalty of law.
Please, if you have a moment, let me know what you think.
if people own their own information, then how would media work? how would research libraries work? first amendment rights? freedom of information act?
data subjects should not own their data, but they should have the right to determine how/if their data affects them.
The December 21, 2004 edition of the Spyware Weekly Newsletter includes the article "How Much Privacy Is Too Much Privacy?" It can be read at:
Among other things, according to the article, there have been situations where an interest in privacy brought about unintended consequences. When it comes to privacy laws that are more of a difficulty than a help for the public, the article's author mentions the issue of lobbying. Specifically, a beneficial privacy law may be adversely altered and tweaked by business interests. In addition, a less desirable national law may be able to overrule one or more desirable state laws.
@bethan - Why is that, exactly? The fact you refer to people as "data subjects" instead of individuals is a little disturbing... but I will play.
I would think you could strip the identifying data and still utilize the information for research and freedom-of-information purposes.
As for press and speech, both are protected by higher laws (in the US) and would be immune to this type of law.
the subject of data is the data subject, whether it's a dog or an operating system or my mom. i actually spend a fair portion of my time working to educate data subjects about their rights with regard to data pertaining to them, because i recognize them primarily as people instead of as products (which is what we are - web service providers [google, amazon, etc.] have productized us, and i'm ok with that to a large extent).
and how could one law be immune to another law? which would trump? your right to control what is said about you in some regards and not others? where does the line get drawn, and for whom?
it's my opinion that entities should not be able to benefit from your image, name, reputation, PII, etc., without your fully-informed opt-in consent. i'm dog-tired right now, so am not going to take a lot of time to articulate it, but imo - that's pretty much the extent of your data subject rights.
The idea that reasonable security protocols are outside the means of small businesses is a myth. Sophisticated network security hardware can now be purchased for a few hundred dollars. Antivirus, antispyware, and anti-rootkit software can be purchased for less, or obtained free (e.g. ClamWin, SpyBotSD).
Depending on the size of the business, a competent contractor can usually install these measures in a day or less, and if the company pays more than three grand for the whole package they didn't do basic comparative shopping.
Even with hundreds of thousands of dollars in security measures, access can be breached. In property law you don't hold victims responsible for their own burglaries, but they are liable for "attractive nuisances". Was the company negligent or did they implement reasonable procedures to ensure privacy?
This is the criterion on which these cases should be judged.
Hi Bruce, thanks for bringing this topic up and my compliments to your blog which I have just begun reading.
In contrast to your advice, I thought I might share some suspicions I formed in 1999. You have identified the main problems or challenges to privacy and, along with the cited article, the hidden transactional costs of maintaining privacy and the expected failure of markets and third parties to protect it.
My thinking in 1999 was not about privacy so much as the integrity of one's identity. This is slightly different because personal identity does not so much involve 'assets', communications or secrets. Just who you are and your right to represent your own person.
I believe my own approach came from simply watching how early internet use was expanding from private networks like CompuServe, Prodigy and AOL to individual accounts from a local ISP, usually dialup. This brought to the fore a wave of new issues, including (but not limited to) the decentralization of communications and the reliability of information: authority. It occurred to me, in sci-fi fashion, that we may be entering a period of time in which there is no history. Or rather, because there will be so much information, in the future it will be impossible to determine 'truth' directly because of all the garbage and conjecture. Counter-intuitively, all this storage and traffic would actually diminish a researcher's ability, in the future or from an 'alien' perspective, to see how causality expresses itself.
The other side of having too much information in a system has to do with protecting one's identity. Identity theft was rising, true, but awareness of this was also a popular water cooler topic, as people were not immediately comfortable making transactions online. The typical mindset of the water coolers was to fixate on the worst case scenario, which in their water cooler minds is cash: the fact that someone could access their water cooler savings accounts, or really credit cards, which at that time were the water cooler savings account. But of course there is much more at stake if someone is doing things in your 'name'.
Prior to my experience as an administrator of a Windows NT token ring network, I had been employed by Yale University to perform research into the concerted evolution of tandem gene arrays. I was aware at that time, as I am now, that there is no physical definition of identity; this was before Venter busted up those fat academics and beat their pants off with his own genome sequencing efforts. Biometric means of authentication are not being challenged here. I am talking about really who and what we are, and what makes us distinguishable in the case of intra-species comparisons and twins. There are about 10^9 base pairs in the genome organized, as we now know thanks to Craig, into about 39,000 genes, which is less information than some species that we classically claim superiority over, and based on this biological superiority feel free to masticate and modify.
Since I am not writing a book here, I will fast forward and suggest the meat. The government cannot possibly protect your identity when in the end it would much rather hold it hostage. Markets or third-party providers like Microsoft, offering consolidated passports or tokens, act to protect your identity in so far as you are solvent and have a credit card. On the widest possible angle, there is no identity. Which means in the end identity is YOUR FUCKING PROBLEM, citizen.
Because of increased information collection with no authority or limits my theory at the end of all this mess means one reality:
Citizens must alias.
And rather than punishing individuals for aliasing and maintaining multiple identities, we should be allowed, or at least tolerated, to live our lives through multiple identities as the only recourse to protect essential liberties. The Constitution and Bill of Rights came long before the internet.
Since the government and markets cannot protect your identity, you should diversify this identity using basic principles that support portfolio theory and epidemiology. In the end the government will protect who and what you are as far as it needs to efficiently collect taxes. The education system likewise will only go so far as to produce reliable taxpayers. We should not expect anything else.
My theory in 1999 was that the expanding use of the internet, irresponsible information collection and the greed of the government would create 'selection forces' that would reward those who engaged in aliasing.
I offer this as a contrast to the need to protect 'privacy', whatever in the world THAT is. And the costs to maintain 'privacy' are the same transactional costs maintaining any idea the environment didn't come up with itself - like evolution and democracy. Evolution must be demonstrated every day and not taken for granted. Democracy works best in a world without it.
We all live in a period of time called the fossil fuel energy surplus at the end of an interglacial gap. Our art, our science and our lives are all products of this surplus. The surplus influences everything, including our perspectives on security. Read Mitnick and at the end of every chapter all he can advise is more costly layers and barriers to maintain basically a feudal trust model. I say read Mitnick because he has broken the law, aliased and survived. White hats have a hard time acknowledging that kind of experience, though their blogs are much more entertaining since the gov has not yet told them what to write.
I would encourage anyone to diversify their identities as much as possible. Not to hurt people, conduct sabotage or do anything the institutional agencies would cringe at. But to represent your own force as individuals to make those agencies adapt.
On the origin of agencies and as Bruce has said, regulation works best during periods of 'starvation'. Agencies form themselves to consolidate transaction costs. But it is just the opposite with individuals. This is because inside we are ourselves mosaic, plastic, and never done. ;)
Thanks again Bruce!
@bethan, but also more in general: why should data subjects not own their data (when referring to people, that is)? Should they never? Or only sometimes? If not, who should own it instead?
We are all 'physical body' subjects. Should we not 'own' our bodies? What does owning mean here? To be able to shape and mold? To control location, treatment, exposure and appearance?
For now I cannot but disagree with bethan (again, when limiting data subjects to people) and posit in contrast that we should absolutely, totally 'own' our data.
do you own your voting record? can you change it? if you can't change it, how can you say you own it?
can i take your physical body and force it into a spreadsheet, or direct a marketing campaign to your elbow? No, but I can direct a marketing campaign to your penchant for camel cigarettes and diet dr pepper, right?
what if I've seen you, with my own eyes, buy a soda and a pack of smokes every day, and decide i'm going to offer you a bargain if you buy a week's worth at a single time. that's a sales pitch based on my knowledge of your consumption habits.
the internet has enabled me to determine the consumption habits of millions of people - it's ::my:: knowledge that I've determined based on queries that i've written. the same way I watched you buy the soda and pack of smokes. that data enables me to place advertising selling soda and nicotine patches on websites that you frequent.
if you don't like it, you can change the privacy settings on your computer, pay with cash, move into the forest. but if i see it with my eyes, or with my collection methods, then it's mine.
" but if i see it with my eyes, or with my collection methods, then it's mine."
If you went and saw the Rolling Stones and recorded what they played and sang, then you could be in a lot of trouble.
Likewise, if a peeping tom put a "night-light" video camera in a position where he could see you through your bedroom window, recorded what you do there, and put it up for sale on the internet, you might very well have something to say about it.
You need to look at the thorny areas of "implied consent" and "expectations of privacy", then see how journos deal with "in the public interest", rather than assuming that the mere fact you have collected the data gives you rights that you would otherwise not have.
You might also want to look into "copyright" and "derived works" as well, because it could easily be argued that what I do is (in Europe anyway) effectively a "freedom of expression" and thereby effectively a "performance".
yes, i get all that. thank you. i was speaking of legally collected data, not protected data, data collected illegally or in conflict with posted privacy statements, data harvested where there's an expectation of privacy, etc.
if i wasn't clear, my apologies, but what i was discussing was consumption habits, usage habits on sites that collect and sell the data (as many do), gov't collected data (voting, census, felons), etc.
Also, would creative works or derived works count as 'data'? Doesn't seem like it would, but i'm relatively new to privacy law.
"i was speaking of legally collected data, not protected data, data collected illegally or in conflict with posted privacy statements, data harvested where there's an expectation of privacy, etc."
The definition of what is "legally" or "lawfully" collected is the massive grey area behind all forms of data collection about people (in whatever form).
Often people will say "that which is not lawfully prohibited is lawful", but that is an excuse.
As an example it is expected that a financial transaction on a card is correctly recorded with date, amount, location etc etc.
Ostensibly this has previously been recorded for accounting and fraud prevention. However, today there is software to "data mine" this information for all sorts of associations.
Now there are two issues,
1) As a long-term customer, did I give consent for this "new" use of the data relating to me?
2) Although the data was lawfully collected in the US etc., is it being moved and used in another jurisdiction with different (or no) legislation pertaining to the data?
It has become abundantly clear that US-based companies have breached the EU "safe harbour" regulations on personal data repeatedly, as well as breaching EU & UK data protection legislation that should cover UK citizens.
Behind this are two areas of great concern: what does and does not get covered by "implied consent", and "jurisdiction shopping". Various courts around the world are only just starting to see these questions come before them, and there are significant problems.
For instance, in a UK court the Tribunal of Justice (the judge) is only allowed to rule on UK legislation. If evidence or action based on foreign legislation is required, an "expert" on that legislation is required to give testimony on its interpretation etc.
If you think about it, used in the right way this will enable an organisation to drive a "Clapham omnibus" through any UK legislation that a UK citizen might hope to use to curtail the activities of, or seek damages arising from, the use of their personal data by the organisation.
With regard to,
"would creative works or derived works count as 'data'?"
You have to ask two questions "what is a work" and "what is data".
The definition of data used by technical personnel is most definitely not the same as the one courts use.
Likewise, a "work" is not regarded in the same way as you would think.
On the Internet you will see an argument such as "a file of data is just a binary number, and you cannot copyright a number".
Any legal person would laugh themselves silly over that argument.
However, you then get a more subtle argument coming from the techs: "If I have two files, neither contains the data of the work, therefore neither is copyrighted." Again the legal person would have fits of laughter, simply because using both, and an appropriate transform, the original work can be recovered.
However, this last point is an area of great concern. Suppose I send you by email a file of data that is encrypted via a one-time pad (OTP) that I previously sent you in the post. If it is sufficiently large, and we both carry out the proper procedure with OTPs, which is to securely destroy the key file after use, we have a problem.
A third party could argue that any file that is less than the size of the Email file (or in some cases larger) was what was sent in the Email.
Neither I nor you can prove that what I sent was not the file the third party is claiming it is. All we can do (via time stamps etc.) is show it is unlikely. In a criminal case this would probably be sufficient (unless there is other supporting evidence), as the burden of proof is usually "beyond reasonable doubt". However, in a non-criminal case the burden of proof is usually what you might call "on reasonable probability", which roughly translates to whichever legal representative's story the judge likes best...
Which, taken with the previous problem of defining a "work" and its representation in the digital form a legal person uses, leaves a very, very uncertain chance of getting even remotely to the truth of the matter.
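[The deniability property of one-time pads described above is easy to demonstrate. Here is a minimal sketch in Python, with made-up messages, showing that for any ciphertext and any claimed plaintext of the same length, a "key" can always be fabricated after the fact that decrypts one to the other, so the ciphertext alone proves nothing about what was sent.]

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings together."""
    return bytes(x ^ y for x, y in zip(a, b))

# The sender encrypts a real message with a random one-time pad.
real_message = b"meet me at noon"
pad = os.urandom(len(real_message))
ciphertext = xor_bytes(real_message, pad)
# Per OTP procedure, the pad would now be securely destroyed.

# A third party later claims the ciphertext was a different message
# of the same length, and fabricates a "pad" to support the claim:
claimed_message = b"the deal is off"
fake_pad = xor_bytes(ciphertext, claimed_message)

# The fabricated pad "decrypts" the ciphertext to the claimed message,
# and nothing distinguishes it from the genuine (destroyed) pad.
assert xor_bytes(ciphertext, fake_pad) == claimed_message
assert xor_bytes(ciphertext, pad) == real_message
```

[Since every equal-length plaintext corresponds to some possible pad, neither party can disprove the third party's claim from the ciphertext; only external evidence such as time stamps can make it look unlikely.]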
@bethan (and @Clive)
This is exactly the sort of discussion I was hoping for.
Regarding your comments, Bethan: when someone's data or observed behaviour (which is also data) is captured, that person (data subject, thanks for coining) has something to lose. He could for instance be exposed, embarrassed, harassed or whatever, by ill-meaning data-capturers. This is an acknowledged form of harm to individuals, against which laws have been instated in many countries.
Individuals themselves are also aware of the risks of being 'observed' or 'captured', and commonly display a degree of consciousness and responsibility regarding their (chance of) exposure. Close the curtains, protect privacy. Maintaining a degree of privacy is common, and not only for those 'who have something to hide', viz. http://papers.ssrn.com/sol3/papers.cfm?...
Surveillance, spying, or otherwise secretly observing or recording peoples behaviour (physical or otherwise) is usually not allowed as Clive pointed out; (interesting exception: government or police).
For me, I'm not concerned with the data capturing activity, legal as it might be, but with the subsequent _uses_ of the data about the data subjects. Here you claim that data subjects cannot expect you to limit your use of their representations. This I find disturbing. There is, again as Clive pointed out, a certain expectation of intended use. Unfortunately, there is often no way for the data subjects to prevent deviation from those expectations (which I find even more disturbing). This lack of control for the data subjects is (for me) an area of great interest/concern (which I try to elaborate at http://egosphere.blogspot.com )
The situation you describe, where a person's data, once captured, could be put to any use that would benefit the capturer, seems to me exactly the thing that most people would like to protect themselves from, if they could.
Lacking any control over your representation (its correctness, storage, distribution or exposure), even when it was once willingly conveyed to some organization (such as your energy company) makes you dependent on _their_ willingness and ability to change your relation with them. You might become powerless to influence their behaviour, since their representation of you, once acquired, is the truth for them. Never mind the 'unintended uses' that might occur (see http://datalossdb.org/ for plenty of examples).
I'm looking forward to your reaction here, I find this discussion most valuable.
the phrase "data subject" is a standard phrase used regularly by a lot of people who work with privacy issues.
i'd like to point out that the topic area is broad enough, and the number of countries with various laws is considerable enough, that it is necessary to either limit the discussion to theoreticals, or to specific areas of privacy.
i was speaking purely of data collection in the manner ::legal:: in the US. With regard to a company using data in the manner they choose if they collected it [legally], i was speaking of aggregate data.
in every country that i know of that has privacy laws, there are laws in place that specify what must be done if pii is collected, and -from what i've read- those countries talk about notification, intended use, choice, security, etc. which covers a lot of what this discussion has brought up.
personally, i think the US should approach privacy from a more fundamental and holistic perspective. it would allow a consistent expectation, and it would be a helluva lot easier to train consumers to be smart with regard to privacy protection.
"data subjects can not expect you to limit your use of their representations."
i'm wondering where i said that. the word 'representation' implies pii, the use of which would be covered by legislation and/or policy.
i'm a pragmatic girl, and the internet is the wild west of law. those "big gray areas" will get clarified into tidy grids when people care enough to articulate where the rights of one group infringe on the rights of another.
my personal area of interest is actually images. where do privacy rights meet up with image ownership? if someone takes a pic of me at a public space, should they be able to use my image for whatever purpose?
"if someone takes a pic of me at a public space, should they be able to use my image for whatever purpose?"
The answer to that question is simply no in probably all jurisdictions (because the way it is phrased is too broad, and there are laws against blackmail and many other uses).
And if you try to limit it to, say, personal-v-nonpersonal use, you will run into issues; likewise public-v-private property, or primary subject-v-in field of view.
In France, for instance, where there are quite strong laws, taking a photo of a person in a public place for anything other than private use (and even then) is potentially going to get you into trouble, should the "subject" be aware of it at the time or subsequently (which makes France a popular destination for some "stars" etc).
In other countries you are "fair game" if on public property or visible from a place where a person may lawfully take a photo.
You then come across "peeping tom" laws in some jurisdictions, and a "common law" "expectation of privacy" in others.
Also there is the opposite issue: even when there are laws against photography, you might have an overriding reason to take one which makes it legal to do so. For instance, if you see somebody doing something that is not lawful and wish to use the photo only as supporting evidence.
Even journos get specific exemptions in some jurisdictions. And the two absolute overriding ones are "national security" and "the prevention of genocide" (these pretty much override any other legislation, including the likes of manslaughter).
Even when you sign a "release" there are still usually conditions binding the use of the photo under "unfair terms of contract" legislation etc.
All of which kind of brings you back to intended use of the photo...
Not just when it is taken but at some other point in the future.
An example of which was in the UK: a young lady working with a media organisation went on a beach trip with a number of her co-workers. She happened to go topless, as did a number of others; "holiday snaps" were taken and nobody thought anything further about it.
Then a few years down the road she starts dating a member of the Royal Family. One of her old co-workers decides to cash in and sells a picture of the young lady topless to a "red top" paper. Suffice it to say, neither she nor her family (nor, presumably, the Royals) were pleased to see the photo blown up to page size in the morning newspapers. The paper tried to claim it published out of "national interest", which nobody really accepted.
Likewise, a young university student who was a little the worse for wear at a social event had her photo taken. Several years down the road she is a UK Minister, and the photo of her is published, apparently showing her holding a "spliff" (a hand-rolled cig with drugs in). The newspaper that published it quite rightly claimed it was of national interest, because she would have influence on laws relating to the use of "recreational drugs".
However, in either case, when the photos were taken they were of private social moments, albeit in publicly accessible places; the photos were taken lawfully, and their only value at the time was sentimental. It was only later that they became of commercial value, due to what the young ladies had chosen to do with their lives either privately or publicly (respectively).
It is dealing with the non-ephemeral nature of photos and data that is going to be the hardest thing for society to come to terms with.
And you don't have to be famous or of national interest for a photo or email to have a global effect on you (think of the Wii Fit Girl in her underwear put up on the net, or the young lady whose steamy email to "her squeeze" got forwarded around the globe).
what brought this issue to my attention is (again, in the US) people posting pics of children on sites that advocate crimes against children. the pics are typically taken at school events or public spaces and then posted to one of these websites, where often the child is rated, discussed, etc.
according to law enforcement and prosecutors, so long as no one actually states that they are going to assault the child, or tells someone else to assault the child, there's no crime.
One of the arguments against owning our PII is that it's hard, legally, to nail down who, if anybody, owns what.
Your checking account number was issued by your bank, and if you go to a different bank, you don't take it with you. Do you own it? No. Does the bank? No. You can't own a number, any more than a musician can "own" middle C on the musical scale.
A musician strings notes together into a unique melody and he can copyright and "own" that. A bank account number, by itself, isn't really PII. If it's put together with a name, address, or SS number, it becomes a unique combination that can only point to a specific data subject. My thought is that any combination of PII bits that uniquely point to a data subject should automatically be copyrighted by the data subject.
In the US, when a photographer takes a photo they automatically hold the copyright to it. Putting it on their web page for all to see does not give others the right to copy the image and profit from its use or sale. Allowing someone else to use the image on another website does not transfer ownership of the copyright.
In the same way, when a data subject gives his bank an extensive list of his PII, so that the bank can verify his identity, send him monthly statements, and comply with the federal reporting laws, it should in no way give the bank any right to sell it, rent it, or share it for profit.
The argument that privacy laws which are too broad will have unexpected negative results, when lobbies and business interests adversely alter and tweak them, has a simple solution: throw away once and for all the flawed concept of "corporate personhood". Without it, Nike and AT&T can't hide behind personal privacy laws, because they are not people. Also, their lobbying of the government and influence over political campaigns using inhuman amounts of money would be a thing of the past, closely followed by paper-tiger legislation like the Gramm-Leach-Bliley Act.
The Interactive Advertising Bureau, a trade group whose members include AOL, Google, Microsoft, Yahoo and most major online media sites, is one of the loudest supporters of the "privacy costs too much" argument. They claim that any privacy regulation would be a huge blow to commerce.
When corporate profit, or even commercial viability, becomes a valid argument for leaving our privacy unprotected, that's fascism. They have a right to compete in the market place, but they do not have a right to make a profit.
When did corporate profit become more important than our right to privacy?
When did commercial viability become a right that is violated by protecting personal privacy?
If Google, Yahoo, Microsoft, and, to hear some tell it, the internet can't survive without taking our right to privacy, then I say let 'em die. It's that basic.
If they CAN survive without taking our right to privacy, and I believe they can, the question becomes, why don’t they?
The answer is obvious, isn't it?
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.