Schneier on Security
A blog covering security and security technology.
April 6, 2010
Privacy and Control
In January Facebook Chief Executive, Mark Zuckerberg, declared the age of privacy to be over. A month earlier, Google Chief Eric Schmidt expressed a similar sentiment. Add Scott McNealy's and Larry Ellison's comments from a few years earlier, and you've got a whole lot of tech CEOs proclaiming the death of privacy--especially when it comes to young people.
It's just not true. People, including the younger generation, still care about privacy. Yes, they're far more public on the Internet than their parents: writing personal details on Facebook, posting embarrassing photos on Flickr and having intimate conversations on Twitter. But they take steps to protect their privacy and vociferously complain when they feel it has been violated. They're not technically sophisticated about privacy and make mistakes all the time, but that's mostly the fault of companies and Web sites that try to manipulate them for financial gain.
To the older generation, privacy is about secrecy. And, as the Supreme Court said, once something is no longer secret, it's no longer private. But that's not how privacy works, and it's not how the younger generation thinks about it. Privacy is about control. When your health records are sold to a pharmaceutical company without your permission; when a social-networking site changes your privacy settings to make what used to be visible only to your friends visible to everyone; when the NSA eavesdrops on everyone's e-mail conversations--your loss of control over that information is the issue. We may not mind sharing our personal lives and thoughts, but we want to control how, where and with whom. A privacy failure is a control failure.
People's relationship with privacy is socially complicated. Salience matters: People are more likely to protect their privacy if they're thinking about it, and less likely to if they're thinking about something else. Social-networking sites know this, constantly reminding people about how much fun it is to share photos and comments and conversations while downplaying the privacy risks. Some sites go even further, deliberately hiding information about how little control--and privacy--users have over their data. We all give up our privacy when we're not thinking about it.
Group behavior matters; we're more likely to expose personal information when our peers are doing it. We object more to losing privacy than we value its return once it's gone. Even if we don't have control over our data, an illusion of control reassures us. And we are poor judges of risk. All sorts of academic research backs up these findings.
Here's the problem: The very companies whose CEOs eulogize privacy make their money by controlling vast amounts of their users' information. Whether through targeted advertising, cross-selling or simply convincing their users to spend more time on their site and sign up their friends, more information shared in more ways, more publicly means more profits. This means these companies are motivated to continually ratchet down the privacy of their services, while at the same time pronouncing privacy erosions as inevitable and giving users the illusion of control.
You can see these forces in play with Google's launch of Buzz. Buzz is a Twitter-like chatting service, and when Google launched it in February, the defaults were set so people would follow the people they corresponded with frequently in Gmail, with the list publicly available. Yes, users could change these options, but--and Google knew this--changing options is hard and most people accept the defaults, especially when they're trying out something new. People were upset that their previously private e-mail contacts list was suddenly public. A Federal Trade Commission commissioner even threatened penalties. And though Google changed its defaults, resentment remained.
Facebook tried a similar control grab when it changed people's default privacy settings last December to make them more public. While users could, in theory, keep their previous settings, it took an effort. Many people just wanted to chat with their friends and clicked through the new defaults without realizing it.
Facebook has a history of this sort of thing. In 2006 it introduced News Feeds, which changed the way people viewed information about their friends. There was no true privacy change in that users could not see more information than before; the change was in control--or arguably, just in the illusion of control. Still, there was a large uproar. And Facebook is doing it again; last month, the company announced new privacy changes that will make it easier for it to collect location data on users and sell that data to third parties.
With all this privacy erosion, those CEOs may actually be right--but only because they're working to kill privacy. On the Internet, our privacy options are limited to the options those companies give us and how easy they are to find. We have Gmail and Facebook accounts because that's where we socialize these days, and it's hard--especially for the younger generation--to opt out. As long as privacy isn't salient, and as long as these companies are allowed to forcibly change social norms by limiting options, people will increasingly get used to less and less privacy. There's no malice on anyone's part here; it's just market forces in action. If we believe privacy is a social good, something necessary for democracy, liberty and human dignity, then we can't rely on market forces to maintain it. Broad legislation protecting personal privacy by giving people control over their personal data is the only solution.
This essay originally appeared on Forbes.com.
EDITED TO ADD (4/13): Google responds. And another essay on the topic.
Posted on April 6, 2010 at 7:47 AM
So, basically, the people who say that privacy is dead are the very people who have a vested interest in dead privacy.
Any commentary on the Apache video released by Wikileaks? In particular, I'm curious about your take on the efforts the group claims to have undertaken to break the presumably military-grade encryption.
I'm not sure legislation is the answer. Why not let the market govern what is accepted in terms of privacy? User backlash will keep companies in check. If Google or Facebook continue to change their policies and divulge more personal information then people will stop using those services. If people don't stop, then maybe they don't value their privacy as much as they thought.
"The best way to predict the future is to invent it." -- Alan Kay
Likewise, the best way to predict the death of privacy is to kill it.
Writing yet another polemic about Google and FB seems almost too easy. What do you think about your own employer's well-documented secret deep-packet inspection trials with Phorm?
As for privacy legislation, well, doesn't Europe already have it...and is that solving the "problem" you're identifying?
I don't see why privacy, in this sphere, has to be legislated. I don't give my real name out to grocery stores for those savings cards, and I don't join sites like scummy GoodReads that hijack your address book. This doesn't take a law, it just takes a brain.
You forgot to mention an important point.
In the US the person who "collects personal data" owns it.
This is not true in other jurisdictions, which is why these "privacy is dead" merchants are nearly all US based...
The easiest way to rebalance the issue is to remove this "finders keepers" viewpoint on personal data, and also get on the Supreme Court's case about "secrecy = privacy"; it is a very, very myopic viewpoint.
And one that one of "M'learned justices" found out, to his cost, is not compatible with modern life.
Perhaps the easiest way to deal with it is to "out all justices" by making their lives, in all the most intimate detail, as public as possible.
Being on the wrong end of the 541ty stick tends to encourage your perspective to change, especially when you end up looking like a goldfish in a bowl...
I'm not so sure that privacy is ultimately about control or secrecy. To paraphrase another saying, "if all you have is a lawyer, all problems look like a legal position".
I'd claim that privacy is really about relationships. We have a (culturally-dependent) expectation when we tell someone something, about how far that information will go, based on our relationship and the nature of the information. If the recipient breaches that expectation, and uses the information in a way we did not want or expect, we get upset. And that applies whether it is a friend or a corporate entity.
Now after a breach, we may employ lawyers. The lawyers, in turn, hopefully find a way of expressing such broken expectations in a way that either shows the discloser was at fault, or the expectation was foolish. Overlaid with a history of (culturally-dependent) precedents and legal structures that might have been set up to tackle different issues.
Not to say that privacy laws can't be improved - but there's a risk that adding more laws may make a bad situation worse. Especially when there is not necessarily total consensus on the expectations.
Perhaps we need a commercial "freedom of information act". If I could file a form and compel Facebook to give me a copy of all the data that's not "private" that they keep on everybody, we'd find a lot more of the data was "private" and Facebook users would "need" a lot more privacy.
> This doesn't take a law, it just
> takes a brain.
Taken to the extreme, no law is ever required, just a brain and a gun.
We made laws and governments so we could free ourselves from continually fighting and defending ourselves from each other.
Society should prohibit behaviors that would strip rights from others. That's not because "others" lack the brains (or guns) to defend themselves, but because society is better served when those brains can focus on creating and producing rather than self-defense.
I am wondering what the ramifications of all this compelled privacy relaxation will be. I don't deny that there will be some, but I would be interested in Bruce's thoughts on this matter. Many of his writings make the assumption that strong privacy protections return significant benefits to society. It's probably a good assumption, but we can't properly weigh costs and risks without an enumeration of the risks.
I think most costs relating to a loss of privacy stem from external factors that are enabled by that loss. If third parties could cause absolutely no harm with your information, would there be any value in privacy? I am sure philosophers would think so, but what about an economist? Perhaps the key to the privacy problem is marking certain information as privileged, wherein even if it is disclosed, parties are prohibited from acting on it.
I wouldn't attribute much of this to malice though. Service providers like Facebook and Google are more interested in getting people to use their service. Yes, it would be nice if the first thing a user saw was a "what do you want your defaults to be? Secure, or Open?". Give them the option at the onset. But, they don't.
"Never attribute to malice what can be adequately explained by stupidity." - Nick Diamos
"I wouldn't attribute much of this to malice though. Service providers like Facebook and Google are more interested in getting people to use their service."
Of course. What I mean by not attributing their actions to malice is that those companies are not deliberately killing privacy. They're trying to maximize their profits, which has the side effect of killing privacy.
"Never attribute to malice what can be adequately explained by stupidity."
Sufficiently advanced cluelessness is indistinguishable from malice.
Not that I believe for one second that either Google or Facebook is clueless about the effect their moves have on their customers. They just regret the inconvenience to the same extent cats weep for birds.
"Why not let the market govern what is accepted in terms of privacy? User backlash will keep companies in check."
I believe that it won't, and that economic incentives don't allow for a standard market where the users will moderate the companies' behavior. That was the primary point of the essay.
"As for privacy legislation, well, doesn't Europe already have it...and is that solving the 'problem' you're identifying?"
Europe, and Canada, have much stronger privacy laws, and have had some success reining in these encroachments. But yes, there is a lot lacking in the European Data Protection Act.
"In the US the person who 'collects personal data' owns it."
This point fell into the bit bucket out of word length considerations. It seemed secondary. But yes, it is important when examining the economic incentives of the various players.
The problem is the intrinsic contradictions and imbalance. If data that is not secret cannot be private, how can a company then keep it secret in order to sell it?
Why not make a clean cut: there is just private and public data. Define data as public if the owner publishes it or gives it to somebody without retaining control over it. That would create an incentive for companies to give users control over their data, because otherwise it is public by definition and they are forced to publish it.
All in all, well put. However, we should make the executives put their private data where their mouths are. That is, when those data-grubbing executives put their private (and controlled) data where we can see ALL OF IT, including their company boards of directors, all members of all lobbying organizations, Congress, all politicians, et al., then you might have a glimpse of "privacy is dead." But those people who want your privacy to be dead still maintain that their data should remain private, secret, and controlled only by them. Hypocrisy abounds.
I wrote about this in May 2009, same conclusion. You might find it of interest:
I also suggest checking out the book /Understanding Privacy/ by Daniel J. Solove. A good read from a GWU prof. here in DC.
It's the same old story - business wants to make money, consumers want some protection against being exploited.
The idea that the market will sort things like this out is laughable. Consumer protection laws are needed precisely because the market fails at this sort of thing.
Loss of privacy is far worse than buying a duff toaster though. Once your private photos are on the internet and have been turned into a meme on 4chan it's going to haunt you for the rest of your life. All the employers I know go straight to Google before interviewing any promising looking candidates. In the current jobs market any negative association, even simply by having the same name as a criminal or being part of a Facebook group with a dodgy sounding name, is enough to get your CV shredded.
I've recently started to consider the ways that a "right to privacy" will be very challenging to implement, given advancing technology.
The alternative that I'm considering is a "right to anonymity". This is a right to hide and protect our true identity.
This would mean that we never have to provide "true" information regarding who we are. In cases where trust is required, such as applying for home loans, we would provide proxy credentials that allows our credit worthiness to be assessed, perhaps via a 3rd party. (Trust escrow, trust insurance?)
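The "trust escrow" idea above can be sketched in code. This is a minimal, hypothetical illustration: it assumes an escrow party that already knows the borrower's real identity, and a verification key shared out of band between escrow and lender (all names, the JSON claim format, and the HMAC scheme are illustrative assumptions, not an existing protocol):

```python
import hashlib
import hmac
import json

# Hypothetical "trust escrow": a third party that knows the borrower's real
# identity checks creditworthiness, then vouches only for a pseudonym.
ESCROW_LENDER_KEY = b"shared-secret-established-out-of-band"  # assumption

def issue_attestation(pseudonym: str, credit_band: str) -> dict:
    """Escrow signs a claim about the pseudonym; no real identity is included."""
    claim = json.dumps({"sub": pseudonym, "credit_band": credit_band},
                       sort_keys=True).encode()
    tag = hmac.new(ESCROW_LENDER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "tag": tag}

def lender_verify(att: dict) -> bool:
    """Lender checks the escrow's tag without ever learning who 'sub' is."""
    expected = hmac.new(ESCROW_LENDER_KEY, att["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att["tag"])

att = issue_attestation("borrower-7f3a", "prime")
assert lender_verify(att)
```

A tampered claim (say, upgrading one's credit band) fails verification, which is the whole point: the lender trusts the escrow's assessment, not the borrower's self-reported identity.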
The right to anonymity will be helped by improving technology, not just weakened by it.
Source code has no privacy, and that makes it safer.
When I disclose more, I get much more valuable feedback and help, which dwarfs the importance of my privacy. People are largely good.
So if G and FB are OK with privacy being dead, why don't they share the details of their advertising operations and source code? Or do they mean privacy is dead for everyone but themselves?
Well, part of this is a legitimate exercise in branding (who knew what a "Wall" was a few years ago?), but part is surely intentional yet deniable obfuscation. Other terms that have changed (they would probably say clarified) FB meanings are "content", "platform", and "site".
I prefer a balance between what a provider (like Facebook) and a customer can do without too much government meddling, and reasonable protection of customers.
Several years ago, I was part of an engagement (full disclosure: I was not the primary person assigned to it) that dealt with an industry's collection of data on the web. It was concluded that there are four criteria that a collector should abide by when collecting personal information:
1. Notice - they should notify the user of what data they are collecting and how it will be used.
2. Choice - users should have a choice as to when or if their information is collected and how it is used.
3. Access - there should be a mechanism for users to change, correct, and remove their information. (This is of particular importance if there are changes -- they shouldn't collect information for one purpose, then change their policies and use the information for another.)
4. Security - personal information should be protected from unauthorized disclosure, modification, or destruction, and users should be informed whether this is being done.
Perfect? No, but what is. Regulations such as that would allow for providers and users to mutually agree to the use of personal information without too much government meddling, while at the same time government would be ensuring that users have ample information to enter such an agreement.
The public good of the "right to anonymity" is that it helps protect people who exercise unpopular liberties.
This is equivalent to the first amendment protecting unpopular speech.
"The alternative that I'm considering is a "right to anonymity". This is a right to hide and protect our true identity."
Well, that doesn't really work with social networking, does it?
"Well, that doesn't really work with social networking, does it?"
Right "anonymous", cause you can't create social networking accounts (or comment on blogs) without your birth certificate and SS card on hand...
Anonymity in social networking:
@"Well, that doesn't really work with social networking, does it?"
You will need to be more specific with your objection.
A right to anonymity implies that the user is allowed to anonymize, but doesn't require it.
In social networking situations, a user could provide false and misleading information to the site, and to other users. A user could utilize different aliases for different purposes. For example a user could entirely separate his "I like movies" profile from his "I like kittens" profile.
Neither democracy nor freedom can exist without privacy. If you believe otherwise, you are just plain wrong. There is no debate here. The keystone of this country is a fundamental right to privacy: the right to vote, anonymously.
Until goopticon and flakebook can read my mind, the only thing dead here is a corporation's public decree to protect its virtual shareholders, because they damn sure haven't killed my ability to keep something to myself.
Although, my actual response to this is "big deal". Why concern ourselves with what flakebook makes public by default or not, or what their mantra is, when everything you send them (publicly available after-the-fact or not) is still mirrored half-way down the pipe and sent off to the NSA sniffin' hole for 'processing'?
The US is fuckin up, big time.
I disagree that legislation is necessary.
You are looking at the problem backwards. The problem is not that these companies are not responding to user demand for privacy. The problem is that they are.
Making a purchase (or a use decision) in a free market is the primary feedback mechanism to tell the seller/provider if they are on the right track or the wrong track. The consumer decision usually takes many concerns into account, including value vs. risk.
If people continue to use such services, then that means that the value of the features that are present outweighs the risk/negative value of the features that aren't present in the desired form.
When you call for legislation you are saying "I know better than millions of people acting in their own interests" and you are trying to make a decision for them. With legislation, you might solve a problem that you perceive (and which many others perceive but don't judge as risky as you do). However no one will ever know the lost opportunity cost as a result, and we aren't particularly good at identifying undesired side effects ahead of time.
I'm with Eric: deregulate everything and let the Enrons of the world sort it all out for us. Surely they have our best interests in mind. Corporations and governments always do.
Oh BTW, Thomas Jefferson called, and he's pretty pissed. Says he wants his country back.
@Shane "the right to vote, anonymously"
I've been thinking about this too, but how to implement it? What are the consequences?
Is everybody issued some sort of voting token that can be exercised anonymously? Every race that a person could vote on should require an independent token. On voting day there should be no possible link between the voter and the token.
What about vote trading/selling? It seems that these actions could not be made illegal.
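One classic answer to the "unlinkable token" question is a Chaum-style blind signature: the registrar signs a blinded token after checking eligibility, so the unblinded token later verifies as authorized but cannot be linked back to the voter who obtained it. Here is a toy sketch using deliberately small, unpadded textbook RSA (an illustration of the math only, nowhere near a usable voting scheme):

```python
import hashlib
from math import gcd
from secrets import randbelow

# Toy registrar RSA key built from known Mersenne primes (far too small,
# and textbook RSA lacks padding; for illustration only).
p, q = 2**89 - 1, 2**107 - 1
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Voter blinds the token with a random factor r, so the registrar
# never sees the token's actual value.
token = b"one-ballot-credential"
r = 0
while gcd(r, n) != 1:
    r = randbelow(n - 2) + 2
blinded = (digest(token) * pow(r, e, n)) % n

# Registrar checks the voter is eligible, then signs the blinded value.
blind_sig = pow(blinded, d, n)

# Voter unblinds; the signature now verifies on the original token,
# but the registrar cannot connect it to any signing session.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == digest(token)
```

Real designs in the e-voting literature add proper padding, double-spend detection, and anonymous submission channels. Vote selling, the second worry raised above, is a separate and harder property (receipt-freeness) that blind signatures alone do not provide.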
I think you misunderstood. I'm speaking about the privacy of your actual vote. You have a right to vote for a candidate without the world knowing that you voted for that candidate (or against another). The choice to share that information remains your own (or so we hope, nowadays, thanks to Diebold and the like).
@Shane "I'm speaking about the privacy of your actual vote."
Got it, you were referring to secret voting, not anonymous voting.
Great article. I think it has become a real problem to control your private information on the Internet, especially when large dotcoms are not really interested in protecting it.
I think the younger generation is still very concerned about privacy from the point of view of embarrassment; it's just that, like everyone else, their idea of what is embarrassing is different. I think, like anyone else, they do a subconscious assessment of risk (albeit a poor judgment); hence the willingness to post photos, even though they'd never give out information on sexual diseases they may have. The CEOs are relying on that old adage that "if you say something is true often enough, then it is true." Or as Adolf Hitler said, "Tell a lie often enough, loud enough, and long enough, and people will believe you."
There are also net benefits to the populace as a whole when everything is public. Finding useful information becomes much easier, whether that's personal accounts of medical treatments, or interesting people to follow. Granted, the option to make these private if you wish is useful, but the optimal default is less clear cut.
Isn't everything we produce automatically protected by Copyright? Therefore everything we "produce" on those "social" sites has Copyright protection. If they, or anyone else, use your "copyrighted" work without your authorization they are already in violation of your rights. Sue them, DMCA them, etc...
As for "having no malice": one has to be blind, deaf and dumb to fall for that. Your personal information and what you produce on their sites is THEIR PRODUCT. That's the way they make money. It's in their interest to rob you of as much information about yourself and your habits as possible.
That's why I said copyright protection would be a better way to fight them. Your habits are your expression, and expression is automatically protected as soon as it is produced. You don't have to think, reflect, or meditate about it; it requires no more attention than being alive. And copyright also has something called "image rights." If they want to use information about you, make them pay!
People have to start to understand that their personal data is valuable to everyone else and everyone else is making money out of it. Demand a piece of the action. That'll slow things down.
Celebrities already do this and perhaps all of you will consider this laughable applied to "anonymous" persons but it's not.
It's not about financial reward, but about leverage. Do you value privacy? Then you have to apply leverage.
What I'm curious about are anonymous posters who are critical of the need for legislation. Would they mind if we listed their IP and email addresses? How about their street address and phone number? Academic history? ... Would they mind if we verified that their employers were not Facebook or Google? By posting anonymously they apparently would mind, but how do we reconcile this with their criticisms of giving others the option of privacy?
As with other hidden agendas, if you are interested in the truth, just follow the money.
"What I'm curious about are anonymous posters who ..."
Problem is there's no ROI. Revealing how easy it is to identify an individual actually has a negative ROI. Well, it does up to a point... Kudos to the entrepreneur who exploits this.
I don't find the direct/indirect distinction convincing when it comes to private companies' attitudes towards privacy. It's more intertwined than that. Yes, companies want to maximize profits; that's an aim. But in a case like Google or Facebook, the death of privacy is a direct consequence of that aim, not an unfortunate side effect.
So I do think they act with malice. They must because any strengthening of privacy laws directly limits their profits.
'It doesn't take a law'
Does it take HIPAA to protect medical records? Does it take SOX to prevent analysts from taking naive investors to the cleaners? Does it take speed limits to keep teenagers from racing down a neighborhood street?
If these circumstances require legislation what's different about privacy?
@ Eric at April 6, 2010 3:32 PM
"When you call for legislation you are saying "I know better than millions of people acting in their own interests" and you are trying to make a decision for them. .<snip>. However no one will ever know the lost opportunity cost as a result, and we aren't particularly good at identifying undesired side effects ahead of time."
This is the contradictory issue.
We are poor at identifying undesired side effects ahead of time so, as a result, millions of people put information on to facebook which at a later date may become a problem for them.
As a 15 year old you may think it is cool to write about getting drunk and smoking pot, but as a 20 year old applying for a job this can come back and bite you. At various stages in your life you have different sets of criteria for how the cost / benefit ratio works out for you but the loss of privacy is a single shot.
You are basically saying that decisions made at the age of 13 should be able to come back and haunt you for the rest of your life.
This makes no sense to me.
Most companies, Google and Facebook included, do not explain the cost/benefit in any meaningful sense. It's all well and good for a collection of people who are IT-literate enough to read Bruce's blog and comment on it to say that people are making informed, rational decisions to trade privacy for "cool features," but in most instances I doubt that is true. It cannot be regulated by the market because the efficient market hypothesis doesn't apply.
The problem with debating the right to privacy is for most people it is only important when it is too late.
@ Paul Simon at April 6, 2010 1:10 PM
"Source code has no privacy, and that makes it safer.
When I disclose more, I get much more valuable feedback & help that dwarfed the importance of my privacy. People are largely good."
Can I assume you are being sarcastic?
@Eric: You assume that market economy is always a nicely self-regulating feedback system. But it is only in very specific circumstances, and most of the time it is not. Or why else is an antitrust division needed, for instance?
But most of the time it is more like plain evolution, with lots of struggle, predators and prey, and a few survivors. Regulation is the attempt to make this bloody game a bit more humane, just as we have medical science and don't let sick people die just because they are not strong enough to survive on their own, instead of letting nature regulate itself.
"I disagree that legislation is necessary."
Legislation is required when there is a market imbalance or a "race to the bottom" condition; otherwise you end up with "lemon cars", avoidable injuries and deaths, and the market collapsing...
Contrary to what you state with,
"You are looking at the problem backwards. The problem is not that these companies are not responding to user demand for privacy. The problem is that they are."
The companies are not actually addressing user concerns about privacy; they are doing all they can to pretend the user has no concerns, by various tricks and statements that Bruce has highlighted.
What these few companies are doing is deliberately restricting "consumer choice", and as such they are operating in the same way as a monopoly or cartel might (practices we have legislation to stop).
Thus your comment,
"Making a purchase (or a use decision) in a free market is the primary feedback mechanism to tell the seller/provider if they are on the right track or the wrong track."
Does not apply, as there is no "freedom of choice" in the information service marketplace to make available this "feedback mechanism" you claim will allow the "purchaser" to apply the desired correction (which it won't anyway, for other reasons peculiar to information services).
You are also ignoring the issue of 'utility' when you say "Making a purchase".
In economics utility is usually defined as, "the total satisfaction received from consuming a good or service".
However it can be better defined for information services as "time-variant perceived usefulness against currently known cost".
The reason for this is to show that there are several hidden and some not so obvious assumptions in the normal economic use of "utility" which take "information services" far from the norms of a traditional market.
Some of the hidden economic assumptions are,
1, The distance:cost metric is non-negligible and attributable.
2, The change of cost with respect to time is not discontinuous (ie not a step or cusp function).
3, The "purchasor" is the only person to "use" the good or service.
And the not-so-obvious issues that affect,
4, Time variant perception.
5, Current known cost.
First off, (1): for a free market to exist in the traditional sense there has to be a non-negligible distance:cost metric with a positive correlation (ie cost goes up with distance). Otherwise you get a "first to market takes all" single-supplier market (thus no direct competition, thus no free market).
With a traditional "good or service" there are very real physical attributes that ensure you cannot have an instant global market served by one entity (mass / energy / localisation of effort).
However these "information services" have no physical constraints that the user perceives as a real cost.
The exceptions being time and bandwidth, and as some will realise, AJAX/Web 2.x was brought in as the next-step solution to keep them negligible (after local caching etc). And other solutions will be brought in to further this aim of either removing, or limiting to a "one off", the perceived time/bandwidth costs as information services develop.
Thus the actual costs of an information service are hidden from your "purchaser" behind the 'eat as much as you want for free' model and the fixed monthly monetary price for connection (usually paid for by others).
Secondly, (2): the assumption of a non-step cost function with respect to time.
Few models can survive, let alone deal with, discontinuous functions (step or cusp), and free-market economics (like many other models) generally excludes the issue by externalising it. In the case of economics it is 'passed off' onto insurance or the state (as the 'insurer of last resort', as recently evidenced by the banking crisis; before that, for those who can remember, there was the collapse of the Lloyd's insurance market due to the LMX spiral).
That is, a catastrophic loss is often considered probabilistic in nature, and viewed from a sufficient height it can generally be considered "uniform" with respect to some other measure such as the size of a market.
However this is ONLY TRUE of physically constrained goods or services. Intangible information services are not physically constrained in any such way, so an information-service market can go from 0% to 100% failure in near-zero time (think of a zero-day break with a UTC-time-triggered payload, or the banking crisis, or Lloyd's).
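The continuous-versus-step contrast can be sketched numerically (a toy model of my own, with made-up rates, not anything from the comment): a physically constrained failure spreads at finite speed, so losses ramp up; a clock-triggered zero-day hits every host in the same instant.

```python
# Toy comparison of loss curves: a physical failure spreads gradually,
# while a time-triggered zero-day is a step function.

def physical_failure(t, spread_rate=0.1):
    """Fraction of a physically constrained market lost t hours after the event."""
    return min(1.0, spread_rate * t)

def zeroday_failure(t, trigger_time=5):
    """Fraction of an information-service market lost: a step at the trigger."""
    return 1.0 if t >= trigger_time else 0.0

for t in range(0, 11, 2):
    print(t, physical_failure(t), zeroday_failure(t))
# The physical losses climb smoothly from 0.0 towards 1.0;
# the zero-day jumps from 0.0 to 1.0 at the trigger time.
```

It is exactly this jump, with no intermediate states in which to react, reinsure or re-price, that the comment argues standard market models cannot handle.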
You cannot currently insure against such effects, nor is that likely in the near future, so there is no real fallback (which is why the APT people are getting their undergarments wadded up).
However it is not just markets that are affected by catastrophic failure but individuals. Speak to anybody who has suffered catastrophic loss: insurance never compensates for the loss in anything like a meaningful way. In general the legal system does not deal with non-physical loss, as it has no way to effectively evaluate it.
And as we know from car drivers and information backups, few people can actually evaluate their risk in a meaningful manner.
Further, in any new market there is the issue of "events yet to unfold"; risk evaluation only becomes effective when the majority of the risks within a given time frame are known and can be quantified (as with mature markets). In the information-systems marketplace the only thing we know is 'we don't know'.
The reason for this is that our 'real, tangible' physical-world models do not apply to the 'unreal, intangible' information world; at best they are a very tiny subset that is not in practice transferable easily, or at all...
Thus your comment,
"The consumer decision usually takes many concerns into account, including value vs. risk."
Is in practice effectively worthless in an information-service market, as we know from many studies that even in the known, tangible world the "consumer" overrates "value" and underrates "risk" as a norm...
And the world's largest industry (marketing) does its best to reinforce this situation (just look at Microsoft's '8 second' 'I'm a PC' adverts to see this).
Thirdly, (3): there is the assumption that the "purchaser" is the only person to "use" the "good or service", and the implication that they alone take the hit of failure.
This is wrong even in the physical world, and it is an area that free-market economics has studiously ignored (markets and players in isolation).
For instance, take a machine-shop power tool such as a metal guillotine or press. In the free market the machine-shop owner sees no utility in safety guards; they are an extra expense that also slows down the use of the machine. Likewise the machine operator often sees the safety guard as an impediment to doing their job and earning more. Thus on the surface their drivers appear to be the same (neither wants safety guards).
However the risk is not the same for the owner and the operator. The operator is looking at catastrophic loss to themselves (i.e. a hand chopped off) and consequently to their ability to earn. To the owner the only loss in a free market is the time to clean down the machine and get another operator to take over. The owner very rarely has to consider that a replacement operator is not an option. The operator who cannot work either starves or falls back on others such as the state.
It is for this reason we have safety laws (i.e. the state does not like picking up the tab any more than the insurance industry does), which enforce the use of safety guards and adequate insurance in the workplace.
However, when you step from a physically constrained traditional "good or service" to an 'information service' that is not physically constrained, real issues arise.
The "owner" may be faced with catastrophic failure as well as the operator, as the norm. We have seen this with businesses that fail to keep and properly test backups. And we have recently seen the effect of market-wide failure.
The 'free markets and players in isolation' viewpoint just does not apply any longer, and we need to wake up to this fairly soon. The alarm has gone off; we cannot pretend it has not by hitting the snooze button and staying in the land of sweet dreams.
We then need to look at how this affects a "purchaser's" view of the utility of an information service.
First off is the issue of negligible or zero monetary cost; in a traditional model this would indicate near-infinite utility. That obviously does not make sense, so there are other costs to consider.
Two immediately measurable costs are the "purchaser's" energy and time costs.
However the "purchaser's" energy expenditure is effectively the same as money; although it is an attributable cost of any given information service, a "purchaser" invariably doesn't view it that way.
Nor, for that matter, do the information-service suppliers (other than as a way to externalise a significant part of their service costs).
So although energy is real, measurable and attributable to a given information service, it is treated by the "purchaser" as an infrastructure cost; thus they generally do not count the energy cost when evaluating an information service.
However the time cost is very real to the "purchaser". If they cannot get what they want in a very short time period from any given information service they have three choices,
A, Switch to an alternative information service.
B, Accept the time loss.
C, Mitigate the time loss.
In a restricted market with 'no choice' there is no option to switch.
It may also not be possible to switch because the "purchaser" does not use most information services in isolation (that is, they use them as a communications tool to reach others).
So immediately there are two effectively monopolistic constraints acting on the user's perception of 'utility' with respect to the time cost. That is, as an individual they don't have a choice, at best only a 'herd choice'.
They then have only the choice to accept or mitigate. Invariably they will choose to mitigate, by learning the system.
However this is not in the interests of the information (gathering) service provider, as it would allow the user to sidestep their revenue model.
So, conversely to what you would expect, it is in the information service provider's interest to have a very complex and difficult-to-navigate service. They simply provide 'scripted actions' for "purchasers" to click on to give 'canned results'.
Provided the "purchaser" of an 'all you can eat for free' information service perceives sufficient utility in the 'scripted actions', they will follow the path of least resistance in the majority of cases.
Those who would choose otherwise invariably find the non-scripted interface too complex or tedious to navigate and use properly, and thus go back to the 'canned scripts'.
Thus the service provider pushes the "purchasers" into their chosen business model.
And, as has been seen, if some users actually work out how to use the non-scripted interface and enable privacy, post how to do it on the Internet, and the post gets traction... invariably the information service providers introduce a new function that renders the post ineffective, and provide a script to set the defaults back the way they want them to be.
To pretend this type of information service is a "free market" is at best a very poor interpretation; more correctly it is a cartel or monopoly based on a concealed business model.
Unfortunately this model has a hidden catch for your "purchasers", in that there is a time lag in their perception of cost, and it is invariably catastrophic when it occurs.
That is, when a "purchaser", or one of their co-users of an information system, releases "personal information", they don't immediately see what effect it has. The "pigeons may not come home to roost" for some time, and frequently the longer the wait the worse the impact.
Thus they cannot easily evaluate the risk, and thus the long-term cost to themselves.
In this respect it's a little like being a teenager who, in a first job, makes the mistake of going topless on the "company summer beach trip". A photo or two is taken, but there are no consequences work-wise or socially, so it gets forgotten. A few years later she meets a man she decides to marry. Unfortunately, being a prince, he attracts a degree of press interest; one of her ex-co-workers digs out the topless photo and sells it to the tabloids. The young lady wakes up one morning to find herself being sanctioned by all and sundry, her morals called into question, and with them whether she is a fit and proper person to marry a prince...
If you think this is a one-off chance, ask yourself how many politicians have had photos dug up of them using drugs, or acting in ways that earn moral censure, while at university etc.?
Then how about the quite ordinary students at a UK university whose fairly mild party antics were posted to Facebook etc. by others, and who found themselves expelled from university just before graduation?
Many people, especially those of less mature years, do not have the life experience to judge risk, and they are being exploited for this shortcoming by some information service providers, who gather personal information for profit just like any drug dealer does.
And as I assume you are aware, we have laws to criminalise drug dealers, the sellers of defective or counterfeit goods, and those who blackmail or defraud us; all these are considered necessary protections for society.
Thus I actually see no reason why the trading of personal information should not be considered for similar legislation; 'the unrestrained right to know' is most generally touted by those with a profit to make (marketing execs etc.).
The simple task of giving people the right to exercise control over personal information pertaining to them, to prevent others making a profit from it, is not exactly a world-shattering idea, or a morally objectionable one, is it?
@Christian... just yesterday I wrote up the tale of a person who was threatened with livelihood-crushing legal action for doing pretty much what you described with publicly accessible Facebook data.
His very simple plan was to build social network relationship maps.
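The kind of relationship map described can be sketched in a few lines (a hypothetical illustration with made-up names, not the person's actual code; doing this against real scraped data is precisely what attracted the legal threat):

```python
# Minimal relationship-map sketch: build an undirected adjacency map
# from publicly visible (person, friend) pairs.

from collections import defaultdict

# Hypothetical publicly visible friend pairs.
public_friend_pairs = [
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "carol"), ("carol", "dave"),
]

graph = defaultdict(set)
for a, b in public_friend_pairs:
    graph[a].add(b)
    graph[b].add(a)

# Even this toy map surfaces relationships never stated directly:
# alice and dave are linked through a mutual friend.
mutual = graph["alice"] & graph["dave"]
print(sorted(mutual))  # ['carol']
```

The point of the example is how little machinery is needed: a dictionary of sets over public data already infers second-order connections, which is why such maps are both valuable and contentious.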
What do you think of the new Digital Due Process coalition for revising the Electronic Communications Privacy Act? http://www.digitaldueprocess.org/index.cfm?...
I'm all for it, as there are certainly many rules in the physical world that could and should be transferred to the online one, but I wonder if a more fundamental shift is needed in how we view personal data. I work for a nonprofit, The Common Data Project, where we're working on building a member-based data bank where people can control how their own data is shared and re-used. It wouldn't immediately address all the privacy issues you mention, but we believe we have to change some basic assumptions about how data is collected and shared.
@ Clive Robinson
Very well put. Do you mind if I use some of that content?
The power to control personal information is not a new idea. It has been accepted as a principle (the "self-determination principle") in Europe and is present throughout European legislation and case law. A. Westin was one of the first scholars to introduce this concept.
I completely agree with you that legislation is necessary because I believe that privacy is a necessary social good.
However, it is a modern-day social invention, and so arguably is not a fundamental right. Social, regulatory, and market forces all play a role in the survival of privacy.
Because market forces are deconstructing it, despite the resistance of social forces, regulatory forces must be brought to bear.
Again, this is contingent on society placing value on privacy. If we don't care about privacy, then let it die.
@ Clive Robinson
Your comment has good raw content and was longer than most blog posts. If you haven't yet, you should publish it.
The reason social networks don't work in the market is that network effects make them a natural monopoly.
The reason I'm on Facebook is that many of my friends and family are. That enables me to keep track of them easily. If I were to join another social networking site, it wouldn't do me any good. So, it's Facebook or nothing.
If the network was a generic one, and there were various services I could buy into to join it (and I don't know how that would work), then I could choose to buy one with more privacy safeguards. That would be a legitimate market-based solution. As it is, I can either be on Facebook or on no useful social site.
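The network-effect argument above can be put as back-of-envelope arithmetic (my numbers, for illustration only): if a network's value scales with the number of possible user pairs, n*(n-1)/2, one big network is worth far more than the same users split across two incompatible ones.

```python
# Back-of-envelope network-effect arithmetic: value as the number of
# possible user pairs, n*(n-1)/2 (the intuition behind Metcalfe's law).

def pair_value(n):
    """Number of distinct pairs among n users."""
    return n * (n - 1) // 2

users = 1000
one_network = pair_value(users)           # everyone on one site
two_networks = pair_value(users // 2) * 2 # same users split across two sites

print(one_network)   # 499500
print(two_networks)  # 249500 -- roughly half the value for the same users
```

This is why joining a second social site "wouldn't do me any good": splitting the user base roughly halves the reachable pairs, so users converge on whichever network is already biggest.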
@Z "I'm not sure legislation is the answer. Why not let the market govern what is accepted in terms of privacy? User backlash will keep companies in check."
So how much is Facebook paying for that quaint, naive belief set, one that is ultimately profitable for those who seek to continue, with impunity, to kill privacy and profit off people's information?
The current economy is an example of how well letting the marketplace govern works. The market can't and won't govern, because those seeking to make a profit are not willing to take a hands-off approach and let the market govern on its own.
According to an article by Cory Doctorow, when adults seem willing to give up privacy for the sake of security, it may give kids the message that privacy is not important. At the same time, adults are concerned about kids not valuing privacy...
Bebo kids will value privacy when they see adults do too
In the area of medical privacy, the claim has been made that there are other things (such as cost containment and preventing fraud) that have been established as being more important than privacy. In particular, there is the writing "Health Privacy: The Way We Live Now (Gellman)" from Robert Gellman at http://www.privacyrights.org/ar/gellman-med.htm on the Web. One might imagine this same explanation being applied to lax privacy protections in other areas. Yet one could ask the question: for those who are involved in something such as medical fraud prevention, would they care about the privacy of their medical information?
For some parties, it might seem that privacy is not important unless their privacy is involved. In 2003, a consumer group posted partial Social Security numbers for several California lawmakers who had opposed a financial privacy bill. Though the posting was done after the bill failed to pass, lawmakers (not surprisingly) were unhappy. See http://www.sfgate.com/cgi-bin/article.cgi?file=/...
There are already legal boundaries that outlaw wiretapping, interception, and eavesdropping by communication companies.
Which is why the Phorm affair in the UK was a crime. It was wiretapping (as well as copyright infringement, fraud, computer misuse, intellectual property theft).
And I didn't even need to mention privacy and personal data; there's enough there already.
Which is why I'll never use BT again for the rest of my life.
I guess one wider question is whether those eavesdropping/wiretapping laws should also apply to social networking services (after all, aren't they communication service providers of a kind?).
The essential difference is that all the users are subject to the terms & conditions of a single provider (for better or worse).
Perhaps the problem is now that commercial organisations want to participate in social networks, they face their intellectual property/commercial communications with other participants being used to promote competitors.
A communication network isn't a true communication system without privacy/security/integrity of your communications. Without those essential characteristics, "it just doesn't work" (tm) unless you resort to encryption.
Disagree that legislation is the solution. Technological solutions have not been exhausted yet. For example, does Google NEED all the personal data to provide the service it does? Does Facebook NEED all the personal data to build a social network? Cannot cryptography in its modern state solve some of this?
Apparently it can, but privacy doesn't provide a sufficient differentiator. That's why Hushmail is not as popular as GMail. That's why so many startups in the field fail. People really do not care enough about their privacy (yet) to create sufficient market force for the technology to advance.
Strangely enough, people care about privacy sufficiently to make this a sexy topic for politicians. I don't understand this; it should be the other way around. You first make your personal economic choices (which service to use or avoid), and only then (if the market doesn't have solutions for you) do you hope your MP will solve it.
Maybe part of the problem is that your personal information has a price tag, and companies are willing to give you free services in exchange for it. Private email will cost money, while non-private one will not. If people "care" about their privacy enough to support certain politicians, but not enough to prefer more expensive but private services, then they do not deserve any legislation on the matter.
As one of the posters said, if a large group of people acting freely are not able to create the conditions for privacy, legislation is either redundant or will not help.
You say that Privacy is about control. You don't want others to sell health records that refer to you, or that the NSA eavesdrops on your e-mail conversations. You may not mind sharing your personal lives and thoughts, but you want to control how, where and with whom. You say that a privacy failure is a control failure.
However, you continue by saying that the real problem is that companies make big money off your back and mine. This is about our need to be treated with dignity and respect. That is what market (and other) forces laugh at, and that is what the problem is about.
Me controlling the whereabouts of my information is not going to stop this problem. First, it hinders respectful parties that provide services I do like. Second, it doesn't stop disrespectful parties from brutally making money off of me. Third, I would not know how to set my privacy settings, as any setting only makes it more difficult or easier for *both* kinds of parties to provide what they consider to be a service to me. I would be quite surprised if disrespectful parties were impeded from doing what they want even if such controls were in plain sight.
I therefore disagree with you. To me, privacy is about people respecting one another, and caring enough to value each other's dignity. Rather than focusing on controlling the whereabouts of information about me, I would be interested in controlling the kinds of actions that would be consequent on knowing things about me. And since I cannot enforce this (or the control of the whereabouts of my information, for that matter; article 10 of the European Convention on Human Rights (ECHR) prohibits this to a large extent), I would like to see some good legislation, stiff penalties and proper enforcement.
Thanks - this was helpful and informative.
Hi Bruce. I was wondering what you thought of the recent changes to online gaming forums. They are part of a larger push to merge Facebook and games like World of Warcraft (which has 11.5 million subscribers) into one community where online personas and real-world identity are no longer separate. The existing player base is obviously outraged; http://forums.worldofwarcraft.com/thread.html?...
Activision is planning to have users' real names posted in forums when they make a post. This means visitors to tech-support, social and customer-service forums will see your real-life identity if you post, and thus the once-anonymous sanctuary of online gaming is gone.
This raises safety issues for parents; concerns for employees who worry their gaming habits are not viewed favourably, for women who get harassed, for gay and lesbian gamers who play in GLBT online guilds, and for racial minorities who may be discriminated against in-game. Gamers can be hacked and have their accounts used to post in forums advertising illegal material, meaning those who do not wish to participate in the public outing of their name may not even have a choice. There seem to be a lot of security issues raised as this corporation pushes ahead with massive changes. Thanks for reading!
Public proclamations on these issues may have the tactical legal effect of attempting to define "social norms" with regard to privacy.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.