Entries Tagged "control"
If your data is online, it is not private. Oh, maybe it seems private. Certainly, only you have access to your e-mail. Well, you and your ISP. And the sender’s ISP. And any backbone provider who happens to route that mail from the sender to you. And, if you read your personal mail from work, your company. And, if they have taps at the correct points, the NSA and any other sufficiently well-funded government intelligence organization — domestic and international.
You could encrypt your mail, of course, but few of us do that. Most of us now use webmail. The general problem is that, for the most part, your online data is not under your control. Cloud computing and software as a service only exacerbate the problem.
Your webmail is less under your control than it would be if you downloaded your mail to your computer. If you use Salesforce.com, you’re relying on that company to keep your data private. If you use Google Docs, you’re relying on Google. This is why the Electronic Privacy Information Center recently filed a complaint with the Federal Trade Commission: many of us are relying on Google’s security, but we don’t know what it is.
This is new. Twenty years ago, if someone wanted to look through your correspondence, he had to break into your house. Now, he can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your office; now it’s on a computer owned by a telephone company. Your financial accounts are on remote websites protected only by passwords; your credit history is collected, stored, and sold by companies you don’t even know exist.
And more data is being generated. Lists of books you buy, as well as the books you look at, are stored in the computers of online booksellers. Your affinity card tells your supermarket what foods you like. What were cash transactions are now credit card transactions. What used to be an anonymous coin tossed into a toll booth is now an EZ Pass record of which highway you were on, and when. What used to be a face-to-face chat is now an e-mail, IM, or SMS conversation — or maybe a conversation inside Facebook.
Remember when Facebook recently changed its terms of service to take further control over your data? They can do that whenever they want, you know.
We have no choice but to trust these companies with our security and privacy, even though they have little incentive to protect them. None of ChoicePoint, LexisNexis, Bank of America, or T-Mobile bears the costs of privacy violations or any resultant identity theft.
This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as others hold that data. If the police want to read the e-mail on your computer, they need a warrant; but they don’t need one to read it from the backup tapes at your ISP.
This isn’t a technological problem; it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant — even though it occurred at the phone company switching office and not in the target’s home or office — the Supreme Court must recognize that reading personal e-mail at an ISP is no different.
This essay was originally published on the SearchSecurity.com website, as the second half of a point/counterpoint with Marcus Ranum.
I was in Dubai last weekend for the World Economic Forum Summit on the Global Agenda. (I was on the “Future of the Internet” council; fellow council members Ethan Zuckerman and Jeff Jarvis have written about the event.)
As part of the United Arab Emirates, Dubai censors the Internet:
The government of the United Arab Emirates (UAE) pervasively filters Web sites that contain pornography or relate to alcohol and drug use, gay and lesbian issues, or online dating or gambling. Web-based applications and religious and political sites are also filtered, though less extensively. Additionally, legal controls limit free expression and behavior, restricting political discourse and dissent online.
More detail here.
What was interesting to me was how reasonably the policy was executed. Unlike some countries — China, for example — that simply block objectionable content, the UAE displays a screen indicating that the URL has been blocked and offers information about its appeals process.
Definitely strange bedfellows:
A United Nations agency is quietly drafting technical standards, proposed by the Chinese government, to define methods of tracing the original source of Internet communications and potentially curbing the ability of users to remain anonymous.
The U.S. National Security Agency is also participating in the “IP Traceback” drafting group, named Q6/17, which is meeting next week in Geneva to work on the traceback proposal. Members of Q6/17 have declined to release key documents, and meetings are closed to the public.
A second, apparently leaked ITU document offers surveillance and monitoring justifications that seem well-suited to repressive regimes:
A political opponent to a government publishes articles putting the government in an unfavorable light. The government, having a law against any opposition, tries to identify the source of the negative articles but the articles having been published via a proxy server, is unable to do so protecting the anonymity of the author.
This is being sold as a way to go after the bad guys, but it won’t help. Here’s Steve Bellovin on that issue:
First, very few attacks these days use spoofed source addresses; the real IP address already tells you where the attack is coming from. Second, in case of a DDoS attack, there are too many sources; you can’t do anything with the information. Third, the machine attacking you is almost certainly someone else’s hacked machine and tracking them down (and getting them to clean it up) is itself time-consuming.
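Bellovin's first point rests on how IPv4 itself works: nothing in the packet format authenticates the source address, so a traceback scheme is solving a problem attackers rarely bother to exploit anymore. As a minimal illustration (mine, not Bellovin's; it uses only Python's standard library), the "source" of a packet is just four bytes the sender writes into the header:

```python
import socket
import struct

def build_ipv4_header(src: str, dst: str, payload_len: int = 0) -> bytes:
    """Build a minimal 20-byte IPv4 header. The source address field is
    whatever the sender chooses to write; nothing in the format verifies it."""
    version_ihl = (4 << 4) | 5              # IPv4, header length 5 * 32-bit words
    total_len = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_len,           # version/IHL, DSCP/ECN, total length
        0, 0,                                # identification, flags/fragment offset
        64, socket.IPPROTO_TCP, 0,           # TTL, protocol, checksum (left 0 here)
        socket.inet_aton(src),               # source address: sender-controlled
        socket.inet_aton(dst),               # destination address
    )

def parse_source(header: bytes) -> str:
    """What a receiver -- or a traceback system that trusts the packet --
    reads as the 'source': bytes 12-15 of the header."""
    return socket.inet_ntoa(header[12:16])

forged = build_ipv4_header("10.0.0.1", "192.0.2.7")
print(parse_source(forged))  # -> 10.0.0.1, whether or not the sender owns it
```

A receiver sees whatever the sender put there, which is exactly why spoofing is possible in principle; Bellovin's observation is that, in practice, most attacks today don't even bother to forge it.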
IP traceback is most useful in monitoring the activities of large masses of people. But of course, that's why the Chinese government and the NSA are so interested in this proposal in the first place.
It’s hard to figure out what the endgame is; the U.N. doesn’t have the authority to impose Internet standards on anyone. In any case, this idea is counter to the U.N. Universal Declaration of Human Rights, Article 19: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” In the U.S., it’s counter to the First Amendment, which has long permitted anonymous speech. On the other hand, basic human and constitutional rights have been jettisoned left and right in the years after 9/11; why should this be any different?
But when the Chinese government and the NSA get together to enhance their ability to spy on us all, you have to wonder what’s gone wrong with the world.
It used to be that just the entertainment industries wanted to control your computers — and televisions and iPods and everything else — to ensure that you didn’t violate any copyright rules. But now everyone else wants to get their hooks into your gear.
OnStar will soon include the ability for the police to shut off your engine remotely. Buses are getting the same capability, in case terrorists want to re-enact the movie Speed. The Pentagon wants a kill switch installed on airplanes, and is worried about potential enemies installing kill switches on their own equipment.
Microsoft is doing some of the most creative thinking along these lines, with something it’s calling “Digital Manners Policies.” According to its patent application, DMP-enabled devices would accept broadcast “orders” limiting their capabilities. Cellphones could be remotely set to vibrate mode in restaurants and concert halls, and be turned off on airplanes and in hospitals. Cameras could be prohibited from taking pictures in locker rooms and museums, and recording equipment could be disabled in theaters. Professors finally could prevent students from texting one another during class.
The possibilities are endless, and very dangerous. Making this work involves building a nearly flawless hierarchical system of authority. That’s a difficult security problem even in its simplest form. Distributing that system among a variety of different devices — computers, phones, PDAs, cameras, recorders — with different firmware and manufacturers, is even more difficult. Not to mention delegating different levels of authority to various agencies, enterprises, industries and individuals, and then enforcing the necessary safeguards.
Once we go down this path — giving one device authority over other devices — the security problems start piling up. Who has the authority to limit functionality of my devices, and how do they get that authority? What prevents them from abusing that power? Do I get the ability to override their limitations? In what circumstances, and how? Can they override my override?
How do we prevent this from being abused? Can a burglar, for example, enforce a “no photography” rule and prevent security cameras from working? Can the police enforce the same rule to avoid another Rodney King incident? Do the police get “superuser” devices that cannot be limited, and do they get “supercontroller” devices that can limit anything? How do we ensure that only they get them, and what do we do when the devices inevitably fall into the wrong hands?
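To make the authority problem concrete, here is a toy sketch of a DMP-style device (entirely hypothetical: the patent application doesn't specify a protocol, and every name below is invented). The single trust check is where every one of the questions above lives:

```python
from dataclasses import dataclass

@dataclass
class PolicyOrder:
    issuer: str       # who claims the authority to limit this device
    capability: str   # e.g. "camera", "ringer", "recorder"
    allowed: bool     # the capability state the issuer demands

class Device:
    def __init__(self, capabilities):
        self.caps = {c: True for c in capabilities}
        # Who belongs in this set, who maintains it, who can override it,
        # and what stops an attacker from getting into it are exactly the
        # unsolved questions; a hard-coded set stands in for all of them.
        self.trusted_issuers = {"venue-authority"}

    def receive(self, order: PolicyOrder) -> bool:
        """Apply a broadcast order if -- and only if -- we trust its issuer."""
        if order.issuer in self.trusted_issuers and order.capability in self.caps:
            self.caps[order.capability] = order.allowed
            return True
        return False  # ignored: unknown issuer or unknown capability
```

In this sketch a burglar broadcasting a "no photography" order is simply an untrusted issuer and is ignored; in any real deployment, distinguishing the burglar from the museum is the nearly flawless hierarchy of authority the scheme would require.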
It’s comparatively easy to make this work in closed specialized systems — OnStar, airplane avionics, military hardware — but much more difficult in open-ended systems. If you think Microsoft’s vision could possibly be securely designed, all you have to do is look at the dismal effectiveness of the various copy-protection and digital-rights-management systems we’ve seen over the years. That’s a similar capabilities-enforcement mechanism, albeit simpler than these more general systems.
And that’s the key to understanding this system. Don’t be fooled by the scare stories of wireless devices on airplanes and in hospitals, or visions of a world where no one is yammering loudly on their cellphones in posh restaurants. This is really about media companies wanting to exert their control further over your electronics. They not only want to prevent you from surreptitiously recording movies and concerts, they want your new television to enforce good “manners” on your computer, and not allow it to record any programs. They want your iPod to politely refuse to copy music to a computer other than your own. They want to enforce their legislated definition of manners: to control what you do and when you do it, and to charge you repeatedly for the privilege whenever possible.
“Digital Manners Policies” is a marketing term. Let’s call this what it really is: Selective Device Jamming. It’s not polite, it’s dangerous. It won’t make anyone more secure — or more polite.
This essay originally appeared in Wired.com.
The TSA has a new photo ID requirement:
Beginning Saturday, June 21, 2008 passengers that willfully refuse to provide identification at security checkpoint will be denied access to the secure area of airports. This change will apply exclusively to individuals that simply refuse to provide any identification or assist transportation security officers in ascertaining their identity.
This new procedure will not affect passengers that may have misplaced, lost or otherwise do not have ID but are cooperative with officers. Cooperative passengers without ID may be subjected to additional screening protocols, including enhanced physical screening, enhanced carry-on and/or checked baggage screening, interviews with behavior detection or law enforcement officers and other measures.
That’s right; people who refuse to show ID on principle will not be allowed to fly, but people who claim to have lost their ID will. I feel well-protected against terrorists who can’t lie.
EDITED TO ADD (6/11): Daniel Solove comments.
In the information age, we all have a data shadow.
We leave data everywhere we go. It’s not just our bank accounts and stock portfolios, or our itemized bills, listing every credit card purchase and telephone call we make. It’s automatic road-toll collection systems, supermarket affinity cards, ATMs and so on.
It’s also our lives. Our love letters and friendly chat. Our personal e-mails and SMS messages. Our business plans, strategies and offhand conversations. Our political leanings and positions. And this is just the data we interact with. We all have shadow selves living in the data banks of hundreds of corporations and information brokers — information about us that is both surprisingly personal and uncannily complete — except for the errors that we can neither see nor correct.
What happens to our data happens to ourselves.
This shadow self doesn’t just sit there: It’s constantly touched. It’s examined and judged. When we apply for a bank loan, it’s our data that determines whether or not we get it. When we try to board an airplane, it’s our data that determines how thoroughly we get searched — or whether we get to board at all. If the government wants to investigate us, they’re more likely to go through our data than they are to search our homes; for a lot of that data, they don’t even need a warrant.
Who controls our data controls our lives.
It’s true. Whoever controls our data can decide whether we can get a bank loan, onto an airplane, or into a country. Or what sort of discount we get from a merchant, or even how we’re treated by customer support. A potential employer can, illegally in the U.S., examine our medical data and decide whether or not to offer us a job. The police can mine our data and decide whether or not we’re a terrorist risk. If a criminal can get hold of enough of our data, he can open credit cards in our names, siphon money out of our investment accounts, even sell our property. Identity theft is the ultimate proof that control of our data means control of our lives.
We need to take back our data.
Our data is a part of us. It’s intimate and personal, and we have basic rights to it. It should be protected from unwanted touch.
We need a comprehensive data privacy law. This law should protect all information about us, and not be limited merely to financial or health information. It should limit others’ ability to buy and sell our information without our knowledge and consent. It should allow us to see information about us held by others, and correct any inaccuracies we find. It should prevent the government from going after our information without judicial oversight. It should enforce data deletion, and limit data collection, where necessary. And we need more than token penalties for deliberate violations.
This is a tall order, and it will take years for us to get there. It’s easy to do nothing and let the market take over. But as we see with things like grocery store club cards and click-through privacy policies on websites, most people either don’t realize the extent to which their privacy is being violated or don’t have any real choice. And businesses, of course, are more than happy to collect, buy, and sell our most intimate information. But the long-term effects of this on society are toxic; we give up control of ourselves.
This essay originally appeared on Wired.com.
EDITED TO ADD (5/21): A rebuttal.
What took place on a peaceful Californian university campus nearly four decades ago still has the power to disturb. Eager to explore the way that “situation” can shape behaviour, the young psychologist Philip Zimbardo enrolled students to spend two weeks in a simulated jail environment, where they would randomly be assigned roles as either prisoners or guards.
Zimbardo’s volunteers were bright, liberal young men of good character, brimming with opposition to the Vietnam war and authority in general. All expressed a preference to be prisoners, a role they could relate to better. Yet within days the strong, rebellious “prisoners” had become depressed and hopeless. Two broke down emotionally, crushed by the behaviour of the “guards”, who had embraced their authoritarian roles in full, some becoming ever-more sadistic, others passively accepting the abuses taking place in front of them.
Transcripts of the experiment, published in Zimbardo’s book The Lucifer Effect: Understanding How Good People Turn Evil, record in terrifying detail the way reality slipped away from the participants. On the first day, a Sunday, it is all self-conscious play-acting between college buddies. On Monday the prisoners start a rebellion, and the guards clamp down, using solitary confinement, sleep deprivation and intimidation. One guard refers to “these dangerous prisoners”. They have to be prevented from using physical force.
Control techniques become more creative and sadistic. The prisoners are forced to repeat their numbers over and over at roll call, and to sing them. They are woken repeatedly in the night. Their blankets are rolled in dirt and they are ordered painstakingly to pick them clean of burrs. They are harangued and pitted against one another, forced to humiliate each other, pulled in and out of solitary confinement.
On day four, a priest visits. Prisoner 819 is in tears, his hands shaking. Rather than question the experiment, the priest tells him, “You’re going to have to get less emotional.” Later, a guard leads the inmates in chanting “Prisoner 819 did a bad thing!” and blaming him for their poor conditions.
Zimbardo finds 819 covering his ears, “a quivering mess, hysterical”, and says it is time to go home. But 819 refuses to leave until he has proved to his fellow prisoners that he isn’t “bad”. “Listen carefully to me, you’re not 819,” says Zimbardo. “You are Stewart and my name is Dr Zimbardo. I am a psychologist not a prison superintendent, and this is not a real prison.” 819 stops sobbing “and looks like a small child awakening from a nightmare”, according to Zimbardo. But it doesn’t seem to occur to him that things are going too far.
Guard Hellmann, leader of the night shift, plumbs new depths. He wakes up the prisoners to shout abuse in their faces. He forces them to play leapfrog dressed only in smocks, their genitals exposed. A new prisoner, 416, replaces 819, and brings fresh perspective. “I was terrified by each new shift of guards,” he says. “I knew by the first evening that I had done something foolish to volunteer for this study.”
The study is scheduled to run for two weeks. On the evening of Thursday, the fifth day, Zimbardo’s girlfriend, Christina Maslach, also a psychologist, comes to meet him for dinner. She is confronted by a line of prisoners en route to the lavatory, bags over their heads, chained together by the ankles. “What you’re doing to these boys is a terrible thing,” she tells Zimbardo. “Don’t you understand this is a crucible of human behaviour?” he asks. “We are seeing things no one has witnessed before in such a situation.” She tells him this has made her question their relationship, and the person he is.
Downstairs, Guard Hellmann is yelling at the prisoners. “See that hole in the ground? Now do 25 push-ups, fucking that hole. You hear me?” Three prisoners are forced to be “female camels”, bent over, their naked bottoms exposed. Others are told to “hump” them and they simulate sodomy. Zimbardo ends the experiment the following morning.
To read the transcripts or watch the footage is to follow a rapid and dramatic collapse of human decency, resilience and perspective. And so it should be, says Zimbardo. “Evil is a slippery slope,” he says. “Each day is a platform for the abuses of the next day. Each day is only slightly worse than the previous day. Once you don’t object to those first steps it is easy to say, ‘Well, it’s only a little worse than yesterday.’ And you become morally acclimatised to this kind of evil.”
EDITED TO ADD (5/13): The website is worth visiting, especially the section on resisting influence.
A review of Access Denied, edited by Ronald Deibert, John Palfrey, Rafal Rohozinski and Jonathan Zittrain, MIT Press: 2008.
In 1993, Internet pioneer John Gilmore said “the net interprets censorship as damage and routes around it”, and we believed him. In 1996, cyberlibertarian John Perry Barlow issued his ‘Declaration of the Independence of Cyberspace’ at the World Economic Forum at Davos, Switzerland, and online. He told governments: “You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear.”
At the time, many shared Barlow’s sentiments. The Internet empowered people. It gave them access to information and couldn’t be stopped, blocked or filtered. Give someone access to the Internet, and they have access to everything. Governments that relied on censorship to control their citizens were doomed.
Today, things are very different. Internet censorship is flourishing. Organizations selectively block employees’ access to the Internet. At least 26 countries — mainly in the Middle East, North Africa, Asia, the Pacific and the former Soviet Union — selectively block their citizens’ Internet access. Even more countries legislate to control what can and cannot be said, downloaded or linked to. “You have no sovereignty where we gather,” said Barlow. Oh yes we do, the governments of the world have replied.
Access Denied is a survey of the practice of Internet filtering, and a sourcebook of details about the countries that engage in the practice. It is written by researchers of the OpenNet Initiative (ONI), an organization dedicated to documenting Internet filtering around the world.
The first half of the book comprises essays written by ONI researchers on the politics, practice, technology, legality and social effects of Internet filtering. There are three basic rationales for Internet censorship: politics and power; social norms, morals and religion; and security concerns.
Some countries, such as India, filter only a few sites; others, such as Iran, extensively filter the Internet. Saudi Arabia tries to block all pornography (social norms and morals). Syria blocks everything from the Israeli domain “.il” (politics and power). Some countries filter only at certain times. During the 2006 elections in Belarus, for example, the website of the main opposition candidate disappeared from the Internet.
The effectiveness of Internet filtering is mixed; it depends on the tools used and the granularity of filtering. It is much easier to block particular URLs or entire domains than it is to block information on a particular topic. Some countries block specific sites or URLs based on some predefined list but new URLs with similar content appear all the time. Other countries — notably China — try to filter on the basis of keywords in the actual web pages. A halfway measure is to filter on the basis of URL keywords: names of dissidents or political parties, or sexual words.
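The difference in granularity can be sketched in a few lines (a hypothetical illustration; real national filters run in routers and proxies at vastly larger scale, and every domain and keyword below is invented):

```python
from urllib.parse import urlparse

def blocked(url: str, blocklist: set, keywords: list) -> bool:
    """Toy filter combining the two approaches described above:
    an exact domain blocklist, plus keyword matching on the URL."""
    host = urlparse(url).netloc.lower()
    # Exact-list filtering: easy to implement, but new domains with
    # the same content appear all the time.
    if host in blocklist:
        return True
    # The "halfway measure": match keywords anywhere in the URL itself.
    lowered = url.lower()
    return any(kw in lowered for kw in keywords)

blocklist = {"example-banned.org"}
keywords = ["dissident-name", "opposition-party"]

print(blocked("http://example-banned.org/news", blocklist, keywords))            # True
print(blocked("http://mirror.example.net/dissident-name/bio", blocklist, keywords))  # True
print(blocked("http://example.net/weather", blocklist, keywords))                # False
```

The blocklist catches only the exact domain; the URL-keyword rule also catches the mirror, which is why it is a tempting halfway measure, and why it both overblocks and still misses pages whose URLs reveal nothing about their content.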
Much of the technology has other applications. Software for filtering is a legitimate product category, purchased by schools to limit access by children to objectionable material and by corporations trying to prevent their employees from being distracted at work. One chapter discusses the ethical implications of companies selling products, services and technologies that enable Internet censorship.
Some censorship is legal, not technical. Countries have laws against publishing certain content, registration requirements that prevent anonymous Internet use, liability laws that force Internet service providers to filter themselves, or surveillance. Egypt does not engage in technical Internet filtering; instead, its laws discourage the publishing and reading of certain content — it has even jailed people for their online activities.
The second half of Access Denied consists of detailed descriptions of Internet use, regulations and censorship in eight regions of the world, and in each of 40 different countries. The ONI found evidence of censorship in 26 of those 40. For the other 14 countries, it summarizes the legal and regulatory framework surrounding Internet use, and the test results that indicated no censorship. This leads to 200 pages of rather dry reading, but it is vitally important to have this information well-documented and easily accessible. The book’s data are from 2006, but the authors promise frequent updates on the ONI website.
No set of Internet censorship measures is perfect. It is often easy to find the same information on uncensored URLs, and relatively easy to get around the filtering mechanisms and to view prohibited web pages if you know what you’re doing. But most people don’t have the computer skills to bypass controls, and in a country where doing so is punishable by jail — or worse — few take the risk. So even porous and ineffective attempts at censorship can become very effective socially and politically.
In 1996, Barlow said: “You are trying to ward off the virus of liberty by erecting guard posts at the frontiers of cyberspace. These may keep out the contagion for some time, but they will not work in a world that will soon be blanketed in bit-bearing media.”
Brave words, but premature. Certainly, there is much more information available to many more people today than there was in 1996. But the Internet is made up of physical computers and connections that exist within national boundaries. Today’s Internet still has borders and, increasingly, countries want to control what passes through them. In documenting this control, the ONI has performed an invaluable service.
This was originally published in Nature.
Wine Therapy is a web bulletin board for serious wine geeks. It’s been active since 2000, and its database of back posts and comments is a wealth of information: tasting notes, restaurant recommendations, stories and so on. Late last year someone hacked the board software, got administrative privileges and deleted the database. There was no backup.
Of course the board’s owner should have been making backups all along, but he has been very sick for the past year and wasn’t able to. And the Internet Archive has been only somewhat helpful.
More and more, information we rely on — either created by us or by others — is out of our control. It’s out there on the internet, on someone else’s website and being cared for by someone else. We use those websites, sometimes daily, and don’t even think about their reliability.
Bits and pieces of the web disappear all the time. It’s called “link rot,” and we’re all used to it. A friend saved 65 links in 1999 when he planned a trip to Tuscany; only half of them still work today. In my own blog, essays and news articles and websites that I link to regularly disappear — sometimes within a few days of my linking to them.
It may be because of a site’s policies — some newspapers only have a couple of weeks on their website — or it may be more random: Position papers disappear off a politician’s website after he changes his mind on an issue, corporate literature disappears from the company’s website after an embarrassment, etc. The ultimate link rot is “site death,” where entire websites disappear: Olympic and World Cup events after the games are over, political candidates’ websites after the elections are over, corporate websites after the funding runs out and so on.
Mostly, we ignore the issue. Sometimes I save a copy of a good recipe I find, or an article relevant to my research, but mostly I trust that whatever I want will be there next time. Were I planning a trip to Tuscany, I would rather search for relevant articles today than rely on a nine-year-old list anyway. Most of the time, link rot and site death aren’t really a problem.
This is changing in a Web 2.0 world, with websites that are less about information and more about community. We help build these sites, with our posts or our comments. We visit them regularly and get to know others who also visit regularly. They become part of our socialization on the internet and the loss of them affects us differently, as Greatest Journal users discovered in January when their site died.
Few, if any, of the people who made Wine Therapy their home kept backup copies of their own posts and comments. I’m sure they didn’t even think of it. I don’t think of it, when I post to the various boards and blogs and forums I frequent. Of course I know better, but I think of these forums as extensions of my own computer — until they disappear.
As we rely on others to maintain our writings and our relationships, we lose control over their availability. Of course, we also lose control over their security, as MySpace users learned last month when a 17-GB file of half a million supposedly private photos was uploaded to a BitTorrent site.
In the early days of the web, I remember feeling giddy over the wealth of information out there and how easy it was to get to. “The internet is my hard drive,” I told newbies. It’s even more true today; I don’t think I could write without so much information so easily accessible. But it’s a pretty damned unreliable hard drive.
The internet is my hard drive, but only if my needs are immediate and my requirements can be satisfied inexactly. It was easy for me to search for information about the MySpace photo hack. And it will be easy to look up, and respond to, comments to this essay, both on Wired.com and on my own blog. Wired.com is a commercial venture, so there is advertising value in keeping everything accessible. My site is not at all commercial, but there is personal value in keeping everything accessible. By that analysis, all sites should be up on the internet forever, although that’s certainly not true. What is true is that there’s no way to predict what will disappear when.
Unfortunately, there’s not much we can do about it. The security measures largely aren’t in our hands. We can save copies of important web pages locally, and copies of anything important we post. The Internet Archive is remarkably valuable in saving bits and pieces of the internet. And recently, we’ve started seeing tools for archiving information and pages from social networking sites. But what’s really important is the whole community, and we don’t know which bits we want until they’re no longer there.
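That habit of saving local copies is easy to automate. A minimal sketch (the file naming and directory layout are my own invention, not any particular archiving tool):

```python
import hashlib
import pathlib
import time
import urllib.request

def save_copy(url: str, content: bytes,
              archive_dir: str = "web_archive") -> pathlib.Path:
    """Write a dated local copy, keyed by a hash of the URL so saves
    of different pages on the same day don't collide."""
    digest = hashlib.sha256(url.encode()).hexdigest()[:12]
    stamp = time.strftime("%Y%m%d")
    folder = pathlib.Path(archive_dir)
    folder.mkdir(exist_ok=True)
    out = folder / f"{stamp}-{digest}.html"
    out.write_bytes(content)
    return out

def archive(url: str, archive_dir: str = "web_archive") -> pathlib.Path:
    """Fetch a page and save it: a personal hedge against link rot."""
    return save_copy(url, urllib.request.urlopen(url, timeout=30).read(),
                     archive_dir)
```

A script like this protects the pages you already know you care about; it does nothing for the community context around them, which is the part this essay argues we only miss once it's gone.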
And about Wine Therapy, I think it started in 2000. It might have been 2001. I can’t check, because someone erased the archives.
This essay originally appeared on Wired.com.