Entries Tagged "terms of service"


Uber Uses Ubiquitous Surveillance to Identify and Block Regulators

The New York Times reports that Uber developed software that identified government regulators who were using its app to find evidence of illegal behavior, and blocked them:

Yet using its app to identify and sidestep authorities in places where regulators said the company was breaking the law goes further in skirting ethical lines—and potentially legal ones, too. Inside Uber, some of those who knew about the VTOS program and how the Greyball tool was being used were troubled by it.

[…]

One method involved drawing a digital perimeter, or “geofence,” around authorities’ offices on a digital map of the city that Uber monitored. The company watched which people frequently opened and closed the app—a process internally called “eyeballing”—around that location, which signified that the user might be associated with city agencies.

Other techniques included looking at the user’s credit card information and whether that card was tied directly to an institution like a police credit union.

Enforcement officials involved in large-scale sting operations to catch Uber drivers also sometimes bought dozens of cellphones to create different accounts. To circumvent that tactic, Uber employees went to that city’s local electronics stores to look up device numbers of the cheapest mobile phones on sale, which were often the ones bought by city officials, whose budgets were not sizable.

In all, there were at least a dozen or so signifiers in the VTOS program that Uber employees could use to assess whether users were new riders or very likely city officials.

If those clues were not enough to confirm a user’s identity, Uber employees would search social media profiles and other available information online. Once a user was identified as law enforcement, Uber Greyballed him or her, tagging the user with a small piece of code that read Greyball followed by a string of numbers.
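As a rough illustration of how signals like these could be combined, here is a minimal sketch in Python: a geofence drawn around an enforcement office, the "eyeballing" count of app opens near it, and the Greyball tag format. The coordinates, field names, and threshold are hypothetical; only the general technique comes from the article.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

# Hypothetical geofence around an enforcement office (coordinates made up).
ENFORCEMENT_GEOFENCE = {"lat": 45.5231, "lon": -122.6765, "radius_m": 300}
EYEBALL_THRESHOLD = 20  # assumed cutoff for "frequently opened and closed the app"

def looks_like_enforcement(user):
    """Combine two of the signals from the excerpt: inside the geofence, high 'eyeballing' count."""
    distance = haversine_m(user["lat"], user["lon"],
                           ENFORCEMENT_GEOFENCE["lat"], ENFORCEMENT_GEOFENCE["lon"])
    inside = distance <= ENFORCEMENT_GEOFENCE["radius_m"]
    return inside and user["app_opens_near_fence"] >= EYEBALL_THRESHOLD

def greyball_tag(user_id):
    """Tag format described in the article: 'Greyball' followed by a string of numbers."""
    return f"Greyball{user_id:010d}"

# Example: a user who repeatedly opens the app just outside a regulator's office.
suspect = {"lat": 45.5233, "lon": -122.6760, "app_opens_near_fence": 35}
if looks_like_enforcement(suspect):
    print(greyball_tag(8675309))  # e.g. "Greyball0008675309"
```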

When Edward Snowden exposed the fact that the NSA does this sort of thing, I commented that the technologies would eventually become cheap enough for corporations to do it. Now they have.

One discussion we need to have is whether or not this behavior is legal. But another, more important, discussion is whether or not it is ethical. Do we want to live in a society where corporations wield this sort of power against government? Against individuals? Because if we don’t align government against this kind of behavior, it’ll become the norm.

Posted on March 6, 2017 at 6:24 AM

Terms of Service as a Security Threat

After the Instagram debacle, in which the company changed its terms of service to give itself greater rights over user photos and then reversed course after a user backlash, it’s worth thinking about the security threat stemming from terms of service in general.

As cloud computing becomes the norm, as Internet security becomes more feudal, these terms of service agreements define what our service providers can do, both with the data we post and with the information they gather about how we use their service. The agreements are very one-sided—most of the time, we’re not even paying customers of these providers—and can change without warning. And, of course, none of us ever read them.

Here’s one example. Prezi is a really cool presentation system. While you can run presentations locally, it’s basically cloud-based. Earlier this year, I was at a CISO Summit in Prague, and one of the roundtable discussions centered on services like Prezi. CISOs were worried that sensitive company information was leaking out of the company and being stored insecurely in the cloud. My guess is that they would have been much more worried if they had read Prezi’s terms of use:

With respect to Public User Content, you hereby do and shall grant to Prezi (and its successors, assigns, and third party service providers) a worldwide, non-exclusive, perpetual, irrevocable, royalty-free, fully paid, sublicensable, and transferable license to use, reproduce, modify, create derivative works from, distribute, publicly display, publicly perform, and otherwise exploit the content on and in connection with the manufacture, sale, promotion, marketing and distribution of products sold on, or in association with, the Service, or for purposes of providing you with the Service and promoting the same, in any medium and by any means currently existing or yet to be devised.

With respect to Private User Content, you hereby do and shall grant to Prezi (and its successors, assigns, and third party service providers) a worldwide, non-exclusive, perpetual, irrevocable, royalty-free, fully paid, sublicensable, and transferable license to use, reproduce, modify, create derivative works from, distribute, publicly display, publicly perform, and otherwise exploit the content solely for purposes of providing you with the Service.

Those paragraphs sure sound like Prezi can do anything it wants, including starting a competing business, with any presentation I post to its site. (Note that Prezi’s human-readable—but not legally binding—terms of use document makes no mention of this.) Yes, I know Prezi doesn’t currently intend to do that, but things change, companies fail, assets get bought, and what matters in the end is what the agreement says.

I don’t mean to pick on Prezi; it’s just an example. How many other Trojan horses like this are hiding in commonly used cloud provider agreements: both from providers that companies decide to use as a matter of policy, and from providers that employees use, in violation of policy, for reasons of convenience?

Posted on December 31, 2012 at 6:44 AM

Violating Terms of Service Possibly a Crime

From Wired News:

The four Wiseguy defendants, who also operated other ticket-reselling businesses, allegedly used sophisticated programming and inside information to bypass technological measures—including CAPTCHA—at Ticketmaster and other sites that were intended to prevent such bulk automated purchases. This violated the sites’ terms of service, and according to prosecutors constituted unauthorized computer access under the anti-hacking Computer Fraud and Abuse Act, or CFAA.

But the government’s interpretation of the law goes too far, according to the policy groups, and threatens to turn what is essentially a contractual dispute into a criminal case. As in the Lori Drew prosecution last year, the case marks a dangerous precedent that could make a felon of anyone who violates a site’s terms-of-service agreement, according to the amicus brief filed last week by the Electronic Frontier Foundation, the Center for Democracy and Technology and other advocates.

“Under the government’s theory, anyone who disregards—or doesn’t read—the terms of service on any website could face computer crime charges,” said EFF civil liberties director Jennifer Granick in a press release. “Price-comparison services, social network aggregators, and users who skim a few years off their ages could all be criminals if the government prevails.”

Posted on July 19, 2010 at 1:11 PM

Terrorists Prohibited from Using iTunes

The iTunes Store Terms and Conditions prohibits it:

Notice, as I read this clause, not only are terrorists—or at least those on terrorist watch lists—prohibited from using iTunes to manufacture WMD, they are also prohibited from even downloading and using iTunes. So all the Al-Qaeda operatives holed up in the Northwest Frontier Provinces of Pakistan, dodging drone attacks while listening to Britney Spears songs downloaded with iTunes, are in violation of the terms and conditions, even if they paid for the music!

And you thought being harassed at airports was bad enough.

Posted on February 10, 2010 at 12:39 PM

An Expectation of Online Privacy

If your data is online, it is not private. Oh, maybe it seems private. Certainly, only you have access to your e-mail. Well, you and your ISP. And the sender’s ISP. And any backbone provider who happens to route that mail from the sender to you. And, if you read your personal mail from work, your company. And, if they have taps at the correct points, the NSA and any other sufficiently well-funded government intelligence organization—domestic and international.

You could encrypt your mail, of course, but few of us do that. Most of us now use webmail. The general problem is that, for the most part, your online data is not under your control. Cloud computing and software as a service only exacerbate this problem.
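The essay mentions encrypting your mail as the countermeasure few of us bother with. As a rough sketch of what that looks like in practice, here is one way to encrypt a message with the python-gnupg wrapper around GnuPG; the recipient address is hypothetical, and it assumes the recipient’s public PGP key is already in your local keyring.

```python
# Sketch: OpenPGP-encrypting a message with python-gnupg (a wrapper around the
# gpg binary). Assumes gpg is installed and that the public key for the
# hypothetical recipient alice@example.com has already been imported.
import gnupg

gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

message = "Draft contract attached; please review before Friday."
encrypted = gpg.encrypt(message, ["alice@example.com"], always_trust=True)

if encrypted.ok:
    print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into an email body
else:
    print("Encryption failed:", encrypted.status)
```

Only the sender and holders of the recipient’s private key can read the result; the ISPs and backbone providers in between see ciphertext.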

Your webmail is less under your control than it would be if you downloaded your mail to your computer. If you use Salesforce.com, you’re relying on that company to keep your data private. If you use Google Docs, you’re relying on Google. This is why the Electronic Privacy Information Center recently filed a complaint with the Federal Trade Commission: many of us are relying on Google’s security, but we don’t know what it is.

This is new. Twenty years ago, if someone wanted to look through your correspondence, he had to break into your house. Now, he can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your office; now it’s on a computer owned by a telephone company. Your financial accounts are on remote websites protected only by passwords; your credit history is collected, stored, and sold by companies you don’t even know exist.

And more data is being generated. Lists of books you buy, as well as the books you look at, are stored in the computers of online booksellers. Your affinity card tells your supermarket what foods you like. What were cash transactions are now credit card transactions. What used to be an anonymous coin tossed into a toll booth is now an EZ Pass record of which highway you were on, and when. What used to be a face-to-face chat is now an e-mail, IM, or SMS conversation—or maybe a conversation inside Facebook.

Remember when Facebook recently changed its terms of service to take further control over your data? They can do that whenever they want, you know.

We have no choice but to trust these companies with our security and privacy, even though they have little incentive to protect them. Neither ChoicePoint, Lexis Nexis, Bank of America, nor T-Mobile bears the costs of privacy violations or any resultant identity theft.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as others hold that data. If the police want to read the e-mail on your computer, they need a warrant; but they don’t need one to read it from the backup tapes at your ISP.

This isn’t a technological problem; it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant—even though it occurred at the phone company switching office and not in the target’s home or office—the Supreme Court must recognize that reading personal e-mail at an ISP is no different.

This essay was originally published on the SearchSecurity.com website, as the second half of a point/counterpoint with Marcus Ranum.

Posted on May 5, 2009 at 6:06 AM

Unfair and Deceptive Data Trade Practices

Do you know what your data did last night? Almost none of the more than 27 million people who took the RealAge quiz realized that their personal health data was being used by drug companies to develop targeted e-mail marketing campaigns.

There’s a basic consumer protection principle at work here, and it’s the concept of “unfair and deceptive” trade practices. Basically, a company shouldn’t be able to say one thing and do another: sell used goods as new, lie on ingredients lists, advertise prices that aren’t generally available, claim features that don’t exist, and so on.

Buried in RealAge’s 2,400-word privacy policy is this disclosure: “If you elect to say yes to becoming a free RealAge Member, we will periodically send you free newsletters and e-mails that directly promote the use of our site(s) or the purchase of our products or services and may contain, in whole or in part, advertisements for third parties which relate to marketed products of selected RealAge partners.”

They maintain that when you join the website, you consent to receiving pharmaceutical company spam. But since that isn’t spelled out, it’s not really informed consent. That’s deceptive.

Cloud computing is another technology where users entrust their data to service providers. Salesforce.com, Gmail, and Google Docs are examples; your data isn’t on your computer—it’s out in the “cloud” somewhere—and you access it from your web browser. Cloud computing has significant benefits for customers and huge profit potential for providers. It’s one of the fastest growing IT market segments—69% of Americans now use some sort of cloud computing services—but the business is rife with shady, if not outright deceptive, advertising.

Take Google, for example. Last month, the Electronic Privacy Information Center (I’m on its board of directors) filed a complaint with the Federal Trade Commission concerning Google’s cloud computing services. On its website, Google repeatedly assures customers that their data is secure and private, while published vulnerabilities demonstrate that it is not. Google’s not foolish, though; its Terms of Service explicitly disavow any warranty or any liability for harm that might result from Google’s negligence, recklessness, malevolent intent, or even purposeful disregard of existing legal obligations to protect the privacy and security of user data. EPIC claims that’s deceptive.

Facebook isn’t much better. Its plainly written (and not legally binding) Statement of Principles contains an admirable set of goals, but its denser and more legalistic Statement of Rights and Responsibilities undermines a lot of it. One research group that studies these documents called it “democracy theater”: Facebook wants the appearance of involving users in governance, without the messiness of actually having to do so. Deceptive.

These issues are not identical. RealAge is hiding what it does with your data. Google is trying to both assure you that your data is safe and duck any responsibility when it’s not. Facebook wants to market a democracy but run a dictatorship. But they all involve trying to deceive the customer.

Cloud computing services like Google Docs, and social networking sites like RealAge and Facebook, bring with them significant privacy and security risks over and above traditional computing models. Unlike data on my own computer, which I can protect to whatever level I believe prudent, I have no control over any of these sites, nor any real knowledge of how these companies protect my privacy and security. I have to trust them.

This may be fine—the advantages might very well outweigh the risks—but users often can’t weigh the trade-offs because these companies are going out of their way to hide the risks.

Of course, companies don’t want people to make informed decisions about where to leave their personal data. RealAge wouldn’t get 27 million members if its webpage clearly stated “you are signing up to receive e-mails containing advertising from pharmaceutical companies,” and Google Docs wouldn’t get five million users if its webpage said “We’ll take some steps to protect your privacy, but you can’t blame us if something goes wrong.”

And of course, trust isn’t black and white. If, for example, Amazon tried to use customer credit card info to buy itself office supplies, we’d all agree that that was wrong. If it used customer names to solicit new business from their friends, most of us would consider this wrong. When it uses buying history to try to sell customers new books, many of us appreciate the targeted marketing. Similarly, no one expects Google’s security to be perfect. But if it didn’t fix known vulnerabilities, most of us would consider that a problem.

This is why understanding is so important. For markets to work, consumers need to be able to make informed buying decisions. They need to understand both the costs and benefits of the products and services they buy. Allowing sellers to manipulate the market by outright lying, or even by hiding vital information, about their products breaks capitalism—and that’s why the government has to step in to ensure markets work smoothly.

Last month, Mary K. Engle, Acting Deputy Director of the FTC’s Bureau of Consumer Protection, said: “a company’s marketing materials must be consistent with the nature of the product being offered. It’s not enough to disclose the information only in the fine print of a lengthy online user agreement.” She was speaking about Digital Rights Management and, specifically, an incident where Sony used a music copy protection scheme without disclosing that it secretly installed software on customers’ computers. DRM is different from cloud computing or even online surveys and quizzes, but the principle is the same.

Engle again: “if your advertising giveth and your EULA [license agreement] taketh away, don’t be surprised if the FTC comes calling.” That’s the right response from government.

A version of this article originally appeared on The Wall Street Journal’s website.

EDITED TO ADD (2/29): Two rebuttals.

Posted on April 27, 2009 at 6:16 AM

Privacy Policies: Perception vs. Reality

New paper: “What Californians Understand About Privacy Online,” by Chris Jay Hoofnagle and Jennifer King. From the abstract:

A gulf exists between California consumers’ understanding of online rules and common business practices. For instance, Californians who shop online believe that privacy policies prohibit third-party information sharing. A majority of Californians believes that privacy policies create the right to require a website to delete personal information upon request, a general right to sue for damages, a right to be informed of security breaches, a right to assistance if identity theft occurs, and a right to access and correct data.

These findings show that California consumers overvalue the mere fact that a website has a privacy policy, and assume that websites carrying the label have strong, default rules to protect personal data. In a way, consumers interpret “privacy policy” as a quality seal that denotes adherence to some set of standards. Website operators have little incentive to correct this misperception, thus limiting the ability of the market to produce outcomes consistent with consumers’ expectations. Drawing upon earlier work, we conclude that because the term “privacy policy” has taken on a specific meaning in the minds of consumers, its use should be limited to contexts where businesses provide a set of protections that meet consumers’ expectations.

Posted on September 4, 2008 at 1:15 PM
