Entries Tagged "FTC"


The Story of Tiversa

The New Yorker has published the long and interesting story of the cybersecurity firm Tiversa.

Watching “60 Minutes,” Boback saw a remarkable new business angle. Here was a multibillion-dollar industry with a near-existential problem and no clear solution. He did not know it then, but, as he turned the opportunity over in his mind, he was setting in motion a sequence of events that would earn him millions of dollars, friendships with business élites, prime-time media attention, and respect in Congress. It would also place him at the center of one of the strangest stories in the brief history of cybersecurity; he would be mired in lawsuits, countersuits, and counter-countersuits, which would gather into a vortex of litigation so ominous that one friend compared it to the Bermuda Triangle. He would be accused of fraud, of extortion, and of manipulating the federal government into harming companies that did not do business with him. Congress would investigate him. So would the F.B.I.

Posted on December 3, 2019 at 6:19 AM

The Federal Trade Commission and Privacy

New paper on the FTC and its actions to protect privacy:

Abstract: One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. Despite more than fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States — more so than nearly any privacy statute and any common law tort.

In this article, we contend that the FTC’s privacy jurisprudence is the functional equivalent to a body of common law, and we examine it as such. We explore how and why the FTC, and not contract law, came to dominate the enforcement of privacy policies. A common view of the FTC’s privacy jurisprudence is that it is thin, merely focusing on enforcing privacy promises. In contrast, a deeper look at the principles that emerge from FTC privacy “common law” demonstrates that the FTC’s privacy jurisprudence is quite thick. The FTC has codified certain norms and best practices and has developed some baseline privacy protections. Standards have become so specific they resemble rules. We contend that the foundations exist to develop this “common law” into a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, that extends far beyond privacy policies, and that involves a full suite of substantive rules that exist independently from a company’s privacy representations.

Posted on August 29, 2013 at 12:28 PM

Scareware: How Crime Pays

Scareware is fraudulent software that uses deceptive advertising to trick users into believing they’re infected with some variety of malware, then convinces them to pay money to protect themselves. The infection isn’t real, and the software they buy is fake, too. It’s all a scam.

Here’s one scareware operator who sold “more than 1 million software products” at “$39.95 or more,” and now has to pay $8.2 million to settle a Federal Trade Commission complaint.

Seems to me that $40 per customer, minus $8.20 to pay off the FTC, is still a pretty good revenue model. Their operating costs can’t be very high, since the software doesn’t actually do anything. Yes, a court ordered them to close down their business, but certainly there are other creative entrepreneurs who can recognize a business opportunity when they see it.
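The arithmetic is worth making explicit. Here’s a back-of-the-envelope sketch in Python, assuming exactly one million sales at a flat $39.95; since the complaint says “more than 1 million” units at “$39.95 or more,” these are conservative figures.

```python
# Back-of-the-envelope scareware economics, using the figures from the FTC
# settlement above. Exactly 1 million sales at a flat $39.95 is a
# simplifying assumption; the real numbers were "more than" both.
units_sold = 1_000_000
price_per_unit = 39.95
settlement = 8_200_000

revenue = units_sold * price_per_unit       # ~$39.95 million gross
penalty_per_unit = settlement / units_sold  # $8.20 per customer
net_per_unit = price_per_unit - penalty_per_unit

print(f"Gross revenue:        ${revenue:,.2f}")
print(f"Penalty per customer: ${penalty_per_unit:.2f}")
print(f"Net per customer:     ${net_per_unit:.2f}")  # ~$31.75
```

Even after the settlement, the operator keeps roughly 80% of the take.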

Posted on February 7, 2011 at 8:45 AM

FTC Privacy Report

The U.S. Federal Trade Commission released its privacy report: “Protecting Consumer Privacy in an Era of Rapid Change.”

From the press release:

One method of simplified choice the FTC staff recommends is a “Do Not Track” mechanism governing the collection of information about consumers’ Internet activity to deliver targeted advertisements and for other purposes. Consumers and industry both support increased transparency and choice for this largely invisible practice. The Commission recommends a simple, easy-to-use choice mechanism for consumers to opt out of the collection of information about their Internet behavior for targeted ads. The most practical method would probably involve the placement of a persistent setting, similar to a cookie, on the consumer’s browser signaling the consumer’s choices about being tracked and receiving targeted ads.
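As a concrete illustration of the kind of browser signal the report describes, here is a minimal sketch of an HTTP request that carries an opt-out preference in a persistent header, following the “DNT: 1” convention browsers later adopted for this purpose. The URL is a placeholder.

```python
# Minimal sketch: an HTTP request carrying a "do not track" preference as a
# persistent header, in the spirit of the browser setting the FTC report
# describes. The URL is a placeholder; "DNT: 1" is the header convention
# browsers later adopted for this signal.
import urllib.request

req = urllib.request.Request(
    "https://example.com/",
    headers={
        "DNT": "1",  # the user opts out of tracking and targeted ads
        "User-Agent": "privacy-demo/0.1",
    },
)
with urllib.request.urlopen(req) as resp:
    # A cooperating ad network would see the header and skip tracking.
    print(resp.status, resp.reason)
```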

News story.

Posted on December 6, 2010 at 1:52 PM

Privacy and Control

In January, Facebook Chief Executive Mark Zuckerberg declared the age of privacy to be over. A month earlier, Google Chief Eric Schmidt expressed a similar sentiment. Add Scott McNealy’s and Larry Ellison’s comments from a few years earlier, and you’ve got a whole lot of tech CEOs proclaiming the death of privacy–especially when it comes to young people.

It’s just not true. People, including the younger generation, still care about privacy. Yes, they’re far more public on the Internet than their parents: writing personal details on Facebook, posting embarrassing photos on Flickr and having intimate conversations on Twitter. But they take steps to protect their privacy and complain vociferously when they feel it has been violated. They’re not technically sophisticated about privacy and make mistakes all the time, but that’s mostly the fault of companies and Web sites that try to manipulate them for financial gain.

To the older generation, privacy is about secrecy. And, as the Supreme Court said, once something is no longer secret, it’s no longer private. But that’s not how privacy works, and it’s not how the younger generation thinks about it. Privacy is about control. When your health records are sold to a pharmaceutical company without your permission; when a social-networking site changes your privacy settings to make what used to be visible only to your friends visible to everyone; when the NSA eavesdrops on everyone’s e-mail conversations–your loss of control over that information is the issue. We may not mind sharing our personal lives and thoughts, but we want to control how, where and with whom. A privacy failure is a control failure.

People’s relationship with privacy is socially complicated. Salience matters: People are more likely to protect their privacy if they’re thinking about it, and less likely to if they’re thinking about something else. Social-networking sites know this, constantly reminding people about how much fun it is to share photos and comments and conversations while downplaying the privacy risks. Some sites go even further, deliberately hiding information about how little control–and privacy–users have over their data. We all give up our privacy when we’re not thinking about it.

Group behavior matters; we’re more likely to expose personal information when our peers are doing it. We object more to losing privacy than we value its return once it’s gone. Even if we don’t have control over our data, an illusion of control reassures us. And we are poor judges of risk. All sorts of academic research backs up these findings.

Here’s the problem: The very companies whose CEOs eulogize privacy make their money by controlling vast amounts of their users’ information. Whether through targeted advertising, cross-selling or simply convincing their users to spend more time on their site and sign up their friends, more information, shared in more ways and more publicly, means more profits. This means these companies are motivated to continually ratchet down the privacy of their services, while at the same time pronouncing privacy erosions as inevitable and giving users the illusion of control.

You can see these forces in play with Google‘s launch of Buzz. Buzz is a Twitter-like chatting service, and when Google launched it in February, the defaults were set so people would follow the people they corresponded with frequently in Gmail, with the list publicly available. Yes, users could change these options, but–and Google knew this–changing options is hard and most people accept the defaults, especially when they’re trying out something new. People were upset that their previously private e-mail contacts list was suddenly public. A Federal Trade Commission commissioner even threatened penalties. And though Google changed its defaults, resentment remained.

Facebook tried a similar control grab when it changed people’s default privacy settings last December to make them more public. While users could, in theory, keep their previous settings, it took an effort. Many people just wanted to chat with their friends and clicked through the new defaults without realizing it.

Facebook has a history of this sort of thing. In 2006 it introduced News Feeds, which changed the way people viewed information about their friends. There was no true privacy change: users could not see any information they couldn’t see before. The change was in control–or arguably, just in the illusion of control. Still, there was a large uproar. And Facebook is doing it again; last month, the company announced new privacy changes that will make it easier for it to collect location data on users and sell that data to third parties.

With all this privacy erosion, those CEOs may actually be right–but only because they’re working to kill privacy. On the Internet, our privacy options are limited to the options those companies give us and how easy they are to find. We have Gmail and Facebook accounts because that’s where we socialize these days, and it’s hard–especially for the younger generation–to opt out. As long as privacy isn’t salient, and as long as these companies are allowed to forcibly change social norms by limiting options, people will increasingly get used to less and less privacy. There’s no malice on anyone’s part here; it’s just market forces in action. If we believe privacy is a social good, something necessary for democracy, liberty and human dignity, then we can’t rely on market forces to maintain it. Broad legislation protecting personal privacy by giving people control over their personal data is the only solution.

This essay originally appeared on Forbes.com.

EDITED TO ADD (4/13): Google responds. And another essay on the topic.

Posted on April 6, 2010 at 7:47 AM

An Expectation of Online Privacy

If your data is online, it is not private. Oh, maybe it seems private. Certainly, only you have access to your e-mail. Well, you and your ISP. And the sender’s ISP. And any backbone provider who happens to route that mail from the sender to you. And, if you read your personal mail from work, your company. And, if they have taps at the correct points, the NSA and any other sufficiently well-funded government intelligence organization — domestic and international.

You could encrypt your mail, of course, but few of us do that. Most of us now use webmail. The general problem is that, for the most part, your online data is not under your control. Cloud computing and software as a service exacerbate this problem even more.
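As an aside on the encryption option just mentioned, here is a minimal sketch using the third-party python-gnupg wrapper around GnuPG; the recipient address is a placeholder, and their public key is assumed to already be in your keyring.

```python
# Minimal sketch: encrypt a message before it ever reaches a mail provider,
# via the third-party python-gnupg wrapper around GnuPG. The recipient
# address is a placeholder; their public key must already be in the keyring.
import gnupg

gpg = gnupg.GPG()
encrypted = gpg.encrypt("Meet me at noon.", recipients=["alice@example.com"])
if encrypted.ok:
    # ASCII-armored ciphertext, safe to paste into any webmail client.
    print(str(encrypted))
else:
    print("Encryption failed:", encrypted.status)
```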

Your webmail is less under your control than it would be if you downloaded your mail to your computer. If you use Salesforce.com, you’re relying on that company to keep your data private. If you use Google Docs, you’re relying on Google. This is why the Electronic Privacy Information Center recently filed a complaint with the Federal Trade Commission: many of us are relying on Google’s security, but we don’t know what it is.

This is new. Twenty years ago, if someone wanted to look through your correspondence, he had to break into your house. Now, he can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your office; now it’s on a computer owned by a telephone company. Your financial accounts are on remote websites protected only by passwords; your credit history is collected, stored, and sold by companies you don’t even know exist.

And more data is being generated. Lists of books you buy, as well as the books you look at, are stored in the computers of online booksellers. Your affinity card tells your supermarket what foods you like. What were cash transactions are now credit card transactions. What used to be an anonymous coin tossed into a toll booth is now an EZ Pass record of which highway you were on, and when. What used to be a face-to-face chat is now an e-mail, IM, or SMS conversation — or maybe a conversation inside Facebook.

Remember when Facebook recently changed its terms of service to take further control over your data? They can do that whenever they want, you know.

We have no choice but to trust these companies with our security and privacy, even though they have little incentive to protect them. Neither ChoicePoint, LexisNexis, Bank of America, nor T-Mobile bears the costs of privacy violations or any resultant identity theft.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as others hold that data. If the police want to read the e-mail on your computer, they need a warrant; but they don’t need one to read it from the backup tapes at your ISP.

This isn’t a technological problem; it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant — even though it occurred at the phone company switching office and not in the target’s home or office — the Supreme Court must recognize that reading personal e-mail at an ISP is no different.

This essay was originally published on the SearchSecurity.com website, as the second half of a point/counterpoint with Marcus Ranum.

Posted on May 5, 2009 at 6:06 AM

Unfair and Deceptive Data Trade Practices

Do you know what your data did last night? Almost none of the more than 27 million people who took the RealAge quiz realized that their personal health data was being used by drug companies to develop targeted e-mail marketing campaigns.

There’s a basic consumer protection principle at work here, and it’s the concept of “unfair and deceptive” trade practices. Basically, a company shouldn’t be able to say one thing and do another: sell used goods as new, lie on ingredients lists, advertise prices that aren’t generally available, claim features that don’t exist, and so on.

Buried in RealAge’s 2,400-word privacy policy is this disclosure: “If you elect to say yes to becoming a free RealAge Member, we will periodically send you free newsletters and e-mails that directly promote the use of our site(s) or the purchase of our products or services and may contain, in whole or in part, advertisements for third parties which relate to marketed products of selected RealAge partners.”

They maintain that when you join the website, you consent to receiving pharmaceutical company spam. But since that isn’t spelled out, it’s not really informed consent. That’s deceptive.

Cloud computing is another technology where users entrust their data to service providers. Salesforce.com, Gmail, and Google Docs are examples; your data isn’t on your computer — it’s out in the “cloud” somewhere — and you access it from your web browser. Cloud computing has significant benefits for customers and huge profit potential for providers. It’s one of the fastest-growing IT market segments — 69% of Americans now use some sort of cloud computing service — but the business is rife with shady, if not outright deceptive, advertising.

Take Google, for example. Last month, the Electronic Privacy Information Center (I’m on its board of directors) filed a complaint with the Federal Trade Commission concerning Google’s cloud computing services. On its website, Google repeatedly assures customers that their data is secure and private, while published vulnerabilities demonstrate that it is not. Google’s not foolish, though; its Terms of Service explicitly disavow any warranty or any liability for harm that might result from Google’s negligence, recklessness, malevolent intent, or even purposeful disregard of existing legal obligations to protect the privacy and security of user data. EPIC claims that’s deceptive.

Facebook isn’t much better. Its plainly written (and not legally binding) Statement of Principles contains an admirable set of goals, but its denser and more legalistic Statement of Rights and Responsibilities undermines a lot of it. One research group that studies these documents called it “democracy theater”: Facebook wants the appearance of involving users in governance, without the messiness of actually having to do so. Deceptive.

These issues are not identical. RealAge is hiding what it does with your data. Google is trying to both assure you that your data is safe and duck any responsibility when it’s not. Facebook wants to market a democracy but run a dictatorship. But they all involve trying to deceive the customer.

Cloud computing services like Google Docs, and social networking sites like RealAge and Facebook, bring with them significant privacy and security risks over and above traditional computing models. Unlike data on my own computer, which I can protect to whatever level I believe prudent, I have no control over any of these sites, nor any real knowledge of how these companies protect my privacy and security. I have to trust them.

This may be fine — the advantages might very well outweigh the risks — but users often can’t weigh the trade-offs because these companies are going out of their way to hide the risks.

Of course, companies don’t want people to make informed decisions about where to leave their personal data. RealAge wouldn’t get 27 million members if its webpage clearly stated “you are signing up to receive e-mails containing advertising from pharmaceutical companies,” and Google Docs wouldn’t get five million users if its webpage said “We’ll take some steps to protect your privacy, but you can’t blame us if something goes wrong.”

And of course, trust isn’t black and white. If, for example, Amazon tried to use customer credit card info to buy itself office supplies, we’d all agree that that was wrong. If it used customer names to solicit new business from their friends, most of us would consider this wrong. When it uses buying history to try to sell customers new books, many of us appreciate the targeted marketing. Similarly, no one expects Google’s security to be perfect. But if it didn’t fix known vulnerabilities, most of us would consider that a problem.

This is why understanding is so important. For markets to work, consumers need to be able to make informed buying decisions. They need to understand both the costs and benefits of the products and services they buy. Allowing sellers to manipulate the market by outright lying, or even by hiding vital information, about their products breaks capitalism — and that’s why the government has to step in to ensure markets work smoothly.

Last month, Mary K. Engle, Acting Deputy Director of the FTC’s Bureau of Consumer Protection, said: “a company’s marketing materials must be consistent with the nature of the product being offered. It’s not enough to disclose the information only in the fine print of a lengthy online user agreement.” She was speaking about Digital Rights Management and, specifically, an incident where Sony used a music copy-protection scheme without disclosing that it secretly installed software on customers’ computers. DRM is different from cloud computing or even online surveys and quizzes, but the principle is the same.

Engle again: “if your advertising giveth and your EULA [license agreement] taketh away, don’t be surprised if the FTC comes calling.” That’s the right response from government.

A version of this article originally appeared on The Wall Street Journal’s website.

EDITED TO ADD (2/29): Two rebuttals.

Posted on April 27, 2009 at 6:16 AM
