Entries Tagged "FTC"


Former Uber CISO Appealing His Conviction

Joe Sullivan, Uber’s CISO during their 2016 data breach, is appealing his conviction.

Prosecutors charged Sullivan, whom Uber hired as CISO after the 2014 breach, with withholding information about the 2016 incident from the FTC even as its investigators were scrutinizing the company’s data security and privacy practices. The government argued that Sullivan should have informed the FTC of the 2016 incident, but that he instead went out of his way to conceal it from the agency.

Prosecutors also accused Sullivan of attempting to conceal the breach itself by paying $100,000 to buy the silence of the two hackers behind the compromise. Sullivan had characterized the payment as a bug bounty similar to ones that other companies routinely make to researchers who report vulnerabilities and other security issues to them. His lawyers pointed out that Sullivan had made the payment with the full knowledge and blessing of Travis Kalanick, Uber’s CEO at the time, and other members of the ride-sharing giant’s legal team.

But prosecutors described the payment and an associated nondisclosure agreement that Sullivan’s team wanted the hackers to sign as an attempt to cover up what was in effect a felony breach of Uber’s network.

[…]

Sullivan’s fate struck a nerve with many peers and others in the industry who perceived CISOs as becoming scapegoats for broader security failures at their companies. Many argued, and continue to argue, that Sullivan acted with the full knowledge of his supervisors but in the end became the sole culprit for the breach and the associated failures for which he was charged. They believed that if Sullivan could be held culpable for his failure to report the 2016 breach to the FTC, and for the alleged hush payment, then so should Kalanick at the very least, and probably others as well.

It’s an argument that Sullivan’s lawyers once again raised in their appeal of the obstruction conviction this week. “Despite the fact that Mr. Sullivan was not responsible at Uber for the FTC’s investigation, including the drafting or signing any of the submissions to the FTC, the government singled him out among over 30 of his co-employees who all had information that Mr. Sullivan is alleged to have hidden from the FTC,” Swaminathan said.

I have some sympathy for that view. Sullivan was almost certainly scapegoated here. But I do want executives personally liable for what their company does. I don’t know enough about the details to have an opinion in this particular case.

Posted on October 19, 2023 at 7:08 AM

FTC Sues Data Broker

This is good news:

The Federal Trade Commission (FTC) has sued Kochava, a large location data provider, for allegedly selling data that the FTC says can track people at reproductive health clinics and places of worship, according to an announcement from the agency.

“Defendant’s violations are in connection with acquiring consumers’ precise geolocation data and selling the data in a format that allows entities to track the consumers’ movements to and from sensitive locations, including, among others, locations associated with medical care, reproductive health, religious worship, mental health temporary shelters, such as shelters for the homeless, domestic violence survivors, or other at risk populations, and addiction recovery,” the lawsuit reads.

Posted on August 30, 2022 at 6:58 AM

The Story of Tiversa

The New Yorker has published the long and interesting story of the cybersecurity firm Tiversa.

Watching “60 Minutes,” Boback saw a remarkable new business angle. Here was a multibillion-dollar industry with a near-existential problem and no clear solution. He did not know it then, but, as he turned the opportunity over in his mind, he was setting in motion a sequence of events that would earn him millions of dollars, friendships with business élites, prime-time media attention, and respect in Congress. It would also place him at the center of one of the strangest stories in the brief history of cybersecurity; he would be mired in lawsuits, countersuits, and counter-countersuits, which would gather into a vortex of litigation so ominous that one friend compared it to the Bermuda Triangle. He would be accused of fraud, of extortion, and of manipulating the federal government into harming companies that did not do business with him. Congress would investigate him. So would the F.B.I.

Posted on December 3, 2019 at 6:19 AM

The Federal Trade Commission and Privacy

New paper on the FTC and its actions to protect privacy:

Abstract: One of the great ironies about information privacy law is that the primary regulation of privacy in the United States has barely been studied in a scholarly way. Since the late 1990s, the Federal Trade Commission (FTC) has been enforcing companies’ privacy policies through its authority to police unfair and deceptive trade practices. Despite more than fifteen years of FTC enforcement, there is no meaningful body of judicial decisions to show for it. The cases have nearly all resulted in settlement agreements. Nevertheless, companies look to these agreements to guide their privacy practices. Thus, in practice, FTC privacy jurisprudence has become the broadest and most influential regulating force on information privacy in the United States—more so than nearly any privacy statute and any common law tort.

In this article, we contend that the FTC’s privacy jurisprudence is the functional equivalent to a body of common law, and we examine it as such. We explore how and why the FTC, and not contract law, came to dominate the enforcement of privacy policies. A common view of the FTC’s privacy jurisprudence is that it is thin, merely focusing on enforcing privacy promises. In contrast, a deeper look at the principles that emerge from FTC privacy “common law” demonstrates that the FTC’s privacy jurisprudence is quite thick. The FTC has codified certain norms and best practices and has developed some baseline privacy protections. Standards have become so specific they resemble rules. We contend that the foundations exist to develop this “common law” into a robust privacy regulatory regime, one that focuses on consumer expectations of privacy, that extends far beyond privacy policies, and that involves a full suite of substantive rules that exist independently from a company’s privacy representations.

Posted on August 29, 2013 at 12:28 PM

Scareware: How Crime Pays

Scareware is fraudulent software that uses deceptive advertising to trick users into believing they’re infected with some variety of malware, then convinces them to pay money to protect themselves. The infection isn’t real, and the software they buy is fake, too. It’s all a scam.

Here’s one scareware operator who sold “more than 1 million software products” at “$39.95 or more,” and now has to pay $8.2 million to settle a Federal Trade Commission complaint.

Seems to me that $40 per customer, minus $8.20 to pay off the FTC, is still a pretty good revenue model. Their operating costs can’t be very high, since the software doesn’t actually do anything. Yes, a court ordered them to close down their business, but certainly there are other creative entrepreneurs who can recognize a business opportunity when they see it.
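A quick back-of-the-envelope sketch of those economics, using the figures quoted above (one million sales at $39.95 each against the $8.2 million settlement); the per-customer split is my own arithmetic, not the FTC’s:

```python
# Back-of-the-envelope economics of the scareware operation,
# using the figures reported in the FTC settlement coverage.
units_sold = 1_000_000     # "more than 1 million software products"
price_per_unit = 39.95     # "$39.95 or more"
settlement = 8_200_000     # amount paid to settle the FTC complaint

gross_revenue = units_sold * price_per_unit          # ~$39.95 million
settlement_per_customer = settlement / units_sold    # $8.20
net_per_customer = price_per_unit - settlement_per_customer

print(f"Gross revenue:           ${gross_revenue:,.0f}")
print(f"Settlement per customer: ${settlement_per_customer:.2f}")
print(f"Net per customer:        ${net_per_customer:.2f}")  # ~$31.75
```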

Posted on February 7, 2011 at 8:45 AM

FTC Privacy Report

The U.S. Federal Trade Commission released its privacy report: “Protecting Consumer Privacy in an Era of Rapid Change.”

From the press release:

One method of simplified choice the FTC staff recommends is a “Do Not Track” mechanism governing the collection of information about consumers’ Internet activity to deliver targeted advertisements and for other purposes. Consumers and industry both support increased transparency and choice for this largely invisible practice. The Commission recommends a simple, easy-to-use choice mechanism for consumers to opt out of the collection of information about their Internet behavior for targeted ads. The most practical method would probably involve the placement of a persistent setting, similar to a cookie, on the consumer’s browser signaling the consumer’s choices about being tracked and receiving targeted ads.

News story.
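As an illustration of the mechanism the report proposes: the idea later took shape as the “DNT” HTTP request header, with browsers sending DNT: 1 to signal an opt-out. Here is a minimal sketch, in Python, of a server checking that signal; the handler’s responses are hypothetical, not any real site’s behavior.

```python
# Minimal sketch of a server honoring the "Do Not Track" signal,
# which browsers eventually implemented as the HTTP header "DNT: 1".
# The responses here are illustrative, not any particular site's behavior.
from http.server import BaseHTTPRequestHandler, HTTPServer

class AdHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # "DNT: 1" means the user has opted out of tracking.
        opted_out = self.headers.get("DNT") == "1"
        body = (b"DNT set: no tracking, serving untargeted ads.\n" if opted_out
                else b"No DNT: tracking permitted, serving targeted ads.\n")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), AdHandler).serve_forever()
```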

Posted on December 6, 2010 at 1:52 PM

Privacy and Control

In January, Facebook Chief Executive Mark Zuckerberg declared the age of privacy to be over. A month earlier, Google Chief Eric Schmidt expressed a similar sentiment. Add Scott McNealy’s and Larry Ellison’s comments from a few years earlier, and you’ve got a whole lot of tech CEOs proclaiming the death of privacy—especially when it comes to young people.

It’s just not true. People, including the younger generation, still care about privacy. Yes, they’re far more public on the Internet than their parents: writing personal details on Facebook, posting embarrassing photos on Flickr, and having intimate conversations on Twitter. But they take steps to protect their privacy and vociferously complain when they feel it has been violated. They’re not technically sophisticated about privacy and make mistakes all the time, but that’s mostly the fault of companies and Web sites that try to manipulate them for financial gain.

To the older generation, privacy is about secrecy. And, as the Supreme Court said, once something is no longer secret, it’s no longer private. But that’s not how privacy works, and it’s not how the younger generation thinks about it. Privacy is about control. When your health records are sold to a pharmaceutical company without your permission; when a social-networking site changes your privacy settings to make what used to be visible only to your friends visible to everyone; when the NSA eavesdrops on everyone’s e-mail conversations—your loss of control over that information is the issue. We may not mind sharing our personal lives and thoughts, but we want to control how, where and with whom. A privacy failure is a control failure.

People’s relationship with privacy is socially complicated. Salience matters: People are more likely to protect their privacy if they’re thinking about it, and less likely to if they’re thinking about something else. Social-networking sites know this, constantly reminding people about how much fun it is to share photos and comments and conversations while downplaying the privacy risks. Some sites go even further, deliberately hiding information about how little control—and privacy—users have over their data. We all give up our privacy when we’re not thinking about it.

Group behavior matters; we’re more likely to expose personal information when our peers are doing it. We object more to losing privacy than we value its return once it’s gone. Even if we don’t have control over our data, an illusion of control reassures us. And we are poor judges of risk. All sorts of academic research backs up these findings.

Here’s the problem: The very companies whose CEOs eulogize privacy make their money by controlling vast amounts of their users’ information. Whether through targeted advertising, cross-selling or simply convincing their users to spend more time on their site and sign up their friends, more information shared in more ways, more publicly means more profits. This means these companies are motivated to continually ratchet down the privacy of their services, while at the same time pronouncing privacy erosions as inevitable and giving users the illusion of control.

You can see these forces in play with Google’s launch of Buzz. Buzz is a Twitter-like chatting service, and when Google launched it in February, the defaults were set so people would follow the people they corresponded with frequently in Gmail, with the list publicly available. Yes, users could change these options, but—and Google knew this—changing options is hard and most people accept the defaults, especially when they’re trying out something new. People were upset that their previously private e-mail contacts list was suddenly public. A Federal Trade Commission commissioner even threatened penalties. And though Google changed its defaults, resentment remained.

Facebook tried a similar control grab when it changed people’s default privacy settings last December to make them more public. While users could, in theory, keep their previous settings, it took an effort. Many people just wanted to chat with their friends and clicked through the new defaults without realizing it.

Facebook has a history of this sort of thing. In 2006 it introduced News Feeds, which changed the way people viewed information about their friends. There was no true privacy change, in that users could not see more information than before; the change was in control—or arguably, just in the illusion of control. Still, there was a large uproar. And Facebook is doing it again; last month, the company announced new privacy changes that will make it easier for it to collect location data on users and sell that data to third parties.

With all this privacy erosion, those CEOs may actually be right—but only because they’re working to kill privacy. On the Internet, our privacy options are limited to the options those companies give us and how easy they are to find. We have Gmail and Facebook accounts because that’s where we socialize these days, and it’s hard—especially for the younger generation—to opt out. As long as privacy isn’t salient, and as long as these companies are allowed to forcibly change social norms by limiting options, people will increasingly get used to less and less privacy. There’s no malice on anyone’s part here; it’s just market forces in action. If we believe privacy is a social good, something necessary for democracy, liberty and human dignity, then we can’t rely on market forces to maintain it. Broad legislation protecting personal privacy by giving people control over their personal data is the only solution.

This essay originally appeared on Forbes.com.

EDITED TO ADD (4/13): Google responds. And another essay on the topic.

Posted on April 6, 2010 at 7:47 AM
