Entries Tagged "secrecy"


Punishing Security Breaches

The editor of the Freakonomics blog asked me to write about this topic. The idea was that they would get several opinions and publish them all. They spiked the story, but I had already written my piece. So here it is.

In deciding what to do with Gray Powell, the Apple employee who accidentally left a secret prototype 4G iPhone in a California bar, Apple needs to figure out how much of the problem is due to an employee not following the rules, and how much of the problem is due to unclear, unrealistic, or just plain bad rules.

If Powell sneaked the phone out of the Apple building in flagrant violation of the rules—maybe he wanted to show it to a friend—he should be disciplined, perhaps even fired. Some military installations have rules like that. If someone wants to take something classified out of a top secret military compound, he might have to secrete it on his person and deliberately sneak it past a guard who searches briefcases and purses. He might be committing a crime by doing so, by the way. Apple isn’t the military, of course, but if its corporate security policy is that strict, it may very well include rules like that. And the only way to ensure rules are followed is to enforce them, and that means severe disciplinary action against those who bypass them.

Even if Powell had authorization to take the phone out of Apple’s labs—presumably someone has to test drive the new toys sooner or later—the corporate rules might have required him to pay attention to it at all times. We’ve all heard of military attachés who carry briefcases chained to their wrists. It’s an extreme example, but it demonstrates how a security policy can allow for objects to move around town—or around the world—without getting lost. Apple almost certainly doesn’t have a policy as rigid as that, but its policy might explicitly prohibit Powell from taking that phone into a bar, putting it down on a counter, and participating in a beer tasting. Again, if Apple’s rules and Powell’s violation were both that clear, Apple should enforce them.

On the other hand, if Apple doesn’t have clear-cut rules, if Powell wasn’t prohibited from taking the phone out of his office, if engineers routinely ignore or bypass security rules and—as long as nothing bad happens—no one complains, then Apple needs to understand that the system is more to blame than the individual. Most corporate security policies have this sort of problem. Security is important, but it’s quickly jettisoned when there’s an important job to be done. A common example is passwords: people aren’t supposed to share them, unless it’s really important and they have to. Another example is guest accounts. And doors that are supposed to remain locked but rarely are. People routinely bypass security policies if they get in the way, and if no one complains, those policies are effectively meaningless.

Apple’s unfortunately public security breach has given the company an opportunity to examine its policies and figure out how much of the problem is Powell and how much of it is the system he’s a part of. Apple needs to fix its security problem, but only after it figures out where the problem is.

Posted on April 26, 2010 at 7:20 AM

Privacy and Control

In January, Facebook chief executive Mark Zuckerberg declared the age of privacy to be over. A month earlier, Google CEO Eric Schmidt expressed a similar sentiment. Add Scott McNealy’s and Larry Ellison’s comments from a few years earlier, and you’ve got a whole lot of tech CEOs proclaiming the death of privacy—especially when it comes to young people.

It’s just not true. People, including the younger generation, still care about privacy. Yes, they’re far more public on the Internet than their parents: writing personal details on Facebook, posting embarrassing photos on Flickr, and having intimate conversations on Twitter. But they take steps to protect their privacy and vociferously complain when they feel it has been violated. They’re not technically sophisticated about privacy and make mistakes all the time, but that’s mostly the fault of companies and Web sites that try to manipulate them for financial gain.

To the older generation, privacy is about secrecy. And, as the Supreme Court said, once something is no longer secret, it’s no longer private. But that’s not how privacy works, and it’s not how the younger generation thinks about it. Privacy is about control. When your health records are sold to a pharmaceutical company without your permission; when a social-networking site changes your privacy settings to make what used to be visible only to your friends visible to everyone; when the NSA eavesdrops on everyone’s e-mail conversations—your loss of control over that information is the issue. We may not mind sharing our personal lives and thoughts, but we want to control how, where and with whom. A privacy failure is a control failure.

People’s relationship with privacy is socially complicated. Salience matters: People are more likely to protect their privacy if they’re thinking about it, and less likely to if they’re thinking about something else. Social-networking sites know this, constantly reminding people about how much fun it is to share photos and comments and conversations while downplaying the privacy risks. Some sites go even further, deliberately hiding information about how little control—and privacy—users have over their data. We all give up our privacy when we’re not thinking about it.

Group behavior matters; we’re more likely to expose personal information when our peers are doing it. We object more to losing privacy than we value its return once it’s gone. Even if we don’t have control over our data, an illusion of control reassures us. And we are poor judges of risk. All sorts of academic research backs up these findings.

Here’s the problem: The very companies whose CEOs eulogize privacy make their money by controlling vast amounts of their users’ information. Whether through targeted advertising, cross-selling, or simply convincing their users to spend more time on their site and sign up their friends, more information shared in more ways and more publicly means more profits. This means these companies are motivated to continually ratchet down the privacy of their services, while at the same time pronouncing privacy erosions as inevitable and giving users the illusion of control.

You can see these forces in play with Google’s launch of Buzz. Buzz is a Twitter-like chatting service, and when Google launched it in February, the defaults were set so people would follow the people they corresponded with frequently in Gmail, with the list publicly available. Yes, users could change these options, but—and Google knew this—changing options is hard and most people accept the defaults, especially when they’re trying out something new. People were upset that their previously private e-mail contacts list was suddenly public. A Federal Trade Commission commissioner even threatened penalties. And though Google changed its defaults, resentment remained.

Facebook tried a similar control grab when it changed people’s default privacy settings last December to make them more public. While users could, in theory, keep their previous settings, it took an effort. Many people just wanted to chat with their friends and clicked through the new defaults without realizing it.

Facebook has a history of this sort of thing. In 2006 it introduced News Feeds, which changed the way people viewed information about their friends. There was no true privacy change: users could not see any more information than before. The change was in control—or arguably, just in the illusion of control. Still, there was a large uproar. And Facebook is doing it again; last month, the company announced new privacy changes that will make it easier for it to collect location data on users and sell that data to third parties.

With all this privacy erosion, those CEOs may actually be right—but only because they’re working to kill privacy. On the Internet, our privacy options are limited to the options those companies give us and how easy they are to find. We have Gmail and Facebook accounts because that’s where we socialize these days, and it’s hard—especially for the younger generation—to opt out. As long as privacy isn’t salient, and as long as these companies are allowed to forcibly change social norms by limiting options, people will increasingly get used to less and less privacy. There’s no malice on anyone’s part here; it’s just market forces in action. If we believe privacy is a social good, something necessary for democracy, liberty and human dignity, then we can’t rely on market forces to maintain it. Broad legislation protecting personal privacy by giving people control over their personal data is the only solution.

This essay originally appeared on Forbes.com.

EDITED TO ADD (4/13): Google responds. And another essay on the topic.

Posted on April 6, 2010 at 7:47 AM

Comprehensive National Cybersecurity Initiative

On Tuesday, the White House published an unclassified summary of its Comprehensive National Cybersecurity Initiative (CNCI). Howard Schmidt made the announcement at the RSA Conference. These are the 12 initiatives in the plan:

  • Initiative #1. Manage the Federal Enterprise Network as a single network enterprise with Trusted Internet Connections.
  • Initiative #2. Deploy an intrusion detection system of sensors across the Federal enterprise.
  • Initiative #3. Pursue deployment of intrusion prevention systems across the Federal enterprise.
  • Initiative #4. Coordinate and redirect research and development (R&D) efforts.
  • Initiative #5. Connect current cyber ops centers to enhance situational awareness.
  • Initiative #6. Develop and implement a government-wide cyber counterintelligence (CI) plan.
  • Initiative #7. Increase the security of our classified networks.
  • Initiative #8. Expand cyber education.
  • Initiative #9. Define and develop enduring “leap-ahead” technology, strategies, and programs.
  • Initiative #10. Define and develop enduring deterrence strategies and programs.
  • Initiative #11. Develop a multi-pronged approach for global supply chain risk management.
  • Initiative #12. Define the Federal role for extending cybersecurity into critical infrastructure domains.
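For a sense of what the sensor initiatives actually involve, Initiative #2 describes passive sensors that watch traffic and match it against known attack signatures. Here is a deliberately toy sketch of that general technique in Python, using the scapy library; the byte patterns are invented illustrations, and none of this reflects the actual EINSTEIN deployment:

```python
# Toy sketch of a passive, signature-matching network sensor: the
# general technique behind Initiative #2. The byte patterns below are
# invented examples; this is in no way the real EINSTEIN system.
# Requires: pip install scapy (and root privileges to sniff traffic).
from scapy.all import IP, Raw, sniff

SIGNATURES = [b"cmd.exe", b"/etc/passwd"]  # hypothetical indicators

def inspect(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        for sig in SIGNATURES:
            if sig in payload:
                # Detection only (Initiative #2); a prevention system
                # (Initiative #3) would also drop or reset the flow.
                print(f"ALERT: {sig!r} in traffic from {pkt[IP].src}")

sniff(filter="tcp", prn=inspect, store=False)
```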

While this transparency is good, in this sort of thing the devil is in the details—and we don’t have any details. We also don’t have any information about the legal authority for cybersecurity, or about how much the NSA is, and should be, involved. Good commentary on that here. EPIC is suing the NSA to learn more about its involvement.

Posted on March 4, 2010 at 12:55 PM

World's Largest Data Collector Teams Up With World's Largest Data Collector

Does anyone think this is a good idea?

Under an agreement that is still being finalized, the National Security Agency would help Google analyze a major corporate espionage attack that the firm said originated in China and targeted its computer networks, according to cybersecurity experts familiar with the matter. The objective is to better defend Google—and its users—from future attack.

EPIC has filed a Freedom of Information Act Request, asking for records pertaining to the partnership. That would certainly help, because otherwise we have no idea what’s actually going on.

I’ve already written about why the NSA should not be in charge of our nation’s cyber security.

Posted on February 5, 2010 at 6:02 AM

TSA Publishes Standard Operating Procedures

BoingBoing is pretty snarky:

The TSA has published a “redacted” version of their s00per s33kr1t screening procedure guidelines (Want to know whether to frisk a CIA operative at the checkpoint? Now you can!). Unfortunately, the security geniuses at the DHS don’t know that drawing black blocks over the words you want to eliminate from your PDF doesn’t actually make the words go away, and can be defeated by nefarious al Qaeda operatives through a complex technique known as ctrl-a/ctrl-c/ctrl-v. Thankfully, only the most elite terrorists would be capable of matching wits with the technology brilliance on display at the agency charged with defending our nation’s skies by ensuring that imaginary hair-gel bombs are kept off of airplanes.
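The failure is worth spelling out: drawing an opaque rectangle over text changes how the page renders, but the text itself is still sitting in the PDF content stream underneath. Any text extractor reads right through the boxes. A minimal sketch, using the open-source pypdf library and a placeholder filename:

```python
# A "redaction" that only draws black rectangles leaves the original
# text in the PDF content stream; plain text extraction recovers it.
# Requires: pip install pypdf. "redacted.pdf" is a placeholder name.
from pypdf import PdfReader

reader = PdfReader("redacted.pdf")
for i, page in enumerate(reader.pages, start=1):
    # extract_text() walks the text-drawing operators directly and
    # never looks at the rectangles painted on top of them.
    print(f"--- page {i} ---")
    print(page.extract_text())
```

Proper redaction removes the text from the file and re-renders the page; covering it up does nothing.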

TSA is launching a “full review” to determine how this could have happened. I’ll save them the effort: someone screwed up.

In a statement Tuesday night, the TSA sought to minimize the impact of the unintentional release—calling the document “outdated,” “unclassified” and unimplemented—while saying that it took the incident “very seriously,” and “took swift action” when it was discovered.

Yeah, right.

The original link to the document is dead, but here’s the unredacted document.

I’ve skimmed it, and haven’t found anything terribly interesting. Here’s what Wired.com noticed:

One of the redacted sections, for example, indicates that an armed law enforcement officer in or out of uniform may pass beyond the checkpoint without screening after providing a U.S. government-issued photo ID and “Notice of LEO Flying Armed Document.”

Some commercial airline pilots receive training by the U.S. Marshals Service and are allowed to carry TSA-issued firearms on planes. They can pass through without screening only after presenting “bonafide credentials and aircraft operator photo ID,” the document says.

Foreign dignitaries equivalent to cabinet rank and above, accompanying a spouse, their children under the age of 12, and a State Department escort are exempt from screening.

There are also references to a CIA program called WOMAP, the Worldwide Operational Meet and Assist Program. As part of WOMAP, foreign dignitaries and their escorts—authorized CIA representatives—are exempt from screening, provided they’re approved in advance by TSA’s Office of Intelligence.

Passengers carrying passports from Cuba, Iran, North Korea, Libya, Syria, Sudan, Afghanistan, Lebanon, Somalia, Iraq, Yemen or Algeria are to be designated for selective screening.

Although only a few portions of the document were redacted, the manual contains other tidbits that weren’t redacted, such as a thorough description of diplomatic pouches that are exempt from screening.

I’m a little bit saddened when we all make a big deal about how bad people are at redacting digital documents. We’ve had a steady stream of these badly redacted documents, and I don’t want that stream to dry up. I also don’t want agencies deciding not to release documents at all, rather than risk this sort of embarrassment.

EDITED TO ADD (12/10): News:

Five Transportation Security Administration employees have been placed on administrative leave after a sensitive airport security manual was posted on the Internet, the agency announced Wednesday.

EDITED TO ADD (12/12): Did the TSA compromise an intelligence program?

Posted on December 10, 2009 at 6:47 AM

My Reaction to Eric Schmidt

Schmidt said:

I think judgment matters. If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place. If you really need that kind of privacy, the reality is that search engines—including Google—do retain this information for some time and it’s important, for example, that we are all subject in the United States to the Patriot Act and it is possible that all that information could be made available to the authorities.

This, from 2006, is my response:

Privacy protects us from abuses by those in power, even if we’re doing nothing wrong at the time of surveillance.

We do nothing wrong when we make love or go to the bathroom. We are not deliberately hiding anything when we seek out private places for reflection or conversation. We keep private journals, sing in the privacy of the shower, and write letters to secret lovers and then burn them. Privacy is a basic human need.

[…]

For if we are observed in all matters, we are constantly under threat of correction, judgment, criticism, even plagiarism of our own uniqueness. We become children, fettered under watchful eyes, constantly fearful that—either now or in the uncertain future—patterns we leave behind will be brought back to implicate us, by whatever authority has now become focused upon our once-private and innocent acts. We lose our individuality, because everything we do is observable and recordable.

[…]

This is the loss of freedom we face when our privacy is taken from us. This is life in former East Germany, or life in Saddam Hussein’s Iraq. And it’s our future as we allow an ever-intrusive eye into our personal, private lives.

Too many wrongly characterize the debate as “security versus privacy.” The real choice is liberty versus control. Tyranny, whether it arises under threat of foreign physical attack or under constant domestic authoritative scrutiny, is still tyranny. Liberty requires security without intrusion, security plus privacy. Widespread police surveillance is the very definition of a police state. And that’s why we should champion privacy even when we have nothing to hide.

EDITED TO ADD: See also Daniel Solove’s “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy.”

Posted on December 9, 2009 at 12:22 PM

UK Defense Security Manual Leaked

Wow. It’s over 2,000 pages, so it’ll take time to make any sense of. According to Ross Anderson, who’s given it a quick look over, “it seems to be the bureaucratic equivalent of spaghetti code: a hodgepodge of things written by people from different backgrounds, and with different degrees of clue, in different decades.”

The computer security stuff starts at page 1,531.

EDITED TO ADD (10/6): An article.

Posted on October 5, 2009 at 3:10 PM

