Entries Tagged "privacy"


Interesting Article on Libyan Internet Intelligence Gathering

This is worth reading, for the insights it provides on how a country goes about monitoring its citizens in the information age: a combination of targeted attacks and wholesale surveillance.

I’ll just quote one bit, this list of Western companies that helped:

Amesys, with its Eagle system, was just one of Libya’s partners in repression. A South African firm called VASTech had set up a sophisticated monitoring center in Tripoli that snooped on all inbound and outbound international phone calls, gathering and storing 30 million to 40 million minutes of mobile and landline conversations each month. ZTE Corporation, a Chinese firm whose gear powered much of Libya’s cell phone infrastructure, is believed to have set up a parallel Internet monitoring system for External Security: Photos from the basement of a makeshift surveillance site, obtained from Human Rights Watch, show components of its ZXMT system, comparable to Eagle. American firms likely bear some blame, as well. On February 15, just prior to the revolution, regime officials reportedly met in Barcelona with officials from Narus, a Boeing subsidiary, to discuss Internet-filtering software. And the Human Rights Watch photos also clearly show a manual for a satellite phone monitoring system sold by a subsidiary of L-3 Communications, a defense conglomerate based in New York.

Posted on June 5, 2012 at 6:07 AM

The Vulnerabilities Market and the Future of Security

Recently, there have been several articles about the new market in zero-day exploits: new and unpatched computer vulnerabilities. It’s not just software companies, who sometimes pay bounties to researchers who alert them of security vulnerabilities so they can fix them. And it’s not only criminal organizations, who pay for vulnerabilities they can exploit. Now there are governments, and companies who sell to governments, who buy vulnerabilities with the intent of keeping them secret so they can exploit them.

This market is larger than most people realize, and it’s becoming even larger. Forbes recently published a price list for zero-day exploits, along with the story of a hacker who received $250K from “a U.S. government contractor.” (At first I didn’t believe the story or the price list, but I have been convinced that both are true.) Forbes also published a profile of a company called Vupen, whose business is selling zero-day exploits. Other companies doing this range from startups like Netragard and Endgame to large defense contractors like Northrop Grumman, General Dynamics, and Raytheon.

This is very different from 2007, when researcher Charlie Miller wrote about his attempts to sell zero-day exploits, and from 2010, when a survey implied that there wasn’t much money in selling zero days. The market has matured substantially in the past few years.

This new market perturbs the economics of finding security vulnerabilities. And it does so to the detriment of us all.

I’ve long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full-disclosure—announcing the vulnerability publicly and damn the consequences—to something called “responsible disclosure”: giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that—at least in most cases—is patched. And a patched vulnerability makes us all more secure.

This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it’s even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they’re working on—and then secretly sell them to some government agency.

No commercial vendors perform the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage.

Even more importantly, the new market for security vulnerabilities results in a variety of government agencies around the world that have a strong interest in those vulnerabilities remaining unpatched. These range from law-enforcement agencies (like the FBI and the German police), which are trying to build targeted Internet surveillance tools, to intelligence agencies (like the NSA), which are trying to build mass Internet surveillance tools, to military organizations, which are trying to build cyber-weapons.

All of these agencies have long had to wrestle with the choice of whether to use newly discovered vulnerabilities to protect or to attack. Inside the NSA, this was traditionally known as the “equities issue,” and the debate was between the COMSEC (communications security) side of the NSA and the SIGINT (signals intelligence) side. If they found a flaw in a popular cryptographic algorithm, they could either use that knowledge to fix the algorithm and make everyone’s communications more secure, or they could exploit the flaw to eavesdrop on others—while at the same time allowing even the people they wanted to protect to remain vulnerable. This debate raged through the decades inside the NSA. From what I’ve heard, by 2000, the COMSEC side had largely won, but things flipped completely around after 9/11.

The whole point of disclosing security vulnerabilities is to put pressure on vendors to release more secure software. It’s not just that they patch the vulnerabilities that are made public—the fear of bad press makes them implement more secure software development processes. It’s another economic process; the cost of designing software securely in the first place is less than the cost of the bad press after a vulnerability is announced plus the cost of writing and deploying the patch. I’d be the first to admit that this isn’t perfect—there’s a lot of very poorly written software still out there—but it’s the best incentive we have.

We’ve always expected the NSA, and those like them, to keep the vulnerabilities they discover secret. We have been counting on the public community to find and publicize vulnerabilities, forcing vendors to fix them. With the rise of these new pressures to keep zero-day exploits secret, and to sell them for exploitation, there will be even less incentive for software vendors to ensure the security of their products.

As the incentive for hackers to keep their vulnerabilities secret grows, the incentive for vendors to build secure software shrinks. As a recent EFF essay put it, this is “security for the 1%.” And it makes the rest of us less safe.
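The economic argument above can be sketched numerically. The figures below are entirely hypothetical, chosen only to illustrate how the vendor’s incentive depends on the probability that a vulnerability becomes public; none of them come from the essay:

```python
# Illustrative sketch (all figures hypothetical) of the vendor's incentive
# argument: up-front secure development pays off only when public disclosure
# of vulnerabilities -- and the bad press that follows -- is likely.

def expected_breach_cost(p_disclosure, press_cost, patch_cost):
    """Expected cost of shipping insecure software, given the probability
    that a vulnerability in it is publicly disclosed."""
    return p_disclosure * (press_cost + patch_cost)

secure_dev_cost = 400_000   # hypothetical extra secure-development spend
press_cost = 2_000_000      # hypothetical reputational damage after disclosure
patch_cost = 300_000        # hypothetical cost of writing and deploying a fix

# With an active public research community, disclosure is likely...
public_market = expected_breach_cost(0.8, press_cost, patch_cost)
# ...but if exploits are quietly sold to governments, disclosure is rare.
secret_market = expected_breach_cost(0.1, press_cost, patch_cost)

print(secure_dev_cost < public_market)  # True: secure design is cheaper
print(secure_dev_cost < secret_market)  # False: the incentive disappears
```

Under these made-up numbers, moving vulnerabilities from the public market to the secret one flips the inequality, which is exactly the incentive shift the essay describes.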

This essay previously appeared on Forbes.com.

EDITED TO ADD (6/6): Brazilian Portuguese translation here.

EDITED TO ADD (6/12): This presentation makes similar points to my essay.

Posted on June 1, 2012 at 6:48 AM

The Banality of Surveillance Photos

Interesting essay on a trove of surveillance photos from Cold War-era Prague.

Cops, even secret cops, are for the most part ordinary people. Working stiffs concerned with holding down jobs and earning a living. Even those who thought it was important to find enemies recognized the absurdity of their task.

I take photos all the time and these empty blurry frames tell me that they were made intentionally. Shot out of boredom, as little acts of defiance, the secret police wandered the streets of Prague for twenty years taking lousy pictures of people from far away because a job is a job.

Occasionally something interesting happened, like spotting a hot, stylish, American-made Ford Mustang Sally. However, it must have been an awful job, with dull days that turned into months and years, of killing time between lunch and dinner.

Posted on May 24, 2012 at 6:17 AM

Privacy Concerns Around "Social Reading"

Interesting paper: “The Perils of Social Reading,” by Neil M. Richards, from the Georgetown Law Journal.

Abstract: Our law currently treats records of our reading habits under two contradictory rules: rules mandating confidentiality, and rules permitting disclosure. Recently, the rise of the social Internet has created more of these records and more pressures on when and how they should be shared. Companies like Facebook, in collaboration with many newspapers, have ushered in the era of “social reading,” in which what we read may be “frictionlessly shared” with our friends and acquaintances. Disclosure and sharing are on the rise.

This Article sounds a cautionary note about social reading and frictionless sharing. Social reading can be good, but the ways in which we set up the defaults for sharing matter a great deal. Our reader records implicate our intellectual privacy: the protection of reading from surveillance and interference so that we can read freely, widely, and without inhibition. I argue that the choices we make about how to share have real consequences, and that “frictionless sharing” is not frictionless, nor is it really sharing. Although sharing is important, the sharing of our reading habits is special. Such sharing should be conscious and only occur after meaningful notice.

The stakes in this debate are immense. We are quite literally rewiring the public and private spheres for a new century. Choices we make now about the boundaries between our individual and social selves, between consumers and companies, between citizens and the state, will have unforeseeable ramifications for the societies our children and grandchildren inherit. We should make choices that preserve our intellectual privacy, not destroy it. This Article suggests practical ways to do just that.

Posted on May 23, 2012 at 7:25 AM

Smart Phone Privacy App

MobileScope looks like a great tool for monitoring and controlling what information third parties get from your smart phone apps:

We built MobileScope as a proof-of-concept tool that automates much of what we were doing manually: monitoring mobile devices for surprising traffic and highlighting potentially privacy-revealing flows.

[…]

Unlike PCs, we have little control over the underlying privacy and security features of our mobile devices. They come pre-installed with locked-down operating systems that often restrict their owners from exercising meaningful control unless they’re willing to void their warranty and jailbreak the device.

Our current plans are to release MobileScope in the coming weeks and allow interested consumers, developers, regulators, and press to see what information their mobile devices can transmit.

Posted on May 11, 2012 at 6:42 AM

A Heathrow Airport Story about Trousers

Usually I don’t bother posting random stories about dumb or inconsistent airport security measures. But this one is particularly interesting:

“Sir, your trousers.”

“Pardon?”

“Sir, please take your trousers off.”

A pause.

“No.”

“No?”

The security official clearly was not expecting that response.

He begins to look like he doesn’t know what to do, bless him.

“You have no power to require me to do that. You also haven’t given any good reason. I am sure any genuine security concerns you have can be addressed in other ways. You do not need to invade my privacy in this manner.”

A pause.

“I think you probably need to get your manager, don’t you?” I am trying to be helpful.

As I said in my Economist essay, “At this point, we don’t trust America’s TSA, Britain’s Department for Transport, or airport security in general.” We don’t trust that, when they tell us to do something and claim it’s essential for security, they’re telling the truth.

Posted on April 11, 2012 at 9:57 AM

Teenagers and Privacy

Good article debunking the myth that young people don’t care about privacy on the Internet.

Most kids are well aware of risks, and make “fairly sophisticated” decisions about privacy settings based on advice and information from their parents, teachers, and friends. They differentiate between people they don’t know out in the world (distant strangers) and those they don’t know in the community, such as high school students in their hometown (near strangers). Marisa, for example, a 10-year-old interviewed in the study (who technically is not allowed to use Facebook), “enjoys participating in virtual worlds and using instant messenger and Facebook to socialize with her friends”; is keenly aware of the risks—especially those related to privacy; and she doesn’t share highly sensitive personal information on her Facebook profile and actively blocks certain people.

[…]

Rather than fearing the unknown stranger, young adults are more wary of the “known other”—parents, school teachers, classmates, etc.—for fear of “the potential for the known others to share embarrassing information about them”; 83 percent of the sample group cited at least one known other they wanted to maintain their privacy from; 71 percent cited at least one known adult. Strikingly, seven out of the 10 participants who reported an incident when their privacy was breached said it was “perpetrated by known others.”

Posted on April 10, 2012 at 10:21 AM

The Battle for Internet Governance

Good article on the current battle for Internet governance:

The War for the Internet was inevitable—a time bomb built into its creation. The war grows out of tensions that came to a head as the Internet grew to serve populations far beyond those for which it was designed. Originally built to supplement the analog interactions among American soldiers and scientists who knew one another off-line, the Internet was established on a bedrock of trust: trust that people were who they said they were, and trust that information would be handled according to existing social and legal norms. That foundation of trust crumbled as the Internet expanded. The system is now approaching a state of crisis on four main fronts.

The first is sovereignty: by definition, a boundary-less system flouts geography and challenges the power of nation-states. The second is piracy and intellectual property: information wants to be free, as the hoary saying goes, but rights-holders want to be paid and protected. The third is privacy: online anonymity allows for creativity and political dissent, but it also gives cover to disruptive and criminal behavior—and much of what Internet users believe they do anonymously online can be tracked and tied to people’s real-world identities. The fourth is security: free access to an open Internet makes users vulnerable to various kinds of hacking, including corporate and government espionage, personal surveillance, the hijacking of Web traffic, and remote manipulation of computer-controlled military and industrial processes.

Posted on April 4, 2012 at 12:34 PM

