
Government Use of Hackers as an Object of Fear

Interesting article about the perception of hackers in popular culture, and how the government uses the general fear of them to push for more power:

But these more serious threats don’t seem to loom as large as hackers in the minds of those who make the laws and regulations that shape the Internet. It is the hacker—a sort of modern folk devil who personifies our anxieties about technology—who gets all the attention. The result is a set of increasingly paranoid and restrictive laws and regulations affecting our abilities to communicate freely and privately online, to use and control our own technology, and which puts users at risk for overzealous prosecutions and invasive electronic search and seizure practices. The Computer Fraud and Abuse Act, the cornerstone of domestic computer-crime legislation, is overly broad and poorly defined. Since its passage in 1986, it has created a pile of confused caselaw and overzealous prosecutions. The Departments of Defense and Homeland Security manipulate fears of techno-disasters to garner funding and support for laws and initiatives, such as the recently proposed Cyber Intelligence Sharing and Protection Act, that could have horrific implications for user rights. In order to protect our rights to free speech and privacy on the internet, we need to seriously reconsider those laws and the shadowy figure used to rationalize them.

[…]

In the effort to protect society and the state from the ravages of this imagined hacker, the US government has adopted overbroad, vaguely worded laws and regulations which severely undermine internet freedom and threaten the Internet’s role as a place of political and creative expression. In an effort to stay ahead of the wily hacker, laws like the Computer Fraud and Abuse Act (CFAA) focus on electronic conduct or actions, rather than the intent of or actual harm caused by those actions. This leads to a wide range of seemingly innocuous digital activities potentially being treated as criminal acts. Distrust for the hacker politics of Internet freedom, privacy, and access abets the development of ever-stricter copyright regimes, or laws like the proposed Cyber Intelligence Sharing and Protection Act, which if passed would have disastrous implications for personal privacy online.

Note that this was written last year, before any of the recent overzealous prosecutions.

Posted on April 8, 2013 at 6:34 AM

Apple's iMessage Encryption Seems to Be Pretty Good

The U.S. Drug Enforcement Administration has complained (in a classified report, not publicly) that Apple’s iMessage end-to-end encryption scheme can’t be broken. On the one hand, I’m not surprised; end-to-end encryption of a messaging system is a fairly easy cryptographic problem, and it should be unbreakable. On the other hand, it’s nice to have some confirmation that Apple is looking out for the users’ best interests and not the governments’.

Still, it’s impossible for us to know if iMessage encryption is actually secure. It’s certainly possible that Apple messed up somewhere, and since we have no idea how their encryption actually works, we can’t verify its functionality. It would be really nice if Apple would release the specifications of iMessage security.

EDITED TO ADD (4/8): There’s more to this story:

The DEA memo simply observes that, because iMessages are encrypted and sent via the Internet through Apple’s servers, a conventional wiretap installed at the cellular carrier’s facility isn’t going to catch those iMessages along with conventional text messages. Which shouldn’t exactly be surprising: A search of your postal mail isn’t going to capture your phone calls either; they’re just different communications channels. But the CNET article strongly implies that this means encrypted iMessages cannot be accessed by law enforcement at all. That is almost certainly false.

The question is whether iMessage uses true end-to-end encryption, or whether Apple has copies of the keys.
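
A toy sketch of why that distinction matters. In a Diffie-Hellman-style key agreement, the two endpoints derive a shared key from each other's public values, and the relay in the middle (Apple's servers, in this analogy) only ever sees those public values. The parameters below are illustrative, not a real protocol, and nothing here reflects how iMessage actually works, since Apple hasn't published its design:

```python
import secrets

# Toy finite-field Diffie-Hellman. P is the Curve25519 field prime,
# used here only as a convenient large prime; g = 2 for illustration.
# This is a sketch of key agreement, not production cryptography.
P = 2**255 - 19
g = 2

# Each endpoint generates its private value on-device.
alice_priv = secrets.randbelow(P)
bob_priv = secrets.randbelow(P)

# Only these public values ever transit the relay server.
alice_pub = pow(g, alice_priv, P)
bob_pub = pow(g, bob_priv, P)

# Both sides derive the same shared key; the relay, holding only
# the public values, cannot.
alice_key = pow(bob_pub, alice_priv, P)
bob_key = pow(alice_pub, bob_priv, P)
assert alice_key == bob_key
```

If, instead, the server generated or escrowed the private values, it could derive the same key and decrypt everything it relays. That is the difference between true end-to-end encryption and encryption where the provider holds the keys.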

Another article.

Posted on April 5, 2013 at 1:05 PM

IT for Oppression

Whether it’s Syria using Facebook to help identify and arrest dissidents or China using its “Great Firewall” to limit access to international news throughout the country, repressive regimes all over the world are using the Internet to more efficiently implement surveillance, censorship, propaganda, and control. They’re getting really good at it, and the IT industry is helping. We’re helping by creating business applications—categories of applications, really—that are being repurposed by oppressive governments for their own use:

  • What is called censorship when practiced by a government is content filtering when practiced by an organization. Many companies want to keep their employees from viewing porn or updating their Facebook pages while at work. In the other direction, data loss prevention software keeps employees from sending proprietary corporate information outside the network and also serves as a censorship tool. Governments can use these products for their own ends.
  • Propaganda is really just another name for marketing. All sorts of companies offer social media-based marketing services designed to fool consumers into believing there is "buzz" around a product or brand. The only thing different in a government propaganda campaign is the content of the messages.
  • Surveillance is necessary for personalized marketing, the primary profit stream of the Internet. Companies have built massive Internet surveillance systems designed to track users’ behavior all over the Internet and closely monitor their habits. These systems track not only individuals but also relationships between individuals, to deduce their interests so as to advertise to them more effectively. It’s a totalitarian’s dream.
  • Control is how companies protect their business models by limiting what people can do with their computers. These same technologies can easily be co-opted by governments that want to ensure that only certain computer programs are run inside their countries or that their citizens never see particular news programs.
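
The first point above is easy to make concrete. A hypothetical sketch (the function and policy names are mine, not any real product's): the filtering code is identical whether it enforces a workplace policy or a censorship regime; only the blocklist changes.

```python
def is_blocked(url: str, blocklist: set[str]) -> bool:
    """Return True if the URL's host appears on the blocklist."""
    host = url.split("//")[-1].split("/")[0]
    return host in blocklist

# The same routine, two deployments: only the list differs.
corporate_policy = {"facebook.com"}   # workplace productivity filter
state_policy = {"nytimes.com"}        # a censor's list

assert is_blocked("https://facebook.com/feed", corporate_policy)
assert is_blocked("https://nytimes.com/world", state_policy)
assert not is_blocked("https://nytimes.com/world", corporate_policy)
```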

Technology magnifies power, and there’s no technical difference between a government and a corporation wielding it. This is how commercial security equipment from companies like BlueCoat and Sophos ends up being used by the Syrian and other oppressive governments to surveil—in order to arrest—and censor their citizens. This is how the same face-recognition technology that Disney uses in its theme parks ends up identifying protesters in China and Occupy Wall Street protesters in New York.

There are no easy technical solutions, especially because these four applications—censorship, propaganda, surveillance, and control—are intertwined; it can be hard to affect one without also affecting the others. Anonymity helps prevent surveillance, but it also makes propaganda easier. Systems that block propaganda can facilitate censorship. And giving users the ability to run untrusted software on their computers makes it easier for governments—and criminals—to install spyware.

We need more research into how to circumvent these technologies, but it’s a hard sell to both the corporations and governments that rely on them. For example, law enforcement in the US wants drones that can identify and track people, even as we decry China’s use of the same technology. Indeed, the battleground is often economic and political rather than technical; sometimes circumvention research is itself illegal.

The social issues are large. Power is using the Internet to increase its power, and we haven’t yet figured out how to correct the imbalances among government, corporate, and individual interests in our digital world. Cyberspace is still waiting for its Gandhi, its Martin Luther King, and a convincing path from the present to a better future.

This essay previously appeared in IEEE Computers & Society.

Posted on April 3, 2013 at 7:29 AM

Sixth Movie-Plot Threat Contest

It’s back, after a two-year hiatus. Terrorism is boring; cyberwar is in. Cyberwar, and its kin: cyber Pearl Harbor, cyber 9/11, cyber Armageddon. (Or make up your own: a cyber Black Plague, cyber Ragnarok, cyber comet-hits-the-earth.) This is how we get budget and power for militaries. This is how we convince people to give up their freedoms and liberties. This is how we sell-sell-sell computer security products and services. Cyberwar is hot, and it’s super scary. And now, you can help!

For this year’s contest, I want a cyberwar movie-plot threat. (For those who don’t know, a movie-plot threat is a scare story that would make a great movie plot, but is much too specific to build security policy around.) Not the Chinese attacking our power grid or shutting off 911 emergency services—people are already scaring our legislators with that sort of stuff. I want something good, something no one has thought of before.

Entries are limited to 500 words, and should be posted in the comments. In a month, I’ll choose some semifinalists, and we can all vote and pick the winner.

Good luck.

History: The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner. The Third Movie-Plot Threat Contest rules, semifinalists, and winner. The Fourth Movie-Plot Threat Contest rules and winner. The Fifth Movie-Plot Threat Contest rules, semifinalists, and winner.

EDITED TO ADD (5/26): Semifinalists will be announced (and voting will begin) on June 15. My apologies for being late about this.

EDITED TO ADD (6/14): Voting is now open.

Posted on April 1, 2013 at 12:38 PM

What I've Been Thinking About

I’m starting to think about my next book, which will be about power and the Internet—from the perspective of security. My objective will be to describe current trends, explain where those trends are leading us, and discuss alternatives for avoiding that outcome. Many of my recent essays have touched on various facets of this, although I’m still looking for synthesis. These facets include:

  1. The relationship between the Internet and power: how the Internet affects power, and how power affects the Internet. Increasingly, those in power are using information technology to increase their power.
  2. A feudal model of security that leaves users with little control over their data or computing platforms, forcing them to trust the companies that sell the hardware, software, and systems—and allowing those companies to abuse that trust.
  3. The rise of nationalism on the Internet and a cyberwar arms race, both of which play on our fears and which are resulting in increased military involvement in our information infrastructure.
  4. Ubiquitous surveillance for both government and corporate purposes—aided by cloud computing, social networking, and Internet-enabled everything—resulting in a world without any real privacy.
  5. The four tools of Internet oppression—surveillance, censorship, propaganda, and use control—have both government and corporate uses. And these are interrelated; building tools to fight one often has the side effect of facilitating another.
  6. Ill-conceived laws and regulations on behalf of either government or corporate power, either to prop up their business models (copyright protections), fight crime (increased police access to data), or control our actions in cyberspace.
  7. The need for leaks: both whistleblowers and FOIA suits. So much of what the government does to us is shrouded in secrecy, and leaks are the only way we know what’s going on. This also applies to the corporate algorithms and systems that control much of our lives.

On the one hand, we need new regimes of trust in the information age. (I wrote about this extensively in my most recent book, Liars and Outliers.) On the other hand, the risks associated with increasing technology might mean that the fear of catastrophic attack will make us unable to create those new regimes.

I believe society is headed down a dangerous path, and that we—as members of society—need to make some hard choices about what sort of world we want to live in. If we maintain our current trajectory, the future does not look good. It’s not clear if we have the social or political will to address the intertwined issues of power, security, and technology, or even have the conversations necessary to understand the decisions we need to make. Writing about topics like this is what I do best, and I hope that a book on this topic will have a positive effect on the discourse.

The working title of the book is Power.com—although that might be too similar to the book Power, Inc. for the final title.

These thoughts are still in draft, and not yet part of a coherent whole. For me, the writing process is how I understand a topic, and the shape of this book will almost certainly change substantially as I write. I’m very interested in what people think about this, especially in terms of solutions. Please pass this around to interested people, and leave comments to this blog post.

Posted on April 1, 2013 at 6:07 AM

Friday Squid Blogging: Bomb Discovered in Squid at Market

Really:

An unexploded bomb was found inside a squid when the fish was slaughtered at a fish market in Guangdong province.

Oddly enough, this doesn’t seem to be the work of terrorists:

The stall owner, who has been selling fish for 10 years, told the newspaper the 1-meter-long squid might have mistaken the bomb for food.

Clearly there’s much to this story that remains unreported.

More news articles.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on March 29, 2013 at 4:19 PM
