Blog: April 2013 Archives

The Importance of Backups

I’ve already written about the guy who got a new trial because a virus ate his court records. Here’s someone who will have to redo his thesis research because someone stole his only copy of the data. Remember the rule: no one ever wants backups, but everyone always wants restores.

I have no idea if that image is real or not, but I’ve been hearing such stories for at least two decades.

Posted on April 30, 2013 at 1:29 PM • 45 Comments

Pinging the Entire Internet

Turns out there are a lot of vulnerable systems out there:

Many of the two terabytes (2,000 gigabytes) worth of replies Moore received from 310 million IPs indicated that they came from devices vulnerable to well-known flaws, or configured in a way that could let anyone take control of them.

On Tuesday, Moore published results on a particularly troubling segment of those vulnerable devices: ones that appear to be used for business and industrial systems. Over 114,000 of those control connections were logged as being on the Internet with known security flaws. Many could be accessed using default passwords and 13,000 offered direct access through a command prompt without a password at all.

[…]

The new work adds to other significant findings from Moore’s unusual hobby. Results he published in January showed that around 50 million printers, games consoles, routers, and networked storage drives are connected to the Internet and easily compromised due to known flaws in a protocol called Universal Plug and Play (UPnP). This protocol allows computers to automatically find printers, but is also built into some security devices, broadband routers, and data storage systems, and could be putting valuable data at risk.

Posted on April 30, 2013 at 6:11 AM • 25 Comments

More Links on the Boston Terrorist Attacks

Max Abrahms has two sensible essays.

Probably the ultimate in security theater: Williams-Sonoma stops selling pressure cookers in the Boston area “out of respect.” They say it’s temporary. (I bought a Williams-Sonoma pressure cooker last Christmas; I wonder if I’m now on a list.)

A tragedy: Sunil Tripathi, whom Reddit and other sites wrongly identified as one of the bombers, was found dead in the Providence River. I hope it’s not a suicide.

And worst of all, New York Mayor Bloomberg scares me more than the terrorists ever could:

In the wake of the Boston Marathon bombings, Mayor Michael Bloomberg said Monday the country’s interpretation of the Constitution will “have to change” to allow for greater security to stave off future attacks.

“The people who are worried about privacy have a legitimate worry,” Mr. Bloomberg said during a press conference in Midtown. “But we live in a complex world where you’re going to have to have a level of security greater than you did back in the olden days, if you will. And our laws and our interpretation of the Constitution, I think, have to change.”

Terrorism’s effectiveness doesn’t come from the terrorist acts; it comes from our reactions to it. We need leaders who aren’t terrorized.

EDITED TO ADD (4/29): Only indirectly related, but the Kentucky Derby is banning “removable lens cameras” for security reasons.

EDITED TO ADD (4/29): And a totally unscientific CNN opinion poll: 57% say no to: “Is it justifiable to violate certain civil liberties in the name of national security?”

EDITED TO ADD (4/29): It seems that Sunil Tripathi died well before the Boston bombing. So while his family was certainly affected by the false accusations, he wasn’t.

EDITED TO ADD (4/29): On the difference between mass murder and terrorism:

What the United States means by terrorist violence is, in large part, “public violence some weirdo had the gall to carry out using a weapon other than a gun.”

EDITED TO ADD (5/14): On fear fatigue—and a good modeling of how to be indomitable. On the surprising dearth of terrorists. Why emergency medical response has improved since 9/11. What if the Boston bombers had been shooters instead. More on Williams-Sonoma: Shortly thereafter, they released a statement apologizing to anyone who might be offended. Don’t be terrorized. “The new terrorism”—from 2011 (in five parts, and this is the first one). This is kind of wordy, but it’s an interesting essay on the nature of fear…and cats. Glenn Greenwald on reactions to the bombing. How a 20-year-old Saudi victim of the bombing was instantly, and baselessly, converted by the US media and government into a “suspect.” Four effective responses to terrorism. People being terrorized. On not letting the bad guys win. Resilience. More resilience. Why terrorism works. Data shows that terrorism has declined. Mass hysteria as a terrorist weapon.

Posted on April 29, 2013 at 10:27 AM • 54 Comments

Random Links on the Boston Terrorist Attack

Encouraging poll data says that maybe Americans are starting to have realistic fears about terrorism, or at least are refusing to be terrorized.

Good essay by Scott Atran on terrorism and our reaction.

Reddit apologizes. I think this is a big story. The Internet is going to help in everything, including trying to identify terrorists. This will happen whether or not the help is needed, wanted, or even helpful. I think this took the FBI by surprise. (Here’s a good commentary on this sort of thing.)

Facial recognition software didn’t help. I agree with this, though; it will only get better.

EDITED TO ADD (4/25): “Hapless, Disorganized, and Irrational”: John Mueller and Mark Stewart describe the Boston—and most other—terrorists.

Posted on April 25, 2013 at 6:42 AM • 20 Comments

More Plant Security Countermeasures

I’ve talked about plant security systems, both here and in Beyond Fear. Specifically, I’ve talked about tobacco plants that call air strikes against insects that eat them, by releasing a scent that attracts predators to those insects. Here’s another defense: the plants also tag caterpillars for predators by feeding them a sweet snack (full episode here) that makes them give off a strong scent.

Posted on April 24, 2013 at 6:51 AM • 8 Comments

The Police Now Like Amateur Photography

PhotographyIsNotACrime.com points out the obvious: after years of warning us that photography is suspicious, the police were happy to accept all of those amateur photographs and videos at the Boston Marathon.

Adding to the hypocrisy is that these same authorities will most likely start clamping down on citizens with cameras more than ever once the smoke clears and we once again become a nation of paranoids willing to give up our freedoms in exchange for some type of perceived security.

After all, that is exactly how it played out in the years after the 9/11 terrorist attacks where it became impossible to photograph buildings, trains or airplanes without drawing the suspicion of authorities as potential terrorists.

Posted on April 23, 2013 at 12:34 PM • 22 Comments

Securing Members of Congress from Transparency

I commented in this article on the repeal of the transparency provisions of the STOCK Act:

Passed in 2012 after a 60 Minutes report on insider trading practices in Congress, the STOCK Act banned members of Congress and senior executive and legislative branch officials from trading based on government knowledge. To give the ban teeth, the law directed that many of these officials’ financial disclosure forms be posted online and their contents placed into public databases. However, in March, a report ordered by Congress found that airing this information on the Internet could put public servants and national security at risk. The report urged that the database, and the public disclosure for everyone but members of Congress and the highest-ranking executive branch officials—measures that had never been implemented—be thrown out.

The government sprang into action: last week, both chambers of Congress unanimously agreed to adopt the report’s recommendations. Days later, Obama signed the changes into law.

The article went on to talk to four cybersecurity experts, all of whom basically said the same thing:

Bluntest of all was Bruce Schneier, a leading security technologist and cryptographer. “They put them personally at risk by holding them accountable,” Schneier said of the impact of disclosure rules on Congress members and DC staffers. “That’s why they repealed it. The national security bit is bullshit you’re supposed to repeat.” (Three of the four experts we consulted opted for the same term of choice.)

There was a security risk, but it was not a national security risk. It was a personal Congressperson risk.

EDITED TO ADD (4/25): Jon Stewart quoted my “the national security bit is bullshit” line.

Posted on April 23, 2013 at 7:10 AM • 22 Comments

About Police Shoot Outs and Spectators

Hopefully this advice is superfluous for my audience, but it’s so well written it’s worth reading nonetheless:

7. SO, the bottom line is this: If you are in a place where you hear steady, and sustained, and nearby (let’s call that, for some technical reasons, anything less than 800 meters) gunfire, do these things:

  • Go to your basement. You are cool there.
  • If you don’t have a basement, go to the other side of the house from the firing, and leave, heading away from the firing. Do not stop for a mile.
  • If you do not think that you can leave, get on the ground floor, as far from the firing as possible, and place something solid between you and the firing. Solid is something like a bathtub, a car (engine block), a couple of concrete walls (single layer brick…nope).
  • If you are high up (say 4th story or higher) just get away from the side of the building where the firing is taking place. You will, mostly, be protected by the thick concrete of the structure.

8. But for cripes sake, do not step out on to your front porch and start recording a video on your iPhone, unless you actually have a death-wish, or are being paid significant amounts of money, in advance, as a combat journalist/cameraman.

Posted on April 21, 2013 at 10:48 AM • 74 Comments

The Boston Marathon Bomber Manhunt

I generally give the police a lot of tactical leeway in times like this. The very armed and very dangerous suspects warranted extraordinary treatment. They were perfectly capable of killing again, taking hostages, planting more bombs—and we didn’t know the extent of the plot or the group. That’s why I didn’t object to the massive police dragnet, the city-wide lockdown, and so on.

Ross Anderson has a different take:

…a million people were under virtual house arrest; the 19-year-old fugitive from justice happened to be a Muslim. Whatever happened to the doctrine that infringements of one liberty to protect another should be necessary and proportionate?

In the London bombings, four idiots killed themselves in the first incident with a few dozen bystanders, but the second four failed and ran for it when their bombs didn’t go off. It didn’t occur to anyone to lock down London. They were eventually tracked down and arrested, together with their support team. Digital forensics played a big role; the last bomber to be caught left the country and changed his SIM, but not his IMEI. It’s next to impossible for anyone to escape nowadays if the authorities try hard.

He has a point, although I’m not sure I agree with it.

Opinions?

EDITED TO ADD (4/20): This makes the argument very well. On the other hand, readers are rightfully pointing out that the lockdown was in response to the shooting of a campus police officer, a carjacking, a firefight, and a vehicle chase with thrown bombs: the sort of thing that pretty much only happens in the movies.

EDITED TO ADD (4/20): More commentary on this Slashdot thread.

Posted on April 20, 2013 at 8:19 AM • 189 Comments

Me at the Berkman Center

Earlier this month I spent a week at the Berkman Center for Internet and Society, talking to people about power, security, technology, and threats (details here). As part of that week, I gave a public talk at Harvard. Because my thoughts are so diffuse and disjoint, I didn’t think I could pull it all together into a coherent talk. Instead, I asked Jonathan Zittrain to interview me on stage. He did, and the results are here: both video and transcript.

Be warned, though. You’re getting a bunch of half-formed raw thoughts, contradictions and all. I appreciate comments, criticisms, reading suggestions, and so on.

Posted on April 19, 2013 at 1:40 PM • 10 Comments

Initial Thoughts on the Boston Bombings

I rewrote my “refuse to be terrorized” essay for the Atlantic. David Rothkopf (author of the great book Power, Inc.) wrote something similar, and so did John Cole.

It’s interesting to see how much more resonance this idea has today than it did a dozen years ago. If other people have written similar essays, please post links in the comments.

EDITED TO ADD (4/16): Two good essays.

EDITED TO ADD (4/16): I did a Q&A on the Washington Post blog. And—I can hardly believe it—President Obama said “the American people refuse to be terrorized” in a press briefing today.

EDITED TO ADD (4/16): I did a podcast interview and another press interview.

EDITED TO ADD (4/16): This, on the other hand, is pitiful.

EDITED TO ADD (4/17): Another audio interview with me.

EDITED TO ADD (4/19): I have done a lot of press this week. Here’s a link to a “To the Point” segment, and two Huffington Post Live segments. I was on The Steve Malzberg Show, which I didn’t realize was shouting conservative talk radio until it was too late.

EDITED TO ADD (4/20): That Atlantic essay had 40,000 Facebook likes and 6800 Tweets. The editor told me it had about 360,000 hits. That makes it the most popular piece I’ve ever written.

EDITED TO ADD (5/14): More links here.

Posted on April 16, 2013 at 9:19 AM • 115 Comments

Google Glass Enables New Forms of Cheating

It’s mentioned here:

Mr. Doerr said he had been wearing the glasses and uses them especially for taking pictures and looking up words while playing Scattergories with his family, though it is questionable whether that follows the game’s rules.

Questionable? Questionable? It’s just like using a computer’s dictionary while playing Scrabble, or a computer odds program while playing poker, or a computer chess program while playing an in-person game. There’s no question at all—it’s cheating.

We’re seeing the birth of a new epithet, “glasshole.”

Posted on April 15, 2013 at 4:29 AM • 42 Comments

Remotely Hijacking an Aircraft

There is a lot of buzz on the Internet about a talk at the Hack in the Box conference by Hugo Teso, who claims he can remotely hack into and control an airplane’s avionics. He even wrote an Android app to do it.

I honestly can’t tell how real this is, and how much of it is an artifact of the particular simulator configurations he tested it on. On the one hand, it can’t possibly be true that an aircraft avionics computer accepts outside commands. On the other hand, we’ve seen lots of security vulnerabilities that seemed impossible until they turned out to be real. Right now, I’m skeptical.

EDITED TO ADD (4/12): Three good refutations.

Posted on April 12, 2013 at 10:50 AM • 40 Comments

Security Externalities and DDOS Attacks

Ed Felten has a really good blog post about the externalities that the recent Spamhaus DDOS attack exploited:

The attackers’ goal was to flood Spamhaus or its network providers with Internet traffic, to overwhelm their capacity to handle incoming network packets. The main technical problem faced by a DoS attacker is how to amplify the attacker’s traffic-sending capacity, so that the amount of traffic arriving at the target is much greater than the attacker can send himself. To do this, the attacker typically tries to induce many computers around the Internet to send large amounts of traffic to the target.

The first stage of the attack involved the use of a botnet, consisting of a large number of software agents surreptitiously installed on the computers of ordinary users. These bots were commanded to send attack traffic. Notice how this amplifies the attacker’s traffic-sending capability: by sending a few commands to the botnet, the attacker can induce the botnet to send large amounts of attack traffic. This step exploits our first externality: the owners of the bot-infected computers might have done more to prevent the infection, but the harm from this kind of attack activity falls onto strangers, so the computer owners had a reduced incentive to prevent it.

Rather than having the bots send traffic directly to Spamhaus, the attackers used another step to further amplify the volume of traffic. They had the bots send queries to DNS proxies across the Internet (which answer questions about how machine names like www.freedom-to-tinker.com relate to IP addresses like 209.20.73.44). This amplifies traffic because the bots can send a small query that elicits a large response message from the proxy.

Here is our second externality: the existence of open DNS proxies that will respond to requests from anywhere on the Internet. Many organizations run DNS proxies for use by their own people. A well-managed DNS proxy is supposed to check that requests are coming from within the same organization; but many proxies fail to check this—they’re “open” and will respond to requests from anywhere. This can lead to trouble, but the resulting harm falls mostly on people outside the organization (e.g. Spamhaus) so there isn’t much incentive to take even simple steps to prevent it.

To complete the attack, the DNS requests were sent with false return addresses, saying that the queries had come from Spamhaus—which causes the DNS proxies to direct their large response messages to Spamhaus.

Here is our third externality: the failure to detect packets with forged return addresses. When a packet with a false return address is injected, it’s fairly easy for the originating network to detect this: if a packet comes from inside your organization, but it has a return address that is outside your organization, then the return address must be forged and the packet should be discarded. But many networks fail to check this. This causes harm but—you guessed it—the harm falls outside the organization, so there isn’t much incentive to check. And indeed, this kind of packet filtering has long been considered a best practice but many networks still fail to do it.
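The amplification arithmetic and the egress check Felten describes can be sketched as follows. The packet sizes and address ranges below are illustrative assumptions, not measurements from the actual attack:

```python
import ipaddress

# Rough amplification arithmetic (sizes are illustrative assumptions):
# a small spoofed DNS query elicits a much larger response, so the
# traffic arriving at the victim is a multiple of what the bots send.
QUERY_BYTES = 64       # assumed size of a small DNS query
RESPONSE_BYTES = 3072  # assumed size of a large DNS response

def amplification_factor(query: int = QUERY_BYTES,
                         response: int = RESPONSE_BYTES) -> float:
    """How many bytes arrive at the target per byte the attacker sends."""
    return response / query

# The filtering check from the quote: a packet leaving an organization
# whose source address is outside the organization's own prefix must
# have a forged return address and should be discarded.
def should_drop(src_addr: str, org_prefix: str) -> bool:
    return ipaddress.ip_address(src_addr) not in ipaddress.ip_network(org_prefix)

# Example: a bot inside 192.0.2.0/24 forging a source address outside
# that prefix is caught; legitimate internal traffic passes.
assert should_drop("203.0.113.9", "192.0.2.0/24") is True
assert should_drop("192.0.2.55", "192.0.2.0/24") is False
```

With these assumed sizes the attacker's traffic is multiplied roughly 48-fold; the check in `should_drop` is the long-recommended best practice whose absence constitutes the third externality.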

I’ve been writing about security externalities for years. They’re often much harder to solve than technical problems.

By the way, a lot of the hype surrounding this attack was media manipulation.

Posted on April 10, 2013 at 12:46 PM • 12 Comments

Nice Security Mindset Example

A real-world one-way function:

Alice and Bob procure the same edition of the white pages book for a particular town, say Cambridge. For each letter Alice wants to encrypt, she finds a person in the book whose last name starts with this letter and uses his/her phone number as the encryption of that letter.

To decrypt the message Bob has to read through the whole book to find all the numbers.

And a way to break it:

I still use this example, with an assumption that there is no reverse look-up. I recently taught it to my AMSA students. And one of my 8th graders said, “If I were Bob, I would just call all the phone numbers and ask their last names.”

In the fifteen years I’ve been using this example, this idea never occurred to me. I am very shy, so it would never enter my mind to call a stranger and ask for their last name. My student made me realize that my own personality affected my mathematical inventiveness.
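As a minimal sketch of both the scheme and the student's attack (the directory, names, and phone numbers here are invented for illustration):

```python
# A tiny stand-in for the white pages: surname -> phone number.
BOOK = {
    "Adams": "617-555-0101",
    "Baker": "617-555-0102",
    "Clark": "617-555-0103",
    "Davis": "617-555-0104",
    "Evans": "617-555-0105",
}

def encrypt(message: str) -> list:
    """Alice: for each letter, use the number of someone whose
    surname starts with that letter."""
    numbers = []
    for letter in message.upper():
        match = next(num for name, num in BOOK.items()
                     if name.startswith(letter))
        numbers.append(match)
    return numbers

def decrypt_by_calling(ciphertext: list) -> str:
    """The 8th grader's attack: 'call' each number and ask who answers.
    Calling is modeled as a reverse lookup built by dialing every
    number once -- no reverse directory needed."""
    who_answers = {num: name for name, num in BOOK.items()}
    return "".join(who_answers[num][0] for num in ciphertext)

assert decrypt_by_calling(encrypt("CAB")) == "CAB"
```

Without the attack, Bob must scan the whole book for each number; the student's insight turns that linear scan into one cheap query per ciphertext entry.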

I’ve written about the security mindset in the past, and this is a great example of it.

Posted on April 9, 2013 at 1:49 PM • 46 Comments

Bitcoins in the Mainstream Media

Interesting article from the New Yorker.

I’m often asked what I think about bitcoins. I haven’t analyzed the security, but what I have seen looks good. The real issues are economic and political, and I don’t have the expertise to have an opinion on that.

By the way, here’s a recent criticism of bitcoins.

EDITED TO ADD (4/12): Four more good links.

EDITED TO ADD (4/16): Another good bitcoin story, although it’s from 2011.

Posted on April 9, 2013 at 6:05 AM • 49 Comments

Elite Panic

I hadn’t heard of this term before, but it’s an interesting one. The excerpt below is from an interview with Rebecca Solnit, author of A Paradise Built in Hell: The Extraordinary Communities That Arise in Disaster:

The term “elite panic” was coined by Caron Chess and Lee Clarke of Rutgers. From the beginning of the field in the 1950s to the present, the major sociologists of disaster—Charles Fritz, Enrico Quarantelli, Kathleen Tierney, and Lee Clarke—proceeding in the most cautious, methodical, and clearly attempting-to-be-politically-neutral way of social scientists, arrived via their research at this enormous confidence in human nature and deep critique of institutional authority. It’s quite remarkable.

Elites tend to believe in a venal, selfish, and essentially monstrous version of human nature, which I sometimes think is their own human nature. I mean, people don’t become incredibly wealthy and powerful by being angelic, necessarily. They believe that only their power keeps the rest of us in line and that when it somehow shrinks away, our seething violence will rise to the surface—that was very clear in Katrina. Timothy Garton Ash and Maureen Dowd and all these other people immediately jumped on the bandwagon and started writing commentaries based on the assumption that the rumors of mass violence during Katrina were true. A lot of people have never understood that the rumors were dispelled and that those things didn’t actually happen; it’s tragic.

But there’s also an elite fear—going back to the 19th century—that there will be urban insurrection. It’s a valid fear. I see these moments of crisis as moments of popular power and positive social change. The major example in my book is Mexico City, where the ’85 earthquake prompted public disaffection with the one-party system and, therefore, the rebirth of civil society.

Posted on April 8, 2013 at 1:30 PM • 44 Comments

Government Use of Hackers as an Object of Fear

Interesting article about the perception of hackers in popular culture, and how the government uses the general fear of them to push for more power:

But these more serious threats don’t seem to loom as large as hackers in the minds of those who make the laws and regulations that shape the Internet. It is the hacker—a sort of modern folk devil who personifies our anxieties about technology—who gets all the attention. The result is a set of increasingly paranoid and restrictive laws and regulations affecting our abilities to communicate freely and privately online, to use and control our own technology, and which puts users at risk for overzealous prosecutions and invasive electronic search and seizure practices. The Computer Fraud and Abuse Act, the cornerstone of domestic computer-crime legislation, is overly broad and poorly defined. Since its passage in 1986, it has created a pile of confused caselaw and overzealous prosecutions. The Departments of Defense and Homeland Security manipulate fears of techno-disasters to garner funding and support for laws and initiatives, such as the recently proposed Cyber Intelligence Sharing and Protection Act, that could have horrific implications for user rights. In order to protect our rights to free speech and privacy on the internet, we need to seriously reconsider those laws and the shadowy figure used to rationalize them.

[…]

In the effort to protect society and the state from the ravages of this imagined hacker, the US government has adopted overbroad, vaguely worded laws and regulations which severely undermine internet freedom and threaten the Internet’s role as a place of political and creative expression. In an effort to stay ahead of the wily hacker, laws like the Computer Fraud and Abuse Act (CFAA) focus on electronic conduct or actions, rather than the intent of or actual harm caused by those actions. This leads to a wide range of seemingly innocuous digital activities potentially being treated as criminal acts. Distrust for the hacker politics of Internet freedom, privacy, and access abets the development of ever-stricter copyright regimes, or laws like the proposed Cyber Intelligence Sharing and Protection Act, which if passed would have disastrous implications for personal privacy online.

Note that this was written last year, before any of the recent overzealous prosecutions.

Posted on April 8, 2013 at 6:34 AM • 13 Comments

Apple's iMessage Encryption Seems to Be Pretty Good

The U.S. Drug Enforcement Agency has complained (in a classified report, not publicly) that Apple’s iMessage end-to-end encryption scheme can’t be broken. On the one hand, I’m not surprised; end-to-end encryption of a messaging system is a fairly easy cryptographic problem, and it should be unbreakable. On the other hand, it’s nice to have some confirmation that Apple is looking out for the users’ best interests and not the governments’.

Still, it’s impossible for us to know if iMessage encryption is actually secure. It’s certainly possible that Apple messed up somewhere, and since we have no idea how their encryption actually works, we can’t verify its functionality. It would be really nice if Apple would release the specifications of iMessage security.

EDITED TO ADD (4/8): There’s more to this story:

The DEA memo simply observes that, because iMessages are encrypted and sent via the Internet through Apple’s servers, a conventional wiretap installed at the cellular carrier’s facility isn’t going to catch those iMessages along with conventional text messages. Which shouldn’t exactly be surprising: A search of your postal mail isn’t going to capture your phone calls either; they’re just different communications channels. But the CNET article strongly implies that this means encrypted iMessages cannot be accessed by law enforcement at all. That is almost certainly false.

The question is whether iMessage uses true end-to-end encryption, or whether Apple has copies of the keys.

Another article.

Posted on April 5, 2013 at 1:05 PM • 31 Comments

IT for Oppression

Whether it’s Syria using Facebook to help identify and arrest dissidents or China using its “Great Firewall” to limit access to international news throughout the country, repressive regimes all over the world are using the Internet to more efficiently implement surveillance, censorship, propaganda, and control. They’re getting really good at it, and the IT industry is helping. We’re helping by creating business applications—categories of applications, really—that are being repurposed by oppressive governments for their own use:

  • What is called censorship when practiced by a government is content filtering when practiced by an organization. Many companies want to keep their employees from viewing porn or updating their Facebook pages while at work. In the other direction, data loss prevention software keeps employees from sending proprietary corporate information outside the network and also serves as a censorship tool. Governments can use these products for their own ends.
  • Propaganda is really just another name for marketing. All sorts of companies offer social media-based marketing services designed to fool consumers into believing there is “buzz” around a product or brand. The only thing different in a government propaganda campaign is the content of the messages.
  • Surveillance is necessary for personalized marketing, the primary profit stream of the Internet. Companies have built massive Internet surveillance systems designed to track users’ behavior all over the Internet and closely monitor their habits. These systems track not only individuals but also relationships between individuals, to deduce their interests so as to advertise to them more effectively. It’s a totalitarian’s dream.
  • Control is how companies protect their business models by limiting what people can do with their computers. These same technologies can easily be co-opted by governments that want to ensure that only certain computer programs are run inside their countries or that their citizens never see particular news programs.

Technology magnifies power, and there’s no technical difference between a government and a corporation wielding it. This is how commercial security equipment from companies like BlueCoat and Sophos ends up being used by the Syrian and other oppressive governments to surveil—in order to arrest—and censor their citizens. This is how the same face-recognition technology that Disney uses in its theme parks ends up identifying protesters in China and Occupy Wall Street protesters in New York.

There are no easy technical solutions, especially because these four applications—censorship, propaganda, surveillance, and control—are intertwined; it can be hard to affect one without also affecting the others. Anonymity helps prevent surveillance, but it also makes propaganda easier. Systems that block propaganda can facilitate censorship. And giving users the ability to run untrusted software on their computers makes it easier for governments—and criminals—to install spyware.

We need more research into how to circumvent these technologies, but it’s a hard sell to both the corporations and governments that rely on them. For example, law enforcement in the US wants drones that can identify and track people, even as we decry China’s use of the same technology. Indeed, the battleground is often economic and political rather than technical; sometimes circumvention research is itself illegal.

The social issues are large. Power is using the Internet to increase its power, and we haven’t yet figured out how to correct the imbalances among government, corporate, and individual interests in our digital world. Cyberspace is still waiting for its Gandhi, its Martin Luther King, and a convincing path from the present to a better future.

This essay previously appeared in IEEE Computers & Society.

Posted on April 3, 2013 at 7:29 AM • 38 Comments

Sixth Movie-Plot Threat Contest

It’s back, after a two-year hiatus. Terrorism is boring; cyberwar is in. Cyberwar, and its kin: cyber Pearl Harbor, cyber 9/11, cyber Armageddon. (Or make up your own: a cyber Black Plague, cyber Ragnarok, cyber comet-hits-the-earth.) This is how we get budget and power for militaries. This is how we convince people to give up their freedoms and liberties. This is how we sell-sell-sell computer security products and services. Cyberwar is hot, and it’s super scary. And now, you can help!

For this year’s contest, I want a cyberwar movie-plot threat. (For those who don’t know, a movie-plot threat is a scare story that would make a great movie plot, but is much too specific to build security policy around.) Not the Chinese attacking our power grid or shutting off 911 emergency services—people are already scaring our legislators with that sort of stuff. I want something good, something no one has thought of before.

Entries are limited to 500 words, and should be posted in the comments. In a month, I’ll choose some semifinalists, and we can all vote and pick the winner.

Good luck.

History: The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner. The Third Movie-Plot Threat Contest rules, semifinalists, and winner. The Fourth Movie-Plot Threat Contest rules and winner. The Fifth Movie-Plot Threat Contest rules, semifinalists, and winner.

EDITED TO ADD (5/26): Semifinalists will be announced (and voting will begin) on June 15. My apologies for being late about this.

EDITED TO ADD (6/14): Voting is now open.

Posted on April 1, 2013 at 12:38 PM

What I've Been Thinking About

I’m starting to think about my next book, which will be about power and the Internet—from the perspective of security. My objective will be to describe current trends, explain where those trends are leading us, and discuss alternatives for avoiding that outcome. Many of my recent essays have touched on various facets of this, although I’m still looking for synthesis. These facets include:

  1. The relationship between the Internet and power: how the Internet affects power, and how power affects the Internet. Increasingly, those in power are using information technology to increase their power.
  2. A feudal model of security that leaves users with little control over their data or computing platforms, forcing them to trust the companies that sell the hardware, software, and systems—and allowing those companies to abuse that trust.
  3. The rise of nationalism on the Internet and a cyberwar arms race, both of which play on our fears and which are resulting in increased military involvement in our information infrastructure.
  4. Ubiquitous surveillance for both government and corporate purposes—aided by cloud computing, social networking, and Internet-enabled everything—resulting in a world without any real privacy.
  5. The four tools of Internet oppression—surveillance, censorship, propaganda, and use control—have both government and corporate uses. And they are interrelated; building tools to fight one often has the side effect of facilitating another.
  6. Ill-conceived laws and regulations enacted on behalf of either government or corporate power, whether to prop up their business models (copyright protections), fight crime (increased police access to data), or control our actions in cyberspace.
  7. The need for leaks: both whistleblowers and FOIA suits. So much of what the government does to us is shrouded in secrecy, and leaks are often the only way we know what’s going on. The same applies to the corporate algorithms and systems that control much of our lives.

On the one hand, we need new regimes of trust in the information age. (I wrote about this extensively in my most recent book, Liars and Outliers.) On the other hand, the risks associated with increasing technology might mean that the fear of catastrophic attack will make us unable to create those new regimes.

I believe society is headed down a dangerous path, and that we—as members of society—need to make some hard choices about what sort of world we want to live in. If we maintain our current trajectory, the future does not look good. It’s not clear if we have the social or political will to address the intertwined issues of power, security, and technology, or even have the conversations necessary to understand the decisions we need to make. Writing about topics like this is what I do best, and I hope that a book on this topic will have a positive effect on the discourse.

The working title of the book is Power.com—although that might be too similar to the book Power, Inc. for the final title.

These thoughts are still in draft, and not yet part of a coherent whole. For me, the writing process is how I understand a topic, and the shape of this book will almost certainly change substantially as I write. I’m very interested in what people think about this, especially in terms of solutions. Please pass this around to interested people, and leave comments to this blog post.

Posted on April 1, 2013 at 6:07 AM • 75 Comments
