
When Technology Overtakes Security

A core, not side, effect of technology is its ability to magnify power and multiply force—for both attackers and defenders. One side creates ceramic handguns, laser-guided missiles, and new identity-theft techniques, while the other side creates anti-missile defense systems, fingerprint databases, and automatic facial recognition systems.

The problem is that it’s not balanced: Attackers generally benefit from new security technologies before defenders do. They have a first-mover advantage. They’re more nimble and adaptable than defensive institutions like police forces. They’re not limited by bureaucracy, laws, or ethics. They can evolve faster. And entropy is on their side—it’s easier to destroy something than it is to prevent, defend against, or recover from that destruction.

For the most part, though, society still wins. The bad guys simply can’t do enough damage to destroy the underlying social system. The question for us is: can society still maintain security as technology becomes more advanced?

I don’t think it can.

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious…and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.

This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.

As the destructive power of individual actors and fringe groups increases, so do the calls for—and society’s acceptance of—increased security.

Traditional security largely works “after the fact.” We tend not to ban or restrict the objects that can do harm; instead, we punish the people who do harm with objects. There are exceptions, of course, but they’re exactly that: exceptions. This system works as long as society can tolerate the destructive effects of those objects (for example, allowing people to own baseball bats and arresting them after they use them in a riot is only viable if society can tolerate the potential for riots).

When that isn’t enough, we resort to “before-the-fact” security measures. These come in two basic varieties: general surveillance of people in an effort to stop them before they do damage, and specific interdictions in an effort to stop people from using those technologies to do damage.

But these measures work better at keeping dangerous technologies out of the hands of amateurs than at keeping them out of the hands of professionals.

And in the global interconnected world we live in, they’re not anywhere close to foolproof. Still, a climate of fear causes governments to try. Lots of technologies are already restricted: entire classes of drugs, entire classes of munitions, explosive materials, biological agents. There are age restrictions on vehicles and training restrictions on complex systems like aircraft. We’re already almost entirely living in a surveillance state, though we don’t realize it or won’t admit it to ourselves. This will only get worse as technology advances… today’s Ph.D. theses are tomorrow’s high-school science-fair projects.

Increasingly, broad prohibitions on technologies, constant ubiquitous surveillance, and Minority Report-like preemptive security will become the norm. We can debate the effectiveness of various security measures in different circumstances. But the problem isn’t that these security measures won’t work—even as they shred our freedoms and liberties—it’s that no security is perfect.

Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We’ll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.

As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of someone in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn’t kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep ourselves safe; then someone goes off and destroys us anyway?
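To make that arithmetic concrete, here is a minimal sketch of the underlying probability argument (the per-actor probability and the independence assumption are illustrative, not part of the essay): if each of N people independently has even a tiny probability p of causing the catastrophe, the chance that at least one does is 1 - (1 - p)^N, which climbs toward 1 as N grows.

```python
# Illustrative sketch only: the per-actor probability p and the
# independence assumption are hypothetical, not claims from the essay.
def odds_of_catastrophe(p: float, n: int) -> float:
    """Probability that at least one of n independent actors,
    each with probability p, causes the catastrophe."""
    return 1 - (1 - p) ** n

# Even with p = one in a billion per person, the odds rise from
# roughly 0.000001 at n = 1,000 to about 0.63 at n = 1,000,000,000.
for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"n = {n:>13,}: {odds_of_catastrophe(1e-9, n):.6f}")
```

The particular numbers don't matter; the point is that once destructive power is individual-scale and the group is planet-scale, even vanishingly small per-person odds add up.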

If security won’t work in the end, what is the solution?

Resilience—building systems able to survive unexpected and devastating attacks—is the best answer we have right now. We need to recognize that large-scale attacks will happen, that society can survive more than we give it credit for, and that we can design systems to survive these sorts of attacks. Calling terrorism an existential threat is ridiculous in a country where more people die each month in car crashes than died in the 9/11 terrorist attacks.

If the U.S. can survive the destruction of an entire city—witness New Orleans after Hurricane Katrina or even New York after Sandy—we need to start acting like it, and planning for it. Still, it’s hard to see how resilience buys us anything but additional time. Technology will continue to advance, and right now we don’t know how to adapt any defenses—including resilience—fast enough.

We need a more flexible and rationally reactive approach to these problems and new regimes of trust for our information-interconnected world. We’re going to have to figure this out if we want to survive, and I’m not sure how many decades we have left.

This essay originally appeared on Wired.com.

Commentary.

Posted on March 21, 2013 at 7:02 AM

Lessons From the FBI's Insider Threat Program

This article is worth reading. One bit:

For a time the FBI put its back into coming up with predictive analytics to help predict insider behavior prior to malicious activity. Rather than coming up with a powerful tool to stop criminals before they did damage, the FBI ended up with a system that was statistically worse than random at ferreting out bad behavior. Compared to the predictive capabilities of Punxsutawney Phil, the groundhog of Groundhog Day, that system did a worse job of predicting malicious insider activity, Reidy says.

“We would have done better hiring Punxsutawney Phil and waving him in front of someone and saying, ‘Is this an insider or not an insider?’” he says.

Rather than getting wrapped up in prediction or detection, he believes organizations should start first with deterrence.

Posted on March 20, 2013 at 11:51 AM

FinSpy

Twenty-five countries are using the FinSpy surveillance software package (also called FinFisher) to spy on their own citizens:

The list of countries with servers running FinSpy is now Australia, Bahrain, Bangladesh, Britain, Brunei, Canada, the Czech Republic, Estonia, Ethiopia, Germany, India, Indonesia, Japan, Latvia, Malaysia, Mexico, Mongolia, Netherlands, Qatar, Serbia, Singapore, Turkmenistan, the United Arab Emirates, the United States and Vietnam.

It’s sold by the British company Gamma Group.

Older news.

EDITED TO ADD (3/20): The report.

EDITED TO ADD (4/12): Some more links.

Posted on March 19, 2013 at 1:34 PM

On Secrecy

Interesting law paper: “The Implausibility of Secrecy,” by Mark Fenster.

Abstract: Government secrecy frequently fails. Despite the executive branch’s obsessive hoarding of certain kinds of documents and its constitutional authority to do so, recent high-profile events—among them the WikiLeaks episode, the Obama administration’s celebrated leak prosecutions, and the widespread disclosure by high-level officials of flattering confidential information to sympathetic reporters—undercut the image of a state that can classify and control its information. The effort to control government information requires human, bureaucratic, technological, and textual mechanisms that regularly founder or collapse in an administrative state, sometimes immediately and sometimes after an interval. Leaks, mistakes, open sources—each of these constitutes a path out of the government’s informational clutches. As a result, permanent, long-lasting secrecy of any sort and to any degree is costly and difficult to accomplish.

This article argues that information control is an implausible goal. It critiques some of the foundational assumptions of constitutional and statutory laws that seek to regulate information flows, in the process countering and complicating the extensive literature on secrecy, transparency, and leaks that rest on those assumptions. By focusing on the functional issues relating to government information and broadening its study beyond the much-examined phenomenon of leaks, the article catalogs and then illustrates in a series of case studies the formal and informal means by which information flows out of the state. These informal means play an especially important role in limiting both the ability of state actors to keep secrets and the extent to which formal legal doctrines can control the flow of government information. The same bureaucracy and legal regime that keep open government laws from creating a transparent state also keep the executive branch from creating a perfect informational dam. The article draws several implications from this descriptive, functional argument for legal reform and for the study of administrative and constitutional law.

Posted on March 14, 2013 at 12:19 PM
