Be Our Guest: Bruce Schneier

Could you please tell us how you got involved in security?

Cryptography has always been a hobby of mine. My first job after college was with the Department of Defense. Years later, I was laid off from AT&T Bell Labs; I started writing about cryptography for computer magazines, and then wrote my first book, Applied Cryptography. I also started doing cryptography consulting, forming a company, Counterpane. Since then, my career has been an endless series of generalizations: from mathematical security to computer and network security, to more general security technology, to the economics of security, and now the psychology of security. My current research centers on the human side of security, especially the security of complex socio-technical systems.

You often refer to “security theater.” Could you elaborate on this?

Security theater refers to security measures that make people feel more secure without doing anything to actually improve their security. An example: the photo ID checks that have sprung up in office buildings. No one has ever explained why verifying that someone has a photo ID provides any actual security, but it looks like security to have a uniformed guard-for-hire looking at ID cards. Airport-security examples include the National Guard troops stationed at U.S. airports in the months after 9/11—their guns had no bullets. The U.S. color-coded system of threat levels, the pervasive harassment of photographers, and the metal detectors that have become increasingly common in hotels and office buildings since the Mumbai terrorist attacks are additional examples.

For many years, I dismissed security theater as a waste of effort. Recently I modified my position. Security is both a feeling and a reality, and effective security needs to address both.

Is human nature adapted to today’s security risks? Is more research and/or education necessary?

It’s not. We tend to be poor judges of risk. We overreact to rare risks, we ignore long-term risks, we magnify morally offensive risks. We get risks wrong—threats, probabilities, and costs—all the time. When we’re afraid, really afraid, we’ll do almost anything to make that fear go away. Politicians and marketers alike have learned to push that fear button to get us to do what they want.

The reasons are complicated and complex, and touch evolutionary psychology, neuropsychology, behavioral economics, and cognitive science.

And I’m not convinced that education is the answer, although it’s certainly our best hope for an answer.

There exist many secure building blocks, such as hashes and protocols, but many real-world systems still get hacked. Is the discrepancy between academic research and the real world a problem?

It’s not just a problem; it’s the problem. Those hashes, protocols, and so on are exactly that: building blocks. They’re used in complex socio-technical systems—payment systems, voting systems, ID card systems, copyright protection systems, social networking systems—whose security is much more than the security of those blocks.

There’s a quote commonly attributed to Albert Einstein: “In theory, theory and practice are the same. In practice, they are not.” That about sums it up.

There is often a trade-off between security and user acceptance. What is your feeling about the right trade-off in DRM systems?

DRM systems are simply a bad idea. DRM doesn’t protect against the threat of digital piracy, and it pisses off legitimate customers. And it’s a fool’s errand. All of us in computer security know this, yet the entertainment industries try and fail again and again. Trying to make digital files uncopyable is like trying to make water not wet. The companies that figure out how to make money by aligning themselves with the natural order of cyberspace—with bits that are easily and infinitely copyable—will succeed, while those that attempt to change that natural order will fail. Think advertising models (as with television), patronage models, cross-licensing models, real-time service models, and so on.

There is a general trend of switching from closed source to open source software. Is this good or bad for security?

It’s neither. For security, we want software and systems to be evaluated by trained security professionals. In a closed-source system, those professionals need to be hired by the software vendors because there’s no other way for anyone to see the code. In an open-source system, we hope that professionals will evaluate the code because they can. But there’s no guarantee that they will.

Open-source software has the potential to be more secure than closed-source software, but just publishing the source code is no guarantee of security.

What about the virtualization of computer resources: does it increase security?

IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system, and software vendors—and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. Virtualization moves the trust boundary out one step further—you now also have to trust your service vendors—but it doesn’t fundamentally change anything. It’s just another vendor you need to trust.

Of course, there are differences. You must trust outsourced service vendors completely; you can’t add third-party security products and services. You have to deal with cross-border legal issues. And you don’t just have to trust the outsourcer’s security, but its reliability, its availability, and its business continuity.

That being said, outsourcing will probably increase security for customers that have poor security to begin with: home users, naive users, and so on. That’s not really the issue, though. The economic benefits of cloud computing, virtualization, and outsourcing are so compelling that they’ll happen regardless.

