Bruce Schneier: The Evolution of a Cryptographer

For a while, it seemed as if Bruce Schneier himself was encrypted. No one could decipher his whereabouts for an interview with CSO. This was unusual because Schneier, founder and CTO of Counterpane Internet Security, is usually aggressively available to the press. Plus, he has a new book to promote—Beyond Fear: Thinking Sensibly About Security in an Uncertain World—a decidedly iconoclastic and non-IT view of security. But the book also challenges physical security practitioners to learn a thing or two from the infosecurity ranks: to think in terms of systems.

Beyond Fear represents Schneier’s most ambitious departure yet from infosecurity, an arc he’s been traversing for some time now. When Senior Editor Scott Berinato finally tracked him down at a folk festival in Winnipeg, Canada, Schneier was eager to talk about his evolution from mathematician to security generalist, about the cultural disconnect between physical and information security, and about what he means by “brittle security.”

CSO: You’ve certainly evolved from your cryptography days.

Bruce Schneier: Security is a system, and the more I worked with security, the more I realized that a systems perspective is the most appropriate one. When my primary work was in cryptography, I would design mathematically secure systems that would be defeated by clever attacks against the computers they ran on. Then, when I started doing more work in computer security, I would see well-designed security software and hardware being defeated by insecure networks. And then secure networks being defeated by human error. And so on. Security is a chain, and it’s only as secure as the weakest link. Improving the cryptography is often a futile exercise in strengthening the strongest link. Looking for the weakest link inevitably leads one to an ever-expanding systems perspective.

Similarly, noncomputer security can best be understood and evaluated using the same techniques we’ve developed for computer systems. The whole impetus driving Beyond Fear was my realization that conventional security was mostly a hodgepodge of tricks and techniques, and that there was little systems thinking. And, as a computer-security expert, I could bring some of that kind of thinking into the debate.

You’ll ruffle some feathers with that. The cultural merger of physical and IT security will be hard. Some will take exception to your idea that physical security practitioners aren’t thinking in terms of systems.

There’s a huge cultural disconnect between the physical security guys and the computer security guys precisely because the former don’t think in terms of systems. I see it all the time when I look at security systems. The physical guys spend a lot of time worrying about national ID cards, while I wonder what identification has to do with the threats they are supposed to be countering. The physical guys make sure identification is checked twice at airports, but I notice that the people doing the ID verification can’t tell the real documents from forgeries. The physical guys think that confiscating a penknife from a grandmother is a success, but I see a system that failed. Our security is so riddled with holes because the physical guys don’t think in terms of systems.

Your evolution can be seen as a microcosm of a broader trend—that info and physical security are two tactics within the larger discipline of security. Do you meet resistance from physical security guys when you speak more broadly about security, and, conversely, what do IT security folks, cryptographers and the like think about your broadening view?

The traditional physical security profession is centuries old and very resistant to change. I find that most practitioners aren’t able to think about their traditional problems in new ways. We saw this clearly in January 2003, when Matt Blaze published a paper on how to break a physical door-locking system. Professional locksmiths were outraged, insisting that “secret knowledge” should never be in the hands of the masses. But from my perspective, secret knowledge is always in the hands of the bad guys, and unless the good guys possess the same knowledge, the problem will never get fixed.

IT professionals, on the other hand, are much more eager to learn how their methodologies and ways of thinking might apply to real-world security. I have long used physical metaphors to explain computer security techniques; it’s no surprise that computer security methodologies can apply to physical security problems.

A physical security guy would argue that computer security folks are always trying to solve problems with technology even when it’s not appropriate. Should we acknowledge some fallibility in leading with the IT security foot in some cases versus the physical security foot?

Computer security folks are always trying to solve problems with technology, which explains why so many computer solutions fail so miserably. I advocate thinking about security in terms of systems; I certainly don’t advocate wantonly applying technology. Most of the time, the security problems are inherently people problems, and technologies don’t help much. Photo ID checks are a great example: Technologists want to add this and that technology to make IDs harder to forge, but I worry about people bribing issuing officials and getting real IDs with fake names. (At least two of the 9/11 terrorists did that.) Making IDs harder to forge doesn’t solve the people problem.

The iconoclasm in your book starts with its subtitle, Thinking Sensibly About Security in an Uncertain World. The implicit jab here is that there’s plenty of nonsensical thinking that needs correcting. What are some of the most extreme cases you’ve seen or heard?

Stupid security stories are a dime a dozen. There’s a website that chronicles them (www.stupidsecurity.com)—and an annual award for the most egregious offenders (see “Award-Winning Stupidity,” Briefing, August 2003). My greatest fear surrounding all these stupid security measures is that people actually believe they do some good.

Many people believe that increasing demands for identification increases security. Many believe that confiscating pocketknives from airplane travelers decreases the risk of hijacking. Security is both a feeling and a reality, and the more the two diverge, the more trouble we’re all in.

What has two years of cyberterrorism hype yielded?

There is definitely a lot of nonsense being written about cyberterrorism these days. You can cry wolf only so many times before people start ignoring you; after two years, people have become numb to the real threats. Even as the risks of cyberterrorism are overstated and overhyped, the risks of cybercrime are downplayed and minimized. My company performs managed security monitoring for hundreds of companies worldwide, and we see common crime every day. But it’s the terrorism risks that grab the headlines, and then nothing happens. There’s an issue of deflected responsibility going on here. If the problem is cyberterrorism, then the government has to do something about it. If the problem is cybercrime, the network owners have to fix the problem. If you run a major network, it’s certainly attractive to shift the responsibility elsewhere.

Recently, a George Mason University graduate student presented his thesis to a group of CIOs. The student had mapped the entire telecommunications infrastructure of the United States, using largely publicly available information. The CIOs demanded he cede his laptop to authorities and leave the conference because his thesis was a terrorism risk.

That didn’t surprise me; it’s an example of a common confusion between secrecy and security. Actually securing our telecommunications infrastructure would be a resilient security countermeasure. Not bothering to secure our telecommunications infrastructure and then trying to keep the vulnerabilities secret is brittle. Once the secret is out, security is lost, and you can’t get it back. You have to assume that bad guys can collate the same information that the student did; thinking otherwise is sloppy security.

Why does this mind-set persist—that, if we keep secrets or outlaw certain information, somehow bad guys will give up?

There is a widespread belief that secrecy equals security. It’s a common misconception, and one very similar to the traditional shoot-the-messenger way of dealing with someone who brings bad news. I think it’s an easy mental trap to fall into, and many people do. Secrecy does work to a point, but it’s a very brittle form of security.

What do you mean by “brittle”?

I use the term to describe how many security systems fail. Brittle systems are systems that fail easily, completely and catastrophically. A house of cards is a brittle system; remove one card and the whole structure collapses. Most computer systems are brittle: When security fails, it fails completely. Resilient systems remain secure even in the face of failure. Different security systems back each other up. Minor failures don’t turn into major failures. Chapter 9 of Beyond Fear talks about brittleness and resilience, and I identify several ways of achieving resilience: defense in depth, compartmentalization, flexibility and so on. They’re all characteristics of natural security systems but are often lacking in computer security systems.
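
As a purely illustrative aside, the contrast between brittle and layered designs can be sketched in a few lines of Python. Nothing below comes from the interview; the Request fields, function names and checks are hypothetical stand-ins meant only to show that a single check fails completely when defeated, while independent layers back each other up.

```python
# Hypothetical sketch of brittle vs. defense-in-depth access control.
from dataclasses import dataclass


@dataclass
class Request:
    user: str
    has_valid_token: bool      # authentication layer
    source_ip_allowed: bool    # network layer
    action_permitted: bool     # authorization layer


def brittle_allow(req: Request) -> bool:
    # Brittle: one check guards everything. If the token check is
    # bypassed or the token is stolen, security fails completely.
    return req.has_valid_token


def resilient_allow(req: Request) -> bool:
    # Defense in depth: several independent layers must all agree.
    # Defeating one layer still leaves the others standing.
    return all([
        req.has_valid_token,
        req.source_ip_allowed,
        req.action_permitted,
    ])


if __name__ == "__main__":
    # An attacker with a stolen token, coming from a disallowed network,
    # attempting an action the account is not permitted to perform.
    stolen_token = Request(user="mallory", has_valid_token=True,
                           source_ip_allowed=False, action_permitted=False)
    print("brittle design admits it:  ", brittle_allow(stolen_token))    # True
    print("layered design rejects it: ", resilient_allow(stolen_token))  # False
```

The failure mode Schneier describes shows up in the output: the single-check design collapses as soon as its one safeguard is defeated, while the layered design degrades gracefully because the remaining checks still hold.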

How is Congress doing on security?

I’ve testified before Congress on several occasions, so they’re getting at least some of the right speakers.

The process of security is orthogonal to the process of our democratic government. In the United States, lawmaking is a process of consensus. You get so much FUD, self-serving aggrandizement, and partisan posturing because that’s the way the process works. Everyone provides his own input—often in the form of money—and some kind of consensus is reached. Security doesn’t work that way. In fact, the worst security systems are those developed by consensus. Real security means making hard choices that hurt certain companies and industries. Real security means doing what’s right, not what’s politically safe. The recent National Strategy to Secure Cyberspace is a case in point. Because the document offends no one, it accomplishes nothing.

While I believe that certain individual members of Congress have a good understanding of the problems and technologies of computer security, I still think they believe that if all the affected parties go into a room, they can negotiate a solution. The last time I testified, I told them that it wouldn’t work and why. They all nodded politely, but I don’t know if it stuck.

Why do people have such a difficult time thinking in terms of risk rather than in binary, all-or-nothing terms?

I think the real question is: Why are people so lousy at estimating, evaluating and accepting risk? That’s a complicated question, and I spend most of Chapter 2 of Beyond Fear trying to answer it. Evaluating risk is one of the most basic functions of a brain and something hard-wired into every species possessing one. Our own notions of risk are based on experience, but also on emotion and intuition. The problem is that the risk analysis ability that has served our species so well over the millennia is being overtaxed by modern society. Modern science and technology create things that cannot be explained to the average person; hence, the average person cannot evaluate the risks associated with them. Modern mass communication perturbs the natural experiential process, magnifying spectacular but rare risks and minimizing common but uninteresting risks. This kind of thing isn’t new—government agencies like the FDA were established precisely because the average person cannot intelligently evaluate the risks of food additives and drugs—but it does have profound effects on people’s security decisions. They make bad ones.

Do the privacy implications of some of the new security measures resulting from 9/11—widespread surveillance, Terrorism Information Awareness (TIA)—concern you?

Definitely. Terrorism is rare, while crime is common. Security systems that require massive databases in order to function—TIA, CAPPS 2—will make crime easier. They’ll make identity theft easier. They’ll make illegal government surveillance easier. They’ll make it more likely that rogue employees of the governments and corporations that maintain the systems will use the data for their own purposes. In the United States, there isn’t a government database that hasn’t been misused by the very people entrusted with keeping its information safe. IRS employees have perused the tax records of celebrities and friends. State employees have sold driving records to private investigators. This kind of thing happens all the time.

If these systems would actually help reduce the risk of terrorism, I might be willing to make trade-offs. But they don’t work. Even worse, they cause more security problems than they purport to solve.

What is going unreported, or underreported, in the realm of security?

The most surprising thing about security is how little it has to do with security. All security involves trade-offs, and the nonsecurity aspects of those trade-offs are generally far more important than the security considerations. For example, a bank would never implement a security system that would alienate all of its customers—no matter how secure it would make the bank. Airport security will confiscate the smallest knives but will allow matches and lighters—combustible materials—through because the tobacco lobby pressured the government. Businesses regularly have insecure networks because they find it easier to get things done that way.
