Bruce Schneier Q&A: The Endless Broadening of Security
For Bruce Schneier, the security discipline still evolves and expands. Now he's the one trying to expand it.
In September 2003, CSO published a groundbreaking interview with security guru Bruce Schneier. At the time, Schneier was evolving from cryptographer to general security thinker. An emerging generation of Internet criminals and the new realities of a post-9/11 world were fueling his ideas beyond information security to the broader realm where technology and the physical world interacted. He was beginning to see security as a social science. "Real security means making hard choices," Schneier said at the time. It's one of his favorite interviews, and one of ours, too.
Now, nearly five years later, we wanted to find out how Schneier's views on security have evolved since then. Of course his views have changed—Schneier is not one to let his ideas settle into complacency. For Schneier, security keeps getting broader, more general, more related to every aspect of our lives. Security, which started for him as fixed equations used for hiding digital data, has become nothing less than the fundamental catalyst for all human behavior. "I have come to believe that security is fundamentally about people," he says.
With this endless broadening of security has come an endless broadening of ambition. Schneier is launching the Workshop on Security and Human Behavior—an effort to bring together the brightest thinkers from any number of disciplines: Economists, technologists, psychologists, even poets will be there. The goal is no less than to launch a new academic discipline.
CSO spoke with Schneier about this effort, his impressions of how security's changed over the past five years, and the highly sophisticated risk management practiced by lima beans.
CSO: Five years ago, we published The Evolution of a Cryptographer about how your views on security had changed. Let's start there again. How have your views changed since then?
Schneier: My career seems to be an endless series of generalizations. First cryptography, then computer and network security, then general security—airlines, ID cards, terrorism, and so on—more recently security economics, and now the psychology of security.
This evolution reflects my continuing search for broader contexts by which to understand security. I started out in the details of the technology, but have come to believe that security is primarily about people—and that understanding the people is more important than understanding the technology. Because if we get the economic or psychological motivations wrong, it doesn't matter how good our technology is; it's not going to be used.
CSO: In other words, the fact that, technically, something should be secured has little to do with whether it will be secured?
Schneier: There are lots of examples of technically sound security ideas that never got fielded because the economic model was wrong. There was never any customer for digital cash because no one who was in a position to pay for the system cared about customer privacy. Instead, we ended up with PayPal, which isn't anonymous, but is easy to use and has a recognized income model. Solutions that defend against malware in the backbone don't work because the pain is felt at the endpoints. Cell phone companies spend millions to prevent toll fraud, but nothing on voice privacy. I could go on and on. It's not surprising, really. Security is fundamentally about people, and everything we know about people is relevant to security. What's more surprising to me is how so many of us security technologists have ignored the social sciences for so long.
CSO: "Everything we know about people is relevant to security" is broad! How broad can your security context get? Do you end up so broad that everything we do is characterized by that fundamental security proposition of fight-or-flight?
Schneier: I think you have to end with human psychology. In the end, security is about people. It's about how people make security trade-offs. Yes, it's about fight or flight, and it's about fear. But it's also about thinking rationally and making intelligent trade-offs. That's what separates us from the rest of the animals. We can override our fear, our fight-or-flight mentality. We can reason. We can think.
CSO: You're on the precipice of formalizing some of these ideas from the past half-decade into nothing less than a new academic discipline. Tell me what that is, what you hope to accomplish, and how it comes about that you see something so all-encompassing so clearly.
Schneier: It's a combination of disciplines: experimental psychology, behavioral economics, evolutionary biology, cognitive science, neuroscience, and game theory, with bits of philosophy, sociology, and anthropology. All of these disciplines are coming together to explain how we think, and they have a lot to say about how we process fear, risk, security, costs, and trade-offs. Researchers from these disciplines have a lot to teach us in computer security, and we have a lot to teach them. It is my hope that by bringing all these people together—which I'm trying to do at the Workshop on Security and Human Behavior this June—these different disciplines can start talking to each other, and eventually start collaborating with each other.
CSO: What would you name this collaborative discipline? Anthro-security?
Schneier: I like "Security and Human Behavior" because it captures the evolution of the discipline. The convergence of security research with ideas from economics, which began in the late 1990s, begat the economics of information security, and the first WEIS conference in 2002. This led to a convergence of psychology, usability, economics, and security and privacy. Now we're seeing a convergence of behavioral economics and the psychology of information security, with all those other disciplines thrown in, which I hope will continue to grow.
CSO: You've said you hope even poets get involved?
Schneier: Yes, even poets and writers have something to say here. Certainly horror writers like Stephen King and Dean Koontz understand humans and fear.
CSO: Let's talk about the neuroscience aspect of this. The use of fMRI images of the brain is becoming a pop phenomenon. Because we can see parts of the brain "light up" in these studies, we make simple causal connections between how the brain works and how we behave. It seems like people are using brain scans to explain away many behaviors, even if the underlying science is far more complicated than the popular stories about this technology make it seem. Can you talk about that?
Schneier: Recently there have been enormous scientific advances in understanding the human brain, but neuroscience is still in its infancy; scientists are still groping around looking for coherent theories. And certainly, whenever someone says something like "the seat of this piece of cognition is in this part of the brain," they're making a gross oversimplification.
Making security trade-offs is fundamental to being alive. After figuring out how to eat and reproduce, the next most important thing for a species to figure out is how to avoid predators. So with security such a fundamental driver of brain development, it's not surprising that very primitive parts of our brain control some of our basic security reflexes. The amygdala, for example, is an ancient part of the human brain that first evolved in primitive fishes. It's what controls the fight-or-flight response: increased heart rate, increased muscle tension, sweaty palms, and so on. That part of the brain is so fast that when you see a snake, your amygdala starts working even before your conscious brain knows what you're looking at. You can override your amygdala. That's part of what makes you uniquely human, and it happens whenever you take a dressing-down from your boss and just listen instead of either running away or stabbing him with a spear. But it's hard.
CSO: Let's talk about some of what's happened in the security world over the past five years. The Department of Homeland Security recently celebrated its fifth anniversary. Most people associate DHS with orange alerts, airport security lines, and Hurricane Katrina. How would you evaluate DHS over its first five years? Is DHS important to the future? Should it exist?
Schneier: The DHS was formed by throwing together a bunch of different organizations under new management, and it has spent most of its effort trying to coordinate all these organizations. Herding cats is easy compared to what the DHS is trying to do; you can tell by the very public failures we all talk about. I always thought creating a large new bureaucracy wasn't the way to help. And, unfortunately, the politicization of the DHS over the past five years has contributed to the problem. The DHS in its current form should be disbanded.
Two security truisms are relevant here. One, security decisions need to be made as close to the problem as possible, in both time and space. Distributed decision-making leaves a lot of room for abuse, so oversight is vital, but it is also more flexible and adaptive. And two, security analysis needs to happen as far away from the sources as possible. The whole picture is larger than any single agency, and each one only has access to a small slice of it. What this means is that we would do better as a nation if our counterterrorism response were coordinated centrally but implemented in a distributed fashion. Back in 2002, I wrote that "The new Department of Homeland Security needs to coordinate but not subsume." I still agree with that.
CSO: This is an elegant model for security: Act locally; think globally. It's what FEMA was celebrated for before it became part of DHS. It's so simple. Why don't we do this more?
Schneier: The U.S. Marine Corps, actually, has a doctrine that decisions are made close to the action, by people on the ground who know the situation best.
Two things prevent people from taking this approach: control and fear. Governments like control, and are predisposed to solutions that involve more centralized control. And people dislike fear. When people are scared, they'll do anything to make that feeling go away. Combine a government that wants control with people who will do whatever the government says they should, and you have the current situation.
CSO: Another phenomenon from the past five years: Walls. Often they're called fences now, but they've enjoyed renewed popularity in the security world. Along the Mexican border, on the West Bank, in Shia/Sunni neighborhoods in Iraq, and elsewhere, walls are being put up as a security measure. Often they're high-tech border control systems. What do you make of this? How does this relate to the work you're doing now?
Schneier: Security is often about boundaries. Walls are one of our most primitive boundaries, and we have an almost visceral reaction to them. They make us feel safe. The problem is that the security of walls is less about the walls themselves and more about the doors in them. Every boundary, whether real or virtual, has authorized ways to go through it. It's the checkpoints that allow people to go between countries, the VPNs that allow people to enter the corporate network, or the doors and windows that allow us access into our own homes. My worry about walls between nations is that they decrease interaction and, by extension, understanding and trust, which is a surer path toward long-term security. Sure, walls can provide security in the short term, but they're not a solution for the long term.
CSO: As you look at information security today, what do you see compared to five years ago?
Schneier: Nothing has surprised me about how criminals have evolved on the Internet. Those who were paying attention knew that criminals would find the Internet as soon as there was substantial money there, and that criminal activity would get increasingly sophisticated and organized. What's more surprising is, well, that so many people were surprised by this. We're still fielding security products to defend against the hacker threat instead of the criminal threat. We're still more focused on the specifics of tactics—again, to defend against a hacker mindset—than the generalities of threats that better characterize criminals. Criminals are not hackers. They are more tolerant of risk. They have better funding. They are more interested in the goal than the particulars of the method of reaching that goal. They are older than hackers, and more experienced. And they're international.
CSO: How could this gap between the problem and how we understand and address it still exist, five years on, with so much damage to computers and people in our wake?
Schneier: There are several reasons for this gap. One is systemic: the bad guys are always going to be at least one step ahead of the good guys—they're more nimble, have less bureaucracy, are quicker to adapt to new technologies, and so on—and in a fast-changing technological world this gap is only going to get worse. The second is tactical: we are focused more on technology than on the broader picture. Security companies sell technological point solutions, so naturally they focus attention on those solutions. News stories are about tactics, which reinforces this view. And we're all enamored with technology; otherwise, we would be doing something else for a living. We often ignore the forest for whatever neat techie trees we're currently working with.
CSO: You always seem to find inspiration for security wisdom in unusual places. Do you have any for us?
Schneier: I find the most surprising security wisdom in the insect world. It shouldn't come as a surprise. Evolutionarily, they've tried just about everything. Attack-and-defense techniques that worked were repeated, and those that failed weren't. Because evolution tries solutions at random and stops at the first workable solution found, insects tend to arrive at interesting and surprising solutions. It's common to find insect countermeasures that are non-obvious, but nonetheless effective.
CSO: Non-obvious security solutions?
Schneier: By and large, ants differentiate friends from foes by their sense of smell. There are some beetles that have evolved to defeat this security system by sneaking into the ant colony and lying low, playing dead if attacked, until they acquire the scent of their ant neighbors. After that, they're tolerated in the nest by the ants even as they feast on ant larvae.
Another story: Some flowers have long tube-like shapes to prevent bees, which don't pollinate them very effectively, from stealing their nectar. They prefer long-tongued hummingbirds. But some bees have evolved to chew a hole in the side of the flower and get the nectar that way.
But the neatest story I've found is about how lima bean plants defend themselves. When two-spotted spider mites attack them, the plants emit a chemical distress signal. The distress signal helps in three distinct ways. One, it gets other, nearby lima bean plants to start sending out the same distress signal, even if they're not being attacked yet. Two, it repels other two-spotted spider mites. And three, it attracts carnivorous mites to land on the lima bean plants and prey on the herbivorous two-spotted spider mites. Yes, the plants have evolved to call in air strikes against their attackers.