Bruce Schneier’s book Secrets and Lies
Everyone who needs to understand or implement cryptographic algorithms reads Bruce Schneier’s Applied Cryptography. In that cookbook for cryptographers, it’s a matter of faith that deep mathematics, properly understood and cleverly arranged, can make three interrelated guarantees regarding digital communication:
- Confidentiality. Because messages are encrypted, nobody but the sender and the intended recipients can read them.
- Authenticity. Because messages are signed, nobody can impersonate anyone else.
- Integrity. Because messages are signed, nobody can tamper with them undetectably.
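The integrity and authenticity guarantees above can be sketched in miniature. Python's standard library has no public-key signatures, so this hypothetical illustration substitutes a shared-key MAC (HMAC) for the digital signature Schneier describes; the key name and messages are invented for the example.

```python
# Sketch of integrity/authenticity via a shared-key MAC. A real system
# would use public-key signatures; HMAC stands in for them here because
# it is available in the standard library.
import hmac
import hashlib

KEY = b"shared-secret-key"  # hypothetical key known only to sender and recipient

def sign(message: bytes) -> bytes:
    """Compute a MAC over the message; plays the role of a signature here."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Detect tampering: any change to the message invalidates the tag."""
    return hmac.compare_digest(sign(message), tag)

msg = b"Pay Alice $100"
tag = sign(msg)
assert verify(msg, tag)                      # untampered message verifies
assert not verify(b"Pay Mallory $100", tag)  # tampering is detected
```

The point of the book, of course, is that the mathematics here is the easy part; it is everything around this code that fails.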
Now, in his new book Secrets & Lies: Digital Security in a Networked World, Schneier—who is founder and CTO of Counterpane Internet Security—questions his own faith:
It’s just not true. Cryptography can’t do any of that. It’s not that cryptography has gotten any weaker since 1994, or that the things I described in that book are no longer true; it’s that cryptography doesn’t exist in a vacuum.
The error of Applied Cryptography is that I didn’t talk at all about the context. I talked about cryptography as if it were The Answer. I was pretty naive.
The result wasn’t pretty. Readers believed that cryptography was a kind of magic security dust that they could sprinkle over their software and make it secure. That they could invoke magic spells like “128-bit key” and “public-key infrastructure.” A colleague once told me that the world was full of bad security systems designed by people who read Applied Cryptography.
After the publication of Applied Cryptography, Schneier’s work as a security consultant led him to an increasing appreciation of the role of human factors. He began saying, over and over, that security is a process, not just a technology or a product. At one point, despairing that mathematically unbreakable security schemes kept failing in the real world, he abandoned the book and rethought his whole approach to security. Cryptography, he realized, is essentially prophylactic. You encrypt a message to guard against attack; if the message is cracked, the game’s over. But the real world isn’t an all-or-none game. We can’t prevent every bad thing from happening, and when bad things do happen, we can’t just fold our tents. Prevention strategies are important tools, but they’ve got to be embedded in an ongoing process of risk analysis, detection, and response.
Secrets and Lies opens with a log of security events culled from various sources during March 2000.
You’ve heard it all before: buffer overflows, e-mail worms, Microsoft Windows vulnerabilities, denial-of-service attacks, CGI exploits, privacy violations, defaced websites, credit-card fraud.
The litany of woe runs for several pages, and then Schneier notes that he stopped keeping track after only a week. There was nothing unusual about that week, either; it was just a normal week in the life of cyberspace. Nor is there any reason to expect this flood of events to diminish in the near future. In fact, the reverse is likely. Growing interconnectedness means growing complexity. There are more moving parts, more points of failure, and more ways to screw up.
Bad guys in cyberspace are motivated by the same things that motivate bad guys in the real world: notoriety, money, vengeance, thrills. But in cyberspace, three factors work in favor of the bad guys to make the threats they pose qualitatively different.
- Automation. The ability of computers to automate repetitive tasks changes the security landscape dramatically. If a burglar could quickly and automatically try all possible keys to a lock, we’d be a lot less secure in our homes. Cyberburglars can do exactly that.
- Action at a distance. Your Internet-connected computer is equally accessible to would-be burglars anywhere on the planet.
- Technique propagation. This may be the most disturbing factor. Many, if not most, attacks are perpetrated by “script kiddies”—people with time on their hands and no special skills. It only takes one clever person to discover and exploit a vulnerability. That technique can then be transferred, with terrifying ease and rapidity, to anyone wanting to use it.
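The automation point is easy to make concrete. A toy sketch, with an invented PIN and an unsalted hash standing in for a real credential store: exhausting a small keyspace is the cyberburglar's equivalent of trying every physical key in the lock, and it takes a computer no time at all.

```python
# Illustrative only: automation turns "try every key" from impractical
# to instantaneous. A 4-digit PIN is recovered from its (unsalted) hash
# by exhaustive search over the entire keyspace.
import hashlib

def pin_hash(pin):
    return hashlib.sha256(pin.encode()).hexdigest()

stolen_hash = pin_hash("4271")  # hypothetical leaked value

def brute_force(target):
    for candidate in range(10_000):   # all 10,000 possible 4-digit PINs
        pin = f"{candidate:04d}"
        if pin_hash(pin) == target:
            return pin
    return None

assert brute_force(stolen_hash) == "4271"
```

Real attacks scale this same loop up with larger dictionaries and faster hardware, which is why short secrets and unsalted hashes fail.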
These factors, working together, assure that we’ll see an ongoing, and likely increasing, flood of security events such as those Schneier logged in March.
It’s true that the same factors that help the bad guys also help the good guys.
Just as it only takes one smart bad guy to discover and disseminate an exploit, it only takes one smart good guy to discover and disseminate the fix. The global nature of the Internet, and its amazing ability to propagate memes at light speed, works both for good and evil.
Sometimes the would-be good guys go too far, though. They don’t just counteract exploits, they create and publicize them. The rationale is that flaws exist, vendors aren’t necessarily motivated to find and fix them, bad-guy hackers will inevitably find and exploit them, so it’s up to good-guy hackers to find and publicize them.
Schneier isn’t buying that argument. In particular, he draws a sharp distinction between researching and documenting a flaw, and distributing software that exploits it. People who make use of these software exploits are criminals, says Schneier, and so are the people who write and distribute the exploits. Let’s deal with them accordingly.
There’s a lot of confusion nowadays swirling around notions of privacy, anonymity, and identification.
Sometimes people confuse privacy and anonymity, though of course they’re quite different things. I want my medical records to be private, meaning that only those medical personnel I authorize can read them. But it makes no sense to discuss anonymity in this context. I’m never anonymous in my dealings with the medical establishment, or indeed in nearly any other real-world relationship.
There are, to be sure, a few valid reasons to hide identity. In special cases—abuse victims, whistle blowers—there is need for what Schneier terms “social anonymity.” Sometimes, people really do need to be able to speak and act anonymously. But in supporting such anonymity, the Internet also opens itself to attack. As it happens, true anonymity is as hard to achieve as any other kind of digital security. What abuse victims and whistle blowers can have, and what they really need, is “pseudonymity”: “Hi, my name’s Bob, and I’m an alcoholic.” But networks are inherently traceable, and Schneier concludes that “true anonymity is probably not possible on today’s Internet.” That’s probably a good thing.
I have long believed that it’s more important to assert our own identities, and authenticate who and what we encounter in cyberspace, than to hide our identities. Schneier thinks so too. I have tended to focus on the “who” aspect—that is, authenticating who really sent a message, who really presented a string of credit card numbers to an e-commerce site. Schneier acknowledges this, while also calling attention to the “what” aspect, assuring, for example, that a fact in a database or a video clip, crucial to some matter of public policy, has not been faked.
I’ve written in the past about how authentication on today’s Internet is asymmetrical. When I establish an SSL session with an e-commerce server, I’m given some assurance that the server is genuine, but it receives no similar assurance from me. In the long run, says Schneier, such mutual authentication is essential.
Authentication is about the continuity of relationships, knowing who to trust and who not to trust, making sense of a complex world. Even nonhumans need authentication: smells, sounds, touch. Arguably, life itself is an authenticating molecular pit of enzymes, antibodies, and so on.
We authenticate one another instinctively, in many ways, all the time, as we interact in the real world. When we extend our relationships into cyberspace, we lose visual, olfactory, tactile, and other cues. In their place, we offer PKI (public-key infrastructure). It’s a poor trade-off. For example, as Schneier notes, even the limited one-way server-only authentication that does exist on today’s Web is flawed. As often as not, the website named in a server certificate is not the same as the website on which the customer began a transaction. In such cases, theoretically, the customer should check the certificate, and contact the issuer to verify its authenticity. Of course, nobody does. In one of the most damning remarks in the book, he concludes:
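The mismatch Schneier describes can be shown in a toy check, the one customers never actually perform: does the name in the server's certificate match the site on which the transaction began? The certificate data and hostnames below are invented, and real TLS stacks also validate certificate chains, expiry, and wildcard rules; this sketch compares only the names.

```python
# Toy version of server-certificate name checking. Hypothetical data;
# real validation (chains, expiry, wildcards, revocation) is far more
# involved than this name comparison.
def cert_matches_site(cert, hostname):
    names = set(cert.get("subjectAltName", []))
    names.add(cert.get("commonName", ""))
    return hostname in names

# Hypothetical certificate naming a payment processor, not the storefront
# where the customer began the transaction:
cert = {"commonName": "secure.payments-provider.example",
        "subjectAltName": ["secure.payments-provider.example"]}

assert not cert_matches_site(cert, "www.storefront.example")  # the mismatch
assert cert_matches_site(cert, "secure.payments-provider.example")
```

Since nobody inspects certificates by hand, a mismatch like this one passes unnoticed, which is exactly Schneier's complaint.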
I make my purchases because the security comes from credit card rules, not from the SSL. My maximum liability from a stolen card is $50, and I can repudiate a transaction if a fraudulent merchant tries to cheat me. Digital certificates provide no actual security for electronic commerce; it’s a complete sham.
Phew! These are strong words indeed. I prefer to think that certificates are not so useless. I sign all my e-mail, and in doing so I assert a binding between my identity and my e-mail address, as certified by Thawte. It’s hardly infallible, but it’s vastly more assurance of identity than is conveyed by the average unsigned e-mail message. But Schneier’s point is a crucial one. It’s not enough to delegate authentication to a PKI. Ultimately, we need to take these matters into our own hands. To do that, we’ll need to be able to authenticate one another directly, in a peer-to-peer fashion, using cues that are convenient, natural, and easy to understand.
It’s a rare book that distills a lifetime of experience. It’s a rarer one that chronicles the kind of crises and transformation that Bruce Schneier has undergone in the past few years.
He’s emerged with a vital perspective. Cryptography is an amazingly powerful tool, but it’s only a tool. We need to use it for all it’s worth. But at the same time we have to be clear about its limitations, and locate its use within a real-world context that is scarier and more complicated than we dare imagine. Is there hope? Schneier admits that he abandoned the book for a while because he felt he could offer none. In the end, he concludes:
We’re still stuck with an insecure Internet and insecure password-protected systems. But by the same token, we’re still stuck with insecure door locks, assailable financial systems, and an imperfect legal system. None of this has caused the downfall of civilization yet, and it is unlikely to. And neither will our digital security systems, if we refocus on the processes instead of the technologies.
We’re going to be dealing with these issues for the rest of our lives, and they’re not going to get easier any time soon. Secrets and Lies doesn’t offer many clear-cut answers because, well, there aren’t any, and Schneier isn’t pulling any punches. What he gives us, instead, is a framework within which to think rationally and productively about digital security. It’s a remarkable book. Anyone touched by these issues—which is to say, almost everyone—should read it.