Bruce Schneier: Geek of the Week
If one were to close one's eyes and imagine a BT executive, one would never conjure up Bruce Schneier. He is one of the greatest experts in cryptography and a well-known mathematician; he even got a brief mention in the book The Da Vinci Code. He also remains an outspoken and articulate critic of the way that security is actually implemented in applications, as Richard Morris discovered when we dispatched him to conduct this interview.
Once a sleepy IT backwater, Identity Management has been thrust into the spotlight over the past few years. More and more companies, alarmed by the escalating incidence of identity theft, have come to understand the importance of protecting the integrity of digital information held about individuals, and the grave risks they run if they neglect to do so.
Until the mid 1970s, all cryptography had a fatal flaw. No matter how sophisticated the encrypted message -- from simple number substitutions to the German Enigma code -- if you captured the person or equipment used to send it, you had the means of decoding it.
The problem was not only about foreign governments pushing matchsticks under the fingernails of spies, but also about technological puzzles: how could a company take credit card details electronically if, by telling you how to encode yours, it gave you the means of decoding everyone else's?
Then thirty years ago, two teams of mathematicians, one in Britain and one in the US, independently came to the same startling conclusion. By using an esoteric branch of number theory, they discovered that messages could be sent without the person encrypting them having any idea how to decrypt them.
This technology meant that e-mails would only go to the intended recipient -- embarrassing clicks of the "reply-all" button excepted -- and stocks could be traded through wires. Banks could now tell customers how to transfer money electronically, safe in the knowledge that it wouldn't get redirected to the teenage hacker next door in the process.
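The idea the two teams hit upon, later known as public-key cryptography (with RSA its best-known instance), can be illustrated with a deliberately toy example. The primes and message below are textbook values chosen for readability, not anything resembling a real key -- genuine moduli run to hundreds of digits:

```python
# Toy RSA round trip -- illustrative only, far too small to be secure.
p, q = 61, 53                # two secret primes
n = p * q                    # public modulus: 3233
phi = (p - 1) * (q - 1)      # Euler's totient: 3120
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e

# Anyone who knows (n, e) can encrypt; only the holder of d can decrypt.
message = 65                 # a message encoded as a number smaller than n
cipher = pow(message, e, n)  # encrypt with the public key
plain = pow(cipher, d, n)    # decrypt with the private key; recovers 65
```

This is exactly the asymmetry the article describes: publishing (n, e) tells everyone how to encode a message without giving anyone the means of decoding it, because recovering d requires factoring n.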
Bruce Schneier, the cryptographer whose name appears in Dan Brown's best-selling novel The Da Vinci Code, has been described by the New York Times as "the world's best cryptologist". He became wealthy on the back of selling Counterpane Internet Security (the company he set up in 1999) to BT for about $40 million (£21 million) in 2006.
Now BT's chief security technology officer, he still has time to write extensive blogs and a free monthly newsletter called Crypto-Gram which attracts a regular 150,000 readers. In its ten years of publication it has become one of the most widely read forums for free-wheeling discussions, pointed critiques, and serious debate about security.
"Cryptography plays a large part in all IT transactions," he says. "It is not just about cracking codes, it is about creating them and using them for authentication."
He received his master's degree in computer science from the American University in Washington DC in 1988.
RM: Tell me, how is life at Counterpane since it was acquired by BT?
BS: Both Counterpane and Bruce are doing fine. Counterpane has been fully integrated into BT Americas, and is now part of BT's security portfolio worldwide. I have settled into my role as Chief Security Technology Officer for BT Global Services, and am continuing to write, speak, and conduct research on security.
RM: In general, do you think anti-virus software is as good as it could be, or is the motivating factor for the majority of companies simply "let's ship this as soon as possible"?
BS: Anti-virus is an old and well understood piece of computer security. All of the vendors have gotten it right by now, and pretty much all the products are equally okay from a security perspective. The differences are in the non-security aspects of the product.
RM: Do you still see the need for an IT security industry?
BS: It's a complicated question. There will always be a need for IT, and a need for security, so there will always be a need for an IT security industry. But as IT matures, IT security will cease to be a consumer industry.
RM: In your experience, what areas do IT departments under-spend on?
BS: In security, IT departments tend to under-spend on crime. It's the big scary risks that get the attention, but it's the smaller, more pedestrian risks that are generally more important.
RM: Why do you think that is?
BS: It's a basic cognitive bias: we over-estimate large and spectacular risks and under-estimate common risks. Flying versus driving is the best example: we fear the former, but the latter is orders of magnitude more dangerous. It's just how our brains are wired.
RM: Why are our networks and servers not secure? Do you think it is because companies will only design security as good as their customers know what to ask for?
BS: It's a combination of things. Certainly the marketplace doesn't value security as highly as we'd like it to. Companies respond to that, and don't design their products and services as well as they could. But also, security is hard. Building secure systems is very expensive; in some cases, prohibitively so. We're likely to be stuck with a patchwork of mediocre security products for a long time to come.
RM: But how do we go about educating people about web security? Some Governments, especially the UK Government, are extremely poor at educating people about internet security. How would you redress the balance?
BS: Education comes with experience. The younger generation is much more savvy about security on the Internet than the older generation. In that sense, there's not much formal education a government can require; people have to learn by doing. On the other hand, the very notion that we have to educate people about Internet security means that we, as security technologists, have failed in our jobs. Security needs to be built in; technology is changing so fast that people don't have time to develop an intuition about Internet security; we need to build security that protects people despite themselves.
RM: Do you think that two-factor authentication, or using methods in addition to passwords, could still be defeated by Trojan horses and phishing attacks?
BS: Of course; there isn't even any debate. The debate is whether two-factor authentication will turn out to be useless in defending against identity theft because criminals will turn to Trojan horses and man-in-the-middle attacks. It solves the security problems we had ten years ago, not the security problems we have today.
The problem with passwords is that they're too easy to lose control of. People give them to other people. People write them down, and other people read them. People send them in e-mail, and that e-mail is intercepted. People use them to log into remote servers, and their communications are eavesdropped on. They're also easy to guess. And once any of that happens, the password no longer works as an authentication token because you can't be sure who is typing that password in.
Two-factor authentication mitigates this problem. If your password includes a number that changes every minute, or a unique reply to a random challenge, then it's harder for someone else to intercept. You can't write down the ever-changing part. An intercepted password won't be good the next time it's needed. And a two-factor password is harder to guess. Sure, someone can always give his password and token to his secretary, but no solution is foolproof.
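The "number that changes every minute" scheme Schneier describes is standardised today as TOTP (RFC 6238), built on an HMAC of the current time window. Here is a minimal sketch using only Python's standard library; the shared secret and timestamp are the RFC's published test values, not anything from a real deployment:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 8) -> str:
    """One-time code that changes every `step` seconds (RFC 6238, HMAC-SHA1)."""
    counter = timestamp // step                    # which time window we are in
    msg = struct.pack(">Q", counter)               # counter as 8 big-endian bytes
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: shared secret "12345678901234567890"
# at Unix time 59 yields the 8-digit code 94287082.
print(totp(b"12345678901234567890", 59))  # 94287082
```

Because the code depends on the current time window, an intercepted value expires within seconds, which is precisely why, as Schneier notes, the remaining attacks must be active ones (phishing the code in real time, or a Trojan on the endpoint) rather than passive eavesdropping.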
These tokens have been around for at least two decades, but it's only recently that they have gotten mass-market attention. AOL is rolling them out. Some banks are issuing them to customers, and even more are talking about doing it. It seems that corporations are finally waking up to the fact that passwords don't provide adequate security, and are hoping that two-factor authentication will fix their problems.
Unfortunately, the nature of attacks has changed over two decades. Back then, the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses.
RM: Do you think that we can ever have security and privacy?
BS: The problem with questions like this is twofold. One, "security" and "privacy" aren't absolute concepts, they're relative concepts. We have always had, and always will have, some amount of security and some amount of insecurity. We have always had, and always will have, some amount of privacy and some amount of exposure. Two, inherent in the question is the idea that security and privacy are somehow in opposition: the only way to get more of one is to give up some of the other. This is ridiculous: door locks, burglar alarms, and tall fences increase security and have no adverse effect on privacy; fences protect both security and privacy. And public lists of sexual proclivities decrease privacy and have nothing to do with security.
RM: Have you come across any further poor examples of cryptography like that on the Windows CE device which was based on a single key?
BS: Dozens, all the time. Everybody who studies systems like these does. It's amazing how much bad security there is in the real world.
RM: Can you give me one really poor example?
BS: There's a lot of stupid security out there, and I honestly don't collect anecdotes anymore. I even have a name for security measures that give the appearance of security without the reality: 'Security Theatre'. But wasn't there a system that was recently discovered to have a constant key, regardless of what key the user actually entered?
RM: What did you make of Linus Torvalds saying 'the OpenBSD crowd is a bunch of masturbating monkeys, in that they make such a big deal about concentrating on security to the point where they pretty much admit that nothing else matters to them. To me, security is important. But it's no less important than everything else that is also important!' Do you agree with him?
BS: Of course. Security is just one consideration of many. But just as some people have a passion for photo manipulation and some people have a passion for real-time processes, some people have a passion for security. And if you have a passion for something, nothing else matters. OpenBSD was written by people with a passion for security, and that's a good thing.
RM: In your 2003 book Beyond Fear, you write that when the U.S. Government says that security against terrorism is worth curtailing individual civil liberties, it's because the cost of that decision is not borne by those making it. Do you think this statement still holds true?
BS: Externalities like that one are eternal; they don't change within a few years. And understanding security externalities -- places where those making the security trade-offs don't bear the costs of those trade-offs -- is essential to understanding security.
RM: With Facebook, Twitter, MySpace and other social networking sites encouraging us to share our personal information, is social engineering going to become easier to do? What will security look like in a world where nothing is private?
BS: We'll have a lot less security, that's for sure. Privacy is essential for security.
RM: Is it inevitable that governments will collect data on their population, and that when they do they will often lose it? Rather than stopping this from happening, should we just accept it and figure out how to minimise the consequences?
BS: I'm not sure what you mean by "inevitable." If there's no law prohibiting data collection, then yes -- governments will do it. But that's one of the primary reasons we have laws: to protect our freedoms and liberties from a government intent on taking them away from us.
We need comprehensive laws prohibiting the collection of personal information except in certain circumstances, as well as laws requiring that it be secured as long as the government has it, only used for certain purposes, and disposed of securely when it is no longer needed. Yes, it's a tall order, but it's a much better answer than just accepting that governments will inevitably become more invasive and powerful.
RM: What's best in the short term? Do you think making identities harder to steal, or making identities less useful to steal?
BS: The latter, definitely. Making personal identifying information harder to steal is probably impossible. Making it harder to use is our only real solution, short term or long term.
The crime involves two very separate issues. The first is the privacy of personal data. Personal privacy is important for many reasons, one of which is impersonation and fraud. As more information about us is collected, correlated, and sold, it becomes easier for criminals to get their hands on the data they need to commit fraud.
This is what's been in the news recently: ChoicePoint, LexisNexis, Bank of America, and so on. But data privacy is more than just fraud. Whether it is the books we take out of the library, the websites we visit, or the contents of our text messages, most of us have personal data on third-party computers that we don't want made public. The posting of Paris Hilton's phone book on the Internet is a celebrity example of this.
The second issue is the ease with which a criminal can use personal data to commit fraud. It doesn't take much personal information to apply for a credit card in someone else's name. It doesn't take much to submit fraudulent bank transactions in someone else's name. It's surprisingly easy to get an identification card in someone else's name. Our current culture, where identity is verified simply and sloppily, makes it easier for a criminal to impersonate his victim.
Proposed fixes tend to concentrate on the first issue -- making personal data harder to steal -- whereas the real problem is the second. If we're ever going to manage the risks and effects of electronic impersonation, we must concentrate on preventing and detecting fraudulent transactions.