Bruce Schneier Blazes Through Your Questions
Last week, we solicited your questions for Internet security guru Bruce Schneier. He responded in force, taking on nearly every question, and his answers are extraordinarily interesting, providing mandatory reading for anyone who uses a computer. He also plainly thinks like an economist: search below for “crime pays” to see his sober assessment of why it’s better to earn a living as a security expert than as a computer criminal.
Thanks to Bruce and to all of you for participating. Here’s a note that Bruce attached at the top of his answers: “Thank you all for your questions. In many cases, I’ve written longer essays on the topics you’ve asked about. In those cases, I’ve embedded the links into the necessarily short answers I’ve given here.”
A: Fifty years is a long time. In 1957, fifty years ago, there were fewer than 2,000 computers total, and they were essentially used to crunch numbers. They were huge, expensive, and unreliable; sometimes, they caught on fire. There was no word processing, no spreadsheets, no e-mail, and no Internet. Programs were written on punch cards or paper tape, and memory was measured in thousands of digits. IBM sold a disk drive that could hold almost 4.5 megabytes, but it was five-and-a-half feet tall by five feet deep and would just barely fit through a standard door.
Read the science fiction from back then, and you’d be amazed by what they got wrong. Sure, they predicted smaller and faster, but no one got the socialization right. No one predicted eBay, instant messages, or blogging.
Moore’s Law predicts that in fifty years, computers will be a billion times more powerful than they are today. I don’t think anyone has any idea of the fantastic emergent properties you get from a billion-times increase in computing power. (I recently wrote about what security would look like in ten years, and that was hard enough.) But I can guarantee that it will be incredible, fantastic, and mind-blowing.
Q: With regard to identity theft, do you see any alternatives to data being king? Do you see any alternative systems which will mean that just knowing enough about someone is not enough to commit a crime?
A: Yes. Identity theft is a problem for two reasons. One, personal identifying information is incredibly easy to get; and two, personal identifying information is incredibly easy to use. Most of our security measures have tried to solve the first problem. Instead, we need to solve the second problem. As long as it’s easy to impersonate someone if you have his data, this sort of fraud will continue to be a major problem.
The basic answer is to stop relying on authenticating the person, and instead authenticate the transaction. Credit cards are a good example of this. Credit card companies spend almost no effort authenticating the person—hardly anyone checks your signature, and you can use your card over the phone, where they can’t even check if you’re holding the card—and spend all their effort authenticating the transaction. Of course it’s more complicated than this; I wrote about it in more detail here and here.
Q: What’s the next major identity verification system?
A: Identity verification will continue to be the hodge-podge of systems we have today. You’re recognized by your face when you see someone you know; by your voice when you talk to someone you know. Open your wallet, and you’ll see a variety of ID cards that identify you in various situations—some by name and some anonymously. Your keys “identify” you as someone allowed in your house, your office, your car. I don’t see this changing anytime soon, and I don’t think it should. Distributed identity is much more secure than a single system. I wrote about this in my critique of REAL ID.
Q: If we can put a man on the moon, why in the world can’t we design a computer that can “cold boot” nearly instantaneously? I know about hibernation, etc., but when I do have to reboot, I hate waiting those three or four minutes.
A: Of course we can; the Amiga was a fast-booting computer, and OpenBSD boxes boot in less than a minute. But the current crop of major operating systems just don’t. This is an economics blog, so you tell me: why don’t the computer companies compete on boot-speed?
Q: Considering the carelessness with which the government (state and federal) and commercial enterprises treat our confidential information, is it essentially a waste of effort for us as individuals to worry about securing our data?
A: Yes and no. More and more, your data isn’t under your direct control. Your e-mail is at Google, Hotmail, or your local ISP. Online merchants like Amazon and eBay have records of what you buy, and what you choose to look at but not buy. Your credit card company has a detailed record of where you shop, and your phone company has a detailed record of who you talk to (your cell phone company also knows where you are). Add medical databases, government databases, and so on, and there’s an awful lot of data about you out there. And data brokers like ChoicePoint and Acxiom collect all of this data and more, building up a surprisingly detailed picture of all Americans.
As you point out, one problem is that these commercial and government organizations don’t take good care of our data. It’s an economic problem: because these parties don’t feel the pain when they lose our data, they have no incentive to secure it. I wrote about this two years ago, stating that if we want to fix the problem, we must make these organizations liable for their data losses. Another problem is the law; our Fourth Amendment protections protect our data under our control—which means in our homes, in our cars, and on our computers. We don’t have nearly the same protection when we give our data to some other organization for use or safekeeping.
That being said, there’s a lot you can do to secure your own data. I give a list here.
Q: How do you remember all of your passwords?
A: I can’t. No one can; there are simply too many. But I have a few strategies. One, I choose the same password for all low-security applications. There are several Web sites where I pay for access, and I have the same password for all of them. Two, I write my passwords down. There’s this rampant myth that you shouldn’t write your passwords down. My advice is exactly the opposite. We already know how to secure small bits of paper. Write your passwords down on a small bit of paper, and put it with all of your other valuable small bits of paper: in your wallet. And three, I store my passwords in a program I designed called Password Safe. It’s a small application—Windows only, sorry—that encrypts and secures all your passwords.
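The write-it-down advice follows from a simple fact: a password strong enough to matter is too random to memorize. As a hedged illustration (this is not Password Safe’s code, just a sketch using Python’s standard `secrets` module), here is how a machine-generated password is made:

```python
import secrets
import string

# Full printable alphabet: letters, digits, and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=16):
    """Generate a cryptographically random password.

    The result is far too random to memorize -- which is exactly
    the point: write it down, or keep it in a password manager.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = generate_password()
print(pw)  # e.g. something like 'k4;Qz}w8#Vn@2rLp'
```

The `secrets` module (rather than `random`) matters here: it draws from the operating system’s cryptographic randomness source, so the output is suitable for security use.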
Q: What’s your opinion of the risks of some of the new (and upcoming) online storage services, such as Google’s GDrive or Microsoft’s Live Drive? Most home computer users don’t adequately safeguard or backup their storage, and these services would seem to offer a better-maintained means of storing files; but what do users risk by storing that much important information with organizations like Google or Microsoft?
A: Everything I wrote in my answer to the identity theft question applies here: when you give a third party your data, you have to both trust that they will protect it adequately, and hope that they won’t give it all to the government just because the government asks nicely. But you’re certainly right, data loss is the number one risk for most home users, and network-based storage is a great solution for that. As long as you encrypt your data, there’s no risk and only benefit.
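The “encrypt before you hand it over” pattern is worth making concrete. The sketch below is a deliberately toy illustration using a one-time pad built from the standard library (the key must be truly random, as long as the message, and never reused); a real deployment would use a vetted library such as the `cryptography` package, not hand-rolled code like this:

```python
import secrets

def encrypt(plaintext: bytes):
    # One-time pad: random key, same length as the message, used once.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    # Upload the ciphertext to the storage provider; keep the key yourself.
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR with the same key stream recovers the plaintext.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = encrypt(b"contents of tax-records.csv")
assert decrypt(ct, key) == b"contents of tax-records.csv"
```

The point of the exercise: the storage provider only ever sees `ct`, so neither a breach at the provider nor a polite government request exposes your data.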
Q: Do you think that in the future, everything will go from hard-wired to wireless? If so, with cell phones, radios, satellites, radar, etc. using all the airwaves (or spectrum), do you think there is a potential for, well, messing everything up? What about power outages and such?
A: Wireless is certainly the way of the future. From a security perspective, I don’t see any major additional risks. Sure, there’s a potential for messing everything up, but there was before. Same with power outages. Data transmitted wirelessly should probably be encrypted and authenticated; but it should have been over wires, too. The real risk is complexity. Complexity is the worst enemy of security; as systems become more complex, they get less secure. It’s not the addition of wireless per se; it’s the complexity that wireless—and everything else—adds.
Q: There has been some work to date on the cost-benefit economics of security. In your estimation, is this a sound approach to motivate better security, and do you think it is doomed to begin with since society disproportionately values other things before it values security? If so, do you think it’s time for us to take up digital pitchforks and shine some light on the economic gatekeepers’ personal lives?
A: Security is a trade-off, just like anything else. And it’s not true that we always disproportionately value other things before security. Look at our terrorism policies; when we’re scared, we value security disproportionately before all other things. Looking at security through the lens of economics (as I did here) is the only way to understand how these motivations work and what level of security is optimal for society. Not that I’m discouraging you from picking up your digital pitchforks. People have an incredibly complex relationship with security—read my essay on the psychology of security, and this one on why people are so bad at judging risks—and the more information they have, the better.
Q: Is there an equilibrium point in which the cost (either financial or time) of hacking a password becomes more expensive than the value of the data? If so what is it?
A: Of course, but there are too many variables to answer the question. The cost of password guessing is constantly going down, and the value of the data depends on the data. In general, though, we’ve long reached a point where the complexity of passwords an average person is willing to remember is less than the complexity of passwords necessary to be secure against a password-guessing attack. (This is for passwords that can be guessed offline only. Four-digit PINs are still okay if the bank disables your account after a few wrong guesses.) That’s why I recommend that people write their passwords down, as I said before.
Q: With over a billion people using computers today, what is the real threat to the average person?
A: It’s hard not to store sensitive information (like social security numbers) on your computer. Even if you don’t type it yourself, you might receive it in an e-mail or file. And then, even if you delete that file or e-mail, it might stay around on your hard drive. And lots of people like the convenience of Internet banking, and even more like to use their computers to help them do their jobs—which means company secrets will end up on those computers.
The most immediate threat to the average person is crime—in particular, fraud. And as I said before, even if you don’t store that data on your computer, someone else has it on theirs. But the long-term threat of loss of privacy is much greater, because it has the potential to change society for the worse.
Q: What is the future of electronic voting?
A: I’ve written a lot about this issue (see here and here as well). Basically, the problem is that the secret ballot means that most of the security tricks we use in things like electronic funds transfers don’t work in voting machines. The only workable solution against hacking the voting machines, or—more commonly—innocent programming errors, is something called a voter-verifiable paper trail. Vote on whatever touch-screen machine you want in whatever way you want. Then, that machine must spit out a printed piece of paper with your vote on it, which you have the option of reviewing for accuracy. The machine collects the votes electronically for a quick tally, and the paper is the actual vote in case of recounts. Nothing else is secure.
Q: Do you think Google will be able to eliminate the presence of phony malware sites on its search pages? And what can I do to ensure I’m not burned by the same?
A: Google is trying. The browsers are trying. Everyone is trying to alert users about phishing, pharming, and malware sites before they’re taken in. It’s hard; the criminals spend a lot of time trying to stay one step ahead of these identification systems by changing their URLs several times a day. It’s an arms race: we’re not winning right now, but things will get better.
As for how not to be taken in by them, that’s harder. These sites are an example of social engineering, and social engineering preys on the natural tendency of people to believe their own eyes. A good bullshit detector helps, but it’s hard to teach that. Specific phishing, pharming, and other tactics for trapping unsuspecting people will continue to evolve, and this will continue to be a problem for a long time.
Q: I recently had an experience on eBay in which a hacker copied and pasted an exact copy of my selling page with the intention of routing payments to himself. Afterwards, people informed me that such mischief is not uncommon. How can I ensure that it doesn’t happen again?
A: You can’t. The attack had nothing to do with you. Anyone with a browser can copy your HTML code—if they couldn’t, they couldn’t see your page—and repost it at another URL. Welcome to the Internet.
Q: All ethics aside, do you think you could make more money obtaining sensitive information about high net worth individuals and using blackmail/extortion to get money from them, instead of writing books, founding companies, etc.?
A: Basically, you’re asking if crime pays. Most of the time, it doesn’t, and the problem is the different risk characteristics. If I make a computer security mistake—in a book, for a consulting client, at BT—it’s a mistake. It might be expensive, but I learn from it and move on. As a criminal, a mistake likely means jail time—time I can’t spend earning my criminal living. For this reason, it’s hard to improve as a criminal. And this is why there are more criminal masterminds in the movies than in real life.
Q: Nearly every security model these days seems to boil down to the fact that there must be some entity in which you place your trust. I have to trust Google to keep my personal data and passwords secure every time I check my mail, even as they’re sharing it across their Google Reader, Google Maps, and Google Notebook applications. Even in physical security models, you usually have to trust someone (e.g., the security guard at the front desk, or the police). In your opinion, is there a business/economic reason for this, or do you see this paradigm eventually becoming a thing of the past?
A: There is no part of human social behavior that doesn’t involve trust of some sort. Short of living as a hermit in a cave, you’re always going to have to trust someone. And as more of our interactions move online, we’re going to have to trust people and organizations over networks. The notion of “trusted third parties” is central to security, and to life.
Q: What do you think about the government or a pseudo-governmental agency acting as a national or global repository for public keys? If this were done, would the government insist on a back-door?
A: There will never be a global repository for public keys, for the same reason there isn’t a single ID card in your wallet. We are more secure with distributed identification systems. Centralized systems are more valuable targets for criminals, and hence harder to secure. I also have other problems with public-key infrastructure in general.
And the government certainly might insist on a back door into those systems; they’re insisting on access to a lot of other systems.
Q: What do you think needs to be done to thwart all of the Internet-based attacks that happen? Why is it that no single company or government agency has yet come up with a solution?
A: That’s a tall order, and of course the answer to your question is that it can’t be done. Crime has been part of our society since our species invented society, and it’s not going away anytime soon. The real question is, “Why is there so much crime and hacking on the Internet, and why isn’t anyone doing anything about it?”
The answer is in the economics of Internet vulnerabilities and attacks: the organizations that are in the position to mitigate the risks aren’t responsible for the risks. This is an externality, and if you want to fix the problem you need to address it. In this essay (more here), I recommend liabilities; companies need to be liable for the effects of their software flaws. A related problem is that the Internet security market is a lemons market (discussed here), but there are strategies for dealing with that, too.
Q: You have repeatedly maintained that most of the investments that the government has made towards counter-terrorism are largely “security theater,” and that the real way to combat terrorism is to invest in intelligence. However, Tim Weiner’s book, Legacy of Ashes, says that the U.S. government is particularly inept at gathering and processing intelligence. Does that leave us with no hope at all?
A: I’m still a fan of intelligence and investigation (more here) and emergency response (more here). No, neither is perfect, but they’re way better than the “defend the target” or “defend against the tactic” thinking we have now. (I’ve written more about this here.) Basically, security that only forces the bad guy to make a minor change in his plot is largely a waste of money.
Q: I travel a lot and am continually frustrated with airport security. What can we, the little people, do to help ease these frustrations (besides taking a deep breath and strapping on our standard-issue orange jumpsuits, I mean)?
A: I share your frustration, and I have regularly written about airport security. But I got to do something you can’t do, and that’s take it out on the TSA director, Kip Hawley. I recommend this interview if you are interested in seeing him try to answer—and not answer—my questions about ID checks, the liquid ban, screeners that continually do badly in tests, the no-fly list, and the cover-your-ass security that continues to cost us time and money without making us appreciably safer.
As to what you can do: complain to your elected officials, and vote.
Q: What kinds of incentives can organizations put into place to 1) decrease the effectiveness of social engineering, and 2) persuade individuals to take an appropriate level of concern with respect to organizational security? Are you aware of any particularly creative solutions to these problems?
A: Social engineering will always be easy, because it attacks a fundamental aspect of human nature. As I said in my book, Beyond Fear, “social engineering will probably always work, because so many people are by nature helpful and so many corporate employees are naturally cheerful and accommodating. Attacks are rare, and most people asking for information or help are legitimate. By appealing to the victim’s natural tendencies, the attacker will usually be able to cozen what she wants.”
The trick is to build systems that the user cannot subvert, whether by malice, accident, or trickery. This will also help with the other problem you list: convincing individuals to take organizational security seriously. This is hard to do, even in the military, where the stakes are much higher.
Q: I am someone who knows little to nothing about computers. As such, what advice would you give to someone like me who wants to become educated on the topic?
A: There are probably zillions of books and classes on basic computer and Internet skills, and I wouldn’t even know where to begin to suggest one. Okay, that’s a lie. I do know where to begin. I would Google “basic computer skills” and see what comes up.
But I don’t think that people should need to become computer experts, and computer security experts, in order to successfully use a computer. I’ve written about home computer users and security here.
Q: How worried are you about terrorists or other criminals hacking into the computer systems of dams, power plants, air traffic control towers, etc.?
A: Not very. Of course there is a security risk here, but I think it’s overblown. And I definitely think the risk of cyberterrorism is overblown (for more on this, see here, as well as this essay on cyberwar).
Q: Can two-factor authentication really work on a Web site? Biometrics isn’t feasible because most people don’t have the hardware. One-time password tokens are a hassle, and they don’t really scale well. Image identification and PC fingerprinting technology that some banks are using is pretty easy to defeat with an evil proxy (i.e., any phishing Web site).
A: Two-factor authentication works fine on some Web sites. My employer, BT, uses two-factor access for the corporate network, and it works great. Where two-factor authentication won’t work is in reducing fraud in electronic banking, electronic brokerage accounts, and so on. That’s because the problem isn’t an authentication problem. The reasoning is subtle, and I’ve written about it here and here. What I predicted will occur from two-factor authentication—and what we’re seeing now—is that fraud will initially decrease as criminals shift their attacks to organizations that have not yet deployed the technology, but will return to normal levels as the technology becomes ubiquitous and criminals modify their tactics to take it into account.
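The token side of two-factor authentication is simple enough to sketch. Below is a minimal HOTP implementation following RFC 4226 (the algorithm behind many hardware tokens), using only the Python standard library; a real deployment would add secret provisioning, counter resynchronization, and rate limiting:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time password per RFC 4226: HMAC-SHA-1 plus dynamic truncation."""
    # HMAC the 8-byte big-endian counter with the shared secret.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low nibble of the last byte picks a 4-byte window.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# First RFC 4226 test vector: secret "12345678901234567890", counter 0.
print(hotp(b"12345678901234567890", 0))  # prints 755224
```

Both the token and the server compute the same code from the shared secret and a counter (or, in TOTP, the current time), so the code proves possession of the token without ever sending the secret over the wire.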
Q: How much fun/mischief could you have if you were “evil” for a day?
A: It used to be a common late-night bar conversation at computer security conferences: how would you take down the Internet, steal a zillion dollars, neutralize the IT infrastructure of this company or that country, etc. And, unsurprisingly, computer security experts have all sorts of ideas along these lines.
This is true in many aspects of our society. Here’s what I said in my book, Secrets and Lies (page 389): “As technology becomes more complicated, society’s experts become more specialized. And in almost every area, those with the expertise to build society’s infrastructure also have the expertise to destroy it. Ask any doctor how to poison someone untraceably, and he can tell you. Ask someone who works in aircraft maintenance how to drop a 747 out of the sky without getting caught, and he’ll know. Now ask any Internet security professional how to take down the Internet, permanently. I’ve heard about half a dozen different ways, and I know I haven’t exhausted the possibilities.”
What we hope is that as people learn the skills, they also learn the ethics about when and when not to use them. When that doesn’t happen, you get Mohamed Attas and Timothy McVeighs.
Q: In that vein, what is the most devilish idea you have thought about?
A: No comment.
Q: What’s your view on the difference between anonymity and privacy, and which one do you think is more important for society? I’m thinking primarily of security-camera paranoia (as if nosy neighbors hadn’t been in existence for thousands of years).
A: There’s a huge difference between nosy neighbors and cameras. Cameras are everywhere. Cameras are always on. Cameras have perfect memory. It’s not the surveillance we’ve been used to; it’s wholesale surveillance. I wrote about this here, and said this: “Wholesale surveillance is a whole new world. It’s not ‘follow that car,’ it’s ‘follow every car.’ The National Security Agency can eavesdrop on every phone call, looking for patterns of communication or keywords that might indicate a conversation between terrorists. Many airports collect the license plates of every car in their parking lots, and can use that database to locate suspicious or abandoned cars. Several cities have stationary or car-mounted license-plate scanners that keep records of every car that passes, and save that data for later analysis.
“More and more, we leave a trail of electronic footprints as we go through our daily lives. We used to walk into a bookstore, browse, and buy a book with cash. Now we visit Amazon, and all of our browsing and purchases are recorded. We used to throw a quarter in a toll booth; now EZ Pass records the date and time our car passed through the booth. Data about us are collected when we make a phone call, send an e-mail message, make a purchase with our credit card, or visit a Web site.”
What’s happening is that we are all effectively under constant surveillance. No one is looking at the data most of the time, but we can all be watched in the past, present, and future. And while mining this data is mostly useless for finding terrorists (I wrote about that here), it’s very useful in controlling a population.
Cameras are just one piece of this, but they’re an important piece. And what’s at stake is a massive loss of personal privacy, which I believe has significant societal ramifications.
Q: Do you think it will ever be feasible to vote for public officials via the Internet? Why or why not?
A: Internet voting has the same problems as electronic voting machines, only more so. That being said, we are moving towards vote-by-mail and (for the military) vote-by-fax. Just because something is a bad security idea doesn’t mean it won’t happen.
Q: Hacker movies have become quite popular recently. Do any of them have any basis in reality, or are the hacking techniques fabricated by Hollywood?
A: I’ve written a lot about what I call “movie-plot threats”: the tendency of all of us to fixate on an elaborate and specific threat rather than the broad spectrum of possible threats. We see this all the time in our response to terrorism: terrorists with scuba gear, terrorists with crop dusters, terrorists with exploding baby carriages. It’s silly, really, but it’s human nature.
As to the movies: they all have some basis in reality, but it’s pretty slim—just like all the other times science or technology is portrayed in movies. Live Free or Die Hard is pure fiction.
Q: What would you consider to be the top five security vulnerabilities commonly overlooked by programmers? What book would you recommend that explains how to avoid these pitfalls?
A: It’s hard to make lists of “top” vulnerabilities, because they change all the time. The SANS list is as good as any. Recommended books include Ross Anderson’s Security Engineering, Gary McGraw’s Software Security, and my own—coauthored with Niels Ferguson—Practical Cryptography. A couple of years ago, I wrote a reading list for The Wall Street Journal, here.
Q: Can security companies really supply secure software for a stupid user? Or do we just have to accept events such as those government computer disks going missing in the UK which contained the personal information of 25 million people (and supposedly had an underworld value of $3 billion)?
A: I’ve written about that UK data loss fiasco, which seems to be turning into a privacy Chernobyl for that country, here. Sadly, the appropriate security measure—encrypting the files—is easy. Which brings us to your question: how do we deal with stupid users? I stand by what I said earlier: users will always be a problem, and the only real solution is to limit the damage they can do. (Anyone who says that the solution is to educate the users hasn’t ever met an actual user.)
Q: So seriously, do you shop on Amazon, or anywhere else online for that matter?
A: Of course. I shop online all the time; it’s far easier than going to a store, or even calling a mail-order phone number, if I know exactly what I want.
What you’re really asking me is about the security. No one steals credit card numbers one-by-one, by eavesdropping on the Internet connection. They’re all stolen in blocks of a million by hacking the back-end database. It doesn’t matter if you bought something over the Internet, by phone, by mail, or in person—you’re equally vulnerable.
Q: Wouldn’t the world be simpler if we went back to “magic ink”? How awesome was that stuff!
A: If you like invisible ink, I recommend you go buy a UV pen. Great fun all around.
Q: Should I visit Minneapolis anytime soon, what is one restaurant that I would be wrong to pass up?
A: 112 Eatery. (Sorry, my review of it isn’t online.)
Q: What was the one defining moment in your life that you knew you wanted to dedicate your life to computer security and cryptography?
A: I don’t know. Security is primarily a way of looking at the world, and I’ve always looked at the world that way. As a child, I always noticed security systems—in retail stores, in banks, in office buildings—and how to defeat them. I remember accompanying my mother to the voting booth, and noticing ways to break the security. So it’s less of a defining moment and more of a slow process.
Q: What’s the worst security you’ve seen for a major financial firm? I use ING and their site forces you to use just a four-digit PIN.
A: There’s a lot of stupid security out there, and I honestly don’t collect anecdotes anymore. I even have a name for security measures that give the appearance of security without the reality: security theater. Recently I wrote about security theater, and how the psychological benefit is actually important.
Q: I read that AES and Twofish have protection against timing analysis. How does that work?
A: What an esoteric question for such a general forum. There is actually a timing attack against AES; a link to the specific attack, and a description of timing attacks in general, is here. This is a more general description of the attacks and defenses.
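For readers wondering what a timing attack even is: it exploits the fact that a computation’s running time can depend on secret data. The real attacks on AES are cache-timing attacks on its table lookups, but the principle is easiest to see in string comparison, as in this hedged sketch of the leaky pattern and the standard-library fix:

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # Returns at the FIRST mismatching byte, so the running time
    # reveals how long the matching prefix is -- a timing side channel
    # an attacker can use to guess a secret one byte at a time.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatches,
    # so its timing leaks nothing about where the inputs differ.
    return hmac.compare_digest(a, b)

assert leaky_equal(b"secret", b"secret")
assert constant_time_equal(b"secret", b"secret")
assert not constant_time_equal(b"secret", b"sesame")
```

Defenses against timing analysis in ciphers follow the same idea: make every operation take the same time regardless of the key and data being processed.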
Q: How does it feel to be an Internet meme?
A: Surreal. It’s surreal to be mentioned in The Da Vinci Code, to appear before the House of Lords, or to answer questions for the Freakonomics blog.
The hardest part is the responsibility. People take my words seriously, which means that I can’t utter them lightly. If I say that I use a certain product—PGP Disk, for example—people buy the product and the company is happy. If, on the other hand, I call a bunch of products “snake oil,” people don’t buy the products and the companies occasionally sue me.
Q: Is it true that there is a giant database of every site we have ever visited, and that with the right warrant a government agency could know exactly where we’ve been? What are our real footprints on the Web, and would it be possible for, say, an employer to someday find out every site you visited in college? Is there a way to hide your presence on sites that you believe to be harmless that others may hold against you?
A: There really isn’t any good way to hide your Web self. There are anonymization tools you can use—Tor for anonymous web browsing, for example—but they have their own risks. What I said earlier applies here, too; it’s impossible to function in modern society without leaving electronic footprints on the Web or in real life.
Q: Is there any benefit to password protecting your home WiFi network? I have IT friends who say the only real benefit is preventing other users from slowing down your connection, and that there is no security reason to do so. Is this correct?
A: I run an open wireless network at home. There’s no password, and there’s no encryption. Honestly, I think it’s just polite. Why should I care if someone on the block steals wireless access from me? When my wireless router broke last month, I used a neighbor’s access until I replaced it.
Q: Why do large government agencies and companies continue to put their faith in computer passwords, when we know that the human mind cannot memorize multiple strong passwords? Why is so much more effort put into password security than human security?
A: Because it’s easier. Never underestimate the power of doing the easy stuff and ignoring the hard stuff.
A: For those of you who don’t want to follow the links, they’re about the German terrorist plot that was foiled in September, and about how great a part electronic eavesdropping played in the investigation. As I wrote earlier, as well as in the links attached to that answer, I don’t think that wholesale eavesdropping is effective, and I questioned then whether its use had anything to do with those arrests. I still don’t have an answer one way or another, and made no definitive claims in either of the two above links. If anyone does have any information on the matter, I would appreciate hearing it.
Again, thank you all. That was fun. I hope I didn’t give you too many links to read.