Bruce Schneier: We Are Asking the Wrong Cybersecurity Questions
Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of over one dozen books — including his latest, “We Have Root” — as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press.
Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a lecturer in public policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of the Electronic Privacy Information Center and VerifiedVoting.org. In addition, he is the chief of security architecture at Inrupt.
You’ve worked in both the public and private sectors. In general, which is more security-conscious and why? What should each sector do to enhance its general cybersecurity profile?
Schneier: I don’t see a general difference between public and private. Or, more exactly, I don’t see any real pressure from the market, profitability, or anything else on private organizations to maintain more secure networks. Both are run by people, and people fall prey to the same sloppy security thinking. As to what they should do to enhance security, there’s nothing I can say that’s pithy enough for a short Q&A. This stuff is actually hard — take it seriously.
In your book “Beyond Fear,” you outlined a five-point plan covering everything from candy bars to nuclear plants. Can you give us a tl;dr (too long; didn’t read)?
Schneier: I wish it were a “five-point plan” to secure all of those things, but it’s just a five-step process to determine if any particular security measure is worth it. A lot of it is common sense. You wouldn’t use the same door lock to protect diamonds as you would to protect donuts, and you wouldn’t burn down every house in a city, even though that’s a pretty foolproof method of preventing burglary. But it’s designed to give people an analytical framework to decide whether a security technology, or policy, or law is worth it.
But, honestly, people should read the book.
You emphasize the value of efficient law enforcement as a primary tool against cybercrime, citing nations like Russia and Romania as examples of places with subpar enforcement. In 2021, are these still the primary locales for cybercriminals?
Schneier: We simply don’t have the data to make those kinds of assessments. We know where some cybercriminals live and that countries like Russia are a haven for them. In general, cybercriminals tend to operate in countries with poor computer-crime laws, ineffective police forces, and no extradition treaties. But that’s a generalization; there are cybercriminals everywhere.
Technology changes rapidly, but human nature does not. What can modern-day CSOs/CISOs learn from earlier security agencies like the Pinkertons [the Pinkerton National Detective Agency, established in the U.S. by Scotsman Allan Pinkerton in the 1850s]?
Schneier: The Pinkerton model had a lot of problems, but it was basically a private response to a public failing. Law enforcement wasn’t working, so the Pinkertons stepped in and provided private security for things like trains. By publicizing their defensive successes, they managed to convince train robbers that it just wasn’t worth it to rob a train guarded by the Pinkertons. The robbers would just do better plying their trade elsewhere. If there’s a lesson for today, it’s that effective security can convince cybercriminals to attack other — less well-defended — targets.
It’s 2021, and phishing is still a thing. So what can we do to get ordinary Netizens to stop clicking on embedded links and scrutinize URLs?
Schneier: Nothing, and the thinking behind the question is entirely backward. How about this: “It’s 2021; why is the Internet designed so poorly that clicking on a URL can be harmful?” Look, we live in a technological world. We don’t expect everyone to be experts in pharmacology, aircraft design and safety, food and restaurant hygiene, building evacuation requirements, and everything else. We have government agencies that make sure the drugs we take are not harmful, the planes we fly in are safe, the food we eat won’t make us sick, and we won’t be trapped in a building in case of a fire. We shouldn’t blame users for clicking on URLs: that’s what they’re for. It’s the job of the system designers to ensure that clicking on URLs isn’t dangerous. And it’s the job of the government to force system designers to do that.
Penetration testing is an essential tool for real-world cybersecurity. Can you tell us what sort of pen test you would recommend for an SME and an enterprise? Would it include physical pen-testing?
Schneier: I wouldn’t recommend anything specific, and I would caution against any general recommendations. Threats are specific. Security needs are specific. What works for one enterprise isn’t suitable for another, and so on. And — sorry — my overall recommendation is not to accept free security advice from random magazine columns. This stuff is actually hard.
We continue to see egregious security lapses by the general public: weak passwords, reused passwords, admin default passwords never changed, XP still being used, etc. Do you ever throw up your hands and say: “I give up”?
Schneier: Of course not. No one ever said this would be easy. And, yes, many of the old problems never seem to go away. But there are always new problems. We now have to worry about adversarial machine learning, for example. But as bad as computer and Internet security is, these tools have transformed life for the better in so many different ways. We will continue to muddle through, as we have in the past.
You’ve written about AI hackers. Is this something else we all need to worry about?
Schneier: Yes. We normally think of hacking as something done to computer systems, but I have been thinking about it more generally. For example, the tax code isn’t computer code, but it is a series of algorithms. It has vulnerabilities, called tax loopholes. It has exploits, called tax avoidance strategies. And it has black hat hackers, called tax accountants and tax attorneys, whose job is to find and exploit those loopholes. This idea extends to financial systems, legal systems, our systems of democracy, and so on.
If people can find hacks, so can AIs. They’ll find them more efficiently than people do, and in different ways. They might not even realize that they’re hacks, and we might not even realize that we’re being hacked. It’s a fascinating rabbit hole to go down, and I do it in this essay.