How to Keep the Internet of Things From Killing Us All

The world is wired. Thanks to the Internet of Things (IoT), pretty much every electronic device we own can now talk to our other devices. While it might seem fun to be able to adjust settings on your refrigerator from your cell phone or track brush strokes from your e-toothbrush app, the IoT comes with a brand new set of vulnerabilities as well. Last spring, a computer security company revealed that hackers had stolen a casino’s entire database of high rollers by exploiting vulnerabilities in an Internet-connected aquarium. What happens when cheap IoT devices can drive your car off a cliff or give you poisons instead of medicine?

In Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World, technologist and best-selling author Bruce Schneier argues that the emerging ability of connected devices to move and influence the physical world has created new threats for which we are woefully ill-prepared.

Schneier’s training is in cryptography, but his rarer skill lies in revealing hidden connections among the social, political, and technological choices that have shaped the Information Age. (He’s also a family friend.) Schneier spoke to Pacific Standard about his new book and how we can prepare ourselves for the coming threats.

Your book has the scary title Click Here to Kill Everybody. Are we, in fact, all going to die because of these computers?

Probably not. The title is deliberately provocative.

So what is the worst-case scenario?

I actually hate doing worst-case scenarios. I call them movie-plot threats. [They] lead to worst-case thinking instead of figuring out what the proper trade-offs are. That sort of thinking leads to overreactions. So while I picked a provocative title, the book shies away from it almost from the beginning.

Still, the failures and risks we’re used to in computers have become physical. We’re used to computers not working, major sites on the Internet disappearing for a few hours, [and to] clicking and having pretty horrific things happen, but it’s always been about data. What changed is it’s become about people.

Are you referring to the so-called Internet of Things?

The Internet of Things is not about screens, not about computers and phones. It’s about objects that have computing power. These objects do things. They have actuators. They are vacuum cleaners, cars, medical devices. They’re supposed to do stuff in our world.

I have a thermostat. It’s a computer I control from my phone. It turns my furnace on and off. And so, unlike a spreadsheet, where if it goes bad I lose some data, if my thermostat goes bad in Minneapolis in the winter, I lose my pipes. I think of these as the hands and feet of the Internet. The Internet is now doing stuff. It’s affecting our world in a direct physical manner, and that just changes the way we have to think about computers and security.

Is there no rolling this back? Is the computerizing of everything here to stay?

Any solution that involves not doing the cool tech thing is not one that’s gonna happen any time soon, [but] I actually think that we’re going to reach a high-water mark of connectivity. I think about nuclear power. In the 1970s, nuclear power was the future. It was all gonna be nuclear, then we had accidents like Three Mile Island and others. Nuclear power didn’t disappear, but it became one of many aspects of an energy policy. My guess is that connectivity is gonna go that same way. We’re still rushing into a big "connect-it-all" mentality, but at some point we’re gonna start making more conscious decisions about what to connect.

For example, do you mean we’ll all realize that we don’t need our refrigerators online?

You might not need it.

Right now you have it ’cause it’s cheaper. I think this is something people don’t understand. Refrigerators have had computers for decades, but they were specialized: circuitry designed for the refrigerator, special hardware, special software that would run on the refrigerator in a dedicated embedded system. That’s no longer cost-effective. Today, you pull a generic CPU off the shelf and you write generic software. That CPU comes with video software, an IP stack, support for a microphone, all of these things. The engineers build it, [so] they might as well hook it up. It’s already there. The cost of Internet connectivity is so low, the marginal benefit of a control [of your fridge on your phone] seems worth it. Twenty years ago, there would have been a considerable effort to connect. Right now, it comes "free" with whatever computer they’ve thrown in.

That [low cost is] driving a lot of it. It’s not that people want to computerize their electronic toothbrush; they can’t help it.

Is the problem that corporations want to sell the data generated from devices like an e-toothbrush?

In computer security, we have something called the CIA triad: Confidentiality, Integrity, and Availability. Most of what we worry about with data is confidentiality. That’s the Equifax hack, or the Office of Personnel Management hack, or Cambridge Analytica. Someone has my data and they’re misusing it in some way.

[Click Here to Kill Everybody] is primarily about integrity and availability, which matter much more when you have physically capable computers. Yes, I’m worried that someone will hack the hospital and see my private medical records, but I’m much more concerned if they change my blood type. That’s an integrity attack. I’m afraid that someone will hack my car and turn on the microphone, but I’m much more scared that they’ll disable the brakes. That’s an availability attack.

And in the hospital they’ll eventually have, if they don’t already, Internet-connected IVs where a hacker could turn up the morphine?

That’s right. When computers can affect the world in a direct physical manner, the integrity and availability threats are much worse than the confidentiality threats because they affect life and property. The obvious examples are always cars and the power grid, but there are many others.

What are the solutions? Boycotts? Lawsuits? Regulations? New technical standards?

We’re never gonna stop the influx of cheap insecure Internet of Things devices. We have to assume that they’re there, and build security on top of that.

There’s no single tech solution. All security is a patchwork of different things, but it’s also designing systems with failure in mind. In 2003, there was a major blackout in the United States that covered the Northeast U.S. and the Southeast of Canada. After that, we redesigned the power grid to be more resilient, so little failures didn’t cascade into big failures. It’s that kind of thinking—we’re not gonna make the system safe, but we can stem the catastrophes. We can make them fail securely.

So how do we get the tech industry to build such systems before the disasters?

In the book, I make a strong case for government involvement. There isn’t an area of security or safety that has improved without strong government involvement: cars, airplanes, pharmaceuticals, medical devices, workplace safety, food safety, restaurant hygiene. There isn’t one [where] on its own, the market improved the safety or security of the products and services. It always takes government stepping in to say, "You must do this; you can’t do that; if you do these things, we’re allowed to sue."

Will the tech world accept government intervention?

I think it’s inevitable. Governments regulate dangerous things. They always do. And once you realize that it will happen, the conversation shifts from "yes" or "no" to "how." I’d rather we have these discussions about what makes sense to do before there is a disaster and a crisis. If we in the tech space can try to figure out what a sensible policy is, we’ll have something to propose when policymakers say, "something must be done!"

So you’re hoping that the leaders of the tech world take charge here? What would you like to happen next? What’s the first step?

We need two things: for policymakers to start understanding tech, and technologists to understand policy. It’s the separation of the two worlds that’s causing this disconnect. Understanding both is essential to understanding either. My goal is to bridge these worlds.

Technologists can’t ignore policy. But also the policymakers need to really understand tech. It’s no longer fashionable to be a Luddite. A few years ago, if you said, "I don’t use email," everyone would laugh. Now, if you don’t understand the tech, you’re dangerous.
