Hackers Used a Fish Tank to Break into a Vegas Casino. We’re All in Trouble.
Bruce Schneier’s new book, Click Here to Kill Everybody, explains the security risks of a new world of household devices connected to the Internet. I asked him what the risks are, why they are so serious, and what their consequences are for politics.
HF: Technology has created a hyper-connected world. How does this lead to vulnerabilities?
BS: As we connect more things to the Internet, they can affect each other. This is generally a good thing, but it leads to vulnerabilities in unexpected ways. First, vulnerabilities in one thing can affect another thing. We saw this last year when a major Vegas casino’s high-roller database was hacked through — and I am not making this up — its Internet-connected fish tank.
The second way hyper-connection leads to vulnerabilities is that individual things, when combined, can generate new vulnerabilities. That is, it is their interaction that creates the vulnerabilities, without any individual system being at fault.
The third way is that vulnerabilities can cascade catastrophically. We also saw this in 2016 when vulnerabilities in Internet-connected webcams and digital video recorders enabled attackers to build a massive cyberweapon that, through a series of steps, took dozens of popular websites offline.
HF: How are those vulnerabilities changing as more and more of our everyday devices become connected to the Internet?
BS: What’s new with everyday devices like appliances, cars, medical devices, thermostats, consumer goods, toys and so on is that they do things. They affect the world in a direct physical manner. We used to only be concerned about bits and bytes. Now the risks are against life and property.
This fundamentally changes our threat model and obsoletes a lot of the security assumptions we have been making for decades: assumptions about how authentication works, about software reliability and patching, and about the wisdom of an unregulated technology space.
HF: You argue that “everyone wants you to have security, except from them.” Why is this so?
BS: We’ve built an Internet where the predominant business model is surveillance capitalism. So companies like Google and Facebook want your data to be secure from hackers and governments, as long as they get to spy on everything you’re doing — because that’s how they make money. Similarly, governments are all for security, as long as they get to access your data when they want it. As long as there’s this alliance between the big Internet companies and governments to ensure we can all be spied on, we won’t get real security.
HF: Why do businesses not have the appropriate incentives to fix the problems they are creating?
BS: There’s the spying I just mentioned, but it also goes deeper than that. Security isn’t something the public can evaluate. Consumers can’t tell which router or refrigerator is secure, even if they were willing to pay more for that security, so it’s not something businesses can use to differentiate themselves. Even worse, the risks are long-term and theoretical, which makes businesses willing to skimp on security and hope for the best.
We’ve seen this before. In the past century, it has been a rare exception for an industry to improve its security and safety without being forced to by government: automobiles, airplanes, pharmaceuticals, food and restaurants, consumer goods all required regulation.
HF: What should government do, and how does it need to change in order to do it?
BS: This is a complicated question and one that I spend most of my book trying to answer. I recommend a cocktail of different government interventions. I propose both explicit security rules and more flexible security standards. I propose liabilities when companies are negligent. I propose new laws, new legal interpretations of existing laws, and new actions by federal agencies. I see a role for international regulatory bodies and treaties, because many of the risks are fundamentally global.
The hard part is recognizing that the risks are great enough to require immediate action. My fear is that it will take a catastrophe — crashing *all* the cars, or shutting down *all* the power plants — to galvanize governments into action, and that they’ll react with something hastily put together and ill-considered. Our choice is not between government regulation and no government regulation; it’s between smart government regulation and stupid government regulation.
This article is one in a series supported by the MacArthur Foundation Research Network on Opening Governance that seeks to work collaboratively to increase our understanding of how to design more effective and legitimate democratic institutions using new technologies and new methods. Neither the MacArthur Foundation nor the network is responsible for the article’s specific content. Other posts can be found here.