Hacking Consumer Devices
Last weekend, a Texas couple apparently discovered that the electronic baby monitor in their children’s bedroom had been hacked. According to a local TV station, the couple said they heard an unfamiliar voice coming from the room, went to investigate and found that someone had taken control of the camera monitor remotely and was shouting profanity-laden abuse. The child’s father unplugged the monitor.
What does this mean for the rest of us? How secure are consumer electronic systems, now that they’re all attached to the Internet?
The answer is: not very, and it’s been this bad for many years. Security vulnerabilities have been found in all types of webcams, cameras of all sorts, implanted medical devices, cars, and even smart toilets—not to mention yachts, ATMs, industrial control systems and military drones.
All of these things have long been hackable. Those of us who work in security are often amazed that most people don’t know about it.
Why are they hackable? Because security is very hard to get right. It takes expertise, and it takes time. Most companies don’t care because most customers buying security systems and smart appliances don’t know enough to care. Why should a baby monitor manufacturer spend all sorts of money making sure its security is good when the average customer won’t even notice?
Even worse, that consumer will look at two competing baby monitors—a more expensive one with better security, and a cheaper one with minimal security—and buy the cheaper one. Without the expertise to make an informed buying decision, cheaper wins.
A lot of hacks happen because the users don’t configure or install their devices properly, but that’s really the fault of the manufacturer. These are supposed to be consumer devices, not specialized equipment for security experts only.
This sort of thing is true in other aspects of society, and we have a variety of mechanisms to deal with it. Government regulation is one of them. For example, few of us can differentiate real pharmaceuticals from snake oil, so the FDA regulates what can be sold and what sorts of claims vendors can make. Independent product testing is another. You and I might not be able to tell a well-made car from a poorly-made one at a glance, but we can both read the reports from a variety of testing agencies.
Computer security has resisted these mechanisms, both because the industry changes so quickly and because this sort of testing is hard and expensive. But the effect is that we’re all being sold a lot of insecure consumer products with embedded computers. And as these computers get connected to the Internet, the problems will get worse.
The moral here isn’t that your baby monitor could be hacked. The moral is that pretty much every “smart” everything can be hacked, and because consumers don’t care, the market won’t fix the problem.
This essay previously appeared on CNN.com. I wrote it in about half an hour, on request, and I’m not really happy with it. I should have talked more about the economics of good security, as well as the economics of hacking. The point is that we don’t have to worry about the hackers smart enough to figure out these vulnerabilities so much as about the dumb hackers who just use software tools written and distributed by the smart ones. Ah well, next time.
name.withheld.for.obvious.reasons • August 23, 2013 6:28 AM
Thanks Bruce, a most timely set of comments. With the rise of the “Internet of Things” and the widespread use of network-enabled devices, the problem of reliability and usability continues to be an issue. As Barnaby Jack was set to demonstrate, these devices can kill you. But it also looks like hacking them can kill you as well.
I lament the fact that the word hacker has been hijacked. I remember the days of going to Olson’s/Heath/Shack Electronics where hams (sometimes called hobbyists) met to share ideas, mod radios, and test out various changes or use different classes of amps on analog transceivers. Then, in the early ’70s everyone started looking at digital devices and computing (well before workstations or desktop computers). Not long after that the term hacker surfaced (I don’t remember the taxonomy) and basically referred to someone who resembled a ham radio hobbyist. Today the term hacker is associated with criminal activity and is no longer safe to use in mixed company. The term cracker, which I believe refers to a “criminal” hacker, would be a more appropriate term to replace the much-maligned hacker label.
Also upsetting is the specter (aka James Bond) associated with loading my own OS on a Kindle: for an act of doing nothing more than replacing the binary data on a device I owned, I could face jail time and be held criminally culpable. I didn’t even query the JTAG interface–my reaction to this issue sucks. But when I have to answer questions in a regulatory environment about “my” ethics, I need to be clear that perverting systems (which is not the intent of the act) is not something I engage in–unless under contract while holding the umbrella of a legal release of liability.
And, having been a hacker (not a cracker) in my distant past (never malicious, illegal, or nefarious), I can say that I long for the days when reason and ignorance could not be considered synonymous.