Entries Tagged "Internet of Things"

The 2016 National Threat Assessment

It’s National Threat Assessment Day. Published annually by the Director of National Intelligence, the “Worldwide Threat Assessment of the US Intelligence Community” is the US intelligence community’s one chance each year to publicly talk about threats in general. The document is the result of weeks of work and input from lots of people. For Clapper, it’s his opportunity to shape the dialog, set priorities, and prepare Congress for budget requests. The document is an unclassified summary of a much longer classified document. And the day also includes Clapper testifying before the Senate Armed Services Committee. (You’ll remember his now-famous lie to that committee in 2013.)

The document covers a wide variety of threats, from terrorism to organized crime, from energy politics to climate change. Although the document clearly says “The order of the topics presented in this statement does not necessarily indicate the relative importance or magnitude of the threat in the view of the Intelligence Community,” it does. And, as in 2015 and 2014, cyber threats are #1—although this year the category is called “Cyber and Technology.”

The consequences of innovation and increased reliance on information technology in the next few years on both our society’s way of life in general and how we in the Intelligence Community specifically perform our mission will probably be far greater in scope and impact than ever. Devices, designed and fielded with minimal security requirements and testing, and an ever-increasing complexity of networks could lead to widespread vulnerabilities in civilian infrastructures and US Government systems. These developments will pose challenges to our cyber defenses and operational tradecraft but also create new opportunities for our own intelligence collectors.

Especially note that last clause. The FBI might hate encryption, but the intelligence community is not going dark.

The document then calls out a few specifics like the Internet of Things and Artificial Intelligence—no surprise, considering other recent statements from government officials. This is the “…and Technology” part of the category.

More specifically:

Future cyber operations will almost certainly include an increased emphasis on changing or manipulating data to compromise its integrity (i.e., accuracy and reliability) to affect decisionmaking, reduce trust in systems, or cause adverse physical effects. Broader adoption of IoT devices and AI—in settings such as public utilities and health care—will only exacerbate these potential effects. Russian cyber actors, who post disinformation on commercial websites, might seek to alter online media as a means to influence public discourse and create confusion. Chinese military doctrine outlines the use of cyber deception operations to conceal intentions, modify stored data, transmit false data, manipulate the flow of information, or influence public sentiments—all to induce errors and miscalculation in decisionmaking.

Russia is the number one threat, followed by China, Iran, North Korea, and non-state actors:

Russia is assuming a more assertive cyber posture based on its willingness to target critical infrastructure systems and conduct espionage operations even when detected and under increased public scrutiny. Russian cyber operations are likely to target US interests to support several strategic objectives: intelligence gathering to support Russian decisionmaking in the Ukraine and Syrian crises, influence operations to support military and political objectives, and continuing preparation of the cyber environment for future contingencies.

Comments on China refer to the cybersecurity agreement from last September:

China continues to have success in cyber espionage against the US Government, our allies, and US companies. Beijing also selectively uses cyberattacks against targets it believes threaten Chinese domestic stability or regime legitimacy. We will monitor compliance with China’s September 2015 commitment to refrain from conducting or knowingly supporting cyber-enabled theft of intellectual property with the intent of providing competitive advantage to companies or commercial sectors. Private-sector security experts have identified limited ongoing cyber activity from China but have not verified state sponsorship or the use of exfiltrated data for commercial gain.

Also interesting are the comments on non-state actors, which discuss propaganda campaigns from ISIL, criminal ransomware, and hacker tools.

Posted on February 9, 2016 at 3:25 PM

The Internet of Things Will Be the World's Biggest Robot

The Internet of Things is the name given to the computerization of everything in our lives. Already you can buy Internet-enabled thermostats, light bulbs, refrigerators, and cars. Soon everything will be on the Internet: the things we own, the things we interact with in public, autonomous things that interact with each other.

These “things” will have two separate parts. One part will be sensors that collect data about us and our environment. Already our smartphones know our location and, with their onboard accelerometers, track our movements. Things like our thermostats and light bulbs will know who is in the room. Internet-enabled street and highway sensors will know how many people are out and about—and eventually who they are. Sensors will collect environmental data from all over the world.

The other part will be actuators. They’ll affect our environment. Our smart thermostats aren’t collecting information about ambient temperature and who’s in the room for nothing; they set the temperature accordingly. Phones already know our location, and send that information back to Google Maps and Waze to determine where traffic congestion is; when they’re linked to driverless cars, they’ll automatically route us around that congestion. Amazon already wants autonomous drones to deliver packages. The Internet of Things will increasingly perform actions for us and in our name.

Increasingly, human intervention will be unnecessary. The sensors will collect data. The system’s smarts will interpret the data and figure out what to do. And the actuators will do things in our world. You can think of the sensors as the eyes and ears of the Internet, the actuators as the hands and feet of the Internet, and the stuff in the middle as the brain. This makes the future clearer. The Internet now senses, thinks, and acts.
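To make that division concrete, here is a toy sketch of the sense-think-act loop in Python. Everything in it is a hypothetical stand-in (the sensor and actuator classes, the 21°C setpoint), not any real product’s API.

import time


class ThermostatSensor:
    """The 'eyes and ears': reports the ambient temperature."""

    def read_celsius(self) -> float:
        return 18.5  # a real device would sample hardware here


class HeaterActuator:
    """The 'hands and feet': switches the heat on or off."""

    def set_heating(self, on: bool) -> None:
        print("heat on" if on else "heat off")


def decide(temperature: float, setpoint: float = 21.0) -> bool:
    """The 'brain': interpret the sensor data and choose an action."""
    return temperature < setpoint


def control_loop(sensor: ThermostatSensor, heater: HeaterActuator) -> None:
    while True:
        reading = sensor.read_celsius()      # sense
        heater.set_heating(decide(reading))  # think, then act
        time.sleep(60)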

We’re building a world-sized robot, and we don’t even realize it.

I’ve started calling this robot the World-Sized Web.

The World-Sized Web—can I call it WSW?—is more than just the Internet of Things. Much of the WSW’s brains will be in the cloud, on servers connected via cellular, Wi-Fi, or short-range data networks. It’s mobile, of course, because many of these things will move around with us, like our smartphones. And it’s persistent. You might be able to turn off small pieces of it here and there, but in the main the WSW will always be on, and always be there.

None of these technologies are new, but they’re all becoming more prevalent. I believe that we’re at the brink of a phase change around information and networks. The difference in degree will become a difference in kind. That’s the robot that is the WSW.

This robot will increasingly be autonomous, at first in simple ways and eventually through the capabilities of artificial intelligence. Drones with sensors will fly to places where the WSW needs to collect data. Vehicles with actuators will drive to places that the WSW needs to affect. Other parts of the robot will “decide” where to go, what data to collect, and what to do.

We’re already seeing this kind of thing in warfare; drones are surveilling the battlefield and firing weapons at targets. Humans are still in the loop, but how long will that last? And when both the data collection and resultant actions are more benign than a missile strike, autonomy will be an easier sell.

By and large, the WSW will be a benign robot. It will collect data and do things in our interests; that’s why we’re building it. But it will change our society in ways we can’t predict, some of them good and some of them bad. It will maximize profits for the people who control the components. It will enable totalitarian governments. It will empower criminals and hackers in new and different ways. It will cause power balances to shift and societies to change.

These changes are inherently unpredictable, because they’re based on the emergent properties of these new technologies interacting with each other, us, and the world. In general, it’s easy to predict technological changes due to scientific advances, but much harder to predict social changes due to those technological changes. For example, it was easy to predict that better engines would mean that cars could go faster. It was much harder to predict that the result would be a demographic shift into suburbs. Driverless cars and smart roads will again transform our cities in new ways, as will autonomous drones, cheap and ubiquitous environmental sensors, and a network that can anticipate our needs.

Maybe the WSW is more like an organism. It won’t have a single mind. Parts of it will be controlled by large corporations and governments. Small parts of it will be controlled by us. But writ large its behavior will be unpredictable, the result of millions of tiny goals and billions of interactions between parts of itself.

We need to start thinking seriously about our new world-spanning robot. The market will not sort this out all by itself. By nature, it is short-term and profit-motivated—and these issues require broader thinking. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission as a place where robotics expertise and advice can be centralized within the government. Japan and Korea are already moving in this direction.

Speaking as someone with a healthy skepticism for another government agency, I think we need to go further. We need to create a new agency, a Department of Technology Policy, that can deal with the WSW in all its complexities. It needs the power to aggregate expertise and advise other agencies, and probably the authority to regulate when appropriate. We can argue the details, but no existing government entity has either the expertise or the authority to tackle something this broad and far-reaching. And the question is not about whether government will start regulating these technologies, it’s about how smart it will be when it does.

The WSW is being built right now, without anyone noticing, and it’ll be here before we know it. Whatever changes it means for society, we don’t want it to take us by surprise.

This essay originally appeared on Forbes.com, which annoyingly blocks browsers using ad blockers.

EDITED TO ADD: Kevin Kelly has also thought along these lines, calling the robot “Holos.”

EDITED TO ADD: Commentary.

EDITED TO ADD: This essay has been translated into Hebrew.

Posted on February 4, 2016 at 6:18 AM

Integrity and Availability Threats

Cyberthreats are changing. We’re worried about hackers crashing airplanes by hacking into computer networks. We’re worried about hackers remotely disabling cars. We’re worried about manipulated counts from electronic voting machines, remote murder through hacked medical devices, and someone hacking an Internet thermostat to turn off the heat and freeze the pipes.

The traditional academic way of thinking about information security is as a triad: confidentiality, integrity, and availability. For years, the security industry has been trying to prevent data theft. Stolen data is used for identity theft and other frauds. It can be embarrassing, as in the Ashley Madison breach. It can be damaging, as in the Sony data theft. It can even be a national security threat, as in the case of the Office of Personnel Management data breach. These are all breaches of privacy and confidentiality.

As bad as these threats are, they seem abstract. It’s been hard to craft public policy around them. But this is all changing. Threats to integrity and availability are much more visceral and much more devastating. And they will spur legislative action in a way that privacy risks never have.

Take one example: driverless cars and smart roads.

We’re heading toward a world where driverless cars will automatically communicate with each other and the roads, automatically taking us where we need to go safely and efficiently. The confidentiality threats are real: Someone who can eavesdrop on those communications can learn where the cars are going and maybe who is inside them. But the integrity threats are much worse.

Someone who can feed the cars false information can potentially cause them to crash into each other or nearby walls. Someone could also disable your car so it can’t start. Or worse, disable the entire system so that no one’s car can start.
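The standard defense against this kind of false-data injection is message authentication: every message carries a cryptographic tag that receivers check before acting on it. Here is a minimal sketch using Python’s standard hmac module. The shared key and message format are made up for illustration; real vehicle-to-vehicle systems use per-vehicle certificates and digital signatures rather than a single shared secret.

import hashlib
import hmac

KEY = b"demo-shared-secret"  # illustrative only


def sign(message: bytes) -> bytes:
    return hmac.new(KEY, message, hashlib.sha256).digest()


def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(sign(message), tag)


msg = b"car42:lane=2;speed=88;braking=0"
tag = sign(msg)

assert verify(msg, tag)                     # authentic message accepted
assert not verify(b"car42:braking=1", tag)  # forged message rejected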

This new rise in integrity and availability threats is a result of the Internet of Things. The objects we own and interact with will all become computerized and connected to the Internet. But it’s actually more complicated than that.

What I’m calling the “World-Sized Web” is a combination of these Internet-enabled things, cloud computing, mobile computing, and the pervasiveness that comes from these systems being always on, all the time. Together this means that computers and networks will be much more embedded in our daily lives. Yes, there will be more need for confidentiality, but there is a newfound need to ensure that these systems can’t be subverted to do real damage.

It’s one thing if your smart door lock can be eavesdropped on to learn who is home. It’s another thing entirely if it can be hacked to prevent you from opening your door or to allow a burglar to open it.

In separate testimonies before different House and Senate committees last year, both the Director of National Intelligence James Clapper and NSA Director Mike Rogers warned of these threats. They both consider them far larger and more important than the confidentiality threat and believe that we are vulnerable to attack.

And once the attacks start doing real damage—once someone dies from a hacked car or medical device, or an entire city’s 911 services go down for a day—there will be a real outcry to do something.

Congress will be forced to act. They might authorize more surveillance. They might authorize more government involvement in private-sector cybersecurity. They might try to ban certain technologies or certain uses. The results won’t be well-thought-out, and they probably won’t mitigate the actual risks. If we’re lucky, they won’t cause even more problems.

I worry that we’re rushing headlong into the World-Sized Web, and not paying enough attention to the new threats that it brings with it. Again and again, we’ve tried to retrofit security in after the fact.

It would be nice if we could do it right from the beginning this time. That’s going to take foresight and planning. The Obama administration just proposed spending $4 billion to advance the engineering of driverless cars.

How about focusing some of that money on the integrity and availability threats from that and similar technologies?

This essay previously appeared on CNN.com.

Posted on January 29, 2016 at 7:29 AM

The Internet of Things that Talk About You Behind Your Back

French translation

SilverPush is an Indian startup that’s trying to figure out all the different computing devices you own. It embeds inaudible sounds into the webpages you read and the television commercials you watch. Software secretly embedded in your computers, tablets, and smartphones picks up the signals, and then uses cookies to transmit that information back to SilverPush. The result is that the company can track you across your different devices. It can correlate the television commercials you watch with the web searches you make. It can link the things you do on your tablet with the things you do on your work computer.
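The cross-device trick depends on audio just above the range most adults can hear. As a rough illustration of the signal processing involved (not SilverPush’s actual scheme; their frequencies and encoding are not public), this sketch checks a block of microphone samples for a strong spectral peak in an assumed 18-20 kHz beacon band:

import numpy as np

SAMPLE_RATE = 44100  # Hz, typical for consumer microphones


def has_ultrasonic_beacon(samples: np.ndarray, threshold: float = 10.0) -> bool:
    """True if the 18-20 kHz band has a peak far louder than ordinary audio."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)

    beacon_peak = spectrum[(freqs >= 18000) & (freqs <= 20000)].max()
    audible_level = spectrum[(freqs >= 300) & (freqs <= 3000)].mean()
    return beacon_peak > threshold * audible_level


# Demo: one second of quiet noise with a 19 kHz tone hidden in it.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
audio = 0.01 * np.random.randn(SAMPLE_RATE) + 0.5 * np.sin(2 * np.pi * 19000 * t)
print(has_ultrasonic_beacon(audio))  # True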

Your computerized things are talking about you behind your back, and for the most part you can’t stop them—or even learn what they’re saying.

This isn’t new, but it’s getting worse.

Surveillance is the business model of the Internet, and the more these companies know about the intimate details of your life, the more they can profit from it. Already there are dozens of companies that secretly spy on you as you browse the Internet, connecting your behavior on different sites and using that information to target advertisements. You know it when you search for something like a Hawaiian vacation, and ads for similar vacations follow you around the Internet for weeks. Companies like Google and Facebook make an enormous profit connecting the things you write about and are interested in with companies trying to sell you things.

Cross-device tracking is the latest obsession for Internet marketers. You probably use multiple Internet devices: your computer, your smartphone, your tablet, maybe your Internet-enabled television—and, increasingly, “Internet of Things” devices like smart thermostats and appliances. All of these devices are spying on you, but the different spies are largely unaware of each other. Start-up companies like SilverPush, 4Info, Drawbridge, Flurry, and Cross Screen Consultants, as well as the big players like Google, Facebook, and Yahoo, are all experimenting with different technologies to “fix” this problem.

Retailers want this information very much. They want to know whether their television advertising causes people to search for their products on the Internet. They want to correlate people’s web searching on their smartphones with their buying behavior on their computers. They want to track people’s locations using the surveillance capabilities of their smartphones, and use that information to send geographically targeted ads to their computers. They want the surveillance data from smart appliances correlated with everything else.

This is where the Internet of Things makes the problem worse. As computers get embedded into more of the objects we live with and use, and permeate more aspects of our lives, more companies want to use them to spy on us without our knowledge or consent.

Technically, of course, we did consent. The license agreement we didn’t read but legally agreed to when we unthinkingly clicked “I agree” on a screen, or opened a package we purchased, gives all of those companies the legal right to conduct all of this surveillance. And the way US privacy law is currently written, they own all of that data and don’t need to allow us to see it.

We accept all of this Internet surveillance because we don’t really think about it. If there were a dozen people from Internet marketing companies with pens and clipboards peering over our shoulders as we sent our Gmails and browsed the Internet, most of us would object immediately. If the companies that made our smartphone apps actually followed us around all day, or if the companies that collected our license plate data could be seen as we drove, we would demand they stop. And if our televisions, computers, and mobile devices talked about us and coordinated their behavior in a way we could hear, we would be creeped out.

The Federal Trade Commission is looking at cross-device tracking technologies, with an eye to regulating them. But if recent history is a guide, any regulations will be minor and largely ineffective at addressing the larger problem.

We need to do better. We need to have a conversation about the privacy implications of cross-device tracking, but—more importantly—we need to think about the ethics of our surveillance economy. Do we want companies knowing the intimate details of our lives, and being able to store that data forever? Do we truly believe that we have no rights to see the data that’s collected about us, to correct data that’s wrong, or to have data deleted that’s personal or embarrassing? At a minimum, we need limits on the behavioral data that can legally be collected about us and how long it can be stored, a right to download data collected about us, and a ban on third-party ad tracking. The last one is vital: it’s the companies that spy on us from website to website, or from device to device, that are doing the most damage to our privacy.

The Internet surveillance economy is less than 20 years old, and emerged because there was no regulation limiting any of this behavior. It’s now a powerful industry, and it’s expanding past computers and smartphones into every aspect of our lives. It’s long past time we set limits on what these computers, and the companies that control them, can say about us and do to us behind our backs.

This essay previously appeared on Vice Motherboard.

Posted on January 13, 2016 at 5:35 AM

Cory Doctorow on Software Security and the Internet of Things

Cory Doctorow has a good essay on software integrity and control problems and the Internet of Things. He’s writing about self-driving cars, but the issue is much more general. Basically, we’re going to want systems that prevent their owners from making certain changes to them. We know how to do this: digital rights management. We also know that this solution doesn’t work, and that trying introduces all sorts of security vulnerabilities. So we have a problem.

This is an old problem. (Adam Shostack and I wrote a paper about it in 1999, about smart cards.) The Internet of Things is going to make it much worse. And it’s one we’re not anywhere near prepared to solve.

Posted on December 31, 2015 at 6:12 AM

DMCA and the Internet of Things

In theory, the Internet of Things—the connected network of tiny computers inside home appliances, household objects, even clothing—promises to make your life easier and your work more efficient. These computers will communicate with each other and the Internet in homes and public spaces, collecting data about their environment and making changes based on the information they receive. In theory, connected sensors will anticipate your needs, saving you time, money, and energy.

Except when the companies that make these connected objects act in a way that runs counter to the consumer’s best interests—as the technology company Philips did recently with its smart ambient-lighting system, Hue, which consists of a central controller that can remotely communicate with light bulbs. In mid-December, the company pushed out a software update that made the system incompatible with some other manufacturers’ light bulbs, including bulbs that had previously been supported.

The complaints began rolling in almost immediately. The Hue system was supposed to be compatible with an industry standard called ZigBee, but the bulbs that Philips cut off were ZigBee-compliant. Philips backed down and restored compatibility a few days later.

But the story of the Hue debacle—the story of a company using copy protection technology to lock out competitors—isn’t a new one. Plenty of companies set up proprietary standards to ensure that their customers don’t use someone else’s products with theirs. Keurig, for example, puts codes on its single-cup coffee pods, and engineers its coffeemakers to work only with those codes. HP has done the same thing with its printers and ink cartridges.

To stop competitors from simply reverse-engineering the proprietary standard and making compatible peripherals (for example, another coffee manufacturer putting Keurig’s codes on its own pods), these companies rely on a 1998 law called the Digital Millennium Copyright Act (DMCA). The law was originally passed to prevent people from pirating music and movies; while it hasn’t done a lot of good in that regard (as anyone who uses BitTorrent can attest), it has done a lot to inhibit security and compatibility research.

Specifically, the DMCA includes an anti-circumvention provision, which prohibits companies from circumventing “technological protection measures” that “effectively control access” to copyrighted works. That means it’s illegal for someone to create a Hue-compatible light bulb without Philips’ permission, a K-cup-compatible coffee pod without Keurig’s, or an HP-printer-compatible cartridge without HP’s.

By now, we’re used to this in the computer world. In the 1990s, Microsoft used a strategy it called “embrace, extend, extinguish,” in which it gradually added proprietary capabilities to products that already adhered to widely used standards. Some more recent examples: Amazon’s e-book format doesn’t work on other companies’ readers, music purchased from Apple’s iTunes store doesn’t work with other music players, and every game console has its own proprietary game cartridge format.

Because companies can enforce anti-competitive behavior this way, there’s a litany of things that just don’t exist, even though they would make life easier for consumers in significant ways. You can’t have custom software for your cochlear implant, or your programmable thermostat, or your computer-enabled Barbie doll. An auto repair shop can’t design a better diagnostic system that interfaces with a car’s computers. And John Deere has claimed that it owns the software on all of its tractors, meaning the farmers that purchase them are prohibited from repairing or modifying their property.

As the Internet of Things becomes more prevalent, so too will this kind of anti-competitive behavior—which undercuts the purpose of having smart objects in the first place. We’ll want our light bulbs to communicate with a central controller, regardless of manufacturer. We’ll want our clothes to communicate with our washing machines and our cars to communicate with traffic signs.

We can’t have this when companies can cut off compatible products, or use the law to prevent competitors from reverse-engineering their products to ensure compatibility across brands. For the Internet of Things to provide any value, what we need is a world that looks like the automotive industry, where you can go to a store and buy replacement parts made by a wide variety of different manufacturers. Instead, the Internet of Things is on track to become a battleground of competing standards, as companies try to build monopolies by locking each other out.

This essay previously appeared on TheAtlantic.com.

Slashdot thread.

EDITED TO ADD (1/5): Interesting commentary.

Posted on December 29, 2015 at 5:58 AM

Using Samsung's Internet-Enabled Refrigerator for Man-in-the-Middle Attacks

This is interesting research:

Whilst the fridge implements SSL, it FAILS to validate SSL certificates, thereby enabling man-in-the-middle attacks against most connections. This includes those made to Google’s servers to download Gmail calendar information for the on-screen display.

So, MITM the victim’s fridge from next door, or on the road outside and you can potentially steal their Google credentials.

The notable exception to the rule above is when the terminal connects to the update server—we were able to isolate the URL https://www.samsungotn.net which is the same used by TVs, etc. We generated a set of certificates with the exact same contents as those on the real website (fake server cert + fake CA signing cert) in the hope that the validation was weak but it failed.

The terminal must have a copy of the CA and is making sure that the server’s cert is signed against that one. We can’t hack this without access to the file system where we could replace the CA it is validating against. Long story short we couldn’t intercept communications between the fridge terminal and the update server.
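The difference between the two outcomes is entirely in how the TLS client is configured. As a rough sketch (not the fridge’s actual code), here are the three postures expressed with Python’s standard ssl module; the pinned CA file path is a hypothetical name for the certificate the researchers say is bundled with the terminal:

import socket
import ssl

HOST = "www.samsungotn.net"  # the update server identified in the research

# 1. What the calendar client effectively did: accept any certificate.
#    This is what makes the man-in-the-middle attack possible.
broken = ssl.create_default_context()
broken.check_hostname = False
broken.verify_mode = ssl.CERT_NONE

# 2. Normal practice: validate against the system's trusted CA store.
normal = ssl.create_default_context()

# 3. What the update client did: trust only one bundled CA (pinning).
pinned = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
pinned.load_verify_locations(cafile="pinned_ca.pem")  # hypothetical path


def connect(ctx: ssl.SSLContext) -> None:
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print(tls.getpeercert())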

When I think about the security implications of the Internet of Things, this is one of my primary worries. As we connect things to each other, vulnerabilities in one of them affect the security of another. And because so many of the things we connect to the Internet will be poorly designed and low-cost, there will be lots of vulnerabilities in them. Expect a lot more of this kind of thing as we move forward.

EDITED TO ADD (9/11): Dave Barry reblogged me.

Posted on August 31, 2015 at 1:56 PM

Hacking Drug Pumps

When you connect hospital drug pumps to the Internet, they’re hackable. This is only surprising to people who aren’t paying attention.

Rios says when he first told Hospira a year ago that hackers could update the firmware on its pumps, the company “didn’t believe it could be done.” Hospira insisted there was “separation” between the communications module and the circuit board that would make this impossible. Rios says technically there is physical separation between the two. But the serial cable provides a bridge to jump from one to the other.

“From an architecture standpoint, it looks like these two modules are separated,” he says. “But when you open the device up, you can see they’re actually connected with a serial cable, and they’re connected in a way that you can actually change the core software on the pump.”

An attacker wouldn’t need physical access to the pump. The communication modules are connected to hospital networks, which are in turn connected to the Internet. “You can talk to that communication module over the network or over a wireless network,” Rios warns.

Hospira knows this, he says, because this is how it delivers firmware updates to its pumps. Yet despite this, he says, the company insists that “the separation makes it so you can’t hurt someone. So we’re going to develop a proof-of-concept that proves that’s not true.”
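The missing control is firmware authentication: a pump that verifies a vendor signature before flashing an image will reject anything an attacker sends over the network. Here is a minimal sketch of what that check might look like, using the third-party Python cryptography package; the keypair and update flow are illustrative assumptions, not Hospira’s design.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Demo keypair. On a real device only the public key would be present,
# stored read-only, with the private key kept offline by the vendor.
vendor_private = Ed25519PrivateKey.generate()
vendor_public = vendor_private.public_key()


def apply_update(image: bytes, signature: bytes) -> bool:
    """Flash the image only if the vendor's signature verifies."""
    try:
        vendor_public.verify(signature, image)
    except InvalidSignature:
        return False  # reject unsigned or tampered firmware
    # ... write the verified image to flash here ...
    return True


firmware = b"pump-firmware-v2.bin"
sig = vendor_private.sign(firmware)

assert apply_update(firmware, sig)             # legitimate update accepted
assert not apply_update(firmware + b"!", sig)  # tampered image rejected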

One of the biggest conceptual problems we have is that something is believed secure until demonstrated otherwise. We need to reverse that: everything should be believed insecure until demonstrated otherwise.

Posted on June 17, 2015 at 2:02 PM
