Entries Tagged "robotics"


The Internet of Things Will Be the World's Biggest Robot

The Internet of Things is the name given to the computerization of everything in our lives. Already you can buy Internet-enabled thermostats, light bulbs, refrigerators, and cars. Soon everything will be on the Internet: the things we own, the things we interact with in public, autonomous things that interact with each other.

These “things” will have two separate parts. One part will be sensors that collect data about us and our environment. Already our smartphones know our location and, with their onboard accelerometers, track our movements. Things like our thermostats and light bulbs will know who is in the room. Internet-enabled street and highway sensors will know how many people are out and about—and eventually who they are. Sensors will collect environmental data from all over the world.

The other part will be actuators. They’ll affect our environment. Our smart thermostats aren’t collecting information about ambient temperature and who’s in the room for nothing; they set the temperature accordingly. Phones already know our location, and send that information back to Google Maps and Waze to determine where traffic congestion is; when they’re linked to driverless cars, they’ll automatically route us around that congestion. Amazon already wants autonomous drones to deliver packages. The Internet of Things will increasingly perform actions for us and in our name.

Increasingly, human intervention will be unnecessary. The sensors will collect data. The system’s smarts will interpret the data and figure out what to do. And the actuators will do things in our world. You can think of the sensors as the eyes and ears of the Internet, the actuators as the hands and feet of the Internet, and the stuff in the middle as the brain. This makes the future clearer. The Internet now senses, thinks, and acts.
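A minimal sketch of that sense-think-act loop, assuming a hypothetical smart thermostat (the names, readings, and thresholds below are invented for illustration), might look like this in Python:

    import time

    def read_sensors():
        # Sensors: collect data about the environment (hypothetical readings).
        return {"temperature_c": 17.5, "room_occupied": True}

    def decide(state):
        # The "brain": interpret the data and choose an action.
        if state["room_occupied"] and state["temperature_c"] < 20.0:
            return "heat_on"
        return "heat_off"

    def actuate(action):
        # Actuator: affect the environment. Here we only print;
        # a real device would drive the heating hardware.
        print(f"thermostat -> {action}")

    # The loop runs continuously, with no human in it.
    while True:
        actuate(decide(read_sensors()))
        time.sleep(60)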

We’re building a world-sized robot, and we don’t even realize it.

I’ve started calling this robot the World-Sized Web.

The World-Sized Web—can I call it WSW?—is more than just the Internet of Things. Much of the WSW’s brains will be in the cloud, on servers connected via cellular, Wi-Fi, or short-range data networks. It’s mobile, of course, because many of these things will move around with us, like our smartphones. And it’s persistent. You might be able to turn off small pieces of it here and there, but in the main the WSW will always be on, and always be there.

None of these technologies are new, but they’re all becoming more prevalent. I believe that we’re at the brink of a phase change around information and networks. The difference in degree will become a difference in kind. That’s the robot that is the WSW.

This robot will increasingly be autonomous: simply at first, then with growing use of artificial intelligence. Drones with sensors will fly to places where the WSW needs to collect data. Vehicles with actuators will drive to places the WSW needs to affect. Other parts of the robot will “decide” where to go, what data to collect, and what to do.

We’re already seeing this kind of thing in warfare; drones are surveilling the battlefield and firing weapons at targets. Humans are still in the loop, but how long will that last? And when both the data collection and resultant actions are more benign than a missile strike, autonomy will be an easier sell.

By and large, the WSW will be a benign robot. It will collect data and do things in our interests; that’s why we’re building it. But it will change our society in ways we can’t predict, some of them good and some of them bad. It will maximize profits for the people who control the components. It will enable totalitarian governments. It will empower criminals and hackers in new and different ways. It will cause power balances to shift and societies to change.

These changes are inherently unpredictable, because they’re based on the emergent properties of these new technologies interacting with each other, us, and the world. In general, it’s easy to predict technological changes due to scientific advances, but much harder to predict social changes due to those technological changes. For example, it was easy to predict that better engines would mean that cars could go faster. It was much harder to predict that the result would be a demographic shift into suburbs. Driverless cars and smart roads will again transform our cities in new ways, as will autonomous drones, cheap and ubiquitous environmental sensors, and a network that can anticipate our needs.

Maybe the WSW is more like an organism. It won’t have a single mind. Parts of it will be controlled by large corporations and governments. Small parts of it will be controlled by us. But writ large its behavior will be unpredictable, the result of millions of tiny goals and billions of interactions between parts of itself.

We need to start thinking seriously about our new world-spanning robot. The market will not sort this out all by itself. By nature, it is short-term and profit-motivated—and these issues require broader thinking. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission as a place where robotics expertise and advice can be centralized within the government. Japan and Korea are already moving in this direction.

Speaking as someone with a healthy skepticism for another government agency, I think we need to go further. We need to create a new agency, a Department of Technology Policy, that can deal with the WSW in all its complexities. It needs the power to aggregate expertise and advise other agencies, and probably the authority to regulate when appropriate. We can argue the details, but no existing government entity has either the expertise or the authority to tackle something this broad and far-reaching. And the question is not whether government will start regulating these technologies, but how smart it will be when it does.

The WSW is being built right now, without anyone noticing, and it’ll be here before we know it. Whatever changes it means for society, we don’t want it to take us by surprise.

This essay originally appeared on Forbes.com, which annoyingly blocks browsers using ad blockers.

EDITED TO ADD: Kevin Kelly has also thought along these lines, calling the robot “Holos.”

EDITED TO ADD: Commentary.

EDITED TO ADD: This essay has been translated into Hebrew.

Posted on February 4, 2016 at 6:18 AM

Shooting Down Drones

A Kentucky man shot down a drone that was hovering in his backyard:

“It was just right there,” he told Ars. “It was hovering, I would never have shot it if it was flying. When he came down with a video camera right over my back deck, that’s not going to work. I know they’re neat little vehicles, but one of those uses shouldn’t be flying into people’s yards and videotaping.”

Minutes later, a car full of four men that he didn’t recognize rolled up, “looking for a fight.”

“Are you the son of a bitch that shot my drone?” one said, according to Merideth.

His terse reply to the men, while wearing a 10mm Glock holstered on his hip: “If you cross that sidewalk onto my property, there’s going to be another shooting.”

He was arrested, but what’s the law?

In the view of drone lawyer Brendan Schulman and robotics law professor Ryan Calo, homeowners can’t just start shooting when they see a drone over their house. The reason is that the law frowns on self-help when a person can simply call the police instead. This means that Merideth may not have been defending his house, but instead committing criminal acts and property damage for which he could have to pay.

But a different and bolder argument, put forward by law professor Michael Froomkin, could provide Merideth some cover. In a paper, Froomkin argues that it’s reasonable to assume robotic intrusions are not harmless, and that people may have a right to “employ violent self-help.”

Froomkin’s paper is well worth reading:

Abstract: Robots can pose—or can appear to pose—a threat to life, property, and privacy. May a landowner legally shoot down a trespassing drone? Can she hold a trespassing autonomous car as security against damage done or further torts? Is the fear that a drone may be operated by a paparazzo or a peeping Tom sufficient grounds to disable or interfere with it? How hard may you shove if the office robot rolls over your foot? This paper addresses all those issues and one more: what rules and standards we could put into place to make the resolution of those questions easier and fairer to all concerned.

The default common-law legal rules governing each of these perceived threats are somewhat different, although reasonableness always plays an important role in defining legal rights and options. In certain cases—drone overflights, autonomous cars—national, state, and even local regulation may trump the common law. Because it is in most cases obvious that humans can use force to protect themselves against actual physical attack, the paper concentrates on the more interesting cases of (1) robot (and especially drone) trespass and (2) responses to perceived threats other than physical attack by robots, notably the risk that the robot (or drone) may be spying—perceptions which may not always be justified, but which sometimes may nonetheless be considered reasonable in law.

We argue that the scope of permissible self-help in defending one’s privacy should be quite broad. There is exigency in that resort to legally administered remedies would be impracticable; and worse, the harm caused by a drone that escapes with intrusive recordings can be substantial and hard to remedy after the fact. Further, it is common for new technology to be seen as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great—or even extant. We therefore suggest measures to reduce uncertainties about robots, ranging from forbidding weaponized robots to requiring lights and other markings that would announce a robot’s capabilities, and RFID chips and serial numbers that would uniquely identify the robot’s owner.

The paper concludes with a brief examination of what if anything our survey of a person’s right to defend against robots might tell us about the current state of robot rights against people.

Note that there are drones that shoot back.

Here are two books that talk about these topics. And an article from 2012.

EDITED TO ADD (8/9): How to shoot down a drone.

Posted on August 4, 2015 at 8:24 AM

Uh Oh—Robots Are Getting Good with Samurai Swords

It’s Iaido, not sword fighting, but still.

Of course, the two didn’t battle each other, but competed in Iaido tests like cutting mats and flowers in various cross-sectional directions. A highlight was when the robot horizontally sliced string beans measuring just 1cm in thickness! At the end, the ultimate test unfolds: the famous 1,000 iaido sword cut challenge. Ultimately, both man and machine end up victorious, leaving behind a litter of straw and sweat as testament to the very first “Senbongiri battle between the pinnacle of robotics and the peak of humanity.”

Posted on June 12, 2015 at 1:38 PM
