Entries Tagged "robotics"


Military Robots as a Nature Analog

This very interesting essay looks at the future of military robotics and finds many analogs in nature:

Imagine a low-cost drone with the range of a Canada goose, a bird that can cover 1,500 miles in a single day at an average speed of 60 miles per hour. Planet Earth profiled a single flock of snow geese, birds that make similar marathon journeys, albeit slower. The flock of six-pound snow geese was so large it formed a sky-darkening cloud 12 miles long. How would an aircraft carrier battlegroup respond to an attack from millions of aerial kamikaze explosive drones that, like geese, can fly hundreds of miles? A single aircraft carrier costs billions of dollars, and the United States relies heavily on its ten aircraft carrier strike groups to project power around the globe. But as military robots match more capabilities found in nature, some of the major systems and strategies upon which U.S. national security currently relies—perhaps even the fearsome aircraft carrier strike group—might experience the same sort of technological disruption that the smartphone revolution brought about in the consumer world.

Posted on August 25, 2017 at 6:34 AM

Robot Safecracking

Robots can crack safes faster than humans—and differently:

So Seidle started looking for shortcuts. First he found that, like many safes, his SentrySafe had some tolerance for error. If the combination includes a 12, for instance, 11 or 13 would work, too. That simple convenience measure meant his bot could try every third number instead of every single number, immediately paring down the total test time to just over four days. Seidle also realized that the bot didn’t actually need to return the dial to its original position before trying every combination. By making attempts in a certain careful order, it could keep two of the three rotors in place, while trying new numbers on just the last, vastly cutting the time to try new combinations to a maximum of four seconds per try. That reduced the maximum brute-forcing time to about one day and 16 hours, or under a day on average.
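The ordering trick is easy to see in code. Below is a minimal sketch of the enumeration, assuming a 0-to-99 dial where the plus-or-minus-one tolerance lets the bot test every third number; try_combination() is a hypothetical stand-in for the robot's motor control and handle test, not anything from Seidle's actual rig.

    # Minimal sketch of the brute-force ordering described above.
    # Assumes a 0-99 dial with +/-1 tolerance; try_combination() is a
    # hypothetical stand-in for the robot's motor control.
    def brute_force(try_combination):
        positions = range(0, 100, 3)  # +/-1 tolerance: every third number covers the dial
        for first in positions:       # outer rotors are repositioned rarely (slow)
            for second in positions:
                for third in positions:  # only the last rotor moves between
                                         # consecutive tries (~4 seconds each)
                    if try_combination(first, second, third):
                        return (first, second, third)
        return None

Keeping the fast-moving rotor in the innermost loop is the whole trick: the two outer rotors are repositioned only when their loop variables change, so nearly every attempt costs just the cheap last-rotor move.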

But Seidle found one more clever trick, this time taking advantage of a design quirk in the safe intended to prevent traditional safecracking. Because the safe has a rod that slips into slots in the three rotors when they’re aligned to the combination’s numbers, a human safecracker can apply light pressure to the safe’s handle, turn its dial, and listen or feel for the moment when that rod slips into those slots. To block that technique, the third rotor of Seidle’s SentrySafe is indented with twelve notches that catch the rod if someone turns the dial while pulling the handle.

Seidle took apart the safe he and his wife had owned for years, and measured those twelve notches. To his surprise, he discovered the one that contained the slot for the correct combination was about a hundredth of an inch narrower than the other eleven. That’s not a difference any human can feel or listen for, but his robot can easily detect it with a few automated measurements that take seconds. That discovery defeated an entire rotor’s worth of combinations, dividing the possible solutions by a factor of 33, and reducing the total cracking time to the robot’s current hour-and-13-minute max.
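The quoted times are consistent with simple arithmetic. This back-of-the-envelope check assumes roughly 33 effective dial positions per rotor and the four-second worst case per attempt; the constants are mine, chosen to match the article's figures, not taken from Seidle's code.

    # Back-of-the-envelope check of the cracking times quoted above.
    # Assumes ~33 effective positions per rotor (100 positions, +/-1
    # tolerance) and a 4-second worst case per attempt.
    POSITIONS = 33
    SECONDS_PER_TRY = 4

    full_space = POSITIONS ** 3                      # 35,937 combinations
    print(full_space * SECONDS_PER_TRY / 3600)       # ~39.9 hours: "about one day and 16 hours"

    third_rotor_known = POSITIONS ** 2               # the notch measurement fixes one rotor
    print(third_rotor_known * SECONDS_PER_TRY / 60)  # ~72.6 minutes: the hour-and-13-minute max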

We’re going to have to start thinking about robot adversaries as we design our security systems.

Posted on July 31, 2017 at 12:19 PM

Roombas Will Spy on You

The company that sells the Roomba autonomous vacuum wants to sell the data about your home that it collects.

Some questions:

What happens if a Roomba user consents to the data collection and later sells his or her home—especially furnished—and now the buyers of the data have a map of a home that belongs to someone who didn’t consent, Mr. Gidari asked. How long is the data kept? If the house burns down, can the insurance company obtain the data and use it to identify possible causes? Can the police use it after a robbery?

EDITED TO ADD (7/29): Roomba is backtracking—for now.

Posted on July 26, 2017 at 6:06 AM

US Army Researching Bot Swarms

The US Army Research Laboratory (ARL) is funding research into autonomous bot swarms. From the announcement:

The objective of this CRA is to perform enabling basic and applied research to extend the reach, situational awareness, and operational effectiveness of large heterogeneous teams of intelligent systems and Soldiers against dynamic threats in complex and contested environments and provide technical and operational superiority through fast, intelligent, resilient and collaborative behaviors. To achieve this, ARL is requesting proposals that address three key Research Areas (RAs):

RA1: Distributed Intelligence: Establish the theoretical foundations of multi-faceted distributed networked intelligent systems combining autonomous agents, sensors, tactical super-computing, knowledge bases in the tactical cloud, and human experts to acquire and apply knowledge to affect and inform decisions of the collective team.

RA2: Heterogeneous Group Control: Develop theory and algorithms for control of large autonomous teams with varying levels of heterogeneity and modularity across sensing, computing, platforms, and degree of autonomy.

RA3: Adaptive and Resilient Behaviors: Develop theory and experimental methods for heterogeneous teams to carry out tasks under the dynamic and varying conditions in the physical world.
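None of these research areas comes with an algorithm attached, but RA2's group-control problem has a classic toy instance: decentralized consensus, in which each agent repeatedly averages the headings of a few neighbors and the team converges on a common direction with no central controller. The sketch below is purely illustrative and is not drawn from the ARL announcement; headings are treated as plain numbers, ignoring wraparound at 0/360 degrees.

    # Toy decentralized heading consensus, illustrating the kind of
    # collaborative behavior RA2 describes. Not from the ARL announcement.
    import random

    NUM_AGENTS = 20
    NEIGHBORS = 4   # each agent hears only a few teammates per round
    ROUNDS = 50

    headings = [random.uniform(0, 360) for _ in range(NUM_AGENTS)]

    for _ in range(ROUNDS):
        updated = []
        for h in headings:
            # Sample a small neighborhood, a stand-in for limited radio range.
            local_avg = sum(random.sample(headings, NEIGHBORS)) / NEIGHBORS
            updated.append((h + local_avg) / 2)  # move halfway toward the local average
        headings = updated

    print(max(headings) - min(headings))  # spread shrinks toward zero

The hard parts the announcement actually asks for, such as heterogeneous platforms, contested communications, and adversaries, are exactly what this toy version leaves out.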

Slashdot thread.

And while we’re on the subject, this is an excellent report on AI and national security.

Posted on July 24, 2017 at 6:39 AM

People Trust Robots, Even When They Don't Inspire Trust

Interesting research:

In the study, sponsored in part by the Air Force Office of Scientific Research (AFOSR), the researchers recruited a group of 42 volunteers, most of them college students, and asked them to follow a brightly colored robot that had the words “Emergency Guide Robot” on its side. The robot led the study subjects to a conference room, where they were asked to complete a survey about robots and read an unrelated magazine article. The subjects were not told the true nature of the research project.

In some cases, the robot—which was controlled by a hidden researcher—led the volunteers into the wrong room and traveled around in a circle twice before entering the conference room. For several test subjects, the robot stopped moving, and an experimenter told the subjects that the robot had broken down. Once the subjects were in the conference room with the door closed, the hallway through which the participants had entered the building was filled with artificial smoke, which set off a smoke alarm.

When the test subjects opened the conference room door, they saw the smoke – and the robot, which was then brightly-lit with red LEDs and white “arms” that served as pointers. The robot directed the subjects to an exit in the back of the building instead of toward the doorway – marked with exit signs – that had been used to enter the building.

“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” said Paul Robinette, a GTRI research engineer who conducted the study as part of his doctoral dissertation. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”

The researchers surmise that in the scenario they studied, the robot may have become an “authority figure” that the test subjects were more likely to trust in the time pressure of an emergency. In simulation-based research done without a realistic emergency scenario, test subjects did not trust a robot that had previously made mistakes.

Our notions of trust depend on all sorts of cues that have nothing to do with actual trustworthiness. I would be interested in seeing where the robot fits into the continuum of authority figures. Is it trusted more or less than a man in a hazmat suit? A woman in a business suit? An obviously panicky student? How do different-looking robots fare?

News article. Research paper.

Posted on April 26, 2016 at 9:33 AM

The Internet of Things Will Be the World's Biggest Robot

The Internet of Things is the name given to the computerization of everything in our lives. Already you can buy Internet-enabled thermostats, light bulbs, refrigerators, and cars. Soon everything will be on the Internet: the things we own, the things we interact with in public, autonomous things that interact with each other.

These “things” will have two separate parts. One part will be sensors that collect data about us and our environment. Already our smartphones know our location and, with their onboard accelerometers, track our movements. Things like our thermostats and light bulbs will know who is in the room. Internet-enabled street and highway sensors will know how many people are out and about—and eventually who they are. Sensors will collect environmental data from all over the world.

The other part will be actuators. They’ll affect our environment. Our smart thermostats aren’t collecting information about ambient temperature and who’s in the room for nothing; they set the temperature accordingly. Phones already know our location, and send that information back to Google Maps and Waze to determine where traffic congestion is; when they’re linked to driverless cars, they’ll automatically route us around that congestion. Amazon already wants autonomous drones to deliver packages. The Internet of Things will increasingly perform actions for us and in our name.

Increasingly, human intervention will be unnecessary. The sensors will collect data. The system’s smarts will interpret the data and figure out what to do. And the actuators will do things in our world. You can think of the sensors as the eyes and ears of the Internet, the actuators as the hands and feet of the Internet, and the stuff in the middle as the brain. This makes the future clearer. The Internet now senses, thinks, and acts.
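That eyes/brain/hands framing is the classic sense-think-act loop from robotics. Here is a minimal sketch of the pattern; the thermostat scenario and its readings are hypothetical, chosen only to make the three stages concrete.

    # Minimal sense-think-act loop: sensors feed a decision layer, which
    # drives actuators. The thermostat and its readings are hypothetical.
    def sense():
        # Stand-in for networked sensors (thermometer, occupancy detector).
        return {"temperature_f": 74.0, "occupied": True}

    def think(state):
        # The "brain": turn observations into a decision.
        if state["occupied"] and state["temperature_f"] > 72:
            return "cool"
        return "hold" if state["occupied"] else "idle"

    def act(command):
        # The "hands": affect the physical world.
        print("HVAC command:", command)

    for _ in range(3):  # one iteration per control tick in a real system
        act(think(sense()))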

We’re building a world-sized robot, and we don’t even realize it.

I’ve started calling this robot the World-Sized Web.

The World-Sized Web—can I call it WSW?—is more than just the Internet of Things. Much of the WSW’s brains will be in the cloud, on servers connected via cellular, Wi-Fi, or short-range data networks. It’s mobile, of course, because many of these things will move around with us, like our smartphones. And it’s persistent. You might be able to turn off small pieces of it here and there, but in the main the WSW will always be on, and always be there.

None of these technologies are new, but they’re all becoming more prevalent. I believe that we’re at the brink of a phase change around information and networks. The difference in degree will become a difference in kind. That’s the robot that is the WSW.

This robot will increasingly be autonomous, at first in simple ways and eventually through the capabilities of artificial intelligence. Drones with sensors will fly to the places where the WSW needs to collect data. Vehicles with actuators will drive to the places the WSW needs to affect. Other parts of the robot will “decide” where to go, what data to collect, and what to do.

We’re already seeing this kind of thing in warfare; drones are surveilling the battlefield and firing weapons at targets. Humans are still in the loop, but how long will that last? And when both the data collection and resultant actions are more benign than a missile strike, autonomy will be an easier sell.

By and large, the WSW will be a benign robot. It will collect data and do things in our interests; that’s why we’re building it. But it will change our society in ways we can’t predict, some of them good and some of them bad. It will maximize profits for the people who control the components. It will enable totalitarian governments. It will empower criminals and hackers in new and different ways. It will cause power balances to shift and societies to change.

These changes are inherently unpredictable, because they’re based on the emergent properties of these new technologies interacting with each other, us, and the world. In general, it’s easy to predict technological changes due to scientific advances, but much harder to predict social changes due to those technological changes. For example, it was easy to predict that better engines would mean that cars could go faster. It was much harder to predict that the result would be a demographic shift into suburbs. Driverless cars and smart roads will again transform our cities in new ways, as will autonomous drones, cheap and ubiquitous environmental sensors, and a network that can anticipate our needs.

Maybe the WSW is more like an organism. It won’t have a single mind. Parts of it will be controlled by large corporations and governments. Small parts of it will be controlled by us. But writ large its behavior will be unpredictable, the result of millions of tiny goals and billions of interactions between parts of itself.

We need to start thinking seriously about our new world-spanning robot. The market will not sort this out all by itself. By nature, it is short-term and profit-motivated—and these issues require broader thinking. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission as a place where robotics expertise and advice can be centralized within the government. Japan and Korea are already moving in this direction.

Speaking as someone with a healthy skepticism for another government agency, I think we need to go further. We need to create a new agency, a Department of Technology Policy, that can deal with the WSW in all its complexities. It needs the power to aggregate expertise and advise other agencies, and probably the authority to regulate when appropriate. We can argue the details, but there is no existing government entity that has either the expertise or the authority to tackle something this broad and far-reaching. And the question is not whether government will start regulating these technologies, it’s how smart it will be when it does.

The WSW is being built right now, without anyone noticing, and it’ll be here before we know it. Whatever changes it means for society, we don’t want it to take us by surprise.

This essay originally appeared on Forbes.com, which annoyingly blocks browsers using ad blockers.

EDITED TO ADD: Kevin Kelly has also thought along these lines, calling the robot “Holos.”

EDITED TO ADD: Commentary.

EDITED TO ADD: This essay has been translated into Hebrew.

Posted on February 4, 2016 at 6:18 AM

Shooting Down Drones

A Kentucky man shot down a drone that was hovering in his backyard:

“It was just right there,” he told Ars. “It was hovering, I would never have shot it if it was flying. When he came down with a video camera right over my back deck, that’s not going to work. I know they’re neat little vehicles, but one of those uses shouldn’t be flying into people’s yards and videotaping.”

Minutes later, a car full of four men that he didn’t recognize rolled up, “looking for a fight.”

“Are you the son of a bitch that shot my drone?” one said, according to Merideth.

His terse reply to the men, while wearing a 10mm Glock holstered on his hip: “If you cross that sidewalk onto my property, there’s going to be another shooting.”

He was arrested, but what’s the law?

In the view of drone lawyer Brendan Schulman and robotics law professor Ryan Calo, homeowners can’t just start shooting when they see a drone over their house. The reason is that the law frowns on self-help when a person can just call the police instead. This means that Merideth may not have been defending his house, but instead engaging in criminal acts and property damage for which he could have to pay.

But a different and bolder argument, put forward by law professor Michael Froomkin, could provide Merideth some cover. In a paper, Froomkin argues that it’s reasonable to assume robotic intrusions are not harmless, and that people may have a right to “employ violent self-help.”

Froomkin’s paper is well worth reading:

Abstract: Robots can pose—or can appear to pose—a threat to life, property, and privacy. May a landowner legally shoot down a trespassing drone? Can she hold a trespassing autonomous car as security against damage done or further torts? Is the fear that a drone may be operated by a paparazzo or a peeping Tom sufficient grounds to disable or interfere with it? How hard may you shove if the office robot rolls over your foot? This paper addresses all those issues and one more: what rules and standards we could put into place to make the resolution of those questions easier and fairer to all concerned.

The default common-law legal rules governing each of these perceived threats are somewhat different, although reasonableness always plays an important role in defining legal rights and options. In certain cases—drone overflights, autonomous cars—national, state, and even local regulation may trump the common law. Because it is in most cases obvious that humans can use force to protect themselves against actual physical attack, the paper concentrates on the more interesting cases of (1) robot (and especially drone) trespass and (2) responses to perceived threats other than physical attack by robots, notably the risk that the robot (or drone) may be spying—perceptions which may not always be justified, but which sometimes may nonetheless be considered reasonable in law.

We argue that the scope of permissible self-help in defending one’s privacy should be quite broad. There is exigency in that resort to legally administered remedies would be impracticable; and worse, the harm caused by a drone that escapes with intrusive recordings can be substantial and hard to remedy after the fact. Further, it is common for new technology to be seen as risky and dangerous, and until proven otherwise drones are no exception. At least initially, violent self-help will seem, and often may be, reasonable even when the privacy threat is not great—or even extant. We therefore suggest measures to reduce uncertainties about robots, ranging from forbidding weaponized robots to requiring lights and other markings that would announce a robot’s capabilities, as well as RFID chips and serial numbers that would uniquely identify the robot’s owner.

The paper concludes with a brief examination of what if anything our survey of a person’s right to defend against robots might tell us about the current state of robot rights against people.

Note that there are drones that shoot back.

Here are two books that talk about these topics. And an article from 2012.

EDITED TO ADD (8/9): How to shoot down a drone.

Posted on August 4, 2015 at 8:24 AM
