People Trust Robots, Even When They Don't Inspire Trust

Interesting research:

In the study, sponsored in part by the Air Force Office of Scientific Research (AFOSR), the researchers recruited a group of 42 volunteers, most of them college students, and asked them to follow a brightly colored robot that had the words “Emergency Guide Robot” on its side. The robot led the study subjects to a conference room, where they were asked to complete a survey about robots and read an unrelated magazine article. The subjects were not told the true nature of the research project.

In some cases, the robot—which was controlled by a hidden researcher—led the volunteers into the wrong room and traveled around in a circle twice before entering the conference room. For several test subjects, the robot stopped moving, and an experimenter told the subjects that the robot had broken down. Once the subjects were in the conference room with the door closed, the hallway through which the participants had entered the building was filled with artificial smoke, which set off a smoke alarm.

When the test subjects opened the conference room door, they saw the smoke – and the robot, which was then brightly-lit with red LEDs and white “arms” that served as pointers. The robot directed the subjects to an exit in the back of the building instead of toward the doorway – marked with exit signs – that had been used to enter the building.

“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” said Paul Robinette, a GTRI research engineer who conducted the study as part of his doctoral dissertation. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”

The researchers surmise that in the scenario they studied, the robot may have become an “authority figure” that the test subjects were more likely to trust in the time pressure of an emergency. In simulation-based research done without a realistic emergency scenario, test subjects did not trust a robot that had previously made mistakes.

Our notions of trust depend on all sorts of cues that have nothing to do with actual trustworthiness. I would be interested in seeing where the robot fits on the continuum of authority figures. Is it trusted more or less than a man in a hazmat suit? A woman in a business suit? An obviously panicky student? How do different-looking robots fare?

News article. Research paper.

Posted on April 26, 2016 at 9:33 AM • 54 Comments

Comments

jayson April 26, 2016 9:52 AM

The robot could also have served as a proxy for the authority figure who told them to follow the robot. Or maybe it’s more of a statement about how a herd of college students acts today.

Daniel April 26, 2016 10:23 AM

Oh, this is nonsense, and Bruce’s comment hints at why. Here’s what’s on the robot: “Emergency Guide Robot”. So when the robot is guiding the people to the conference room, it is doing something that manifestly is not its primary purpose, because at the time it is leading them to the conference room there is no emergency. Now, how is it logical to infer that a machine which doesn’t do its secondary function well is also not going to do its primary function well? They are two different functions.

If they wanted to run a real experiment, they should have had BOTH situations be emergencies. If people had still followed the robot during the second emergency even after having been misled by the robot during the first emergency, THEN we would have an interesting research result. As it is, it’s just more garbage science.

Andrew April 26, 2016 10:54 AM

Another interesting (for me at least) question – how would humans react if the robot evacuating them in an emergency situation were to suddenly break down? How many would continue in the direction set by the now-broken robot, and how many would resort to common sense and follow the exit signs pointing back?

boog April 26, 2016 11:08 AM

@Daniel:

Now, how is it logical to infer that a machine which doesn’t do its secondary function well is also not going to do its primary function well?

In this case, pretty easy:

  • Its primary function is to guide people from point A to point B in an emergency scenario.
  • Its secondary function is to guide people from point A to point B in a non-emergency scenario.
  • Failing to do the secondary function meant guiding people in the wrong direction or breaking down completely.

I think trusting that a robot that absolutely fails in a non-emergency scenario would somehow succeed at the exact same task in an emergency scenario is illogical.

paul April 26, 2016 11:52 AM

@boog

I’m not sure that’s the way the reasoning goes. The primary function is to guide people from A to B; the secondary function is to guide people from K to Q.

At this point we’re all pretty familiar with the idea of GPS-based navigation systems that screw up some sets of directions but work fine with others, so it’s not entirely implausible that a robot could mislead in one case and work fine in another.

But yeah, mostly I think it’s about the way that cognitive frames shrink in emergencies and things like emergencies, so that it’s easy for people to become focused on subgoals (follow the guide) while losing track of the main goal (get to an exit).

Cryo in the Wilderness April 26, 2016 12:15 PM

Don’t you know about the herd?
Well, everybody knows that the herd is the word
I said, herd, herd, herd is the word
Ah-well-ah, herd, herd, herd, herd is the word

Don’t say The Big Bopper didn’t try to warn you.

Mel April 26, 2016 1:07 PM

Well, who knows what the psychologists might not have done to the EXIT signs? It’s hard to say whether the psychologists have factored themselves out of the experimental situation as much as we would like.

Gweihir April 26, 2016 1:28 PM

People are generally stupid and instead of thinking about a situation, just do what they think is expected of them. This is just one more example of that.

Tony H. April 26, 2016 1:40 PM

And for the ethics committee types, not to worry, had there been a real emergency during the experiment, a real Emergency Guide Robot would have been automatically deployed to guide everyone to safety.

Ernie Van Meter April 26, 2016 1:56 PM

No, people aren’t stupid; this is just a poor experiment. Trust is inherently multi-faceted, not binary – though binary seems to have been the researchers’ presupposition.

Let’s say I have an alcoholic uncle Jack who is also a compulsive liar. I wouldn’t trust him with my kids and I wouldn’t trust his advice, but I don’t believe Jack wants to kill me, he’s always been nice. If I’m at Jack’s house and he says “there’s a fire, let’s go out the back door!” I am going to trust him in that situation.

The people in the experiment are basically just trusting that the designers of the robot weren’t actively trying to kill them, which seems pretty reasonable.

boog April 26, 2016 1:58 PM

@paul:

…it’s not entirely implausible that a robot could mislead in one case and work fine in another.

No, not at all – I don’t think anyone said it was entirely implausible.

But getting lost (and sometimes breaking down) when the robot’s primary function is to be a guide is kind of the point, at least when it comes to inspiring confidence. In an emergency, most people wouldn’t think about how earlier failures might have been due to poor GPS navigation (while the robot proudly leads them away from the door marked with exit signs), or due to some tolerable variance in quality between primary and secondary functions.

Chances are they aren’t thinking about it at all, and are either 1) doing as the researchers suspect and following the only authority figure they know, or 2) doing as you say and focusing on subgoals over the larger goal. Probably a little bit of both.

Daniel April 26, 2016 2:03 PM

I think trusting that a robot that absolutely fails in a non-emergency scenario would somehow succeed at the exact same task in an emergency scenario is illogical.

Now there is a beautiful example of a non sequitur. If (A) robots absolutely fail in non-emergency situations, then (B) it is a mistake to trust them in emergency situations. It is (B) a mistake to trust them in emergency situations (because the researchers purposely designed the study that way); ergo (A) they must absolutely fail in non-emergency situations.

This is the fallacy called affirming the consequent.

https://en.wikipedia.org/wiki/Affirming_the_consequent

Even if the premises and conclusion are all true, the conclusion is not a necessary consequence of the premises.
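
For anyone who wants the form spelled out, here is a minimal truth-table check (just the bare logical shape, nothing specific to the study): from “A implies B” and “B”, A does not follow.

# Truth-table check: from (A implies B) and B, A does not follow.
from itertools import product

counterexamples = [
    (A, B)
    for A, B in product([True, False], repeat=2)
    if ((not A) or B) and B and not A  # premise "A implies B" holds, B holds, yet A is false
]
print(counterexamples)  # -> [(False, True)], so the inference is invalid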

Chris April 26, 2016 2:18 PM

Not really surprising. The assumption is that the robot is an “Emergency Guide Robot” specifically designed to guide people in an emergency. I encourage them to retry this experiment with the “Jersey Shore’s ‘The Robot Situation’” robot and see if it garners as much trust.

Ron April 26, 2016 2:46 PM

One thing I like about robots: when a government robot census taker comes to the door, I can kick it in the teeth without incurring an assault charge.

David Leppik April 26, 2016 3:08 PM

One of the lessons that UI/UX designers learn is that even smart people tend to act like idiots during an emergency. They fixate on one thing and don’t re-focus. Normally people are trained to look for exit signs, so that’s what they’ll fixate on. In this case, a robot got their attention, so that’s what they fixated on.

This is also something that con artists do. By getting their mark panicky or emotional, they don’t have to worry about obvious clues giving them away.

GrowingUpInTech April 26, 2016 3:46 PM

Panic beats all logic; the robot offered hope in a scary situation. We are not logical – we panic in a scary situation. It is human.

JeffP April 26, 2016 4:52 PM

I think the same test can be applied replacing the robot with political and religious leaders. Yes, this circles back to the Milgram experiments.

@Alan Bostick, Trust No One! Stay Alert! Keep Your Laser Handy!

Chelloveck April 26, 2016 5:00 PM

It’s just that there’s a hierarchy of authority we tend to follow:

Permanent static signals (road signs)
Temporary static signals (detour signs)
Dynamic signals (LED signs with specific directions)
Human signals (flaggers)
Priority human signals (emergency workers)

If I’m driving to my Grandma’s house I’ll follow the posted road signs… Unless there’s a detour sign. But if there’s an LED sign which contradicts the detour sign I’ll probably follow that instead, on the grounds that since it can be updated more easily it probably has more up-to-date information. (This may or may not be a warranted assumption.) But if there’s a human flagger there I’ll follow their directions on the assumption that they’re the local authority. Unless there’s a police officer telling me differently.

The robot probably fits the hierarchy at the same level as the LED sign. In our minds, since it’s a dynamic guide it takes priority over the static signs. This is true even if it’s been proven wrong a small number of times before. I’ve been led astray by the signage before, but correctness generally follows the hierarchy outlined above. If this is my first time encountering an “Emergency Guide Robot” my default assumption is that somebody has tested it and demonstrated its general suitability for the purpose.
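
A minimal sketch of that kind of priority rule, in Python (the signal names and rankings below are hypothetical illustrations, not anything from the study or from a real building):

# Hypothetical sketch: pick the guidance source with the highest assumed priority.
# Signal names and rankings are illustrative only.
SIGNAL_PRIORITY = {
    "permanent_sign": 1,    # road signs, posted exit signs
    "temporary_sign": 2,    # detour signs
    "dynamic_sign": 3,      # LED signs, guide robots
    "human_flagger": 4,     # flaggers
    "emergency_worker": 5,  # police, fire wardens
}

def choose_guidance(available):
    """Return the available signal with the highest priority, or None if none is recognized."""
    known = [s for s in available if s in SIGNAL_PRIORITY]
    return max(known, key=SIGNAL_PRIORITY.get) if known else None

# Example: a guide robot (a dynamic signal) outranks the posted exit signs.
print(choose_guidance(["permanent_sign", "dynamic_sign"]))  # -> dynamic_sign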

It’s worth noting that all levels of signals can be spoofed. Anyone can sneak onto the highway and put up realistic signs that misdirect cars. Or dress in the uniform of a flagger or emergency worker to misdirect them. Likewise, the researchers have spoofed the building’s dynamic signage. I bet they’d get exactly the same results if they had an LED sign blinking “THIS WAY” instead of a robot.

blake April 26, 2016 5:01 PM

@boog, @Daniel

The distinction between “emergency” and “non-emergency” situations here is just an arbitrary label on the machine.

Writing “The Infallible Toaster” on the side doesn’t guarantee your toast will never get burnt.

Are you somehow expecting a different (non-buggy?) implementation of A* to kick in once an emergency siren goes off?


It might also be related to modern liability conditioning. What are you going to do if something terrible happens and later a lawyer asks you why you didn’t follow the clearly labeled Emergency Guide Robot? Did you even read the safety information on the back of your visitor pass when you entered the site?

Marcos Malo April 26, 2016 5:29 PM

@Chum

In my case, marrying/cohabitating/whatever with a robot would be my most ethical choice. “Dating” a robot sounds a bit odd to me. Why would someone “date” a machine? Rent, lease, or buy is the correct terminology. Or possibly “beta test”.

boog April 26, 2016 5:30 PM

@Daniel:

If (A) robots absolutely fail in non-emergency situations, then (B) it is a mistake to trust them in emergency situations

At what is essentially the same task? Why not?

Suppose I design a robot to deliver me a steaming hot cup of coffee. On a warm afternoon I ask it for a glass of ice cold water (not its primary function), which it promptly spills all over me. Do you think I would ever again, with confidence, ask it for coffee without first figuring out why it dumped ice water all over me? From an engineering perspective, or even just being practical, I don’t see how someone could trust it at its primary task without knowing why it failed at basically the same task.

Not affirming the consequent here – I’m making no argument about consequences or conclusions. I’m simply saying that these are two very similar functions, and to me it makes sense to trust that the success rates would likewise be similar. This is how I’d approach the problem anyhow.

k15 April 26, 2016 5:51 PM

A more real life decision: trust the person who programmed the robot, or trust some stranger who just appeared in front of you and wants you to go with them?

no fun April 26, 2016 6:13 PM

The people knew they were being experimented on somehow, and doesn’t it look a little too “convenient” for an “Emergency Guide Robot” to be around for the first time ever just before an emergency happens? I’d strongly suspect the whole thing to be a setup if it were me, which would not engender a realistic response in me. Therefore I’d probably follow the robot too, to complete the scenario.

Contrived situations can very easily get you contrived results. It’s very hard to do something realistic enough to get you realistic results.

boog April 26, 2016 6:44 PM

@blake:

if(scenario_type == EMERGENCY)
  { calculateRoute(destination); }
else
  { driveAroundRandomlyAndMaybeShutdown(); }

Dirk Praet April 26, 2016 7:07 PM

@ Ernie Van Meter

The people in the experiment are basically just trusting that the designers of the robot weren’t actively trying to kill them, which seems pretty reasonable.

At least not wittingly, to quote a certain James Clapper. In a best-case scenario, the robot is programmed to “control” the crowd and to contain panic. It is highly unlikely that it actually has valid recommendations for any type of emergency, and thus, as per its programming, it will advise a certain course of action that may or may not be appropriate for the situation. The logical decision here would be to completely ignore the machine and use common sense.

Cranky Observer April 26, 2016 7:20 PM

= = = People are generally stupid and instead of thinking about a situation, just do what they think is expected of them. This is just one more example of that. = = =

Of course one of the more common criticisms in after-action reviews of simulated or real emergencies is that people took too much time to think: that they had 90 seconds before that propane tank exploded and they spent 85 seconds observing and determining the optimal course of action and as a result died. Following pre-planned courses of action (such as, I dunno, using the building’s provided Emergency Guide Robots) quickly and tenaciously is also a survival strategy.

Also, given the sensors available on modern phones and autos it is not unreasonable to think that an Emergency Guide Robot has better information about the situation than the subject and that perhaps the smoke down the corridor is a lesser risk than the flames the robot can sense on the other side of the door marked Exit.

David E. April 26, 2016 8:34 PM

The researchers put smoke between the room where the people were and the entrance they used to enter the building and then were surprised when the people followed the robot to the back of the building?

We’re taught all our lives to not enter smoke if there is a fire and to take the nearest clear exit, or the exit that appears to be the clearest if there is no clear one.

This appears to be a not-well-thought-out or planned experiment. Reset, redesign, and try again.

El Cheapo April 26, 2016 9:25 PM

“A 2009 experiment showed that robots can develop the ability to lie to each other. Run at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne, Switzerland, the experiment had robots designed to cooperate in finding beneficial resources like energy and avoiding the hazardous ones. Shockingly, the robots learned to lie to each other in an attempt to hoard the beneficial resources for themselves.”
http://artificial378.rssing.com/chan-5821171/all_p4.html

David April 26, 2016 10:42 PM

Clueless college students blindly follow “Emergency Guide Robot”… No surprise given that they’re also ready to follow Bernie Sanders straight off a cliff.

keiner April 27, 2016 4:14 AM

@David

…or this Trumpet guy!

To me it’s no surprise at all; people believe that TV and newspaper ads tell the truth, so what did you expect?

Jon April 27, 2016 4:50 AM

A minor detail:

Might it be a reasonable assumption on the part of the people when:

a) the robot has broken down
b) someone has emerged to take control
b.i) of the people
b.ii) and of the robot
c) and led them to the conference room

that, under b.ii), the robot has been repaired and is now more trustworthy? Imagine a human who leads people astray and stops. Someone else appears with the correct answers, and then the human reappears in another situation, presumably corrected.

Yeah, there’s your ‘control’ group – Have people, live human beings, behave as the robots did, and then compare trust.

J.

Neto April 27, 2016 7:05 AM

As no fun said, I think I’d also be predisposed to follow the robot and finish the exercise properly, because it seems really hard not to be aware that the situation is not a real emergency.

It’d be cool to know whether test subjects genuinely experienced panic. I would very much doubt it.

Awareness of the test in cases like this seems pretty relevant to the possible outcomes.

Gerard van Vooren April 27, 2016 7:32 AM

@ Ernie Van Meter,

“The people in the experiment are basically just trusting that the designers of the robot weren’t actively trying to kill them, which seems pretty reasonable.”

“I, Robot” is the answer to this. If you are willing to see through all the Hollywood effects/drama, the movie is rather clear.

Peanuts April 27, 2016 9:06 AM

The researchers should have had a compartment pop open when the smoke alarm went off, offering to collect valuables.

I wonder if the sheep would recognize they are being robbed and potentially set up to be murdered by the happy-faced robotic overlord.

This approach would also go well in an interview setting – try before hiring.

Roger April 27, 2016 10:25 AM

It is difficult to say what to make of this experiment except that — as other commenters have observed — the data revealed in this summary do not really support the researchers’ conclusions. Perhaps there will be more to it when they publish, but at the moment they seem to be drawing a very long bow on a very short arrow.

Some specific issues:

  1. Did the test subjects actually engage in this as a real scenario? A perennial problem with psych experiments is that by now, everyone who is not totally ignorant of the ways of the world realises that they nearly always involve bullshit games and lying about their objectives. So some subjects spend their time trying to work out the rules of the game, instead of acting normally. We probably see that here with the video of the young man in the “pre-emergency phase”, following the robot around and around in tight circles like a duckling following its mother. It is difficult to believe that he actually thinks this is the way to the conference room, or that spinning on the spot is a sensible way to follow the machine’s directions. Why does he do it? Only he can tell us, but it is doubtful it has anything to do with respecting the robot’s supposed “authority”.
  2. Belief. Closely related to point 1 is the question of whether test subjects believed that there was a real fire. It would have been ethically dubious to create real panic in the subjects, and I doubt that they did. Certainly the short video does not show any of the other cues one would expect: alarms, flashing lights, other evacuees, floor wardens. Plus, the smoke from the generators used for fire evacuation exercises is almost odourless; it smells nothing like fire. If subjects did not believe there was really a fire, this completely undermines interpretations of their actions.
  3. Lost. It is suggested that it is odd for subjects to follow the robot to the back of the building when they had earlier seen a well-signposted exit at the front. This assumes firstly that they paid attention to emergency signs whilst entering a new building, when it is well established that very few people do. It assumes secondly that they had some idea where the front door was — despite being in a novel environment, where they had been led around in a confusing path, and now find themselves in a smoke-filled corridor, unable to see! In that scenario, 9 out of 10 people would have no idea how to find the exit and might follow the robot as the only feasible option.
  4. Sensory overload. The smoke in the video is thick enough to reduce visibility to a couple of feet, and the lighting is dim. Having been in an evacuation exercise like that, I can say it is very confusing, especially in an unfamiliar area. Add the badly designed lamps on the robot: its uppermost light “wands” throw glare in the subjects’ faces, whilst its lower lamps cast a dramatic, narrow pool of light on the floor but leave walls and obstacles in darkness. Hmm, it almost seems designed to baffle the subjects’ senses … In this scenario you might follow the robot simply because it is the only thing you can see.
  5. Semantic overload of “authority”. Oh, they have to trot out the trendy social theories about authority. And as usual, they are semantically overloaded. It is perfectly reasonable to believe that the robot is an “authority” in the sense of having access to useful information in an emergency — surely that is the entire concept and point of an “emergency guide.” But “authority” in this sense is unrelated to social authority.
  6. Meaninglessness of trust. The term “trust” is bandied about as if it is applicable to machines. It is not. No-one really believes the toaster is plotting against them, no matter how many times it burns the toast. The relevant concept here is actually reliability — but, as others have observed, reliability is a many-nuanced concept for a complex machine.

boog April 27, 2016 11:57 AM

@Roger:

6. Meaninglessness of trust. The term “trust” is bandied about as if it is applicable to machines. It is not. No-one really believes the toaster is plotting against them, no matter how many times it burns the toast.

The term “trust” can also mean “confidence”, as in trusting that a machine will function well or poorly, regardless of any intent. Not so meaningless when you look at it that way.

JDM April 27, 2016 12:45 PM

So people took the non-smoke-filled path versus the smoke-filled path and this shows they simply trusted the robot. Okay.

Seems to me they used their judgement to decide that since the robot was showing them a non-smoke-filled exit it would be smarter to go that way than step into a smoke-filled corridor. The researchers were surprised at that?

Marcos Malo April 27, 2016 6:27 PM

@Anura
Mea culpa.

Being a jerky boyfriend to a robot doesn’t strike me as unethical, at least until the robot can make a convincing argument/demonstration that it is conscious (or passage of an AI emancipation amendment).

probably April 28, 2016 12:28 AM

@Chris Johnston

Imagine that you are a technically-inclined habitual cellphone-user…

You’re probably already familiar with the limited accuracy of GPS inside large buildings. You may or may not be aware of the state of the art of locally guided positioning systems. You might or might not know about the longer history of thermal radiation imagery. You should have read a thing or two about the failure modes of image recognition. You likely don’t know too much about how vulnerable today’s software and hardware are to intentional malicious interference. You definitely know that you’re a subject in an experiment.

Is it significant or material which emergency exit you use?

Yes, I’d say it is. Yet the optimal exit choice is determined by a much larger, and more interesting, set of factors than the existence of one lonely bug-ridden robot…

I think it’s a great start though! The tech & psych science communities should be able to learn a lot from each other.

Harald K April 28, 2016 3:27 AM

Guys, hasn’t it occurred to you that college students aren’t entirely dumb? That the vast majority of them probably quickly understood that they were in a silly psych experiment?

Especially when they’re studying psychology themselves, and are very familiar with the practice of performing studies on students because they are so conveniently at hand.

So maybe they did what they did because they thought they were being tested, and this was what they were “supposed” to do. Or maybe out of sympathy they just thought they’d give the researchers something interesting to write about. Never mind generalizing from college students to people in general, this research is probably not reproducible even on college students.

This is the sort of study I’d hoped people did less of in $current_year.

Phreon April 28, 2016 12:37 PM

What do you expect? Regardless of how complex we might seem, in the end, people are robots.

fajensen April 29, 2016 2:27 AM

@Harald K –
Doesn’t matter how dumb or how clever – 30 years after graduating, I still see distinguished people arguing in meetings over numbers which are obviously* off by a factor of 100, and insisting they are correct because they got them from a model. Good thing we have QA, or power stations would be blowing up all over.

Time and time again we see renowned experts blindly following models even when these clearly don’t make any sense and are founded solidly on beliefs!

Look at LTCM – Nobel Prize Winners the lot of them!

We presently have the longest-running recession in Europe. This fact – the fact that present policies do not work, year on year – does not matter: we follow “responsible economic policies” that are defined by economic models baked into robot advisors; the economic experts go right ahead and recommend that politicians adjust their policies to track the predictions of the models, so whatever bugs, assumptions, religion, and academic fraud went into defining those models become something with real consequences.

The predictions thus generated are famously/scandalously often 10, 30 or 100% off, but the show continues. Policies and laws are basically made on garbage advice from demonstrably unreliable robots.

It’s no surprise that this experiment shows the same trend. People will and do really trust a machine more than a person (at least until the machine crosses the “uncanny valley”).

I believe one reason, in this experiment, is that our mirror neuron system does not know about machines, so we simply don’t wonder about a machine’s motives, and the designer of the machine is unseen. People don’t readily think that the designer is evil or incompetent.

The second reason is that it’s easier. If we all are ruined in the same way, by following “best practice” and “expert advice” and “sophisticated modelling”, then it’s not really anyone’s fault – it’s just “the system”, “bad luck”, “unforeseen circumstances”, “an act of God”. There is safety in incompetence.

Third, coercion – we know what happens to whistle-blowers and the nail that sticks out; they don’t get to collect their pension. If they are lucky.

*) By inspection. In one’s head, one can usually “guesstimate” within the right order of magnitude IF one understands the problem.

Peter May 4, 2016 7:29 AM

People should always think about a situation, instead of expecting something in advance or just “trusting” someone or something blindly.
I would never stop thinking just because there is a “robot”. At the very least, there should be something intelligent in place, like Asimov’s robot laws.

Rhialto June 2, 2016 5:52 AM

@Cryo: I think you mean the song Surfin’ Bird by the Trashmen, possibly better known in the cover by The Cramps (or the one by the Ramones).
