Confusing Self-Driving Cars by Altering Road Signs

Researchers found that they could confuse the road sign detection algorithms of self-driving cars by adding stickers to the signs on the road. They could, for example, cause a car to think that a stop sign is a 45 mph speed limit sign. The changes are subtle, though -- look at the photo from the article.

Research paper:

"Robust Physical-World Attacks on Machine Learning Models," by Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song:

Abstract: Deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations. However, recent studies have demonstrated that such adversarial examples are not very effective in the physical world--they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper. In this paper we propose a new attack algorithm--Robust Physical Perturbations (RP2)-- that generates perturbations by taking images under different conditions into account. Our algorithm can create spatially-constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer. We show that adversarial examples generated by RP2 achieve high success rates under various conditions for real road sign recognition by using an evaluation methodology that captures physical world conditions. We physically realized and evaluated two attacks, one that causes a Stop sign to be misclassified as a Speed Limit sign in 100% of the testing conditions, and one that causes a Right Turn sign to be misclassified as either a Stop or Added Lane sign in 100% of the testing conditions.
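The paper's RP2 method solves a fairly elaborate physically-robust optimization, but the core mechanism behind this whole class of attacks is gradient-based perturbation of the classifier's input. A toy numpy sketch (invented weights, a linear softmax stand-in for a real network; not the authors' actual pipeline) shows how little it takes to flip a label:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(x, W, true_class, eps):
    """Fast-gradient-sign-style perturbation against a linear softmax
    classifier: nudge each input feature by +/- eps in the direction
    that increases the loss for the true class."""
    p = softmax(W @ x)            # predicted class probabilities
    grad_z = p.copy()
    grad_z[true_class] -= 1.0     # d(cross-entropy)/d(logits)
    grad_x = W.T @ grad_z         # chain rule back to the input
    return x + eps * np.sign(grad_x)

# Toy 2-class "sign classifier" with made-up weights
# (class 0 = stop, class 1 = speed limit).
W = np.array([[ 1.0, -1.0,  0.5],
              [-0.5,  1.0, -1.0]])
x = np.array([1.0, 0.2, 0.8])     # classified as class 0 (stop)
assert np.argmax(W @ x) == 0
x_adv = fgsm_perturb(x, W, true_class=0, eps=0.6)
assert np.argmax(W @ x_adv) == 1  # bounded perturbation flips the label
```

Real networks are nonlinear and the paper additionally constrains the perturbation to sticker-shaped regions that survive changes in distance and angle, but the gradient-following principle is the same.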

Posted on August 11, 2017 at 6:31 AM • 47 Comments

Comments

Justin Tyme • August 11, 2017 6:46 AM

If a few researchers can fake out self-drivers, think what millions of mischievous teenagers could do. I already foresee teens playing chicken with them. Maybe throwing stuff in front of them. Just for fun.

Meanwhile, robbing freight haulers will likely become a new crime sector. How will self-drivers deal with road rage? (...a de-escalation chip/app?)

It might take longer than many think to get self-driving vehicles right.

Gord • August 11, 2017 7:06 AM

This is why I am entirely against autonomous cars. The only way, in my opinion, it can be even remotely safe is if all the cars on the road are autonomous, removing human interaction and the need for signs. Not that I would be at all comfortable in that scenario either.

Paul Renault • August 11, 2017 7:38 AM

I'd like to see how the autonomous cars do in Québec, where the stop signs say "Arrêt", or where, on "Arrêt/Stop" signs, the "Stop" is taped over to comply with Bill 101 (the French language law).

That, and snowstorms, slush, and generally bad driving conditions.

They're always tested in the easiest possible conditions, no fog, no snow, no rain, no dust storms..and they still manage to fail.

Harald K. • August 11, 2017 7:58 AM

"If a few researchers can fake out self-drivers think what millions of mischievous teenagers could do."

They have to learn to construct adversarial examples first. You need to know a bit about the target's architecture, its classes, and what attacks it's been hardened against. (It's an arms race: it's easy for autonomous car makers to defend against any particular published attack, but seemingly impossible to make the resulting system invulnerable to everything else.) Finally, you need a good deal of computing power.

Since you can already cause a lot more mayhem by swapping road signs by hand, dropping things from bridges, stepping out into the street to trigger a reflex stop, authoritatively directing people down a one-way street, etc., I doubt teenagers will bother.

Rich Marton • August 11, 2017 8:12 AM

Could a projector be used to alter the sign? Project the confusing pattern onto the sign from a distance. It may be hard to project 'black', but by surrounding the 'black' area with an intense enough light, high enough contrast might be achieved. Selectively spoof the sign for a particular, targeted vehicle.

Autonomous vehicles are being tested on more or less straight roads in clear weather. I want to see them negotiate narrow country roads here in rural New England, where the roads were created by following a cow with a paving machine: trees and rocks right up to the edge of the asphalt, and a freeze-thaw cycle that 'grows' rocks up out of the ground. And said road may or may not have a center line or fog lines -- in the winter, with snow banked up on the sides of an already narrow road and a school bus approaching from the other direction.

Dean Webb • August 11, 2017 8:40 AM

New spy thriller flash fiction:

***

As the Himynamistan diplomatic convoy made its way to the intersection, the Dassom agent noted their passing as he sat slumped and fetid, like countless other bums on the streets of San Francisco. The convoy made its halt at the stop sign, autonomous brakes holding firm against the gravity of the downward slope.

As the convoy yielded right-of-way to the cross traffic, the Dassom agent, nameless in the shadows of the alleys of dumpsters between glittering financial monuments, lifted a small infrared controller and pointed it at the 18-wheeler loaded with pig iron that was rolling along just behind the convoy.

The Dassom agent pressed a button on the IR device and shot a signal to the 18-wheeler.

You know, how that big truck got to the top of the hill with all that metal in it was a testament to the builders of the engine in that beast of a machine. Well done, lads! Such a shame that the engineering and craftsmanship were going to be wrecked soon after the truck's driving software interpreted the IR signal as a manual emergency override to disengage all braking systems and to accelerate.

The Dassom agent did not turn to one side or the other, but kept the metallic collision between the truck and the Himynamistan diplomats in their unmoving vehicles to his back. Most of the wreckage went forward, towards the cross street traffic, but a few small ricochets bounced off the back of the agent's hoodie.

***

And that's how it all looks like an accident in the world of autonomous vehicles... software failure...

There is No Comparison • August 11, 2017 8:51 AM

The big-data collectors know they need smart markers embedded within the pavement and signs, then new designs for traffic lights. They won't admit to this trillion-dollar expense because they have to hoodwink the consumer first. So they lobby, pay for reelection, and bias the press.

If you want the best deal in freedom, security, convenience, price and avoiding tracking and advertising, then go in the opposite direction of the herd mentality.
I LIKE driving and going anywhere on a whim. Who wants to get in the car with criminals? Who stops the car in high-crime areas? My superior human situational-awareness algorithms put my family's life first in avoiding accidents and becoming another crime statistic.

However, alcoholics (1 in 8 Americans) and drug addicts are the exception... lock them in/buckle them up as part of the sentencing.

Gord Wait • August 11, 2017 10:24 AM

Time for radio beacons that broadcast the sign info. Have to be encrypted so it can't be spoofed, of course. Combine with mapping data for extra verification..
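For what it's worth, authenticity matters more here than encryption: a beacon's contents aren't secret, so what you need is a signature (or at minimum a keyed MAC) plus a timestamp against replay. A minimal Python sketch of the idea, with an invented message format and a shared secret standing in for what would realistically have to be per-sign asymmetric keys and a PKI:

```python
import hmac, hashlib, json, time

SECRET = b"shared-roadside-key"   # hypothetical; a real deployment needs PKI

def make_beacon(sign_type, limit_mph, lat, lon):
    """Roadside transmitter: serialize the sign data and append a MAC."""
    msg = json.dumps({"sign": sign_type, "limit": limit_mph,
                      "lat": lat, "lon": lon, "ts": int(time.time())},
                     sort_keys=True).encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return msg, tag

def verify_beacon(msg, tag, max_age_s=10):
    """Car receiver: reject forged, corrupted, or replayed beacons."""
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None               # forged or corrupted
    data = json.loads(msg)
    if int(time.time()) - data["ts"] > max_age_s:
        return None               # stale: likely a replay
    return data

msg, tag = make_beacon("speed_limit", 45, 38.89, -77.03)
assert verify_beacon(msg, tag)["limit"] == 45
assert verify_beacon(msg, "0" * 64) is None   # tampered tag is rejected
```

A symmetric key baked into every car would of course leak eventually, which is why a real system would sign beacons with per-sign private keys and verify against a distributed certificate store. And as the next comments note, the map data would still be needed as a cross-check.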

Peter S. Shenkin • August 11, 2017 10:50 AM

In rural parts of the US, a high percentage of road signs are filled with shotgun or bullet holes. You know -- target practice, or just Friday-night mischief after a few beers. I wonder what autonomous driving systems would make of those....

Sean Colbath • August 11, 2017 11:19 AM

This paper presents an interesting approach, but their methodology is pretty poor, and the title of THIS blog entry is completely misleading.

A correct title would be "Researchers find a way to fool a classifier that they themselves built".

Little time is given in the paper to detailing their classifier, or to comparing it to one that might be deployed in a car. Although they use an open data set, we do not know what features their classifier uses. And they admit the native performance of the system is only 90%.

There are 17 sign classes in the training data, and they are not uniformly represented: the four "speed limit" classes together amount to a quarter of all the training data. If the system is going to make an error, it is more than likely to err into one of those four classes.

The takeaway for me was that their classifier sucks.

mark • August 11, 2017 11:23 AM

As I've said before, I don't see a good use for self-driving vehicles *other* than in a special lane on the freeways. Streets, esp. in a metropolitan area, are asking for people getting hit, and other accidents. In large numbers.

And here's my ultimate "do you want to see your self-driving car freeze, with smoke coming out of its ears, like Robbie the Robot" test: point your favorite map to Dewey Rd & Garrett Park Rd, Wheaton-Glenmont, MD. Now look west. The double yellow line goes away. It's three lanes, with parking in one and a barrier between the road and the park on the other side. No lines AT ALL. Oh, and full-sized city buses drive on it. And it's two-way.

albert • August 11, 2017 11:26 AM

At least in the US, ALL octagonal signs are STOP signs, regardless of what's printed on them. The type of sign is identified by its shape. So the classifier mistook an octagonal STOP sign for a rectangular SPEED LIMIT sign? A 5-year-old wouldn't make that mistake.

One would assume that any 'researcher' who's not a total idiot would consult https://mutcd.fhwa.dot.gov/ and educate themselves. This includes the 'experts' who write the code.

And people wonder why they're being warned about the danger of autonomous vehicles.

You guys may want to review the discussions on voting systems in this blog. Every 'solution' to a technical problem can be circumvented. A few commenters pointed out the obvious: hand-counted paper ballots, reality-checked by exit polls.

Sadly, I'm even surprised at this.

How disappointing...

. .. . .. --- ....

Ollie Jones • August 11, 2017 12:08 PM

Automated sign-reading is not as brittle a setup as everybody thinks, at least in the Tesla Model S / X world. Leastways, it's not subject to massive disruption by hacking road signs.

The forward-facing cameras do read signs. But the cars also remember, both locally and fleetwide, where the signs are located along the road.

My commute has the proverbial 45mph speed limit sign. One early morning after a winter storm it was covered with rime ice and totally unreadable. Nevertheless the sign on the dash panel switched at the right moment.

I know Tesla is using me and their other customers to gather global information on this stuff too. And that's fine with me.

Fully autonomous driving is going to require the same sort of memory that drivers use when they notice that something has changed.

Anura • August 11, 2017 12:44 PM

This is what happens when your commercial success depends primarily on being first to market. Image recognition is hard, but they focused on getting good enough to rush to market and nothing more. Honestly, relying on fuzzy logic for self-driving cars is dumb. We should be working on developing a secure navigation infrastructure before we become dependent on some half-assed solution.

Winter • August 11, 2017 12:59 PM

"Nevertheless the sign on the dash panel switched at the right moment."

I assume autonomous cars all have a navigator program that is at least as capable as my years-old TomTom gadget, which knows all the speed limits, one-way streets, etc.

Street sign recognition will generally be backed up by map knowledge.

Still, these autonomous cars should have very robust AI inside to handle recognition failures.
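The map-backup idea above doesn't need to be elaborate to blunt a sticker attack. As a purely hypothetical fusion rule (not any vendor's actual logic), a car could accept the camera's reading only when it agrees with the map, and fall back to the more conservative limit on disagreement:

```python
# Hypothetical camera/map fusion rule for speed limits. Names and the
# confidence threshold are invented for illustration.
def effective_speed_limit(camera_mph, map_mph, camera_confidence,
                          confidence_threshold=0.95):
    if camera_mph == map_mph:
        return camera_mph
    if camera_confidence >= confidence_threshold:
        # High-confidence disagreement: the sign may genuinely have
        # changed, but err on the side of the slower limit until
        # fleet/map data confirms the change.
        return min(camera_mph, map_mph)
    return map_mph            # low confidence: trust the map

assert effective_speed_limit(45, 45, 0.90) == 45
assert effective_speed_limit(45, 25, 0.99) == 25  # spoofed "45" ignored
assert effective_speed_limit(45, 25, 0.50) == 25
```

Under this rule, the sticker attack in the paper can never raise the effective limit above what the map says; the residual risk is stale map data, which is the "rezoned overnight" problem raised further down the thread.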

Jeremy Castaneta • August 11, 2017 1:47 PM

A few other things that are confusing self-driving cars:

-Kangaroos in Australia
(https://www.theguardian.com/technology/2017/jul/01/volvo-admits-its-self-driving-cars-are-confused-by-kangaroos)

-Badly painted roads
(http://www.dailymail.co.uk/sciencetech/article-3517050/Wheres-lane-Self-driving-cars-confused-shabby-U-S-roadways.html)

-Hipster cyclists
(http://www.businessinsider.fr/us/google-self-driving-cars-get-confused-by-hipster-bicycles-2015-8/)

(This list doesn't even consider malicious elements like stickers on road signs, Barbie dolls waved in front of the camera, DoS/jammer attacks or remote hacks)

Marty • August 11, 2017 2:00 PM

I seem to remember that this is a fairly common problem with the simplest neural network image classifiers. It's also fairly simple to counter: you just train your classifiers against a hostile network designed to generate misleading images. In fact, the technique they are using in this paper could be very handy for ensuring that someone's classifier is robust.
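That countermeasure is usually called adversarial training: generate worst-case perturbations of the training data and train on those as well as the clean data. A minimal numpy sketch on a toy linear problem (FGSM-style perturbations, invented data; real systems do this against deep networks, and it is not a complete defense):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy two-feature binary problem: class 1 iff x0 + x1 > 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def adversarial_train(X, y, eps=0.2, lr=0.5, epochs=200):
    w = np.zeros(2)
    for _ in range(epochs):
        # Craft worst-case (fast-gradient-sign) versions of the data...
        p = sigmoid(X @ w)
        grad_x = (p - y)[:, None] * w[None, :]   # d(loss)/d(input)
        X_adv = X + eps * np.sign(grad_x)
        # ...then descend on the loss over clean + adversarial batches.
        for Xb in (X, X_adv):
            p = sigmoid(Xb @ w)
            w -= lr * Xb.T @ (p - y) / len(y)
    return w

w = adversarial_train(X, y)
acc = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
assert acc > 0.9   # robustly trained model still fits the clean data
```

The catch, as other commenters note, is the arms race: training against one perturbation budget hardens the model against that attack, not against the next one.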

Impossibly Stupid • August 11, 2017 2:22 PM

@Harald K.
"They have to learn to construct adversarial examples first."

Not really. That's the problem with the push of weak/fake AI that is popular these days: there are so many edge conditions to the classification that, in sufficient numbers, random chance is going to find the dangers for you, and then word of the vulnerability will spread. When you have millions of cars on the road scanning thousands of signs every day, that's a lot of opportunities to go wrong. Some random splash of paint or bird poop or a stuck leaf or some other one-in-a-billion exception happens that causes havoc, and it becomes an exploit that can be replicated and refined.

nippy biplane • August 11, 2017 2:27 PM

Or even simpler: kids, for instant fun, just creep up your neighbor's driveway and spray a bit of paint on his self-driving car's camera lens, or smear some grease on it. He's not going to be driven around by any computer for a while!

Anyway, why would I want to buy a car that will deliberately kill me if it calculates that the guy that has just jumped in front of my vehicle on the motorway happens to be a bit younger than I am?

Tom Erlwein • August 11, 2017 2:45 PM

We've got a big issue with the reliability of self-driving car safety stats. In order to influence legislators and sway public opinion, manufacturers are distorting data when it comes to disclosing accidents. One of the ways they do this is by stretching the definition of "not at fault." For instance, if a self-driving car is suddenly unable to detect the road it will force a switch to manual mode. If the person behind the wheel does not react in time, the incident is not regarded as the car's fault.

https://www.digitaltrends.com/cars/automated-car-takeover-time/
https://www.wired.co.uk/article/google-car-human-control

Daniel D May • August 11, 2017 2:55 PM

This can all be solved with an RFID chip in everything. Including the back of your neck next to the barcode muhuahahahaha!!!!1

>:)

Jeremy • August 11, 2017 3:00 PM

So they used some black and white boxes to make a stop sign look like a 45 mph speed-limit sign.

Then they used a subtle full-sign overlay to make a right-turn sign look like a 45 mph sign.

Then they made a faded stop sign look like a 45 mph sign.

Then they made an attack disguised as graffiti that looks like a 45 mph sign...

Is anyone else noticing a pattern here? Like maybe the particular car they're testing has an unhealthy bias towards seeing everything as a 45 mph sign?

-=-

Pragmatically, I think the solution here is obviously that a self-driving car has to do something safe even in the event that the signs are wrong. The fact that a sign can be maliciously modified so that it looks wrong to a car without looking wrong to a human is interesting and somewhat worrying, but all drivers (human and computer) need to cope with more obvious attacks where a sign is simply removed or replaced wholesale and still NOT CRASH, and if it can do that, then it shouldn't crash in the subtle cases, either.

albert • August 11, 2017 3:34 PM

@Anura,
You refer to "fuzzy logic". Are you referring to Lotfi Zadeh's control paradigm? If not, please find another term. Fuzzy logic is used (quietly) around the world in control systems, and very successfully.
..
@etc.
First self-driving cars, then package-delivery drones, then self-flying aircraft. Aircraft manufacturing is a multi-billion dollar industry, highly dependent upon a multi-billion dollar GPS system and multi-billion dollar radar and ATC systems. Autos have none of that infrastructure, so we, the taxpayers, will have to provide it.

. .. . .. --- ....

65535 • August 11, 2017 4:24 PM

@ Will The Thrill

The lane markers are a significant thing in the very large area I am in. They are frequently "sanded off" and replaced by temporary line markers, which are confusing to normal drivers, let alone computer-driven cars [a bit of the old lane marker is still there and the new lane marker is also there].

Traffic-congestion car crashes are another major factor where I drive. Frequently, a car or truck will cut instantly in front of you, leaving you with very little stopping distance -- leading to a number of car wrecks. Cars hitting you from behind are another concern.

I also see repair trucks dropping large [or small] objects such as ladders and so on. I am unsure how the self-driving car algorithms account for those situations.

Also, rain and slick roads cause cars to spin out in front of drivers, or regain traction at certain angles and travel into other lanes. These problems should be solved before self-driving cars are allowed on the roads in my area.

The idea of some kid painting over the car camera is unsettling yet will probably happen.

@ mark

“As I've said before, I don't see a good use for self-driving vehicles *other* than in a special lane on the freeways.”

I agree fully. The legal problems with these type of cars could mushroom once car crashes start to occur.

[update on Hutchins case]

Motherboard has the transcript of the Hutchins [sp?] bail or "Identity Hearing", which is somewhat confusing. As Emptywheel notes, the identity of the person(s) who were involved in potentiating the code into a full banking Trojan is at the heart of the case -- and should possibly have been stated at the bail hearing.

Next, I see a cash bail of 30,000 USD was set. What about the bail bonds companies, where the bond is usually 10 percent of the total bail? Any legal opinion on the bail amount?

Transcript link
https://www.documentcloud.org/documents/3923335-USA-v-Marcus-Hutchins-August-4-2017-Hearing.html


AJWM • August 11, 2017 5:34 PM

I've done far too much winter driving where an ongoing snowfall leaves the road covered in a layer of snow (sometimes several inches deep) and your only choice really is to follow the wheel tracks of whatever vehicle is ahead of you, actual lanes be damned. Just hope that that driver (or whoever he was following) knew where he was going.

And forget reading the signs when they've accumulated a layer of ice/snow.

For that matter, if actually driving in falling snow or freezing rain, I don't imagine the car's cameras would have much of a view anyway, unless they have wipers/heaters of their own.

Impossibly Stupid • August 11, 2017 5:47 PM

@Jeremy
"Is anyone else noticing a pattern here?"

No, that's just the way the technology works. You train the system with some example speed limit sign images (or whatever) and that's all it "learns" to recognize. The "hack" is then to either make it give a false positive or a false negative. It's relatively easy to do because deep learning algorithms don't have any intelligence to them; they learn simple associations rather than significant meaning. So if it just learned to associate a certain small set of pixels with a 45mph sign, all you have to do is tweak those corresponding parts of a different sign and you get an instant misclassification 100% of the time.

"Pragmatically, I think the solution here is obviously that a self-driving car has to do something safe even in the event that the signs are wrong."

But it's not just signs that can be misclassified. It's other cars. Or lane markings. Or pedestrians. All its data could be suspect, but then it really can't function at all if it has to be perfectly safe. There's a tradeoff, and it mainly comes down to whether or not such systems can be safer than humans under the same conditions. I don't think they will be any time soon, and maybe not at all without strong AI.

neill • August 12, 2017 3:42 AM

sure they crossreference street signs with maps and data from other cars

problem would be IF a city rezones and changes signs 'overnight' - do you believe the new signs or use the old data? which overrides what? new crosswalks, stopsigns, speedlimits etc

you would need an official city database tied in

other point i never see discussed is the SEATING position of the occupants - daimler had a showcar where the 2 front seats can rotate so the passengers can face each other

that would render airbags almost useless, more deaths from broken necks!

JF • August 12, 2017 9:24 PM

from the Washington Post, 20 years ago:
"By Donald P. Baker June 19, 1997
It began as a youthful prank. Three friends, during a night of beer drinking, stole a bunch of highway signs to decorate the mobile home they shared....

Soon after their night of revelry and vandalism in February 1996, a car roared through an intersection where a stop sign was missing and was broadsided by a Mack truck. Three teenagers riding in the car died instantly."

I feel sure many such examples could be found, both of the malicious variety and the plain stupid, such as the example above.

It seems to me that autonomous cars, having some redundancy, will still prove an improvement over humans, who suffer from youth and inexperience, old age and decrepitude, inebriation, fatigue, and just plain distraction. Oh yes, and also medical emergencies such as cerebral stroke, heart arrhythmia and critically low blood sugar levels.

I still enjoy driving, but I also believe this technology will achieve a greater level of safety than human drivers, and I expect that day will not be so far off as some here profess to believe.

Bob Paddock • August 15, 2017 7:21 AM

A couple of years ago I attended a White House Office of Science and Technology Policy hosted event at Carnegie Mellon University about robotics. Top people from Google, Uber, CMU, many others, and government acronyms such as DARPA and IARPA were there. This is what they were most concerned with all day:

"Statistically we know that we WILL kill a Child [with our technology]. How do we handle the aftermath of that?"

They had no answer.

IARPA also mentioned that one of the most concerning things they study is "an Employment Event", their euphemism for economic collapse.

On the way to work I took note of a tree branch weighted down with rain on its leaves. It completely obscured the stop sign behind it. If you did not know this intersection, you'd just blow through it. Also, what about fog? There have been times I've had to roll down the windows and listen, because trying to see anything was pointless.


Hansel and Gretel • August 16, 2017 7:59 AM

In this paper we propose a new attack algorithm--Robust Physical Perturbations (RP2)-- that generates perturbations by taking images under different conditions into account. Our algorithm can create spatially-constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer.

Funny.

Google Car gets lost in the woods.

How does Google Car AI deal with two kids on bicycles on the highway shoulder who get chased by a dog from the yard adjacent to the two-lane highway, in heavy traffic, and both kids enter the highway to escape the bites of a mad dog? (This actually happened to me. Nobody died.)

Hansel and Gretel got lost in the woods and became the witch's dinner. In this case, you'll be eaten by the Google Car who becomes the Dumbest Witch in the West.

Jim A. • August 16, 2017 8:25 AM

"If a few researchers can fake out self-drivers think what millions of mischievous teenagers could do. " --Well they have been known to print out a replica license plate, attach it to a car and run through red light cameras at speed so that a hated teacher gets tickets.

"Street sign recognition will generally be backed up by map knowledge." Which is why this sort of hack will work best with temporary markings. The most effective target would seem be those "stop one side/slow the other side" signs that flagmen use. They're not overwritten by map data, and they are the same shape on each side. A few minutes access to the signs could lead to ....hijinks. Somebody speeds past the flagman and then veers into the construction when confronted with an oncoming car.

JF • August 16, 2017 9:26 AM

@Hansel and Gretel: "How does Google Car AI deal with two kids on bicycles on the highway shoulder who get chased by a dog from the yard adjacent to the two-lane highway, in heavy traffic, and both kids enter the highway to escape the bites of a mad dog? (This actually happened to me. Nobody died.)"

The Google car stops, if possible given the laws of physics, just as an alert driver would, only probably with a bit better response time. With a distracted or sub-par driver? No such happy ending.

Search "children struck by car after running into the road". You can read hundreds of real world cases where human drivers have struck and killed children who ran into the road.

Hansel and Gretel • August 16, 2017 10:30 AM

@ JF

"The Google car stops, if possible given the laws of physics, just as an alert driver would, only probably with a bit better response time. With a distracted or sub-par driver? No such happy ending."

As I said, nobody died.

A Google Car would have killed both of them.

I was in a line of cars travelling at ~60MPH... spaced about 1-2 seconds apart.

Oncoming traffic was the same.

Three northbound cars (and an equal number of southbound cars) managed to avoid head-on collisions while saving two kids and a dog.

Two crashes.

Google would have killed both kids or caused 6 head-ons.

Death Race 2000.

Hansel and Gretel • August 16, 2017 11:21 AM

@ JF

PS. Google Cars would probably continue in their predetermined forward direction without hitting the brakes and run over the kids hundreds of times, because the programmers hadn't figured out how to put a signpost on the map for a mixture of human-canine roadkill hamburger on the pavement --- until version 666666666666666666.1

JF • August 16, 2017 2:09 PM

@ Hansel and Gretel

Kudos to you and five other drivers who saved two human lives and one canine. I also have a personal story that pertains.

A few years ago, my wife's aunt, who lived not far from us, was out for a Saturday morning walk, something she did most every morning. Crossing a two lane residential street, speed limit 30 mph, in a crosswalk marked with white stripes and large bright yellow signs on both sides of the street announcing Stop for Pedestrians in Crosswalk, she was struck by a driver who did not touch his brakes until after the impact. The driver coming from her left had stopped, the driver who struck her, coming from her right, stated afterward he did not see her. It was a bright sunny morning without any obstructions to his line of sight. She was airlifted to a nearby city, but succumbed to her injuries about 8 weeks later.

In the US in 2015, there were 5376 pedestrians killed by vehicles, and of those, 456 were 19 years of age or younger. It is apparent that human drivers do not set the standard for safe driving very high. I am pretty sure an autonomous vehicle would have spared our aunt a horrible death.

https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812375

Impossibly Stupid • August 16, 2017 8:19 PM

There is no way anyone can be certain that any lives would be saved or lost at this point. There just hasn't been any large scale testing of fully autonomous vehicles in a sufficiently wide variety of circumstances. I'll believe the technology is ready for prime time when the CEOs of these companies mandate that all their own cars be self-driving, and maybe even require that their other executives and board members buy the same.

Hansel and Gretel • August 17, 2017 10:27 AM

@ Impossibly Stupid

"I'll believe the technology is ready for prime time when the CEOs of these companies mandate that all their own cars be self-driving, and maybe even require that their other executives and board members buy the same."

That won't happen.

They like driving their McLarens, BMWs, Mercedes, Rolls, Bentleys, Lamborghinis, and Ferraris.

"In the US in 2015, there were 5376 pedestrians killed by vehicles, and of those, 456 were 19 years of age or younger. It is apparent that human drivers do not set the standard for safe driving very high. I am pretty sure an autonomous vehicle would have spared our aunt a horrible death."

It won't be settled by you, or by death rates. It will be settled by your insurance company(ies), who have to pay the bills.

If these brain-dead AI cars do gain market acceptance, it will be AFTER the insurance companies lobby their favorite whore to give "AI cars" the right of way over pedestrians -- like in Mexico.

Always be sure to look both ways before you cross the street when you are a pedestrian in Mexico City.

Hansel and Gretel • August 17, 2017 8:25 PM

You know, I've been thinking about it.

What if all these automobile-human-lawnmowers in Charlottesville and Europe are being "driven" by Google AI employees?

After all, the "authorities" are always looking for the MIA drivers.

What if Google Execs are trying to create a market for their driverless cars?

Hansel and Gretel • August 17, 2017 11:15 PM

PS

At the same time, these are also Google Car test drives (like a test pilot in a new fangled airplane).

Why?

They need to "take readings" from the car impact sensors to teach the Google AI to know the difference between impact with guardrail, another vehicle, and human pedestrians from various angles.

Why?

So they can figure out how to draw a signpost on the map indicating human roadkill, so Google Cars following up the rear don't turn the victims into a puddle of hamburger.

TM • August 18, 2017 7:06 AM

While computers may be overall less error-prone than humans (for the tasks they are programmed for, assuming they are well programmed), when they make mistakes, they make them consistently. That is a big problem for the self-driving car. We just have no idea yet what could possibly happen, or what ripple effects subtle bugs in the software could have, once they are deployed at large scale. In addition, there is the specter of a vulnerability that a malicious attacker could use to wreak large-scale havoc. I don't see how this could ever be excluded.



Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.