Attacking Driverless Cars with Projected Images

Interesting research—"Phantom Attacks Against Advanced Driving Assistance Systems":

Abstract: The absence of deployed vehicular communication systems, which prevents the advanced driving assistance systems (ADASs) and autopilots of semi/fully autonomous cars from validating their virtual perception of the physical environment surrounding the car with a third party, has been exploited in various attacks suggested by researchers. Since the application of these attacks comes with a cost (exposure of the attacker’s identity), the delicate exposure vs. application balance has held, and attacks of this kind have not yet been encountered in the wild. In this paper, we investigate a new perceptual challenge that causes the ADASs and autopilots of semi/fully autonomous cars to consider depthless objects (phantoms) as real. We show how attackers can exploit this perceptual challenge to apply phantom attacks and change the abovementioned balance, without the need to physically approach the attack scene, by projecting a phantom via a drone equipped with a portable projector or by presenting a phantom on a hacked digital billboard that faces the Internet and is located near roads. We show that the car industry has not considered this type of attack by demonstrating the attack on today’s most advanced ADAS and autopilot technologies: Mobileye 630 PRO and the Tesla Model X, HW 2.5; our experiments show that when presented with various phantoms, a car’s ADAS or autopilot considers the phantoms as real objects, causing these systems to trigger the brakes, steer into the lane of oncoming traffic, and issue notifications about fake road signs. In order to mitigate this attack, we present a model that analyzes a detected object’s context, surface, and reflected light, which is capable of detecting phantoms with 0.99 AUC. Finally, we explain why the deployment of vehicular communication systems might reduce attackers’ opportunities to apply phantom attacks but won’t eliminate them.

The paper will be presented at CyberTech at the end of the month.

Posted on February 3, 2020 at 6:24 AM • 22 Comments

Comments

Phaete • February 3, 2020 6:49 AM

So a tool that relies on reflected light (how we see colour) is fooled by projected light.
A shot at an open goal, but props for the work they did; it looks solid.

Ruz • February 3, 2020 7:39 AM

Hmm. Is this another vector for accessing these systems that might not be secured? Drive-by malware infection, literally? 🙂

K.S. • February 3, 2020 8:03 AM

There is no reason to think that optical image processing would be immune to standard I/O attacks; in this case, spoofing. A more interesting vector would be attacking the image-processing algorithm directly (think seizures for your autopilot). Think DoS (e.g. rapidly-changing road signs), overflow (e.g. a crafted image attack), and so on.

I am largely amazed that humans up to this point are immune to such attacks. We only know of inducing seizures and creating optical illusions. We are much more robust than any information processing system we have designed. I hope smarter tech doesn’t lead to figuring out how to reliably crash humans.

Winter • February 3, 2020 8:42 AM

The old Roadrunner cartoon series used this spoofing attack extensively. However, in the cartoon, the results were not always realistic.

Mike D. • February 3, 2020 9:49 AM

@K.S. We’re only “immune” to those attacks to the extent that we can recognize them and “call bullshit” on them, dropping them from consideration. There are attacks that can work on us, such as simply erecting a fake sign.

Reminds me of something that happened to me last year. I’ve got a 2016 BMW M235i with the “speed limit assistance” option installed. Basically, there’s a forward-facing camera mounted by the rear-view mirror; it looks for speed limit signs and tries to read them, then displays the limit on the instrument panel if I have that mode selected.

Somewhere along I-35 en route from Austin to San Antonio, it became convinced it saw a “Speed Limit 85” sign. In a construction zone where the speed limit was 60, no less. Knowing that this highway doesn’t go over 75 here, I could call bullshit and ignore the reading.

But if Tesla Autopilot made this mistake, and had control of the car? Unless it had some database of maximum speed limits vs. GPS coordinates, it would be hard pressed to detect the outlier.
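A sanity check of that sort is easy to sketch; the segment names and caps below are invented for illustration, not drawn from any real map database:

```python
from typing import Optional

# Hypothetical plausibility check: cross-reference a camera-read speed limit
# against a maximum plausible limit for the current road segment (the kind of
# database of maximum speed limits vs. GPS coordinates mentioned above).
MAX_PLAUSIBLE_LIMIT_MPH = {
    "I-35/construction-zone": 60,
    "I-35/normal": 75,
}

def vet_camera_reading(reading_mph: int, road_segment: str) -> Optional[int]:
    """Return the reading if plausible for this segment, else None (outlier)."""
    cap = MAX_PLAUSIBLE_LIMIT_MPH.get(road_segment)
    if cap is not None and reading_mph > cap:
        return None  # e.g. an "85" read in a 60 mph zone gets discarded
    return reading_mph
```

With no map entry for a segment, the check degrades to trusting the camera, which is exactly the failure mode worth worrying about.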

Wish I could have seen the camera data that led to the glitch.

Samuel Johnson • February 3, 2020 10:22 AM

When I was a lad the trick was for two boys to stand on opposite sides of the road, near a corner and bending over, then suddenly pull up an imaginary rope.

MikeA • February 3, 2020 11:02 AM

Of course, the answer is right in front of us, per the synopsis: Deploy Vehicular Communication systems. Since in this timeline (which I seem to have drifted into), Bluetooth, WiFi (even the original version), “We are all reasonable people” Internet standards, inband telephone signalling, misleading or forged telegrams, forged letters, whispered lies, etc, have never existed. We have always had perfect security for any communication that affected health and safety.

Rainbows and Unicorns all around…

Frank Kienast • February 3, 2020 11:05 AM

Why can’t a Tesla simply confirm that the object it detects appears on its radar? A projected image should not show up on radar. Surely radar is being used in other ways? Even my Toyota has radar that allows its adaptive cruise control to maintain a constant distance behind the car ahead as that car changes speed.

The software could simply confirm that the object it is changing behavior based upon also shows up on radar. If it does not, something is amiss, so signal the driver that something is wrong and let the driver take control. This should be a very rare occurrence, so no significant inconvenience to the driver.
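A minimal sketch of that cross-check, assuming hypothetical detection structures and made-up tolerance thresholds (none of this is Tesla’s actual API):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    distance_m: float    # estimated range to the object
    bearing_deg: float   # angle relative to the vehicle's heading

def radar_confirms(camera: Detection, radar_hits: list,
                   max_range_err_m: float = 3.0,
                   max_bearing_err_deg: float = 5.0) -> bool:
    """True if any radar return roughly co-locates with the camera detection."""
    return any(
        abs(hit.distance_m - camera.distance_m) <= max_range_err_m
        and abs(hit.bearing_deg - camera.bearing_deg) <= max_bearing_err_deg
        for hit in radar_hits
    )

def decide(camera: Detection, radar_hits: list) -> str:
    if radar_confirms(camera, radar_hits):
        return "brake"          # both sensors agree: treat the object as real
    return "alert_driver"       # camera-only object: hand control to the driver
```

A projected phantom would produce a camera detection with no matching radar return, so decide() would fall through to alerting the driver rather than braking.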

JG4 • February 3, 2020 11:18 AM

@Frank – Sensor fusion was a good idea before our ancestors began to correlate light and sound. It still is, not that it is easy to program or compute. The F-35 disaster probably includes some that doesn’t work.

https://www.nakedcapitalism.com/2020/02/links-2-3-2020.html

Which pot strain works best for gambling? Vegas budtenders share their tips LA Times. News you can use!

Big Brother Is Watching You Watch

Google Maps Hacks Simon Weckert. β€œ99 second hand smartphones are transported in a handcart to generate virtual traffic jam in Google Maps.” This is awesome.

Hiding in plain sight: activists don camouflage to beat Met surveillance Guardian

Imperial Collapse Watch

Costliest U.S. Carrier Isn’t Ready to Defend Itself, Tests Show Bloomberg

Rj Brown • February 3, 2020 11:54 AM

Just a historical (hysterical?) note: Inter-vehicle communication has been around since the early seventies, or even earlier. Does anyone remember CB?

Impossibly Stupid • February 3, 2020 12:38 PM

@K.S.

I am largely amazed that humans up to this point are immune to such attacks.

I’m not. As has been noted, humans engage in higher-order thinking about the world that is far beyond what machine learning algorithms can match. Even so, we make mistakes in perception and cognition that can lead to car crashes or otherwise generally acting against our own best interest.

@Mike D.

Wish I could have seen the camera data that led to the glitch.

Bruce has covered attacks on traffic sign readers before, and it’s dangerously bad how easy they can be to fool by small tweaks to significant pixels that humans wouldn’t even notice. It’s not hard to imagine that an adulterated “65” could be seen as 85, or possibly even the I-35 sign could have been mistaken for part of “SPEED LIMIT 85”. Machine learning is stupid that way, and anyone calling it AI should be properly laughed out of the field.

@Frank Kienast

Why can’t a Tesla simply confirm that the object it detects appears on its radar?

I’m reminded of an old saying along the lines of: “A man with one watch always knows what time it is. A man with two watches is never certain.” If you follow the link, you’d also see that some of the phantom images are projected onto items that would show up on *DAR. The proper solution will not come from a simple confirmation of more and more data.

mark • February 3, 2020 1:34 PM

Great… and no one’s thought about the likeliest event: a truck or van driving near it with a graphic on the side, and those are getting more realistic all the time.

And what about buses that are covered in that mesh used for ads?

Or, of course, the video display (and overly bright) billboards?

Clive Robinson • February 3, 2020 3:26 PM

@ R J Brown,

Inter-vehicle communication has been around since the early seventies, or even earlier.

More than a century, in fact.

During WWI there were some vehicles fitted with what were effectively “tuned spark gap” transmitters and the corresponding receivers.

Shortly before WWII the police in London had radio vehicles and were experimenting with AM on VHF around 80 MHz. The building used for the testing (in Barnes) has incorrectly been identified either as part of the “Radio Security Service” (RSS), to which the amateur radio operators forming the early “Voluntary Wireless Interceptors” and later “Y Stations” sent their intercepts (that was actually Arkley View in Barnet), or as an “X Station”.

Rj Brown • February 3, 2020 3:51 PM

From an earlier post:

Bruce has covered attacks on traffic sign readers before, and it’s dangerously bad how easy they can be to fool by small tweaks to significant pixels that humans wouldn’t even notice. It’s not hard to imagine that an adulterated “65” could be seen as 85, or possibly even the I-35 sign could have been mistaken for part of “SPEED LIMIT 85”.

This certainly smacks of gross over-training to me. Part of the difficulty of machine learning is to not over-train, as that spoils generalization and error robustness. It’s similar to over-fitting when performing regression on a dataset. You gotta know when to stop — when enough is enough.

Mike D. • February 4, 2020 8:32 AM

@Impossibly Stupid: Yes, I’ve seen Bruce’s prior work. I was wanting to see the data in my particular case. Pity it’s locked away.

@mark The truck that decapitated a Tesla (and its driver) in Florida was inadvertently doing just that: the light-colored trailer was apparently mistaken for sky by the car.

Anyway, I just hope someday people start getting credit for how difficult driving is and how versatile, though imperfect, the human brain is.

Impossibly Stupid • February 4, 2020 12:36 PM

@Rj Brown

This certainly smacks of gross over-training to me.

That’s another laughable concept that should get people ejected from the field of Artificial Intelligence. If the system gets worse when it is supplied with more data, it is fundamentally learning the wrong things. Honest people that understand how the current fad of machine learning works know this. If the system depends on the humans building it to precisely “know when to stop”, it isn’t AI and shouldn’t be seen as reliable enough to trust with anyone’s life.

@Mike D.

I was wanting to see the data in my particular case.

Yeah, that’s the human reaction to such an outcome. People somehow think that if they see what the system was seeing they can figure out an answer to an implicit question of “What was it thinking?” But that’s the problem; it’s not actually thinking about anything meaningful, so you won’t get a satisfactory answer. It’s just correlating pixels in the images it gets fed. It doesn’t know what a sign is or what letters are or what numbers represent. From a security/safety perspective, the more useful thing to do with that kind of data is generate an adversarial network (as is done by these researchers and others Bruce highlights) that allows us to better understand why these algorithms inherently cannot be absolutely trusted.

Thatguy • February 4, 2020 5:15 PM

Personally, I don’t see fully autonomous automobiles happening en masse. Yes, I see them working in select cities and on certain well-documented highway routes. Here is the thing: think about rural areas such as dirt roads, and extreme climates like blizzards. Do you want to get stuck in a blizzard in the middle of nowhere, or drive into a lake or off a bridge because of a GPS or navigation error? Even IF we are able to somehow handle those scenarios, until EVERYONE goes fully autonomous, how much safer is it? Think about all those close calls you have had with someone who wasn’t paying attention or maybe drunk, where at the last second you averted disaster. You wouldn’t be able to do that with autonomous automobiles. It’s not going to see a truck running a red light through an intersection at 60 mph coming right at you, or the hundreds of other niche incidents that might not happen to you every day but eventually will. Until everyone else isn’t allowed to drive, I think the risk of an accident might be higher in many areas of the country. I am fine using one as a taxi in a warm city where the speed limit isn’t above, say, 35 mph, driving through subdivisions and such. Not on the freeway, not in rural areas, not in bad weather. Not on bridges or cliffs.

We haven’t even gotten to the security of its systems yet, either. We all know that the more complex a system is, or the more features are added, the more inherently insecure it becomes. We don’t even have the ability to thoroughly secure computers, networks, air conditioning units… refrigerators, washer/dryers, baby monitors. If those things get hacked, it sucks. If a car gets hacked, people will die. We haven’t figured out how to do security yet in practice. Who are you trying to secure? Passengers? Pedestrians? The car? What are the costs and risks we are willing to live with, and who decides this? Do we even know what those are? They are subjective quantities. At what point would these autonomous automobiles be considered safe en masse? When they are safer to travel in than an airplane? Safer than the accident rate in your state? Whenever the corporations manufacturing them can convince lawmakers to allow them?

Knotts • February 5, 2020 7:04 AM

“A German artist tricked Google Maps into sending out a traffic jam alert by wheeling around 99 phones in a handcart”
Did anyone know that Google is doing THAT with phone location data??!!
What else are they doing??!! (answer: whatever they want)

EvilKiru • February 5, 2020 11:45 AM

@Knotts: Why would anyone be surprised to discover that a traffic routing program, such as the Google Maps app that all 99 of those phones were running, uses your location data to aid in traffic pattern analysis?
