Adversarial Machine Learning against Tesla's Autopilot

Researchers have been able to fool Tesla’s Autopilot in a variety of ways, including convincing it to drive into oncoming traffic. The attack requires only the placement of a few stickers on the road.

Abstract: Keen Security Lab has continued its security research on Tesla vehicles, presenting results at Black Hat USA in both 2017 and 2018. Building on ROOT privilege on the APE (Tesla Autopilot ECU, software version 18.6.1), we did some further interesting research work on this module. We analyzed the CAN messaging functions of the APE and successfully gained remote control of the steering system in a contactless way. We used an improved optimization algorithm to generate adversarial examples against two features (autowipers and lane recognition) that make decisions purely on the basis of camera data, and successfully carried out the adversarial-example attacks in the physical world. In addition, we found a potential high-risk design weakness in lane recognition when the vehicle is in Autosteer mode. The article is divided into four parts: first, a brief introduction to Autopilot; then, how to send control commands from the APE to the steering system while the car is driving; the last two sections cover the implementation details of the autowipers and lane-recognition features, as well as our adversarial-example attack methods in the physical world. We believe our research makes three creative contributions:

  1. We proved that we can remotely gain the root privilege of APE and control the steering system.
  2. We proved that we can disturb the autowipers function by using adversarial examples in the physical world.
  3. We proved that we can mislead the Tesla car into the reverse lane with minor changes on the road.
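The paper’s “improved optimization algorithm” isn’t reproduced here, but the basic mechanics of an adversarial example can be sketched with the classic fast-gradient-sign recipe on a toy linear model. Everything below is illustrative: the model, pixel values, weights, and epsilon are invented, not Keen Lab’s actual attack.

```python
# FGSM-style perturbation on a toy linear "lane score" model.
# The core idea: nudge each input in the direction that most
# changes the model's output, by an amount too small to be obvious.

def lane_score(pixels, weights):
    """Toy model: weighted sum of pixel intensities -> lane confidence."""
    return sum(p * w for p, w in zip(pixels, weights))

def sign(x):
    return (x > 0) - (x < 0)

def fgsm_perturb(pixels, weights, epsilon):
    """Shift each pixel by +/- epsilon along the gradient sign.

    For a linear model, the gradient of the score w.r.t. each pixel is
    just its weight, so the strongest bounded perturbation is
    epsilon * sign(weight) per pixel.
    """
    return [p + epsilon * sign(w) for p, w in zip(pixels, weights)]

pixels  = [0.2, 0.8, 0.5, 0.1]       # made-up camera input
weights = [1.0, -2.0, 0.5, 3.0]      # made-up model weights

clean_score = lane_score(pixels, weights)
adv_pixels  = fgsm_perturb(pixels, weights, epsilon=0.1)
adv_score   = lane_score(adv_pixels, weights)
# Each pixel moved by at most 0.1, yet the score shifts by
# epsilon * sum(|w|) -- the worst case a bounded attacker can achieve.
```

Real attacks against a vision stack iterate a similar optimization over many frames and add physical-world constraints (printability, viewing angle), but the asymmetry is the same: tiny input changes, large output changes.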

You can see the stickers in this photo. They’re unobtrusive.

This is machine learning’s big problem, and I think solving it is a lot harder than many believe.

Posted on April 4, 2019 at 6:18 AM • 34 Comments


Stephen Smoogen April 4, 2019 6:41 AM

Heck, humans are routinely fooled by similar things on roads. This is why there are so many rules about which colours road paints may be, where signs must be placed, etc. In a book of airplane crash reports, it was a common thing for pilots flying in to swerve at the last minute because reflectors on the ground told their brains they were on the wrong runway.

It could also be related to one of those quirks of human driving: you can show drivers in a simulator that they swerve routinely, but they don’t ever remember doing so, and many will claim, even when shown, that they never did. So here we are, expecting machine learning to do better than humans, programmed by humans who forget that we do it also.

David rudling April 4, 2019 7:02 AM

So there you have the genesis of your next book. After “Click Here to Kill Everybody” about IoT, you need to address how AI can subtly kill us all without the necessity of cyber-psychosis in military killing machines, etc.

Ed Lopez April 4, 2019 7:26 AM

Gaining root privilege of the APE is a concern, but at least it is one that is understandable and correctable via security updates.

The issue of fooling the Autopilot system’s optical sensors is far more concerning. There is no doubt that this is a machine learning issue, but it also has to do with the ability to manufacture illusory data for a machine to process. In this case machine learning may help to solve these issues, but we may want to consider these lessons in developing smart road technologies in the future.

But what if this attack involved the radar-based functions of Tesla’s Autopilot, or the lidar-based functions of other autonomous vehicle systems, where the car sends out active signals to sense road conditions? Such signaling has little protection and can easily be jammed or fooled, creating other potential avenues for malicious action. Tesla owners will tell you how earlier versions of Autopilot were prone to what was called ‘truck lust’, where the vehicle would appear to steer dangerously towards trucks with unusual arrangements of reflectors or lights.

Finally, there is the continuing question of whether semi-autonomous driver aids, such as Tesla’s current version of Autopilot, which requires an alert driver able to take over at any time, are a good thing in reducing casualties or a bad thing in allowing drivers to be less than fully attentive. So far the statistical evidence points towards their doing more good than harm. But as we progress toward autonomous driving, we have to consider the car, the road, GPS, and even inter-vehicle communications as parts of an overall system.

wiredog April 4, 2019 8:12 AM

This is not a new idea…

Seriously, the Subaru EyeSight system reacts to any confusing situation by activating a visible and audible alarm and then turning itself off. Too much snow on the road for it to see the lane markers, confusing lane markers, side of the road not easily distinguished from off of the road? Sound an alarm and disable. It fails fairly safe.
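That “alarm and disable” pattern is easy to state in code. A minimal sketch, assuming a hypothetical confidence score from the lane detector; the threshold and function names are invented, not Subaru’s actual design:

```python
CONFIDENCE_FLOOR = 0.6  # assumed cutoff, not Subaru's real figure

def assist_state(lane_confidence):
    """Return (engaged, alarm): disengage with an alarm when vision is unsure.

    The key design choice: below the confidence floor, the system never
    guesses -- it alerts the driver and turns itself off.
    """
    if lane_confidence < CONFIDENCE_FLOOR:
        return (False, True)   # sound the alarm, hand control back
    return (True, False)       # keep assisting quietly

# Too much snow to see the lane markers:
print(assist_state(0.3))   # -> (False, True): alarm and disengage
# Clear, well-marked road:
print(assist_state(0.95))  # -> (True, False): stay engaged
```

Failing safe this way trades availability for predictability: the system does less, but its failure mode is always the same and always visible to the driver.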

1&1~=Umm April 4, 2019 8:48 AM


“Researchers have been able to fool Tesla’s autopilot in a variety of ways, including convincing it to drive into oncoming traffic.”

Not exactly unexpected, really. The issues are the old ones of ‘what the programmer expected’ and, in the case of AI, ‘what data its rule set is built with’.

Let’s be honest: humans are as bad with the ‘unexpected’, but in many cases our rules are very flexible and thus we can adapt.

Mind you it’s taken us umpteen millennia to develop that particular survival trait.

We learn from an early age by pain, fear and death and the understanding of them innately. I’ve yet to see any machine have innate feelings, so a big tranche of learning methods are unavailable to them.

Will it change?

Others on this blog appear to think so, so I’ll leave it up to them to explain why they think so.

Chelloveck April 4, 2019 9:57 AM

To be fair, there’s prior art involved here. There are video studies dating back as far as the mid-1900s showing how traffic in the American southwest can be diverted using malicious road markings. For plenty of examples see the documentaries of C. Jones et al, particularly the studies of Coyote v. Roadrunner.

Phaete April 4, 2019 10:50 AM

They really did a Tesla on that photo.
Only a backseat driver has this viewpoint; if the real driver had this viewpoint of the road, something is really wrong.

Impossibly Stupid April 4, 2019 10:57 AM


“Let’s be honest: humans are as bad with the ‘unexpected’, but in many cases our rules are very flexible and thus we can adapt.”

No, we function because our intelligence allows us to understand the world from a larger perspective than simply what our senses report. We don’t just have “flexible rules” for lane detection in isolation, we have abstract ideas for what a lane is as a concept, and why we do (or do not) follow particular road markings/signs/etc. Any indications that we’re supposed to drive into oncoming traffic must be kicked up to higher level cognition, because that’s contrary to the very purpose of traveling in lanes.

“Mind you it’s taken us umpteen millennia to develop that particular survival trait.”

Not really. It doesn’t take any great intelligence to move around in the real world without smashing into things. It can even be done in large swarms/flocks without the imposition of lanes. The problem is that these machine learning techniques are not based on any theory of intelligence. They’re just useful little hacks for solving individual tasks that, despite having been difficult to solve with older techniques, lack any real complexity.

“Will it change?”

There is nothing I’ve seen in AI research in the past 40 years that points to any real desire to understand what intelligence actually is (or how it might differ from notions like “innate feelings”). All we’ve been getting is this chase after short-term results that wow the people who control the funding (both academic and corporate). Consequently, like Bruce, I have a very low opinion of what we’re going to see in “Security and Survival in a Hyper-connected World”. When they can be so easily tricked, the assertion that self-driving cars are going to save lives is dubious at best.

Phaete April 4, 2019 11:16 AM

The photo has either been manipulated or taken with a very old broken camera.
If you look at the billboard left of the road markings, this is totally blurred.
The other roadmarkings are similarly blurred.
So yes, if you see the normal roadsigns and markings as blurred, so will added fake ones look blurred.

Any driver having this vision (blurry road signs at 30-50 yards) would have had his license taken away in a heartbeat.

The original report is quite solid though. Good work to further the technology.
I can’t find that photo there, only good-quality photos or CG renderings of the hardware.

albert April 4, 2019 11:36 AM

Good comments all.

Taking control of the computers in an autonomous vehicle is one thing, but failures in the Ai s/w are quite another. In many aircraft accidents, failure of the pilots to understand how Flight Control Systems work is an important issue. In a modern FCS, the computers can take off, fly to a destination, and land, all by themselves, if the hardware functions as designed. If not, then the pilot must assess and mitigate the malfunction, often with only seconds to decide. In an auto, decision times can be less than a second. In the aircraft environment, pilots are well-trained professionals with many hours of flight experience, yet it looks like we are reaching their learning limits where FCS systems are concerned. Compare this to the drivers of autos and trucks, where the drivers often have no training at all.

The argument that Ai can do better has been proven false, because no one can possibly program every eventuality into an Ai system. A reasonably intelligent and aware human can assess new situations instantly, if the proper training and some experience have been provided.

Failures of Ai cannot be attributed to the s/w, but to the humans promoting it.

“We have met the enemy, and he is us.” – Pogo.
. .. . .. — ….

Petre Peter April 4, 2019 12:53 PM

In some languages a car is called a machine. This gives us machine machine learning.

gordo April 4, 2019 1:23 PM

Driving across town in the morning a couple of days ago, after a light dusting of overnight snow, about 25 miles one way, not one road sign along the entire distance was legible: snow had stuck to and covered the sign surfaces.

As it was, the roads themselves were dry and the driving conditions normal. I will say that one doesn’t see that often, at least here, especially at such a large scale. But if signage were critical for autonomous vehicle operation, none of the vehicles across the whole region would have been moving that day.

Chelloveck April 4, 2019 3:04 PM

@Phaete: The photo is on page 34, the version posted with the BoingBoing article is clearly a screenshot of the paper in a PDF viewer. There’s very little degradation in BoingBoing’s version though. I don’t think the purpose is to show that the spots are practically invisible from the perspective of the human driver, just that they’re relatively unobtrusive. It’s not like there are huge orange bollards in the road or anything. A human driver would likely see the spots on the pavement and wonder what they were for, but quickly dismiss them as irrelevant to the task of driving and ignore them.

Theo April 4, 2019 11:32 PM

This shows the Tesla being diverted into the wrong lane. It does not show the Tesla being diverted into oncoming traffic. The control system should have a somewhat higher priority algorithm that avoids running into things. It’s going to need that even if the lane following is perfect.
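The priority layering described above can be sketched as a simple arbiter that can veto the lane follower’s command. The function names, gain, and behavior here are hypothetical, not Tesla’s actual control stack:

```python
def lane_keeping_cmd(lane_offset_m):
    """Proportional steer toward the perceived lane centre (could be spoofed)."""
    return -0.5 * lane_offset_m  # made-up gain; sign: negative = steer left

def arbitrate(steer_cmd, oncoming_in_path):
    """Higher-priority layer: never follow lane markings into oncoming traffic.

    Collision avoidance outranks lane keeping, so a spoofed lane cue
    cannot steer the car into a detected obstacle.
    """
    if oncoming_in_path:
        return 0.0  # veto: hold course (braking logic not shown)
    return steer_cmd

# Spoofed markings pull the car across, but an oncoming vehicle is detected:
cmd = arbitrate(lane_keeping_cmd(1.5), oncoming_in_path=True)
# cmd is 0.0 -- the lane follower's request is overridden.
```

Subsumption-style layering like this is standard in robotics precisely because any single perception channel (here, lane detection) can be wrong or attacked.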

George April 5, 2019 3:35 AM

Teslas are really a bunch of network-managed cars, so drive them at your own risk.

Phaete April 5, 2019 8:41 AM


The photo must have been very unobtrusive for me to not see it in the original paper.
It still doesn’t disprove my point though.
You can’t read an almost 2-yard-high billboard at 30-50 yards, and the legal road markings are blurry, so why would added road marks be anything but blurry?

“Look at how unobtrusive these markings look on this blurry photo…”
This simply doesn’t give me an impression of competence.
Quite the opposite.

Impossibly Stupid April 5, 2019 10:43 AM


This is why genuine intelligence is important. It doesn’t matter whether or not there is traffic in the oncoming lane at the exact moment the car is told to switch. Higher cognition should tell you that there are very few circumstances where it makes sense to have vehicles moving in opposite directions that will be doing so using the same lane. The attack would be particularly effective if it were done just before a divided highway. No matter how great the self-driving software is in regular operation, nobody (outside a testing environment) should be eager to find out what happens when a vehicle is going the wrong way into traffic.


The point you’re trying to make is actually the opposite of what the reality of this technology is. One of the big problems with the current approach to machine learning is that the algorithms often over-train on specific pixels in high-resolution images. That’s what allows them to be fooled by a few rogue markings on signs or in the road. I’ve had the experience myself where object recognition was improved when I used blurry, low resolution images that masked minor artifacts and sensor noise.

The fact remains that no human, however sharp their eyesight and dull their wit, would see a few dots and come to the conclusion that it was an indication they should switch lanes at all, let alone into one dedicated to going the reverse direction. If cars still have these flaws, the technology cannot respectably be called AI, and it should not be allowed on the road.

Theo April 5, 2019 1:33 PM

@impossibly stupid

I don’t know what “genuine intelligence” is, but at a minimum it would require integrating a variety of cues, such as lane markings, street signs, and stationary and moving objects, into a model and using all that information to determine the best course. I think we agree on that. If the Tesla is not doing that, there is a problem, but I don’t think we can deduce that from the paper.

The images seem lacking in any other cues. In that situation I would not expect a human to do better. Human drivers routinely demonstrate poor, but mostly harmless, lane keeping in many situations. Roads are designed to prevent ambiguity where it matters.

The entry to a divided highway has multiple cues. The lane markings are particularly clear; there are keep-right arrows and keep-out gores marked on the road. There are keep-right, no-entry, and “wrong way, go back” signs. The lanes are often laid out to require significant steering to use the wrong side. Finally, there are often vehicles coming towards you in their lane. If the Tesla ignores all of those to follow minor marks on the pavement, that is a problem. But we don’t have evidence of that here.

Note that despite all the features humans still sometimes end up driving the wrong way. Usually it’s DWI, but occasionally it’s quite inexplicable.

MarkH April 5, 2019 5:01 PM

  1. Well done to Keen Labs! This is my first time to hear of them. It’s a good piece of work, and an example of how technical prowess in the “People’s Republic” is growing from strength to strength.
  2. The expression “artificial intelligence” (in its present understanding) came into the world just a few years after I did. Within 15 years after that, I was coming to understand that “AI” is almost pure snake oil.

Another 45 years down the road, I have yet to see evidence that this has changed. Any shyster who so wishes can label digital automation of any process as “artificial intelligence.”

The late Edsger W. Dijkstra was contemptuous of applying anthropomorphic language to computing machinery for the excellent reason that it is extremely inaccurate and misleading. Having seen a writing titled “Giant Brains: Machines That Think,” Dijkstra proposed that someone could write “Giant Hearts: Machines That Fall in Love” … which would be as perfectly asinine as the first title.

To paraphrase the late Douglas Adams, artificial intelligence is almost, but not quite, entirely unlike human (or indeed, any vertebrate) intelligence. The expression itself is marketing hogwash. It conveys a ton of hype, mixed with droplets of reality.

A characteristic weakness of “AI” realizations is what people in that field call brittleness. They will function satisfactorily in some domain, and then fail really badly (likely in some startling manner) outside the boundaries of that domain.

Often, these domain boundaries are not readily apparent (or even discernible at all) to human beings.

The meta-phenomenon which astonishes me, is how many people passionately rush to the defense of AI, social media, completely irresponsible use of personal remote-control aircraft (so-called drones), hyper-addictive mobile phones, and the like.

Any critic of these technological wonders will soon learn what it’s like to express skepticism at a gathering of UFO enthusiasts.

In the 21st century, the most dishonest, exploitative and destructive corporate abuses are defended by armies of shills, who don’t charge a penny for their service!

Phaete April 6, 2019 6:46 AM

@Impossibly Stupid

“The point you’re trying to make is actually the opposite of what the reality of this technology is. … I’ve had the experience myself where object recognition was improved when I used blurry, low resolution images that masked minor artifacts and sensor noise.”

Don’t try to convince me that you can see more accurately on low-resolution photos than on high-res ones.
If you can’t train the correct algorithms for high-res photos, then that is a skill/knowledge (or money) problem, not a resolution problem.

And of course, if your recognition is trained for low-res pictures, it will do badly at high res and better at low res. That’s just user error.

Impossibly Stupid April 6, 2019 3:47 PM


“I don’t know what ‘genuine intelligence’ is …”

You might not be able to define it to the extent of having a scientific theory, but you likely are able to reasonably compare examples of relative intelligent behavior. If I were to query you, or even an inexperienced student driver, on what you thought a lane end/merge marking looked like, or a STOP sign, or whatever, it is a pretty safe bet you are going to demonstrate that you have learned something at a much higher level of abstraction than what the current crop of machine learning functions at.

“I don’t think we can deduce that from the paper.”

No deduction is necessary. The technology is what it is, and can be evaluated for flaws independent of any particular application. Bruce has covered other similar attacks, and Boing Boing also has an even more comprehensive list of machine learning failures.

“Note that despite all the features humans still sometimes end up driving the wrong way. Usually it’s DWI, but occasionally it’s quite inexplicable.”

What failings individual humans have is not at issue. The real concern here is how significant errors in automation can result in dangerous behavior at a previously unknown scale. Yes, hundreds die every day on America’s highways due to human error. It’d be good if automation could eliminate those deaths, but not if we’re just trading random accidents for scores of cars designed to run amok simply because a bird happened to poop in just the wrong pattern. And that’s to say nothing of deliberate offensive hacking that could cause possibly millions of cars to crash all at the same time.


“Don’t try to convince me that you can see more accurately on low-resolution photos than on high-res ones.”

That should actually be a pretty easy task if you take the time to understand how this technology works. Again, the underlying problem is overtraining on particular data points in the sample set. When you try to teach the neural network what a STOP sign is, for example, it doesn’t learn the same things a human does. It has no understanding of octagons, red backgrounds, white printing, Roman letters, or English words. It just homes in on whatever pixels are most representative of the object being shown.

The reason we see “unobtrusive” failures is because just those particular bits of a scene/object are replaced by the same small set of bits that were trained to be recognized as some other object. Whether by chance or intentionally, you now have a system that is failing because of the higher resolution training data.

What lower resolution images do is allow you to guide training away from the details and back to what more closely represents the higher-level concepts a human would be learning. A blurry reddish blob at a corner in the distance has a very good chance of being a STOP sign, even if you can’t make out the details. Such a system (especially backed by genuine AI) could indeed be more accurate/safer than a stupid high resolution system that can easily be fooled into giving false positives with 100% confidence.
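The low-pass idea is simple to demonstrate. A toy 1-D box blur (pure Python, illustrative only; real pipelines would use a 2-D Gaussian) shows how a single rogue pixel’s influence shrinks after smoothing:

```python
def box_blur(signal, radius=1):
    """Average each sample with its neighbours (clamped at the edges)."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

clean  = [0.5, 0.5, 0.5, 0.5, 0.5]
spiked = [0.5, 0.5, 1.5, 0.5, 0.5]  # one adversarial "rogue marking"

blurred = box_blur(spiked)
# The spike's deviation from background drops from 1.0 to about 0.33,
# so a pixel-level perturbation must be roughly 3x larger to have the
# same effect on anything downstream.
```

Blurring is not a cure (an attacker who knows about it just uses larger, lower-frequency perturbations), but it illustrates the claim: coarser inputs force the model away from isolated pixels and toward blob-level features closer to what a human uses.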

“If you can’t train the correct algorithms for high-res photos, then that is a skill/knowledge (or money) problem, not a resolution problem.”

My point remains that there currently are no “correct algorithms”, because very few people are working forward from a theory of intelligence. Without that, we aren’t able to build systems that have the ability to correct incorrect learning.

“And of course, if your recognition is trained for low-res pictures, it will do badly at high res and better at low res. That’s just user error.”

Strongly disagree. Again, intelligence is simply a game changer. Even without images of any kind, I could teach a child to figure out what different kinds of road signs will look like. And it’d take more than a few misplaced dots for them to give a false positive otherwise.

Alyer Babtu April 6, 2019 5:30 PM

@Impossibly Stupid

“understand how this technology works”

Is there any general work on the topology/qualitative mapping behavior of these ML and AI systems?

For example, the derived (perhaps by training) system consists of an input space, a transform or mapping, and an output space. What notions of closeness or neighborhood pertain to these components? Is the mapping continuous in any sense, and to what degree (differentiable?)? How stable is the algorithm that produces the system mapping (say, from training data)? What is the actual possible range of outputs of a given system?

It seems like the current methods yield non-smooth mappings that produce wildly divergent outputs from inputs one would have thought of as “close” (adversarial examples). But perhaps we are using inappropriate notions of “close”, i.e. have the wrong topologies, and so hinder understanding of the system dynamics.
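One concrete way to probe that notion of “close” is an empirical, Lipschitz-style sensitivity estimate: sample small random perturbations d and measure |f(x+d) − f(x)| / max|dᵢ|. The sketch below (pure Python, illustrative functions only) contrasts a smooth map with a thresholded one near its decision boundary:

```python
import random

def local_sensitivity(f, x, eps=1e-3, trials=100, seed=0):
    """Worst observed |f(x+d) - f(x)| / max|d_i| over random small d."""
    rng = random.Random(seed)
    fx = f(x)
    worst = 0.0
    for _ in range(trials):
        d = [rng.uniform(-eps, eps) for _ in x]
        xp = [a + b for a, b in zip(x, d)]
        worst = max(worst, abs(f(xp) - fx) / max(abs(b) for b in d))
    return worst

smooth = lambda x: sum(x)                        # Lipschitz, well-behaved
step   = lambda x: 1.0 if sum(x) > 1.0 else 0.0  # discontinuous "classifier"

s_smooth    = local_sensitivity(smooth, [0.2, 0.3])
s_step_far  = local_sensitivity(step, [0.0, 0.0])     # far from the boundary
s_step_near = local_sensitivity(step, [0.5, 0.4999])  # right at the boundary
# s_step_near explodes: any perturbation that crosses the threshold gives
# a ratio of at least 1/eps. That blow-up is the adversarial-example regime.
```

The point of the probe is exactly the commenter’s question: a system can look stable almost everywhere and still have regions where the effective local Lipschitz constant is enormous, so a sensible topology on inputs does not carry over to outputs.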

Sancho_P April 6, 2019 6:22 PM

“If cars still have these flaws, the technology cannot respectably be called AI, and it should not be allowed on the road.”

And this is my problem.
I’m eagerly waiting to direct my car by phone.
It could take me right to the doors of my gym in town, where I’d never find a regular parking lot, and then drive in circles or find itself an acceptable spot to wait. Imagine bus stops or such places, when bus or police arrives it would automagically return to circle mode, together with dozens of other (electrical, mind the environment!) driverless cars.
Having my second post-workout beer I’d check my phone where my car actually is, probably to call it to pick me up. Oh, with good AI it may be at the charging station of the public library, so I’d have another drink instead.
Imagine a McDo drive through, my car would go there, lower the side window, order my personalized burger and have it ready when picking me up!
I’m longing for that bright future!

JG4 April 6, 2019 8:14 PM

Thanks for the implicit invitation to say something about intelligence. I’ve been busy lately, which is why you haven’t heard from me sooner and more often. Had a 24-hour fast yesterday and almost 5 hours of intensive windshield therapy in the rain last night. As a result, today was very creative. I prefer sunlit windshield therapy in big sky country, but the pressure of high speeds in dense traffic (trucks on narrow two-lane highways) with limited visibility (rain) does focus the wits. It’s pleasant to take off the pressure. Like when you quit beating your head against the wall. Big sky country has wide roads, light traffic, and some years back, no speed limit.

I’ve mentioned the OODA loop before and how I found it. Observe-orient-decide-act. I’ve probably said a few times that observations essentially are sensor input, which can be raw data from the all-encompassing corporate-government surveillance apparatus. Orient is the step where a system filters the inputs to produce an estimation of possible actions. In the case of the all-encompassing surveillance apparatus, the estimation is intended to optimize profits (loosely money and power) for various elite persons and organizations. Money and power are entropy maximizers, as are various associated activities, such as sex, murder, destruction of environment and war. I think that one of my better quips was deleted for failing to articulate the deep connection between entropy maximization and security – “War is the continuation of entropy maximization by other means.” “Murder is the continuation of politics by other means.” Recent is a relative term.

Deciding is another filtering operation that attempts to optimize the result of action. Act is where the system uses some actuator to change system conditions. In the case of the self-optimizing resource-extraction engines, act often is creation and dissemination of fake news (Bernays, Project Mockingbird, Brennan, etc.) to manipulate public opinion. In the case of the self-driving car, act is manipulation of a sharply limited number of degrees of freedom – inputs if you prefer. Roughly speaking, the car has one steering degree of freedom, and one velocity degree of freedom. The outputs could be four degrees of freedom – velocity, x- and y-position and direction. You’d think that a system with two inputs (acceleration and direction) would be easy to optimize, but you’d be wrong. Making sense of the inputs isn’t as easy as we make it look. Engineering robust systems to do these things with a handful (yoke and rudder pedal) of additional degrees of freedom isn’t even within the grasp of a multi-billion dollar corporation, to pick a random example that is being hashed in another thread.

I appreciate the excellent discussion, in particular of the limits of AI and how to make it do what is needed. And the excellent discussion of the failures of machine augmentation in human control of airliners. In fact, I owe Clive some comments that he is correct about analog “computers” without electricity being applied first to computation of one or two degrees of freedom, whatever a parabola with air drag is called. The two degrees of freedom were initial velocity (charge/mass) and elevation. That is a two-input single-output control system. Adding a transverse wind drag makes it a three-input two-output control system. I hadn’t thought of a manual slide rule as an analog computer, but it certainly falls within a relevant meaning. I meant motorized mechanical computers – one step up from slide rules. I thought that the transition to automated control of two degrees of freedom (azimuth and elevation) involved electrohydraulics, with some elaborate electromechanical computers being what Shannon cut his teeth on. We seem to be using different definitions of analog computers that point to different times along the same trajectory. As I always, I appreciate the history lessons and hope to offer some as well. Did I mention The Idea Factory? It’s a bit loose with details, but it is connecting some dots that I should have connected on my own. I thought for a moment that wire EDM might have been invented for making magnetrons slots, but it seems to have come later. Western Electric was the go-to manufacturer of the day.

A system that can observe, even if the optical sensor is a person tracking a target with a manual sighting system, orient, decide and act (aim, the gun and pull the trigger) is an OODA loop. Later the observing step was radar. We might count a system where a person trains sights on a target and pulls the trigger, with a computing layer that calculates trajectory, including terms for wind, two components of drag, velocity, direction, distance and acceleration, and uses electrohydraulics to control two degrees of freedom, as computer-augmentation of human intelligence. It replaces the difficult and expensive step of learning to aim for wind and speed by trial and error with an analog computer. The analog computer wouldn’t observe, but does fall within the meaning of orient and decide, and the electrohydraulics fall within the meaning of act, as in actuator. The beginnings of AI in some sense. Only twenty years later, the computing was fully electronic and the aiming fully autonomous for shooting down V-1’s toward the end of the war. Nike may have been a 1960’s version. Phalanx definitely a 1980’s version. Patriot a 1990’s version. There was a successful test of intercepting a reentry vehicle in roughly the last ten years. Your view of what constitutes success subject to modification by the most powerful neurotransmitter on your planet – campaign contributions. Not sure where we are now.

Perhaps at the step labeled “buggered backwards with a bargepole.” I am properly concerned about the unintended consequences of under-engineered AI systems being unleashed. An autonomous drone, such as the ones in Stuart Russell’s brilliant video discussed here in 2017, or the MIT parking garage UAV, constitutes a fully autonomous OODA loop with no humans in it.

We can think of natural intelligence in living systems as one layer of an adaptive system aimed at entropy maximization. The replicators taught by Dawkins, the guy with biting wit and disdain for the creationists, were made famous in a book called The Selfish Gene. The replicators are entropy maximizers in the sense taught by Onsager, Prigogine, Dorion Sagan and others. Intelligence is just another of the evolved systems for maximizing entropy. And the need for security is just a consequence of the ongoing competition to maximize entropy. This may be as good as it gets on your planet. The other layers of adaptation in living systems are genetic selection, epigenetic programming, and enzymatic-RNA feedback loops. The ultimate feedback-feedforward loop is intelligence.

Living systems have sensor systems to observe, filters to orient using the sensor data, filters to decide, and various chemical actuation systems. For most animals, action generally is nerve impulses directed to muscles. Observe generally is sight, hearing, smell, and touch. Some bacteria have analogues of muscles, but the computation layers are pretty thin. The computation layers are non-existent in viral systems (unless they have quorum sensing) and the actuation step is more like a magnet pulling on a key. Both bacteria and viruses are entropy maximizers, as are all living systems. The definition of maximization is subject to some fine-tuning to allow for various types of optimization. Intelligence is a self-optimizing computing system that can observe, orient, decide and act. The natural variety has been shaped by hundreds of millions of years of differential survival. It can act on the timescale of microseconds to decades. The artificial variety can act on the timescale of nanoseconds to millennia. It has been shaped by not much more than a hundred years of effort by the replicators to further their agenda of entropy maximization. Expect more of the same.

Copyright JG4, All Rights Reserved. Limited license to Schneier on Security to display in perpetuity or any subset thereof.

Phaete April 7, 2019 1:08 AM

@Impossibly Stupid

That should actually be a pretty easy task if you take the time to understand how this technology works.

Then this should be easy for you.

Deep Learning for Single Image Super-Resolution: A Brief Review

High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

I can dig up some more if you like, but I assume you can find them yourself as well.

Things don’t get clearer at lower resolution; you only scan low-res because there is a lack somewhere else.

Impossibly Stupid April 7, 2019 4:58 PM

@Alyer Babtu

Is there any general work on the topology/qualitative mapping behavior of these ML and AI systems ? … What is the actual possible range of outputs of a given system ?

There’s always a lot of ongoing work, but there are so many individual differences in implementations that generalizations are hard to come by. Perhaps the most approachable examinations are things like DeepDream, because of the way it maps the output back to the input image itself, thereby allowing a human to better see how certain features are more or less likely to trigger a false positive.
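The DeepDream idea mentioned above can be sketched in miniature: start from an input, and run gradient ascent to maximize a chosen internal activation, so the input itself comes to display what the detector responds to. The real technique uses a trained CNN; in this illustrative toy, a single fixed linear "feature detector" stands in for one channel, and all names are hypothetical.

```python
import numpy as np

# Toy "DeepDream" sketch. Assumption: a real run maximizes a layer of a
# trained CNN; here one linear feature detector plays that role.
rng = np.random.default_rng(0)
feature = rng.normal(size=(8, 8))        # the template this "neuron" responds to
image = np.zeros((8, 8))                 # start from a blank input

def activation(img):
    return float((img * feature).sum())  # dot product = channel activation

# The gradient of the activation w.r.t. the image is just `feature`,
# so each ascent step pulls the image toward the detector's template.
for _ in range(100):
    image += 0.1 * feature

# After enough steps the image correlates strongly with the feature,
# i.e. the input now "shows" what the detector is looking for.
corr = np.corrcoef(image.ravel(), feature.ravel())[0, 1]
print(round(corr, 3))
```

With a real network the same loop (plus backpropagation to get the gradient) is what makes trigger-happy features visible to a human observer.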

But perhaps we are using inappropriate notions of “close”, i.e. have the wrong topologies, and so hinder understanding of the system dynamics.

This is one of the major ongoing problems in these sorts of machine learning systems. You can’t really interrogate them at a high level, a la expert systems, to find out what they really have “learned”. But, as these incidents of false positives show, what is given significant weight is all too often a terribly small subset of the data points that were in the training set.
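One simple way to probe what such a model has actually latched onto is occlusion sensitivity: knock out each input in turn and watch how much the score drops. A minimal sketch, assuming a fixed linear scorer stands in for a trained network (real probes occlude image patches instead of single values):

```python
import numpy as np

# Occlusion-sensitivity sketch. Assumption: the "model" is a fixed linear
# scorer whose weight mass sits on a few inputs, mimicking a network that
# over-weights a small subset of features.
weights = np.array([0.0, 0.0, 5.0, 0.1, 0.0, 4.0])  # two features dominate
x = np.ones(6)

def score(v):
    return float(weights @ v)

base = score(x)
# Zero out each input in turn and record how much the score drops:
drops = [base - score(np.where(np.arange(6) == i, 0.0, x)) for i in range(6)]
print(drops)  # the large drops reveal which inputs the model really uses
```

The pattern of drops makes the "terribly small subset" visible: most inputs barely matter, and a couple carry nearly all the weight.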


Then this should be easy for you.

Very easy: you have engaged in the common pseudo-science tactic of citing irrelevant research to create an air of legitimacy, despite the fact that any rational observer can see that neither of the papers you reference supports your claim or refutes mine in any way. Such intellectual dishonesty is indeed easy to detect, so I suggest you refrain from doing it if you sincerely want this conversation to continue.

Things don’t get clearer at lower resolution; you only scan low-res because there is a lack somewhere else.

I made no claims that things would get “clearer”. I very directly did assert that you might be able to get more accurate recognition of objects if a major source for false positives is overtraining on less significant details. There are all kinds of reasons it makes great sense to train on degraded images, because I don’t know of anyone who wants their self-driving car to dangerously malfunction just because a little rain gets on the lens, or a little snow gets on a sign.

CB April 8, 2019 10:23 AM

The root access is a serious issue.

The sticker diversion attack takes advantage of the missing lane markings in an intersection to add temporary markings connecting the right lane line to the middle line in an empty zone.
It’s important to notice that in this process the car never crosses physical markings.

The test was made on an empty road. It would be interesting to see how the car reacts if it detects opposite traffic before, during or after the crossing.

It’s not a bad idea for the car to follow diverting markings; this could be roadworks. But it should at least check map data to identify which lane it is in and what the official traffic direction of that lane is, then alert the driver that the car was diverted into the opposing lane and that they need to watch for opposing traffic and return to the correct lane when possible.
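The map-consistency check suggested above can be sketched as a heading comparison against the lane's official travel direction. Everything here is illustrative: lane IDs and headings would come from an HD map service, and the names `Lane` and `lane_conflict` are assumptions, not a real API.

```python
# Map-consistency sketch: flag when the car's heading opposes the
# official travel direction of the lane the map says it occupies.
from dataclasses import dataclass

@dataclass
class Lane:
    lane_id: str
    heading_deg: float   # official travel direction of the lane

def lane_conflict(car_heading_deg: float, lane: Lane, tol: float = 90.0) -> bool:
    """True when the car's heading opposes the lane's official direction."""
    # Smallest angular difference, wrapped into [0, 180]:
    diff = abs((car_heading_deg - lane.heading_deg + 180.0) % 360.0 - 180.0)
    return diff > tol

oncoming = Lane("B2-left", heading_deg=180.0)
if lane_conflict(car_heading_deg=0.0, lane=oncoming):
    print("ALERT: diverted into opposing lane; driver check required")
```

A real system would also have to handle map staleness and roadworks, which is exactly why the comment proposes alerting the driver rather than overriding the markings silently.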

fajensen April 10, 2019 7:16 AM

Projecting the dis-information patterns, rather than painting them on the road or traffic signs, using f.ex. some kind of programmable infrared light source, would allow the adversary to evaluate many different patterns in real time.

It also allows the adversary the freedom to have some “scratch monkey” install the light sources on a regular zero-hour contract. Only when fired up with the correct pattern do they become weapons, so nobody need know what they are actually for. The devices could look like more traffic cameras; everywhere is festooned with cameras, so nobody would think twice about more camera-looking devices.

Paul Nash April 15, 2019 5:32 AM

Shades of the Boeing 737 Max. While less complex than lane-tracking (no AI, just bad hardware and software design), the anti-stall system can kill entire plane-loads of people. And has done so twice.

VinnyG April 16, 2019 8:45 AM

@MarkH re: “artificial intelligence” – Obviously an oxymoron, but only one example of a bevy of terminology changes made over the years in an apparent quest to make our profession seem warmer and fuzzier (“more human” to some, I guess.) My favorite example was the transformation of “DP” to “IT”… To me, “data processing” was and remains the salient top-level descriptor for the profession.

@Sancho_P re: car circling in holding pattern – Might be some unintended consequences there. What happens if the parking capacity where you are imbibing is, say, 500 cars, and on the night in question, capacity is exhausted, and there are an additional 1000 cars “circulating”? I think the record is clear on the competence of transportation authorities to plan for such eventualities. You might enjoy one additional cocktail, but waiting until sunrise for your car while it “matches wits” with other vehicles, possibly not so much…

Sancho_P April 16, 2019 5:31 PM


”I think the record is clear on the competence of transportation authorities to plan for such eventualities.”
Hello? Authorities? Planning ahead?
No one is thinking about, let alone planning for, fully autonomous cars. The gym in my thought example has a capacity of 65 persons; usually fewer than 14 machines are occupied. But as nearly everywhere in Spain, there is not one single designated parking lot for that business.
What if, by circling, my driverless car collects a traffic violation ticket? Or crashes into, say, a bicycle, or hurts a pedestrian?

The step from driver “on board but absent” to fully absent may take two years, but it will take the authorities two decades just to come to terms with the former.
Cars talking to each other? Endless.
