Another Attack Against Driverless Cars

In this piece of research, attackers successfully attack a driverless-car system -- Renault Captur's "Level 0" autopilot (Level 0 systems advise human drivers but do not directly operate cars) -- by following it with drones that project images of fake road signs in 100ms bursts. The time is too short for human perception, but long enough to fool the autopilot's sensors.
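The timing asymmetry can be sketched with back-of-the-envelope arithmetic. The frame rates below are illustrative assumptions, not figures from the paper: the point is that a 100 ms projection spans several full camera frames even though it is at the edge of what a driver consciously registers.

```python
# Illustrative sketch: how many camera frames a brief projected image spans.
# Frame rates here are assumptions for illustration, not the paper's numbers.

def frames_captured(fps: int, burst_ms: int = 100) -> int:
    """Approximate number of frame intervals that fit inside the burst."""
    return fps * burst_ms // 1000  # integer math avoids float rounding

for fps in (30, 60):
    print(f"{fps} fps camera: ~{frames_captured(fps)} frames contain the fake sign")
```

A sign detector that accepts anything seen in a handful of consecutive frames would treat those frames as a real sign, while a human glancing at the road may never consciously notice the flash.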

Boing Boing post.

Posted on July 31, 2019 at 6:46 AM • 43 Comments

Comments

Snarki, child of Loki July 31, 2019 7:12 AM

It's the "Modern Trolley Problem!": self-driving car is heading toward an innocent group of people, but could be hacked to instead run over a smaller group. What do you do?

Answer: shoot Elon Musk twice.

Sed Contra July 31, 2019 7:18 AM

But wait - driverless cars project aerial images that confuse drones, making them place their adversarial images harmlessly in nearby fields, away from roads. We can relax.

wiredog July 31, 2019 7:22 AM

For this attack to work no one else on the road can notice the drones following the car being attacked.

Petre Peter July 31, 2019 7:42 AM

Great! Now I have to look out for drones while 'relaxing' in my self driving car.

Clive Robinson July 31, 2019 7:58 AM

@ Bruce,

by following them with drones that project images of fake road signs

Sounds like a perfectly modern argument to ban drones...

After all, these days it's all in the "edge-case" argument, as we have to "Think of the Children".

Solaric July 31, 2019 8:02 AM

And this matters... why? I mean, yeah, sure, it's vaguely interesting in an abstract way. But there is nothing special about purposefully being able to cause crashes. It's not a self-driving-car thing, at all. We just expect, you know, most people not to be fucking criminals IRL and try to murder everyone, and that actually seems to work out overall. An attacker right now can also paint over lines, change and remove street signs, etc. These are crimes, theft by default, but if any sort of crash results the perpetrators are liable for that as well.

Remote network attacks are a real concern and a novel threat scenario to self-driving cars. But anything that requires physical presence and action is not.

Bob July 31, 2019 8:38 AM

The obvious solution is drones that follow driverless cars to shoot down drones that try to project fake road signs.

Tatütata July 31, 2019 8:41 AM

Before too long, we'll have machine-readable road signs (e.g., a UV QR overlay, or PCM-modulated light signals) with electronic signatures, and traffic will come to a standstill because the DOT's root certificate was compromised.
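Tatütata's signed-sign idea can be sketched in a few lines. This is a toy, not a real protocol: it uses a symmetric HMAC from Python's standard library where an actual deployment would verify an asymmetric signature against a certificate chain rooted at the DOT, and the payload format is invented for illustration.

```python
import hashlib
import hmac

# Toy shared secret standing in for real PKI. A deployed system would check
# a public-key signature against a DOT certificate chain, not share a key.
DOT_KEY = b"example-dot-key"

def sign_payload(payload: bytes, key: bytes = DOT_KEY) -> bytes:
    """Produce an authentication tag over the machine-readable sign payload."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_sign(payload: bytes, tag: bytes, key: bytes = DOT_KEY) -> bool:
    """Constant-time check that the sign's tag matches its payload."""
    return hmac.compare_digest(sign_payload(payload, key), tag)

# Hypothetical payload format for illustration only.
sign = b"SPEED_LIMIT=50;UNIT=km/h;SIGN_ID=4711"
tag = sign_payload(sign)
assert verify_sign(sign, tag)                             # genuine sign accepted
assert not verify_sign(b"SPEED_LIMIT=130;SIGN_ID=4711", tag)  # spoof rejected
```

And the sketch illustrates the failure mode in the comment: the whole scheme stands or falls with the key material, so one compromised root and every sign on the road verifies as invalid (or worse, a forged one verifies as valid).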

The attack somehow reminds me of an invention which allegedly protects cyclists against SMIDSYs ("Sorry Mate I Didn't See You"). A projector on the bike draws a mobile symbol, or safe passing limits ahead or astern. This patent is only one of a bunch.

In 1963 the Glasgow-to-London Royal Mail train was stopped by nothing more than a flashlight and a filter.

Sleeper Service July 31, 2019 8:55 AM

Suspect that most countries will start moving towards virtual signage anyway. Broadcast speed restrictions, incident information etc. etc. direct to the vehicle. The big issues here would be the requirement for pretty much 100% mobile coverage for the transport network, all vehicles would require a means of getting the information and the communications links would need to be secure.

It's certainly not cheap planting a message sign in a remote location, and it may prove beneficial in terms of resource usage if the reliance on physical devices can be slowly eroded.

Still not secure but what is.

Tatütata July 31, 2019 9:25 AM

I wrote "... Royal Mail train was stopped by nothing more than a flashlight and a filter", as I remembered the story.

The article states:

The signal had been tampered with by the robbers: they had covered the green light and connected a battery to power the red light.

Impossibly Stupid July 31, 2019 10:05 AM

@wiredog

For this attack to work no one else on the road can notice the drones following the car being attacked.

You seem to lack the ability to generalize and reconceptualize threats like this. Just because the study used a drone doesn't mean it's the only way to project said image. It could be any other nearby car on the road. It could be something set up alongside the road. In fact, it could be the car itself that is modified to do this. Imagine a driver who wants to be able to go faster than the limit that's posted on real signs. Now imagine someone hacking that setup, too.

@Solaric

It's not a self-driving-car thing, at all.

It most certainly is when the vehicles perceive and act on inputs that human drivers do not. The open-world aspect of the problem makes it very dangerous to try automating a task like driving. It very much matters what the edge conditions are that can alter the safe behavior of any driver.

Solaric July 31, 2019 10:26 AM

@Impossibly Stupid

It most certainly is when the vehicles perceive and act on inputs that human drivers do not.

Nope. It's still in the bucket of adversarial attack. That sensors can perceive outside human limits is a feature, not a bug. That an attacker can devise a physical, in-person way to go after that is no different than the endless ways to attack human senses and response characteristics. And in the actual world our primary "defense" against that is just not having people do it and punishing those who do.

The open-world aspect of the problem makes it very dangerous to try automating a task like driving. It very much matters what the edge conditions are that can alter the safe behavior of any driver.
Not really, because the open-world aspect of the problem applies just as much to humans doing a task like driving, and it is in fact very dangerous. Yearly worldwide death counts are something like 1.25-1.5 million, with another 20-50 million injuries. And that's even with barring a certain number of really risky people from driving at all, which itself often has major negative effects on them.


And it's completely worth it anyway, because mechanized point-to-point transport is absolutely that valuable. But it's a huge mistake to analyze something like this in a vacuum and succumb to a false overfocus on risks without considering the context, which is literal megadeaths (just for starters). Security is always, always an economic equation. It exists within the context of limited resources and overall goals, and it's fine to simply calculate that certain threat models will be out of scope. What matters for something new is whether there is improvement versus what there will be anyway, and at reasonable cost. And if it's a threat model we don't care about with human drivers, then you're starting with a lift to say it should be a stopper for non-human ones. Physical attacks are inherently not very scalable, hard to keep anonymous, harder to treat frivolously, etc. It's reasonable to treat them differently as a category.

Uthor July 31, 2019 12:21 PM

The lane keeping assist in my Mom's CR-V "helpfully" tried to correct me into the side of an underpass thanks to a weird shadow on the road.

Somehow, I'm slightly less concerned about cars being spoofed by flying drones.

Bob Paddock July 31, 2019 12:27 PM

@Clive Robinson

"... as we have to 'Think of the Children'."

They have been for years.

A few years ago I attended a White House Office of Science and Technology hosted event at Carnegie Mellon University about robotics. Top people from Google, Uber, CMU, and many others were there, as were government agencies such as DARPA and IARPA. This is what they were most concerned with all day:

"Statistically we know that we WILL kill a Child [with our technology]. How do we handle the emotional public aftermath of that, as it will affect our bottom line?"

They had no answer.

Sergey Babkin July 31, 2019 12:30 PM

Spoofing the speed limits is probably not such a big deal: the worst this attack could cause is getting a traffic ticket. A driver and a proper self-driving car should be selecting the speed based not on the traffic signs but on the road situation, and if the road situation looks unsafe beyond 30 km/h, it should not be going 90 km/h even if the road sign allows it.
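Sergey's rule — treat the posted limit as an upper bound, never as a command — reduces to a one-liner. A minimal sketch, where `safe_speed_kmh` stands in for a hypothetical perception-stack estimate of what the road situation allows (not a real autopilot API):

```python
def target_speed(sign_limit_kmh: float, safe_speed_kmh: float) -> float:
    """The posted limit only caps speed; perceived road conditions can
    lower it but a sign can never raise it above what looks safe."""
    return min(sign_limit_kmh, safe_speed_kmh)

# A spoofed 90 km/h sign on a road the perception stack judges safe at 30 km/h:
assert target_speed(90, 30) == 30   # the spoof has no effect
# A genuine 50 km/h sign on a clear road:
assert target_speed(50, 120) == 50  # the limit is still respected
```

Under this policy a spoofed higher limit changes nothing unless the perception side also misjudges the road, which is exactly why Sergey ranks lane-direction signs as the more dangerous target.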

Probably a more dangerous sign to attack would be one that directs the traffic by lanes. Worse yet, the European system of traffic signs has the blue-background signs that actively permit specific turns while implicitly prohibiting the driving in the other directions (as opposed to prohibiting certain turns and leaving the other ones available). Spoofing such a sign is probably the easiest way to cause a collision.

1comment July 31, 2019 1:00 PM

> "Statistically we know that we WILL kill a Child [with our technology]. How do we handle the emotional public aftermath of that, as it will affect our bottom line?"

> They had no answer.

Do they mean automotive technology? Because cars have been killing kids for 100 years. All they have to do is compare the kill rate and set up the liability accordingly.

Paranoia Destroys Ya July 31, 2019 1:07 PM

Fake road signs or projecting tunnels on desert canyon walls is an old tactic used by that supergenius Wile E. Coyote.

Thunderbird July 31, 2019 1:56 PM

The comments to the effect that "this is just the same as restriping the road" ignore several items. First, humans would presumably notice and report a restriped road. Second, humans presumably wouldn't choose to follow the new stripe directly into a cliff where someone had cleverly painted a tunnel entrance (see Wile E. Coyote above). Third, nothing says that all attacks have to be done with physical presence.

People are very taken with the possibility of preventing X-tens-of-thousands of deaths each year by eliminating human drivers. I agree, that sounds swell. But you have to balance that against how many people get killed if all self-driving cars choose to dodge into the oncoming lane at once. While that is a movie-plot threat, a big difference between computer network crime and plain-old crime is that a guy with a mask and a knife can only mug one person at a time, while automated systems allow crime to scale...

As for things you could do by altering speed limits, it would be pretty funny to pick random spots where the freeway limit suddenly appeared to drop to 25 MPH. Given a certain number of non-automated vehicles, you'd be pretty sure to either create a massive traffic jam or some rear-end collisions.

Roxolan July 31, 2019 2:32 PM

You could also send a drone on a highway to shine a laser pointer into the driver's face at the worst time.

Alex July 31, 2019 6:10 PM

Given how many human idiots blindly follow GPS/satnav guidance into lakes, ponds, muddy fields, etc., giving humans or devices bad instructions is going to cause problems.

I've driven ~75,000 miles with two different semi-autonomous cars (Bosch Mobility Solutions, NOT Tesla's garbage) and have a decent amount of faith in them. Do I trust them blindly? No. BUT, they've done very well and have avoided accidents at times, two of which were people behind me who weren't stopping, and the car got out of the way.

It does misread the speed limit signs sometimes, which can be annoying or frightening depending on how you have the settings configured.

I think this falls under the Movie-Plot world of possible attacks. You could easily rig an elevator to go flying through its limits at full speed with some black electrical tape, but no one's done this.

Anon Y. Mouse July 31, 2019 6:50 PM

If the proponents of self-driving cars were *really* serious about
improving vehicle safety, there are a number of things that could
be done right now without waiting an untold number of years and investing
billions of dollars in that fantasy. Such as:

Better driver training

More law enforcement of distracted driving statutes

Outlawing any cellphone use, including hands-free, by drivers

Prohibiting the installation/use of in-car entertainment systems that play videos

Prohibiting the installation/use of in-car systems that let the driver check
e-mail, make phone calls, etc.

Creating guidelines, standards, and mandates for simplified graphic user
interfaces for navigation and environmental controls


The fact that advocates for self-driving cars are doing none of these
things tells me they are really just gadget freaks and/or industry shills.
Their claims to be concerned about traffic deaths are just empty rhetoric.

MikeA July 31, 2019 7:50 PM

@Tatütata
-----
Before too long, we'll have machine-readable road signs (e.g., a UV QR overlay, or PCM-modulated light signals) with electronic signatures, and traffic will come to a standstill because the DOT's root certificate was compromised.
----

I can see it now. In the UV QR case, a remake of (the original) Italian Job, wherein Benny Hill doesn't mess with the traffic signals, but rather a legion of skate-punks slaps stickers over certain signs (signed with that compromised cert). There is a preexisting workforce currently slapping such stickers to advertise various goods, so no minion training required. (In the modulated-light version, re-purposed LiFi.)

Clive Robinson August 1, 2019 1:02 AM

@ Bob Paddock,

They had no answer.

They never do; their only desire is to exist at the expense of others, and thus they fear the inevitable.

Just over two centuries ago Mary Shelley (second wife of the poet Percy Bysshe Shelley) wrote a book that is probably the best known of either of their works. In what many now consider the first true science-fiction novel, "Frankenstein; or, The Modern Prometheus", she has Victor Frankenstein create a man by the science of galvanism, as Prometheus of Greek mythology had made a man of clay.

In many ways the book has become a warning against overambition in science, against creating as a god the things that religions dictate are the domain of gods. The result is always the same fate: the usurpers of the gods' domains are punished by their own creations.

Though three quarters of a century later we get a slightly different view of technology, in H.G. Wells's "The Time Machine". In it we see the result of a race that divides, where one part (the Eloi) lives a dilettante, childlike existence on the creations of the other part, which chooses to hide (the Morlocks). But as with livestock, the price of the Eloi's easy lifestyle is to be consumed by the Morlocks, who create the conditions for such lives.

The difference in viewpoints can be ascribed to the authors' backgrounds. Mary Shelley was brought up in semi-privilege and later shared life with those whose lives had been all privilege, lives that rested on the labours of those who lived in the twilight "below stairs". H.G. Wells was brought up below stairs, not as a servant but below his father's business, in which he helped. So both authors had seen either side of the "Upstairs, Downstairs" divide without actually being of it.

Both of these "futures" can be seen in our current chasing of technology for a "privileged life", where the desire of many is the easy "above stairs" lifestyle of privilege, without the responsibility to self and others that it entails.

That is, the technology that has taken over the labour of the "below stairs" life has no life, and thus is treated with utter disregard by those employing it for the illusion of an "above stairs" life. Thus they sleepwalk into what would appear to be an increasingly Morlockian trap set by the technology creators, who very much feed upon their Eloi-like dilettante or childlike behaviours and pursuits, slowly encircling them in a rent-seeker's entrapment that is sometimes compared to the relationship between a drug addict and their dealer.

Thus we can see that in many ways the technology robs those seduced by its mainly empty promises of the independence and self-reliance considered normal just two or three generations ago. Each generation becomes more dependent on technology and less responsible for itself. In effect, more childlike.

So the Silicon Valley Big-Corps are not really enhancing life; where they are allowed to, they are turning humans into "product" or "livestock" on which they feed. Thus they have to maintain an illusion of enhancing people's lives whilst mainly doing the opposite.

Thus whilst the citizens still have the chance of evading the encircling trap, Big-Corp is scared that people will awake, realise, and escape the trap before it's too late to do so. The surest way for people to turn against Big-Corp is for them to see it harm them or the ones they love.

Whilst in the main we accept careless homicide by vehicle as "accidents", it's because we accept that drivers are human and thus imperfect. The selling point of automated vehicles is that they will be perfect drivers, so there can be no accidents. Thus, for such technology and its promises, a death will not be seen as an accidental homicide but as deliberate, cold, calculating murder, for which someone will have to pay. And Big-Corp knows that it will be their heads in the crosshairs. Thus their desired Morlockian dystopian future, dressed to look like a utopia, will fail to happen, and they will suffer the fate of Percy Bysshe Shelley's Ozymandias.

Gerard van Vooren August 1, 2019 1:49 AM

@ 1comment,

"Do they mean automotive technology? Because, cars have been killing kids for 100 years. All they have to do is compare the kill rate and set up the liability accordingly."

It's all about insurance, and when the insurance numbers say that it's better to use a driverless car, then it will happen. Because then the insurance guys get involved and get things done. That is the way of the world.

Joseph August 1, 2019 2:17 AM

Insurers do have the advantage of taxpayers as a backstop of last resort, because as we know, all insurance will fail (or go bankrupt) during a "catastrophic" event.

It also hinges on the belief that money solves everything, because insurance is based on repayment.

Totally real name August 1, 2019 2:34 AM

Solving the responsibility issue will be a sh*t show in the future. In pure software, you can get away with a magical "provided as is". With car accidents and real victims, that won't be possible. You can either send a developer to jail (technically, he is responsible for driving your car, and if his mistake kills someone, ...) or give them a "Get Out of Jail Free" card. Blaming the driver sitting in the car is not a long-term solution. He didn't cause the glitch or the crash; he just failed to correct someone else's mistake.

There are some fun times ahead of us.

JonKnowsNothing August 1, 2019 8:36 AM

re: drones that project images of fake road signs in 100ms bursts. The time is too short for human perception, but long enough to fool the autopilot's sensors.

Don't have to worry so much about the hackers. Police will be able to do it for Police Reasons.

Police demand access to car computer systems to shut down engines, lock or unlock doors and windows, or just to track the car and its occupants, calculate their travel paths, or predict their destination. Their biggest issue is how to get close enough to hack into the system for full control.

Spain’s traffic authorities are deploying drones to help tackle drivers breaking the law ...
Photographic evidence gathered by the certified drones will be sent to civil guard traffic officers as soon as offences are committed, or relayed later to the relevant authorities.

I think we can figure out who the Relevant Authorities might be.

ht tps://www.theguardian.com/world/2019/aug/01/spain-deploys-drones-to-monitor-traffic-blackspots
(url fractured to prevent autorun)

Tatütata August 1, 2019 8:40 AM

You can either send a developer to jail (technically, he is responsible for driving your car, and if his mistake kills someone, ...) or give them a "Get Out of Jail Free" card.

Come on, courts everywhere have been generally very lenient with car drivers, even in the presence of recklessness leading to death.

Without developing a rich topic, just two examples:

The Guardian: Opinion Trial by jury
Dangerous drivers should not be allowed to choose trial by jury
It might seem an unlikely thing for a QC to advocate – but this is about justice: jurors are too ready to acquit drivers who cause death or injury to pedestrians and cyclists

Findlaw.ca : Is Canadian law too lenient when punishing driving offenders?

What a software developer, or rather his employer, might have to fear is a class-action suit, where they exist. Compare the fate of US and German Volkswagen (and other) car owners...

But thinking about it, let's see how the 737 Max case will turn out.

Bernd August 1, 2019 10:26 AM

I am pretty sure that there are a lot of human drivers out there who would totally try to drive through a fake tunnel, just like in the Road Runner cartoons.
As long as the machine is better than the average human driver, it is still preferable even when it has known life-threatening bugs -- because humans have life-threatening bugs too.

The difference: you could (and of course have to) eventually fix those bugs in machines, while they will keep being unfixable in humans.

Tatütata August 1, 2019 11:19 AM

I am pretty sure that there are a lot of human drivers out there who would totally try to drive through a fake tunnel, just like in the Road Runner cartoons.

I entered a common headline into a search engine, which returned the following suggestions...

gps drive into the lake
gps leads car into water
woman drives car into ocean
google maps drive into lake
gps mishaps

Who's guilty in this case? The navigation system, or the mindless driver?

Visiting friends from abroad were driving me home, and adamantly insisted on following the instructions from their GPS unit, which wanted to take us through a detour of almost 2km. I tried to tell them as nicely as I could "I bl**dy know where I live, d*mmit!", but they wouldn't believe me. They might just as well drive into a lake... Both had advanced engineering degrees.

In retrospect I figured out that the probable reason was that the device's map couldn't take into account a local traffic regulation (something like "No left turn between 7AM and 10AM" -- a beloved cash earner for the donut eaters), and just calculated the route as if this possibility wasn't available at all. Technology...

Bernd August 1, 2019 12:10 PM

Who's guilty in this case? The navigation system, or the mindless driver?
That's easy: the street is guilty. It should have changed its location so that it matched the instructions of the driver's navi!

No, seriously: it is always the fault of humans. If a driver ignores the obvious and follows the "orders" of his navi, then it is completely his own fault.
If a self-driving car kills someone because of a software bug, it's the fault of those who wrote the software (I am a software developer, by the way).

The real question is not about who bears each single instance of guilt. It is about how to have the least possible summed amount of guilt in a whole society.
Yes, someone who makes self-driving cars is guilty of the deaths that car causes because of software bugs. But the same person is also responsible for the avoided deaths that would have occurred without him making the self-driving car.
I would certainly have sleep issues if I were developing a self-driving car.

But it is very positive overall that some do develop the self-driving car. Someone has to do it, and it looks like the people now doing it are mostly technically competent enough to actually get a working better-than-human version done.

Clive Robinson August 1, 2019 12:15 PM

@ Bernd,

You could (and of course have to) eventually fix those bugs in machines, while they will keep being unfixable in humans.

Humans have only been fixing software bugs for around a lifetime. Evolution on the other hand has been fixing humans for longer than humans have been around.

Jim A. August 1, 2019 1:07 PM

Do they mean automotive technology? Because, cars have been killing kids for 100 years. All they have to do is compare the kill rate and set up the liability accordingly.

Yes, but while the children who are saved by the switch to self-driving cars are statistics, the quite possibly smaller number of children killed by the switch WILL have names. I think it will be a long time before the technology is ready for local streets, but it will be ready for the "walled garden" of the interstates comparatively soon.

Brad Templeton August 1, 2019 2:05 PM

This belongs in the least worrisome and silliest class of potential car vulnerabilities.

#1) Remote compromise attacks that could control or interfere with groups of vehicles -- worry about these the most

#2) Local radio compromise attacks which could still affect many vehicles. (Don't do V2V to solve these, otherwise worry about them.)

#3) Attacks requiring physical access to the vehicle -- worry, but not as much, since they don't scale.

#4) Attacks which are much more expensive than much simpler and more dangerous acts.

#5) Flying drone interference.

Plant as many organic orchards (oxygen & shade factories) per day as possible August 1, 2019 3:48 PM

SOD

I (as usual) am left wondering:

1) Who the hell keeps promoting self-driving vehicles?

We surely have not needed them for several decades, and surely don't need them now.

And self-driving test vehicles failed several arrays of technical tests. The results were published in some science news periodicals (sorry, I can't remember exactly which one; I guess you'd benefit from fact-checking me on that, right?). Yet I remember a chart of self-driving vehicle test results with about 20 tests being failed badly. In terms of percentage failure, it was something like 75-100% per test for about 15 of the 20-some tests... er, something like that.

No joke: Somebody who was assigned to violate my privacy routinely would be able to recall exactly which periodical that was.

2) How exactly can we block the financial and cultural and technical implementation of self-driving vehicles?

3) If self-driving vehicle implementation is not effectively blocked, what will be the subsequent damages beyond the traffic deaths and property damages?

Personally, thus far, I suspect that the robotics and optronics and scanning and pattern recognition and AI industries will be handed "the keys to the city", so to speak. And there will be more complex and subtle problems.

4) Thus, it's still within our survival interests to research and to discover and to know and to remember exactly how to safely disable all known mechanisms, if needed. Also, it's still within our survival interests, to research and to discover and to know and to remember exactly how to safely disable future not yet known mechanisms, if needed.

EOD

Bernd August 2, 2019 6:55 AM

Humans have only been fixing software bugs for around a lifetime. Evolution on the other hand has been fixing humans for longer than humans have been around.
Yes. But evolution can't fix fast. Evolution is about tenths of thousands of years yielding minor changes while humans got from steam to global electronic information network in two hundred years. They also had two world wars and countless local wars in that time frame. Two hundred years are almost nothing when it comes to evolution.
So if it is about having the safer technology, i would not bet on genetic evolution making that happen first.

We surely have not needed them for several decades, and surely don't need them now.
Some things that no one "needed" before they were there:
- Fire
- Horses
- Copper
- Steel
- Steam engines
- Electricity
- Coaches
- Cars
- Electronics
- Computers
- The Internet (you are using it right now!)

JonKnowsNothing August 2, 2019 9:08 AM

Some things that no one "needed" before they were there:
- Fire
- Horses
- Copper
- Steel
- Steam engines
- Electricity
- Coaches
- Cars
- Electronics
- Computers
- The Internet (you are using it right now!)

Hmmm, I looked at your list and I do not see a single item that qualifies as a "NEED". Convenience, yes. Need, no.

Especially the Internet. It is not a NEED item. Surveillance systems demand you use it and may require you to use it, but you do not NEED the internet.

Millions of people do not have the internet. Millions are about to join SplinterNets. Millions are no longer able to afford the cost of the internet or a SplinterNet.

It's nice to have but it isn't a NEED.

Bernd August 2, 2019 10:08 AM

Hmmm, I looked at your list and I do not see a single item that qualifies as a "NEED". Convenience, yes. Need, no.
Well, nihilistically we don't even need life itself, as we can't miss it if we don't have it.
But the "convenience" of having fun is not possible without being alive.

Clive Robinson August 2, 2019 10:54 AM

@ Bernd,

Yes. But evolution can't fix fast. Evolution is about tens of thousands of years

Actually it's quite a bit faster than people think.

Tolerance to alcohol developed in less than four thousand years across entire populations (being easily drunk is a great way to get yourself killed before you breed).

But epigenetics have been found to work across three generations.

But turn,

Yes. But evolution can't fix fast.

Around and instead ask yourself,

    Why does evolution NOT need to fix fast?

I'll note that "gene splicing" for "GMO" is very much like editing software. It can be done very quickly, which many think is not a good idea...

Perhaps we really should be asking,

    Why do we need rapid change, In areas we have no real knowledge of?

After all, the rapid pace of the software industry is fairly disaster-strewn, with "broken and over-featured" being rather more than the norm. We would not accept that in tangible physical objects; in fact we have legislation to try to ensure "Fitness for Purpose". So why do we accept it in intangible information objects?

Especially when software is rather rapidly becoming the engine in many physical objects that can easily kill...

JF August 2, 2019 11:46 AM

@Bernd

This discussion about autonomous cars arises here every couple of years and the arguments on both sides continue relatively unchanged, and it appears to me the naysayers have made up their minds and set their opinions in concrete.

I am reminded of reading in elementary school about the time when "horseless carriages" were new and in certain states there were laws which required the driver of such a vehicle to employ a pedestrian to run out ahead waving a red flag to warn others of the approach of the contrivance. It may well have been necessary, as I don't know how effective the brakes would have been. In any case, it certainly sounded silly to me in the fifties.

The reality is that automotive engineering progress has been relatively steady but incremental since then, until now. Self-driving cars are as revolutionary as the transition from horse-drawn conveyances to early steam, electric, and internal-combustion vehicles. And I expect the arguments against will, in a few years' time, sound about as silly as the red-flag laws did to me way back when.

JonKnowsNothing August 2, 2019 4:54 PM

@JF

I am reminded of reading in elementary school about the time when "horseless carriages" were new and in certain states there were laws which required the driver of such a vehicle to employ a pedestrian to run out ahead waving a red flag to warn others of the approach of the contrivance. It may well have been necessary, as I don't know how effective the brakes would have been. In any case, it certainly sounded silly to me in the fifties.

The reality is all automotive engineering progress has been relatively steady but incremental since then, until now. Self driving cars is as revolutionary as the transition from horse-drawn conveyance ...

Sorry to disappoint you but...

Horses (mules, burros, and donkeys) and horse carriages are in use today. Some are "sport horse" competitions, but there are plenty of places in the world where "modern modes of transport" do not exist.

You are lucky and wealthy if you can afford a horse in the USA but in those regions where modern transport doesn't work, having a horse or mule is vital along with being able to grow and harvest enough fodder for it to eat all year round. In such places there is no A$ or local feed mill where you can just pop over and buy a bale or two for the My Friend Flicka in your back yard.

It's a symptom of entitlement to presume that everyone is like us and has access to all that we have access to, and then to reinforce that position by dismissing the reality for much of the planet.

In some parts of the world they have internet but they don't have cars.

ht tps://en.wikipedia.org/wiki/My_Friend_Flicka
ht tps://en.wikipedia.org/wiki/Combined_driving
ht tps://commons.wikimedia.org/wiki/Category:Horse_buggies_of_the_Amish
ht tps://commons.wikimedia.org/wiki/Category:Horse_locomotives
ht tps://commons.wikimedia.org/wiki/File:Animal_traction_IMG_7039.JPG

Thunderbird August 5, 2019 10:25 AM

The old canard that everyone who doesn't think X is ready for prime time is a Luddite is a commonplace. I am pretty sure that the people who didn't think "horseless carriages" were ready for prime time were not discussing the security implications of autos, and regardless, they were not subject to tampering by people halfway around the world.

Some things are different from other things, and network-enabled computer intrusion is different from automobiles, and when it is *combined* with automobiles, it causes a concern. Incidentally, I was a bit amused to see the suggestion that a mitigation for comms tampering is "no vehicle-to-vehicle communication," since that is the only way I can see that you can make automated driving work well in the general case. If each unit has to act only on the information it can collect, without any idea what other units are going to do, it seems like it could lead to amusing emergent effects.
