AI vs. Human Drivers

Two competing arguments are making the rounds. The first is by a neurosurgeon in the New York Times. In an op-ed that honestly sounds like it was paid for by Waymo, the author calls driverless cars a “public health breakthrough”:

In medical research, there’s a practice of ending a study early when the results are too striking to ignore. We stop when there is unexpected harm. We also stop for overwhelming benefit, when a treatment is working so well that it would be unethical to continue giving anyone a placebo. When an intervention works this clearly, you change what you do.

There’s a public health imperative to quickly expand the adoption of autonomous vehicles. More than 39,000 Americans died in motor vehicle crashes last year, more than homicide, plane crashes and natural disasters combined. Crashes are the No. 2 cause of death for children and young adults. But death is only part of the story. These crashes are also the leading cause of spinal cord injury. We surgeons see the aftermath of the 10,000 crash victims who come to emergency rooms every day.

The other is a soon-to-be-published book: Driving Intelligence: The Green Book. The authors, a computer scientist and a management consultant with experience in the industry, make the opposite argument. Here’s one of the authors:

There is something very disturbing going on around trials with autonomous vehicles worldwide, where, sadly, there have now been many deaths and injuries both to other road users and pedestrians. Although I am well aware that there is not, sensu stricto, a legal and functional parallel between a “drug trial” and “AV testing,” it seems odd to me that if a trial of a new drug had resulted in so many deaths, it would surely have been halted and major forensic investigations carried out, and yet AV manufacturers continue to test their products on public roads unabated.

I am not convinced that it is good enough to argue from statistics that, to a greater or lesser degree, fatalities and injuries would have occurred anyway had the AVs been replaced by human-driven cars: a pharmaceutical company, following death or injury, cannot simply sidestep regulations around the trial of, say, a new cancer drug, by arguing that, whilst the trial is underway, people would die from cancer anyway….

Both arguments are compelling, and it’s going to be hard to figure out what public policy should be.

This paper, from 2016, argues that we’re going to need metrics other than side-by-side comparisons: “Driving to Safety: How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?”:

Abstract: How safe are autonomous vehicles? The answer is critical for determining how autonomous vehicles may shape motor vehicle safety and public health, and for developing sound policies to govern their deployment. One proposed way to assess safety is to test drive autonomous vehicles in real traffic, observe their performance, and make statistical comparisons to human driver performance. This approach is logical, but is it practical? In this paper, we calculate the number of miles of driving that would be needed to provide clear statistical evidence of autonomous vehicle safety. Given that current traffic fatalities and injuries are rare events compared to vehicle miles traveled, we show that fully autonomous vehicles would have to be driven hundreds of millions of miles, and sometimes hundreds of billions of miles, to demonstrate their reliability in terms of fatalities and injuries. Under even aggressive testing assumptions, existing fleets would take tens and sometimes hundreds of years to drive these miles—an impossible proposition if the aim is to demonstrate their performance prior to releasing them on the roads for consumer use. These findings demonstrate that developers of this technology and third-party testers cannot simply drive their way to safety. Instead, they will need to develop innovative methods of demonstrating safety and reliability. And yet the possibility remains that it will not be possible to establish with certainty the safety of autonomous vehicles. Uncertainty will remain. Therefore, it is imperative that autonomous vehicle regulations are adaptive—designed from the outset to evolve with the technology so that society can better harness the benefits and manage the risks of these rapidly evolving and potentially transformative technologies.
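
The paper’s headline numbers can be approximated with a back-of-the-envelope calculation. Here is a minimal Python sketch, assuming the commonly cited US human fatality rate of roughly 1.09 deaths per 100 million vehicle miles and the statistical “rule of three” (zero observed events in n trials gives an approximate 95% upper confidence bound of 3/n); the fleet parameters are illustrative assumptions, not the paper’s exact scenario:

```python
# Approximate US human-driver fatality rate: ~1.09 deaths
# per 100 million vehicle miles traveled.
human_rate = 1.09 / 100_000_000  # fatalities per mile

# Rule of three: after n fatality-free miles, the 95% upper confidence
# bound on the AV fatality rate is roughly 3/n. To show the AV rate is
# merely no worse than the human rate, we need 3/n <= human_rate.
miles_needed = 3 / human_rate
print(f"Fatality-free miles needed: {miles_needed:,.0f}")  # ~275 million

# Hypothetical test fleet: 100 vehicles running 24/7 at an average 25 mph.
fleet_miles_per_year = 100 * 24 * 365 * 25
print(f"Years of continuous testing: {miles_needed / fleet_miles_per_year:.1f}")
```

Demonstrating that AVs are some fixed percentage better than humans, rather than merely no worse, requires observing enough fatal crashes to compare rates, which is where the paper’s billions of miles and hundreds of years come from.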

One problem, of course, is that we treat death by human driver differently than we do death by autonomous computer driver. This is likely to change as we get more experience with AI accidents—and AI-caused deaths.

Posted on December 9, 2025 at 7:07 AM • 39 Comments

Comments

Clive Robinson December 9, 2025 8:26 AM

@ Bruce, ALL,

With regards,

“Two competing arguments are making the rounds.”

Whilst they appear to be competing they actually are both right.

The issue is which end of the “environmentally complex” scale they are addressing.

Automated systems deal mostly with low complexity, which “lulls humans” away from paying significant attention to the driving and the surrounding environment.

Humans, when alert and focused, will deal relatively well with high complexity, which “automated” systems do not have the depth or speed to deal with.

It’s why automated systems work well for underground trains, light railways, and trams that do not share space with road drivers or pedestrians.

From which we can draw a valid conclusion,

“It is other humans that make the environment complex. Remove them and automated systems will function effectively. Don’t remove the other humans and even experienced and focused human drivers can not deal with the complexities created.”

For obvious reasons nobody wants to talk about what would be seen by some as an infringement on their libertarian freedoms (to kill, maim, and mutilate other humans, or themselves).

Other countries, which have different cultures and morals and thus different driving legislation and regulation, tend to have far fewer “accidents” on the roads even though legally they can drive faster etc.

Speaking of so-called “accidents” or “acts of deities”: as I’ve noted several times in the past, there is no such thing, as the laws of nature very clearly apply.

What there is, is in fact,

1, A lack of time to “gather” information.
2, A lack of time to “process” what information there is.
3, A lack of time to “act”.

As can be seen, “time” is the overriding issue for humans: at some point events happen where there is not sufficient time to gather, process, and act on the information available.

In theory nearly all bullets can be dodged if they are fired from sufficient distance; in reality there is no time or information to do so.

So in theory any automated system that can “gather, process, and act” faster than a human will be safer than a human. The reality is whilst they might be able to gather faster, they are still way too slow on process and act.
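
As a rough illustration of the gather/process/act timing argument, here is a minimal Python sketch comparing total stopping distance for a human and an automated system; the reaction times and braking deceleration are assumed round numbers, not measured figures:

```python
def stopping_distance(speed_mph: float, reaction_s: float,
                      decel_ms2: float = 7.0) -> float:
    """Reaction distance plus braking distance, in metres."""
    v = speed_mph * 0.44704             # mph -> m/s
    reaction = v * reaction_s           # distance covered before braking starts
    braking = v ** 2 / (2 * decel_ms2)  # from v^2 = 2*a*d
    return reaction + braking

# Assumed reaction times: ~1.5 s for an alert human driver,
# ~0.3 s for a hypothetical sensor-to-brake pipeline.
for label, t in [("human, 1.5 s", 1.5), ("automated, 0.3 s", 0.3)]:
    print(f"{label}: {stopping_distance(60, t):.0f} m to stop from 60 mph")
```

Identical brakes, but the faster gather/process stage saves roughly 30 metres at motorway speed; the catch, per the argument above, is that the system only gets that benefit if it correctly perceives the hazard at all.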

But there is another issue we don’t talk about. Humans are not designed to “intrinsically self-sacrifice”.

Do we want an automated car that decides to “wrap dad around a lamp post” rather than “play skittles with a young mother and toddler” who “just stepped out onto the crossing”?

Have a think about that for a few moments.

Bob Paddock December 9, 2025 9:00 AM

@Clive Robinson, “As can be seen, ‘time’ is the overriding issue for humans: at some point events happen where there is not sufficient time to gather, process, and act on the information available.” This is where instinct and experience kick in.

A car pulled across the interstate in front of me.
There was less than 50 feet between us. Slamming on the brakes would just have skidded me into the side of the car. By instinct alone I put the gas pedal to the floor and managed to speed around the front of that car. No AI would ever take that course of action. Moments after that I was thinking, “What was that about? I never would have thought to do that.”

I was once in a meeting with the top Self Driving people of the world their major concern that day was:

We know our technology WILL kill a child.
How do we handle the perception management after that happens?

Nothing at all about how to prevent killing the child ever came up.

I’ve yet to see a compelling answer to the question of whether, if an accident is inevitable, you kill the pedestrian or kill the driver. Does the age of either or both enter into the answer?

Tim December 9, 2025 9:33 AM

Or we could all agree to scrap a decades-long boondoggle that seeks to “improve” a system of transportation that’s destructive in almost every way imaginable. Perish the thought!

Gorgasal December 9, 2025 10:24 AM

“a pharmaceutical company, following death or injury, cannot simply sidestep regulations around the trial of, say, a new cancer drug, by arguing that, whilst the trial is underway, people would die from cancer anyway”

This is a strawman. Of course the pharma company cannot sidestep regulations. But the fact that people die during clinical trials is baked into the regulations!

The authors’ argument, at least as given here, boils down to “if anyone dies in the treatment arm at all, the study needs to be stopped.” Thank goodness clinical trials absolutely don’t work like that: roughly speaking, studies are stopped early when, at an interim analysis, significantly more people have died in the treatment arm than in the control arm, and that is precisely how it should be. Because we should not be chasing perfection, but improvement.
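
A toy version of that stopping logic, as a sketch only; the numbers are invented for illustration, and real trials use pre-registered group-sequential boundaries (e.g. O’Brien-Fleming) rather than this simplified test:

```python
from scipy.stats import fisher_exact

# Hypothetical interim data: (deaths, participants) in each arm.
treatment = (12, 1000)  # 12 deaths among 1,000 on the new drug
control = (35, 1000)    # 35 deaths among 1,000 on placebo

table = [
    [treatment[0], treatment[1] - treatment[0]],  # dead vs. alive, drug arm
    [control[0], control[1] - control[0]],        # dead vs. alive, placebo arm
]
odds_ratio, p_value = fisher_exact(table)

# People died in BOTH arms, yet a monitoring board would let this trial
# run (or stop it early for overwhelming benefit): the treatment arm is
# doing significantly better. Improvement is the bar, not zero deaths.
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```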

It’s interesting how often a book that deals with data could have profited from someone with actual expertise in data analysis. Turns out there are many, many statisticians who routinely evaluate such clinical trials, and any one of them could, and hopefully would, have caught this nonsense.

Bart December 9, 2025 10:32 AM

The difference is, the autonomous drivers will steadily get better and stay that way.

Human drivers will always be terrible. Humans have a rich history of proving they cannot be trusted with giant death machines.

Steve December 9, 2025 10:44 AM

@Tim: “Or we could all agree to scrap a decades-long boondoggle that seeks to “improve” a system of transportation that’s destructive in almost every way imaginable.”

The Blaise Pascal solution: “All of humanity’s problems stem from man’s inability to sit quietly in a room alone.”

@Gorgasal: Agreed. More or less what I came here to say. We don’t expect treatments to be perfect. They just need to be better than the control group.

Brent W December 9, 2025 11:01 AM

@Bart: “The difference is, the autonomous drivers will steadily get better and stay that way.

Human drivers will always be terrible. Humans have a rich history of proving they cannot be trusted with giant death machines.”

Human drivers, with the help of vehicle regulations and traffic engineering, have steadily improved over time. The US is anomalous in this respect, as our fatalities have recently ticked up, while the rest of the developed world has continued to see great reductions.

There’s also the difficult-to-analyze difference between good drivers and bad drivers. The insurance industry has plenty of data to back this up: drivers are not equally likely to contribute to a traffic fatality, and bad or impaired drivers are orders of magnitude worse than good drivers. If our only goal is to reduce fatalities, rather than to increase adoption of a technology, stricter licensing requirements (and punishments), reduced speed limits, and engineering roads with more curves (to further reduce speed) would seem to be a clearer path towards that outcome.

To the original topic of the post, we really need more data on how autonomous systems perform in the types of situations which are most dangerous. Highway speeds, surprises on the road, and so on. I expect they would do better generally, mostly because they would follow the speed limit and drive predictably, but there are a lot of questions around their failure modes, particularly as the vehicles age and lack maintenance.

KC December 9, 2025 11:06 AM

@Bob Paddock

I’ve yet to see a compelling answer to the question of whether, if an accident is inevitable, you kill the pedestrian or kill the driver. Does the age of either or both enter into the answer?

I see Germany has an Automated Driving Ethics Report (2017) with 20 Rules.

Rule 9 prohibits AV decisions made on personal features like age, etc. It says those involved in generating mobility risks must not sacrifice non-involved parties. Rule 8 talks about genuine dilemmas and the need for an independent public agency to review lessons learned.

Mercedes-Benz also links to ISO 39003:2023 for AV Ethics. According to Gemini, it does not necessarily give ‘Trolley Problem’ answers, but guides manufacturers to have documented, ethically-sound reasoning for whatever decisions an automated vehicle makes.

Morley December 9, 2025 11:12 AM

The lack of regulation is disturbing. Elon says his cars are self-driving, and there’s no real consequence (to him) when customers or bystanders die trying that out. There should be an investigation, like the NTSB conducts for airliner crashes.

Michael Josem December 9, 2025 11:20 AM

“following death or injury, cannot simply sidestep regulations around the trial of, say, a new cancer drug, by arguing that, whilst the trial is underway, people would die from cancer anyway…”

Huh? That’s absolutely what people say, because people are not lunatics.

The goal of cancer drugs is to reduce the death rate, not to end death itself. Come on, this isn’t complicated: If a disease kills millions of people, and a vaccine or other treatment would change that to be thousands of people (thus saving millions of lives) then it would be criminal to stop the treatment because it doesn’t reduce the death rate to zero!

I’ve read the article three times, hoping that the writer made a typo or that I’ve misunderstood, and their argument is so deranged that I continue to think I must have misunderstood it. Are they really saying that cancer treatments that are imperfect should be prohibited? Come on, that’s either completely deranged or just evil.

Tom December 9, 2025 11:54 AM

it seems odd to me that if a trial of a new drug had resulted in so many deaths, it would surely have been halted and major forensic investigations carried out

It’s difficult to believe this is written by someone who’s thought about this for more than a few seconds.

How many people die during the course of a drug trial is irrelevant to the outcome of the trial; what matters is whether the people on the drug die at a higher or a lower rate than people not on the drug. This guy is basically arguing that we should go on letting people kill each other on the roads, hundreds of them each day, until we can guarantee that no autonomous vehicle will kill anyone, ever. If it reduces the rate by 99%, that’s still not good enough, because it still kills a few people each day. Waving that away as “argu[ing] from statistics” is the most horrible callousness; never mind that a technology could save tens of thousands of lives each year, that’s just “arguing from statistics”.
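
Putting the 99% figure into concrete numbers, using the roughly 39,000 annual US deaths quoted in the post:

```python
deaths_per_day = 39_000 / 365
print(f"{deaths_per_day:.0f} deaths per day at the current rate")     # ~107
print(f"{deaths_per_day * 0.01:.1f} deaths per day after a 99% cut")  # ~1.1
```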

mark December 9, 2025 12:23 PM

I would go for “you prove self-driving is safer”.

I will not go into a self-driving car. If I don’t want to drive… there’s this thing that I’m sure no one here has ever heard of or used… called “public transportation”. It needs to be made free (Mamdani’s promising that in NYC; here in Montgomery Co., MD we’ve already got it.)

And cars are so “efficient”. I’d see how “convenient” they were when I lived in Chicago and took the Metra (commuter rail) down the middle of the Kennedy, cruising past the traffic jam of drivers rushing downtown to get the “early bird” special on the parking lot: $20 A DAY for in before 7, out by 3:30.

Let me note there are reports in the last week of self-driving taxis going by stopped school buses with the stop signs out.

lurker December 9, 2025 1:16 PM

@Brent W

I would expect an autonomous vehicle system to have the time to analyze the age of the vehicle and its lack of maintenance. If the AI decides not to drive because the vehicle is too dangerous, a human will find a way to override it.

@Morley

The forensic attention paid to airplane and major railway crashes is proportional to the commercial losses suffered by the operators of those transport modes. Road “accidents” have relatively minor public effect. The Ford Pinto crossed a threshold of visibility. AI vehicles are not there, yet.

Clive Robinson December 9, 2025 1:45 PM

@ Bob Paddock, KC, ALL,

Ding, ding went the Trolley[1] Dilemma.

“I’ve yet to see a compelling answer to the question of whether, if an accident is inevitable, you kill the pedestrian or kill the driver. Does the age of either or both enter into the answer?”

The answer to the first question is,

“Whatever is the minimum at the time.”

The problem with that is what your second question hints at, and why I used “dad” and “a young mother and toddler” in my phrasing of the Trolley Dilemma.

At the time of the event most would argue “mother and toddler” on the notion of “micromorts”

However, if I changed “dad” to “sole breadwinner of a family with five young children,” how does that affect the calculus?

Put simply there is no right answer other than “hard segregation” between vehicles and pedestrians.

Hence my comments about trains and trams where segregation is usually ensured with suitable barriers.

@ KC,

“guides manufacturers to have documented, ethically-sound reasoning for whatever decisions an automated vehicle makes.”

The problem is that of all questions that are,

“Predicated on missing or incomplete information.”

And it’s kept judges busy for more than a millennium of “common law” years and in the process made them look like psychopaths.

Ultimately there will always be examples where any set of rules to enshrine,

“ethically-sound reasoning”

will fail in the most awful of ways.

Because reasoning is “rules based,” and ethics is morally, and thus emotionally, based.

Consider,

“Thou shalt not kill except in self-defence.”

The first half is a rule; the second half is an emotionally based defence from the point of view of an observer long after the event. In English common law there was no second half, “murder was murder,” until after the French introduced the defence…

It’s subsequently been argued over and over, that this gave “evil lawyers” ways to stop their “evil clients” facing “true justice”.

@ Tom,

“It’s difficult to believe this is written by someone who’s thought about this for more than a few seconds.”

Actually, thought is not required; it’s standard practice.

The most horrific example I know of was a UK drug trial that the MSM called the “Elephant Man Drug Trial” because of the visible effects it had. They injected half a dozen fit, healthy young men, and within an hour it put them into organ failure, a state known as “Disseminated Intravascular Coagulation” (DIC), normally seen in severe sepsis (medical practitioners say it really stands for “Death is Coming”). The trial was halted immediately and various investigations and enquiries followed,

https://www.leighday.co.uk/news/blog/2016-blogs/ten-years-after-the-elephant-man-drug-trial/

The real result is that many “drug trials” are now done at arm’s length through agencies that carry them out in far-away foreign nations where there are either no regulations, or no or low compensation if harm happens.

After all,

“Why test on expensive lab animals, when desperate humans can be had for a few dollars?”

Oh, and then there is the “don’t give patients the actual drug used during trials” issue. This happened with most of the US C19 vaccines. The production process used for the mRNA trials was expensive and had complications. The process used for actual mass manufacture caused a significant number of injuries to people’s hearts. In my case I got what was called a blood clot in the atrium the size of the end of my thumb, which took my cardiac output down to about 1/20th… It’s still below 1/4 of normal cardiac output and very irregular. Thus I still have regular blackouts… I’m not allowed to drive, use machinery, or effectively even work, as I’m a danger to others and cannot be covered under required employers’ insurance, and thus cannot be employed…

[1] Misquote of “The Trolley Song”, sung by Judy Garland in the 1944 film Meet Me in St. Louis.

K.S. December 9, 2025 2:15 PM

Push for self-driving is also a push for tighter government control over people’s mobility. Self-driving cars are trivially easy to centrally control.

Richard December 9, 2025 2:18 PM

We’ve had the means to dramatically reduce traffic deaths for decades, but not the political willpower to actually enact them. Focusing on a single, expensive, inefficient, privately owned technology when investment in modern infrastructure and public transit would reduce death and injury for all road users is a serious case of tunnel vision.

Not really anonymous December 9, 2025 3:31 PM

Who to kill?

I’ve yet to see a compelling answer to the question of whether, if an accident is inevitable, you kill the pedestrian or kill the driver. Does the age of either or both enter into the answer?

I think it is pretty clear in some cases. If it is my car, it had better be acting in my interests. If it is some taxi-like variant, then things get more complicated. Do I get treated as the owner while paying for a trip, or is the car acting in the real owner’s interest? Does the real owner have a duty to have the car act in my interests, or am I treated the same as any other person? Maybe I’m even treated worse, because I supposedly signed away a bunch of rights to get the ride, and I or my estate will be less able to obtain redress than other people.

Ray Dillinger December 9, 2025 6:07 PM

We will continue to have the argument about whether we should emphasize driver and vehicle safety or the safety of other people and their property, but we have that argument anyway about SUVs and sports cars. That’s not a new argument. That’s an argument about decision-making priorities, not about whether or not the systems are good enough at implementing those decisions. Self-drive systems will not solve it.

This issue is temporary. The facts are changing FAR faster than the regulations that define our policy for dealing with the facts. Whatever regulations we make in response to the current state of the systems will be misapplied when dealing with the self-drive vehicles of a decade from now.

While I don’t trust these systems YET, I have no doubt at all given the volume of data being gathered and the systems ramping up to gather more, that within ten years they’re going to be unequivocally better than human drivers, regardless of what decision-making priorities regulations impose on them.

Here is why:

The argument: We would need millions or possibly billions of driven miles to gather the data to evaluate these vehicles for legislative purposes, or to gather the training data to make them better.

The facts: That’s such an easily-reached goal that it’s laughable, and besides we won’t stick to that goal. If we haven’t already gathered a billion miles of training data, we’ll have it within a year or two. We’ll have that much data PER DAY within a few more years. And then we’ll substitute a billion miles of simulated driving against testing data, on cars trained with HUNDREDS of billions of miles of training data, to make legislative decisions. We will substitute simulated testing for highway testing because vehicle manufacturers have money and legislators need to finance campaigns.

Here is why I’m confident that we’ll have that much data:

In the US people drive roughly three trillion miles per year. If even two percent of those miles were driven (cf. Waymo, Tesla, etc.) by automated systems gathering training data, you would get tens of billions of miles of data every year. It’s not two percent YET, but depending on adoption the US may reach the billion-mile mark of road miles actually driven by full self-drive systems within a few years. And those road miles automatically driven are the tip of a vast iceberg of training data.
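
A quick sanity check on those orders of magnitude; the annual VMT figure is the FHWA’s approximate total, and the 2% share is the hypothetical above:

```python
us_vmt_per_year = 3.2e12   # ~3.2 trillion US vehicle miles traveled per year
autonomous_share = 0.02    # hypothetical share driven by automated systems

print(f"{us_vmt_per_year * autonomous_share:.1e} data-miles per year at 2%")  # ~6.4e10
print(f"{us_vmt_per_year / 365:.1e} total miles driven per day in the US")    # ~8.8e9
```

At those totals, logging even a double-digit percentage of all US driving would yield the billion miles of data per day claimed below.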

Other vehicles are equipped with “L2 systems” that do things like assisting lane keeping, applying brakes to avoid collision, managing lane changes to maintain cruise control speed, limiting cruise control speed to avoid getting too close to vehicles in front of you, monitoring local speed limits to enforce compliance on fleet vehicles, etc. This includes recent models of Ford, Chevrolet, BMW, Volkswagen, Audi, Porsche, Rivian, Kia, Hyundai, Volvo, Mercedes-Benz, Nissan, Toyota, Lexus, Honda, Acura, and Subaru.

These are not full self-drive systems. But every one of those “L2 systems” is gathering data just as constantly and eagerly as the full self-drive vehicles, whether their driving-assist features are currently in use or not. Taken together, and given the intention of manufacturers to have the option “available” on most new vehicles, we’re going to get that billion miles of training data every month, then every week, then EVERY DAY within a few years as new vehicles with the systems installed replace a larger and larger percentage of existing cars. Keep in mind that all this is still just within the US. There’s a whole planet full of people out there and most of them drive at least a quarter as much as Americans do. And data will be harvested just as aggressively from their driving.

As for “option availability,” who are we kidding? The manufacturers want that data and they’ll put the data-gathering systems into every new car whether the consumer pays for the “option” of being able to actually use the data for any kind of driving assist or not. And also, whether the consumer wants it or not. Or has any knowledge or control over who they share it with. That’s a whole different debate that we won’t get into here.

369 December 9, 2025 7:28 PM

https://news.yahoo.com/news/tech/science/articles/ai-making-spacecraft-propulsion-more-160000022.html

‘To make interplanetary travel faster, safer and more efficient, scientists need breakthroughs in propulsion technology. Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs.

From optimizing nuclear thermal engines to managing complex plasma confinement in fusion systems, AI is reshaping propulsion design and operations. It is quickly becoming an indispensable partner in humankind’s journey to the stars.

…commonly known as reinforcement learning, teaches machines to perform their tasks by rating their performance, enabling them to continuously improve through experience.

Reinforcement learning can improve human understanding of deeply complex systems – those that challenge the limits of human intuition. It can help determine the most efficient trajectory for a spacecraft heading anywhere in space, and it does so by optimizing the propulsion necessary to send the craft there. It can also potentially design better propulsion systems, from selecting the best materials to coming up with configurations that transfer heat between parts in the engine more efficiently.

In regard to space propulsion, reinforcement learning generally falls into two categories: those that assist during the design phase – when engineers define mission needs and system capabilities – and those that support real-time operation once the spacecraft is in flight.

Military applications, for instance, must respond rapidly to shifting geopolitical scenarios. An example of a technology adapted to fast changes is Lockheed Martin’s LM400 satellite, which has varied capabilities such as missile warning or remote sensing.’

MK December 9, 2025 9:35 PM

@mark

Montgomery County is about 500 square miles. Los Angeles County is about 4000 square miles. There’s no way a sufficient amount of “public transportation” (for free) could be provided. There will be automobiles. Some will be self-driving. I think it will be an improvement overall. Current Tesla FSD technology would make LA streets much safer.

Paul Sagi December 10, 2025 3:37 AM

Clive:
“Do we want an automated car that decides to “wrap dad around a lamp post” rather than “play skittles with a young mother and toddler” who “just stepped out onto the crossing”?”
A couple of years ago I thought about the same situation; my conclusion is the car should act in its own self-interest, i.e. choose the collision that does the least damage to the car, thus protecting the car’s occupants. Hitting pedestrians is thus preferred to hitting a lamp post.

Bob:
I faced the same situation at a ‘T’ intersection and had the same reaction. The car passed a couple of feet behind my car, hit the median, spun and ended up in a ditch by the roadside.

Richard Kirby December 10, 2025 3:47 AM

@Not really anonymous

You might well own the physical car, but almost certainly won’t own the AI software, and it will probably be frequently auto-updated without your consent being required, due to safety considerations.

Perhaps they will provide a configurable option, so that if you are an old man near the end of life, you can select the altruistic option of risking injury to yourself and damage to the car rather than hitting a young mother with children. Perhaps even run through the whole trolley problem à la https://en.wikipedia.org/wiki/Moral_Machine when you first sign on to your new car.

Not really anonymous December 10, 2025 5:45 AM

You might well own the physical car, but almost certainly won’t own the AI software, and it will probably be frequently auto-updated without your consent being required, due to safety considerations.

No, I drive a car where that isn’t an issue and expect to continue being able to do that for as long as I can drive. I won’t be driving anything that drives itself. (I don’t even have cruise control.) Very likely grandfathering of currently available cars will still be in effect by the time I need to give up driving.

Perhaps even run through the whole trolley problem à la https://en.wikipedia.org/wiki/Moral_Machine when you first sign on to your new car.

The Moral Machine doesn’t seem to be quite the same as the trolley problem, as it appears you need to actively participate in causing someone’s death with either choice, while in the typical trolley problem you only need to actively participate in one of your two choices. The idea of setting an altruism factor seems interesting, but it is going to be complicated to explain how it affects the car’s choices and how you value various individual beings and property.

finagle December 10, 2025 6:27 AM

A few thoughts to add to the debate.

Driving offences are handled separately in the UK and maximum sentences are laughable compared to crimes committed with other weapons.

Deaths linked to AI driving systems are more likely to be handled as civil cases, not criminal. Once self-driving is legal, lawmakers are unlikely to want to be bothered with moral or technical questions; they are going to leave it to the bereaved to sue the car makers. Once in a while they may take action against a particularly public and egregious manufacturer, but in the main they will point to civil law and hide.

Self-driving is already legal in a lot of places. What we’re really discussing is the step to make it mandatory, which would decimate departments of the civil service (no driver licensing), the police (no need for road enforcement), the financial sector (no car insurance differentiation by driver), and make driving instructors and test centres a thing of the past. The resistance to such a move comes from those who have an interest in keeping the status quo, regardless of any studies or arguments to the contrary, and they are the ones currently making the decisions.

@Not really anonymous
ULEZ has shown how easy it is to grandfather vehicles. It is a LOT less than 10 years.

Road use is already changing drastically with the enormous number of (illegal) e-bikes and scooters. A comfortable 50% of traffic in my (admittedly very urban) area is delivery-based, using mopeds (almost all on L plates, so basically untrained) or e-vehicles (almost all not legal). Self-driving systems need to handle this shift in road usage, where you have kamikaze users controlling vehicles with inadequate training or regard for safety.

Montecarlo December 10, 2025 7:40 AM

There are two scenarios that would cause a randomized control experiment to be terminated.

One is when the experimental group is experiencing a significantly worse outcome than the control group. The other is when the experimental group is experiencing a statistically significant benefit. In this case, it may be unethical to deprive the control group of the benefits of the intervention.

Given the rapid improvement of self driving vehicles, the second outcome is most likely. So the question is whether it would be ethical to allow human drivers to continue to operate vehicles, given the danger this poses to pedestrians and other vehicles.

Many people consider freedom of movement to be a fundamental right. A possible compromise: the government does not perform a cost/benefit analysis of your vehicle trip (i.e. you retain the freedom to travel), and in return the vehicle must do the driving to minimize the harm to third parties.

Kenneth December 10, 2025 8:28 AM

I’ll continue to use this example as a litmus test.

When I can get into a car and legally drink a six-pack while the car moves towards my destination, and if during that trip I get into a wreck for some reason I can sue the manufacturer, then I will trust autonomous driving.

Otherwise this is just a gimmick.

Walker, cyclist, driver December 10, 2025 9:06 AM

Some critical issues we need to decide on:

Who has liability, the “driver” or the car vendor? This has to be made very clear.

Will self-driving cars let their “drivers” break the rules of the road, like speed limits? Drivers will want that, but authorities should mandate that the cars must refuse.

This in turn means that speeding drivers will not be using self-driving mode, not unless it becomes mandatory and enforced at some point. So the most dangerous drivers will stay dangerous, which reduces the safety benefit of self-driving cars.

Car vendors will want the driving environment to be as simple as possible, to get the maximum benefit from automation, and be able to drive faster safely. That means, once they have the numbers, they will likely want to ban not just manually driven cars from certain roads, but also cyclists. And they’ll want to remove pedestrian crossings, which means fewer and more cumbersome (stairs) crossings for pedestrians.

On the other hand, self-driving could be a benefit for cities if handled right: e.g. we could at some future point mandate that to be allowed to drive a car in city centres you have to do so in self-driving mode, and the car must follow a low speed limit like 30 km/h wherever pedestrians and cyclists may be encountered. Drivers who refuse this or don’t have self-driving cars would then park outside the city centre and use public transit (which will have to be added where missing). This would make cities a lot better for everyone.

Clive Robinson December 10, 2025 9:22 AM

@ Paul Sagi,

With regards

“my conclusion is the car should act in its own self-interest”

Just hope it’s not fitted with an ejector seat like the Goldfinger 007 car 😉

I always used to “fail the Trolley Test,” because I would point out that it was not an “either/or”: there was a third, unstated option, “do nothing.” Which would, when you think about it, be both legally and morally the best option, as

“You would be doing no wrong, that you could be blamed for.”

Whilst I suspect that would be true for 99 in 100 people, in my life I’ve always “rendered assistance” at car accidents and other events of misfortune, without thinking about it; I just instinctively did it.

However I still sort of hear my father giving sage advice,

“The place to be when there is trouble is somewhere else.”

Along with the advice that,

“If something feels off, then trouble is probably on its way.”

He very definitely believed that what people called “a sixth sense” was real, in that evolution had taught our “monkey brain” to spot signs of danger, and other things that our evolved conscious human brain chose to ignore…

It’s why one of those “evolved human brain” sayings you hear said a lot really bugs me. It’s,

“Fight or Flight”

Because in nature there are way more than just those two options.

One more commonly seen is “Freeze” or “playing dead”, also called “Playing Possum”. Then there are the “fainting goats”… But also there are ducks, geese, and swans that, to protect the nest/brood, will pretend to have a broken wing to lure a fox or dog etc. away.

Then it starts getting messy ={ with bodily fluids getting ejected towards the attacker. One I’ve actually seen is the sea cucumber, which in effect turns itself inside out and ejects its guts and their contents at you…

Now imagine you had to write all that up as part of what @KC referred to as a,

“Documented, ethically-sound reasoning for whatever decisions an automated vehicle makes.”

It makes me think that such a document would never get finished 😉

Gautam Anand December 10, 2025 11:32 AM

AVs on public roads are a very narrow lens to judge this topic by. Public roads and AVs are a first-world thing and will stay so for a decade or more.

The comparison with drug trials (for new medicines, for cheaper medicines, for locally made medicines), now that’s a worldwide requirement.

The topic definitely needs another comparison.

Ismar December 10, 2025 3:59 PM

Most of the above discussion is, unfortunately, irrelevant.
Ask yourself why self-driving cars are being introduced: is it to make the roads safer or to increase profits for investors?
Since the investors will not be the ones paying the price on the roads, and think they will get benefits from this type of investment (rightly or wrongly), we will be seeing more of these autonomous vehicles on our roads.
We make this kind of mistake on this blog a lot: we spend time and effort discussing in detail facts which are not likely to affect outcomes in the real world, because we fail to understand what drives real-world decision making.

Clive Robinson December 10, 2025 9:34 PM

@ Ismar, ALL,

With regards,

“Ask yourself why self-driving cars are being introduced: is it to make the roads safer or to increase profits for investors?”

There are actually three groups involved,

1, International scope corporations.
2, Federal / State legislators.
3, Those licensed to drive.

They form a hierarchy of realpolitik power with corporations at the top.

The real question you need to think about is,

“What’s in it for Federal and State entities?”

One aspect of this is given above by @K.S. as,

“Push for self-driving is also a push for tighter government control over people’s mobility. Self-driving cars are trivially easy to centrally control.”

It’s not just “controlling mobility”; it’s “full-on surveillance,” as evidenced by both OnStar and Tesla. In effect, every time you put a computer, with storage and communications independent of the vehicle owner, in a vehicle, the driver becomes a source of income.

Not just for the corporation and data brokers, but, via “evidence collected”, for both vehicle and medical insurance companies, and for both federal and state law enforcement via fines etc. In theory the vehicle could also be commanded to “take you prisoner” and deliver you to a place where you can be “dealt with” without your permission etc. (think Chicago’s unlawful Homan Square facility “black site”,

https://en.wikipedia.org/wiki/Homan_Square_facility

Then there will be a fourth group added to the list: third-party commercial entities with the ability to “file claim” for “alleged monies owed,” who will become able to “disable and collect” the vehicle without any proof, or liability for their actions.

I could go on adding other groups who can use your self-driving vehicle against you, but I think the general idea is clear. “Self-driving vehicles” will become not just a method of collecting income, but also a method by which you are, in most of North America, in effect “enslaved”…

I can see a new “market” opening where people will supply “aftermarket services” that add “flick of a switch” disabling mechanisms, etc. Till of course it’s made illegal for “Health and Safety” or “Think of the children” arguments.

Do I,

“Sound paranoid, cynical, or both?”

I assure you I’m not; it’s a “logical path of behaviour” for federal or state legislators to go down, pushed by various “vested interests” to a “logical conclusion” based on existing and previous behaviours.

There are reasons why the USA has the world’s highest prison population per capita, and “self-driving cars” will eventually become a tool that makes those incarceration figures worse.

For those that feel otherwise, please make your reasoning for that viewpoint clear in your comment…

Paul Sagi December 11, 2025 2:32 AM

I had a front tire explode (a 2″ X 4″ hole opened in the sidewall) while I drove in the middle lane at 55mph. I had to avoid other cars while moving to the road shoulder.
I wonder how an autonomous vehicle would react to a tire that instantly deflated.

Paul Sagi December 11, 2025 7:17 PM

@Clive Robinson: I suppose giving a person a binary choice when a third option exists means either 1) the person offering the choices is trying to manipulate them, or 2) the person offering the choices does not realise there’s a third option.

Corporations indeed make self-driving vehicles for profit, with varying levels of sophistication and responsibility.

Elon Musk over-hypes the driver-assist feature of Tesla vehicles, leading to crashes. Some drivers have reported their vehicle suddenly braked on a clear road or suddenly swerved and crashed into a roadside hazard at a road exit when they intended to continue going straight (not exiting).

If you get the chance, take a look at the self-driving vehicles from China: BYD, Omoda, etc.

Will steering wheels and control pedals disappear? Then how does one suddenly change direction and swerve into the car park of a shop the driver suddenly remembered they need to visit as it came into sight? How does one program the car’s destination at the beginning of a trip? What if there’s a temporary road closure because a truck has spilled its load? What if GPS fails?

Gert-Jan December 12, 2025 8:57 AM

It’s a complicated topic.

The way a driverless car works is different compared to a human driving a car. That also means that its performance may be better or worse depending on the circumstances.

What you’d want are minimal safety standards, either generic ones or ones for classes of situations. In the end, it will be a political decision how high this bar should be.

I am not a statistician, but I assume it will be very difficult (if not impossible) to prove a driverless car can meet such a safety standard. It is even more difficult to prove that it will keep meeting such a standard after a software update, a hardware update, and/or a few years of wear and tear.

Ideally – from a safety point of view – the driverless car would become inoperable if circumstances changed in ways that would lead to below-bar performance: for example during heavy fog or rain, when too many cameras or sensors are underperforming, when the vehicle is overweight, when it doesn’t understand the road signs (e.g. abroad), etc.
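
A minimal sketch of what such a gate might look like in code; the thresholds are invented for illustration, and real operational-design-domain limits would come from a manufacturer’s safety case:

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    visibility_m: float      # estimated visibility from the perception stack
    healthy_sensors: int     # sensors currently passing self-checks
    total_sensors: int
    load_kg: float           # payload above curb weight
    signs_recognized: bool   # does the system understand local signage?

def may_operate(s: VehicleState) -> bool:
    """Return True only if every illustrative ODD condition holds."""
    return (s.visibility_m >= 150
            and s.healthy_sensors / s.total_sensors >= 0.9
            and s.load_kg <= 600
            and s.signs_recognized)

# Heavy fog drops visibility below the floor: the car refuses to drive.
print(may_operate(VehicleState(80, 8, 8, 400, True)))  # False
```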

Paul Sagi December 13, 2025 12:38 AM

I wonder how software updates of autonomous vehicles will be done. Will they be pushed to the vehicle whilst it’s in motion? Updating a moving vehicle could lead to a crash of the vehicle. I propose that software updates be pushed to vehicles only whilst they are being recharged.

E.R. December 16, 2025 5:21 PM

@Bart
The difference is, the autonomous drivers will steadily get better and stay that way.

I guess you mean “steadily” up to a particular point that depends on factors such as the quality of sensor data.

Or how much effort a profit-seeking entity feels compelled to invest into such improvements.

Not to forget that Waymo was recently in the news in Austin, Texas because their self-driving cars had ignored the stop signs on school buses that had stopped to let out children. Local law enforcement requested that Waymo cease operations in that city until it had resolved that safety issue. The company refused.

The second issue that needs to be remembered is that companies like Waymo steadily exaggerated the amount of physical training miles driven by their vehicles, or included virtual miles in that number. The original intention was to dilute the number of injuries per driven mile. But that may be due to the “creative redefining of facts” illness they acquired while being part of Google.

Clive Robinson December 17, 2025 6:39 AM

@ E.R., ALL,

With regards “more training data” arguments, as you note,

“The second issue that needs to be remembered is that companies like Waymo steadily exaggerated the amount of physical training miles driven by their vehicles, or included virtual miles in that number.”

It needs to be remembered that,

“They actually do not really work.”

Because the “useful information” per mile rapidly goes down to the level of a “random event” at best, over increasing distance (or time, if speed limits are being obeyed).

In essence you,

“See nothing new, just the same old same old”.

But there is another issue which is the “implicit assumption of scaling”

The current AI LLM and ML system “investment vehicles” make great claims about agent AI just needing to be “larger”…

For very practical reasons I’ve mentioned before, “scaling up” rarely gives you anything useful.

Look at it this way,

I muck out the animals at the farm and put the 5h1t on a pile. Ask two questions,

1, How much do you need to pile up to make “cattle crap” useful?

2, As you pile it up, at what point does the rising mound of “cattle crap” become dangerous?

There is the “grains of sand” argument, which basically says that the point of danger always arrives very rapidly, whilst the point of being useful very, very rarely if ever happens.

Thus scaling up rarely if ever produces something “safe and efficacious”.

rdbrown December 28, 2025 2:09 AM

Philip Koopman is worth reading on this subject; he’s been publishing about it for the last 10 years or so, coming from embedded-systems safety.

Koopman, P., Widen, W. (2024). “Redefining Safety for Autonomous Vehicles.” In: Ceccarelli, A., Trapp, M., Bondavalli, A., Bitsch, F. (eds), Computer Safety, Reliability, and Security. SAFECOMP 2024. Lecture Notes in Computer Science, vol 14988. Springer. doi:10.1007/978-3-031-68606-1_19

One of 671 results on Google Scholar; many are available from arXiv or cmu.edu.

https://safeautonomy.blogspot.com/
