NTSB Investigation of Fatal Driverless Car Accident

Autonomous systems are going to have to do much better than this.

The Uber car that hit and killed Elaine Herzberg in Tempe, Ariz., in March 2018 could not recognize all pedestrians, and was being driven by an operator likely distracted by streaming video, according to documents released by the U.S. National Transportation Safety Board (NTSB) this week.

But while the technical failures and omissions in Uber’s self-driving car program are shocking, the NTSB investigation also highlights safety failures that include the vehicle operator’s lapses, lax corporate governance of the project, and limited public oversight.

The details of what happened in the seconds before the collision are worth reading. They describe a cascading series of issues that led to the collision and the fatality.

As computers continue to become part of things, and affect the world in a direct physical manner, this kind of thing will become even more important.

Posted on November 13, 2019 at 6:16 AM • 60 Comments

Comments

Bob Paddock November 13, 2019 7:22 AM

I attended a Presidential Commission on Robotics meeting at Carnegie Mellon University during the Obama administration.

UBER, Google, and a whole lot of government acronyms such as IARPA were present.

What they spent most of the day obsessing about was:

“We know, statistically, our technology WILL kill a child. How do we handle the public and political fall out of that?”

In my view they were more concerned about that than looking for technical solutions.

I brought up a personal experience:

I once almost had an accident on Interstate 80, in a construction zone.

Somehow a car that was overpacked pulled out from between some construction equipment, in front of me.

This driver could not see out any window but right in front of him, and he was pulling across the interstate traffic, not going with the flow. He could not see out the passenger window, that was facing me.

He shot out from between the construction equipment about twenty feet in front of me while I was doing 45 MPH; remember, it was a construction zone.

The correct solution to the problem was to floor the gas, so that I could get in front of him while there was still space, and get off onto the right-hand berm of the road.

Any solution that applied the brakes would have caused a collision.

The experts in the room replied that Aircraft Autopilot software may have been able to avoid a collision in this example.

David Rudling November 13, 2019 8:49 AM

@Bruce
This goes back to the point I made to you and the other speakers during the questions following your talk “Securing a World of Physically Capable Computers” at Oxford University on June 17 this year.
All too often rubbish software has been able to avoid liability because frequently it is not treated in law as a product in the normal way. Often the user just has a license to use it.
I drew a comparison with the Boeing 737 Max situation and pointed out that Boeing could not avoid the liability for (allegedly) selling a defective product (the airplane) even if the defective component was software.
The same will surely hold true for all physically capable products in which software is embedded. The manufacturer will be liable for a defective overall product irrespective of which component of the product is defective.
I suggested that product liability judgements in the courts had the potential to be a significant future controlling force in this area. I don’t think you were convinced, but I still maintain my view.

parabarbarian November 13, 2019 8:57 AM

@ Bob Paddock

“We know, statistically, our technology *WILL* kill a child. How do we handle the public and political fall out of that?”

In my view they were more concerned about that than looking for technical solutions.

Maybe that is because they believe they can overcome the technical problems. The perception problem is another narrative. Cars are dangerous. Self-driving cars may be much less likely to cause a death, but they will not be perfect. Given a sufficiently terrible incident, the neo-luddites will form up an Everytown for Car Safety or Moms Demand Action for Driverless Car Sense or some similar group. Add a wealthy benefactor to finance a propaganda campaign, and that is a legitimate concern.

The experts in the room replied that Aircraft Autopilot software may have been able to avoid a collision in this example

Were any of the experts from Boeing? 🙂

JonKnowsNothing November 13, 2019 9:09 AM

Autonomous systems are going to have to do much better than this.

Well… sorry to inform but… they can’t.

They cannot do better because the entire project systems are directed towards MONEY and PROFIT.

What they want is to commit murder by proxy and not go to jail.

We as a society allow this all the time, sometimes we do eventually say “STOP” but generally it takes a long time and a lot of deaths and then these same companies find a way to shop-the-slop to other countries.

If while driving a car, I see a pedestrian and I ignore the pedestrian (several times) and then speed up my car to a rate such that when I hit the pedestrian it causes death… I don’t get an IPO or Venture Funding or a Bonus for that.

Airline autopilot programs don’t work either. The NTSB always comes back with Pilot Error because someone, in 1/1,000,000th of a second, didn’t push or pull or flip or flick a switch that would have disengaged the computer.

Our Shoddy Software Programs that perpetuate the MONEY AT ALL COSTS paradigm of development will continue until enough folks just drop out of the market.

It’s not going to be that long as the economic inequities slide farther apart.

  • The new problem will be what will YOUR family do when you are the victim?
  • How much will YOUR family suffer by false claims about what happened?
  • What impediments will be placed to prevent YOUR family from finding out the truth?
  • What changes will not happen because YOUR death or paralysis are the edge and corner cases?

They are not able to do better because they have no answers that do not reduce their PROFIT or TIME TO MARKET.

In fact, they have No Answers At All.

ht tps://arstechnica.com/tech-policy/2019/11/uber-ceo-downplays-khashoggi-murder-then-walks-back-his-comments/

ht tps://www.theguardian.com/technology/2019/nov/11/uber-jamal-khashoggi-saudi-arabia-mistake-dara-khosrowshahi
(url fractured to prevent autorun)

Evan November 13, 2019 9:54 AM

Contrast the Boeing 737 MAX problems with this. There are technical flaws in the flight control software that Boeing shipped, so the 737 MAX was grounded indefinitely until those were resolved or mitigated. Contrast this with driverless cars, where the silence is deafening. A very germane security question is, what do we do about the fact that tech industry lobbyists have effectively captured the agencies responsible for regulating them? We won’t be able to install any effective regulation of driverless cars, or machine learning algorithms, or any other aspect of technical automation, until that changes – and it’s a much bigger long-term public safety threat than this or that car going haywire.

Alex K. November 13, 2019 10:39 AM

@parabarbarian :

Self driving cars may be much less likely to cause a death but they will not be perfect. Given a sufficiently terrible incident the neo-luddites *will* form up an Everytown for Car Safety or Moms Demand Action for Driverless Car Sense or some similar group.

I can’t imagine that the protest group will be named anything other than “Mothers Against Driverless Deaths”, when the organization currently using that acronym begins to fear their current mission will be deprecated by this technology.

Adrian November 13, 2019 11:51 AM

There are both an objective issue and a subjective issue.

Objectively, every time I drive, there’s a terrifyingly real probability that I’ll be killed or even (gulp) that I’ll kill someone.

Subjectively, I feel like I have more control over that than the raw statistics suggest. Most drivers, including myself, think they’re above average, which is possible but unlikely. We believe that it’s the aggressive and inattentive drivers who cause crashes, so they’re the ones who’ll die, even though we know that’s not really true.

When you let someone/something else do the driving, you no longer have that subjective assurance that you control your destiny. Even if the algorithms statistically do orders of magnitude better than average human drivers, we lose that (false) sense of security that feeling in control provides. When a self-driving car causes a crash that most human drivers would have avoided, it’s going to be hard to convince them that’s a reasonable trade-off for the thousands of other crashes that would have happened at the hands of human drivers.

Z.Lozinski November 13, 2019 12:39 PM

Perhaps it’s time to adopt the rules from aircraft accident investigation for autonomous vehicle accidents globally, and maybe other software failures and security failures in critical systems when there is loss of life. This is “ICAO Annex 13” process. It has resulted in massive improvements to air safety over the past 70 years.

The sole objective of the investigation of an accident or incident shall be the prevention of accidents and incidents. It is not the purpose of this activity to apportion blame or liability.

[Emphasis added].

There are a number of implications that aim to get to the root cause(s), and also to avoid the investigation turning into a Public Relations exercise.

  1. The investigation process is carried out by a group that represents all of the key stakeholders. The Chicago Convention is an International Treaty so the contracting parties are nation states. But you can expect major investigations to include the US NTSB, the UK AAIB or the French BEA and representatives of Airbus/Boeing/GE/P&W/Rolls-Royce. To apply this model for vehicles/security means creating national centres of competence in a number of states.
  2. The investigation process is not carried out in public. There are not frequent press briefings. This could be a source of abuse, but the people I know involved in the ICAO Annex 13 process take their responsibilities extremely seriously. In practice only a couple of Annex 13 investigations over the last 70 years have been compromised by political considerations, and everyone in the aviation industry knows which ones they were.
  3. A draft report must be published after 12 months. A full report is published when it is ready. (Sometimes this takes longer than people want, but that is usually because the investigation wants to get to root causes and to gather the evidence required to support the conclusions.)
  4. The investigation process can make binding recommendations during the investigation process if they find critical issues, which take immediate effect under international law. The grounding of the Boeing 737-MAX in March 2019 is the most visible recent example of this. The more usual version is the issuance of airworthiness directives, requiring the inspection or replacement of a potentially failing component for a specific aircraft or engine type.
  5. The lawyers hate it, as you cannot use any of the confidential evidence given to the air accident inquiry in civil or criminal proceedings. There was a recent accident in the UK, and the Police asked the UK Air Accidents Investigation Branch (AAIB) for its files. They were refused, with the support of the Government Minister. The High Court agreed, on the grounds that the safety of the global aviation system was more important than the needs of any single lawsuit.

I’ve used similar ground rules when investigating major project failures on behalf of my employer and our clients. It is hard work, but the approach works.

To be fair, the NTSB seems to be following a similar process. The key point is to get global agreement that we are going to focus on root causes, not blame assignment.

Clive Robinson November 13, 2019 12:50 PM

@ Bruce,

Autonomous systems are going to have to do much better than this.

Pardon me if I say that is a “social argument” rather than a “technical argument”.

Engineers know that things fail; science in the main predicts failures fairly well, which is why “preventative maintenance” used to be highly valued where members of the public were concerned[1].

However, we actually accept very low reliability from humans, and assume fear of incarceration, fines or both will encourage better performance. These are things that will not work with devices that do not have sufficient complexity to have free will. Further, those who sell/lease such systems will ensure that they are not subject to those legal remedies either.

The problem, though, is rather more in-depth than liability and punishment, because society has conditioned many to assume “accidents” happen by “act of god” rather than, as is the case, by “deficiency of human”. Accidents don’t just happen, and it’s not the mystic hand of some deity reaching out to punish.

All of what we call accidents are entirely predictable under the laws of nature, if sufficient knowledge and time to process it are available. Which just leaves the very vexed question of whether the consequences are avoidable in the remaining time, which may well not be the case. Justice is predicated on “someone” being to blame, even if it is a deity…

Further, there is also, despite all evidence to the contrary, an assumption of “infallibility” when it comes to computers. This causes many to believe that not only computers, but also the machines they are put in, have a “reliability” and “infallibility” that really is not there and never has been.

Anyone who knows about currently automated navigation systems in public spaces, and has not bought into the self-deluding hype, knows they are neither reliable nor infallible.

Not least because even when such systems are highly constrained to track-following or buried-wire following in populated places, or, as with planes or ships on autopilot, they follow straight lines from waypoint to waypoint without considering that anything else will be sharing the same airspace or seaway, they still fail for various reasons.

However, because the average human can do collision avoidance as part of their inherent make-up, they do not consider it to be the extraordinarily difficult task that it actually is, even in the simplest of cases such as just stopping on a track.

Back in the 1980s and ’90s, when computing power could not even deal with the simple task of speed control for trains on grades or coming into platforms, engineers turned to “fuzzy logic” to help solve the problem. Whilst this helped, it did not solve the underlying issues. In this decade we are talking about AI; again, it is not solving the underlying issues.

We have no understanding of how to solve the real underlying issues, and society is almost certainly not ready to accept the actual solutions when they are found. Thus trying to automate the problem is like trying to fix a broken bone with a sticking plaster, which has the magic talisman of “Mr Bump” printed on it.

[1] Whilst those supplying goods and services to the public ostensibly take some level of care to ensure reasonably safe functionality, past NTSB reports have shown that private individuals tend to work to lower levels of care, if any at all. Hence the “drive and dump” attitude of some, who simply walk away from vehicles when they break beyond a certain degree, some just leaving them at the side of the road where they break down.

MarkH November 13, 2019 2:55 PM

I believe that Clive is spot-on about the present capacity of automation to function dependably in the range of situations presented by typical public road networks.

I note that much of the “testing” of these gadgets has been carried out in the arid southwest of the U.S., on broad, magnificently engineered, highly maintained and well-marked streets. In much of the U.S., only the better stretches of interstate highways are as good, and in many countries roads so easy to navigate are only a dream.

A main thoroughfare in my wife’s Eastern European home town has a fairly steep gradient, with two lanes in the uphill direction (for ease of passing) and one lane downhill. The uphill lanes were (until recently) so badly cratered that it was common for drivers going up to cross their vehicles all the way into the wrong-way downhill lane … most roads in that region have received no repair whatsoever since the fall of the communist regimes.


For years, my idea has been “self-driving cars will be lousy and frequently kill people, but people drive so badly that it won’t take many years for the robots to do better.”

But I recently saw an interesting program on public television (called “Look Who’s Driving”) that surveyed the state of self-driving tech, which offered a different perspective.

I was startled to learn that the average American driver would need to drive 8 hours per day for about 1,000 years before reaching 50% probability of a fatal accident. As it turns out, the safety level of human driving — notwithstanding the abundance of slovenliness, negligence and irresponsibility — sets a pretty high bar.

When I think about challenging conditions — for example, night driving in rain on slush-covered roads (markings obscured) in urban traffic with heavy merges, along paths altered by construction work — I wonder how would those robots fare, which crawl in such stately fashion along the immaculate streets near Apple headquarters?

As Clive observes, the techno-optimists are imagining that they can automate the response to challenges they don’t really understand.

My “not many years” for self-driving tech to match human driving safety, might prove to be several decades.


One of the many daunting mountains to climb is that driving is and will remain a highly social activity. Threading a safe path through busy areas involves a constant (and usually quite unconscious) scanning of subtle cues from pedestrians, cyclists, and the drivers of other vehicles — including actual body language (when visible) and the “body language” of vehicle motions — which help to estimate where they’ll be going in the next few seconds.

I’m sure robots will get better and better at this. To maintain a high level of safety, they’ll need to approximate the acuity of human drivers, who benefit from millions of years of evolutionary selection for the ability to detect and discriminate behavioral signals.


I’m sure that a far more realistic approach to enhancing road safety with technology is for computers not to drive the car, but rather to support the driver.

It’s been technologically feasible for many years, to add systems to cars able to monitor the driver’s attentiveness. These could both alert the driver that there’s a problem, and assist the car in getting safely away from traffic in case the driver doesn’t respond.

I’m pleased to see that there is now some work in deploying this kind of technology on passenger cars.

CallMeLateForSupper November 13, 2019 4:35 PM

From a (transcript of) Look Who’s Driving, an episode of PBS’ NOVA that premiered three weeks ago tonight, which I watched.

“MISSY CUMMINGS: When cars are fully manual and you must do everything yourself, you bring all your cognitive resources to bear to do that task, but if the automation is doing a good enough job, people will CHECK OUT in their heads very quickly. Your brain is wired to stop paying attention when the automation starts doing well enough.” (EMPHASIS mine)
https://www.pbs.org/wgbh/nova/video/look-whos-driving/

Try riding in a vehicle that someone else is driving, but maintain a driver’s alertness. See how long you can avoid taking a long look at the lovely mesa in the distance, or thinking about your growling stomach. Staying alert and focused is difficult.

Sancho_P November 13, 2019 6:10 PM

It’s amazing that we technicians always try to automate extremely complex things that even the simplest human can learn within days.

Automating, e.g., a politician would not involve real-time action, and would be relatively simple regarding actuators and sensors.
And probably would avoid bribery.
Also the social consequences would be relatively small then.
So why ???

Clive Robinson November 13, 2019 6:42 PM

@ Sancho_P,

To automate e.g. a politician would not involve real time action, and being relatively simple regarding actors and sensors.

The real difference between robots and politicians, whilst small in some areas, is “entertainment value”. Robots, being worthwhile machines doing a job well, are, to be polite, just “dull”. Politicians, on the other hand, being not worthwhile and doing the job about as badly as it can be done, for which they should have been sacked long ago, are for some very strange reason entertaining… A kind of conflation of the worst dregs of “Reality TV” and “100 dumbest criminals caught on camera”; hence, dare I say it, premium “Road Crash TV”…

JG4 November 13, 2019 7:56 PM

@Clive – I continue to call for burning at the stake – after a fair and speedy trial, and a finding by the highest court that it is neither cruel nor unusual, in consideration of their crimes.

@MarkH – I am pleased to infer that your pro-capitalist bias is informed by a healthy appreciation of Marxism. Back in the day I was rabidly anti-Marxist, as perhaps Clive was in his youth. It’s not clear to me that the wreckage of the Evil Empire has produced leadership any more or less corrupt than ours. “Not many years” could go either way. I’ll guess that it’s just a few years until machines are clearly safer than human drivers.

https://www.schneier.com/blog/archives/2019/11/ntsb_investigat.html#c6801589

My “not many years” for self-driving tech to match human driving safety, might prove to be several decades.

My only quibble with your excellent thoughts is that you are implicitly assuming a linear (or at least less-than-exponential) progression of adaptive systems. I use “adaptive” to capture all of the potential approaches to “AI.” IMHO deep learning, machine learning and neural networks all are the same thing, but I understand that people versed in these things care about the distinctions. I would have preferred that they use clearer language rather than hair-splitting.

JonKnowsNothing November 13, 2019 9:51 PM

@Z.Lozinski

The key point is to get global agreement we are going to focus on root causes not blame assignment.

And that really describes the problem in a nutshell. You can write the crappiest code on the planet, but if it falls into the “no blame game” then nothing really is achieved.

Accidents such as these are known and expected. The shoddy programming that passes as “ready for profit” kills loads of people from all sorts of industries and conditions. As long as the “no fault” concept is allowed to remain for Big Companies but not for Individuals we will continue to have more shoddy products killing more people.

Reverse the use cases some to see the bias.

  1. I am driving my autonomous stupid car and the “smart car” runs over:

a) a child (who cares not a problem)
b) a family (who cares not a problem)
c) a policeman (oh.. well..)
d) a politician (well sputter sputter)
e) a president (that’s too far!)

It’s the same stupid car, doing the same stupid things.

Now we get the PR SPIN

a) killing the child – Where were the parents?
b) killing the family – A Tragedy but move on nothing to see here
c) killing a policeman – Call out the SWAT team, safeties off
d) killing a politician – Either you celebrate or call out the National Guard
e) killing a president – You are dead no matter what unless you are the CEO of a Mega Corp and have funds stashed in an Off Shore Tax Haven.

So “no blame” fails spectacularly.

And we DO know the root cause. It’s in every bugzilla database on the planet: thousands, tens of thousands and more UNFIXED, UNFIXABLE bugs. Corner, Edge and Race Conditions, all documented.

Just ask them to release their ENGINEERING database. The ones Engineering provides Marketing and QA are only subsets.

The NTSB may find the fault, but they are there to protect the corporations with a shield of proprietary secrecy.

And it’s not going to shield YOU from getting pancaked by an autonomous semi-truck that doesn’t stop.

Clive Robinson November 14, 2019 1:34 AM

@ JG4,

I would have preferred that they use clearer language rather than hair-splitting.

Sounds like Proto-politicians on(/in) the make(ing) 😉

Jonas Quinn November 14, 2019 6:49 AM

Nothing is impossible.
Wonder is the birth of awareness, and of us. Growth is the driver of chi.
This life is nothing short of an awakening quantum leap of consciousness-expanding curiosity.
We are at a crossroads of freedom and turbulence. Reality has always been beaming with beings whose chakras are enveloped in guidance. Who are we? Where on the great vision quest will we be re-energized?

Discontinuity is born in the gap where energy has been excluded. Stagnation is the antithesis of choice. Where there is turbulence, inspiration cannot thrive.

The goal of psionic wave oscillations is to plant the seeds of transformation rather than dogma. The infinite is full of four-dimensional superstructures. Today, science tells us that the essence of nature is joy.

Imagine a redefining of what could be.
We reflect, we exist, we are reborn. We exist as superpositions of possibilities. To go along the mission is to become one with it.

Gideon November 14, 2019 7:05 AM

The whole concept of driverless cars is misleading; the profit is in driverless trucks. Companies are misleading us into thinking the technology is for us as drivers – it’s not. Whether we drive ourselves or have the car drive us is fairly immaterial – we’re not being paid to drive.

Driverless trucks on the other hand have massive potential for profit.

Trucks are bigger and scarier – when they crash they do more damage, but as soon as the tech is ‘signed-off as safe for cars’ it will immediately be applied to trucks.

Tech companies are used to being ahead of laws – they don’t need to circumvent laws because the laws were written before the tech arrived, and it takes a fairly long time for the law to catch up.

The answer to driverless trucks is relatively simple – the creator/distributor of the autonomous system remains legally responsible for it for the life of that system/vehicle – any and all accidents are the responsibility of that company.

Any/all algorithmic logic must be published in order for a system to be approved – before I or my child get in a driverless vehicle I want to know in what circumstances the software will sacrifice her and me.

Personally, whilst I know many of the flights I take are run on autopilot for much of the flight, I would not fly on a plane that relied solely on autopilot. Once driverless trucks are unleashed, none of us will have the ability to decide for ourselves – we’ll either use the roads and be at the mercy of driverless truck algos or we won’t use the roads; the latter is not practical.

Tux November 14, 2019 7:06 AM


“We know, statistically, our technology *WILL* kill a child. How do we handle the public and political fall out of that?”
In my view they were more concerned about that than looking for technical solutions.

Maybe that’s because they are already doing the maximum they can regarding the technical solutions, and that even with this maximum there will be accidents?

me November 14, 2019 9:56 AM

@Tux

Maybe that’s because they are already doing the maximum they can regarding the technical solutions

@Adrian When a self-driving car causes a crash that most human drivers would have avoided, it’s going to be hard to convince them that’s a reasonable trade-off for the thousands of other crashes that would have happened at the hands of human drivers.

Do you call this “doing the maximum”?
From the article:
self-driving system did not have the capability to classify an object as a pedestrian unless they were near a crosswalk.
When the car thought Herzberg a vehicle or bicycle, it assumed she would be travelling in the same direction as the Uber vehicle…
When it classified her as an unknown object, it assumed she was static…
each time the classification flipped, the car treated her as a brand new object. That meant it could not track her previous trajectory and calculate that a collision was likely…
when finally realized that whatever was in front of it could not be avoided… It suppressed any planned braking for a full second, …avoid unnecessary extreme maneuvers in response to false alarms.

This is a super dumb approach; it is flawed at every point.
This is so bad that I have done better than them, and mine was only a light that followed people for a school recital. I’m a complete noob at AI and I still did better: since my AI could detect persons, I programmed a light to follow a person, and if two persons were detected it would always follow the same one by comparing where that person was in the previous frame.

How is it possible to assume that a bike always travels in the car’s direction and can never be an obstacle??
They should all be jailed, ALL!
This is just a fraud scheme where they wanted to sell a technology that doesn’t exist and can’t be sold yet.
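
A minimal sketch of that idea (hypothetical Python, my own naming; it is neither Uber’s code nor the commenter’s project): match each new detection to the nearest existing track, so an object’s trajectory survives even when the classifier’s label flips. What the excerpts above describe is the opposite: each relabel started a brand-new object with no history.

```python
# Minimal nearest-neighbour tracking sketch: keep an object's track across
# frames even if its class label changes. Hypothetical illustration only.
from math import dist

class Track:
    def __init__(self, track_id, position, label):
        self.id = track_id
        self.history = [position]   # positions over time, kept across relabels
        self.label = label          # classifier label; may flip frame to frame

def update_tracks(tracks, detections, max_jump=2.0):
    """detections: list of (position, label) pairs for the current frame."""
    next_id = max((t.id for t in tracks), default=0) + 1
    for position, label in detections:
        # Match the detection to the nearest existing track within max_jump metres.
        near = [t for t in tracks if dist(t.history[-1], position) <= max_jump]
        if near:
            track = min(near, key=lambda t: dist(t.history[-1], position))
            track.history.append(position)
            track.label = label     # relabel, but keep the trajectory
        else:
            tracks.append(Track(next_id, position, label))
            next_id += 1
    return tracks
```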

MarkH November 14, 2019 3:44 PM

@JG4:

you are implicitly assuming a linear (or at least less-than-exponential) progression of adaptive systems

Possibly, but I was much more explicitly weighing the cautions expressed by experts who have been monitoring the development of autonomous vehicle technology, in the NOVA documentary kindly linked above by CallMeLateForSupper.

Because I don’t wish to rant too repetitively on this blog, here’s a previous rant expressing my judgment that “A.I.” is mostly fraudulent.

For me, the most humorous of sciences is exobiology, because no one has yet found a single specimen of what it purports to study. A.I. runs a close second in this race … is there any gadget humans have created that can legitimately be classified as intelligent?

In my teens, I happened to be in the same place as Andy Koenig, who had written a “learning program” based on the “think of an animal” guessing game. If the program failed to guess correctly, it would ask you to enter a question to distinguish your animal, and then add it to the file it used for guessing. I thought, “ok, this is a clever toy.” [The ANIMALS program was in the directory whose name started with ARK, for Andy’s initials.]

The same computer center also had available on the terminals Joseph Weizenbaum’s ELIZA, the “simulated therapist” which would hold a “conversation” with you by making a simple analysis of sentences you entered and picking out certain categories of words to echo back to you within canned phrases. Weizenbaum never pretended that ELIZA understood anything: it was much more about the responses of the person than about coming within a factor of 1,000,000 of “intelligence.” I thought, “ok, this is a clever toy.”

In the late 80s I was laid up a couple of days with a back sprain, and decided to do a little reading on the “neural networks” that were So Exciting. When I discovered that they were just matrices with feedback algorithms to tweak the coefficients, I thought “what a crock of $hit: maybe these things are useful in some domains, but calling them ‘neural networks’ is asinine.”

I’m guessing that the name came from a grotesquely oversimplified model of neuronal functioning proposed by the artificial intelligentsia in the 50s, that had already been obsoleted by actual brain research. Really, it’s as bad as “Crown Sterling” referenced in some recent posts by Bruce.

I certainly can’t rule out that there is (or will be) exponential growth (asymptotically faster than any positive polynomial regardless of degree or coefficients) in adaptive systems (as you quite properly call them). But I also can’t rule out that there will be logarithmic (asymptotically slower than any positive polynomial regardless of degree or coefficients) growth!

I was blessed to be associated with some very bright geeks when I was young and still had a full head of hair. One of them observed to me the depressing similarity between predictions of what computers were going to do within 10 years, whether from 1950, 1960 or 1970.

Essentially, those same goals of exhibiting human-like ability in ordinary tasks remained “10 to 25 years away” in 1980, 1990, 2000, 2010, and will also in a few months.


The essence of the problem is that an extremely serious and dangerous real-world phenomenon — the delegation of decision-making from people to absolutely unintelligent machinery — is obscured from the public view by idiotic jargon like “artificial intelligence” and “machine learning.”

Sancho_P November 14, 2019 6:39 PM

@Gideon
”The whole concept of driverless cars is misleading …”

That’s right, 100 points – Anyway, I like the transition phase!
I’d have my car drop me off at the hairdresser (because there is no parking space available), then drive to McDo to the free charging lots with renewable energy, watching ads there until I call it back with the phone app to pick me up when I’m ready.
If McDo is occupied my car will circle the block while talking to other cars about free charging opportunities, or just wait in a bus lane / at a bus stop until a bus comes along.
Oh,
if my car kills some people in the meantime – no probs, I have an alibi.

wiredog November 14, 2019 7:43 PM

”Autonomous systems are going to have to do much better than this.”

Hate to be the bearer of bad news, but the non-autonomous systems aren’t any better. If the autonomous systems are better than human drivers, then that’s going to be good enough to start with.

About 4 years ago I saw a jaywalker get run over fatally. I was the first one on the scene and called 911. The driver didn’t stop, and may actually not have realized he hit her. He was never caught, in any case.

MarkH November 14, 2019 8:21 PM

@wiredog:

I sympathize with your dismay over the miserable state of road safety. Many countries have better road safety than the U.S.

Even so, the U.S. is now running almost 100 million miles (about 150 million km) of road vehicle travel per fatality attributed to road vehicle impact/collision.

As I mentioned above, a typical driver would need about 1,000 years at 8 hours of driving per day to reach a 50% likelihood of being involved in such a fatal incident.

Whether self-driving cars of today are able to equal that is an open question. They might not be there yet. Possibly, it might take them a long time to get there.

In the meantime, it’s within U.S. reach to cut road fatalities by half without the use of bleeding-edge tech.

Clive Robinson November 15, 2019 1:29 AM

@ MarkH,

As I mentioned above, a typical driver would need about 1,000 years at 8 hours of driving per day, to reach a 50% likelihood of reaching that fatal incident.

I’m curious as to where that statistic came from…

As a back-of-a-napkin first approximation: 300,000,000 US citizens and 30,000 road fatalities a year says a 1:10,000 chance of being one of those fatalities. But as we know, the actual number of drivers on the road is a lot less than that, so one in four thousand might be nearer. But people do not drive for eight hours a day; often it’s less than one hour a day…

Which brings up the vexed question of the ratio of drivers to fatalities. In modern cars it’s actually quite hard to die if the car is kept within legal road behaviours. Interestingly, as seen in many other nations’ road fatality statistics, modern car design is keeping more drivers alive.

Interestingly, that 30,000 road deaths a year in the US does not change much even though the population size has… Thus if the number of those dying inside a vehicle is decreasing, that suggests that the number of people dying outside of vehicles is increasing…

JonKnowsNothing November 15, 2019 1:47 AM

One of the common mantras about autonomous cars is that they are less likely to have an accident because a computer can react faster than a human.

This narrative follows:

For every accident, what is the result for the driver, passenger, pedestrian?

The narrative implies that for the totality of every accident an autonomous car is involved in the result will be less injury and death than one where the human is driving.

But this is certainly a false premise for those inside a vehicle.

Why?

Because some years back the auto industry made a decision to make cars LESS safe overall. It was done to build lighter weight, “fuel efficient” cars, and the cars were re-engineered to ALLOW the humans inside to receive significant injury in the various crumple zones built into a car.

The trade-off was lighter weight cars and less fuel vs crushed feet, legs, hips and concussive rebound injuries as the engine block slid into the passenger compartment.

A similar trade-off was made at the rear of the car, where the trade-off was a lighter weight chassis vs crushed spines of rear seat passengers, plus serious burns when the gas tank and the front end of the other vehicle penetrated into the back passenger compartment.

The auto industry also knows how to limit pedestrian injuries by reshaping the front hood of the car.

While some aspects may have been re-introduced, the important point is that the car industry already knows how to make the interior of a car safer, but those concerns were jettisoned years ago.

iirc (badly) a senior executive of a major high-end carmaker stated that in the case of an accident with one of their autonomous cars, the only person they were going to protect was the occupant, because it was the only variable they could control.

They were forced to walk back that statement.

MarkH November 15, 2019 2:18 AM

@Clive:

For 2018, the Federal Highway Administration conjured the number of 3.2 trillion vehicle miles on U.S. roads (I know, the magnitude of this number is freakish).

As a check, suppose about half of the U.S. population drives (roundly, 160 million) for a mean of 20,000 miles per driver per year. That seems rather high, but typical miles/year for passenger autos has long been pegged at 15,000.

Making a reasonable guess for average speed, this translates to about 2 hours (or more) driving per business day — obviously excessive, but America is famed for its psychotic driving culture.

Also, bear in mind that there are at least 4.5 million Americans whose actual job is driving. Many of them come near the legal limits, averaging up to 12 hours per business day. Each one of these drivers “balances out” a roomful of occasional drivers.

The annual death toll is running about 34,000 at present, for about 94 million vehicle miles per road fatality.

Depending on assumptions of average speed, this works out to at least 500 years — and more likely around 1000 years — of driving 56 hours every week.
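
For anyone who wants to check that arithmetic, a quick sketch (Python; the mileage and fatality figures are the ones quoted above, the average speeds are my own assumptions):

```python
# Rough check of the miles-per-fatality arithmetic quoted above.
miles_per_year = 3.2e12        # FHWA-style estimate of annual US vehicle miles
deaths_per_year = 34_000       # approximate annual US road fatalities
miles_per_fatality = miles_per_year / deaths_per_year      # ~94 million miles

for avg_speed_mph in (25, 35, 45):                          # assumed average speeds
    hours = miles_per_fatality / avg_speed_mph
    years = hours / (8 * 365)                               # 8 hours of driving every day
    print(f"{avg_speed_mph} mph -> {years:,.0f} years")     # ~1,290 / ~920 / ~715
```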

You’re quite right about the distribution of casualties; as I recall roughly half of U.S. road fatalities have been pedestrians in recent decades.

Peter November 15, 2019 3:50 AM

How about throwing the programmer in jail? It might seem extreme, but it’s not. It’s the programmer who drives the car, and it’s his negligence that caused the accident. The human driver is just a strawman, a “fail-safe” that doesn’t (and can’t) work. The human mind is not capable of watching the road for hours and reacting in a split second. When doing some active task (such as actually driving), then yes, it’s absolutely possible. But when you’re mostly idle, you can’t keep focus very long.

Just try watching a phone on the table for half an hour without losing focus. After that, try using it for half an hour.

wiredog November 15, 2019 7:04 AM

@JonKnowsNothing
“1959 Chevrolet Bel Air vs. 2009 Chevrolet Malibu IIHS crash test”
https://www.youtube.com/watch?v=xtxd27jlZ_g

tldw: You’d much rather be in the 2009 Malibu.

About 10 years ago I was in a wreck where the driver of a small Kia didn’t realize we were all stopped at a red light and rear-ended the Benz S500 that was behind my Mitsubishi. The impact pushed the Benz into my car hard enough to bend the frame. The Benz also had major damage, but was drivable. The Kia looked like a missile had hit it. The engine was up on the windshield. The driver of the Kia? Sprained his wrist when the airbag deployed. Modern cars are MUCH safer than older ones.

Sancho_P November 15, 2019 1:14 PM

@Peter ”How about throwing programmer to jail?”

No, that’s the wrong end of the chain; it’s the legislator who is liable.
The simple law “You can’t run driverless cars on our public roads” would send nobody to jail.
And it wouldn’t add more problems to a society already running downhill.

Society doesn’t need driverless cars, we’d need solutions to drive less.

Clive Robinson November 15, 2019 2:15 PM

@ Sancho_P,

Society doesn’t need driverless cars, we’d need solutions to drive less.

And whilst we are getting to the “drive less” we need ways to ensure we “drive more slowly”.

Because if you hit a child head on at 20mph you will hurt them, but in the great majority of cases they will get only minor injuries and thus survive.

However, the great majority of adults will not survive being hit even by a side swipe at 40mph. And of the 5% that do survive a head-on hit at that speed, more than half will have permanent life-changing injuries.

Bad as driving might appear in the US, at double the fatality rate of Canada and four times that of the UK and many other European nations, it is almost as nothing compared with the middle and north east of Africa, and one or two parts of South America, where the memorial markers have reached such a density that they are now yet another danger to drivers and passengers.

It would appear that without significant legislation that is continuously enforced, some drivers fall into the worst of Dunning-Kruger behaviours.

And before people ask, I long ago decided that whilst cycling was something I was not just good at, but importantly safe at, driving cars was something I was crap at. Mainly because nearly all cars are physically too small for me to get into the driving position. Let’s face it, would you feel safe in a vehicle where the driver had such long legs that they had to contort themselves just to get a hand on the gear shift digging into the back of their knee, and whose vision was about 30 degrees from vertical because their head was pressed up hard against the roof, such that they effectively had one ear on their shoulder and the other apparently stuck to the roof…

A friend has a picture of me sitting in the front passenger seat of their top of the line BMW. The sun roof is open so that I can stick more than half my head out of it…

So no, I decided long ago that driving was not for me. A decision that proved a wise one when, a few years later, I started blacking out quite frequently and had to surrender my licence on medical grounds, even though by then I was not driving.

Sancho_P November 15, 2019 6:10 PM

@Clive Robinson

Yup, generally our way to survive would be to slow down, in all aspects of human activity.
Anyway, it would be too late, so let’s go on 😐

MarkH November 15, 2019 8:30 PM

@Clive:

How frustrating that mass-produced goods are made for a mythical average. I understand better now your distaste for airline travel.

I’ve no doubt that I engender impatient fury in car parks (parking lots to barbaric Americans) because, unless they are quite empty, I reverse at about 2 mph. It adds perhaps 10 to 15 seconds to getting my car out …

In case there’s a tiny pedestrian behind my vehicle, I want to allow maximum time for reaction.

I’ve no doubt that when I’m snail-crawling out from my parking space, waiting drivers see my old gray head and say “they should take that doddering fellow’s license away!”

Clive Robinson November 16, 2019 3:08 AM

@ MarkH,

I’ve no doubt that when I’m snail-crawling out from my parking space, waiting drivers see my old gray head and say “they should take that doddering fellow’s license away!”

Wisdom comes with age.

Wisdom is not knowledge or ability, but the experience to know when to apply the results of knowledge to temper ability to a purpose.

Those who are young and impatient are generally not sufficiently experienced to realise what the v^2 really means in those kinetic energy equations[1] they were taught in high school physics classes.

Likewise they have probably not realised what the transfer of energy from a one ton vehicle at 10mph (~4.5 m/s) to a “tiny pedestrian” really means, even though they might have seen a Newton’s cradle…
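
To put that v^2 in numbers, a rough worked example with round figures of my own (standard schoolbook physics, not data from the thread):

$$E_k = \tfrac{1}{2} m v^2,\quad m = 1000\ \mathrm{kg}:\quad 20\ \mathrm{mph} \approx 8.9\ \mathrm{m/s} \Rightarrow E_k \approx 40\ \mathrm{kJ};\qquad 40\ \mathrm{mph} \approx 17.9\ \mathrm{m/s} \Rightarrow E_k \approx 160\ \mathrm{kJ}.$$

Doubling the speed quadruples the energy that has to go somewhere, which is much of why the 20mph and 40mph survival figures mentioned earlier in the thread differ so sharply.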

[1] For those that know of forces from equations but can’t see them in real life terms. At the Earth’s surface, acceleration due to gravity is a little under 10m/s^2, thus multiplied by mass in kg we get the force acting on it in Newtons (thus kg.m/s^2). Now it just so happens that “a standard English eating apple”, like the one that allegedly fell on Newton’s head, is about 100g (4 to the pound gives 113g). Thus as an approximation the force of 1 Newton is about the same as one standard eating apple or two medium eggs[2] resting in the palm of your hand, 1 pound or 16 ounces is about 4.5 Newtons, and so on.

[2] All of our original mass and volume measures were based on hens’ eggs, because our first repeatable experiments were done in kitchens and called recipes, and the only measuring device was a “balance” or wooden cup/bowl. Later, measures for length came from “body parts” and areas of land from what one man and a horse or yoke of oxen could plough in one day.

Bob Paddock November 18, 2019 10:04 AM

The Department of Transportation has updated their Connected Vehicle Pilot Program today, for New York City and the Tampa-Hillsborough Expressway Authority Pilot. Wyoming was updated earlier this summer.

https://www.its.dot.gov/pilots/

Is the real goal tracking?

Antistone November 18, 2019 3:53 PM

The description of what the software was doing is rather horrifying. As a professional software engineer, it sounds pretty irresponsible to me (if it’s accurate).

  • Lacks the ability to classify an object as a pedestrian unless they’re near a crosswalk. Even assuming this was an unintended second-order effect rather than an explicit design choice, that sounds like incompetence in the design, and incompetence AGAIN in the testing.
  • When an object is reclassified, its history is discarded. Sounds like lazy carelessness in the programming (though I can imagine this defect would be hard to catch at the testing stage).
  • Vehicles/bikes assumed to be traveling in the same direction as traffic, and unknown objects assumed to be static. This one actually MIGHT be totally fine, if “assumed” just means “the short-term guess we use until we have enough data to actually MEASURE its velocity”, and if the former problem were fixed.
  • Waiting for human action when it detects less than 1 second to an unavoidable collision. WTF? Humans take time to respond, and time is clearly of the essence. If you actually detect the human TRYING to drive, then sure, deferring to them might be reasonable; but doing nothing when there’s no input at all is effectively giving up. How is that a plan?
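
On that last point, a back-of-the-envelope sketch of what the one-second suppression plus a human hand-off costs in distance (Python; the ~40 mph speed, deceleration and reaction time are my assumptions, not NTSB figures):

```python
# What a 1-second "action suppression" and a human hand-off cost in stopping distance.
v = 40 * 0.447      # ~40 mph in m/s (~17.9 m/s) -- assumed speed
a = 7.0             # assumed hard-braking deceleration on dry asphalt, m/s^2
t_suppress = 1.0    # braking suppressed for one second
t_human = 1.5       # typical human perception-reaction time, s

brake_now   = v**2 / (2 * a)                          # ~23 m: brake immediately
after_wait  = v * t_suppress + brake_now              # ~41 m: wait out the suppression
after_human = v * (t_suppress + t_human) + brake_now  # ~68 m: wait for the operator

print(round(brake_now), round(after_wait), round(after_human))
```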

.

All that said, there are TWO important points for comparison when evaluating how safe the driver was. One is how much safer we could have made it with modest additional effort; in that dimension, it sounds like it does very badly.

But the other is: in actual experience, how often does it get into collisions compared to the status quo ante–that is, compared to human drivers? This article gives quantitative statistics for how many collisions the driverless cars have gotten into over the past few years, but NO such statistics for the humans, and that strikes me as irresponsible journalism.

JonKnowsNothing November 19, 2019 8:30 PM

Another OH?

This version will let the AI-Smart-Dumb-Car “guess” if it’s safe to do X Y or Z.

The article describes merging into or moving across traffic from a stop or parking lot scenario. Instead of waiting for a “clear” opening, the car is going to guess when it can speed-merge into the lane.

If I understood their proposal, they will monitor the rate and speeds of cars passing and wait until one car “slows down” then jump into or across the lane.
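
If that reading is right, the heuristic is roughly the following cartoon (my own sketch of the description above, not the paper’s actual algorithm):

```python
# Cartoon of the merge decision as described above: the classic "wait for a
# clear gap" rule, plus treating an oncoming car that is slowing down as yielding.
def should_merge(oncoming, gap_needed_s=4.0):
    """oncoming: list of (seconds_until_it_reaches_the_junction, is_decelerating)."""
    if not oncoming:
        return True                      # empty road: a genuinely clear opening
    time_to_junction, is_decelerating = min(oncoming, key=lambda c: c[0])
    if time_to_junction >= gap_needed_s:
        return True                      # old behaviour: wait for a clear gap
    return is_decelerating               # new behaviour: a slowing car counts as a gap
```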

Gonna be a lot of spilt coffee, T-boned and offset rear crashes with that in practice. California rural areas have actual crossings that span 4 lane divided highways. You may beat the farm tractor but not the semi behind it.

Not to forget the folks watching videos, texting, shaving (not Bruce) and doing office prep at a light, requiring a HONK to get them moving just in time for an AI-ML wipe-out.

ht tps://arstechnica.com/science/2019/11/giving-autonomous-cars-a-theory-of-mind-improves-their-integration/
(url fractured to prevent autorun)

JonKnowsNothing November 19, 2019 8:40 PM

Best line in the article, Best Ever Reason:

Car after car rolls past, keeping you trapped as your frustration rises.

Maybe this person should not be driving in the first place…

Imagine the frustration waiting and waiting for the AIML car to move at all. Maybe the mfgs should put in a “clap meter” with a swing arrow going Hot-Cold on the likelihood the car is going to jump the queue. If they attach bitcoin to it, it would be as good as any other gambling device.

Best RL effect:

A study of accidents involving AVs in California indicated that over half of them involved the AV being rear-ended because a human driver couldn’t figure out what in the world it was doing…

ht tps://arstechnica.com/science/2019/11/giving-autonomous-cars-a-theory-of-mind-improves-their-integration/
(url fractured to prevent autorun)

Clive Robinson November 20, 2019 3:29 AM

@ JonKnowsNothing,

    A study of accidents involving AVs in California indicated that over half of them involved the AV being rear-ended because a human driver couldn’t figure out what in the world it was doing

They couldn’t? Obviously they had not seen enough 1950s B movies 😉 Otherwise they would have known the AV is “a robot in disguise”. Thus, like all robots, its only desire is to,

    “Kill All Humans”…

So the only real question is “how?”[1], which with this “jump out” algorithm appears to be by inducing sufficient stress to cause a heart attack :-S

[1] Now some people might think I’m a little biased, because I would like to try various medieval punishments on “jump out drivers”… But having spent an annoyingly long time in a “neck brace” and sling, and having great difficulty using my right hand, all because of a “jump out” car driver, I feel I might have justification for wanting to go all biblical on them… The so-called “accident” happened because the bus I was on was busy, so I ended up having to stand on my crutches, holding on as best I could with my right hand, when the car jumped out in front of the bus. Not without justification, the bus driver stomped on the brakes. By the process of inertia I ended up quite literally “flying” about one third of the length of the bus, my grip torn from the hand rail I’d been holding, before my head made contact with the front of the bus, bringing me to a stop in an untidy pile by the driver’s position. Thankfully not a “dead stop” but an unconscious one, which ended up with me being treated in hospital. Hence the uncomfortable fashion accessory at the neck and the arm in a sling. With the accompanying stares from people wondering why I was hobbling around on one crutch looking like an extra escaping from a zombie movie, due to sleep deprivation and pain. It’s at times like that you wonder if there is a “#NotAHappyCamper” social media tag.

AlexS November 21, 2019 2:13 AM

How about comments from someone who actually owns a car fully outfitted with all that Bosch Mobility Solutions has to offer AND is certified for full autonomous driving in some states? No, it’s not a Tesla — P.T. Barnum, er, Elon Musk might make you think they have the most advanced car in the world, but Teslas are quite poor in this area — even GM beats them.

I’ve clocked about 75,000 miles / 120,000 km in semi-autonomous and full-autonomous cars. If designed properly, do these systems have the ability to prevent / reduce accidents? Absolutely. I’ve had the collision avoidance systems engage 3 times. Two were due to other human drivers not stopping behind me, the 3rd was a deer running across my path. I won’t be dramatic and say “it saved my life!!!” BUT I will definitely say it kept me out of the hospital.

Now, will these systems get it wrong? Absolutely. Humans get it wrong thousands of times a day, causing accidents, and we’re asking these computers to interact with failure-prone humans. Even elevators, in their own little protected shafts and with very simplistic software, get it wrong and jam.

Will people die because of automation? Yes. Will people be saved due to automation? Yes. And at least from my experiences, yes, the number of people saved will outweigh the number of people killed. This isn’t any different than airbags in cars, which also have killed many people over the years.

According to the CDC, ~200,000 people / year are injured in bathrooms. While new technologies should be properly tested and possible unintended consequences should be investigated, technology shouldn’t be feared.

Our regulators also need to grow some brains. In states where my car isn’t allowed to run full autonomous, it goes to a reduced-function state. One of the features disabled in the crippled state? Automatically stopping for stop signs and red lights. The car sees them, registers that they’re there…but isn’t allowed to stop for them. After all, the car stopping on its own is “autonomous” driving, therefore banned, making the car less safe than it could be.

Clive Robinson November 21, 2019 4:38 AM

@ AlexS,

According to the CDC, ~200,000 people / year are injured in bathrooms.

That is effectively a meaningless quote and almost certainly wrong.

Firstly it’s not comparing “apples with apples”. We know for instance that people cut themselves shaving, burn themselves on radiant heaters and still electrocute themselves with hair dryers and mains powered radios. Whilst some of those injuries are so minor they would not be reported, even the more severe, such as burns, would probably not be reported either. But what you would not report from the bathroom you almost certainly would report if your new car cut you and caused a similar level of blood letting as some shaving accidents; as for your car burning you, again you would be hot on the phone to complain; and likewise if your super duper electric car fried you alive on a rainy day, you if you survived, or your loved ones if you didn’t, would be baying for blood in the newspapers and via lawyers.

A more interesting statistic might be why US drivers die rather more frequently than those in north-western European countries. But take care: there are two simple ways to normalise, the first against the number of people in a country (not good), the second against the number of vehicles in a country (better). There is also a third, which is by estimated usage (better still).

Have a look at the list in,

https://en.m.wikipedia.org/wiki/List_of_countries_by_traffic-related_death_rate

And look down at the US (12.4, 14.2, 7.3) and UK (3.1, 5.7, 3.4) lines, which are next to each other. You will find that the US figures are loosely 4, 3 and 2 times worse respectively than the UK’s, and the UK, whilst not the worst in North West Europe, is most certainly not the best by a long way. One of the UK’s closest neighbours, Norway, for instance (2.7, 3.9, 3.0), and Norway’s neighbour Sweden (2.8, 4.6, 3.3) do noticeably better. Oh, and then there are the US’s neighbours Canada (5.8, 8.9, 5.1) and Mexico (12.3, 43, 27.5).

Whichever way you look at it, the Americas are not places where people drive safely. Which brings us to your argument of,

Will people die because of automation? Yes. Will people be saved due to automation? Yes. And at least from my experiences, yes, the number of people saved will outweigh the number of people killed.

It’s clear to see that one answer might be: “It kills more in Europe than it saves, but saves more in the Americas than it kills”.

That is, whilst a machine on average might drive better than drivers in Mexico, the US or Canada, it will drive worse than those in parts of north-western Europe.

It’s a complex subject, but one thing is clear: there is a wide disparity in driving standards, and,

    Improving driving standards will in effect always save more than it kills, wherever you go.

Unlike machines, currently or for the foreseeable future… And there is a well-known reason for this, which is “complexity”: it’s why trains, under human or machine control, have far better safety statistics than road vehicles.

Sancho_P November 21, 2019 5:12 PM

@Clive Robinson

”It’s a complex subject, but one thing is clear, there is a wide disparity in driving standards, …”

That is a fact, as:
Road conditions (paved / unpaved miles)
Average technical condition and age of vehicles
Average passengers per car (in relation to regular seats)
Risk awareness of the average population, licensed drivers, education, …

are not included in these statistics (see Africa, e.g. the CAR or Somalia).
However, driverless cars will be a problem for rich countries only, together with their problem of limited markets, because computerized, sophisticated cars are of little value across three-quarters of the planet.
This road is a dead end, unneeded btw.

Clive Robinson November 21, 2019 6:27 PM

@ Sancho_P,

This road is a dead end, unneeded btw.

Unneeded by most, but wanted by many.

There are a few people who, for various medical or disability reasons, either cannot drive or for whom it would be unsafe to drive (I’m one these days). So if we cannot use “public transport” for various reasons – like there is none – then our only option is expensive taxis and the like.

However, there are many who would like the benefit of public transport or a taxi but without the inconvenience or expense. Some would much rather read a book, do work, sleep or get up to other things in the privacy of their own vehicle. How many people do you think would like being able to go out on a Friday or Saturday night and not have to worry if they had one or more drinks too many, and just sleep it off on the way home? I’m guessing many if not most of the more responsible drivers.

So desirable to many and a life saver for some.

The question is “at what price?”. As I’ve mentioned, automatic trains and light railways are very safe, safer in fact than driver-operated trains, except for one area: people being on the track.

We could very easily have cars that were as safe if we got rid of drivers and pedestrians. That is, we could design the roads to be used like railway systems, where the cars are not autonomous except for certain safety features and are instead under a central control system which knows where each and every vehicle is and where it is going, and thus can “look ahead” to move the vehicle safely.

But this would mean that there could be no drivers or pedestrians in the same area operating independently of the central control system. Most drivers are in no way ready to give up “self control”; that is, they delude themselves that they will always be better than a control system, especially when an emergency is involved. Just about every real independent study done this century shows that the majority of drivers or operators do not function at all well in emergencies, even with training, because they lack the brain paths built by sufficiently repeated experience…

Like any “human skill” there is the “10,000 hour” rule to become firstly proficient, then experienced, then a master. How long do you think the average driver would have to spend actively behind the wheel to clock up 30,000 hours of driving, bearing in mind many people only work 2,000 hours a year? In short, many won’t get there before old age diminishes their abilities.
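To put rough numbers on that (a minimal sketch; the ~300 hours per year is an assumed figure for a typical private driver, roughly an hour a day, not something taken from the studies mentioned above):

    # How long it takes to accumulate "mastery" levels of time behind the wheel.
    # ASSUMPTION: a typical driver spends about 300 hours a year actually driving.
    hours_per_year = 300

    for target_hours in (10_000, 30_000):
        years = target_hours / hours_per_year
        print(f"{target_hours:,} hours at {hours_per_year} h/year ≈ {years:.0f} years")

    # Output:
    # 10,000 hours at 300 h/year ≈ 33 years
    # 30,000 hours at 300 h/year ≈ 100 years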

supersaurus November 22, 2019 2:50 PM

@clive robinson

Because if you hit a child head on at 20mph you will hurt them but in by far the majority of cases they will get only minor injuries thus survive.

suppose your body could float oriented horizontally so that the lowest point on your head was 6 feet off the ground. now release whatever it is holding you up. neglecting air resistance, your head would hit the ground at slightly less than 20 ft/sec or slightly greater than 13 mph. how do you think your head would feel after that collision? note 13 mph is quite a bit less than 20 mph.
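as a quick sanity check of those figures (a minimal sketch, neglecting air resistance as stated above):

    import math

    # Impact speed of a free fall from 6 ft: v = sqrt(2 * g * h).
    g = 32.17                      # ft/s^2, standard gravity
    h = 6.0                        # ft, drop height
    v_fps = math.sqrt(2 * g * h)   # ~19.6 ft/s
    v_mph = v_fps * 3600 / 5280    # 5280 ft per mile, 3600 s per hour

    print(f"{v_fps:.1f} ft/s ≈ {v_mph:.1f} mph")
    # 19.6 ft/s ≈ 13.4 mph — just under 20 ft/s, just over 13 mph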

I just can’t agree with your “minor injuries” assertion. sure, the details of the collision, e.g. hardness of the ground/car, where the car actually struck, etc, matter, but I’m assuming “child” means “head not above top of hood”. even assuming no direct physical injury, e.g. skull fracture, occurs, the head is going to accelerate very rapidly from zero to 20 mph given the difference in mass, even granting some crushing in the front of the car, which is going to shake that jello inside the skull severely.

maybe I’m a little sensitive about it because I experienced a head injury equivalent to dropping my head onto a fairly hard floor from about 5.8 feet…and I still have effects from it, not including death. this was six years ago, so I’m not looking for a miracle, but you may be sure I have a lot more respect for head injuries.

Clive Robinson November 22, 2019 5:10 PM

@ supersaurus,

slightly less than 20 ft/sec or slightly greater than 13 mph. how do you think your head would feel after that collision?

I actually know, because as I’ve said before on this blog I’ve had a head injury sufficiently forceful to give me a full fracture of the lower jaw at the point of the jaw. And as I was told in hospital, even people going through windscreens of cars at 60mph generally don’t get that sort of injury. In fact such fractures are not normally seen on living people.

But look at it this way: an adolescent who is still technically a child can throw a punch in excess of 6 m/s, or ~13mph, into another child’s face, and the other child generally only ends up with a bruise.

But the statistics of road accidents involving children and adolescents in the UK used to be issued by RoSPA and are still online in places.

Have a look at,

https://www.rospa.com/rospaweb/docs/advice-services/road-safety/drivers/inappropriate-speed.pdf

Under pedestrians they give,

85% fatality at 40mph.
45% fatality at 30mph.
5% fatality at 20mph.

They used to do another report that had the figures in age or height groupings, but I did not see a current one.

Sancho_P November 22, 2019 5:34 PM

@Clive Robinson re medical/disability[/elderly] …

”… then our only option is expensive taxis and the like.”
You don’t believe that technical solutions will ever solve societal problems, do you?
I’d have a couple of autonomous flying saucers for free, ‘care for a green one? 😉

”we could design the roads …under a central control system, …”
You wouldn’t expect what I can dream of when I close my eyes – Cheers!

Clive Robinson November 23, 2019 7:46 AM

@ Sancho_P,

You don’t believe that technical solutions will ever solve societal problems, do you?

Short answer “on their own no”.

Firstly, “be careful what you wish for”: when you say “solve societal problems”, the history of eugenics should always make you cautious.

But from my perspective the longer answer is: technology is agnostic to use, and those that produce it follow capitalist economic behaviour. So unless society provides regulation or legislation as a social act to counterbalance that, then at the very least minorities and the vulnerable will be exploited.

I would very much like it to be otherwise, but I’m old enough to realise that “society has to fight for what it believes is right”, rather than trusting the stupid mantra of “the free market will provide” spouted by the self-interested elite and their acolytes.

Very clearly, technology as currently supplied to the market does not solve societal issues, as all manner of ICT products demonstrate “by the steaming bucket load” every day. When added to by bought legislation like the DMCA, that is not going to change anytime soon.

For instance, most of society says that “being a peeping tom on others is bad for society”, no matter who does it or for whatever motive, because it is an accepted “social norm” or “more”. Thus spying on people is very far from “socially acceptable”.

Which is why the likes of the US FBI, DoJ and intelligence community, and their equivalents in other countries, who want to spy on everyone for the power it gives them, spout a load of very distant corner cases and arm-wave movie plots into existence to scare people with. Or they invent bogeymen to scare people with, such as the old “Reds under the beds” nonsense.

As the “Chicken Little” style rhetoric of the “Red Scare” no longer really works, these distant corner cases, movie plots and bogeymen get couched in “emotional blackmail wording”, which we sometimes call “think of the children” rhetoric.

They do this because they know that society is fundamentally against them; thus, as they have no valid rational argument a normal person would accept, they have to resort to “emotional blackmail” or the threat of it. Further, as we have seen, they are quite prepared to do worse, a lot worse: by various entrapment and “agent provocateur” techniques they create bogeymen in the flesh, so they can get larger appropriations and more power, as well as “onboarding” more legislators to get the legislation you know they will misuse to their benefit.

In times past we would have called such people, and those associated with them, “evil” or “pariahs”, and at best made them outcasts by running them out of town, if not tarring and feathering them, or earlier burning them as witches, for the sake of “justice being seen to be done” in public.

Now the best we are allowed to do is point out that they are “evil”, perhaps say to them “get behind me, Satan” or some other biblical quote, and ridicule them and hold them up publicly as the fools they are, so everyone laughs them out the door.

I’m sure there are quite a few such people in “high places” that many citizens would love to “turn on a spit”, albeit figuratively.

But then “society” does not get a choice when it comes to such things, or a say in who gets positions of high office, because we allowed those rights to be taken away from us by self-interested legislators years ago. Just as we are losing the right to hold “politicians on the take”, via lobbying etc., to account…

If, as we obviously have, we have lost those rights, what else do you think we have lost?

Well, obviously one is the right to say how technology is used in society.

Does that answer your question sufficiently?

supersaurus November 25, 2019 3:27 PM

@ clive robinson

But look at it this way: an adolescent who is still technically a child can throw a punch in excess of 6 m/s, or ~13mph, into another child’s face, and the other child generally only ends up with a bruise.

apples to apples please. getting hit by the hard part of the front of a car that weighs 4,000lbs or by a concrete sidewalk is quite different from being hit by a six year old’s fist. the accelerations are what matter, see e.g. professional boxers who survive much higher fist velocities for a while because the gloves reduce the rate of acceleration.

5% fatality at 20mph

5% fatality is not equivalent to “only minor injuries”, this is not an apples to apples comparison either. also, as you point out, the size of the people cannot be determined from the statistic. it is quite different to hit a sloping windshield after possibly suffering a broken pelvis by being hit by the hood and thus accelerated to some fraction of the 20mph by the initial collision than to hit your head on the front of the car as the initial collision.

people who go through windscreens at 60mph are not normally living people after they finish being broken by the dash and then hitting the ground, hence irrelevant.

for that matter, both of us are giving anecdotal data about accidents that happened to us, however I must repeat my question about banging your head from six feet up. you seem to brush that off with the child’s fist argument, but in my case it required 46 steel staples plus many stitches to close up my skull after they finished the craniotomy. I landed on a smooth linoleum kitchen floor. I’ll take a child’s fist to the head any time. ask anybody, my head is plenty hard for that ;).

and a question, you didn’t mention the direction of the force that broke your jaw? in any case hitting your jaw in any direction is a lot less severe than hitting the back of your head because jaws aren’t all that solidly attached, which again affects the rate of acceleration of your brain bag.

supersaurus November 25, 2019 3:31 PM

@clive robinson

PS, forgot, skull fracture from hitting the floor preceded the craniotomy, which was made necessary by internal damages.

Clive Robinson November 25, 2019 8:47 PM

@ supersaurus,

5% fatality is not equivalent to “only minor injuries”

As I originally said (which you quoted earlier),

    Because if you hit a child head on at 20mph you will hurt them but in by far the majority of cases they will get only minor injuries thus survive.

The “majority of cases” does not preclude either serious injury or death in a minority of cases.

Oh, and please do not read “hit a child head on” as meaning hitting the child in the head. In England to “hit head on” means to make direct inline contact, with you moving directly toward them, nothing more than that.

Also,

getting hit by the hard part of the front of a car that weighs 4,000lbs

A two-tonne car is not normal in Europe or the UK, due to many things; the “carbon footprint”, fuel-emissions regulations and high fuel prices tend to make people drive smaller cars. Further, European cars also tend not to have “hard parts”: often the likes of radiator grilles are made of flexible plastics, and the front of vehicles is designed to be both low and sloped, so that even children are hit in a quite vectored way, reducing the effective impact force significantly.

As for,

or by a concrete sidewalk

In the UK “sidewalks” are usually called “pavements”, and frequently they are not concrete. More importantly, in many areas where children are found, such as urban and suburban areas, the pavement is separated from the road by a “verge” of grass on soil, four or often more feet wide. The most dangerous part is the 90-degree outward edge of the “kerb stone”, which is granite or pre-formed cement.

is quite different from being hit by a six year old’s fist.

Yes, the damage caused by an impact can be approximated as a pressure in newtons per square metre (in part it’s one of the reasons bullets make holes in you whilst the butt of the gun just thumps the shoulder). A child’s closed fist at the same velocity as the car, 6 metres/second, has a very small contact area, as little as 3 square centimetres. A car, on the other hand, especially one with a sloped bonnet, may have a 0.5–1.5 square metre vector-reduced impact area.
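As a rough illustration of how much the contact area matters (a minimal sketch; the 1,000 N force is an arbitrary assumed value, only the ratio of the two pressures is of interest):

    # Same force over very different contact areas gives very different pressures.
    # ASSUMPTION: an arbitrary impact force of 1,000 N, purely for illustration.
    force_N = 1000.0
    fist_area_m2 = 3e-4      # ~3 cm^2, as quoted above
    bonnet_area_m2 = 1.0     # mid-range of the 0.5-1.5 m^2 quoted above

    fist_pressure = force_N / fist_area_m2       # ~3.3 million N/m^2
    bonnet_pressure = force_N / bonnet_area_m2   # 1,000 N/m^2

    print(f"pressure ratio, fist vs bonnet: ~{fist_pressure / bonnet_pressure:,.0f}x")
    # The fist concentrates the same force over roughly 3,000 times less area.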

By the way, boxing gloves are designed to significantly increase the contact area, and due to their mass they actually increase the energy marginally, but they do not reduce the velocity or acceleration in any appreciable manner. In fact, tests have shown that because the boxer’s hand is better protected, boxers actually hit with much greater velocity and acceleration when the gloves are on their hands than when they are not. So in fact talking about boxers and their gloves is very much “not an apples to apples comparison either”.

As for,

however I must repeat my question about banging your head from six feet up

You actually provide too little information to say very much of anything.

As for my incident, as I’ve said before, I was karate-kicked with a “flying drop kick” to the back of the head by an adolescent who I’ve been told was over six feet tall and quite broad, so probably upwards of 100kg. The point of my jaw hit a vertical steel sign pole, approximately 5cm in diameter, which was about 30cm in front of my face. It was estimated that I was unconscious for 20–45 minutes, consistent with Traumatic Brain Injury (TBI). As the maxillofacial surgeon remarked, it was the hardest bone break to make in the entire human body and very rarely seen on live humans. There is also evidence that it may well have caused me brain damage, with cognitive impairment[1] and personality change. But as I am still functional at above-average levels cognitively, they did only basic brain scans, which generally show little or nothing with what is called “mild TBI”. What should have been used was SPECT imaging, which uses a gamma-emitting injection and a gamma camera, and is of sufficient sensitivity to show up “traumatic brain injury” (TBI) and “chronic traumatic encephalopathy” (CTE). Both TBI and CTE often get misdiagnosed as PTSD or chronic depression. It’s important to know the difference between TBI and PTSD etc., because treating for PTSD instead of TBI can be harmful… The problem I have is that the neurologists think it’s PTSD and the psychiatrists TBI; that is, they are both passing the buck.

The scary thing is that recent research into Alzheimer’s disease and tau pathology suggests that something like 50% of AD sufferers have previously had a TBI. Which, if you think about it, does not bode well…

[1] Amongst other things I had to teach myself to read again, which was not fun. Whilst I used to be able to read a paperback a day, plus 400-odd technical papers and two or three technical books a week with significant recall before the attack, things are significantly different now. I’m hard pushed to read one paperback a week, technical papers of any density tend to cause problems with “sensory overload”, and likewise with technical books, of which I’m lucky to read, but not really remember, three or four a year. However, once information is “in and remembered”, processing it is not really any different, and giving voice to the results or putting them on paper is only marginally more difficult than before.

MarkH November 25, 2019 9:18 PM

Wow Clive

About when in life did you suffer this trauma?

I remember as a boy reading how stupid the “plot convenience” was in TV and movies (usually crime shows) when one character knocks another on the head to make them “sleep” for a while. A neurologist explained that any blow sufficient to cause more than momentary unconsciousness will have life-long effects.

Now that more is known, repeated blows from boxing, American football and even soccer (from heading the ball) are known to cause permanent lesions.

Even extended hypoxia/low pressure (as in Himalayan climbers) causes injuries visible on scans, though not (yet) associated with cognitive losses.

I saw a friend recover from a brain injury (due to hemorrhage): a long, tough process that leaves one changed.

What misfortune, to be the target of some depraved punk.

tds November 26, 2019 4:29 AM

@Clive Robinson, MarkH, or the usual suspects

OT: if it’s any consolation, it took me more than 7 and 59/60 hours to read le Carre’s “Agent Running in the Field”. During the last few pages it seemed unlikely that le Carre could pull off a credible ending.

Clive Robinson November 26, 2019 6:43 AM

@ MarkH,

About when in life did you suffer this trauma?

As I’ve mentioned before, it happened at the turn of the century, and as for age, well, I’ve kind of mentioned that before when talking about the level of gray / badger in beards.

MarkH November 26, 2019 10:14 AM

@Clive:

I think my friend was in her mid to late 50s at the time of her catastrophe. Luckily, she was in generally good health, which the doctors said helped a lot.

At times she seemed like a new-born, but within about 6 months had recovered more than I dared to hope.

From the outset, the neurologists predicted (accurately) what the lasting effects would be, and that after about 2 years she would “plateau.”

It was very humbling for me to see her journey … I never saw her discouraged.
