Java Cryptography Implementation Mistake Allows Digital-Signature Forgeries

Interesting implementation mistake:

The vulnerability, which Oracle patched on Tuesday, affects the company’s implementation of the Elliptic Curve Digital Signature Algorithm in Java versions 15 and above. ECDSA is an algorithm that uses the principles of elliptic curve cryptography to authenticate messages digitally.


ECDSA signatures rely on a pseudo-random number, typically notated as K, that’s used to derive two additional numbers, R and S. To verify a signature as valid, a party must check the equation involving R and S, the signer’s public key, and a cryptographic hash of the message. When both sides of the equation are equal, the signature is valid.


For the process to work correctly, neither R nor S can ever be zero. That’s because one side of the equation is R, and the other is multiplied by R and a value from S. If the values are both 0, the verification check translates to 0 = 0 × (other values derived from the public key and hash), which will be true regardless of the additional values. That means an adversary only needs to submit a blank signature to pass the verification check successfully.

Madden wrote:

Guess which check Java forgot?

That’s right. Java’s implementation of ECDSA signature verification didn’t check if R or S were zero, so you could produce a signature value in which they are both 0 (appropriately encoded) and Java would accept it as a valid signature for any message and for any public key. The digital equivalent of a blank ID card.
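The fix for this class of bug is well defined: FIPS 186-4 requires the verifier to reject any signature whose R or S falls outside [1, n−1] before doing any curve math. A minimal sketch of that guard in Java (class and method names are mine; the hard-coded n is the P-256 group order, chosen only for illustration):

```java
import java.math.BigInteger;

public class EcdsaRangeCheck {
    // Example group order n: here the secp256r1 (P-256) order; any curve's order works.
    static final BigInteger N = new BigInteger(
        "ffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551", 16);

    // FIPS 186-4 requires rejecting a signature unless 1 <= r <= n-1 and 1 <= s <= n-1.
    // This is the guard CVE-2022-21449 showed was missing: without it, r = s = 0
    // collapses the verification equation to 0 = 0 and any message "verifies".
    static boolean inRange(BigInteger r, BigInteger s) {
        return r.signum() > 0 && r.compareTo(N) < 0
            && s.signum() > 0 && s.compareTo(N) < 0;
    }

    public static void main(String[] args) {
        // The "blank ID card": both signature components zero -> must be rejected.
        System.out.println(inRange(BigInteger.ZERO, BigInteger.ZERO)); // false
        // In-range components pass this guard (and must then still pass
        // the actual curve-equation check).
        System.out.println(inRange(BigInteger.ONE, BigInteger.TEN));   // true
    }
}
```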

More details.

Posted on April 22, 2022 at 7:09 AM


TimH April 22, 2022 10:02 AM

Financial accounts, involving other people’s money, have to be audited by law. Security software implementations (not just the algorithms) don’t have to be, despite involving other people’s money.


Clive Robinson April 22, 2022 11:26 AM

@ ALL,

You would be surprised at just how often checks like these get left out. Mostly they are bugs that annoy when tripped over. Some can, with a lot of work, become vulnerabilities at some level. But a few, like this one, cause a raised eyebrow, a WTF?
And “a perfect 10” on the score board[1] –or should be– at CVE etc.

Often this happens quite deliberately because…

Well the list of reasons is many, but two that happen regularly,

1, Make testing an interface requirement that the caller handles.
2, A “shift error checking to the left” mentality.

The first happens with the likes of “Div by Zero”: every competent programmer knows you should not do this. Therefore “you will test for it”, implying “so I don’t have to”.

The second is based on the separation of function, most often with “user input”. You separate the “data range etc” checking from the “business logic”. Then you move that checking as far to the left as possible; that way it keeps things clean/tidy etc. This is so ingrained in programmers that it actually comes as a shock when you tell them it’s quite a bad idea, as it leaves the “business logic” very susceptible to vulnerabilities.
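To make the point concrete, here is a sketch (all names hypothetical) of the alternative: business logic that re-asserts its own preconditions rather than trusting that a check “shifted left” to the input boundary was actually done:

```java
public class DefenseInDepth {
    // The check "shifted left" to a hypothetical input-boundary layer.
    static int parsePercentage(String raw) {
        int v = Integer.parseInt(raw.trim());
        if (v < 0 || v > 100) throw new IllegalArgumentException("out of range: " + v);
        return v;
    }

    // Business logic that does NOT silently trust its caller: it re-asserts
    // its own preconditions, so a missed or bypassed boundary check fails
    // loudly here instead of silently corrupting a computation.
    static int applyDiscount(int priceCents, int percent) {
        if (priceCents < 0) throw new IllegalArgumentException("negative price");
        if (percent < 0 || percent > 100) throw new IllegalArgumentException("bad percent");
        return priceCents - (priceCents * percent) / 100;
    }

    public static void main(String[] args) {
        System.out.println(applyDiscount(2000, parsePercentage("25"))); // 1500
        try {
            applyDiscount(2000, 250); // boundary check bypassed -> caught locally
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The duplicated check costs a couple of comparisons; leaving it out is exactly how a “the caller handles it” interface contract becomes a vulnerability.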

But then there are some that are so insidious you think they have to be malicious… In a way they are, but they “usually” are not intended to be.

One such is the,

3, “It ain’t ever going to happen” view point[1].

That is, say you have a 1 in 2^1024 chance of something happening in a “random” selection. Well, the thinking is,

A, Common sense says there’s no chance of that in my lifetime.
B, So the code is never going to run.
C, Adding the code here will be messy.
D, So put the code elsewhere.
E, And add a note to the docs…

Whilst point A is on “average” true… security is not about averages. Because “attackers have agency”, point B is very definitely false…

There are some other reasons I’ve listed in more detail a day ago,

The point is that whilst these things do happen, and often, it leaves open a “backdoor excuse” that in ICTsec we are dancing around without addressing.

That is, we assume there was no “intent”; that it was by accident, not design. Further, even when in fact there was intent, and you can show it, we again tend to assume it was not “malicious”… and so on.

We avoid the confrontation for “sociological” not “logical” reasons. The result is failed security and all the hurt that causes. Which now people are actually doing “cyber-warfare for real” should cause a bell to ring loudly…

But we have had a whole spate of people taking advantage of the “no confrontation” attitude. They have taken advantage maliciously and with intent, and people were actually getting physically hurt by software vulnerabilities a decade ago.

Yet do you hear a bell ringing?

As an industry, software engineering seriously needs to up its game, and fairly quickly.

How many more photos of dead and injured children in the MSM will it take?

As I’ve said for many years,

Security is a Quality Process and it begins before project day zero.

[1] Who else remembers the news about an elfin gymnast breaking the Olympic score display “because nobody gets a perfect 10”, but then she went and did it 😉

Mat April 22, 2022 1:22 PM

For the past 2+ decades, virtually anyone who knows if/else has counted as a programmer. Thanks to cheap labor in India, most US folks are content enough to become recruiters hiring such programmers from offshore. Then you need a PhD to find the bugs created by the unqualified ones. Same with the majority of FOSS.

Similarly, virtually anyone can be a CISO nowadays with just an MBA! They have no idea about security engineering.
No wonder Russia can bring down US cybersecurity with just a push of a button.

You can’t be a doctor without a medical degree/residency.
Same with attorneys.
But cybersecurity and software are wild wild west. And doctors trust such medical software written by greenhorns! What an irony.

John April 22, 2022 3:22 PM


SMOP = Small Matter of Programming


Glad my plants are still growing and I can eat them!!


SpaceLifeForm April 22, 2022 5:42 PM

@ Clive, ALL

When one has a Reference Implementation written in C++, and they are re-writing the code in Java, there is absolutely no excuse.

If one can understand Java, there is no reason that one can not parse C++ source code for reference.

Especially simple checks for zero.

Also note that C++ to Java converters have been around for years.

This was not an implementation mistake.

SpaceLifeForm April 22, 2022 6:31 PM

@ &ers, Mat

Good thread. Keep in mind that once one gets the CISO role, they will most likely be the scapegoat thrown under the bus.

Clive Robinson April 22, 2022 8:26 PM

@ &ers, Mat, SpaceLifeForm, ALL,

With regards Taz Wake’s “cathartic” tweets.

If you start where he says,

“Often I watch them “find a new role” just as a project is due, or spend half their lives explaining how X is the reason it failed.”

Those are very definitely the total failures, because they are too dumb to realise they’ve boxed themselves in.

I’ve described the right way to pull this nonsense at least four times on this blog before over the years.

The thing is Taz or those he is observing have it slightly wrong…

The ones he’s describing are failures at this particular strategy, and yes, they do tend to stand out like “spare teats on a bull”…

If you are going to play this game you need to understand the “thirds rule of projects”…

The ones he’s seeing are leaving at or perhaps after the 2nd third. That is a rookie tactical mistake, as by then things have been documented and almost cast in stone… So it cuts out the “how X” from being believable.

The first third of a project basically produces nothing; it’s little more than “get the cards and sit around the table”. That is, the game has not started, and though there may be a lot of money on the table, none of it is in play.

This is the optimum time to jump from that ship to the next as it passes. No cards have been dealt and no money has gone in the pot; there is no indication of good play or bad, or by whom. Everything is “in potentia”; like the “cat in the box”, the project is in superposition…

This is because most people can “start a project” and “get the resources / pieces lined up”. But importantly “no actual project work has started” so everything can look good or bad depending on how you want to spin it…

Therefore there is absolutely no record of how the project is going to turn out…

But more importantly you can “crap talk” about the project all you like. Wax eloquent about the aims, objectives, ambitions, potential, methodology, yada, yada to your heart’s content to a new prospective employer. Because even if “they have a man on the inside”, there is nothing whatsoever to contradict your “crap talk” fantasy…

So… You’ve “jumped and gone” from that “old project” and have started a new one elsewhere, which is now your “current project”. Importantly it is in its first third… Thus you can “crap talk” about it any which way you like…

Because you are one step away from that “old project” you jumped from, it is now getting to, or has got to, the deliverable point where it will no longer be in a superposition state.

That “old project” can be in basically one of three states,

1, A roaring success.
2, Functional / acceptable.
3, Some form of failure / disaster.

If the 1 in ten has happened and the project is a roaring success, you claim it as your success… due to your insights, predictions, planning, leadership, yada, yada… Breaking the ground and laying the solid foundations that are absolutely essential etc, etc (remember there is nothing to contradict your statements, no matter how high you pile it).

If, as is likely, the project was not a roaring success but at best functional… which “officially” happens about 50% of the time[1], you claim that the foundations were solid but the building was not built to plan. And… that was down to those who replaced you, who obviously did not have your vision, drive, or even the ability to follow the plan you left them to build on your solid foundations…

You say the same thing, but more definitely, for the 4 in 10 projects that fail so badly it can not be hidden and they get cancelled etc.

So no matter what the outcome of the “old project”, because you left at the end of the first third, not the second, you are still “the hero who boldly went” with “vision, drive, determination” and all that other “True Grit” bovine droppings.

But importantly it’s the “old project” not your “current project” so you talk more about the current project that’s still in that first third…

Seriously, using this technique you can very quickly move yourself up to being an “industry guru” or other “futurologist, man of vision” position and “go network” to spread your image into as many gullible minds as possible… The trick there is to talk not just a couple of years into the future but a decade or more.

Now that the Web 3 nonsense is getting to be a little less touchy-feely fog catching, it’s really time to start eulogizing about the benefits and joys Web 5/6 will deliver, by running Web 3 down… Point out Web 3’s “perceived deficits” and talk about “golden solutions” to them. As long as you stick to the “perception crap” rather than the “technical crap” you can not be proved wrong, because “perception” is not driven by logic, reason or sense… You only have to look at the notion of “Non-Fungible Tokens”(NFTs)[2] to see the steam already rising off of that freshly dropped load… So think up something that will make NFTs look like dime-store toys and go sell sell sell… Remember, a little good snake oil, when warmed up and polished, can not only glow golden but go a very very long way.

[1] Actually it’s way less than 50%. What happens is, to stop a project being one of the 9 in 10 that fail to fully reach objectives, and to avoid the embarrassment, bad press, shareholder rejection etc, the “official” objectives get “trimmed back” with some excuse like “the industry changed direction” or some such “no blame excuse”.

[2] NFTs are just another speculative vehicle to get a fresh load of idiots to buy into “Block-Chain” and similar systems. Why? Because others realise that whilst Crypto Coin Cons are still on the rise, the trajectory is not as it was, because “perception is catching up” with the bull crap. Likewise Smart Contracts are not minty fresh any longer… So now we have NFTs to hype, hype, hype; all we need is what Joe Stalin called “Useful Idiots” to push it to the point the shills jump onboard, and around and around the Merry Go Round the self proclaimed “smart investors” go, as they get taken for a ride to nowhere… Some might even be allowed to keep their shirts.

Ted April 22, 2022 10:08 PM

@Clive, SpaceLifeForm, All

The guy who found this flaw said there are resources to check for known vulns. In a post on this issue, he said he updated a copy of the Wycheproof test suite and found the issue in Java 17 immediately.

He hopes the JDK team will adopt the test suite going forward. Seeing as so many people could have been affected, I feel like we should get a say in this.

He adds:

Yubico is one of the few WebAuthn manufacturers that support the more secure EdDSA standard in addition to ECDSA. EdDSA signatures are less prone to the type of bug described here.

Do you know where people can choose what standard is used?

SpaceLifeForm April 23, 2022 12:38 AM

@ Ted

re: Do you know where people can choose what standard is used?

That is a good question. Unfortunately, you may not have any control, because the server may dictate.

To 2g or not to 2g.
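On the coder’s side at least, the standard is chosen by the algorithm name passed to the JCA: JDK 15+ ships Ed25519 (JEP 339), so selecting EdDSA over ECDSA is a one-string change. A minimal sketch (for WebAuthn itself, the server’s allowed-algorithm list typically dictates what the authenticator may use):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class EdDsaDemo {
    public static void main(String[] args) throws Exception {
        // The JCA algorithm name is where the standard gets chosen:
        // "Ed25519" (EdDSA, JEP 339, JDK 15+) instead of e.g. "SHA256withECDSA".
        KeyPair kp = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();
        byte[] msg = "hello".getBytes(StandardCharsets.UTF_8);

        Signature signer = Signature.getInstance("Ed25519");
        signer.initSign(kp.getPrivate());
        signer.update(msg);
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("Ed25519");
        verifier.initVerify(kp.getPublic());
        verifier.update(msg);
        System.out.println(verifier.verify(sig)); // true
    }
}
```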

D Chapeskie April 23, 2022 6:29 AM

@Clive Robinson

That is you have say a 1 in 2^1024 chance of something happening in a “random” selection. Well, the thinking is,

A, Common sense says there’s no chance of that in my lifetime.

I get your point but you chose your example number very badly. Even if the event has a chance of happening once every nanosecond, if it only goes badly once every 1:2^1024 times (and is indeed random) it will take longer than the current age of the universe to occur and will not yet have occurred. Nor will it occur before our sun expires¹. It would be more than reasonable to bet the success of your software, your company, or your life savings on the event not happening.

¹ The age of the universe when our sun expires will be something like six hundred septillion nanoseconds. It would take on the order of 10^280 such universes to get one in which your event occurs between universe creation and sun expiring.
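The arithmetic here is easy to verify mechanically (a throwaway sketch; the constants are the usual approximations, and of course it only holds while the trials stay honestly random, which is exactly what an attacker with agency breaks):

```java
import java.math.BigInteger;

public class OddsArithmetic {
    public static void main(String[] args) {
        // One trial per nanosecond, success probability 2^-1024:
        // the expected wait is about 2^1024 nanoseconds.
        BigInteger expectedNs = BigInteger.TWO.pow(1024);
        // Nanoseconds in a Julian year: 31,557,600 s * 1e9.
        BigInteger nsPerYear = BigInteger.valueOf(31_557_600L)
                .multiply(BigInteger.valueOf(1_000_000_000L));
        BigInteger years = expectedNs.divide(nsPerYear);
        // Age of the universe, ~1.38e10 years, for comparison.
        BigInteger universeAge = BigInteger.valueOf(13_800_000_000L);
        System.out.println(years.toString().length());        // 292 digits: ~5.7e291 years
        System.out.println(years.compareTo(universeAge) > 0); // true
    }
}
```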

D Chapeskie April 23, 2022 6:36 AM

Addendum to my previous:

Perhaps you were trying to make the point that an attacker can change those odds, if so you are of course correct, but it wasn’t clear to me that was your point.

Clive Robinson April 23, 2022 12:34 PM

@ D Chapeskie,

Perhaps you were trying to make the point that an attacker can change those odds

Yes that is one of two major points,

1, An attacker has “agency”, thus is not probability limited.

2, The probability spectrum runs from “here and now” to “infinity”.

That is, people forget probability is about “on average”, and random events do come up “double six on the first throw of the dice” from time to time.

So from a security aspect the tests must never be left out.

There are also other issues like “A failing RNG” that changes the probability towards a much more limited range of numbers.

A few years back now, a couple of young graduates at the UK Cambridge Computer Labs did some research on a high specification IBM “tamper proof” TRNG that had at least a 2^32 probability space. They subjected it to a little microwave energy that got into the device and took the probability space down to less than 2^7… So from 1 in 4 billion or better to 1 in 130 or worse, without any physical entry being used…

That was a low power CW signal they used. Back in the 1980’s I was already using envelope or frequency modulated EM signals to “inject waveforms” into hardware to do two basic things,

1, By cross modulation get to read internal device activity.
2, By using 1 inject a synchronised fault signal into the device.

It worked quite well on both a “Pocket Gambling Machine” and on an early Electronic Cash Wallet…

JonKnowsNothing April 23, 2022 4:25 PM

@SpaceLifeForm, @Ted, @Clive, @All

Q: What standard is used?

A: That is a good question. Unfortunately, you may not have any control…

The other end of the tale is exactly this: most programmers would not have any control, because the place the test SHOULD have taken place is far away from the section of code they are working on.

The implementation group dropped the test, but nearly every company I worked for held the mantra that:

  • “The test has already been done, treat the incoming data as good because someone else is/was responsible for testing it.” (aka No-Es-Ur-Joab)

This of course varies with what you are doing and where you are working in the larger part of a project. If you are not on the outer boundaries and your data is supposed to be already filtered and tested, you get this response. For some other reason of design, or possible corruption, you might still include some sanity test on incoming data, or even on the return data from calling other code segments.

It varies how much sanity testing is tolerated, particularly if you are in a team that holds to the “clean, efficient and elegant code” method. They don’t like extra lines of anything stomping on their elegant but potentially flawed design.

It takes a great deal of experience to stand up to someone who signs your paycheck and if you do, you can expect someone else to be signing your paycheck soonerish.

  • To Endian BE or Endian LE is still a question…

Clive Robinson April 24, 2022 8:31 AM

@ Ted, JonKnowsNothing, SpaceLifeForm, ALL,

Re : Choose what standard is used

As @SLF notes,

Unfortunately, you may not have any control, because the server may dictate.

This is a very high level symptom of a very deep flaw in very nearly all developers’ mindsets.

As I’ve noted there is a “push it to the left” mentality when it comes to “error checking”, well there is also a “push it up” the page mentality when it comes to “state”.

To see why the mentality exists in so many developers needs a bit of a “deep dive” to get your head fully around. In fact it can take two to three years for those doing development work outside of certain environments to get their heads around it. Which is perhaps why most non-engineering-based CompSci courses at or below Post Grad don’t go near the subject if they can avoid it (interviewing applicants for, oh, four decades has shown this is a general “time honoured” problem).

So back to the beginning…

Before most of us in the Software Industry were even thinking about computers, someone made the observation that,

“Programs = Algorithms + Data”

But they did not go on and add the fact that “Data” is subject to,

“Storage, Sequence, Time”

Nor did they add the fact that only the most trivial of Algorithms do not have “State”. So basically,

Algorithms = Rules + State

Worse, nor did they mention that the notion of “Atomic Action” is at best illusory. So State, which is itself “Data”, can change whilst any Rule is being processed… So,

State = Indeterminacy(Data)

or,

State = Data + (Chaos + Noise)

That is, the Indeterminacy has its own “algorithmic”[1] and “random” components. And yes, the algorithmic component is frequently “chaotic” in the mathematical sense.

That is, an iterative algorithm with both input and state, as it progresses, shows great sensitivity to the input in both its output and the feedback into its state…

Unfortunately this change of state is often to the point where you can not “wind it back”. That is, move from a current state to an earlier state, even just a few iterations earlier, with any certainty, with the indeterminacy growing as a power law… So it in effect approaches the idea of a “One Way Function” and gives “uncertainty”. So,

State = Data + (Uncertainty + Noise)

Which brings us back to the underlying,

“Storage, Sequence, Time”

Of which “Time” is the only thing that moves in a known direction…

But… the storage is by definition “uncertain”, as other things unknown to the algorithm happen in both hardware and software. Some are “deterministic but not synchronised” (think Real Time Clock, threads, other processes for instance). Others are just “nondeterministic / random” events or “noise” (think external interrupts, or changes from polling loops).

This “uncertainty” in “storage” is one of the reasons for the “volatile” keyword qualifier[2]. Which gives a very false impression, and is in reality at best of little use even in a single sequential task running on a microcontroller.
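The half-truth is easy to demonstrate in Java too (Java’s volatile gives stronger visibility guarantees than C’s, but still no atomicity, so a read-modify-write like count++ still races; class and method names are mine):

```java
public class VolatileNotAtomic {
    // 'volatile' guarantees visibility of writes between threads, but
    // count++ is still three separate steps (read, add, write), so
    // concurrent increments can interleave and lose updates.
    static volatile int count = 0;
    static final int N = 100_000;

    static int run() throws InterruptedException {
        count = 0;
        Runnable bump = () -> { for (int i = 0; i < N; i++) count++; };
        Thread a = new Thread(bump), b = new Thread(bump);
        a.start(); b.start();
        a.join(); b.join();
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        // Almost always prints less than 200000: increments were lost
        // despite 'volatile', because the qualifier provides no atomicity.
        System.out.println(run());
    }
}
```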

Another issue is the order and manner of the way algorithms progress.

We are generally taught to write “main flow” as a “logical thread” that forms the skeleton we flesh out with other code.

Often there is not one “main flow”, even in just a couple of pages of “business logic” code. Then adding other necessary code makes things almost impenetrable. When this happens it is usually because the design focus or methodology is at best inappropriate or simply wrong.

So to “untangle the ball of string” developers try to rebuild, with sequence of action giving a higher level top-down flow. Thus the problematic “state” gets pushed “up and out” of the “business logic” as much as possible. More often than not this is a bad idea, because most people’s grasp on state is not good, as it appears, at all but the most local focus, to be extraneous complexity that “gets in the way”. So changing all but very local state is pushed out of the business logic as part of “localisation by encapsulation”, and so its updating gets at best delayed or forgotten within the sequence… In effect you get two barely synchronised control sequences competing, that of the “functional state” and that of the “local state”. Thus progression through the desired operations is twisted and moves in a jerky manner.

Another problem with “state” is the confusion between “sequence” and “time”. Way too many assume they are somehow the same, when in fact mostly they are not. It’s not helped by statements like “A sequence is a set of events that occur in an ordered way in time…”, which at best only applies to an isolated chain of events of serial occurrence. In the normal course of events nearly everything happens in parallel, and though broadly not synchronised, things do interact significantly. So the resulting “state” is of,

“High ‘complexity’, ‘chaotic’ and mostly ‘indeterminate’.”

Three words designers and developers hate, and do what they can to banish from their current point of focus. Especially as “humans” we more or less think “serially”, with “Time’s Arrow” being rather more than a philosophical touchstone. In part because we use “time” as a fundamental measure, as it allows us to add “certainty” to events with statements such as “Cause and Effect”. Time becomes in almost every way our foundation stone, on which we build our knowledge of the world around us and of the universe.

But there is a problem, in that we make “assumptions” about “time” that we now know “theoretically” are not true, but “practically” appear true, mostly due to our limitations.

It’s even worse inside of computers, where a single processing unit multi-tasks by switching tasks in time slices. Which, due to trying to gain “efficiency of use”, often have variable duration and are re-ordered on some non-time-related priority.

So there is time when a process is active, and time when it is not; this time is local to a processing unit. But the time on the processing unit is not related to what many call “wall clock time”, which in and of itself has significant issues due to location, distance, and movement being “relative” to another location. With the number of locations being as near to infinite as makes little difference to a computer.

What few developers realise is that this relativity in time issue applies to every processing unit, even inside the same computer[3]. Communications engineers however are acutely aware of it as the changes in time pop up all over the place causing issues, especially in the likes of navigation systems. Whilst the comms engineers have developed techniques to within reason cope with but not solve the issues, the software industry has not and in many cases can not.

What software designers and developers do is “push the problem away” and try to pretend it does not exist. In reality they end up creating lots of little local realities that have significant issues outside of them.

Talk to a software designer that has gone from a “single central” database design to a “multiple decentralised” design about how they deal with “consistency” over the basic CRUD operations. Basically they don’t; they try to make the database static and unchanging, and consist of highly localised bubbles of data/state. It is one of the reasons the likes of LDAP and DNS databases are very unlike the relational etc databases DBAs are familiar with.

So rather than meet “state update” issues head on and deal with the complexity where it should be, in general the software industry pushes the problem away and hopes to make it “somebody else’s problem”. Part of that is insisting that their very local reality is the only reality, and that is always going to cause irresolvable issues.

[1] As oft noted by “the usual suspects” on this blog,

“Turtles all the way down!”

Or when the pathogenic or pain nature of “regression” is being considered,

“Upon their backs are ‘lesser fleas'”

That originates via Jonathan Swift’s 1733 ditty,

The Vermin only teaze and pinch.
Their Foes superior by an Inch.
So, Nat’ralists observe, a Flea
Hath smaller Fleas that on him prey,
And these have smaller yet to bite ’em,
And so proceed ad infinitum.
Thus ev’ry Poet, in his Kind
Is bit by him that comes behind

I guess, mindful of the caution about Poets, the English philosopher, mathematician and logician Augustus De Morgan in 1872, nearly a century and a half later, applied it instructively to the notion of regression with,

Great fleas have little fleas upon their backs to bite ’em.
And little fleas have lesser fleas, and so ad infinitum.
And the great fleas themselves, in turn, have greater fleas to go on,
While these again have greater still, and greater still, and so on and on.

But note importantly De Morgan added the “greater fleas” part, in deference not just to those who “stand behind” to bite, but to the logical inverse inference that the ancient saw of,

“What goes up must come down”

Must have been preceded by,

“What comes down must have gone up”

(see “De Morgan’s Laws” on logic, which are fundamental to not just logic and mathematics but life as well).

Thus implicitly giving us the notion of a “stack” structure, or as Turing thought of it, an “infinite tape” along which his universal engine crawled.

[2] In C programming we have the “volatile” keyword qualifier…

We get told things like: it warns a compiler not to assume that data at a given location can be relied upon to be unchanging between read and write instructions to it, from within the program fragment being compiled to object code. So it should not be optimised out in some way (such as held in registers / cache etc)…

It’s actually at best ‘a half truth nicety’ to make a fuller explanation “go away”… Which fails badly with threads, IPC and any form of parallel process, be it hardware or software, that can access that memory location.

From the security aspect,


And there is nothing you can actually do about it when a low level insider attack happens. As an example, consider the apparently near infinite possibilities of a universal “RowHammer” attack, where any bit, byte or larger in memory, including that in highly privileged space, can be changed by another process running on the system, even in completely unprivileged space. Then consider the consequences of that attack from below the MMU layer of the computing stack, which is in turn below the CPU level of the stack. So there is no “top down” method, formal or otherwise, that can stop such an attack “bubbling up”, right through all the layers above the storage layer, right up and through the “geo-political” at “Stack layer 15” etc. Potentially bringing death and destruction on more people than we can count in a ransomware or worse strike, for criminal or geo-political gain.

[3] We can now have processing clock speeds up in the 10GHz range, where the wavelength is ~30mm, and so the round trip distance has to be less than half that, with a third or less due to delays in processing elements (gate delays, meta-stability delays, rise/fall times, and capacitance and inductance effects). So any distance over 10mm is of significant concern, which is about as big as we generally make computer processing chips these days; we try to gain control of the time effects by slowing the clock to about a third of what we can make it (hence 1-3GHz CPU speeds). When we wire chips together we generally run at speeds 1/20th to 1/30th of the maximum processing speed and use specialised PCB layout and tracking. But to keep efficiency up we run the communications effectively asynchronously. Because not talked about is the effect of relativity. If you have a CPU with another two CPUs just 30cm away, even though rigidly fixed in distance, if you spin them, at those processing speeds “relativity” becomes an issue and the CPU clocks will shift in phase. Then there are Doppler effects, which do impinge on our human macro existence with the pitch of a siren on a vehicle changing as it comes towards us or goes away, however the pitch remains constant to those in the vehicle. Remember pitch is frequency, which is effectively 1/time, so relative time has to be changing if the pitch is changing.

But Doppler, though easy to calculate with respect to a single observer and a moving object, is not with multiple observers. Take a surface with a unit circle upon it and four points equidistant around it. Label them ABCD around, with a central point P and a moving object V emitting a waveform. As V moves from one point to another, the waveform is compressed towards the point V is moving to and stretched away from the point V is moving from. If the line V moves along is directly between the two points and V has constant motion, the pitch observed at one point will be above that which V emits, and below when observed at the other point. But what of the other two points? Well, the relative motion will not be constant, thus the observed pitch will change in a more complex way. Now instead of a tone, imagine V emits a very very short duration pulse that each point returns to it; what does V see from each point? More importantly, what does each point observe about the other points’ emitted pulses? It quickly gets complicated. Now imagine that all the points actually move, and they use all of those pulse measures to adjust their movements. But… they have to make their control decisions very fast, several orders faster than the expected pulse time delays… The oft used solution is to create a “synthetic reference point” and make control decisions with respect to that. But that just “localises the problem”, not solves it; all you really do is shift the problem from the local control logic into the “synthetic reference point” calculation. You will probably then find that it uses some kind of localised reference trick to push the issues out further. Hopefully up off of the page and out of sight, thus mind.
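The clock-distance arithmetic at the start of [3] is easy to check (a throwaway sketch; the constants are the usual approximations):

```java
public class ClockDistance {
    public static void main(String[] args) {
        double c = 2.998e8;   // speed of light in m/s
        double f = 10e9;      // a 10 GHz clock
        double wavelengthMm = c / f * 1000.0; // ~30 mm, as stated in [3]
        // A signal must travel out and back within one clock cycle, so the
        // usable radius is at most half a wavelength, before gate delays,
        // rise/fall times etc. eat further into the timing budget.
        double roundTripLimitMm = wavelengthMm / 2.0; // ~15 mm
        System.out.printf("wavelength ~ %.1f mm, round-trip limit ~ %.1f mm%n",
                wavelengthMm, roundTripLimitMm);
    }
}
```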

lurker April 24, 2022 3:27 PM

@Clive Robinson

Coming from a history of analogue comms, I have never been able to accept the Article of Faith that Data Is, ie. it always exists at the time and place determined by the software, and no questions can be entertained concerning the Purity of the Data Signal. The logic can be analyzed, reviewed, tested, corrected and will remain sacrosanct, unless somebody forces it against an edge case and discovers a coding bug.

Back in the day it was presented to us that by converting our signals to digital Noise and Distortion are Eliminated. As your post shows, this is a useful first order approximation only when the environment is confined in time and space. Would you please apply a tiny bit of editorial polish to that post and publish it as a treatise to be compulsory reading for CompSci students. Thanks.

SpaceLifeForm April 24, 2022 5:01 PM

@ lurker, Clive

The Cliff Notes

Heisenberg was right.

Go-faster stripes fail faster.

Cosmic Rays exist.

GOGI – Garbage Out, Garbage In

supersaurus May 2, 2022 3:57 PM


in another nutshell: how do you feel about riding in a self-driving car?

none for me thanks…

Clive Robinson May 2, 2022 5:33 PM

@ supersaurus,

how do you feel about riding in a self-driving car?

Is the wrong way to ask the question…

Put simply I avoid getting in small road vehicles as much as I can…

Why? Because of other road users. If I get in a car, even if it's perfectly fine and you or another driver are safe, it helps very little when two idiots in a souped-up Golf GTI jump the lights at twice the legal limit and smash into the side of the vehicle you are in, sending it sideways thirty to forty feet, with the side you are on crumpled up into your ribs and hip, and your head smashed into the side window, leaving glass in your scalp.

Or you are in the front passenger seat and the driver has pulled correctly into a road junction to turn to their side, and a half-drunk idiot with no lights again smashes into the vehicle. This time pinning your leg and giving you whiplash that causes nearly six months of neck pain and migraines.

Oh, and there were one or two similar incidents when I was in mini-buses with ten or more other people. Being T-boned on a motorway/freeway junction three-lane roundabout, with cars whizzing by at 50MPH or more on both sides, does give you an interesting perspective when you have to jump out and push the bus off the road, likewise the vehicle that hit you, because its driver and the rest of his family are paralysed with fear while other a-hole drivers actually rev up to get around you…

I used to ride a push bike from my teens through to my forties. The number of times I got side-swiped, pushed off the road, or otherwise hit, well, four a year was about average… In every case the driver of the vehicle that hit me was to blame. Mostly they were too busy doing something other than concentrating on their driving… But some were so stupid they really should have had a brain check with a four-ounce hammer. For instance, a woman who owned a hairdresser's shop had parked with the driver's side towards a very busy road, and just threw the driver's door open. Luckily she had the window wound down, as I went through it; the bike was a write-off with twisted forks and frame. Her excuse? Her side mirror was broken, so she did not see me…

So I regard roads as being a very hostile and dangerous place, where the driver of a vehicle has no control over other drivers.

So when you consider “self driving road vehicles” have a very poor record on “hazard perception” I think you can guess what my answer is going to be.

But what about other self-navigating vehicles… I generally do not have problems with the simple mechanical "auto pilots" on sail boats, and used them frequently when sailing off shore and needing to sleep on a week- or month-long passage. Likewise I don't have any real issue with electro-mechanical autopilots in older light aircraft, provided they are well maintained and the pilot is "old school"[1]. In fact the computerized ones and modern pilots cause me way more concern, especially when you take a good look at how they have been installed[2].

As for commercial vehicles, I travel in them quite a bit, and unmanned "light railways" do not faze me in the slightest, nor do commercial ship and jet aircraft auto-pilots. Especially as I've designed auto-navigation systems for ROVs and similar sub-surface autonomous vehicles, and for some larger-than-quadcopter wing-based drones and gliders.

So to answer your question,

No, I have no intention of getting in a self-driving car. Not because I don't trust the navigation aspect, but because it's a very, very dangerous environment and automatic hazard detection is not at all good. Worse, you've no idea if it will decide to prioritize your safety or the safety of the baby in a pram that a person has just pushed into the road, because they did not hear the vehicle you are in coming as they were too busy on their mobile phone.

But also consider this… Some drivers think it funny to have a passenger lean out of the window and smash people's post boxes on poles in front of their houses with a baseball bat or similar… With that sort of mentality in people, the road environment is never going to be safe. And that's before you consider those doing "Suicide by Bridge Abutment" or "Suicide by wrong-side-of-the-road driving".

First we need to change the road environment, and we've no way to do that, because there are always idiots, and worse, whom you cannot stop.

[1] The problem is that the more reliable or accurate an auto-pilot/navigator is, the more the human pilot/navigator trusts it. So when it does go wrong, the human is generally not in any way prepared to take over… I'm known for doing Dead Reckoning navigation and celestial navigation very frequently, even when I've got a top-of-the-line autopilot[2].

[2] As an engineer I know about fuses and other "guaranteed to fail" parts. For those that do not know, an electric current has the ability to migrate metal in a wire. That is fine with Alternating Current (AC), where the metal ends up going back and forth. But with Direct Current (DC), just as in copper plating, the metal moves in one direction, so the wire gets thinner and its resistance rises; and as the heating effect is the current squared multiplied by the resistance, you know the fuse will eventually fail. Then there is "anodic corrosion": in any electrical system, a liquid with free ions in it is going to cause corrosion that not only increases resistance but makes the metal less ductile, or more fragile, depending on the way you want to look at it. (It's why, if you have copper-pipe central heating with brass screw-thread fittings that happen to have the same threads as steel gas fittings, and some idiot adds a gas fitting because they cannot find a brass one, ten or twenty years down the road you will have a serious leak…)

Faustus May 5, 2022 2:24 PM


“That is you have say a 1 in 2^1024 chance of someting happening in a “random” selection. Well, the thinking is,

A, Common sense says there’s no chance of that in my lifetime.
B, So the code is never going to run.
C, Adding the code here will be messy.
D, So put the code elsewhere.
E, And add a note to the docs…

Whilst point A on "average" is true… security is not about average. Because "attackers have agency", point B is very definitely false…"

I think you have a good point. It is bad design to spitball the chances of something happening in order to avoid programming checks. It is better to check for the impossible. (Let the optimizer flag the check or remove it, so you can be sure. Well, sort of.) It is also very true that attackers can often control variables that affect these probabilities.
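"Check for the impossible" is exactly the fix for the vulnerability in the post: per the ECDSA specification, a verifier must reject r or s outside [1, n-1] before doing any curve arithmetic, because r = s = 0 makes the final comparison trivially true. A minimal sketch in Python (this is illustrative, not Java's actual patched code; the function name is mine, and n below is the genuine group order of the P-256 curve):

```python
def ecdsa_range_check(r: int, s: int, n: int) -> bool:
    """Return True only if both signature halves lie in [1, n-1].
    A verifier must run this *before* any other signature math."""
    return 0 < r < n and 0 < s < n

# Group order n of the NIST P-256 (secp256r1) curve:
N = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

assert ecdsa_range_check(12345, 67890, N)   # plausible values pass
assert not ecdsa_range_check(0, 0, N)       # the "blank ID card" is rejected
assert not ecdsa_range_check(N, 1, N)       # out-of-range r is rejected
```

The point is that the check costs two comparisons and removes an entire class of forgery, which is a far better trade than reasoning about how "unlikely" a zero value is.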

But I always have in mind that much of security IS based on probability, cryptography included. If I have an AES key with a one in 2^1024 chance of being guessed, that is considered more than enough security, today at least. Though such probabilities worry me, especially when I think about my crypto.

I don’t think cryptography offers another option to probability based security. Am I wrong?

But reading about parapsychology experiments where some participants gain results with infinitesimal probability, and about people winning major lottery prizes more than once, I wonder if we really understand the probability of extremely unlikely events.

Clive Robinson May 5, 2022 5:53 PM

@ Faustus,

I wonder if we really understand the probability of extremely unlikely events.

Simple logic says we do not.

About the fastest we can make "qualified" observations of events that are sufficiently isolated is about a microsecond (the time it takes light to travel 300m), which is around 2^20 per second. So not even a tiny, tiny dent in 2^1024… And to spot trends you have to not just store the information with sufficient accuracy, you have to be able to search and process it.

Or to put it another way as option A says,

“Common sense says there’s no chance of that in my lifetime.” 😉

And so we “assume” things without sufficient knowledge.

Which brings us to,

I don’t think cryptography offers another option to probability based security. Am I wrong?

It depends on that weasel word "probability", and on some other equally weaselly words, "Random", "Chaotic", "Complexity" and "Deterministic", as seen from two different points of view.

Obviously the AES algorithm is fully "deterministic" and gives high "Complexity", but it's certainly not "Chaotic" or "Random" or based on "Probability". So in reality it's just some kind of pointer into a map.

But an observer looking at the output of the map has no way to invert it back to the pointer sequence; it is, or at least should be, a "One Way Function" (OWF). In fact the observer should have no way, in a reasonable time period, to tell whether the output is truly random or not[1]. But that depends on your definition of "reasonable": in theory you could tell with three or fewer output blocks of any standard block cipher[2].

So there is a bit of a problem: whilst an observer cannot reverse the OWF and recover the contents of the state array, at some point they can tell that there is a state array, and how large it is.

Such is the nature of determinism, no matter how complex the output mapping or feedback mapping might be.

But this gives rise to another issue, and one that few people consider.

Because the likes of block and stream ciphers are "deterministic", their output is fully dependent on the input. So if the input is deterministic, so is the output, which means there is no "deniability" in the system.

The "One Time Pad" (OTP), whilst it looks like a stream cipher, is not one, because the key input to the mixer function is non-deterministic.

So if I give you a ciphertext and a key and say “this is the message I received…”

The sender can give you the same ciphertext but different key and say “this is the message I sent…”

Because with an OTP all keys are equiprobable, so are all the messages, and you have no way to tell which is genuine.

But with a stream cipher, as the keystream is fully deterministic, your chance of finding two different keys that produce acceptable plaintexts is very, very small.

So with a stream cipher, unlike an OTP, you have no real deniability if the other party in the communication decides to betray you to a third party… With an OTP you do.
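The deniability property described above follows directly from the XOR construction: given an intercepted ciphertext, any same-length plaintext can be "proved" by handing over the forged key ciphertext XOR fake-plaintext. A short sketch (the messages and helper name here are illustrative):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

real_msg = b"ATTACK AT DAWN"
key      = os.urandom(len(real_msg))   # true one-time pad key
ct       = xor(real_msg, key)          # intercepted ciphertext

# Later, to a third party, the sender claims a different message
# and fabricates the "key" that makes the claim check out:
fake_msg = b"BUY MORE MILK!"           # same length as real_msg
fake_key = xor(ct, fake_msg)

assert xor(ct, key) == real_msg        # genuine decryption
assert xor(ct, fake_key) == fake_msg   # equally valid decryption
```

Both keys decrypt the same ciphertext perfectly, and since a genuine OTP key is uniformly random, the third party cannot distinguish the real key from the forged one. A stream cipher closes this door because its keystream is generated deterministically from a short key, so an alternative key yielding a sensible plaintext almost certainly does not exist.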

[1] Actually there is a way, simple to describe… The input sequence to the AES mapping algorithm requires there to be "state", which means in practical terms the sequence is bounded by the size of that state, and under certain conditions the size of that state can be deduced. The designers of AES took some care to ensure that the state would not be easily determined. A problem arises in that true random events are "unbounded", which means that although their distribution is eventually flat, in the short term it should not be "too flat". Due to the limited size of the state and the nature of the mapping function, AES, like all ciphers, trends towards being "too flat" fairly quickly.

[2] Think not of the complex mapping, but of the state array behind it. It's not hard to see that the longest non-repeating sequence it can produce is bounded by 3N-2, where N is the number of bits of state.
