Anders September 17, 2021 4:24 PM

Saturday, September 18, 2021 is the next Warpstock 2021


Free event on YouTube, 5 hours.

If you have nothing to do and you are interested in how OS/2
is doing today (under the name ArcaOS) and where it is headed,
take a look.

FA September 18, 2021 1:15 AM


Re. nuclear decay TRBG

I’ve been analysing a generator exactly as you describe (with a 1-bit counter), both mathematically and by simulation.

Let F be the output frequency of the counter, M the average decay event rate, and R = F / M.

There will be no bias in the 0 to 1 bit ratio, regardless of R, for exactly the reason you stated: the process is ‘memoryless’. Where R matters is for the correlation between successive outputs (i.e. XOR with the previous bit).

For R = 5 this will be biased by around 0.1%. For R = 50 even 10^9 bits are not enough to detect any bias.
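
For anyone who wants to play with this, here is a minimal Python sketch of the setup (the parameter names, the seed, and the convention that a counter with output frequency F toggles at 2F are my assumptions, not part of the original analysis):

```python
import random

def simulate(rate_M=1.0, freq_F=5.0, n_bits=200_000, seed=42):
    """Sample a free-running 1-bit counter (output frequency F) at the
    arrival times of a Poisson process with mean event rate M."""
    rng = random.Random(seed)
    t = 0.0
    bits = []
    for _ in range(n_bits):
        t += rng.expovariate(rate_M)            # memoryless inter-event times
        bits.append(int(2.0 * freq_F * t) % 2)  # counter state at event time
    ones = sum(bits) / n_bits
    xor = sum(a ^ b for a, b in zip(bits, bits[1:])) / (n_bits - 1)
    return ones, xor

ones, xor = simulate()
print(f"P(bit = 1) = {ones:.4f}")   # no bias, regardless of R = F/M
print(f"P(XOR = 1) = {xor:.4f}")    # slight bias only for small R
```

In a short run like this the XOR deviation sits near the noise floor; pushing n_bits up and R down makes it emerge, consistent with the figures above.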

Winter September 18, 2021 2:24 AM

@Clive (last week’s squid post)
“For those interested in developing lithium battery technology,”
“Firstly, there is one heck of a difference between a consumer device drawing milliwatts of power and vehicles that can draw hundreds of kilowatts of power.”

Wearable rechargeable batteries are obviously low powered. Charging and discharging generate heat, which should not be dissipated through the wearer’s skin.

Hundreds of kilowatts are out of the question then.

(Sorry, not open access)
ht tps://

We are able to produce metres of high-performing fibre lithium-ion batteries through an optimized scalable industrial process. Our mass-produced fibre batteries have an energy density of 85.69 watt hour per kilogram (typical values are less than 1 watt hour per kilogram), based on the total weight of a lithium cobalt oxide/graphite full battery, including packaging. Its capacity retention reaches 90.5% after 500 charge–discharge cycles and 93% at 1C rate (compared with 0.1C rate capacity), which is comparable to commercial batteries such as pouch cells. Over 80 per cent capacity can be maintained after bending the fibre for 100,000 cycles. We show that fibre lithium-ion batteries woven into safe and washable textiles by industrial rapier loom can wirelessly charge a cell phone or power a health management jacket integrated with fibre sensors and a textile display.

Clive Robinson September 18, 2021 3:50 AM

@ FA, MarkH, Freezing_in_Brazil, ALL,

I’ve been analysing a generator exactly as you describe (with a 1-bit counter), both mathematically and by simulation.

You realise that a “1-bit counter” is a clocked latch, which is also a sampler and, more interestingly, a frequency mixer?

Something tells me you do not. In fact, when you say,

the process is ‘memoryless’

that can be fairly easily disproved, which is a fairly good indicator.

It’s a mistake in understanding I see many people make when using ring oscillators as TRNGs.

It’s fairly easy to see why: take two reasonably stable oscillators, feed one into the data input of a latch and the other into its clock input, and observe the Q output of the latch on an oscilloscope.

According to what you say you will have a “memoryless” output, that is, no correlation between outputs.

Well, if you only look “close in” with an oscilloscope it may look that way; in fact to the human eye it may appear “random”, but it is not. Open the scope timebase out and you will see that the Q output transitions appear to bunch up and separate in a stable pattern.

If you take the Q output and put it into an appropriate low-pass filter you get a near perfect sine wave at the difference frequency between the two oscillators. So there is a very high degree of correlation between the Q output states of the latch (unless of course you subscribe to “magic pixie dust” thinking).

If you don’t have a scope to hand you can do this by drawing out the signals on graph paper.
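
If graph paper is not to hand either, the experiment can be sketched in a few lines of Python; the oscillator frequencies here (1000 Hz into D, 1010 Hz on the clock) are arbitrary values I picked for illustration:

```python
import cmath
import math

f_data, f_clk = 1000.0, 1010.0   # two reasonably stable oscillators (Hz)
n = 20_000                       # number of clock edges to simulate

# D-type latch: Q takes the data input's value at each rising clock edge
q = [int(math.sin(2 * math.pi * f_data * k / f_clk) > 0) for k in range(n)]

def power_at(freq_hz):
    """Magnitude of the Q sequence's spectral component at freq_hz."""
    w = 2 * math.pi * freq_hz / f_clk
    return abs(sum((b - 0.5) * cmath.exp(-1j * w * k)
                   for k, b in enumerate(q))) / n

print(f"at 10 Hz (difference): {power_at(10.0):.3f}")   # strong component
print(f"at 137 Hz (arbitrary): {power_at(137.0):.3f}")  # near zero
```

Low-pass filtering Q would keep just that 10 Hz component: the near perfect sine wave at the difference frequency described above.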

So it is clear that the time differences on the inputs of the latch are carried forward through the latch, which is what you would expect from sampling theory.

In fact, look up the use of oversampling and undersampling in A-to-D and D-to-A conversion. You might also want to look up the use of “dither” in those processes to get fractional improvements in measurements.

So if you regard the high-frequency oscillator as one input to the latch and the output from the particle detector as the other, you will realise that even though the 1-bit counter’s output may look random, it is not: it represents an accurate measure of the system over time.

As we know, over a period of time the decay rate drops and the period between detector outputs increases. This will be seen as an effect on the Q output; it cannot in any way be avoided.

Likewise, as the XTAL oscillator “ages” and its frequency decreases over the years –a well studied phenomenon– it too will be seen as an effect on the latch Q output.

But in the much shorter term, so will the delta-F changes caused by the temperature inside the instrument case, driven by room temperature and by sunlight if it shines on the case. Depending on the crystal cut, such changes will give something like ±10 ppm of frequency change over the daily internal temperature cycle of the case (look up why Temperature-Compensated Crystal Oscillators, also called TCXOs, were designed and are still used).

Thus the “random” you see is in part the fairly deterministic effect of such things as temperature, and likewise mechanical vibration, a well-known effect called “microphonics”.

To argue, as you and MarkH have, that none of this is true would be entirely pointless, because anybody can verify what I’ve said above, as they could the several times I’ve described this behaviour in the past.

I suggest at this point that you and MarkH both stop these protracted and pointless arguments, which other readers are probably somewhat tired of, and which you have both started several times in the past over what I’ve said.

Winter September 18, 2021 4:27 AM

Forgot the title of the link:

Scalable production of high-performing woven lithium-ion fibre batteries
(Sorry, not open access)
ht tps://

echo September 18, 2021 4:48 AM

Rolls-Royce’s all-electric aircraft completes 15-minute maiden voyage

I know this requires a bit of imagination but this aircraft reminds me of the Supermarine S.6B. It was sponsored by Lady Houston to the tune of £100,000 (which back then was enough to set most people up for life with a wide margin to spare) after the UK government pulled out. Her sponsorship allowed Supermarine to go on and win the 1931 Schneider Trophy. This aircraft was the foundation of what later became the Spitfire.

When I was small the Schneider Trophy was an occasional talking point and one of my sisters bought me a model of this aircraft. As it turns out, just reading the wiki, Lady Houston was a suffragette, which may have had something to do with this. It’s something my mum used to bang on about from time to time, and she would always go out to vote come rain or shine.

I know it’s only a movie but one of my favourite bits is the three Spitfires doing a flyby of the camera position. Then there is the low level flyover of the fishing boat and the three Spitfires breaking. Honestly this brings tears to my eyes (and is one reason why I wear waterproof mascara). This isn’t for reasons of national pride or jingoism but remembering the many fine young men and women who died saving the world from evil and, yes, this includes all the other people from other countries around the world who fought the evil of the Nazis, before anyone complains.

Ooh that jumpscare. You rotter you.

In other news the French are a bit miffed about the Anglosphere nuclear submarine deal. I cannot blame them for this and given various geopolitical concerns about one self-serving hegemony which shall not be named not to mention the horrors of Brexit I’d be miffed too.

FA September 18, 2021 10:25 AM


the process is ‘memoryless’

refers to the nuclear decay, and the exponential probability distribution that describes it. And to nothing else. Not to two oscillators combined with a latch or anything similar. Nor do I claim that such a system of two or more oscillators would produce anything ‘random’, nor do you need to teach me very basic DSP theory.

The system that @MarkH and I presented does not depend on any digital electronics as a source of randomness. It is also quite easy to analyse; I did so, and I assume @MarkH did more or less the same. The only source of randomness we assume is the randomly timed impulses from a Geiger tube exposed to some decaying isotope.

So all the talk about ‘things that may look random but are not’ is pretty irrelevant.

As said, there will be some bias on the XOR of successive output bits. This can be quantified quite easily, and turns out to be proportional to 1/R^2, with R as defined in my previous post.

Mowmowfi September 18, 2021 11:06 AM

@fa all
It has an accurate time decay; you might be able to use two sources, but a Geiger counter will read any ionising radiation, so events can be injected. It seems highly technical but it’s stupid.

MarkH September 18, 2021 11:06 AM

@FA, Clive, Freezing_in_Brazil:

To underscore what FA wrote, consider two U235 nuclei: one embedded somewhere in Earth, about 6 billion years old; the other freshly minted yesterday in some supernova.

Those nuclei have exactly equal probabilities of spontaneous decay within the next 24 hours.

The ancient nucleus doesn’t “know” or “remember” that it has already survived more than eight half-lives, or that it’s “overdue” to disintegrate. The phenomenon is perfectly memoryless, and stateless.

FA, I’m very interested in your analysis. Can you extend it for n bit counters? If you write it up and post a reference, I will study it attentively.

JonKnowsNothing September 18, 2021 11:15 AM


MSM report of an app that converts Farsi language plain text to gibberish using a steganography encryption algorithm.

The app was vetted by the German penetration-testing firm Cure53, which identified several failure points that have now been fixed.


ht tps://

Sut Vachz September 18, 2021 11:46 AM

Re: going nuclear with TRNG

In addition to taking note of von Neumann’s well known remark concerning the state of soul of those seeking to generate random numbers by arithmetical means, one might well be cautioned against peering too deeply into the atomic nucleus, lest the chthonic gods that rule there be offended.

Like the infinite straight line, true random numbers cannot be had (because they don’t exist), and are never needed.

MarkH September 18, 2021 11:52 AM

@FA, Freezing_in_Brazil, Clive:

Consider a sequence of occurrence times for global earthquakes, above some magnitude threshold (to filter out rippling aftershocks and man-made temblors).

Convert those earthquake times by stripping away the date, hours and minutes, to obtain a sequence of numbers modulo 60 seconds. This models the correct method for sampling nuclear decay in a TRNG.

Would you expect those numbers to have detectable bias?

For example, do earthquakes occur more often in the first half of a clock minute than the second half? More often on odd-numbered seconds than even-numbered seconds? Why, or why not?
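
A quick way to test the intuition is to simulate it; here independent events with a mean spacing of a few hours stand in for quakes (the rate, sample size and seed are my choices):

```python
import random

rng = random.Random(1)
n = 100_000
t = 0.0
first_half = odd_seconds = 0
for _ in range(n):
    t += rng.expovariate(1.0 / 18_000)  # independent events, mean gap 5 h
    s = t % 60.0                        # strip date, hours and minutes
    if s < 30.0:
        first_half += 1
    if int(s) % 2 == 1:
        odd_seconds += 1

print(f"first half of minute: {first_half / n:.3f}")   # ~0.5
print(f"odd-numbered seconds: {odd_seconds / n:.3f}")  # ~0.5
```

Because each event’s timing is independent of the clock, and the gaps dwarf the 60-second window, neither statistic drifts away from one half.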

MarkH September 18, 2021 11:58 AM

Previous comment continued:

If a quake occurs when the clock’s second hand is at 34, will the time of the next quake correlate to that number?

Suppose that the clock runs a little slow, progressively drifting behind UTC. Will that add bias?

Suppose that the magnitude threshold gradually rises month by month, to model “decay” of the source. That will surely decrease the number generation rate … will it also add bias?

lurker September 18, 2021 1:21 PM

In other news the French did not recall their ambassador from London because they regard UK as an irrelevant junior partner of the triumvirate…

SpaceLifeForm September 18, 2021 4:19 PM

Infrared Laser will not get the cat moving


When an object hidden in the room is static, the new keyhole imaging technique simply can’t calculate what it’s seeing. But the researchers have found that a moving object paired with pulses of light from a laser generate enough usable data over a long period of exposure time for an algorithm to create an image of what it’s seeing.

SpaceLifeForm September 18, 2021 4:37 PM

@ Anders

ArcaOS is interesting, but a complete non-starter as it is not fully Open Source.

And having a dependency of Java…

Runs away

SpaceLifeForm September 18, 2021 5:07 PM

@ MarkH, FA, Clive, Freezing_in_Brazil, Sut Vachz

Objection! Assumes facts not in evidence! (pounding on Table of Elements)

We (collective we) Do Not Know nor Can Prove or Disprove that Spooky Action at a Distance is not in play here.

Space is tricky!

Your Honour, I move for Mistrial!

Those nuclei have exactly equal probabilities of spontaneous decay within the next 24 hours.

The ancient nucleus doesn’t “know” or “remember” that it has already survived more than eight half-lives, or that it’s “overdue” to disintegrate. The phenomenon is perfectly memoryless, and stateless.

MarkH September 18, 2021 6:43 PM


Some types of decay are subject to external influences within the ranges possible for a habitable environment. The source for a TRNG must be chosen with care.

Working scientists don’t like to say “prove” and “proof.”

Theory — and experimental confirmation — of the nature of radioactive decay has held up for considerably longer than we’ve been alive.

Extraordinary claims require extraordinary evidence. Personal doubts are not science.

GregW September 18, 2021 7:15 PM

For any DIYers, keep in mind that Geiger tubes themselves are subject to manufacturing variations that (despite a random radiation source) can lead to nonrandom distributions of counts, whose behavior can further change or degrade under temperature or vibration. You need to statistically test what you’re getting (or trust the manufacturer to have done so, or no interdiction to have occurred, etc. etc…)

Clive Robinson September 18, 2021 8:04 PM

@ SpaceLifeForm,

Infrared Laser will not get the cat moving

Do you remember not so long ago a similar technique was found to be able to take 3D pictures “around the corner”[1]?

“The system exploits a device called a femtosecond laser, which emits bursts of light so short that their duration is measured in quadrillionths of a second. To peer into a room that’s outside its line of sight, the system might fire femtosecond bursts of laser light at the wall opposite the doorway. The light would reflect off the wall and into the room, then bounce around and re-emerge, ultimately striking a detector that can take measurements every few picoseconds, or trillionths of a second. Because the light bursts are so short, the system can gauge how far they’ve traveled by measuring the time it takes them to reach the detector.

The system performs this procedure several times, bouncing light off several different spots on the wall, so that it enters the room at several different angles. The detector, too, measures the returning light at different angles. By comparing the times at which returning light strikes different parts of the detector, the system can piece together a picture of the room’s geometry.”

I remember at the time the description reminded me of descriptions of how holographic images are taken, where in effect the time is measured by the interference pattern recorded in the photo emulsion.

People forget that, just as you cannot destroy energy, “information energy” simply manifests in a different way[2]. Something similar applies to temporal effects in measurements, and it’s important to know what you are measuring and what it is sampled by…

There is a lot more to using pulsed systems than meets the eye.

[1] Note the date, March 21st 2012; not a lot has been said since.

[2] You see this effect in analogue electronics: if you apply limiting to the amplitude of a signal, the “information energy” is not lost, it just gets in effect folded back into the signal as harmonics. So when you then filter the signal, the amplitude limiting goes to pot. Which people would realise if they played around with an oscilloscope and a real-world signal source. That is, take a sine wave and hard limit it to make a square wave of amplitude X; when you filter it, the signal level magically appears to go up as you filter out some of the harmonics…

Clive Robinson September 18, 2021 8:20 PM

@ SpaceLifeForm,

Objection! Assumes facts not in evidence!

Yes there is a lot “not in evidence” not just in what is being said by others but their thinking process.

Space is tricky!

Something else some are missing, all the time.

Anders September 18, 2021 8:23 PM


While not open source, there’s ongoing research
to reverse engineer the original code. A lot
of work has already been done, and an “unofficial”
kernel with a lot of features is one output of that work.


Then, a LOT of open source code has been ported to


And you can forget the Java, even uninstall it.
It’s only needed for some legacy stuff.

echo September 18, 2021 8:33 PM


In other news the French did not recall their ambassador from London because they regard UK as an irrelevant junior partner of the triumvirate…

I have no idea what is going on. I suspect something along the lines of the French consider the UK a basket case (correct) while at the same time mindful of EU issues and considered saying nothing would be the best option. Or as you say simply irrelevant.

I’m not personally happy with the MAGA attitude coming from the US. They’re very bullish over there and as unilateral and selfish as ever. Australia as we know has had a lurch to the right for some time.

The current UK government is composed of very stupid individuals who are very clever at getting what they want or simply making life hell for everyone else.

The US “pivot to Asia” is a reneging on the post-WWII arrangement. America has done very well out of accessing European markets and finance as well as skills. America has also done very well out of outsourcing debt onto Europe. The whole “pivot to Asia” strategy is all about America wanting to save money on Atlantic defence because it wants to suck the life out of the Pacific side. If Europe is to beef up defence there are a number of factors at play. Europe has to resolve things with Russia and other countries on Europe’s doorstep, i.e. clear up the mess that America left behind from the Cold War and Afghanistan. At the same time Europe has to deal with the full weight of America’s unipolar economy: the full “Hello buddy” big-grin meet-and-greet charm offensive the Americans are going to pull with Asia, and all the dirty tricks and blackmail and corruption and God knows what else they are going to pull to maintain the illusion.

As for the French response that the deal was a betrayal, and their comment that you don’t treat allies like that: the French are correct.

As for the mainly American jokes about the French: it was the French who, with ammunition running out and superior forces bearing down upon them, held the line while British forces evacuated via Dunkirk. To say I am looking daggers in America’s direction at the moment is an understatement.

The worst betrayal is from the UK. The lapdog backing of the Iraq War to appease the right wing press, Brexit, and everything which is following including giving the nod to Australia to break the deal with the French.

Clive Robinson September 18, 2021 9:18 PM

@ GregW,

For any DIYers, keep in mind that Geiger tubes themselves are subject to manufacturing variations that

Ahh, it sounds like you have some practical experience with using such beasts and measuring radioisotope decay.

And have spotted where others clearly do not…

The question is how long before they realise it, if they can without being very explicitly told.

SpaceLifeForm September 18, 2021 9:51 PM

@ GregW

The timestamps are interesting


name.withheld.for.obvious.reasons September 18, 2021 9:56 PM

Hate to be a Debby Downer, but after reading both the Memorandum from the 9th Circuit (Jewel v NSA, 4 pgs) and making a cursory examination of the ruling from the 4th Circuit (Wikimedia v NSA, 68 pgs, @SpaceLifeForm thanks), both of which touch on state secrets, it is clear that a new and fundamental change is in the offing from the federal courts. It is a disturbing trend away from law and a move towards doctrinal interpretation. It is not just a political influence but a rigid dogmatism from the bench that is completely tainting jurisprudence and any sense of fairness. I will provide an analysis of the latest from the 4th Circuit; at least the 4th Circuit judges weighed in with opinions, unlike the 9th, which followed a shadow-docket format. There is another case before the district court concerning state secrets, but at this point I see no future with respect to the courts. They are, as one would say, dead-enders.

I’ve concluded that the courts have shifted, and in doing so put the U.S. on a footing with the early 1930’s countries of Spain, Italy, and Germany. And, now make the U.S. partners with Hungary, Poland, Turkey, Brazil, UK, and North Korea in a project that is anything but democratic. This is a lynchpin move…checkmate.

Clive Robinson September 18, 2021 10:44 PM

@ SpaceLifeForm,

It’s funny that you should mention a certain very nastily behaved quadruped.

Terry Pratchett, having had extensive practical experience of them, observed that people do not realise that under the cute fur and apparent poise they are not at all nice:

“If cats looked like frogs we’d realize what nasty, cruel little bastards they are. Style. That’s what people remember”

Have you ever seen them play with their prey?

Often the target of such behaviour has actually made itself into prey, and the “pounce instinct” in the cat is triggered even though it is not in need of food. You see this in the way a cat tries to jump on moving laser spots; in part it is actually an “attack defence” instinctive response. That is where fight is favoured over flight, deep down in the “lizard brain” all mammals have from their evolution.

You know it’s not only Felis catus domesticus that apparently “plays with its prey”[1]; other mammals do it for differing reasons. Humans however have taken it to a whole different level, often where physical contact is not involved[2].

An unfortunate result of this is that some will think, incorrectly, that there is a hierarchy, and place themselves lower in that hierarchy than others (it’s one reason humans have so much belief in deities and kings). Some, believing they are seen as lower in the hierarchy, will attack those they alone see as higher in it, for various self-esteem/ego reasons. As they are wrong in their beliefs they tend to fail, so they will repeatedly attack and retreat, often when they mistakenly see what they think is weakness in an individual they perceive as higher in the hierarchy. Madness? Well, sort of.

Should such attacks be regarded as some form of flattery? Well… the problem is that the attackers fail to understand that it is their own inabilities that make them place themselves at a lower level in their imagined hierarchy, not others, and frequently not those they perceive as higher in it. But the repeated attacks do reflect badly on the attackers in the eyes of others. Thus in a perverse form of wish fulfilment they do end up being looked down upon… hence they are driven to repeat their mistake, because they do not learn the lesson each time they turn the wheel.

[1] It has been argued that this is not tormenting/torture as humans see it, but actually an innate survival mechanism, whereby the cat judges whether it is safe to kill what it has caught.

[2] Hence the expression “catty behaviour”, often ascribed to two women socially circling to establish dominance or territory. However it’s more visible in the “get in your space” posturing behaviour of men. If you’ve ever seen two cats “puff up” before they scrap, you can see almost identical behaviour in men “bracing up” before they resort to physical violence.

Sut Vachz September 19, 2021 1:18 AM

There is no random. It’s initial conditions all the way down.

https: //

MarkH September 19, 2021 2:08 AM


That’s a fun study, which goes into impressive detail. It might interest you that the paper’s estimate of p = 0.508 for the coin to be caught with its initial face up, corresponds to 0.999815 bits of entropy per toss.

Not perfect, but not bad!

If I can infer that “there is no random” when this simple procedure comes so near to perfect randomness, can I not also infer that “there are no white sheep” because 0.02% of the sheep observed are black?

A more sensible inference might be that bias cannot be reduced to zero. Here’s the good news: for any specified threshold based on meaningful security requirements, it’s possible to reduce bias below that threshold.
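
For anyone who wants to check the entropy figure above, the binary entropy function gives it directly (a couple of lines of Python, nothing specific to the paper):

```python
import math

def binary_entropy(p):
    """Shannon entropy, in bits, of a biased coin with P(heads) = p."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(f"{binary_entropy(0.508):.6f}")  # 0.999815 bits per toss
```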

MarkH September 19, 2021 2:15 AM


I have great respect for feline nature.

I’ve always supposed that if a science fiction “shrinking ray” could scale me to smaller than any of the purring pets I’ve had over the years, they would kill me without hesitation.

It’s not personal … it’s just what they do

FA September 19, 2021 2:32 AM


Consider a sequence of occurrence times for global earthquakes, above some magnitude threshold (to filter out rippling aftershocks and man-made temblors).

Nice analogy, even if I doubt that whatever causes earthquakes is ‘memoryless’ 🙂

Yes, I think an n-bit counter could be used as well. To get the same R (as defined before) the input clock frequency would need to be scaled proportional to the number of states of the counter.

I’ll write up the calculation of the XOR bias when I’m back from holidays…
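
In the meantime, a rough sketch of the n-bit case in Python (my own construction, with the clock frequency scaled to the number of counter states as described; R = 50 and a 3-bit counter are arbitrary choices):

```python
import random

def sample_counter(bits, R, n_events=200_000, seed=7):
    """Sample a free-running n-bit counter at Poisson event times.
    R = counter wrap frequency / mean event rate (M = 1 here), so the
    clock frequency is scaled with the number of counter states."""
    rng = random.Random(seed)
    states = 1 << bits
    f_clk = R * states              # counter wraps around at frequency R
    counts = [0] * states
    t = 0.0
    for _ in range(n_events):
        t += rng.expovariate(1.0)   # memoryless inter-event times
        counts[int(f_clk * t) % states] += 1
    return [c / n_events for c in counts]

for frac in sample_counter(bits=3, R=50):
    print(f"{frac:.4f}")            # each close to 1/8 = 0.125
```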

FA September 19, 2021 2:41 AM


For any DIYers, keep in mind that Geiger tubes themselves are subject to manufacturing variations that -despite a random radiation source – can lead to nonrandom distribution counts

Good point. Anyway a Geiger tube would be limited to low decay event rates, so maybe this is not a useful method in practice.

The object of this exercise was just to find out if nuclear decay could be used as a source of randomness with sufficiently low bias and without having to know the exact decay rate, half-life, etc. And that seems to be the case.

MarkH September 19, 2021 2:45 AM


Of course, quakes are not fully independent: big quakes make aftershocks; a slip at one position on a fault can increase the stress at another location; perhaps tidal forces can cause some tiny entrainment to the positions of moon and sun.

Even so, my starting hypothesis was that bias in earthquake timings (modulo a small interval) would be quite small.

For radioactive decay detections — with a suitably chosen source and carefully designed detector — bias should be even smaller.

MarkH September 19, 2021 2:59 AM


You’ve firmly offered assertions about bias in numbers obtained by timing independent unpredictable events modulo a small time interval, and even suggested (as I understood the message) that people stop posting comments contradicting those assertions.

Having had some time to digest the earthquake timing concept I wrote up, what is your analysis of how biased those numbers are likely to be?

Would you say that the quake timing scheme is fundamentally different from capturing radioactive decays using the same measurement concept?


Disclosure: Though I had never even glanced at earthquake timing data when I wrote up the concept, I’ve run some numbers in the past few hours, and will do some more analysis.

FA September 19, 2021 2:59 AM


That is, take a sine wave and hard limit it to make a square wave of amplitude X; when you filter it the signal level magically appears to go up as you filter out some of the harmonics…

When you filter out the harmonics of a square wave, the RMS level (which represents power or energy) will go down.

Peak level will indeed go up. Which just means it’s a bad indicator of signal power. Or in audio, of loudness.

That’s one reason why Bob Katz’s K-meter has become quite popular in audio engineering: it measures both RMS and peak level, the latter being important because digital formats impose a hard limit.
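
The numbers here are easy to verify; for a unit square wave the Fourier coefficients are standard, so no simulation is needed (plain Python arithmetic):

```python
import math

# A unit-amplitude square wave has peak 1.0 and RMS 1.0, with its power
# spread over the odd harmonics. Keeping only the fundamental leaves a
# sine of amplitude 4/pi, whose RMS is that amplitude divided by sqrt(2).
square_peak, square_rms = 1.0, 1.0
fund_peak = 4.0 / math.pi
fund_rms = fund_peak / math.sqrt(2.0)

print(f"fundamental peak: {fund_peak:.3f}")  # 1.273 > 1.0: peak level rises
print(f"fundamental RMS:  {fund_rms:.3f}")   # 0.900 < 1.0: power drops
```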

FA September 19, 2021 3:04 AM


Even so, my starting hypothesis was that bias in earthquake timings (modulo a small interval) would be quite small.

Of course it is – no criticism intended! Apparently the smiley I put after ‘memoryless’ got lost…

FA September 19, 2021 7:21 AM

@Sut Vachz

I only scanned through the paper rather quickly, but what happens with the coin seems similar to what is seen in knife-throwing: a coupling of linear and rotational momentum, since both originate from the same initial conditions.
So to ensure the knife arrives at the target point-first, you have to adjust the initial angle as a function of distance.

JonKnowsNothing September 19, 2021 8:54 AM

@Clive, Winter, MarkH, SpaceLifeForm, All

re: Fires and Triage

While the fires are burning up thousand-year-old trees, COVID is making things rather miserable for many. There is the one Happy Group that goes out and about, sans masks, and sans jabs and then there is the Other Group, including moi, that stays in, keeps distance and continues breathing. Although there is a good bit of smoke particles in the air.

Our local vaccination rate hit a good milestone: 1,000,000 jabs
The bad news is: that 54% of the county population has no jabs, that’s 540,000 people.

We are 17 days into Triage Orders. This is Tier 1, Tier 2 will be worse.

nine critically ill patients who were unable to get into the
intensive care unit for more than three days

There are no ICU units to be had, and while we have Tier 1 Triage Orders that mandate other hospitals take the local overload, we are stacking folks up in hallways, the ER is an ersatz ICU, and ambulances are portable ICUs parked for 4+ hours waiting to offload their cargo. The by-catch is that people needing ambulances find there are none, and the ambulance companies have been told to triage calls and not pick up those that don’t make the top of the list.

There isn’t much said about transferring people to other regions and hospitals. It is being done but when the hallway becomes an ICU unit and people wait days for a bed, it does not seem that other hospitals are able to take in the overload.

This is the replacement system for Surge Hospitals from the last COVID surge.

re: The Bank of Mom and Dad

A recent report on the costs of hospitalising the unjabbed found

Estimated hospital costs per no-jab COVID patient: $20,000 USD
(some health providers indicated $50,000 USD)

Preventable hospitalizations
June 2021      32,000
July 2021      68,000
August 2021   187,000
Total         287,000

287,000 * $20,000 USD = $5,740,000,000
$5.7 Billion USD for 3 months

The no-tax no-mask no-vax group has traditionally been a no-pay group but it seems that they have decided that $5.7B USD every 3 months is a good deal.

re: Good news on Vaccines and breadcrumbs

Gen2 Vaccines are doing well in testing. Some may be out by Q4 2021 or Q1 2022.
Gen3 Vaccines are in the works, some with an ETA of Q2 2022 or Q3 2022.

It remains to be seen if G2Vax and G3Vax will get the same Fast Track treatment as G1Vax. If they get the fast track, we might see some of them by winter holiday.

There may be $$ involved but afaik not submarines.

There are more supportive therapies also in the works (ETA 2022) and some available now. Although the current list shows some items have a reduction in effectiveness, there are still a couple that are good, provided you get them early in the infection cycle.

If you do not want to do the M:M tables on mutations-vaccines-therapies, the US CDC has already done them. These are publicly available on their site.


  • note: I have reduced the number of sources as there may be some correlation between links and no-posts, or perhaps there may be take-down notices from the link owners due to paywalls etc.

FA September 19, 2021 1:15 PM


Let me know one way or the other, I’ll go with the majority vote.

Oh dear…

Now you are trying to use our choices from your 1..4 menu to justify your future actions.

I’m not going to play that game. The decision to reveal or not reveal the world-shocking truth about nuclear TRBG should be yours and only yours.

If your time in the military (hinted at many times) involved showing leadership, you should know that taking responsibility for your own decisions and not hiding behind others is an essential part of it.

MarkH September 19, 2021 1:17 PM


Option 3 suits me fine. It seems to me that you have indeed told us, both last November and in recent days.

To the extent that I followed the reasoning you offered, I understood that you expect bias in the “roulette wheel” because of effects which occur when such sampling is applied to periodic, chaotic or other phenomena in which the timing of events is mainly deterministic, with strong causal linkage between each event and its successor.

My conclusion is that such effects don’t apply to independent events occurring within a population. Roulette-wheel sampling must be unbiased for a particle detector, provided that the detector has controlled timing skew, and the wheel’s spin period is much briefer than the mean time between detector “ticks.”

MarkH September 19, 2021 1:32 PM


To suppose that roulette-wheel sampling of decay detections is biased — provided the detector doesn’t cause significant timing skew! — is to suppose that the unstable nuclei somehow “prefer” to disintegrate at one wheel rotation angle over another.

If that is so, how can it come about?

You wrote that “significant bias goes straight through [latched debias circuits].”

In your comments on this topic, I’ve read of two kinds of bias in isotopic sources: that there’s an average period, and that the period gradually lengthens. Those are both physical facts … which cause horrible bias if the generator measures the frequency or period of decays.

When the wheel is spinning an average of (for example) 1000 times between detections, its strobed position angle is not a meaningful measure of the time interval, but rather of the wholly unpredictable exact moment at which the next independent event occurred.

If decay were periodic, yes there would be bias. It just ain’t.
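The claim is easy to check numerically. Below is a minimal simulation (a sketch, not anyone’s actual circuit): exponentially distributed gaps stand in for decay detections, and a free-running counter is read modulo 2 at each event. All parameters are illustrative.

```python
import random

random.seed(0)

def roulette_bits(n_events, mean_gap, period=2):
    """Sample a free-running 1-tick-per-unit counter, modulo `period`,
    at each event of a memoryless (Poisson) process."""
    t, out = 0.0, []
    for _ in range(n_events):
        t += random.expovariate(1.0 / mean_gap)  # exponential inter-event gaps
        out.append(int(t) % period)              # read the "wheel angle"
    return out

bits = roulette_bits(200_000, mean_gap=1000.0)   # wheel spins ~500x per event
print(sum(bits) / len(bits))                     # very close to 0.5
```

With the wheel turning hundreds of times per mean gap, the 0/1 split lands within sampling noise of 50/50, consistent with the memoryless argument above.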

MarkH September 19, 2021 1:38 PM


Most of us are difficult in our individual ways. Sometimes the conversation seems like trying to push a donkey where it don’t want to go (pulling the donkey in the opposite direction has been known to work!)

Regardless of feelings of frustration, personalizing the debate is wholly non-constructive.

I do my best to address my arguments to the facts, concepts and reasoning, and NOT to the person.

In this case, I’m confident that our beloved Mr Robinson is mistaken. I want to get the truth firmly on record.

MarkH September 19, 2021 6:00 PM

re: Fires and Triage

Apologies, if this is redundant: I saw a headline a few weeks ago (didn’t read the story) to the effect that Covid cases appear (based on really preliminary data) to be greatly aggravated by inhalation of smoke from the forest fires.

[For those who haven’t lived through it, the smoke can be palpable at great distances from the fires.]

This relationship is intuitively appealing: I would expect lung tissue which is inflamed/clogged by inhaled particulates to have worse capacity loss (compared to typical lung health) when Covid takes root there.

Sufferers from the forest fire smoke have a “head start” on the road toward hospital admission, intubation, and death should they contract an acute Covid infection.

The compounding losses are heavy, irreparable, and could have been largely prevented.

MarkH September 19, 2021 6:32 PM

If I Could Predict the Exact Time of Earthquakes, I’d be Richer than Croesus

If anyone is interested, here are data reductions from global earthquake data for all recorded events of magnitude ≥ 4.5 from the start of 1990 (UTC) until a magnitude 5 quake late on 18 September (2 days ago), centered “88 km SSE of Panguna, Papua New Guinea.”

The dataset comprises 191,871 earthquakes. The “debias” mechanism is to take the official time in seconds, modulo one minute.

Quakes do prefer the first half of a clock minute (departure from mean is 0.152 percentage points), and they like even-numbered seconds more than odd (departure from mean by 0.510 points).

As a crude measure of correlation between successive numbers, I averaged the “clockwise distance” (how far a clock’s second hand would need to advance from one number to its successor). The ideal value is 29.5 seconds (mid-point of the possible values 0..59); the actual mean is 29.419 seconds.

The entropy computed from the set of 191,871 modular timings is 5.906697 bits (per reading, to be clear). A number in the range 0..59 conveys 5.906891 bits, so for these modular quake timings the entropy per bit is 0.999967.
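For anyone wanting to reproduce the arithmetic, the entropy figure above is the plain Shannon formula applied to the 60-bin histogram. A sketch using synthetic uniform draws in place of the USGS data:

```python
import math
import random
from collections import Counter

def shannon_entropy_bits(values):
    """H = -sum p * log2(p), in bits per reading."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

# Synthetic stand-in for the 191,871 mod-60 quake timings (uniform draws,
# NOT the USGS data):
random.seed(1)
sample = [random.randrange(60) for _ in range(191_871)]
h = shannon_entropy_bits(sample)
print(h, math.log2(60))   # h falls just below the log2(60) ~= 5.9069 maximum
```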

Clive Robinson September 19, 2021 6:48 PM

@ FA,

The decision to reveal or not reveal the world-shocking truth about nuclear TRBG should be yours and only yours.

Nobody other than you has said “world-shocking”; in fact I said “bl@@dy obvious”, which is a world away. As for your stating that it is my decision and only my decision… that is actually you playing games, not me.

If your time in the military (hinted at many times) involved showing leadership, you should know that taking responsibility for your own decisions and not hiding behind others is an essential part of it.

I’ve already stated my preference, which is the one any teacher will tell you is best: “work it out for yourself from basic knowledge and information”. I have, after all, given you all the information you should need; you just need to join the dots.

The fact that I also offered an alternative is not me playing games, just giving you a choice in which path is taken.

By the way, if you have spent any time in the UK or Commonwealth forces, you should know that “drills” and the like only hone the basics to instinct; they in no way teach an individual how to think, and thus to plan appropriately to lead.

There is some myth that leaders have to be charismatic, charming orators, etc. Fine if you are a no-good politician, but not for leading groups of people in most other endeavours.

An essential part of being a leader in a military organisation is to recognise a couple of things,

1, Every one is fully expendable.
2, No plan survives contact with the enemy.

They are both part of the “Shit happens” ethos.

So to cover 1, you train those beneath you to replace you, which is more about teaching and learning than anything else, and you do it down the ranks. That means a private is trained to do, or at least understand/appreciate, the duties one to three ranks above: patrol and squad duties up to and including those of a sergeant. Likewise sergeants train to do the duties of lieutenants/captains/OCs, and lieutenants train to be majors/OCs and also learn staff duties. This way delegation is not just possible but effective, and if a leader is wounded or killed, somebody else simply steps into the position effectively.

However, to cover 2, each person needs to be taught both tactics and strategy, from basic patrol activities through combat skills, up to and including being able to take not just immediate actions but to plan various activities flexibly and effectively.

You might have heard of John Boyd’s OODA loop from the US Air Force in the 1960s/70s? Well, in the UK and Commonwealth armed forces much of it was “standard” in training going back before the Korean War in the 1950s. It was based on lessons learned from WWII onwards by the likes of the Long Range Desert Group and various jungle-fighting engagements in places like Malaysia, which also gave rise to the “hearts and minds” operations etc.

In essence it teaches that the speed, agility, and independence of action of well-trained small groups such as “4 man bricks/patrols”[1] can easily and successfully take combat to far greater power and numbers (like entire battalions/regiments[2]). It is part of what makes asymmetric warfare effective, especially when specialists are involved[3].

[1] The name “patrol” is a bit of a misnomer, and “detachment” is sometimes used; it rather depends on the skill sets required for a particular function, whether they are tactical or support, and whether they stand alone or are part of a larger grouping[2].

[2] Military personnel grouping naming conventions can get confusing (it is why NATO uses the ‘dot/line/X’ map marking to indicate group strengths). So what one arm –say the infantry– calls a basic group, a “battalion”, others –such as engineers, signals and other specialised units– call a “regiment”.

Infantry train in battle skills, which are not exactly marketable in the civilian world. Engineers, signals and some other specialist groups, whilst they learn battle skills, are also raised to other “skill sets”, often called trades, which attract increased daily pay as well as being marketable skills with equivalence to standard qualifications (HNC, HND, degree).

[3] Some service personnel, especially specialists, have more than one trade, which can cause problems from time to time. One such was “payday”: I always elected to have payment made directly into “the bank”, as it avoided a lot of embarrassment at signing for more money than the pay sergeant/captain etc. earned, especially when they read it out for everyone on pay parade to hear. It was particularly problematic when you were away from your unit temporarily with other units, or training abroad, where they paid everyone in local currency (you could also get issues with the colour of your beret or stable belt; worse, your haircut can give you away).

Certain specialist trades need a level of training and experience modern militaries cannot provide. One such is specialist medics; in the UK there is an interesting “revolving door” effect. Some Regulars in specialist units get training in teaching hospitals and become fully qualified in nursing for A&E along with minor surgery, whilst some medical professionals become Reservists/Territorials, often spending tours of duty in active war zones. It’s a problem because government ministers make false assumptions about the actual numbers of medically qualified personnel they have available. For instance, at the beginning of the pandemic one idea was “Nightingale Field Hospitals”, where military medics would take load off the National Health Service (NHS). Unfortunately the plans “double counted”: a member of the NHS who was also a reservist got counted once for the NHS and once as a reservist… I think most people appreciate that, even with the best will in the world, a person cannot be in two places at the same time.

Clive Robinson September 19, 2021 6:55 PM

@ MarkH,

My conclusion is that such effects don’t apply to independent events occurring within a population.

That is because you are “over thinking it” and thinking about the wrong type of bias.

I suggested sketching things out as a diagram on graph paper, give it a try.

MarkH September 19, 2021 7:07 PM

Quakes, continued:

To express the entropy density in another form, the maximum entropy which could have been conveyed by the full set of numbers in the range 0..59 was 1,133,361 bits.

The measured entropy of the sequence is 1,133,324 bits, so 37 bits of entropy were lost along the way …

[1] I downloaded the quake records from a website of the US Geological Survey, an agency of the U.S. federal government.

[2] I know very little geology, but I guesstimated that a 4.5 magnitude cut-off would practically eliminate mutually dependent temblors.

[3] I chose 1990 rather arbitrarily as the starting point. I’ve seen nothing to suggest that results would vary much according to the range chosen; I again guesstimated, that by 1990 the seismometric systems were comparable in performance to the present standard.

[4] If anyone will kindly offer a proper formula for the autocorrelation between successive readings, I’ll be pleased to run the numbers.

[5] These data are an example of “roulette wheel” sampling of independent events in a population.

Anonymous September 19, 2021 7:19 PM

@FA • September 18, 2021 1:15 AM

Re. nuclear decay TRBG

Detectors for nuclear decay events do have memory (short term).

The classic Geiger–Müller tube has a dead time during which it cannot detect an event. This can be measured by moving a source incrementally toward or away from the detector; geometry and distance absorbance can be compensated for, and the dead time measured, while measuring radiation.

I need to scratch my head on radiation flipping a bit in logic at a random time. Two pulses separated by very short or very long time gaps are possible, and the logic must latch cleanly (but not latch up). The electronics need to be shielded from radiation (easy for alpha particles).

Multiple detectors measuring the same source will not trigger in ways that correlate if they cannot see or interfere with each other.


Clive Robinson September 19, 2021 7:30 PM

@ Bernie,

Any comments from folks more knowledgeable than I?

I had a quick look for the “security evaluation report” the other day, but it did not appear to be available.

I have designed and described on this blog ways you can do the same thing but more securely…

From the ARS article it sounds like the words are just used instead of individual letters or numbers in a conventional cipher.

If that is true, a simple frequency analysis of the words will show it is not real “plaintext”, just obfuscated “ciphertext”, and thus it will be easy to spot automatically.

For such a system not to “paint a target on your back”, the actual message sent should be able to get past a human reader. That is not just “words” but coherent “sentences” in coherent “paragraphs” with a coherent “context” from start to finish.

The way to do this is by “analysing for redundancy” and use the redundancy to send “covert bits”.

For instance, the opening salutation can be “hi” or “hello” or similar; as they are fully interchangeable they give you 1 bit of covert information. If you have four such words, such as “hi, hello, wotcher, greetings”, then you have 2 bits.

Most social communication is very redundant, for instance,

“How about we meet up for a drink”

You can replace “how about” with “can”, “should”, “how’s about”, etc. Likewise “drink” with “meal, snack, cocktail, tea, coffee, cuppa, lunch”, giving you a 2+3 = 5-bit sentence.

The trick is not to use the same word for the same plaintext bit pattern. That is, you encrypt the plaintext bit pattern into a ciphertext pattern with a different key for each message or sentence.

Whilst the resulting message will pass a human censor, the covert channel bandwidth is quite small.
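The scheme can be sketched in a few lines of code. The slot vocabularies below are the examples from the comment; the fixed index-to-word mapping is for illustration only, since, as noted above, the mapping should be re-keyed per message or sentence.

```python
def bits_per_slot(options):
    # assumes the option count for each slot is a power of two
    return len(options).bit_length() - 1

def encode(bitstring, slots):
    """Consume covert bits left to right, picking one word per slot."""
    words, pos = [], 0
    for options in slots:
        k = bits_per_slot(options)
        words.append(options[int(bitstring[pos:pos + k], 2)])
        pos += k
    return words

def decode(words, slots):
    """Recover the covert bits from the word choices."""
    return "".join(
        format(options.index(w), f"0{bits_per_slot(options)}b")
        for w, options in zip(words, slots))

slots = [
    ["hi", "hello", "wotcher", "greetings"],                 # 2 covert bits
    ["drink", "meal", "snack", "cocktail",
     "tea", "coffee", "cuppa", "lunch"],                     # 3 covert bits
]
msg = encode("10110", slots)
print(msg)                           # ['wotcher', 'cuppa']
assert decode(msg, slots) == "10110"
```

To implement the re-keying, each message would permute the word order within each slot under a shared key, so the same plaintext bits never map to the same words twice.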

MarkH September 19, 2021 7:50 PM


For the record, I haven’t taken a position on the use of GM tubes as detectors for this application.

I suppose that several types of decay particle detector have some significant recovery time after each detection signal. This is no problem at all, when the TRNG uses the One True Sampling Method (what Clive calls the roulette wheel). Some decay events will be missed, which has the effect of reducing output bit rate, but the quality of the generated bits is not inherently affected.

A caveat is that if the detector can introduce appreciable detection-edge time skew before it returns to its equilibrium condition, then some of the numbers could be biased.

The designer must comprehensively understand the detector’s “personality.” If bad readings are a possibility during the recovery interval, the TRNG can simply disable the detection signal for a pre-determined interval after each detection.
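As a sanity check of the dead-time argument, one can thin a simulated Poisson stream (dropping any event within a fixed recovery interval of the previous detection, assuming no timing skew) and confirm that the bit balance survives. Parameters are illustrative.

```python
import random

random.seed(2)

def detected_times(n_raw, mean_gap, dead_time):
    """Poisson event times; any event falling inside the dead time after
    a registered detection is missed entirely."""
    t, last, seen = 0.0, float("-inf"), []
    for _ in range(n_raw):
        t += random.expovariate(1.0 / mean_gap)
        if t - last >= dead_time:
            seen.append(t)
            last = t
    return seen

seen = detected_times(200_000, mean_gap=10.0, dead_time=5.0)
bits = [int(t) % 2 for t in seen]   # roulette-wheel read of surviving events
print(len(seen) / 200_000, sum(bits) / len(bits))  # rate drops, balance holds
```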

MarkH September 19, 2021 8:06 PM

More on earthquake-derived entropy:

min-entropy is a rather obscure (but very important) concept for the generation of random numbers for cryptography.

One way to think of min-entropy is that it is the most conservative measurement of uncertainty in information.

As I’ve explained on another thread, for secret random numbers (like symmetric cipher keys), you don’t need to worry about min-entropy; Shannon entropy (the original) is sufficient to know.

However, for random numbers which are not kept secret — as happens in Diffie-Hellman key exchange, for example — highest security requires high min-entropy.

min-entropy is much trickier to measure than Shannon entropy. For a quick go, I applied the simplest estimation procedure endorsed by NIST for the assessment of TRNGs, and found the quake timing numbers to have 0.9797 bits of min-entropy per bit of data, compared to the NIST requirement of 0.975.

Of course, earthquake timings should never be used to generate high-security secrets! However, “roulette wheel” samplings thereof apparently meet the NIST entropy standard for hardware RNGs whose outputs can be used for high-security cryptographic applications without post-conditioning.
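For anyone who wants to try this, here is the bare plug-in version of the most-common-value idea, run on synthetic uniform draws rather than the quake data; note that the full NIST SP 800-90B estimator adds a confidence-interval correction on top of this.

```python
import math
import random
from collections import Counter

def min_entropy_per_symbol(values):
    """Plug-in most-common-value estimate: H_min = -log2(max_x p(x))."""
    p_max = max(Counter(values).values()) / len(values)
    return -math.log2(p_max)

# Synthetic uniform draws standing in for the mod-60 quake timings:
random.seed(3)
sample = [random.randrange(60) for _ in range(191_871)]
h_min = min_entropy_per_symbol(sample)
print(h_min / math.log2(60))   # min-entropy per bit; close to 1 for uniform data
```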

MrC September 20, 2021 12:41 AM


  1. There’s no good explanation on the product page or news stories of how key management is handled (and I haven’t the time to dig through the source code for it), but it sounds like it relies on a key server run by the app’s authors. If this wasn’t a honeypot to begin with, it will surely be suborned and turned into one promptly.
  2. Hide-stuff-in-a-picture steganography doesn’t work. In fact, there isn’t presently any means of steganography with a practical bandwidth that isn’t detectable through statistical methods.
  3. A lot of things this app does are downright suicidal in the context of a regime that is willing to imprison/execute you upon mere suspicion of political dissent. Simply being caught with the app on your phone is probably proof enough to punish you. Ditto for sending piles of random words over an insecure channel. Ditto for detectable hide-it-in-the-picture steganography. Ditto for a “purge password” that purges the app’s stored data, but then opens the app’s home screen (rather than deleting the app itself too). The list goes on and on. It feels like this was written by someone from a Western country, where the authorities at least pretend punishment must be preceded by proof of a specific criminal act (rather than suspicion of unorthodox beliefs), and it simply didn’t occur to them that some governments don’t even pretend to play by those rules.

FA September 20, 2021 2:59 AM


Nobody other than you has said “world-shocking” in fact I said “bl@@dy obvious”

You also said people were utterly surprised when you explained it to them. ‘World-shocking’ is just an ironic reference to that.

There is now a majority of one for option 3, and you promised to follow the majority. So go ahead and explain.

MarkH September 20, 2021 3:01 AM

@FA, Clive, Freezing_in_Brazil:

Clive has written that for a radioactive decay TRNG, the inevitable gradual decline in source activity will bias the generated numbers, even with modular-time (roulette-wheel) sampling.

To construct an analog to the effect of source decay — which is simply a reduction of the population in which independent events unpredictably occur — I built a “strikeout list” for the global earthquake data set, using numbers from

These randomly chosen deletions reduced the number of events by 14.5%. This change in the mean event rate would be equivalent to a radiation source ageing about 23% of its half-life.

For comparison, the first column is the whole dataset; the second is the reduced dataset to simulate source activity reduction:

Events            191871     164059
Mean Delta         29.42      29.46
1st/2nd Half        0.15       0.33
Odd/Even Sec        0.51       0.39
entropy/bit        0.999967   0.999963
min-entropy/bit    0.9797     0.9804
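The same comparison can be made synthetically by thinning a Poisson stream with an exponentially decaying acceptance probability. In this sketch (all parameters illustrative) the source loses roughly 10% of its activity over the run, and the sampled bit balance is unaffected:

```python
import random

random.seed(4)

def decaying_event_times(n, rate0, half_life):
    """Poisson process whose rate halves every `half_life`
    (thinning a constant-rate process)."""
    t, out = 0.0, []
    while len(out) < n:
        t += random.expovariate(rate0)
        if random.random() < 2 ** (-t / half_life):   # accept at decayed rate
            out.append(t)
    return out

times = decaying_event_times(150_000, rate0=1.0, half_life=1_000_000.0)
bits = [int(t * 1000) % 2 for t in times]   # counter 1000x faster than events
print(sum(bits) / len(bits))                # still ~0.5 as the source ages
```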

FA September 20, 2021 3:15 AM


Clive has written that for a radioactive decay TRNG, the inevitable gradual decline in source activity will bias the generated numbers, even with modular-time (roulette-wheel) sampling.

That would be the case if we took the time between events modulo some small interval.

But that is not what we do: the counter and the phase of the clock driving it are NOT reset when an event is detected. So what we get is the absolute time of each event modulo some small interval.

But even if we took the time between events, the effect would be very small; it depends again on the R defined earlier.
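The distinction is easy to demonstrate numerically: at R = 5, reading a counter that is reset on each event (the inter-event time modulo 2) shows a clear bias, while reading a free-running counter does not. A sketch with illustrative parameters:

```python
import random

random.seed(5)

mean_gap, period = 5.0, 2    # counter ticks once per time unit; R = 5

t, abs_bits, gap_bits = 0.0, [], []
for _ in range(300_000):
    gap = random.expovariate(1.0 / mean_gap)
    t += gap
    abs_bits.append(int(t) % period)    # free-running counter (no reset)
    gap_bits.append(int(gap) % period)  # counter reset at every event

print(sum(abs_bits) / len(abs_bits))    # ~0.500
print(sum(gap_bits) / len(gap_bits))    # noticeably below 0.5
```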

MarkH September 20, 2021 3:20 AM

Thank you, patient readers, for your attention.

The global earthquake data reductions I’ve presented are an instance of modular-time (roulette wheel) sampling of independent unpredictable events in a population.

The number sequence generated by this sampling has very low bias, and even seems to be of crypto quality.

Reducing the effective population (i.e., weakening the source) reduces the number generation rate, but adds no bias.

The predictions of bias problems were not confirmed by these observations.

Why should these results not “read across” to radioactive decay timings?


I can offer one potential answer: the global seismometer network is well established, and seems capable of measuring the starting times of quakes with considerable accuracy in relation to their modest frequency of occurrence.

Decay event detectors tend to be fussy pieces of hardware, and there are very many ways for them to distort event timing.

The engineering work must be done with meticulous care.

FA September 20, 2021 4:47 AM


the global seismometer network is well established, and seems capable of measuring the starting times of quakes with considerable accuracy

Enough to correlate recordings and find out the exact location of an event.
Microsecond accuracy isn’t so difficult to achieve today.

I was at Gran Sasso physics lab (Italy) when they were receiving those ‘faster than light’ neutrinos from CERN. Imagine what kind of precision timing that takes…

Clive Robinson September 20, 2021 8:48 AM

@ FA, MarkH, Freezing_in_Brazil, interested others,

That would be the case if we took the time between events modulo some small interval.

But that is not what we do: the counter and the phase of the clock driving it are NOT reset when an event is detected.

You so very nearly got it but failed at the last moment with an incorrect assumption…

You need to understand the difference between a period/time counter and a frequency counter.

A frequency counter “gates” a stream of high-frequency external pulses into a counter, with the gate signal coming from an internal low-frequency, very precise timebase. A period/time counter works the other way around: a very precise high-frequency clock is gated into the counter by a low-frequency external signal.

You are thinking “frequency counter” not “period/time counter” and misunderstanding what actually drives the Q output from the latch in the “roulette wheel” circuit (which is not what @MarkH claims it is with “This is no problem at all, when the TRNG uses the One True Sampling Method”).

@MarkH has said “high frequency oscillator”, which implies that it goes to the D-input of the latch, which in turn means the output of the particle detector drives the CLK-input of the latch. That is what most engineers do. But… it is the edge at the CLK-input that transfers whatever is at the D-input to the Q-output.

So it leaks the time between particle detections. If you then use the same CLK signal to drive a two-latch shift register, with the two Q-outputs driving the XOR gate of a von Neumann debias circuit, then whilst that de-biases the “data”, the data still comes out clocked by the particle detector output. Thus it is still time-biased by the gap between particle detections, and frequently this time bias is easily visible at the TRNG output.

Stick that CLK signal through a low-pass filter and you recover the exponential decay rate of the source very accurately…
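A crude numerical illustration of this point (counting detections per fixed window is itself a low-pass filter; this is a simulation, not the circuit under discussion): the source’s decay curve can be read straight off the detection times, ignoring the data bits entirely.

```python
import random

random.seed(6)

half_life = 50_000.0
t, times = 0.0, []
while t < 2 * half_life:                      # run for two half-lives
    t += random.expovariate(1.0)
    if random.random() < 2 ** (-t / half_life):
        times.append(t)                       # detection times only

window = 5_000.0                              # crude low-pass: events/window
n_win = int(2 * half_life / window)
counts = [0] * n_win
for ev in times:
    i = int(ev / window)
    if i < n_win:
        counts[i] += 1

print(counts[-1] / counts[0])   # activity at end vs start: the decay is exposed
```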

The hard design problem is not further leaking this time between particle detections.

Most RNG designers either miss this, or somebody tells them not to remove the time bias. The usual trick of clocking bits from the XOR gate until you have 8–32 random data bits does not hide the time between detector outputs, nor does then hashing the new 8 to 32 bits with the previous 480–504 output bits. In fact it makes the time bias even more obvious.

Worse, when you clock all those latches in the counter, debias circuit, and shift-register accumulator, it sends quite a current spike onto the PSU lines and generates a recognisable spike in the EM output of the TRNG, as a function of charge moving in the circuit.

For a TRNG to be of any use in high-security applications, you must solve this time-leakage problem, as it leaves you open to some “Passive TEMPEST Attacks” and some rather interesting “Active EmSec Attacks”.

Oh, and when you read the publicly available information on TRNGs, you get all the blurb about “data” and the likes of the “Die Hard(er)” statistical tests, but you will have to search very hard to find actually useful information on TEMPEST/EmSec attacks; the best you usually find is “EMC Guidance”.

There have only been half a dozen academic papers on Active EmSec Attacks, one of which, funnily enough, was against an IBM 32-bit TRNG with all sorts of protection. However, hit it with an EM CW signal and the entropy dropped from 32 bits down to less than 7 bits. Oops: that takes search attacks from 4 billion down to a little over a hundred…

What you won’t yet read academic papers on is using EM signals modulated with crafted waveforms, not only to get synchronizing information out[1] but then to use it to affect both “logic” and “software” by careful “Fault Injection Attacks”.

It is something I’ve been doing since the 1980s, when I independently discovered it (as it turns out, so did one or two other engineers).

Anyway, you now have the information to see that it is not just “data debiasing” you need to worry about, but “time debiasing” as well. Oh, and the fact that DPA-type attacks can be repurposed from “smart card attacks” to TRNGs and other “root of trust” systems like HSMs, along with a whole class of interesting attacks you had probably never considered.

[1] Look up the history of the “Great Seal Bug” or “Thing”, the best technical description of which for many years was in Peter Wright’s early-1980s book “Spycatcher”: he and Tony Sale –later the saviour of Bletchley Park– worked out how it functioned. The principles are still used in all those “illuminator” (or incorrectly named RADAR) bugs.

Mowmowfi September 20, 2021 10:05 AM

@clive all
Would having two sources, one whose pulse signals a 1 bit and the other a 0 bit, remove the time part?

FA September 20, 2021 11:19 AM


So it leaks the time between particle detections.

If you look at the Q output as a signal, a function of time, yes.

But why would you want to do that? Just store the bit in memory, where it will be retrieved as necessary, probably as the input of a crypto hash or sponge algorithm, which will also take care of any remaining small bias.

Nothing in the way that bit will be used depends on the time when it was generated. It’s just one bit of stored data, not a physical signal.

As far as I remember we were talking about random bit generators, not random signal generators.

And yes, the available rate will slowly decrease. Why should that be a problem, as long as it is enough for the envisaged application? Just take that into account when specifying the system.

FA September 20, 2021 11:37 AM


As to TEMPEST attacks, all secure system designs have to consider those. But that is completely orthogonal to the question of whether random events with a slowly changing Poisson distribution, as provided by nuclear decay, can be used to obtain unbiased random bits. Which was what was being discussed.

And yes, I know about the Great Seal bug.

MarkH September 20, 2021 11:56 AM


While TEMPEST and active EM attacks are of absorbing interest — especially if you’re a major national-security target operating in a hostile country — I’ve understood you to repeatedly say that the bits obtained by sampling a free-running counter (modulo a suitably small value) will be biased.

It’s that question of data bias, which I’m presently seeking to put to rest. Can we focus on that, before moving on to other considerations?

My thesis is that subject to the two conditions I’ve repeatedly stated, the bits are sufficiently unbiased for cryptographic use.

I saw one acute conceptual mis-match in the reference to “the XOR gate of the von Neumann debias circuit.” I’m not proposing any such thing; the bits latched from the counter are already free from bias, and no further processing of any kind is required. My apologies, if anything I wrote implied post-processing.

I suggest that earthquake timings are of the same essential character as particle decay timings. The extremely low-bias data I obtained from the USGS earthquake data were taken by extracting the whole-seconds field of the UTC date & time, with no post-processing whatsoever.

In a carefully designed TRNG using the same method for data extraction (provided the time modulus is much smaller than average decay interval, and the detector has controlled time skew), I expect that (a) the data bias will be at least as low as my earthquake example, and (b) the gradual weakening of the source will not increase data bias.

There is no known cause for nuclei (or earthquakes) to “prefer” one high-frequency low-bits counter value over another.

What say you? Are the outputs biased, or not?

SpaceLifeForm September 20, 2021 2:58 PM

Using Physics to Measure Physics

That is the best one can do.

Your Measurement is really an Observation.

Did your Observation disrupt the Quantum State?

SpaceLifeForm September 20, 2021 3:19 PM

Space is tricky

Fluctuations in measured radioactive decay rates inside a modified Faraday cage: Correlations with space weather


In the present report, we present an in-depth analysis of our measurement with regard to possible correlations with space weather, i.e. the geomagnetic activity (GMA) and cosmic-ray activity (CRA). Our analysis revealed that the decay and capacitance time-series are statistically significantly correlated with GMA and CRA when specific conditions are met. The conditions are explained in detail and an outlook is given on how to further investigate this important finding. Our discovery is relevant for all researchers investigating radioactive decay measurements since they point out that the space weather condition during the measurement is relevant for partially explaining the observed variability.

MarkH September 20, 2021 4:09 PM


Gott im Himmel !!!

I’m going to respond to two SpaceLifeForms: first the one who posted that link as a joke, and then his anti-particle who posted it seriously.

Form 1: Wow, that was a hoot! I’m tingling with anticipation of their “explanatory conjecture of these results (which will be tried in a next work)”.

Form 2: Because I like you, and I want to contribute to the education of all readers, I invite you to try an exercise: skim through the paper for ten minutes, and write a list of the red flags you find.


Perhaps the question of which SpaceLifeForm wrote the comment above can be modeled by a wave function, which will collapse when the relevant observation is made.

SpaceLifeForm September 20, 2021 4:23 PM

Space is tricky

Your Radioactive Decay Detector TRNG device may have Cosmic Induced Bias based upon your location on planet and time-of-day.

Is Cosmic Induced Bias random?

During day, it may be influenced by GMA more than CRA. At night, it could flip.

The Sun influences via GMA how much CRA reaches your device.

The Sun deflects CRA when active, as it ramps up through its cycle, as it is doing now.

Currently, on

Cosmic Rays Solar Cycle 25 is beginning, and this is reflected in the number of cosmic rays entering Earth’s atmosphere. Neutron counts from the University of Oulu’s Sodankyla Geophysical Observatory show that cosmic rays reaching Earth are slowly declining–a result of the yin-yang relationship between the solar cycle and cosmic rays.

MarkH September 20, 2021 4:52 PM


Distinguishing pseudoscience is tricky, without sufficient scientific education. Sorry, that paper does nothing to illuminate the workings of the natural world.

If one supposes for a moment that the claims of its authors might be true — despite their complete failure to either find confirming evidence, or to offer any theoretical basis however flimsy — that still would not affect a properly designed decay-based TRNG.

That is the precise import of my above comment, timestamped 3:01 AM. It compares substantially different event rates … the output entropy is unaffected.

Of course, earthquakes are not nuclear decays! But both kinds of phenomena are independent* unpredictable events occurring within a population. The same principles apply to extracting randomness from their timings.


*Here “independent” means that there is no causal linkage between one event and its successor; A neither determines the timing of B, nor is their relationship in time determined by some third cause C.

Earthquakes actually do have causal linkages, but in the torrent of global seismicity, and with aftershocks filtered out, the mutual dependence has negligible effect, as shown by the data I’ve presented.

SpaceLifeForm September 20, 2021 5:18 PM

Supply Chain attacks

About 24 hours ago, it was allegedly not ransomware.


Clive Robinson September 20, 2021 6:12 PM

@ FA,

But why would you want to do that ?

When you’ve been around security engineering long enough, that question will come to haunt you.

It’s all to do with,

Security -v- Efficiency

As I’ve explained many times before[0].

You have to understand why the question is actually used by both sides.

You and @MarkH have actually demonstrated this, and it is only now I’ve explained the issue of how the output of a TRNG is biased by the half life decay curve that you say,

Just store the bit in memory where it will be retrieved as necessary, probably to be the input of a crypto hash

I’ve already explained that most go straight to the hash with “magic pixie dust thinking”, and they don’t store-n-forward “data” in a way that sufficiently breaks the “time” based side channel. As I’ve mentioned, our host @Bruce Schneier with Niels Ferguson designed one of the few systems to partially eliminate this problem, by using an “entropy pool”. However it can and does fail as a store-n-forward “time” break, which is when you draw out too much from the pool in any given time. It is a real, serious security concern in embedded systems, something I’ve previously discussed on this blog with regard to discovering “primes” used in PubKey certs.

I’ve also discussed the issue of “System Transparency” and time based covert channels quite a number of times. The classic example, and the first academic paper on it, was from Matt Blaze et al on “JitterBugs”,

Which showed how a keyboard plugged into an ordinary PC could, by using a time based covert channel, leak user key press data such as passwords right through the PC, appearing as time “jitter” on network packets.

Put overly simply, the more “efficient” a system is, the more “transparent” it is, so the greater the bandwidth of any time based side / covert channel. One TEMPEST design rule is “Clock the inputs and clock the outputs”, another is “Clock from in to out”; I’ve mentioned them quite a few times in the past. If you understand the problem you can see how they reduce not just “transparency” but the “bandwidth” of potential “time based” side/covert channels.

Such “time” bias in outputs is a very real, fairly easy to implement covert channel in nearly every computer based system in the world… Because the designers of such systems put “efficiency” before “security” just about every time. This includes many of the chip designers that implement ring oscillator based on-chip supposed TRNGs.


I’ve understood you to repeatedly say that the bits obtained by sampling a free-running counter (modulo a suitably small value) will be biased.

I’ve said the output of a TRNG will be affected by this and so biased, yes, but in a broader sense than you are assuming.

You know very well that I frequently talk of side channels that are “time based”. By simple inductive reasoning if information is leaking via any side channel then there must be some form of not just redundancy, but bias in that redundancy for the information to be carried out of the system.

But yes, time differences can affect the “data”, not just the “time”, in the output of any system which mixes two or more entropy sources.

It’s a question of “which comes first”, random source A or random source B. Whilst there appear to be only two possibilities, you have to remember transitions, so there are actually three states each random source can be in,

1, High
2, Transitioning
3, Low

If we ignore “meta-stability” for the moment, which most logic designers do, you will get four potential results,

1, A before B
2, A after B
3, B before A
4, B after A

They are different when viewed from the perspective of the sources, even though 1 & 4 have the same effect when viewed from the output, likewise 2 & 3.

But you cannot ignore “meta-stability”, as you will find out if you actually “deep dive” into the way the “rising clock edge” works in latches.

Ignoring “soft lock up”, meta-stability appears as an extra pseudo random input, in that the actual transitioning edges on both data and clock have apparent indeterminacy. In effect, around the transition of either random source you appear to have a period of uncertainty.

However, if you add a bias signal to either input you can make that uncertainty really quite biased in your favour. It actually requires a very small bias to effect a significant change[1]. Such bias can be caused by an EM “fault injection” attack. That is, you point an antenna at it and “illuminate” it with an RF carrier. Which is what a couple of students at the Cambridge University Computer Lab[0] did to a very expensive, supposedly very secure IBM TRNG, trashing its entropy; as a result they got a best paper award at, if memory serves, either a Usenix or an IEEE conference.

There has been another paper, again from CCL, that discusses attacking an on chip “ring oscillator TRNG” on a Chip-n-Spin EVM payment card, which should have 64 bits of entropy, but gets dragged down from 1.85e19 (2^64) possible outputs to just 3300, which makes searching attacks fairly easy. Thus the practical attacks on it they describe…,

I suggest everyone downloads and reads it, as it has nice pictures that are worth rather more than a thousand words.

Now back to… that single latch roulette / stroboscopic / waggon wheel circuit, which is in that paper. It actually has TWO random/chaotic[2] inputs: the input from the particle detector and –the one you are ignoring in your analysis– the high frequency oscillator with its “delta F”. So you have the “A before B” etc. issue biasing the “data” as well as the “time”.

Obviously the more “sources” there are, the more this issue arises detrimentally, unless you specifically design it out or, as some occasionally do, “just get lucky”.

I hope you now understand why I have repeatedly said the things I do, that appear to go against your outlook-limiting assumptions.

[0] A statement that I should get a jingle made up for… Oh, and as I said, these issues have been discussed before on this and other blogs, by me and others. For instance,

Also take a look at Cambridge Computer labs “” site as well.

[1] Back in the 1970’s there was a design for an FM bug. All it had was a 74LS13 dual 4-input Schmitt NAND 14 pin DIP, an electret microphone, an overtone XTAL with a frequency above 29.34MHz, a 30pF silver mica or polystyrene capacitor, and a couple of resistors. You still see it mentioned from time to time,

The circuit was configured as the usual NAND gate XTAL oscillator, but with the capacitive output of the electret mic connected to the input side of the gate. Its very small output voltage moved the bias point, which caused the XTAL to be phase modulated. The NAND gate output was connected to the antenna, where the resulting square wave’s third harmonic appeared in the FM broadcast band.

I used to make dual mic bugs with them, because two audio sources allow you to post process them to make, in effect, a “steerable mic”.

There is also a version using a Hex Schmitt inverter TTL chip, that demonstrates an inverter “ring oscillator”,

But I cannot recommend it, as it has no frequency stability and drifts wildly if you go near it, due to the lack of tuning components. But it does demonstrate, by its gross instability, why such ring oscillators are used as faux-entropy sources for “on chip RNGs”.

[2] When looking at signal sources in engineering you normally consider “Signal + Noise” and immediately make a lot of simplifying assumptions about “noise”, the worst of which is that the “noise is a random source”, because it makes the maths simple (see RMS). In reality the noise is not at all random, it is actually deterministic from within the overall system’s other functions. You actually have a spectrum of

“Deterministic through chaotic to random”

Where chaotic is defined as being a deterministic process that shows extreme sensitivity to the input conditions. Which makes its output difficult to predict, even though it is deterministic. So the normal engineering assumption is that it can be treated as “random”. Fine if you are designing most signal processing systems, be that analog or digital, but potentially fatal when you are doing security engineering.
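The “deterministic through chaotic” part of that spectrum is easy to demonstrate with the logistic map, a standard toy example (my choice of illustration, not anything from the discussion above): each step is fully deterministic and repeatable, yet a seed difference of one part in 10^12 produces completely unrelated orbits after a few dozen iterations.

```python
def logistic(x, r=4.0):
    """One step of the logistic map: fully deterministic, chaotic at r = 4."""
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-12      # two seeds differing by one part in 10^12
for _ in range(80):
    a, b = logistic(a), logistic(b)

# The two orbits are now unrelated, although re-running either seed
# reproduces its orbit exactly: deterministic, but practically unpredictable.
print(a, b)
```

An observer of the output alone cannot tell this from “random”, which is exactly the point: the designer who knows the seed can reproduce every bit.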

SpaceLifeForm September 20, 2021 6:15 PM

@ MarkH

Space is tricky

Sorry, that paper does nothing to illuminate the workings of the natural world.

Exactly my point.

If one supposes for a moment that the claims of its authors might be true — despite their complete failure to either find confirming evidence, or to offer any theoretical basis however flimsy — that still would not affect a properly designed decay-based TRNG.

Using Physics to Measure Physics

That is the best one can do.

Please define a ‘properly designed decay-based TRNG’ without using Physics. Without using some Physical Rube Goldberg device.

You can not.

Red Balls, Green Balls. Who is the Inserter? Who is the Dumper?

Do not waste your time and energy chasing ghosts.

The Random you seek may not be there.

Clive Robinson September 20, 2021 7:08 PM

@ SpaceLifeForm, MarkH,

Space is tricky

I’ve just very quickly “skim-read” the web page.

First off, it’s not a “results” paper, but one describing an experimental setup and experimental findings.

Such papers allow other scientists to “play along” as they carry on with their experiments / research.

There is a bit of history as to why such papers appear.

Back a century ago experimental results were published, that others could not repeat. The usual sort of controversy started…

One researcher on visiting another researcher who had different results noticed there was a difference in their setups.

In his lab they used marble topped benches, in the other lab they used wooden…

Adding wood into the marble topped bench experiment, showed the wood was the cause of the experimental differences.

Oh, speaking of “Space is tricky”, a fun fact: somebody I used to work with for several years carried out his “particle research” in a women’s toilet… Which he included in the title of his results… He included this in his C.V., which always raised a question at interview… (and he suspected why he got so many interviews[1]).

The actual reason it was the women’s toilet was to do with architecture and deficient construction at the University he was at, not the experiment.

It just so happened that the toilet was on the top floor of a tower block with a window pointing in the right direction. The tower block was closed for “refurbishment” of the offices and class rooms (something to do with flooring if I remember correctly). So he could use the stairs to access the toilet and his experiment.

[1] He mentioned it when a group of us were chatting one day. Catherine had been grumbling about the problems she had had in getting interviews, and was a bit annoyed about another woman who had got a lot of interviews. Apparently the “other woman” had at some point won a pageant or some such and had been crowned “Miss Sausage Queen”, which she put in her “otherwise less than mediocre C.V.”. Which had not got the “other woman” any interviews until a “recruitment advisor” had told her to add the “Miss Sausage Queen” in the personal statement section. In trying to lighten the mood I put my foot in it by “asking the air” “I wonder did she look like a sausage?”, which got a smile from Catherine but a dirty look from “the boss”. It was then that Mark told his story. Like most of us, I lack such curiosity raising achievements. After all “Best dressed Klingon look alike” hardly sparks any interest 😉

Clive Robinson September 20, 2021 7:29 PM

@ SpaceLifeForm, ALL

Microsoft Win 11 180 swerve

I’m surprised you did not mention this Bleeping Computer link,

As Oracle’s VirtualBox is a very popular VM due to being free and quite easy to use.

However the Microsoft 180 is going to affect other things, but it’s also an indicator of MS Management’s “5h1t on you” attitude to all their customers. From which I guess it is safe enough to assume there will be more such “180’s” and worse to come.

SpaceLifeForm September 21, 2021 12:41 AM

Bug bounty is a farce


Mowmowfi September 21, 2021 1:28 AM

‘Bug bounty is a farce’
I found out the hard way, but they are getting cocky.
Silicon titles 😉

Clive Robinson September 21, 2021 1:56 AM

@ Thoth, ALL,

If you are still reading along 😉

Yet another CPU security enclave attack, this time on AMD CPUs,

“AMD has advised Windows users this week to update their operating systems in order to receive a patch for a dangerous vulnerability in one of its CPU chipset drivers that can be exploited to dump system memory and steal sensitive information from AMD-powered computers.”

It’s basically two faults in the MS Windows “amdsps.sys” that gives access pretty much anywhere in system RAM so,

“[I]n a report published on Wednesday, Economou said he found two issues in this driver that allows a non-admin user to dump the system memory and search for sensitive information handled by the OS.”

I’ve been saying it for a long time but all Secure Enclaves or “Trusted Execution Environments”(TEE) that use “common CORE RAM” –which all consumer CPUs do– are going to fail… Looks like they all have done now.

FA September 21, 2021 2:41 AM


But why would you want to do that ?

Let me rephrase that: why would you want to do that in the context of the current discussion?

I think it was very clear from the start that all event timing information would be discarded [1], and that for each event just one bit of data remained, and the question was how random and unbiased those bits were.

Also we were not discussing security engineering in general, nor the imperfections of digital logic, nor TEMPEST.

So bringing in any of those topics just results in derailing the discussion, which I must admit you managed to do.

After all is said, your argumentation is very similar to

It’s not possible to build a usable electric car because you can’t have perfectly round wheels.

Which is a pretty useless statement in a context where the relative merits of internal combustion vs. electric engines are the topic.

[1] Certainly after @MarkH presented his earthquake analogy, where it was almost explicitly stated.

Clive Robinson September 21, 2021 3:10 AM

@ FA,

Peak level will indeed go up. Which just means it’s a bad indicator of signal power. Or in audio, of loudness.

Unfortunately it’s not just Digital Signal Processing that gets affected.

Take the analogue audio processing line up in a voice communications system, it’s all based on the peak level. Similarly speech processing in cordless phones etc back in the 1980’s through 1990’s (and still in some ‘cordless phones’ today).

The aim is to get maximum audio signal into the modulated RF output, so you get minimum noise in the receiver.

If the peak level goes too high then you get “splatter up and down the band” in the output of SSB and NBFM radio communications systems. Which is not just very annoying to other adjacent channel users, but technically illegal in many places.

It’s why good money could be made in designing audio processors for the broadcast industry. Some of which I worked on the design of a few years back for a company then called “Broadcast Warehouse”, later just “BW”. One of the founders, Roger Howe, and I had been at school together and did the Pirate Radio stuff, where we were known as “little and large” by some. Roger gave me the nickname of “Lurch” and his sister still calls me “bat”, whilst I called him Fester in return, though it did not stick[1]. We worked together on and off from our early pirate teens and into the professional broadcast equipment, even though we had different careers, through until his untimely death a little over a year ago.

The last thing we were working on together was the design of a COVID mask using UV-C LEDs as an antiviral filter. Sadly it did not go ahead, due to his untimely accidental death, who knows how many peoples lives it would have changed if it had.

[1] Go watch the original Addams Family TV programs or see the photo in,

Clive Robinson September 21, 2021 3:50 AM

@ FA,

I think it was very clear from the start that all event timing information would be discarded

No, not at all. That is your and MarkH’s assumption, and why the pair of you went flying off in the wrong direction, and have since tried to drag everyone down your chosen path to prove you are right, when in fact from the general security perspective you are arguing about angels on the head of a pin.

As I’ve said, that time bias issue will still be present at the output of the XOR gate of the von Neumann data debias circuit, which in many people’s view would have fixed any “bias”, whereas in fact it only removes one type of “data” bias (there are others, such as sequence bias, which as I noted can be introduced by the roulette circuit).

I don’t wish to be impolite, but it’s becoming more and more clear you have never designed a professional grade TRNG.

Also we were not discussing security engineering in general, nor the imperfections of digital logic, nor TEMPEST.

That is as silly as your,

It’s not possible to build a usable electric car because you can’t have perfectly round wheels.

Security engineering is and has to be “general”, because everything in it is usually subject to the “weakest link” principle. I’ve said this repeatedly in several practical ways[1] over the years. And I’m certainly not going to change any time soon to suit you or others.

As for the attacks on TRNGs, they are very real, especially for chip based TRNGs and also for the very large amount of embedded equipment used for communications, such as network equipment[2]. There have been several real attacks shown to be practical on the likes of EVM Chip-n-Spin cards, which are in just about all credit/debit cards and in ID cards such as Electronic Passports and Citizen cards. So of concern in fighting financial crime and terrorism, if you want a “rubber meets the road” argument.

It is a very real problem and one that is only going to get worse with time, unless people wake up to it…

[1] One of my more recent being why the so called “Secure Messaging Apps” are fundamentally “NOT Secure”, because they get the “end points” wrong.

[2] There has been research on “poor prime choice” seen in PubKey certificates. It was found that a disproportionately high number originated in embedded devices where the alleged TRNGs are obviously deficient or used incorrectly.

FA September 21, 2021 3:59 AM


It’s why good money could be made in designing audio processors for the broadcast industry.

Not always for the good of the listener.

While peak limiting is certainly necessary in broadcast and digital audio, things have been taken way too far with aggressive multiband compression and the like, resulting in the ‘loudness wars’ we have seen for many decades now. Such processing effectively destroys whatever dynamics most types of music have, all for the sake of ‘standing out’ and selling more commercials. Bleh…

Fortunately resistance against this sort of thing is building up, both on the production (artistic) side and the technical one (with efforts such as the ITU and EBU loudness standards).

MarkH September 21, 2021 4:26 AM


I thought I specified my concept clearly, though evidently not well enough. I never proposed to take one bit per event, but rather the free-running event time modulo some number, such that the wrap-around time is small compared to the mean time between events.

In the earthquake timing example, I conveniently use mod 60, and accordingly extract 5.9 bits per event.

The data appear to have extremely low bias; limiting to a single bit is not necessary.
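For the curious, this scheme is easy to simulate. The sketch below (Python; the event rate, counter resolution, and all names are my own illustrative assumptions, not MarkH’s actual parameters) generates Poisson arrival times, reads a free-running microsecond counter mod 60 at each event, and checks that the residues are close to uniform:

```python
import random

def poisson_event_times(mean_rate, n, seed=1):
    """Cumulative arrival times of a Poisson process (exponential gaps)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(mean_rate)   # mean gap = 1 / mean_rate seconds
        times.append(t)
    return times

def counter_residues(times, modulus=60, resolution=1e-6):
    """Free-running counter (time in `resolution`-second ticks) read mod
    `modulus` at each event; wrap time (60 us) << mean event gap (100 s)."""
    return [int(t / resolution) % modulus for t in times]

vals = counter_residues(poisson_event_times(mean_rate=0.01, n=50_000))
print(sum(vals) / len(vals))   # close to 29.5, the mean of uniform 0..59
```

Because the counter wraps over a million times between events, the residues come out effectively uniform even though the inter-event gaps themselves are exponentially distributed.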


Perhaps you missed my above comment timestamped 11:56 AM, in which I stated that there is no XOR circuit. von Neumann debias is not required, and therefore not performed.

I don’t think I’ve seen the expression “sequence bias” here before. A crisp definition thereof would be helpful.

I noticed that you wrote, “I’ve explained the issue of how the output of a TRNG is biased by the half life decay curve.” I haven’t seen any such explanation, except for invalid arguments predicated on properties of periodic signals. If you posted an explanation relevant to non-deterministic independent events, will you kindly point me to that?

Clive Robinson September 21, 2021 5:14 AM

@ FA,

Such processing effectively destroys any dynamics most types of music have, all for the sake of ‘standing out’ and selling more commercials.

Sadly yes, that is the state of play, and also why there is plenty of money to be made, as the cost of up to $10,000 –at one point– was taken from “marketing budgets” not from “CapEx” budgets…

However I use one I helped design at home, because with a little fiddle you can also use them as “parametric equalisers”, so making up for aging hearing and tinnitus issues[1].

Fortunately resistance against this sort of thing is building up, both on the production (artistic) side and the technical one (with efforts such as the ITU and EBU loudness standards).

There is however a downside to this which is in “Advert free video recording” in Open Source and other FOSS projects.

Most actual “programs” at most use “light compression”, with “canned laughter” being an exception. Adverts however are usually “heavily compressed”, which is why they give that hard perceptive punch that tries to subvert the human/mammal startle reflex and make you look at the TV.

Well, it’s actually not that difficult to measure “dynamic range”: you can do it with a rectifier and two low pass filters as lossy integrators, though three is better to avoid false triggers. It does however take a little time. However with a “digital video recorder” time is not really an issue, as you can wind it back.
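A toy version of that rectifier-plus-lossy-integrator idea (Python; the coefficients, thresholds and names are my own guesses for illustration, not the actual software being described):

```python
import math

def envelope(samples, coeff):
    """Rectifier followed by a one-pole lossy integrator (envelope follower)."""
    env, out = 0.0, []
    for s in samples:
        env += coeff * (abs(s) - env)
        out.append(env)
    return out

def looks_compressed(samples, fast=0.05, slow=0.001, threshold=0.8):
    """Flag material whose fast and slow envelopes stay close together,
    i.e. material with very little dynamic range (advert-style compression)."""
    f = envelope(samples, fast)
    s = envelope(samples, slow)
    pairs = list(zip(f, s))[len(samples) // 10:]    # skip the attack transient
    close = sum(1 for fv, sv in pairs
                if max(fv, sv) > 0 and min(fv, sv) / max(fv, sv) > threshold)
    return close / len(pairs) > 0.9

steady = [math.sin(0.3 * i) for i in range(20_000)]   # constant level: "advert"
dyn = [(1.0 if (i // 1000) % 2 == 0 else 0.05) * math.sin(0.3 * i)
       for i in range(20_000)]                        # level swings: "program"
print(looks_compressed(steady), looks_compressed(dyn))   # → True False
```

With a steady full-level signal (heavily compressed material) the fast and slow envelopes track each other almost everywhere; with material whose level swings, they separate, so the “close fraction” drops and the detector says no.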

So you can detect the “heavy compression” of adverts and where it starts, and divert the probable “adverts” to a hidden set of files that normally would not be seen, but can be retrieved if “program” content gets treated as “advert”.

I developed a bit of software to do this, unfortunately it uncovered a trend I had not realised was happening.

Program makers are using the same heavy compression on the credits / titles / theme music at the beginning and end of programs. Whilst most viewers don’t really care if they see them or not, program makers obviously do, because on DVDs you started to see a trend, which has continued, where the content providers lock out the “skip a scene” or even “fast forward” controls, which used to only happen with the “Anti-Piracy” crud.

I guess things will develop into an ever increasing arms race between marketers and ordinary people.

I guess at the end of the day the “cop-tag” algorithms used for child exploitation image detection could be more widely applied to eliminating “adverts” something I’m sure the marketing people would not want widely known let alone be exploited… Just a thought as they say.

[1] Oddly, whilst Bluetooth headsets can be had for less than £20, hearing aids from UK high st retailers that do the same thing, and in some cases use the same chip sets, cost up to £4000, a 20,000% markup… If that became more generally known maybe the price would come down…

Clive Robinson September 21, 2021 5:43 AM

@ MarkH,

Perhaps you missed my above comment timestamped 11:56 AM, in which I stated that there is no XOR circuit. von Neumann debias is not required, and therefore not performed.

Not really relevant: the conversation had started about the general design of radio isotope sources long before that, and the XOR gate debiaser had likewise been brought in before you went off down your earthquake example.

… you wrote, “I’ve explained the issue of how the output of a TRNG is biased by the half life decay curve.” I haven’t seen any such explanation …

Go back and read what I’ve written about the roulette circuit and how the “time” domain affects the “data” domain.

I don’t think I’ve seen the expression “sequence bias” here before. A crisp definition thereof would be helpful.

The von Neumann debias circuit only works on pairs of bits, thus the resulting debias scope is very limited. It is possible to design sequences that appear not to be biased to the von Neumann debiaser, because they are biased in a way that the limited scope of the debiaser does not act against.

That is, you can find sequences that are “balanced” in terms of the numbers of set and clear bits, and you can add start and end sequences that when concatenated likewise form balanced sequences. So you end up with very long sequences that are bitwise balanced and pass through the von Neumann debiaser. However, if you have say two balanced sequences, you can use a lot more of one sequence than the other. The result is that on a wider scope than pairwise bits you have bias in your data that you do not want.

But a fun little experiment for you, as you like doing such things.

The von Neumann debiaser takes a pair of bits to output a single bit. Can you find a short sequence where it has a balanced set of set and clear bits before the debiaser but does not after the debiaser?
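For anyone wanting to try that experiment, here is a minimal von Neumann debiaser in Python (the standard textbook construction, nothing specific to the hardware under discussion). One short answer: the bitwise-balanced input 0,1,0,1 comes out as 0,0 – balanced before the debiaser, all zeros after.

```python
def von_neumann(bits):
    """Classic von Neumann debiaser: take non-overlapping pairs, emit the
    first bit of each unequal pair, and silently drop equal pairs."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

# Balanced input (two set bits, two clear bits)...
print(von_neumann([0, 1, 0, 1]))   # → [0, 0]: all zeros after debiasing
```

Equal pairs are dropped entirely, which is also why the debiaser throws away at least half of its input on even perfectly random data.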

FA September 21, 2021 9:43 AM


I guess at the end of the day the “cop-tag” algorithms used for child exploitation image detection could be more widely applied to eliminating “adverts” something I’m sure the marketing people would not want widely known let alone be exploited… Just a thought as they say.

Another solution would be crowd-sourced content labeling, i.e. creating and distributing little files with timestamps telling you which parts to skip when viewing or editing a recording. This would just be an expression of ‘opinion’, so probably not so easy to stop by legal means.

JonKnowsNothing September 21, 2021 10:30 AM

@Clive, All

re: cop-tag algorithm

personal observation that means nothing…

For some time now, I’ve noticed a “new” shift in my A-Phone behavior.

The existing behavior of +bars and -bars can be traced to the daily-weekly FBI flyovers of the area doing their collect-it-all Cessna flights.

The new behavior is that it takes 3+ attempts to send any image. The first attempt is a very slow green bar dropping at the last 2-3%, the second is another slow green bar dropping at the last 2-3% and the third try might go through. Then again, it might not.

Of course, there are a bazillion and one reasons for this, but it is Curious Mr. Ollivander.

vas pup September 21, 2021 3:38 PM

CIA officer ‘suffered from Havana syndrome’ during India trip

I’m just curious: does the CIA have in its possession some kind of detectors (in different sections of the EM and sound spectrum – like wearable Geiger counters for detection of radioactivity) so an IC officer is immediately notified he/she is in a dangerous zone?

Why can’t IARPA develop such a thing?

Yeah, and start looking for the cause in a 2+2=4 paradigm, i.e. ‘WHAT?’, rather than concentrating primarily on ‘WHO?’. It’ll be more productive in all aspects.

Moreover, it can be pulsing light like strobe effect but unnoticed for eye to generate similar effect.

Please read the article before responding to the questions above.
Thank you.

SpaceLifeForm September 21, 2021 3:42 PM

Technical Debt

At the end of this month, depending upon your client software, you may encounter issues reaching this site.

2021-09-30 14:01:15 GMT to be exact.

Note: If you use Firefox 50+, you will probably be fine, because Mozilla ships its own list of root CAs inside the browser, instead of relying upon the OS.


In normal circumstances this event, a root CA expiring, wouldn’t even be worth talking about because the transition from an old root certificate to a new root certificate is completely transparent. The reason we’re having a problem at all is because clients don’t get updated regularly and if the client doesn’t get updated, then the new root CA that replaces the old, expiring root CA is not downloaded onto the device.

SpaceLifeForm September 21, 2021 4:39 PM

@ Sut Vachz

I am shocked, shocked I tell you, that there is gambling going on in this establishment!

Like the infinite straight line, true random numbers can not be had (because they don’t exist), and are never needed.

So true. Should we mix in some other source of random? 🙂

SpaceLifeForm September 21, 2021 5:10 PM


Ask yourself: Self, what am I really using this alleged Random for?

Is it really important that I believe it is actually Random?

What is the application doing with this Random?

Am I just playing a game?

Or, am I in the Matrix?

How can I tell?

Clive Robinson September 21, 2021 7:09 PM

@ SpaceLifeForm,

So true. Should we mix in some other source of random?

I think your smiley has vanished in the blog software.

But if “random” does not exist…

But at the moment the belief is there is a spectrum of complexity that goes from

“Determanistic through chaotic to random”

That is, as an observer of the output of a source there is a limit to how much complexity you can determine from it, unlike the person who designed the source, who knows the system.

Thus you might view random as being at the end of the line where complexity nears infinite…

So when someone says “Did the earth move for you?” you can say “Not on its own, but the universe did” 😉

MarkH September 21, 2021 7:13 PM


You have noted more than once, your experience in the design of (at least one) “professional grade TRNG.”

How did you assess statistical bias in the output sequence?

What were the pass/fail criteria?

Clive Robinson September 21, 2021 7:39 PM

@ SpaceLifeForm, ALL,

Is it really important that I believe it is actually Random?

Outside of information theory it does not matter, as long as the method of generating it does not cause anomalies in the methods that use it.

Crypto is a subset of information theory, but… You care not one jot if it is “actually Random”. What you care about is whether an observer can determine your source of “data” and predict the next bit etc.

It’s why “Pseudo Random Bit Generators”(PRBG) are fine to use for many crypto tasks and in some cases more desirable than “True Random Bit Generators”(TRBG)[1].

However there will always be a need for numbers that are not predictable by you or anyone else[2].

So you really have to know when something can be generated by a “Crypto Secure PRBG”(CS-PRBG) and when TRBG output is required.

For instance, you can use a CS-PRBG to print out what look like “One Time Pads”(OTP) and use them as such.

However a True Random Bit Generator is needed for some things, like “deniable ciphers”, that allow a communicating “First Party” to transmit using covert channels in the redundancy of plaintext messages, in such a way that even if the second party betrays the first party to a third party, the covert communications cannot be demonstrated.

[1] Basically you do not want “reuse” for the likes of “Numbers used Once”(Nonce). Avoiding reuse cannot easily be guaranteed with a TRBG, but is trivial with a Crypto Secure PRBG (CS-PRBG).

[2] Really unpredictable numbers from TRBGs are needed for certain things, such as “seeds” for master keys, primes for PubKey certs, etc.
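The distinction in footnote [1] can be sketched in a few lines. Below is a minimal, purely illustrative counter-mode construction (`cs_prbg_nonces` is my hypothetical name, not a standard API): the counter makes nonce reuse structurally impossible, whereas a TRBG only makes repeats improbable.

```python
import hashlib

def cs_prbg_nonces(seed: bytes, count: int):
    # Counter-mode sketch of a CS-PRBG: output = SHA-256(seed || counter).
    # The strictly increasing counter guarantees every nonce is distinct.
    return [hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
            for i in range(count)]

nonces = cs_prbg_nonces(b"illustrative-seed", 1000)
assert len(set(nonces)) == 1000   # no reuse, by construction

# A TRBG, by contrast, only makes collisions unlikely: k draws of
# n-bit values repeat with probability roughly k*(k-1) / 2^(n+1),
# the birthday bound.
```

The same seed of course reproduces the same nonces, which is exactly why a CS-PRBG seed must itself come from a really unpredictable source, as in footnote [2].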

Anders September 21, 2021 8:30 PM

@Clive @SpaceLifeForm @ALL

Some fun.


Everything started from here




Don’t throw out those old computers!
They are perfectly usable today 🙂

ps. if you set up rendering proxy inside VM, you can start each day from clean page, just kill and restore from clean.

Clive Robinson September 21, 2021 8:51 PM

@ MarkH,

How did you assess statistical bias in the output sequence?

I did not, I passed it to a tame mathematician with experience in running the standard tests (there are certain resource issues you need to overcome, and it’s easier to use someone who is already set up to do so).

However I did test things like the “source” in various domains other than just “data”.

All “natural sources”, even flipped coins, show bias and very occasionally a few surprises (such as not landing face up or face down but on the rim).

The hard part is deciding what tests to use and what they tell you.

For instance, if you look at a stream of data and the number of set bits matches the number of clear bits, is it unbiased?

The answer, after a little thought, is obviously ‘NO’. All it is really telling you is that there is no DC bias. Look at the various “line codes” such as Manchester coding: they are balanced, but they can carry any other “data” that may or may not be very biased.
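The Manchester point is easy to demonstrate, a minimal sketch: encode a deliberately biased data stream, and the naive ones-versus-zeros count test passes perfectly anyway.

```python
def manchester(bits):
    # Manchester line code: data 1 -> (1, 0), data 0 -> (0, 1).
    out = []
    for b in bits:
        out.extend((1, 0) if b else (0, 1))
    return out

data = [1] * 900 + [0] * 100        # 90% ones: heavily biased "data"
line = manchester(data)

# The encoded line is perfectly balanced regardless of the data, so
# "equal set and clear bits" only certifies the absence of DC bias.
assert line.count(1) == line.count(0)
```

Each data bit contributes exactly one set and one clear line bit, so the balance test can never fail, whatever the underlying bias.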

You quickly realise that bias in any given domain (“data”, “time”, “sequence”, etc.) is like the layers of an infinitely large onion. At some point you have to draw a line as to where you stop testing.

So at some point you realise the output of an RBG is always going to be biased in some way. And… your task as a designer is not to remove “all the bias” but enough of it that any bias that remains cannot be used by an attacker.

Where do you draw the line? Well, that is based on experience of what can be exploited and at what level of risk, for the intended application. But at the very least you would reduce the lower dimensions in the amplitude, frequency, time, and sequency domains of your “natural source”.

As I’ve indicated in the past, I bring out the “natural source” to a TNC or similar connector at a known impedance so it can be hooked up to monitoring equipment by the user of the RBG, to check it is functioning correctly and not subject to external influence. Whilst an oscilloscope will show the amplitude, a spectrum analyser has many advantages, especially if it has a decent “waterfall” function. After that you get into far more specialised testing that you need to build yourself. Back in the 90’s that was not easy to do and eye-wateringly expensive. These days, using modern Software Defined Radio (SDR) equipment and software, you can build out most reasonable tests to run on a PC or even a laptop.

The important thing to remember is there are effectively two testing types. The first is tests that run on the source output in “real time” continuously, and tend to be fairly limited in capability.

Anyway, it is around 3 in the “wee small hours” in the UK, so I’m turning in for the night. But I will pick up again tomorrow.

ResearcherZero September 21, 2021 10:02 PM

SSID Stripping causes a network name – aka SSID – to appear differently in the device’s “List of Networks” than its actual network name.


To check if your organization is susceptible to the SSID Stripping vulnerability, AirEye created a free simple Windows-based tool called Hide ‘n Seek.


ResearcherZero September 21, 2021 10:25 PM

According to Cuban physicists, the following violates the laws of physics (probably microwave ovens and electronic surveillance as well).

“The situation in India could have dramatic implications: the CIA director’s schedule is tightly held and there are deep concerns among US officials about how the perpetrator would have known about the visit and been able to plan for such an aggression.”

The person traveling with Burns who experienced the symptoms in India received immediate medical attention when they returned to the US, sources said.


“Some former officials suggested that if it was an attack and an adversarial power was responsible, striking at Mr. Burns’ delegation would amount to an egregious escalation.”

Nearly half of the known cases involve C.I.A. officers, although State Department diplomats and members of the military have also been affected, officials have said.


“the Pentagon warned its entire work force about the anomalous health incidents, which it said often involve strange sounds or a sensation of heat or pressure followed by headache, nausea, vertigo and other symptoms.”


Sut Vachz September 21, 2021 11:20 PM


Re: never step into the same riverrun past gambler ruin’s by a more commodious vicus of randomization

In addition to what @Clive’s says in his responses, one might enjoy to read what Carver Mead says about entropy and randomness in his book “Collective Electrodynamics”, which has a connexion to Ross Anderson’s work on “classical” explanations of “quantum mechanics”.

Also worth a look is the late Edward Nelson’s book Quantum Fluctuations, which uses the language of random, stochastic etc. but is really a pure mathematical attempt using measure theory. (All Nelson’s books and some of his papers are freely downloadable from his mathematics department page.)

lurker September 22, 2021 12:00 AM

@vas pup: …start looking for the what?…

In these times of Covid we have learned how cunning a virus can be, and how infectious in closed communities. So, thinking outside the box, can Havana Syndrome be explained by a virus which targets the sensory synapses? At present it seems to be targeting victims only from the diplomatic and spooks community, which might take some explaining…

Whether such virus is a wild zoonotic, or was enhanced/fabricated in a lab, is left as an exercise for the reader…

SpaceLifeForm September 22, 2021 12:51 AM


Random Roulette

I am designing a Random Roulette wheel. The only answers it can provide is a zero or a one.

Should I divide the wheel into two halves, so that one side is zero and the other side is one?

Or should I divide the wheel into quarters, with alternating zero and one?

Or should I use lots of pins and slots, and divide the wheel into 1024 sectors?

Does more sectors make the Random more Random?

If you believe so, please explain.

JonKnowsNothing September 22, 2021 1:24 AM

@lurker, @vas pup

re: viruses are not cunning, they just do what viruses do

A book from 2012 detailed how a viral infection went unrecognized and ended up diagnosed as a severe psychiatric disorder.

The “clock drawing” test and a knowledgeable neurologist made the connection between the misshapen drawing and a viral brain infection.

A recent MSM report of a recurrence of Ebola virus in a person, a number of years after they had survived an initial infection, surprised MDs that the virus could lie dormant so long.

Chicken pox (shingles) lies dormant for decades. Once it reactivates, serious nerve damage can happen (optic nerve, chronic pain).

African Swine Fever Virus, which affects domestic and wild pigs, can remain active in the environment for decades and infect pigs coming into contact with contaminated soil, feed, or garbage years after it was sent to the landfill.

Viruses are not cunning, they are hard core survivalists and opportunists.

Until more details are released, I’m not sure a Magic Ray Gun follows US Diplomatic personnel around the globe like an electric bug zapper.

There is another possibility that’s not had much reporting, and that’s the convergence of a number of food stuffs that were once “normal” and now “may not be but are treated as equivalent”. What passes off as “food allergies”, especially for people who have never had food allergies previously. The triggers may be impossible to determine, particularly if the trigger molecule is only included on festival or holiday or celebration days and not present on other days.


ht tps://

  • clock drawing test

ht tps://

ht tps://

  • He was the first physician in New York University history to identify the mechanism of interaction between the immune system and the central nervous system

ht tps://

  • Starlink

JonKnowsNothing September 22, 2021 1:32 AM


re: roulette wheel

The wheel should have only 1 section. There can be only one. Zero is not an answer, it is a null finding.

Clive Robinson September 22, 2021 3:39 AM

@ ResearcherZero, ALL,

With regards “SSID Stripping” and AirEye article you point to.

The first few words made me think “this is familiar”. Eventually I got down to,

“… the Computer Science faculty at the Technion – Israel Institute of Technology …”

Every time I see their name pop up I get that “déjà vu feeling” of having seen the fundamental work before and that they have just “re-boiled the cabbage”.

The essential part of the attack is the use of non-printing characters in a user interface to hide information that is seen by a different interface but not by humans.

Whilst that sort of thing has been known about for decades especially with the proliferation of character sets and the ability to hide stuff in input attacks on the likes of SQL databases, the first real creative use for it was on this blog.

It started out with the idea of somehow attaching digital signatures to posts in a way that would be there to be checked but not visible to the user.

The idea carried on in the hands of @Wael and @Ratio who used it as a form of steganography.

Yes, even I had a small contribution, in that I noted that the old “WordPress” version of this blog software treated their messages quite a bit differently than other postings on the Hundred Last Comments page.

So yeah AirEye / Technion have pulled another “Cabbage re-boil”.

BUT the important thing is, although everything behind it was from other peoples original work. They have again taken what to many appears “theoretical research” and put a “very real practical face on it”.

Which I hope will wake people up to the fact that theoretical ideas should not be ignored.

Our host @Bruce Schneier has in the past called the repurposing of ideas in this way as “thinking hinky” a “technical term” I’ve not seen him use of late 😉

But back on the theory side of this for a moment, it is worth considering why it’s possible. Well, it occurs because software interprets the SSID in its “raw” entirety; every bit in the string counts, so it is a “one to one” mapping. Human Computer Interface (HCI) software by tradition has “non printing” or “printing as white space” characters, even in 7-bit ASCII, to hide “in-band communications signalling”.

Where it’s gone horribly wrong is with the extensible character sets: they have so many undefined or redundant characters that these considerably outnumber the ones that are meaningful to humans. Thus HCI software has a blanket “non printing” or “print as whitespace” policy that gives a “cooked” output… so it has a “many to one” or “many to none” mapping, which is very, very different to the “one to one” mapping of the non-HCI software.
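A tiny sketch of the two mappings (the SSID strings here are hypothetical, chosen purely for illustration):

```python
genuine = "CoffeeShop"
spoofed = "CoffeeShop\u200b"        # trailing ZERO WIDTH SPACE (U+200B)

# Non-HCI software compares every bit of the raw string: one-to-one.
assert genuine != spoofed
assert genuine.encode() != spoofed.encode()

# An HCI layer with a blanket drop-the-non-printing-characters policy
# is many-to-one: both strings render identically to a human.
def rendered(s):
    return "".join(c for c in s if c.isprintable())

assert rendered(genuine) == rendered(spoofed)
```

The network stack sees two distinct SSIDs; the user’s “List of Networks” shows one name twice.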

Which proves once again that where there is “redundancy” “covert channels” can be made, and they can be used for good or bad depending on your view point…

MarkH September 22, 2021 3:40 AM


Here are some lowlights, of the funny (yet disturbing) paper you kindly pointed us to:

• Published in Nature Scientific Reports … sounds good, right? Sadly, this is a “vanity press” incorporating rubbish which no serious journal would accept. The authors paid to get their work in there; the going rate is €1,690.

• Signed by ten authors! Who are these people? One of them is affiliated with a “Chemical and Nuclear Engineering Department,” sounds good. For the rest:
“Industrial, Radiophysical and Environmental Safety”
“Complex Physical and Biological Systems”
“Theoretical and Experimental Biophysics”
“Interdisciplinary Modeling”
“Traffic Control Systems” ??????

• Seven of the ten are at the Polytechnical University of València, Spain … ranked 525th in the world, and 734th in physics. Hmmm.

• I was struck by the odd term “cosmophysical” which I had not seen before. A quick search suggests that it is rare, and used mainly in junk science papers.

• They made vast numbers of capacitance measurements. What is that about?

• Most of their measurements seem to be of fluctuations on the order of 1%, or often 0.1%. For random phenomena like nuclear decay, small variations on that order are inevitable, and various disturbances to the instruments or the setup can cause measurement fluctuations.

MarkH September 22, 2021 3:51 AM

@SpaceLifeForm: (continued)

• The authors used “correlation” 119 times. Here are some interesting examples, my italics added:

“we present … detailed description of the circumstances or conditions under which those correlations occurred or not occurred

“this makes the presented results difficult to understand since there are various circumstances under which those correlations take place or not

“It can be argued that sometimes the correlations are not be observed, but this is not necessarily a lack of consistency … it seems only to mean a lack of conditions for such correlations to occur.”

“we observed that those correlations exist (or do not exist) under certain circumstances”

no correlations with space weather were evident when the decay rates (or capacitance measurement) were the same as measured outside of the MFC”

What exactly are we looking at here?

MarkH September 22, 2021 4:01 AM

@SpaceLifeForm: (saved the best for last!)

The last quote above goes to the heart of the matter: when the readings outside the “modified Faraday cage” match those made within, there is no correlation with space weather.

So just what the hell is a “modified Faraday cage?”

First hint:

The interested reader can find a particular interpretation of the results in the work of W. Reich (56).

This made my hairs stand up … Wilhelm Reich was one of the notorious pseudoscientists of the 20th century. Could they mean that W. Reich? Don’t take my word for it, look at their bibliography.

A “modified Faraday cage” is …

an orgone accumulator

Not. Making. This. Up.

That “paper” is a subtle blend of fantasy and science fiction.

We must learn to read with informed skepticism. I saw these red flags within 10 minutes of opening the PDF.

Clive Robinson September 22, 2021 4:09 AM

@ SpaceLifeForm, JonKnowsNothing, ALL,

Does more sectors make the Random more Random?

In the longterm no, in the short term yes.

To see why, put the resulting squarewave signal through a low-pass filter; it acts as a lossy integrator.

The lower the frequency of the square wave, for any given filter cut-off frequency, the more “squarewave” the filter output looks; the higher the frequency, the more it looks like a DC level of zero (if the sector sizes are uniform) or some other “biased” level (if they are not). In between, the “ripple” changes faster and faster, so hitting it with a non-synchronized sample gives different results.
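A minimal numerical sketch of that effect (single-pole filter, all parameters chosen only for illustration): the same lossy integrator passes a slow square wave almost full size but flattens a fast one down towards its DC level.

```python
def ema(samples, alpha=0.02):
    # Single-pole low-pass filter: a lossy integrator.
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def square(period, n):
    # +/-1 square wave with the given period, n samples long.
    return [1.0 if (i // (period // 2)) % 2 == 0 else -1.0 for i in range(n)]

slow = ema(square(2000, 4000))   # wheel alternating well below the cut-off
fast = ema(square(4, 4000))      # wheel alternating well above the cut-off

# The slow wave still swings close to +/-1 after the filter; the fast
# wave is smoothed down towards zero, its DC level for uniform sectors.
assert max(abs(v) for v in slow[2000:]) > 0.9
assert max(abs(v) for v in fast[2000:]) < 0.1
```

Sampling `fast` at unsynchronized instants just reads off the residual ripple, which is why more sectors look “more random” close in without actually being so.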

But as I said about the latch-based roulette wheel circuit, whilst it might look chaotic or random close in, it is neither; seen further out, the bunching when integrated gives a very nice sinewave at the “difference frequency”…

Which kind of confirms @JonKnowsNothing’s tongue-in-cheek reply, to which you could add “what goes around comes around” 😉

MarkH September 22, 2021 4:11 AM


I’m delighted by the notion of a “tame mathematician.” Many organizations benefit from having one or two, presumably kept on a tether and given kibble and fresh water on a regular basis.

It’s worth noting that not only is bias impossible to reduce to zero, but also that for an ideal “perfect random” generator, any finite sample of its output would show some degree of bias because the outputs are … well, random.

For the ideal generator, average measures of bias converge to zero as the output sample size increases, but any particular sample can have high readings.
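A quick simulation sketch of that point (using a pseudorandom stand-in for the ideal generator): every finite sample shows some nonzero bias, with typical magnitude shrinking roughly as 1/sqrt(n).

```python
import random

random.seed(2021)   # fixed seed so the run is repeatable

def measured_bias(n):
    # |fraction of ones - 1/2| over n ideal coin flips.
    ones = sum(random.getrandbits(1) for _ in range(n))
    return abs(ones / n - 0.5)

# Any finite sample shows *some* bias, but its typical size falls
# roughly as 1/sqrt(n): around 0.05 at n = 100, around 0.0005 at
# n = 1,000,000. Any particular small sample can still read high.
small, large = measured_bias(100), measured_bias(1_000_000)
assert large < 0.005    # ten standard deviations: effectively certain
```

The converse is the catch in testing: a single high bias reading from a real generator proves nothing by itself, since the ideal generator produces them too.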

Clive Robinson September 22, 2021 4:42 AM

@ MarkH,

… but also that for an ideal “perfect random” generator, any finite sample of its output would show some degree of bias because the outputs are … well, random.

Err, not necessarily, because the readings are “bounded” by the system of measurement.

An infinite sequence when measured can have no bias of any kind other than fundamental bias (like a DC level), because within the range of a measurement what goes up comes down, wraps around, or hits an end stop; so eventually, with enough samples, the average converges on some constant value at infinity.

That’s what a “tame metrologist” will tell you 😉

Wesley Parish September 22, 2021 7:04 AM

@vas pup

Schwein im Himmel!!! Re: the Havana Syndrome, I’m thinking, Legionnaires’ Disease, or something related to that. India’s on good terms with the US. Vietnam’s on good terms with the US. Cuba’s at arms’ length.

(I see @lurker and @JonKnowsNothing see it in that same light. The one common characteristic identified so far is that it is happening to US diplomatic staff or US officers attached to US embassies. As far as I can recall, these medical situations occur in tropical countries. If someone can correct me on that, I would be most grateful.)

Clive Robinson September 22, 2021 8:23 AM

@ Wesley Parish, JonKnowsNothing, lurker, vas pup,

The one common characteristic identified so far, is it is happening to US diplomatic staff or US officers attached to US embassies.

And that would suggest it’s not an ordinary pathogen, but something that is “selective” for some reason. Especially as it has not spread.

Whilst we do have the genetic ability to come up with pathogens that do not reproduce, I’m not sure we’ve come up with a “genetic kill switch” that acts as a countdown timer for either viruses or bacteria.

But as I’ve said before the basic physics is against it simply being a radiant energy device be it acoustic or EM.

Whilst you could develop a “multiple beam” system[1] with a very select target area, it would be an “interference system”. Which, whilst it would have a high information capacity over a small target area, the same would not be true for energy. To see the complications of that, look up the use of X-rays and similar to kill cancers.

Can an interference system hurt humans? The answer is sort of yes. It takes very little energy to stimulate nerve endings, and we also know that seizures and similar can be induced by flashing lights. So it would not be that surprising if someone had worked out how to make an interference-based weapon to stimulate nerve endings to cause shock/insult to a human’s neurological system. In fact it has been done whilst researching “nonlethal weapons” for crowd control, anti-piracy, and just keeping those in their teenage years out of corridors and other places people do not want them. But such weapons have significant issues when the levels are anything above mildly annoying, in part due to the very, very wide range of sensitivities in people.

So yes, it could be a biological weapon, but I really don’t think viral or bacterial because it is way too selective. Could it be a higher-order pathogen[2], like a malaria variant? Possible, but then you would expect it to be found the first time someone put it under a microscope (which, due to US health care being so instrumented by computers, may not actually have happened).

There is of course another factor to consider… Think of the likes of dengue fever. The first variant you get merely stimulates your immune system and gives mild cold/flu-like symptoms. However, when you get a second variant you can be in a whole world of hurt, unlike others around you who are getting their first infection…

In theory you could design a virus in a lab to replicate this behaviour, but tailor it to trigger not on the second but maybe the third different strain[3].

My feeling is we are not there yet, but it is probably not that far off.

[1] See Prof R.V. Jones’ 1978/2009 book “Most Secret War” and the section on the “Battle of the Beams” and the bombing of Coventry.

Start with chapter 11 and work through most chapters up to chapter 30.

[2] But there are other micro organisms and things like phages that have not been much researched in the West, and some that are only just starting to be,

[3] You only need to see what is happening to people with “long COVID” to see that some think SARS-2, with perhaps a little nudge, might make an excellent multiple-stage bio-weapon…

Clive Robinson September 22, 2021 8:23 AM

@ Anders,

Everything started from here

In some circles I’m known to have a shall we call it “the product of a more refined”[1] sense of fun…

There is a song[2], the most notable line of which I have misappropriated and, well, changed to,

“It started with an egg”

You would be surprised at just how many things you can say/sing that of… Science being but one.

You can sort of blame my son for it, because when he was young I used to put new words to existing songs on the fly, because it made him laugh, and the sound of children’s laughter is a reward of high value to a not-as-young-as-it-would-like-to-be soul.

Needless to say there is a lot more to my rewording of the song, as I used to sing it to him when making breakfast, or making cakes/pastry.

But it is true, science really did start with the egg: it was in effect the first “standardised measure” by which all the “chemical experiments” we call recipes were first defined.

The egg was used not just for measures of “mass” but “volume” as well, and even occasionally “density”, all as part of “cooking”.

The second branch of science mankind came up with was “Genetic Engineering”, via the cross-pollination of grasses to get better grains, and later selectively breeding animals to improve desired characteristics (many of which were not at all good for the animals).

Oh, and as Terry Pratchett pointed out to me once over a drink, the old saying about “The oldest profession…” is probably not true; it was probably being a cook. Because research has found that cooking has caused irreversible changes in humans, not the least of which is our now-useless appendices, and that kind of happened before we became human as we understand it.

[1] As Terry Pratchett once wrote a faux Dwarf saying,


“There are two things that come from a shaft, muck and gold”



With the rider of “not in equal measure”, so refined has more than one meaning 😉

I have been told by others that Terry simply did a re-work of the US saying of “Sometimes you get the elevator, sometimes you get the shaft”. Me I’m not so sure…


JonKnowsNothing September 22, 2021 8:29 AM

@Clive, SpaceLifeForm, MarkH, All

re: language forms as a difference

Languages, even computer languages, are based on syntax, an order for connecting words, verbs, and descriptors together in a manner that is understandable to others using the same syntax.

Human Language Translators map LangA to LangB… LangN, depending on how many languages they are proficient in.

Computer languages map a finite set of syntax of instructions and pass the results down the compiler chains.

Computer Generated Language Translators are not really as advertised, similar to claims of AI/ML: lots of mirrors and smoke backed up by Human Translators who do the heavy lifting.

Part of the reason is a form of communication: “Idiom”. A set of words or concepts that are known to one group but do not have a direct 1:1 translation to any other groups.

The USA made use of the Navajo Language for codes because there were so few Navajo speakers left after our historical attempts at eradication that they were pretty sure none of them lived in Germany.

But idioms do not translate because we do not understand the context. There may be semi-equivalent versions but their hallmark is these are rarely 1:1 equivalent.

Removing fixed language order renders most syntax into gibberish and thinking outside of syntax order requires some deeper thinking.

  • Is a box really a box, or is it a box because we collectively agree to call it a box? What is really there?
  • No one knows what others really see but we agree to name “this object” is a box

Without the context, idioms remain less-understood than other forms, because they become a form of code-words for a situation, emotion, action etc.

This is not the same as a coded phrase, because phrases are based on syntax rules and not underlying meaning.

There have been some interesting fiction books that explore what happens when a word is not just forgotten but the entire concept behind the word is removed as well. There is no context at all. Such as seeing yourself for the first time in a mirror, you do not identify yourself with the image you see.

It may be a useful concept in the consideration of random vs non-random.

  • Don’t count your chickens before they hatch
  • Il ne faut pas vendre la peau de l’ours avant de l’avoir tué


Whoever says “I” creates the “you.” Such is the trap of every conscience. The “I” signifies both solitude and rejection of solitude. Words name things and then replace them. Whoever says tomorrow, denies it. Tomorrow exists only for him who does not seek it. And yesterday? Yesterday is Kolvillàg: a name to forget, a word already forgotten.

The Oath: A Novel by Elie Wiesel

FA September 22, 2021 10:32 AM


so eventually with enough samples the average converges on some constant value at infinity.

Not always. There are probability distributions that don’t have a mean.
Ask your resident tame mathematician about the Cauchy distribution.
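For example, a quick sketch (inverse-CDF sampling; sample sizes are illustrative): the median of Cauchy samples settles down, but the running mean never does, because the mean of n Cauchy draws is itself Cauchy-distributed.

```python
import math
import random
import statistics

random.seed(1)

def cauchy():
    # Standard Cauchy via the inverse CDF: tan(pi * (U - 1/2)).
    return math.tan(math.pi * (random.random() - 0.5))

samples = [cauchy() for _ in range(100_001)]

# The sample median converges to the location parameter (0)...
assert abs(statistics.median(samples)) < 0.05

# ...but the sample mean never settles: the mean of n Cauchy draws is
# itself standard Cauchy, however large n is, so block means stay wild.
block_means = [statistics.fmean(samples[i:i + 10_000])
               for i in range(0, 100_000, 10_000)]
assert max(abs(m) for m in block_means) > 0.2
```

The heavy tails mean the occasional enormous sample keeps dragging the average around, no matter how many samples have gone before.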

Sut Vachz September 22, 2021 10:57 AM

@MarkH @Clive Robinson

Re: infinite sequence when measured can have no bias

Not sure if the following fits the context properly, but what about a sequence of 1’s and 0’s where ever longer and longer runs of all 1’s and all 0’s follow one another?

E.g. for concreteness, let the sequence be 1 on the “even” power-of-2 intervals [2^(2k), 2^(2k+1) − 1], and 0 on the “odd” power-of-2 intervals [2^(2k+1), 2^(2k+2) − 1], k = 0, 1, 2, …. If I have done my sums correctly, the average oscillates forever between approximately 2/3 and 1/3. Amusingly, the original sequence can be coded 1010101010…, where each binary digit gives the value on the corresponding binary interval [0,1), [1,2), [2,4), ….
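The sums check out numerically; a short sketch using the “1010…” coding on the binary intervals [0,1), [1,2), [2,4), …:

```python
def bit(n):
    # Position 0 gets a 1; thereafter n lies in a dyadic interval
    # [2^j, 2^(j+1)) and the bit alternates 0, 1, 0, 1, ... with j.
    if n == 0:
        return 1
    return (n.bit_length() - 1) % 2

total, averages = 0, []
for n in range(4 ** 9):
    total += bit(n)
    m = n + 1
    if m & (m - 1) == 0:          # m is a power of two: a run just ended
        averages.append(total / m)

# The running average oscillates forever, approaching 2/3 at the end
# of each 1-run and 1/3 at the end of each 0-run, so it has no limit.
assert min(abs(a - 2/3) for a in averages) < 0.03
assert min(abs(a - 1/3) for a in averages) < 0.03
```

The first few run-end averages are 1, 1/2, 3/4, 3/8, 11/16, 11/32, …, alternately converging on 2/3 and 1/3.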

SpaceLifeForm September 22, 2021 4:08 PM

Me: Really? Again? Do I really need to read the article to verify my logical conclusion?

Brain: Yes, read it.

Me: Really? Do I need to? I mean, I know the answer. Really, I know.

Brain: Just read it to make sure.

Me: But, I know. I’ve seen this before.

Brain: Check it. It may be different.

Me: But I know. I don’t need to read the article. I know what is there.

Brain: Trust, but verify.

Me: (reads) Damnit Brain, I told you! Why do I listen to you?


Clive Robinson September 22, 2021 5:11 PM

@ Sut Vachz, FA,

Re: infinite sequence when measured can have no bias

Is not what I actually said…

What I actually said was,

“An infinite sequence when measured can have no bias of any kind other than fundamental bias (like a DC level)”

You left the very important last bit off.


Not sure if the following fits the context properly, but what about a sequence of 1’s and 0’s where ever longer and longer runs of all 1’s and all 0’s follow one another?

That depends very much on when “you measure” it (look again at what I actually said).

If you think about it in terms of a graph, it falls into a “what goes up comes down” catagory.

If you integrate it you get a triangular wave that increases in amplitude with time.

But do you in the real world? Simple answer: “NO”. Even in a computer simulation you run out of bits to store the integration count in. Eventually you get either underflow or overflow. Then one of two things will happen: either the counter will wrap around through zero and carry on going in the direction it was going, or the over/underflow will get tagged or trapped in some way, and then either the count gets stopped or held at the end-stop value.

In the real world think of a physical process it is in one of three states,

1, Energy is being put in.
2, Energy is conserved.
3, Energy is lost.

Your waveform is in the first category, a sinewave would be in the second, and a decaying waveform would be in the last.

Now think about a real physical process: a long flat surface tipped up at an angle. If you roll a ball onto it, it either rolls straight down the slope or follows a curve that decays into rolling straight down the slope, due to the force of gravity, the friction of the surface, and the resistance of the air.

However what happens if the ball gets to the edge of the sloped surface? Well it depends, if there is no “wall” at the edge then it drops off, if there is, the ball hits it and bounces back whilst still rolling down the slope.

If you photograph the slope, the ball will be at some position with respect to the centre line. That is the bias you see in your measurement. If you take three photos in quick succession, you will see from them the ball effectively heading in a straight line towards the bias point in the last photo. The further apart in time you take those photos, the less straight the line may appear to be, but in most cases you will see it converging on the bias point in the last photo.

There is of course the issue that whilst your first one or two photos show the ball heading towards the edge, your later photos may not show the ball, because it went over the edge, or because it has bounced away from the edge and is now heading in a different direction.

What you do know is that no matter how long the slope is, if it has a wall at the edge, eventually the ball will come to rest at the bottom as it gives up its energy to the environment by friction; and if the bottom is fair and level and the friction uniform, then the ball will travel in a straight line.

Those are the practical realities of “measurement”. Usually the ball will follow an entirely deterministic course, and from the output you can calculate back to the starting conditions. But sometimes that determinism may be “chaotic”, that is, so sensitive to the input conditions that the output cannot be rolled back reliably to the input conditions, nor can knowledge of any point in the ball’s path be used to reliably predict its final condition.

But “chaotic” is not “random”. To be random the determinism has to be removed, and we actually do not know how to do that for most physical systems. The laws of physics as we currently understand them do not allow for it; they just allow for increasing complexity as “particles/objects interact and transfer energy to each other under the influence of forces”.
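The chaotic-but-deterministic distinction can be sketched with the classic logistic map (purely illustrative, nothing to do with any particular hardware source): the orbit is completely determined by its start, yet a one-part-in-10^12 nudge to the initial condition destroys all predictive power within a few dozen steps.

```python
def orbit(x0, steps):
    # Logistic map x -> 4*x*(1 - x): fully deterministic "chaos".
    xs = [x0]
    for _ in range(steps):
        xs.append(4 * xs[-1] * (1 - xs[-1]))
    return xs

a = orbit(0.2, 60)
b = orbit(0.2 + 1e-12, 60)   # starting point nudged by one part in 10^12

# Indistinguishable at first, then wildly different: the separation
# roughly doubles each step, so it saturates within ~40 iterations.
assert abs(a[1] - b[1]) < 1e-10
assert max(abs(a[i] - b[i]) for i in range(40, 61)) > 0.5
```

Knowing the rule and the exact start recovers everything; knowing the rule and an imperfect start recovers almost nothing. That is chaos, and it is still not randomness.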

Sometimes the flights of mathematical theory get thrown under the bus of practical reality and crushed into the bounds of a physical constraint or measurement. A tame metrologist knows this, as it is the bread and butter of their working existence. As for mathematicians, their flights of fancy are unbounded and so can reach infinite highs or lows that neither the finite physical universe nor any simulation within it can accommodate in reality.

If you ask “How many digits are there in Pi?” the real answer is not “infinite” but “we can never know”. Georg Cantor showed that to be true.

SpaceLifeForm September 22, 2021 5:18 PM

@ JonKnowsNothing, ALL

Random Roulette

Jon’s wheel is a good design. The bias is known, and no spinning is required.

Exercise for readers:

Using only Math and Physics, figure out why any roulette wheel with more than one sector must have a bias, no matter how many sectors exist.

Hint: As Clive noted, more sectors help it to appear more random.

Hint: The more sectors, the more spins it will require (observations) to determine which way (zero or one) the bias goes.

The bias will exist no matter what number of sectors exist. One need not build their own Random Roulette wheel. You can prove this to yourself via pen and paper, and thinking about building a Physical wheel without actually having to do so.

SpaceLifeForm September 22, 2021 6:33 PM


Always has been. (Gun visible)

Downgrade attack!

Always has been. (Gun visible)


Clive Robinson September 22, 2021 6:53 PM

@ JonKnowsNothing, SpaceLifeForm,

The existing behavior of +bars and -bars can be traced to the daily-weekly FBI flyovers of the area doing their collect-it-all Cessna flights.

You saying this has kind of brought to a head something I’ve been mulling over for a while now…

As you may remember from some of my previous posts, I’ve been keeping an eye on ADS-B hobbyists and just what they can do in the way of OSINT.

The reason for this intriguing hobby is,

It’s a legal requirement by the FAA and others around the world that if ADS-B beacons are fitted they must be activated whenever the aircraft is under its own power. In effect, even if the aircraft is not being flown but the “pilot” turns on the electrics for any reason, even when up on chocks in the hangar.

Now ADS-B systems transmit plain-text packets of data including the aircraft’s unique identifier, which can be traced through public open-access databases.

Some ADS-B receiving hobbyists have uncovered CIA covert operations and several other bits of “embarrassing” information about Federal agencies and some State agencies.

Well the FBI has to follow the law when it comes to aircraft flights just as any other government agency is required to do. Not doing so can cause one heck of a “5h1t storm” and “turf war”.

The thing is, legally you cannot just chuck a stingray unit in an aircraft; it breaks any number of laws and regulations, and it’s not just the FAA that will start “throwing the toys around” if you get caught doing so. And there is nothing more appealing to some federal agency heads than an excuse to send the Federal Marshals into FBI offices and business…

The chances of getting ADS-B encrypted any time soon are very low, as are the chances of changing any of the existing international agreements about the required identifiers in the transmissions.

Which boils down to this: you can fairly quickly work out when an FBI flight, be it an aircraft owned by the FBI or leased by it, is in the air around your location.

If you modify some Open Source software, you can come up with a programme that uses an SDR to get the ADS-B packets as they are transmitted and look them up in a DB of identifiers to owners, or a local “watch list” you’ve compiled.

You can use this to sound an alarm, or turn a mobile phone off.
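
As a minimal sketch of that lookup step (not Clive’s actual software): the frame layout below is the standard 112-bit DF17 Extended Squitter, with the 24-bit ICAO address in bits 9-32; the watch-list contents, the example address, and the alarm action are purely hypothetical placeholders.

```python
from typing import Optional

# Hypothetical watch list: ICAO 24-bit addresses (hex) -> notes.
WATCH_LIST = {
    "a12345": "example surveillance aircraft (hypothetical entry)",
}

def icao_address(frame_hex: str) -> str:
    """Extract the 24-bit ICAO address from a 112-bit ADS-B frame.

    In a DF17 Extended Squitter the address occupies bits 9-32,
    i.e. hex characters 2..7 of the 28-hex-digit message.
    """
    if len(frame_hex) != 28:
        raise ValueError("expected a 112-bit (28 hex digit) frame")
    df = int(frame_hex[:2], 16) >> 3          # downlink format = top 5 bits
    if df != 17:
        raise ValueError("not a DF17 Extended Squitter")
    return frame_hex[2:8].lower()

def check_frame(frame_hex: str) -> Optional[str]:
    """Return the watch-list note if this frame matches, else None."""
    try:
        return WATCH_LIST.get(icao_address(frame_hex))
    except ValueError:
        return None

# "8d" has top five bits 10001 = 17 (DF17); the next six hex digits
# are the hypothetical watched address.
print(check_frame("8da12345" + "0" * 20))
```

A real implementation would feed frames in from an SDR decoder and cross-reference a registration database; here the single frame is hand-built for illustration.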

So if your ADS-B antenna is up on the roof, some considerable height above your mobile phone antenna, then the “radio horizon” radius of the ADS-B antenna will be significantly larger than that of the mobile phone.

So… the aircraft’s ADS-B transmitter will be heard by your rooftop ADS-B antenna at a considerably greater distance than your mobile phone will hear the aircraft-mounted stingray.

So in effect you can have your mobile phone off before the stingray gets to have its way with it.
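
The radio-horizon point can be put in rough numbers with the standard 4/3-earth-radius approximation (distance in km is about 4.12 times the square root of the antenna height in metres); the heights below are illustrative guesses, not measurements.

```python
import math

def radio_horizon_km(height_m: float) -> float:
    """Radio horizon for an antenna, 4/3-earth-radius approximation."""
    return 4.12 * math.sqrt(height_m)

# Illustrative heights: rooftop ADS-B antenna vs a phone at hand height.
roof = radio_horizon_km(10.0)     # about 13 km
phone = radio_horizon_km(1.5)     # about 5 km
print(round(roof, 1), round(phone, 1))
```

For an aircraft at altitude the two horizons add (4.12 times the sum of the square roots of both heights), so the rooftop antenna hears the aircraft’s beacon well before the phone is within reach of anything on board.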

Now the fun thing is, not only can you look up the ADS-B identifier to find the aircraft and owner, you get to find out all sorts of other information, some of which concerns modification permits.

In theory, if you put a stingray in an aircraft you want the antennas to be on the outside of the aircraft. The work to do that needs not just a permit but an inspection report, which is entered into a federal register, in effect making these “public documents”.

The thing is, this has a knock-on effect. Once you’ve got a permit and report for one type of aircraft, it is less expensive and arduous to stick with that aircraft type than to go through the process with an entirely new type each time.

So… just making a note of which types of aircraft fly at the times you see unusual behaviour with the phone gives you, by correlation, not just the likely suspect but the types of aircraft favoured by the agency. So if a new aircraft comes into the area, just finding out its type can effectively give a “heads up” when the agency brings a new aircraft into the area…

Whilst not perfect, it can give a thoughtful person greatly increased privacy in their home etc.

SpaceLifeForm September 22, 2021 6:58 PM

Beware of illusion


Mowmowfi September 22, 2021 9:57 PM

MarkH and FA are starting to annoy; one post summed up what you meant, yet they try another angle. I’m just hoping they find out, or ask me.
You still do a lot of work, are you 25 with a brain?

Wesley Parish September 22, 2021 9:59 PM

@Clive Robinson

I’m thinking along the lines of “sick building” syndrome, which arrived on the tail of Legionnaires’ Disease, iirc. Such does not require some supernaturally smart and powerful adversary; it just requires that the state whose representatives are being made ill by these buildings has been starving the agency whose officials these are of the funding required to maintain its buildings.

I think Tom Engelhardt of might have some evidence on that issue, of various administrations starving the US State Department of support over a period of decades.

Unfortunately it is beginning to look as if the CIA and other interested parties are using this to ramp up development of biological and directed-radiation non-lethal weaponry … Uncle Sam’s traditional, congenital behaviour, sticking foot inside mouth then shooting oneself in the foot. It gets so, sooo messy!

name.withheld.for.obvious.reasons September 22, 2021 10:34 PM

I have mentioned before that search results from google have evidently trimmed the results of a search that would include analysis and critique of the U.S. national security state. For example when looking up HR 4168 of 2015 I used to get hundreds of results from articles and blog posts that covered the topic. Now results reflect government sites, think tanks, and pro-state media sources; only two references were returned for questioning the wisdom of such laws.

Now on the U-of-Tubes, searches for videos that include national security topics seem to follow the same pre-filtered results. Or, for those more interested in naming it in context, yes it appears that U-of-Tube is NOT including segments or videos that cover national security topics.

name.withheld.for.obvious.reasons September 22, 2021 10:48 PM

Over a month ago I warned of issues in Florida based on some numbers that were not being monitored: the occupancy of hospital beds and the ICU units available. The dramatic upward sweep of both data elements gave the impression that the state was in trouble. Now more than half a dozen states are nearing medical emergency and seeking outside support for their citizens. And ironically, those asking for this assistance act to decouple the responsibility of managing medical utilization from the need to sell their citizens their freedom. In essence, governors are selling their citizens the belief that they have their freedom. The citizens sold this just didn’t read the fine print where it says your mileage and/or freedom may vary depending on your driving habits or your ability to continue breathing.

AL September 23, 2021 12:40 AM

The government has for years enlisted the corporate media in shaping public opinion. Now, it seems, they are enlisting internet interests in pursuit of their One Narrative™ initiative.

In the case of Google, I put in a search term and got pages of useless stuff. I used Yandex and found what I was looking for in the very first result.

It seems that if there is a search result that interferes with the government’s narrative, it will be suppressed. Google is still good for technical searches, such as coding.

Winter September 23, 2021 12:50 AM

“Now more than half a dozen states are nearing medical emergency and seeking outside support for the citizens of their state. ”

ht tps://

Clive Robinson September 23, 2021 1:40 AM

@ Mowmowfi,

You still do a lot of work, are you 25 with a brain?

Well… my body is of a similar age to our host @Bruce’s; I used to joke about who had “more badger in their beard”. Unfortunately my body is now very definitely falling apart in several ways, so there is no more walking twenty miles or cycling a hundred every day to get exercise and thinking time.

My brain, on the other hand, for various reasons still thinks it’s in its early twenties, and is still extraordinarily curious and inquisitive and does not want to slow down or “act its age”.

It feels you can leave the preponderance of gravitas for “The near dead philosophers society” 😉

I’ve been known to say I’ll never retire, as I’ve too much to do, but that’s just the brain talking 😉

Winter September 23, 2021 1:42 AM

“Now more than half a dozen states are nearing medical emergency and seeking outside support for the citizens of their state. ”

ht tps://

While the number of COVID-19 patients in Florida hospitals has started to come down from record-breaking highs, Mestre is concerned that hospitals could be overwhelmed soon from a combination of a COVID-19 and flu surge in the fall if people refrain from both vaccines.

Sut Vachz September 23, 2021 1:50 AM

@SpaceLifeForm and others

A Wagon Wheel is like a Roulette Wheel: it has divisions, one or few or many; but unlike the bias of the Roulette Wheel, which disappoints your hope that it will take you home, the Wagon Wheel is biased to really take you home, and even if it doesn’t make it, it still provides a true journey. Axiom: what goes around, comes around. That’s math.

https: //

SpaceLifeForm September 23, 2021 2:04 AM

@ Clive

LOL. You are going to confuse the AI/ML. I will try to help.

if you ask “How many digits are there in Pi” the real answer is not infinite, but “we can never know”. Georg Cantor showed that to be true.

It was really Plato, Euclid, Archimedes, and Apollonius that started the party. Cantor was not around yet.

And everyone knows that Pi is equal to 3.


Clive Robinson September 23, 2021 2:26 AM

@ Wesley Parish,

I’m thinking along the lines of “sick building”, which syndrome arrived on the tail of Legionnaire’s Disease, iirc.

It arrived via an accountant’s tut-tutting… Architects were persuaded that brutalist glass-and-concrete design was the way to go, as it was quick, fast and above all cheap. But it was not just visual oppression for those walking by that came with it; it was the brutality of a closed and enforced environment, with the banishment of diurnal cycles for 24h working, killing circadian rhythms and those that need them shortly thereafter. They are places where a “mouse fart in the basement” gets endlessly recycled throughout the building in the name of “efficiency”, and it is only by human ingestion that such organics get removed…

Now of course such inane stupidity has become an enabler for a virulent pathogen, just to emphasize how stupid certain types of humanity are.

But the point against your argument for this as-yet-unexplained syndrome is that some people who don’t work in such stygian environments are suffering from it as well.

Which is just another reason why I’m thinking what ever it is, it has to be multi-part.

As for what is happening to the US State Department, it is to be expected. It grew too large, became too powerful, and is staffed by those of a certain disposition and outlook. They are now out of step not just with the wishes of society, but more importantly with those with the money to influence the politicians.

Such a large beast cannot be dispatched by sword and shield with one clean stroke, so instead the insidious poisoner is called in to slowly kill the beast, body and soul, organ by organ, whilst the official executioner inflicts the visible death by a thousand cuts. Such is the way of those who would make plans in the dark. So expect a round of “puritanical cleansing” to remove those who still have power or visibility, and to cower those who remain.

SpaceLifeForm September 23, 2021 2:52 AM

The Onion should sue for … something.

This is unfair competition.

Tim Cook says employees who leak memos do not belong at Apple, according to leaked memo


SpaceLifeForm September 23, 2021 3:14 AM


Parse slowly so it sinks in

Amit wrote:

In 4 months I gathered hundreds of thousands of domain credentials without sending a single packet.



Clive Robinson September 23, 2021 8:46 AM

@ Sut Vachz,

… the Wagon Wheel is biased to really take you home, …

Look in the movies at old black and white cowboy films… which way do they turn?

Sometimes “the record of events” and reality are the opposite of each other. It really depends on the way you measure things…

Clive Robinson September 23, 2021 9:00 AM

@ SpaceLifeForm, ALL,

Parse slowly so it sinks in

I don’t really need to, I’ve seen the basic faults so many times…

Fault One, the “fall-back attack”: you MITM a comms link and force the lowest level state you can.

Back many years ago there was a lot of noise about a well-known, supposedly secure terminal emulator falling back to “ASCII Plaintext”…

OK this is “basic HTML” but when sniffing for a credential “Do you care as long as it is there?”

So absolutely no excuse on that one…

Fault Two, the Microsoft favourite that just keeps on giving… It gave us, amongst other things, “Plug and Pray” attacks and, dare I say, “USB rooting”. It is the old “ease of use over root of trust”.

I could go on but what the heck were Microsoft doing?

These are decades old attacks, that Microsoft developers are obviously not learning from…

Or for that matter many other developers of commercial / consumer software.

Clive Robinson September 23, 2021 9:20 AM

@ Sut Vachz, SpaceLifeForm,

And for a different turn of the wheel,

Those in the UK of a certain age or older will remember this tune from a Saturday early evening show and a body builder. It came from some independent television company somewhere in the English Midlands, which also used to give the grannies their favourite Saturday afternoon sport, “wrestling”, which gave us some bloke called “Shirley” who claimed he was everyone’s “Big Daddy”…

Apparently these were the height of entertainment half a century ago… I kid you not. Look up Shirley Crabtree on Wikipedia…

Sut Vachz September 23, 2021 11:06 AM

@Clive Robinson

Re: strobing wagon wheels

And yet one gets through the counter-revolving door, a trick that, for most, is not negotiable in real life 😉

The original String-A-Longs version of that song has some nice guitar grace-note work and vibrato that seems to have been left out by later covers, perhaps to smooth it out for dancing.

Sut Vachz September 23, 2021 12:03 PM

@Clive Robinson @FA @SpaceLifeForm

Tried to post this earlier, but the chthonic gods, i.e. the Moderator, seems to have buried it …

You had said

”An infinite sequence when measured can have no bias of any kind other than fundemental bias (like a DC level) because within the range of a measurement what goes up comes down, wraps around or hits an end stop, so eventually with enough samples the average converges on some constant value at infinity.”

so I thought infinity or approach to it (of number of samples) was ”allowed”, since the word was used a couple of times, suspending questions about finiteness of machine and measurement resources.

I agree with the practical realities you outline.

Pedantic point about pi, and Cantor –

pi has no digits, it is just itself, a ratio, not even a quantity really, rather a relation between quantities. But you can compute closer and closer greater and smaller ratios of integers, by whatever scheme you like etc.

The modern way of arithmetizing everything, like the modern way of using symbolic logic, makes computation easy, but completely obscures and disconnects from the things that are ultimately being dealt with.

Cantor was one of those largely responsible for reviving the error of the actual infinite, which had been disposed of by the classical Greek mathematicians, at least Aristotle, so perhaps one can say that Cantor did say there were an infinite number of digits in pi. 😉

SpaceLifeForm September 23, 2021 1:54 PM

Flying Bugs

It has always just been a matter of time.


MarkH September 23, 2021 2:50 PM

Clive has written that radioactive decay time measurements modulo a small value must be biased, and in particular that any radioisotope TRNG must suffer from bias as decay weakens the source.

I’ve taken his assertions very seriously, in the way scientists do: by examination and test.

Using earthquake timings as a stand-in for radioactive decay timings, I failed to detect bias in the modular-time sampling method.

I randomly “struck off” earthquakes (at an invariant rate) to simulate a weaker source, and still failed to detect bias (results shown in an earlier comment on this thread).

Perhaps what Clive had in mind is not the “static” effect of a weakened source, but rather the “dynamic” effect of the continuous gradual lengthening of the mean time between events.

I’ve made an experiment to simulate decay, filtering earthquakes randomly with a probability which increases as a function of time to model the ever-decreasing population of nuclei in a radioactive source. I have failed to detect bias.
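
A sketch of that style of simulation, assuming synthetic Poisson events rather than the earthquake data (the rate, half-life and event count below are illustrative choices, not MarkH’s actual parameters): each inter-event interval is drawn at the source’s current, slowly decaying rate, then the event time is sampled modulo 60 seconds as a TRNG might.

```python
import math
import random

random.seed(1)

def simulate(n_events: int, rate0: float, half_life: float) -> list:
    """Event times for a Poisson source whose rate halves every half_life.

    Drawing each interval at the current rate is a reasonable
    approximation when half_life is much longer than the mean interval.
    """
    lam = math.log(2.0) / half_life
    t, times = 0.0, []
    for _ in range(n_events):
        t += random.expovariate(rate0 * math.exp(-lam * t))
        times.append(t)
    return times

times = simulate(200_000, rate0=1.0, half_life=500_000.0)
seconds = [int(t) % 60 for t in times]      # modular sampling, 0..59
mean = sum(seconds) / len(seconds)
odd = sum(1 for s in seconds if s % 2) / len(seconds)
print(round(mean, 2), round(odd, 4))        # unbiased ideals: 29.5 and 0.5
```

By the end of this run the source has weakened by roughly a quarter, i.e. the decay is enormously accelerated relative to a real radioisotope, which should exaggerate any bias the decay could introduce.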

MarkH September 23, 2021 2:59 PM

Simulation of Exponential Decay

The column headed “full” includes all earthquakes in the dataset from 2004 up to a few days ago. The “const” column corresponds to source weakening by about 14.5 percent. The “exp” column simulates the inverse-exponential decrease of population as would occur in a radioactive source; the end-of-period population is 85.2 percent of the initial population.

Case         full     const     exp
----         ----     -----     ---
Events      128942   110276   119051
Mean Delta  29.39     29.37    29.44
Halves       0.22      0.43     0.11
Odd/Even     0.65      0.54     0.58
entr/bit     0.999950  0.999946 0.99949
min-entr/bit 9.9773    0.9770   0.9764

“Mean Delta” is the relationship between successive time samples (perfect would be 29.5). “Halves” and “Odd/Even” are gross imbalances (first half of minute vs. second half, odd seconds vs. even seconds) as percentage point deviation from the mean. The entropies per bit are respectively Shannon, and a simple estimation of min-entropy.

MarkH September 23, 2021 3:17 PM


[1] I switched the starting year from 1990 (in my previous analyses) to 2004, because the USGS records have a substantial increase in events per year from the late 1990s through 2003. Perhaps this reflects some expansion of the seismic network.

To include an increasing rate in the time series would be a rather stark departure from modeling a radioactive source; since the range seems fairly steady from 2004 on (excepting the noise inevitable in such datasets), I chose that as the start year for best approximation to a radioisotope source.

[2] If there were some dynamic bias effect from continually lengthening mean intervals, it would apply throughout the life of the TRNG — including when it’s first manufactured! — not just when it’s getting old.

The time required for the mean decay interval to increase by 1% is constant; that’s what exponential decay means.
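
That claim is easy to check numerically; a quick sketch with an arbitrary illustrative half-life:

```python
import math

half_life = 1000.0                       # arbitrary illustrative value
lam = math.log(2.0) / half_life

def rate(t: float) -> float:
    """Relative event rate of an exponentially decaying source."""
    return math.exp(-lam * t)

def time_to_drop_one_percent(t_start: float) -> float:
    """Time from t_start until the rate is 99% of its value at t_start."""
    target = 0.99 * rate(t_start)
    return -math.log(target) / lam - t_start

early = time_to_drop_one_percent(0.0)
late = time_to_drop_one_percent(50_000.0)
# Both equal ln(1/0.99)/lam: the interval is the same whether the source
# is brand new or long past many half-lives.
print(round(early, 3), round(late, 3))
```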

[3] The exponential decay in the above simulation is, in a sense, absurdly accelerated: the source decays 20 percent in about 10^5 detections, whereas for a practical radioisotope TRNG, the number of available detection opportunities for this degree of source weakening would be at least several orders of magnitude greater.

If there were any bias effect from source decay, the simulation should show it very strongly indeed!

MarkH September 23, 2021 3:26 PM


At the bottom of the first column, the min-entropy is shown as 9.9773, which is obviously wrong: the maximum possible is 1!

The correct value is 0.9773.

MarkH September 23, 2021 3:36 PM

Corrected Table

Case         full     const     exp
----         ----     -----     ---
Events      128942    110276   119051
Mean Delta  29.39     29.37    29.44
Halves       0.22      0.43     0.11
Odd/Even     0.65      0.54     0.58
entr/bit     0.999950  0.999946 0.999949
min-entr/bit 0.9773    0.9770   0.9764

I also missed out a ‘9’ from the Shannon entropy in the 3rd column. My bad.

SpaceLifeForm September 23, 2021 3:58 PM

@ Clive

Flying Bugs

Yes, same story. Sorry for not finding a better link. I will add NPR to my list, which includes NYT, WAPO, BBC, and other Murdoch junk.

MarkH September 23, 2021 9:48 PM


We have been discussing an assertion that exponential decay causes bias in the sequence of numbers from modular timings of non-deterministic events.

So far, I have been unable to imagine how both (a) the assertion can be true and (b) the experiments I have performed on non-deterministic earthquake data yield extremely low bias measurements.

What are your thoughts?

Clive Robinson September 23, 2021 10:40 PM

@ lurker,

“man charged withfaking positive covid 19 test”

Whenever something like this happens, it is “because they see” some advantage in it.

Thus the question arises of “can we see it?” and if so “is it an advantage to us?”.

As a generalisation criminal law regards a person as making mentaly competent and rational choices before they commit a crime. Remove any of the components and the crime may not have occured even though the event did (think murder -v- manslaughter etc).

So although the article did not say what the motivation was, the fact there has been an arrest kind of implies at the moment the authorities believe the man had a rational reason to do what he did, so we assume something to gain by it.

I guess we are going to have to wait to find out.

Sut Vachz September 24, 2021 5:12 AM

Those following measurement practice might find these articles of interest.

Whitney, Hassler. “The Mathematics of Physical Quantities: Part I: Mathematical Models for Measurement.” The American Mathematical Monthly 75, no. 2 (1968): 115–38.,

Whitney, Hassler. “The Mathematics of Physical Quantities: Part II: Quantity Structures and Dimensional Analysis.” The American Mathematical Monthly 75, no. 3 (1968): 227–56.,

FA September 24, 2021 10:01 AM

@MarkH (in previous squid thread)

I visualize a broad maximum in the neighborhood of the mean frequency. Because the intervals between decay events are extremely scattered, I suppose there to be energy spread throughout the range of the spectrum, slowly diminishing according to distance from that peak.

A signal consisting of a narrow pulse for each decay event and zero otherwise will on average have a flat power spectrum plus some DC [1]. There is no peak at the mean event frequency.

[1] A finite-width impulse will turn that into a sin(x)/x shape, with the first zero at 1 / pulse_width. This is still essentially flat in the region of interest.

MarkH September 24, 2021 2:47 PM


On the high frequency end of the spectrum, yes.

I visualize the low frequencies as dominated by the vastly longer (and very irregular) recurrence of the pulses.

The “left side” shows the character of the pulses; the “right side” the events.

I’ll run a histogram from intervals between quakes, as a crude proxy to the spectrum.

MarkH September 24, 2021 10:50 PM


I computed my approximate spectrum and tried to post it; three different edits all fell into the “held for moderation” bin, which as far as I have observed seems to work like the event horizon of a black hole.

MarkH September 24, 2021 11:47 PM

@FA, Clive:

Spectrum notes:

[1] The log-log plot was compiled by interpreting intervals between successive quakes as a period, and computing the log2 of the corresponding frequency (as events per day).

The earthquakes were then counted by octave in a histogram.

[2] The total number of events is 128,942 and the span of the data is about 17.7 years or about 6,470 days, so the mean frequency is about 20 per day.

Note what I would consider a broad peak spanning the frequency range of 8 per day to 64 per day.

MarkH September 24, 2021 11:50 PM

Spectrum notes, continued:

[3] My statistical competence is miniscule; in a purely visual sense, it seems to me that the shape of the plot resembles a Poisson distribution, which I believe describes the time (not frequency) distribution of radioactive decays.

[4] Clive, you have more expertise in the frequency domain than any of us. Is the computation I applied plausible for spectrum estimation? Can anyone suggest a software package with which it’s convenient to compute a cumulative (integral) spectrum for a long list of unequally spaced events?

FA September 25, 2021 3:49 AM


Can anyone suggest a software package with which it’s convenient to compute a cumulative (integral) spectrum for a long list of unequally spaced events?

A crude [1] version only takes around 40 lines of Python (including plotting and some empty lines). Uses random.expovariate() to generate the events.


Done on Linux but should work everywhere.

[1] Rounding event times to the nearest integer sample index.
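
FA’s linked script isn’t reproduced here, but a hedged reconstruction along the lines described (expovariate events rounded to the nearest sample index, then an FFT; all parameters are guesses) might look like:

```python
import random
import numpy as np

random.seed(0)
N = 1 << 16                 # number of samples in the impulse train
mean_rate = 0.02            # events per sample (mean interval = 50 samples)

# Build the impulse train: one unit impulse per event, rounded to a sample.
x = np.zeros(N)
t = 0.0
while True:
    t += random.expovariate(mean_rate)
    i = int(round(t))
    if i >= N:
        break
    x[i] += 1.0

power = np.abs(np.fft.rfft(x)) ** 2
ac = power[1:]              # drop the DC term

# Average power in the lower vs upper half of the band is essentially
# equal, i.e. no peak appears at the mean event frequency.
lo = ac[: len(ac) // 2].mean()
hi = ac[len(ac) // 2 :].mean()
print(round(lo / hi, 2))
```

Plotting the spectrum in dB, as FA’s version apparently does, would just add a matplotlib call at the end.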

FA September 25, 2021 4:08 AM

@MarkH, @moderator

I computed my approximate spectrum and tried to post it; three different edits all fell into the “held for moderation” bin, which as far as I have observed seems to work like the event horizon of a black hole.

Same experience here. Makes it quite difficult to maintain a discussion.

FA September 25, 2021 11:18 AM


The log-log plot was compiled by interpreting intervals between successive quakes as a period, and computing the log2 of the corresponding frequency (as events per day).

Interesting, but that is not a ‘spectrum’. Nor is it really the probability distribution of having N events per day. To get that, you’d need to divide the available time range into equally sized pieces (e.g. 10 days), count the number of events in each, and make a histogram of the counts.

I don’t think that what you do – interpreting the time between successive events as the reciprocal of the average frequency – is valid. But it’s not so easy to explain why in an intuitive way…

FA September 25, 2021 11:38 AM


(Continued, trying to explain the suspected flaw in your procedure)

Suppose you see two successive events that are one hour apart. So you say the average rate is 24 / day.

Then you see two successive events that are 6 hours apart, so you say the average rate is 4 / day.

In your scheme, the two get the same weight. But the latter covers a period that is six times longer, so it should get six times the importance of the former in the statistics.

Another unrelated thing to think about: the average of a set of numbers is not the same as the reciprocal of the average of the reciprocals…
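
Both points fit in one tiny computation (the one-hour and six-hour gaps are the ones from the example above):

```python
intervals_h = [1.0, 6.0]                 # hours between successive events

# Arithmetic mean of per-pair rates: each gap gets equal weight.
rates = [24.0 / h for h in intervals_h]  # "events per day" per pair
naive = sum(rates) / len(rates)          # = 14.0

# True average rate: total events over total time (time-weighted).
true_rate = 24.0 * len(intervals_h) / sum(intervals_h)   # 48/7, ~6.86

print(naive, round(true_rate, 2))
```

The unweighted scheme over-counts the short gap; weighting by elapsed time recovers the honest events-per-day figure.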

Sut Vachz September 25, 2021 1:17 PM

@FA @MarkH

the average of a set of numbers is not the same as the reciprocal of the average of the reciprocals…

Unless it’s the geometric average.

Which suggests the question what is the most appropriate average to use, arithmetic, geometric, harmonic, etc., or does it matter ?

MarkH September 25, 2021 2:37 PM

@FA, Sut:

I’ve been careful to refer to the plot I constructed as a “proxy”, “estimation” or “approximation” … I don’t propose that it’s a proper spectrum!

My intuition is that the actual spectrum would be qualitatively similar, but I won’t know unless I run an actual spectrum, which would take some doing.

Literally, of course, the plot represents a distribution of intervals between successive events in the dataset.

Thanks to FA for linking the Python code. I haven’t familiarized myself at all with the marvelous population of packages available for Python. From a first glance, it seems that numpy and scipy can do a prodigious variety of computations.

MarkH September 25, 2021 2:38 PM

As FA observes, the plot is definitely not “the probability distribution of having N events per day.” The tallies are non-zero up to extremely high numbers, something like a million per day.

Setting to one side the question of whether anybody would survive to tabulate the statistics of a day with a million strong quakes, I’m dubious that it’s physically possible.

As a humble analogy, the ability to detect 15th harmonics in the spectrum of a square wave does not imply a measure of probability that zero-crossings might occur at 15 times the fundamental frequency.

Sut Vachz September 25, 2021 4:00 PM

@FA @MarkH

In statistics, the arithmetic average rules. Because it defines the expected value

That is so, but how do we know we are handing the appropriate measurements for our problem to the statistical machine? The geometric mean can also be regarded essentially as the arithmetic mean of the logarithm of another quantity. How do we know which to use?

SpaceLifeForm September 25, 2021 5:07 PM

@ MarkH, FA, Sut Vachz

It may be curious to inspect changes in the running Root Mean Square of two or more consecutive event timestamp deltas.

May mean nothing. Just food for thought.
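
A minimal sketch of that idea, with a hypothetical window size and made-up timestamps:

```python
import math
from collections import deque

def running_rms(deltas, window=2):
    """Yield the RMS of each sliding window of consecutive deltas."""
    buf = deque(maxlen=window)
    for d in deltas:
        buf.append(d)
        if len(buf) == window:
            yield math.sqrt(sum(x * x for x in buf) / window)

timestamps = [0.0, 1.0, 4.0, 5.0, 9.0]    # made-up event times
deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
print([round(r, 3) for r in running_rms(deltas)])
```

For a genuinely memoryless source the running RMS should just wander around the scale set by the mean interval; any drift or pattern in it would itself be a hint of structure.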

MarkH September 25, 2021 10:28 PM

@FA et al:

Thanks again for linking to your python fft code.

I adjusted the array size to the number of minutes in a year, and the lambda argument of expovariate for about the right number of events per year.

The resulting plot is completely level (extremely noisy, but with a dense scattering of maxima of equal strength, independent of frequency). I imagine this is how a white noise spectrum would plot.

For comparison, I tried assigning every member of array X from the random() function. Excepting a small difference in vertical (db) scaling, the plots are qualitatively identical.

I haven’t tested with real-world data, though it’s not obvious to me that the results would be different.

Mowmowfi September 27, 2021 2:43 AM

What I said: add 01010101… to where there’s a gap between quakes, and apply any function to when there’s a quake and when there’s not.

The zeros randomize the data, so they add bias.

Mowmowfi September 27, 2021 2:50 AM

Or 111100111100; it’s a heck of a Bible code, with exponential decay in accuracy after the last data point.
