Comments

Sarah September 24, 2021 5:30 PM

Is there a use case for ciphers whose purpose isn’t necessarily security (too easy) but more in a similar vein to easily breakable ones like ROT13, but with more forward protection?

I encountered this phenomenon where, if you mix up an alphabet and then keep yours while the other person keeps a version rotated 13 shifts, it acts like ROT13 but with the added benefit of being like a password on blog posts.

So only the person you gave the key to can spoil a blog post. (Provided they don’t know frequency analysis and other methods, as it’s not meant for security.)
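
For anyone who wants to play with the idea, here is a minimal Python sketch (my interpretation of the scheme described above, not a definitive construction): both parties derive the same shuffled alphabet from a shared seed, and encoding maps each letter to the one 13 positions further along that shuffled alphabet. Like ROT13, applying the mapping twice restores the plaintext.

import random
import string

# Minimal sketch (one interpretation of the scheme above): a "keyed ROT13".
# The shared seed acts as the password; without it an outsider only sees a
# keyed substitution, still breakable by frequency analysis, as noted.
def make_alphabet(seed):
    letters = list(string.ascii_lowercase)
    random.Random(seed).shuffle(letters)
    return "".join(letters)

def rot13_keyed(text, seed):
    alphabet = make_alphabet(seed)
    rotated = alphabet[13:] + alphabet[:13]
    return text.lower().translate(str.maketrans(alphabet, rotated))

secret = rot13_keyed("spoiler: the butler did it", "shared secret")
print(secret)
print(rot13_keyed(secret, "shared secret"))  # applying it again round-trips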

SpaceLifeForm September 24, 2021 5:52 PM

@ Clive, ALL

Data Attacks

Gee, who knew about Malicious Ads?

Not these groups apparently.

Two [redacted] decades to buy a [redacted] vowel? Really?

hxtps://www.vice.com/amp/en/article/93ypke/the-nsa-and-cia-use-ad-blockers-because-online-advertising-is-so-dangerous

SpaceLifeForm September 24, 2021 11:41 PM

Interesting

hxtps://www.ctvnews.ca/canada/china-releases-detained-canadians-kovrig-spavor-after-extradition-against-meng-wanzhou-dropped-1.5598969

SpaceLifeForm September 25, 2021 1:00 AM

Me: Really? Again?

Brain: Why not? It is a different day.

Me: Because I know what is there.

Brain: You say that all of the time. You have been saying this for years.

Me: So, Brain, do you really think it will be different this time?

Brain: Always check.

Me: But I know. I don’t need to read it.

Brain: Yes, just do it. You know you are a glutton for punishment.

Me: Good point.

Brain: Just listen to me.

Me: (reads) Yeah, you are correct. I am a glutton for punishment.

hxtps://thehackernews.com/2021/09/colombian-real-estate-agency-leak.html

Clive Robinson September 25, 2021 3:01 AM

@ SpaceLifeForm, JonKnowsNothing,

And in other news,

Of ducks quacking and geese hissing,

hxtps://bc.ctvnews.ca/mobile/b-c-health-officials-release-true-hospital-numbers-after-public-pressure-1.5599428

What is the betting that calling ducks geese is going on elsewhere?

Clive Robinson September 25, 2021 5:57 AM

@ SpaceLifeForm,

Interesting

But not surprising.

The history of what went on in Canada under “just about Trump-dough” is a clear indication of politically inspired malice and, worse, toadying and fawning that did not get rewarded, just brought a world of hurt down on others.

A certain person has persistently said she is innocent of all charges, has pled that way in court, and has been released after nearly four years. The others have been found guilty; one has been sentenced to 11 years, the other has not yet had a sentence handed down.

The truth is we really did not know, and still don’t, whether any of the charges are real or even applicable.

However, when one side starts an abuse of power, what do people really expect to happen?

The US has a history of “not thinking long term” you can see that from Nuremberg onwards, and it almost always has blowback or other undesirable repercussions.

To “act without thinking” is a fairly obvious indicator of “poor impulse control”, and generally those with “poor impulse control” get themselves into all sorts of trouble. Worse, they also get those that hide behind them into trouble as well, which I suspect certain people north of the border are further contemplating.

As was once observed,

“Hang with a crook, expect to be hung like a crook.”

What applies to individuals also applies to sovereign states.

Clive Robinson September 25, 2021 6:55 AM

@ SpaceLifeForm,

Brain: Always check.

You might want to do a search on,

Sigurdur Ingi Thordarson

I’ll let your “Brain” talk you into it…

Then you could look up what some are saying about,

Marianne Ny

Yup, a mess to put it mildly, then there is the “follow the money” question,

https://www.rt.com/uk/535450-british-taxpayers-julian-assange-extradition-bill/

That 300,000 is in fact about the lowest figure you could arrive at; the reality is probably ten times that, if it’s looked into properly.

Oh and this suggests you might want to add another media outlet to your “don’t link to” list,

https://antinuclear.net/2021/09/09/92998/

It’s all fun fun fun this month, NOT, not that you would know it from the MSM.

Etienne September 25, 2021 8:28 AM

For about a week now Voip.ms has been under a DoS attack, and has refused to pay a ransom. My hat’s off to them! They are spending the ransom money on themselves, by building defences against the attack.

Another reason why IPv4 is now fully obsolete, and why people using it should be charged a VAT.

JonKnowsNothing September 25, 2021 8:42 AM

@Clive, SpaceLifeForm, All

re: Of ducks quacking and geese hissing and some dogs barking

I’ve been digging through the (nearly empty) set of data on the sub-lineages of SARS-CoV-2. It’s not been easy to find much on the topic.

  • There are 8 sub-lineages to Alpha B.1.1.7 Q1-Q8
  • There are 34+ sub-lineages to Delta B.1.617.2 AY.1-AY.34

For reasons best described as Dogs Is Dogs, the majority of reports on Delta are just “Delta”, including those from the CDC, the California Dept of Public Health, etc. However, this is not exactly so.

Their reasoning is:
  The effects are the same, you get sick, you might go to hospital, you may die.

Genetically, these are all different. Mutations arise and decline spontaneously, and if there is an identical mutation, it is assigned to an existing pool. If it’s a new set of mutations, it gets a new designation. For Alpha it’s Q1-Q8. For Delta it’s AY.1-AY.34 (or more).

Their concept is: Dogs is Dogs. True.

However, if I’m interested in being a Dog-Servant, there’s a difference between a Great Dane and a Chihuahua. There’s a difference between a West Highland White Terrier and an Airedale Terrier. The size of the poop scoop you need for one is one thing, and the size of the bag of canine food you need to feed it is another.

The AYs are regional differences in the genetics and genome analysis is now mapping more than just the spike protein because SURPRISE! the entire virus is mutating, not just the spike section.

Just as in the original transmission of SARS-CoV-2, the transmission of the many sub-lineages can migrate the same way. This is not what HIP-Economies want to focus on, so they can hide the complexity of virus mutations and the interactions that provide the virus more mutation opportunities under a single placeholder name.

Consider: Alpha has 8 sub-lineages, Delta has 34+ sub-lineages

Woof Woof

===

ht tps://outbreak.info/
ht tps://outbreak.info/situation-reports

Clive Robinson September 25, 2021 9:44 AM

@ JonKnowsNothing, SpaceLifeForm,

Woof Woof

Indeed, like all cats look grey in the dark, all dogs sound alike.

One of the things just using “alpha” or “delta” makes difficult to see is the “rate of mutation”.

As I’ve mentioned before, the rate of mutation is based on the “prevalence”, or number of people currently infectious. So knowing when and where all the variants occurred is actually important and should not be hidden away.

But as you note,

This is not what HIP-Economies want to focus on, so they can hide the complexity of virus mutations and the interactions that provide the virus more mutation opportunities

Because amongst other things it enables them to not just hide reality, but distort it to their political view point. Which is unlikely to be in “Jo(e) Citizens” favour.

I still say that currently our most powerful weapon against SARS-CoV-2 is “robbing it of hosts”. Importantly, because the fewer potential hosts there are, the less likely there is to be another mutation.

There are two basic ways to rob a virus of hosts,

1, Effective area quarantines
2, Effective vaccination policy.

We certainly do not have the latter and quite a number of politicians are actively stopping the former, even though it has been shown to work, and do so effectively.

The only way “vaccination will get us out of it” is to stop the increase in mutations in the first place, which we are not doing. Because it requires the prevalence to be brought down significantly, which we clearly are not doing. We only know of one way to do this, and that is by stopping community spread, and we only know of one way to effectively achieve that… area quarantines combined with hard lockdowns where required.

It is not rocket science, it does not require a new “moonshot” just sensible and effective measures. All of which do not please certain people, especially politicians and their paymasters…

lurker September 25, 2021 12:19 PM

@J.K.N, Clive

… because SURPRISE! the entire virus is mutating, not just the spike section.

Yup, so the rate of vaccination must exceed the rate of mutation if we are to win. There seemed to be a hollow ringing sound to a certain promise of 2Bn doses of Pfizer to 3rd world countries by the end of next year. Timeliness, infrastructure, …

There are currently trials of a vaccine that disrupts the main protease, which sounds at first like a good idea, until the thing learns that spikiness isn’t all.

JonKnowsNothing September 25, 2021 1:22 PM

@ lurker, Clive, All

re: Deliveries of vaccines to other countries by the end of next year

Among the many things that boggle is the lack of insight into the vaccination of the global population. The necessity of this process has been noted many times.

The exchanges noted in MSM take two forms:

  • ACountry is “giving” BCountry n-quantity of NearExpirationDate vaccines, but they are not “giving” as in gift/free gratis; they are expecting a “swap” for an equivalent n-quantity of LongGoodForDate vaccines.
  • ACountry is “selling” a portion of their overstock to BCountry for hard cash money plus a guaranteed replacement quantity.

HIP-Economies cannot see past $$$ but there is likely someone who sees a profit to be made by the above strategies.

A recent MSM business report “Second hedge fund pushes for change in leadership of GlaxoSmithKline ” details what would be common practice in Silicon Valley startups where the Original Crew is dumped after initial development (Proof of Concept) and a New Crew is brought in to BringItToMarket with VCFunding and then this group gets the big bother-boot as the VCs claim all the loots.

How this group is going to react to Gen2Vax (eta 4Q 21) and Gen3Vax (1Q 22) and the distribution systems is not likely to be altruistic.

There are parallels but do they meet on the horizon?

The real summer September 25, 2021 2:55 PM

@Clive Robinson

“we only know of one way to effectively achieve that…area quarantines combined with hard lock downs where required”

That boat has sailed. There’s no chance of getting rid of it in the UK now. I suspect even the Aussies will have to come to terms with it. At some point, SARS-CoV-2 will be officially recognised as endemic. Maybe if we can create an effective sterilising vaccine we could stop it, but nobody has ever done that for a coronavirus as far as I know. There’s a reasonable discussion of the issues here:
https://www.nature.com/articles/d41586-021-00396-2

vas pup September 25, 2021 4:42 PM

China declares all crypto-currency transactions illegal
https://www.bbc.com/news/technology-58678907

“China’s central bank has announced that all transactions of crypto-currencies are illegal, effectively banning digital tokens such as Bitcoin.

“Virtual currency-related business activities are illegal financial activities,” the People’s Bank of China said, warning it “seriously endangers the safety of people’s assets”.

===>It is the latest in China’s national crackdown on what it sees as a volatile, speculative investment at best – and a way to launder money at worst.

The technology at the core of many crypto-currencies, including Bitcoin, relies on many distributed computers verifying and checking transactions on a giant shared ledger known as the blockchain.

As a reward, new “coins” are randomly awarded to those who take part in this work – known as crypto “mining”.

China, with its relatively low electricity costs and cheaper computer hardware, has long been one of the world’s main centers for mining.

The activity is so popular there that gamers have sometimes blamed the industry for a global shortage of powerful graphics cards, which miners use for processing crypto-currencies.

The Chinese crackdown has already hit the mining industry.”

Anders September 25, 2021 5:41 PM

@Clive @SpaceLifeForm @ALL

‘Russian hackers target US food and grain production’

hxxps://www.thetimes.co.uk/article/russian-hackers-target-us-food-and-grain-production-zfn0p9r0z

SpaceLifeForm September 25, 2021 6:40 PM

@ vas pup

Re: China declares all crypto-currency transactions illegal

Me: Road trip?

Brain: You know where the plant is, why not?

Me: Because it is more than ten miles.

Brain: So? What else are you doing today?

Me: Tired, and besides, the plant and the cryptomining operation will still be there tomorrow.

Brain: Good point. BTW, I am opening an
AWS casino.

Me: Good Luck. I’ve not thrown craps in years in any AWS casino.

Brain: You are due. But, I am moving to the Random Roulette table.

hxtps://arstechnica.com/tech-policy/2021/09/old-coal-plant-is-now-mining-bitcoin-for-a-utility-company/

Which is why an investor-owned utility has dropped a containerized data center outside a coal-fired power plant 10 miles north of St. Louis. Ameren, the utility, was struggling to keep the 1,099 MW power plant running profitably when wholesale electricity prices dropped. But it wasn’t well suited to running only when demand was high, so-called peaker duty. Instead, they’re experimenting with running it full-time and using the excess electricity to mine bitcoin.

SpaceLifeForm September 25, 2021 7:30 PM

@ vas pup, Clive

I forgot to mention that there is no significant Solar or Wind generation close to this plant. None.

This plant does not, and never has served a big base.

Next closest big plant is coal. Next big plant is Nuclear.

There really is no Solar or Wind production between this plant and the Nuclear plant that is about 100 miles away (Callaway). There certainly could be, as it is rare that it is cloudy and windless in this area for long periods. There is a story about Callaway, but for another day. (Hint: welding X-raying)

Ameren (nee Union Electric – hint) is lying.

This is fascism in action, on full display.

Clive Robinson September 26, 2021 3:34 AM

@ SpaceLifeForm,

The phrase of note in the article is,

Joshua Rhodes, a research associate at the University of Texas at Austin, told E&E News. “It can have a positive emissions impact if it’s run the right way,” he said. “It can also increase emissions if it’s not.”

Whilst there is a germ of truth in that statement, if you do some maths the conditions required for the former are rather less likely than those for the latter.

Who is “Joshua Rhodes”?

http://www.webberenergygroup.com/people/joshua-rhodes/

He has written what appears at first to be a paper that might be of interest, available from that page link, titled “Are solar and wind really killing coal, nuclear and grid reliability?”

http://www.webberenergygroup.com/publications/solar-wind-really-killing-coal-nuclear-grid-reliability/

Well it’s not actually a “paper” it is an article for the “The Conversation” back in 2017,

https://theconversation.com/are-solar-and-wind-really-killing-coal-nuclear-and-grid-reliability-76741

And although it appears to have political overtones from the last sentence of the abstract,

“As energy scholars based in Texas – the national leader in wind – we’ve seen these dynamics play out over the past decade, including when Perry was governor.”

They are not really there.

What is more important is to use the article as a basis for seeing what went badly wrong in Texas. Their market, mainly due to cheap gas, saw prices drop. Whilst some blamed renewables such as Solar, Water, and Wind, it was actually they that saved Texas from its “overly efficient” supply chain that had no “spare capacity”, something nature keeps telling us, in every direction we look, is important.

MarkH September 26, 2021 3:05 PM

@Andy F, All:

I just speed-read the bizarre article linked by Andy. To be clear, the authors are respected journalists and their reportage is based on numerous sources; the facts, rather than the reporting thereof, make it bizarre.

It will interest many readers here, especially those like Clive who are interested in how intelligence agencies function/malfunction.

The madness seems to have reached a peak in 2017 when the CIA believed that Ecuador and Russia were intending to exfiltrate Assange to Moscow:

“It was beyond comical,” said the former senior official. “It got to the point where every human being in a three-block radius was working for one of the intelligence services — whether they were street sweepers or police officers or security guards.”

SpaceLifeForm September 26, 2021 3:54 PM

@ Andy F

Vault8. Missing dots. Spy vs Spy.

hxtps://www.emptywheel.net/2021/09/26/the-yahoo-story-about-all-the-things-cia-wasnt-allowed-to-do-against-wikileaks/

MarkH September 26, 2021 4:31 PM

@SpaceLifeForm:

Thanks for the link, to the detailed (as usual) critique from Marcy Wheeler.

For me, Wheeler makes rather too much of the sensational headline, because headlines are usually not written by the reporters (I’m aware of this because journalists often complain about headlines assigned by editors.)

We can probably rely on the facts in the yahoo article; Marcy properly points to the “English” (borrowing the term from billiards), and even more interestingly what the article didn’t address … though the hand of editors could well also be in play in that regard.

Clive Robinson September 27, 2021 12:43 AM

@ SpaceLifeForm,

Re : Empty Wheel / Yahoo

I’m cautious about Marcy Wheeler and not just because of her “potty mouth”.

She has some distinct bias and it comes through quite strongly from time to time.

For instance she claims in her article that Julian Assange lied, yet the evidence she presents –via a link– is of a professor making a statement in court.

She spends time berating the professor’s statements, saying a student would not make such mistakes.

Yet there she is, clearly making the same mistake not just as the professor, but the one she accuses the Yahoo article of…

The simple fact is too many journalists are turning towards “click-bait” and even earlier “yellow journalism”. Whilst I can not definitively say it’s “endemic” as I don’t read more than a very small percentage of media outlets these days, those that I have leave me with that feeling, and a strong desire to read less and less of it.

As for Yahoo news stories, I treat them the same way I do Bloomberg stories, with significant suspicion as to their content, so much so that I don’t read either any more, unless they are brought to my attention.

I’ll be honest and say, I’ve come to regard Yahoo news in the same way I do Rupert “the bear faced liar” Murdoch’s rags, in the way they obtain their news without checking or accreditation, dress it up falsely, slap click-bait titles on it, and spray it out, in much the same way those with chronic food poisoning do partly or fully digested waste matter.

Whilst others might say I’m being a little harsh, or even unkind, it would be up to them to show why they would think that.

I think we should all recognise the fact that “media circus” can have more than one meaning, and that manipulation of news is way beyond “fake news”, being out and out propaganda by vested interests. Also the hypocrisy of “Pots calling Kettles Black” by political leaders; it’s all really Orwellian, but ultimately a sad indictment of society in the West.

SpaceLifeForm September 27, 2021 12:45 AM

Silicon Turtles

hxtps://www.bleepingcomputer.com/news/security/microsoft-wpbt-flaw-lets-hackers-install-rootkits-on-windows-devices/

SpaceLifeForm September 27, 2021 1:35 AM

EpikFail and dots to SolarWinds attack

hxtps://www.dailydot.com/debug/epik-hack-subpoenas-data-preservation-leak/

Another domain that was hosted by Epik was allegedly used in the malware attack against SolarWinds, a Texas-based IT company that provides services to numerous federal agencies. Malware used against SolarWinds pinged a site hosted by Epik. In late 2020, Epik was asked to preserve its records on the domain before being served a subpoena one week later. Epik identified the subpoena as being from the FBI. SolarWinds itself was not a recipient of the subpoena, nor does it appear to have any involvement with Epik.

SpaceLifeForm September 27, 2021 1:49 AM

@ Clive

Petrol problems in UK

Lack of fuel or lack of delivery drivers?

Lack of drivers is a problem on this side of pond, but not for fuel. So far.

hxtps://www.getsurrey.co.uk/news/surrey-news/surrey-petrol-shortages-live-updates-21669651

name.withheld.for.obvious.reasons September 27, 2021 6:49 AM

27 Sept 2021 — Clinicians and Un-aspirated Injections of Corona Virus mRNA Vaccine, Study Findings

From the Oxford journal Clinical Infectious Diseases

THE QUESTION:
The effect of accidental intravenous injection of this vaccine on the heart is unknown.

STUDY FINDINGS:
Post-vaccination myocarditis and pericarditis reported after coronavirus mRNA vaccines.

For the published peer-reviewed research paper: https://academic.oup.com/cid/advance-article/doi/10.1093/cid/ciab707/6353927

Clive Robinson September 27, 2021 7:18 AM

@ SpaceLifeForm,

Petrol problems in UK

Lack of fuel or lack of delivery drivers?

Well it depends on who you ask, but the actual problem is “significant supply chain” failures due almost entirely to the moronic approach to “Brexit”.

The incumbent political party however are claiming it’s some form of satanic plot to bring down the blow job by unions or some such nonsense.

If you want to point a finger at a person or two then you could start with,

1, Theresa May (Ex PM and former Home Office Minister).
2, Priti Patel (Home Office Minister and pure racist poison).

In essence, the number of Heavy Goods Vehicle (HGV) drivers holding UK passports has dropped very steadily for the past 30 years or so. They got replaced by people from the far east of Europe as their nations became members of the EU (if not long before, by fraudulent paperwork).

The result: the “cost of living differential” drove UK HGV drivers into other occupations, which are way better for their health.

Now of course we have the party of “No foreign scum” who despite what was obviously going to happen “drove the bus over the White Cliffs”.

So, a major major shortage of HGV drivers. Apparently the politicians’ solution: call out the military, and write to those who held HGV driving licences to persuade them to go back into a dead-end job on very low wages, at a time when the UK could be heading into consumer hyper-inflation… The price of food has already gone up 100-300%, even on basic foodstuffs.

Well as you can imagine, the shortage of HGV drivers has caused other supply chain issues, especially with things that have to cross borders. Such as the 200 medications on the UN-WHO essentials lists and just about everything else the health service needs. Including those little tubes for taking blood… I was at the hospital for half a dozen different blood tests. Well, rather than use the normal number of bottles they squeezed it down to just two… As for patching the hole, normally, as I’m a “right little bleeder” on “rat poison”, they put on a sensible pad of gauze and about a foot of medical “micropore tape”… This morning it was rationed: just one half square of gauze folded up quite small and just a two-inch length of tape… When I asked, I was told that I was lucky because of the rat poison; others were not getting anything more than a dab of cotton wool about the size of a pea, that they had to hold on the hole to stop the bleeding…

Oh there is a story already circulating that the politicians are considering “protected status” for “National Security” such that past HGV drivers can be forcefully conscripted, if they do not respond to the letters being sent out.

It would not be the first time this political party had pulled this sort of stunt, as many in the security agencies either remember all too well or have been told.

Clive Robinson September 27, 2021 7:50 AM

@ Name.withheld…, ALL,

Post-vaccination myocarditis and pericarditis reported after coronavirus mRNA vaccines.

I am not in the least surprised; there have been “pointers” in hospital data for some time.

I was recently admitted to hospital with “cardiac event” issues, part of which was a blood clot the size of the end of your thumb in my right atrium, which is what they politely indicate has certain “circulatory disadvantages”, or as others might put it, “heart attacks, strokes, and death within 4 minutes”…

What they kept asking were questions about “COVID” and “vaccinations”; they looked really worried when I said I’d had my second shot just 19 days earlier… It does not take much of a guess, and a few pointed questions, to see that the medical staff have suspicions reasonable enough to ask quite directed questions.

Mind you, don’t expect to hear much about it: the Pfizer and Moderna mRNA “yellow card” reports show that when it comes to some blood clots (hepatic portal) they are something like fifty times the expected background, and just as bad if not worse than the Oxford AZ vaccine for other blood clots. The reason you only heard about it for the Oxford vaccine, well, that was down to Sky News reporters, presumably at the request of Rupert “the bear faced liar” Murdoch, who exerts considerable, highly undesirable influence on US, UK and other Western media and politicians (look back at that nasty bit of legislation in Australia).

As I said back at the beginning, the mRNA vaccines had no long-term efficacy findings, which is why I chose the less risky Oxford vaccination.

However, that would not have been my choice if other, more traditional vaccines with fairly well-known long-term complications had been available.

The problem was, and still is, trying to discuss these issues rationally, as they are very definitely a “security issue”, in particular “National Security”, even if you get attacked on all sides by those for whom rationality does not appear to be of concern (though handfuls of cash may well be to them).

SpaceLifeForm September 27, 2021 1:59 PM

@ Clive

Re emptywheel

I threw that out there for another perspective.

Believe me, Marcy and I do not read the Tea Leaves the same way all of the time. There are plenty of missing dots. But, she is very organised and does the research.

SpaceLifeForm September 27, 2021 4:02 PM

@ Clive, ALL

Supply Chain issues

Some C-suite folk are learning ECON-101

Local major distributor is trying a $500 signing bonus for warehouse or driver roles, IF THEY WORK ONE MONTH. They would do better if that was a permanent salary increase. Corps are cheap, and don’t want to pay well for hard physical labour. But, they still have not bought a vowel, so there is a shortage of warehouse workers and drivers. Also, a shortage of stockers for retail. The goods may be in the back but not on the shelves. Deliveries are always late. BTW, this started pre-Covid.

https://www.reuters.com/world/uk/british-warehouse-worker-shortage-triggers-up-30-pay-spike-2021-09-27/

  • Warehouse sector is short of tens of thousands of workers
  • Crisis will hit customer delivery times
  • UK already reeling from lack of truckers

SpaceLifeForm September 27, 2021 5:07 PM

EpikFail Chewbacca defense?

The passwords do not fit into the glove.

hxtps://www.twitter.com/micahflee/status/1441554221183033346

Epik’s utter lack of security & terrible decisions boggle my mind. They logged plaintext passwords for login failures, MD5(password) on success.
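
To make concrete why unsalted MD5 password digests are so weak, here is a hypothetical illustration in Python (the digest and wordlist below are made-up examples, not from the Epik data): recovery is just a dictionary lookup. A salted, deliberately slow hash (bcrypt, scrypt, Argon2) is what defeats this.

import hashlib

# Hypothetical illustration (not the Epik data): an unsalted MD5 digest can
# be reversed with a plain dictionary lookup -- no cracking rig needed.
leaked_digests = {"5f4dcc3b5aa765d61d8327deb882cf99"}   # md5("password")

wordlist = ["123456", "qwerty", "password", "letmein"]
for candidate in wordlist:
    digest = hashlib.md5(candidate.encode()).hexdigest()
    if digest in leaked_digests:
        print(f"recovered: {candidate!r} -> {digest}")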

MarkH September 27, 2021 5:32 PM

.
Design Study for Radioisotope TRNG

@Freezing_in_Brazil, FA, Clive et al:

https://onlinelibrary.wiley.com/doi/full/10.4218/etrij.2020-0119

This should be of special interest to Freezing_in_Brazil: the concept (Figure 24) is for a single-package device, 3 mm square, incorporating a β source, radiation shield, specially designed particle detector, and an integrated circuit for analog and digital processing.

The paper is the work of seven technologists in the Republic of Korea, with research institute and academic affiliations.

They hadn’t (at the time of publication) built the whole thing, but rather a source and detector, using lab instrumentation and a PC to do the work of the integrated circuit.

The mean output rate is 160 bits/second of cryptographic quality true random bits.

Though their English writing is quite accomplished for native speakers of Korean, patience and focus are needed for detailed comprehension.

MarkH September 27, 2021 5:54 PM

.
Design Study for Radioisotope TRNG, Part 2

According to the paper, a megabyte of collected output passed every NIST test for hardware random number generators.

For those unfamiliar with the NIST tests, they search extensively for numerous “symptoms” that the supposedly random data might be patterned, biased, contain internal repetitions or correlations, and so on.

According to the paper, no hash or other cryptographic “whitening” function was applied: the tested data were raw bytes from their decay timings.

The raw data were found to have extremely low bias — low enough for the demanding application of generating random numbers for security-critical cryptographic use.
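
For a flavour of what such a test looks like, here is the simplest one, the monobit frequency test from NIST SP 800-22, sketched from memory as an illustration rather than a certified implementation: it simply checks that ones and zeros occur about equally often.

import math
import random

# Sketch of the monobit (frequency) test from NIST SP 800-22, written from
# memory as an illustration, not a certified implementation.
def monobit_p_value(bits):
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)     # +1 per one, -1 per zero
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))    # p-value; >= 0.01 counts as a pass

sample = [random.getrandbits(1) for _ in range(1_000_000)]
print(monobit_p_value(sample))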

MarkH September 27, 2021 6:10 PM

.
Design Study for Radioisotope TRNG, Part 3

From my reading of the paper, the raw bits are extracted one byte at a time, by sampling the 8 least significant bits of a free-running counter.

I see no other plausible interpretation of the last paragraph of section 4.1. I propose that their word “rest” should be understood as “remainder” or “residue.”

Because the mean interval between decays is about 50 msec, and the 8 least significant bits of their high-frequency (80 MHz) counter wrap around in 3.2 usec, the ratio of these times exceeds 15,000. This guarantees that “proximity bias” (an effect of successive decay detections occurring at a modest multiple of the time modulus) is too small to be measured. I expect that this ratio is needlessly large, and that they could extract more bits per detection while maintaining ultra-low bias; perhaps FA can do the analysis.
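
To make the extraction step and that ratio concrete, here is a rough Python sketch of my reading of the method (not the authors’ code): decays arrive as a Poisson process, and each detection latches the 8 least significant bits of the free-running 80 MHz counter.

import random

# Rough sketch of the extraction described above (my reading, not the
# authors' code): each decay detection samples the 8 LSBs of a free-running
# 80 MHz counter, giving one raw byte per event.
CLOCK_HZ = 80e6
MEAN_INTERVAL_S = 1 / 20            # ~20 decays/s -> ~160 raw bits/s
WRAP_S = 256 / CLOCK_HZ             # the 8-bit residue wraps every 3.2 us

print(MEAN_INTERVAL_S / WRAP_S)     # ratio = 15625, the ">15,000" above

def raw_bytes(n, rng=random.Random(0)):
    t = 0.0
    out = []
    for _ in range(n):
        t += rng.expovariate(1 / MEAN_INTERVAL_S)   # Poisson inter-arrival time
        out.append(int(t * CLOCK_HZ) & 0xFF)        # 8 LSBs of the counter
    return bytes(out)

print(raw_bytes(8).hex())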

MarkH September 27, 2021 6:12 PM

@Freezing_in_Brazil, FA, Clive:

Note that the TRNG design presented above uses the so-called “roulette wheel” method for extracting random bits from radioactive decay.

According to the linked paper, it functions flawlessly.

Clive Robinson September 27, 2021 6:48 PM

@ SpaceLifeForm, ALL,

Some C-suite folk are learning ECON-101

But not all…

It would appear that Amazon are trying it on in the UK.

From what I’ve heard Amazon are desperately short of warehouse staff, many of whom left because the big boss and those below appear to be suffering from that special type of idiocy that some call moronic.

Put simply, the Amazon corporate attitude to COVID and warehouse staff was probably bordering on insane. So when “UK benefits” were effectively the same, why take the “health risk”?

Well, now that the UK Gov wants to cut benefits by the equivalent of $30/week, Amazon have started advertising on major radio networks to get staff, promising good pay rates…

Only it appears Amazon will not be even close to honest about what those pay rates or working conditions will be like, or more importantly what dirty “claw back” tricks Amazon Corporate will implement to get wages down below the UK’s legal “minimum wage”. Which was not even close to a living wage prior to Brexit and COVID, both of which have in effect caused major, almost hyper, inflation on basic things like food, cleaning materials etc.

Even without COVID, these “close in” “supply chain” issues were predicted long prior to the Brexit vote, where it was indicated that the Eastern EU “cheap labour” would be “kicked out”, thus “getting the idiot vote” thinking they could blackmail better wages etc out of employers. All that has really happened is that businesses have gone bankrupt, because there really are way way too few people to fill all those supply chain jobs. The catch-22 is that the employers cannot pay more because they cannot put prices up, because people don’t have the money to buy. So a reverse ratchet is in operation and the UK consumer economy is close to being put on “major life support”. Some shops have even gone to the extent of only stocking “high profit” items of what are in effect “essentials” like bread…

Oh and guess what, I’m fully expecting more companies to “go bankrupt” due to “energy prices”. Whilst we may have sufficient “petrol”, albeit in the wrong places, we really don’t have sufficient “natural gas”, which, as it’s the major base fuel for both heating and electricity production, is going to cause very very significant problems. From the news, several consumer “energy suppliers” have already gone bankrupt, with more to follow. For political reasons the UK Government are probably going to cap domestic prices, which means commercial prices will be used to take up the income slack, which will just get passed to the ordinary citizen through much higher food and other prices that are not regulated.

But what of the net effect of food and fuel poverty that is rapidly rising? At the very least many people are going to be way way more susceptible to respiratory disease. So if not SARS, then Flu or another pathogen with a marked level of infirmity / lethality and reduced life expectancy…

SpaceLifeForm September 28, 2021 12:08 AM

EpikFail

Who wrote this Comedy?

hxtps://twitter.com/micahflee/status/1442332192021958656

Yup. It logs failures if there’s a typo in the username too, which means lots of real passwords are in the fails

SpaceLifeForm September 28, 2021 12:54 AM

Another day, another dump

Bet this one was a result of the EpikFail.

And password re-use.

A few more tables, and some outer-join queries, and stuff will start happening.

hxtps://www.dailydot.com/debug/oath-keepers-hack/

A membership list for the organization contains more than 38,000 email addresses, although it’s unclear which are linked to current and former members. The email addresses in some instances are also tied to names, physical addresses, phone numbers, IP addresses, and donation amounts made to the militia. Official U.S. military email addresses are also littered throughout the breach.

Weather September 28, 2021 1:20 AM

@Clive All
Food prices in NZ have gone up 25% in two years on some staples like cheese and meat; the Labour government has increased the minimum wage and the benefit.

A new green hydrogen plant is opening, but we still need to buy coal from offshore for a main power plant, which is going to get worse with electric vehicles that use a whole day’s house usage in one hour.

The rich used to own cars and others horses; now the rich own horses and others cars; in the future…

MarkH September 28, 2021 2:40 AM

Radioisotope TRNG Footnotes:

[1] The sampled bits (of a high-frequency counter modulo a number equivalent to a short time interval) are crypto-quality random because (a) all residues modulo the small time interval are equally probable, and (b) the times at which nuclei decay are completely independent.

These causes are not affected in any way by a decrease in the population of unstable nuclei, so gradual weakening of the source adds no bias.

[2] Clive made an interesting observation on a previous thread about metastability in the digital circuitry when the randomly occurring detection pulses “meet” the clocked counter at some latch.

The incidence of extra counts in timer values cannot be reduced to zero, though it can be much reduced by careful design.

Question: If you have a list of completely unpredictable residues modulo n, and you randomly increment some fraction of them (also mod n), how does that affect the statistical properties of the dataset?

Boiled down, suppose you have a record of 1,000,000 fair coin tosses, and toggle some specified fraction on the list at randomly chosen positions. How would this affect the balance of heads vs tails, the likelihood of long runs of the same face, or other statistical measures?
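
One way to get a feel for the answer is a quick simulation rather than a proof; the sketch below toggles a small fraction of a fair-coin record at random positions and compares simple statistics before and after.

import random

# Quick empirical sketch of the question above: toggle a small fraction of a
# fair-coin record at randomly chosen positions and compare basic statistics.
# Since the toggle positions are themselves random, the balance and run
# statistics should be essentially unchanged.
rng = random.Random(42)
N, FRACTION = 1_000_000, 0.001

bits = [rng.getrandbits(1) for _ in range(N)]
toggled = bits[:]
for i in rng.sample(range(N), int(N * FRACTION)):
    toggled[i] ^= 1

def longest_run(seq):
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

for name, seq in (("original", bits), ("toggled ", toggled)):
    print(name, "heads:", sum(seq), "longest run:", longest_run(seq))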

JonKnowsNothing September 28, 2021 2:51 AM

@Clive

re: we may have sufficient “petrol” but in the wrong places

I remember when that same dialog about the “half tank vs full tank run on the bank-pump” was handed out during the Big Oil Shock in the USA.

All the petrol disappeared from pumping stations and lines lasted hours and folks ran out of fuel while waiting in lines. It was a communal necessity to help push the dead engines up to the pumps as those cars were blocking the queue.

Then it was even-odd license plate number days, when you could go park in the line hoping to get a minimum amount of fuel. Some places you even had to turn on the engine so they could verify how much the gas gauge showed. If your fuel gauge didn’t work, it was iffy whether you would get any petrol at all.

While we don’t have anything quite like this now, we do have some clever-clogs who found an unlimited source of free fuel: take it from farm vehicles, especially the ones that are not theirs. Yes indeed, the not commonly known market-theft rings that target farms, farm machinery and farmers.

They not only take the fuel, the livestock, and fencing materials, they rip open well heads to extract the copper wires going to the submersible pump-head and they also steal the water out of the holding tanks and leave the valves open so that the well pump goes into overload and burns up the motor.

I would suggest it’s better to run out of petrol than to run out of water. Unfortunately, it is likely we will run out of both. The first because it’s not a long term viable option and the second because it’s all been polluted by high pressure injection attempting to keep the first going a bit longer as the process causes micro-fractures throughout the underground systems letting the crude-crud seep into the water tables.

Here in the dust jacket image of a Steinbeck novel, we already have the problem of what were “safe” but are now “prohibited” ag-chemicals. Cities have to play whack-a-mole-well to find clean water.

FA September 28, 2021 4:23 AM

@markH

I expect that this ratio is needlessly large, and that they could extract more bits per detection while maintaining ultra-low bias; perhaps FA can do the analysis.

This type of bias is inversely proportional to the square of the ratio you mention, so it would be extremely low.

The main weakness of a design such as this would be TEMPEST issues. If the 80 MHz clock and the event times leak, it’s basically game over. The clock is a very narrow band signal and would be detectable at extremely low levels, limited only by phase noise. So a low quality clock with lots of phase noise would actually be better… The event times are high bandwidth, so a relatively clean signal would be required to recover them with the necessary accuracy. Still, this is a matter for concern.

A real-life implementation would require extensive shielding and filtering, adding considerably to size and price point.

Clive Robinson September 28, 2021 4:41 AM

@ MarkH, interested others,

With regards the paper you link to, I’d treat it with care.

For instance,

“Second, since it is an entropy source based on quantum mechanics, it generates almost perfect random numbers, which is verified by NIST test.”

Is not a very wise thing to say; it’s effectively “magic pixie dust thinking” muddled up with a lack of knowledge.

Because,

1, The quantum mechanics argument is not valid in this case.

2, The NIST tests in no way prove that an RNG output is “random”. All they give is a very weak assumption of not producing a very small set of statistically detectable biases.

But it gets worse. Take a careful look at their first equation,

“where ncr is the count rate (number of pulses/s)”

Look at it and then have a think about why Sir Isaac Newton used “infinitesimus” in his “fluxions” and the implications it has.

Freezing_in_Brazil September 28, 2021 7:48 AM

@ MarkH

This should be of special interest to Freezing_in_Brazil: the concept (Figure 24) is for a single-package device, 3 mm square, incorporating a β source, radiation shield, specially designed particle detector, and an integrated circuit for analog and digital processing.

Thanks, my friend. Much interested, indeed. That’s exactly in line with my views:

it is very cumbersome to embed security algorithms, random number generation algorithms, and even physical noise sources in small IoT devices.

I’ll be perusing the material. By the way, the Earthquake approach is very interesting, too. It could even be combined with other natural-phenomena signals for improved strength.

Regards!

(*) Feedback to the admins: I notice that posts disappear only when I submit from the preview page.

Freezing_in_Brazil September 28, 2021 8:19 AM

@ All

Re MarkH’s TRNG link

160 bits per second seems fairly good.

(*) a quick search reveals lots of people experimenting with many techniques [betavoltaic, photonic…]. Looks like a race.

SpaceLifeForm September 28, 2021 1:52 PM

@ JonKnowsNothing, ALL

hxtps://www.cvilletomorrow.org/articles/scientists-at-uva-believe-they-have-found-the-key-to-creating-a-universal-coronavirus-vaccine/

“It’s the part of the virus called the fusion peptide,” Zeichner said. “When the virus binds to other cells this is what it uses to stick itself into the cell so that it can release its genetic material. This fusion peptide is always the same across all variants. For some reason, there is something essential in it.”

SpaceLifeForm September 28, 2021 5:15 PM

@ Freezing_in_Brazil, Clive, ALL

For best results

Make sure you use a forward slash on your closing blockquote tag when you skip Preview. Same applies to other html tags. Avoid markdown.

Eyeball Parsing

vas pup September 28, 2021 5:32 PM

Human learning can be duplicated in solid matter

Findings may help to advance artificial intelligence
https://www.sciencedaily.com/releases/2021/09/210922121828.htm

“Rutgers researchers and their collaborators have found that learning — a universal feature of intelligence in living beings — can be mimicked in synthetic matter, a discovery that in turn could inspire new algorithms for artificial intelligence (AI).”

SpaceLifeForm September 28, 2021 6:08 PM

@ ALL

“Find My network”

In Soviet Russia, Network finds you!

hxtps://www.twitter.com/matthew_d_green/status/1442871446460452867

The text from a screen cap:

Participating in the Find My network lets you locate this iPhone even when it’s offline and after power off.

Clive Robinson September 28, 2021 6:22 PM

@ SpaceLifeForm, Freezing_in_Brazil, ALL,

Make sure you use a forward slash on your closing blockquote tag when you skip Preview. Same applies to other html tags. Avoid markdown.

I don’t use “markdown” and I don’t think anybody does on a regular basis…

As for the forward slash… As has been noted before, sometimes the blog software blows up with the “blockquote” tags for no apparent reason, even when the tags are being used correctly.

I’ve checked this by using the “back” button/function in the browser; the “blockquote” tags are correctly formatted with the right tags etc…

MarkH September 28, 2021 10:51 PM

@Clive, Freezing_in_Brazil, FA:

These days, my policy and advice is to treat all academic/research papers with care.

You wrote about the NIST tests that

All they give is a very weak assumption of not producing a very small set of statistically detectable biases.

While I’m rather baffled by the meaning of a “very weak assumption,” if you wish to offer guidance for better bias testing, I (and at least two other readers) will study your recommendation with interest.

I think it’s mathematically correct to say that no analysis of outputs can demonstrate (or “prove”) randomness in the comprehensive meaning of the term.

MarkH September 28, 2021 10:53 PM

@Clive et al, continued:

The outputs of the rig tested by the Korean authors are non-deterministic, because they are derived from high-resolution timing of non-deterministic events for which no prediction is even possible, beyond the plotting of a smooth probability density function. Unless the equipment is malfunctioning, the numbers are inherently a function of a random variable: it’s neither possible nor necessary to prove this by analyzing outputs.

The application of the NIST tests (or Diehard or what have you) is to report whether the bias in the a priori random dataset exceeds a defined threshold.

Weather September 28, 2021 11:44 PM

@MarkH
You probability won’t reply, but did you run a anlzyer over a signal that hit a spot, as the one spot before.

Could map it out, but #3 , say you have a beta+ source and a CCD and measure the saturation on a pixel, what is the likely hood it will hit the same spot, we are talking exp^10000 , then average out to flat line.

If you want to know more, say so otherwise I won’t waste my time.

Weather September 29, 2021 1:32 AM

@slf
No to Green; I like my pancakes. Not saying Green doesn’t have original topics, which is good, but…..

MarkH September 29, 2021 2:29 AM

@Weather:

With much respect, I usually read your comments; most of the time, I’m not confident that I understood.

Clive Robinson September 29, 2021 4:31 AM

@ MarkH,

While I’m rather baffled by the meaning of a “very weak assumption,”

It means that the tests only cover a very very small fraction of possible tests.

Therefore using the tests is seen as the “low water mark” or very least of testing you should do. Because if you fail them or get close to failing them then your generator is decidedly broken[1].

Also there are things the NIST tests do not show, or can be cheated on. If you take too small a sample or too short a time span then they can only test what happens in the “window”, and only detect some anomalies above a certain threshold.

I’ve already explained repeatedly what you need to be looking at and why; for some reason you want to push the dialog in some direction of your choosing to try and score points, and like others I’m getting tired of it.

My original points to Freezing_in_Brazil’s question still stand. The points I suggest you carefully consider you ignore.

So at this point I’m going to disengage from your “rabbit hole” journey, as any sensible person would have done long ago with the first early signs. Especially as this is not the first, the second, etc time you have behaved in this way…

[1] I’ve explained this before on this blog: it turns out that many ring oscillator and roulette wheel combination systems have very badly failed the NIST tests or their predecessors in the past. The IC manufacturers’ solution has a common path. Firstly, they don’t let you, the customer, have access to the raw source output so you can test it yourself; take this lack of test access as a warning to avoid any TRNG. Secondly, they add more ring oscillators, thinking this will improve things; it usually makes little difference as they are chaotic oscillators, and even though they up the complexity of the product, they have the power supply etc in common and thus tend to injection lock. Thirdly, the manufacturers put some crypto function on the output in a chain mode, as this then gets it past the NIST tests…

MarkH September 29, 2021 10:30 AM

@Freezing_in_Brazil, FA:

You may wish to note the above comment which lumps together ring oscillators — which are deterministic systems with noisy/chaotic timing — and the so-called “roulette wheel,” meaning (I infer from previous usage) high-resolution timing of unpredictable non-deterministic events.

These are absolutely distinct categories, obeying different laws. The confusion of these two has apparently led to wrong conclusions.

MarkH September 29, 2021 10:32 AM

@Freezing_in_Brazil, FA:

Clive wrote that NIST tests cover only a “very very small fraction of possible tests” for TRNG data, and seems to have made the leap that those other possible tests would detect TRNG bias missed by the NIST tests (one does not necessarily imply the other).

What kinds of bias do the NIST tests miss? Not stated.

What tests can detect them? Not stated.

I don’t know how to distinguish this from the “No true Scotsman” fallacy, which conveniently rejects any facts which falsify an incorrect thesis.

========================

Two important things about science I learned from Carl Sagan:

Extraordinary claims demand extraordinary evidence.

Science rejects arguments from authority; that Newton/Einstein/etc. said it doesn’t make it true.

- September 29, 2021 10:52 AM

@Winter:
@Moderator:

“In short, it looks like a mass dump of conspiracy theories copied from other channels.”

Yes, and there are a few of the usual tell-tales…

Look at the timestamps.

- September 29, 2021 4:38 PM

@Spacelifeform:

“It is troll-tool, new and improved, with re-worked GPT-3, for fresher scent.”

That is about right, but importantly without certain behaviours. Which raises the question of ‘what to do about it’.

It’s fairly obvious it is not ‘a woman’ and whilst it may not be the previous Trumpian 400lb incel bashing away at it, it is probably from the same festering muck sty on the farm.

I think it’s very clearly not triggered by a comment / subject, but an orchestrated attack specifically directed at the blog. The effort put into it suggests that the person(s) doing it are not doing it in ‘unpaid time’. So is it being motivated by some paymaster?

If so it raises the old ‘follow the money’ question, but… what would be the motivation for the expenditure?

SpaceLifeForm September 30, 2021 1:42 AM

@ ALL

In about 7.5 hours, there may be many problems reaching websites.

If you encounter such, a possible short term workaround would be to set the clock back a few days on your device, if you can make it stick.

But, I suspect there will be lots of angst.

hxtps://twitter.com/Scott_Helme/status/1443348197926113280
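
For anyone who wants a quick self-test (a sketch using only the Python standard library, not anything from the linked thread): attempt a normal TLS handshake and see whether your local trust store still validates the site’s chain; clients that only trust the expiring root will fail here.

import socket
import ssl

# Small self-test (a sketch, not from the linked thread): attempt a normal
# TLS handshake; a client whose trust store can no longer build a valid
# chain raises a verification error, i.e. the "problems reaching websites".
def can_validate(host, port=443, timeout=5):
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return True, tls.getpeercert().get("notAfter")
    except ssl.SSLCertVerificationError as exc:
        return False, str(exc)

print(can_validate("letsencrypt.org"))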

JonKnowsNothing September 30, 2021 2:24 AM

@Clive, SpaceLifeForm, All

Here is the smoke-belt of California, where the State is declaring WeWon on the current surge of Delta+34s; my local Steinbeck Lookalike Vistas made the front page of at least one MSM as the place where Delta+34s doesn’t want to go away.

It’s a bit of pretend and pretense.

An interesting version in a MSM article on “How safe is the cinema? Experts analyze Covid risks as (redacted movie name) opens”

A quick summary of the report analysis:

Yes, we know the pandemic is not over, but it’s the cinema and there’s popcorn and nearly 3 hours of being in likely, probable, and certain proximity to 1 or more persons with active and infectious Delta+34s but it’s a great movie and maybe the A/C is working with more wind flow… we’ll skip the part about filters and cleaners because you know there’s popcorn and it’s a good movie and you know you want to go and you know you will go and you know what will happen but you will enjoy the popcorn.

Best enjoy the popcorn ’cause there’s not much turkey for the holidays.

FA September 30, 2021 2:28 AM

@Clive

have a think about why Sir Isaac Newton used “infinitesimus” in his “fluxions” and the implications it has.

There are no differentials in the equation you refer to. Unless you think ‘d’ is one. It isn’t.

FA September 30, 2021 2:40 AM

@Clive

I’ve already explained repeatedly what you need to be looking at and why

You have repeatedly mentioned two things:

  1. A system of interconnected oscillators, latches etc. does not produce randomness, no matter how complex the waveforms it produces look.
  2. The way a latch behaves when setup and hold times are not respected is not a valid source of randomness.

Both are true. Both are also irrelevant to the system being discussed which does not depend on either of them as a source of randomness.

Winter September 30, 2021 7:49 AM

@-
“I think it’s very clearly not triggered by a comment / subject, but an orchestrated attack specificly directed at the blog. The effort put into it suggests that the person(s) doing it are not doing it in ‘unpayed time’. So is it being motivated by some paymaster?

If so it raises the old ‘follow the money’ question, but… what would be the motivation for the expenditure?”

First of all, all the Troll-tool posts eventually converge on pandemic and vax disinformation. If there is any message that lasts in the ramblings, it is pandemic disinformation. The motivation could be to use the high search-engine ratings of this (and other) sites to spread disinformation. There is money to make in disinformation. And there are elections coming where stoking discontent with the incumbent, and apologetics for his predecessor, is worth a lot of money.

As for the money:
Inside one network cashing in on vaccine disinformation
ht tps://apnews.com/article/anti-vaccine-bollinger-coronavirus-disinformation-a7b8e1f33990670563b4c469b462c9bf

And 65% of all disinformation is from only 12 entities/people:
https://coronavirus.nautil.us/covid-misinformation-disinformation-dozen/

Note the prevalence of antisemitism among pandemic disinformation sources in this list.

Clive Robinson September 30, 2021 9:47 AM

@ FA,

There are no differentials in the equation you refer to.

I would suggest you look a little closer and think a little harder.

After all, I’m not the one trying to convince everyone else that TRNGs are a perpetual motion machine.

Go back and see what I originally wrote, and what others have written subsequently…

To be honest I’m really quite tired of this nonsense, and I’ve no wish to be unpleasant. But it’s not just me getting tired of it, others clearly are, and I suspect some know where it’s going to go.

I’ve repeatedly asked for people to consider certain things, but no, shifting the goal posts is the name of the game for them; the result, well, I’ll let you work that out.

AL September 30, 2021 1:23 PM

“And 65% of all disinformation is from only 12 entities/people:”
Not so fast on that one. On Facebook at least, that contention is contested.

People who have advanced this narrative contend that these 12 people are responsible for 73% of online vaccine misinformation on Facebook. There isn’t any evidence to support this claim. …
In fact, these 12 people are responsible for about just 0.05% of all views of vaccine-related content on Facebook.

The answer to disinformation is not equal and opposite disinformation. If Facebook sees 0.05%, it is a stretch that every other site is seeing 2/3rds.

Clive Robinson September 30, 2021 5:14 PM

@ SpaceLifeForm,

In about 7.5 hours, there may be many problems reaching websites.

Did anything happen?

I’ve had a busy day with Hospital tests etc so have not been paying any attention to it.

SpaceLifeForm September 30, 2021 6:08 PM

@ Clive

No major website problems. Some, mainly tech debt issues using old OpenSSL code.

Otherwise, crickets. Just another Friday 😉

AWS
CPanel
Sophos
OPNsense
OpenBSD 6.8, 6.9
MailGun
Fortigate
Debian (apt, Fastly)
Spotify
ovh
Openresty
Guardian Firewall
DigitalOcean

MarkH September 30, 2021 9:37 PM

@SpaceLifeForm:

Ironically, the only website I tried after the threshold time and was unable to reach was …

schneier.com

My creakingly antique browser claimed that my antique computer’s date/time were incorrect (not the first instance of the browser wrongly reporting this, so I don’t know whether there’s a connection with today’s event).

So I’m using a different outdated browser, yippee!

SpaceLifeForm September 30, 2021 10:18 PM

@ MarkH

Curious to know browsers, versions, and platform.

If you set the clock back, I’ll bet the failed browser works.

Also, it could be that your clock is way behind. Verify clock is semi-accurate. If so, try to set it back a couple of days and try the failed browser again.

Sut Vachz September 30, 2021 10:22 PM

@MarkH @SpaceLifeForm

There is only one browser that matters, and that is Lynx. Always old, always new, always outdated, always beyond time. It is thought itself.

MarkH October 1, 2021 12:43 AM

@Sut Vachz:

That was pure poetry! I salute you.

@SpaceLifeForm:

My configuration isn’t important … it’s all junk anyway. But I do keep the clock within one second of true.

SpaceLifeForm October 1, 2021 12:55 AM

Silicon Turtles

Always-on Processor magic: How Find My works while iPhone is powered off

hxtps://naehrdine.blogspot.com/2021/09/always-on-processor-magic-how-find-my.html

FA October 1, 2021 2:01 AM

@Clive

Commmenting on the paper pointed to by @MarkH, you wrote:

But it gets worse. Take a carefull look at their first equation,

suggesting there is something very wrong with that equation, and the paper in general.

Can you please explain what that could be, or must we assume this was just innuendo and hence trolling ?

Clive Robinson October 1, 2021 2:09 AM

@ JonKnowsNothing,

Best enjoy the popcorn ’cause there’s not much turkey for the holidays.

Yeh, and the crops are “rotting in the fields” because there is nobody to pick them…

Not due to viral issues, just political stupidity ones…

I guess you could say it’s yet another demonstration of “supply chain” issues.

Oh and this will amuse, apparently yes we have farm equipment not working in the UK because some people have stolen the fuel out of it… So even automated crop gathering equipment is not working…

@ Bruce,

It’s funny how one “lack of labour created” supply chain issue actually causes another supply chain issue, because the “crooks” see an opportunity for very short-term profit…

@ ALL,

In the past in the UK when there have been “shortages” in autumn and early winter, people have had their homes broken into and had their festive decorations and presents under the tree stolen…

Winter October 1, 2021 2:17 AM

@Al
“The answer to disinformation is not equal and opposite disinformation. If Facebook sees 0.05%, it is a stretch that every other site is seeing 2/3rds.”

A detailed answer disappears in the black hole.

But if you read the Facebook response, you see they are calculating percentages over different posts, those made directly by the 12, while the report talks about the reach of the talking points inserted by the 12. Furthermore, Facebook responds with a lot of weasel words to hide the fact that they do NOT actually respond to the facts of the report. As usual, Facebook is NOT volunteering any real information about the spread of misinformation.

On the whole, Facebook is telling us how many weeds they have removed from their fields, while the report talks about how many weeds are still in the fields. In the end, we are not interested in the weed harvest, but in the weeds overgrowing the real crop.

Clive Robinson October 1, 2021 3:20 AM

@ SpaceLifeForm,

Not Spotify, but Shopify

Ahh, a shopping channel, that I would certainly miss[1]. As for AWS, well who knows what that money pit gets up to. There is not a pole long enough[2] for me to get within a country mile of them.

So all in all it went fairly quietly “this time”…

Which leaves me thinking about “epochs”: are we not due the *nix 32-bit time overflow[3] in mid-Jan 2038?

It leaves me wondering just how many other “mini Y2K events” are going to happen before then…

[1] Miss as in “not see it” rather than “be bereft” over it not being available. The English language is so delightfully imperfect, it can mean almost anything you want it to mean… no wonder lawyers get rich.

[2] Most people don’t know where the expression “would not touch it with a barge pole” comes from. There are two parts behind it, the first to do with very recently dead creatures, the second with the largest pole you’re likely to have seen. When infected creatures are very sick or die, they frequently have lots of live fleas etc still on them looking for a new warm-blooded home to jump to… If you get within two meters (~6 ft) of such a creature you are likely to become the new home until you become infected, sick and die. So it became the practice to use a long walking staff or similar pole at arm’s length to push the creature into a fire or pit etc to dispose of it. The longest poles people made and used normally were the poles used for “punting” or “poling” and similar, to move boats by pushing down and away on the river bottom. At up to 5.5 m (~18 ft) these were three times longer than ordinary walking staffs etc. So to say you would not touch it with a barge pole implied it was something you really, really would not go anywhere near at all by choice.

[3] https://www.epoch101.com

Clive Robinson October 1, 2021 4:08 AM

@ FA,

… must we assume this was just innuendo and hence trolling ?

It’s certainly not “innuendo”. Go look at the equation’s terms carefully and think about them with regard to physical devices. Go through the implications of each one.

Oh with regards,

Both are also irrelevant to the system being discussed which does not depend on either of them as a source of randomness.

True, but some things you have obviously missed.

1, Who brought them up?
2, Who described their deficiencies?

Why were they brought up? Well, I originally said radio isotopes had bias, and should not be used without careful processing in TRNGs. A statement that is as true now as it has always been. As for the two circuits, I’ve made it clear they both have deficiencies that many have not realised, and given simple examples of how to demonstrate the deficiencies. I even suggested that the use of a pencil and paper and simple “timing diagrams” was all that was needed to show the deficiencies. But no, certain people wilfully chose to ignore it… And so on.

As for “trolling”, no, not by me. I made a factual statement to Freezing_in_Brazil; since then others have “trolled” me on it, I assume in one case because they want some kind of “bragging rights”.

When people have tried that in the past I usually tell them to go look things up, and when they profess that they can not find it on the Internet I provide a series of links[1]. Or, if they are a second offender, a simple search term.

But how would you suggest I deal with those who have apparently become fixated, as exhibited by their stalkerish behaviour? Especially when they wilfully choose not to listen to what they are being told, and have gone down some rabbit hole of their own making, along with accusing me of not saying things? Especially when it is in the examples they have brought up, not me.

All I’ve done is simply point out factual things about the examples they chose and why they should be cautious.

And again, I’m going to say “get out the pencil and paper” and draw out what is being discussed, and look at how each one of those terms applies.

[1] Providing links has with this new blog software become associated with failed posts. So I’ve cut right back on giving links.

lurker October 1, 2021 1:08 PM

@SpaceLifeForm: Always-on Processor magic

Please advise what “smart” phone has a power OFF switch that disconnects the battery from the working parts. NB a RealTimeClock is not an essential function in these devices.

Clive Robinson October 1, 2021 3:07 PM

@ lurker,

Please advise what “smart” phone has a power OFF switch that disconnects the battery from the working parts

Better yet,

“What smart phone allows you to get the case off to get at the battery?”

Or

“What smart phone, if you can get the case off, has space for you to fit a switch to turn the battery off?”

At the bottom of my “junk cupboard” I have a Motorola phone from the good old days… Not only does it have a slide off battery pack. The pack opens so you can get at the Double A cells to change them… and there is a space in the design to fit a small switch that I did oh cough cough years ago.

The thing is it’s now so old I don’t think it works with even old GSM 2 networks 🙁

It’s a shame really, as it’s a nice solid design with a bit of weight behind it. I know you could crack walnuts with it because I once did at Xmas as a bit of a joke. I’d just got myself a much smaller, neater Nokia as a present to myself and was waiting to get a new small “SIM”, as the one in the Motorola was the size of a credit card.

Freezing_in_Brazil October 1, 2021 3:40 PM

@ Clive Robinson

In the past in the UK when there have been “shortages” in autumn and early winter, people have had their homes broken into and had their festive decorations and presents under the tree stolen…

I hope everything goes well this time around. We are in the hands of the truckers down here too. It’s interesting how the logistics and energy crises align throughout the world right now. As for the health department, I wish you a full recovery.

@ Sut Vachz

I second MarkH. It is so me I will adopt it for myself. Well played. 🙂

@ SLF

Thanks for the site behavior info

SpaceLifeForm October 1, 2021 3:48 PM

@ lurker, Clive

I would not use a phone where you can not pull the battery.

They exist. They are not iPhones.

PinePhone, various Android.

If you can not pull the battery, you may want to look into a Faraday Bag.

Generally speaking, if you use an iPhone, you do not have good OPSEC.

Which, is clearly the point of Find My and AirTag.

hxtps://nakedsecurity.sophos.com/2021/05/11/apple-airtag-jailbroken-already-hacked-in-rickroll-attack/

If someone else swipes an NFC-enabled phone near an AirTag, it presents them with a supposedly anonymous URL pointing to the Apple server found.apple.com, where they can report the misplaced item.

[Note date. Note an Android with NFC can ‘find’ an AirTag. And probably ‘lost’ iPhones. Is your iPhone and/or AirTag really ‘lost’ when you are carrying them thru an AirPort?]

SpaceLifeForm October 1, 2021 4:09 PM

@ lurker, Clive

Dots. Think about the money source.

hxtps://themarkup.org/privacy/2021/09/30/theres-a-multibillion-dollar-market-for-your-phones-location-data

MarkH October 1, 2021 10:24 PM

@SpaceLifeForm:

Setting the date back did revive my usual browser … but I had to go back one day more the next day (the alternative browser was working badly).

When I searched, I found a few news stories about the internet shutdown scare, but no technical explanation of what caused it.

Do you know where I can find information on more practical/sustainable mitigations? It would be a great help!

FA October 2, 2021 5:39 AM

@Clive

True, but some things you have obviously missed.

No, those things were just not what was being debated.
You brought them up, thereby almost derailing the discussion. Which, as you put it, is annoying.

Why were they brought up well I originally said radio isotopes had bias

which is still an assertion you need to explain. Why would a nucleus prefer one counter state to another [1] ? Without referring to imperfect electronics, signals leaking into each other etc. Your statement above is about radio isotopes, not about practical implementations of random bit generators. And as far as I can see, it is in conflict with physics.

[1] Bias in the XOR of successive bits is another matter. But that can be quantified and reduced to an acceptable level, as discussed before.

FA October 2, 2021 5:48 AM

@Clive (continued)

You frequently bring up considerations about the practical realisation of things which may not be as simple as the basic principle suggests. And usually what you say is true. But please consider this: the fact that these aspects are not brought up by the OP does not imply that he/she is not aware of them. It may as well be a deliberate choice made to keep the discussion focussed. Just consider the possibility that you are not the only experienced engineer in this forum.

Clive Robinson October 2, 2021 6:16 AM

@ FA,

No, those things were just not what was being debated.

Again you get it wrong.

I said to Freezing_in_Brazil that the output of a radio isotope source is biased (which is a fact). I further went on to say that the output of a TRNG would be biased from it unless the designers took care (also a fact).

Now if you dispute those facts, come up with your evidence; so far you’ve come up with nothing. Worse, I’ve given you pointers so you can work it out, yet you are still either missing them, or for some reason wilfully choosing not to follow them.

So another pointer,

“What is the most quoted, yet probably least understood science based equation?”

Why would it relate to physical sources / processes, and also tell you why chasing after a perpetual motion machine is not what people should be doing?

As I’ve said to you, get out the pencil and paper and start drawing things out; you should then realise where you should be going.

FA October 2, 2021 7:07 AM

@Clive

Now if you dispute those facts, come up with your evidence,

Since you presented those facts, I suggest you come up with some evidence.

Or explain why a nucleus would prefer to decay when a 1 bit counter is in a particular state. While quantum physics tells me that such a preference does not exist.

Note that I do not dispute some bias in the XOR of successive bits obtained by sampling a counter at decay events.

On a more general note: talking in riddles (like your reference to Newton) is not helping anyone.

Clive Robinson October 2, 2021 7:38 AM

@ SpaceLifeForm, lurker,

I would not use a phone where you can not pull the battery.

They exist. They are not iPhones.

More correctly they are not “Smart Phones”.

I’m still looking for a Smart Phone with a removable battery. However a friend who has similar views gave up looking.

Their solution is a small “Smart Pad” about the size of a large phone and a really tiny phone with a removable battery from a brand called ALBA.

They use the USB on the Smart Pad to talk to the ALBA phone, only when they need to go on-line.

In the UK that particular phone is less than around $15 equivalent and you can get “pre-pay” service from Supermarkets and even corner shops for less than $15 / month equivalent.

So throwing away the phone and SIM and getting a new pair each month, costs about the same or less than many mobile service plans for “Smart Phones” in the UK…

My friend uses a few other interesting tricks to get around actually making or receiving phone calls and likewise SMS but still appearing as though they do to others…

It’s all “good fun” and “totally harmless” but does show that even when others think they have the technology “all wrapped up” they in fact don’t.

Clive Robinson October 2, 2021 8:04 AM

@ FA,

Or explain why a nucleus would prefer to decay when a 1 bit counter is in a particular state.

That has nothing to do with what is covered by “bias”.

So do I assume you are doing a MarkH and moving the goal posts to try to win an argument you have started and already lost?

If you and MarkH want to play that game, at least be honest and admit it, but you won’t, will you.

But at least everyone else can see what you and MarkH are up to…

FA October 2, 2021 9:38 AM

@Clive

That has nothing to do with what is covered by “bias”.

Then tell us which bias you refer to. In other words, which statistics would not be what they are expected to be for a truly random collection of bits.

I already mentioned the bias of the XOR of successive bits, and how it can be handled. Also note that timing is not considered, the bits are just data.

Winter October 2, 2021 11:37 AM

@FA
“Then tell us which bias you refer to. In other words, which statistics would not be what they are expected to be for a truly random collection of bits.”

One I saw mentioned somewhere is the result of the decay of the source material. Over time, the frequency of radioactive decays will reduce due to there being fewer radioactive atoms left. But that effect should be unmeasurably small for any well chosen source.

Another bias is the result of refractory periods in the detector circuitry. This leads to anti-correlations in the output.

Also, there might be external noise from other particles from cosmic rays or impurities in the surroundings that have diurnal or seasonal correlations.

In my view, nothing that cannot be controlled or corrected for.

MarkH October 2, 2021 12:23 PM

@Clive, SpaceLifeForm, lurker:

The gadget I’m writing this on has an easily removable battery. I don’t much like it, but it has that one virtue.

It’s an LG, so power/wakeup is by a button on the back side; it’s both bad ergonomics, and gets flaky after a while, so you never know whether the phone’s just responding with glacial slowness, or your button press was ignored.

I think the model is Phoenix 405, but I don’t know a convenient way to check right now.

I pop off the back easily using my thumbnail, and likewise for battery removal. With practice, the maneuver can be completed in a few seconds.

This particular model is surely out of production, but if LG doesn’t have newer phones with replaceable batteries, I expect this vintage can be found used.

Weather October 2, 2021 12:58 PM

@Fa
I think it’s like card counting in blackjack: if you draw a card you won’t get that card again until a new deck is added, so you can have a moving average to semi-predict the output, or what day traders call reading the chart 😉

MarkH October 2, 2021 1:02 PM

@Winter:

All depends on the bit extraction method. The correct method is to measure decay times modulo a number such that the wrap-around (or rollover) time is small compared to the mean interval between decays.

Inherently, this method has extremely low bias.

Weakening of the source actually decreases bias, while also lowering the bit generation rate.
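
For anyone who wants to poke at this, a minimal sketch in Python of that extraction idea (the 20/s rate and 1 MHz counter are assumptions, not from any paper): simulate independent exponential gaps, latch a free-running counter at each detection, and keep the value modulo 16, whose 16 µs wrap time is tiny next to the ~50 ms mean gap.

    import collections
    import random

    random.seed(1)
    lam = 20.0          # assumed mean detections per second
    tick = 1e-6         # assumed counter tick: 1 MHz free-running clock
    mod = 16            # keep 4 bits per detection; 16 us wrap << 50 ms mean gap

    t = 0.0
    counts = collections.Counter()
    for _ in range(200_000):
        t += random.expovariate(lam)        # independent exponential inter-arrival times
        counts[int(t / tick) % mod] += 1    # latched counter value, modulo 16

    # For this idealised model the 16 buckets come out very nearly equal.
    print(max(counts.values()) / min(counts.values()))

On this idealised model the 16 buckets differ only by sampling noise; the arguments in this thread are about how far a real detector and counter depart from the model.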

MarkH October 2, 2021 1:02 PM

continued:

The detector recovery time (often called dead time) is a feature, not a bug. Bias is maximal for decays detected very close in time.

If an accidental particle (not from the source’s population of chosen-isotope nuclei) triggers the detector, its time of arrival will be just as unpredictable.

MarkH October 2, 2021 1:05 PM

@Moderator, all:

Apologies for the redundant battery comments … until now, everything “held for moderation” simply vanished, so my algorithm has been to try, try again.

MarkH October 2, 2021 1:17 PM

@Weather:

Applying the blackjack analogy is the same mistake others have made.

Decay events in TRNG sources are completely independent, which is why they are accurately modeled by the Poisson distribution.

The degree of predictability in card games — or markets — comes from the partial dependence of events.

Weather October 2, 2021 1:32 PM

@Markh
Just so I understand it, you’re not planning to detect where on a sensor grid, just whether each hit is true or 1?

How many non-signal 0s compared to 1 hits?

Weather October 2, 2021 1:39 PM

@Markh
If… if there are a lot more non-signals, say in 160 bits only 10 bits are signal, you can apply known patterns to the output only on the non-signals and bit-flip it, which should show the bias in the non-signal timing.

MarkH October 2, 2021 1:49 PM

@Weather,

I suggest you review at my comments based on earthquake data. All the randomness comes from recording the time of the event.

I discarded the date, hour and minute … the longer time scales have too much bias. But the seconds (time modulo 60) of the event time have very high entropy measurements.

Same for the TRNG: when a decay is detected, record the time modulo a small value. In this manner, several random bits can be extracted from each decay.

Weather October 2, 2021 2:06 PM

@Markh
Gotcha, so it’s like a number between 1-10000 that loops and gets stored when a signal is detected.

Side note: I would be interested in finding out a way, because win2k7 uses a 64-bit clock tick, and when a new connection starts it uses that as the sequence/acknowledgement numbers.

SpaceLifeForm October 2, 2021 3:31 PM

@ MarkH

Setting the date back did revive my usual browser … but I had to go back one day more the next day (the alternative browser was working badly).

Whoosh.

I told you that, and I gave you some links, so you could research, and maybe figure out a way to get a working browser on old platform.

Set your clock back a few months, so you have time to research. The Certificate expiration check happens on your end.
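
If it helps, a minimal sketch in Python (an assumption-laden illustration, not from any of the links) for seeing the certificate dates your client is actually being handed, since that expiry check is done locally:

    import socket
    import ssl

    def leaf_cert_dates(host, port=443):
        # Uses the system trust store; an expired chain will raise an SSLError,
        # and the error text usually names the certificate that failed.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        return cert["notBefore"], cert["notAfter"]

    print(leaf_cert_dates("www.schneier.com"))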

It seems that you don’t want to mention old Mozilla on XP for some reason. And, as you found out, old IE is useless.

hxtps://www.makeuseof.com/tag/browser-secure-old-windows-xp-system/

Your best bet is to move to a 32 bit Linux distro with at least a 2GB swap partition, run a modern 32 bit Firefox, and deal with the slowness due to lack of RAM.

If your response is that you must use Windows, then your OPSEC flat out sucks.

But, you may be able to live boot and have acceptable response time.

Check out hxtps://justbrowsinglinux.com/ 32 bit.

MarkH October 2, 2021 4:09 PM

@Freezing_in_Brazil, FA, Weather et al:

I’m preparing a write-up on estimation of bias in the timing of radioisotope decay events.

The math is mostly simple; I’m proceeding methodically in hopes of catching most of my mistakes.

I’ll post a link when it’s published.

SpaceLifeForm October 2, 2021 5:29 PM

@ MarkH

I’m preparing a write-up on estimation of bias in the timing of radioisotope decay events.

Strange. That is what Schrödinger’s Cat told me earlier tomorrow.

Clive Robinson October 2, 2021 8:51 PM

@ MarkH,

LG Phoenix 405

Not sure if that was available in the UK when I last looked.

But the LG Phoenix 5 apparently has a 3000 mAh “embedded” battery, not a “removable” one.

I suspect “embedded” is going to be not just the norm, but the only option in the near future…

The Smart Phone I use is something like a decade old and with careful behaviour I’ve kept the battery alive sufficiently that it is still usable. Importantly it’s both 4G and LTE compliant so should still be good for a few years yet.

MarkH October 2, 2021 9:16 PM

@Clive:

As it turns out, the specific identifier (whatever it means) isn’t useful; the category is Phoenix 3. As far as I know, all Phoenix 3s have the user-replaceable battery.

Out of production, but findable on eBay, including sealed box apparently.

Clive Robinson October 2, 2021 9:18 PM

@ SpaceLifeForm, ALL,

Strange. That is what Schrödinger’s Cat told me earlier tomorrow.

Did the cat say it would be a vomited dog’s breakfast as well, because of,

“I discarded the date, hour and minute … the longer time scales have too much bias.”

@ ALL,

For those that are not aware, time, like energy/matter, can be neither created nor destroyed.

For relativity to work, time has to be accounted for in the local reference, where it remains constant.

Also, in the local reference the basic laws of physics require all independent objects to record all events in their history no matter how far back in time you go. Otherwise you would not be able to reverse events, and that would cause all sorts of problems with energy/matter.

Thus you can not discard “the date, hour and minute”, because the effects of those times will appear in any time window you choose, no matter how small in duration. The same with the exponential curve of the half-life: it is, after all, a constant percentage change against local time.

So what to make of,

“I discarded the date, hour and minute … the longer time scales have too much bias.”

It shows not just a failure to understand the basic laws of physics, it is also a direct admission that there is bias, which I’ve been saying from the very beginning.

Further, it also demonstrates the wilful behaviour of discarding facts, so as to “move the goal posts” to make a false argument…

MarkH October 2, 2021 10:43 PM

@Freezing_in_Brazil, FA, Clive:

Consider a room with two vertical posts spaced one meter apart, with a very fine wire stretched nearly taut between them.

The wire is attached to one post at the height of one meter above the floor, and 1.1 meters height at the other, so the magnitude of the mean slope is 0.1; of course the ratio of the far-end heights is 1.1

Examining centimeter-long segments of the wire will also show a mean slope of 0.1 … but the ratio of end-heights for each segment is about 1.001

Sut Vachz October 2, 2021 10:49 PM

For those with time on their hands and minds,

The Quantum Challenge: Modern Research on the Foundations of Quantum Mechanics (Physics and Astronomy), 2nd Edition, Greenstein and Zajonc

and the indispensable

Collective Electrodynamics, Carver Mead

and its bibliography.

MarkH October 2, 2021 10:52 PM

continued:

Examining centimeter-long segments of the wire will also show a mean slope of 0.1 … but the ratio of end-heights for each segment is about 1.001

For millimeter-long segments, the mean slope remains 0.1, but the end-height ratio is 1.0001

The process of diminishing segment length can be continued, down to limits imposed by the measurement process. Until that limit is reached, for any chosen small ε, the ratio of end-heights can be reduced below 1 + ε

Weather October 2, 2021 10:54 PM

@Freezing_in_Brazil, FA, Clive, Markh

My electronics knowledge is minimal, and I don’t know FFTs, but if you get LTspice and make one voltage source with a linear slope and combine that with a discharging capacitor, the two waves might make an interesting pattern.

MarkH October 2, 2021 11:00 PM

continued, last of 3 parts:

Got that, dear readers?

Observing on progressively smaller scales reduces the ratio between endpoint heights, although the slope has not been altered.

No laws of physics were violated at any stage, and no magic pixie dust required.

Weather October 2, 2021 11:17 PM

@Freezing_in_Brazil, FA, Clive, Markh

There’s less averaging going on at smaller time scales. Try plotting a sawtooth to mimic the loop counter and combine that with another source to mimic the output from the isotope; on small time scales there will be a lot of noise, but over a number of sawtooth cycles a pattern should show.

SpaceLifeForm October 2, 2021 11:24 PM

@ Clive, ALL

“What is the most quoted, yet probably least understood science based equation?”

E=mc^2

It’s just a reduction, in the case where everything is at rest.

But, nothing is at rest. Including your clock.

The real equation is relativistic.

https://physics.info/mass-energy/

Measuring Physics with Physics can lead to issues.

There is no way to measure velocity without a true Frame of Reference.

But, there is no true Frame of Reference. And there is no Universal Clock.

There is no way that you can trust a Physical Clock.

MarkH October 3, 2021 12:13 AM

@Weather:

It’s that same logical trap people keep falling into.

If the decays were periodic — or even if they were causally dependent in some much more subtle way — yes, a pattern could appear.

But they are completely independent.

MarkH October 3, 2021 12:17 AM

@Weather:

If it helps to understand more clearly, consider the question from the opposite direction:

If a pattern did consistently emerge — with the sawtooth or any other periodic function — that would be proof that the decays are not independent.

Weather October 3, 2021 12:48 AM

@Markh

I think they are dependent. View the electron orbit around the nucleus: it looks random, popping in and out, but with a neighbouring atom the two now and again share information, like what happens with an electron microscope.
The decay of the neutron by multiple paths may happen when there’s a certain situation.
If you made an isotope a sensor and cage, you could look at it through an electron microscope, which should change something, maybe with five different probe placements.

That magnetic thingy I posted a while ago mimics the hydrogen atom with two bulbs at 180 degrees; if two come together, their orbit will be different.

I think I’ve strayed too much into personal theory.

MarkH October 3, 2021 12:57 AM

@Weather:

I think I’ve strayed too much into personal theory.

So do I, my friend.

Remember what Carl Sagan taught us: extraordinary claims require extraordinary evidence.

Whoever wants to re-write the laws of physics, needs solid and convincing experiments — reproducible by others — which have no reasonable explanation other than falsification of some prediction of accepted theory.

Radiation counts fit Poisson distributions (and times between decays fit Exponential distributions) very accurately. This has been verified many, many, many times.

If the decays weren’t independent, the distributions wouldn’t fit.
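
A quick self-check of that claim, as a rough sketch in Python (the 20/s rate is an assumption): for a Poisson process the per-second counts should have a variance roughly equal to their mean.

    import random
    import statistics

    random.seed(2)
    lam = 20.0                 # assumed mean detections per second
    bins = [0] * 10_000        # 10,000 one-second counting bins
    t = 0.0
    while True:
        t += random.expovariate(lam)   # independent exponential gaps
        if t >= len(bins):
            break
        bins[int(t)] += 1

    # Both numbers come out close to 20, as expected for a Poisson process.
    print(statistics.mean(bins), statistics.pvariance(bins))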

Clive Robinson October 3, 2021 4:58 AM

@ SpaceLifeForm,

It’s just a reduction, in the case where everything is at rest.

Not quite, it’s where the object and observer are in the same reference and…

It’s why I said “yet probably least understood science based equation”

The velocity and apparent kinetic energy of a massless object moving at the speed of light is another “rabbit hole” we don’t need to chase down.

The point is you have a finite source giving out energy to a sensor it is adjacent to and held at rest against.

If external energy is applied via heat etc, both parts are effectively held in equilibrium, so where does the energy detected in the sensor come from?

The answer is as the radio isotope decays to become stable copper it gives up mass over time.

Several things arise from this that are of interest,

1, Eventually all the radio isotopes in the source will have decayed and the source will no longer emit energy to the sensor. Therefore it can not be a perpetual motion machine.

2, The equation in the paper appears to be for a single point in time, the incorrect implication being the count will always stay the same.

If you replace the mass term in the equation with one that varies with time then the count output must also vary with time.

Which brings us to the point where it gets interesting.

We know that decay is a given percentage of the remaining mass in a given period of time. We should know that this produces an exponential curve (and it does). The thing about the exponential curve is that no matter how much you magnify it, the curve always has the same shape.

Thus the inverse of this is that no matter how “infinitesimal” the time period you choose, that bias is still there, no matter how fractionally small. The noise may be great but the signal is still there.

Now if you only regard each infinitesimal time period independently then you can ignore that very very small signal under all that noise.

But what about longer periods of time?

Obviously, being deterministic, the level of that signal becomes more prominent as the noise averages out and will, given time, exceed the noise.

Now consider two infinitesimal measurements a reasonable time period apart[1]. Is the signal going to be visible in either of them separately? Probably not, but when you compare them? Yes: in this case the count rate will have dropped even though the shape of their distribution curves, frequency spectra etc look the same.

So that “decay bias” is getting through, as I have previously explained, and is the point @Weather has realised as well. That is, the bias must be present in each measurement no matter how small in time it is and no matter how much noise there is. Further, and importantly, it accumulates with time. To argue otherwise is to argue for perpetual motion, because you would somehow “magically” need to maintain the radio isotope mass and keep it constant… So you would have to sprinkle on “magic pixie dust” or think that you can.

You can go on to show that the likes of entropy pools act as accumulators for such bias, and no amount of crypto hashing will remove it. It’s why, as I noted originally, you have to exercise care in the design of good TRNGs.

[1] One of the things most people do not consider when talking about TRNGs is that all real physical sources decay in some way. If you throw dice, the friction will grind a little fraction off each time. If you use XTAL oscillators, the components “age” and the frequency changes very predictably. It’s called entropy in the classic sense of things moving from the organised to the disorganised. Whilst nature as a rule finds linear abhorrent and goes with ratios, this type of entropy is very predictable. The consequence of this does not appear to get realised: if each reading of the source produces one bit, each subsequent bit is biased with regards to it, and so on. So each sequence of bits is biased predictably in time, as are all subsequent sequences. For many people that will not matter, but for others it does and can be critical.
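
To put a rough number on that accumulation, here is an illustrative sketch (assuming roughly 20 detections/s and a half-life of about 100 years, in the region of Ni-63): compare the expected counts in one day of observation now with one day a year later.

    import math

    lam0 = 20.0                    # assumed detections per second today
    half_life_years = 100.0        # assumed, roughly Ni-63
    seconds_per_day = 86_400

    lam1 = lam0 * 2 ** (-1 / half_life_years)   # mean rate one year later
    n0 = lam0 * seconds_per_day                 # expected counts, day 0
    n1 = lam1 * seconds_per_day                 # expected counts, a year on

    print(round(n0 - n1))            # ~12,000 fewer counts over the day
    print(round(math.sqrt(n0)))      # ~1,300: the typical Poisson fluctuation

Over a long enough baseline the slow decline stands well clear of the shot noise, even though it is invisible inside any one short window.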

Sut Vachz October 3, 2021 7:43 AM

@Clive Robinson

Re: time, entropy and all that

“The wise always knows what time it is”, but time in this expression is a shorthand for the state of the individual relative to their responsibilities, which are a mix of past, present and future.

However in modern physics, time seems to have replaced the ether as an inaccessible reality, even if it no longer flows equably everywhere. There is no real time, the only reality is motion, with time being a numbering of motion in the mind of the physiker. Can be useful, but not fundamental. So where is the treatment of natural phenomena that takes this as its starting point, and compares natural motions to natural motions, eliminating the ad hoc “clock” ? Your “nature goes with ratios” is a nice co-ordinate free way of stating something about nature, there should be something similar for “time”.

Regarding entropy, this seems like another ether-trap. Carver Mead in his book quotes a statement by Price “Time’s Arrow Today”, to the effect that the real question is not why does entropy increase constantly, but “why it was ever so low in the first place”. If entropy is always increasing, why hadn’t it already increased to total disorder long ago ? Entropy can’t be fundamental either.

FA October 4, 2021 1:33 AM

@Clive

Therefore it can not be a perpetual motion machine.

Nobody ever claimed it was.

The equation in the paper appears to be for a single point in time, the incorrect implication being the count will always stay the same.

No, there is no such assumption. Nothing in this system depends on the exact value of the event rate.

FA October 4, 2021 1:42 AM

@Clive

… no matter how “infinitesimal” the time period you chose is that bias is still there no matter how fractionaly small.

Is it ? If this ‘decay bias’ exists, then you should be able to answer the following simple question:

  • Which statistical property of the generated bits is affected, and by how much?

FA October 4, 2021 1:44 AM

@Clive

Obviously being determanistic the level of that signal becomes more prominent as the noise averages out and will given time exceed the noise.

You have not shown that there is a ‘signal’ to start with.

MarkH October 4, 2021 1:54 AM

@Clive et al:

We’ve both written a lot about “bias” in regard to TRNGs.

Several times, you’ve written (disapprovingly) about “moving the goal posts.” In the spirit of precision over ambiguity, I want to nail down what I mean by “bias,” regardless of alternative definitions or interpretations.

In my meaning of bias, it is a deviation of the numeric interpretation of the RNG outputs from an ideal random generator, such that

(a) knowledge of the bias-producing defect in the RNG and/or

(b) analysis of any set of previous outputs

enables prediction of any future output with better-than-chance probability of success.

MarkH October 4, 2021 2:06 AM

A definition of “bias” continued:

I have consistently had in mind precisely the preceding definition, neither more nor less.

For example, variations in the raw number generation rate — or other RNG properties or behaviors which might give rise to side channels — are NOT bias; neither to me, nor I think to the cryptographic community in general.

Surely Clive and I agree that TRNG bias can never be reduced to zero. The operational question is whether a particular TRNG design can dependably operate with bias below some specified limit.

One practical standard is that the bias is sufficiently small that an optimal guessing attack would be better than an exhaustive search by a margin too small to compromise security.

A more formal standard, is the inability to construct a mathematical distinguisher able to discriminate RNG outputs from those of an ideal random generator.

MarkH October 4, 2021 2:17 AM

@Clive:

To underscore what FA wrote a few minutes ago, in the cited paper from Korea — with its formula for the decay detection rate — I saw neither words nor math linking the claimed cryptographic quality of the numeric outputs to the average rate of 20 detections per second [1].

For example, if we take their rate of 20 per second to be accurate, then by my reckoning the detection rate would drop to 18 per second (other parameters remaining equal) after fifteen years, owing to the shrinking population of Ni 63 nuclei.

At a mean rate of 18 detections per second, the raw bit rate would obviously be ten percent less.

Would the bits be more predictable, than at 20 per second?

If so, why?

  1. They write that with more sensitivity, detector noise does cause bias. This says nothing about extra bias at lower detection rates. 
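
The arithmetic behind that 20 to 18 figure, for anyone checking along (the half-life value is an assumption, roughly that of Ni-63):

    rate_now = 20.0            # detections per second, as quoted from the paper
    half_life = 100.0          # years, assumed approximate Ni-63 half-life
    years = 15

    rate_then = rate_now * 2 ** (-years / half_life)
    print(round(rate_then, 1))   # ~18.0 detections per second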

Weather October 4, 2021 5:16 AM

@Fa, Clive, Markh, all

https://pasteboard.co/WPgzQjJIts41.jpg

The green trace is the combined output, the red is the decay of the isotope, blue is the loop counter which counts up to a value then wraps around; while the counter isn’t a signal source, it’s used to mix values. The red is a linear decay; I couldn’t find the option for an initially charged capacitor.

What I take is that over time a higher number will be selected from the counter.

Weather October 4, 2021 5:56 AM

The careful selection of the max counter value and the frequency of the clock would be important. You would need some type of negative bias to counteract the rising levels of numbers, 2.718 / number, but that would have to match the lowest bias you’d be happy with.
Maybe two sources, one flipped…
The isotope will have to match the counter.

MarkH October 4, 2021 10:38 AM

@Weather:

The simulation shown in the image seems to have the decay detections as periodic events.

They aren’t.

The fundamental mistake — you aren’t the only one to make this — is starting from the truth that an isotope source has an average rate of decay events.

Because that rate is written like a frequency, it’s tempting to imagine the decays like a periodic signal … but that’s false.

If I thought that a walrus is a kind of elephant because it has tusks, I’d be wrong.

If I thought that a platypus is a bird because it has webbed feet and a bill, I’d be wrong.

If I thought that plants with tiny flexible bristles that feel like fur are mammals, I’d be wrong.

Resemblance is not equivalence.

Weather October 4, 2021 11:13 AM

@Markh
The red line was meant to be a slow decay that, once it reached zero, stopped; I was in a rush.
But it does point out that if the counter isn’t done properly, at one point in the decay of the isotope you will get a long string of the same number.

Weather October 4, 2021 11:27 AM

Just take the slow decay before the sharp drop at the start and just use that.

Resemblance is not equivalence

MarkH October 4, 2021 12:26 PM

@Weather:

I have a guess, about an interpretation you might have made.

We confusingly use “decay” to mean two REALLY REALLY different things about a radioactive source; I’ll call them nuclear decay and source decay.

nuclear decay: A single event in which a nucleus transmutes to a different element, emitting at least one ionizing particle in the process.

source decay: The gradual reduction of the average rate of nuclear decay events in the source, because each nuclear decay decrements the population of isotopic nuclei.

A smooth curve can represent an average of source decay as a function of time.

But the high-frequency counter records the time of each nuclear decay recorded by the detector.

There’s nothing smooth or continuous about a nuclear decay: it’s an isolated independent event.

MarkH October 4, 2021 12:59 PM

@Weather:

Unfortunately, “decay” means two very different things in these discussions. It’s really confusing!

A nuclear decay is an isolated independent event, in which a nucleus transmutes to another element and emits at least one ionizing particle.

Source decay is the gradual reduction of the average rate of nuclear decays as the population of unstable nuclei shrinks, and can be approximated (with sufficient time averaging) by a smooth curve.

The high-frequency clock records the time of a single nuclear decay event, which has no smoothness or continuity. It’s an isolated independent event.

MarkH October 4, 2021 1:28 PM

@Weather et al:

In a Poisson process, events occur independently, with the number of events per unit time converging to some mean rate.

Decay detections in a carefully designed radioisotope TRNG are extremely close to a perfect Poisson process, subject to the condition (as Clive ever reminds us) that the mean rate of occurrence slowly diminishes as a function of time.

Earthquakes are not completely independent, and so deviate appreciably from Poisson. However, the combination of using quakes from all terrestrial locations, and filtering out less energetic ones, results in a very good approximation to Poisson.

Short intervals between quakes (detectably below a half hour, and increasingly down to a minute or two) in my dataset are more frequent than would occur in a pure Poisson process.

I suspect that this results from linked events (one quake setting off others), but haven’t examined that. Even so, the entropy is extremely good.

For radioactive decays, the results should be even better.

Weather October 4, 2021 2:37 PM

@MarkH

There are 1-5 spacings of the same magnitude; there was a rise over 100 lines that dropped for 50 lines, then increased again, then dropped.

I can throw programs at it, but can I download the text file?

Weather October 4, 2021 9:45 PM

@MarkH

Try a running average of 1296: add all 1296 up, then divide by 1296, then 1-1297… 2-1298… 3-1299; it should show a sine-wave type pattern.

Sut Vachz October 5, 2021 11:15 AM

@ Clive Robinson

Re: interesting papers

These are tantalizing and also give rise to some questions to try to understand the principles involved, at least in the mind of this lightweight patzer 😉

Both papers introduce a “segmented logistic” function to more expeditiously get to the chaotic region. What is this, e.g., does it really buy anything, or why 2 segments and not 3, or N ? Should the segmenting parameter be regarded as a new input variable, so the map is really from 2 dimensions (or more), and should everything be lifted up to maps from and into this higher dimension ?

There must be a lot of processes that are isomorphic to each other that look on the “outside” quite different. Is there some kind of set of invariants that characterize isomorphism ? E.g. are these methods actually examples of Bernoulli shifts, with entropy the sole invariant ?

The first paper seems to take 40 bits as a safe precision, whereas the second paper analyzes in more detail the numerical behavior to try to avoid losing chaos through computer arithmetic limitations. Should one really be looking at doing everything with special integer arithmetic, or using a symbolic dynamics approach ?

The papers provide illustrations of output versus iterations that to the eye look “random”, but isn’t this slim comfort ?

Do these new methods somehow escape von Neumann’s warning about arithmetic methods ?

If for whatever method is used, calculations or partly physics based, the approval criterion of the method is a series of tests, then one is saying “random” or “chaotic” is anything that passes the tests. Has the general character of these tests been established ? One could look at the tests as a mapping from a space of some type to another space. What properties does this map have ? What can it in principle actually say ?

MarkH October 5, 2021 12:57 PM

@Clive, FA, et al:

My 3-part comment above (with wire and posts) showed by analogy how evaluating a smooth function over a small interval can reduce its variation, and how such “magnification” can be extended to limit the ratio of variation below any chosen threshold.

Suppose that instead of horizontal distance, the argument is time; and that instead of height above the floor, the value is probability density (probability per unit time).

The smaller the time interval, the nearer to unity the ratio between the greatest and least probabilities within the interval.

Within a narrow time interval, the event modeled by the density function is nearly equiprobable.

Once you understand this concept, you understand all.
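
To attach numbers to that, a small sketch (reusing the 20 detections/second figure quoted elsewhere in the thread as an assumption): for the exponential density λ·e^(−λt), the ratio of largest to smallest density over a window of width w is e^(λw).

    import math

    lam = 20.0                        # assumed mean detections per second
    for w in (1e-3, 1e-6, 1e-9):      # window widths: 1 ms, 1 us, 1 ns
        # ratio of largest to smallest density inside a window of width w
        print(w, math.exp(lam * w))   # 1.0202..., 1.00002..., 1.00000002...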

MarkH October 6, 2021 1:44 AM

@Clive, FA, et al:

In cryptography at least, true random numbers are numbers that can’t be predicted with better success than by chance.

With usual coin tosses a guess at any future outcome has about a 50% probability of success. If anybody can consistently predict about 51% of the time, the outcomes suffer from some kind of bias.

I consider two kinds of predictability which can make a TRNG unsafe for cryptography.

First, if certain output numbers (or sequences of numbers) consistently occur more often than would happen by pure chance, then someone can more easily guess secret numbers by using those too-frequent outputs.

The second kind of predictability is dependency: suppose that an attacker discovers some previous TRNG output (or perhaps some internal state of the machine), and can use that information to compute predictions of future outputs with better success than by chance. The future output depends (to some extent) on what the TRNG was doing in the past.

MarkH October 6, 2021 2:15 AM

@Clive, FA, et al:

Because TRNGs can’t be made perfect, it’s futile to require (or design to the standard of) zero predictability … except perhaps for quantum-entanglement whiz-bangs I’m not considering. Real-world standards set very low upper limits on predictability.

If all outputs of an RNG have nearly equal frequency of occurrence, then the outputs are said to be identically distributed.

If the outputs are identically distributed, and past outputs don’t enable a meaningful prediction of future outputs, then the outputs are said to be independent.

I wrote above (in bold text) “Within a narrow time interval, the event modeled by the density function is nearly equiprobable.”

If a number is determined linearly by the time within that interval at which the event occurred, the nearly equiprobable probability density guarantees that the time-based numbers will, over a sufficiently large number of events, have nearly identical distribution.

Clive Robinson October 6, 2021 6:42 AM

@ MarkH,

If anybody can consistently predict about 51% of the time, the outcomes suffer from some kind of bias.

Which is ambiguous due to “domains” and “dimensions” within them.

If you study only the “data” once removed from the final TRNG output then you can, as you have found, reduce but not eliminate the bias to some small value. One such way to do this (which is usually a key indicator of a bad TRNG) is the use of a crypto algorithm on the data as a “processing step”[1].

That is, think of it as the inverse of “Garbage in gives Garbage out” (GIGO), that is “Bias in gives Bias out”. The simple fact of encrypting bias does not remove the bias, it simply makes it too difficult for many to detect if they do not have either the “key” or access to the secret “trap door”.

But studying only the “data” once removed from the TRNG is actually a very bad idea. Because it removes it “from the system”, which means you do not see other bias in other domains such as time / phase / frequency which can reveal bias in the data domain that is not otherwise visible.

I frequently warn about “Security-v-Efficiency” because without care “efficiency” makes systems one heck of a lot more “transparent” especially in the time / phase / frequency domains. That is “efficiency” opens up “time based” side channels etc in practical implementations[2] that then haemorrhage information even though the “data domain” appears secure.

Thus it matters not a jot what probability curves you end up with when looking only at the data domain; you have to consider other domains in practical systems. Not just because actual information can be leaked via time based side channels, but because they can leak triggers / indicators that reveal information in the data domain.

Thus the time between two outputs from a physical source carries information. If it varies in any way between two different outputs then it is a time based side channel carrying information out through the transparency of the system.

Further, you have to be very careful that any processing steps you take do not reflect the likes of “jitter” in the “time domain” into the “data domain”, which unfortunately the latch / counter based roulette / waggon wheel stroboscopic system does.

But there is another problem with bias, it is multidimensional in all the domains.

I suggested you think about “Manchester Encoded” messages. If your “bias test” is to count the number of zeros and ones then Manchester Encoded messages will pass with flying colours, but carry any kind of message you like through the test. It’s one of the major failings of the “von Neumann de-biaser”. If you look at a hardware implementation of it using an XOR gate and then look at a Manchester encoded data “decoder” you will see how similar they are.
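
As a tiny illustration of that similarity (a sketch, not any particular hardware): Manchester-encode a message (0 → “01”, 1 → “10”) so that a naive 0/1 count sees no bias at all, then run a von Neumann de-biaser over the aligned pairs and it hands the original message straight back.

    def manchester(bits):
        # 0 -> "01", 1 -> "10": every symbol contributes one 0 and one 1
        return "".join("01" if b == "0" else "10" for b in bits)

    def von_neumann(bits):
        # classic de-biaser on non-overlapping pairs: 01 -> 0, 10 -> 1, 00/11 dropped
        out = []
        for i in range(0, len(bits) - 1, 2):
            pair = bits[i:i + 2]
            if pair == "01":
                out.append("0")
            elif pair == "10":
                out.append("1")
        return "".join(out)

    msg = "1100101001110001"
    enc = manchester(msg)
    print(enc.count("0"), enc.count("1"))   # equal: passes a naive monobit test
    print(von_neumann(enc) == msg)          # True: the "de-biased" output is the message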

But it gets worse as you move up in dimensions. Having used time based jitter to get “phase reversed” Manchester encoded bias through the standard stroboscopic and von Neumann debias circuits, can you still detect the bias?

Well mostly not. If you examine a plain text message, it is detectable because it has easily seen bias in its statistics. But it is actually fairly simple to “flatten the statistics” yet have the message fully recoverable. One way is to take the statistics of the plaintext alphabet and map them to another, reduced alphabet, so say 28 values reduced to 10 values by using an intermediate 100 value translation[3]. Unless the test you use specifically looks for this then the message bias will get through.

Put simply for every test you can come up with, there is a trick to get bias past it.

It’s why I said those NIST tests are just a very low “low water mark” that any TRNG should pass, but they do not mean it’s any good. For instance a counter the output of which is AES encrypted will pass those tests with flying colours, but there is absolutely nothing random about the data output even though it passes the NIST tests.

So… If I make my system pause its output in time so that it encodes “in time” the upper bits of the counter on the AES output, just looking at the data will never show bias… But the practical system will allow an adversary to easily recover the actual counter value and become synchronised to it.

Knowing this tells you that you have to decouple the data domain from the time or other domains significantly. Whilst easy in the abstract / theoretical way, it’s actually very hard to do with practical systems. It is why TRNG designers have to take significant care, otherwise potentially usable bias will get through, moving you away from your desired 50:50 even if only very slightly. Remember, these days crypto systems can be broken with bias as small as 1:2^80 or less, a lot less. Likewise statistical models used in engineering can be thrown off by 1:2^6 bias.

As a TRNG designer you usually have no idea what a customer is going to use your TRNG for, or perhaps more importantly how. Whilst you could argue it is the customer’s problem, that is not a good thing to do for various reasons. One of which is you do not know what a state level adversary currently knows, and more obviously you have no idea what they will find out in the near future whilst the product of your design is still in use for a quarter century or more[4], so your design needs to be conservative. So “getting it right”, or at least the best you reasonably can, is important.

Whilst you can not stop bias or triggers in practical systems, you should make efforts to reduce them in all domains and dimensions. Something that way too many “on chip” TRNG designers significantly fail to do; in the chase for “efficiency” they fail to give “security” where it matters, and then they try to hide the failure behind deterministic algorithms which may well be broken in some way.

That is the practical reality, not theoretical hypothesis of life.

[1] When designing TRNGs, from a long term security perspective, you have to assume that all “deterministic processes” are reversible, even the idea of the supposedly “One Way Functions” (OWFs). That is, assume that even if you are not aware of it there might very well be a “trapdoor function” within the crypto OWF (which is what caused all the stink over the NSA and NIST approving their Dual Elliptic Curve DRBG).

[2] A classical example of this was how the NSA rigged the NIST AES contest. The result was that although AES has very high “theoretical security”, most “efficient” or “fast” “practical implementations” are in practice full of time based side channels haemorrhaging either KeyMat, PlainText, or both out through the rest of the “efficient”, thus transparent, system onto the external network. It is one of the reasons why the NSA only approve practical implementations of the likes of their “In Line Media Encryptor” (IME) for “Data At Rest”. It is also one of the reasons why I keep saying that if you need privacy / secrecy of your data you should use two computers: one for processing and crypto that is NEVER connected to any kind of communications network, including the mains power supply, and a stripped down minimalistic computer to connect to any communications network you might need to use. You then, using an appropriately safe method, take encrypted data from the secure isolated / gapped computer to the insecure, probably malware infested, communications computer. That way you put your “security end point” out of reach of your “communications end point”, something “Smart Devices” like mobile phones do not do, hence they are easily susceptible to “end run” attacks no matter what crypto any App on the Smart Device uses. Hence why I say over and over “Secure messaging Apps are insecure by design”, something that, whilst provably true, apparently riles many “gurus” and “developers” who hang the “secure” tag on the Apps like a magic incantation, thus making the job of “collect it all” authorities much much easier, as many “crooks” using their supposedly “secure smart phones” have found out in recent times…

[3] Known as a straddling checkerboard, this technique was popular in Russian paper and pencil “Nihilist Ciphers” right through until the middle of the 20th Century (see the VIC cipher of the 1950s). It is a very simple device for achieving a basic form of information diffusion called “fractionation”, by converting a 28 letter alphabet into a 10 digit alphabet via a reduced 100 letter alphabet. Importantly it is not just simple to use, it also achieves data compression relative to other schemes using 10 digit alphabet outputs.

To use a straddling checkerboard you simply make a three-row by ten-column grid and write the most frequently used 8 letters (A SIN TO ER) in the top row, and the remaining letters, a full stop and an “escape character” in the bottom two rows, which are labelled 8 and 9. If you get a frequent letter you simply write down the single digit column number; if however you get one of the less frequent letters you write down the row number and the column number as a two digit number. The resulting string of numbers has flatter statistics.

https://en.m.wikipedia.org/wiki/Straddling_checkerboard

You can, to confuse enemies and waste their resources, use the straddling checkerboard twice with a One Time Pad. The first time turns the plaintext into numbers that are then encrypted via a numeric OTP, the output of which is easily recognised by eye and thus “stands out” to anyone who is going to try to break the ciphertext. If however you take the numeric OTP output and use the straddling checkerboard to in effect decrypt it, the output then looks not like an unbreakable OTP, but very similar to a simple substitution or transposition cipher, as it has what appear to be “natural language” statistics. Thus the unbreakable hides amongst what appears to be breakable traffic. But if you are using “Morse Code” to “transmit the message”, the fact you are now back to letters not numbers significantly reduces the time you spend “on the air”, which can also help save your life…
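
For the curious, a minimal sketch in Python of the checkerboard just described (the particular row contents and the “/” escape symbol are placeholder assumptions; real layouts were keyed):

    TOP   = "ASINTOER"        # the 8 high-frequency letters: single digits 0-7
    ROW_8 = "BCDFGHJKLM"      # two-digit cells 80-89
    ROW_9 = "PQUVWXYZ./"      # two-digit cells 90-99 ("/" standing in for the escape)

    ENCODE = {c: str(i) for i, c in enumerate(TOP)}
    ENCODE.update({c: "8" + str(i) for i, c in enumerate(ROW_8)})
    ENCODE.update({c: "9" + str(i) for i, c in enumerate(ROW_9)})
    DECODE = {v: k for k, v in ENCODE.items()}

    def encode(text):
        return "".join(ENCODE[c] for c in text.upper() if c in ENCODE)

    def decode(digits):
        out, i = [], 0
        while i < len(digits):
            width = 2 if digits[i] in "89" else 1   # rows 8 and 9 use two digits
            out.append(DECODE[digits[i:i + width]])
            i += width
        return "".join(out)

    print(encode("ATTACK AT DAWN"))           # a digit string with flatter statistics
    print(decode(encode("ATTACK AT DAWN")))   # ATTACKATDAWN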

[4] Knowledge generally improves with time, and what was not possible yesterday becomes just possible today, hard tomorrow, and trivial the day after. Take Smart Cards for instance: back in the late 1980’s and early 1990’s they were thought to be secure, then first power analysis and later differential power analysis enabled their internal function to be sufficiently mapped that the crypto functions became useless. In more recent times Smart Cards have been used for National ID and similar cards, and have onboard TRNGs that unfortunately allowed bias into the generation of numbers, such that the cards were no longer considered secure.

MarkH October 6, 2021 9:48 AM

@Clive:

The comment you just contributed is erudite and informative, as per your usual.

A TRNG system is composed from a set of functional blocks.

In the spirit of not “moving the goal posts,” I have been focused on a single functional block, which is an extraction method to produce raw bits from radioisotope decay detections.

If the method can’t produce acceptable numbers, then there is no value in further examination. If the numbers can be acceptable, then the design of a system incorporating it must take account of numerous concerns — including some you just enumerated — to ensure adequate security.

========================

I offer two propositions concerning side-channel leakage:

• Any system creating or transforming data, no matter what its elements, can be designed or constructed so as to leak via various side channels.

• Most[1] systems creating or transforming data, no matter what their elements, can be designed or constructed so as to control specified side channel leakage, though the functional specification might need some revision for this purpose.

What do you think about these propositions?

  1. In some cases, the inflexible functional requirements guarantee certain side channels, in which case any mitigation would need to take place at a higher level of specification or design. 

MarkH October 6, 2021 3:06 PM

.
Amount of Distribution Bias, Pt. 1

Suppose that a radioisotope TRNG has an average of λ decay detections per second[1].

If

• the unstable isotope has the usual properties for this application,

• detector function is consistent, and

• the detector responds only to decays from the source[2], then from well-established science the probability density per second of the next detection is

λ e^(-λ t)

where t is the elapsed time in seconds from the previous decay detection.

  1. It’s wrong to interpret “per second” in the units as implying behavior like that of periodic functions. 
  2. Cosmic rays might trigger the detector at low incidence; the statistical effect is negligible. 

MarkH October 6, 2021 3:42 PM

.
Amount of Distribution Bias, Pt. 2

Notes for Pt. 1:

• It’s a fundamental error to suppose that, because λ has “per second” in its units, the phenomenon is anything like a periodic function.

• Cosmic rays may trigger the detector at very low incidence; the effect on the outputs is negligible.

• The probability density function shown above decreases monotonically with time, having its largest rate of change (steepest negative slope) at t = 0.

• Accordingly, the distribution is heavily “front loaded” — short intervals are more likely than long intervals.
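
A quick way to see that front-loading for yourself is to draw exponential inter-arrival times and count how many land in the first versus the second half of one mean interval; a minimal sketch, with an arbitrary λ chosen only for the demo:

    # Illustrative only: exponential inter-arrival times are "front loaded".
    import random

    lam = 20.0                      # mean detections per second (arbitrary demo value)
    mean_gap = 1.0 / lam
    n = 1_000_000

    gaps = [random.expovariate(lam) for _ in range(n)]

    first_half  = sum(g < 0.5 * mean_gap for g in gaps)
    second_half = sum(0.5 * mean_gap <= g < mean_gap for g in gaps)

    print(first_half / n)           # about 0.393  (1 - e^-0.5)
    print(second_half / n)          # about 0.239  (e^-0.5 - e^-1)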

Sut Vachz October 7, 2021 1:19 AM

Re: earlier comment “There is no real time, the only reality is motion … ”

A small correction or clarification to this.

There is really something in motion that allows “a number of motion in respect of before and after” to be completed in the intellect and so to exist in the intellect. This completion is time.

The situation is similar to the case of quantity. There is really something in say discrete quantity that is completed in the intellect, for example some several apples is completed in the intellect yielding the number “6”. But we don’t find “6” as such in the physical world. Similarly we don’t find time as such in the physical world.

Clive Robinson October 7, 2021 7:47 AM

@ MarkH,

Suppose that a radioisotope TRNG has an average of λ decay detections per second.

Be cautious about,

λ e^(-λ t)

Because λ amongst other things can be misused.

In a perfect world it does not matter if you use the initial λ in a chain of measurements or the current λ at the start of each measurement in the chain, as the time component will account for that.

However in reality our measurements are imprecise, as are our calculations of “e”, so the error rate is different with each method due to curtailed precision and other sampling and curve fitting issues.

Normally error functions are “hand waved away”, that is they are “assumed” to act like “Additive White Gaussian Noise”(AWGN) as it makes the maths tractable.

This is because the individual errors can then be assumed to be random around the average point of measure, as the AWGN model assumes that all measurements are “independent” thus “sum out” or “average to zero”, not “dependent” and thus carried forward by accumulation (the reality is the average is moving with time due to the % of a % decay). Worse, the effective error rate also rises.

This is because time is a continuous process, thus any series of measurements made are not actually independent; it’s why you get “phase accumulation” in counter usage. Hence the reason the actual error function, whilst “looking random close in”, in fact shows a clear waveform usually balanced around the measurement point when viewed correctly. As I’ve mentioned you can get with suitable integration a sinewave or sawtooth wave or similar depending on the measurement.

But… What if the measurement point is not constant but biased by a trend, or likewise what is being measured is not fundamentally balanced about zero or has a changing bias?

Either way it accumulates in the readings, and what you see is very much dependent on three things,

1, The method of coupling the instrumentation head to the source.
2, The various measurement windows of the instrumentation head.
3, The actual method of measure.

Again to make things simple people create inappropriate windows and use inappropriate methods. For instance the concept of “AC coupled” is in reality “bandpass coupled” and may have some degree of less obvious very low frequency or even DC coupling (hence people use “lossy integrators”).

A classic example being a misunderstanding about “grounds” can turn a “buffer amp with roll off filter” into a high frequency oscillator (redraw an LC pi lowpass so that the two caps to ground are joined together but not to the reference ground, you get a tuned circuit which, if you redraw it around an op-amp circuit, will toot…).

But there is also a big issue with the use of statistics and examining the output. It’s “normalisation”: if you take two period measurements and graph out the readings you will normally get something approaching a normal distribution or bell curve. If you take two such measurement periods a significant time apart you are going to find you have problems comparing them. Thus the normal fiddle is to “normalise” the readings in some way. This can cause all sorts of problems.

Firstly consider that ALL measurements are discrete NOT continuous; you then perform some form of average on them and plot them on a graph centred on your chosen average.

But due to ranging on instruments the curve tails are of increasing inaccuracy compared to those close to the average, and may be useless beyond a quite small range. But how do you compare two time period measurements when the first has about twice the number of actual data points of the second?

Obviously the reading density is down, which means that likewise the readings that fall in any time sub range change as well, how and by how much depending on how you measure them.

Oh and also how you group them: the readings might be 32 bit, but in reality around 3-4 bits less at the expected average and worse at the tails. Then you scale them down to maybe 5-7 bits to make your table to plot your curve. If you are aware of the implications of Pascal’s triangle, the number of readings in each range follows a natural distribution curve (see Galton board). Normalisation unfortunately “hides” things such as the time shifting mean, giving a distortion that favours the left hand tail over the right whilst also making the mean steadily less accurate. Which means normalisation “lifts the noise floor”, which is fine if AWGN but if not –and it’s not with a radioisotope source– makes things a whole lot more complicated. So normal statistical measurements will leave you prey to the old truism of,

“There are lies, damn lies, and statistics”.

And that’s just some of the stuff that is relatively easy to see…

MarkH October 8, 2021 1:06 AM

@Clive:

our measurements are imprecise, as are our calculations of “e”

Surely you’re joking, Mr Robinson! I compute e to arbitrary precision with thirteen Python statements … on my antique computer, ten thousand decimal places take half a minute.
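
For the curious, a minimal integer-arithmetic sketch of that kind of computation (not MarkH’s actual code) simply sums the series e = Σ 1/k! at a scaled precision:

    # Sketch: e to many decimal places using only integer arithmetic.
    def e_digits(digits=10_000):
        scale = 10 ** (digits + 10)     # 10 guard digits against rounding error
        total, term, k = scale, scale, 1
        while term:
            term //= k                  # term is now scale / k!
            total += term
            k += 1
        return "2." + str(total)[1:digits + 1]

    print(e_digits(50))                 # 2.71828182845904523536...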

Knowing λ to reasonable accuracy (within a percent, for example) is also not difficult.

In fact — as I shall be showing readers of this thread — an error of estimation of 5, 10 or 15 percent has little impact. It’s simply not necessary to know the parameters to great precision.

Similarly, ordinary errors and drift in the counts won’t harm the quality of TRNG output.

you get “phase accumulation” in counter usage

In some applications, yes. For the timing of radioactive decays, “phase” is a meaningless term. Independent non-deterministic events do not, and can not, have a “phase”.

Unless I misunderstood the intended meaning, this looks like the same old mistake of confusing the statistics of independent non-deterministic events with the properties of deterministic (and especially periodic) functions.

The tusks of a walrus don’t make it a kind of elephant.

MarkH October 8, 2021 1:32 AM

Independent vs. Periodic

Some of us seem to have confused nuclear decays with periodic phenomena, and made wrong conclusions on that basis.

Spontaneous decay of unstable nuclear isotopes has no speed, tempo or pace. It has only a probability per unit time. Imagine that you’re observing a uranium nucleus, and someone asks you “how fast is it decaying?” How could you answer? It’s a perfectly meaningless question.

Some seem to think that a radioisotope source gradually “slows down” over time like a child’s spinning top. No, it does not! The nuclei exhibit the same temporal behavior as always — their population shrinks, but their behavior as a function of time is invariant.

MarkH October 8, 2021 1:44 AM

Independent vs. Periodic, Pt. 2

Imagine an α source and detector running at an average around 2000 detections per second.

Now apply a barrier (for example, sticky tape used in offices) positioned so as to cover half of the source. The rig now yields about 1000 detections per second, simulating the lapse of one half-life for the isotope.

The exact same decays detected with the tape in place would also have been detected without the tape (excepting effects of detector recovery time). The nuclei aren’t decaying at a slower speed; the population under observation has been diminished.

The half of decays still detectable with the tape in place, have exactly the same independence and lack of correlation as the half blocked by the tape.

If the tape is then removed from the source, the decays don’t “speed up” — rather, the population under observation increases.

MarkH October 8, 2021 1:51 AM

Independent vs. Periodic, Pt. 3

To sum up, the mean rate of decay (or detections) is a proxy for the population under observation. Except for a superficial resemblance in units of measurement (decay events are NOT cycles), this rate is absolutely different from the frequency of periodic phenomena.

Reasoning based on gradually diminishing frequency, shifting phase, “heterodyning” with timing circuitry etc. etc. is based on a fundamental misconception, and conclusions based on such reasoning are not valid.

Clive Robinson October 8, 2021 3:49 AM

@ MarkH,

Surely you’re joking, Mr Robinson! I compute e to arbitrary precision with thirteen Python statements … on my antique computer, ten thousand decimal places take half a minute.

As anyone who is half sensible will know, I am not joking.

I suggest you really consider things in the light of reality, not fantasy.

As for,

The tusks of a walrus don’t make it a kind of elephant.

Or the hooves of an ass either, but you apparently want to kick out with,

Some of us seem to have confused nuclear decays with periodic phenomena, and made wrong conclusions on that basis.

No, but you have made a fundamental mistake with,

Independent vs. Periodic

I assume to “shift the goal posts” yet again.

What is being talked about is not “Periodic” but “measured”, which are normally not related.

You have some strange fixation bordering on aberration in your behaviour, and to be quite honest, when viewed alongside your previous similar behaviours, it suggests you have a problem that you really should seek professional help with.

FA October 8, 2021 6:44 AM

@Clive

Re. the RBG as presented by @MarkH, you have asserted that the slowly decreasing activity of the isotope causes the generated bits to have ‘bias’.

Nothing, absolutely nothing in any of your long-winded posts, has provided an argument that supports this assertion.

If you understand the mechanism that creates this assumed bias, then it should be easy for you to explain it without having to refer to ring oscillators, straddling checkerboards, the precision to which we know the natural logarithm base, and whatever other topics you have introduced.

And it should also be easy to quantify it as a function of the half-life of the isotope and of any other relevant parameters of the RBG that may matter.

So far you have failed to do any of this.

FA October 8, 2021 6:59 AM

@Clive (continued)

Instead, you constantly appeal to your own infallible authority.

This has gone as far as calling upon our host to silence people who dare to disagree with you, and now to this (referring to @MarkH):

You have some strange fixation bordering on aberration in your behaviour, and to be quite honest, when viewed alongside your previous similar behaviours, it suggests you have a problem that you really should seek professional help with.

Maybe you need some professional help yourself.

Clive Robinson October 8, 2021 8:37 AM

@ FA,

Re. the RBG as presented by @MarkH, you have asserted

Oh dear, yet another fantasy / false statement.

It was @Freezing_in_Brazil who raised the question of using radioisotopes as a source for a TRNG.

I pointed out the source had a bias (which despite all logic you imply is false).

I also pointed out that without care the bias could appear in the TRNG output.

Now if you really think, as you appear to do, that either of those statements is false or incorrect, come up with some evidence rather than moving the goal posts and making statements that have been shown to be not what you claim.

To be quite honest, hitching onto the waggon @MarkH is trying to pull was a daft thing for you to do. Worse, rather than bow out gracefully as I’ve given you repeated opportunities to do, you somehow see it as an offence to your manhood or dignity or some other squirrel running around inside your head.

I suggest instead of kicking out with nonsense like,

Instead, you constantly appeal to your own infallible authority.

You actually go and do a little thinking, that you are so obviously not doing right now.

MarkH October 8, 2021 11:31 AM

.
Amount of Distribution Bias, Pt. 3

Joe Bloggs designs a radioisotope TRNG. He measures the intervals between decay detections, using the time measurements as his raw random data.

Joe reasons that there’s lots of uncertainty (entropy) in his data — correct! He knows that from the laws of physics, the detection times are impossible to predict (if his detector works properly) — that’s two for Joe!

So the data are rich in entropy, and it’s the absolutely best kind for cryptography: impossible for anyone to predict.

Because of the uneven distribution of intervals, the entropy per bit is disappointingly poor. The entropy varies with design parameters, but if he wants to avoid his counter overflowing when the time between decays is long, it’s hard for him to get even 0.9 bits of Shannon entropy per bit. (Guessing entropy — the kind most relevant for secret numbers — is even worse.)

So for his RNG output he has to use a hash function n bits wide, and collect more than 2n bits of raw data for each hash function call.

His gadget works, but he wanted something more elegant.
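
A rough sketch of the sort of calculation behind that entropy figure: model the raw sample as the counter value latched at each decay (a truncated geometric distribution over counts) and compute its Shannon entropy per output bit. The clock rate and counter width below are arbitrary illustrative values, not those of any particular design, so the exact number will differ from the figures quoted in this thread.

    # Illustrative only: entropy per bit of naive "count the interval" extraction.
    from math import exp, log2

    lam   = 20.0          # decays per second (arbitrary demo value)
    clock = 10_000.0      # counter ticks per second (arbitrary demo value)
    bits  = 16            # counter width
    n     = 2 ** bits

    q = exp(-lam / clock)                 # P(no decay within one counter tick)

    # P(count == k): the decay falls in tick k; the last bin absorbs overflows.
    p = [(1 - q) * q ** k for k in range(n - 1)]
    p.append(q ** (n - 1))

    H = -sum(x * log2(x) for x in p if x > 0)
    print(H / bits)                       # Shannon entropy per output bit (well below 1)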

MarkH October 8, 2021 11:12 PM

.
Amount of Distribution Bias, Pt. 4

Joe Bloggs failed to build a satisfactory TRNG with his naïve design.

But he did notice something along the way …

Each time he increased the clock frequency, the entropy of the numbers improved. He didn’t think this was useful, because with each speed-up of the counter, the likelihood became smaller that a decay would be detected before the maximum count was reached; many decays were lost, and the number of bits per second was much lower than his design target.

What Joe didn’t realize, was that the numbers weren’t so much better because of decays happening very soon after their predecessors; the cause of the high-quality numbers was the short window.
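
A small simulation sketch of that short-window idea (parameters again arbitrary): let a fast free-running counter tick away, and at each decay keep only its low eight bits, so all that survives is the fine timing inside a tiny window.

    # Illustrative only: short-window ("roulette wheel") extraction from decay times.
    import random
    from collections import Counter

    lam   = 20.0            # decays per second (arbitrary demo value)
    clock = 10_000_000.0    # fast free-running counter, ticks per second (arbitrary)
    n     = 200_000

    t, samples = 0.0, []
    for _ in range(n):
        t += random.expovariate(lam)             # wait for the next decay
        samples.append(int(t * clock) % 256)     # latch the counter's low 8 bits

    counts = Counter(samples)
    print(min(counts.values()), max(counts.values()))   # both close to n / 256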

MarkH October 9, 2021 1:50 AM

.
Amount of Distribution Bias, Pt. 5

As noted above, if λ is the average number of events per second, then the probability density (per second) at time t is λ e^(-λ t)

During a short time interval μ ≪ 1/λ which I call the time modulus, how much does the probability density vary?

e^(-λ (t + μ)) = e^(-λ t) • e^(-λ μ)

so the probability density at t0+μ is e^(-λ μ) times the density at t0.

Recalling from calculus that for small ε, e^ε ≅ 1+ε, the probability density at the start of a time interval of length μ is very close to 1 + λμ times the density at the interval’s end.

For example, if μ is one percent of the average time between decays, then decay at the start of an interval of length μ is one percent more probable than at the end of the interval.
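
A one-line numeric check of that example:

    from math import exp
    lam, mu = 20.0, 0.01 / 20.0     # μ set to one percent of the mean interval 1/λ
    print(exp(lam * mu))            # 1.01005..., i.e. about one percent more probable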

Clive Robinson October 9, 2021 4:36 AM

@ MarkH,

He didn’t think this was useful, because with each speed-up of the counter, the likelihood became smaller that a decay would be detected before the maximum count was reached

What you are describing there is not a roulette wheel / waggon wheel system as previously discussed.

Further,

For example, if μ is one percent of the average time between decays, then decay at the start of an interval of length μ is one percent more probable than at the end of the interval.

You are clearly describing an exponential time bias, but you forgot to mention the point I’ve previously made: that no matter how short the time interval chosen, the time bias is still there and provably continues to increase. Also, because we are dealing with measurements of past events not future probabilities, we can see that the bias accumulates, as you would expect when you integrate the measurements[1].

It follows that even if you split the measurement into the slowly changing exponential time and what you think is random noise, the noise will still have exponential change, thus it is clearly biased. You are in a “turtles all the way down” situation: your measurements are biased. Also, not only does the “time bias” get through the roulette wheel or other counter based measurement system, it also gets through the XOR based von Neumann de-biaser.

It will, unless you take care, get through all the following systems as well, either through the data domain or through the time domain, because of the nature of the general problem of “Security-v-Efficiency”, which causes side channels that leak information about the source, side channels the following systems are “transparent” to.

Which is the provable points I made to @Freezing_in_Brazil, and have been making since of,

1, The radioisotope source has bias.
2, The bias gets through to the output of the TRNG unless you take steps to stop it.

[1] Traditionally in mathematics there are two basic types of proof, firstly “graphical”, secondly “logical”. I’ve previously described a way you can prove the bias gets through a roulette wheel or other type of counter based measurement by the graphical method: draw out the time base of a latch based system driven by two inputs, one to the CLK input and one to the D input, and integrate what looks like “random” output into a waveform that is very clearly not random. In the case of two oscillators you get a very nice sinewave at the difference frequency. As @Weather showed, a sawtooth with a linearly changing ramp. If you use an exponentially changing period then you will get an exponential curve after integration. The simple fact is “time bias” gets through the measuring detector; the graphical proof demonstrates that. It also gets through all the following stages eventually, unless you take specific steps to stop it doing so, and those steps are generally not easy to get sufficiently right.

MarkH October 9, 2021 9:10 AM

@Clive:

My presentation was too obscure — the “Joe Bloggs” design is an example of using a mistaken data extraction method, which is heavily biased.

However, it’s governed by the same math as the smart method.

I agree with you that the bias will not be zero. “Zero” is not a meaningful specification in this (and many other) situations. However, the bias in the output numbers can be reduced to an extremely small magnitude.

As I explained above when I wrote about “moving the goal posts,” I am not using “bias” to mean potential side channels (have you ever seen the word used that way in cryptographic literature?). If by “time bias” you mean potential side channels, that’s not part of my present analysis.

MarkH October 9, 2021 9:42 AM

.
Amount of Distribution Bias, Pt. 6

Here’s an application to a real-world example, the previously mentioned design concept written up by a Korean group for a tiny single-package TRNG device.

They measured λ at 20 events per second; their counting time interval μ is 3.2 microseconds, using an 8-bit counter.

Because radioactive decay intervals are always “front-loaded” (as described above), the initial count in a μ time window is always more probable than any succeeding count, with the last count in that window the least probable.

In their design, the maximum probability disparity in any μ window is 1.0000638. Yes Clive, there’s bias!!! Just how bad is it?

This distribution of bytes has 0.999999999969 bits of Shannon entropy per bit of output.

If an attacker knows the initial count, then there are 0.9999971 bits of guessing entropy per bit of output.

Caveat: I am not suggesting that these are the actual statistics of the mini TRNG design; only that these numbers characterize the bias introduced by the data extraction method.

MarkH October 9, 2021 10:07 AM

.
Amount of Distribution Bias, Pt. 7

For the above real-world example of short-interval “roulette wheel” data extraction, I computed:

0.999999999969 bits of Shannon entropy per bit of output

0.9999971 bits of guessing entropy per bit of output — if the initial count is known

Imagine an exhaustive search attack on a secret key using a gigantic bank of high-speed processors, with a calculated average completion time of one year.

If the attacker knew that the key bytes came directly from our hypothetical TRNG, and knew the initial count(s), then (on average) he could use optimal guessing to succeed almost 8.5 minutes sooner.

This would give him time to put on his New Year’s Eve party hat, and enjoy some bubbly!

Is the distribution bias so high as to be insecure? Or is it good enough for cryptographic use?

Clive Robinson October 9, 2021 11:00 PM

@ MarkH,

As I explained above when I wrote about “moving the goal posts,” I am not using “bias” to mean potential side channels (have you ever seen the word used that way in cryptographic literature?). If by “time bias” you mean potential side channels, that’s not part of my present analysis.

What we are talking about is the ability for someone to potentially gain a better than 50:50 advantage from the TRNG.

Any bias that makes it to the output of the TRNG that an attacker can detect means they have the ability.

I have shown that,

1, There is bias in the source.
2, That bias passes through the stages of a standard TRNG measurement.

Thus I have proved that my original comment to @Freezing_in_Brazil is true.

However you want to in effect say “so what” with,

the bias in the output numbers can be reduced to an extremely small magnitude.

But you do not say if you are thinking on a single measurement or a chain of measurements.

As I’ve further proved, the bias is accumulative, which means it will over time build up.

Unfortunately your,

If by “time bias” you mean potential side channels, that’s not part of my present analysis.

Indicates that your “present analysis” is not relevant to any argument or point I’ve made.

Especially when you appear not to understand what the time based side channel gives an attacker, which is a method by which the small signal can be pulled from the noise…

Several times in the past I’ve mentioned that I believe the NSA rigged the NIST AES contest. That is, the rules of the competition actively encouraged “side channels” to occur in practical implementations of AES.

So even though AES is theoretically secure to some ridiculous level, the time based side channels in practical implementations reduced the work involved to break an actual implementation to just passively watching bias in the timing, which leaked the key information quite far across the network. Because the systems involved had suffered from “Security-v-Efficiency” issues in the design stages they were “transparent” to time based side channels.

The thing is that every time the measurement head picks up a decay product it marks a point in time. That point in time works its way through the TRNG as a time based side channel. This enables an attacker to make their own “measurements” and compare them to those of the TRNG. They can then in effect “synchronize” their counter to that in the TRNG.

As everything after the counter in the TRNG is both “deterministic” and “known”, it matters not a jot how much “magic pixie dust” crypto grade hashing you add.

It’s why the designer has to break the time based relationship.

As I have noted some “entropy pool” –but not all– designs break the relationship sufficiently that an attack is not possible.

Now if you have a disagreement with that then say so, as well as why.

Likewise if you want to chase down bias on a single measurement, at least be very clear what it is you are stating, and that it is not relevant to what I’ve said. Oh, and also give the bias in terms of the “required number of bits” to represent it. Saying 0.999999999 might look good, but it takes approximately 30 bits to register it; and if the bias accumulates, as it does with a radioisotope source, then 29 bits with two measurements, with the number of bits continuing to decrease as the number of measurements increases. So it goes down to 28 bits with another couple of measurements, and so on.

MarkH October 9, 2021 11:45 PM

@Clive:

It simply doesn’t work for me, to try to resolve a plethora of (at least partially) separable problems at the same time.

In the unlikely event that we ever come to an agreement concerning the quality of numbers from a TRNG, I should be very pleased to engage a dialog concerning categories of side-channel and how risks might be mitigated.

For now, I’m focused on the numeric outputs. If they’re crap, the machine is unsuitable for security-critical usage, side channels or not.

========================

Please note that when I mentioned hashing above, it’s what the imaginary Joe Bloggs needed to do, because he made a rookie design guaranteeing miserably high bias in his numbers.

My standard for a TRNG is a device with raw “hardware level” output suitable for crypto applications. No hashes or other “whitening” involved.

MarkH October 10, 2021 1:03 AM

@Clive, re styles of communication and analysis:

When I was a lad, a kind scientist neighbor used to show me things I found fascinating.

Once he brought out a bowl with some water, and little can containing butter-soft metal (sodium as I recall) bathed in oil. He picked off a tiny fragment (with tweezers, perhaps) and placed it on the water.

Not only did it burn, but it did so with such high intensity that the reaction caused it to skim across the surface of the water at impressive speed, zig-zagging and bouncing off the sides of the bowl.

========================

When I read comments you’ve written, I often have the same feeling — the concentrated energy, the bewildering changes of direction. I sometimes struggle to “reverse engineer” what I guess were your thoughts and meaning.

When I want to be sure of myself, I proceed as methodically as I know how. If you care to take some deep breaths, slow things down, and focus more specifically, perhaps our dialog could be more enlightening.

Though it’s an unending battle, I’m trying to use more standard terminology (often, I don’t know the lingo well enough to do so). Would you care to join me in that?

MarkH October 10, 2021 1:50 AM

@Clive:

you do not say if you are thinking on a single measurement or a chain of measurements.

Actually, I do say, explicitly and repeatedly. The word “Distribution” is in bold text at the top of seven numbered comments above.

Two fundamental properties required of an RNG for crypto are (a) uniform distribution, and (b) independence. My interpretation of “a chain of measurements” is that you mean to include the effects of any lack of independence among distinct measurements.

Above, I’ve been discussing uniformity of distribution. Independence is quite a separate topic, and I don’t intend to mush things together. I’ll come to independence presently.

I’ve … proved the bias is accumulative which means it will over time build up.

I understood each argument I have seen to presuppose (explicitly or implicitly) that the time measurements are applied to deterministic processes.

Surely one of the most common causes of invalid proofs in mathematics is the application of a rule or theorem, the predicates of which were not satisfied.

Great mathematicians have fallen into this error.

MarkH October 10, 2021 2:02 AM

.
Amount of Distribution Bias, Pt. 8

I’ve seen comments implying that errors or uncertainty in parameters could meaningfully harm the quality of numbers output by a radioisotope TRNG.

As an experiment in parameter sensitivity, I’ve run the numbers for the mini TRNG (see parts 6 and 7 above) with the parameters changed in the most adverse direction by a total of 25%.

This tolerance allows for

• poor knowledge or control of λ,

• the maximum credible frequency errors in a crystal oscillator timing circuit, and

• some mysterious imprecision in the knowledge of Napier’s constant.

Results:

Shannon entropy per bit: 0.999999999952
guessing entropy per bit: 0.999980
year-long search reduction: 10.52 minutes

Clive Robinson October 10, 2021 9:48 AM

@ MarkH,

In the unlikely event that we ever come to an agreement concerning the quality of numbers from a TRNG

As I’ve indicated, it’s not the “quality” of the numbers but if the bias can be used to determine the numbers.

I think you would agree that it does not matter how good or bad the quality of the “random numbers” if the attacker can determine them with a reasonable degree of accuracy.

The “time bias” of the source forming a “time based side channel” that goes right through the TRNG, due to the chosen method’s “transparency”, allows for such an attack as I’ve indicated.

It’s a problem that exists in quite a number of TRNGs, including some that are “on chip”, especially those where a designer might not expect it, such as with oscillators. Many designers would not expect oscillators to have any “accumulative bias” as they are effectively regarded as AC coupled and the “noise” AWGN, so assumed to be “random around a constant average”. Because that is what the “convenient model” everyone gets taught first says, when in fact all natural sources decay or age in some way, so they do have accumulative bias, even though it might be of very very low frequency. But problems get really fun when you use two or more oscillators: not only do you get the natural synchronicity issues[1], but combining their outputs in most cases removes any bias de-coupling and simple de-biasing[2].

With regards,

Two fundamental properties required of an RNG for crypto are (a) uniform distribution, and (b) independence. My interpretation of “a chain of measurements” is that you mean to include the effects of any lack of independence among distinct measurements.

A “uniform distribution” is an expression of an “expectation” of where any individual measurement will fall. Whilst it can be drawn up by making many measurements, it does not in any way imply you are making more than one measurement.

To see this, consider a Galton pin board[3]: you can drop lots of balls in and you will get one of very many near expected distributions, or you can calculate the expected distribution and measure against it. However, whether you make zero or more measurements, the distribution is there in posse prior to the ball or balls being dropped through the pins. So even if you drop just one ball, you know there is a distribution and also what the probability is of where the ball will land before you drop the ball. But when the ball lands you know in esse what the probability is.

Thus you can make calculations based on each ball with respect to the expected average.

More importantly, if you were to very fractionally move the ball drop point or fractionally rotate the board from level, not just the average position but the whole distribution will change. The effect of accumulating such bias could therefore be seen in posse by calculation and in esse by measurement.

Unfortunately when talking about probability few take the time to actually state if they are using it in posse or in esse; they assume either it does not matter or that it can be determined from context.

[3] Named after Sir Francis Galton, it shows via the positioning of its pins how various distributions can be seen to occur. But importantly you can calculate the actual distribution by simple maths and logic without having to use the board at all,

https://en.m.wikipedia.org/wiki/Bean_machine

But they are fun to watch,

https://www.youtube.com/watch?v=h1TUvltRVdw

Sadly that video does not show what happens to the distribution when you rotate it around the viewers’ boresight and so move the board from being level, thus putting a constant bias into the distribution.
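
For anyone who prefers numbers to video, a tiny Python sketch of a Galton board makes the tilt effect easy to see: each ball goes right at each pin with probability p, so p = 0.5 is a level board and nudging p models the tilt described above (values are illustrative).

    # Minimal Galton board sketch: 'rows' pins, each deflecting the ball left/right.
    import random
    from collections import Counter

    def galton(balls=100_000, rows=12, p=0.5):
        bins = Counter()
        for _ in range(balls):
            bins[sum(random.random() < p for _ in range(rows))] += 1
        return [bins[k] for k in range(rows + 1)]

    print(galton(p=0.5))    # roughly symmetric binomial(12, 0.5) counts
    print(galton(p=0.52))   # a slight "tilt": the whole distribution shifts right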

[1] As I’ve mentioned, there is the “loose locked oscillator” issue, where supposedly “independent” oscillators fall into some form of “synchronicity”. The reality is that pairs of moving pendulums can become synchronized all too easily. We don’t know when it was first observed, but it was first recorded and investigated by the Dutch scientist Christiaan Huygens back in the 17th century. This “synchronization” issue of two or more events happening at the same time or in “lock step” with each other is actually one of the most common phenomena in nature, and seen all over the place when you are aware of it.

[2] Back before transistors were common, “DC amplifiers” for instrumentation were, to put it mildly, very troublesome. The solution: use a less troublesome AC amplifier. By using a biphase chopper modulator the very very small DC or low frequency level gets turned into a high frequency envelope or “Amplitude Modulated”(AM) signal. After AC amplification and processing, the DC or low frequency signal, much amplified, can be recovered by “envelope detection” or “direct conversion”. The single “latch” or the “counter” can be graphically proved to be a biphase modulator, as can the XOR gate… So that DC or very low frequency bias, that has been incorrectly assumed to be removed by AC coupling the two or more oscillators into the system, goes sailing right through the following processing to become available at the output of the system.

john October 10, 2021 11:58 AM

Hmmm… I guess what you are saying is that modulated bias [more ones than zeroes] passes through the system unfiltered. Kinda makes sense.

I guess another side channel, the one you talked about with AES is that compute time varies in a way that if it can be observed, may leak useful stuff?

I like your explanations, but they are still difficult for me :).

Thanks.

John

MarkH October 10, 2021 12:19 PM

@Clive:

it’s not the “quality” of the numbers but if the bias can be used to determine the numbers

By “quality” I mean freedom from bias, analogous to reciprocal quantities (resistance vs. conductance). I wrote above that by “bias” I mean (quoting with added italics):

deviation of the numeric interpretation of the RNG outputs from an ideal random generator, such that

(a) knowledge of the bias-producing defect in the RNG and/or

(b) analysis of any set of previous outputs

enables prediction of any future output with better-than-chance probability of success.

========================

Maybe I wasn’t explicit enough. By specifying numeric interpretation, I’m using bias to mean vulnerability to “data at rest” attacks, not “data in motion” attacks.

When you write about “time bias,” do you mean

(1) time-based side channels as with AES?

(2) something that appears in the output numbers potentially enabling a data-at-rest attack?

(3) or both?

========================

Though I didn’t learn until just now that the concept was Galton’s, I saw such a machine as a boy in a famous science museum.

It seems to me that I watched it for a long time, and was impressed how accurately the accumulation of the balls matched the bell curve marked on the plexiglass, cycle after cycle.

I don’t know whether I had yet learned how to divide integers, but I was getting a wonderful lesson in the connection between randomness and order.

MarkH October 10, 2021 12:54 PM

.
Amount of Distribution Bias, Pt. 9

Note to readers: when I mention the “Korean” or “mini” TRNG, it’s a design study cited above; see the comment time-stamped “September 27, 2021 5:32 PM”

I showed yesterday that even with a large adverse variation in parameters, the bias in the distribution of numbers generated by the short-window data extraction method remains extremely low — in that sense, the method is robust.

The Nickel 63 source in the design study from Korea has a half-life of one hundred years; over a 15 year interval, the isotope population (and therefore the time-rate of decays) will diminish about ten percent.
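
(A one-line check of that figure, from the half-life alone:)

    print(2 ** (-15 / 100))   # ≈ 0.901, i.e. roughly a ten percent reduction over 15 years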

What happens to the distribution bias of outputs, as the source fades? Using 18 per second for λ in place of their measured 20 per second, I compute

Shannon entropy per bit: 0.999999999975
guessing entropy per bit: 0.999986
year-long search reduction: 7.7 minutes

Note that these figures are all better than for λ=20/sec.

I note two changes for a 10 percent reduction of decays per second:

• the raw generation rate drops from 160 bits/sec to 144 bits/sec

• the distribution bias of those bits improves

Sut Vachz October 10, 2021 3:02 PM

@ MarkH @ Clive Robinson

I have been trying to follow the physical TRNG discussion but it’s mostly beyond me and I don’t grasp much.

But a question perhaps – why are we (if we are) so certain that elemental decay and radiation is random, when any mechanism to measure its “randomness” is inevitably biased, as the discussion seems to be saying?

The random universe is perhaps not random, instead perhaps just the long term deterministic result of initial conditions and physical interactions, radiation and absorption etc.

MarkH October 10, 2021 4:16 PM

@Sut Vachz:

(1) Various isotopes under certain conditions undergo decay that is not (completely) spontaneous. That’s what happened to Hiroshima. However, the isotopes in TRNGs — and their environments — are chosen for purely spontaneous decay.

(2) Measuring “randomness” and extracting random data are very different things. The first is a statistical measure, based on the aggregation of much data; the second is the polar opposite, making isolated particular measurements to gather the unpredictable.

(2a) The measurement challenges of testing for randomness, vs. extracting random numbers, are also very distinct.

(3) All measurements are imperfect; there’s a lot of literature about that. A large (often predominant) proportion of laboratory science consists of characterization, analysis, quantification, and compensation of measurement errors. Scientists actually know how to do that stuff.

(4) The randomness of spontaneous decays (like those of Nickel 63 in a TRNG source) rests on the “solid granite” of decades of observation and analysis.

(5) When I consider the propositions that “the universe is random” or “the universe is not random”, I don’t know how to assign any meaning to them. I’m a wretched old engineering geek, and focus on more concrete questions.

(6)

… the long term deterministic result of initial conditions and physical interactions, radiation and absorption etc.

We know that some types of decay — see (1) above — are not affected by “physical interactions, radiation and absorption etc.” (at least, up to certain energy thresholds) because this has been intensively studied for a long time.

If you want to get philosophical (I don’t; the philosophy spouted by commenters on this site is uniformly at the comic-book level), there’s a concept — and I think some elaborated theory, as well — that the laws of physics as they have existed for billions of years were indeed created by initial conditions.

This notion extends at the macro level to the anisotropy of cosmic background radiation, and the very uneven distribution of galaxies.

Anyone who would suggest that particular interactions of elementary particles were somehow foreordained by the big bang would receive a LOT of pushback from scientists who have excellent agreement between theory and observation, without the addition of such an exotic (and needless) extra hypothesis.

Clive Robinson October 10, 2021 5:07 PM

@ Sut Vachz,

The random universe is perhaps not random, instead perhaps just the long term deterministic result of initial conditions and physical interactions, radiation and absorption etc.

If you are saying “can the Universe be deterministic or chaotic”, we used to believe it was, hence the famous “God does not play dice” comment nearly a century ago.

The thing is we actually have no way to “measure random”; all we can say as the observer of the output of a black box bit generator is “We can not see a deterministic process that gives us the bit sequence observed”.

Which at the end of the day is actually saying very little indeed, which is why the “Die hard”, “Die harder”, NIST and other statistical tests are really there “to find failure”, not “show randomness”. So your random bit generator is really shoddy if it fails any of them, but passing all of them does not mean your generator is in any way random. Because a basic counter driving any modern crypto algorithm will produce an output that passes all the tests.

It’s one of the reasons I say using a crypto algorithm such as a crypto-hash on the output of an alleged random source is “magic pixie dust thinking”, and it in no way “adds entropy” to the generator as some people have claimed in the past.

To see why, rather than use a crypto-hash, use AES in a chaining encryption mode. Feed the Input with all zeros and observe the output. What do you see? Well what in theory can not be distinguished from a random stream of bits. However if you know the AES key and starting IV you can decrypt the stream and get the all zeros input back as output.

How much entropy is in that all zeros input?

Well, you get exactly the same amount of entropy after the encryption, that is obvious, because you get the same amount of entropy as the input after you decrypt the encryption output, and importantly both the encryption and decryption algorithms are fully deterministic.

The thing is the deterministic encryption algorithm is so complex you can not in theory tell it apart from a high entropy source…
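
A concrete sketch of that experiment, assuming the PyCryptodome package is available (the key and IV are arbitrary demo values):

    # Encrypt an all-zeros stream with AES-CBC: the ciphertext passes for "random",
    # yet anyone holding the key and IV recovers the zero-entropy input unchanged.
    from Crypto.Cipher import AES

    key   = bytes(range(16))        # arbitrary demo key (never use a fixed key!)
    iv    = bytes(16)               # arbitrary demo IV
    zeros = bytes(64)               # four all-zero 16-byte blocks

    ct = AES.new(key, AES.MODE_CBC, iv).encrypt(zeros)
    print(ct.hex())                                   # looks statistically random

    pt = AES.new(key, AES.MODE_CBC, iv).decrypt(ct)
    print(pt == zeros)                                # True: no entropy was added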

Speaking of entropy, rather than “information based entropy” let us consider physical entropy. That is, all coherent physical objects decay in time; they go from an ordered state to a disordered state.

Whilst that sounds like “Posh Speak”, all it really means is physical objects wear out with use.

So those dice you throw leave a little bit of themselves behind on your hand, and another little bit on every surface they touch as part of the throw. You are in effect “grinding them down”. The same with any coin you flip or any other physical object (yes, even diamonds).

As this “wear” on physical systems is accumulative, it means all physical sources used for TRNGs have bias. Now as the likes of dice and coins are subject to a uniform gravitational field and their mass decreases with the wear of use, the energy that goes to “grind them down” also decreases by a small fraction… So the wear is not linear, but more a power law or exponential decay with usage (rather than time).

Winter October 11, 2021 4:13 AM

@Clive, Sut Vachz,
“The thing is we actually have no way to “measure random”; all we can say as the observer of the output of a black box bit generator is “We can not see a deterministic process that gives us the bit sequence observed”.”

Does “Random” in the conventional sense actually exist?

“True Randomness” in the conventional sense means unpredictability due to “information loss”. All of physics is “unitary”, which means no information is ever lost. So, in this sense, “true randomness” cannot exist.

Contrary to what popularized publications write, quantum mechanics (actually, quantum field theory) is not random. The “wave function” is strictly deterministic. What is stochastic is the measurement outcome. But inside quantum mechanics, the measurement process is not understood. For one thing, measurements in quantum mechanics cannot be described by quantum mechanics (or quantum field theory). It is still debated whether measurement outcomes are really “random” in a meaningful way.

So, it is unclear whether “True Randomness” in the conventional sense actually exists.

If we talk about randomness in terms of information reconstruction in terms of computability and computational complexity, then you can define randomness as information that cannot be reconstructed again within certain limits of computational efforts. But then you have to define “computational effort” in a useful manner.

Clive Robinson October 11, 2021 7:39 AM

@ Winter,

So, in this sense, “true randomness” cannot exist.

If so, then the question arises as to whether we are,

“Just chasing complexity and chaos?”

Which might be the case, when you consider the way things are with complexity by “black box” and chaos by “white box” testing by output observation of assumed or known inputs.

As I’ve mentioned in the past, you can intuitively draw a path between fully deterministic and truly random, with the path representing a spectrum where both chaos and complexity exist.

As we both know “intuition” is the start not end of a journey of discovery and any path is seldom straight or undivided.

Winter October 11, 2021 8:28 AM

@Clive
““Just chasing complexity and chaos?””

Indeed, that part can be rigorously proven (coarse graining in entropy, Kolmogorov complexity, quantum computational complexity). There is only so much you can know within a limit in time and energy.

@Clive
“As I’ve mentioned in the past, you can intuitively draw a path between fully deterministic and truly random, with the path representing a spectrum where both chaos and complexity exist.”

Except, that the laws of physics might not leave room for “truly random”.

Clive Robinson October 11, 2021 9:30 AM

@ Winter,

Except, that the laws of physics might not…

I’m always a little cautious about “laws”, as in reality they are “mathematical models of reality”.

I was once told a truism,

“Words can not describe the simplest of objects perfectly.”

I feel the same way about maths and the real world.

Back in the early 1890’s Georg Cantor published a little proof that was kind of graphical as well as logical. It showed that no matter how you try to pair them up, the set of reals is always larger than the set of integers. One consequence of which is that we can not count the real numbers, so we can not measure them in the then traditional way. Often called the Cantor Diagonal Argument, it is still to this day “throwing rocks in the road” of much reasoning.

So it could be argued that we can neither measure random, nor show whether a sequence is deterministic or not, in the sense that it can be produced from a smaller sized sequence.

As for physics there is that old saw,

“In physics you are taught a succession of lies, each more accurate than the one before.”

So we have to look at the turtles and ask not “Do they go all the way down?”, but “Will we ever be able to know, even in a finite universe?”.

One thing I do know however like the issue of integers and reals, there will always be more infinite sequences than it is possible to measure.

Winter October 11, 2021 10:12 AM

@Clive
“I’m always a little cautious about “laws”, as in reality they are “mathematical models of reality”.”
“So it could be argued that we can neither measure random, nor show whether a sequence is deterministic or not, in the sense that it can be produced from a smaller sized sequence.”

Still, we cannot use current physics to argue about “True Randomness”, as current physics does not seem to allow it to exist. If you want to argue that it does exist, you leave the realm of empirical and theoretical science.

MarkH October 11, 2021 10:45 AM

More dollar-store philosophy …

Considering that this is a security blog, the practical requirement for the generation of secret numbers is that they be unpredictable to any potential attacker.

The most conservative sources of unpredictability, are those which are unpredictable to everyone.

No one knows how to predict (to within a fraction of half-life) when an unstable nucleus will spontaneously decay. To my limited understanding, if present theory is correct, no one can ever discover how to make such a prediction: it’s literally impossible.

That theory and its associated base of experimental evidence have stood for three generations.

If you can convincingly demonstrate that it is possible — even in theory — to successfully predict such an event, then I think it very likely you will eventually receive a Nobel prize worth more than USD 1 million.

For a population of one unstable isotope which is insensitive to local environmental variability and does not decay into other unstable nuclei whose decay products would also activate the detector, the probability density as a function of time for the next decay detection is accurately given by

λ e^(-λ t)

That is the only predictive information we have about decay timing in that system. It is physically impossible to have any additional predictive information, unless the science is wrong.

If you can show that the science is wrong, then go persuade the physics community: I guarantee you that they will be extremely excited by a convincing case that the theory is wrong.

Extraordinary claims require extraordinary evidence.

MarkH October 11, 2021 11:02 AM

@Winter:

The probability that a fair coin toss will turn up “heads” is quite nearly 0.5. One could say that the probability is deterministically one half.

That doesn’t mean that the outcome of the next toss is therefore fully deterministic!

Quantum wave functions are probability functions. If the wave function is fully deterministic, does that mean that the outcome is also?

Winter October 11, 2021 2:05 PM

@MarkH
“That doesn’t mean that the outcome of the next toss is therefore fully deterministic!”

But it is fully deterministic. At least according to physicists.

@MarkH
“Quantum wave functions are probability functions.”

That is an interpretation (or dogma). There is nothing in quantum mechanics that requires that to be true. The outcome of a quantum measurement is not described by quantum mechanics.

@MarkH
“If the wave function is fully deterministic, does that mean that the outcome is also?”

We don’t know, as the measurement process cannot be described by quantum mechanics. Empirically, it is stochastic. But there might be “hidden variables” that fully determine the outcome. Bell’s inequalities have a few loopholes that might not exclude such hidden variables.

SpaceLifeForm October 11, 2021 3:30 PM

What is Time?

Does Time really Exist?

What is Measurement?

Remember, you, the Observer, are inside the System, and Observation or Measurement of ‘stuff’ inside the System interacts with the System, and results in Change.

There are Observations that we can suspect to be True, but can never Prove.

See Gödel.

A Random Roulette wheel can not exist, as it will always have a bias due to the fact that Pi is irrational (also transcendental, but irrational is sufficient), and you can never, with a physical device, divide a circle into any number of slots without bias (except when the entire wheel is one slot).

Maybe the universe is really an infinite set of Roulette wheels. And the Croupiers are really Turing Machines processing a Randomly Changing tape.

See Recursion.

Clive Robinson October 11, 2021 4:45 PM

@ MarkH, Freezing_in_Brazil, John, SpaceLifeForm, Sut Vachz, Weather, Winter, Other interested folks,

No one knows how to predict (to within a fraction of half-life) when an unstable nucleus will spontaneously decay.

Whilst probably true, you are looking at the problem from the wrong perspective.

Measurements take a period of time (this is an accepted fact).

Thus the question is not,

“Will a specific atom X decay in measurement period Y?”

Which is what you are saying.

The question actually is,

“Will any atom from population size N decay in measurement period Y?”

The probability of which changes exponentially –ie as a % of a %– as the population decays.

So the 50:50 probability point for any fixed duration of time Y only holds for one single moment in time and no other.

So to hold at the 50:50 point for more than one measurement you would have to change the measurement period/duration exponentially, in synchronisation with the actual decay rate.

I think most will realise that changing the period/duration of a measurement with the required degree of fine control is going to be somewhat problematic, even with analog tracking loops.

Sut Vachz October 12, 2021 12:06 AM

@ All commenters, respectfully

Just a couple of points arising from the above in regard to mathematics and the use of mathematics in applications.

Cantor is among those influential in introducing actual infinities as a part of mathematics. This was never really done before. Not to rely on quoting authorities, but it is worth noting that Gauss stated that it is an error. Since true science is about that which exists, and actual infinities do not exist, this has damaged mathematics. Vast parts of modern mathematics depend on these actual or completed infinities. Many of these results seem interesting and may be true but stand without real proof. Some other way will have to be found to establish them, and even properly understand what they are about. The “paradise” of Cantor is illusory.

Statements about the integers often use these infinities and are in need of revision. We can not even say “there are an infinite number of integers”. While it is true there is no largest integer, it does not follow from this that there exist in actuality an infinity of integers. There is a potential infinity, one can always add another one. There is no way, however, even in the intellect, to complete this.

Beyond integer questions, there are malformed constructions of “real numbers”. Cantor’s and similarly Dedekind’s treatment of the notion of “real number” are typical. They boil down to the notion that any sequence of longer and longer decimal expressions (“expansions”), because its terms get smaller sufficiently fast, defines something. This is unjustified. Sometimes the sequence does define something, because for example, there is a known existing quantity, for example sqrt(2), e, pi, etc. for which there is defined method for approximation by decimals. But there is no general claim possible that the expansion will always tend in the limit, or “sum” to something. Saying “let the (equivalence class of the) expansion be the real number” is ultimately meaningless, without some kind of qualification on the expansions.

For computation, then, there is a question. What are we computing when we try to simulate “real numbers”, perhaps on a machine, algorithmically ?

MarkH October 12, 2021 12:53 AM

@Clive:

(1) The preceding comment under your name seems to suppose that the time allowed for each measurement is fixed, and further that it should (or even must) be fixed at the mean time before the next decay. Did I understand correctly?

Neither of these is necessary for a TRNG.

Some might find it inconvenient, or even offensive, but there’s no law dictating that the raw bit generation rate be constant.

A hungry bear catches a fish (taking an unpredictable length of time), eats it, and then catches another (taking an unpredictable amount of time). As long as the bear doesn’t starve, the process is satisfactory.

A TRNG can work the same way. It waits for the next decay, extracts data, and waits again. The time required for each measurement is controlled by decay detections, not imposed by the time-measuring system.

(2) I’ve read the phrase “time bias” in several comments from you, both on this thread and others. I think it would be really helpful for you to write a definition (as one would when giving a lecture to a class, or writing a paper) so we can all understand its meaning with clarity.

In the definition I offered for “bias” above, I tried to be concise, explicit, and to use language I hope minimizes vagueness or ambiguity.

Clive Robinson October 12, 2021 2:36 AM

@ MarkH, ALL,

seems to suppose that the time allowed for each measurement is fixed, and further that it should (or even must) be fixed at the mean time before the next decay. Did I understand correctly?

You are asking two actually independent questions there,

1, Fixed Measurement.
2, Fixed mean.

So two answers.

Answer 1
As I’ve noted previously, with the roulette wheel / wagon wheel measurement system there are two inputs, the Data or D-input and the CLK input, and an output Q. Likewise a counter has two inputs.

Which way round you connect the inputs determines what you measure, frequency or time. But you need to remember either can be a valid measurement for use in a TRNG.

So you can count the time between decays in the population N, or you can count the number of decays in population N in a given time period. Both will record the exponential change in the source output formed by population N. That is, time will increase and frequency will decrease exponentially as population N ages.

However one thing is clear: despite not being able to predict when an individual atom will decay, when dealing with a population N, if N is large or the measurement period is large, the number of decays in that time period or the time period for a number of decays becomes more and more predictable.

So two effects logically must be occurring: one is the obvious exponential half-life, the other is that for every long or short decay interval there is a balancing decay or decays, otherwise there would be no averaging to the half-life curve.
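For illustration only (the decay constant, starting population and counting window below are arbitrary assumptions), both ways of connecting the counter can be simulated from the same event stream, and both show the same exponential drift as the population ages:

```python
import random

random.seed(1)
lam = 1e-3      # per-atom decay constant in 1/s (assumed)
N = 100_000     # starting population (assumed)
window = 0.1    # gate time for the "frequency counter" view, in seconds

t = 0.0
next_report = 0.0
while N > 2000:
    # for a population N the wait to the next decay is exponential with rate N*lam
    t += random.expovariate(N * lam)
    N -= 1
    if t >= next_report:
        mean_gap_ms = 1000.0 / (N * lam)     # "period counter": time between decays
        mean_count = N * lam * window        # "frequency counter": decays per window
        print(f"t={t:7.0f}s  N={N:6d}  mean gap {mean_gap_ms:6.2f} ms"
              f"  mean count per {window}s gate {mean_count:5.1f}")
        next_report += 500.0
```

The gap between decays stretches, and the count per fixed gate shrinks, at exactly the same exponential rate, which is the "either can be a valid measurement" point above.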

Answer 2
You set the argument for 50:50, I’ve simply pointed out that due to the exponential decay, for any given time period that 50:50 can only happen at one point as a given population N ages.

Something I’m assuming you won’t contest.

To claim otherwise would imply a belief that science would not agree with.

Someone has already claimed that the count period calculation given in the paper you quoted did not have a term that changed with time. The implicit implication of that claim being,

1, That the count remained invariant with time (ie constant).
2, Therefore the decay rate must likewise be constant.

Now, for that to be true, either new radioisotopes would have to forever magically appear in the population N, or the person implicitly believes in perpetual motion devices. Because each decay is giving out energy from the source to the detector, so with a count output that does not change with time the energy output would be constant forever without any balancing input of energy or its dual, matter…

Winter October 12, 2021 4:07 AM

@Sut Vachz
“This is unjustified. Sometimes the sequence does define something, because for example, there is a known existing quantity, for example sqrt(2), e, pi, etc. for which there is defined method for approximation by decimals. ”

You are entering treacherous terrain. Your “defined methods” require numbers for which an explicit or implicit algorithm can be written down in a symbolic language. There are only countably many such algorithms. So your definition only allows for as many real numbers as there are integers, i.e., algorithms.

However, Cantor showed that not all real numbers can be counted by integers. This proof is based on the axioms of set theory. It is only invalid if you reject set theory in its current form.

Clive Robinson October 12, 2021 7:27 AM

Not to rely on quoting authorities, but it is worth noting that Gauss stated that it is an error. Since true science is about that which exists, and actual infinities do not exist, this has damaged mathematics.

We live in what is currently believed to be a finite universe. So from the perspective of energy/matter being finite, Gauss’s “true science” is likewise finite in some respects.

However I’m not aware of anyone with sufficient knowledge claiming maths is “true science” or “real science” or even science.

Maths is the equivalent of a drawing or model, it helps explain not just what goes on in science but many of the affairs and creations of man that have no basis in science whatsoever (think finance etc).

You could argue that science follows a line through physics, chemistry, biology and at some point when objects of energy and matter have the ability to make what appear to be choices, science either ceases or changes.

That is, a point is reached where fully deterministic mathematics ceases to be a valid descriptor and statistical / probabilistic methods start to be the only way forward.

Some say it is the boundary between “hard and soft science”, others indicate it is where philosophic reasoning takes over from the various logics by which fundamental maths is formulated.

Either way you could say that neither maths nor philosophy is “true science” or even science at all. They are in fact just frameworks in which “true science” can be described and modelled in limited ways.

As for infinity, does it exist in a meaningful form in a finite universe?

Well, if you can “name/count it”, no: by definition any countable integer is finite, that is, it has a logical successor so cannot be infinite. So you keep counting forever and, no matter how fast, you do not reach infinity. Arguably between any and every two integers there are other numbers, those that can be found as ratios of integers; but between them there are yet other numbers, some of which have infinite expansions such as 1/3 due to the integer base / radix used to express them, and others whose expansions are infinite in every integer base, thus “go on for ever”. The number of these “go on forever” numbers is not just unknown, it cannot be quantified in any meaningful way in “natural maths”. However the fact they cannot be quantified does not stop them existing, or being of very real use via various induction methods. After all, try to come up with a realistic definition of the number represented by “zero” and you very quickly run into trouble unless you have the concept of “infinite”, because otherwise what would 1/0 be? How about 0/0?

Clive Robinson October 12, 2021 7:31 AM

Oops,

My immediate above was for @Sut Vachz.

Oh, and any others reading along who have neither a headache nor splinters in their fingers from scratching their noggins.

Winter October 12, 2021 8:25 AM

@Clive
“As for infinity, does it exist in a meaningful form in a finite univers?”

Mathematics is the language of science. It can describe things that are real and things that are not. Many scientific models use mathematical concepts that have no counterpart in reality. Noteworthy examples are imaginary numbers and infinities. Quantum mechanics is built on imaginary numbers and integrations over infinity. If all ends well, the imaginary numbers and infinities are gone by the time the calculations reach a measurable quantity. If they are not gone, the outcome is generally discarded as wrong.

In this way, infinities and imaginary numbers are crucial concepts in scientific theories and models, but not in reality.

But the whole question of “What is real?” has no place in quantum mechanics and physics. As Kant already asserted, we cannot know the true nature of things. Perception, models and theories are all we have.

JonKnowsNothing October 12, 2021 9:58 AM

@Clive, MarkH, All

re: splinters in their fingers…

Only my page up and page down keys are getting a workout so far …

This is a fascinating discussion! Only wish I’d been exposed to some of this information in this format years ago. A downside of our “cram it to test it” method of education is that deep ranging explorations of concepts are hard to come by.

It doesn’t always matter if you “get it” on first pass. The exposure and counter point ideas are what’s important. Knowledge, Understanding, and Comprehension in practical life terms, comes in stages.

Please carry on all.

MarkH October 12, 2021 10:49 AM

@Clive:

Thanks for your clearly reasoned and worded reply. Your words

when dealing with a population N, if N is large or the measurement period is large, the number of decays in that time period or the time period for a number of decays becomes more and more predictable

point toward the heart of the matter: with modular time sampling over a short period, the resulting count becomes less and less predictable.

========================

Where our perspectives diverge, is how the distribution of such counts depends on the precise value of λ. Would you say that I got that right, or am I misconstruing the point of disagreement?

If I understand correctly the point of view from which you’ve been reasoning, the intervals between decays will — over a sufficiently large number of measurements — exhibit a regression toward the mean, causing the distance between (a) the mean of measured values, and (b) the exact value of λ to grow smaller and smaller: eventually within milliseconds, microseconds, or even nanoseconds.

Am I painting the picture about right? If not, will you kindly correct my understanding of your perspective?

Weather October 12, 2021 1:58 PM

@MarkH all

An expansion attack: say you have ten numbers and want to guess the next 100. You generate the 100, and then when the next number comes in you remove 50% of the generated numbers. Rinse, repeat.
But if you move the sliding window to the right, number 99 isn’t chosen out of 2 generated numbers but out of 100.

Take K14-15: it has a half-life of the order of the lifetime of the universe. It might look predictable, but we have only recorded 1-e1000 of that time; the same logic applies to a 100 year half-life.

Winter October 12, 2021 2:31 PM

@MarkH
“exhibit a regression toward the mean, causing the distance between (a) the mean of measured values, and (b) the exact value of λ to grow smaller and smaller: eventually within milliseconds, microseconds, or even nanoseconds.”

You are not estimating lambda, but the time until the next decay. The number of historical decays lets you estimate the current lambda, but not the time to the next decay event. No amount of historical observations will let you do that.

Sut Vachz October 12, 2021 8:55 PM

@ Winter @Clive Robinson et @ al

“Cantor showed that not all real numbers can be counted by integers. …”

His proof only holds if you accept that actual infinities exist. In his diagonal argument, the completed infinity of integers is presumed in setting up the “list” of real numbers, and also in assuming a real number is always defined by just giving its decimal coefficients, when he constructs the counterexample along the diagonal.

One could try a lesser theorem: there is no function f(m,n) of two integer variables such that every function g(n) of one integer variable is f(m0,n) for some m0. This statement does not use actual infinities; the functions involved are assumed to be defined “algorithmically”. And this form of the theorem is true, using the same Cantor style diagonal argument, considering the function h(k) defined as f(k,k)+1. This shows that not all “algorithmically” defined real numbers can be listed “algorithmically”.
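A small, purely illustrative check of that diagonal step (the particular f below is an arbitrary made-up function, any two-argument integer function would do): h(k) = f(k,k)+1 cannot be row m0 of f for any m0, because it disagrees with that row at position m0:

```python
def f(m, n):
    # an arbitrary example of a two-argument integer function
    return (31 * m + 7 * n) % 1000

def h(k):
    # the "diagonal plus one" function from the argument above
    return f(k, k) + 1

# h differs from every row f(m0, .) at the diagonal position n = m0
for m0 in range(50):
    assert h(m0) != f(m0, m0)
print("h(k) = f(k,k) + 1 disagrees with row m0 at n = m0, for every m0 checked")
```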

Cantor’s set theory also involves the assumption of actual infinities. Wherever those creep in, set theory has to be reworked.

“However I’m not aware of anyone with sufficient knowledge claiming maths is “true science” or “real science” or even science. …”

Science is knowledge through causes. Mathematics works by (formal) causes. We see the causal chain in its deductive arguments. It deals with quantity, abstracted from any sensible qualitative particulars. It can then be applied to real physical quantities because these can be considered in abstraction from their particular distinguishing physical qualities.

“ … imaginary numbers and infinities. …”

“Imaginary” numbers were speculative originally, but can be justified and do exist as much as any other numbers do. Infinities can be used sometimes, for example, when understood as meaning some value that exists and is being obtained by a limit process, as in integration formulas.

It’s true we know by effects, and can’t get back to some ultimate inner cause, but that doesn’t mean we don’t know reality to some degree.

Clive Robinson October 12, 2021 10:06 PM

@ Sut Vachz,

His proof only holds if you accept that actual infinities exist.

I both do and do not think that “infinities exist”, and no I’m not mad or likely to go mad from this belief (unlike some mathematicians around Cantor’s time who are reputed to have done so).

Further my reasoning for both is rather more obvious than both set theory and logic.

Firstly we have reason to believe our universe is finite and importantly is bounded. Which anyone who has studied thermodynamics knows makes it a special case.

What is in question is if it is bounded in time or not, for which currently there are no “obvious” answers.

Why is time important?

Well, we believe the universe is expanding in volume in what we believe is a non-finite space. That is, simplistically, think of it as being inside a balloon that is being inflated, but we cannot look outside of the balloon to see if there are any external limits.

If time is not bounded then the Universe will at some point expand to some definition of infinity.

But further, consider there is a very real difference between information and the duality of energy/matter. Science currently believes that energy/matter is “granular” and has a smallest size for objects.

Therefore the universe is made up of a finite number of finite objects and as such is discrete. However the space they occupy is expanding towards infinity and as such is continuous.

Whilst the former does not allow for infinity, the latter does at some point in time.

But… even if the Universe is made of finite objects in a finite space, this does not exclude the potential for infinity.

Consider how the discrete finite objects can be placed in a continuous space: it presents you with an infinity of possible positions for the finite objects to be positioned in.

But further, consider “natural numbers” or integers from 1 upwards.

There is the very reasonable assumption that every integer has a successor, therefore no matter how big a number is there will always be another number bigger. But from that it follows that every number has a predecessor, thus we have negative numbers that arguably do not have a “natural form”. That is, whilst I can give you ten apples, I cannot give you minus ten apples. That is, whilst positive integers have physical meaning, the negative integers only have meaning when we consider the movement of physical objects.

But we also know that fractions are meaningful within the integers when drawn on a line. That is I can give you 1/2 an apple, or 3/4 of an apple and so on with an integer divided by an integer to give a ratio that can be scaled to fit between any two points.

What Cantor also showed was that, however many integers there are, in a continuous space there will always be other numbers that, due to the spacing between integers, cannot be represented by any integer or ratio of integers; the integers and their ratios are still discrete, not continuous.

So the bottom line is that the continuous allows an infinity of positions to exist even though the points of reference are discrete and finite.

Which is kind of what you would expect with “scaling”: any given ratio can apply across an infinite range.

To see this consider standing between two infinite parallel lines. As they go into the distance from the point of the observer those lines become the same point, even though no matter how far you travel along them you will never get to such a point as it does not physically exist; yet it does exist in terms of observed distance / effective movement. How do you explain that in a meaningful way without accepting that the finite and infinite can co-exist in an observed universe?

MarkH October 12, 2021 10:24 PM

@Clive, Winter:

I see that I should have written “causing the distance between (a) the mean of measured values, and (b) the exact value of 1/λ to grow smaller and smaller”.

Apologies for my leaving out the reciprocal, and using wording which perhaps obscured that I intend

| avg – 1/λ |

where avg is the mean of a series of measurements of the interval between successive detections.

We’re not talking about measuring λ. Rather, as I understand Clive’s position, the diminution of λ as the source ages must inexorably bias the numbers output by such a TRNG; and even small changes in λ may be significant.

I’m spelling out what I guess could be the underlying logic of Clive’s position, and await his reply as to whether I “got it”, or misunderstood.

Sut Vachz October 12, 2021 11:50 PM

@ Clive Robinson

“Why is time important?
Well we believe the universe is expanding in volume in what we belive is a non finite space. … “

I can’t see that time and space exist outside the mind. Objects, things and objects in motion are real and have a separate existence. It’s true objects can be in different places, but “space” seems like the “ether” and isn’t needed and doesn’t exist except perhaps in the mind as the negation of objects. Time is the completion in the intellect of something in motion, a kind of number of motion. It doesn’t exist outside the intellect, just as numbers like “6” are not found as objects in the physical world. There can be several physical things, but the numbering as 6 things occurs in the mind. Time is analogous but for physical motions.

The continuum if there is one perhaps can be divided and points picked out, and this can be done repeatedly, so there is a potential infinity of locations for things. But that doesn’t mean that the continuum is composed of an (actual) infinity of points. It’s just something that can be divided indefinitely at will. A line is not “made up of” points.

“ … positive integers have physical meaning, the negative integers only have meaning when we consider the movment of physical objects.”

The natural numbers 1, 2, 3, … are the species of discrete quantity. They are neither positive nor negative. Positive and negative come in as relations between a number and a number, e.g., +3 expresses a relation between say 8 and 5, and -3 the relation between 5 and 8, analogously to double and half. The arithmetic of “signed numbers” can be developed from the arithmetic of the natural numbers in a straightforward way.

Finite and infinite in the potential sense do seem to exist, but actual infinity does not exist. Actual infinity was brought in as a move in relatively modern mathematics, but it is a false move and does not connect with genuine mathematics.

Winter October 13, 2021 1:06 AM

@Sut Vachz
“His proof only holds if you accept that actual infinities exist. ”

But nothing in mathematics “has to exist”. Do imaginary numbers exist? Does it matter?

Cantor used the axioms of set and number theories and the rules of proofs. I have not seen anyone finding an error in his proof.

If you want to add an axiom that actual infinities do not exist, I suspect you will find that things start to break in many places in mathematics.

PS: Many people have complained about Cantor’s use of set theory in his proof. There are obviously discussions about contradictions in set theory, but we already know from Gödel that a sufficiently strong axiomatic system cannot prove its own consistency.

ht tps://en.m.wikipedia.org/wiki/Russell%27s_paradox

Clive Robinson October 13, 2021 4:02 AM

@Sut Vachz,

It’s true objects can be in different places, but “space” seems like the “ether” and isn’t needed and doesn’t exist except perhaps in the mind as the negation of objects.

I’m really uncertain what attributes you are attaching to “space” to say that.

Especially when you say,

“objects can be in different places”

For any physical object to move it must have,

1, A point of origin, to be coming from.
2, A direction to move, to be going to.
3, A clear path in which to travel.

Underpinning these is the idea of “space”, that is, the freedom to move can only happen if there is no hindrance to the desired movement.

At a more fundamental level, if physical objects are discrete they cannot be joined to other physical objects, otherwise they are elements of a larger object. That is, there is a gap or volume between discrete objects where there is no other physical object or connectivity. This absence of a physical object or connectivity makes a volume that is “empty” and is what space is commonly considered to be.

https://dictionary.cambridge.org/dictionary/english/space

However science as a general rule has more formal definitions for commonly expressed ideas and terms. Importantly, different knowledge domains may have different definitions, and “space” is no exception to this.

However from the perspective of Physics,

“Space is one of the few fundamental quantities in physics, meaning that it cannot be defined via other quantities because nothing more fundamental is known at the present. On the other hand, it can be related to other fundamental quantities. Thus, similar to other fundamental quantities (like time and mass), space can be explored via measurement and experiment.”

(https://en.m.wikipedia.org/wiki/Space)

It’s the definition of space I use unless I state otherwise. In part because, whilst it allows for non-Euclidean space (spacetime) over considerable distance, it generally indicates that except under extremes local to an object, space is effectively Euclidean.

That is, on a large enough curved surface of a spherical object the curve cannot realistically be distinguished from flat. Which is why “local maps” work by a simple normal projection.

SpaceLifeForm October 13, 2021 3:38 PM

My Philosophy (Cliff Notes version)

Space is Infinite

Space makes H

Big Bang is Illusion

As to a Practical TRNG device, I believe that the decay detector would have to be a sphere, with the radioactive source inside. And, then hope that none of the decay events can tunnel thru your detector, un-detected.

MarkH October 13, 2021 9:35 PM

@SpaceLifeForm:

For a TRNG, a decay detector with 4π steradian coverage is not necessary. The closer a design gets to all-angle coverage, the more difficult it becomes; and in any case, no detector will catch every decay.

If you want about 100 detections per second, then detecting ten percent of the decays from a 1000/second source is exactly as good for a TRNG as detecting one hundred percent of the decays from a 100/second source.

Every detected decay is an excellent source of unpredictable data; provided that the rig has about as many detections per second as the designer intended, the fraction of decays not detected — however many — does no harm to the TRNG’s collection of unpredictable entropy.

I showed above — based on simple math flowing directly from well established physics — that with modular timing, not only can the uniform distribution of output numbers be excellent, but also that parameters can be varied a lot while maintaining excellent distribution; see the comment timestamped October 10, 2021 2:02 AM (and one earlier comment for comparison of results).

Any configuration which detects a reasonably predictable fraction of source decays can work just fine!

Clive Robinson October 13, 2021 11:14 PM

@ SpaceLifeForm,

As to a Practical TRNG device, I believe that the decay detector would have to be a sphere, with the radioactive source inside. And, then hope that none of the decay events can tunnel thru your detector, un-detected.

That is what you would need if you wanted to catch all decays, as you would in a “nuclear battery”.

However the assumption is a uniform source…

That is the decay output is on average the same in all directions for a sufficiently large source.

So… If you have a large population size N then you will get a total of X particles decaying in a given period of time. If however the source is uniform and you only monitor 1/12th of the surface you expect to get X/12 of the particles. Or in effect you actually have a population size of N/12.

That is you would expect to be further down the half life curve, but as the curve is the same “% of a %” you would expect to see on average the same distribution etc but with the count rate the same as a source with that fraction of the population N.

The thing about “distributions” is they are for significant numbers of measurements, not sparse measurements. You can see this on a Galton board as you reduce the number of balls or watch one in very slow motion. Look at it this way: if you have a relatively large number of bins at the bottom you have “no idea” which bin any given ball will drop in, so the first very few balls look to be totally random. But as the number of balls increases the distribution will start to appear as expected.

If you watch several runs of a Galton board in slow motion you will see a number of things happen, then follow the thinking through in terms of reducing the number of balls exponentially.
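A throwaway Galton board simulation makes the point (the board depth and ball counts are arbitrary choices): with ten balls the bins look like noise, with ten thousand the familiar bell shape appears:

```python
import random
from collections import Counter

random.seed(42)
ROWS = 10   # pegs each ball bounces off; the bin is the number of rightward bounces

def drop(n_balls):
    bins = Counter(sum(random.randint(0, 1) for _ in range(ROWS))
                   for _ in range(n_balls))
    return [bins.get(b, 0) for b in range(ROWS + 1)]

for n in (10, 100, 10_000):
    counts = drop(n)
    peak = max(counts)
    print(f"\n{n} balls:")
    for b, c in enumerate(counts):
        print(f"  bin {b:2d} {'#' * round(20 * c / peak)}")
```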

Sut Vachz October 14, 2021 12:30 AM

@ Winter

“… nothing in mathematics ‘has to exist’ …”

Things don’t “have to” exist, they either exist or they don’t. Imaginary numbers do exist because a way to account for them was found, even though originally there was no justification for them, but only the alluring “imaginary” prospect that if they were to exist it would make a lot of equations solvable by uniform formulas without having to worry about special conditions on coefficients etc.

“Cantor used the axioms of set and number theories … add an axiom that actual infinities do not exist … “

Cantor and the set theorists in general have varying sets of axioms they start with, and on the basis of these are able to use deductive argument to prove things.

But axioms shouldn’t be just meaningless or hypothetical jumping off points, they should be true if we hope to work on things that are not just deductive exercises in some kind of arbitrary world, which would be pretty sterile. So, it does matter if we are talking about things that exist.

One probably doesn’t need an axiom that something, e.g. actual infinity, does not exist, because mathematics is concerned to establish things that do exist, so the axioms are, as far as I have seen, always asserting existence or properties of existing things.

Eliminating actual infinity does look like it would leave a lot of mathematics without proofs, but it may be possible to prove a lot of those things using other methods or it may induce a new way of looking at those things and a recasting of the theorems.

Sut Vachz October 14, 2021 12:41 AM

@ Clive Robinson

There are mathematical models of space or space-time, using 4 or higher dimensional mathematical objects, and to the extent they help us understand real things and phenomena, all the more power to them.

But an impudent thought intrudes: Ptolemy’s epicycles do an arbitrarily good job of accounting for planetary motions, so were a successful mathematical model. But nobody takes them seriously as models any longer (although they still have utility), and says that there really are these stacked circles out there. Maybe the models of relativity, strings, quantum foam etc., and space and time are destined for the same oblivion. 😉

Winter October 14, 2021 2:27 AM

@Sut Vachz
“Maybe the models of relativity, strings, quantum foam etc., and space and time are destined for the same oblivion.”

As there is no empirical evidence on any theory of space, theorists are very sure space will not be what we think it is. The bets are that a theory that unifies quantum mechanics with general relativity will tell us the nature of space. But as no one knows what such a theory would look like, that does not help.

The current “best guess” from a theoretical standpoint is that “connections” (i.e., space) are determined by entanglement between patches of space, or (virtual) particles [1]. The more/stronger entanglements, the closer the patches of space are.

It is obvious that this is currently as vague as you can formulate anything at all, and no one is happy with it.

[1] This is an idea “popularized” by Leonard Susskind. It is about ER=EPR (there is a wikipedia page). To say this is complex would be an understatement (and complexity plays a role too).

Winter October 14, 2021 3:32 AM

@Sut Vachz, All
“Maybe the models of relativity, strings, quantum foam etc., and space and time are destined for the same oblivion.”

I found a popularized description from Quanta Magazine of the idea that space == entanglement. I sometimes cringed but on the whole it does a good job.

ht tps://www.quantamagazine.org/tensor-networks-and-entanglement-20150428

I do not think this is related to there being “True Randomness” in the universe. But we have to move on, I suppose.

Clive Robinson October 14, 2021 2:52 PM

@ Sut Vachz,

But an impudent thought intrudes: Ptolemy’s epicycles do an arbitrarily good job of accounting for planetary motions, so were a successful mathematical model.

Yes, for more than a millennium and a half, and it still holds the record for the longest use of what you might call applied mathematics in science.

Actually the epicycles are effectively still very much used, I use them in various satellite and planet movement calculations in a generalised form.

These days we call it Fourier Analysis and use FFTs that are based on DFTs that are based on multiplying and adding “circular functions”[1] that do not have to be harmonically related.

The fact we change the name does not really invalidate what is being calculated, either by maths or mechanics.

[1] This short video shows circles of different radii and rotation rate adding together to make a square wave,

https://www.youtube.com/watch?v=LznjC4Lo7lE

The second half shows how it relates to epicycles.
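A minimal sketch of the same idea in code (the number of harmonics and the crude text plot are arbitrary choices): summing harmonically related “circles” — here just their vertical components, sine terms with radius 4/(πk) — approximates a square wave, which is the epicycle picture in Fourier form:

```python
import math

def square_wave_approx(t, n_odd_harmonics=7):
    # Fourier series of a square wave: odd harmonics k with amplitude 4/(pi*k),
    # i.e. the vertical component of the k-th "epicycle" of radius 4/(pi*k)
    return sum(4 / (math.pi * k) * math.sin(k * t)
               for k in range(1, 2 * n_odd_harmonics, 2))

# crude text plot over one fundamental cycle
for i in range(33):
    y = square_wave_approx(2 * math.pi * i / 32)
    print(" " * int(round((y + 1.5) * 20)) + "*")
```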

It is actually quite similar to one of several programs I wrote in Prime BASIC to drive a Tektronix graphics terminal or A0 plotter using HPGL back in the late 1970s. Oddly I still have the punch paper tapes. It formed part of a project to produce a satellite tracking program… I drew several such rotating models and the waveforms as part of the “interactive” part to explain the inner workings.

The course I was on required a “project” and for that I had gone with an “official project” and developed an 8 channel A-D board and PCB for a CP/M computer that was just starting to be standard in the lab, whilst my project partner wrote the software to demonstrate it. I also developed a second board that was a 16 channel 12-bit D-A board using a couple of 8-bit D-A chips, which worked fine hardware-wise. However due to the fact the software was, shall we say, getting a bit too complicated we did not include it in the final project submission and I suspect it cost us. The satellite tracking program I had written just for fun in my own time. However my arm got twisted and I put it in as a project on a different part of the course. Whilst the A-D got a distinction, my partner was somewhat aggrieved to find my side project got best project of the year, with photos of me and it going in the education establishment’s prospectus the following year. An honour I could well have done without…

SpaceLifeForm October 14, 2021 5:11 PM

@ Clive

However the assumption is a uniform source…

That is the decay output is on average the same in all directions for a sufficiently large source.

Is this not the equivalent of wondering about the state of Schrödinger’s cat?

What if there is only ONE radioactive atom in the box?

If the detector misses the event, does that mean that Schrödinger’s cat gets to live?

SpaceLifeForm October 14, 2021 5:57 PM

@ Winter

As there is no empirical evidence on any theory of space, theorists are very sure space will not be what we think it is.

Yep. Space is everywhere, it is not where “Nothing” is.

It is everywhere. It exists inside every atom of your existence.

It is not “Nothing”.

MarkH October 15, 2021 12:02 AM

@Clive:

It seems to me that you are certain that gradual de-population of the radioisotope source must adversely affect the unpredictability of outputs from a TRNG using time-based decay measurements.

I’ve been trying and trying and trying to understand why you believe this to be so, without success to date.

I pore over various of the comments you’ve offered, as though I were an archaeologist studying a tattered palimpsest, clutching a magnifying lens in one hand.

A few questions I’ve asked of you remain without visible reply; perhaps the site software discarded some answers …

Looking back to your comment identified “October 9, 2021 4:36 AM,” I see some possibilities to clarify mutual understanding.

In response to a simple derivation I made from the properties of the exponential distribution (which gives probability densities for time until the next event) you wrote

You are clearly describing an exponential time bias, but you forgot to mention the point I’ve previously made, that no matter how short the time interval chosen the time bias is still there and provably continues to increase.

I’ve been much confused by the term “time bias.” It seems to me that from the foregoing, I can infer that “time bias” means that earlier decay detection (and therefore, earlier counter values where a counter is used) are more likely than later.

[1] Does that catch what you mean by “time bias”? If so, is there more to the definition?

As to the content of that sentence, we can surely agree that the greater likelihood of temporally prior values is beyond dispute, and can never be reduced to zero.

The part that surprises me every time I read it, is the assertion that said bias provably continues to increase.

[2] What is this process, of continuing to increase? I don’t see it at all. Does it mean, increasing as the source fades and λ grows smaller? Does it mean increase from one detection & extraction to the next, even when λ can be considered to be held constant because of the short lapse of time between the pair?

[3] What is the cause of such increase?

… because we are dealing with measurements of past events not future probabilities we can see that the bias accumulates as you would expect when you integrate the measurements.

I confide that you see it; for my own part, not as yet. My reading of the footnote is that the motivation for integrating is to increase the visibility of bias in the data which might otherwise escape notice.

[4] Did I correctly understand the purpose of integrating?

Integration could mean different things: pure summation; an op-amp integrator with gain as a function of frequency; a low-pass filter with a suitable corner frequency; and so on. Further, raw data could be interpreted as unsigned so that sums increase without bound, or as signed so that sums are expected to converge toward zero as more data is gathered.

[5] Exactly what kind of integration did you have in mind?

There must be some truth underlying this debate. If you’ll be so kind as to respond to my numbered questions, it might be a step toward helping everyone following this discussion to grasp that truth.

Clive Robinson October 15, 2021 1:51 AM

@ SpaceLifeForm,

If the detector misses the event, does that mean that Schrödinger’s cat gets to live?

Logically yes.

And now you are “joining the dots”… Distributions only work on mass.

Now ask the questions about “sparsity and random” and “what is sparse”.

That is, how many balls in a Galton board are needed to show a “normal distribution” rather than a “flat distribution”?

MarkH October 16, 2021 2:26 AM

.
Review of Modular Extraction

For the few of you who’ve been patiently following this lengthy discussion of radioisotope True Random Number Generators … I’ve been thinking that not everybody is “on the same page” concerning modular decay timing. It occurred to me this evening, that perhaps no two of us are picturing it the same way!

Clive has gifted us with the wonderfully visual metaphor of a roulette wheel. Perhaps a “wheel of fortune” is more appropriate. Imagine that all of the markings are whole numbers, in sequence from zero, all distinct; and that the usual indicator is in place so everyone can agree which marking is “selected.”

If the wheel has 2^n numbers marked on it, each reading will extract n bits of raw data.

Unlike the wheels used in gaming, this one spins all of the time, at a (nearly) constant rate.

Let’s further imagine that each time the TRNG detector captures an ionizing decay particle, this triggers a strobe light of such short duration that the wheel seems to be “frozen” for a moment, and that a clerk records the number lined up with the indicator.

That is what I mean, by modular decay timing. Worthy of note:

• the data output rate is irregular

• there is no set upper bound on the time between numbers being recorded [1]

• the times at which data extractions occur are controlled by decay detections, NOT by the wheel or the clerk

• this process is nothing like equally spaced sampling as typically done with A/D converters

• the only periodic function in the process is the rotation of the wheel

  1. The Korean design study detects an average of 20 decays per second. By my math, about one decay in a billion will come a second or more after its predecessor; during that time, no raw data would be created. 
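A minimal simulation of that picture (the detection rate, wheel size and spin rate below are invented values, not those of the Korean design study): exponentially distributed interarrival times strobe a free-running modular counter, and the readings come out close to uniform:

```python
import random
from collections import Counter

random.seed(0)
RATE = 20.0            # mean detections per second (assumed)
WHEEL = 256            # 2^8 distinct markings on the "wheel" (assumed)
SPINS_PER_SEC = 1e6    # counter increments per second (assumed)

t = 0.0
readings = []
for _ in range(200_000):
    t += random.expovariate(RATE)                     # wait for the next detection
    readings.append(int(t * SPINS_PER_SEC) % WHEEL)   # "freeze" the wheel and record

counts = Counter(readings)
expected = len(readings) / WHEEL
worst = max(abs(counts.get(v, 0) - expected) / expected for v in range(WHEEL))
print(f"bins: {WHEEL}, expected per bin: {expected:.0f}, worst deviation: {worst:.1%}")
```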

Clive Robinson October 16, 2021 6:44 AM

@ MarkH, ALL,

Clive has gifted us with the wonderfully visual metaphor of a roulette wheel. Perhaps a “wheel of fortune” is more appropriate.

Not my name, not sure who chose it, but it’s easy to see the wheel spinning at a near constant rate with a ball running at an approximately constant rate but perturbed by the diamonds.

As I said, unless you are using just a single latch you are using a clocked counter circuit. Depending on which way around you connect them it is either a frequency counter or a period counter.

As you describe it,

Unlike the wheels used in gaming, this one spins all of the time, at a (nearly) constant rate.

That would be a fixed frequency oscillator, but is it driving the D input or the CLK input of a latch or the count and gate inputs of a counter.

that each time the TRNG detector captures an ionizing decay particle,

This implies that the output from the particle detector is controlling when a reading is made.

this triggers a strobe light of such short duration that the wheel seems to be “frozen” for a moment

So the particle detector is controlling the “gate” signal on a “counter”.

So what is being measured is the time in oscillator periods, with a loss of precision on a single reading due to the signal arriving part way through a counter period.

But if the counter runs continuously that loss of precision has to carry forward into the next count, and so on repeatedly, until the fractional count periods add up to a full count and show up as one more or one less “count” in the reading.

There is a well known technique where using a frequency counter or time counter of say 3 decimal digit precision you take ten readings, average them and get an extra digit of precision[1].
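A minimal sketch of that trick (the oscillator rate and the true gate time are invented numbers): back-to-back gates read off one continuously running counter, so the 0.1 of a tick left over from each reading carries forward, and the average of ten 3-digit readings recovers the fourth digit:

```python
f_osc = 1000.0   # counter ticks per second (assumed)
T = 0.3141       # true gate duration in seconds (assumed)

# successive gate edges read off one free-running counter
edges = [int(f_osc * k * T) for k in range(11)]
readings = [edges[k + 1] - edges[k] for k in range(10)]

print("individual 3-digit readings:", readings)            # nine 314s and one 315
print("average of the ten readings:", sum(readings) / 10)  # 314.1
```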

@Weather produced a set of graphs that show this phase accumulation occurring.

Now the important thing to remember is,

Whilst in the data domain this may look random, it is only because you have lost the information from the time domain.

If you can get the time reference back, then you can fairly easily “phase lock” another oscillator to the oscillator driving the counter and so get the same numbers from another counter.

Think on that for a moment.

That is, if an attacker can get the time reference and the data, they can synchronize their oscillator, and thus their counter, as accurately as they like to that of the TRNG.

As long as their oscillator remains synchronized which it will for long periods of time –look up loose locked oscillators– then they can recover the “random” from only the information coming from the time domain.

Even if their oscillator drifts a little, their “data” will be within one or two count errors which means the “search space” is very small[3].
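Here is a toy model of that attack (every figure in it — wheel size, clock rates, drift, trigger rate, how often a reading is seen — is an assumption made up for illustration): the attacker observes the trigger instants, sees only one reading in a hundred, and uses those rare readings to keep a slightly-off local oscillator locked well enough to reconstruct the readings it never sees:

```python
import random

random.seed(7)
WHEEL = 256
F_TRNG = 1_000_000.0           # TRNG counter clock in Hz (assumed)
F_ATK = F_TRNG * (1 + 1e-7)    # attacker's clock, 0.1 ppm fast (assumed)
RATE = 20.0                    # decay triggers per second (assumed)

def trng_value(t):
    return int(t * F_TRNG) % WHEEL

offset = 0.0                   # attacker's running phase correction, in counts
t = 0.0
exact = near = unseen = 0
for i in range(20_000):
    t += random.expovariate(RATE)            # trigger instants, assumed observable
    guess = int(t * F_ATK - offset) % WHEEL
    actual = trng_value(t)
    if i % 100 == 0:
        # the rare reading the attacker actually sees: re-lock the local clock
        err = (guess - actual + WHEEL // 2) % WHEEL - WHEEL // 2
        offset += err
    else:
        unseen += 1
        exact += (guess == actual)
        near += min((guess - actual) % WHEEL, (actual - guess) % WHEEL) <= 2
print(f"unseen readings guessed exactly: {exact/unseen:.0%}, within +/-2 counts: {near/unseen:.0%}")
```

The point is not the particular percentages, just that occasional sight of the data plus knowledge of the trigger times keeps the attacker’s counter close enough that the remaining search space is tiny.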

Now do you understand why I say stopping the time-based side channel is so important?

Now even if you add some crypto algorithm to the data before you output it you have two basic options,

1, Reversible algorithms.
2, One-way algorithms.

To remove the effects of the first you only need the key.

To remove the effect of the second is easy if it has a trapdoor function you know about. But failing that, because you have access to the leaked time information, you know to within a fairly good guess what the actual data value is, so you have quite a small search space in which to find it and can use a “buck saw” method to lock into it[3].

It’s why you care not a jot how predictable or not your random source is, because you are not predicting anything, you are using things that have been measured, and as an attacker you can measure the measurement if a time-based side channel exists that you can get access to. That in modern “efficient” systems is all too easy. Heck, Intel provide you with access to very accurate timers within the CPU that are driven by the same oscillator as the rest of the chip, and that sort of noise gets around.

So the question you might now be asking is “Can you turn this into a practical attack?”.

Well the answer is I’ve built and demonstrated a system back late last century that showed you could do it not just through the “roulette wheel” circuit but the following XOR based de-bias circuit. Look on repeating it as a graduate level project.

But there is an old saying,

“The proof is in the pudding”

In this case the “pudding” is a timing diagram as a “graphical proof”.

[1] If you think about it for a moment it is the equivalent of adding the circuitry required for an extra digit of counting and changing the effective count rate by ten. So instead of counting say 314 and an unknown fraction on a single short reading, you count 3141 with a longer count. That is, those 1/10th fractions accumulate[2].

[2] There is another technique that has been around for a while, invented by an engineer at the UK company RACAL, where you use the fractional accumulation to correct the phase of a phase-locked oscillator. The net result is you can control the frequency of an oscillator to some fraction of the reference frequency. It’s called “Fractional-N Synthesis” and there are two ways you can do it: by using an analog phase correction with a D-A circuit, or by changing the divide ratio of the “pre-scaler” counter from say 10 to 11, thereby “swallowing a pulse” thus changing the divide ratio by 1/10 for that cycle. It is important to understand the importance of the improvement made at the UK company Marconi by the use of a random signal,

http://www.aholme.co.uk/FracN/Synth.htm

Remember this is a reciprocal process if you do not lose the time reference.
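A minimal sketch of the pulse-swallowing idea (the divider values and target fraction are arbitrary): swapping the prescaler between ÷10 and ÷11 under control of a fractional accumulator gives an average division ratio of 10.1:

```python
REF_CYCLES = 1000      # number of divided-output cycles to simulate
FRACTION = 0.1         # desired fractional part of the divide ratio (assumed)

acc = 0.0
input_pulses = 0
for _ in range(REF_CYCLES):
    acc += FRACTION
    if acc >= 1.0:          # accumulator overflow: "swallow" one extra pulse
        acc -= 1.0
        input_pulses += 11  # divide by N+1 this cycle
    else:
        input_pulses += 10  # divide by N
print("average divide ratio:", input_pulses / REF_CYCLES)   # -> 10.1
```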

[3] Because of the reciprocal nature of the way this works you can use a “buck saw” method to home in and lock the oscillators and counters using only very occasional reads of the data. Which, as it pulls out any bias in the source, can be used to “track the average” and thus feed back to reduce the search space.

Sut Vachz October 16, 2021 10:19 AM

@ Winter @ Clive Robinson et @ al

If there is any interest in delving into mathematical foundational questions, one might want to look at the papers (and a book or two) of the late Edward Nelson, professor of mathematics at Princeton.

https://web.math.princeton.edu/~nelson/papers.html

One might start with the paper “Completed versus Incomplete Infinity in Arithmetic”, where the subject of the title is explored and pushed to the discussion of how much mathematics can be done without complete infinity, and of the connexion of the idea to computational complexity, including the feasibility results of Bellantoni, Cook, and Leivant (!!!)

https://web.math.princeton.edu/~nelson/papers/e.pdf

Food for thought.

MarkH October 16, 2021 12:08 PM

@Clive:

Are you thinking of a TEMPEST attack?

A data-at-rest attack?

Or some combination of the two?

Clive Robinson October 17, 2021 7:14 AM

@ MarkH, ALL,

Or some combination of the two?

It is actually way broader than that.

It is primarily an attack on “The Root of Trust”(RoT) of most modern information security.

The RoT is an intangible information object used for “Authorization”(AuthZ), “Authentication”(AuthN), and much more as well. Sometimes called a “seed”, “shared-secret”, “master-secret” or in more limited cases a “key”, or less correctly an “Initialisation Vector”(IV), “Number used Once”(nonce), etc.

At heart a RoT is an “Abstract Data Type”(ADT) “bag of bits”(BoB), where the bits are independent of each other and not produced by a wholly deterministic algorithm. So in “the general view” they should not be able to be determined except by chance or brute force.

There are many physical processes that are believed to be able to produce such a BoB and they are the source for what we call “True Random Bit/Number Generators”(TRBG / TRNG).

We define “statistical properties” for such processes and when used in practical generators they are frequently called “entropy sources” or just “sources”.

Before we consider them to be useful as sources for such RoT producing processes, we take thousands of millions of output measurements. This is to build up “statistical evidence” that each output is suitable for use, or to work out how to mitigate those that are not what they could be.

However we all too frequently make the mistake of thinking in one measurement domain in one of three time frames,

1, Past
2, Present
3, Future

One side effect of this is,

“… can not predict with better than 50%…”

Thinking. As “the” security requirement, along with the assumption that past events are unavailable.

Which usually leaves out thinking about the “present”, which is where a lot of things can go horribly wrong.

The reality of the RoT BoB is that it is a Data Domain object arrived at as a result of “measurement in the present”.

All too frequently the measurement is made in the time domain or the reciprocal frequency / phase domains, by use of an oscillator driving a counter and some form of triggered gating mechanism to read out the counter value.

So you end up with three things,

1, The Counter (state).
2, The Trigger (time).
3, The Reading (data).

If you know any two of the three you can work out the third.

Thus the entire security depends on only making one –the data– available outside of the process.
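A toy illustration of that dependency (the clock rate and wheel size are invented): with the counter’s rate and phase known, the data value pins the trigger time down to one narrow slot per counter wrap, and conversely the trigger time gives the data directly:

```python
F_OSC = 1_000_000.0   # counter clock in Hz (assumed)
WHEEL = 256           # counter wraps modulo 256 (assumed)

t_trigger = 0.123456789                      # the secret trigger instant, in seconds
data = int(t_trigger * F_OSC) % WHEEL        # the reading that becomes "the data"

# Knowing the counter and the data: the trigger fell in one 1 us slot per 256 us wrap.
slot_us = data / F_OSC * 1e6
wrap_us = WHEEL / F_OSC * 1e6
print(f"data = {data}: trigger was at {slot_us:.0f} us + k*{wrap_us:.0f} us for some integer k")

# Knowing the counter and the trigger time: the data falls straight out.
print("recovered data =", int(t_trigger * F_OSC) % WHEEL)
```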

Importantly it does not matter if you can predict the trigger time in any way (future) as long as you can detect the trigger when it happens (present) or has happened (past). So the statistics of the source are in this case not really relevant.

The security of the system is thus very, very dependent on a second or third party not knowing, or being able to determine, when the trigger happened with respect to the counter.

The way the source measurement process creates the trigger is by definition “work”, so it has both “energy domain” and “time domain” signatures, as do the oscillator and counter. The time signature can leak via the energy signature of any hardware involved and any software it affects, which also has energy domain and time domain signatures of its own.

There are many examples of attacks on energy signatures by “Power Analysis”(PA) attacks, that come in several flavours,

1, Simple (SPA)
2, Differential (DPA)
3, Spectral frequency/Phase(SpPA)
4, High-Order (HO-DPA)

Therefore the entire BoB generation process needs to be treated carefully to remove any “time domain” information leaking. Which it can do in,

1, Power / Amplitude Domain
2, Time Domain
3, Phase / Delay Domain
4, Frequency Domain
5, Sequency / Code Domain

The only effective way to do this is via extended EmSec techniques that have as their foundation stones hardware based “Passive EmSec”(TEMPEST / EMC), “Active EmSec”(Fault Injection), and numerous specialised software techniques, including some crypto techniques.

I’ve covered some of it in the past on this blog but you would be looking at quite a large book just to give a reasonable feel for it.

For instance Prof Ross J. Anderson of both Edinburgh and Cambridge Universities, whose “Security Engineering” book at over a thousand pages just brushes some of the techniques you need to be well acquainted with,

https://www.cl.cam.ac.uk/~rja14/book.html

As do some of our hosts books.

The point is the “Root of Trust” is the security equivalent of Tolkien’s “One Ring”, which makes getting access to it in any way possible a highly desirable goal. Not just for Level III agencies / entities but just about anyone else who desires “To be in the know”, so any method imaginable will be tried.

JonKnowsNothing October 17, 2021 8:22 AM

@Clive @MarkH @All

Please correct if I have not grasped the concepts.

  • Q: Does the failure of RoT BoB lie in the usage of mechanical means or oscillators for driving the input/output?
  • Q: If you had the same starting inputs and did not use any mechanical systems to drive the outputs (no timing tells) would that have better results?

From the thread it seems that there are 2 sources of failure:

  1. Using any mechanical or other means to collect a base seed (decay rates, number of stars in the sky or sheep in the pen etc).

These only appear to be random but given N-Time these all end up flat lined. However, within P(N-Time) the distribution may appear to be unique. This faux-unique passed into a mechanical system does nothing to improve the outcome.

  2. Using a mechanical method of driving the output (oscillator) which gives the entire show away.

Even if you find some event that might work, by dropping it into a known-frequency system (PC, mainframe, supercomputer) — where everything at the lowest level works on clock timing, gates and registers, and all current systems work on a similar methodology — this alone will make the appearance of security (RoT BoB) just the Emperor’s Clothes.

Clive Robinson October 17, 2021 10:31 AM

@ JonKnowsNothing, MarkH, ALL,

Q: Does the failure of RoT BoB lie in the usage of mechanical or oscillators for driving the input/output?

1, The physical source, oscillator, counter, trigger and data out mechanism are all required to get the required data.

2, The oscillator is assumed to be both periodic and, over the period of a few measurements, stable in frequency, thus predictable in time.

3, With a stable oscillator the count rate or period (whichever is used) is likewise stable with time.

4, This means the “state” that provides the data at the time of measurement is also predictable with time.

Think of the internal counter as a clock on the wall that you glance at from time to time, randomly, and write down the time you saw. If there is a CCTV camera pointed at your face, even though it cannot see the clock face it can see your eyes look at the clock. So as its internal clock runs at very nearly the same rate and is set at very nearly the same value, it can fairly accurately predict the time you write down. This makes the search space very small.

If it gets to see the times you wrote down it can correct its clock to compensate. But… it might only need one or two actual times you wrote down (the data) to adjust maybe hundreds of other times where it saw your eyes look up but does not see the data.

So an eavesdropping system can rebuild all the data readings with a very high degree of accuracy, even though it might only see one in a thousand or a million data outputs, with either no search or a negligible search required to recover the data for the others.

So you need a reliable way to “decouple” the data from the time of the measurement.

It sounds easy but actually it is quite hard, because the waste energy from those measurements wants to get out any which way it can. And with time it is monotonic, so you cannot easily fritz with it, because whatever you do to fake the time, at some point you have to get back on time. As the old saying has it, “What goes up must come down” applies to both time and frequency…

It’s one of the reasons they use a crypto algorithm on the actual data. However if you know the design there are ways to remove even the effects of crypto secure hashes, because of the very small search range.

Which means not only do you have to stop the time triggers getting through, you have to make the state of the counter non-deterministic.

One way to do both is with an entropy pool that uses the little real entropy it gets from each measurement to drive a chaotic algorithm of some form (a rough sketch follows the list below). Whilst it changes significantly at each measurement, because you do not know the actual input data or the internal state of the chaotic algorithm, it makes the search space much larger, prohibitively so if you do things right.

So

1, The input you have to the entropy pool goes in at a variable time rate.

2, The data you read from the entropy pool is at a different fixed time rate.

3, Providing the input algorithm and output algorithm are sufficiently different they act as encryption and get the benefit of the avalanche effect.

4, Providing the algorithm that stirs the entropy pool is both sufficiently chaotic and uses data from across a very large time interval, the pool state will not be predictable from either the input data or input measurement time points.
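As a very rough sketch of that pattern (this is not a vetted design; SHA-256 simply stands in for the stirring and output functions, and all the sizes and labels are arbitrary): raw timer readings go in whenever a trigger happens to arrive, output is read on its own schedule, and neither the input values nor their arrival times map directly onto what comes out:

```python
import hashlib
import os
import time

class EntropyPool:
    def __init__(self):
        self.state = os.urandom(32)   # pool state, never output directly
        self.reads = 0

    def add_event(self, raw_measurement: bytes):
        # input side: stir the pool whenever a (variable-rate) trigger arrives
        self.state = hashlib.sha256(b"in" + self.state + raw_measurement).digest()

    def read(self, n=16):
        # output side: a differently keyed transform, read at a fixed rate
        self.reads += 1
        out = hashlib.sha256(b"out" + self.state + self.reads.to_bytes(8, "big")).digest()
        # ratchet the pool so a later state compromise cannot roll back past outputs
        self.state = hashlib.sha256(b"next" + self.state).digest()
        return out[:n]

pool = EntropyPool()
pool.add_event(time.perf_counter_ns().to_bytes(8, "big"))   # e.g. a raw timer reading
print(pool.read().hex())
```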

So the question becomes “Does the output algorithm stop the entropy pool state becoming known?”

This is one of those “black swan” or “Unknown Unknowns” questions. Currently yes we believe we can decouple it by using a Crypto Secure algorithm. However that assumes,

1, True one-way functions (OWF) and chaotic algorithms really exist.
2, The KeyMat can not become known to either the first party (owner) or any second party (authorised user) or third party (unauthorized observer).
3, The crypto algorithm is not trapdoored, or becomes broken in some other way.

Getting these points right, in a way that mitigates one or two of them failing is not an easy task.

MarkH October 17, 2021 11:39 AM

@JonKnowsNothing, all:

• With the exception of military or diplomatic installations operating in hostile countries, it appears that virtually all successful infosec attacks are non-TEMPEST.

• Every imaginable electronic TRNG can leak the generated bits by a variety of side channels.

• As I understand the attack Clive envisions, it requires high-resolution recording of TRNG emissions (or power cables and the like) at the time the secret is generated. It cannot disclose outputs generated before or after the interval of data capture.

• Any imaginable electronic TRNG can be safeguarded, by a variety of design and deployment practices, so as to prevent successful attacks via every known side channel.

There is absolutely nothing special about TRNGs in this regard. EVERY PIECE OF EQUIPMENT through which keys, plaintexts, or other secrets can pass can leak the secrets via side channels. If your threat model includes TEMPEST attacks, you must safeguard every single box and wire handling secrets.

• In typical TRNG applications, the computer(s) consuming the random numbers are several orders of magnitude more difficult to secure from side-channel attacks than the TRNG itself.

SpaceLifeForm October 17, 2021 2:18 PM

@ Clive

The crypto algorithm is not trapdoored, or becomes broken in some other way.

Don’t you mean backdoored instead of trapdoored?

A good crypto design uses trapdoor functions, i.e., easy to do, hard for an attacker to undo.

Clive Robinson October 17, 2021 2:35 PM

@ JonKnowsNothing, ALL,

With regard to @MarkH’s comments, it’s far more than a TEMPEST attack, and it would be foolish to think it was just TEMPEST.

Time-based side channels leak information in many ways. Take, for example, a TRNG on a computer CPU chip: the time-based side channels are available to,

1, Passive EmSec (TEMPEST).
2, Active EmSec (EM etc Fault injection and what some call RADAR illumination).
3, Application Software running on the computer.
4, Attack software running on the computer.
5, Due to “Efficiency-v-Security” and the transparency it creates, the side channels leak out on the communications channels and can be detected by upstream routers.
6, Until recently, JavaScript running on a computer could relatively easily obtain high-precision timing.

Which as most can work out certainly makes @MarkH’s

“Any imaginable electronic TRNG can be safeguarded, by a variety of design and deployment practices, so as to prevent successful attacks via every known side channel.”

Decidedly problematic.

The part highlighted is provably false, and he says as much himself,

“Every imaginable electronic TRNG can leak the generated bits by a variety of side channels.”

Oh, and it is not just “electronic” devices: the same applies to “electro-mechanical” and “mechanical” devices, as a little reading of history shows.

As for the first part, that is approximately true in theory, but as for actual practice… as I was telling @Freezing_in_Brazil, whilst it can be done, few know how to do it, and even fewer actually do it.

As for,

“It cannot disclose outputs generated before or after the interval of data capture.”

As I’ve explained, there are three time periods: Past, Current, Future. Now @MarkH has not said what he means by “data capture”, whereas I was quite explicit about the three parts of internal State, Trigger, Data.

Due to “Efficiency-v-Security”, systems are quite transparent and leak data to communications channels very readily. It is believed that the NSA “collects it all” to do various things with, some of which require precision timing. The way the NSA tends to operate, if they require some precision timing then everything they collect would have not just precision but synchronized timing. Thus for them the “past” is a matter of recorded data. So any time-based side channels are probably tucked away in their repository with precision time stamps (we cannot say for certain, but we can make reasonable assumptions on the little we do know).

As for the future, as I noted, the oscillator in TRNGs and TRBGs that drives the counter to measure frequency or period is likely to be of reasonably high stability, probably good to a few parts per million. This stability means that once an attacker’s system is synchronized it will remain sufficiently so for many measurements. Thus only occasional measurements are required to maintain synchronization, especially as a computer, as part of its ordinary communications, leaks its clock’s delta-F readily enough that it has been shown possible to track individual computers from halfway around the world via “timestamps”.

So we know it is reasonable to assume that any computer sending out network packets at a reasonable rate is going to leak sufficient information. Microsoft’s more recent OSs are known for their blizzard of telemetry, and heaven alone knows what goes out onto the Internet from early boot-up onwards… so no real shortage of network packets.

So my standing advice about using two computers one for “on-line” insecure work and one for “off-line” private work holds here as well when generating PubKey certs etc, or any other “Root of Trust” “Bag of Bits”.

Also I suspect @MarkH actually has little knowledge of TEMPEST techniques because if he did he would have been quite unlikely to make the comments he did.

Please remember @MarkH, no matter what is said, has personal not technical reasons for making the statements he does. As I’ve indicated before on quite a few occasions, he has tried to “get one up”; his usual technique, as in this case, is to keep moving the goal posts and then make claims that he thinks disprove something I’ve said, but do not, or are pure fantasy on his part.

As normal he has “leapt without looking” and effectively got “kicked to the curb” by his own actions, not those of others, yet again.

My comments to @Freezing_in_Brazil still stand as I’ve demonstrated, and if people want to do the graphical proof that is up to them.

But in the process of trying to “get one up” @MarkH has monopolised several threads for some considerable period of time for no good reason. So much so that at times he has outnumbered every other individual on daily posts…

Perhaps one day he will explain his behaviour, but I’m not going to hold my breath.

Clive Robinson October 17, 2021 2:50 PM

@ SpaceLifeForm,

Don’t you mean backdoored instead of trapdoored?

A deliberate “backdoor” in an encryption algorithm is usually implemented via a “trapdoor” function.

But trapdoors can also come into existence through lack of knowledge, which is why some early PubKey schemes were not as strong as expected.

So “trapdoored” covers the method intended or otherwise, whilst “backdoored” covers a specific intention.

MarkH October 17, 2021 5:36 PM

@Clive, JonKnowsNothing et al:

There’s been a little banter on another squid thread about the wonderful ambiguity of language.

Please note well that “every imaginable electronic TRNG can leak the generated bits …” has a very distinct meaning from “every imaginable electronic TRNG must leak the generated bits …”

An implementation of a TRNG — or any other information processing component processing secret data — without design and construction measures to mitigate side channels is sure to provide a rich target for attackers willing to invest in sufficiently intense surveillance. This is true regardless of the specific techniques used to generate the numbers.

If I wanted to prepare a product for critical security applications, I’d hire an expert like Clive to advise on and review safeguards against side channels.

========================

In my opinion, Clive prioritizes recovery of (or synchronization to) the clock frequency in a TRNG to an unnecessary degree. If an attacker can precisely time the decay detections, then they have a greatly reduced search space, even though their knowledge of the clock is imperfect.

Whatever an attacker knows about the clock — or other aspects of the TRNG — inferring the generated numbers requires “snooping” the decay detections at the time the numbers were generated. Either you’ve got the contemporaneous record, or you don’t.

It’s a real-time process. If you have a ten-second gap in the surveillance record, you can’t reconstruct the numbers generated during that interval.

MarkH October 17, 2021 5:51 PM

@JonKnowsNothing, all:

In the context of the TRNG discussion, I’ve seen some recent references to Intel CPUs, “application software running on the computer,” and (God help us) JavaScript.

I picture a TRNG for cryptographic use as an independent system with its own enclosure, shielding, power supply, power isolation, and processor — that processor preferably being a low-power microcontroller, and assuredly not a PC-style CPU.

It’s my hope that anyone who has read the Schneier blog for more than a few months has some awareness of how vulnerable PC data and computations are (especially when accessible to the public internet).

MarkH October 17, 2021 6:23 PM

.
Amount of Distribution Bias, Pt. 10

I’ve discovered a rather pleasing formula for guessing attack effectiveness.

By way of review, this is an analysis of extracting unpredictable numbers from radioisotope decay with an average of λ detections per second, by recording the low-order bits of a high-frequency counter when each new decay is detected.

I call the time it takes those low-order bits to wrap around the “time modulus”, or μ. Note that μλ is the time modulus as a fraction of the average time between detected decays. The smaller that fraction, the less the bias.

I showed above that the ratio between the most probable and least probable extracted bit patterns is 1 + μλ.
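For anyone who wants to poke at that ratio numerically, here is a rough Monte Carlo check under the exponential-waiting-time assumption. The parameters are illustrative only and deliberately exaggerated (μλ ≈ 0.1) so the bias shows up in a few million samples; a sensible design keeps μλ orders of magnitude smaller, and with μλ this large the 1 + μλ figure is itself only an approximation, so expect the observed ratio to sit a hair above it. Only the count relative to the (known) initial count is tallied, since that is where the bias lives.

```python
# Rough Monte Carlo check of the "most/least probable count ratio ~= 1 + mu*lambda"
# claim, with deliberately exaggerated parameters so the effect is visible.
import random
from collections import Counter

lam = 1000.0            # mean decay detections per second (illustrative)
f_clk = 2_560_000.0     # counter clock, Hz (illustrative)
bits = 8
mod = 1 << bits
mu = mod / f_clk        # time modulus: rollover time of the low-order bits
print("mu*lambda =", mu * lam)          # 0.1 with these exaggerated numbers

bins = Counter()
for _ in range(3_000_000):
    dt = random.expovariate(lam)        # exponential waiting time to the next decay
    bins[int(dt * f_clk) % mod] += 1    # low-order count bits, relative to the initial count

ratio = bins[0] / bins[mod - 1]         # most probable vs least probable offset
print("observed ratio          :", round(ratio, 3))
print("predicted 1 + mu*lambda :", round(1 + mu * lam, 3))
```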

========================

I’ve just been seeking a formula for the guessing “benefit” that can be gained from the bias. By “guessing benefit,” I mean the proportion of time and/or computational resources that can be saved using an optimal guessing strategy which favors the more probable outcomes over the less probable.

For example, if the guessing benefit is .01, then an average optimal attack will succeed at 99% of the cost of a brute-force exhaustive search.

The formula I found is that the guessing benefit for modular timing numbers is μλ/4. If μ is one percent of the average interval between detected decays, then an optimal guessing attack will (on average) cost 99.75% of a brute force attack.

In the case of the Korean design study, μλ = 0.000064, so the guessing benefit is 0.000016. There are almost 525,950 minutes in an average year; the product of those two numbers is about 8.4 minutes, the savings I calculated above.

My earlier numbers were from direct evaluation and summation of the distribution function; the little formula above gives the answer quickly and easily.

========================

A very important caveat: optimal guessing is only possible if the attacker knows the initial (most probable) count at least approximately. Without such knowledge, the guessing benefit is zero and the effective entropy is perfect.

MarkH October 18, 2021 12:08 AM

@FA:

I thought you might enjoy the little guessing-attack formula presented above.

Its derivation is elementary, but has a number of steps which would be difficult to represent given the reality of comments formatting.

If you’re interested, I’ll try to pass it along one way or another.

MarkH October 20, 2021 3:03 AM

.
My Mistake in Guessing-Entropy Impact

In my above comments, numbers I gave for average time saved by an optimal guessing strategy (compared to a year-long exhaustive search) are surely incorrect.

As a reminder, those numbers were extrapolations to large secret numbers composed from numerous radioisotope modular decay timings in a TRNG.

I’m confident that the ratios and formula are correct for an individual extraction (for example, 8 bits in the Korean design study): they accurately predict the average fraction of time saved in guessing one 8-bit output.

However, my assumption that the same ratios apply to concatenations of multiple modular decay timings was too simplistic.

I’ve just been reevaluating the calculation, and now believe that my illustrative numbers (for example, cutting a year-long computation by about 10 minutes) considerably understate the benefits to an attacker of following an idealized optimal guessing strategy.

My examination of some tiny “toy size” examples seems to show growth (in proportional time saved) by perhaps a factor of two as individual modular timings — all with the same small degree of predictability — are concatenated to make large numbers.

However, my test cases are so small, and rely so much on estimation, that I suppose they must be a very poor indicator of what the truth would be for a real-world problem.

Accordingly, the numbers I posted above for the reduction in a hypothetical year-long exhaustive search attack (roughly 8 or 10 minutes) should be regarded as incorrect.

MarkH October 20, 2021 3:24 AM

Report of Mistake, continued:

[1] Correct analysis of guessing cost in concatenated bit-strings with equal known bias poses some interesting challenges, and would take some doing. My mathematical wheels turn very slowly.

If I eventually come up with a good formula, I’ll be pleased to post it here.

However …

[2] I chose the above premise — that the bit strings all have the same known bias — to be as conservative as possible when discussing distributional bias.

As I’ll be presenting this week, that premise is definitely not true for a properly designed TRNG. Accordingly, my mistake in computing advantage to an attacker is not relevant to the performance of the correct design.

Clive Robinson October 21, 2021 6:11 PM

@ MarkH,

…advantage to an attacker is not relevant to the performance of the correct design.

The trouble is, first find a “correct design”…

As a rough rule of thumb the accumulated error at the end of the whole year would be ~half the mean of the half life at that point.

You can see this graphically with a Galton board: the first ball to drop falls anywhere but has a very significant effect on the average. Each ball thereafter has successively less and less effect on the average, until at some point it is vanishingly small[1]. However, the probability of which slot the ball drops into is still the same as for the first ball.

In effect you see the same entropy on the edge of the first decay at say 1e-5secs as you do on the edge of the last decay in the year at 31557600secs.

Which means the optimum strategy for getting entropy out is to grab the difference in the period between decays with as much resolution as you can measure (that is, the fastest running oscillator you can get).

However, a dirty little secret of high-frequency oscillators is that most are “XTAL overtone”, which means the higher the frequency the more stable the oscillator is (i.e. the less “pull” there is).

Stability in the oscillator is bad news as it makes synchronizing to it by an attacker that much easier.

Now a little secret that is not widely known: “Make the radioisotope source ‘self-clocking’”.

That is, put two particle detectors on the source, one on either side. Take one to the D-input of the latch, the other to the CLK of the latch.

Not only are they both as unpredictable as each other, they both decay at the same half-life rate and thus remain balanced[2].

[1] Though unrelated: if you have a CS-PRNG built from a counter and a block cipher, you also get a decrease in observed entropy if you don’t take care. That is, the first output is say 1:2^56 but the millionth ~2^20:2^56, and eventually you will get to (2^56 -1):2^56, meaning the last output is 100% known. That is, you are in simple counter mode, in effect taking a ball out of the urn each time you draw, but not putting the ball back, so the number of remaining balls in the urn decreases steadily. The solution is to use the block cipher in a feedback mode. That is, XOR the counter output with the previous output from the block cipher, and use the result as the next input to the block cipher (sketched below).

[2] There are however still a whole haystack of other issues to resolve.
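A minimal sketch of the feedback construction in footnote [1] — SHA-256 truncated to 16 bytes stands in here for a real block cipher such as AES keyed with the KeyMat; only the data flow is the point of this sketch:

```python
# Sketch only: instead of encrypting a bare counter (which walks through a
# fixed permutation and so slowly gives away what it has already produced),
# each output is XORed into the next input before encryption.
import hashlib


def enc(key: bytes, block: bytes) -> bytes:
    # Stand-in for a 128-bit block cipher encryption under `key`.
    return hashlib.sha256(key + block).digest()[:16]


def csprng_feedback(key: bytes, blocks: int):
    counter = 0
    prev = bytes(16)
    for _ in range(blocks):
        ctr_block = counter.to_bytes(16, "big")
        # Feedback mode: mix the previous output into the counter value.
        inp = bytes(a ^ b for a, b in zip(ctr_block, prev))
        prev = enc(key, inp)
        counter += 1
        yield prev


for out in csprng_feedback(b"example key material", 3):
    print(out.hex())
```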

MarkH October 21, 2021 9:41 PM

@Clive:

the accumulated error at the end of the whole year would be ~half the mean of the half life at that point

I don’t know what “error” means in this statement. Typically, error is a difference between two quantities — which quantities do you have in mind here?

you see the same entropy on the edge of the first decay … as you do on the edge of the last decay in the year

If this means that the less-significant bits of each decay timing contain the same amount of entropy[1] as any other timing in the sequence, that is essentially my conclusion.

========================

For clarity to all readers, when I write “correct design” for a radioisotope TRNG, I’m using this as shorthand for the number extraction technique in which the decay detection triggers the sampling of a free-running high-frequency counter (or at least, the less significant bits thereof), such that the rollover period of the extracted count is much smaller than the mean time between decay detections.

Of course there’s much more to designing such a TRNG than the extraction technique — and many potential mistakes along the way! My analyses have been focused on this particular number extraction method.

[1] For a TRNG, entropy means the extent to which a prospective attacker cannot know any particular set of outputs without either (a) finding a way to inspect them (via espionage or other disclosure of output data), or (b) making sufficiently intensive surveillance of the generation process at the time those outputs were generated so as to be able to (at least partially) reverse engineer the outputs — in other words, side channels.

MarkH October 22, 2021 4:27 AM

.
Amount of Distribution Bias, Pt. 11

The purpose of this note is not to add anything to the knowledge of the distribution bias of modular time extraction, but rather to help to visualize how big (or small) it may be.

Clive has helpfully contributed the imagery of the Galton pin board (please look it up if you don’t know it), a simple mechanical gadget which beautifully constructs a normal statistical distribution.

I keep referring to the Korean design concept cited near the top of this thread, not because there’s anything very special about it, but rather because it’s a nice exemplar of the modular timing extraction method.

Because of the distribution of decay timings, shorter intervals are always more likely than longer ones. Modular timing doesn’t eliminate this bias, but allows it to be greatly reduced. In the design concept from Korea, the product μλ is less than one in fifteen thousand.

If a pin-board machine could be made representing the distribution of the 256 possible counter values, from most probable to least, and the balls were 1/2 inch in diameter (like typical marbles kids play with), the apparatus would need to be more than 650 feet tall in order to register a height difference of one marble between the extremes of probability.

Each run of the pin-board machine would require more than four million balls to fill the columns.

With the tiny differential of probability, I suppose that hundreds of runs would have to be averaged, before the bias would be visually evident.

MarkH October 22, 2021 5:26 AM

.
Bias Toward What?

Above (9 October) I posted, about the Korean etrij design concept and the way it extracts bits from the timing of decay detections, that:

This distribution of bytes has 0.999999999969 bits of Shannon entropy per bit of output.

Iff an attacker knows the initial count, then there are 0.9999971 bits of guessing entropy per bit of output.

In essence, this says that the values of output bytes are very evenly distributed among the 256 possible numbers, but not perfectly so.

========================

To formalize the meaning of “initial count,” it’s wherever the counter (effectively 8 bits in the example) happens to be in the first clock cycle which could be latched, at the earliest moment a decay edge could be received from the detector after its recovery time[1].

Calling that initial count x, it will be the most probable count when the next decay is detected; after that x+1, x+2 and so on (in descending probability), with x-2 and x-1 being the least probable[2].

========================

Although the distribution of values deviates only minutely from perfect uniformity, this cannot be exploited unless an attacker knows which values are more probable.

By definition, an optimal guessing attack tries guesses in order of descending probability, thereby succeeding (on average) faster than a simple exhaustive search. Without knowing which values are more probable, an attacker cannot make an attack better than exhaustive search against random numbers.

We must assume that an attacker knows that some small bias is present. That knowledge is not sufficient, without at least approximate knowledge of the initial count.

[1] Practical detectors generally won’t respond to a decay for some “dead time” after the previous detection.
[2] The additions to or subtractions from counter values are inherently modular; for our 8 bit example, mod 256.
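As a numerical cross-check of these figures, under the simplified model used in these comments (exponential waiting times, an ideal 8-bit free-running counter, μλ = 0.000064, with detector dead time and latch imperfections ignored), a few lines of Python reproduce the per-bit Shannon entropy to the precision quoted:

```python
import math

M = 256                      # counter modulus (8 bits)
mu_lambda = 0.000064         # time modulus as a fraction of the mean decay interval
a = mu_lambda / M            # one counter clock period, in units of the mean interval

# Probability of each offset j from the (known) initial count, with the
# geometric "wrap" terms summed in closed form.
norm = math.expm1(-a) / math.expm1(-a * M)
p = [norm * math.exp(-a * j) for j in range(M)]

H = -sum(x * math.log2(x) for x in p)            # Shannon entropy of one 8-bit output
print("Shannon entropy per bit:", H / 8)         # lands very close to 0.999999999969
print("max/min probability    :", p[0] / p[-1])  # about 1 + mu_lambda
```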

FA October 22, 2021 7:13 AM

@MarkH

Iff an attacker knows the initial count, then there are 0.9999971 bits of guessing entropy per bit of output.

Why should it be less? The effect of knowing the initial state disappears after a few events. See

https://pastebin.com/bLTmyX47

Clive Robinson October 22, 2021 9:47 AM

@ MarkH,

Without knowing which values are more probable, an attacker cannot make an attack better than exhaustive search against random numbers.

That is true of all searches of future random events.

But it misses a couple of relevant points

1, Past knowledge
2, Present knowledge.

Past knowledge provides information that “reduces the search space”. That is, the more past readings you have, the better you can find the average and variance.

Present knowledge provides information that allows you to “sync” to what are regular changes of state, which again reduces the search space significantly.

More importantly, as the measurement has reduced the probability to certainty in the TRNG, the side channel that exists from the measurement will tell anyone else who has a synchronized clock and counter, to within just a few bits, what actual measurement value the TRNG made.

As that time-based side channel from the measurement propagates through a deterministic system like “turpentine through a pig”, unless non-deterministic or sufficiently chaotic measures without their own energy or time channels are taken, the output of the TRNG will give itself away, because both the data domain and time domain are “known”.

The problems you need to address are covered in four main areas,

1) Not letting the time side channel from the measurement be available at the TRNG output. It has to be decoupled not just in hardware but in software as well, so it cannot be of use to an attacker.

2) Not having a regular-period state update process: it needs to be non-deterministic or sufficiently chaotic, so it cannot be of use to an attacker.

3) Not updating the state sequence in a predictable way: it needs to be non-deterministic or sufficiently chaotic, so it cannot be of use to an attacker.

4) Not having the data output coupled to the state in a predictable way: it needs to be non-deterministic or sufficiently chaotic, so it cannot be of use to an attacker.

I’ve listed them in order of actual difficulty to achieve in the design of a practical TRNG with the most difficult first.

Do not underestimate the difficulty of the first, that is, breaking the time-based side channel from the measurement process to the output.

In the US the techniques you use for the hardware design are technically still classified to quite a high level, even though they are mostly in the public domain these days.

Similarly with software, all sorts of nasties exist in and below the CPU level, in the RTL, Microcode and similar between the hardware interface and the ISA interface you write the code to.

You have no direct control below the ISA level, and frequently not much above it either, due to the gulf between the ISA and high-level languages. Likewise you mostly have very little indirect control over what happens in that area between the ISA and external hardware I/O. Due to the “Make it as fast as possible” or “Make it as efficient as possible” mentalities that have pervaded the industry for five decades or more, some real nasties exist there, as was “officially” eventually found towards the end of 2017 (though many older engineers in the industry knew damn well such problems were not just possible but very probable), and as Meltdown/Spectre and much since has shown (hence the reason I christened them “The Xmas Present that keeps on giving”, and predicted five years to a decade of further discoveries, with two more so far this year and four years elapsed…).

In theory breaking the time channel in hardware digital logic is easy… You have,

1) One totally independent time source in the measurement system, that clocks data into an infinite-sized buffer at a rate that is always greater than the maximum possible output rate.
2) The output of the buffer is clocked by a totally independent read process at a rate that can never block, or exhaust the buffer.
3) An infinite sized perfect buffer between them thus breaking the time relationship between input and output.

In practice it is not possible to do this with standard hardware buffers, because buffers based on even hardware such as Dual Port RAM and Register Files will block if you push them, and so will cause a degree of coupling between the input and output. But buffers of a reasonable size to meet the rate requirements need to be large, and your only available option is bi-directional port RAM, be it Static or Dynamic.
Whilst you can use a multiple-buffer solution with biphase clocks, they have their own set of issues[1].

Then there is the fun of what leaks around in the analogue circuitry that Digital Logic actually is, which is the “work domain” of “energy amplitude against time”.
Here EM / acoustic / thermal signals can just “fly off the components” due to the movement of charge, as well as get into “Grounds and Commons” that are never the “sinks of charge/energy” or “Stable references” we are taught to kid ourselves they are when we train as engineers.

As for the other three major problem areas of “State update period”, “State update sequence”, and “State to output data coupling”, I can go through them if people want, but hopefully they should be easier for most readers here to see.

[1] You can operate independently circular-buffered input and output to the main buffer, using biphase clocks to stop deadlock on the main buffer. But unless designed in a certain way it will under certain circumstances still have issues. That is, the independent input buffer can be overwritten / blocked and the output buffer forced into under-read / blocked. However they will still be independent of each other, so keep the time-based side channel broken. But the input block / overwrite causes data to be lost, which will introduce some bias, whilst the output is far more significantly affected. If the output buffer goes into under-read it will pull out very recently used values in the same order, and if it blocks but the downstream process is not in turn blocked then it will keep reading the same value; both are highly undesirable. It’s an issue that has plagued real-time system designers for years, and the only solution they have come up with is guaranteed rate limiting, which can be highly inefficient and thus not liked at all.

MarkH October 22, 2021 11:12 AM

@FA, Clive:

I’m going to (at least roughly) partition the categories offered in Clive’s preceding comment thus:

• “past knowledge” as analysis of previous outputs, i.e. a data-at-rest attack

• “present knowledge” as side-channel attacks at the time of random number generation

FA grasps the truth of the matter: when the counter is free-running, the predictive value of previous outputs is far too small to be of use. I’ll post some analysis of this matter presently.

========================

As I wrote to Clive some days ago, I’ll be happy to discuss mitigation of side channels when the statistical analysis is concluded.

MarkH October 22, 2021 11:58 AM

.
Why the TRNG Decay-Time Modular Counter Should be Free-Running

Previously I showed the explicit link between (1) knowing the “initial count” as defined above, and (2) exploiting the tiny residual bias in the count extracted from each decay. Without the first, the second cannot be.

The designer of the hardware might be tempted to reset the counter after each decay detection, especially if it’s Joe Bloggs, still nostalgic for his flawed approach of measuring the entire interval between decays as raw random data (never do this!).

For modular time extraction (which I call the “correct design”), resetting the counter in this manner has one attraction: it makes successive extracted numbers completely independent. In other words, knowing the count extracted from the 1,000,000th decay detection adds absolutely nothing to your ability to predict the count for the 1,000,001st detection.

Statistical independence is good — in fact, it’s a fundamental requirement for cryptographic random numbers that independence be very high — so why not?

========================

If your design resets the timer, there is no “chained” predictor from one extracted number to the next.

However, if there is even modest consistency in the detector’s recovery time, the initial count for each number extraction will be somewhat predictable.

In consequence of such predictability, there will be a tiny bias shared by all of the extracted numbers, toward some interval of more likely initial counts. Rather than a very weak predictor from one count to the next, there’s a very weak predictor which applies statically to every count.

This gives rise to (a) possible guessing attacks (too poor to be practical, but possible at least in principle), and (b) the potential that statistical analysis of outputs can distinguish the RNG from a perfect uniformly distributed function of a random variable. For cryptography, the ability to detect this difference is classified as a weakness.

========================

Why does free-running the counter eliminate the consistent bias? Because (as we have seen in detail) the distribution of extracted counts is extremely uniform, and the initial count for the next extraction will depend on the unpredictable and uniformly distributed count from the current extraction.

Each individual count making up the TRNG output will in a sense be biased … but in aggregate, the counts are biased equally toward every possible count. The “magnetism” is everywhere; there is no “north pole.”

In the free-running counter design, a tiny dependence between successive counts is tolerated in exchange for

• guaranteeing that optimal guessing is impractical, and

• statistical indistinguishability.
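A small simulation of the trade-off just described — reset counter versus free-running counter — with μλ deliberately exaggerated (≈ 0.5) so the contrast is visible in a few million samples; a constant recovery time k is assumed, and all the numbers are illustrative only:

```python
# Contrast a counter reset at each detection with a free-running counter.
import random
from collections import Counter

lam = 1000.0          # mean detections per second (illustrative)
f_clk = 512_000.0     # counter clock, Hz (gives mu*lambda = 0.5, exaggerated)
mod = 256             # 8-bit counter
k = 37                # constant detector recovery time, in clock cycles (illustrative)
N = 3_000_000

reset_bins, free_bins = Counter(), Counter()
free_count = 0
for _ in range(N):
    cycles = int(random.expovariate(lam) * f_clk)   # clock cycles until the next detection
    reset_bins[(k + cycles) % mod] += 1             # design that resets the counter each time
    free_count = (free_count + k + cycles) % mod    # free-running counter, never reset
    free_bins[free_count] += 1

for name, bins in (("reset counter", reset_bins), ("free-running ", free_bins)):
    counts = [bins[v] for v in range(mod)]
    print(f"{name}: max/min bin ratio = {max(counts) / min(counts):.3f}")
```

The reset design shows a fixed lean toward the initial count, while the free-running design’s marginal distribution flattens to within sampling noise, leaving only the weak dependence between successive outputs discussed below.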

MarkH October 22, 2021 12:19 PM

.
Amount of Distribution Bias, Pt. 12

From my previous comment:

Each individual count making up the TRNG output will in a sense be biased … but in aggregate, the counts are biased equally toward every possible count.

Setting aside statistical dependency — which I’ll address below — the resulting distribution of outputs is, in practical terms, perfect. Not just really good, but perfect.

In other words, the distribution of numbers will have no deviation from that of a uniformly distributed random variable, which an attacker can exploit to guess or predict output values.

Every output bit has exactly one bit of distributional guessing entropy.

MarkH October 22, 2021 12:57 PM

.
Radioisotope TRNG Statistical Dependence

Ideally, there should be no correlation between successive outputs of a random number generator (or indeed, between outputs paired in any selection not based on their values). The absence of such correlation is statistical independence.

The departure from perfect independence is a form of bias, which makes numbers predictable, at least to some minute extent.

My comment above explains that free-running modular number extraction from radioactive decay timings — imagine a roulette wheel sampled via a strobe light — achieves practically perfect uniformity (the impossibility of prediction based on any number being more prevalent than another).

However, that remarkable and desirable property is attained at the cost of some dependence between successive outputs.

========================

We don’t have to calculate the amount of this dependence (whew!) because we already did above in discussing distribution bias. Here’s the reasoning:

1) When a decay is detected and “captured” by the counting circuitry, the extracted count of course corresponds to the time of detection.

2) The decay detector will be ready again after some recovery time or “dead time”. Wherever the counter happens to be at the end of the recovery time is the initial count, which (along with its nearer successors) is more probable than other count values for the next decay detection.

So, each detection sets up some bias for the next detection.

========================

To be conservative, I assume that recovery time is constant (the worst case for dependence), so if x_i is the i-th extracted count, then the initial count for the next extraction will be x_i + k mod m, where k is the detector recovery time in counter clock cycles, and m is the counter’s modulus (2^n for an n-bit counter).

The most probable value for x_{i+1} will be that initial count, the next most probable the initial count plus one (mod m), and so on, with the initial count minus one (mod m) as the least probable value.

The ratio between the greatest and least probabilities is 1 + μλ (see the above comment headed “Amount of Distribution Bias, Pt. 5”) where μ is the modular counter’s rollover time and λ is the mean rate of decay detections.

When that product (which is effectively the time modulus as a fraction of the mean interval between detections) is small, so is the non-uniformity of probability.

MarkH October 22, 2021 1:34 PM

.
Radioisotope TRNG Statistical Dependence, Pt. 2

How does the deviation from perfect independence affect the usefulness of the TRNG outputs for cryptography?

Concretely, this is equivalent to asking, what is the maximum benefit an attacker can derive from the dependence between sequential outputs?

First, I assume that an attacker has no alternative data path for discovery of the random outputs (side channels, espionage, etc.), because an attacker who possesses such information need hardly labor to exploit subtle statistical imperfections!

I conservatively assume that the attacker knows a sequence of TRNG outputs generated immediately before (or immediately after) the sequence of outputs composing the secret number under attack, and that the attacker also knows the constant k from my preceding comment.

The adjacent outputs known to the attacker might have been used publicly, or perhaps were gathered by some form of spying (it’s a cryptographic maxim that the disclosure of any part of an RNG’s output should not enable computation of any other).

[As we shall see, only the adjacent outputs are potentially useful — separation by even one step in the sequence effectively destroys all dependence — so it’s the output immediately before, or the output immediately after.]

To an attacker, the exploitation of statistical bias is measured by work saved in guessing a secret number. To an excellent approximation, the mean savings (compared to exhaustive search) based on the dependence between successive outputs is μλ/4, which for the mini-TRNG design from Korea is 16 parts per million (about 8 and a half minutes shaved from a year-long guessing attack).

Suppose that an attacker wishes to recover a secret number (a symmetric cipher key, for example) and possesses the output that was generated immediately prior to the start of that key. Knowledge of the dependence between the known extraction and the first part of the key provides some guessing advantage, even if it’s too small to be useful.

But what about the dependence between extractions which are not adjacent, for example x_i and x_{i+2}? Could that be used to guess later bits in the key, and increase the guessing advantage?

It’s a somewhat intricate question, which I’ll address below.

MarkH October 22, 2021 3:08 PM

.
Radioisotope TRNG Statistical Dependence, Pt. 3

Here I take up the question of dependence between non-adjacent outputs from a TRNG using free-running modular timing of decay detections.

As I described above, an attacker knowing x_i — the output immediately prior to the number under attack — computes x_i + k mod m as the initial value g, and knows that to be the most probable value for x_{i+1}.

However, because the distribution is so uniform, that most probable guess g will be wrong the vast majority of the time. It’s the attacker’s misfortune not to know x_{i+1}, only to have a very weak guess.

So what about x_{i+2}? The only possible starting point for the attacker is g′ = g + k mod m. Within the attacker’s knowledge, there exists no better guess.

So how bad is this situation? For the attacker, it’s very bad. If g′ happens to be correct — or nearly correct — then there will be some tiny work savings by guessing.

To make this concrete, suppose an 8-bit counter and g′ = 101. In the half of the domain nearest to this — roughly ± 64, from about 38 to about 165 — there will be at least some work savings (on average) by starting from this best guess. But in the other half of the domain, starting from that guess will require more work than a simple exhaustive search. Oops!

The gains and losses don’t cancel out entirely, because the favorable values are minutely more likely.

It occurred to me that because g′ is known to be poor, optimal guessing would zig-zag in a sequence like g′, g′+1, g′-1, g′+2 …

For the Korean design, the guessing advantage from this optimal attack — for the second byte! — is 5.0000e-013 of the cost of exhaustive search, or 15.8 microseconds shaved from a year-long attack.

Obviously, the situation is even worse for subsequent extractions.
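For concreteness, the zig-zag ordering mentioned above can be sketched as follows (the 8-bit counter and g′ = 101 are just the example values used in this comment; nothing here is specific to any particular hardware):

```python
# Generate guesses fanning out from the best estimate g', alternating
# above and below it (mod m), so the more probable values are tried first.
def zigzag_guesses(g_prime: int, m: int = 256):
    yield g_prime % m
    for step in range(1, m // 2 + 1):
        yield (g_prime + step) % m
        if step < (m + 1) // 2:       # avoid repeating the antipodal value for even m
            yield (g_prime - step) % m


order = list(zigzag_guesses(101))
print(order[:7])         # [101, 102, 100, 103, 99, 104, 98]
print(len(set(order)))   # 256 distinct guesses, i.e. the whole domain
```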

MarkH October 22, 2021 9:44 PM

For clarity, the 6th paragraph above should be:

To make this concrete, suppose an 8-bit counter and g′ = 101. If x_{i+1} falls in the half of the domain nearest to this — roughly ± 64, from about 38 to about 165 — there will be at least some work savings (on average) by starting from this best guess. But if x_{i+1} is in the other half of the domain, starting from that guess will require more work than a simple exhaustive search. Oops!

MarkH October 23, 2021 12:11 AM

.
Summary

@Freezing_in_Brazil, FA, Clive, other interested parties:

With my most recent comment above, the essence of my analysis of bias in numbers output by a radioisotope TRNG, which extracts random bits by free-running modular timing of decay detections, is completed.

Note well that this analysis is confined to the behavior of radioactive decays, and of the modular timing method for extracting random bits. Other parts of the TRNG might aggravate bias, so the net result is by no means guaranteed.

Principal conclusions:

• With a little care in parameter selection, statistical bias is ultra-low.

• The distribution of output values is, for practical purposes, that of an ideal random source.

• Without knowledge of adjacent outputs (which might be called special intelligence as it will not in general be available), statistical independence is, for practical purposes, that of an ideal random source.

• For an attacker who does not know adjacent outputs, and doesn’t have some side channel or other kind of espionage available, the outputs are as opaque and mysterious as the flux of neutrinos happening this moment in some star on the other side of our galaxy.

• With the benefit of adjacent outputs, guessing work is reduced by a minute fraction of one percent: not useful to any attacker.

• No compensation, “whitening,” or any other kind of post-processing is required. The raw counts are the ultra-low bias random data.

• The amount of bias varies only slightly with substantial changes in parameters.

• As the radioisotope source depopulates over time, the bit output rate decreases, and bias also decreases. In the correct design, at least, there is no such thing as “decay bias.”

========================

To any readers patient enough to have slogged through my many many comments, I thank you for your attention.

Anyone wishing to review the work, just look for comments under my name with bold-text “headlines” at the top of the comment body.

MarkH October 23, 2021 12:27 AM

.
Foundations

I believe that the radioisotope TRNG analysis must be true (excepting such errors as I have committed along the way) because:

• From generations of physical theory and observation, nuclear decays in the type of source used in a TRNG are completely independent events with a constant decay probability density for each nucleus.

• Except for its known decay probability, the timing of such a nuclear decay is absolutely unknowable before the instant of its occurrence.

• Such radioisotope sources exhibit a mean rate of decays.

• Processes composed of independent events, with incidence converging to a mean rate, are Poisson processes.

• The amount of time before the next event in a Poisson process conforms accurately to an exponential distribution.

• A well-functioning particle detector capturing some fraction of decays from such a source exhibits decay detections which are likewise a Poisson process, adhering to an exponential distribution.

• The formulae and figures presented above are derived — using basic algebra and first-year calculus — from the properties of exponential distributions.

Each step above is based on sound, well-established and tested knowledge.

If any reader sees an error (or a plethora thereof) in any of the work I’ve presented above, I’d be grateful to you for pointing it out.
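For readers who want the last two bullets made concrete: under the idealised model used throughout these comments (exponential waiting times, an ideal latch, counter clock period t_c, modulus m, so that μ = m·t_c), the waiting-time law and the resulting distribution of the latched offset j from the initial count are

```latex
\[
  \Pr[T > t] = e^{-\lambda t}, \qquad
  \Pr[\text{offset} = j] =
  \frac{\left(1 - e^{-\lambda t_c}\right) e^{-\lambda j t_c}}
       {1 - e^{-\lambda m t_c}},
  \quad j = 0, \dots, m-1,
\]
\[
  \frac{\Pr[\text{offset}=0]}{\Pr[\text{offset}=m-1]}
  = e^{\lambda (m-1) t_c} \approx e^{\mu\lambda} \approx 1 + \mu\lambda,
\]
```

which is where the 1 + μλ ratio used repeatedly above comes from.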

Weather October 23, 2021 8:06 AM

@MarkH

Good paper. Just something to look at: if you have X_{i+0}, which is 32 bytes and unknown to the attacker, but the attacker records X_{i+1} and X_{i+2}, I think in theory the guessing attack against X_{i+0} would be 16 bytes — think like an OTP: if you XOR 32 bytes by 64 bytes you would have half equal-probability, accurate output.

Hard to explain at present, sorry. Agree with not resetting the counter.

Are you publishing it?

MarkH October 24, 2021 1:22 AM

@Clive:

For the moment, I assume that the TRNG is designed and operated with sufficient mitigations to frustrate observation of its internal processes via energy measurements by an attacker within the design basis threat model.

Subject to this assumption, I suppose that the only time-based side channel would be via either (a) timing of its data outputs, or (b) its response time to input messages, if its specifications require reverse communication.

Because the design for processing messages from outside would depend on functional requirements, I begin with a concept for the output-only case. In this concept:

• There would be no processor able to function as the main processor of any server, PC, or mobile device. The processor would consume a fraction of a watt, have no DRAM, and run no operating system. My preference would be a dirt-simple 8-bit microcontroller (μC), which is perfectly adequate to the application. It would have its own crystal controlled clock independent of the decay timing circuit.

• There would be no “hardware” buffering (outside the μC). For a low-speed radioisotope TRNG, it would be sufficient to poll the decay timing latch circuitry.

• Output from the μC would be via a simple serial interface implemented by an on-chip hardware module.

• The timing of initiation of each serial output sequence would be derived from the μC’s crystal-based clock via a timer interrupt, independent of polling of the timing circuit.

• Random bits would be buffered in the μC’s SRAM by the software.

I expect that timing of the output signals would be independent of the polling, acquisition and buffering operations. [Note well that the μC family I have in mind has fixed interrupt latency independent of the code executing at the time of interrupt.]
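A behavioural sketch of that structure (Python used purely as pseudocode: it simulates the event timing rather than being firmware, and every rate, name and number in it is illustrative):

```python
# Behavioural sketch only (not firmware): decay latch polled on one
# schedule, raw counts kept in a buffer, output driven by an independent
# fixed-rate timer.
import random
from collections import deque

LAMBDA = 100.0       # mean decay detections per second
POLL_HZ = 10_000.0   # polling rate of the decay-timing latch
OUT_HZ = 200.0       # output timer rate; a tick with an empty buffer is skipped
F_CTR = 1_000_000    # decay-timing counter clock, Hz
MOD = 256            # 8-bit modular count

buffer = deque()     # stands in for the uC's SRAM buffer
sent = 0
t, end = 0.0, 60.0   # simulate one minute
next_decay = random.expovariate(LAMBDA)
next_poll, next_out = 0.0, 0.0

while t < end:
    t = min(next_poll, next_out)
    if next_poll <= next_out:
        # Polling path, on its own schedule: grab a latched count if one
        # has arrived since the last poll.
        if next_decay <= t:
            buffer.append(int(next_decay * F_CTR) % MOD)
            next_decay = t + random.expovariate(LAMBDA)
        next_poll += 1.0 / POLL_HZ
    else:
        # Output path, fired by an independent timer, regardless of polling.
        if buffer:
            buffer.popleft()        # byte handed to the serial interface
            sent += 1
        next_out += 1.0 / OUT_HZ

print("bytes sent:", sent, "still buffered:", len(buffer))
```

The polling path and the output path never share a schedule, which is the property being illustrated.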

If you care to critique this concept, I’ll study your commentary with great interest.

MarkH October 24, 2021 1:34 AM

@Clive:

As an elaboration of the foregoing, it might be desirable for the “outside world” interface to be handled by a second microcontroller in order to mitigate any of several side channels. The collection and buffering of random bits would be the sole responsibility of the first μC.

This could be done at modest additional dollar cost. Though they would live in the same “box” the two μCs could have separate shielding, independent power supplies, full electrical isolation etc.

MarkH November 11, 2022 10:29 PM

The Myth of Inherent Radioisotope TRNG Bias, Pt. 1

Q: What’s this all about?

A: Rebutting the mistaken idea that random numbers generated from radioactive decay times are unavoidably biased. I call this hypothesized defect “depopulation bias.”

Q: Isn’t this beating a dead horse?

A: I explained above that radioisotope TRNGs can have near-zero data bias despite the number of unstable nuclei decreasing with the passage of time, but it seemed to me that at least 4 participants in this thread thought that increasing bias as the radioactive source ages cannot be avoided.

Q: Why take it up after all this time?

A: I was working on this “appendix” a year ago, but got busy with other things … it’s been in the back of my mind all the while.

MarkH November 11, 2022 10:41 PM

The Myth of Inherent Radioisotope TRNG Bias, Pt. 2

Q: What is meant by “data bias”?

A: Patterns in the data output of the TRNG that could enable guesses of future (or previous) output data with better accuracy than by pure chance. [I noted above that in narrow cases, bias can be as much as a few parts per million, but not useful for a crypto attack. If the TRNG suffers from depopulation bias, the data bias would increase over time.]

MarkH November 11, 2022 10:51 PM

The Myth of Inherent Radioisotope TRNG Bias, Pt. 3

Some radioisotope TRNGs do suffer from depopulation bias.

If the design measures the whole interval between successive decays — or the number of decays per unit of time — the output will have too much bias for cryptographic use without a conditioning function.

Depending on parameters, the poor statistical properties of such a mistaken TRNG design can get worse as the radioactive source ages.

MarkH November 11, 2022 11:04 PM

The Myth of Inherent Radioisotope TRNG Bias, Pt. 4

Clive mentioned another interesting possibility (in a separate thread).

A designer who made the fundamental mistake of basing a TRNG design on whole-interval or count-per-time measurements, might then add compensation (mathematically in software, or even by hardware tricks) to “flatten out” the distribution of output numbers.

Such compensation depends on the mean number of nuclear decay detections per unit time. If it’s not adjusted as the radioactive source ages, the output data will grow more and more biased.

MarkH November 11, 2022 11:18 PM

The Myth of Inherent Radioisotope TRNG Bias, Pt. 5

As my above comments (from 2021) explain, a TRNG using a free-running timer to measure decay times modulo some small value not only can have extremely low data bias — but also the data bias decreases as the radioactive source depopulates.

An interesting question, is why did some of my interlocutors here seem convinced that even the modular timing design must suffer from depopulation bias?

I noted that the reasoning as to why depopulation would increase data bias, implicitly referred to properties of periodic functions. Some revealing words include “heterodyning” and “sawtooth”.

Because Poisson processes (like decays in the TRNG) are fundamentally different from periodic processes, reasoning that “reads across” from one to the other can be completely invalid.

If anybody is confused about this, we could have some interesting discussion about periodic functions, quasi-periodic processes, and Poisson processes.

MarkH November 11, 2022 11:26 PM

The Myth of Inherent Radioisotope TRNG Bias, Pt. 6

Last year, I actually spent a few days trying to construct arguments as to why a modular timing TRNG should have increasing data bias as the source depopulates.

I pretty well wracked my brain (or more accurately, brain fragment) trying to make a case that was logical, however mistaken it might be.

The only reasoning I could find to claim “data bias must grow as the source ages” was that based on the fallacy of treating Poisson processes as though they were periodic.

As a reminder, my case for the ultra-low bias of a modular timing TRNG is (a) presented above in transparent detail, and (b) based on well-established physics and basic math. Either I made a mistake, or the absence of depopulation bias is proved above, QED
