LezGoBrandon March 24, 2022 8:12 AM

Because, once you introduce 7, almost immediately, what comes to mind is Eleven.
But then again, ya never know, I might be onto sumpin’ here.

Jim K March 24, 2022 8:41 AM

LoFi SciFi at its best, if you were allowed to stay up that late.
I seem to remember Blue Peter making one of the communicator bracelets.

TexasDex March 24, 2022 9:27 AM

I’m slightly confused here by the word ‘identical’. If the new algorithm is identical to the old one, was anything even changed? If so, what?

Ted March 24, 2022 9:29 AM

@Alan, Flotsam

This is far beyond my expertise, but is BLAKE3 too new? It looks like BLAKE2 came out in 2012. BLAKE3 was announced in 2020.

Jason Donenfeld said “In 5.17, I began by swapping out SHA-1 for BLAKE2s… With SHA-1 having been broken quite mercilessly, this was an easy change to make.” But he didn’t say anything about BLAKE3?

tfb March 24, 2022 9:44 AM

@TexasDex: /dev/random and /dev/urandom have historically had different semantics: long ago, and almost always in weird edge cases I think, /dev/urandom could provide not-very-good randomness as a tradeoff for not blocking. Now they are identical, so I think neither will block and neither will provide rubbish randomness.

Ralf Muschall March 24, 2022 9:44 AM

@Ted: No, /dev/random (the formerly pure entropy source which sometimes blocked) and /dev/urandom (PRNG seeded by the same entropy source as /dev/random) are now identical to each other (i.e. /dev/random essentially becomes a copy of /dev/urandom, supplying PRNG data instead of entropy). I hope this is secure.

Dave A March 24, 2022 9:49 AM

TexasDex, I was a bit confused by that too. What I guess he means is that it behaves identically from the outside, in terms of the API and such, but is different under the hood. Kinda like how a Yugo and a BMW might behave the same WRT basic controls, running on gasoline, approximate size and shape, carrying capacity, etc., but one is a much better “implementation” of that system than the other.

Alex March 24, 2022 9:59 AM

Is there a reason modern computer systems don’t come with built-in hardware RNGs? Is there any reason RNGs couldn’t be put on the mainboard, or even in the CPU itself?

I don’t know how inexpensive hardware RNGs work, beyond the fact that they use zener diodes and the avalanche effect, but it seems like it ought to be possible to do it really cheaply.

Roflo March 24, 2022 10:18 AM

@Alex, this is mostly speculation on my part…

… but I assume a HW RNG would have significant disadvantages of its own, mainly that it’s not easily replaceable (if a bug is found, or if a better alternative is deployed).

Also, a given piece of software would behave differently on different machines, depending on the HW installed. This could make the same algorithm secure on one platform but insecure on another.

Clive Robinson March 24, 2022 10:44 AM

@ Alex,

Is there a reason modern computer systems don’t come with built-in hardware RNGs? Is there any reason RNGs couldn’t be put on the mainboard, or even in the CPU itself?

The short answers are,

1, Most modern computers, even $10 SBCs, come with what are claimed to be RNGs in hardware.

2, The majority of these hardware so-called RNGs are in the CPU chips.

A wise man would treat these so-called RNGs with a great deal of skepticism. Especially as most chip makers make it impossible to verify the RNGs by hiding them behind various forms of crypto “magic pixie dust”.

The point being, you can only be an observer of the output of the crypto algorithm. If the algorithm is any good, then it is impossible in human terms to determine, from just observing the output, if the input to the crypto algorithm is actually,

1, Truly Random
2, Chaotic
3, Of high Complexity
4, Pseudo Random
5, Purely Deterministic

Only the first on the list is of use as the “Root of Trust” in any security system; the last is actually desirable for running Monte Carlo type simulations in engineering, whereas the first would not be.

The first is actually so rare that it is at best a very very tiny fraction in any hardware circuit that claims to be an RNG. Even the bulk of “True Random Number Generator”(TRNG) sources, when they are analysed, show most of their output is from the second and third groups[1], often with a lot of the fifth thrown into the mix.

The fifth can be as simple as a counter incrementing by one each time, with the crypto algorithm turning it into a “stream generator”. Which is effectively what AES in Counter Mode is. All you need to break it is the “secret key”, which might be secret to the observer, but not to the designer of the hardware or anyone they might have told.
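To make the counter-plus-crypto point concrete, here is a toy sketch in Python with SHA-256 standing in for the block cipher (an assumption purely for portability; real hardware would use AES): the output looks random to any observer, yet is fully reproducible by anyone who holds the key.

```python
import hashlib

def ctr_stream(key: bytes, nblocks: int) -> bytes:
    """Toy CTR-mode stream generator: hash(key || counter) for an
    incrementing counter. Purely deterministic given the key."""
    out = b""
    for counter in range(nblocks):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
    return out

# The "secret key" is all that separates this from a predictable sequence:
a = ctr_stream(b"secret key", 4)
b = ctr_stream(b"secret key", 4)
assert a == b   # same key -> identical "random" stream, every time
```

Passing such output through a statistical test suite tells you nothing about whether a true random source, or just a counter, sits behind the crypto.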

So… If you follow the logic you will realise that,

“If you can not see the raw source output directly, you would be a fool to rely solely on on-CPU RNGs for any kind of security application.”

Especially when the “raw sources” are at best extremely “fragile” and highly susceptible to “influence”.

I could go on, but whilst it is often fine to use those CPU RNGs as a “jiggler” in various ways to spread any real entropy you have across as many bits as you can, security wise you still need that true entropy source being the “Master” secret, otherwise “no secrecy == no security, just obscurity”.

[1] The old “flip the coin” process has been repeatedly shown to be highly linked to its input conditions and is thus at best “chaotic”. Even those “draw the ball” machines are very deterministic in nature, and their output is almost certainly a mix of narrow-range chaotic behaviour and high complexity in the orbit arcs.

Hans March 24, 2022 10:46 AM

@Alex, some reasons are stated in the linked article. Basically, there are HW RNGs. But they are not fully trusted.

LeoS March 24, 2022 11:20 AM

Has there been any explanation given as to why they’re still using what appears to be an ad hoc construction, as opposed to something like Fortuna (which is what FreeBSD and Apple are using)?

David Leppik March 24, 2022 11:58 AM


The purpose of the PRNG is essentially to shuffle together a variety of sources of randomness, including the hardware RNG, to make sure biases or backdoors in any of them can’t be exploited.

Think of a high-stakes poker game. Shuffling cards isn’t very random at best (the cards can only travel so far), and with very little practice, the dealer can guarantee the top and bottom cards don’t move. With enough practice, the dealer can do a perfect shuffle, where the cards are interwoven exactly, making a predictable pattern. So what do they do to make it fair enough to play cards?

For one thing, poker only uses a small number of cards from the top of the deck before it gets shuffled again, typically by the next player. Also, another player cuts the deck, so the middle of the deck—where the shuffle is most chaotic—becomes the top. Finally, dealing is done in a circle, magnifying the effect of off-by-one errors. The combination of these little tweaks ensures that no one player—nor any simple combination of players—has an advantage.

Your PRNG is like that. Imagine the NSA designed the hardware RNG, a Chinese manufacturer hacked it during manufacturing, a Russian hacker tinkered with the network drivers, and all the other sources of randomness have been tinkered with by a different state-level actor. Like in poker, you want to mix all these untrusted sources of randomness so that none of them have any insight into the random numbers that come out of /dev/random.
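The mixing idea can be sketched in a few lines (a simplification on my part; the kernel uses a keyed hash construction, not bare SHA-256): as long as at least one input is genuinely unpredictable, the combined output is unpredictable, even if every other source is attacker-controlled.

```python
import hashlib, os

def mix(*sources: bytes) -> bytes:
    """Combine several randomness sources into one 32-byte output.
    A backdoored source cannot bias the result unless it can also
    predict every other input."""
    h = hashlib.sha256()
    for s in sources:
        # length-prefix each input so source boundaries are unambiguous
        h.update(len(s).to_bytes(4, "big") + s)
    return h.digest()

hw_rng  = bytes(32)        # worst case: a rigged source emitting all zeros
timings = os.urandom(32)   # one honest, unpredictable source
seed = mix(hw_rng, timings)
```

Like the cut-and-reshuffle in poker, no single participant controls the outcome.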

Quantry March 24, 2022 12:46 PM

@Ted, Alan

Thanks for the zx2c4 link.

Closer to home:


lots more good linkage there.

I think part of the answer to “why not blake3” is

‘BLAKE2’s documented “salt” and “personal” fields, which were specifically created for this type of usage.’

whereas, if my memory serves, blake3 emphasizes other uses, (such as high-speed blockchain hashing). Speed isn’t the point here, I suspect. Apologies for the facetious answers you received. Now you have a transport mechanic’s best guess.



One personal concern is whether any CPU timing bias (during mixing) was tweaked out of BLAKE2, since this is about hair-splitting by rogue engineers.

And also… as always folks, continue surveying existing systems’ weak points, since strengthening one area always forces insanity to look elsewhere.

Denton Scratch March 24, 2022 1:15 PM

@Ralf Muschall,

You have perpetrated a couple of misunderstandings.

The old /dev/random didn’t provide a stream of “pure entropy”; like /dev/urandom, it used an entropy source to seed a PRNG. The difference between them is that /dev/random cares about the estimate of entropy remaining in the pool, and blocks when it thinks it’s all gone; /dev/urandom doesn’t block, it just carries on producing PRNG output.

I think it’s also a misunderstanding to think there’s ever any “pure entropy” in the Linux RNG; or anywhere, really. All you can say about a Linux entropy pool is that it probably has some entropy in it. The idea that you can measure how much you’ve got, and then account for how much you’ve used up and how much you’re putting in was an interesting idea, but I don’t think it stands up to any scrutiny.

Ted March 24, 2022 3:06 PM

@Quantry. Thanks! And thank you for the links to BLAKE.

@Denton Scratch, Ralf Muschall

I don’t mean to get involved here, but @Ralf accidentally responded to me instead of @tfb. But I did see that the author of the changes, Jason Donenfeld, had interesting comments on /dev/random and /dev/urandom. He also added:

[Update 2022-03-22: apparently we cannot yet unify /dev/random and /dev/urandom, because the day after this change made it to mainline, breakage was detected on arm, m68k, microblaze, sparc32, and xtensa, and Linus decided to revert with Revert “random: block in /dev/urandom”.]

MrC March 25, 2022 2:01 AM

@LeoS: The link addresses that near the bottom. Fortuna or something like it is on the roadmap, but the constraint of “don’t break stuff” forces an approach of incremental improvements. (In fact, the recent edit says that they had to revert the random=urandom merger because it broke stuff on some architectures.)

Sumadelet March 25, 2022 6:56 AM

The discussion around the revert can be found in the Linux Kernel Mailing List here:


fib March 25, 2022 9:50 AM

@ Alex

Is there a reason modern computer systems don’t come with built-in hardware RNGs? Is there any reason RNGs couldn’t be put on the mainboard, or even in the CPU itself?

One day there’s going to be a radioactive TRNG on a chip – mounted on the motherboard [or so I hope].


Denton Scratch March 25, 2022 9:52 AM


Is there a reason modern computer systems don’t come with built in hardware RNGs?

They do – modern x86 CPUs have built-in HWRNGs. Many people don’t trust them, because their output is encrypted, making it impossible to examine the raw bitstream.

The x86 HWRNG doesn’t rely on reverse-biased zener diodes; it relies on jitter in free-running loops of inverters. Noisy diodes provide noise that derives from unpredictable quantum phenomena, but they require analogue circuitry, and it’s hard to integrate analogue circuitry on a digital chip like a CPU. You have to isolate the digital and analogue circuitry, because RF from the digital side interferes with the analogue, and also feeds back through the power supply.

TBH, I’ve given up on making a random bit-source using reverse-biased diodes. I’m sure it’s possible to make a diode-based TRNG that fits in a USB stick, that also allows the raw bit-stream to be inspected, but I haven’t yet seen one on the market.

Quantry March 25, 2022 11:16 AM

Not saying pseudo-random is so very objectionable but…

Depending on bit rate you feel the conversation with your girlfriend so desperately needs: (assuming you’ve got €1,400+ up your sleeve)


even looks like they have a USB2.0 option good for 4 Mbps.

OR h–ps://

Better yet h–ps://

Honestly, this is a blind obsession, while so many other vectors are wide open to “suitably motivated” clops by “spooks with inexhaustible funding”.

Clive Robinson March 25, 2022 11:24 AM

@ fib, Alex, ALL,

One day there’s going to be a radioactive TRNG on a chip

There already are radioactive sources mounted on chips, as low power “power supplies” so the technology is already available.

But you can buy very very cheaply a radioactive source and detector, all mounted up and ready to “play with”[1], at your local hardware store or similar. That is, smoke alarms contain a radioactive source and can be used to make TRNGs, but just don’t do it.

Other sources that are now considered sufficiently hazardous to humans are pre-WWII green-glaze pottery like “Fiestaware collectables”, certain uranium-salt based glass, oh and do not forget all those “see in the dark” early paints used on instruments in aircraft cockpits and on wrist watches issued to military personnel etc back then.

As has recently been demonstrated even kicking dirt about can be detected hundreds and thousands of miles away (Chernobyl site).

And that lovely granite worktop or slate tile back you want in your kitchen… Yup they will make a Geiger Counter click like the meter in a taxi being used as a getaway vehicle…

Remember the NRPB has not indicated a “safe lower limit” so…

[1] Though I really would suggest you don’t play with any radio isotope source (and that includes bananas[2]). Whilst we normally think alpha emitters are “harmless”, they are very much not. The reason we incorrectly assume they are safe is that the alpha particles will not go through the layer of dead skin cells you have… However if you ingest an alpha emitter, and it gets past the gut barrier, then your future will not look good, nor will you, as the radiation poisoning destroys you at the DNA and lower levels… So “smoke detectors are not toys” and should be treated with real caution… Oh and some radio isotopes are really very very poisonous in their own right… Look up plutonium, it’s about as poisonous a metallic element as you can get[2].

[2] Whilst radio isotopes turn up in all sorts of odd places like bananas, and exhaled breath (potassium and carbon isotopes respectively),

it’s not the radiation that should be a worry but the poisoning effects… Potassium chloride is the lethal component in the lethal injection, and well, I think most know about the lethality of both carbon monoxide and carbon dioxide.

There are similar issues with other radioisotopes the man made element Plutonium is just about as bad as it gets as far as being a poisonous metal. How it gets into body cells has for quite some time been a bit of a mystery. However,

Clive Robinson March 25, 2022 11:48 AM

@ Quantry, ALL,

Honestly, this is a blind obsession, while so many other vectors are wide open to “suitably motivated” clops by “spooks with inexhaustible funding”.

The thing is we know the spooks at the likes of the NSA have fritzed with RNG’s

1, There was the one (Dual EC) that was so badly done and so obvious that even the kiss-ass NIST people had to withdraw a standard.

2, There was that strange software vulnerability that got into Juniper Networks’ software stack, that was a very definite back-door, that nobody wants to talk about.

In the long past I’ve pointed out that any spook-works would want to attack,

1, Implementations
2, Protocols
3, Standards

And we now know some decade or so later that is exactly what the NSA has done repeatedly.

Attacking RNGs very definitely falls well within their scope, especially on “Network Appliances” and other embedded systems, where the level of entropy can be very low, as little as ten or twenty bits in some cases…

You need to remember that the NSA or GCHQ don’t have to actually tamper directly with the hardware, when crappy software designs make their job oh so very easy in so many different ways.

Remember they have got NIST on more than one occasion, the AES competition was stacked up to make time based “implementation side channels” not just easy, but just a quick download away… Even now there are AES implementations out there that hemorrhage either KeyMat or PlainText or both out onto the network where they can be picked up one or two routers up-stream of where you are.

lurker March 25, 2022 2:22 PM

@Quantry, @Clive, All

The hotbits link has a number of suggestions for radioactive sources, followed by the warning, paraphrased, do not try this at home.

For another modern urban warning on the perils of tinkering with radioactive material see “Goiânia accident”

David Leppik March 25, 2022 4:42 PM

@Quantry, @Clive, @lurker, All

Radioactive sources of randomness* might have made sense in the 20th century, but the same quantum noise that makes radiation a good source of randomness* abounds at the nanometer scale of modern computer chips. Intel uses this quantum noise in their HW RNG—it’s something like a really noisy flip-flop—and I suspect that’s typical these days.

*I’m not saying “entropy” because it has a different meaning in physics, which in this case could be misleading.

Clive Robinson March 25, 2022 5:48 PM

@ David Leppik, lurker, Quantry, All,

I’m not saying “entropy” because it has a different meaning in physics, which in this case could be misleading.

Yup, the actual term is “meta-stability” and it’s a whole different bag of snakes. I’ve talked about it and the issues with “Ring Oscillators” of CMOS inverters[1], “injection locking” from any kind of noise, and the unfortunate “mixing effect” of D-Type latches etc.

A week or so back I posted references to quite detailed technical knowledge. If people are interested I can dig them out again.

[1] Contrary to what many think, logic gates like CMOS inverters are not “digital”, they are actually very high gain analog amplifiers with a poorly defined transition (just like Op-Amps). You can turn such inverters into analog amplifiers of more controlled gain and transition with just a couple of resistors (just as you can with Op-Amps).

MarkH March 25, 2022 6:08 PM

@fib, Clive, et al:

Ages ago when I worked in a smoke detector engineering group, I was told that the sources had a thin deposition of a precious metal (forgot which element), and that if ingested by a person would pass the digestive tract with negligible health risk.

I would not assume that all smoke detector radiation sources are made so carefully.

The electronics of a smoke detector are wholly unsuited to detection of individual alpha emissions, and the chamber might not work well either.

Smoke detectors don’t detect decay events (whose timing is supremely stochastic), but rather the comparatively steady electrical current through the chamber enabled by a substantial rate of air molecule ionizations: they throw away most of the randomness.

Denton Scratch March 26, 2022 5:33 AM

@David Leppik

Intel uses this quantum noise in their HW RNG—it’s something like a really noisy flip-flop—and I suspect that’s typical these days.

The Intel HWRNG relies on jitter in free-running rings of inverters. The jitter is caused by variability in the propagation time of the inverters; I don’t think that variability is of quantum origin. All gates have some variability in their performance characteristics, and switching time is one of those characteristics.

The bitstream from the raw device is rather biased. So it is passed through an AES encryption stage, which produces output indistinguishable from randomness, even if the output from the raw device is a stream of zeroes. All test-suites for randomness will pass if you run them on the output of AES encryption.
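That point is easy to demonstrate; in this sketch SHA-256 whitening stands in for the AES stage (my assumption, for portability): a grossly biased raw stream comes out of the whitener looking perfectly balanced to the simplest of tests.

```python
import hashlib, random

rng = random.Random(1234)  # deterministic source for a reproducible demo
# Raw device emits a heavily biased stream: ~90% all-ones bytes.
raw = bytes(0xFF if rng.random() < 0.9 else 0x00 for _ in range(4096))

def ones_fraction(data: bytes) -> float:
    """Monobit frequency: fraction of 1-bits in the stream."""
    return sum(bin(b).count("1") for b in data) / (8 * len(data))

# Whiten 32-byte blocks through a hash, as a stand-in for AES.
white = b"".join(hashlib.sha256(raw[i:i+32]).digest()
                 for i in range(0, len(raw), 32))

print(ones_fraction(raw))    # ~0.9: the raw source is obviously broken
print(ones_fraction(white))  # ~0.5: the whitened output hides the defect
```

The whitened stream would sail through the monobit test even if the raw source degraded to a constant, which is exactly why inspecting only the post-crypto output proves so little.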

I understand the device has been inspected under an electron microscope, and no fishiness was found. But I’m not willing to trust an RNG if I can’t examine the raw output from the randomness source myself; and that is not exposed by the Intel device. That means the quality of the raw bitstream can only be known to Intel engineers. I would have a lot more confidence in the Intel device if I could run my own tests on the raw bitstream, pre-encryption.

The minute fluctuations in the voltage across a zener diode are of quantum origin, being the result of electrons tunneling across a junction.
But there’s no particular reason to insist that a random source should depend on quantum phenomena. All that’s required is that the bitstream is unpredictable. I don’t know how one could make the output from a free-running inverter loop predictable without interfering with the hardware, so I’m inclined to trust the design in principle. But I want to be able to test the implementation.

Denton Scratch March 26, 2022 6:22 AM

Guenter Roeck’s explanation of the revert, from LKML:

“This patch (or a later version of it) made it into mainline and causes a
large number of qemu boot test failures for various architectures (arm,
m68k, microblaze, sparc32, xtensa are the ones I observed). Common
denominator is that boot hangs at “Saving random seed:”. A sample bisect
log is attached. Reverting this patch fixes the problem.”

From what I can see, a patch was introduced to make the RNG use haveged-like jitter sources to seed an empty entropy pool in early boot, which made it possible in principle to make /dev/urandom block, like /dev/random. The reverted patch actually made /dev/urandom block, but the jitter hack didn’t work on qemu.

I didn’t know about the jitter hack. I’m not sure I like it; if I’m not mistaken, it means that /dev/urandom uses one of two different sources of randomness, depending on the condition of the pool. Considering the trouble the Linux RNG goes to in normal operation, it seems unlikely (to my ignorant self) that the jitter hack produces randomness of the same quality as the traditional approach. For the kind of people that don’t trust the Intel HWRNG enough to use it in the Linux RNG, I can’t see that the jitter hack offers greater confidence.

Clive Robinson March 26, 2022 12:44 PM

@ Denton Scratch, David Leppick, ALL,

I don’t know how one could make the output from a free-running inverter loop predictable without interfering with the hardware,

It’s actually all too easy, and depends on how you define “interfering with the hardware”.

As I indicated above, a CMOS inverter is just a high gain analog amplifier. Which means it is highly sensitive to injected signals at its “crossover point”.

It’s fairly well known that back in the 17th century the Dutch mechanical inventor Christiaan Huygens, who had a significant interest in pendulum clocks, discovered an interesting phenomenon, which he wrote about in a letter to the Royal Society in London in Feb 1665. It was, perhaps unsurprisingly, to do with pendulums, as he was at the time trying to solve one of the greatest issues of the age, “navigation at sea” (something he failed to do; it fell to the English clock maker Harrison, who was cheated by the English Government).

But the surprise Huygens wrote about appeared to be almost magic: if you placed two or more pendulums close to each other they had a habit of falling into synchronicity, but of opposite phase.

Over three and a half centuries later the issues behind the synchronous behaviour are still not fully understood, and it still throws up a few interesting bits of research,

Simplistically, the pendulums and their driving mechanism are oscillators. Important to understand is that the driving mechanism that keeps the energy topped up in the “tuned circuit” the pendulum forms has to be,

1, In phase with the oscillation
2, Providing only sufficient energy to make up for the losses.

If either is not true, two effects can be observed,

1, The oscillation phase and frequency will change.
2, The chaotic process behind the oscillation will become dominant.

In the second case the oscillation phase and frequency become erratic, and can cause both pendulums to stop.

So there has to be an energy transfer from one pendulum to the other. This is a physical implementation of a Shannon Information Channel.

We know from thermodynamics that energy only moves in one direction, “down hill”, or from the dominant to the subordinate. But as we are dealing with “cyclic” behaviour, what is dominant or not, and when, is somewhat difficult to figure out.

However, as a first approximation, the oscillator that is not crossing zero when the other oscillator is can be viewed as dominant, and energy will be transferred from the dominant oscillator, slowing it, into the subordinate oscillator, either speeding it up or slowing it down depending on the phase at the zero crossing.

The tiny movements of energy result in the oscillators always affecting each other.

But how tiny?

The answer appears to be “below the measurable noise floor” of any given measurement…

Now consider other oscillators, like tuning forks, it can be shown that if you have two very lightly coupled tuning forks, if you strike one the other will build up a measurable resonance…

The same is true for every other resonant circuit that has ever been studied in sufficient detail. Including biological processes in individuals like humans (found out by sanitation engineers, where RMS figures were not the ones to use for the sizing of waste pipes in high density accommodation).

But the waveforms do not have to be sinusoidal, though they do have to be cyclic. They also do not have to be at or near the tuned circuit’s resonant frequency; a harmonic or subharmonic will do. In fact the waveform can appear to the human eye to be totally random, as long as there is a net transfer of energy at some harmonic relationship. And as previously noted it can be very very small, and effectively unmeasurable to most test instruments…

So a ring oscillator of CMOS inverters has very high susceptibility to energy being coupled in at any and all of the inverters as their inputs transition from one state to the other. And the phase, thus frequency, will be “pulled” in a fairly well known process called “injection locking”, which is in most First World homes as standard (analog colour TV and stereo radio).

It is further guaranteed that there is more than sufficient energy from the surrounding logic circuits for energy to get into the ring oscillator. Be it via direct electrical coupling in the power supply traces, capacitive and inductive coupling between traces, or by radiation from other oscillators, be they mechanical/acoustic or Electro-Magnetic.

So… That WiFi chip on the motherboard can, in practice not just theory, affect the CMOS inverter ring oscillators used in those supposedly “Truly Random” generators, pulling them into synchronisation.

It has been demonstrated that a very expensive, very carefully designed TRNG for high security applications, designed by IBM, can have its output “randomness” pulled from over 2^32 to under 2^7, just by illuminating it with an RF CW signal in the microwave region, sufficiently small in wavelength that it could get through ventilation grills and the joints between metal plates[1].

So we know, without any doubt whatsoever, that without very significant precautions –which the chip makers do not take– all those “On Chip” RNGs using CMOS inverters in ring oscillators are most definitely not “Truly Random”.

If you want to dig in further, there are a couple of books that are reasonably approachable by someone with engineering-degree-level knowledge, on the subject of CMOS logic and its use in ring oscillators. Both are written by Prof Behzad Razavi,

1, Design of Analog CMOS Integrated Circuits

2, Design of CMOS Phase-Locked Loops

Oh and another of his books,

3, RF Microelectronics

Will help bring non-RF engineers up to speed.

But as a last note, if you hear some muppet calling those who distrust those On Chip RNGs “tin foil hatters” or equivalent, you have my permission to send them off to a proctologist to have your boot surgically removed by “three sixty resection” or similar 😉

[1] I’ve talked about this before, if you want to know more about it look up “slot antennas”, but briefly if you have an antenna designed of wires in an insulated environment, you can replace the wires with slots in a conductive environment such as a metal plate.

Clive Robinson March 26, 2022 1:03 PM

@ Denton Scratch,

I didn’t know about the jitter hack. I’m not sure I like it

You would be wise not to, as its entropy content is really just a few bits.

What it does is use,

1, A chaotic process
2, To drive a complex process
3, To smear any real entropy across a lot of bits.
4, Prior to a crypto algorithm

This means that to an observer of “only” the output, the RNG cannot be distinguished from a TRNG for quite a large number of outputs (by which time there is probably enough real entropy in the system).

However, what about an observer who

1, Knows the chaotic process,
2, Knows the complex process,
3, Knows the crypto algorithm
4, Knows the probable start conditions.

The chip designers will know 4 and 1 with 2 and 3 being public knowledge.

As 4 is very likely to have a very very tiny “start space” it means that it may well be possible to carry out a forward or dictionary search on that start space and build a “Rainbow Table” to work back from…
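The search needs nothing fancy when the start space is that small; here is a sketch with a stand-in generator and a hypothetical 2^16 start space (both my assumptions, for illustration only):

```python
import hashlib

def jitter_prng(seed: int, nblocks: int) -> bytes:
    """Stand-in for a poorly seeded boot-time generator: the whole
    output stream is a deterministic function of a small seed."""
    state = hashlib.sha256(seed.to_bytes(4, "big")).digest()
    out = b""
    for _ in range(nblocks):
        state = hashlib.sha256(state).digest()
        out += state
    return out

# Suppose only ~2^16 start conditions are actually reachable.
observed = jitter_prng(seed=40321, nblocks=2)   # what the attacker sees

# Enumerate the whole start space and match against the observed output:
recovered = next(s for s in range(1 << 16)
                 if jitter_prng(s, 2) == observed)
assert recovered == 40321   # seed found; the entire future stream is known
```

With the seed recovered, every subsequent “random” output is predictable, which is the whole point of the dictionary/rainbow-table concern.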

It then depends on how chaotic the hardware the jitter comes from actually is… I’m guessing it’s not at all chaotic after the initial start up, and can be followed using some kind of tree based estimation algorithm, keeping the search well within quite small bounds.

Denton Scratch March 26, 2022 3:02 PM


I’m familiar with the problem of oscillators becoming entrained. I’ve tried to make an RNG using TTL inverters; the output was crap, and I think entrainment was the culprit (but I don’t know; I don’t own any electronic diagnostic gear such as an oscilloscope).

Making an HWRNG from scratch is hard, whatever principle you rely on; the principles seem easy enough, but the practice seems to be tough.

That’s aside from all the confusion introduced by the silly idea that you can measure entropy.

Quantry March 28, 2022 12:17 PM

@ All, thanks for the feedback. About entrainment:

I’m guessing that rejecting output bits based on a comparison of numerous adjacent zeners (or RTDs for that matter) could block the “local perturbation failures and attacks” aka “classical noise” (temperature, power supply harmonics, magnetic, etc) or even the entanglement and entrainment problems?

For example, by demanding that only pairs of opposing change enable a pass for the given sample: An equal number going high as those going low. A perfect chore for gate logic, no?

I realize this would massively hamper bit-rate, but that’s the idea really.

Here’s to the concept of tunneling variation:
On: “Extracting random numbers from quantum tunnelling through a SINGLE [Resonant Tunneling] diode”.

Clive Robinson March 28, 2022 1:44 PM

@ Quantry,

A perfect chore for gate logic, no?

Yes, you can make a John von Neumann “debiaser” with at most three TTL chips. I was doing this back last century and have described it before on this blog and other places. I was using a “roulette wheel” entropy source[2] that made life easier interfacing-wise.

You need a two stage shift register, the Q outputs of which drive the inputs of an XOR gate which you can make in various ways with four NAND gates. You might also need to latch the result for slow read out by a computer via a serial port.

IMPORTANTLY you need to clock two bits in each time… Then after testing discard both. Otherwise it will not debias correctly. How you do this is rather dependent on your entropy source circuit. But you can make a simple state machine with logic gates.

Importantly, these days small microcontrollers are more easily available and way less costly than even individual logic chips. They also enable you to provide a real serial or parallel data signal (RS232/Centronics) to a PC via a direct port or USB port… Spend an extra 50 cents and the microcontroller will have all the USB 2.0 or higher hardware built in, and you can download most of the code you need to drive it from the chip manufacturer’s “Tech Sup” web site.

Yes, the John von Neumann debiaser is inefficient… you get around one debiased bit for every four bits in from the alleged entropy source. You can up the efficiency by pipelining more bits and coming up with a more complex debias algorithm, then send the bits by parallel reading rather than serial. But it only gets you more bits of “maybe” entropy; that is, in reality they are “complex” or “chaotic”, not “True Random”, in origin.

The problem is, whilst the bits are provably debiased[1], that does not mean you are getting real entropy. The circuit will happily give output on a pure sine wave or other cyclic waveform… So the debiased signal is not of necessity actually “entropy”…

[1] You can find the proof of debias in loads of places on the Internet; the easiest one to see intuitively is by the use of a square of the two probabilities.

[2] A roulette wheel or waggon wheel entropy source uses two oscillators, one mostly stable, the other definitely not. You use the stable oscillator to “strobe”, “clock” or “sample” the unstable oscillator, which mathematically is a simple sampling process. If you’ve ever seen an old cowboy movie where the waggon wheels appear to go backwards slowly, what you are seeing is the strobe effect caused by the movie camera’s shutter; likewise if your dad or school “motor shop” showed you how to adjust the timing on an internal combustion engine with a strobe light.
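A toy software model of that sampling process (every number here, cycle count and jitter alike, is an illustrative assumption, and a real source needs real analogue noise, not a PRNG):

```python
import random

def roulette_wheel_bits(n, cycles_per_strobe=1000.0, jitter=0.05):
    """Toy 'roulette wheel' model: a stable slow clock strobes a fast,
    jittery square-wave oscillator and records whether it was caught
    high or low.  The jitter makes the sampled phase wander."""
    bits = []
    phase = 0.0
    for _ in range(n):
        # Fast-oscillator cycles elapsed in one strobe period, with
        # Gaussian frequency jitter standing in for the noise source.
        phase += cycles_per_strobe * (1.0 + random.gauss(0.0, jitter))
        phase %= 1.0                      # keep the fractional cycle
        bits.append(1 if phase < 0.5 else 0)
    return bits

print(roulette_wheel_bits(16))
```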

Whilst designing a stable enough oscillator is easy (you just buy a TTL Xtal Osc package for a dollar, or whatever the rapidly rising price is these days), designing a suitably unstable oscillator is nowhere near as easy… My solution back last century was to use an OpAmp as a VCO by moving its bias point with an amplified semiconductor noise source. It sounds easy, but due to the nature of sampling and injection locking, it can take quite a few goes to get right (those software CAD systems usually let you down, so you have to build a number of actual prototypes and know how to test them properly…).

Denton Scratch March 29, 2022 5:12 AM


you get around one debiased bit for every four bits in from the alleged entropy source

Correct me if I’m wrong, but I think if you apply the VN debiaser to a perfectly random input stream, you get one bit of “debiased” output for every two bits of unbiased input, no? But if the input is heavily biased to zeroes, you’ll need much more than 4 bits of input for each bit of debiased output. That is, the output bitrate of the VN debiaser is variable, and depends on the amount of bias in the input.

Re. “alleged entropy”: I don’t like using the word “entropy” in this kind of discussion; it’s a confusing term. People speak of “distilling entropy” by passing it through a hash or crypto routine. But distillation of, e.g., whisky produces much less liquid than you put in; a hash function doesn’t do that. If the input contains one bit of entropy and the output contains 256 bits, the output is unpredictable; but there’s still only one bit of entropy. So why all the fuss about entropy?

Clive Robinson March 29, 2022 8:07 AM

@ Denton Scratch,

Correct me if I’m wrong, but I think if you apply the VN debiaser to a perfectly random input stream, you get one bit of “debiased” output for every two bits of unbiased input, no?

Err, no. “Perfectly random” is not in any way an implication that it is debiased in “any given period”.

Have a careful think about that… That is, “random” is not equivalent to “debiased”; it just tends towards being true when you start averaging things out for long enough.

To see why, look at the John von Neumann circuit a bit more closely. It has two bits at its input from the entropy source, which in a perfect world would be fully independent (but are not in the real world). The circuit has one output bit (call it Valid) which says “the two input bits are different”; the other two output bits are the input bits, which when indicated as Valid are thus debiased; call them Q0 and Q1. However, they are also the inverse of each other when valid, so you can use either of them as the “Data Out” bit, just as long as you always use the same one.

So you have two input bits for each test, and they form the following set,

{00, 01, 10, 11}

Of which only the subset {01,10} gives valid output bits. So eight input bits give you two valid output bits, on a long enough average.
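A quick simulation of those long-run averages (the parameters and seed are mine; the rates follow from the pair probabilities, since the accept probability for bias p is 2*p*(1-p)):

```python
import random

def vn_output_rate(p_one, n_pairs=200_000, seed=1):
    """Estimate von Neumann output bits per raw input bit when the
    source emits a 1 with probability p_one (simulation, not proof)."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(n_pairs):
        a = rng.random() < p_one
        b = rng.random() < p_one
        if a != b:                       # valid pair -> one output bit
            accepted += 1
    return accepted / (2 * n_pairs)      # per raw input bit

print(round(vn_output_rate(0.5), 2))   # about 0.25: one bit out per four in
print(round(vn_output_rate(0.9), 2))   # about 0.09: heavy bias slows it down
```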

On to,

But if the input is heavily biased to zeroes, you’ll need much more than 4 bits of input for each bit of debiased output.

But remember, “each member” of the test set can be “randomly selected” for any test. So it’s possible to get,

[01,11,10], [11,11,00], [00,11,11],
[10,00,01], [00,00,11], [11,00,00],

Which are clearly biased as triples, but can and will result from random selection at some point and will continue to do so with some small probability.

But you can also see that for those 36 input bits you would only get 4 output bits, or ‘1 for 9’, even though in total you have 18 ones and 18 zeros, so the input over 36 bits is not actually biased…

But onwards, you say,

That is, the output bitrate of the VN debiaser is variable, and depends on the amount of bias in the input.

True and true, but it says nothing about how random it is or is not.

So for further fun… How about inputs of,

[01,01,01,01,01,01] or,

[10,10,10,10,10,10]

Both can be seen to be NOT random but cyclic; yet as far as the circuit is concerned they are “debiased”, thus valid, and you get [000000] and [111111] output respectively, at a rate of two bits in for one valid bit out… But as can be seen over the length considered, most would say “not” random.
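Easy to check with a stripped-down debiaser (first bit of each unequal pair taken as Data Out):

```python
def vn(bits):
    # minimal von Neumann debiaser: keep the first bit of each
    # unequal pair, drop equal pairs entirely
    return [bits[i] for i in range(0, len(bits) - 1, 2)
            if bits[i] != bits[i + 1]]

print(vn([0, 1] * 6))  # -> [0, 0, 0, 0, 0, 0]: all "valid", yet not random
print(vn([1, 0] * 6))  # -> [1, 1, 1, 1, 1, 1]
```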

Which tells you a couple of things.

Firstly, you need to count the number of “raw bits” out of your entropy “source” and the number of bits out of the debias circuit, and keep an eye on their relative values.

Secondly, and importantly, you need to watch those counts over various lengths of time, as well as the raw bit rate…

It’s one of the reasons why I say you MUST have access to the raw physical source output, to see if the source is plausibly working, or in some limited cases “definitely not working”, or probably not working as well as you would like for some reason (i.e. it could be going faulty, or it is subject to some kind of influence).
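A sketch of that first check (the 0.15–0.35 alarm band is my own illustrative guess; near 0.25 is what an unbiased source should give):

```python
def source_health(raw_count, debiased_count, lo=0.15, hi=0.35):
    """Compare bits out of the debiaser against raw bits in.  A healthy
    unbiased source should sit near a 0.25 ratio; drifting outside the
    band hints at bias, a stuck source, or outside influence."""
    ratio = debiased_count / raw_count
    return lo <= ratio <= hi, ratio

print(source_health(40_000, 10_100))  # (True, 0.2525)
print(source_health(40_000, 2_000))   # (False, 0.05) -- source suspect
```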

I hope that helps spur your thinking onwards, to what other tests you might want to consider.

Chris Drake April 15, 2022 6:44 PM

FAKE. This is the BSAFE lie all over again.

You INCREASE security by ADDING to the hash, not REPLACING it. That is, XOR the new results into the old ones, with sufficient care that the new results can’t include negation of the old ones along the way.
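For what it’s worth, the layering idea sketched as code (a sketch only, not an endorsement: XOR combiners need careful analysis of their own, and the domain-separation prefixes here are my invention):

```python
import hashlib

def combined_digest(data: bytes) -> bytes:
    """XOR a BLAKE2s digest with a SHA-256 digest of the same input,
    with distinct prefixes so the two hashes never see identical
    messages (a crude guard against one cancelling the other)."""
    d1 = hashlib.blake2s(b"h1|" + data).digest()   # 32 bytes
    d2 = hashlib.sha256(b"h2|" + data).digest()    # 32 bytes
    return bytes(a ^ b for a, b in zip(d1, d2))

print(combined_digest(b"hello").hex())
```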

Anyone who removes anything (instead of layering) in security is inserting a back door.

Dingbat April 16, 2022 11:38 AM

@Chris Drake
So your contention is that replacing SHA1 or MD5 (known hash collisions) with a different hash without known collisions is reducing security?

Hans October 10, 2022 4:06 AM

To the guys asking “why BLAKE2s instead of BLAKE3?”: BLAKE2s has something like 150 bytes of internal state and BLAKE3 something like 1,500 (these numbers are from memory and not accurate), which might be an important consideration for low-RAM systems.
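BLAKE2s itself is in Python’s standard library if you want to experiment (the input bytes below are arbitrary; the state-size numbers above remain the commenter’s rough recollection):

```python
import hashlib

# BLAKE2s works in 32-bit words with a compact internal state, which
# is part of why it suits small systems; digest_size can be 1..32.
h = hashlib.blake2s(b"some seed material", digest_size=32)
print(h.hexdigest())   # 64 hex characters (a 32-byte digest)
```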
