Catch and Release is in play in this space-time continuum.

]]>It really is philosophical.

Can you trust that what you think is random is really random?

Can you trust that your coin flip is really random?

I submit that you cannot.

And, if you really want to think deeply, ask yourself:

What really is ‘Mass’ and ‘Gravity’?

Why does it appear upon ‘Observation’ and ‘Measurement’ that they are related?

Are you sure they are related?

]]>The RMS value of a waveform is the square root of the mean of the square of the waveform. It’s just a number, not a new waveform.

Have a look at the output of any nonlinear square law circuit such as a bridge rectifier. Nearly everything in life follows a power law one way or another. Engineers spend much of their lives trying to work in the part of the curve that’s near linear.

sin(x)^k for increasing integer k converges to a series of narrow impulses of unit amplitude spaced pi apart. If k is even they are all positive; if k is odd, positive and negative pulses alternate.
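As a quick numerical sketch of that narrowing (NumPy; the grid size and the particular exponents are arbitrary choices of mine, not anything from the discussion above):

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 10001)
fracs = []
for k in (1, 5, 25, 125):
    y = np.sin(x) ** k
    # Fraction of samples with |y| > 0.5 shrinks as the lobes at
    # x = pi/2 and 3*pi/2 narrow toward unit-amplitude impulses.
    fracs.append(np.mean(np.abs(y) > 0.5))
print([round(f, 4) for f in fracs])
```

The shrinking fraction shows the energy concentrating into ever-narrower pulses around x = pi/2 and 3pi/2; for even k every sample is non-negative, as stated.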

Now take the RMS of that again, including the "sampling" issues that create frequency fold-over. Then keep going.

The world is part of a universe that is not only quantized but most definitely not linear.

]]>Oh another fun thought, you are I assume aware of what the Root Mean Square (RMS) is and what it effectively does?

The RMS value of a waveform is the square root of the mean of the square of the waveform. It’s just a number, not a new waveform.
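To make the "just a number" point concrete, here is a minimal sketch (NumPy; the amplitude and frequency are made-up example values). For a sine of amplitude A, the RMS works out to A/&#8730;2:

```python
import numpy as np

t = np.linspace(0, 1, 100_000, endpoint=False)  # 1 s, integer cycles
wave = 3.0 * np.sin(2 * np.pi * 50 * t)         # 50 Hz sine, amplitude 3

# Square, take the mean, then the square root: one scalar, not a waveform.
rms = np.sqrt(np.mean(wave ** 2))
print(round(rms, 4))  # close to 3 / sqrt(2)
```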

So I’ll assume you refer to just the square or higher powers of a waveform.

Thus you end up with an infinite frequency series of small amplitude that successively approaches a "white noise distribution".

sin(x)^k for increasing integer k converges to a series of narrow impulses of unit amplitude spaced pi apart. If k is even they are all positive; if k is odd, positive and negative pulses alternate.

That waveform will have a discrete (line) spectrum. This is definitely NOT white noise, not even in the limit as k->inf.

Now consider a white noise waveform having some continuous amplitude distribution.

Every point will have some probability of exceeding a given threshold T. For any T, those points that do exceed T define a sequence of events that will have a Poisson distribution, and the time between them will have an exponential distribution.
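That threshold-crossing claim is easy to check empirically. A minimal sketch (NumPy; the sample count, seed, and threshold are arbitrary choices of mine): for independent noise samples the waiting times between exceedances are geometric, the discrete analogue of the exponential distribution, with mean 1/p:

```python
import numpy as np

rng = np.random.default_rng(3)
noise = rng.standard_normal(1_000_000)  # white Gaussian noise samples
T = 2.5
hits = np.flatnonzero(noise > T)        # indices of threshold exceedances
gaps = np.diff(hits)                    # waiting times between exceedances
p_hat = hits.size / noise.size          # empirical exceedance probability
# Mean waiting time should be close to 1 / p_hat for independent samples.
print(round(gaps.mean(), 1), round(1 / p_hat, 1))
```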

Taking the square or any higher power doesn’t change anything. The probability that p^k > T is the same as the probability that |p| > T^(1/k) (assuming even k).
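Since p^k > T and |p| > T^(1/k) describe the same event (for even k and T > 0), the two sample proportions agree. A one-liner check (NumPy; seed, exponent, and threshold are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.standard_normal(1_000_000)       # white Gaussian noise samples
k, T = 4, 0.5
lhs = np.mean(p ** k > T)                # P(p^k > T), estimated
rhs = np.mean(np.abs(p) > T ** (1 / k))  # P(|p| > T^(1/k)), estimated
print(lhs, rhs)  # the two estimates match: same event, restated
```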

So what are you trying to show here ?

As I’ve mentioned, the decay closely follows a (1/e)^n curve. As I’ve also noted, a number of people have been trying to get this into politicians’ heads with regard to growth in a pandemic, which is a percentage change per unit time: it has an inverse half-life in growth –time to double– and a half-life in decay, which gives us the R value.
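The percentage-per-unit-time relation to doubling time and half-life can be sketched in a few lines (plain Python; the function names and the 10%/day example figure are mine, purely for illustration):

```python
import math

def doubling_time(pct_per_period):
    """Periods to double, for fixed percentage growth per period."""
    return math.log(2) / math.log(1 + pct_per_period / 100)

def half_life(pct_per_period):
    """Periods to halve, for fixed percentage decay per period."""
    return math.log(2) / -math.log(1 - pct_per_period / 100)

# e.g. 10% growth per period doubles in roughly 7.27 periods,
# while 10% decay per period halves in roughly 6.58 periods.
print(round(doubling_time(10), 2), round(half_life(10), 2))
```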

The original assumption about isotope decay was that, as atoms are very widely spaced apart, small particles would mainly pass through a sample. And, as had been observed with some high-energy particles, some small percentage would collide with the atoms and be deflected or, as observed, bounce back towards the source.

The argument effectively went that if a sample of isotope atoms sat in a stream of such particles whose intensity was time-invariant, the probability of an atom being hit by the particles was small but likewise invariant with time.

But importantly, that time-invariant behaviour of atoms and stream of particles has a time-variant result, because an isotope atom takes itself out of the game by decaying. That is, in each time period you trigger a small percentage of the atoms to decay, leaving fewer atoms for the next time period, which, as the probability has not changed, results in the same percentage –not quantity– change in each successive time period. That gives you an exponential decay curve.
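The constant-percentage-per-period argument can be sketched as a quick simulation (NumPy; the atom count, per-period probability, and seed are made-up numbers of mine). The surviving population tracks the exponential curve n0·(1−p)^t:

```python
import numpy as np

rng = np.random.default_rng(1)
n0, p = 1_000_000, 0.01        # atoms, per-period decay probability
n, counts = n0, [n0]
for _ in range(300):
    n -= rng.binomial(n, p)    # same *percentage* decays each period
    counts.append(n)
counts = np.array(counts)

# Time-invariant probability => exponential (compound-percentage) decay.
expected = n0 * (1 - p) ** np.arange(301)
print(counts[100], int(expected[100]))  # simulated vs analytic, close
```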

So two time-invariant effects combine to give a time-variant result, with the assistance of a little uniform physical randomness in just one of them.

The result of such thinking is very appealing, and still is to some; hence some people still look for an external trigger. They reason that the fact that none has been found so far does not mean it does not exist, so it does not deter them from looking – on the old “you cannot prove a negative” argument.

Oh, one fun implication if it did exist: that stream of particles would need not just a constant rate, it would also need a small random spatial distribution, with the randomness being uniform or ideal, that is, of uniform density by time/frequency. That is what many call “White Noise”, which is found in most other noise sources, which Quantum Mechanics requires, and which both it and Classical physics give us[1]…

So yes, some people will search the physical universe for a classical-physics solution that they believe must be true because so much points to it, whilst most others just sit there and calculate with their quantum-physics mathematical models, irrespective of personal belief.

And there is a funny side to this: Quantum Mechanics effectively gave the universe “free will” but… at the cost of strict determinism in the mathematical models, which require randomness to make it all work.

The downside of this is that the “preordained” argument that allowed man, and therefore a god, to be master of all they surveyed has been replaced, not with a god that plays dice, but with just the dice of randomness themselves… The implication of this, of course, is that evolution is true; thus randomness might be the constant from the impossibly small to the impossibly large.

I suspect Alan Turing had come to this sort of conclusion from various things he said –including on the nature of spots and stripes on creatures– and did, not least of which was arguing that all computers need a true physical random generator.

[1] Oh, another fun thought: you are, I assume, aware of what the Root Mean Square (RMS) is and what it effectively does? Have you thought about the result of successive applications of it to any signal, including a white noise “random” signal? If you take the RMS of a sine wave, not only do you halve its amplitude; importantly, you double the frequency of the sine wave, and the excess energy becomes an infinite series of harmonics of reducing amplitude. Thus you end up with an infinite frequency series of small amplitude that successively approaches a “white noise distribution”. As all waveforms can be shown to be made of sine waves, the same thing happens to white noise, which is what we assume randomness gives us; but as the frequencies both constructively and destructively combine, the distribution becomes uniform. So from order we get the chaos of randomness, whether we want it or not.

]]>I’m looking at the physics as a technologist: “how do I apply rules and patterns from the scientific consensus to a practical application?”

The questions you posed are (a) way over my head, and (b) seem to be more about philosophical foundations.

That being said, my answers are:

No.

No.

Not off the top o’ me old gray head.

I haven’t the foggiest, but I also have no reason to believe that entangled particles at any appreciable distance from each other play any role in spontaneous nuclear decay.

I regret that I’m not scintillating this morning (pun intended).

]]>Here’s some thoughts to ponder:

Can you define ‘Time’ without requiring any ‘measurement’?

Can you define ‘Distance’ without requiring any ‘measurement’?

Can you define either without referencing the other?

Would your answers or questions provide a clue as to what may be happening in the cosmos with regard to ‘Spooky Action at a Distance’?

The typical (root-mean-square) displacement of a particle in Brownian motion is proportional to the square root of time.

𝔼(Δx) / Δt = the **drift velocity** of the Brownian motion.

m *Var*(Δx) / (2Δt) = the average **kinetic energy** of the particles in Brownian motion.
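The square-root-of-time scaling is easy to see in a simulation. A minimal sketch (NumPy; the path count, step count, step size, and seed are arbitrary choices of mine): doubling the elapsed time multiplies the RMS displacement by about &#8730;2.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, dt = 20_000, 400, 0.01
increments = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
x = np.cumsum(increments, axis=1)  # ensemble of zero-drift Brownian paths

rms_half = np.sqrt(np.mean(x[:, n_steps // 2 - 1] ** 2))  # at time t
rms_full = np.sqrt(np.mean(x[:, -1] ** 2))                # at time 2t
print(round(rms_full / rms_half, 3))  # close to sqrt(2)
```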

For now, I’m responding to the second part of your recent comment.

Humbly, the word “explain” has different meanings, and as a parent you know that the iteration of the question “why?” can proceed very far indeed.

In the quantum model of most kinds of decay, each unstable object has a probability of decay (per unit time) which is time-invariant.

The time-exponential decrease in a population of identical unstable isotopes is consistent with time-invariant decay probability. In essence, one implies the other.

Note that this is fully consistent with complete independence of decay events. When reaching its moment of death, a nucleus doesn’t need to “know” how many of its neighbors have yet to decay, or have already gone … nor does an Am 241 nucleus “know” whether it’s part of a purified metal densely packed with such nuclei, or whether its nearest Am 241 neighbor is thousands of meters away [1].

This might not be an *explanation* in the sense of your question, but observed decay behavior is consistent with decay as a perfectly random event having time-invariant probability.

I have a feeling that you had some other idea in mind with the question you posed about independence, but I haven’t grasped what that idea is.

=============================

I should say as a disclaimer, that I only know QM at the “comic book” level. I don’t pretend to understand all that spooky stuff … but I have a little notion of what some of the laws are, and of the experimental evidence which first led to their formulation.

Probably most of us know that our common-sense reasoning about the world of sensible objects is often strongly at odds with quantum descriptions of reality.

Consider the case of an incorrectly set mouse-trap, or a cocked firearm with a “hair trigger”. The spring energy sometimes releases without any apparent intervention (i.e., spontaneously).

These spring mechanisms might be triggered by a small mechanical vibration from their environment (a “predictable” trigger, because vibration can be measured), or perhaps even by thermal motion of their constituent molecules (an “unpredictable” trigger, because it’s not practical to observe those molecular motions … although in principle, it could be modeled as a deterministic process).

If it’s vibration, then we can say that the release was triggered by some discrete pulse of energy from outside the mechanism.

If the release is purely thermal, then we can say that although its timing is (for practical purposes) unpredictable, the ambient temperature affected the probability per unit time of spontaneous release.

=============================

The alpha decay case is even spookier than that: the vibration of molecules in metal parts requires energy from somewhere; without this, their thermal energy would gradually dissipate via radiation.

But nuclei can persist (science tells us) for billions of years, with *extremely* rapid incessant motion of their constituent particles, with no external energy source.

Nuclear vibration is lossless. Energy ceaselessly shifts about, but is not dissipated.

=============================

Finding an *external* trigger for what we understand as spontaneous alpha decay would — if I understand correctly — require a radical re-write of the “quantum book.” I wasn’t joking, about such a discovery likely being Nobel-worthy.

In some cases, isotopic half-life can be environmentally modified: the probability density can be adjusted. But that’s not the same as a triggering event.

Hypotheses that something is actually triggering nuclear decay can be tested by experiment, depending on the hypothesized trigger.

Quantum theory doesn’t require an external trigger for Am 241 fission; experiment has not revealed one.

=============================

[1] As the Steely Dan song goes, “Up on the hill, people never stare. They just don’t care.” Unstable nuclei don’t care what other nuclei are doing, when spontaneously decaying.

]]>With regards the overly dramatically named[2] “Elitzur–Vaidman bomb tester”, it actually has some interesting properties over and above its elegance. Think of it as an “observation sensor”, that is, it detects the collapse of the superposition at the point of measurement, which has implications for examining the Turing Paradox as well as real-world sensors.

This is how a common circuit breaker on an electric motor allows an inrush overcurrent of 15–20× the running current to start the motor without tripping the breaker.

The breaker or even a fuse can thus be shown to work without tripping it.

The quantum states where the breaker has tripped are suppressed by the “flyback voltage” that appears across the coils of the motor.

The breaker only collapses into a tripped state, stopping the current, when the “probability amplitude” of the tripped state of the breaker is sufficient to overcome the electrostatic force holding the electrical contacts together.

]]>