New Findings About Prime Number Distribution Almost Certainly Irrelevant to Cryptography

Lots of people are e-mailing me about this new result on the distribution of prime numbers. While interesting, it has nothing to do with cryptography. Cryptographers aren’t interested in how to find prime numbers, or even in the distribution of prime numbers. Public-key cryptography algorithms like RSA get their security from the difficulty of factoring large composite numbers that are the product of two prime numbers. That’s completely different.

Posted on September 21, 2018 at 2:14 PM • 34 Comments


New Findings? September 21, 2018 3:50 PM

But if it’s faster to find all the prime numbers, then you can build a giant database of them all, indexed by their respective semiprimes, and surely that can be used to quickly factor cryptographic keys, right? Of course the database would need to be huge, think Utah Data Center huge… oh hey… does that already exist? hmmm…..

Wael September 21, 2018 3:53 PM

Public-key cryptography algorithms like RSA get their security from the difficulty of factoring large composite numbers that are the product of two prime numbers. That’s completely different.

Two or more, as in multi-prime RSA. Or perhaps in the future: 3-D Elliptic Surface Cryptography[1], as opposed to the current patent-restricted “Epileptic Curse Crapography.”

[1] Or higher dimensions, given we find suitable mathematical properties that are workable in the crypto domain. Could be PQC, too — if we’re “lucky” 🙂

PS: Couldn’t resist breaking my “two week” silence.

wumpus September 21, 2018 4:29 PM

@Me: Yup.

Bruce gets to stop people sending more of the same link, he gets to post a cool link, and he gets to point out that “prime numbers are not cryptography”. What’s not to love?


If you want ECC without patents, consider ancient cryptopp sources. Release 2.0 (1996) is provably patent free (and includes ECC). 2.3 (with ECC improvements in 2.1 or so) should be the most current “trivially provably patent-free” code, with 3.0 becoming patent free in the new year.

-This assumes a maximum 20-year patent term. Check local laws before attempting this yourself.

I remember using it in the early 2000s, and the only patent left was something that allowed reducing key length (the amount you needed to transmit) by half. But using such old code is obviously useful in limiting attacks from bogus patents that attempted to patent existing published code.

John Cochran September 21, 2018 4:40 PM

New Findings… That would have to be an awfully large database.

Using the prime number theorem, the number of primes not greater than N is approximately N/ln(N). Let’s assume that we’re looking for primes of between 500 and 700 bits with which to construct our RSA key.
2^500 = 3.2734e150, ln(2^500) = 346.574, 3.2734e150 / 346.574 = 9.445e147
2^700 = 5.2601e210, ln(2^700) = 485.203, 5.2601e210 / 485.203 = 1.084e208

And subtracting the lesser from the greater gives us about 1.084e208 possible primes in the database. To put that number into perspective, scientists estimate that the known observable universe contains approximately 10^86 elementary particles, which is quite a bit smaller than the estimated number of primes between 500 and 700 bits long.

Somehow I don’t think that mankind is gonna be creating such a database in the foreseeable future. It would be a lot faster and cheaper to build a quantum computer with sufficient bits and then use Shor’s algorithm.
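For anyone who wants to check the arithmetic above, the prime number theorem estimate takes only a few lines. A minimal Python sketch (the function name is my own, purely illustrative):

```python
import math

def approx_prime_count(bits):
    """Prime number theorem estimate: pi(2^bits) ~ 2^bits / ln(2^bits)."""
    n = 2.0 ** bits                      # a float is fine up to ~2^1023
    return n / (bits * math.log(2))      # ln(2^bits) = bits * ln(2)

# Primes between 500 and 700 bits: the size of the hypothetical database
table_size = approx_prime_count(700) - approx_prime_count(500)
print(f"~{table_size:.3e} candidate primes")
```

The 500-bit term is about sixty orders of magnitude smaller than the 700-bit term, so the difference is dominated entirely by the upper bound.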

Alyer Babtu September 21, 2018 5:05 PM

Although the gist of the new observation has been known to number theorists for decades, even a slight variation in point of view, expression, or formulation can shift the focus of thinking so that a genuine advance occurs, perhaps making it understandable how factor properties change as a number is incremented, and perhaps providing insight into factorization.

Jeremy September 21, 2018 5:09 PM

I would have guessed cryptographers care a LITTLE bit about how to find primes, for purposes of key generation.

Anura September 21, 2018 5:50 PM


The people who implement the RSA algorithm care about how to efficiently find large primes (for which these findings about the distribution would still probably not be helpful), but the people who study how to break RSA are trying to solve an equation, not find primes.

Clive Robinson September 21, 2018 5:56 PM

The article leaves me underwhelmed to put it bluntly.

I first started to think of primes as holes in wave patterns when I was an early teen and quickly realised that twin primes nearly all fell in highly predictable places. The only thing of interest was when one of the proto twin-prime pair got knocked out by another prime; again, there is a regular wave-like pattern to this.

For those that don’t see this, write down all the numbers between 1 and 210. Then draw a square wave for each prime; you will see twin primes centered at 2×3, 2×3×5, and 2×3×5×7. But what you will also see is that the prime pattern reverses around the multiples of six, thirty, and, if you extend further, 210, etc. Knowing this pattern goes on indefinitely, the interesting thing is to see how the predicted proto-primes get knocked out, thus thinning out the candidates in the reversals.
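The write-down-the-numbers exercise can be sketched in a few lines of Python (my own illustration, not Clive’s original construction): after sieving out multiples of 2 and 3, every surviving candidate sits at 6k ± 1, so every twin-prime candidate pair straddles a multiple of six, and the pairs that fail are the ones where a larger prime’s “wave” knocks one member out.

```python
def is_prime(n):
    """Trial division; fine for this small range."""
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Every twin-prime candidate pair straddles a multiple of 6: (6k - 1, 6k + 1).
for k in range(1, 36):                       # centres 6, 12, ..., 210
    lo, hi = 6 * k - 1, 6 * k + 1
    if is_prime(lo) and is_prime(hi):
        print(f"twin pair around {6 * k}: ({lo}, {hi})")
    # otherwise one member was knocked out by a larger prime,
    # e.g. 25 = 5*5 at k = 4, or 209 = 11*19 at k = 35
```

Note that the centre 210 itself fails: 209 = 11 × 19, so (209, 211) is one of the knocked-out proto pairs.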

If you were to draw the waves not as square waves but as sine waves and add their values, you get the patterns the crystallographer is referring to…

It’s something I’ve assumed for over forty years that everyone knows.

Jesse Thompson September 21, 2018 6:42 PM

@John Cochran

To save you some time, when dismissing numbers slightly larger than a googol, it’s faster just to cite that, per the holographic principle, the maximum amount of raw information, in any form, that can even physically exist in principle in the known universe is just below 10¹¹⁰ bits: a 46.6 Gly radius of the observable universe gives a spherical surface area of about 2.73×10²² ly², or roughly 10¹¹⁰ Planck areas.

New Findings? September 21, 2018 8:43 PM

Big numbers greater than the atoms in the universe tend to be abused and used up quickly, because people mistakenly consider them virtually infinite… For example, remember IPv6? Have you tried to implement a local network with it? They keep halving it so many times that the space you get is downright puny…

So rather than dismissing the large number, we could keep going: how can we eliminate many of them as impossible to be chosen by certain implementations? For example, are there any optimizations done in various implementations to find primes faster? or… slower? Would that limit in any way which ones they might stumble upon? If you ever end up doing this multiple times for various reasons, it starts to not be so big after all.

echo September 22, 2018 8:41 AM

I read about this last night and was very excited. It’s a big downer reading people pour cold water on it. Regarding maths, I know enough to know this stuff goes over the top of my head. I’m also dumb enough not to be aware of what is and isn’t possible, given current understanding. The thing is, knowledge changes, and you never know how changed knowledge changes things. What is junk today is gold tomorrow. I also don’t have a professional reputation in the fields of mathematics or cryptology to maintain, so I can believe what I want within the boundaries of obvious stupidity. This can be very handy at times.

echo September 22, 2018 9:16 AM

@Jesse Thompson

I watched a YouTube video on this only last week. It was very interesting. Isn’t there a caveat, though? The theory is sound up to the limit of our knowledge. While fiction, in Iain M. Banks’s novels the AI Minds had a physical interface but most of their consciousness was in another dimension. There are new theories emerging about quantum mechanics which claim space and time don’t actually exist, nor do other dimensions. It’s really a question of how we perceive things but, again, there is a limit to our knowledge. As and when I get around to this I may post the link to the new quantum calculation scheme in the new squid topic, as it may be interesting from a theoretical point of view.

MikeA September 23, 2018 11:40 AM

@PowersOfTen — perhaps, given all the current interest in political corruption and Slavic affairs, @Jesse Thompson was sending a dog-whistle to readers of Gogol.

Alyer Babtu September 23, 2018 3:13 PM

A random thought

Matiyasevich showed that the primes can be generated as the positive output values of a certain polynomial in several variables with integer coefficients. (It turns out there are many such representations.) Are the methods behind this kind of work applicable to the factorization problem ?

Denton Scratch September 24, 2018 1:00 AM

Eeek! I was put off the article by this sentence:

“They are “effectively” limit periodic — a new kind of order — because the synchronicities in their spacings only hold statistically across the whole system.”

“Synchronicity” is a term coined by the noted physicist/mathematician Carl Jung, in a book of the same name. Oh, wait – he was a psychiatrist, not a mathematician, not a physicist. Much of his stuff was bonkers religious speculation. “Synchronicity” is a woo term, and doesn’t belong in any kind of scientific discourse.

I stopped reading at this point.

MarkH September 24, 2018 1:23 AM

It seems to me that it’s been rare for mathematical advances not directly connected with cryptography to show any practical effect (even potentially) on real-world public-key cryptosystems.

I have two exceptions in mind:

  1. Developments of number field sieve algorithms, mainly by a very small group of brilliant mathematicians; and
  2. Shor’s algorithm, the impact of which depends on hardware that has not been shown to exist, and is possibly unachievable.

What have I missed? Apart from specific attacks by cryptographers on algorithms and realizations, what other mathematical innovations have mattered for PK cryptography?


Typical key generation systems are designed to choose primes “at random.” Occasionally, this is done with great care.

There are lots of ways to get pseudo-random generation wrong, and appalling examples of this have shown up in widely used software.

But suppose that pseudo-random generation is not utterly broken, but merely weak enough that you can exclude 99.999999999999999999999999999999999999999999999999% of primes of the relevant magnitude, and you know which primes are likely or certain to be skipped.

You would still need something like 10^150 bits of storage for the proposed table. All of Earth’s data storage capacity is probably less than 10^30 bits.

See a problem there?
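MarkH’s arithmetic is easy to reproduce. A quick sketch under his stated assumptions (PNT estimate for primes up to 700 bits, a "fifty nines" exclusion, 10^30 bits of worldwide storage; all the constants come from his comment, not from measurement):

```python
import math

primes_in_range = 2.0 ** 700 / (700 * math.log(2))   # PNT estimate, ~1e208
surviving_fraction = 1e-50                           # after excluding "50 nines"
bits_per_entry = 700                                 # store one prime per entry
table_bits = primes_in_range * surviving_fraction * bits_per_entry
earth_storage_bits = 1e30                            # generous planetary total

shortfall = math.log10(table_bits / earth_storage_bits)
print(f"table: ~1e{math.log10(table_bits):.0f} bits, "
      f"shortfall: factor of ~1e{shortfall:.0f}")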

Weather September 24, 2018 1:37 AM

What it automatic selection of a prime value,
You use some?? Exponent curve, but the alogo allows all answer, by filter to select value,

Ask a question about the theory, I can’t cross link

Clive Robinson September 24, 2018 6:53 AM

@ MarkH, Findings,

But suppose that pseudo-random generation is not utterly broken, but merely weak enough that you can exclude 99.99…% of primes of the relevant magnitude, and you know which primes are likely or certain to be skipped.

Easy peasy, I’ve already built one for fun; it’s not that difficult to do… Oh, and the NSA is assumed to have used this technique in the Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG) that NIST later withdrew.

You can read of a better way in a quite recent article[1]; I used a modified version of one given in the Cryptovirology book I mention from time to time.

You need to either read their book or look up the work of Adam Young and his project supervisor Moti Yung on “key stealing,” which they called “kleptography.” It’s a subfield of the more general cryptovirology they developed back in the early to mid 1990s, about a quarter of a century ago, which for some reason appears to be the approximate interval between something entirely new and significant being published and it being put into general product applications. However, the idea came about due to two other ideas.

Firstly, the work of Gus Simmons whilst at Sandia National Labs, involved with the SALT initiative and similar, which gave us “subliminal channels”[2]; these are a subset of covert channels, which in turn are a subset of side channels.

Side channels come about through redundancy in a system, which is also an inefficiency by definition. This, via thermal loss from systems to their environment, gives us the thermodynamic notion of “entropy,” which Claude Shannon pinched as an idea for what is now information theory.

The second idea apparently came from a single sentence in an ’80s paper by Yvo Desmedt, which I’ve read a couple of times, and in no way did it jump out at me.

When you make an RSA key you use two primes of approximately the same size. There are a lot of primes to choose from, thus you have the 0.5(N^2 - N) pairings to play with, which gives you one heck of a lot of redundancy…

So what you do is have a known starting point for a prime search, made from a large “hidden secret” plus a small random number. This search gives you your first prime.

You then encrypt this random number and put it in the top bits of the public key. This means that you, having the required key, can recover the start point and search for that first prime. Having found it, simple division gives you the second prime and it’s “game over.”

The problem is how to embed that encrypted number… Well, you fully randomly find a new start point for the second prime; for each prime you find, you multiply it by the first and check whether the top bits of the product match your encrypted number; if not, find the next prime, and so on. It’s that simple, but it is slow… Which is why, over the years, better systems have been proposed; one is given in the InfoSecurity Magazine article[1].
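The scheme described above can be demonstrated end-to-end with toy parameters. This is my own miniature illustration of the idea (20-bit primes, a plain XOR standing in for real encryption, SHA-256 as the start-point derivation), not Young and Yung’s actual SETUP construction, which encrypts properly under the attacker’s public key:

```python
import hashlib
import random

BITS, TAG_BITS = 20, 8
ATTACKER_SECRET = 0xC0FFEE      # the "hidden secret" baked into the generator
MASK = ATTACKER_SECRET & ((1 << TAG_BITS) - 1)

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def next_prime(n):
    while not is_prime(n):
        n += 1
    return n

def start_point(r):
    """Deterministic prime-search start derived from the secret plus small r."""
    h = int.from_bytes(
        hashlib.sha256(f"{ATTACKER_SECRET}:{r}".encode()).digest(), "big")
    return (h % (1 << (BITS - 1))) | (1 << (BITS - 1))

def top_tag(n):
    """The TAG_BITS just below the leading 1 bit of n."""
    return (n >> (n.bit_length() - 1 - TAG_BITS)) & ((1 << TAG_BITS) - 1)

def backdoored_keygen():
    r = random.getrandbits(TAG_BITS)
    p = next_prime(start_point(r))     # first prime: derived, not random
    tag = r ^ MASK                     # "encrypted" r (toy stand-in)
    while True:                        # retry q until the modulus leaks the tag
        q = next_prime(random.getrandbits(BITS) | (1 << (BITS - 1)))
        n = p * q
        if q != p and top_tag(n) == tag:
            return n, p, q

def attacker_factor(n):
    r = top_tag(n) ^ MASK              # decrypt r from the public modulus
    p = next_prime(start_point(r))     # replay the prime search
    return p, n // p                   # simple division gives the second prime
```

Calling `backdoored_keygen()` and then `attacker_factor(n)` recovers both primes from the public modulus alone. The expected number of retries is 2^TAG_BITS, which is exactly the slowness mentioned above; the later improvements avoid that retry loop.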


[2] “The Prisoners’ Problem and the Subliminal Channel,” Crypto ’83.

Clive Robinson September 24, 2018 7:12 AM

@ Bruce,

Cryptographers aren’t interested in how to find prime numbers, or even in the distribution of prime numbers.

Be careful what you wish for, even negatively 😉

Hopefully he is right; he’s well past retirement, and there is a saying that mathematicians do all their best work in their twenties or at most their thirties. If he is right, then it will be proof that “old dogs can learn new tricks,” which at least would make this “clapped-out old woofer” feel better 😉

JG4 September 24, 2018 7:38 AM

@Clive – Can’t recall if I posted this last year, but a key player in the lithium-ion space published what is believed to be an important innovation last year. I sent him a note congratulating him on the work and asking for his tips on health and longevity. He said, “Pick good genes.” Apparently close behind good genes is a good sense of humor.

The new battery technology uses a form of glass, doped with reactive “alkali” metals like lithium or sodium, as the battery’s electrolyte (the medium between cathode and electrode that ions travel across when the battery charges and discharges). As outlined in a research paper and recent patent filing (of which Goodenough, 94, says more are forthcoming), the lithium- or sodium-doped glass electrolyte offers a new medium for novel battery chemistry and physics.

They find, for instance, that the lithium- or sodium-glass battery has three times the energy storage capacity of a comparable lithium-ion battery. But its electrolyte is neither flammable nor volatile, and it doesn’t appear to build up the spiky “dendrites” that have plagued lithium-ions as they charge and discharge repeatedly and can ultimately short out, causing battery fires.

I had a comment on the observatory situation blocked toward the end of last week, but couldn’t figure out why. I don’t believe that the machine algorithm could discern that I had strayed into forbidden conspiracy territory. I’m pretty sure that Bruce said a while back that speculation about the Benghazi backstory isn’t welcome. I’m having some difficulty with knowing where that line is between actual conspiracies, of which there are many, and the unwelcome speculation.

echo September 24, 2018 11:12 AM


Azidoazide azide contains a lot of energy! It’s a shame it cannot be harnessed for batteries. Cars and mobile phones are a long way down any use-case scenario due to sensitive handling requirements. All you have to do is look at it wrong and it goes bang.

MarkH September 24, 2018 12:17 PM


If I understood the gist of your earlier comment, it’s about intentional weakening of supposedly “random” generation. I think it’s widely understood that this can be done in ways that make a break computationally feasible.

My argument is about typical crypto implementations, which I presume to be intended as strong but which are likely to suffer from typical weaknesses in well-meant (but not catastrophically flawed) PRNGs, which can result in dramatic reductions in the search space … while still leaving it vastly beyond computational feasibility.

Having a layman’s fuzzy notion of the Riemann Hypothesis (more accurately conjecture, but everybody calls it RH), I’m grateful for the link you provided, and will follow with interest assessments of the claimed proof.

You may be aware that another elderly mathematician, Louis de Branges, has been claiming proof of RH since 2004, and more recently proof of GRH.

de Branges perhaps has more “street cred” than the new claimant, having achieved a really important proof while in his 50s, and having focused much of his career on topics closely related to RH. However, his claims of proof have yet to convince his colleagues, and the analysis of a very long “proof” based on highly specialized work done by de Branges over the years is a very costly undertaking.

If Michael Atiyah’s claimed proof is as simple as he says, mathematicians will probably be able to find the first error in a fairly short time 😉

If there is no error, it will probably be the most important proof in the last 25 years.

Though a proof would be an earthquake in the world of mathematics (it’s impressively common for papers to say “if RH is true then we can prove the following result…”), as far as I am aware its significance for cryptography would probably be modest.

There might well be some algorithm speed-ups enabled by such a proof, but I guess not enough to “alter the landscape.”

echo September 24, 2018 2:02 PM


de Branges perhaps has more “street cred” than the new claimant, having achieved a really important proof while in his 50s, and having focused much of his career on topics closely related to RH. However, his claims of proof have yet to convince his colleagues, and the analysis of a very long “proof” based on highly specialized work done by de Branges over the years is a very costly undertaking.

I noticed this when I read about this news a couple of days ago. While I appreciate these things take effort, I do wonder if ageism plays a role in their laziness. I kind of experience something similar with sexism. It’s really annoying having to pull a short skirt on to get men to do anything, but then men’s minds are never properly on the task at hand when this happens. Just because people are mathematicians (or any other profession for that matter) doesn’t mean they are completely professional when need be. Far from it.

MarkH September 24, 2018 6:26 PM


I lack knowledge to assess the extent of age discrimination in mathematics.

However, when you wrote “laziness,” perhaps you weren’t aware that the work would need to be done by a mathematician already expert in some of the special topics (i.e., from a worldwide population of perhaps only a few), and consume full professional attention for possibly a year, or even several years.

It’s understandable before committing to such a project, to want to have some confidence that it will be fruitful.

My guess is that for de Branges, his “flying solo” may be a bigger obstacle than his age, to evaluation of his claimed proof. He doesn’t have collaborators, so nobody is intimate with the foundations on which his argument rests.

echo September 25, 2018 7:04 AM


I have a fair idea of the kind of expertise and effort required. Discrimination cases may be similar too. I know enough about this field that I could advise him, although I have my limits, which require other professional expertise to support. Most of the broad-brush issues are already known, but the number of experts who are able to deal with the sometimes very involved nuances and working out is similarly restricted. This is very infuriating.

If I said there were probably less than a dozen people in the world who could follow the complete discussion this would be fair. I have met or teleconferenced with a few who have a public profile but even they find it hard going.

The media aren’t hostile as such but want their chicken shrink-wrapped and oven-ready. I was offered a documentary but had to turn this down as they weren’t interested in the broadsheet issues. I would have been relegated to being eye candy and completely blown my right to a private life. This is fine if you want to gawp at my legs but not much more.

@Alyer Babtu

Oh please don’t! I get buried in detail enough as it is.

Marek September 25, 2018 12:30 PM

The primes generated by various cryptographic libraries do have some kind of detectable pattern, mostly in the most significant byte but also elsewhere. The bias results from the way the primes are searched for and prepared by a library. As a result, a given public key can be attributed to the specific library responsible for its generation.

(got best paper award at USENIX Security 2016)
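The effect Marek describes can be illustrated with a toy version of one real source of bias: some libraries force the top two bits of each prime so the modulus always reaches its full length, which skews the modulus’ most significant bits. A small sketch with 16-bit primes (parameter choices are mine, purely illustrative):

```python
import random

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def gen_prime(bits, force_top_two):
    """Two strategies seen in the wild: force only the top bit of each random
    candidate, or force the top two bits; resample until the result is prime."""
    while True:
        c = random.getrandbits(bits) | 1 | (1 << (bits - 1))
        if force_top_two:
            c |= 1 << (bits - 2)      # guarantees p*q has full 2*bits length
        if is_prime(c):
            return c

def modulus_top_nibbles(force_top_two, trials=200, bits=16):
    """Collect the top 4 bits of many moduli to expose the library's bias."""
    seen = set()
    for _ in range(trials):
        n = gen_prime(bits, force_top_two) * gen_prime(bits, force_top_two)
        seen.add(n >> (n.bit_length() - 4))
    return seen
```

With `force_top_two=True`, every prime is at least 0.75 · 2^bits, so the modulus is at least 0.5625 · 2^(2·bits) and its top nibble is always 9 or more; an observer who sees many keys with that skew can make an educated guess about which library produced them, which is the fingerprinting result Marek cites.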

MarkH September 26, 2018 1:03 AM


Thanks for the link! It’s cool research, of which I wasn’t aware.

The bias introduced by strategies for prime selection is a good example of what I described above: it can narrow the “search space,” perhaps by a few orders of magnitude. But even with such a reduction, it remains impossibly large for simple approaches like trial division (even if a table of candidate primes were possible, the best it could do would be to make trial division less costly).

The security implication of this research is that an attacker may be able to make an educated guess as to which software package was used for key generation.

It doesn’t enable an attacker to decrypt ciphertexts, or to infer the secret key.

