N =~ 1.24 x 10^239

p =~ 5.09 x 10^119

q =~ 2.45 x 10^119

==================

Now I am interested in factoring

ϕ(N) = (p-1)(q-1)

Since (p-1)/4 and (q-1)/2 are *relatively* prime odd integers (better double-check!), we should have

ϕ(ϕ(N)) = ϕ((p-1)/4) ϕ((q-1)/2) ϕ(8).
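Since ϕ is multiplicative on coprime arguments, the identity is easy to sanity-check with small stand-in primes satisfying the same congruence conditions (p ≡ 5 mod 8, so (p-1)/4 is odd; q ≡ 3 mod 4, so (q-1)/2 is odd). A minimal Python sketch, with a trial-division totient chosen for brevity:

```python
from math import gcd

def phi(n):
    # Euler's totient by trial factorization -- fine for small test values.
    result, d = n, 2
    while d * d <= n:
        if n % d == 0:
            while n % d == 0:
                n //= d
            result -= result // d
        d += 1
    if n > 1:
        result -= result // n
    return result

# Small stand-ins for the RSA primes: (p-1)/4 and (q-1)/2 odd and coprime.
p, q = 13, 11          # (13-1)/4 = 3, (11-1)/2 = 5, gcd(3, 5) = 1
assert (p - 1) // 4 % 2 == 1 and (q - 1) // 2 % 2 == 1
assert gcd((p - 1) // 4, (q - 1) // 2) == 1

tot = (p - 1) * (q - 1)                                  # phi(N) = 120
lhs = phi(tot)                                           # phi(phi(N))
rhs = phi((p - 1) // 4) * phi((q - 1) // 2) * phi(8)
print(lhs, rhs)                                          # both are 32
```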

This is then of interest because it is the size of the group of possible cryptographic keys under the RSA scheme. What is a minimal generating set for the multiplicative group modulo ϕ(N)?

If ϕ(N) had been easy to factor, then it seems the problem would have been easy to attack by discrete-logarithm methods. Is there a good explanation or comparison of the current best-known strategies out there?

==================

If you haven’t looked at it, the Wikipedia entry on formulas for primes

https://en.m.wikipedia.org/wiki/Formula_for_primes

has a lot of interesting information. The Wilson’s-Theorem-based prime formula bears some resemblance to @SpaceLifeForm’s proposal above:

https://www.schneier.com/blog/archives/2019/12/rsa-240_factore.html#c6802520

==================

Well, there’s a long history of seeking out formulas for primes. Both Fermat and Mersenne numbers were suspected (or hoped) to be always prime. In both cases the first few in the sequence really are prime, and after that the bigger numbers were very difficult to test in the days before computing machinery.

The proof attributed to (or at least written up by) Euclid of an infinitude of primes is constructive (it shows how to make a new prime bigger than those you already know), and is in fact a formula for primes which always works.

Making large-integer experiments using floating point was always going to be very difficult … but 50 years ago, doing integer computations for very large numbers took a lot of programming effort, and would quickly run into resource limits.

It’s super convenient to use Python for numerical tests. My program to check the formula needed less than 45 Python statements.

I don’t have any direct experience with the GNU Multiple Precision (GMP) library. Roughly 15 or 20 years ago, I rolled my own big-integer library. Sometime later, I saw that Python modular exponentiation was very dramatically faster than my own. In those days at least, Python had its own code for big-integer math, and I learned a lot from studying it.
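The speed likely comes from Python’s built-in three-argument pow(), which does modular exponentiation by repeated squaring on arbitrary-precision integers, far faster than computing the full power first. A tiny illustration (the test base a is an arbitrary example value):

```python
# Fermat test as a demonstration: for prime p and p not dividing a,
# a^(p-1) is congruent to 1 (mod p).
p = 2**521 - 1                          # a known Mersenne prime
a = 1234567891011121314151617181920     # arbitrary base, 0 < a < p

# Three-argument pow() reduces mod p at every squaring step,
# so intermediate values never grow beyond p.
print(pow(a, p - 1, p))                 # 1
```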

==================

Well, there you go. My quick-and-dirty tests with old software were flawed.

BASIC on a PDP-11. Floating-point fail.

Modern software obviously works better.

But, I do not trust GMP. Have seen issues.

==================

The relation holds true for the first 5 primes; among the next 100 primes, the expression is composite for all but one: 52! + 53 seems to be prime.
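For anyone who wants to re-run the check, here is a sketch using a standard Miller–Rabin probable-prime test (probabilistic, so “prime” below really means “probable prime”):

```python
import math
import random

def is_probable_prime(n, rounds=20):
    # Miller-Rabin probabilistic primality test.
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a is a witness that n is composite
    return True

# The first five primes P do give a prime value of (P-1)! + P:
for P in (2, 3, 5, 7, 11):
    assert is_probable_prime(math.factorial(P - 1) + P)

# ... and 52! + 53 also tests as a probable prime, matching the observation.
print(is_probable_prime(math.factorial(52) + 53))
```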

It would be interesting to explore whether it can be proven that the number of primes of this form is finite, or infinite.

==================

Besides my thought of infinite double twin primes, here’s a really old one:

(old, as in 5 decades ago)

If P is prime, then (P-1)! + P is prime.

I can not even recall how I came up with that.

But, I never found it to be false.

==================

Glad you are back, Clive. And thanks for expounding.

“Steering clear of adjacency to them shouldn’t be a problem for crypto, because their density is extremely low.”

Bad assumption.

Re-parse what Clive wrote.

“Obviously you lose a few primes along the way, but when you get into really big primorials of the first fifty or a hundred primes (which you can store in a file) you can then build a “location” file of the locations around a reflection and use this to build a fairly fast sieve around primorials and their sub-reflections and sub-sub-reflections etc.”

It’s not simple. This is why good semiprimes that are large, and are also difficult to factor, require that the two factors, P and Q, are not only not close to each other, but also fall into different primorial classes.

It’s not that the twin primes thin out, it’s how you can use a primorial to look at semiprimes not (relatively speaking) too far away on the number line.

Relatively speaking. Obviously, the work involved is large with large numbers, which requires extremely large primorials.

You want P and Q not to be too close to the square root of N (the semiprime), but you also do not want P to be small relative to Q.

P and Q should be within 2 to 4 magnitudes of each other. Not closer, not further.

==================

@Clive:

How to think about finding and testing primes depends in part on how big they are, and whether the purpose is mathematical recreation or for cryptography.

A table of every millionth prime (what Weather was proposing, if I understood) is impossible for crypto, as Clive implied.

Suppose I want a pair of secret 1024-bit primes to make a 2048-bit RSA modulus. If my math isn’t too rusty, there are about 10^305 primes that require exactly 1024 bits to write, so a list of one millionth of these would need about 10^299 entries.
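That arithmetic can be sketched with the prime number theorem, π(x) ≈ x / ln x, working in log10 so the quantities stay representable as floats:

```python
import math

# Prime number theorem estimate: pi(x) ~ x / ln(x).
# Return log10 of that estimate for x = 2^x_bits.
def log10_pi(x_bits):
    ln_x = x_bits * math.log(2)
    return x_bits * math.log10(2) - math.log10(ln_x)

log10_hi = log10_pi(1024)             # log10 of pi(2^1024), about 305.4
log10_lo = log10_pi(1023)             # log10 of pi(2^1023), about 305.1
count = 10**log10_hi - 10**log10_lo   # primes needing exactly 1024 bits
print(f"{count:.1e}")                 # about 1.3e305, i.e. roughly 10^305
```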

I’m indebted to Clive for his mention of primorials. Steering clear of adjacency to them shouldn’t be a problem for crypto, because their density is extremely low.

If I did my arithmetic correctly, there are no primorials which need exactly 512, 768, 1024, 1536 or 2048 bits to represent, so one needn’t lose sleep about bumping into them.
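One way to double-check that arithmetic is to generate the primorials directly and inspect their bit lengths; a sketch (the naive trial-division prime generator is chosen for brevity, not speed):

```python
def primes():
    # Simple unbounded trial-division prime generator -- fine at this scale.
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

# Bit sizes commonly used for RSA moduli and factors.
targets = {512, 768, 1024, 1536, 2048}
hits = set()
primorial = 1
for p in primes():
    primorial *= p
    bits = primorial.bit_length()
    if bits > 2048:
        break
    if bits in targets:
        hits.add(bits)

# Prints the set of target sizes some primorial lands on exactly, if any.
print(hits or "no primorial has exactly one of the target bit lengths")
```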

And even if you’re using some custom non-standard key size, the probability of randomly landing close to a primorial (for numbers of PKC magnitudes) is smaller than the probability that you will buy single tickets for 30 multi-million-dollar lotteries and win every one of them.

Clive, I hope you’ll forgive me for positioning you “under the weather” above … it’s my silly way of acknowledging your ordeal, and I think I can presume to speak for your loyal readership in saying:

a) we await news of your improving condition,

and

b) we salute you for continuing to post here during the thick of it!

==================

“Would it make sense to say: every one millionth prime, put one in a file?”

What is infinity over a million?

It would be an infinite size file.

But you don’t need to do that. As I’ve mentioned, you want to avoid “twin primes”, as they usually sit astride a primorial[1], which makes them “Primorial Primes”, and those are fairly easy to find without needing a lookup table in a file.

But the problem stretches out from Primorial Primes: as I’ve also mentioned, their pattern of location reflects around not just the main primorials but also the sub-primorials.

So 30 (2x3x5) and 210 (2x3x5x7) are primorials, and “29, 31” are Primorial Primes astride 30 that are also “Twin Primes” (astride 210, 211 is prime, though 209 = 11×19 is not).

However, 210/30 is 7, and thus there are 7-2 = 5 sub-primorials or reflection points: 60, 90, 120, 150, 180. The numbers either side of these also tend to be primes or twin primes.

If you make a number line from zero to 420 and mark off the primes, you will see that they reflect around the primorials and their multiples. Thus if you work up from 210 you will find the primes match, location-wise, as you move down from 210. If you do the same with 30 you will see a reflection around it, and then another reflection around 60.
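The reflection pattern for the candidate positions follows from gcd(k, P#) = gcd(P# - k, P#): the residues coprime to a primorial are symmetric about it. A quick Python check modulo the primorial 30:

```python
from math import gcd

P = 30  # the primorial 2 * 3 * 5

# Residues coprime to 30 -- the only places primes bigger than 5 can sit.
coprime = [r for r in range(1, P) if gcd(r, P) == 1]
print(coprime)                          # [1, 7, 11, 13, 17, 19, 23, 29]

# The set is symmetric: r is coprime to 30 exactly when 30 - r is,
# which is the "reflection" around the primorial and its multiples.
assert all(P - r in coprime for r in coprime)
```

Actual primes only partly inherit this symmetry: they must sit in these residue classes, but not every class member is prime.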

Obviously you lose a few primes along the way, but when you get into really big primorials of the first fifty or a hundred primes (which you can store in a file) you can then build a “location” file of the locations around a reflection and use this to build a fairly fast sieve around primorials and their sub-reflections and sub-sub-reflections etc.

Whilst it makes finding “probable primes” much faster than simpler sieves[2], it makes an adversary’s job simpler as well; thus it’s best to avoid the “close in” primes near the primorial points.

[1] The term “primorial” was coined by Harvey Dubner[3], by analogy with the more common factorial, which is formed by multiplying the positive integers in succession and is something all K12 students should be aware of. The primorial works the same way, except that instead of the first n natural numbers it uses the first n primes in succession. It is sometimes written as p_{n}# for convenience. Thus, more formally, for the nth prime number p_{n}, the primorial p_{n}# is defined as the product of the first n primes.

[2] Like all sieves they are conceptually simple, and at low values around either factorials or primorials they are faster, but the advantage quickly slips behind plain random-number guessing plus probabilistic primality tests.

[3] Who sadly died on 23rd Oct this year. He had quite a number of claims to fame, including writing up the first “card counting” method, and at one time holding the record for finding the most primes over 2000 digits. He also came up with a number of integer sequences, of which the primorials are one (see sequence A002110 in the OEIS).
