Comments

Anura • March 15, 2016 5:49 PM

@Daniel

It's very confusing to think of a mathematical sequence in terms of "random", especially from a cryptography standpoint. No mathematical sequence is random, by definition.

As to whether it would help: public-key algorithms aren't brute-forced that way. A generator that could, with no overhead, find random primes with probability 1 would shave at most a few bits off a straight brute-force search. We can do significantly better than that with factoring algorithms.

Winter • March 17, 2016 3:58 AM

@Anura
"It's very confusing to think of a mathematical sequence in terms of "random", especially from a cryptography standpoint. No mathematical sequence is random, by definition."

The correct mathematical term here is Kolmogorov complexity.
https://en.wikipedia.org/wiki/Kolmogorov_complexity

"Random" in this sense translates to a statement that a sequence has no computable pattern. This is "Random" as in "Unpredictable from its parts".

Clive Robinson • March 17, 2016 6:19 AM

@ Winter,

The correct mathematical term here is Kolmogorov complexity

Kolmogorov complexity[1], also called algorithmic entropy, is not of much help either when it comes to deciding what is and is not random, because of an underlying assumption[2].

It's why you sometimes hear people say "a random string is one which cannot be compressed", which implies it has no deterministic component that can be exploited to shorten it.

The thing is, you can show that it is not realistically possible, in human terms, to recognise whether a string is random. For instance, a simple counter is easily recognised once its length is determined, but if it is put through a cryptographically secure algorithm, by definition its output sequence cannot be predicted. It is therefore not realistically recognisable as compressible, and so is considered random, even though it is not[3].
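The counter example above can be demonstrated directly, using SHA-256 as a stand-in for a cryptographically secure function (a sketch; any CSPRNG or block cipher would show the same effect):

```python
import hashlib
import zlib

# A plain counter: trivially patterned, compresses extremely well.
counter_bytes = b"".join(i.to_bytes(4, "big") for i in range(2500))

# The same counter pushed through SHA-256: still fully deterministic,
# but no general-purpose compressor can find the pattern.
hashed = b"".join(
    hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(2500)
)

print(len(zlib.compress(counter_bytes, 9)))  # much smaller than the 10000-byte input
print(len(zlib.compress(hashed, 9)))         # roughly the 80000-byte input: looks "random"
```

Anyone holding the 4-byte counter and the name "SHA-256" can regenerate the whole 80000-byte string, so its Kolmogorov complexity is tiny; the compressor just cannot discover that.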

@ Anura,

No mathematical sequence is random, by definition.

Which, whilst true, also suffers from the effective "one-wayness" of unknown generator algorithms.

In the case of random numbers there are certain observations that people often miss.

One of which is finding "Twin Primes" and thus showing whether they are infinite or not. It's easy to show how to find "candidates" as an infinite sequence, and we know how to test them for primality. What is not easy to show is when the candidates fail.

To find an infinite supply of candidates is fairly trivial. If you multiply successive primes together, in the same way integers are multiplied for factorials, you end up with an easily provable infinite sequence (these products are usually called "primorials"). All of these numbers are even and thus not prime, but the odd numbers either side have a high probability of being prime. Thus you have a reasonable starting assumption that twin primes might be infinite.
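This "prime factorial" (primorial) construction is small enough to check directly; here is a self-contained sketch that tests the numbers either side of each primorial (trial-division primality is fine at this scale):

```python
def is_prime(n: int) -> bool:
    """Simple trial division; adequate for the small numbers used here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def primorials(count: int):
    """Yield products of the first primes: 2, 2*3, 2*3*5, ..."""
    product, p = 1, 2
    for _ in range(count):
        while not is_prime(p):
            p += 1
        product *= p
        p += 1
        yield product

for n in primorials(6):
    print(n, is_prime(n - 1), is_prime(n + 1))
```

For 6, 30 and 2310 both neighbours are prime (twin primes); for 210 only one side is (209 = 11·19), and for 30030 only one side is (30031 = 59·509), illustrating that the candidates sometimes fail.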

A second observation shows that there are secondary candidates for Twin Primes. If you consider the primes that are multiplied to form the prime factorial, there is a range of numbers from the previous prime factorial to the current prime factorial. That is,

1·2·3 = 6, 1·2·3·5 = 30, giving the range 6–30

If you examine the range you will find that every multiple of the previous prime factorial (6) is also a candidate for a twin prime, but at a lower probability.

This comes about from an observation that primes propagate like waves. If you look at all the primes between 6 and 30, you will find they all fall into the places where the primes between 0 and 6 fall when reflected up from the previous multiple of 6.

Having seen this regular pattern --which you can use as a fast way to find prime candidates-- you then have to determine what strikes a candidate prime out.

This too has a pattern, which you can work out for yourself with a little thought.

Anyway, you can then see that the position of primes is actually quite regular as a sequence.

Now consider those prime factorials as a number base: you would expect a pattern in the least significant digit in that base, and in any others, depending on their prime factors.
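The number-base observation can also be checked directly, using base 30 = 2·3·5 (a sketch, self-contained): every prime above 5 ends, in base 30, in one of only eight "digits", namely the residues coprime to 30.

```python
def is_prime(n: int) -> bool:
    """Simple trial division; adequate for the small numbers used here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Least significant digit, base 30, of every prime above 5 below 1000.
residues = sorted({p % 30 for p in range(7, 1000) if is_prime(p)})
print(residues)  # -> [1, 7, 11, 13, 17, 19, 23, 29]
```

Only 8 of the 30 possible last digits ever occur, which is the regularity Clive describes; the same thing happens in base 6 (last digit 1 or 5) and base 210.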

[1] The most often noted result can be expressed as p("s"): basically, any string "s" has an algorithmic representation not much longer than the string itself; in this case it is wrapped in a print function p().

[2] The underlying assumption is that there exists an optimal compression algorithm for its generator function, one that will shorten the string to its minimum length if it is not already in that state. Which is reasonable as an assumption, but does not in any way help you find the algorithm, or recognise that there is one for any given string.

[3] This then moves on to a discussion of what is and is not a "One Way Algorithm" and why, and to the speculation that one-way algorithms have trapdoors.

Winter • March 17, 2016 7:49 AM

@Clive
"Kolmogorov complexity[1] also called algorithmic entropy is of not much help either when it comes to deciding what is and is not random, because of an underlying assumption[2]."

Kolmogorov complexity is a theoretical construct, not a practical one. But that is no different from "randomness" itself. It allows mathematicians to talk about randomness in fixed sequences. That was the question.
