About SDLH: if you don't know the factors p and q, finding collisions is equivalent to factoring p*q. That is cool and OK.

BUT, what if you KNOW the factors? Like, when you generate your own private/public RSA key pair. Then with SDLH you can produce as many collisions as you like. That is probably not a very good property for SDLH to have.
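For what it's worth, here is a toy sketch of that collision. SDLH hashes a message m as g^m mod n with n = p*q; anyone who knows p and q can compute Carmichael's lambda(n) and shift the exponent by it. The primes and base below are tiny, arbitrary choices, purely for illustration:

```python
from math import gcd

# Toy SDLH: H(m) = g^m mod n, with n = p*q.
p, q = 104729, 1299709            # the "secret" factors of n (far too small for real use)
n = p * q
g = 2                             # public base, coprime to n

# Carmichael's lambda(n) = lcm(p-1, q-1) -- trivial to compute if you know p and q.
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)

m = 123456789
m_prime = m + lam                 # a second preimage, essentially for free

assert pow(g, m, n) == pow(g, m_prime, n)   # same hash, different message
```

Without the factorization, computing lambda(n) is as hard as factoring n, which is exactly where the security proof comes from.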

Arrow's Theorem only applies to voting systems that rank things in order without a score. If you allow scores, you can beat Arrow's Theorem because systems with scores are not "voting systems" according to the narrow definitions used by Arrow.

It would be far, far too slow. When that happens, it becomes the lock everyone just leaves the bloody key in...

Security primitives must be "nice" to use and a big cpu hit is not nice.

My Python wrapper of Doug Whiting's code exposes single block Threefish encryption:

http://www.coli.uni-saarland.de/~hagenf/personal/pyskein/

Decryption and different modes of operation will be included in the next update.

One thing I'm concerned about is the fact that all MMX and SSE shift instructions always shift every vector element by the same amount. This means that parallelizing the rotates (and ultimately the shifts) will either be impossible or quite hard to do.

Also note that I was mixing up the terms "Skein" with "Threefish" before.

Have you ever considered using SSE2+ integer instructions to further speed up the hashing? I mean, all of the suggested hashing algorithms have at least one reference implementation in C, and almost all of them have one in assembly, both of which are used for benchmarking. But the assembly code mostly uses only general-purpose instructions. I have neither looked into every paper nor do I know the conditions of the contest in detail, but I guess that SIMD assembly code would be at least faster, and maybe even simpler, than the general-purpose assembly code.

Basically, on a 64-bit x86 CPU, you may use 14 to 15 out of 16 general-purpose registers, each 64 bits wide, in your hashing routine. With the current SSE SIMD instructions, you have 16 out of 16 XMM registers, each 128 bits wide. This means you can do at least (in theory) twice as much in parallel (especially since the later SIMD instruction sets include a packed 64-bit add). It may also allow you to keep more of the internal state in registers, requiring fewer state variables (and less state), speeding things up and making the implementation easier (the SSE instructions are also really easy to use).

There are similar SIMD instructions in the MMX/MMX+ instruction family, which would especially speed up the hashing code on older 32-bit x86 CPUs, because otherwise you could only use 6 to 7 out of 8 general-purpose registers, each 32 bits wide. You can use 8 out of 8 MMX registers, each 64 bits wide, which is better than using only the 32-bit general-purpose registers, but not as good as using the 16 128-bit SSE registers on 64-bit CPUs.

Most 32-bit x86 implementations will actually run on a real 64-bit CPU in (32-bit) compatibility mode. Those 64-bit CPUs in compatibility mode, and some later 32-bit-only CPUs, also have SSE integer instructions (SSE2 onwards), but only 8 128-bit registers.

One downside for all three SIMD implementations is that you need 2 shifts and 1 OR to perform 1 rotate.
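In pure Python the same three-operation emulation looks like this (one 64-bit word at a time instead of a whole vector, but the shape is identical to what the SSE2 code would do):

```python
MASK64 = (1 << 64) - 1

def rotl64(x, r):
    """Rotate a 64-bit word left by r bits.

    No rotate instruction available in the SIMD ISA, so build it from
    exactly the three operations mentioned above: two shifts and an OR.
    """
    r &= 63
    return ((x << r) & MASK64) | (x >> (64 - r))
```

On an actual SSE2 implementation, each of these three operations would act on two 64-bit lanes at once, which is what makes the extra instructions tolerable.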

So we conclude that, for every hashing algorithm to have an optimal implementation on each CPU, we need at least 3 different SIMD assembly implementations (pre-80686 excluded). This seems like a lot of work, but I guess it's worth a try. I think someone should tell this to the other candidates and to the committee/jury too.

About Skein:

Finally, I must admit that I like Skein quite a lot for its overall concept, with the configuration space at the beginning encouraging people to use their own variations and feed more seeds into the hash. I also like the combination of add and XOR separated by a rotation. I think it gives a good mixture for a primitive that is difficult to trace back, which I had the joy of personally doing some basic experiments on about 2 years ago.

One thing I am a little sceptical about is whether the total lack of S-boxes provides enough diffusion for the non-linearity. Also, I think the fact that Skein has not had any external cryptanalysis as of yet should be emphasized, which makes the 25-out-of-72-rounds attack less significant. I believe that Skein should get more external cryptanalysis; do you know of any group other than yours working on it, Bruce?

p.s. I also totally like the EDON-R algorithm, due to it being based on super cool quasigroups (good description included), with a general building rule for arbitrary sizes, and all that without S-boxes too!

Are you pondering what I'm pondering?

The current approach is to use functions with properties that are not well understood. So we basically rely on security through ignorance. Which would be totally OK if we knew that there are no one-way functions.

But don't you think that even a heuristic argument for a lower bound on a simple operation is more trust inspiring?

@Clive Robinson

I wrote a "Typographical Number Theory" sieve after I read "Gödel, Escher, Bach". It's been running on a 32-node cluster for close to 15 years now. Give me your contact info and I'll send you the output.

If it doesn't contain the twin prime proof, you win.

I have to go back to the lab soon, because at night I do what I do every night: TRY TO TAKE OVER THE WORLD!


> There is a public-key algorithm that is exactly as hard to break as it is hard to find prime factors (not discrete logarithms) of numbers.

Uh-huh. But ... is factoring hard? [1]

Oops. Looks like we're back to the "Let's try something ..." method of proof.

> Something similar must exist for hash-functions.

a) Why must it? For most purposes, an important feature of hash functions is that they be fast to compute; indeed, several of the algorithms considered "broken" would actually be OK if we could just double the number of rounds. This significantly restricts the choice of available primitives. For example, if your desire for a mathematical proof caused you to design a hash function based on PK-type functions, your algorithm would not be useful for speeding up digital signatures.

b) It is not true that all published hash functions have been broken. For example, there are no known breaks or even certificational failures on RIPEMD-160 -- even though it was published 12 years ago, and has seen widespread use. Whirlpool has been around 8 years, and also has no known attacks. (Quite a few others also have no effective published attacks, but are either too obscure or too new to have attracted much intensive analysis. Several others described as "broken" actually have only certificational attacks: the attack exceeds the algorithm authors' design criteria, thus raising suspicion about the algorithm's security, but there is no known way to actually use the attack in real life.)

____

1. I mean, of course, "is factoring hard on a Turing machine?" We already know that it is not a hard problem on a quantum computer. In that case, building the computer is the hard problem!

Why not waste your time on deciding whether there is an infinite number of twin primes?

I expect a working proof by oh say 12AM(GMT) of the 24th, so I have something to do whilst getting lunch ready for the 25th...

Ho Ho Ho...

Egad! Now I have to split the 1,000,000.00 prize with a foreigner. Poit!

On the other hand:

gamma(P+1) = P!, so you missed the infinite number of solutions for P.

I think it's only fair to split the prize according to the number of solutions found, don't you think?

Zort!

Your (P=NP and P!=NP) solution doesn't work:

If P=0, P!=NP becomes 1=0.

There are simultaneous solutions of P=N=1, and N=1, P=2.
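For the sceptics, a few lines of Python confirm both claims (reading "P!" as the factorial of P and "NP" as the product N*P, as the joke intends):

```python
from math import factorial

def solves_both(n, p):
    # "P=NP" read as p == n*p, "P!=NP" read as factorial(p) == n*p.
    return p == n * p and factorial(p) == n * p

assert not solves_both(1, 0)   # P=0 fails: 0! = 1, but N*P = 0
assert solves_both(1, 1)       # P = N = 1 works
assert solves_both(1, 2)       # N = 1, P = 2 works too
```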

There is a public-key algorithm that is exactly as hard to break as it is hard to find prime factors (not discrete logarithms) of numbers. Something similar must exist for hash-functions.

Speed of the algorithm, hash-size and memory consumption are secondary, as long as for filesize->infinity the following conditions hold:

hash_size/filesize->0

hash_mem/filesize->0

hash_time/filesize->const.

As for things 'getting better' the more time passes...

For instance, you can spend all the time in the world designing voting systems, but each one will have a flaw:

http://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem

Wouldn't surprise me if something similar applies to hash-functions.

After all, smart people aren't wasting their time trying to build a perpetuum mobile anymore, thanks to advances in human understanding of physics.

But, it should be possible to describe something like an efficient frontier, as some functions are certainly dominated. The current approach:

10 PRINT "Let's try something a couple of people find hard to break and when it breaks, try again"

20 GOTO 10

is not very trust-inspiring.

If anything, the history of cryptography has shown that the presence of a "formal proof" isn't really a very good way to judge the security of cryptographic algorithms, hash functions or otherwise. The whole process is very iterative: generalized breaks are developed, and new functions are presented to fix those breaks, so MORE breaks are developed, etc., etc.

The whole science is still pretty new, so I expect this to be the case for some time. AES has survived pretty well without a formal proof, and while hash function design is even less well understood than block cipher design, I don't imagine the winner of the SHA-3 process will be broken in a few years.

You've got it the wrong way around!

You have to construct a hash-function that is secure, if and only if P!=NP. Now, if your hash-function is broken, it follows that P=NP. But of course that is too obvious and therefore no fun and even less funny.

Btw, most people jokingly point out that P=NP, iff P=0 v N=1, but nobody seems to notice that this is also a solution to P!=NP, since 0! equals 1.

Therefore, P=NP and P!=NP are both true. Tadaa!

Which of course means that hash-functions are secure and not secure at the same time, and Bruce et al. are wasting their time and not wasting their time.

More promising would be to use a constant hash-function since it cannot be inverted at all!

Some advantages of this approach are:

1. Extreme speed. Computing the hash can be done with a single mov [mem],#c (in assembly).

Disadvantages:

1. Unfortunately all messages have the same hash, so it's likely to be slightly more vulnerable to collisions, but since all hash-functions get broken anyway this shouldn't be much of an issue.
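A minimal Python sketch of this groundbreaking design (digest value and length chosen arbitrarily, of course):

```python
def constant_hash(message: bytes) -> bytes:
    # Blazingly fast: the input is not even looked at.
    # Impossible to invert, in the sense that inverting it tells you nothing.
    return b"\x2a" * 32   # a fixed 256-bit digest
```

Note that it even satisfies the asymptotic conditions above: hash_size/filesize -> 0, hash_mem/filesize -> 0, and hash_time/filesize -> const (namely zero).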

Where can I get the C source for reference and optimized versions as well as the specification and test vectors?

According to the links all past hash-functions have been broken, so it does not look too good for the next generation, does it? Or maybe the guys in the past haven't been trying hard enough?

So, if all the hash-functions get broken anyway, why not put the effort to some use?

Like, prove your hash-function secure iff zeta(c+di)=0 => (d=0) v (c=1/2).

That way you kill two birds with one stone:

If someone breaks your hash-function, we know that the Riemann hypothesis was wrong. :-)