New Attack on Threefish

At FSE 2010 this week, Dmitry Khovratovich and Ivica Nikolic presented a paper where they cryptanalyze ARX algorithms (algorithms that use only addition, rotation, and exclusive-OR operations): “Rotational Cryptanalysis of ARX.” In the paper, they demonstrate their attack against Threefish. Their attack breaks 39 (out of 72) rounds of Threefish-256 with a complexity of 2^252.4, 42 (out of 72) rounds of Threefish-512 with a complexity of 2^507, and 43.5 (out of 80) rounds of Threefish-1024 with a complexity of 2^1014.5. (Yes, that’s over 2^1000. Don’t laugh; it really is a valid attack, even though it—or any of these others—will never be practical.)
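The heart of the technique is that rotation commutes exactly with XOR and with other rotations, but only probabilistically with modular addition: for random 64-bit words x and y and a rotation by one bit, ((x + y) ⋘ 1) equals ((x ⋘ 1) + (y ⋘ 1)) with probability about 3/8 ≈ 2^−1.415. A quick Monte Carlo sketch of that probability (an illustration of the underlying identity, not code from the paper):

```python
import random

def rotl(x, r, n=64):
    """Rotate the n-bit word x left by r bits."""
    mask = (1 << n) - 1
    return ((x << r) | (x >> (n - r))) & mask

def rotational_prob(r=1, n=64, trials=200_000, seed=1):
    """Estimate P[ rotl(x + y) == rotl(x) + rotl(y) ] for random x, y.

    XOR and rotation commute with rotation exactly; modular addition
    only does so with probability ~3/8 for r=1, which is what
    rotational cryptanalysis exploits round after round.
    """
    rng = random.Random(seed)
    mask = (1 << n) - 1
    hits = 0
    for _ in range(trials):
        x = rng.getrandbits(n)
        y = rng.getrandbits(n)
        if rotl((x + y) & mask, r, n) == (rotl(x, r, n) + rotl(y, r, n)) & mask:
            hits += 1
    return hits / trials
```

Running this yields an estimate close to 0.375. Because a rotational pair survives each modular addition only with that probability, every additional round of additions multiplies the attack’s cost, which is why the attack runs out of steam well short of the full round count.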

This is excellent work, and represents the best attacks against Threefish to date. (I suspect that the attacks can be extended a few more rounds with some clever cryptanalytic tricks, but no further.) The security of full Threefish isn’t at risk, of course; there’s still plenty of security margin.

We have always stood by the security of Threefish with any set of non-obviously-bad constants. Still, a trivial modification—changing a single constant in the key schedule—dramatically reduces the number of rounds through which this attack can penetrate. If NIST allows another round of tweaks to the SHA-3 candidate algorithms, we will almost certainly take the opportunity to improve Skein’s security; we’ll change this constant to a value that removes the rotational symmetries that this technique exploits. If they don’t, we’re still confident of the security of Threefish and Skein.
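For context: Threefish’s key schedule extends the key with a parity word, the XOR of all key words with a fixed constant, and it is this constant that determines whether rotational symmetries survive into the subkeys. A minimal sketch of just that parity computation (the asymmetric constant shown is the value the Skein team later adopted, in the v1.3 specification, to kill these symmetries; this is an illustration, not a full key schedule):

```python
MASK64 = (1 << 64) - 1

# Asymmetric key-schedule constant later adopted by the Skein team (v1.3);
# earlier versions used 0x5555555555555555, whose bit pattern is invariant
# under even rotations -- the symmetry this attack leans on.
C240 = 0x1BD11BDAA9FC1A22

def parity_word(key_words, const=C240):
    """k_Nw = const ^ k_0 ^ ... ^ k_(Nw-1), all 64-bit words."""
    k = const
    for w in key_words:
        k ^= w
    return k & MASK64
```

Because XOR with a rotation-symmetric constant preserves rotational pairs, swapping in an asymmetric constant is a one-line change that disrupts the attack without touching the round function.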

And we’re always pleased to see more cryptanalysis against Threefish and Skein.

Posted on February 7, 2010 at 8:06 AM • 25 Comments


spaceman spiff February 7, 2010 11:17 AM

That which doesn’t kill you makes you stronger! Certainly true of cryptography and crypto-attacks as well. As you so often, and so cogently, point out, security through obscurity is (in my own words) simply a death wish…

sick February 7, 2010 1:49 PM

Security through obscurity is not a problem.
Security through too little obscurity is.

A password is something that must be obscure to be secure. “password1” is insecure because it is easily guessable.

Security is all about protecting an asset against a specific type of attacker for a specific amount of time given a specific set of resources.

If you use an encryption algorithm which is well known and well tested, you give up a little obscurity but gain a lot more security in return (the assurance that your cleartext won’t be findable through other means). This only means much if you’re trying to defend yourself against an adversary that has the ability to crack weak codes.

However, if you’re trying to hide the order confirmation for a surprise skateboard from your child, then hiding it in a random drawer in your house is JUST AS SECURE given your needs.
security = resources needed to break barriers – resources of attacker.

The greater that value the more secure you are.
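sick’s formula reads as a simple additive cost model: each independent barrier (a strong safe, a hidden location) adds to the attacker’s bill, and a positive margin means the asset holds. A toy sketch of that model (the numbers are purely illustrative):

```python
def security_margin(barrier_costs, attacker_resources):
    """sick's model: security = total resources needed to break the
    barriers minus the attacker's resources. Positive means secure."""
    return sum(barrier_costs) - attacker_resources

# A strong safe plus a little obscurity about its location:
# the obscurity adds a small but nonzero amount to the attacker's bill.
```

For example, `security_margin([1000, 50], 900)` is positive, while relying on the hiding place alone, `security_margin([50], 900)`, is deeply negative.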

Let’s take an example:
If you hide your money in a safe, hide that safe somewhere in the middle of NYC, and don’t tell anyone, it is secure.
If you put your money in a safe, make the code a long random series of digits, and put it in Times Square, it is secure.
If you do both (hide it in a safe and put it somewhere secret) it is even better.

Now obviously it is better to use a safe that other people have examined, as it is likely to be more resistant to attacks. However, given the choice between multiple safes that have all been examined and found to be about equal in security, you CAN gain a modicum of security by not telling people which safe you’re using. It raises the bar just a little, requires just a few more minutes of research, and so on.

Security through obscurity is one method of increasing the amount of resources needed to attack a system. The problem is that some people don’t know how to analyze obscurity and end up choosing poorly. They pick a homebrew algorithm over one that has been tested, because they think that learning the algorithm requires more resources from an attacker than it actually does.

Carl "SAI" Mitchell February 7, 2010 2:39 PM

“Security by Obscurity” is generally used to mean any system that depends on the obscurity of more than the password/key/etc. for its security. Having a password is not security by obscurity; having a super-sekrit algorithm that depends on the secrecy of the algorithm to work is. Having any DRM scheme where the user can only use the decryption key they have (but hidden from them in the depths of the application) in certain ways is security by obscurity.

sick February 7, 2010 2:48 PM

And yet if you’re only trying to defend your application from technically unsophisticated attackers (re DRM), or if your asset only has to stay secret for a short period of time (re secret algo), these “security through obscurity” tactics are secure.

sick February 7, 2010 2:50 PM

I will admit that using a known algorithm is better. My point is only that “security through obscurity == always bad” is not correct.

Not You February 7, 2010 3:24 PM


The problem here is one of terminology, but it’s also an example of jargon standing in for a reasonably large concept that subject experts take for granted, and of that jargon being used with the general public without the large concept behind it ever being explained.

In many cases, this is because the subject experts have tended to think of the subject in terms of its long form and just substitute the jargon. In this case Bruce is just as guilty – sorry, Bruce.

What’s the point of all this? Well, the idea of talking down “security by obscurity” is not that obscurity is bad – on the contrary, you need one piece of obscured knowledge for cryptography to work (a secret key, pass phrase, et cetera).

There’s no engineering reason why you need more than one piece of obscured knowledge, and if your obscured knowledge is spread around in more than one place, it is harder to measure and much easier to attack at least some of it – just ask the people who broke Enigma by recovering different parts of the obscured knowledge as different opportunities arose.

When cryptographers say to avoid “security by obscurity”, what this really means is to concentrate all of your obscurity in one place (the key), make it as secure as possible, and concentrate your efforts on making that one thing as obscure as possible (for instance, by never sharing the key).

It’s unfortunate that simply repeating “security by obscurity” has overtaken the actual principle behind the phrase, because obscurity is not bad – only reliance on excess obscurity is bad. It just happens that most obscurity is excess obscurity.

hippomous February 7, 2010 4:19 PM

@ sick “defend your application from technically unsophisticated attackers”

Technically unsophisticated attackers just wait for the technically sophisticated attackers to write software cracks and distribute them for free. Cracking is both a participatory and a spectator sport.

All software is condensed technical expertise.

Clive Robinson February 7, 2010 4:24 PM

@ sick,

“I will admit that using a known algorithm is better. My point is only that “security through obscurity == always bad” is not correct.”

You need to split the problem up a bit.

A password is like a key to an encryption system.

Both of necessity need to be “obscure”.

However, the one-way system used to protect the password, like the encryption algorithm, need not of necessity be obscure, unlike the password or key.

Which gives rise to the maxim “The enemy knows the system”.

However if you have the expertise and some do (NSA / GCHQ et al) then keeping a secure algorithm “hidden” should not actually gain you much against a “chosen plaintext attack”, but will gain you something against an attack where either the plain text or key are known (and in some cases where both are known).

However, the flip side of using a secure algorithm that becomes known to your enemy is that the enemy can start using it to protect their traffic from you…

Which is why in the past the likes of the NSA / GCHQ et al have liked to keep their better algorithms secret…

However, times have changed: the academic community is moving forward way faster than expected, but so are the criminals and the wars we fight. Thus it is generally in our favour economically to protect our national security with well-tested and reliable algorithms.

However, the genie is out of the bottle with regard to strong crypto, and the military campaigns we fight today are not those where “covert” secrecy via hidden encryption gets you much at all. In fact, generally the opposite, due to having various countries’ forces fighting the same foe in collaboration.

Thus the “super secret stuff” tends to be reserved for use by a nation for the likes of its diplomats and intel personnel. And even that use is on the wane.

tintop February 7, 2010 7:31 PM

I appreciate Bruce Schneier’s insights but can’t help but consider the possibility that this blog is an elaborate front to establish credibility among the more critical computer security enthusiasts who would otherwise scrutinize the algorithms he recommends more closely.

He could be sitting on a backdoor to only “the ‘super secret stuff’ [that] tends to be reserved for use by a nation for the likes of its diplomats and intel personnel”.

d3sm0nd February 7, 2010 7:42 PM

@sick et al. —

For the love of God.

“Security through obscurity” is a term of art. You may look that phrase up. It has a specific meaning in this specific field– you cannot just take all the dictionary definitions of ‘security’ and ‘obscurity’, mix and match them, poke a hole in the flawed definition that you construct, and then claim to have made a meaningful statement about security through obscurity. In this context, ‘obscurity’ does not mean just ANYthing unknown or hidden, any more than the ‘security’ refers to financial securities. Your willful choice to misinterpret the term in order to appear insightful is certainly not Mr. Schneier’s problem.

Michael Ash February 7, 2010 10:10 PM


If that’s his plan, I doubt it’s working well. While I’m sure there are some people for whom Bruce’s stature makes them trust his algorithms more, there are also a lot of people who try harder to find holes in his stuff, because of the prestige such a find would bring.


Thank you for that. Every time I see people holding forth on how the necessity of secret keys shows that security through obscurity is obviously necessary to some extent, it makes me want to scream.

AC2 February 8, 2010 12:31 AM


Naaah! It’s actually part of his sooper seekrit plan to take over the world thru access to all critical personal/financial/governmental information in real-time….


Thank you sir!

Will February 8, 2010 2:00 AM

Well I had my terms adjusted by Sick and now I’ve had them readjusted by revisiting the comments later.

Thank you all, especially Sick.

Rob February 8, 2010 3:11 AM

@ sick:
“resources needed to break barriers”

Do we actually know the cryptanalysis work function?

2^128 is considered secure because NO amount of computing hardware that could conceivably be built can perform 2^128 repetitions of even the smallest operation.

But longer keys require hardware that costs more to build. And for embedded systems, that does count.

So, how about practical systems with effective key or state variable lengths of 80 bit?
(decommissioned by NIST this year, e.g., Triple DES with two keys)
Or 88 bit? Or 92 bit? 96 bit?

The real question is how much hardware would be needed, and when can it be expected to become available?

That needs a theory to quantify the computational capabilities of hardware, any hardware. And how to quantify the resources needed to perform computations. I think we are not close to a comprehensive theory.
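To make that concrete, the usual back-of-the-envelope estimate can be sketched as follows (the throughput figures are assumptions chosen for illustration, not measurements of any real hardware):

```python
def brute_force_years(key_bits, keys_per_sec_per_device=1e9, devices=1e6):
    """Average-case exhaustive search: half the keyspace divided by an
    assumed aggregate throughput, converted to years."""
    trials = 2 ** (key_bits - 1)          # expect to find the key halfway
    seconds = trials / (keys_per_sec_per_device * devices)
    return seconds / (365.25 * 24 * 3600)
```

At an assumed aggregate of 10^15 keys per second, an 80-bit key falls in roughly twenty years, while 128 bits remains out of reach by some fifteen orders of magnitude; the intermediate sizes listed above are exactly the regime where the hardware question bites.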

(Author disclosure: just out of curiosity, I took a shot at it; see arXiv:0911.5262v1.)

And to model “Security by Obscurity” you would need to model information leakage. Key-based security is good because leaking information about the bits of the keys can be modeled (controlled) quite well. Understanding how information about the algorithms leaks is much more difficult.

Bryan Feir February 8, 2010 11:27 AM

@Michael Ash:
I’ve usually heard that sort of thing (attacking the guy with the best reputation in order to increase one’s own prestige by a success) referred to as the ‘Gunslinger Mentality’.

You likely know this already if you’ve studied this area, but there has been some interesting theoretical work done on ‘Computronium’, using information theory and thermodynamics to define theoretical limits on data storage (Bekenstein bound) and computation (Bremermann’s limit). We can actually place hard limits on how many operations can be done in any given time based on the laws of physics. Granted, the limits based on what is possible or practical to actually build are much lower…
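One concrete form of that thermodynamic limit is Landauer’s principle: irreversibly flipping one bit costs at least kT·ln 2 joules. A standard back-of-the-envelope sketch (not taken from the works cited in this thread) of the minimum energy needed merely to count through a keyspace:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules(key_bits, temp_kelvin=300.0):
    """Landauer lower bound on the energy to irreversibly cycle a
    counter through all 2**key_bits states at a given temperature."""
    return (2 ** key_bits) * K_B * temp_kelvin * math.log(2)
```

At room temperature, counting through 2^128 states already costs on the order of 10^18 J, roughly thirty years of output from a gigawatt power plant, before a single round of any cipher is computed.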

Rob February 8, 2010 12:21 PM

@Bryan Feir:
“We can actually place hard limits on how many operations can be done in any given time based on the laws of physics.”

See the work of Seth Lloyd; you can find links to his papers on Wikipedia:

S. Lloyd, Ultimate physical limits to computation, Nature 406, 1047–1054 (August 2000).

S. Lloyd, Computational Capacity of the Universe, Phys. Rev. Lett. 88(23), 237901 (May 2002).

S. Lloyd, A theory of quantum gravity based on quantum computation, arXiv:quant-ph/0501135 (2005).

I knew the Bekenstein bound (cf. the interesting work following Verlinde’s “On the Origin of Gravity and the Laws of Newton” last month). I had not yet seen Bremermann’s limit, though. Thanks!

I have ignored quantum computing as I do not really understand it well enough.


Clive Robinson February 8, 2010 12:57 PM

@ Bryan Feir,

“using information theory and thermodynamics to define theoretical limits on data storage (Bekenstein bound) and computation (Bremermann’s limit).”

The problem with it is that it defines upper limits on what can be done by Turing machines.

The problem is applying constraints from the “tangible” physical world to the “intangible” information world.

Information is not limited to bits or bytes. A “bit” is the minimum representation of information in the physical world, and axiomatically it is assumed to be bivalent in most cases, i.e. assigned one of two values (true or false).

Thus two bits hold two truth values, but there is also the difference between them, which is another truth value that does not have a physical representation. Thus as the number of bits increases, the number of “difference values” goes up as n(n−1)/2. The hard part is deciding which difference states have recognisable meaning and which don’t.

The problem with information is we know little or nothing about it. It can be argued that our physical world is a subset of the information world. Thus we are in effect standing in the house looking out through the letter box; our view is not just constrained, it changes as our viewpoint changes…

The only real measure we have of information that is not constrained by a “physical world” view/assumption is “entropy”, which is a measure of possibility. Thus it is not constrained by such things as the speed of light or a force…

Thus it is argued that the closest science we have to borrow ideas from is that of the quantum world…

If you look up some of the things that Seth Lloyd, Roger Penrose and others have argued, you start thinking “how do I get a grip on information?”

However, it looks like nature is way ahead of us: a few years ago it was realised that photosynthesis, by its very efficiency, could not really be anything other than quantum in nature.

And a couple of years ago we had the discovery of the “Taco Shell” protein, which more recently has been identified as the most likely structure responsible (Google [quantum photosynthesis taco shell protein]).

Other scientists believe that smell is likewise a quantum effect, which is maybe why some people are seriously looking for quantum structures in the brain to explain the creative thought process…

Such is the fun of these things….

Joe in Australia February 8, 2010 5:16 PM

@ d3sm0nd

Absolutely. It’s a term with a particular meaning. “Sick”, please don’t start pontificating about safes and combinations and so forth. It is actively annoying to read you going on about things that are obvious, and, when they are not obvious, wrong.

Rob February 9, 2010 5:10 AM

@Clive Robinson:
“a few years ago it was realised that photosynthesis by its very efficiency could not really be anything other than quantum in nature.”

From this quote I get the impression that you have not been involved in botany or biology.

A one- or two-photon photosynthetic reaction is of quantum nature by definition. The complex charge and energy transport mechanisms in the pigments and membranes were known to involve subtle quantum effects in the 1970s and before. I have no clue what “new” insights you are referring to.

Regarding your piece about bit differences: try to store and retrieve information from that in the thermodynamic limit. If you succeed, you will get a Nobel prize or equivalent.


jacob February 9, 2010 10:44 AM

We have gone from a discussion of an attack on Threefish to a discussion of quantum mechanics. Only you, my friend. 😀
It does have bearing on trying to compute how difficult a brute-force attack would be. I tend to think that we overcomplicate things (is that a word?).

Previously, the point was made that intelligence agencies don’t want enemies to use their best codes. That is probably true, and why some have said the NSA’s approval of DES was their worst mistake. However, codes are still being used that the agencies would have problems with, which is why “phishing”-type attacks are preferred. Heck, breaking knuckles is preferable to using the computer resources to break a code, or, more importantly, to letting the world know you can break it. I seem to remember the Allies not letting the Germans know we had broken Enigma.

If someone used Tolkien or Klingon as the language… Just a thought. Even the geeks at NSA would scratch their heads for at least a little while, even after they broke the code. Can you imagine the looks if Tolkien had been a spy and used one of his languages never released?

For the record, I find quantum mechanics and quantum entanglement fascinating. Kind of like my cat watching the TV: neither one of us understands just what the hell we are watching. The older I get, the more I understand how little I know.

BTW, look at the smallest transistors made so far: 20 nanometers. They are now counting atoms for these! (I’m relying on my memory; they may have made smaller since then.) Would they have to coat these things? Any object with a static charge, however small, could kill them.

Brad Conte February 9, 2010 10:58 AM

From NIST’s timeline:

Year 4 (2010): 2Q
Hold the Second Hash Function Candidate Conference. Discuss the analysis results on the submitted candidates. Submitters may identify possible improvements for their algorithms.

Year 4 (2010): 4Q
Submitters of the finalist candidates announce any tweaks to their submissions.

It seems that Skein will be allowed to be tweaked if it makes the finalist cut, and the decision to allow it into the finalists will take into account any proposed tweaks.

Pvblivs October 6, 2010 3:31 PM

I am looking at the “breaks 39 (out of 72) rounds,” “breaks 42 (out of 72) rounds,” and “breaks 43.5 (out of 80) rounds.” And I am reminded of the words: “It’s easy to design a block cipher if you have sufficient memory for 48*32 S-boxes. It’s hard to design an insecure DES variant if you iterate it for 128 rounds. If the length of your key is 512 bits, you really don’t care if there are key-complementation properties. The real trick — and the reason that real-world block cipher design is very difficult — is to design a block cipher with the smallest possible key, the smallest possible memory requirement, and the fastest possible running time.” Now, considering that computers have been getting faster, the consideration of possible brute-force attacks may warrant a 1024-bit key. But if an attacker can break more than 32 rounds faster than brute force, it suggests the rounds need to be strengthened. Back then Schneier was advising against what I will call “security through a slow algorithm,” and I wonder if he really has changed his mind.
