Schneier on Security
A blog covering security and security technology.
September 15, 2009
Skein is one of the 14 SHA-3 candidates chosen by NIST to advance to the second round. As part of the process, NIST allowed the algorithm designers to implement small "tweaks" to their algorithms. We've tweaked the rotation constants of Skein. This change does not affect Skein's performance in any way.
The revised Skein paper contains the new rotation constants, as well as information about how we chose them and why we changed them, the results of some new cryptanalysis, plus new IVs and test vectors. Revised source code is here.
The latest information on Skein is always here.
Tweaks were due today, September 15. Now the SHA-3 process moves into the second round. According to NIST's timeline, they'll choose a set of final round candidate algorithms in 2010, and then a single hash algorithm in 2012. Between now and then, it's up to all of us to evaluate the algorithms and let NIST know what we want. Cryptanalysis is important, of course, but so is performance.
Here's my 2008 essay on SHA-3. The second-round algorithms are: BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grøstl, Hamsi, JH, Keccak, Luffa, Shabal, SHAvite-3, SIMD, and Skein. You can find details on all of them, as well as the current state of their cryptanalysis, here.
In other news, we're making Skein shirts available to the public. Those of you who attended the First Hash Function Candidate Conference in Leuven, Belgium, earlier this year might have noticed the stylish black Skein polo shirts worn by the Skein team. Anyone who wants one is welcome to buy it, at cost. Details (with photos) are here. All orders must be received before 1 October, and then we'll have all the shirts made in one batch.
Posted on September 15, 2009 at 6:10 AM
@ Daniel Wijk,
"With that said I actually dont know if Schneier has a Ph.D or not :-)"
It does not matter; I have posted about this in the past.
A Ph.D. is not like a "taught" degree (BSc, MSc, etc.); it is awarded for "research ability".
As many older Ph.D.s will grudgingly admit, their research topic was "kind of made for them" by those assessing them.
The big problem with a Ph.D. is that the research is normally a small increment on existing knowledge and often lacks any real originality.
This is due in part to the people assessing you: they have little way to judge highly original work.
But a pertinent question to ask is not "Does Bruce have a Ph.D.?" but "Is there a Ph.D. for Bruce's originality?"
As far as "standing" in the "field of endeavour" goes, I think Bruce has proved himself way beyond what most Ph.D.s could ever hope to do.
Also I suspect that his work on his various systems and the "writing up" he has done would be regarded as sufficient by many.
@ Brad Conte,
"For those still interested in more on the definition of a tweakable block cipher"
Or to put it another way,
"when is a key not a key?
Or more poeticaly "A rose by anyother name would smell as sweet"
There are various answers to this depending on what you change and how.
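As a concrete illustration (my own sketch, not anything from the Skein/Threefish spec): the textbook way to bolt a tweak onto an ordinary block cipher is to derive a mask from the tweak and whiten with it, roughly in the spirit of the LRW/XEX constructions. Here block_encrypt is a placeholder for any block cipher; the point is that the tweak, unlike the key, can be public and changed cheaply per block.

    # Rough sketch only: a mask derived from the tweak is used to whiten the data.
    # block_encrypt(key, block) is a stand-in for any block cipher; this is an
    # illustration of the concept, not a vetted construction.
    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def tweakable_encrypt(block_encrypt, key, tweak, plaintext):
        delta = block_encrypt(key, tweak)      # tweak-dependent mask
        return xor_bytes(block_encrypt(key, xor_bytes(plaintext, delta)), delta)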
Back with DES we had a fixed data transposition, which led to the idea of modifying the data going into the block cipher in either a fixed or a changeable way (think of the Unix password system).
This gave rise to the idea of "whitening," where you XOR the data with a changeable value (the equivalent of an IV). A programmable permutation has also been touted for software-only systems (in hardware it takes up too much real estate). Neither method is in itself a secure step, but each cheaply and easily adds to the other problems a cryptanalyst faces.
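In code, that kind of whitening is just a pre- and post-XOR around the core cipher, much as DES-X did it. A minimal sketch, with block_encrypt, k_pre and k_post as placeholder names:

    def whitened_encrypt(block_encrypt, key, k_pre, k_post, plaintext):
        # XOR a changeable value into the data before the core cipher
        # ("pre-whitening") and another after it ("post-whitening").
        pre = bytes(p ^ w for p, w in zip(plaintext, k_pre))
        core = block_encrypt(key, pre)
        return bytes(c ^ w for c, w in zip(core, k_post))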
You could look on it as a "pre-ciphering" of the data. I and others have at various times proposed using a stream cipher instead of a fixed IV to do the same thing but get real strength out of the process for little extra cost (the downside is keeping the system in sync, but there are ways of making it self-synchronising).
Likewise, you can "whiten the key" in various ways before, during, and after key expansion, again by permutation or substitution, with a fixed IV or via a stream cipher (which, again, I and others have proposed at various times for both software and hardware implementations).
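A sketch of the key-side version, assuming the cipher's key expansion has already produced a list of round keys; the masks could come from a fixed IV or from a stream cipher, as above:

    def whiten_round_keys(round_keys, masks):
        # XOR a per-round mask into each expanded round key;
        # the cipher core itself is left unchanged.
        return [bytes(k ^ m for k, m in zip(rk, mask))
                for rk, mask in zip(round_keys, masks)]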
None of the above should, in the general case, overly affect the strength of the block cipher design (yes, I know there are specific cases that prove the "Don't do it, Jim" rule ;)
Then you can look at the actual data ciphering, and this is where things can get a bit dodgy for the strength of the cipher.
If you loosely assume that a block cipher is based on a Feistel network of some kind, then you have the one-way functions and the reversible mixing functions.
You can "moderately" safely apply whitening in all its forms to the inputs and outputs of the one-way and mixing functions.
You can also apply reversible data-flow switching (block permutation) to more complex Feistel mixing, as well as varying the number of rounds. These can, and in most cases will, affect the strength of the block cipher, so considerable care has to be exercised in both the design and the selection of the control input.
It is arguable (and has been argued) that this is a very desirable idea: if somebody copies your design without knowing what is good and what is bad, then you have a system that is always strong for you, but one with the highly desirable feature (in some circles) of occasionally being considerably weaker for those whose traffic you might wish to look at.
This is kind of doing an end run around Kerckhoffs' axiom (restated by Claude Shannon) that "the enemy knows the system". Effectively it gives your opponent what is occasionally a "stealth" Trojan horse...
You can then start to play with the way the one-way and mixer functions actually work: for instance, instead of XOR across a byte you might use add/subtract, or even a more complex function (a reversible DSP multiply-add, for instance).
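For example, a mixer that adds on encryption and subtracts on decryption stays fully reversible (the word size here is an arbitrary choice for illustration):

    WORD = 1 << 8  # 8-bit words, purely for illustration

    def mix_add(a, b):
        return (a + b) % WORD      # used on encryption

    def unmix_sub(c, b):
        return (c - b) % WORD      # inverse, used on decryption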
One method that has been around for many years is "programmable S-boxes," or state arrays.
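A sketch of what "programmable" means here: derive the permutation from key material instead of hard-wiring it, in the same spirit as Blowfish's key-dependent S-boxes or RC4's key setup (key_bytes is a placeholder for bytes produced by the key schedule):

    def build_sbox(key_bytes):
        # Key-driven Fisher-Yates shuffle of 0..255; swapping entries
        # guarantees the result is still a permutation (a valid S-box).
        sbox = list(range(256))
        j = 0
        for i in range(255, 0, -1):
            j = (j + key_bytes[i % len(key_bytes)] + sbox[i]) % (i + 1)
            sbox[i], sbox[j] = sbox[j], sbox[i]
        return sbox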
The problem is not so much what we call things (whitening, tweakable, programmable, etc.) but that in many respects the terms are too general to be specifically meaningful (concepts, not actuality) except in a clear context.
And, as we know from "software specs," this is where things go astray.
Perhaps the "concepts" need a taxonomy developed as a framework in which specific implementations can be pigeonholed.