Schneier on Security
A blog covering security and security technology.
March 2, 2011
NIST SHA-3 News
NIST has finally published its rationale for selecting the five finalists.
Posted on March 2, 2011 at 7:53 AM
I was somewhat surprised to see how many complaints came up in places like the SHA-3 mailing list when the rationale wasn't published alongside the announcement of the finalists. Obviously everyone wanted to see the details, but NIST said right from the start that the rationale was coming and none of the choices seemed THAT surprising or controversial.
Thumbs up for Skein! Congrats to Bruce and the others.
Congrats to anyone that can understand any of that.
(I'm not a crypto guy, obviously. Just here for the donuts.)
I think the reason for the complaints is that the rationale should be determined (and therefore available) along with the determination of the finalists.
If you release a decision and only later produce a report detailing what led to it, it can seem as though the rationale was created after the fact to justify the results. Not saying that happened in this case, just that in the interest of transparency it's good to have all the info up front.
As I rest here in my hospital bed I keep getting those moments; sadly I awake to the reality that the food has a blandness and consistency that would make a ravenous warthog think twice...
To Niels and the team (yup, you as well Bruce) congrats on getting through.
It is, however, sad to see some of the contenders being dropped because NIST thinks they will not be able to establish the security in the next year.
I can understand why, but it sure looks harsh when something is insufficiently conventional to get through.
Congrats, Bruce, although I think they're going to end up going with BLAKE.
No disrespect to Bruce, but my favorites are BLAKE and Keccak. Their performance characteristics make them more versatile in modern organizations with so many different devices and systems. The military could also make use of them, because the developers of Type 1 encryption have switched from hardwired implementations to side-channel-resistant RISC processors custom-designed for crypto algorithms. This is so the algorithms can be changed without buying new hardware. I'm sure these two would have excellent performance on Type 1 devices, commercial FPGAs, desktops, servers, and mobile devices alike.
As Prof. Bart Preneel has said in invited talks about hash functions, by the time the SHA-3 process is finished, then we will know how to make better hash functions than SHA-3. :-) (My rough paraphrase from memory.)
NIST is angling to have the least chance of failure -- of a SHA-3 that is broken, or of our being unable to get confidence in it within the established timeline.
That's a fine strategy! But it is not designed to come up with the best hash function, which would take more time and incur a higher chance of failure.
@ Zooko Wilcox-O'Hearn
Honestly, I'd rather have half a dozen really good ones than the best one. Researchers will continue to crank out and review new algorithms. The most important thing for me is that I have fallback options in the case that one is found to be weak. Since this is often the case in crypto, I'm delighted that we have six [apparently] strong hash functions to work with.
They choose it because its all breakable by NSA
'They choose it because its all breakable by NSA'
Heh, so the NSA have a back-door that enables them to decrypt the hash to recover the original source data. If they really did, wouldn't that make a good data compression algorithm? ;-)
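Murray's joke has a pigeonhole argument behind it: a hash emits a fixed-size digest no matter how large the input, so distinct inputs must collide and no general "decompression" is possible. A quick illustration with Python's hashlib:

```python
import hashlib

# SHA-512 always emits 64 bytes (128 hex characters), whether it is fed one
# byte or ten megabytes -- so infinitely many inputs share each digest and
# the original data cannot be recovered from the hash alone.
tiny = hashlib.sha512(b"x").hexdigest()
huge = hashlib.sha512(b"x" * 10_000_000).hexdigest()

print(len(tiny), len(huge))  # 128 128
```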
@ Nick P,
"Honestly, I'd rather have half a dozen really good ones than the best one. Researchers will continue to crank out and review new algorithms."
Yes, I'd much rather have a collection of basic, trusted, and well-characterised primitives and a way to make the primitives into as many hashes as are required at any one point in time.
I'm not looking for speed or memory area or even extensibility; I'm looking for reliable and well tested.
Which means you are making my "frameworks" argument for me.
As you quite rightly say,
"The most important thing for me is that I have fallback options in the case that one is found to be weak. Since this is often the case in crypto"
And the best way to ensure this is have a bloody framework that is mandatory.
It's why I keep banging on about NIST coming up with frameworks not algorithm competitions.
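One minimal reading of the "framework" idea, sketched in Python (the tag format and registry here are my own illustration, not anything NIST has specified): store each digest with an algorithm identifier, so a primitive found weak can be retired from the registry without changing the storage format.

```python
import hashlib

# Hypothetical registry of approved primitives. Retiring a broken hash is
# just removing its entry; old tagged digests then fail closed on verify.
REGISTRY = {
    "sha256": hashlib.sha256,
    "sha512": hashlib.sha512,
}

def tagged_digest(data: bytes, alg: str = "sha512") -> str:
    """Return 'alg:hexdigest' so a verifier knows which primitive was used."""
    return f"{alg}:{REGISTRY[alg](data).hexdigest()}"

def verify(data: bytes, tag: str) -> bool:
    alg, _, expected = tag.partition(":")
    if alg not in REGISTRY:        # primitive retired or unknown: fail closed
        return False
    return REGISTRY[alg](data).hexdigest() == expected
```

The point of the sketch is that the fallback path is mandatory: every stored digest already says how it was made, so migration is a registry change rather than a data-format change.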
So you are making my case for me; the question is how do we get NIST to take note...
Anyway, the nice kind nurse has given me my painkillers and sleeping tablet, so time to turn in at just gone midnight in London.
Selecting JH is a grave mistake; who wants an algorithm that cannot be implemented by its author?
Wow, nice timing....
I implemented Skein for the first time this week as a file-copy checksum generator. The fact that it's out-performing SHA-512 (which, in this implementation, is the fastest SHA variant) by 50% has gotten me pretty excited.
It's nice to have the details of why NIST chose each finalist. Now I'm really interested in comparing each one in a real-world implementation.
So far, I'm very pleased that I bought that Skein shirt :)
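For anyone wanting to reproduce this kind of file-copy checksum test, the general shape of such a tool is sketched below with Python's hashlib (which ships SHA-512 but not Skein; a Skein binding, if installed, would replace only the constructor line):

```python
import hashlib

def file_checksum(path: str, algorithm: str = "sha512",
                  chunk_size: int = 1 << 20) -> str:
    """Stream a file through a hash in fixed-size chunks to keep memory flat."""
    h = hashlib.new(algorithm)   # swap in a Skein constructor here if available
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

Hashing in chunks rather than reading the whole file keeps the tool usable on files larger than RAM, which matters for a file-copy verifier.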
Hearty congratulations, Bruce and team, for the selection of Skein.
It's good that NIST revealed its justifications for its choices. They had a nice compilation of all the attacks and performance criteria in one document. Remember, though, that performance isn't just about some desktop or server computer. Many of the devices required to encrypt content will be resource-constrained systems: typical embedded systems, smartphones, security appliances, etc. That's why I like algorithms like BLAKE that do well in all these environments, whereas Skein mainly owns in PCs and servers. Try the other finalists to see if the performance is good (if not the best). Give us your results (and hardware config.).
@murray re NSA back-door
It is no secret (since it was on national TV) that Bruce included a backdoor to Twofish. It would not be a leap to assume Skein has one as well.
@ Nick P
What you say is true. Believe me, I'm still using a Google G1, and that thing benefits from every CPU cycle and bit of memory saved. However, in large-scale database systems where you're hashing HUGE numbers of records, a high security margin and speed are everything.
So while the SHA-3 competition is important, it ultimately decides which hash is the "jack of all trades". I think we can all agree that any finalist which is not ultimately determined to be broken should be considered for a place in industry.
My testing is trying to determine which will perform best for large scale operations where resources aren't quite so constrained.
Thanks for posting your results. In the test, Skein is unsurprisingly almost as fast as MD5, while my favorite general-purpose hash (BLAKE) is last. But a four-second difference isn't a showstopper. Goes to show these teams have all done a good job producing efficient algorithms.
I'd actually like to see a native test in C where each algorithm performs tests on a variety of platforms: slow/fast CPU; little or lots of RAM; 32 bit and 64 bit; with/without MMX/SSE/Altivec; lots of tiny files vs one or two huge ones. This would show us the upper and lower bounds of the algorithms in many realistic deployment situations. This would help implementers decide which to use for their purposes.
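As a starting point, here is a rough message-size sweep in Python rather than the native C harness described (so absolute numbers will be pessimistic, and only the stdlib algorithms are covered), showing the tiny-input vs bulk-throughput contrast:

```python
import hashlib
import time

def throughput_mb_s(algorithm: str, size: int, repeats: int = 5) -> float:
    """Best-of-N rough MB/s for hashing a single message of `size` bytes."""
    data = b"\x00" * size
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        h = hashlib.new(algorithm)
        h.update(data)
        h.digest()
        best = min(best, time.perf_counter() - start)
    return (size / 1e6) / max(best, 1e-9)   # guard against timer resolution

# Small messages expose per-call overhead; large ones expose bulk speed.
for alg in ("md5", "sha256", "sha512"):
    for size in (64, 1 << 20):
        print(f"{alg:>6} {size:>8} B: {throughput_mb_s(alg, size):10.1f} MB/s")
```

Extending the same loop over 32-bit vs 64-bit builds and SIMD-enabled vs plain compiles would give the upper/lower bounds Nick describes; the Python version only captures the message-size axis.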
@ Nick P,
"I'd actually like to see a native test in C where each algorithm performs tests on a variety of platforms..."
I'd like to see the same BUT where there is no timing side channel leaking information.
Although slightly less important for a hash than for a cipher, it is still something we should strive to resolve before everybody takes the competition code and stuffs it into their code libraries, as happened with AES.
Otherwise we are giving certain people a gift horse backdoor again...
@ Clive Robinson
I hope library coders take notice of your point. It's a good one. We see the covert channel thing play out again and again because it's forgotten. I wonder what covert channel protections they put into the European stream cipher competition, now that you have me thinking about it. I liked their competition and planned to use some of the ciphers, possibly. Now you have me wondering if I should do a covert channel analysis of them...
Clive, you know you're never going to get a marketing job, right? :P
@ Nick P,
"Clive, you know you're never going to get a marketing job, right? :P"
I like to think I'm too honest for that sort of work (yes, I can hear the sharp intake of breath from certain marketing types ;)...
However, the reality is that once, long ago, in a period of my life now thankfully clouded by the mists of time, I sold life assurance. Worse, for a short time I sold people (that is, I was a modern-day slave trader known as a recruitment consultant). Both jobs caused me to develop a skin condition, and most times I had a strange feeling like I'd been caught in a spotlight's glare in front of an audience of thousands just as I was picking my nose and chewing it ;)
So you're right, I guess; I can feel shame and guilt, and deeply feel self-disgust and loathing, thus I'm not cut out for that sort of work...
"Now you have me wondering if I should do a covert channel analysis of them..."
I've heard a rumour from some old Greek bloke that even wooden horses have teeth; it's just a question of who gets bitten and when, thus you should always approach with caution.
Side channels are a pain because you first have to realise they exist and then be able to see them; it can be like wandering through a misty hall of mirrors where some of the mist is scalding steam.
I tend to use the "pressure cooker" analysis method. That is, if you are getting out less than you are putting in, then somewhere it's either escaping or about to blow up in your face, or both; it's just a question of time.
However, you also have to watch what comes back at you. Nearly everybody who thinks of TEMPEST thinks "emanations/emissions"; few think "susceptibility" or even "remodulation" or "transparency", and thus they don't see a great deal.
They forget the simple fact that all "ports" on "black boxes" are "bidirectional", and what goes in has to come out at some point; it's not that it does it that's important, but what else it brings with it that counts.
I long ago came to the conclusion that side channels of one form or another exist, and that they cannot be avoided or, to a certain extent, be stopped even where known.
A simple example from the field, as it were.
I become aware of a hidden antenna near my equipment; what does this tell me?
Actually not a lot; it might be completely unrelated to what I'm doing, so how do I find out without going at it like a bull in a china shop?
Well, the first thing I do is observe, not just from my current position but from another position as well. In effect I scan slowly up the EM spectrum looking for signals, using a "baseline" sufficiently large that I can triangulate the source well enough to tell whether it is coming from the unknown antenna or not. I have to do this a number of times to see if there are any changes (they might only use it 9-5, for instance).
If there are any emanations I then analyse them. It is possible, for instance, for the antenna to passively "re-radiate" a signal from elsewhere (there is a Cold War story about a Ferris wheel and this effect).
For many people this is as far as it goes, because of the "emanations only" thinking.
Well from passive observer you can go to cautious investigator.
You can "illuminate" or "paint" the target antenna with low level EM radiation and observe from another position just how much of this it re-radiates. Surprisingly you can learn a great deal.
Such as the bandwidth of the antenna, the length of the feedline, the front-end bandwidth of any equipment connected to the feedline, and in some cases what frequency the receiver is actually tuned to.
Knowing that you can do this opens up possibilities when you examine other systems.
For instance, "quantum crypto": in the simplest case you have a polariser at either end of the fiber; Alice has a photon emitter and Bob has a photon detector.
It is "assumed" that you cannot determine the state of the polarisers because they are driven by true random generators.
Is this assumption true?
Some people used to think so, but they got wrapped up in the theory and forgot about simple observation...
If you could somehow get access to the fiber, and you knew what frequency the photon source and detector ran at, you could actually learn a great deal of information, sufficient to know the state of one or both of the polarisers, without disturbing the all-important "quantum channel" that Bob and Alice are monitoring to see if eavesdropping is occurring.
For instance, if you pick a very different frequency of operation for your photon source, you can shine it back down the fiber towards either Alice or Bob, and because nothing in life is perfect, some of it will bounce back towards you. You can tell from the way it bounces back what state the polariser is in.
The polariser is inherently wide-band compared to either Alice's emitter or Bob's detector.
The only two problems you have are, firstly, not disturbing Alice and Bob's quantum channel (this is doable) and, secondly, avoiding your photons being discovered, which is doable provided Alice and Bob are insufficiently aware of the potential of this attack.
@ Clive Robinson
"For many people this is as far as it goes, because of the "emanations only" thinking."
Are you referring to lay people? Regular electronic engineering types? Because I know the EMSEC types have been dealing with active RF attacks for a long time, hence the cell phone ban with the STU-III's. They also tried to ground certain equipment, probably to absorb active attacks. I'm kind of curious if grounding leaking computers sends the leaks directly into the ground in a way that's recoverable. Odds are that recovery is incredibly hard because I've seen no evidence governments are doing this in their EMSEC intelligence programs. They are still using the basic passive and active attacks, albeit with upgraded software.
On the issue of side channels, I think the hardware the algorithm is implemented on matters a lot. That is to say, we can look for channels at the abstract algorithm level, but we must also analyse each implementation and deployment for covert channels caused by the interaction of the algorithm and the hardware. For example, a processor with no traditional cache and a FIFO queuing scheme can run basic AES fine, but add an Intel-style cache and suddenly we have leaking key material.
I think the only way to beat this, other than leak-resistant hardware, is to make a model of the security-critical software, make a model of the new hardware target, and use covert channel analysis techniques on these. Then the software or hardware can be modified to account for whatever is found. Problem: I don't see anything like this happening with the vast majority of crypto, largely due to logistics and time to market. It seems that only defense contractors and academics are really concerned with handling this problem.
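One concrete, if much simpler, instance of the algorithm/hardware interaction problem is data-dependent comparison time, and the usual software-level fix is a constant-time compare. A hedged Python sketch (the leak here is comparison short-circuiting, not AES cache behaviour, but the analysis principle is the same):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Byte-string equality may bail out at the first mismatching byte, so its
    # running time can depend on *where* secrets differ -- a timing channel.
    return a == b

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of mismatch position,
    # making the running time independent of the secret contents.
    return hmac.compare_digest(a, b)
```

The two functions compute the same answer; the difference is only visible to an attacker with a stopwatch, which is exactly why this class of bug survives functional testing.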
A chip, at the programming-language level, is pretty much XOR/OR/AND gates. Out of those you could, say, make an adder; if you have two adders that share the same wire, whether ground or whatnot, you can send abnormal-frequency data bits into one to get it to change the output of the other.
Those changes go through to the output pins, so what seems like junk on the input can become legitimate messages out of the output to other chips.
That polariser trick could work for reading a chip's internal structure. If the chip works at 15 MHz and you have a 150 MHz multimeter, you could read the back voltages (wires are inductors, though small ones) some 10 transistors deep. Different back voltages would come from gates that are open, closed, or opening.
@ Nick P,
"Are you referring to lay people? Regular electronic engineering types? Because I know the EMSEC types have been dealing with active RF attacks for a long time"
Lay people are "civilians" in this particular battle, and most design engineers, judging by what they have done, are little more than "trench meat".
The real problem is in academia and with those who come up with standards; I almost feel sometimes that some of them are working for the other side.
If you trawl through the Internet and various scientific journals, there is little information except on EMC.
Historically, nearly all the TEMPEST sites on the Internet were of the "big scary monster" variety and bandied around things like -174dBm, -163dBc etc. as though they had meaning.
Some of these people had obviously been on "You Do It This Way, Stupid" courses from the likes of various national "communications establishments" and thought that there was some big secret they had become part of. By and large they dressed up basic EMC information from before the 1980s with spurious figures and fancy names they had overheard, and set up their shingle to tout for work.
We had the set-top box crackers doing their thing by putting boxes in freezers and browning out supply lines, as the first examples of fault injection attacks with purpose. The battle Sky had with these people reads like a Greek epic.
Then phone cards got the cracker treatment, and some of these were based on smart cards. That industry basically stuck its head in the sand and went for the quick and dirty solutions, backed up by heavy, heavy PR.
Then, shortly before the end of the last century, we had the DPA paper, which rather threw a spanner into the smart card world and opened a vast chasm even they could not paper and gloss-paint over.
Ross Anderson started looking at "self-clocking logic" as a potential solution, whilst some guys started looking at injecting faults directly into the chip with IR light and lasers and, in the case of a Belgian researcher, EMP attacks from picoprobes made from miniature inductors built for the microwave circuit industry.
I actually contacted the authors of the DPA paper and told them very specifically what could be done with simple low-power RF illumination, where the chip modulates the RF and you can do the DPA contactlessly. I contacted Ross and warned him what RF illumination would do to his self-clocking logic, and he passed on the details of the Belgian researcher, whom I contacted and told about the issue.
Then the world went software bug/crack crazy as the Internet really started to get going, and not much surfaced about hardware issues.
Finally a student under Matt Blaze (he of crypto.com) actually produced a worthwhile "original" project that showed how "efficient systems" like PCs are transparent to things like time-based attacks from the keyboard, and finally opened people's eyes to "clock the inputs and clock the outputs", briefly, before they went back to sleep.
This "transparency" to time-based attacks is something that used to get taught to equipment designers before they went on EmSec Design 101.
There were also a few papers re-hashing the basic ideas Van Eck had written about in the 1980s, applying the benefits of modern technology such as DSP, with some very novel twists such as using photomultipliers to pick up the reflection of a CRT trace off a wall etc.
And as I noted to them at the time, in some respects it was old news, in that any serialised data source that goes outside the box is a potential liability, especially "hanging an LED off the data line" as a confidence indicator.
Then, finally, just a little while ago, two students over at the Cambridge labs showed that pushing RF into electronics such as random number generators has some serious implications.
This is a quarter of a century after I had shown not only how to screw with electronic purses but also how to predict and influence hand-held gambling machines such that you could clean out the company, and had tried to get people to take note... (and in one case got eased out of a job designing biometric scanners).
Guess what: I had high hopes that at last people would wake up to RF fault injection in the academic world and actually do some real in-depth research.
But... nah, nothing doing; the only research, if you can call it that, is what working design engineers in fab labs etc. are doing almost as a hobby, just as I did. And they pass it around by word of mouth, almost like an association of funny handshakes, possibly because they fear management.
Have a chat with Robert T and ask him how he came by his knowledge of such things; they are kind of "known in the industry" but no one can say quite where, or point to a good solid document.
It is something that is long past due being dragged out kicking and screaming into the light of day, but... nobody wants to do it. It's almost as if they are frightened you are going to kill the goose that lays the golden eggs, and to be quite honest I'm tired of it.
And it's not the first time, and I doubt it will be the last, that I've come across this "golden goose" mentality; forensic "science" is riddled with it, as are some other fields of endeavour. Putting your head in the sand and hoping it will go away is not the way you deal with this sort of issue. Various religions tried it, including burning people as heretics; it didn't work. At best it delayed things becoming "known"; at worst a lot of people have needlessly been hurt.
Anyway, I shall now step down off my soapbox and find a dark room to lie down in, "it being a quarter off the witching hour" 8)
@ Nick P,
Contrary to my earlier comment, I'm no longer trying to lie down. The local road repair men have just started using a pneumatic drill right outside the window of the hospital surgical ward I'm in, "argh"; there are unhappy patients asking for drugs and ear plugs...
"I'm kind of curious if grounding leaking computers sends the leaks directly into the ground in a way that's recoverable."
Yes, quite easily, as many amateur radio enthusiasts are aware.
The first question you should ask is,
"What is ground?"
Followed almost immediately by,
"Where is ground?"
Normally crypto kit is NOT grounded, for exactly this reason; it uses an isolation transformer to turn the system into a "balanced feed" close to or inside the equipment (it may have an additional external metal case that is grounded, but in much equipment, such as BID equipment, it's not).
The problem is how you get from the equipment to the local ground, and whether the local ground is of a sufficiently low impedance to make a difference...
Let us assume, as can often be seen, that the person installing the mains wiring has been a "neat and tidy" wireman.
In the UK certainly, and often in the US, you will see the "earth wire" having been wound around a BIC biro six or more times to take up the slack,
in the process creating a really nice inductance that takes the ground impedance well above 300R from LF upwards; so not quite from DC, but definitely until daylight.
Thus the earth lead actually acts as an antenna...
Even at HF the inductive impedance of the wire is so high that Amateur Radio operators have been known to have two earths: the DC mains earth with an RF trap at the box, and a "series tuned" HF earth to a real ground close by (think galvanised 2-inch steel pole driven ten or twenty feet into the ground in six or eight places around the "shack", with "lightning conductor" copper tape connecting them all together).
So yes, earthing equipment for EmSec is a very real, very difficult problem. It's easiest to just ignore it altogether with an isolating mains transformer and appropriate RF filters on either side.
Basically you have to ground the input side of the transformer as a "safety earth" and respect the unbalanced live and neutral. However, on the output side it's "balanced", and you float the output side as a "balanced feed" inside a coaxial outer that is connected to a "special ground" but, importantly, not to the equipment chassis unless it's isolated from the shielding casing inside it. Further, you make damn sure that, just as with audio equipment, there are no "earth loops" or any other kind of loop, as these just love to radiate even when 99.999% shielded (see shielded loop antennas used for direction finding to see the issue).
Part of the big, big problem and expense of crypto cells is earthing, and one favourite solution has always been "the hole in the ground"...
"Have a chat with Robert T and ask him how he came by his knowledge of such things"
Let's see which is the best answer.
1) I got it the old fashioned way "I stole it!"
2) "I could tell you, but then I'd have to kill you."
3) I'm just that ******g smart.
4) Cursed with a mind fascinated by puzzles.
6) It's elementary, my dear Watson!
7) In the Blog-sphere, it's always possible that I'm actually a synthetic person! (too many thoughts like that and I'll need to double my meds)
Nice history of things and more specific examples. If I get a big grant from the Gates Foundation, I'll focus it on figuring out everything the NSA already knows about this subject, and maybe how to get around it. It might be easier to figure that out penniless than to get the grant...
Lmao! I'm afraid No. 7 is already taken by Clive. Seriously though, a serious answer would be nice. I'll rephrase the question: are there any texts/publications, aside from the popular papers Clive mentioned, that would tell aspiring E.E. majors what to look for and how to deal with it? I remember having bookmarked two texts on the subject in the past (before my HD crashed), including one by a TEMPEST certified engineer. I can probably find those in seconds, but you might be aware of more obscure references.
I'm sure the subset of the security community that gives a shit about EMSEC would appreciate any references you can give to shed light on detecting, reproducing, shielding, countering, etc. emanations.
Actually the fundamental gates in nearly all practical electrical/electronic circuits are the NAND and NOR; which is used depends on the technology.
For instance, in discrete-component Diode-Transistor Logic (DTL) the NOR gate was often the preferred choice. However, once on silicon with Transistor-Transistor Logic (TTL), the basic functional block was the NAND gate (depending on how you looked at it ;)
The problem with nearly all logic gates in use is that they are "non-reversible", unlike the likes of the Fredkin or Toffoli gates. Reversibility has an interesting connection to energy and the transportation of information.
First off, though, you have to understand the concept of the "Controlled-NOT gate" (CNOT). In essence it is an XOR gate with two inputs and two outputs. It can be shown to be a fundamental building block of many, but not all, logic functions.
That is, the control line goes straight through the CNOT gate, and the state of the data bit is inverted or not depending on the control line. However, it is important to note that, unlike an XOR gate, the CNOT gate can have its outputs pushed back in to get the original inputs.
The Fredkin gate is effectively similar, except that alongside the control line it has two data lines, which it simply swaps or does not swap under the influence of the control line (you can fake this behaviour easily enough with relays, but not with the likes of ordinary transistor-based gates).
The Toffoli gate is in effect a "Controlled-Controlled-NOT gate" (CCNOT); it is like the CNOT gate except for an additional control line. The state of the data is only inverted when both control lines are asserted [thus D' = D XOR (C1 AND C2)]. This can be shown to be "functionally complete" and thus a universal gate from which all others can be built.
Now both the Fredkin and Toffoli gates work within the "billiard ball model", which means all the mass that goes into the gate comes out of the gate. Likewise, if you push the balls output from the gate back through it, they appear at the inputs in the original state.
Hence not only are they reversible; additionally, with the mass, what goes in comes out, which is quite an important point within physics, as the "conservation of mass" shows that the gates are not wasteful. It has also been shown that mass and energy are equivalent...
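The gates described above are easy to model at the bit level, and the sketch below checks the reversibility property directly (these are plain truth-table models, nothing quantum):

```python
def cnot(c, d):
    """Controlled-NOT: the control passes through; data flips iff control is 1."""
    return c, d ^ c

def fredkin(c, a, b):
    """Controlled swap: the two data lines are exchanged iff the control is 1."""
    return (c, b, a) if c else (c, a, b)

def toffoli(c1, c2, d):
    """Controlled-controlled-NOT: D' = D XOR (C1 AND C2)."""
    return c1, c2, d ^ (c1 & c2)

# Reversibility: pushing any gate's outputs back through it recovers the inputs.
for bits in [(0, 1), (1, 1)]:
    assert cnot(*cnot(*bits)) == bits
for bits in [(1, 0, 1), (0, 1, 1), (1, 1, 0)]:
    assert fredkin(*fredkin(*bits)) == bits
    assert toffoli(*toffoli(*bits)) == bits
```

Note that each gate emits as many bits as it consumes; that conservation is exactly what lets the output be pushed back through to recover the input, which an ordinary NAND or NOR (two bits in, one bit out) cannot do.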
Now, to 99.9999% of the world, and probably the same percentage of electronic engineers, the lack of reversibility in NAND and NOR gates is a shoulder-shrug exercise.
It is only a few who think: hey, if "mass in = mass out" and "energy = mass", then the logical consequence is "energy out = energy in = 100% efficiency"... Thus energy lost to the surroundings = 0, which has significant implications for chip density, where one of the bigger issues is getting rid of heat.
However, as important as these gates are in an information-theoretic way, and in a very real practical way due to zero energy loss, there is the additional fact that the "lost energy" is "energy leaked to the environment", and this might just be coherent enough to leak information to the environment.
Interestingly, from this perspective, just over two years ago at the University of Innsbruck the Quantum Optics and Spectroscopy Group announced that some of their members (T. Monz, K. Kim, W. Hansel, M. Riebe, A. S. Villar, P. Schindler, M. Chwalla, M. Hennrich, and R. Blatt) had made a Toffoli gate using trapped ions.
If the technology can be made to work reliably, there is the very distinct possibility that we can have hardware without unknown energy-based side channels, which would make life so much easier in some respects.
@Clive Robinson , cheers for the link.
About hidden data.
say you have 3 logic gates set up to make
xor = 20
and = 30
or = 15
if we start with 41 and compress the gates (41)^20^30^15 = 35
35 - xor,and,or
24,0,13 = 13
(13)^15^30^20 = 16
41/16 something is wrong (could be data to the gates or wrong gate order)
35 - xor,and,or
24,0,13 = 35
(35)^15^30^20 = 41
should knock out some possible combinations
I hope you win, Bruce; you deserve it after the AES competition. Best regards.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.