Will Keccak = SHA-3?

Last year, NIST selected Keccak as the winner of the SHA-3 hash function competition. Yes, I would have rather my own Skein had won, but it was a good choice.

But last August, John Kelsey announced some changes to Keccak in a talk (slides 44-48 are relevant). Basically, the security levels were reduced and some internal changes to the algorithm were made, all in the name of software performance.

Normally, this wouldn’t be a big deal. But in light of the Snowden documents that reveal that the NSA has attempted to intentionally weaken cryptographic standards, this is a huge deal. There is too much mistrust in the air. NIST risks publishing an algorithm that no one will trust and no one (except those forced) will use.

At this point, they simply have to standardize on Keccak as submitted and as selected.

CDT has a great post about this.

Also this Slashdot thread.

EDITED TO ADD (10/5): It’s worth reading the response from the Keccak team on this issue.

I misspoke when I wrote that NIST made “internal changes” to the algorithm. That was sloppy of me. The Keccak permutation remains unchanged. What NIST proposed was reducing the hash function’s capacity in the name of performance. One of Keccak’s nice features is that it’s highly tunable.

I do not believe that the NIST changes were suggested by the NSA. Nor do I believe that the changes make the algorithm easier to break by the NSA. I believe NIST made the changes in good faith, and the result is a better security/performance trade-off. My problem with the changes isn’t cryptographic, it’s perceptual. There is so little trust in the NSA right now, and that mistrust is reflecting on NIST. I worry that the changed algorithm won’t be accepted by an understandably skeptical security community, and that no one will use SHA-3 as a result.

This is a lousy outcome. NIST has done a great job with cryptographic competitions: both a decade ago with AES and now with SHA-3. This is just another effect of the NSA’s actions draining the trust out of the Internet.

Posted on October 1, 2013 at 10:50 AM • 57 Comments

Comments

Secret Police October 1, 2013 11:17 AM

I’ve never trusted NIST and have been using Skein for everything, including Android backups and Bitcoin wallet encryption. NIST recommendations can never be trusted again unless the totalitarian global spying apparatus is dismantled.

Matthew Green October 1, 2013 11:27 AM

What a terrible world where we even have to think about this.

Since I’m not a hash function expert I’m torn by the arguments on both sides. To be clear, the Keccak team is involved in the changes — Joan Daemen has weighed in supporting the capacity reductions. Moreover, nobody has been able to enumerate any likely scenarios in which the reduced pre-image resistance leads to a practical concern. The general argument is that (2^128) and (2^256) are both large numbers, and the ‘security paranoid’ are best off putting our effort into more practical aspects of the construction such as the number of rounds.

This is all very convincing. At the same time, we have an enormous pile of NIST standard curves and primitives all targeted towards the 2^256 security level. If we didn’t truly care about this security margin we could save a whole lot of dead trees (or bytes or whatever NIST docs consume these days).

The sad thing is that these changes are almost certainly not driven by any sort of NSA conspiracy. What’s at stake here is not a new backdoor, but rather the opportunity for NIST to regain some trust. I hope we see that happen.

PS Bruce, what do you think? You designed Skein to one set of standards. Would you have made changes if the competition had different requirements?

Clive Robinson October 1, 2013 11:58 AM

The point to take from this is that, irrespective of whether the NSA was involved or not, the changed algorithm is not the one that was subject to intense scrutiny.

So, to be blunt, not only is it untested, it is not what the competition asked for. The way NIST has gone about this is a dismal failure, as well as being completely unfair to the other entrants. As a result, a lot of hard-won resources have been wasted by NIST for absolutely no good reason.

If they really think the changes are what the market wants, then they should let the market decide by including both the competition winner in full and their revision in the standard, and give the market the choice.

Which I suspect in these current times of paranoia will be for the one that appears to give the greatest security.

Nicholas Weaver October 1, 2013 12:50 PM

Part of the reason nobody doubts AES, despite NSA meddling, is that unlike SHA-3, there was no post-decision meddling.

The AES standard was not only open and transparent, with 3 good finalists (Twofish/Serpent/Rijndael) and 2 finalists with performance issues (RC6’s multiply, Mars’s WTF structure), but the winner (Rijndael) was adopted unmodified.

For NIST to suddenly decide to change the hash arbitrarily in the nebulous name of performance is to betray the process that made the AES standard such a success.

Perseids October 1, 2013 1:28 PM

Personally I think the whole issue about the capacity reduction is mostly FUD (though it is understandable in the given situation). Keccak was designed with a variable capacity in mind, and according to the Keccak team the proposed capacities for the different varieties were mostly an afterthought to accommodate the competition specifications.

I am no fan of the proposed 128-bit security variation (i.e. 256-bit capacity), but that is mostly because I’d like to keep the standard as simple as possible, and that also means using only one capacity for every output size. And I am in favour of 256-bit security there (i.e. 512-bit capacity): the performance penalty compared to 128-bit security is minimal, and it is still twice the minimum acceptable size, so if larger capacities will save us from advanced cryptanalysis (which is only speculation at this point), this should be enough. Also (in contrast, possibly, to 128-bit security) 256 bits is secure against brute-force attacks for all eternity.

Btw., NIST is very open to discussion about this topic. There is a heated debate going on on the contest mailing list.

jimrandomh October 1, 2013 1:36 PM

Why the emphasis on hash function performance? I don’t think people actually care about hash functions being slow, with one exception: people often incorrectly use regular hash functions when key-derivation functions like bcrypt would be more appropriate, and the faster the hash – especially as implemented on FPGAs and ASICs – the easier this is to exploit. Similar issues would arise in any context that involves reversing hashes to recover the last bits of a mostly-known plaintext, which could also happen with (for example) hashed key material generated by an under-seeded PRNG.
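To make that concrete, here is a minimal sketch using Python’s standard hashlib (PBKDF2 standing in for bcrypt or scrypt, which live outside the standard library): password handling should go through a deliberately slow, salted KDF rather than a single fast hash, whatever the underlying hash function is.

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# Wrong tool: one pass of a fast hash makes offline guessing cheap,
# especially on GPUs/FPGAs/ASICs.
weak = hashlib.sha256(salt + password).hexdigest()

# Better: an iterated, salted KDF (PBKDF2 here; bcrypt, scrypt, or Argon2
# are other options) makes every guess deliberately expensive.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
print(weak, key.hex())
```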

Rather than a hash which is as fast as possible, I think we want a hash function that minimizes the performance advantage of FPGAs and ASICs over CPUs. Keccak is not that function.

John Smith October 1, 2013 1:48 PM

I’m very disappointed by your post.

Coming from someone with your influence, I would expect much more than just FUD and vague statements.

The so-called changes that NIST is proposing are not modifying Keccak in any meaningful way. To put it briefly, they simply propose to remove the crazy security level for pre-image resistance (the one at 2^512 operations), and add another security level with better performance (~30% faster). Then there is some difference in the padding.

Someone with your expertise should know that these changes are not tweaks that would be the result of some hidden influence from the NSA. I would say that it is your role to explain this to laymen. It is your role to clarify the situation and add value to the debate. But instead you prefer to go for the FUD.

Maybe you have some secret agenda?

dink October 1, 2013 2:06 PM

good call brucey boy! let’s not update standards ever because we can’t trust our crypto community to vet them. 🙂 clearly the nsa would wait until after news has published to push their way into this standard. your usual flawless logic prevails again!

VeryOdd October 1, 2013 2:25 PM

I just attempted to visit the NIST website and I got a message saying the following:

“Due to a lapse in government funding, the National Institute of Standards and Technology (NIST) is closed and most NIST and affiliated web sites are unavailable until further notice. We sincerely regret the inconvenience.”

Is this something that happens a lot?

Gweihir October 1, 2013 2:53 PM

It will be very interesting to see whether NIST insists on these changes, or whether they are going to do the original Keccak now.

Also, the changes proposed may make a fascinating subject for some gifted cryptographers.

Scott October 1, 2013 3:22 PM

@Matthew Green

The general argument is that (2^128) and (2^256) are both large numbers, and the ‘security paranoid’ are best off putting our effort into more practical aspects of the construction such as the number of rounds.

The thing is, 128-bits is probably too small for a general purpose cryptographic hash since the collision resistance is only 64 bits. While there are many cases where that is acceptable, such as key derivation or message authentication, it’s likely too small to use for signatures. 80 bits of collision resistance is probably safe for the foreseeable future, but we should probably set the minimum at 96 or 128 bits these days (192 or 256 bit hashes) just to be prudent.

I personally don’t see any advantage to having a general purpose hash function with less than 256 bits of output. If you are really that tight on space, you can truncate the result (note that Keccak is an arbitrary length hash function and they could have simply specified a recommended minimum output length for various applications).
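As a rough illustration (assuming a Python whose hashlib ships the SHA-3/SHAKE functions, i.e. 3.6 or later), getting a shorter output is trivial, either via the extendable-output SHAKE variants or by truncating a longer digest:

```python
import hashlib

msg = b"example message"

# SHAKE256 is the extendable-output flavour of Keccak: ask for any length.
print(hashlib.shake_256(msg).hexdigest(32))  # 256-bit output
print(hashlib.shake_256(msg).hexdigest(20))  # 160-bit output, if space is tight

# Or simply truncate a fixed-size digest (not the same function as SHA3-256,
# which uses a different capacity, but fine when you just need fewer bits).
print(hashlib.sha3_512(msg).digest()[:32].hex())
```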

Pavel Roskin October 1, 2013 3:23 PM

John Smith, I know something about cryptography, but this is not about cryptography, it’s about procedures and trust. It would be bad even if the additional changes were meant to make the SHA-3 algorithm more secure. It doesn’t matter that the Keccak team is involved. What matters is that the changes are made after the competition. Some things should follow an established procedure to be trusted by the public. For example, a suspected criminal can only be convicted by a court, not by experts.

The NIST site is down, but here’s a quote from Wikipedia:

‘ NIST noted some factors that figured into its selection as it announced the finalists:

Security: “We preferred to be conservative about security, and in some cases did not select algorithms with exceptional performance, largely because something about them made us ‘nervous,’ even though we knew of no clear attack against the full algorithm.” ‘

Now NIST is making everyone else nervous. How ironic.

z October 1, 2013 4:58 PM

Even if these modifications are not backdoors and have legitimate reasons, I can’t figure out why NIST is doing them. Leave it alone. If Keccak will work as is (and it should, otherwise it shouldn’t have been selected), then don’t touch it. Everyone is hyper-sensitive to tampering with cryptographic standards right now, and with good reason. Any benefits NIST gets by making these changes cannot possibly outweigh the damage it will do to their credibility.

Brian October 1, 2013 5:08 PM

While I completely understand that some level of conservative “paranoia” is part of being a cryptographer, the reaction to the proposed NIST changes from some people has got to be among the most ridiculous things I’ve ever seen in the cryptographic community, particularly with regard to the capacity. The idea of a conspiracy driven by an adversary that can attack a 128-bit or 256-bit security level function is the kind of thing the old Bruce Schneier would have mocked as a movie plot threat (although that would be a terrible movie), especially when NIST has reasonable justification for the changes, like standardizing security levels for all attacks and improving performance.

This would be one thing if it was random posters on Reddit. But serious cryptographers discussing this issue seem to be focusing less on cryptographic analysis than on looking for the NSA hiding behind every tree and under every rock. And THAT seems like a problem that could have long-term security implications. In this case, it’s a definite possibility that various cryptographers will end up driving people away from a solid hash function (and potential authenticated encryption function) for no solid cryptographic reason. If that doesn’t concern you, you’re not really thinking about the long-term health of the public cryptographic community. Quite honestly, I think certain cryptographers should be at least a little ashamed of themselves here.

That said, I DO think there is a reasonable point to be made against changing SHA-3. The changed pre-image security level would be below the level of the original requirement, as far as I understand. A different initial requirement may have changed some of the other submissions. A perceived lack of “fairness” in the process might make it harder for NIST the next time they want to run a competition. And ultimately, reasonable or not, it might be in the best interests of everyone if NIST mollified the folks concerned that any changes could be a backdoor. I believe at this point that those folks are going to go out of their way to sink SHA-3 if they don’t get their way. That would ultimately be a worse outcome than missing out on the benefits NIST is looking for with the changes.

Clive Robinson October 1, 2013 5:11 PM

@ maxCohen,

    Yes, but Silent Circle trusts your work Mr. Schneier

It might have something to do with the fact that the person making the statement has worked with Mr. Schneier on Skein/Threefish.

I can’t help feeling that this is all more trouble than it could have been.

I’ve repeatedly said in the past that NIST should have set up a standard framework in which crypto algorithms, their modes, and protocols have a sufficiently standard interface to allow Plug-n-Play (PnP) upgrading.

If it had been done then most of these problems would take just a few seconds to fix by editing a configuration file and copying in a new PnP module…

As I’ve also said before, I would advise people to have the other NIST competition finalists in a “ready to run” state in your own framework. Neither the AES nor the SHA-3 winner is the most secure or conservative design, so both were always a compromise; and if for no other reason than prudence, having a ready-to-run fallback is good engineering practice.

And as far as I am concerned this lack of prudence is NIST’s most grievous sin; they should have known better.

Let’s say for argument’s sake that tomorrow some researcher comes up with a significant attack against AES (unlikely but possible): how long would it take to run a competition for AES-2? What would we do in the meantime? And, importantly, what would we do with all those embedded systems?

Prudence is having constructive and workable answers to these questions long before they are needed.

After all, we know systems age and become obsolete; it’s one of those unfortunate facts of life, like death and taxation. Whilst we all want to live forever as twenty-somethings, we know it’s not going to happen. Prudent people plan for their old age and death, and what is true for us is true for our creations; to think otherwise is not just imprudent but stupid.

Nick P October 1, 2013 5:20 PM

@ John Smith & Brian

“The so-called changes that NIST is proposing are not modifying Keccak in any meaningful way. To put it briefly, they simply propose to remove the crazy security level for pre-image resistance (the one at 2^512 operations), and add another security level with better performance (~30% faster). Then there is some difference in the padding.”

“While I completely understand that some level of conservative “paranoia” is part of being a cryptographer, the reaction to the proposed NIST changes from some people has got to be among the most ridiculous things I’ve ever seen in the cryptographic community”

The bit size change is probably fine. I’ll add that SSL was defeated by choosing a poor padding scheme. Many security proofs inadequately model the security implications of padding and error handling. That led to side channel attacks. One of the changes made is in the padding. I think that is quite “meaningful” and deserves strong review for potential security issues.

The whole point of the competition was to subject algorithms to strong peer review and standardize the best of them. The peer review gives us the trust. They chose an algorithm that went through this process, began standardization, and then started asking its developers to change potentially security critical aspects of it. What a way to inspire confidence in the process, yeah?

The modifications could be entirely innocent. However, the NSA leaks show they pushed seemingly innocent modifications to other standards that weakened their security in subtle ways. So… if people are paranoid and want to use an alternative it’s understandable. That’s the only conclusion I’m personally coming to right now.

Note: This reminds me a bit of the common gripe about Common Criteria evaluations. The evaluation typically certifies the properties of the development process and the features of the product in the security target. However, the evaluation only applies to what’s in that security target or protection profile. Changes to such functionality invalidate the evaluation evidence. Yet companies routinely get an evaluation, change a bunch of stuff, and then claim that was certified to some high level without having the changes similarly vetted. NIST is doing that right now.

Brian October 1, 2013 5:23 PM

@scott

You’re misunderstanding the meaning of “security level” here. The 128-bit and 256-bit security levels Matthew Green referred to are the amount of work required to attack the proposed SHA-3 variations, not the output length. So SHA-3-256 (i.e. 256 bits of output) as proposed by NIST would take 2^128 operations to attack, while attacking SHA-3-512 (512 bits of output) would take 2^256 operations. Keccak-256 (256 bits of output), on the other hand, would take 2^256 operations to attack for some attacks and 2^128 for others. Keccak-512 (512 bits of output) would take 2^512 and 2^256, respectively.

This confusion between output length, the security level in some cases, and the security level in other cases is part of the reason NIST has given for changing the capacity parameter (the more controversial of the proposed changes, as far as I can tell). Having one security level, even if that lowers the security level in SOME cases, is arguably easier to understand than having different security levels depending on the attack…and expecting users to know which attack applies to their particular case.

So when Bruce says NIST’s proposed changes mean “the security levels were reduced”, it’s important to understand that this only applies to certain cases. The easiest to perform attack did not get any easier. In other words, if 2^128 is an unacceptably low security level, you shouldn’t use Keccak-256 either.
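A small sketch of that arithmetic, using the generic sponge claims published by the Keccak team (collisions cost about min(c/2, n/2), preimages about min(c/2, n), for capacity c and output length n):

```python
def sponge_security(output_bits: int, capacity_bits: int) -> dict:
    """Generic sponge security claims, in bits of work."""
    return {
        "collision": min(capacity_bits // 2, output_bits // 2),
        "preimage": min(capacity_bits // 2, output_bits),
    }

# 256-bit output: Keccak as submitted (c = 512) vs. the NIST proposal (c = 256)
print(sponge_security(256, 512))   # {'collision': 128, 'preimage': 256}
print(sponge_security(256, 256))   # {'collision': 128, 'preimage': 128}

# 512-bit output: c = 1024 as submitted vs. c = 512 as proposed
print(sponge_security(512, 1024))  # {'collision': 256, 'preimage': 512}
print(sponge_security(512, 512))   # {'collision': 256, 'preimage': 256}
```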

Muddy Road October 1, 2013 5:24 PM

Trust is a two-way street, and I want to congratulate you, Bruce, for Twofish encryption, which is trusted code.

I’ve read that a couple of places will replace AES with it soon, and I would assume there will be more.

Thanks!

Brian October 1, 2013 5:54 PM

@Nick P:

My understanding of the analysis of sponge functions is that capacity is taken into account during cryptographic analysis. Or in other words, if there was a problem with the assumed security of smaller capacity Keccak, larger capacity Keccak would be questionable as well in terms of not providing the stated security.

I don’t know enough to comment on the padding scheme, to be honest. It’s not padding in the sense of SSL/TLS, but there ARE certainly security pitfalls to watch out for that uniquely apply to this sort of function.

Like I said, I think the strongest argument for leaving Keccak alone is that changing ANYTHING after the competition is over has, at the very least, fairness issues. But I think those issues should apply regardless of the situation. The fact that the likely cause, and certainly the content, of the debate here is centered around some conspiracy theory is at least a little troubling to me. At the end of the day, I agree with the idea that maybe NIST should just standardize Keccak as-is (performance issues aside)…but if the reason for doing so involves current events, I think they’d be doing it for the wrong reasons.

Brian October 1, 2013 6:10 PM

@Muddy Road

That is EXACTLY the kind of reaction that I think has the potential to seriously, and negatively, impact cryptographic security, and one I very much hope the broader community rejects.

Trust is also a multi-lane street (if you’ll pardon the tortured metaphors). In addition to trusting Bruce Schneier and the rest of the Twofish team more than the designers of Rijndael, substituting Twofish for AES also requires us to ignore the fact that Rijndael has been subject to VASTLY more cryptanalysis since the AES competition. So while I have no particular reason to trust Bruce et al. more than the Rijndael folks, I do trust the mountains of analysis that have gone on in the last 13 years or so, focused far more on AES than Twofish. Plus I think AES is simpler and, thanks to hardware support, considerably faster where that is important.

Silent Circle’s (to name one example) rumored embrace of Twofish over AES is a silly move, if you ask me. Abandoning well over a decade of dedicated cryptographic analysis over some vague, and unsupported, conspiracy fears seems like a ridiculous tradeoff to me.

ellen October 1, 2013 6:56 PM

When NIST said that the proposed changes were made “all in the name of software performance,” I would like to ask what the real meaning is. In a secure communication with my bank, it seems like the Internet link or the bank server throughput are likely the rate-limiting steps. So what if 30% more multiplies are needed to calculate the hash; my modern i7 chip can do a lot of math in an Internet latency period measured in milliseconds.

What possible use case would see a noticeable impact from a 30% more expensive hash function? What sort of user is doing enough hashes that the hash function calculation time is a noticeable fraction of their day? Even in the case of a hardware smartcard, how many times is a hardware security device used per day? It seems like NIST is solving a problem that nobody has.

Unless you were a User that spent your money building a gigantic computer to brute force search for hash collisions for some nefarious purpose. OK, I get it now, the NSA really was the customer.

Nick P October 1, 2013 7:28 PM

@ Brian

” The fact that the likely cause, and certainly the content, of the debate here is centered around some conspiracy theory is at least a little troubling to me. At the end of the day, I agree with the idea that maybe NIST should just standardize Keccak as-is (performance issues aside)…but if the reason for doing so involves current events, I think they’d be doing it for the wrong reasons.”

Fair point.

@ Ellen

Re: importance of performance

” What sort of user is doing enough hashes that the hash function calculation time is a noticeable fraction of their day? Even in the case of a hardware smartcard, how many times is a hardware security device used per day? It seems like NIST is solving a problem that nobody has.”

Ah, but the burden of proof is the other way around. The people promoting the security algorithm want it to use a certain amount of CPU time (e.g. cycles per byte) and memory. They must justify that expense. The default security mechanism should be the one that provides the required amount of protection at the minimum cost to the user. If the extra cycles aren’t justified, they shouldn’t exist.

(And every cycle they don’t use for that feature can be used for another, either functional or security.)

Examples of where a faster hash might have a noticeable benefit are embedded systems, low-bandwidth links, high-throughput applications that use hashing, hash-based integrity schemes for system memory, content-based addressing, and integrity checking of large files. It helps interoperability to have a baseline, off-the-shelf algorithm that’s fast enough for all of these while providing adequate security.
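If you want to see the cost concretely, a rough micro-benchmark sketch (Python 3.6+ hashlib; the absolute numbers depend entirely on your build, CPU, and any hardware acceleration) looks like this:

```python
import hashlib
import time

def mb_per_sec(name: str, buf: bytes = b"\0" * (1 << 20), rounds: int = 200) -> float:
    """Rough single-threaded throughput of a hashlib algorithm, in MB/s."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(name, buf).digest()
    elapsed = time.perf_counter() - start
    return (rounds * len(buf) / 1e6) / elapsed

for name in ("sha256", "sha512", "sha3_256", "sha3_512"):
    print(f"{name:9s} {mb_per_sec(name):8.1f} MB/s")
```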

Bauke Jan Douma October 1, 2013 7:32 PM

@ Brian October 1, 2013 6:10 PM

quotes:
“Rijndael has been subject to VASTLY more cryptanalysis since the AES
competition”
“Abandoning well over a decade of dedicated cryptographic analysis seems
like a ridiculous tradeoff” (w.r.t. AES)

I’m neither a cryptographer nor a cryptanalyst.
But.
Isn’t a ‘long period of cryptanalysis’ at best a neutral term?

If you use it to suggest durability — sure.
If you use it to suggest weakness (taking technological progress, Moore’s
law, etc. into consideration) — sure.

Therefore, a long period of cryptanalysis can be held ‘for’, and ‘against’.
It’s neutral. At best.

bjd

65535 October 2, 2013 2:39 AM

I am with Bruce on this issue. Why not allow the true winner to display his product without “last minute” changes?

I do understand that there is 256-bit strength and then there is very strong 256-bit strength, depending on the actual implementation. But why not allow the 512-bit strength to be used (at least for those who want to use it)? Given the mathematics, and all things being equal, 512-bit strength is much higher than 256-bit strength (assuming equal implementations).

NIST gives off a bad smell when, at the 11th hour, the bit strength is basically cut in half. NIST is a disappointment.

fuujuhi October 2, 2013 4:03 AM

I always thought that security experts should consider the weakest link when evaluating systems.

Current SHA-3 proposal and SHA-2 have the SAME minimum security level (i.e. collision resistance, 2^128 for SHA3-256 and SHA2-256).

But SHA-3 is actually more conservative than SHA-2 because it claims a lower security level than SHA-2 for the SAME output length, and hence has MORE security margin than SHA-2.

SHA-2, for instance, claims 2^256 pre-image resistance, but we all know that SHA-2 is vulnerable to multi-target attacks, length extension, multi-paths, etc.

What should we trust more? A primitive that is conservative on its claim, or another one full of holes and exceptions?

fuujuhi October 2, 2013 4:32 AM

@65535

Yeah, let’s go for some NIST bashing, it’s so cool these days.

Do you know that NIST is very open on the current proposal?
And is actually asking for opinions from the community, in order to get the maximum acceptance?

Also, 1024 bits is much higher than 512 bits. So we should go for 1024-bit security then? What about 2048-bit?

But do you know that to simply count up to 2^256 you need to burn more energy than the Sun will ever produce? Is this what you want? To resist an adversary that has more power than the Sun?
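As a back-of-envelope sketch of that claim (assuming the Landauer limit of kT·ln 2 per irreversible bit operation at room temperature, and roughly 1.2×10^44 J for the Sun’s total lifetime output):

```python
import math

k = 1.380649e-23               # Boltzmann constant, J/K
T = 300                        # room temperature, K
per_op = k * T * math.log(2)   # Landauer minimum energy per bit flip, ~2.9e-21 J

total = per_op * 2**256        # energy just to count through 2^256 states
sun_lifetime = 1.2e44          # rough total energy the Sun will ever radiate, J

print(f"{total:.1e} J, about {total / sun_lifetime:.1e} Suns")  # ~3.3e56 J, ~3e12 Suns
```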

Of course there will probably be more powerful attacks than brute force. But the point is that NIST believes (and the whole community agrees) that there is enough margin today to say that Keccak with capacity 512 will not be broken in the near future.

thoth October 2, 2013 5:44 AM

Did the creators of Keccak discuss their intended changes with NIST? Keccak is going to be a new standard for the world to use, and the world community needs to be given a chance to discuss and vote on whether Keccak as used in SHA-3 should be allowed to follow the proposed changes. NIST should facilitate and openly convene such an international meeting (and also bring the top SHA-3 candidates and their creators, including Skein, to the table) to discuss issues regarding the update to Keccak/SHA-3, if NIST wants to regain the respect of the world community.

In light of all the drama surrounding NIST and the Keccak creators wanting to weaken their algorithm, and of the Snowden leaks regarding NIST and the NSA, I doubt it would be advisable to use NIST-provisioned algorithms until they are proven to be backdoor-free and secure enough by the crypto and security community worldwide.

I think using the other finalists from the NIST-hosted AES (Twofish and Serpent) and SHA-3 (Skein, JH, Grøstl, BLAKE) competitions would be more advisable.

Mike the goat October 2, 2013 8:10 AM

Even if the changes are completely benign and are indeed being done for performance reasons (and there’s nothing up their sleeve), it is the appearance of impropriety that is damaging to trust. They would do well to just leave it alone – it did win the competition in that configuration, right?

Alan Kaminsky October 2, 2013 9:28 AM

At the start of the SHA-3 competition in 2007, NIST wanted a hash algorithm that was more secure than SHA-2 — because of the concern (at that time) of potential weaknesses in SHA-2 — and that had faster performance than SHA-2.

At the end of the SHA-3 competition in 2012, none of the finalist algorithms were demonstrably more secure than SHA-2, and none were unequivocally faster than SHA-2.

Therefore, the SHA-3 competition was a failure. While lots of interesting hash algorithms and valuable cryptanalyses were published, the fact remains that the competition did not achieve its goals.

What NIST should have done was declare that there was no SHA-3 winner. NIST should then have either revised the goals and restarted the competition (perhaps taking another look at some of the candidate algorithms eliminated after the second round), or decided to leave the existing Secure Hash Standard untouched.

But this would have been a huge loss of face for NIST. So to try to salvage something, NIST is now altering the “winning” algorithm, Keccak, to improve its performance. Then NIST can at least say that they have a faster algorithm than SHA-2. (They can’t do anything to make Keccak demonstrably more secure than SHA-2.)

I really don’t think the NSA is behind NIST’s current efforts to alter Keccak after the fact. NIST would still be doing what it is doing, even if the NSA was uninvolved. The Snowden revelations, though, have destroyed all trust in cryptographic standards, whether or not the mistrust is justified.

RSaunders October 2, 2013 9:52 AM

@ Nick P

Interesting response to Ellen, but I’m not sure I agree. If a hash function was infinitely fast it would not be very secure because brute force would be highly effective. That’s the difference between a secure hash function and a merely effective one that might be used for cache management.

It’s not just a zero-sum game versus other functionality that might go into the widget; it’s a two-sided game where raising the evildoer’s work factor is one of the desired benefits.

I’m not saying NSA is an evildoer or that NIST is kowtowing to them; that’s irrelevant. Raising the computational cost of the most efficient implementation of an algorithm raises the security of the resulting system by increasing the work required to conduct attacks. Think of it like the time delay after you enter your iPhone PIN wrong three times. It makes a robotic brute force PIN attack more expensive.

Moderator October 2, 2013 1:40 PM

Yes, the NIST site is currently unavailable due to the U.S. government shutdown, which you can read about in many other places. Debating the shutdown is off topic here.

Jose October 2, 2013 1:56 PM

Well, Skein will be the new de facto hash function, so add the primitives fully on your website. You won in one indirect way… congratulations. I trust Twofish more than AES too, indeed…

Nick P October 2, 2013 2:54 PM

@ RSaunders

” If a hash function was infinitely fast it would not be very secure because brute force would be highly effective.”

Remember that the whole point of the security level of a hash is making brute forced collisions impossible (or improbable). A security level of 2^128 requires more operations than any supercomputer we have will be capable of pulling off for hundreds to thousands of years (in theory). At those timespans, a speed up in bytes per cycle would have almost zero effect on the situation as it would still take longer than their lives to brute force a 128 security level hash.

And these are “one-way” functions, I remind you. For full preimage attacks, this means they inherently lose information as they progress. Being able to go from a file to a digest quickly has no bearing on how easily they can go from a digest to a file. I’ve never seen that done at all, btw.

As for regular collisions, attacks so far have been about weaknesses in the algorithm or security level, not algorithm’s speed. I’ll change my position if you can name one algorithm that was strong and had at least 80 bits security strength, but was cracked because it was fast in terms of cycles per byte. I’ll wait. 😉

mamling October 2, 2013 6:49 PM

I understand how to get a collision on a 256-bit hash in 2^128 effort. And I can believe that finding a pre-image of a SHA-256 hash takes 2^256 effort. But I don’t see what property of Keccak makes finding a pre-image require no more effort than finding a collision. Is it that the Keccak permutation can be run backwards? If so, how does that help?

Dirk Praet October 2, 2013 7:06 PM

@ Alan Kaminsky

I really don’t think the NSA is behind NIST’s current efforts to alter Keccak after the fact. NIST would still be doing what it is doing, even if the NSA was uninvolved. The Snowden revelations, though, have destroyed all trust in cryptographic standards, whether or not the mistrust is justified.

That’s probably the best way I have heard it phrased to date, and it especially applies to NIST in light of the DUAL_EC_DRBG discussion. RSA’s recent advisory to developer customers to stop using it certainly hasn’t helped. At this point, the only way for NIST to regain trust is to come clean about what has really been going on, provided of course that they would be legally allowed to do so.

The same can be said for Verizon, AT&T, the PRISM associates, Intel, Cisco and all other companies that are suspected of being in bed with the NSA. The current climate of FUD is hurting everyone and is indeed effectively destroying all trust in the US ICT tech industry. I concur with Bruce that the best way to go about this would probably be through a new Church Committee.

Frankly, it is beyond me that the USG prefers to bury its head in the sand and would rather see a thriving industry and diplomatic relations with other countries seriously damaged than rein in its out-of-control national security agency, all in the name of a so-called “war on terror” that nobody in his right mind is buying anymore. It just goes to show how strong the grip of the military/surveillance-industrial complex on the country has become.

Anon October 2, 2013 9:03 PM

@Dirk

It’s hard to see how Intel could be hurt by the Snowden documents. Between Intel and AMD, another US company, they have a 99% share of the world desktop CPU market with IBM, another US company, getting most of that other 1%. There really aren’t any foreign competitors in that market niche.

Anon October 2, 2013 9:17 PM

@Dirk

You said: “Frankly, it is beyond me that the USG prefers to bury its head in the sand and would rather see a thriving industry and diplomatic relations with other countries seriously damaged than rein in its out-of-control national security agency, all in the name of a so-called “war on terror” that nobody in his right mind is buying anymore.”

From the US perspective, the main reasons the US wants strong diplomatic relations with other countries is to fight a “war on terror” and a “war on drugs”. If the USG accepted your argument that “war on terror” should be ended, it would have no reason to care at all whether its diplomatic relations with other countries were seriously damaged.

Figureitout October 2, 2013 11:09 PM

Anon Re: Your friend Dirk Praet
–You conveniently left out the last part of his quote, talking about the military industrial complex. Relations w/ the world are extremely important, I had the pleasure of living in Belgium at the beginning of the Iraq war…Basically I got a lot of hate for stuff I didn’t do or have any part of.

I wish we could just ship off all the wanna-be soldiers and coppers to a little kiddy camp to shoot at each other and arrest each other on false charges. Maybe test their missiles and nukes on them too.

Jeff Trombly October 2, 2013 11:17 PM

Bruce, I’m a bit surprised at you. I can see the political argument for making SHA-3 match the Keccak submission exactly, but the proposed changes are mostly clear technical improvements and are based on the security proofs by Joan Daemen et al. There is no possibility of sneaking in a back door.

In fact, all the changes are suggestions from outside researchers (including the Keccak designers themselves) that NIST is proposing to incorporate into the official standard.

There are two parts to Keccak. One is the core round function, an unkeyed cryptographic permutation on a large block. This is the hard part to verify the security of, and nobody is suggesting making the tiniest change to it. This is akin to the core “MD5 Transform” or “SHA1 transform” algorithm. It is supposed to be computationally indistinguishable from a completely random permutation, and nobody who has studied it has found any hint of a technique for making the distinction. (We do not have a proof, however.)

The second part of the Keccak hash function is the “sponge construction” that is used to take this finite-sized random permutation and make a cryptographic hash on arbitrary-sized inputs. There are strong security proofs on the sponge function, assuming the permutation at its core is truly random.

This is equivalent to (but different from) the Davies-Meyer construction which is at the heart of MD4, MD5, SHA1 and SHA2.

This second part is the part where tweaks are suggested. Because we have actual security proofs, it’s straightforward to make some changes without invalidating the proofs.

The first change proposed is to the padding algorithm used to break the arbitrary-sized input into blocks to feed to the sponge rounds. The original submission proposed a simple padding algorithm similar to the Merkle–Damgård padding used by earlier hashes. Some others showed (with security proof) an alternative scheme that allows extension to tree hashing, a useful feature that other SHA-3 submissions provided. Including Skein, I might mention.

NIST suggested, “you know, since the security is provably equivalent, how about we use the tree-compatible padding so that it’s at least possible to standardize tree hashing later if we want to.”

The only downside is that the minimum block padding is increased from 2 bits to 8. This is lower than SHA-1’s 65 bits in either case, and makes no difference if the input length is divisible by 8 (which it always will be in practice), so it’s a good idea.

The second suggestion is to adjust the security parameters. The sponge construction divides the 1600-bit permutation block into a “capacity” c which is a security parameter, and an r = 1600−c “rate” which is the number of input bits hashed per round. If the core function is indeed a random permutation, then the sponge construction is provably c/2 bits secure against both collision and pre-image attacks. (Minus some small epsilon.)
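To make the capacity/rate split concrete, here is a deliberately toy sponge sketch in Python. The permutation below is a meaningless placeholder, not Keccak-f[1600]; only the absorb/squeeze structure and the roles of c and r are the point.

```python
import hashlib

STATE_BITS = 1600
STATE_BYTES = STATE_BITS // 8  # 200

def toy_permutation(state: bytes) -> bytes:
    # Placeholder permutation of the 200-byte state (rotate, then XOR a
    # fixed mask). It is a bijection, but has zero cryptographic strength.
    mask = (hashlib.sha512(b"fixed mask").digest() * 4)[:STATE_BYTES]
    rotated = state[1:] + state[:1]
    return bytes(a ^ b for a, b in zip(rotated, mask))

def toy_sponge(msg: bytes, capacity_bits: int, out_bytes: int) -> bytes:
    rate_bytes = (STATE_BITS - capacity_bits) // 8
    # Simplified, byte-aligned pad10*1: 0x01, zero bytes, 0x80.
    msg = msg + b"\x01" + b"\x00" * ((-len(msg) - 2) % rate_bytes) + b"\x80"
    state = bytes(STATE_BYTES)
    # Absorb: XOR each r-bit block into the rate part, then permute.
    for i in range(0, len(msg), rate_bytes):
        block = msg[i:i + rate_bytes] + bytes(STATE_BYTES - rate_bytes)
        state = toy_permutation(bytes(a ^ b for a, b in zip(state, block)))
    # Squeeze: output is read only from the rate part; the capacity part is
    # never exposed directly, which is where the generic c/2-bit claim comes from.
    out = b""
    while len(out) < out_bytes:
        out += state[:rate_bytes]
        state = toy_permutation(state)
    return out[:out_bytes]

# 512-bit output: c = 1024 as submitted (r = 576) vs. c = 512 as proposed (r = 1088).
print(toy_sponge(b"hello", 1024, 64).hex())
print(toy_sponge(b"hello", 512, 64).hex())
```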

Since the SHA-3 submission guidelines asked for an n-bit hash with n/2 bit collision resistance (the maximum possible for an n-bit hash) and n-bit preimage resistance, the Keccak submission proposed c=2n.

NIST is saying “well, you know, everyone knows that an n-bit hash is at most n/2 bits secure anyway. How about we use c=n to match the pre-image and the collision resistance.” That way, SHA3-256 would be 128 bits secure, and SHA3-512 would be 256 bits secure against all the standard attack scenarios.

This parameter choice is suggested by the Keccak authors, but their official SHA3 proposal used c=2n to avoid being disqualified on a technicality.

This change increases the efficiency of Keccak by increasing the rate (the number of bits hashed per round), and makes sense. It’s more debatable whether it’s a good idea, but there’s nothing remotely secret or nefarious about the security implications; they’re straight from the Keccak paper.

The argument is that this is not a “useful margin of safety”, but stupid excess and bad engineering to provide so much strength in one part when the collision resistance is the limiting factor. As we all know, security is about the weakest link; people who crow about how strong some other link is (e.g. “We use unbreakable 256-bit AES encryption!” without talking about how the keys are generated or managed) are idiots who don’t understand security.

And in both cases, this is something that NIST is discussing publicly and asking for feedback about before standardizing. It’s like a “round 4” if you like. I don’t see anything to complain about, just taking advantage of the fact that Keccak is a very flexible algorithm with more adjustable parameters than some others.

I really think this is a silly argument. If Bruce wants to explain why n-bit preimage resistance is important even when collisions are n/2, then I’m all ears. But the proposal and its merits are completely open and public.

All the discussion now is doing is giving hardware implementors a heads-up that they should make the corresponding parameters adjustable in their implementations. Which is a good thing regardless.

Figureitout October 2, 2013 11:50 PM

Jeff Trombly
–Nice math, but don’t be surprised at Bruce, have you ever dealt w/ an active covert investigation? He is dealing w/ this now; you don’t think straight when you’re a subject of a covert investigation and you’re outnumbered. You revert to methods and systems that are very impractical; but still you wonder if even those are cracked and they’re just watching you. And there’s a seemingly limitless amount of agents just showing up out of nowhere…

A backdoor is always a possibility; if enough smart people really care. Protection is to make them not care. The unknown/hidden backdoors are what give me nightmares; meaning concepts not even conceived of in any public knowledge, just pure flukes and lucky finds….

ewan October 3, 2013 8:57 AM

“The sponge construction divides the 1600-bit permutation block into a “capacity” c which is a security parameter, and an r = 1600−c “rate” which is the number of input bits hashed per round. ”

This is not a matter of cryptography, it’s a matter of public relations. This sort of explanation is simply addressing the wrong problem.

John Smith October 3, 2013 12:51 PM

@ewan

Exactly.

On one side, NIST, which communicates openly and interactively about choices of parameters and interface aspects.

On the other side, this post, with vague allegations (the NSA’s shadow) and factually wrong statements (“some internal changes to the algorithm”).

Guess who has a public relations problem?

unimportant October 5, 2013 10:01 AM

My take on this issue is that NIST will eventually withdraw its modifications and standardize what the open community vetted and trusts, meaning SHA-3 = untampered Keccak.

SHA-3 seems to be too important for chipping individuals. And trust is required for the future new surveilled and individualized money.

Bruce Schneier October 5, 2013 5:09 PM

“Bruce, what do you think? You designed Skein to one set of standards. Would you have made changes if the competition had different requirements?”

We designed the best hash function we could. We really didn’t optimize to the exacting requirements. So we likely would have had the same design regardless.

I guess it depends how different the requirements were.

Brian Dell October 7, 2013 1:40 AM

re “My problem with the changes isn’t cryptographic, it’s perceptual.”

So, in other words, some guy off the street like me who knows next to nothing about cryptography yet sees the insidious NSA under every rock could voice the objection just as soundly, based on the validity of his “perception”?

People follow you, Bruce, because they look to you for a CRYPTOGRAPHIC objection, if any.

Marsh Ray October 9, 2013 8:21 PM

@Jeff_Trombly:

The argument is that this is not a “useful margin of safety”, but stupid excess and bad engineering to provide so much strength in one part when the collision resistance is the limiting factor.

Bullshit.

Collision resistance is not always the limiting factor (MAC constructions for example). It was NIST themselves who said that preimage resistance is essential, but they were just listing the well-known properties of an ideal function.

http://csrc.nist.gov/groups/ST/hash/documents/FR_Notice_Nov07.pdf : “NIST expects the SHA-3 algorithm of message digest size n to meet the following security requirements at a minimum. […] any result that shows that the candidate algorithm does not meet these requirements will be considered to be a serious attack. […] Preimage resistance of approximately n bits” [em added]

You should support your own claims before you go calling people “stupid” and “idiots”.

Me or My October 9, 2013 9:25 PM

To be honest I’m glad they made these changes because I wouldn’t use SHA-3 if they didn’t. It was the only logical thing to do and already suggested by the Keccak team. I’d be upset if they’d not standardize an optimal solution just because they fear that some paranoid folks might interpret this as intentional weakening by the NSA.

You and Yours November 14, 2013 8:05 AM

@Jonathan: What he means is that hopefully, the permutation function behaves like a randomly selected permutation among the (2^1600)! permutations on 2^1600 elements. Of course, this cannot be completely true since the Keccak permutation can be described in much less than 2^1600 bits.

Ben January 31, 2016 1:27 PM

You keep talking like it’s the NSA’s fault, the NSA’s screw-up. Maybe you’re afraid, maybe you’re paid, but the NSA just follows the directives of the president, and this president OBAMA, it’s okay to say his name, Obama has a consistent record of ignoring anything anyone else thinks and doing his own thing. So, this NSA spying program probably isn’t the plan of one of his advisers. It’s probably something he came up with in discussions with other corrupt politicians interested in the power to be had from having unaccountable power over the people.

Also, with the exponential increase in the frequency of secrets and double-speak that the Democrats and government in general have grown so accustomed to, almost like second nature now, over these last several years, to assume that they are just implementing better, more efficient coding in the Keccak algorithms is literally the same thing as trusting the words of a known pedophile requesting to be an elementary school teacher, only worse, because some of them actually acknowledge they’re wrong and do make the effort to change themselves.

Oh, but I see you’ve asked around about him to “make sure” you can trust his words.

Unless you go into that Keccak system and see for yourself that nothing has been made more vulnerable, no backdoors are being put into it, you’re really just talking because you really don’t know. And chances are, they’ve been researching how to implement backdoors that are very difficult to discover so that when they are discovered, it can be plausibly believed that it was just a simple undiscovered security flaw.

It’s like this. How often does the government actually fix anything? How much more often do they actually end up breaking something that didn’t actually need to be fixed?

SHA-2, what’s wrong with it? From what I understand, it doesn’t have any discernible vulnerabilities in all of the time that it and SHA-1 have existed.

Why the push for SHA-3 . . . why now, in this time, when the government is desperately clamoring to find those cooperative to make backdoor vulnerabilities?

You want to know the best way to trick people. Always maintain plausible deniability and have a scapegoat reason for people to believe. People already want to believe that in the long-run their nation will continue to prove to be the best in the world. People are intentionally naive when it comes to realizing the true detriment to our liberties that could happen and does happen, especially to nations that have given too much power over to their respective governments.

A October 16, 2019 5:11 AM

A huge problem with trading off security for performance is this: computing performance is constantly increasing, and compromising a standard that is intended to be a long-term solution, in the name of current performance standards, is just unacceptable in this day and age.

In 5 years time, running Keccak-512 will be easy as pie, because CPUs and technology will catch up and be able to run things like this. It’s idiotic to radically change the recommended parameters just in the name of performance.

Eventually CPUs will come with hardware hashing instructions once these functions become popular enough, which will again significantly reduce the performance penalty. Unless something has changed for the better, I question whether SHA-3 offers any security benefits over SHA-512.
