Comments

still useful June 20, 2011 1:33 PM

Cryptographic hash functions are useful outside of the security field.

I still use “broken” MD5 because it is faster than the others and provides me with a low chance of accidental collision. In my application, there is no concern about anyone trying to create collisions on purpose.
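For what it's worth, a minimal sketch of that kind of non-adversarial use, in Python (the sample data is hypothetical):

```python
import hashlib

# Good enough for spotting accidental duplication or corruption;
# useless against an adversary crafting collisions on purpose.
def checksum(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

records = {checksum(b"row-1 payload"): b"row-1 payload"}
print(checksum(b"row-1 payload") in records)  # deduplication lookup
```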

Creosote June 20, 2011 1:48 PM

The snarky ‘reactions to stages in the life cycle of cryptographic hash functions’ table, below the main chart, is quite amusing, and disturbingly accurate…

swim June 20, 2011 1:51 PM

@Eric: given what happened last weekend, it hardly seems like the problem with Bitcoin is in the hashing. Satoshi got the crypto right but screwed up the human element (economics), just like Schneier is always complaining about.

In fact, I’d like to see a Bitcoin post-mortem by Schneier at some point.

John Hardin June 20, 2011 3:17 PM

I know I’ve brought this up before, but I’d like to do so again: isn’t there value in using two (or more) different hashing algorithms for signing content to try to avoid the inevitable problem of synthetic collisions being found? Since it’s unlikely one replacement text can be synthesized to collide with both/all of the hashes, that seems to me to be a reasonable way to almost entirely avoid the problem on long-term persistent signatures.

Mark June 20, 2011 3:24 PM

Until now, I failed to notice how many advances the year 2004 brought to the field of hash functions. Well, breaking an algorithm is an advance, isn’t it?

Petréa Mitchell June 20, 2011 4:17 PM

Entertaining snark in two of the three columns – it’s pretty easy to guess that the author is a programmer. Would that every software company took weaknesses that seriously that early in the process…

Eam June 20, 2011 5:57 PM

@Petréa Mitchell:

Are you kidding? I’ll be happy when XOR and ROT-13 are no longer de-facto standards!

It’s been my experience that the average programmer working on authentication systems falls into the “SHA-what?” camp.

Eam June 20, 2011 6:05 PM

@John Hardin:

isn’t there value in using two (or more) different hashing algorithms…

I was originally going to comment that that would be a good idea where performance isn’t an issue.

But now that I think about it, almost every routine I have that hashes data is disk or network I/O-limited. I bet creating two hashes would have a negligible effect on total processing time.

I have to imagine there are a lot of cases like that in the average web app.
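Something like this sketch backs that up: it computes both digests in a single pass over the data, so when the read itself is the bottleneck the second hash is nearly free (Python; the file name is hypothetical):

```python
import hashlib

def dual_digest(path, chunk_size=1 << 20):
    """Compute MD5 and SHA-256 in one pass; the data is read only once."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)     # both hashes consume the same buffer
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

print(dual_digest("upload.tar"))  # hypothetical file
```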

Macarthur June 20, 2011 7:12 PM

I think the ‘programmer’ column is a bit off; if it were true that programmers actually did what they’re supposed to, then according to that chart they wouldn’t still be using MD5. But as all the recent database leaks by “LulzSec” and other hacking groups have shown, MD5 is still widely used, even though it’s been broken for ages.

I mean, look at the people behind Gawker: they’re considered ‘programmers’, but they were using triple DES until they were utterly owned and shamed. Only then did they finally go with bcrypt.

Most of the “programmers” out there, I think, should be lumped in with the “slashdotters”, but sadly they’re everywhere, from small startups to people who are “unofficially” part of the world’s governments. Most of them aren’t even hashing their passwords, and those who are use plain MD5 or, at best, SHA-1.

The tables are amusing, and I’m going to be sure to share this post with the people who I talk to regularly.

What I’m wondering is whether bcrypt will be rewritten around something more modern if Blowfish is broken, and, once it is, what people will use for password hashing to mitigate brute-force searches.

With more and more sites moving to bcrypt (after their databases leak and they face public scrutiny for using MD5), I’m sure interest in breaking Blowfish will increase sharply in the future. That previous statement was just a hope, though; if I know how ‘programmers’, as they call themselves, operate, it won’t happen. But I at least hope that people will use stronger password hashing algorithms in the future.

Richard Steven Hack June 20, 2011 7:55 PM

Anyone having trouble connecting to this site? It seems to fluctuate. I can connect to some articles via the menu, but when I try to post…nada. Also can’t connect to some articles using the menu. Firefox just sits there “connecting…”

At the moment I’m downloading a 1.3GB Network Security Tools distro, but this problem has been going on all day.

Jay June 20, 2011 9:15 PM

@John Hardin:

I think there have been papers demonstrating how to combine collision attacks on two different hash functions – i.e., the difficulty of finding a collision in two hash functions becomes O(M+N), not O(M*N) like you’d prefer.

I think this relies on the ability to find “families” of collisions, which is true for SHA-1 and MD5, IIRC.
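The result Jay describes is probably Joux's 2004 multicollision attack. A sketch of the cost argument, from memory (so check the paper): for a concatenation H_1 ‖ H_2, where H_1 outputs n bits and H_2 outputs m bits, build a 2^(m/2)-way multicollision in H_1 and then birthday-search that set for an H_2 collision:

```latex
\operatorname{cost}\bigl(H_1 \,\|\, H_2\bigr) \;\approx\;
\underbrace{\tfrac{m}{2}\cdot 2^{n/2}}_{\text{multicollision in } H_1}
\;+\;
\underbrace{2^{m/2}}_{\text{birthday search in } H_2}
\;\ll\; 2^{(n+m)/2}
```

So the combination costs an attacker only a little more than the stronger of the two hashes alone, not the naive birthday bound of the concatenated output.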

Richard Steven Hack June 20, 2011 11:34 PM

Who cares about stronger password hashing when password cracking these days involves sending the hash up to a Web database and getting the password back in a second?

When your users are still using single English word passwords and every English word (as well as most proper names and hundreds of thousands of other password types) has already been hashed by all the standard hash functions in every OS and stored in a database accessible on the Web or downloaded, what good are password hashes?

Jonathan Abbey June 21, 2011 12:46 AM

That’s why everyone should be using salt with their password hashes, Richard.

Ping-Che Chen June 21, 2011 12:49 AM

@Richard Steven Hack:

You should salt your passwords. After all, this is old technology.

W.r.t. weak passwords, it’s actually possible to make weak passwords stronger by using hard-to-compute salts. Of course, this does not solve the “12345678” or “password1” problem, but it’s easy to filter those passwords out (and ask your users to pick stronger ones). It all comes down to how much computational power you are willing to spend on security.

regular_guy June 21, 2011 1:01 AM

@Richard Steven Hack,

The hash databases bring up a good point. I’d like to think most authentication systems use some form of cryptography, such as RSA, to transmit the user/password pair.

Even if this is the case, the system storing the hashes might still be vulnerable. One of the advantages of using the local files name service database on UNIX systems is that the shadow file is readable only by root, so unless root (or a user with equivalent access via RBAC) is compromised, getting the hashes would require physical access.

In a properly configured system the only people who should be able to get even the hashes should be those administering the box. This goes back to the typical weaknesses in any cryptographic or security system. Trusting those with legitimate access and proper implementation is the hard part.

Unfortunately, most users (laymen) don’t think about how their information is handled, nor would they understand the risks of different methods, let alone look at a web page’s source or contact the owning organization. I’m sure lots of web-based authentication systems are storing their hashes/passwords in a database table easily extractable via SQL injection. The sad part is that most likely nobody will find out until it’s too late, or at all.

I guess my point is that databases of hashes are going to pop up for any algorithm, and security professionals, managers, and developers should focus more on implementation and on hiring/retaining trustworthy employees (no easy feat).

Richard Steven Hack June 21, 2011 1:25 AM

My bad, forgot all about salts.

Meanwhile, I’m STILL having enormous problems connecting to and posting to this site. Clearing my Firefox cache hasn’t helped. It seems completely random. I could access this thread, but if I click on Main from the menu Firefox just sits there “connecting…” And I probably won’t be able to post this because it will sit there connecting…

I can get to every other site I normally go to without problems.

tommy June 21, 2011 2:19 AM

Since they included slashdotters, shouldn’t they have included “Schneier blog readers”?

@ Richard Steven Hack:

Reboot the whole machine.
(Tech Support Rule #1.)

Believe it or not, some online financial institutions are actually requiring reasonably strong pws – at least, compared to dictionary words. One site recently required:

1) Must include upper and lower case letters, but cannot start with upper case (since your bf’s/gf’s/pet’s name is usually capitalized).

2) Must include one or more numbers, but cannot end with a number. … because studies have shown that most get around the “no-dictionary” requirement with “Richard1” or something, an easy dictionary attack. “rich8Ard” enlarges the search space quite a bit, though still not good enough.

3) 10-character minimum, way up from the 6 or 8 previously required.

They have an actual pw-strength meter at the page where you set up your login creds, and tell you how relatively strong it is.

Steve Gibson recently wrote a paper showing that merely adding twelve dots or dashes greatly increases the search space, though of course now that it’s published, don’t do exactly that. But an easy-to-remember password can be “salted” by the user with something easy to remember, but not dictionary-definable.

https://www.grc.com/haystack.htm

rich8Ardoscysbtdelwspwhattlg?

Search Space Size: 2.28 x 10^57

Online Attack Scenario:
(Assuming one thousand guesses per second) 7.26 hundred million trillion trillion trillion centuries

Offline Fast Attack Scenario:
(Assuming one hundred billion guesses per second) 7.26 trillion trillion trillion centuries

Massive Cracking Array Scenario:
(Assuming one hundred trillion guesses per second) 7.26 billion trillion trillion centuries

How did I get that, other than inserting the 8 and capitalizing the A in your name, which you could probably remember? The rest of the letters are the initial letters of the first line (or two) of the US National Anthem, including the ?, because it is in fact a question. How hard is it to memorize and reproduce that at will?
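For anyone wanting to check the arithmetic: the haystack-style search space is just the sum of all strings up to the password's length over the full alphabet. A quick sketch in Python, assuming the 95-character printable-ASCII alphabet GRC's calculator uses:

```python
ALPHABET = 95  # printable ASCII, as assumed by GRC's haystack calculator
password = "rich8Ardoscysbtdelwspwhattlg?"  # 29 characters

# Total strings of length 1..len(password) over the alphabet.
search_space = sum(ALPHABET ** i for i in range(1, len(password) + 1))
print(f"{search_space:.3e}")  # ~2.28e57, matching the figure above

# Seconds to exhaust at the "offline fast attack" rate of 1e11 guesses/sec:
print(search_space / 1e11)
```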

Me, I just use Password Safe (thank you, Bruce), and have the one master pw which is based on similar acronym principles, but of a phrase no one would ever guess, and with natural caps and punctuation in the phrase ending up in the pw.

Nick P June 21, 2011 2:26 AM

@ Richard Steven Hack

“Still a mess. No apparent pattern. Sometimes I can access something, more than half the time I can’t. Almost every time I try to post, I can’t connect.”

My bad dude. They promised the rootkit would be stealthy, seamless, and make your posts more optimistic. Those damned Romanians…

Nick P June 21, 2011 2:32 AM

@ still useful

You didn’t try a hashing algorithm like Dijkstra’s? Many hashing schemes can typically deal with a collision or two without spending the resources that cryptographic hashes like MD5 do.

@ creosote

“The snarky ‘reactions to stages in the life cycle of cryptographic hash functions’ table, below the main chart is quite amusing, and disturbingly accurate…”

Heck yeah. My favorites are geeks dropping hash names at cocktail parties to look elite (done that before, but beer replaced cocktails) and the one about mathematical comparisons between the search space and the number of protons in the universe. I see that B.S. on web sites all the time. When have we ever attacked a decent cipher with brute force alone? Those comparisons are all meaningless because attackers try to step around the largest obstacles, not plow through them. Yet, the ridiculous comparisons continue…

Richard Steven Hack June 21, 2011 3:34 AM

Still can’t post ninety percent of the time.

I give up. I just won’t be able to post here until I find a solution.

I’ve deleted the Firefox profile, uninstalled and reinstalled Firefox (and the same thing happens in Opera), rebooted the DSLExtreme modem and the router, cleared the DNS cache in openSUSE 11.4.

And now the same thing happens in Windows XP running in VirtualBox!

I give up. It’s hopeless. Short of a full reinstall of the OS and everything I can’t seem to figure out this problem.

DSLExtreme had a DNS outage a few days ago for several hours. Maybe they have a corrupt DNS record in their DNS server for Bruce’s site?

I don’t know. I give up. I won’t be posting here any more until this is solved.

Andy June 21, 2011 4:08 AM

@tommy “rich8Ardoscysbtdelwspwhattlg?

Search Space Size: 2.28 x 10^57”
Are stuff it invert it
80-57=27
10^27 =

D.P. June 21, 2011 4:09 AM

Cryptologists often describe a hash function as broken when the complexity of finding a collision is less than that of brute force. In that sense, SHA-1 is broken now; however, no one has yet found a single SHA-1 collision.

Apparently, this use of the word “broken” may confuse some people into thinking that SHA-1 does not provide any protection. So Valerie chose “weakened” to describe the situation where no real collision is known, reserving “broken” for cases where a collision is known.

Though known collisions for a hash function present some security risk, it may be less than many people assume. Knowing a collision does not let an attacker replace an arbitrary previously generated file with another file with the same hash; it only lets the attacker generate two binary files of his own with the same hash. Usually those two files will differ in a single cryptographic block, and those blocks are pseudo-random bytes, so the difference can be used only as a flag to trigger different sequences of execution. That offers very little advantage to the attacker: it does not help much in hiding malicious code, and there are many more convenient ways to trigger dormant malicious code that do not require replacing the binary file.

So, MD5 is broken, but we probably have a decade or more before it will be possible to generate a malicious file with the same hash as a given trusted file. Of course, that does not mean we should stay with MD5 any longer, but in a practical sense there is far more risk from a leaked private key, from tricking someone into signing a malicious file, and so on.
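The distinction D.P. is drawing maps onto the standard definitions (nothing MD5-specific here):

```latex
\textbf{Collision resistance:} \text{ hard to find any } x \ne y \text{ with } H(x) = H(y). \\
\textbf{Second-preimage resistance:} \text{ given } x, \text{ hard to find } y \ne x \text{ with } H(y) = H(x). \\
\textbf{Preimage resistance:} \text{ given a digest } h, \text{ hard to find any } x \text{ with } H(x) = h.
```

MD5's break is of the first kind; attacking an existing trusted file requires the second, which remains far harder.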

Andy June 21, 2011 4:11 AM

@Nick P “and the one about mathematical comparisons between the search space and the number of protons in the universe. I see that B.S. on web sites all the time”
Was just wondering if that’s possible?

RonK June 21, 2011 5:30 AM

I fully understand the reasoning behind abandoning hashes which no longer satisfy their security criteria. It’s been a good paradigm up to now.

On the other hand, when an attack requiring > 2**200 bits of memory is suggested, I fail to get excited, even if that is orders of magnitude less memory than the theoretical maximum (for, let’s say, a birthday attack on a 512-bit hash function). It all depends on whether the attack seems to be the start of a series of landslide improvements or just another incremental one. In other words, it seems to me that an important factor is the average derivative of the logarithm of attack complexity over time, compared with the security margin that still remains.

Am I wrong?

Jonadab June 21, 2011 6:03 AM

Isn’t there value in using two (or more) different hashing algorithms for signing content to try to avoid the inevitable problem of synthetic collisions being found?

Yes, sort of, but mostly no.

Theoretically, using two hashes (of roughly equal complexity) is approximately twice as secure as using just one of them. Problem is, in the field of security, everything is about orders of magnitude, so “twice as secure” is almost the same thing as “barely more secure at all”. If you really want more secure hashing, you move up to a more complex function. Instead of using both MD5 and SHA-1 together, for example, you’re better off moving to SHA-2.
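In bit terms, a one-line sanity check of that magnitudes argument:

```latex
2 \cdot 2^{n} = 2^{\,n+1} \quad \text{(doubling the attacker's work buys one bit)}, \qquad
2^{80} \rightarrow 2^{128} \quad \text{(SHA-1 to SHA-256 birthday bound: 48 extra bits)}
```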

What I don’t understand, given the computing resources available these days, is why the most complex hash functions available produce such small results. On a modern system, 512 bytes is nothing. Why hasn’t anyone put forward a multi-kilobyte hash function yet? I mean, yes, I realize that in many applications keeping it small is desirable (due to computation time, not storage or bandwidth). Nonetheless, it seems to me that there are other applications where the extra computational time would be no big deal and we could be using significantly more complex hashes, if they existed (and had been widely peer reviewed and so on). Why do we always do just above the minimum necessary to remain secure? Why not do way more than is necessary, if we can?

Jonadab June 21, 2011 6:27 AM

I’ll be happy when XOR and ROT-13 are no longer de-facto standards!

ROT13 is mostly used when deliberate attacks are not a concern, to prevent someone from inadvertently reading the plaintext. It’s good enough for that. (Yeah, I know, there are people out there who have looked at so much ROT13 text that this no longer works for them. But it’s not a major concern. The larger problem is Unicode and, in particular, foreign languages.)

XOR is, admittedly, overused.

almost every routine I have that hashes data is disk or network I/O-limited

In which case, why not use hashes with significantly more complexity? Why don’t we have 4KB, 8KB, and 16KB hashes?

password cracking these days involves sending the hash up to a Web database and getting the password back in a second?

I know how to fix that.

First, add per-password salting to the mix. (Any time a user changes his password, he automatically gets freshly generated salt along with it; a minimal sketch follows after this list.)

Second, the more entropy you can put into the salt, the better. Assume you’re defending against rainbow tables generated by a large botnet and plan your salt accordingly. If you have a good entropy source, go ahead and use significantly more salt with each password than the number of bits in the resulting password hash. If necessary, make the user monkey-bang the keyboard for a couple of minutes when they change their password, to generate the needed entropy for the salt.

Third, obviously, don’t use short or otherwise terribly insecure passwords.
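A minimal sketch of the first two steps, in Python (the salt size is an illustrative choice, not a recommendation):

```python
import hashlib, hmac, os

def new_password_record(password: str):
    """On every password change, generate fresh random salt (step one)
    with more bits than the resulting digest (step two)."""
    salt = os.urandom(64)  # 512 bits of salt vs. a 256-bit digest
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest    # store both; the salt is not secret

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```

With per-password salt, a precomputed (rainbow) table would have to be rebuilt for every salt value, which defeats the point of precomputation.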

jmdesp June 21, 2011 6:42 AM

The part where top-level researchers wait for general acceptance before seriously searching for a weakness is chilling.

An interesting thing is that SHA-1 has already spent 7 years in the weakened state without a collision being found; just another two and that will be as long as MD5 had before it was broken.

@swim: The economics of Bitcoin is completely wrong. Real economic science (and not the frequent “gold, gold, gold” raving) says that the money supply must follow economic activity, i.e. grow at roughly the same speed as GDP (actually, the currently dominant thought is that inflation will show whether the growth of the money supply is adequate or not; IMO this ignores that monetary creation can be captured and not immediately generate inflation). Hence the prohibition on governments printing their own money, because it would always be too tempting to create too much, and instead the control of the money supply through interest rates (which regulate the speed at which banks create money by issuing mortgages).

The trouble with Bitcoin is that its monetary creation is open-loop: there is simply no control at all over whether too many or too few bitcoins are created (perfect control is very hard, but that doesn’t mean no control at all makes sense), and it’s designed so that at some point no more bitcoins will be created (OK, a few will still be created, but if they are created much too slowly it’s in practice the same as not creating any). That means the easiest way to get rich will be to sleep on one’s bitcoins: just keep them, do nothing, and wait until they have more value. Which is really bad economics.

Frank June 21, 2011 6:42 AM

@Jonadab

IIRC ‘monkey banging the keyboard’ or going wild on your mouse doesn’t give you as much entropy as one might believe.

GothAlice June 21, 2011 7:21 AM

It’s interesting that bcrypt was mentioned, but, being based on multiple cycles of a modified Blowfish cipher, it is still parallelizable and vulnerable to efficient CUDA brute forcing, making it about as useful as any other hashing algorithm.

There is another algorithm that I use, scrypt, which isn’t just CPU-hard but also memory-hard (the output of all previous cycles feeds into subsequent ones), making this type of scalable brute force virtually impossible.

If you trust bcrypt, link me a copy of your password database. 😉 (I’m willing to reciprocate with my scrypt-backed tables; anonymized, of course.)
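For the curious, a minimal scrypt sketch using Python's hashlib.scrypt (available in builds linked against OpenSSL 1.1+; the cost parameters are illustrative, not recommendations):

```python
import hashlib, hmac, os

def scrypt_hash(password: str, salt: bytes) -> bytes:
    # n = CPU/memory cost, r = block size, p = parallelization.
    # Peak memory is roughly 128 * n * r bytes (~16 MB here), which is
    # what makes massively parallel GPU guessing farms expensive.
    return hashlib.scrypt(password.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)
stored = scrypt_hash("correct horse battery staple", salt)
assert hmac.compare_digest(stored, scrypt_hash("correct horse battery staple", salt))
```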

Dirk Praet June 21, 2011 8:28 AM

@ Jonadab

“ROT13 is mostly used when deliberate attacks are not a concern, to prevent someone from inadvertently reading the plaintext. ”

Actually, the only use I have for ROT13 is when tweeting or posting content on certain fora that could be considered offensive by audiences other than those it was intended for. It’s convenient enough to avoid flames from third parties or stupid questions from, say, half-wit recruiters or HR folks who, during an interview, appear to have done some internet background checking. And it doesn’t show up in Google searches on a particular subject. Leetkey is my friend.

Alan Kaminsky June 21, 2011 9:12 AM

@D.P.: “So, MD5 is broken, but we probably have a decade or more before it will possible to generate a malicious file with the same hash as a trusted file.”

It’s already been done. Back in 2008, the folks at the HashClash project created a rogue CA certificate — one whose CA signature is valid even though the CA never knowingly signed that certificate’s contents — by taking advantage of MD5’s brokenness.

http://www.win.tue.nl/hashclash/rogue-ca/

Alan Kaminsky June 21, 2011 9:20 AM

@Jonadab: “What I don’t understand, given the computing resources available these days, is why the most complex hash functions available produce such small results. On a modern system, 512 bytes is nothing. Why hasn’t anyone put forward a multi-kilobyte hash function yet?”

That’s already been done as well. So-called “sponge construction” hash functions can generate arbitrarily long message digests. Keccak, one of the SHA-3 finalists, is a sponge-construction hash function.

http://sponge.noekeon.org/
http://keccak.noekeon.org/

Skein, another SHA-3 finalist, although not a sponge-construction hash function, is also able to generate arbitrarily long message digests.

http://www.schneier.com/skein.html
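As a concrete sketch of what arbitrary-length output looks like: modern Python's hashlib exposes the Keccak-derived SHAKE extendable-output functions, which squeeze as many bytes as you ask for from a single sponge state (shown purely as an illustration of the sponge idea):

```python
import hashlib

xof = hashlib.shake_256(b"the quick brown fox")
print(xof.hexdigest(32))    # a 32-byte digest...
print(xof.hexdigest(4096))  # ...or a 4 KB digest from the same input
```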

Perseid June 21, 2011 9:28 AM

@GothAlice
“”” It’s interesting that bcrypt was mentioned, but, being based on multiple cycles of a modified Blowfish cipher, it is still parallelizable and vulnerable to efficient CUDA brute forcing, making it about as useful as any other hashing algorithm. “””
Bruteforcing a keyspace is inherently parallelizable, whatever the key derivation function might be. The only important property is the guaranteed complexity of a single call, so that the search as a whole takes too long to be done.

@tommy
“””Believe it or not, some online financial institutions are actually requiring reasonably strong pws – at least, compared to dictionary words. One site recently required:
[…]
They have an actual pw-strength meter at the page where you set up your login creds, and tell you how relatively strong it is.”””
I hate that. My passwords usually fail 1) and 2) although their entropy is about 70 bits (ever heard of Diceware?).

@Jonadab
“””Nonetheless, it seems to me that there are other applications where the extra computational time would be no big deal and we could be using significantly more complex hashes, if they existed (and had been widely peer reviewed and so on). “””
IIRC Skein is equally fast with small and large output sizes, and SHA-512 is faster than SHA-1 on 64-bit hardware, so CPU time wouldn’t even be an issue.

“””Theoretically, using two hashes (of roughly equal complexity) is approximately twice as secure as using just one of them. Problem is, in the field of security, everything is about magnitudes, and so “twice as secure” is almost the same thing as “barely more secure at all”. “””
What I find interesting in using two hashes, is that both have to be broken before the combination is insecure. This should increase the (secure) lifetime of your application.

Me June 21, 2011 9:41 AM

I particularly like the ‘slashdotter’ reactions; having stalked those forums frequently, I must say it is extremely accurate.

phred14 June 21, 2011 2:13 PM

So was “Duke Nukem Forever” really released by cryptography maintenance programmers? They don’t feel like switching hashes yet, but need to delay the inevitable.

So the obvious ploy is to release a compelling game, so that those GPUs will go back to 3D graphics instead of password cracking and calculating hash collisions.

Unfortunately, based on reviews it appears that they’ve failed in this goal with D.N.F.

tommy June 21, 2011 4:54 PM

@ Richard Steven Hack:

Maybe the feebs who watch the site, or auto-scan it, saw the advocacy of aarhy or t*r**r sp*ees once too often, so they’ve decided to interfere with your ability to post here? ;-D

Get a new ISP, a proxy, or even a different IP? Or enter this IP in your Hosts file in XP, thus bypassing DNS lookup. 92 dot 242 dot 140 dot 1. (which is configured not to answer pings, probably for DoS prevention – I don’t blame Bruce).

Maybe there is a DNS corruption. So, in Win XP, go to %windir%\system32\drivers\etc\, double-click hosts, and enter the above address, whitespace, then the hostname www.schneier.com (hosts files map hostnames, not URLs). Let me know if it works – the browser will always check there before doing a DNS lookup through the ISP.

@ Perseid:

Yes, my pseudorandom PW-generator failed on the first two or three tries, by sheer chance. It only takes a moment to generate another try, or you can go to
http://www.grc.com/pass and copy any compatible portion, in hex, ASCII, or alphanumeric, up to 63-64 characters. Refreshed every time you reload the page, and he has no idea what you copied.

Yes, I’m aware of Diceware. It won’t meet the requirements for numbers and keyboard characters, though you could add them yourself.

You have to understand the target of the bank’s rules — what I said before: People who put in their gf’s name – “Jennifer”. Bank says, no cap on first letter, but need a cap. So they try “jennifeR”. Bank says, need a number. “jennifeR2”. There was a study of tens of thousands of pws captured in one hack, and this “root plus a number at the end, and sometimes one at the beginning” pattern was waaay common. Which naturally got added to attackers’ dictionaries. They’re trying to help the clueless avoid weak pws, and if that’s a bit inconvenient for you and me, it’s a small price to pay. Configuring PW Safe’s random generator, or using Gibson’s site above, it’s easy to get one that meets the rules.

@ Andy:

I can’t understand what you’re trying to say. I realize English isn’t your native language.

“Are stuff it invert it” – in English, telling someone to “stuff it” is an insult. 😉

Macarthur June 21, 2011 9:18 PM

@GothAlice

That’s not always feasible for everyone. I would be using scrypt, but I don’t currently have an interface for it for my web applications. I could write one up, but I don’t feel comfortable doing that, since I’m no cryptography expert.

I chose bcrypt because of how much stronger it was than anything else out there. I saw scrypt a long, long time ago, when I was first looking into algorithms.

The primary issue with it is that there aren’t a lot of easy-to-use libraries like there are for bcrypt. Once scrypt is more widely available, a lot more people will use it. Also, it uses SHA-256 internally as its hashing algorithm, so once the SHA-2 family is done and over with, you’re done and over with too.

I do like the memory requirements (beyond the time requirements), but the thing feels just like PBKDF2, which is more of a key-derivation algorithm and that’s it. Someone would essentially have to break it out of the rest of the utility, which is likely just a lot of SHA-256.

If I’m wrong about this, I’d love to hear it, but it feels like scrypt is essentially PBKDF2 with a memory requirement added on, and that it wasn’t made with storing password hashes in mind.

I have read the paper and seen its cost claims compared to other schemes, but it seems aimed more at key derivation for file encryption than at storing hashed passwords in a database.

Andy June 21, 2011 10:11 PM

@tommy “Are stuff it ‘invert’ it” – can’t remember, but AND/OR should be reversible
and
00 0
01 0
10 0
11 1 removed
and inverted
00 1
01 1
10 1
11 0 removed

or
1011
0100
0010
0110
01
10
00
11

@RonK: “On the other hand, when an attack requiring > 2**200 bits of memory is suggested I fail to get excited” – if the seeds can be found for the algo, you can lower the search space.
If the hash/cipher’s output is 16 bytes, send a string or password that’s all the same, measure the difference between the outputs and divide by 16; use that as a mixer, and an odd value that doesn’t repeat every 255 (or the longest repeatable line from 0 to greater than half the sbox; rinse, repeat and use that value).

RonK June 22, 2011 1:21 AM

@ Andy

Sorry, Andy, I think I have a better chance of discovering a collision for the SHA-256 hash than of understanding your comment on my post…

Perseid June 22, 2011 2:13 AM

“””Also, it uses SHA-256 internally as its hashing algorithm, so once the SHA-2 family is done and over with, you’re done and over with too.”””
Can anyone explain to me why a collision or preimage attack on a hash weakens its properties as a key-strengthening function (for small keyspaces like 8-letter passwords)? I’ve read that several times now and I just don’t believe it.

@tommy
I know, I know, it’s all about tricking the average person to use good passwords.
Yet it feels wrong to always only tell people “use long passwords”, “use numbers”, “also use uppercase letters”, “no dictionary words” and never explain the underlying concept of choosing a large keyspace. Compared to the concepts necessary to use TLS properly this should be relatively easy.

D.P. June 22, 2011 2:48 AM

@Mark R

“Doesn’t this count?”

No, it does not.

You’re confusing the ability to construct two files with the same MD5 hash, which was demonstrated many years ago, with the ability to create a file matching a pre-defined MD5 hash.

Obviously, hash collisions are a security problem in some cases (such as issuing certificates), but in many others, such as signing an executable file, a collision does not matter, because we trust its author. On the other hand, the ability to create a malicious file with a given MD5 hash is probably still a decade away…

Clive Robinson June 22, 2011 3:22 AM

@ Swim,

“In fact, I’d like to see a Bitcoin post-mortem by Schneier at some point”

Well, it’s not by Bruce, but there is an interesting set of mini post-mortem articles over on the Financial Cryptography site, and the author knows the business and economics of alternative currencies very well:

https://financialcryptography.com/mt/archives/001327.html

Andy June 22, 2011 3:54 AM

@Perseid: “sha256”, “for small keyspaces like 8 letter passwords”
The maths could be wrong, but if you have, say, 50% collisions with 32 bytes, then 16 bytes would produce the same hash (password), and 8 bytes might produce 50+(50), i.e. 75%, duplicate hashes. Most of the passwords people use would produce the same thing, so you could log in with the wrong password.

Perseid June 22, 2011 9:07 AM

@Andy
Are we talking about the same thing? I was thinking about using a broken cryptographic hash function (like MD5, SHA-1, or probably soon SHA-256) in a setting like PBKDF2. The output would still be longer than 127 bits.
Maybe I should have said password space instead of keyspace; that would have been more precise.

MyCat June 22, 2011 10:24 AM

@Clive Robinson

Firefox won’t let me connect to that Financial Cryptography site since it has an invalid certificate and I don’t want to add an exception. Fortunately Internet Explorer is less strict.

Macarthur June 22, 2011 2:31 PM

@Perseid: with most algorithms like PBKDF2, you only extract x bytes of information from the hash. So if you were using SHA-256 but only AES-128 (no idea why you’d use it instead of AES-256) or some other algorithm with a 128-bit key, then you only have half of the hash to work with as far as safety goes.

Also, I believe he was talking about the resulting hash; whether it’s used as an encryption key or just stored in the database, it’s pretty much the same thing.

If your primary cryptographically secure primitive is gone, as in the hypothetical case of SHA-256 being broken, then everything built on it is going to produce hash collisions. PBKDF2 is made primarily to slow people down, to make brute-forcing harder; its actual security is not much higher than a single hash (someone please correct me if I’m wrong).

It’s like double-encrypting with, say, AES, adding a packed integer from the current time on each pass of the loop. If AES is broken, your entire security model is broken; even if you do extra things on top of AES, once it’s broken to the point where people can enter any key they want (with a 50% chance of it working), your ‘special sauce’ is useless.

Once more, everything I’ve said above is based on my extremely limited understanding of cryptography; it may be completely wrong, and if it is, I’d love for someone to correct it so that anyone coming along later won’t take what I said as truth.
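On the “slow people down” point, a small sketch of what the PBKDF2 iteration count buys (Python; the counts are illustrative): each extra factor in iterations multiplies the attacker’s per-guess cost by the same factor, without changing the underlying hash at all.

```python
import hashlib, os, time

salt = os.urandom(16)
for iterations in (1_000, 100_000):
    t0 = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, iterations)
    print(f"{iterations} iterations: {time.perf_counter() - t0:.4f}s per guess")
```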

Nick P June 23, 2011 12:34 AM

@ MyCat

“Firefox won’t let me connect to that Financial Cryptography site since it has an invalid certificate and I don’t want to add an exception. Fortunately Internet Explorer is less strict.”

Should I laugh or be concerned about this statement?

tommy June 23, 2011 3:11 AM

@ Nick P.:

You want to laugh? MS Secure Update (yes, Virginia, there is such a thing):

https://update.microsoft.com/microsoftupdate/v6/default.aspx

used to throw a certificate mismatch error a few years ago, even in IE. I wrote their “responsible disclosure” unit, because I didn’t know who else to write. It wasn’t a vuln, just a minor name mismatch (www vs no www), but if it throws cert warnings… The reply: “Not my job. (will pass it on.)” No reply ever again, but a few months later, it was fixed.

Guess what? With the passage of a few years (and my switch from IE to Fx), you can now get the same error, even in Firefox. Click the link and see…

(I OK’d it and Certificate Patrol stored it, so I don’t get it any more.)

tommy June 23, 2011 3:18 AM

@ Perseid:

You want to explain “keyspace” to a population that can’t balance their own checkbooks or give proper change without the electronic register computing the correct change for them? 😉 … You’re giving the vast majority way too much credit. Math scores in the US have been declining steadily for half a century; Probability and Statistics or Permutations and Combinations are way beyond those who don’t quite pass “general math”, and never even try algebra. (sigh)

Clive Robinson June 23, 2011 6:14 AM

@ MyCat,

“Firefox won’t let me connect to that Financial Cryptography site since it has an invalid certificate and I don’t want to add an exception.”

The problem is that the root certificate of the CA that issued the FC certificate is not in your Firefox browser; it might be in your MSIE browser.

A short while ago Bruce changed his certificate issuer, and the root cert is not in my browser, so my browser (on my smartphone) throws an exception (and rightly so).

However, it made me think again about CA root certificates, and my view, for various reasons, is that “they cannot be trusted in the slightest”.

Thus I’ve decided to rip out all the root certs in my browsers and use exceptions on a site-by-site basis.

The big reason is “upstream attack vectors”.

There is a problem with root certs and some browsers: as long as a cert is signed by a root cert, it is accepted with little or no further checking (@tommy can give more details on this).

Further, there is a very well known vulnerability in ordinary DNS (which many people still use) and in the various IP caches on people’s machines (which are insecure by default).

Without going into the gory details, any attacker who is upstream of you can provide a false domain-name-to-IP-address translation, and there is little you can do to check it (which is why we now have a secure DNS system that people should be using, especially on live CDs, etc.).

Another problem is an “owned machine” on the same network segment. This applies to many many users be they corporate or ISP connected.

Put simply, your machine does not actually use IP addresses at the lowest (physical) network level; it uses Ethernet MAC or equivalent addresses. If I own a machine on your network segment, I can send out network packets (spoofed ARP replies, for instance) that will route your non-local network traffic through the owned machine as though it were the network gateway…

As the person who has “owned” that machine, I can change the IP addresses in the packets to whatever I like, which means I can send you to “ScumBags.R-Us.crook” instead of “CheepSkate.Bank.com” where you have your account. It also means I can MITM the SSL negotiation phase and set up an SSL bridge, unless you have actually taken the time to ensure you have a “good and safely stored” copy of the bank’s SSL cert…

Oh, and there are plenty of other tricks.

The real trouble with PKI is that it was designed for “offline” use, whereas nearly everybody uses it “online”, which opens a series of potentially gaping security holes when browsers do not work the way they should…

Nick P June 23, 2011 3:04 PM

@ tommy

That’s a trip. Although I was actually laughing that he was worried about a certificate warning (which is rarely a vulnerability), then went to the site in the least secure, most targeted web browser and felt safe. Bruce often says crypto is the strongest link in the chain and that it will break somewhere else. Perhaps he should be more worried about his browser’s ability to resist attack than about adding or removing a certificate exception in Firefox. 😉

John Hardin June 26, 2011 9:33 AM

@Jay et al.: That’s unfortunate, but O(M+N) is still better than O(M) or O(N).

When I think about this I usually am doing so in the context of long-term verification of the integrity of documents, where a synthetic collision would probably be easy to detect through simple visual inspection of the text being hashed (I hope), rather than password hashes and the like. Think GPG signatures on text.

For the purposes of hashing passwords or verifying network traffic or other short-lifetime uses where the source text doesn’t necessarily need to “make sense” by itself, and for new development, I agree a newer, stronger hash should be used, with strong salting if appropriate.
