Comments

z March 25, 2014 6:28 AM

I like this, but I fear whatever they come up with won’t really be used unless NIST also adopts it. Companies that want to store your passwords securely want to follow NIST standards so they can cover their ass if there’s a problem. They can say “Hey, we followed the US government standard for XYZ” when they get sued. The companies who don’t store your password securely don’t care. It’s not that they don’t know how to do it right, or can’t find developers who do; it’s a lack of interest.

However, this should give developers who are interested in doing things properly a good set of guidelines to follow, and hopefully will kickstart a movement for a standardization body to update their standards as well.

Per Thorsheim March 25, 2014 6:44 AM

Ahem… Building anything in compliance with NIST standards seems a little less of a smart idea now than it did just a few years ago, depending of course on who you reckon as your enemy.

uh, Mike March 25, 2014 7:52 AM

Aren’t we trying to relax reliance on passwords? If they weren’t so vital, it wouldn’t be worthwhile to make them harder to crack.

Likewise, I wish a directory of social security numbers would leak. We need to stop abusing those as well.

Jason C March 25, 2014 9:02 AM

Why not promote a wider use of bcrypt? It has a very open license, and is a drop-in replacement for UNIX password hashing, so why not other OSs? The OpenBSD guys even use it for Web Authentication now. As computing power grows, you just increase the iteration count to keep it highly resistant to brute force search attacks.
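For illustration, this is how the tunable cost factor looks with the Python bcrypt package (the cost value here is only an example, not a recommendation):

```python
# Minimal sketch of bcrypt's tunable work factor, using the Python "bcrypt"
# package (pip install bcrypt). Raise the cost as hardware gets faster.
import bcrypt

password = b"correct horse battery staple"

# gensalt() embeds the cost factor (2^rounds iterations) in the salt string.
hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

# Verification re-reads the salt and cost from the stored hash, so old hashes
# keep working even after you raise the cost for new ones.
assert bcrypt.checkpw(password, hashed)
```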

Matt March 25, 2014 10:27 AM

I agree with Jason C. What password hashing problems are there that the proper use of bcrypt doesn’t solve?

Matt March 25, 2014 10:48 AM

And, yes, I read the page. The first problem (“The poor state of passwords protection in web services:”) isn’t going to be solved by having more hashing options. It’s solved by teaching people that proper hashing is important.

The second problem… I suppose it might be nice if there were more options? But again, I’m unclear on what bcrypt doesn’t provide that they think is needed.

s-Offtopic-Mylar March 25, 2014 10:54 AM

I stumbled across this today while browsing:

“Mylar is a platform for building secure web applications.

Web applications rely on servers to store and process confidential information. However, anyone who gains access to the server (e.g., an attacker, a curious administrator, or a government) can obtain all of the data stored there. Mylar protects data confidentiality even when an attacker gets full access to servers. Mylar stores only encrypted data on the server, and decrypts data only in users’ browsers. Simply encrypting each user’s data with a user key does not suffice, and Mylar addresses three challenges in making this approach work. First, Mylar allows the server to perform keyword search over encrypted documents, even if the documents are encrypted with different keys. Second, Mylar allows users to share keys and data securely in the presence of an active adversary. Finally, Mylar ensures that client-side application code is authentic, even if the server is malicious. ”

http://www.technologyreview.com/news/525651/new-approach-could-stop-websites-from-leaking-or-stealing-your-data/

http://css.csail.mit.edu/mylar/

Has anyone looked at this, or can anyone offer informed comments? (I can’t.)

Anura March 25, 2014 11:35 AM

@Matt, Jason C

bcrypt uses a fixed amount of memory, so it isn’t flexible enough to protect against all threats. I don’t think there are any ideal alternatives out there: scrypt is overly complicated (IMO, of course) and subject to timing attacks, and PBKDF2 uses so little memory that attackers get a huge advantage from GPUs. There are also some theoretical problems with existing functions, since iterated functions can reduce collision resistance (e.g., if you hash the same password with two different salts, the more iterations you run in PBKDF2, the more likely it is that both will hash to the same value).
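For concreteness, both of those functions are exposed by Python’s standard hashlib; the parameters below are illustrative only:

```python
# Illustrative only: the two KDFs discussed above, via Python's hashlib.
# Parameter choices here are examples, not recommendations.
import hashlib
import os

password = b"hunter2"
salt = os.urandom(16)

# PBKDF2-HMAC-SHA256: cost is controlled only by the iteration count, and it
# needs almost no memory, which is what makes GPU attacks so effective.
pbkdf2_key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

# scrypt (requires OpenSSL 1.1+): the n (CPU/memory cost), r (block size), and
# p (parallelism) parameters force the attacker to spend memory as well as time.
scrypt_key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
```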

One problem is that very little work has been done on the theory of key derivation functions in the abstract, so there are no real guidelines for creating them. I’ve been working on a function that addresses all the concerns I have. It’s a very simple algorithm that works with existing hash functions, but I doubt I can get a paper ready by Monday (I still have to test against a GPU, and I have no experience with GPU coding).

As for two-factor authentication, usually that still involves passwords. That also really only works for server-based systems. If you want your laptop secure from theft, then even if you have a key on a thumb-drive, it’s still a good idea to have a password along with it, using a key-stretching algorithm to make brute forcing more difficult.

Anura March 25, 2014 11:42 AM

Also, another problem with bcrypt is it doesn’t work as a general-purpose key-stretching algorithm because it has a fixed output length. If you want a 256-bit encryption key, and a 256-bit MAC key, you have to come up with your own system with bcrypt.
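One common workaround (a generic sketch, not something bcrypt itself provides) is to do the expensive stretching once and then expand the result into labeled subkeys with an HKDF-expand-style HMAC step:

```python
# Sketch: stretch once (scrypt is only a stand-in for whatever slow KDF you
# prefer), then cheaply expand the result into independent subkeys with HMAC.
import hashlib
import hmac
import os

password = b"hunter2"
salt = os.urandom(16)

# The slow part: any CPU/memory-hard stretching function would do here.
master = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

def subkey(master: bytes, label: bytes, length: int = 32) -> bytes:
    """Derive a labeled subkey from the stretched master secret (cheap)."""
    return hmac.new(master, label, hashlib.sha256).digest()[:length]

enc_key = subkey(master, b"encryption")   # 256-bit encryption key
mac_key = subkey(master, b"mac")          # 256-bit MAC key
```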

Evan March 25, 2014 6:25 PM

@uh, Mike: I agree with you about SSNs, but I think part of the problem with multifactor authentication is that, as services increasingly move off of local systems and on to the cloud, any secondary factor ultimately boils down to another password – i.e. a sequence of bytes. Most secondary factors in use are more or less obtainable via investigative work (mother’s maiden name, first school, place you were born), and even those that aren’t (e.g. fingerprints) can be spoofed by a malicious client.

Badtux March 25, 2014 8:28 PM

@Evan, the answer to your question about two-factor authentication is a secondary security device that generates a new authentication token every minute or so. The attacker must then possess both your password and the secondary security device in order to compromise your account. The most popular secondary security device right now appears to be a smartphone running Google Authenticator. See RFC 4226 for more information on the underlying HOTP algorithm (Google Authenticator actually uses the time-based variant, TOTP, defined in RFC 6238).

Is this completely secure? No. If the service is utterly compromised, the secret key used to sync HOTP between the service and your device can also be compromised. But if the service is so utterly compromised that both your password and your secondary authenticator are compromised, then you’re fscked anyhow. What this does do is ensure that scammers with keyloggers cannot get both your password and your secondary authentication source, since the sequence of characters you enter onto, e.g., the AWS web site changes every minute. They’d have to use those credentials mighty quickly to make it worthwhile.
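For reference, the HOTP computation from RFC 4226 fits in a few lines (TOTP just substitutes the current 30-second time step for the counter):

```python
# Minimal HOTP sketch per RFC 4226. TOTP (RFC 6238) simply sets
# counter = int(time.time()) // 30.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Return the HOTP code for the shared secret and counter value."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
assert hotp(b"12345678901234567890", 0) == "755224"
```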

Chris Abbott March 25, 2014 11:01 PM

I’ve got an idea, tell me what you guys think:

Step 1: Hash the password

Step 2: Encrypt the hash of the password with a randomly generated key. (If we’re talking about something like a website, store the random key on a different server to reduce the possibility/effectiveness of an attack)

I’m sure there are already schemes that do something like this, but imagine if there were 2^256 possible salts and you didn’t have access to the keys, which are stored separately from the encrypted hashes. Or imagine how badly it would slow you down to have to decrypt each hash before using your rainbow table. It would make an attacker’s life more difficult. They wouldn’t really be salts, they’d be keys. It would be like a super-salt.
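Roughly, the idea sketched above might look like this; AES-GCM and scrypt are stand-ins, and the extra key (sometimes called a “pepper”) is the part that would live on the other server:

```python
# Rough sketch of the idea: hash the password with an ordinary salt, then
# encrypt that hash under a key kept off the database server.
import hashlib
import hmac
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

pepper_key = AESGCM.generate_key(bit_length=256)   # stored on a separate server in this idea

def protect(password: bytes):
    """Return (salt, nonce, ciphertext) to store in the user database."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
    nonce = os.urandom(12)
    return salt, nonce, AESGCM(pepper_key).encrypt(nonce, digest, None)

def verify(password: bytes, salt: bytes, nonce: bytes, ciphertext: bytes) -> bool:
    digest = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
    stored = AESGCM(pepper_key).decrypt(nonce, ciphertext, None)
    return hmac.compare_digest(stored, digest)

salt, nonce, ct = protect(b"hunter2")
assert verify(b"hunter2", salt, nonce, ct)
```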

Mike Amling March 25, 2014 11:45 PM

SRP is a better protocol, IMHO, than, say, sending the password over https and having the server hash it, or having the client hash it and send the result over https. But for widespread adoption, SRP would need to be built into all the common browsers.

I like Catena (http://eprint.iacr.org/2013/525.pdf), for the hashing part of the protocol.

The problem with password hashing is that key-stretching is not efficient for the good guys. It increases the attackers’ effort by a factor of N only when it increases the normal users’ effort by a factor of N. Far better to use more password entropy, which increases the attackers’ effort by a factor of 2**X for constant normal users’ effort.
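A quick back-of-the-envelope comparison of the two knobs (numbers purely illustrative):

```python
# Back-of-the-envelope comparison of stretching versus added entropy.
stretch_factor = 1_000_000        # attacker slowed ~10^6x, but so is every legitimate login
extra_entropy_bits = 20           # roughly 4 extra random lowercase letters
entropy_factor = 2 ** extra_entropy_bits

print(f"stretching: attacker x{stretch_factor:,}, user also x{stretch_factor:,}")
print(f"entropy:    attacker x{entropy_factor:,}, user cost unchanged")
```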

I agree that encrypting the server’s authentication data with a key that is harder to steal than the data makes sense.

I think authentication would be better handled by, say, a zero-knowledge proof of possession of a private key. The problem is carrying around the bits of the private key. Maybe some kind of dongle similar to dongles used for 2fa?

Anura March 25, 2014 11:59 PM

@Chris Abbott

If I understand you correctly, your algorithm is this:

encrypt(secret-salt, hash(password));

The thing is, if the attacker gets the salt, then it’s no more secure than hash(password). The purpose of these schemes is that even if the salt is not secret, it still adds a huge cost for an attacker trying to brute-force it. Just including a salt makes you secure against rainbow tables.

Anura March 26, 2014 12:05 AM

@Mike Amling

For your own stuff, yes, definitely go with more entropy. The thing is that we can’t force people to choose strong passwords; as a web developer, the best way to protect your users if you are attacked is key stretching. For your personal computer or a low-traffic server, you can also devise a key-stretching algorithm that can be multi-threaded (my algorithm will do this, if I ever finish it), so if you have four cores you can run four threads and increase the attacker’s cost by a factor of 4N instead of N.
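A sketch of that multi-core idea, purely for illustration: run independent PBKDF2 lanes with distinct per-lane salts in parallel processes, then fold the lane outputs into one derived key.

```python
# Illustration of parallel key stretching: independent PBKDF2 lanes run in
# separate processes, combined so an attacker must compute every lane.
import hashlib
from concurrent.futures import ProcessPoolExecutor

def lane(args):
    password, salt, lane_index, iterations = args
    lane_salt = salt + bytes([lane_index])            # distinct salt per lane
    return hashlib.pbkdf2_hmac("sha256", password, lane_salt, iterations)

def parallel_stretch(password: bytes, salt: bytes, iterations: int = 200_000,
                     lanes: int = 4) -> bytes:
    with ProcessPoolExecutor(max_workers=lanes) as pool:
        outputs = list(pool.map(lane, [(password, salt, i, iterations)
                                       for i in range(lanes)]))
    # Hash the concatenated lane outputs so all lanes are required.
    return hashlib.sha256(b"".join(outputs)).digest()

if __name__ == "__main__":
    key = parallel_stretch(b"hunter2", b"0123456789abcdef")
    print(key.hex())
```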

Thoth March 26, 2014 8:31 AM

1.) Trying to store some secrets/random secrets/salt on another server and hoping nothing happens is not the best idea. If you are using an HSM to protect it, that gives a greater margin of security, but it means you have to buy those expensive HSMs, and they are usually black boxes whose actual security level no one will ever know.

2.) The Password Hashing Competition is a good beginning exercise to get more people involved in designing password hashing/encrypting algorithms and getting the world to review them. More such global collaborations should have been done a long time ago.

3.) Nothing is wasted in the PHC, because the algorithms and the competition itself might draw more of the younger generation’s interest to the subject of computer security, and who knows what ideas may appear.

Somebody March 26, 2014 11:23 AM

Mike:

N and X measure different things and serve different purposes; they should not be conflated.

X is the entropy (roughly, the length) of the password. If I have to remember and type it, it is limited by my memory and ability to type, so its maximum value is essentially constant.

N measures computing power. This is dependent on the available technology and changes with time.

For any given X there is some level of computing power that breaks a hash in, say, one hour. As long as computers keep improving, N has to keep increasing to maintain security.

Someone March 27, 2014 12:24 PM

Did anyone have a chance to look at the PolyPassHash submission? It uses Shamir secret shares in an interesting way that makes it so that passwords can’t be individually cracked by an attacker.

The claims about added security are pretty amazing and the performance / space impact seems to be minimal. What do you think about this?

Anteop Car Building March 28, 2014 6:44 PM

I have a suggestion.

  1. You salt the Login Name with a shared salt and then hash it with BCrypt.
  2. You then take the original plain-text Login Name and use it as a key to encrypt the password with AES. The result is then salted with a unique salt and finally hashed with BCrypt.
  3. The user’s Display Name is set to their User ID (an integer), rather than their Login Name, when the account is created. The user can change it later, except that it cannot match anyone else’s Display Name or, in a case-insensitive comparison, their own Login Name. This test, specifically the case-insensitive comparison of Display Name to Login Name done via hashes, is the main reason Login Names (as opposed to passwords) and Display Names would have to be limited to a reasonable number of characters.

My question is: if the database were copied and all storage procedures were known in every detail, would this make it significantly harder to recover the passwords than storing plaintext usernames and uniquely salted, bcrypted passwords?
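To make the question concrete, here is a literal sketch of the three steps as described; SHA-256 is used to give the AES step a valid key length and to keep the bcrypt inputs short, which are details the description leaves open, and nothing here is an endorsement of the scheme:

```python
# Literal sketch of the proposed steps, with assumed details flagged above.
import hashlib
import os
import bcrypt                                                    # pip install bcrypt
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

SHARED_SALT = b"site-wide-shared-salt"

def store_credentials(login_name: bytes, password: bytes):
    # Step 1: Login Name plus shared salt, hashed with bcrypt.
    login_hash = bcrypt.hashpw(
        hashlib.sha256(SHARED_SALT + login_name).hexdigest().encode(),
        bcrypt.gensalt())

    # Step 2: encrypt the password with AES under a key derived from the
    # plaintext Login Name, then bcrypt the result with a unique salt.
    aes_key = hashlib.sha256(login_name).digest()
    nonce = os.urandom(12)
    encrypted_password = AESGCM(aes_key).encrypt(nonce, password, None)
    password_hash = bcrypt.hashpw(
        hashlib.sha256(encrypted_password).hexdigest().encode(),
        bcrypt.gensalt())

    return login_hash, nonce, password_hash
```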

Mike Amling April 5, 2014 11:02 PM

@Someone

The PolyPassHash paper (say that five times) advocates encrypting ordinary users’ password hashes with a symmetric key that it regards as more secure against hacking because it is not written to disk. The symmetric key is initialized after every server restart by “administrative” users logging in.

The “administrative” users have Shamir secret shares on disk that are recoverable from the password hashes obtained during administrators’ logins. When a threshold of administrative users have logged in after a restart, the server combines their unlocked Shamir secret shares to find the symmetric key to allow decryption of the ordinary users’ password hashes.

Administrative users are trusted to have passwords that are not leaked and not guessable by hackers.

To hack ordinary users’ passwords from the contents of files on disk, you’d need either A) access to the symmetric key from the server’s working memory or B) enough “administrative” users’ passwords to recover the symmetric key from the Shamir secret shares.
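For anyone unfamiliar with the building block, here is a minimal (k, n) Shamir threshold sketch over a prime field; it shows only the share/recover step, not PolyPassHash itself, and a real system would use a vetted library:

```python
# Minimal (k, n) Shamir secret-sharing sketch over a prime field, to show the
# building block only; constant-time concerns are ignored here.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime larger than any 16-byte secret

def make_shares(secret: int, k: int, n: int):
    """Split secret into n shares, any k of which recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x = 0 to get the secret back."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)            # e.g. the server's symmetric key
shares = make_shares(key, k=3, n=5)       # one share per administrator
assert recover(shares[:3]) == key         # any 3 administrators suffice
```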

It’s not a panacea. IIRC, last year’s Target hackers read data from working memory (of POS terminals, which may not be as hard to attack as the servers PolyPassHash is aimed at), so the symmetric key is still at risk.

Note: Where I work, we’ve been initializing private keys for one particular server from Shamir secret shares provided by ‘administrative’ users since 2004 (although not as a byproduct of those administrators logging in).
