Schneier on Security
A blog covering security and security technology.
May 10, 2005
The Potential for an SSH Worm
SSH, or secure shell, is the standard protocol for remotely accessing UNIX systems. It's used everywhere: universities, laboratories, and corporations (particularly in data-intensive back office services). Thanks to SSH, administrators can stack hundreds of computers close together into air-conditioned rooms and administer them from the comfort of their desks.
When a user's SSH client first establishes a connection to a remote server, it stores the name of the server and its public key in a known_hosts database. This database of names and keys allows the client to more easily identify the server in the future.
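For illustration, a plaintext known_hosts entry pairs the server's name (and often its IP) with its public key. A hypothetical example, with a made-up hostname and a truncated key:

```
server.example.edu,10.0.0.12 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAIEA2x...
```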
There are risks to this database, though. If an attacker compromises the user's account, the database can be used as a hit-list of follow-on targets. And if the attacker knows the username, password, and key credentials of the user, these follow-on targets are likely to accept them as well.
A new paper from MIT explores the potential for a worm to use this infection mechanism to propagate across the Internet. Already attackers are exploiting this database after cracking passwords. The paper also warns that a worm that spreads via SSH is likely to evade detection by the bulk of techniques currently coming out of the worm detection community.
While a worm of this type has not been seen since the first Internet worm of 1988, attacks have been growing in sophistication and most of the tools required are already in use by attackers. It's only a matter of time before someone writes a worm like this.
One of the countermeasures proposed in the paper is to store hashes of host names in the database, rather than the names themselves. This is similar to the way hashes of passwords are stored in password databases, so that security need not rely entirely on the secrecy of the database.
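As a rough sketch of what this involves: the scheme OpenSSH adopted computes an HMAC-SHA1 of the hostname, keyed with a random per-entry salt, and base64-encodes both. The following mimics that computation with openssl (the hostname is made up; this is not OpenSSH's own code):

```shell
# Sketch of a salted hostname hash in the style OpenSSH adopted: an
# HMAC-SHA1 of the hostname, keyed with a random 160-bit salt, with
# salt and digest base64-encoded. The hostname is made up.
host="server.example.edu"

salt_b64=$(openssl rand -base64 20)       # random per-entry salt
salt_hex=$(printf '%s\n' "$salt_b64" | openssl base64 -d | od -An -tx1 | tr -d ' \n')
hash_b64=$(printf '%s' "$host" | openssl dgst -sha1 -mac HMAC \
    -macopt "hexkey:$salt_hex" -binary | openssl base64)

echo "|1|$salt_b64|$hash_b64|"            # hashed name field for the entry
```

Because the salt is stored alongside the digest, a legitimate client can still check whether a server it is connecting to matches an entry, but an attacker reading the file learns nothing about the hostnames directly.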
The authors of the paper have worked with the open source community, and version 4.0 of OpenSSH has the option of hashing the known-hosts database. There is also a patch for OpenSSH 3.9 that does the same thing.
The authors are also looking for more data to judge the extent of the problem. Details about the research, the patch, data collection, and whatever else they have going on can be found here.
Posted on May 10, 2005 at 9:06 AM
• 31 Comments
Hashing is great, but this quote defeats some of the effort:
"Unfortunately, the feature must be turned on manually via configuration options and each known_hosts file must be converted to a hashed format manually."
As worms get smarter, it won't be long before they have their own online lookup tables to speed up attacks when they encounter hashed data. More likely, though, they would be instructed to move on to easier targets in the interest of speedy propagation.
Also, shouldn't it hash the hostnames in the authorized_keys files as well?
If I understand the summary of this risk correctly, it is that a compromised account has a stored file of hostnames and public keys, which can then be connected to and the compromised username and password tried. While I can see a worm using this mechanism, I question whether it's much more effective than a passive sniffer watching outgoing SSH connections or sifting through logs.
That is a good thought, I think to defeat online lookups, those hashes should be salted.
A small salt (two characters for passwords is normal) would only make the hash table a bit bigger. I don't think it would make the storage requirements for the lookup table significantly more difficult.
If we could combine some characteristic of the localhost in there (the public key?) so that the hashes are (hopefully) unique to that machine (preferably with a random salt as well to further extend the search space) then that would make lookup tables significantly less useful.
I am not certain I understand the issue here. It seems this may be a search for a non-existent problem. Why not simply delete the known_hosts MRU (Most Recently Used) cache and configure the SSH client not to store the name and public key of servers connected to? Problem solved.
As I understand, the sole purpose of the known_hosts cache is for the convenience of the user (don't have to remember the name of the server, just select from a MRU list). So, I am not clear how having a hash table helps in this case. With a hashed list, the person using the client would have to remember the server name to do the lookup in the known_hosts MRU cache, kind of defeating the purpose of the MRU known_hosts cache.
No, rcme. The file in question is used to mitigate man-in-the-middle attacks. The first time you connect to a server, you are presented with the public key fingerprint (which you are supposed to verify). Once you accept it, it is added to the known_host file.
If you connect back to that host and the key has changed, you are presented with a warning.
Getting rid of this behavior would make SSH trivially MITMable (to coin a word).
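For reference, the warning on a changed key looks roughly like this (exact wording varies across OpenSSH versions):

```
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
```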
I didn't say anything about the size of the salt. However, looking at the MIT paper, it looks like they thought of this: they appear to be using the host's SSH key along with some random bits in the salt. I tend to think that is pretty unique.
In any case, I think we need to raise the difficulty of using known_hosts for host lookup well above the difficulty of just guessing other machines' names through other means. The latter can be done via correlations between machines: for example, if 50% of the people who have a login on machine A have a login on machine B too...
One thing to consider is that known_hosts files can be helpful in reconstructing a break-in. So hashing the hostnames will deny information to defenders as well as attackers. Probably still a net win, I guess.
Is hashing those hostnames enough? A smart worm would simply search the machine for any hostnames, IPs, and subnetworks, resolve the numeric ones (typically via DNS) to build a list of hostnames, and then go through that list checking which hashes they map to. This would take no time on today's fast machines. It would also make sense for the worm to keep running, monitor network connections, and continue checking in real time. Salting would help, but only to the extent that other programs (and users) can't get access to the salts. As for security options, they're generally useless (other than for marketing) if they're off by default, since most users probably don't even know they exist. If something is going to be converted, it should be done automatically without users ever noticing; otherwise no one will bother.
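A minimal sketch of the per-entry check described above, using openssl and made-up hostnames: given the salt and hash from one hashed entry, a worm could recompute the HMAC-SHA1 for each harvested candidate name and look for a match. (The salt thwarts precomputed tables, but not this kind of targeted dictionary check.)

```shell
# Sketch of the per-entry check described above, with made-up names.
# Given the salt and hash from one hashed known_hosts entry, recompute
# the HMAC-SHA1 of each harvested candidate hostname with that salt.
target="wiki.example.edu"                 # stands in for the hidden entry
salt_hex=$(openssl rand -hex 20)          # the entry's 160-bit salt
stored=$(printf '%s' "$target" | openssl dgst -sha1 -mac HMAC \
    -macopt "hexkey:$salt_hex" -binary | openssl base64)

for cand in mail.example.edu wiki.example.edu www.example.edu; do
    h=$(printf '%s' "$cand" | openssl dgst -sha1 -mac HMAC \
        -macopt "hexkey:$salt_hex" -binary | openssl base64)
    if [ "$h" = "$stored" ]; then
        echo "match: $cand"               # prints: match: wiki.example.edu
    fi
done
```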
I think this overstates the case a little and deflects attention from the underlying password / credential management issues that are of greater concern to me than local SSH client configuration. Enforced policies and good network design should control use and deter mobile code threats -- not your SSH client.
Sorry, I guess I will have to read the SSH spec. From the discussion here, the purpose of the known_hosts file is rather unclear. Is it an MRU, a trusted hosts list, or what? As you describe it, the known_hosts list appears to be some form of "trusted hosts" file, used to prevent MITM attacks by having a list of trusted servers (based on the public key associated with the host name in the file). But then having a known_hosts file seems unnecessary, since one could simply prevent MITM attacks by verifying the identity of the server using the server's public key each time the connection is made (just like SSL).
Surely, if this is a problem, then the bash shell's history is just as big a risk? It, too, after all, will contain lots of hostnames, all a worm would have to do is grep for "^ssh"...
I think the only "solution" here isn't technical, but user education: don't use the same password on all your accounts...
rcme - As I understand it there is (usually/always?) no way to verify an SSH connection beyond comparing the key to the value in known_hosts.
Tom - one argument would be .bash_history can be disabled (and often will be, on the off-chance someone types a password at the wrong prompt), known_hosts can't be disabled.
SSL requires a CA - a trusted third party - which SSH does not. The first time you connect to a host, you assume it hasn't been compromised, and subsequent connections can verify that at least you're connecting to the same host as the first time. There's no other way to do it except by remembering the hostname and its public key somehow.
Hmm. I've often had to edit my known_hosts file to deal with changes such as machines changing IP numbers or being reinstalled. Hashing the hostnames would make that inconvenient. How many users would deal with the inconvenience by just deleting the file when it gives them trouble? That would have a security cost, precisely in the area where SSH security is already weakest. So far I don't see a compensating security gain. If a worm wants to find machines to infect, then spraying across the local network would be more thorough and just as fast.
I agree with Richard Braakman. The regular reinstall and movement of the machines I access means that sometimes I have to manually remove a machine from my known_hosts file usually via something like:
grep -v machinename known_hosts > gg
mv gg known_hosts
A change to make it hard to remove bad machines from the known_hosts file will tempt people to bypass security.
There's no reason that you can't still mess around with a known_hosts file - in theory you would only need to hash the machinename beforehand (assuming the hash doesn't use a separate salt for each machine - which would be overkill).
As far as I can see, this would work:
grep -v `ssh_hash machinename` known_hosts > gg
mv gg known_hosts
A script for converting multiple users' files is provided by MIT too (which IMO should be included in the OpenSSH tarball).
About ~/.bash_history: an administrator could remove that and add "unset HISTFILE" to /etc/profile, I guess.
Removing the bash_history, or the known_hosts file doesn't address the main problem. Educating users not to use the same password on all machines does.
If an attacker is just looking to harvest hosts to attack why not just scoop /etc/hosts?
My /etc/hosts contains only localhost, while my bash history and known_hosts is far more interesting.
I am glad to hear there is a fix in the pipeline already.
There are a few misinformed comments on how the hashing in OpenSSH actually works. Hopefully this will clear them up:
- At present the hashing is not on by default, because it is a new feature. Once we are satisfied that it is stable then we may change the default.
- The hashes are computed over the hostname and are heavily salted (160 bits), so precomputation attacks are completely infeasible.
- It is still easy to edit a known_hosts file, because we have provided (in ssh-keygen) automated tools to find and delete hosts from the file. These work regardless of whether or not the hostnames are hashed.
- ssh-keygen has also been extended to allow it to hash an existing known_hosts file. We haven't (and probably won't) include any tools to hash the known_hosts files for all the accounts on a system, because it is so trivially easy to do with find(1).
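The helpers described above can be exercised against a throwaway file. A sketch, assuming OpenSSH 4.0 or later is installed (the hostname is made up):

```shell
# Demo of the ssh-keygen helpers described above (OpenSSH 4.0+),
# against a throwaway known_hosts file; the hostname is made up.
kh=$(mktemp)
key=$(mktemp -u)
ssh-keygen -q -t rsa -N '' -f "$key"      # scratch key standing in for a host key
printf 'server.example.edu %s\n' "$(cut -d' ' -f1,2 < "$key.pub")" > "$kh"

ssh-keygen -H -f "$kh"                    # hash the hostnames in place
ssh-keygen -F server.example.edu -f "$kh" # find the entry despite the hashing
ssh-keygen -R server.example.edu -f "$kh" # delete the entry
```

Note that ssh-keygen -H retains the original file as known_hosts.old, so the plaintext copy should be deleted afterwards.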
If you are curious about these features, please take the time to read the manual pages (or the source code) before posting speculation.
Is it just me, or does this seem like a relatively minor issue? Once you're on a system, as others here have pointed out, you've got plenty of other ways of finding new hosts to try to compromise.
Not that this is bad work, but it would be nice if the paper gave some idea of the scale of the problem in comparison to other problems with SSH, such as key management.
Wouldn't the new gssapi-with-mic patches for OpenSSH also mitigate this? The idea behind those patches is to use Kerberos (well, GSSAPI, really) to authenticate both the user and the host, eliminating the known_hosts problem altogether.
If you wanted to write an SSH worm, would it not be easier to patch ssh to log all hostnames, logins, and passwords/keys used to access other systems, and later use that information to spread itself? I find it hard to believe that known_hosts could actually be used as an attack vector.
"Wouldn't the new gssapi-with-mic patches  for OpenSSH also mitigate this?"
Sure. Just install Kerberos everywhere. That will be especially popular with people who just want to ssh into their routers (with one user account defined). After all, what's setting up a complete new user-management infrastructure, with its own bunch of vulnerabilities, when you can get protection against one single vulnerability (that's not even here yet) in a simple system?
Using GSSAPI/kerberos will also mitigate this. But it's not an option in many cases.
My point, Sascha, was that the GSSAPI patches would also mitigate the problem, not that they were the only solution, or necessarily the best solution for every case. And many people, especially those of us in higher ed, already have a Kerberos infrastructure in place.
I'm sorry, I mistyped earlier. I meant to refer to the gssapi-keyex patches, not gssapi-with-mic.
According to the ChangeLog for portable OpenSSH, this hashing is now part of the OpenSSH distribution, having been added in early March.
While the known_hosts file is always used, and hashing the hosts within it makes sense (w.r.t. a worm spreading), what about the ~/.ssh/config file?
This is where a user can store nicknames that map to full hostnames, details of which remote username to use for which host, etc. It cannot be hashed, since it is human-editable by its nature.
It has lots of useful info in it, and I suspect it is used by those people who have access to lots of machines, hence of interest to any potential worm writers.
I guess savvy admins now need to alias ssh/scp etc. to "ssh -F myconffile" and stop using the default name for config and hope worm authors do not start checking for command aliases...
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.