The Potential for an SSH Worm

SSH, or secure shell, is the standard protocol for remotely accessing UNIX systems. It’s used everywhere: universities, laboratories, and corporations (particularly in data-intensive back office services). Thanks to SSH, administrators can stack hundreds of computers close together into air-conditioned rooms and administer them from the comfort of their desks.

When a user’s SSH client first establishes a connection to a remote server, it stores the name of the server and its public key in a known_hosts database. This database of names and keys allows the client to more easily identify the server in the future.

There are risks to this database, though. If an attacker compromises the user’s account, the database can be used as a hit-list of follow-on targets. And if the attacker knows the username, password, and key credentials of the user, these follow-on targets are likely to accept them as well.

A new paper from MIT explores the potential for a worm to use this infection mechanism to propagate across the Internet. Already attackers are exploiting this database after cracking passwords. The paper also warns that a worm that spreads via SSH is likely to evade detection by the bulk of techniques currently coming out of the worm detection community.

While a worm of this type has not been seen since the first Internet worm of 1988, attacks have been growing in sophistication and most of the tools required are already in use by attackers. It’s only a matter of time before someone writes a worm like this.

One of the countermeasures proposed in the paper is to store hashes of host names in the database, rather than the names themselves. This is similar to the way hashes of passwords are stored in password databases, so that security need not rely entirely on the secrecy of the database.
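For illustration (the key material and hash strings here are invented and abbreviated), a plaintext known_hosts entry pairs a readable hostname with the server's public key, while the hashed format replaces the hostname with a salted hash:

```
# plaintext entry: hostname, optional IP, key type, base64-encoded public key
server.example.com,192.0.2.10 ssh-rsa AAAAB3NzaC1yc2E...

# hashed entry (the format OpenSSH adopted):
# |1|base64(salt)|base64(HMAC-SHA1(salt, hostname))
|1|kBjxW5Zk1Kp0...|f9qYdEuYrVfM... ssh-rsa AAAAB3NzaC1yc2E...
```

An attacker who steals the hashed file still gets the keys, but no longer gets a readable list of hostnames.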

The authors of the paper have worked with the open source community, and version 4.0 of OpenSSH has the option of hashing the known_hosts database. There is also a patch for OpenSSH 3.9 that does the same thing.

The authors are also looking for more data to judge the extent of the problem. Details about the research, the patch, data collection, and whatever else they have going on can be found here.

Posted on May 10, 2005 at 9:06 AM • 32 Comments

Comments

Israel Torres May 10, 2005 9:19 AM

Hashing is great, but this quote defeats some of the effort:
“Unfortunately, the feature must be turned on manually via configuration options and each known_hosts file must be converted to a hashed format manually.”

As worms are created to be smarter, it will be no time before they have their own lookup tables online to speed up attacks when they encounter hashed data. More likely, though, they will be instructed to move on to easier targets in the interest of speedy propagation.

Israel Torres

Don May 10, 2005 9:41 AM

If I understand the summary of this risk correctly, it is that a compromised account has a stored file of hostnames and public keys, each of which can then be connected to with the compromised username and password. While I can see a worm using this mechanism, I question whether it's much more effective than a passive sniffer watching outgoing ssh connections, or sifting through logs.

Tim Vail May 10, 2005 9:43 AM

@Israel

That is a good thought. To defeat online lookups, those hashes should be salted.

Zwack May 10, 2005 10:58 AM

@Tim

A small salt (two characters is normal for passwords) would only make the lookup table a bit bigger. I don't think it would make the storage requirements for the table significantly more demanding.

If we could combine some characteristic of the localhost in there (the public key?) so that the hashes are (hopefully) unique to that machine (preferably with a random salt as well to further extend the search space) then that would make lookup tables significantly less useful.

Z.
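A quick sketch of the idea Zwack and Tim are discussing, assuming the openssl command-line tool is available (the hostname is illustrative; OpenSSH's actual scheme is an HMAC-SHA1 of the hostname under a random per-entry salt):

```shell
host="server.example.com"       # hypothetical hostname
salt=$(openssl rand -hex 20)    # fresh 160-bit random salt for this entry

# HMAC-SHA1(salt, hostname): because every entry is hashed under its own
# salt, a precomputed table of hostname hashes is useless; the attacker
# must redo the hashing per entry, per file.
hash=$(printf '%s' "$host" \
  | openssl dgst -sha1 -mac HMAC -macopt hexkey:"$salt" -binary \
  | openssl base64)
echo "$hash"
```

The same hostname hashed in two different users' files yields two unrelated strings, which is exactly what defeats a shared online lookup table.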

rcme May 10, 2005 12:49 PM

I am not certain I understand the issue here. It seems there may be a search for a non-existent problem. Why not simply delete the known_hosts MRU (Most Recently Used) cache and configure the SSH client not to store the name and public key of servers connected to? Problem solved.

As I understand, the sole purpose of the known_hosts cache is for the convenience of the user (don’t have to remember the name of the server, just select from a MRU list). So, I am not clear how having a hash table helps in this case. With a hashed list, the person using the client would have to remember the server name to do the lookup in the known_hosts MRU cache, kind of defeating the purpose of the MRU known_hosts cache.

kj May 10, 2005 12:54 PM

No, rcme. The file in question is used to mitigate man-in-the-middle attacks. The first time you connect to a server, you are presented with the public key fingerprint (which you are supposed to verify). Once you accept it, it is added to the known_host file.

If you connect back to that host and the key has changed, you are presented with a warning.

Getting rid of this behavior would make SSH trivially MITMable (to coin a word).
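The warning kj mentions looks roughly like this (OpenSSH's wording, abbreviated):

```
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
```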

Tim Vail May 10, 2005 1:00 PM

@Zwack

I didn’t say anything about the size of the salt. However, looking at the MIT paper, it looks like they thought of this: they are using the host’s SSH key along with some random bits in the salt. I tend to think that is pretty unique.

In any case, I think we need to raise the difficulty of using known_hosts for host lookup well above the difficulty of just guessing other machines’ names through other means. The latter can be done by correlations between machines: if 50% of the people who have a login on machine A have a login on machine B too…

Ben Rosengart May 10, 2005 1:10 PM

One thing to consider is that known_hosts files can be helpful in reconstructing a break-in. So hashing the hostnames will deny information to defenders as well as attackers. Probably still a net win, I guess.

Ari Heikkinen May 10, 2005 1:51 PM

Is hashing those hostnames enough? A smart worm would simply search the machine for any hostnames, IPs, and subnetworks, and then resolve the numeric ones (typically from DNS), building a list of hostnames. The worm would then go through the gathered list of hostnames and check which hashes they map to. This would be done in no time on the fast machines we all have today. It would also make sense for the worm to keep running, monitor network connections, and continue checking in real time. Salting would help, but only by ensuring that other programs (and users) can’t get access to the hashes.

As for security options, they’re generally useless (other than for marketing) if they’re off by default, as most users probably don’t even know they exist. If something’s going to be converted, it should be done automatically, without users ever even noticing; otherwise no one’s going to bother.
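Ari's dictionary attack can be sketched in a few lines of shell. This is a hedged illustration using a scratch file and a throwaway key, and it assumes a modern ssh-keygen whose -F option can look a hostname up even in a hashed file; the candidate list stands in for names a worm might scrape from DNS, shell history, and so on:

```shell
# Build a scratch hashed known_hosts containing one host (throwaway key).
tmp=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -f "$tmp/key" -q
printf 'server.example.com %s\n' "$(cut -d' ' -f1-2 "$tmp/key.pub")" > "$tmp/known_hosts"
ssh-keygen -H -f "$tmp/known_hosts" >/dev/null 2>&1

# Hypothetical candidate hostnames the worm has scraped elsewhere:
printf 'server.example.com\nother.example.org\n' > "$tmp/candidates"

# ssh-keygen -F re-hashes each candidate under each entry's salt, so
# salting blocks precomputed tables but not a per-file dictionary attack.
while read -r h; do
  ssh-keygen -F "$h" -f "$tmp/known_hosts" >/dev/null && echo "$h is in known_hosts"
done < "$tmp/candidates"
```

This is exactly why the hashing raises the cost of target discovery without eliminating it: the attacker still needs an independent source of candidate names.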

Tim D. May 10, 2005 2:09 PM

I think this overstates the case a little and deflects attention from the underlying password / credential management issues that are of greater concern to me than local SSH client configuration. Enforced policies and good network design should control use and deter mobile code threats — not your SSH client.

rcme May 10, 2005 3:56 PM

@kj

Sorry, I guess I will have to read the SSH spec. From the discussion here, the purpose of the known_hosts file is rather unclear. Is it an MRU, a trusted-hosts list, or what? As you describe it, the known_hosts list appears to be some form of “trusted hosts” file, used to prevent MITM attacks by keeping a list of trusted servers (based on the public key associated with each host name in the file). But then having a known_hosts file seems unnecessary, since one can simply prevent MITM attacks by verifying the identity of the server using the server’s public key each time the connection is made (just like SSL).

Tom De Mulder May 10, 2005 5:15 PM

Surely, if this is a problem, then the bash shell’s history is just as big a risk? It, too, after all, will contain lots of hostnames; all a worm would have to do is grep for “^ssh”…

I think the only “solution” here isn’t technical, but user education: don’t use the same password on all your accounts…

dumbo May 10, 2005 5:30 PM

rcme – As I understand it, there is (usually/always?) no way to verify an SSH connection beyond comparing the key to the value in known_hosts.

Tom – one argument would be that .bash_history can be disabled (and often will be, on the off-chance someone types a password at the wrong prompt), while known_hosts can’t be.

Dean Harding May 10, 2005 7:40 PM

@rcme

SSL requires a CA (a trusted third party), which is something SSH does not. The first time you connect to a host, you assume it has not been compromised, and subsequent connections can verify that you are at least connecting to the same host as the first time. There’s no other way to do it except by remembering the hostname and its public key somehow.

Richard Braakman May 10, 2005 8:45 PM

Hmm. I’ve often had to edit my known_hosts file to deal with changes such as machines changing IP numbers or being reinstalled. Hashing the hostnames would make that inconvenient. How many users would deal with the inconvenience by just deleting the file when it gives them trouble? That would have a security cost, precisely in the area where SSH security is already weakest. So far I don’t see a compensating security gain. If a worm wants to find machines to infect, then spraying across the local network would be more thorough and just as fast.

Simon Lyall May 11, 2005 3:20 AM

I agree with Richard Braakman. The regular reinstall and movement of the machines I access means that sometimes I have to manually remove a machine from my known_hosts file usually via something like:

grep -v machinename known_hosts > gg
mv gg known_hosts

A change to make it hard to remove bad machines from the known_hosts file will tempt people to bypass security.

dumbo May 11, 2005 4:43 AM

There’s no reason that you can’t still mess around with a known_hosts file; in theory you would only need to hash the machinename beforehand (assuming the hash doesn’t use a separate salt for each machine, which would be overkill).

As far as I can see, something like this would work (with ssh_hash standing in for a hypothetical helper that prints the hash of its argument):

grep -v "$(ssh_hash machinename)" known_hosts > gg
mv gg known_hosts

Anonymous May 11, 2005 10:16 AM

Removing the bash_history, or the known_hosts file doesn’t address the main problem. Educating users not to use the same password on all machines does.

Anonymous May 11, 2005 10:41 AM

If an attacker is just looking to harvest hosts to attack why not just scoop /etc/hosts?

Tim Green May 11, 2005 2:33 PM

@anonymous:

My /etc/hosts contains only localhost, while my bash history and known_hosts are far more interesting.

I am glad to hear there is a fix in the pipeline already.

Damien Miller May 11, 2005 9:16 PM

There are a few misinformed comments on how the hashing in OpenSSH actually works. Hopefully this will clear them up:

  • At present the hashing is not on by default, because it is a new feature. Once we are satisfied that it is stable then we may change the default.
  • The hashes are computed over the hostname and are heavily salted (160 bits), so precomputation attacks are completely infeasible.
  • It is still easy to edit a known_hosts file, because we have provided (in ssh-keygen) automated tools to find and delete hosts from the file. These work regardless of whether or not the hostnames are hashed.
  • ssh-keygen has also been extended to allow it to hash an existing known_hosts file. We haven’t included (and probably won’t include) any tools to hash the known_hosts files for all the accounts on a system, because it is so trivially easy to do with find(1).
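Assuming a current ssh-keygen, the helpers Damien describes look like this in practice (a scratch file and a throwaway key are used here so that a real ~/.ssh/known_hosts is left untouched; the hostname is illustrative):

```shell
tmp=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -f "$tmp/key" -q   # throwaway key pair
printf 'server.example.com %s\n' "$(cut -d' ' -f1-2 "$tmp/key.pub")" > "$tmp/known_hosts"

ssh-keygen -H -f "$tmp/known_hosts"                # hash every hostname in place
ssh-keygen -F server.example.com -f "$tmp/known_hosts"   # find a host, hashed or not
ssh-keygen -R server.example.com -f "$tmp/known_hosts"   # delete that host's entry
```

The -F and -R options address the editing concerns raised above: no manual grep is needed, whether or not the file is hashed.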

If you are curious about these features, please take the time to read the manual pages (or the source code) before posting speculation.

http://www.openbsd.org/cgi-bin/man.cgi?query=ssh_config

http://www.openbsd.org/cgi-bin/man.cgi?query=ssh-keygen

http://www.openbsd.org/cgi-bin/cvsweb/src/usr.bin/ssh/hostfile.c?rev=1.34&content-type=text/x-cvsweb-markup

Curt Sampson May 11, 2005 11:24 PM

Is it just me, or does this seem like a relatively minor issue? Once you’re on a system, as others here have pointed out, you’ve got plenty of other ways of finding new hosts to try to compromise.

Not that this is bad work, but it would be nice if the paper gave some idea of the scale of the problem in comparison to other problems with SSH, such as key management.

Jonas May 12, 2005 7:19 AM

If you wanted to write an SSH worm, would it not be easier to patch ssh to log all hostnames, logins, and passwords/keys used to access other systems, and later use that information to spread itself? I find it hard to believe that known_hosts could actually be used as an attack vector.

Sascha Welter May 12, 2005 7:22 AM

“Wouldn’t the new gssapi-with-mic patches [1] for OpenSSH also mitigate this?”

Sure. Just install Kerberos everywhere. That will be especially popular with people who just want to ssh into their routers (with one user account defined). After all, what’s setting up a complete new user-management infrastructure, with its own bunch of vulnerabilities, compared to getting protection against one single vulnerability (that’s not even here yet) in a simple system?

Using GSSAPI/kerberos will also mitigate this. But it’s not an option in many cases.

Derek Morr May 12, 2005 9:33 AM

My point, Sascha, was that the GSSAPI patches would also mitigate the problem. Not that it was the only solution, or necessarily the best solution for every case. And many people, especially those of us in higher ed, already have a Kerberos infrastructure in place.

Derek Morr May 12, 2005 9:53 AM

I’m sorry, I mistyped earlier. I meant to refer to the gssapi-keyex patches, not gssapi-with-mic.

Chris Walsh May 12, 2005 11:15 AM

According to the ChangeLog for portable OpenSSH, this hashing is now part of the OpenSSH distribution, having been added in early March.

Sweet.

Anonymous May 16, 2005 3:50 AM

While the known_hosts file is always used, and hashing hosts within it makes sense (w.r.t. a worm spreading), what about the ~/.ssh/config file?

This is where a user can store nicknames that map to full hostnames, details of which remote username to use for which host, etc. It cannot be hashed, as it is human-editable by nature.

This file has lots of useful info in it, is presumably used by exactly those people who have access to lots of machines, and is hence of interest to any potential worm writer.

I guess savvy admins now need to alias ssh/scp etc. to “ssh -F myconffile”, stop using the default name for the config file, and hope worm authors do not start checking for command aliases…

ViSage November 10, 2019 2:31 AM

From everything I have read here, it seems there are two issues:
1. Usage of the known_hosts file (and many other sources) as an address book for selecting other targets to try local user account credentials on.
2. The security of those credentials.

It looks like some are saying that if the ssh access to the user accounts is secure enough, then issue 1 is a moot point. Have they considered the possibility that non-ssh access to the user accounts referenced by known_hosts may be tried, using the credentials that let the hacker/ssh-worm access the account holding the known_hosts file? This operates on the assumption that the user account credentials will be the same on all of the systems. A break-in on one account, followed by a list of hosts, results in a break-in on all instances of that user account on all hosts (though not necessarily via ssh: ssh only provides a list of hosts where other instances of the same user account can be found, and there is no guarantee that ssh will be the break-in method, especially if ssh wasn’t the break-in method for the original account).

Also, would it improve the security of issue 2 to use public-key cryptography with ssh and encrypt the ssh keys? The method I am thinking of uses GPG to encrypt the generated keys, by setting up gpg-agent as a replacement for ssh-agent. As long as the GPG passphrase is secure (and especially if the gpg secret keys are on a hardware token), ssh remote access to the account should be relatively safe from ssh-worm-based attacks. There is still a password at the heart of it; however, the password secures the gpg secret key rather than the ssh connection. The ssh key secures the ssh connection, the ssh key is unavailable without the gpg secret key to decrypt it, and the gpg secret key is either protected by a password or stored on a hardware token. The hardware token (if used) does not have to be plugged into the system except when in use.
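This scheme is in fact how modern GnuPG (2.1+) supports SSH. A hedged configuration sketch, with the paths and key filename as illustrative assumptions:

```shell
# Tell gpg-agent to also act as an ssh-agent:
echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf

# Point SSH clients at gpg-agent's socket instead of ssh-agent's:
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"

# Register a key with gpg-agent; from then on it is stored encrypted
# under the GPG passphrase (or held on a hardware token):
ssh-add ~/.ssh/id_rsa
```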

Of course this does not address the user account password and other local user account security on that host.

Or am I just confused?
