Bad Password Security at Twitter

Twitter fell to a dictionary attack because the site allowed unlimited failed login attempts:

Cracking the site was easy, because Twitter allowed an unlimited number of rapid-fire log-in attempts.

Coding Horror has more, but -- come on, people -- this is basic stuff.

EDITED TO ADD (1/14): Twitter responds.

Posted on January 12, 2009 at 6:48 AM • 48 Comments

Comments

bob • January 12, 2009 8:26 AM

Absolutely the most basic security concept for logins is to have a 2-second delay in responding to a bad password or invalid account (and don't specify which). Humans will (almost) never notice, but scripts will be slowed by a factor of 5,000 or so, making brute-force attacks impractical. Basic step #2: after 5 consecutive failures, lock the account out for an hour.

nah • January 12, 2009 8:45 AM

Another sad social-site story. It's embarrassing to see such big sites not following the most fundamental security concepts. Which system are they using, anyway, in the 21st century?

Simon Willison • January 12, 2009 8:56 AM

Is there a good way of defending against this attack without providing an easy mechanism for an attacker to disable someone's account by deliberately attempting to log in multiple times?

iamnoah • January 12, 2009 8:57 AM

@bob

Why even lock the account for an hour? Do like Coding Horror suggested and just add exponential back off. First 2 seconds, then 4, then 8, then 16, then 32, etc. If you haven't been to a site recently, it may take more than (choose arbitrary number under 10 here) attempts to remember *which* password you used there. I can stand to wait for a few minutes after my 7th failed attempt (and maybe dig through my emails for clues while I wait), but if you lock me out for an hour, I'm probably not coming back today, if ever.

5 is just too arbitrarily low. I mean, suppose each account is allowed 50 failed attempts per day. You've reduced the brute-force time by 90%, but 25 years is still too long to wait to get someone's Twitter account.
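In code, the exponential back-off iamnoah describes might look like this (a sketch only; the 2-second base and 5-minute cap are arbitrary illustrative choices, not anything Twitter or Coding Horror prescribed):

```python
def backoff_delay(failed_attempts, base=2.0, cap=300.0):
    """Seconds to wait before the next login attempt is processed.

    No delay before the first attempt; afterwards the wait doubles
    (2s, 4s, 8s, 16s, ...), capped so a forgetful legitimate user is
    slowed down but never shut out for an hour.
    """
    if failed_attempts < 1:
        return 0.0
    return min(base * (2 ** (failed_attempts - 1)), cap)
```

After the 7th failed attempt the wait is about two minutes: long enough to dig through old emails for clues, short enough to come back the same day.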

Paeniteo • January 12, 2009 9:00 AM

@bob: "2 second delay in responding"

like this?

if (verify($password)) {
    // grant access
} else {
    sleep(2000);
    die("access denied");
}

Think about that again in the face of an attacker being able to issue parallel requests. You have to prevent/ignore submission of a new password-attempt for a certain time and not delay your processing of a single request.
It may have been a misunderstanding of your wording on my part, but I have seen the abovementioned code snippet far too often in useless attempts to implement rate-limiting...
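A sketch of the distinction Paeniteo is drawing: record a per-account "no attempts accepted before" timestamp and reject early arrivals outright, rather than sleeping inside the request handler, so parallel requests gain nothing (illustrative only; the in-memory dict stands in for whatever shared store a real site uses):

```python
import time

# account name -> earliest time a new attempt will be processed
next_allowed_attempt = {}

def try_login(account, password, verify, delay=2.0):
    now = time.time()
    # Reject early attempts outright -- a parallel flood of requests
    # is not merely slowed down, it is ignored.
    if now < next_allowed_attempt.get(account, 0.0):
        return "rate limited"
    if verify(password):
        next_allowed_attempt.pop(account, None)
        return "access granted"
    # Start the window *before* responding, so an attacker cannot
    # pipeline requests during a sleep() as warned above.
    next_allowed_attempt[account] = now + delay
    return "access denied"
```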

Carlo Graziani • January 12, 2009 9:08 AM

As it turns out, the sshd setup shipped with many Linux distributions also allows thousands of login attempts without penalty. And a perusal of logs on server machines at three different sites has led me (belatedly, I know, it's embarrassing) to the realization that people are knocking on this door hundreds or even thousands of times per day. Unfortunately, many Linux distributions are unashamed of offering server systems that allow such flagrant attacks.

Unfortunately, sshd configuration doesn't offer a really good solution. However, there are a few better solutions out there. As a public service, I'll mention 'fail2ban', a daemonized log inspector that can be easily configured to monitor arbitrary log files for signatures of break-in attempts, and to trigger temporary bans of the source IP either through the firewall or through TCP wrappers. It's Python-based and cross-platform, and is available packaged for most Linux distributions. The example configurations are very illuminating, and helped me put the clamps on sshd in very short order.

Paeniteo • January 12, 2009 9:15 AM

@Simon Willison: "Is there a good way of defending against this attack without providing an easy mechanism for an attacker to disable someone's account by deliberately attempting to log in multiple times?"

Go for IP-based banning like Fail2Ban does for SSH:
http://www.fail2ban.org
There are reports that botnets can successfully stay "under the radar" and avoid being banned, however:
http://www.heise-online.co.uk/news/...

Antonomasia • January 12, 2009 9:17 AM

@Carlo Graziani,

On internet-exposed hosts the ssh password guessing has been happening since July 2004. I find preventing passwords and requiring keys to be a satisfactory solution.

dog • January 12, 2009 9:19 AM

@Simon Willison
I would say: after n failed attempts, mail a verification link to the email address associated with the account, locking the account until the user clicks the verification link.

The source IP(s) of the failed logins could be automatically attached to the message, to help the user track the attacker (a good attacker is probably covering their tracks with Tor and connecting through an open wireless LAN, but in other cases it would be useful).

It would be good if the verification link also asked the user to change the password, providing the old one and the new one.

I like iamnoah's idea too.

Aaron • January 12, 2009 9:48 AM

@Carlo Graziani - The DenyHosts package is all that's needed for blocking failed remote login attempts over SSH. After configuring, failing hosts are written to /etc/hosts.deny. Couple that with a shell script, and you could add those entries to Netfilter to block not only SSH, but everything on your server. Simple, easy, and it works. Of course, if you keep your public-facing SSH server on port 22, expect your /etc/hosts.deny to fill fast. Take the public-facing port off of 22, to say 63549, and you'll rarely, if ever, see an attempt.

HJohn • January 12, 2009 10:46 AM

In an audit a few years ago, I debated a manager about not having password complexity requirements, even with account lockout at 6 attempts. He argued that attackers only get 5 shots before locking out an account, and I argued that they actually got 2,500 shots, since it is unlikely they would target just one ID (when user IDs are first initial plus last name, the attack wasn't hard).

I ran the following passwords in the attack:
* Password
* Secret
* Month name
* User ID

I got at least 1 hit on each, and several hits on month. Gee, it was September when I did it. What are the odds that next month, the password would be October?

Granted, each individual user had a good chance of avoiding cracking due to the lockout, but the entity as a whole was vulnerable.

How many years have people been warned about this risk?

It's what I call a DAS error. Dumb A** Stupid.

MikeA • January 12, 2009 11:02 AM

I'm guessing that the "month name as password" stupidity only occurs at companies that are still doing the "make users change passwords frequently" stupidity. The users might not be IT savvy, but the IT group really should know better, 20+ years after the dangers of enforced password timeouts were documented.

Of course, it may be a case like that of the large, well-known IT company that formerly employed me, where the "head of IT security" knew better (I asked), but had been overruled by management above him.

Ponder the ramifications of a company that has tens of thousands of employees, and an official "head of IT security", and allows him to be overruled by a clueless suit.

Bogwitch • January 12, 2009 11:12 AM

ISTM the standard manner of preventing brute-force password attacks is to require a CAPTCHA after n failed attempts. Now, I know CAPTCHAs are not flawless, but they should prevent a basic attack such as this.

Johnny Vector • January 12, 2009 11:33 AM

Delays? Temporary lockouts? The place I work (a certain federal agency that shall remain nameless) is now requiring passwords to use all four character classes and be changed every 2 or 3 months, and you get locked out completely after 3 failed attempts. The only way back in is to call the help desk, which of course must be fielding dozens if not hundreds of such calls per day.

Which means everybody knows that all you really need to get into any account is someone's name and badge number (which they're required to wear around their neck in full view at all times), and the help desk phone number.

I believe this is known as "locking it down so tight that any Joe off the street can wander right in."

Chris • January 12, 2009 11:37 AM

This was an admin user. For a marginal cost of damn close to zero they could have required two-factor. How long has S/Key been around? This is not that complicated. The solution needn't be perfect -- it needs to be good enough, and what they had clearly was not.

HJohn • January 12, 2009 11:39 AM

@MikeA: "I'm guessing that the "month name as password" stupidity only occurs at companies that are still doing the "make users change passwords frequently" stupidity."

Pretty much. One major failure of IT systems I have found over the last 11 years of auditing dozens of environments is the failure to consider user behavior.

One thing I've found to be used in a dumb manner is the "minimum password age" setting. I think a minimum password age of 1 day is a benefit, but I know of a federal standard (which I'm tempted to link to) that prescribes a minimum password age of at least 15 days, with the argument that "if a user believes their password is compromised, this will force them to go to security administration and notify them." I responded to the authority that they are giving users two options:
* Option 1: tell security administration you didn't properly secure your password, and risk embarrassment and/or reprimand.
* Option 2: keep your mouth shut and hope for the best, and if something happens, play dumb, possibly avoiding embarrassment and reprimand.

I told them that the following options would be far better:
Option A: Keep your mouth shut, and risk a compromise, and the associated embarrassment and/or reprimand.
Option B: Change your password immediately and avoid the compromise, embarrassment, and/or reprimand.

While 1 would be the ideal, 2 is more likely when pursuing 1. However, A-B is most likely to lead to security when you factor in the user's behavior.

Basically, as in the bad Twitter password security story, when you go beyond requirements into the realm where users have discretion (and it is impossible not to have some of this in play), you can only expect users to make good decisions more often than poor ones if you formulate the options so that security is in their best interest.

sidelobe • January 12, 2009 12:23 PM

Seems to me that Twitter, in particular, is prepared for two-factor authentication. They already have the SMS interfaces and your cell phone number. After a few too many incorrect passwords, it could send you a text message with a verification code. This message would also serve to alert the legitimate owner that their account was under attack.

But, then, there's really not that much to be gained by hacking a Twitter account. The operator is likely more interested in keeping costs low than in strong security.

A nonny bunny • January 12, 2009 12:26 PM

Is there a reason why a company shouldn't run a dictionary attack against the passwords on its server and inform users when they're being idiots? Or at the very least do it for admin/moderator level users.

HJohn • January 12, 2009 12:35 PM

@A nonny bunny: "Is there a reason why a company shouldn't run a dictionary attack against the passwords on its server and inform users when they're being idiots? Or at the very least do it for admin/moderator level users."

If they don't or can't enforce complexity requirements, they should do this, save for the fact that management often balks at it.

I personally think that dictionary passwords are too big a vulnerability to be allowed for any user. Dictionary words all but render length irrelevant. They need to force complexity and educate users, especially ones they learn are being dumb.

Having another form of authentication may be ideal, but this is a tough financial sell to management, especially in economically tough times. As Bruce has said, security is often an economic problem, and sometimes we just have to do the best we can with what management allocates to us, which in many cases is to make passwords stronger. Not ideal, but nothing is.

Discrimination based on age, especially prejudice against the elderly. • January 12, 2009 12:40 PM

@MikeA

See Also:
http://all.net/journal/netsec/1997-09.html
http://www.cerias.purdue.edu/site/blog/post/...
http://www.cl.cam.ac.uk/~rja14/Papers/... (search for the comment on auditors)


@HJohn,
> minimum password age of 1 day is a benefit

Why? If you want to prevent reuse of old passwords, then prevent reuse by age (e.g. for the last year) rather than by number (last 12 passwords). That vendors don't do this must be because not enough customers are asking for it.

@A nonny bunny
> Is there a reason why a company shouldn't run a dictionary attack against the passwords on its server and inform users when they're being idiots? Or at the very least do it for admin/moderator level users.

Because they've done it before and remember how much notice anyone took?

HJohn • January 12, 2009 1:07 PM

@Discrimination: "Why? If you want to prevent reuse of old passwords, then prevent reuse by age (e.g. for the last year) rather than by number (last 12 passwords). That vendors don't do this must be because not enough customers are asking for it."

If the option to do it by age is available, I agree, and it is preferable. However, if the only option available is by number, then the minimum password age is necessary to avoid recycling, but setting the minimum age too high is a risk for the reasons described above.

My discussion was more about a poor standard, which cites minimum age.

xd0s • January 12, 2009 1:12 PM

The trick to passwords isn't that "this technique or that one is better", IMO. You essentially have to make the choices:

UsePassword (YES | NO)
If UsePassword == YES
    Balance(usability, password complexity, change frequency, lockout behavior)
    Verify(cost-effectiveness of support, user acceptance)
Else
    AvoidUsingPasswords
End

The general answer is that passwords are weak and easily compromised via social techniques (in many cases). But the actual answer for any given case is not as simple, because despite years of trying, IT hasn't been successful at turning users into IT-savvy and security-aware beings. :)

Philip (flip) Kromer • January 12, 2009 2:51 PM

At the application-stack level, doing backoff is non-trivial. If you do, as was suggested, a "sleep(2000)", then that Rails/Django/whatever instance is blocked for the whole time. Somehow your web app has to recognize multiple login attempts (which itself isn't easy in a farm of thousands of servers) and then pass the connection off to a low-level waiter.

All of that can be done, but it just isn't as easy as you think it should be.

Philip (flip) Kromer • January 12, 2009 3:28 PM

Sorry for the double post, but wanted to highlight some good points made in the comments of the codinghorror page.

* Simon Willison gives a working code sample for detecting multiple login attempts in Memcached+Django - http://simonwillison.net/2009/Jan/7/...

* In the comments there, Rene Dudfield points out many of the ways in which even just *detecting* multiple logins in a web-app is a 20-foot problem. http://simonwillison.net/2009/Jan/7/...

* ... and a followup from Twitter's API lead vouching that their post-facto solution was the same as Simon's.

* As Jeff Atwood notes, when rate limiting is triggered you don't have to hand off to a sleepy waiter. Set a "don't auth until" on the server side and a /notification/ timer in JS on the client side.

* Better still, as was said: kick to a CAPTCHA after multiple failures. Even if it fails at differentiating human from computer, it is a robust means for client-side rate limiting. Take care that it is not so annoying as to encourage lax password security.

Alex Payne • January 12, 2009 4:53 PM

I'm Twitter's API Lead (referenced by Philip Kromer above). My post on this matter, which consists entirely of my opinions and not necessarily those of my employer, is here: http://al3x.net/2009/01/12/...

In response to a couple comments above: we did end up kicking to a CAPTCHA (ReCAPTCHA, to be specific) after several failed login attempts, and then locking down the account in question for a while after a few more.

Caleb Jones • January 12, 2009 6:39 PM

For ssh brute force attack prevention on my own personal server, I do the following:

1. Move the ssh port to another port far away from 22. This is not done to prevent targeted attacks, since even a basic port scan will find which port you moved it to. It is, however, surprising how much the ssh logging quiets down after doing this. This immediately cuts out a significant percentage of attempts.

2. Install an ssh blacklisting program like DenyHosts. DenyHosts keeps track of failed attempts. Once the threshold is met, it permanently blocks the IP. It also allows you to upload these IPs to a central server which will send back top offending IPs to all DenyHosts users.

When I had ssh on port 22 with DenyHosts, I'd get emails all the time about a new IP being blocked. After moving to a different port, 95%+ of them dropped off. Shortly thereafter, the attack rate fell to about 1% of what it was. I now rarely get these emails.

Hopefully, that doesn't mean that they've just worked around it and are running wild on my box (that's where intrusion detection comes in).

Mitch P. • January 12, 2009 9:39 PM

I think a lot of the criticisms are missing the point.

CAPTCHA -- This is not web-based authentication we are talking about, but an API (I assume XML-RPC or similar) which happens to use HTTP as a transport layer. A non-browser client doing an XML-RPC call won't handle a CAPTCHA any better than a spambot zombie would. It would force the user to use a browser to clear the blocked state, which is a form of DoS attack very similar to an account lockout (you need to switch to a different access method to clear the block, whether it's logging in via a browser or emailing tech support).

SMS: not every account has an SMS number associated with it, and any system that might cause an SMS to be automatically sent could turn one type of DoS attack into another, more costly and disruptive one if it causes my mobile phone to be disconnected or run up charges.

Rate limiting: While not impossible or impractical, maintaining shared state in a distributed web app can be non-trivial. I don't know much about Twitter's setup, but let's assume that all API calls--including login/authentication-token requests--are handled by a large number of servers distributed across a handful of data centers around the world. They have some sort of shared data store to maintain state (maybe an RDBMS, maybe something else), and all data is probably at least cached locally.

Distributed shared state is a "solved" problem, but only in the sense that you can trade off various parts of the ACID guarantees for performance and robustness. If every login attempt can change the global shared state of that record (to rate-limit or lock it), you either introduce lots of slowness into the entire system, or you let the shared state grow inconsistent.

For something like rate limiting, you may not care about consistency between data centers, only within a data center, because the "state" (origin, number, and recency of failed requests) is ephemeral and isolated from the rest of the system. But a naive implementation of state persistence might not play well with multiple consistency "domains", and even if it does, generally only a few senior coders/DBAs will understand all the trade-offs and gotchas.

All in all, while it is better to protect against it, I can understand why that took a back seat to user-visible features (as security often does). Their response time was reasonable, even speedy by some standards. There are other strategies, like making the client perform some mildly expensive action (factor a 64-bit co-prime?) to make high-rate retries expensive en masse, but still accessible to embedded devices.

Personally, I can't imagine why anybody would want to hack my twitter account, and if I was concerned, I'd use a high-entropy password. A 10-character password with 5 bits of entropy per character is not feasibly guessable even at a high rate. Quite frankly, at that point there are probably other gaps in their authentication model (SMS is not exactly a secure protocol, and I am guessing their cookies can be sniffed and replayed from the HTTP stream). There are plenty of ways to make these more secure, I'm sure, but why bother? As with credit cards, it's probably better to authenticate the transaction than the person.
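The ephemeral, eventually-forgotten failure state Mitch describes is often kept as an expiring counter. A single-process sketch, with explicit timestamps standing in for a cache TTL (memcached or similar in a real deployment; the window and threshold are invented for illustration):

```python
import time

WINDOW = 300        # forget failures older than 5 minutes
THRESHOLD = 10      # failures tolerated per account per window

failures = {}       # account -> list of failure timestamps

def record_failure(account, now=None):
    """Note a failed login; drop entries outside the window."""
    now = time.time() if now is None else now
    recent = [t for t in failures.get(account, []) if now - t < WINDOW]
    recent.append(now)
    failures[account] = recent
    return len(recent)

def is_limited(account, now=None):
    """True if this account has too many recent failures here."""
    now = time.time() if now is None else now
    recent = [t for t in failures.get(account, []) if now - t < WINDOW]
    return len(recent) >= THRESHOLD
```

Because the state is per-store and expires on its own, two data centers may briefly disagree about an account, which is exactly the consistency trade-off described above.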

A nonny bunny • January 13, 2009 3:17 AM

@Mitch P.
"Personally I can't imagine why anybody would want to hack my twitter account, and if I was concerned, I'd use a high-entropy password."

If I understand things correctly that would not have helped you in this case. They had access to an administrator account, so the quality of the passwords of the accounts they wanted to access was irrelevant.
What the article doesn't make clear is how/why the reset password gets sent somewhere other than the email account used to set up the twitter account (maybe the admin could change that as well?).
Aside from not enforcing strong administrator passwords, and allowing unlimited login attempts, this seems like another flaw to me. Unless there is a good reason, it shouldn't be possible for an admin to retrieve a password (unless he/she can get into the code/database in which case it's irrelevant; but that level of access should be kept separate from the online administration account).

Paeniteo • January 13, 2009 6:00 AM

@A nonny bunny

It seems to me that the admin accounts are allowed to change users' email addresses - which makes sense, anyway.

Wotcher • January 13, 2009 6:20 AM

IMHO, the dictionary-attack vulnerability was only one of the flaws responsible for Twitter's downfall.

Reading GMZ’s account of the hack:

1. He noticed a popular twitter account "crystal".

2. He launched his dictionary attack.

3. Got the password "happiness" in 2 days

4. Logged into, I assume, www.twitter.com

5. He is, to his surprise, presented with twitter support tools.

Mistakes:

a. A normal account was being used for administrative duties. If the "crystal" account name was only used for normal twitter stuff then his hack would have had a limited scope. This would have prevented 1 leading to 5.

b. Password weaknesses on an account with admin access. As discussed in comments here it made step 3 relatively easy given the tool he had developed.

c. Support and admin tools were in the same twitter.com application instead of a separate application known only to support staff and accessed with different accounts. This would have prevented 5. If admin tools and the application have to live together (avoid if possible), enforce separation of duties, strong authentication, and accounting measures for admin-level accounts.

My point is that it was not just the poor authentication mechanisms; it was a number of flaws that led to Twitter being effectively owned. This serves as a reminder that it is often a combination of flaws that is our undoing.

nick • January 13, 2009 9:54 AM

NO!!! You should never have auto-lockout of accounts! That's a built in security vulnerability.

What you should do is increase the delay between authentication attempts as the number of failed logins increases. This stops brute-force attacks without sacrificing availability (as much).

blah • January 13, 2009 11:34 AM

Exponential back off? Does anyone else see a kid with nothing better to do exponentially increasing the wait time of someone he doesn't like? You gotta have limits.

Paul Berry • January 14, 2009 5:33 AM

For every good password idea, some people will invariably get round it by gaming the system to fit their habits. As mentioned above, insisting that a new password is entered at least every x days and that it cannot be one of the last n passwords used will simply mean that, come Password Change Day, some people will enter n slight variations on their favourite password and then finish off with their favourite again (which will now be accepted).

Michael Ash • January 14, 2009 10:09 AM

Twitter responds about health care leaks, it seems! Looks like a link to your latest "Key Management" article got pasted instead. Feel free to delete my comment after you've read it....

Clipboard • January 14, 2009 11:02 AM

Bruce - Your 'twitter responds' link you just added is pointing to the story on lep.co.uk about the lost USB stick.

Michael Ash • January 14, 2009 11:11 AM

Well that twitter blog entry is awfully content-free. Pretty much boils down to "we were hacked, we patched it over, and we'll fix it real good in the future." Not even a mention of dictionary attacks, timeouts, or anything of the sort. I guess I shouldn't expect any sort of technical discussion on a company blog....

Pat Cahalan • January 15, 2009 9:59 AM

@ Nick

> NO!!! You should never have auto-lockout
> of accounts! That's a built in security vulnerability.

Never say never. There are plenty of use cases for auto-lockout. Yes, you can have accounts effectively denial-of-serviced when you use auto-lockout, but this might not be a very big problem, depending upon what you're locking down, and why.

@ Antonomasia

> I find preventing passwords and requiring
> keys to be a satisfactory solution.

Removing passwords and relying upon keys does help. It also opens up an *entirely new* slew of attack vectors. It also makes it nearly impossible to have any sort of password policy (since users can generate their own keys). You wind up with users with no passphrase on their key, and they leave their key *everywhere*, including lots of hosts that use password authentication and are outside your security domain. We see this all the time in education: someone's account is compromised at University of Foo, they have an ssh key with no passphrase or a weak one in their home directory, and with that and a known_hosts file, your attacker gets much more access than they would with just a password.

Sure, keys can be used effectively... but just dropping them into place to solve your password problem is probably not going to work out the way you think it will, in the long run.

Kristen • January 16, 2009 12:50 PM

Okay so what if auto lockout keeps biting the legitimate user of an account simply because someone is DETERMINED to get their account and is trying fervently to accomplish this. I've been locked out of MY OWN account for too many failed logins when my username and password were autofilled and worked just fine up until about Wednesday. Today was the worst, 5 lockouts in an hour. I don't dare log out right now or I'll have 6.

I'm sorry someone is being a scumfrog and trying to get into my account, but why punish me? Isn't there something that can be done to make it so I'm not having to change my password every 10 minutes?

I enjoy Twitter, but not enough to keep setting and resetting my password because they can't or won't come up with a more intelligent defense against hackers.

JimFive • January 16, 2009 2:08 PM

@ Nick
> NO!!! You should never have auto-lockout
> of accounts! That's a built in security vulnerability. [DOS]

If an attacker wants to DOS an account there is no difference between a lockout and an exponential delay.


@Kristen
> Okay so what if auto lockout keeps
> biting the legitimate user of an
> account simply because someone is
> DETERMINED to get their account and
> is trying fervently to accomplish this.

Create a new account. All web-based accounts are throwaway accounts anyway. Let it get locked out and abandon it.
--
JimFive

John2496 • January 28, 2010 11:30 AM

@Kristen
> Okay so what if auto lockout keeps
> biting the legitimate user of an
> account simply because someone is
> DETERMINED to get their account and
> is trying fervently to accomplish this.

Use the 'forgot password' option. Twitter will send you a link to a password update page that automatically logs you into your account after the password is changed.

Tom Dibble • June 3, 2010 5:43 PM

@John2496:

If an attacker wants to DOS an account there is no difference between a lockout and an exponential delay.

--

There needs to be an upper limit to the delay, AND there needs to be "queue jumping" for unique IP addresses (if IP1 has sent 3 requests for authentication since the last successful login and IP2 has had none, then IP2 ends up in front of all IP1 requests in the queue, even if it comes in later).

These two make a DOS very hard for someone to pull off without significant IP spoofing knowledge and/or a large botnet. The "upper limit" for the delay could be as low as 30 seconds.

DOS with account lockout is trivially easy. If you are running a system with potentially adversarial users (e.g., where making one user unable to log in at some time of day gives another player an advantage in a game), then you should not be using an account-lockout approach. A "soft lockout" or "escalating delay" or whatever-you-want-to-call-it approach is almost as effective against brute-force attacks (depending on how often the actual user logs in) and makes the nuisance DOS significantly less likely.
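The "queue jumping" idea can be sketched as a priority queue in which a pending attempt's priority is how many requests its source IP has already made since the last success, so a flooding IP cannot starve a quiet one (illustrative only; `AuthQueue` and its method names are invented for this sketch, and real IP bookkeeping would also expire):

```python
import heapq
import itertools

class AuthQueue:
    """Serve login attempts from quiet IPs before attempts from noisy ones."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-break: FIFO within a priority
        self._attempts_by_ip = {}

    def submit(self, ip, request):
        prior = self._attempts_by_ip.get(ip, 0)
        self._attempts_by_ip[ip] = prior + 1
        # Priority = attempts this IP had already made when it arrived;
        # fewer goes first, so a new IP jumps ahead of a flooder's backlog.
        heapq.heappush(self._heap, (prior, next(self._seq), ip, request))

    def next_attempt(self):
        prior, _, ip, request = heapq.heappop(self._heap)
        return ip, request

    def reset(self, ip):
        # Called after a successful login from this IP.
        self._attempts_by_ip.pop(ip, None)
```

Combined with a capped delay (say, 30 seconds), this keeps the soft lockout effective against brute force while making the nuisance DOS much harder to sustain.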

