Why Phishing Works

Interesting paper.

Abstract:

To build systems shielding users from fraudulent (or phishing) websites, designers need to know which attack strategies work and why. This paper provides the first empirical evidence about which malicious strategies are successful at deceiving general users. We first analyzed a large set of captured phishing attacks and developed a set of hypotheses about why these strategies might work. We then assessed these hypotheses with a usability study in which 22 participants were shown 20 web sites and asked to determine which ones were fraudulent. We found that 23% of the participants did not look at browser-based cues such as the address bar, status bar and the security indicators, leading to incorrect choices 40% of the time. We also found that some visual deception attacks can fool even the most sophisticated users. These results illustrate that standard security indicators are not effective for a substantial fraction of users, and suggest that alternative approaches are needed.

Here’s an article on the paper.

Posted on April 4, 2006 at 2:18 PM • 28 Comments

Comments

Zac Bedell April 4, 2006 3:36 PM

The paper mentions that study participants were shown websites on a Mac using Firefox. I’m genuinely curious how much of an effect that had on the results. Granted, I’m a Mac user myself, but I have to wonder if the results might have been different on, say, IE 6 on Windows.

Going purely on raw numbers, odds are most of the participants were out of their element in both OS and browser choice. I don’t doubt the results would have been depressing regardless of platform, though.

Mike Sherwood April 4, 2006 3:39 PM

Most people have no experience in computer security. Combine that with the fact that phishing can be targeted at a large number of people, and there’s no way to stop people from making bad decisions.

This reminds me of people making counterfeit currency on an inkjet using regular paper. Most people do not know how to differentiate between legitimate and counterfeit currency. There are mechanisms out there that will help in determining legitimacy, but most people won’t bother with them.

Phishing won’t go away until there are severe civil and criminal penalties for the companies who participate in fraud against individuals. The phishers couldn’t do anything with the information they collect if financial institutions acted responsibly.

roy April 4, 2006 3:49 PM

Making harsh civil and criminal penalties will (1) scare off the easily frightened, and (2) make the public feel safer, foolishly. The policy will not deter real criminals.

A smart operator could hijack all the computers he’d need to phish through layers of cutouts, protecting himself from penalties.

HJT April 4, 2006 3:58 PM

The problem with a study like that is that it can be basically impossible to determine if some arbitrary website is legitimate or not. They did use some pretty well-known sites, but I bet not all of the test subjects knew or personally used all of the sites.

You can look at the professionalism of the site, correct use of SSL, whois history for the domain, search the web for abuse reports, call the numbers provided on the site, etc., and even if all of that checks out it can still be a new, well-done phishing site.

Personally I try to avoid new ecommerce etc. sites unless I get personal recommendations from friends.

rr April 4, 2006 4:11 PM

Zac,
I don’t know how much of a difference using Firefox on a Mac would make for this task. Firefox really doesn’t look that much different from IE (the Back button is in the same place, etc.), and the different mouse behavior (single vs. double click) isn’t visible in a web browser, where for the naive user all clicking is single clicking. (And modern Macs have moved beyond single buttons for those who like to right-click.)

Elsewhere, people have commented that phishing as a phenomenon will only improve in “quality of deception” due to natural selection. In other words, only the phishing attempts that most closely resemble real mail will survive. I still get a bit of a jolt from the occasional subject line in a phishing email by chance, particularly when it coincides with a plausible real-life timeframe or scenario (domain name renewals, expected legal threats, etc.).

I now expect that the only legit email I will get from a company is a “please log into your account” with no html link to the login whatsoever. Sadly, there isn’t full compliance with even this simple workaround.

Semirelated – is there some reason why Gmail doesn’t automatically send anything with the strings “chase” and “bank” in it to spam? Very frustrating. I wish there was an option to train gmail that I don’t have accounts with Chase Manhattan, paypal, etc. etc.

-r.
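The personal filter rr wishes Gmail offered could be approximated locally. A minimal sketch, assuming a hypothetical hand-maintained blocklist of institutions the reader has no accounts with (this is not a Gmail feature):

```python
import email
from email import policy

# Hypothetical personal blocklist: institutions the user has no account with.
NO_ACCOUNT_INSTITUTIONS = {"chase", "paypal"}

def likely_phish(raw_message: str) -> bool:
    """Flag mail that names a financial institution the user has no account with."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    text = (msg["Subject"] or "") + " " + msg.get_body(("plain",)).get_content()
    return any(name in text.lower() for name in NO_ACCOUNT_INSTITUTIONS)

raw = """\
From: security@example-bank.test
Subject: Chase account verification required

Please confirm your Chase bank account details at once.
"""
print(likely_phish(raw))  # prints True: "chase" is on the blocklist
```

A real filter would of course also need whitelisting for legitimate mentions (news digests, forwarded warnings), which is exactly why a simple string match is only a starting point.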

Nick Lancaster April 4, 2006 4:20 PM

Consumer fear apparently outweighs the common sense point that banks and other financial institutions never ask for your password in this manner, and that the safest way to verify is to then get your bank statement (or turn over your credit card) and call the customer service number printed there.

But it’s also a sad comment on the state of critical thinking in this country. The majority of phish attacks are clones with horrible grammar and lousy spelling. Even the ones prettied up with graphics are a re-tread of old phish mails. (I even got one that purported to be in response to a security violation that took place THREE DAYS IN THE FUTURE.)

I’m waiting to see the phishers who provide a fraudulent 800 number and play off the ‘trusted agent’ aspect of such mails.

Jehiah April 4, 2006 4:31 PM

It would also have been helpful/interesting to show 20 non-fraudulent sites to a second set of users (a control group), so that you could identify how much of their decision was based on the environment they were tested in, or for that matter whether it was related to fraudulent sites at all.

Could that control group also have identified 40% of the sites as fraudulent when asked the same question?

Fred April 4, 2006 4:47 PM

What interested me most was that two legitimate sites in the study had a 50%+ rejection rate.

Micah April 4, 2006 5:47 PM

It doesn’t help when companies don’t use good judgement in building their own websites. I recently received a solicitation from my satellite TV company via email. However, none of the links in the email actually went back to the company’s website! Instead, they went to some obscure domain owned by a marketing company. This immediately set off red flags and I was about to fire off an email to them about this “phishing” email. But then I took some time to dig around and found that this company was in fact related to the satellite company. So instead I fired off an email explaining why it’s a bad idea to send out emails that don’t even reference your own website. What’s worse is the “landing page” was done up to look just like the satellite company’s page.

I haven’t read the paper yet, but I wonder how they define “legitimate”.

j April 4, 2006 7:27 PM

From Nick: “I’m waiting to see the phishers who provide a fraudulent 800 number and play off the ‘trusted agent’ aspect of such mails.”

Funny you should say that. I just got two of this very type of phish yesterday. I was almost fooled: they claimed that my account had (1) some unusual activity and (2) several failed logins to their web site. There were no clickable links in the mail that I could see (except one to the legitimate top level home page of the bank). The only contact provided was a toll-free 888 number (the same in both mails).

The two immediate clues that it was a phish was that (1) they came to one of my freemail accounts, but that bank sends me mail to a different freemail account; and (2) mail from almost all banks includes some definite clue such as the last 4 digits of the account number, but these mails did not. (At least they were properly “to-” addressed, and not Bcc’d as most spam including a lot of phish attempts are.)

That was enough to clue me in, but further investigation also showed that the two emails came from different service providers (though both were American, no Chinese or Russian involvement, e.g.). Also I was able to log into my account successfully and verify that there had been no unusual activity.

Finally, the 888 number itself: I did not attempt to trace its ownership, but I did call it. It identified itself anonymously (“thank you for calling the account verification service” or something to that effect), and then asked for a 16 digit account number. (It hung up on me when I gave it 16 zeros.)

So — Ask, and you shall receive! My first experience of a phish email to a phone number.

(So far I have never been fooled, as far as I know, into going to any website linked to by phish email.)

/J

Gary April 4, 2006 8:07 PM

The legitimate sites have got to stop putting out URLs like “www.myrewards.com” and “www.mynewshinygoldcard.com” – these marketing URLs are training everyone to be stupid and unsuspicious.

Longwalker April 4, 2006 9:29 PM

If a luser clicks on a phish link, half the battle has already been lost. While it’s a laudable goal to improve browser UIs and certificate rules to make life harder for phish sites, the first line of contact for phishing is not websites but rather email. Widespread deployment of S/MIME would take a bite out of phishing as users could be trained to expect account-related emails to be signed by the account issuer. Spam filters could also be constructed to drop unsigned emails which claim to be from an institution known to sign their emails, cutting users out of the phish judgement loop entirely.
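The filter Longwalker proposes can be sketched structurally. `SIGNING_DOMAINS` is a hypothetical allowlist of institutions known to S/MIME-sign their mail, and the check only looks for the `multipart/signed` wrapper; a real deployment would also verify the signature cryptographically:

```python
import email
from email import policy

# Hypothetical allowlist of domains known to S/MIME-sign all outgoing mail.
SIGNING_DOMAINS = {"bigbank.example"}

def should_drop(raw_message: str) -> bool:
    """Drop mail claiming to come from a known-signing domain but lacking the
    S/MIME structure (multipart/signed). This checks structure only; signature
    validity would need a real cryptographic verification step."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    sender = (msg["From"] or "").lower()
    claims_signer = any("@" + d in sender for d in SIGNING_DOMAINS)
    is_signed = msg.get_content_type() == "multipart/signed"
    return claims_signer and not is_signed

unsigned = "From: alerts@bigbank.example\nSubject: Verify your account\n\nClick here."
print(should_drop(unsigned))  # prints True: claims a signing domain, but unsigned
```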

Mike Sherwood April 4, 2006 10:31 PM

@roy

I’m not talking about going after the phishers. They’re a product of their environment.

Banks will give them a big wad of money if they get personal details on people and open accounts in the names of those people. I’m suggesting going after the banks. They are causing real economic harm to the people who are being impersonated. The phishers would have no use for this information if the banks wouldn’t so easily turn it into cash.

Simon April 4, 2006 11:14 PM

On a few occasions I’ve phoned or e-mailed commercial vendors to inform them that somebody is phishing in their name, only to find that it was legitimate.

So I tell them, “So stop sending out e-mails that look like phish!!”

Matt Schinckel April 5, 2006 3:05 AM

I think the first step in a Phishing test should be to try a faked username/password. If you don’t get the required response, then it’s clearly a fake!

Dennis April 5, 2006 3:46 AM

Maybe the users/victims should be punished instead. Give out your personal information to a phisher and spend your next five years in jail.

Why? Because people who respond to phishing, spamming, etc. contribute to making the Internet a less useful place. In other words, they are terrorists attacking an important part of the infrastructure.

Maybe people would be more careful if they risked doing some time in jail.

Dan April 5, 2006 3:47 AM

‘I think the first step in a Phishing test should be to try a faked username/password. If you don’t get the required response, then it’s clearly a fake!’ – yeah, I bet phishers have never thought of that.

One point to note is that the people taking part in this study were not cyberterrorists.

Huge April 5, 2006 6:00 AM

Frankly, I don’t understand the fuss. I have a simple, straightforward and foolproof policy:

  • I programmatically reject (as spam) all emails that contain HTML.
  • I delete unread all emails apparently from any financial or commercial organisation. (Sometimes I’ve had so many phishing emails apparently from one organisation that I’ve marked them as spam to reduce the irritation level.)
  • I have a specific email address I use for order confirmations (Amazon, etc.) which I use for nothing else.

I regard any emails my bank(s) might want to send me as spam, anyway. And I refuse to do my private business in a manner functionally equivalent to standing outside my bank branch with a loudhailer.
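Huge’s first rule is easy to implement mechanically; a minimal sketch of the HTML check (the surrounding mail-server glue is assumed, not shown):

```python
import email
from email import policy

def contains_html(raw_message: str) -> bool:
    """Return True if any MIME part of the message is text/html."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    return any(part.get_content_type() == "text/html" for part in msg.walk())

html_mail = ("From: promo@shop.test\n"
             "Content-Type: text/html\n\n"
             "<p>Click <a href='http://evil.test'>here</a></p>")
print(contains_html(html_mail))  # prints True, so this mail would be rejected
```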

arl April 5, 2006 7:05 AM

I have been using the “petname” add-on for Firefox for a while now and it is a helpful idea. To be more useful it might need to do something like block form input until the site is entered into its database. Still, it depends on the user.

The only other thing I can think of is to include more data into the certificates and improve browsers to take advantage of the additional information.

jvd April 5, 2006 7:08 AM

As long as there is a virtual bubble, there will be people blowing air into it. Learn to blow glass or something people can use. Then you have a craft that they can’t take away from you.

peter April 5, 2006 7:22 AM

Yet another reason to migrate from existing SSL cert chains to Bonded SSL, where a single Govt entity collects a large bond from an ecommerce player in return for their cert, which points back to a central Govt entity empowered by statute to enforce the rules of the game.

Peter

jvd April 5, 2006 7:59 AM

Authors prove that one good book is worth the effort to write and it is verified as a fact by the readers who buy it and or read it. It takes time and the process can’t be automated or virtualized. There are no shortcuts to replace good thinking or security. It’s as if they are trying to produce the paperless office and then pretending it makes sense. Who is going to argue with cash, when it’s right in front of them? The Internet can be helpful. You still need to make an effort. It’s not like technology is going to reduce fraud.
Phishing works because fraud works. It breaks in the long run. The question is what else gets broken along the way.

Leo April 5, 2006 9:49 AM

From a user perspective, phishing represents a risk. So, it makes sense to manage it by reducing its probability of occurrence or by reducing its damage if it does occur (in effect reducing its expected impact). Training people focuses on the former; getting insurance focuses on the latter. I think that, for any user, a well-rounded mix of both approaches is needed.

From the law enforcement perspective, it is clear that the profitability of phishing must be reduced. This means that setting up the site, maintaining it and getting away with the cash must be made drastically more difficult than it is now, thus reducing the profitability margin. User education is important here as well, as it drives costs up by forcing criminals to invest more time and effort in the site. The paper shows, however, that this is not enough.

I agree with Mike Sherwood that the next step would be to make the banks accountable for the safety of their users’ money. As long as the cost of lousy authentication procedures remains on the users, banks will have no incentive to make them better.

AG April 5, 2006 9:58 AM

I have a Chase bank account, and when I started getting the “Chase Account Informaiton” phishing emails I paused for a moment.
Being a semi tech-savvy person I deleted them and moved on, but I can see how a non-tech-savvy person would click on the message and “confirm” their account information.

John C. Kirk April 6, 2006 6:37 AM

It’s an interesting article, and I was surprised by some of the results (particularly the one about education not correlating to accurate assessments). I’m also glad that people were able to use a web browser for this, rather than just being shown printouts – my normal phishing-detection technique is to hover over hyperlinks in emails, and see what the real target is, which I can’t replicate on paper copies.

Mind you, I do agree with @HJT – the “Bank of the West” fake website looked pretty authentic to me, but I’ve never heard of them before, and since I don’t have an account I wouldn’t try to log in to their real site or a fake one. I know the address for my bank, so that would skew my results.

As for @Dennis, I’ve heard the argument that “people who can’t/won’t defend themselves deserve to be attacked” a few times in a computer security context (typically referring to people who run IIS), but I don’t like it. The analogy I prefer is this: karate has been open source for hundreds of years, so anyone who is willing to make sufficient effort can learn to defend themselves (barring physical handicap). Does that mean we should arrest people who get mugged, on the grounds that they are responsible for increasing violent crime?

Thomas L. Jones, Ph.D. April 6, 2006 12:45 PM

While technological solutions may be valuable, the short-term fix is to educate the users. While fraudulent Web sites are essentially indistinguishable from the real thing, the scam e-mails are quite identifiable by a savvy and/or security-conscious user.

Tom Jones

David D. Levine April 7, 2006 7:31 PM

One question is puzzling me: why do the paper’s authors place so much emphasis on the user’s understanding of the SSL lock and other indicators?

An SSL indicator, as I understand it, only indicates that the connection with the server is secure. It doesn’t indicate that the secure connection is to a legitimate server. Is there anything to prevent whoever set up http://www.bankofthevvest.com from adding HTTPS capability? I only pay attention to the SSL icon after I have determined that the site is legitimate (through inspection of the URL). I could be fooled by a site that successfully overlays the browser chrome to present legitimate URLs in the address bar and status area.

Certificate dialogs have been rendered worthless because so many legitimate sites have invalid certificates. (Or have I been interacting with a lot more illegitimate sites than I think I have?)
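The bankofthevvest trick David describes is a lookalike (homograph-style) domain. A toy detector, using a tiny hand-picked substitution table (real browsers rely on the much larger Unicode confusables data):

```python
# Hypothetical confusable-substitution table; a real detector would use the
# full Unicode confusables list, which is far larger than this.
CONFUSABLES = {"vv": "w", "rn": "m", "0": "o", "1": "l"}

def skeleton(domain: str) -> str:
    """Normalize a domain by collapsing common lookalike substitutions."""
    s = domain.lower()
    for fake, real in CONFUSABLES.items():
        s = s.replace(fake, real)
    return s

def looks_like(candidate: str, trusted: str) -> bool:
    """True if candidate differs from trusted but normalizes to the same skeleton."""
    return candidate != trusted and skeleton(candidate) == skeleton(trusted)

print(looks_like("www.bankofthevvest.com", "www.bankofthewest.com"))  # prints True
```

This supports David’s point: an SSL lock on such a domain proves only that you have a secure connection to the impostor.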
