Schneier on Security
A blog covering security and security technology.
December 1, 2005
New Phishing Trick
Although I think I've seen the trick before:
Phishing schemes are all about deception, and recently some clever phishers have added a new layer of subterfuge called the secure phish. It uses the padlock icon indicating that your browser has established a secure connection to a Web site to lull you into a false sense of security. According to Internet security company SurfControl, phishers have begun to outfit their counterfeit sites with self-generated Secure Sockets Layer certificates. To distinguish an imposter from the genuine article, you should carefully scan the security certificate prompt for a reference to either "a self-issued certificate" or "an unknown certificate authority."
Yeah, like anyone is going to do that.
Posted on December 1, 2005 at 7:43 AM
• 55 Comments
Using the padlock icon for phishing was only to be expected. Plenty of security pages tell people to "watch out for the padlock symbol to make sure you're surfing securely." What makes me wonder is that phishers have only now started making use of that icon.
Certificates are worthless anyway, at least in preventing fraud. VeriSign et al. will issue a certificate to any person who pays their fees; they don't check the site for lawfulness.
There simply is no better advice against phishing than getting people to think twice before entering their personal data into official-looking websites, and to think about why eBay/PayPal/$Bank would demand their login data.
Somehow I think, the customers should be held liable for any fraud arising from login data being entered into phishing sites.. some just won't learn unless they have to pay. Why should the companies pay for the dumbness of their customers?
"Somehow I think, the customers should be held liable for any fraud arising from login data being entered into phishing sites.. some just won't learn unless they have to pay. Why should the companies pay for the dumbness of their customers?"
Because doing otherwise will 1) cost everyone more money, and 2) not result in any security improvements. The customers can't be reasonably held liable because they can't affect the security systems. All they can do is follow whatever rules the designers of the systems invent. If you want to actually improve security, the party who is in the best position to improve security must be responsible for security. It's a simple application of loss-allocation rules.
What's worse is that some legitimate sites have bad (expired/self-generated) certificates. Bad hygiene by legitimate sites generates the expectation that certificates can be dodgy.
And I wonder what will happen the first time someone finds a way to crack the chain of certificate authority? The cryptography is no doubt impeccable, but the processes are administered by humans.
It used to be that the SSL "lock" icon had some value and one could actually have some degree of trust in the fact that the CA issued SSL server certificate was issued to a real company, actually verified by the CA, and could be trusted.
Unfortunately, the CAs themselves are part of this problem. They are creating multiple levels of SSL trust by issuing "quicky" certificates. I almost fell off my chair when I saw that Thawte (VeriSign) was issuing "SSL123" certificates. There is no trust whatsoever in this cert: all they do is check that the domain name is valid and issue the SSL server cert, which chains to an existing trusted root. They should be calling the SSL123 certs their "phish special".
One problem here being that the web browsers have no way to discriminate these varying levels of trust in the SSL server certificates issued by the CAs.
It has been a while, but the last time I helped my company get an SSL server cert, we had to submit DUNS numbers, bank references, and official letters on company letterhead, and VeriSign actually checked our references (called our bank), all before issuing our SSL server cert for a merchant site. The process may have taken a couple of weeks, but for a real company, not some "fly by night" rip-off merchant or phish site, a couple of weeks to wait for an SSL server cert isn't a big deal.
I recently noticed that Thawte now has "trial" SSL server certs. I haven't checked, but I bet these chain back to an existing trusted root cert (another "phish special"?).
There are some interesting forms of mutual authentication out there that leverage the certificate system without relying on the CA. IMO, mutual authentication will 'fix' the problems with SSL and will help prevent MITM attacks on web apps.
Bruce, do you know any _working_ solution how the companies should improve _their_ security (read: anything that is in direct access to the companies!) which could possibly prevent their customers from entering their personal data on fake sites?
The companies can't do much more than repeatedly tell their customers "Look twice where you enter your data!". I don't get your point on why the companies still should have to pay if their customers don't listen to that advice.
Actually I don't think it matters that much if the CA doesn't check you out. If you pay for the certificate, then (at least in theory) a law enforcer is going to be able to trace that purchase back to its source. I'm not aware of any CAs that accept payment in cash!
Anyway the issue here is that consumers have been told to look for the padlock as an indicator that the site they are looking at is trustworthy. We know that it means nothing of the sort. Consumers have been lied to.
@Neil: Nowadays, when you can get (almost completely) anonymous prepaid Visa cards via the internet, the payment paper trail can easily be cut off or made worthless. Services like PayByCash, Western Union, etc. just add to the ease.
teach the customers to enter the url themselves by typing it correctly into the box, not by clicking on the phisher's link. then they won't have to worry about worthless certificates and padlock icons.
"If you pay for the certificate, then (at least in theory) a law enforcer is going to be able to trace that purchase back to its source."
Assuming the phisher uses a stolen credit card to register their phishing domain name and uses that same stolen credit card to purchase a "quicky" SSL server cert, there is nothing to trace. The Domain Registrars, and now the CAs, allow one to register a domain name and get a SSL server cert in literally a matter of minutes, preventing any real checks and balances, and the ability to establish any real trust.
In the SSL/PKI world, the certificate issuer (aka CA) provides the "trust element". If the SSL server cert can't be trusted, then the whole system breaks down.
Bruce and Woo:
Somehow, I feel you are talking past each other. Woo is talking about the sad reality that customers tend to just jump from webpage to webpage without really checking to make sure it is a legitimate site. Bruce is talking about what legitimate companies can do to help the customers find them, and do legitimate business with them.
Those are two different issues. But keep in mind, Woo, that the company is just as much a victim when some miscreant decides to spoof its website. This is the idea behind phishing: setting up a spoof of a legitimate site to get someone's password. Keep in mind, also, that a password is NOT someone's identity; it is simply an authentication mechanism. It is only good business practice to develop an authentication mechanism that is next to impossible for the phisher to duplicate, so that a legitimate customer exercising a baseline of common sense knows the other site is not legitimate. Passmark is an example of an idea that could help in this area.
"teach the customers to enter the url themselves by typing it correctly into the box, not by clicking on the phisher's link. then they won't have to worry about worthless certificates and padlock icons. "
Part of the problem is that even trusted companies are using "obscure" URLs that don't use the base company name. For example, American Express uses a number of different domain names for customer interaction that don't all have the "www.americanexpress.com" domain.
So a phisher registers the domain name "www.TrustVISA.com", gets a quicky SSL server cert, and then sends their spam phishing email that doesn't provide a link, but prompts users to type this URL into their browser, to "ensure they are at the correct website". This phish site has a seemingly valid domain name and even a matching, valid SSL server certificate. How would an average user ever know the difference?
"teach the customers to enter the url themselves by typing it correctly into the box, not by clicking on the phisher's link. then they won't have to worry about worthless certificates and padlock icons."
That doesn't protect them from DNS attacks, unfortunately.
"In the SSL/PKI world, the certificate issuer (aka CA) provides the "trust element". If the SSL server cert can't be trusted, then the whole system breaks down."
Actually, there are a number of ways to do SSL validation without relying on the CA. Petnames and Trustbar, for example, assign a name or logo to the certificate on the client side, so a change in the cert is noticed. We use an out-of-band verification. There are a number of ways to bring SSH-esque security to SSL. The SSL certs don't have to be from a known CA, they just have to be known by reputation/repetition or validated by some other trusted means.
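A minimal sketch of that SSH-style, remember-the-certificate approach in Python. The hostnames and certificate bytes below are placeholders; in a real client the DER bytes would come from something like `ssl.SSLSocket.getpeercert(binary_form=True)`:

```python
import hashlib

# SSH-style "trust on first use" pinning: remember a certificate's
# fingerprint the first time we see a host, and flag any later change.
pinned = {}  # hostname -> hex fingerprint remembered from first contact

def check_pin(hostname, cert_der):
    """Pin the cert on first sight; afterwards flag any change."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    if hostname not in pinned:
        pinned[hostname] = fingerprint
        return "first-use: fingerprint pinned"
    if pinned[hostname] == fingerprint:
        return "match"
    return "MISMATCH: certificate changed, possible MITM"

print(check_pin("bank.example", b"der-bytes-of-cert-A"))  # first-use: fingerprint pinned
print(check_pin("bank.example", b"der-bytes-of-cert-A"))  # match
print(check_pin("bank.example", b"der-bytes-of-cert-B"))  # MISMATCH: certificate changed, possible MITM
```

Note this establishes trust by repetition, not by a CA: the first connection is still a leap of faith, which is exactly the initial-validation question raised below.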
"Bruce, do you know any _working_ solution how the companies should improve _their_ security (read: anything that is in direct access to the companies!) which could possibly prevent their customers from entering their personal data on fake sites?"
No. None at all.
There are some clever technologies that will work with sophisticated users, but none that will reliably work with normal people.
The only thing the lock icon means to me is that the connection is secure. I make sure of the web site itself by typing in the URL myself.
Now the really evil thing would be for a browser hijacker to get into my bookmarks and replace my site URLs with the fake ones (shudder).
I really like what Bank of America does with its site, displaying a custom image and pass phrase that I select on the page. A fake site couldn't show my image and pass phrase.
"No. None at all.
There are some clever technologies that will work with sophisticated users, but none that will reliably work with normal people."
This is the problem. Technology solutions will only impact security-aware users. Unfortunately, the only feasible solution for this is a much more stringently controlled network.
The Internet was not designed for security, and pretty much everyone acknowledges that if security is not in the design, then any addition of it will be a hack. Eliminating phishing will only happen when people realize this and accept the changes that need to be made.
Unfortunately, this is not going to happen until the insurance companies and banks start to realize that the risks of conducting business on the internet exceed the value; when insurance is not available, and banks won't invest in insecure organizations, the business community will have no choice but to change.
That's an interesting solution, although there are a few loopholes. Does BoA show it before you log in or after? Displaying the image after login would only help with after-the-fact discovery, and displaying it before would require some sort of unique identifier (a cookie, etc.) sent to the site that could probably be intercepted by a third party. But I think this type of reverse-authentication for banks is an excellent concept.
"Actually, there are a number of ways to do SSL validation without relying on the CA. "
Sure, there are lots of "two-party" trust solutions available. The problem is that these solutions don't scale. They are fine for setting up a trust relationship with a user's bank, where they have an account. This is where most of the two-factor authentication (token, whatever) solutions are. The user sets up some "shared secret" or "mutual authentication" with the other party, hence the "two-party" trust. The problem is that users have to establish this "two-party" trust relationship with every other party they want to trust, resulting in a different shared secret, another hardware token, whatever, eventually leading to what is commonly referred to as a "token necklace".
However, "two-party" trust relationships provided by these solutions don't currently work when users need to connect to a multitude of merchant web sites. This is where SSL/PKI was originally intended to establish a secure connection where the web server is authenticated and trusted (via a "third-party", aka CA). In the "early days", this was the case, when there were a few CAs, which provided a strong level of trust in the SSL server certs they issued. However, with the uncontrolled proliferation of trusted CAs, many of whom have "sold out", the SSL/PKI system we have today doesn't provide the expected level of trust, and the system has become broken.
It's quite a two-edged sword... as long as the users aren't held liable for fraud committed because of miskept secrets, they aren't going to develop a sense of carefulness... and the companies will pass on the costs from individual frauds to their whole consumer base, through prices etc.
On the other hand, if the liability were taken from the companies and put on the customers, then the companies would have less incentive to track down reported phishing mails and try to get fake sites shut down.
There should be a way to smack people over the head who aren't careful about their authentication data... *sigh*
Good points, very interesting stuff. The 'token necklace' is a concern, but I tend to think it won't happen just because consumers won't let it happen. The token necklace is only a problem with symmetric tokens, though. Public-key based systems can handle multiple relationships in a single client.
The other issue would seem to revolve around initial validation. In a world of 'two-party" trust how do you establish trust? How does the user know they are at the bank when they set their petname?
> Now the really evil thing would be for a browser hijacker to get into my bookmarks and replace my site URLs with the fake ones (shudder).
Now *that* is an excellent attack. Scales badly, since you'd have to actually check the existing bookmarks for strings like "bank", etc., and replace them... and you'd have to have some method for replacing only the bank site that you have spoofed, or people will notice immediately.
Of course, if you launch an attack like this, you'll get *some* results, and it may very well be worth the time.
Honorary "black hat" for ya.
BoA presenting custom images:
BoA is one of the offenders sending out emails and telling people to click on the links.
As for their custom image/passphrase solution: it may help savvy users detect a DNS attack, but it's not going to help the real target of phishing attacks. Remember the guy in the suit who tells the elderly lady to withdraw 10K and give it to him to help him investigate a corrupt teller? (Or the one who tells the bank VP to push 250K under a bathroom stall?) So the elderly lady gets an email stating that there might be a problem with her account, and part of that problem might affect her image and passphrase. In order to maintain her security, she should log on right away and determine that they are still correct.
As Bruce discovered somewhere between 'Applied Cryptography' and 'Secrets and Lies', this is not a technology problem. And I don't mean to imply that it's an elderly lady problem either.
The lock icon is just another thing to add to make something look like the real thing. From realistic email to realistic websites, the lock icon just adds a little more credibility. It seems to me the website holds the lock and the key. Perhaps a real bank could register its website with a verification company. The verification company makes a checksum or hash of the real website. Whenever you want to make sure you are at a real website, the suspect website is checksummed or hashed and compared against the verification company's copy. You would need a browser plugin to check whether the two hashes or checksums matched.
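The registration idea above could be sketched roughly like this in Python. The registry, URL, and page contents are all hypothetical, and in practice dynamic page content would make exact-hash matching fragile:

```python
import hashlib

# Hypothetical registry: the verification company records a hash of the
# genuine page. The URL and page bytes below are made up for illustration.
REGISTERED_HASHES = {
    "https://www.example-bank.com/login":
        hashlib.sha256(b"<html>genuine login page</html>").hexdigest(),
}

def page_matches_registration(url, fetched_body):
    """The plugin's check: hash what the browser fetched and compare."""
    expected = REGISTERED_HASHES.get(url)
    if expected is None:
        return False  # site never registered: treat as unverified
    return hashlib.sha256(fetched_body).hexdigest() == expected

print(page_matches_registration("https://www.example-bank.com/login",
                                b"<html>genuine login page</html>"))  # True
print(page_matches_registration("https://www.example-bank.com/login",
                                b"<html>fake login page</html>"))     # False
```

A phished copy with even one byte changed fails the check; the harder problem, untouched here, is how the plugin learns the registry's contents over a trustworthy channel.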
Excellent white paper on Visual Spoofing.
And for all practical purposes, you could also easily create your own SSL cert. I've done it before, but that was running my own Windows-based setup.
Not sure how that would work elsewhere (e.g. shared servers).
Woo, you obviously have never been the victim of fraud. The process of cleaning up the mess may not be a direct *financial* burden on the end user/customer, but it's certainly a smack over the head that makes people more careful.
While some kind of by-hand key exchange suggests itself, nothing is going to help without educated users.
And users aren't going to be educated by polite means. Banks saying "we'll never do this" aren't really helping anything.
There's very little that will teach you more about how hot a fire is than getting burnt.
While the ethics are questionable, wouldn't it be useful from an educational point of view to mount a white-hat phishing trip? Starting a few really good phishing sites, except that the follow-up page says "Haha! Now I have all your money! Not really, but I could. Here's how you could have figured this out." instead of "There's been an error. Please try again, or enter your bank card number," might help push a lot of people into more secure behaviors.
Would something like the anti-phishing toolbar Netcraft released (http://toolbar.netcraft.com/) a while back be any help, considering it takes into account factors other than whether SSL happens to be enabled? I guess the problem remains that installing it is still more diligence than the average user will likely perform.
Personally, I do not pay bills online and NEVER save any credit card information on my PC. The rare time I want to order something online, I always try to find a phone number on the website that I can use to talk to a live person to order the product.
I use the electronic bill payment system that my bank provides on their ATMs, which are in their building. I am aware that it might be possible for someone to install a fake card reader and a pinhole camera on the bank ATM, inside the bank proper, but it is highly unlikely and certainly much safer than going online to pay bills. If a person is truly concerned, they can always use the manual bill payment method: cheque, envelope and stamp.
What does the "padlock icon" say? It says that the operator of a web site paid $$$ to a CA whose certificate is distributed in the root cert list.
This means that anybody who is in possession of valid credit card credentials is able to acquire an SSL certificate for an arbitrary site, whose name becomes the common name in the certificate.
Even if the browsers implemented prompting and checking differently from the current implementations (IMVHO Firefox has a pretty decent one, with several explanations that hint the user to what actually happens there), we would still have the wrong process in place for establishing end-to-end trust, because the "trust" is just a proven valid credit card credential of the certificate holder.
I doubt that the popular root CAs employ very rigorous testing for the identity of certificate requestors.
To (not only) me, it appears simply as a machinery for printing money.
Think 275K sites in January 2004 times a $300 average price for a server certificate: it sums to $82.5M.
These $82.5M represent only a certain amount of the CA/cert ecosystem which includes comparably or higher priced codesigning, CA, VPN and other certificate types as well.
Wouldn't turning on the certificate mismatch warning (in IE, Tools / Internet Options / Advanced / Security / Warn about invalid site certificates) help somewhat? That way at least if the cert doesn't match the FQDN on the site (or they try to be clever and just drive you to an IP address) your browser will put up a "stop, hey there, what's-that-sound" warning before loading the page. Correct?
"What's worse is that some legitimate sites have bad (expired/self-generated) certificates. Bad hygiene by legitimate sites generates the expectation that certificates can be dodgy."
I'll grant you the expired cert is not a good thing, but what is dodgy about a self-generated cert?
The whole idea behind a CA is that they are supposed to be a third party, that you trust, who will attest to the identity of the other party. So, take a look at the list of "Trusted Root Certification Authorities" that comes with any MS OS (or insert favorite browser here) and tell me why you trust them. I personally don't trust any of them.
You've seen it before.... :)
So what does the lock icon actually mean?
To me, it says that I have an SSL connection to a server. It says absolutely nothing else. It means that a mediocre 128-bit encryption mechanism is being used between my client and the server, and that data sent between the client and server are a lot less likely to be overheard by third parties.
It's the camera/mirror at the ATM, or the large amount of personal space around the teller window at the bank.
It does NOT tell me anything about the server. The cert itself, *might* but I'd need to understand how to read it, and frankly, the language in the "look at this cert" window in most browsers is obtuse. And in addition to that, it really only makes sense if you understand the chain of trust back to the root cert.
Just a thought... is there much value in using 'SpoofStick' as an easy(er) method for end-users? I use it in Firefox; I understand there's an IE version as well.
"... What is SpoofStick?
SpoofStick is a simple browser extension that helps users detect spoofed (fake) websites. A spoofed website is typically made to look like a well known, branded site (like ebay.com or citibank.com) with a slightly different or confusing URL. ..."
-from http://www.spoofstick.com/ and nope, don't own any stock in these guys.
Check out the Petname Toolbar (https://addons.mozilla.org/extensions/moreinfo.php?id=957) for Firefox. It allows you to securely name your trusted sites, in order to allow users to easily detect spoofs.
The CA/PKI system was supposed to solve this problem by ensuring that anyone who has a cert is who they claim to be. Since the CAs are undermining this system by handing out certs without verification, perhaps it's time for CAs to be held accountable. CAs would exercise a lot more due diligence when issuing certs if they were liable for losses caused by fraud sites which were issued valid certs.
At the very least, CAs should be required to stipulate the level of trust they have in the certificate holder and for what purpose the certificate was issued. A distinction should be drawn between certs that verify low-level issues, such as someone's personal identity, and certs that state the holder is e.g. a bank. If this distinction were made, combating SSL phishing would be somewhat easier, as a site which purported to be a bank but had a personal-identity-grade cert could be made to stick out like a sore thumb.
Another option to combat financial phishing would be for regulators to issue certs, backed by a governmental root certificate, directly to financial organizations. Regulators are in the best position to know if a given cert applicant is legitimate or fraudulent as most financial organizations can't operate without regulatory approval.
Regardless of which of these two options was implemented, from that point forward it would be a relatively simple matter for browsers to articulate to the user the kind of cert a given site has. Any number of UI solutions exist, but an easy one would be to add a status line which stated the owner of the root cert and what the site cert was authorized for. Users would merely need to be trained to realize that real banks (etc) have government-backed certs issued by a banking regulator and classed as 'for banking use.'
One of the things I find interesting about these phishing attacks is they are spoofing sites of organizations with which the victim almost invariably has a pre-existing relationship set up via non-web means. (Paypal is one notable exception.)
We have, in SSL, the security tools to deal with this sort of thing. Why should I care who signed my bank certificate, when I could go down to my bank and get a trusted copy from the bank itself? Why should I worry about whether or not I give my password to a phisher, if he can't use it without also having a copy of the private key that corresponds to the public key I gave to my bank?
It seems to me that we've set up things rather wrongly, here.
I agree to a point since certs need to be tied to a more legitimate root to have broader value. That makes me think about drivers license validation at bars. If servers can't trust the issuing authority to produce a cert that can be easily validated, then should the bars really be accountable for a bunch of under-age scam artists who are able to buy a beer?
Without strong registration authorities and certification authorities, what other form of trust can you use to legitimize (e.g. sign, stamp, etc.) certificates?
whats phishing? i dont get u ppl! o and i like bananas! and www.makingfiends.com!
Amir Herzberg's been touting his TrustBar extension for exactly this reason. I personally use the petnames extension to keep from getting caught out...
At a presentation I saw a few days ago from APACS, we were shown phishing which used the padlock in a much nastier way. The browser really did have an SSL connection to the genuine bank server, but the HTML was replaced on the fly by a trojan DLL. This means that the TCP/IP connection and the certificates were all valid, but phishing is still in progress.
It struck me that all of the heavy work which goes into the anti-phishing tools is utterly wasted on windows/IE, since most of the phishing exploits we saw involved loading trojan code into the user's browser, thus changing the rules. All of the anti-phishing tools, including the new IE7 demo, assumed that the user's browser was inviolate. According to the APACS presentation, it usually isn't.
There was perhaps some accidental irony in the schedule, which arranged the APACS and Microsoft presentations consecutively.
> I recently noticed that Thawte now has "trial" SSL server certs.
> I haven't checked, but I bet these chain back to an existing trusted
> root cert (another "phish special"?).
No they don't.
What we really need is two types of CAs: one to issue certs for everyday traffic, and another that will only issue once a large bond is posted. The browser would differentiate between the two.
Perhaps a safe icon for a bond, indicating a higher level of trust.
"At a presentation I saw a few days ago from APACS, we were shown phishing which used the padlock in a much nastier way. The browser really did have an SSL connection to the genuine bank server, but the HTML was replaced on the fly by a trojan DLL. ... most of the phishing exploits we saw involved loading trojan code into the user's browser."
Isn't this an MITM attack and not phishing? Do you have more information on how they were able to compromise the browser?
""Bruce, do you know any _working_ solution how the companies should improve _their_ security (read: anything that is in direct access to the companies!) which could possibly prevent their customers from entering their personal data on fake sites?"
No. None at all."
What banks can do is be vigilant: track down phishing mails and fake sites and have them shut down early. Moreover, they can make phishing very ineffective or near impossible by using a well-designed two-factor mechanism (which can be as simple as a rotating code card: the phishers have only a fractional chance if the transaction code is randomly selected from a list of 100 codes). And of course they should abstain from emailing their customers, tell them never to trust an email from their bank, and for heaven's sake never to click on the link.
As we know, some customers will still click on the link despite having been told a thousand times not to. In those cases, holding the bank liable is questionable. The liability principle should push banks to behave responsibly, but if they can really do nothing to protect certain customers from their own carelessness, then we simply end up paying for some people's stupidity.
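The "fractional chance" of the 100-code list mentioned above works out as a toy calculation, assuming the bank picks its challenge code uniformly at random:

```python
from fractions import Fraction

def phish_success_chance(codes_stolen, list_size=100):
    """Chance that the bank's randomly chosen challenge code is one the
    phisher already captured from the victim's code list."""
    return Fraction(codes_stolen, list_size)

print(phish_success_chance(1))  # a single phished code rarely helps
print(phish_success_chance(5))  # even five captured codes usually fail
```

So even a successful phish of one code gives the attacker only a 1-in-100 shot at the next transaction, which is why this kind of scheme blunts phishing without any change in customer behavior.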
It is a form of MITM, but it was definitely classified as phishing by all parties. This was an APWG conference.
As for the method of compromise, there's no shortage of them in Windows. A few were suggested, including remote exploits, but these change day-to-day, so I didn't take notes. I'm not a Windows user.
I am vaguely disappointed by the plurality of suggestions for different colours or types of padlock or CA. This is exactly what Microsoft have done in IE7, and it won't address the phishing attacks demonstrated at APWG.
I actually had reason to use Windows the other day, and a spyware alert popped up: "Do you want to allow this operation? Yes/No". I could find no button to tell me WHAT the operation was, and I pressed them all. I just had to allow it or deny it. I tossed a coin. Padlocks sure are pretty, and they make the user feel safe, but they are useless, and the uselessness is occasionally compounded by the implementation.
Do you have a reference to the conference paper?
"It is a form of MITM, but it was definitely classified as phishing by all parties." What's the phishy part here? If they are able to compromise the browser, what do they need phishing for?
The real problem here is that the interface doesn't match the intentions. When you go to an SSL protected site, you see a little padlock icon, and that proves the site is secure. Well, indeed it does; it proves (more or less) that you really are talking to the owner of that certificate, BUT who the hell is that? If you want to find out you have to drill down about 5 clicks and read some ASN.1 gibberish.
What people (apparently) want the padlock icon to mean is "you are talking to someone who is an honest, established businessman" (at least, that's what Verisign's business model would have us believe). If you are going to force this semantic overloading into SSL, then what you require is not a new type of certificate, but to make it easier for the ordinary user to understand the manifold fields of existing certificates.
In particular, when signing a certificate, the CA should only sign those items in the subject field which have been verified, replacing the rest with "not verified"; and
when a link is established, several items in the subject field should be displayed on the screen in an easy to read format like:
"[CA] certifies that you have securely linked to the website [CN], registered in [L]. At [Not Before], the real owners of this website were [O]"
So this will end up looking like:
" Thawte Consulting cc certifies that you have securely linked to the website www.hushmail.com, registered in Anguilla. At 7/05/2005, the real owners of this website were Hush Communications Anguilla, Inc."
" Cheap Certs Inc. certifies that you have securely linked to the website www.example.com.ru, registered in Russia. At 7/05/2005, the real owners of this website were not verified"
(Obviously numerous variants on the basic idea are possible, both in the data displayed and the exact manner of advising the user.) The establishment of an SSL link is now accompanied by information which lets the user know what that actually means in this particular case. Either might be sufficient for one's particular purpose, but now it is in a format which lets a nontechnical person easily decide.
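As a sketch of how such a display string could be assembled, here is a Python fragment working from a certificate dict in the shape returned by the standard library's `ssl.SSLSocket.getpeercert()`; the certificate data is invented to match the Hushmail example above:

```python
# Build the proposed plain-language summary from a getpeercert()-style
# dict: subject/issuer are tuples of RDN tuples of (field, value) pairs.
def describe_cert(cert):
    subject = {k: v for rdn in cert["subject"] for k, v in rdn}
    issuer = {k: v for rdn in cert["issuer"] for k, v in rdn}
    return ("{ca} certifies that you have securely linked to the website "
            "{cn}, registered in {loc}. At {start}, the real owners of this "
            "website were {org}").format(
        ca=issuer.get("organizationName", "an unknown CA"),
        cn=subject.get("commonName", "not verified"),
        loc=subject.get("localityName", "not verified"),
        start=cert["notBefore"],
        org=subject.get("organizationName", "not verified"))

example = {
    "subject": ((("commonName", "www.hushmail.com"),),
                (("localityName", "Anguilla"),),
                (("organizationName", "Hush Communications Anguilla, Inc."),)),
    "issuer": ((("organizationName", "Thawte Consulting cc"),),),
    "notBefore": "Jul  5 00:00:00 2005 GMT",
}
print(describe_cert(example))
```

Unverified fields fall through to "not verified" automatically, which is exactly the behavior proposed for a CA that signs only what it has checked.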
Of course in Firefox this could easily be done with an extension.
(Oh, in case anything is licensable or whatever here, I hereby license this idea to be freely used by anyone, provided only that they don't try to encumber it to prevent others doing the same.)
Is there some clue somewhere in the SSL certificate on how to track down the certificate holder?
To check an SSL certificate in Internet Explorer, open Internet Options; on the Content tab, click Certificates...
A few points to go over:
1) Consumers should bear the burden of their whoopsies. Like credit card numbers, login information and the like are the responsibility of the owner. The only argument I've seen against this concept was from Bruce:
"The customers can't be reasonably held liable because they can't affect the security systems. All they can do is follow whatever rules the designers of the systems invent."
The discrepancy here is that phishing isn't about a "security system." No one is breaking into a secure server; no one's guessing a password. They're throwing fake content, designed to appear real, at a user who doesn't know enough to verify the information. The idea that a company should be responsible because its users fell for an email purporting to be from the company is beyond ludicrous. The user is responsible for their data and for any damages incurred because they willingly submitted it (they clicked 'Submit') to a fraudulent site.
Also, from "another_bruce":
"teach the customers to enter the url themselves by typing it correctly into the box, not by clicking on the phisher's link. then they won't have to worry about worthless certificates and padlock icons. "
If their system is compromised (and let's be honest, people allowing malicious code to run happens all the time -- thus the need for things like UAC and confirmation after confirmation), they could have their URLs redirected by any number of means. The shortcuts could be changed or, even more maliciously, the hosts file(s) could be used to provide spoof site IPs for totally valid URLs. So you type in www.citibank.com but go directly to a phishing site. End-user security isn't that simple. The only real answer is education -- people need to know how computers work.
Oh, and I wanted to add that encryption isn't worthless. The gripe I'm seeing is that a padlock doesn't always mean secure. It DOES always mean encrypted. Encryption is one very important part of security that should -not- be overlooked. Just because users are still falling for phishing scams and using horrible passwords (and using them for every system they sign on to) doesn't make encryption worthless.
It's worth noting that users also constantly fall for the classic spoof, "www.amazon.com.something.de" as compared to "www.amazon.com", so as stated previously, tech-savvy people are the ones most likely to see real, lasting benefit from such a tool, and only because they take into account the weaknesses of the tool and exactly what its output really means. Blind faith in any technology is predictably insecure, as all technology has its limitations.
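That lookalike-domain trick can be caught mechanically by checking what the URL's host actually is, rather than whether the brand name appears somewhere in it. A small Python sketch (the URLs are illustrative):

```python
from urllib.parse import urlparse

def host_belongs_to(url, registered_domain):
    """True only if the URL's host IS the registered domain or a true
    subdomain of it, not merely a host that contains the brand name."""
    host = urlparse(url).hostname or ""
    return host == registered_domain or host.endswith("." + registered_domain)

print(host_belongs_to("https://www.amazon.com/gp/css", "amazon.com"))         # True
print(host_belongs_to("https://www.amazon.com.something.de/", "amazon.com"))  # False
```

This is the core check a toolbar like SpoofStick performs: the rightmost labels of the hostname are what counts, and "amazon.com" appearing in the middle of "www.amazon.com.something.de" proves nothing.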
The best security-related new feature in Firefox 3 from my point of view is that it very prominently displays the title of the organization to which a certificate is issued right on the address bar in front of the address. So it's very easy to see if the certificate matches the website or not. I believe once this becomes wide-spread, this will greatly help combat one type of phishing (of course, this still requires a certain level of user know-how to even bother to look there).
Over time, users may even come to expect the green field with the company name, and at that point the SSL certificate system will actually become almost as useful as it was originally intended to be for a widespread audience (not just for us geeks).
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.