When Computer-Based Profiling Goes Bad

Scary story of someone who was told by his bank that he’s no longer welcome as a customer, because the bank’s computer noticed a deposit that wasn’t “normal.”

After two written complaints and a phone call to customer services, a member of the “Team” finally contacted me. She enquired about a single international deposit into my account, which I then explained to be my study grant for the coming year. Upon this explanation I was told that the bank would not close my account, and I was given a vague explanation of them not expecting students to get large deposits. I found this strange, since it had not been a problem in previous years, and even stranger since my deposit had cleared into my account two days after the letter was sent. In terms of recent “suspicious” transactions, this left only two recent international deposits: one from my parents overseas and one from my savings, neither of which could be classified as large. I’m not an expert on complex behavioural analysis networks and fraud detection within banking systems, but would expect that study grants and family support are not unexpected for students? Moreover, rather than this being an isolated incident, it would seem that HSBC’s “account review” affected a number of people within our student community, some of whom might choose not to question the decision and may be left without bank accounts. This should raise questions about the effectiveness of their fraud detection system, or possibly a flawed behaviour model for a specific demographic.

Expect more of this kind of thing as computers continue to decide who is normal and who is not.

Posted on December 18, 2006 at 6:37 AM • 54 Comments

Comments

Shefaly Yogendra December 18, 2006 6:58 AM

I don’t think there is anything inherently wrong with a computer model raising ‘risk’ flags. All one has to do is to consider a situation where a flag was not raised (say, a fraudulent credit card transaction that does not fit my profile of spending), and then imagine the hassle in cleaning it up. That also damages one’s credit rating big-time.

In the UK, the KYC norms are implemented with great zeal to prevent money laundering, and I am sure, although nobody will admit it, recently arrived foreigners with little or no credit history in the UK probably have lower thresholds for the raising of such flags.

However, I wonder if these models also inform banks of the damage to their reputation. They should by all means see these flags and take note, but questioning the customer as if (s)he were a criminal or, worse, hastily telling them they are not welcome, is a stupid strategy. Then again, the less said about British customer service traditions, the better.

Tim December 18, 2006 7:38 AM

It’s far from new, anyway. About 6 years ago, I asked Barclays for an extended overdraft and they refused – “the computer” had said I hadn’t had a salary paid in regularly in the last 6 months. Transpired the bloody thing couldn’t tell the difference between end-February and start-March.

Not using them any more now, although for other reasons…

Rob Kendrick December 18, 2006 7:42 AM

HSBC seem to have their fair share of computer monitoring for unusual activity. They recently suspended the accounts of a load of customers who were using a little-known web browser called NetSurf, used by at most around 2,000 RISC OS users. They did apologise eventually, and re-enable the accounts, but only after many of them sent very bitchy letters and the developers of the browser contacted them.

Apparently, a member in one branch of the bank said they only supported Internet Explorer on Windows XP, as that was the only OS they could guarantee to be free of malware.

lol December 18, 2006 7:49 AM

@Rob Kendrick:
…”Apparently, a member in one branch of the bank said they only supported Internet Explorer on Windows XP, as that was the only OS they could guarantee to be free of malware”…

  • Well at least someone has a sense of humour…

HSBC is evil December 18, 2006 7:58 AM

HSBC holds a special place in my heart. Before allowing me to open an overseas account, they insisted on enough ID to enable anyone at the bank to steal my identity with ease. Then they redid the website in such a way that the security certificate was issued for a different website’s address. When I called to inform them of this, I got an irritated clerk telling me just to click through, with no apparent awareness that this might be a problem.

Then when I finally got tired of their shit and tried closing the account, they held onto the money for three weeks while they ran some kind of security check. Apparently they couldn’t deal with the idea of someone depositing a lot of money, getting pissed off after 3 months of mistreatment and withdrawing all of it.

Anonymous December 18, 2006 8:28 AM

Customer service levels vary HUGELY! I contacted Nationwide to inform them that I couldn’t log in because an advert (some flashy thing) was getting in the way of the login link… They responded with something along the lines of “not our problem, we only support Windows.” I sent them a rather bitchy letter stating that it had worked the previous week and that some idiot in their marketing department clearly thought an ad was more important than customers. I also suggested that if they constructed a valid HTML page, far more people would be able to use it. I got a nice reply and the page was fixed. (Luck of the draw!! Clearly someone who cared received the second mail.)

Scott December 18, 2006 8:48 AM

I think computer profiling is fine, as is profiling in general. The problem is when humans become too reliant on the profile’s results and fail to apply a healthy dose of common sense.

Elliott December 18, 2006 9:00 AM

Computer models about humans are inherently wrong. Any such automatism is guaranteed to punish a fixed percentage of the population (those outside two or three standard deviations).

I expect that such systems will educate people with above-average personalities and skills to (pretend to) behave like average dumbheads. Which will narrow the margin for “suspicious behaviour” until most extraordinary people (and their money and brightness) go elsewhere.
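
A minimal sketch of the arithmetic behind Elliott’s point, in Python; the cutoff values and the customer count are illustrative assumptions, not figures from any bank. A pure “flag anything beyond k standard deviations” rule penalizes a fixed share of perfectly legitimate customers, no matter how many customers there are:

    # Illustrative only: a fixed z-score cutoff flags a fixed fraction of
    # legitimate customers, independent of the size of the customer base.
    from math import erf, sqrt

    def two_sided_tail(z):
        # Probability that a standard normal value falls beyond +/- z.
        return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

    customers = 10_000_000                     # assumed customer base
    for z in (2.0, 3.0):
        share = two_sided_tail(z)
        print(f"{z} sigma cutoff: {share:.2%} flagged, "
              f"about {int(share * customers):,} legitimate customers")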

Josh Rubin December 18, 2006 9:06 AM

Here’s a succinct statement of the problem with computer-based profiling:

"When I am convicted by the testimony
 of a machine, do I have the right to
 cross-examine that machine?"

Sean December 18, 2006 9:29 AM

The problem is not with the computer raising an alarm, but with the extremely incompetent types who think computers are all-knowing and blindly take that output and apply it in an inappropriate manner.

Expect more to come as the blind are led by the unknowing.

FP December 18, 2006 9:31 AM

@Shefaly Yogendra: “I don’t think there is anything inherently wrong with a computer-model raising ‘risk’ flags.”

No, there isn’t. That’s what credit-card companies have done for ages to find potentially fraudulent transactions. As Mr. Schneier keeps saying, that profiling works because the cost of a false positive is low: you get a call and confirm that the transaction is legitimate. In this case, the cost of a false positive was much more severe: a lost and dissatisfied customer.

@Scott: “The problem is when humans become too reliant on the profile’s results.”

Exactly. But that is becoming prevalent these days. People are afraid to trust their “common sense.” For a bank with millions of accounts, the cost of losing a single customer is negligible. But “what if” that customer really was a terrorist? The cost of public exposure, and of having to explain to stockholders why a low-level employee was allowed to override the computer profile just because of his or her flawed common sense, is enormous. That trickles down to the employees as well: their cost of rejecting a single customer is much less than the perceived risk of “being wrong.”

The frightening part is that there is no due process (the profiling algorithms are secret), no presumption of innocence, and no means of appeal, even when the bank is essentially acting on government policy.

Bruce Schneier December 18, 2006 9:39 AM

“I don’t think there is anything inherently wrong with a computer-model raising ‘risk’ flags.”

Of course there isn’t. And you can look at credit-card fraud detection as an example of the right way to use data mining to detect criminal activity.

But this is an example of a way not to do it. The computer model didn’t raise a risk flag; the computer model barred a customer.

Reader X December 18, 2006 9:50 AM

“Expect more of this kind of thing as computers continue to decide who is normal and who is not.”

As Sean and others above have said, this isn’t the problem. The problem is that the bank doesn’t have appropriate process to manage events kicked out by the AML system. Other banks do, and for them the subsequent operational costs and reputational harm are easier to manage.

One wonders to what extent the legal environment in the UK, which encourages banks to blame the customer (well documented by Ross Anderson and others), is responsible for this. In the US, with its stronger consumer protections (and a strong regulatory onus on banks to enforce the BSA but Do The Right Thing), this sort of thing is rarer.

Chris December 18, 2006 9:51 AM

I would bet that the accounts were closed because they were not profitable to the bank, not because they were suspicious. UK banks have been pruning as many accounts as possible that don’t make money for them.

Jungsonn December 18, 2006 9:53 AM

Good example of how they use machine profiling, obviously something that’s prone to failure. Never let a machine that is incapable of stepping outside its rigid code do a human job. I can’t really think how they should improve on this, maybe a little AI would help.

Reader X December 18, 2006 10:05 AM

“The computer model didn’t raise a risk flag; the computer model barred a customer.”

No, the bank barred the customer. This is a policy and procedure problem, and a misuse of a detective control that may very well be working correctly.
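
One way to make the distinction above concrete, sketched in Python; the class, thresholds, and account details are hypothetical, not a description of any real AML product. The detective control only emits a flag; whether that flag turns into a phone call or a closure letter is a separate policy decision:

    # Hypothetical sketch: keep the detective control (flagging) separate
    # from the policy decision about what to do with the flag.
    from dataclasses import dataclass

    @dataclass
    class Deposit:
        account: str
        amount: float
        origin_country: str

    def is_unusual(dep: Deposit, profile_mean: float, profile_std: float,
                   z_cutoff: float = 3.0) -> bool:
        # Detective control: flag deposits far outside the customer's
        # historical profile. It decides nothing else.
        if profile_std == 0:
            return dep.amount > profile_mean
        return abs(dep.amount - profile_mean) / profile_std > z_cutoff

    def handle_flag(dep: Deposit) -> None:
        # Policy layer: route the flag to a human for follow-up,
        # never an automatic account closure.
        print(f"Review queue: call holder of {dep.account} about "
              f"{dep.amount:.2f} received from {dep.origin_country}")

    grant = Deposit("12345678", 9000.0, "DE")  # e.g. a study grant
    if is_unusual(grant, profile_mean=600.0, profile_std=250.0):
        handle_flag(grant)                     # a call, not a closure letter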

Shefaly Yogendra December 18, 2006 10:08 AM

@FP and @ Bruce

The points you raise were addressed in the third paragraph of my comment, which I reproduce here:

“However, I wonder if these models also inform banks of the damage to their reputation. They should by all means see these flags and take note, but questioning the customer as if (s)he were a criminal or, worse, hastily telling them they are not welcome, is a stupid strategy. Then again, the less said about British customer service traditions, the better.”

I disagree however that the system auto-magically barred the customer. The system generates flags, but a decision to bar a customer is a managerial choice, based on the risk propensity of the firm in question. Which is why a human being is involved.

What we are seeing is not the failure of a computer model but of a firm, which fails to train its staff adequately to deal with complex issues.

What HSBC did do wrong was not to give the customer a chance. But then again in the UK banking market, there is a lot of activity to lose customers at the lower end of profitability and students are one such market. First Direct fired the first salvo in this war and that is an HSBC entity. Thanks.

Dave December 18, 2006 10:11 AM

Part of the problem here is the new Bank Secrecy Act laws. Financial institutions are required by law to profile their customers and report any unusual activity to the Feds (in the US that is).

Reader X December 18, 2006 10:17 AM

“I can’t really think how they should improve on this, maybe a little AI would help.”

Interesting comment. AI has been tried, but it has proven enormously difficult to adequately feed the appetite of learning algorithms. Better to use a statistical approach and manage the false positives.

“In this case, the cost of a false positive was much more severe: a lost and dissatisfied customer.”

It didn’t have to be. Banks that do this well, and many do, are efficient and accurate in following up machine-generated leads.

“Part of the problem here is the new Bank Secrecy Act laws. Financial institutions are required by law to profile their customers and report any unusual activity to the Feds (in the US that is).”

Not really. US banks are required to know their customers and prevent or interdict money laundering to the greatest possible extent without compromising customer access or privacy. How they do it is negotiable. Reporting is certainly done via SARs and CTRs but does not imply wrongdoing.

Another Dave December 18, 2006 10:17 AM

@ Chris:
Hmm, you may well be on to something. I’ve banked with HSBC since the days of them being Midland, and have never had a problem such as this – even though I have (on two occasions) legitimately paid in cheques which could be considered “suspiciously large”. However, the fact that I was paying in cheques, rather than having funds transferred electronically, might have swung things in my favour…

fromHistory December 18, 2006 10:19 AM

One would think that HSBC would know something about money laundering and suspicious transactions. They pretty much are the “bank of record” for dope money, after all. Think of the British opium trade into Hong Kong and the Opium Wars, and you are in the right ballpark. These people know dirty!

FP December 18, 2006 10:48 AM

@Dave: “Part of the problem here is the new Bank Secrecy Act laws. Financial institutions are required by law to profile their customers and report any unusual activity to the Feds (in the US that is).”

Not per se. In fact, that is more desirable: pass the information on to the proper government authorities, so that they may investigate and act or not act on it. In this case, part of the problem was that the bank acted as a vigilante.

The bank should have no authority over deciding which transactions are fraudulent or not.

another_bruce December 18, 2006 10:49 AM

my banking custom is fairly substantial. in my view, my bank exists to serve me, i don’t exist to serve it. it earns a fair return on the five figures in my no-interest checking account.
my bank, as well as my other vendors, will treat me with a high punctilio of respect at all times, or i will immediately and permanently withdraw my custom.
one time, merrill lynch put a “temporary hold” on one of my deposits. when i got through to customer service, a man calling himself “chad” told me it was company policy to do this because “i might be a money launderer.” 20 years of brokerage custom blew up on the spot.
stand up to corporations or get run over, your choice. there’s something very subtle going on in our society where people are being conditioned to defer to them. public schools are a major offender. the conditioning didn’t work on me.

Thomas December 18, 2006 11:41 AM

I recently had a good experience with this type of situation. I made a credit card purchase for some software from a company overseas. I knew the company I was purchasing the software from, so I had no worries. The bank refused the transaction. I tried a couple more times, but it still refused the transaction. When I contacted the bank, I was connected to the fraud department. They were very nice. They required me to verify certain information to make sure I owned the account. After that, they explained that the transaction request came from overseas and, since I had never made a purchase like that before, they figured it was fraudulent. I was impressed that they were both nice and very professional throughout the whole process. I appreciated that they were making an effort to validate my credit card purchases. I have no illusions that this was mostly in the interest of the bank, but the side effect is that it helps me.

Jiminy December 18, 2006 11:51 AM

Another point to consider – how much money are the banks saving by employing the computer to run the analysis and then paying low entry-level wages for customer service techs to read the results off a screen?

I work in finance and can tell you that many business models are built around this concept. I imagine these episodes here are symptoms of that failed system.

I know it is cheaper than employing a string of compliance officers to monitor the same suspicious transactions.

Remember, Big Brother doesn’t run the world – His actuaries do.

Thornton December 18, 2006 1:11 PM

The bank should not have closed the account. They should have put a hold on the account and contacted the customer.

DBH December 18, 2006 1:18 PM

My issue is what happens with this information? Does this guy have a ding on his credit report? Was this suspicious behavior noted and sent to Homeland Security? Is this one more datapoint that will continue to increase his risk scores in other assessments? Has it been accumulated in Total Information Awareness?
http://www.schneier.com/blog/archives/2006/10/total_informati.html

In this particular case, HSBC was attempting to do something reasonable, and obviously has the model wrong, but either will pay the price in lost customers or will fix/mitigate the problem. However, once it is out of HSBC’s hands, this bit of data will be floating around screwing up all kinds of things. As Bruce has noted before, it is not so much the actual misinterpretation of the data, as the difficulty in setting the record straight, that causes real problems.

DBH December 18, 2006 1:22 PM

@FP

“The bank should have no authority over deciding which transactions are fraudulent or not.”

HUH!?!? They in fact need to do just that, authenticating all transactions. One can argue that when they find something wrong, they should contact the alleged perp, or authorities, as circumstances and law decide, but they are clearly the first line of defense in fraudulent transactions, else they could just let anyone take money out of your account.

Matthew Skala December 18, 2006 2:48 PM

The bank should decide whether transactions are fraudulent… but when a transaction is not fraudulent, they should not be able to penalize the customer for making it. That’s the real problem here: the transactions met the official requirements for legitimate transactions in terms of having the right signatures and so on, but they didn’t meet the secret unpublished requirements for “looking right”. If the transaction is allowable, it should really be allowable without penalty; and if they want to penalize someone for making the transaction, then they shouldn’t allow the transaction to go through in the first place. Having a third class of “technically legitimate, but we’ll close your account if you do this” is a bad thing, especially when the conditions for entering that class are secret.

Secret unpublished legitimacy requirements are a bad thing for the same reason secret source code of security software is a bad thing: they usually turn out to be badly broken under the covers, for instance by being discriminatory (requirements that wouldn’t be allowed if they were published), insecure (algorithms that wouldn’t be trusted if they were published), used to further inappropriate goals (Internet filtering systems with political agendas), or similar.

HiltonT December 18, 2006 5:44 PM

The day a computer can’t tell if I’m normal or not is the day that the programmer needs to find another job. 🙂

In many cases, as has been mentioned, the cost of a false positive is low, but the effort that’s required to follow up on this without pissing off a most likely legitimate user is simply not expended. All that needs to happen is that a friendly call is placed, asking about the reason for the variance in behavior and the reason noted on file.

This will leave the customer knowing that the company is careful about the customer’s data, the company with data that can be referenced in the future, an audit trail for all variances, and the ability to follow up anything that looks like a legitimate positive.

The computer can flag the behavior with whatever rules have been programmed into it, but until this flag gets handled properly, the computer will always be blamed when it is the business that is at fault.

Regards,
HiltonT

TimH December 18, 2006 6:56 PM

@ DBH
Really interesting if a UK bank with issues with a UK customer actually did report the issue to USA’s Homeland Security. Just think how many EU privacy, banking, and computer record laws that action would break… the SWIFT scandal would be nuffin’ in comparison.

Davi Ottenheimer December 18, 2006 7:00 PM

“computers continue to decide who is normal and who is not”

Eh? Unless I’ve missed some giant leap forward in AI they’re not deciding anything, just accelerating our already broken methods…

Why does this sound like the time GM tried to “automate” and ended up spending more than the total cost of buying Toyota, with little to show for it? If I remember correctly they eventually gave up the fight and tried diplomacy with Toyota (NUMMI) instead.

http://www.gm.com/company/corp_info/history/gmhis1980.html

@ Kingcob Bob IV

Amen

Dan December 18, 2006 7:38 PM

“I say ‘your world’ because really, as soon as you started letting us make the decisions for you, it became our world.” — Agent Smith “The Matrix”

Robin December 18, 2006 11:01 PM

There is nothing wrong with banks using automated fraud detection systems so long as they are reasonably configured to not interrupt too much normal traffic. Security is always going to cause some unwanted inconvenience. Ultimately it is there to protect the safety of our bank accounts and I’m all for that.

What I find strange in this case is that this person’s account was suspended when somebody else deposited a large sum of money into it.

How does that pose a risk to either the bank or the account holder? I would understand it if the party who sent the money got blocked by their bank, but surely neither I nor my bank gets hurt if someone wants to give me money.

Eric Rachner December 19, 2006 12:23 AM

Externalities again.

Let’s assume, reasonably I think, that when a certain amount of federal attention comes to bear upon a customer, that customer’s accounts transform from profitable assets into significant financial liabilities.

By proactively shedding customers who, in the bank’s experience, seem likely to be the subject of federal inquiry, a bank can reduce its exposure to the associated costs.

Eric Rachner December 19, 2006 12:31 AM

Addendum:

More precisely, I should have said, the bank transfers its exposure to those costs to others.

First, the jilted customer bears the cost of finding another bank. Second, whatever bank the customer migrates to assumes the risk of having to cooperate if and when that customer is actually investigated.

BuPa December 19, 2006 3:21 AM

Using a computer to detect fraudsters is not wrong, but relying solely on a computer system to make decisions about a customer (human) relationship can never produce good results. If my account has to be closed, I would expect some human being to explain the reason to me in human language. The question is whether the bank is doing human business or pure business. On day one, banks used automation to serve their customers better. Now, the objective is cost saving. And, even worse, the behaviour of the system is no longer comprehensible to human beings, and hence nobody can explain the exact reasons for closing the account. Many banking systems are now commodities and not built according to user requirements.

To prevent money laundering, banks have to monitor deposits instead of withdrawals, and if the source of funds is suspicious, they have to report it to law enforcement. I believe this allows the police to hold the funds and investigate.

Shefaly Yogendra December 19, 2006 5:49 AM

And when a bank, in providing exclusivity and personal service, relies on humans alone, with not a computer model in sight, here is what happens. Citibank is being sued in Zurich by a rich client for allegedly failing to keep his money secure and for allegedly allowing fraudulent transactions on his bank account (registration on FT needed):

https://registration.ft.com/registration/barrier?referer=http://www.ft.com/home/uk&location=http%3A//www.ft.com/cms/s/3d9ff3f0-8ec8-11db-a7b2-0000779e2340.html

JR December 19, 2006 5:58 AM

@Jiminy: More and more businesses are relying on computers to run things and using entry-level employees who know next to nothing of the business and are just supposed to feed data into the computer and read out the answers.

This would have made sense if the computer software developers were of a reasonably high standard. Most of them, however, know little of the business, and often their knowledge of computing and programming is deficient too.

So the bank’s computer displays something along the lines of “suspicious deposit” or maybe even just a code, instead of “phone the customer and verify the source of the funds”.

This business model is cheap, and the cost is low too.

Reader X December 19, 2006 9:13 AM

“By proactively shedding customers who, in the bank’s experience, seem likely to be the subject of federal inquiry, a bank can reduce its exposure to the associated costs.”

Well, sort of. The bank has already sunk costs into the development of account monitoring and AML capability. They will be engaged in the same AML activities regardless of the number of launderers caught.

What they avoid is the risk of negative regulatory scrutiny and possible action (fines, etc.) that might result from a high-profile case. These costs are not directly related to the handling of any given incident but are part of regulatory risk.

Reader X December 19, 2006 9:23 AM

“What I find strange in this case is that this person’s account was suspended when somebody else deposited a large sum of money into it. How does that pose a risk to either the bank or the account holder?”

It poses a risk to society, which is why regulators are requiring it. This typology does not differ substantially from the way some of the 9/11 hijackers were funded. The problem, of course, is that it also does not differ substantially from the way many foreign students are funded. It’s important to understand that this is not a needle-in-a-haystack problem, but a needle-in-a-box-of-needles problem.

It is not folly to scrutinize these sort of transactions more closely than others – modern AML systems would likely have caught some of the 9/11 hijackers – but it is definitely folly to think that this transaction alone is a basis to deny business to the customer. This is not the fault of the system, nor of its raison d’etre, but of the business process fed by the system’s output.
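
A rough base-rate calculation of the needle-in-a-box-of-needles point, using invented numbers rather than real AML statistics: even a detector that catches nearly every launderer and wrongly flags only a small share of legitimate students still leaves a flagged student overwhelmingly likely to be innocent, which is why the flag alone cannot be the basis for ending the relationship.

    # Invented numbers, for illustration of the base-rate problem only.
    students_with_foreign_deposits = 100_000   # assumed population
    true_launderers = 10                       # assumed prevalence
    detection_rate = 0.99                      # share of launderers flagged
    false_positive_rate = 0.02                 # share of innocents flagged

    true_flags = true_launderers * detection_rate
    false_flags = (students_with_foreign_deposits - true_launderers) * false_positive_rate
    precision = true_flags / (true_flags + false_flags)

    print(f"flags raised: {true_flags + false_flags:.0f}")
    print(f"chance a flagged student is actually a launderer: {precision:.2%}")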

Davi Ottenheimer December 19, 2006 11:41 AM

“It is not folly to scrutinize these sort of transactions more closely than others – modern AML systems would likely have caught some of the 9/11 hijackers”

Yeah, or just having the ability to listen to FBI agents in the field and/or the French and German Intelligence services who are sending you warnings.

Scrutinizing transactions is a neutral phrase, but making a decision implies a value system. A lack of computer-profiling was not the problem, so it’s hard to imagine anyone thinking it will solve things on its own. Part of a solution, perhaps, but that brings us back to the problem of knowing what to do when presented with the information…

Reader X December 19, 2006 1:49 PM

“A lack of computer-profiling was not the problem…”

A lack of effective profiling was certainly part of the problem, to the extent that I understand what you mean by that statement. Transaction monitoring, in terms of both process and technology, has come a long way since the USA PATRIOT Act. (Perhaps the economists reading this would like to opine on the effect of draconian regulations on technological innovation.)

“…so it’s hard to imagine anyone thinking it will solve things on its own.”

I’m sure no one who reads this blog regularly thinks that it will. By contrast, at least one person on this thread has already rejected it out of hand, which is silly but, I guess, to be expected. I have not noted any strong operational experience with such systems on this thread…

“Part of a solution, perhaps, but that brings us back to the problem of knowing what to do when presented with the information…”

Yes, absolutely. This is what the people who effectively use profiling systems are doing well.

Matthew Skala December 19, 2006 8:13 PM

I think that using a computer, or not, to enforce the rules is not the real problem. The real problem is that there are two sets of rules – the ones they acknowledge and the ones they penalize you for breaking. There should only be one set of rules.

In Office Space we saw the same situation, with no computers involved. You have to wear at least 25 pieces of flair. But if you do wear 25 pieces of flair, you’ll get in trouble for not wearing enough. You’re supposed to wear more than 25. No, we will not tell you how many are really sufficient.

h December 20, 2006 12:56 AM

Is this common for others with PayPal?
They accepted an electronic transfer payment, then promptly locked the account when the funds were accessed. At PayPal’s customer service, I found that the computer profiling overruled any human error-correction authority, and now, weeks later, I await a response from the company. Not really, but they might call.

prn December 20, 2006 4:20 PM

@Shefaly Yogendra: “in the UK banking market, there is a lot of activity to lose customers at the lower end of profitability and students are one such market.”

Now that is clever. Students may be at the lower end of profitability now, but just who do they think is going to be at the upper end of profitability 10-20 years down the line if not the students of today? If I were a student (I haven’t been for quite a while now), would they think they could treat me like a pile of dog**** today, and that 10 years later, when I had a top job and a high income, I’d come back to them? Not a chance!

@another_bruce: Right on!

Paul

David Harper December 23, 2006 1:25 PM

My American wife came to England as a graduate student in the early 1990s. Even then, she says, the major banks treated foreign students with suspicion, and provided them with fewer banking facilities than British students.

As for HSBC, isn’t it a little bit ironic that the Hong Kong and Shanghai Banking Corporation should be treating an overseas customer like a terrorist?

sky December 30, 2006 8:19 AM

BSA requires that banks help search for and report “suspicious transactions” which may be being made to further terrorist or criminal activity.

The problem, in part, is that their compliance is measured by state and FDIC examiners who did not construct the program and each of whom has their own subjective opinion as to what should be reported.

Their training manual even says they should cite banks which don’t report “unusual” activity. How is a bank supposed to comply with such vague guidelines? I’d guess that every branch has what someone would call “unusual” many times a day.

I’m new at this, and find the software not particularly helpful as it flags the same accounts over and over long after you have okayed similar transactions in the same account countless times. You can exempt the account from further scrutiny, but then that just raises examiner suspicion.
