Entries Tagged "banking"


Voice Authentication in Telephone Banking

This seems like a good idea, assuming it is reliable.

The introduction of voice verification was preceded by an extensive period of testing among more than 1,450 people and 25,000 test calls. These were made using both fixed-line and mobile telephones, at all times of day and also by relatives (including six twins). Special attention was devoted to people who were suffering from colds during the test period. ABN AMRO is the first major bank in the world to introduce this technology in this way.

Posted on July 21, 2006 at 7:43 AM

Paris Bank Hack at Center of National Scandal

From Wired News:

Among the falsified evidence produced by the conspirators before the fraud unraveled were confidential bank records originating with the Clearstream bank in Luxembourg, which were expertly modified to make it appear that some French politicians had secretly established offshore bank accounts to receive bribes. The falsified records were then sent to investigators, with enough authentic account information left in to make them appear credible.

Posted on July 17, 2006 at 6:42 AM

Failure of Two-Factor Authentication

Here’s a report of phishers defeating two-factor authentication using a man-in-the-middle attack.

The site asks for your user name and password, as well as the token-generated key. If you visit the site and enter bogus information to test whether the site is legit—a tactic used by some security-savvy people—you might be fooled. That’s because this site acts as the “man in the middle”—it submits data provided by the user to the actual Citibusiness login site. If that data generates an error, so does the phishing site, thus making it look more real.

I predicted this last year.
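
To make the mechanics concrete, here’s a minimal, self-contained simulation of why a one-time token doesn’t stop a real-time man in the middle. This is a sketch, not code from the actual attack; the TOTP-style token and all of the names are assumptions for illustration. The relay simply forwards whatever the victim types to the genuine site before the code expires, and mirrors the genuine site’s success or error back to the victim.

```python
# Illustrative simulation only: hypothetical names, no real sites involved.
import hashlib
import hmac
import struct
import time

SECRET = b"shared-token-seed"   # hypothetical seed shared by token and bank
TOKEN_WINDOW = 30               # seconds each one-time code stays valid

def current_code(secret: bytes, t: float) -> str:
    """TOTP-style six-digit code derived from the current 30-second window."""
    counter = int(t // TOKEN_WINDOW)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    return f"{int.from_bytes(digest[-4:], 'big') % 1_000_000:06d}"

def real_bank_login(username: str, password: str, code: str) -> bool:
    """The genuine site: accepts any code valid in the current window."""
    return password == "correct-horse" and code == current_code(SECRET, time.time())

def phishing_relay(username: str, password: str, code: str) -> bool:
    """The man in the middle: immediately forwards whatever the victim typed.
    The code is still inside its validity window, so the real site accepts it,
    and the relay can mirror the real site's response (including errors) back."""
    return real_bank_login(username, password, code)

# The victim "logs in" at the fake site with a perfectly fresh one-time code...
victim_code = current_code(SECRET, time.time())
print(phishing_relay("alice", "correct-horse", victim_code))  # True: token defeated
```

The token only proves that the code was generated recently; it says nothing about whether it was typed into the right site.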

Posted on July 12, 2006 at 7:31 AM

Interview with a Debit Card Scammer

Podcast:

We discuss credit card data centers getting hacked; why banks getting hacked doesn’t make mainstream media; reissuing bank cards; how much he makes cashing out bank cards; how banks cover money stolen from credit cards; why companies are not cracking down on credit card crimes; how to prevent credit card theft; ATM scams; being “legit” in the criminal world; how he gets cash out gigs; getting PINs and encoding blank credit cards; how much money he can pull in a day; e-gold; his chances of getting caught; the best day to hit the ATMs; encrypting ICQ messages.

Posted on June 5, 2006 at 6:23 AM

Aligning Interest with Capability

Have you ever been to a retail store and seen this sign on the register: “Your purchase free if you don’t get a receipt”? You almost certainly didn’t see it in an expensive or high-end store. You saw it in a convenience store, or a fast-food restaurant. Or maybe a liquor store. That sign is a security device, and a clever one at that. And it illustrates a very important rule about security: it works best when you align interests with capability.

If you’re a store owner, one of your security worries is employee theft. Your employees handle cash all day, and dishonest ones will pocket some of it for themselves. The history of the cash register is mostly a history of preventing this kind of theft. Early cash registers were just boxes with a bell attached. The bell rang when an employee opened the box, alerting the store owner—who was presumably elsewhere in the store—that an employee was handling money.

The register tape was an important development in security against employee theft. Every transaction is recorded in write-only media, in such a way that it’s impossible to insert or delete transactions. It’s an audit trail. Using that audit trail, the store owner can count the cash in the drawer and compare the amount with what the register recorded. Any discrepancies can be docked from the employee’s paycheck.

If you’re a dishonest employee, you have to keep transactions off the register. If someone hands you money for an item and walks out, you can pocket that money without anyone being the wiser. And, in fact, that’s how employees steal cash in retail stores.

What can the store owner do? He can stand there and watch the employee, of course. But that’s not very efficient; the whole point of having employees is so that the store owner can do other things. The customer is standing there anyway, but the customer doesn’t care one way or another about a receipt.

So here’s what the employer does: he hires the customer. By putting up a sign saying “Your purchase free if you don’t get a receipt,” the employer is getting the customer to guard the employee. The customer makes sure the employee gives him a receipt, and employee theft is reduced accordingly.

There is a general rule in security to align interest with capability. The customer has the capability of watching the employee; the sign gives him the interest.

In Beyond Fear I wrote about ATM fraud; you can see the same mechanism at work:

“When ATM cardholders in the US complained about phantom withdrawals from their accounts, the courts generally held that the banks had to prove fraud. Hence, the banks’ agenda was to improve security and keep fraud low, because they paid the costs of any fraud. In the UK, the reverse was true: The courts generally sided with the banks and assumed that any attempts to repudiate withdrawals were cardholder fraud, and the cardholder had to prove otherwise. This caused the banks to have the opposite agenda; they didn’t care about improving security, because they were content to blame the problems on the customers and send them to jail for complaining. The result was that in the US, the banks improved ATM security to forestall additional losses—most of the fraud actually was not the cardholder’s fault—while in the UK, the banks did nothing.”

The banks had the capability to improve security. In the US, they also had the interest. But in the UK, only the customer had the interest. It wasn’t until the UK courts reversed themselves and aligned interest with capability that ATM security improved.

Computer security is no different. For years I have argued in favor of software liabilities. Software vendors are in the best position to improve software security; they have the capability. But, unfortunately, they don’t have much interest. Features, schedule, and profitability are far more important. Software liabilities will change that. They’ll align interest with capability, and they’ll improve software security.

One last story… In Italy, tax fraud used to be a national hobby. (It may still be; I don’t know.) The government was tired of retail stores not reporting sales and paying taxes, so it passed a law regulating the customers. Any customer who had just purchased an item and was stopped within a certain distance of the store had to produce a receipt or be fined. Just as in the “Your purchase free if you don’t get a receipt” story, the law turned the customers into tax inspectors. They demanded receipts from merchants, which in turn forced the merchants to create a paper audit trail for the purchase and pay the required tax.

This was a great idea, but it didn’t work very well. Customers, especially tourists, didn’t like being stopped by police. People started demanding that the police prove they had just purchased an item. Threatening people with fines if they didn’t guard merchants wasn’t as effective an enticement as offering people a reward if they didn’t get a receipt.

Interest must be aligned with capability, but you need to be careful how you generate interest.

This essay originally appeared on Wired.com.

Posted on June 1, 2006 at 6:27 AM

Triple-DES Upgrade Adding Insecurities?

It’s a provocative headline: “Triple DES Upgrades May Introduce New ATM Vulnerabilities.” Basically, at the same time banks are upgrading their ATM encryption to triple-DES, they’re also moving the communications links from dedicated lines to the Internet. And while the protocol encrypts PINs, it doesn’t encrypt any of the other information, such as card numbers and expiration dates.

So it’s the move from dedicated lines to the Internet that’s adding the insecurities.
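
Here’s a hypothetical sketch of the structural problem. The field names, the key, and the pycryptodome dependency are all my assumptions, not the real ATM message format; the point is simply that if only the PIN block is triple-DES encrypted, everything else in the message is readable to anyone who can see the traffic—and far more people can see Internet traffic than a dedicated line.

```python
# Illustrative only: invented field names and key, not the real ATM protocol.
from Crypto.Cipher import DES3   # pip install pycryptodome (assumed dependency)

ZONE_PIN_KEY = DES3.adjust_key_parity(bytes(range(24)))  # placeholder 3DES key

def build_withdrawal_request(pan: str, expiry: str, pin_block: bytes) -> dict:
    """Only the 8-byte PIN block gets triple-DES protection in this sketch."""
    cipher = DES3.new(ZONE_PIN_KEY, DES3.MODE_ECB)
    return {
        "pan": pan,          # cleartext: card number
        "expiry": expiry,    # cleartext: expiration date
        "pin_block": cipher.encrypt(pin_block).hex(),  # the only protected field
    }

request = build_withdrawal_request("4111111111111111", "0906",
                                    b"\x04\x12\x34\xff\xff\xff\xff\xff")
print(request)  # an eavesdropper on the link sees everything except the PIN
```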

Posted on April 17, 2006 at 6:48 AM

Credit Card Companies and Agenda

This has been making the rounds on the Internet. Basically, a guy tears up a credit card application, tapes it back together, fills it out with someone else’s address and a different phone number, and sends it in. He still gets a credit card.

That’s why this is bad: imagine a fraudster rummaging through your trash, finding a torn-up credit card application, and doing exactly the same thing.

To understand why it’s happening, you need to understand the trade-offs and the agenda. From the point of view of the credit card company, the benefit of giving someone a credit card is that he’ll use it and generate revenue. The risk is that it’s a fraudster who will cost the company revenue. The credit card industry has dealt with the risk in two ways: they’ve pushed a lot of the risk onto the merchants, and they’ve implemented fraud detection systems to limit the damage.

All other costs and problems of identity theft are borne by the consumer; they’re an externality to the credit card company. They don’t enter into the trade-off decision at all.

We can laugh at this kind of thing all day, but it’s actually in the best interests of the credit card industry to mail cards in response to torn-up and taped-together applications without doing much checking of the address or phone number. If we want that to change, we need to fix the externality.

Posted on March 13, 2006 at 2:18 PM

Data Mining for Terrorists

In the post 9/11 world, there’s much focus on connecting the dots. Many believe that data mining is the crystal ball that will enable us to uncover future terrorist plots. But even in the most wildly optimistic projections, data mining isn’t tenable for that purpose. We’re not trading privacy for security; we’re giving up privacy and getting no security in return.

Most people first learned about data mining in November 2002, when news broke about a massive government data mining program called Total Information Awareness. The basic idea was as audacious as it was repellent: suck up as much data as possible about everyone, sift through it with massive computers, and investigate patterns that might indicate terrorist plots. Americans across the political spectrum denounced the program, and in September 2003, Congress eliminated its funding and closed its offices.

But TIA didn’t die. According to The National Journal, it just changed its name and moved inside the Defense Department.

This shouldn’t be a surprise. In May 2004, the General Accounting Office published a report that listed 122 different federal government data mining programs that used people’s personal information. This list didn’t include classified programs, like the NSA’s eavesdropping effort, or state-run programs like MATRIX.

The promise of data mining is compelling, and convinces many. But it’s wrong. We’re not going to find terrorist plots through systems like this, and we’re going to waste valuable resources chasing down false alarms. To understand why, we have to look at the economics of the system.

Security is always a trade-off, and for a system to be worthwhile, the advantages have to be greater than the disadvantages. A national security data mining program is going to find some percentage of real attacks, and some percentage of false alarms. If the benefits of finding and stopping those attacks outweigh the cost—in money, liberties, etc.—then the system is a good one. If not, then you’d be better off spending that cost elsewhere.

Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms. Credit card fraud is one of data mining’s success stories: all credit card companies data mine their transaction databases, looking for spending patterns that indicate a stolen card. Many credit card thieves share a pattern—purchase expensive luxury goods, purchase things that can be easily fenced, etc.—and data mining systems can minimize the losses in many cases by shutting down the card. In addition, the cost of false alarms is only a phone call to the cardholder asking him to verify a couple of purchases. The cardholders don’t even resent these phone calls—as long as they’re infrequent—so the cost is just a few minutes of operator time.

Terrorist plots are different. There is no well-defined profile, and attacks are very rare. Taken together, these facts mean that data mining systems won’t uncover any terrorist plots until they are very accurate, and that even very accurate systems will be so flooded with false alarms that they will be useless.

All data mining systems fail in two different ways: false positives and false negatives. A false positive is when the system identifies a terrorist plot that really isn’t one. A false negative is when the system misses an actual terrorist plot. Depending on how you “tune” your detection algorithms, you can err on one side or the other: you can increase the number of false positives to ensure that you are less likely to miss an actual terrorist plot, or you can reduce the number of false positives at the expense of missing terrorist plots.

To reduce both those numbers, you need a well-defined profile. And that’s a problem when it comes to terrorism. In hindsight, it was really easy to connect the 9/11 dots and point to the warning signs, but it’s much harder before the fact. Certainly, there are common warning signs that many terrorist plots share, but each is unique, as well. The better you can define what you’re looking for, the better your results will be. Data mining for terrorist plots is going to be sloppy, and it’s going to be hard to find anything useful.

Data mining is like searching for a needle in a haystack. There are 900 million credit cards in circulation in the United States. According to the FTC September 2003 Identity Theft Survey Report, about 1% of them (10 million) are stolen and fraudulently used each year. Terrorism is different. There are trillions of connections between people and events—things that the data mining system will have to “look at”—and very few plots. This rarity makes even accurate identification systems useless.

Let’s look at some numbers. We’ll be optimistic. We’ll assume the system has a 1 in 100 false positive rate (99% accurate), and a 1 in 1,000 false negative rate (99.9% accurate).

Assume one trillion possible indicators to sift through: that’s about ten events—e-mails, phone calls, purchases, web surfings, whatever—per person in the U.S. per day. Also assume that 10 of them are actually terrorists plotting.

This unrealistically-accurate system will generate one billion false alarms for every real terrorist plot it uncovers. Every day of every year, the police will have to investigate 27 million potential plots in order to find the one real terrorist plot per month. Raise that false-positive accuracy to an absurd 99.9999% and you’re still chasing 2,750 false alarms per day—but that will inevitably raise your false negatives, and you’re going to miss some of those ten real plots.
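
As a sanity check, here’s that arithmetic as a few lines of Python, using exactly the assumptions stated above: one trillion events per year, ten real plots, a 1-in-100 false positive rate, and a 1-in-1,000 false negative rate. The numbers are illustrative, not a model of any real system.

```python
# Back-of-the-envelope base-rate arithmetic, under the stated assumptions.
events_per_year = 1_000_000_000_000  # ~10 events/person/day for ~300M people
real_plots = 10
false_positive_rate = 0.01           # 1 in 100 innocent events flagged anyway
false_negative_rate = 0.001          # 1 in 1,000 real plots missed

false_alarms = (events_per_year - real_plots) * false_positive_rate
plots_found = real_plots * (1 - false_negative_rate)

print(f"false alarms per year: {false_alarms:,.0f}")                      # ~10 billion
print(f"false alarms per day: {false_alarms / 365:,.0f}")                 # ~27 million
print(f"false alarms per plot found: {false_alarms / plots_found:,.0f}")  # ~1 billion

# Even at an absurd one-in-a-million false positive rate, the haystack wins:
print(f"alarms per day at 99.9999%: {events_per_year * 1e-6 / 365:,.0f}")  # ~2,700
```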

This isn’t anything new. In statistics, it’s called the “base rate fallacy,” and it applies in other domains as well. For example, even highly accurate medical tests are useless as diagnostic tools if the incidence of the disease is rare in the general population. Terrorist attacks are also rare, so any “test” is going to result in an endless stream of false alarms.

This is exactly the sort of thing we saw with the NSA’s eavesdropping program: the New York Times reported that the computers spat out thousands of tips per month. Every one of them turned out to be a false alarm.

And the cost was enormous: not just the cost of the FBI agents running around chasing dead-end leads instead of doing things that might actually make us safer, but also the cost in civil liberties. The fundamental freedoms that make our country the envy of the world are valuable, and not something that we should throw away lightly.

Data mining can work. It helps Visa keep the costs of fraud down, just as it helps Amazon.com show me books that I might want to buy, and Google show me advertising I’m more likely to be interested in. But these are all instances where the cost of false positives is low—a phone call from a Visa operator, or an uninteresting ad—and in systems that have value even if there is a high number of false negatives.

Finding terrorism plots is not a problem that lends itself to data mining. It’s a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.

This essay originally appeared on Wired.com.

Posted on March 9, 2006 at 7:44 AM

