Entries Tagged "banking"


Future of Malware

Excellent three-part series on trends in criminal malware:

When Jackson logged in, the genius of 76service became immediately clear. 76service customers weren’t paying for already-stolen credentials. Instead, 76service sold subscriptions or “projects” to Gozi-infected machines. Usually, projects were sold in 30-day increments because that’s a billing cycle: enough time to guarantee that the person who owns the infected machine will have logged in to manage their finances, entering data into forms that could be grabbed.

Subscribers could log in with their assigned user name and password any time during the 30-day project. They’d be met with a screen that told them which of their bots was currently active, and a side bar of management options. For example, they could pull down the latest drops—data deposits that the Gozi-infected machines they subscribed to sent to the servers, like the 3.3 GB one Jackson had found.

A project was like an investment portfolio. Individual Gozi-infected machines were like stocks and subscribers bought a group of them, betting they could gain enough personal information from their portfolio of infected machines to make a profit, mostly by turning around and selling credentials on the black market. (In some cases, subscribers would use a few of the credentials themselves).

Some machines, like some stocks, would underperform and provide little private information. But others would land the subscriber a windfall of private data. The point was to subscribe to several infected machines to balance that risk, the way Wall Street fund managers invest in many stocks to offset losses in one company with gains in another.

[…]

That’s why the subscription prices were steep. “Prices started at $1,000 per machine per project,” says Jackson. With some tinkering and thanks to some loose database configuration, Jackson gained a view into other people’s accounts. He mostly saw subscriptions that bought access to only a handful of machines, rarely more than a dozen.

The $1K figure was for “fresh bots”—new infections that hadn’t been part of a project yet. Used bots that were coming off an expired project were available, but worth less (and thus, cost less) because of the increased likelihood that personal information gained from that machine had already been sold. Customers were urged to act quickly to get the freshest bots available.

This was another advantage for the seller. Providing the self-service interface freed up the sellers to create ancillary services. 76service was extremely customer-focused. “They were there to give you services that made it a good experience,” Jackson says. You want us to clean up the reports for you? Sure, for a small fee. You want a report on all the credentials from one bank in your drop? Hundred bucks, please. For another $150 a month, we’ll create secure remote drops for you. Alternative packaging and delivery options? We can do that. Nickel and dime. Nickel and dime.

And about banks not caring:

As much as the HangUp Team has relied on distributed pain for its success, financial institutions have relied on transferred risk to keep the Internet crime problem from becoming a consumer cause and damaging their businesses. So far, it has been cheaper to follow regulations enough to pass audits and then pay for the fraud rather than implement more serious security. “If you look at the volume of loss versus revenue, it’s not horribly bad yet,” says Chris Hoff, with a nod to the criminal hacker’s strategy of distributed pain. “The banks say, ‘Regulations say I need to do these seven things, so I do them and let’s hope the technology to defend against this catches up.'”

“John,” the security executive at the bank and one of the few security professionals from financial services who agreed to speak for this story, says, “If you audited a financial institution, you wouldn’t find many out of compliance. From a legal perspective, banks can spin that around and say there’s nothing else we could do.”

The banks know how much data Lance James at Secure Science is monitoring; some of them are his clients. The researcher with expertise on the HangUp Team calls consumers’ ability to transfer funds online “the dumbest thing I’ve ever seen. You can’t walk into the branch of a bank with a mask on and no ID and make a transfer. So why is it okay online?”

And yet banks push online banking to customers with one hand while the other hand pushes problems like Gozi away, into acceptable loss budgets and insurance—transferred risk.

As long as consumers don’t raise a fuss, and thus far they haven’t in any meaningful way, the banks have little to fear from their strategies.

But perhaps the only reason consumers don’t raise a fuss is because the banks have both overstated the safety and security of online banking and downplayed negative events around it, like the existence of Gozi and 76service.

The whole thing is worth reading.

Posted on October 17, 2007 at 1:07 PM

UK Police Can Now Demand Encryption Keys

Under a new law that went into effect this month, it is now a crime to refuse to turn a decryption key over to the police.

I’m not sure of the point of this law. Certainly it will have the effect of spooking businesses, who now have to worry about the police demanding their encryption keys and exposing their entire operations.

Cambridge University security expert Richard Clayton said in May of 2006 that such laws would only encourage businesses to house their cryptography operations out of the reach of UK investigators, potentially harming the country’s economy. “The controversy here [lies in] seizing keys, not in forcing people to decrypt. The power to seize encryption keys is spooking big business,” Clayton said.

“The notion that international bankers would be wary of bringing master keys into UK if they could be seized as part of legitimate police operations, or by a corrupt chief constable, has quite a lot of traction,” he added. “With the appropriate paperwork, keys can be seized. If you’re an international banker you’ll plonk your headquarters in Zurich.”

But if you’re guilty of something that can only be proved by the decrypted data, you might be better off refusing to divulge the key (and facing the maximum five-year penalty the statute provides) instead of being convicted for whatever more serious charge you’re actually guilty of.

I think this is just another skirmish in the “war on encryption” that has been going on for the past fifteen years. (Anyone remember the Clipper chip?) The police have long maintained that encryption is an insurmountable obstacle to law and order:

The Home Office has steadfastly proclaimed that the law is aimed at catching terrorists, pedophiles, and hardened criminals—all parties that the UK government contends are rather adept at using encryption to cover up their activities.

We heard the same thing from FBI Director Louis Freeh in 1993. I called them “The Four Horsemen of the Information Apocalypse”—terrorists, drug dealers, kidnappers, and child pornographers—and they have been used to justify all sorts of new police powers.

Posted on October 11, 2007 at 6:40 AM

Credit Card Gas Limits

Here’s an interesting phenomenon: rising gas costs have pushed up a lot of legitimate transactions to the “anti-fraud” ceiling.

Security is a trade-off, and now the ceiling is annoying more and more legitimate gas purchasers. But to me the real question is: does this ceiling have any actual security purpose?

In general, credit card fraudsters like making gas purchases because the system is automated: no signature is required, and there’s no need to interact with any other person. In fact, buying gas is the most common way a fraudster tests that a recently stolen card is valid. The anti-fraud ceiling doesn’t actually prevent any of this, but limits the amount of money at risk.
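The ceiling, in other words, is not fraud detection at all; it is just a cap on what an automated fuel authorization will approve. A minimal sketch of the idea (the $75 figure and the names are illustrative assumptions, not Visa's actual rules):

```python
# Hypothetical anti-fraud ceiling at an automated gas pump. The $75
# limit and the function name are assumptions for illustration.
GAS_PUMP_CEILING = 75.00  # maximum dollars approved per unattended fuel sale

def authorize_fuel_sale(requested: float) -> float:
    """Approve at most the ceiling: a stolen card still works,
    but the amount at risk per transaction is bounded."""
    return min(requested, GAS_PUMP_CEILING)
```

The pump simply shuts off once the approved amount is dispensed; nothing in this mechanism distinguishes a fraudster from a legitimate driver with a big tank.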

But so what? How many perps are actually trying to get more gas than is permitted? Are credit-card-stealing miscreants also swiping cars with enormous gas tanks, or merely filling up the passenger cars they regularly drive? I’d love to know how many times, prior to the run-up in gas prices, a triggered cutoff actually coincided with a subsequent report of a stolen card. And what’s the effect of a ceiling, apart from a gas shut-off? Surely the smart criminals know about smurfing, if they need more gas than the ceiling will allow.

The Visa spokesperson said, “We get more calls, questions, when gas prices increase.” The spokesperson did not say: “We make more calls to see if fraud is occurring.” So the only inquiries made may be in the cases where fraud isn’t occurring.

Posted on June 26, 2007 at 1:21 PM

Does Secrecy Help Protect Personal Information?

Personal information protection is an economic problem, not a security problem. And the problem can be easily explained: The organizations we trust to protect our personal information do not suffer when information gets exposed. On the other hand, individuals who suffer when personal information is exposed don’t have the capability to protect that information.

There are actually two problems here: Personal information is easy to steal, and it’s valuable once stolen. We can’t solve one problem without solving the other. The solutions aren’t easy, and you’re not going to like them.

First, fix the economic problem. Credit card companies make more money extending easy credit and making it trivial for customers to use their cards than they lose from fraud. They won’t improve their security as long as you (and not they) are the one who suffers from identity theft. It’s the same for banks and brokerages: As long as you’re the one who suffers when your account is hacked, they don’t have any incentive to fix the problem. And data brokers like ChoicePoint are worse; they don’t suffer if they reveal your information. You don’t have a business relationship with them; you can’t even switch to a competitor in disgust.

Credit card security works as well as it does because the 1968 Truth in Lending Law limits consumer liability for fraud to $50. If the credit card companies could pass fraud losses on to the consumers, they would be spending far less money to stop those losses. But once Congress forced them to suffer the costs of fraud, they invented all sorts of security measures—real-time transaction verification, expert systems patrolling the transaction database and so on—to prevent fraud. The lesson is clear: Make the party in the best position to mitigate the risk responsible for the risk. What this will do is enable the capitalist innovation engine. Once it’s in the financial interest of financial institutions to protect us from identity theft, they will.

Second, stop using personal information to authenticate people. Watch how credit cards work. Notice that the store clerk barely looks at your signature, or how you can use credit cards remotely where no one can check your signature. The credit card industry learned decades ago that authenticating people has only limited value. Instead, they put most of their effort into authenticating the transaction, and they’re much more secure because of it.
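Authenticating the transaction instead of the person amounts to scoring each purchase against the account's own history. A minimal sketch of the idea (the thresholds and fields are assumptions for illustration, not any card network's actual rules):

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float   # purchase amount in dollars
    country: str    # where the purchase was made

def is_suspicious(txn: Transaction, history: list) -> bool:
    """Flag a transaction that deviates from this account's own pattern,
    without trying to authenticate the person at all."""
    if not history:
        # No baseline yet: flag only unusually large first purchases.
        return txn.amount > 500
    usual_max = max(t.amount for t in history)
    seen_countries = {t.country for t in history}
    return txn.amount > 3 * usual_max or txn.country not in seen_countries
```

Note that nothing here depends on a signature, a password, or any secret the cardholder knows; the purchase pattern itself is what gets verified.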

This won’t solve the problem of securing our personal information, but it will greatly reduce the threat. Once the information is no longer of value, you only have to worry about securing the information from voyeurs rather than the more common—and more financially motivated—fraudsters.

And third, fix the other economic problem: Organizations that expose our personal information aren’t hurt by that exposure. We need a comprehensive privacy law that gives individuals ownership of their personal information and allows them to take action against organizations that don’t care for it properly.

“Passwords” like credit card numbers and mother’s maiden name used to work, but we’ve forever left the world where our privacy comes from the obscurity of our personal information and the difficulty others have in accessing it. We need to abandon security systems that are based on obscurity and difficulty, and build legal protections to take over where technological advances have left us exposed.

This essay appeared in the January issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 14, 2007 at 12:24 PM

Social Engineering Notes

This is a fantastic story of a major prank pulled off at the Super Bowl this year. Basically, five people smuggled more than a quarter of a ton of material into Dolphin Stadium in order to display their secret message on TV. A summary:

Just days after the Boston bomb scare, another team of Boston-based pranksters smuggled and distributed 2,350 suspicious light-up devices into the Super Bowl. Due to its attractiveness as a terrorist target, Dolphin Stadium was on a Level One security alert, a level usually reserved for Presidential inaugurations. By posing as media reporters, the pranksters were able to navigate 95 boxes through federal marshals, Homeland Security agents, bomb squads, police dogs, and a five-ton X-ray crane.

Given all the security, it’s amazing how easy it was for them to become part of the security perimeter with all that random stuff. But to those of us who follow this sort of thing, it shouldn’t be. His observations are spot on:

1. Wear a suit.
2. Wear a Bluetooth headset.
3. Pretend to be talking loudly to someone on the other line.
4. Carry a clipboard.
5. Be white.

Again, no surprise here. But it makes you wonder what’s the point of annoying the hell out of ordinary citizens with security measures (like pat down searches) when the emperor has no clothes.

Someone who crashed the Oscars last year gave similar advice:

Show up at the theater, dressed as a chef carrying a live lobster, looking really concerned.

On a much smaller scale, here’s someone’s story of social engineering a bank branch:

I enter the first branch at approximately 9:00AM. Dressed in Dickies coveralls, a baseball cap, work boots and sunglasses I approach the young lady at the front desk.

“Hello,” I say. “John Doe with XYZ Pest Control, here to perform your pest inspection.” I flash her the smile followed by the credentials. She looks at me for a moment, goes “Uhm… okay… let me check with the branch manager…” and picks up the phone. I stand around twiddling my thumbs and wait while the manager is contacted and confirmation is made. If all goes according to plan, the fake emails I sent out last week notifying branch managers of our inspection will allow me access.

It does.

Social engineering is surprisingly easy. As I said in Beyond Fear (page 144):

Social engineering will probably always work, because so many people are by nature helpful and so many corporate employees are naturally cheerful and accommodating. Attacks are rare, and most people asking for information or help are legitimate. By appealing to the victim’s natural tendencies, the attacker will usually be able to cozen what she wants.

All it takes is a good cover story.

EDITED TO ADD (4/20): The first commenter suggested that the Zug story is a hoax. I think he makes a good argument, and I have no evidence to refute it. Does anyone know for sure?

EDITED TO ADD (4/21): Wired concludes that the Super Bowl stunt happened, but that no one noticed. Engadget is leaning toward hoax.

Posted on April 20, 2007 at 6:41 AM

Foiling Bank Robbers with Kindness

Seems to work:

The method is a sharp contrast to the traditional training for bank employees confronted with a suspicious person, which advises employees not to approach the person and, at most, to activate an alarm or drop an exploding dye pack into the cash.

When a man walked into a First Mutual branch last year wearing garden gloves and sunglasses, manager Scott Taffera greeted him heartily, invited him to remove the glasses, and guided him to an equally friendly teller. The man eventually asked for a roll of quarters and left.

Carr said he suspects the man was the “Garden Glove Bandit,” who robbed area banks between March 2004 and November 2006.

What I like about this security system is that it fails really well in the event of a false alarm. There’s nothing wrong with being extra nice to a legitimate customer.

Posted on April 18, 2007 at 6:24 AM

Bank Botches Two-Factor Authentication

From their press release:

The computer was protected by two layers of security, a unique user-identifier and a multiple-character, alpha-numeric password.

Um, hello? Having a username and a password—even if they’re both secret—does not count as two factors, two layers, or two of anything. You need two different authentication systems: a password and a biometric, or a password and a token.
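A real second factor comes from a different category: something you have rather than something you know. A minimal sketch pairing a salted password check (factor one) with an RFC 6238 time-based one-time code from a token (factor two); the iteration count and parameter names here are illustrative assumptions:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step."""
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authenticate(password: str, otp: str, stored_hash: bytes,
                 salt: bytes, token_secret: bytes, now=None) -> bool:
    # Factor one: something you know (password, stored as a salted hash).
    know = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
        stored_hash)
    # Factor two: something you have (a token generating time-based codes).
    have = hmac.compare_digest(totp(token_secret, now), otp)
    return know and have
```

The two checks fail independently: stealing the password alone, or the token alone, is not enough. That independence is what makes it two factors rather than two passwords.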

I wouldn’t trust the New Horizons Community Credit Union with my money.

Posted on April 13, 2007 at 7:33 AM

