Entries Tagged "fraud"

Fraudulent Australian Census Takers

In Australia, criminals are posing as census takers and harvesting personal data for fraudulent purposes.

EDITED TO ADD (8/21): I didn’t notice that this link is from 2001. Sorry about missing that, but it actually makes the story more interesting. This is the sort of identity-theft tactic that I would have expected to see this year, as criminals have gotten more and more sophisticated. It surprises me that they were doing this five years ago as well.

Posted on August 21, 2006 at 6:24 AM

Stealing Credit Card Information off Phone Lines

Here’s a sophisticated credit card fraud ring that intercepted credit card authorization calls in Phuket, Thailand.

The fraudsters loaded this data onto MP3 players, which they sent to accomplices in neighbouring Malaysia. Cloned credit cards were manufactured in Malaysia and sent back to Thailand, where they were used to fraudulently purchase goods and services.

It’s 2006 and those merchant terminals still don’t encrypt their communications?

Posted on August 15, 2006 at 6:19 AM

HSBC Insecurity Hype

The Guardian has the story:

One of Britain’s biggest high street banks has left millions of online bank accounts exposed to potential fraud because of a glaring security loophole, the Guardian has learned.

The defect in HSBC’s online banking system means that 3.1 million UK customers registered to use the service have been vulnerable to attack for at least two years. One computing expert called the lapse “scandalous”.

The discovery was made by a group of researchers at Cardiff University, who found that anyone exploiting the flaw was guaranteed to be able to break into any account within nine attempts.

Sounds pretty bad.

But look at this:

The flaw, which is not being detailed by the Guardian, revolves around the way HSBC customers access their web-based banking service. Criminals using so-called “keyloggers” – readily available gadgets or viruses which record every keystroke made on a target computer – can easily deduce the data needed to gain unfettered access to accounts in just a few attempts.

So, the “scandalous” flaw is that an attacker who already has a keylogger installed on someone’s computer can break into his HSBC account. Seems to me if an attacker has a keylogger installed on someone’s computer, then he’s got all sorts of security issues.

If this is the biggest flaw in HSBC’s login authentication system, I think they’re doing pretty good.

Posted on August 14, 2006 at 7:06 AM

Technological Arbitrage

This is interesting. Seems that a group of Sri Lankan credit card thieves collected the data off a bunch of UK chip-protected credit cards.

All new credit cards in the UK come embedded with RFID chips that contain different pieces of user information. In order to access the account and withdraw cash, the ATM has to verify both the magnetic strip and the RFID tag. Without this double verification the ATM will confiscate the card, and possibly even notify the police.

They’re not RFID chips, they’re normal smart card chips that require physical contact—but that’s not the point.

They couldn’t clone the chips, so they took the information off the magnetic stripe and made non-chip cards. These cards wouldn’t work in the UK, of course, so the criminals flew down to India where the ATMs only verify the magnetic stripe.

Backwards compatibility is often incompatible with security. This is a good example, and demonstrates how criminals can make use of “technological arbitrage” to leverage compatibility.
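The gap the fraudsters exploited can be sketched as a fallback rule: a terminal that verifies only the magnetic stripe will honor a card cloned from skimmed stripe data, while a chip-verifying terminal rejects it. A minimal illustration, with hypothetical function and field names (real EMV verification is far more involved):

```python
# Sketch of the verification gap behind the "technological arbitrage."
# Hypothetical data model: a card record carries stripe data and,
# optionally, chip data that a cloner cannot reproduce.

def chip_terminal_accepts(card):
    """UK-style terminal: requires both a valid stripe and a valid chip."""
    return card.get("stripe_valid", False) and card.get("chip_valid", False)

def stripe_only_terminal_accepts(card):
    """Legacy terminal: checks only the magnetic stripe."""
    return card.get("stripe_valid", False)

# A clone made from skimmed stripe data has no working chip.
cloned = {"stripe_valid": True, "chip_valid": False}

assert not chip_terminal_accepts(cloned)     # rejected by chip-verifying ATMs
assert stripe_only_terminal_accepts(cloned)  # accepted where only the stripe is read
```

The security of the chip is irrelevant wherever the older, stripe-only verification path is still accepted, which is exactly the backwards-compatibility problem described above.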

EDITED TO ADD (8/9): Facts corrected above.

Posted on August 9, 2006 at 6:32 AM

Bot Networks

What could you do if you controlled a network of thousands of computers—or, at least, could use the spare processor cycles on those machines? You could perform massively parallel computations: model nuclear explosions or global weather patterns, factor large numbers or find Mersenne primes, or break cryptographic codes.

All of these are legitimate applications. And you can visit distributed.net and download software that allows you to donate your spare computer cycles to some of these projects. (You can help search for Optimal Golomb Rulers—even if you have no idea what they are.) You’ve got a lot of cycles to spare. There’s no reason that your computer can’t help search for extraterrestrial life as it, for example, sits idly waiting for you to read this essay.

The reason these things work is that they are consensual; none of these projects download software onto your computer without your knowledge. None of these projects control your computer without your consent. But there are lots of software programs that do just that.

The term used for a computer remotely controlled by someone else is a “bot”. A group of computers—thousands or even millions—controlled by someone else is a bot network. Estimates are that millions of computers on the internet today are part of bot networks, and the largest bot networks have over 1.5 million machines.

Initially, bot networks were used for just one thing: denial-of-service attacks. Hackers would use them against each other, fighting hacker feuds in cyberspace by attacking each other’s computers. The first widely publicized use of a distributed intruder tool—technically not a botnet, but practically the same thing—was in February 2000, when Canadian hacker Mafiaboy directed an army of compromised computers to flood CNN.com, Amazon.com, eBay, Dell Computer and other sites with debilitating volumes of traffic. Every newspaper carried that story.

These days, bot networks are more likely to be controlled by criminals than by hackers. The important difference is the motive: profit. Networks are being used to send phishing e-mails and other spam. They’re being used for click fraud. They’re being used as an extortion tool: Pay up or we’ll DDoS you!

Mostly, they’re being used to collect personal data for fraud—commonly called “identity theft.” Modern bot software doesn’t just attack other computers; it attacks its hosts as well. The malware is packed with keystroke loggers to steal passwords and account numbers. In fact, many bots automatically hunt for financial information, and some botnets have been built solely for this purpose—to gather credit card numbers, online banking passwords, PayPal accounts, and so on, from compromised hosts.

Swindlers are also using bot networks for click fraud. Google’s anti-fraud systems are sophisticated enough to detect thousands of clicks by one computer; it’s much harder to determine if a single click by each of thousands of computers is fraud, or just popularity.
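The asymmetry in that paragraph can be sketched with a toy detector: counting clicks per source catches one machine clicking thousands of times, but a botnet that spreads the same clicks across thousands of machines stays under any per-source threshold. (The threshold and data here are hypothetical; real anti-fraud systems use many more signals than source address.)

```python
from collections import Counter

def flag_suspicious_sources(clicks, threshold=10):
    """Flag any source IP whose click count exceeds the threshold."""
    counts = Counter(ip for ip, ad in clicks)
    return {ip for ip, n in counts.items() if n > threshold}

# One machine clicking 5,000 times is trivially caught...
single = [("10.0.0.1", "ad-42")] * 5000
assert flag_suspicious_sources(single) == {"10.0.0.1"}

# ...but 5,000 bots clicking once each look like ordinary popularity.
botnet = [(f"10.{i // 256}.{i % 256}.1", "ad-42") for i in range(5000)]
assert flag_suspicious_sources(botnet) == set()
```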

And, of course, most bots constantly search for other computers that can be infected and added to the bot network. (A 1.5 million-node bot network was discovered in the Netherlands last year. The command-and-control system was dismantled, but some of the bots are still active, infecting other computers and adding them to this defunct network.)

Modern bot networks are remotely upgradeable, so the operators can add new functionality to the bots at any time, or switch from one bot program to another. Bot authors regularly upgrade their botnets during development, or to evade detection by anti-virus and malware cleanup tools.

One application of bot networks that we haven’t seen all that much of is to launch a fast-spreading worm. (Some believe the Witty worm spread this way.) Much has been written about “flash worms” that can saturate the internet in 15 minutes or less. The situation gets even worse if 10 thousand bots synchronize their watches and release the worm at exactly the same time. Why haven’t we seen more of this? My guess is because there isn’t any profit in it.

There’s no real solution to the botnet problem, because there’s no single problem. There are many different bot networks, controlled in many different ways, consisting of computers infected through many different vulnerabilities. Really, a bot network is nothing more than an attacker taking advantage of 1) one or more software vulnerabilities, and 2) the economies of scale that computer networks bring. It’s the same thing as distributed.net or SETI@home, only the attacker doesn’t ask your permission first.

As long as networked computers have vulnerabilities—and that’ll be for the foreseeable future—there’ll be bot networks. It’s a natural side-effect of a computer network with bugs.

This essay originally appeared on Wired.com.

EDITED TO ADD (7/27): DDOS extortion is a bigger problem than you might think. Right now it’s primarily targeted against fringe industries—online gaming, online gambling, online porn—located offshore, but we’re seeing more and more of it against mainstream companies in the U.S. and Europe.

EDITED TO ADD (7/27): Seems that Witty was definitely not seeded from a bot network.

Posted on July 27, 2006 at 6:35 AM

Sky Marshals Name Innocents to Meet Quota

One news source is reporting that sky marshals are reporting on innocent people in order to meet a quota:

The air marshals, whose identities are being concealed, told 7NEWS that they’re required to submit at least one report a month. If they don’t, there’s no raise, no bonus, no awards and no special assignments.

“Innocent passengers are being entered into an international intelligence database as suspicious persons, acting in a suspicious manner on an aircraft … and they did nothing wrong,” said one federal air marshal.

[…]

These unknowing passengers who are doing nothing wrong are landing in a secret government document called a Surveillance Detection Report, or SDR. Air marshals told 7NEWS that managers in Las Vegas created and continue to maintain this potentially dangerous quota system.

“Do these reports have real life impacts on the people who are identified as potential terrorists?” 7NEWS Investigator Tony Kovaleski asked.

“Absolutely,” a federal air marshal replied.

[…]

What kind of impact would it have for a flying individual to be named in an SDR?

“That could have serious impact … They could be placed on a watch list. They could wind up on databases that identify them as potential terrorists or a threat to an aircraft. It could be very serious,” said Don Strange, a former agent in charge of air marshals in Atlanta. He lost his job attempting to change policies inside the agency.

This is so insane, it can’t possibly be true. But I have been stunned before by the stupidity of the Department of Homeland Security.

EDITED TO ADD (7/27): This is what Brock Meeks said on David Farber’s IP mailing list:

Well, it so happens that I was the one that BROKE this story… way back in 2004. There were at least two offices, Miami and Las Vegas that had this quota system for writing up and filing “SDRs.”

The requirement was totally renegade and NOT endorsed by Air Marshal officials in Washington. The Las Vegas Air Marshal field office was run (I think he’s retired now) by a real cowboy at the time, someone that caused a lot of problems for the Washington HQ staff. (That official once grilled an Air Marshal for three hours in an interrogation room because he thought the air marshal was a source of mine on another story. The air marshal was then taken off flight status and made to wash the office cars for two weeks… I broke that story, too. And no, the punished air marshal was never a source of mine.)

Air marshals told me they were filing false reports, as they did below, just to hit the quota.

When my story hit, those in the offices of Las Vegas and Miami were reprimanded and the practice was ordered stopped by Washington HQ.

I suppose the biggest question I have for this story is the HYPE of what happens to these reports. They do NOT place the person mentioned on a “watch list.” These reports, filed on Palm Pilot PDAs, go into an internal Air Marshal database that is rarely seen and pretty much ignored by other intelligence agencies, from all sources I talked to.

Why? Because the air marshals are seen as little more than “sky cops” and these SDRs considered little more than “field interviews” that cops sometimes file when they question someone loitering at a 7-11 too late at night.

The quota system, if it is still going on, is heinous, but it hardly results in the big spooky data collection scare that this cheapjack Denver “investigative” TV reporter makes it out to be.

The quoted former field official from Atlanta, Don Strange, did, in fact, lose his job over trying to change internal policies. He was the most well-liked official among the rank and file, and the Atlanta office, under his command, had the highest morale in the nation.

Posted on July 25, 2006 at 9:55 AM

Voice Authentication in Telephone Banking

This seems like a good idea, assuming it is reliable.

The introduction of voice verification was preceded by an extensive period of testing among more than 1,450 people and 25,000 test calls. These were made using both fixed-line and mobile telephones, at all times of day and also by relatives (including six twins). Special attention was devoted to people who were suffering from colds during the test period. ABN AMRO is the first major bank in the world to introduce this technology in this way.

Posted on July 21, 2006 at 7:43 AM

Paris Bank Hack at Center of National Scandal

From Wired News:

Among the falsified evidence produced by the conspirators before the fraud unraveled were confidential bank records originating with the Clearstream bank in Luxembourg, which were expertly modified to make it appear that some French politicians had secretly established offshore bank accounts to receive bribes. The falsified records were then sent to investigators, with enough authentic account information left in to make them appear credible.

Posted on July 17, 2006 at 6:42 AM

Click Fraud and the Problem of Authenticating People

Google’s $6 billion-a-year advertising business is at risk because it can’t be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.

With network click fraud, you host Google AdSense advertisements on your own website. Google pays you every time someone clicks on its ad on your site. It’s fraud if you sit at the computer and repeatedly click on the ad or—better yet—write a computer program that repeatedly clicks on the ad. That kind of fraud is easy for Google to spot, so the clever network click fraudsters simulate different IP addresses, or install Trojan horses on other people’s computers to generate the fake clicks.

The other kind of click fraud is competitive. You notice your business competitor has bought an ad on Google, paying Google for each click. So you use the above techniques to repeatedly click on his ads, forcing him to spend money—sometimes a lot of money—on nothing. (Here’s a company that will commit click fraud for you.)

Click fraud has become a classic security arms race. Google improves its fraud-detection tools, so the fraudsters get increasingly clever … and the cycle continues. Meanwhile, Google is facing multiple lawsuits from those who claim the company isn’t doing enough. My guess is that everyone is right: It’s in Google’s interest both to solve and to downplay the importance of the problem.

But the overarching problem is both hard to solve and important: How do you tell if there’s an actual person sitting in front of a computer screen? How do you tell that the person is paying attention, hasn’t automated his responses, and isn’t being assisted by friends? Authentication systems are big business, whether based on something you know (passwords), something you have (tokens) or something you are (biometrics). But none of those systems can secure you against someone who walks away and lets another person sit down at the keyboard, or a computer that’s infected with a Trojan.

This problem manifests itself in other areas as well.

For years, online computer game companies have been battling players who use computer programs to assist their play: programs that allow them to shoot perfectly or see information they normally couldn’t see.

Playing is less fun if everyone else is computer-assisted, but unless there’s a cash prize on the line, the stakes are small. Not so with online poker sites, where computer-assisted players—or even computers playing without a real person at all—have the potential to drive all the human players away from the game.

Look around the internet, and you see this problem pop up again and again. The whole point of CAPTCHAs is to ensure that it’s a real person visiting a website, not just a bot on a computer. Standard testing doesn’t work online, because the tester can’t be sure that the test taker doesn’t have his book open, or a friend standing over his shoulder helping him. The solution in both cases is a proctor, of course, but that’s not always practical and obviates the benefits of internet testing.
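The CAPTCHA idea above is a challenge-response round: the server issues a challenge it believes is cheap for a human and expensive for a bot, and grants access only on a matching answer. Real CAPTCHAs use distorted images; the bare loop, reduced to an illustrative plain-text arithmetic challenge (which a bot could of course solve trivially), looks like this:

```python
import random

def make_challenge():
    """Issue a challenge and remember the expected answer server-side."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"What is {a} + {b}?", str(a + b)

def verify(expected, response):
    """Grant access only when the response matches the stored answer."""
    return response.strip() == expected

question, expected = make_challenge()
# A human reads the question and types the answer; the server
# compares it against the answer it stored when issuing the challenge.
assert verify(expected, expected)
assert not verify(expected, "not a number")
```

Note that, as the essay argues, even a perfect challenge only proves a human answered it at that moment—not that a human performs whatever comes next.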

This problem has even come up in court cases. In one instance, the prosecution demonstrated that the defendant’s computer committed some hacking offense, but the defense argued that it wasn’t the defendant who did it—that someone else was controlling his computer. And in another case, a defendant charged with a child porn offense argued that, while it was true that illegal material was on his computer, his computer was in a common room of his house and he hosted a lot of parties—and it wasn’t him who’d downloaded the porn.

Years ago, talking about security, I complained about the link between computer and chair. The easy part is securing digital information: on the desktop computer, in transit from computer to computer or on massive servers. The hard part is securing information from the computer to the person. Likewise, authenticating a computer is much easier than authenticating a person sitting in front of the computer. And verifying the integrity of data is much easier than verifying the integrity of the person looking at it—in both senses of that word.

And it’s a problem that will get worse as computers get better at imitating people.

Google is testing a new advertising model to deal with click fraud: cost-per-action ads. Advertisers don’t pay unless the customer performs a certain action: buys a product, fills out a survey, whatever. It’s a hard model to make work—Google would become more of a partner in the final sale instead of an indifferent displayer of advertising—but it’s the right security response to click fraud: Change the rules of the game so that click fraud doesn’t matter.
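The rule change can be sketched as a difference in billing functions: under cost-per-click every click bills the advertiser, fraudulent or not, while under cost-per-action a click that never converts costs nothing, so generating fake clicks no longer pays. (The event model and prices below are hypothetical.)

```python
def bill_cpc(events, price_per_click=0.50):
    """Cost-per-click: advertiser pays for every click, real or fake."""
    return sum(price_per_click for e in events if e["type"] == "click")

def bill_cpa(events, price_per_action=5.00):
    """Cost-per-action: advertiser pays only when the visitor converts."""
    return sum(price_per_action for e in events if e["type"] == "purchase")

# A click-fraud bot generates 100 clicks and, naturally, zero purchases.
fraud = [{"type": "click"}] * 100
assert bill_cpc(fraud) == 50.0  # fraud directly costs the advertiser under CPC
assert bill_cpa(fraud) == 0.0   # the same fraud is worthless under CPA
```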

That’s how to solve a security problem.

This essay appeared on Wired.com.

EDITED TO ADD (7/13): Click Monkeys is a hoax site.

EDITED TO ADD (7/25): An evaluation of Google’s anti-click-fraud efforts, as part of the Lane Gifts case. I’m not sure if this expert report was done for Google, for Lane Gifts, or for the judge.

Posted on July 13, 2006 at 5:22 AM
