Entries Tagged "fraud"

Page 20 of 35

UPC Switching Scam

Switching bar codes to buy merchandise at a lower price is not a new scam, but how do you get away with over $1M worth of merchandise this way?

In a statement of facts filed with Tidwell’s plea, he admitted that, during one year, he and others conspired to steal more than $1 million in merchandise from large retailers and sell the items through eBay. The targeted merchandise included high-end vacuum cleaners, electric welders, power winches, personal computers, and electric generators.

Tidwell created fraudulent UPC labels on his home personal computer. Conspirators entered various stores in Ohio, Illinois, Indiana, Pennsylvania and Texas and placed the fraudulent labels on merchandise they targeted, and then bought the items from the store. The fraudulent UPC labels attached to the merchandise would cause the item to be rung up for a price far below its actual retail value.
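Part of what makes this scam so easy is that a UPC-A bar code carries no cryptographic protection at all: the only integrity check is a public check digit, so anyone with label software can print a scannable, valid-looking code for any product number. As a minimal sketch (the product number below is just a published test vector, not one from the case):

```python
def upc_check_digit(digits11: str) -> int:
    """Compute the UPC-A check digit for an 11-digit payload.

    Digits in odd positions (1st, 3rd, ...) are weighted 3, digits in
    even positions are weighted 1; the check digit brings the weighted
    sum up to a multiple of 10. That's the entire integrity check.
    """
    assert len(digits11) == 11 and digits11.isdigit()
    total = sum((3 if i % 2 == 0 else 1) * int(d)
                for i, d in enumerate(digits11))
    return (10 - total % 10) % 10

# Well-known test vector: payload 03600029145 -> full code 036000291452
print(upc_check_digit("03600029145"))  # 2
```

Since the check digit is trivially computable, the register has no way to tell a forged label from a real one; it simply looks up whatever product number it scans.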

That requires a lot of really clueless checkout clerks.

EDITED TO ADD (11/7): Video of talk on barcode hacks.

Posted on October 31, 2008 at 6:43 AM

Horrible Identity Theft Story

This is a story of how smart people can be neutralized through stupid procedures.

Here’s the part of the story where some poor guy’s account gets completely f-ed. This thief had been bounced to the outsourced security department so often that he must have made a checklist of every question they might ask him. Through whatever means, he managed to get the answers to those questions. Now when he called, he could give us the information we were asking for, but by this point we knew his voice so well that we still tried to send him to security. It worked like this: We put him on hold and dial the extension for security. We get a security rep and start to explain the situation; we tell them he was able to give the right information, but that we know it’s the same guy who’s been calling for weeks and we are certain he is not the account holder. They begrudgingly take the call. Minutes later another one of us gets a call from a security rep saying they are handing us a customer who has been cleared by them. And here the thief was, back in our department. For those of us who had come to know him, the fight raged on night after night.

Posted on October 30, 2008 at 12:10 PM

"Scareware" Vendors Sued

This is good:

Microsoft Corp. and the state of Washington this week filed lawsuits against a slew of “scareware” purveyors, scam artists who use fake security alerts to frighten consumers into paying for worthless computer security software.

The case filed by the Washington attorney general’s office names Texas-based Branch Software and its owner James Reed McCreary IV, alleging that McCreary’s company caused targeted PCs to pop up misleading security alerts about security threats on the victims’ computers. The alerts warned users that their systems were “damaged and corrupted” and instructed them to visit a Web site to purchase a copy of Registry Cleaner XP for $39.95.

I would have thought that existing scam laws would be enough, but Washington state actually has a specific law about this sort of thing:

The lawsuits were filed under Washington’s Computer Spyware Act, which among other things punishes individuals who prey on user concerns regarding spyware or other threats. Specifically, the law makes it illegal to misrepresent the extent to which software is required for computer security or privacy, and it provides actual damages or statutory damages of $100,000 per violation, whichever is greater.

Posted on October 2, 2008 at 7:03 AM

News from the Rock Phish Gang

Definitely interesting:

Based in Europe, the Rock Phish group is a criminal collective that has been targeting banks and other financial institutions since 2004. According to RSA, they are responsible for half of the worldwide phishing attacks and have siphoned tens of millions of dollars from individuals’ bank accounts. The group got its name from a now discontinued quirk in which the phishers used directory paths that contained the word “rock.”

The first sign the group was expanding operations came in April, when it introduced a trojan known alternately as Zeus or WSNPOEM, which steals sensitive financial information in transit from a victim’s machine to a bank. Shortly afterward, the gang added more crimeware, including a custom-made botnet client that was spread, among other means, using the Neosploit infection kit.

[…]

Soon, additional signs appeared pointing to a partnership between Rock Phishers and Asprox. Most notably, the command and control server for the custom Rock Phish crimeware had exactly the same directory structure as many of the Asprox servers, leading RSA researchers to believe Rock Phish and Asprox attacks were using at least one common server. (Researchers from Damballa were able to confirm this finding after observing malware samples from each of the respective botnets establish HTTP proxy server connections to a common set of destination IPs.)

Posted on September 10, 2008 at 7:47 AM

Identity Farming

Let me start off by saying that I’m making this whole thing up.

Imagine you’re in charge of infiltrating sleeper agents into the United States. The year is 1983, and the proliferation of identity databases is making it increasingly difficult to create fake credentials. Ten years ago, someone could have just shown up in the country and gotten a driver’s license, Social Security card and bank account—possibly using the identity of someone roughly the same age who died as a young child—but it’s getting harder. And you know that trend will only continue. So you decide to grow your own identities.

Call it “identity farming.” You invent a handful of infants. You apply for Social Security numbers for them. Eventually, you open bank accounts for them, file tax returns for them, register them to vote, and apply for credit cards in their name. And now, 25 years later, you have a handful of identities ready and waiting for some real people to step into them.

There are some complications, of course. Maybe you need people to sign their names as parents—or, at least, mothers. Maybe you need doctors to fill out birth certificates. Maybe you need to fill out paperwork certifying that you’re home-schooling these children. You’ll certainly want to exercise their financial identity: depositing money into their bank accounts and withdrawing it from ATMs, using their credit cards and paying the bills, and so on. And you’ll need to establish some sort of addresses for them, even if it is just a mail drop.

You won’t be able to get driver’s licenses or photo IDs in their name. That isn’t critical, though; in the U.S., more than 20 million adult citizens don’t have photo IDs. But other than that, I can’t think of any reason why identity farming wouldn’t work.

Here’s the real question: Do you actually have to show up for any part of your life?

Again, I made this all up. I have no evidence that anyone is actually doing this. It’s not something a criminal organization is likely to do; twenty-five years is too distant a payoff horizon. The same logic holds true for terrorist organizations; it’s not worth it. It might have been worth it to the KGB—although perhaps harder to justify after the Soviet Union broke up in 1991—and might be an attractive option for existing intelligence adversaries like China.

Immortals could also use this trick to perpetuate themselves, inventing their own children and gradually assuming their identities, then killing their parents off. They could even show up for their own driver’s license photos, wearing a beard as the father and blue spiked hair as the son. I’m told this is a common idea in Highlander fan fiction.

The point isn’t to create another movie plot threat, but to point out the central role that data has taken on in our lives. Previously, I’ve said that we all have a data shadow that follows us around, and that more and more institutions interact with our data shadows instead of with us. We only intersect with our data shadows once in a while—when we apply for a driver’s license or passport, for example—and those interactions are authenticated by older, less-secure interactions. The rest of the world assumes that our photo IDs glue us to our data shadows, ignoring the rather flimsy connection between us and our plastic cards. (And, no, REAL-ID won’t help.)

It seems to me that our data shadows are becoming increasingly distinct from us, almost with a life of their own. What’s important now is our shadows; we’re secondary. And as our society relies more and more on these shadows, we might even become unnecessary.

Our data shadows can live a perfectly normal life without us.

This essay previously appeared on Wired.com.

EDITED TO ADD (9/9): Interesting commentary.

Posted on September 9, 2008 at 5:42 AM

Software to Facilitate Retail Tax Fraud

Interesting:

Thanks to a software program called a zapper, even technologically illiterate restaurant and store owners can siphon cash from computer cash registers and cheat tax officials.

[…]

Zappers alter the electronic sales records in a cash register. To satisfy tax collectors, the tally of food orders, for example, must match the register’s final cash total. To hide the removal of cash from the till, a crooked business owner has to erase the record of food orders equal to the amount of cash taken; otherwise, the imbalance is obvious to any auditor.

[…]

The more sophisticated zappers are easy to use, according to several experts. A dialogue box, which shows the day’s tally, pops up on the register’s screen.

In a second dialogue box, the thief chooses to take a dollar amount or percentage of the till. The program then calculates which orders to erase to get close to the amount of cash the person wants to remove. Then it suggests how much cash to take, and it erases the entries from the books and a corresponding amount in orders, so the register balances.
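The selection step described above—given a target amount of cash, choose which recorded orders to erase so the books still balance—is essentially a subset-sum problem. A minimal, purely illustrative sketch (all names hypothetical; a greedy approximation stands in for whatever the real programs do):

```python
def pick_orders_to_erase(orders, target_cash):
    """Greedy sketch of the zapper's selection step: choose a subset
    of recorded order totals whose sum approaches, without exceeding,
    the amount of cash the operator wants to skim.

    Returns (orders_to_erase, suggested_cash_to_take) so that after
    the erasures, the register's order tally matches its cash total.
    """
    chosen, remaining = [], target_cash
    for amount in sorted(orders, reverse=True):  # largest orders first
        if amount <= remaining:
            chosen.append(amount)
            remaining -= amount
    return chosen, target_cash - remaining

orders = [12.50, 7.25, 20.00, 3.00]
print(pick_orders_to_erase(orders, 25.00))  # ([20.0, 3.0], 23.0)
```

This is why the program "suggests how much cash to take": erasable orders rarely sum to exactly the requested amount, so it proposes the closest achievable total and deletes the matching entries.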

Posted on September 2, 2008 at 12:24 PM

Data Mining to Detect Pump-and-Dump Scams

I don’t know any of the details, but this seems like a good use of data mining:

Mr Tancredi said Verisign’s fraud detection kit would help “decrease the time between the attack being launched and the brokerage being able to respond”.

Before now, he said, brokerages relied on counter measures such as restrictive stock trading or analysis packages that only spotted a problem when money had gone.

Verisign’s software is a module that brokers can add to their in-house trading system that alerts anti-fraud teams to look more closely at trades that exhibit certain behaviour patterns.

“What this self-learning behavioural engine does is look at the different attributes of the event, not necessarily about the computer or where you are logging on from but about the actual transaction, the trade, the amount of the trade,” said Mr Tancredi.

“For example have you liquidated all of your assets in stock that you own in order to buy one penny stock?” he said. “Another example is when a customer who normally trades tech stock on Nasdaq all of a sudden trades a penny stock that has to do with health care and is placing a trade four times more than normal.”
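The two examples Tancredi gives reduce to comparing a trade against a per-customer behavioral profile. A minimal sketch of that idea (all names and thresholds are my own illustration, not Verisign's actual engine):

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """A crude per-customer baseline learned from trading history."""
    usual_sector: str
    avg_trade_size: float

def flag_trade(profile, sector, size, size_multiple=4.0):
    """Return the reasons (if any) a trade departs from the customer's
    normal behaviour: an unfamiliar sector, or a trade several times
    larger than their average -- the two patterns quoted above.
    """
    reasons = []
    if sector != profile.usual_sector:
        reasons.append("unfamiliar sector: " + sector)
    if size >= size_multiple * profile.avg_trade_size:
        reasons.append("trade size >= %gx average" % size_multiple)
    return reasons

p = Profile(usual_sector="tech", avg_trade_size=5000.0)
print(flag_trade(p, "health-care penny stock", 20000.0))
```

A real engine would learn these baselines continuously and weigh many attributes at once, but the alert it raises is this kind of "this trade doesn't match this customer" signal, handed to an anti-fraud team for closer review.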

This is a good use of data mining because, as I said previously:

Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms.

Another news article here.

Posted on August 14, 2008 at 6:10 AM

Indictments Against Largest ID Theft Ring Ever

It was really big news yesterday, but I don’t think it’s that much of a big deal. These crimes are still easy to commit and it’s still too hard to catch the criminals. Catching one gang, even a large one, isn’t going to make us any safer.

If we want to mitigate identity theft, we have to make it harder for people to get credit, make transactions, and generally do financial business remotely:

The crime involves two very separate issues. The first is the privacy of personal data. Personal privacy is important for many reasons, one of which is impersonation and fraud. As more information about us is collected, correlated, and sold, it becomes easier for criminals to get their hands on the data they need to commit fraud. This is what’s been in the news recently: ChoicePoint, LexisNexis, Bank of America, and so on. But data privacy is more than just fraud. Whether it is the books we take out of the library, the websites we visit, or the contents of our text messages, most of us have personal data on third-party computers that we don’t want made public. The posting of Paris Hilton’s phone book on the Internet is a celebrity example of this.

The second issue is the ease with which a criminal can use personal data to commit fraud. It doesn’t take much personal information to apply for a credit card in someone else’s name. It doesn’t take much to submit fraudulent bank transactions in someone else’s name. It’s surprisingly easy to get an identification card in someone else’s name. Our current culture, where identity is verified simply and sloppily, makes it easier for a criminal to impersonate his victim.

Proposed fixes tend to concentrate on the first issue—making personal data harder to steal—whereas the real problem is the second. If we’re ever going to manage the risks and effects of electronic impersonation, we must concentrate on preventing and detecting fraudulent transactions.

I am, however, impressed that we managed to pull together the police forces from several countries to prosecute this case.

Posted on August 7, 2008 at 12:45 PM

Hacking Mifare Transport Cards

London’s Oyster card has been cracked, and the final details will become public in October. NXP Semiconductors, the Philips spin-off that makes the system, lost a court battle to prevent the researchers from publishing. People might be able to use this information to ride for free, but the sky won’t be falling. And the publication of this serious vulnerability actually makes us all safer in the long run.

Here’s the story. Every Oyster card has a radio-frequency identification chip that communicates with readers mounted on the ticket barrier. That chip, the “Mifare Classic” chip, is used in hundreds of other transport systems as well—Boston, Los Angeles, Brisbane, Amsterdam, Taipei, Shanghai, Rio de Janeiro—and as an access pass in thousands of companies, schools, hospitals, and government buildings around Britain and the rest of the world.

The security of Mifare Classic is terrible. This is not an exaggeration; it’s kindergarten cryptography. Anyone with any security experience would be embarrassed to put his name to the design. NXP attempted to deal with this embarrassment by keeping the design secret.

The group that broke Mifare Classic is from Radboud University Nijmegen in the Netherlands. They demonstrated the attack by riding the Underground for free, and by breaking into a building. Their two papers (one is already online) will be published at two conferences this autumn.

The second paper is the one that NXP sued over. They called disclosure of the attack “irresponsible,” warned that it will cause “immense damages,” and claimed that it “will jeopardize the security of assets protected with systems incorporating the Mifare IC.” The Dutch court would have none of it: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”

Exactly right. More generally, the notion that secrecy supports security is inherently flawed. Whenever you see an organization claiming that design secrecy is necessary for security—in ID cards, in voting machines, in airport security—it invariably means that its security is lousy and it has no choice but to hide it. Any competent cryptographer would have designed Mifare’s security with an open and public design.

Secrecy is fragile. Mifare’s security was based on the belief that no one would discover how it worked; that’s why NXP had to muzzle the Dutch researchers. But that’s just wrong. Reverse-engineering isn’t hard. Other researchers had already exposed Mifare’s lousy security. A Chinese company even sells a compatible chip. Is there any doubt that the bad guys already know about this, or will soon enough?

Publication of this attack might be expensive for NXP and its customers, but it’s good for security overall. Companies will only design security as good as their customers know to ask for. NXP’s security was so bad because customers didn’t know how to evaluate security: either they didn’t know what questions to ask, or they didn’t know enough to distrust the marketing answers they were given. This court ruling encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers.

It’s unclear how this break will affect Transport for London. Cloning takes only a few seconds, and the thief only has to brush up against someone carrying a legitimate Oyster card. But it requires an RFID reader and a small piece of software which, while feasible for a techie, are too complicated for the average fare dodger. The police are likely to quickly arrest anyone who tries to sell cloned cards on any scale. TfL promises to turn off any cloned cards within 24 hours, but that will hurt the innocent victim who had his card cloned more than the thief.

The vulnerability is far more serious to the companies that use Mifare Classic as an access pass. It would be very interesting to know how NXP presented the system’s security to them.

And while these attacks only pertain to the Mifare Classic chip, it makes me suspicious of the entire product line. NXP sells a more secure chip and has another on the way, but given the number of basic cryptography mistakes NXP made with Mifare Classic, one has to wonder whether the “more secure” versions will be sufficiently so.

This essay originally appeared in the Guardian.

Posted on August 7, 2008 at 6:07 AM

