Entries Tagged "fraud"

Page 9 of 34

Hijacking Someone's Facebook Account with a Fake Passport Copy

BBC has the story. The confusion is that a scan of a passport is much easier to forge than an actual passport. This is a truly hard problem: how do you give people the ability to get back into their accounts after they’ve lost their credentials, while at the same time prohibiting hackers from using the same mechanism to hijack accounts? Demanding an easy-to-forge copy of a hard-to-forge document isn’t a good solution.

Posted on July 7, 2016 at 1:27 PM

Fraudsters are Buying IPv4 Addresses

IPv4 addresses are valuable, so criminals are figuring out how to buy or steal them.

Hence criminals’ interest in ways to land themselves IP addresses, some of which were detailed this week by ARIN’s senior director of global registry knowledge, Leslie Nobile, at the North American Network Operators Group’s NANOG 67 conference.

Nobile explained that criminals look for dormant ARIN records and try to establish themselves as the rightful administrator. ARIN has 30,556 legacy network records, she said, but a validated point of contact for only 54 per cent of those networks. The remaining ~14,000 networks are ripe for targeting by hijackers who Nobile said are only interested in establishing legitimacy with ARIN so they can find a buyer for unused IPv4 addresses possessed by dormant legacy networks.

Criminals do so by finding dormant ARIN records, checking the Whois data to see whether there is a valid contact, and then ascertaining whether the IPv4 allocations are currently routed. If the assigned addresses are dark and no active administrator exists, hijackers can revive dormant domain names or even re-register the names of defunct companies in order to establish a position as legitimate administrators of an address space. If all goes well, the hijackers end up with addresses to sell.

Video presentation here.
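
The triage described in the talk is mostly mechanical lookups, so it is easy to automate. As a rough illustration (my sketch, not from the presentation), the Python below pulls the Whois record for an address block with the standard whois client, extracts contact e-mail domains, and flags any domain that no longer resolves, which is exactly the kind of dormant contact a hijacker would try to re-register. The example prefix and the parsing heuristics are placeholders.

```python
# Sketch only: audit a legacy address block's Whois contacts for signs of dormancy.
import re
import socket
import subprocess

def whois_record(prefix: str) -> str:
    """Fetch the raw Whois text for an address block via the system whois client."""
    return subprocess.run(["whois", prefix], capture_output=True,
                          text=True, timeout=30).stdout

def contact_domains(record: str) -> set[str]:
    """Pull the domains out of any e-mail addresses found in the Whois record."""
    return {m.group(1).lower()
            for m in re.finditer(r"[\w.+-]+@([\w.-]+\.[a-z]{2,})", record, re.I)}

def domain_resolves(domain: str) -> bool:
    """A dormant or expired contact domain typically no longer resolves."""
    try:
        socket.gethostbyname(domain)
        return True
    except OSError:
        return False

if __name__ == "__main__":
    prefix = "192.0.2.0/24"  # placeholder block (TEST-NET-1), not a real legacy record
    record = whois_record(prefix)
    for domain in sorted(contact_domains(record)):
        status = "resolves" if domain_resolves(domain) else "DORMANT? does not resolve"
        print(f"{domain}: {status}")
```

The same checks, run by the registry or by the legacy holders themselves, are also how you would defend against this: any record whose contact domain has lapsed is a takeover candidate.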

Posted on June 22, 2016 at 1:15 PM

New Credit Card Scam

A criminal ring was arrested in Malaysia for credit card fraud:

They would visit the online shopping websites and purchase all their items using phony credit card details while the debugging app was activated.

The app would fetch the transaction data from the bank to the online shopping website, and trick the website into believing that the transaction was approved, when in reality, it had been declined by the bank.

The syndicates would later sell the items they had purchased illegally for a much lower price.

The problem here seems to be bad systems design. Why should the user be able to spoof the merchant’s verification protocol with the bank?
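
To make the design point concrete, here is a minimal Python sketch of the two patterns. The endpoint, credential, and response field names are all hypothetical; the point is that the broken version trusts a payment status relayed through the customer's device (which the "debugging app" in the story could forge), while the safer version asks the gateway directly over a server-to-server call.

```python
# Sketch of a merchant backend's payment confirmation, broken vs. safer.
import requests  # third-party; pip install requests

GATEWAY_STATUS_URL = "https://gateway.example/transactions/{txn_id}"  # hypothetical
API_KEY = "merchant-secret"  # hypothetical credential, never exposed to the browser

def complete_order_broken(order_id: str, client_reported_status: str) -> bool:
    # BROKEN: the status came back through the shopper's browser or app,
    # so a tampering proxy can simply claim "approved".
    return client_reported_status == "approved"

def complete_order_safer(order_id: str, txn_id: str) -> bool:
    # SAFER: ignore anything the client says and query the gateway directly.
    resp = requests.get(GATEWAY_STATUS_URL.format(txn_id=txn_id),
                        headers={"Authorization": f"Bearer {API_KEY}"},
                        timeout=10)
    resp.raise_for_status()
    return resp.json().get("status") == "approved"  # field name is an assumption
```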

Posted on May 11, 2016 at 6:34 AM

Amazon Unlimited Fraud

Amazon's Kindle Unlimited is an all-you-can-read service. You pay one price and can read anything that's in the program. Amazon pays authors out of a fixed pool, on the basis of how many people read their books. More interestingly, it pays by the page. An author makes more money if someone reads his book through to page 200 than if they give up at page 50, and even more if they make it through to the end. This makes sense; it doesn't pay authors for books people download but don't read, or read the first few pages of and then decide not to read the rest.

This payment structure requires surveillance, and the Kindle does watch people as they read. The problem is that the Kindle doesn’t know if the reader actually reads the book—only what page they’re on. So Kindle Unlimited records the furthest page the reader synched, and pays based on that.

This opens up the possibility for fraud. If an author can create a thousand-page book and trick the reader into jumping to page 1,000, he gets paid the maximum. Scam authors are doing this through a variety of tricks.

What’s interesting is that while Amazon is definitely concerned about this kind of fraud, it doesn’t affect its bottom line. The fixed payment pool doesn’t change; just who gets how much of it does.
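
A toy model makes the incentive clear (this is an illustration, not Amazon's actual formula): split a fixed pool pro rata by the furthest page synced per borrow, and a single scam borrow that "reaches" page 1,000 shrinks every honest author's share while Amazon's total outlay stays the same.

```python
# Toy model: fixed pool paid out pro rata by furthest page synced per borrow.
def payouts(pool: float, reads: list[tuple[str, int]]) -> dict[str, float]:
    """reads: (author, furthest page synced) per borrow; pool is fixed in advance."""
    total_pages = sum(pages for _, pages in reads)
    per_page = pool / total_pages
    out: dict[str, float] = {}
    for author, pages in reads:
        out[author] = out.get(author, 0.0) + pages * per_page
    return out

honest = [("alice", 200), ("bob", 150), ("carol", 300)]
with_scam = honest + [("scammer", 1000)]  # one borrow that jumps straight to page 1,000

print(payouts(10_000.0, honest))     # pool split among honest authors only
print(payouts(10_000.0, with_scam))  # same pool, smaller honest shares
```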

EDITED TO ADD: John Scalzi comments.

Posted on April 28, 2016 at 8:20 AM

Data Is a Toxic Asset

Thefts of personal information aren’t unusual. Every week, thieves break into networks and steal data about people, often tens of millions at a time. Most of the time it’s information that’s needed to commit fraud, as happened in 2015 to Experian and the IRS.

Sometimes it’s stolen for purposes of embarrassment or coercion, as in the 2015 cases of Ashley Madison and the US Office of Personnel Management. The latter exposed highly sensitive personal data that affects the security of millions of government employees, probably to the Chinese. Always it’s personal information about us, information that we shared with the expectation that the recipients would keep it secret. And in every case, they did not.

The telecommunications company TalkTalk admitted that its data breach last year resulted in criminals using customer information to commit fraud. This was more bad news for a company that’s been hacked three times in the past 12 months, and has already seen some disastrous effects from losing customer data, including £60 million (about $83 million) in damages and over 100,000 customers. Its stock price took a pummeling as well.

People have been writing about 2015 as the year of data theft. I’m not sure if more personal records were stolen last year than in other recent years, but it certainly was a year for big stories about data thefts. I also think it was the year that industry started to realize that data is a toxic asset.

The phrase “big data” refers to the idea that large databases of seemingly random data about people are valuable. Retailers save our purchasing habits. Cell phone companies and app providers save our location information.

Telecommunications providers, social networks, and many other types of companies save information about who we talk to and share things with. Data brokers save everything about us they can get their hands on. This data is saved and analyzed, bought and sold, and used for marketing and other persuasive purposes.

And because the cost of saving all this data is so cheap, there’s no reason not to save as much as possible, and save it all forever. Figuring out what isn’t worth saving is hard. And because someday the companies might figure out how to turn the data into money, until recently there was absolutely no downside to saving everything. That changed this past year.

What all these data breaches are teaching us is that data is a toxic asset and saving it is dangerous.

Saving it is dangerous because it’s highly personal. Location data reveals where we live, where we work, and how we spend our time. If we all have a location tracker like a smartphone, correlating data reveals who we spend our time with—including who we spend the night with.

Our Internet search data reveals what’s important to us, including our hopes, fears, desires and secrets. Communications data reveals who our intimates are, and what we talk about with them. I could go on. Our reading habits, or purchasing data, or data from sensors as diverse as cameras and fitness trackers: All of it can be intimate.

Saving it is dangerous because many people want it. Of course companies want it; that’s why they collect it in the first place. But governments want it, too. In the United States, the National Security Agency and FBI use secret deals, coercion, threats and legal compulsion to get at the data. Foreign governments just come in and steal it. When a company with personal data goes bankrupt, it’s one of the assets that gets sold.

Saving it is dangerous because it’s hard for companies to secure. For a lot of reasons, computer and network security is very difficult. Attackers have an inherent advantage over defenders, and a sufficiently skilled, funded and motivated attacker will always get in.

And saving it is dangerous because failing to secure it is damaging. It will reduce a company’s profits, reduce its market share, hurt its stock price, cause it public embarrassment, and—in some cases—result in expensive lawsuits and, occasionally, criminal charges.

All this makes data a toxic asset, and it continues to be toxic as long as it sits in a company’s computers and networks. The data is vulnerable, and the company is vulnerable. It’s vulnerable to hackers and governments. It’s vulnerable to employee error. And when there’s a toxic data spill, millions of people can be affected. The 2015 Anthem Health data breach affected 80 million people. The 2013 Target Corp. breach affected 110 million.

This toxic data can sit in organizational databases for a long time. Some of the stolen Office of Personnel Management data was decades old. Do you have any idea which companies still have your earliest e-mails, or your earliest posts on that now-defunct social network?

If data is toxic, why do organizations save it?

There are three reasons. The first is that we’re in the middle of the hype cycle of big data. Companies and governments are still punch-drunk on data, and have believed the wildest of promises on how valuable that data is. The research showing that more data isn’t necessarily better, and that there are serious diminishing returns when adding additional data to processes like personalized advertising, is just starting to come out.

The second is that many organizations are still downplaying the risks. Some simply don’t realize just how damaging a data breach would be. Some believe they can completely protect themselves against a data breach, or at least that their legal and public relations teams can minimize the damage if they fail. And while there’s certainly a lot that companies can do technically to better secure the data they hold about all of us, there’s no better security than deleting the data.

The last reason is that some organizations understand both the first two reasons and are saving the data anyway. The culture of venture-capital-funded start-up companies is one of extreme risk taking. These are companies that are always running out of money, that always know their impending death date.

They are so far from profitability that their only hope for surviving is to get even more money, which means they need to demonstrate rapid growth or increasing value. This motivates those companies to take risks that larger, more established, companies would never take. They might take extreme chances with our data, even flout regulations, because they literally have nothing to lose. And often, the most profitable business models are the most risky and dangerous ones.

We can be smarter than this. We need to regulate what corporations can do with our data at every stage: collection, storage, use, resale and disposal. We can make corporate executives personally liable so they know there’s a downside to taking chances. We can make the business models that involve massively surveilling people the less compelling ones, simply by making certain business practices illegal.

The Ashley Madison data breach was such a disaster for the company because it saved its customers’ real names and credit card numbers. It didn’t have to do it this way. It could have processed the credit card information, given the user access, and then deleted all identifying information.

To be sure, it would have been a different company. It would have had less revenue, because it couldn’t charge users a monthly recurring fee. Users who lost their password would have had more trouble re-accessing their account. But it would have been safer for its customers.
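
A minimal sketch of that "charge, grant access, then forget" design, with a stand-in for the real payment-gateway call: identifying details are used once for the charge and never stored, and the account is keyed by an opaque token rather than a name or e-mail address.

```python
# Sketch of data minimization at signup: no PII is retained after the charge.
import hashlib
import secrets

ACCOUNTS: dict[str, str] = {}  # access_token -> salted password hash; no PII kept

def charge_card(name: str, card_number: str, amount_cents: int) -> bool:
    """Hypothetical one-time charge via a payment gateway."""
    return True  # assume the gateway approved the charge

def create_account(name: str, card_number: str, password: str) -> str | None:
    if not charge_card(name, card_number, 1999):
        return None
    token = secrets.token_urlsafe(16)  # opaque handle, not tied to any identity
    salt = secrets.token_hex(8)
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    ACCOUNTS[token] = f"{salt}${digest}"
    # name and card_number go out of scope here and are never written to disk
    return token
```

The trade-offs are the ones named above: no stored card means no recurring billing, and no stored identity means a lost token or password is much harder to recover.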

Similarly, the Office of Personnel Management didn’t have to store everyone’s information online and accessible. It could have taken older records offline, or at least onto a separate network with more secure access controls. Yes, it wouldn’t be immediately available to government employees doing research, but it would have been much more secure.

Data is a toxic asset. We need to start thinking about it as such, and treat it as we would any other source of toxicity. To do anything else is to risk our security and privacy.

This essay previously appeared on CNN.com.

Posted on March 4, 2016 at 5:32 AM

Exploiting Google Maps for Fraud

The New York Times has a long article on fraudulent locksmiths. The scam is a basic one: quote a low price on the phone, but charge much more once you show up and do the work. But the method by which the scammers get victims is new. They exploit Google’s crowdsourced system for identifying businesses on its maps. The scammers convince Google that they have a local address, which Google displays to its users who are searching for local businesses.

But they involve chicanery with two platforms: Google My Business, essentially the company’s version of the Yellow Pages, and Map Maker, which is Google’s crowdsourced online map of the world. The latter allows people around the planet to log in to the system and input data about streets, companies and points of interest.

Both Google My Business and Map Maker are a bit like Wikipedia, insofar as they are largely built and maintained by millions of contributors. Keeping the system open, with verification, gives countless businesses an invaluable online presence. Google officials say that the system is so good that many local companies do not bother building their own websites. Anyone who has ever navigated using Google Maps knows the service is a technological wonder.

But the very quality that makes Google’s systems accessible to companies that want to be listed makes them vulnerable to pernicious meddling.

“This is what you get when you rely on crowdsourcing for all your ‘up to date’ and ‘relevant’ local business content,” Mr. Seely said. “You get people who contribute meaningful content, and you get people who abuse the system.”

The scam is growing:

Lead gens have their deepest roots in locksmithing, but the model has migrated to an array of services, including garage door repair, carpet cleaning, moving and home security. Basically, they surface in any business where consumers need someone in the vicinity to swing by and clean, fix, relocate or install something.

What’s interesting to me are the economic incentives involved:

Only Google, it seems, can fix Google. The company is trying, its representatives say, by, among other things, removing fake information quickly and providing a “Report a Problem” tool on the maps. After looking over the fake Locksmith Force building, a bunch of other lead-gen advertisers in Phoenix and that Mountain View operation with more than 800 websites, Google took action.

Not only has the fake Locksmith Force building vanished from Google Maps, but the company no longer turns up in a “locksmith Phoenix” search. At least not in the first 20 pages. Nearly all the other spammy locksmiths pointed out to Google have disappeared from results, too.

“We’re in a constant arms race with local business spammers who, unfortunately, use all sorts of tricks to try to game our system and who’ve been a thorn in the Internet’s side for over a decade,” a Google spokesman wrote in an email. “As spammers change their techniques, we’re continually working on new, better ways to keep them off Google Search and Maps. There’s work to do, and we want to keep doing better.”

There was no mention of a stronger verification system or a beefed-up spam team at Google. Without such systemic solutions, Google’s critics say, the change to local results will not rise even to the level of superficial.

And that’s Google’s best option, really. It’s not the one losing money from these scammers, so it’s not motivated to fix the problem. Unless the problem rises to the level of affecting user trust in the entire system, it’s just going to do superficial things.

This is exactly the sort of market failure that government regulation needs to fix.

Posted on February 8, 2016 at 6:52 AM

Forced Authorization Attacks Against Chip-and-PIN Credit Card Terminals

Clever:

The way forced authorisation fraud works is that the retailer sets up the terminal for a transaction by inserting the customer’s card and entering the amount, then hands the terminal over to the customer so they can type in the PIN. But the criminal has used a stolen or counterfeit card, and due to the high value of the transaction the terminal performs a “referral”—asking the retailer to call the bank to perform additional checks such as the customer answering a security question. If the security checks pass, the bank will give the retailer an authorisation code to enter into the terminal.

The problem is that when the terminal asks for these security checks, it’s still in the hands of the criminal, and it’s the criminal that follows the steps that the retailer should have. Since there’s no phone conversation with the bank, the criminal doesn’t know the correct authorisation code. But what surprises retailers is that the criminal can type in anything at this stage and the transaction will go through. The criminal might also be able to bypass other security features, for example they could override the checking of the PIN by following the steps the retailer would if the customer has forgotten the PIN.

By the time the terminal is passed back to the retailer, it looks like the transaction was completed successfully. The receipt will differ only very subtly from that of a normal transaction, if at all. The criminal walks off with the goods and it’s only at the end of the day that the authorisation code is checked by the bank. By that time, the criminal is long gone. Because some of the security checks the bank asked for weren’t completed, the retailer doesn’t get the money.
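
My rough reconstruction of why this works, as a Python sketch rather than anything from the original write-up: during a voice referral the terminal is offline, so it can only record whatever authorisation code is typed in and defer the real check to end-of-day settlement, by which point the criminal is long gone and the loss falls on the retailer.

```python
# Sketch of the referral flow's flaw: the typed code is recorded, not verified.
def referral_flow(typed_auth_code: str) -> dict:
    # The terminal cannot reach the bank during a voice referral, so it has
    # no way to verify the code; it just stores it with the transaction.
    return {"status": "completed", "auth_code": typed_auth_code}

def end_of_day_settlement(txn: dict, codes_issued_by_bank: set[str]) -> bool:
    # Only now does the acquirer check the code; a made-up code fails,
    # and the retailer rather than the bank absorbs the loss.
    return txn["auth_code"] in codes_issued_by_bank

txn = referral_flow("123456")                  # the criminal types anything plausible
print(end_of_day_settlement(txn, {"984312"}))  # False: the retailer is never paid
```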

Posted on December 7, 2015 at 5:35 AM

