Entries Tagged "scams"
This article feels like hyperbole:
The scam has arrived in Australia after being used in the United States and Britain.
The scammer may ask several times, “can you hear me?”, to which people usually reply “yes.”
The scammer is then believed to record the “yes” response and end the call.
That recording of the victim’s voice can then be used to authorise payments or charges in the victim’s name through voice recognition.
Are there really banking systems that use voice recognition of the word “yes” to authenticate? I have never heard of that.
Interesting paper: “Dial One for Scam: A Large-Scale Analysis of Technical Support Scams”:
Abstract: In technical support scams, cybercriminals attempt to convince users that their machines are infected with malware and are in need of their technical support. In this process, the victims are asked to provide scammers with remote access to their machines, who will then “diagnose the problem”, before offering their support services which typically cost hundreds of dollars. Despite their conceptual simplicity, technical support scams are responsible for yearly losses of tens of millions of dollars from everyday users of the web.
In this paper, we report on the first systematic study of technical support scams and the call centers hidden behind them. We identify malvertising as a major culprit for exposing users to technical support scams and use it to build an automated system capable of discovering, on a weekly basis, hundreds of phone numbers and domains operated by scammers. By allowing our system to run for more than 8 months we collect a large corpus of technical support scams and use it to provide insights on their prevalence, the abused infrastructure, the illicit profits, and the current evasion attempts of scammers. Finally, by setting up a controlled, IRB-approved, experiment where we interact with 60 different scammers, we experience first-hand their social engineering tactics, while collecting detailed statistics of the entire process. We explain how our findings can be used by law-enforcing agencies and propose technical and educational countermeasures for helping users avoid being victimized by technical support scams.
This is a harrowing story of a scam artist who convinced a mother that her daughter had been kidnapped. More stories are here. It’s unclear whether these virtual kidnappers use data about their victims or just call people at random and hope to get lucky. Still, it’s a new criminal use of smartphones and ubiquitous information.
Reminds me of the scammers who call low-wage workers at retail establishments late at night and convince them to do outlandish and occasionally dangerous things.
BBC has the story. The trouble is that a scan of a passport is much easier to forge than an actual passport. This is a truly hard problem: how do you give people the ability to get back into their accounts after they’ve lost their credentials, while at the same time preventing hackers from using the same mechanism to hijack accounts? Demanding an easy-to-forge copy of a hard-to-forge document isn’t a good solution.
A criminal ring was arrested in Malaysia for credit card fraud:
They would visit the online shopping websites and purchase all their items using phony credit card details while the debugging app was activated.
The app would fetch the transaction data from the bank to the online shopping website, and trick the website into believing that the transaction was approved, when in reality, it had been declined by the bank.
The syndicates would later sell the items they had purchased illegally for a much lower price.
The problem here seems to be bad systems design. Why should the user’s device be able to spoof the merchant’s verification protocol with the bank?
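The design flaw, as described, is that the merchant accepts a payment status relayed through the customer’s own device, which a client-side “debugging” tool can rewrite. A minimal sketch of the broken flow versus a server-to-server check (all names and the `FakeBank` stand-in are hypothetical, not any real bank’s API):

```python
# Hypothetical sketch of the flaw: the merchant trusts an approval flag
# that travels through the customer's browser/app, which a tampering
# tool can flip from "declined" to "approved" before the site sees it.

def checkout_insecure(order, client_reported_status):
    # VULNERABLE: the approval status came via the client.
    return client_reported_status == "approved"

def checkout_secure(order, bank_api):
    # SAFER: the merchant queries the bank directly, server to server,
    # by transaction ID; the client never carries the answer.
    return bank_api.lookup(order["txn_id"]) == "approved"

class FakeBank:
    """Stand-in for a bank's server-side transaction-status endpoint."""
    def __init__(self, statuses):
        self._statuses = statuses
    def lookup(self, txn_id):
        return self._statuses.get(txn_id, "declined")

bank = FakeBank({"t1": "declined"})
order = {"txn_id": "t1"}

print(checkout_insecure(order, "approved"))  # True: merchant is fooled
print(checkout_secure(order, bank))          # False: the bank says declined
```

The point is only that approval must be confirmed out of band with the bank; a status field that passes through an attacker-controlled device is not evidence of anything.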
Kindle Unlimited is Amazon’s all-you-can-read service. You pay one price and can read anything that’s in the program. Amazon pays authors out of a fixed pool, on the basis of how many people read their books. More interestingly, it pays by the page. An author makes more money if someone reads his book through to page 200 than if they give up at page 50, and even more if they make it through to the end. This makes sense; it doesn’t pay authors for books people download but don’t read, or read the first few pages of and then abandon.
This payment structure requires surveillance, and the Kindle does watch people as they read. The problem is that the Kindle doesn’t know if the reader actually reads the book — only what page they’re on. So Kindle Unlimited records the furthest page the reader synched, and pays based on that.
This opens up the possibility of fraud. If an author can create a thousand-page book and trick the reader into jumping to page 1,000, he gets paid the maximum. Scam authors are doing this through a variety of tricks.
What’s interesting is that while Amazon is definitely concerned about this kind of fraud, it doesn’t affect its bottom line. The fixed payment pool doesn’t change; just who gets how much of it does.
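The zero-sum nature of the pool is easy to see in a toy model. (The numbers and the simple proportional formula are illustrative assumptions; Amazon’s exact payout formula isn’t public beyond “fixed pool, paid per page read.”)

```python
# Toy model: each author's payout is their share of total furthest-
# synced pages, multiplied by a fixed pool. All figures hypothetical.

def payouts(pool, pages_per_author):
    """Split a fixed pool proportionally to pages credited per author."""
    total = sum(pages_per_author.values())
    return {a: pool * p / total for a, p in pages_per_author.items()}

# Honest authors: pages reached x number of readers.
honest = {"alice": 200 * 100, "bob": 50 * 100}
print(payouts(3000.0, honest))

# A scam book that tricks 100 readers into "reaching" page 1,000
# grabs a large share, but the pool itself is unchanged, so the loss
# falls entirely on the other authors, not on Amazon.
with_scammer = dict(honest, mallory=1000 * 100)
print(payouts(3000.0, with_scammer))
```

Running the model shows the payouts in both cases summing to the same 3,000: the scammer’s gain comes dollar-for-dollar out of the honest authors’ shares, which is why the fraud doesn’t touch Amazon’s bottom line.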
EDITED TO ADD: John Scalzi comments.
Scams from the 19th century feel quaint today:
But in the spring of 1859, folks were concerned about another kind of hustle: A man who went by the name of A.V. Lamartine drifted from town to town in the Midwest pretending to attempt suicide.
He would walk into a hotel (according to newspaper accounts from Salem, Ore., to Richmond, Va., and other places) and appear depressed as he requested a room. Once settled in, he would ring a bell for assistance, and when someone arrived, Lamartine would point to an empty bottle on the table labeled “2 ounces of laudanum” and call for a clergyman.
People rushing to his bedside to help him would find a suicide note. The Good Samaritans would summon a doctor, administer emetics and nurse him as he recovered.
Somehow Lamartine knew his situation would engender medical and financial assistance from kind strangers in the 19th century. The scenarios ended this way, as one Brooklyn reporter explained: “He is restored with difficulty and sympathetic people raise a purse for him and he departs.”
The New York Times has a long article on fraudulent locksmiths. The scam is a basic one: quote a low price on the phone, then charge much more after showing up and doing the work. But the method by which the scammers get victims is new. They exploit Google’s crowdsourced system for identifying businesses on its maps. The scammers convince Google that they have a local address, which Google displays to users searching for local businesses.
The scams involve chicanery with two platforms: Google My Business, essentially the company’s version of the Yellow Pages, and Map Maker, which is Google’s crowdsourced online map of the world. The latter allows people around the planet to log in to the system and input data about streets, companies and points of interest.
Both Google My Business and Map Maker are a bit like Wikipedia, insofar as they are largely built and maintained by millions of contributors. Keeping the system open, with verification, gives countless businesses an invaluable online presence. Google officials say that the system is so good that many local companies do not bother building their own websites. Anyone who has ever navigated using Google Maps knows the service is a technological wonder.
But the very quality that makes Google’s systems accessible to companies that want to be listed makes them vulnerable to pernicious meddling.
“This is what you get when you rely on crowdsourcing for all your ‘up to date’ and ‘relevant’ local business content,” Mr. Seely said. “You get people who contribute meaningful content, and you get people who abuse the system.”
The scam is growing:
Lead gens have their deepest roots in locksmithing, but the model has migrated to an array of services, including garage door repair, carpet cleaning, moving and home security. Basically, they surface in any business where consumers need someone in the vicinity to swing by and clean, fix, relocate or install something.
What’s interesting to me are the economic incentives involved:
Only Google, it seems, can fix Google. The company is trying, its representatives say, by, among other things, removing fake information quickly and providing a “Report a Problem” tool on the maps. After looking over the fake Locksmith Force building, a bunch of other lead-gen advertisers in Phoenix and that Mountain View operation with more than 800 websites, Google took action.
Not only has the fake Locksmith Force building vanished from Google Maps, but the company no longer turns up in a “locksmith Phoenix” search. At least not in the first 20 pages. Nearly all the other spammy locksmiths pointed out to Google have disappeared from results, too.
“We’re in a constant arms race with local business spammers who, unfortunately, use all sorts of tricks to try to game our system and who’ve been a thorn in the Internet’s side for over a decade,” a Google spokesman wrote in an email. “As spammers change their techniques, we’re continually working on new, better ways to keep them off Google Search and Maps. There’s work to do, and we want to keep doing better.”
There was no mention of a stronger verification system or a beefed-up spam team at Google. Without such systemic solutions, Google’s critics say, the change to local results will not rise even to the level of superficial.
And that’s Google’s best option, really. It’s not the one losing money from these scammers, so it’s not motivated to fix the problem. Unless the problem rises to the level of affecting user trust in the entire system, it’s just going to do superficial things.
This is exactly the sort of market failure that government regulation needs to fix.