New Gmail Phishing Scam
The article is right; this is frighteningly good.
Brian Krebs has the story. Bottom line: PayPal has no excuse for this kind of stuff. I hope the public shaming incents them to offer better authentication for their customers.
Interesting paper: “Drops for Stuff: An Analysis of Reshipping Mule Scams.” From a blog post:
A cybercriminal (called operator) recruits unsuspecting citizens with the promise of a rewarding work-from-home job. This job involves receiving packages at home and having to re-ship them to a different address, provided by the operator. By accepting the job, people unknowingly become part of a criminal operation: the packages that they receive at their home contain stolen goods, and the shipping destinations are often overseas, typically in Russia. These shipping agents are commonly known as reshipping mules (or drops for stuff in the underground community).
[…]
Studying the management of the mules led us to some surprising findings. When applying for the job, people are usually required to send the operator copies of their ID cards and passport. After they are hired, mules are promised payment at the end of their first month of employment. However, from our data it is clear that mules are usually never paid. After their first month expires, they are never contacted back by the operator, who just moves on and hires new mules. In other words, the mules become victims of this scam themselves, by never seeing a penny. Moreover, because they sent copies of their documents to the criminals, mules can potentially become victims of identity theft.
The news from the Office of Personnel Management hack keeps getting worse. In addition to the personal records of over 20 million US government employees, we’ve now learned that the hackers stole fingerprint files for 5.6 million of them.
This is fundamentally different from the data thefts we regularly read about in the news, and should give us pause before we entrust our biometric data to large networked databases.
There are three basic kinds of data that can be stolen. The first, and most common, is authentication credentials. These are passwords and other information that allows someone else access into our accounts and—usually—our money. An example would be the 56 million credit card numbers hackers stole from Home Depot in 2014, or the 21.5 million Social Security numbers hackers stole in the OPM breach. The motivation is typically financial. The hackers want to steal money from our bank accounts, process fraudulent credit card charges in our name, or open new lines of credit or apply for tax refunds.
It’s a huge illegal business, but we know how to deal with it when it happens. We detect these hacks as quickly as possible, and update our account credentials as soon as we detect an attack. (We also need to stop treating Social Security numbers as if they were secret.)
The second kind of data stolen is personal information. Examples would be the medical data stolen and exposed when Sony was hacked in 2014, or the very personal data from the infidelity website Ashley Madison stolen and published this year. In these instances, there is no real way to recover after a breach. Once the data is public, or in the hands of an adversary, it’s impossible to make it private again.
This is the main consequence of the OPM data breach. Whoever stole the data—we suspect it was the Chinese—got copies of the security-clearance paperwork of all those government employees. This documentation includes the answers to some very personal and embarrassing questions, and now opens these employees up to blackmail and other types of coercion.
Fingerprints are another type of data entirely. They’re used to identify people at crime scenes, but increasingly they’re used as an authentication credential. If you have an iPhone, for example, you probably use your fingerprint to unlock your phone. This type of authentication is increasingly common, replacing a password—something you know—with a biometric: something you are. The problem with biometrics is that they can’t be replaced. So while it’s easy to update your password or get a new credit card number, you can’t get a new finger.
And now, for the rest of their lives, 5.6 million US government employees need to remember that someone, somewhere, has their fingerprints. And we really don’t know the future value of this data. If, in twenty years, we routinely use our fingerprints at ATMs, that fingerprint database will become very profitable to criminals. If fingerprints start being used on our computers to authorize our access to files and data, that database will become very profitable to spies.
Of course, it’s not that simple. Fingerprint readers employ various technologies to prevent being fooled by fake fingers: detecting temperature, pores, a heartbeat, and so on. But this is an arms race between attackers and defenders, and there are many ways to fool fingerprint readers. When Apple introduced its iPhone fingerprint reader, hackers figured out how to fool it within days, and have continued to fool each new generation of phone readers equally quickly.
Not every use of biometrics requires the biometric data to be stored in a central server somewhere. Apple’s system, for example, only stores the data locally: on your phone. That way there’s no central repository to be hacked. And many systems don’t store the biometric data at all, only a mathematical function of the data that can be used for authentication but can’t be used to reconstruct the actual biometric. Unfortunately, OPM stored copies of actual fingerprints.
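The idea of storing only a one-way function of the biometric can be illustrated with a toy sketch. To be clear, this is not any deployed scheme—real systems use techniques like fuzzy extractors to tolerate sensor noise robustly, and the feature values and bin size here are made up—but it shows the principle: quantize the feature vector, then store only a salted hash of the quantized values. A fresh reading that lands in the same bins verifies successfully, while the stored digest reveals nothing useful about the original features.

```python
import hashlib
import os

def enroll(features, step=5.0):
    """Quantize a feature vector and store only a salted hash.
    The raw features are discarded; only (salt, digest) is kept."""
    salt = os.urandom(16)
    quantized = tuple(round(f / step) for f in features)
    digest = hashlib.sha256(salt + repr(quantized).encode()).hexdigest()
    return salt, digest

def verify(features, salt, digest, step=5.0):
    """Re-derive the hash from a fresh reading and compare digests."""
    quantized = tuple(round(f / step) for f in features)
    return hashlib.sha256(salt + repr(quantized).encode()).hexdigest() == digest

# Enrollment reading, then a slightly noisy later reading of the same finger.
salt, digest = enroll([101.2, 47.9, 63.4])
print(verify([100.8, 48.3, 63.0], salt, digest))  # small noise, same bins: True
print(verify([120.0, 10.0, 63.0], salt, digest))  # different finger: False
```

Note the weakness of this naive version: a reading that falls just across a bin boundary fails to verify, which is exactly the noise problem that real template-protection schemes are designed to handle.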
Ashley Madison has taught us all the dangers of entrusting our intimate secrets to a company’s computers and networks, because once that data is out there’s no getting it back. All biometric data, whether it be fingerprints, retinal scans, voiceprints, or something else, has that same property. We should be skeptical of any attempts to store this data en masse, whether by governments or by corporations. We need our biometrics for authentication, and we can’t afford to lose them to hackers.
This essay previously appeared on Motherboard.
This is a pretty impressive social engineering story: an attacker compromised someone’s GoDaddy domain registration in order to change his e-mail address and steal his Twitter handle. It’s a complicated attack.
My claim was refused because I am not the “current registrant.” GoDaddy asked the attacker if it was ok to change account information, while they didn’t bother asking me if it was ok when the attacker did it.
[…]
It’s hard to decide what’s more shocking, the fact that PayPal gave the attacker the last four digits of my credit card number over the phone, or that GoDaddy accepted it as verification.
The misuse of credit card numbers as authentication is also how Matt Honan got hacked.
Peter Swire and Yianni Lagos have pre-published a law journal article on the risks of data portability. It specifically addresses an EU data protection regulation, but the security discussion is more general.
…Article 18 poses serious risks to a long-established E.U. fundamental right of data protection, the right to security of a person’s data. Previous access requests by individuals were limited in scope and format. By contrast, when an individual’s lifetime of data must be exported ‘without hindrance,’ then one moment of identity fraud can turn into a lifetime breach of personal data.
They have a point. If you’re going to allow users to download all of their data with one command, you might want to double- and triple-check that command. Otherwise it’s going to become an attack vector for identity theft and other malfeasance.
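As a sketch of what that double- and triple-checking might look like in practice—the class, field names, and thresholds below are illustrative assumptions, not drawn from the regulation or any real service—a site could require a recent re-authentication and impose a cooling-off delay before a full export ships, so a single stolen session can’t immediately walk off with a lifetime of data.

```python
import time

class ExportGuard:
    """Toy policy check before honoring a full-account data export.
    Assumes the service records when the user last re-entered a
    password and when the export was first requested (hypothetical)."""

    REAUTH_WINDOW = 15 * 60      # re-auth must be within 15 minutes
    COOLING_OFF = 24 * 60 * 60   # export ships 24 hours after the request

    def may_export(self, last_reauth, requested_at, now=None):
        if now is None:
            now = time.time()
        recently_verified = (now - last_reauth) <= self.REAUTH_WINDOW
        delay_elapsed = (now - requested_at) >= self.COOLING_OFF
        return recently_verified and delay_elapsed

guard = ExportGuard()
now = time.time()
# Requested yesterday, re-authenticated a minute ago: allowed.
print(guard.may_export(last_reauth=now - 60, requested_at=now - 25 * 3600, now=now))
# Requested five minutes ago: still inside the cooling-off period.
print(guard.may_export(last_reauth=now - 60, requested_at=now - 300, now=now))
```

The cooling-off delay also creates a window in which the legitimate account holder can be notified and cancel a fraudulent export request.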
I wrote about this sort of thing in 2006 in the UK, but it’s even bigger business here:
The criminals, some of them former drug dealers, outwit the Internal Revenue Service by filing a return before the legitimate taxpayer files. Then the criminals receive the refund, sometimes by check but more often through a convenient but hard-to-trace prepaid debit card.
The government-approved cards, intended to help people who have no bank accounts, are widely available in many places, including tax preparation companies. Some of them are mailed, and the swindlers often provide addresses for vacant houses, even buying mailboxes for them, and then collect the refunds there.
[…]
The fraud, which has spread around the country, is costing taxpayers hundreds of millions of dollars annually, federal and state officials say. The I.R.S. sometimes, in effect, pays two refunds instead of one: first to the criminal who gets a claim approved, and then a second to the legitimate taxpayer, who might have to wait as long as a year while the agency verifies the second claim.
J. Russell George, the Treasury inspector general for tax administration, testified before Congress this month that the I.R.S. detected 940,000 fake returns for 2010 in which identity thieves would have received $6.5 billion in refunds. But Mr. George said the agency missed an additional 1.5 million returns with possibly fraudulent refunds worth more than $5.2 billion.
The problem is that it doesn’t take much identity information to file a tax return with the IRS, and the agency automatically corrects your mistakes if you make them—and does the calculations for you if you don’t want to do them yourself. So it’s pretty easy to file a fake return for someone. And the IRS has no way to check if the taxpayer’s address is real, so it sends refunds out to whatever address or account you give them.
There’s a group who charges to make social engineering calls to obtain missing personal information for identity theft.
This doesn’t surprise me at all. Fraud is a business, too.
Good comment:
“We’re moving into an era of ‘steal everything’,” said David Emm, a senior security researcher for Kaspersky Labs.
He believes that cyber criminals are now no longer just targeting banks or retailers in the search for financial details, but instead going after social and other networks which encourage the sharing of vast amounts of personal information.
As both data storage and data processing becomes cheaper, more and more data is collected and stored. An unanticipated effect of this is that more and more data can be stolen and used. As the article says, data minimization is the most effective security tool against this sort of thing. But—of course—it’s not in the database owner’s interest to limit the data it collects; it’s in the interests of those whom the data is about.
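What data minimization looks like in code is simple enough to sketch—the record layout and field names here are invented for illustration, not taken from any particular system: before a record is written to long-term storage, strip the fields the application doesn’t need and coarsen the ones it keeps, so a breach of the stored database leaks far less than a breach of the raw intake.

```python
def minimize_record(raw):
    """Toy data-minimization filter: retain only what the application
    actually uses, coarsened where possible (hypothetical schema)."""
    return {
        "user_id": raw["user_id"],
        "birth_year": raw["date_of_birth"][:4],  # keep the year, drop month/day
        "region": raw["address"]["city"],        # keep the city, drop the street
    }

intake = {
    "user_id": "u123",
    "date_of_birth": "1970-04-01",
    "address": {"street": "100 Main St", "city": "Minneapolis"},
    "mothers_maiden_name": "Smith",  # never needed after signup: not stored
}
print(minimize_record(intake))
```

The point of the sketch is the asymmetry Emm describes: every field that is never stored is a field that can never be stolen.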
Chris Hoofnagle has a new paper: “Internalizing Identity Theft.” Basically, he shows that one of the problems is that lenders extend credit even when credit applications are sketchy.
From an article on the work:
Using a 2003 amendment to the Fair Credit Reporting Act that allows victims of ID theft to ask creditors for the fraudulent applications submitted in their names, Mr. Hoofnagle worked with a small sample of six ID theft victims and delved into how they were defrauded.
Of 16 applications presented by imposters to obtain credit or medical services, almost all were rife with errors that should have suggested fraud. Yet in all 16 cases, credit or services were granted anyway.
In the various cases described in the paper, which was published on Wednesday in The U.C.L.A. Journal of Law and Technology, one victim found four of six fraudulent applications submitted in her name contained the wrong address; two contained the wrong phone number and one the wrong date of birth.
Another victim discovered that his imposter was 70 pounds heavier, yet successfully masqueraded as him using what appeared to be his stolen driver’s license, and in one case submitted an incorrect Social Security number.
This is a textbook example of an economic externality. Because most of the cost of identity theft is borne by the victim—even when lenders, if pushed, reimburse victims—lenders make the trade-off that’s best for their business, and that means issuing credit even in marginal situations. They make more money that way.
If we want to reduce identity theft, the only solution is to internalize that externality. Either give victims the ability to sue lenders who issue credit in their names to identity thieves, or pass a law with penalties if lenders do this.
Among the ways to move the cost of the crime back to issuers of credit, Mr. Hoofnagle suggests that lenders contribute to a fund that will compensate victims for the loss of their time in resolving their ID theft problems.