Entries Tagged "identity theft"

Page 6 of 13

Interesting Twist on Identity Theft

Okay, this is clever.

Basically, someone arrested as a homicide suspect walked out of jail after identifying himself as someone else. The biometrics could have caught him, but human error got in the way:

But Sauceda’s fingerprints, taken by a jail employee to verify his identity, were smudged and couldn’t be matched to those on file for Garcia, said Brian Menges, director of jail administration.

So Sauceda was taken for an additional fingerprint check using the jail’s Live Scan technology. Menges said Sauceda’s Live Scan fingerprints were never compared to those on record for Garcia.

It’s a neat scam. Find someone else who’s been arrested, have a friend post bail for that person, and then claim his identity when the jailers come into the cellblock.

Posted on November 2, 2007 at 12:25 PM

Understanding the Black Market in Internet Crime

Here’s an interesting paper from Carnegie Mellon University: “An Inquiry into the Nature and Causes of the Wealth of Internet Miscreants.”

The paper focuses on the large illicit market that specializes in the commoditization of activities in support of Internet-based crime. The main goal of the paper was to understand and measure how these markets function, and to examine the incentives of the various market participants. Using a dataset collected over seven months and comprising over 13 million messages, the authors were able to categorize the market’s participants, the goods and services advertised, and the asking prices for selected goods.

Really cool stuff.
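To give a sense of what that kind of measurement involves, here is a minimal sketch of categorizing messages by keyword matching and pulling out advertised asking prices with a regex. The categories, keywords, and sample messages are my own illustrative assumptions; they are not the authors’ actual taxonomy, data, or code.

import re
from collections import defaultdict

# Hypothetical category keywords; the paper's real taxonomy is far richer.
CATEGORIES = {
    "credit cards": ["cvv", "dump", "fullz"],
    "bank accounts": ["bank login", "wire transfer"],
    "bots/malware": ["botnet", "exploit", "loader"],
}

# Matches advertised dollar amounts such as "$20" or "$19.99".
PRICE_RE = re.compile(r"\$\s*(\d+(?:\.\d{2})?)")

def categorize(message):
    """Return every category whose keywords appear in the message."""
    text = message.lower()
    return [cat for cat, words in CATEGORIES.items()
            if any(w in text for w in words)]

def asking_prices(message):
    """Extract the dollar amounts advertised in a message."""
    return [float(p) for p in PRICE_RE.findall(message)]

if __name__ == "__main__":
    sample = [  # made-up examples, not real market messages
        "selling fresh cvv and fullz, $20 each",
        "bank login with wire transfer access, $300",
    ]
    prices_by_category = defaultdict(list)
    for msg in sample:
        for cat in categorize(msg):
            prices_by_category[cat].extend(asking_prices(msg))
    print(dict(prices_by_category))

Even a toy classifier like this makes the caveat below concrete: an advertised asking price is not evidence that any transaction actually took place.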

Unfortunately, the data is extremely noisy and so far the authors have no way to cross-validate it, so it is difficult to make any strong conclusions.

The press focused on just one thing: a discussion of general ways to disrupt the market. Contrary to the claims in the press coverage, the authors have not built any tools to disrupt the markets.

Related blog posts: Gozi and Storm.

Posted on October 29, 2007 at 2:23 PM

Future of Malware

Excellent three-part series on trends in criminal malware:

When Jackson logged in, the genius of 76service became immediately clear. 76service customers weren’t paying for already-stolen credentials. Instead, 76service sold subscriptions or “projects” to Gozi-infected machines. Usually, projects were sold in 30-day increments because that’s a billing cycle, enough time to guarantee that the person who owns the machine with Gozi on it will have logged in to manage their finances, entering data into forms that could be grabbed.

Subscribers could log in with their assigned user name and password any time during the 30-day project. They’d be met with a screen that told them which of their bots was currently active, and a side bar of management options. For example, they could pull down the latest drops—data deposits that the Gozi-infected machines they subscribed to sent to the servers, like the 3.3 GB one Jackson had found.

A project was like an investment portfolio. Individual Gozi-infected machines were like stocks and subscribers bought a group of them, betting they could gain enough personal information from their portfolio of infected machines to make a profit, mostly by turning around and selling credentials on the black market. (In some cases, subscribers would use a few of the credentials themselves).

Some machines, like some stocks, would underperform and provide little private information. But others would land the subscriber a windfall of private data. The point was to subscribe to several infected machines to balance that risk, the way Wall Street fund managers invest in many stocks to offset losses in one company with gains in another.

[…]

That’s why the subscription prices were steep. “Prices started at $1,000 per machine per project,” says Jackson. With some tinkering and thanks to some loose database configuration, Jackson gained a view into other people’s accounts. He mostly saw subscriptions that bought access to only a handful of machines, rarely more than a dozen.

The $1K figure was for “fresh bots”—new infections that hadn’t been part of a project yet. Used bots that were coming off an expired project were available, but worth less (and thus, cost less) because of the increased likelihood that personal information gained from that machine had already been sold. Customers were urged to act quickly to get the freshest bots available.

This was another advantage for the seller. Providing the self-service interface freed up the sellers to create ancillary services. 76service was extremely customer-focused. “They were there to give you services that made it a good experience,” Jackson says. You want us to clean up the reports for you? Sure, for a small fee. You want a report on all the credentials from one bank in your drop? Hundred bucks, please. For another $150 a month, we’ll create secure remote drops for you. Alternative packaging and delivery options? We can do that. Nickel and dime. Nickel and dime.

And about banks not caring:

As much as the HangUp Team has relied on distributed pain for its success, financial institutions have relied on transferred risk to keep the Internet crime problem from becoming a consumer cause and damaging their businesses. So far, it has been cheaper to follow regulations enough to pass audits and then pay for the fraud rather than implement more serious security. “If you look at the volume of loss versus revenue, it’s not horribly bad yet,” says Chris Hoff, with a nod to the criminal hacker’s strategy of distributed pain. “The banks say, ‘Regulations say I need to do these seven things, so I do them and let’s hope the technology to defend against this catches up.'”

“John,” the security executive at the bank and one of the few security professionals from financial services who agreed to speak for this story, says, “If you audited a financial institution, you wouldn’t find many out of compliance. From a legal perspective, banks can spin that around and say there’s nothing else we could do.”

The banks know how much data Lance James at Secure Science is monitoring; some of them are his clients. The researcher with expertise on the HangUp Team calls consumers’ ability to transfer funds online “the dumbest thing I’ve ever seen. You can’t walk into the branch of a bank with a mask on and no ID and make a transfer. So why is it okay online?”

And yet banks push online banking to customers with one hand while the other hand pushes problems like Gozi away, into acceptable loss budgets and insurance—transferred risk.

As long as consumers don’t raise a fuss, and thus far they haven’t in any meaningful way, the banks have little to fear from their strategies.

But perhaps the only reason consumers don’t raise a fuss is because the banks have both overstated the safety and security of online banking and downplayed negative events around it, like the existence of Gozi and 76service.

The whole thing is worth reading.

Posted on October 17, 2007 at 1:07 PM

Poodle Identity Theft

Weird:

Lynne Day said Afonwen Welch Fusilier—or Blue, for short—was targeted after his pedigree details were accidentally posted online.

A suspected conman has been passing Blue off as his own, claiming the dog has given birth to pups which he tries to sell to unsuspecting customers.

Posted on July 27, 2007 at 6:14 AM

Computer Repair Technicians Accused of Copying Customer Files

We all know that it’s possible, but we assume the people who repair our computers don’t do this:

In recent months, allegations of agents copying pornography, music and alluring photos from customers’ computers have circulated on the Internet. Some bloggers now call it the “Peek Squad.”

“Any attractive young woman who drops off her computer with the Geek Squad should assume that her photos will be looked at,” said Brett Haddock, a former Geek Squad technician.

Just how much are these people paid? And how much money can you make with a few good identity thefts?

Posted on July 26, 2007 at 3:00 PM

REAL ID Action Required Now

I’ve written about the U.S. national ID card—REAL ID—extensively (most recently here). The Department of Homeland Security has published draft rules regarding REAL ID and is requesting comments. Comments are due today, by 5:00 PM Eastern Time. Please, please, please, go to this Privacy Coalition site and submit your comments. The DHS has been making a big deal about the fact that so few people are commenting, and we need to prove them wrong.

This morning the Senate Judiciary Committee held hearings on REAL ID (info—and eventually a video—here); I was one of the witnesses who testified.

And lastly, Richard Forno and I wrote this essay for News.com:

In March, the Department of Homeland Security released its long-awaited guidance document regarding national implementation of the Real ID program, as part of its post-9/11 national security initiatives. It is perhaps quite telling that despite bipartisan opposition, Real ID was buried in a 2005 “must-pass” military spending bill and enacted into law without public debate or congressional hearings.

DHS has maintained that the Real ID concept is not a national identification database. While it’s true that the system is not a single database per se, this is a semantic dodge; according to the DHS document, Real ID will be a collaborative data-interchange environment built from a series of interlinking systems operated and administered by the states. In other words, to the Department of Homeland Security, it’s not a single database because it’s not a single system. But the functionality of a single database remains intact under the guise of a federated data-interchange environment.

The DHS document notes the “primary benefit of Real ID is to improve the security and lessen the vulnerability of federal buildings, nuclear facilities, and aircraft to terrorist attack.” We know now that vulnerable cockpit doors were the primary security weakness contributing to 9/11, and reinforcing them was a long-overdue protective measure to prevent hijackings. But this still raises an interesting question: Are there really so many members of the American public just “dropping by” to visit a nuclear facility that it’s become a primary reason for creating a national identification system? Are such visitors actually admitted?

DHS proposes guidelines for proving one’s identity and residence when applying for a Real ID card. Yet while the department concedes it’s a monumental task to prove one’s domicile or residence, it leaves it up to the states to determine what documents would be adequate proof of residence—and even suggests that a utility bill or bank statement might be appropriate documentation. If so, a person could easily generate multiple proof-of-residence documents. Basing Real ID on such easy-to-forge documents obviates a large portion of what Real ID is supposed to accomplish.

Finally, and perhaps most importantly for Americans, the very last paragraph of the 160-page Real ID document deserves special attention. In a nod to states’ rights advocates, DHS declares that states are free not to participate in the Real ID system if they choose—but any identification card issued by a state that does not meet Real ID criteria is to be clearly labeled as such, to include “bold lettering” or a “unique design” similar to how many states design driver’s licenses for those under 21 years of age.

In its own guidance document, the department has proposed branding citizens not possessing a Real ID card in a manner that lets all who see their official state-issued identification know that they’re “different,” and perhaps potentially dangerous, according to standards established by the federal government. They would become stigmatized, branded, marked, ostracized, segregated. All in the name of protecting the homeland; no wonder this provision appears at the very end of the document.

One likely outcome of this DHS-proposed social segregation is that people presenting non-Real ID identification automatically will be presumed suspicious and perhaps subject to additional screening or surveillance to confirm their innocence at a bar, office building, airport or routine traffic stop. Such a situation would establish a new form of social segregation—an attempt to separate “us” from “them” in the age of counterterrorism and the new normal, where one is presumed suspicious until proven more suspicious.

Two other big-picture concerns about Real ID come to mind: Looking at the overall concept of a national identification database, and given existing data security controls in large distributed systems, one wonders how vulnerable this system-of-systems will be to data loss or identity theft resulting from unscrupulous employees, flawed technologies, external compromises or human error—even under the best of security conditions. And second, there is no clear guidance on the limits of how the Real ID database would be used. Other homeland security initiatives, such as the Patriot Act, have been used and applied—some say abused—for purposes far removed from anything related to homeland security. How can we ensure the same will not happen with Real ID?

As currently proposed, Real ID will fail for several reasons. From a technical and implementation perspective, there are serious questions about its operational abilities both to protect citizen information and resist attempts at circumvention by adversaries. Financially, the initial unfunded $11 billion cost, forced onto the states by the federal government, is excessive. And from a sociological perspective, Real ID will increase the potential for expanded personal surveillance and lay the foundation for a new form of class segregation in the name of protecting the homeland.

It’s time to rethink some of the security decisions made during the emotional aftermath of 9/11 and determine whether they’re still a good idea for homeland security and America. After all, if Real ID was such a well-conceived plan, Maine and 22 other states wouldn’t be challenging it in their legislatures or rejecting the Real ID concept for any number of reasons. But they are.

And we as citizens should, too. Let the debate begin.

Again, go to this Privacy Coalition site and express your views. Today. Before 5:00 PM Eastern Time. (Or, if you prefer, you can use EFF’s comments page.)

Really. It will make a difference.

EDITED TO ADD (5/8): Status of anti-REAL-ID legislation in the states.

EDITED TO ADD (5/9): Article on the hearing.

Posted on May 8, 2007 at 12:15 PM

Story of a Credit Card Fraudster

A two-part story from The Guardian: an excerpt from Other People’s Money: The Rise And Fall Of Britain’s Most Audacious Credit Card Fraudster.

The first time I did the WTS, it was on a man from London who was staying in a £400 hotel room in Glasgow. I used my hotel phone trick to get his card and personal information—fortunately, he was a trusting individual. I then called his card company and explained that I was the gentleman concerned, in Glasgow on business, and had suffered the theft of my wallet and passport. I was understandably distraught, lying on my bed in Battlefield and speaking quietly so my parents couldn’t hear, and wondered what the company suggested I do. The sympathetic woman at the other end proposed I take a cash advance set against my account, which they could have ready for collection within a couple of hours at a wire transfer operator.

Posted on April 4, 2007 at 6:25 AM

Misplacing the Blame in Personal Identity Thefts

Really good article:

In a recent dissection of the connection between gaming and violence, the term “folk devil” was used to describe something that can be labeled dangerous in order to assign blame in a case where the causes are complex and unclear. The new paper suggests that hackers have become the folk devils of computer security, stating that “even though the campaign against hackers has successfully cast them as the primary culprits to blame for insecurity in cyberspace, it is not clear that constructing this target for blame has improved the security of personal digital records.”

Part of this argument is based on the contention that many of the criminal groups that engage in illicit access to records are culturally distinct from the hacker community and that the hacker community proper is composed of a number of subcultures, some of which may access personal data without distributing it.

But, even if a more liberal definition of hacker is allowed, they still account for far less than half of the data losses. The report states that “60 percent of the incidents involve missing or stolen hardware, insider abuse or theft, administrative error, or accidentally exposing data online.”

Those figures come from analyzing the data while eliminating a single event, the compromise of 1.6 billion records at Acxiom. The Acxiom data loss is informative, as it reveals how what could be categorized as a hack involves institutional negligence. The records stolen from the company were taken by an employee who had access to Acxiom servers in order to upload data. That employee gained download access because Acxiom set the same passwords for both types of access.
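The access-control lesson in that last detail is worth making concrete: upload and download should be separate privileges, each tied to its own credential. Here is a minimal sketch of the idea, with entirely hypothetical names and checks; nothing here reflects Acxiom’s actual systems.

import hmac
from dataclasses import dataclass

@dataclass(frozen=True)
class Credential:
    client_id: str
    secret: str
    permissions: frozenset  # e.g. frozenset({"upload"})

def authorize(cred, presented_secret, action):
    """Allow an action only if the secret matches and the credential
    explicitly grants that specific action."""
    return (hmac.compare_digest(presented_secret, cred.secret)
            and action in cred.permissions)

# A partner that only needs to deliver data gets an upload-only credential.
upload_cred = Credential("partner-42-upload", "s3cret-up", frozenset({"upload"}))

print(authorize(upload_cred, "s3cret-up", "upload"))    # True
print(authorize(upload_cred, "s3cret-up", "download"))  # False: no shared-password shortcut

The point the report makes is that reusing the same password across both paths turned a limited upload privilege into full read access.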

Posted on March 23, 2007 at 10:29 AM

