Entries Tagged "identity theft"


Student Hacks System to Alter Grades

This is an interesting story:

A UCSB student is being charged with four felonies after she allegedly stole the identity of two professors and used the information to change her own and several other students’ grades, police said.

The University of California, Santa Barbara has a custom program, eGrades, where faculty can submit and alter grades. It’s password protected, of course. But there’s a backup system, so that faculty who forget their password can reset it using their Social Security number and date of birth.

A student worked for an insurance company, where she was able to obtain the SSNs and birth dates of two faculty members. She used that information to reset their passwords and change grades.

Police, university officials and campus computer specialists said Ramirez’s alleged illegal access to the computer grading system was not the result of a deficiency or flaw in the program.

Sounds like a flaw in the program to me. It’s even one I’ve written about: a primary security mechanism that fails to a less-secure secondary mechanism.
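A minimal sketch of that flaw, with all names and values hypothetical (this is not the actual eGrades code): a strong primary authenticator guarded by a weak recovery path is only as strong as the recovery path.

```python
import hmac

# Hypothetical account store. The SSN and birth date are exactly the
# kind of data an insurance-company employee could look up.
ACCOUNTS = {
    "prof_smith": {
        "password": "correct horse battery staple",
        "ssn": "078-05-1120",
        "dob": "1960-04-01",
    }
}

def login(user: str, password: str) -> bool:
    """Primary mechanism: check the password."""
    acct = ACCOUNTS.get(user)
    return acct is not None and hmac.compare_digest(acct["password"], password)

def reset_password(user: str, ssn: str, dob: str, new_password: str) -> bool:
    """Backup mechanism: anyone who knows SSN + DOB controls the account."""
    acct = ACCOUNTS.get(user)
    if acct and acct["ssn"] == ssn and acct["dob"] == dob:
        acct["password"] = new_password
        return True
    return False

# The attacker never needs the password:
assert not login("prof_smith", "guess")
assert reset_password("prof_smith", "078-05-1120", "1960-04-01", "pwned")
assert login("prof_smith", "pwned")
```

The effective security of the whole system is the minimum of the two mechanisms, not the maximum.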

Posted on April 1, 2005 at 2:36 PM

ID Theft is Inescapable

The Register says what I’ve been saying all along:

While this is nothing new, there is an important observation here that’s worth emphasizing: none of these cases involved online transactions.

Many people innocently believe that they’re safe from credit card fraud and identity theft in the brick and mortar world. Nothing could be farther from the truth. The vast majority of incidents can be traced to skimming, dumpster diving, and just plain stupidity among those who “own” our personal data.

Only a small fraction of such incidents result from online transactions. Every time you pay by check, use a debit or credit card, or fill out an application for insurance, housing, credit, employment, or education, you lose control of sensitive data.

In the US, a merchant is at liberty to do anything he pleases with the information, and this includes selling it to a third party without your knowledge or permission, or entering it into a computerized database, possibly with lax access controls, and possibly connected to the Internet.

Sadly, Congress’s response has been to increase the penalties for identity theft, rather than to regulate access to, and use of, personal data by merchants, marketers, and data miners. Incredibly, the only person with absolutely no control over the collection, storage, security, and use of such sensitive information is its actual owner.

For this reason, it’s literally impossible for an individual to prevent identity theft and credit card fraud, and it will remain impossible until Congress sees fit to regulate the privacy invasion industry.

Posted on March 30, 2005 at 7:35 AM

Personal Information and Identity Theft

From BBC:

The chance to win theatre tickets is enough to make people give away their identity, reveals a survey.

Of those taking part 92% revealed details such as mother’s maiden name, first school and birth date.

Fraud due to impersonation—commonly called “identity theft”—works for two reasons. One, identity information is easy to obtain. And two, identity information is easy to use to commit fraud.

Studies like this show why attacking the first reason is futile; there are just too many ways to get the information. If we want to reduce the risks associated with identity theft, we have to make identity information less valuable. Too much of our security is based on identity, and it’s not working.

Posted on March 25, 2005 at 8:09 AM

The Failure of Two-Factor Authentication

Two-factor authentication isn’t our savior. It won’t defend against phishing. It’s not going to prevent identity theft. It’s not going to secure online accounts from fraudulent transactions. It solves the security problems we had ten years ago, not the security problems we have today.

The problem with passwords is that they’re too easy to lose control of. People give them to other people. People write them down, and other people read them. People send them in e-mail, and that e-mail is intercepted. People use them to log into remote servers, and their communications are eavesdropped on. They’re also easy to guess. And once any of that happens, the password no longer works as an authentication token because you can’t be sure who is typing that password in.

Two-factor authentication mitigates this problem. If your password includes a number that changes every minute, or a unique reply to a random challenge, then it’s harder for someone else to intercept. You can’t write down the ever-changing part. An intercepted password won’t be good the next time it’s needed. And a two-factor password is harder to guess. Sure, someone can always give his password and token to his secretary, but no solution is foolproof.
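The "number that changes every minute" idea can be sketched in a few lines. This is the HMAC-based construction later standardized as TOTP (RFC 6238), shown here only to illustrate the concept; the proprietary tokens of the era used their own schemes.

```python
import hashlib
import hmac
import struct

def one_time_code(secret: bytes, t: float, step: int = 60) -> str:
    """Derive a six-digit code from a shared secret and the current
    time window, so the code changes every `step` seconds."""
    counter = struct.pack(">Q", int(t) // step)          # time-window index
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

secret = b"shared-between-token-and-server"

# Token and server share the secret, so within the same minute they
# compute the same code; once the window passes, the code moves on.
assert one_time_code(secret, 0) == one_time_code(secret, 59)
```

An eavesdropper who captures one code learns nothing useful for the next window, which is exactly why these tokens defeat the passive attacks described above.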

These tokens have been around for at least two decades, but it’s only recently that they have gotten mass-market attention. AOL is rolling them out. Some banks are issuing them to customers, and even more are talking about doing it. It seems that corporations are finally waking up to the fact that passwords don’t provide adequate security, and are hoping that two-factor authentication will fix their problems.

Unfortunately, the nature of attacks has changed over those two decades. Back then, the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses.

Here are two new active attacks we’re starting to see:

  • Man-in-the-Middle attack. An attacker puts up a fake bank website and entices the user to that website. The user types in his password, and the attacker in turn uses it to access the bank’s real website. Done right, the user will never realize that he isn’t at the bank’s website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user’s banking transactions while making his own transactions at the same time.

  • Trojan attack. The attacker gets a Trojan installed on the user’s computer. When the user logs into his bank’s website, the attacker piggybacks on that session via the Trojan to make any fraudulent transaction he wants.

See how two-factor authentication doesn’t solve anything? In the first case, the attacker can pass the ever-changing part of the password to the bank along with the never-changing part. And in the second case, the attacker is relying on the user to log in.
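The man-in-the-middle case reduces to a toy simulation (all parties here are simplified stand-ins, not any real banking protocol): the one-time code authenticates the start of the session, but nothing binds the rest of the session to the legitimate user, so a relay works.

```python
import secrets

class Bank:
    """Hypothetical bank that verifies a one-time code at login."""

    def __init__(self):
        # The current value showing on the customer's token.
        self.expected_code = f"{secrets.randbelow(1_000_000):06d}"
        self.transactions = []

    def login(self, code: str) -> bool:
        return code == self.expected_code

    def transfer(self, session_ok: bool, to: str, amount: int) -> None:
        if session_ok:
            self.transactions.append((to, amount))

bank = Bank()
user_code = bank.expected_code        # the user reads this off the token

# The phishing site relays the user's fresh code to the real bank...
session = bank.login(user_code)
# ...then issues its own transaction inside the authenticated session.
bank.transfer(session, to="attacker", amount=10_000)

assert bank.transactions == [("attacker", 10_000)]
```

The code was valid and correctly verified; the fraud happened anyway, because the authentication says nothing about who controls the session afterward.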

The real threat is fraud due to impersonation, and the tactics of impersonation will change in response to the defenses. Two-factor authentication will force criminals to modify their tactics, that’s all.

Recently I’ve seen examples of two-factor authentication using two different communications paths: call it “two-channel authentication.” One bank sends a challenge to the user’s cell phone via SMS and expects a reply via SMS. If you assume that all your customers have cell phones, then this results in a two-factor authentication process without extra hardware. And even better, the second authentication piece goes over a different communications channel than the first; eavesdropping is much, much harder.

But in this new world of active attacks, no one cares. An attacker using a man-in-the-middle attack is happy to have the user deal with the SMS portion of the log-in, since he can’t do it himself. And a Trojan attacker doesn’t care, because he’s relying on the user to log in anyway.

Two-factor authentication is not useless. It works for local login, and it works within some corporate networks. But it won’t work for remote authentication over the Internet. I predict that banks and other financial institutions will spend millions outfitting their users with two-factor authentication tokens. Early adopters of this technology may very well experience a significant drop in fraud for a while as attackers move to easier targets, but in the end there will be a negligible drop in the amount of fraud and identity theft.

This essay will appear in the April issue of Communications of the ACM.

Posted on March 15, 2005 at 7:54 AM

ChoicePoint Says "Please Regulate Me"

According to ChoicePoint’s most recent 8-K filing:

Based on information currently available, we estimate that approximately 145,000 consumers from 50 states and other territories may have had their personal information improperly accessed as a result of the recent Los Angeles incident and certain other instances of unauthorized access to our information products. Approximately 35,000 of these consumers are California residents, and approximately 110,000 are residents of other states. These numbers were determined by conducting searches of our databases that matched searches conducted by customers who we believe may have had unauthorized access to our information products on or after July 1, 2003, the effective date of the California notification law. Because our databases are constantly updated, our search results will never be identical to the search results of these customers.

Catch that? ChoicePoint actually has no idea whether only 145,000 consumers were affected by its recent security debacle. But it’s not doing any work to determine if more than 145,000 consumers were affected—or if any consumers were affected before July 1, 2003—because there’s no law compelling it to do so.

I have no idea why ChoicePoint has decided to tape a huge “Please Regulate My Industry” sign to its back, but it’s increasingly obvious that it has. There’s a class-action shareholders’ lawsuit, but I don’t think that will be enough.

And, by the way, ChoicePoint’s database is riddled with errors.

Posted on March 9, 2005 at 2:54 PM

ChoicePoint's CISO Speaks

Richard Baich, ChoicePoint’s CISO, is interviewed on SearchSecurity.com:

This is not an information security issue. My biggest concern is the impact this has on the industry from the standpoint that people are saying ChoicePoint was hacked. No we weren’t. This type of fraud happens every day.

Nice spin job, but it just doesn’t make sense. This isn’t a computer hack in the traditional sense, but it’s a social engineering hack of their system. Information security controls were compromised, and confidential information was leaked.

It’s created a media frenzy; this has been mislabeled a hack and a security breach. That’s such a negative impression that suggests we failed to provide adequate protection. Fraud happens every day. Hacks don’t.

So, ChoicePoint believes that providing adequate protection doesn’t include preventing this kind of attack.

I’m sure he’s exaggerating when he says that “this type of fraud happens every day” and “fraud happens every day,” but if it’s true then ChoicePoint has a huge information security problem.

Posted on March 1, 2005 at 10:45 AM

Identity Theft out of Golf Lockers

When someone goes golfing in Japan, he’s given a locker in which to store his valuables. Generally, and at the golf course in question, these are electronic combination locks. The user selects a code himself and locks his valuables. Of course, there’s a back door—a literal one—to the lockers, in case someone forgets his unlock code. Furthermore, the back door allows the administrator of these lockers to read all the codes to all the lockers.

Here’s the scam: A group of thieves worked in conjunction with the locker administrator to open the lockers, copy the golfers’ debit cards, and replace them in their wallets and in their lockers before they were done golfing. In many cases, the golfers used the same code to lock their locker as their bank card PIN, so the thieves got those as well. Then the thieves stole a lot of money from multiple ATMs.

Several factors make this scam even worse. One, unlike in the U.S., ATM cards in Japan have no withdrawal limit; you can literally empty the account. Two, the victims don’t know anything is wrong until they try to use their card somewhere and discover they have no money. Three, the victims, since they play golf at these expensive courses, are usually very rich. And four, unlike in the United States, Japanese banks do not cover losses due to theft.

Posted on March 1, 2005 at 9:20 AM

ChoicePoint

The ChoicePoint fiasco has been news for over a week now, and there are only a few things I can add. For those who haven’t been following along, ChoicePoint mistakenly sold personal credit reports for about 145,000 Americans to criminals.

This story would have never been made public if it were not for SB 1386, a California law requiring companies to notify California residents if any of a specific set of personal information is leaked.

ChoicePoint’s behavior is a textbook example of how to be a bad corporate citizen. The information leak occurred in October, but ChoicePoint didn’t tell any victims until February. At first, it notified only 30,000 Californians and said that it would not notify anyone who lived outside California (since the law didn’t require it). Finally, after public outcry, it announced that it would notify everyone affected.

The clear moral here is that first, SB 1386 needs to be a national law, since without it ChoicePoint would have covered up their mistakes forever. And second, the national law needs to force companies to disclose these sorts of privacy breaches immediately, and not allow them to hide for four months behind the “ongoing FBI investigation” shield.

More is required. Compare ChoicePoint’s public marketing slogans with its private reality.

From “Identity Theft Puts Pressure on Data Sellers,” by Evan Perez, in the 18 Feb 2005 Wall Street Journal:

The current investigation involving ChoicePoint began in October when the company found the 50 accounts it said were fraudulent. According to the company and police, criminals opened the accounts, posing as businesses seeking information on potential employees and customers. They paid fees of $100 to $200, and provided fake documentation, gaining access to a trove of personal data including addresses, phone numbers, and social security numbers.

From ChoicePoint Chairman and CEO Derek V. Smith:

ChoicePoint’s core competency is verifying and authenticating individuals and their credentials.

The reason there is a difference is purely economic. Identity theft is the fastest-growing crime in the U.S., and an enormous problem elsewhere in the world. It’s expensive—both in money and time—to the victims. And there’s not much people can do to stop it, as much of their personal identifying information is not under their control: it’s in the computers of companies like ChoicePoint.

ChoicePoint protects its data, but only to the extent that it values it. The hundreds of millions of people in ChoicePoint’s databases are not ChoicePoint’s customers. They have no power to switch credit agencies. They have no economic pressure that they can bring to bear on the problem. Maybe they should rename the company “NoChoicePoint.”

The upshot of this is that ChoicePoint doesn’t bear the costs of identity theft, so ChoicePoint doesn’t take those costs into account when figuring out how much money to spend on data security. In economic terms, it’s an “externality.”

The point of regulation is to make externalities internal. SB 1386 did that to some extent, since ChoicePoint now must figure the cost of public humiliation when they decide how much money to spend on security. But the actual cost of ChoicePoint’s security failure is much, much greater.

Until ChoicePoint feels those costs—whether through regulation or liability—it has no economic incentive to reduce them. Capitalism works, not through corporate charity, but through the free market. I see no other way of solving the problem.

Posted on February 23, 2005 at 3:19 PM

The Digital Person

Last week, I stayed at the St. Regis hotel in Washington, DC. It was my first visit, and the management gave me a questionnaire, asking me things like my birthday, my spouse’s name and birthday, my anniversary, and my favorite fruits, drinks, and sweets. The purpose was clear; the hotel wanted to be able to offer me a more personalized service the next time I visited. And it was a purpose I agreed with; I wanted more personalized service. But I was very uneasy about filling out the form.

It wasn’t that the information was particularly private. I make no secret of my birthday, or anniversary, or food preferences. Much of that information is even floating around the Web somewhere. Secrecy wasn’t the issue.

The issue was control. In the United States, information about a person is owned by the person who collects it, not by the person it is about. There are specific exceptions in the law, but they’re few and far between. There are no broad data protection laws, as you find in the European Union. There are no Privacy Commissioners, as you find in Canada. Privacy law in the United States is largely about secrecy: if the information is not secret, there’s little you can do to control its dissemination.

As a result, enormous databases exist that are filled with personal information. These databases are owned by marketing firms, credit bureaus, and the government. Amazon knows what books we buy. Our supermarket knows what foods we eat. Credit card companies know quite a lot about our purchasing habits. Credit bureaus know about our financial history, and what they don’t know is contained in bank records. Health insurance records contain details about our health and well-being. Government records contain our Social Security numbers, birthdates, addresses, mother’s maiden names, and a host of other things. Many driver’s license records contain digital pictures.

All of this data is being combined, indexed, and correlated. And it’s being used for all sorts of things. Targeted marketing campaigns are just the tip of the iceberg. This information is used by potential employers to judge our suitability as employees, by potential landlords to determine our suitability as renters, and by the government to determine our likelihood of being a terrorist.

Some stores are beginning to use our data to determine whether we are desirable customers or not. If customers take advantage of too many discount offers or make too many returns, they may be profiled as “bad” customers and be treated differently from the “good” customers.

And with alarming frequency, our data is being abused by identity thieves. The businesses that gather our data don’t care much about keeping it secure. So identity theft is a problem where those who suffer from it—the individuals—are not in a position to improve security, and those who are in a position to improve security don’t suffer from the problem.

The issue here is not about secrecy, it’s about control. The issue is that both government and commercial organizations are building “digital dossiers” about us, and that these dossiers are being used to judge and categorize us through some secret process.

A new book by George Washington University Law Professor Daniel Solove examines the problem of the growing accumulation of personal information in enormous databases. The book is called The Digital Person: Technology and Privacy in the Information Age, and it is a fascinating read.

Solove’s book explores this problem from a legal perspective, explaining what the problem is, how current U.S. law fails to deal with it, and what we should do to protect privacy today. It’s an unusually perceptive discussion of one of the most vexing problems of the digital age—our loss of control over our personal information. It’s a fascinating journey into the almost surreal ways personal information is hoarded, used, and abused in the digital age.

Solove argues that our common conceptualization of the privacy problem as Big Brother—some faceless organization knowing our most intimate secrets—is only one facet of the issue. A better metaphor can be found in Franz Kafka’s The Trial. In the book, a vast faceless bureaucracy constructs a huge dossier about a person, who can’t find out what information exists about him in the dossier, why the information has been gathered, or what it will be used for. Privacy is not about intimate secrets; it’s about who has control of the millions of pieces of personal data that we leave like droppings as we go through our daily life. And until the U.S. legal system recognizes this fact, Americans will continue to live in a world where they have little control over their digital person.

In the end, I didn’t complete the questionnaire from the St. Regis Hotel. While I was fine with the St. Regis in Washington, DC, having that information to make my subsequent stays a little more personal, and was probably fine with that information being shared among other St. Regis hotels, I wasn’t comfortable with the St. Regis doing whatever they wanted with that information. I wasn’t comfortable with them selling the information to a marketing database. I wasn’t comfortable with anyone being able to buy that information. I wasn’t comfortable with that information ending up in a database of my habits, my preferences, my proclivities. It wasn’t the primary use of that information that bothered me, it was the secondary uses.

Solove has done much more thinking about this issue than I have. His book provides a clear account of the social problems involving information privacy, and haunting predictions of where current U.S. legal policy is headed. Even more importantly, the legal solutions he provides are compelling and worth serious consideration. I recommend his book highly.

The book’s website

Order the book on Amazon

Posted on December 9, 2004 at 9:18 AM
