Entries Tagged "databases"


Noticing Data Misuse

Everyone seems to be looking at their databases for personal information leaks.

Tax liens, mortgage papers, deeds, and other real estate-related documents are publicly available in on-line databases run by registries of deeds across the state. The Globe found documents in free databases of all but three Massachusetts counties containing the names and Social Security numbers of Massachusetts residents….

Although registers of deeds said that they are unaware of cases in which criminals used information from their databases maliciously, the information contained in the documents would be more than enough to steal an identity and open new lines of credit….

Isn’t that part of the problem, though? It’s easy to say “we haven’t seen any cases of fraud using our information,” because there’s rarely a way to tell where information comes from. The recent epidemic of public leaks comes from people noticing the leak process, not the effects of the leaks. So everyone thinks their data practices are good, because there have never been any documented abuses stemming from leaks of their data, and everyone is fooling themselves.

Posted on July 5, 2005 at 8:47 AM

Public Disclosure of Personal Data Loss

Citigroup announced that it lost personal data on 3.9 million people. The data was on a set of backup tapes that were sent by UPS (a package delivery service) from point A and never arrived at point B.

This is a huge data loss, and even though it is unlikely that any bad guys got their hands on the data, it will have profound effects on the security of all our personal data.

It might seem that there has been an epidemic of personal-data losses recently, but that’s an illusion. What we’re seeing are the effects of a California law that requires companies to disclose losses or thefts of personal data. It’s always been happening; only now companies have to go public with it.

As a security expert, I like the California law for three reasons. One, data on actual intrusions is useful for research. Two, alerting individuals whose data is lost or stolen is a good idea. And three, increased public scrutiny leads companies to spend more effort protecting personal data.

Think of it as public shaming. Companies will spend money to avoid the PR cost of public shaming. Hence, security improves.

This works, but there’s an attenuation effect going on. As more of these events occur, the press is less likely to report them. When there’s less noise in the press, there’s less public shaming. And when there’s less public shaming, the amount of money companies are willing to spend to avoid it goes down.

This data loss has set a new bar for reporters. Data thefts affecting 50,000 individuals will no longer be news. They won’t be reported.

The notification of individuals also has an attenuation effect. I know people in California who have received a dozen notices about the loss of their personal data. When no identity theft follows, people start believing that it isn’t really a problem. (By and large, they’re right. Most data losses don’t result in identity theft. But that doesn’t mean it’s not a problem.)

Public disclosure is good. But it’s not enough.

Posted on June 8, 2005 at 4:45 PM

U.S. Medical Privacy Law Gutted

In the U.S., medical privacy is largely governed by a 1996 law called HIPAA. Among many other provisions, HIPAA regulates the privacy and security surrounding electronic medical records. HIPAA specifies civil penalties against companies that don’t comply with the regulations, as well as criminal penalties against individuals and corporations who knowingly steal or misuse patient data.

The civil penalties have long been viewed as irrelevant by the health care industry. Now the criminal penalties have been gutted:

An authoritative new ruling by the Justice Department sharply limits the government’s ability to prosecute people for criminal violations of the law that protects the privacy of medical records.

The criminal penalties, the department said, apply to insurers, doctors, hospitals and other providers—but not necessarily their employees or outsiders who steal personal health data.

In short, the department said, people who work for an entity covered by the federal privacy law are not automatically covered by that law and may not be subject to its criminal penalties, which include a $250,000 fine and 10 years in prison for the most serious violations.

This is a complicated issue. Peter Swire worked extensively on this bill as the President’s Chief Counselor for Privacy, and I am going to quote him extensively. First, a story about someone who was convicted under the criminal part of this statute.

In 2004 the U.S. Attorney in Seattle announced that Richard Gibson was being indicted for violating the HIPAA privacy law. Gibson was a phlebotomist (a lab assistant) in a hospital. While at work he accessed the medical records of a person with a terminal cancer condition. Gibson then got credit cards in the patient’s name and ran up over $9,000 in charges, notably for video game purchases. In a statement to the court, the patient said he “lost a year of life both mentally and physically dealing with the stress” of dealing with collection agencies and other results of Gibson’s actions. Gibson signed a plea agreement and was sentenced to 16 months in jail.

According to this Justice Department ruling, Gibson was wrongly convicted. I presume his attorney is working on the matter, and I hope he can be re-tried under our identity theft laws. But because Gibson (or someone else like him) was working in his official capacity, he cannot be prosecuted under HIPAA. And because Gibson (or someone like him) was doing something not authorized by his employer, the hospital cannot be prosecuted under HIPAA.

The healthcare industry has been opposed to HIPAA from the beginning, because it puts constraints on their business in the name of security and privacy. This ruling comes after intense lobbying by the industry at the Department of Health and Human Services and the Justice Department, and is the result of an HHS request for an opinion.

From Swire’s analysis of the Justice Department ruling:

For a law professor who teaches statutory interpretation, the OLC opinion is terribly frustrating to read. The opinion reads like a brief for one side of an argument. Even worse, it reads like a brief that knows it has the losing side but has to come out with a predetermined answer.

I’ve been to my share of HIPAA security conferences. To the extent that big health is following the HIPAA law—and to a large extent, they’re waiting to see how it’s enforced—they are doing so because of the criminal penalties. They know that the civil penalties aren’t that large, and are a cost of doing business. But the criminal penalties were real. Now that they’re gone, the pressure on big health to protect patient privacy is greatly diminished.

Again Swire:

The simplest explanation for the bad OLC opinion is politics. Parts of the health care industry lobbied hard to cancel HIPAA in 2001. When President Bush decided to keep the privacy rule—quite possibly based on his sincere personal views—the industry efforts shifted direction. Industry pressure has stopped HHS from bringing a single civil case out of the 13,000 complaints. Now, after a U.S. Attorney’s office had the initiative to prosecute Mr. Gibson, senior officials in Washington have clamped down on criminal enforcement. The participation of senior political officials in the interpretation of a statute, rather than relying on staff attorneys, makes this political theory even more convincing.

This kind of thing is bigger than the security of the healthcare data of Americans. Our administration is trying to collect more data in its attempt to fight terrorism. Part of that is convincing people—both Americans and foreigners—that this data will be protected. When we gut privacy protections because they might inconvenience business, we’re telling the world that privacy isn’t one of our core concerns.

If the administration doesn’t believe that we need to follow its medical data privacy rules, what makes you think they’re following the FISA rules?

Posted on June 7, 2005 at 12:15 PM

Accuracy of Commercial Data Brokers

PrivacyActivism has released a study of ChoicePoint and Acxiom, two of the U.S.’s largest data brokers. The study looks at accuracy of information and responsiveness to requests for reports.

It doesn’t look good.

From the press release:

100% of the eleven participants in the study discovered errors in background check reports provided by ChoicePoint. The majority of participants found errors in even the most basic biographical information: name, social security number, address and phone number (in 67% of Acxiom reports, 73% of ChoicePoint reports). Moreover, over 40% of participants did not receive their reports from Acxiom—and the ones who did had to wait an average of three months from the time they requested their information until they received it.

I spoke with Deborah Pierce, the Executive Director of PrivacyActivism. She made a couple of interesting points.

First, it was very difficult for them to find a legal way to do this study. There are no mechanisms for any kind of oversight of the industry. They had to find companies who were doing background checks on employees anyway, and who felt that participating in this study with PrivacyActivism was important. Then those companies asked their employees if they wanted to anonymously participate in the study.

Second, they were surprised at just how bad the data is. The most shocking error was that two people out of eleven were listed as corporate directors of companies that they had never heard of. This can’t possibly be statistically meaningful, but it is certainly scary.

Posted on June 7, 2005 at 7:45 AM

Massive Data Theft

At a time when large thefts of personal data are a dime a dozen, this one stands out.

What is thought to be the largest U.S. banking security breach in history has gotten even bigger.

The number of bank accounts accessed illegally by a New Jersey cybercrime ring has grown to 676,000, according to police investigators. That’s up from the initial estimate of 500,000 accounts police said last month had been breached.

Hackensack, N.J., police Det. Capt. Frank Lomia said today that an additional 176,000 accounts were found by investigators who have been probing the ring for several months. All 676,000 consumer accounts involve New Jersey residents who were clients at four different banks, he said.

Even before the latest account tally was made public, the U.S. Department of the Treasury labeled the incident the largest breach of banking security in the U.S. to date.

The case has already led to criminal charges against nine people, including seven former employees of the four banks. The crime ring apparently accessed the data illegally through the former bank workers. None of those employees were IT workers, police said.

One amazing thing about the story is how manual the process was.

The suspects pulled up the account data while working inside their banks, then printed out screen captures of the information or wrote it out by hand, Lomia said. The data was then provided to a company called DRL Associates Inc., which had been set up as a front for the operation. DRL advertised itself as a deadbeat-locator service and as a collection agency, but was not properly licensed for those activities by the state, police said.

And I’m not really sure what the data was stolen for:

The information was then allegedly sold to more than 40 collection agencies and law firms, police said.

Is collections really that big an industry?

Edited to add: Here is some good commentary by Adam Fields.

Posted on May 24, 2005 at 8:49 AM

Processing Exit Visas

From Federal Computer Week:

The Homeland Security Department will choose in the next 60 days which of three procedures it will use to track international visitors leaving the United States, department officials said today.

A report evaluating the three methods under consideration is due in the next few weeks, said Anna Hinken, spokeswoman for US-VISIT, the program that screens foreign nationals entering and exiting the country to weed out potential terrorists.

The first process uses kiosks located throughout an airport or seaport. An “exit attendant”—who would be a contract worker, Hinken said—checks the traveler’s documents. The traveler then steps to the station, scans both index fingers and has a digital photo taken. The station prints out a receipt that verifies the passenger has checked out.

The second method requires the passenger to present the receipt when reaching the departure gate. An exit attendant will scan the receipt and one of the passenger’s index fingers using a wireless handheld device. If the passenger’s fingerprint matches the identity on the receipt, the attendant returns the receipt and the passenger can board.

The third procedure uses just the wireless device at the gate. The screening officer scans the traveler’s fingerprints and takes a picture with the device, which is similar in size to tools that car-rental companies use, Hinken said. The device wirelessly checks the US-VISIT database. Once the traveler’s identity is confirmed as safe, the officer prints out a receipt and the visitor can pass.
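Just to make the mechanics concrete, here’s a rough sketch of what the third procedure’s gate check might look like in software. Everything in it is my invention: the real US-VISIT interfaces aren’t public, and real fingerprint matching is a fuzzy comparison of biometric templates, not an exact lookup.

```python
# Hypothetical sketch of the third exit procedure: a handheld device scans
# the traveler's fingerprints, checks the US-VISIT database, and prints an
# exit receipt on a clean match. All names and fields here are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Visitor:
    name: str
    fingerprint_template: bytes
    flagged: bool  # e.g., on a watch list

def lookup_visitor(template: bytes, db: dict) -> Optional[Visitor]:
    # Stand-in for the database query; a real system would do fuzzy
    # biometric template matching, not an exact dictionary lookup.
    return db.get(template)

def gate_check(scanned_template: bytes, db: dict) -> str:
    visitor = lookup_visitor(scanned_template, db)
    if visitor is None:
        return "REFER: no matching record"
    if visitor.flagged:
        return "REFER: secondary screening"
    return f"CLEARED: print exit receipt for {visitor.name}"
```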

Properly evaluating this trade-off would compare the three systems on relative ease of attack, relative cost, and relative speed and convenience to the traveler. My guess is that the system that requires the least amount of interaction with a person when boarding the plane is best.

Posted on April 20, 2005 at 8:16 AM

Mitigating Identity Theft

Identity theft is the new crime of the information age. A criminal collects enough personal data on someone to impersonate a victim to banks, credit card companies, and other financial institutions. Then he racks up debt in the person’s name, collects the cash, and disappears. The victim is left holding the bag. While some of the losses are absorbed by financial institutions—credit card companies in particular—the credit-rating damage is borne by the victim. It can take years for the victim to clear his name.

Unfortunately, the solutions being proposed in Congress won’t help. To see why, we need to start with the basics. The very term “identity theft” is an oxymoron. Identity is not a possession that can be acquired or lost; it’s not a thing at all. Someone’s identity is the one thing about a person that cannot be stolen.

The real crime here is fraud; more specifically, impersonation leading to fraud. Impersonation is an ancient crime, but the rise of information-based credentials gives it a modern spin. A criminal impersonates a victim online and steals money from his account. He impersonates a victim in order to deceive financial institutions into granting credit to the criminal in the victim’s name. He impersonates a victim to the Post Office and gets the victim’s address changed. He impersonates a victim in order to fool the police into arresting the wrong man. No one’s identity is stolen; identity information is being misused to commit fraud.

The crime involves two very separate issues. The first is the privacy of personal data. Personal privacy is important for many reasons, one of which is impersonation and fraud. As more information about us is collected, correlated, and sold, it becomes easier for criminals to get their hands on the data they need to commit fraud. This is what’s been in the news recently: ChoicePoint, LexisNexis, Bank of America, and so on. But data privacy is about more than just fraud. Whether it is the books we take out of the library, the websites we visit, or the contents of our text messages, most of us have personal data on third-party computers that we don’t want made public. The posting of Paris Hilton’s phone book on the Internet is a celebrity example of this.

The second issue is the ease with which a criminal can use personal data to commit fraud. It doesn’t take much personal information to apply for a credit card in someone else’s name. It doesn’t take much to submit fraudulent bank transactions in someone else’s name. It’s surprisingly easy to get an identification card in someone else’s name. Our current culture, where identity is verified simply and sloppily, makes it easier for a criminal to impersonate his victim.

Proposed fixes tend to concentrate on the first issue—making personal data harder to steal—whereas the real problem is the second. If we’re ever going to manage the risks and effects of electronic impersonation, we must concentrate on preventing and detecting fraudulent transactions.

Fraudulent transactions have nothing to do with the legitimate account holders. Criminals impersonate legitimate users to financial institutions. That means that any solution can’t involve the account holders. That leaves only one reasonable answer: financial institutions need to be liable for fraudulent transactions. They need to be liable for sending erroneous information to credit bureaus based on fraudulent transactions.

They can’t claim that the user must keep his password secure or his machine virus-free. They can’t require the user to monitor his accounts for fraudulent activity, or his credit reports for fraudulently obtained credit cards. Those aren’t reasonable requirements for most users. The bank must be made responsible, regardless of what the user does.

If you think this won’t work, look at credit cards. Credit card companies are liable for all but the first $50 of fraudulent transactions. They’re not hurting for business; and they’re not drowning in fraud, either. They’ve developed and fielded an array of security technologies designed to detect and prevent fraudulent transactions. They’ve pushed most of the actual costs onto the merchants. And almost no security centers around trying to authenticate the cardholder.

That’s an important lesson. Identity theft solutions focus much too much on authenticating the person. Whether it’s two-factor authentication, ID cards, biometrics, or whatever, there’s a widespread myth that authenticating the person is the way to prevent these crimes. But once you understand that the problem is fraudulent transactions, you quickly realize that authenticating the person isn’t the way to proceed.

Again, think about credit cards. Store clerks barely verify signatures when people use cards. People can use credit cards to buy things by mail, phone, or Internet, where no one verifies the signature or even that you have possession of the card. Even worse, no credit card company mandates secure storage requirements for credit cards. They don’t demand that cardholders secure their wallets in any particular way. Credit card companies simply don’t worry about verifying the cardholder or putting requirements on what he does. They concentrate on verifying the transaction.
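To make that concrete, here’s a minimal sketch of what transaction-centric screening might look like. The rules and thresholds are invented for illustration (real card networks use far more sophisticated statistical models), but notice that every signal is a property of the transaction, and none of them involves authenticating the person.

```python
# Illustrative rule-based screening: score each transaction on its own
# features rather than trying to authenticate the cardholder. All rules
# and thresholds are invented for this sketch.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    merchant_category: str

def risk_score(tx: Transaction, recent: list) -> int:
    score = 0
    if tx.amount > 2000:
        score += 2                      # unusually large purchase
    if len(recent) >= 5:
        score += 2                      # high velocity: many recent charges
    if recent and tx.country != recent[-1].country:
        score += 3                      # sudden change of country
    if tx.merchant_category in {"wire_transfer", "gift_cards"}:
        score += 1                      # categories with high fraud rates
    return score

def screen(tx: Transaction, recent: list) -> str:
    s = risk_score(tx, recent)
    if s >= 5:
        return "decline"
    if s >= 3:
        return "hold for review"        # e.g., call the cardholder back
    return "approve"
```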

This same sort of thinking needs to be applied to other areas where criminals use impersonation to commit fraud. I don’t know what the final solutions will look like, but I do know that once financial institutions are liable for losses due to these types of fraud, they will find solutions. Maybe there’ll be a daily withdrawal limit, like there is on ATMs. Maybe large transactions will be delayed for a period of time, or will require a call-back from the bank or brokerage company. Maybe people will no longer be able to open a credit card account by simply filling out a bunch of information on a form. Likely the solution will be a combination of approaches that reduces fraudulent transactions to a manageable level, but we’ll never know until the financial institutions have the financial incentive to put them in place.
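As a purely hypothetical illustration, here’s what two of those mechanisms might look like: a daily withdrawal cap and a call-back hold on large transfers. The limits are made up; the point is only that both checks attach to the transaction, not to the person.

```python
# Invented limits for illustration only.
DAILY_LIMIT = 500.00            # cap on total withdrawals per day
CALLBACK_THRESHOLD = 10000.00   # transfers at or above this wait for a call

def authorize_withdrawal(amount: float, withdrawn_today: float) -> bool:
    # Approve only if the day's running total stays within the cap.
    return withdrawn_today + amount <= DAILY_LIMIT

def process_transfer(amount: float, confirmed_by_phone: bool) -> str:
    # Large transfers are delayed until the bank reaches the customer.
    if amount >= CALLBACK_THRESHOLD and not confirmed_by_phone:
        return "delayed: awaiting call-back confirmation"
    return "executed"
```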

Right now, the economic incentives result in financial institutions that are so eager to allow transactions—new credit cards, cash transfers, whatever—that they’re not paying enough attention to fraudulent transactions. They’ve pushed the costs for fraud onto the merchants. But if they’re liable for losses and damages to legitimate users, they’ll pay more attention. And they’ll mitigate the risks. Security can do all sorts of things, once the economic incentives to apply them are there.

By focusing on the fraudulent use of personal data, I do not mean to minimize the harm caused by third-party data collection and violations of privacy. I believe that the U.S. would be well served by a comprehensive data protection law like the European Union’s. However, I do not believe that a law of this type would significantly reduce the risk of fraudulent impersonation. To mitigate that risk, we need to concentrate on detecting and preventing fraudulent transactions. We need to make the entity that is in the best position to mitigate the risk responsible for that risk. And that means making the financial institutions liable for fraudulent transactions.

Doing anything less simply won’t work.

Posted on April 15, 2005 at 9:17 AM

ChoicePoint Feeling the Heat

AP says:

An executive of embattled data broker ChoicePoint Inc. says the company is developing a system that would allow people to review their personal information that is sold to law enforcement agencies, employers, landlords and businesses. ChoicePoint’s announcement comes a month after it disclosed that thieves used previously stolen identities to create what appeared to be legitimate businesses seeking personal records.

Posted on April 2, 2005 at 9:09 AM

Sybase Practices Dumb Security

From Computerworld:

A threat by Sybase Inc. to sue a U.K.-based security research firm if it publicly discloses the details of eight holes it found in Sybase’s database software last year is evoking sharp criticism from some IT managers but sympathetic comments from others.

I can see why Sybase would prefer it if people didn’t know about vulnerabilities in their software—it’s bad for business—but disclosure is the reason companies are fixing them. If researchers are prohibited from publishing, then software developers are free to ignore security problems.

Posted on April 1, 2005 at 1:24 PM
