Entries Tagged "privacy"


ChoicePoint Says "Please Regulate Me"

According to ChoicePoint’s most recent 8-K filing:

Based on information currently available, we estimate that approximately 145,000 consumers from 50 states and other territories may have had their personal information improperly accessed as a result of the recent Los Angeles incident and certain other instances of unauthorized access to our information products. Approximately 35,000 of these consumers are California residents, and approximately 110,000 are residents of other states. These numbers were determined by conducting searches of our databases that matched searches conducted by customers who we believe may have had unauthorized access to our information products on or after July 1, 2003, the effective date of the California notification law. Because our databases are constantly updated, our search results will never be identical to the search results of these customers.

Catch that? ChoicePoint actually has no idea whether only 145,000 consumers were affected by its recent security debacle. But it’s not doing any work to determine whether more than 145,000 were affected—or whether anyone was affected before July 1, 2003—because there’s no law compelling it to do so.

I have no idea why ChoicePoint has decided to tape a huge “Please Regulate My Industry” sign to its back, but it’s increasingly obvious that it has. There’s a class-action shareholders’ lawsuit, but I don’t think that will be enough.

And, by the way, ChoicePoint’s database is riddled with errors.

Posted on March 9, 2005 at 2:54 PM

Remote Physical Device Fingerprinting

Here’s the abstract:

We introduce the area of remote physical device fingerprinting, or fingerprinting a physical device, as opposed to an operating system or class of devices, remotely, and without the fingerprinted device’s known cooperation. We accomplish this goal by exploiting small, microscopic deviations in device hardware: clock skews. Our techniques do not require any modification to the fingerprinted devices. Our techniques report consistent measurements when the measurer is thousands of miles, multiple hops, and tens of milliseconds away from the fingerprinted device, and when the fingerprinted device is connected to the Internet from different locations and via different access technologies. Further, one can apply our passive and semi-passive techniques when the fingerprinted device is behind a NAT or firewall, and also when the device’s system time is maintained via NTP or SNTP. One can use our techniques to obtain information about whether two devices on the Internet, possibly shifted in time or IP addresses, are actually the same physical device. Example applications include: computer forensics; tracking, with some probability, a physical device as it connects to the Internet from different public access points; counting the number of devices behind a NAT even when the devices use constant or random IP IDs; remotely probing a block of addresses to determine if the addresses correspond to virtual hosts, e.g., as part of a virtual honeynet; and unanonymizing anonymized network traces.

And here’s an article. Really nice work.
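To get a feel for the passive technique, here is a minimal, hypothetical sketch (my own illustration, not the authors’ code) of estimating a remote clock’s skew from TCP timestamp options: pair each remote timestamp with the local receive time, see how the offset between the two drifts, and fit a line to it. The paper’s estimator is more careful, but the slope of that line, in parts per million, is the fingerprint.

# Hypothetical sketch, not the paper's code: estimate a remote device's
# clock skew from TCP timestamp observations. Each sample pairs the
# measurer's receive time (in seconds) with the remote TCP timestamp value,
# assumed to tick at `hz` ticks per second.

def estimate_skew_ppm(samples, hz=100):
    """Least-squares estimate of clock skew in parts per million."""
    t0, ts0 = samples[0]
    xs = [t - t0 for t, _ in samples]                       # elapsed local time
    ys = [(ts - ts0) / hz - (t - t0) for t, ts in samples]  # remote drift vs. local time
    n = len(samples)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * 1e6

# Simulated device whose clock runs about 50 ppm fast, sampled once a minute:
samples = [(t, int(t * (1 + 50e-6) * 100)) for t in range(0, 3600, 60)]
print(estimate_skew_ppm(samples))   # prints roughly 50

Because the skew is essentially constant for a given machine, measurements taken weeks apart and from different networks should produce the same slope, which is what makes it usable as a fingerprint.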

Posted on March 7, 2005 at 3:02 PM

Garbage Cans that Spy on You

From The Guardian:

Though he foresaw many ways in which Big Brother might watch us, even George Orwell never imagined that the authorities would keep a keen eye on your bin.

Residents of Croydon, south London, have been told that the microchips being inserted into their new wheely bins may well be adapted so that the council can judge whether they are producing too much rubbish.

I call this kind of thing “embedded government”: hardware or software put inside a device to make sure that we conform to the law.

And there are security risks.

If, for example, computer hackers broke in to the system, they could see sudden reductions in waste in specific households, suggesting the owners were on holiday and the house vacant.

To me, this is just another example of those implementing policy not being the ones who bear the costs. How long would the policy last if it were made clear to those implementing it that they would be held personally liable, even if only via their departmental budgets or careers, for any losses to residents if the database did get hacked?

Posted on March 4, 2005 at 10:32 AM

Sensitive Information on Used Hard Drives

A research team bought over a hundred used hard drives for about a thousand dollars, and found more than half still contained personal and commercially sensitive information—some of it blackmail material.

People have repeated this experiment again and again, in a variety of countries, and the results have been pretty much the same. People don’t understand the risks of throwing away hard drives containing sensitive information.

What struck me about this story was the wide range of dirt they were able to dig up: insurance company records, a school’s file on its children, evidence of an affair, and so on. And although it cost them a grand to get this, they still had a grand’s worth of salable computer hardware at the end of their experiment.
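Digging up that dirt takes no forensics expertise. Here’s a minimal, hypothetical sketch (the image name and patterns are my assumptions, not anything the researchers published) of scanning a raw drive image for e-mail addresses and Social Security-style numbers; because deleting a file rarely erases the underlying sectors, a naive scan like this still turns up readable data:

import re

# Hypothetical illustration: scan a raw drive image (e.g., one made with dd)
# for sensitive-looking strings left behind by "deleted" files.
EMAIL = re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
SSN = re.compile(rb"\b\d{3}-\d{2}-\d{4}\b")   # U.S. Social Security number shape

hits = []
with open("drive.img", "rb") as img:
    offset = 0
    while chunk := img.read(1 << 20):          # scan 1 MB at a time
        for pattern in (EMAIL, SSN):
            for m in pattern.finditer(chunk):
                hits.append((offset + m.start(), m.group().decode(errors="replace")))
        offset += len(chunk)
        # (matches that straddle a chunk boundary are missed; fine for a sketch)

for pos, text in hits[:20]:
    print(f"offset {pos}: {text}")

The defense is just as simple: overwrite or physically destroy a drive before it leaves your hands.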

Posted on March 2, 2005 at 9:40 AM

ChoicePoint

The ChoicePoint fiasco has been news for over a week now, and there are only a few things I can add. For those who haven’t been following along, ChoicePoint mistakenly sold personal credit reports for about 145,000 Americans to criminals.

This story would have never been made public if it were not for SB 1386, a California law requiring companies to notify California residents if any of a specific set of personal information is leaked.

ChoicePoint’s behavior is a textbook example of how to be a bad corporate citizen. The information leakage occurred in October, but ChoicePoint didn’t tell any victims until February. It first notified 30,000 Californians and said that it would not notify anyone who lived outside California (since the law didn’t require it). Only after public outcry did it announce that it would notify everyone affected.

There are two clear morals here. First, SB 1386 needs to become a national law, because without it ChoicePoint would have covered up its mistakes forever. Second, that national law needs to force companies to disclose these sorts of privacy breaches immediately, and not allow them to hide for four months behind the “ongoing FBI investigation” shield.

More is required. Compare ChoicePoint’s public marketing slogans with its private reality.

From “Identity Theft Puts Pressure on Data Sellers,” by Evan Perez, in the 18 Feb 2005 Wall Street Journal:

The current investigation involving ChoicePoint began in October when the company found the 50 accounts it said were fraudulent. According to the company and police, criminals opened the accounts, posing as businesses seeking information on potential employees and customers. They paid fees of $100 to $200, and provided fake documentation, gaining access to a trove of personal data including addresses, phone numbers, and social security numbers.

From ChoicePoint Chairman and CEO Derek V. Smith:

ChoicePoint’s core competency is verifying and authenticating individuals and their credentials.

The reason there is a difference is purely economic. Identity theft is the fastest-growing crime in the U.S., and an enormous problem elsewhere in the world. It’s expensive—both in money and time—to the victims. And there’s not much people can do to stop it, as much of their personal identifying information is not under their control: it’s in the computers of companies like ChoicePoint.

ChoicePoint protects its data, but only to the extent that it values it. The hundreds of millions of people in ChoicePoint’s databases are not ChoicePoint’s customers. They have no power to switch credit agencies. They have no economic pressure that they can bring to bear on the problem. Maybe they should rename the company “NoChoicePoint.”

The upshot of this is that ChoicePoint doesn’t bear the costs of identity theft, so ChoicePoint doesn’t take those costs into account when figuring out how much money to spend on data security. In economic terms, it’s an “externality.”

The point of regulation is to make externalities internal. SB 1386 did that to some extent, since ChoicePoint now must figure the cost of public humiliation into how much money it decides to spend on security. But the actual cost of ChoicePoint’s security failure is much, much greater.

Until ChoicePoint feels those costs—whether through regulation or liability—it has no economic incentive to reduce them. Capitalism works, not through corporate charity, but through the free market. I see no other way of solving the problem.

Posted on February 23, 2005 at 3:19 PM

Security Risks of Frequent-Shopper Cards

This is from Richard M. Smith:

Tukwila, Washington, firefighter Philip Scott Lyons found out the hard way that supermarket loyalty cards come with a huge price. Lyons was arrested last August and charged with attempted arson. Police alleged at the time that Lyons tried to set fire to his own house while his wife and children were inside. According to KOMO-TV and the Seattle Times, a major piece of evidence used against Lyons in his arrest was the record of the supermarket purchases he made with his Safeway Club Card. Police investigators had discovered that his Club Card was used to buy fire starters of the same type used in the arson attempt.

For Lyons, the story did have a happy ending. All charges against him were dropped in January 2005 after another person stepped forward and said that he, not Lyons, had set the fire. Lyons is now back at work after more than five months of administrative leave from his firefighter job.

The moral of this story is that even the most innocent database can be used against a person in a criminal investigation, turning his life completely upside down.

Safeway needs to be more up-front with customers about the potential downsides of shopper cards. It should also provide the details of its role in the arrest of Mr. Lyons and in other criminal cases in which the company provided Club Card purchase information to police investigators.

Here is how Safeway currently describes its Club Card program in the Club Card application:

We respect your privacy. Safeway does not sell or lease personally identifying information (i.e., your name, address, telephone number, and bank and credit card account numbers) to non-affiliated companies or entities. We do record information regarding the purchases made with your Safeway Club Card to help us provide you with special offers and other information. Safeway also may use this information to provide you with personally tailored coupons, offers or other information that may be provided to Safeway by other companies. If you do not wish to receive personally tailored coupons, offers or other information, please check the box below. Must be at least 18 years of age.

Links:

Firefighter Arrested For Attempted Arson

Fireman attempted to set fire to house, charges say

Tukwila Firefighter Cleared Of Arson Charges

Posted on February 18, 2005 at 8:00 AM

T-Mobile Hack

For at least seven months last year, a hacker had access to T-Mobile’s customer network. He’s known to have accessed information belonging to 400 customers—names, Social Security numbers, voicemail messages, SMS messages, photos—and probably had the ability to access data belonging to any of T-Mobile’s 16.3 million U.S. customers. But in its fervor to report on the security of cell phones, and T-Mobile in particular, the media missed the most important point of the story: The security of much of our data is not under our control.

This is new. A dozen years ago, if someone wanted to look through your mail, they would have to break into your house. Now they can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your house; now it’s on a computer owned by a telephone company. Your financial data is on Websites protected only by passwords. The list of books you browse, and the books you buy, is stored in the computers of some online bookseller. Your affinity card allows your supermarket to know what food you like. Data that used to be under your direct control is now controlled by others.

We have no choice but to trust these companies with our privacy, even though the companies have little incentive to protect that privacy. T-Mobile suffered some bad press for its lousy security, nothing more. It’ll spend some money improving its security, but it’ll be security designed to protect its reputation from bad PR, not security designed to protect the privacy of its customers.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as that data is held by others. The police need a warrant to read the e-mail on your computer, but they don’t need one to read it off the backup tapes at your ISP. According to the Supreme Court, that’s not a search as defined by the Fourth Amendment.

This isn’t a technology problem, it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant—even though it occurred at the phone company switching office—the Supreme Court must recognize that reading e-mail at an ISP is no different.

This essay appeared in eWeek.

Posted on February 14, 2005 at 4:26 PM
