Entries Tagged "privacy"


Interview with Marcus Ranum

There’s some good stuff in this interview.

There’s enough blame for everyone.

Blame the users who don’t secure their systems and applications.

Blame the vendors who write and distribute insecure shovel-ware.

Blame the sleazebags who make their living infecting innocent people with spyware, or sending spam.

Blame Microsoft for producing an operating system that is bloated and has an ineffective permissions model and poor default configurations.

Blame the IT managers who overrule their security practitioners’ advice and put their systems at risk in the interest of convenience. Etc.

Truly, the only people who deserve a complete helping of blame are the hackers. Let’s not forget that they’re the ones doing this to us. They’re the ones who are annoying an entire planet. They’re the ones who are costing us billions of dollars a year to secure our systems against them. They’re the ones who place their desire for fun ahead of everyone on earth’s desire for peace and [the] right to privacy.

Posted on June 27, 2005 at 1:14 PM

CardSystems Exposes 40 Million Identities

The personal information of over 40 million people has been hacked. The hack occurred at CardSystems Solutions, a company that processes credit card transactions. The details are still unclear. The New York Times reports that “data from roughly 200,000 accounts from MasterCard, Visa and other card issuers are known to have been stolen in the breach,” although 40 million were vulnerable. The theft was a deliberate, malicious computer intrusion: the first, I think, among all these recent personal-information breaches. The rest were accidental—backup tapes gone walkabout, for example—or social engineering hacks. Someone was after this data, which implies it’s more likely to result in fraud than those peripatetic backup tapes.

CardSystems says that it found the problem, while MasterCard maintains that it was the one that discovered the breach; the New York Times sides with MasterCard. Microsoft software may be to blame. And in a weird twist, CardSystems admitted it wasn’t supposed to keep the data in the first place.

The official, John M. Perry, chief executive of CardSystems Solutions…said the data was in a file being stored for “research purposes” to determine why certain transactions had registered as unauthorized or uncompleted.

Yeah, right. Research = marketing, I’ll bet.

This is exactly the sort of thing that Visa and MasterCard are trying very hard to prevent. They have imposed their own security requirements on companies—merchants, processors, whoever—that deal with credit card data. Visa has instituted a Cardholder Information Security Program (CISP). MasterCard calls its program Site Data Protection (SDP). These have been combined into a single joint security standard, PCI, which also includes Discover, American Express, JCB, and Diners Club. (More on Visa’s PCI program.)

PCI requirements encompass network security, password management, stored-data encryption, access control, monitoring, testing, policies, etc. And the credit-card companies are backing these requirements up with stiff penalties: cash fines of up to $100,000, increased transaction fees, or termination of the account. For a retailer that does most of its business via credit cards, this is an enormous incentive to comply.

These aren’t laws, they’re contractual business requirements. They’re not imposed by government; the credit card companies are mandating them to protect their brand.

Every credit card company is terrified that people will reduce their credit card usage. They’re worried that all of this press about stolen personal data, as well as actual identity theft and other types of credit card fraud, will scare shoppers off the Internet. They’re worried about how their brands are perceived by the public. And they don’t want some idiot company ruining their reputations by exposing 40 million cardholders to the risk of fraud. (Or, at least, by giving reporters the opportunity to write headlines like “CardSystems Solutions hands over 40M credit cards to hackers.”)

So independent of any laws or government regulations, the credit card companies are forcing companies that process credit card data to increase their security. Companies have to comply with PCI or face serious consequences.

Was CardSystems in compliance? They should have been in compliance with Visa’s CISP by 30 September 2004, and certainly they were in the highest service level. (PCI compliance isn’t required until 30 June 2005—about a week from now.) The reality is more murky.

After the disclosure of the security breach at CardSystems, varying accounts were offered about the company’s compliance with card association standards.

Jessica Antle, a MasterCard spokeswoman, said that CardSystems had never demonstrated compliance with MasterCard’s standards. “They were in violation of our rules,” she said.

It is not clear whether or when MasterCard intervened with the company in the past to insure compliance, but MasterCard said Friday that it had now given CardSystems “a limited amount of time” to do so.

Asked about compliance with Visa’s standards, a Visa spokeswoman, Rosetta Jones, said, “This particular processor was not following Visa’s security requirements when we found out there was a potential data compromise.”

Earlier, Mr. Perry of CardSystems said his company had been audited in December 2003 by an unspecified independent assessor and had received a seal of approval from the Visa payment associations in June 2004.

All of this demonstrates some limitations of any certification system. One, companies can take advantage of interpersonal and intercompany politics to get themselves special treatment with respect to the policies. And two, all audits rely to a great extent on self-assessment and self-disclosure. If a company is willing to lie to an auditor, it’s unlikely that it will get caught.

Unless they get really caught, like this incident.

Self-reporting only works if the punishment exceeds the crime. The reason people accurately declare what they bring into the country on their customs forms, for example, is because the penalties for lying are far more expensive than paying any duty owed.

If the credit card industry wants their PCI requirements taken seriously, they need to make an example out of CardSystems. They need to revoke whatever credit card processing license CardSystems has, to the maximum extent possible by whatever contracts they have in place. Only by making CardSystems a demonstration of what happens to someone who doesn’t comply will everyone else realize that they had better comply.

(CardSystems should also face criminal prosecution, but that’s unlikely in today’s business-friendly political environment.)

I have great hopes for PCI. I like security solutions that involve contracts between companies more than I like government intervention. Often the latter is required, but the former is more effective. Here’s PCI’s chance to demonstrate its effectiveness.

Posted on June 23, 2005 at 8:55 AM

U.S. Medical Privacy Law Gutted

In the U.S., medical privacy is largely governed by a 1996 law called HIPAA. Among many other provisions, HIPAA regulates the privacy and security surrounding electronic medical records. HIPAA specifies civil penalties against companies that don’t comply with the regulations, as well as criminal penalties against individuals and corporations who knowingly steal or misuse patient data.

The civil penalties have long been viewed as irrelevant by the health care industry. Now the criminal penalties have been gutted:

An authoritative new ruling by the Justice Department sharply limits the government’s ability to prosecute people for criminal violations of the law that protects the privacy of medical records.

The criminal penalties, the department said, apply to insurers, doctors, hospitals and other providers—but not necessarily their employees or outsiders who steal personal health data.

In short, the department said, people who work for an entity covered by the federal privacy law are not automatically covered by that law and may not be subject to its criminal penalties, which include a $250,000 fine and 10 years in prison for the most serious violations.

This is a complicated issue. Peter Swire worked extensively on this bill as the President’s Chief Counselor for Privacy, and I am going to quote him extensively. First, a story about someone who was convicted under the criminal part of this statute.

In 2004 the U.S. Attorney in Seattle announced that Richard Gibson was being indicted for violating the HIPAA privacy law. Gibson was a phlebotomist (a lab assistant) in a hospital. While at work he accessed the medical records of a person with a terminal cancer condition. Gibson then got credit cards in the patient’s name and ran up over $9,000 in charges, notably for video game purchases. In a statement to the court, the patient said he “lost a year of life both mentally and physically dealing with the stress” of dealing with collection agencies and other results of Gibson’s actions. Gibson signed a plea agreement and was sentenced to 16 months in jail.

According to this Justice Department ruling, Gibson was wrongly convicted. I presume his attorney is working on the matter, and I hope he can be re-tried under our identity theft laws. But because Gibson (or someone else like him) was working in his official capacity, he cannot be prosecuted under HIPAA. And because Gibson (or someone like him) was doing something not authorized by his employer, the hospital cannot be prosecuted under HIPAA.

The healthcare industry has been opposed to HIPAA from the beginning, because it puts constraints on their business in the name of security and privacy. This ruling comes after intense lobbying by the industry at the Department of Health and Human Services and the Justice Department, and is the result of an HHS request for an opinion.

From Swire’s analysis of the Justice Department ruling:

For a law professor who teaches statutory interpretation, the OLC opinion is terribly frustrating to read. The opinion reads like a brief for one side of an argument. Even worse, it reads like a brief that knows it has the losing side but has to come out with a predetermined answer.

I’ve been to my share of HIPAA security conferences. To the extent that big health is following the HIPAA law—and to a large extent, they’re waiting to see how it’s enforced—they are doing so because of the criminal penalties. They know that the civil penalties aren’t that large, and are a cost of doing business. But the criminal penalties were real. Now that they’re gone, the pressure on big health to protect patient privacy is greatly diminished.

Again Swire:

The simplest explanation for the bad OLC opinion is politics. Parts of the health care industry lobbied hard to cancel HIPAA in 2001. When President Bush decided to keep the privacy rule—quite possibly based on his sincere personal views—the industry efforts shifted direction. Industry pressure has stopped HHS from bringing a single civil case out of the 13,000 complaints. Now, after a U.S. Attorney’s office had the initiative to prosecute Mr. Gibson, senior officials in Washington have clamped down on criminal enforcement. The participation of senior political officials in the interpretation of a statute, rather than relying on staff attorneys, makes this political theory even more convincing.

This kind of thing is bigger than the security of the healthcare data of Americans. Our administration is trying to collect more data in its attempt to fight terrorism. Part of that is convincing people—both Americans and foreigners—that this data will be protected. When we gut privacy protections because they might inconvenience business, we’re telling the world that privacy isn’t one of our core concerns.

If the administration doesn’t believe that we need to follow its medical data privacy rules, what makes you think they’re following the FISA rules?

Posted on June 7, 2005 at 12:15 PM

Accuracy of Commercial Data Brokers

PrivacyActivism has released a study of ChoicePoint and Acxiom, two of the U.S.’s largest data brokers. The study looks at accuracy of information and responsiveness to requests for reports.

It doesn’t look good.

From the press release:

100% of the eleven participants in the study discovered errors in background check reports provided by ChoicePoint. The majority of participants found errors in even the most basic biographical information: name, social security number, address and phone number (in 67% of Acxiom reports, 73% of ChoicePoint reports). Moreover, over 40% of participants did not receive their reports from Acxiom—and the ones who did had to wait an average of three months from the time they requested their information until they received it.

I spoke with Deborah Pierce, the Executive Director of PrivacyActivism. She made a couple of interesting points.

First, it was very difficult for them to find a legal way to do this study. There are no mechanisms for any kind of oversight of the industry. They had to find companies who were doing background checks on employees anyway, and who felt that participating in this study with PrivacyActivism was important. Then those companies asked their employees if they wanted to anonymously participate in the study.

Second, they were surprised at just how bad the data is. The most shocking error was that two people out of eleven were listed as corporate directors of companies that they had never heard of. This can’t possibly be statistically meaningful, but it is certainly scary.

Posted on June 7, 2005 at 7:45 AM

Attack on the Bluetooth Pairing Process

There’s a new cryptographic result against Bluetooth. Yaniv Shaked and Avishai Wool of Tel Aviv University in Israel have figured out how to recover the PIN by eavesdropping on the pairing process.

Pairing is an important part of Bluetooth. It’s how two devices—a phone and a headset, for example—associate themselves with one another. They generate a shared secret that they use for all future communication. Pairing is why, when on a crowded subway, your Bluetooth devices don’t link up with all the other Bluetooth devices carried by everyone else.

According to the Bluetooth specification, PINs can be 8 to 128 bits long. Unfortunately, most manufacturers have standardized on a four-decimal-digit PIN. This attack can crack that 4-digit PIN in less than 0.3 seconds on an old 450 MHz Pentium III, and in 0.06 seconds on a 3 GHz Pentium 4 with HyperThreading.
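To see why a four-decimal-digit PIN falls so quickly, consider that there are only 10,000 candidates to try. Here's a minimal sketch of that exhaustive search; the `matches_eavesdropped_transcript` function and the demo PIN are hypothetical stand-ins — the real attack re-runs the Bluetooth key-derivation functions for each guess and compares against values captured during pairing, which this sketch does not implement.

```python
import itertools
import time

def matches_eavesdropped_transcript(pin: str) -> bool:
    # Stand-in for the real check: Shaked and Wool's attack recomputes
    # the Bluetooth key derivation for each candidate PIN and compares
    # the results against messages eavesdropped during pairing.
    return pin == "4735"  # hypothetical "true" PIN for this demo

start = time.perf_counter()
cracked = None
# Enumerate every possible 4-decimal-digit PIN: just 10,000 of them.
for digits in itertools.product("0123456789", repeat=4):
    pin = "".join(digits)
    if matches_eavesdropped_transcript(pin):
        cracked = pin
        break
elapsed = time.perf_counter() - start

print(cracked)
```

Even in interpreted Python the full space is exhausted in well under a second; with the actual per-guess cryptographic work the authors still needed only fractions of a second on 2005-era hardware.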

At first glance, this attack isn’t a big deal. It only works if you can eavesdrop on the pairing process. Pairing is something that occurs rarely, and generally in the safety of your home or office. But the authors have figured out how to force a pair of Bluetooth devices to repeat the pairing process, allowing them to eavesdrop on it. They pretend to be one of the two devices, and send a message to the other claiming to have forgotten the link key. This prompts the other device to discard the key, and the two then begin a new pairing session.

Taken together, this is an impressive result. I can’t be sure, but I believe it would allow an attacker to take control of someone’s Bluetooth devices. Certainly it allows an attacker to eavesdrop on someone’s Bluetooth network.

News story here.

Posted on June 3, 2005 at 10:19 AM

Deep Throat Tradecraft

The politics is certainly interesting, but I am impressed with Felt’s tradecraft. Read Bob Woodward’s description of how he would arrange secret meetings with Felt.

I tried to call Felt, but he wouldn’t take the call. I tried his home in Virginia and had no better luck. So one night I showed up at his Fairfax home. It was a plain-vanilla, perfectly kept, everything-in-its-place suburban house. His manner made me nervous. He said no more phone calls, no more visits to his home, nothing in the open.

I did not know then that in Felt’s earliest days in the FBI, during World War II, he had been assigned to work on the general desk of the Espionage Section. Felt learned a great deal about German spying in the job, and after the war he spent time keeping suspected Soviet agents under surveillance.

So at his home in Virginia that summer, Felt said that if we were to talk it would have to be face to face where no one could observe us.

I said anything would be fine with me.

We would need a preplanned notification system—a change in the environment that no one else would notice or attach any meaning to. I didn’t know what he was talking about.

If you keep the drapes in your apartment closed, open them and that could signal me, he said. I could check each day or have them checked, and if they were open we could meet that night at a designated place. I liked to let the light in at times, I explained.

We needed another signal, he said, indicating that he could check my apartment regularly. He never explained how he could do this.

Feeling under some pressure, I said that I had a red cloth flag, less than a foot square—the kind used as warnings on long truck loads—that a girlfriend had found on the street. She had stuck it in an empty flowerpot on my apartment balcony.

Felt and I agreed that I would move the flowerpot with the flag, which usually was in the front near the railing, to the rear of the balcony if I urgently needed a meeting. This would have to be important and rare, he said sternly. The signal, he said, would mean we would meet that same night about 2 a.m. on the bottom level of an underground garage just over the Key Bridge in Rosslyn.

Felt said I would have to follow strict countersurveillance techniques. How did I get out of my apartment?

I walked out, down the hall, and took the elevator.

Which takes you to the lobby? he asked.

Yes.

Did I have back stairs to my apartment house?

Yes.

Use them when you are heading for a meeting. Do they open into an alley?

Yes.

Take the alley. Don’t use your own car. Take a taxi to several blocks from a hotel where there are cabs after midnight, get dropped off and then walk to get a second cab to Rosslyn. Don’t get dropped off directly at the parking garage. Walk the last several blocks. If you are being followed, don’t go down to the garage. I’ll understand if you don’t show. All this was like a lecture. The key was taking the necessary time—one to two hours to get there. Be patient, serene. Trust the prearrangements. There was no fallback meeting place or time. If we both didn’t show, there would be no meeting.

Felt said that if he had something for me, he could get me a message. He quizzed me about my daily routine, what came to my apartment, the mailbox, etc. The Post was delivered outside my apartment door. I did have a subscription to the New York Times. A number of people in my apartment building near Dupont Circle got the Times. The copies were left in the lobby with the apartment number. Mine was No. 617, and it was written clearly on the outside of each paper in marker pen. Felt said if there was something important he could get to my New York Times—how, I never knew. Page 20 would be circled, and the hands of a clock in the lower part of the page would be drawn to indicate the time of the meeting that night, probably 2 a.m., in the same Rosslyn parking garage.

The relationship was a compact of trust; nothing about it was to be discussed or shared with anyone, he said.

How he could have made a daily observation of my balcony is still a mystery to me. At the time, before the era of intensive security, the back of the building was not enclosed, so anyone could have driven in the back alley to observe my balcony. In addition, my balcony and the back of the apartment complex faced onto a courtyard or back area that was shared with a number of other apartment or office buildings in the area. My balcony could have been seen from dozens of apartments or offices, as best I can tell.

A number of embassies were located in the area. The Iraqi Embassy was down the street, and I thought it possible that the FBI had surveillance or listening posts nearby. Could Felt have had the counterintelligence agents regularly report on the status of my flag and flowerpot? That seems highly unlikely, if not impossible.

Posted on June 2, 2005 at 4:31 PM

Fingerprint Library Cards

Biometric library cards are coming to Naperville, Illinois.

On the one hand, the library is just storing a data string derived from the fingerprint, and not the fingerprint itself. But I have a hard time believing the second paragraph below.

Library Deputy Director Mark West said the system will be implemented over the summer beginning with a public education campaign in June. West said he is confident the public will embrace the technology once it learns its limitations.

The stored numeric data cannot be used to reconstruct a fingerprint, West said, nor can it be cross-referenced with other fingerprint databases such as those kept by the FBI or the Illinois State Police.

Nor do I have any faith in this sentence:

Officials promise to protect the confidentiality of the fingerprint records.
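For what it's worth, the "stored numeric data" claim refers to keeping a derived value rather than the print itself. Here's a minimal sketch of why a salted one-way derivation can't be inverted or cross-referenced with another database; all names here are hypothetical, and real fingerprint systems match noisy minutiae templates with fuzzy comparison rather than exact hashes, which is one reason such claims deserve scrutiny.

```python
import hashlib

def template_to_id(template_bytes: bytes, salt: bytes) -> str:
    # One-way: the stored hex string cannot be inverted back to the
    # template, and a per-deployment salt means the same finger yields
    # a different string in a differently salted database.
    return hashlib.sha256(salt + template_bytes).hexdigest()

library_salt = b"library-deployment-salt"  # hypothetical per-site value
template = b"\x01\x02\x03minutiae-demo-data"  # hypothetical template bytes

stored = template_to_id(template, library_salt)
other_db = template_to_id(template, b"some-other-salt")

print(stored != other_db)  # same finger, different databases, no match
```

Whether a vendor's actual scheme has these properties is exactly the kind of thing that can't be verified from a press statement.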

Posted on May 23, 2005 at 7:44 AM

Surveillance Cameras in U.S. Cities

From EPIC:

The Department of Homeland Security (DHS) has requested more than $2 billion to finance grants to state and local governments for homeland security needs. Some of this money is being used by state and local governments to create networks of surveillance cameras to watch over the public in the streets, shopping centers, at airports and more. However, studies have found that such surveillance systems have little effect on crime, and that it is more effective to place more officers on the streets and improve lighting in high-crime areas. There are significant concerns about citizens’ privacy rights and misuse or abuse of the system. A professor at the University of Nevada at Reno has alleged that the university used a homeland security camera system to surreptitiously watch him after he filed a complaint alleging that the university abused its research animals. Also, British studies have found there is a significant danger of racial discrimination and stereotyping by those monitoring the cameras.

Posted on May 16, 2005 at 9:00 AM

Company Continues Bad Information Security Practices

Stories about thefts of personal data are a dime a dozen these days, and are generally not worth writing about.

This one has an interesting coda, though.

An employee hoping to get extra work done over the weekend printed out 2004 payroll information for hundreds of SafeNet’s U.S. employees, snapped it into a briefcase and placed the briefcase in a car.

The car was broken into over the weekend and the briefcase stolen—along with the employees’ names, bank account numbers and Social Security numbers that were on the printouts, a company spokeswoman confirmed yesterday.

My guess is that most readers can point out the bad security practices here. One, the Social Security numbers and bank account numbers should not be kept with the bulk of the payroll data. Ideally, they should use employee numbers and keep sensitive (but irrelevant for most of the payroll process) information separate from the bulk of the commonly processed payroll data. And two, hard copies of that sensitive information should never go home with employees.
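The separation idea can be sketched concretely: key both stores by an opaque employee number, and make routine payroll jobs read only the non-sensitive table. The table layouts and field names below are hypothetical illustrations, not SafeNet's actual schema.

```python
# Commonly processed payroll data, keyed by an opaque employee number.
# This is the only table a routine report ever touches.
payroll = {
    "E-1001": {"name": "J. Doe", "gross_pay": 4200.00, "period": "2004-12"},
}

# Sensitive identifiers live in a separate, access-controlled store.
# Placeholders only; real values would never appear in example code.
sensitive = {
    "E-1001": {"ssn": "xxx-xx-xxxx", "bank_account": "(withheld)"},
}

def run_payroll_report(employee_no: str) -> str:
    # Built entirely from the non-sensitive table: a stolen printout of
    # this output exposes no SSNs or bank account numbers.
    rec = payroll[employee_no]
    return f"{rec['name']}: {rec['gross_pay']:.2f} for {rec['period']}"

print(run_payroll_report("E-1001"))
```

With this split, the employee who took work home over the weekend would have been carrying a report with no sensitive identifiers on it at all.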

But SafeNet won’t learn from its mistake:

The company said no policies were violated, and that no new policies are being written as a result of this incident.

The irony here is that this is a security company.

Posted on May 10, 2005 at 3:00 PM
