Entries Tagged "data breaches"


Identity-Theft Disclosure Laws

California was the first state to pass a law requiring companies that keep personal data to disclose when that data is lost or stolen. Since then, many states have followed suit. Now Congress is debating federal legislation that would do the same thing nationwide.

Except that it won’t do the same thing: The federal bill has become so watered down that it won’t be very effective. I would still be in favor of it—a poor federal law is better than none—if it didn’t also pre-empt more-effective state laws, which makes it a net loss.

Identity theft is the fastest-growing area of crime. It’s badly named—your identity is the one thing that cannot be stolen—and is better thought of as fraud by impersonation. A criminal collects enough personal information about you to be able to impersonate you to banks, credit card companies, brokerage houses, etc. Posing as you, he steals your money, or takes a destructive joyride on your good credit.

Many companies keep large databases of personal data that is useful to these fraudsters. But because the companies don’t shoulder the cost of the fraud, they’re not economically motivated to secure those databases very well. In fact, if your personal data is stolen from their databases, they would much rather not even tell you: Why deal with the bad publicity?

Disclosure laws force companies to make these security breaches public. This is a good idea for three reasons. One, it is good security practice to notify potential identity theft victims that their personal information has been lost or stolen. Two, statistics on actual data thefts are valuable for research purposes. And three, the potential cost of the notification and the associated bad publicity naturally leads companies to spend more money on protecting personal information—or to refrain from collecting it in the first place.

Think of it as public shaming. Companies will spend money to avoid the PR costs of this shaming, and security will improve. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.

This public shaming needs the cooperation of the press and, unfortunately, there’s an attenuation effect going on. The first major breach after California passed its disclosure law—SB1386—was in February 2005, when ChoicePoint sold personal data on 145,000 people to criminals. The event was all over the news, and ChoicePoint was shamed into improving its security.

Then LexisNexis exposed personal data on 300,000 individuals. And Citigroup lost data on 3.9 million individuals. SB1386 worked; the only reason we knew about these security breaches was because of the law. But the breaches came in increasing numbers, and in larger quantities. After a while, it was no longer news. And when the press stopped reporting, the “cost” of these breaches to the companies declined.

Today, the only real cost that remains is the cost of notifying customers and issuing replacement cards. It costs banks about $10 to issue a new card, and that’s money they would much rather not have to spend. This is the agenda they brought to the federal bill, cleverly titled the Data Accountability and Trust Act, or DATA.

Lobbyists attacked the legislation in two ways. First, they went after the definition of personal information. Only the exposure of very specific information requires disclosure. For example, the theft of a database that contained people’s first initial, middle name, last name, Social Security number, bank account number, address, phone number, date of birth, mother’s maiden name and password would not have to be disclosed, because “personal information” is defined as “an individual’s first and last name in combination with …” certain other personal data.

Second, lobbyists went after the definition of “breach of security.” The latest version of the bill reads: “The term ‘breach of security’ means the unauthorized acquisition of data in electronic form containing personal information that establishes a reasonable basis to conclude that there is a significant risk of identity theft to the individuals to whom the personal information relates.”

Get that? If a company loses a backup tape containing millions of individuals’ personal information, it doesn’t have to disclose if it believes there is no “significant risk of identity theft.” If it leaves a database exposed, and has absolutely no audit logs of who accessed that database, it could claim it has no “reasonable basis” to conclude there is a significant risk. Actually, the company could point to a study that showed the probability of fraud to someone who has been the victim of this kind of data loss to be less than 1 in 1,000—which is not a “significant risk”—and then not disclose the data breach at all.

Even worse, this federal law pre-empts the 23 existing state laws—and others being considered—many of which contain stronger individual protections. So while DATA might look like a law protecting consumers nationwide, it is actually a law protecting companies with large databases from state laws protecting consumers.

So in its current form, this legislation would make things worse, not better.

Of course, things are in flux. They’re always in flux. The language of the bill has changed regularly over the past year, as various committees got their hands on it. There’s also another bill, HR3997, which is even worse. And even if something passes, it has to be reconciled with whatever the Senate passes, and then voted on again. So no one really knows what the final language will look like.

But the devil is in the details, and the only way to protect us from lobbyists tinkering with the details is to ensure that the federal bill does not pre-empt any state bills: that the federal law is a minimum, but that states can require more.

That said, disclosure is important, but it’s not going to solve identity theft. As I’ve written previously, the reason theft of personal information is so common is that the data is so valuable. The way to mitigate the risk of fraud due to impersonation is not to make personal information harder to steal, it’s to make it harder to use.

Disclosure laws only deal with the economic externality of data brokers protecting your personal information. What we really need are laws prohibiting credit card companies and other financial institutions from granting credit to someone using your name with only a minimum of authentication.

But until that happens, we can at least hope that Congress will refrain from passing bad bills that override good state laws—and helping criminals in the process.

This essay originally appeared on Wired.com.

EDITED TO ADD (4/20): Here’s a comparison of state disclosure laws.

Posted on April 20, 2006 at 8:11 AM

Military Secrets for Sale in Afghanistan

Stolen goods are being sold in the markets, including hard drives filled with classified data.

A reporter recently obtained several drives at the bazaar that contained documents marked “Secret.” The contents included documents that were potentially embarrassing to Pakistan, a U.S. ally, presentations that named suspected militants targeted for “kill or capture” and discussions of U.S. efforts to “remove” or “marginalize” Afghan government officials whom the military considered “problem makers.”

The drives also included deployment rosters and other documents that identified nearly 700 U.S. service members and their Social Security numbers, information that identity thieves could use to open credit card accounts in soldiers’ names.

EDITED TO ADD (4/12): NPR story.

Posted on April 12, 2006 at 6:25 AM

Air Force One Security Leak

Last week the San Francisco Chronicle broke the story that Air Force One’s defenses were exposed on a public Internet site:

Thus, the Air Force reacted with alarm last week after The Chronicle told the Secret Service that a government document containing specific information about the anti-missile defenses on Air Force One and detailed interior maps of the two planes—including the location of Secret Service agents within the planes—was posted on the Web site of an Air Force base.

The document also shows the location where a terrorist armed with a high-caliber sniper rifle could detonate the tanks that supply oxygen to Air Force One’s medical facility.

And a few days later:

Air Force and Pentagon officials scrambled Monday to remove highly sensitive security details about the two Air Force One jetliners after The Chronicle reported that the information had been posted on a public Web site.

The security information—contained in a “technical order”—is used by rescue crews in the event of an emergency aboard various Air Force planes. But this order included details about Air Force One’s anti-missile systems, the location of Secret Service personnel within the aircraft and information on other vulnerabilities that terrorists or a hostile military force could exploit to try to damage or destroy Air Force One, the president’s air carrier.

“We are dealing with literally hundreds of thousands of Web pages, and Web pages are reviewed on a regular basis, but every once in a while something falls through the cracks,” Air Force spokeswoman Lt. Col. Catherine Reardon told The Chronicle.

“We can’t even justify how (the technical order) got out there. It should have been password-protected. We regret it happened. We removed it, and we will look more closely in the future.”

Turns out that this story involves a whole lot more hype than actual security.

The document Caffera found is part of the Air Force’s Technical Order 00-105E-9 – Aerospace Emergency Rescue and Mishap Response Information (Emergency Services) Revision 11. It resided, until recently, on the web site of the Air Logistics Center at Warner Robins Air Force Base. The purpose is pretty straight-ahead: “Recent technological advances in aviation have caused concern for the modern firefighter.” So the document gives “aircraft hazards, cabin configurations, airframe materials, and any other information that would be helpful in fighting fires.”

A February 2006 briefing from the Air Force Civil Engineer Support Agency explains that the document is “used by foreign governments or international organizations and is cleared to share this information with the general global public…distribution is unlimited.” The Technical Order existed solely on paper from 1970 to mid-1996, when the Secretary of the Air Force directed that henceforth all technical orders be distributed electronically (for a savings of $270,000 a year). The first CD-ROMs were distributed in January 1999 and the web site at Warner Robins was set up 10 months later. A month after that, the web site became the only place to access the documents, which are routinely updated to reflect changes in aircraft or new regulations.

But back to the document Caffera found. It’s hardly a secret that Air Force One has defenses against surface-to-air missiles. The page that so troubled Caffera indicates that the plane employs infrared countermeasures, with radiating units positioned on the tail and next to or on all four engine pylons. Why does the document provide that level of detail? Because emergency responders could be injured if they walk within a certain radius of one of the IR units while it is operating.

Nor is it remarkable that Secret Service agents would sit in areas on the plane that are close to the President’s suite, as well as between reporters, who are known to sit in the back of the plane, and everyone else. Exactly how this information endangers anyone is unclear. But it would help emergency responders in figuring out where to look for people in the event of an accident. (Interestingly, conjectural drawings of the layout of Air Force One like this one are pretty close to the real deal.)

As for hitting the medical oxygen tanks to destroy the plane, you’d have to be really, really lucky to do that while the plane is moving at any significant speed. And if it’s standing still and you are after the President and armed with a high-caliber sniper rifle, why wouldn’t you target him directly? Besides, if you wanted to make the plane explode, it would be much easier to aim for the fuel tanks in the wings (which when fully-loaded hold 53,611 gallons). Terrorists don’t need a diagram to figure that out. But a rescuer would want this information so that the oxygen valves could be turned off to mitigate the risk of a fire or explosion.

[…]

An Air Force source familiar with the history and purpose of the documents who asked not to be identified laughed when told of the above quote, reiterated that the Technical Order is and always has been unclassified, and said it is unclear how the document can be distributed now, adding that firefighters in particular won’t like any changes that make their jobs more difficult or dangerous.

“The order came down this afternoon [Monday] to remove this particular technical order from the public Web site,” said John Birdsong, chief of media relations at Warner Robins Air Logistics Center, the air base in Georgia that had originally posted the order on its publicly accessible Web site.

According to Birdsong, the directive to remove the document came from a number of officials, including Dan McGarvey, the chief of information security for the Air Force at the Pentagon.

Muddying things still further are comments from Jean Schaefer, deputy chief of public affairs for the Secretary of the Air Force. “We have very clear policies of what should be on the Web,” she said. “We need to emphasize the policy to the field. It appears that this document shouldn’t have been on the Web, and we have pulled the document in question. Our policy is clear in that documents that could make our operations vulnerable or threaten the safety of our people should not be available on the Web.”

And now, apparently, neither should documents that help ensure the safety of our pilots, aircrews, firefighters and emergency responders.

Another news report.

Some blogs criticized the San Francisco Chronicle for publishing this, because it gives the terrorists more information. I think they should be criticized for publishing this, because there’s no story here.

EDITED TO ADD (4/11): Much of the document is here.

Posted on April 11, 2006 at 2:40 PM

Security Through Begging

From TechDirt:

Last summer, the surprising news came out that Japanese nuclear secrets leaked out, after a contractor was allowed to connect his personal virus-infested computer to the network at a nuclear power plant. The contractor had a file sharing app on his laptop as well, and suddenly nuclear secrets were available to plenty of kids just trying to download the latest hit single. It’s only taken about nine months for the government to come up with its suggestion on how to prevent future leaks of this nature: begging all Japanese citizens not to use file sharing systems—so that the next time this happens, there won’t be anyone on the network to download such documents.

Even if their begging works, it solves the wrong problem. Sad.

EDITED TO ADD (3/22): Another article.

Posted on March 20, 2006 at 2:01 PM

More on the ATM-Card Class Break

A few days ago, I wrote about the class break of Citibank ATM cards in Canada, the UK, and Russia. This is new news:

With consumers around the country reporting mysterious fraudulent account withdrawals, and multiple banks announcing problems with stolen account information, it appears thieves have unleashed a powerful new way to steal money from cash machines.

Criminals have stolen bank account data from a third-party company, several banks have said, and then used the data to steal money from related accounts using counterfeit cards at ATM machines.

The central question surrounding the new wave of crime is this: How did the thieves manage to foil the PIN code system designed to fend off such crimes? Investigators are considering the possibility that criminals have stolen PIN codes from a retailer, MSNBC has learned.

Read the whole article. Details are emerging slowly, but there’s still a lot we don’t know.

EDITED TO ADD (3/11): More info in these four articles.

Posted on March 9, 2006 at 3:51 PM

Unfortunate Court Ruling Regarding Gramm-Leach-Bliley

“A Federal Court Rules That A Financial Institution Has No Duty To Encrypt A Customer Database”:

In a legal decision that could have broad implications for financial institutions, a court has ruled recently that a student loan company was not negligent and did not have a duty under the Gramm-Leach-Bliley statute to encrypt a customer database on a laptop computer that fell into the wrong hands.

Basically, an employee of Brazos Higher Education Service Corporation, Inc., had customer information on a laptop computer he was using at home. The computer was stolen, and a customer sued Brazos.

The judge dismissed the lawsuit. And then he went further:

Significantly, while recognizing that Gramm-Leach-Bliley does require financial institutions to protect against unauthorized access to customer records, Judge Kyle held that the statute “does not prohibit someone from working with sensitive data on a laptop computer in a home office,” and does not require that “any nonpublic personal information stored on a laptop computer should be encrypted.”

I know nothing of the legal merits of the case, nor do I have an opinion about whether Gramm-Leach-Bliley does or does not require financial companies to encrypt personal data in its purview. But I do know that we as a society need to force companies to encrypt personal data about us. Companies won’t do it on their own—the market just doesn’t encourage this behavior—so legislation or liability are the only available mechanisms. If this law doesn’t do it, we need another one.

EDITED TO ADD (2/22): Some commentary here.

Posted on February 21, 2006 at 1:34 PM

Risks of Losing Portable Devices

Last July I blogged about the risks of storing ever-larger amounts of data in ever-smaller devices.

Last week I wrote my tenth Wired.com column on the topic:

The point is that it’s now amazingly easy to lose an enormous amount of information. Twenty years ago, someone could break into my office and copy every customer file, every piece of correspondence, everything about my professional life. Today, all he has to do is steal my computer. Or my portable backup drive. Or my small stack of DVD backups. Furthermore, he could sneak into my office and copy all this data, and I’d never know it.

This problem isn’t going away anytime soon.

There are two solutions that make sense. The first is to protect the data. Hard-disk encryption programs like PGP Disk allow you to encrypt individual files, folders or entire disk partitions. Several manufacturers market USB thumb drives with built-in encryption. Some PDA manufacturers are starting to add password protection—not as good as encryption, but at least it’s something—to their devices, and there are some aftermarket PDA encryption programs.

The second solution is to remotely delete the data if the device is lost. This is still a new idea, but I believe it will gain traction in the corporate market. If you give an employee a BlackBerry for business use, you want to be able to wipe the device’s memory if he loses it. And since the device is online all the time, it’s a pretty easy feature to add.

But until these two solutions become ubiquitous, the best option is to pay attention and erase data. Delete old e-mails from your BlackBerry, SMSs from your cell phone and old data from your address books—regularly. Find that call log and purge it once in a while. Don’t store everything on your laptop, only the files you might actually need.

EDITED TO ADD (2/2): A Dutch army officer lost a memory stick with details of an Afghan mission.

Posted on February 1, 2006 at 10:32 AM

Most Stolen Identities Never Used

This is something I’ve been saying for a while, and it’s nice to see some independent confirmation:

A new study suggests consumers whose credit cards are lost or stolen or whose personal information is accidentally compromised face little risk of becoming victims of identity theft.

The analysis, released on Wednesday, also found that even in the most dangerous data breaches—where thieves access social security numbers and other sensitive information on consumers they have deliberately targeted—only about 1 in 1,000 victims had their identities stolen.

The reason is that thieves are stealing far more identities than they need. Two years ago, if someone asked me about protecting against identity theft, I would tell them to shred their trash and be careful giving information over the Internet. Today, that advice is obsolete. Criminals are not stealing identity information in ones and twos; they’re stealing identity information in blocks of hundreds of thousands and even millions.

If a criminal ring wants a dozen identities for some fraud scam, and they steal a database with 500,000 identities, then—as a percentage—almost none of those identities will ever be the victims of fraud.

Some other findings from their press release:

A significant finding from the research is that different breaches pose different degrees of risk. In the research, ID Analytics distinguishes between “identity-level” breaches, where names and Social Security numbers were stolen and “account-level” breaches, where only account numbers—sometimes associated with names—were stolen. ID Analytics also discovered that the degree of risk varies based on the nature of the data breach, for example, whether the breach was the result of a deliberate hacking into a database or a seemingly unintentional loss of data, such as tapes or disks being lost in transit.

And:

ID Analytics’ fraud experts believe the reason for the minimal use of stolen identities is based on the amount of time it takes to actually perpetrate identity theft against a consumer. As an example, it takes approximately five minutes to fill out a credit application. At this rate, it would take a fraudster working full-time, averaging 6.5 hours a day, five days a week, 50 weeks a year, over 50 years to fully utilize a breached file consisting of one million consumer identities. If the criminal outsourced the work at a rate of $10 an hour in an effort to use a breached file of the same size in one year, it would cost that criminal about $830,000.
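Those back-of-the-envelope numbers check out. Here is a quick sketch of the arithmetic, using only the figures quoted above (five minutes per application, a one-million-identity file, a 6.5-hour workday, $10 an hour):

```python
# Verify the press release's arithmetic on how long it would take one
# fraudster, working full-time, to exhaust a breached file of identities.
APPLICATIONS = 1_000_000   # identities in the breached file
MINUTES_EACH = 5           # time to fill out one credit application
HOURLY_WAGE = 10           # dollars, for the outsourcing scenario

total_hours = APPLICATIONS * MINUTES_EACH / 60   # about 83,333 hours of work
hours_per_year = 6.5 * 5 * 50                    # 1,625 working hours per year
years = total_hours / hours_per_year             # just over 50 years
cost = total_hours * HOURLY_WAGE                 # roughly $830,000 outsourced

print(f"{years:.1f} years, or about ${cost:,.0f} if outsourced")
```

The takeaway matches the quote: even a modest breached database contains far more identities than any fraud operation can plausibly use.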

Another key finding indicates that in certain targeted data breaches, notices may have a deterrent effect. In one large-scale identity-level breach, thieves slowed their use of the data to commit identity theft after public notification. The research also showed how the criminals who stole the data in the breaches used identity data manipulation, or “tumbling,” to avoid detection and to prolong the scam.

That last bit is interesting, and it makes this recommendation even more surprising:

The company suggests, for instance, that companies shouldn’t always notify consumers of data breaches because they may be unnecessarily alarming people who stand little chance of being victimized.

I agree with them that all this notification is having a “boy who cried wolf” effect on people. I know people living in California who get disclosure notifications in the mail regularly, and who have stopped paying attention to them.

But remember, the main security value of notification requirements is the cost. By increasing the cost to companies of data thefts, the goal is for them to increase their security. (The main security value used to be the public shaming, but these breaches are now so common that the press no longer writes about them.) Direct fines would be a better way of dealing with the economic externality, but the notification law is all we’ve got right now. I don’t support eliminating it until there’s something else in its place.

Posted on December 12, 2005 at 9:50 AM
