Security at Visa
Good article on security at Visa in light of the CardSystems fiasco. (The article echoes some of the security arguments I made in this post.)
Ignore the corporate sleaziness by Cingular for the moment—they sold used cell phones meant for charity—and focus on the privacy implications. Cingular didn’t erase any of the personal information on the used phones they sold.
This reminds me of Simson Garfinkel’s analysis of used hard drives. He found that 90% of them contained old data, some of it very private and interesting.
Erasing data is one of the big problems of the information age. We know how to do it, but it takes time and we mostly don’t bother. And sadly, these kinds of privacy violations are more the norm than the exception. I don’t think it will get better unless Cingular becomes liable for violating its customers’ privacy like that.
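"We know how to do it" can be made concrete with a minimal sketch of file-level erasure, using only the Python standard library. The `secure_erase` helper is hypothetical, and a single overwrite is best-effort only: flash wear-leveling and filesystem journaling can leave stray copies that this approach never touches.

```python
import os
import tempfile

def secure_erase(path):
    """Overwrite a file with random bytes before deleting it.

    Best-effort sketch: on flash media and journaling filesystems,
    a single overwrite may not reach every physical copy of the data.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))  # replace contents with random bytes
        f.flush()
        os.fsync(f.fileno())       # push the overwrite to disk
    os.remove(path)

# Usage: write a secret to a temp file, then erase it.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("account numbers and PINs")
secure_erase(path)
print(os.path.exists(path))  # → False
```

The point of the overwrite-then-delete order is that deletion alone only unlinks the file; the bytes stay on disk until something happens to reuse them.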
EDITED TO ADD: I already wrote about the risks of losing small portable devices.
As PDAs become more powerful, and memory becomes cheaper, more people are carrying around a lot of personal information in an easy-to-lose format. The Washington Post has a story about this:
Personal devices “are carrying incredibly sensitive information,” said Joel Yarmon, who, as technology director for the staff of Sen. Ted Stevens (R-Alaska), had to scramble over a weekend last month after a colleague lost one of the office’s wireless messaging devices. In this case, the data included “personal phone numbers of leaders of Congress. . . . If that were to leak, that would be very embarrassing,” Yarmon said.
I’ve noticed this in my own life. If I didn’t make a special effort to limit the amount of information on my Treo, it would include detailed scheduling information from the past six years. My small laptop would include every e-mail I’ve sent and received in the past dozen years. And so on. A lot of us are carrying around an enormous amount of very personal data.
And some of us are carrying around personal data about other people, too:
Companies are seeking to avoid becoming the latest example of compromised security. Earlier this year, a laptop computer containing the names and Social Security numbers of 16,500 current and former MCI Inc. employees was stolen from the car of an MCI financial analyst in Colorado. In another case, a former Morgan Stanley employee sold a used BlackBerry on the online auction site eBay with confidential information still stored on the device. And in yet another incident, personal information for 665 families in Japan was recently stolen along with a handheld device belonging to a Japanese power-company employee.
There are several ways to deal with this—password protection and encryption, of course. More recently, some communications devices can be remotely erased if lost.
Remember CardSystems Solutions, the company that exposed over 40 million identities to potential fraud? (The actual number of identities that will be the victims of fraud is almost certainly much, much lower.)
Both Visa and American Express are dropping them as a payment processor:
Within hours of the disclosure that Visa was seeking a replacement for CardSystems Solutions, American Express said Tuesday it would no longer do business with the company beginning in October.
The biggest problem with CardSystems’ actions wasn’t that it had bad computer security practices, but that it had bad business practices. It was holding exception files with personal information even though it was not supposed to. This was not for marketing, as I originally surmised, but to find out why transactions were not being authorized. It was disregarding the rules it agreed to follow.
Technical problems can be remediated. A dishonest corporate culture is much harder to fix. This is what I sense reading between the lines:
Visa had been weighing the decision for a few weeks but as recently as mid-June said that it was working with CardSystems to correct the problem. CardSystems hired an outside security assessor this month to review its policies and practices, and it promised to make any necessary upgrades by the end of August. CardSystems, in its statement yesterday, said the company’s executives had been “in almost daily contact” with Visa since the problems were discovered in May.
Visa, however, said that despite “some remediation efforts” since the incident was reported, the actions by CardSystems were not enough.
And this:
CardSystems Solutions Inc. “has not corrected, and cannot at this point correct, the failure to provide proper data security for Visa accounts,” said Rosetta Jones, a spokeswoman for Foster City, Calif.-based Visa….
Visa said that while CardSystems has taken some remediating actions since the breach was disclosed, those could not overcome the fact that it was inappropriately holding on to account information—purportedly for “research purposes”—when the breach occurred, in violation of Visa’s security rules.
At this point, it is unclear what MasterCard and Discover will do.
MasterCard International Inc. is taking a different tack with CardSystems. The credit card company expects CardSystems to develop a plan for improving its security by Aug. 31, “and as of today, we are not aware of any deficiencies in its systems that are incapable of being remediated,” spokeswoman Sharon Gamsin said.
“However, if CardSystems cannot demonstrate that they are in compliance by that date, their ability to provide services to MasterCard members will be at risk,” she said.
Jennifer Born, a spokeswoman for Discover Financial Services Inc., which also has a relationship with CardSystems, said the Riverwoods, Ill.-based company was “doing our due diligence and will make our decision once that process is completed.”
I think this is a positive development. I have long said that companies like CardSystems won’t clean up their acts unless there are consequences for not doing so. Credit card companies dropping CardSystems sends a strong message to the other payment processors: improve your security if you want to stay in business.
(Some interesting legal opinions on the larger issue of disclosure are here.)
Interesting story on the market for data in Moscow:
This Gorbushka vendor offers a hard drive with cash transfer records from Russia’s central bank for $1,500 (Canadian).
And:
At the Gorbushka kiosk, sales are so brisk that the vendor excuses himself to help other customers while the foreigner considers his options: $43 for a mobile phone company’s list of subscribers? Or $100 for a database of vehicles registered in the Moscow region?
The vehicle database proves irresistible. It appears to contain names, birthdays, passport numbers, addresses, telephone numbers, descriptions of vehicles, and vehicle identification (VIN) numbers for every driver in Moscow.
I don’t know whether you can buy data about people in other countries, but it is certainly plausible.
Everyone, it seems, is examining their databases for personal-information leaks.
Tax liens, mortgage papers, deeds, and other real estate-related documents are publicly available in on-line databases run by registries of deeds across the state. The Globe found documents in free databases of all but three Massachusetts counties containing the names and Social Security numbers of Massachusetts residents….
Although registers of deeds said that they are unaware of cases in which criminals used information from their databases maliciously, the information contained in the documents would be more than enough to steal an identity and open new lines of credit….
Isn’t that part of the problem, though? It’s easy to say “we haven’t seen any cases of fraud using our information,” because there’s rarely a way to tell where information comes from. The recent epidemic of public leaks comes from people noticing the leak process, not the effects of the leaks. So everyone thinks their data practices are good, because there have never been any documented abuses stemming from leaks of their data, and everyone is fooling themselves.
This is a good editorial from Wired on identity theft.
Following are the fixes we think Congress should make:
Require businesses to secure data and levy fines against those who don’t. Congress has mandated tough privacy and security standards for companies that handle health and financial data. But the rules for credit agencies are woefully inadequate. And they don’t cover other businesses and organizations that handle sensitive personal information, such as employers, academic institutions and data brokers. Congress should mandate strict privacy and security standards for anyone who handles sensitive information, and apply tough financial penalties against companies that fail to comply.
Require companies to encrypt all sensitive customer data. Any standard created to protect data should include technical requirements to scramble the data—both in storage and during transit when data is transferred from one place to another. Recent incidents involving unencrypted Bank of America and CitiFinancial data tapes that went missing while being transferred to backup centers make it clear that companies think encryption is necessary only in certain circumstances.
Keep the plan simple and provide authority and funds to the FTC to ensure legislation is enforced. Efforts to secure sensitive data in the health and financial industries led to laws so complicated and confusing that few have been able to follow them faithfully. And efforts to monitor compliance have been inadequate. Congress should develop simpler rules tailored to each specific industry segment, and give the FTC the necessary funding to enforce them.
Keep Social Security numbers for Social Security. Social Security numbers appear on medical and voter-registration forms as well as on public records that are available through a simple internet search. This makes it all too easy for a thief to obtain the single identifying number that can lead to financial ruin for victims. Americans need a different unique identifying number specifically for credit records, with guarantees that it will never be used for authentication purposes.
Force credit agencies to scrutinize credit-card applications and verify the identity of credit-card applicants. Giving Americans easy access to credit has superseded all other considerations in the cutthroat credit-card business, helping thieves open accounts in victims’ names. Congress needs to bring sane safeguards back into the process of approving credit—even if it means adding costs and inconveniencing powerful banking and financial interests.
Extend fraud alerts beyond 90 days. The Fair Credit Reporting Act allows anyone who suspects that their personal information has been stolen to place a fraud alert on their credit record. This currently requires a creditor to take “reasonable” steps to verify the identity of anyone who applies for credit in the individual’s name. It also requires the creditor to contact the individual who placed the fraud alert on the account if they’ve provided their phone number. Both conditions apply for 90 days. Of course, nothing prevents identity thieves from waiting until the short-lived alert period expires before taking advantage of stolen information. Congress should extend the default window for credit alerts to a minimum of one year.
Allow individuals to freeze their credit records so that no one can access the records without the individuals’ approval. The current credit system opens credit reports to almost anyone who requests them. Individuals should be able to “freeze” their records and have them opened to others only when the individual contacts a credit agency and requests that it release a report to a specific entity.
Require opt-in rather than opt-out permission before companies can share or sell data. Many businesses currently allow people to decline inclusion in marketing lists, but only if customers actively request it. This system, known as opt-out, inherently favors companies by making it more difficult for consumers to escape abusive data-sharing practices. In many cases, consumers need to wade through confusing instructions, and send a mail-in form in order to be removed from pre-established marketing lists. The United States should follow an opt-in model, where companies would be forced to collect permission from individuals before they can traffic in personal data.
Require companies to notify consumers of any privacy breaches, without preventing states from enacting even tougher local laws. Some 37 states have enacted or are considering legislation requiring businesses to notify consumers of data breaches that affect them. A similar federal measure has also been introduced in the Senate. These are steps in the right direction. But the federal bill has a major flaw: It gives companies an easy out in the case of massive data breaches, where the number of people affected exceeds 500,000, or the cost of notification would exceed $250,000. In those cases, companies would not be required to notify individuals, but could comply simply by posting a notice on their websites. Congress should close these loopholes. In addition, any federal law should be written to ensure that it does not pre-empt state notification laws that take a tougher stance.
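The encryption-at-rest recommendation above rests on standard building blocks. Here is a minimal standard-library sketch of two of them, key derivation and authentication; the password, salt, and iteration count are illustrative assumptions, and the confidentiality half would in practice use a vetted cipher (e.g., AES-GCM from a maintained library such as `cryptography`), which the Python standard library does not provide.

```python
import hashlib
import hmac
import os

# Hypothetical password and record, for illustration only.
password = b"correct horse battery staple"
record = b"cardholder record"

# Derive a 256-bit key from the password. The random salt and the
# 200,000-iteration count are illustrative assumptions.
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
print(len(key))  # → 32 (bytes)

# Authenticate the record with an HMAC tag, so tampering is
# detectable. Encrypting the record itself would use a real
# cipher keyed with a separately derived key.
tag = hmac.new(key, record, "sha256").hexdigest()
print(len(tag))  # → 64 (hex characters)
```

The design point: a slow, salted key-derivation function makes stolen password files expensive to attack, and authenticating data before trusting it is as important as keeping it secret.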
As I’ve written previously, this won’t solve identity theft. But it will make it harder and protect the privacy of everyone. These are good recommendations.
The personal information of over 40 million people has been hacked. The hack occurred at CardSystems Solutions, a company that processes credit card transactions. The details are still unclear. The New York Times reports that “data from roughly 200,000 accounts from MasterCard, Visa and other card issuers are known to have been stolen in the breach,” although 40 million were vulnerable. The theft was deliberate, malicious hacking: the first such incident among all these recent personal-information breaches, I think. The rest were accidental—backup tapes gone walkabout, for example—or social engineering hacks. Someone was after this data, which implies it’s more likely to result in fraud than the loss of those peripatetic backup tapes.
CardSystems says that they found the problem, while MasterCard maintains that they did; the New York Times agrees with MasterCard. Microsoft software may be to blame. And in a weird twist, CardSystems admitted they weren’t supposed to keep the data in the first place.
The official, John M. Perry, chief executive of CardSystems Solutions…said the data was in a file being stored for “research purposes” to determine why certain transactions had registered as unauthorized or uncompleted.
Yeah, right. Research = marketing, I’ll bet.
This is exactly the sort of thing that Visa and MasterCard are trying very hard to prevent. They have imposed their own security requirements on companies—merchants, processors, whoever—that deal with credit card data. Visa has instituted a Cardholder Information Security Program (CISP). MasterCard calls its program Site Data Protection (SDP). These have been combined into a single joint security standard, PCI, which also includes Discover, American Express, JCB, and Diners Club. (More on Visa’s PCI program.)
PCI requirements encompass network security, password management, stored-data encryption, access control, monitoring, testing, policies, etc. And the credit-card companies are backing these requirements up with stiff penalties: cash fines of up to $100,000, increased transaction fees, or termination of the account. For a retailer that does most of its business via credit cards, this is an enormous incentive to comply.
These aren’t laws, they’re contractual business requirements. They’re not imposed by government; the credit card companies are mandating them to protect their brand.
Every credit card company is terrified that people will reduce their credit card usage. They’re worried that all of this press about stolen personal data, as well as actual identity theft and other types of credit card fraud, will scare shoppers off the Internet. They’re worried about how their brands are perceived by the public. And they don’t want some idiot company ruining their reputations by exposing 40 million cardholders to the risk of fraud. (Or, at least, by giving reporters the opportunity to write headlines like “CardSystems Solutions hands over 40M credit cards to hackers.”)
So independent of any laws or government regulations, the credit card companies are forcing companies that process credit card data to increase their security. Companies have to comply with PCI or face serious consequences.
Was CardSystems in compliance? They should have been in compliance with Visa’s CISP by 30 September 2004, and certainly they were in the highest service level. (PCI compliance isn’t required until 30 June 2005—about a week from now.) The reality is more murky.
After the disclosure of the security breach at CardSystems, varying accounts were offered about the company’s compliance with card association standards.
Jessica Antle, a MasterCard spokeswoman, said that CardSystems had never demonstrated compliance with MasterCard’s standards. “They were in violation of our rules,” she said.
It is not clear whether or when MasterCard intervened with the company in the past to insure compliance, but MasterCard said Friday that it had now given CardSystems “a limited amount of time” to do so.
Asked about compliance with Visa’s standards, a Visa spokeswoman, Rosetta Jones, said, “This particular processor was not following Visa’s security requirements when we found out there was a potential data compromise.”
Earlier, Mr. Perry of CardSystems said his company had been audited in December 2003 by an unspecified independent assessor and had received a seal of approval from the Visa payment association in June 2004.
All of this demonstrates some limitations of any certification system. One, companies can take advantage of interpersonal and intercompany politics to get themselves special treatment with respect to the policies. And two, all audits rely to a great extent on self-assessment and self-disclosure. If a company is willing to lie to an auditor, it’s unlikely that it will get caught.
Unless they get really caught, like this incident.
Self-reporting only works if the punishment exceeds the crime. The reason people accurately declare what they bring into the country on their customs forms, for example, is because the penalties for lying are far more expensive than paying any duty owed.
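The customs example reduces to a simple expected-value comparison. A minimal sketch, with all numbers hypothetical and chosen only to show the incentive structure:

```python
# Illustrative numbers only: the duty owed, the fine for lying, and
# the assumed chance of being caught are all hypothetical.
duty = 50.0        # cost of declaring honestly
fine = 5000.0      # penalty if a lie is detected
p_caught = 0.05    # assumed detection probability

# Honesty wins whenever p_caught * fine exceeds the duty owed.
expected_cost_of_lying = p_caught * fine
print(expected_cost_of_lying)         # → 250.0
print(expected_cost_of_lying > duty)  # → True: declaring is cheaper on average
```

The same logic applies to PCI: if the expected penalty for cheating on an audit is smaller than the cost of compliance, self-reporting fails.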
If the credit card industry wants their PCI requirements taken seriously, they need to make an example out of CardSystems. They need to revoke whatever credit card processing license CardSystems has, to the maximum extent possible by whatever contracts they have in place. Only by making CardSystems a demonstration of what happens to someone who doesn’t comply will everyone else realize that they had better comply.
(CardSystems should also face criminal prosecution, but that’s unlikely in today’s business-friendly political environment.)
I have great hopes for PCI. I like security solutions that involve contracts between companies more than I like government intervention. Often the latter is required, but the former is more effective. Here’s PCI’s chance to demonstrate their effectiveness.
Citigroup announced that it lost personal data on 3.9 million people. The data was on a set of backup tapes that were sent by UPS (a package delivery service) from point A and never arrived at point B.
This is a huge data loss, and even though it is unlikely that any bad guys got their hands on the data, it will have profound effects on the security of all our personal data.
It might seem that there has been an epidemic of personal-data losses recently, but that’s an illusion. What we’re seeing are the effects of a California law that requires companies to disclose losses or thefts of personal data. It’s always been happening; only now companies have to go public with it.
As a security expert, I like the California law for three reasons. One, data on actual intrusions is useful for research. Two, alerting individuals whose data is lost or stolen is a good idea. And three, increased public scrutiny leads companies to spend more effort protecting personal data.
Think of it as public shaming. Companies will spend money to avoid the PR cost of public shaming. Hence, security improves.
This works, but there’s an attenuation effect going on. As more of these events occur, the press is less likely to report them. When there’s less noise in the press, there’s less public shaming. And when there’s less public shaming, the amount of money companies are willing to spend to avoid it goes down.
This data loss has set a new bar for reporters. Data thefts affecting 50,000 individuals will no longer be news. They won’t be reported.
The notification of individuals also has an attenuation effect. I know people in California who have received a dozen notices about the loss of their personal data. When no identity theft follows, people start believing that it isn’t really a problem. (In the large, they’re right. Most data losses don’t result in identity theft. But that doesn’t mean it’s not a problem.)
Public disclosure is good. But it’s not enough.
During a time when large thefts of personal data are a dime a dozen, this one stands out.
What is thought to be the largest U.S. banking security breach in history has gotten even bigger.
The number of bank accounts accessed illegally by a New Jersey cybercrime ring has grown to 676,000, according to police investigators. That’s up from the initial estimate of 500,000 accounts police said last month had been breached.
Hackensack, N.J., police Det. Capt. Frank Lomia said today that an additional 176,000 accounts were found by investigators who have been probing the ring for several months. All 676,000 consumer accounts involve New Jersey residents who were clients at four different banks, he said.
Even before the latest account tally was made public, the U.S. Department of the Treasury labeled the incident the largest breach of banking security in the U.S. to date.
The case has already led to criminal charges against nine people, including seven former employees of the four banks. The crime ring apparently accessed the data illegally through the former bank workers. None of those employees were IT workers, police said.
One amazing thing about the story is how manual the process was.
The suspects pulled up the account data while working inside their banks, then printed out screen captures of the information or wrote it out by hand, Lomia said. The data was then provided to a company called DRL Associates Inc., which had been set up as a front for the operation. DRL advertised itself as a deadbeat-locator service and as a collection agency, but was not properly licensed for those activities by the state, police said.
And I’m not really sure what the data was stolen for:
The information was then allegedly sold to more than 40 collection agencies and law firms, police said.
Is collections really that big an industry?
Edited to add: Here is some good commentary by Adam Fields.