Schneier on Security
A blog covering security and security technology.
June 29, 2005
Wired on Identity Theft
This is a good editorial from Wired on identity theft.
Following are the fixes we think Congress should make:
Require businesses to secure data and levy fines against those who don't. Congress has mandated tough privacy and security standards for companies that handle health and financial data. But the rules for credit agencies are woefully inadequate. And they don't cover other businesses and organizations that handle sensitive personal information, such as employers, academic institutions and data brokers. Congress should mandate strict privacy and security standards for anyone who handles sensitive information, and apply tough financial penalties against companies that fail to comply.
Require companies to encrypt all sensitive customer data. Any standard created to protect data should include technical requirements to scramble the data -- both in storage and during transit when data is transferred from one place to another. Recent incidents involving unencrypted Bank of America and CitiFinancial data tapes that went missing while being transferred to backup centers make it clear that companies think encryption is necessary only in certain circumstances.
Keep the plan simple and provide authority and funds to the FTC to ensure legislation is enforced. Efforts to secure sensitive data in the health and financial industries led to laws so complicated and confusing that few have been able to follow them faithfully. And efforts to monitor compliance have been inadequate. Congress should develop simpler rules tailored to each specific industry segment, and give the FTC the necessary funding to enforce them.
Keep Social Security numbers for Social Security. Social Security numbers appear on medical and voter-registration forms as well as on public records that are available through a simple internet search. This makes it all too easy for a thief to obtain the single identifying number that can lead to financial ruin for victims. Americans need a different unique identifying number specifically for credit records, with guarantees that it will never be used for authentication purposes.
Force credit agencies to scrutinize credit-card applications and verify the identity of credit-card applicants. Giving Americans easy access to credit has superseded all other considerations in the cutthroat credit-card business, helping thieves open accounts in victims' names. Congress needs to bring sane safeguards back into the process of approving credit -- even if it means adding costs and inconveniencing powerful banking and financial interests.
Extend fraud alerts beyond 90 days. The Fair Credit Reporting Act allows anyone who suspects that their personal information has been stolen to place a fraud alert on their credit record. This currently requires a creditor to take "reasonable" steps to verify the identity of anyone who applies for credit in the individual's name. It also requires the creditor to contact the individual who placed the fraud alert on the account if they've provided their phone number. Both conditions apply for 90 days. Of course, nothing prevents identity thieves from waiting until the short-lived alert period expires before taking advantage of stolen information. Congress should extend the default window for credit alerts to a minimum of one year.
Allow individuals to freeze their credit records so that no one can access the records without the individuals' approval. The current credit system opens credit reports to almost anyone who requests them. Individuals should be able to "freeze" their records and have them opened to others only when the individual contacts a credit agency and requests that it release a report to a specific entity.
Require opt-in rather than opt-out permission before companies can share or sell data. Many businesses currently allow people to decline inclusion in marketing lists, but only if customers actively request it. This system, known as opt-out, inherently favors companies by making it more difficult for consumers to escape abusive data-sharing practices. In many cases, consumers need to wade through confusing instructions, and send a mail-in form in order to be removed from pre-established marketing lists. The United States should follow an opt-in model, where companies would be forced to collect permission from individuals before they can traffic in personal data.
Require companies to notify consumers of any privacy breaches, without preventing states from enacting even tougher local laws. Some 37 states have enacted or are considering legislation requiring businesses to notify consumers of data breaches that affect them. A similar federal measure has also been introduced in the Senate. These are steps in the right direction. But the federal bill has a major flaw: It gives companies an easy out in the case of massive data breaches, where the number of people affected exceeds 500,000, or the cost of notification would exceed $250,000. In those cases, companies would not be required to notify individuals, but could comply simply by posting a notice on their websites. Congress should close these loopholes. In addition, any federal law should be written to ensure that it does not pre-empt state notification laws that take a tougher stance.
As I've written previously, this won't solve identity theft. But it will make it harder and protect the privacy of everyone. These are good recommendations.
Posted on June 29, 2005 at 7:18 AM
Is it really practical to convert the SSN into a secret password? Wouldn't it make more sense for businesses to stop using the SSN as an authentication token than to try to make it secret?
SSNs are largely a red herring. It's easy enough to correlate people without a universal identifier: name, address, phone is generally enough, even with variations in spelling and the like.
You know this is true because the data brokers aren't screaming about the bill that would limit their use of SSNs.
One of the biggest problem spots, IMHO, is education. Educational institutions are a gold mine of financial and personal information, yet they are virtually unregulated. They are affected by GLBA and FERPA, but neither has real teeth yet, and the accreditation bodies don't check for compliance.
They are nice ideas, but any legislation needs real teeth at the CEO level.
It's funny how the prospect of a term in jail made a large number of corporate officers review their financial procedures (Sarbanes-Oxley, etc.)
Likewise, we need similar legislation for all organisations holding personal data, with significant punishment for shipping it across borders
There is one easy way to make sure these steps are established and used without fuss or foot dragging.
Make businesses and financial institutions financially responsible for their transactions instead of transferring liability to the victim. If an entity accepts a transaction from the wrong person, they are liable for that loss (which they can try to recover from the fraudster), not the person who wasn't even aware it was happening. This would drive the price of sharing information way up, because it would be a potential liability instead of a way to make money, obviating the need for opt-outs, etc.
Some people might argue that it would drive up the cost of business, but that is not true. Better security lowers the cost of business, but the cost is transferred from the victim to the entity incompetent enough to be sloppy about their security.
"Make businesses and financial institutions financially responsible for their transactions instead of transferring liability to the victim."
I've been saying that for years.
As a consumer, I'd like to see these things implemented. However, I'll fill in the flip side to these points. I've got a fair amount of experience working with financial institutions and what appear to be reasonable precautions to us would be herculean tasks to get implemented.
From the business side, all of these add additional costs while providing no benefit to the bottom line. I would expect a lot of pressure from financial institutions to throw out all of these ideas as impractical.
Large financial institutions can't secure their data. Their business relies on the ability of customer service and other low paid employees to have access to sensitive information.
To encrypt all sensitive customer data, it is first necessary to define what is sensitive and what is customer data. It should be obvious to a thinking person, but that cannot be codified into law. The laws often end up with loopholes like allowing a normalized database to not be regulated since the names and account numbers are on separate tables.
Ensuring that legislation is enforced means more auditing. This means more jobs for more people who are not generating any more revenue for the business. Same income, more expenses translates into cost cutting by shipping more jobs to places with lower wages and cutting the wages of the people who are still here.
Verifying the identity of individuals is expensive. I believe this is required in Germany and the costs per customer are significant.
Fraud alerts are a joke. I have a 7 year victim statement on my report after having my passport and checkbooks stolen. It adds an additional burden to honest creditors, but doesn't actually prevent anything.
Disallowing access to credit reports defeats the purpose of aggregating all of the information in the first place. Credit agencies will spend a lot of money to prevent consumers from having any control over their data. Right now, the credit agencies own the data on you. Giving the consumer any rights related to their data would kill the credit agencies. They have a lot of resources available to prevent this.
Opt-in would kill direct marketing for all affected companies. From a consumer standpoint, this would probably make a lot of people happy. However, businesses will fight this very hard. There is a lot of money that goes into marketing.
Requiring notification of consumers helps raise awareness, but it also serves no practical purpose. If your personal data is compromised, there is very little you can do about that. Once it's out there, you better not plan on needing your credit for the rest of your life. Again, the notification creates an additional cost for the companies with no benefit to the bottom line. Some people have to lose their jobs to make up for the income to expense ratio. You can be guaranteed that the people losing their jobs will not be the ones making 6+ figures.
I don't believe that any of these points are ones that can be practically implemented within our current system. How do we institute change without ending up fighting an endless series of losing battles? I'm in favor of people pushing for these goals as being reasonable from a consumer perspective. However, I also understand that companies will simply not do any of these things.
Does anyone know politicians who are genuinely interested in working on these issues? Educating those who are in the system has got to be the first step. I'm willing to take some time to help educate politicians about the issues on both sides. Writing to our "representatives" is about as useful as talking to a brick wall. I'm pretty sure all of the canned responses are generated at a call center in Bangalore, and not one of the good ones. How do we match up people with experience with politicians who want to help? I'm open to ideas.
Require companies to encrypt all sensitive customer data.
I don't see this solving problems. If the database (or application) server is compromised and the "encrypted" data is available to the database/application in the clear (which logic says it must be), then the attacker can exploit those channels to get the unencrypted data.
Yes, I believe the data should be encrypted on the backup medium. I have practiced this myself for a long time. But in practicality, how much security is gained by encrypting data while it is on its primary media? How much does this reduce the chance/occurrence of compromise? What is the cost of compromise? Which cost is greater, the cost to do this, or the cost of accepting the risk?
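To make the commenter's point concrete, here is a minimal Python sketch of the architecture being described. The cipher here is a toy XOR stand-in purely for illustration (a real deployment would use an authenticated cipher such as AES-GCM), and all the names are hypothetical; the point is architectural, not cryptographic: the database holds only ciphertext, yet the application tier must hold the key to serve data, so compromising the application exposes plaintext regardless of encryption at rest.

```python
import secrets

# The application process holds the key -- that's the whole problem.
KEY = secrets.token_bytes(32)

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: applying it twice recovers the input.
    A stand-in for a real cipher, for illustration only."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The "database" stores only ciphertext, satisfying encryption-at-rest...
db = {"cust-1": xor_cipher(b"123-45-6789", KEY)}

def get_ssn(cust_id: str) -> str:
    # ...but every code path that serves customers decrypts on demand,
    # so an attacker on the app server sees plaintext anyway.
    return xor_cipher(db[cust_id], KEY).decode()

print(get_ssn("cust-1"))  # prints "123-45-6789" -- in the clear to the app
```

This is why encrypting backup tapes (which leave the trust boundary) buys real security, while encrypting the primary store buys much less against a compromised server.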
Wow, these readers sure have a lot of enthusiasm for jail and "teeth" and coercion when it's some faceless CEO getting jailed or bitten or coerced. Can we have some discussion of basic rights? Like . . . which constitutionally delegated powers Congress will be invoking to justify this legislation? (How about the Interstate Commerce clause? Have we learned anything from the medicinal-marijuana business?) Do first-amendment guarantees of free speech apply to people in the credit-reporting business?
I, too, would like to improve the situation, but before we do violence to the Bill of Rights we should try less coercive measures.
The argument that securing data would impose all manner of costs on businesses without any bottom-line benefit is all very well except for one fact: all these costs are already being paid. They're being paid by consumers who spend money recovering from fraud or taking elaborate precautions against theft, they're being paid by businesses that lose transactions because consumers don't think the security risk is worth the benefit. They're being paid by every business in the country because the time and money spent on dealing with the fallout of lousy security can't be spent on more interesting goods and services.
I'm still awed at the fact that information brokers have such blanket protection from libel laws -- if a newspaper went around regularly reporting that unvetted anonymous sources labeled the following bunch of people as deadbeats and refused to print prominent and clearly-worded retractions when given information to the contrary, there would be one less newspaper around the next year. (Yeah, I know that position is disingenuously naive.)
I thought these companies already were liable. If I discover I have been a victim of ID theft, my credit card company will reimburse me. Right??
"Right now, the credit agencies own the data on you. Giving the consumer any rights related to their data would kill the credit agencies."
I find myself not caring even a little bit.
Not that I believe your conclusion, but my worries over the ability of a few monolithic businesses who make their living on my information could fit in a thimble. These clowns have fought every effort and regulation requiring them to keep accurate information and provide access to the people it describes. If they all fell in a ditch I wouldn't lose a moment of sleep over it.
The costs are not being borne by the companies. If data is compromised and I have to spend hundreds of hours and thousands of dollars fixing the damage to my credit, the cost to the company who lost the data is $0.00. If fraud committed in my name is used to purchase thousands of dollars in goods, the merchants who sold those goods are out the thousands of dollars. The cost to the credit card company that allowed the fraudulent transactions is roughly $0.00. Do you see how these do not impact the bottom line of the companies who cause the problem? The net effect on society is clearly negative, but there is no incentive for companies to fix the source of the problems.
There aren't a lot of consumers who would lose any sleep over putting these companies in a ditch. Of course, that will never happen because the balance of power isn't in favor of the consumer.
The companies who profit from the current way of doing things have a lot of money that they spend to buy laws that work in their favor. Right now, the laws treat collected data as property. The consumer is an inconvenient nuisance as far as the laws are concerned.
The FCRA is a good example of how our laws work. I've currently got a dispute that will not be handled by the alleged creditor, who claims to have written off a debt of $0.00, three years after the account was closed in good standing. The credit agency reporting the "debt" refuses to take any action. I've had my attorney write a letter demanding that the information either be justified or removed in compliance with the FCRA. The credit agency refuses to respond to my attorney. As a consumer, the next step would be to put up tens of thousands of dollars in legal fees to try to get one of these companies to comply with the law. All of this because I cannot do anything that would involve looking at my credit report since there is a recent delinquency on my account. This is hardly an isolated event. Most laws have no teeth and there is no incentive to comply with them. Companies will give the most cost effective customer service they can, which is frequently none at all.
My point isn't about caring what happens to the companies. I share your sentiment. My point is simply the cold hard reality that the companies aren't posting on blogs how they feel. They're actively paying politicians to ensure that their interests are represented. Who is representing the interests of the consumer? I hope you don't believe it's the same politicians who are being paid by these companies.
QUOTE: "Wow, these readers sure have a lot of enthusiasm for jail and "teeth" and coercion when it's some faceless CEO getting jailed or bitten or coerced. Can we have some discussion of basic rights?"
Actually, I don't see any mention of CEO victimization directly or indirectly here (yet).
What we are talking about here is basic capitalism. People and corporations give their money to financial institutions for safe keeping. Banks are selling safety and interest on funds. If a customer's money is given to the wrong person, how should that ever be the customer's fault? If it is, how is having a bank account better than hiding Krugerrands in a sock or letting your cousin Lenny hold on to it?
When there are enough people affected by data theft, confidence in banks will falter, and how will they profit from that?
QUOTE: I thought these companies already were liable, if I discover I have been a victim of ID theft my credit card company will reimburse me. Right??
I think it is dependent on the situation. Also, debit cards, check fraud, and other methods that draw directly from an account are not generally covered (to my knowledge).
Back in April, you wrote:
"The very term 'identity theft' is an oxymoron. Identity is not a possession that can be acquired or lost; it's not a thing at all. Someone's identity is the one thing about a person that cannot be stolen."
So why do you keep using this term, and not the perhaps more logical term "identity fraud"? Just curious.
I agree with much of Mike Sherwood’s commentary and would add that none of the Wired proposals are simple to implement or have a high probability of effectively addressing the problem. The solution lies in acknowledging that we don’t understand the problem – few people truly understand what electronic information really is, much less how to control access to it. Put another way – asking Congress to control something it doesn’t understand is risky. So let’s characterize the issue as something they would understand - Information is an asset or a form of property. My social security number, my credit card number, my health records, my home phone number – all of these are my property. There is an existing body of law that governs the protection and use of personal property. The simple approach would be for Congress to make it clear that my information is personal property and that I should be allowed to use the existing personal property laws to protect the information and to seek damages when necessary. This would not be the ultimate solution, but it will get people’s attention and provide a far better context for addressing the issue.
"SSNs are largely a red herring. It's easy enough to correlate people without a universal identifier: name, address, phone is generally enough, even with variations in spelling and the like."
I agree, but does this make the SSN a good or bad security feature? It could be seen as a second way of verifying one's name, address, phone, etc. Granted, I like the idea of credit card companies using their own independent numbers, but we need SSNs for other purposes, right? Just trying to understand exactly what was meant here.
Here's a concrete example to consider:
The FTC is looking at whether companies are taking "reasonable" action in securing customer personal information and has initiated a suit against BJ's Wholesale Club, which is currently in settlement.
The FTC charged that BJ's "failure to take appropriate security measures to protect the sensitive information of thousands of its customers" constitutes an unfair practice that violates federal law.
This is the first case that I am aware of that has been brought by the Commission for a data breach incident.
The FTC said that information taken from BJ's was used by an unauthorized person or persons to make millions of dollars of fraudulent purchases.
The settlement will require BJ's to implement a comprehensive information security program and obtain audits by an independent third party security professional every other year for 20 years.
I am unable to find a reason why the requirement for an audit would be every other year. I find this odd, especially when compared to fairly regular SOX and Payment Card Industry requirements for third party security audits. My sense is that quarterly or semi-annual audits will become part of the cost of doing business, since the repeatability of the process will be directly tied to the maturity and accuracy of the information security program(s).
The thing about Mike Sherwood's comments is that they're not really criticisms of the Wired recommendations or ones like them; they're just explanations of why this won't be easy. But then again, it's never been easy to get consumer-protection regs or legislation put in place. You need either some scenario that makes businesses think regulation is a lesser evil, or enough blood/votes on the ground to make legislators not care for the moment what businesses think.
If everyone whose personal data has been compromised in the past six months made an appointment to visit their local member of congress, you might see something happen rather quickly.
If there is a single identifying number for credit records, and it is distinct from SSN's, then while Mexicans crossing the border for work will still want to steal SSN's, the bad guys will focus on stealing the credit ID.
Anyway, "identity theft" is a misnomer. If Alice obtains enough information on Carol so that she can get credit from Bill, by pretending to be Carol, this should not be Carol's problem; Bill should be out the money, for not doing adequate verification of Alice's identity. Carol hasn't had her "identity stolen", Bill has been played for a sap. But under current law, Bill can send collection agents after Carol, and the burden of proof seems to be up to Carol to show that she never borrowed money from Bill.
If we can fix that, the Bills of the world will be highly motivated to make the credit system more secure.
"Keep Social Security numbers for Social Security."
This part of the proposal is completely wrong-headed. SSNs are *NOT SECRETS*, and any practical solution must accept that fact and stop pretending that the secret-SSN genie can be put back in the bottle.
My counter-proposal is that any credit agency or business who uses SSN as a secret identifier known only by its owner be financially liable for the victim's losses. I frankly don't care who uses it as an identifier, unique or otherwise. What I really care about is that it not be treated as if I'm the only one who knows it.
If SSNs are public information, AND TREATED AS SUCH, then all the problems with a single identifying number that can lead to financial ruin simply evaporate.
Yes, that leaves the problem of having no shared secret identifier that can be used by default in all these situations. But folks, that's exactly the problem you have now with SSNs: fraudsters already know your secret. Pretending they don't know it doesn't solve the problem.
If your secret passwords are compromised, you don't legislate the words away or pretend they aren't passwords, you pick new secrets. So pick a new secret, and keep it secret, if it's going to be used as a secret. Otherwise don't use any secret at all.
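As one sketch of what "pick a new secret, and keep it secret" could look like in practice (hypothetical names throughout; this is not any real credit bureau's scheme): issue a fresh high-entropy credential, store only a hash of it, and compare presented values in constant time. The publicly known SSN never enters the authentication step at all.

```python
import hashlib
import hmac
import secrets

def issue_secret() -> tuple[str, str]:
    """Issue a new credential. The customer keeps the token;
    the server stores only its hash."""
    token = secrets.token_hex(16)  # 128-bit random secret
    stored = hashlib.sha256(token.encode()).hexdigest()
    return token, stored

def verify(presented: str, stored_digest: str) -> bool:
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    # Constant-time comparison avoids leaking matches via timing.
    return hmac.compare_digest(candidate, stored_digest)

token, stored = issue_secret()
print(verify(token, stored))          # True: the real secret authenticates
print(verify("078-05-1120", stored))  # False: a widely published SSN does not
```

If the token leaks, you revoke it and issue another; nothing about the customer's permanent identity is burned in the process, which is exactly what you cannot do with an SSN.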
How about we kill the problem at its source: No one is allowed to keep any information about me that I don't want them to have for any longer than I want them to have it - felony theft charges obtain if they don't respect the time limits. There's no real reason they need to store anything about me other than some kind of secure tokenID that says that I was verified to some security level at some past time, and if I want to do anything with them that requires my personal data, why I just insert/swipe my smartcard for them to access the level of stuff I want to give them access to.
I asked this earlier, but I don't think anybody commented, so I ask again.
In this era of cheap digital cameras, make a law that says that whenever someone submits any credit application or any other document that can be used by identity thieves, a good in-situ photograph is taken of him/her and attached to the application.
This way, in addition to signing the document with the usual signature, the applicant would in effect "sign" it with his face.
As far as I can see it, this would make identity theft useless. If anybody tried to get credit by pretending to be someone else, he could still do so, but when the creditor goes after the victim who proclaims his innocence, the cops would have a mugshot to go after the real culprit. Pretty damning if you ask me.
This sounds too simple and easy for me to believe that I'm the only one to think of this. In fact, I don't see why the banks and creditors wouldn't already start doing this on their own. So what am I missing here?
"How about we kill the problem at its source: No one is allowed to keep any information about me that I don't want them to have for any longer than I want them to have it - felony theft charges obtain if they don't respect the timelimits."
This is a common misconception. Security should not be measured only on its success in prevention, but on its ability in enhancing trust (authorization). Otherwise you end up with a world without trust, which ultimately is a world without true security.
To further clarify my last post, I should probably say that the "kill the problem at its source" approach is not wrong, but it runs the risk of being implemented in such a way that it becomes impossible for any company to ever achieve consumer trust. And that is not a fair or balanced approach to security.
How do you define "any information about me" or "longer than I want them to have it"? And what constitutes theft?
I don't subscribe to the idea of absolute privacy. In addition to the points that Davi made in his posts, there's the fact that in many cases some bits of data that are generated by my existence can't really be said to be "my data".
Sixty years ago Old Man Gropp behind the counter of the five-and-dime knew which kids in town were buying cigarettes and which families were spending green stamps (forgive me if I'm mixing my time periods) and generally speaking he knew how much of what produce was being purchased by which families throughout the year.
Now there isn't one Old Man Gropp in one store in town, there are at least a dozen major grocery chain outlets within 5 miles of my house. Even if I stick to one chain, I may be going to different stores depending on if I'm shopping on my way home from work or going out on a Saturday. If the chain (hundreds of part time employees instead of one grumpy ole cat behind the counter) can track my movements, it can regain the knowledge that Gropp had 60 years ago about his customer base. If the store wants to track me for its own use, fine. Now they know that once a month I buy really good steaks if they're in stock, and that may increase the likelihood that good steaks will be in stock when I shop -> that's a benefit.
What bothers me is that 60 years ago if Gropp went to the local priest and told him I was buying condoms I could punch Gropp in the nose (and probably get away with it without getting sued).
The answer isn't in total data privacy (Greg's post), and it's not in total data transparency (the status quo), it is somewhere in between.
"SSNs are largely a red herring. It's easy enough to correlate people without a universal identifier: name, address, phone is generally enough, even with variations in spelling and the like."
Huh? I'm all for not using SSN's but it's easy to come up with holes here. Sr & Jr living at the same address with same phone or John Jones living in a group home and 5 years later a different John Jones resides there. How does a credit bureau tell them apart??
> How does a credit bureau tell them apart??
Why should they be able to? Why am I required to prove to them that the data they are selling is fraudulent?
Ilkka, if photos are required as part of an application, how hard would it be for someone to take a picture of you without your knowledge?
> Ilkka, if photos are required as part of an application, how hard would it be for
> someone to take a picture of you without your knowledge?
It's not necessarily a bad idea. Sure, the bad guys are going to find a way around it eventually, but fewer of them will, and they'll have to go to more effort to do so... you're adding "something he is" to "something he knows" (the application information, which as we've already demonstrated is easily acquired).
Passport photos are cheap. It would make more sense to require a passport-style photo and accept the passport photo standards (they're not the absolute best requirements, but it would sure beat having 10,000 customers send in different poses and lighting schemes, etc).
The photo would come imprinted on the card -> if someone stole my passport photo and tried to get a credit card with it, they'd be able to use it online but not at a brick and mortar with reasonably alert cashiers.
In addition, if your picture was part of your credit report, it would be that much more difficult for someone to impersonate you. Not impossible, certainly, and the whole topic is tangent to the overall problem, but it's an interesting point.
I'd like to hear Bruce weigh in on this one, I'm hardly an expert on biometrics.
Sometimes I feel like I keep saying the same things over and over, but there seem to be a lot of people who believe things which are simply untrue.
A common misconception that people seem to have is that your personal data is yours. In a legal sense, you do not own your personal data. Anyone who collects data about you owns that data. In many cases, you don't even have a way to opt out of those databases. That is how the law currently works. No amount of fantasizing about making it a felony will make any difference whatsoever. The concept of owning your personal information is one with absolutely no legal support.
Don't like it? If the kind of people who are interested enough in security to read this kind of stuff don't have a clear definition of the problem, let alone have a solution, do you really think our politicians are going to know what to do?
One counterpoint to that last one Mike. Indeed, the system currently works that way, but it is not true that in a legal sense you don't own your personal data.
There is *no* legal sense of ownership of the data. There are few if any codified laws, there are few legal precedents. This is new territory, legally speaking. Corporations act like they own your data, because to this point it is profitable for them to do so and no legal authority has come out and said, "You can't do this, you don't own this data".
There is certainly a concept of owning your own personal information in a legal sense, but the concept isn't well defined.
What I was trying to point out with my grocery store analogy above is that there is some "personal" information to which perhaps you aren't entitled full ownership.
It's one thing to say you have "sold" the rights to your identity and someone else owns it (the artist formerly known as Prince comes to mind), but another thing entirely to discover that someone has started to impersonate you without consent (fraud).
With regard to legal recourse: for California residents, the 'Shine the Light' law (CA Civil Code 1798.83) went into effect on January 1, 2005. This law requires businesses to disclose their information-sharing practices to their customers. For example, upon request, companies must tell you with whom they have shared your personal information for marketing purposes within the last twelve months.
Moreover, California AB1950 (in effect from Jan 1, 2005) calls for "reasonable precautions" to protect personal information from modification, deletion, disclosure and misuse.
I do not foresee companies claiming that because they "own" all the identity information they collect, they therefore cannot be held accountable when individuals discover "misuse" of their identity. On the contrary, the law is specifically meant to give control back to individuals.
Peter: "Ilkka, if photos are required as part of an application, how hard would it be for someone to take a picture of you without your knowledge?"
I guess it wouldn't be that hard, but that's a meaningless question, unless we assume that the credit company participates in the fraud.
The photo would be taken at the time the customer submits the application, preferably so that the customer would hold the application in his hand showing it to the camera. This way, there would be a practically unfalsifiable record of who submitted the application.
No, this would not prevent the thief getting the credit at the moment. Yes, it would guarantee finding and convicting him later, thus making the crime not worth it for him.
"In this era of cheap digital cameras, make a law that says that whenever someone submits any credit application or any other document that can be used by identity thieves, a good in-situ photograph is taken of him/her and attached to the application."
I think I like this idea. I certainly like forcing the applicant to appear in person. Credit-card companies will hate the idea, as they like easy-to-fill-out mail-in credit-card applications.
"Credit-card companies will hate the idea, as they like easy-to-fill-out mai-in credit-card applications."
Or to be fair, companies try to present their offerings in a way that suits potential customers.
I was recently told that seven minutes is the maximum attention span for most verification schemes over the phone. And even that seems a bit long when you think about how much time out of your day you would be willing to give for each credit card application/transaction (especially compared to other forms of currency). A picture system could actually reduce the time required for application verification, and therefore better suit both the applicant and the company...I believe this was also the thinking behind the Citibank credit cards that had photos of the owner next to the signature. Perhaps the photo taken during the application process would become the image on the card itself?
Here's an informative write-up on the FTC and BJ's case, and the shift in regulatory compliance:
"Firms are not protecting the data they hold. Their complacency may cost them dear"
I'm not sure fines are the way to go -- companies have lots of money.
The SEC modeled how to ensure rapid and effective compliance with the Y2K bug -- any trading institution that couldn't certify itself as being completely remedied by 1 August 1999 had to demonstrate that it would be finished before the end of the year. Otherwise, the SEC was going to suspend all their trading activity to make sure they had nothing distracting them from Y2K remediation.
Bruce Schneier: "I certainly like forcing the applicant to appear in person. Credit-card companies will hate the idea, as they like easy-to-fill-out mail-in credit-card applications."
Since these companies are the ones who enable profitable ID theft in the first place, I wouldn't shed any tears for them. The victims come first, and only after that should we look at the concerns of the enablers.
To please the credit card companies, the law could still allow such mail-in or Internet applications. However, the credit card companies should not be allowed to put negative marks on anyone's credit rating or take anyone to court over missed payments, unless they can produce photo (or other similarly strong) proof that the original applicant really was that person. Their choice.
This way, the credit companies could decide whether the added business is worth it for them.
In my opinion, any company who sends a credit card to somebody simply because a mailed-in piece of paper or a filled-in web form had somebody's correct date of birth and SSN in it is simply too stupid to be in any kind of business. Especially if they send that card to a PO Box.
> In my opinion, any company who sends a credit card to somebody simply
> because a mailed-in piece of paper or a filled-in web form had somebody's
> correct date of birth and SSN in it is simply too stupid to be in any kind
> of business. Especially if they send that card to a PO Box.
You're not looking at it from a purely economic sense, which is how the credit card company is looking at it.
If I can gain 100,000 new customers who will generate $10,000,000 worth of revenue annually for my company (a mere $100/year per customer), at a cost of enabling 100 active fraud attempts which will cost my company $1,000,000 a year (100 false applicants who will get away with $10,000 worth of fraud before the accounts are disabled), I'm throwing away the opportunity to make $9,000,000 a year if I don't engage in that business practice. How is that stupid? Seems to me that the reverse would be stupid, how can I pass up that business opportunity?
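Pat's point can be stated as a one-line expected-value model. The figures below are the hypothetical ones from the comment, not real industry data:

```python
# Back-of-the-envelope expected-value model using the hypothetical figures
# from the comment above; none of these numbers are real industry data.

def net_revenue(customers, revenue_per_customer, fraud_cases, loss_per_fraud):
    """Annual net revenue if the lax application process is kept in place."""
    return customers * revenue_per_customer - fraud_cases * loss_per_fraud

# 100,000 customers at $100/year, minus 100 fraud cases at $10,000 each.
print(net_revenue(100_000, 100, 100, 10_000))  # -> 9000000
```

As long as that number stays positive, and the externalized costs fall on the fraud victims rather than the company, the "stupid" practice is rational for the company.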
@Pat and Ilkka
But what I find interesting about that discussion is that even if the credit card company does pocket that 9,000,000 -- how does that affect the person whose identity is stolen since there is no longer credit score/debt collection stigma associated with this.
So, Pat, how does your explanation break Ilkka's idea?
Oh, I'm not arguing with Ilkka's idea - just the tailing opinion (that the companies are behaving stupidly). In fact, I kind of like Ilkka's idea. The only problem that I can see with it is that it is, in effect, self-regulation.
Self-regulation IMO is generally a horrible idea, since it has built-in conflicts of interest. If we allow companies to behave this way, it will indeed help protect the average consumer from the consequences of identity fraud, but it won't necessarily protect the average stockholder (including anyone who holds the credit card companies' stock in a mutual fund retirement account), since many companies that self-regulate will behave in ways that sacrifice the long term for short-term returns. This is how corporate fraud happens.
I do agree that if a company is going to choose this as a model, though, they should be footing the bill for the $1,000,000 in fraud. Take your $9M in profit and be happy. What's not okay is companies taking advantage of their own bad practices by attempting to recoup the $1M from the fraudsters they enabled.
In addition, they shouldn't be able to report a fraud to a credit bureau as being associated with an individual. They didn't properly identify and authenticate the fraudster before giving him a line of credit, so in effect they are committing slander by entering a fraud mark against an innocent person.
The system is complex, and it could be made to work better, with fewer consequences for the innocent, in lots of different ways.
"SSNs are largely a red herring. It's easy enough to correlate people without a universal identifier: name, address, phone is generally enough, even with variations in spelling and the like."
This is actually a clever idea. Addresses and phone numbers change, and even names change. So each time you change your phone number or move (or change your name when getting married, for example), any information left in some database somewhere would become useless. However, it's not quite that simple. Imagine someone taking a loan from a bank. It would be easy to steal the cash by just changing a phone number, address, or name. Changing any of them would basically create a new identity and would make it easy for people to disappear. You can of course do the same with unique identifiers too. Just forget the identifier originally assigned to you at birth, pick a new one, and start using it everywhere.
This naturally leads to creating the ID from something that's unique to every person and that can't be changed. That way the ID could be "read" from each person. Technically, it could probably be made failsafe, but it would only work in a perfect world without any threats, and it would mean a complete loss of privacy for everyone.
It's not an easy problem to solve, and I don't think there are any easy solutions to it.
Pat: "You're not looking at it from a purely economic sense, which is how the credit card company is looking at it."
Of course, you are right.
But perhaps this economic equation will change, if the ID theft keeps growing at present speed for a few more years.
There isn't really anything that is unique enough to be a single point of identification for an individual. Even your DNA might, in the future, be totally non-unique if you're cloned, and in any event your DNA sequence may be unique, but it's *too* unique -> the data itself is too complex to be of any use for virtually all identification needs.
I'd argue that we probably won't ever want to go to the trouble to have a truly unique identifier anyway. For one thing you'd have a crushingly effective identity fraud vector. For another, virtually all the transactions that people go through don't require identification at all, they just require authorization, so it's wasted effort.
Think of "your identity" as the set of all characteristics that are associated with you, virtually all of which aren't unique. Lots of people have my shoe size, but my shoe size is indeed an identifying characteristic. Any subset of this collection of characteristics can be used to try and identify me, but in a large enough population, anything other than a complete set can yeild a mis-identification. There are at least three Pat Cahalans living within 35 miles of where I'm writing this, two of which are under 40 and have a bachelor's degree in Mathematics (from the same university, no less).
From that standpoint, the process of identification isn't (in the real world) going to be a perfect process. But in virtually all cases, it doesn't need to be. Authentication is more important from a transaction standpoint than identification.
Example -> the salesman in the shoe store doesn't need to know most of my identifying information, he just needs to know my shoe size. If he gets me shoes that fit, he doesn't care if I'm the Son of Sam. The sales clerk doesn't need to know my home address, but they need to know a valid credit card number that's associated with me. If the card is authorized, the sales person doesn't care if I'm a 90 year old Nazi war criminal. When I get a speeding ticket, a police officer needs to know where I live so that he knows where to send the enforcement squad to pick me up if I don't show up for my court date.
But even the cop, in a sense, doesn't need to know *who I am* - he doesn't care which Pat Cahalan I am - he just needs to know enough about the person that he's writing the ticket for to ensure that the ticket-holder will eventually have to pay the consequences for speeding.
In a real sense, the person ticketed could be using a false or stolen or borrowed identity, but as far as this particular transaction is concerned, if they still show up and pay their fine, the transaction of the speeding ticket has been successfully completed.
Lost my original train of thought.
"For another, virtually all the transactions that people go through don't require identification at all, they just require authorization, so it's wasted effort."
You don't even need that. Just use cash and it's anonymous (other than possible security cameras recording you in a shop). Credit cards are actually unique identifiers themselves (a number, a name and an expiration date), while your signature is what authorizes those transactions. How would authorization alone (merely your signature) work? Actually, when you buy from a webshop with your credit card, your transaction isn't authorized at all.
"Actually, when you buy from a webshop with your credit card your transaction isn't authorized at all."
Just to correct myself: it's authorized, but not by you.
Digital snapshots of you and a document don't solve anything. They just change the risks and the fraud tactics, and not necessarily for the better.
So: imagine the world as proposed, then attack it...
First, there'd have to be a trusted network of snapshot-takers. Otherwise anyone could start with a picture of J. Random Victim and Photoshop in a digital image of the document. Yes, there might be a certain minimum level of skill necessary to do this, but don't delude yourself that all practitioners who already have the skill are incorruptible, nor that thieves or shady characters can't acquire the skill.
I Am Not An Artist, yet I've done some touchups in Photoshop, and when it's printed out you can't tell unless you know where to look. I don't mean just fixing red-eye, I mean taking out ugly furniture, adding nice shrubbery, and so on. It's easier to spot the touchups by looking at the individual color-channels, but that's largely because IANAA. If the payoffs were there, I could fix things in the channels, too. If you don't believe me, ask a real Photoshop professional to demonstrate. I think you'd be amazed.
Second, assuming the snapshot itself is trustworthy (i.e. produced by a trustworthy third party), all that does is make your face (or an approximation) a valuable commodity. Thieves will then find it valuable to steal the picture of you in the Big Credit Co's database. Assuming they can't fabricate an image, all they have to do then is find someone who looks enough like you to fool the poor underpaid picture-comparers who work for Big Credit Co. Then the look-alike thief takes in the credit app to the trustworthy snapshot-taker, and voila, mission accomplished.
Or just publish all the stolen pictures, and let any thieves search through them at will, until they find a victim who looks like the thief. Or looks enough like the thief that a little makeup would suffice.
So if someone who looks like you is impersonating you, the real fun begins. How do you revoke your face as an authenticator? "No, that's really not me, even though it does bear a certain resemblance to me."
And that doesn't even consider things like new glasses, new hair styles, a bit of surgery, a serious traffic accident injury, or any of countless other things that cause your real face to no longer look enough like the officially registered face. How do you resubmit the new picture, except by undergoing the same procedure that look-alike thieves are using to impersonate you in the first place?
So I don't think digital snapshots are a practical solution, although they might change the balance for a while. Then again, they might not, since it takes time to get any new system fully in place. It would probably take years to build something like a trusted network of photo-takers, plus the registered photos of all the citizens you're trying to protect (without a known-good photo, the system is useless). All during that time, the thieves are busy honing their own skills and creating their own infrastructure. And if any organized criminal endeavor bothers to invest in software to help automate the task (say, face recognition to find a victim who looks like the thief), then the thieves could well end up technologically ahead of the supposed anti-thief solution by the time that solution is fully in place.
Longer than I thought it'd be, but it seemed like it needed saying.
You make some interesting comments on the use of photographs as biometrics, but I think you've misunderstood the proposal. This isn't a proposal to use photographs to authenticate transactions, it is a recorded photograph when applying for the card to help investigate fraudulent ~applications~. The problem domain is someone mailing in a credit application form in my name, with the card to be sent to some sort of untraceable mail drop for the crook to collect.
"First, there'd have to be a trusted network of snapshot-takers."
Well, yes, the credit company's own staff would be doing this when you come into their office to apply for a card. Of course you might need some mechanism to guard against corrupt staff being bribed to take a completely bogus photograph, but that's a (solvable) problem for the CC company to consider, no concern for the user.
"Thieves will then find it valuable to steal the picture of you in the Big Credit Co's database."
There's no database here. It's just pinned to your application form or something--they have to keep that, or else they don't have your signature. Even if they do decide to keep only a digital copy of the photo, they almost never need to refer to it again, so it can be, say, encrypted with a public key, the private part of which is only held by the company's chief security officer.
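The escrow idea in that last sentence is standard envelope encryption: encrypt the photo under a symmetric key, then wrap that key with the security officer's public key so routine staff can store the photo but never view it. A minimal sketch, using the third-party `cryptography` package; all names here (`escrow_encrypt`, `cso_private`, etc.) are illustrative, not from any real credit-company system:

```python
# Sketch of the escrow pattern described above: the application photo is
# encrypted so that only the holder of an offline private key (the "CSO")
# can ever recover it. Uses the third-party `cryptography` package.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Key pair generated once; the private half is kept offline by the CSO.
cso_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
cso_public = cso_private.public_key()

def escrow_encrypt(photo_bytes, public_key):
    """Envelope encryption: Fernet for the photo, RSA-OAEP to wrap the key."""
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(photo_bytes)
    wrapped_key = public_key.encrypt(session_key, OAEP)
    return wrapped_key, ciphertext

def escrow_decrypt(wrapped_key, ciphertext, private_key):
    """Only the private-key holder can unwrap the session key and view the photo."""
    session_key = private_key.decrypt(wrapped_key, OAEP)
    return Fernet(session_key).decrypt(ciphertext)

photo = b"JPEG bytes of the applicant's photo"
wrapped, blob = escrow_encrypt(photo, cso_public)
assert escrow_decrypt(wrapped, blob, cso_private) == photo
```

The design choice matches the comment: the photo is almost never needed again, so making decryption deliberately inconvenient (offline key, single custodian) costs nothing in day-to-day operation while removing the stored photos as a bulk-theft target.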
"looks enough like you to fool the poor underpaid picture-comparers who work for Big Credit Co"
There are no Big Credit Co picture comparers in this proposal. The only time anyone is going to be comparing your face to an alleged photograph of your face, is if Big Credit Co issues a card in your name, you get the bill, and reply "Huh? I don't have a card with you guys!". At that point, if Big Credit Co insists that it really was you, the photos might get compared by police forensics experts. Normally, BCC will look at the picture, admit it obviously isn't you, say "we screwed up accepting that guy's ID, so we'll pay the bill. But at least we have a photo to give to the fraud squad!"
"How do you revoke your face as an authenticator?"
This is the key point you have confused. The proposal here is not to use one's face as an authenticator, but an audit mechanism--to record the face of credit applicants so if an application is made fraudulently, the perpetrator is easier to track down. The fraudulent applicant has no particular reason to try to look like you (unless he is maliciously trying to harm you personally, rather than steal money from the CC company). But anyway it doesn't matter if the fraudulent applicant does look very similar to you; you still know what he looks like! And as Bruce points out, this incidentally also forces applications to be done in person, which completely eliminates the automated mass fraud we are starting to see today.
Having said all that, photographs ~on~ a card (a.k.a. "photographic ID cards") can be a very useful biometric, because humans are naturally very good at verifying them and they greatly reduce the utility of stolen cards. To date, they have been very successfully used for driver's licenses and similar documents, in many locations. If someone steals my DL, they can't just sell it to any random crook who wants a fake license; they have to find a crook who looks very similar to me. This is a serious complication even for something as widespread as driver's licenses, and when we get down to something like corporate ID cards, it makes stolen cards practically useless.
At the present time, the validity of the photograph on the card is assured only by relative unavailability of the printing technology (which actually embeds the image into the plastic). Presumably these specialised printers will eventually become available to criminals, and then the photographic cards will be much less useful. One solution eventually will be to have a memory device in the card which records a digital version of the photograph, plus a digital signature and some binding information (expiry date, what sort of credential it is for, etc.) A disadvantage of that is that verifying a card will require an electronic device with a graphical display, although it can be an off-line device with quite low computational power.
From a commercial standpoint, the transaction based identity would be fine. I would like to have a different "person" associated with each bank account. The advantage of this approach is that my collection of identities would be the only thing that associates all of my financial information. The disadvantage is that companies want all of the information available on me to develop profiles to select targets for marketing.
From a law enforcement standpoint, there are several reasons why they do need to identify the individual. If they pull someone over for speeding, the speeding ticket is actually the least important part of the transaction from a law enforcement perspective. Traffic stops are legal fishing expeditions. They can check for outstanding warrants, etc. at each traffic stop, but that requires knowing the person's identity. A significant number of arrests come from fairly insignificant traffic stops. I do not support the concept of legalized fishing expeditions, but I would expect any solution that makes law enforcement more difficult to be vilified as supporting crime.
Mike, you are correct about personal information not belonging to the individual in the US, however that is not the case in other parts of the world.
Many of the people who post to this site are from Europe and other parts of the globe (myself included).
One of the major problems other countries have with the US is the implicit assumption by the US government that their law/writ/view applies wherever they say it does, whilst at the same time denying the same privilege to other sovereign nations and peoples...
The US also has little or no respect for the individual under law. The main reason for this is that the law understands "property" but not "self"; showing harm/damage to property and reasonably assessing the cost of the damage is what tort law is generally about.
However, when it comes to demonstrating and quantifying harm/damage to "self", the law flounders, and has done since before the time of the Constitution.
One of the reasons for this is that US law is based on old English law that goes back to before the Barons forced King John "Lackland" to sign away some of his rights in June 1215 at Runnymede on the River Thames west of London. This peace deal, which later became known as the Magna Carta, most importantly made the King liable to pay damages to the citizens (Barons).
Back then you only had a say if you had property (land) and a standing army, and I guess the modern (abstract) equivalent of property is money, and the standing army is lobbyists.
So "buying votes" via lobying today is pretty much the same as what happend back in King John's time. So no real change in 3/4 of a milenium then, if youve got the clout you win...
@Ilkka Kokkarinen & Bruce & Pat,
Sorry folks, there is no point taking photos at application time. The reason is simple:
Most people can quite quickly learn to make a reasonable facsimile of somebody else's signature (the only authenticator on most credit cards).
Two or more people could decide to get together and get their photos attached to somebody else's credit card documentation. The card still goes to the real person, who uses it in the normal way.
As long as there is no question on the CC account then the photo is not required.
Without going into the whys and wherefores of why somebody might want to do this (there are several good and bad reasons, but you can work them out for yourself), you end up with a credit card that will fail if ever brought into question.
So the next stage is to put the photo on the credit card...
"Photo ID credit cards" and cheque cards have been tried in Europe and they have all failed dimally.
There is a good reason for this, which is summed up by the old joke:
"Any person who looks like their passport photo is not well enough to travel"
Put simply, the photo ID is too small, and the people doing the verification are not sufficiently trained to reliably recognise the person. They usually do worst with women, due to hairstyle and makeup changes. Apparently the US immigration system suffers from similar problems.
Various government agencies are well aware of this, which is why there is lots of talk about "biometrics". Unfortunately there appears to be very little experience within governments of biometrics and the problems involved with them.
The people who do know about the problems are the biometric device manufacturers, and they are not going to tell; it's not in their interest to.
In an odd sort of way, the biometric companies are just like the cigarette companies of the '50s and '60s, who had done the research showing that cigarettes were not only addictive but also harmful. However, they spent a lot of money and did a lot of misleading in courts for half a century or more, and made billions in the process.
As a side note, in the UK, Charles Clarke let slip in Parliament the other day one of the real reasons for UK national ID cards: "opening bank accounts".
In Europe there have been various laws passed to stop money laundering. Unfortunately, because of them, banks will soon be required to positively identify customers before allowing them to open accounts... The main reason for these laws is to reduce the amount of lost taxation, not, as most people would think, to stop serious crime such as drugs or terrorism.
Well, as the banks are only too aware, the cost of this positive identification is prohibitively high. Recently the London School of Economics put it as high as USD 500 per ID card, after various allowances (such as free cards for OAPs, the unemployed, etc.).
So a nice little favour from the UK Government to the UK banks: "We enact the law as a 'voluntary scheme'; you make sure it becomes 'mandatory' by denying access to banking facilities unless the person has one..." Oh, and you will also need one to get medical care or just about any other form of state assistance (that you have already paid for via your taxes).
I guess that's a Win for the Government and a Win for the Banks, as for the rest of us not only do we lose but we have to pay through the nose for it as well...
Speaking of banks... In the real world, the best they can probably do is to record phone calls, require each person to come to the bank in person, have cameras record the person inside the bank, and require an SSN (basically a date of birth and a random number assigned at birth, where I live anyway), name, address, phone number, employer, and employer's phone number. Then they can do checks based on the information you gave, and if they find no inconsistencies, they can take the risk and assume the information is valid. This is one reason why privacy is important. The less information the bad guys have on people, the harder it is for them to pick good victims for attacks. The more information they have access to, the easier it is to impersonate.
Businessweek has an interesting take on identity fraud:
It appears that fraud is not that great -- it's down from 0.15% to 0.04% for established brick-and-mortar stores.
Yet they go on to say that fraud "fell from $1.13 billion in 1999 to $1.05 billion in 2004."
What to believe - a small percentage or a huge amount of money?
I guess that as long as it is a small percentage, nothing will be done.
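Both quoted figures can actually be true at once: the dollar total barely moved while total card volume exploded, which is what drives the rate down. A quick back-calculation (the implied volumes below are derived from the quoted numbers, not reported figures):

```python
# Reconciling the two Businessweek figures quoted above: a fraud total and a
# fraud rate together imply a total transaction volume. The volumes computed
# here are back-calculated from the quoted numbers, not reported figures.

def implied_volume(fraud_dollars, fraud_rate):
    """Total transaction volume implied by a fraud total and a fraud rate."""
    return fraud_dollars / fraud_rate

v_1999 = implied_volume(1.13e9, 0.0015)  # $1.13B at 0.15%
v_2004 = implied_volume(1.05e9, 0.0004)  # $1.05B at 0.04%
print(round(v_1999 / 1e9), round(v_2004 / 1e9))  # -> 753 2625
```

So the implied card volume grew from roughly $750 billion to roughly $2.6 trillion; the absolute fraud losses stayed about the same, which is presumably why nothing changes.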
I can't remember where I saw it, but apparently US spending on credit cards is greater than the GDP of 90% of the world's nations.
So yes the very small percentage is going to be a very large sum of money, which is why crooks are going to be interested, and financial institutions not.
@Pat Cahalan: "There is certainly a concept of owning your own personal information in a legal sense, but the concept isn't well defined." Not in the US. It is, however, decently well defined in those countries which have taken the effort to define it in their laws. It's not rocket science, and it's not too difficult to do; it's just that politicians have to want it in order for it to happen.
"I think I like this idea. I certainly like forcing they applicant to appear in person." (Bruce) Yes, and this would solve a large junk of the problem. You wouldn't even have to require a mugshot, as Ilkka suggests. A simpler suggestion is that the financial institution includes a copy of some photo ID with the application. This is actually what I'm used to.
"In Europe there have been various laws passed to stop money laundering. Unfortunatly because of them the Banks will soon have a requirment for positivly identifing customers before allowing them to open accounts..." (Clive)
This is already mandatory in various countries, and it's a good thing. I have an account with a pure internet bank. In order to comply with the law, they have another bank do the personal identification. There is no way of getting an account anonymously.
"The main reason for these laws is to reduce the amount of lost taxation, not as most people would think to stop serious crime such as drugs or terorisum."
Switzerland was forced, at the end of the '90s, to mandate financial customer identification exactly because of money laundering and tax evasion, and I have no quarrel with that. There is no right to tax evasion. The irony is just that the US is always keen on forcing tough security standards on other countries (recently biometric passports), but there is no question of implementing similar measures in the US.
P.S. Now before you all start yelling at me: I know that photo ID verification isn't perfect, it won't solve everything, and criminals can get fake ID. But it raises the bar, and it forces the fraudster to take a greater risk.
Has anyone heard when and where the first case of identity theft happened?
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.