Entries Tagged "ID cards"


REAL-ID Implementation

According to this study, REAL-ID has not only been cheaper to implement than the states estimated, but also helpful in reducing fraud.

States are finding that implementation of the 2005 REAL ID Act is much easier and less expensive than previously thought, and is a significant factor in reducing fraud. In cases like Indiana, REAL ID has significantly improved customer satisfaction, resulting in that state receiving AAMVA’s “customer satisfaction” award of the year. This is not just a win-win for national and economic security, but a win (less expensive)-win (doable)-win (fraud reduction)-win (improved customer satisfaction) for federal and state governments as well as individuals.

Moreover, 11 states are already in full compliance, well ahead of the May 2011 deadline for the 18 benchmarks. Another eight are close behind. Some states, like Delaware and Maryland, have achieved REAL ID compliance within a year. Washington State refuses REAL ID compliance, but has already implemented the most difficult benchmarks.

Perhaps most astonishing is that from the cost numbers currently available, it looks like implementation of the 18 REAL ID benchmarks in all the states may end up costing somewhere between $350 million and $750 million, significantly less than the $1 billion projected by those still seeking to change the law.

Legal presence is being checked in all but two states, an increase of 28 states since 2006. Only Washington and New Mexico still do not require legal presence to obtain a license, but Washington so significantly upgraded its license issuance in 2010 that fraudulent attempts to garner licenses in that state are now significantly reduced. Every state is now checking Social Security numbers.

This might be the first government IT project ever that came in under initial cost estimates. Perhaps the reason is that the states did not want to implement REAL-ID in 2005, so they overstated the costs.

As to fraud reduction — I’m not so sure. As the difficulty of getting a fraudulent ID increases, so does its value. I think we’ll have to wait a while longer and see how criminals adapt.

EDITED TO ADD (2/11): Cato’s Jim Harper argues that this report does not show that implementing the national ID program envisioned in the national ID law is a cost-effective success. It only assesses compliance with certain DHS-invented “benchmarks” related to REAL ID, and does so in a way that skews the results.

Posted on January 25, 2011 at 6:16 AM

The Limits of Identity Cards

Good legal paper on the limits of identity cards: Stephen Mason and Nick Bohm, “Identity and its Verification,” in Computer Law & Security Review, Volume 26, Number 1, Jan 2010.

Those faced with the problem of how to verify a person’s identity would be well advised to ask themselves the question, ‘Identity with what?’ An enquirer equipped with the answer to this question is in a position to tackle, on a rational basis, the task of deciding what evidence will be useful for the purpose. Without the answer to the question, the verification of identity becomes a sadly familiar exercise in blind compliance with arbitrary rules.

Posted on March 10, 2010 at 7:09 AM

Security and Function Creep

Security is rarely static. Technology changes both security systems and attackers. But there’s something else that changes security’s cost/benefit trade-off: how the underlying systems being secured are used. Far too often we build security for one purpose, only to find it being used for another purpose — one it wasn’t suited for in the first place. And then the security system has to play catch-up.

Take driver’s licenses, for example. Originally designed to demonstrate a credential — the ability to drive a car — they looked like other credentials: medical licenses or elevator certificates of inspection. They were wallet-sized, of course, but they didn’t have much security associated with them. Then, slowly, driver’s licenses took on a second application: they became age-verification tokens in bars and liquor stores. Of course the security wasn’t up to the task — teenagers can be extraordinarily resourceful if they set their minds to it — and over the decades driver’s licenses got photographs, tamper-resistant features (once, it was easy to modify the birth year), and technologies that made counterfeiting harder. There was little value in counterfeiting a driver’s license, but a lot of value in counterfeiting an age-verification token.

Today, US driver’s licenses are taking on yet another function: security against terrorists. The Real ID Act — the government’s attempt to make driver’s licenses even more secure — has nothing to do with driving or even with buying alcohol, and everything to do with trying to make that piece of plastic an effective way to verify that someone is not on the terrorist watch list. Whether this is a good idea, or actually improves security, is another matter entirely.

You can see this kind of function creep everywhere. Internet security systems designed for informational Web sites are suddenly expected to provide security for banking Web sites. Security systems that are good enough to protect cheap commodities from being stolen are suddenly ineffective once the price of those commodities rises high enough. Application security systems, designed for locally owned networks, are expected to work even when the application is moved to a cloud computing environment. And cloud computing security, designed for the needs of corporations, is expected to be suitable for government applications as well — maybe even military applications.

Sometimes it’s obvious that security systems designed for one environment won’t work in another. We don’t arm our soldiers the same way we arm our policemen, and we can’t take commercial vehicles and easily turn them into ones outfitted for the military. We understand that we might need to upgrade our home security system if we suddenly come into possession of a bag of diamonds. Yet many think the same security that protects our home computers will also protect voting machines, and that the same operating systems that run our businesses are suitable for military uses.

But these are all conscious decisions, and we security professionals often know better. The real problems arise when the changes happen in the background, without any conscious thought. We build a network security system that’s perfectly adequate for the threat and — like a driver’s license becoming an age-verification token — the network accrues more and more functions. But because it has already been pronounced “secure,” we can’t get any budget to re-evaluate and improve the security until after the bad guys have figured out the vulnerabilities and exploited them.

I don’t like having to play catch-up in security, but we seem doomed to keep doing so.

This essay originally appeared in the January/February 2010 issue of IEEE Security and Privacy.

Posted on February 4, 2010 at 6:35 AM

No Smiling in Driver's License Photographs

In other biometric news, four states have banned smiling in driver’s license photographs.

The serious poses are urged by DMVs that have installed high-tech software that compares a new license photo with others that have already been shot. When a new photo seems to match an existing one, the software sends alarms that someone may be trying to assume another driver’s identity.

But there’s a wrinkle in the technology: a person’s grin. Face-recognition software can fail to match two photos of the same person if facial expressions differ in each photo, says Carnegie Mellon University robotics professor Takeo Kanade.

Posted on May 29, 2009 at 11:19 AM

Michael Froomkin on Identity Cards

University of Miami law professor Michael Froomkin writes about ID cards and society in “Identity Cards and Identity Romanticism.”

This book chapter for “Lessons from the Identity Trail: Anonymity, Privacy and Identity in a Networked Society” (New York: Oxford University Press, 2009)—a forthcoming comparative examination of approaches to the regulation of anonymity edited by Ian Kerr—discusses the sources of hostility to National ID Cards in common law countries. It traces that hostility in the United States to a romantic vision of free movement and in England to an equally romantic vision of the ‘rights of Englishmen’.

Governments in the United Kingdom, United States, Australia, and other countries are responding to perceived security threats by introducing various forms of mandatory or nearly mandatory domestic civilian national identity documents. This chapter argues that these ID cards pose threats to privacy and freedom, especially in countries without strong data protection rules. The threats created by weak data protection in these new identification schemes differ significantly from previous threats, making the romantic vision a poor basis from which to critique (highly flawed) contemporary proposals.

One small excerpt:

…it is important to note that each ratchet up in an ID card regime—the introduction of a non-mandatory ID card scheme, improvements to authentication, the transition from an optional regime to a mandatory one, or the inclusion of multiple biometric identifiers—increases the need for attention to how the data collected at the time the card is created will be stored and accessed. Similarly, as ID cards become ubiquitous, a de facto necessity even when not required de jure, the card becomes the visible instantiation of a large, otherwise unseen, set of databases. If each use of the card also creates a data trail, the resulting profile becomes an ongoing temptation to both ordinary and predictive profiling.

Posted on March 4, 2009 at 7:25 AM

Cloning RFID Passports

It’s easy to clone RFID passports. (To make it clear, the attacker didn’t actually create fake passports; he just stole the data off the RFID chips.) Not that this hasn’t been done before.

I’ve long been opposed to RFID chips in passports, and have written op eds about them in the International Herald Tribune and several other papers.

EDITED TO ADD (2/11): I got some details wrong. Chris Paget, the researcher, is cloning Western Hemisphere Travel Initiative (WHTI) compliant documents such as the passport card and Electronic Drivers License (EDL), and not the passport itself. Here is the link to Paget’s talk at ShmooCon.

Posted on February 11, 2009 at 5:09 AM

Impersonation

Impersonation isn’t new. In 1556, a Frenchman was executed for impersonating Martin Guerre, and this week hackers impersonated Barack Obama on Twitter. It’s not even unique to humans: mockingbirds, Viceroy butterflies, and the mimic octopus all use impersonation as a survival strategy. For people, detecting impersonation is a hard problem for three reasons: we need to verify the identity of people we don’t know, we interact with people through “narrow” communications channels like the telephone and Internet, and we want computerized systems to do the verification for us.

Traditional impersonation involves people fooling people. It’s still done today: impersonating garbage men to collect tips, impersonating parking lot attendants to collect fees, or impersonating the French president to fool Sarah Palin. Impersonating people like policemen, security guards, and meter readers is a common criminal tactic.

These tricks work because we all regularly interact with people we don’t know. No one could successfully impersonate your brother, your best friend, or your boss, because you know them intimately. But a policeman or a parking lot attendant? That’s just someone with a badge or a uniform. But badges and ID cards only help if you know how to verify one. Do you know what a valid police ID looks like? Or how to tell a real telephone repairman’s badge from a forged one?

Still, it’s human nature to trust these credentials. We naturally trust uniforms, even though we know that anyone can wear one. When we visit a Web site, we use the professionalism of the page to judge whether or not it’s really legitimate — never mind that anyone can cut and paste graphics. Watch the next time someone other than law enforcement verifies your ID; most people barely look at it.

Impersonation is even easier over limited communications channels. On the telephone, how can you distinguish someone working at your credit card company from someone trying to steal your account details and login information? On e-mail, how can you distinguish someone from your company’s tech support from a hacker trying to break into your network — or the mayor of Paris from an impersonator? Once in a while someone frees himself from jail by faxing a forged release order to his warden. This is social engineering: impersonating someone convincingly enough to fool the victim.

These days, a lot of identity verification happens with computers. Computers are fast at computation but not very good at judgment, and can be tricked. So people can fool speed cameras by taping a fake license plate over the real one, fingerprint readers with a piece of tape, or automatic face scanners with — and I’m not making this up — a photograph of a face held in front of their own. Even the most bored policeman wouldn’t fall for any of those tricks.

This is why identity theft is such a big problem today. So much authentication happens online, with only a small amount of information: user ID, password, birth date, Social Security number, and so on. Anyone who gets that information can impersonate you to a computer, which doesn’t know any better.

Despite all of these problems, most authentication systems work most of the time. Even something as ridiculous as faxed signatures work, and can be legally binding. But no authentication system is perfect, and impersonation is always possible.

This lack of perfection is okay, though. Security is a trade-off, and any well-designed authentication system balances security with ease of use, customer acceptance, cost, and so on. More authentication isn’t always better. Banks make this trade-off when they don’t bother authenticating signatures on checks under amounts like $25,000; it’s cheaper to deal with fraud after the fact. Web sites make this trade-off when they use simple passwords instead of something more secure, and merchants make this trade-off when they don’t bother verifying your signature against your credit card. We make this trade-off when we accept police badges, Best Buy uniforms, and faxed signatures with only a cursory amount of verification.

Good authentication systems also balance false positives against false negatives. Impersonation is just one way these systems can fail; they can also fail to authenticate the real person. An ATM is better off allowing occasional fraud than preventing legitimate account holders access to their money. On the other hand, a false positive in a nuclear launch system is much more dangerous; better to not launch the missiles.

Decentralized authentication systems work better than centralized ones. Open your wallet, and you’ll see a variety of physical tokens used to identify you to different people and organizations: your bank, your credit card company, the library, your health club, and your employer, as well as a catch-all driver’s license used to identify you in a variety of circumstances. That assortment is actually more secure than a single centralized identity card: each system must be broken individually, and breaking one doesn’t give the attacker access to everything. This is one of the reasons that centralized systems like REAL-ID make us less secure.

Finally, any good authentication system uses defense in depth. Since no authentication system is perfect, there need to be other security measures in place if authentication fails. That’s why all of a corporation’s assets and information aren’t available to anyone who can bluff his way into the corporate offices. That’s why credit card companies have expert systems analyzing suspicious spending patterns. And it’s why identity theft won’t be solved by making personal information harder to steal.

We can reduce the risk of impersonation, but it will always be with us; technology cannot “solve” it in any absolute sense. Like any security, the trick is to balance the trade-offs. Too little security, and criminals withdraw money from all our bank accounts. Too much security, and when Barack Obama calls to congratulate you on your reelection, you won’t believe it’s him.

This essay originally appeared in The Wall Street Journal.

Posted on January 9, 2009 at 2:04 PM

Biometrics

Biometrics may seem new, but they’re the oldest form of identification. Tigers recognize each other’s scent; penguins recognize calls. Humans recognize each other by sight from across the room, voices on the phone, signatures on contracts and photographs on driver’s licenses. Fingerprints have been used to identify people at crime scenes for more than 100 years.

What is new about biometrics is that computers are now doing the recognizing: thumbprints, retinal scans, voiceprints, and typing patterns. There’s a lot of technology involved here, in trying to both limit the number of false positives (someone else being mistakenly recognized as you) and false negatives (you being mistakenly not recognized). Generally, a system can choose to have less of one or the other; less of both is very hard.
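The threshold trade-off can be made concrete with a toy matcher: raising the acceptance threshold reduces false positives only by increasing false negatives, and vice versa. A minimal sketch with made-up match scores (every number here is illustrative, not from any real biometric system):

```python
# Hypothetical match scores between a probe and a stored template:
# higher means "looks more like the enrolled person."
genuine_scores = [0.92, 0.85, 0.78, 0.66, 0.95]   # same person (made-up data)
impostor_scores = [0.10, 0.35, 0.55, 0.70, 0.20]  # different person (made-up data)

def error_rates(threshold):
    """Accept if score >= threshold; return (false_positive_rate, false_negative_rate)."""
    fp = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    fn = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return fp, fn

# A strict threshold trades false positives for false negatives;
# lowering both at once requires a genuinely better matcher.
strict = error_rates(0.75)   # fewer impostors accepted, more real users rejected
lenient = error_rates(0.50)  # the reverse
```

With these particular numbers the strict threshold accepts no impostors but rejects one legitimate user, while the lenient one does the opposite — the same dial every deployed system has to set somewhere.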

Biometrics can vastly improve security, especially when paired with another form of authentication such as passwords. But it’s important to understand their limitations as well as their strengths. On the strength side, biometrics are hard to forge. It’s hard to affix a fake fingerprint to your finger or make your retina look like someone else’s. Some people can mimic voices, and make-up artists can change people’s faces, but these are specialized skills.

On the other hand, biometrics are easy to steal. You leave your fingerprints everywhere you touch, your iris scan everywhere you look. Regularly, hackers have copied the prints of officials from objects they’ve touched, and posted them on the Internet. We haven’t yet had an example of a large biometric database being hacked into, but the possibility is there. Biometrics are unique identifiers, but they’re not secrets.

And a stolen biometric can fool some systems. It can be as easy as cutting out a signature, pasting it onto a contract, and then faxing the page to someone. The person on the other end doesn’t know that the signature isn’t valid because he didn’t see it fixed onto the page. Remote logins by fingerprint fail in the same way. If there’s no way to verify the print came from an actual reader, not from a stored computer file, the system is much less secure.

A more secure system is to use a fingerprint to unlock your mobile phone or computer. Because there is a trusted path from the fingerprint reader to the stored fingerprint the system uses to compare, an attacker can’t inject a previously stored print as easily as he can cut and paste a signature. A photo on an ID card works the same way: the verifier can compare the face in front of him with the face on the card.
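One way to see why the trusted path matters: if the verifier issues a fresh challenge that the trusted reader must bind to the live capture, a response recorded earlier can’t simply be replayed. A hypothetical sketch using an HMAC over the capture plus a nonce — the key handling and message format are illustrative assumptions, not any deployed protocol:

```python
import hashlib
import hmac
import os

# Assumption for illustration: a secret key provisioned into the trusted reader
# and known to the verifier.
READER_KEY = os.urandom(32)

def reader_respond(capture: bytes, nonce: bytes) -> bytes:
    """The trusted reader binds the live capture to the verifier's fresh nonce."""
    return hmac.new(READER_KEY, capture + nonce, hashlib.sha256).digest()

def verifier_check(template_on_file: bytes, nonce: bytes, response: bytes) -> bool:
    """Accept only a response computed over this template and this nonce."""
    expected = hmac.new(READER_KEY, template_on_file + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Because the verifier picks a new nonce each time, an attacker who captured yesterday’s response — the fingerprint equivalent of a cut-and-pasted signature — can’t reuse it today.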

Fingerprints on ID cards are more problematic, because the attacker can try to fool the fingerprint reader. Researchers have made false fingers out of rubber or glycerin. Manufacturers have responded by building readers that also detect pores or a pulse.

The lesson is that biometrics work best if the system can verify that the biometric came from the person at the time of verification. The biometric identification system at the gates of the CIA headquarters works because there’s a guard with a large gun making sure no one is trying to fool the system.

Of course, not all systems need that level of security. At Counterpane, the security company I founded, we installed hand geometry readers at the access doors to the operations center. Hand geometry is a hard biometric to copy, and the system was closed and didn’t allow electronic forgeries. It worked very well.

One more problem with biometrics: they don’t fail well. Passwords can be changed, but if someone copies your thumbprint, you’re out of luck: you can’t update your thumb. Passwords can be backed up, but if you alter your thumbprint in an accident, you’re stuck. The failures don’t have to be this spectacular: a voiceprint reader might not recognize someone with a sore throat, or a fingerprint reader might fail outside in freezing weather. Biometric systems need to be analyzed in light of these possibilities.

Biometrics are easy, convenient, and when used properly, very secure; they’re just not a panacea. Understanding how they work and fail is critical to understanding when they improve security and when they don’t.

This essay originally appeared in the Guardian, and is an update of an essay I wrote in 1998.

Posted on January 8, 2009 at 12:53 PM

Kip Hawley Responds to My Airport Security Antics

Kip Hawley, head of the TSA, has responded to my airport security penetration testing, published in The Atlantic.

Unfortunately, there’s not really anything to his response. It’s obvious he doesn’t want to admit that they’ve been checking IDs all this time to no purpose whatsoever, so he just emits vague generalities like a frightened squid filling the water with ink. Yes, some of the stunts in the article are silly (who cares if people fly with Hezbollah T-shirts?), so that gives him an opportunity to minimize the real issues.

Watch-lists and identity checks are important and effective security measures. We identify dozens of terrorist-related individuals a week and stop No-Flys regularly with our watch-list process.

It is simply impossible that the TSA catches dozens of terrorists every week. If it were true, the administration would be trumpeting this all over the press — it would be an amazing success story in their war on terrorism. But note that Hawley doesn’t exactly say that; he calls them “terrorist-related individuals.” Which means exactly what? People so dangerous they can’t be allowed to fly for any reason, yet so innocent they can’t be arrested — even under the provisions of the Patriot Act.

And if Secretary Chertoff is telling the truth when he says that there are only 2,500 people on the no-fly list and fewer than 16,000 people on the selectee list — they’re the ones that get extra screening — and that most of them live outside the U.S., then it is just plain impossible that the TSA identifies “dozens” of these people every week. The math just doesn’t make sense.

And I also don’t believe this:

Behavior detection works and we have 2,000 trained officers at airports today. They alert us to people who may pose a threat but who may also have items that could elude other layers of physical security.

It does work, but I don’t see the TSA doing it properly. (Fly El Al if you want to see it done properly.) But what I think Hawley is doing is engaging in a little bit of psychological manipulation. Like sky marshals, the real benefit of behavior detection isn’t whether or not you do it but whether or not the bad guys believe you’re doing it. If they think you are doing behavior detection at security checkpoints, or have sky marshals on every airplane, then you don’t actually have to do it. It’s the threat that’s the deterrent, not the actual security system.

This doesn’t impress me, either:

Items carried on the person, be they a ‘beer belly’ or concealed objects in very private areas, are why we are buying over 100 whole body imagers in upcoming months and will deploy more over time. In the meantime, we use hand-held devices that detect hydrogen peroxide and other explosives compounds as well as targeted pat-downs that require private screening.

Optional security measures don’t work, because the bad guys will opt not to use them. It’s like those air-puff machines at some airports now. They’re probably great at detecting explosive residue off clothing, but every time I have seen the machines in operation, the passengers have the option whether to go through the lane with them or another lane. What possible good is that?

The closest thing to a real response from Hawley is that the terrorists might get caught stealing credit cards.

Using stolen credit cards and false documents as a way to get around watch-lists makes the point that forcing terrorists to use increasingly risky tactics has its own security value.

He’s right about that. And, truth be told, that was my sloppiest answer during the original interview. Thinking about it afterwards, it’s far more likely that someone with a clean record and a legal credit card will buy the various plane tickets.

This is new:

Boarding pass scanners and encryption are being tested in eight airports now and more will be coming.

Ignoring for a moment that “eight airports” nonsense — unless you do it at every airport, the bad guys will choose the airport where you don’t do it to launch their attack — this is an excellent idea. The reason my attack works, the reason I can get through TSA checkpoints with a fake boarding pass, is that the TSA never confirms that the information on the boarding pass matches a legitimate reservation. If all TSA checkpoints had boarding pass scanners that connected to the airlines’ computers, this attack would not work. (Interestingly enough, I noticed exactly this system at the Dublin airport earlier this month.)
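The fix described here amounts to a lookup: the checkpoint scanner accepts a boarding pass only if its data matches a live airline reservation. A minimal sketch of that check — the names, fields, and data are entirely hypothetical:

```python
# Hypothetical reservation records the airline's system would hold,
# keyed by (passenger name, flight, date). All data is made up.
reservations = {
    ("SMITH/JOHN", "UA123", "2008-10-23"),
    ("DOE/JANE", "AA456", "2008-10-23"),
}

def pass_matches_reservation(name: str, flight: str, date: str) -> bool:
    """True only if the scanned boarding pass corresponds to a real reservation."""
    return (name, flight, date) in reservations

# A home-printed pass with a forged name fails this lookup -- exactly the
# check that a purely visual inspection at the checkpoint cannot perform.
```

The security comes from the connection to the airline’s computers, not from the scanner itself; a scanner that only validates the barcode’s formatting would be as useless as the visual check it replaces.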

Stopping the “James Bond” terrorist is truly a team effort and I whole-heartedly agree that the best way to stop those attacks is with intelligence and law enforcement working together.

This isn’t about “Stopping the ‘James Bond’ terrorist,” it’s about stopping terrorism. And if all this focus on airports, even assuming it starts working, shifts the terrorists to other targets, we haven’t gotten a whole lot of security for our money.

FYI: I did a long interview with Kip Hawley last year. If you haven’t read it, I strongly recommend you do. I pressed him on these and many other points, and didn’t get very good answers then, either.

EDITED TO ADD (10/28): Kip Hawley responds in comments. Yes, it’s him.

EDITED TO ADD (11/17): Another article on those boarding pass verifiers.

Posted on October 23, 2008 at 6:24 AM
