January 15, 2009
by Bruce Schneier
Chief Security Technology Officer, BT
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0901.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
Impersonation isn’t new. In 1556, a Frenchman was executed for impersonating Martin Guerre, and recently hackers impersonated Barack Obama on Twitter. It’s not even unique to humans: mockingbirds, Viceroy butterflies, and the mimic octopus all use impersonation as a survival strategy. For people, detecting impersonation is a hard problem for three reasons: we need to verify the identity of people we don’t know, we interact with people through “narrow” communications channels like the telephone and Internet, and we want computerized systems to do the verification for us.
Traditional impersonation involves people fooling people. It’s still done today: impersonating garbage men to collect tips, impersonating parking lot attendants to collect fees, or impersonating the French president to fool Sarah Palin. Impersonating people like policemen, security guards, and meter readers is a common criminal tactic.
These tricks work because we all regularly interact with people we don’t know. No one could successfully impersonate your brother, your best friend, or your boss, because you know them intimately. But a policeman or a parking lot attendant? That’s just someone with a badge or a uniform. And badges and ID cards only help if you know how to verify them. Do you know what a valid police ID looks like? Or how to tell a real telephone repairman’s badge from a forged one?
Still, it’s human nature to trust these credentials. We naturally trust uniforms, even though we know that anyone can wear one. When we visit a website, we use the professionalism of the page to judge whether or not it’s really legitimate — never mind that anyone can cut and paste graphics. Watch the next time someone other than law enforcement verifies your ID; most people barely look at it.
Impersonation is even easier over limited communications channels. On the telephone, how can you distinguish someone working at your credit card company from someone trying to steal your account details and login information? On e-mail, how can you distinguish someone from your company’s tech support from a hacker trying to break into your network — or the mayor of Paris from an impersonator? Once in a while someone frees himself from jail by faxing a forged release order to his warden. This is social engineering: impersonating someone convincingly enough to fool the victim.
These days, a lot of identity verification happens with computers. Computers are fast at computation but not very good at judgment, and can be tricked. So people can fool speed cameras by taping a fake license plate over the real one, fingerprint readers with a piece of tape, or automatic face scanners with — and I’m not making this up — a photograph of a face held in front of their own. Even the most bored policeman wouldn’t fall for any of those tricks.
This is why identity theft is such a big problem today. So much authentication happens online, with only a small amount of information: user ID, password, birth date, Social Security number, and so on. Anyone who gets that information can impersonate you to a computer, which doesn’t know any better.
Despite all of these problems, most authentication systems work most of the time. Even something as ridiculous as a faxed signature works, and can be legally binding. But no authentication system is perfect, and impersonation is always possible.
This lack of perfection is okay, though. Security is a trade-off, and any well-designed authentication system balances security with ease of use, customer acceptance, cost, and so on. More authentication isn’t always better. Banks make this trade-off when they don’t bother authenticating signatures on checks under amounts like $25,000; it’s cheaper to deal with fraud after the fact. Websites make this trade-off when they use simple passwords instead of something more secure, and merchants make this trade-off when they don’t bother verifying your signature against your credit card. We make this trade-off when we accept police badges, Best Buy uniforms, and faxed signatures with only a cursory amount of verification.
Good authentication systems also balance false positives against false negatives. Impersonation is just one way these systems can fail; they can also fail to authenticate the real person. An ATM is better off allowing occasional fraud than denying legitimate account holders access to their money. On the other hand, a false positive in a nuclear launch system is much more dangerous; better not to launch the missiles.
Decentralized authentication systems work better than centralized ones. Open your wallet, and you’ll see a variety of physical tokens used to identify you to different people and organizations: your bank, your credit card company, the library, your health club, and your employer, as well as a catch-all driver’s license used to identify you in a variety of circumstances. That assortment is actually more secure than a single centralized identity card: each system must be broken individually, and breaking one doesn’t give the attacker access to everything. This is one of the reasons that centralized systems like REAL-ID make us less secure.
Finally, any good authentication system uses defense in depth. Since no authentication system is perfect, there need to be other security measures in place if authentication fails. That’s why all of a corporation’s assets and information aren’t available to anyone who can bluff his way into the corporate offices. That’s why credit card companies have expert systems analyzing suspicious spending patterns. And it’s why identity theft won’t be solved by making personal information harder to steal.
We can reduce the risk of impersonation, but it will always be with us; technology cannot “solve” it in any absolute sense. Like any security, the trick is to balance the trade-offs. Too little security, and criminals withdraw money from all our bank accounts. Too much security, and when Barack Obama calls to congratulate you on your reelection, you won’t believe it’s him.
This essay originally appeared on The Wall Street Journal’s website:
Impersonating security guards:
Impersonating the mayor of Paris:
Faxing yourself a forged release notice from jail:
Fooling fingerprint readers:
Fooling automatic face scanners:
Impersonating Best Buy employees:
Solving identity theft:
Mistakenly not believing it’s Barack Obama:
Really interesting article on snipers.
Terrorism fear mongering; buying fake Nintendo consoles helps terrorists:
How to spot a fake Nintendo console:
I have mixed feelings about this proposal to train New York City police with machine guns. On the one hand, deploying these weapons seems like a bad idea. On the other hand, training is almost never a bad thing.
Good comments by Ed Felten on TSA behavioral screening:
Brazilian logging firms hire hackers to modify logging limits:
Clever DNS dead drops:
It’s worth reading this interview with James Bamford on the NSA:
Also worth reading is his new book:
How to bypass airport security checkpoints:
Dilbert on computer security:
Security cartoon — overly specific countermeasures at President Bush press conferences:
Mexico wants to create a registry of cell phone owners. How easy is it to steal a cell phone? I’m generally not impressed with security measures, especially expensive ones, that merely result in the bad guys changing their tactics.
Seems that voice prints are hard.
DHS reality show on ABC. I saw part of an episode: pure propaganda.
Comparing the security of electronic slot machines and electronic voting machines:
Other important differences:
1) Slot machines are used every day, 24 hours a day. Electronic voting machines are used, at most, twice a year — often less frequently.
2) Slot machines involve money. Electronic voting machines involve something much more abstract.
3) Slot machine accuracy is a non-partisan issue. For some reason I can’t fathom, electronic voting machine accuracy is seen as a political issue.
Just declassified by the NSA, this document — A History of U.S. Communications Security (Volumes I and II); the David G. Boak Lectures, National Security Agency (NSA), 1973 — is definitely worth reading. The first sections are highly redacted, but the remainder is fascinating.
Another recently released NSA document: “American Cryptology during the Cold War,” by Thomas R. Johnson.
The NSA on the origins of the NSA:
NSA patent on network tampering detection:
“Securing Cyberspace for the 44th Presidency,” by the Center for Strategic and International Studies.
Due to lack of funding, CCTV cameras aren’t being monitored. This is not surprising at all; when money is scarce, these sorts of things go unfunded. Perhaps the biggest surprise is that people thought the cameras were ever monitored — generally, they’re not.
It’s okay to bring gunpowder on an airplane; putting it in a clear plastic baggie magically makes it safe:
Shoplifting is on the rise in our bad economy:
Or maybe it’s not:
Here’s a list of the most frequently shoplifted items: small, expensive things with a long shelf life.
Matthew Alexander is a former Special Operations interrogator who worked in Iraq in 2006. His op-ed on torture is worth reading:
Also, this interview from Harper’s:
Yet another interview:
CDC bioterrorism readiness plan from 1999:
Real-world data on software security programs.
Counterfeiting is getting worse; it’s of poorer quality, so it’s easier to detect, but there’s more of it.
FBI’s new cryptanalysis contest:
This Kip Hawley quote sounds like me: “‘In the hurly-burly and the infinite variety of travel, you can end up with nonsensical results in which the T.S.A. person says, “Well, I’m just following the rules,”‘ Mr. Hawley said. ‘But if you have an enemy who is going to study your technology and your process, and if you have something they can figure out a way to get around, and they’re always figuring, then you have designed in a vulnerability.'”
The best capers of 2008:
Censorship on Google Maps:
Reporting unruly football fans via text message:
Allocating resources: financial fraud vs. terrorism. We’ve seen this problem over and over again when it comes to counterterrorism: in an effort to defend against the rare threats, we make ourselves more vulnerable to the common threats.
Movie-plot threat: terrorists using insects.
Fear sells books.
Interesting article on what sorts of files the DHS keeps on travelers.
Twitter fell to a dictionary attack because the site allowed unlimited failed login attempts. Come on, people; this is basic stuff.
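The countermeasure is equally basic: throttle or lock an account after a few failed attempts within a time window. A minimal sketch (the class name, threshold, and window below are my own illustration, not Twitter's actual fix):

```python
import time

class LoginThrottle:
    """Track failed logins per account and refuse further attempts
    once too many failures pile up inside a sliding time window."""

    def __init__(self, max_failures=5, window_seconds=300):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = {}  # account -> list of failure timestamps

    def allowed(self, account, now=None):
        """Return True if this account may attempt another login."""
        now = time.time() if now is None else now
        # Drop failures that have aged out of the window.
        recent = [t for t in self.failures.get(account, []) if now - t < self.window]
        self.failures[account] = recent
        return len(recent) < self.max_failures

    def record_failure(self, account, now=None):
        now = time.time() if now is None else now
        self.failures.setdefault(account, []).append(now)
```

A dictionary attack that needs thousands of guesses stalls after a handful, which is why even this crude mechanism raises the cost of guessing enormously.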
A security camera study from San Francisco says they don’t work:
One from London says they do:
My own writing on security cameras:
The question isn’t whether they’re useful or not, but whether their benefits are worth the costs.
It’s a good idea to encrypt USB keys — they get lost so easily — but it’s stupid to attach the key to the device:
Michael Chertoff parodied in The Onion.
We already knew that MD5 is a broken hash function. Now researchers have successfully forged MD5-signed certificates.
This isn’t a big deal. The research is great; it’s good work, and I always like to see cryptanalytic attacks used to break real-world security systems. Making that jump is often much harder than cryptographers think.
But SSL doesn’t provide much in the way of security, so breaking it doesn’t harm security very much. Pretty much no one ever verifies SSL certificates, so there’s not much attack value in being able to forge them. And even more generally, the major risks to data on the Internet are at the endpoints — Trojans and rootkits on users’ computers, attacks against databases and servers, etc. — and not in the network.
While it is true that browsers do some SSL certificate verification, when they find an invalid certificate they display a warning dialog box which everyone — me included — ignores. There are simply too many valid sites out there with bad certificates for that warning to mean anything.
This comment by Ted Dziuba is far too true: “If you’re like me and every other user on the planet, you don’t give a sh*t when an SSL certificate doesn’t validate. Unfortunately, commons-httpclient was written by some pedantic f*cknozzles who have never tried to fetch real-world webpages.” (Asterisks put in so a zillion spam/profanity blockers won’t block this entire e-mail.)
I’m not losing a whole lot of sleep because of these attacks. But — come on, people — no one should be using MD5 anymore.
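Dropping MD5 is usually a one-line change. A sketch using Python's standard hashlib (the function name is my own; any modern hash such as SHA-256 will do):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest suitable for integrity checks or signing."""
    # MD5 is collision-broken: attackers can construct two different
    # inputs with the same digest, which is what makes forging
    # MD5-signed certificates possible. Use SHA-256 or stronger.
    return hashlib.sha256(data).hexdigest()

# The deprecated version this replaces would have been:
#   hashlib.md5(data).hexdigest()
```

Note that collision resistance, not speed, is the issue: MD5 still computes fine, which is exactly why it lingers in deployed systems long after it should.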
I was interviewed on 60 Minutes about airport security. I’m particularly croggled by this quote from the CBS page: “‘…it’s why the TSA was created: to never forget,’ Hawley tells Stahl.” This quote summarizes nicely a lot about what’s wrong with the TSA. They focus much too much on the specifics of the tactics that have been used, and not enough on the broad threat.
Interview with me from CIO Insight:
Interview with me from CSO Magazine:
The account “bruceschneier” on Twitter is not me. The account “schneier” is me. I have never posted; I don’t promise that I ever will.
I spoke at the Cato Institute’s conference: “Shaping the Obama Administration’s Counterterrorism Strategy.” All of it was very interesting. Videos are on the Internet.
Biometrics may seem new, but they’re the oldest form of identification. Tigers recognize each other’s scent; penguins recognize calls. Humans recognize each other by sight from across the room, voices on the phone, signatures on contracts and photographs on driver’s licenses. Fingerprints have been used to identify people at crime scenes for more than 100 years.
What is new about biometrics is that computers are now doing the recognizing: thumbprints, retinal scans, voiceprints, and typing patterns. There’s a lot of technology involved here in trying to limit both false positives (someone else being mistakenly recognized as you) and false negatives (you being mistakenly not recognized). Generally, a system can choose to have less of one or the other; less of both is very hard.
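The trade-off is easy to see in miniature: pick a match-score threshold, and lowering it cuts false negatives while raising false positives. A toy sketch with made-up scores (not data from any real biometric system):

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """Given match scores for genuine users and impostors, return
    (false_negative_rate, false_positive_rate) at a threshold.
    Scores at or above the threshold are accepted."""
    fn = sum(1 for s in genuine_scores if s < threshold) / len(genuine_scores)
    fp = sum(1 for s in impostor_scores if s >= threshold) / len(impostor_scores)
    return fn, fp

# Hypothetical scores: genuine users tend to score high, impostors low,
# but the two distributions overlap -- that overlap is the problem.
genuine = [0.9, 0.8, 0.85, 0.6, 0.95]
impostor = [0.2, 0.3, 0.55, 0.1, 0.4]

strict = error_rates(genuine, impostor, 0.7)   # rejects a real user, admits no impostors
loose = error_rates(genuine, impostor, 0.35)   # admits impostors, rejects no real users
```

No threshold eliminates both error types at once unless the score distributions are perfectly separated, which real biometrics rarely are.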
Biometrics can vastly improve security, especially when paired with another form of authentication such as passwords. But it’s important to understand their limitations as well as their strengths. On the strength side, biometrics are hard to forge. It’s hard to affix a fake fingerprint to your finger or make your retina look like someone else’s. Some people can mimic voices, and make-up artists can change people’s faces, but these are specialized skills.
On the other hand, biometrics are easy to steal. You leave your fingerprints everywhere you touch, your iris scan everywhere you look. Hackers have repeatedly copied the prints of officials from objects they’ve touched and posted them on the Internet. We haven’t yet had an example of a large biometric database being hacked into, but the possibility is there. Biometrics are unique identifiers, but they’re not secrets.
And a stolen biometric can fool some systems. It can be as easy as cutting out a signature, pasting it onto a contract, and then faxing the page to someone. The person on the other end doesn’t know that the signature isn’t valid because he didn’t see it affixed onto the page. Remote logins by fingerprint fail in the same way. If there’s no way to verify the print came from an actual reader, not from a stored computer file, the system is much less secure.
A more secure system uses a fingerprint to unlock your mobile phone or computer. Because there is a trusted path from the fingerprint reader to the stored fingerprint the system compares against, an attacker can’t inject a previously stored print as easily as he can cut and paste a signature. A photo on an ID card works the same way: the verifier can compare the face in front of him with the face on the card.
Fingerprints on ID cards are more problematic, because the attacker can try to fool the fingerprint reader. Researchers have made false fingers out of rubber or glycerin. Manufacturers have responded by building readers that also detect pores or a pulse.
The lesson is that biometrics work best if the system can verify that the biometric came from the person at the time of verification. The biometric identification system at the gates of the CIA headquarters works because there’s a guard with a large gun making sure no one is trying to fool the system.
Of course, not all systems need that level of security. At Counterpane, the security company I founded, we installed hand geometry readers at the access doors to the operations center. Hand geometry is a hard biometric to copy, and the system was closed and didn’t allow electronic forgeries. It worked very well.
One more problem with biometrics: they don’t fail well. Passwords can be changed, but if someone copies your thumbprint, you’re out of luck: you can’t update your thumb. Passwords can be backed up, but if you alter your thumbprint in an accident, you’re stuck. The failures don’t have to be this spectacular: a voiceprint reader might not recognize someone with a sore throat, or a fingerprint reader might fail outside in freezing weather. Biometric systems need to be analyzed in light of these possibilities.
Biometrics are easy, convenient, and when used properly, very secure; they’re just not a panacea. Understanding how they work and fail is critical to understanding when they improve security and when they don’t.
This essay originally appeared in the Guardian.
It’s an update of an essay I wrote in 1998.
There are hundreds of comments — many of them interesting — on these topics on my blog. Search for the story you want to comment on, and join in.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is the Chief Security Technology Officer of BT (BT acquired Counterpane in 2006), and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2009 by Bruce Schneier.