January 15, 2009
by Bruce Schneier
Chief Security Technology Officer, BT
schneier@schneier.com
http://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0901.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
In this issue:
Impersonation
Impersonation isn’t new. In 1556, a Frenchman was executed for impersonating Martin Guerre, and recently hackers impersonated Barack Obama on Twitter. It’s not even unique to humans: mockingbirds, Viceroy butterflies, and the mimic octopus all use impersonation as a survival strategy. For people, detecting impersonation is a hard problem for three reasons: we need to verify the identity of people we don’t know, we interact with people through “narrow” communications channels like the telephone and Internet, and we want computerized systems to do the verification for us.
Traditional impersonation involves people fooling people. It’s still done today: impersonating garbage men to collect tips, impersonating parking lot attendants to collect fees, or impersonating the French president to fool Sarah Palin. Impersonating people like policemen, security guards, and meter readers is a common criminal tactic.
These tricks work because we all regularly interact with people we don’t know. No one could successfully impersonate your brother, your best friend, or your boss, because you know them intimately. But a policeman or a parking lot attendant? That’s just someone with a badge or a uniform. And badges and ID cards only help if you know how to verify them. Do you know what a valid police ID looks like? Or how to tell a real telephone repairman’s badge from a forged one?
Still, it’s human nature to trust these credentials. We naturally trust uniforms, even though we know that anyone can wear one. When we visit a website, we use the professionalism of the page to judge whether or not it’s really legitimate—never mind that anyone can cut and paste graphics. Watch the next time someone other than law enforcement verifies your ID; most people barely look at it.
Impersonation is even easier over limited communications channels. On the telephone, how can you distinguish someone working at your credit card company from someone trying to steal your account details and login information? On e-mail, how can you distinguish someone from your company’s tech support from a hacker trying to break into your network—or the mayor of Paris from an impersonator? Once in a while someone frees himself from jail by faxing a forged release order to his warden. This is social engineering: impersonating someone convincingly enough to fool the victim.
These days, a lot of identity verification happens with computers. Computers are fast at computation but not very good at judgment, and can be tricked. So people can fool speed cameras by taping a fake license plate over the real one, fingerprint readers with a piece of tape, or automatic face scanners with—and I’m not making this up—a photograph of a face held in front of their own. Even the most bored policeman wouldn’t fall for any of those tricks.
This is why identity theft is such a big problem today. So much authentication happens online, with only a small amount of information: user ID, password, birth date, Social Security number, and so on. Anyone who gets that information can impersonate you to a computer, which doesn’t know any better.
Despite all of these problems, most authentication systems work most of the time. Even something as ridiculous as a faxed signature works, and can be legally binding. But no authentication system is perfect, and impersonation is always possible.
This lack of perfection is okay, though. Security is a trade-off, and any well-designed authentication system balances security with ease of use, customer acceptance, cost, and so on. More authentication isn’t always better. Banks make this trade-off when they don’t bother authenticating signatures on checks under amounts like $25,000; it’s cheaper to deal with fraud after the fact. Websites make this trade-off when they use simple passwords instead of something more secure, and merchants make this trade-off when they don’t bother verifying your signature against your credit card. We make this trade-off when we accept police badges, Best Buy uniforms, and faxed signatures with only a cursory amount of verification.
Good authentication systems also balance false positives against false negatives. Impersonation is just one way these systems can fail; they can also fail to authenticate the real person. An ATM is better off allowing occasional fraud than denying legitimate account holders access to their money. On the other hand, a false positive in a nuclear launch system is much more dangerous; better not to launch the missiles.
Decentralized authentication systems work better than centralized ones. Open your wallet, and you’ll see a variety of physical tokens used to identify you to different people and organizations: your bank, your credit card company, the library, your health club, and your employer, as well as a catch-all driver’s license used to identify you in a variety of circumstances. That assortment is actually more secure than a single centralized identity card: each system must be broken individually, and breaking one doesn’t give the attacker access to everything. This is one of the reasons that centralized systems like REAL-ID make us less secure.
Finally, any good authentication system uses defense in depth. Since no authentication system is perfect, there need to be other security measures in place if authentication fails. That’s why a corporation’s assets and information aren’t all available to anyone who can bluff his way into the corporate offices. That’s why credit card companies have expert systems analyzing suspicious spending patterns. And it’s why identity theft won’t be solved by making personal information harder to steal.
We can reduce the risk of impersonation, but it will always be with us; technology cannot “solve” it in any absolute sense. Like any security, the trick is to balance the trade-offs. Too little security, and criminals withdraw money from all our bank accounts. Too much security, and when Barack Obama calls to congratulate you on your reelection, you won’t believe it’s him.
This essay originally appeared on The Wall Street Journal’s website:
http://online.wsj.com/article/SB123125633551557469.html
Martin Guerre:
http://en.wikipedia.org/wiki/Martin_Guerre
Obama’s Twitter account:
http://bits.blogs.nytimes.com/2009/01/05/…
Mimic octopus video:
http://news.nationalgeographic.com/news/2001/09/…
Impersonating garbage men:
http://pauldotcommunity.blogspot.com/2008/12/…
Impersonating parking lot attendants:
http://www.jsonline.com/watchdog/pi/36196199.html
Impersonating the French president:
http://news.bbc.co.uk/2/hi/americas/…
http://news.scotsman.com/world/…
Impersonating policemen:
https://www.schneier.com/blog/archives/2006/01/…
Impersonating security guards:
https://www.schneier.com/blog/archives/2006/05/…
Impersonating meter readers:
http://query.nytimes.com/gst/fullpage.html?…
Trusting uniforms:
https://www.schneier.com/blog/archives/2006/05/…
Impersonating the mayor of Paris:
http://www.nytimes.com/2008/12/22/opinion/…
Faxing yourself a forged release notice from jail:
https://www.schneier.com/blog/archives/2004/11/…
Fooling automatic speed cameras:
http://www.thesentinel.com/302730670790449.php
http://www.thenewspaper.com/news/26/2632.asp
Fooling fingerprint readers:
http://www.smh.com.au/travel/…
http://news.yahoo.com/s/afp/20090101/wl_asia_afp/…
Fooling automatic face scanners:
http://news.cnet.com/8301-17938_105-10110987-1.html
Fax signatures:
https://www.schneier.com/blog/archives/2008/06/…
Impersonating Best Buy employees:
https://www.schneier.com/blog/archives/2006/05/…
REAL-ID:
http://www.schneier.com/testimony-realid.html
Solving identity theft:
http://www.schneier.com/essay-153.html
Mistakenly not believing it’s Barack Obama:
http://news.bbc.co.uk/2/hi/also_in_the_news/7765574.stm
http://cnews.canoe.ca/CNEWS/WeirdNews/2008/12/04/…
http://althouse.blogspot.com/2008/12/…
http://www.wayodd.com/…
News
Really interesting article on snipers.
http://www.theregister.co.uk/2008/11/28/sniper_feature/
Terrorism fear mongering; buying fake Nintendo consoles helps terrorists:
https://www.schneier.com/blog/archives/2008/12/…
How to spot a fake Nintendo console:
http://news.bbc.co.uk/cbbcnews/hi/newsid_7760000/…
I have mixed feelings about this proposal to train New York City police with machine guns. On the one hand, deploying these weapons seems like a bad idea. On the other hand, training is almost never a bad thing.
http://www.nypost.com/seven/12082008/news/…
Good comments by Ed Felten on TSA behavioral screening:
http://freedom-to-tinker.com/blog/felten/…
Brazilian logging firms hire hackers to modify logging limits:
http://www.theregister.co.uk/2008/12/12/…
Clever DNS dead drops:
http://landonf.bikemonkey.org/code/security/…
http://landonf.bikemonkey.org/code/security/…
It’s worth reading this interview with James Bamford on the NSA:
http://sacurrent.com/news/story.asp?id=69490
Also worth reading is his new book:
http://www.amazon.com/exec/obidos/ASIN/0385521324/…
How to bypass airport security checkpoints:
https://www.schneier.com/blog/archives/2008/12/…
Good article on “nut allergy” fear and overreaction:
http://news.bbc.co.uk/1/hi/health/7773210.stm
There’s a lively discussion in the blog comments:
https://www.schneier.com/blog/archives/2008/12/…
Dilbert on computer security:
http://www.dilbert.com/strips/comic/2008-12-07/
Security cartoon—overly specific countermeasures at President Bush press conferences:
http://www.news.com.au/common/imagedata/…
Mexico wants to create a registry of cell phone owners. How easy is it to steal a cell phone? I’m generally not impressed with security measures, especially expensive ones, that merely result in the bad guys changing their tactics.
http://www.blacklistednews.com/?news_id=2602
Seems that voiceprint identification is hard.
http://dsc.discovery.com/news/2008/12/04/…
DHS reality show on ABC. I saw part of an episode: pure propaganda.
http://abc.go.com/primetime/homelandsecurity/index
Comparing the security of electronic slot machines and electronic voting machines:
http://media3.washingtonpost.com/wp-dyn/content/…
Other important differences:
1) Slot machines are used every day, 24 hours a day. Electronic voting machines are used, at most, twice a year—often less frequently.
2) Slot machines involve money. Electronic voting machines involve something much more abstract.
3) Slot machine accuracy is a non-partisan issue. For some reason I can’t fathom, electronic voting machine accuracy is seen as a political issue.
Just declassified by the NSA, this document—A History of U.S. Communications Security (Volumes I and II); the David G. Boak Lectures, National Security Agency (NSA), 1973—is definitely worth reading. The first sections are highly redacted, but the remainder is fascinating.
http://www.governmentattic.org/2docs/…
Another recently released NSA document: “American Cryptology during the Cold War,” by Thomas R. Johnson.
http://www.gwu.edu/~nsarchiv/NSAEBB/NSAEBB260/index.htm
The NSA on the origins of the NSA:
https://www.nsa.gov/publications/publi00015.cfm
NSA patent on network tampering detection:
http://www.itworld.com/networking/59610/…
http://patft.uspto.gov/netacgi/nph-Parser?…
“Securing Cyberspace for the 44th Presidency,” by the Center for Strategic and International Studies.
http://www.csis.org/component/option,com_csis_pubs/…
Due to lack of funding, CCTV cameras aren’t being monitored. This is not surprising at all; when money is scarce, these sorts of things go unfunded. Perhaps the biggest surprise is that people thought the cameras were ever monitored—generally, they’re not.
http://www.dailymail.co.uk/news/article-1095609/…
It’s okay to bring gunpowder on an airplane; putting it in a clear plastic baggie magically makes it safe:
http://wildbee.org/2008/12/09/…
Shoplifting is on the rise in our bad economy:
http://www.nytimes.com/2008/12/23/us/23shoplift.html
Or maybe it’s not:
http://www.slate.com/id/2207504/
Here’s a list of the most frequently shoplifted items: small, expensive things with a long shelf life.
https://www.schneier.com/blog/archives/2005/06/…
Matthew Alexander is a former Special Operations interrogator who worked in Iraq in 2006. His op-ed on torture is worth reading:
http://www.washingtonpost.com/wp-dyn/content/…
Also, this interview from Harper’s:
http://harpers.org/archive/2008/12/hbc-90004036
Excerpts:
https://www.schneier.com/blog/archives/2008/12/…
Yet another interview:
http://paulharrisonline.blogspot.com/2008/12/…
CDC bioterrorism readiness plan from 1999:
http://www.cdc.gov/ncidod/dhqp/pdf/bt/…
Real-world data on software security programs.
http://www.informit.com/articles/article.aspx?p=1315431
Counterfeiting is getting worse: the counterfeits are of poorer quality, so they’re easier to detect, but there are more of them.
http://www.usatoday.com/news/nation/…
FBI’s new cryptanalysis contest:
http://www.fbi.gov/page2/dec08/code_122908.html
This Kip Hawley quote sounds like me: “‘In the hurly-burly and the infinite variety of travel, you can end up with nonsensical results in which the T.S.A. person says, “Well, I’m just following the rules,”’ Mr. Hawley said. ‘But if you have an enemy who is going to study your technology and your process, and if you have something they can figure out a way to get around, and they’re always figuring, then you have designed in a vulnerability.’”
http://www.nytimes.com/2008/12/30/business/30road.html
The best capers of 2008:
http://blog.wired.com/27bstroke6/2008/12/capers.html
Censorship on Google Maps:
https://www.schneier.com/blog/archives/2009/01/…
Reporting unruly football fans via text message:
http://www.usatoday.com/sports/football/nfl/…
Allocating resources: financial fraud vs. terrorism. We’ve seen this problem over and over again when it comes to counterterrorism: in an effort to defend against the rare threats, we make ourselves more vulnerable to the common threats.
https://www.schneier.com/blog/archives/2009/01/…
Movie-plot threat: terrorists using insects.
http://blog.wired.com/defense/2009/01/…
Fear sells books.
Interesting article on what sorts of files the DHS keeps on travelers.
http://current.newsweek.com/budgettravel/2008/12/…
Twitter fell to a dictionary attack because the site allowed unlimited failed login attempts. Come on, people; this is basic stuff. (A minimal lockout sketch follows the links below.)
http://blog.wired.com/27bstroke6/2009/01/…
http://www.codinghorror.com/blog/archives/001206.html
Twitter responds:
http://al3x.net/2009/01/12/…
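The fix is old and simple: throttle logins, or temporarily lock an account, after a handful of failed guesses. Here is a minimal sketch in Python; the account name, the limits, and the lockout window are made-up illustration values, not anything Twitter actually uses.

  import time
  from collections import defaultdict

  # Hypothetical sketch: lock an account after too many failed logins within
  # a window. The limits below are arbitrary example values.
  MAX_FAILURES = 5
  WINDOW_SECONDS = 15 * 60

  _failures = defaultdict(list)  # username -> timestamps of recent failed logins

  def login_allowed(username):
      """Return False while the account has too many recent failures."""
      now = time.time()
      recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
      _failures[username] = recent
      return len(recent) < MAX_FAILURES

  def record_failure(username):
      _failures[username].append(time.time())

  # A dictionary attack locks itself out after a handful of guesses:
  for guess in ("password", "letmein", "123456", "qwerty", "monkey", "dragon"):
      if login_allowed("victim"):
          record_failure("victim")  # pretend every guess is wrong
      else:
          print("locked out; '%s' is never even checked against the password" % guess)

Real deployments add CAPTCHAs, per-IP throttling, and alerting on top of this, but even this much defeats a naive dictionary attack.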
A security camera study from San Francisco says they don’t work:
http://www.citris-uc.org/news/SFcamerastudy
One from London says they do:
http://www.telegraph.co.uk/news/newstopics/politics/…
My own writing on security cameras:
http://www.schneier.com/essay-225.html
The question isn’t whether they’re useful or not, but whether their benefits are worth the costs.
It’s a good idea to encrypt USB drives—they get lost so easily—but it’s stupid to attach the encryption key to the drive:
http://www.lep.co.uk/news/…
Michael Chertoff parodied in The Onion.
http://www.theonion.com/content/news/…
Forging SSL Certificates
We already knew that MD5 is a broken hash function. Now researchers have successfully forged MD5-signed certificates.
This isn’t a big deal. The research is great; it’s good work, and I always like to see cryptanalytic attacks used to break real-world security systems. Making that jump is often much harder than cryptographers think.
But SSL doesn’t provide much in the way of security, so breaking it doesn’t harm security very much. Pretty much no one ever verifies SSL certificates, so there’s not much attack value in being able to forge them. And even more generally, the major risks to data on the Internet are at the endpoints—Trojans and rootkits on users’ computers, attacks against databases and servers, etc.—not in the network.
While it is true that browsers do some SSL certificate verification, when they find an invalid certificate they display a warning dialog box, which everyone—me included—ignores. There are simply too many valid sites out there with bad certificates for that warning to mean anything.
This comment by Ted Dziuba is far too true: “If you’re like me and every other user on the planet, you don’t give a sh*t when an SSL certificate doesn’t validate. Unfortunately, commons-httpclient was written by some pedantic f*cknozzles who have never tried to fetch real-world webpages.” (Asterisks put in so a zillion spam/profanity blockers won’t block this entire e-mail.)
I’m not losing a whole lot of sleep because of these attacks. But—come on, people—no one should be using MD5 anymore.
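If you want to check whether a certificate you rely on is still signed with MD5, inspecting its signature hash algorithm is enough. Here is a minimal sketch, assuming Python and the third-party “cryptography” package; the host name is just a placeholder.

  import ssl
  from cryptography import x509
  from cryptography.hazmat.backends import default_backend

  # Hypothetical sketch: fetch a server's certificate and flag weak signature
  # hashes. The host below is only an example.
  HOST, PORT = "www.example.com", 443

  pem = ssl.get_server_certificate((HOST, PORT))
  cert = x509.load_pem_x509_certificate(pem.encode("ascii"), default_backend())

  hash_alg = cert.signature_hash_algorithm   # None for some signature types
  name = hash_alg.name if hash_alg is not None else "unknown"
  if name in ("md5", "sha1"):
      print("certificate signed with %s -- a CA should not be doing this" % name)
  else:
      print("certificate signature hash: %s" % name)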
http://news.cnet.com/8301-1009_3-10129693-83.html
http://blogs.zdnet.com/security/?p=2339
http://phreedom.org/research/rogue-ca/
http://www.theregister.co.uk/2008/12/30/ssl_spoofing/
http://gizmodo.com/5120924/…
http://arstechnica.com/news.ars/post/…
The research:
http://www.win.tue.nl/hashclash/rogue-ca/
http://events.ccc.de/congress/2008/Fahrplan/track/…
http://events.ccc.de/congress/2008/Fahrplan/…
That quote:
http://teddziuba.com/2008/12/…
Schneier News
I was interviewed on 60 Minutes about airport security. I’m particularly croggled by this quote from the CBS page: “‘…it’s why the TSA was created: to never forget,’ Hawley tells Stahl.” This quote summarizes nicely a lot about what’s wrong with the TSA. They focus much too much on the specifics of the tactics that have been used, and not enough on the broad threat.
http://www.cbsnews.com/stories/2008/12/18/60minutes/…
Interview with me from CIO Insight:
http://www.cioinsight.com/c/a/Expert-Voices/…
Interview with me from CSO Magazine:
http://www.csoonline.com/article/473663/…
The account “bruceschneier” on Twitter is not me. The account “schneier” is me. I have never posted; I don’t promise that I ever will.
http://twitter.com/bruceschneier
http://twitter.com/schneier
I spoke at the Cato Institute’s conference: “Shaping the Obama Administration’s Counterterrorism Strategy.” All of it was very interesting. Videos are on the Internet.
http://www.cato.org/events/counterterrorism/index.html
Biometrics
Biometrics may seem new, but they’re the oldest form of identification. Tigers recognize each other’s scent; penguins recognize calls. Humans recognize each other by sight from across the room, by voice on the phone, by signature on contracts, and by photograph on driver’s licenses. Fingerprints have been used to identify people at crime scenes for more than 100 years.
What is new about biometrics is that computers are now doing the recognizing: thumbprints, retinal scans, voiceprints, and typing patterns. There’s a lot of technology involved here, in trying to both limit the number of false positives (someone else being mistakenly recognized as you) and false negatives (you being mistakenly not recognized). Generally, a system can choose to have less of one or the other; less of both is very hard.
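To make that trade-off concrete, here is a toy sketch in Python with made-up similarity scores: moving a single match threshold up or down trades false positives for false negatives, which is why reducing both at once is so hard.

  # Toy illustration with invented numbers: a matcher produces a similarity
  # score, and one threshold decides between accept and reject.
  genuine_scores  = [0.91, 0.86, 0.78, 0.95, 0.69, 0.88]   # same person
  impostor_scores = [0.32, 0.41, 0.73, 0.28, 0.55, 0.61]   # different people

  def rates(threshold):
      false_negatives = sum(s < threshold for s in genuine_scores)
      false_positives = sum(s >= threshold for s in impostor_scores)
      return (false_positives / len(impostor_scores),
              false_negatives / len(genuine_scores))

  for t in (0.5, 0.7, 0.9):
      fpr, fnr = rates(t)
      print("threshold %.1f: false positives %.0f%%, false negatives %.0f%%"
            % (t, fpr * 100, fnr * 100))

With these numbers, a low threshold lets half the impostors in but never rejects the real user; a high threshold shuts out the impostors but rejects the real user most of the time.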
Biometrics can vastly improve security, especially when paired with another form of authentication such as passwords. But it’s important to understand their limitations as well as their strengths. On the strength side, biometrics are hard to forge. It’s hard to affix a fake fingerprint to your finger or make your retina look like someone else’s. Some people can mimic voices, and make-up artists can change people’s faces, but these are specialized skills.
On the other hand, biometrics are easy to steal. You leave your fingerprints everywhere you touch, your iris scan everywhere you look. Hackers have regularly copied the prints of officials from objects they’ve touched and posted them on the Internet. We haven’t yet had an example of a large biometric database being hacked into, but the possibility is there. Biometrics are unique identifiers, but they’re not secrets.
And a stolen biometric can fool some systems. It can be as easy as cutting out a signature, pasting it onto a contract, and then faxing the page to someone. The person on the other end doesn’t know that the signature isn’t valid because he didn’t see it affixed to the page. Remote logins by fingerprint fail in the same way. If there’s no way to verify that the print came from an actual reader rather than from a stored computer file, the system is much less secure.
A more secure system is to use a fingerprint to unlock your mobile phone or computer. Because there is a trusted path from the fingerprint reader to the stored fingerprint the system uses to compare, an attacker can’t inject a previously stored print as easily as he can cut and paste a signature. A photo on an ID card works the same way: the verifier can compare the face in front of him with the face on the card.
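One way to build that kind of trusted path, sketched below purely as an illustration and not as a description of any particular product, is to give the reader a secret key shared with the verifier and have it authenticate each captured template together with a fresh challenge. A stored fingerprint file replayed without the key, or with a stale challenge, is then rejected.

  import hmac, hashlib, secrets

  # Hypothetical sketch of a trusted path from reader to verifier. The reader
  # holds a secret key and MACs each captured template with a fresh challenge,
  # so a replayed fingerprint file without the key fails verification.
  READER_KEY = secrets.token_bytes(32)        # provisioned into the reader

  def reader_respond(template, challenge, key=READER_KEY):
      """What a genuine reader returns after a live capture."""
      mac = hmac.new(key, challenge + template, hashlib.sha256).digest()
      return template, mac

  def verifier_check(enrolled, response, challenge, key=READER_KEY):
      template, mac = response
      expected = hmac.new(key, challenge + template, hashlib.sha256).digest()
      return hmac.compare_digest(mac, expected) and template == enrolled

  enrolled = b"alice-fingerprint-template"    # stored at enrollment time
  challenge = secrets.token_bytes(16)         # fresh for each login attempt

  # A live capture through the real reader is accepted:
  print(verifier_check(enrolled, reader_respond(enrolled, challenge), challenge))

  # A stolen template replayed without the reader's key is rejected:
  forged_mac = hmac.new(b"guess", challenge + enrolled, hashlib.sha256).digest()
  print(verifier_check(enrolled, (enrolled, forged_mac), challenge))

The biometric itself is still not a secret; the security comes from the keyed reader and the freshness of the challenge.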
Fingerprints on ID cards are more problematic, because the attacker can try to fool the fingerprint reader. Researchers have made false fingers out of rubber or glycerin. Manufacturers have responded by building readers that also detect pores or a pulse.
The lesson is that biometrics work best if the system can verify that the biometric came from the person at the time of verification. The biometric identification system at the gates of the CIA headquarters works because there’s a guard with a large gun making sure no one is trying to fool the system.
Of course, not all systems need that level of security. At Counterpane, the security company I founded, we installed hand geometry readers at the access doors to the operations center. Hand geometry is a hard biometric to copy, and the system was closed and didn’t allow electronic forgeries. It worked very well.
One more problem with biometrics: they don’t fail well. Passwords can be changed, but if someone copies your thumbprint, you’re out of luck: you can’t update your thumb. Passwords can be backed up, but if you alter your thumbprint in an accident, you’re stuck. The failures don’t have to be this spectacular: a voiceprint reader might not recognize someone with a sore throat, or a fingerprint reader might fail outside in freezing weather. Biometric systems need to be analyzed in light of these possibilities.
Biometrics are easy, convenient, and when used properly, very secure; they’re just not a panacea. Understanding how they work and fail is critical to understanding when they improve security and when they don’t.
This essay originally appeared in the Guardian.
http://www.guardian.co.uk/technology/2009/jan/08/…
It’s an update of an essay I wrote in 1998.
http://www.schneier.com/…
Comments from Readers
There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is the Chief Security Technology Officer of BT (BT acquired Counterpane in 2006), and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2009 by Bruce Schneier.