Identity, Authentication, and Authorization
Good essay on why they must remain distinct. I spent a chapter on this in Beyond Fear.
Biometrics may seem new, but they’re the oldest form of identification. Tigers recognize each other’s scent; penguins recognize calls. Humans recognize each other by sight from across the room, by voices on the phone, by signatures on contracts, and by photographs on driver’s licenses. Fingerprints have been used to identify people at crime scenes for more than 100 years.
What is new about biometrics is that computers are now doing the recognizing: thumbprints, retinal scans, voiceprints, and typing patterns. There’s a lot of technology involved here, in trying to limit both the number of false positives (someone else being mistakenly recognized as you) and the number of false negatives (you being mistakenly not recognized). Generally, a system can choose to have less of one or the other; less of both is very hard.
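The trade-off can be illustrated with a toy matcher. The match scores and thresholds below are hypothetical; the point is only that raising the decision threshold reduces false accepts while increasing false rejects, and vice versa.

```python
# Toy illustration (not a real biometric matcher): a verifier compares a
# match score against a threshold, and moving that threshold trades false
# accepts against false rejects. All scores here are made up.
genuine_scores = [0.81, 0.86, 0.90, 0.93, 0.97]   # same person
impostor_scores = [0.40, 0.55, 0.62, 0.70, 0.84]  # different people

def error_rates(threshold):
    # False reject: a genuine user scores below the threshold.
    false_rejects = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # False accept: an impostor scores at or above the threshold.
    false_accepts = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return false_accepts, false_rejects

for t in (0.6, 0.8, 0.95):
    fa, fr = error_rates(t)
    print(f"threshold {t:.2f}: false-accept rate {fa:.0%}, false-reject rate {fr:.0%}")
```

A lenient threshold accepts every genuine user but most impostors; a strict one rejects every impostor but also most genuine users. Reducing both at once requires a better matcher, not a better threshold.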
Biometrics can vastly improve security, especially when paired with another form of authentication such as passwords. But it’s important to understand their limitations as well as their strengths. On the strength side, biometrics are hard to forge. It’s hard to affix a fake fingerprint to your finger or make your retina look like someone else’s. Some people can mimic voices, and make-up artists can change people’s faces, but these are specialized skills.
On the other hand, biometrics are easy to steal. You leave your fingerprints everywhere you touch, your iris scan everywhere you look. Hackers have regularly copied the prints of officials from objects they’ve touched, and posted them on the Internet. We haven’t yet had an example of a large biometric database being hacked, but the possibility is there. Biometrics are unique identifiers, but they’re not secrets.
And a stolen biometric can fool some systems. It can be as easy as cutting out a signature, pasting it onto a contract, and then faxing the page to someone. The person on the other end doesn’t know that the signature isn’t valid because he didn’t see it affixed to the page. Remote logins by fingerprint fail in the same way. If there’s no way to verify that the print came from an actual reader and not from a stored computer file, the system is much less secure.
A more secure system is to use a fingerprint to unlock your mobile phone or computer. Because there is a trusted path from the fingerprint reader to the stored fingerprint the system uses to compare, an attacker can’t inject a previously stored print as easily as he can cut and paste a signature. A photo on an ID card works the same way: the verifier can compare the face in front of him with the face on the card.
Fingerprints on ID cards are more problematic, because the attacker can try to fool the fingerprint reader. Researchers have made false fingers out of rubber or glycerin. Manufacturers have responded by building readers that also detect pores or a pulse.
The lesson is that biometrics work best if the system can verify that the biometric came from the person at the time of verification. The biometric identification system at the gates of the CIA headquarters works because there’s a guard with a large gun making sure no one is trying to fool the system.
Of course, not all systems need that level of security. At Counterpane, the security company I founded, we installed hand geometry readers at the access doors to the operations center. Hand geometry is a hard biometric to copy, and the system was closed and didn’t allow electronic forgeries. It worked very well.
One more problem with biometrics: they don’t fail well. Passwords can be changed, but if someone copies your thumbprint, you’re out of luck: you can’t update your thumb. Passwords can be backed up, but if an accident alters your thumbprint, you’re stuck. The failures don’t have to be this spectacular: a voiceprint reader might not recognize someone with a sore throat, or a fingerprint reader might fail outside in freezing weather. Biometric systems need to be analyzed in light of these possibilities.
Biometrics are easy, convenient, and when used properly, very secure; they’re just not a panacea. Understanding how they work and fail is critical to understanding when they improve security and when they don’t.
This essay originally appeared in the Guardian, and is an update of an essay I wrote in 1998.
We already knew that MD5 is a broken hash function. Now researchers have successfully forged MD5-signed certificates:
Molnar, Appelbaum, and Sotirov joined forces with the European MD5 research team in mid-2008, along with Swiss cryptographer Dag Arne Osvik. They realized that the co-construction technique could be used to simultaneously generate one normal SSL certificate and one forged certificate, which could be used to sign and vouch for any other. They purchased a signature for the legitimate certificate from an established company that was still using MD5 for signing, and then applied the legitimate signature to the forged certificate. Because the legitimate and forged certificates had the same MD5 value, the legitimate signature also marked the forged one as acceptable.
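To see why a collision transfers a signature, note that a CA signs only the hash of a certificate, never the certificate bytes themselves. The sketch below is a deliberate simplification (an HMAC stands in for the CA’s RSA private key, and all names are hypothetical), but it preserves the relevant property: verification consults nothing except the MD5 digest, so a second certificate with the same digest inherits a signature purchased for the first.

```python
import hashlib
import hmac

CA_KEY = b"hypothetical-ca-signing-key"  # stand-in for the CA's private RSA key

def md5_digest(cert: bytes) -> bytes:
    return hashlib.md5(cert).digest()

def ca_sign(cert: bytes) -> bytes:
    # Only the digest is signed -- the certificate bytes never touch the key.
    return hmac.new(CA_KEY, md5_digest(cert), hashlib.sha256).digest()

def verify(cert: bytes, sig: bytes) -> bool:
    # Verification likewise depends only on md5_digest(cert), so ANY
    # certificate that collides with the signed one under MD5 verifies.
    return hmac.compare_digest(ca_sign(cert), sig)

legit = b"CN=harmless.example"          # certificate a signature is bought for
sig = ca_sign(legit)
assert verify(legit, sig)

rogue = b"CN=evil-ca.example, CA=true"  # no collision here, so it's rejected...
assert not verify(rogue, sig)
# ...but if the attackers craft `rogue` so that md5(rogue) == md5(legit),
# as the researchers did, then verify(rogue, sig) returns True.
```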
Lots and lots more articles, and the research.
This isn’t a big deal. The research is great; it’s good work, and I always like to see cryptanalytic attacks used to break real-world security systems. Making that jump is often much harder than cryptographers think.
But SSL doesn’t provide much in the way of security, so breaking it doesn’t harm security very much. Pretty much no one ever verifies SSL certificates, so there’s not much attack value in being able to forge them. And even more generally, the major risks to data on the Internet are at the endpoints—Trojans and rootkits on users’ computers, attacks against databases and servers, etc—and not in the network.
I’m not losing a whole lot of sleep because of these attacks. But—come on, people—no one should be using MD5 anymore.
EDITED TO ADD (12/31): While it is true that browsers do some SSL certificate verification, when they find an invalid certificate they display a warning dialog box which everyone—me included—ignores. There are simply too many valid sites out there with bad certificates for that warning to mean anything. This is far too true:
If you’re like me and every other user on the planet, you don’t give a shit when an SSL certificate doesn’t validate. Unfortunately, commons-httpclient was written by some pedantic fucknozzles who have never tried to fetch real-world webpages.
A reporter managed to file legal papers, transferring ownership of the Empire State Building to himself. Yes, it’s a stunt:
The office of the city register, upon receipt of the phony documents prepared by the newspaper, transferred ownership of the 102-story building from Empire State Land Associates to Nelots Properties, LLC. Nelots is “stolen” spelled backward.
To further enhance the absurdity of the heist, included on the bogus paperwork were original “King Kong” star Fay Wray as witness and Willie Sutton, the notorious bank robber, as the notary.
Still, this sort of thing has been used to commit fraud in the past, and will continue to be a source of fraud in the future. The problem is that there isn’t enough integrity checking to ensure that the person who is “selling” the real estate is actually the person who owns it.
This is a nifty little device: a credit card with an onboard one-time password generator. The idea is that the user enters his PIN every time he makes an online purchase, and enters the one-time code on the screen into the webform. The article doesn’t say if the code is time-based or just sequence-based, but in either case the credit card company will be able to verify it remotely.
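A sequence-based code of the sort described can be sketched with HOTP (RFC 4226): the card and the issuer share a secret key and a counter that advances with each use, so the issuer can recompute and verify each code remotely. The shared secret below is hypothetical; the algorithm is the standard one.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Sequence-based one-time password per RFC 4226 (HOTP)."""
    msg = struct.pack(">Q", counter)                 # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Card and issuer share the secret; each purchase advances the counter.
secret = b"hypothetical-shared-card-secret"
print(hotp(secret, 0), hotp(secret, 1))
```

With the RFC 4226 test key `b"12345678901234567890"`, the first two codes are 755224 and 287082, matching the published test vectors. A time-based variant (TOTP) would simply derive the counter from the current time.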
The idea is that this cuts down on card-not-present credit card fraud.
The efficacy of this countermeasure depends a lot on how much these new credit cards cost versus the amount of this type of fraud that happens, but in general it seems like a really good idea. Certainly better than that three-digit code printed on the back of cards these days.
According to the article, Visa will be testing this card in 2009 in the UK.
EDITED TO ADD (12/6): Several commenters point out that banks in the Netherlands have had a similar system for years.
It’s a tough security trade-off. Guests lose their hotel room keys, and the hotel staff needs to be accommodating. But at the same time, they can’t be giving out hotel room keys to anyone claiming to have lost one. Generally, hotels ask to see some ID before giving out a replacement key and, if the guest doesn’t have his wallet with him, have someone walk to the room with the key and check their ID.
This normally works pretty well, but there’s a court case in Brisbane right now about a hotel giving a room key to someone who ended up sexually attacking the woman who had rented the room.
In civil action launched yesterday, the woman alleges the man was given the spare access key to her room by a hotel staffer.
The article doesn’t say what kind of authentication the hotel requested or received.
Kip Hawley, head of the TSA, has responded to my airport security penetration testing, published in The Atlantic.
Unfortunately, there’s not really anything to his response. It’s obvious he doesn’t want to admit that they’ve been checking ID’s all this time to no purpose whatsoever, so he just emits vague generalities like a frightened squid filling the water with ink. Yes, some of the stunts in the article are silly (who cares if people fly with Hezbollah T-shirts?), so that gives him an opportunity to minimize the real issues.
Watch-lists and identity checks are important and effective security measures. We identify dozens of terrorist-related individuals a week and stop No-Flys regularly with our watch-list process.
It is simply impossible that the TSA catches dozens of terrorists every week. If it were true, the administration would be trumpeting this all over the press—it would be an amazing success story in their war on terrorism. But note that Hawley doesn’t exactly say that; he calls them “terrorist-related individuals.” Which means exactly what? People so dangerous they can’t be allowed to fly for any reason, yet so innocent they can’t be arrested—even under the provisions of the Patriot Act.
And if Secretary Chertoff is telling the truth when he says that there are only 2,500 people on the no-fly list and fewer than 16,000 people on the selectee list—they’re the ones that get extra screening—and that most of them live outside the U.S., then it is just plain impossible that the TSA identifies “dozens” of these people every week. The math just doesn’t make sense.
And I also don’t believe this:
Behavior detection works and we have 2,000 trained officers at airports today. They alert us to people who may pose a threat but who may also have items that could elude other layers of physical security.
It does work, but I don’t see the TSA doing it properly. (Fly El Al if you want to see it done properly.) But what I think Hawley is doing is engaging in a little bit of psychological manipulation. Like sky marshals, the real benefit of behavior detection isn’t whether or not you do it but whether or not the bad guys believe you’re doing it. If they think you are doing behavior detection at security checkpoints, or have sky marshals on every airplane, then you don’t actually have to do it. It’s the threat that’s the deterrent, not the actual security system.
This doesn’t impress me, either:
Items carried on the person, be they a ‘beer belly’ or concealed objects in very private areas, are why we are buying over 100 whole body imagers in upcoming months and will deploy more over time. In the meantime, we use hand-held devices that detect hydrogen peroxide and other explosives compounds as well as targeted pat-downs that require private screening.
Optional security measures don’t work, because the bad guys will opt not to use them. It’s like those air-puff machines at some airports now. They’re probably great at detecting explosive residue on clothing, but every time I have seen the machines in operation, passengers have had the option of going through the lane with them or through another lane. What possible good is that?
The closest thing to a real response from Hawley is that the terrorists might get caught stealing credit cards.
Using stolen credit cards and false documents as a way to get around watch-lists makes the point that forcing terrorists to use increasingly risky tactics has its own security value.
He’s right about that. And, truth be told, that was my sloppiest answer during the original interview. Thinking about it afterwards, it’s far more likely that someone with a clean record and a legal credit card will buy the various plane tickets.
This is new:
Boarding pass scanners and encryption are being tested in eight airports now and more will be coming.
Ignoring for a moment that “eight airports” nonsense—unless you do it at every airport, the bad guys will choose the airport where you don’t do it to launch their attack—this is an excellent idea. The reason my attack works, the reason I can get through TSA checkpoints with a fake boarding pass, is that the TSA never confirms that the information on the boarding pass matches a legitimate reservation. If all TSA checkpoints had boarding pass scanners that connected to the airlines’ computers, this attack would not work. (Interestingly enough, I noticed exactly this system at the Dublin airport earlier this month.)
Stopping the “James Bond” terrorist is truly a team effort and I whole-heartedly agree that the best way to stop those attacks is with intelligence and law enforcement working together.
This isn’t about “Stopping the ‘James Bond’ terrorist,” it’s about stopping terrorism. And if all this focus on airports, even assuming it starts working, shifts the terrorists to other targets, we haven’t gotten a whole lot of security for our money.
FYI: I did a long interview with Kip Hawley last year. If you haven’t read it, I strongly recommend you do. I pressed him on these and many other points, and didn’t get very good answers then, either.
EDITED TO ADD (10/28): Kip Hawley responds in comments. Yes, it’s him.
EDITED TO ADD (11/17): Another article on those boarding pass verifiers.
Great article from The Atlantic:
As we stood at an airport Starbucks, Schneier spread before me a batch of fabricated boarding passes for Northwest Airlines flight 1714, scheduled to depart at 2:20 p.m. and arrive at Reagan National at 5:47 p.m. He had taken the liberty of upgrading us to first class, and had even granted me “Platinum/Elite Plus” status, which was gracious of him. This status would allow us to skip the ranks of hoi-polloi flyers and join the expedited line, which is my preference, because those knotty, teeming security lines are the most dangerous places in airports: terrorists could paralyze U.S. aviation merely by detonating a bomb at any security checkpoint, all of which are, of course, entirely unsecured. (I once asked Michael Chertoff, the secretary of Homeland Security, about this. “We actually ultimately do have a vision of trying to move the security checkpoint away from the gate, deeper into the airport itself, but there’s always going to be some place that people congregate. So if you’re asking me, is there any way to protect against a person taking a bomb into a crowded location and blowing it up, the answer is no.”)
Schneier and I walked to the security checkpoint. “Counterterrorism in the airport is a show designed to make people feel better,” he said. “Only two things have made flying safer: the reinforcement of cockpit doors, and the fact that passengers know now to resist hijackers.” This assumes, of course, that al-Qaeda will target airplanes for hijacking, or target aviation at all. “We defend against what the terrorists did last week,” Schneier said. He believes that the country would be just as safe as it is today if airport security were rolled back to pre-9/11 levels. “Spend the rest of your money on intelligence, investigations, and emergency response.”
Schneier and I joined the line with our ersatz boarding passes. “Technically we could get arrested for this,” he said, but we judged the risk to be acceptable. We handed our boarding passes and IDs to the security officer, who inspected our driver’s licenses through a loupe, one of those magnifying-glass devices jewelers use for minute examinations of fine detail. This was the moment of maximum peril, not because the boarding passes were flawed, but because the TSA now trains its officers in the science of behavior detection. The SPOT program—Screening of Passengers by Observation Techniques—was based in part on the work of a psychologist who believes that involuntary facial-muscle movements, including the most fleeting “micro-expressions,” can betray lying or criminality. The training program for behavior-detection officers is one week long. Our facial muscles did not cooperate with the SPOT program, apparently, because the officer chicken-scratched onto our boarding passes what might have been his signature, or the number 4, or the letter y. We took our shoes off and placed our laptops in bins. Schneier took from his bag a 12-ounce container labeled “saline solution.”
“It’s allowed,” he said. Medical supplies, such as saline solution for contact-lens cleaning, don’t fall under the TSA’s three-ounce rule.
“What’s allowed?” I asked. “Saline solution, or bottles labeled saline solution?”
“Bottles labeled saline solution. They won’t check what’s in it, trust me.”
They did not check. As we gathered our belongings, Schneier held up the bottle and said to the nearest security officer, “This is okay, right?” “Yep,” the officer said. “Just have to put it in the tray.”
“Maybe if you lit it on fire, he’d pay attention,” I said, risking arrest for making a joke at airport security. (Later, Schneier would carry two bottles labeled saline solution—24 ounces in total—through security. An officer asked him why he needed two bottles. “Two eyes,” he said. He was allowed to keep the bottles.)
Turns out you can add anyone’s number to—or remove anyone’s number from—the Canadian do-not-call list. You can also add (but not remove) numbers to the U.S. do-not-call list, though only up to three at a time, and you have to provide a valid e-mail address to confirm the addition.
Here’s my idea. If you’re a company, add every one of your customers to the list. That way, none of your competitors will be able to cold call them.
CSRF vulnerabilities occur when a website allows an authenticated user to perform a sensitive action but does not verify that the user herself is invoking that action. The key to understanding CSRF attacks is to recognize that websites typically don’t verify that a request came from an authorized user. Instead they verify only that the request came from the browser of an authorized user. Because browsers run code sent by multiple sites, there is a danger that one site will (unbeknownst to the user) send a request to a second site, and the second site will mistakenly think that the user authorized the request.
If a user visits an attacker’s website, the attacker can force the user’s browser to send a request to a page that performs a sensitive action on behalf of the user. The target website sees a request coming from an authenticated user and happily performs some action, whether it was invoked by the user or not. CSRF attacks have been confused with Cross-Site Scripting (XSS) attacks, but they are very different. A site completely protected from XSS is still vulnerable to CSRF attacks if no protections are taken.
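The standard defense is exactly the verification step described as missing: a secret token tied to the session, embedded in the site’s own pages, that a forged cross-site request cannot supply (the browser attaches cookies automatically, but not page-embedded tokens). A framework-free sketch, with all names hypothetical:

```python
import hmac
import secrets

sessions = {}  # session_id -> per-session CSRF token (server-side state)

def start_session() -> str:
    """Create a session; the CSRF token would be embedded in the site's own forms."""
    sid = secrets.token_hex(16)
    sessions[sid] = secrets.token_hex(16)
    return sid

def handle_transfer(session_id, csrf_token, amount):
    # A vulnerable handler would check only session_id -- the cookie the
    # browser sends with every request, including attacker-initiated ones.
    # The fix: also require the token, which only this site's pages carry.
    expected = sessions.get(session_id)
    if expected is None or not hmac.compare_digest(expected, csrf_token or ""):
        return "403 Forbidden"
    return f"200 transferred {amount}"

sid = start_session()
print(handle_transfer(sid, sessions[sid], 100))  # legitimate, token included
print(handle_transfer(sid, None, 100))           # forged cross-site request
```

The forged request arrives with a valid session cookie but no token, so it is rejected even though the user is authenticated, which is precisely the distinction the paper draws.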
Paper here.