Impersonation

Impersonation isn’t new. In 1560, a Frenchman was executed for impersonating Martin Guerre, and this week hackers impersonated Barack Obama on Twitter. It’s not even unique to humans: mockingbirds, Viceroy butterflies, and the mimic octopus all use impersonation as a survival strategy. For people, detecting impersonation is a hard problem for three reasons: we need to verify the identity of people we don’t know, we interact with people through “narrow” communications channels like the telephone and Internet, and we want computerized systems to do the verification for us.

Traditional impersonation involves people fooling people. It’s still done today: impersonating garbage men to collect tips, impersonating parking lot attendants to collect fees, or impersonating the French president to fool Sarah Palin. Impersonating people like policemen, security guards, and meter readers is a common criminal tactic.

These tricks work because we all regularly interact with people we don’t know. No one could successfully impersonate your brother, your best friend, or your boss, because you know them intimately. But a policeman or a parking lot attendant? That’s just someone with a badge or a uniform. And badges and ID cards only help if you know how to verify them. Do you know what a valid police ID looks like? Or how to tell a real telephone repairman’s badge from a forged one?

Still, it’s human nature to trust these credentials. We naturally trust uniforms, even though we know that anyone can wear one. When we visit a Web site, we use the professionalism of the page to judge whether or not it’s really legitimate—never mind that anyone can cut and paste graphics. Watch the next time someone other than law enforcement verifies your ID; most people barely look at it.

Impersonation is even easier over limited communications channels. On the telephone, how can you distinguish someone working at your credit card company from someone trying to steal your account details and login information? On e-mail, how can you distinguish someone from your company’s tech support from a hacker trying to break into your network—or the mayor of Paris from an impersonator? Once in a while someone frees himself from jail by faxing a forged release order to his warden. This is social engineering: impersonating someone convincingly enough to fool the victim.

These days, a lot of identity verification happens with computers. Computers are fast at computation but not very good at judgment, and can be tricked. So people can fool speed cameras by taping a fake license plate over the real one, fingerprint readers with a piece of tape, or automatic face scanners with—and I’m not making this up—a photograph of a face held in front of their own. Even the most bored policeman wouldn’t fall for any of those tricks.

This is why identity theft is such a big problem today. So much authentication happens online, with only a small amount of information: user ID, password, birth date, Social Security number, and so on. Anyone who gets that information can impersonate you to a computer, which doesn’t know any better.

Despite all of these problems, most authentication systems work most of the time. Even something as ridiculous as a faxed signature works, and can be legally binding. But no authentication system is perfect, and impersonation is always possible.

This lack of perfection is okay, though. Security is a trade-off, and any well-designed authentication system balances security with ease of use, customer acceptance, cost, and so on. More authentication isn’t always better. Banks make this trade-off when they don’t bother authenticating signatures on checks under amounts like $25,000; it’s cheaper to deal with fraud after the fact. Web sites make this trade-off when they use simple passwords instead of something more secure, and merchants make this trade-off when they don’t bother verifying your signature against your credit card. We make this trade-off when we accept police badges, Best Buy uniforms, and faxed signatures with only a cursory amount of verification.

Good authentication systems also balance false positives against false negatives. Impersonation is just one way these systems can fail; they can also fail to authenticate the real person. An ATM is better off allowing occasional fraud than preventing legitimate account holders access to their money. On the other hand, a false positive in a nuclear launch system is much more dangerous; better to not launch the missiles.

Decentralized authentication systems work better than centralized ones. Open your wallet, and you’ll see a variety of physical tokens used to identify you to different people and organizations: your bank, your credit card company, the library, your health club, and your employer, as well as a catch-all driver’s license used to identify you in a variety of circumstances. That assortment is actually more secure than a single centralized identity card: each system must be broken individually, and breaking one doesn’t give the attacker access to everything. This is one of the reasons that centralized systems like REAL-ID make us less secure.

Finally, any good authentication system uses defense in depth. Since no authentication system is perfect, there need to be other security measures in place if authentication fails. That’s why all of a corporation’s assets and information aren’t available to anyone who can bluff his way into the corporate offices. That’s why credit card companies have expert systems analyzing suspicious spending patterns. And it’s why identity theft won’t be solved by making personal information harder to steal.

We can reduce the risk of impersonation, but it will always be with us; technology cannot “solve” it in any absolute sense. Like any security, the trick is to balance the trade-offs. Too little security, and criminals withdraw money from all our bank accounts. Too much security and when Barack Obama calls to congratulate you on your reelection, you won’t believe it’s him.

This essay originally appeared in The Wall Street Journal.

Posted on January 9, 2009 at 2:04 PM

Comments

Anonymous January 9, 2009 2:47 PM

“… hackers impersonated brown octopus….”

@Bruce

Never mind the missing phrases, I want to hear THIS story.

Davi Ottenheimer January 9, 2009 3:06 PM

“Too much security and when Barack Obama calls to congratulate you on your reelection, you won’t believe it’s him.”

Ha. Like a deny all rule, except it isn’t…

Strange to me that you toss in the “because you know them intimately” at the start, but then pretty much leave alone the importance of trend analysis for impersonation. The flaws you point out are mostly related to single point-in-time assessments, which is why databases are being set up to compensate for the “intimately” aspect of recognition.

So the flaw with regard to recognizing Obama could as easily be blamed on a lack of intimacy as on a system set to be “too secure”.

Anonymous January 9, 2009 3:21 PM

“… hackers impersonated brown octopus….”

Well, it’s a C story. And, no shit, it begins with the squid….

fearing the flames January 9, 2009 3:42 PM

What factors could one use to authenticate God? Or that someone is His authorized agent?

Jason January 9, 2009 3:48 PM

Re: Fear the flames

You ask him what your favorite color is, and your mother’s maiden name, and your first pet’s name, and your social security number, and how many hairs are on your head.

Tom Davis January 9, 2009 3:56 PM

While most people (other than law enforcement agents) will give only a cursory glance at a driver’s license presented as an identification token, everyone who accepts credit or charge cards will give those identification tokens much more than a cursory glance. Every time a credit card is used, it is authenticated, and while most store clerks do not check the signatures, if a card is lost or stolen, most people notify their banks quickly enough to prevent its use for fraudulent purposes (it’s only when the card itself is not required that we run into problems).

And while a driver’s license address may be out of date, it is almost certain that the bank has a very good idea of the cardholder’s residence. Though in these days of Web-based statements and payment options, that is perhaps less true than it was in 1992.

SumDumGuy January 9, 2009 4:10 PM

FWIW – signatures on credit cards are not for authentication, they are proof of contractual acceptance between card issuer and card user. That’s why clerks are instructed to make you sign your card if you attempt to use one that is not signed (and why they are also instructed to refuse or confiscate cards which have non-signature phrases in that field, like “check id”).

In fact, the standard merchant agreement requires the merchant to accept a card regardless of whether the signature at point-of-sale matches the signature on the back of the card. They are only allowed to refuse a card under those circumstances if they have OTHER reason to suspect fraud. It is perfectly legit to lend your credit card to someone else for them to use (in which case they should be signing their name at the point-of-sale terminals, not forging your signature).

I’m not saying that practice and theory always agree, just what the theory (and merchant contracts) say.

SumDumGuy January 9, 2009 4:14 PM

One more point in response to an implicit assumption in Tom Davis’s post – typical merchant agreements forbid the merchant from requiring additional identification in order to use the charge card. They are allowed to ask for additional ID, they just aren’t allowed to require it unless they have OTHER reason to suspect fraud.

Davi Ottenheimer January 9, 2009 4:44 PM

@ fearing the flames

First of all, which god? Second, since authentication of god is tied to the concept of existence, I think Descartes probably has what you’re looking for in his Ontological Argument — when you see supreme perfection, you have proof of what you sought to find. Good luck trying to make that fit on a token.

rubberman January 9, 2009 8:33 PM

How does one authenticate God or His authorized agent? Hmmmm – first, can they walk on water (try my new flotation shoes)? Raise the dead (I know CPR)? Feed multitudes with a couple of loaves of bread and fishes (call Domino’s and use my credit card)? Grant Enlightenment with a touch or thought – oops, that’s the Buddha. 🙂

fusion January 9, 2009 10:36 PM

Rubberman…
If I have it straight, the Buddha’s pitch was that you have to experience enlightenment for yourself…not a transaction…?

Henning Makholm January 10, 2009 4:45 AM

Wouldn’t the easy way to verify you’re talking to the real Obama be to offer to call him back through the White House switchboard? If the operator there actually puts you through to someone, it’ll probably be the real guy. However, do look up the number for the White House yourself; don’t be fooled if the alleged Obama offers to dictate it to you…

Okay, this will only work after the inauguration. Right now there is no obvious way to obtain a trusted verification callback number — for example, http://www.barackobaba.com seems to have been taken over by a t-shirt merchant, with no contact details in evidence on the website.

Bill January 10, 2009 12:15 PM

So, fusion to put it another way: Become enlightened and you will become one with the authentication of god.

Harry January 10, 2009 1:54 PM

Faxed signatures are only placeholders that have to be followed up by real paper with real signatures.

Which can be signed in counterpart. With a bare signature, no witness or notary required. Without each page being signed or initialed.

fusion January 10, 2009 5:14 PM

Bill,

As I read it, Zen enlightenment denies separation of the idea of the person from any idea of a God…but the interesting thing here is that there is no separate authentication; it’s just that if you experience it you’ll know it…

There doesn’t seem to be any analogous idea within cryptography. Yet Zen students have been doing this for centuries..

R January 10, 2009 11:36 PM

We still arguably don’t have the best authentication we could for the money. Browser UI for telling users what site they’re talking to is weak (though it’s getting better). We could use better protocols underneath (challenge-response auth, trustable UI, generating unique passwords per domain by hashing) to address some of the weak points of password auth. It would be super-expensive to deploy, but not expensive to use day-to-day.
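For what it’s worth, the per-domain hashing idea has been prototyped in research tools like Stanford’s PwdHash. Here is a minimal sketch of the concept; the function name and parameters are illustrative assumptions, not any deployed protocol:

```python
import hashlib
import hmac

def site_password(master_password: str, domain: str, length: int = 16) -> str:
    """Derive a distinct password for each domain from one master secret.

    A password phished at a look-alike domain reveals nothing about the
    password used at the real domain, or about the master secret.
    """
    # Key an HMAC with the master secret and use the domain as the
    # message, so every domain yields an unrelated-looking digest.
    digest = hmac.new(master_password.encode("utf-8"),
                      domain.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return digest[:length]

# One secret, stable but unrelated passwords per site:
print(site_password("correct horse battery staple", "example.com"))
print(site_password("correct horse battery staple", "examp1e.com"))
```

A production tool would add a slow key-derivation function (e.g., PBKDF2) and re-encode the digest to meet each site’s password rules, but the security property is the same.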

R January 10, 2009 11:38 PM

(Of course, not all the most urgent problems like this are computer- or even auth-related — helping people get alerted to scams, sort of the way we do with phishing, wouldn’t hurt anything.)

John Waters January 11, 2009 1:13 AM

The biggest threat that I have seen so far are people with CISSPs impersonating security experts.

Clive Robinson January 11, 2009 4:39 AM

@ Bruce,

Just a point, computers are not humans so why use expressions like,

“Computers are fast at computation but not very good at judgment”

“… can impersonate you to a computer, which doesn’t know any better.”

Computers do not and cannot exercise judgment; they simply follow a set of rules programmed into them (usually, but not always, by humans).

Likewise, computers do not know anything; they simply have data, which is given computational value by metadata built into the set of rules.

Incorrectly ascribing human traits to computers makes people believe they are capable of these traits when they are not.

Which, combined with the perception that computers are infallible, in turn gives rise to people wanting,

“… computerized systems to do the verification for us.”

So that they are absolved of any blame for failure to act in an appropriate way.

But worse, the failings are very rarely passed back to those who developed the rules for the computer.

So the failure of the rules becomes a known vulnerability that can be exploited either directly or indirectly.

Clive Robinson January 11, 2009 4:51 AM

@ fusion,

“… it’s just that if you experience it you’ll know it…

There doesn’t seem to be any analogous idea within cryptography.”

How about a “brute force” or “British Museum” search on a block or stream cipher, when the plaintext jumps out before you?

bob!! January 11, 2009 9:51 PM

@Clive Robinson

“Incorrectly ascribing human traits to computers makes people belive they are capable of these traits when they are not.”

Actually, people already believe they are capable of these traits – nothing Bruce says is going to change that. So he’s right in pointing out that computers are sadly lacking in these human traits.

AppSec January 12, 2009 8:48 AM

@Clive and bob!!

Johnny Five is alive!

What makes a computer different from a human with regard to exercising judgment? One is self-programmed based on previous experiences; the other is programmed by others based on their rules. Put together an AI engine (a learning-based system) and a rules engine for how they feel, feed that into an actions engine, and you have how a human determines their actions as well.

InfoSecWanderer January 12, 2009 9:43 AM

Speaking of the Best Buy affair, I once made the mistake of going to Target in a bright red polo shirt two days before Christmas.

Couldn’t go five feet without being asked for assistance by too-trusting customers. I was lucky to get out of there before they promoted me to district manager.

Chas January 12, 2009 12:48 PM

Interesting to me is that the purpose of a “uniform” is to make all who wear it appear the same, and it is this very characteristic that makes them easier to impersonate. A city cop is meant to be acting as an agent of the city – not to be acting on his personal whim. So it kind of makes sense that all city cops look as much alike as possible; it takes the personal issues out of the equation somewhat.

Yet it is this sameness that makes impersonation possible. People can impersonate cops by dressing in their uniform, but there is no “my brother” uniform. He has a unique form.

It seems to me that both sides of this discussion have a security purpose and (as in many places in security) there’s a tradeoff and tension.

No solutions come to mind, just a statement of the problem from another perspective.

Clive Robinson January 12, 2009 2:10 PM

@ AppSec,

“Put an AI engine (learning based system) and a rules engine for how they feel and feed that into an actions engine and you have how a human determines their actions as well.”

The problem with AI is that while the A is true, the “I” is not 😉

And it cannot learn (except by rote); essentially all working AI engines are rule-based.

The new rules it may come up with are at best statistically based random selections. There is no understanding, no ability to take the correlations and find the reasons that underlie them.

That intuitive leap is beyond AI now, as it has been in the past, and I suspect it will be for the foreseeable future.

And because of this, a rule-based engine can only apply rules to the pre-chosen input streams, nothing more (it cannot seek out new, unknown information types).

Perhaps the first sign of real “I” in an entity is the ability to make tools to test the environment around itself in ways that were originally unknown to it.

RH January 12, 2009 2:13 PM

Something I’ve wanted to see is a “super ID”… as in “above identification.” Think of it as a public key signing algorithm for IDs.

Roughly speaking, you get a “super ID.” It cannot be used for anything except signing smaller IDs. For example, if your ID number is 12345, you might sign a new ID, 12345/1, as “me at work.” This number would be just as verifiable as the original ID, but couldn’t be used to open a credit card.

The problem with computer verification is not that it limits verification, but rather that we want to use computer verification yet refuse to take advantage of the features it allows. We insist that an SSN is “you,” yet it must be given out to many people.
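A hypothetical sketch of that signing hierarchy, assuming the third-party Python cryptography package; the ID format and function names are made up for illustration:

```python
# Sketch of a "super ID": a master key whose only job is signing
# scoped sub-IDs. Each sub-ID verifies against the published master
# public key but is useless for anything else (e.g., opening credit).
# Requires the third-party package:  pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

master_key = Ed25519PrivateKey.generate()   # held only by the person
master_public = master_key.public_key()     # published for verifiers

def issue_sub_id(master, root_id, scope):
    """Sign a scoped sub-ID such as '12345/work'; return (sub_id, signature)."""
    sub_id = "{}/{}".format(root_id, scope).encode("utf-8")
    return sub_id, master.sign(sub_id)

def verify_sub_id(public_key, sub_id, signature):
    """Anyone holding the master public key can check a sub-ID."""
    try:
        public_key.verify(signature, sub_id)
        return True
    except InvalidSignature:
        return False

work_id, sig = issue_sub_id(master_key, "12345", "work")
print(verify_sub_id(master_public, work_id, sig))        # True
print(verify_sub_id(master_public, b"12345/bank", sig))  # False: wrong scope
```

The design point is the one RH makes: verifiability and authority are separated, so presenting the “work” sub-ID proves it was issued by the master key without handing over a credential that opens everything.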

Thyraeus January 12, 2009 5:01 PM

“The biggest threat that I have seen so far are people with CISSPs impersonating security experts.
Posted by: John Waters at January 11, 2009 1:13 AM”

Without rising to the inflammatory potential (and as an old CISSP): I always say that the certification really shows nothing more than a commitment to the industry; it is when it is combined with experience that you get credibility! Your point is correct: people aren’t experts because they passed a test.

Scott Wright January 13, 2009 4:51 AM

This sure puts Facebook’s authentication trade-off in an interesting light, given that people tend to:

1) Put so much information into their profiles using the default privacy settings (without setting privacy to “My Friends Only”);

2) Accept “Friending” invitations from any and all, especially people whom they know casually;

3) Use untrusted and unverified applications like “Throw a Snowball” and “Secret Crush”, which have no QA and whose programmers are held to no accountability.

Authentication of users should be stronger, or Facebook should provide better default protection of private information.

Looks like a poor trade-off to me; no wonder there is so much impersonation – it’s worth it to the bad guys.

Jerry January 15, 2009 3:42 AM

Regarding Police Badges.

In the place I live – Western Australia – it is an offense to photograph and/or publish an image of a Police I.D.

This is not restricted to actually issued IDs but includes even sample IDs.

As a result of this restriction there is no published reference to even get an idea what a Police ID looks like. Not even from the Police web site.

The consequence of this is that you have no idea what a Police ID looks like, so you cannot even begin to verify an ID shown to you.

The immediate corollary to this is that there is free rein for anyone to make up their own badges. 99.9% of the public would not know what was real and what was made up.

If they published an actual sample image then perhaps a few more percent of the public would know what to look for. Even better from the Police point of view, it would be easier to convict someone for forgery if the ID bore a passing resemblance to an actual ID.

However, these considerations have passed the hierarchy by, so we have a virtually anonymous Police force. As a result, we have limited or no way to verify whether the plainclothes person who stopped us on a dark road is legitimate.

I leave the possibilities for exploitation of this bureaucratic decision as an exercise for the reader.

Vicente Aceituno January 15, 2009 4:07 AM

I find the term “authentication” ambiguous. It is normally used to mean two different things. The first is authentication in the sense of checking whether the user of some credential is its owner; for example, when I present my user account and a password to a system. The second is authentication in the sense of checking whether someone is who he says he is. The second “authentication”, which I would call “identification”, is often performed by checking how many tokens you have claiming that you are you; photos help reinforce identification, and fingerprints do even better.

Why is this important? Because there are two sides to the first sense: either I want to know who uses a credential, and make sure that only the owner can use it (accountability), or I want to NOT know who uses a credential, and still make sure that only the owner can use it (anonymity). The latter can be crucial for privacy. So “strong authentication” and “strong identification” are not the same and should be distinguished.
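A minimal sketch of the anonymity case, with illustrative names and parameters (not any particular system’s API): the verifier below can check that whoever logs in owns the credential while storing nothing about who the owner is.

```python
import hashlib
import hmac
import os

class PseudonymousAccount:
    """Authentication without identification: the verifier keeps only
    a salted password hash, nothing that says who the owner is."""

    def __init__(self, password):
        self.salt = os.urandom(16)
        # Slow key derivation makes stolen verifier data hard to crack.
        self.pw_hash = hashlib.pbkdf2_hmac(
            "sha256", password.encode("utf-8"), self.salt, 100_000)

    def authenticate(self, password):
        candidate = hashlib.pbkdf2_hmac(
            "sha256", password.encode("utf-8"), self.salt, 100_000)
        # Constant-time comparison avoids a timing side channel.
        return hmac.compare_digest(candidate, self.pw_hash)

acct = PseudonymousAccount("hunter2")    # no name, photo, or SSN on file
print(acct.authenticate("hunter2"))      # True: credential owner
print(acct.authenticate("password123"))  # False: impersonator
```

Identification would require additionally binding the account to real-world attributes (a photo, a fingerprint, an ID number), which is exactly the accountability-versus-anonymity trade-off described above.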

Fidicen January 15, 2009 7:45 AM

Impersonating an official reminds me of the story of a parking lot attendant at Bristol Zoo (UK). For many years this uniformed gentleman collected parking fees from car visitors to the Zoo (at £1 a time for a popular zoo; go figure), and it was only a week or so after he failed to show up that questions began to be asked. The Zoo thought he was a local council employee; the local council thought he was a Zoo employee … um. To the best of my knowledge, a few years on he still hasn’t been traced.

David Evans January 15, 2009 5:03 PM

If we come to rely on biometric authentication, then surely the ideal criminal enterprise is one that collects biometrics into a large database. Choose someone who looks sufficiently like you from the database – selected by an algorithm similar to the one used by the authentication system – and get a set of fingerprint gels to wear. Retinas might be more difficult, but still possible. The economics would be interesting, but if you could get the cost of acquisition per head down, and get some investment…
