The Zone of Essential Risk

Bob Blakley makes an interesting point. It’s in the context of eBay fraud, but it’s more general than that.

If you conduct infrequent transactions which are also small, you’ll never lose much money and it’s not worth it to try to protect yourself – you’ll sometimes get scammed, but you’ll have no trouble affording the losses.

If you conduct large transactions, regardless of frequency, each transaction is big enough that it makes sense to insure the transactions or pay an escrow agent. You’ll have occasional experiences of fraud, but you’ll be reimbursed by the insurer or the transactions will be reversed by the escrow agent and you don’t lose anything.

If you conduct small or medium-sized transactions frequently, you can amortize fraud losses using the gains from your other transactions. This is how casinos work; they sometimes lose a hand, but they make it up in the volume.

But if you conduct medium-sized transactions rarely, you’re in trouble. The transactions are big enough so that you care about losses, you don’t have enough transaction volume to amortize those losses, and the cost of insurance or escrow is high enough compared to the value of your transactions that it doesn’t make economic sense to protect yourself.
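
One way to make the four cases concrete is as a decision function over transaction size and frequency. The sketch below only illustrates the shape of Blakley's argument, not his model; the dollar thresholds and the volume cutoff are invented for the example.

```python
def risk_zone(tx_value, tx_per_year,
              affordable_loss=100,   # illustrative: a loss you can shrug off
              large_value=10_000,    # illustrative: big enough that insurance/escrow is cheap relative to value
              high_volume=50):       # illustrative: enough volume to amortize fraud losses
    """Classify a transaction profile into the four cases described above.

    The three thresholds are made-up numbers; the argument is about the
    shape of the space, not specific dollar amounts.
    """
    if tx_value >= large_value:
        return "large: insure or use an escrow agent, regardless of frequency"
    if tx_per_year >= high_volume:
        return "small/medium but frequent: amortize fraud losses across volume"
    if tx_value <= affordable_loss:
        return "small and infrequent: absorb the occasional loss"
    return ("zone of essential risk: big enough to hurt, too infrequent to "
            "amortize, too small for insurance or escrow to pay off")


# A few example profiles.
for value, freq in [(25, 2), (50_000, 1), (500, 200), (2_000, 3)]:
    print(f"${value:>6} x {freq}/yr -> {risk_zone(value, freq)}")
```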

Posted on March 30, 2009 at 6:50 AM • 30 Comments

Comments

Nicholas Weaver March 30, 2009 7:38 AM

One other option: reduce the cost of escrow, where the escrow service does two things:

One, conduct a lot of transactions so IT gets the amortization advantage, even though individual transactions don’t.

Two, play real hardball when defrauded (even when this costs more money) to set the precedent that this low-cost escrow service plays hardball.

sooth sayer March 30, 2009 7:55 AM

I always marvel at these people who try to formalize what everyone does almost subconsciously.

There is no “higher” intelligence here – just a plain freeloading pontificator.

Bill March 30, 2009 8:07 AM

@sooth sayer
Oh, there’s value in taking a subjective ‘feeling of unease’ and breaking it down. We’re not very rational when it comes to ‘subconscious’ risk processing, so it’s a useful first step: break it down, label the parts, measure them, and discuss.

@Nicholas – yeah, I came here to express similar sentiments; it’s really an actuarial issue. If insurers can raise sufficient premiums to profit from the risks, then demand ought to stimulate that market. Shrug.

trapspam March 30, 2009 8:08 AM

The higher intelligence is just not to use eBay…..

eBay now makes it difficult to close your account: the waiting period is six months, even if you have not had any transactions in the last few years.

Frank March 30, 2009 8:48 AM

When I purchase stuff on eBay, I keep my purchases in the sub-$100 range (a bit more than I’d just throw into the trash, but not so much that losing it would be truly impactful), and I typically even stay sub-$50. When selling I do the same, and mostly put up stuff where, should I mail it away and never receive payment, I am really only out the cost of postage and I’ve reduced clutter in my house.

David March 30, 2009 8:57 AM

@sooth sayer
A lot of breakthroughs happen when somebody takes something that everybody knows informally and breaks it down into pieces that are specific and measurable.

For example, most people have a good idea of what a good logical argument is, but if people hadn’t tried formalizing it, it would be essentially impossible to build modern computers and software.

Therefore, if it wasn’t for formalization of the difference between “All men are mortal, Socrates is a man, therefore Socrates is mortal” and “You beat your dog, your dog is a father, therefore you beat your father” (see Plato’s dialog including Protagoras), you wouldn’t be arguing with us here.

HJohn March 30, 2009 9:06 AM

The best rule of thumb I’ve learned on any kind of insurance, including eBay, is to use it to transfer risks I cannot afford to take. Saves a lot of money on both ends.

sooth sayer March 30, 2009 9:49 AM

@bill
“We’re not very rational when it comes to ‘subconscious’ risk processing, so it’s a useful first step ”

What evidence exists to substantiate such pontification? From all evidence humans are very good at risk management (AIG excepted); humans have managed to run supreme on this planet (with apologies to dolphins and mice).

Economists use this “human action” technique, but very few (I mean VERY VERY VERY VERY few) economists have actually done useful study of it.

And @David, let go of that return key; it’s becoming a security risk.

luke March 30, 2009 10:03 AM

@sooth sayer: A plethora of research (some of which is referenced in the archives of this blog) establishes that our innate risk assessment is faulty. For example, we overestimate rare events and underestimate common ones.

What evidence exists to substantiate the claim that humanity’s rise to power is due to our superior risk management? I would think that American roadway fatalities, for one, show clearly that we don’t have a firm grasp on that yet.

RH March 30, 2009 10:42 AM

Anyone else find it interesting that they suggest insurance/escrow for high value transactions, yet the modern American concept of “dental insurance” covers everything you’d have in a year, except the big things you couldn’t afford?

I think analysis of this topic is wise; it’s not pontificating. Yee average American really doesn’t have a clue what insurance is for anymore.

Clive Robinson March 30, 2009 10:51 AM

@ RH,

“Yee average American really doesn’t have a clue what insurance is for anymore.”

I’m not sure that the average insurance organisation knows either, especially when it comes to health care, motor insurance, mortgage protection insurance, or any other insurance you are forced to take out legally or by terms of contract…

Anney Randy March 30, 2009 11:17 AM

Can the libertarians stop pretending that humanity is rational? Once this is done we can have reasonable debates instead of being pulled into the muck by these surrealists.

Pat Cahalan March 30, 2009 11:24 AM

@ sooth sayer

From all evidence humans are very good at risk management
(AIG excepted); humans have managed to run supreme on
this planet (with apologies to dolphins and mice)

That’s actually a critically limited set of evidence, given that our “run supreme” track has met little intelligent opposition.

Human evolutionary behavior may be good at moving us, as a species, to the top of the heap when all the competitors are unintelligent. That in no wise implies that we’re good at risk management, just that we’re better than a bunch of critters that have the collective cognitive capability and future planning abilities of a batch of five-year-olds. Minus tool use.

Or, to paraphrase our Wikipedia overlords, “More citations needed for this claim”.

sooth sayer March 30, 2009 11:51 AM

@Pat Cahalan

That’s actually a critically limited set of evidence, given that our “run supreme” track has met little intelligent opposition

First, you are insulting mice and dolphins — they are not here to defend themselves, so we shouldn’t be rude.

Secondly, are you suggesting humans will not be able to withstand an “encounter” with superior beings — that is a highly rational evaluation of surroundings.

I wonder how one would define such an intelligent being, or where one would find such beings except in alien episodes on the History Channel.

David Donahue March 30, 2009 12:11 PM

Pat, that’s grading on an absolute scale of perfect risk management.

Species success regarding risk management is generally graded on a curve against all other known participants. If you’re better than the others, then you win.

The fact that by our standards the other players on this planet are semi-intellectual non-sentients means that we can probably get by with a non-optimal performance.

Woo Hoo! I can evaluate risk better than a tree frog, algae, ants and a squirrel but not better than a theoretically perfect being with perfect knowledge that evaluates all risk impartially and reacts with perfect proportional mitigation.

Who knows, such a being may experience currently unknown trade-offs that make it non-viable as a species. For example: would it get involved in successful procreative relationships, or would it focus on optimizing its own performance/resources vs. the long-term benefit to its species of reproduction?

If so, are we willing to accept that some carefully chosen irrationality is optimal?

sooth sayer March 30, 2009 12:26 PM

@David

Yes — I once heard a joke that driving a Ferrari in LA traffic was like showing up for your flight in a space suit.

Species exist in an environment — they are optimally intelligent; so are humans in a society. Some aren’t; that’s what the Darwin Awards are for: http://www.darwinawards.com/

Pat Cahalan March 30, 2009 12:44 PM

@ sooth sayer

ss> Secondly, are you suggesting humans will not be able to withstand
ss> an “encounter” with superior beings — that is a highly rational
ss> evaluation of surroundings.

No, not at all. There’s no evidence to support that one way or the other. Besides, we don’t need to bring theoretical “superior beings” into the discussion; we have some set of humans that are good at risk assessment and some set of humans that are bad at risk assessment. The good ones run casinos; the bad ones lose money there.

I’m saying that your proposition (humans have flourished, ergo humans are good at risk management) is flawed, as it only compares human risk analysis to non-intelligent risk analysis.

In other words, I don’t believe that your Q follows from your P 🙂

@ David

dd> The fact that by our standards the other players on this planet
dd> are semi-intellectual non-sentients means that we can
dd> probably get by with a non-optimal performance.

Sure, as long as we’re only grading ourselves vs the other players in the game of natural selection. Just because we’re “winning” the dominant species battle doesn’t mean we’re not bad at risk assessment; we’re just better at it than the other players in that particular game.

dd> Pat, that’s grading on an absolute scale of perfect
dd> risk management.

Sure. Bill said:

b> “We’re not very rational when it comes to ‘subconscious’
b> risk processing, so it’s a useful first step ”

to which, sooth sayer replied:

ss> What evidence exists to substantiate such pontification?
ss> From all evidence humans are very good at risk management

In other words, Bill said “I believe we’re bad at risk assessment”, and sooth sayer said, “You’re not offering any evidence” and “my evidence demonstrates the opposite”.

I find sooth sayer’s evidence poorly supports his claim, and I do not give it credible weight.

While I’ll give sooth sayer the legitimacy nod for calling Bill out for a lack of evidence, Bill got (albeit poorly cited) support from Luke.

To make an extremely bombastic and pedantic comment a bit more streamlined:

I think that humans’ evolved risk assessment capabilities work just fine for dominating a planet inhabited by non- or semi-intelligent species. I think there is plenty of research to support the proposition that these risk assessment capabilities are not, in fact, optimized for dealing with intelligent adversaries or complex problems, which means that as a species we’re crappy at evaluating complex risks.

dd> Are we willing to accept that some carefully chosen
dd> irrationality is optimal?

Now we’re skirting the edge of an interesting philosophical commentary.

Without getting into the grander scope of whether or not it is “optimal”, I think one has to assume some default level of irrationality when building your systems (security, technical, social, or whatever), or you’re ignoring the practical reality of modern human social systems.

Because most people are bad at risk assessment.

Pat Cahalan March 30, 2009 12:49 PM

@ sooth sayer

Species exist in an environment — they are optimally intelligent

Evolutionary theory: you’re doing it wrong.

Complex adaptive systems like evolutionary processes do not by default produce optimal results; they generally produce least-pessimum results.

There’s a big difference between “adapting and gaining an advantage over the other players” vs “optimized for advantage over the other players”.

paul March 30, 2009 12:51 PM

@Nicholas Weaver

One big problem with escrow is that it’s a lot like arbitration: Sure, it does a lot of business, but the repeated transactions are overwhelmingly with people who are more likely to be accused of sharp dealing than to be victims of it. Any escrow operation that spends heavily on recovery and enforcement will find a certain class of sellers telling their buyers they can’t use it (because it’s not reliable, doesn’t have the right interface, blah blah blah) while less-heavy-handed escrow operations have more money to spend on marketing and seller relations.

Now obviously an unwillingness to deal with certain escrow agencies will be a sign for buyers to beware, but then agencies can start imposing onerous terms on sellers, knowing that refusal to deal would hurt the sellers’ reputations, and so on around the maypole.

Thin markets are always subject to manipulation; you just get to choose which kinds.

Tynk March 30, 2009 1:03 PM

A couple of things to consider: proper risk assessment is not all that has propelled the human race above the others; you must also include the ability to procreate faster than we fail our risk assessments.

@David Donahue

“For example: would it get involved in successful procreative relationships, or would it focus on optimizing its own performance/resources vs. the long-term benefit to its species of reproduction?”

A great exposition of just such a subject is the movie Idiocracy, where the intelligent amongst us simply decide there are better things to do than personally continue the race.

SIWOTI Cat March 30, 2009 1:23 PM

I second @HJohn:

The best rule of thumb I’ve learned on any kind of
insurance, including eBay, is to use it to transfer
risks I cannot afford to take. Saves a lot of money
on both ends.

The decision to get eBay insurance seems to be mostly about whether you have low enough cash reserves compared to the purchase price that you’d feel real pain if you got scammed.

If you have thousands of dollars in the bank, it’s rational to directly bear a 10% risk of losing $100 for a 90% chance of gaining $15: as long as you’re playing outside the zone where $100 cash fluctuations affect your real comfort level, a dollar is a dollar.

But if you’re, say, a student with no savings and little disposable income, the practical harm from losing $100 is much greater; you’ll be cutting basic spending for a month to make up for it. That loss is probably more than 9x the practical benefit from the $15 gain. So the transaction isn’t worth it if you can’t transfer risk.

More (too much more!) about the logic of risk aversion: http://www.google.com/search?q=risk+aversion+and+utility
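
A minimal sketch of that arithmetic, assuming a logarithmic utility of wealth (one common, illustrative model of risk aversion; the function names and numbers below are mine, not from the comment): the same $100-loss / $15-gain gamble has positive expected dollar value either way, but only positive expected utility once your reserves are well clear of the possible loss.

```python
import math

def expected_utility(wealth, p_loss=0.10, loss=100, gain=15):
    """Expected log-utility of taking the gamble: with probability p_loss
    you lose `loss`, otherwise you gain `gain`. Log utility is just one
    standard, illustrative way to model diminishing returns on money."""
    return (p_loss * math.log(wealth - loss)
            + (1 - p_loss) * math.log(wealth + gain))

def worth_taking(wealth):
    """Take the gamble only if its expected utility beats standing pat."""
    return expected_utility(wealth) > math.log(wealth)

# Expected dollar value is +$3.50 regardless of wealth, but the decision
# flips with how much of a cushion you have.
for w in (120, 500, 5000):
    print(f"wealth ${w}: take the gamble? {worth_taking(w)}")
```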

Also: if you’re new to eBay transactions that aren’t small, there’s the cost of learning how to get escrow, etc.; you’re probably not great at noticing signs of fraud yourself; and you may have a wildly high or low estimate of how common fraud is. Being a first-time eBayer looks like it’d affect your risk much more than the type/frequency of transactions you do.

kangaroo March 30, 2009 3:05 PM

@luke: “A plethora of research (some of which is referenced in the archives of this blog) establishes that our innate risk assessment is faulty. For example, we overestimate rare events and underestimate common ones.”

Do you really believe that research? It says that we’re faulty at estimating risk, because our risk estimations don’t fit the formalism.

But is the formalism good? Does real-world risk really fit into that box? Is the amortization function correct — or is our hyperexponential amortization a more accurate real-world measure of risk than compounded interest?

Sooth-sayer is wrong in claiming that the formalization is worthless; but she might be right if she were claiming that the CURRENT formalization is worthless.

kangaroo March 30, 2009 3:08 PM

Pat: “There’s a big difference between “adapting and gaining an advantage over the other players” vs “optimized for advantage over the other players”.”

Shorter: A gazelle doesn’t have to be faster than the dogs — he just can’t be the slowest gazelle.

Clive Robinson March 30, 2009 5:00 PM

When talking about optimal strategies you have to consider the type of game you are actually in.

Our world is effectively isolated in the universe except for two things: the movement of energy (from the sun and back into space) and the movement of information inwards.

Which means that at any particular point in time it is effectively a zero-sum game (what one gains another loses).

However, when viewed over time, resources are effectively being consumed and thus are diminishing.

You then have to ask what an individual’s aim effectively is:

1, To live for themselves.
2, To live for their descendants.
3, To provide for their descendants.

Depending on your viewpoint, the other options manifest themselves as “greed”, because you are taking resources away from the strategy of your choice.

So you end up with the classic game of “paper scissors stone”…

Pat Cahalan March 30, 2009 5:41 PM

@ kangaroo

Do you really believe that research? It says that we’re faulty at
estimating risk, because our risk estimations don’t fit the formalism.

But is the formalism good? Does real-world risk really fit into that box?

Now that is a worthwhile (potential) criticism. My impulse is to come down on the side of “Yes, the formalism is good, or at the very least it’s good enough for the comparisons we’re talking about”, but playing devil’s advocate for a moment…

One of the problems with these sorts of studies, which I’ll grant you, is that the formal expression of the risk generally depends more upon quantitative measures than qualitative ones; as Bruce has pointed out before, security theater can have the benefit of making people feel safer, even if they’re not actually any safer… which can, in fact, yield benefits, sometimes very real ones.

Generally, I think real-world risk does fit into that box. Certainly there is a disconnect between real-world risk and the perception of real-world risk, but in most cases I would argue the proper recourse is to manage the expectation and perception, not attempt to mitigate the actual risk.

sub-optimally non-pessimal March 30, 2009 6:28 PM

@ Pat Cahalan
“That’s actually a critically limited set of evidence, given that our “run supreme” track has met little intelligent opposition.”

There’s equally limited evidence that we can even save ourselves from our less-intelligent-in-the-long-run decisions. Place the bet on short-term advantage over long-term disadvantage every time.

Pogo FTW.

http://en.wikipedia.org/wiki/Pogo_(comics)#.22We_have_met_the_enemy…..22

David March 31, 2009 8:47 AM

@Pat Cahalan

I think we need to talk about both real risks and the perception of risks. We’re better off if we estimate the real risks well, since then we’ll be economical and effective at mitigating them. This is objective and quantifiable (except for the magnitude of the risks – would you rather have a 10% chance of being 10K in debt or a 1% chance of losing a finger?).

Making people feel better about risks is a matter of addressing the perception of risk in a way that makes them feel safer, which may or may not have anything to do with making anybody really safer.

Clearly, a group (species, country, genetic group, whatever) that perceives risk more accurately has a competitive advantage over those who don’t. Given equal accuracy, I’d suspect that being brave or foolhardy or not risk-averse (whatever you want to call it) is a group competitive advantage, since the group would call for less security theater.

bah April 1, 2009 3:04 PM

Assessing risk management is only possible in hindsight and is always flawed. Many of the riskiest things are low-probability but extremely high cost. You (almost by definition) don’t have enough data to know when they’re coming.

Read The Black Swan.
