Criminal Intent Prescreening and the Base Rate Fallacy

I’ve often written about the base rate fallacy and how it makes tests for rare events—like airplane terrorists—useless because the false positives vastly outnumber the real positives. This essay uses that argument to demonstrate why the TSA’s FAST program is useless:

First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, it turns out that this is an incredibly difficult thing to do. Here is why: let’s assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are actually probably much, much more rare, or we would have a whole lot more acts of terrorism, given the daily throughput of the global transportation system. Now let’s imagine the FAST algorithm correctly classifies 99.99 percent of observations—an incredibly high rate of accuracy for any big data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse 99 people of being terrorists for every one terrorist it finds. Given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty might be a non-trivial and invasive task.
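To make the arithmetic concrete, here is a minimal sketch of that calculation in Python, using the essay's assumed numbers (a 1-in-1,000,000 base rate and 99.99 percent accuracy); these are illustrative figures, not measured FAST performance:

```python
# False-positive paradox with the essay's assumed numbers (illustrative only).
population = 1_000_000
terrorists = 1                        # assumed base rate: 1 in 1,000,000
accuracy = 0.9999                     # assumed probability of classifying any one person correctly

true_positives = terrorists * accuracy                          # ~1 real terrorist flagged
false_positives = (population - terrorists) * (1 - accuracy)    # ~100 innocents flagged

print(f"true positives:  {true_positives:.2f}")
print(f"false positives: {false_positives:.1f}")
print(f"P(terrorist | flagged) = {true_positives / (true_positives + false_positives):.4f}")
```

Roughly a hundred innocent people get flagged for each real terrorist, which is the "99 people falsely accused for every one terrorist" in the quoted paragraph.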

Of course FAST has nowhere near a 99.99 percent accuracy rate. I imagine much of the work being done here is classified, but a writeup in Nature reported that the first round of field tests had a 70 percent accuracy rate. From the available material it is difficult to determine exactly what this number means. There are a couple of ways to interpret this, since both the write-up and the DHS documentation (all pdfs) are unclear. This might mean that the current iteration of FAST correctly classifies 70 percent of people it observes—which would produce false positives at an abysmal rate, given the rarity of terrorists in the population. The other way of interpreting this reported result is that FAST will call a terrorist a terrorist 70 percent of the time. This second option tells us nothing about the rate of false positives, but it would likely be quite high. In either case, it is likely that the false-positive paradox would be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
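For a sense of what the two readings imply, here is a rough sketch (again with the assumed 1-in-1,000,000 base rate; the available material does not say which interpretation is correct):

```python
# Two possible readings of the reported "70 percent accuracy", per million people screened.
population, terrorists = 1_000_000, 1

# Reading 1: 70% of all observations are classified correctly.
overall_accuracy = 0.70
fp_1 = (population - terrorists) * (1 - overall_accuracy)   # ~300,000 innocents flagged
tp_1 = terrorists * overall_accuracy                         # ~0.7 terrorists flagged

# Reading 2: 70% of actual terrorists are flagged (sensitivity only); this says
# nothing about how many innocents are flagged alongside them.
tp_2 = terrorists * 0.70

print(f"Reading 1: ~{fp_1:,.0f} false positives for ~{tp_1:.1f} true positive")
print(f"Reading 2: ~{tp_2:.1f} true positives; false positives unknown")
```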

It’s that final sentence in the first quoted paragraph that really points to how bad this idea is. If FAST determines you are guilty of a crime you have not yet committed, how do you exonerate yourself?

Posted on May 3, 2012 at 6:22 AM • 40 Comments

Comments

Gweihir May 3, 2012 7:06 AM

…how do you exonerate yourself?

You cannot. That is the beauty of it. With this, “law enforcement” can claim 100% success rates. (And the prison industry is grateful as well, I would imagine.) They finally found a way to do away with those pesky troublemakers that claim to be “innocent”.

Quite ingenious, really.

Tom May 3, 2012 8:06 AM

I’m not convinced the base rate fallacy is the right way to look at this sort of data mining exercise. It is instructive that many people compare it to flipping a coin, and conclude that flipping a coin is better because the coin achieves p=0.5 while a screening exercise achieves p=0.02 or some such.

Actually the right comparison would be to use the coin to sort between terrorists and non-terrorists at p=0.5 and then feed that number into Bayes’ theorem.

Call A the event that someone is a terrorist and B the event that someone scores positive in the NSA’s test.

Using some numbers from the second linked article, suppose that in a population of 300,000,000 there are 1,000 terrorists and the NSA can identify them at p=0.4, misidentifying innocent people at p=0.0001. So P(A) = 1/300,000, P(B) is 0.0001013, P(B|A) is 0.4, and the probability that someone is a terrorist given a positive identification is 0.0132 – about 13 in 1,000 people identified will actually be a terrorist.

Now suppose we flip a coin to decide if they are a terrorist or not. P(A) is still 1/300,000, P(B) is 0.5 and P(B|A) = P(B) = 0.5, because the events are uncorrelated. The probability that someone is a terrorist given a positive identification is now 3.3E-6 – fewer than 4 out of every million people called a terrorist will actually be a terrorist. Worse than that, half the actual terrorists will get through.

So the NSA’s system is about 4,000 times better at identifying terrorists than flipping a coin.

Because of the politics surrounding it, this sort of system is always going to be heavily weighted towards false positives. The consequences of any false negative are so disastrous (at least politically) that the system will be deliberately weighted to avoid them. So I think it more likely that such a system might be weighted to produce, say, 95% identification of real terrorists and 5% misidentification of people who are really innocent. Then the probability of someone being a terrorist if the system says they are is 6E-5 – on the face of it pretty awful, but still 20 times better than flipping a coin. And I think this is the right way to look at it – not asking, “What is its absolute performance?” but rather, “Is it better than what we’ve got and, if so, is it worth it?”
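Tom's three scenarios can be checked with a few lines of Python; the population, terrorist count, and detection rates below are his assumed figures, not real data:

```python
# Reproducing the Bayes' theorem calculations in the comment above.
def p_terrorist_given_flag(p_a, p_b_given_a, p_b_given_not_a):
    """P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|~A)(1 - P(A))]"""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

p_a = 1_000 / 300_000_000                              # 1,000 terrorists in 300,000,000 people

screening = p_terrorist_given_flag(p_a, 0.4, 0.0001)   # ~0.0132
coin      = p_terrorist_given_flag(p_a, 0.5, 0.5)      # ~3.3e-6
weighted  = p_terrorist_given_flag(p_a, 0.95, 0.05)    # ~6e-5

print(f"screening system:              {screening:.4f}")
print(f"coin flip:                     {coin:.1e}")
print(f"politically weighted (95%/5%): {weighted:.1e}")
print(f"screening vs. coin:            ~{screening / coin:,.0f}x")   # roughly 4,000
```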

And what have we got at the moment? At the moment we have no way of predicting whether someone is a terrorist or not and so we assume everyone is a terrorist and treat them accordingly. Everyone walking into an airport is assumed to be a terrorist and they have to empty as many bags and take off as much clothing as necessary to convince someone that they are not, or they don’t get on the plane. Right now we have a misidentification rate of 100%. If the NSA’s system reduced that to 5% and 95% of people could just walk through, wouldn’t that be worthwhile? Obviously if we use the system to decide guilt then that’s not good, because ~5% of the population would be falsely imprisoned. But what if we used the system to decide who needs intensive screening at an airport? That 5% are no worse off than they are now with 100% misidentification, and the other 95% of people are much better off.

Suggy May 3, 2012 8:41 AM

That last comment is right on the money. If we take out terror and replace it with mammography (another imbalanced-data problem), we can see that a reduction of noise is a good thing: the people who are flagged simply get extra screening. The context of the application is the important factor.

Bravo to Tom for the excellent response.

Suggy

Clive Robinson May 3, 2012 8:51 AM

@ Tom,

That 5% are no worse off than they are now with 100% misidentification and the other 95% of people are much better off

Do you actually believe that???

If you do, think on this: if you get picked, it will have the same mental effect as having been robbed. That is, it stays with you and increases your anxiety levels at the next test, making you even more likely to be in that 5% than the first time. This is a downward spiral that is well known in psychiatry; it’s part of many “syndromes”, including PTSD or battle fatigue, and a major contributor to avoidance behaviour.

Thus your “5%” will have their lives affected, because they will reach a point where “being groped up” or told to “squat naked” will either destroy them mentally and thus physically (i.e. die an early death) or cause them to either not travel or find alternative transport means (also increasing their risk of an early death).

So what you are saying is it’s OK to consign your 5% to the “dustbin of life” just as long as it makes others “feel good” for absolutely no proven increase in their security?

Brian May 3, 2012 9:23 AM

I was thinking along similar lines as Tom. A system that automatically just DECIDED you were a terrorist would be a terrible idea. But searching for terrorists is like looking for a needle in a haystack. The smart way to go about it would be to make the haystack smaller. It shouldn’t be about trying to automatically find people who are terrorists but more about finding the (much larger) group of people who AREN’T terrorists. Then the non-automatic part of the process could look at those who are left. Such a system would indeed have a very high false positive rate for “not in the not-a-terrorist group”, but since the point isn’t to directly identify terrorists, that doesn’t really matter. As long as being flagged as “maybe a terrorist” by the system doesn’t allow the government to look any harder at you than they legally could before, the false positive rate doesn’t seem very important.

The issue with a system like this wouldn’t be the base-rate fallacy, it would be how to construct a profile that’s effective at categorizing terrorists correctly. Terrorism is incredibly rare, and terrorists can change their behavior. Constructing an automatic system that never identifies an actual terrorist as “not-a-terrorist” would be extremely difficult.

Clive Robinson May 3, 2012 9:26 AM

For those who still “don’t quite get it”: FAST and all the other “automated screening” processes are not about catching terrorists, or for that matter about enriching a few “well connected” people.

They are about externalising risk and liability.

The DHS and TSA and all the other TLAs know that at some point in time the current “favorable attitude” in “the places it counts” will change. That is, they will no longer be able to ignore the law or the voters. Thus they need to be able to blame an external agency to absolve themselves.

What better way than to take the “blamable human” “out of the loop” and replace them with a machine…

This way they claim the old “acting under orders” defense of the modern world: “the computer told me to do it”. Most of us here know that because humans program computers, the computers suffer from the failings of the programmers, who in turn suffer the failings of the product specification, which in turn suffers the failings of a different set of humans, etc.

Which raises two important points:

Firstly, most people actually still “blame the computer” or, worse, “hold it to be infallible” (especially judges, who swallow the line virtually every time).

Secondly, even when people actually start blaming the humans because “the humans designed and built the system”, none of the humans are in the DHS or TSA, and the chances are they are long gone from the companies, etc. So nobody gets to carry the can, and a minor sum of money is paid as a fine to make the problem go away.

This, unfortunately, is the way of the world and the way senior bureaucrats get to keep their jobs until they become “people of influence” able to leverage a few six-figure-a-year-plus-share-options consultant positions by supposedly using their “networking contacts” (whereas it is more likely to be a “pay off” for placing contracts with the companies whilst they were still “senior bureaucrats”).

Brian May 3, 2012 9:32 AM

@Clive Robinson:

That’s why it’s so important that being flagged by a system does not give the government any extra authority in dealing with you. In the airport security context, being in the 5% would not allow the TSA to do anything that they aren’t doing to everyone right now.

Now obviously things don’t always work out as well as they should in real life, but the basic IDEA at least seems interesting.

Dr Zero May 3, 2012 9:53 AM

Tom, I would be extremely astonished if your 95% of people would be left to board a plane without being checked. This system is almost certainly not meant for that at all. So you’d end up with 5% of people as “possible terrorists, arrested, etc.”, and 95% of people who go through the same “kowtow to the govt, you peon” crap as now.

paul May 3, 2012 9:56 AM

Arguments like Tom’s work, as far as I can tell, only if you can guarantee that the lighter-screening population includes no terrorists. Otherwise you have to do heavier and heavier screening, which is pretty much the system now.

But more to the point, these kinds of analyses assume that differential screening and (perceived) persecution of the innocent have no effect on behavior (either of terrorists or non-terrorists). If FAST and systems like it are working properly, the non-terrorists who get tagged as positives are also going to be people who are disproportionately likely to be adjacent to terrorists and terrorist supporters, i.e. exactly the people you don’t want to alienate. Every one of those people who gets caught in a Kafkaesque maze is going to be someone much less likely to report suspicious activity or conversations, because they know what such reports can do to innocent people.

kashmarek May 3, 2012 10:12 AM

IBM has a system in place in Florida that “is supposed to” predict juvenile delinquency and take action on the targeted delinquents before they commit any such acts.

At some point in time, if not already, IBM’s WATSON system will be used to do this as well in other venues, such as Medicare fraud, potential traffic offenses, tax fraud, and the like.

Such a system might be useful for analysis of political claims or background checks. You can be sure that the NSA data center in Utah will attempt to use such systems (or improve the hit success by ignoring/eliminating data that is beneficial to you, or only considering data that is detrimental to you).

Daniel May 3, 2012 11:22 AM

I agree with Bruce that the base rate fallacy is real. The difficulty is that politicians are not tasked with evaluating probabilities but with risk, and risk is probability * loss. The question then becomes “are the 10,000 people who are falsely accused of being terrorists worth the price of catching the one terrorist who is going to nuke New York City?” The answer to that question depends not just on how likely it is a terrorist will attack NYC but on the value one places on the loss of civil liberties relative to the value one places on the loss of millions of people.

The point is that while probability can (to some degree) be calculated independent of value, risk can never be calculated independent of value. The problem isn’t that the NSA or the TSA are bad at calculating odds; it’s that their decision making reflects a different value hierarchy, one that tends to promote their own bureaucratic self-interest and not the interest of the American public.

George May 3, 2012 11:45 AM

“Uselessness” has never been a bar to the TSA deploying technology that enhances the impressiveness of its security theatre. They’d probably see the high false positive rate as a major advantage. They’d have to hire more employees to inflict “special” screening on flagged individuals (e.g., “Pull down your trousers and cough when I bellow the COUGH command in your ear…”). And of course, since the TSA considers a false positive a “successful interdiction,” they can issue weekly press releases touting the impressive number of “successes.” The people who believe the TSA is doing a good job of keeping aviation safe will be very impressed. The people who believe the TSA is a waste of money and liberties will complain, and as usual the TSA will ignore them. And if a well-connected company profits from selling the software to the TSA, that can only be a good thing. It sounds like a win-win situation for everyone. Except the traveling public and the taxpayer, about whom the TSA has never cared anyway.

Brandioch Conner May 3, 2012 12:27 PM

@Tom
“So the NSA’s system is about 4,000 times better at identifying terrorists than flipping a coin.”

You do, of course, know that the numbers in the article were unrealistic and used only to demonstrate the “Base Rate Fallacy”.

Bill Mitchell May 3, 2012 12:43 PM

It’s not clear to me that the false positive rate necessarily means that the procedure is useless. You don’t need to arrest everyone identified as a possible terrorist; you just mark them down for more intense screening.

Rota May 3, 2012 12:50 PM

Bruce, thanks for the article. I agree that the probability calculation provides good evidence against the utility of FAST. Although, I am not sure we should be using probability to assess this, as it is in itself a poor tool for single events with incalculable dependencies.

Along with the objective of this article, one might substitute dollars for the mitigative control. Upon calculation we would quickly see that we would go bankrupt trying to identify and mitigate the threat. Hey…wait… isn’t that what our country is doing?

Fred P May 3, 2012 12:58 PM

@Tom- There were about 638,000,000 passengers last year on U.S. airlines. http://www.transtats.bts.gov/

So your proposal would appear to produce around 31,900,000 false positives in exchange for a reasonable level of accuracy in finding terrorists.
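A quick check of that figure, using the passenger count cited above and Tom's hypothetical 5% misidentification rate:

```python
# Rough arithmetic behind the 31,900,000 figure (rates are hypothetical).
passengers_per_year = 638_000_000    # approximate annual U.S. airline passengers cited above
false_positive_rate = 0.05           # Tom's hypothetical 5% misidentification rate

print(f"{passengers_per_year * false_positive_rate:,.0f} falsely flagged passengers per year")
```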

At which time terrorists would just not use airplanes – making the program useless once again.

But this is pure fiction; I dispute your assumption that a program can be made 95% accurate in catching terrorists without having an extremely high false positive rate.

As noted in our May 2010 report, SPOT officials told us that it was not known if the SPOT program resulted in the arrest of anyone who is a terrorist or who was planning to engage in terrorist-related activity. According to TSA, in fiscal year 2010, SPOT referred about 50,000 passengers for additional screening and made about 3,600 referrals to law enforcement officers.

Source: http://www.gao.gov/assets/590/589588.txt

SPOT has a 0% terrorist detection rate with a 100% false positive rate.

At least with the information from this article, there’s no reason to think that FAST would be more useful.

Figureitout May 3, 2012 1:32 PM

Isn’t this a little like a lie-detector?

Those can’t be beat, right? Like, I don’t know, maybe doing difficult arithmetic in my head right off the bat (“brain-wave activity is off-the-charts, Officer Doofy”), squinting my eyes and raising my heart/breathing rate at random control questions (“What color is the sky?!”, “Gr–, no red…no wait, blue! Blue!”), smiling and eliminating thoughts when I’m lying (“No, I think you guys are doing a great job, keep it up!”).

I want to test the FAST program out.

You know, since no self-respecting terrorist is going to go to an airport and look around all bug-eyed and sketchy these days…do you think that eventually the screeners will seek out the laughing/smiling “normal-looking” individuals, since only the worst of the worst would have a big fat trollface smile on their face and carry on small talk with screeners with a new undetectable explosive strapped to them?

You know, I don’t really understand what’s happening these days. Either our TLA’s are extremely good at what they do, would-be terrorists are laying low ’til the heat dies down, or the threat has been highly overstated.

Honestly, how hard would it be to gather some individuals who have practically nothing to lose (struggling economy), draw them in with talk about how “da gubermint” is encroaching on their rights and holding them down, send them to target areas, tell them to get a crappy job somewhere to provide cover, then provide entirely legal firearms to them? Next, have them conduct random attacks, and I mean random: e.g., get out of your car at an intersection and start shooting, go to a school (which typically has no fences or armed guards) and start shooting, go to a bar or restaurant and start shooting.

How hard would it be to do this by yourself? Not hard; it would be hard to do it and evade capture. But we’re still not seeing it happen. People are too focused on trying to economically support themselves, but strange things happen when people get desperate enough…

I just don’t get it. I think a large majority of people are more fearful of state actors who have force, perceived capabilities, and technological weapons unmatched within the citizenry.

RSaunders May 3, 2012 2:09 PM

Perhaps we’re missing the real goal of the system. TSA gets good publicity when it catches a terrorist; hmmm, the math makes that look very close to never. TSA gets bad publicity when it does something obviously stupid like frisking a pilot, or a senator, or a poor handicapped old guy, or a screaming 5-year-old girl; that happens about every month.

What could reduce the bad publicity? Since being smarter isn’t affordable, how about doing less? Let’s say you have a machine that lets 80% of passengers just go through the old metal detector. You can put all the folks you might reasonably suspect in the “high security” lane, with enough random folks (19.99992%) to avoid any political charges of profiling Muslims or other protected groups.

What’s the chance that a real terrorist makes it into the 80% group? Actually it’s mighty low, for the same base-rate fallacy reasons we’ve all called them stupid over. The TSA has simply embraced the power of statistics to make itself look good.

Suppose that something happens, and in a deep retrospective it turns out the evildoer made it through with the 80%. (This worst-case scenario is still very, very unlikely.) TSA will be able to point to a really super-powerful program that said “there are no indications that this person is a terrorist”. Sure, we add a new term to the formula, and one less random person per thousand is needed, and it doesn’t make any difference. BUT nobody at TSA is burned at the stake on the National Mall. That’s agenda for you.

I sorta like the idea; it shows that at least some people at TSA took statistics. It gives them a graceful exit strategy, potentially allows 80% of the screening money to be shifted to intelligence where it might do some good, and it doesn’t require they admit to past sins in front of the vast majority of Americans. OK, they are busted to the people who read this blog, but that’s not news; we called them out ages ago.

What’s the down side?

Gallowglass2005 May 3, 2012 2:16 PM

We deal with the base rate fallacy all the time in health care. Breast cancer is not the best example because it is a common disease.

But let us say we want to screen people for anthrax. The chances of having anthrax are incredibly rare in the normal population. Let us say that the anthrax screening test is a very accurate test. So we decide to test everyone in the country for anthrax because anthrax=terrorism=bad things. But even if there are only 1% false positives, or even 0.1% false positives, or even 0.01 percent false positives, the false positives will overwhelmingly outweigh the true positives.

So how do we fix this in medicine? We do everything we can to increase the pre-test probability that the tested population has anthrax. We could choose to screen only those who received white powder in the mail or only those who were in a high-anthrax part of the world. Since the disease would be more likely in these people, any tests that are positive are much more likely to be true positives than false positives. Statisticians will tell you that this is still not perfect, but then NOTHING is perfect.
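A small sketch of that point with made-up numbers: the same test, applied to populations with different pre-test probabilities, yields very different positive predictive values.

```python
# Illustrative numbers only: how pre-test probability changes what a positive result means.
def positive_predictive_value(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.99, 0.999   # a hypothetical, very accurate anthrax test

print(f"screen everyone (prevalence 1 in 1,000,000):    "
      f"PPV = {positive_predictive_value(1e-6, sens, spec):.4f}")    # ~0.001
print(f"screen only the exposed (prevalence 1 in 100):  "
      f"PPV = {positive_predictive_value(0.01, sens, spec):.2f}")    # ~0.91
```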

So how do we increase the pretest probability in airport screenings? Intelligence gathering, smart selection of who undergoes increased scrutiny, and the appropriate screening in each case.

eWilliams May 3, 2012 3:25 PM

@RSaunders, so now TSA has computer science theatre to reduce the cost of their security theatre? Does the magical thinking never end in the US?

LinkTheValiant May 3, 2012 4:15 PM

@Gallowglass2005

So how do we increase the pretest probability in airport screenings? Intelligence gathering, smart selection of who undergoes increased scrutiny, and the appropriate screening in each case.

The difficulty, from the perspective of the TSA and all the vast barrels of pork associated with it, is that this does not lend itself conveniently to CYA after the fact.

And besides, good intelligence gathering costs money. Money that could instead be going into pork barrels.

@eWilliams

so now TSA has computer science theatre to reduce the cost of their security theatre? Does the magical thinking never end in the US?

Of course not, because it’s not actually intended to reduce costs, except in CYA terms.

In its own twisted way, it’s nothing short of brilliant. The TSA can point at it and shout to the uninformed public that “we have NEW and IMPROVED PROCEDURES that are backed up with REAL MATHS!!11!!” The fact that the numbers don’t actually favour the TSA will be ignored by the general public.

So no, not a reduction in monetary or human cost, but certainly an easier “justification” for expenditures.

Mulder May 3, 2012 8:21 PM

If FAST determines you are guilty of a crime you have not yet committed, how do you exonerate yourself?

You can’t, because the Pre-Crime Unit has already determined you are guilty. The only hope for you is to get out of the country, change your looks, your fingerprints, your eyeballs…everything about you to avoid detection and capture.

This is what happens when a government abandons all rational thought about who is and is not a criminal, and leaves it to a select group of people to decide guilt or innocence based on their thoughts, real or perceived.

If you want to find the truth, you need to find the Minority Report.

Anton May 3, 2012 10:42 PM

Incarcerating people for thought crimes based on some arbitrary algorithm predefined by politicians is pure fascism.

Anyone know what the acronym FAST stands for?

Miguel May 4, 2012 1:46 AM

@anton: no, it is not fascism, it’s pure Bolshevism. 🙂 Check Solzhenitsyn and the interesting comments he makes about the Soviet laws in his “Gulag Archipelago”.

Interestingly, the debate is focused on would-be terrorists caught while taking a plane. Can someone give an estimate of the would-be terrorists that avoided catching a flight because they didn’t want to even risk passing through the screening process? They are completely out of the loop.
This doesn’t mean that I fully support FAST or any other specific “counterterrorist” filtering process. I just want to point out that the mere existence of some controls will change the behaviour of the guys the control is designed to catch, and that could even be a measure of some success if it forces the bad guys to be more careful and leaves them with fewer options. For example, if planes had had some kind of protection to keep pilots isolated from the passengers, then 9/11 would have been a lot harder, since the terrorists could not have accessed the cockpit. But then, no one would have realized the protection that armoured cockpits gave to flights…
Of course, guilty (now or in the future) guys will change their behavior to defeat the control, so we are talking about an arms race in the purest Darwinian sense.

Zone May 4, 2012 2:31 AM

http://www.washingtonpost.com/world/middle_east/dont-break-oaths-and-dont-kill-muslims-in-letters-bin-laden-worried-about-al-qaida-image/2012/05/04/gIQA7RKc0T_story.html

“During his trial, Shahzad — a Pakistani who gained U.S. citizenship — told the court he “didn’t mean it” when he took his American citizenship oath, which includes a vow not to harm the United States.

Bin Laden said lying about an oath breaks Islamic law.

“This is not the kind of lying to the enemy that is permitted. It is treachery,” bin Laden wrote.”

While this may seem absurd to most of us, I wonder if requiring certain oaths before entering the country or whatever could actually provide some sort of deterrent effect, or even just turn them against each other? Sure, we might think it’s stupid to get worked up about that if you’re going to murder people, but as long as they’re being predictably irrational, it might be worth exploiting that irrationality.

Clive Robinson May 4, 2012 5:24 AM

@ zone,

While this may seem absurd to most of us, I wonder if requiring certain oaths before entering the country or whatever could actually provide some sort of deterrent effect, or even just turn them against each other?

First off, have you realised what a sad indictment of modern WASP society the first part of your statement is?

Not so very long ago (probably less than fifty years), a person’s word was treated as binding even in the minutiae of life, such that it would cause significant offence if you were, without good cause, ten minutes late for an appointment you had suggested. Even in the 1980s people pretty much did as they had said they would.

Even today you are still required to make an oath (often before God) before giving evidence in court, and there are still quite severe punishments for not abiding by it (in the UK perjury can get you six years imprisonment).

What was it that changed society (for what many consider the worse), and what have we lost by it? What I do know is that people’s word first started being denigrated in the central areas of large metropolitan areas, most notably in the Reagan/Thatcher years.

For instance, we now see people applying for jobs with CVs that are not just “fanciful exaggeration” but completely unsupportable lies.

Is it any surprise that prospective employers resort to what many consider an “invasion of privacy”, and that specialised companies are earning good money as “outing agencies”, digging through every candidate’s past, including any social networking or blogging, and applying metrics to highlight even a slight deviation not just from the truth but from what is considered “normal”…

It’s reached the point where a UK satirical magazine (Private Eye) recently published a cartoon of a job interview where an interviewer said something along the lines of “What do you mean you don’t have a Facebook page! Have you something to hide?”

noble_serf May 4, 2012 8:40 AM

My opinion on this is based on perception being equal to reality.

This program is cover for profiling. Plain and simple.

They won’t frisk grandma and little Suzie any more. They won’t take the bait of union-busting political stuntmen recording faux “Don’t touch my junk” screeds. They’ll use this to put the screws to the demographic they want to target. The demographic the reactionary voting blocs want them to target.

Cover it is.

Thunderbird May 4, 2012 1:10 PM

I saw several comments to the effect that since all we’re doing is selecting people for more intensive screening, this is a good idea, analogizing to detection of disease (e.g., mammography). Unfortunately, breast cancer, while an unpleasant opponent, isn’t an intelligent opponent. Terrorists can adapt to reduce their chance of being selected.

Zone May 4, 2012 4:41 PM

@Clive:

You’re assuming I think it unreasonable to keep one’s word. I’m saying that it’s unreasonable that someone willing to murder innocent people would have hangups about lying. But if they do, hey, might as well take advantage of that.

I’m not quite sure what a WASP is, either.

Clive Robinson May 4, 2012 5:48 PM

@ Zone,

WASP is an acronym for White Anglo-Saxon Protestant; it’s a term applied to various first-world nations which were usually settled by people from the UK who, along with others, oppressed the sparse indigenous population. WASP nations include the UK, USA, Australia, New Zealand, etc.

With regard to “assuming”, no, I was not assuming anything; I was simply asking you what is in effect a philosophical question about the fairly rapid change in society, from one based on trust to one where people think nothing of making false statements.

It is almost as though the outlying hawks have become the norm and doves are rapidly becoming an endangered species.

It is a point I’ve brought up before, most notably when Bruce originally posted about his open WiFi access point. I remarked that at one time people in small communities did not lock their doors, and would frequently leave valuable tools etc. out where their neighbours could borrow and return them. In our now large and anonymous metropolitan areas, not only do we lock our doors, we hide our valuables and go to quite significant expense to install security systems and purchase insurance as a backstop in case the other measures fail.

Zone May 5, 2012 1:31 AM

@Clive: You talked about an indictment of WASP society implicit in how I thought it ridiculous that requiring such an oath might bring about greater security. But that statement contains no such indictment, once you realize that I merely said that it’s odd to think that some murderers might be trusted to keep their word.

Then again, in the end, the person actually carrying out the crime was a liar, so make of it what you will.

I think you will find that the fact that we have fewer small communities where a person’s reputation really matters is a lot of the reason for the decline. When you’re practically anonymous and you can just be a dick to everyone and move on when no one will have anything to do with you any more, that’s what happens.

Tom May 5, 2012 3:54 AM

@Fred P –

I tried not to make assumptions. I took other people’s assumptions and said, “Well, what if we increased the identification rate by about 150% but increased the misidentification rate by 50,000% – then what? Is it still better than what we have?” I think that was actually pretty generous to the counter argument.

My point was that people look at systems like this, spot the output of Bayes’ formula and say, “Look, what a terrible false positive rate!” without considering whether it’s better than what we have already. When we’re thinking about policy the question should not be, “Does it have a terrible false positive rate?” That is on the one hand subjective – it doesn’t matter how good it is, you’ll still say, “But what about the 1 person who is misidentified for every 100,000 terrorists that it catches?” – and on the other hand irrelevant, because the right questions to be asking are, “Is it better than what we have?” and “Is it cost effective for that improvement?”

The reference points for improvement today are either 100% assumption of guilt or, in more theoretical scenarios, flipping a coin. The proposed system is a lot better than either.

Peter May 5, 2012 6:22 AM

@ Figureitout
“since only the worst of the worst would have a big fat trollface smile on their face and carry on small talk with screeners with a new undetectable explosive strapped to them. ”

As anyone who has seen ‘Arlington Road’ will confirm. Now that was a really scary movie considering ‘terrorism prevention’… 🙁

Figureitout May 6, 2012 12:06 PM

@Peter

That sure sounded like a FUDish, statistically-unlikely, movie-plot statement, didn’t it?

I said that we don’t hear about random attacks that would achieve the assumed goal of ‘perpetual/paralyzing terror’, but I forgot about the most recent incident in France involving “Mohamed Merah” and the killing of seven people in a manner that I described. Here’s a description of one of his killings:

He dismounted, and immediately opened fire toward the schoolyard. The first victim was a rabbi and teacher at the school who was shot outside the school gates as he tried to shield his two young sons from the gunman. The gunman shot one of the boys as he crawled away, as his father and brother lay dying on the pavement. He then walked into the schoolyard, chasing people into the building. Inside, he shot at staff, parents, and students. The killer chased an 8-year-old girl into the courtyard, caught her by her hair and raised a gun to shoot her. The gun jammed at this point and Merah changed weapons from what the police identified as a 9mm pistol to a .45 calibre gun, and shot the girl in her temple at point-blank range.

That movie sounds awful though, I may try to watch it if I need a reason to feel depressed. That quote may be quite enough for now though. :/

David Collier-Brown November 14, 2013 7:16 AM

A late addition: Birthday Paradox

If you are only looking for 1 bad guy in a population of 1,000,000, you have a 99.99% chance of getting the 1 right match, and the corresponding 99 people misidentified.

However, if you have 100 bad guys to compare against, the number of comparisons is that much larger, and therefore the true- and false-positive numbers are increased by that factor: you would expect to get your 100 bad guys along with a correspondingly larger pile of false positives.

If they were a single population of N people, then you would be doing N * (N-1) comparisons, and if each comparison has false-match probability p, the expected number of false matches is p * N * (N-1). This is the reason that you find people who share a birthday at parties: the number of comparisons is hugely greater than intuition leads you to expect.
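A short sketch of that scaling, assuming (for illustration) a per-comparison false-match probability of 0.01 percent over a screened population of one million:

```python
# Expected false matches grow linearly with the number of watchlist comparisons.
p_false_match = 1e-4        # assumed per-comparison false-match probability (0.01%)
population = 1_000_000

for watchlist_size in (1, 100):
    comparisons = population * watchlist_size
    expected_false = comparisons * p_false_match
    print(f"{watchlist_size:>3} watchlist entries: {comparisons:>11,} comparisons, "
          f"~{expected_false:,.0f} expected false matches")
```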

The German Federal Security service investigated this years ago, but one of the suppliers (Siemens) pointed out that no matter how low the error rate was, the sheer number of comparisons guaranteed failure in their use case, that of airport screening.

–dave

Dirk Praet November 14, 2013 9:30 AM

@ David Collier-Brown

Are you the same David Collier-Brown as the one who accidentally got RIFfed by an idiot bean-counter manager at a former employer of mine and then (reluctantly) came back on his own less-than-friendly terms after threats by the DoD to upper management that they would cancel millions’ worth of contracts if they didn’t get you back? If so, you are one of my all-time heroes.
