Schneier on Security
A blog covering security and security technology.
December 1, 2006
There's new software that can predict who is likely to become a murderer.
Using probation department cases entered into the system between 2002 and 2004, Berk and his colleagues performed a two-year follow-up study -- enough time, they theorized, for a person to reoffend if he was going to. They tracked each individual, with particular attention to the people who went on to kill. That created the model. What remains at this stage is to find a way to marry the software to the probation department's information technology system.
When caseworkers begin applying the model next year they will input data about their individual cases -- what Berk calls "dropping 'Joe' down the model" -- to come up with scores that will allow the caseworkers to assign the most intense supervision to the riskiest cases.
Even a crime as serious as aggravated assault -- pistol whipping, for example -- "might not mean that much" if the first-time offender is 30, but it is an "alarming indicator" in a first-time offender who is 18, Berk said.
The model was built using adult probation data stripped of personal identifying information for confidentiality. Berk thinks it could be an even more powerful diagnostic tool if he could have access to similarly anonymous juvenile records.
The central public policy question in all of this is a resource allocation problem. With not enough resources to go around, overloaded case workers have to cull their cases to find the ones in most urgent need of attention -- the so-called true positives, as epidemiologists say.
But before that can begin in earnest, the public has to decide how many false positives it can afford in order to head off future killers, and how many false negatives (seemingly nonviolent people who nevertheless go on to kill) it is willing to risk to narrow the false positive pool.
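That trade-off can be sketched numerically. In this hypothetical Python sweep (the scores and outcomes below are fabricated, not from Berk's model), lowering the cutoff shrinks false negatives at the price of more false positives:

```python
# Hypothetical illustration of the false-positive / false-negative
# trade-off: sweep a risk-score cutoff over made-up model output.

def confusion_counts(scores, reoffended, threshold):
    """Count false positives and false negatives at a given cutoff."""
    fp = sum(1 for s, y in zip(scores, reoffended) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, reoffended) if s < threshold and y)
    return fp, fn

# Fabricated example data: model scores and whether the person reoffended.
scores     = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
reoffended = [True, True, False, True, False, False, False, True, False]

for threshold in (0.25, 0.55, 0.85):
    fp, fn = confusion_counts(scores, reoffended, threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```

The public-policy question in the excerpt is exactly which row of this sweep society is willing to live with.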
Pretty scary stuff, as it gets into the realm of thoughtcrime.
Posted on December 1, 2006 at 7:34 AM
This is not thoughtcrime. They have committed actual crimes, and been convicted. Now they have been let out of jail and are being supervised on the order of a court.
Of course we want the most intensive supervision to be given to the most dangerous individuals. And really dangerous ones shouldn't be on probation at all.
The worst that could happen is that criminals could game the system. But they game probation boards at present, so this is not a new problem.
Actually the worst that could happen is that people might come to believe that prisoners have a RIGHT to be let out on probation. It's a privilege, people! The court stripped them of their right to liberty when the jury decided they were guilty.
Probation is about mercy, rehabilitation and risk management, not rights.
In many ways not that surprising, really.
There have been predictive models for both sociopaths and psychopaths for some time (google for [sociopaths psychopaths "predictive model"]; the results are quite scary).
There have also been predictive models for the likelihood of re-offending. Some claim that they can detect it from analysing the parents and other pre-birth social issues.
In the UK the Home Office is setting up a Children's Register, where they also intend to put records of "suspect foetuses".
The question then arises as to what do you do with the data, and the children concerned?
What almost makes it funny is that research on leading businessmen shows that they share a considerable number of identifying traits with both sociopaths and psychopaths under various identification models...
So what do we do: encourage them to become the next generation of "world class business leaders", or condemn them to a life of suspicion and self-doubt (which would probably cause most ordinary mortals to rebel and offend)?
The question is not really the prediction but "what do we do with those at risk", and society needs to answer it fairly soon.
Sorry the above post was mine.
Didn't you used to have a filter that picked up a blank name field?
"This is not thoughtcrime..."
Maybe not yet -- but I can pretty much surmise that, as the algorithm matures, the government will find it hard to resist the temptation to apply it to more and more people, at a younger age. And, if they can find some way to marry this with some of the data-mining they have going on...
"Apr. 14, 2026: FBI and IRS agents break down the door of Joe. Q. Schmutz, and take him off to the local detention facility. The reason given: TIA-NG determined that Mr. Schmutz had a 'high probability' of cheating on his income taxes."
"Sept 6, 2037: Police stormed onto the playground of Jones Elementary, and seized a kindergartener, after Tasering him 3 times. A police spokesman said that the arrest was justified, as TIA-NG identified him as a 'high probablility' sex offender, and the Tasering was based on his risk profile which showed he might fight back someday."
"January 1, 2050: A judge signed an order requiring Jane X Doe to terminate her current pregnancy, based on the determination by TIA-NG V4.0 that she was carrying a 'suspect fetus'. When asked what the fetus was suspected of, the police spokesman declined to comment, noting that National Security issues were involved."
I say scary.
Not because those who have been convicted don't deserve it, but because once the system is good and tried and true, they (yea, the scary 'they') could expand it to cover the whole population, and we'll be sent to jail for crimes we're statistically likely to commit. Certainly, this is most likely stupid paranoia -- except when you factor in oppressive governments and people "likely to fight against those governments in the future".
Now, that's the worst case. The best case (which rarely happens in real world) is we actually have fewer dangerous cases wandering freely.
Anyway I personally don't like the idea of statistics and predicting when it comes to individuals. False positives are inevitable.
Wouldn't it be at least as useful to be able to predict who's likely to become a victim?
"Wouldn't it be at least as useful to be able to predict who's likely to become a victim?"
Good point. Except I bet it's twice as hard to profile out victims since some crimes are targeted more or less at random. You probably don't figure out what kind of people work at the bank you're about to rob...
...but you know they work at a bank so... !
'Minority Report' anyone?
I think I agree with you. I have always assumed (though I don't have meaningful experience with it) that when somebody is sentenced to probation, it is up to the discretion of the case officer how much supervision the person receives. For example, a gang member might receive special attention/requests to avoid gang hangouts.
So it seems to me that the officers are probably already making certain judgment calls regarding who is likely to offend again. Note that the article implies they already have some internal formula for determining how much attention to give to whom.
It appears that a system is being proposed to improve a decision making process which already occurs. Certain people get more attention during probation, and this choice is in some sense arbitrary, but this isn't news.
The danger is mission creep. But as long as they see it as just a tool for helping to decide who to pay attention to, and as long as they are rational when evaluating its accuracy, I don't see a problem with the current plans.
If you think this is scary, have a look at the UK's Met Police Homicide Prevention Unit:
QUOTE: Criminal profilers are drawing up a list of the 100 most dangerous murderers and rapists of the future even before they commit such crimes, The Times has learnt. The highly controversial database will be used by police and other agencies to target suspects before they can carry out a serious offence.
[…]Experts from the Metropolitan Police’s Homicide Prevention Unit are creating psychological profiles of likely offenders to predict patterns of criminal behaviour. Statements from former partners, information from mental health workers and details of past complaints are being combined to identify the men considered most likely to commit serious violent crimes. UNQUOTE
From the UK Times, 27 Nov: http://www.timesonline.co.uk/article/...
More (with more links) here: http://crimepsychblog.com/?p=1241
What would you do if you were accused of a murder, you had not committed... yet?
movie of the day: Minority Report
There is a technical point I would have liked to see addressed in the article: How did they do the model reliability assessment?
If they assign probabilities of recidivism to their model scores based on their calibration data -- the training set used to determine the model parameters -- then they have an incest problem, and don't really know the accuracy of the model. They only know that it is accurate in predicting the data that the model was designed to get close to. This is not a fair assessment of model accuracy.
What is required is a test on an independent data set, one that was not used for model calibration.
I could not tell from the article whether the model creators have kept calibration and reliability assessment separate. Perhaps they did. Clearly, nobody thought it worth pointing out the distinction to the reporter, who otherwise did an OK job trying to discuss other technical issues, such as the false positive/false negative rates.
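The commenter's distinction can be sketched in code. In this toy example (all data fabricated; the "model" is just a fitted cutoff, not Berk's method), the labels are pure noise, so any above-chance accuracy on the calibration set is an artifact of fitting -- which is exactly why a held-out set is needed:

```python
# Minimal sketch of the calibration-vs-assessment ("incest") problem:
# accuracy measured on the data the model was fitted to is optimistic;
# a fair assessment needs a held-out set the model never saw.
import random

random.seed(0)

# Fabricated records: (feature, label). Labels are independent noise.
records = [(random.random(), random.random() > 0.5) for _ in range(200)]

# Partition BEFORE fitting: first 150 for calibration, last 50 held out.
train, test = records[:150], records[150:]

def accuracy(data, cutoff):
    """Fraction of records where (feature >= cutoff) matches the label."""
    return sum((x >= cutoff) == y for x, y in data) / len(data)

def fit_cutoff(data):
    """Toy 'model': pick the cutoff maximizing accuracy on `data`."""
    return max((x for x, _ in data), key=lambda c: accuracy(data, c))

cutoff = fit_cutoff(train)
print(f"accuracy on calibration data: {accuracy(train, cutoff):.2f}")
print(f"accuracy on held-out data:    {accuracy(test, cutoff):.2f}")
```

Since the labels are noise, the held-out accuracy hovers near chance while the calibration accuracy typically looks better -- the gap is the over-optimism the commenter warns about.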
"Pretty scary stuff, as it gets into the realm of thoughtcrime."
Thoughtcrime? That's overly alarmist.
This is no different than the models they use at doctor's offices to compute the risk of developing heart disease or certain forms of cancer and decide who might derive the most benefit from aggressive preventative care.
It all depends on how you use the data. If you assume that someone with a high risk score on this model *will* murder, then you don't understand statistics, as the reporter clearly didn't. If you correctly note that those with the highest risk are the best targets for preventive efforts if you want to play the odds, well then, that can be quite useful.
Of course, it does seem that the probation officers might not understand statistics either, but the problem is how people use the tool, not the existence of it.
Would be interesting to see how the model changes as the subjects adapt to it. After all humans are great adapting machines reacting very fast to pressure.
Just assume a person will have reduced time intervals between consultation because of some behaviour prediction. How long would it take this person to find out how to behave to increase the time intervals or avoid further reduction?
Of course this is a good program! We've got to protect ourselves and get these future-offenders off the streets! It's obvious these people are dangerous when you look at some of the red flags in this murder-prediction model:
Pistol-whipping before age 18.
Armed robbery before age 21.
Membership in radical left-wing "peace" movements.
Subscription to "Mother Jones" magazine.
A history of domestic violence.
Entry on a TSA "No Fly" list.
I think it's clear we need to get these kind of people locked up someplace where their "lawyers" can't get at them... for our own safety!
Systems like this will continue to be created. Since we can't (and don't want to, in my opinion) stop this type of research, what do we do to prevent the "Minority Report" outcome?
I have a recommendation for the comment section. How about instead of using anonymous you assign them a name based on their IP. It can be silly, Fancy McUnderpants or something. Just a thought.
"Actually the worst that could happen is that people might come to believe that prisoners have a RIGHT to be let out on probation. "
Actually, I think the framers of the constitution felt pretty strongly that an even worse occurrence would be the wrongful detention of a person not proved guilty beyond a reasonable doubt of an actual crime, as opposed to a future crime.
1) It would be irresponsible to just sit back and do nothing if the algorithm says a major crime might be committed. The resulting intervention will be judged a success, but we'll never know if a major crime was prevented or if someone was locked up unnecessarily.
2) Is there an implication that LEAs have to be statisticians, or else they'll misapply the tool?
(Studies have shown that most doctors can't understand statistics, so what chance does a probation worker have?)
This isn't thoughtcrime, it's a voluntarily accepted limitation in return for early release from prison. If you don't want to be subject to probation rules, you get to sit in your cell until your allotted sentence expires.
That's not to say we don't have thoughtcrime problems already. The "sexual predator" monitoring programs are definitely thoughtcrime. We don't have "car predator" or "drug predator" monitoring programs that require convicted-and-duly-released car thieves or drug pushers to register with the local police and have their names and addresses put on public websites. But somehow sexual crimes are different. THAT'S thoughtcrime!
Indeed sounds like "Minority Report" and somehow scary.
But it strongly depends on how it is used. Detain murderers-to-be (or tax-and-parking-offenders-to-be) on the base of statistics is certainly a bad idea. Focusing prevention measures on the other hand might have some merits.
It strikes me that this would be an incredible way to allocate limited therapy resources. The model describes which offenders are most at risk for an escalating cycle of criminal activity, and these are also the people most in need of intervention with treatment.
I'd have to agree that the risk isn't this application -- it's the other uses for the tool that will undoubtedly arise. It makes sense to concentrate a limited resource (probation officer man-hours) where it does the most good. Unfortunately, someone's going to think that it also makes sense to concentrate other limited resources (local law enforcement, FBI, BATF) where they do the most good -- without considering (or perhaps, without caring about) the collateral effects.
I don't see how this is different from training a neural network on some data, then forecasting. It'd be nice to know their data sets, and whether they did any blinded tests (trained software vs. random chance vs. simple metrics).
This concept has so much wrong and so much good within it that it is hard to overstate either.
First, probation officers, while they do spend a certain amount of time checking up on their cases, are not omniscient and are woefully understaffed (I believe I have seen ratios of 200 to 500 probationers to one officer). The idea that a P.O. might use such scoring to determine which parolees to spend more time on has a certain elegance from a case-management point of view, but think of the consequences of a false negative (i.e. ignoring someone who reoffends because "they didn't score high", even though that will never be the reason given by the P.O. after an incident comes out).
The reason someone might look into such a system is that the caseload is extremely high. How can we lower it? (Depending on your predilections, that can mean anything from hiring more tax-supported P.O.s to reworking the penal code to remove many "victimless" crimes.)
Finally, there already is such a method in place in almost all states--sex offender registries. If you are convicted of a sex offense (or almost any), you are required to stay away from schools/parks/public places, have to repeatedly register and check in with local law enforcement, etc., because you are automatically considered a repeat offender to be. Such a system as described above might allow these persons who are extremely unlikely to become reoffenders to leave the registry system (unless it remains a blackball offense).
And obviously, thoughtcrime should remain a non-offense. I can recall many an attempted shoplifter who backed off of their theft attempt even though they had the goods and were heading to the exit...it should be the act, not the preparation or the potential to act, that is the crime.
what's that, anonymous? you can't see any qualitative difference between me stealing your car, and me coming up behind you when you're alone, putting a knife to the side of your neck, bending you over a park bench and buggering you in your poop chute?
your statement that probation is a voluntarily accepted limitation in return for early release is true as far as it goes, but there's more to the issue that you left out.
our correctional resources are nowhere near adequate to keep every convict imprisoned for his full term. triage must be employed to keep the baddest guys locked up, and distinctions must be made within individual classes of crime, e.g., an effort must be made to identify the relatively benign armed robbers who are unlikely to reoffend versus the career banditos, and the crux is what metrics they select for this analysis.
at the risk of sounding like a liberal, i'm going to go out on a limb here and say that a lot of property crime is associated with poverty. while poverty transcends race, it nevertheless is statistically associated with certain ethnicities more than others. are the researchers employing metrics which are independent of race and class? i don't believe it's acceptable in our system for the probation department to say, well, you're poor and black, you fit the recidivism model so we're gonna keep you locked up longer.
The application as it stands strikes me as unobjectionable. The target population is a group already legally subject to increased observation and restriction. Parole officers already make ad-hoc decisions about which parolees to pay less or more attention to. The program formalizes the decision-making process, but doesn't call for the parole officer to perform any acts s/he doesn't already do. So what if there are false positives? All that would happen to FPs is that their parole officers would be tracking them closely during the period of their paroles. Close tracking is a goal of a parole system; if we had the resources we'd be closely tracking all parolees.
I do see a problem with false negatives. If someone is harmed by a false negative, chances are good that the victim would sue the jurisdiction for not properly watching that parolee.
It would be a problem if the program were applied to the general population and not just parolees. But I'm not willing to throw away a promising idea because it's subject to the slippery slope fallacy (http://www.logicalfallacies.info/slipperyslope.html).
On another note, the article is very unclear about the validity of the pilot program. Hopefully the statisticians understand the importance of not using the same data to test and validate a proposition.
This is clearly useful technology. Society benefits from optimized decisions on parole, surveillance and therapy.
It is critical, however, to prevent the abuse of this data. And abuse would be sorely tempting. When police investigate a crime, wouldn't they love to have a list of likely perpetrators? When prosecutors decide whom to prosecute, wouldn't they love to have this data to "confirm their belief" in the subject's guilt?
These are unfair imputations of guilt based on statistics; therefore any use of such data must be highly regulated to constrain it to fair and legal uses, and to exclude its prejudicial use in determining individual (as opposed to statistical) guilt.
As Stephen Colbert likes to joke, "I don't see color!" Software that crunches data and statistics to identify likely murderers is comical because human behavior is chaotic.
I can tell you someone from my 1000 person town will die in a car crash in the next year. Maybe we should retest all the drivers?
My idealist side agrees with this program -- create software to help parole officers triage their cases and help determine who needs more supervision. I think that would help, in that those more inclined to commit a crime once released will be watched a little more closely, thus potentially not committing crime and possibly becoming a positive addition to society. Yes, I would like to see prison and probation lead to rehabilitation.
My more realistic side is afraid of the scope creep that a lot of others are mentioning--that this will be used by more organizations than parole officers, then misused and abused.
If this software is developed, I hope my idealist side is correct, not my realist (paranoid?) side.
I just hope they don't use it on me -- it's final exams week soon, and I've felt pretty homicidal towards some of my students who've been bugging me for extra credit.
[Note to NSA: Kidding!]
"i can tell you someone from my 1000 person town will die in a car crash in the next year."
gee, i love a wagering opportunity. if we can define the town (as it is now, or move-ins and move-outs too?) and the risk class (wrecks in town, or does getting run over by a taxi in mexico count too?) and you give me decent odds, i'll plunk a bet down, at least i'd be betting on the side of life. c'mon you good drivers of podunk, just one more year!
X-Murderer-Checker-Version: AssassinAssassin 1.1.38 (2011-06-01)
X-Murderer-Status: Yes, score=7.4 required 5.0 tests=RACIAL_PROFILING,RELIG_NOT_IN_XTIANTY,BAYES_99,VOIGHT_KAMPFF_99
"what's that, anonymous? you can't see any qualitative difference between me stealing your car, and me coming up behind you when you're alone, putting a knife to the side of your neck, bending you over a park bench and buggering you in your poop chute?"
I don't think that's what s/he was saying at all. The complaint was that after you've supposedly paid your debt to society you're still deprived of more liberty than for other crimes.
"our correctional resources are nowhere near adequate to keep every convict imprisoned for his full term. triage must be employed to keep the baddest guys locked up, and distinctions must be made within individual classes of crime, e.g., an effort must be made to identify the relatively benign armed robbers who are unlikely to reoffend versus the career banditos, and the crux is what metrics they select for this analysis."
Ah, but why is that? If we think sex offenses deserve more severe punishment (otherwise why would we be letting them out if we don't trust them?), don't we as a society have a responsibility to change the laws to more accurately reflect our beliefs? Are we not prepared to put our money where our mouths are? Clearly we are not, but too many of us have no problem with the sex offender registry idea.
I have long believed that registering sex offenders is an abhorrent, hypocritical violation of all that a republic founded on liberty holds dear. I'm a father, one of my kids is a pretty blonde girl, and words can't convey what I would want to do to anyone who hurt her because she is one. However, my belief in pretty much all of the goals of the founders of the US is perhaps even stronger than my love for my children (thankfully, it's never been put to the test, and I hope it never will).
As you might imagine, this is one of my hot buttons. If you don't believe it's safe for any particular criminal to be out in public, then change the laws so they don't get out. If you believe it strongly enough, be willing to pay for it. But letting them out with the proviso that they'll be held up for public ridicule for the rest of their lives is just as wrong as it gets (never mind the ones for whom that sentence was effectively retroactive).
I'd love to see if the system withstands its own analysis. In other words, if it would come out as probably harmless when its behavioral patterns are input to the algorithm it uses. I fear it may send more people to their demise than it will benefit.
"If you assume that someone with a high risk score on this model *will* murder, then you don't understand statistics, as the reporter clearly didn't."
I think the reporter is the rule, not the exception.
The problem I am concerned with is: is a person with a high score actually more likely to commit homicide? We have seen in the past that when we have computers play the game of guess-the-criminal, they are usually wrong.
Nowhere in the article do they say that people with higher scores will be unduly deprived of life or limb. Oh yeah, they're convicted criminals on parole or probation. They've already been duly convicted of a crime.
Besides, where does it say doors will be knocked down, children will be tasered, or abortions will be ordered. It's a nice movie-plot slippery slope, but as such, it can not be used as a logical argument against the system.
You might argue it's discriminatory to supervise criminals differently based on this score. And that's not a bad argument, except again we're dealing with a duly convicted criminal who has not paid his debt to society (that's why the criminal is on parole or probation).
Sure, incrementalism is scary, but there's no reason a policy can't go right up to the line of rights and privacy and NOT cross over it.
For example, we have trial by jury.
Well, that's scary, a jury might wrongly convict me. Yeah, they might... and that would be awful. However, do you throw out the jury system in favor of judgment by a government official? Do we not feel the former is safer and more diligent at protecting our rights? Instead, the government has to convince 12 disinterested people who've got other things to do than hear your case that someone is guilty.
Look, this program is frankly kind of stupid. There's a tiny amount of data over a couple of years, and it probably won't lead to much of anything, but it still doesn't seem like rights are being overburdened when we're talking about folks still under a criminal sentence.
This is a system that's up for valid debate, but the alarmism about what the inept government (come on, they can't get anything right in government and they're shrewd enough to spy on you?) is going to do when it can read your criminal thoughts..? Please...
Perhaps something like this could be used to screen postal employees? Or diners at Ihop?
Maybe we could combine the murder-prediction software with this:
and start prosecuting CEOs before they have ruined major industries...
Oh, I'm sorry, these technologies are only for oppressing the poor, my mistake.
Besides the idea of mission creep, there are other side-defects of such a system, but, then, the "frequent shopper" algorithms are already well entrenched, aren't they?
In some ways, the "presumption of innocence" is intended to take the chance by, worst case, letting an offender free, rather than imprisoning an innocent.
Not that we haven't imprisoned innocents before, of course. Or don't know. Or won't in the future.
It can be argued that this relates closest to what hospitals use for staffing: Acuity levels. As each patient's needs are tallied up, the staffing office can work out how many people on unit X they need to bring in for a shift.
So it can be argued that this is trying to find the "critical indicators" needed to work out what a particular probationer will need in the way of attention and surveillance.
So it's meant to save money, time and attention... but the same techniques are also available for political suppression...
Is there any evidence that more intense supervision by caseworkers will have any positive effect?
Well, I certainly find statistical modeling to be preferable to subjective profiling methods. The kind of increased surveillance these "likely murderers" will be subject to concerns me, but as individuals on probation, they would already be under surveillance or supervision. If the model proves accurate and devoting more time to monitoring (in ways that do not infringe upon their rights) "likely murderers" proves effective, I see no problem with the program.
Whoops. Forgot to mention something: there is an abundance of data about murderers (and non-murderers) to feed the modeling software with, which makes it far superior to programs that target statistically insignificant groups, like terrorists.
Two doubts not mentioned so far:
a) Will the system be neutral for the psyche of the "likely murderers"?
We know that expectations often produce the expected feedback, and in a constitutional state such profiling can't be kept secret from the examined subject -- can it?
So perhaps a 76% expected-relapse figure will make the person pessimistic about his own future, fatalistic, and less engaged in programs to change his life.
b) Is data collected in the past valid for the future?
Social parameters change in time. Will the way the data is collected stay stable? Will the meaning of the data be the same?
"Parole officers already make ad-hoc decisions about which parolees to pay less or more attention to. The program formalizes the decision-making process, but doesn't call for the parole officer to perform any acts s/he doesn't already do."
Excellent reasoning. My concern would be that the model would be "enhanced" over time to ensure criminals are not discriminated against by race, sex or other politically incorrect ways to look at people.
If this is provided as but another tool for the PO, allowing the PO to make the final decision on how to allocate his limited time, this should be a no-brainer.
Wow, this is not just a little scary: it conforms exactly to William Bogard's image of a hyper-control society where surveillance and profiling tend to expedite "the conversion of objects, events, and people into information" (Bogard, W.: "The Simulation of Surveillance: Hypercontrol in Telematic Societies", Cambridge UP 1996).
I fail to see how this is not a simple systematization of what is already done.
We *expect* our parole boards to judge the likelihood of an offender to reoffend when deciding who to parole. Similarly we expect our probation officers to estimate what types of crimes the individual is likely to commit. Probation officers who are dealing with a habitual drug user no doubt spend more time looking for drug use while with someone who was a con man probably look for other signs. The only thing alarming about this is that they probably just use their intuitive judgement and we all know how faulty that can be.
There have been statistical models of who is likely to commit crimes for a LONG time, so it isn't the mere fact that someone is trying to predict future crimes that is bothersome. We already expect probation and parole officers to do this sort of thing, so the future-punishment aspect isn't worrisome; we are just deciding whether or not to let some past offenders off easy because they don't pose a particular threat. So I'm really failing to see what is disturbing about this sort of system.
There is a big difference between processing data and how that data is used. This system makes perfect sense for parolees, whose parole officers *already* must judge the likelihood of recidivism.
Foolishly applying it to the entire population would not only be ineffective (because of the question it's designed to ask), but scary. It goes past thought crime to correlation crime.
Note that we already have thought crime of sorts (intent is a big aspect of certain crimes).
Now that it occurs to me, we do have correlation crime already, in the form of various kinds of profiling.
Isn't the incest problem largely solved by partitioning the datapoints into exclusive subsets, training the system with one subset and testing it with another?
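Yes -- that is the standard fix, usually called cross-validation. A minimal sketch of the partitioning (with placeholder data standing in for probation records):

```python
# Sketch of the partition-and-hold-out idea: k-fold cross-validation.
# Split the data into k disjoint folds; calibrate on k-1 folds and
# assess on the held-out fold, rotating so every record is used for
# assessment exactly once. `data` is a stand-in for real records.

def k_fold_splits(data, k):
    """Yield (calibration, holdout) pairs over k disjoint folds."""
    fold_size = len(data) // k
    for i in range(k):
        holdout = data[i * fold_size:(i + 1) * fold_size]
        calibration = data[:i * fold_size] + data[(i + 1) * fold_size:]
        yield calibration, holdout

data = list(range(20))                           # placeholder records
for calibration, holdout in k_fold_splits(data, k=4):
    assert not set(calibration) & set(holdout)   # never assess on training data
    print(f"calibrate on {len(calibration)}, assess on {len(holdout)}")
```

With one fixed split you get a single independent test; rotating the folds this way also uses all of the data for both roles.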
On a slightly related tangent to what some others have suggested above, what about using this to profile leaders, detention "professionals" and law enforcement?
You certainly want to avoid situations like Abu Ghraib, and the Bay County boot camp incident, no?
I mean, what would the software tell you if you entered all these incidents:
I work in data-mining, so take this with a rock of salt.
One thing that no one has mentioned so far is an old truth in our field: people (probation officers in this case) *already* actively try to pattern-match. They can't help it: humans are pattern-matching animals; that's one of our main intelligence strengths.
However, when you rely on *just* the officer's pattern-matching ability, you have a couple of limitations: his own mental sample is necessarily smaller than the model's, his abilities may in fact be lower than the model's (and for advanced, well-trained models, that is usually the case), and his own prejudices/background may in fact be skewing his decisions.
Even in ideal circumstances (brilliant officer, extensive experience, fairness that Solomon himself would envy), decision support systems such as this will end up helping by properly focusing attention where it's really needed, i.e. on the *borderline* cases. For those cases the model is quite certain of, it's unlikely that the human will do much better; but for cases where the model is not certain, the human will simply have more time to apply his attention.
Also, spare me the "human behavior is chaotic" naivete: statistical models predict human behavior all the time: from airfare pricing algorithms, to recommendation systems, to fraud detecting systems, to credit rating algorithms. And *at the aggregate* they usually do very, very, well. And when you have to allocate resources (attention in this case), the aggregate is all that matters.
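That aggregate point can be illustrated with invented numbers: each individual outcome here is a 10% coin flip, essentially unpredictable case by case, yet the total is forecast tightly.

```python
# Fabricated illustration of aggregate vs. individual prediction: each
# person's outcome is a 10% coin flip (unpredictable individually), but
# the total across 10,000 people lands very close to the expected 1,000.
import random

random.seed(1)

p, n = 0.10, 10_000
outcomes = [random.random() < p for _ in range(n)]

expected_total = p * n          # the model's aggregate forecast
actual_total = sum(outcomes)    # what actually happened
print(f"expected {expected_total:.0f}, observed {actual_total}")
```

For resource allocation, that aggregate tightness is what makes the model usable even when any single case could go either way.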
Yes, we have to be careful on how this system is used, but that's true for any crime-fighting technology, from tasers to rubber bullets.
@Crunchy McJockstrap: "therefore any use such data must be highly regulated to constrain its fair and legal uses"
And therein lies the problem. Heck, didn't the finally-passed bill authorizing Social Security specifically state that the SSN couldn't be used for any other purpose? Look how well that worked out...
////Even a crime as serious as aggravated assault -- pistol whipping, for example -- "might not mean that much" if the first-time offender is 30, but it is an "alarming indicator" in a first-time offender who is 18, Berk said.
It depends on the circumstances, not just the age -- especially if it was malicious and unprovoked!
You could keep this kind of system honest by turning parole decisions into an insurance decision--for each prisoner, you want to buy recidivism insurance for the next five years, say at a million dollars. You try to minimize your costs of incarceration (ideally including the costs to the prisoners). So if the cost to buy recidivism insurance is greater than the cost to keep the person in prison for the rest of his sentence, he doesn't get paroled. (Obviously, you need to ensure that the sentences are long enough for deterrence and have a human in the loop to prevent wildly out-of-line answers.)
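A toy version of that decision rule, with every figure invented except the million-dollar payout mentioned above (the loading factor and cost numbers are hypothetical):

```python
# Sketch of the insurance framing: the insurer prices five-year
# recidivism cover from the model's probability, and parole is granted
# only when that premium undercuts the cost of continued custody.

PAYOUT = 1_000_000      # insured amount per reoffense (from the comment)
LOADING = 1.2           # hypothetical insurer margin over the fair price

def premium(p_reoffend):
    """Actuarially fair price times a loading factor."""
    return p_reoffend * PAYOUT * LOADING

def grant_parole(p_reoffend, annual_custody_cost, years_remaining):
    """Parole iff insuring release is cheaper than finishing the sentence."""
    return premium(p_reoffend) < annual_custody_cost * years_remaining

print(grant_parole(0.02, 40_000, 4))   # low-risk case: parole
print(grant_parole(0.30, 40_000, 4))   # high-risk case: keep in custody
```

The appeal of the framing is that the insurer, not the parole board, bears the cost of a bad model, which gives someone a financial incentive to keep the model honest.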
The biggest problem I see with this is that statistical models and computer programs tend to be seen by non-techies in a very different way than by people who understand them. Criminals will eventually be choosing which crimes to plead guilty to (in the plea-bargaining) based on minimizing their predicted probability of recidivism, and even which crimes they commit. Some people will get flagged as high-risk even though a human can see that they're low risk. (Some others will be high risk, but will be such good con men that they convince the humans they're low risk regardless of what the model says.)
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.