Partial Fingerprints Barred from Murder Trial

Brandon Mayfield, the Oregon man who was arrested because his fingerprint “matched” that of an Algerian who handled one of the Madrid bombs, now has a legacy: a judge has ruled partial prints cannot be used in a murder case.

“The repercussions are terrifically broad,” said David L. Faigman, a professor at the University of California’s Hastings College of the Law and an editor of Modern Scientific Evidence: The Law and Science of Expert Testimony.

“Fingerprints, before DNA, were always considered the gold standard of forensic science, and it’s turning out that there’s a lot more tin in that field than gold,” he said. “The public needs to understand that. This judge is declaring, not to mix my metaphors, that the emperor has no clothes.”

Posted on October 25, 2007 at 7:03 AM • 36 Comments

Comments

Rr October 25, 2007 8:09 AM

The summary and article leave out some very important information pertaining to the judge’s reasoning. It seems that a key objection surrounds the introduction of some possibly severe bias, both by detectives and forensic examiners, in the print matching process. I recommend reading the full decision.

Matthew Dares October 25, 2007 8:09 AM

This just proves that fingerprints cannot paint the whole picture, but to bar them as evidence is silly. The cases cited may have pointed to some of the wrong people being ‘fingered’, but they didn’t fail to narrow the list of suspects. All evidence needs to be weighed in context, and the jury simply needs to be instructed that the fingerprints alone do not prove he was there. The issue at heart is that trials need to focus more on discovering the truth and act much less like a contest to be won or lost.

ttrent October 25, 2007 8:30 AM

If we looked upon trials as efforts to find and consider the entire sum of evidence, rather than some insane zero-sum litigation game, we’d have more accurate outcomes — in other words, maybe more of the hundreds of thousands of solid cases that founder on absurd exclusionary rules or get dropped for pleas or deferred adjudication or nothing at all would have their day in court. Then, of course, we’d have to fund the courts. So instead, we do nothing to resolve 98% of crime, and pretend that’s justice.

Roy October 25, 2007 8:54 AM

The problem with modern fingerprinting is that actual prints aren’t compared. The evidence print is reduced to an abstraction, and the abstraction is compared to a library of abstractions for near matches.

The more specific the criteria, the fewer matches will be found. The looser the specificity, the more matches will be found.

Normally no human ever visually compares the file print to the evidence print to look for mismatches. People can get convicted on fingerprint evidence that would be undone by actual inspection by a 12-year-old with a magnifying glass.
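A minimal sketch of the tradeoff Roy describes, using an invented minutiae-set abstraction and a tunable match threshold (every name and number here is hypothetical, not how any real AFIS works):

```python
# Prints reduced to abstractions (sets of minutiae IDs), compared against
# a library by overlap. Looser thresholds yield more "matches".

def similarity(evidence, candidate):
    """Fraction of the evidence minutiae found in the candidate."""
    return len(evidence & candidate) / len(evidence)

def search(library, evidence, threshold):
    """Return every library entry whose overlap meets the threshold."""
    return [name for name, minutiae in library.items()
            if similarity(evidence, minutiae) >= threshold]

library = {
    "person_a": {1, 2, 3, 4, 5, 6, 7, 8},
    "person_b": {1, 2, 3, 4, 9, 10, 11, 12},
    "person_c": {1, 2, 13, 14, 15, 16, 17, 18},
}
partial_print = {1, 2, 3, 4}   # a degraded partial evidence print

print(search(library, partial_print, 1.0))   # ['person_a', 'person_b']
print(search(library, partial_print, 0.5))   # all three "match"
```

Note that even at the strictest threshold the partial print matches two different people; only comparison against the full file prints could tell them apart.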

Sparky October 25, 2007 9:28 AM

I thought, or at least assumed (silly me), that all possible matches coming from a fingerprint database would be manually compared to the print found at the crime scene. The database should only be used to drastically reduce the number of prints that need to be manually compared.

So, now what? Everyone in a US prison convicted on fingerprint evidence can ask for a retrial?

Rr October 25, 2007 9:47 AM

@sparky

They do what you describe. They’re apparently just easily biased, which results in matches being accepted too readily.

Also realize this is specifically being scrutinized by the judge because this is a death penalty case.

ARM October 25, 2007 10:08 AM

@ Matthew Dares

The prints are being barred in this case because the judge does not find credible the assertion that identifications are never wrong.

The statement that fingerprints cannot paint the whole picture is a truism – the fact that you find my fingerprints at a crime scene doesn’t mean that I had anything to do with the crime itself. This case is about how one actually goes about making the determination that those are MY fingerprints that you found at the crime scene, and what the chances are that such an assertion is incorrect.

This is a capital case that appears to lack any other means of placing the defendant at the crime scene, so the determination that the prints found belong to this man – and no-one else – could spell the difference between a proper verdict and the execution of a man innocent of the crime he’s being executed for.

galv October 25, 2007 10:17 AM

The fundamental problem with fingerprinting is that jurors are rarely allowed to examine the direct evidence. Instead they are only allowed to hear the testimony of “experts,” almost invariably government-paid police workers. The potential for conflict of interest and bias is very large, and examples of shoddy work and outright perjury are numerous. Brandon Mayfield’s case was particularly outrageous because competent authorities (the Spanish police) had alerted the FBI a full month before his arrest to the fact that their results were hogwash. Mr. Mayfield was far from the first person to be railroaded by unscrupulous government agents and there is little reason to believe he will be the last.

Michael Burns October 25, 2007 10:27 AM

@Roy

Having worked for a company that produced such equipment, I know that the process is generally as follows:

1) The Latent Print lifted at the crime scene is scanned into the system.
2) The fingerprint database is searched for “likely” matches.
3) The Latent Examiner visually compares the “likely” matches thereby reducing the set.
4) The Latent Examiner compares the physical latent print with physical print cards.

As “Rr” states, human biases can play a major role in steps 3 and 4.
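In code form, that pipeline might look like the sketch below; the function names and the “examiner” hook are illustrative stand-ins, not any vendor’s actual API. Steps 1 and 2 are automated, while steps 3 and 4 are human judgment calls, which is exactly where the biases enter.

```python
# Illustrative four-step latent-print workflow (hypothetical, simplified).

def scan_latent(image):
    """Step 1: digitize the latent print (stub: input is already minutiae)."""
    return set(image)

def database_search(db, latent, threshold=0.5):
    """Step 2: automated search for 'likely' matches."""
    return [pid for pid, m in db.items()
            if len(latent & m) / len(latent) >= threshold]

def examiner_screen(latent, db, candidates, examiner):
    """Step 3: visual comparison of the candidates (a subjective call)."""
    return [pid for pid in candidates if examiner(latent, db[pid])]

def card_comparison(latent, survivors, pull_card, examiner):
    """Step 4: comparison against physical print cards (also subjective)."""
    return [pid for pid in survivors if examiner(latent, pull_card(pid))]

db = {"A": {1, 2, 3, 4, 5}, "B": {1, 2, 3, 9, 10}}
latent = scan_latent([1, 2, 3])
candidates = database_search(db, latent)                 # ['A', 'B']
lenient = lambda a, b: len(a & b) >= 2                   # a loose, biased criterion
print(examiner_screen(latent, db, candidates, lenient))  # both survive step 3
```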

Anonymous October 25, 2007 10:50 AM

@bzelbob: I think the problem was unreal pressure to get bad guys, and implicit permission to get some innocents in the process to make sure. For instance, look at this one: http://tinyurl.com/35mqxp. Again, an agent going way beyond normal to get a prosecution. Is it right? No. Did it happen? yep.

Matthew Dares October 25, 2007 11:10 AM

@ARM

It does sound like their case is based on nothing else, but that should mean there will be insufficient evidence for charges, or at least for a guilty verdict, not that the fingerprints should be excluded. I understand what you are getting at, but I have a hard time believing evidence should ever be excluded because it is not in and of itself 100% accurate.

greg October 25, 2007 11:13 AM

I think this is more a method being used outside its original context.

The original context was to compare against a small set of suspects, just like DNA fingerprinting. In that case false positives are a much smaller problem.

Neither method has been developed to the point where it works with massive databases, where the birthday paradox starts to create an expectation of collisions.
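A back-of-the-envelope illustration of that birthday effect; the per-pair match probability here is invented purely to make the arithmetic concrete:

```python
# Expected number of chance collisions among N prints, where any given
# pair matches by accident with probability p: E = C(N, 2) * p.
N = 5_000_000      # records in a merged database (hypothetical)
p = 1e-12          # per-pair false-match probability (hypothetical)
pairs = N * (N - 1) // 2
print(pairs * p)   # ~12.5 chance collisions lurking inside the database
```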

In particular DNA does not cover enough markers in different ethnic groups to give any kind of guarantee of uniqueness.

We are all a lot more similar than we think, DNA and fingerprints included.

FNORD October 25, 2007 11:38 AM

To extend a metaphor, even tin has its uses. Partial prints, while not certain, can provide circumstantial evidence.

It is important to note the judge’s conclusion: “ACE-V…purports to be infallible”. While I disagree that a specific error rate must be known for evidence to be admissible, the problem here is that this specific testing methodology presents itself as exact when it is clearly not. It would appear that she would allow testimony that the fingerprints are similar, just not testimony of positive identification.

In any case, expect this case to be appealed.

Anonymous October 25, 2007 12:27 PM

@Anonymous, that link http://tinyurl.com/35mqxp was very interesting, especially the conclusive evidence that the US Court of Appeals suppressed details of the FBI coercion methods.

Still, a coerced confession in the 9/11 case that’s later thrown out because an uninvolved third party shows up is a pretty glaring screw up. Too bad the court felt it appropriate to hide details from the light of day.

Terry Cloth October 25, 2007 12:29 PM

Excerpts from the judge’s decision (found in The Detail, a fingerprint technical periodical) include some fantastic reasoning by the prosecution. Check out some of the text (paragraph 8 before the section labeled “Conclusion”):

(Note: Meagher: expert witness for the state. ACE-V: A method for identifying fingerprints [I’d love to know more about what methods exist, and their respective alleged accuracy]).

“Mr. Meagher incredibly testified that there is no error rate in ACE-V as it is an infallible methodology. He attributed all erroneous identifications to examiner error in applying the methodology. [And how do we determine whether we have an erroneous examiner, hmmm?] Mr. Meagher was neither credible nor persuasive in this regard. Without impartial testing, however, whether or not the methodology is infallible is unknown.”

“An error rate, or lack thereof, must be demonstrated by reliable scientific studies, not by assumption. Where tests have attempted to imitate actual conditions, error rates by fingerprint examiners have been alarmingly high.”

The court earlier mentions two cases in which the alleged identifications by fingerprints were dead wrong. (Not literally, because neither was executed.)

“In a 1995 test conducted by a commercial testing service, less than half of the fingerprint examiners were able to identify correctly all of the matches and eliminate the non-matches. On a similar test in 1998, less than sixty percent of the examiners were able to make all identifications and eliminations. An error rate that runs remarkably close to chance can hardly be viewed as acceptable […].”

It sounds an awful lot like no one has determined the rates of positive or negative errors. I read some time ago that there had been no clinical trial for fingerprints, and that an ID was a matter of finding five points of coincidence between two samples.

It sounds as if the defendant has a compelling case, and the judge has ruled correctly. My only worry is that the decision will be overruled on the basis of “but we’ve always done it this way.”

Brian Carnell October 25, 2007 12:53 PM

“Mr. Meagher incredibly testified that there is no error rate in ACE-V as it is an infallible methodology. He attributed all erroneous identifications to examiner error in applying the methodology. [And how do we determine whether we have an erroneous examiner, hmmm?] Mr. Meagher was neither credible nor persuasive in this regard. Without impartial testing, however, whether or not the methodology is infallible is unknown.”

And this really gets to the heart of the matter. Fingerprint “matches” are simply not reliable or based in any objective science.

Fingerprinting is a nonscientific evidence gathering method that has been wrongfully grandfathered in as scientific by courts. Fingerprints should not be allowed as evidence in courts, period.

http://www.news.cornell.edu/releases/Jan02/fingerprint.study.deb.html

ARM October 25, 2007 1:14 PM

@ Matthew Dares

You’re right – 100% is a very high bar, and somewhat unreasonable. But what’s really at issue in this case is that no-one can say what the percentage of accurate results really is. If you don’t know what the error rate is, you can’t actually say that you’ve proven anything. There could be no other people who match, or there could be 50 people who are much better matches.

Added to this is the fact that the public has grown up thinking that fingerprints ARE 100% reliable. Therefore fingerprints, because of their supposed reliability, become prejudicial, rather than actually evidentiary.

Petréa Mitchell October 25, 2007 1:29 PM

Science News did this cover story a few years ago on bullet-lead analysis:

http://www.sciencenews.org/articles/20040327/bob9.asp

Which notes:

“Bullet-lead analysis isn’t the only forensic technique to come under fire in recent memory. Courts and legal experts have begun questioning tool-mark analysis—say, the pry-bar markings on a doorframe; handwriting analysis; and even fingerprint analysis. David Faigman at the University of California, San Francisco’s Hastings College of Law says the problem is that many forensic techniques have been used for decades without undergoing significant validity testing.”

It also mentions a case in 2002 where partial fingerprints were barred.

Brian Carnell October 25, 2007 2:48 PM

@ Matthew Dares writes:

“It does sound like their case is based on nothing else, but that should mean there will be insufficient evidence for charges, or at least for a guilty verdict, not that the fingerprints should be excluded. I understand what you are getting at, but I have a hard time believing evidence should ever be excluded because it is not in and of itself 100% accurate.”

That sounds reasonable, but it is contradicted by the International Association for Identification, the major international standards body that certifies fingerprint examiners. The IAI actually passed a resolution in 1979 declaring that any examiner giving “testimony of possible, probable or likely [fingerprint] identification shall be deemed to be engaged in conduct unbecoming.”

Meagher was not an outlier when he testified that there are never any errors with the procedure — this is the claim generally employed by fingerprinting advocates. The cases where fingerprint identifications have later been found to be inaccurate are always chalked up to operator error or incompetence.

Should someone who says that his technique is never in error be allowed to testify? No. That is a pseudoscientific claim. You don’t see DNA experts testifying that they are never wrong — they always give probabilities.

The difference is that DNA random-match probabilities are relatively low, whereas with fingerprinting the probability of a false match is likely to be significantly higher.

Carlo Graziani October 25, 2007 3:29 PM

One depressing aspect of this story is that there is no practical obstacle to scientific calibration of fingerprint-matching methodologies like this one. Obtaining the sensitivity and false-positive rate as a function of any controllable parameters is so straightforward that it is a scandal that this appears never to have been done.
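He is right that the calibration itself is mechanical once you have ground-truth pairs. A minimal sketch, assuming scored comparisons labeled with their true source (the scores and labels below are placeholders):

```python
# Sweep the match threshold and report sensitivity and false-positive rate
# at each setting -- exactly the calibration curve said to be missing.

def calibrate(scored_pairs, thresholds):
    """scored_pairs: list of (similarity_score, truly_same_source) tuples."""
    pos = sum(1 for _, same in scored_pairs if same)
    neg = len(scored_pairs) - pos
    for t in thresholds:
        tp = sum(1 for s, same in scored_pairs if same and s >= t)
        fp = sum(1 for s, same in scored_pairs if not same and s >= t)
        print(f"threshold={t}: sensitivity={tp/pos:.2f}, fpr={fp/neg:.2f}")

calibrate([(0.9, True), (0.7, True), (0.6, False), (0.3, False)],
          thresholds=[0.5, 0.65, 0.8])
```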

A scandal, but hardly a surprise. There seems to be a general rule of law-enforcement use of this sort of system that no actual scientific validation is ever required. That’s why eyes roll whenever we see stories about high-tech eye/face/behavior/fill-in-the-blank scanners that supposedly nail bad guys. So long as no validation is required, none will be supplied, and law-enforcement officials dazzled by scientific-sounding claims will continue to be unable to distinguish valid systems from snake oil.

bzelbob October 25, 2007 4:16 PM

Anonymous said:

“I think the problem was unreal pressure to get bad guys, and implicit permission to get some innocents in the process to make sure. For instance, look at this one: http://tinyurl.com/35mqxp. Again, an agent going way beyond normal to get a prosecution. Is it right? No. Did it happen? yep.”

Thanks, I had already read that story recently! 🙂

Your comment shows the heart of the problem: in our quest to punish “the guilty” we are punishing innocents as well. This is completely contrary to the nature of law, where the preservation of the rights of the innocent is supposed to take precedence over the punishment of the guilty.

See the following article from wikipedia, which sums it up nicely: http://en.wikipedia.org/wiki/Presumption_of_innocence

AwaitTheAppeal October 25, 2007 5:02 PM

By any chance, has this judge replaced the one who sued a dry cleaner for $65mil over a lost pair of pants?

John Phillips October 25, 2007 5:34 PM

There was a case in Scotland in 2000 where various members of their fingerprint bureau were so wrong it almost amounted to corruption in support of a police case. BBC Panorama did a program on it and called in a number of recognised international experts to examine the prints, and they could not understand how the four bureau officers had made the determination they had. The program displayed the fingerprints under dispute, and even as a lay person it was easy to see that there was no comparison. It led to an investigation of the bureau that only finished last year. Here is a transcript of the Panorama program:

http://news.bbc.co.uk/1/hi/programmes/panorama/5312452.stm

Terry Cloth October 25, 2007 7:49 PM

@Rr: You don’t mention the strongest bias: that of the jury. They consider fingerprints infallible.

@Matthew Dares: “[T]o bar [fingerprints] as evidence is silly….the jury simply needs to be instructed that the fingerprints alone do not prove he was there.”

Yeah, right. No matter what the judge’s instructions, 99 and 44/100 % of the populace will not be able to overcome the presumption of absolute ID (see comment to Rr). Hence, the only way to handle such strong bias is to not allow the evidence to be admitted.

One of my pet peeves (“greatest fears” is more like it) is the current atmosphere of accusing that courts “coddle the criminals”. No one could be against coddling criminals, of course. What the lawn ordure crew never mentions is that these “coddling” laws are there to protect the innocent, an ideal almost unmentioned these days in the U.S.

Lawrence D'Oliveiro October 25, 2007 8:47 PM

Fingerprints have never been subjected to the same sort of statistical scrutiny as DNA analysis, even though they can be just as fallible. Just because fingerprinting became commonplace before the importance of such statistical error analysis was recognized is no excuse.

supersnail October 26, 2007 4:54 AM

I think the basic problem is the recent merging of various fingerprint databases, which is a very sensible idea when it comes to detecting and locating criminals.

However people are very bad at dealing with the statistical consequences of large samples.
You tell someone it’s a “million to one” that two fingerprints match and they assume it’s close to impossible. But if you are choosing from a database of five million fingerprints, you will hit five matches every time.
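supersnail’s arithmetic, made explicit (the figures are the ones in the comment, not measured error rates):

```python
# Searching one print against a large database: the expected number of
# chance hits is N * p, even when p sounds "close to impossible".
p = 1 / 1_000_000    # "a million to one" per-comparison match odds
N = 5_000_000        # size of the merged database
print(N * p)         # 5.0 expected chance matches per search
```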

Clive Robinson October 26, 2007 6:30 AM

@ Lawrence D’Oliveiro,

“Fingerprints have never been subjected to the same sort of statistical scrutiny as DNA analysis”

The problem with the majority of the statistical tests on DNA systems is that they are usually completely riddled with assumptions.

Also, we still know so little about why the DNA sequence is as it is that we cannot treat any particular marker as a reliable means of identification.

One area of study that has not been undertaken in any realistic form is that of cross-contamination of a sample. If you take DNA from a pure source then it is possible to make some statistical statements about it. However, if the sample is taken from somewhere such as a public area you are very, very unlikely to be able to take a pure DNA sample, or one that has not in some way been degraded by time / chemical / biological action.

Several years ago I looked into how simple it would be to fake DNA evidence, as I was unconvinced by the “You cannot clone Human DNA” argument. What I found at the time absolutely shocked me. It is oh so easy to fake DNA that somebody with moderate ability and access to undergrad books and a credit card could get both the knowledge and the equipment to do it.

It was not a subject the “experts” wanted to talk about, as they preferred to adopt a “head down, BTM up” attitude and treated enquiry into this aspect with all the hostility you might expect if you had suggested killing their “Golden Goose”.

Put simply, the judicial process relies too heavily on “new techniques” to obtain convictions, as there appears to be an inbuilt set of assumptions by M’Learned Brethren:

1) It’s Hi-Tech = Must be the best.
2) It has current scientific papers = Must be accepted by the scientific community.
3) It has lots of mathematics = Must be infallible.

And last but not least,

4) It’s too complicated for me to understand = Leave it to the experts and don’t ask questions, and don’t let the “experts” be cross-examined in front of the Jury on the subject, as it will only confuse them as well…

bob October 26, 2007 7:27 AM

As Roy alluded to earlier, fingerprints are used similarly to file hashes. So the likelihood that multiple people “hash” to the same fingerprint is similar to multiple files generating the same MD5 hash… (albeit not statistically comparable; MD5 is going to be MUCH more collision-resistant).
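The analogy can be pushed one step further: a partial print is like a truncated hash, and truncation raises the collision rate dramatically. A toy demonstration (the 2-byte truncation is an arbitrary stand-in for a degraded latent print):

```python
import hashlib

# Full MD5 digests (16 bytes) vs. heavily truncated ones (2 bytes),
# standing in for full prints vs. partial latent prints.
digests = [hashlib.md5(str(i).encode()).digest() for i in range(2000)]

def collisions(ds, nbytes):
    """Count how many digests collide when truncated to nbytes."""
    seen = set()
    hits = 0
    for d in ds:
        key = d[:nbytes]
        hits += key in seen
        seen.add(key)
    return hits

print(collisions(digests, 16))  # 0: full digests don't collide here
print(collisions(digests, 2))   # ~30: truncated "partials" collide often
```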

Swiss forensics guy October 26, 2007 7:49 AM

The interpretation of the value of fingerprints needs a statistical approach, preferably a Bayesian approach, as is most commonly used with DNA evidence. It is nonsense to see this as a binary problem, either identification (good quality print with > 12 minutiae) or rejection! Of course a partial print (medium to bad quality print with < 12 minutiae) has some value, and through a Bayesian framework this evidence can be connected with other evidence, even non-physical evidence. The state of the art pertaining to fingerprint analysis can be found in this book:

http://www.amazon.com/Fingerprints-Other-Ridge-Skin-Impressions/dp/0415271754

Among other things, mostly chemistry, it thoroughly discusses a statistical approach to the interpretation of fingerprint evidence. Here is another book that deals (in simple terms) with the statistical interpretation of physical evidence in general:

http://www.amazon.com/Interpreting-Evidence-Evaluating-Forensic-Courtroom/dp/0471960268/ref=sr_1_7/103-8793371-9219802?ie=UTF8&s=books&qid=1193402558&sr=1-7

The judge’s decision in this case is simply wrong and nicely illustrates the gap between scientists and jurists. Judges and juries don’t want to deal with probabilities; they want black-or-white answers (such as those given in shows like CSI), not a grayscale. Nature doesn’t work that way, and neither does forensic evidence.
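For readers unfamiliar with the Bayesian framework being advocated, a minimal sketch of how a likelihood ratio updates the odds of identity; every number below is invented for illustration:

```python
# Bayesian updating with a likelihood ratio (LR):
#   posterior odds = prior odds * LR,
# where LR = P(evidence | same source) / P(evidence | different source).

def update_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

prior = 1 / 10_000    # prior odds from the rest of the case (hypothetical)
lr_partial = 5_000    # likelihood ratio for a partial print (hypothetical)

posterior = update_odds(prior, lr_partial)
prob = posterior / (1 + posterior)
print(posterior, prob)   # odds 0.5 -> P(same source) ~ 0.33
```

On these made-up numbers the partial print carries real weight, yet still leaves the identification far short of certain, which is exactly the grayscale being argued for.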

erasmus October 26, 2007 9:23 AM

Swiss forensics guy, you wrote “The judge’s decision in this case is simply wrong”… but the judge was following the system.

Last time I looked, British and US criminal law were founded on a system that generally requires a clear indication of guilt “beyond reasonable doubt”, not just on weighing the “balance of probabilities” – which is all that’s needed for a lesser civil case with no jail sentence. If the forensic tool can’t produce 100% reliable evidence, then a conviction based on that alone must ultimately fail as ‘unsafe’.

If you are prepared to accept a probabilistic approach, for whatever reason, then you start to dismantle the legal system as it is known and accepted in these countries.

BTW, spot on synopsis, Clive R!

Rr October 26, 2007 9:44 AM

@Swiss forensics guy

It’s not that there isn’t a good method for approaching the matching, it’s that it’s claimed to be infallible, which just happens to be a little detrimental to the defendant’s well-being when presented as such to a jury in a death penalty case.

That’s where the judge is coming from, and I think it’s a fine decision (although I’d rather have them abolish capital punishment, but that’s another discussion.)

markm October 26, 2007 1:29 PM

The application of statistical methods to science just didn’t exist when fingerprinting started. OTOH, when DNA evidence began to be introduced, courts knew these methods existed and insisted on testimony showing that they had been used to establish the error rate. So you get estimates such as “1 in 5 million” – and then the defense attorney can point out that this means there are about 300/5 = 60 Americans with matching DNA, so it’s going to take more evidence to bring that down to one man. With fingerprints, you don’t get any such acknowledgment of an error rate…

I think that although their analysis was primitive, those 19th Century forensic scientists adequately proved that the odds against two people having all ten fingers match in clear police-booking-style prints were astronomical, far greater than the number of humans that have ever lived. So, if the court hearing is about whether the guy hauled into court now is the same fellow that skipped bail ten years ago, and he was fingerprinted before bail was granted, and you’ve got a chain of evidence for those old prints, a full match is conclusive.

OTOH, what the cops are usually trying to match from a crime scene is a blurry and incomplete print of one finger, and there have been few experiments to show the error rate for such cases. Even ignoring the quality of the print, if the chances of a false match for 10 fingers are, say, 1/10^20, then I would expect the chance of a false match for one finger to be 1/100. That’s a pretty significant error rate, among other things implying that searching any sizeable database will produce many false matches. And yet, “experts” still testify that their matches are certain, and jurors believe them.
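markm’s estimate, worked through; the 1/10^20 figure is his hypothetical, not a measured rate:

```python
# If ten independent fingers jointly give a false-match probability of
# 10^-20, a single finger alone gives roughly the tenth root of that.
p_ten = 1e-20
p_one = p_ten ** (1 / 10)
print(p_one)              # 0.01, i.e. the 1-in-100 figure above

# And 1/100 per comparison makes database trawls dangerous:
print(1_000_000 * p_one)  # ~10,000 expected false hits searching a
                          # million-record database
```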

So, what is needed is a rating system for fingerprint quality in crime-scene prints, a large series of tests to show the error rate of the profession as a whole leading to proven error rates for the quality of the print, and (since there are subjective factors) finally a proficiency test standard that each examiner must periodically take and reveal in any reports and testimony.

I think the FBI’s database could be used for creating the tests. That is, you take sets of prints out of the database, reduce each set to one, two, or three fingers, and then smudge, blur, and obscure parts of the remaining prints. Have the examiner being tested then enter the print for a database search, but when the results come back with several possible matches, the test computer intercepts this and randomly picks just one set of prints. The examiner then manually compares and gives a yes-no-don’t know answer as to whether it matches. The computer checks against the record of where the test print came from and keeps score…

Then when a case comes to trial and the examiner testifies, the jury first hears both the profession’s overall average with prints of similar quality, and how the particular examiner compares to the average.
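A sketch of the blind-testing harness this proposes; the degradation step, data layout, and verdict labels are invented stand-ins:

```python
import random

def run_trial(database, degrade, examiner):
    """Degrade a known print, offer one random candidate, record the verdict."""
    true_id = random.choice(list(database))
    probe = degrade(database[true_id])               # smudged partial print
    candidate = random.choice(list(database))        # computer's blind pick
    verdict = examiner(probe, database[candidate])   # 'yes' / 'no' / 'unsure'
    return verdict, candidate == true_id

def error_rate(database, degrade, examiner, trials=100):
    """Fraction of decided trials the examiner got wrong: the jury's number."""
    errors = decided = 0
    for _ in range(trials):
        verdict, is_match = run_trial(database, degrade, examiner)
        if verdict == 'unsure':
            continue
        decided += 1
        errors += (verdict == 'yes') != is_match
    return errors / decided if decided else 0.0
```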

markm October 26, 2007 1:44 PM

Even better, for the individual ratings: periodically slip a test print into the stack of work awaiting the examiner, with nothing to mark this as different from the normal work. Alternate between two kinds of requests: search a given or all databases for this partial print (sometimes including a print that is NOT in the database), and compare this partial to this suspect’s filed prints. Keep score of false positives and false negatives for each type of test.

I’m hoping that the average examiner will be able to go into court with a lifetime record of zero false positives across a hundred test cases by the time he is certified for his job, plus at least twenty-five more test cases every year afterwards. In that case the jury will be informed of the false positive rate across the whole profession, which we know is nonzero, but which we hope is low enough that you have to average many examiners to measure it.
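Keeping the two error types separate, as proposed, might look like the following; the record format is invented:

```python
from collections import defaultdict

# Track false positives and false negatives separately per examiner,
# since the two kinds of error carry very different legal weight.
scores = defaultdict(lambda: {"fp": 0, "fn": 0, "trials": 0})

def record(examiner_id, verdict, truly_matches):
    s = scores[examiner_id]
    s["trials"] += 1
    if verdict == "yes" and not truly_matches:
        s["fp"] += 1   # identified an innocent person's print
    elif verdict == "no" and truly_matches:
        s["fn"] += 1   # missed a genuine match
```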

Jim September 22, 2008 3:18 PM

Roy stated on Oct 25, 07 that the prints are not actually compared… Couldn’t be further from the truth. Prints are manually compared, and after a positive match is found a second person confirms the identification.

There is no such thing as an error rate north of the border. As a fingerprint examiner in Canada, if I make ONE mistake I lose my job.

One problem is that there are too many unqualified examiners in the profession. If viewed by competent examiners, there is no better way of determining a person’s identity.

Gee… over one hundred years of collecting fingerprints, and no two prints have ever been found alike.
