Entries Tagged "false negatives"

Thermal Imaging as Security Theater

Seems like thermal imaging is the security theater technology of today.

These features are so tempting that thermal cameras are being installed at an increasing pace. They’re used in airports and other public transportation centers to screen travelers, increasingly used by companies to screen employees and by businesses to screen customers, and even used in health care facilities to screen patients. Despite their prevalence, thermal cameras have many fatal limitations when used to screen for the coronavirus.

  • They are not intended for medical purposes.
  • Their accuracy can be reduced by their distance from the people being inspected.
  • They are “an imprecise method for scanning crowds” now put into a context where precision is critical.
  • They will create false positives, leaving people stigmatized, harassed, unfairly quarantined, and denied rightful opportunities to work, travel, shop, or seek medical help.
  • They will create false negatives, which, perhaps most significantly for public health purposes, “could miss many of the up to one-quarter or more people infected with the virus who do not exhibit symptoms,” as the New York Times recently put it. Thus they will abjectly fail at the core task of slowing or preventing the further spread of the virus.

Posted on May 28, 2020 at 6:50 AM

Me on COVID-19 Contact Tracing Apps

I was quoted in BuzzFeed:

“My problem with contact tracing apps is that they have absolutely no value,” Bruce Schneier, a privacy expert and fellow at the Berkman Klein Center for Internet & Society at Harvard University, told BuzzFeed News. “I’m not even talking about the privacy concerns, I mean the efficacy. Does anybody think this will do something useful? … This is just something governments want to do for the hell of it. To me, it’s just techies doing techie things because they don’t know what else to do.”

I haven’t blogged about this because I thought it was obvious. But from the tweets and emails I have received, it seems not.

This is a classic identification problem, and efficacy depends on two things: false positives and false negatives.

  • False positives: Any app will have a precise definition of a contact: let’s say it’s less than six feet for more than ten minutes. The false positive rate is the percentage of contacts that don’t result in transmissions. This happens for several reasons. One, the app’s location and proximity systems—based on GPS and Bluetooth—just aren’t accurate enough, and will register contacts that never actually came that close. Two, the app won’t be aware of any extenuating circumstances, like walls or partitions. And three, not every contact results in transmission; the disease has some transmission rate that’s less than 100% (and I don’t know what that is).
  • False negatives: This is the rate at which the app fails to register a contact when an infection occurs. This, too, happens for several reasons. One, errors in the app’s location and proximity systems. Two, transmissions that occur from people who don’t have the app (even Singapore didn’t get above a 20% adoption rate for the app). And three, not every transmission is a result of that precisely defined contact—the virus sometimes travels further.

Assume you take the app out grocery shopping with you and it subsequently alerts you of a contact. What should you do? It’s not accurate enough for you to quarantine yourself for two weeks. And without ubiquitous, cheap, fast, and accurate testing, you can’t confirm the app’s diagnosis. So the alert is useless.

Similarly, assume you take the app out grocery shopping and it doesn’t alert you of any contact. Are you in the clear? No, you’re not. You actually have no idea if you’ve been infected.
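
To make the math concrete, here’s a minimal back-of-the-envelope sketch. Every number in it is an assumption invented for illustration: the transmission rate, the fraction of spurious registrations, the adoption rate, and the app’s detection rate are guesses, not measurements of any real app.

```python
# What a contact-tracing alert actually tells you, under made-up numbers.
# All four rates below are assumptions chosen for illustration only.

p_transmit = 0.10        # chance a genuinely close contact with an infected person transmits
f_spurious = 0.50        # fraction of registered "contacts" that weren't really close
                         #   (walls, partitions, GPS/Bluetooth error)
adoption = 0.20          # fraction of people running the app (roughly Singapore's rate)
p_register_true = 0.70   # chance the app registers a contact that really was close

# If you get an alert: how likely is it that you were actually infected?
p_infected_given_alert = (1 - f_spurious) * p_transmit

# If you get no alert: what share of genuinely risky contacts could the app
# have seen at all? The other person must run the app, and the app must
# have registered the contact.
coverage = adoption * p_register_true

print(f"P(infected | alert):            {p_infected_given_alert:.0%}")
print(f"Share of risky contacts seen:   {coverage:.0%}")
print(f"Share of risky contacts missed: {1 - coverage:.0%}")
```

With numbers anything like these, an alert implies maybe a one-in-twenty chance of infection, which is not nearly enough to justify a two-week quarantine, while the absence of an alert says almost nothing, because the app never sees the large majority of risky contacts.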

The end result is an app that doesn’t work. People will post their bad experiences on social media, and people will read those posts and realize that the app is not to be trusted. That loss of trust is even worse than having no app at all.

It has nothing to do with privacy concerns. The idea that contact tracing can be done with an app, and not human health professionals, is just plain dumb.

EDITED TO ADD: This Brookings essay makes much the same point.

EDITED TO ADD: This post has been translated into Spanish.

Posted on May 1, 2020 at 6:22 AM

Cardiac Biometric

MIT Technology Review is reporting about an infrared laser device that can identify people by their unique cardiac signature at a distance:

A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. “I don’t want to say you could do it from space,” says Steward Remaly, of the Pentagon’s Combatting Terrorism Technical Support Office, “but longer ranges should be possible.”

Contact infrared sensors are often used to automatically record a patient’s pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works through typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat).

[…]

Remaly’s team then developed algorithms capable of extracting a cardiac signature from the laser signals. He claims that Jetson can achieve over 95% accuracy under good conditions, and this might be further improved. In practice, it’s likely that Jetson would be used alongside facial recognition or other identification methods.

Wenyao Xu of the State University of New York at Buffalo has also developed a remote cardiac sensor, although it works only up to 20 meters away and uses radar. He believes the cardiac approach is far more robust than facial recognition. “Compared with face, cardiac biometrics are more stable and can reach more than 98% accuracy,” he says.

I have my usual questions about false positives vs false negatives, how stable the biometric is over time, and whether it works better or worse against particular sub-populations. But interesting nonetheless.
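
The accuracy figures quoted above lump the two error types together. As a rough illustration of why that matters, using error rates I’m inventing since the article doesn’t give them, here is what a symmetric “95% accurate” identifier does when you use it to pick a few targets out of a crowd:

```python
# Why a single "95% accuracy" figure says little about an identification system.
# Both error rates below are invented for illustration; the article reports
# only one combined accuracy number.

false_non_match_rate = 0.05   # a real target goes unrecognized (false negative)
false_match_rate = 0.05       # an innocent person is matched to a target (false positive)

targets_in_crowd = 5          # people actually being looked for (assumed)
crowd_size = 10_000           # everyone else who gets scanned (assumed)

expected_hits = targets_in_crowd * (1 - false_non_match_rate)
expected_false_alarms = (crowd_size - targets_in_crowd) * false_match_rate

precision = expected_hits / (expected_hits + expected_false_alarms)

print(f"Expected true identifications:   {expected_hits:.1f}")
print(f"Expected false alarms:           {expected_false_alarms:.0f}")
print(f"Chance a match is a real target: {precision:.1%}")
```

When targets are rare, even symmetric 5% error rates mean roughly a hundred false alarms for every true match, which is why the split between false positives and false negatives, and the population being scanned, matter far more than a headline accuracy number.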

Posted on July 8, 2019 at 12:38 PM

"Insider Threat" Detection Software

Notice this bit from an article on the arrest of Christopher Hasson:

It was only after Hasson’s arrest last Friday at his workplace that the chilling plans prosecutors assert he was crafting became apparent, detected by an internal Coast Guard program that watches for any “insider threat.”

The program identified suspicious computer activity tied to Hasson, prompting the agency’s investigative service to launch an investigation last fall, said Lt. Cmdr. Scott McBride, a service spokesman.

Any detection system of this kind is going to have to balance false positives with false negatives. Could it be something as simple as visiting right-wing extremist websites or watching their videos? It just has to be something more sophisticated than researching pressure cookers. I’m glad that Hasson was arrested before he killed anyone rather than after, but I worry that these systems are basically creating thoughtcrime.

Posted on February 27, 2019 at 6:22 AM

The Fallibility of DNA Evidence

This is a good summary article on the fallibility of DNA evidence. Most interesting to me are the parts on the proprietary algorithms used in DNA matching:

William Thompson points out that Perlin has declined to make public the algorithm that drives the program. “You do have a black-box situation happening here,” Thompson told me. “The data go in, and out comes the solution, and we’re not fully informed of what happened in between.”

Last year, at a murder trial in Pennsylvania where TrueAllele evidence had been introduced, defense attorneys demanded that Perlin turn over the source code for his software, noting that “without it, [the defendant] will be unable to determine if TrueAllele does what Dr. Perlin claims it does.” The judge denied the request.

[…]

When I interviewed Perlin at Cybergenetics headquarters, I raised the matter of transparency. He was visibly annoyed. He noted that he’d published detailed papers on the theory behind TrueAllele, and filed patent applications, too: “We have disclosed not the trade secrets of the source code or the engineering details, but the basic math.”

It’s the same problem as any biometric: we need to know the rates of both false positives and false negatives. And if these algorithms are being used to determine guilt, we have a right to examine them.

EDITED TO ADD (6/13): Three more articles.

Posted on May 31, 2016 at 1:04 PM

Malcolm Gladwell on Competing Security Models

In this essay/review of a book on UK intelligence officer and Soviet spy Kim Philby, Malcolm Gladwell makes this interesting observation:

Here we have two very different security models. The Philby-era model erred on the side of trust. I was asked about him, and I said I knew his people. The “cost” of the high-trust model was Burgess, Maclean, and Philby. To put it another way, the Philbyian secret service was prone to false-negative errors. Its mistake was to label as loyal people who were actually traitors.

The Wright model erred on the side of suspicion. The manufacture of raincoats is a well-known cover for Soviet intelligence operations. But that model also has a cost. If you start a security system with the aim of catching the likes of Burgess, Maclean, and Philby, you have a tendency to make false-positive errors: you label as suspicious people and events that are actually perfectly normal.

Posted on July 21, 2015 at 6:51 AM

Reassessing Airport Security

News that the Transportation Security Administration missed a whopping 95% of guns and bombs in recent airport security “red team” tests was justifiably shocking. It’s clear that we’re not getting value for the $7 billion we’re paying the TSA annually.

But there’s another conclusion, inescapable and disturbing to many, but good news all around: we don’t need $7 billion worth of airport security. These results demonstrate that there isn’t much risk of airplane terrorism, and we should ratchet security down to pre-9/11 levels.

We don’t need perfect airport security. We just need security that’s good enough to dissuade someone from building a plot around evading it. If you’re caught with a gun or a bomb, the TSA will detain you and call the FBI. Under those circumstances, even a medium chance of getting caught is enough to dissuade a sane terrorist. A 95% failure rate is too high, but a 20% one isn’t.
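
A rough sketch shows why a merely decent catch rate is enough. Assume (my assumption, for illustration) that each weapon or person in a plot faces an independent chance of being caught at the checkpoint:

```python
# Deterrence from imperfect screening: probability a plot is detected
# when several weapons/people must each get through independently.
# The independence assumption and the sample rates are illustrative.

def p_plot_detected(miss_rate: float, items: int) -> float:
    """Chance that at least one of `items` screenings catches something."""
    catch_rate = 1 - miss_rate
    return 1 - (1 - catch_rate) ** items

for miss_rate in (0.95, 0.50, 0.20):
    for items in (1, 3, 5):
        p = p_plot_detected(miss_rate, items)
        print(f"miss rate {miss_rate:.0%}, {items} item(s): "
              f"plot detected with probability {p:.2%}")
```

At a 20% miss rate, a single weapon is caught four times out of five and any multi-item plot is nearly certain to be detected; at a 95% miss rate, even a five-item plot usually sails through. That is the difference between screening that deters and screening that doesn’t.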

For those of us who have been watching the TSA, the 95% number wasn’t that much of a surprise. The TSA has been failing these sorts of tests since its inception: failures in 2003, a 91% failure rate at Newark Liberty International in 2006, a 75% failure rate at Los Angeles International in 2007, more failures in 2008. And those are just the public test results; I’m sure there are many more similarly damning reports the TSA has kept secret out of embarrassment.

Previous TSA excuses were that the results were isolated to a single airport, or not realistic simulations of terrorist behavior. That almost certainly wasn’t true then, but the TSA can’t even argue that now. The current test was conducted at many airports, and the testers didn’t use super-stealthy ninja-like weapon-hiding skills.

This is consistent with what we know anecdotally: the TSA misses a lot of weapons. Pretty much everyone I know has inadvertently carried a knife through airport security, and some people have told me about guns they mistakenly carried on airplanes. The TSA publishes statistics about how many guns it detects; last year, it was 2,212. This doesn’t mean the TSA missed 44,000 guns last year; a weapon that is mistakenly left in a carry-on bag is going to be easier to detect than a weapon deliberately hidden in the same bag. But we now know that it’s not hard to deliberately sneak a weapon through.
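
For what it’s worth, the 44,000 figure is just the naive extrapolation from the red-team result, spelled out below. It almost certainly overstates the real number of missed guns, for exactly the reason given above: a forgotten gun sitting in a bag is much easier to detect than a deliberately hidden test weapon.

```python
# The naive extrapolation behind the "44,000 guns" figure.
# It wrongly applies the red-team 95% miss rate to ordinary passengers'
# forgotten guns, so it is an illustration, not a real estimate.

guns_detected = 2212        # TSA's published detections for the year
red_team_miss_rate = 0.95   # fraction of test weapons that got through
detection_rate = 1 - red_team_miss_rate

implied_total = guns_detected / detection_rate
implied_missed = implied_total - guns_detected

print(f"Implied guns carried through checkpoints: {implied_total:,.0f}")
print(f"Implied guns missed:                      {implied_missed:,.0f}")
```

The arithmetic gives about 44,000 guns carried and 42,000 missed per year; the real figures are surely far lower.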

So why is the failure rate so high? The report doesn’t say, and I hope the TSA is going to conduct a thorough investigation as to the causes. My guess is that it’s a combination of things. Security screening is an incredibly boring job, and almost all alerts are false alarms. It’s very hard for people to remain vigilant in this sort of situation, and sloppiness is inevitable.

There are also technology failures. We know that current screening technologies are terrible at detecting the plastic explosive PETN—that’s what the underwear bomber had—and that a disassembled weapon has an excellent chance of getting through airport security. We know that some items allowed through airport security make excellent weapons.

The TSA is failing to defend us against the threat of terrorism. The only reason they’ve been able to get away with the scam for so long is that there isn’t much of a threat of terrorism to defend against.

Even with all these actual and potential failures, there have been no successful terrorist attacks against airplanes since 9/11. If there were lots of terrorists just waiting for us to let our guard down to destroy American planes, we would have seen attacks—attempted or successful—after all these years of screening failures. No one has hijacked a plane with a knife or a gun since 9/11. Not a single plane has blown up due to terrorism.

Terrorists are much rarer than we think, and launching a terrorist plot is much more difficult than we think. I understand this conclusion is counterintuitive, and contrary to the fearmongering we hear every day from our political leaders. But it’s what the data shows.

This isn’t to say that we can do away with airport security altogether. We need some security to dissuade the stupid or impulsive, but any more is a waste of money. The very rare smart terrorists are going to be able to bypass whatever we implement or choose an easier target. The more common stupid terrorists are going to be stopped by whatever measures we implement.

Smart terrorists are very rare, and we’re going to have to deal with them in two ways. One, we need vigilant passengers—that’s what protected us from both the shoe and the underwear bombers. And two, we’re going to need good intelligence and investigation—that’s how we caught the liquid bombers in their London apartments.

The real problem with airport security is that it’s only effective if the terrorists target airplanes. I generally am opposed to security measures that require us to correctly guess the terrorists’ tactics and targets. If we detect solids, the terrorists will use liquids. If we defend airports, they bomb movie theaters. It’s a lousy game to play, because we can’t win.

We should demand better results out of the TSA, but we should also recognize that the actual risk doesn’t justify their $7 billion budget. I’d rather see that money spent on intelligence and investigation—security that doesn’t require us to guess the next terrorist tactic and target, and works regardless of what the terrorists are planning next.

This essay previously appeared on CNN.com.

Posted on June 11, 2015 at 6:10 AM

Intelligence Analysis and the Connect-the-Dots Metaphor

The FBI and the CIA are being criticized for not keeping better track of Tamerlan Tsarnaev in the months before the Boston Marathon bombings. How could they have ignored such a dangerous person? How do we reform the intelligence community to ensure this kind of failure doesn’t happen again?

It’s an old song by now, one we heard after the 9/11 attacks in 2001 and after the Underwear Bomber’s failed attack in 2009. The problem is that connecting the dots is a bad metaphor, and focusing on it makes us more likely to implement useless reforms.

Connecting the dots in a coloring book is easy and fun. They’re right there on the page, and they’re all numbered. All you have to do is move your pencil from one dot to the next, and when you’re done, you’ve drawn a sailboat. Or a tiger. It’s so simple that 5-year-olds can do it.

But in real life, the dots can only be numbered after the fact. With the benefit of hindsight, it’s easy to draw lines from a Russian request for information to a foreign visit to some other piece of information that might have been collected.

In hindsight, we know who the bad guys are. Before the fact, there are an enormous number of potential bad guys.

How many? We don’t know. But we know that the no-fly list had 21,000 people on it last year. The Terrorist Identities Datamart Environment, also known as the watch list, has 700,000 names on it.

We have no idea how many potential “dots” the FBI, CIA, NSA and other agencies collect, but it’s easily in the millions. It’s easy to work backwards through the data and see all the obvious warning signs. But before a terrorist attack, when there are millions of dots—some important but the vast majority unimportant—uncovering plots is a lot harder.
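
Here’s the base-rate arithmetic that makes working forward so hard. The numbers are assumptions, especially the count of real plotters and the error rates, which I’m choosing to be wildly generous to the hypothetical detection system:

```python
# The base-rate problem behind "connecting the dots."
# All numbers are assumptions for illustration; the error rates are far
# better than any real-world behavioral screening could plausibly achieve.

people_examined = 700_000     # order of magnitude of the watch list mentioned above
actual_plotters = 10          # people genuinely planning an attack (assumed)

true_positive_rate = 0.99     # a real plotter is flagged 99% of the time
false_positive_rate = 0.01    # an innocent person is flagged 1% of the time

expected_true_alarms = actual_plotters * true_positive_rate
expected_false_alarms = (people_examined - actual_plotters) * false_positive_rate

precision = expected_true_alarms / (expected_true_alarms + expected_false_alarms)

print(f"Real plotters flagged:                {expected_true_alarms:.0f}")
print(f"Innocent people flagged:              {expected_false_alarms:,.0f}")
print(f"Chance a flagged person is a plotter: {precision:.2%}")
```

Even with an absurdly accurate system, investigators would have to chase roughly seven hundred innocent leads for every real one. Real systems are far less accurate than this, and real plotters may well be rarer, which is why piling on more data and more dots makes the problem worse rather than better.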

Rather than thinking of intelligence as a simple connect-the-dots picture, think of it as a million unnumbered pictures superimposed on top of each other. Or a random-dot stereogram. Is it a sailboat, a puppy, two guys with pressure-cooker bombs, or just an unintelligible mess of dots? You try to figure it out.

It’s not a matter of not enough data, either.

Piling more data onto the mix makes it harder, not easier. The best way to think of it is a needle-in-a-haystack problem; the last thing you want to do is increase the amount of hay you have to search through. The television show Person of Interest is fiction, not fact.

There’s a name for this sort of logical fallacy: hindsight bias. First explained by psychologists Daniel Kahneman and Amos Tversky, it’s surprisingly common. Since what actually happened is so obvious once it happens, we overestimate how obvious it was before it happened.

We actually misremember what we once thought, believing that we knew all along that what happened would happen. It’s a surprisingly strong tendency, one that has been observed in countless laboratory experiments and real-world examples of behavior. And it’s what all the post-Boston-Marathon bombing dot-connectors are doing.

Before we start blaming agencies for failing to stop the Boston bombers, and before we push “intelligence reforms” that will shred civil liberties without making us any safer, we need to stop seeing the past as a bunch of obvious dots that need connecting.

Kahneman, a Nobel Prize winner, wisely noted: “Actions that seemed prudent in foresight can look irresponsibly negligent in hindsight.” Kahneman calls it “the illusion of understanding,” explaining that the past is only so understandable because we cast it as simple, inevitable stories and leave out the rest.

Nassim Taleb, an expert on risk engineering, calls this tendency the “narrative fallacy.” We humans are natural storytellers, and the world of stories is much more tidy, predictable and coherent than the real world.

Millions of people behave strangely enough to warrant the FBI’s notice, and almost all of them are harmless. It is simply not possible to find every plot beforehand, especially when the perpetrators act alone and on impulse.

We have to accept that there always will be a risk of terrorism, and that when the occasional plot succeeds, it’s not necessarily because our law enforcement systems have failed.

This essay previously appeared on CNN.

EDITED TO ADD (5/7): The hindsight bias was actually first discovered by Baruch Fischhoff: “Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty,” Journal of Experimental Psychology: Human Perception and Performance, 1(3), 1975, pp. 288-299.

Posted on May 7, 2013 at 6:10 AM

The TSA Proves its Own Irrelevance

Have you wondered what $1.2 billion in airport security gets you? The TSA has compiled its own “Top 10 Good Catches of 2011”:

10) Snakes, turtles, and birds were found at Miami (MIA) and Los Angeles (LAX). I’m just happy there weren’t any lions, tigers, and bears…

[…]

3) Over 1,200 firearms were discovered at TSA checkpoints across the nation in 2011. Many guns are found loaded with rounds in the chamber. Most passengers simply state they forgot they had a gun in their bag.

2) A loaded .380 pistol was found strapped to passenger’s ankle with the body scanner at Detroit (DTW). You guessed it, he forgot it was there…

1) Small chunks of C4 explosives were found in passenger’s checked luggage in Yuma (YUM). Believe it or not, he was bringing it home to show his family.

That’s right; not a single terrorist on the list. Mostly forgetful, and entirely innocent, people. Note that they fail to point out that the firearms and knives would have been just as easily caught by pre-9/11 screening procedures. And that the C4—their #1 “good catch”—was on the return flight; they missed it the first time. So only 1 for 2 on that one.

And the TSA decided not to mention its stupidest confiscations:

  • TSA confiscates a butter knife from an airline pilot.
  • TSA confiscates a teenage girl’s purse with an embroidered handgun design.
  • TSA confiscates a 4-inch plastic rifle from a GI Joe action doll on the grounds that it’s a “replica weapon.”
  • TSA confiscates a liquid-filled baby rattle from an airline pilot’s infant daughter.
  • TSA confiscates a plastic “Star Wars” lightsaber from a toddler.

In related news, here’s a rebuttal of the Vanity Fair article about the TSA and airline security that featured me. I agree with the two points at the end of the post; I just don’t think it changes any of my analysis.

Posted on January 9, 2012 at 6:00 AM
