Entries Tagged "false positives"


Risks of Evidentiary Software

Over at Lawfare, Susan Landau has an excellent essay on the risks posed by software used to collect evidence (a Breathalyzer is probably the most obvious example).

Bugs and vulnerabilities can lead to inaccurate evidence, but the proprietary nature of software makes it hard for defendants to examine it.

The software engineers proposed a three-part test. First, the court should have access to the “Known Error Log,” which should be part of any professionally developed software project. Next, the court should consider whether the evidence being presented could be materially affected by a software error. Ladkin and his co-authors noted that a chain of emails back and forth is unlikely to have such an error, but the time that a software tool logs when an application was used could easily be incorrect. Finally, the reliability experts recommended seeing whether the code adheres to an industry standard used in a non-computerized version of the task (e.g., bookkeepers always record every transaction, and thus so should bookkeeping software).

[…]

Inanimate objects have long served as evidence in courts of law: the door handle with a fingerprint, the glove found at a murder scene, the Breathalyzer result that shows a blood alcohol level three times the legal limit. But the last of those examples is substantively different from the other two. Data from a Breathalyzer is not the physical entity itself, but rather a software calculation of the level of alcohol in the breath of a potentially drunk driver. As long as the breath sample has been preserved, one can always go back and retest it on a different device.

What happens if the software makes an error and there is no sample to check or if the software itself produces the evidence? At the time of our writing the article on the use of software as evidence, there was no overriding requirement that law enforcement provide a defendant with the code so that they might examine it themselves.

[…]

Given the high rate of bugs in complex software systems, my colleagues and I concluded that when computer programs produce the evidence, courts cannot assume that the evidentiary software is reliable. Instead the prosecution must make the code available for an “adversarial audit” by the defendant’s experts. And to avoid problems in which the government doesn’t have the code, government procurement contracts must include delivery of source code—code that is more-or-less readable by people—for every version of the code or device.

Posted on June 29, 2021 at 9:12 AM

Thermal Imaging as Security Theater

Seems like thermal imaging is the security theater technology of today.

These features are so tempting that thermal cameras are being installed at an increasing pace. They’re used in airports and other public transportation centers to screen travelers, increasingly used by companies to screen employees and by businesses to screen customers, and even used in health care facilities to screen patients. Despite their prevalence, thermal cameras have many fatal limitations when used to screen for the coronavirus.

  • They are not intended for medical purposes.
  • Their accuracy can be reduced by their distance from the people being inspected.
  • They are “an imprecise method for scanning crowds” now put into a context where precision is critical.
  • They will create false positives, leaving people stigmatized, harassed, unfairly quarantined, and denied rightful opportunities to work, travel, shop, or seek medical help.
  • They will create false negatives, which, perhaps most significantly for public health purposes, “could miss many of the up to one-quarter or more people infected with the virus who do not exhibit symptoms,” as the New York Times recently put it. Thus they will abjectly fail at the core task of slowing or preventing the further spread of the virus.

Posted on May 28, 2020 at 6:50 AM

Me on COVID-19 Contact Tracing Apps

I was quoted in BuzzFeed:

“My problem with contact tracing apps is that they have absolutely no value,” Bruce Schneier, a privacy expert and fellow at the Berkman Klein Center for Internet & Society at Harvard University, told BuzzFeed News. “I’m not even talking about the privacy concerns, I mean the efficacy. Does anybody think this will do something useful? … This is just something governments want to do for the hell of it. To me, it’s just techies doing techie things because they don’t know what else to do.”

I haven’t blogged about this because I thought it was obvious. But from the tweets and emails I have received, it seems not.

This is a classic identification problem, and efficacy depends on two things: false positives and false negatives.

  • False positives: Any app will have a precise definition of a contact: let’s say it’s less than six feet for more than ten minutes. The false positive rate is the percentage of flagged contacts that don’t result in transmission. This will happen for several reasons. One, the app’s location and proximity systems—based on GPS and Bluetooth—just aren’t accurate enough to reliably measure that distance. Two, the app won’t be aware of any extenuating circumstances, like walls or partitions. And three, not every contact results in transmission; the disease has some transmission rate that’s less than 100% (and I don’t know what that is).
  • False negatives: This is the rate at which the app fails to register a contact when a transmission actually occurs. This, too, will happen for several reasons. One, errors in the app’s location and proximity systems. Two, transmissions from people who don’t have the app (even Singapore didn’t get above a 20% adoption rate for its app). And three, not every transmission is a result of that precisely defined contact—the virus sometimes travels further.

Assume you take the app out grocery shopping with you and it subsequently alerts you of a contact. What should you do? It’s not accurate enough for you to quarantine yourself for two weeks. And without ubiquitous, cheap, fast, and accurate testing, you can’t confirm the app’s diagnosis. So the alert is useless.
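
To make that concrete, here’s a back-of-the-envelope Bayes calculation, written as a short Python sketch. Every number in it is invented for illustration; the point is that when actual prevalence is low, even a reasonably accurate app produces mostly false alarms.

    # All numbers below are made-up assumptions, for illustration only.
    prevalence = 0.005          # fraction of contacts that are actually risky
    p_alert_if_risky = 0.8      # chance the app flags a genuinely risky contact
    p_alert_if_harmless = 0.05  # chance it flags a harmless encounter

    # Bayes: probability that a given alert reflects a real exposure.
    p_real = (p_alert_if_risky * prevalence) / (
        p_alert_if_risky * prevalence
        + p_alert_if_harmless * (1 - prevalence)
    )
    print(f"P(real exposure | alert) = {p_real:.1%}")  # about 7.4%

Under these assumptions, more than nine out of ten alerts are false alarms, which is why a single alert can’t justify a two-week quarantine.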

Similarly, assume you take the app out grocery shopping and it doesn’t alert you of any contact. Are you in the clear? No, you’re not. You actually have no idea if you’ve been infected.

The end result is an app that doesn’t work. People will post their bad experiences on social media, and others will read those posts and realize that the app is not to be trusted. That loss of trust is even worse than having no app at all.

It has nothing to do with privacy concerns. The idea that contact tracing can be done with an app, and not human health professionals, is just plain dumb.

EDITED TO ADD: This Brookings essay makes much the same point.

EDITED TO ADD: This post has been translated into Spanish.

Posted on May 1, 2020 at 6:22 AM

USB Cable Kill Switch for Laptops

BusKill is designed to wipe your laptop (Linux only) if it is snatched from you in a public place:

The idea is to connect the BusKill cable to your Linux laptop on one end, and to your belt, on the other end. When someone yanks your laptop from your lap or table, the USB cable disconnects from the laptop and triggers a udev script [1, 2, 3] that executes a series of preset operations.

These can be something as simple as activating your screensaver or shutting down your device (forcing the thief to bypass your laptop’s authentication mechanism before accessing any data), but the script can also be configured to wipe the device or delete certain folders (to prevent thieves from retrieving any sensitive data or accessing secure business backends).
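
For the curious, here’s what that trigger logic might look like in Python, using the pyudev library. This is a minimal sketch of the general approach, not BusKill’s actual code, and the device serial string is a hypothetical placeholder.

    import subprocess
    import pyudev

    TRIGGER_SERIAL = "Example_Vendor_Cable_12345"  # hypothetical device ID

    context = pyudev.Context()
    monitor = pyudev.Monitor.from_netlink(context)
    monitor.filter_by(subsystem="usb")

    # Block forever, reacting to USB events as they arrive.
    for device in iter(monitor.poll, None):
        # Fire only when *our* cable is removed, not just any USB device.
        if device.action == "remove" and device.get("ID_SERIAL") == TRIGGER_SERIAL:
            # Lock the session; a more aggressive script could shut down
            # or start wiping data here instead.
            subprocess.run(["loginctl", "lock-session"], check=False)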

Clever idea, but I—and my guess is most people—would be much more likely to stand up from the table, forget that the cable was attached, and yank it out. My problem with pretty much all systems like this is the likelihood of false alarms.

Slashdot article.

EDITED TO ADD (1/14): There are Bluetooth devices that will automatically encrypt a laptop when the device isn’t in proximity. That’s a much better interface than a cable.

Posted on January 7, 2020 at 6:03 AM

Cardiac Biometric

MIT Technology Review is reporting about an infrared laser device that can identify people by their unique cardiac signature at a distance:

A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. “I don’t want to say you could do it from space,” says Steward Remaly, of the Pentagon’s Combatting Terrorism Technical Support Office, “but longer ranges should be possible.”

Contact infrared sensors are often used to automatically record a patient’s pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works through typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat).

[…]

Remaly’s team then developed algorithms capable of extracting a cardiac signature from the laser signals. He claims that Jetson can achieve over 95% accuracy under good conditions, and this might be further improved. In practice, it’s likely that Jetson would be used alongside facial recognition or other identification methods.

Wenyao Xu of the State University of New York at Buffalo has also developed a remote cardiac sensor, although it works only up to 20 meters away and uses radar. He believes the cardiac approach is far more robust than facial recognition. “Compared with face, cardiac biometrics are more stable and can reach more than 98% accuracy,” he says.

I have my usual questions about false positives vs false negatives, how stable the biometric is over time, and whether it works better or worse against particular sub-populations. But interesting nonetheless.
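
On the false positive question in particular, a toy calculation shows why a headline accuracy number means little at scale. The figures below are invented assumptions, not measured performance of Jetson:

    # Rough illustration of why "95% accuracy" isn't enough at scale.
    false_match_rate = 0.05  # a generous reading of "over 95% accuracy"
    scans_per_day = 10_000   # hypothetical checkpoint traffic
    true_targets = 1         # watchlisted people who actually walk through

    false_alarms = false_match_rate * scans_per_day
    print(f"Expected false alarms per day: {false_alarms:.0f}")  # 500
    print(f"Genuine matches per day, at best: {true_targets}")

At those rates, an operator would see hundreds of false alarms for every genuine hit, and would quickly learn to ignore the system.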

Posted on July 8, 2019 at 12:38 PM

"Insider Threat" Detection Software

Notice this bit from an article on the arrest of Christopher Hasson:

It was only after Hasson’s arrest last Friday at his workplace that the chilling plans prosecutors assert he was crafting became apparent, detected by an internal Coast Guard program that watches for any “insider threat.”

The program identified suspicious computer activity tied to Hasson, prompting the agency’s investigative service to launch an investigation last fall, said Lt. Cmdr. Scott McBride, a service spokesman.

Any detection system of this kind is going to have to balance false positives with false negatives. Could it be something as simple as visiting right-wing extremist websites or watching their videos? It just has to be something more sophisticated than researching pressure cookers. I’m glad that Hasson was arrested before he killed anyone rather than after, but I worry that these systems are basically creating thoughtcrime.
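
As for the underlying statistics, here’s a toy simulation of the false positive/false negative balance, with entirely made-up score distributions. Wherever the alert threshold is set, the system either flags large numbers of innocent employees or misses actual threats:

    import random

    random.seed(0)

    # Hypothetical "suspicion scores": innocents cluster low, real threats
    # cluster high, but the two distributions overlap. That overlap is
    # exactly why the false positive/false negative tradeoff exists.
    innocents = [random.gauss(0.3, 0.15) for _ in range(10_000)]
    threats = [random.gauss(0.7, 0.15) for _ in range(10)]

    for threshold in (0.4, 0.5, 0.6, 0.7):
        false_positives = sum(s >= threshold for s in innocents)
        false_negatives = sum(s < threshold for s in threats)
        print(f"threshold {threshold}: {false_positives} innocents flagged, "
              f"{false_negatives} of {len(threats)} real threats missed")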

Posted on February 27, 2019 at 6:22 AM

Terrorist False Alarm at JFK Airport Demonstrates How Unprepared We Really Are

The detailed accounts of the terrorist-shooter false alarm at Kennedy Airport in New York last week illustrate how completely and totally unprepared the airport authorities are for a real event of this kind.

I have two reactions to this. On the one hand, this is a movie-plot threat—the sort of overly specific terrorist scenario that doesn’t make sense to defend against. On the other hand, police around the world need training in these types of scenarios in general. Panic can easily cause more deaths than terrorists themselves, and we need to think about what responsibilities police and other security guards have in these situations.

Posted on August 19, 2016 at 2:23 PM

The Fallibility of DNA Evidence

This is a good summary article on the fallibility of DNA evidence. Most interesting to me are the parts on the proprietary algorithms used in DNA matching:

William Thompson points out that Perlin has declined to make public the algorithm that drives the program. “You do have a black-box situation happening here,” Thompson told me. “The data go in, and out comes the solution, and we’re not fully informed of what happened in between.”

Last year, at a murder trial in Pennsylvania where TrueAllele evidence had been introduced, defense attorneys demanded that Perlin turn over the source code for his software, noting that “without it, [the defendant] will be unable to determine if TrueAllele does what Dr. Perlin claims it does.” The judge denied the request.

[…]

When I interviewed Perlin at Cybergenetics headquarters, I raised the matter of transparency. He was visibly annoyed. He noted that he’d published detailed papers on the theory behind TrueAllele, and filed patent applications, too: “We have disclosed not the trade secrets of the source code or the engineering details, but the basic math.”

It’s the same problem as any biometric: we need to know the rates of both false positives and false negatives. And if these algorithms are being used to determine guilt, we have a right to examine them.

EDITED TO ADD (6/13): Three more articles.

Posted on May 31, 2016 at 1:04 PM

