Entries Tagged "base rate"


Me on COVID-19 Contact Tracing Apps

I was quoted in BuzzFeed:

“My problem with contact tracing apps is that they have absolutely no value,” Bruce Schneier, a privacy expert and fellow at the Berkman Klein Center for Internet & Society at Harvard University, told BuzzFeed News. “I’m not even talking about the privacy concerns, I mean the efficacy. Does anybody think this will do something useful? … This is just something governments want to do for the hell of it. To me, it’s just techies doing techie things because they don’t know what else to do.”

I haven’t blogged about this because I thought it was obvious. But from the tweets and emails I have received, it seems not.

This is a classic identification problem, and efficacy depends on two things: false positives and false negatives.

  • False positives: Any app will have a precise definition of a contact: let’s say it’s less than six feet for more than ten minutes. The false positive rate is the percentage of registered contacts that don’t result in transmission. This happens for several reasons. One, the app’s location and proximity systems, based on GPS and Bluetooth, aren’t accurate enough to distinguish a true close contact from a near miss, so they will register contacts that never really happened. Two, the app won’t be aware of any extenuating circumstances, like walls or partitions. And three, not every contact results in transmission; the disease has some transmission rate that’s less than 100% (and I don’t know what that is).
  • False negatives: This is the rate at which the app fails to register a contact when a transmission occurs. This, too, happens for several reasons. One, errors in the app’s location and proximity systems. Two, transmissions from people who don’t have the app (even Singapore didn’t get above a 20% adoption rate). And three, not every transmission results from that precisely defined contact; the virus sometimes travels further.

Assume you take the app out grocery shopping with you and it subsequently alerts you of a contact. What should you do? It’s not accurate enough for you to quarantine yourself for two weeks. And without ubiquitous, cheap, fast, and accurate testing, you can’t confirm the app’s diagnosis. So the alert is useless.

Similarly, assume you take the app out grocery shopping and it doesn’t alert you of any contact. Are you in the clear? No, you’re not. You actually have no idea if you’ve been infected.
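To make the failure modes concrete, here is a back-of-the-envelope sketch in Python. Except for the 20% adoption figure mentioned above, every number in it is an assumption chosen purely for illustration:

```python
# Rough sketch of both failure modes. All rates below are illustrative
# assumptions, not measurements.

adoption = 0.20      # app adoption rate; even Singapore didn't get above this
p_transmit = 0.05    # assumed chance that a "six feet, ten minutes" contact transmits
p_spurious = 0.30    # assumed fraction of registered contacts that weren't real
                     # close contacts (GPS/Bluetooth error, walls, partitions)

# False-positive side: how many alerts correspond to an actual transmission?
p_alert_is_transmission = (1 - p_spurious) * p_transmit
print(f"Alerts that reflect a real transmission: {p_alert_is_transmission:.1%}")

# False-negative side: how many transmission events can the app even see?
# Both parties have to be running it.
p_both_have_app = adoption ** 2
print(f"Transmission events the app could possibly register: {p_both_have_app:.1%}")
```

With numbers anywhere in this neighborhood, an alert usually doesn’t correspond to a transmission, and the absence of an alert tells you almost nothing.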

The end result is an app that doesn’t work. People will post their bad experiences on social media, and people will read those posts and realize that the app is not to be trusted. That loss of trust is even worse than having no app at all.

It has nothing to do with privacy concerns. The idea that contact tracing can be done with an app, and not human health professionals, is just plain dumb.

EDITED TO ADD: This Brookings essay makes much the same point.

EDITED TO ADD: This post has been translated into Spanish.

Posted on May 1, 2020 at 6:22 AM

Worst-Case Thinking Breeds Fear and Irrationality

Here’s a crazy story from the UK. Basically, someone sees a man and a little girl leaving a shopping center. Instead of thinking “it must be a father and daughter, which happens millions of times a day and is perfectly normal,” he thinks “this is obviously a case of child abduction and I must alert the authorities immediately.” And the police, instead of thinking “why in the world would this be a kidnapping and not a normal parental activity,” think “oh my god, we must all panic immediately.” And they do, scrambling helicopters, searching cars leaving the shopping center, and going door-to-door looking for clues. Seven hours later, the police finally realized that the girl was safe asleep in bed.

Lenore Skenazy writes further:

Can we agree that something is wrong when we leap to the worst possible conclusion upon seeing something that is actually nice? In an email Furedi added that now, “Some fathers told me that they think and look around before they kiss their kids in public. Society is all too ready to interpret the most innocent of gestures as a prelude to abusing a child.”

So our job is to try to push the re-set button.

If you see an adult with a child in plain daylight, it is not irresponsible to assume they are caregiver and child. Remember the stat from David Finkelhor, head of the Crimes Against Children Research Center at the University of New Hampshire. He has heard of NO CASE of a child kidnapped from its parents in public and sold into sex trafficking.

We are wired to see “Taken” when we’re actually witnessing something far less exciting called Everyday Life. Let’s tune in to reality.

This is the problem with the “see something, say something” mentality. As I wrote back in 2007:

If you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.

And the police need to understand the base-rate fallacy better.

Posted on November 18, 2018 at 1:12 PM

Finding Sociopaths on Facebook

On his blog, Scott Adams suggests that it might be possible to identify sociopaths based on their interactions on social media.

My hypothesis is that science will someday be able to identify sociopaths and terrorists by their patterns of Facebook and Internet use. I’ll bet normal people interact with Facebook in ways that sociopaths and terrorists couldn’t duplicate.

Anyone can post fake photos and acquire lots of friends who are actually acquaintances. But I’ll bet there are so many patterns and tendencies of “normal” use on Facebook that a terrorist wouldn’t be able to successfully fake it.

Okay, but so what? Imagine you had such an amazingly accurate test…then what? Do we investigate those who test positive, even though there’s no suspicion that they’ve actually done anything? Do we follow them around? Subject them to additional screening at airports? Throw them in jail because we know the streets will be safer because of it? Do we want to live in a Minority Report world?

The problem isn’t just that such a system is wrong; it’s that the mathematics of testing makes this sort of thing pretty ineffective in practice. It’s called the “base rate fallacy.” Suppose you have a test that’s 90% accurate in identifying both sociopaths and non-sociopaths. If you assume that 4% of people are sociopaths, then the chance of someone who tests positive actually being a sociopath is only about 27%. (For every thousand people tested, 90% of the 40 sociopaths will test positive, but so will 10% of the 960 non-sociopaths: 36 true positives against 96 false positives.) You have to postulate a test with an amazing 99% accuracy (only a 1% false positive rate) even to have an 80% chance of someone testing positive actually being a sociopath.
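Here is that arithmetic as a short Python sketch, using the numbers from the paragraph above:

```python
def p_sociopath_given_positive(base_rate, accuracy):
    """P(sociopath | positive test), treating 'accuracy' as both the
    true-positive and true-negative rate, as in the text above."""
    true_positives = accuracy * base_rate
    false_positives = (1 - accuracy) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

print(p_sociopath_given_positive(0.04, 0.90))  # ~0.27
print(p_sociopath_given_positive(0.04, 0.99))  # ~0.80
```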

This fallacy isn’t new. It’s the same thinking that caused us to intern Japanese-Americans during World War II, stop people in their cars because they’re black, and frisk them at airports because they’re Muslim. It’s the same thinking behind massive NSA surveillance programs like PRISM. It’s one of the things that scares me about police DNA databases.

Many authors have written stories about thoughtcrime. Who has written about genecrime?

BTW, if you want to meet an actual sociopath, I recommend this book (review here) and this blog.

Posted on June 19, 2013 at 11:19 AM

The Trouble with Airport Profiling

Why do otherwise rational people think it’s a good idea to profile people at airports? Recently, neuroscientist and best-selling author Sam Harris related a story of an elderly couple being given the twice-over by the TSA, pointed out how these two were obviously not a threat, and recommended that the TSA focus on the actual threat: “Muslims, or anyone who looks like he or she could conceivably be Muslim.”

This is a bad idea. It doesn’t make us any safer—and it actually puts us all at risk.

The right way to look at security is in terms of cost-benefit trade-offs. If adding profiling to airport checkpoints allowed us to detect more threats at a lower cost, then we should implement it. If it didn’t, we’d be foolish to do so. Sometimes profiling works. Consider a sheep in a meadow, happily munching on grass. When he spies a wolf, he’s going to judge that individual wolf based on a bunch of assumptions related to the past behavior of its species. In short, that sheep is going to profile…and then run away. This makes perfect sense, and is why evolution produced sheep—and other animals—that react this way. But this sort of profiling doesn’t work with humans at airports, for several reasons.

First, in the sheep’s case the profile is accurate, in that all wolves are out to eat sheep. Maybe a particular wolf isn’t hungry at the moment, but enough wolves are hungry enough of the time to justify the occasional false alarm. However, it isn’t true that almost all Muslims are out to blow up airplanes. In fact, almost none of them are. Post 9/11, we’ve had two Muslim terrorists on U.S. airplanes: the shoe bomber and the underwear bomber. If you assume 0.8% (that’s one estimate of the percentage of Muslim Americans) of the 630 million annual airplane fliers are Muslim and triple it to account for others who look Semitic, then the chance that any profiled flier will be a Muslim terrorist is about 1 in 80 million. Add the 19 9/11 terrorists—arguably a singular event—and that number drops to 1 in 8 million. Either way, because the number of actual terrorists is so low, almost everyone selected by the profile will be innocent. This is called the “base rate fallacy,” and it dooms any type of broad terrorist profiling, including the TSA’s behavioral profiling.
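For anyone who wants to check that arithmetic, here is a rough sketch. The ten-year window is my assumption (the essay says only “post 9/11”), and all of these figures are order-of-magnitude estimates:

```python
# Order-of-magnitude reproduction of the odds above. The ten-year window
# is an assumption; the essay only gives the annual flier count.

fliers_per_year = 630_000_000
profiled_fraction = 0.008 * 3    # 0.8% Muslim-American estimate, tripled
years = 10                       # assumed post-9/11 window
terrorists = 2                   # shoe bomber and underwear bomber

profiled_fliers = fliers_per_year * profiled_fraction * years
print(f"1 in {profiled_fliers / terrorists:,.0f}")    # roughly 1 in 75 million

with_911 = terrorists + 19
print(f"1 in {profiled_fliers / with_911:,.0f}")      # roughly 1 in 7 million
```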

Second, sheep can safely ignore animals that don’t look like the few predators they know. On the other hand, to assume that only Arab-appearing people are terrorists is dangerously naive. Muslims are black, white, Asian, and everything else—most Muslims are not Arab. Recent terrorists have been European, Asian, African, Hispanic, and Middle Eastern; male and female; young and old. Underwear bomber Umar Farouk Abdul Mutallab was Nigerian. Shoe bomber Richard Reid was British with a Jamaican father. One of the London subway bombers, Germaine Lindsay, was Afro-Caribbean. Dirty bomb suspect Jose Padilla was Hispanic-American. The 2002 Bali terrorists were Indonesian. Both Timothy McVeigh and the Unabomber were white Americans. The Chechen terrorists who blew up two Russian planes in 2004 were female. Focusing on a profile increases the risk that TSA agents will miss those who don’t match it.

Third, wolves can’t deliberately try to evade the profile. A wolf in sheep’s clothing is just a story, but humans are smart and adaptable enough to put the concept into practice. Once the TSA establishes a profile, terrorists will take steps to avoid it. The Chechens deliberately chose female suicide bombers because Russian security was less thorough with women. Al Qaeda has tried to recruit non-Muslims. And terrorists have given bombs to innocent—and innocent-looking—travelers. Randomized secondary screening is more effective, especially since the goal isn’t to catch every plot but to create enough uncertainty that terrorists don’t even try.

And fourth, sheep don’t care if they offend innocent wolves; the two species are never going to be friends. At airports, though, there is an enormous social and political cost to the millions of false alarms. Beyond the societal harms of deliberately harassing a minority group, singling out Muslims alienates the very people who are in the best position to discover and alert authorities about Muslim plots before the terrorists even get to the airport. This alone is reason enough not to profile.

I too am incensed—but not surprised—when the TSA singles out four-year-old girls, children with cerebral palsy, pretty women, the elderly, and wheelchair users for humiliation, abuse, and sometimes theft. Any bureaucracy that processes 630 million people per year will generate stories like this. When people propose profiling, they are really asking for a security system that can apply judgment. Unfortunately, that’s really hard. Rules are easier to explain and train. Zero tolerance is easier to justify and defend. Judgment requires better-educated, more expert, and much-higher-paid screeners. And the personal career risks to a TSA agent of being wrong when exercising judgment far outweigh any benefits from being sensible.

The proper reaction to screening horror stories isn’t to subject only “those people” to it; it’s to subject no one to it. (Can anyone even explain what hypothetical terrorist plot could successfully evade normal security, but would be discovered during secondary screening?) Invasive TSA screening is nothing more than security theater. It doesn’t make us safer, and it’s not worth the cost. Even more strongly, security isn’t our society’s only value. Do we really want the full power of government to act out our stereotypes and prejudices? Have we Americans ever done something like this and not been ashamed later? This is what we have a Constitution for: to help us live up to our values and not down to our fears.

This essay previously appeared on Forbes.com and Sam Harris’s blog.

Posted on May 14, 2012 at 6:19 AM

Criminal Intent Prescreening and the Base Rate Fallacy

I’ve often written about the base rate fallacy and how it makes tests for rare events—like airplane terrorists—useless because the false positives vastly outnumber the real positives. This essay uses that argument to demonstrate why the TSA’s FAST program is useless:

First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, it turns out that this is an incredibly difficult thing to do. Here is why: let’s assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are actually probably much, much more rare, or we would have a whole lot more acts of terrorism, given the daily throughput of the global transportation system. Now let’s imagine the FAST algorithm correctly classifies 99.99 percent of observations—an incredibly high rate of accuracy for any big data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse 99 people of being terrorists for every one terrorist it finds. Given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty might be a non-trivial and invasive task.

Of course FAST has nowhere near a 99.99 percent accuracy rate. I imagine much of the work being done here is classified, but a writeup in Nature reported that the first round of field tests had a 70 percent accuracy rate. From the available material it is difficult to determine exactly what this number means. There are a couple of ways to interpret this, since both the write-up and the DHS documentation (all pdfs) are unclear. This might mean that the current iteration of FAST correctly classifies 70 percent of people it observes—which would produce false positives at an abysmal rate, given the rarity of terrorists in the population. The other way of interpreting this reported result is that FAST will call a terrorist a terrorist 70 percent of the time. This second option tells us nothing about the rate of false positives, but it would likely be quite high. In either case, it is likely that the false-positive paradox would be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
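As a sanity check on the quoted numbers, here is the calculation for a 1-in-1,000,000 base rate and 99.99 percent classification accuracy; it comes out to roughly 100 false accusations for every real terrorist found, in line with the essay’s 99:

```python
# Check of the quoted false-positive paradox arithmetic.

population = 1_000_000
terrorists = 1                    # the assumed 1-in-1,000,000 base rate
accuracy = 0.9999                 # fraction of people classified correctly

true_positives = terrorists * accuracy
false_positives = (population - terrorists) * (1 - accuracy)

print(f"{false_positives / true_positives:.0f} false accusations per terrorist found")
```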

It’s that final sentence in the first quoted paragraph that really points to how bad this idea is. If FAST determines you are guilty of a crime you have not yet committed, how do you exonerate yourself?

Posted on May 3, 2012 at 6:22 AM

More Brain Scans to Detect Future Terrorists

Worked well in a test:

For the first time, the Northwestern researchers used the P300 testing in a mock terrorism scenario in which the subjects are planning, rather than perpetrating, a crime. The P300 brain waves were measured by electrodes attached to the scalp of the make-believe “persons of interest” in the lab.

The most intriguing part of the study in terms of real-world implications, Rosenfeld said, is that even when the researchers had no advance details about mock terrorism plans, the technology was still accurate in identifying critical concealed information.

“Without any prior knowledge of the planned crime in our mock terrorism scenarios, we were able to identify 10 out of 12 terrorists and, among them, 20 out of 30 crime-related details,” Rosenfeld said. “The test was 83 percent accurate in predicting concealed knowledge, suggesting that our complex protocol could identify future terrorist activity.”

Rosenfeld is a leading scholar in the study of P300 testing to reveal concealed information. Basically, electrodes are attached to the scalp to record P300 brain activity—or brief electrical patterns in the cortex—that occur, according to the research, when meaningful information is presented to a person with “guilty knowledge.”

More news stories.

The base rate of terrorism makes this test useless, but the technology will only get better.

Posted on August 6, 2010 at 5:36 AM

Fever Screening at Airports

I’ve seen the IR screening guns at several airports, primarily in Asia. The idea is to keep out people with Bird Flu, or whatever the current fever scare is. This essay explains why it won’t work:

The bottom line is that this kind of remote fever sensing had poor positive predictive value, meaning that the proportion of people flagged as having fever who actually had fever was low, ranging from 10% to 16%. Thus there were a lot of false positives. Negative predictive value, the proportion of people classified by the IR device as not having fever who in fact did not have fever, was high (97% to 99%), so not many people with fevers will be missed with the IR device. Predictive values depend not only on the accuracy of the device but also on how prevalent fever is in the screened population. In the early days of a pandemic, fever prevalence will be very low, leading to low positive predictive value. The false positives produced at airport security would make the days of only taking off your shoes look good.

The idea of airport fever screening to keep a pandemic out has a lot of psychological appeal. Unfortunately its benefits are also only psychological: pandemic preparedness theater. There’s no magic bullet for warding off a pandemic. The best way to prepare for a pandemic or any other health threat is to have a robust and resilient public health infrastructure.
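To see how prevalence drives positive predictive value, here is an illustrative sketch. The sensitivity and specificity figures are assumptions chosen for illustration, not the study’s numbers:

```python
# Positive predictive value as a function of fever prevalence in the
# screened population. Sensitivity and specificity are illustrative.

sensitivity = 0.80
specificity = 0.95

for prevalence in (0.10, 0.01, 0.001):
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    )
    print(f"fever prevalence {prevalence:.1%} -> positive predictive value {ppv:.1%}")
```

The same device that looks respectable when a tenth of travelers are febrile becomes nearly useless when almost no one is.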

Lots more science in the essay.

Posted on June 26, 2008 at 6:58 AM

Terrorists, Data Mining, and the Base Rate Fallacy

I have already explained why NSA-style wholesale surveillance data-mining systems are useless for finding terrorists. Here’s a more formal explanation:

Floyd Rudmin, a professor at a Norwegian university, applies the mathematics of conditional probability, known as Bayes’ Theorem, to demonstrate that the NSA’s surveillance cannot successfully detect terrorists unless both the percentage of terrorists in the population and the accuracy rate of their identification are far higher than they are. He correctly concludes that “NSA’s surveillance system is useless for finding terrorists.”

The surveillance is, however, useful for monitoring political opposition and stymieing the activities of those who do not believe the government’s propaganda.

And here’s the analysis:

What is the probability that people are terrorists given that NSA’s mass surveillance identifies them as terrorists? If the probability is zero (p=0.00), then they certainly are not terrorists, and NSA was wasting resources and damaging the lives of innocent citizens. If the probability is one (p=1.00), then they definitely are terrorists, and NSA has saved the day. If the probability is fifty-fifty (p=0.50), that is the same as guessing the flip of a coin. The conditional probability that people are terrorists given that the NSA surveillance system says they are, that had better be very near to one (p=1.00) and very far from zero (p=0.00).

The mathematics of conditional probability were figured out by the English mathematician Thomas Bayes. If you Google “Bayes’ Theorem”, you will get more than a million hits. Bayes’ Theorem is taught in all elementary statistics classes. Everyone at NSA certainly knows Bayes’ Theorem.

To know if mass surveillance will work, Bayes’ theorem requires three estimations:

  1. The base-rate for terrorists, i.e. what proportion of the population are terrorists;
  2. The accuracy rate, i.e., the probability that real terrorists will be identified by NSA;
  3. The misidentification rate, i.e., the probability that innocent citizens will be misidentified by NSA as terrorists.

No matter how sophisticated and super-duper are NSA’s methods for identifying terrorists, no matter how big and fast are NSA’s computers, NSA’s accuracy rate will never be 100% and their misidentification rate will never be 0%. That fact, plus the extremely low base-rate for terrorists, means it is logically impossible for mass surveillance to be an effective way to find terrorists.

I will not put Bayes’ computational formula here. It is available in all elementary statistics books and is on the web should any readers be interested. But I will compute some conditional probabilities that people are terrorists given that NSA’s system of mass surveillance identifies them to be terrorists.

The US Census shows that there are about 300 million people living in the USA.

Suppose that there are 1,000 terrorists there as well, which is probably a high estimate. The base-rate would be 1 terrorist per 300,000 people. In percentages, that is .00033%, which is way less than 1%. Suppose that NSA surveillance has an accuracy rate of .40, which means that 40% of real terrorists in the USA will be identified by NSA’s monitoring of everyone’s email and phone calls. This is probably a high estimate, considering that terrorists are doing their best to avoid detection. There is no evidence thus far that NSA has been so successful at finding terrorists. And suppose NSA’s misidentification rate is .0001, which means that .01% of innocent people will be misidentified as terrorists, at least until they are investigated, detained and interrogated. Note that .01% of the US population is 30,000 people. With these suppositions, then the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.0132, which is near zero, very far from one. Ergo, NSA’s surveillance system is useless for finding terrorists.

Suppose that NSA’s system is more accurate than .40, let’s say, .70, which means that 70% of terrorists in the USA will be found by mass monitoring of phone calls and email messages. Then, by Bayes’ Theorem, the probability that a person is a terrorist if targeted by NSA is still only p=0.0228, which is near zero, far from one, and useless.

Suppose that NSA’s system is really, really, really good, really, really good, with an accuracy rate of .90, and a misidentification rate of .00001, which means that only 3,000 innocent people are misidentified as terrorists. With these suppositions, then the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.2308, which is far from one and well below flipping a coin. NSA’s domestic monitoring of everyone’s email and phone calls is useless for finding terrorists.

As an exercise to the reader, you can use the same analysis to show that data mining is an excellent tool for finding stolen credit cards, or stolen cell phones. Data mining is by no means useless; it’s just useless for this particular application.
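Here is that exercise sketched in a few lines of Python. It reproduces the quoted probabilities, then applies the same formula to a high-base-rate problem like credit card fraud; the 1% fraud figure and the detection rates in that last line are illustrative assumptions:

```python
def p_guilty_given_flagged(base_rate, hit_rate, false_alarm_rate):
    """Bayes' theorem: P(actually guilty | flagged by the system)."""
    return (hit_rate * base_rate) / (
        hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    )

terrorist_base_rate = 1_000 / 300_000_000     # 1 terrorist per 300,000 people

print(p_guilty_given_flagged(terrorist_base_rate, 0.40, 0.0001))   # ~0.013
print(p_guilty_given_flagged(terrorist_base_rate, 0.70, 0.0001))   # ~0.023
print(p_guilty_given_flagged(terrorist_base_rate, 0.90, 0.00001))  # ~0.23

# Same formula, much higher base rate: if roughly 1% of credit cards are
# used fraudulently in a year, a flag is suddenly worth acting on.
print(p_guilty_given_flagged(0.01, 0.90, 0.01))                    # ~0.48
```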

Posted on July 10, 2006 at 7:15 AM

Data Mining for Terrorists

In the post 9/11 world, there’s much focus on connecting the dots. Many believe that data mining is the crystal ball that will enable us to uncover future terrorist plots. But even in the most wildly optimistic projections, data mining isn’t tenable for that purpose. We’re not trading privacy for security; we’re giving up privacy and getting no security in return.

Most people first learned about data mining in November 2002, when news broke about a massive government data mining program called Total Information Awareness. The basic idea was as audacious as it was repellent: suck up as much data as possible about everyone, sift through it with massive computers, and investigate patterns that might indicate terrorist plots. Americans across the political spectrum denounced the program, and in September 2003, Congress eliminated its funding and closed its offices.

But TIA didn’t die. According to The National Journal, it just changed its name and moved inside the Defense Department.

This shouldn’t be a surprise. In May 2004, the General Accounting Office published a report that listed 122 different federal government data mining programs that used people’s personal information. This list didn’t include classified programs, like the NSA’s eavesdropping effort, or state-run programs like MATRIX.

The promise of data mining is compelling, and convinces many. But it’s wrong. We’re not going to find terrorist plots through systems like this, and we’re going to waste valuable resources chasing down false alarms. To understand why, we have to look at the economics of the system.

Security is always a trade-off, and for a system to be worthwhile, the advantages have to be greater than the disadvantages. A national security data mining program is going to find some percentage of real attacks, and some percentage of false alarms. If the benefits of finding and stopping those attacks outweigh the cost—in money, liberties, etc.—then the system is a good one. If not, then you’d be better off spending that cost elsewhere.

Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms. Credit card fraud is one of data mining’s success stories: all credit card companies data mine their transaction databases, looking for spending patterns that indicate a stolen card. Many credit card thieves share a pattern—purchase expensive luxury goods, purchase things that can be easily fenced, etc.—and data mining systems can minimize the losses in many cases by shutting down the card. In addition, the cost of false alarms is only a phone call to the cardholder asking him to verify a couple of purchases. The cardholders don’t even resent these phone calls—as long as they’re infrequent—so the cost is just a few minutes of operator time.

Terrorist plots are different. There is no well-defined profile, and attacks are very rare. Taken together, these facts mean that data mining systems won’t uncover any terrorist plots until they are very accurate, and that even very accurate systems will be so flooded with false alarms that they will be useless.

All data mining systems fail in two different ways: false positives and false negatives. A false positive is when the system identifies a terrorist plot that really isn’t one. A false negative is when the system misses an actual terrorist plot. Depending on how you “tune” your detection algorithms, you can err on one side or the other: you can increase the number of false positives to ensure that you are less likely to miss an actual terrorist plot, or you can reduce the number of false positives at the expense of missing terrorist plots.

To reduce both those numbers, you need a well-defined profile. And that’s a problem when it comes to terrorism. In hindsight, it was really easy to connect the 9/11 dots and point to the warning signs, but it’s much harder before the fact. Certainly, there are common warning signs that many terrorist plots share, but each is unique, as well. The better you can define what you’re looking for, the better your results will be. Data mining for terrorist plots is going to be sloppy, and it’s going to be hard to find anything useful.

Data mining is like searching for a needle in a haystack. There are 900 million credit cards in circulation in the United States. According to the FTC September 2003 Identity Theft Survey Report, about 1% (10 million) cards are stolen and fraudulently used each year. Terrorism is different. There are trillions of connections between people and events—things that the data mining system will have to “look at”—and very few plots. This rarity makes even accurate identification systems useless.

Let’s look at some numbers. We’ll be optimistic. We’ll assume the system has a 1 in 100 false positive rate (99% accurate), and a 1 in 1,000 false negative rate (99.9% accurate).

Assume one trillion possible indicators to sift through: that’s about ten events—e-mails, phone calls, purchases, web surfings, whatever—per person in the U.S. per day. Also assume that 10 of them are actually terrorists plotting.

This unrealistically accurate system will generate one billion false alarms for every real terrorist plot it uncovers. Every day of every year, the police will have to investigate 27 million potential plots in order to find the one real terrorist plot per month. Raise that false-positive accuracy to an absurd 99.9999% and you’re still chasing 2,750 false alarms per day—but that will inevitably raise your false negatives, and you’re going to miss some of those ten real plots.
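Here is the arithmetic behind those figures, as a short sketch:

```python
# The numbers above, computed directly.

indicators_per_year = 1_000_000_000_000   # one trillion events to examine
real_plots = 10
false_positive_rate = 1 / 100             # "99% accurate"
false_negative_rate = 1 / 1_000           # "99.9% accurate"

false_alarms = indicators_per_year * false_positive_rate
plots_found = real_plots * (1 - false_negative_rate)

print(f"{false_alarms / plots_found:,.0f} false alarms per plot uncovered")  # ~1 billion
print(f"{false_alarms / 365:,.0f} false alarms per day")                     # ~27 million

# At an absurd 99.9999% false-positive accuracy:
print(f"{indicators_per_year * 1e-6 / 365:,.0f} false alarms per day")       # ~2,700
```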

This isn’t anything new. In statistics, it’s called the “base rate fallacy,” and it applies in other domains as well. For example, even highly accurate medical tests are useless as diagnostic tools if the incidence of the disease is rare in the general population. Terrorist attacks are also rare, so any “test” is going to result in an endless stream of false alarms.

This is exactly the sort of thing we saw with the NSA’s eavesdropping program: the New York Times reported that the computers spat out thousands of tips per month. Every one of them turned out to be a false alarm.

And the cost was enormous: not just the cost of the FBI agents running around chasing dead-end leads instead of doing things that might actually make us safer, but also the cost in civil liberties. The fundamental freedoms that make our country the envy of the world are valuable, and not something that we should throw away lightly.

Data mining can work. It helps Visa keep the costs of fraud down, just as it helps Amazon.com show me books that I might want to buy, and Google show me advertising I’m more likely to be interested in. But these are all instances where the cost of false positives is low—a phone call from a Visa operator, or an uninteresting ad—and in systems that have value even if there is a high number of false negatives.

Finding terrorism plots is not a problem that lends itself to data mining. It’s a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.

This essay originally appeared on Wired.com.

Posted on March 9, 2006 at 7:44 AM
