Entries Tagged "false positives"

Press Security Concerns in Lebanon

Problems of reporting from a war zone:

Among broadcasters there is a concern about how our small convoys of cars full of equipment and personnel look from the air. There is a risk Israelis (eyes in the sky: drones, satellites) could mistake them for a Hezbollah convoy headed closer to the border and within striking distance of Israel. So simply being on the road with several vehicles is a risk.

Plus, when we fire up our broadcast signals, it is unclear what we look like to Israeli military monitoring stations. If there are a number of broadcasters firing up signals from the same remote place, the hope is that the Israelis will identify them as media signals, and not Hezbollah rocket electronics, and thus won’t make the site a target.

Posted on July 26, 2006 at 5:56 AM

Security Certifications

I’ve long been hostile to certifications—I’ve met too many bad security professionals with certifications and know many excellent security professionals without certifications. But I’ve come to believe that, while certifications aren’t perfect, they’re a decent way for a security professional to learn some of the things he’s going to need to know, and a decent way for a potential employer to assess whether a job candidate has the security expertise he’s going to need.

What’s changed? Both the job requirements and the certification programs.

Anyone can invent a security system that he himself cannot break. I’ve said this so often that Cory Doctorow has named it “Schneier’s Law”: When someone hands you a security system and says, “I believe this is secure,” the first thing you have to ask is, “Who the hell are you?” Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.

That kind of expertise can’t be found in a certification. It’s a combination of an innate feel for security, extensive knowledge of the academic security literature, extensive experience in existing security systems, and practice. When I’ve hired people to design and evaluate security systems, I’ve paid no attention to certifications. They are meaningless; I need a different set of skills and abilities.

But most organizations don’t need to hire that kind of person. Network security has become standardized; organizations need a practitioner, not a researcher. This is good because there is so much demand for these practitioners that there aren’t enough researchers to go around. Certification programs are good at churning out practitioners.

And over the years, certification programs have gotten better. They really do teach knowledge that security practitioners need. I might not want a graduate designing a security protocol or evaluating a cryptosystem, but certs are fine for any of the handful of network security jobs a large organization needs.

At my company, we encourage our security analysts to take certification courses. We find that it’s the most cost-effective way to give them the skills they need to do ever-more-complex jobs.

Of course, none of this is perfect. I still meet bad security practitioners with certifications, and I still know excellent security professionals without any.

In the end, certifications are like profiling. They work, but they’re sloppy. Just because someone has a particular certification doesn’t mean that he has the security expertise you’re looking for (in other words, there are false positives). And just because someone doesn’t have a security certification doesn’t mean that he doesn’t have the required security expertise (false negatives). But we use them for the same reason we profile: We don’t have the time, patience, or ability to test for what we’re looking for explicitly.

Profiling based on security certifications is the easiest way for an organization to make a good hiring decision, and the easiest way for an organization to train its existing employees. And honestly, that’s usually good enough.

This essay originally appeared as a point-counterpoint with Marcus Ranum in the July 2006 issue of Information Security Magazine. (You have to fill out an annoying survey to read Marcus’s counterpoint, but 1) you can lie, and 2) it’s worth it.)

EDITED TO ADD (7/21): A Guide to Information Security Certifications.

EDITED TO ADD (9/11): Here’s Marcus’s column.

Posted on July 20, 2006 at 7:20 AM

Terrorists, Data Mining, and the Base Rate Fallacy

I have already explained why NSA-style wholesale surveillance data-mining systems are useless for finding terrorists. Here’s a more formal explanation:

Floyd Rudmin, a professor at a Norwegian university, applies the mathematics of conditional probability, known as Bayes’ Theorem, to demonstrate that the NSA’s surveillance cannot successfully detect terrorists unless both the percentage of terrorists in the population and the accuracy rate of their identification are far higher than they are. He correctly concludes that “NSA’s surveillance system is useless for finding terrorists.”

The surveillance is, however, useful for monitoring political opposition and stymieing the activities of those who do not believe the government’s propaganda.

And here’s the analysis:

What is the probability that people are terrorists given that NSA’s mass surveillance identifies them as terrorists? If the probability is zero (p=0.00), then they certainly are not terrorists, and NSA was wasting resources and damaging the lives of innocent citizens. If the probability is one (p=1.00), then they definitely are terrorists, and NSA has saved the day. If the probability is fifty-fifty (p=0.50), that is the same as guessing the flip of a coin. The conditional probability that people are terrorists given that the NSA surveillance system says they are, that had better be very near to one (p=1.00) and very far from zero (p=0.00).

The mathematics of conditional probability were figured out by the Scottish logician Thomas Bayes. If you Google “Bayes’ Theorem”, you will get more than a million hits. Bayes’ Theorem is taught in all elementary statistics classes. Everyone at NSA certainly knows Bayes’ Theorem.

To know if mass surveillance will work, Bayes’ theorem requires three estimations:

  1. The base-rate for terrorists, i.e. what proportion of the population are terrorists;
  2. The accuracy rate, i.e., the probability that real terrorists will be identified by NSA;
  3. The misidentification rate, i.e., the probability that innocent citizens will be misidentified by NSA as terrorists.

No matter how sophisticated and super-duper are NSA’s methods for identifying terrorists, no matter how big and fast are NSA’s computers, NSA’s accuracy rate will never be 100% and their misidentification rate will never be 0%. That fact, plus the extremely low base-rate for terrorists, means it is logically impossible for mass surveillance to be an effective way to find terrorists.

I will not put Bayes’ computational formula here. It is available in all elementary statistics books and is on the web should any readers be interested. But I will compute some conditional probabilities that people are terrorists given that NSA’s system of mass surveillance identifies them to be terrorists.

The US Census shows that there are about 300 million people living in the USA.

Suppose that there are 1,000 terrorists there as well, which is probably a high estimate. The base-rate would be 1 terrorist per 300,000 people. In percentages, that is .00033%, which is way less than 1%. Suppose that NSA surveillance has an accuracy rate of .40, which means that 40% of real terrorists in the USA will be identified by NSA’s monitoring of everyone’s email and phone calls. This is probably a high estimate, considering that terrorists are doing their best to avoid detection. There is no evidence thus far that NSA has been so successful at finding terrorists. And suppose NSA’s misidentification rate is .0001, which means that .01% of innocent people will be misidentified as terrorists, at least until they are investigated, detained and interrogated. Note that .01% of the US population is 30,000 people. With these suppositions, then the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.0132, which is near zero, very far from one. Ergo, NSA’s surveillance system is useless for finding terrorists.

Suppose that NSA’s system is more accurate than .40, let’s say, .70, which means that 70% of terrorists in the USA will be found by mass monitoring of phone calls and email messages. Then, by Bayes’ Theorem, the probability that a person is a terrorist if targeted by NSA is still only p=0.0228, which is near zero, far from one, and useless.

Suppose that NSA’s system is really, really, really good, really, really good, with an accuracy rate of .90, and a misidentification rate of .00001, which means that only 3,000 innocent people are misidentified as terrorists. With these suppositions, then the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.2308, which is far from one and well below flipping a coin. NSA’s domestic monitoring of everyone’s email and phone calls is useless for finding terrorists.

As an exercise to the reader, you can use the same analysis to show that data mining is an excellent tool for finding stolen credit cards, or stolen cell phones. Data mining is by no means useless; it’s just useless for this particular application.
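
For readers who want to check the numbers, here is a minimal Python sketch of the Bayes’ theorem calculation. The base rate, accuracy rates, and misidentification rates below are Rudmin’s stated assumptions, not measured figures:

    # Minimal sketch of the Bayes' theorem calculation above.
    # All inputs are assumed values from Rudmin's analysis.

    def p_terrorist_given_flagged(base_rate, accuracy, misid_rate):
        """P(terrorist | flagged by surveillance), via Bayes' theorem."""
        true_positives = accuracy * base_rate
        false_positives = misid_rate * (1.0 - base_rate)
        return true_positives / (true_positives + false_positives)

    base_rate = 1_000 / 300_000_000  # 1,000 terrorists among 300 million people

    scenarios = [(0.40, 0.0001), (0.70, 0.0001), (0.90, 0.00001)]
    for accuracy, misid_rate in scenarios:
        p = p_terrorist_given_flagged(base_rate, accuracy, misid_rate)
        print(f"accuracy={accuracy:.2f}  misidentification={misid_rate}  p={p:.4f}")

    # Prints p = 0.0132, 0.0228, and 0.2308 -- the three figures quoted above.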

Posted on July 10, 2006 at 7:15 AM

Microsoft Windows Kill Switch

Does Microsoft have the ability to disable Windows remotely? Maybe:

Two weeks ago, I wrote about my serious objections to Microsoft’s latest salvo in the war against unauthorized copies of Windows. Two Windows Genuine Advantage components are being pushed onto users’ machines with insufficient notification and inadequate quality control, and the result is a big mess. (For details, see Microsoft presses the Stupid button.)

Guess what? WGA might be on the verge of getting even messier. In fact, one report claims WGA is about to become a Windows “kill switch,” and when I asked Microsoft for an on-the-record response, they refused to deny it.

And this, supposedly from someone at Microsoft Support:

He told me that “in the fall, having the latest WGA will become mandatory and if its not installed, Windows will give a 30 day warning and when the 30 days is up and WGA isn’t installed, Windows will stop working, so you might as well install WGA now.”

The stupidity of this idea is amazing. Not just the inevitability of false positives, but the potential for a hacker to co-opt the controls. I hope this rumor ends up not being true.

Although if they actually do it, the backlash could do more for non-Windows OSs than anything those OSs could do for themselves.

Posted on June 30, 2006 at 11:51 AM

Software Failure Causes Airport Evacuation

Last month I wrote about airport passenger screening, and mentioned that the X-ray equipment inserts “test” bags into the stream in order to keep screeners more alert. That system failed pretty badly earlier this week at Atlanta’s Hartsfield-Jackson Airport, when a false alarm resulted in a two-hour evacuation of the entire airport.

The screening system injects test images onto the screen. Normally the software flashes the words “This is a test” on the screen after a brief delay, but this time the software failed to do so. The screener noticed the image (of a “suspicious device,” according to CNN) and, per procedure, screeners manually checked the bags on the conveyor belt for it. They couldn’t find it, of course, so they evacuated the airport and spent two hours vainly searching for it.

Hartsfield-Jackson is the country’s busiest passenger airport. It’s Delta’s hub city. The delays were felt across the country for the rest of the day.

Okay, so what went wrong here? Clearly the software failed. Just as clearly the screener procedures didn’t fail—everyone did what they were supposed to do.

What is less obvious is that the system failed. It failed, because it was not designed to fail well. A small failure—in this case, a software glitch in a single X-ray machine—cascaded in such a way as to shut down the entire airport. This kind of failure magnification is common in poorly designed security systems. Better would be for there to be individual X-ray machines at the gates—I’ve seen this design at several European airports—so that when there’s a problem the effects are restricted to that gate.

Of course, this distributed security solution would be more expensive. But I’m willing to bet it would be cheaper overall, taking into account the cost of occasionally clearing out an airport.

Posted on April 21, 2006 at 12:49 PM

Man Detained for Singing a Clash Song

I was going to ignore this one, but too many people sent it to me.

I was in New York yesterday, and I saw a sign at the entrance to the Midtown Tunnel that said: “See something? Say something.” The problem with a nation of amateur spies is that it produces exactly these sorts of results: “I know he’s a terrorist because he’s dressing funny and he always has white wires hanging out of his pocket.” “They all talk in a funny language, and their cooking smells bad.”

Amateur spies perform amateur spying. If everybody does it, the false alarms will overwhelm the police.

Posted on April 7, 2006 at 11:31 AM

Airport Passenger Screening

It seems like every time someone tests airport security, airport security fails. In tests between November 2001 and February 2002, screeners missed 70 percent of knives, 30 percent of guns and 60 percent of (fake) bombs. And recently (see also this), testers were able to smuggle bomb-making parts through airport security in 21 of 21 attempts. It makes you wonder why we’re all putting our laptops in a separate bin and taking off our shoes. (Although we should all be glad that Richard Reid wasn’t the “underwear bomber.”)

The failure to detect bomb-making parts is easier to understand. Break up something into small enough parts, and it’s going to slip past the screeners pretty easily. The explosive material won’t show up on the metal detector, and the associated electronics can look benign when disassembled. This isn’t even a new problem. It’s widely believed that the Chechen women who blew up the two Russian planes in August 2004 probably smuggled their bombs aboard the planes in pieces.

But guns and knives? That surprises most people.

Airport screeners have a difficult job, primarily because the human brain isn’t naturally adapted to the task. We’re wired for visual pattern matching, and are great at picking out something we know to look for—for example, a lion in a sea of tall grass.

But we’re much less adept at detecting random exceptions in uniform data. Faced with an endless stream of identical objects, the brain quickly concludes that everything is identical and there’s no point in paying attention. By the time the exception comes around, the brain simply doesn’t notice it. This psychological phenomenon isn’t just a problem in airport screening: It’s been identified in inspections of all kinds, and is why casinos move their dealers around so often. The tasks are simply mind-numbing.

To make matters worse, the smuggler can try to exploit the system. He can position the weapons in his baggage just so. He can try to disguise them by adding other metal items to distract the screeners. He can disassemble bomb parts so they look nothing like bombs. Against a bored screener, he has the upper hand.

And, as has been pointed out again and again in essays on the ludicrousness of post-9/11 airport security, improvised weapons are a huge problem. A rock, a battery for a laptop, a belt, the extension handle off a wheeled suitcase, fishing line, the bare hands of someone who knows karate … the list goes on and on.

Technology can help. X-ray machines already randomly insert “test” bags into the stream—keeping screeners more alert. Computer-enhanced displays are making it easier for screeners to find contraband items in luggage, and eventually the computers will be able to do most of the work. It makes sense: Computers excel at boring repetitive tasks. They should do the quick sort, and let the screeners deal with the exceptions.

Sure, there’ll be a lot of false alarms, and some bad things will still get through. But it’s better than the alternative.

And it’s likely good enough. Remember the point of passenger screening. We’re not trying to catch the clever, organized, well-funded terrorists. We’re trying to catch the amateurs and the incompetent. We’re trying to catch the unstable. We’re trying to catch the copycats. These are all legitimate threats, and we’re smart to defend against them. Against the professionals, we’re just trying to add enough uncertainty into the system that they’ll choose other targets instead.

The terrorists’ goals have nothing to do with airplanes; their goals are to cause terror. Blowing up an airplane is just a particular attack designed to achieve that goal. Airplanes deserve some additional security because they have catastrophic failure properties: If there’s even a small explosion, everyone on the plane dies. But there’s a diminishing return on investments in airplane security. If the terrorists switch targets from airplanes to shopping malls, we haven’t really solved the problem.

What that means is that a basic cursory screening is good enough. If I were investing in security, I would fund significant research into computer-assisted screening equipment for both checked and carry-on bags, but wouldn’t spend a lot of money on invasive screening procedures and secondary screening. I would much rather have well-trained security personnel wandering around the airport, both in and out of uniform, looking for suspicious actions.

When I travel in Europe, I never have to take my laptop out of its case or my shoes off my feet. Those governments have had far more experience with terrorism than the U.S. government, and they know when passenger screening has reached the point of diminishing returns. (They also implemented checked-baggage security measures decades before the United States did—again recognizing the real threat.)

And if I were investing in security, I would invest in intelligence and investigation. The best time to combat terrorism is before the terrorist tries to get on an airplane. The best countermeasures have value regardless of the nature of the terrorist plot or the particular terrorist target.

In some ways, if we’re relying on airport screeners to prevent terrorism, it’s already too late. After all, we can’t keep weapons out of prisons. How can we ever hope to keep them out of airports?

A version of this essay originally appeared on Wired.com.

Posted on March 23, 2006 at 7:03 AM

Airport Security Failure

At LaGuardia, a man successfully walked through the metal detector, but screeners wanted to check his shoes. (Some reports say that his shoes set off an alarm.) But he didn’t wait, and disappeared into the crowd.

The entire Delta Airlines terminal had to be evacuated, and between 2,500 and 3,000 people had to be rescreened. I’m sure the resultant flight delays rippled through the entire system.

Security systems can fail in two ways. They can fail to defend against an attack. And they can fail when there is no attack to defend against. The latter failure is often more important, because false alarms are more common than real attacks.

Aside from the obvious security failure—how did this person manage to disappear into the crowd, anyway?—it’s painfully clear that the overall security system did not fail well. Well-designed security systems fail gracefully, without affecting the entire airport terminal. That the only thing the TSA could do after the failure was evacuate the entire terminal and rescreen everyone is a testament to how badly designed the security system is.

Posted on March 14, 2006 at 12:15 PM

Data Mining for Terrorists

In the post 9/11 world, there’s much focus on connecting the dots. Many believe that data mining is the crystal ball that will enable us to uncover future terrorist plots. But even in the most wildly optimistic projections, data mining isn’t tenable for that purpose. We’re not trading privacy for security; we’re giving up privacy and getting no security in return.

Most people first learned about data mining in November 2002, when news broke about a massive government data mining program called Total Information Awareness. The basic idea was as audacious as it was repellent: suck up as much data as possible about everyone, sift through it with massive computers, and investigate patterns that might indicate terrorist plots. Americans across the political spectrum denounced the program, and in September 2003, Congress eliminated its funding and closed its offices.

But TIA didn’t die. According to The National Journal, it just changed its name and moved inside the Defense Department.

This shouldn’t be a surprise. In May 2004, the General Accounting Office published a report that listed 122 different federal government data mining programs that used people’s personal information. This list didn’t include classified programs, like the NSA’s eavesdropping effort, or state-run programs like MATRIX.

The promise of data mining is compelling, and convinces many. But it’s wrong. We’re not going to find terrorist plots through systems like this, and we’re going to waste valuable resources chasing down false alarms. To understand why, we have to look at the economics of the system.

Security is always a trade-off, and for a system to be worthwhile, the advantages have to be greater than the disadvantages. A national security data mining program is going to find some percentage of real attacks, and some percentage of false alarms. If the benefits of finding and stopping those attacks outweigh the cost—in money, liberties, etc.—then the system is a good one. If not, then you’d be better off spending that cost elsewhere.

Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms. Credit card fraud is one of data mining’s success stories: all credit card companies data mine their transaction databases, looking for spending patterns that indicate a stolen card. Many credit card thieves share a pattern—purchase expensive luxury goods, purchase things that can be easily fenced, etc.—and data mining systems can minimize the losses in many cases by shutting down the card. In addition, the cost of false alarms is only a phone call to the cardholder asking him to verify a couple of purchases. The cardholders don’t even resent these phone calls—as long as they’re infrequent—so the cost is just a few minutes of operator time.

Terrorist plots are different. There is no well-defined profile, and attacks are very rare. Taken together, these facts mean that data mining systems won’t uncover any terrorist plots until they are very accurate, and that even very accurate systems will be so flooded with false alarms that they will be useless.

All data mining systems fail in two different ways: false positives and false negatives. A false positive is when the system identifies a terrorist plot that really isn’t one. A false negative is when the system misses an actual terrorist plot. Depending on how you “tune” your detection algorithms, you can err on one side or the other: you can increase the number of false positives to ensure that you are less likely to miss an actual terrorist plot, or you can reduce the number of false positives at the expense of missing terrorist plots.
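
As a rough illustration of that tuning knob, here is a toy Python sketch. The score distributions are invented for illustration only, not drawn from any real detection system; the point is simply how moving the threshold trades missed plots for false alarms:

    import random

    random.seed(0)

    # Toy model: the detector assigns every event a "suspicion score".
    # Innocent events mostly score low, real plots mostly score higher, but the
    # two distributions overlap -- which is what forces the trade-off.
    innocent_scores = [random.gauss(0.2, 0.15) for _ in range(100_000)]
    plot_scores = [random.gauss(0.6, 0.15) for _ in range(10)]

    for threshold in (0.9, 0.7, 0.5, 0.3):
        false_positives = sum(s >= threshold for s in innocent_scores)
        missed_plots = sum(s < threshold for s in plot_scores)
        print(f"threshold {threshold:.1f}: "
              f"{false_positives:6d} false positives, {missed_plots} missed plots")

Raising the threshold drives the false positives toward zero but misses most of the ten plots; lowering it catches the plots but buries them in tens of thousands of false alarms.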

To reduce both those numbers, you need a well-defined profile. And that’s a problem when it comes to terrorism. In hindsight, it was really easy to connect the 9/11 dots and point to the warning signs, but it’s much harder before the fact. Certainly, there are common warning signs that many terrorist plots share, but each is unique, as well. The better you can define what you’re looking for, the better your results will be. Data mining for terrorist plots is going to be sloppy, and it’s going to be hard to find anything useful.

Data mining is like searching for a needle in a haystack. There are 900 million credit cards in circulation in the United States. According to the FTC September 2003 Identity Theft Survey Report, about 1% (10 million) cards are stolen and fraudulently used each year. Terrorism is different. There are trillions of connections between people and events—things that the data mining system will have to “look at”—and very few plots. This rarity makes even accurate identification systems useless.

Let’s look at some numbers. We’ll be optimistic. We’ll assume the system has a 1 in 100 false positive rate (99% accurate), and a 1 in 1,000 false negative rate (99.9% accurate).

Assume one trillion possible indicators to sift through: that’s about ten events—e-mails, phone calls, purchases, web surfings, whatever—per person in the U.S. per day. Also assume that 10 of them are actually terrorists plotting.

This unrealistically-accurate system will generate one billion false alarms for every real terrorist plot it uncovers. Every day of every year, the police will have to investigate 27 million potential plots in order to find the one real terrorist plot per month. Raise that false-positive accuracy to an absurd 99.9999% and you’re still chasing 2,750 false alarms per day—but that will inevitably raise your false negatives, and you’re going to miss some of those ten real plots.
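
Those figures are easy to reproduce. Here is a short Python sketch using the same assumptions (one trillion indicators per year, ten real plots, a 1-in-1,000 false negative rate, and the two false-positive rates):

    INDICATORS_PER_YEAR = 1_000_000_000_000  # one trillion events, as assumed above
    REAL_PLOTS = 10                          # assumed number of real plots among them
    TRUE_POSITIVE_RATE = 0.999               # 1-in-1,000 false negative rate

    for false_positive_rate in (0.01, 0.000001):  # "99% accurate" vs. "99.9999% accurate"
        false_alarms = (INDICATORS_PER_YEAR - REAL_PLOTS) * false_positive_rate
        plots_found = REAL_PLOTS * TRUE_POSITIVE_RATE
        print(f"false positive rate {false_positive_rate}:")
        print(f"  {false_alarms / plots_found:,.0f} false alarms per real plot found")
        print(f"  {false_alarms / 365:,.0f} false alarms to chase per day")

The output reproduces the one-billion-per-plot and 27-million-per-day figures, and roughly 2,750 false alarms per day in the second case.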

This isn’t anything new. In statistics, it’s called the “base rate fallacy,” and it applies in other domains as well. For example, even highly accurate medical tests are useless as diagnostic tools if the incidence of the disease is rare in the general population. Because terrorist attacks are also rare, any “test” is going to result in an endless stream of false alarms.

This is exactly the sort of thing we saw with the NSA’s eavesdropping program: the New York Times reported that the computers spat out thousands of tips per month. Every one of them turned out to be a false alarm.

And the cost was enormous: not just the cost of the FBI agents running around chasing dead-end leads instead of doing things that might actually make us safer, but also the cost in civil liberties. The fundamental freedoms that make our country the envy of the world are valuable, and not something that we should throw away lightly.

Data mining can work. It helps Visa keep the costs of fraud down, just as it helps Amazon.com show me books that I might want to buy, and Google show me advertising I’m more likely to be interested in. But these are all instances where the cost of false positives is low—a phone call from a Visa operator, or an uninteresting ad—and in systems that have value even if there is a high number of false negatives.

Finding terrorism plots is not a problem that lends itself to data mining. It’s a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.

This essay originally appeared on Wired.com.

Posted on March 9, 2006 at 7:44 AM

The Terrorist Threat of Paying Your Credit Card Balance

This article shows how badly terrorist profiling can go wrong:

They paid down some debt. The balance on their JCPenney Platinum MasterCard had gotten to an unhealthy level. So they sent in a large payment, a check for $6,522.

And an alarm went off. A red flag went up. The Soehnges’ behavior was found questionable.

And all they did was pay down their debt. They didn’t call a suspected terrorist on their cell phone. They didn’t try to sneak a machine gun through customs.

They just paid a hefty chunk of their credit card balance. And they learned how frighteningly wide the net of suspicion has been cast.

After sending in the check, they checked online to see if their account had been duly credited. They learned that the check had arrived, but the amount available for credit on their account hadn’t changed.

So Deana Soehnge called the credit-card company. Then Walter called.

“When you mess with my money, I want to know why,” he said.

They both learned the same astounding piece of information about the little things that can set the threat sensors to beeping and blinking.

They were told, as they moved up the managerial ladder at the call center, that the amount they had sent in was much larger than their normal monthly payment. And if the increase hits a certain percentage higher than that normal payment, Homeland Security has to be notified. And the money doesn’t move until the threat alert is lifted.

The article goes on to blame something called the Bank Privacy Act, but that’s not correct. The culprit here is the amendments made to the Bank Secrecy Act by the USA Patriot Act, Sections 351 and 352. There’s a general discussion here, and the Federal Register here.

There has been some rumbling on the net that this story is badly garbled—or even a hoax—but certainly this kind of thing is what financial institutions are required to report under the Patriot Act.

Remember, all the time spent chasing down silly false alarms is time wasted. Finding terrorist plots is a signal-to-noise problem, and stuff like this substantially decreases that ratio: it adds a lot of noise without adding enough signal. It makes us less safe, because it makes terrorist plots harder to find.

Posted on March 6, 2006 at 10:45 AM
