Entries Tagged "false negatives"


Another Biometric: Vein Patterns

Interesting:

In fact, vein recognition technology has one fundamental advantage over fingerprint systems: vein patterns in fingers and palms are biometric characteristics that are not left behind unintentionally in everyday activities. In tests conducted by heise, even extreme close-ups of a palm taken with a digital camera, whose RAW format can be filtered systematically to emphasize the near-infrared range, were unable to deliver a clear reproduction of the line pattern. With the transluminance method used by Hitachi, it is practically impossible to read out the pattern unnoticed with today’s technology. Another side effect of near-infrared imaging also has relevance to security: vein patterns of inanimate body parts become useless after a few minutes, due to the increasing deoxygenation of the tissue.

Even if someone manages to obtain a person’s vein pattern, there is no known method for creating a functioning dummy, unlike fingerprints, where this can be achieved even with home-made tools, as demonstrated by the German computer magazine c’t. As is the case with vendors of fingerprint systems, Hitachi and Fujitsu do not disclose information on the liveness detection methods used in their products.

Besides the considerably improved forgery protection, the vendors of vein recognition technology claim further advantages. Compared to fingerprint sensors, vein recognition systems are said to deliver false rejection rates (FRR) two orders of magnitude below those of fingerprint systems when operating at a comparable false acceptance rate (FAR). This can be ascribed to the basic structure of vein patterns having a much higher degree of variability than fingerprints.
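To unpack what “two orders of magnitude” means in practice, here’s a rough illustration with invented error rates (these are not figures from the article):

```python
# Purely illustrative numbers (not from the heise tests) to unpack the claim:
# at the same false acceptance rate, an FRR two orders of magnitude lower
# means roughly 100x fewer legitimate users being wrongly rejected.
far = 0.0001                        # hypothetical shared false acceptance rate
frr_fingerprint = 0.02              # hypothetical: 2% of genuine users rejected
frr_vein = frr_fingerprint / 100    # two orders of magnitude lower: 0.02%
print(f"At FAR={far:.4%}: fingerprint FRR={frr_fingerprint:.2%}, "
      f"vein FRR={frr_vein:.2%}")
```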

This is all interesting. I don’t know about the details of the technology, but the discussions of false positives, false negatives, and forgeability are the right ones to have. Remember, though, that while biometrics are an effective security technology, they’re not a panacea.

Posted on August 8, 2007 at 7:02 AM

Nonsecurity Considerations in Security Decisions

Security decisions are generally made for nonsecurity reasons. For security professionals and technologists, this can be a hard lesson. We like to think that security is vitally important. But anyone who has tried to convince the sales VP to give up her department’s Blackberries or the CFO to stop sharing his password with his secretary knows security is often viewed as a minor consideration in a larger decision. This issue’s articles on managing organizational security make this point clear.

Below is a diagram of a security decision. At its core are assets, which a security system protects. Security can fail in two ways: either attackers can successfully bypass it, or it can mistakenly block legitimate users. There are, of course, more users than attackers, so the second kind of failure is often more important. There’s also a feedback mechanism with respect to security countermeasures: both users and attackers learn about the security and its failings. Sometimes they learn how to bypass security, and sometimes they learn not to bother with the asset at all.

Threats are complicated: attackers have certain goals, and they implement specific attacks to achieve them. Attackers can be legitimate users of assets, as well (imagine a terrorist who needs to travel by air, but eventually wants to blow up a plane). And a perfectly reasonable outcome of defense is attack diversion: the attacker goes after someone else’s asset instead.

Asset owners control the security system, but not directly. They implement security through some sort of policy—either formal or informal—that some combination of trusted people and trusted systems carries out. Owners are affected by risks … but really, only by perceived risks. They’re also affected by a host of other considerations, including those legitimate users mentioned previously, and the trusted people needed to implement the security policy.

Looking over the diagram, it’s obvious that the effectiveness of security is only a minor consideration in an asset owner’s security decision. And that’s how it should be.

Whether a security countermeasure repels or allows attacks (green and red arrows, respectively) is just a small consideration when making a security trade-off.

This essay originally appeared in IEEE Security and Privacy.

Posted on June 7, 2007 at 11:25 AM

Keystroke Biometrics

This sounds like a good idea. From a news article:

The technology, which measures the time for which keys are held down, as well as the length between strokes, takes advantage of the fact that most computer users evolve a method of typing which is both consistent and idiosyncratic, especially for words used frequently such as a user name and password.

When registering, the user types his or her details nine times so that the software can generate a profile. Future login attempts are measured against the profile which, the company claims, can recognise the same user’s keystrokes with 99 per cent accuracy, using what is known as a “behavioural biometric.”

I wouldn’t want to automatically block users unless they get this right, and the false-positive/false-negative ratio would have to be jiggered properly, but if they can get it working right, it’s an extra layer of authentication for “free.”
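As a rough sketch of how such a behavioural biometric could work, assuming the enroll-and-compare scheme the article describes (the feature layout, profile format, and z-score threshold here are hypothetical):

```python
# Minimal sketch of a keystroke-dynamics check: collect dwell times (how long
# each key is held) and flight times (gaps between keystrokes) over nine
# enrollment samples, build a per-feature mean/stdev profile, and accept a
# login attempt only if every feature falls within a tolerance band.
import statistics

def build_profile(samples):
    """samples: list of timing vectors, one per enrollment attempt."""
    columns = list(zip(*samples))
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

def matches(profile, attempt, max_z=2.5):
    """Accept if every timing feature is within max_z standard deviations."""
    for (mean, stdev), value in zip(profile, attempt):
        if stdev > 0 and abs(value - mean) / stdev > max_z:
            return False
    return True

# Nine enrollment samples of four timing features (milliseconds).
enrollment = [[105, 80, 95, 120], [110, 78, 97, 118], [102, 82, 93, 125],
              [108, 79, 96, 119], [104, 81, 94, 122], [107, 77, 98, 121],
              [103, 83, 92, 124], [109, 80, 95, 117], [106, 79, 96, 123]]
profile = build_profile(enrollment)
print(matches(profile, [106, 80, 95, 120]))  # True: same typing rhythm
print(matches(profile, [60, 150, 40, 200]))  # False: very different rhythm
```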

Another news article. Slashdot thread.

Posted on April 23, 2007 at 6:49 AM

TSA Failures in the News

I’m not sure which is more important—the news or the fact that no one is surprised:

Sources told 9NEWS the Red Team was able to sneak about 90 percent of simulated weapons past checkpoint screeners in Denver. In the baggage area, screeners caught one explosive device that was packed in a suitcase. However, screeners in the baggage area later missed a book bomb, according to sources.

“There’s very little substance to security,” said former Red Team leader Bogdan Dzakovic. “It literally is all window dressing that we’re doing. It’s big theater on TV and when you go to the airport. It’s just security theater.”

Dzakovic was a Red Team leader from 1995 until September 11, 2001. After the terrorist attacks, Dzakovic became a federally protected whistleblower and alleged that thousands of people died needlessly. He testified before the 9/11 Commission and the National Commission on Terrorist Attacks Upon the US that the Red Team “breached security with ridiculous ease up to 90 percent of the time,” and said the FAA “knew how vulnerable aviation security was.”

Dzakovic, who is currently a TSA inspector, said security is no better today.

“It’s worse now. The terrorists can pretty much do what they want when they want to do it,” he said.

Posted on April 2, 2007 at 12:16 PM

Microsoft Anti-Phishing and Small Businesses

Microsoft has a new anti-phishing service in Internet Explorer 7 that will turn the address bar green and display the website owner’s identity when surfers visit on-line merchants previously approved as legitimate. So far, so good. But the service is only available to corporations: not to sole proprietorships, partnerships, or individuals.

Of course, if a merchant’s bar doesn’t turn green it doesn’t mean that they’re bad. It’ll be white, which indicates “no information.” There are also yellow and red indications, corresponding to “suspicious” and “known fraudulent site.” But small businesses are worried that customers will be afraid to buy from non-green sites.

That’s possible, but it’s more likely that users will learn that the marker isn’t reliable and start to ignore it.

Any white-list system like this has two sources of error. False positives, where phishers get the marker. And false negatives, where legitimate honest merchants don’t. Any system like this has to effectively deal with both.
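A toy sketch of how a traffic-light whitelist like this behaves, just to make the two error sources concrete (the lists and the lookup function are invented, not Microsoft’s actual service):

```python
# Toy whitelist-style site classifier illustrating the four indicator states
# and the two error sources described above. All domains are hypothetical.
VERIFIED = {"bigretailer.example"}          # approved as legitimate -> green
KNOWN_FRAUD = {"phisher.example"}           # known fraudulent -> red
SUSPICIOUS = {"oddly-named-bank.example"}   # heuristically flagged -> yellow

def address_bar_color(domain):
    if domain in KNOWN_FRAUD:
        return "red"
    if domain in SUSPICIOUS:
        return "yellow"
    if domain in VERIFIED:
        return "green"
    return "white"  # "no information" -- where honest small merchants land

# A phisher who slips onto VERIFIED is a false positive; an honest sole
# proprietor stuck at "white" is, in effect, a false negative.
print(address_bar_color("mom-and-pop-shop.example"))  # prints "white"
```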

EDITED TO ADD (12/21): Research paper: “Phinding Phish: An Evaluation of Anti-Phishing Toolbars,” by L. Cranor, S. Egelman, J. Hong, and Y. Zhang.

Posted on December 21, 2006 at 6:58 AM

Forecasting Murderers

There’s new software that can predict who is likely to become a murderer.

Using probation department cases entered into the system between 2002 and 2004, Berk and his colleagues performed a two-year follow-up study—enough time, they theorized, for a person to reoffend if he was going to. They tracked each individual, with particular attention to the people who went on to kill. That created the model. What remains at this stage is to find a way to marry the software to the probation department’s information technology system.

When caseworkers begin applying the model next year they will input data about their individual cases – what Berk calls “dropping ‘Joe’ down the model”—to come up with scores that will allow the caseworkers to assign the most intense supervision to the riskiest cases.

Even a crime as serious as aggravated assault—pistol whipping, for example—”might not mean that much” if the first-time offender is 30, but it is an “alarming indicator” in a first-time offender who is 18, Berk said.

The model was built using adult probation data stripped of personal identifying information for confidentiality. Berk thinks it could be an even more powerful diagnostic tool if he could have access to similarly anonymous juvenile records.

The central public policy question in all of this is a resource allocation problem. With not enough resources to go around, overloaded case workers have to cull their cases to find the ones in most urgent need of attention—the so-called true positives, as epidemiologists say.

But before that can begin in earnest, the public has to decide how many false positives it can afford in order to head off future killers, and how many false negatives (seemingly nonviolent people who nevertheless go on to kill) it is willing to risk to narrow the false positive pool.
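Here’s a minimal sketch of that trade-off with entirely made-up risk scores (this is not Berk’s model); moving the threshold simply trades one kind of error for the other:

```python
# Made-up (risk score, actually went on to kill) pairs. Lowering the threshold
# catches more future offenders (fewer false negatives) at the cost of flagging
# more people who would never reoffend (more false positives).
cases = [
    (0.92, True), (0.81, False), (0.77, True), (0.60, False),
    (0.55, False), (0.40, True), (0.35, False), (0.10, False),
]

def confusion_counts(threshold):
    fp = sum(1 for score, violent in cases if score >= threshold and not violent)
    fn = sum(1 for score, violent in cases if score < threshold and violent)
    return fp, fn

for threshold in (0.9, 0.7, 0.5, 0.3):
    fp, fn = confusion_counts(threshold)
    print(f"threshold {threshold:.1f}: {fp} false positives, {fn} false negatives")
```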

Pretty scary stuff, as it gets into the realm of thoughtcrime.

Posted on December 1, 2006 at 7:34 AM

Security Certifications

I’ve long been hostile to certifications—I’ve met too many bad security professionals with certifications and know many excellent security professionals without certifications. But I’ve come to believe that, while certifications aren’t perfect, they’re a decent way for a security professional to learn some of the things he’s going to need to know, and for a potential employer to assess whether a job candidate has the security expertise he’s going to need.

What’s changed? Both the job requirements and the certification programs.

Anyone can invent a security system that he himself cannot break. I’ve said this so often that Cory Doctorow has named it “Schneier’s Law”: When someone hands you a security system and says, “I believe this is secure,” the first thing you have to ask is, “Who the hell are you?” Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.

That kind of expertise can’t be found in a certification. It’s a combination of an innate feel for security, extensive knowledge of the academic security literature, extensive experience in existing security systems, and practice. When I’ve hired people to design and evaluate security systems, I’ve paid no attention to certifications. They are meaningless; I need a different set of skills and abilities.

But most organizations don’t need to hire that kind of person. Network security has become standardized; organizations need a practitioner, not a researcher. This is good because there is so much demand for these practitioners that there aren’t enough researchers to go around. Certification programs are good at churning out practitioners.

And over the years, certification programs have gotten better. They really do teach knowledge that security practitioners need. I might not want a graduate designing a security protocol or evaluating a cryptosystem, but certs are fine for any of the handful of network security jobs a large organization needs.

At my company, we encourage our security analysts to take certification courses. We find that it’s the most cost-effective way to give them the skills they need to do ever-more-complex jobs.

Of course, none of this is perfect. I still meet bad security practitioners with certifications, and I still know excellent security professionals without any.

In the end, certifications are like profiling. They work, but they’re sloppy. Just because someone has a particular certification doesn’t mean that he has the security expertise you’re looking for (in other words, there are false positives). And just because someone doesn’t have a security certification doesn’t mean that he doesn’t have the required security expertise (false negatives). But we use them for the same reason we profile: We don’t have the time, patience, or ability to test for what we’re looking for explicitly.

Profiling based on security certifications is the easiest way for an organization to make a good hiring decision, and the easiest way for an organization to train its existing employees. And honestly, that’s usually good enough.

This essay originally appeared as a point-counterpoint with Marcus Ranum in the July 2006 issue of Information Security Magazine. (You have to fill out an annoying survey to read Marcus’s counterpoint, but 1) you can lie, and 2) it’s worth it.)

EDITED TO ADD (7/21): A Guide to Information Security Certifications.

EDITED TO ADD (9/11): Here’s Marcus’s column.

Posted on July 20, 2006 at 7:20 AM

Terrorists, Data Mining, and the Base Rate Fallacy

I have already explained why NSA-style wholesale surveillance data-mining systems are useless for finding terrorists. Here’s a more formal explanation:

Floyd Rudmin, a professor at a Norwegian university, applies the mathematics of conditional probability, known as Bayes’ Theorem, to demonstrate that the NSA’s surveillance cannot successfully detect terrorists unless both the percentage of terrorists in the population and the accuracy rate of their identification are far higher than they are. He correctly concludes that “NSA’s surveillance system is useless for finding terrorists.”

The surveillance is, however, useful for monitoring political opposition and stymieing the activities of those who do not believe the government’s propaganda.

And here’s the analysis:

What is the probability that people are terrorists given that NSA’s mass surveillance identifies them as terrorists? If the probability is zero (p=0.00), then they certainly are not terrorists, and NSA was wasting resources and damaging the lives of innocent citizens. If the probability is one (p=1.00), then they definitely are terrorists, and NSA has saved the day. If the probability is fifty-fifty (p=0.50), that is the same as guessing the flip of a coin. The conditional probability that people are terrorists, given that the NSA surveillance system says they are, had better be very near to one (p=1.00) and very far from zero (p=0.00).

The mathematics of conditional probability were figured out by the English mathematician Thomas Bayes. If you Google “Bayes’ Theorem”, you will get more than a million hits. Bayes’ Theorem is taught in all elementary statistics classes. Everyone at NSA certainly knows Bayes’ Theorem.

To know if mass surveillance will work, Bayes’ theorem requires three estimations:

  1. The base-rate for terrorists, i.e., what proportion of the population are terrorists;
  2. The accuracy rate, i.e., the probability that real terrorists will be identified by NSA;
  3. The misidentification rate, i.e., the probability that innocent citizens will be misidentified by NSA as terrorists.

No matter how sophisticated and super-duper are NSA’s methods for identifying terrorists, no matter how big and fast are NSA’s computers, NSA’s accuracy rate will never be 100% and their misidentification rate will never be 0%. That fact, plus the extremely low base-rate for terrorists, means it is logically impossible for mass surveillance to be an effective way to find terrorists.

I will not put Bayes’ computational formula here. It is available in all elementary statistics books and is on the web should any readers be interested. But I will compute some conditional probabilities that people are terrorists given that NSA’s system of mass surveillance identifies them to be terrorists.

The US Census shows that there are about 300 million people living in the USA.

Suppose that there are 1,000 terrorists there as well, which is probably a high estimate. The base-rate would be 1 terrorist per 300,000 people. In percentages, that is .00033%, which is way less than 1%. Suppose that NSA surveillance has an accuracy rate of .40, which means that 40% of real terrorists in the USA will be identified by NSA’s monitoring of everyone’s email and phone calls. This is probably a high estimate, considering that terrorists are doing their best to avoid detection. There is no evidence thus far that NSA has been so successful at finding terrorists. And suppose NSA’s misidentification rate is .0001, which means that .01% of innocent people will be misidentified as terrorists, at least until they are investigated, detained and interrogated. Note that .01% of the US population is 30,000 people. With these suppositions, then the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.0132, which is near zero, very far from one. Ergo, NSA’s surveillance system is useless for finding terrorists.

Suppose that NSA’s system is more accurate than .40, let’s say, .70, which means that 70% of terrorists in the USA will be found by mass monitoring of phone calls and email messages. Then, by Bayes’ Theorem, the probability that a person is a terrorist if targeted by NSA is still only p=0.0228, which is near zero, far from one, and useless.

Suppose that NSA’s system is really, really, really good, with an accuracy rate of .90, and a misidentification rate of .00001, which means that only 3,000 innocent people are misidentified as terrorists. With these suppositions, then the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.2308, which is far from one and well below flipping a coin. NSA’s domestic monitoring of everyone’s email and phone calls is useless for finding terrorists.
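For readers who want to check the arithmetic, here is a short sketch that reproduces the three quoted figures from Bayes’ theorem, using the base rate, accuracy rates, and misidentification rates assumed above:

```python
# P(terrorist | flagged) =
#   P(flagged | terrorist) * P(terrorist)
#   ---------------------------------------------------------------------
#   P(flagged | terrorist) * P(terrorist) + P(flagged | innocent) * P(innocent)
def p_terrorist_given_flagged(base_rate, accuracy, misid_rate):
    numerator = accuracy * base_rate
    return numerator / (numerator + misid_rate * (1 - base_rate))

base_rate = 1000 / 300_000_000   # 1,000 terrorists among 300 million people

for accuracy, misid_rate in [(0.40, 0.0001), (0.70, 0.0001), (0.90, 0.00001)]:
    p = p_terrorist_given_flagged(base_rate, accuracy, misid_rate)
    print(f"accuracy={accuracy}, misidentification rate={misid_rate}: p={p:.4f}")
# Prints roughly 0.0132, 0.0228, and 0.2308 -- the figures quoted above.
```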

As an exercise for the reader, you can use the same analysis to show that data mining is an excellent tool for finding stolen credit cards, or stolen cell phones. Data mining is by no means useless; it’s just useless for this particular application.

Posted on July 10, 2006 at 7:15 AM

Airport Passenger Screening

It seems like every time someone tests airport security, airport security fails. In tests between November 2001 and February 2002, screeners missed 70 percent of knives, 30 percent of guns and 60 percent of (fake) bombs. And recently (see also this), testers were able to smuggle bomb-making parts through airport security in 21 of 21 attempts. It makes you wonder why we’re all putting our laptops in a separate bin and taking off our shoes. (Although we should all be glad that Richard Reid wasn’t the “underwear bomber.”)

The failure to detect bomb-making parts is easier to understand. Break up something into small enough parts, and it’s going to slip past the screeners pretty easily. The explosive material won’t show up on the metal detector, and the associated electronics can look benign when disassembled. This isn’t even a new problem. It’s widely believed that the Chechen women who blew up the two Russian planes in August 2004 probably smuggled their bombs aboard the planes in pieces.

But guns and knives? That surprises most people.

Airport screeners have a difficult job, primarily because the human brain isn’t naturally adapted to the task. We’re wired for visual pattern matching, and are great at picking out something we know to look for—for example, a lion in a sea of tall grass.

But we’re much less adept at detecting random exceptions in uniform data. Faced with an endless stream of identical objects, the brain quickly concludes that everything is identical and there’s no point in paying attention. By the time the exception comes around, the brain simply doesn’t notice it. This psychological phenomenon isn’t just a problem in airport screening: It’s been identified in inspections of all kinds, and is why casinos move their dealers around so often. The tasks are simply mind-numbing.

To make matters worse, the smuggler can try to exploit the system. He can position the weapons in his baggage just so. He can try to disguise them by adding other metal items to distract the screeners. He can disassemble bomb parts so they look nothing like bombs. Against a bored screener, he has the upper hand.

And, as has been pointed out again and again in essays on the ludicrousness of post-9/11 airport security, improvised weapons are a huge problem. A rock, a battery for a laptop, a belt, the extension handle off a wheeled suitcase, fishing line, the bare hands of someone who knows karate … the list goes on and on.

Technology can help. X-ray machines already randomly insert “test” bags into the stream—keeping screeners more alert. Computer-enhanced displays are making it easier for screeners to find contraband items in luggage, and eventually the computers will be able to do most of the work. It makes sense: Computers excel at boring repetitive tasks. They should do the quick sort, and let the screeners deal with the exceptions.
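Here’s a toy sketch of that quick-sort triage, with invented threat scores and an invented threshold; the point is that the machine is tuned to over-flag rather than to miss:

```python
# Toy version of "computers do the quick sort, humans handle the exceptions."
def triage(bags, alarm_threshold=0.2):
    """bags: list of (bag_id, machine_threat_score in [0, 1])."""
    cleared, needs_human = [], []
    for bag_id, score in bags:
        # Low threshold: err on the side of sending bags to a screener.
        (needs_human if score >= alarm_threshold else cleared).append(bag_id)
    return cleared, needs_human

stream = [("bag-001", 0.03), ("bag-002", 0.45), ("bag-003", 0.11),
          ("bag-004", 0.88), ("bag-005", 0.19)]
cleared, needs_human = triage(stream)
print("machine-cleared:", cleared)        # routine bags, no human attention
print("sent to screener:", needs_human)   # exceptions a human actually reviews
```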

Sure, there’ll be a lot of false alarms, and some bad things will still get through. But it’s better than the alternative.

And it’s likely good enough. Remember the point of passenger screening. We’re not trying to catch the clever, organized, well-funded terrorists. We’re trying to catch the amateurs and the incompetent. We’re trying to catch the unstable. We’re trying to catch the copycats. These are all legitimate threats, and we’re smart to defend against them. Against the professionals, we’re just trying to add enough uncertainty into the system that they’ll choose other targets instead.

The terrorists’ goals have nothing to do with airplanes; their goals are to cause terror. Blowing up an airplane is just a particular attack designed to achieve that goal. Airplanes deserve some additional security because they have catastrophic failure properties: If there’s even a small explosion, everyone on the plane dies. But there’s a diminishing return on investments in airplane security. If the terrorists switch targets from airplanes to shopping malls, we haven’t really solved the problem.

What that means is that a basic cursory screening is good enough. If I were investing in security, I would fund significant research into computer-assisted screening equipment for both checked and carry-on bags, but wouldn’t spend a lot of money on invasive screening procedures and secondary screening. I would much rather have well-trained security personnel wandering around the airport, both in and out of uniform, looking for suspicious actions.

When I travel in Europe, I never have to take my laptop out of its case or my shoes off my feet. Those governments have had far more experience with terrorism than the U.S. government, and they know when passenger screening has reached the point of diminishing returns. (They also implemented checked-baggage security measures decades before the United States did—again recognizing the real threat.)

And if I were investing in security, I would invest in intelligence and investigation. The best time to combat terrorism is before the terrorist tries to get on an airplane. The best countermeasures have value regardless of the nature of the terrorist plot or the particular terrorist target.

In some ways, if we’re relying on airport screeners to prevent terrorism, it’s already too late. After all, we can’t keep weapons out of prisons. How can we ever hope to keep them out of airports?

A version of this essay originally appeared on Wired.com.

Posted on March 23, 2006 at 7:03 AM
