Entries Tagged "false positives"


Three Security Anecdotes from the Insect World

Beet armyworm caterpillars react to the sound of a passing wasp by freezing in place, or even dropping off the plant. Unfortunately, armyworm intelligence isn’t good enough to tell the difference between enemy aircraft (the wasps that prey on them) and harmless commercial flights (bees); they react the same way to either. So by producing nectar for bees, plants not only get pollinated, but also gain some protection against being eaten by caterpillars.

The small hive beetle lives by entering beehives to steal combs and honey. They home in on the hives by detecting the bees’ own alarm pheromones. They also track in yeast that ferments the pollen and releases chemicals that spoof the alarm pheromones, attracting more beetles and more yeast. Eventually the bees abandon the hive, leaving their store of pollen and honey to the beetles and yeast.

Mountain alcon blue caterpillars get ants to feed them by spoofing a biometric: the sounds made by the queen ant.

Posted on March 3, 2009 at 1:20 PM

Impersonation

Impersonation isn’t new. In 1556, a Frenchman was executed for impersonating Martin Guerre, and this week hackers impersonated Barack Obama on Twitter. It’s not even unique to humans: mockingbirds, Viceroy butterflies, and the mimic octopus all use impersonation as a survival strategy. For people, detecting impersonation is a hard problem for three reasons: we need to verify the identity of people we don’t know, we interact with people through “narrow” communications channels like the telephone and Internet, and we want computerized systems to do the verification for us.

Traditional impersonation involves people fooling people. It’s still done today: impersonating garbage men to collect tips, impersonating parking lot attendants to collect fees, or impersonating the French president to fool Sarah Palin. Impersonating people like policemen, security guards, and meter readers is a common criminal tactic.

These tricks work because we all regularly interact with people we don’t know. No one could successfully impersonate your brother, your best friend, or your boss, because you know them intimately. But a policeman or a parking lot attendant? That’s just someone with a badge or a uniform. But badges and ID cards only help if you know how to verify one. Do you know what a valid police ID looks like? Or how to tell a real telephone repairman’s badge from a forged one?

Still, it’s human nature to trust these credentials. We naturally trust uniforms, even though we know that anyone can wear one. When we visit a Web site, we use the professionalism of the page to judge whether or not it’s really legitimate—never mind that anyone can cut and paste graphics. Watch the next time someone other than law enforcement verifies your ID; most people barely look at it.

Impersonation is even easier over limited communications channels. On the telephone, how can you distinguish someone working at your credit card company from someone trying to steal your account details and login information? On e-mail, how can you distinguish someone from your company’s tech support from a hacker trying to break into your network—or the mayor of Paris from an impersonator? Once in a while someone frees himself from jail by faxing a forged release order to his warden. This is social engineering: impersonating someone convincingly enough to fool the victim.

These days, a lot of identity verification happens with computers. Computers are fast at computation but not very good at judgment, and can be tricked. So people can fool speed cameras by taping a fake license plate over the real one, fingerprint readers with a piece of tape, or automatic face scanners with—and I’m not making this up—a photograph of a face held in front of their own. Even the most bored policeman wouldn’t fall for any of those tricks.

This is why identity theft is such a big problem today. So much authentication happens online, with only a small amount of information: user ID, password, birth date, Social Security number, and so on. Anyone who gets that information can impersonate you to a computer, which doesn’t know any better.

Despite all of these problems, most authentication systems work most of the time. Even something as ridiculous as a faxed signature works, and can be legally binding. But no authentication system is perfect, and impersonation is always possible.

This lack of perfection is okay, though. Security is a trade-off, and any well-designed authentication system balances security with ease of use, customer acceptance, cost, and so on. More authentication isn’t always better. Banks make this trade-off when they don’t bother authenticating signatures on checks under amounts like $25,000; it’s cheaper to deal with fraud after the fact. Web sites make this trade-off when they use simple passwords instead of something more secure, and merchants make this trade-off when they don’t bother verifying your signature against your credit card. We make this trade-off when we accept police badges, Best Buy uniforms, and faxed signatures with only a cursory amount of verification.

Good authentication systems also balance false positives against false negatives. Impersonation is just one way these systems can fail; they can also fail to authenticate the real person. An ATM is better off allowing occasional fraud than denying legitimate account holders access to their money. On the other hand, a false positive in a nuclear launch system is much more dangerous; better to not launch the missiles.

Decentralized authentication systems work better than centralized ones. Open your wallet, and you’ll see a variety of physical tokens used to identify you to different people and organizations: your bank, your credit card company, the library, your health club, and your employer, as well as a catch-all driver’s license used to identify you in a variety of circumstances. That assortment is actually more secure than a single centralized identity card: each system must be broken individually, and breaking one doesn’t give the attacker access to everything. This is one of the reasons that centralized systems like REAL-ID make us less secure.

Finally, any good authentication system uses defense in depth. Since no authentication system is perfect, there need to be other security measures in place if authentication fails. That’s why all of a corporation’s assets and information aren’t available to anyone who can bluff his way into the corporate offices. That’s why credit card companies have expert systems analyzing suspicious spending patterns. And it’s why identity theft won’t be solved by making personal information harder to steal.

We can reduce the risk of impersonation, but it will always be with us; technology cannot “solve” it in any absolute sense. Like any security, the trick is to balance the trade-offs. Too little security, and criminals withdraw money from all our bank accounts. Too much security, and when Barack Obama calls to congratulate you on your reelection, you won’t believe it’s him.

This essay originally appeared in The Wall Street Journal.

Posted on January 9, 2009 at 2:04 PM

Reporting Unruly Football Fans via Text Message

This system is available in most NFL stadiums:

Fans still are urged to complain to an usher or call a security hotline in the stadium to report unruly behavior. But text-messaging lines—typically advertised on stadium scoreboards and on signs where fans gather—are aimed at allowing tipsters to surreptitiously alert security personnel via cellphone without getting involved with rowdies or missing part of a game.

As of this week, 29 of the NFL’s 32 teams had installed a text-message line or telephone hotline. Three clubs have neither: the New Orleans Saints, St. Louis Rams and Tennessee Titans. Ahlerich says he will “strongly urge” all clubs to have text lines in place for the 2009 season. A text line will be available at the Super Bowl for the first time when this season’s championship game is played at Tampa’s Raymond James Stadium on Feb. 1.

“If there’s someone around you that’s just really ruining your day, now you don’t have to sit there in silence,” says Jeffrey Miller, the NFL’s director of strategic security. “You can do this. It’s very easy. It’s quick. And you get an immediate response.”

The article talks a lot about false alarms and prank calls, but—in general—this seems like a good use of technology.

Posted on January 8, 2009 at 6:44 AM

My LA Times Op Ed on Photo ID Checks at Airport

Opinion

The TSA’s useless photo ID rules

No-fly lists and photo IDs are supposed to help protect the flying public from terrorists. Except that they don’t work.

By Bruce Schneier

August 28, 2008

The TSA is tightening its photo ID rules at airport security. Previously, people with expired IDs or who claimed to have lost their IDs were subjected to secondary screening. Then the Transportation Security Administration realized that meant someone on the government’s no-fly list—the list that is supposed to keep our planes safe from terrorists—could just fly with no ID.

Now, people without ID must also answer personal questions from their credit history to ascertain their identity. The TSA will keep records of who those ID-less people are, too, in case they’re trying to probe the system.

This may seem like an improvement, except that the photo ID requirement is a joke. Anyone on the no-fly list can easily fly whenever he wants. Even worse, the whole concept of matching passenger names against a list of bad guys has negligible security value.

How to fly, even if you are on the no-fly list: Buy a ticket in some innocent person’s name. At home, before your flight, check in online and print out your boarding pass. Then, save that web page as a PDF and use Adobe Acrobat to change the name on the boarding pass to your own. Print it again. At the airport, use the fake boarding pass and your valid ID to get through security. At the gate, use the real boarding pass in the fake name to board your flight.

The problem is that it is unverified passenger names that get checked against the no-fly list. At security checkpoints, the TSA just matches IDs to whatever is printed on the boarding passes. The airline checks boarding passes against tickets when people board the plane. But because no one checks ticketed names against IDs, the security breaks down.

This vulnerability isn’t new. It isn’t even subtle. I wrote about it in 2003, and again in 2006. I asked Kip Hawley, who runs the TSA, about it in 2007. Today, any terrorist smart enough to Google “print your own boarding pass” can bypass the no-fly list.

This gaping security hole would bother me more if the very idea of a no-fly list weren’t so ineffective. The system is based on the faulty notion that the feds have this master list of terrorists, and all we have to do is keep the people on the list off the planes.

That’s just not true. The no-fly list—a list of people so dangerous they are not allowed to fly yet so innocent we can’t arrest them—and the less dangerous “watch list” contain a combined 1 million names representing the identities and aliases of an estimated 400,000 people. There aren’t that many terrorists out there; if there were, we would be feeling their effects.

Almost all of the people stopped by the no-fly list are false positives. It catches innocents such as Ted Kennedy, whose name is similar to someone’s on the list, and Yusuf Islam (formerly Cat Stevens), who was on the list but no one knew why.

The no-fly list is a Kafkaesque nightmare for the thousands of innocent Americans who are harassed and detained every time they fly. Put on the list by unidentified government officials, they can’t get off. They can’t challenge the TSA about their status or prove their innocence. (The U.S. 9th Circuit Court of Appeals decided this month that no-fly passengers can sue the FBI, but that strategy hasn’t been tried yet.)

But even if these lists were complete and accurate, they wouldn’t work. Timothy McVeigh, the Unabomber, the D.C. snipers, the London subway bombers and most of the 9/11 terrorists weren’t on any list before they committed their terrorist acts. And if a terrorist wants to know if he’s on a list, the TSA has approved a convenient, $100 service that allows him to figure it out: the Clear program, which issues IDs to “trusted travelers” to speed them through security lines. Just apply for a Clear card; if you get one, you’re not on the list.

In the end, the photo ID requirement is based on the myth that we can somehow correlate identity with intent. We can’t. And instead of wasting money trying, we would be far safer as a nation if we invested in intelligence, investigation and emergency response—security measures that aren’t based on a guess about a terrorist target or tactic.

That’s the TSA: Not doing the right things. Not even doing right the things it does.

Posted on September 1, 2008 at 5:15 AM

Fever Screening at Airports

I’ve seen the IR screening guns at several airports, primarily in Asia. The idea is to keep out people with Bird Flu, or whatever the current fever scare is. This essay explains why it won’t work:

The bottom line is that this kind of remote fever sensing had poor positive predictive value, meaning that the proportion of people correctly identified as having fever was low, ranging from 10% to 16%. Thus there were a lot of false positives. Negative predictive value, the proportion of people classified by the IR device as not having fever who in fact did not have fever was high (97% to 99%), so not many people with fevers will be missed with the IR device. Predictive values depend not only on the accuracy of the device but also how prevalent fever is in the screened population. In the early days of a pandemic, fever prevalence will be very low, leading to low positive predictive value. The false positives produced at airport security would make the days of only taking off your shoes look good.

The idea of airport fever screening to keep a pandemic out has a lot of psychological appeal. Unfortunately its benefits are also only psychological: pandemic preparedness theater. There’s no magic bullet for warding off a pandemic. The best way to prepare for a pandemic or any other health threat is to have a robust and resilient public health infrastructure.
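The predictive-value arithmetic the excerpt describes is just Bayes’ rule, and it’s easy to see why low fever prevalence wrecks the positive predictive value. A short sketch (the sensitivity, specificity, and prevalence figures below are illustrative assumptions, not numbers from the essay):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive value of a screening test.

    sensitivity: P(test positive | fever)
    specificity: P(test negative | no fever)
    prevalence:  P(fever) in the screened population
    """
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)

    ppv = true_pos / (true_pos + false_pos)   # P(fever | test positive)
    npv = true_neg / (true_neg + false_neg)   # P(no fever | test negative)
    return ppv, npv

# Even a decent sensor fails when prevalence is low: with 90% sensitivity,
# 95% specificity, and 1% of travelers feverish, most positives are false.
ppv, npv = predictive_values(0.90, 0.95, 0.01)
```

With those assumed numbers the PPV comes out around 15%, squarely in the 10%–16% range the excerpt reports, while the NPV stays above 99%.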

Lots more science in the essay.

Posted on June 26, 2008 at 6:58 AM

Framing Computers Under the DMCA

Researchers from the University of Washington have demonstrated how lousy the MPAA/RIAA/etc. tactics are by successfully framing printers on their network. These printers, which can’t download anything, received nine takedown notices:

The researchers rigged the software agents to implicate three laserjet printers, which were then accused in takedown letters by the M.P.A.A. of downloading copies of “Iron Man” and the latest Indiana Jones film.

Research, including the paper, here.

Posted on June 9, 2008 at 6:47 AM

Sky Marshals on the No-Fly List

If this weren’t so sad, it would be funny:

The problem of federal air marshals’ (FAM) names matching those of suspected terrorists on the no-fly list has persisted for years, say air marshals familiar with the situation.

One air marshal said it has been “a major problem, where guys are denied boarding by the airline.”

“In some cases, planes have departed without any coverage because the airline employees were adamant they would not fly,” the air marshal said. “I’ve seen guys actually being denied boarding.”

A second air marshal says one agent “has been getting harassed for six years because his exact name is on the no-fly list.”

Another article.

Seriously—if these people can’t get their names off the list, what hope do the rest of us have? Not that the no-fly list has any real value, anyway.

Posted on May 2, 2008 at 7:14 AM

Web Entrapment

Frightening sting operation by the FBI. They posted links to supposed child porn videos on boards frequented by those types, and obtained search warrants based on access attempts.

This seems like incredibly flimsy evidence. Someone could post the link as an embedded image, or send out e-mail with the link embedded, and completely mess with the FBI’s data—and the poor innocents’ lives. Such are the problems when the mere clicking on a link is justification for a warrant.

See also this Slashdot thread and this article.

Posted on March 27, 2008 at 2:46 PM

Israel Implementing IFF System for Commercial Aircraft

Israel is implementing an IFF (identification, friend or foe) system for commercial aircraft, designed to differentiate legitimate planes from terrorist-controlled planes.

The news article implies that it’s a basic challenge-and-response system. Ground control issues some kind of alphanumeric challenge to the plane. The pilot types the challenge into some hand-held computer device, and reads back the reply. Authentication is achieved by 1) physical possession of the device, and 2) typing a legitimate PIN into the device to activate it.

The article talks about a distress mode, where the pilot signals that a terrorist is holding a gun to his head. Likely, that’s done by typing a special distress PIN into the device, and reading back whatever the screen displays.
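The article doesn’t describe the actual algorithm, but the exchange it implies — a keyed device, a normal PIN, and a distress PIN that produces an equally plausible response — can be sketched with an HMAC. Everything below (key handling, response length, PIN mixing) is my assumption, not the real system:

```python
import hashlib
import hmac

def device_response(key: bytes, pin: str, challenge: str) -> str:
    """Hypothetical read-back code for a ground-control challenge.

    Mixing the PIN into the MAC means the distress PIN yields a
    different, but indistinguishable-looking, response.
    """
    mac = hmac.new(key, f"{pin}:{challenge}".encode(), hashlib.sha256)
    return mac.hexdigest()[:8].upper()  # short enough to read over the radio

def ground_check(key: bytes, normal_pin: str, distress_pin: str,
                 challenge: str, response: str) -> str:
    """Ground control tests the response against both PINs."""
    if hmac.compare_digest(device_response(key, normal_pin, challenge), response):
        return "authenticated"
    if hmac.compare_digest(device_response(key, distress_pin, challenge), response):
        return "distress"
    return "reject"
```

The point of the design is that a hijacker watching the pilot type and read back sees nothing unusual; only ground control, which knows both PINs, can tell the two responses apart.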

The military has had this sort of system—first paper-based, and eventually computer-based—for decades. The critical issue with using this on commercial aircraft is how to deal with user error. The system has to be easy to use, and the devices hard to lose, so that there won’t be a lot of false alarms.

Posted on March 10, 2008 at 12:24 PM

Locked Call Boxes and Banned Geiger Counters

Fire Engineering magazine points out that fire alarms used to be kept locked to prevent false alarms:

Q: Prior to 1870, street corner fire alarm pull boxes were kept locked. Why were they kept locked and how did a person gain access to ‘pull the box?’

A: They were kept locked due to false alarms. Nearby shopkeepers or beat cops carried the keys.

According to Robert Cromie in The Great Chicago Fire (Thomas Nelson: 1994, p. 33), this may have been one reason for the slow response to the fire:

William Lee, the O’Leary’s neighbor, rushed into Goll’s drugstore, and gasped out a request for the key to the alarm box. The new boxes were attached to the walls of stores or other convenient locations. To prevent false alarms and crank calls, the boxes were locked, and the keys given to trustworthy citizens nearby.

What happened when Lee made his request is not clear. Only one fact emerges from the confusion: No alarm was registered from any box in the vicinity of the fire until it was too late to do any good.

Apparently, Lee said that Goll refused to give him the key because he’d already seen a fire engine go past; Goll said he actually did pull the alarm, twice, but if so it must not have worked.

(There’s more about what sounds like a really bad communications failure, but it’s a little too hard for me to read on the Amazon website.)

Here’s more:

But did you know that the fire burned for over half an hour before an alarm was ever sounded? Alarm boxes were actually kept locked in those days, to prevent false alarms!

When the first alarm box was finally opened and the lever pulled, the alarm somehow did not get through. The fire dispatcher was playing a guitar for a couple of girls at the time and he kept on serenely strumming, completely unawares. After the fire had been growing and blazing for nearly an hour a watchman screamed at the dispatcher to sound an alarm, which he did, and the first three engines, two hose wagons, and two hook and ladders were sent out—but in the wrong direction!

At first the dispatcher refused to sound another alarm, hoping to avoid further confusion.

Compare this with a proposed law in New York City that will require people to get a license before they can buy chemical, biological, or radiological attack detectors:

The legislation—which was proposed by the Bloomberg administration and would be the first of its kind in the nation—would empower the police commissioner to decide whether to grant a free five-year permit to individuals and companies seeking to “possess or deploy such detectors.” Common smoke alarms and carbon monoxide detectors would not be covered by the law, the Police Department said. Violations of the law would be considered a misdemeanor.

Why does the administration think such a law is necessary? Richard A. Falkenrath, the Police Department’s deputy commissioner for counterterrorism, told the Council’s Public Safety Committee at a hearing today, “Our mutual goal is to prevent false alarms and unnecessary public concern by making sure that we know where these detectors are located and that they conform to standards of quality and reliability.”

The law would also require anyone using such a detector—regardless of whether they have obtained the required permit—to notify the Police Department if the detector alerted them to a biological, chemical or radiological agent. “In this way, emergency response personnel will be able to assess threats and take appropriate action based on the maximum information available,” Dr. Falkenrath said.

False positives are a problem with any detection system, and certainly putting Geiger counters in the hands of everyone will mean a lot of amateurs calling false alarms into the police. But the way to handle that isn’t to ban Geiger counters. (Just as the way to deal with false fire alarms 100 years ago wasn’t to lock the alarm boxes.) The way to deal with it is by 1) putting a system in place to quickly separate the real alarms from the false alarms, and 2) prosecuting those who maliciously sound false alarms.

We don’t want to encourage people to report everything; that’s too many false alarms. Nor do we want to discourage them from reporting things they feel are serious. In the end, it’s the job of the police to figure out what’s what. I said this in an essay last year:

…these incidents only reinforce the need to realistically assess, not automatically escalate, citizen tips. In criminal matters, law enforcement is experienced in separating legitimate tips from unsubstantiated fears, and allocating resources accordingly; we should expect no less from them when it comes to terrorism.

EDITED TO ADD (1/18): Two commenters pointed to a 1938 invention: an alarm box that locks up your arm until the fire department sets you free. Yikes.

Posted on January 18, 2008 at 7:44 AM

