Entries Tagged "false positives"

Terrorists, Steganography, and False Alarms

Remember all those stories about the terrorists hiding messages in television broadcasts? They were all false alarms:

The first sign that something was amiss came a few days before Christmas Eve 2003. The US department of homeland security raised the national terror alert level to “high risk”. The move triggered a ripple of concern throughout the airline industry and nearly 30 flights were grounded, including long hauls between Paris and Los Angeles and subsequently London and Washington.

But in recent weeks, US officials have made a startling admission: the key intelligence that prompted the security alert was seriously flawed. CIA analysts believed they had detected hidden terrorist messages in al-Jazeera television broadcasts that identified flights and buildings as targets. In fact, what they had seen were the equivalent of faces in clouds – random patterns all too easily over-interpreted.

It’s a signal-to-noise issue. If you look at enough noise, you’re going to find signal just by random chance. It’s only signal that rises above random chance that’s valuable.
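To make the signal-to-noise point concrete, here is a minimal Python sketch of how many “matches” pure noise produces just by chance; the eight-bit codeword and the stream sizes are arbitrary assumptions for illustration, not anything from the actual intelligence:

    import random

    # Sketch: scan random bit streams for a short "codeword" and count how
    # often it appears purely by chance. All numbers are made up.
    random.seed(0)

    codeword = "10110111"             # hypothetical 8-bit pattern an analyst is hunting for
    p_match = 1 / 2 ** len(codeword)  # chance of a match at any single position: 1/256

    streams, stream_len = 200, 5_000  # e.g. 200 broadcasts, 5,000 bits examined in each
    hits = 0
    for _ in range(streams):
        bits = "".join(random.choice("01") for _ in range(stream_len))
        hits += sum(1 for i in range(stream_len - len(codeword) + 1)
                    if bits[i:i + len(codeword)] == codeword)

    expected = streams * (stream_len - len(codeword) + 1) * p_match
    print(f"chance matches found: {hits}, expected from noise alone: {expected:.0f}")
    # Thousands of "hits" turn up even though the data is pure noise; a match
    # is only meaningful if it occurs far more often than this baseline.

The baseline is the point: any detector pointed at enough data will fire, so what matters is how far the observed rate rises above what chance alone predicts.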

And the whole notion of terrorists using steganography to embed secret messages was ludicrous from the beginning. It makes no sense to communicate with terrorist cells this way, given the wide variety of more efficient anonymous communications channels.

I first wrote about this in September of 2001.

Posted on August 15, 2005 at 11:03 AM

Shoot-to-Kill

We’ve recently learned that London’s Metropolitan Police has a shoot-to-kill policy when dealing with suspected suicide terrorists. The theory is that only a direct headshot will kill the terrorist immediately, and thus destroy the ability to execute a bombing attack.

Roy Ramm, former Met Police specialist operations commander, said the rules for confronting potential suicide bombers had recently changed to “shoot to kill”….

Mr Ramm said the danger of shooting a suspected suicide bomber in the body was that it could detonate a bomb they were carrying on them.

“The fact is that when you’re dealing with suicide bombers the only way you can stop them effectively—and protect yourself—is to try for a head-shot,” he said.

This policy is based on the extremely short-sighted assumption that a terrorist needs to push buttons to make a bomb explode. In fact, ever since World War I, the most common type of bomb carried by a person has been the hand grenade. It is entirely conceivable, especially when a shoot-to-kill policy is known to be in effect, that suicide bombers will use the same kind of dead-man’s trigger on their bombs: a detonator that is activated when a button is released, rather than when it is pushed.

This is a difficult one. Whatever policy you choose, the terrorists will adapt to make that policy the wrong one.

The police are now sorry they accidentally killed an innocent man they suspected of being a suicide bomber, but I can certainly understand the mistake. In the end, the best solution is to train police officers and then leave the decision to them. But honestly, policies that are more likely to result in living suspects who can be interrogated—and that recover well from false alarms—are better than policies that are more likely to result in corpses.

EDITED TO ADD these comments by Nicholas Weaver:

“One other thing: The suspect was on the ground, and immobilized. Thus the decision was made to shoot the suspect, repeatedly (7 times) in the head, based on the perception that he could have been a suicide attacker (who, despite being a suicide attacker, wasn’t holding a dead-man’s switch. Or heck, wire up the bomb to a $50 heart-rate monitor).

“If this is policy, it is STUPID: There is an easy way for the attackers to counter it, and when you have a subway execution of an innocent man, the damage (in the hearts and minds of British Muslims) is immense.

“One thing to remember:

“These were NON-uniformed officers, and the suspect was Brazilian (and probably didn’t speak very good English).

“Why did he run? What would YOU do if three individuals accosted you, speaking a language which you were unfamiliar with, drawing weapons? You would RUN LIKE HELL!

“I find the blaming of the victim (‘but he was running!’) reprehensible.”

ANOTHER EDIT: The consensus seems to be that he spoke English well enough. I don’t think we can blame the officers without a whole lot more details about what happened, and possibly not even then. Clearly they were under a lot of stress, and made a split-second decision.

But I think we can reasonably criticize the shoot-to-kill policy that the officers were following. That policy is a threat to our security, and our society.

Posted on July 25, 2005 at 1:59 PM

Profiling

There is a great discussion about profiling going on in the comments to the previous post. To help, here is what I wrote on the subject in Beyond Fear (pp. 133-7):

Good security has people in charge. People are resilient. People can improvise. People can be creative. People can develop on-the-spot solutions. People can detect attackers who cheat, and can attempt to maintain security despite the cheating. People can detect passive failures and attempt to recover. People are the strongest point in a security process. When a security system succeeds in the face of a new or coordinated or devastating attack, it’s usually due to the efforts of people.

On 14 December 1999, Ahmed Ressam tried to enter the U.S. by ferryboat from Victoria Island, British Columbia. In the trunk of his car, he had a suitcase bomb. His plan was to drive to Los Angeles International Airport, put his suitcase on a luggage cart in the terminal, set the timer, and then leave. The plan would have worked had someone not been vigilant.

Ressam had to clear customs before boarding the ferry. He had fake ID, in the name of Benni Antoine Noris, and the computer cleared him based on this ID. He was allowed to go through after a routine check of his car’s trunk, even though he was wanted by the Canadian police. On the other side of the Strait of Juan de Fuca, at Port Angeles, Washington, Ressam was approached by U.S. customs agent Diana Dean, who asked some routine questions and then decided that he looked suspicious. He was fidgeting, sweaty, and jittery. He avoided eye contact. In Dean’s own words, he was acting “hinky.” More questioning—there was no one else crossing the border, so two other agents got involved—and more hinky behavior. Ressam’s car was eventually searched, and he was finally discovered and captured. It wasn’t any one thing that tipped Dean off; it was everything encompassed in the slang term “hinky.” But the system worked. The reason there wasn’t a bombing at LAX around Christmas in 1999 was because a knowledgeable person was in charge of security and paying attention.

There’s a dirty word for what Dean did that chilly afternoon in December, and it’s profiling. Everyone does it all the time. When you see someone lurking in a dark alley and change your direction to avoid him, you’re profiling. When a storeowner sees someone furtively looking around as she fiddles inside her jacket, that storeowner is profiling. People profile based on someone’s dress, mannerisms, tone of voice … and yes, also on their race and ethnicity. When you see someone running toward you on the street with a bloody ax, you don’t know for sure that he’s a crazed ax murderer. Perhaps he’s a butcher who’s actually running after the person next to you to give her the change she forgot. But you’re going to make a guess one way or another. That guess is an example of profiling.

To profile is to generalize. It’s taking characteristics of a population and applying them to an individual. People naturally have an intuition about other people based on different characteristics. Sometimes that intuition is right and sometimes it’s wrong, but it’s still a person’s first reaction. How good this intuition is as a countermeasure depends on two things: how accurate the intuition is and how effective it is when it becomes institutionalized or when the profile characteristics become commonplace.

One of the ways profiling becomes institutionalized is through computerization. Instead of Diana Dean looking someone over, a computer looks the profile over and gives it some sort of rating. Generally profiles with high ratings are further evaluated by people, although sometimes countermeasures kick in based on the computerized profile alone. This is, of course, more brittle. The computer can profile based only on simple, easy-to-assign characteristics: age, race, credit history, job history, et cetera. Computers don’t get hinky feelings. Computers also can’t adapt the way people can.

Profiling works better if the characteristics profiled are accurate. If erratic driving is a good indication that the driver is intoxicated, then that’s a good characteristic for a police officer to use to determine who he’s going to pull over. If furtively looking around a store or wearing a coat on a hot day is a good indication that the person is a shoplifter, then those are good characteristics for a store owner to pay attention to. But if wearing baggy trousers isn’t a good indication that the person is a shoplifter, then the store owner is going to spend a lot of time paying undue attention to honest people with lousy fashion sense.

In common parlance, the term “profiling” doesn’t refer to these characteristics. It refers to profiling based on characteristics like race and ethnicity, and institutionalized profiling based on those characteristics alone. During World War II, the U.S. rounded up over 100,000 people of Japanese origin who lived on the West Coast and locked them in camps (prisons, really). That was an example of profiling. Israeli border guards spend a lot more time scrutinizing Arab men than Israeli women; that’s another example of profiling. In many U.S. communities, police have been known to stop and question people of color driving around in wealthy white neighborhoods (commonly referred to as “DWB”—Driving While Black). In all of these cases you might possibly be able to argue some security benefit, but the trade-offs are enormous: Honest people who fit the profile can get annoyed, or harassed, or arrested, when they’re assumed to be attackers.

For democratic governments, this is a major problem. It’s just wrong to segregate people into “more likely to be attackers” and “less likely to be attackers” based on race or ethnicity. It’s wrong for the police to pull a car over just because its black occupants are driving in a rich white neighborhood. It’s discrimination.

But people make bad security trade-offs when they’re scared, which is why we saw Japanese internment camps during World War II, and why there is so much discrimination against Arabs in the U.S. going on today. That doesn’t make it right, and it doesn’t make it effective security. Writing about the Japanese internment, for example, a 1983 commission reported that the causes of the incarceration were rooted in “race prejudice, war hysteria, and a failure of political leadership.” But just because something is wrong doesn’t mean that people won’t continue to do it.

Ethics aside, institutionalized profiling fails because real attackers are so rare: Active failures will be much more common than passive failures. The great majority of people who fit the profile will be innocent. At the same time, some real attackers are going to deliberately try to sneak past the profile. During World War II, a Japanese American saboteur could try to evade imprisonment by pretending to be Chinese. Similarly, an Arab terrorist could dye his hair blond, practice an American accent, and so on.

Profiling can also blind you to threats outside the profile. If U.S. border guards stop and search everyone who’s young, Arab, and male, they’re not going to have the time to stop and search all sorts of other people, no matter how hinky they might be acting. On the other hand, if the attackers are of a single race or ethnicity, profiling is more likely to work (although the ethics are still questionable). It makes real security sense for El Al to spend more time investigating young Arab males than it does for them to investigate Israeli families. In Vietnam, American soldiers never knew which local civilians were really combatants; sometimes killing all of them was the security solution they chose.

If a lot of this discussion is abhorrent, as it probably should be, it’s the trade-offs in your head talking. It’s perfectly reasonable to decide not to implement a countermeasure not because it doesn’t work, but because the trade-offs are too great. Locking up every Arab-looking person will reduce the potential for Muslim terrorism, but no reasonable person would suggest it. (It’s an example of “winning the battle but losing the war.”) In the U.S., there are laws that prohibit police profiling by characteristics like ethnicity, because we believe that such security measures are wrong (and not simply because we believe them to be ineffective).

Still, no matter how much a government makes it illegal, profiling does occur. It occurs at an individual level, at the level of Diana Dean deciding which cars to wave through and which ones to investigate further. She profiled Ressam based on his mannerisms and his answers to her questions. He was Algerian, and she certainly noticed that. However, this was before 9/11, and the reports of the incident clearly indicate that she thought he was a drug smuggler; ethnicity probably wasn’t a key profiling factor in this case. In fact, this is one of the most interesting aspects of the story. That intuitive sense that something was amiss worked beautifully, even though everybody made a wrong assumption about what was wrong. Human intuition detected a completely unexpected kind of attack. Humans will beat computers at hinkiness-detection for many decades to come.

And done correctly, this intuition-based sort of profiling can be an excellent security countermeasure. Dean needed to have the training and the experience to profile accurately and properly, without stepping over the line and profiling illegally. The trick here is to make sure perceptions of risk match the actual risks. If those responsible for security profile based on superstition and wrong-headed intuition, or by blindly following a computerized profiling system, profiling won’t work at all. And even worse, it actually can reduce security by blinding people to the real threats. Institutionalized profiling can ossify a mind, and a person’s mind is the most important security countermeasure we have.

A couple of other points (not from the book):

  • Whenever you design a security system with two ways through—an easy way and a hard way—you invite the attacker to take the easy way. Profile for young Arab males, and you’ll get terrorists that are old non-Arab females. This paper looks at the security effectiveness of profiling versus random searching.
  • If we are going to increase security against terrorism, the young Arab males living in our country are precisely the people we want on our side. Discriminating against them in the name of security is not going to make them more likely to help.
  • Despite what many people think, terrorism is not confined to young Arab males. Shoe-bomber Richard Reid was British. Germaine Lindsay, one of the 7/7 London bombers, was Afro-Caribbean. Here are some more examples:

    In 1986, a 32-year-old Irish woman, pregnant at the time, was about to board an El Al flight from London to Tel Aviv when El Al security agents discovered an explosive device hidden in the false bottom of her bag. The woman’s boyfriend—the father of her unborn child—had hidden the bomb.

    In 1987, a 70-year-old man and a 25-year-old woman—neither of whom were Middle Eastern—posed as father and daughter and brought a bomb aboard a Korean Air flight from Baghdad to Thailand. En route to Bangkok, the bomb exploded, killing all on board.

    In 1999, men dressed as businessmen (and one dressed as a Catholic priest) turned out to be terrorist hijackers, who forced an Avianca flight to divert to an airstrip in Colombia, where some passengers were held as hostages for more than a year and a half.

    The 2002 Bali terrorists were Indonesian. The Chechen terrorists who downed the Russian planes were women. Timothy McVeigh and the Unabomber were Americans. The Basque terrorists are Basque, and Irish terrorists are Irish. The Tamil Tigers are Sri Lankan.

    And many Muslims are not Arabs. Even worse, almost everyone who is Arab is not a terrorist—many people who look Arab are not even Muslims. So not only is there a large number of false negatives—terrorists who don’t meet the profile—but there is an enormous number of false positives: innocents who do meet the profile.
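To put rough numbers on that last point, here is a minimal back-of-the-envelope Python sketch; the population size, attacker count, and accuracy figures are illustrative assumptions, not real data:

    # Even a very accurate profile produces almost nothing but false positives
    # when real attackers are extremely rare. All numbers below are assumptions.

    population  = 300_000_000      # assumed screening population
    attackers   = 100              # assumed number of real attackers in it
    sensitivity = 0.99             # assumed P(flagged | attacker)
    specificity = 0.999            # assumed P(not flagged | innocent)

    true_positives  = attackers * sensitivity
    false_positives = (population - attackers) * (1 - specificity)

    precision = true_positives / (true_positives + false_positives)
    print(f"people flagged by the profile: {true_positives + false_positives:,.0f}")
    print(f"of whom real attackers: {true_positives:.0f} ({precision:.4%})")
    # Roughly 300,000 innocents are flagged for every ~99 attackers caught, and
    # the ~1 attacker the profile misses is a false negative it never sees.

Even with accuracy far better than any real-world profile could achieve, nearly every person the profile flags is innocent; that is simply what the base rate does.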

Posted on July 22, 2005 at 3:12 PM

White Powder Anthrax Hoaxes

Earlier this month, there was an anthrax scare at the Indonesian embassy in Australia. Someone sent them some white powder in an envelope, which was scary enough. Then it tested positive for bacillus. The building was decontaminated, and the staff was quarantined for twelve hours. By then, tests came back negative for anthrax.

A lot of thought went into this false alarm. The attackers obviously knew that their white powder would be quickly tested for the presence of a bacterium of the bacillus family (of which anthrax is a member), but that the bacillus would have to be cultured for a couple of days before a more exact identification could be made. So even without any anthrax, they managed to cause two days of terror.

At a guess, this incident had something to do with Schapelle Corby (yet another security-related story). Corby was arrested in Bali for smuggling drugs into the country. Her defense, widely believed in Australia, was that she was an unwitting dupe of the real drug smugglers. Supposedly, the smugglers work as airport baggage handlers and slip packages into checked baggage and remove them at the far end before reclaim. In any case, Bali has very strict drug laws and Corby was recently convicted in what Australians consider a miscarriage of justice. There have been news reports saying that there is no connection, but it just seems too obvious.

In an interesting side note, the media have revealed for the first time that 360 “white powder” incidents have taken place since 11 September 2001. This news had been suppressed by the government, which had issued D notices to the media for all such incidents. So there has been one such incident approximately every four days—an astonishing number, given Australia’s otherwise low crime rate.

Posted on June 14, 2005 at 2:41 PM

The Emergence of a Global Infrastructure for Mass Registration and Surveillance

The International Campaign Against Mass Surveillance has issued a report (dated April 2005): “The Emergence of a Global Infrastructure for Mass Registration and Surveillance.” It’s a chilling assessment of the current international trends towards global surveillance. Most of it you will have seen before, although it’s good to have everything in one place. I am particularly pleased that the report explicitly states that these measures do not make us any safer, but only create the illusion of security.

The global surveillance initiatives that governments have embarked upon do not make us more secure. They create only the illusion of security.

Sifting through an ocean of information with a net of bias and faulty logic, they yield outrageous numbers of false positives—and false negatives. The dragnet approach might make the public feel that something is being done, but the dragnet is easily circumvented by determined terrorists who are either not known to authorities, or who use identity theft to evade them.

For the statistically large number of people that will be wrongly identified or wrongly assessed as a risk under the system, the consequences can be dire.

At the same time, the democratic institutions and protections, which would be the safeguards of individuals’ personal security, are being weakened. And national sovereignty and the ability of national governments to protect citizens against the actions of other states (when they are willing) are being compromised as security functions become more and more deeply integrated.

The global surveillance dragnet diverts crucial resources and efforts away from the kind of investments that would make people safer. What is required is good information about specific threats, not crude racial profiling and useless information on the nearly 100 percent of the population that poses no threat whatsoever.

Posted on April 29, 2005 at 8:54 AM

Failures of Airport Screening

According to the AP:

Security at American airports is no better under federal control than it was before the Sept. 11 attacks, a congressman says two government reports will conclude.

The Government Accountability Office, the investigative arm of Congress, and the Homeland Security Department’s inspector general are expected to release their findings soon on the performance of Transportation Security Administration screeners.

This finding will not surprise anyone who has flown recently. How does anyone expect competent security from screeners who don’t know the difference between books and books of matches? Only two books of matches are now allowed on flights; you can take as many reading books as you can carry.

The solution isn’t to privatize the screeners, just as the solution in 2001 wasn’t to make them federal employees. It’s a much more complex problem.

I wrote about it in Beyond Fear (pages 153-4):

No matter how much training they get, airport screeners routinely miss guns and knives packed in carry-on luggage. In part, that’s the result of human beings having developed the evolutionary survival skill of pattern matching: the ability to pick out patterns from masses of random visual data. Is that a ripe fruit on that tree? Is that a lion stalking quietly through the grass? We are so good at this that we see patterns in anything, even if they’re not really there: faces in inkblots, images in clouds, and trends in graphs of random data. Generating false positives helped us stay alive; maybe that wasn’t a lion that your ancestor saw, but it was better to be safe than sorry.

Unfortunately, that survival skill also has a failure mode. As talented as we are at detecting patterns in random data, we are equally terrible at detecting exceptions in uniform data. The quality-control inspector at Spacely Sprockets, staring at a production line filled with identical sprockets looking for the one that is different, can’t do it. The brain quickly concludes that all the sprockets are the same, so there’s no point paying attention. Each new sprocket confirms the pattern. By the time an anomalous sprocket rolls off the assembly line, the brain simply doesn’t notice it. This psychological problem has been identified in inspectors of all kinds; people can’t remain alert to rare events, so they slip by.

The tendency for humans to view similar items as identical makes it clear why airport X-ray screening is so difficult. Weapons in baggage are rare, and the people studying the X-rays simply lose the ability to see the gun or knife. (And, at least before 9/11, there was enormous pressure to keep the lines moving rather than double-check bags.) Steps have been put in place to try to deal with this problem: requiring the X-ray screeners to take frequent breaks, artificially imposing the image of a weapon onto a normal bag in the screening system as a test, slipping a bag with a weapon into the system so that screeners learn it can happen and must expect it. Unfortunately, the results have not been very good.

This is an area where the eventual solution will be a combination of machine and human intelligence. Machines excel at detecting exceptions in uniform data, so it makes sense to have them do the boring repetitive tasks, eliminating many, many bags while having a human sort out the final details. Think about the sprocket quality-control inspector: If he sees 10,000 negatives, he’s going to stop seeing the positives. But if an automatic system shows him only 100 negatives for every positive, there’s a greater chance he’ll see them.
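Here is a minimal Python sketch of the ratio argument in that excerpt; the bag counts and the machine’s assumed detection and false-alarm rates are purely illustrative:

    # A machine pre-filter that discards most obviously-clean bags changes the
    # ratio of threats to harmless bags the human screener actually sees.
    # All numbers below are assumptions for illustration.

    bags           = 1_000_000   # assumed bags screened
    threats        = 10          # assumed bags actually containing a weapon
    machine_recall = 1.0         # assume the machine refers every real threat to the human
    machine_fpr    = 0.001       # assume it also refers 0.1% of harmless bags

    referred_threats  = threats * machine_recall
    referred_harmless = (bags - threats) * machine_fpr

    print(f"without machine: 1 threat per {bags // threats:,} bags seen by the screener")
    print(f"with machine:    1 threat per {int(referred_harmless // referred_threats):,} bags seen")
    # The screener goes from hunting 1-in-100,000 anomalies to roughly 1-in-100,
    # which is the regime where human attention stays useful.

The machine does not have to be right about what a weapon looks like; it only has to throw away the overwhelming majority of clearly harmless bags so the human sees a workable ratio.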

Paying the screeners more will attract a smarter class of worker, but it won’t solve the problem.

Posted on April 19, 2005 at 9:22 AM

Radiation Detectors in Ports

According to Reuters:

The United States is stepping up investment in radiation detection devices at its ports to thwart attempts to smuggle a nuclear device or dirty bomb into the country, a Senate committee heard on Wednesday.

Robert Bonner, commissioner of U.S. Customs and Border Protection, told a Senate subcommittee on homeland security that since the first such devices were installed in May 2000, they had picked up over 10,000 radiation hits in vehicles or cargo shipments entering the country. All proved harmless.

It amazes me that 10,000 false alarms—instances where the security system failed—are being touted as proof that the system is working.

As an example of how the system was working, Bonner said on Jan. 26, 2005, a machine got a hit from a South Korean vessel at the Los Angeles seaport. The radiation turned out to be emanating from the ship’s fire extinguishing system and was no threat to safety.

That sounds like an example of how the system is not working to me. Sometimes I wish that those in charge of security actually understood security.

Posted on March 16, 2005 at 7:51 AM

Sneaking Items Aboard Aircraft

A Pennsylvania Supreme Court Justice faces a fine—although no criminal charges at the moment—for trying to sneak a knife aboard an aircraft.

Saylor, 58, and his wife entered a security checkpoint Feb. 4 on a trip to Philadelphia when screeners found a small Swiss Army-style knife attached to his key chain.

A police report said he was told the item could not be carried onto a plane and that he needed to place the knife into checked luggage or make other arrangements.

When Saylor returned a short time later to be screened a second time, an X-ray machine detected a knife inside his carry-on luggage, police said.

There are two points worth making here. One: ridiculous rules have a way of turning people into criminals. And two: this is an example of a security failure, not a security success.

Security systems fail in one of two ways. They can fail to stop the bad guy, and they can mistakenly stop the good guy. The TSA likes to measure its success by looking at the forbidden items it has prevented from being carried onto aircraft, but that’s wrong. Every time the TSA takes a pocketknife from an innocent person, that’s a security failure. It’s a false alarm. The system has prevented access where no prevention was required. This, coupled with the widespread belief that the bad guys will find a way around the system, demonstrates what a colossal waste of money it is.

Posted on February 28, 2005 at 8:00 AM
