Entries Tagged "sensors"


Software Failure Causes Airport Evacuation

Last month I wrote about airport passenger screening, and mentioned that the X-ray equipment inserts “test” bags into the stream in order to keep screeners more alert. That system failed pretty badly earlier this week at Atlanta’s Hartsfield-Jackson Airport, when a false alarm resulted in a two-hour evacuation of the entire airport.

The screening system injects test images onto the screen. Normally the software flashes the words “This is a test” on the screen after a brief delay, but this time it failed to do so. The screener noticed the image (of a “suspicious device,” according to CNN) and, per procedure, screeners manually checked the bags on the conveyor belt for it. They couldn’t find it, of course, so they evacuated the airport and spent two hours vainly searching for it.

Hartsfield-Jackson is the country’s busiest passenger airport. It’s Delta’s hub city. The delays were felt across the country for the rest of the day.

Okay, so what went wrong here? Clearly the software failed. Just as clearly the screener procedures didn’t fail—everyone did what they were supposed to do.

What is less obvious is that the system failed. It failed, because it was not designed to fail well. A small failure—in this case, a software glitch in a single X-ray machine—cascaded in such a way as to shut down the entire airport. This kind of failure magnification is common in poorly designed security systems. Better would be for there to be individual X-ray machines at the gates—I’ve seen this design at several European airports—so that when there’s a problem the effects are restricted to that gate.
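A toy model makes the magnification concrete. The passenger and gate counts below are invented for illustration, not Hartsfield-Jackson's actual figures:

```python
# Toy model of failure magnification. All numbers are invented.
PASSENGERS = 30_000   # travelers in the terminal when the glitch hits
GATES = 30

def passengers_disrupted(centralized: bool) -> int:
    """How many passengers a single machine failure affects."""
    if centralized:
        return PASSENGERS          # one glitch empties the whole airport
    return PASSENGERS // GATES     # a per-gate machine strands one gate

print(passengers_disrupted(centralized=True))    # 30000
print(passengers_disrupted(centralized=False))   # 1000
```

Same glitch, thirty times less disruption: that is what "failing well" buys you.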

Of course, this distributed security solution would be more expensive. But I’m willing to bet it would be cheaper overall, taking into account the cost of occasionally clearing out an airport.

Posted on April 21, 2006 at 12:49 PM

Airport Passenger Screening

It seems like every time someone tests airport security, airport security fails. In tests between November 2001 and February 2002, screeners missed 70 percent of knives, 30 percent of guns and 60 percent of (fake) bombs. And recently (see also this), testers were able to smuggle bomb-making parts through airport security in 21 of 21 attempts. It makes you wonder why we’re all putting our laptops in a separate bin and taking off our shoes. (Although we should all be glad that Richard Reid wasn’t the “underwear bomber.”)

The failure to detect bomb-making parts is easier to understand. Break up something into small enough parts, and it’s going to slip past the screeners pretty easily. The explosive material won’t show up on the metal detector, and the associated electronics can look benign when disassembled. This isn’t even a new problem. It’s widely believed that the Chechen women who blew up the two Russian planes in August 2004 probably smuggled their bombs aboard the planes in pieces.

But guns and knives? That surprises most people.

Airport screeners have a difficult job, primarily because the human brain isn’t naturally adapted to the task. We’re wired for visual pattern matching, and are great at picking out something we know to look for—for example, a lion in a sea of tall grass.

But we’re much less adept at detecting random exceptions in uniform data. Faced with an endless stream of identical objects, the brain quickly concludes that everything is identical and there’s no point in paying attention. By the time the exception comes around, the brain simply doesn’t notice it. This psychological phenomenon isn’t just a problem in airport screening: It’s been identified in inspections of all kinds, and is why casinos move their dealers around so often. The tasks are simply mind-numbing.

To make matters worse, the smuggler can try to exploit the system. He can position the weapons in his baggage just so. He can try to disguise them by adding other metal items to distract the screeners. He can disassemble bomb parts so they look nothing like bombs. Against a bored screener, he has the upper hand.

And, as has been pointed out again and again in essays on the ludicrousness of post-9/11 airport security, improvised weapons are a huge problem. A rock, a battery for a laptop, a belt, the extension handle off a wheeled suitcase, fishing line, the bare hands of someone who knows karate … the list goes on and on.

Technology can help. X-ray machines already randomly insert “test” bags into the stream—keeping screeners more alert. Computer-enhanced displays are making it easier for screeners to find contraband items in luggage, and eventually the computers will be able to do most of the work. It makes sense: Computers excel at boring repetitive tasks. They should do the quick sort, and let the screeners deal with the exceptions.
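The division of labor suggested above can be sketched as a simple triage loop. The scoring function and threshold here are placeholders, not any real screening system's interface:

```python
# Sketch of that division of labor: software does the boring first pass
# and only flagged bags reach a human screener.
def machine_score(bag: dict) -> float:
    """Placeholder anomaly score in [0, 1]; a real system would analyze
    the X-ray image. Here we just read a precomputed value."""
    return bag["score"]

def triage(bags, threshold=0.2):
    """Clear obviously benign bags; queue the rest for a screener."""
    cleared, for_human = [], []
    for bag in bags:
        (for_human if machine_score(bag) >= threshold else cleared).append(bag)
    return cleared, for_human

bags = [{"id": 1, "score": 0.05}, {"id": 2, "score": 0.90}, {"id": 3, "score": 0.10}]
cleared, flagged = triage(bags)
print([b["id"] for b in flagged])   # [2]
```

The humans see only the exceptions, which is exactly the kind of task their brains are good at.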

Sure, there’ll be a lot of false alarms, and some bad things will still get through. But it’s better than the alternative.

And it’s likely good enough. Remember the point of passenger screening. We’re not trying to catch the clever, organized, well-funded terrorists. We’re trying to catch the amateurs and the incompetent. We’re trying to catch the unstable. We’re trying to catch the copycats. These are all legitimate threats, and we’re smart to defend against them. Against the professionals, we’re just trying to add enough uncertainty into the system that they’ll choose other targets instead.

The terrorists’ goals have nothing to do with airplanes; their goals are to cause terror. Blowing up an airplane is just a particular attack designed to achieve that goal. Airplanes deserve some additional security because they have catastrophic failure properties: If there’s even a small explosion, everyone on the plane dies. But there’s a diminishing return on investments in airplane security. If the terrorists switch targets from airplanes to shopping malls, we haven’t really solved the problem.

What that means is that a basic cursory screening is good enough. If I were investing in security, I would fund significant research into computer-assisted screening equipment for both checked and carry-on bags, but wouldn’t spend a lot of money on invasive screening procedures and secondary screening. I would much rather have well-trained security personnel wandering around the airport, both in and out of uniform, looking for suspicious actions.

When I travel in Europe, I never have to take my laptop out of its case or my shoes off my feet. Those governments have had far more experience with terrorism than the U.S. government, and they know when passenger screening has reached the point of diminishing returns. (They also implemented checked-baggage security measures decades before the United States did—again recognizing the real threat.)

And if I were investing in security, I would invest in intelligence and investigation. The best time to combat terrorism is before the terrorist tries to get on an airplane. The best countermeasures have value regardless of the nature of the terrorist plot or the particular terrorist target.

In some ways, if we’re relying on airport screeners to prevent terrorism, it’s already too late. After all, we can’t keep weapons out of prisons. How can we ever hope to keep them out of airports?

A version of this essay originally appeared on Wired.com.

Posted on March 23, 2006 at 7:03 AM

Surreptitious Lie Detector

According to The New Scientist:

The US Department of Defense has revealed plans to develop a lie detector that can be used without the subject knowing they are being assessed. The Remote Personnel Assessment (RPA) device will also be used to pinpoint fighters hiding in a combat zone, or even to spot signs of stress that might mark someone out as a terrorist or suicide bomber.

“Revealed plans” is a bit of an overstatement. It seems that they’re just asking for proposals:

In a call for proposals on a DoD website, contractors are being given until 13 January to suggest ways to develop the RPA, which will use microwave or laser beams reflected off a subject’s skin to assess various physiological parameters without the need for wires or skin contacts. The device will train a beam on “moving and non-cooperative subjects”, the DoD proposal says, and use the reflected signal to calculate their pulse, respiration rate and changes in electrical conductance, known as the “galvanic skin response”. “Active combatants will in general have heart, respiratory and galvanic skin responses that are outside the norm,” the website says.
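The measurements in the proposal come down to extracting periodic components from a reflected signal. A minimal illustration of the pulse-rate case, with a synthetic noise-free waveform standing in for real radar returns:

```python
# Estimating pulse rate from a periodic component of a reflected signal.
# The signal here is synthetic and noise-free; real returns from a
# "moving and non-cooperative subject" would need heavy filtering.
import math

SAMPLE_RATE = 100   # samples per second (assumed)
HEART_HZ = 1.2      # a 72 beats-per-minute synthetic "subject"
SECONDS = 10

signal = [math.sin(2 * math.pi * HEART_HZ * t / SAMPLE_RATE + math.pi / 4)
          for t in range(SECONDS * SAMPLE_RATE)]

# Each heartbeat cycle produces two zero crossings.
crossings = sum(1 for a, b in zip(signal, signal[1:])
                if (a < 0 <= b) or (a >= 0 > b))
bpm = crossings / 2 / SECONDS * 60
print(round(bpm))  # 72
```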

The DoD asks for pie-in-the-sky stuff all the time. For example, they’ve wanted a synthetic blood substitute for decades. A surreptitious lie detector would be pretty neat.

Posted on January 20, 2006 at 12:37 PM

Bomb-Sniffing Wasps

No, this isn’t from The Onion. Trained wasps:

The tiny, non-stinging wasps can check for hidden explosives at airports and monitor for toxins in subway tunnels.

“You can rear them by the thousands, and you can train them within a matter of minutes,” says Joe Lewis, a U.S. Agriculture Department entomologist. “This is just the very tip of the iceberg of a very new resource.”

Sounds like it will be cheap enough….

EDITED TO ADD (12/29): Bomb-sniffing bees are old news.

Posted on December 28, 2005 at 12:47 PM

Eavesdropping Through a Wall

From The New Scientist:

With half a century’s experience of listening to feeble radio signals from space, NASA is helping US security services squeeze super-weak bugging data from Earth-bound buildings.

It is easy to defeat ordinary audio eavesdropping, just by sound-proofing a room. And simply drawing the curtains can defeat newer systems, which shine a laser beam onto a glass window and decode any modulation of the reflected beam caused by sound vibrations in the room.

So the new “through-the-wall audio surveillance system” uses a powerful beam of very high frequency radio waves instead of light. Radio can penetrate walls – if it couldn’t, portable radios wouldn’t work inside a house.

The system uses a horn antenna to radiate a beam of microwave energy – between 30 and 100 gigahertz – through a building wall. If people are speaking inside the room, any flimsy surface, such as clothing, will be vibrating. This modulates the radio beam reflected from the surface.

Although the radio reflection that passes back through the wall is extremely faint, the kind of electronic extraction and signal cleaning tricks used by NASA to decode signals in space can be used to extract speech.
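The recovery step amounts to demodulating a vibration-modulated carrier. A toy sketch, with audio-range frequencies standing in for the gigahertz signals the article describes (all parameters invented):

```python
# Toy version of the recovery step: a carrier amplitude-modulated by a
# low-frequency "vibration" is rectified and low-pass filtered to get
# the modulating waveform back.
import math

FS = 10_000      # sample rate, Hz
CARRIER = 1_000  # stand-in carrier, Hz
AUDIO = 50       # vibration frequency, Hz

n = FS // 10     # 0.1 s of samples
received = [(1 + 0.5 * math.sin(2 * math.pi * AUDIO * t / FS)) *
            math.cos(2 * math.pi * CARRIER * t / FS) for t in range(n)]

# Envelope detector: full-wave rectify, then a moving-average low-pass
# filter spanning two carrier cycles.
rect = [abs(x) for x in received]
WIN = (FS // CARRIER) * 2
recovered = [sum(rect[i:i + WIN]) / WIN for i in range(n - WIN)]

# The recovered envelope swings at the audio rate, not the carrier rate.
print(max(recovered) > 1.5 * min(recovered))  # True
```

The hard part, as the article notes, is that the real reflection is extremely faint, which is where NASA's weak-signal tricks come in.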

Here’s the patent, and here’s a Slashdot thread on the topic.

Wow. (If it works, that is.)

Posted on October 26, 2005 at 3:12 PM

Chemical Trace Screening

New advances in technology:

“Mass spectrometry is one of the most sensitive methods for finding drugs, chemicals, pollutants and disease, but the problem is that you have to extract a sample and treat that sample before you can analyze it,” said Evan Williams, a chemistry professor at UC Berkeley.

That process can take anywhere from two to 15 minutes for each sample. Multiply that by the number of people in line at airport security at JFK the day before Thanksgiving, and you’ve got a logistical nightmare on your hands.

The research from Purdue, led by analytical chemistry professor Graham Cooks, developed a technique called desorption electrospray ionization, or DESI, that eliminates a part of the mass spectrometry process, and thus speeds up the detection of substances to less than 10 seconds, said Williams.

To use it, law enforcement officials and security screeners will spray methanol or a water and salt mixture on the surface of an object, or a person’s clothing or skin, and test immediately for microscopic traces of chemical compounds.
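The throughput problem in those numbers is easy to make concrete. The per-sample times are from the article; the passenger count is a made-up holiday-eve figure, not real JFK data:

```python
# Back-of-the-envelope screening throughput. Passenger count invented.
passengers = 50_000

for minutes_per_sample in (2, 15):
    hours = passengers * minutes_per_sample / 60
    print(f"{minutes_per_sample} min/sample -> {hours:,.0f} machine-hours")

# At DESI's under-ten-seconds per sample:
print(f"10 s/sample -> {passengers * 10 / 3600:,.0f} machine-hours")
```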

As this kind of technology gets better, the problem of false alarms becomes greater. We already know that a large percentage of U.S. currency bears traces of cocaine, but can a low-budget terrorist close down an airport by spraying trace chemicals randomly on passengers’ luggage when they’re not looking?
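This is the base-rate problem in miniature: when real threats are vanishingly rare, almost every alarm is false. A quick calculation with invented rates:

```python
# Base-rate illustration: even an accurate detector mostly flags
# innocents when real threats are rare. All rates are invented.
passengers = 1_000_000   # screened over some period
p_threat = 1e-6          # actual attackers per passenger (assumed)
p_detect = 0.99          # detector catches 99% of real threats
p_false = 0.01           # 1% of innocents trigger an alarm anyway

true_alarms = passengers * p_threat * p_detect
false_alarms = passengers * (1 - p_threat) * p_false

print(round(false_alarms))   # 10000 alarms, nearly all innocent travelers
print(true_alarms / (true_alarms + false_alarms))   # an alarm is almost never real
```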

Posted on October 14, 2005 at 1:56 PM

Automatic License Plate Scanners

The Boston Transportation Department, among other duties, hands out parking tickets. If a car has too many unpaid parking tickets, the BTD will lock a Denver Boot to one of the wheels, making the car unmovable. Once the tickets are paid up, the BTD removes the boot.

The white SUV in this photo is owned by the Boston Transportation Department. Its job is to locate cars that need to be booted. The two video cameras on top of the vehicle are hooked up to a laptop computer running license plate scanning software. The vehicle drives around the city scanning plates and comparing them with the database of unpaid parking tickets. When a match is found, the BTD officers jump out and boot the offending car. You can sort of see the boot on the front right wheel of the car behind the SUV in the photo.
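Stripped of the cameras and the plate-recognition software, the matching step is just a set lookup. The plates, ticket counts, and booting threshold below are all invented:

```python
# The matching step of the scanning loop, reduced to its essentials.
unpaid_tickets = {"123ABC": 7, "987XYZ": 2, "555DEF": 11}
BOOT_THRESHOLD = 5   # tickets before a car becomes boot-eligible

boot_list = {plate for plate, n in unpaid_tickets.items() if n >= BOOT_THRESHOLD}

def check_plate(plate: str) -> bool:
    """True if a scanned plate belongs to a boot-eligible car."""
    return plate in boot_list

for scanned in ("111AAA", "555DEF", "987XYZ"):
    print(scanned, check_plate(scanned))
```

The check is trivial; what makes the system powerful, and worrying, is that every scanned plate can also be logged.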

This is the kind of thing I call “wholesale surveillance,” and I wrote about license plate scanners in that regard last year.

Technology is fundamentally changing the nature of surveillance. Years ago, surveillance meant trench-coated detectives following people down streets. It was laborious and expensive, and was only used when there was reasonable suspicion of a crime. Modern surveillance is the policeman with a license-plate scanner, or even a remote license-plate scanner mounted on a traffic light and a policeman sitting at a computer in the station. It’s the same, but it’s completely different. It’s wholesale surveillance.

And it disrupts the balance between the powers of the police and the rights of the people.

[…]

Like the license-plate scanners, the electronic footprints we leave everywhere can be automatically correlated with databases. The data can be stored forever, allowing police to conduct surveillance backwards in time.

The effects of wholesale surveillance on privacy and civil liberties are profound; but unfortunately, the debate often gets mischaracterized as a question about how much privacy we need to give up in order to be secure. This is wrong. It’s obvious that we are all safer when the police can use all techniques at their disposal. What we need are corresponding mechanisms that prevent abuse and don’t place an unreasonable burden on the innocent.

Throughout our nation’s history, we have maintained a balance between the necessary interests of police and the civil rights of the people. The license plate itself is such a balance. Imagine the debate from the early 1900s: The police proposed affixing a plaque to every car with the car owner’s name, so they could better track cars used in crimes. Civil libertarians objected because that would reduce the privacy of every car owner. So a compromise was reached: a random string of letters and numbers that the police could use to determine the car owner. By deliberately designing a more cumbersome system, the needs of law enforcement and the public’s right to privacy were balanced.

The search warrant process, as prescribed in the Fourth Amendment, is another balancing method. So is the minimization requirement for telephone eavesdropping: the police must stop listening to a phone line if the suspect under investigation is not talking.

For license-plate scanners, one obvious protection is to require the police to erase data collected on innocent car owners immediately, and not save it. The police have no legitimate need to collect data on everyone’s driving habits. Another is to allow car owners access to the information about them used in these automated searches, and to allow them to challenge inaccuracies.

The Boston Globe has written about this program.

Richard M. Smith, who took this photo, made a public request to the BTD last summer for the database of scanned license plate numbers that is being collected by this vehicle. The BTD told him at the time that the database is not a public record, because the database is owned by AutoVu, the Canadian company that makes the license plate scanner software used in the vehicle. This software is being “loaned” to the City of Boston as part of a “beta” test program.

Anyone doubt that AutoVu is going to sell this data to a company like ChoicePoint?

Posted on October 7, 2005 at 1:49 PM
