Entries Tagged "resilience"


Breaching the Secure Area in Airports

An unidentified man breached airport security at Newark Airport on Sunday, walking into the secured area through the exit, prompting the evacuation of a terminal and flight delays that continued into the next day. This isn’t common, but it happens regularly. The result is always the same, and it’s not obvious that fixing the problem is the right solution.

This kind of security breach is inevitable, simply because human guards are not perfect. Sometimes it’s someone going in through the out door, unnoticed by a bored guard. Sometimes it’s someone running through the checkpoint and getting lost in the crowd. Sometimes it’s an open door that should be locked. Amazing as it seems to frequent fliers, the perpetrator often doesn’t even know he did anything wrong.

Basically, whenever there is—or could be—an unscreened person lost within the secure area of an airport, there are two things the TSA can do. They can say “this isn’t a big deal,” and ignore it. Or they can evacuate everyone inside the secure area, search every nook and cranny—inside the large boxes of napkins at the fast food restaurant, above the false ceilings in the bathrooms, everywhere—looking for anyone hiding or anything anyone hid, and then rescreen everybody: causing delays of six, eight, twelve, or more hours. That’s it; those are the options. And there’s no way someone in charge will choose to ignore the risk; even if the odds of a terrorist exploit are minuscule, it’ll cost him his career if he’s wrong.

Several European airports have their security screening organized differently. At Schiphol Airport in Amsterdam, for example, passengers are screened at the gates. This is more expensive and requires a substantially different airport design, but it does mean that if there is a security breach, only the gate has to be evacuated and searched, and the people rescreened.

American airports can do more to secure against this risk, but I’m reasonably sure it’s not worth it. We could double the guards to reduce the risk of inattentiveness, and redesign the airports to make this kind of thing less likely, but those are expensive solutions to an already rare problem. As much as I don’t like saying it, the smartest thing is probably to live with this occasional but major inconvenience.

This essay originally appeared on ThreatPost.com.

EDITED TO ADD (1/9): A first-person account of the chaos at Newark Airport, with observations and recommendations.

Posted on January 6, 2010 at 6:10 AM

A Rational Response to Peanut Allergies and Children

Some parents of children with peanut allergies are not asking their school to ban peanuts. They consider it more important that teachers know which children are likely to have a reaction, and how to deal with it when it happens; i.e., how to use an EpiPen.

This is a much more resilient response to the threat. It works even when the peanut ban fails. It works whether the child has an anaphylactic reaction to nuts, fruit, dairy, gluten, or whatever.

It’s so rare to see rational risk management when it comes to children and safety that I just had to blog it.

Related blog post, including a very lively comments section.

Posted on January 27, 2009 at 2:10 PM

Biometrics

Biometrics may seem new, but they’re the oldest form of identification. Tigers recognize each other’s scent; penguins recognize calls. Humans recognize each other by sight from across the room, voices on the phone, signatures on contracts, and photographs on driver’s licenses. Fingerprints have been used to identify people at crime scenes for more than 100 years.

What is new about biometrics is that computers are now doing the recognizing: thumbprints, retinal scans, voiceprints, and typing patterns. There’s a lot of technology involved in trying to limit both false positives (someone else being mistakenly recognized as you) and false negatives (you being mistakenly not recognized). Generally, a system can be tuned to produce fewer of one at the cost of more of the other; reducing both at once is very hard.
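To make that trade-off concrete, here is a minimal sketch of a generic score-and-threshold matcher. The similarity scores and thresholds are invented for illustration and don’t correspond to any particular biometric system; the point is only that raising the threshold produces fewer false positives but more false negatives, and lowering it does the reverse.

```python
# Simplified illustration of the false-positive / false-negative trade-off.
# Scores and thresholds are invented; real systems derive similarity from
# fingerprint minutiae, iris codes, voice features, and so on.

def matches(similarity: float, threshold: float) -> bool:
    """Accept the claimed identity only if the sample scores at or above the threshold."""
    return similarity >= threshold

genuine_scores  = [0.91, 0.85, 0.78, 0.60]   # same person, varying capture quality
impostor_scores = [0.30, 0.45, 0.62, 0.55]   # different people claiming to be you

for threshold in (0.5, 0.7, 0.9):
    false_negatives = sum(not matches(s, threshold) for s in genuine_scores)
    false_positives = sum(matches(s, threshold) for s in impostor_scores)
    print(f"threshold {threshold}: {false_positives} false positives, "
          f"{false_negatives} false negatives")
```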

Biometrics can vastly improve security, especially when paired with another form of authentication such as passwords. But it’s important to understand their limitations as well as their strengths. On the strength side, biometrics are hard to forge. It’s hard to affix a fake fingerprint to your finger or make your retina look like someone else’s. Some people can mimic voices, and make-up artists can change people’s faces, but these are specialized skills.

On the other hand, biometrics are easy to steal. You leave your fingerprints everywhere you touch, your iris scan everywhere you look. Regularly, hackers have copied the prints of officials from objects they’ve touched, and posted them on the Internet. We haven’t yet had an example of a large biometric database being hacked into, but the possibility is there. Biometrics are unique identifiers, but they’re not secrets.

And a stolen biometric can fool some systems. It can be as easy as cutting out a signature, pasting it onto a contract, and then faxing the page to someone. The person on the other end doesn’t know that the signature isn’t valid because he didn’t see it affixed to the page. Remote logins by fingerprint fail in the same way. If there’s no way to verify that the print came from an actual reader rather than from a stored computer file, the system is much less secure.

A more secure system is to use a fingerprint to unlock your mobile phone or computer. Because there is a trusted path from the fingerprint reader to the stored fingerprint the system uses to compare, an attacker can’t inject a previously stored print as easily as he can cut and paste a signature. A photo on an ID card works the same way: the verifier can compare the face in front of him with the face on the card.

Fingerprints on ID cards are more problematic, because the attacker can try to fool the fingerprint reader. Researchers have made false fingers out of rubber or glycerin. Manufacturers have responded by building readers that also detect pores or a pulse.

The lesson is that biometrics work best if the system can verify that the biometric came from the person at the time of verification. The biometric identification system at the gates of the CIA headquarters works because there’s a guard with a large gun making sure no one is trying to fool the system.

Of course, not all systems need that level of security. At Counterpane, the security company I founded, we installed hand geometry readers at the access doors to the operations center. Hand geometry is a hard biometric to copy, and the system was closed and didn’t allow electronic forgeries. It worked very well.

One more problem with biometrics: they don’t fail well. Passwords can be changed, but if someone copies your thumbprint, you’re out of luck: you can’t update your thumb. Passwords can be backed up, but if you alter your thumbprint in an accident, you’re stuck. The failures don’t have to be this spectacular: a voiceprint reader might not recognize someone with a sore throat, or a fingerprint reader might fail outside in freezing weather. Biometric systems need to be analyzed in light of these possibilities.

Biometrics are easy, convenient, and when used properly, very secure; they’re just not a panacea. Understanding how they work and fail is critical to understanding when they improve security and when they don’t.

This essay originally appeared in the Guardian, and is an update of an essay I wrote in 1998.

Posted on January 8, 2009 at 12:53 PM

First Responders

I live in Minneapolis, so the collapse of the Interstate 35W bridge over the Mississippi River earlier this month hit close to home, and was covered in both my local and national news.

Much of the initial coverage consisted of human interest stories, centered on the victims of the disaster and the incredible bravery shown by first responders: the policemen, firefighters, EMTs, divers, National Guard soldiers and even ordinary people, who all risked their lives to save others. (Just two weeks later, three rescue workers died in their almost-certainly futile attempt to save six miners in Utah.)

Perhaps the most amazing aspect of these stories is that there’s nothing particularly amazing about them. No matter what the disaster—hurricane, earthquake, terrorist attack—the nation’s first responders get to the scene soon after.

Which is why it’s such a crime when these people can’t communicate with each other.

Historically, police departments, fire departments and ambulance drivers have all had their own independent communications equipment, so when there’s a disaster that involves them all, they can’t communicate with each other. A 1996 government report said this about the first World Trade Center bombing in 1993: “Rescuing victims of the World Trade Center bombing, who were caught between floors, was hindered when police officers could not communicate with firefighters on the very next floor.”

And we all know that police and firefighters had the same problem on 9/11. You can read details in firefighter Dennis Smith’s book and 9/11 Commission testimony. The 9/11 Commission Report discusses this as well: Chapter 9 talks about the first responders’ communications problems, and commission recommendations for improving emergency-response communications are included in Chapter 12 (pp. 396-397).

In some cities, this communication gap is beginning to close. Homeland Security money has flowed into communities around the country. And while some wasted it on measures like cameras, armed robots and things having nothing to do with terrorism, others spent it on interoperable communications capabilities. Minnesota did that in 2004.

It worked. Hennepin County Sheriff Rich Stanek told the St. Paul Pioneer Press that lives were saved by disaster planning that had been fine-tuned and improved with lessons learned from 9/11:

“We have a unified command system now where everyone—police, fire, the sheriff’s office, doctors, coroners, local and state and federal officials—operate under one voice,” said Stanek, who is in charge of water recovery efforts at the collapse site.

“We all operate now under the 800 (megahertz radio frequency system), which was the biggest criticism after 9/11,” Stanek said, “and to have 50 to 60 different agencies able to speak to each other was just fantastic.”

Others weren’t so lucky. Louisiana’s first responders had catastrophic communications problems in 2005, after Hurricane Katrina. According to National Defense Magazine:

Police could not talk to firefighters and emergency medical teams. Helicopter and boat rescuers had to wave signs and follow one another to survivors. Sometimes, police and other first responders were out of touch with comrades a few blocks away. National Guard relay runners scurried about with scribbled messages as they did during the Civil War.

A congressional report on preparedness and response to Katrina said much the same thing.

In 2004, the U.S. Conference of Mayors issued a report on communications interoperability. In 25 percent of the 192 cities surveyed, the police couldn’t communicate with the fire department. In 80 percent of cities, municipal authorities couldn’t communicate with the FBI, FEMA and other federal agencies.

The source of the problem is a basic economic one, called the collective action problem. A collective action is one that needs the coordinated effort of several entities in order to succeed. The problem arises when each individual entity’s needs diverge from the collective needs, and there is no mechanism to ensure that those individual needs are sacrificed in favor of the collective need.

Jerry Brito of George Mason University shows how this applies to first-responder communications. Each of the nation’s 50,000 or so emergency-response organizations—local police department, local fire department, etc.—buys its own communications equipment. As you’d expect, they buy equipment as closely suited to their needs as they can. Ensuring interoperability with other organizations’ equipment benefits the common good, but sacrificing their unique needs for that compatibility may not be in the best immediate interest of any of those organizations. There’s no central directive to ensure interoperability, so there ends up being none.

This is an area where the federal government can step in and do good. Too much of the money spent on terrorism defense has been overly specific: effective only if the terrorists attack a particular target or use a particular tactic. Money spent on emergency response is different: It’s effective regardless of what the terrorists plan, and it’s also effective in the wake of natural or infrastructure disasters.

No particular disaster, whether intentional or accidental, is common enough to justify spending a lot of money on preparedness for a specific emergency. But spending money on preparedness in general will pay off again and again.

This essay originally appeared on Wired.com.

EDITED TO ADD (7/13): More research.

Posted on August 23, 2007 at 3:23 AM

Airport Security Breach

One of the problems with airport security checkpoints is that the system is a single point of failure. If someone slips through, the only way to regain security is for the entire airport to be emptied and everyone searched again. This happens rarely, but when it does, it can close an airport for hours.

It happened today at the Charlotte airport.

One sentence struck me:

Passengers on another 15 planes that took off after the breach will have to go through screening again when they reach their destinations, the TSA said.

It’s understandable why the TSA would want to screen everybody once someone evades security: that person could give his contraband to someone else. And since the entire airport system is a single secure area—once you go through security at one airport, you are considered to be inside security at all airports—it makes sense for those passengers to be screened if they’re changing planes.

But it must feel weird to have to go through screening after flying, before being able to leave the airport.

Posted on August 10, 2007 at 11:12 AM

Software Failure Causes Airport Evacuation

Last month I wrote about airport passenger screening, and mentioned that the X-ray equipment inserts images of “test” bags into the stream in order to keep screeners more alert. That system failed pretty badly earlier this week at Atlanta’s Hartsfield-Jackson Airport, when a false alarm resulted in a two-hour evacuation of the entire airport.

The screening system injects test images onto the screen. Normally the software flashes the words “This is a test” on the screen after a brief delay, but this time the software failed to indicate that. The screener noticed the image (of a “suspicious device,” according to CNN) and, per procedure, screeners manually checked the bags on the conveyor belt for it. They couldn’t find it, of course, but they evacuated the airport and spent two hours vainly searching for it.

Hartsfield-Jackson is the country’s busiest passenger airport. It’s Delta’s main hub. The delays were felt across the country for the rest of the day.

Okay, so what went wrong here? Clearly the software failed. Just as clearly the screener procedures didn’t fail—everyone did what they were supposed to do.

What is less obvious is that the system failed. It failed because it was not designed to fail well. A small failure—in this case, a software glitch in a single X-ray machine—cascaded in such a way as to shut down the entire airport. This kind of failure magnification is common in poorly designed security systems. Better would be for there to be individual X-ray machines at the gates—I’ve seen this design at several European airports—so that when there’s a problem the effects are restricted to that gate.

Of course, this distributed security solution would be more expensive. But I’m willing to bet it would be cheaper overall, taking into account the cost of occasionally clearing out an airport.

Posted on April 21, 2006 at 12:49 PM

Airport Security Failure

At LaGuardia, a man successfully walked through the metal detector, but screeners wanted to check his shoes. (Some reports say that his shoes set off an alarm.) But he didn’t wait, and disappeared into the crowd.

The entire Delta Air Lines terminal had to be evacuated, and between 2,500 and 3,000 people had to be rescreened. I’m sure the resultant flight delays rippled through the entire system.

Security systems can fail in two ways. They can fail to defend against an attack. And they can fail when there is no attack to defend against. The latter failure is often more important, because false alarms are more common than real attacks.

Aside from the obvious security failure—how did this person manage to disappear into the crowd, anyway?—it’s painfully clear that the overall security system did not fail well. Well-designed security systems fail gracefully, without affecting the entire airport terminal. That the only thing the TSA could do after the failure was evacuate the entire terminal and rescreen everyone is a testament to how badly designed the security system is.

Posted on March 14, 2006 at 12:15 PM

Database Error Causes Unbalanced Budget

This story of a database error cascading into a major failure has some interesting security morals:

A house erroneously valued at $400 million is being blamed for budget shortfalls and possible layoffs in municipalities and school districts in northwest Indiana.

[…]

County Treasurer Jim Murphy said the home usually carried about $1,500 in property taxes; this year, it was billed $8 million.

Most local officials did not learn about the mistake until Tuesday, when 18 government taxing units were asked to return a total of $3.1 million of tax money. The city of Valparaiso and the Valparaiso Community School Corp. were asked to return $2.7 million. As a result, the school system has a $200,000 budget shortfall, and the city loses $900,000.

User error is being blamed for the problem:

An outside user of Porter County’s computer system may have triggered the mess by accidentally changing the value of the Valparaiso house, said Sharon Lippens, director of the county’s information technologies and service department.

[…]

Lippens said the outside user changed the property value, most likely while trying to access another program while using the county’s enhanced access system, which charges users a fee for access to public records that are not otherwise available on the Internet.

Lippens said the user probably tried to access a real estate record display by pressing R-E-D, but accidentally typed R-E-R, which brought up an assessment program written in 1995. The program is no longer in use, and technology officials did not know it could be accessed.

Three things immediately spring to mind:

One, the system did not fail safely. This one error seems to have cascaded into multiple errors, as the new tax total immediately changed the budgets of “18 government taxing units.”

Two, there were no sanity checks on the system. “The city of Valparaiso and the Valparaiso Community School Corp. were asked to return $2.7 million.” Didn’t the city wonder where all that extra money came from in the first place? (See the sketch after this list.)

Three, the access-control mechanisms on the computer system were too broad. When a user is authorized to use the “R-E-D” program, he shouldn’t automatically have permission to use the “R-E-R” program as well. Authorization isn’t all or nothing; it should be granular to the operation.
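Here is a minimal sketch of what lessons two and three might look like in code, assuming a hypothetical assessment system. The role names, program codes, and the ten-times threshold are illustrative assumptions, not details of Porter County’s actual software.

```python
# Illustrative only: role names, program codes, and the ten-times threshold
# are assumptions for this sketch, not the county's real system.

ALLOWED_PROGRAMS = {
    "outside_user": {"RED"},         # record display only
    "assessor":     {"RED", "RER"},  # may also run the assessment program
}

def authorize(role: str, program: str) -> None:
    """Lesson three: check permission per operation, not once per login."""
    if program not in ALLOWED_PROGRAMS.get(role, set()):
        raise PermissionError(f"role {role!r} may not run program {program!r}")

def update_valuation(record: dict, new_value: float, max_ratio: float = 10.0) -> None:
    """Lesson two: flag implausible jumps for human review before they propagate."""
    old_value = record["assessed_value"]
    if old_value > 0 and new_value / old_value > max_ratio:
        raise ValueError(
            f"valuation jumped from ${old_value:,.0f} to ${new_value:,.0f}; "
            "holding for manual review instead of rebudgeting downstream"
        )
    record["assessed_value"] = new_value

# With these checks, the erroneous change is caught at the source.
try:
    authorize("outside_user", "RER")
except PermissionError as err:
    print("blocked:", err)

house = {"parcel": "illustrative", "assessed_value": 120_000}
try:
    update_valuation(house, 400_000_000)
except ValueError as err:
    print("flagged:", err)
```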

Posted on February 17, 2006 at 7:29 AM

The Myth of Panic

This New York Times op-ed argues that panic is largely a myth. People feel stressed, but they behave rationally; the behavior only gets called “panic” because of the stress.

If our leaders are really planning for panic, in the technical sense, then they are at best wasting resources on a future that is unlikely to happen. At worst, they may be doing our enemies’ work for them – while people are amazing under pressure, it cannot help to have predictions of panic drummed into them by supposed experts.

It can set up long-term foreboding, causing people to question whether they have the mettle to handle terrorists’ challenges. Studies have found that when interpreting ambiguous situations, people look to one another for cues. Panicky warnings can color the cues that people draw from one another when interpreting ambiguous situations, like seeing a South Asian-looking man with a backpack get on a bus.

Nor can it help if policy makers talk about possible draconian measures (like martial law and rigidly policed quarantines) to control the public and deny its right to manage its own affairs. The very planning for such measures can alienate citizens and the authorities from each other.

Whatever its source, the myth of panic is a threat to our welfare. Given the difficulty of using the term precisely and the rarity of actual panic situations, the cleanest solution is for the politicians and the press to avoid the term altogether. It’s time to end chatter about “panic” and focus on ways to support public resilience in an emergency.

Posted on August 9, 2005 at 7:25 AM
