Attacking U.S. Critical Infrastructure
We have a cognitive bias to exaggerate risks caused by other humans, and downplay risks caused by animals (and, even more, by natural phenomena).
This is just silly:
Beaver Stadium is a terrorist target. It is most likely the No. 1 target in the region. As such, it deserves security measures commensurate with such a designation, but is the stadium getting such security?
[…]
When the stadium is not in use it does not mean it is not a target. It must be watched constantly. An easy solution is to assign police officers there 24 hours a day, seven days a week. This is how a plot to destroy the Brooklyn Bridge was thwarted—police presence. Although there are significant costs to this, the costs pale in comparison if the stadium is destroyed or damaged.
The idea is to create omnipresence, which is a belief in everyone’s minds (terrorists and pranksters included) that the stadium is constantly being watched so that any attempt would be futile.
Actually, the Brooklyn Bridge plot failed because the plotters were idiots and the plot—cutting through cables with blowtorches—was dumb. That, and the all-too-common police informant who egged the plotters on.
But never mind that. Beaver Stadium is Pennsylvania State University’s football stadium, and this article argues that it’s a potential terrorist target that needs 24/7 police protection.
The problem with that kind of reasoning is that it makes no sense. As I said in an article that will appear in New Internationalist:
To be sure, reasonable arguments can be made that some terrorist targets are more attractive than others: aeroplanes because a small bomb can result in the death of everyone aboard, monuments because of their national significance, national events because of television coverage, and transportation because of the numbers of people who commute daily. But there are literally millions of potential targets in any large country (there are five million commercial buildings alone in the US), and hundreds of potential terrorist tactics; it’s impossible to defend every place against everything, and it’s impossible to predict which tactic and target terrorists will try next.
Defending individual targets only makes sense if the number of potential targets is few. If there are seven terrorist targets and you defend five of them, you seriously reduce the terrorists’ ability to do damage. But if there are a million terrorist targets and you defend five of them, the terrorists won’t even notice. I tend to dislike security measures that merely cause the bad guys to make a minor change in their plans.
And the expense would be enormous. Add up these secondary terrorist targets—stadiums, theaters, churches, schools, malls, office buildings, anyplace where a lot of people are packed together—and the number is probably around 200,000, including Beaver Stadium. Full-time police protection requires people, so that’s 1,000,000 policemen. At an encumbered cost of $100,000 per policeman per year, probably a low estimate, that’s a total annual cost of $100B. (That’s about what we’re spending each year in Iraq.) On the other hand, hiring one out of every 300 Americans to guard our nation’s infrastructure would solve our unemployment problem. And since policemen get health care, our health care problem as well. Just make sure you don’t accidentally hire a terrorist to guard against terrorists—that would be embarrassing.
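The back-of-envelope arithmetic above can be sketched in a few lines of Python. The figures are the essay's own rough estimates (200,000 targets, five officers per target for round-the-clock coverage, $100,000 encumbered annual cost per officer), not real data:

```python
# Back-of-envelope cost of 24/7 police protection for every
# "secondary" terrorist target, using the essay's rough estimates.

targets = 200_000           # stadiums, theaters, churches, schools, malls, etc.
officers_per_target = 5     # implied by 200,000 targets -> 1,000,000 officers
                            # (round-the-clock coverage requires several shifts)
cost_per_officer = 100_000  # encumbered annual cost per officer, in dollars

total_officers = targets * officers_per_target
annual_cost = total_officers * cost_per_officer

print(total_officers)  # 1000000 officers
print(annual_cost)     # 100000000000 -- i.e., $100B per year
```

The point of the calculation is not the exact numbers but the order of magnitude: guarding every plausible target individually costs on the scale of a war, which is why defending "everything" is not a coherent strategy.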
The whole idea is nonsense. As I’ve been saying for years, what works is investigation, intelligence, and emergency response:
We need to defend against the broad threat of terrorism, not against specific movie plots. Security is most effective when it doesn’t make arbitrary assumptions about the next terrorist act. We need to spend more money on intelligence and investigation: identifying the terrorists themselves, cutting off their funding, and stopping them regardless of what their plans are. We need to spend more money on emergency response: lessening the impact of a terrorist attack, regardless of what it is. And we need to face the geopolitical consequences of our foreign policy and how it helps or hinders terrorism.
At a demonstration of the technology this week, project manager Robert P. Burns said the idea is to track a set of involuntary physiological reactions that might slip by a human observer. These occur when a person harbors malicious intent—but not when someone is late for a flight or annoyed by something else, he said, citing years of research into the psychology of deception.
The development team is investigating how effective its techniques are at flagging only people who intend to do harm. Even if it works, the technology raises a slew of questions – from privacy concerns, to the more fundamental issue of whether machines are up to a task now entrusted to humans.
I have a lot of respect for Paul Ekman’s opinion on the matter:
“I can understand why there’s an attempt being made to find a way to replace or improve on what human observers can do: the need is vast, for a country as large and porous as we are. However, I’m by no means convinced that any technology, any hardware will come close to doing what a highly trained human observer can do,” said Ekman, who directs a company that trains government workers, including for the Transportation Security Administration, to detect suspicious behavior.
New experiment demonstrates what we already knew:
That’s because people tend to view their immediate emotions, such as their perceptions of threats or risks, as more intense and important than their previous emotions.
In one part of the study focusing on terrorist threats, using materials adapted from the U.S. Department of Homeland Security, Van Boven and his research colleagues presented two scenarios to people in a college laboratory depicting warnings about traveling abroad to two countries.
Participants were then asked to report which country seemed to have greater terrorist threats. Many of them reported that the country they last read about was more dangerous.
“What our study has shown is that when people learn about risks, even in very rapid succession where the information is presented to them in a very clear and vivid way, they still respond more strongly to what is right in front of them,” Van Boven said.
[…]
Human emotions stem from a very old system in the brain, Van Boven says. When it comes to reacting to threats, real or exaggerated, it goes against the grain of thousands of years of evolution to just turn off that emotional reaction. It’s not something most people can do, he said.
“And that’s a problem, because people’s emotions are fundamental to their judgments and decisions in everyday life,” Van Boven said. “When people are constantly being bombarded by new threats or things to be fearful of, they can forget about the genuinely big problems, like global warming, which really need to be dealt with on a large scale with public support.”
In today’s 24-hour society, talk radio, the Internet and extensive media coverage of the “threat of the day” only exacerbate the trait of focusing on our immediate emotions, he said.
“One of the things we know about how emotional reactions work is they are not very objective, so people can get outraged or become fearful of what might actually be a relatively minor threat,” Van Boven said. “One worry is some people are aware of these kinds of effects and can use them to manipulate our actions in ways that we may prefer to avoid.”
[…]
“If you’re interested in having an informed citizenry you tell people about all the relevant risks, but what our research shows is that is not sufficient because those things still happen in sequence and people will still respond immediately to whatever happens to be in front of them,” he said. “In order to make good decisions and craft good policies we need to know how people are going to respond.”
Good essay on “terrorist havens”—like Afghanistan—and why they’re not as big a worry as some maintain:
Rationales for maintaining the counterinsurgency in Afghanistan are varied and complex, but they all center on one key tenet: that Afghanistan must not be allowed to again become a haven for terrorist groups, especially al-Qaeda.
[…]
The debate has largely overlooked a more basic question: How important to terrorist groups is any physical haven? More to the point: How much does a haven affect the danger of terrorist attacks against U.S. interests, especially the U.S. homeland? The answer to the second question is: not nearly as much as unstated assumptions underlying the current debate seem to suppose. When a group has a haven, it will use it for such purposes as basic training of recruits. But the operations most important to future terrorist attacks do not need such a home, and few recruits are required for even very deadly terrorism. Consider: The preparations most important to the Sept. 11, 2001, attacks took place not in training camps in Afghanistan but, rather, in apartments in Germany, hotel rooms in Spain and flight schools in the United States.
In the past couple of decades, international terrorist groups have thrived by exploiting globalization and information technology, which have lessened their dependence on physical havens.
By utilizing networks such as the Internet, terrorists’ organizations have become more network-like, not beholden to any one headquarters. A significant jihadist terrorist threat to the United States persists, but that does not mean it will consist of attacks instigated and commanded from a South Asian haven, or that it will require a haven at all. Al-Qaeda’s role in that threat is now less one of commander than of ideological lodestar, and for that role a haven is almost meaningless.
I wrote about the DHS’s color-coded threat alert system in 2003, in Beyond Fear:
The color-coded threat alerts issued by the Department of Homeland Security are useless today, but may become useful in the future. The U.S. military has a similar system; DEFCON 1-5 corresponds to the five threat alert levels: Green, Blue, Yellow, Orange, and Red. The difference is that the DEFCON system is tied to particular procedures; military units have specific actions they need to perform every time the DEFCON level goes up or down. The color-alert system, on the other hand, is not tied to any specific actions. People are left to worry, or are given nonsensical instructions to buy plastic sheeting and duct tape. Even local police departments and government organizations largely have no idea what to do when the threat level changes. The threat levels actually do more harm than good, by needlessly creating fear and confusion (which is an objective of terrorists) and anesthetizing people to future alerts and warnings. If the color-alert system became something better defined, so that people know exactly what caused the levels to change, what the change means, and what actions they need to take in the event of a change, then it could be useful. But even then, the real measure of effectiveness is in the implementation. Terrorist attacks are rare, and if the color-threat level changes willy-nilly with no obvious cause or effect, then people will simply stop paying attention. And the threat levels are publicly known, so any terrorist with a lick of sense will simply wait until the threat level goes down.
Of course, the codes never became useful. There were never any actions associated with them. And we now know that their primary use was political. They were, and remain, a security joke.
This is what I wrote in 2004:
The DHS’s threat warnings have been vague, indeterminate, and unspecific. The threat index goes from yellow to orange and back again, although no one is entirely sure what either level means. We’ve been warned that the terrorists might use helicopters, scuba gear, even cheap prescription drugs from Canada. New York and Washington, D.C., were put on high alert one day, and the next day told that the alert was based on information years old. The careful wording of these alerts allows them not to require any sound, confirmed, accurate intelligence information, while at the same time guaranteeing hysterical media coverage. This headline-grabbing stuff might make for good movie plots, but it doesn’t make us safer.
This kind of behavior is all that’s needed to generate widespread fear and uncertainty. It keeps the public worried about terrorism, while at the same time reminding them that they’re helpless without the government to defend them.
It’s one thing to issue a hurricane warning, and advise people to board up their windows and remain in the basement. Hurricanes are short-term events, and it’s obvious when the danger is imminent and when it’s over. People respond to the warning, and there is a discrete period when their lives are markedly different. They feel there was a usefulness to the higher alert mode, even if nothing came of it.
It’s quite another to tell people to remain on alert, but not to alter their plans. According to scientists, California is expecting a huge earthquake sometime in the next 200 years. Even though the magnitude of the disaster will be enormous, people just can’t stay alert for 200 years. It goes against human nature. Residents of California have the same level of short-term fear and long-term apathy regarding the threat of earthquakes that the rest of the nation has developed regarding the DHS’s terrorist threat alert.
A terrorist alert that instills a vague feeling of dread or panic, without giving people anything to do in response, is ineffective. Even worse, it echoes the very tactics of the terrorists. There are two basic ways to terrorize people. The first is to do something spectacularly horrible, like flying airplanes into skyscrapers and killing thousands of people. The second is to keep people living in fear. Decades ago, that was one of the IRA’s major aims. Inadvertently, the DHS is achieving the same thing.
Finally, in 2009, the DHS is considering changes to the system:
A proposal by the Homeland Security Advisory Council, unveiled late Tuesday, recommends removing two of the five colors, with a standard state of affairs being a “guarded” Yellow. The Green “low risk of terrorist attacks” might get removed altogether, meaning stay prepared for your morning subway commute to turn deadly at any moment.
That’s right, according to the DHS the problem was too many levels. I hope you all feel safer now.
Here are some more whimsical designs, but I want the whole thing to be ditched. And it should be easy to ditch; no one thinks it has any value. Unfortunately, if the Obama Administration can’t make this simple change, I don’t think they have the political will to make any of the harder changes we need.
On September 30, 2001, I published a special issue of Crypto-Gram discussing the terrorist attacks. I wrote about the novelty of the attacks, airplane security, diagnosing intelligence failures, the potential of regulating cryptography—because it could be used by the terrorists—and protecting privacy and liberty. Much of what I wrote is still relevant today:
Appalled by the recent hijackings, many Americans have declared themselves willing to give up civil liberties in the name of security. They’ve declared it so loudly that this trade-off seems to be a fait accompli. Article after article talks about the balance between privacy and security, discussing whether various increases of security are worth the privacy and civil-liberty losses. Rarely do I see a discussion about whether this linkage is a valid one.
Security and privacy are not two sides of a teeter-totter. This association is simplistic and largely fallacious. It’s easy and fast, but less effective, to increase security by taking away liberty. However, the best ways to increase security are not at the expense of privacy and liberty.
It’s easy to refute the notion that all security comes at the expense of liberty. Arming pilots, reinforcing cockpit doors, and teaching flight attendants karate are all examples of security measures that have no effect on individual privacy or liberties. So are better authentication of airport maintenance workers, or dead-man switches that force planes to automatically land at the closest airport, or armed air marshals traveling on flights.
Liberty-depriving security measures are most often found when system designers failed to take security into account from the beginning. They’re Band-aids, and evidence of bad security planning. When security is designed into a system, it can work without forcing people to give up their freedoms.
[…]
There are copycat criminals and terrorists, who do what they’ve seen done before. To a large extent, this is what the hastily implemented security measures have tried to prevent. And there are the clever attackers, who invent new ways to attack people. This is what we saw on September 11. It’s expensive, but we can build security to protect against yesterday’s attacks. But we can’t guarantee protection against tomorrow’s attacks: the hacker attack that hasn’t been invented, or the terrorist attack yet to be conceived.
Demands for even more surveillance miss the point. The problem is not obtaining data, it’s deciding which data is worth analyzing and then interpreting it. Everyone already leaves a wide audit trail as we go through life, and law enforcement can already access those records with search warrants. The FBI quickly pieced together the terrorists’ identities and the last few months of their lives, once they knew where to look. If they had thrown up their hands and said that they couldn’t figure out who did it or how, they might have a case for needing more surveillance data. But they didn’t, and they don’t.
More data can even be counterproductive. The NSA and the CIA have been criticized for relying too much on signals intelligence, and not enough on human intelligence. The East German police collected data on four million East Germans, roughly a quarter of their population. Yet they did not foresee the peaceful overthrow of the Communist government because they invested heavily in data collection instead of data interpretation. We need more intelligence agents squatting on the ground in the Middle East arguing the Koran, not sitting in Washington arguing about wiretapping laws.
People are willing to give up liberties for vague promises of security because they think they have no choice. What they’re not being told is that they can have both. It would require people to say no to the FBI’s power grab. It would require us to discard the easy answers in favor of thoughtful answers. It would require structuring incentives to improve overall security rather than simply decreasing its costs. Designing security into systems from the beginning, instead of tacking it on at the end, would give us the security we need, while preserving the civil liberties we hold dear.
Some broad surveillance, in limited circumstances, might be warranted as a temporary measure. But we need to be careful that it remain temporary, and that we do not design surveillance into our electronic infrastructure. Thomas Jefferson once said: “Eternal vigilance is the price of liberty.” Historically, liberties have always been a casualty of war, but a temporary casualty. This war—a war without a clear enemy or end condition—has the potential to turn into a permanent state of society. We need to design our security accordingly.
The BBC has a video demonstration of a 16-ounce bottle of liquid blowing a hole in the side of a plane.
I know no more details other than what’s in the video.
Three of the UK liquid bombers were convicted Monday. NSA-intercepted e-mail was introduced as evidence in the trial:
The e-mails, several of which have been reprinted by the BBC and other publications, contained coded messages, according to prosecutors. They were intercepted by the NSA in 2006 but were not included in evidence introduced in a first trial against the three last year.
That trial resulted in the men being convicted of conspiracy to commit murder; but a jury was not convinced that they had planned to use soft drink bottles filled with liquid explosives to blow up seven trans-Atlantic planes—the charge for which they were convicted this week in a second trial.
According to Channel 4, the NSA had previously shown the e-mails to their British counterparts, but refused to let prosecutors use the evidence in the first trial, because the agency didn’t want to tip off an alleged accomplice in Pakistan named Rashid Rauf that his e-mail was being monitored. U.S. intelligence agents said Rauf was al Qaeda’s director of European operations at the time and that the bomb plot was being directed by Rauf and others in Pakistan.
The NSA later changed its mind and allowed the evidence to be introduced in the second trial, which was crucial to getting the jury conviction. Channel 4 suggests the NSA’s change of mind occurred after Rauf, a Briton born of Pakistani parents, was reportedly killed last year by a U.S. drone missile that struck a house where he was staying in northern Pakistan.
Although British prosecutors were eager to use the e-mails in their second trial against the three plotters, British courts prohibit the use of evidence obtained through interception. So last January, a U.S. court issued warrants directly to Yahoo to hand over the same correspondence.
It’s unclear if the NSA intercepted the messages as they passed through internet nodes based in the U.S. or intercepted them overseas.
EDITED TO ADD (9/9): Just to be sure, this has nothing to do with any illegal warrantless wiretapping the NSA has done over the years; the wiretap used to intercept these e-mails was obtained with a FISA warrant.