Entries Tagged "public transit"

Terrorist Attacks and Comparable Risks, Part 2

John Adams argues that our irrationality about comparative risks depends on the type of risk:

With “pure” voluntary risks, the risk itself, with its associated challenge and rush of adrenaline, is the reward. Most climbers on Mount Everest know that it is dangerous and willingly take the risk. With a voluntary, self-controlled, applied risk, such as driving, the reward is getting expeditiously from A to B. But the sense of control that drivers have over their fates appears to encourage a high level of tolerance of the risks involved.

Cycling from A to B (I write as a London cyclist) is done with a diminished sense of control over one’s fate. This sense is supported by statistics that show that per kilometre travelled a cyclist is 14 times more likely to die than someone in a car. This is a good example of the importance of distinguishing between relative and absolute risk. Although 14 times greater, the absolute risk of cycling is still small—1 fatality in 25 million kilometres cycled; not even Lance Armstrong can begin to cover that distance in a lifetime of cycling. And numerous studies have demonstrated that the extra relative risk is more than offset by the health benefits of regular cycling; regular cyclists live longer.

While people may voluntarily board planes, buses and trains, the popular reaction to crashes in which passengers are passive victims suggests that the public demand a higher standard of safety in circumstances in which people voluntarily hand over control of their safety to pilots, or to bus or train drivers.

Risks imposed by nature—such as those endured by those living on the San Andreas Fault or the slopes of Mount Etna—or impersonal economic forces—such as the vicissitudes of the global economy—are placed in the middle of the scale. Reactions vary widely. They are usually seen as motiveless and are responded to fatalistically – unless or until the threat appears imminent.

Imposed risks are less tolerated. Consider mobile phones. The risk associated with the handsets is either non-existent or very small. The risk associated with the base stations, measured by radiation dose, unless one is up the mast with an ear to the transmitter, is orders of magnitude less. Yet all round the world billions are queuing up to take the voluntary risk, and almost all the opposition is focussed on the base stations, which are seen by objectors as impositions. Because the radiation dose received from the handset increases with distance from the base station, to the extent that campaigns against the base stations are successful, they will increase the distance from the base station to the average handset, and thus the radiation dose. The base station risk, if it exists, might be labelled a benignly imposed risk; no one supposes that the phone company wishes to murder all those in the neighbourhood.

Less tolerated are risks whose imposers are perceived as motivated by profit or greed. In Europe, big biotech companies such as Monsanto are routinely denounced by environmentalist opponents for being more concerned with profits than the welfare of the environment or the consumers of their products.

Less tolerated still are malignly imposed risks—crimes ranging from mugging to rape and murder. In most countries in the world the number of deaths on the road far exceeds the number of murders, but far more people are sent to jail for murder than for causing death by dangerous driving. In the United States in 2002, 16,000 people were murdered—a statistic that evoked far more popular concern than the 42,000 killed on the road—but far less than the 25 killed by terrorists.

This isn’t a new result, but it’s vital to understand how people react to different risks.
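Adams's distinction between relative and absolute risk is easy to make concrete. The sketch below redoes the arithmetic with the figures quoted above (1 fatality per 25 million km cycled, a 14-fold relative risk versus car travel); the lifetime mileage of 5,000 km per year over 50 years is an assumed figure for illustration only.

```python
# Relative vs. absolute risk, using the cycling figures quoted above.
cycling_fatalities_per_km = 1 / 25_000_000   # 1 death per 25 million km cycled
relative_risk_vs_car = 14                    # cyclist risk per km vs. car occupant
car_fatalities_per_km = cycling_fatalities_per_km / relative_risk_vs_car

# Absolute risk over an (assumed) lifetime of serious cycling:
lifetime_km = 5_000 * 50                     # 5,000 km/year for 50 years
lifetime_risk = cycling_fatalities_per_km * lifetime_km

print(f"Cyclist fatality risk per km: {cycling_fatalities_per_km:.2e}")
print(f"Driver fatality risk per km:  {car_fatalities_per_km:.2e}")
print(f"Lifetime absolute risk over {lifetime_km:,} km: {lifetime_risk:.1%}")
```

A 14-fold relative risk still works out to an absolute lifetime risk of about one percent for even a very dedicated cyclist, which is why the relative figure alone is so misleading.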

Posted on April 13, 2010 at 1:18 PM

New York and the Moscow Subway Bombing

People intent on preventing a Moscow-style terrorist attack against the New York subway system are proposing a range of expensive new underground security measures, some temporary and some permanent.

They should save their money – and instead put every penny they’re considering pouring into new technologies toward intelligence and old-fashioned policing.

Intensifying security at specific stations only works against terrorists who aren’t smart enough to move to another station. Cameras are useful only if all the stars align: The terrorists happen to walk into the frame, the video feeds are being watched in real time and the police can respond quickly enough to be effective. They’re much more useful after an attack, to figure out who pulled it off.

Installing biological and chemical detectors requires similarly implausible luck – plus a terrorist plot that includes the specific biological or chemical agent that is being detected.

What all these misguided reactions have in common is that they’re based on “movie-plot threats”: overly specific attack scenarios. They fill our imagination vividly, in full color with rich detail. Before long, we’re envisioning an entire story line, with or without Bruce Willis saving the day. And we’re scared.

It’s not that movie-plot threats are not worth worrying about. It’s that each one – Moscow’s subway attack, the bombing of the Oklahoma City federal building, etc. – is too specific. These threats are infinite, and the bad guys can easily switch among them.

New York has thousands of possible targets, and there are dozens of possible tactics. Implementing security against movie-plot threats is only effective if we correctly guess which specific threat to protect against. That’s unlikely.

A far better strategy is to spend our limited counterterrorism resources on investigation and intelligence – and on emergency response. These measures don’t hinge on any specific threat; they don’t require us to guess the tactic or target correctly. They’re effective in a variety of circumstances, even nonterrorist ones.

The result may not be flashy or outwardly reassuring – as are pricey new scanners in airports. But the strategy will save more lives.

The 2006 arrest of the liquid bombers – who wanted to detonate liquid explosives brought on board airliners traveling from England to North America – serves as an excellent example. The plotters were arrested in their London apartments, and their attack was foiled before they ever got to the airport.

It didn’t matter if they were using liquids or solids or gases. It didn’t even matter if they were targeting airports or shopping malls or theaters. It was a straightforward, although hardly simple, matter of following leads.

Gimmicky security measures are tempting – but they’re distractions we can’t afford. The Christmas Day bomber chose his tactic because it would circumvent last year’s security measures, and the next attacker will choose his tactic – and target – according to similar criteria. Spend money on cameras and guards in the subways, and the terrorists will simply modify their plot to render those countermeasures ineffective.

Humans are a species of storytellers, and the Moscow story has obvious parallels in New York. When we read the word “subway,” we can’t help but think about the system we use every day. This is a natural response, but it doesn’t make for good public policy. We’d all be safer if we rose above the simple parallels and the need to calm our fears with expensive and seductive new technologies – and countered the threat the smart way.

This essay originally appeared in the New York Daily News.

Posted on April 7, 2010 at 8:52 AM

Security Cameras in the New York City Subways

The New York Times has an article about cameras in the subways. The article is all about how horrible it is that the cameras don’t work:

Moreover, nearly half of the subway system’s 4,313 security cameras that have been installed—in stations and tunnels throughout the system—do not work, because of either shoddy software or construction problems, say officials with the Metropolitan Transportation Authority, which operates the city’s bus, subway and train system.

I certainly agree that taxpayers should be upset when something they’ve purchased doesn’t function as expected. But way down at the bottom of the article, we find:

Even without the cameras, officials said crime in the transit system had dropped to a record low. In 1990, the system averaged 47.8 crimes a day, compared with 5.3 so far this year. “The subway system is safer than it’s ever been,” said Kevin Ortiz, an authority spokesman.

No data on how many crimes were solved by cameras, but we know from other studies that their effect on crime is minimal.

Posted on March 31, 2010 at 1:24 PM

Mark Twain on Risk Analysis

From 1871:

I hunted up statistics, and was amazed to find that after all the glaring newspaper headings concerning railroad disasters, less than three hundred people had really lost their lives by those disasters in the preceding twelve months. The Erie road was set down as the most murderous in the list. It had killed forty-six—or twenty-six, I do not exactly remember which, but I know the number was double that of any other road. But the fact straightway suggested itself that the Erie was an immensely long road, and did more business than any other line in the country; so the double number of killed ceased to be matter for surprise.

By further figuring, it appeared that between New York and Rochester the Erie ran eight passenger trains each way every day—sixteen altogether; and carried a daily average of 6,000 persons. That is about a million in six months—the population of New York city. Well, the Erie kills from thirteen to twenty-three persons out of its million in six months; and in the same time 13,000 of New York’s million die in their beds! My flesh crept, my hair stood on end. “This is appalling!” I said. “The danger isn’t in travelling by rail, but in trusting to those deadly beds. I will never sleep in a bed again.”
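Twain's back-of-the-envelope comparison can be reproduced directly. This is a sketch of his arithmetic; the 182-day half-year and the choice of the upper end of his 13-to-23 range are assumptions made here for illustration.

```python
# Reproducing Twain's 1871 arithmetic with the figures he gives.
daily_passengers = 6_000
days = 182                      # "six months"
passengers_per_half_year = daily_passengers * days   # "about a million"

rail_deaths = 23                # upper end of Twain's 13-23 range, per million
bed_deaths = 13_000             # New Yorkers dying "in their beds", per million

print(f"Passengers in six months: {passengers_per_half_year:,}")
print(f"Deaths per million per six months: rail ~{rail_deaths}, beds ~{bed_deaths}")
print(f"Beds come out ~{bed_deaths // rail_deaths}x deadlier by this comparison")
```

The punchline, of course, is that the comparison deliberately ignores exposure—everyone spends hours a day in bed and almost no time on the Erie road—which is precisely the kind of base-rate confusion the rest of this page is about.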

Posted on February 23, 2010 at 7:16 AM

Christmas Bomber: Where Airport Security Worked

With all the talk about the failure of airport security to detect the PETN that the Christmas bomber sewed into his underwear—and to think I’ve been using the phrase “underwear bomber” as a joke all these years—people forget that airport security played an important role in foiling the plot.

In order to get through airport security, Abdulmutallab—or, more precisely, whoever built the bomb—had to construct a far less reliable bomb than he would have otherwise; he had to resort to a much less effective detonation mechanism. And, as we’ve learned, detonating PETN is actually very hard.

Additionally, I don’t think it’s fair to criticize airport security for not catching the PETN. The security systems at airports aren’t designed to catch someone strapping a plastic explosive to his body. Even more strongly: no security system, at any airport, in any country on the planet, is designed to catch someone doing this. This isn’t a surprise. It isn’t even a new idea. It wasn’t even a new idea when I said this to then-TSA head Kip Hawley in 2007: “I don’t want to even think about how much C4 I can strap to my legs and walk through your magnetometers.” You can try to argue that the TSA, and other airport security organizations around the world, should have been redesigned years ago to catch this, but anyone who is surprised by this attack simply hasn’t been paying attention.

EDITED TO ADD (1/4): I don’t know what to make of this:

Ben Wallace, who used to work at defence firm QinetiQ, one of the companies making the technology, warned it was not a “big silver bullet”.


Mr Wallace said the scanners would probably not have detected the failed Detroit plane plot of Christmas Day.

He said the same of the 2006 airliner liquid bomb plot and of explosives used in the 2005 bombings of three Tube trains and a bus in London.


He said the “passive millimetre wave scanners” – which QinetiQ helped develop – probably would not have detected key plots affecting passengers in the UK in recent years.


Mr Wallace told BBC Radio 4’s Today programme: “The advantage of the millimetre waves are that they can be used at longer range, they can be quicker and they are harmless to travellers.

“But there is a big but, and the but was in all the testing that we undertook, it was unlikely that it would have picked up the current explosive devices being used by al-Qaeda.”

He added: “It probably wouldn’t have picked up the very large plot with the liquids in 2006 at Heathrow or indeed the… bombs that were used on the Tube because it wasn’t very good and it wasn’t that easy to detect liquids and plastics unless they were very solid plastics.

“This is not necessarily the big silver bullet that is somehow being portrayed by Downing Street.”

A spokeswoman for QinetiQ said “no single technology can address every eventuality or security risk”.

“QinetiQ’s passive millimetre wave system, SPO, is a… people-screening system which can identify potential security threats concealed on the human body. It is not a checkpoint security system.

“SPO can effectively shortlist people who may need further investigation, either via other technology such as x-rays, or human intervention such as a pat-down search.”

Posted on January 4, 2010 at 6:28 AM

Man Arrested by Amtrak Police for Taking Photographs for Amtrak Photography Contest

You can’t make this stuff up. Even Stephen Colbert made fun of it.

This isn’t the first time Amtrak police have been idiots.

And in related news, in the U.K. it soon might be illegal to photograph the police.

EDITED TO ADD (2/10): The photographer’s page about the incident has been replaced with the words “No comment!” Anyone have a link to a copy? In the meantime, here’s an entry about the incident on a photo activist’s blog.

EDITED AGAIN: Thanks to Phil M. in comments for finding these Google Cache links from Duane Kerzic’s site:

Phil adds: “The main Amtrak page on his site has since been crawled, so Google now has the ‘no comment’ note cached.”

Posted on February 10, 2009 at 6:19 AM

FBI Stoking Fear

Another unsubstantiated terrorist plot:

An internal memo obtained by The Associated Press says the FBI has received a “plausible but unsubstantiated” report that al-Qaida terrorists in late September may have discussed attacking the subway system.


The internal bulletin says al-Qaida terrorists “in late September may have discussed targeting transit systems in and around New York City. These discussions reportedly involved the use of suicide bombers or explosives placed on subway/passenger rail systems,” according to the document.

“We have no specific details to confirm that this plot has developed beyond aspirational planning, but we are issuing this warning out of concern that such an attack could possibly be conducted during the forthcoming holiday season,” according to the warning dated Tuesday.


Rep. Peter King, the top Republican on the House Homeland Security Committee, said authorities “have very real specifics as to who it is and where the conversation took place and who conducted it.”

“It certainly involves suicide bombing attacks on the mass transit system in and around New York and it’s plausible, but there’s no evidence yet that it’s in the process of being carried out,” King said.

Knocke, the DHS spokesman, said the warning was issued “out of an abundance of caution going into this holiday season.”

Got that: “plausible but unsubstantiated,” “may have discussed attacking the subway system,” “no specific details to confirm that this plot has developed beyond aspirational planning,” “attack could possibly be conducted,” “it’s plausible, but there’s no evidence yet that it’s in the process of being carried out.”

I have no specific details, but I want to warn everybody today that fiery rain might fall from the sky. Terrorists may have discussed this sort of tactic, possibly at one of their tequila-fueled aspirational planning sessions. While there is no evidence yet that the plan is in the process of being carried out, I want to be extra-cautious this holiday season. Ho ho ho.

Posted on November 27, 2008 at 12:27 PM

Terrorist Fear Mongering Seems to be Working Less Well, Part II

Last week I wrote about a story that indicated that terrorist fear mongering is working less well. Here’s another story, this one from Canada: two pipeline bombings in Northern British Columbia:

Investigators are treating the explosions as acts of vandalism, not terrorism, Shields said.

“Under the Criminal Code, it would be characterized as mischief, which is an intentional vandalism. We don’t want to characterize this as terrorism. They were very isolated locations and there would seem there was no intent to hurt people,” he said.

It’s not all good, though. Here’s a story from Philadelphia, where a subway car is criticized because people can see out the front. Because, um, because terrorists will be able to see out the front, and we all know how dangerous terrorists are:

Marcus Ruef, a national vice president with the Brotherhood of Locomotive Engineers and Trainmen, compared a train cab to an airliner cockpit and said a cab should be similarly secure. He invoked post-9/11 security concerns as a reason to provide a full cab that prevents passengers from seeing the rails and signals ahead.

“We don’t think the forward view of the right-of-way should be available to whoever wants to watch … and the conductor and the engineer should be able to talk privately,” Ruef said.

Pat Nowakowski, SEPTA chief of operations, said the smaller cabs pose no security risk. “I have never heard that from a security expert,” he said.

At least there was pushback against that kind of idiocy.

And from the UK:

Transport Secretary Geoff Hoon has said the government is prepared to go “quite a long way” with civil liberties to “stop terrorists killing people”.

He was responding to criticism of plans for a database of mobile and web records, saying it was needed because terrorists used such communications.

By not monitoring this traffic, it would be “giving a licence to terrorists to kill people”, he said.

I hope there will be similar pushback against this “choice.”

EDITED TO ADD (11/13): Seems like the Philadelphia engineers have another agenda—the cabs in the new trains are too small—and they’re just using security as an excuse.

Posted on October 22, 2008 at 6:44 AM

Terrorist Fear Mongering Seems to be Working Less Well

BART, the San Francisco subway authority, has been debating allowing passengers to bring drinks on trains. There are all sorts of good arguments for and against—convenience, problems with spills, and so on—but one argument that makes no sense is that terrorists may bring flammable liquids on board. Yet that is exactly what BART managers said.

No big news—we’ve seen stupid things like this regularly since 9/11—but this time people responded:

Added Director Tom Radulovich, “If somebody wants to break the law and bring flammable liquids on, they can. It’s not like al Qaeda is waiting in their caves for us to have a sippy-cup rule.”

Directing his comments to BART administrators, he said, “You know, it’s just fearmongering and you should be ashamed.”

Posted on October 15, 2008 at 7:07 AM

Full Disclosure and the Boston Farecard Hack

In eerily similar cases in the Netherlands and the United States, courts have recently grappled with the computer-security norm of “full disclosure,” asking whether researchers should be permitted to disclose details of a fare-card vulnerability that allows people to ride the subway for free.

The “Oyster card” used on the London Tube was at issue in the Dutch case, and a similar fare card used on the Boston “T” was at the center of the U.S. case. The Dutch court got it right, and the American court, in Boston, got it wrong from the start—despite facing an open-and-shut case of First Amendment prior restraint.

The U.S. court has since seen the error of its ways—but the damage is done. The MIT security researchers who were prepared to discuss their Boston findings at the DefCon security conference were prevented from giving their talk.

The ethics of full disclosure are intimately familiar to those of us in the computer-security field. Before full disclosure became the norm, researchers would quietly disclose vulnerabilities to the vendors—who would routinely ignore them. Sometimes vendors would even threaten researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers started disclosing the existence of a vulnerability but not the details. Vendors responded by denying the security holes’ existence, or calling them just theoretical. It wasn’t until full disclosure became the norm that vendors began consistently fixing vulnerabilities quickly. Now that vendors routinely patch vulnerabilities, researchers generally give them advance notice to allow them to patch their systems before the vulnerability is published. But even with this “responsible disclosure” protocol, it’s the threat of disclosure that motivates them to patch their systems. Full disclosure is the mechanism by which computer security improves.

Outside of computer security, secrecy is much more the norm. Some security communities, like locksmiths, behave much like medieval guilds, divulging the secrets of their profession only to those within it. These communities hate open research, and have responded with surprising vitriol to researchers who have found serious vulnerabilities in bicycle locks, combination safes, master-key systems and many other security devices.

Researchers have received a similar reaction from other communities more used to secrecy than openness. Researchers—sometimes young students—who discovered and published flaws in copyright-protection schemes, voting-machine security and now wireless access cards have all suffered recriminations and sometimes lawsuits for not keeping the vulnerabilities secret. When Christopher Soghoian created a website allowing people to print fake airline boarding passes, he got several unpleasant visits from the FBI.

This preference for secrecy comes from confusing a vulnerability with information about that vulnerability. Using secrecy as a security measure is fundamentally fragile. It assumes that the bad guys don’t do their own security research. It assumes that no one else will find the same vulnerability. It assumes that information won’t leak out even if the research results are suppressed. These assumptions are all incorrect.

The problem isn’t the researchers; it’s the products themselves. Companies will only design security as good as what their customers know to ask for. Full disclosure helps customers evaluate the security of the products they buy, and educates them in how to ask for better security. The Dutch court got it exactly right when it wrote: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”

In a world of forced secrecy, vendors make inflated claims about their products, vulnerabilities don’t get fixed, and customers are no wiser. Security research is stifled, and security technology doesn’t improve. The only beneficiaries are the bad guys.

If you’ll forgive the analogy, the ethics of full disclosure parallel the ethics of not paying kidnapping ransoms. We all know why we don’t pay kidnappers: It encourages more kidnappings. Yet in every kidnapping case, there’s someone—a spouse, a parent, an employer—with a good reason why, in this one case, we should make an exception.

The reason we want researchers to publish vulnerabilities is because that’s how security improves. But in every case there’s someone—the Massachusetts Bay Transit Authority, the locksmiths, an election machine manufacturer—who argues that, in this one case, we should make an exception.

We shouldn’t. The benefits of responsibly publishing attacks greatly outweigh the potential harm. Disclosure encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers. It’s how we learn about security, and how we improve future security.

This essay previously appeared on Wired.com.

EDITED TO ADD (8/26): Matt Blaze has a good essay on the topic.

EDITED TO ADD (9/12): A good legal analysis.

Posted on August 26, 2008 at 6:04 AM
