Entries Tagged "risk assessment"


Threat Modeling at Microsoft

Interesting paper by Adam Shostack:

Abstract. Describes a decade of experience threat modeling products and services at Microsoft. Describes the current threat modeling methodology used in the Security Development Lifecycle. The methodology is a practical approach, usable by non-experts, centered on data flow diagrams and a threat enumeration technique of ‘STRIDE per element.’ The paper covers some lessons learned which are likely applicable to other security analysis techniques. The paper closes with some possible questions for academic research.

Posted on October 13, 2008 at 6:21 AM

Taleb on the Limitations of Risk Management

Nice paragraph on the limitations of risk management in this occasionally interesting interview with Nicholas Taleb:

Because then you get a Maginot Line problem. [After World War I, the French erected concrete fortifications to prevent Germany from invading again—a response to the previous war, which proved ineffective for the next one.] You know, they make sure they solve that particular problem, the Germans will not invade from here. The thing you have to be aware of most obviously is scenario planning, because typically if you talk about scenarios, you’ll overestimate the probability of these scenarios. If you examine them at the expense of those you don’t examine, sometimes it has left a lot of people worse off, so scenario planning can be bad. I’ll just take my track record. Those who did scenario planning have not fared better than those who did not do scenario planning. A lot of people have done some kind of “make-sense” type measures, and that has made them more vulnerable because they give the illusion of having done your job. This is the problem with risk management. I always come back to a classical question. Don’t give a fool the illusion of risk management. Don’t ask someone to guess the number of dentists in Manhattan after asking him the last four digits of his Social Security number. The numbers will always be correlated. I actually did some work on risk management, to show how stupid we are when it comes to risk.

Posted on October 3, 2008 at 7:48 AM

Movie-Plot Threats in the Guardian

We spend far more effort defending our countries against specific movie-plot threats than against the real, broad threats. In the US during the months after the 9/11 attacks, we feared terrorists with scuba gear, terrorists with crop dusters and terrorists contaminating our milk supply. Both the UK and the US fear terrorists with small bottles of liquid. Our imaginations run wild with vivid specific threats. Before long, we’re envisioning an entire movie plot, without Bruce Willis saving the day. And we’re scared.

It’s not just terrorism; it’s any rare risk in the news. The big fear in Canada right now, following a particularly gruesome incident, is random decapitations on intercity buses. In the US, fears of school shootings are much greater than the actual risks. In the UK, it’s child predators. And people all over the world mistakenly fear flying more than driving. But the very definition of news is something that hardly ever happens. If an incident is in the news, we shouldn’t worry about it. It’s when something is so common that it’s no longer news – car crashes, domestic violence – that we should worry. But that’s not the way people think.

Psychologically, this makes sense. We are a species of storytellers. We have good imaginations and we respond more emotionally to stories than to data. We also judge the probability of something by how easy it is to imagine, so stories that are in the news feel more probable – and ominous – than stories that are not. As a result, we overreact to the rare risks we hear stories about, and fear specific plots more than general threats.

The problem with building security around specific targets and tactics is that it’s only effective if we happen to guess the plot correctly. If we spend billions defending the Underground and terrorists bomb a school instead, we’ve wasted our money. If we focus on the World Cup and terrorists attack Wimbledon, we’ve wasted our money.

It’s this fetish-like focus on tactics that results in the security follies at airports. We ban guns and knives, and terrorists use box-cutters. We take away box-cutters and corkscrews, so they put explosives in their shoes. We screen shoes, so they use liquids. We take away liquids, and they’re going to do something else. Or they’ll ignore airplanes entirely and attack a school, church, theatre, stadium, shopping mall, airport terminal outside the security area, or any of the other places where people pack together tightly.

These are stupid games, so let’s stop playing. Some high-profile targets deserve special attention and some tactics are worse than others. Airplanes are particularly important targets because they are national symbols and because a small bomb can kill everyone aboard. Seats of government are also symbolic, and therefore attractive, targets. But targets and tactics are interchangeable.

The following three things are true about terrorism. One, the number of potential terrorist targets is infinite. Two, the odds of the terrorists going after any one target are zero. And three, the cost to the terrorist of switching targets is zero.

We need to defend against the broad threat of terrorism, not against specific movie plots. Security is most effective when it doesn’t require us to guess. We need to focus resources on intelligence and investigation: identifying terrorists, cutting off their funding and stopping them regardless of what their plans are. We need to focus resources on emergency response: lessening the impact of a terrorist attack, regardless of what it is. And we need to face the geopolitical consequences of our foreign policy.

In 2006, UK police arrested the liquid bombers not through diligent airport security, but through intelligence and investigation. It didn’t matter what the bombers’ target was. It didn’t matter what their tactic was. They would have been arrested regardless. That’s smart security. Now we confiscate liquids at airports, just in case another group happens to attack the exact same target in exactly the same way. That’s just illogical.

This essay originally appeared in The Guardian. Nothing I haven’t already said elsewhere.

Posted on September 4, 2008 at 5:56 AM

Security ROI

Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.

It’s become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.

It’s a good idea in theory, but it’s mostly bunk in practice.

Before I get into the details, there’s one point I have to make. “ROI” as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It’s an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn’t make sense in this context.

But as anyone who has lived through a company’s vicious end-of-year budget-slashing exercises knows, when you’re trying to make your numbers, cutting costs is the same as increasing revenues. So while security can’t produce ROI, loss prevention most certainly affects a company’s bottom line.

And a company should implement only security countermeasures that affect its bottom line positively. It shouldn’t spend more on a security problem than the problem is worth. Conversely, it shouldn’t ignore problems that are costing it money when there are cheaper mitigation alternatives. A smart company needs to approach security as it would any other business decision: costs versus benefits.

The classic methodology is called annualized loss expectancy (ALE), and it’s straightforward. Calculate the cost of a security incident in both tangibles like time and money, and intangibles like reputation and competitive advantage. Multiply that by the chance the incident will occur in a year. That tells you how much you should spend to mitigate the risk. So, for example, if your store has a 10 percent chance of getting robbed and the cost of being robbed is $10,000, then you should spend $1,000 a year on security. Spend more than that, and you’re wasting money. Spend less than that, and you’re also wasting money.

Of course, that $1,000 has to reduce the chance of being robbed to zero in order to be cost-effective. If a security measure cuts the chance of robbery by 40 percent—to 6 percent a year—then you should spend no more than $400 on it. If another security measure reduces it by 80 percent, it’s worth $800. And if two security measures both reduce the chance of being robbed by 50 percent and one costs $300 and the other $700, the first one is worth it and the second isn’t.
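For readers who want the arithmetic spelled out, here is a minimal Python sketch of the ALE logic above. The figures are just the hypothetical store-robbery numbers from the example, not real data.

```python
# Annualized Loss Expectancy: expected yearly loss from an incident.
def ale(incident_cost, annual_probability):
    return incident_cost * annual_probability

# Baseline risk: $10,000 robbery, 10 percent chance per year -> $1,000/year.
baseline = ale(10_000, 0.10)

# A countermeasure is worth at most the ALE it eliminates.
def countermeasure_worth(incident_cost, annual_probability, risk_reduction):
    return ale(incident_cost, annual_probability) * risk_reduction

print(countermeasure_worth(10_000, 0.10, 0.40))  # cuts risk 40% -> worth $400/yr
print(countermeasure_worth(10_000, 0.10, 0.80))  # cuts risk 80% -> worth $800/yr
print(countermeasure_worth(10_000, 0.10, 0.50))  # halves risk  -> worth $500/yr,
# so a $300 measure that halves the risk pays for itself and a $700 one does not.
```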

The Data Imperative

The key to making this work is good data; the term of art is “actuarial tail.” If you’re doing an ALE analysis of a security camera at a convenience store, you need to know the crime rate in the store’s neighborhood and maybe have some idea of how much cameras improve the odds of convincing criminals to rob another store instead. You need to know how much a robbery costs: in merchandise, in time and annoyance, in lost sales due to spooked patrons, in employee morale. You need to know how much not having the cameras costs in terms of employee morale; maybe you’re having trouble hiring salespeople to work the night shift. With all that data, you can figure out if the cost of the camera is cheaper than the loss of revenue if you close the store at night—assuming that the closed store won’t get robbed as well. And then you can decide whether to install one.

Cybersecurity is considerably harder, because there just isn’t enough good data. There aren’t good crime rates for cyberspace, and we have a lot less data about how individual security countermeasures—or specific configurations of countermeasures—mitigate those risks. We don’t even have data on incident costs.

One problem is that the threat moves too quickly. The characteristics of the things we’re trying to prevent change so quickly that we can’t accumulate data fast enough. By the time we get some data, there’s a new threat model for which we don’t have enough data. So we can’t create ALE models.

But there’s another problem, and it’s that the math quickly falls apart when it comes to rare and expensive events. Imagine you calculate the cost—reputational costs, loss of customers, etc.—of having your company’s name in the newspaper after an embarrassing cybersecurity event to be $20 million. Also assume that the odds are 1 in 10,000 of that happening in any one year. ALE says you should spend no more than $2,000 mitigating that risk.

So far, so good. But maybe your CFO thinks an incident would cost only $10 million. You can’t argue, since we’re just estimating. But he just cut your security budget in half. A vendor trying to sell you a product finds a Web analysis claiming that the odds of this happening are actually 1 in 1,000. Accept this new number, and suddenly a product costing 10 times as much is still a good investment.

It gets worse when you deal with even more rare and expensive events. Imagine you’re in charge of terrorism mitigation at a chlorine plant. What’s the cost to your company, in money and reputation, of a large and very deadly explosion? $100 million? $1 billion? $10 billion? And the odds: 1 in a hundred thousand, 1 in a million, 1 in 10 million? Depending on how you answer those two questions—and any answer is really just a guess—you can justify spending anywhere from $10 to $100,000 annually to mitigate that risk.
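A small sketch, using the same guessed ranges as the chlorine-plant example above (my numbers, purely illustrative), shows how the “justified” budget swings across four orders of magnitude depending on which estimates you pick:

```python
# ALE for a rare, catastrophic event: every plausible guess gives a different
# "correct" mitigation budget.
costs = [100e6, 1e9, 10e9]            # $100M, $1B, $10B
odds = [1 / 1e5, 1 / 1e6, 1 / 1e7]    # 1 in 100,000 ... 1 in 10 million

for cost in costs:
    for p in odds:
        print(f"cost ${cost:,.0f} at odds {p:.0e}: spend up to ${cost * p:,.0f}/year")

# The answers range from $10/year (lowest cost, rarest odds)
# to $100,000/year (highest cost, most frequent odds).
```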

Or take another example: airport security. Assume that all the new airport security measures increase the waiting time at airports by—and I’m making this up—30 minutes per passenger. There were 760 million passenger boardings in the United States in 2007. This means that the extra waiting time at airports has cost us a collective 43,000 years of extra waiting time. Assume a 70-year life expectancy, and the increased waiting time has “killed” 620 people per year—930 if you calculate the numbers based on 16 hours of awake time per day. So the question is: If we did away with increased airport security, would the result be more people dead from terrorism or fewer?
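The back-of-the-envelope arithmetic behind those numbers, using the same made-up 30-minute figure:

```python
boardings = 760_000_000              # US passenger boardings, 2007
extra_hours = boardings * 0.5        # 30 made-up extra minutes per boarding

years_total = extra_hours / (24 * 365)   # ~43,400 calendar-years of waiting
years_awake = extra_hours / (16 * 365)   # ~65,100 years of waking time

print(years_total / 70)   # ~620 seventy-year lifetimes "killed" per year
print(years_awake / 70)   # ~930 lifetimes, counting only waking hours
```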

Caveat Emptor

This kind of thing is why most ROI models you get from security vendors are nonsense. Of course their model demonstrates that their product or service makes financial sense: They’ve jiggered the numbers so that they do.

This doesn’t mean that ALE is useless, but it does mean you should 1) mistrust any analyses that come from people with an agenda and 2) use any results as a general guideline only. So when you get an ROI model from your vendor, take its framework and plug in your own numbers. Don’t even show the vendor your improvements; it won’t consider any changes that make its product or service less cost-effective to be an “improvement.” And use those results as a general guide, along with risk management and compliance analyses, when you’re deciding what security products and services to buy.

This essay previously appeared in CSO Magazine.

Posted on September 2, 2008 at 6:05 AM

Mental Illness and Murder

Contrary to popular belief, homicide due to mental illness is declining, at least in England and Wales:

The rate of total homicide and the rate of homicide due to mental disorder rose steadily until the mid-1970s. From then there was a reversal in the rate of homicides attributed to mental disorder, which declined to historically low levels, while other homicides continued to rise.

Paper and press release.

Remember this the next time you read a newspaper article about how scared everyone is because some patients escaped from a mental institution:

We are convinced by the media that people with serious mental illnesses make a significant contribution to murders, and we formulate our approach as a society to tens of thousands of people on the basis of the actions of about 20. Once again, the decisions we make, the attitudes we have, and the prejudices we express are all entirely rational, when analysed in terms of the flawed information we are fed, only half chewed, from the mouths of morons.

Posted on August 19, 2008 at 3:23 PM

Random Killing on a Canadian Greyhound Bus

After a random and horrific knife decapitation on a Greyhound bus last week, does this surprise anyone:

A grisly slaying on a Greyhound bus has prompted calls for tighter security on Canadian bus lines, despite the company and Canada’s transport agency calling the stabbing death a tragic but isolated incident.

Greyhound spokeswoman Abby Wambaugh said bus travel is the safest mode of transportation, even though bus stations do not have metal detectors and other security measures used at airports.

Despite editorials telling people not to overreact, it’s easy to:

“Hearing about this incident really worries me,” said Donna Ryder, 56, who was waiting Thursday at the bus depot in Toronto.

“I’m in a wheelchair and what would I be able to do to defend myself? Probably nothing. So that’s really scary.”

Ryder, who was heading to Kitchener, Ont., said buses are essentially the only way she can get around the province, as her wheelchair won’t fit on Via Rail trains. As it is her main option for travel, a lack of security is troubling, she said.

“I guess we’re going to have to go the airline way, maybe have a search and baggage check, X-ray maybe,” she said.

“Really, I don’t know what you can do about security anymore.”

Of course, airplane security won’t work on buses.

But—more to the point—this essay I wrote on overreacting to rare risks applies here:

People tend to base risk analysis more on personal story than on data, despite the old joke that “the plural of anecdote is not data.” If a friend gets mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than abstract crime statistics.

We give storytellers we have a relationship with more credibility than strangers, and stories that are close to us more weight than stories from foreign lands. In other words, proximity of relationship affects our risk assessment. And who is everyone’s major storyteller these days? Television.

Which is why Canadians, and not Americans, are talking about increasing security on long-haul buses.

EDITED TO ADD (8/4): Look at this headline: “Man beheads girlfriend on Santorini island.” Do we need airport-style security measures for Greek islands, too?

EDITED TO ADD (8/5): A surprisingly refreshing editorial:

Here is our suggestion for what ought to be done to upgrade the security of bus transportation after the knife killing of Tim McLean by a fellow Greyhound bus passenger: nothing. Leave the system alone. Mr. McLean could have been murdered equally easily by a random psychopath in a movie theatre or a classroom or a wine bar or a shopping mall—or on his front lawn, for that matter. Unless all of those venues, too, are to be included in the new post-Portage la Prairie security crackdown, singling out buses makes no sense.

Posted on August 4, 2008 at 6:19 AM

Security and Human Behavior

I’m writing from the First Interdisciplinary Workshop on Security and Human Behavior (SHB 08).

Security is both a feeling and a reality, and they’re different. There are several different research communities: technologists who study security systems, and psychologists who study people, not to mention economists, anthropologists and others. Increasingly these worlds are colliding.

  • Security design is by nature psychological, yet many systems ignore this, and cognitive biases lead people to misjudge risk. For example, a key icon in the corner of a web browser makes people feel more secure than they actually are, while people feel far less secure flying than they actually are. These biases are exploited by various attackers.

  • Security problems relate to risk and uncertainty, and the way we react to them. Cognitive and perception biases affect the way we deal with risk, and therefore the way we understand security—whether that is the security of a nation, of an information system, or of one’s personal information.

  • Many real attacks on information systems exploit psychology more than technology. Phishing attacks trick people into logging on to websites that appear genuine but actually steal passwords. Technical measures can stop some phishing tactics, but stopping users from making bad decisions is much harder. Deception-based attacks are now the greatest threat to online security.

  • In order to be effective, security must be usable—not just by geeks, but by ordinary people. Research into usable security invariably has a psychological component.

  • Terrorism is perceived to be a major threat to society. Yet the actual damage done by terrorist attacks is dwarfed by the secondary effects as target societies overreact. There are many topics here, from the manipulation of risk perception to the anthropology of religion.

  • There are basic research questions; for example, about the extent to which the use and detection of deception in social contexts may have helped drive human evolution.

The dialogue between researchers in security and in psychology is rapidly widening, bringing in more and more disciplines—from security usability engineering, protocol design, privacy, and policy on the one hand, and from social psychology, evolutionary biology, and behavioral economics on the other.

About a year ago Ross Anderson and I conceived this conference as a way to bring together computer security researchers, psychologists, behavioral economists, sociologists, philosophers, and others—all of whom are studying the human side of security. I’ve read a lot—and written some—on psychology and security over the past few years, and have been continually amazed by some of the research that people outside my field have been doing on topics very relevant to my field. Ross and I both thought that bringing these diverse communities together would be fascinating to everyone. So we convinced behavioral economists Alessandro Acquisti and George Loewenstein to help us organize the workshop, invited the people we all have been reading, and also asked them who else to invite. The response was overwhelming. Almost everyone we wanted was able to attend, and the result was a 42-person conference with 35 speakers.

We’re most of the way through the morning, and it’s been even more fascinating than I expected. (Here’s the agenda.) We’ve talked about detecting deception in people, organizational biases in making security decisions, building security “intuition” into Internet browsers, different techniques to prevent crime, complexity and failure, and the modeling of security feeling.

I had high hopes of liveblogging this event, but it’s far too fascinating to spend time writing posts. If you want to read some of the more interesting papers written by the participants, this is a good page to start with.

I’ll write more about the conference later.

EDITED TO ADD (6/30): Ross Anderson has a blog post, where he liveblogs the individual sessions in the comments. And I should add that this was an invitational event—which is why you haven’t heard about it before—and that the room here at MIT is completely full.

EDITED TO ADD (7/1): Matt Blaze has posted audio. And Ross Anderson—link above—is posting paragraph-long summaries for each speaker.

EDITED TO ADD (7/6): Photos of the speakers.

EDITED TO ADD (7/7): MSNBC article on the workshop. And L. Jean Camp’s notes.

Posted on June 30, 2008 at 11:17 AM

How to Sell Security

It’s a truism in sales that it’s easier to sell someone something he wants than a defense against something he wants to avoid. People are reluctant to buy insurance, or home security devices, or computer security anything. It’s not that they don’t ever buy these things, but it’s an uphill struggle.

The reason is psychological. And it’s the same dynamic when it’s a security vendor trying to sell its products or services, a CIO trying to convince senior management to invest in security, or a security officer trying to implement a security policy with her company’s employees.

It’s also true that the better you understand your buyer, the better you can sell.

First, a bit about Prospect Theory, the underlying theory behind the newly popular field of behavioral economics. Prospect Theory was developed by Daniel Kahneman and Amos Tversky in 1979 (Kahneman went on to win a Nobel Prize for this and other similar work) to explain how people make trade-offs that involve risk. Before this work, economists had a model of “economic man,” a rational being who makes trade-offs based on some logical calculation. Kahneman and Tversky showed that real people are far more subtle and ornery.

Here’s an experiment that illustrates Prospect Theory. Take a roomful of subjects and divide them into two groups. Ask one group to choose between these two alternatives: a sure gain of $500 and a 50 percent chance of gaining $1,000. Ask the other group to choose between these two alternatives: a sure loss of $500 and a 50 percent chance of losing $1,000.

These two trade-offs are very similar, and traditional economics predicts that whether you’re contemplating a gain or a loss doesn’t make a difference: People make trade-offs based on a straightforward calculation of the relative outcome. Some people prefer sure things and others prefer to take chances. Whether the outcome is a gain or a loss doesn’t affect the mathematics and therefore shouldn’t affect the results. This is traditional economics, and it’s called Utility Theory.
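To make “very similar” concrete: the two alternatives in each framing have exactly the same expected value, which is why Utility Theory sees nothing to choose between them. A quick sketch:

```python
def expected_value(lottery):
    """Expected value of a list of (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in lottery)

# Gain framing: both options average out to +$500.
print(expected_value([(1.0, 500)]))               # sure gain: 500.0
print(expected_value([(0.5, 1000), (0.5, 0)]))    # gamble:    500.0

# Loss framing: both options average out to -$500.
print(expected_value([(1.0, -500)]))              # sure loss: -500.0
print(expected_value([(0.5, -1000), (0.5, 0)]))   # gamble:    -500.0
```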

But Kahneman’s and Tversky’s experiments contradicted Utility Theory. When faced with a gain, about 85 percent of people chose the sure smaller gain over the risky larger gain. But when faced with a loss, about 70 percent chose the risky larger loss over the sure smaller loss.

This experiment, repeated again and again by many researchers across ages, genders, cultures and even species, always yielded the same result, and it rocked economics. Directly contradicting the traditional idea of “economic man,” Prospect Theory recognizes that people have subjective values for gains and losses. We have evolved a cognitive bias: a pair of heuristics. One, a sure gain is better than a chance at a greater gain, or “A bird in the hand is worth two in the bush.” And two, a sure loss is worse than a chance at a greater loss, or “Run away and live to fight another day.” Of course, these are not rigid rules. Only a fool would take a sure $100 over a 50 percent chance at $1,000,000. But all things being equal, we tend to be risk-averse when it comes to gains and risk-seeking when it comes to losses.

This cognitive bias is so powerful that it can lead to logically inconsistent results. Google the “Asian Disease Experiment” for an almost surreal example. Describing the same policy choice in different ways—either as “200 lives saved out of 600” or “400 lives lost out of 600”—yields wildly different risk reactions.

Evolutionarily, the bias makes sense. It’s a better survival strategy to accept small gains rather than risk them for larger ones, and to risk larger losses rather than accept smaller losses. Lions, for example, chase young or wounded wildebeests because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there’s a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow. Similarly, it is better to risk a larger loss than to accept a smaller loss. Because animals tend to live on the razor’s edge between starvation and reproduction, any loss of food—whether small or large—can be equally bad: both can result in death, so the best option is to risk everything for the chance at no loss at all.

How does Prospect Theory explain the difficulty of selling the prevention of a security breach? It’s a choice between a small sure loss—the cost of the security product—and a large risky loss: for example, the results of an attack on one’s network. Of course there’s a lot more to the sale. The buyer has to be convinced that the product works, and he has to understand the threats against him and the risk that something bad will happen. But all things being equal, buyers would rather take the chance that the attack won’t happen than suffer the sure loss that comes from purchasing the security product.

Security sellers know this, even if they don’t understand why, and are continually trying to frame their products in terms of positive results. That’s why you see slogans with the basic message, “We take care of security so you can focus on your business,” or carefully crafted ROI models that demonstrate how profitable a security purchase can be. But these never seem to work. Security is fundamentally a negative sell.

One solution is to stoke fear. Fear is a primal emotion, far older than our ability to calculate trade-offs. And when people are truly scared, they’re willing to do almost anything to make that feeling go away; lots of other psychological research supports that. Any burglar alarm salesman will tell you that people buy only after they’ve been robbed, or after one of their neighbors has been robbed. And the fears stoked by 9/11, and the politics surrounding 9/11, have fueled an entire industry devoted to counterterrorism. When emotion takes over like that, people are much less likely to think rationally.

Though effective, fear mongering is not very ethical. The better solution is not to sell security directly, but to include it as part of a more general product or service. Your car comes with safety and security features built in; they’re not sold separately. Same with your house. And it should be the same with computers and networks. Vendors need to build security into the products and services that customers actually want. CIOs should include security as an integral part of everything they budget for. Security shouldn’t be a separate policy for employees to follow but part of overall IT policy.

Security is inherently about avoiding a negative, so you can never ignore the cognitive bias embedded so deeply in the human brain. But if you understand it, you have a better chance of overcoming it.

This essay originally appeared in CIO.

Posted on May 26, 2008 at 5:57 AM

Risk and Culture

The Second National Risk and Culture Study, conducted by the Cultural Cognition Project at Yale Law School.

Abstract:

Cultural Cognition refers to the disposition to conform one’s beliefs about societal risks to one’s preferences for how society should be organized. Based on surveys and experiments involving some 5,000 Americans, the Second National Risk and Culture Study presents empirical evidence of the effect of this dynamic in generating conflict about global warming, school shootings, domestic terrorism, nanotechnology, and the mandatory vaccination of school-age girls against HPV, among other issues. The Study also presents evidence of risk-communication strategies that counteract cultural cognition. Because nuclear power affirms rather than threatens the identity of persons who hold individualist values, for example, proposing it as a solution to global warming makes persons who hold such values more willing to consider evidence that climate change is a serious risk. Because people tend to impute credibility to people who share their values, persons who hold hierarchical and egalitarian values are less likely to polarize when they observe people who hold their values advocating unexpected positions on the vaccination of young girls against HPV. Such techniques can help society to create a deliberative climate in which citizens converge on policies that are both instrumentally sound and expressively congenial to persons of diverse values.

And from the conclusion:

There is a culture war in America, but it is about facts, not values. There is very little evidence that most Americans care nearly as much about issues that symbolize competing cultural values as they do about the economy, national security, and the safety and health of themselves and their loved ones. There is ample evidence, however, that Americans are sharply divided along cultural lines about what sorts of conditions endanger these interests and what sorts of policies effectively counteract such risks.

Findings from the Second National Culture and Risk Study help to show why. Psychologically speaking, it’s much easier to believe that conduct one finds dishonorable or offensive is dangerous, and conduct one finds noble or admirable is socially beneficial, than vice versa. People are also much more inclined to accept information about risk and danger when it comes from someone who shares their values than when it comes from someone who holds opposing commitments.

Posted on May 21, 2008 at 5:19 AM

Al Qaeda Threat Overrated

Seems obvious to me:

“I reject the notion that Al Qaeda is waiting for ‘the big one’ or holding back an attack,” Sheehan writes. “A terrorist cell capable of attacking doesn’t sit and wait for some more opportune moment. It’s not their style, nor is it in the best interest of their operational security. Delaying an attack gives law enforcement more time to detect a plot or penetrate the organization.”

Terrorism is not about standing armies, mass movements, riots in the streets or even palace coups. It’s about tiny groups that want to make a big bang. So you keep tracking cells and potential cells, and when you find them you destroy them. After Spanish police cornered leading members of the group that attacked trains in Madrid in 2004, they blew themselves up. The threat in Spain declined dramatically.

Indonesia is another case Sheehan and I talked about. Several high-profile associates of bin Laden were nailed there in the two years after 9/11, then sent off to secret CIA prisons for interrogation. The suspects are now at Guantánamo. But suicide bombings continued until police using forensic evidence—pieces of car bombs and pieces of the suicide bombers—tracked down Dr. Azahari bin Husin, “the Demolition Man,” and the little group around him. In a November 2005 shootout the cops killed Dr. Azahari and crushed his cell. After that such attacks in Indonesia stopped.

The drive to obliterate the remaining hives of Al Qaeda training activity along the Afghanistan-Pakistan frontier and those that developed in some corners of Iraq after the U.S. invasion in 2003 needs to continue, says Sheehan. It’s especially important to keep wanna-be jihadists in the West from joining with more experienced fighters who can give them hands-on weapons and explosives training. When left to their own devices, as it were, most homegrown terrorists can’t cut it. For example, on July 7, 2005, four bombers blew themselves up on public transport in London, killing 56 people. Two of those bombers had trained in Pakistan. Another cell tried to do the same thing two weeks later, but its members had less foreign training, or none. All the bombs were duds.

[…]

Sir David Omand, who used to head Britain’s version of the National Security Agency and oversaw its entire intelligence establishment from the Cabinet Office earlier this decade, described terrorism as “one corner” of the global security threat posed by weapons proliferation and political instability. That in turn is only one of three major dangers facing the world over the next few years. The others are the deteriorating environment and a meltdown of the global economy. Putting terrorism in perspective, said Sir David, “leads naturally to a risk management approach, which is very different from what we’ve heard from Washington these last few years, which is to ‘eliminate the threat’.”

Yet when I asked the panelists at the forum if Al Qaeda has been overrated, suggesting as Sheehan does that most of its recruits are bunglers, all shook their heads. Nobody wants to say such a thing on the record, in case there’s another attack tomorrow and their remarks get quoted back to them.

That’s part of what makes Sheehan so refreshing. He knows there’s a big risk that he’ll be misinterpreted; he’ll be called soft on terror by ass-covering bureaucrats, breathless reporters and fear-peddling politicians. And yet he charges ahead. He expects another attack sometime, somewhere. He hopes it won’t be made to seem more apocalyptic than it is. “Don’t overhype it, because that’s what Al Qaeda wants you to do. Terrorism is about psychology.” In the meantime, said Sheehan, finishing his fruit juice, “the relentless 24/7 job for people like me is to find and crush those guys.”

I’ve ordered Sheehan’s book, Crush the Cell: How to Defeat Terrorism Without Terrorizing Ourselves.

Posted on May 7, 2008 at 12:56 PM

