Entries Tagged "threat models"


The Seven Habits of Highly Ineffective Terrorists

Most counterterrorism policies fail, not because of tactical problems, but because of a fundamental misunderstanding of what motivates terrorists in the first place. If we’re ever going to defeat terrorism, we need to understand what drives people to become terrorists.

Conventional wisdom holds that terrorism is inherently political, and that people become terrorists for political reasons. This is the “strategic” model of terrorism, and it’s basically an economic model. It posits that people resort to terrorism when they believe — rightly or wrongly — that terrorism is worth it; that is, when they believe the political gains of terrorism minus the political costs are greater than if they engaged in some other, more peaceful form of protest. It’s assumed, for example, that people join Hamas to achieve a Palestinian state; that people join the PKK to attain a Kurdish national homeland; and that people join al-Qaida to, among other things, get the United States out of the Persian Gulf.
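Stated as a decision rule, the strategic model is just a cost-benefit comparison: choose terrorism when its expected political payoff beats the peaceful alternative. Here is a minimal sketch of that comparison in Python; the probabilities and payoffs are invented placeholders for illustration, not anything Abrahms or anyone else has measured.

```python
# Toy version of the "strategic" model: a group (or would-be recruit) compares
# the expected political payoff of terrorism against a peaceful alternative.
# All numbers are illustrative placeholders, not empirical estimates.

def expected_payoff(p_success: float, gain: float, cost: float) -> float:
    """Expected political gain minus the political cost of the tactic."""
    return p_success * gain - cost

terrorism = expected_payoff(p_success=0.05, gain=100.0, cost=40.0)
protest   = expected_payoff(p_success=0.30, gain=100.0, cost=5.0)

# The strategic model predicts violence only when it "pays" politically.
choice = "terrorism" if terrorism > protest else "peaceful protest"
print(f"terrorism: {terrorism:+.1f}, protest: {protest:+.1f} -> choose {choice}")
```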

If you believe this model, the way to fight terrorism is to change that equation, and that’s what most experts advocate. Governments tend to minimize the political gains of terrorism through a no-concessions policy; the international community tends to recommend reducing the political grievances of terrorists via appeasement, in hopes of getting them to renounce violence. Both advocate policies to provide effective nonviolent alternatives, like free elections.

Historically, none of these solutions has worked with any regularity. Max Abrahms, a predoctoral fellow at Stanford University’s Center for International Security and Cooperation, has studied dozens of terrorist groups from all over the world. He argues that the model is wrong. In a paper published this year in International Security that — sadly — doesn’t have the title “Seven Habits of Highly Ineffective Terrorists,” he discusses, well, seven habits of highly ineffective terrorists. These seven tendencies are seen in terrorist organizations all over the world, and they directly contradict the theory that terrorists are political maximizers:

Terrorists, he writes, (1) attack civilians, a policy that has a lousy track record of convincing those civilians to give the terrorists what they want; (2) treat terrorism as a first resort, not a last resort, failing to embrace nonviolent alternatives like elections; (3) don’t compromise with their target country, even when those compromises are in their best interest politically; (4) have protean political platforms, which regularly, and sometimes radically, change; (5) often engage in anonymous attacks, which precludes the target countries making political concessions to them; (6) regularly attack other terrorist groups with the same political platform; and (7) resist disbanding, even when they consistently fail to achieve their political objectives or when their stated political objectives have been achieved.

Abrahms has an alternative model to explain all this: People turn to terrorism for social solidarity. He theorizes that people join terrorist organizations worldwide in order to be part of a community, much like the reason inner-city youths join gangs in the United States.

The evidence supports this. Individual terrorists often have no prior involvement with a group’s political agenda, and often join multiple terrorist groups with incompatible platforms. Individuals who join terrorist groups are frequently not oppressed in any way, and often can’t describe the political goals of their organizations. People who join terrorist groups most often have friends or relatives who are members of the group, and the great majority of terrorists are socially isolated: unmarried young men or widowed women who weren’t working prior to joining. These things are true for members of terrorist groups as diverse as the IRA and al-Qaida.

For example, several of the 9/11 hijackers planned to fight in Chechnya, but they didn’t have the right paperwork so they attacked America instead. The mujahedeen had no idea whom they would attack after the Soviets withdrew from Afghanistan, so they sat around until they came up with a new enemy: America. Pakistani terrorists regularly defect to another terrorist group with a totally different political platform. Many new al-Qaida members say, unconvincingly, that they decided to become a jihadist after reading an extreme, anti-American blog, or after converting to Islam, sometimes just a few weeks before. These people know little about politics or Islam, and they frankly don’t even seem to care much about learning more. The blogs they turn to don’t have a lot of substance in these areas, even though more informative blogs do exist.

All of this explains the seven habits. It’s not that they’re ineffective; it’s that they have a different goal. They might not be effective politically, but they are effective socially: They all help preserve the group’s existence and cohesion.

This kind of analysis isn’t just theoretical; it has practical implications for counterterrorism. Not only can we now better understand who is likely to become a terrorist, we can engage in strategies specifically designed to weaken the social bonds within terrorist organizations. Driving a wedge between group members — commuting prison sentences in exchange for actionable intelligence, planting more double agents within terrorist groups — will go a long way toward weakening those bonds.

We also need to pay more attention to the socially marginalized — like unassimilated communities in Western countries — than to the politically downtrodden. We need to support vibrant, benign communities and organizations as alternative ways for potential terrorists to get the social cohesion they need. And finally, we need to minimize collateral damage in our counterterrorism operations and clamp down on bigotry and hate crimes, both of which just create more dislocation and social isolation, and the inevitable calls for revenge.

This essay previously appeared on Wired.com.

EDITED TO ADD (10/9): Interesting rebuttal.

Posted on October 7, 2008 at 5:48 AM

Diebold Doesn't Get It

This quote sums up nicely why Diebold should not be trusted to secure election machines:

David Bear, a spokesman for Diebold Election Systems, said the potential risk existed because the company’s technicians had intentionally built the machines in such a way that election officials would be able to update their systems in years ahead.

“For there to be a problem here, you’re basically assuming a premise where you have some evil and nefarious election officials who would sneak in and introduce a piece of software,” he said. “I don’t believe these evil elections people exist.”

If you can’t get the threat model right, you can’t hope to secure the system.

Posted on May 22, 2006 at 3:22 PM

VOIP Encryption

There are basically four ways to eavesdrop on a telephone call.

One, you can listen in on another phone extension. This is the method preferred by siblings everywhere. If you have the right access, it’s the easiest. While it doesn’t work for cell phones, cordless phones are vulnerable to a variant of this attack: A radio receiver set to the right frequency can act as another extension.

Two, you can attach some eavesdropping equipment to the wire with a pair of alligator clips. It takes some expertise, but you can do it anywhere along the phone line’s path — even outside the home. This used to be the way the police eavesdropped on your phone line. These days it’s probably most often used by criminals. This method doesn’t work for cell phones, either.

Three, you can eavesdrop at the telephone switch. Modern phone equipment includes the ability for someone to listen in this way. Currently, this is the preferred police method. It works for both land lines and cell phones. You need the right access, but if you can get it, this is probably the most convenient way to eavesdrop on a particular person.

Four, you can tap the main trunk lines, eavesdrop on the microwave or satellite phone links, etc. It’s hard to eavesdrop on one particular person this way, but it’s easy to listen in on a large chunk of telephone calls. This is the sort of big-budget surveillance that organizations like the National Security Agency do best. They’ve even been known to use submarines to tap undersea phone cables.

That’s basically the entire threat model for traditional phone calls. And when most people think about IP telephony — voice over internet protocol, or VOIP — that’s the threat model they probably have in their heads.

Unfortunately, phone calls from your computer are fundamentally different from phone calls from your telephone. Internet telephony’s threat model is much closer to the threat model for IP-networked computers than the threat model for telephony.

And we already know the threat model for IP. Data packets can be eavesdropped on anywhere along the transmission path. Data packets can be intercepted in the corporate network, by the internet service provider and along the backbone. They can be eavesdropped on by the people or organizations that own those computers, and they can be eavesdropped on by anyone who has successfully hacked into those computers. They can be vacuumed up by nosy hackers, criminals, competitors and governments.

It’s comparable to threat No. 3 above, but with the scope vastly expanded.

My greatest worry is the criminal attacks. We already have seen how clever criminals have become over the past several years at stealing account information and personal data. I can imagine them eavesdropping on attorneys, looking for information with which to blackmail people. I can imagine them eavesdropping on bankers, looking for inside information with which to make stock purchases. I can imagine them stealing account information, hijacking telephone calls, committing identity theft. On the business side, I can see them engaging in industrial espionage and stealing trade secrets. In short, I can imagine them doing all the things they could never have done with the traditional telephone network.

This is why encryption for VOIP is so important. VOIP calls are vulnerable to a variety of threats that traditional telephone calls are not. Encryption is one of the essential security technologies for computer data, and it will go a long way toward securing VOIP.
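To make that concrete, here is a minimal sketch of the payload-encryption step, using an authenticated cipher (AES-GCM) from the Python cryptography library. This is not how Skype or Zfone actually implement their protocols — real designs such as SRTP and ZRTP also handle key exchange, sequence numbers and header protection — but it shows why an encrypted packet captured anywhere along the path is useless to an eavesdropper without the key.

```python
# Minimal illustration of encrypting one voice frame so that a packet
# intercepted along the network path is unintelligible without the key.
# Real VOIP encryption (SRTP, ZRTP, Skype's own protocol) also negotiates
# keys and protects headers; this shows only the payload-encryption step.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # session key, assumed to have been
aead = AESGCM(key)                         # negotiated by both endpoints

voice_frame = b"\x01\x02\x03\x04"          # stand-in for ~20 ms of encoded audio
header = b"rtp-header"                     # authenticated but not encrypted
nonce = os.urandom(12)                     # must never repeat under the same key

ciphertext = aead.encrypt(nonce, voice_frame, header)

# The receiver, holding the same key, recovers and authenticates the frame.
assert aead.decrypt(nonce, ciphertext, header) == voice_frame
```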

The last time this sort of thing came up, the U.S. government tried to sell us something called “key escrow.” Basically, the government likes the idea of everyone using encryption, as long as it has a copy of the key. This is an amazingly insecure idea for a number of reasons, mostly boiling down to the fact that when you provide a means of access into a security system, you greatly weaken its security.

A recent case in Greece demonstrated that perfectly: Criminals used a cell-phone eavesdropping mechanism already in place, designed for the police to listen in on phone calls. Had the call system been designed to be secure in the first place, there never would have been a backdoor for the criminals to exploit.

Fortunately, there are many VOIP-encryption products available. Skype has built-in encryption. Phil Zimmermann is releasing Zfone, an easy-to-use open-source product. There’s even a VOIP Security Alliance.

Encryption for IP telephony is important, but it’s not a panacea. Basically, it takes care of threats No. 2 through No. 4, but not threat No. 1. Unfortunately, that’s the biggest threat: eavesdropping at the end points. No amount of IP telephony encryption can prevent a Trojan or worm on your computer — or just a hacker who managed to get access to your machine — from eavesdropping on your phone calls, just as no amount of SSL or e-mail encryption can prevent a Trojan on your computer from eavesdropping — or even modifying — your data.

So, as always, it boils down to this: We need secure computers and secure operating systems even more than we need secure transmission.

This essay originally appeared on Wired.com.

Posted on April 6, 2006 at 5:09 AM

80 Cameras for 2,400 People

This story is about the remote town of Dillingham, Alaska, which is probably the most watched town in the country. There are 80 surveillance cameras for the 2,400 people, which translates to one camera for every 30 people.

The cameras were bought, I assume, because the town couldn’t think of anything else to do with the $202,000 Homeland Security grant it received. (That’s one of the problems with giving this money out based on political agendas rather than on where the actual threats are.)

But they got the money, and they spent it. And now they have to justify the expense. Here’s the movie-plot threat the Dillingham Police Chief uses to explain why the expense was worthwhile:

“Russia is about 800 miles that way,” he says, arm extending right.

“Seattle is about 1,200 miles back that way.” He points behind him.

“So if I have the math right, we’re closer to Russia than we are to Seattle.”

Now imagine, he says: What if the bad guys, whoever they are, manage to obtain a nuclear device in Russia, where some weapons are believed to be poorly guarded. They put the device in a container and then hire organized criminals, “maybe Mafiosi,” to arrange a tramp steamer to pick it up. The steamer drops off the container at the Dillingham harbor, complete with forged paperwork to ship it to Seattle. The container is picked up by a barge.

“Ten days later,” the chief says, “the barge pulls into the Port of Seattle.”

Thompson pauses for effect.

“Phoooom,” he says, his hands blooming like a flower.

The first problem with the movie plot is that it’s just plain silly. But the second problem, which you might have to look back to notice, is that those 80 cameras will do nothing to stop his imagined attack.

We are all security consumers. We spend money, and we expect security in return. This expenditure was a waste of money, and as a U.S. taxpayer, I am pissed that I’m getting such a lousy deal.

Posted on March 29, 2006 at 1:13 PM

Airport Passenger Screening

It seems like every time someone tests airport security, airport security fails. In tests between November 2001 and February 2002, screeners missed 70 percent of knives, 30 percent of guns and 60 percent of (fake) bombs. And recently (see also this), testers were able to smuggle bomb-making parts through airport security in 21 of 21 attempts. It makes you wonder why we’re all putting our laptops in a separate bin and taking off our shoes. (Although we should all be glad that Richard Reid wasn’t the “underwear bomber.”)

The failure to detect bomb-making parts is easier to understand. Break up something into small enough parts, and it’s going to slip past the screeners pretty easily. The explosive material won’t show up on the metal detector, and the associated electronics can look benign when disassembled. This isn’t even a new problem. It’s widely believed that the Chechen women who blew up the two Russian planes in August 2004 probably smuggled their bombs aboard the planes in pieces.

But guns and knives? That surprises most people.

Airport screeners have a difficult job, primarily because the human brain isn’t naturally adapted to the task. We’re wired for visual pattern matching, and are great at picking out something we know to look for — for example, a lion in a sea of tall grass.

But we’re much less adept at detecting random exceptions in uniform data. Faced with an endless stream of identical objects, the brain quickly concludes that everything is identical and there’s no point in paying attention. By the time the exception comes around, the brain simply doesn’t notice it. This psychological phenomenon isn’t just a problem in airport screening: It’s been identified in inspections of all kinds, and is why casinos move their dealers around so often. The tasks are simply mind-numbing.

To make matters worse, the smuggler can try to exploit the system. He can position the weapons in his baggage just so. He can try to disguise them by adding other metal items to distract the screeners. He can disassemble bomb parts so they look nothing like bombs. Against a bored screener, he has the upper hand.

And, as has been pointed out again and again in essays on the ludicrousness of post-9/11 airport security, improvised weapons are a huge problem. A rock, a battery for a laptop, a belt, the extension handle off a wheeled suitcase, fishing line, the bare hands of someone who knows karate … the list goes on and on.

Technology can help. X-ray machines already randomly insert “test” bags into the stream — keeping screeners more alert. Computer-enhanced displays are making it easier for screeners to find contraband items in luggage, and eventually the computers will be able to do most of the work. It makes sense: Computers excel at boring repetitive tasks. They should do the quick sort, and let the screeners deal with the exceptions.

Sure, there’ll be a lot of false alarms, and some bad things will still get through. But it’s better than the alternative.
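The false-alarm arithmetic is what makes “computers do the quick sort, screeners handle the exceptions” the sensible division of labor. A back-of-the-envelope Bayes calculation — with invented rates, since the real ones aren’t public — shows why almost every alarm will be false when genuine threats are extraordinarily rare:

```python
# Back-of-the-envelope Bayes calculation: when real threats are vanishingly
# rare, even a good automated screener produces almost entirely false alarms.
# Every rate and volume below is invented for illustration.

base_rate   = 1e-7       # fraction of bags actually containing a weapon or bomb
sensitivity = 0.95       # chance the machine flags a real threat
false_pos   = 0.05       # chance it flags an innocent bag

bags_per_day = 2_000_000                                  # assumed volume
alarms    = bags_per_day * (base_rate * sensitivity
                            + (1 - base_rate) * false_pos)
real_hits = bags_per_day * base_rate * sensitivity

print(f"alarms per day: {alarms:,.0f}")
print(f"of which real:  {real_hits:.2f}")
print(f"share of alarms that are real: {real_hits / alarms:.1e}")
```

With numbers anywhere in this neighborhood, the machines’ job is to clear the overwhelming majority of bags quickly; the screeners’ job is the small residue of alarms.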

And it’s likely good enough. Remember the point of passenger screening. We’re not trying to catch the clever, organized, well-funded terrorists. We’re trying to catch the amateurs and the incompetent. We’re trying to catch the unstable. We’re trying to catch the copycats. These are all legitimate threats, and we’re smart to defend against them. Against the professionals, we’re just trying to add enough uncertainty into the system that they’ll choose other targets instead.

The terrorists’ goals have nothing to do with airplanes; their goals are to cause terror. Blowing up an airplane is just a particular attack designed to achieve that goal. Airplanes deserve some additional security because they have catastrophic failure properties: If there’s even a small explosion, everyone on the plane dies. But there’s a diminishing return on investments in airplane security. If the terrorists switch targets from airplanes to shopping malls, we haven’t really solved the problem.

What that means is that a basic cursory screening is good enough. If I were investing in security, I would fund significant research into computer-assisted screening equipment for both checked and carry-on bags, but wouldn’t spend a lot of money on invasive screening procedures and secondary screening. I would much rather have well-trained security personnel wandering around the airport, both in and out of uniform, looking for suspicious actions.

When I travel in Europe, I never have to take my laptop out of its case or my shoes off my feet. Those governments have had far more experience with terrorism than the U.S. government, and they know when passenger screening has reached the point of diminishing returns. (They also implemented checked-baggage security measures decades before the United States did — again recognizing the real threat.)

And if I were investing in security, I would invest in intelligence and investigation. The best time to combat terrorism is before the terrorist tries to get on an airplane. The best countermeasures have value regardless of the nature of the terrorist plot or the particular terrorist target.

In some ways, if we’re relying on airport screeners to prevent terrorism, it’s already too late. After all, we can’t keep weapons out of prisons. How can we ever hope to keep them out of airports?

A version of this essay originally appeared on Wired.com.

Posted on March 23, 2006 at 7:03 AM

How to Not Fix the ID Problem

Several of the 9/11 terrorists had Virginia driver’s licenses in fake names. These were not forgeries; these were valid Virginia IDs that were illegally sold by Department of Motor Vehicle workers.

So what did Virginia do to correct the problem? They required more paperwork in order to get an ID.

But the problem wasn’t that it was too easy to get an ID. The problem was that insiders were selling them illegally. Which is why the Virginia “solution” didn’t help, and the problem remains:

The manager of the Virginia Department of Motor Vehicles office at Springfield Mall was charged yesterday with selling driver’s licenses to illegal immigrants and others for up to $3,500 apiece.

The arrest of Francisco J. Martinez marked the second time in two years that a Northern Virginia DMV employee was accused of fraudulently selling licenses for cash. A similar scheme two years ago at the DMV office in Tysons Corner led to the guilty pleas of two employees.

And after we spend billions on the REAL ID Act and require even more paperwork to get a state ID, the problem will still remain.

Posted on July 19, 2005 at 1:15 PM

More on Two-Factor Authentication

Recently I published an essay arguing that two-factor authentication is an ineffective defense against identity theft. For example, issuing tokens to online banking customers won’t reduce fraud, because new attack techniques simply ignore the countermeasure. Unfortunately, some took my essay as a condemnation of two-factor authentication in general. This is not true. It’s simply a matter of understanding the threats and the attacks.

Passwords just don’t work anymore. As computers have gotten faster, password guessing has gotten easier. Ever-more-complicated passwords are required to evade password-guessing software. At the same time, there’s an upper limit to how complex a password users can be expected to remember. About five years ago, these two lines crossed: It is no longer reasonable to expect users to have passwords that can’t be guessed. For anything that requires reasonable security, the era of passwords is over.
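The claim that those two lines crossed is easy to check with rough arithmetic. The guessing rate below is an assumption for illustration — offline cracking hardware varies enormously, and real attackers use dictionaries rather than brute force, which is faster still — but the shape of the result holds:

```python
# Rough arithmetic behind "the era of passwords is over": how long brute force
# takes to exhaust a password keyspace at an assumed offline guessing rate.
# The guesses-per-second figure is an assumption, not a measured benchmark.

SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_exhaust(charset_size: int, length: int,
                     guesses_per_second: float = 1e10) -> float:
    return charset_size ** length / guesses_per_second / SECONDS_PER_YEAR

for length in (8, 10, 12, 16):
    # 94 = printable ASCII characters a user might plausibly type
    print(f"{length:2d} characters: {years_to_exhaust(94, length):.1e} years")
```

An eight-character password falls in days under these assumptions; the lengths that resist guessing are well past what users will reliably remember.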

Two-factor authentication solves this problem. It works against passive attacks: eavesdropping and password guessing. It protects against users choosing weak passwords, telling their passwords to their colleagues or writing their passwords on pieces of paper taped to their monitors. For an organization trying to improve access control for its employees, two-factor authentication is a great idea. Microsoft is integrating two-factor authentication into its operating system, another great idea.
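For concreteness, here is roughly what the token half of two-factor authentication computes: a one-time code derived from a shared secret and the current time, along the lines of RFC 4226 (HOTP) and RFC 6238 (TOTP). This is a bare-bones sketch, not any vendor’s product; real deployments add base32-encoded secrets, clock-skew windows and rate limiting.

```python
# Minimal time-based one-time password (TOTP) in the spirit of RFC 4226/6238.
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-SHA1 over an 8-byte counter, dynamically truncated (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """HOTP over the current 30-second time window (RFC 6238)."""
    return hotp(secret, int(time.time()) // step, digits)

if __name__ == "__main__":
    shared_secret = b"12345678901234567890"   # provisioned out of band
    print("one-time code:", totp(shared_secret))
```

Because the code changes every 30 seconds, a shoulder-surfed or guessed password alone is no longer enough to log in — which is exactly the class of passive attack described above.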

What two-factor authentication won’t do is prevent identity theft and fraud. It’ll prevent certain tactics of identity theft and fraud, but criminals simply will switch tactics. We’re already seeing fraud tactics that completely ignore two-factor authentication. As banks roll out two-factor authentication, criminals simply will switch to these new tactics.

Security is always an arms race, and you could argue that this situation is simply the cost of treading water. The problem with this reasoning is that it ignores countermeasures that permanently reduce fraud. By concentrating on authenticating the individual rather than authenticating the transaction, banks are forced to defend against criminal tactics rather than the crime itself.

Credit cards are a perfect example. Notice how little attention is paid to cardholder authentication. Clerks barely check signatures. People use their cards over the phone and on the Internet, where the card’s existence isn’t even verified. The credit card companies spend their security dollar authenticating the transaction, not the cardholder.
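What authenticating the transaction — rather than the person — might look like can be sketched very roughly. The features and thresholds below are invented placeholders, not a description of any real bank’s system; the point is only that the check examines the payment itself against the account’s history, regardless of how the customer logged in.

```python
# Toy illustration of authenticating the transaction rather than the person:
# score a payment against the account's own history. Features and thresholds
# are invented placeholders, not any real bank's fraud model.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    payee: str

def risk_score(tx: Transaction, typical_amount: float,
               usual_countries: set, known_payees: set) -> int:
    score = 0
    if tx.amount > 5 * typical_amount:
        score += 2        # unusually large transfer
    if tx.country not in usual_countries:
        score += 2        # unfamiliar geography
    if tx.payee not in known_payees:
        score += 1        # first payment to this payee
    return score

tx = Transaction(amount=4800.0, country="RO", payee="acct-991")
score = risk_score(tx, typical_amount=120.0,
                   usual_countries={"US"}, known_payees={"acct-001", "acct-002"})

# High-risk transactions get held for out-of-band confirmation, no matter how
# strongly the customer authenticated at login.
print("hold for review" if score >= 3 else "approve", f"(score={score})")
```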

Two-factor authentication is a long-overdue solution to the problem of passwords. I welcome its increasing popularity, but identity theft and bank fraud are not results of password problems; they stem from poorly authenticated transactions. The sooner people realize that, the sooner they’ll stop advocating stronger authentication measures and the sooner security will actually improve.

This essay previously appeared in Network World as a “Face Off.” Joe Uniejewski of RSA Security wrote an opposing position. Another article on the subject was published at SearchSecurity.com.

One way to think about this — a phrasing I didn’t think of until after writing the above essay — is that two-factor authentication solves security problems involving authentication. The current wave of attacks against financial systems is not exploiting vulnerabilities in the authentication system, so two-factor authentication doesn’t help.

Posted on April 12, 2005 at 11:02 AM

