Entries Tagged "threat models"

Security and Function Creep

Security is rarely static. Technology changes both security systems and attackers. But there’s something else that changes security’s cost/benefit trade-off: how the underlying systems being secured are used. Far too often we build security for one purpose, only to find it being used for another purpose—one it wasn’t suited for in the first place. And then the security system has to play catch-up.

Take driver’s licenses, for example. Originally designed to demonstrate a credential—the ability to drive a car—they looked like other credentials: medical licenses or elevator certificates of inspection. They were wallet-sized, of course, but they didn’t have much security associated with them. Then, slowly, driver’s licenses took on a second application: they became age-verification tokens in bars and liquor stores. Of course the security wasn’t up to the task—teenagers can be extraordinarily resourceful if they set their minds to it—and over the decades driver’s licenses got photographs, tamper-resistant features (once, it was easy to modify the birth year), and technologies that made counterfeiting harder. There was little value in counterfeiting a driver’s license, but a lot of value in counterfeiting an age-verification token.

Today, US driver’s licenses are taking on yet another function: security against terrorists. The Real ID Act—the government’s attempt to make driver’s licenses even more secure—has nothing to do with driving or even with buying alcohol, and everything to do with trying to make that piece of plastic an effective way to verify that someone is not on the terrorist watch list. Whether this is a good idea, or actually improves security, is another matter entirely.

You can see this kind of function creep everywhere. Internet security systems designed for informational Web sites are suddenly expected to provide security for banking Web sites. Security systems that are good enough to protect cheap commodities from being stolen are suddenly ineffective once the price of those commodities rises high enough. Application security systems, designed for locally owned networks, are expected to work even when the application is moved to a cloud computing environment. And cloud computing security, designed for the needs of corporations, is expected to be suitable for government applications as well—maybe even military applications.

Sometimes it’s obvious that security systems designed for one environment won’t work in another. We don’t arm our soldiers the same way we arm our policemen, and we can’t take commercial vehicles and easily turn them into ones outfitted for the military. We understand that we might need to upgrade our home security system if we suddenly come into possession of a bag of diamonds. Yet many think the same security that protects our home computers will also protect voting machines, and that the same operating systems that run our businesses are suitable for military uses.

But these are all conscious decisions, and we security professionals often know better. The real problems arise when the changes happen in the background, without any conscious thought. We build a network security system that’s perfectly adequate for the threat and—like a driver’s license becoming an age-verification token—the network accrues more and more functions. But because it has already been pronounced “secure,” we can’t get any budget to re-evaluate and improve the security until after the bad guys have figured out the vulnerabilities and exploited them.

I don’t like having to play catch-up in security, but we seem doomed to keep doing so.

This essay originally appeared in the January/February 2010 issue of IEEE Security & Privacy.

Posted on February 4, 2010 at 6:35 AM

$3.2 Million Jewelry Store Theft

I’ve written about this sort of thing before:

A robber bored a hole through the wall of a jewelry shop and walked off with about 200 luxury watches worth 300 million yen ($3.2 million) in Tokyo’s upscale Ginza district, police said Saturday.

From Secrets and Lies, p. 318:

Threat modeling is, for the most part, ad hoc. You think about the threats until you can’t think of any more, then you stop. And then you’re annoyed and surprised when some attacker thinks of an attack you didn’t. My favorite example is a band of California art thieves that would break into people’s houses by cutting a hole in their walls with a chainsaw. The attacker completely bypassed the threat model of the defender. The countermeasures that the homeowner put in place were door and window alarms; they didn’t make a difference to this attack.

One of the important things to consider in threat modeling is whether the attacker is looking for any victim, or is specifically targeting you. If the attacker is looking for any victim, then countermeasures that make you a less attractive target than other people are generally good enough. If the attacker is specifically targeting you, then you need to consider a greater level of security.

Posted on January 14, 2010 at 12:43 PM

Threat Modeling at Microsoft

Interesting paper by Adam Shostack:

Abstract. Describes a decade of experience threat modeling products and services at Microsoft. Describes the current threat modeling methodology used in the Security Development Lifecycle. The methodology is a practical approach, usable by non-experts, centered on data flow diagrams and a threat enumeration technique of ‘STRIDE per element.’ The paper covers some lessons learned which are likely applicable to other security analysis techniques. The paper closes with some possible questions for academic research.
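
For readers who haven’t seen the technique, here is a minimal sketch in Python of what “STRIDE per element” enumeration amounts to. The element-type-to-threat mapping below follows the commonly published SDL chart; the data flow diagram itself is a made-up example, not taken from the paper.

# A minimal sketch of "STRIDE per element" enumeration. The mapping
# from DFD element type to applicable threats follows the commonly
# published SDL chart; the diagram below is hypothetical.
STRIDE_PER_ELEMENT = {
    "external entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data flow": ["Tampering", "Information disclosure",
                  "Denial of service"],
    "data store": ["Tampering", "Repudiation",
                   "Information disclosure", "Denial of service"],
}

# Elements of a toy data flow diagram: (name, element type).
dfd = [
    ("user", "external entity"),
    ("web front end", "process"),
    ("user -> web front end", "data flow"),
    ("orders database", "data store"),
]

# Enumerate candidate threats mechanically; each printed line is a
# question for the team: how could this threat be realized here?
for name, kind in dfd:
    for threat in STRIDE_PER_ELEMENT[kind]:
        print(f"{threat:<24} against {name}")

The point is that the enumeration itself is mechanical, which is what makes the method usable by non-experts; the security expertise goes into answering each prompt, not into remembering which threats to consider.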

Posted on October 13, 2008 at 6:21 AM

The Seven Habits of Highly Ineffective Terrorists

Most counterterrorism policies fail, not because of tactical problems, but because of a fundamental misunderstanding of what motivates terrorists in the first place. If we’re ever going to defeat terrorism, we need to understand what drives people to become terrorists.

Conventional wisdom holds that terrorism is inherently political, and that people become terrorists for political reasons. This is the “strategic” model of terrorism, and it’s basically an economic model. It posits that people resort to terrorism when they believe—rightly or wrongly—that terrorism is worth it; that is, when they believe the political gains of terrorism minus the political costs are greater than if they engaged in some other, more peaceful form of protest. It’s assumed, for example, that people join Hamas to achieve a Palestinian state; that people join the PKK to attain a Kurdish national homeland; and that people join al-Qaida to, among other things, get the United States out of the Persian Gulf.

If you believe this model, the way to fight terrorism is to change that equation, and that’s what most experts advocate. Governments tend to minimize the political gains of terrorism through a no-concessions policy; the international community tends to recommend reducing the political grievances of terrorists via appeasement, in hopes of getting them to renounce violence. Both advocate policies to provide effective nonviolent alternatives, like free elections.

Historically, none of these solutions has worked with any regularity. Max Abrahms, a predoctoral fellow at Stanford University’s Center for International Security and Cooperation, has studied dozens of terrorist groups from all over the world. He argues that the model is wrong. In a paper published this year in International Security that—sadly—doesn’t have the title “Seven Habits of Highly Ineffective Terrorists,” he discusses, well, seven habits of highly ineffective terrorists. These seven tendencies are seen in terrorist organizations all over the world, and they directly contradict the theory that terrorists are political maximizers:

Terrorists, he writes, (1) attack civilians, a policy that has a lousy track record of convincing those civilians to give the terrorists what they want; (2) treat terrorism as a first resort, not a last resort, failing to embrace nonviolent alternatives like elections; (3) don’t compromise with their target country, even when those compromises are in their best interest politically; (4) have protean political platforms, which regularly, and sometimes radically, change; (5) often engage in anonymous attacks, which precludes target countries from making political concessions to them; (6) regularly attack other terrorist groups with the same political platform; and (7) resist disbanding, even when they consistently fail to achieve their political objectives or when their stated political objectives have been achieved.

Abrahms has an alternative model to explain all this: People turn to terrorism for social solidarity. He theorizes that people join terrorist organizations worldwide in order to be part of a community, much like the reason inner-city youths join gangs in the United States.

The evidence supports this. Individual terrorists often have no prior involvement with a group’s political agenda, and often join multiple terrorist groups with incompatible platforms. Individuals who join terrorist groups are frequently not oppressed in any way, and often can’t describe the political goals of their organizations. People who join terrorist groups most often have friends or relatives who are members of the group, and the great majority of terrorists are socially isolated: unmarried young men or widowed women who weren’t working prior to joining. These things are true for members of terrorist groups as diverse as the IRA and al-Qaida.

For example, several of the 9/11 hijackers planned to fight in Chechnya, but they didn’t have the right paperwork so they attacked America instead. The mujahedeen had no idea whom they would attack after the Soviets withdrew from Afghanistan, so they sat around until they came up with a new enemy: America. Pakistani terrorists regularly defect to another terrorist group with a totally different political platform. Many new al-Qaida members say, unconvincingly, that they decided to become a jihadist after reading an extreme, anti-American blog, or after converting to Islam, sometimes just a few weeks before. These people know little about politics or Islam, and they frankly don’t even seem to care much about learning more. The blogs they turn to don’t have a lot of substance in these areas, even though more informative blogs do exist.

All of this explains the seven habits. It’s not that they’re ineffective; it’s that they have a different goal. They might not be effective politically, but they are effective socially: They all help preserve the group’s existence and cohesion.

This kind of analysis isn’t just theoretical; it has practical implications for counterterrorism. Not only can we now better understand who is likely to become a terrorist, but we can also engage in strategies specifically designed to weaken the social bonds within terrorist organizations. Driving a wedge between group members—commuting prison sentences in exchange for actionable intelligence, planting more double agents within terrorist groups—will go a long way toward that goal.

We also need to pay more attention to the socially marginalized, such as unassimilated communities in Western countries, than to the politically downtrodden. We need to support vibrant, benign communities and organizations as alternative ways for potential terrorists to get the social cohesion they need. And finally, we need to minimize collateral damage in our counterterrorism operations and to clamp down on bigotry and hate crimes; both create more dislocation and social isolation, and the inevitable calls for revenge.

This essay previously appeared on Wired.com.

EDITED TO ADD (10/9): Interesting rebuttal.

Posted on October 7, 2008 at 5:48 AM

Diebold Doesn’t Get It

This quote sums up nicely why Diebold should not be trusted to secure election machines:

David Bear, a spokesman for Diebold Election Systems, said the potential risk existed because the company’s technicians had intentionally built the machines in such a way that election officials would be able to update their systems in years ahead.

“For there to be a problem here, you’re basically assuming a premise where you have some evil and nefarious election officials who would sneak in and introduce a piece of software,” he said. “I don’t believe these evil elections people exist.”

If you can’t get the threat model right, you can’t hope to secure the system.

Posted on May 22, 2006 at 3:22 PM

VOIP Encryption

There are basically four ways to eavesdrop on a telephone call.

One, you can listen in on another phone extension. This is the method preferred by siblings everywhere. If you have the right access, it’s the easiest. While it doesn’t work for cell phones, cordless phones are vulnerable to a variant of this attack: A radio receiver set to the right frequency can act as another extension.

Two, you can attach some eavesdropping equipment to the wire with a pair of alligator clips. It takes some expertise, but you can do it anywhere along the phone line’s path—even outside the home. This used to be the way the police eavesdropped on your phone line. These days it’s probably most often used by criminals. This method doesn’t work for cell phones, either.

Three, you can eavesdrop at the telephone switch. Modern phone equipment includes the ability for someone to listen in this way. Currently, this is the preferred police method. It works for both land lines and cell phones. You need the right access, but if you can get it, this is probably the most comfortable way to eavesdrop on a particular person.

Four, you can tap the main trunk lines, eavesdrop on the microwave or satellite phone links, etc. It’s hard to eavesdrop on one particular person this way, but it’s easy to listen in on a large chunk of telephone calls. This is the sort of big-budget surveillance that organizations like the National Security Agency do best. They’ve even been known to use submarines to tap undersea phone cables.

That’s basically the entire threat model for traditional phone calls. And when most people think about IP telephony—voice over internet protocol, or VOIP—that’s the threat model they probably have in their heads.

Unfortunately, phone calls from your computer are fundamentally different from phone calls from your telephone. Internet telephony’s threat model is much closer to the threat model for IP-networked computers than to the threat model for traditional telephony.

And we already know the threat model for IP. Data packets can be eavesdropped on anywhere along the transmission path. Data packets can be intercepted in the corporate network, by the internet service provider and along the backbone. They can be eavesdropped on by the people or organizations that own those computers, and they can be eavesdropped on by anyone who has successfully hacked into those computers. They can be vacuumed up by nosy hackers, criminals, competitors and governments.

It’s comparable to threat No. 3 above, but with the scope vastly expanded.

My greatest worry is the criminal attacks. We already have seen how clever criminals have become over the past several years at stealing account information and personal data. I can imagine them eavesdropping on attorneys, looking for information with which to blackmail people. I can imagine them eavesdropping on bankers, looking for inside information with which to make stock purchases. I can imagine them stealing account information, hijacking telephone calls, committing identity theft. On the business side, I can see them engaging in industrial espionage and stealing trade secrets. In short, I can imagine them doing all the things they could never have done with the traditional telephone network.

This is why encryption for VOIP is so important. VOIP calls are vulnerable to a variety of threats that traditional telephone calls are not. Encryption is one of the essential security technologies for computer data, and it will go a long way toward securing VOIP.
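
To make that concrete, here is a minimal sketch of the per-frame encryption step, using Python’s cryptography library. This is an illustration under my own assumptions, not how Skype or Zfone actually works; real protocols such as SRTP add key agreement, replay protection, and header authentication on top of a step like this.

# A minimal sketch: authenticated encryption (AES-GCM) of individual
# audio frames. Key agreement, replay windows, and header protection
# are omitted; real VOIP protocols such as SRTP/ZRTP provide them.
import os
import struct

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)  # agreed out of band
aesgcm = AESGCM(key)

def encrypt_frame(seq: int, frame: bytes) -> bytes:
    # Derive a unique 12-byte nonce from the packet sequence number;
    # a nonce must never repeat under the same key.
    nonce = struct.pack(">IQ", 0, seq)
    return aesgcm.encrypt(nonce, frame, None)

def decrypt_frame(seq: int, packet: bytes) -> bytes:
    nonce = struct.pack(">IQ", 0, seq)
    return aesgcm.decrypt(nonce, packet, None)  # raises if tampered

# 20 ms of toy "audio" at 8 kHz, 16-bit mono: 320 bytes.
audio = os.urandom(320)
wire = encrypt_frame(1, audio)
assert decrypt_frame(1, wire) == audio

The authenticated cipher matters here: a packet modified in transit fails authentication and is rejected, rather than being decrypted into garbled audio or, worse, silently altered speech.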

The last time this sort of thing came up, the U.S. government tried to sell us something called “key escrow.” Basically, the government likes the idea of everyone using encryption, as long as it has a copy of the key. This is an amazingly insecure idea for a number of reasons, mostly boiling down to the fact that when you provide a means of access into a security system, you greatly weaken its security.

A recent case in Greece demonstrated that perfectly: Criminals used a cell-phone eavesdropping mechanism already in place, designed for the police to listen in on phone calls. Had the call system been designed to be secure in the first place, there never would have been a backdoor for the criminals to exploit.

Fortunately, there are many VOIP-encryption products available. Skype has built-in encryption. Phil Zimmermann is releasing Zfone, an easy-to-use open-source product. There’s even a VOIP Security Alliance.

Encryption for IP telephony is important, but it’s not a panacea. Basically, it takes care of threats No. 2 through No. 4, but not threat No. 1. Unfortunately, that’s the biggest threat: eavesdropping at the end points. No amount of IP telephony encryption can prevent a Trojan or worm on your computer—or just a hacker who managed to get access to your machine—from eavesdropping on your phone calls, just as no amount of SSL or e-mail encryption can prevent a Trojan on your computer from eavesdropping—or even modifying—your data.

So, as always, it boils down to this: We need secure computers and secure operating systems even more than we need secure transmission.

This essay originally appeared on Wired.com.

Posted on April 6, 2006 at 5:09 AM

80 Cameras for 2,400 People

This story is about the remote town of Dillingham, Alaska, which is probably the most watched town in the country. There are 80 surveillance cameras for the 2,400 people, which translates to one camera for every 30 people.

The cameras were bought, I assume, because the town couldn’t think of anything else to do with the $202,000 Homeland Security grant it received. (That’s one of the problems with handing out this money based on political agendas rather than on where the actual threats are.)

But they got the money, and they spent it. And now they have to justify the expense. Here’s the movie-plot threat the Dillingham Police Chief uses to explain why the expense was worthwhile:

“Russia is about 800 miles that way,” he says, arm extending right.

“Seattle is about 1,200 miles back that way.” He points behind him.

“So if I have the math right, we’re closer to Russia than we are to Seattle.”

Now imagine, he says: What if the bad guys, whoever they are, manage to obtain a nuclear device in Russia, where some weapons are believed to be poorly guarded. They put the device in a container and then hire organized criminals, “maybe Mafiosi,” to arrange a tramp steamer to pick it up. The steamer drops off the container at the Dillingham harbor, complete with forged paperwork to ship it to Seattle. The container is picked up by a barge.

“Ten days later,” the chief says, “the barge pulls into the Port of Seattle.”

Thompson pauses for effect.

“Phoooom,” he says, his hands blooming like a flower.

The first problem with the movie plot is that it’s just plain silly. But the second problem, which you might have to look back to notice, is that those 80 cameras will do nothing to stop his imagined attack.

We are all security consumers. We spend money, and we expect security in return. This expenditure was a waste of money, and as a U.S. taxpayer, I am pissed that I’m getting such a lousy deal.

Posted on March 29, 2006 at 1:13 PM
