Entries Tagged "threat models"


Biometric Wallet

Not an electronic wallet, a physical one:

Virtually indestructible, the dunhill Biometric Wallet will open only with the touch of your fingerprint.

It can be linked via Bluetooth to the owner’s mobile phone, sounding an alarm if the two are separated by more than 5 metres! This provides a brilliant warning if either the phone or wallet is stolen or misplaced. The exterior of the wallet is constructed from highly durable carbon fibre that will resist all but the most concerted effort to open it, while the interior features a luxurious leather credit card holder and a strong stainless steel money clip.
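The separation alarm presumably works by monitoring the strength of the Bluetooth link between wallet and phone. As a rough sketch of the idea, here is the standard log-distance path-loss model used to turn signal strength into an approximate distance; the transmit power, path-loss exponent, and 5-metre threshold below are illustrative assumptions, not dunhill's actual parameters:

```python
# Sketch of a Bluetooth proximity alarm using the log-distance path-loss model.
# All parameters are illustrative assumptions, not the product's real values.

def estimate_distance(rssi_dbm: float, tx_power_dbm: float = -59.0,
                      path_loss_exponent: float = 2.0) -> float:
    """Estimate distance in metres from received signal strength (RSSI).

    Log-distance model: RSSI = tx_power - 10 * n * log10(d),
    so d = 10 ** ((tx_power - RSSI) / (10 * n)).
    tx_power_dbm is the expected RSSI at 1 metre.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

def separation_alarm(rssi_dbm: float, threshold_m: float = 5.0) -> bool:
    """Return True if the paired device appears farther away than the threshold."""
    return estimate_distance(rssi_dbm) > threshold_m
```

In practice RSSI is noisy, so a real implementation would smooth readings over time (or simply alarm on link loss) rather than trust a single sample.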

Only $825. News article.

I don’t think I understand the threat model. If your wallet is stolen, you’re going to replace all your ID cards and credit cards and you’re not going to get your cash back—whether it’s a normal wallet or this wallet. I suppose this wallet makes it less likely that someone will use your stolen credit cards quickly, before you cancel them. But you’re not going to be liable for that delay in any case.

Posted on February 18, 2011 at 1:45 PM

Domodedovo Airport Bombing

I haven’t written anything about the suicide bombing at Moscow’s Domodedovo Airport because I didn’t think there was anything to say. The bomber was outside the security checkpoint, in the area where family and friends wait for arriving passengers. From a security perspective, the bombing had nothing to do with airport security. He could have just as easily been in a movie theater, stadium, shopping mall, market, or anywhere else lots of people are crowded together with limited exits. The large death and injury toll indicates the bomber chose his location well.

I’ve often written that security measures that are only effective if the implementers guess the plot correctly are largely wastes of money—at best they would have forced this bomber to choose another target—and that our best security investments are intelligence, investigation, and emergency response. This latest terrorist attack underscores that even more. “Critics say” that the TSA couldn’t have detected this sort of attack. Of course; the TSA can’t be everywhere. And that’s precisely the point.

Many reporters asked me about the likely U.S. reaction. I don’t know; it could range from “Moscow is a long way off and that doesn’t concern us” to “Oh my god we’re all going to die!” The worry, of course, is that we will need to “do something,” even though there is no “something” that should be done.

I was interviewed by the Esquire politics blog about this. I’m not terribly happy with the interview; I was rushed and sloppy on the phone.

Posted on January 28, 2011 at 3:15 PM

Stealing SIM Cards from Traffic Lights

Johannesburg installed hundreds of networked traffic lights on its streets. The lights use a cellular modem and a SIM card to communicate.

Those lights introduced a security risk I’ll bet no one gave a moment’s thought to: that criminals might steal the SIM cards from the traffic lights and use them to make free phone calls. But that’s exactly what happened.

Aside from the theft of phone service, repairing those traffic lights costs far more than the stolen components are worth.

I wrote about this general issue before:

These crimes are particularly expensive to society because the replacement cost is much higher than the thief’s profit. A manhole is worth $5–$10 as scrap, but it costs $500 to replace, including labor. A thief may take $20 worth of copper from a construction site, but do $10,000 in damage in the process. And the increased threat means more money being spent on security to protect those commodities in the first place.

Security can be viewed as a tax on the honest, and these thefts demonstrate that our taxes are going up. And unlike many taxes, we don’t benefit from their collection. The cost to society of retrofitting manhole covers with locks, or replacing them with less resalable alternatives, is high; but there is no benefit other than reducing theft.

These crimes are a harbinger of the future: evolutionary pressure on our society, if you will. Criminals are often referred to as social parasites, but they are an early warning system of societal changes. Unfettered by laws or moral restrictions, they can be the first to respond to changes that the rest of society will be slower to pick up on. In fact, currently there’s a reprieve. Scrap metal prices are all down from last year—copper is currently $1.62 per pound, and lead is half what Berge got—and thefts are down too.

We’ve designed much of our infrastructure around the assumptions that commodities are cheap and theft is rare. We don’t protect transmission lines, manhole covers, iron fences, or lead flashing on roofs. But if commodity prices really are headed for new higher stable points, society will eventually react and find alternatives for these items—or find ways to protect them. Criminals were the first to point this out, and will continue to exploit the system until it restabilizes.
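The economics in the excerpt above come down to simple arithmetic: the damage multiplier is just the ratio of society's replacement cost to the thief's gain. A quick sketch using the figures quoted:

```python
# Thief's profit vs. society's loss, using the figures quoted in the excerpt.

def loss_multiplier(thief_profit: float, replacement_cost: float) -> float:
    """How many dollars society loses per dollar the thief gains."""
    return replacement_cost / thief_profit

# A manhole cover: ~$10 as scrap, ~$500 to replace (including labor).
manhole = loss_multiplier(thief_profit=10.0, replacement_cost=500.0)

# Copper from a construction site: ~$20 of metal, ~$10,000 of damage.
copper = loss_multiplier(thief_profit=20.0, replacement_cost=10_000.0)
```

The multipliers, roughly 50x for the manhole and 500x for the copper, are what make these thefts so expensive to society relative to what the thief pockets.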

Posted on January 13, 2011 at 12:54 PM

Adam Shostack on TSA Threat Modeling

Good commentary:

I’ve said before and I’ll say again, there are lots of possible approaches to threat modeling, and they all involve tradeoffs. I’ve commented that much of the problem is the unmeetable demands TSA labors under, and suggested fixes. If TSA is trading planned responses to Congress for effective security, I think Congress ought to be asking better questions. I’ll suggest “how do you model future threats?” as an excellent place to start.

Continuing on from there, an effective systematic approach would involve diagramming the air transport system, and ensuring that everyone and everything that gets to the plane without being authorized to be on the flight deck goes through reasonable and minimal searches under the Constitution, which are used solely for flight security. Right now, there are discrepancies in catering and other servicing of the planes, there are issues with cargo screening, etc.

These issues are getting exposed by the red teaming which happens, but that doesn’t lead to a systematic set of balanced defenses.

As long as the President is asking “Is this effective against the kind of threat that we saw in the Christmas Day bombing?” we’ll know that the right threat models aren’t making it to the top.

Posted on December 22, 2010 at 7:15 AM

Book Review: Cyber War

Cyber War: The Next Threat to National Security and What to Do About It by Richard Clarke and Robert Knake, HarperCollins, 2010.

Cyber War is a fast and enjoyable read. This means you could give the book to your non-techy friends, and they’d understand most of it, enjoy all of it, and learn a lot from it. Unfortunately, while there’s a lot of smart discussion and good information in the book, there’s also a lot of fear-mongering and hyperbole. Since there’s no easy way to tell someone which parts of the book to pay attention to and which parts to take with a grain of salt, I can’t recommend it for that purpose. This is a pity, because parts of the book really need to be widely read and discussed.

The fear-mongering and hyperbole is mostly in the beginning. There, the authors describe the cyberwar of novels. Hackers disable air traffic control, delete money from bank accounts, cause widespread blackouts, release chlorine gas from chemical plants, and—this is my favorite—remotely cause your printer to catch on fire. It’s exciting and scary stuff, but not terribly realistic. Even their discussions of previous “cyber wars”—Estonia, Georgia, attacks against U.S. and South Korea on July 4, 2009—are full of hyperbole. A lot of what they write is unproven speculation, but they don’t say that.

Better is the historical discussion of the formation of the U.S. Cyber Command, but there are important omissions. There’s nothing about the stoking of cyberwar fear that accompanied it: by the NSA’s General Keith Alexander, who became the first head of the command; by Mike McConnell, the NSA’s former director and now a Senior Vice President at military contractor Booz Allen Hamilton; and by others. By hyping the threat, the former has amassed a lot of power, and the latter a lot of money. Cyberwar is the new cash cow of the military-industrial complex, and any political discussion of cyberwar should include this as well.

Also interesting is the discussion of the asymmetric nature of the threat. A country like the United States, which is heavily dependent on the Internet and information technology, is much more vulnerable to cyber-attacks than a less-developed country like North Korea. This means that a country like North Korea would benefit from a cyberwar exchange: they’d inflict far more damage than they’d incur. This also means that, in this hypothetical cyberwar, there would be pressure on the U.S. to move the war to another theater: air and ground, for example. Definitely worth thinking about.

Most important is the section on treaties. Clarke and Knake have a lot of experience with nuclear treaties, and have done considerable thinking about how to apply that experience to cyberspace. The parallel isn’t perfect, but there’s a lot to learn about what worked and what didn’t, and—more importantly—how things worked and didn’t. The authors discuss treaties banning cyberwar entirely (unlikely), banning attacks against civilians, limiting what is allowed in peacetime, stipulating no first use of cyber weapons, and so on. They discuss cyberwar inspections, and how these treaties might be enforced. Since cyberwar would be likely to result in a new worldwide arms race, one with a more precarious trigger than the nuclear arms race, this part should be read and discussed far and wide. Sadly, it gets lost in the rest of the book. And, since the book lacks an index, it can be hard to find any particular section after you’re done reading it.

In the last chapter, the authors lay out their agenda for the future, which I largely agree with.

  1. We need to start talking publicly about cyber war. This is certainly true. The threat of cyberwar is going to consume the sorts of resources we shoveled into the nuclear threat half a century ago, and a realistic discussion of the threats, risks, countermeasures, and policy choices is essential. We need more universities offering degrees in cyber security, because we need more expertise for the entire gamut of threats.
  2. We need to better defend our military networks, the high-level ISPs, and our national power grid. Clarke and Knake call this the “Defensive Triad.” The authors and I disagree strongly on how this should be done, but there is no doubt that it should be done. The two parts of that triad currently in commercial hands are simply too central to our nation, and too vulnerable, to be left insecure. And their value is far greater to the nation than it is to the corporations that own them, which means the market will not naturally secure them. I agree with the authors that regulation is necessary.
  3. We need to reduce cybercrime. Even without the cyber warriors bit, we need to do that. Cybercrime is bad, and it’s continuing to get worse. Yes, it’s hard. But it’s important.
  4. We need international cyberwar treaties. I couldn’t agree more about this. We do. We need to start thinking about them, talking about them, and negotiating them now, before the cyberwar arms race takes off. There are all kinds of issues with cyberwar treaties, and the book talks about a lot of them. However full of loopholes they might be, their existence will do more good than harm.
  5. We need more research on secure network designs. Again, even without the cyberwar bit, this is essential. We need more research in cybersecurity, a lot more.
  6. We need decisions about cyberwar—what weapons to build, what offensive actions to take, who to target—to be made as far up the command structure as possible. Clarke and Knake want the president to personally approve all of this, and I agree. Because of its nature, it can be easy to launch a small-scale cyber attack, and it can be easy for a small-scale attack to get out of hand and turn into a large-scale attack. We need the president to make the decisions, not some low-level military officer ensconced in a computer-filled bunker late one night.

This is great stuff, and a fine starting place for a national policy discussion on cybersecurity, whether it be against a military, espionage, or criminal threat. Unfortunately, for readers to get there, they have to wade through the rest of the book. And unless their bullshit detectors are already well-calibrated on this topic, I don’t want them reading all the hyperbole and fear-mongering that comes before, no matter how readable the book.

Note: I read Cyber War in April, when it first came out. I wanted to write a review then, but found that while my Kindle is great for reading, it’s terrible for flipping back and forth looking for bits and pieces to write about in a review. So I let the review languish. Finally, I borrowed a paper copy from my local library.

Some other reviews of the book Cyber War. See also the reviews on the Amazon page.

I wrote two essays on cyberwar.

Posted on December 21, 2010 at 7:23 AM

Storing Cryptographic Keys with Invisible Tattoos

This idea, by Stuart Schechter at Microsoft Research, is—I think—clever:

Abstract: Implantable medical devices, such as implantable cardiac defibrillators and pacemakers, now use wireless communication protocols vulnerable to attacks that can physically harm patients. Security measures that impede emergency access by physicians could be equally devastating. We propose that access keys be written into patients’ skin using ultraviolet-ink micropigmentation (invisible tattoos).

It certainly is a new way to look at the security threat model.
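At its core, the proposal is just a key-distribution scheme: generate a random access key for the implant and write a compact encoding of it into the patient's skin. A minimal sketch of the key-generation side is below; the 128-bit key size and the Base32 encoding are my assumptions for illustration, not the encoding the paper actually specifies for micropigmentation:

```python
# Hypothetical sketch of generating an implant access key and encoding it
# as a short alphanumeric string suitable for writing into skin.
# Key size and Base32 encoding are illustrative assumptions.
import base64
import secrets

def generate_access_key(bits: int = 128) -> bytes:
    """Generate a cryptographically random device access key."""
    return secrets.token_bytes(bits // 8)

def encode_for_tattoo(key: bytes) -> str:
    """Encode the key as a compact, case-insensitive Base32 string,
    the kind of short alphanumeric sequence that could plausibly be
    written with UV-ink micropigmentation and read back by a clinician."""
    return base64.b32encode(key).decode("ascii").rstrip("=")

key = generate_access_key()
tattoo_string = encode_for_tattoo(key)  # 26 characters for a 128-bit key
```

The appeal of the scheme is that the key is always physically with the patient and readable by any physician with a UV lamp, yet invisible to an attacker who only has wireless access to the device.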

Posted on April 15, 2010 at 6:43 AM

Security and Function Creep

Security is rarely static. Technology changes both security systems and attackers. But there’s something else that changes security’s cost/benefit trade-off: how the underlying systems being secured are used. Far too often we build security for one purpose, only to find it being used for another purpose—one it wasn’t suited for in the first place. And then the security system has to play catch-up.

Take driver’s licenses, for example. Originally designed to demonstrate a credential—the ability to drive a car—they looked like other credentials: medical licenses or elevator certificates of inspection. They were wallet-sized, of course, but they didn’t have much security associated with them. Then, slowly, driver’s licenses took on a second application: they became age-verification tokens in bars and liquor stores. Of course the security wasn’t up to the task—teenagers can be extraordinarily resourceful if they set their minds to it—and over the decades driver’s licenses got photographs, tamper-resistant features (once, it was easy to modify the birth year), and technologies that made counterfeiting harder. There was little value in counterfeiting a driver’s license, but a lot of value in counterfeiting an age-verification token.

Today, US driver’s licenses are taking on yet another function: security against terrorists. The Real ID Act—the government’s attempt to make driver’s licenses even more secure—has nothing to do with driving or even with buying alcohol, and everything to do with trying to make that piece of plastic an effective way to verify that someone is not on the terrorist watch list. Whether this is a good idea, or actually improves security, is another matter entirely.

You can see this kind of function creep everywhere. Internet security systems designed for informational Web sites are suddenly expected to provide security for banking Web sites. Security systems that are good enough to protect cheap commodities from being stolen are suddenly ineffective once the price of those commodities rises high enough. Application security systems, designed for locally owned networks, are expected to work even when the application is moved to a cloud computing environment. And cloud computing security, designed for the needs of corporations, is expected to be suitable for government applications as well—maybe even military applications.

Sometimes it’s obvious that security systems designed for one environment won’t work in another. We don’t arm our soldiers the same way we arm our policemen, and we can’t take commercial vehicles and easily turn them into ones outfitted for the military. We understand that we might need to upgrade our home security system if we suddenly come into possession of a bag of diamonds. Yet many think the same security that protects our home computers will also protect voting machines, and the same operating systems that run our businesses are suitable for military uses.

But these are all conscious decisions, and we security professionals often know better. The real problems arise when the changes happen in the background, without any conscious thought. We build a network security system that’s perfectly adequate for the threat and—like a driver’s license becoming an age-verification token—the network accrues more and more functions. But because it has already been pronounced “secure,” we can’t get any budget to re-evaluate and improve the security until after the bad guys have figured out the vulnerabilities and exploited them.

I don’t like having to play catch-up in security, but we seem doomed to keep doing so.

This essay originally appeared in the January/February 2010 issue of IEEE Security and Privacy.

Posted on February 4, 2010 at 6:35 AM
