Entries Tagged "theory of security"


The Topology of Covert Conflict

Interesting research paper by Shishir Nagaraja and Ross Anderson. Implications for warfare, terrorism, and peer-to-peer file sharing:

Abstract:

Often an attacker tries to disconnect a network by destroying nodes or edges, while the defender counters using various resilience mechanisms. Examples include a music industry body attempting to close down a peer-to-peer file-sharing network; medics attempting to halt the spread of an infectious disease by selective vaccination; and a police agency trying to decapitate a terrorist organisation. Albert, Jeong and Barabási famously analysed the static case, and showed that vertex-order attacks are effective against scale-free networks. We extend this work to the dynamic case by developing a framework based on evolutionary game theory to explore the interaction of attack and defence strategies. We show, first, that naive defences don’t work against vertex-order attack; second, that defences based on simple redundancy don’t work much better, but that defences based on cliques work well; third, that attacks based on centrality work better against clique defences than vertex-order attacks do; and fourth, that defences based on complex strategies such as delegation plus clique resist centrality attacks better than simple clique defences. Our models thus build a bridge between network analysis and evolutionary game theory, and provide a framework for analysing defence and attack in networks where topology matters. They suggest definitions of efficiency of attack and defence, and may even explain the evolution of insurgent organisations from networks of cells to a more virtual leadership that facilitates operations rather than directing them. Finally, we draw some conclusions and present possible directions for future research.
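The static result the paper extends is easy to reproduce. Here is a minimal sketch (my own illustration, not the authors' evolutionary framework): grow a scale-free graph by preferential attachment, then compare what a vertex-order attack—removing the highest-degree vertices—does to the largest connected component, versus removing the same number of random vertices.

```python
import random
from collections import defaultdict

def barabasi_albert(n, m, rng):
    """Grow a scale-free graph by preferential attachment: each new
    vertex links to m existing vertices, chosen with probability
    proportional to their current degree."""
    adj = defaultdict(set)
    targets = list(range(m))      # the new vertex's attachment points
    repeated = []                 # each vertex appears once per edge endpoint
    for v in range(m, n):
        for t in set(targets):
            adj[v].add(t)
            adj[t].add(v)
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = [rng.choice(repeated) for _ in range(m)]
    return adj

def largest_component(adj, removed):
    """Size of the largest connected component after deleting `removed`."""
    seen, best = set(removed), 0
    for start in list(adj):
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

def attack(adj, fraction, by_degree, rng):
    """Remove a fraction of vertices (highest-degree first, or at
    random) and return the surviving largest component as a fraction
    of the original vertex count."""
    n = len(adj)
    k = int(fraction * n)
    if by_degree:
        victims = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k]
    else:
        victims = rng.sample(list(adj), k)
    return largest_component(adj, victims) / n

rng = random.Random(1)
g = barabasi_albert(1000, 2, rng)
print("vertex-order attack:", attack(g, 0.05, True, rng))
print("random removal:     ", attack(g, 0.05, False, rng))
```

Removing just 5% of vertices by degree fragments the network far more than removing 5% at random—the Albert, Jeong and Barabási result that motivates the dynamic analysis.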

Posted on February 6, 2006 at 7:03 AM

Totally Secure Classical Communications?

My eighth Wired column:

How would you feel if you invested millions of dollars in quantum cryptography, and then learned that you could do the same thing with a few 25-cent Radio Shack components?

I’m exaggerating a little here, but if a new idea out of Texas A&M University turns out to be secure, we’ve come close.

Earlier this month, Laszlo Kish proposed securing a communications link, like a phone or computer line, with a pair of resistors. By adding electronic noise, or using the natural thermal noise of the resistors—called “Johnson noise”—Kish can prevent eavesdroppers from listening in.

In the blue-sky field of quantum cryptography, the strange physics of the subatomic world are harnessed to create a secure, unbreakable communications channel between two points. Kish’s research is intriguing, in part, because it uses the simpler properties of classical physics—the stuff you learned in high school—to achieve the same results.
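A toy model conveys the idea (this is my sketch with made-up resistor values; the real system is analog electronics, not software). Each party randomly switches a low- or high-valued resistor onto the shared wire every clock period. The eavesdropper can infer the loop resistance from the noise spectrum, but when the two parties happen to pick different resistors she cannot tell who picked which—while each party, knowing its own choice, can deduce the other's. Those mismatched rounds yield shared secret bits; rounds where the choices match are discarded.

```python
import random

R_LOW, R_HIGH = 1_000.0, 100_000.0   # hypothetical resistor values, ohms

def kljn_round(rng):
    """One clock period of the toy scheme. Returns (secret bit or
    None, what the eavesdropper observes)."""
    a = rng.choice([R_LOW, R_HIGH])   # Alice's resistor this period
    b = rng.choice([R_LOW, R_HIGH])   # Bob's resistor this period
    eve_sees = a + b                  # loop resistance, readable off the noise
    if a != b:
        # Eve observes R_LOW + R_HIGH either way, so she can't tell
        # who has which; Alice knows a and therefore b, and vice versa.
        bit = 0 if a == R_LOW else 1  # convention: Alice-low means 0
        return bit, eve_sees
    return None, eve_sees             # matching choices leak; discard

def exchange_key(nbits, rng):
    """Keep only the secure (mismatched) rounds until nbits are collected."""
    key = []
    while len(key) < nbits:
        bit, _ = kljn_round(rng)
        if bit is not None:
            key.append(bit)
    return key

print(exchange_key(16, random.Random(7)))
```

On average half the rounds are discarded, so the usable bit rate is half the clock rate—one reason the practical bandwidth is so low.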

At least, that’s the theory.

I go on to describe how the system works, and then discuss the security:

There hasn’t been enough analysis. I certainly don’t know enough electrical engineering to know whether there is any clever way to eavesdrop on Kish’s scheme. And I’m sure Kish doesn’t know enough security to know that, either. The physics and stochastic mathematics look good, but all sorts of security problems crop up when you try to actually build and operate something like this.

It’s definitely an idea worth exploring, and it’ll take people with expertise in both security and electrical engineering to fully vet the system.

There are practical problems with the system, though. The bandwidth the system can handle appears very limited. The paper gives the bandwidth-distance product as 2 × 10^6 meter-Hz. This means that over a 1-kilometer link, you can only send at 2,000 bps. A dialup modem from 1985 is faster. Even with a fat 500-pair cable you’re still limited to 1 million bps over 1 kilometer.
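The arithmetic behind those figures, treating the bandwidth-distance product as a hard ceiling (a back-of-the-envelope check, not a formula from the paper):

```python
BD_PRODUCT = 2e6   # bandwidth-distance product from the paper, meter-Hz

def max_bps(distance_m, pairs=1):
    """Crude ceiling: roughly one bit per hertz, per wire pair."""
    return BD_PRODUCT / distance_m * pairs

print(max_bps(1_000))             # 2000.0 -- single pair over 1 km
print(max_bps(1_000, pairs=500))  # 1000000.0 -- a 500-pair cable over 1 km
```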

And multi-wire cables have their own problems; there are all sorts of cable-capacitance and cross-talk issues with that sort of link. Phone companies really hate those high-density cables, because of how long it takes to terminate or splice them.

Even more basic: It’s vulnerable to man-in-the-middle attacks. Someone who can intercept and modify messages in transit can break the security. This means you need an authenticated channel to make it work—a link that guarantees you’re talking to the person you think you’re talking to. How often in the real world do we have a wire that is authenticated but not confidential? Not very often.

Generally, if you can eavesdrop you can also mount active attacks. But this scheme only defends against passive eavesdropping.

For those keeping score, that’s four practical problems: It’s only link encryption and not end-to-end, it’s bandwidth-limited (but may be enough for key exchange), it works only over short ranges, and it requires an authenticated channel. I can envision some specialized circumstances where this might be useful, but they’re few and far between.

But quantum key distribution has the same problems. Basically, if Kish’s scheme is secure, it’s superior to quantum communications in every respect: price, maintenance, speed, vibration, thermal resistance and so on.

Both this and the quantum solution share another problem, however: they’re solutions looking for a problem. In the realm of security, encryption is the one thing we already do pretty well. Focusing on encryption is like sticking a tall stake in the ground and hoping the enemy runs right into it, instead of building a wide wall.

Arguing about whether this kind of thing is more secure than AES—the United States’ national encryption standard—is like arguing about whether the stake should be a mile tall or a mile and a half tall. However tall it is, the enemy is going to go around the stake.

Software security, network security, operating system security, user interface—these are the hard security problems. Replacing AES with this kind of thing won’t make anything more secure, because all the other parts of the security system are so much worse.

This is not to belittle the research. I think information-theoretic security is important, regardless of practicality. And I’m thrilled that an easy-to-build classical system can work as well as a sexy, media-hyped quantum cryptosystem. But don’t throw away your crypto software yet.

Here’s the press release, here’s the paper, and here’s the Slashdot thread.

EDITED TO ADD (1/31): Here’s an interesting rebuttal.

Posted on December 15, 2005 at 6:13 AM

Hans Bethe on Security

Hans Bethe was one of the first nuclear scientists, a member of the Manhattan Project, and a political activist. In this article about him, there’s a great quote:

Sometimes insistence on 100 percent security actually impairs our security, while the bold decision—though at the time it seems to involve some risk—will give us more security in the long run.

Posted on November 15, 2005 at 12:41 PM

Eric Schmidt on Secrecy and Security

From Information Week:

InformationWeek: What about security? Have you been paying as much attention to security as, say, Microsoft—you can debate whether or not they’ve been successful, but they’ve poured a lot of resources into it.

Schmidt: More people to a bad architecture does not necessarily make a more secure system. Why don’t you define security so I can answer your question better?

InformationWeek: I suppose it’s an issue of making the technology transparent enough that people can deploy it with confidence.

Schmidt: Transparency is not necessarily the only way you achieve security. For example, part of the encryption algorithms are not typically made available to the open source community, because you don’t want people discovering flaws in the encryption.

Actually, he’s wrong. Everything about an encryption algorithm should always be made available to everyone, because otherwise you’ll invariably have exploitable flaws in your encryption.

My essay on the topic is here.

Posted on May 31, 2005 at 1:09 PM

PS2 Cheat Codes Hacked

From Adam Fields weblog:

Some guy tore apart his PS2 controller, connected it to the parallel port on his computer, and wrote a script to press a large number of button combinations. He used it to figure out all of the cheat codes for GTA San Andreas (including some not released by Rockstar, apparently).

http://games.slashdot.org/article.pl?sid=05/01/17/1411251

This is a great example of a “class break” in systems security—the creation of a tool means that this same technique can be easily used on all games, and game developers can no longer rely (if they did before) on the codes being secret because it’s hard to try them all.
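The software half of such a tool is trivial to sketch—the hard part is the hardware. A hypothetical Python version, where `triggers_cheat` stands in for pressing a sequence through the parallel-port rig and watching the game's state for a reaction:

```python
from itertools import product

BUTTONS = ["UP", "DOWN", "LEFT", "RIGHT", "X", "SQUARE",
           "TRIANGLE", "CIRCLE", "L1", "L2", "R1", "R2"]

def search(length, triggers_cheat):
    """Exhaustively try every button sequence of the given length,
    returning the ones that activate a cheat. triggers_cheat is a
    placeholder for the hardware interface, which is not modeled."""
    return [seq for seq in product(BUTTONS, repeat=length)
            if triggers_cheat(seq)]

# A fake game with one known 3-button code, to show the loop works:
secret = ("UP", "UP", "X")
print(search(3, lambda seq: seq == secret))  # [('UP', 'UP', 'X')]
```

With 12 buttons, the search space is only 12^n sequences of length n—small enough that once the enumeration is automated, secrecy of the codes provides no protection at all. That is exactly what makes this a class break.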

Posted on January 29, 2005 at 8:00 AM

Fingerprinting Students

A nascent security trend in the U.S. is tracking schoolchildren when they get on and off school buses.

Hoping to prevent the loss of a child through kidnapping or more innocent circumstances, a few schools have begun monitoring student arrivals and departures using technology similar to that used to track livestock and pallets of retail shipments.

A school district in Spring, Texas, is using computerized ID badges to record this information, and wirelessly sending it to police headquarters. Another school district, in Phoenix, is doing the same thing with fingerprint readers. The system is supposed to help prevent the loss of a child, whether through kidnapping or accident.

What’s going on here? Have these people lost their minds? Tracking kids as they get on and off school buses is a ridiculous idea. It’s expensive, invasive, and doesn’t increase security very much.

Security is always a trade-off. In Beyond Fear, I delineated a five-step process to evaluate security countermeasures. The idea is to be able to determine, rationally, whether a countermeasure is worth it. In the book, I applied the five-step process to everything from home burglar alarms to military action against terrorism. Let’s apply it in this case.

Step 1: What assets are you trying to protect? Children.

Step 2: What are the risks to these assets? Loss of the child, either due to kidnapping or accident. Child kidnapping is a serious problem in the U.S.; the odds of a child being abducted by a family member are 1 in 340, and by a non-family member 1 in 1,200 (per year). (These statistics are for 1999, and are from NISMART-2, U.S. Department of Justice. My guess is that the current rates in Spring, Texas, are much lower.) Very few of these kidnappings involve school buses, so it’s unclear how serious the specific risks being addressed here are.

Step 3: How well does the security solution mitigate those risks? Not very well.

Let’s imagine how this system might provide security in the event of a kidnapping. If a kidnapper—assume it’s someone the child knows—goes onto the school bus and takes the child off at the wrong stop, the system would record that. Otherwise—if the kidnapping took place either before the child got on the bus or after the child got off—the system wouldn’t record anything suspicious. Yes, it would tell investigators if the kidnapping happened before morning attendance and either before or after the school bus ride, but is that one piece of information worth this entire tracking system? I doubt it.

You could imagine a movie-plot scenario where this kind of tracking system could help the hero recover the kidnapped child, but it hardly seems useful in the general case.

Step 4: What other risks does the security solution cause? The additional risk is the data collected through constant surveillance. Where is this information collected? Who has access to it? How long is it stored? These are important security questions that get no mention.

Step 5: What costs and trade-offs does the security solution impose? There are two. The first is obvious: money. I haven’t worked out the exact costs, but it’s expensive to outfit every child with an ID card and every school bus with this system. The second cost is more intangible: a loss of privacy. We are raising children who think it normal that their daily movements are watched and recorded by the police. That feeling of privacy is not something we should give up lightly.

So, finally: is this system worth it? No. The security gained is not worth the money and privacy spent. If the goal is to make children safer, the money would be better spent elsewhere: guards at the schools, education programs for the children, etc.

If this system makes so little sense, why have at least two cities in the U.S. implemented it? The obvious answer is that the school districts didn’t think the problem through. Either they were seduced by the technology, or by the companies that built the system. But there’s another, more interesting, possibility.

In Beyond Fear, I talk about the notion of agenda. The five-step process is a subjective one, and should be evaluated from the point of view of the person making the trade-off decision. If you imagine that the school officials are making the trade-off, then the system suddenly makes sense.

If a kidnapping occurs on school property, the subsequent investigation could easily hurt school officials. They could even lose their jobs. If you view this security countermeasure as one protecting them just as much as it protects children, it suddenly makes more sense. The trade-off might not be worth it in general, but it’s worth it to them.

Kidnapping is a real problem, and countermeasures that help reduce the risk are a good thing. But remember that security is always a trade-off, and a good security system is one where the security benefits are worth the money, convenience, and liberties that are being given up. Quite simply, this system isn’t worth it.

Posted on January 11, 2005 at 9:49 AM

Security Notes from All Over: Israeli Airport Security Questioning

In both Secrets and Lies and Beyond Fear, I discuss a key difference between attackers and defenders: the ability to concentrate resources. The defender must defend against all possible attacks, while the attacker can concentrate his forces on one particular avenue of attack. This precept is fundamental to a lot of security, and can be seen very clearly in counterterrorism. A country is in the position of the interior; it must defend itself against all possible terrorist attacks: airplane terrorism, chemical bombs, threats at the ports, threats through the mails, lone lunatics with automatic weapons, assassinations, and so on. The terrorist just needs to find one weak spot in the defenses, and exploit that. This concentration versus diffusion of resources is one reason why the defender’s job is so much harder than the attacker’s.

This same principle guides security questioning at the Ben Gurion Airport in Israel. In this example, the attacker is the security screener and the defender is the terrorist. (It’s important to remember that “attacker” and “defender” are not moral labels, but tactical ones. Sometimes the defenders are the good guys and the attackers are the bad guys. In this case, the bad guy is trying to defend his cover story against the good guy who is attacking it.)

Security is impressively tight at the airport, and includes a potentially lengthy interview by a trained security screener. The screener asks each passenger questions, trying to determine if he’s a security risk. But instead of asking unrelated questions—where do you live, what do you do for a living, where were you born—the screener asks questions that follow a storyline: “Where are you going? Who do you know there? How did you meet him? What were you doing there?” And so on.

See the ability to concentrate resources? The defender—the terrorist trying to sneak aboard the airplane—needs a cover story sufficiently broad to be able to respond to any line of questioning. So he might memorize the answers to several hundred questions. The attacker—the security screener—could ask questions scattershot, but instead concentrates his questioning along one particular line. The theory is that eventually the defender will reach the end of his memorized story, and that the attacker will then notice the subtle changes in the defender as he starts to make up answers.

Posted on December 14, 2004 at 9:26 AM

An Impressive Car Theft

The armored Mercedes belonging to the CEO of DaimlerChrysler has been stolen:

The black company car, which is worth about 800,000 euros ($1 million), disappeared on the night of Oct. 26, police spokesman Klaus-Peter Arand said in a telephone interview. The limousine, which sports a 12-cylinder engine and is equipped with a broadcasting device to help retrieve the car, hasn’t yet been found, the police said.

There are two types of thieves, whether they be car thieves or otherwise. First, there are the thieves that want a car, any car. And second, there are the thieves that want one particular car. Against the first type, any security measure that makes your car harder to steal than the car next to it is good enough. Against the second type, even a sophisticated GPS tracking system might not be enough.

Posted on December 1, 2004 at 11:01 AM
