Blog: October 2008 Archives

Keeping America Safe from Terrorism by Monitoring Distillery Webcams

Really:

We had an email recently from an observer “curious as to why the webcam that was inside the shop/bar is no longer there, or at least, functional”. The email was from the Defense Threat Reduction Agency in the United States.

When we replied that it was simply a short-term technical problem, we asked why on earth they would be interested in the comings and goings of a small Distillery off the West Coast of Scotland. Were there secret manoeuvres taking place in Loch Indaal, or even a threat of terrorists infiltrating the mainland via Islay?

The answer we received was even more surreal. Evidently the mission of the DTRA is to safeguard the US and its allies from weapons of mass destruction: chemical, biological, radiological, nuclear and high explosives. The department which contacted the Distillery deals with the implementation of the Chemical Weapons Convention, going to sites to verify treaty compliance. Funnily enough, chemical weapon processes look very similar to the distilling process, and as part of training there is a visit to a brewery for familiarization with reactors, batch processors and evaporators. As they said, it just goes to show how “tweaks” to the process flow or equipment can create something very pleasant (whisky) or deadly (chemical weapons).

As they say: “In the post-Cold War environment, a unified, consistent approach to deterring, reducing and countering weapons of mass destruction is essential to maintaining our national security. Under DTRA, Department of Defense resources, expertise and capabilities are combined to ensure the United States remains ready and able to address the present and future WMD threat. We perform four essential functions to accomplish our mission: combat support, technology development, threat control and threat reduction. These functions form the basis for how we are organized and our daily activities. Together, they enable us to reduce the physical and psychological terror of weapons of mass destruction, thereby enhancing the security of the world’s citizens. At the dawn of the 21st century, no other task is as challenging or demanding”.

EDITED TO ADD (11/7): This story seems mostly bogus. See “The Story Continues…” on this page.

Posted on October 31, 2008 at 11:15 AM

UPC Switching Scam

It’s not a new scam to switch bar codes and buy merchandise for a lower value, but how do you get away with over $1M worth of merchandise with this scam?

In a statement of facts filed with Tidwell’s plea, he admitted that, during one year, he and others conspired to steal more than $1 million in merchandise from large retailers and sell the items through eBay. The targeted merchandise included high-end vacuum cleaners, electric welders, power winches, personal computers, and electric generators.

Tidwell created fraudulent UPC labels on his home personal computer. Conspirators entered various stores in Ohio, Illinois, Indiana, Pennsylvania and Texas and placed the fraudulent labels on merchandise they targeted, and then bought the items from the store. The fraudulent UPC labels attached to the merchandise would cause the item to be rung up for a price far below its actual retail value.

That requires a lot of really clueless checkout clerks.
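(For the curious: the reason a home-printed label scans as "valid" at all is that a UPC-A barcode carries no authentication, just a simple checksum. A quick sketch; the checksum algorithm is standard, the digits below are arbitrary:)

    # Sketch: the UPC-A check digit is simple modular arithmetic, so a
    # home printer can produce a scannable, internally "valid" barcode
    # for any price code. The example digits below are arbitrary.

    def upc_check_digit(first11: str) -> int:
        """Check digit for the first 11 digits of a UPC-A code."""
        assert len(first11) == 11 and first11.isdigit()
        # Digits in odd positions (1st, 3rd, ...) are weighted 3,
        # digits in even positions are weighted 1.
        total = sum(int(d) * (3 if i % 2 == 0 else 1)
                    for i, d in enumerate(first11))
        return (10 - total % 10) % 10

    print(upc_check_digit("03600029145"))  # -> 2, full code 036000291452

Nothing in that arithmetic ties the label to the product or the price, which is why the only real defense is the clerk noticing that a generator is ringing up as a toaster.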

EDITED TO ADD (11/7): Video of talk on barcode hacks.

Posted on October 31, 2008 at 6:43 AM

Horrible Identity Theft Story

This is a story of how smart people can be neutralized through stupid procedures.

Here’s the part of the story where some poor guy’s account gets completely f-ed. This thief had been bounced to the outsourced security department so often that he must have made a checklist of any possible questions they would ask him. Through whatever means, he managed to get the answers to these questions. Now when he called, he could give us the information we were asking for, but by this point we knew his voice so well that we still tried to get him to security. It worked like this: We put him on hold and dial the extension for security. We get a security rep and start to explain the situation; we tell them he was able to give the right information, but that we know it’s the same guy that’s been calling for weeks and we are certain he is not the account holder. They begrudgingly take the call. Minutes later another one of us gets a call from a security rep saying they are giving us a customer who has been cleared by them. And here the thief was back in our department. For those of us who had come to know him, the fight raged on night after night.

Posted on October 30, 2008 at 12:10 PM

Movie-Plot Threat: Terrorists Using Twitter

No, really. (Commentary here.)

This is just ridiculous. Of course the bad guys will use all the communications tools available to the rest of us. They have to communicate, after all. They’ll also use cars, water faucets, and all-you-can-eat buffet lunches. So what?

This commentary is dead on:

Steven Aftergood, a veteran intelligence analyst at the Federation of American Scientists, doesn’t dismiss the Army presentation out of hand. But nor does he think it’s tackling a terribly serious threat. “Red-teaming exercises to anticipate adversary operations are fundamental. But they need to be informed by a sense of what’s realistic and important and what’s not,” he tells Danger Room. “If we have time to worry about ‘Twitter threats’ then we’re in good shape. I mean, it’s important to keep some sense of proportion.”

Posted on October 30, 2008 at 7:51 AM

TSA News

Item 1: Kip Hawley says that the TSA may reduce size restrictions on liquids. You’ll still have to take them out of your bag, but they can be larger than three ounces. The reasons—so he states—are that technologies are getting better, not that the threat is reduced.

I’m skeptical, of course. But read his post; it’s interesting.

Item 2: Hawley responded to my response to his blog post about an article about me in The Atlantic.

Item 3: The Atlantic is holding a contest, based on Hawley’s comment that the TSA is basically there to catch stupid terrorists:

And so, a contest: How would the Hawley Principle of Federally-Endorsed Mediocrity apply to other government endeavors?

Not the same as my movie-plot threat contest, but fun all the same.

Item 4: What would the TSA make of this?

Posted on October 29, 2008 at 2:27 PM

The Skein Hash Function

NIST is holding a competition to replace the SHA family of hash functions, which have been increasingly under attack. (I wrote about an early NIST hash workshop here.)

Skein is our submission (myself and seven others: Niels Ferguson, Stefan Lucks, Doug Whiting, Mihir Bellare, Tadayoshi Kohno, Jon Callas, and Jesse Walker). Here’s the paper:

Executive Summary

Skein is a new family of cryptographic hash functions. Its design combines speed, security, simplicity, and a great deal of flexibility in a modular package that is easy to analyze.

Skein is fast. Skein-512—our primary proposal—hashes data at 6.1 clock cycles per byte on a 64-bit CPU. This means that on a 3.1 GHz x64 Core 2 Duo CPU, Skein hashes data at 500 MBytes/second per core—almost twice as fast as SHA-512 and three times faster than SHA-256. An optional hash-tree mode speeds up parallelizable implementations even more. Skein is fast for short messages, too; Skein-512 hashes short messages in about 1000 clock cycles.

Skein is secure. Its conservative design is based on the Threefish block cipher. Our current best attack on Threefish-512 is on 25 of 72 rounds, for a safety factor of 2.9. For comparison, at a similar stage in the standardization process, the AES encryption algorithm had an attack on 6 of 10 rounds, for a safety factor of only 1.7. Additionally, Skein has a number of provably secure properties, greatly increasing confidence in the algorithm.

Skein is simple. Using only three primitive operations, the Skein compression function can be easily understood and remembered. The rest of the algorithm is a straightforward iteration of this function.

Skein is flexible. Skein is defined for three different internal state sizes—256 bits, 512 bits, and 1024 bits—and any output size. This allows Skein to be a drop-in replacement for the entire SHA family of hash functions. A completely optional and extendable argument system makes Skein an efficient tool to use for a very large number of functions: a PRNG, a stream cipher, a key derivation function, authentication without the overhead of HMAC, and a personalization capability. All these features can be implemented with very low overhead. Together with the Threefish large-block cipher at Skein’s core, this design provides a full set of symmetric cryptographic primitives suitable for most modern applications.

Skein is efficient on a variety of platforms, both hardware and software. Skein-512 can be implemented in about 200 bytes of state. Small devices, such as 8-bit smart cards, can implement Skein-256 using about 100 bytes of memory. Larger devices can implement the larger versions of Skein to achieve faster speeds.

Skein was designed by a team of highly experienced cryptographic experts from academia and industry, with expertise in cryptography, security analysis, software, chip design, and implementation of real-world cryptographic systems. This breadth of knowledge allowed them to create a balanced design that works well in all environments.
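The "three primitive operations" the summary refers to are 64-bit addition, bitwise rotation, and XOR. Here is a toy sketch of the Threefish MIX step, mine rather than the paper's; the rotation constant is an arbitrary stand-in, since the real cipher's constants depend on round and word position:

    MASK64 = (1 << 64) - 1

    def rotl64(x: int, r: int) -> int:
        """Rotate a 64-bit word left by r bits."""
        return ((x << r) | (x >> (64 - r))) & MASK64

    def mix(x0: int, x1: int, r: int):
        """Threefish MIX: add, rotate, XOR -- the three primitive operations."""
        y0 = (x0 + x1) & MASK64   # 64-bit addition
        y1 = rotl64(x1, r) ^ y0   # rotation, then XOR
        return y0, y1

    # Arbitrary demonstration values; real Threefish-512 applies MIX across
    # its whole state for 72 rounds, interleaved with word permutations.
    print(mix(0x0123456789ABCDEF, 0xFEDCBA9876543210, 17))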

Here’s source code, test vectors, and the like for Skein. Watch the Skein website for any updates—new code, new results, new implementations, the proofs.

NIST’s deadline is Friday. It seems as if everyone—including many amateurs—is working on a hash function, and I predict that NIST will receive at least 80 submissions. (Compare this to the twenty-one NIST submissions received for the AES competition in 1998.) I expect people to start posting their submissions over the weekend. (Ron Rivest already presented MD6 at Crypto in August.) Probably the best place to watch for new hash functions is here; I’ll try to keep a listing of the submissions myself.

The selection process will take around four years. I’ve previously called this sort of thing a cryptographic demolition derby—last one left standing wins—but that’s only half true. Certainly all the groups will spend the next couple of years trying to cryptanalyze each other, but in the end there will be a bunch of unbroken algorithms; NIST will select one based on performance and features.

NIST has stated that the goal of this process is not to choose the best standard but to choose a good standard. I think that’s smart of them; in this process, “best” is the enemy of “good.” My advice is this: immediately sort them based on performance and features. Ask the cryptographic community to focus its attention on the top dozen, rather than spread its attention across all 80—although I also expect that most of the amateur submissions will be rejected by NIST for not being “complete and proper.” Otherwise, people will break the easy ones and the better ones will go unanalyzed.

EDITED TO ADD (10/30): Here is a single website for all information, including cryptanalysis, of all the SHA-3 submissions. I spoke to a reporter who told me that, as of yesterday, NIST had received 30 submissions. And three news articles about Skein.

Posted on October 29, 2008 at 6:35 AM

Rubber-Hose Cryptanalysis

Cryptographers have long joked about rubber-hose cryptanalysis: basically, beating the keys out of someone. Seems that this might have actually happened in Turkey:

According to comments allegedly made by Howard Cox, a US Department of Justice official in a closed-door meeting last week, after being frustrated with the disk encryption employed by Yastremskiy, Turkish law enforcement may have resorted to physical violence to force the password out of the Ukrainian suspect.

Mr Cox’s revelation came in the context of a joke made during his speech. While the exact words were not recorded, multiple sources have verified that Cox quipped about leaving a stubborn suspect alone with Turkish police for a week as a way to get them to voluntarily reveal their password. The specifics of the interrogation techniques were not revealed, but all four people I spoke to stated that it was clear that physical coercion was the implied method.

Posted on October 27, 2008 at 12:45 PM

Barack Obama Discusses Security Trade-Offs

I generally avoid commenting on election politics—that’s not what this blog is about—but this comment by Barack Obama is worth discussing:

[Q] I have been collecting accounts of your meeting with David Petraeus in Baghdad. And you had [inaudible] after he had made a really strong pitch [inaudible] for maximum flexibility. A lot of politicians at that moment would have said [inaudible] but from what I hear, you pushed back.

[BO] I did. I remember the conversation, pretty precisely. He made the case for maximum flexibility and I said you know what if I were in your shoes I would be making the exact same argument because your job right now is to succeed in Iraq on as favorable terms as we can get. My job as a potential commander in chief is to view your counsel and your interests through the prism of our overall national security which includes what is happening in Afghanistan, which includes the costs to our image in the middle east, to the continued occupation, which includes the financial costs of our occupation, which includes what it is doing to our military. So I said look, I described in my mind at least an analogous situation where I am sure he has to deal with situations where the commanding officer in [inaudible] says I need more troops here now because I really think I can make progress doing x y and z. That commanding officer is doing his job in Ramadi, but Petraeus’s job is to step back and see how does it impact Iraq as a whole. My argument was I have got to do the same thing here. And based on my strong assessment particularly having just come from Afghanistan we’re going to have to make a different decision. But the point is that hopefully I communicated to the press my complete respect and gratitude to him and Proder who was in the meeting for their outstanding work. Our differences don’t necessarily derive from differences in sort of, or my differences with him don’t derive from tactical objections to his approach. But rather from a strategic framework that is trying to take into account the challenges to our national security and the fact that we’ve got finite resources.

I have made this general point again and again—about airline security, about terrorism, about a lot of things—that the person in charge of the security system can’t be the person who decides what resources to devote to that security system. The analogy I like to use is a company: the VP of marketing wants all the money for marketing, the VP of engineering wants all the money for engineering, and so on; and the CEO has to balance all of those needs and do what’s right for the company. So of course the TSA wants to spend all this money on new airplane security systems; that’s their job. Someone above the TSA has to balance the risks to airlines with the other risks our country faces and allocate budget accordingly. Security is a trade-off, and that trade-off has to be made by someone with responsibility over all aspects of that trade-off.

I don’t think I’ve ever heard a politician make this point so explicitly.

EDITED TO ADD (10/27): This is a security blog, not a political blog. As such, I have deleted all political comments below—on both sides. You are welcome to discuss this notion of security trade-offs and the appropriate level to make them, but not the election or the candidates.

Posted on October 27, 2008 at 6:31 AM

ANSI Cyberrisk Calculation Guide

Interesting:

In a nutshell, the guide advocates that organizations calculate cyber security risks and costs by asking questions of every organizational discipline that might be affected: legal, compliance, business operations, IT, external communications, crisis management, and risk management/insurance. The idea is to involve everyone who might be affected by a security breach and collect data on the potential risks and costs.

Once all of the involved parties have weighed in, the guide offers a mathematical formula for calculating financial risk: Essentially, it is a product of the frequency of an event multiplied by its severity, multiplied by the likelihood of its occurrence. If risk can be transferred to other organizations, that part of the risk can be subtracted from the net financial risk.
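The arithmetic, as summarized, is trivial; here is a sketch with invented numbers (the function name and figures are mine, not the guide's):

    # Net financial risk per the summary above: frequency x severity x
    # likelihood, minus any risk transferred to others (e.g. an insurer).
    # All figures are invented for illustration.

    def net_financial_risk(frequency_per_year: float,
                           severity_dollars: float,
                           likelihood: float,
                           transferred_dollars: float = 0.0) -> float:
        gross = frequency_per_year * severity_dollars * likelihood
        return gross - transferred_dollars

    # A breach expected twice a year, $500,000 in damage, 10% likely,
    # with $40,000 of the exposure covered by insurance:
    print(net_financial_risk(2, 500_000, 0.10, 40_000))  # -> 60000.0

Whether organizations can actually estimate those inputs is, of course, the hard part.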

Guide is here.

Posted on October 24, 2008 at 7:04 AM

Remotely Eavesdropping on Keyboards

Clever work:

The researchers from the Security and Cryptography Laboratory at Ecole Polytechnique Federale de Lausanne are able to capture keystrokes by monitoring the electromagnetic radiation of PS/2, universal serial bus, or laptop keyboards. They’ve outlined four separate attack methods, some that work at a distance of as much as 65 feet from the target.

In one video demonstration, researchers Martin Vuagnoux and Sylvain Pasini sniff out the keystrokes typed into a standard keyboard using a large antenna that’s about 20 to 30 feet away in an adjacent room.

Website here.

Posted on October 23, 2008 at 12:48 PM

Kip Hawley Responds to My Airport Security Antics

Kip Hawley, head of the TSA, has responded to my airport security penetration testing, published in The Atlantic.

Unfortunately, there’s not really anything to his response. It’s obvious he doesn’t want to admit that they’ve been checking ID’s all this time to no purpose whatsoever, so he just emits vague generalities like a frightened squid filling the water with ink. Yes, some of the stunts in the article are silly (who cares if people fly with Hezbollah T-shirts?) so that gives him an opportunity to minimize the real issues.

Watch-lists and identity checks are important and effective security measures. We identify dozens of terrorist-related individuals a week and stop No-Flys regularly with our watch-list process.

It is simply impossible that the TSA catches dozens of terrorists every week. If it were true, the administration would be trumpeting this all over the press—it would be an amazing success story in their war on terrorism. But note that Hawley doesn’t exactly say that; he calls them “terrorist-related individuals.” Which means exactly what? People so dangerous they can’t be allowed to fly for any reason, yet so innocent they can’t be arrested—even under the provisions of the Patriot Act.

And if Secretary Chertoff is telling the truth when he says that there are only 2,500 people on the no-fly list and fewer than 16,000 people on the selectee list—they’re the ones that get extra screening—and that most of them live outside the U.S., then it is just plain impossible that the TSA identifies “dozens” of these people every week. The math just doesn’t make sense.

And I also don’t believe this:

Behavior detection works and we have 2,000 trained officers at airports today. They alert us to people who may pose a threat but who may also have items that could elude other layers of physical security.

It does work, but I don’t see the TSA doing it properly. (Fly El Al if you want to see it done properly.) But what I think Hawley is doing is engaging in a little bit of psychological manipulation. Like sky marshals, the real benefit of behavior detection isn’t whether or not you do it but whether or not the bad guys believe you’re doing it. If they think you are doing behavior detection at security checkpoints, or have sky marshals on every airplane, then you don’t actually have to do it. It’s the threat that’s the deterrent, not the actual security system.

This doesn’t impress me, either:

Items carried on the person, be they a ‘beer belly’ or concealed objects in very private areas, are why we are buying over 100 whole body imagers in upcoming months and will deploy more over time. In the meantime, we use hand-held devices that detect hydrogen peroxide and other explosives compounds as well as targeted pat-downs that require private screening.

Optional security measures don’t work, because the bad guys will opt not to use them. It’s like those air-puff machines at some airports now. They’re probably great at detecting explosive residue off clothing, but every time I have seen the machines in operation, the passengers have the option whether to go through the lane with them or another lane. What possible good is that?

The closest thing to a real response from Hawley is that the terrorists might get caught stealing credit cards.

Using stolen credit cards and false documents as a way to get around watch-lists makes the point that forcing terrorists to use increasingly risky tactics has its own security value.

He’s right about that. And, truth be told, that was my sloppiest answer during the original interview. Thinking about it afterwards, it’s far more likely that someone with a clean record and a legal credit card will buy the various plane tickets.

This is new:

Boarding pass scanners and encryption are being tested in eight airports now and more will be coming.

Ignoring for a moment that “eight airports” nonsense—unless you do it at every airport, the bad guys will choose the airport where you don’t do it to launch their attack—this is an excellent idea. The reason my attack works, the reason I can get through TSA checkpoints with a fake boarding pass, is that the TSA never confirms that the information on the boarding pass matches a legitimate reservation. If all TSA checkpoints had boarding pass scanners that connected to the airlines’ computers, this attack would not work. (Interestingly enough, I noticed exactly this system at the Dublin airport earlier this month.)

Stopping the “James Bond” terrorist is truly a team effort and I whole-heartedly agree that the best way to stop those attacks is with intelligence and law enforcement working together.

This isn’t about “Stopping the ‘James Bond’ terrorist,” it’s about stopping terrorism. And if all this focus on airports, even assuming it starts working, shifts the terrorists to other targets, we haven’t gotten a whole lot of security for our money.

FYI: I did a long interview with Kip Hawley last year. If you haven’t read it, I strongly recommend you do. I pressed him on these and many other points, and didn’t get very good answers then, either.

EDITED TO ADD (10/28): Kip Hawley responds in comments. Yes, it’s him.

EDITED TO ADD (11/17): Another article on those boarding pass verifiers.

Posted on October 23, 2008 at 6:24 AM

Terrorists and Child Porn, Oh My!

It’s the ultimate movie-plot threat: terrorists using child porn:

It is thought Islamist extremists are concealing messages in digital images and audio, video or other files.

Police are now investigating the link between terrorists and paedophilia in an attempt to unravel the system.

It could lead to the training of child welfare experts to identify signs of terrorist involvement as they monitor pornographic websites.

Of course, terrorists and strangers preying on our children are two of the things that cause the most fear in people. Put them together, and there’s no limit to what sorts of laws you can get passed.

EDITED TO ADD (10/22): Best comment:

Why would terrorists hide incriminating messages inside incriminating photographs? That would be like drug smugglers hiding kilos of cocaine in bales of marijuana.

Posted on October 22, 2008 at 12:57 PM

Terrorist Fear Mongering Seems to be Working Less Well, Part II

Last week I wrote about a story that indicated that terrorist fear mongering is working less well. Here’s another story, this one from Canada: two pipeline bombings in Northern British Columbia:

Investigators are treating the explosions as acts of vandalism, not terrorism, Shields said.

“Under the Criminal Code, it would be characterized as mischief, which is an intentional vandalism. We don’t want to characterize this as terrorism. They were very isolated locations and there would seem there was no intent to hurt people,” he said.

It’s not all good, though. Here’s a story from Philadelphia, where a subway car is criticized because people can see out the front. Because, um, because terrorists will be able to see out the front, and we all know how dangerous terrorists are:

Marcus Ruef, a national vice president with the Brotherhood of Locomotive Engineers and Trainmen, compared a train cab to an airliner cockpit and said a cab should be similarly secure. He invoked post-9/11 security concerns as a reason to provide a full cab that prevents passengers from seeing the rails and signals ahead.

“We don’t think the forward view of the right-of-way should be available to whoever wants to watch … and the conductor and the engineer should be able to talk privately,” Ruef said.

Pat Nowakowski, SEPTA chief of operations, said the smaller cabs pose no security risk. “I have never heard that from a security expert,” he said.

At least there was pushback against that kind of idiocy.

And from the UK:

Transport Secretary Geoff Hoon has said the government is prepared to go “quite a long way” with civil liberties to “stop terrorists killing people”.

He was responding to criticism of plans for a database of mobile and web records, saying it was needed because terrorists used such communications.

By not monitoring this traffic, it would be “giving a licence to terrorists to kill people”, he said.

I hope there will be similar pushback against this “choice.”

EDITED TO ADD (11/13): Seems like the Philadelphia engineers have another agenda—the cabs in the new trains are too small—and they’re just using security as an excuse.

Posted on October 22, 2008 at 6:44 AM

ID Cards for Port Workers

While I am strongly opposed to a national ID, I have consistently said that giving strongly secured ID cards to groups like port workers is a good idea. It’s happening in New England:

The scannable card serves as proof that a background check has been performed and it contains features aimed at preventing misuse. In addition to a photograph, the card contains a smart chip that carries a copy of the holder’s fingerprint. Port and delivery workers, cargo handlers, and other employees who must venture into sensitive or secure areas will be required to submit to a fingerprint scan before entering those locations. The scanning machine will automatically perform a match analysis with the fingerprint embedded in the smart chip.

This is a great application for these cards.

Posted on October 21, 2008 at 1:28 PM

Quantum Cryptography

Quantum cryptography is back in the news, and the basic idea is still unbelievably cool, in theory, and nearly useless in real life.

The idea behind quantum crypto is that two people communicating using a quantum channel can be absolutely sure no one is eavesdropping. Heisenberg’s uncertainty principle requires anyone measuring a quantum system to disturb it, and that disturbance alerts legitimate users as to the eavesdropper’s presence. No disturbance, no eavesdropper—period.

This month we’ve seen reports on a new working quantum-key distribution network in Vienna, and a new quantum-key distribution technique out of Britain. Great stuff, but headlines like the BBC’s “‘Unbreakable’ encryption unveiled” are a bit much.

The basic science behind quantum crypto was developed, and prototypes built, in the early 1980s by Charles Bennett and Gilles Brassard, and there have been steady advances in engineering since then. I describe basically how it all works in Applied Cryptography, 2nd Edition (pages 554-557). At least one company already sells quantum-key distribution products.

Note that this is totally separate from quantum computing, which also has implications for cryptography. Several groups are working on designing and building a quantum computer, which is fundamentally different from a classical computer. If one were built—and we’re talking science fiction here—then it could factor numbers and solve discrete-logarithm problems very quickly. In other words, it could break all of our commonly used public-key algorithms. For symmetric cryptography it’s not that dire: A quantum computer would effectively halve the key length, so that a 256-bit key would be only as secure as a 128-bit key today. Pretty serious stuff, but years away from being practical. I think the best quantum computer today can factor the number 15.
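The halving comes from Grover's algorithm, which searches an unstructured space of 2^n keys in roughly the square root of that many steps; the arithmetic (my illustration, not from the original essay) is:

    % Grover's quantum search gives a quadratic, not exponential, speedup:
    \[
      \text{classical brute force: } O(2^{256})
      \quad\longrightarrow\quad
      \text{Grover: } O\!\left(\sqrt{2^{256}}\right) = O(2^{128})
    \]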

While I like the science of quantum cryptography—my undergraduate degree was in physics—I don’t see any commercial value in it. I don’t believe it solves any security problem that needs solving. I don’t believe that it’s worth paying for, and I can’t imagine anyone but a few technophiles buying and deploying it. Systems that use it don’t magically become unbreakable, because the quantum part doesn’t address the weak points of the system.

Security is a chain; it’s as strong as the weakest link. Mathematical cryptography, as bad as it sometimes is, is the strongest link in most security chains. Our symmetric and public-key algorithms are pretty good, even though they’re not based on much rigorous mathematical theory. The real problems are elsewhere: computer security, network security, user interface and so on.

Cryptography is the one area of security that we can get right. We already have good encryption algorithms, good authentication algorithms and good key-agreement protocols. Maybe quantum cryptography can make that link stronger, but why would anyone bother? There are far more serious security problems to worry about, and it makes much more sense to spend effort securing those.

As I’ve often said, it’s like defending yourself against an approaching attacker by putting a huge stake in the ground. It’s useless to argue about whether the stake should be 50 feet tall or 100 feet tall, because either way, the attacker is going to go around it. Even quantum cryptography doesn’t “solve” all of cryptography: The keys are exchanged with photons, but a conventional mathematical algorithm takes over for the actual encryption.

I’m always in favor of security research, and I have enjoyed following the developments in quantum cryptography. But as a product, it has no future. It’s not that quantum cryptography might be insecure; it’s that cryptography is already sufficiently secure.

This essay previously appeared on Wired.com.

EDITED TO ADD (10/21): It’s amazing; even reporters responding to my essay get it completely wrong:

Keith Harrison, a cryptographer with HP Laboratories, is quoted by the Telegraph as saying that, as quantum computing becomes commonplace, hackers will use the technology to crack conventional encryption.

“We have to be thinking about solutions to the problems that quantum computing will pose,” he told the Telegraph. “The average consumer is going to want to know their own transactions and daily business is secure.

“One way of doing this is to use a one time pad, essentially lists of random numbers, where one copy of the numbers is held by the person sending the information and an identical copy is held by the person receiving the information. These are completely unbreakable when used properly,” he explained.

The critical feature of quantum computing is the unique fact that, if someone tampers with an information feed between two parties, then the nature of the quantum feed changes.

This makes eavesdropping impossible.

No, it wouldn’t make eavesdropping impossible. It would make eavesdropping on the communications channel impossible unless someone made an implementation error. (In the 1940s, the NSA broke Soviet one-time-pad systems because the Soviets reused the pad.) Eavesdropping via spyware or Trojan or TEMPEST would still be possible.
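Pad reuse is fatal because the pad cancels out of XORed ciphertexts. A minimal sketch, with hypothetical sixteen-byte messages:

    # Reusing a one-time pad leaks the XOR of the two plaintexts, which
    # is typically enough to recover both. The messages here are made up.
    import os

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    pad = os.urandom(16)
    c1 = xor(b"ATTACK AT DAWN!!", pad)  # two messages encrypted
    c2 = xor(b"RETREAT AT DUSK!", pad)  # with the same pad
    leak = xor(c1, c2)                  # the pad cancels out entirely
    assert leak == xor(b"ATTACK AT DAWN!!", b"RETREAT AT DUSK!")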

EDITED TO ADD (10/26): Here’s another commenter who gets it wrong:

Now let me get this straight: I have no doubt that there are many greater worries in security than “mathematical cryptography.” But does this justify totally ignoring the possibility that a cryptographic system might possibly be breakable? I mean maybe I’m influenced by this in the fact that I’ve been sitting in on a cryptanalysis course and I just met a graduate student who broke a cryptographic pseudorandom number generator, but really what kind of an argument is this? “Um, well, sometimes our cryptographic systems have been broken, but that’s nothing to worry about, because, you know, everything is kosher with the systems we are using.”

The point isn’t to ignore the possibility that a cryptographic system might possibly be broken; the point is to pay attention to the other parts of the system that are much much more likely to be already broken. Security is a chain; it’s only as secure as the weakest link. The cryptographic systems, as potentially flawed as they are, are the strongest link in the chain. We’d get a lot more security devoting our resources to making all those weaker links more secure.

Again, this is not to say that quantum cryptography isn’t incredibly cool research. It is, and I hope it continues to receive all sorts of funding. But for an operational network that is worried about security: you’ve got much bigger worries than whether Diffie-Hellman will be broken someday.

Posted on October 21, 2008 at 6:48 AM

The Psychology of Con Men

Interesting:

My all-time favourite [short con] only makes the con artist a few dollars every time he does it, but I absolutely love it. These guys used to go door-to-door in the 1970s selling lightbulbs and they would offer to replace every single lightbulb in your house, so all your old lightbulbs would be replaced with a brand new lightbulb, and it would cost you, say $5, so a fraction of the cost of what new lightbulbs would cost. So the man comes in, he replaces each lightbulb, every single one in the house, and does it, you can check, and they all work, and then he takes all the lightbulbs that he’s just taken from the person’s house, goes next door and then sells them the same lightbulbs again. So it’s really just moving lightbulbs from one house to another and charging people a fee to do it.

Posted on October 20, 2008 at 5:57 AM

Me Helping Evade Airport Security

Great article from The Atlantic:

As we stood at an airport Starbucks, Schneier spread before me a batch of fabricated boarding passes for Northwest Airlines flight 1714, scheduled to depart at 2:20 p.m. and arrive at Reagan National at 5:47 p.m. He had taken the liberty of upgrading us to first class, and had even granted me “Platinum/Elite Plus” status, which was gracious of him. This status would allow us to skip the ranks of hoi-polloi flyers and join the expedited line, which is my preference, because those knotty, teeming security lines are the most dangerous places in airports: terrorists could paralyze U.S. aviation merely by detonating a bomb at any security checkpoint, all of which are, of course, entirely unsecured. (I once asked Michael Chertoff, the secretary of Homeland Security, about this. “We actually ultimately do have a vision of trying to move the security checkpoint away from the gate, deeper into the airport itself, but there’s always going to be some place that people congregate. So if you’re asking me, is there any way to protect against a person taking a bomb into a crowded location and blowing it up, the answer is no.”)

Schneier and I walked to the security checkpoint. “Counterterrorism in the airport is a show designed to make people feel better,” he said. “Only two things have made flying safer: the reinforcement of cockpit doors, and the fact that passengers know now to resist hijackers.” This assumes, of course, that al-Qaeda will target airplanes for hijacking, or target aviation at all. “We defend against what the terrorists did last week,” Schneier said. He believes that the country would be just as safe as it is today if airport security were rolled back to pre-9/11 levels. “Spend the rest of your money on intelligence, investigations, and emergency response.”

Schneier and I joined the line with our ersatz boarding passes. “Technically we could get arrested for this,” he said, but we judged the risk to be acceptable. We handed our boarding passes and IDs to the security officer, who inspected our driver’s licenses through a loupe, one of those magnifying-glass devices jewelers use for minute examinations of fine detail. This was the moment of maximum peril, not because the boarding passes were flawed, but because the TSA now trains its officers in the science of behavior detection. The SPOT program—Screening of Passengers by Observation Techniques—was based in part on the work of a psychologist who believes that involuntary facial-muscle movements, including the most fleeting “micro-expressions,” can betray lying or criminality. The training program for behavior-detection officers is one week long. Our facial muscles did not cooperate with the SPOT program, apparently, because the officer chicken-scratched onto our boarding passes what might have been his signature, or the number 4, or the letter y. We took our shoes off and placed our laptops in bins. Schneier took from his bag a 12-ounce container labeled “saline solution.”

“It’s allowed,” he said. Medical supplies, such as saline solution for contact-lens cleaning, don’t fall under the TSA’s three-ounce rule.

“What’s allowed?” I asked. “Saline solution, or bottles labeled saline solution?”

“Bottles labeled saline solution. They won’t check what’s in it, trust me.”

They did not check. As we gathered our belongings, Schneier held up the bottle and said to the nearest security officer, “This is okay, right?” “Yep,” the officer said. “Just have to put it in the tray.”

“Maybe if you lit it on fire, he’d pay attention,” I said, risking arrest for making a joke at airport security. (Later, Schneier would carry two bottles labeled saline solution—24 ounces in total—through security. An officer asked him why he needed two bottles. “Two eyes,” he said. He was allowed to keep the bottles.)

Posted on October 16, 2008 at 4:32 PM

Designing a Malicious Processor

From the LEET ’08 conference: “Designing and implementing malicious hardware,” by Samuel T. King, Joseph Tucek, Anthony Cozzie, Chris Grier, Weihang Jiang, and Yuanyuan Zhou.

Abstract:

Hidden malicious circuits provide an attacker with a stealthy attack vector. As they occupy a layer below the entire software stack, malicious circuits can bypass traditional defensive techniques. Yet current work on trojan circuits considers only simple attacks against the hardware itself, and straightforward defenses. More complex designs that attack the software are unexplored, as are the countermeasures an attacker may take to bypass proposed defenses.

We present the design and implementation of Illinois Malicious Processors (IMPs). There is a substantial design space in malicious circuitry; we show that an attacker, rather than designing one specific attack, can instead design hardware to support attacks. Such flexible hardware allows powerful, general purpose attacks, while remaining surprisingly low in the amount of additional hardware. We show two such hardware designs, and implement them in a real system. Further, we show three powerful attacks using this hardware, including a login backdoor that gives an attacker complete and high-level access to the machine. This login attack requires only 1341 additional gates: gates that can be used for other attacks as well. Malicious processors are more practical, more flexible, and harder to detect than an initial analysis would suggest.

Posted on October 16, 2008 at 12:39 PM

How to Write Injection-Proof SQL

It’s about time someone wrote this paper:

ABSTRACT

Googling for “SQL injection” gets about 4 million hits. The topic excites interest and superstitious fear. This whitepaper demystifies the topic and explains a straightforward approach to writing database PL/SQL programs that provably guarantees their immunity to SQL injection.

Only when a PL/SQL subprogram executes SQL that it creates at run time is there a risk of SQL injection; and you’ll see that it’s easier than you might think to freeze the SQL at PL/SQL compile time. Then you’ll understand that you need the rules which prevent the risk only for the rare scenarios that do require run-time-created SQL. It turns out that these rules are simple to state and easy to follow.
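The paper's examples are Oracle PL/SQL, but the "freeze the SQL" idea is the same everywhere: fix the statement text when you write the code, and pass user data only through bind variables. A sketch of the contrast in Python's sqlite3, with an invented table and inputs:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    attacker_input = "alice' OR '1'='1"

    # Vulnerable: the SQL text is assembled at run time from user input,
    # so the input can rewrite the statement itself.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % attacker_input).fetchall()
    print(rows)  # [('admin',)] -- the injected OR clause matched every row

    # Safe: the SQL text is frozen; the input travels as a bind variable
    # and can only ever be compared as data.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (attacker_input,)).fetchall()
    print(rows)  # [] -- no user is literally named "alice' OR '1'='1"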

EDITED TO ADD (10/26): Never mind; this seems to be a self-serving marketing piece.

Posted on October 16, 2008 at 5:56 AM

NSA's Warrantless Eavesdropping Targets Innocent Americans

Remember when the U.S. government said it was only spying on terrorists? Anyone with any common sense knew it was lying—power without oversight is always abused—but even I didn’t think it was this bad:

Faulk says he and others in his section of the NSA facility at Fort Gordon routinely shared salacious or tantalizing phone calls that had been intercepted, alerting office mates to certain time codes of “cuts” that were available on each operator’s computer.

“Hey, check this out,” Faulk says he would be told, “there’s good phone sex or there’s some pillow talk, pull up this call, it’s really funny, go check it out. It would be some colonel making pillow talk and we would say, ‘Wow, this was crazy’,” Faulk told ABC News.

Warrants are a security device. They protect us against government abuse of power.

Posted on October 15, 2008 at 12:39 PM

Terrorist Fear Mongering Seems to be Working Less Well

BART, the San Francisco subway authority, has been debating allowing passengers to bring drinks on trains. There are all sorts of good reasons why or why not—convenience, problems with spills, and so on—but one reason that makes no sense is that terrorists may bring flammable liquids on board. Yet that is exactly what BART managers said.

No big news—we’ve seen stupid things like this regularly since 9/11—but this time people responded:

Added Director Tom Radulovich, “If somebody wants to break the law and bring flammable liquids on, they can. It’s not like al Qaeda is waiting in their caves for us to have a sippy-cup rule.”

Directing his comments to BART administrators, he said, “You know, it’s just fearmongering and you should be ashamed.”

Posted on October 15, 2008 at 7:07 AM

New Chip-and-Pin Scam in the UK

The readers were hacked when they were built, “either during the manufacturing process at a factory in China, or shortly after they came off the production line.” It’s being called a “supply chain hack.”

Sophisticated stuff, and yet another demonstration that these all-computer security systems are full of risks.

BTW, what’s it worth to rig an election?

Posted on October 14, 2008 at 1:44 PM

Does Risk Management Make Sense?

We engage in risk management all the time, but it only makes sense if we do it right.

“Risk management” is just a fancy term for the cost-benefit tradeoff associated with any security decision. It’s what we do when we react to fear, or try to make ourselves feel secure. It’s the fight-or-flight reflex that evolved in primitive fish and remains in all vertebrates. It’s instinctual, intuitive and fundamental to life, and one of the brain’s primary functions.

Some have hypothesized that humans have a “risk thermostat” that tries to maintain some optimal risk level. It explains why we drive our motorcycles faster when we wear a helmet, or are more likely to take up smoking during wartime. It’s our natural risk management in action.

The problem is our brains are intuitively suited to the sorts of risk management decisions endemic to living in small family groups in the East African highlands in 100,000 BC, and not to living in the New York City of 2008. We make systematic risk management mistakes—miscalculating the probability of rare events, reacting more to stories than data, responding to the feeling of security rather than reality, and making decisions based on irrelevant context. And that risk thermostat of ours? It’s not nearly as finely tuned as we might like it to be.

Like a rabbit that responds to an oncoming car with its default predator avoidance behavior—dart left, dart right, dart left, and at the last moment jump—instead of just getting out of the way, our Stone Age intuition doesn’t serve us well in a modern technological society. So when we in the security industry use the term “risk management,” we don’t want you to do it by trusting your gut. We want you to do risk management consciously and intelligently, to analyze the tradeoff and make the best decision.

This means balancing the costs and benefits of any security decision—buying and installing a new technology, implementing a new procedure or forgoing a common precaution. It means allocating a security budget to mitigate different risks by different amounts. It means buying insurance to transfer some risks to others. It’s what businesses do, all the time, about everything. IT security has its own risk management decisions, based on the threats and the technologies.

There’s never just one risk, of course, and bad risk management decisions often carry an underlying tradeoff. Terrorism policy in the U.S. is based more on politics than actual security risk, but the politicians who make these decisions are concerned about the risks of not being re-elected.

Many corporate security decisions are made to mitigate the risk of lawsuits rather than address the risk of any actual security breach. And individuals make risk management decisions that consider not only the risks to the corporation, but the risks to their departments’ budgets, and to their careers.

You can’t completely remove emotion from risk management decisions, but the best way to keep risk management focused on the data is to formalize the methodology. That’s what companies that manage risk for a living—insurance companies, financial trading firms and arbitrageurs—try to do. They try to replace intuition with models, and hunches with mathematics.

The problem in the security world is we often lack the data to do risk management well. Technological risks are complicated and subtle. We don’t know how well our network security will keep the bad guys out, and we don’t know the cost to the company if we don’t keep them out. And the risks change all the time, making the calculations even harder. But this doesn’t mean we shouldn’t try.

You can’t avoid risk management; it’s fundamental to business just as to life. The question is whether you’re going to try to use data or whether you’re going to just react based on emotions, hunches and anecdotes.

This essay appeared as the first half of a point-counterpoint with Marcus Ranum in Information Security magazine.

Posted on October 14, 2008 at 1:25 PM

Speeding up WiFi Hacking with Hardware Accelerators

Elcomsoft is claiming that the WPA protocol is dead, just because they can speed up brute-force cracking by 100 times using a hardware accelerator. Why exactly is this news? Yes, weak passwords are weak—we already know that. And strong WPA passwords are still strong. This seems like yet another blatant attempt to grab some press attention with a half-baked cryptanalytic result.
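For perspective: a brute-force speedup of k shaves only log2(k) bits off a passphrase's effective strength,

    \[
      \log_2 100 \approx 6.6 \text{ bits}, \qquad
      \log_2 95 \approx 6.6 \text{ bits per random printable character}
    \]

so a 100-times accelerator costs a strong random passphrase about one character's worth of entropy. It only matters against passwords that were crackable anyway.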

Posted on October 14, 2008 at 6:25 AM

Clever Counterterrorism Tactic

Used against the IRA:

One of the most interesting operations was the laundry mat [sic]. Having lost many troops and civilians to bombings, the Brits decided they needed to determine who was making the bombs and where they were being manufactured. One bright fellow recommended they operate a laundry and when asked “what the hell he was talking about,” he explained the plan and it was incorporated—to much success.

The plan was simple: Build a laundry and staff it with locals and a few of their own. The laundry would then send out “color coded” special discount tickets, to the effect of “get two loads for the price of one,” etc. The color coding was matched to specific streets and thus when someone brought in their laundry, it was easy to determine the general location from which a city map was coded.

While the laundry was indeed being washed, pressed and dry cleaned, it had one additional cycle—every garment, sheet, glove, pair of pants, was first sent through an analyzer, located in the basement, that checked for bomb-making residue. The analyzer was disguised as just another piece of the laundry equipment; good OPSEC [operational security]. Within a few weeks, multiple positives had shown up, indicating the ingredients of bomb residue, and intelligence had determined which areas of the city were involved. To narrow their target list, [the laundry] simply sent out more specific coupons [numbered] to all houses in the area, and before long they had good addresses. After confirming addresses, authorities with the SAS teams swooped down on the multiple homes and arrested multiple personnel and confiscated numerous assembled bombs, weapons and ingredients. During the entire operation, no one was injured or killed.

Posted on October 13, 2008 at 1:22 PM

Threat Modeling at Microsoft

Interesting paper by Adam Shostack:

Abstract. Describes a decade of experience threat modeling products and services at Microsoft. Describes the current threat modeling methodology used in the Security Development Lifecycle. The methodology is a practical approach, usable by non-experts, centered on data flow diagrams and a threat enumeration technique of ‘STRIDE per element.’ The paper covers some lessons learned which are likely applicable to other security analysis techniques. The paper closes with some possible questions for academic research.

Posted on October 13, 2008 at 6:21 AM

Friday Squid Blogging: Natural Squid Steganography

Squid can communicate with each other without any other fish noticing:

Squid and their relatives have eyes that are sensitive to polarised light, and they are known to use it to signal to one another. Their predators on the other hand, like seals or whales, don’t share this ability and cannot see the squids’ signals.

Most of all, the polarised iridescent light is not affected by the chromatophores and passes through unaltered. This means that camouflaged squid can have entire visual conversations while remaining invisible to passing predators. In the world of squid, conversations carry secrets wrapped in lies.

Posted on October 10, 2008 at 4:58 PM

The More Things Change, the More They Stay the Same

Guess the year:

Murderous organizations have increased in size and scope; they are more daring, they are served by the most terrible weapons offered by modern science, and the world is nowadays threatened by new forces which, if recklessly unchained, may some day wreak universal destruction. The Orsini bombs were mere children’s toys compared with the later developments of infernal machines. Between 1858 and 1898 the dastardly science of destruction had made rapid and alarming strides…

No, that wasn’t a typo. “Between 1858 and 1898….” This quote is from Major Arthur Griffith, Mysteries of Police and Crime, London, 1898, II, p. 469. It’s quoted in: Walter Laqueur, A History of Terrorism, New Brunswick/London, Transaction Publishers, 2002.

Posted on October 10, 2008 at 12:30 PM

Data Mining for Terrorists Doesn't Work

According to a massive report from the National Research Council, data mining for terrorists doesn’t work. Here’s a good summary:

The report was written by a committee whose members include William Perry, a professor at Stanford University; Charles Vest, the former president of MIT; W. Earl Boebert, a retired senior scientist at Sandia National Laboratories; Cynthia Dwork of Microsoft Research; R. Gil Kerlikowske, Seattle’s police chief; and Daryl Pregibon, a research scientist at Google.

They admit that far more Americans live their lives online, using everything from VoIP phones to Facebook to RFID tags in automobiles, than a decade ago, and the databases created by those activities are tempting targets for federal agencies. And they draw a distinction between subject-based data mining (starting with one individual and looking for connections) compared with pattern-based data mining (looking for anomalous activities that could show illegal activities).

But the authors conclude the type of data mining that government bureaucrats would like to do—perhaps inspired by watching too many episodes of the Fox series 24—can’t work. “If it were possible to automatically find the digital tracks of terrorists and automatically monitor only the communications of terrorists, public policy choices in this domain would be much simpler. But it is not possible to do so.”

A summary of the recommendations:

  • U.S. government agencies should be required to follow a systematic process to evaluate the effectiveness, lawfulness, and consistency with U.S. values of every information-based program, whether classified or unclassified, for detecting and countering terrorists before it can be deployed, and periodically thereafter.
  • Periodically after a program has been operationally deployed, and in particular before a program enters a new phase in its life cycle, policy makers should (carefully review) the program before allowing it to continue operations or to proceed to the next phase.
  • To protect the privacy of innocent people, the research and development of any information-based counterterrorism program should be conducted with synthetic population data… At all stages of a phased deployment, data about individuals should be rigorously subjected to the full safeguards of the framework.
  • Any information-based counterterrorism program of the U.S. government should be subjected to robust, independent oversight of the operations of that program, a part of which would entail a practice of using the same data mining technologies to “mine the miners and track the trackers.”
  • Counterterrorism programs should provide meaningful redress to any individuals inappropriately harmed by their operation.
  • The U.S. government should periodically review the nation’s laws, policies, and procedures that protect individuals’ private information for relevance and effectiveness in light of changing technologies and circumstances. In particular, Congress should re-examine existing law to consider how privacy should be protected in the context of information-based programs (e.g., data mining) for counterterrorism.

Here are more news articles on the report. I explained why data mining wouldn’t find terrorists back in 2005.

EDITED TO ADD (10/10): More commentary:

As the NRC report points out, not only is the training data lacking, but the input data that you’d actually be mining has been purposely corrupted by the terrorists themselves. Terrorist plotters actively disguise their activities using operational security measures (opsec) like code words, encryption, and other forms of covert communication. So, even if we had access to a copious and pristine body of training data that we could use to generalize about the “typical terrorist,” the new data that’s coming into the data mining system is suspect.

To return to the credit reporting analogy, credit scores would be worthless to lenders if everyone could manipulate their credit history (e.g., hide past delinquencies) the way that terrorists can manipulate the data trails that they leave as they buy gas, enter buildings, make phone calls, surf the Internet, etc.

So this application of data mining bumps up against the classic GIGO (garbage in, garbage out) problem in computing, with the terrorists deliberately feeding the system garbage. What this means in real-world terms is that the success of our counter-terrorism data mining efforts is completely dependent on the failure of terrorist cells to maintain operational security.

The combination of the GIGO problem and the lack of suitable training data combine to make big investments in automated terrorist identification a futile and wasteful effort. Furthermore, these two problems are structural, so they’re not going away. All legitimate concerns about false positives and corrosive effects on civil liberties aside, data mining will never give authorities the ability to identify terrorists or terrorist networks with any degree of confidence.
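And setting the false-positive concerns "aside" undersells them; the base-rate problem alone is crippling. A sketch with invented but deliberately generous numbers:

    # Base-rate sketch: even an implausibly accurate detector drowns
    # investigators in false positives when the target population is
    # tiny. Every number below is invented for illustration.

    population  = 300_000_000  # roughly the U.S. population in 2008
    terrorists  = 1_000        # a wildly pessimistic assumption
    hit_rate    = 0.99         # P(flagged | terrorist)
    false_alarm = 0.001        # P(flagged | innocent): 99.9% specificity

    true_hits  = terrorists * hit_rate
    false_hits = (population - terrorists) * false_alarm

    print(f"{false_hits:,.0f} innocents flagged")            # ~300,000
    print(f"P(terrorist | flagged) = "
          f"{true_hits / (true_hits + false_hits):.4f}")     # ~0.0033

Even with detector accuracy no real system comes close to, only about one flagged person in three hundred would actually be a terrorist.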

Posted on October 10, 2008 at 6:35 AM

Nonviolent Activists Are Now Terrorists

Heard about this:

The Maryland State Police classified 53 nonviolent activists as terrorists and entered their names and personal information into state and federal databases that track terrorism suspects, the state police chief acknowledged yesterday.

Why did they do that?

Both Hutchins and Sheridan said the activists’ names were entered into the state police database as terrorists partly because the software offered limited options for classifying entries.

I know we once had an “either you’re with us or you’re with the terrorists” mentality, but don’t you think that—just maybe—the software should allow for a little more nuance?

Posted on October 9, 2008 at 1:07 PM54 Comments

"New Attack" Against Encrypted Images

In a blatant attempt to get some PR:

In a new paper, Bernd Roellgen of Munich-based encryption outfit PMC Ciphers, explains how it is possible to compare an encrypted backup image file made with almost any commercial encryption program or algorithm to an original that has subsequently changed so that small but telling quantities of data ‘leaks’.

Here’s the paper. Turns out that if you use a block cipher in Electronic Codebook Mode, identical plaintexts encrypt to identical ciphertexts.

Yeah, we already knew that.
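If you want to see the leak for yourself, here is a minimal demonstration, assuming the third-party pycryptodome package; the key and plaintext are throwaway demo values:

    from Crypto.Cipher import AES  # pip install pycryptodome

    key = b"0123456789abcdef"    # throwaway 16-byte demo key
    block = b"ATTACK AT DAWN!!"  # exactly one 16-byte AES block

    # ECB encrypts every block independently and deterministically...
    ciphertext = AES.new(key, AES.MODE_ECB).encrypt(block + block)

    # ...so identical plaintext blocks produce identical ciphertext
    # blocks, visible to anyone holding the ciphertext, no key needed.
    assert ciphertext[:16] == ciphertext[16:]

This is exactly why well-designed systems use a chaining or counter mode with a fresh IV rather than ECB.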

And -1 point for a security company requiring the use of JavaScript, and for not failing gracefully in a browser that doesn’t have it enabled.

And—ahem—what is it with that photograph in the paper? Couldn’t the researchers have found something a little less adolescent?

For the record, I doghoused PMC Ciphers back in 2003:

PMC Ciphers. The theory description is so filled with pseudo-cryptography that it’s funny to read. Hypotheses are presented as conclusions. Current research is misstated or ignored. The first link is a technical paper with four references, three of them written before 1975. Who needs thirty years of cryptographic research when you have polymorphic cipher theory?

EDITED TO ADD (10/9): I didn’t realize it, but last year PMC Ciphers responded to my doghousing them. Funny stuff.

EDITED TO ADD (10/10): Three new commenters using dialups at the same German ISP have shown up here to defend the paper. What are the odds?

Posted on October 9, 2008 at 6:44 AM64 Comments

Do-Not-Call Lists

Turns out you can add anyone’s number to—or remove anyone’s number from—the Canadian do-not-call list. You can also add (but not remove) numbers to the U.S. do-not-call list, though only up to three at a time, and you have to provide a valid e-mail address to confirm the addition.

Here’s my idea. If you’re a company, add every one of your customers to the list. That way, none of your competitors will be able to cold call them.

Posted on October 7, 2008 at 3:51 PM42 Comments

The Seven Habits of Highly Ineffective Terrorists

Most counterterrorism policies fail, not because of tactical problems, but because of a fundamental misunderstanding of what motivates terrorists in the first place. If we’re ever going to defeat terrorism, we need to understand what drives people to become terrorists.

Conventional wisdom holds that terrorism is inherently political, and that people become terrorists for political reasons. This is the “strategic” model of terrorism, and it’s basically an economic model. It posits that people resort to terrorism when they believe—rightly or wrongly—that terrorism is worth it; that is, when they believe the political gains of terrorism minus the political costs are greater than if they engaged in some other, more peaceful form of protest. It’s assumed, for example, that people join Hamas to achieve a Palestinian state; that people join the PKK to attain a Kurdish national homeland; and that people join al-Qaida to, among other things, get the United States out of the Persian Gulf.

If you believe this model, the way to fight terrorism is to change that equation, and that’s what most experts advocate. Governments tend to minimize the political gains of terrorism through a no-concessions policy; the international community tends to recommend reducing the political grievances of terrorists via appeasement, in hopes of getting them to renounce violence. Both advocate policies to provide effective nonviolent alternatives, like free elections.

Historically, none of these solutions has worked with any regularity. Max Abrahms, a predoctoral fellow at Stanford University’s Center for International Security and Cooperation, has studied dozens of terrorist groups from all over the world. He argues that the model is wrong. In a paper published this year in International Security that—sadly—doesn’t have the title “Seven Habits of Highly Ineffective Terrorists,” he discusses, well, seven habits of highly ineffective terrorists. These seven tendencies are seen in terrorist organizations all over the world, and they directly contradict the theory that terrorists are political maximizers:

Terrorists, he writes, (1) attack civilians, a policy that has a lousy track record of convincing those civilians to give the terrorists what they want; (2) treat terrorism as a first resort, not a last resort, failing to embrace nonviolent alternatives like elections; (3) don’t compromise with their target country, even when those compromises are in their best interest politically; (4) have protean political platforms, which regularly, and sometimes radically, change; (5) often engage in anonymous attacks, which precludes the target countries making political concessions to them; (6) regularly attack other terrorist groups with the same political platform; and (7) resist disbanding, even when they consistently fail to achieve their political objectives or when their stated political objectives have been achieved.

Abrahms has an alternative model to explain all this: People turn to terrorism for social solidarity. He theorizes that people join terrorist organizations worldwide in order to be part of a community, much like the reason inner-city youths join gangs in the United States.

The evidence supports this. Individual terrorists often have no prior involvement with a group’s political agenda, and often join multiple terrorist groups with incompatible platforms. Individuals who join terrorist groups are frequently not oppressed in any way, and often can’t describe the political goals of their organizations. People who join terrorist groups most often have friends or relatives who are members of the group, and the great majority of terrorists are socially isolated: unmarried young men or widowed women who weren’t working prior to joining. These things are true for members of terrorist groups as diverse as the IRA and al-Qaida.

For example, several of the 9/11 hijackers planned to fight in Chechnya, but they didn’t have the right paperwork so they attacked America instead. The mujahedeen had no idea whom they would attack after the Soviets withdrew from Afghanistan, so they sat around until they came up with a new enemy: America. Pakistani terrorists regularly defect to another terrorist group with a totally different political platform. Many new al-Qaida members say, unconvincingly, that they decided to become a jihadist after reading an extreme, anti-American blog, or after converting to Islam, sometimes just a few weeks before. These people know little about politics or Islam, and they frankly don’t even seem to care much about learning more. The blogs they turn to don’t have a lot of substance in these areas, even though more informative blogs do exist.

All of this explains the seven habits. It’s not that they’re ineffective; it’s that they have a different goal. They might not be effective politically, but they are effective socially: They all help preserve the group’s existence and cohesion.

This kind of analysis isn’t just theoretical; it has practical implications for counterterrorism. Not only can we now better understand who is likely to become a terrorist, but we can also engage in strategies specifically designed to weaken the social bonds within terrorist organizations. Driving a wedge between group members—commuting prison sentences in exchange for actionable intelligence, planting more double agents within terrorist groups—will go a long way toward weakening those bonds.

We also need to pay more attention to the socially marginalized, like unassimilated communities in Western countries, than to the politically downtrodden. We need to support vibrant, benign communities and organizations as alternative ways for potential terrorists to get the social cohesion they need. And finally, we need to minimize collateral damage in our counterterrorism operations, and to clamp down on bigotry and hate crimes, both of which just create more dislocation, social isolation, and the inevitable calls for revenge.

This essay previously appeared on Wired.com.

EDITED TO ADD (10/9): Interesting rebuttal.

Posted on October 7, 2008 at 5:48 AM90 Comments

Clickjacking

Good Q&A on clickjacking:

In plain English, clickjacking lets hackers and scammers hide malicious stuff under the cover of the content on a legitimate site. You know what happens when a carjacker takes a car? Well, clickjacking is like that, except that the click is the car.

“Clickjacking” is a stunningly sexy name, but the vulnerability is really just a variant of cross-site scripting. We don’t know how bad it really is, because the details are still being withheld. But the name alone is causing dread.
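Based on the public description, a rough sketch of the idea might look like this; the page, URL, and layout below are invented for illustration and may not reflect the withheld details. The attacker layers a transparent iframe of the target site over bait content, so a click on the bait actually lands inside the frame:

    # Hypothetical attacker page: bait content underneath, and an
    # invisible but still clickable iframe of the target site on top.
    attacker_page = """
    <html><body>
      <button style="position:absolute; top:120px; left:120px;">
        Click to win a prize!
      </button>
      <iframe src="https://bank.example/transfer?to=attacker"
              style="position:absolute; top:0; left:0;
                     width:100%; height:100%;
                     opacity:0; z-index:2;"></iframe>
    </body></html>
    """
    # In a real attack the frame would be positioned so a sensitive
    # button sits exactly under the bait; the whole trick is CSS layering.
    print(attacker_page)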

EDITED TO ADD (10/13): More details.

Posted on October 6, 2008 at 1:45 PM27 Comments

New Cross-Site Request Forgery Attacks

Interesting:

CSRF vulnerabilities occur when a website allows an authenticated user to perform a sensitive action but does not verify that the user herself is invoking that action. The key to understanding CSRF attacks is to recognize that websites typically don’t verify that a request came from an authorized user. Instead they verify only that the request came from the browser of an authorized user. Because browsers run code sent by multiple sites, there is a danger that one site will (unbeknownst to the user) send a request to a second site, and the second site will mistakenly think that the user authorized the request.

If a user visits an attacker’s website, the attacker can force the user’s browser to send a request to a page that performs a sensitive action on behalf of the user. The target website sees a request coming from an authenticated user and happily performs some action, whether it was invoked by the user or not. CSRF attacks have been confused with Cross-Site Scripting (XSS) attacks, but they are very different. A site completely protected from XSS is still vulnerable to CSRF attacks if no protections are taken.

Paper here.
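The standard countermeasure is to tie each state-changing request to a secret token that another site’s page cannot read. Here is a minimal sketch in plain Python, standard library only; the function names and session handling are illustrative, not from any particular framework:

    import hashlib, hmac, secrets

    SERVER_SECRET = secrets.token_bytes(32)  # never leaves the server

    def make_csrf_token(session_id: str) -> str:
        # Derive a per-session token; the site embeds it in its own forms.
        return hmac.new(SERVER_SECRET, session_id.encode(),
                        hashlib.sha256).hexdigest()

    def is_request_authorized(session_id: str, submitted_token: str) -> bool:
        # A forged cross-site request rides on the victim's cookie, but the
        # attacker can't read the matching token, so it fails this check.
        return hmac.compare_digest(make_csrf_token(session_id),
                                   submitted_token)

    sid = "session-abc123"
    assert is_request_authorized(sid, make_csrf_token(sid))  # genuine form post
    assert not is_request_authorized(sid, "forged-value")    # cross-site forgery

This works because the browser attaches cookies to cross-site requests automatically, but the same-origin policy prevents the attacker’s page from reading the token embedded in the legitimate site’s forms.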

Posted on October 6, 2008 at 5:42 AM27 Comments

Taleb on the Limitations of Risk Management

Nice paragraph on the limitations of risk management in this occasionally interesting interview with Nassim Nicholas Taleb:

Because then you get a Maginot Line problem. [After World War I, the French erected concrete fortifications to prevent Germany from invading again—a response to the previous war, which proved ineffective for the next one.] You know, they make sure they solve that particular problem, the Germans will not invade from here. The thing you have to be aware of most obviously is scenario planning, because typically if you talk about scenarios, you’ll overestimate the probability of these scenarios. If you examine them at the expense of those you don’t examine, sometimes it has left a lot of people worse off, so scenario planning can be bad. I’ll just take my track record. Those who did scenario planning have not fared better than those who did not do scenario planning. A lot of people have done some kind of “make-sense” type measures, and that has made them more vulnerable because they give the illusion of having done your job. This is the problem with risk management. I always come back to a classical question. Don’t give a fool the illusion of risk management. Don’t ask someone to guess the number of dentists in Manhattan after asking him the last four digits of his Social Security number. The numbers will always be correlated. I actually did some work on risk management, to show how stupid we are when it comes to risk.

Posted on October 3, 2008 at 7:48 AM33 Comments

Bank Robber Hires Accomplices on Craigslist

Now this is clever:

“I came across the ad that was for a prevailing wage job for $28.50 an hour,” said Mike, who saw a Craigslist ad last week looking for workers for a road maintenance project in Monroe.

He said he inquired and was e-mailed back with instructions to meet near the Bank of America in Monroe at 11 a.m. Tuesday. He also was told to wear certain work clothing.

“Yellow vest, safety goggles, a respirator mask…and, if possible, a blue shirt,” he said.

Mike showed up along with about a dozen other men dressed like him, but there was no contractor and no road work to be done. He thought they had been stood up until he heard about the bank robbery and the suspect who wore the same attire.

EDITED TO ADD (11/7): He was arrested.

Posted on October 2, 2008 at 12:18 PM

"Scareware" Vendors Sued

This is good:

Microsoft Corp. and the state of Washington this week filed lawsuits against a slew of “scareware” purveyors, scam artists who use fake security alerts to frighten consumers into paying for worthless computer security software.

The case filed by the Washington attorney general’s office names Texas-based Branch Software and its owner James Reed McCreary IV, alleging that McCreary’s company caused targeted PCs to pop up misleading security alerts about security threats on the victims’ computers. The alerts warned users that their systems were “damaged and corrupted” and instructed them to visit a Web site to purchase a copy of Registry Cleaner XP for $39.95.

I would have thought that existing scam laws would be enough, but Washington state actually has a specific law about this sort of thing:

The lawsuits were filed under Washington’s Computer Spyware Act, which among other things punishes individuals who prey on user concerns regarding spyware or other threats. Specifically, the law makes it illegal to misrepresent the extent to which software is required for computer security or privacy, and it provides actual damages or statutory damages of $100,000 per violation, whichever is greater.

Posted on October 2, 2008 at 7:03 AM27 Comments

MI6 Camera—Including Secrets—Sold on eBay

I wish I’d known:

A 28-year-old delivery man from the UK who bought a Nikon Coolpix camera for about $31 on eBay got more than he bargained for when the camera arrived with top secret information from the UK’s MI6 organization.

Allegedly sold by one of the clandestine organization’s agents, the camera contained names of al-Qaeda cells, images of suspected terrorists and weapons, fingerprint information, and log-in details for the Secret Service’s computer network, along with a “Top Secret” marking.

He turned the camera in to the police.

Posted on October 1, 2008 at 1:59 PM44 Comments

Hand Grenades as Weapons of Mass Destruction

I get that this is terrorism:

A 24-year-old convert to Islam has been sentenced to 35 years in prison for plotting to set off hand grenades in a crowded shopping mall during the Christmas season.

But I thought “weapons of mass destruction” was reserved for nuclear, chemical, and biological weapons.

He was arrested in 2006 on charges of scheming to use weapons of mass destruction at the Cherryvale Mall in the northern Illinois city of Rockford.

Just as we have cheapened the word “terrorism,” we are now cheapening the term “weapons of mass destruction.”

EDITED TO ADD: The link above now leads to a revised story that doesn’t use the term “weapons of mass destruction.” A version that does can still be found here.

Posted on October 1, 2008 at 6:37 AM84 Comments
