Entries Tagged "operational security"

Cell Phone Opsec

Here’s an article on making secret phone calls with cell phones.

The author's step-by-step instructions for making a clandestine phone call are as follows:

  1. Analyze your daily movements, paying special attention to anchor points (basis of operation like home or work) and dormant periods in schedules (8-12 p.m. or when cell phones aren’t changing locations);
  2. Leave your daily cell phone behind during dormant periods and purchase a prepaid no-contract cell phone (“burner phone”);
  3. After storing burner phone in a Faraday bag, activate it using a clean computer connected to a public Wi-Fi network;
  4. Encrypt the cell phone number using a onetime pad (OTP) system and rename an image file with the encrypted code. Using Tor to hide your web traffic, post the image to an agreed upon anonymous Twitter account, which signals a communications request to your partner;
  5. Leave cell phone behind, avoid anchor points, and receive phone call from partner on burner phone at 9:30 p.m. — or another pre-arranged “dormant” time — on the following day;
  6. Wipe down and destroy handset.

    Note that it actually makes sense to use a one-time pad in this instance. The message is a ten-digit number, and a one-time pad is easier, faster, and cleaner than using any computer encryption program.
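
    To make the arithmetic concrete, here is a minimal sketch of a digit-wise one-time pad, the kind of scheme the ten-digit number in step 4 calls for. The pad generation and the sample number are my illustration, not taken from the article:

        # One-time pad over decimal digits: add the pad digit to each message
        # digit mod 10 to encrypt; subtract mod 10 to decrypt. The pad must be
        # truly random, kept secret, and never reused. The phone number below
        # is a made-up example.
        import secrets

        def make_pad(length):
            return ''.join(str(secrets.randbelow(10)) for _ in range(length))

        def encrypt(digits, pad):
            return ''.join(str((int(d) + int(p)) % 10) for d, p in zip(digits, pad))

        def decrypt(digits, pad):
            return ''.join(str((int(d) - int(p)) % 10) for d, p in zip(digits, pad))

        phone = "5551234567"            # hypothetical burner-phone number
        pad = make_pad(len(phone))      # in practice, exchanged in advance on paper
        cipher = encrypt(phone, pad)    # these digits go into the image filename
        assert decrypt(cipher, pad) == phone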

    Posted on April 7, 2015 at 9:27 AM

    NSA Classification ECI = Exceptionally Controlled Information

    ECI is a classification above Top Secret. It’s for things that are so sensitive they’re basically not written down, like the names of companies whose cryptography has been deliberately weakened by the NSA, or the names of agents who have infiltrated foreign IT companies.

    As part of its story on the NSA’s use of agents to infiltrate foreign companies and networks, the Intercept published a list of ECI compartments. It’s just a list of code names and three-letter abbreviations, along with the group inside the NSA that is responsible for them. The descriptions of what they all mean would never be in a computer file, so it’s only of value to those of us who like code names.

    This designation is why there have been no documents in the Snowden archive listing specific company names. They’re all referred to by these ECI code names.

    EDITED TO ADD (11/10): Another compilation of NSA’s organizational structure.

    Posted on October 16, 2014 at 6:22 AM

    Security Tents

    The US government sets up secure tents for the president and other officials to deal with classified material while traveling abroad.

    Even when Obama travels to allied nations, aides quickly set up the security tent — which has opaque sides and noise-making devices inside — in a room near his hotel suite. When the president needs to read a classified document or have a sensitive conversation, he ducks into the tent to shield himself from secret video cameras and listening devices.

    […]

    The rooms, built according to a several-hundred-page classified manual, are lined with foil and soundproofed. An interior location, preferably with no windows, is recommended.

    Posted on November 15, 2013 at 6:28 AM

    Silk Road Author Arrested Due to Bad Operational Security

    Details of how the FBI found the administrator of Silk Road, a popular black market e-commerce site.

    Despite the elaborate technical underpinnings, however, the complaint portrays Ulbricht as a drug lord who made rookie mistakes. In an October 11, 2011 posting to a Bitcoin Talk forum, for instance, a user called “altoid” advertised he was looking for an “IT pro in the Bitcoin community” to work in a venture-backed startup. The post directed applicants to send responses to “rossulbricht at gmail dot com.” It came about nine months after two previous posts — also made by a user, “altoid,” to shroomery.org and Bitcoin Talk — were among the first to advertise a hidden Tor service that operated as a kind of “anonymous amazon.com.” Both of the earlier posts referenced silkroad420.wordpress.com.

    If altoid’s solicitation for a Bitcoin-conversant IT Pro wasn’t enough to make Ulbricht a person of interest in the FBI’s ongoing probe, other digital bread crumbs were sure to arouse agents’ suspicions. The Google+ profile tied to the rossulbricht@gmail.com address included a list of favorite videos originating from mises.org, a website of the “Mises Institute.” The site billed itself as the “world center of the Austrian School of economics” and contained a user profile for one Ross Ulbricht. Several Dread Pirate Roberts postings on Silk Road cited the “Austrian Economic theory” and the works of Mises Institute economists Ludwig von Mises and Murray Rothbard in providing the guiding principles for the illicit drug market.

    The clues didn’t stop there. In early March 2012 someone created an account on StackOverflow with the username Ross Ulbricht and the rossulbricht@gmail.com address, the criminal complaint alleged. On March 16 at 8:39 in the morning, the account was used to post a message titled “How can I connect to a Tor hidden service using curl in php?” Less than one minute later, the account was updated to change the user name from Ross Ulbricht to “frosty.” Several weeks later, the account was again updated, this time to replace the Ulbricht gmail address with frosty@frosty.com. In July 2013, a forensic analysis of the hard drives used to run one of the Silk Road servers revealed a PHP script based on curl that contained code that was identical to that included in the Stack Overflow discussion, the complaint alleged.
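
    For context, the Stack Overflow question is about the standard way to reach a hidden service from a script: route the request through Tor’s local SOCKS proxy, which listens on port 9050 by default. The original used PHP’s curl; a minimal equivalent sketch in Python, with a placeholder .onion address, looks like this:

        # Fetch a page from a Tor hidden service by routing the request through
        # Tor's local SOCKS proxy (127.0.0.1:9050 by default). Requires a running
        # Tor client and the requests library with SOCKS support installed
        # (pip install requests[socks]). The .onion address is a placeholder.
        import requests

        proxies = {
            "http": "socks5h://127.0.0.1:9050",   # socks5h resolves DNS inside Tor,
            "https": "socks5h://127.0.0.1:9050",  # which .onion addresses require
        }

        response = requests.get("http://exampleexample.onion/",
                                proxies=proxies, timeout=60)
        print(response.status_code)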

    We already know that it is next to impossible to maintain privacy and anonymity against a well-funded government adversary.

    EDITED TO ADD (10/8): Another article.

    Posted on October 7, 2013 at 1:35 PM

    How to Remain Secure Against the NSA

    Now that we have enough details about how the NSA eavesdrops on the Internet, including today’s disclosures of the NSA’s deliberate weakening of cryptographic systems, we can finally start to figure out how to protect ourselves.

    For the past two weeks, I have been working with the Guardian on NSA stories, and have read hundreds of top-secret NSA documents provided by whistleblower Edward Snowden. I wasn’t part of today’s story — it was in process well before I showed up — but everything I read confirms what the Guardian is reporting.

    At this point, I feel I can provide some advice for keeping secure against such an adversary.

    The primary way the NSA eavesdrops on Internet communications is in the network. That’s where their capabilities best scale. They have invested in enormous programs to automatically collect and analyze network traffic. Anything that requires them to attack individual endpoint computers is significantly more costly and risky for them, and they will do those things carefully and sparingly.

    Leveraging its secret agreements with telecommunications companies — all the US and UK ones, and many other “partners” around the world — the NSA gets access to the communications trunks that move Internet traffic. In cases where it doesn’t have that sort of friendly access, it does its best to surreptitiously monitor communications channels: tapping undersea cables, intercepting satellite communications, and so on.

    That’s an enormous amount of data, and the NSA has equivalently enormous capabilities to quickly sift through it all, looking for interesting traffic. “Interesting” can be defined in many ways: by the source, the destination, the content, the individuals involved, and so on. This data is funneled into the vast NSA system for future analysis.

    The NSA collects much more metadata about Internet traffic: who is talking to whom, when, how much, and by what mode of communication. Metadata is a lot easier to store and analyze than content. It can be extremely personal to the individual, and is enormously valuable intelligence.

    The Systems Intelligence Directorate is in charge of data collection, and the resources it devotes to this are staggering. I read status report after status report about these programs, discussing capabilities, operational details, planned upgrades, and so on. Each individual problem — recovering electronic signals from fiber, keeping up with the terabyte streams as they go by, filtering out the interesting stuff — has its own group dedicated to solving it. Its reach is global.

    The NSA also attacks network devices directly: routers, switches, firewalls, etc. Most of these devices have surveillance capabilities already built in; the trick is to surreptitiously turn them on. This is an especially fruitful avenue of attack; routers are updated less frequently, tend not to have security software installed on them, and are generally ignored as a vulnerability.

    The NSA also devotes considerable resources to attacking endpoint computers. This kind of thing is done by its TAO — Tailored Access Operations — group. TAO has a menu of exploits it can serve up against your computer — whether you’re running Windows, Mac OS, Linux, iOS, or something else — and a variety of tricks to get them on to your computer. Your anti-virus software won’t detect them, and you’d have trouble finding them even if you knew where to look. These are hacker tools designed by hackers with an essentially unlimited budget. What I took away from reading the Snowden documents was that if the NSA wants in to your computer, it’s in. Period.

    The NSA deals with any encrypted data it encounters more by subverting the underlying cryptography than by leveraging any secret mathematical breakthroughs. First, there’s a lot of bad cryptography out there. If it finds an Internet connection protected by MS-CHAP, for example, that’s easy to break and recover the key. It exploits poorly chosen user passwords, using the same dictionary attacks hackers use in the unclassified world.

    As was revealed today, the NSA also works with security product vendors to ensure that commercial encryption products are broken in secret ways that only it knows about. We know this has happened historically: CryptoAG and Lotus Notes are the most public examples, and there is evidence of a back door in Windows. A few people have told me some recent stories about their experiences, and I plan to write about them soon. Basically, the NSA asks companies to subtly change their products in undetectable ways: making the random number generator less random, leaking the key somehow, adding a common exponent to a public-key exchange protocol, and so on. If the back door is discovered, it’s explained away as a mistake. And as we now know, the NSA has enjoyed enormous success from this program.
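
    To see why “making the random number generator less random” is such an effective back door, consider a toy sketch: a key that looks like 128 random bits but is actually derived from a 20-bit seed can be recovered offline in about a million guesses. The numbers and names here are illustrative, not drawn from any real product:

        # Toy weakened-RNG back door: the key appears to be a normal 128-bit
        # value, but only 20 bits of entropy went into it, so anyone who knows
        # the weakness can brute-force the seed space in seconds.
        import hashlib

        def backdoored_keygen(seed):
            # Expand a 20-bit seed into what looks like a 128-bit key.
            return hashlib.sha256(seed.to_bytes(3, "big")).digest()[:16]

        def recover_key(target):
            # Attacker who knows the weakness: try all 2**20 possible seeds.
            for seed in range(2**20):
                if backdoored_keygen(seed) == target:
                    return seed
            return None

        key = backdoored_keygen(0x5A5A5)
        assert recover_key(key) == 0x5A5A5   # about a million hashes, not 2**128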

    TAO also hacks into computers to recover long-term keys. So if you’re running a VPN that uses a complex shared secret to protect your data and the NSA decides it cares, it might try to steal that secret. This kind of thing is only done against high-value targets.

    How do you communicate securely against such an adversary? Snowden said it in an online Q&A soon after he made his first document public: “Encryption works. Properly implemented strong crypto systems are one of the few things that you can rely on.”

    I believe this is true, despite today’s revelations and tantalizing hints of “groundbreaking cryptanalytic capabilities” made by James Clapper, the director of national intelligence, in another top-secret document. Those capabilities involve deliberately weakening the cryptography.

    Snowden’s follow-on sentence is equally important: “Unfortunately, endpoint security is so terrifically weak that NSA can frequently find ways around it.”

    Endpoint means the software you’re using, the computer you’re using it on, and the local network you’re using it in. If the NSA can modify the encryption algorithm or drop a Trojan on your computer, all the cryptography in the world doesn’t matter at all. If you want to remain secure against the NSA, you need to do your best to ensure that the encryption can operate unimpeded.

    With all this in mind, I have five pieces of advice:

    1. Hide in the network. Implement hidden services. Use Tor to anonymize yourself. Yes, the NSA targets Tor users, but it’s work for them. The less obvious you are, the safer you are.
    2. Encrypt your communications. Use TLS. Use IPsec. Again, while it’s true that the NSA targets encrypted connections — and it may have explicit exploits against these protocols — you’re much better protected than if you communicate in the clear.
    3. Assume that while your computer can be compromised, it would take work and risk on the part of the NSA — so it probably isn’t. If you have something really important, use an air gap. Since I started working with the Snowden documents, I bought a new computer that has never been connected to the Internet. If I want to transfer a file, I encrypt the file on the secure computer and walk it over to my Internet computer, using a USB stick. To decrypt something, I reverse the process. This might not be bulletproof, but it’s pretty good. (A sketch of the encrypt-and-carry step appears after this list.)
    4. Be suspicious of commercial encryption software, especially from large vendors. My guess is that most encryption products from large US companies have NSA-friendly back doors, and many foreign ones probably do as well. It’s prudent to assume that foreign products also have foreign-installed backdoors. Closed-source software is easier for the NSA to backdoor than open-source software. Systems relying on master secrets are vulnerable to the NSA, through either legal or more clandestine means.
    5. Try to use public-domain encryption that has to be compatible with other implementations. For example, it’s harder for the NSA to backdoor TLS than BitLocker, because any vendor’s TLS has to be compatible with every other vendor’s TLS, while BitLocker only has to be compatible with itself, giving the NSA a lot more freedom to make changes. And because BitLocker is proprietary, it’s far less likely those changes will be discovered. Prefer symmetric cryptography over public-key cryptography. Prefer conventional discrete-log-based systems over elliptic-curve systems; the latter have constants that the NSA influences when they can.
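
    As promised in point 3, here is a minimal sketch of the encrypt-and-carry step, using the Python cryptography library’s Fernet recipe for authenticated symmetric encryption. The file names are illustrative, and this is one way to do it, not a recommendation of any specific tool:

        # Air-gap workflow from point 3: encrypt on the offline machine, carry
        # the ciphertext across on a USB stick, decrypt on the other side.
        # Uses the "cryptography" package (pip install cryptography).
        from cryptography.fernet import Fernet

        def encrypt_file(src, dst, key):
            with open(src, "rb") as f:
                data = Fernet(key).encrypt(f.read())
            with open(dst, "wb") as f:
                f.write(data)

        def decrypt_file(src, dst, key):
            with open(src, "rb") as f:
                data = Fernet(key).decrypt(f.read())   # raises if tampered with
            with open(dst, "wb") as f:
                f.write(data)

        key = Fernet.generate_key()                  # store only on the secure machine
        encrypt_file("notes.txt", "notes.enc", key)  # on the secure machine
        # ...carry notes.enc across on the USB stick and send it...
        decrypt_file("notes.enc", "notes.txt", key)  # inbound files: reverse the trip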

    Since I started working with Snowden’s documents, I have been using GPG, Silent Circle, Tails, OTR, TrueCrypt, BleachBit, and a few other things I’m not going to write about. There’s an undocumented encryption feature in my Password Safe program from the command line; I’ve been using that as well.

    I understand that most of this is impossible for the typical Internet user. Even I don’t use all these tools for most everything I am working on. And I’m still primarily on Windows, unfortunately. Linux would be safer.

    The NSA has turned the fabric of the Internet into a vast surveillance platform, but they are not magical. They’re limited by the same economic realities as the rest of us, and our best defense is to make surveillance of us as expensive as possible.

    Trust the math. Encryption is your friend. Use it well, and do your best to ensure that nothing can compromise it. That’s how you can remain secure even in the face of the NSA.

    This essay previously appeared in the Guardian.

    EDITED TO ADD: Reddit thread.

    Someone somewhere commented that the NSA’s “groundbreaking cryptanalytic capabilities” could include a practical attack on RC4. I don’t know one way or the other, but that’s a good speculation.

    Posted on September 15, 2013 at 8:11 AM

    Government Secrecy and the Generation Gap

    Big-government secrets require a lot of secret-keepers. As of October 2012, almost 5m people in the US have security clearances, with 1.4m at the top-secret level or higher, according to the Office of the Director of National Intelligence.

    Most of these people do not have access to as much information as Edward Snowden, the former National Security Agency contractor turned leaker, or even Chelsea Manning, the former US army soldier previously known as Bradley who was convicted for giving material to WikiLeaks. But a lot of them do — and that may prove the Achilles heel of government. Keeping secrets is an act of loyalty as much as anything else, and that sort of loyalty is becoming harder to find in the younger generations. If the NSA and other intelligence bodies are going to survive in their present form, they are going to have to figure out how to reduce the number of secrets.

    As the writer Charles Stross has explained, the old way of keeping intelligence secrets was to make it part of a life-long culture. The intelligence world would recruit people early in their careers and give them jobs for life. It was a private club, one filled with code words and secret knowledge.

    You can see part of this in Mr Snowden’s leaked documents. The NSA has its own lingo — the documents are riddled with code names — its own conferences, its own awards and recognitions. An intelligence career meant that you had access to a new world, one to which “normal” people on the outside were completely oblivious. Membership of the private club meant people were loyal to their organisations, which were in turn loyal back to them.

    Those days are gone. Yes, there are still the codenames and the secret knowledge, but a lot of the loyalty is gone. Many jobs in intelligence are now outsourced, and there is no job-for-life culture in the corporate world any more. Workforces are flexible, jobs are interchangeable and people are expendable.

    Sure, it is possible to build a career in the classified world of government contracting, but there are no guarantees. Younger people grew up knowing this: there are no employment guarantees anywhere. They see it in their friends. They see it all around them.

    Many will also believe in openness, especially the hacker types the NSA needs to recruit. They believe that information wants to be free, and that security comes from public knowledge and debate. Yes, there are important reasons why some intelligence secrets need to be secret, and the NSA culture reinforces secrecy daily. But this is a crowd that is used to radical openness. They have been writing about themselves on the internet for years. They have said very personal things on Twitter; they have had embarrassing photographs of themselves posted on Facebook. They have been dumped by a lover in public. They have overshared in the most compromising ways — and they have got through it. It is a tougher sell convincing this crowd that government secrecy trumps the public’s right to know.

    Psychologically, it is hard to be a whistleblower. There is an enormous amount of pressure to be loyal to our peer group: to conform to their beliefs, and not to let them down. Loyalty is a natural human trait; it is one of the social mechanisms we use to thrive in our complex social world. This is why good people sometimes do bad things at work.

    When someone becomes a whistleblower, he or she is deliberately eschewing that loyalty. In essence, they are deciding that allegiance to society at large trumps that to peers at work. That is the difficult part. They know their work buddies by name, but “society at large” is amorphous and anonymous. Believing that your bosses ultimately do not care about you makes that switch easier.

    Whistleblowing is the civil disobedience of the information age. It is a way that someone without power can make a difference. And in the information age — when everything is stored on computers and potentially accessible with a few keystrokes and mouse clicks — whistleblowing is easier than ever.

    Mr Snowden is 30 years old; Manning 25. They are members of the generation we taught not to expect anything long-term from their employers. As such, employers should not expect anything long-term from them. It is still hard to be a whistleblower, but for this generation it is a whole lot easier.

    A lot has been written about the problem of over-classification in US government. It has long been thought of as anti-democratic and a barrier to government oversight. Now we know that it is also a security risk. Organizations such as the NSA need to change their culture of secrecy, and concentrate their security efforts on what truly needs to remain secret. Their default practice of classifying everything is not going to work any more.

    Hey, NSA, you’ve got a problem.

    This essay previously appeared in the Financial Times.

    EDITED TO ADD (9/14): Blog comments on this essay are particularly interesting.

    Posted on September 9, 2013 at 1:30 PM

    Human-Machine Trust Failures

    I jacked a visitor’s badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they’re enabled when you check in at building security. You’re supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.

    I kept the badge. I used my body as a shield, and the chain made a satisfying noise when it hit bottom. The guard let me through the gate.

    The person after me had problems, though. Some part of the system knew something was wrong, and wouldn’t let her out. Eventually, the guard had to manually override something.

    My point in telling this story is not to demonstrate how I beat the EEOB’s security — I’m sure the badge was quickly deactivated and showed up in some missing-badge log next to my name — but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn’t adequately explain it to the guards. The guards knew it but didn’t know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.

    In any hybrid security system, the human portion needs to trust the machine portion. To do so, both must understand the expected behavior for every state — how the system can fail and what those failures look like. The machine must be able to communicate its state and have the capacity to alert the humans when an expected state transition doesn’t happen as expected. Things will go wrong, either by accident or as the result of an attack, and the humans are going to need to troubleshoot the system in real time — that requires understanding on both parts. Each time things go wrong, and the machine portion doesn’t communicate well, the human portion trusts it a little less.
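
    A toy sketch of that requirement, using the badge story above: a system that knows its expected state transitions and, instead of silently refusing to open the gate, tells the guard exactly which transition failed. All the state names and messages here are invented for illustration:

        # Toy badge-tracking machine that explains its failures to the humans
        # operating it, rather than just blocking. States and names are invented.
        EXPECTED = {
            "checked_in": {"badge_returned"},   # visitors must drop the badge
            "badge_returned": {"exited"},       # before the gate lets them out
        }

        def transition(visitor, current, event):
            if event not in EXPECTED.get(current, set()):
                # Communicate the machine's state instead of failing opaquely.
                raise RuntimeError(
                    "ALERT: %s tried '%s' while in state '%s'; expected one of %s"
                    % (visitor, event, current, sorted(EXPECTED.get(current, set())))
                )
            return event

        state = transition("visitor-42", "checked_in", "badge_returned")  # normal
        # transition("visitor-43", "checked_in", "exited")  # raises a clear alert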

    This problem is not specific to security systems, but inducing this sort of confusion is a good way to attack systems. When the attackers understand the system — especially the machine part — better than the humans in the system do, they can create a failure to exploit. Many social engineering attacks fall into this category. Failures also happen the other way. We’ve all experienced trust without understanding, when the human part of the system defers to the machine, even though it makes no sense: “The computer is always right.”

    Humans and machines have different strengths. Humans are flexible and can do creative thinking in ways that machines cannot. But they’re easily fooled. Machines are more rigid and can handle state changes and process flows much better than humans can. But they’re bad at dealing with exceptions. If humans are to serve as security sensors, they need to understand what is being sensed. (That’s why “if you see something, say something” fails so often.) If a machine automatically processes input, it needs to clearly flag anything unexpected.

    The more machine security is automated, and the more the machine is expected to enforce security without human intervention, the greater the impact of a successful attack. If this sounds like an argument for interface simplicity, it is. The machine design will be necessarily more complicated: more resilience, more error handling, and more internal checking. But the human/computer communication needs to be clear and straightforward. That’s the best way to give humans the trust and understanding they need in the machine part of any security system.

    This essay previously appeared in IEEE Security & Privacy.

    Posted on September 5, 2013 at 8:32 AM

    Opsec Details of Snowden Meeting with Greenwald and Poitras

    I don’t like stories about the personalities in the Snowden affair, because it detracts from the NSA and the policy issues. But I’m a sucker for operational security, and just have to post this detail from their first meeting in Hong Kong:

    Snowden had instructed them that once they were in Hong Kong, they were to go at an appointed time to the Kowloon district and stand outside a restaurant that was in a mall connected to the Mira Hotel. There, they were to wait until they saw a man carrying a Rubik’s Cube, then ask him when the restaurant would open. The man would answer their question, but then warn that the food was bad.

    Actually, the whole article is interesting. The author is writing a book about surveillance and privacy, one of probably a half dozen about the Snowden affair that will come out this year.

    EDITED TO ADD (8/31): While we’re on the topic, here’s some really stupid opsec on the part of Greenwald and Poitras:

    • Statement from senior Cabinet Office civil servant to #miranda case says material was 58000 ‘highly classified UK intelligence documents’
    • Police who seized documents from #miranda found among them a piece of paper with the decryption password, the statement says
    • This password allowed them to decrypt one file on his seized hard drive, adds Oliver Robbins, Cabinet Office security adviser #miranda

    You can’t do this kind of stuff when you’re playing with the big boys.

    Posted on August 30, 2013 at 1:54 PM

    Protecting Against Leakers

    Ever since Edward Snowden walked out of a National Security Agency facility in May with electronic copies of thousands of classified documents, the finger-pointing has concentrated on government’s security failures. Yet the debacle illustrates the challenge with trusting people in any organization.

    The problem is easy to describe. Organizations require trusted people, but they don’t necessarily know whether those people are trustworthy. These individuals are essential, and can also betray organizations.

    So how does an organization protect itself?

    Securing trusted people requires three basic mechanisms (as I describe in my book Beyond Fear). The first is compartmentalization. Trust doesn’t have to be all or nothing; it makes sense to give relevant workers only the access, capabilities and information they need to accomplish their assigned tasks. In the military, even if they have the requisite clearance, people are only told what they “need to know.” The same policy occurs naturally in companies.

    This isn’t simply a matter of always granting more senior employees a higher degree of trust. For example, only authorized armored-car delivery people can unlock automated teller machines and put money inside; even the bank president can’t do so. Think of an employee as operating within a sphere of trust — a set of assets and functions he or she has access to. Organizations act in their best interest by making that sphere as small as possible.

    The idea is that if someone turns out to be untrustworthy, he or she can only do so much damage. This is where the NSA failed with Snowden. As a system administrator, he needed access to many of the agency’s computer systems — and he needed access to everything on those machines. This allowed him to make copies of documents he didn’t need to see.

    The second mechanism for securing trust is defense in depth: Make sure a single person can’t compromise an entire system. NSA Director General Keith Alexander has said he is doing this inside the agency by instituting what is called two-person control: There will always be two people performing system-administration tasks on highly classified computers.

    Defense in depth reduces the ability of a single person to betray the organization. If this system had been in place and Snowden’s superior had been notified every time he downloaded a file, Snowden would have been caught well before his flight to Hong Kong.
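
    In software terms, a minimal sketch of two-person control plus the kind of notification that would have flagged Snowden’s downloads might look like this; the names and the logging scheme are invented for illustration:

        # Toy two-person control with an audit trail: a sensitive action needs a
        # second, distinct approver, and every attempt is logged so a supervisor
        # can be notified. All names are invented.
        import logging

        logging.basicConfig(level=logging.INFO)
        audit = logging.getLogger("audit")

        def bulk_download(requester, approver, files):
            audit.info("bulk_download by %s approved by %s: %d files",
                       requester, approver, len(files))   # logged before any check
            if approver == requester:
                raise PermissionError("two-person control: approver must differ")
            # ...perform the download only after both identities check out...

        bulk_download("sysadmin-a", "sysadmin-b", ["report.pdf"])   # allowed, logged
        # bulk_download("sysadmin-a", "sysadmin-a", ["report.pdf"])  # PermissionError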

    The final mechanism is to try to ensure that trusted people are, in fact, trustworthy. The NSA does this through its clearance process, which at high levels includes lie-detector tests (even though they don’t work) and background investigations. Many organizations perform reference and credit checks and drug tests when they hire new employees. Companies may refuse to hire people with criminal records or noncitizens; they might hire only those with a particular certification or membership in certain professional organizations. Some of these measures aren’t very effective — it’s pretty clear that personality profiling doesn’t tell you anything useful, for example — but the general idea is to verify, certify and test individuals to increase the chance they can be trusted.

    These measures are expensive. It costs the U.S. government about $4,000 to qualify someone for top-secret clearance. Even in a corporation, background checks and screenings are expensive and add considerable time to the hiring process. Giving employees access to only the information they need can hamper them in an agile organization in which needs constantly change. Security audits are expensive, and two-person control is even more expensive: it can double personnel costs. We’re always making trade-offs between security and efficiency.

    The best defense is to limit the number of trusted people needed within an organization. Alexander is doing this at the NSA — albeit too late — by trying to reduce the number of system administrators by 90 percent. This is just a tiny part of the problem; in the U.S. government, as many as 4 million people, including contractors, hold top-secret or higher security clearances. That’s far too many.

    More surprising than Snowden’s ability to get away with taking the information he downloaded is that there haven’t been dozens more like him. His uniqueness — along with the few who have gone before him and how rare whistle-blowers are in general — is a testament to how well we normally do at building security around trusted people.

    Here’s one last piece of advice, specifically about whistle-blowers. It’s much harder to keep secrets in a networked world, and whistle-blowing has become the civil disobedience of the information age. A public or private organization’s best defense against whistle-blowers is to refrain from doing things it doesn’t want to read about on the front page of the newspaper. This may come as a shock in a market-based system, in which morally dubious behavior is often rewarded as long as it’s legal and illegal activity is rewarded as long as you can get away with it.

    No organization, whether it’s a bank entrusted with the privacy of its customer data, an organized-crime syndicate intent on ruling the world, or a government agency spying on its citizens, wants to have its secrets disclosed. In the information age, though, it may be impossible to avoid.

    This essay previously appeared on Bloomberg.com.

    EDITED TO ADD (8/22): A commenter on the Bloomberg site added another security measure: pay your people more. Better paid people are less likely to betray the organization that employs them. I should have added that, especially since I make that exact point in Liars and Outliers.

    Posted on August 26, 2013 at 1:19 PM

    Management Issues in Terrorist Organizations

    Terrorist organizations have the same management problems as other organizations, and new ones besides:

    Terrorist leaders also face a stubborn human resources problem: Their talent pool is inherently unstable. Terrorists are obliged to seek out recruits who are predisposed to violence — that is to say, young men with a chip on their shoulder. Unsurprisingly, these recruits are not usually disposed to following orders or recognizing authority figures. Terrorist managers can craft meticulous long-term strategies, but those are of little use if the people tasked with carrying them out want to make a name for themselves right now.

    Terrorist managers are also obliged to place a premium on bureaucratic control, because they lack other channels to discipline the ranks. When Walmart managers want to deal with an unruly employee or a supplier who is defaulting on a contract, they can turn to formal legal procedures. Terrorists have no such option. David Ervine, a deceased Irish Unionist politician and onetime bomb maker for the Ulster Volunteer Force (UVF), neatly described this dilemma to me in 2006. “We had some very heinous and counterproductive activities being carried out that the leadership didn’t punish because they had to maintain the hearts and minds within the organization,” he said….

    EDITED TO ADD (9/13): More on the economics of terrorism.

    Posted on August 16, 2013 at 7:31 AM
