Entries Tagged "CIA"


A Template for Reporting Government Surveillance News Stories

This is from 2006 — I blogged it here — but it’s even more true today.

Under a top secret program initiated by the Bush Administration after the Sept. 11 attacks, the [name of agency (FBI, CIA, NSA, etc.)] has been gathering a vast database of [type of records] involving United States citizens.

“This program is a vital tool in the fight against terrorism,” [Bush Administration official] said. “Without it, we would be dangerously unsafe, and the terrorists would have probably killed you and every other American citizen.” The Bush Administration stated that the revelation of this program has severely compromised national security.

We’ve changed administrations — we’ve changed political parties — but nothing has changed.

Posted on November 1, 2013 at 2:26 PM

US Government Monitoring Public Internet in Real Time

Here’s a demonstration of the US government’s capability to monitor the public Internet. Former CIA and NSA Director Michael Hayden was on the Acela train between New York and Washington, DC, giving press interviews on the phone. Someone nearby overheard the conversations and started tweeting about them. Within 15 minutes or so, someone somewhere noticed the tweets and informed someone who knew Hayden. That person called Hayden on his cell phone and, presumably, told him to shut up.

Nothing covert here; the tweets were public. But still, wow.

EDITED TO ADD: To clarify, I don’t think this was a result of the NSA monitoring the Internet. I think this was some public relations office — probably the one helping General Alexander respond to all the Snowden stories — that was searching the public Twitter feed for, among other things, Hayden’s name.
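To illustrate how little machinery this kind of monitoring takes, here is a minimal sketch of a keyword watcher polling a public feed. Everything in it is hypothetical: `search_public_posts` is a stand-in for a real search API (Twitter’s, in this story, which requires authentication and has its own query syntax), and the canned result exists only so the sketch runs on its own.

```python
import time

def search_public_posts(query):
    """Hypothetical stand-in for a public-feed search call.

    A real monitor would query an authenticated search API and return
    recent posts matching the query; canned data is returned here so
    the sketch runs on its own.
    """
    return [{"id": 1, "text": f"Guy on the Acela giving interviews about {query}..."}]

def watch(keywords, polls=3, interval=60):
    """Poll the feed and flag any not-yet-seen post mentioning a keyword."""
    seen = set()
    for _ in range(polls):
        for kw in keywords:
            for post in search_public_posts(kw):
                if post["id"] not in seen:
                    seen.add(post["id"])
                    print(f"ALERT [{kw}]: {post['text']}")
        time.sleep(interval)

watch(["Michael Hayden"], polls=1, interval=0)
```

The point is that a single person with a search box, no intelligence agency required, can run something like this; the only hard part is knowing which keywords to watch.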

Posted on October 26, 2013 at 5:43 PM

The Limitations of Intelligence

We recently learned that US intelligence agencies had at least three days’ warning that Syrian President Bashar al-Assad was preparing to launch a chemical attack on his own people, but weren’t able to stop it. At least that’s what an intelligence briefing from the White House reveals. With the combined abilities of our national intelligence apparatus — the CIA, NSA, National Reconnaissance Office, and all the rest — it’s not surprising that we had advance notice. It’s not known whether the US shared what it knew.

More interestingly, the US government chose not to act on that knowledge (for example, by launching a preemptive strike), which left some wondering why.

There are several possible explanations, all of which point to a fundamental problem with intelligence information and our national intelligence apparatuses.

The first possibility is that we may have had the data, but didn’t fully understand what it meant. This is the proverbial connect-the-dots problem. As we’ve learned again and again, connecting the dots is hard. Our intelligence services collect billions of individual pieces of data every day. After the fact, it’s easy to walk backward through the data and notice all the individual pieces that point to what actually happened. Before the fact, though, it’s much more difficult. The overwhelming majority of those bits of data point in random directions, or nowhere at all. Almost all the dots don’t connect to anything.

Rather than thinking of intelligence as a connect-the-dots picture, think of it as a million unnumbered pictures superimposed on top of each other. Which picture is the relevant one? We have no idea. Turning that data into actual information is an extraordinarily difficult problem, and one that the vast scope of our data-gathering programs makes even more difficult.

The second possible explanation is that while we had some information about al-Assad’s plans, we didn’t have enough confirmation to act on that information. This is probably the most likely explanation. We can’t act on inklings, hunches, or possibilities. We probably can’t even act on probabilities; we have to be sure. But when it comes to intelligence, it’s hard to be sure. There could always be something else going on — something we’re not able to eavesdrop on, spy on, or see from our satellites. Again, our knowledge is most obvious after the fact.

The third is that while we were sure of our information, we couldn’t act because that would reveal “sources and methods.” This is probably the most frustrating explanation. Imagine we are able to eavesdrop on al-Assad’s most private conversations with his generals and aides, and are absolutely sure of his plans. If we act on them, we reveal that we are eavesdropping. As a result, he’s likely to change how he communicates, costing us our ability to eavesdrop. It might sound perverse, but often the fact that we are able to successfully spy on someone is a bigger secret than the information we learn from that spying.

This dynamic was vitally important during World War II. During the war, the British were able to break the German Enigma machine’s encryption and eavesdrop on German military communications. But while the Allies knew a lot, they would only act on information they learned when there was another plausible way they could have learned it. They even occasionally manufactured plausible explanations. It was just too risky to tip the Germans off that their encryption had been broken.

The fourth possibility is that there was nothing useful we could have done. And it is hard to imagine how we could have prevented the use of chemical weapons in Syria. We probably couldn’t have launched a preemptive strike, and even if we had, it’s unlikely it would have been effective. The only feasible action would have been to alert the opposition — and that, too, might not have accomplished anything. Or perhaps there wasn’t sufficient agreement on any one course of action — so, by default, nothing was done.

All of these explanations point to the limitations of intelligence. The NSA serves as an example. The agency measures its success by the amount of data collected, not by the information synthesized or the knowledge gained. But it’s knowledge that matters.

The NSA’s belief that more data is always good, and that it’s worth doing anything in order to collect it, is wrong. There are diminishing returns, and the NSA almost certainly passed that point long ago. But the idea of trade-offs does not seem to be part of its thinking.

The NSA missed the Boston Marathon bombers, even though the suspects left a really sloppy Internet trail and the older brother was on the terrorist watch list. With all the eavesdropping the NSA does on the world, you would think the least it could manage is to keep track of the people on the terrorist watch list. Apparently not.

I don’t know how the CIA measures its success, but it failed to predict the end of the Cold War.

More data does not necessarily mean better information. It’s much easier to look backward than to predict. Information does not necessarily enable the government to act. Even when we know something, protecting the methods of collection can be more valuable than the possibility of taking action based on gathered information. But there’s not a lot of value to intelligence that can’t be used for action. These are the paradoxes of intelligence, and it’s time we started remembering them.

Of course, we need organizations like the CIA, the NSA, the NRO and all the rest. Intelligence is a vital component of national security, and can be invaluable in both wartime and peacetime. But it is just one security tool among many, and there are significant costs and limitations.

We’ve just learned from the recently leaked “black budget” that we’re spending $52 billion annually on national intelligence. We need to take a serious look at what kind of value we’re getting for our money, and whether it’s worth it.

This essay previously appeared on CNN.com.

Posted on September 17, 2013 at 6:15 AM

Intelligence Analysis and the Connect-the-Dots Metaphor

The FBI and the CIA are being criticized for not keeping better track of Tamerlan Tsarnaev in the months before the Boston Marathon bombings. How could they have ignored such a dangerous person? How do we reform the intelligence community to ensure this kind of failure doesn’t happen again?

It’s an old song by now, one we heard after the 9/11 attacks in 2001 and after the Underwear Bomber’s failed attack in 2009. The problem is that connecting the dots is a bad metaphor, and focusing on it makes us more likely to implement useless reforms.

Connecting the dots in a coloring book is easy and fun. They’re right there on the page, and they’re all numbered. All you have to do is move your pencil from one dot to the next, and when you’re done, you’ve drawn a sailboat. Or a tiger. It’s so simple that 5-year-olds can do it.

But in real life, the dots can only be numbered after the fact. With the benefit of hindsight, it’s easy to draw lines from a Russian request for information to a foreign visit to some other piece of information that might have been collected.

In hindsight, we know who the bad guys are. Before the fact, there are an enormous number of potential bad guys.

How many? We don’t know. But we know that the no-fly list had 21,000 people on it last year. The Terrorist Identities Datamart Environment, also known as the watch list, has 700,000 names on it.

We have no idea how many potential “dots” the FBI, CIA, NSA and other agencies collect, but it’s easily in the millions. It’s easy to work backwards through the data and see all the obvious warning signs. But before a terrorist attack, when there are millions of dots — some important but the vast majority unimportant — uncovering plots is a lot harder.

Rather than thinking of intelligence as a simple connect-the-dots picture, think of it as a million unnumbered pictures superimposed on top of each other. Or a random-dot stereogram. Is it a sailboat, a puppy, two guys with pressure-cooker bombs, or just an unintelligible mess of dots? You try to figure it out.

It’s not a matter of not enough data, either.

Piling more data onto the mix makes it harder, not easier. The best way to think of it is a needle-in-a-haystack problem; the last thing you want to do is increase the amount of hay you have to search through. The television show Person of Interest is fiction, not fact.
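To put rough numbers on the haystack, here is a back-of-the-envelope base-rate calculation. It uses the 700,000-name watch-list figure cited above; the other inputs (ten real plotters, a detector that is 99% accurate both ways) are invented for illustration, not real figures.

```python
# Back-of-the-envelope base-rate arithmetic for the haystack problem.
# Assumed for illustration: 700,000 people on the watch list (the TIDE
# figure above), 10 of them actual plotters, and a screening system
# that is 99% accurate in both directions -- far better than reality.

watch_list = 700_000
plotters = 10                 # assumption; the true number is unknown
sensitivity = 0.99            # assumed P(flagged | plotter)
specificity = 0.99            # assumed P(not flagged | innocent)

true_positives = plotters * sensitivity
false_positives = (watch_list - plotters) * (1 - specificity)
flagged = true_positives + false_positives

print(f"People flagged: {flagged:,.0f}")
print(f"Fraction who are actual plotters: {true_positives / flagged:.2%}")
# ~7,010 people flagged, of whom ~0.14% are real plotters. Even a
# wildly optimistic detector buries ten needles under 7,000 more
# pieces of hay.
```

The rarer the event, the more the detector’s false positives dominate; adding data sources adds hay faster than needles.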

There’s a name for this cognitive bias: hindsight bias. First explained by psychologists Daniel Kahneman and Amos Tversky, it’s surprisingly common. Since what actually happened is so obvious once it happens, we overestimate how obvious it was before it happened.

We actually misremember what we once thought, believing that we knew all along that what happened would happen. It’s a surprisingly strong tendency, one that has been observed in countless laboratory experiments and real-world examples of behavior. And it’s what all the post-Boston-Marathon bombing dot-connectors are doing.

Before we start blaming agencies for failing to stop the Boston bombers, and before we push “intelligence reforms” that will shred civil liberties without making us any safer, we need to stop seeing the past as a bunch of obvious dots that need connecting.

Kahneman, a Nobel Prize winner, wisely noted: “Actions that seemed prudent in foresight can look irresponsibly negligent in hindsight.” He calls this “the illusion of understanding”: the past seems so understandable only because we have cast it as a set of simple, inevitable stories and left out the rest.

Nassim Taleb, an expert on risk engineering, calls this tendency the “narrative fallacy.” We humans are natural storytellers, and the world of stories is much more tidy, predictable and coherent than the real world.

Millions of people behave strangely enough to warrant the FBI’s notice, and almost all of them are harmless. It is simply not possible to find every plot beforehand, especially when the perpetrators act alone and on impulse.

We have to accept that there always will be a risk of terrorism, and that when the occasional plot succeeds, it’s not necessarily because our law enforcement systems have failed.

This essay previously appeared on CNN.

EDITED TO ADD (5/7): The hindsight bias was actually first discovered by Baruch Fischhoff: “Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty,” Journal of Experimental Psychology: Human Perception and Performance, 1(3), 1975, pp. 288-299.

Posted on May 7, 2013 at 6:10 AM

Biometric Passports Make It Harder for Undercover CIA Officers

Last year, I wrote about how social media sites are making it harder than ever for police officers to work undercover. This story talks about how biometric passports are making it harder than ever for undercover CIA officers.

Busy spy crossroads such as Dubai, Jordan, India, and many E.U. points of entry are employing iris scanners to link eyeballs irrevocably to a particular name. Likewise, biometric passports, which are embedded with microchips containing a person’s face, sex, fingerprints, date and place of birth, and other personal data, are increasingly replacing the old paper ones. For a clandestine field operative, flying under a false name could be a one-way ticket to a headquarters desk, since the operative is irrevocably chained to whatever name and passport he or she used.

“If you go to one of those countries under an alias, you can’t go again under another name,” explains a career spook, who spoke on condition of anonymity because he remains an agency consultant. “So it’s a one-time thing — one and done. The biometric data on your passport, and maybe your iris, too, has been linked forever to whatever name was on your passport the first time. You can’t show up again under a different name with the same data.”

Posted on April 26, 2012 at 6:57 AM

Outliers in Intelligence Analysis

From the CIA journal Studies in Intelligence: “Capturing the Potential of Outlier Ideas in the Intelligence Community.”

In war you will generally find that the enemy has at any time three courses of action open to him. Of those three, he will invariably choose the fourth.

—Helmuth von Moltke

With that quip, von Moltke may have launched a spirited debate within his intelligence staff. The modern version of the debate can be said to exist in the cottage industry that has been built on the examination and explanation of intelligence failures, surprises, omissions, and shortcomings. The contributions of notable scholars to the discussion span multiple analytic generations, and each expresses points with equal measures of regret, fervor, and hope. Their diagnoses and their prescriptions are sadly similar, however, suggesting that the lessons of the past are lost on each succeeding generation of analysts and managers or that the processes and culture of intelligence analysis are incapable of evolution. It is with the same regret, fervor, and hope that we offer our own observations on avoiding intelligence omissions and surprise. Our intent is to explore the ingrained bias against outliers, the potential utility of outliers, and strategies for deliberately considering them.

Posted on April 17, 2012 at 6:15 AM
