Entries Tagged "intelligence"

Page 19 of 24

FBI Stoking Fear

Another unsubstantiated terrorist plot:

An internal memo obtained by The Associated Press says the FBI has received a “plausible but unsubstantiated” report that al-Qaida terrorists in late September may have discussed attacking the subway system.

[…]

The internal bulletin says al-Qaida terrorists “in late September may have discussed targeting transit systems in and around New York City. These discussions reportedly involved the use of suicide bombers or explosives placed on subway/passenger rail systems,” according to the document.

“We have no specific details to confirm that this plot has developed beyond aspirational planning, but we are issuing this warning out of concern that such an attack could possibly be conducted during the forthcoming holiday season,” according to the warning dated Tuesday.

[…]

Rep. Peter King, the top Republican on the House Homeland Security Committee, said authorities “have very real specifics as to who it is and where the conversation took place and who conducted it.”

“It certainly involves suicide bombing attacks on the mass transit system in and around New York and it’s plausible, but there’s no evidence yet that it’s in the process of being carried out,” King said.

Knocke, the DHS spokesman, said the warning was issued “out of an abundance of caution going into this holiday season.”

Got that: “plausible but unsubstantiated,” “may have discussed attacking the subway system,” “specific details to confirm that this plot has developed beyond aspirational planning,” “attack could possibly be conducted,” “it’s plausible, but there’s no evidence yet that it’s in the process of being carried out.”

I have no specific details, but I want to warn everybody today that fiery rain might fall from the sky. Terrorists may have discussed this sort of tactic, possibly at one of their tequila-fueled aspirational planning sessions. While there is no evidence yet that the plan is in the process of being carried out, I want to be extra-cautious this holiday season. Ho ho ho.

Posted on November 27, 2008 at 12:27 PM

Secret German IP Addresses Leaked

From Wikileaks:

The PDF document holds a single-page scan of an internally distributed mail from German telecommunications company T-Systems (Deutsche Telekom), revealing over two dozen secret IP address ranges in use by the German intelligence service Bundesnachrichtendienst (BND). Independent evidence shows that the claim is almost certainly true, and the document itself has been verified by a demand letter from T-Systems to Wikileaks.

Posted on November 20, 2008 at 7:26 AM

Clever Counterterrorism Tactic

Used against the IRA:

One of the most interesting operations was the laundry mat [sic]. Having lost many troops and civilians to bombings, the Brits decided they needed to determine who was making the bombs and where they were being manufactured. One bright fellow recommended they operate a laundry and when asked “what the hell he was talking about,” he explained the plan and it was incorporated—to much success.

The plan was simple: Build a laundry and staff it with locals and a few of their own. The laundry would then send out “color coded” special discount tickets, to the effect of “get two loads for the price of one,” etc. The color coding was matched to specific streets, and thus when someone brought in their laundry, it was easy to determine the general area they came from, which was marked on a coded city map.

While the laundry was indeed being washed, pressed and dry cleaned, it had one additional cycle—every garment, sheet, glove, pair of pants, was first sent through an analyzer, located in the basement, that checked for bomb-making residue. The analyzer was disguised as just another piece of the laundry equipment; good OPSEC [operational security]. Within a few weeks, multiple positives had shown up, indicating the ingredients of bomb residue, and intelligence had determined which areas of the city were involved. To narrow their target list, [the laundry] simply sent out more specific coupons [numbered] to all houses in the area, and before long they had good addresses. After confirming addresses, authorities with the SAS teams swooped down on the multiple homes and arrested multiple personnel and confiscated numerous assembled bombs, weapons and ingredients. During the entire operation, no one was injured or killed.
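The narrowing scheme is essentially a two-stage lookup: coarse coupon colors identify a street, then uniquely numbered coupons identify a house. A toy sketch of that logic (all street names and codes invented for illustration):

```python
# Stage 1: each street in the area gets its own coupon color, so a
# redeemed coupon reveals which street the customer came from.
color_to_street = {
    "red": "Elm Street",
    "blue": "Mill Road",
    "green": "Bridge Street",
}

def street_from_coupon(color):
    return color_to_street[color]

# Stage 2: once residue flags a street, a follow-up mailing gives each
# house on that street a uniquely numbered coupon.
number_to_house = {
    101: "12 Elm Street",
    102: "14 Elm Street",
    103: "16 Elm Street",
}

def house_from_coupon(number):
    return number_to_house[number]

# A red coupon comes back with laundry that tests positive: that flags
# the street. Later, numbered coupon 102 comes back positive: that
# flags the specific house.
print(street_from_coupon("red"))
print(house_from_coupon(102))
```

The same idea underlies any coded-coupon or canary-token scheme: the identifier costs the holder nothing to redeem, but redemption leaks exactly the location information the issuer encoded into it.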

Posted on October 13, 2008 at 1:22 PM

Data Mining for Terrorists Doesn't Work

According to a massive report from the National Research Council, data mining for terrorists doesn’t work. Here’s a good summary:

The report was written by a committee whose members include William Perry, a professor at Stanford University; Charles Vest, the former president of MIT; W. Earl Boebert, a retired senior scientist at Sandia National Laboratories; Cynthia Dwork of Microsoft Research; R. Gil Kerlikowske, Seattle’s police chief; and Daryl Pregibon, a research scientist at Google.

They admit that far more Americans live their lives online, using everything from VoIP phones to Facebook to RFID tags in automobiles, than a decade ago, and the databases created by those activities are tempting targets for federal agencies. And they draw a distinction between subject-based data mining (starting with one individual and looking for connections) compared with pattern-based data mining (looking for anomalous activities that could show illegal activities).

But the authors conclude the type of data mining that government bureaucrats would like to do—perhaps inspired by watching too many episodes of the Fox series 24—can’t work. “If it were possible to automatically find the digital tracks of terrorists and automatically monitor only the communications of terrorists, public policy choices in this domain would be much simpler. But it is not possible to do so.”

A summary of the recommendations:

  • U.S. government agencies should be required to follow a systematic process to evaluate the effectiveness, lawfulness, and consistency with U.S. values of every information-based program, whether classified or unclassified, for detecting and countering terrorists before it can be deployed, and periodically thereafter.
  • Periodically after a program has been operationally deployed, and in particular before a program enters a new phase in its life cycle, policy makers should carefully review the program before allowing it to continue operations or to proceed to the next phase.
  • To protect the privacy of innocent people, the research and development of any information-based counterterrorism program should be conducted with synthetic population data… At all stages of a phased deployment, data about individuals should be rigorously subjected to the full safeguards of the framework.
  • Any information-based counterterrorism program of the U.S. government should be subjected to robust, independent oversight of the operations of that program, a part of which would entail a practice of using the same data mining technologies to “mine the miners and track the trackers.”
  • Counterterrorism programs should provide meaningful redress to any individuals inappropriately harmed by their operation.
  • The U.S. government should periodically review the nation’s laws, policies, and procedures that protect individuals’ private information for relevance and effectiveness in light of changing technologies and circumstances. In particular, Congress should re-examine existing law to consider how privacy should be protected in the context of information-based programs (e.g., data mining) for counterterrorism.

Here are more news articles on the report. I explained why data mining wouldn’t find terrorists back in 2005.
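The core of that 2005 argument is base-rate arithmetic: when the thing you're looking for is vanishingly rare, even a highly accurate classifier buries its true positives under false ones. A back-of-the-envelope calculation, with illustrative numbers of my own choosing rather than anything from the report:

```python
# Base-rate arithmetic with made-up but generous numbers.
population = 300_000_000     # roughly the U.S. population
terrorists = 1_000           # a wildly generous assumption
true_positive_rate = 0.99    # the system flags 99% of real terrorists
false_positive_rate = 0.01   # and wrongly flags only 1% of innocents

flagged_terrorists = terrorists * true_positive_rate
flagged_innocents = (population - terrorists) * false_positive_rate

# Precision: of everyone the system flags, what fraction is actually
# a terrorist?
precision = flagged_terrorists / (flagged_terrorists + flagged_innocents)

print(f"Innocents flagged: {flagged_innocents:,.0f}")
print(f"Chance a flagged person is a terrorist: {precision:.4%}")
```

Even with these charitable assumptions, roughly three million innocents are flagged for fewer than a thousand real targets: a precision of about three-hundredths of one percent, and every flag still has to be investigated by a human.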

EDITED TO ADD (10/10): More commentary:

As the NRC report points out, not only is the training data lacking, but the input data that you’d actually be mining has been purposely corrupted by the terrorists themselves. Terrorist plotters actively disguise their activities using operational security measures (opsec) like code words, encryption, and other forms of covert communication. So, even if we had access to a copious and pristine body of training data that we could use to generalize about the “typical terrorist,” the new data that’s coming into the data mining system is suspect.

To return to the credit reporting analogy, credit scores would be worthless to lenders if everyone could manipulate their credit history (e.g., hide past delinquencies) the way that terrorists can manipulate the data trails that they leave as they buy gas, enter buildings, make phone calls, surf the Internet, etc.

So this application of data mining bumps up against the classic GIGO (garbage in, garbage out) problem in computing, with the terrorists deliberately feeding the system garbage. What this means in real-world terms is that the success of our counter-terrorism data mining efforts is completely dependent on the failure of terrorist cells to maintain operational security.

The combination of the GIGO problem and the lack of suitable training data combine to make big investments in automated terrorist identification a futile and wasteful effort. Furthermore, these two problems are structural, so they’re not going away. All legitimate concerns about false positives and corrosive effects on civil liberties aside, data mining will never give authorities the ability to identify terrorists or terrorist networks with any degree of confidence.

Posted on October 10, 2008 at 6:35 AM

MI6 Camera—Including Secrets—Sold on eBay

I wish I’d known:

A 28-year-old delivery man from the UK who bought a Nikon Coolpix camera for about $31 on eBay got more than he bargained for when the camera arrived with top secret information from the UK’s MI6 organization.

Allegedly sold by one of the clandestine organization’s agents, the camera contained the names of al-Qaeda cells, images of suspected terrorists and weapons, fingerprint information, and log-in details for the Secret Service’s computer network, all bearing a “Top Secret” marking.

He turned the camera in to the police.

Posted on October 1, 2008 at 1:59 PM

The Pentagon's World of Warcraft Movie-Plot Threat

In a presentation that rivals any of my movie-plot threat contest entries, a Pentagon researcher is worried that terrorists might plot using World of Warcraft:

In a presentation late last week at the Director of National Intelligence Open Source Conference in Washington, Dr. Dwight Toavs, a professor at the Pentagon-funded National Defense University, gave a bit of a primer on virtual worlds to an audience largely ignorant about what happens in these online spaces. Then he launched into a scenario, to demonstrate how a meatspace plot might be hidden by in-game chatter.

In it, two World of Warcraft players discuss a raid on the “White Keep” inside the “Stonetalon Mountains.” The major objective is to set off a “Dragon Fire spell” inside, and make off with “110 Gold and 234 Silver” in treasure. “No one will dance there for a hundred years after this spell is cast,” one player, “war_monger,” crows.

Except, in this case, the White Keep is at 1600 Pennsylvania Avenue. “Dragon Fire” is an unconventional weapon. And “110 Gold and 234 Silver” tells the plotters how to align the game’s map with one of Washington, D.C.

I don’t know why he thinks that the terrorists will use World of Warcraft and not some other online world. Or Facebook. Or Usenet. Or a chat room. Or e-mail. Or the telephone. I don’t even know why the particular form of communication is in any way important.

The article ends with this nice paragraph:

Steven Aftergood, the Federation of American Scientists analyst who’s been following the intelligence community for years, wonders how realistic these sorts of scenarios are, really. “This concern is out there. But it has to be viewed in context. It’s the job of intelligence agencies to anticipate threats and counter them. With that orientation, they’re always going to give more weight to a particular scenario than an objective analysis would allow,” he tells Danger Room. “Could terrorists use Second Life? Sure, they can use anything. But is it a significant augmentation? That’s not obvious. It’s a scenario that an intelligence officer is duty-bound to consider. That’s all.”

My guess is still that some clever Pentagon researchers have figured out how to play World of Warcraft on the job, and they’re not giving that perk up anytime soon.

Posted on September 18, 2008 at 1:29 PM

Doctoring Photographs without Photoshop

It’s all about the captions:

…doctored photographs are the least of our worries. If you want to trick someone with a photograph, there are lots of easy ways to do it. You don’t need Photoshop. You don’t need sophisticated digital photo-manipulation. You don’t need a computer. All you need to do is change the caption.

The photographs presented by Colin Powell at the United Nations in 2003 provide several examples. Photographs that were used to justify a war. And yet, the actual photographs are low-res, muddy aerial surveillance photographs of buildings and vehicles on the ground in Iraq. I’m not an aerial intelligence expert. I could be looking at anything. It is the labels, the captions, and the surrounding text that turn the images from one thing into another.

Powell was arguing that the Iraqis were doing something wrong, knew they were doing something wrong, and were trying to cover their tracks. Later, it was revealed that the captions were wrong. There was no evidence of chemical weapons and no evidence of concealment.

There is a larger point. I don’t know what these buildings were really used for. I don’t know whether they were used for chemical weapons at one time, and then transformed into something relatively innocuous, in order to hide the reality of what was going on from weapons inspectors. But I do know that the yellow captions influence how we see the pictures. “Chemical Munitions Bunker” is different from “Empty Warehouse” which is different from “International House of Pancakes.” The image remains the same but we see it differently.

Change the yellow labels, change the caption and you change the meaning of the photographs. You don’t need Photoshop. That’s the disturbing part. Captions do the heavy lifting as far as deception is concerned. The pictures merely provide the window-dressing. The unending series of errors engendered by falsely captioned photographs are rarely remarked on.

Posted on August 27, 2008 at 7:27 AM

World War II Deception Story

Great security story from an obituary of former OSS agent Roger Hall:

One of his favorite OSS stories involved a colleague sent to occupied France to destroy a seemingly impenetrable German tank at a key crossroads. The French resistance found that grenades were no use.

The OSS man, fluent in German and dressed like a French peasant, walked up to the tank and yelled, “Mail!”

The lid opened, and in went two grenades.

Hall’s book about his OSS days, You’re Stepping on My Cloak and Dagger, is a must-read.

Posted on July 29, 2008 at 1:50 PM

The Case of the Stolen BlackBerry and the Awesome Chinese Hacking Skills

A high-level British government employee had his BlackBerry stolen by Chinese intelligence:

The aide, a senior Downing Street adviser who was with the prime minister on a trip to China earlier this year, had his BlackBerry phone stolen after being picked up by a Chinese woman who had approached him in a Shanghai hotel disco.

The aide agreed to return to his hotel with the woman. He reported the BlackBerry missing the next morning.

That can’t look good on your annual employee review.

But it’s this part of the article that has me confused:

Experts say that even if the aide’s device did not contain anything top secret, it might enable a hostile intelligence service to hack into the Downing Street server, potentially gaining access to No 10’s e-mail traffic and text messages.

Um, what? I assume the IT department just turned off the guy’s password. Was this nonsense peddled to the press by the UK government, or is some “expert” trying to sell us something? The article doesn’t say.

EDITED TO ADD (7/22): The first commenter makes a good point, which I didn’t think of. The article says that it’s Chinese intelligence:

A senior official said yesterday that the incident had all the hallmarks of a suspected honeytrap by Chinese intelligence.

But Chinese intelligence would be far more likely to clone the BlackBerry and then return it. Much better information that way. This is much more likely to be petty theft.

EDITED TO ADD (7/23): The more I think about this story, the less sense it makes. If you’re a Chinese intelligence officer and you manage to get an aide to the British Prime Minister to have sex with one of your agents, you’re not going to immediately burn him by stealing his BlackBerry. That’s just stupid.

Posted on July 22, 2008 at 10:05 AM

Man-in-the-Middle Attacks

Last week’s dramatic rescue of 15 hostages held by the guerrilla organization FARC was the result of months of intricate deception on the part of the Colombian government. At the center was a classic man-in-the-middle attack.

In a man-in-the-middle attack, the attacker inserts himself between two communicating parties. Both believe they’re talking to each other, and the attacker can delete or modify the communications at will.

The Wall Street Journal reported how this gambit played out in Colombia:

“The plan had a chance of working because, for months, in an operation one army officer likened to a ‘broken telephone,’ military intelligence had been able to convince Ms. Betancourt’s captor, Gerardo Aguilar, a guerrilla known as ‘Cesar,’ that he was communicating with his top bosses in the guerrillas’ seven-man secretariat. Army intelligence convinced top guerrilla leaders that they were talking to Cesar. In reality, both were talking to army intelligence.”

This ploy worked because Cesar and his guerrilla bosses didn’t know one another well. They didn’t recognize one another’s voices, and didn’t have a friendship or shared history that could have tipped them off about the ruse. Man-in-the-middle is defeated by context, and the FARC guerrillas didn’t have any.

And that’s why man-in-the-middle, abbreviated MITM in the computer-security community, is such a problem online: Internet communication is often stripped of any context. There’s no way to recognize someone’s face. There’s no way to recognize someone’s voice. When you receive an e-mail purporting to come from a person or organization, you have no idea who actually sent it. When you visit a website, you have no idea if you’re really visiting that website. We all like to pretend that we know who we’re communicating with—and for the most part, of course, there isn’t any attacker inserting himself into our communications—but in reality, we don’t. And there are lots of hacker tools that exploit this unjustified trust, and implement MITM attacks.

Even with context, it’s still possible for MITM to fool both sides—because electronic communications are often intermittent. Imagine that one of the FARC guerrillas became suspicious about who he was talking to. So he asks a question about their shared history as a test: “What did we have for dinner that time last year?” or something like that. On the telephone, the attacker wouldn’t be able to answer quickly, so his ruse would be discovered. But e-mail conversation isn’t synchronous. The attacker could simply pass that question through to the other end of the communications, and when he got the answer back, he would be able to reply.

This is the way MITM attacks work against web-based financial systems. A bank demands authentication from the user: a password, a one-time code from a token or whatever. The attacker sitting in the middle receives the request from the bank and passes it to the user. The user responds to the attacker, who passes that response to the bank. Now the bank assumes it is talking to the legitimate user, and the attacker is free to send transactions directly to the bank. This kind of attack completely bypasses any two-factor authentication mechanisms, and is becoming a more popular identity-theft tactic.
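The flow just described can be modeled in a few lines, with the attack reduced to plain function calls rather than real network traffic. Everything here is invented for illustration; the point is only that the attacker never needs the user's secret, just the user's willingness to answer whatever challenge arrives:

```python
# Schematic of the 2FA-bypass relay described above. No real protocol
# or library is modeled; all names and values are made up.

class Bank:
    def challenge(self):
        return "enter one-time code"

    def login(self, code):
        # The bank sees only the code, not who typed it.
        return "session-token" if code == "123456" else None

    def transfer(self, token, amount):
        return f"sent ${amount}" if token == "session-token" else "denied"

class User:
    def respond(self, challenge):
        # The user dutifully answers the prompt, believing it came
        # straight from the bank.
        return "123456"

class Attacker:
    """Sits between User and Bank, relaying the challenge/response
    and keeping the authenticated session for himself."""
    def __init__(self, bank, user):
        self.bank, self.user = bank, user

    def run(self):
        challenge = self.bank.challenge()    # from the real bank
        code = self.user.respond(challenge)  # relayed to the real user
        token = self.bank.login(code)        # attacker owns the session
        return self.bank.transfer(token, 1_000_000)

print(Attacker(Bank(), User()).run())
```

Note that the one-time code is genuine and the bank's check succeeds; two-factor authentication verifies the credential, not the channel it traveled over.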

There are cryptographic solutions to MITM attacks, and there are secure web protocols that implement them. Many of them require shared secrets, though, making them useful only in situations where people already know and trust one another.

The NSA-designed STU-III and STE secure telephones solve the MITM problem by embedding the identity of each phone together with its key. (The NSA creates all keys and is trusted by everyone, so this works.) When two phones talk to each other securely, they exchange keys and display the other phone’s identity on a screen. Because the phone is in a secure location, the user now knows who he is talking to, and if the phone displays another organization—as it would if there were a MITM attack in progress—he should hang up.

Zfone, a secure VoIP system, protects against MITM attacks with a short authentication string. After two Zfone terminals exchange keys, both computers display a four-character string. The users are supposed to manually verify that both strings are the same—“my screen says 5C19; what does yours say?”—to ensure that the phones are communicating directly with each other and not with an MITM. The AT&T TSD-3600 worked similarly.
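The idea behind a short authentication string can be sketched simply: hash the negotiated key material and display a few characters of the digest. (Zfone's actual ZRTP derivation differs in its details; this only shows the principle, with invented key values.)

```python
import hashlib

def short_auth_string(shared_key: bytes, length: int = 4) -> str:
    """Derive a short, human-comparable string from key material."""
    digest = hashlib.sha256(shared_key).hexdigest()
    return digest[:length].upper()

# Both endpoints derive the string from the same negotiated key, so a
# direct, un-attacked connection yields matching strings.
alice = short_auth_string(b"negotiated-session-key")
bob = short_auth_string(b"negotiated-session-key")
assert alice == bob

# A MITM negotiates a *different* key with each side, so the two
# displayed strings will almost certainly differ.
mitm_side = short_auth_string(b"attacker-to-bob-session-key")
print(alice, mitm_side)
```

The string is short enough to read over the phone, and since the attacker cannot force two independently negotiated keys to hash to the same prefix, a mismatch exposes the interception.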

This sort of protection is embedded in SSL, although no one uses it. As it is normally used, SSL provides an encrypted communications link to whoever is at the other end: bank and phishing site alike. And the better phishing sites create valid SSL connections, so as to more effectively fool users. But if the user wanted to, he could manually check the SSL certificate to see if it was issued to “National Bank of Trustworthiness” or “Two Guys With a Computer in Nigeria.”
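Mechanically, the check itself is easy. A sketch of pulling the subject's common name out of a certificate, using the nested-tuple dictionary format that Python's `ssl` module returns from `getpeercert()` (the sample certificate below is hand-made in that format, with a fictitious bank name):

```python
def common_name(peercert: dict) -> str:
    """Extract the subject common name from a getpeercert()-style dict.

    getpeercert() returns the subject as a tuple of RDN tuples, each
    holding (attribute, value) pairs.
    """
    for rdn in peercert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value
    return "<no common name>"

# A hand-made sample in the same shape getpeercert() produces.
sample_cert = {
    "subject": ((("countryName", "US"),),
                (("commonName", "www.nationalbankoftrustworthiness.example"),)),
    "issuer": ((("commonName", "Example CA"),),),
}

print(common_name(sample_cert))
```

Of course, displaying the name is the trivial part; the hard part is getting users to look at it and to notice when it's wrong.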

No one does, though, because you have to both remember and be willing to do the work. (The browsers could make this easier if they wanted to, but they don’t seem to want to.) In the real world, you can easily tell a branch of your bank from a money changer on a street corner. But on the internet, a phishing site can be easily made to look like your bank’s legitimate website. Any method of telling the two apart takes work. And that’s the first step to fooling you with a MITM attack.

Man-in-the-middle isn’t new, and it doesn’t have to be technological. But the internet makes the attacks easier and more powerful, and that’s not going to change anytime soon.

This essay originally appeared on Wired.com.

Posted on July 15, 2008 at 6:47 AM

