Entries Tagged "NSA"


Another Recently Released NSA Document

American Cryptology during the Cold War, 1945-1989, by Thomas R. Johnson: documents 1, 2, 3, 4, 5, and 6.

In response to a declassification request by the National Security Archive, the secretive National Security Agency has declassified large portions of a four-part “top-secret Umbra” study, American Cryptology during the Cold War. Despite major redactions, this history discloses much new information about the agency’s history and the role of SIGINT and communications intelligence (COMINT) during the Cold War. Researched and written by NSA historian Thomas Johnson, the three parts released so far provide a frank assessment of the history of the Agency and its forerunners, warts-and-all.

Posted on January 2, 2009 at 12:17 PM

NSA Patent on Network Tampering Detection

The NSA has patented a technique to detect network tampering:

The NSA’s software does this by measuring the amount of time the network takes to send different types of data from one computer to another and raising a red flag if something takes too long, according to the patent filing.

Other researchers have looked into this problem in the past and proposed a technique called distance bounding, but the NSA patent takes a different tack, comparing different types of data travelling across the network. “The neat thing about this particular patent is that they look at the differences between the network layers,” said Tadayoshi Kohno, an assistant professor of computer science at the University of Washington.

The technique could be used for purposes such as detecting a fake phishing Web site that was intercepting data between users and their legitimate banking sites, he said. “This whole problem space has a lot of potential, [although] I don’t know if this is going to be the final solution that people end up using.”
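
The patent filing itself isn't quoted in detail above, so what follows is only a rough sketch of the general timing idea, not the patented method: probe the same host at two different layers (a bare TCP handshake and a small HTTP request against a placeholder host), baseline the latencies, and flag a fresh measurement that drifts far above normal, which is the kind of delay an interposed machine can introduce.

```python
import socket
import statistics
import time

HOST = "example.com"  # placeholder host to probe
PORT = 80

def tcp_connect_time(host, port):
    """Time a bare TCP handshake (a transport-layer probe)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.perf_counter() - start

def http_head_time(host, port):
    """Time a minimal HTTP request (an application-layer probe)."""
    request = b"HEAD / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n"
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(request)
        while s.recv(4096):
            pass
    return time.perf_counter() - start

def flag_anomaly(probe, samples=10):
    """Baseline a probe, then flag a fresh measurement far above that baseline."""
    times = [probe(HOST, PORT) for _ in range(samples)]
    mean, stdev = statistics.mean(times), statistics.stdev(times)
    current = probe(HOST, PORT)
    suspicious = current > mean + 3 * stdev
    return current, mean, suspicious

if __name__ == "__main__":
    for name, probe in [("tcp connect", tcp_connect_time), ("http request", http_head_time)]:
        current, mean, suspicious = flag_anomaly(probe)
        status = "POSSIBLE TAMPERING" if suspicious else "ok"
        print(f"{name}: {current:.4f}s (baseline {mean:.4f}s) -> {status}")
```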

Posted on December 30, 2008 at 12:07 PM

James Bamford Interview on the NSA

Worth reading. One excerpt:

The problem is that NSA was never designed for what it’s doing. It was designed after World War II to prevent another surprise attack from another nation-state, particularly the Soviet Union. And from 1945 or ’46 until 1990 or ’91, that’s what its mission was. That’s what every piece of equipment, that’s what every person recruited to the agency, was supposed to do, practically—find out when and where and if the Russians were about to launch a nuclear attack. That’s what it spent 50 years being built for. And then all of a sudden the Soviet Union is not around anymore, and NSA’s got a new mission, and part of that is going after terrorists. And it’s just not a good fit. They missed the first World Trade Center bombing, they missed the attack on the U.S.S. Cole, they missed the attack on the U.S. embassies in Africa, they missed 9/11. There’s this string of failures because this agency was not really designed to do this. In the movies, they’d be catching terrorists all the time. But this isn’t the movies, this is reality.

The big difference here is that when they were focused on the Soviet Union, the Soviets communicated over dedicated lines. The army communicated over army channels, the navy communicated over navy channels, the diplomats communicated over foreign-office channels. These were all particular channels, particular frequencies, you knew where they were; the main problem was breaking encrypted communications. [The NSA] had listening posts ringing the Soviet Union, they had Russian linguists that were being pumped out from all these schools around the U.S.

Then the Cold War ends and everything changes. Now instead of a huge country that communicated all the time, you have individuals who hop from Kuala Lumpur to Nairobi or whatever, from continent to continent, from day to day. They don’t communicate [electronically] all the time—they communicate by meetings. [The NSA was] tapping Bin Laden’s phone for three years and never picked up on any of these terrorist incidents. And the [electronic] communications you do have are not on dedicated channels, they’re mixed in with the world communication network. First you’ve got to find out how to extract that from it, then you’ve got to find people who can understand the language, and then you’ve got to figure out the word code. You can’t use a Cray supercomputer to figure out if somebody’s saying they’re going to have a wedding next week whether it’s really going to be a wedding or a bombing.

So that’s the challenge facing the people there. So even though I’m critical about them for missing these things, I also try in the book to give an explanation as to why this is. It’s certainly not because the people are incompetent. It’s because the world has changed.

I think the problem is more serious than people realize. I talked to the people at Fort Gordon [in Georgia], which is the main listening post for the Middle East and North Africa. What was shocking to me was the people who were there were saying they didn’t have anybody [at the time] who spoke Pashtun. We’re at war in Afghanistan and the main language of the Taliban is Pashtun.

The answer here is to change our foreign policy so that we don’t have to depend on agencies like NSA to try to protect the country. You try to protect the country by having reasonable policies so that we won’t have to worry about terrorism so much. It’s just getting harder and harder to find them.

Also worth reading is his new book.

Posted on December 18, 2008 at 6:42 AM

Audit

As the first digital president, Barack Obama is learning the hard way how difficult it can be to maintain privacy in the information age. Earlier this year, his passport file was snooped by contract workers in the State Department. In October, someone at Immigration and Customs Enforcement leaked information about his aunt’s immigration status. And in November, Verizon employees peeked at his cell phone records.

What these three incidents illustrate is not that computerized databases are vulnerable to hacking—we already knew that, and anyway the perpetrators all had legitimate access to the systems they used—but how important audit is as a security measure.

When we think about security, we commonly think about preventive measures: locks to keep burglars out of our homes, bank safes to keep thieves from our money, and airport screeners to keep guns and bombs off airplanes. We might also think of detection and response measures: alarms that go off when burglars pick our locks or dynamite open bank safes, sky marshals on airplanes who respond when a hijacker manages to sneak a gun through airport security. But audit, figuring out who did what after the fact, is often far more important than any of those other three.

Most security against crime comes from audit. Of course we use locks and alarms, but we don’t wear bulletproof vests. The police provide for our safety by investigating crimes after the fact and prosecuting the guilty: that’s audit.

Audit helps ensure that people don’t abuse positions of trust. The cash register, for example, is basically an audit system. Cashiers have to handle the store’s money. To ensure they don’t skim from the till, the cash register keeps an audit trail of every transaction. The store owner can look at the register totals at the end of the day and make sure the amount of money in the register is the amount that should be there.

The same idea secures us from police abuse, too. The police have enormous power, including the ability to intrude into very intimate aspects of our life in order to solve crimes and keep the peace. This is generally a good thing, but to ensure that the police don’t abuse this power, we put in place systems of audit like the warrant process.

The whole NSA warrantless eavesdropping scandal was about this. Some misleadingly painted it as allowing the government to eavesdrop on foreign terrorists, but the government always had that authority. What the government wanted was to not have to submit a warrant, even after the fact, to a secret FISA court. What they wanted was to not be subject to audit.

That would be an incredibly bad idea. Law enforcement systems that don’t have good audit features designed in, or are exempt from this sort of audit-based oversight, are much more prone to abuse by those in power—because they can abuse the system without the risk of getting caught. Audit is essential as the NSA increases its domestic spying. And large police databases, like the FBI Next Generation Identification System, need to have strong audit features built in.

For computerized database systems like that—systems entrusted with other people’s information—audit is a very important security mechanism. Hospitals need to keep databases of very personal health information, and doctors and nurses need to be able to access that information quickly and easily. A good audit record of who accessed what when is the best way to ensure that those trusted with our medical information don’t abuse that trust. It’s the same with IRS records, credit reports, police databases, telephone records – anything personal that someone might want to peek at during the course of his job.
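
To make the mechanism concrete, here is a minimal sketch of such an audit record, with entirely hypothetical names: every read of a sensitive record is appended to a log before the data is returned, so an auditor can later reconstruct who looked at what, and when.

```python
import datetime
import json

AUDIT_LOG = "access_audit.log"   # append-only file reviewed by auditors

RECORDS = {
    "patient-42": {"name": "J. Doe", "notes": "example medical record"},
}

def read_record(record_id, user, reason):
    """Return a record, but only after logging who asked for it, and why."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "record": record_id,
        "reason": reason,
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return RECORDS[record_id]

# Every access leaves a trail, whether or not it was legitimate.
read_record("patient-42", user="nurse-jones", reason="scheduled visit")
```

A real system would also protect the log itself from tampering, but even this much is enough to answer the basic question an auditor asks: who looked at this record, and when?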

Which brings us back to President Obama. In each of those three examples, someone in a position of trust inappropriately accessed personal information. The difference between how they played out is due to differences in audit. The State Department’s audit worked best; they had alarm systems in place that alerted superiors when Obama’s passport files were accessed and who accessed them. Verizon’s audit mechanisms worked less well; they discovered the inappropriate account access and have narrowed the culprits down to a few people. Audit at Immigration and Customs Enforcement was far less effective; they still don’t know who accessed the information.

Large databases filled with personal information, whether managed by governments or corporations, are an essential aspect of the information age. And they each need to be accessed, for legitimate purposes, by thousands or tens of thousands of people. The only way to ensure those people don’t abuse the power they’re entrusted with is through audit. Without it, we will simply never know who’s peeking at what.

This essay first appeared on the Wall Street Journal website.

Posted on December 10, 2008 at 2:21 PM

NSA's Warrantless Eavesdropping Targets Innocent Americans

Remember when the U.S. government said it was only spying on terrorists? Anyone with any common sense knew it was lying—power without oversight is always abused—but even I didn’t think it was this bad:

Faulk says he and others in his section of the NSA facility at Fort Gordon routinely shared salacious or tantalizing phone calls that had been intercepted, alerting office mates to certain time codes of “cuts” that were available on each operator’s computer.

“Hey, check this out,” Faulk says he would be told, “there’s good phone sex or there’s some pillow talk, pull up this call, it’s really funny, go check it out. It would be some colonel making pillow talk and we would say, ‘Wow, this was crazy’,” Faulk told ABC News.

Warrants are a security device. They protect us against government abuse of power.

Posted on October 15, 2008 at 12:39 PM

The NSA Teams Up with the Chinese Government to Limit Internet Anonymity

Definitely strange bedfellows:

A United Nations agency is quietly drafting technical standards, proposed by the Chinese government, to define methods of tracing the original source of Internet communications and potentially curbing the ability of users to remain anonymous.

The U.S. National Security Agency is also participating in the “IP Traceback” drafting group, named Q6/17, which is meeting next week in Geneva to work on the traceback proposal. Members of Q6/17 have declined to release key documents, and meetings are closed to the public.

[…]

A second, apparently leaked ITU document offers surveillance and monitoring justifications that seem well-suited to repressive regimes:

A political opponent to a government publishes articles putting the government in an unfavorable light. The government, having a law against any opposition, tries to identify the source of the negative articles but the articles having been published via a proxy server, is unable to do so protecting the anonymity of the author.

This is being sold as a way to go after the bad guys, but it won’t help. Here’s Steve Bellovin on that issue:

First, very few attacks these days use spoofed source addresses; the real IP address already tells you where the attack is coming from. Second, in case of a DDoS attack, there are too many sources; you can’t do anything with the information. Third, the machine attacking you is almost certainly someone else’s hacked machine and tracking them down (and getting them to clean it up) is itself time-consuming.

TraceBack is most useful in monitoring the activities of large masses of people. But of course, that’s why the Chinese and the NSA are so interested in this proposal in the first place.

It’s hard to figure out what the endgame is; the U.N. doesn’t have the authority to impose Internet standards on anyone. In any case, this idea is counter to the U.N. Universal Declaration of Human Rights, Article 19: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” In the U.S., it’s counter to the First Amendment, which has long permitted anonymous speech. On the other hand, basic human and constitutional rights have been jettisoned left and right in the years after 9/11; why should this be any different?

But when the Chinese government and the NSA get together to enhance their ability to spy on us all, you have to wonder what’s gone wrong with the world.

Posted on September 18, 2008 at 6:34 AM

NSA Snooping on Cell Phone Calls

From CNet:

A recent article in the London Review of Books revealed that a number of private companies now sell off-the-shelf data-mining solutions to government spies interested in analyzing mobile-phone calling records and real-time location information. These companies include ThorpeGlen, VASTech, Kommlabs, and Aqsacom—all of which sell “passive probing” data-mining services to governments around the world.

ThorpeGlen, a U.K.-based firm, offers intelligence analysts a graphical interface to the company’s mobile-phone location and call-record data-mining software. Want to determine a suspect’s “community of interest”? Easy. Want to learn if a single person is swapping SIM cards or throwing away phones (yet still hanging out in the same physical location)? No problem.

In a Web demo (PDF) (mirrored here) to potential customers back in May, ThorpeGlen’s vice president of global sales showed off the company’s tools by mining a dataset of a single week’s worth of call data from 50 million users in Indonesia, which it has crunched in order to try and discover small anti-social groups that only call each other.
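
The demo itself is only described second-hand, but the underlying graph analysis is simple to sketch. This is my reconstruction of the general idea, not ThorpeGlen’s software: build a call graph from (caller, callee) records and report small connected components, i.e. clusters of people whose calls never leave the group.

```python
from collections import defaultdict

# Hypothetical call records: (caller, callee) pairs.
CALLS = [
    ("A", "B"), ("B", "C"), ("C", "A"),              # a closed three-person cluster
    ("D", "E"),                                      # a closed pair
    ("F", "G"), ("G", "H"), ("H", "I"), ("I", "J"),  # a larger, ordinary cluster
]

def connected_components(edges):
    """Group subscribers into clusters connected by at least one call."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            n = stack.pop()
            if n in component:
                continue
            component.add(n)
            stack.extend(graph[n] - component)
        seen |= component
        components.append(component)
    return components

# Flag small, self-contained clusters -- the "groups that only call each other."
for group in connected_components(CALLS):
    if 2 <= len(group) <= 4:
        print("small closed group:", sorted(group))
```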

Posted on September 17, 2008 at 12:49 PM

NSA Forms

They’re all here:

Via a Freedom of Information Act request (which involved paying $700 and waiting almost 4 years), The Memory Hole has obtained blank copies of most forms used by the National Security Agency.

Most are not very interesting, but I agree with Russ Kick:

They range from the exotic to the pedestrian, but even the most prosaic form shines some light into the workings of No Such Agency.

Posted on August 6, 2008 at 7:26 AM

Man-in-the-Middle Attacks

Last week’s dramatic rescue of 15 hostages held by the guerrilla organization FARC was the result of months of intricate deception on the part of the Colombian government. At the center was a classic man-in-the-middle attack.

In a man-in-the-middle attack, the attacker inserts himself between two communicating parties. Both believe they’re talking to each other, and the attacker can delete or modify the communications at will.

The Wall Street Journal reported how this gambit played out in Colombia:

“The plan had a chance of working because, for months, in an operation one army officer likened to a ‘broken telephone,’ military intelligence had been able to convince Ms. Betancourt’s captor, Gerardo Aguilar, a guerrilla known as ‘Cesar,’ that he was communicating with his top bosses in the guerrillas’ seven-man secretariat. Army intelligence convinced top guerrilla leaders that they were talking to Cesar. In reality, both were talking to army intelligence.”

This ploy worked because Cesar and his guerrilla bosses didn’t know one another well. They didn’t recognize one another’s voices, and didn’t have a friendship or shared history that could have tipped them off about the ruse. Man-in-the-middle is defeated by context, and the FARC guerrillas didn’t have any.

And that’s why man-in-the-middle, abbreviated MITM in the computer-security community, is such a problem online: Internet communication is often stripped of any context. There’s no way to recognize someone’s face. There’s no way to recognize someone’s voice. When you receive an e-mail purporting to come from a person or organization, you have no idea who actually sent it. When you visit a website, you have no idea if you’re really visiting that website. We all like to pretend that we know who we’re communicating with—and for the most part, of course, there isn’t any attacker inserting himself into our communications—but in reality, we don’t. And there are lots of hacker tools that exploit this unjustified trust, and implement MITM attacks.

Even with context, it’s still possible for MITM to fool both sides—because electronic communications are often intermittent. Imagine that one of the FARC guerrillas became suspicious about who he was talking to. So he asks a question about their shared history as a test: “What did we have for dinner that time last year?” or something like that. On the telephone, the attacker wouldn’t be able to answer quickly, so his ruse would be discovered. But e-mail conversation isn’t synchronous. The attacker could simply pass that question through to the other end of the communications, and when he got the answer back, he would be able to reply.

This is the way MITM attacks work against web-based financial systems. A bank demands authentication from the user: a password, a one-time code from a token or whatever. The attacker sitting in the middle receives the request from the bank and passes it to the user. The user responds to the attacker, who passes that response to the bank. Now the bank assumes it is talking to the legitimate user, and the attacker is free to send transactions directly to the bank. This kind of attack completely bypasses any two-factor authentication mechanisms, and is becoming a more popular identity-theft tactic.
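
To see why the one-time code doesn’t help, here is a toy simulation of that relay. Nothing here is a real banking protocol; the token secret, challenge format, and names are all invented for illustration.

```python
import hashlib
import secrets

TOKEN_SECRET = b"user-token-secret"   # invented secret shared by the bank and the user's token

def token_response(challenge):
    """What the legitimate user's token computes for a given challenge."""
    return hashlib.sha256(TOKEN_SECRET + challenge.encode()).hexdigest()[:6]

class Bank:
    def issue_challenge(self):
        self.challenge = secrets.token_hex(4)
        return self.challenge

    def verify(self, response):
        return response == token_response(self.challenge)

bank = Bank()

# The attacker opens a session with the bank and simply forwards messages.
challenge = bank.issue_challenge()                  # bank -> attacker
relayed_challenge = challenge                       # attacker -> real user, unchanged
users_response = token_response(relayed_challenge)  # real user -> attacker
print("bank accepts the attacker's session:", bank.verify(users_response))  # True
```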

There are cryptographic solutions to MITM attacks, and there are secure web protocols that implement them. Many of them require shared secrets, though, making them useful only in situations where people already know and trust one another.

The NSA-designed STU-III and STE secure telephones solve the MITM problem by embedding the identity of each phone together with its key. (The NSA creates all keys and is trusted by everyone, so this works.) When two phones talk to each other securely, they exchange keys and display the other phone’s identity on a screen. Because the phone is in a secure location, the user now knows who he is talking to, and if the phone displays another organization—as it would if there were a MITM attack in progress—he should hang up.

Zfone, a secure VoIP system, protects against MITM attacks with a short authentication string. After two Zfone terminals exchange keys, both computers display a four-character string. The users are supposed to manually verify that both strings are the same—“my screen says 5C19; what does yours say?”—to ensure that the phones are communicating directly with each other and not with an MITM. The AT&T TSD-3600 worked similarly.
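
As a rough sketch of how such a short authentication string can be derived (a simplification, not the actual Zfone/ZRTP construction): both endpoints hash the key material they negotiated and read the first few characters of the digest aloud. A man in the middle would have negotiated different keys with each side, so the two strings would not match.

```python
import hashlib

def short_auth_string(key_material, length=4):
    """Derive a short string both parties can read aloud and compare."""
    return hashlib.sha256(key_material).hexdigest()[:length].upper()

# Direct call: both ends derived the same keys, so the strings match.
session_keys = b"keys negotiated directly between the two phones"
print(short_auth_string(session_keys), "==", short_auth_string(session_keys))

# Intercepted call: the attacker negotiated separate keys with each side,
# so the strings the two users read to each other will not match.
keys_with_alice = b"keys the attacker negotiated with Alice"
keys_with_bob = b"keys the attacker negotiated with Bob"
print(short_auth_string(keys_with_alice), "!=", short_auth_string(keys_with_bob))
```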

This sort of protection is embedded in SSL, although no one uses it. As it is normally used, SSL provides an encrypted communications link to whoever is at the other end: bank and phishing site alike. And the better phishing sites create valid SSL connections, so as to more effectively fool users. But if the user wanted to, he could manually check the SSL certificate to see if it was issued to “National Bank of Trustworthiness” or “Two Guys With a Computer in Nigeria.”
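
For the curious, here is a minimal sketch of that manual check using Python’s standard ssl module (the hostname is a placeholder): connect to the site, pull its certificate, and print who it was actually issued to, so the name can be compared against the bank you think you’re talking to.

```python
import socket
import ssl

HOST = "www.example.com"   # placeholder for the site you think is your bank

context = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

# getpeercert() returns the subject and issuer as nested tuples of (field, value) pairs.
subject = dict(pair[0] for pair in cert["subject"])
issuer = dict(pair[0] for pair in cert["issuer"])
print("Issued to:", subject.get("organizationName") or subject.get("commonName"))
print("Issued by:", issuer.get("organizationName") or issuer.get("commonName"))
```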

No one does, though, because you have to both remember and be willing to do the work. (The browsers could make this easier if they wanted to, but they don’t seem to want to.) In the real world, you can easily tell a branch of your bank from a money changer on a street corner. But on the internet, a phishing site can be easily made to look like your bank’s legitimate website. Any method of telling the two apart takes work. And that’s the first step to fooling you with a MITM attack.

Man-in-the-middle isn’t new, and it doesn’t have to be technological. But the internet makes the attacks easier and more powerful, and that’s not going to change anytime soon.

This essay originally appeared on Wired.com.

Posted on July 15, 2008 at 6:47 AM

