Entries Tagged "NSA"


The NSA Teams Up with the Chinese Government to Limit Internet Anonymity

Definitely strange bedfellows:

A United Nations agency is quietly drafting technical standards, proposed by the Chinese government, to define methods of tracing the original source of Internet communications and potentially curbing the ability of users to remain anonymous.

The U.S. National Security Agency is also participating in the “IP Traceback” drafting group, named Q6/17, which is meeting next week in Geneva to work on the traceback proposal. Members of Q6/17 have declined to release key documents, and meetings are closed to the public.

[…]

A second, apparently leaked ITU document offers surveillance and monitoring justifications that seem well-suited to repressive regimes:

A political opponent to a government publishes articles putting the government in an unfavorable light. The government, having a law against any opposition, tries to identify the source of the negative articles but the articles having been published via a proxy server, is unable to do so protecting the anonymity of the author.

This is being sold as a way to go after the bad guys, but it won’t help. Here’s Steve Bellovin on that issue:

First, very few attacks these days use spoofed source addresses; the real IP address already tells you where the attack is coming from. Second, in case of a DDoS attack, there are too many sources; you can’t do anything with the information. Third, the machine attacking you is almost certainly someone else’s hacked machine and tracking them down (and getting them to clean it up) is itself time-consuming.

Traceback is most useful in monitoring the activities of large masses of people. But of course, that’s why the Chinese and the NSA are so interested in this proposal in the first place.

It’s hard to figure out what the endgame is; the U.N. doesn’t have the authority to impose Internet standards on anyone. In any case, this idea is counter to the U.N. Universal Declaration of Human Rights, Article 19: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” In the U.S., it’s counter to the First Amendment, which has long permitted anonymous speech. On the other hand, basic human and constitutional rights have been jettisoned left and right in the years after 9/11; why should this be any different?

But when the Chinese government and the NSA get together to enhance their ability to spy on us all, you have to wonder what’s gone wrong with the world.

Posted on September 18, 2008 at 6:34 AM

NSA Snooping on Cell Phone Calls

From CNet:

A recent article in the London Review of Books revealed that a number of private companies now sell off-the-shelf data-mining solutions to government spies interested in analyzing mobile-phone calling records and real-time location information. These companies include ThorpeGlen, VASTech, Kommlabs, and Aqsacom—all of which sell “passive probing” data-mining services to governments around the world.

ThorpeGlen, a U.K.-based firm, offers intelligence analysts a graphical interface to the company’s mobile-phone location and call-record data-mining software. Want to determine a suspect’s “community of interest”? Easy. Want to learn if a single person is swapping SIM cards or throwing away phones (yet still hanging out in the same physical location)? No problem.

In a Web demo (PDF) (mirrored here) to potential customers back in May, ThorpeGlen’s vice president of global sales showed off the company’s tools by mining a dataset of a single week’s worth of call data from 50 million users in Indonesia, which it has crunched in order to try and discover small anti-social groups that only call each other.
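
The basic analysis isn’t mysterious. Here’s a toy sketch of the “small groups that only call each other” idea—not ThorpeGlen’s actual algorithm, just an illustration on made-up call records: build a call graph and flag small connected components, i.e., sets of phones that talk only to one another.

    # Toy sketch of "closed calling group" analysis -- an illustration of the
    # idea described above, not ThorpeGlen's algorithm.  Build a call graph
    # and flag small connected components: phones that only call each other.
    from collections import defaultdict

    def closed_groups(call_records, max_size=4):
        """call_records: iterable of (caller, callee) pairs."""
        graph = defaultdict(set)
        for a, b in call_records:
            graph[a].add(b)
            graph[b].add(a)

        seen, groups = set(), []
        for start in graph:
            if start in seen:
                continue
            component, queue = {start}, [start]
            while queue:                      # traversal collects one connected component
                node = queue.pop()
                for peer in graph[node] - component:
                    component.add(peer)
                    queue.append(peer)
            seen |= component
            if 2 <= len(component) <= max_size:
                groups.append(component)      # small and isolated: flag it
        return groups

    # Made-up records: A/B/C only call each other; D through I form a larger web.
    calls = [("A", "B"), ("B", "C"), ("C", "A"),
             ("D", "E"), ("E", "F"), ("F", "G"), ("G", "H"), ("H", "I")]
    print(closed_groups(calls))               # -> [{'A', 'B', 'C'}]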

Posted on September 17, 2008 at 12:49 PM

NSA Forms

They’re all here:

Via a Freedom of Information Act request (which involved paying $700 and waiting almost 4 years), The Memory Hole has obtained blank copies of most forms used by the National Security Agency.

Most are not very interesting, but I agree with Russ Kick:

They range from the exotic to the pedestrian, but even the most prosaic form shines some light into the workings of No Such Agency.

Posted on August 6, 2008 at 7:26 AM

Man-in-the-Middle Attacks

Last week’s dramatic rescue of 15 hostages held by the guerrilla organization FARC was the result of months of intricate deception on the part of the Colombian government. At the center was a classic man-in-the-middle attack.

In a man-in-the-middle attack, the attacker inserts himself between two communicating parties. Both believe they’re talking to each other, and the attacker can delete or modify the communications at will.

The Wall Street Journal reported how this gambit played out in Colombia:

“The plan had a chance of working because, for months, in an operation one army officer likened to a ‘broken telephone,’ military intelligence had been able to convince Ms. Betancourt’s captor, Gerardo Aguilar, a guerrilla known as ‘Cesar,’ that he was communicating with his top bosses in the guerrillas’ seven-man secretariat. Army intelligence convinced top guerrilla leaders that they were talking to Cesar. In reality, both were talking to army intelligence.”

This ploy worked because Cesar and his guerrilla bosses didn’t know one another well. They didn’t recognize one another’s voices, and didn’t have a friendship or shared history that could have tipped them off about the ruse. Man-in-the-middle is defeated by context, and the FARC guerrillas didn’t have any.

And that’s why man-in-the-middle, abbreviated MITM in the computer-security community, is such a problem online: Internet communication is often stripped of any context. There’s no way to recognize someone’s face. There’s no way to recognize someone’s voice. When you receive an e-mail purporting to come from a person or organization, you have no idea who actually sent it. When you visit a website, you have no idea if you’re really visiting that website. We all like to pretend that we know who we’re communicating with—and for the most part, of course, there isn’t any attacker inserting himself into our communications—but in reality, we don’t. And there are lots of hacker tools that exploit this unjustified trust, and implement MITM attacks.

Even with context, it’s still possible for MITM to fool both sides—because electronic communications are often intermittent. Imagine that one of the FARC guerrillas became suspicious about who he was talking to. So he asks a question about their shared history as a test: “What did we have for dinner that time last year?” or something like that. On the telephone, the attacker wouldn’t be able to answer quickly, so his ruse would be discovered. But e-mail conversation isn’t synchronous. The attacker could simply pass that question through to the other end of the communications, and when he got the answer back, he would be able to reply.

This is the way MITM attacks work against web-based financial systems. A bank demands authentication from the user: a password, a one-time code from a token or whatever. The attacker sitting in the middle receives the request from the bank and passes it to the user. The user responds to the attacker, who passes that response to the bank. Now the bank assumes it is talking to the legitimate user, and the attacker is free to send transactions directly to the bank. This kind of attack completely bypasses any two-factor authentication mechanisms, and is becoming a more popular identity-theft tactic.
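
Here’s a minimal simulation of that relay—no network, no real bank, every name made up—just to make the message flow explicit: the attacker faithfully forwards the one-time code, and it ends up authenticating the attacker’s session rather than the user’s.

    # Minimal, self-contained simulation of the relay described above.
    # No real bank, no network; every name is made up.  The point: a
    # faithfully relayed one-time code authenticates the attacker's session.
    import secrets

    class Bank:
        def __init__(self):
            self.expected_code = None
            self.session_authenticated = False

        def start_login(self, user):
            # The bank generates a one-time code; in reality it reaches the
            # user out of band (hardware token, SMS).  Handing it to the user
            # object directly stands in for that channel.
            self.expected_code = f"{secrets.randbelow(10**6):06d}"
            user.token_code = self.expected_code

        def finish_login(self, code):
            self.session_authenticated = (code == self.expected_code)

        def transfer(self, amount, destination):
            assert self.session_authenticated, "not authenticated"
            return f"transferred {amount} to {destination}"

    class User:
        token_code = None
        def enter_code_on_page(self):
            # The user types the code into what looks like the bank's login page.
            return self.token_code

    bank, user = Bank(), User()

    # The attacker opens a session with the real bank...
    bank.start_login(user)
    # ...shows the user a fake login page and relays the user's one-time code...
    bank.finish_login(user.enter_code_on_page())
    # ...and now holds an authenticated session, two-factor login notwithstanding.
    print(bank.transfer("$10,000", "attacker's account"))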

There are cryptographic solutions to MITM attacks, and there are secure web protocols that implement them. Many of them require shared secrets, though, making them useful only in situations where people already know and trust one another.

The NSA-designed STU-III and STE secure telephones solve the MITM problem by embedding the identity of each phone together with its key. (The NSA creates all keys and is trusted by everyone, so this works.) When two phones talk to each other securely, they exchange keys and display the other phone’s identity on a screen. Because the phone is in a secure location, the user now knows who he is talking to, and if the phone displays another organization—as it would if there were a MITM attack in progress—he should hang up.

Zfone, a secure VoIP system, protects against MITM attacks with a short authentication string. After two Zfone terminals exchange keys, both computers display a four-character string. The users are supposed to manually verify that both strings are the same—”my screen says 5C19; what does yours say?”—to ensure that the phones are communicating directly with each other and not with an MITM. The AT&T TSD-3600 worked similarly.
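
The short-authentication-string idea is simple to sketch. This isn’t Zfone’s actual derivation—ZRTP specifies its own—but it shows why the check works: an MITM necessarily holds a different key with each endpoint, so the two strings the humans read aloud won’t match.

    # Sketch of the short-authentication-string (SAS) idea; not Zfone's exact
    # derivation.  An MITM holds a *different* key with each endpoint, so the
    # strings the two people read to each other won't match.
    import hashlib, os

    def sas(session_key: bytes) -> str:
        # Hash the negotiated key material and display the first two bytes
        # as four hex characters, like the "5C19" example above.
        return hashlib.sha256(session_key).hexdigest()[:4].upper()

    # No attacker: both ends derive the same key, so the strings match.
    key = os.urandom(32)
    print(sas(key), sas(key))

    # With an MITM, Alice shares one key with the attacker and Bob another,
    # so the two displayed strings will almost certainly differ.
    key_alice_mitm, key_mitm_bob = os.urandom(32), os.urandom(32)
    print(sas(key_alice_mitm), sas(key_mitm_bob))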

This sort of protection is embedded in SSL, although no one uses it. As it is normally used, SSL provides an encrypted communications link to whoever is at the other end: bank and phishing site alike. And the better phishing sites create valid SSL connections, so as to more effectively fool users. But if the user wanted to, he could manually check the SSL certificate to see if it was issued to “National Bank of Trustworthiness” or “Two Guys With a Computer in Nigeria.”
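
You can do that check programmatically as well as by clicking through browser dialogs. A small sketch using Python’s standard ssl module—the hostname is just a placeholder:

    # Sketch: inspect who a server's certificate was issued to, using only
    # Python's standard library.  The hostname is a placeholder.
    import socket, ssl

    def issued_to(hostname, port=443):
        context = ssl.create_default_context()   # verifies the chain against system CAs
        with socket.create_connection((hostname, port)) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        # The subject is a tuple of relative distinguished names; pull out the
        # fields a human would read in the browser's certificate dialog.
        subject = dict(x[0] for x in cert["subject"])
        return subject.get("organizationName", subject.get("commonName", "?"))

    print(issued_to("www.example.com"))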

No one does, though, because you have to both remember and be willing to do the work. (The browsers could make this easier if they wanted to, but they don’t seem to want to.) In the real world, you can easily tell a branch of your bank from a money changer on a street corner. But on the internet, a phishing site can be easily made to look like your bank’s legitimate website. Any method of telling the two apart takes work. And that’s the first step to fooling you with a MITM attack.

Man-in-the-middle isn’t new, and it doesn’t have to be technological. But the internet makes the attacks easier and more powerful, and that’s not going to change anytime soon.

This essay originally appeared on Wired.com.

Posted on July 15, 2008 at 6:47 AM

Random Number Bug in Debian Linux

This is a big deal:

On May 13th, 2008 the Debian project announced that Luciano Bello found an interesting vulnerability in the OpenSSL package they were distributing. The bug in question was caused by the removal of the following line of code from md_rand.c

	MD_Update(&m,buf,j);
	[ .. ]
	MD_Update(&m,buf,j); /* purify complains */

These lines were removed because they caused the Valgrind and Purify tools to produce warnings about the use of uninitialized data in any code that was linked to OpenSSL. You can see one such report to the OpenSSL team here. Removing this code has the side effect of crippling the seeding process for the OpenSSL PRNG. Instead of mixing in random data for the initial seed, the only “random” value that was used was the current process ID. On the Linux platform, the default maximum process ID is 32,768, resulting in a very small number of seed values being used for all PRNG operations.
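
To see why a process-ID-only seed is fatal, consider this toy model—not OpenSSL’s actual PRNG, just an illustration: with at most 32,768 possible seeds, an attacker can simply enumerate every key the generator could ever have produced.

    # Toy model of the Debian failure (not OpenSSL's actual PRNG): if the only
    # entropy mixed in is the process ID, there are at most 32,768 possible
    # output streams, and an attacker can enumerate them all.
    import hashlib

    MAX_PID = 32768

    def toy_keygen(pid: int) -> bytes:
        # Stand-in for "generate a key with a PRNG seeded only by the PID".
        return hashlib.sha256(b"seed:" + str(pid).encode()).digest()[:16]

    # The "victim" generates a key on some unknown PID...
    victim_key = toy_keygen(12345)

    # ...and the attacker recovers it by brute force over every possible PID.
    recovered = next(pid for pid in range(1, MAX_PID + 1)
                     if toy_keygen(pid) == victim_key)
    print("victim key reproduced from PID", recovered)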

More info, from Debian, here. And from the hacker community here. Seems that the bug was introduced in September 2006.

More analysis here. And a cartoon.

Random numbers are used everywhere in cryptography, for both short- and long-term security. And, as we’ve seen here, security flaws in random number generators are really easy to accidentally create and really hard to discover after the fact. Back when the NSA was routinely weakening commercial cryptography, their favorite technique was reducing the entropy of the random number generator.

Posted on May 19, 2008 at 6:07 AM

Dual-Use Technologies and the Equities Issue

On April 27, 2007, Estonia was attacked in cyberspace. Following a diplomatic incident with Russia about the relocation of a Soviet World War II memorial, the networks of many Estonian organizations, including the Estonian parliament, banks, ministries, newspapers and broadcasters, were attacked and—in many cases—shut down. Estonia was quick to blame Russia, which was equally quick to deny any involvement.

It was hyped as the first cyberwar: Russia attacking Estonia in cyberspace. But nearly a year later, evidence that the Russian government was involved in the denial-of-service attacks still hasn’t emerged. Though Russian hackers were indisputably the major instigators of the attack, the only individuals positively identified have been young ethnic Russians living inside Estonia, who were pissed off over the statue incident.

You know you’ve got a problem when you can’t tell a hostile attack by another nation from bored kids with an axe to grind.

Separating cyberwar, cyberterrorism and cybercrime isn’t easy; these days you need a scorecard to tell the difference. It’s not just that it’s hard to trace people in cyberspace, it’s that military and civilian attacks—and defenses—look the same.

The traditional term for technology the military shares with civilians is “dual use.” Unlike hand grenades and tanks and missile targeting systems, dual-use technologies have both military and civilian applications. Dual-use technologies used to be exceptions; even things you’d expect to be dual use, like radar systems and toilets, were designed differently for the military. But today, almost all information technology is dual use. We both use the same operating systems, the same networking protocols, the same applications, and even the same security software.

And attack technologies are the same. The recent spurt of targeted hacks against U.S. military networks, commonly attributed to China, exploits the same vulnerabilities and uses the same techniques as criminal attacks against corporate networks. Internet worms make the jump to classified military networks in less than 24 hours, even if those networks are physically separate. The Navy Cyber Defense Operations Command uses the same tools against the same threats as any large corporation.

Because attackers and defenders use the same IT technology, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the “equities issue,” and it can be summarized as follows: When a military discovers a vulnerability in a dual-use technology, they can do one of two things. They can alert the manufacturer and fix the vulnerability, thereby protecting both the good guys and the bad guys. Or they can keep quiet about the vulnerability and not tell anyone, thereby leaving the good guys insecure but also leaving the bad guys insecure.

The equities issue has long been hotly debated inside the NSA. Basically, the NSA has two roles: eavesdrop on their stuff, and protect our stuff. When both sides use the same stuff, the agency has to decide whether to exploit vulnerabilities to eavesdrop on their stuff or close the same vulnerabilities to protect our stuff.

In the 1980s and before, the tendency of the NSA was to keep vulnerabilities to themselves. In the 1990s, the tide shifted, and the NSA was starting to open up and help us all improve our security defense. But after the attacks of 9/11, the NSA shifted back to the attack: vulnerabilities were to be hoarded in secret. Slowly, things in the U.S. are shifting back again.

So now we’re seeing the NSA helping secure Windows Vista and releasing their own version of Linux. The DHS, meanwhile, is funding a project to secure popular open source software packages, and across the Atlantic the UK’s GCHQ is finding bugs in PGPDisk and reporting them back to the company. (NSA is rumored to be doing the same thing with BitLocker.)

I’m in favor of this trend, because my security improves for free. Whenever the NSA finds a security problem and gets the vendor to fix it, our security gets better. It’s a side-benefit of dual-use technologies.

But I want governments to do more. I want them to use their buying power to improve my security. I want them to offer countrywide contracts for software, both security and non-security, that have explicit security requirements. If these contracts are big enough, companies will work to modify their products to meet those requirements. And again, we all benefit from the security improvements.

The only example of this model I know about is a U.S. government-wide procurement competition for full-disk encryption, but this can certainly be done with firewalls, intrusion detection systems, databases, networking hardware, even operating systems.

When it comes to IT technologies, the equities issue should be a no-brainer. The good uses of our common hardware, software, operating systems, network protocols, and everything else vastly outweigh the bad uses. It’s time that the government used its immense knowledge and experience, as well as its buying power, to improve cybersecurity for all of us.

This essay originally appeared on Wired.com.

Posted on May 6, 2008 at 5:17 AM

NSA’s Domestic Spying

This article from The Wall Street Journal outlines how the NSA is increasingly engaging in domestic surveillance, data collection, and data mining. The result is essentially the same as Total Information Awareness.

According to current and former intelligence officials, the spy agency now monitors huge volumes of records of domestic emails and Internet searches as well as bank transfers, credit-card transactions, travel and telephone records. The NSA receives this so-called “transactional” data from other agencies or private companies, and its sophisticated software programs analyze the various transactions for suspicious patterns. Then they spit out leads to be explored by counterterrorism programs across the U.S. government, such as the NSA’s own Terrorist Surveillance Program, formed to intercept phone calls and emails between the U.S. and overseas without a judge’s approval when a link to al Qaeda is suspected.

[…]

Two former officials familiar with the data-sifting efforts said they work by starting with some sort of lead, like a phone number or Internet address. In partnership with the FBI, the systems then can track all domestic and foreign transactions of people associated with that item—and then the people who associated with them, and so on, casting a gradually wider net. An intelligence official described more of a rapid-response effect: If a person suspected of terrorist connections is believed to be in a U.S. city—for instance, Detroit, a community with a high concentration of Muslim Americans—the government’s spy systems may be directed to collect and analyze all electronic communications into and out of the city.

The haul can include records of phone calls, email headers and destinations, data on financial transactions and records of Internet browsing. The system also would collect information about other people, including those in the U.S., who communicated with people in Detroit.

The information doesn’t generally include the contents of conversations or emails. But it can give such transactional information as a cellphone’s location, whom a person is calling, and what Web sites he or she is visiting. For an email, the data haul can include the identities of the sender and recipient and the subject line, but not the content of the message.

Intelligence agencies have used administrative subpoenas issued by the FBI—which don’t need a judge’s signature—to collect and analyze such data, current and former intelligence officials said. If that data provided “reasonable suspicion” that a person, whether foreign or from the U.S., was linked to al Qaeda, intelligence officers could eavesdrop under the NSA’s Terrorist Surveillance Program.

[…]

The NSA uses its own high-powered version of social-network analysis to search for possible new patterns and links to terrorism. The Pentagon’s experimental Total Information Awareness program, later renamed Terrorism Information Awareness, was an early research effort on the same concept, designed to bring together and analyze as much and as many varied kinds of data as possible. Congress eliminated funding for the program in 2003 before it began operating. But it permitted some of the research to continue and TIA technology to be used for foreign surveillance.

Some of it was shifted to the NSA—which also is funded by the Pentagon—and put in the so-called black budget, where it would receive less scrutiny and bolster other data-sifting efforts, current and former intelligence officials said. “When it got taken apart, it didn’t get thrown away,” says a former top government official familiar with the TIA program.

Two current officials also said the NSA’s current combination of programs now largely mirrors the former TIA project. But the NSA offers less privacy protection. TIA developers researched ways to limit the use of the system for broad searches of individuals’ data, such as requiring intelligence officers to get leads from other sources first. The NSA effort lacks those controls, as well as controls that it developed in the 1990s for an earlier data-sweeping attempt.
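
Mechanically, the “start with a lead and expand outward” process the Journal describes is just a breadth-first walk over a contact graph. A toy sketch, with made-up data and a made-up two-hop limit:

    # Toy sketch of the "start with a lead and expand outward" link analysis
    # described in the excerpt above -- a breadth-first walk over a contact
    # graph.  The data and the two-hop limit are made up for illustration.
    from collections import deque

    def expand_lead(contacts, lead, max_hops=2):
        """contacts: dict mapping each identifier to the identifiers it contacted."""
        reached = {lead: 0}
        queue = deque([lead])
        while queue:
            node = queue.popleft()
            if reached[node] == max_hops:
                continue
            for peer in contacts.get(node, ()):
                if peer not in reached:
                    reached[peer] = reached[node] + 1
                    queue.append(peer)
        return reached   # identifier -> number of hops from the original lead

    contacts = {
        "+1-555-0100": ["+1-555-0101", "+1-555-0102"],
        "+1-555-0101": ["+1-555-0103"],
        "+1-555-0103": ["+1-555-0104"],
    }
    print(expand_lead(contacts, "+1-555-0100"))
    # {'+1-555-0100': 0, '+1-555-0101': 1, '+1-555-0102': 1, '+1-555-0103': 2}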

Barry Steinhardt of the ACLU comments:

I mean, when we warn about a “surveillance society,” this is what we’re talking about. This is it, this is the ballgame. Mass data from a wide variety of sources—including the private sector—is being collected and scanned by a secretive military spy agency. This represents nothing less than a major change in American life—and unless stopped the consequences of this system for everybody will grow in magnitude along with the rivers of data that are collected about each of us—and that’s more and more every day.

More commentary.

Posted on March 26, 2008 at 6:02 AM

