Entries Tagged "privacy"


AT&T Assisting NSA Surveillance

Interesting details emerging from EFF’s lawsuit:

According to a statement released by Klein’s attorney, an NSA agent showed up at the San Francisco switching center in 2002 to interview a management-level technician for a special job. In January 2003, Klein observed a new room being built adjacent to the room housing AT&T’s #4ESS switching equipment, which is responsible for routing long distance and international calls.

“I learned that the person whom the NSA interviewed for the secret job was the person working to install equipment in this room,” Klein wrote. “The regular technician work force was not allowed in the room.”

Klein’s job eventually included connecting internet circuits to a splitting cabinet that led to the secret room. During the course of that work, he learned from a co-worker that similar cabinets were being installed in other cities, including Seattle, San Jose, Los Angeles and San Diego.

“While doing my job, I learned that fiber optic cables from the secret room were tapping into the Worldnet (AT&T’s internet service) circuits by splitting off a portion of the light signal,” Klein wrote.

The split circuits included traffic from peering links connecting to other internet backbone providers, meaning that AT&T was also diverting traffic routed from its network to or from other domestic and international providers, according to Klein’s statement.

The secret room also included data-mining equipment called a Narus STA 6400, “known to be used particularly by government intelligence agencies because of its ability to sift through large amounts of data looking for preprogrammed targets,” according to Klein’s statement.

Narus, whose website touts AT&T as a client, sells software to help internet service providers and telecoms monitor and manage their networks, look for intrusions, and wiretap phone calls as mandated by federal law.

More about what the Narus box can do.

EDITED TO ADD (4/14): More about Narus.

Posted on April 14, 2006 at 7:58 AM

Military Secrets for Sale in Afghanistan

Stolen goods are being sold in Afghan bazaars, including hard drives filled with classified data.

A reporter recently obtained several drives at the bazaar that contained documents marked “Secret.” The contents included documents that were potentially embarrassing to Pakistan, a U.S. ally, presentations that named suspected militants targeted for “kill or capture” and discussions of U.S. efforts to “remove” or “marginalize” Afghan government officials whom the military considered “problem makers.”

The drives also included deployment rosters and other documents that identified nearly 700 U.S. service members and their Social Security numbers, information that identity thieves could use to open credit card accounts in soldiers’ names.

EDITED TO ADD (4/12): NPR story.

Posted on April 12, 2006 at 6:25 AM

VOIP Encryption

There are basically four ways to eavesdrop on a telephone call.

One, you can listen in on another phone extension. This is the method preferred by siblings everywhere. If you have the right access, it’s the easiest. While it doesn’t work for cell phones, cordless phones are vulnerable to a variant of this attack: A radio receiver set to the right frequency can act as another extension.

Two, you can attach some eavesdropping equipment to the wire with a pair of alligator clips. It takes some expertise, but you can do it anywhere along the phone line’s path—even outside the home. This used to be the way the police eavesdropped on your phone line. These days it’s probably most often used by criminals. This method doesn’t work for cell phones, either.

Three, you can eavesdrop at the telephone switch. Modern phone equipment includes the ability for someone to listen in this way. Currently, this is the preferred police method. It works for both land lines and cell phones. You need the right access, but if you can get it, this is probably the most comfortable way to eavesdrop on a particular person.

Four, you can tap the main trunk lines, eavesdrop on the microwave or satellite phone links, etc. It’s hard to eavesdrop on one particular person this way, but it’s easy to listen in on a large chunk of telephone calls. This is the sort of big-budget surveillance that organizations like the National Security Agency do best. They’ve even been known to use submarines to tap undersea phone cables.

That’s basically the entire threat model for traditional phone calls. And when most people think about IP telephony—voice over internet protocol, or VOIP—that’s the threat model they probably have in their heads.

Unfortunately, phone calls from your computer are fundamentally different from phone calls from your telephone. Internet telephony’s threat model is much closer to the threat model for IP-networked computers than the threat model for telephony.

And we already know the threat model for IP. Data packets can be eavesdropped on anywhere along the transmission path. Data packets can be intercepted in the corporate network, by the internet service provider and along the backbone. They can be eavesdropped on by the people or organizations that own those computers, and they can be eavesdropped on by anyone who has successfully hacked into those computers. They can be vacuumed up by nosy hackers, criminals, competitors and governments.

It’s comparable to threat No. 3 above, but with the scope vastly expanded.

My greatest worry is the criminal attacks. We already have seen how clever criminals have become over the past several years at stealing account information and personal data. I can imagine them eavesdropping on attorneys, looking for information with which to blackmail people. I can imagine them eavesdropping on bankers, looking for inside information with which to make stock purchases. I can imagine them stealing account information, hijacking telephone calls, committing identity theft. On the business side, I can see them engaging in industrial espionage and stealing trade secrets. In short, I can imagine them doing all the things they could never have done with the traditional telephone network.

This is why encryption for VOIP is so important. VOIP calls are vulnerable to a variety of threats that traditional telephone calls are not. Encryption is one of the essential security technologies for computer data, and it will go a long way toward securing VOIP.
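
To make that concrete, here is a minimal sketch of what encrypting the voice path can look like: each audio frame is encrypted and authenticated before it leaves the endpoint, so a tap anywhere along the network path sees only ciphertext. This is an illustrative toy, not how any particular product works; the pre-shared key and packet format are assumptions for the example, and real systems such as Zfone negotiate keys between the endpoints rather than starting from a shared secret.

```python
# A minimal, illustrative sketch (assumed names, not any product's protocol):
# each voice frame is encrypted and authenticated with AES-GCM before it is
# sent, so an eavesdropper on the wire, at the ISP, or on the backbone sees
# only ciphertext. Assumes the endpoints already share a key; real systems
# such as Zfone negotiate one between the endpoints first.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # shared secret between the two endpoints
cipher = AESGCM(key)

def encrypt_frame(frame: bytes, sequence_number: int) -> bytes:
    # Fresh random nonce per packet; the sequence number is authenticated so
    # a network attacker can't silently replay or reorder packets.
    nonce = os.urandom(12)
    return nonce + cipher.encrypt(nonce, frame, sequence_number.to_bytes(4, "big"))

def decrypt_frame(packet: bytes, sequence_number: int) -> bytes:
    nonce, ciphertext = packet[:12], packet[12:]
    return cipher.decrypt(nonce, ciphertext, sequence_number.to_bytes(4, "big"))

# 20 milliseconds of audio from the codec, encrypted before it leaves the machine.
voice_frame = b"\x00" * 160
packet = encrypt_frame(voice_frame, sequence_number=1)
assert decrypt_frame(packet, sequence_number=1) == voice_frame
```

Everything in that sketch happens on the endpoint itself, which is exactly why a compromised computer defeats it, as discussed below.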

The last time this sort of thing came up, the U.S. government tried to sell us something called “key escrow.” Basically, the government likes the idea of everyone using encryption, as long as it has a copy of the key. This is an amazingly insecure idea for a number of reasons, mostly boiling down to the fact that when you provide a means of access into a security system, you greatly weaken its security.

A recent case in Greece demonstrated that perfectly: Criminals used a cell-phone eavesdropping mechanism already in place, designed for the police to listen in on phone calls. Had the call system been designed to be secure in the first place, there never would have been a backdoor for the criminals to exploit.

Fortunately, there are many VOIP-encryption products available. Skype has built-in encryption. Phil Zimmermann is releasing Zfone, an easy-to-use open-source product. There’s even a VOIP Security Alliance.

Encryption for IP telephony is important, but it’s not a panacea. Basically, it takes care of threats No. 2 through No. 4, but not threat No. 1. Unfortunately, that’s the biggest threat: eavesdropping at the end points. No amount of IP telephony encryption can prevent a Trojan or worm on your computer—or just a hacker who managed to get access to your machine—from eavesdropping on your phone calls, just as no amount of SSL or e-mail encryption can prevent a Trojan on your computer from eavesdropping—or even modifying—your data.

So, as always, it boils down to this: We need secure computers and secure operating systems even more than we need secure transmission.

This essay originally appeared on Wired.com.

Posted on April 6, 2006 at 5:09 AM

Security Applications of Time-Reversed Acoustics

I simply don’t have the science to evaluate this claim:

Since conventional sound waves disperse when traveling through a medium, the possibility of focusing sound waves could have applications in several areas. In cryptography, for example, when sending a secret message, the sender could ensure that only one location would receive the message. Interceptors at other locations would only pick up noise due to unfocused waves. Other potential uses include antisubmarine warfare and underwater communications that benefit from targeted signaling.

Posted on April 5, 2006 at 1:06 PM

80 Cameras for 2,400 People

This story is about the remote town of Dillingham, Alaska, which is probably the most watched town in the country. There are 80 surveillance cameras for the 2,400 people, which translates to one camera for every 30 people.

The cameras were bought, I assume, because the town couldn’t think of anything else to do with the $202,000 Homeland Security grant it received. (This is one of the problems with handing out this money based on political agendas rather than on where the actual threats are.)

But they got the money, and they spent it. And now they have to justify the expense. Here’s the movie-plot threat the Dillingham Police Chief uses to explain why the expense was worthwhile:

“Russia is about 800 miles that way,” he says, arm extending right.

“Seattle is about 1,200 miles back that way.” He points behind him.

“So if I have the math right, we’re closer to Russia than we are to Seattle.”

Now imagine, he says: What if the bad guys, whoever they are, manage to obtain a nuclear device in Russia, where some weapons are believed to be poorly guarded. They put the device in a container and then hire organized criminals, “maybe Mafiosi,” to arrange a tramp steamer to pick it up. The steamer drops off the container at the Dillingham harbor, complete with forged paperwork to ship it to Seattle. The container is picked up by a barge.

“Ten days later,” the chief says, “the barge pulls into the Port of Seattle.”

Thompson pauses for effect.

“Phoooom,” he says, his hands blooming like a flower.

The first problem with the movie plot is that it’s just plain silly. But the second problem, which you might have to look back to notice, is that those 80 cameras will do nothing to stop his imagined attack.

We are all security consumers. We spend money, and we expect security in return. This expenditure was a waste of money, and as a U.S. taxpayer, I am pissed that I’m getting such a lousy deal.

Posted on March 29, 2006 at 1:13 PM

Firefox Bug Causes Relationship to Break Up

A couple, engaged to be married and living together (I assume), shared a computer. He used Firefox to visit a bunch of dating sites and was smart enough not to have the browser save his passwords. But Firefox did save the names of the sites for which it was told never to save a password. She happened to stumble on this list. The details are left to the imagination, but they broke up.

Most bug reports aren’t this colorful.

Posted on March 27, 2006 at 7:53 AM

DHS Privacy and Integrity Report

Last year, the Department of Homeland Security finally got around to appointing its DHS Data Privacy and Integrity Advisory Committee. It was mostly made up of industry insiders instead of anyone with any real privacy experience. (Lance Hoffman from George Washington University was the most notable exception.)

And now, we have something from that committee. On March 7th they published their “Framework for Privacy Analysis of Programs, Technologies, and Applications.”

This document sets forth a recommended framework for analyzing programs, technologies, and applications in light of their effects on privacy and related interests. It is intended as guidance for the Data Privacy and Integrity Advisory Committee (the Committee) to the U.S. Department of Homeland Security (DHS). It may also be useful to the DHS Privacy Office, other DHS components, and other governmental entities that are seeking to reconcile personal data-intensive programs and activities with important social and human values.

It’s surprisingly good.

I like that it is a series of questions a program manager has to answer: about the legal basis for the program, its efficacy against the threat, and its effects on privacy. I am particularly pleased that their questions on pages 3-4 are very similar to the “five steps” I wrote about in Beyond Fear. I am thrilled that the document takes a “trade-off” approach; the last question asks: “Should the program proceed? Do the benefits of the program…justify the costs to privacy interests….?”

I think this is a good starting place for any technology or program with respect to security and privacy. And I hope the DHS actually follows the recommendations in this report.

Posted on March 21, 2006 at 3:07 PM

Data Mining for Terrorists

In the post-9/11 world, there’s much focus on connecting the dots. Many believe that data mining is the crystal ball that will enable us to uncover future terrorist plots. But even in the most wildly optimistic projections, data mining isn’t tenable for that purpose. We’re not trading privacy for security; we’re giving up privacy and getting no security in return.

Most people first learned about data mining in November 2002, when news broke about a massive government data mining program called Total Information Awareness. The basic idea was as audacious as it was repellent: suck up as much data as possible about everyone, sift through it with massive computers, and investigate patterns that might indicate terrorist plots. Americans across the political spectrum denounced the program, and in September 2003, Congress eliminated its funding and closed its offices.

But TIA didn’t die. According to The National Journal, it just changed its name and moved inside the Defense Department.

This shouldn’t be a surprise. In May 2004, the General Accounting Office published a report that listed 122 different federal government data mining programs that used people’s personal information. This list didn’t include classified programs, like the NSA’s eavesdropping effort, or state-run programs like MATRIX.

The promise of data mining is compelling, and convinces many. But it’s wrong. We’re not going to find terrorist plots through systems like this, and we’re going to waste valuable resources chasing down false alarms. To understand why, we have to look at the economics of the system.

Security is always a trade-off, and for a system to be worthwhile, the advantages have to be greater than the disadvantages. A national security data mining program is going to find some percentage of real attacks, and some percentage of false alarms. If the benefits of finding and stopping those attacks outweigh the cost—in money, liberties, etc.—then the system is a good one. If not, then you’d be better off spending that cost elsewhere.

Data mining works best when there’s a well-defined profile you’re searching for, a reasonable number of attacks per year, and a low cost of false alarms. Credit card fraud is one of data mining’s success stories: all credit card companies data mine their transaction databases, looking for spending patterns that indicate a stolen card. Many credit card thieves share a pattern—purchase expensive luxury goods, purchase things that can be easily fenced, etc.—and data mining systems can minimize the losses in many cases by shutting down the card. In addition, the cost of false alarms is only a phone call to the cardholder asking him to verify a couple of purchases. The cardholders don’t even resent these phone calls—as long as they’re infrequent—so the cost is just a few minutes of operator time.
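
To illustrate what a well-defined profile with cheap false alarms looks like in practice, here is a toy sketch; the indicators, weights, and threshold are invented for this example and are not any card issuer's actual rules.

```python
# A toy sketch of profile-based fraud detection, not any card issuer's real
# system: score each transaction against a few "stolen card" indicators and
# flag the card for a verification call when the score crosses a threshold.
from dataclasses import dataclass

@dataclass
class Transaction:
    merchant_category: str     # e.g. "jewelry", "electronics", "grocery"
    amount: float              # purchase amount in dollars
    country: str               # where the purchase was made
    minutes_since_last: float  # time since the previous purchase on this card

EASILY_FENCED = {"jewelry", "electronics", "gift cards"}

def fraud_score(tx: Transaction, home_country: str) -> int:
    score = 0
    if tx.merchant_category in EASILY_FENCED:
        score += 2             # expensive or easily fenced goods
    if tx.amount > 1000:
        score += 2             # unusually large purchase
    if tx.country != home_country:
        score += 1             # far from the cardholder's usual location
    if tx.minutes_since_last < 10:
        score += 1             # rapid-fire purchases
    return score

def should_call_cardholder(tx: Transaction, home_country: str, threshold: int = 4) -> bool:
    # The cost of a false alarm is just this phone call, which is why the
    # profile can be applied aggressively.
    return fraud_score(tx, home_country) >= threshold

# Example: a large jewelry purchase abroad, minutes after the previous charge.
suspicious = Transaction("jewelry", 2500.0, "RO", 3.0)
print(should_call_cardholder(suspicious, home_country="US"))  # True
```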

Terrorist plots are different. There is no well-defined profile, and attacks are very rare. Taken together, these facts mean that data mining systems won’t uncover any terrorist plots until they are very accurate, and that even very accurate systems will be so flooded with false alarms that they will be useless.

All data mining systems fail in two different ways: false positives and false negatives. A false positive is when the system identifies a terrorist plot that really isn’t one. A false negative is when the system misses an actual terrorist plot. Depending on how you “tune” your detection algorithms, you can err on one side or the other: you can increase the number of false positives to ensure that you are less likely to miss an actual terrorist plot, or you can reduce the number of false positives at the expense of missing terrorist plots.

To reduce both those numbers, you need a well-defined profile. And that’s a problem when it comes to terrorism. In hindsight, it was really easy to connect the 9/11 dots and point to the warning signs, but it’s much harder before the fact. Certainly, there are common warning signs that many terrorist plots share, but each is unique, as well. The better you can define what you’re looking for, the better your results will be. Data mining for terrorist plots is going to be sloppy, and it’s going to be hard to find anything useful.

Data mining is like searching for a needle in a haystack. There are 900 million credit cards in circulation in the United States. According to the FTC September 2003 Identity Theft Survey Report, about 1% of them (10 million cards) are stolen and fraudulently used each year. Terrorism is different. There are trillions of connections between people and events—things that the data mining system will have to “look at”—and very few plots. This rarity makes even accurate identification systems useless.

Let’s look at some numbers. We’ll be optimistic. We’ll assume the system has a 1 in 100 false positive rate (99% accurate), and a 1 in 1,000 false negative rate (99.9% accurate).

Assume one trillion possible indicators to sift through: that’s about ten events—e-mails, phone calls, purchases, web surfings, whatever—per person in the U.S. per day. Also assume that 10 of them actually point to terrorist plots.

This unrealistically accurate system will generate one billion false alarms for every real terrorist plot it uncovers. Every day of every year, the police will have to investigate 27 million potential plots in order to find the one real terrorist plot per month. Raise that false-positive accuracy to an absurd 99.9999% and you’re still chasing 2,750 false alarms per day—but that will inevitably raise your false negatives, and you’re going to miss some of those ten real plots.
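
For readers who want to check the arithmetic, here is a short sketch that re-derives those figures from the assumed rates; the rates and event counts are the illustrative assumptions above, not measured values.

```python
# Back-of-the-envelope re-derivation of the numbers above; all of the rates
# and event counts are the illustrative assumptions stated in the text.
indicators_per_year = 1_000_000_000_000     # one trillion events to examine
real_plots_per_year = 10
false_positive_rate = 1 / 100               # "99% accurate"
false_negative_rate = 1 / 1_000             # "99.9% accurate"

false_alarms = (indicators_per_year - real_plots_per_year) * false_positive_rate
plots_found = real_plots_per_year * (1 - false_negative_rate)

print(f"false alarms per year: {false_alarms:,.0f}")                # ~10 billion
print(f"false alarms per day:  {false_alarms / 365:,.0f}")          # ~27 million
print(f"false alarms per plot: {false_alarms / plots_found:,.0f}")  # ~1 billion

# Even at an absurd 99.9999% false-positive accuracy:
still_flagged = indicators_per_year * (1 / 1_000_000)
print(f"false alarms per day:  {still_flagged / 365:,.0f}")         # still thousands
```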

This isn’t anything new. In statistics, it’s called the “base rate fallacy,” and it applies in other domains as well. For example, even highly accurate medical tests are useless as diagnostic tools if the incidence of the disease is rare in the general population. Terrorist attacks are also rare, so any “test” is going to result in an endless stream of false alarms.

This is exactly the sort of thing we saw with the NSA’s eavesdropping program: the New York Times reported that the computers spat out thousands of tips per month. Every one of them turned out to be a false alarm.

And the cost was enormous: not just the cost of the FBI agents running around chasing dead-end leads instead of doing things that might actually make us safer, but also the cost in civil liberties. The fundamental freedoms that make our country the envy of the world are valuable, and not something that we should throw away lightly.

Data mining can work. It helps Visa keep the costs of fraud down, just as it helps Amazon.com show me books that I might want to buy, and Google show me advertising I’m more likely to be interested in. But these are all instances where the cost of false positives is low—a phone call from a Visa operator, or an uninteresting ad—and in systems that have value even if there is a high number of false negatives.

Finding terrorism plots is not a problem that lends itself to data mining. It’s a needle-in-a-haystack problem, and throwing more hay on the pile doesn’t make that problem any easier. We’d be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.

This essay originally appeared on Wired.com.

Posted on March 9, 2006 at 7:44 AM
