Entries Tagged "surveillance"

NSA Monitoring U.S. Government Internet Traffic

I have mixed feelings about this, but in general I think it is a good idea:

President Bush signed a directive this month that expands the intelligence community’s role in monitoring Internet traffic to protect against a rising number of attacks on federal agencies’ computer systems.

The directive, whose content is classified, authorizes the intelligence agencies, in particular the National Security Agency, to monitor the computer networks of all federal agencies—including ones they have not previously monitored.

[…]

The classified joint directive, signed Jan. 8 and called the National Security Presidential Directive 54/Homeland Security Presidential Directive 23, has not been previously disclosed. Plans to expand the NSA’s role in cyber-security were reported in the Baltimore Sun in September.

According to congressional aides and former White House officials with knowledge of the program, the directive outlines measures collectively referred to as the “cyber initiative,” aimed at securing the government’s computer systems against attacks by foreign adversaries and other intruders. It will cost billions of dollars, which the White House is expected to request in its fiscal 2009 budget.

[…]

Under the initiative, the NSA, CIA and the FBI’s Cyber Division will investigate intrusions by monitoring Internet activity and, in some cases, capturing data for analysis, sources said.

The Pentagon can plan attacks on adversaries’ networks if, for example, the NSA determines that a particular server in a foreign country needs to be taken down to disrupt an attack on an information system critical to the U.S. government. That could include responding to an attack against a private-sector network, such as the telecom industry’s, sources said.

Also, as part of its attempt to defend government computer systems, the Department of Homeland Security will collect and monitor data on intrusions, deploy technologies for preventing attacks and encrypt data. It will also oversee the effort to reduce Internet portals across government to 50 from 2,000, to make it easier to detect attacks.

My concern is that the NSA is doing the monitoring. I simply don’t like them monitoring domestic traffic, even domestic government traffic.

EDITED TO ADD: Commentary.

Posted on February 4, 2008 at 6:30 AM

Security vs. Privacy

If there’s a debate that sums up post-9/11 politics, it’s security versus privacy. Which is more important? How much privacy are you willing to give up for security? Can we even afford privacy in this age of insecurity? Security versus privacy: It’s the battle of the century, or at least its first decade.

In a Jan. 21 New Yorker article, Director of National Intelligence Michael McConnell discusses a proposed plan to monitor all—that’s right, all—internet communications for security purposes, an idea so extreme that the word “Orwellian” feels too mild.

The article (now online here) contains this passage:

In order for cyberspace to be policed, internet activity will have to be closely monitored. Ed Giorgio, who is working with McConnell on the plan, said that would mean giving the government the authority to examine the content of any e-mail, file transfer or Web search. “Google has records that could help in a cyber-investigation,” he said. Giorgio warned me, “We have a saying in this business: ‘Privacy and security are a zero-sum game.'”

I’m sure they have that saying in their business. And it’s precisely why, when people in their business are in charge of government, it becomes a police state. If privacy and security really were a zero-sum game, we would have seen mass immigration into the former East Germany and modern-day China. While it’s true that police states like those have less street crime, no one argues that their citizens are fundamentally more secure.

We’ve been told we have to trade off security and privacy so often—in debates on security versus privacy, writing contests, polls, reasoned essays and political rhetoric—that most of us don’t even question the fundamental dichotomy.

But it’s a false one.

Security and privacy are not opposite ends of a seesaw; you don’t have to accept less of one to get more of the other. Think of a door lock, a burglar alarm and a tall fence. Think of guns, anti-counterfeiting measures on currency and that dumb liquid ban at airports. Security affects privacy only when it’s based on identity, and there are limitations to that sort of approach.

Since 9/11, approximately three things have potentially improved airline security: reinforcing the cockpit doors, passengers realizing they have to fight back and—possibly—sky marshals. Everything else—all the security measures that affect privacy—is just security theater and a waste of effort.

By the same token, many of the anti-privacy “security” measures we’re seeing—national ID cards, warrantless eavesdropping, massive data mining and so on—do little to improve, and in some cases harm, security. And government claims of their success are either wrong or made about fake threats.

The debate isn’t security versus privacy. It’s liberty versus control.

You can see it in comments by government officials: “Privacy no longer can mean anonymity,” says Donald Kerr, principal deputy director of national intelligence. “Instead, it should mean that government and businesses properly safeguard people’s private communications and financial information.” Did you catch that? You’re expected to give up control of your privacy to others, who—presumably—get to decide how much of it you deserve. That’s what loss of liberty looks like.

It should be no surprise that people choose security over privacy: 51 to 29 percent in a recent poll. Even if you don’t subscribe to Maslow’s hierarchy of needs, it’s obvious that security is more important. Security is vital to survival, not just of people but of every living thing. Privacy is unique to humans, but it’s a social need. It’s vital to personal dignity, to family life, to society—to what makes us uniquely human—but not to survival.

If you set up the false dichotomy, of course people will choose security over privacy—especially if you scare them first. But it’s still a false dichotomy. There is no security without privacy. And liberty requires both security and privacy. The famous quote attributed to Benjamin Franklin reads: “Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety.” It’s also true that those who would give up privacy for security are likely to end up with neither.

This essay originally appeared on Wired.com.

Posted on January 29, 2008 at 5:21 AM

Cameras in the New York City Subways

An update:

New York City’s plan to secure its subways with a next-generation surveillance network is getting more expensive by the second, and slipping further and further behind schedule. A new report by the New York State Comptroller’s office reveals that “the cost of the electronic security program has grown from $265 million to $450 million, an increase of $185 million or 70 percent.” An August 2008 deadline has been pushed back to December 2009, and further delays may be just ahead.

[…]

I’ve spent the last few months, on and off, reporting on New York’s counter-terror programs for the magazine. One major problem with the subway surveillance program has been wedging a modern security network into a 5,000 square-mile system that recently celebrated its hundredth birthday. Getting the power and air-conditioning needed for the cameras’ servers has been a nightmare. In many stations, there’s literally no place to put the things. Plus, the ceilings in most of the subway stations are only nine feet high, and there are columns every few yards. Which makes it very hard to get a good look at the passengers.

Posted on January 25, 2008 at 1:41 PM

Corporate Spying

This is a good article on a new trend in corporate spying: companies like Wal-Mart and Sears have resorted to covert surveillance of employees, partners, journalists, and even Internet users to protect themselves from “global threats.”

“Like most major corporations, it is our corporate responsibility to have systems in place, including software systems, to monitor threats to our network, intellectual property and our people,” Wal-Mart spokeswoman Sarah Clark said in a statement in April. Following the Gabbard firing, Wal-Mart said it conducted a review of its monitoring activities. “There have been changes in leadership, and we have strengthened our practices and protocols in this area,” Clark said.

[…]

At a gathering of security specialists in New York City in January of 2006, David Harrison, the former Army military intelligence officer who was hired by Senser to head Wal-Mart’s analytical security research center, provided a rare glimpse into the company’s monitoring operations. Harrison told the gathering Wal-Mart faces a wide range of threats: “A bombing in China, an armed robbery in Brazil, an armed robbery in Las Vegas, another bomb threat, and that was just yesterday,” Harrison said.

To safeguard its employees and operations Wal-Mart has tapped its massive data warehouse of information, now believed to be larger than 4 petabytes (4,000 terabytes), to look for potential threats. It tracks customers who buy propane tanks, for example, or anyone who has fraudulently cashed a check, or anyone making bulk purchases of pre-paid cell phones, which could be tied to criminal activities. “If you try to buy more than three cell phones at one time, it will be tracked,” he reportedly told the audience.

[…]

Gabbard, the Wal-Mart employee fired for recording reporters’ phone calls, said in his interview with The Wall Street Journal that Wal-Mart uses software from Raytheon Oakley Networks to monitor activity on its network. The Oakley product was originally developed for the U.S. Department of Defense.

The Oakley software is so sophisticated it can allow administrators to visually see what types of information are moving across the network, from Excel spreadsheets to job searches on Monster.com, or photos with flesh tones that might indicate a user is viewing pornography.
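Oakley’s actual detection method isn’t public, but the flesh-tone idea itself is an old and simple heuristic. As a rough illustration only (the thresholds below come from a classic RGB skin-detection rule of thumb, not from the Oakley product), one could estimate the fraction of skin-toned pixels in an image like this:

```python
def flesh_tone_fraction(pixels):
    """Fraction of (r, g, b) pixels falling in a crude skin-tone range.

    Uses a classic rule of thumb: red channel high and dominant, with
    enough spread between channels to exclude gray tones.
    """
    def is_skin(r, g, b):
        return (r > 95 and g > 40 and b > 20
                and max(r, g, b) - min(r, g, b) > 15
                and abs(r - g) > 15 and r > g and r > b)

    if not pixels:
        return 0.0
    return sum(is_skin(*p) for p in pixels) / len(pixels)
```

A monitoring system would flag images whose skin-tone fraction exceeds some threshold; such simple rules produce plenty of false positives (faces, beaches, wood grain), which is presumably why commercial products layer more analysis on top.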

And this article talks about ex-CIA agents working for corporations:

The best estimate is that several hundred former intelligence agents now work in corporate espionage, including some who left the C.I.A. during the agency turmoil that followed 9/11. They quickly joined private-investigation firms whose U.S. corporate clients were planning to expand into Russia, China, and other countries with opaque business practices and few public records, and who needed the skinny on international partners or rivals.

These ex-spies apply a higher level of expertise, honed by government service, to the cruder tactics already practiced by private investigators. One such ploy is pretexting—obtaining information by pretending to be somebody else. While private detectives have long posed as freelance reporters or job recruiters to get people to talk, former agents have elevated pretexting to an art.

[…]

Similarly, ex-agents have helped popularize the use of G.P.S.-based monitoring devices and long-range cameras for following people around. One corporate-espionage technique comes straight from the C.I.A. playbook. In the constant search for the slightest edge, some hedge funds and investment companies have turned to a handful of private-investigation firms for a tactic that seems to fall between science and voodoo. Called tactical behavior assessment, it relies on dozens of verbal and nonverbal cues to determine whether someone is lying. Signs of potential deception include meandering off topic rather than sticking to the facts and excessive personal grooming, such as nervously picking lint off a jacket. This method was developed by former lie-detector experts from the C.I.A.’s Office of Security, which administers polygraph tests to keep agents honest and verify the stories of would-be defectors.

[…]

Most of the ex-agents’ activities, from surveillance to lie detection, are perfectly legal. In the 2006 Hewlett-Packard scandal, detectives used pretexting to obtain the private telephone records of company directors, employees, and journalists in an effort to track leaks to the media. In the wake of that scandal, federal law was tightened to prohibit using fraudulent means to obtain telephone records. Financial records were already off-limits. But federal law doesn’t forbid assuming a false identity to get other information—an area that ex-spies exploit.

Still, a few techniques favored by the spies-for-hire do appear to violate privacy statutes. One of these involves using “data haunts,” extreme methods of electronic monitoring such as tracking cell-phone calls and gathering emails by relying on secretly installed software to record computer keystrokes. An ex-C.I.A. agent described a group of his former colleagues who set up shop offshore so that they could tap into telephone calls—a practice prohibited by federal law—outside U.S. jurisdiction. “They call themselves the bad boys in the Bahamas,” he said.

Even some of the legal methods are controversial within the industry. Certain old-school firms won’t stoop to dumpster diving or stealing garbage—which is usually legal as long as the trash is on a curb or other public property—because they consider it unethical. They say that the prevalence of former intelligence agents in the field and the rise of unscrupulous tactics have tarnished a business that often struggles with its reputation. One longtime investigator complained that he recently lost business to some ex-C.I.A. officers who promised a potential client that they could obtain the phone and bank records of a target—something that is illegal in most cases.

[…]

Current and former employees said Diligence’s ex-spies also held classes in using false identities to obtain confidential information. Ex-employees said it wasn’t unusual for an investigator to have five or six cell phones, each representing a different identity, on his or her desk. And while ex-C.I.A. and former MI5 agents were old hands at such deception, the new initiates sometimes got confused and answered a phone with the wrong name.

All interesting. It seems that corporate espionage has gone mainstream, and the debate is more about how and when.

On a related note, this paragraph disturbed me:

On occasion, Diligence investigators were dispatched to collect garbage from a target’s home or office. In some cases, two former employees said, Diligence hired off-duty or retired police officers to take trash so that they could wave their badges and fend off any awkward questions.

It’s public authority being used for private interests. We see it a lot—off-duty police officers guarding private businesses, for example—and it erodes public trust of authority. In the case above, I’m not even sure it’s legal.

Posted on January 16, 2008 at 12:21 PM

Privacy International's 2007 Report

The 2007 International Privacy Ranking.

Canada comes in first.

Individual privacy is best protected in Canada but under threat in the United States and the European Union as governments introduce sweeping surveillance and information-gathering measures in the name of security and border control, an international rights group said in a report released Saturday.

Canada, Greece and Romania had the best privacy records of 47 countries surveyed by London-based watchdog Privacy International. Malaysia, Russia and China were ranked worst.

Both Britain and the United States fell into the lowest-performing group of “endemic surveillance societies.”

EDITED TO ADD (1/10): Actually, Canada comes in second.

Posted on January 10, 2008 at 6:01 AM

Anonymity and the Netflix Dataset

Last year, Netflix published 100 million movie rankings by 500,000 customers, as part of a challenge for people to come up with better recommendation systems than the one the company was using. The data was anonymized by removing personal details and replacing names with random numbers, to protect the privacy of the recommenders.

Arvind Narayanan and Vitaly Shmatikov, researchers at the University of Texas at Austin, de-anonymized some of the Netflix data by comparing rankings and timestamps with public information in the Internet Movie Database, or IMDb.

Their research (.pdf) illustrates some inherent security problems with anonymous data, but first it’s important to explain what they did and did not do.

They did not reverse the anonymity of the entire Netflix dataset. What they did was reverse the anonymity of the Netflix dataset for those sampled users who also entered some movie rankings, under their own names, in the IMDb. (While IMDb’s records are public, crawling the site to get them is against the IMDb’s terms of service, so the researchers used a representative few to prove their algorithm.)

The point of the research was to demonstrate how little information is required to de-anonymize information in the Netflix dataset.

On one hand, isn’t that sort of obvious? The risks of anonymous databases have been written about before, such as in this 2001 paper published in an IEEE journal. The researchers working with the anonymous Netflix data didn’t painstakingly figure out people’s identities—as others did with the AOL search database last year—they just compared it with an already identified subset of similar data: a standard data-mining technique.

But as opportunities for this kind of analysis pop up more frequently, lots of anonymous data could end up at risk.

Someone with access to an anonymous dataset of telephone records, for example, might partially de-anonymize it by correlating it with a catalog merchants’ telephone order database. Or Amazon’s online book reviews could be the key to partially de-anonymizing a public database of credit card purchases, or a larger database of anonymous book reviews.

Google, with its database of users’ internet searches, could easily de-anonymize a public database of internet purchases, or zero in on searches of medical terms to de-anonymize a public health database. Merchants who maintain detailed customer and purchase information could use their data to partially de-anonymize any large search engine’s data, if it were released in an anonymized form. A data broker holding databases of several companies might be able to de-anonymize most of the records in those databases.

What the University of Texas researchers demonstrate is that this process isn’t hard, and doesn’t require a lot of data. It turns out that if you eliminate the top 100 movies everyone watches, our movie-watching habits are all pretty individual. This would certainly hold true for our book reading habits, our internet shopping habits, our telephone habits and our web searching habits.

The obvious countermeasures for this are, sadly, inadequate. Netflix could have randomized its dataset by removing a subset of the data, changing the timestamps or adding deliberate errors into the unique ID numbers it used to replace the names. It turns out, though, that this only makes the problem slightly harder. Narayanan and Shmatikov’s de-anonymization algorithm is surprisingly robust, and works with partial data, data that has been perturbed, even data with errors in it.

With only eight movie ratings (of which two may be completely wrong), and dates that may be up to two weeks in error, they can uniquely identify 99 percent of the records in the dataset. After that, all they need is a little bit of identifiable data: from the IMDb, from your blog, from anywhere. The moral is that it takes only a small named database for someone to pry the anonymity off a much larger anonymous database.
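The researchers’ actual algorithm is more statistically careful—it weights matches on obscure movies much more heavily than matches on blockbusters—but the core idea of noise-tolerant record linkage is simple enough to sketch. This toy version (hypothetical data and thresholds, not their code) scores each anonymous record against a known public profile, allowing ratings to be off by a point and dates by a couple of weeks, and accepts a match only if the best candidate clearly stands out:

```python
from datetime import date

# Records are dicts: movie_id -> (rating, date_rated)

def similarity(public_profile, anon_record, date_slop_days=14):
    """Count public ratings plausibly matched by the anonymous record,
    tolerating small rating and date errors."""
    score = 0
    for movie, (rating, when) in public_profile.items():
        if movie in anon_record:
            r2, when2 = anon_record[movie]
            if abs(r2 - rating) <= 1 and abs((when2 - when).days) <= date_slop_days:
                score += 1
    return score

def best_match(public_profile, anon_dataset, min_margin=2):
    """Return the anonymous ID that best matches the public profile,
    but only if it beats the runner-up by a clear margin."""
    scores = sorted(((similarity(public_profile, rec), uid)
                     for uid, rec in anon_dataset.items()), reverse=True)
    (top, uid), (runner_up, _) = scores[0], scores[1]
    return uid if top - runner_up >= min_margin else None
```

The margin test is what makes the attack robust: with the popular movies discounted, most people’s rating histories are so distinctive that the correct record wins by a wide gap even when some of the input data is wrong.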

Other research reaches the same conclusion. Using public anonymous data from the 1990 census, Latanya Sweeney found that 87 percent of the population in the United States, 216 million of 248 million, could likely be uniquely identified by their five-digit ZIP code, combined with their gender and date of birth. About half of the U.S. population is likely identifiable by gender, date of birth and the city, town or municipality in which the person resides. Expanding the geographic scope to an entire county reduces that to a still-significant 18 percent. “In general,” the researchers wrote, “few characteristics are needed to uniquely identify a person.”
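The computation behind numbers like Sweeney’s is straightforward to sketch: bucket every record by its quasi-identifier fields and count how many land in a bucket of size one. This toy version (illustrative only, not Sweeney’s code) works on any list of record dicts:

```python
from collections import Counter

def uniqueness_fraction(records, quasi_identifiers):
    """Fraction of records uniquely identified by the chosen
    quasi-identifier columns (e.g. ZIP, gender, date of birth)."""
    keys = [tuple(rec[q] for q in quasi_identifiers) for rec in records]
    counts = Counter(keys)
    unique = sum(1 for key in keys if counts[key] == 1)
    return unique / len(keys)
```

This is also the core of a k-anonymity check: a dataset is k-anonymous for those columns exactly when no bucket has fewer than k records.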

Stanford University researchers reported similar results using 2000 census data. It turns out that date of birth, which (unlike birthday month and day alone) sorts people into thousands of different buckets, is incredibly valuable in disambiguating people.

This has profound implications for releasing anonymous data. On one hand, anonymous data is an enormous boon for researchers—AOL did a good thing when it released its anonymous dataset for research purposes, and it’s sad that the CTO resigned and an entire research team was fired after the public outcry. Large anonymous databases of medical data are enormously valuable to society: for large-scale pharmacology studies, long-term follow-up studies and so on. Even anonymous telephone data makes for fascinating research.

On the other hand, in the age of wholesale surveillance, where everyone collects data on us all the time, anonymization is very fragile and riskier than it initially seems.

Like everything else in security, anonymity systems shouldn’t be fielded before being subjected to adversarial attacks. We all know that it’s folly to implement a cryptographic system before it’s rigorously attacked; why should we expect anonymity systems to be any different? And, like everything else in security, anonymity is a trade-off. There are benefits, and there are corresponding risks.

Narayanan and Shmatikov are currently working on developing algorithms and techniques that enable the secure release of anonymous datasets like Netflix’s. That’s a research result we can all benefit from.

This essay originally appeared on Wired.com.

Posted on December 18, 2007 at 5:53 AM

Redefining Privacy

This kind of thinking can do enormous damage to a free society:

As Congress debates new rules for government eavesdropping, a top intelligence official says it is time that people in the United States change their definition of privacy.

Privacy no longer can mean anonymity, says Donald Kerr, the principal deputy director of national intelligence. Instead, it should mean that government and businesses properly safeguard people’s private communications and financial information.

[…]

“Our job now is to engage in a productive debate, which focuses on privacy as a component of appropriate levels of security and public safety,” Kerr said. “I think all of us have to really take stock of what we already are willing to give up, in terms of anonymity, but [also] what safeguards we want in place to be sure that giving that doesn’t empty our bank account or do something equally bad elsewhere.”

Anonymity, privacy, and security are intertwined; you can’t just separate them out like that. And privacy isn’t opposed to security; privacy is part of security. And the value of privacy in a free society is enormous.

Other comments.

EDITED TO ADD (11/15): His actual comments are more nuanced. Steve Bellovin has some comments.

Posted on November 14, 2007 at 12:51 PM

Programming for Wholesale Surveillance and Data Mining

AT&T has done the research:

They use high-tech data-mining algorithms to scan through the huge daily logs of every call made on the AT&T network; then they use sophisticated algorithms to analyze the connections between phone numbers: who is talking to whom? The paper literally uses the term “Guilt by Association” to describe what they’re looking for: what phone numbers are in contact with other numbers that are in contact with the bad guys?

When this research was done, back in the last century, the bad guys were people who wanted to rip off AT&T by making fraudulent credit-card calls. (Remember, back in the last century, intercontinental long-distance voice communication actually cost money!) But it’s easy to see how the FBI could use this to chase down anyone who talked to anyone who talked to a terrorist. Or even to a “terrorist.”
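At its core, “guilt by association” is just a bounded breadth-first search over the call graph: start from the known bad numbers and expand outward a fixed number of hops. A toy sketch (hypothetical data; AT&T’s actual system adds statistical scoring on top of the raw graph expansion):

```python
from collections import defaultdict, deque

def contacts_within(call_log, seeds, hops):
    """Phone numbers within `hops` calls of any seed ("bad") number.

    call_log: iterable of (caller, callee) pairs; treated as undirected.
    Returns the expanded set, excluding the seeds themselves.
    """
    graph = defaultdict(set)
    for a, b in call_log:
        graph[a].add(b)
        graph[b].add(a)

    distance = {s: 0 for s in seeds}   # BFS frontier bookkeeping
    queue = deque(seeds)
    while queue:
        n = queue.popleft()
        if distance[n] == hops:        # don't expand past the hop limit
            continue
        for m in graph[n]:
            if m not in distance:
                distance[m] = distance[n] + 1
                queue.append(m)
    return set(distance) - set(seeds)
```

The civil-liberties problem is visible right in the code: each extra hop sweeps in everyone who merely talked to someone who talked to someone, and on a dense call graph that set grows explosively.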

Posted on October 31, 2007 at 12:03 PM
