Entries Tagged "privacy"


Random Bag Searches in Subways

Last year, New York City implemented a program of random bag searches in the subways. It was a silly idea, and I wrote about it then. Recently the U.S. Court of Appeals for the 2nd Circuit upheld the program. Daniel Solove wrote about the ruling:

The 2nd Circuit panel concluded that the program was “reasonable” under the 4th Amendment’s special needs doctrine. Under the special needs doctrine, if there are exceptional circumstances that make the warrant and probable cause requirements unnecessary, then the search should be analyzed in terms of whether it is “reasonable.” Reasonableness is determined by balancing privacy against the government’s need. The problem with the 2nd Circuit decision is that under its reasoning, nearly any search, no matter how intrusive into privacy, would be justified. This is because of the way it assesses the government’s side of the balance. When the government’s interest is preventing the detonation of a bomb on a crowded subway, with the potential of mass casualties, it is hard for anything to survive when balanced against it.

The key to the analysis should be the extent to which the search program will effectively improve subway safety. In other words, the goals of the program may be quite laudable, but nobody questions the importance of subway safety. Its weight is so hefty that little can outweigh it. The important issue is whether the search program is a sufficiently effective way of achieving those goals that it is worth the trade-off in civil liberties. On this question, unfortunately, the 2nd Circuit punts. It defers to the law enforcement officials:

That decision is best left to those with “a unique understanding of, and responsibility for, limited public resources, including a finite number of police officers.” Accordingly, we ought not conduct a “searching examination of effectiveness.” Instead, we need only determine whether the Program is “a reasonably effective means of addressing” the government interest in deterring and detecting a terrorist attack on the subway system…

Instead, plaintiffs claim that the Program can have no meaningful deterrent effect because the NYPD employs too few checkpoints. In support of that claim, plaintiffs rely upon various statistical manipulations of the sealed checkpoint data.

We will not peruse, parse, or extrapolate four months’ worth of data in an attempt to divine how many checkpoints the City ought to deploy in the exercise of its day-to-day police power. Counterterrorism experts and politically accountable officials have undertaken the delicate and esoteric task of deciding how best to marshal their available resources in light of the conditions prevailing on any given day. We will not and may not second-guess the minutiae of their considered decisions. (internal citations omitted)

Although courts should not take a “know-it-all” attitude, they must not defer on such a critical question. The problem with many security measures is that they are not a very wise expenditure of resources. It is costly to have a lot of police officers engage in these random searches when they could be doing other things, or when that money could be spent on other measures. A very small number of random searches in a subway system with over 4 million riders a day seems more symbolic than effective. If courts don’t question the efficacy of security measures in the name of terrorism, then law enforcement officials will win nearly all the time. The government just needs to come into court and say “terrorism” and little else will matter.

Posted on August 16, 2006 at 3:32 PM

AOL Releases Massive Amount of Search Data

From TechCrunch:

AOL has released very private data about its users without their permission. While the AOL username has been changed to a random ID number, the ability to analyze all searches by a single user will often lead people to easily determine who the user is, and what they are up to. The data includes personal names, addresses, social security numbers and everything else someone might type into a search box.

The most serious problem is the fact that many people often search on their own name, or those of their friends and family, to see what information is available about them on the net. Combine these ego searches with porn queries and you have a serious embarrassment. Combine them with “buy ecstasy” and you have evidence of a crime. Combine it with an address, social security number, etc., and you have an identity theft waiting to happen. The possibilities are endless.

This is search data for roughly 658,000 anonymized users over a three month period from March to May—about 1/3 of 1 per cent of their total data for that period.

Now AOL says it was all a mistake. They pulled the data, but it’s still out there—and probably will be forever. And there’s some pretty scary stuff in it.

You can read more on Slashdot and elsewhere.

Anyone who wants to play NSA can start datamining for terrorists. Let us know if you find anything.

EDITED TO ADD (8/9): The New York Times:

And search by search, click by click, the identity of AOL user No. 4417749 became easier to discern. There are queries for “landscapers in Lilburn, Ga,” several people with the last name Arnold and “homes sold in shadow lake subdivision gwinnett county georgia.”

It did not take much investigating to follow that data trail to Thelma Arnold, a 62-year-old widow who lives in Lilburn, Ga., frequently researches her friends’ medical ailments and loves her three dogs. “Those are my searches,” she said, after a reporter read part of the list to her.
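The mechanics behind this are simple: because every query in the release carries the same persistent anonymized ID, grouping the log by that ID reassembles a user’s entire search history. A minimal sketch, with rows reconstructed from the queries quoted above (the real file’s exact column layout is an assumption here):

```python
# Group queries by anonymized user ID, as the released logs allow.
# The sample rows are invented for illustration, based on the queries
# the New York Times reported; the real data is assumed to be
# tab-separated, with a persistent numeric ID per user.

from collections import defaultdict

sample_log = """\
4417749\tlandscapers in lilburn ga
4417749\thomes sold in shadow lake subdivision gwinnett county georgia
4417749\tarnold
99999\tbuy flowers online
"""

searches_by_user = defaultdict(list)
for line in sample_log.splitlines():
    user_id, query = line.split("\t", 1)
    searches_by_user[user_id].append(query)

# One user's entire history is now in one place -- the property that
# let a reporter identify AOL user No. 4417749.
for user_id, queries in searches_by_user.items():
    print(f"user {user_id}: {len(queries)} queries")
```

Changing the username to a random number anonymizes nothing if the number is stable: the searches themselves carry the identifying information.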

Posted on August 8, 2006 at 11:02 AM

Printer Security

At BlackHat last week, Brendan O’Connor warned about the dangers of insecure printers:

“Stop treating them as printers. Treat them as servers, as workstations,” O’Connor said in his presentation on Thursday. Printers should be part of a company’s patch program and be carefully managed, not forgotten by IT and handled by the most junior person on staff, he said.

I remember the L0pht doing work on printer vulnerabilities, and ways to attack networks via the printers, years ago. But the point is still valid and bears repeating: printers are computers, and have vulnerabilities like any other computers.

Once a printer was under his control, O’Connor said he would be able to use it to map an organization’s internal network—a situation that could help stage further attacks. The breach gave him access to any of the information printed, copied or faxed from the device. He could also change the internal job counter—which can reduce, or increase, a company’s bill if the device is leased, he said.

The printer break-in also enables a number of practical jokes, such as sending print and scan jobs to arbitrary workers’ desktops, O’Connor said. Also, devices could be programmed to include, for example, an image of a paper clip on every print, fax or copy, ultimately driving office staffers to take the machine apart looking for the paper clip.

Getting copies of all printed documents is definitely a security vulnerability, but I think the biggest threat is that the printers are inside the network, and are a more-trusted launching pad for onward attacks.

One of the weaknesses in the Xerox system is an unsecured boot loader, the technology that loads the basic software on the device, O’Connor said. Other flaws lie in the device’s Web interface and in the availability of services such as the Simple Network Management Protocol and Telnet, he said.
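Checking whether a printer exposes these services is straightforward. A minimal sketch: the address below is a placeholder, not a real printer, and since SNMP runs over UDP, this TCP probe covers only Telnet, the web interface, and the raw print port.

```python
# Probe a printer for exposed TCP services of the kind O'Connor
# describes. PRINTER is a placeholder address (TEST-NET); replace it
# with a device you are authorized to test.

import socket

def tcp_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

PRINTER = "192.0.2.10"  # placeholder, not a real device
SERVICES = {23: "telnet", 80: "web interface", 9100: "raw print"}

for port, name in SERVICES.items():
    status = "open" if tcp_port_open(PRINTER, port) else "closed/filtered"
    print(f"{PRINTER}:{port} ({name}): {status}")
```

Anything that answers here should be treated the way O’Connor suggests: as a server to be patched and managed, not an appliance to be forgotten.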

O’Connor informed Xerox of the problems in January. The company did issue a fix for its WorkCentre 200 series, it said in a statement. “Thanks to Brendan’s efforts, we were able to post a patch for our customers in mid-January which fixes the issues,” a Xerox representative said in an e-mailed statement.

One of the reasons this is a particularly nasty problem is that people don’t update their printer software. Want to bet approximately 0% of the printer’s users installed that patch? And what about printers whose code can’t be patched?

EDITED TO ADD (8/7): O’Connor’s name corrected.

Posted on August 7, 2006 at 10:59 AM

Broadening CALEA

In 1994, Congress passed the Communications Assistance for Law Enforcement Act (CALEA). Basically, this is the law that forces the phone companies to make your telephone calls—including cell phone calls—available for government wiretapping.

But now the government wants access to VoIP calls, and SMS messages, and everything else. They’re doing their best to interpret CALEA as broadly as possible, but they’re also pursuing a legal angle. Ars Technica has the story:

The government hopes to shore up the legal basis for the program by passing amended legislation. The EFF took a look at the amendments and didn’t like what it found.

According to the Administration, the proposal would “confirm [CALEA’s] coverage of push-to-talk, short message service, voice mail service and other communications services offered on a commercial basis to the public,” along with “confirm[ing] CALEA’s application to providers of broadband Internet access, and certain types of ‘Voice-Over-Internet-Protocol’ (VOIP).” Many of CALEA’s express exceptions and limitations are also removed. Most importantly, while CALEA’s applicability currently depends on whether broadband and VOIP can be considered “substantial replacements” for existing telephone services, the new proposal would remove this limit.

Posted on July 28, 2006 at 11:09 AM

Spy Gadgets You Can Buy

Cheap:

This is a collection of “spy equipment” we have found for sale around the internet. Everything here is completely real and is sold at online stores; almost any item listed here costs less than $500, and often can be bought for less than $200.

What’s interesting to me is less what is available commercially today than what we can extrapolate is available to real spies.

Posted on July 13, 2006 at 1:50 PM

Greek Wiretapping Scandal: Perpetrators' Names

According to The Guardian:

Five senior Vodafone technicians have been accused of being the operational masterminds of an elaborate eavesdropping scandal enveloping the mobile phone giant’s Greek subsidiary.

The employees, named in a report released last week by Greece’s independent telecoms watchdog, ADAE, allegedly installed spy software into Vodafone’s central systems.

Still no word on who the technicians were working for.

I’ve written about this scandal before: here, here, and most recently here.

Posted on July 10, 2006 at 1:28 PM

Terrorists, Data Mining, and the Base Rate Fallacy

I have already explained why NSA-style wholesale surveillance data-mining systems are useless for finding terrorists. Here’s a more formal explanation:

Floyd Rudmin, a professor at a Norwegian university, applies the mathematics of conditional probability, known as Bayes’ Theorem, to demonstrate that the NSA’s surveillance cannot successfully detect terrorists unless both the percentage of terrorists in the population and the accuracy rate of their identification are far higher than they are. He correctly concludes that “NSA’s surveillance system is useless for finding terrorists.”

The surveillance is, however, useful for monitoring political opposition and stymieing the activities of those who do not believe the government’s propaganda.

And here’s the analysis:

What is the probability that people are terrorists given that NSA’s mass surveillance identifies them as terrorists? If the probability is zero (p=0.00), then they certainly are not terrorists, and NSA was wasting resources and damaging the lives of innocent citizens. If the probability is one (p=1.00), then they definitely are terrorists, and NSA has saved the day. If the probability is fifty-fifty (p=0.50), that is the same as guessing the flip of a coin. The conditional probability that people are terrorists given that the NSA surveillance system says they are had better be very near to one (p=1.00) and very far from zero (p=0.00).

The mathematics of conditional probability were figured out by the English logician Thomas Bayes. If you Google “Bayes’ Theorem”, you will get more than a million hits. Bayes’ Theorem is taught in all elementary statistics classes. Everyone at NSA certainly knows Bayes’ Theorem.

To know if mass surveillance will work, Bayes’ theorem requires three estimations:

  1. The base-rate for terrorists, i.e. what proportion of the population are terrorists;
  2. The accuracy rate, i.e., the probability that real terrorists will be identified by NSA;
  3. The misidentification rate, i.e., the probability that innocent citizens will be misidentified by NSA as terrorists.

No matter how sophisticated and super-duper are NSA’s methods for identifying terrorists, no matter how big and fast are NSA’s computers, NSA’s accuracy rate will never be 100% and their misidentification rate will never be 0%. That fact, plus the extremely low base-rate for terrorists, means it is logically impossible for mass surveillance to be an effective way to find terrorists.

I will not put Bayes’ computational formula here. It is available in all elementary statistics books and is on the web should any readers be interested. But I will compute some conditional probabilities that people are terrorists given that NSA’s system of mass surveillance identifies them to be terrorists.

The US Census shows that there are about 300 million people living in the USA.

Suppose that there are 1,000 terrorists there as well, which is probably a high estimate. The base-rate would be 1 terrorist per 300,000 people. In percentages, that is .00033%, which is way less than 1%. Suppose that NSA surveillance has an accuracy rate of .40, which means that 40% of real terrorists in the USA will be identified by NSA’s monitoring of everyone’s email and phone calls. This is probably a high estimate, considering that terrorists are doing their best to avoid detection. There is no evidence thus far that NSA has been so successful at finding terrorists. And suppose NSA’s misidentification rate is .0001, which means that .01% of innocent people will be misidentified as terrorists, at least until they are investigated, detained and interrogated. Note that .01% of the US population is 30,000 people. With these suppositions, then the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.0132, which is near zero, very far from one. Ergo, NSA’s surveillance system is useless for finding terrorists.

Suppose that NSA’s system is more accurate than .40, let’s say, .70, which means that 70% of terrorists in the USA will be found by mass monitoring of phone calls and email messages. Then, by Bayes’ Theorem, the probability that a person is a terrorist if targeted by NSA is still only p=0.0228, which is near zero, far from one, and useless.

Suppose that NSA’s system is really, really good, with an accuracy rate of .90, and a misidentification rate of .00001, which means that only 3,000 innocent people are misidentified as terrorists. With these suppositions, the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.2308, which is far from one and well below flipping a coin. NSA’s domestic monitoring of everyone’s email and phone calls is useless for finding terrorists.
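Rudmin’s three scenarios can be checked directly. This sketch plugs the essay’s own figures (300 million people, 1,000 terrorists, and the stated accuracy and misidentification rates) into Bayes’ theorem:

```python
# Bayes' theorem for the surveillance problem:
# P(terrorist | flagged) =
#     P(flagged | terrorist) * P(terrorist)
#     -------------------------------------------------------------
#     P(flagged | terrorist) * P(terrorist)
#       + P(flagged | innocent) * P(innocent)

def posterior(base_rate, accuracy, false_positive_rate):
    """Probability that a flagged person is actually a terrorist."""
    true_hits = accuracy * base_rate
    false_hits = false_positive_rate * (1 - base_rate)
    return true_hits / (true_hits + false_hits)

POPULATION = 300_000_000
base_rate = 1_000 / POPULATION  # 1 terrorist per 300,000 people

# The three scenarios from the essay:
for accuracy, fp_rate in [(0.40, 0.0001), (0.70, 0.0001), (0.90, 0.00001)]:
    p = posterior(base_rate, accuracy, fp_rate)
    print(f"accuracy={accuracy}, misidentification={fp_rate}: p={p:.4f}")
```

The false positives dominate in every scenario because the innocent population is roughly 300,000 times larger than the terrorist population; no plausible improvement in accuracy overcomes that ratio.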

As an exercise to the reader, you can use the same analysis to show that data mining is an excellent tool for finding stolen credit cards, or stolen cell phones. Data mining is by no means useless; it’s just useless for this particular application.
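That exercise can be sketched with the same formula. The fraud figures below are illustrative assumptions, not numbers from the post; the point is that a realistic fraud base rate is thousands of times higher than the terrorist base rate, so the identical detector becomes useful:

```python
# Same Bayes' theorem calculation, applied to credit-card fraud.
# The 0.1% fraud base rate is an illustrative assumption; the
# detector quality matches the essay's best NSA scenario.

def posterior(base_rate, accuracy, false_positive_rate):
    """Probability that a flagged case is a true positive."""
    true_hits = accuracy * base_rate
    false_hits = false_positive_rate * (1 - base_rate)
    return true_hits / (true_hits + false_hits)

fraud = posterior(base_rate=0.001, accuracy=0.90,
                  false_positive_rate=0.00001)
terror = posterior(base_rate=1_000 / 300_000_000, accuracy=0.90,
                   false_positive_rate=0.00001)

print(f"fraud: p={fraud:.3f}")      # a flag is nearly always right
print(f"terrorism: p={terror:.3f}") # the same flag is nearly always wrong
```

With an identical detector, a flagged credit-card transaction is almost certainly fraudulent, while a flagged citizen is almost certainly innocent. The base rate, not the detector, decides whether data mining works.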

Posted on July 10, 2006 at 7:15 AM
