Entries Tagged "privacy"


"The Logic of Surveillance"

Interesting essay:

Surveillance is part of the system of control. “The more surveillance, the more control” is the majority belief amongst the ruling elites. Automated surveillance requires fewer “watchers”, and since the watchers cannot watch all the surveillance, long term storage increases the ability to find some “crime” anyone is guilty of.

[…]

This is one of the biggest problems the current elites face: they want the smallest enforcer class possible, so as to spend surplus on other things. The enforcer class is also insular, primarily concerned with itself (see Dorner) and is paid in large part by practical immunity to many laws and a license to abuse ordinary people. Not being driven primarily by justice or a desire to serve the public and with a code of honor which appears to largely center around self-protection and fraternity within the enforcer class, the enforcers’ reliability is in question: they are blunt tools and their fear for themselves makes them remarkably inefficient.

Surveillance expands the reach of the enforcer class and thus of the elites. Every camera, drone and so on reduces the number of eyes needed on the ground. The Stasi had millions of informers; surveillance reduces that requirement and the cost of the enforcer class.

Posted on March 12, 2013 at 6:45 AM

How the FBI Intercepts Cell Phone Data

Good article on “Stingrays,” which the FBI uses to monitor cell phone data. Basically, they trick the phone into joining a fake network. And, since cell phones inherently trust the network—as opposed to computers, which inherently do not trust the Internet—it’s easy to track people and collect data. There are lots of questions about whether or not it is illegal for the FBI to do this without a warrant. We know that the FBI has been doing this for almost twenty years, and that they know they’re on shaky legal ground.
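That trust asymmetry is the whole trick: a GSM-era handset camps on whichever base station advertises the strongest signal, and classic GSM authenticates the phone to the network but not the network to the phone. As a toy illustration only (this is not real baseband code, and all names and numbers are invented), the selection logic amounts to:

```python
# Toy sketch of GSM cell selection: the handset simply picks the tower
# with the strongest received signal, with no way to verify the tower
# is genuine. A Stingray wins by transmitting louder than the carrier.

def choose_tower(towers):
    """Pick the tower with the strongest received signal (in dBm)."""
    return max(towers, key=lambda t: t["signal_dbm"])

legit = [
    {"name": "carrier-A", "signal_dbm": -85, "genuine": True},
    {"name": "carrier-B", "signal_dbm": -92, "genuine": True},
]

# A Stingray parked nearby just broadcasts a stronger signal.
stingray = {"name": "fake-cell", "signal_dbm": -60, "genuine": False}

print(choose_tower(legit)["name"])               # carrier-A
print(choose_tower(legit + [stingray])["name"])  # fake-cell
```

Nothing in the selection step checks the `genuine` flag, which is the point: the phone has no cryptographic way to tell the difference.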

The latest release, amounting to some 300 selectively redacted pages, not only suggests that sophisticated cellphone spy gear has been widely deployed since the mid-’90s. It reveals that the FBI conducted training sessions on cell tracking techniques in 2007 and around the same time was operating an internal “secret” website with the purpose of sharing information and interactive media about “effective tools” for surveillance. There are also some previously classified emails between FBI agents that show the feds joking about using the spy gear. “Are you smart enough to turn the knobs by yourself?” one agent asks a colleague.

Of course, if a policeman actually has your phone, he can suck pretty much everything out of it—again, without a warrant.

Using a single “data extraction session” they were able to pull:

  • call activity
  • phone book directory information
  • stored voicemails and text messages
  • photos and videos
  • apps
  • eight different passwords
  • 659 geolocation points, including 227 cell towers and 403 WiFi networks with which the cell phone had previously connected.

Posted on March 7, 2013 at 1:39 PM

The NSA's Ragtime Surveillance Program and the Need for Leaks

A new book reveals details about the NSA’s Ragtime surveillance program:

A book published earlier this month, “Deep State: Inside the Government Secrecy Industry,” contains revelations about the NSA’s snooping efforts, based on information gleaned from NSA sources. According to a detailed summary by Shane Harris at the Washingtonian yesterday, the book discloses that a codename for a controversial NSA surveillance program is “Ragtime”—and that as many as 50 companies have apparently participated, by providing data as part of a domestic collection initiative.

Deep State, which was authored by Marc Ambinder and D.B. Grady, also offers insight into how the NSA deems individuals a potential threat. The agency uses an automated data-mining process based on “a computerized analysis that assigns probability scores to each potential target,” as Harris puts it in his summary. The domestic version of the program, dubbed “Ragtime-P,” can process as many as 50 different data sets at one time, focusing on international communications from or to the United States. Intercepted metadata, such as email headers showing “to” and “from” fields, is stored in a database called “Marina,” where it generally stays for five years.

About three dozen NSA officials have access to Ragtime’s intercepted data on domestic counter-terrorism, the book claims, though outside the agency some 1000 people “are privy to the full details of the program.” Internally, the NSA apparently only employs four or five individuals as “compliance staff” to make sure the snooping is falling in line with laws and regulations. Another section of the Ragtime program, “Ragtime-A,” is said to involve U.S.-based interception of foreign counterterrorism data, while “Ragtime-B” collects data from foreign governments that transits through the U.S., and “Ragtime-C” monitors counter proliferation activity.
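The excerpt’s phrase “a computerized analysis that assigns probability scores to each potential target” is all we know; the actual model is classified. Purely as illustration of the general shape—combining evidence from many data sets into one score per target—here is one naive approach. Everything in it (function names, numbers, the independence assumption) is invented, not drawn from the book:

```python
# Hypothetical sketch: combine per-data-set scores for each target by
# treating the sources as independent, so the combined score is
# P(flag) = 1 - product(1 - p_i). The real Ragtime model is not public.

from math import prod

def combined_score(per_source_scores):
    """Combine per-source probabilities, assuming independence."""
    return 1 - prod(1 - p for p in per_source_scores)

# Made-up per-source probabilities for two imaginary targets.
targets = {
    "target-1": [0.02, 0.01, 0.05],  # weak signals everywhere
    "target-2": [0.40, 0.30, 0.25],  # moderate signals everywhere
}

for name, scores in targets.items():
    print(name, round(combined_score(scores), 3))
```

Even this toy version shows why scale matters: with 50 data sets feeding in, many small, individually innocuous signals can sum to a “high-probability” target.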

The whole article is interesting, as is the detailed summary, but I thought this comment was particularly important:

The fact that NSA keeps applying separate codenames to programs that inevitably are closely intertwined is an important clue to what’s really going on. The government wants to pretend they are discrete surveillance programs in order to conceal, especially from Congressional oversight, how monstrous they are in sum. So they’ll give a separate briefing on Trailblazer or what have you, and for an hour everybody in the room acts as if the whole thing is carefully circumscribed and under control. And then if somebody ever finds out about another program (say ‘Moonraker’ or what have you), then they go ahead and offer a similarly reassuring briefing on that. And nobody in Congress has to acknowledge that the Total Information Awareness Program that was exposed and met with howls of protest…actually wasn’t shut down at all, just went back under the radar after being renamed (and renamed and renamed).

He’s right. The real threat isn’t any one particular secret program, it’s all of them put together. And by dividing up the programs into different code names, the big picture remains secret and we only ever get glimpses of it.

We need whistleblowers. Much of the information we have about the NSA’s and the Justice Department’s plans and capabilities—think Echelon, Total Information Awareness, and the post-9/11 telephone eavesdropping program—is over a decade old.

Frank Rieger of the Chaos Computer Club got it right in 2006:

We also need to know how the intelligence agencies work today. It is of highest priority to learn how the “we rather use backdoors than waste time cracking your keys”-methods work in practice on a large scale and what backdoors have been intentionally built into or left inside our systems….

Of course, the risk of publishing this kind of knowledge is high, especially for those on the dark side. So we need to build structures that can lessen the risk. We need anonymous submission systems for documents, methods to clean out eventual document fingerprinting (both on paper and electronic). And, of course, we need to develop means to identify the inevitable disinformation that will also be fed through these channels to confuse us.

Unfortunately, the Obama Administration’s mistreatment of Bradley Manning and its aggressive prosecution of other whistleblowers have probably succeeded in scaring off any copycats. Yochai Benkler writes:

The prosecution will likely not accept Manning’s guilty plea to lesser offenses as the final word. When the case goes to trial in June, they will try to prove that Manning is guilty of a raft of more serious offenses. Most aggressive and novel among these harsher offenses is the charge that by giving classified materials to WikiLeaks Manning was guilty of “aiding the enemy.” That’s when the judge will have to decide whether handing over classified materials to ProPublica or the New York Times, knowing that Al Qaeda can read these news outlets online, is indeed enough to constitute the capital offense of “aiding the enemy.”

Aiding the enemy is a broad and vague offense. In the past, it was used in hard-core cases where somebody handed over information about troop movements directly to someone the collaborator believed to be “the enemy,” to American POWs collaborating with North Korean captors, or to a German American citizen who was part of a German sabotage team during WWII. But the language of the statute is broad. It prohibits not only actually aiding the enemy, giving intelligence, or protecting the enemy, but also the broader crime of communicating—directly or indirectly—with the enemy without authorization. That’s the prosecution’s theory here: Manning knew that the materials would be made public, and he knew that Al Qaeda or its affiliates could read the publications in which the materials would be published. Therefore, the prosecution argues, by giving the materials to WikiLeaks, Manning was “indirectly” communicating with the enemy. Under this theory, there is no need to show that the defendant wanted or intended to aid the enemy. The prosecution must show only that he communicated the potentially harmful information, knowing that the enemy could read the publications to which he leaked the materials. This would be true whether Al Qaeda searched the WikiLeaks database or the New York Times‘….

This theory is unprecedented in modern American history.

[…]

If Bradley Manning is convicted of aiding the enemy, the introduction of a capital offense into the mix would dramatically elevate the threat to whistleblowers. The consequences for the ability of the press to perform its critical watchdog function in the national security arena will be dire. And then there is the principle of the thing. However technically defensible on the language of the statute, and however well-intentioned the individual prosecutors in this case may be, we have to look at ourselves in the mirror of this case and ask: Are we the America of Japanese Internment and Joseph McCarthy, or are we the America of Ida Tarbell and the Pentagon Papers? What kind of country makes communicating with the press for publication to the American public a death-eligible offense?

A country that’s much less free and much less secure.

Posted on March 6, 2013 at 1:24 PM

Technologies of Surveillance

It’s a new day for the New York Police Department, with technology increasingly informing the way cops do their jobs. With innovation comes new possibilities but also new concerns.

For one, the NYPD is testing a new type of security apparatus that uses terahertz radiation to detect guns under clothing from a distance. As Police Commissioner Ray Kelly explained to the Daily News back in January, “If something is obstructing the flow of that radiation—a weapon, for example—the device will highlight that object.”

Ignore, for a moment, the glaring constitutional concerns, which make the stop-and-frisk debate pale in comparison: virtual strip-searching, evasion of probable cause, potential racial profiling. Organizations like the American Civil Liberties Union are all over those, even though their opposition probably won’t make a difference. We’re scared of both terrorism and crime, even as the risks decrease; and when we’re scared, we’re willing to give up all sorts of freedoms to assuage our fears. Often, the courts go along.

A more pressing question is the effectiveness of technologies that are supposed to make us safer. These include the NYPD’s Domain Awareness System, developed by Microsoft, which aims to integrate massive quantities of data to alert cops when a crime may be taking place. Other innovations are surely in the pipeline, all promising to make the city safer. But are we being sold a bill of goods?

For example, press reports make the gun-detection machine look good. We see images from the camera that pretty clearly show a gun outlined under someone’s clothing. From that, we can imagine how this technology can spot gun-toting criminals as they enter government buildings or terrorize neighborhoods. Given the right inputs, we naturally construct these stories in our heads. The technology seems like a good idea, we conclude.

The reality is that we reach these conclusions much in the same way we decide that, say, drinking Mountain Dew makes you look cool. These are, after all, the products of for-profit companies, pushed by vendors looking to make sales. As such, they’re marketed no less aggressively than soda pop and deodorant. Those images of criminals with concealed weapons were carefully created both to demonstrate maximum effectiveness and to push our fear buttons. These companies deliberately craft stories of their effectiveness, both through advertising and through placement in television shows and movies, where police are often shown using high-powered tools to catch high-value targets with minimum complication.

The truth is that many of these technologies are nowhere near as reliable as claimed. They end up costing us gazillions of dollars and open the door for significant abuse. Of course, the vendors hope that by the time we realize this, they’re too embedded in our security culture to be removed.

The current poster child for this sort of morass is the airport full-body scanner. Rushed into airports after the underwear bomber Umar Farouk Abdulmutallab nearly blew up a Northwest Airlines flight in 2009, they made us feel better, even though they don’t work very well and, ironically, wouldn’t have caught Abdulmutallab with his underwear bomb. Both the Transportation Security Administration and the vendors repeatedly lied about their effectiveness, about whether they stored images, and about how safe they were. In January, finally, backscatter X-ray scanners were removed from airports because the company that made them couldn’t sufficiently blur the images so that they didn’t show travelers naked. Now only millimeter-wave full-body scanners remain.

Another example is closed-circuit television (CCTV) cameras. These have been marketed as a technological solution to both crime and understaffed police and security organizations. London, for example, is rife with them, and New York has plenty of its own. To many, it seems apparent that they make us safer, despite cries of Big Brother. The problem is that in study after study, researchers have concluded that they don’t.

Counterterrorist data mining and fusion centers: nowhere near as useful as those selling the technologies claimed. It’s the same with DNA testing and fingerprint technologies: both are far less accurate than most people believe. Even torture has been oversold as a security system—this time by a government instead of a company—despite decades of evidence that it doesn’t work and makes us all less safe.

It’s not that these technologies are totally useless. It’s that they’re expensive, and none of them is a panacea. Maybe there’s a use for a terahertz radar, and maybe the benefits of the technology are worth the costs. But we should not forget that there’s a profit motive at work, too.

An edited version of this essay, without links, appeared in the New York Daily News.

EDITED TO ADD (2/13): IBM’s version of this massive-data policing system is being tested in Rio de Janeiro.

Posted on March 5, 2013 at 6:28 AM

Automobile Data Surveillance and the Future of Black Boxes

Tesla Motors gave one of its electric cars to John Broder, a very outspoken electric-car skeptic from the New York Times, for a test drive. After a negative review, Tesla revealed that it logged a dizzying amount of data from that test drive. The company then matched the reporter’s claims against its logs and published a rebuttal. Broder rebutted the rebuttal, and others have tried to figure out who is lying and who is not.

What’s interesting to me is the sheer amount of data Tesla Motors automatically collected about the test drive. From the rebuttal:

After a negative experience several years ago with Top Gear, a popular automotive show, where they pretended that our car ran out of energy and had to be pushed back to the garage, we always carefully data log media drives.

Read the article to see what they logged: power consumption, speed, ambient temperature, control settings, location, and so on.
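Those channels can be pictured as a periodic telemetry record per drive. As a minimal sketch only (the field names and sampling interval are illustrative guesses, not Tesla’s actual schema), such a logger is little more than:

```python
# Minimal sketch of a drive logger in the spirit of what Tesla describes:
# sample a handful of channels on a timer and serialize them for later
# comparison against a driver's claims. Fields are invented for the demo.

import csv
import io

FIELDS = ["t_sec", "speed_mph", "power_kw", "ambient_temp_f",
          "climate_setting", "lat", "lon"]

def log_samples(samples):
    """Serialize a sequence of telemetry samples to CSV."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    for s in samples:
        writer.writerow(s)
    return buf.getvalue()

# Two made-up samples, ten seconds apart.
drive = [
    {"t_sec": 0, "speed_mph": 0, "power_kw": 0.4, "ambient_temp_f": 31,
     "climate_setting": "72F", "lat": 40.71, "lon": -74.01},
    {"t_sec": 10, "speed_mph": 54, "power_kw": 28.0, "ambient_temp_f": 31,
     "climate_setting": "72F", "lat": 40.72, "lon": -74.00},
]

print(log_samples(drive).splitlines()[0])  # prints the header row
```

At one sample every few seconds, a full day of driving is a trivially small file, which is exactly why “collect everything all the time” becomes the default once storage is cheap.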

The stakes are high here. Broder and the New York Times are concerned about their journalistic integrity, which affects their brand. And Tesla Motors wants to sell cars.

The implication is that Tesla Motors only does this for media test drives, but it gives you an idea of the sort of things that will be collected once automobile black boxes become the norm. We’re used to airplane black boxes, which only collected a small amount of data from the minutes just before an incident. But that was back when data was expensive. Now that it’s cheap, expect black boxes to collect everything all the time. And once it’s collected, it’ll be used. By auto manufacturers, by insurance companies, by car rental companies, by marketers. The list will be long.

But as we’re learning from this particular back-and-forth between Broder and Tesla Motors, even intense electronic surveillance of the actions of a person in an enclosed space did not succeed in providing an unambiguous record of what happened. To know that, the car company would have had to have someone in the car with the journalist.

This will increasingly be a problem as we are judged by our data. And in most cases, neither side will spend this sort of effort trying to figure out what really happened.

EDITED TO ADD (2/21): CNN weighs in.

Posted on February 18, 2013 at 6:14 AM

Anti-Cheating Security in Casinos

Long article.

With over a thousand cameras operating 24/7, the monitoring room creates tremendous amounts of data every day, most of which goes unseen. Six technicians watch about 40 monitors, but all the feeds are saved for later analysis. One day, as with OCR scanning, it might be possible to search all that data for suspicious activity. Say, a baccarat player who leaves his seat, disappears for a few minutes, and is replaced with another player who hits an impressive winning streak. An alert human might spot the collusion, but even better, video analytics might flag the scene for further review. The valuable trend in surveillance, Whiting says, is toward this data-driven analysis (even when much of the job still involves old-fashioned gumshoe work). “It’s the data,” he says. “And cameras now are data. So it’s all data. It’s just learning to understand that data is important.”
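The seat-swap-then-winning-streak flag the article imagines is, at bottom, a simple rule over an event stream. As a hypothetical sketch (the event format and thresholds are invented, not from the article), it might look like:

```python
# Hypothetical sketch of the analytics rule described above: for one seat,
# flag a player who sits down in a seat someone else just vacated and then
# wins several hands in a row. Event format is invented for this demo.

def flag_collusion(events, streak=3):
    """events: (action, player) tuples; action in {sit, leave, win, lose}."""
    flagged = set()
    last_occupant = None   # who held the seat before the current player
    current = None
    recent_swap = False
    wins = 0
    for action, player in events:
        if action == "sit":
            recent_swap = last_occupant is not None and player != last_occupant
            current = player
            wins = 0
        elif action == "leave":
            last_occupant = current
            current = None
        elif action == "win" and player == current:
            wins += 1
            if recent_swap and wins >= streak:
                flagged.add(player)
        elif action == "lose":
            wins = 0
    return flagged

table = [
    ("sit", "alice"), ("lose", "alice"), ("leave", "alice"),
    ("sit", "bob"), ("win", "bob"), ("win", "bob"), ("win", "bob"),
]
print(flag_collusion(table))  # {'bob'}
```

The hard part in practice is not this rule but the computer-vision layer underneath it: turning raw camera feeds into reliable sit/leave/win events is exactly the “cameras now are data” problem Whiting describes.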

Posted on February 14, 2013 at 6:32 AM

