How the FBI Intercepts Cell Phone Data

Good article on “Stingrays,” which the FBI uses to monitor cell phone data. Basically, they trick the phone into joining a fake network. And since cell phones inherently trust the network—as opposed to computers, which inherently do not trust the Internet—it’s easy to track people and collect data. There are lots of questions about whether it is legal for the FBI to do this without a warrant. We know that the FBI has been doing this for almost twenty years, and that they know they’re on shaky legal ground.

The latest release, amounting to some 300 selectively redacted pages, not only suggests that sophisticated cellphone spy gear has been widely deployed since the mid-’90s. It reveals that the FBI conducted training sessions on cell tracking techniques in 2007 and around the same time was operating an internal “secret” website with the purpose of sharing information and interactive media about “effective tools” for surveillance. There are also some previously classified emails between FBI agents that show the feds joking about using the spy gear. “Are you smart enough to turn the knobs by yourself?” one agent asks a colleague.

Of course, if a policeman actually has your phone, he can suck pretty much everything out of it—again, without a warrant.

Using a single “data extraction session” they were able to pull:

  • call activity
  • phone book directory information
  • stored voicemails and text messages
  • photos and videos
  • apps
  • eight different passwords
  • 659 geolocation points, including 227 cell towers and 403 WiFi networks with which the cell phone had previously connected.

Posted on March 7, 2013 at 1:39 PM

The NSA's Ragtime Surveillance Program and the Need for Leaks

A new book reveals details about the NSA’s Ragtime surveillance program:

A book published earlier this month, “Deep State: Inside the Government Secrecy Industry,” contains revelations about the NSA’s snooping efforts, based on information gleaned from NSA sources. According to a detailed summary by Shane Harris at the Washingtonian yesterday, the book discloses that a codename for a controversial NSA surveillance program is “Ragtime”—and that as many as 50 companies have apparently participated, by providing data as part of a domestic collection initiative.

Deep State, which was authored by Marc Ambinder and D.B. Grady, also offers insight into how the NSA deems individuals a potential threat. The agency uses an automated data-mining process based on “a computerized analysis that assigns probability scores to each potential target,” as Harris puts it in his summary. The domestic version of the program, dubbed “Ragtime-P,” can process as many as 50 different data sets at one time, focusing on international communications from or to the United States. Intercepted metadata, such as email headers showing “to” and “from” fields, is stored in a database called “Marina,” where it generally stays for five years.

About three dozen NSA officials have access to Ragtime’s intercepted data on domestic counter-terrorism, the book claims, though outside the agency some 1000 people “are privy to the full details of the program.” Internally, the NSA apparently only employs four or five individuals as “compliance staff” to make sure the snooping is falling in line with laws and regulations. Another section of the Ragtime program, “Ragtime-A,” is said to involve U.S.-based interception of foreign counterterrorism data, while “Ragtime-B” collects data from foreign governments that transits through the U.S., and “Ragtime-C” monitors counter proliferation activity.

The whole article is interesting, as is the detailed summary, but I thought this comment was particularly important:

The fact that NSA keeps applying separate codenames to programs that inevitably are closely intertwined is an important clue to what’s really going on. The government wants to pretend they are discrete surveillance programs in order to conceal, especially from Congressional oversight, how monstrous they are in sum. So they’ll give a separate briefing on Trailblazer or what have you, and for an hour everybody in the room acts as if the whole thing is carefully circumscribed and under control. And then if somebody ever finds out about another program (say ‘Moonraker’ or what have you), then they go ahead and offer a similarly reassuring briefing on that. And nobody in Congress has to acknowledge that the Total Information Awareness Program that was exposed and met with howls of protest…actually wasn’t shut down at all, just went back under the radar after being renamed (and renamed and renamed).

He’s right. The real threat isn’t any one particular secret program, it’s all of them put together. And by dividing up the programs into different code names, the big picture remains secret and we only ever get glimpses of it.

We need whistleblowers. Much of the information we have about the NSA’s and the Justice Department’s plans and capabilities—think Echelon, Total Information Awareness, and the post-9/11 telephone eavesdropping program—is over a decade old.

Frank Rieger of the Chaos Computer Club got it right in 2006:

We also need to know how the intelligence agencies work today. It is of highest priority to learn how the “we rather use backdoors than waste time cracking your keys”-methods work in practice on a large scale and what backdoors have been intentionally built into or left inside our systems….

Of course, the risk of publishing this kind of knowledge is high, especially for those on the dark side. So we need to build structures that can lessen the risk. We need anonymous submission systems for documents, methods to clean out eventual document fingerprinting (both on paper and electronic). And, of course, we need to develop means to identify the inevitable disinformation that will also be fed through these channels to confuse us.

Unfortunately, the Obama Administration’s mistreatment of Bradley Manning and its aggressive prosecution of other whistleblowers has probably succeeded in scaring any copycats. Yochai Benkler writes:

The prosecution will likely not accept Manning’s guilty plea to lesser offenses as the final word. When the case goes to trial in June, they will try to prove that Manning is guilty of a raft of more serious offenses. Most aggressive and novel among these harsher offenses is the charge that by giving classified materials to WikiLeaks Manning was guilty of “aiding the enemy.” That’s when the judge will have to decide whether handing over classified materials to ProPublica or the New York Times, knowing that Al Qaeda can read these news outlets online, is indeed enough to constitute the capital offense of “aiding the enemy.”

Aiding the enemy is a broad and vague offense. In the past, it was used in hard-core cases where somebody handed over information about troop movements directly to someone the collaborator believed to be “the enemy,” to American POWs collaborating with North Korean captors, or to a German American citizen who was part of a German sabotage team during WWII. But the language of the statute is broad. It prohibits not only actually aiding the enemy, giving intelligence, or protecting the enemy, but also the broader crime of communicating—directly or indirectly—with the enemy without authorization. That’s the prosecution’s theory here: Manning knew that the materials would be made public, and he knew that Al Qaeda or its affiliates could read the publications in which the materials would be published. Therefore, the prosecution argues, by giving the materials to WikiLeaks, Manning was “indirectly” communicating with the enemy. Under this theory, there is no need to show that the defendant wanted or intended to aid the enemy. The prosecution must show only that he communicated the potentially harmful information, knowing that the enemy could read the publications to which he leaked the materials. This would be true whether Al Qaeda searched the WikiLeaks database or the New York Times‘….

This theory is unprecedented in modern American history.

[…]

If Bradley Manning is convicted of aiding the enemy, the introduction of a capital offense into the mix would dramatically elevate the threat to whistleblowers. The consequences for the ability of the press to perform its critical watchdog function in the national security arena will be dire. And then there is the principle of the thing. However technically defensible on the language of the statute, and however well-intentioned the individual prosecutors in this case may be, we have to look at ourselves in the mirror of this case and ask: Are we the America of Japanese Internment and Joseph McCarthy, or are we the America of Ida Tarbell and the Pentagon Papers? What kind of country makes communicating with the press for publication to the American public a death-eligible offense?

A country that’s much less free and much less secure.

Posted on March 6, 2013 at 1:24 PM

Al Qaeda Document on Avoiding Drone Strikes

Interesting:

3 – Spreading the reflective pieces of glass on a car or on the roof of the building.

4 – Placing a group of skilled snipers to hunt the drone, especially the reconnaissance ones because they fly low, about six kilometers or less.

5 – Jamming of and confusing of electronic communication using the ordinary water-lifting dynamo fitted with a 30-meter copper pole.

6 – Jamming of and confusing of electronic communication using old equipment and keeping them 24-hour running because of their strong frequencies and it is possible using simple ideas of deception of equipment to attract the electronic waves devices similar to that used by the Yugoslav army when they used the microwave (oven) in attracting and confusing the NATO missiles fitted with electromagnetic searching devices.

Posted on March 6, 2013 at 6:50 AM

Marketing at the RSA Conference

Marcus Ranum has an interesting screed on “booth babes” in the RSA Conference exhibition hall:

I’m not making a moral argument about sexism in our industry or the objectification of women. I could (and probably should) but it’s easier to just point out the obvious: the only customers that will be impressed by anyone’s ability to hire pretty models to work their booth aren’t going to be the ones signing the big purchase orders. And, it’s possible that they’re thinking your sales team are going to be a bunch of testosterone-laden assholes who’d be better off selling used tires. If some company wants to appeal to the consumer that’s going to jump at the T&A maybe they should relocate up the street to O’Farrell where they can include a happy ending with their product demo.

Mark Rothman on the same topic.

EDITED TO ADD (3/11): Winn Schwartau makes a similar point.

Posted on March 5, 2013 at 1:58 PM

Technologies of Surveillance

It’s a new day for the New York Police Department, with technology increasingly informing the way cops do their jobs. With innovation comes new possibilities but also new concerns.

For one, the NYPD is testing a new type of security apparatus that uses terahertz radiation to detect guns under clothing from a distance. As Police Commissioner Ray Kelly explained to the Daily News back in January, if something is obstructing the flow of that radiation—a weapon, for example—the device will highlight that object.

Ignore, for a moment, the glaring constitutional concerns, which make the stop-and-frisk debate pale in comparison: virtual strip-searching, evasion of probable cause, potential racial profiling. Organizations like the American Civil Liberties Union are all over those, even though their opposition probably won’t make a difference. We’re scared of both terrorism and crime, even as the risks decrease; and when we’re scared, we’re willing to give up all sorts of freedoms to assuage our fears. Often, the courts go along.

A more pressing question is the effectiveness of technologies that are supposed to make us safer. These include the NYPD’s Domain Awareness System, developed by Microsoft, which aims to integrate massive quantities of data to alert cops when a crime may be taking place. Other innovations are surely in the pipeline, all promising to make the city safer. But are we being sold a bill of goods?

For example, press reports make the gun-detection machine look good. We see images from the camera that pretty clearly show a gun outlined under someone’s clothing. From that, we can imagine how this technology can spot gun-toting criminals as they enter government buildings or terrorize neighborhoods. Given the right inputs, we naturally construct these stories in our heads. The technology seems like a good idea, we conclude.

The reality is that we reach these conclusions much in the same way we decide that, say, drinking Mountain Dew makes you look cool. These are, after all, the products of for-profit companies, pushed by vendors looking to make sales. As such, they’re marketed no less aggressively than soda pop and deodorant. Those images of criminals with concealed weapons were carefully created both to demonstrate maximum effectiveness and to push our fear buttons. These companies deliberately craft stories of their effectiveness, both through advertising and through placement in television and movies, where police are often shown using high-powered tools to catch high-value targets with minimum complication.

The truth is that many of these technologies are nowhere near as reliable as claimed. They end up costing us gazillions of dollars and open the door for significant abuse. Of course, the vendors hope that by the time we realize this, they’re too embedded in our security culture to be removed.

The current poster child for this sort of morass is the airport full-body scanner. Rushed into airports after the underwear bomber Umar Farouk Abdulmutallab nearly blew up a Northwest Airlines flight in 2009, these scanners made us feel better, even though they don’t work very well and, ironically, wouldn’t have caught Abdulmutallab with his underwear bomb. Both the Transportation Security Administration and vendors repeatedly lied about their effectiveness, whether they stored images, and how safe they were. In January, finally, backscatter X-ray scanners were removed from airports because the company that made them couldn’t sufficiently blur the images so they didn’t show travelers naked. Now, only millimeter-wave full-body scanners remain.

Another example is closed-circuit television (CCTV) cameras. These have been marketed as a technological solution to both crime and understaffed police and security organizations. London, for example, is rife with them, and New York has plenty of its own. To many, it seems apparent that they make us safer, despite cries of Big Brother. The problem is that in study after study, researchers have concluded that they don’t.

Counterterrorist data mining and fusion centers: nowhere near as useful as those selling the technologies claimed. It’s the same with DNA testing and fingerprint technologies: both are far less accurate than most people believe. Even torture has been oversold as a security system—this time by a government instead of a company—despite decades of evidence that it doesn’t work and makes us all less safe.

It’s not that these technologies are totally useless. It’s that they’re expensive, and none of them is a panacea. Maybe there’s a use for a terahertz radar, and maybe the benefits of the technology are worth the costs. But we should not forget that there’s a profit motive at work, too.

An edited version of this essay, without links, appeared in the New York Daily News.

EDITED TO ADD (2/13): IBM’s version of this massive-data policing system is being tested in Rio de Janeiro.

Posted on March 5, 2013 at 6:28 AM

New Internet Porn Scam

I hadn’t heard of this one before. In New Zealand, people viewing adult websites—it’s unclear whether these are honeypot sites, or malware that notices the site being viewed—get a pop-up message claiming it’s from the NZ Police and demanding payment of an instant fine for viewing illegal pornography.

EDITED TO ADD (2/12): There’s a Japanese variant of this called “one-click fraud.”

Posted on March 4, 2013 at 2:04 PM

Getting Security Incentives Right

One of the problems with motivating proper security behavior within an organization is that the incentives are all wrong. It doesn’t matter how much management tells employees that security is important, employees know when it really isn’t—when getting the job done cheaply and on schedule is much more important.

It seems to me that his co-workers understand the risks better than he does. They know what the real risks are at work, and that they all revolve around not getting the job done. Those risks are real and tangible, and employees feel them all the time. The risks of not following security procedures are much less real. Maybe the employee will get caught, but probably not. And even if he does get caught, the penalties aren’t serious.

Given this accurate risk analysis, any rational employee will regularly circumvent security to get his or her job done. That’s what the company rewards, and that’s what the company actually wants.

“Fire someone who breaks security procedure, quickly and publicly,” I suggested to the presenter. “That’ll increase security awareness faster than any of your posters or lectures or newsletters.” If the risks are real, people will get it.

Similarly, there’s supposedly an old Chinese proverb that goes “hang one, warn a thousand.” Or to put it another way, we’re really good at risk management. And there’s John Byng, whose execution gave rise to the Voltaire quote (originally in French): “in this country, it is good to kill an admiral from time to time, in order to encourage the others.”

I thought of all this when I read about the new security procedures surrounding the upcoming papal election:

According to the order, which the Vatican made available in English on Monday afternoon, those few who are allowed into the secret vote to act as aides will be required to take an oath of secrecy.

“I will observe absolute and perpetual secrecy with all who are not part of the College of Cardinal electors concerning all matters directly or indirectly related to the ballots cast and their scrutiny for the election of the Supreme Pontiff,” the oath reads.

“I declare that I take this oath fully aware that an infraction thereof will make me subject to the penalty of excommunication ‘latae sententiae’, which is reserved to the Apostolic See,” it continues.

Excommunication is like being fired, only it lasts for eternity.

I’m not optimistic about the College of Cardinals being able to maintain absolute secrecy during the election, because electronic devices have become so small, and electronic communications so ubiquitous. Unless someone wins on one of the first ballots—a 2/3 majority is required to elect the next pope, so if the various factions entrench they could be at it for a while—there are going to be leaks. Perhaps accidental, perhaps strategic: these cardinals are fallible men, after all.
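As an aside, the two-thirds rule is simple arithmetic: the threshold is the smallest whole number of votes that is at least two-thirds of the electors. A quick sketch, assuming the 115 cardinal electors reported for the March 2013 conclave:

```python
import math

def votes_needed(electors: int) -> int:
    """Smallest number of votes that is at least two-thirds of the electors."""
    return math.ceil(2 * electors / 3)

# The March 2013 conclave had 115 cardinal electors.
print(votes_needed(115))  # 77
```

So a candidate needs 77 votes out of 115, and a blocking faction needs only 39 to deadlock the ballot.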

Posted on March 4, 2013 at 6:38 AM