Entries Tagged "leaks"


Text Message Retention Policies

The FBI wants cell phone carriers to store SMS messages for a long time, enabling them to conduct surveillance backwards in time. Nothing new there—data retention laws are being debated in many countries around the world—but this was something I did not know:

Wireless providers’ current SMS retention policies vary. An internal Justice Department document (PDF) that the ACLU obtained through the Freedom of Information Act shows that, as of 2010, AT&T, T-Mobile, and Sprint did not store the contents of text messages. Verizon did for up to five days, a change from its earlier no-logs-at-all position, and Virgin Mobile kept them for 90 days. The carriers generally kept metadata such as the phone numbers associated with the text for 90 days to 18 months; AT&T was an outlier, keeping it for as long as seven years.

An e-mail message from a detective in the Baltimore County Police Department, leaked by Antisec and reproduced in a 2011 Wired article, says that Verizon keeps “text message content on their servers for 3-5 days.” And: “Sprint stores their text message content going back 12 days and Nextel content for 7 days. AT&T/Cingular do not preserve content at all. Us Cellular: 3-5 days Boost Mobile LLC: 7 days”

That second set of data is from 2009.

Leaks seem to be the primary way we learn how our privacy is being violated these days—we need more of them.

EDITED TO ADD (4/12): Discussion of Canadian policy.

Posted on March 21, 2013 at 1:17 PM

On Secrecy

Interesting law paper: “The Implausibility of Secrecy,” by Mark Fenster.

Abstract: Government secrecy frequently fails. Despite the executive branch’s obsessive hoarding of certain kinds of documents and its constitutional authority to do so, recent high-profile events (among them the WikiLeaks episode, the Obama administration’s celebrated leak prosecutions, and the widespread disclosure by high-level officials of flattering confidential information to sympathetic reporters) undercut the image of a state that can classify and control its information. The effort to control government information requires human, bureaucratic, technological, and textual mechanisms that regularly founder or collapse in an administrative state, sometimes immediately and sometimes after an interval. Leaks, mistakes, open sources: each of these constitutes a path out of the government’s informational clutches. As a result, permanent, long-lasting secrecy of any sort and to any degree is costly and difficult to accomplish.

This article argues that information control is an implausible goal. It critiques some of the foundational assumptions of constitutional and statutory laws that seek to regulate information flows, in the process countering and complicating the extensive literature on secrecy, transparency, and leaks that rests on those assumptions. By focusing on the functional issues relating to government information and broadening its study beyond the much-examined phenomenon of leaks, the article catalogs and then illustrates in a series of case studies the formal and informal means by which information flows out of the state. These informal means play an especially important role in limiting both the ability of state actors to keep secrets and the extent to which formal legal doctrines can control the flow of government information. The same bureaucracy and legal regime that keep open government laws from creating a transparent state also keep the executive branch from creating a perfect informational dam. The article draws several implications from this descriptive, functional argument for legal reform and for the study of administrative and constitutional law.

Posted on March 14, 2013 at 12:19 PM

The NSA's Ragtime Surveillance Program and the Need for Leaks

A new book reveals details about the NSA’s Ragtime surveillance program:

A book published earlier this month, “Deep State: Inside the Government Secrecy Industry,” contains revelations about the NSA’s snooping efforts, based on information gleaned from NSA sources. According to a detailed summary by Shane Harris at the Washingtonian yesterday, the book discloses that a codename for a controversial NSA surveillance program is “Ragtime”—and that as many as 50 companies have apparently participated, by providing data as part of a domestic collection initiative.

Deep State, which was authored by Marc Ambinder and D.B. Grady, also offers insight into how the NSA deems individuals a potential threat. The agency uses an automated data-mining process based on “a computerized analysis that assigns probability scores to each potential target,” as Harris puts it in his summary. The domestic version of the program, dubbed “Ragtime-P,” can process as many as 50 different data sets at one time, focusing on international communications from or to the United States. Intercepted metadata, such as email headers showing “to” and “from” fields, is stored in a database called “Marina,” where it generally stays for five years.

About three dozen NSA officials have access to Ragtime’s intercepted data on domestic counter-terrorism, the book claims, though outside the agency some 1000 people “are privy to the full details of the program.” Internally, the NSA apparently only employs four or five individuals as “compliance staff” to make sure the snooping is falling in line with laws and regulations. Another section of the Ragtime program, “Ragtime-A,” is said to involve U.S.-based interception of foreign counterterrorism data, while “Ragtime-B” collects data from foreign governments that transits through the U.S., and “Ragtime-C” monitors counter proliferation activity.

The whole article is interesting, as is the detailed summary, but I thought this comment was particularly important:

The fact that NSA keeps applying separate codenames to programs that inevitably are closely intertwined is an important clue to what’s really going on. The government wants to pretend they are discrete surveillance programs in order to conceal, especially from Congressional oversight, how monstrous they are in sum. So they’ll give a separate briefing on Trailblazer or what have you, and for an hour everybody in the room acts as if the whole thing is carefully circumscribed and under control. And then if somebody ever finds out about another program (say ‘Moonraker’ or what have you), then they go ahead and offer a similarly reassuring briefing on that. And nobody in Congress has to acknowledge that the Total Information Awareness Program that was exposed and met with howls of protest…actually wasn’t shut down at all, just went back under the radar after being renamed (and renamed and renamed).

He’s right. The real threat isn’t any one particular secret program, it’s all of them put together. And by dividing up the programs into different code names, the big picture remains secret and we only ever get glimpses of it.

We need whistleblowers. Much of the information we have about the NSA’s and the Justice Department’s plans and capabilities—think Echelon, Total Information Awareness, and the post-9/11 telephone eavesdropping program—is over a decade old.

Frank Rieger of the Chaos Computer Club got it right in 2006:

We also need to know how the intelligence agencies work today. It is of highest priority to learn how the “we rather use backdoors than waste time cracking your keys”-methods work in practice on a large scale and what backdoors have been intentionally built into or left inside our systems….

Of course, the risk of publishing this kind of knowledge is high, especially for those on the dark side. So we need to build structures that can lessen the risk. We need anonymous submission systems for documents, methods to clean out eventual document fingerprinting (both on paper and electronic). And, of course, we need to develop means to identify the inevitable disinformation that will also be fed through these channels to confuse us.
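Rieger’s point about cleaning out document fingerprinting can be made concrete. Here is a minimal Python sketch that strips only the simplest textual watermarks (invisible Unicode characters and trailing-space patterns); it is illustrative only, and real fingerprinting (printer tracking dots, per-copy synonym substitution) requires far more than this:

```python
import re
import unicodedata

# Zero-width and invisible characters sometimes used to watermark text.
ZERO_WIDTH = "\u200b\u200c\u200d\u2060\ufeff"

def scrub_text_fingerprints(text: str) -> str:
    """Best-effort removal of common textual watermarks: invisible
    characters, lookalike Unicode variants, and trailing-space codes."""
    # Drop invisible characters that can encode a per-copy identifier.
    text = "".join(ch for ch in text if ch not in ZERO_WIDTH)
    # Fold visually identical Unicode variants to one canonical form.
    text = unicodedata.normalize("NFKC", text)
    # Strip trailing whitespace, another classic place to hide bits.
    return re.sub(r"[ \t]+$", "", text, flags=re.M)

marked = "Meet at noon.\u200b  \nBring the files.\u2060 "
clean = scrub_text_fingerprints(marked)  # "Meet at noon.\nBring the files."
```

The disinformation problem Rieger mentions is not addressed by anything like this: scrubbing can remove a marker, but it cannot tell you whether the document itself was bait.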

Unfortunately, the Obama Administration’s mistreatment of Bradley Manning and its aggressive prosecution of other whistleblowers have probably succeeded in scaring off would-be copycats. Yochai Benkler writes:

The prosecution will likely not accept Manning’s guilty plea to lesser offenses as the final word. When the case goes to trial in June, they will try to prove that Manning is guilty of a raft of more serious offenses. Most aggressive and novel among these harsher offenses is the charge that by giving classified materials to WikiLeaks Manning was guilty of “aiding the enemy.” That’s when the judge will have to decide whether handing over classified materials to ProPublica or the New York Times, knowing that Al Qaeda can read these news outlets online, is indeed enough to constitute the capital offense of “aiding the enemy.”

Aiding the enemy is a broad and vague offense. In the past, it was used in hard-core cases where somebody handed over information about troop movements directly to someone the collaborator believed to be “the enemy,” to American POWs collaborating with North Korean captors, or to a German American citizen who was part of a German sabotage team during WWII. But the language of the statute is broad. It prohibits not only actually aiding the enemy, giving intelligence, or protecting the enemy, but also the broader crime of communicating—directly or indirectly—with the enemy without authorization. That’s the prosecution’s theory here: Manning knew that the materials would be made public, and he knew that Al Qaeda or its affiliates could read the publications in which the materials would be published. Therefore, the prosecution argues, by giving the materials to WikiLeaks, Manning was “indirectly” communicating with the enemy. Under this theory, there is no need to show that the defendant wanted or intended to aid the enemy. The prosecution must show only that he communicated the potentially harmful information, knowing that the enemy could read the publications to which he leaked the materials. This would be true whether Al Qaeda searched the WikiLeaks database or the New York Times‘….

This theory is unprecedented in modern American history.

[…]

If Bradley Manning is convicted of aiding the enemy, the introduction of a capital offense into the mix would dramatically elevate the threat to whistleblowers. The consequences for the ability of the press to perform its critical watchdog function in the national security arena will be dire. And then there is the principle of the thing. However technically defensible on the language of the statute, and however well-intentioned the individual prosecutors in this case may be, we have to look at ourselves in the mirror of this case and ask: Are we the America of Japanese Internment and Joseph McCarthy, or are we the America of Ida Tarbell and the Pentagon Papers? What kind of country makes communicating with the press for publication to the American public a death-eligible offense?

A country that’s much less free and much less secure.

Posted on March 6, 2013 at 1:24 PM

Getting Security Incentives Right

One of the problems with motivating proper security behavior within an organization is that the incentives are all wrong. It doesn’t matter how much management tells employees that security is important, employees know when it really isn’t—when getting the job done cheaply and on schedule is much more important.

It seems to me that his co-workers understand the risks better than he does. They know what the real risks are at work, and that they all revolve around not getting the job done. Those risks are real and tangible, and employees feel them all the time. The risks of not following security procedures are much less real. Maybe the employee will get caught, but probably not. And even if he does get caught, the penalties aren’t serious.

Given this accurate risk analysis, any rational employee will regularly circumvent security to get his or her job done. That’s what the company rewards, and that’s what the company actually wants.

“Fire someone who breaks security procedure, quickly and publicly,” I suggested to the presenter. “That’ll increase security awareness faster than any of your posters or lectures or newsletters.” If the risks are real, people will get it.

Similarly, there’s supposedly an old Chinese proverb that goes “hang one, warn a thousand.” Or to put it another way, we’re really good at risk management. And there’s John Byng, whose execution gave rise to Voltaire’s quip (originally in French): “in this country, it is good to kill an admiral from time to time, in order to encourage the others.”

I thought of all this when I read about the new security procedures surrounding the upcoming papal election:

According to the order, which the Vatican made available in English on Monday afternoon, those few who are allowed into the secret vote to act as aides will be required to take an oath of secrecy.

“I will observe absolute and perpetual secrecy with all who are not part of the College of Cardinal electors concerning all matters directly or indirectly related to the ballots cast and their scrutiny for the election of the Supreme Pontiff,” the oath reads.

“I declare that I take this oath fully aware that an infraction thereof will make me subject to the penalty of excommunication ‘latae sententiae’, which is reserved to the Apostolic See,” it continues.

Excommunication is like being fired, only it lasts for eternity.

I’m not optimistic about the College of Cardinals being able to maintain absolute secrecy during the election, because electronic devices have become so small, and electronic communications so ubiquitous. Unless someone wins on one of the first ballots—a 2/3 majority is required to elect the next pope, so if the various factions entrench they could be at it for a while—there are going to be leaks. Perhaps accidental, perhaps strategic: these cardinals are fallible men, after all.

Posted on March 4, 2013 at 6:38 AM

State Department Redacts WikiLeaks Cables

The ACLU filed a FOIA request for a bunch of cables whose complete versions WikiLeaks had already released. This is what happened:

The agency released redacted versions of 11 and withheld the other 12 in full.

The five excerpts below show the government’s selective and self-serving decisions to withhold information. Because the leaked versions of these cables have already been widely distributed, the redacted releases provide unique insight into the government’s selective decisions to hide information from the American public.

Click on the link to see what was redacted.

EDITED TO ADD (3/2): Commentary:

The Freedom of Information Act provides exceptions for a number of classes of information, but the State Department’s declassification decisions appear to be based not on the criteria specified in the statute, but rather on whether the documents embarrass the US or portray the US in a negative light.

Posted on March 1, 2012 at 1:32 PM

"Going Dark" vs. a "Golden Age of Surveillance"

It’s a policy debate that’s been going on since the crypto wars of the early 1990s. The FBI, NSA, and other agencies continue to claim they’re losing their ability to engage in surveillance: that it’s “going dark.” Whether the cause of the problem is encrypted e-mail, digital telephony, or Skype, the bad guys use it to communicate, so we need to pass laws like CALEA to force these services to be made insecure, so that the government can eavesdrop.

The counter-argument is the “Golden Age of Surveillance”—that the massive increase in online data and Internet communications systems gives the government a far greater ability to eavesdrop on our lives. They can get your e-mail from Google, regardless of whether you use encryption. They can install an eavesdropping program on your computer, regardless of whether you use Skype. They can monitor your Facebook conversations, and learn things that just weren’t online a decade ago. Today we all carry devices that track our locations 24/7: our cell phones.

In this essay, CDT fellows (and law professors) challenge the “going dark” metaphor and make the case for “the golden age of surveillance.” Yes, wiretapping is harder; but so many other types of surveillance are easier.

A simple test can help the reader decide between the “going dark” and “golden age of surveillance” hypotheses. Suppose the agencies had a choice of a 1990-era package or a 2011-era package. The first package would include the wiretap authorities as they existed pre-encryption, but would lack the new techniques for location tracking, confederate identification, access to multiple databases, and data mining. The second package would match current capabilities: some encryption-related obstacles, but increased use of wiretaps, as well as the capabilities for location tracking, confederate tracking and data mining. The second package is clearly superior—the new surveillance tools assist a vast range of investigations, whereas wiretaps apply only to a small subset of key investigations. The new tools are used far more frequently and provide granular data to assist investigators.

A longer and more detailed version of the same argument can be found in “Encryption and Globalization,” forthcoming in the Columbia Science and Technology Law Review.

In a related story, there’s a relatively new WikiLeaks data dump of documents related to government surveillance products.

Posted on January 13, 2012 at 6:58 AM

Fake Documents that Alarm if Opened

This sort of thing seems like a decent approach, but it has a lot of practical problems:

In the wake of Wikileaks, the Department of Defense has stepped up its game to stop leaked documents from making their way into the hands of undesirables—be they enemy forces or concerned citizens. A new piece of software has created a way to do this by generating realistic, fake documents that phone home when they’re accessed, serving the dual purpose of providing false intelligence and helping identify the culprit.

Details aside, this kind of thing falls into the general category of data tracking. It doesn’t even require fake documents; you could imagine some sort of macro embedded into Word or PDF documents that phones home when the document is opened. (I have no idea if you actually can do it with those formats, but the concept is plausible.) This allows the owner of a document to track when, and possibly on what computer, a document is opened.
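The phone-home mechanism can be as simple as a “web bug”: an invisible reference to a remote resource that the viewer fetches on open. A minimal sketch, where `tracker.example.com` is a hypothetical logging server assumed for illustration:

```python
import uuid

# Hypothetical tracking endpoint; any server the document owner
# controls and can read access logs from would do.
TRACKER = "https://tracker.example.com/beacon"

def make_tracked_html(body: str) -> tuple[str, str]:
    """Embed a unique, invisible remote-image reference in a document.
    Rendering the document triggers a fetch of the image, so the
    tracker's access log records when, and from what address, each
    individually tokenized copy was opened."""
    token = uuid.uuid4().hex  # distinct token per distributed copy
    bug = f'<img src="{TRACKER}?doc={token}" width="1" height="1" alt="">'
    return f"<html><body>{body}{bug}</body></html>", token

doc, token = make_tracked_html("<p>Decoy quarterly figures</p>")
```

Because each copy carries its own token, a hit in the log identifies not just that the bait was opened but which recipient’s copy it was.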

But by far the biggest drawback from this tech is the possibility of false positives. If you seed a folder full of documents with a large number of fakes, how often do you think an authorized user will accidentally double click on the wrong file? And what if they act on the false information? Sure, this will prevent hackers from blindly trusting that every document on a server is correct, but we bet it won’t take much to look into the code of a document and spot the fake, either.

I’m less worried about false positives, and more concerned by how easy it is to get around this sort of thing. Detach your computer from the Internet, and the document no longer phones home. A fix is to combine the system with an encryption scheme that requires a remote key. Now the document has to phone home before it can be viewed. Of course, once someone is authorized to view the document, it would be easy to create an unprotected copy—screen captures, if nothing else—to forward along.
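The remote-key variant can be sketched in a few lines. The cipher below is a toy (XOR with a SHA-256 keystream, for illustration only; a real system would use an authenticated cipher), and the key server is simulated as a dictionary where a real viewer would make a network call:

```python
import hashlib

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher built from SHA-256 in counter mode.
    Encryption and decryption are the same operation."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Hypothetical remote key server, simulated as a dict. The point of the
# scheme: no contact with the server means no key, and no plaintext.
KEY_SERVER = {"doc-42": b"per-document secret key"}

def open_document(doc_id: str, ciphertext: bytes) -> bytes:
    key = KEY_SERVER.get(doc_id)
    if key is None:
        raise PermissionError("key server unreachable or access revoked")
    return xor_keystream(key, ciphertext)

ciphertext = xor_keystream(KEY_SERVER["doc-42"], b"sensitive document text")
```

As the post observes, this only gates the first authorized view; nothing in the scheme stops that viewer from screen-capturing the plaintext and passing it along.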

While potentially interesting, this sort of technology is not going to prevent large data leaks. But it’s good to see research.

Posted on November 7, 2011 at 6:26 AM

Random Passwords in the Wild

Interesting analysis:

the hacktivist group Anonymous hacked into several BART servers. They leaked part of a database of users from myBART, a website which provides frequent BART riders with email updates about activities near BART stations. An interesting aspect of the leak is that 1,346 of the 2,002 accounts seem to have randomly generated passwords—a rare opportunity to study this approach to password security.
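For reference, generating passwords of that sort takes only a few lines. A sketch using Python’s `secrets` module (the character set and length here are assumptions, not what myBART actually used):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # 62 symbols

def random_password(length: int = 8) -> str:
    """Draw each character independently with a CSPRNG,
    so passwords carry full log2(62**length) bits of entropy."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

pw = random_password()
# 62**8 is about 2.2e14 possibilities: ample against online guessing,
# but within reach of an offline attack on a fast, unsalted hash.
```

The design trade-off is the usual one: randomly generated passwords resist dictionary attacks that demolish human-chosen ones, but users tend to write them down or store them in the browser.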

Posted on October 20, 2011 at 6:25 AM

Unredacted U.S. Diplomatic WikiLeaks Cables Published

It looks as if the entire mass of U.S. diplomatic cables that WikiLeaks had is available online somewhere. How this came about is a good illustration of how security can go wrong in ways you don’t expect.

Near as I can tell, this is what happened:

  1. In order to send the Guardian the cables, WikiLeaks encrypted them and put them on its website at a hidden URL.
  2. WikiLeaks sent the Guardian the URL.
  3. WikiLeaks sent the Guardian the encryption key.
  4. The Guardian downloaded and decrypted the file.
  5. WikiLeaks removed the file from their server.
  6. Somehow, the encrypted file ended up on BitTorrent. Perhaps someone found the hidden URL, downloaded the file, and then uploaded it to BitTorrent. Perhaps it is the “insurance file.” I don’t know.
  7. The Guardian published a book about WikiLeaks. Thinking the decryption key had no value, it published the key in the book.
  8. A reader used the key from the book to decrypt the archive from BitTorrent, and published the decrypted version: all the U.S. diplomatic cables in unredacted form.

Memo to the Guardian: Publishing encryption keys is almost always a bad idea. Memo to WikiLeaks: Using the same key for the Guardian and for the insurance file—if that’s what you did—was a bad idea.
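The failure mode in steps 6 through 8 is worth making concrete: with a symmetric cipher, once a ciphertext has escaped, publishing the key decrypts every copy, everywhere, forever. A toy demonstration (the cipher and passphrase here are stand-ins, not the actual scheme or key):

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHA-256 keystream.
    Applying it twice with the same key recovers the plaintext."""
    stream = b""
    block = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        block += 1
    return bytes(a ^ b for a, b in zip(data, stream))

passphrase = b"passphrase later printed in a book"  # hypothetical stand-in

# Step 6: the encrypted blob escapes onto BitTorrent.
blob = toy_cipher(passphrase, b"unredacted cables")

# Steps 7-8: once the passphrase is in print, every copy of the blob
# decrypts, no matter where it was downloaded from. Deleting the
# original file from the server (step 5) revoked nothing.
recovered = toy_cipher(passphrase, blob)
```

This is why key hygiene matters more than file hygiene: you can delete a file from your own server, but you cannot delete the copies you do not know about.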

EDITED TO ADD (9/1): From pp. 138-139 of WikiLeaks:

Assange wrote down on a scrap of paper: ACollectionOfHistorySince_1966_ToThe_PresentDay#. “That’s the password,” he said. “But you have to add one extra word when you type it in. You have to put in the word ‘Diplomatic’ before the word ‘History’. Can you remember that?”

I think we can all agree that that’s a secure encryption key.

EDITED TO ADD (9/1): WikiLeaks says that the Guardian file and the insurance file are not encrypted with the same key. Which brings us back to the question: how did the encrypted Guardian file get loose?

EDITED TO ADD (9/1): Spiegel has the detailed story.

Posted on September 1, 2011 at 12:56 PM

WikiLeaks Cable about Chinese Hacking of U.S. Networks

We know it’s prevalent, but there’s some new information:

Secret U.S. State Department cables, obtained by WikiLeaks and made available to Reuters by a third party, trace systems breaches—colorfully code-named “Byzantine Hades” by U.S. investigators—to the Chinese military. An April 2009 cable even pinpoints the attacks to a specific unit of China’s People’s Liberation Army.

Privately, U.S. officials have long suspected that the Chinese government and in particular the military was behind the cyber-attacks. What was never disclosed publicly, until now, was evidence.

U.S. efforts to halt Byzantine Hades hacks are ongoing, according to four sources familiar with investigations. In the April 2009 cable, officials in the State Department’s Cyber Threat Analysis Division noted that several Chinese-registered Web sites were “involved in Byzantine Hades intrusion activity in 2006.”

The sites were registered in the city of Chengdu, the capital of Sichuan Province in central China, according to the cable. A person named Chen Xingpeng set up the sites using the “precise” postal code in Chengdu used by the People’s Liberation Army Chengdu Province First Technical Reconnaissance Bureau (TRB), an electronic espionage unit of the Chinese military. “Much of the intrusion activity traced to Chengdu is similar in tactics, techniques and procedures to (Byzantine Hades) activity attributed to other” electronic spying units of the People’s Liberation Army, the cable says.

[…]

What is known is the extent to which Chinese hackers use “spear-phishing” as their preferred tactic to get inside otherwise forbidden networks. Compromised email accounts are the easiest way to launch spear-phish because the hackers can send the messages to entire contact lists.

The tactic is so prevalent, and so successful, that “we have given up on the idea we can keep our networks pristine,” says Stewart Baker, a former senior cyber-security official at the U.S. Department of Homeland Security and National Security Agency. It’s safer, government and private experts say, to assume the worst—that any network is vulnerable.

Two former national security officials involved in cyber-investigations told Reuters that Chinese intelligence and military units, and affiliated private hacker groups, actively engage in “target development” for spear-phish attacks by combing the Internet for details about U.S. government and commercial employees’ job descriptions, networks of associates, and even the way they sign their emails—such as U.S. military personnel’s use of “V/R,” which stands for “Very Respectfully” or “Virtual Regards.”

The spear-phish are “the dominant attack vector. They work. They’re getting better. It’s just hard to stop,” says Gregory J. Rattray, a partner at cyber-security consulting firm Delta Risk and a former director for cyber-security on the National Security Council.

Spear-phish are used in most Byzantine Hades intrusions, according to a review of State Department cables by Reuters. But Byzantine Hades is itself categorized into at least three specific parts known as “Byzantine Anchor,” “Byzantine Candor,” and “Byzantine Foothold.” A source close to the matter says the sub-codenames refer to intrusions which use common tactics and malicious code to extract data.

A State Department cable made public by WikiLeaks last December highlights the severity of the spear-phish problem. “Since 2002, (U.S. government) organizations have been targeted with social-engineering online attacks” which succeeded in “gaining access to hundreds of (U.S. government) and cleared defense contractor systems,” the cable said. The emails were aimed at the U.S. Army, the Departments of Defense, State and Energy, other government entities and commercial companies.

By the way, reading this blog entry might be illegal under the U.S. Espionage Act:

Dear Americans: If you are not “authorized” personnel, but you have read, written about, commented upon, tweeted, spread links by “liking” on Facebook, shared by email, or otherwise discussed “classified” information disclosed from WikiLeaks, you could be implicated for crimes under the U.S. Espionage Act—or so warns a legal expert who said the U.S. Espionage Act could make “felons of us all.”

As the U.S. Justice Department works on a legal case against WikiLeaks’ Julian Assange for his role in helping publish 250,000 classified U.S. diplomatic cables, authorities are leaning toward charging Assange with spying under the Espionage Act of 1917. Legal experts warn that if there is an indictment under the Espionage Act, then any citizen who has discussed or accessed “classified” information can be arrested on “national security” grounds.

Maybe I should have warned you at the top of this post.

Posted on April 18, 2011 at 9:33 AM
