Entries Tagged "secrecy"

No-Fly List Uses Predictive Assessments

The US government has admitted that it uses predictive assessments to put people on the no-fly list:

In a little-noticed filing before an Oregon federal judge, the US Justice Department and the FBI conceded that stopping US and other citizens from travelling on airplanes is a matter of “predictive assessments about potential threats,” the government asserted in May.

“By its very nature, identifying individuals who ‘may be a threat to civil aviation or national security’ is a predictive judgment intended to prevent future acts of terrorism in an uncertain context,” Justice Department officials Benjamin C Mizer and Anthony J Coppolino told the court on 28 May.

“Judgments concerning such potential threats to aviation and national security call upon the unique prerogatives of the Executive in assessing such threats.”

It is believed to be the government’s most direct acknowledgement to date that people are not allowed to fly because of what the government believes they might do and not what they have already done.

When you have a secret process that can judge and penalize people without due process or oversight, this is the kind of thing that happens.

Posted on August 20, 2015 at 6:19 AM

Organizational Doxing

Recently, WikiLeaks began publishing over half a million previously secret cables and other documents from the Foreign Ministry of Saudi Arabia. It’s a huge trove, and already reporters are writing stories about the highly secretive government.

What Saudi Arabia is experiencing isn’t common, but it is part of a growing trend.

Just last week, unknown hackers broke into the network of the cyber-weapons arms manufacturer Hacking Team and published 400 gigabytes of internal data, describing, among other things, its sale of Internet surveillance software to totalitarian regimes around the world.

Last year, hundreds of gigabytes of Sony’s sensitive data was published on the Internet, including executive salaries, corporate emails and contract negotiations. The attacker in this case was the government of North Korea, which was punishing Sony for producing a movie that made fun of its leader. In 2011, the U.S. cyberweapons arms manufacturer HBGary Federal was a victim, and its attackers were members of the hacker collective Anonymous.

Edward Snowden stole a still-unknown number of documents from the National Security Agency in 2013 and gave them to reporters to publish. Chelsea Manning stole three-quarters of a million documents from the U.S. State Department and gave them to WikiLeaks to publish. The person who stole the Saudi Arabian documents might also be a whistleblower and insider but is more likely a hacker who wanted to punish the kingdom.

Organizations are increasingly getting hacked, and not by criminals wanting to steal credit card numbers or account information in order to commit fraud, but by people intent on stealing as much data as they can and publishing it. Law professor and privacy expert Peter Swire refers to “the declining half-life of secrets.” Secrets are simply harder to keep in the information age. This is bad news for all of us who value our privacy, but there’s a hidden benefit when it comes to organizations.

The decline of secrecy means the rise of transparency. Organizational transparency is vital to any open and free society.

Open government laws and freedom of information laws let citizens know what the government is doing, and enable them to carry out their democratic duty to oversee its activities. Corporate disclosure laws perform similar functions in the private sphere. Of course, both corporations and governments have some need for secrecy, but the more they can be open, the more we can knowledgeably decide whether to trust them.

This makes the debate more complicated than simple personal privacy. Publishing someone’s private writings and communications is bad, because in a free and diverse society people should have private space to think and act in ways that would embarrass them if public.

But organizations are not people and, while there are legitimate trade secrets, their information should otherwise be transparent. Holding government and corporate private behavior to public scrutiny is good.

Most organizational secrets are only valuable for a short term: negotiations, new product designs, earnings numbers before they’re released, patents before filing, and so on.

Forever secrets, like the formula for Coca-Cola, are few and far between. The one exception is embarrassments. If an organization had to assume that anything it did would become public in a few years, people within that organization would behave differently.

The NSA would have had to weigh its collection programs against the possibility of public scrutiny. Sony would have had to think about how it would look to the world if it paid its female executives significantly less than its male executives. HBGary would have thought twice before launching an intimidation campaign against a journalist it didn’t like, and Hacking Team wouldn’t have lied to the UN about selling surveillance software to Sudan. Even the government of Saudi Arabia would have behaved differently. Such embarrassment might be the first significant downside of hiring a psychopath as CEO.

I don’t want to imply that this forced transparency is a good thing, though. The threat of disclosure chills all speech, not just illegal, embarrassing, or objectionable speech. There will be less honest and candid discourse. People in organizations need the freedom to write and say things that they wouldn’t want to be made public.

State Department officials need to be able to describe foreign leaders, even if their descriptions are unflattering. Movie executives need to be able to say unkind things about their movie stars. If they can’t, their organizations will suffer.

With few exceptions, our secrets are stored on computers and networks vulnerable to hacking. It’s much easier to break into networks than it is to secure them, and large organizational networks are very complicated and full of security holes. Bottom line: If someone sufficiently skilled, funded and motivated wants to steal an organization’s secrets, they will succeed. This includes hacktivists (HBGary Federal, Hacking Team), foreign governments (Sony), and trusted insiders (State Department and NSA).

It’s not likely that your organization’s secrets will be posted on the Internet for everyone to see, but it’s always a possibility.

Dumping an organization’s secret information is going to become increasingly common as individuals realize its effectiveness for whistleblowing and revenge. While some hackers will use journalists to separate the news stories from mere personal information, not all will.

Both governments and corporations need to assume that their secrets are more likely to be exposed, and exposed sooner, than ever. They should do all they can to protect their data and networks, but have to realize that their best defense might be to refrain from doing things that don’t look good on the front pages of the world’s newspapers.

This essay previously appeared on CNN.com. I didn’t use the term “organizational doxing,” though, because it would be too unfamiliar to that audience.

EDITED TO ADD: This essay has been translated into German.

Posted on July 10, 2015 at 4:32 AMView Comments

The Secrecy of the Snowden Documents

Last weekend, the Sunday Times published a front-page story (full text here), citing anonymous British sources claiming that both China and Russia have copies of the Snowden documents. It’s a terrible article, filled with factual inaccuracies and unsubstantiated claims about both Snowden’s actions and the damage caused by his disclosure, and others have thoroughly refuted the story. I want to focus on the actual question: Do countries like China and Russia have copies of the Snowden documents?

I believe the answer is certainly yes, but that it’s almost certainly not Snowden’s fault.

Snowden has claimed that he gave nothing to China while he was in Hong Kong, and brought nothing to Russia. He has said that he encrypted the documents in such a way that even he no longer has access to them, and that he did this before the US government stranded him in Russia. I have no doubt he did as he said, because A) it’s the smart thing to do, and B) it’s easy. All he would have had to do was encrypt the file with a long random key, break the encrypted text up into a few parts and mail them to trusted friends around the world, then forget the key. He probably added some security embellishments, but—regardless—the first sentence of the Times story simply makes no sense: “Russia and China have cracked the top-secret cache of files…”
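The encrypt-and-split idea described above can be sketched in a few lines. This is a minimal illustration under assumptions, not Snowden’s actual method: it takes the “long random key” literally as a one-time pad and uses a naive contiguous split (a real design would more likely use a vetted cipher plus a threshold secret-sharing scheme); all function names here are invented for the example.

```python
import secrets

def encrypt_otp(plaintext: bytes) -> tuple[bytes, bytes]:
    # "Long random key": a one-time pad the same length as the message.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def split_parts(data: bytes, n: int) -> list[bytes]:
    # Naive contiguous split into (up to) n chunks to mail to trusted
    # friends; each chunk alone stays unreadable only because the pad
    # is unknown, not because of the split itself.
    size = -(-len(data) // n)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

key, ciphertext = encrypt_otp(b"the document cache")
parts = split_parts(ciphertext, 3)

# Reassemble and decrypt; "forgetting" key makes this step impossible.
recovered = bytes(c ^ k for c, k in zip(b"".join(parts), key))
assert recovered == b"the document cache"
```

The security of the scheme rests entirely on the key being truly random, as long as the message, and never reused.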

But while cryptography is strong, computer security is weak. The vulnerability is not Snowden; it’s everyone who has access to the files.

First, the journalists working with the documents. I’ve handled some of the Snowden documents myself, and even though I’m a paranoid cryptographer, I know how difficult it is to maintain perfect security. It’s been open season on the computers of the journalists Snowden shared documents with since this story broke in June 2013. And while they have been taking extraordinary pains to secure those computers, it’s almost certainly not enough to keep out the world’s intelligence services.

There is a lot of evidence for this belief. We know from other top-secret NSA documents that as far back as 2008, the agency’s Tailored Access Operations group has extraordinary capabilities to hack into and “exfiltrate” data from specific computers, even if those computers are highly secured and not connected to the Internet.

These NSA capabilities are not unique, and it’s reasonable to assume both that other countries had similar capabilities in 2008 and that everyone has improved their attack techniques in the seven years since then. Last week, we learned that Israel had successfully hacked a wide variety of networks, including that of a major computer antivirus company. We also learned that China successfully hacked US government personnel databases. And earlier this year, Russia successfully hacked the White House’s network. These sorts of stories are now routine.

Which brings me to the second potential source of these documents to foreign intelligence agencies: the US and UK governments themselves. I believe that both China and Russia had access to all the files that Snowden took well before Snowden took them because they’ve penetrated the NSA networks where those files reside. After all, the NSA has been a prime target for decades.

Those government hacking examples above were against unclassified networks, but the nation-state techniques we’re seeing work against classified and unconnected networks as well. In general, it’s far easier to attack a network than it is to defend the same network. This isn’t a statement about willpower or budget; it’s how computer and network security work today. A former NSA deputy director recently said that if we were to score cyber the way we score soccer, the tally would be 462 to 456 twenty minutes into the game. In other words, it’s all offense and no defense.

In this kind of environment, we simply have to assume that even our classified networks have been penetrated. Remember that Snowden was able to wander through the NSA’s networks with impunity, and that the agency had so few controls in place that the only way they can guess what has been taken is to extrapolate based on what has been published. Does anyone believe that Snowden was the first to take advantage of that lax security? I don’t.

This is why I find allegations that Snowden was working for the Russians or the Chinese simply laughable. What makes you think those countries waited for Snowden? And why do you think someone working for the Russians or the Chinese would go public with their haul?

I am reminded of a comment made to me in confidence by a US intelligence official. I asked him what he was most worried about, and he replied: “I know how deep we are in our enemies’ networks without them having any idea that we’re there. I’m worried that our networks are penetrated just as deeply.”

Seems like a reasonable worry to me.

The open question is which countries have sophisticated enough cyberespionage operations to mount a successful attack against one of the journalists or against the intelligence agencies themselves. And while I have my own mental list, the truth is that I don’t know. But certainly Russia and China are on the list, and it’s just as certain they didn’t have to wait for Snowden to get access to the files. While it might be politically convenient to blame Snowden because, as the Sunday Times reported an anonymous source saying, “we have now seen our agents and assets being targeted,” the NSA and GCHQ should first take a look into their mirrors.

This essay originally appeared on Wired.com.

EDITED TO ADD: I wrote about this essay on Lawfare:

A Twitter user commented: “Surely if agencies accessed computers of people Snowden shared with then is still his fault?”

Yes, that’s right. Snowden took the documents out of the well-protected NSA network and shared them with people who don’t have those levels of computer security. Given what we’ve seen of the NSA’s hacking capabilities, I think the odds are zero that other nations were unable to hack at least one of those journalists’ computers. And yes, Snowden has to own that.

The point I make in the article is that those nations didn’t have to wait for Snowden. More specifically, GCHQ claims that “we have now seen our agents and assets being targeted.” One, agents and assets are not discussed in the Snowden documents. Two, it’s two years after Snowden handed those documents to reporters. Whatever is happening, it’s unlikely to be related to Snowden.

EDITED TO ADD: Slashdot thread. Hacker News thread.

EDITED TO ADD (7/13): Two threads on Reddit.

EDITED TO ADD (7/14): Another refutation.

Posted on June 22, 2015 at 6:13 AM

Cell Phone Opsec

Here’s an article on making secret phone calls with cell phones.

His step-by-step instructions for making a clandestine phone call are as follows:

  1. Analyze your daily movements, paying special attention to anchor points (basis of operation like home or work) and dormant periods in schedules (8-12 p.m. or when cell phones aren’t changing locations);
  2. Leave your daily cell phone behind during dormant periods and purchase a prepaid no-contract cell phone (“burner phone”);
  3. After storing burner phone in a Faraday bag, activate it using a clean computer connected to a public Wi-Fi network;
  4. Encrypt the cell phone number using a onetime pad (OTP) system and rename an image file with the encrypted code. Using Tor to hide your web traffic, post the image to an agreed upon anonymous Twitter account, which signals a communications request to your partner;
  5. Leave cell phone behind, avoid anchor points, and receive phone call from partner on burner phone at 9:30 p.m. (or another pre-arranged “dormant” time) on the following day;
  6. Wipe down and destroy handset.

Note that it actually makes sense to use a one-time pad in this instance. The message is a ten-digit number, and a one-time pad is easier, faster, and cleaner than using any computer encryption program.
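For a ten-digit message, the pad is just ten random digits agreed on in advance, and encryption is digit-wise addition mod 10 — easily done with pencil and paper. The sketch below (with invented function names) only illustrates the arithmetic:

```python
import secrets

def otp_encrypt_digits(number: str, pad: str) -> str:
    # Pencil-and-paper rule: ciphertext digit = (message digit + pad digit) mod 10
    return "".join(str((int(d) + int(p)) % 10) for d, p in zip(number, pad))

def otp_decrypt_digits(cipher: str, pad: str) -> str:
    # Inverse rule: message digit = (ciphertext digit - pad digit) mod 10
    return "".join(str((int(c) - int(p)) % 10) for c, p in zip(cipher, pad))

# The pad must be truly random, used exactly once, and destroyed after use.
pad = "".join(str(secrets.randbelow(10)) for _ in range(10))
ciphertext = otp_encrypt_digits("5551234567", pad)
assert otp_decrypt_digits(ciphertext, pad) == "5551234567"
```

With a genuinely random single-use pad, every ten-digit plaintext is equally consistent with the ciphertext, which is why this beats any software encryption for a message this short.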

Posted on April 7, 2015 at 9:27 AM

Subconscious Keys

I missed this paper when it was first published in 2012:

“Neuroscience Meets Cryptography: Designing Crypto Primitives Secure Against Rubber Hose Attacks”

Abstract: Cryptographic systems often rely on the secrecy of cryptographic keys given to users. Many schemes, however, cannot resist coercion attacks where the user is forcibly asked by an attacker to reveal the key. These attacks, known as rubber hose cryptanalysis, are often the easiest way to defeat cryptography. We present a defense against coercion attacks using the concept of implicit learning from cognitive psychology. Implicit learning refers to learning of patterns without any conscious knowledge of the learned pattern. We use a carefully crafted computer game to plant a secret password in the participant’s brain without the participant having any conscious knowledge of the trained password. While the planted secret can be used for authentication, the participant cannot be coerced into revealing it since he or she has no conscious knowledge of it. We performed a number of user studies using Amazon’s Mechanical Turk to verify that participants can successfully re-authenticate over time and that they are unable to reconstruct or even recognize short fragments of the planted secret.

Posted on January 28, 2015 at 6:39 AM

Regin Malware

Last week, we learned about a striking piece of malware called Regin that has been infecting computer networks worldwide since 2008. It’s more sophisticated than any known criminal malware, and everyone believes a government is behind it. No country has taken credit for Regin, but there’s substantial evidence that it was built and operated by the United States.

This isn’t the first government malware discovered. GhostNet is believed to be Chinese. Red October and Turla are believed to be Russian. The Mask is probably Spanish. Stuxnet and Flame are probably from the U.S. All these were discovered in the past five years, and named by researchers who inferred their creators from clues such as who the malware targeted.

I dislike the “cyberwar” metaphor for espionage and hacking, but there is a war of sorts going on in cyberspace. Countries are using these weapons against each other. This affects all of us not just because we might be citizens of one of these countries, but because we are all potentially collateral damage. Most of the varieties of malware listed above have been used against nongovernment targets, such as national infrastructure, corporations, and NGOs. Sometimes these attacks are accidental, but often they are deliberate.

For their defense, civilian networks must rely on commercial security products and services. We largely rely on antivirus products from companies such as Symantec, Kaspersky, and F-Secure. These products continuously scan our computers, looking for malware, deleting it, and alerting us as they find it. We expect these companies to act in our interests, and never deliberately fail to protect us from a known threat.

This is why the recent disclosure of Regin is so disquieting. The first public announcement of Regin was from Symantec, on November 23. The company said that its researchers had been studying it for about a year, and announced its existence because they knew of another source that was going to announce it. That source was a news site, the Intercept, which described Regin and its U.S. connections the following day. Both Kaspersky and F-Secure soon published their own findings. Both stated that they had been tracking Regin for years. All three of the antivirus companies were able to find samples of it in their files since 2008 or 2009.

So why did these companies all keep Regin a secret for so long? And why did they leave us vulnerable for all this time?

To get an answer, we have to disentangle two things. Near as we can tell, all the companies had added signatures for Regin to their detection database long before last month. The VirusTotal website has a signature for Regin as of 2011. Both Microsoft security and F-Secure started detecting and removing it that year as well. Symantec has protected its users against Regin since 2013, although it certainly added the VirusTotal signature in 2011.

Entirely separately and seemingly independently, all of these companies decided not to publicly discuss Regin’s existence until after Symantec and the Intercept did so. Reasons given vary. Mikko Hyppönen of F-Secure said that specific customers asked him not to discuss the malware that had been found on their networks. Fox IT, which was hired to remove Regin from the network of the Belgian phone company Belgacom, didn’t say anything about what it discovered because it “didn’t want to interfere with NSA/GCHQ operations.”

My guess is that none of the companies wanted to go public with an incomplete picture. Unlike criminal malware, government-grade malware can be hard to figure out. It’s much more elusive and complicated. It is constantly updated. Regin is made up of multiple modules—Fox IT called it “a full framework of a lot of species of malware”—making it even harder to figure out what’s going on. Regin has also been used sparingly, against only a select few targets, making it hard to get samples. When you make a press splash by identifying a piece of malware, you want to have the whole story. Apparently, no one felt they had that with Regin.

That is not a good enough excuse, though. As nation-state malware becomes more common, we will often lack the whole story. And as long as countries are battling it out in cyberspace, some of us will be targets and the rest of us might be unlucky enough to be sitting in the blast radius. Military-grade malware will continue to be elusive.

Right now, antivirus companies are probably sitting on incomplete stories about a dozen more varieties of government-grade malware. But they shouldn’t. We want, and need, our antivirus companies to tell us everything they can about these threats as soon as they know them, and not wait until the release of a political story makes it impossible for them to remain silent.

This essay previously appeared in the MIT Technology Review.

Posted on December 8, 2014 at 7:19 AM

NSA Classification ECI = Exceptionally Controlled Information

ECI is a classification above Top Secret. It’s for things that are so sensitive they’re basically not written down, like the names of companies whose cryptography has been deliberately weakened by the NSA, or the names of agents who have infiltrated foreign IT companies.

As part of the Intercept story on the NSA’s using agents to infiltrate foreign companies and networks, it published a list of ECI compartments. It’s just a list of code names and three-letter abbreviations, along with the group inside the NSA that is responsible for them. The descriptions of what they all mean would never be in a computer file, so it’s only of value to those of us who like code names.

This designation is why there have been no documents in the Snowden archive listing specific company names. They’re all referred to by these ECI code names.

EDITED TO ADD (11/10): Another compilation of NSA’s organizational structure.

Posted on October 16, 2014 at 6:22 AM

NSA Has Undercover Operatives in Foreign Companies

The latest Intercept article on the Snowden documents talks about the NSA’s undercover operatives working in foreign companies. There are no specifics, although the countries China, Germany, and South Korea are mentioned. It’s also hard to tell if the NSA has undercover operatives working in companies in those countries, or has undercover contractors visiting those companies. The document is dated 2004, although there’s no reason to believe that the NSA has changed its behavior since then.

The most controversial revelation in Sentry Eagle might be a fleeting reference to the NSA infiltrating clandestine agents into “commercial entities.” The briefing document states that among Sentry Eagle’s most closely guarded components are “facts related to NSA personnel (under cover), operational meetings, specific operations, specific technology, specific locations and covert communications related to SIGINT enabling with specific commercial entities (A/B/C).”

It is not clear whether these “commercial entities” are American or foreign or both. Generally the placeholder “(A/B/C)” is used in the briefing document to refer to American companies, though on one occasion it refers to both American and foreign companies. Foreign companies are referred to with the placeholder “(M/N/O).” The NSA refused to provide any clarification to The Intercept.

That program is SENTRY OSPREY, which is a program under SENTRY EAGLE.

The document makes no other reference to NSA agents working under cover. It is not clear whether they might be working as full-time employees at the “commercial entities,” or whether they are visiting commercial facilities under false pretenses.

Least fun job right now: being the NSA person who fielded the telephone call from the Intercept to clarify that (A/B/C)/(M/N/O) thing. “Hi. We’re going public with SENTRY EAGLE next week. There’s one thing in the document we don’t understand, and we wonder if you could help us….” Actually, that’s wrong. The person who fielded the phone call had no idea what SENTRY EAGLE was. The least fun job belongs to the person up the command chain who did.

Wired article. Slashdot and Hacker News threads.

Posted on October 11, 2014 at 2:54 PM
