Entries Tagged "backdoors"


Back Door in Juniper Firewalls

Juniper has warned about a malicious back door in its firewalls that allows attackers to decrypt VPN traffic. It’s been there for years.

Hopefully details are forthcoming, but the folks at Hacker News have pointed to this page about Juniper’s use of the Dual_EC_DRBG random number generator. For those who don’t immediately recognize that name, it’s the pseudo-random-number generator that was backdoored by the NSA. Basically, the PRNG is parameterized by two public elliptic-curve points; whoever generates those points can know a secret relationship between them, and anyone who knows that secret can predict the generator’s output. In the standard, the NSA chose those points. Juniper doesn’t use those tainted points. Instead:

ScreenOS does make use of the Dual_EC_DRBG standard, but is designed to not use Dual_EC_DRBG as its primary random number generator. ScreenOS uses it in a way that should not be vulnerable to the possible issue that has been brought to light. Instead of using the NIST recommended curve points it uses self-generated basis points and then takes the output as an input to FIPS/ANSI X.9.31 PRNG, which is the random number generator used in ScreenOS cryptographic operations.

This means that all anyone has to do to break the PRNG is to hack into the firewall and copy or modify those “self-generated basis points.”
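The trapdoor structure is easy to sketch. Here is a toy, illustration-only version in Python: the real generator runs on NIST P-256 and truncates each output (so the real attack needs a small brute force over the missing bits), whereas this sketch uses a textbook curve over GF(17), full x-coordinates, and made-up secrets so the prediction is exact. The point is the structure: whoever knows the discrete log relating Q to P can recover the internal state from a single output and predict everything that follows.

```python
# Toy Dual_EC_DRBG backdoor, for illustration only. All parameters and
# secrets here are made up; the real generator uses NIST P-256 and
# truncates 16 bits of each output.

p, A, B = 17, 2, 2          # textbook curve y^2 = x^3 + 2x + 2 over GF(17)
P = (5, 1)                  # base point of prime order n = 19
n = 19

def add(S, T):
    """Affine elliptic-curve point addition; None is the point at infinity."""
    if S is None: return T
    if T is None: return S
    (x1, y1), (x2, y2) = S, T
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if S == T:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, S):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1: R = add(R, S)
        S = add(S, S); k >>= 1
    return R

# --- The designer's backdoor ----------------------------------------
d = 7                       # designer's secret (made up)
Q = mul(d, P)               # published alongside P; looks innocent
e = pow(d, -1, n)           # d^-1 mod n: the designer's "master key"

def step(s):
    """One Dual_EC step: returns (next internal state, public output)."""
    s = s % n or 1          # keep the scalar nonzero in this toy
    return mul(s, P)[0], mul(s, Q)[0]

# --- The attack: predict the next output from one observed output ---
def predict_next(r):
    y = next(y for y in range(p) if (y * y - (r**3 + A * r + B)) % p == 0)
    s_next = mul(e, (r, y))[0]   # e * (s*Q) = s*P, i.e. the next state
    return step(s_next)[1]

s0 = 11                     # victim's secret seed (made up)
s1, r1 = step(s0)
s2, r2 = step(s1)
assert predict_next(r1) == r2   # the attacker recovers r2 from r1 alone
print("predicted next output:", predict_next(r1))
```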

Here’s a good summary of what we know. The conclusion:

Again, assuming this hypothesis is correct then, if it wasn’t the NSA who did this, we have a case where a US government backdoor effort (Dual-EC) laid the groundwork for someone else to attack US interests. Certainly this attack would be a lot easier given the presence of a backdoor-friendly RNG already in place. And I’ve not even discussed the SSH backdoor which, as Wired notes, could have been the work of a different group entirely. That backdoor certainly isn’t NOBUS—Fox-IT claim to have found the backdoor password in six hours.

More details to come, I’m sure.

EDITED TO ADD (12/21): A technical overview of the SSH backdoor.

EDITED TO ADD (12/22): Matthew Green wrote a really good technical post about this.

They then piggybacked on top of it to build a backdoor of their own, something they were able to do because all of the hard work had already been done for them. The end result was a period in which someone—maybe a foreign government—was able to decrypt Juniper traffic in the U.S. and around the world. And all because Juniper had already paved the road.

Another good article.

Posted on December 21, 2015 at 6:52 AM

Policy Repercussions of the Paris Terrorist Attacks

In 2013, in the early days of the Snowden leaks, Harvard Law School professor and former Assistant Attorney General Jack Goldsmith reflected on the increase in NSA surveillance post-9/11. He wrote:

Two important lessons of the last dozen years are (1) the government will increase its powers to meet the national security threat fully (because the People demand it), and (2) the enhanced powers will be accompanied by novel systems of review and transparency that seem to those in the Executive branch to be intrusive and antagonistic to the traditional national security mission, but that in the end are key legitimating factors for the expanded authorities.

Goldsmith is right, and I think about this quote as I read news articles about surveillance policies with headlines like “Political winds shifting on surveillance after Paris attacks?”

The politics of surveillance are the politics of fear. As long as the people are afraid of terrorism—regardless of how realistic their fears are—they will demand that the government keep them safe. And if the government can convince them that it needs this or that power in order to keep the people safe, the people will willingly grant it those powers. That’s Goldsmith’s first point.

Today, in the wake of the horrific and devastating Paris terror attacks, we’re at a pivotal moment. People are scared, and already Western governments are lining up to authorize more invasive surveillance powers. The US wants to back-door encryption products in some vain hope that the bad guys are 1) naive enough to use those products for their own communications instead of more secure ones, and 2) too stupid to use the back doors against the rest of us. The UK is trying to rush the passage of legislation that legalizes a whole bunch of surveillance activities that GCHQ has already been doing to its own citizens. France just gave its police a bunch of new powers. It doesn’t matter that mass surveillance isn’t an effective anti-terrorist tool: a scared populace wants to be reassured.

And politicians want to reassure. It’s smart politics to exaggerate the threat. It’s smart politics to do something, even if that something isn’t effective at mitigating the threat. The surveillance apparatus has the ear of the politicians, and the primary tool in its box is more surveillance. There’s minimal political will to push back on those ideas, especially when people are scared.

Writing about our country’s reaction to the Paris attacks, Tom Engelhardt wrote:

…the officials of that security state have bet the farm on the preeminence of the terrorist ‘threat,’ which has, not so surprisingly, left them eerily reliant on the Islamic State and other such organizations for the perpetuation of their way of life, their career opportunities, their growing powers, and their relative freedom to infringe on basic rights, as well as for that comfortably all-embracing blanket of secrecy that envelops their activities.

Goldsmith’s second point is more subtle: when these power increases are made in public, they’re legitimized through bureaucracy. Together, the scared populace and their scared elected officials serve to make the expanded national security and law enforcement powers normal.

Terrorism is singularly designed to push our fear buttons in ways completely out of proportion to the actual threat. And as long as people are scared of terrorism, they’ll give their governments all sorts of new powers of surveillance, arrest, detention, and so on, regardless of whether those powers actually combat the threat. This means that those who want those powers need a steady stream of terrorist attacks to enact their agenda. It’s not that these people are actively rooting for the terrorists, but they know a good opportunity when they see it.

We know that the PATRIOT Act was largely written before the 9/11 terrorist attacks, and that the political climate was right for its introduction and passage. More recently:

Although “the legislative environment is very hostile today,” the intelligence community’s top lawyer, Robert S. Litt, said to colleagues in an August e-mail, which was obtained by The Post, “it could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement.”

The Paris attacks could very well be that event.

I am very worried that the Obama administration has already secretly told the NSA to increase its surveillance inside the US. And I am worried that there will be new legislation legitimizing that surveillance and granting other invasive powers to law enforcement. As Goldsmith says, these powers will be accompanied by novel systems of review and transparency. But I have no faith that those systems will be effective in limiting abuse any more than they have been over the last couple of decades.

EDITED TO ADD (12/14): Trevor Timm is all over this issue. Dan Gillmor wrote something good, too.

Posted on November 24, 2015 at 6:32 AM

Paris Attacks Blamed on Strong Cryptography and Edward Snowden

Well, that didn’t take long:

As Paris reels from terrorist attacks that have claimed at least 128 lives, fierce blame for the carnage is being directed toward American whistleblower Edward Snowden and the spread of strong encryption catalyzed by his actions.

Now the Paris attacks are being used as an excuse to demand back doors.

CIA Director John Brennan chimed in, too.

Of course, this was planned all along. From September:

Privately, law enforcement officials have acknowledged that prospects for congressional action this year are remote. Although “the legislative environment is very hostile today,” the intelligence community’s top lawyer, Robert S. Litt, said to colleagues in an August e-mail, which was obtained by The Post, “it could turn in the event of a terrorist attack or criminal event where strong encryption can be shown to have hindered law enforcement.”

There is value, he said, in “keeping our options open for such a situation.”

I was going to write a definitive refutation to the meme that it’s all Snowden’s fault, but Glenn Greenwald beat me to it.

EDITED TO ADD: It wasn’t fair for me to characterize Ben Wittes’s Lawfare post as agitating for back doors. I apologize.

Better links are these two New York Times stories.

EDITED TO ADD (11/17): These two essays are also good.

EDITED TO ADD (11/18): The New York Times published a powerful editorial against mass surveillance.

EDITED TO ADD (11/19): The New York Times deleted a story claiming the attackers used encryption. Because it turns out they didn’t use encryption.

Posted on November 16, 2015 at 2:39 PM

Obama Administration Not Pursuing a Backdoor to Commercial Encryption

The Obama Administration is not pursuing a law that would force computer and communications manufacturers to add backdoors to their products for law enforcement. Sensibly, they concluded that criminals, terrorists, and foreign spies would use that backdoor as well.

Score one for the pro-security side in the Second Crypto War.

It’s certainly not over. The FBI has been pushing for an encryption backdoor (or other backdoor access to plaintext) since the early 1990s, and it’s not going to give up now. I expect there will be more pressure on companies, both overt and covert, more insinuations that strong security is somehow responsible for crime and terrorism, and more behind-closed-doors negotiations.

Posted on October 14, 2015 at 9:39 AM

TSA Master Keys

Someone recently noticed a Washington Post story on the TSA that originally contained a detailed photograph of all the TSA master keys. It’s now blurred out of the Washington Post story, but the image is still floating around the Internet. The whole thing neatly illustrates one of the main problems with backdoors, whether in cryptographic systems or physical systems: they’re fragile.

Nicholas Weaver wrote:

TSA “Travel Sentry” luggage locks contain a disclosed backdoor which is similar in spirit to what Director Comey desires for encrypted phones. In theory, only the Transportation Security Administration or other screeners should be able to open a TSA lock using one of their master keys. All others, notably baggage handlers and hotel staff, should be unable to surreptitiously open these locks.

Unfortunately for everyone, a TSA agent and the Washington Post revealed the secret. All it takes to duplicate a physical key is a photograph, since it is the pattern of the teeth, not the key itself, that tells you how to open the lock. So by simply including a pretty picture of the complete spread of TSA keys in the Washington Post’s paean to the TSA, the Washington Post enabled anyone to make their own TSA keys.

So the TSA backdoor has failed: we must assume any adversary can open any TSA “lock”. If you want to at least know your luggage has been tampered with, forget the TSA lock and use a zip-tie or tamper-evident seal instead, or attach a real lock and force the TSA to use their bolt cutters.
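To make Weaver’s point concrete: a pin-tumbler key is completely described by a short sequence of cut depths, its “bitting code.” Here is a minimal sketch of how depths scaled off a photograph become a cuttable code; the depth table and measurements below are invented for illustration, not any real manufacturer’s spec.

```python
# Snap per-cut depths measured off a scaled photo to the nearest entry
# in a (hypothetical) manufacturer depth table. The resulting bitting
# code is all a locksmith or key machine needs to cut a working copy.

DEPTHS_MM = {0: 8.0, 1: 7.4, 2: 6.8, 3: 6.2, 4: 5.6, 5: 5.0}  # invented

def bitting_from_measurements(measured_mm):
    """Map each measured cut height to the closest standard depth number."""
    return [min(DEPTHS_MM, key=lambda d: abs(DEPTHS_MM[d] - m))
            for m in measured_mm]

# Made-up measurements, as if scaled off a photo of one master key:
print(bitting_from_measurements([7.9, 6.3, 5.1, 6.9, 5.5]))  # [0, 3, 5, 2, 4]
```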

It’s the third photo on this page, reproduced here. There’s also this set of photos. Get your copy now, in case they disappear.

Reddit thread. BoingBoing post. Engadget article.

EDITED TO ADD (9/10): Someone has published a set of CAD files so you can make your own master keys.

Posted on September 8, 2015 at 6:02 AM

Another Salvo in the Second Crypto War (of Words)

Prosecutors from New York, London, Paris, and Madrid wrote an op-ed in yesterday’s New York Times in favor of backdoors in cell phone encryption. There are a number of flaws in their argument, ranging from how easy it is to get data off an encrypted phone to the dangers of designing a backdoor in the first place, but all of that has been said before. And since anecdote can be more persuasive than data, the op-ed started with one:

In June, a father of six was shot dead on a Monday afternoon in Evanston, Ill., a suburb 10 miles north of Chicago. The Evanston police believe that the victim, Ray C. Owens, had also been robbed. There were no witnesses to his killing, and no surveillance footage either.

With a killer on the loose and few leads at their disposal, investigators in Cook County, which includes Evanston, were encouraged when they found two smartphones alongside the body of the deceased: an iPhone 6 running on Apple’s iOS 8 operating system, and a Samsung Galaxy S6 Edge running on Google’s Android operating system. Both devices were passcode protected.

You can guess the rest. A judge issued a warrant, but neither Apple nor Google could unlock the phones. “The homicide remains unsolved. The killer remains at large.”

The Intercept researched the example, and it seems to be real. The phones belonged to the victim, and…

According to Commander Joseph Dugan of the Evanston Police Department, investigators were able to obtain records of the calls to and from the phones, but those records did not prove useful. By contrast, interviews with people who knew Owens suggested that he communicated mainly through text messages—the kind that travel as encrypted data—and had made plans to meet someone shortly before he was shot.

The information on his phone was not backed up automatically on Apple’s servers—apparently because he didn’t use wi-fi, which backups require.

[…]

But Dugan also wasn’t as quick to lay the blame solely on the encrypted phones. “I don’t know if getting in there, getting the information, would solve the case,” he said, “but it definitely would give us more investigative leads to follow up on.”

This is the first actual example I’ve seen illustrating the value of a backdoor. Unlike the increasingly common example of an ISIL handler abroad communicating securely with a radicalized person in the US, it’s an example where a backdoor might have helped. I say “might have,” because the Galaxy S6 is not encrypted by default, which means the victim deliberately turned the encryption on. If the native smartphone encryption had been backdoored, we don’t know if the victim would have turned it on nevertheless, or if he would have employed a different, non-backdoored app.

The authors’ other examples are much sloppier:

Between October and June, 74 iPhones running the iOS 8 operating system could not be accessed by investigators for the Manhattan district attorney’s office—despite judicial warrants to search the devices. The investigations that were disrupted include the attempted murder of three individuals, the repeated sexual abuse of a child, a continuing sex trafficking ring and numerous assaults and robberies.

[…]

In France, smartphone data was vital to the swift investigation of the Charlie Hebdo terrorist attacks in January, and the deadly attack on a gas facility at Saint-Quentin-Fallavier, near Lyon, in June. And on a daily basis, our agencies rely on evidence lawfully retrieved from smartphones to fight sex crimes, child abuse, cybercrime, robberies or homicides.

We’ve heard that 74 number before. It’s over nine months, in an office that handles about 100,000 cases a year: 74 out of roughly 75,000 cases, or less than 0.1% of the time. Details about those cases would be useful, so we can determine if encryption was just an impediment to investigation, or resulted in a criminal going free. The government needs to do a better job of presenting empirical data to support its case for backdoors. That it’s unable to do so suggests very strongly that an empirical analysis wouldn’t favor the government’s case.

As to the Charlie Hebdo case, it’s not clear how much of that vital smartphone data was actual data, and how much of it was unable-to-be-encrypted metadata. I am reminded of the examples that then-FBI-Director Louis Freeh would give during the First Crypto Wars in the 1990s. The big one used to illustrate the dangers of encryption was Mafia boss John Gotti. But the surveillance that convicted him was a room bug, not a wiretap. Given that the examples from FBI Director James Comey’s “going dark” speech last year were bogus, skepticism in the face of anecdote seems prudent.

So much of this “going dark” versus the “golden age of surveillance” debate depends on where you start from. Referring to that first Evanston example and the inability to get evidence from the victim’s phones, the op-ed authors write: “Until very recently, this situation would not have occurred.” That’s utter nonsense. From the beginning of time until very recently, this was the only situation that could have occurred. Objects in the vicinity of an event were largely mute about the past. Few things, save for eyewitnesses, could ever reach back in time and produce evidence. Even 15 years ago, the victim’s cell phone would have had no evidence on it that couldn’t have been obtained elsewhere, and that’s if the victim had been carrying a cell phone at all.

For most of human history, surveillance has been expensive. Over the last couple of decades, it has become incredibly cheap and almost ubiquitous. That a few bits and pieces are becoming expensive again isn’t a cause for alarm.

This essay originally appeared on Lawfare.

EDITED TO ADD (8/13): Excellent parody/commentary: “When Curtains Block Justice.”

Posted on August 12, 2015 at 2:18 PM

HAMMERTOSS: New Russian Malware

FireEye has a detailed report of a sophisticated piece of Russian malware: HAMMERTOSS. It uses some clever techniques to hide:

The Hammertoss backdoor malware looks for a different Twitter handle each day—automatically prompted by a list generated by the tool—to get its instructions. If the handle it’s looking for is not registered that day, it merely returns the next day and checks for the Twitter handle designated for that day. If the account is active, Hammertoss searches for a tweet with a URL and hashtag, and then visits the URL.

That’s where a legit-looking image is grabbed and then opened by Hammertoss: the image contains encrypted instructions, which Hammertoss decrypts. The commands, which include instructions for obtaining files from the victim’s network, typically then lead the malware to send that stolen information to a cloud-based storage service.
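FireEye hasn’t published the exact handle-generation algorithm, but the general technique is a standard one: both implant and operator derive the day’s rendezvous name from a shared seed and the date, so the operator knows which handle to register and the implant knows which one to poll. A minimal sketch under that assumption, with every specific (seed, prefix, handle format) invented:

```python
import datetime
import hashlib

def todays_handle(date=None, seed=b"made-up-shared-seed"):
    """Derive a daily rendezvous Twitter handle from a seed and the date.

    Everything here is hypothetical; FireEye has not published
    HAMMERTOSS's actual derivation.
    """
    date = date or datetime.date.today()
    digest = hashlib.sha256(seed + date.isoformat().encode()).hexdigest()
    return "user_" + digest[:10]

# The implant would check whether this handle exists today; if so, it
# fetches the latest tweet, follows the embedded URL, and extracts the
# commands hidden in the downloaded image.
print(todays_handle(datetime.date(2015, 7, 29)))
```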

Another article. Reddit thread.

Posted on July 31, 2015 at 11:12 AM

Backdoors Won’t Solve Comey’s Going Dark Problem

At the Aspen Security Forum two weeks ago, James Comey (and others) explicitly talked about the “going dark” problem, describing the specific scenario they are concerned about. Maybe others have heard the scenario before, but it was a first for me. It centers around ISIL operatives abroad and ISIL-inspired terrorists here in the US. The FBI knows who the Americans are, can get a court order to carry out surveillance on their communications, but cannot eavesdrop on the conversations, because they are encrypted. They can get the metadata, so they know who is talking to whom, but they can’t find out what’s being said.

“ISIL’s M.O. is to broadcast on Twitter, get people to follow them, then move them to Twitter Direct Messaging” to evaluate if they are a legitimate recruit, he said. “Then they’ll move them to an encrypted mobile-messaging app so they go dark to us.”

[…]

The FBI can get court-approved access to Twitter exchanges, but not to encrypted communication, Comey said. Even when the FBI demonstrates probable cause and gets a judicial order to intercept that communication, it cannot break the encryption for technological reasons, according to Comey.

If this is what Comey and the FBI are actually concerned about, they’re getting bad advice—because their proposed solution won’t solve the problem. Comey wants communications companies to give them the capability to eavesdrop on conversations without the conversants’ knowledge or consent; that’s the “backdoor” we’re all talking about. But the problem isn’t that most encrypted communications platforms are securely encrypted, or even that some are—the problem is that there exists at least one securely encrypted communications platform on the planet that ISIL can use.

Imagine that Comey got what he wanted. Imagine that iMessage and Facebook and Skype and everything else US-made had his backdoor. The ISIL operative would tell his potential recruit to use something else, something secure and non-US-made. Maybe an encryption program from Finland, or Switzerland, or Brazil. Maybe Mujahedeen Secrets. Maybe anything. (Sure, some of these will have flaws, and they’ll be identifiable by their metadata, but the FBI already has the metadata, and the better software will rise to the top.) As long as there is something that the ISIL operative can move them to, some software that the American can download and install on their phone or computer, or hardware that they can buy from abroad, the FBI still won’t be able to eavesdrop.

And by pushing these ISIL operatives to non-US platforms, the FBI loses access to the metadata it would otherwise have.

Convincing US companies to install backdoors isn’t enough; in order to solve this going dark problem, the FBI has to ensure that an American can only use backdoored software. And the only way to do that is to prohibit the use of non-backdoored software, which is the sort of thing that the UK’s David Cameron said he wanted for his country in January:

But the question is are we going to allow a means of communications which it simply isn’t possible to read. My answer to that question is: no, we must not.

And that, of course, is impossible. Jonathan Zittrain explained why. And Cory Doctorow outlined what trying would entail:

For David Cameron’s proposal to work, he will need to stop Britons from installing software that comes from software creators who are out of his jurisdiction. The very best in secure communications are already free/open source projects, maintained by thousands of independent programmers around the world. They are widely available, and thanks to things like cryptographic signing, it is possible to download these packages from any server in the world (not just big ones like Github) and verify, with a very high degree of confidence, that the software you’ve downloaded hasn’t been tampered with.

[…]

This, then, is what David Cameron is proposing:

* All Britons’ communications must be easy for criminals, voyeurs and foreign spies to intercept.

* Any firms within reach of the UK government must be banned from producing secure software.

* All major code repositories, such as Github and Sourceforge, must be blocked.

* Search engines must not answer queries about web-pages that carry secure software.

* Virtually all academic security work in the UK must cease—security research must only take place in proprietary research environments where there is no onus to publish one’s findings, such as industry R&D and the security services.

* All packets in and out of the country, and within the country, must be subject to Chinese-style deep-packet inspection and any packets that appear to originate from secure software must be dropped.

* Existing walled gardens (like iOS and games consoles) must be ordered to ban their users from installing secure software.

* Anyone visiting the country from abroad must have their smartphones held at the border until they leave.

* Proprietary operating system vendors (Microsoft and Apple) must be ordered to redesign their operating systems as walled gardens that only allow users to run software from an app store, which will not sell or give secure software to Britons.

* Free/open source operating systems—that power the energy, banking, ecommerce, and infrastructure sectors—must be banned outright.

As extreme as it reads, without all of that, the ISIL operative would be able to communicate securely with his potential American recruit. And all of this is not going to happen.
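Doctorow’s point about cryptographic signing is worth making concrete: if you trust the developer’s published public key, you can fetch the bytes from any mirror, however untrusted, and still detect tampering. A minimal sketch using Ed25519 signatures via Python’s cryptography package, with key distribution and file formats simplified away:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# Developer side: sign the release once.
signing_key = Ed25519PrivateKey.generate()
package = b"pretend this is a release tarball"
signature = signing_key.sign(package)
public_key = signing_key.public_key()   # published out-of-band

# User side: verify bytes downloaded from any untrusted mirror.
def is_authentic(blob, sig, pubkey):
    try:
        pubkey.verify(sig, blob)        # raises if blob or sig was altered
        return True
    except InvalidSignature:
        return False

assert is_authentic(package, signature, public_key)
assert not is_authentic(package + b"!", signature, public_key)
```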

Last week, former NSA director Mike McConnell, former DHS secretary Michael Chertoff, and former deputy defense secretary William Lynn published a Washington Post op-ed opposing backdoors in encryption software. They wrote:

Today, with almost everyone carrying a networked device on his or her person, ubiquitous encryption provides essential security. If law enforcement and intelligence organizations face a future without assured access to encrypted communications, they will develop technologies and techniques to meet their legitimate mission goals.

I believe this is true. Already one is being talked about in the academic literature: lawful hacking.

Perhaps the FBI’s reluctance to accept this is based on their belief that all encryption software comes from the US, and therefore is under their influence. Back in the 1990s, during the first Crypto Wars, the US government had a similar belief. To convince them otherwise, George Washington University surveyed the cryptography market in 1999 and found that there were over 500 companies in 70 countries manufacturing or distributing non-US cryptography products. Maybe we need a similar study today.

This essay previously appeared on Lawfare.

Posted on July 31, 2015 at 6:08 AM

