September 15, 2014

by Bruce Schneier
CTO, Co3 Systems, Inc.

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <>.

You can read this issue on the web at <>. These same essays and news items appear in the "Schneier on Security" blog at <>, along with a lively and intelligent comment section. An RSS feed is available.

In this issue:
      NSA/GCHQ/CSEC Infecting Innocent Computers Worldwide
      The Security of al Qaeda Encryption Software
      QUANTUM Technology Sold by Cyberweapons Arms Manufacturers
      The Concerted Effort to Remove Data Collection Restrictions
      Schneier News
      Cell Phone Kill Switches Mandatory in California
      Security of Password Managers

NSA/GCHQ/CSEC Infecting Innocent Computers Worldwide

There's a new story on the c't Magazin website about a 5-Eyes program to infect computers around the world for use as launching pads for attacks. These are not target computers; these are innocent third parties.

The article actually talks about several government programs. HACIENDA is a GCHQ program to port-scan entire countries, looking for vulnerable computers to attack. According to a GCHQ slide from 2009, they've completed port scans of 27 different countries and are prepared to do more.

The point of this is to create ORBs, or Operational Relay Boxes. Basically, these are computers that sit between the attacker and the target, and are designed to obscure the true origins of an attack. Slides from the Canadian CSEC talk about how this process is being automated: "2-3 times/year, 1 day focused effort to acquire as many new ORBs as possible in as many non 5-Eyes countries as possible." They've automated this process into something codenamed LANDMARK, and together with a knowledge engine codenamed OLYMPIA, 24 people were able to identify "a list of 3000+ potential ORBs" in 5-8 hours. The presentation does not go on to say whether all of those computers were actually infected.
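
The slides don't describe HACIENDA's internals, but at its core a port scan is just a series of TCP connection attempts. A minimal connect()-scan sketch in Python illustrates the idea; the function name, host, and port list are purely illustrative, not anything from the documents:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    This is a full-connect scan: a completed three-way handshake
    means the port is open and a service is listening there.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 when the connection succeeds,
            # an errno value otherwise -- no exception handling needed.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Probe a few common service ports on the local machine.
    print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

Scanning an entire country is this same loop scaled up and parallelized; the hard part is the database and automation around it, not the scan itself.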

Slides from the UK's GCHQ also talk about ORB detection, as part of a program called MUGSHOT. It, too, is happy with the automatic process: "Initial ten fold increase in Orb identification rate over manual process." There are also NSA slides that talk about the hacking process, but there's not much new in them.

The slides never say how many of the "potential ORBs" CSEC discovers or the computers that register positive in GCHQ's "Orb identification" are actually infected, but they're all stored in a database for future use. The Canadian slides talk about how some of that information was shared with the NSA.

Increasingly, innocent computers and networks are becoming collateral damage, as countries use the Internet to conduct espionage and attacks against each other. This is an example of that. Not only do these intelligence services want an insecure Internet so they can attack each other, they want an insecure Internet so they can use innocent third parties to help facilitate their attacks.

The story contains formerly TOP SECRET documents from the US, UK, and Canada. Note that Snowden is not mentioned at all in this story. Usually, if the documents the story is based on come from Snowden, the reporters say that. In this case, the reporters have said nothing about where the documents come from. I don't know if this is an omission -- these documents sure look like the sorts of things that come from the Snowden archive -- or if there is yet another leaker.

The Security of al Qaeda Encryption Software

The web intelligence firm Recorded Future has posted two stories about how al Qaeda is using new encryption software in response to the Snowden disclosures. NPR picked up the story a week later.

Former NSA Chief Counsel Stewart Baker uses this as evidence that Snowden has harmed America. Glenn Greenwald calls this "CIA talking points" and shows that al Qaeda was using encryption well before Snowden. Both quote me heavily, Baker casting me as somehow disingenuous on this topic.

Baker is conflating my stating of two cryptography truisms. The first is that cryptography is hard, and you're much better off using well-tested public algorithms than trying to roll your own. The second is that cryptographic implementation is hard, and you're much better off using well-tested open-source encryption software than you are trying to roll your own. Admittedly, they're very similar, and sometimes I'm not as precise as I should be when talking to reporters.

This is what I wrote in May:

I think this will help US intelligence efforts. Cryptography is hard, and the odds that a home-brew encryption product is better than a well-studied open-source tool are slight. Last fall, Matt Blaze said to me that he thought that the Snowden documents will usher in a new dark age of cryptography, as people abandon good algorithms and software for snake oil of their own devising. My guess is that this is an example of that.

Note the phrase "good algorithms and software." My intention was to invoke both truisms in the same sentence. That paragraph is true if al Qaeda is rolling their own encryption algorithms, as Recorded Future reported in May. And it remains true if al Qaeda is using algorithms like my own Twofish and rolling their own software, as Recorded Future reported earlier this month. Everything we know about how the NSA breaks cryptography is that they attack the implementations far more successfully than the algorithms.

My guess is that in this case they don't even bother with the encryption software; they just attack the users' computers. There's nothing that screams "hack me" more than using specially designed al Qaeda encryption software. There's probably a QUANTUMINSERT attack and FOXACID exploit already set on automatic fire.

I don't want to get into an argument about whether al Qaeda is altering its security in response to the Snowden documents. Its members would be idiots if they did not, but it's also clear that they were designing their own cryptographic software long before Snowden. My guess is that the smart ones are using public tools like OTR and PGP and the paranoid dumb ones are using their own stuff, and that the split was the same both pre- and post-Snowden.

Recorded Future stories:

NPR story:

Stewart Baker:

Glenn Greenwald:

My previous writing:

Snake oil:

Other stories:


The US Air Force is focusing on cyber deception next year:

There's an interesting article on a data exfiltration technique.
Honestly, this looks like a government exfiltration technique, although it could be evidence that the criminals are getting even more sophisticated.

The Onion on passwords:

People are not very good at matching photographs to people. We have an error rate of about 15%.

Matthew Green has a good post on what's wrong with PGP and what should be done about it.
Three related posts:

The gyroscopes on smart phones are sensitive enough to pick up acoustic vibrations. It's crude, but it works.

The White House is refusing to release details about the security of HealthCare.gov because it might help hackers. What this really means is that the security details would embarrass the White House.

Security researchers have finally gotten their hands on a Rapiscan backscatter full-body scanner. The results aren't very good.
Note that these machines have been replaced in US airports with millimeter wave full-body scanners.

New paper: "Green Lights Forever: Analyzing the Security of Traffic Infrastructure," Branden Ghena, William Beyer, Allen Hillaker, Jonathan Pevarnek, and J. Alex Halderman.

ISIS threatens the US with terrorism, openly mocking our profiling.
I am reminded of my debate on airport profiling with Sam Harris, particularly my initial response to his writings.

Pencil-and-paper codes used by Central American criminal gangs. It's a simple substitution cipher.
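
A monoalphabetic substitution cipher is simple enough to sketch in a few lines of Python. The key below is illustrative (keyboard order), not the gangs' actual code:

```python
import string

def make_cipher(key):
    """Build encrypt/decrypt tables for a monoalphabetic substitution cipher.

    `key` is a permutation of the 26 lowercase letters; each plaintext
    letter maps to the letter at the same position in the key.
    """
    assert sorted(key) == sorted(string.ascii_lowercase)
    enc = str.maketrans(string.ascii_lowercase, key)
    dec = str.maketrans(key, string.ascii_lowercase)
    return enc, dec

enc, dec = make_cipher("qwertyuiopasdfghjklzxcvbnm")
ciphertext = "attack at dawn".translate(enc)  # "qzzqea qz rqvf"
plaintext = ciphertext.translate(dec)         # "attack at dawn"
```

Because each letter always maps to the same substitute, letter-frequency analysis breaks ciphers like this in minutes, which is why they offer essentially no security against anyone who cares to look.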

Long article on electromagnetic weapons.

JackPair is a clever device that encrypts your voice between your headset and the audio jack. The crypto looks competent, and the design looks well-thought-out. I'd use it.

Apple is including some sort of automatic credit card payment system with the iPhone 6. It's using some security feature of the phone and payment system to negotiate a cheaper transaction fee. Basically, there are two kinds of credit card transactions: card-present and card-not-present. The former is cheaper because there's less risk of fraud. The article says that Apple has negotiated the card-present rate for its iPhone payment system, even though the card is not present. Presumably, this is because of some other security features that reduce the risk of fraud.
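
As a rough illustration of why the card-present/card-not-present distinction matters financially, here is a toy fee calculation. The rates are hypothetical, chosen only to show the shape of the incentive; they are not Apple's or any card network's actual terms:

```python
def transaction_fee(amount, card_present):
    """Fee under hypothetical interchange rates.

    Card-present transactions carry less fraud risk, so networks
    charge a lower percentage fee for them.
    """
    rate = 0.015 if card_present else 0.025  # illustrative rates only
    return round(amount * rate, 2)

# On a $100 purchase, the hypothetical card-present rate
# saves a dollar per transaction.
saving = transaction_fee(100.00, card_present=False) - \
         transaction_fee(100.00, card_present=True)
```

Multiplied across millions of transactions, even a fraction of a percentage point is a substantial negotiating prize, which is presumably why Apple pushed for the card-present rate.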

A device called Cyborg Unplug can be configured to prevent any Wi-Fi connection.

Good security analysis of Safeplug, which is basically Tor in a box. Short answer: not yet.

Silk has organized the trove of WikiLeaks documents about corporations aiding government surveillance around the world. It's worth wandering around through all this material.

In 2010, Aza Raskin described a new phishing attack: taking over a background tab on a browser to trick people into entering their login credentials. Clever.

QUANTUM Technology Sold by Cyberweapons Arms Manufacturers

Last October, I broke the story about the NSA's top secret program to inject packets into the Internet backbone: QUANTUM. Specifically, I wrote about how QUANTUMINSERT injects packets into existing Internet connections to redirect a user to an NSA web server codenamed FOXACID to infect the user's computer. Since then, we've learned a lot more about how QUANTUM works, and general details of many other QUANTUM programs.

These techniques make use of the NSA's privileged position on the Internet backbone. It has TURMOIL computers directly monitoring the Internet infrastructure at providers in the US and around the world, and a system called TURBINE that allows it to perform real-time packet injection into the backbone. Still, there's nothing about QUANTUM that anyone else with similar access can't do. There's a hacker tool called AirPwn that basically performs a QUANTUMINSERT attack on computers on a wireless network.
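
Packet injection of this kind is a race: TCP accepts the first well-formed response carrying the expected sequence number and discards later copies as duplicates, so the injector only has to answer faster than the real server. A toy model of that race in Python (pure simulation, nothing touches a network; all names are illustrative):

```python
def deliver(responses):
    """Model a TCP client receiving competing responses.

    `responses` is a list of (arrival_time, sender, payload) tuples all
    claiming the same sequence number. The client accepts whichever
    arrives first and treats the rest as retransmitted duplicates.
    """
    first = min(responses, key=lambda r: r[0])
    return first[1], first[2]

# A monitor on the backbone sees the request and races the real server;
# the injected answer wins simply by arriving sooner.
winner, payload = deliver([
    (12.0, "real server", "legitimate page"),
    (9.5, "injector", "redirect to attack server"),
])
```

This is why a privileged position on the backbone matters: being physically closer to the victim than the real server makes winning the race almost automatic, and it's also why a laptop on the same wireless network can do the same thing with a tool like AirPwn.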

A new report from Citizen Lab shows that cyberweapons arms manufacturers are selling this type of technology to governments around the world: the US DoD contractor CloudShield Technologies, Italy's Hacking Team, and Germany's and the UK's Gamma International. These programs intercept web connections to sites like Microsoft and Google -- YouTube is specifically mentioned -- and inject malware into users' computers.

Turkmenistan paid a Swiss company, Dreamlab Technologies -- somehow related to the cyberweapons arms manufacturer Gamma International -- just under $1M for this capability. Dreamlab also installed the software in Oman. We don't know what other countries have this capability, but the companies here routinely sell hacking software to totalitarian countries around the world.

There's some more information in a Washington Post article, and an essay on The Intercept.

In talking about the NSA's capabilities, I have repeatedly said that today's secret NSA programs are tomorrow's PhD dissertations and the next day's hacker tools. This is exactly what we're seeing here. By developing these technologies instead of helping defend against them, the NSA -- and GCHQ and CSEC -- are contributing to the ongoing insecurity of the Internet.


QUANTUM technology sold to other governments:

My comments on NSA technology becoming commonplace:

Related: here is an open letter from Citizen Lab's Ron Deibert to Hacking Team about the nature of Citizen Lab's research and the misleading defense of Hacking Team's products.

The Concerted Effort to Remove Data Collection Restrictions

Since the beginning, data privacy regulation has focused on collection, storage, and use. You can see it in the OECD Privacy Framework from 1980 (see also this proposed update).

Recently, there has been a concerted effort to focus all potential regulation on data use, completely ignoring data collection. Microsoft's Craig Mundie argues this. So does the PCAST report. And the World Economic Forum. This is a lobbying effort by US business. My guess is that the companies are much more worried about collection restrictions than use restrictions. They believe that they can slowly change use restrictions once they have the data, but that it's harder to change collection restrictions and get the data in the first place.

We need to regulate collection as well as use. In a new essay, Chris Hoofnagle explains why.

OECD Privacy Framework:

Proposed update:

Documents that downplay collection restrictions:

Chris Hoofnagle's essay:

Schneier News

I'm speaking at AppSec USA in Denver on September 18.

I'm speaking at the IAPP Privacy Academy in San Jose, CA, on September 19.

I'm speaking at a Front Line Defenders event in Dublin on October 6.

I'm speaking at Cyber Security Expo in London on October 8.

I'm speaking at TechNet Europe in Paris on October 9.

Someone wrote Sherlock-Schneier fan fiction. Not slash, thank heavens. (And no, that's not an invitation.)

Cell Phone Kill Switches Mandatory in California

California passed a kill-switch law, meaning that all cell phones sold in California must have the capability to be remotely turned off. It was sold as an antitheft measure. If the phone company could remotely render a cell phone inoperative, there would be less incentive to steal one.

I worry more about the side effects: once the feature is in place, it can be used by all sorts of people for all sorts of reasons.

The law raises concerns about how the switch might be used or abused, because it also provides law enforcement with the authority to use the feature to kill phones. And any feature accessible to consumers and law enforcement could be accessible to hackers, who might use it to randomly kill phones for kicks or revenge, or to perpetrators of crimes who might -- depending on how the kill switch is implemented -- be able to use it to prevent someone from calling for help.
"It's great for the consumer, but it invites a lot of mischief," says Hanni Fakhoury, staff attorney for the Electronic Frontier Foundation, which opposes the law. "You can imagine a domestic violence situation or a stalking context where someone kills [a victim's] phone and prevents them from calling the police or reporting abuse. It will not be a surprise when you see it being used this way."

I wrote about this in 2008, more generally:

The possibilities are endless, and very dangerous. Making this work involves building a nearly flawless hierarchical system of authority. That's a difficult security problem even in its simplest form. Distributing that system among a variety of different devices -- computers, phones, PDAs, cameras, recorders -- with different firmware and manufacturers, is even more difficult. Not to mention delegating different levels of authority to various agencies, enterprises, industries and individuals, and then enforcing the necessary safeguards.
Once we go down this path -- giving one device authority over other devices -- the security problems start piling up. Who has the authority to limit functionality of my devices, and how do they get that authority? What prevents them from abusing that power? Do I get the ability to override their limitations? In what circumstances, and how? Can they override my override?

The law only affects California, but phone manufacturers won't sell two different phones. So this means that *all* cell phones will eventually have this capability. And, of course, the procedural controls and limitations written into the California law don't apply elsewhere.

Users can opt out, at least for now: "The bill would authorize an authorized user to affirmatively elect to disable or opt-out of the technological solution at any time."

Article on side effects:

My 2008 essay:

How the law can be used to disrupt protests:

Security of Password Managers

At USENIX Security this year, there were two papers studying the security of password managers:

* David Silver, Suman Jana, and Dan Boneh, "Password Managers: Attacks and Defenses."

* Zhiwei Li, Warren He, Devdatta Akhawe, and Dawn Song, "The Emperor's New Password Manager: Security Analysis of Web-based Password Managers."

It's interesting work, especially because it looks at security problems in something that is supposed to improve security.

I've long recommended a password manager to solve the very real problem that any password that can be easily remembered is vulnerable to a dictionary attack. The world got a visceral reminder of this earlier this week, when hackers posted iCloud photos from celebrity accounts. The attack didn't exploit a flaw in iCloud; the attack exploited weak passwords.
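
The reason memorable passwords fail is easy to demonstrate: anything a human can remember, an attacker can enumerate. A minimal dictionary-attack sketch (the word list and hash choice are illustrative; real attacks use word lists with millions of entries plus mangling rules):

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Hash each candidate word and return the one matching `target_hash`.

    Real attackers run this against leaked hash databases at billions
    of guesses per second on commodity GPUs.
    """
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

# A memorable password is, by definition, something an attacker can guess.
wordlist = ["letmein", "password", "monkey", "princess", "sunshine"]
stolen = hashlib.sha256("monkey".encode()).hexdigest()
recovered = dictionary_attack(stolen, wordlist)  # "monkey"
```

A long random string generated by a password manager simply isn't in any word list, which is the whole point of using one.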

Security is often a trade-off with convenience, and most password managers automatically fill in passwords on browser pages. This turns out to be a difficult thing to do securely, and opens up password managers to attack.

My own password manager, Password Safe, wasn't mentioned in either of these papers (the second paper was updated to include it). I specifically designed it not to automatically fill. I specifically designed it to be a standalone application. The fastest way to transfer a password from Password Safe to a browser page is by using the operating system's cut and paste commands.

I still recommend using a password manager, simply because it allows you to choose longer and stronger passwords. And for the few passwords you should remember, my scheme for generating them is below.

Password Managers: Attacks and Defenses

The Emperor's New Password Manager: Security Analysis of Web-based Password Managers

iCloud hack:

Password Safe:

My essay on choosing secure passwords:

This 2012 paper on password managers does include Password Safe.

Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the author of 12 books -- including "Liars and Outliers: Enabling the Trust Society Needs to Survive" -- as well as hundreds of articles, essays, and academic papers. His influential newsletter "Crypto-Gram" and his blog "Schneier on Security" are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation's Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Co3 Systems, Inc. See <>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Co3 Systems, Inc.

Copyright (c) 2014 by Bruce Schneier.

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.
