November 15, 2014
by Bruce Schneier
CTO, Co3 Systems, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1411.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Crypto Wars II
- Hacking Team Documentation
- The Future of Incident Response
- How Did the Feds Identify Dread Pirate Roberts?
- Schneier News
- Spritz: A New RC4-Like Stream Cipher
- NSA Classification ECI = Exceptionally Controlled Information
Crypto Wars II
FBI Director James Comey again called for an end to secure encryption by putting in a backdoor. Here’s his speech:
There is a misconception that building a lawful intercept solution into a system requires a so-called “back door,” one that foreign adversaries and hackers may try to exploit.
But that isn’t true. We aren’t seeking a back-door approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law. We are completely comfortable with court orders and legal process—front doors that provide the evidence and information we need to investigate crime and prevent terrorist attacks.
Cyber adversaries will exploit any vulnerability they find. But it makes more sense to address any security risks by developing intercept solutions during the design phase, rather than resorting to a patchwork solution when law enforcement comes knocking after the fact. And with sophisticated encryption, there might be no solution, leaving the government at a dead end—all in the name of privacy and network security.
I’m not sure why he believes he can have a technological means of access that somehow only works for people of the correct morality with the proper legal documents, but he seems to believe that’s possible. As Jeffrey Vagle and Matt Blaze point out, there’s no technical difference between Comey’s “front door” and a “back door.”
As in all of these sorts of speeches, Comey gave examples of crimes that could have been solved had only the police been able to decrypt the defendant’s phone. Unfortunately, none of the three stories is true. The Intercept tracked down each story, and none of them is actually a case where encryption foiled an investigation, arrest, or conviction:
In the most dramatic case that Comey invoked—the death of a 2-year-old Los Angeles girl—not only was cellphone data a non-issue, but records show the girl’s death could actually have been avoided had government agencies involved in overseeing her and her parents acted on the extensive record they already had before them.
In another case, of a Louisiana sex offender who enticed and then killed a 12-year-old boy, the big break had nothing to do with a phone: The murderer left behind his keys and a trail of muddy footprints, and was stopped nearby after his car ran out of gas.
And in the case of a Sacramento hit-and-run that killed a man and his girlfriend’s four dogs, the driver was arrested in a traffic stop because his car was smashed up, and immediately confessed to involvement in the incident.
Hadn’t Comey found anything better since then? In a question-and-answer session after his speech, Comey both denied trying to use scare stories to make his point—and admitted that he had launched a nationwide search for better ones, to no avail.
This is important. All the FBI talk about “going dark” and losing the ability to solve crimes is absolute bullshit. There is absolutely no evidence, either statistically or even anecdotally, that criminals are going free because of encryption.
So why are we even discussing the possibility of forcing companies to provide insecure encryption to their users and customers?
Sadly, I don’t think this is going to go away anytime soon.
Vagle and Blaze:
The EFF points out that companies are protected by law from being required to weaken their own security to make the FBI happy.
My first post on these new Crypto Wars is here.
Hacking Team Documentation
The “Intercept” has published the complete manuals for Hacking Team’s attack software. This follows a detailed report on Hacking Team’s products from August. Hacking Team sells computer and cell phone hacking capabilities to the governments of Azerbaijan, Colombia, Egypt, Ethiopia, Hungary, Italy, Kazakhstan, Korea, Malaysia, Mexico, Morocco, Nigeria, Oman, Panama, Poland, Saudi Arabia, Sudan, Thailand, Turkey, UAE, and Uzbekistan…and probably others as well.
This is important. The NSA’s capabilities are not unique to the NSA. They’re not even unique to countries like the US, UK, China, Russia, France, Germany, and Israel. They’re available for purchase by any totalitarian country that wants to spy on foreign governments or its own citizens. By ensuring an insecure Internet for everyone, the NSA enables companies like Hacking Team to thrive.
Other reports on Hacking Team:
Kevin Poulsen has written an interesting story about two people who successfully exploited a bug in a popular video poker machine.
The Guardian has reported that the app Whisper tracks users, and then published a second article explaining what it knows after Whisper denied the story.
Here’s Whisper’s denial; be sure to also read the first comment from Moxie Marlinspike.
More explanations and analyses:
Interesting essay on the sorts of things you can learn from anonymized taxi passenger and fare data.
Adi Shamir gave a presentation at Black Hat Europe on using all-in-one printers to control computers on the other side of air gaps. There’s no paper yet, but two publications reported on the talk.
This technique can be used to send commands into an air-gapped computer network, and to exfiltrate data from that network.
Susan Landau has a new paper on the NSA’s increasing role in commercial cybersecurity. She argues that the NSA is the wrong organization to do this, and we need a more public and open government agency involved in commercial cybersecurity.
David Elliott Bell has a related paper.
Also read this review of both papers.
Interesting paper: Maya Embar, Louis M. McHough IV, and William R. Wesselman, “Printer watermark obfuscation.”
List of printers and whether or not they display tracking dots (may not be up to date):
The ineffectiveness of sealing the border against Ebola (and other viruses):
Here’s a physical attack against a credit card verification system. Basically, the attack disrupts the communications between the retail terminal and the system that identifies revoked credit cards. Since retailers generally default to accepting cards when the system doesn’t work, the attack is generally successful.
There’s a report that the FBI has identified a second leaker.
I think this is “Leaker #3” on my list, even though it’s probably the “second leaker” discussed in the documentary “Citizenfour.”
The latest version of Apple’s OS X automatically syncs your files to iCloud Drive, even files you choose to store locally. Apple encrypts your data, both in transit and in iCloud, with a key it knows. Apple, of course, complies with all government requests: FBI warrants, subpoenas, and National Security Letters—as well as NSA PRISM and whatever-else-they-have demands.
Survey on what Americans fear. TLDR: it’s not the things that are actually risky.
The Food and Drug Administration has released guidelines regarding the security of medical devices.
Adobe book reader surveillance.
The new version of Adobe’s software sends back much less data. In this case, public criticism worked.
Good essay on the risk of unfounded Ebola fears.
The State of Louisiana is prohibiting researchers who have recently been to Ebola-infected countries from attending a conference on tropical medicine. So now we’re at a point where our fear of Ebola is inhibiting scientific research into treating and curing Ebola.
Good article on an Enigma simulator, with pictures, diagrams, and code.
Probably the best IT security book of the year is Adam Shostack’s “Threat Modeling.”
The book is an honorable mention finalist for “The Best Books” of the past 12 months. This is the first time a security book has been on the list since my “Applied Cryptography” (first edition) won in 1994 and my “Secrets and Lies” won in 2001.
Anyway, Shostack’s book is really good, and I strongly recommend it. He blogs about the topic here.
Verizon is tracking the Internet use of its phones by surreptitiously modifying URLs. This is a good description of how it works.
Interesting paper by Melissa Hathaway: “Connected Choices: How the Internet Is Challenging Sovereign Decisions.”
Robert Lee and Thomas Rid have a new paper: “OMG Cyber! Thirteen Reasons Why Hype Makes for Bad Policy.”
Another essay on the same topic:
Chicago is doing random explosives screenings at L stops around the city. Compliance is voluntary. I have to wonder what would happen if someone who looks Arab refused to be screened. And what possible value this procedure has. Anyone who has a bomb in their bag would see the screening point well before approaching it, and be able to walk to the next stop without potentially arousing suspicion.
Kaspersky Labs is reporting on a sophisticated hacker group that is targeting specific individuals around the world. “Darkhotel” is the name the group and its techniques have been given.
This seems pretty obviously a nation-state attack. It’s anyone’s guess which country is behind it, though.
We usually infer the attackers from the target list. This one isn’t that helpful. Pakistan? China? South Korea? I’m just guessing.
Hacking Internet voting from wireless routers.
Internet voting scares me. It gives hackers the potential to seriously disrupt our democratic processes.
Orin Kerr has a new article that argues for narrowly construing national security law. The idea is to involve legislatures more.
This is certainly not a panacea. As Jack Goldsmith rightly points out, more Congressional oversight over NSA surveillance during the last decade would have gained us more NSA surveillance. But it’s certainly better than having secret courts make the rules after only hearing one side of the argument.
Some ISPs are blocking TLS encryption. It’s not happening often, but it seems that some ISPs are blocking STARTTLS messages and causing email encryption to fail.
Pew Research has released a new survey on Americans’ perceptions of privacy. The results are pretty much in line with all the other surveys on privacy I’ve read. As Cory Doctorow likes to say, we’ve reached “peak indifference to surveillance.”
Last month, for the first time since US export restrictions on cryptography were relaxed over a decade ago, the US government has fined a company for exporting crypto software without a license.
The Future of Incident Response
Security is a combination of protection, detection, and response. It’s taken the industry a long time to get to this point, though. The 1990s was the era of protection. Our industry was full of products that would protect your computers and network. By 2000, we realized that detection needed to be formalized as well, and the industry was full of detection products and services.
This decade is one of response. Over the past few years, we’ve started seeing incident response (IR) products and services. Security teams are incorporating them into their arsenal because of three trends in computing. One, we’ve lost control of our computing environment. More of our data is held in the cloud by other companies, and more of our actual networks are outsourced. This makes response more complicated, because we might not have visibility into parts of our critical network infrastructures.
Two, attacks are getting more sophisticated. The rise of APT (advanced persistent threat)—attacks that specifically target victims for reasons other than simple financial theft—brings with it a new sort of attacker, which requires a new threat model. Also, as hacking becomes a more integral part of geopolitics, unrelated networks are increasingly collateral damage in nation-state fights.
And three, companies continue to under-invest in protection and detection, both of which are imperfect even under the best of circumstances, obliging response to pick up the slack.
Way back in the 1990s, I used to say that “security is a process, not a product.” That was a strategic statement about the fallacy of thinking you could ever be done with security; you need to continually reassess your security posture in the face of an ever-changing threat landscape.
At a tactical level, security is both a product and a process. Really, it’s a combination of people, process, and technology. What changes are the ratios. Protection systems are almost entirely technology, with some assistance from people and process. Detection requires more-or-less equal proportions of people, process, and technology. Response is mostly done by people, with critical assistance from process and technology.
Usability guru Lorrie Faith Cranor once wrote, “Whenever possible, secure system designers should find ways of keeping humans out of the loop.” That’s sage advice, but you can’t automate IR. Everyone’s network is different. All attacks are different. Everyone’s security environments are different. The regulatory environments are different. All organizations are different, and political and economic considerations are often more important than technical considerations. IR needs people, because successful IR requires thinking.
This is new for the security industry, and it means that response products and services will look different. For most of its life, the security industry has been plagued with the problems of a lemons market. That’s a term from economics that refers to a market where buyers can’t tell the difference between good products and bad. In these markets, mediocre products drive good ones out of the market; price is the driver, because there’s no good way to test for quality. It’s been true in anti-virus, it’s been true in firewalls, it’s been true in IDSs, and it’s been true elsewhere. But because IR is people-focused in ways protection and detection are not, it won’t be true here. Better products will do better because buyers will quickly be able to determine that they’re better.
The key to successful IR is found in Cranor’s next sentence: “However, there are some tasks for which feasible, or cost effective, alternatives to humans are not available. In these cases, system designers should engineer their systems to support the humans in the loop, and maximize their chances of performing their security-critical functions successfully.” What we need is technology that aids people, not technology that supplants them.
The best way I’ve found to think about this is OODA loops. OODA stands for “observe, orient, decide, act,” and it’s a way of thinking about real-time adversarial situations developed by US Air Force military strategist John Boyd. He was thinking about fighter jets, but the general idea has been applied to everything from contract negotiations to boxing—and computer and network IR.
Speed is essential. People in these situations are constantly going through OODA loops in their head. And if you can do yours faster than the other guy—if you can “get inside his OODA loop”—then you have an enormous advantage.
We need tools to facilitate all of these steps:
* Observe, which means knowing what’s happening on our networks in real time. This includes real-time threat detection information from IDSs, log monitoring and analysis data, network and system performance data, standard network management data, and even physical security information—and then knowing which tools to use to synthesize and present it all in useful formats. Incidents aren’t standardized; they’re all different. The more an IR team can observe what’s happening on the network, the more they can understand the attack. This means that an IR team needs to be able to operate across the entire organization.
* Orient, which means understanding what it means in context, both in the context of the organization and the context of the greater Internet community. It’s not enough to know about the attack; IR teams need to know what it means. Is new malware being used by cybercriminals? Is the organization rolling out a new software package or planning layoffs? Has the organization seen attacks from this particular IP address before? Has the network been opened to a new strategic partner? Answering these questions means tying data from the network to information from the news, network intelligence feeds, and other information from the organization. What’s going on in an organization often matters more in IR than the attack’s technical details.
* Decide, which means figuring out what to do at that moment. This is actually difficult because it involves knowing who has the authority to decide and giving them the information to decide quickly. IR decisions often involve executive input, so it’s important to be able to get those people the information they need quickly and efficiently. All decisions need to be defensible after the fact and documented. Both the regulatory and litigation environments have gotten very complex, and decisions need to be made with defensibility in mind.
* Act, which means being able to make changes quickly and effectively on our networks. IR teams need access to the organization’s network—all of the organization’s network. Again, incidents differ, and it’s impossible to know in advance what sort of access an IR team will need. But ultimately, they need broad access; security will come from audit rather than access control. And they need to train repeatedly, because nothing improves someone’s ability to act more than practice.
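The four steps above can be sketched as a simple loop. This is a hypothetical skeleton, not any real IR product’s API; the callback names are illustrative placeholders standing in for the people, process, and technology at each step.

```python
# A hypothetical OODA-loop skeleton for incident response. The callbacks
# (observe, orient, decide, act) are illustrative placeholders, not a
# real product's interface.

def ooda_cycle(incident_open, observe, orient, decide, act):
    """Repeat observe -> orient -> decide -> act until the incident closes."""
    while incident_open():
        data = observe()          # telemetry: IDS alerts, logs, netflow
        context = orient(data)    # tie the data to organizational context
        action = decide(context)  # a documented, defensible decision
        act(action)               # change the network: contain, recover
```

In practice each step is backed by tools, but the decisions in the middle are made by people—which is exactly the point about response being people-focused.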
Pulling all of these tools together under a unified framework will make IR work. And making IR work is the ultimate key to making security work. The goal here is to bring people, process, and technology together in a way we haven’t seen before in network security. It’s something we need to do to continue to defend against the threats.
This essay originally appeared in “IEEE Security & Privacy.”
How Did the Feds Identify Dread Pirate Roberts?
Last month, I wrote that the FBI identified Ross W. Ulbricht as the Silk Road’s Dread Pirate Roberts through a leaky CAPTCHA. Seems that story doesn’t hold water. According to Brian Krebs:
The FBI claims that it found the Silk Road server by examining plain text Internet traffic to and from the Silk Road CAPTCHA, and that it visited the address using a regular browser and received the CAPTCHA page. But [Nicholas] Weaver says the traffic logs from the Silk Road server (PDF) that also were released by the government this week tell a different story.
“The server logs which the FBI provides as evidence show that, no, what happened is the FBI didn’t see a leakage coming from that IP,” he said. “What happened is they contacted that IP directly and got a PHPMyAdmin configuration page.” See this PDF file for a look at that PHPMyAdmin page. Here is the PHPMyAdmin server configuration.
But this is hardly a satisfying answer to how the FBI investigators located the Silk Road servers. After all, if the FBI investigators contacted the PHPMyAdmin page directly, how did they know to do that in the first place?
“That’s still the $64,000 question,” Weaver said. “So both the CAPTCHA couldn’t leak in that configuration, and the IP the government visited wasn’t providing the CAPTCHA, but instead a PHPMyAdmin interface. Thus, the leaky CAPTCHA story is full of holes.”
My guess is that the NSA provided the FBI with this information. We know that the NSA provides surveillance data to the FBI and the DEA, under the condition that they lie about where it came from in court.
NSA whistleblower William Binney explained how it’s done:
…when you can’t use the data, you have to go out and do a parallel construction, [which] means you use what you would normally consider to be investigative techniques, [and] go find the data. You have a little hint, though. NSA is telling you where the data is…
NSA providing data to the DEA and others:
My company, Co3 Systems, is hiring both technical and nontechnical positions. If you live in the Boston area, click through and take a look.
Spritz: A New RC4-Like Stream Cipher
Last month, Ron Rivest gave a talk at MIT about Spritz, a new stream cipher by him and Jacob Schuldt. It’s basically a redesign of RC4, given current cryptographic tools and knowledge.
RC4 is an example of what I think of as a too-good-to-be-true cipher. It looks so simple. It *is* so simple. In classic cryptographic terms, it’s a single rotor machine. It’s a single self-modifying rotor, but it modifies itself very slowly. Even so, it’s very hard to cryptanalyze. Even though the single rotor leaks information about its internal state with every output byte, its self-modifying structure always seems to stay ahead of analysis. But RC4 has been around for over 25 years, and the best attacks are at the edge of practicality. When I talk about what sorts of secret cryptographic advances the NSA might have, a practical RC4 attack is one of the possibilities.
Spritz is Rivest and Schuldt’s redesign of RC4. It retains all of the problems that RC4 had. It’s built on a 256-element array of bytes, making it less than ideal for modern 32-bit and 64-bit CPUs. It’s not very fast. (It’s 50% slower than RC4, which was already much slower than algorithms like AES and Threefish.) It has a long key setup. But it’s a very clever design.
Here are the cores of RC4 and Spritz:
RC4:

1: i = i + 1
2: j = j + S[i]
3: SWAP(S[i], S[j])
4: z = S[S[i] + S[j]]
5: Return z

Spritz:

1: i = i + w
2: j = k + S[j + S[i]]
2a: k = i + k + S[j]
3: SWAP(S[i], S[j])
4: z = S[j + S[i + S[z + k]]]
5: Return z
S is an 8-bit permutation. In theory, it can be any size, which is nice for analysis, but in practice, it’s a 256-element array. RC4 has two pointers into the array: i and j. Spritz adds a third: k. The parameter w is basically a constant. It’s always 1 in RC4, but can be any odd number in Spritz (odd because that means it’s always relatively prime to 256). In both ciphers, i slowly walks around the array, and j—or j and k—bounce around wildly. Both have a single swap of two elements of the array. And both produce an output byte, z, a function of all the other parameters. In Spritz, the previous z is part of the calculation of the current z.
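The two cores can be sketched in runnable Python. This is an illustration for study only, not a reference implementation: RC4 is broken and shouldn’t be used in practice. The RC4 function uses the standard key schedule (KSA), while the Spritz function implements only the per-byte update shown above—Spritz’s actual key setup (absorb/shuffle) and sponge machinery are omitted, so no official Spritz test vectors apply to it.

```python
# Sketches of the RC4 and Spritz cores, for study only. RC4 is broken;
# the Spritz function below omits the cipher's real key setup.

def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n RC4 keystream bytes for the given key."""
    # Key-scheduling algorithm: build the initial permutation S.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # The output core from the pseudocode above.
    out = bytearray()
    i = j = 0
    for _ in range(n):
        i = (i + 1) % 256                    # step 1: i slowly walks
        j = (j + S[i]) % 256                 # step 2: j bounces around
        S[i], S[j] = S[j], S[i]              # step 3: the single swap
        out.append(S[(S[i] + S[j]) % 256])   # step 4: output byte z
    return bytes(out)

def spritz_core_step(S, i, j, k, z, w):
    """One iteration of the Spritz core update; mutates S in place."""
    i = (i + w) % 256                        # step 1: i walks by odd w
    j = (k + S[(j + S[i]) % 256]) % 256      # step 2
    k = (i + k + S[j]) % 256                 # step 2a: the third pointer
    S[i], S[j] = S[j], S[i]                  # step 3: the single swap
    z = S[(j + S[(i + S[(z + k) % 256]) % 256]) % 256]  # step 4: z feeds back
    return i, j, k, z
```

Note how the previous z appears inside the computation of the next z in the Spritz step—that feedback is one of the design’s main departures from RC4.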
That’s the core. There are also functions for turning the key into the initial array permutation, using this as a stream cipher, using it as a hash function, and so on. It’s basically a sponge function, so it has a lot of applications.
What’s really interesting here is the way Rivest and Schuldt chose their various functions. They basically tried them all (given some constraints), and chose the ones with the best security properties. This is the sort of thing that can only be done with massive computing power.
I have always really liked RC4, and am happy to see a 21st-century redesign. I don’t know what kind of use it’ll get with its 8-bit word size, but surely there’s a niche for it somewhere.
NSA secret cryptography:
NSA Classification ECI = Exceptionally Controlled Information
ECI is a classification above Top Secret. It’s for things that are so sensitive they’re basically not written down, like the names of companies whose cryptography has been deliberately weakened by the NSA, or the names of agents who have infiltrated foreign IT companies.
As part of the Intercept story on the NSA’s using agents to infiltrate foreign companies and networks, it published a list of ECI compartments. It’s just a list of code names and three-letter abbreviations, along with the group inside the NSA that is responsible for them. The descriptions of what they all mean would *never* be in a computer file, so it’s only of value to those of us who like code names.
This designation is why there have been no documents in the Snowden archive listing specific company names. They’re all referred to by these ECI code names.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 12 books—including “Liars and Outliers: Enabling the Trust Society Needs to Survive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and the Chief Technology Officer at Co3 Systems, Inc. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Co3 Systems, Inc.
Copyright (c) 2014 by Bruce Schneier.