Liveblogging the Financial Cryptography Conference
Ross Anderson liveblogged Financial Cryptography 2014. Interesting stuff.
This document gives a good overview of how NIST develops cryptographic standards and guidelines. It’s still in draft, and comments are appreciated.
Given that NIST has been tainted by the NSA’s actions to subvert cryptographic standards and protocols, more transparency in this process is welcome. I think NIST is doing a fine job and that it’s not shilling for the NSA, but it needs to do more to convince the world of that.
The Voynich Manuscript has been partially decoded. The decoding seems not to be a hoax. And the manuscript itself seems not to be a hoax, either.
Here’s the paper.
Amit Sahai and others have some new results in software obfuscation. The papers are here. An over-the-top Wired.com story on the research is here. And Matthew Green has a great blog post explaining what’s real and what’s hype.
There has been a lot of news about Belgian cryptographer Jean-Jacques Quisquater having his computer hacked, and whether the NSA or GCHQ is to blame. There have been a lot of assumptions and hyperbole, mostly related to the GCHQ attack against the Belgian telecom operator Belgacom.
I’m skeptical. Not about the attack, but about the NSA’s or GCHQ’s involvement. I don’t think there’s a lot of operational value in most academic cryptographic research, and Quisquater wasn’t involved in practical cryptanalysis of operational ciphers. I wouldn’t put it past a less-clued nation-state to spy on academic cryptographers, but it’s likelier this is a more conventional criminal attack. But who knows? Weirder things have happened.
There’s a new piece of ransomware out there, PowerLocker (also called PrisonLocker), that uses Blowfish:
PowerLocker could prove an even more potent threat because it would be sold in underground forums as a DIY malware kit to anyone who can afford the $100 for a license, Friday’s post warned. CryptoLocker, by contrast, was custom built for use by a single crime gang. What’s more, PowerLocker might also offer several advanced features, including the ability to disable the task manager, registry editor, and other administration functions built into the Windows operating system. Screen shots and online discussions also indicate the newer malware may contain protections that prevent it from being reverse engineered when run on virtual machines.
PowerLocker encrypts files using keys based on the Blowfish algorithm. Each key is then encrypted to a file that can only be unlocked by a 2048-bit private RSA key. The Malware Must Die researchers said they had been monitoring the discussions for the past few months. The possibility of a new crypto-based ransomware threat comes as developers continue to make improvements to the older CryptoLocker title. Late last month, for instance, researchers at antivirus provider Trend Micro said newer versions gave the CryptoLocker self-replicating abilities that allowed it to spread through USB thumb drives.
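The scheme described above is standard hybrid encryption: a fresh symmetric key per file, with that key then encrypted ("wrapped") under the attacker's RSA public key so only the private-key holder can recover it. Here is a minimal sketch of the pattern, with stand-ins since the Python standard library ships neither Blowfish nor RSA: a SHA-256 counter-mode keystream plays the role of Blowfish, and a tiny textbook RSA key plays the role of a real 2048-bit one.

```python
# Sketch of the hybrid-encryption pattern: per-file symmetric key,
# wrapped with an RSA public key. Toy stand-ins, stdlib only.
import hashlib
import secrets

def keystream_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (toy cipher, not Blowfish)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Textbook RSA with tiny primes -- illustration only, nowhere near 2048 bits.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent, held by the attacker

def rsa_wrap(m: int) -> int:       # encrypt to the attacker's public key
    return pow(m, e, n)

def rsa_unwrap(c: int) -> int:     # only the private-key holder can do this
    return pow(c, d, n)

# Per-file key; file encrypted symmetrically, key wrapped asymmetrically.
file_key = secrets.token_bytes(16)
ciphertext = keystream_encrypt(file_key, b"victim's document")
wrapped = [rsa_wrap(b) for b in file_key]  # byte-at-a-time, toy scheme

# The victim cannot recover file_key without d; the attacker can:
recovered_key = bytes(rsa_unwrap(c) for c in wrapped)
assert recovered_key == file_key
assert keystream_encrypt(recovered_key, ciphertext) == b"victim's document"
```

The design choice is what makes this class of ransomware effective: the symmetric key never has to be stored on the victim's machine in recoverable form, so decryption genuinely requires the attacker's private key.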
Elgar’s cryptography puzzles from the late 1890s.
Adobe lost 150 million customer passwords. Even worse, they had a pretty dumb cryptographic hash system protecting those passwords.
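For contrast with whatever Adobe was doing, here is a sketch of what sane password storage looks like: a per-user random salt plus a deliberately slow key-derivation function, so a leaked database doesn't hand over every password at once. The iteration count below is illustrative, not a recommendation.

```python
# Salted, slow password hashing with stdlib PBKDF2 (iteration count illustrative).
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)  # unique per user, stored alongside the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Per-user salts mean identical passwords hash to different values, so attackers can't spot password reuse across accounts or use precomputed tables.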
This is well-written and very good.
We already know the NSA wants to eavesdrop on the Internet. It has secret agreements with telcos to get direct access to bulk Internet traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to sift through it all. And it can identify ciphertext—encrypted information—and figure out which programs could have created it.
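One well-known heuristic behind "identifying ciphertext" is that well-encrypted data is statistically indistinguishable from uniform random bytes, so its Shannon entropy sits near the maximum of 8 bits per byte, while text and most file formats score much lower. A sketch of that idea (not a claim about any agency's actual tooling):

```python
# Shannon entropy per byte as a crude ciphertext detector.
import math
import secrets
from collections import Counter

def byte_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

english = b"the quick brown fox jumps over the lazy dog " * 200
random_like = secrets.token_bytes(len(english))  # stands in for ciphertext

assert byte_entropy(english) < 5.0        # plain text: low entropy
assert byte_entropy(random_like) > 7.5    # ciphertext-like: near 8 bits/byte
```

Compressed data also scores high on this test, which is why real traffic classification needs more signals than entropy alone, such as file headers and protocol context.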
But what the NSA wants is to be able to read that encrypted information in as close to real-time as possible. It wants backdoors, just like the cybercriminals and less benevolent governments do.
And we have to figure out how to make it harder for them, or anyone else, to insert those backdoors.
How the NSA Gets Its Backdoors
The FBI tried to get backdoor access embedded in an AT&T secure telephone system in the mid-1990s. The Clipper Chip included something called a LEAF: a Law Enforcement Access Field. It was the key used to encrypt the phone conversation, itself encrypted in a special key known to the FBI, and it was transmitted along with the phone conversation. An FBI eavesdropper could intercept the LEAF and decrypt it, then use the data to eavesdrop on the phone call.
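The escrow mechanism described above can be sketched in a few lines: the session key encrypting the call is itself encrypted under a key held by the government and transmitted alongside the traffic. This is a simplification (real LEAFs also carried a unit ID and a checksum, and Clipper used the Skipjack cipher, stood in for here by a toy XOR keystream):

```python
# Simplified key-escrow sketch of the Clipper Chip's LEAF mechanism.
import hashlib
import secrets

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream (toy stand-in for Skipjack)."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(b ^ s for b, s in zip(data, stream))

escrow_key = secrets.token_bytes(16)      # held by law enforcement
session_key = secrets.token_bytes(16)     # fresh key for this phone call

voice = b"hello, this call is private"
traffic = toy_encrypt(session_key, voice)
leaf = toy_encrypt(escrow_key, session_key)   # Law Enforcement Access Field

# An eavesdropper holding the escrow key unwraps the LEAF, then the call:
recovered_session = toy_encrypt(escrow_key, leaf)
assert recovered_session == session_key
assert toy_encrypt(recovered_session, traffic) == voice
```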
But the Clipper Chip faced severe backlash, and became defunct a few years after being announced.
Having lost that public battle, the NSA decided to get its backdoors through subterfuge: by asking nicely, pressuring, threatening, bribing, or mandating through secret order. The general name for this program is BULLRUN.
Defending against these attacks is difficult. We know from subliminal channel and kleptography research that it’s pretty much impossible to guarantee that a complex piece of software isn’t leaking secret information. We know from Ken Thompson’s famous “trusting trust” talk (his 1983 ACM Turing Award lecture) that you can never be totally sure whether there’s a security flaw in your software.
Since BULLRUN became public last month, the security community has been examining security flaws discovered over the past several years, looking for signs of deliberate tampering. The Debian random number flaw was probably not deliberate, but the 2003 Linux security vulnerability probably was. The DUAL_EC_DRBG random number generator may or may not have been a backdoor. The SSL 2.0 flaw was probably an honest mistake. The GSM A5/1 encryption algorithm was almost certainly deliberately weakened. All the common RSA moduli out there in the wild: we don’t know. Microsoft’s _NSAKEY looks like a smoking gun, but honestly, we don’t know.
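Why repeated RSA moduli and factors in the wild are alarming, whether deliberate or the product of bad randomness: if two public moduli happen to share one prime factor, a cheap GCD computation recovers that factor and breaks both keys, with no factoring required. A sketch with tiny illustrative primes:

```python
# GCD attack on RSA moduli that share a prime factor (toy primes).
import math

p_shared, q1, q2 = 1009, 2003, 3001   # p_shared reused across two keys
n1 = p_shared * q1                    # first victim's public modulus
n2 = p_shared * q2                    # second victim's public modulus

factor = math.gcd(n1, n2)             # cheap computation, no factoring needed
assert factor == p_shared
assert n1 // factor == q1 and n2 // factor == q2   # both keys fully factored
```

This is exactly why surveys that computed pairwise GCDs across millions of deployed keys were able to break a measurable fraction of them.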
How the NSA Designs Backdoors
While a separate program that sends our data to some IP address somewhere is certainly how any hacker—from the lowliest script kiddie up to the NSA—spies on our computers, it’s too labor-intensive to work in the general case.
For government eavesdroppers like the NSA, subtlety is critical. In particular, three characteristics are important:
These characteristics imply several things:
Design Strategies for Defending against Backdoors
With these principles in mind, we can list design strategies. None of them is foolproof, but they are all useful. I’m sure there are more; this list isn’t meant to be exhaustive, nor the final word on the topic. It’s simply a starting place for discussion. But none of it will work unless customers start demanding software with this sort of transparency.
This is a hard problem. We don’t have any technical controls that protect users from the authors of their software.
And the current state of software makes the problem even harder: Modern apps chatter endlessly on the Internet, providing noise and cover for covert communications. Feature bloat provides a greater “attack surface” for anyone wanting to install a backdoor.
In general, what we need is assurance: methodologies for ensuring that a piece of software does what it’s supposed to do and nothing more. Unfortunately, we’re terrible at this. Even worse, there’s not a lot of practical research in this area—and it’s hurting us badly right now.
Yes, we need legal prohibitions against the NSA trying to subvert authors and deliberately weaken cryptography. But this isn’t just about the NSA, and legal controls won’t protect against those who don’t follow the law and ignore international agreements. We need to make their job harder by increasing their risk of discovery. Against a risk-averse adversary, it might be good enough.
This essay previously appeared on Wired.com.
EDITED TO ADD: I am looking for other examples of known or plausible instances of intentional vulnerabilities for a paper I am writing on this topic. If you can think of an example, please post a description and reference in the comments below. Please explain why you think the vulnerability could be intentional. Thank you.