TPM-Fail Attacks Against Cryptographic Coprocessors
Really interesting research: TPM-FAIL: TPM meets Timing and Lattice Attacks, by Daniel Moghimi, Berk Sunar, Thomas Eisenbarth, and Nadia Heninger.
Abstract: Trusted Platform Module (TPM) serves as a hardware-based root of trust that protects cryptographic keys from privileged system and physical adversaries. In this work, we perform a black-box timing analysis of TPM 2.0 devices deployed on commodity computers. Our analysis reveals that some of these devices feature secret-dependent execution times during signature generation based on elliptic curves. In particular, we discovered timing leakage on an Intel firmware-based TPM as well as a hardware TPM. We show how this information allows an attacker to apply lattice techniques to recover 256-bit private keys for ECDSA and ECSchnorr signatures. On Intel fTPM, our key recovery succeeds after about 1,300 observations and in less than two minutes. Similarly, we extract the private ECDSA key from a hardware TPM manufactured by STMicroelectronics, which is certified at Common Criteria (CC) EAL 4+, after fewer than 40,000 observations. We further highlight the impact of these vulnerabilities by demonstrating a remote attack against a StrongSwan IPsec VPN that uses a TPM to generate the digital signatures for authentication. In this attack, the remote client recovers the server's private authentication key by timing only 45,000 authentication handshakes via a network connection.
The vulnerabilities we have uncovered emphasize the difficulty of correctly implementing known constant-time techniques, and show the importance of evolutionary testing and transparent evaluation of cryptographic implementations. Even certified devices that claim resistance against attacks require additional scrutiny by the community and industry, as we learn more about these attacks.
These are real attacks, and take between 4 and 20 minutes to extract the key. Intel has a firmware update.
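The core idea of the attack is worth spelling out: if signing time depends on the bit length of the per-signature nonce, an attacker can keep only the fastest observations, which correspond to nonces with leading zero bits — exactly the partial information a lattice (hidden number problem) attack needs. Here is a minimal toy model of that filtering step; the linear timing model and all names are illustrative assumptions, not the vendors' actual code or the authors' tooling:

```python
import random

N_BITS = 256  # nonce size for a 256-bit curve

def sign_time(k):
    # Hypothetical timing model: the scalar-multiplication loop runs once
    # per nonce bit, so "time" is proportional to the nonce's bit length.
    return k.bit_length()

def collect(num):
    # Simulate observing `num` signatures, recording (nonce, timing) pairs.
    # A real attacker sees only the timing, never the nonce itself.
    return [(k, sign_time(k)) for k in
            (random.randrange(1, 1 << N_BITS) for _ in range(num))]

sigs = collect(10_000)

# Keep only the fastest observations: these signatures used nonces with at
# least 8 leading zero bits, giving the attacker known-bits equations to
# feed into a lattice reduction for key recovery.
fast = [k for k, t in sigs if t <= N_BITS - 8]
assert all(k.bit_length() <= N_BITS - 8 for k in fast)
```

With ~10,000 observations, roughly 1 in 256 signatures survives the filter, which is why the paper's attacks need on the order of thousands of observations rather than a handful.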
Attack website. News articles. Boing Boing post. Slashdot thread.
Ross Snider • November 15, 2019 12:08 PM
I’d really like to see implementation security considered a critical security feature for the selection criteria of contest ciphers.
Implementation security would be measured assuming non-expert implementers will develop the most naive versions of the cryptographic code (e.g., using lookup tables for S-boxes, and failing to validate special cases like points in subgroups).
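To illustrate the lookup-table pitfall mentioned above: a naive S-box lookup dereferences a secret-dependent address, which leaks through cache timing, while a constant-time variant scans the whole table and selects the wanted entry with arithmetic masks. This sketch is purely illustrative (the toy S-box is made up, and Python itself gives no real timing guarantees — actual constant-time code must be written at a lower level):

```python
SBOX = list(range(256))[::-1]  # toy S-box, stands in for a real cipher table

def lookup_naive(x):
    # Secret-dependent memory access: which cache line is touched depends
    # on x, so a co-resident attacker can recover x from cache timing.
    return SBOX[x]

def lookup_ct(x):
    # Constant-time pattern: touch every table entry, and use a mask that
    # is 0xFF only at the secret index to select the result without a
    # secret-dependent branch or address.
    out = 0
    for i, v in enumerate(SBOX):
        mask = -(i == x) & 0xFF  # 0xFF when i == x, else 0x00
        out |= v & mask
    return out

assert all(lookup_naive(x) == lookup_ct(x) for x in range(256))
```

The two functions compute the same value; the difference is entirely in their memory-access pattern, which is exactly the property a naive implementer never thinks to check.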
A good example is ed25519's Edwards curves, whose default naive implementation is free of timing oracles.
While the conventional wisdom is that only security experts should implement cryptography, the reality is that this is both unenforceable and unverifiable. Further, security experts implementing these systems under the review and assurance structures of multi-billion-dollar organizations, checked and certified by industry bodies, still make these kinds of mistakes.
There was some interesting research at NYU on leakage-resilient cryptography that I saw around 2011 (the security model assumes a certain rate of private-bit leakage through unspecified means), but I haven't kept up with it. These lines of research seem promising to me.