Entries Tagged "cryptanalysis"

Is Cryptography Engineering or Science?

Responding to a tweet by Thomas Ptacek saying, “If you’re not learning crypto by coding attacks, you might not actually be learning crypto,” Colin Percival published a well-thought-out rebuttal, saying in part:

If we were still in the 1990s, I would agree with Thomas. 1990s cryptography was full of holes, and the best you could hope for was to know how your tools were broken so you could try to work around their deficiencies. This was a time when DES and RC4 were widely used, despite having well-known flaws. This was a time when people avoided using CTR mode to convert block ciphers into stream ciphers, due to concern that a weak block cipher could break if fed input blocks which shared many (zero) bytes in common. This was a time when people cared about the “error propagation” properties of block ciphers—that is, how much of the output would be mangled if a small number of bits in the ciphertext are flipped. This was a time when people routinely advised compressing data before encrypting it, because that “compacted” the entropy in the message, and thus made it “more difficult for an attacker to identify when he found the right key”. It should come as no surprise that SSL, designed during this era, has had a long list of design flaws.

Cryptography in the 2010s is different. Now we start with basic components which are believed to be highly secure—e.g., block ciphers which are believed to be indistinguishable from random permutations—and which have been mathematically proven to be secure against certain types of attacks—e.g., AES is known to be immune to differential cryptanalysis. From those components, we then build higher-order systems using mechanisms which have been proven to not introduce vulnerabilities. For example, if you generate an ordered sequence of packets by encrypting data using an indistinguishable-from-random-permutation block cipher (e.g., AES) in CTR mode using a packet sequence number as the CTR nonce, and then append a weakly-unforgeable MAC (e.g., HMAC-SHA256) of the encrypted data and the packet sequence number, the packets both preserve privacy and do not permit any undetected tampering (including replays and reordering of packets). Life will become even better once Keccak (a.k.a. SHA-3) becomes more widely reviewed and trusted, as its “sponge” construction can be used to construct—with provable security—a very wide range of important cryptographic components.
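Colin's recipe—CTR-mode encryption with the packet sequence number as the nonce, followed by a MAC over the sequence number and ciphertext—can be sketched in a few lines. This is an illustrative sketch, not a vetted implementation: it substitutes HMAC-SHA256 in counter mode for AES as the keystream generator so it runs with only the Python standard library, and the function names are mine.

```python
import hmac
import hashlib
import struct

def keystream(enc_key: bytes, seq: int, length: int) -> bytes:
    # CTR mode: PRF(key, nonce || counter) for successive counter values.
    # HMAC-SHA256 stands in for AES as the PRF (stdlib-only sketch).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(enc_key, struct.pack(">QQ", seq, counter),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, seq: int, plaintext: bytes) -> bytes:
    ct = bytes(p ^ k for p, k in
               zip(plaintext, keystream(enc_key, seq, len(plaintext))))
    # Encrypt-then-MAC: the tag covers the sequence number AND the
    # ciphertext, so replayed or reordered packets fail verification.
    tag = hmac.new(mac_key, struct.pack(">Q", seq) + ct, hashlib.sha256).digest()
    return ct + tag

def open_(enc_key: bytes, mac_key: bytes, seq: int, packet: bytes) -> bytes:
    ct, tag = packet[:-32], packet[-32:]
    expected = hmac.new(mac_key, struct.pack(">Q", seq) + ct,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed: tampering, replay, or reordering")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, seq, len(ct))))
```

Verifying the MAC before decrypting, and binding the sequence number into both the nonce and the tag, is what rules out undetected reordering: a valid packet presented under the wrong sequence number simply fails the MAC check.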

He recommends a more modern approach to cryptography: “studying the theory and designing systems which you can prove are secure.”

I think both statements are true—and not contradictory at all. The apparent disagreement stems from differing definitions of cryptography.

Many years ago, on the Cryptographer’s Panel at an RSA conference, then-chief scientist for RSA Bert Kaliski talked about the rise of something he called the “crypto engineer.” His point was that the practice of cryptography was changing. There was the traditional mathematical cryptography—designing and analyzing algorithms and protocols, and building up cryptographic theory—but there was also a more practice-oriented cryptography: taking existing cryptographic building blocks and creating secure systems out of them. It’s this latter group he called crypto engineers. It’s the group of people I wrote Applied Cryptography, and, most recently, co-wrote Cryptography Engineering, for. Colin knows this, directing his advice to “developers”—Kaliski’s crypto engineers.

Traditional cryptography is a science—applied mathematics—and applied cryptography is engineering. I prefer the term “security engineering,” because it necessarily encompasses a lot more than cryptography—see Ross Anderson’s great book of that name. And mistakes in engineering are where a lot of real-world cryptographic systems break.

Provable security has its limitations. Cryptographer Lars Knudsen once said: “If it’s provably secure, it probably isn’t.” Yes, we have provably secure cryptography, but those proofs take very specific forms against very specific attacks. They reduce the number of security assumptions we have to make about a system, but we still have to make a lot of security assumptions.

And cryptography has its limitations in general, despite the apparent strengths. Cryptography’s great strength is that it gives the defender a natural advantage: adding a single bit to a cryptographic key increases the work to encrypt by only a small amount, but doubles the work required to break the encryption. This is how we design algorithms that—in theory—can’t be broken until the universe collapses back on itself.
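That asymmetry is easy to quantify: a brute-force attacker expects to search half the keyspace, so each added key bit doubles the attacker's expected work while the defender's cost grows only linearly. A quick arithmetic illustration:

```python
def expected_trials(n_bits: int) -> int:
    # A brute-force attacker expects to try half of the 2**n keys.
    return 2 ** (n_bits - 1)

# One extra key bit doubles the attacker's expected work.
assert expected_trials(129) == 2 * expected_trials(128)

# A 128-bit keyspace already dwarfs any plausible effort: even at an
# absurdly generous 2**60 trials per second, exhaustion takes
seconds = expected_trials(128) // 2 ** 60
years = seconds // (60 * 60 * 24 * 365)  # trillions of years
```

The defender pays one more bit of key storage and a marginal amount of extra computation; the attacker pays double. That is the "natural advantage" in concrete terms.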

Despite this, cryptographic systems are broken all the time: well before the heat death of the universe. They’re broken because of software mistakes in coding the algorithms. They’re broken because the computer’s memory management system left a stray copy of the key lying around, and the operating system automatically copied it to disk. They’re broken because of buffer overflows and other security flaws. They’re broken by side-channel attacks. They’re broken because of bad user interfaces, or insecure user practices.

Lots of people have said: “In theory, theory and practice are the same. But in practice, they are not.” It’s true about cryptography. If you want to be a cryptographer, study mathematics. Study the mathematics of cryptography, and especially cryptanalysis. There’s a lot of art to the science, and you won’t be able to design good algorithms and protocols until you gain experience in breaking existing ones. If you want to be a security engineer, study implementations and coding. Take the tools cryptographers create, and learn how to use them well.

The world needs security engineers even more than it needs cryptographers. We’re great at mathematically secure cryptography, and terrible at using those tools to engineer secure systems.

After writing this, I found a conversation between the two where they both basically agreed with me.

Posted on July 5, 2013 at 7:04 AM

SIMON and SPECK: New NSA Encryption Algorithms

The NSA has published some new symmetric algorithms:

Abstract: In this paper we propose two families of block ciphers, SIMON and SPECK, each of which comes in a variety of widths and key sizes. While many lightweight block ciphers exist, most were designed to perform well on a single platform and were not meant to provide high performance across a range of devices. The aim of SIMON and SPECK is to fill the need for secure, flexible, and analyzable lightweight block ciphers. Each offers excellent performance on hardware and software platforms, is flexible enough to admit a variety of implementations on a given platform, and is amenable to analysis using existing techniques. Both perform exceptionally well across the full spectrum of lightweight applications, but SIMON is tuned for optimal performance in hardware, and SPECK for optimal performance in software.

It’s always fascinating to study NSA-designed ciphers. I was particularly interested in the algorithms’ similarity to Threefish, and how they improved on what we did. I was most impressed with their key schedule. I am always impressed with how the NSA does key schedules. And I enjoyed the discussion of requirements. Missing, of course, is any cryptanalytic analysis.

I don’t know anything about the context of this paper. Why was the work done, and why is it being made public? I’m curious.

Posted on July 1, 2013 at 6:24 AM

New RC4 Attack

This is a really clever attack on the RC4 encryption algorithm as used in TLS.

We have found a new attack against TLS that allows an attacker to recover a limited amount of plaintext from a TLS connection when RC4 encryption is used. The attacks arise from statistical flaws in the keystream generated by the RC4 algorithm which become apparent in TLS ciphertexts when the same plaintext is repeatedly encrypted at a fixed location across many TLS sessions.

The attack is very specialized:

The attack is a multi-session attack, which means that we require a target plaintext to be repeatedly sent in the same position in the plaintext stream in multiple TLS sessions. The attack currently only targets the first 256 bytes of the plaintext stream in sessions. Since the first 36 bytes of plaintext are formed from an unpredictable Finished message when SHA-1 is the selected hashing algorithm in the TLS Record Protocol, these first 36 bytes cannot be recovered. This means that the attack can recover 220 bytes of TLS-encrypted plaintext.

The number of sessions needed to reliably recover these plaintext bytes is around 2^30, but already with only 2^24 sessions, certain bytes can be recovered reliably.

Is this a big deal? Yes and no. The attack requires the identical plaintext to be repeatedly encrypted. Normally, this would make for an impractical attack in the real world, but HTTP messages often have stylized headers that are identical across a conversation—for example, cookies. On the other hand, those are the only bits that can be decrypted. Currently, this attack is pretty raw and unoptimized—so it’s likely to become faster and better.
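The “statistical flaws in the keystream” are single-byte biases in RC4’s early output. The best known is Mantin and Shamir’s observation that the second keystream byte is zero with probability about 2/256 rather than 1/256, so across many sessions the ciphertext byte at that position leaks the plaintext byte twice as often as chance. A stdlib-only demonstration of that one bias (a sketch of the statistical idea, not the full multi-byte attack from the paper):

```python
import os

def rc4_first_bytes(key: bytes, n: int) -> bytes:
    # Standard RC4: key scheduling (KSA) followed by keystream output (PRGA).
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Count how often the second keystream byte is zero, over many random keys.
trials = 1 << 14
zeros = sum(rc4_first_bytes(os.urandom(16), 2)[1] == 0
            for _ in range(trials))
# A uniform keystream would give ~trials/256 = 64 zeros;
# RC4's bias gives roughly twice that, ~trials/128 = 128.
```

The TLS attack aggregates many such position-dependent biases across the first 256 keystream bytes, which is why it needs on the order of 2^30 sessions rather than a handful.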

There’s no reason to panic here. But let’s start to move away from RC4 to something like AES.

There are lots of press articles on the attack.

Posted on March 29, 2013 at 6:59 AM

Breaking Hard-Disk Encryption

The newly announced ElcomSoft Forensic Disk Decryptor can decrypt BitLocker, PGP, and TrueCrypt. And it’s only $300. How does it work?

Elcomsoft Forensic Disk Decryptor acquires the necessary decryption keys by analyzing memory dumps and/or hibernation files obtained from the target PC. You’ll thus need to get a memory dump from a running PC (locked or unlocked) with encrypted volumes mounted, via a standard forensic product or via a FireWire attack. Alternatively, decryption keys can also be derived from hibernation files if a target PC is turned off.

This isn’t new. I wrote about AccessData doing the same thing in 2007:

Even so, none of this might actually matter. AccessData sells another program, Forensic Toolkit, that, among other things, scans a hard drive for every printable character string. It looks in documents, in the Registry, in e-mail, in swap files, in deleted space on the hard drive … everywhere. And it creates a dictionary from that, and feeds it into PRTK.

And PRTK breaks more than 50 percent of passwords from this dictionary alone.
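The string-harvesting step is simple to sketch: scan raw bytes for runs of printable characters and treat each run as a candidate password. A minimal stdlib version (illustrative only; the commercial tools scan far more locations and normalize far more aggressively):

```python
import re

def harvest_strings(raw: bytes, min_len: int = 4) -> set:
    # Collect runs of printable ASCII at least min_len characters long,
    # as a candidate-password dictionary for a guessing tool.
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return {m.group().decode("ascii") for m in re.finditer(pattern, raw)}

# A stand-in for a disk image: strings embedded among non-printable bytes.
blob = b"\x00\x01secret-passphrase\xff\xfeok\x00hunter2!\x02"
words = harvest_strings(blob)
```

Run over an entire drive—documents, Registry, swap, deleted space—this yields a dictionary that contains most passwords a user ever typed on the machine, which is why the success rate against it is so high.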

It’s getting harder and harder to maintain good file security.

Posted on December 27, 2012 at 1:02 PM

Stealing VM Keys from the Hardware Cache

Research into one VM stealing crypto keys from another VM running on the same hardware.

ABSTRACT: This paper details the construction of an access-driven side-channel attack by which a malicious virtual machine (VM) extracts fine-grained information from a victim VM running on the same physical computer. This attack is the first such attack demonstrated on a symmetric multiprocessing system virtualized using a modern VMM (Xen). Such systems are very common today, ranging from desktops that use virtualization to sandbox application or OS compromises, to clouds that co-locate the workloads of mutually distrustful customers. Constructing such a side-channel requires overcoming challenges including core migration, numerous sources of channel noise, and the difficulty of preempting the victim with sufficient frequency to extract fine-grained information from it. This paper addresses these challenges and demonstrates the attack in a lab setting by extracting an ElGamal decryption key from a victim using the most recent version of the libgcrypt cryptographic library.

Two articles.

Posted on November 16, 2012 at 6:13 AM

New WWII Cryptanalysis

I’d sure like to know more about this:

Government code-breakers are working on deciphering a message that has remained a secret for 70 years.

It was found on the remains of a carrier pigeon that was discovered in a chimney, in Surrey, having been there for decades.

It is thought the contents of the note, once decoded, could provide fresh information from World War II.

It was a British pigeon, presumed to have died while heading back to Bletchley Park.

Some more articles. Additional video.

ETA (11/5): Another article, and Bletchley Park news release.

ETA (11/6): And another.

I look forward to seeing the decryption.

EDITED TO ADD (11/25): GCHQ can’t decrypt it. They think that it’s either a one-time pad or a unique codebook.

Posted on November 5, 2012 at 1:26 PM

When Will We See Collisions for SHA-1?

On a NIST-sponsored hash function mailing list, Jesse Walker (from Intel; also a member of the Skein team) did some back-of-the-envelope calculations to estimate how long it will be before we see a practical collision attack against SHA-1. I’m reprinting his analysis here, so it reaches a broader audience.

According to E-BASH, the cost of one block of a SHA-1 operation on already deployed commodity microprocessors is about 2^14 cycles. If Stevens’ attack of 2^60 SHA-1 operations serves as the baseline, then finding a collision costs about 2^14 * 2^60 ~ 2^74 cycles.

A core today provides about 2^31 cycles/sec; the state of the art is 8 = 2^3 cores per processor for a total of 2^3 * 2^31 = 2^34 cycles/sec. A server typically has 4 processors, increasing the total to 2^2 * 2^34 = 2^36 cycles/sec. Since there are about 2^25 sec/year, this means one server delivers about 2^25 * 2^36 = 2^61 cycles per year, which we can call a “server year.”

There is ample evidence that Moore’s law will continue through the mid 2020s. Hence the number of doublings in processor power we can expect between now and 2021 is:

3/1.5 = 2 times by 2015 (3 = 2015 – 2012)

6/1.5 = 4 times by 2018 (6 = 2018 – 2012)

9/1.5 = 6 times by 2021 (9 = 2021 – 2012)

So a commodity server year should be about:

2^61 cycles/year in 2012

2^2 * 2^61 = 2^63 cycles/year by 2015

2^4 * 2^61 = 2^65 cycles/year by 2018

2^6 * 2^61 = 2^67 cycles/year by 2021

Therefore, on commodity hardware, Stevens’ attack should cost approximately:

2^74 / 2^61 = 2^13 server years in 2012

2^74 / 2^63 = 2^11 server years by 2015

2^74 / 2^65 = 2^9 server years by 2018

2^74 / 2^67 = 2^7 server years by 2021

Today Amazon rents compute time on commodity servers for about $0.04/hour ~ $350/year. Assume compute rental fees remain fixed while server capacity keeps pace with Moore’s law. Then, since log2(350) ~ 8.4, the cost of the attack will be approximately:

2^13 * 2^8.4 = 2^21.4 ~ $2.77M in 2012

2^11 * 2^8.4 = 2^19.4 ~ $700K by 2015

2^9 * 2^8.4 = 2^17.4 ~ $173K by 2018

2^7 * 2^8.4 = 2^15.4 ~ $43K by 2021

A collision attack is therefore well within the range of what an organized crime syndicate can practically budget by 2018, and a university research project by 2021.
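Walker's arithmetic is easy to reproduce. With the attack fixed at 2^74 cycles, one server delivering 2^61 cycles/year in 2012, capacity doubling every 1.5 years, and $350 ~ 2^8.4 per server-year, the dollar cost falls by 4x every three years:

```python
ATTACK_CYCLES = 2 ** 74        # 2^14 cycles/block * 2^60 SHA-1 operations
SERVER_CYCLES_2012 = 2 ** 61   # one commodity "server year" in 2012
COST_PER_SERVER_YEAR = 350     # ~2^8.4 dollars at 2012 rental pricing

def attack_cost(year: int) -> float:
    doublings = (year - 2012) / 1.5  # Moore's-law doublings since 2012
    server_years = ATTACK_CYCLES / (SERVER_CYCLES_2012 * 2 ** doublings)
    return server_years * COST_PER_SERVER_YEAR

for y in (2012, 2015, 2018, 2021):
    print(y, f"${attack_cost(y):,.0f}")
```

This matches Walker's ~$2.77M / $700K / $173K / $43K figures to within the rounding of the 2^8.4 ~ $350 approximation.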

Since this argument only takes into account commodity hardware and not instruction set improvements (e.g., ARMv8 specifies SHA-1 instructions), other commodity computing devices with even greater processing power (e.g., GPUs), and custom hardware, the need to transition from SHA-1 for collision resistance functions is probably more urgent than this back-of-the-envelope analysis suggests.

Any increase in the number of cores per CPU, or the number of CPUs per server, also affects these calculations. Also, any improvements in cryptanalysis will further reduce the complexity of this attack.

The point is that we in the community need to start the migration away from SHA-1 and to SHA-2/SHA-3 now.

Posted on October 5, 2012 at 1:24 PM
