Entries Tagged "cryptography"

Breaking RSA with a Quantum Computer

A group of Chinese researchers have just published a paper claiming that they can—although they have not yet done so—break 2048-bit RSA. This is something to take seriously. It might not be correct, but it’s not obviously wrong.

We have long known from Shor’s algorithm that factoring with a quantum computer is easy. But it takes a big quantum computer, on the order of millions of qubits, to factor anything resembling the key sizes we use today. What the researchers have done is combine classical lattice reduction factoring techniques with a quantum approximate optimization algorithm. This means that they only need a quantum computer with 372 qubits, which is well within what’s possible today. (The IBM Osprey is a 433-qubit quantum computer, for example. Others are on their way as well.)

The Chinese group didn’t have that large a quantum computer to work with. They were able to factor 48-bit numbers using a 10-qubit quantum computer. And while there are always potential problems when scaling something like this up by a factor of 50, there are no obvious barriers.

Honestly, most of the paper is over my head—both the lattice-reduction math and the quantum physics. And there’s the nagging question of why the Chinese government didn’t classify this research. But…wow…maybe…and yikes! Or not.

Factoring integers with sublinear resources on a superconducting quantum processor

Abstract: Shor’s algorithm has seriously challenged information security based on public key cryptosystems. However, to break the widely used RSA-2048 scheme, one needs millions of physical qubits, which is far beyond current technical capabilities. Here, we report a universal quantum algorithm for integer factorization by combining the classical lattice reduction with a quantum approximate optimization algorithm (QAOA). The number of qubits required is O(log N / log log N), which is sublinear in the bit length of the integer N, making it the most qubit-saving factorization algorithm to date. We demonstrate the algorithm experimentally by factoring integers up to 48 bits with 10 superconducting qubits, the largest integer factored on a quantum device. We estimate that a quantum circuit with 372 physical qubits and a depth of thousands is necessary to challenge RSA-2048 using our algorithm. Our study shows great promise in expediting the application of current noisy quantum computers, and paves the way to factor large integers of realistic cryptographic significance.
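As a back-of-envelope check of that qubit count, and nothing more: the abstract only gives the asymptotic bound, but if one assumes the hidden constant is about 2, the formula reproduces the 372-qubit figure for a 2048-bit modulus. The constant is my guess, reverse-engineered from the paper's own number.

```python
import math

def estimated_qubits(bit_length: int, constant: float = 2.0) -> int:
    # Back-of-envelope: constant * log2(N) / log2(log2(N)) for an n-bit modulus N.
    # The constant 2.0 is an assumption chosen to match the paper's quoted figure;
    # the paper itself only states the asymptotic O(log N / log log N) bound.
    return round(constant * bit_length / math.log2(bit_length))

print(estimated_qubits(2048))  # 372, the number quoted for RSA-2048
```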

In email, Roger Grimes told me: “Apparently what happened is another guy who had previously announced he was able to break traditional asymmetric encryption using classical computers…but reviewers found a flaw in his algorithm and that guy had to retract his paper. But this Chinese team realized that the step that killed the whole thing could be solved by small quantum computers. So they tested and it worked.”

EDITED TO ADD: One of the issues with the algorithm is that it relies on a recent factoring paper by Claus Schnorr. It’s a controversial paper; and despite the “this destroys the RSA cryptosystem” claim in the abstract, it does nothing of the sort. Schnorr’s algorithm works well with smaller moduli—around the same order as the ones the Chinese group has tested—but falls apart at larger sizes. At this point, nobody understands why. The Chinese paper claims that its quantum techniques get around this limitation (I think that’s what’s behind Grimes’s comment) but doesn’t give any details—and the authors haven’t tested it with larger moduli. So if it’s true that the Chinese paper depends on this Schnorr technique that doesn’t scale, the techniques in this Chinese paper won’t scale, either. (On the other hand, if it does scale, then I think it also breaks a bunch of lattice-based public-key cryptosystems.)

I am now much less worried that this technique will work. But this is something the IBM quantum computing people can test right now.

EDITED TO ADD (1/4): A reporter just asked me my gut feel about this. I replied that I don’t think this will break RSA. Several times a year the cryptography community receives “breakthroughs” from people outside the community. That’s why we created the RSA Factoring Challenge: to force people to provide proofs of their claims. In general, the smart bet is on the new techniques not working. But someday, that bet will be wrong. Is it today? Probably not. But it could be. We’re in the worst possible position right now: we don’t have the facts to know. Someone needs to implement the quantum algorithm and see.

EDITED TO ADD (1/5): Scott Aaronson’s take is a “no”:

In the new paper, the authors spend page after page saying-without-saying that it might soon become possible to break RSA-2048, using a NISQ (i.e., non-fault-tolerant) quantum computer. They do so via two time-tested stratagems:

  1. the detailed exploration of irrelevancies (mostly, optimization of the number of qubits, while ignoring the number of gates), and
  2. complete silence about the one crucial point.

Then, finally, they come clean about the one crucial point in a single sentence of the Conclusion section:

It should be pointed out that the quantum speedup of the algorithm is unclear due to the ambiguous convergence of QAOA.

“Unclear” is an understatement here. It seems to me that a miracle would be required for the approach here to yield any benefit at all, compared to just running the classical Schnorr’s algorithm on your laptop. And if the latter were able to break RSA, it would’ve already done so.

All told, this is one of the most actively misleading quantum computing papers I’ve seen in 25 years, and I’ve seen … many.

EDITED TO ADD (1/7): More commentary. Again: no need to panic.

EDITED TO ADD (1/12): Peter Shor has suspicions.

Posted on January 3, 2023 at 12:38 PM

A Taxonomy of Access Control

My personal definition of a brilliant idea is one that is immediately obvious once it’s explained, but no one has thought of it before. I can’t believe that no one has described this taxonomy of access control before Ittay Eyal laid it out in this paper. The paper is about cryptocurrency wallet design, but the ideas are more general. Ittay points out that a key—or an account, or anything similar—can be in one of four states:

  Safe: Only the user has access.
  Loss: No one has access.
  Leak: Both the user and the adversary have access.
  Theft: Only the adversary has access.

Once you know these states, you can assign probabilities of transitioning from one state to another (someone hacks your account and locks you out, you forget your own password, etc.) and then build optimal security and reliability to deal with it. It’s a truly elegant way of conceptualizing the problem.
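As a minimal sketch of how that might be made concrete: treat the key as a tiny Markov chain over Eyal's four states and estimate outcome probabilities by simulation. The transition probabilities below are invented placeholders, not numbers from the paper.

```python
import random

# Eyal's four states for a key. The transition probabilities are made-up
# placeholders; a real analysis would estimate them for a specific design.
TRANSITIONS = {
    "safe":  {"safe": 0.97, "loss": 0.01, "leak": 0.015, "theft": 0.005},
    "loss":  {"loss": 1.0},                                # nobody can recover access
    "leak":  {"leak": 0.90, "theft": 0.08, "loss": 0.02},  # adversary may lock you out
    "theft": {"theft": 1.0},
}

def outcome_probabilities(steps: int, trials: int = 100_000) -> dict[str, float]:
    """Estimate the probability of ending up in each state after `steps` events."""
    counts = {state: 0 for state in TRANSITIONS}
    for _ in range(trials):
        state = "safe"
        for _ in range(steps):
            options = TRANSITIONS[state]
            state = random.choices(list(options), weights=list(options.values()))[0]
        counts[state] += 1
    return {state: count / trials for state, count in counts.items()}

print(outcome_probabilities(steps=50))
```

Comparing two wallet designs then amounts to comparing how they shift these probabilities.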

Posted on August 12, 2022 at 6:38 AM

NIST’s Post-Quantum Cryptography Standards

Quantum computing is a completely new paradigm for computers. A quantum computer uses quantum properties such as superposition, which allows a qubit (a quantum bit) to be neither 0 nor 1, but something much more complicated. In theory, such a computer can solve problems too complex for conventional computers.

Current quantum computers are still toy prototypes, and the engineering advances required to build a functionally useful quantum computer are somewhere between a few years away and impossible. Even so, we already know that such a computer could potentially factor large numbers and compute discrete logs, and break the RSA and Diffie-Hellman public-key algorithms in all of the useful key sizes.

Cryptographers hate being rushed into things, which is why NIST began a competition to create a post-quantum cryptographic standard in 2016. The idea is to standardize on both a public-key encryption and digital signature algorithm that is resistant to quantum computing, well before anyone builds a useful quantum computer.

NIST is an old hand at this competitive process, having previously done this with symmetric algorithms (AES in 2001) and hash functions (SHA-3 in 2015). I participated in both of those competitions, and have likened them to demolition derbies. The idea is that participants put their algorithms into the ring, and then we all spend a few years beating on each other’s submissions. Then, with input from the cryptographic community, NIST crowns a winner. It’s a good process, mostly because NIST is both trusted and trustworthy.

In 2017, NIST received eighty-two post-quantum algorithm submissions from all over the world. Sixty-nine were considered complete enough to be Round 1 candidates. Twenty-six advanced to Round 2 in 2019, and seven (plus another eight alternates) were announced as Round 3 finalists in 2020. NIST was poised to make final algorithm selections in 2022, with a plan to have a draft standard available for public comment in 2023.

Cryptanalysis over the competition was brutal. Twenty-five of the Round 1 algorithms were attacked badly enough to remove them from the competition. Another eight were similarly attacked in Round 2. But here’s the real surprise: there were newly published cryptanalysis results against at least four of the Round 3 finalists just months ago—moments before NIST was to make its final decision.

One of the most popular algorithms, Rainbow, was found to be completely broken. Not that it could theoretically be broken with a quantum computer, but that it can be broken today—with an off-the-shelf laptop in just over two days. Three other finalists, Kyber, Saber, and Dilithium, were weakened with new techniques that will probably work against some of the other algorithms as well. (Fun fact: Those three algorithms were broken by the Center of Encryption and Information Security, part of the Israel Defense Forces. This represents the first time a national intelligence organization has published a cryptanalysis result in the open literature. And they had a lot of trouble publishing, as the authors wanted to remain anonymous.)

That was a close call, but it demonstrated that the process is working properly. Remember, this is a demolition derby. The goal is to surface these cryptanalytic results before standardization, which is exactly what happened. At this writing, NIST has chosen a single algorithm for general encryption and three digital-signature algorithms. It has not chosen a public-key encryption algorithm, and there are still four finalists. Check NIST’s webpage on the project for the latest information.

Ian Cassels, British mathematician and World War II cryptanalyst, once said that “cryptography is a mixture of mathematics and muddle, and without the muddle the mathematics can be used against you.” This mixture is particularly difficult to achieve with public-key algorithms, which rely on the mathematics for their security in a way that symmetric algorithms do not. We got lucky with RSA and related algorithms: their mathematics hinge on the problem of factoring, which turned out to be robustly difficult. Post-quantum algorithms rely on other mathematical disciplines and problems—code-based cryptography, hash-based cryptography, lattice-based cryptography, multivariate cryptography, and so on—whose mathematics are both more complicated and less well-understood. We’re seeing these breaks because those core mathematical problems aren’t nearly as well-studied as factoring is.

The moral is the need for cryptographic agility. It’s not enough to implement a single standard; it’s vital that our systems be able to easily swap in new algorithms when required. We’ve learned the hard way how algorithms can get so entrenched in systems that it can take many years to update them: in the transition from DES to AES, and the transition from MD4 and MD5 to SHA, SHA-1, and then SHA-3.
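At the code level, agility mostly means never hard-coding an algorithm. A minimal sketch, with hypothetical algorithm names and deliberately stubbed-out implementations: the application depends on an abstract signer, and retiring a broken algorithm becomes a configuration change rather than a rewrite.

```python
from abc import ABC, abstractmethod

class Signer(ABC):
    """Algorithm-agnostic signing interface; application code depends only on this."""

    @abstractmethod
    def sign(self, message: bytes) -> bytes: ...

    @abstractmethod
    def verify(self, message: bytes, signature: bytes) -> bool: ...

class EcdsaP256Signer(Signer):
    # Stub: in a real system this would wrap a classical ECDSA library.
    def sign(self, message: bytes) -> bytes:
        raise NotImplementedError("wrap your ECDSA implementation here")

    def verify(self, message: bytes, signature: bytes) -> bool:
        raise NotImplementedError

class DilithiumSigner(Signer):
    # Stub: would wrap a post-quantum signature implementation once standardized.
    def sign(self, message: bytes) -> bytes:
        raise NotImplementedError("wrap your post-quantum library here")

    def verify(self, message: bytes, signature: bytes) -> bool:
        raise NotImplementedError

# A registry keyed by name (the names are illustrative). Swapping or retiring
# an algorithm means changing this table and rotating keys, not touching the
# application code that calls sign() and verify().
SIGNERS = {
    "ecdsa-p256": EcdsaP256Signer,
    "dilithium3": DilithiumSigner,
}

def get_signer(name: str) -> Signer:
    return SIGNERS[name]()
```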

We need to do better. In the coming years we’ll be facing a double uncertainty. The first is quantum computing. When and if quantum computing becomes a practical reality, we will learn a lot about its strengths and limitations. It took a couple of decades to fully understand von Neumann computer architecture; expect the same learning curve with quantum computing. Our current understanding of quantum computing architecture will change, and that could easily result in new cryptanalytic techniques.

The second uncertainty is in the algorithms themselves. As the new cryptanalytic results demonstrate, we’re still learning a lot about how to turn hard mathematical problems into public-key cryptosystems. We have too much math and an inability to add more muddle, and that results in algorithms that are vulnerable to advances in mathematics. More cryptanalytic results are coming, and more algorithms are going to be broken.

We can’t stop the development of quantum computing. Maybe the engineering challenges will turn out to be impossible, but it’s not the way to bet. In the face of all that uncertainty, agility is the only way to maintain security.

This essay originally appeared in IEEE Security & Privacy.

EDITED TO ADD: One of the four public-key encryption algorithms selected for further research, SIKE, was just broken.

Posted on August 8, 2022 at 6:20 AM

SIKE Broken

SIKE is one of the new algorithms that NIST recently added to the post-quantum cryptography competition.

It was just broken, really badly.

We present an efficient key recovery attack on the Supersingular Isogeny Diffie-Hellman protocol (SIDH), based on a “glue-and-split” theorem due to Kani. Our attack exploits the existence of a small non-scalar endomorphism on the starting curve, and it also relies on the auxiliary torsion point information that Alice and Bob share during the protocol. Our Magma implementation breaks the instantiation SIKEp434, which aims at security level 1 of the Post-Quantum Cryptography standardization process currently run by NIST, in about one hour on a single core.

News article.

Posted on August 4, 2022 at 6:56 AM

NIST Announces First Four Quantum-Resistant Cryptographic Algorithms

NIST’s post-quantum cryptography standardization process is entering its final phases. It has announced the first four algorithms:

For general encryption, used when we access secure websites, NIST has selected the CRYSTALS-Kyber algorithm. Among its advantages are comparatively small encryption keys that two parties can exchange easily, as well as its speed of operation.

For digital signatures, often used when we need to verify identities during a digital transaction or to sign a document remotely, NIST has selected the three algorithms CRYSTALS-Dilithium, FALCON and SPHINCS+ (read as “Sphincs plus”). Reviewers noted the high efficiency of the first two, and NIST recommends CRYSTALS-Dilithium as the primary algorithm, with FALCON for applications that need smaller signatures than Dilithium can provide. The third, SPHINCS+, is somewhat larger and slower than the other two, but it is valuable as a backup for one chief reason: It is based on a different math approach than all three of NIST’s other selections.

NIST has not chosen a public-key encryption standard. The remaining candidates are BIKE, Classic McEliece, HQC, and SIKE.

I have a lot to say on this process, and have written an essay for IEEE Security & Privacy about it. It will be published in a month or so.

Posted on July 6, 2022 at 11:49 AM

On the Subversion of NIST by the NSA

Nadiya Kostyuk and Susan Landau wrote an interesting paper: “Dueling Over DUAL_EC_DRBG: The Consequences of Corrupting a Cryptographic Standardization Process”:

Abstract: In recent decades, the U.S. National Institute of Standards and Technology (NIST), which develops cryptographic standards for non-national security agencies of the U.S. government, has emerged as the de facto international source for cryptographic standards. But in 2013, Edward Snowden disclosed that the National Security Agency had subverted the integrity of a NIST cryptographic standard, the Dual_EC_DRBG, enabling easy decryption of supposedly secured communications. This discovery reinforced the desire of some public and private entities to develop their own cryptographic standards instead of relying on a U.S. government process. Yet, a decade later, no credible alternative to NIST has emerged. NIST remains the only viable candidate for effectively developing internationally trusted cryptography standards.

Cryptographic algorithms are essential to security yet are hard to understand and evaluate. These technologies provide crucial security for communications protocols. Yet the protocols transit international borders; they are used by countries that do not necessarily trust each other. In particular, these nations do not necessarily trust the developer of the cryptographic standard.

Seeking to understand how NIST, a U.S. government agency, was able to remain a purveyor of cryptographic algorithms despite the Dual_EC_DRBG problem, we examine the Dual_EC_DRBG situation, NIST’s response, and why a non-regulatory, non-national security U.S. agency remains a successful international supplier of strong cryptographic solutions.

Posted on June 23, 2022 at 6:05 AM

Hidden Anti-Cryptography Provisions in Internet Anti-Trust Bills

Two bills attempting to reduce the power of Internet monopolies are currently being debated in Congress: S. 2992, the American Innovation and Choice Online Act; and S. 2710, the Open App Markets Act. Reducing the power of tech monopolies would do more to “fix” the Internet than any other single action, and I am generally in favor of them both. (The Center for American Progress wrote a good summary and evaluation of them. I have written in support of the bill that would force Google and Apple to give up their monopolies on their phone app stores.)

There is a significant problem, though. Both bills have provisions that could be used to break end-to-end encryption.

Let’s start with S. 2992. Sec. 3(c)(7)(A)(iii) would allow a company to deny access to apps installed by users, where those app makers “have been identified [by the Federal Government] as national security, intelligence, or law enforcement risks.” That language is far too broad. It would allow Apple to deny access to an encryption service provider that offers encrypted backups to the cloud (which Apple does not currently offer). All Apple would need to do is point to any number of FBI materials decrying the security risks of “warrant proof encryption.”

Sec. 3(c)(7)(A)(vi) states that there shall be no liability for a platform “solely” because it offers “end-to-end encryption.” This language is too narrow. The word “solely” suggests that offering end-to-end encryption could be a factor in determining liability, provided that it is not the only reason. This is very similar to one of the problems with the encryption carve-out in the EARN IT Act. The section also doesn’t mention any other important privacy-protective features and policies, which also shouldn’t be the basis for creating liability for a covered platform under Sec. 3(a).

In Sec. 2(a)(2), the definition of business user excludes any person who “is a clear national security risk.” This term is undefined, and as such far too broad. It can easily be interpreted to cover any company that offers an end-to-end encrypted alternative, or a service offered in a country whose privacy laws forbid disclosing data in response to US court-ordered surveillance. Again, the FBI’s repeated statements about end-to-end encryption could serve as support.

Finally, under Sec. 3(b)(2)(B), platforms have an affirmative defense for conduct that would otherwise violate the Act if they do so in order to “protect safety, user privacy, the security of nonpublic data, or the security of the covered platform.” This language is too vague, and could be used to deny users the ability to use competing services that offer better security/privacy than the incumbent platform—particularly where the platform offers subpar security in the name of “public safety.” For example, today Apple only offers unencrypted iCloud backups, which it can then turn over to governments that claim this is necessary for “public safety.” Apple could raise this defense to justify blocking third-party services from offering competing, end-to-end encrypted backups of iMessage and other sensitive data stored on an iPhone.

S. 2710 has similar problems. Sec 7. (6)(B) contains language specifying that the bill does not “require a covered company to interoperate or share data with persons or business users that…have been identified by the Federal Government as national security, intelligence, or law enforcement risks.” This would mean that Apple could ignore the prohibition against private APIs, and deny access to otherwise private APIs, for developers of encryption products that have been publicly identified by the FBI. That is, end-to-end encryption products.

I want those bills to pass, but I want those provisions cleared up so we don’t lose strong end-to-end encryption in our attempt to rein in the tech monopolies.

EDITED TO ADD (6/23): A few DC insiders have responded to me about this post. Their basic point is this: “Your threat model is wrong. The big tech companies can already break end-to-end encryption if they want. They don’t need any help, and this bill doesn’t give the FBI any new leverage they don’t already have. This bill doesn’t make anything any worse than it is today.” That’s a reasonable response. These bills are definitely a net positive for humanity.

Posted on June 21, 2022 at 6:34 AM

Hertzbleed: A New Side-Channel Attack

Hertzbleed is a new side-channel attack that works against a variety of microprocessors. Deducing cryptographic keys by analyzing power consumption has long been an attack, but it’s not generally viable because measuring power consumption is often hard. This new attack measures power consumption by measuring time, making it easier to exploit.

The team discovered that dynamic voltage and frequency scaling (DVFS)—a power and thermal management feature added to every modern CPU—allows attackers to deduce the changes in power consumption by monitoring the time it takes for a server to respond to specific carefully made queries. The discovery greatly reduces what’s required. With an understanding of how the DVFS feature works, power side-channel attacks become much simpler timing attacks that can be done remotely.

The researchers have dubbed their attack Hertzbleed because it uses the insights into DVFS to expose, or bleed out, data that’s expected to remain private.

[…]

The researchers have already shown how the exploit technique they developed can be used to extract an encryption key from a server running SIKE, a cryptographic algorithm used to establish a secret key between two parties over an otherwise insecure communications channel.

The researchers said they successfully reproduced their attack on Intel CPUs from the 8th to the 11th generation of the Core microarchitecture. They also claimed that the technique would work on Intel Xeon CPUs and verified that AMD Ryzen processors are vulnerable and enabled the same SIKE attack used against Intel chips. The researchers believe chips from other manufacturers may also be affected.
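A minimal sketch of the kind of remote measurement this relies on, with a hypothetical URL and payloads: time the same crafted query many times for two different inputs and compare the distributions. The hard part of the real attack, choosing inputs that drive the target's power (and therefore its DVFS frequency) differently, is not shown.

```python
import statistics
import time
import urllib.request

def time_query(url: str, payload: bytes, trials: int = 1000) -> list[float]:
    """Measure wall-clock response times for repeated identical requests."""
    timings = []
    for _ in range(trials):
        start = time.perf_counter()
        urllib.request.urlopen(url, data=payload).read()
        timings.append(time.perf_counter() - start)
    return timings

# Hypothetical endpoint and inputs; the attacker compares timing distributions
# for inputs expected to cause different power draw on the server.
a = time_query("http://target.example/decapsulate", b"chosen-ciphertext-A")
b = time_query("http://target.example/decapsulate", b"chosen-ciphertext-B")
print(statistics.median(a), statistics.median(b))  # a consistent gap leaks information
```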

Posted on June 20, 2022 at 6:23 AM

Java Cryptography Implementation Mistake Allows Digital-Signature Forgeries

Interesting implementation mistake:

The vulnerability, which Oracle patched on Tuesday, affects the company’s implementation of the Elliptic Curve Digital Signature Algorithm in Java versions 15 and above. ECDSA is an algorithm that uses the principles of elliptic curve cryptography to authenticate messages digitally.

[…]

ECDSA signatures rely on a pseudo-random number, typically notated as K, that’s used to derive two additional numbers, R and S. To verify a signature as valid, a party must check the equation involving R and S, the signer’s public key, and a cryptographic hash of the message. When both sides of the equation are equal, the signature is valid.

[…]

For the process to work correctly, neither R nor S can ever be a zero. That’s because one side of the equation is R, and the other is multiplied by R and a value from S. If the values are both 0, the verification check translates to 0 = 0 X (other values from the private key and hash), which will be true regardless of the additional values. That means an adversary only needs to submit a blank signature to pass the verification check successfully.

Madden wrote:

Guess which check Java forgot?

That’s right. Java’s implementation of ECDSA signature verification didn’t check if R or S were zero, so you could produce a signature value in which they are both 0 (appropriately encoded) and Java would accept it as a valid signature for any message and for any public key. The digital equivalent of a blank ID card.
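To make the missing check concrete, here is a toy model of the flawed structure described above; it is not real ECDSA and not Java's actual code. The right-hand side of the comparison is R multiplied by something, so a (0, 0) signature degenerates to 0 == 0. The fix is the range check on R and S that a conforming verifier must perform before any curve arithmetic.

```python
# Toy model of the verification structure described above. NOT real ECDSA:
# curve_term() stands in for the elliptic-curve arithmetic involving S, the
# public key, and the message hash.

N = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 group order

def curve_term(s: int, pub_key: int, msg_hash: int) -> int:
    # Placeholder for the real point arithmetic; its exact value is irrelevant
    # to the bug being illustrated.
    return (s + pub_key + msg_hash) % N

def buggy_verify(r: int, s: int, pub_key: int, msg_hash: int) -> bool:
    # Mirrors the flawed shape: R compared against R times something.
    return r % N == (r * curve_term(s, pub_key, msg_hash)) % N

def fixed_verify(r: int, s: int, pub_key: int, msg_hash: int) -> bool:
    if not (1 <= r < N and 1 <= s < N):  # the check that was missing
        return False
    return buggy_verify(r, s, pub_key, msg_hash)

# A "blank" (0, 0) signature passes the buggy check for any key and any message:
print(buggy_verify(0, 0, pub_key=12345, msg_hash=67890))  # True
print(fixed_verify(0, 0, pub_key=12345, msg_hash=67890))  # False
```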

More details.

Posted on April 22, 2022 at 7:09 AM
