No, RSA Is Not Broken

I have been seeing this paper by cryptographer Claus Peter Schnorr making the rounds: “Fast Factoring Integers by SVP Algorithms.” It describes a new factoring method, and its abstract ends with the provocative sentence: “This destroys the RSA cryptosystem.”

It does not. At best, it’s an improvement in factoring—and I’m not sure it’s even that. The paper is a preprint: it hasn’t been peer reviewed. Be careful taking its claims at face value.

Some discussion here.

I’ll append more analysis links to this post when I find them.

EDITED TO ADD (3/12): The latest version of the paper does not have the words “This destroys the RSA cryptosystem” in the abstract. Some more discussion.

Posted on March 5, 2021 at 10:48 AM • 18 Comments


Me March 5, 2021 12:29 PM


Obviously, if this factoring can be done in polynomial time it would be “bad” for RSA, but not necessarily terrible. I have personally seen the difference between n*lg(n) and n^2, and it is… substantial. If factoring can be done in, say, n^3 time, it would be less secure than 2^n, but perhaps it could still be “safe enough.” It likely still wouldn’t break the paradigm that the fastest way is around the encryption rather than through it.

I could be wrong, and this might be the beginning of the end for RSA; I’ll leave that for experts like Bruce here to determine.
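To put rough numbers on the polynomial-versus-exponential point above, here is a minimal Python sketch. The n^3 attack cost is purely hypothetical (it is not any real algorithm’s running time), chosen only to contrast polynomial growth with exponential growth at RSA-sized bit lengths:

```python
# Hypothetical comparison for an n-bit modulus: a polynomial-time attack
# costing n^3 steps vs. an exponential search costing 2^n steps.
for n in (128, 1024, 2048):
    poly = n ** 3          # about 8.6e9 steps at n = 2048: quite feasible
    expo = 2 ** n          # astronomically large for any realistic n
    print(f"n={n}: n^3 = {poly:.2e}, while 2^n has {expo.bit_length()} bits")
```

Even a modest-degree polynomial attack would be a catastrophe compared to the exponential wall defenders currently rely on, which is why “polynomial time” is the key phrase in any such claim.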

Edgar Gardner March 5, 2021 1:31 PM

Just to save people a click, the most important point from the StackExchange discussion is that if Schnorr could “destroy RSA”, he would have destroyed one of the RSA Challenge problems to prove it. He did not.

Clive Robinson March 5, 2021 1:54 PM

@ Me,

It likely still wouldn’t break the paradigm that the fastest way is around the encryption rather than through it.

With modern OSes and apps, that’s almost true even of old Julius’s little shift-by-three.

As an industry we really, really need to work on basic security in our consumer systems; it’s truly appalling at the best of times.

It’s clear that just about all consumer, and much commercial-grade, software out there is neither “fit for purpose” nor “fit for merchantability.” As a rough rule of thumb, the bigger the organization merchanting it, the worse the history, right up to the current crop of vulnerabilities.

But… I’m still going to try to read the Peter Schnorr preprint when it’s agreed which one is the most current. Why? Because sometimes I like a challenge especially when it appears unsolvable 😉

More seriously though, whilst the preprint may be cutting a trail in the wrong direction currently, it might give insight into other ways that will be more productive in the future.

I guess the real question for me at the end of the day is “Will I be able to enter the jungle?”, the jungle being what the preprint appears to be…

Cassandra March 5, 2021 5:11 PM

Well, the version on the Goethe University of Frankfurt website now (05 Mar 2021) is dated 04 Mar 2020, and thus is presumably later than the versions on the preprint server.

The text “This destroys the RSA cryptosystem” has gone from the presumed most recent version.

I hope the Internet Archive or other mechanism has been keeping a record.

We have

Version dated as “work in progress 31.10.2019”
uploaded to the preprint server 02 Mar 2021
with Keywords: Factoring integers, enumeration of short and close lattice vectors, prime number lattice.
The abstract on the preprint server, but not in the paper itself, contains the phrase “This destroyes[sic] the RSA cryptosystem.”

Version undated
uploaded to the preprint server 03 Mar 2021
with Keywords: Primal-dual reduction, SVP, fac-relation.
The abstract on the preprint server, and in the paper itself, contains the phrase “This destroys the RSA cryptosystem.”

Version dated “work in progress 04.03.2020” (Possibly a typo for 2021?)
available on the university website, downloaded by me today (05 Mar 2021)
with Keywords: Prime number lattice, Primal-dual reduction.
The abstract in the paper itself does not refer to destroying the RSA cryptosystem.

This last version still contains the phrase “Dabei ist e = 2.7182818284··· die Eulersche Zahl und π = 3.141592654··· die Kreiszahl.” (“Here e = 2.7182818284··· is Euler’s number and π = 3.141592654··· is the circle constant.”)

I have not done a diff on the papers, but they certainly appear to be changing.

MarkH March 5, 2021 5:27 PM

My understanding so far (disclaimer: the math in Schnorr’s paper is far beyond me):

• Schnorr is the Real Deal: a distinguished mathematician who has made sufficient contributions to the field of cryptography that I remember his name from my textbooks.

• The fastest demonstrated factoring algorithm for RSA moduli, called GNFS, looks for sets of numbers with a certain algebraic relationship to the modulus. [This is the sieving phase.] Each such set is a sort of “clue” to the factorization. When very many distinct sets have been found, they can be composed into a matrix (of truly gigantic dimensions) and the factorization found by solving the matrix. [This is the linear algebra phase.]

• If I’m following the concept, Schnorr proposes a system along roughly the same lines, which begins with sieving for distinct sets of numbers with a simple relationship to the integer to be factored. This could “break” RSA, if sieving can be done much more quickly than in GNFS, and/or the linear algebra matrix is much smaller.

• Schnorr (if I understand correctly) proposes that algorithms related to mathematical groups called lattices can greatly improve the speed of the search, by leading with high probability to sets with the desired property. I suppose a crude analogy would be sifting sand for grains of iron with a magnet, rather than by visual sorting.

• The link provided by Bruce above — to a short but pithy discussion thread — is the best source I’ve yet seen for comments coming from people who apparently have understanding of the subject matter. My reading of the critiques is that Schnorr seems to have made important mistakes, and that the theorems he has proved don’t go far enough to justify his conclusions.

• Even worse, the critiques suggest that viewed accurately, Schnorr’s approach is perhaps not an improvement on GNFS, and might actually be slower.

• If anyone factored a big semiprime using Schnorr’s strategy — even if the number were much smaller than the current RSA factoring record — then some meaningful assessment could begin as to whether this approach has any implications for the security of cryptosystems, or even whether it’s really faster than GNFS. If such a factorization had been done already, we would surely have heard of it!

• However, the lack of a demonstration doesn’t prove that the factoring technique is deficient. The software package used for most recent factoring records, CADO-NFS, has about 450,000 source lines (that’s 30% of emacs !!!!). A lot of that is complexity to optimize time-consuming processes; I don’t know how small a “demonstration” package could be. If the complexity needed to realize Schnorr’s factoring approach is comparable (and for all I know, it could be even worse), then he obviously can’t shake the software package out of his sleeve.

• What apparently has been done is some software meant to demonstrate the heart of Schnorr’s sieving technique.

• Schnorr has been making presentations on his concept since at least 2009. In my personal opinion, 11+ years is enough time for a small group to make some kind of significant demonstration software, if people believed that the idea is likely to work.

• Because this proposed factoring technique has been public for more than a decade, both specialists in lattice algorithms, and other mathematicians who focus on factoring algorithms in general, have had plenty of opportunity to study Schnorr’s ideas. I don’t see evidence that even one mathematician with relevant expertise has yet been persuaded.
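The “clue-combining” idea in the GNFS sketch above can be seen in miniature in Dixon-style factoring, the simpler ancestor of GNFS. This is a toy Python illustration of the congruence-of-squares core only (it is not Schnorr’s method, and the two “relations” are supplied by hand rather than found by sieving):

```python
from math import gcd

# Toy Dixon-style factorization of N = 1649: collect relations of the form
# x^2 ≡ (product of small primes) mod N, then combine them so the
# right-hand side is a perfect square.
N = 1649
# Two relations a sieve might find: 41^2 ≡ 32 and 43^2 ≡ 200 (mod N).
x1, r1 = 41, 41 * 41 % N   # r1 = 32  = 2^5
x2, r2 = 43, 43 * 43 % N   # r2 = 200 = 2^3 * 5^2
# Combined: (41*43)^2 ≡ 32 * 200 = 6400 = 80^2 (mod N), a congruence of squares.
x = x1 * x2 % N
y = 80
assert x * x % N == y * y % N
f = gcd(x - y, N)
print(f, N // f)           # -> 17 97, a nontrivial factorization of 1649
```

GNFS does the same thing at vast scale: the sieving phase gathers millions of such relations, and the linear algebra phase finds which subset multiplies to a square.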

Clive Robinson March 5, 2021 6:18 PM

@ MarkH,

if people believed that the idea is likely to work

It’s actually a variation on “Not Invented Here” syndrome, and mathematics has more than a few examples.

A couple straight off the top of my head:

1, Proof of Fermat’s Last Theorem (which Wiles finally did).

2, Proof of “PRIMES is in P”, now called the AKS primality test (published by Manindra Agrawal, Neeraj Kayal and Nitin Saxena).

In both cases mathematicians had assumed it was not possible to solve either, so did not bother looking.

But one thing we do know is that there is more than one rule of the form,

“If the base ten digits sum to zero mod nine, then the number is divisible by nine”

The practical question is how do we find them all, and the correct base to apply, efficiently. Obviously a “lookup table” is going to be a little large.
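The digit-sum rule Clive mentions fits in a couple of lines of Python. It works because 10 ≡ 1 (mod 9), so each base-ten digit contributes itself to the residue mod 9:

```python
# A base-10 number is divisible by 9 exactly when its digit sum is,
# because 10 ≡ 1 (mod 9) makes every digit contribute itself mod 9.
def divisible_by_9(n: int) -> bool:
    return sum(int(d) for d in str(abs(n))) % 9 == 0

assert divisible_by_9(729) and not divisible_by_9(730)
assert all(divisible_by_9(k) == (k % 9 == 0) for k in range(1, 1000))
```

The same trick gives the base-b rule for any divisor of b − 1, which is one family of the rules Clive is gesturing at.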

Dave March 5, 2021 8:12 PM

Schnorr has been tinkering with this since his retirement in 2011, you can trace back a series of papers to at least 2012. One of the very early versions had a student implement and run it on a toy problem and it worked, but only in the sense of “it works as intended” rather than “it can be used on RSA-sized numbers”.

The whole result is so unlikely that the initial reaction was that it was a hoax by someone else misusing a paper from Schnorr, until Schnorr confirmed it was legit, at which point people started looking for the error(s) in it.

So I think it’s a case of: Retired mathematician plays with X as a hobby for a decade and then makes a slip-up but is so excited by the result he gets that he rushes it out without first getting it checked.

MarkH March 5, 2021 10:44 PM


I have a different perspective on the attitudes of academic mathematicians toward (a) a mathematician’s choice of problem to work on, and (b) publication of unexpected results.

My understanding is that prior to the publication of “PRIMES is in P,” many workers in the specialized field of complexity theory had formed the opinion that no deterministic polynomial-time algorithm could be found for primality testing.

That being so, colleagues might have looked skeptically at Agrawal for working on the problem of finding such an algorithm (he was already in his 30s). His two young collaborators, however, were on the point of getting their first university degrees, so they weren’t taking any real risk to their reputations.

The situation with the Fermat conjecture (Fermat’s “Last Theorem” or FLT) was quite different. It suffered from a two-part stigma: (a) it was fiendishly hard, having resisted centuries of attempts to crack it; and (b) it was trivial, in the sense that no important mathematical question depended on whether FLT was valid.

For a mathematician to beat his head against this wall, when success could bring newspaper coverage but not mathematical fruits, was considered Bad Form. Wiles kept his desire to prove FLT secret, and didn’t do any serious work on it until …

Work by academic mathematicians Gerhard Frey and Ken Ribet showed that if a certain Very Important Conjecture could be proved, then FLT must be true: proof of Fermat would be a corollary.

That Very Important Conjecture was then known as the Taniyama-Shimura (or sometimes Taniyama-Weil) conjecture; now that it’s proven, it’s called the Modularity Theorem.

When Wiles was working on the problem, dozens (if not hundreds) of papers had been published in journals with statements similar in form to “if Taniyama-Shimura is true, then we can prove the following result …”

In other words, proof of that conjecture was going to be extremely consequential.

For Wiles, the Frey/Ribet breakthrough made all the difference: it transformed the difficulty of FLT from “fiendish” to “nobody has an idea of how to do that” [1] and its significance from “trivial” to “one of the most important unsolved problems in mathematics.”

Nobody professional in math or computer science has been ridiculed — as far as I’m aware — for serious work on factoring algorithms. The problem is both famous, and of obvious practical importance. Making progress on factoring might not raise your standing as a mathematician, but it gives you star status in the field of cryptography.

Nobody has proved any meaningful lower bounds on the cost of factoring. GNFS is the best yet discovered for RSA moduli — and no better alternative has been found for about 25 years.

However, there is simply no theoretical basis for supposing that GNFS is the best possible. Complexity theorists would be excited to see a big improvement in factoring, but such a breakthrough wouldn’t damage anybody’s theory or reputation.

There’s just no a priori reason for mathematicians to suppose that quicker factoring can’t be done. If you can do it … just show them the receipts! I predict that if your work is serious, they will take it seriously.


So I shift from choice of problem, to reception of the work.

A. In the case of “PRIMES is in P,” many mathematicians were able to follow the arguments and proofs of their paper. It had the added advantage that it wasn’t difficult to code a program running their algorithm.

Although their result was a big surprise, it was (if I recall those days correctly) accepted as valid within a matter of weeks.

B. Wiles worked on Taniyama-Shimura in total secrecy (until the very end, when he felt the need for some checking by a very select few). It was a very respectable problem, because it was so important; his obsessive secrecy was a function of his personal psychology, rather than reputational risk.

His first paper (containing a defect that would take him months to solve) was on the order of 100 pages in length, and its mathematics so specialized that if everyone on Earth who could understand it without lengthy study had gathered together, they could have ridden together in an elevator car.

Although Wiles’ announcement of a proof was a shock to his profession, his work was taken very seriously, and the necessary experts promptly went to work analyzing his paper statement by statement. I’m not aware of any mathematicians rejecting his claims out-of-hand.

C. A rather sad example is the claimed proof by Louis de Branges of Riemann’s zeta function conjecture (known far and wide as the Riemann Hypothesis or RH).

A poll of mathematicians would probably rank RH as the single most important unsolved problem in their discipline, so working on it is a respectable choice; but it has resisted proof (or disproof) for more than 150 years, so unless one is manifestly working on an approach that is both plausible, and not tried before, it’s very likely a waste of time.

de Branges’ paper is similar in length to Wiles’, and perhaps even more difficult to analyze. The people most qualified to evaluate it might need to devote many months of their working life to its examination.

Before making such an investment, they would probably need to see the “germ of an idea” — what new concept or approach the author has brought to bear, which can plausibly succeed where all others before have failed.

de Branges has not convinced his colleagues that this “germ of an idea” is present, so his paper remains in a sort of limbo, neither accepted nor shown to be false. I can assure you that many mathematicians despair that they will live to see a proof of RH, and would be delighted to learn of one.


By comparison, Schnorr’s paper is neither very long, nor (as far as I understand) does it contain math known only to a handful. Lattice algorithms seem to be fairly well-studied, so there might be dozens of professional mathematicians (and perhaps a similar number of grad students) able to follow the arguments in detail.

As of today, they don’t seem to be buying what Schnorr is selling. The window is still open for him to make his case, but he’s got to show his profession more than he has so far.

[1] The distinction is that although nobody expected a proof of Taniyama-Shimura, it had not resisted so many long and determined attacks as FLT had.

JonKnowsNothing March 5, 2021 11:41 PM

@MarkH @Clive

re: The value of trying

Thank you MarkH for the wonderful synopsis.

I remember documentaries about FLT and how it was “solved”; the ultimate analysis was that it was solved, but not the way Fermat would have solved it.

There can be many roads to discovery and not all of them successful. Even a failed attempt has value. And some failed attempts turn out not to be failures after all.

The Battle of Austerlitz
Addressing a wounded Russian soldier, Napoleon is recalled as saying:

“For having been defeated, one does not cease to be among the brave.”

Curious March 6, 2021 8:14 AM

Did anybody (the paper’s author, Bruce, or anyone else) even sketch out how, in any case, the security of the cryptosystem would be compromised?

Seems like the notion, at least, would be interesting to explore. Me being nothing like an expert on cryptography or security: perhaps there was some easily overlooked detail in some implementation that might cause a so-called catastrophic failure in the security of a cryptosystem?

If anything, what I think I’ve learned after all these years reading this blog is that the correct implementation of a security system is important to both understand and maintain, so that there aren’t any flaws in the past, present, or future implementation of something.

MarkH March 15, 2021 10:14 AM

I’ve found no “new news” on Schnorr factoring, which is hardly surprising.

I finally got around to looking at the github page by Léo Ducas for his “SchnorrGate” project:

Ducas is an academic mathematician working in the Netherlands. Lattices are his specialty, and he happens to be currently giving a course on lattices and cryptography.

What I learned:

• Not only has Schnorr been making polynomial-time claims for his factoring strategy since 2009 … he described this strategy in 1991, building on a 1975 (!) factoring paper by Brillhart and Morrison.

In other words, this idea ain’t new, and mathematicians have looked at it quite seriously.

• None of Schnorr’s papers claiming polynomial-time factoring has yet passed peer review.

• The essence of Schnorr’s idea (if I understand it) is that solving a specific lattice problem (SVP) provides an efficient way to search for the arithmetic relations which must be accumulated in order to factor the given integer.

Ducas was able (using a specialized software package) to quickly code Schnorr’s relation search algorithm, and test it experimentally. His experiments showed that the search for relations is much less fruitful than Schnorr calculates.

• On the basis of those experiments, Ducas concludes that in order to successfully factor, the size of the lattice problem would have to be scaled up by a really large factor to yield a sufficient quantity of relations.

With sufficiently big lattices for this to succeed, the strategy would almost certainly be slower than GNFS.
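For readers wondering what a “relation” is here: roughly, a pair of integers (u, v) such that both u and u − vN factor completely over a small prime base. Finding such pairs is the hard part — that is what Schnorr proposes to do via SVP, and what Ducas’s experiments probe — but checking one is easy. A minimal sketch, with a made-up toy modulus and factor base:

```python
# Check whether n is "smooth" over a small factor base, i.e., whether it
# factors completely into the given primes. The relation *search* (the hard,
# SVP-assisted part in Schnorr's proposal) is omitted; this only verifies
# one hand-picked candidate pair.
def is_smooth(n: int, base: list[int]) -> bool:
    for p in base:
        while n % p == 0:
            n //= p
    return n == 1

N = 77              # toy modulus
base = [2, 3, 5]    # tiny factor base
u, v = 80, 1        # 80 = 2^4 * 5, and 80 - 1*77 = 3
assert is_smooth(u, base) and is_smooth(u - v * N, base)
print("relation:", (u, v))
```

Ducas’s finding, in these terms, was that the SVP search yields far fewer smooth pairs than Schnorr’s analysis predicts, so the lattices would have to grow until the method loses to GNFS.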


As an algorithmic aside not relevant to factoring — or even cryptography! — I just learned about work from 2 years ago by Harvey and van der Hoeven proposing a novel multiplication algorithm.

Math-geek readers will be aware that the fastest practical method for multiplying Really Huge Numbers uses the Fast Fourier Transform (FFT). As far as I’m aware, numbers that big are never used in crypto, but this FFT-based multiplication is at the heart of (for example) the Great Internet Mersenne Prime Search (GIMPS).

Engineers are used to thinking of Fourier Transforms in one dimension, though 2D or even 3D FFT is sometimes useful. The multiplication technique used in number theory projects like GIMPS uses one-dimensional FFT.

A conjectured lower bound for the cost of multiplication is O(N log(N)), which is attained by the Harvey and van der Hoeven algorithm … but their multiplication requires computing FFTs in 1,729 dimensions!

This algorithm is only fastest for numbers too large to be even represented in the physical universe … so it’s probably not showing up in a software library any time soon.
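The link between multiplication and FFTs is that multiplying two numbers amounts to convolving their digit sequences and then propagating carries; FFT methods compute that convolution in roughly n log n steps instead of n². A toy Python version that does the convolution directly (no FFT), just to show the structure being accelerated:

```python
# Integer multiplication as digit convolution -- the structure that FFT-based
# methods (Schönhage-Strassen, Harvey-van der Hoeven) speed up. This toy
# version computes the convolution naively in O(n^2).
def multiply(a: int, b: int) -> int:
    da = [int(d) for d in str(a)][::-1]   # least-significant digit first
    db = [int(d) for d in str(b)][::-1]
    conv = [0] * (len(da) + len(db))      # digit-sequence convolution
    for i, x in enumerate(da):
        for j, y in enumerate(db):
            conv[i + j] += x * y
    result, carry = 0, 0                  # propagate carries
    for k, c in enumerate(conv):
        carry += c
        result += (carry % 10) * 10 ** k
        carry //= 10
    return result

assert multiply(12345, 6789) == 12345 * 6789
```

Swapping the two nested loops for a forward FFT, a pointwise product, and an inverse FFT is exactly the Schönhage-Strassen idea; Harvey and van der Hoeven’s contribution is pushing the dimensionality of that transform to shave the remaining log factor.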

Tea March 15, 2021 10:24 AM


At a high level, the security of the RSA algorithm depends on the idea that factoring large numbers is a hard problem but computing those numbers from their factors is easy.

To use RSA, you select a pair of large prime numbers and compute a public key from them which anyone can use to encrypt a message to you. You also compute a second key from those primes which you can use to decrypt those messages.

As long as it’s very hard for someone else to reverse engineer your secret primes from your public key, then RSA is secure. However if that can be done efficiently, then any attacker is able to use your public key to work out your private key and read messages sent to you.
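Tea’s description maps directly onto a toy example. Here is a minimal Python sketch using tiny textbook primes (real RSA uses primes of 1024+ bits and proper padding; this shows only the mechanics):

```python
# Toy RSA keygen, encryption, and decryption with tiny primes.
p, q = 61, 53
n = p * q                  # public modulus (3233); factoring n recovers p, q
phi = (p - 1) * (q - 1)    # only computable if you know p and q
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

msg = 42
cipher = pow(msg, e, n)    # anyone can encrypt with (e, n)
assert pow(cipher, d, n) == msg  # only the holder of d can decrypt
```

The security claim is visible in the comments: everything an attacker sees is (e, n), and recovering d requires phi, which in turn requires factoring n. An efficient factoring algorithm collapses that whole chain.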

MarkH March 16, 2021 2:39 AM


I’m familiar with the taxicab number story about Hardy and Ramanujan, but I’m not confident that I would have recalled the number.

In the event, I didn’t recognize it in the multiplication algorithm story, so thanks for pointing that out!

Skimming the paper, I saw that the authors did a lower-bound calculation for the required FFT dimension, and determined that any number greater than 1728 would suffice, so they chose 1729.

The paper ends with a brief discussion of how the application of a number of optimizations could adapt their multiplication to 9D FFTs, though at the cost of complicating the algorithm rather considerably.

I don’t see in their paper a calculation of the lower bound of integer sizes for their technique to run faster than the Schoenhage-Strassen algorithm using 1D FFT, so I don’t know what impact (if any) the proposed optimizations would have on that bound.



