307-Digit Number Factored

We have a new factoring record: 307 digits. It’s a special number—2^1039 – 1—but the techniques can be generalized:

“Is the writing on the wall for 1024-bit encryption?” “The answer to that question is an unqualified yes,” says Lenstra. For the moment the standard is still secure, because it is much more difficult to factor a number made up of two huge prime numbers, such as an RSA number, than it is to factor a number like this one that has a special mathematical form. But the clock is definitely ticking. “Last time, it took nine years for us to generalize from a special to a non-special, hard-to-factor number (155 digits). I won’t make predictions, but let’s just say it might be a good idea to stay tuned.”

I would hope that RSA applications moved away from 1024-bit security years ago, but for those who haven’t yet: wake up.

EDITED TO ADD (5/21): That’s 1023 bits. (I should have said that.)

Posted on May 21, 2007 at 10:26 AM


Ryan May 21, 2007 11:02 AM

That’s a 1020-bit number. Very surprising.

Which raises the question: Is the writing on the wall for RSA in general?

Doesn’t a >2048-bit public key become impractical in many protocols? DomainKeys and other schemes require distribution of public keys via DNS.

The patent uncertainty surrounding ECC is not insurmountable. Is NIST planning any public-key standard revisions, along the lines of the AES and SHS workshops?

Jay May 21, 2007 11:20 AM

A little too obscure for me Bruce, I’m not a cryptanalyst, but enlightening, I use RSA products at work, so this is nice to know


Anonymous May 21, 2007 11:25 AM

Well, my key is the largest available in non-modified GPG.

I’m fine except for the fact that the signature is extremely bulky. Adopting an even larger key would be serious trouble…
Maybe the time has come to actually consider ECC? What do you think about it, Bruce?

ivan May 21, 2007 11:37 AM

FTA: “18 months of calculations that took over a half century of computer time. ”

Does anyone know what “a half century of computer time” means?

Nevo May 21, 2007 11:43 AM


It means the number of computers used times the amount of time they were used equals 500 years.

Stephen Smoogen May 21, 2007 11:46 AM

It is poorly worded and thus means little. In most cases you would state something like that relative to a reference computer (e.g., as if one computer had done it, versus a cluster). However, without the reference computer listed, it is not very useful info.

Floor May 21, 2007 11:55 AM

Computer time is the amount of time a program is actually running on the processor. If a program is running on 10 processors simultaneously, it can “do” 10 minutes of “computer time” in one real minute. But a computer program also does not normally spend all its time “on” the processor. Most programs are idle most of the time, waiting for something to do. So if, for example, your music player uses 3% CPU power, it is only using 1.8 seconds of processor time per minute.

It is all a little bit more complicated than this, but this should give you a basic idea.
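The explanation above boils down to simple multiplication. A minimal sketch in Python; the machine count below is made up for illustration, since the article does not say how many computers were actually used:

```python
# CPU time ("computer time") is wall-clock time multiplied by the number
# of processors doing work, scaled by how busy they actually are.
# The machine count here is illustrative, not the real cluster size.
def cpu_years(machines, wall_years, utilization=1.0):
    """Total CPU-years accumulated by `machines` over `wall_years`."""
    return machines * wall_years * utilization

# ~333 fully busy machines over 18 months accumulate ~500 CPU-years:
print(cpu_years(machines=333, wall_years=1.5))   # → 499.5

# A music player at 3% CPU uses 1.8 seconds of processor time per minute:
print(round(0.03 * 60, 2))                       # → 1.8
```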

Clive Robinson May 21, 2007 12:16 PM

Although 1024-bit RSA might be for those who are, as Bruce puts it, “asleep,” there are some engineering solutions that just do not have the space for many more bits.

So what does switching to elliptic curves get you (apart from problems with modular inversion and crypto coprocessors)?

Supposedly the same strength as RSA but with one sixth the bits (or whatever other number is the latest quote today) oh and six times the speed.

However one thing that is known about these sorts of problems/solutions is that in general when a solution to one problem is found it can fairly quickly be reworked to other similar problems.

What has become clear is that although factoring was considered for many years to be a mathematical backwater, its use in crypto revitalised the field. So if ECC becomes prevalent, how long before it too becomes subject to the same scrutiny?

So the next question is: how long before some maths undergraduate thinks “hmmm” and does something not too dissimilar on other crypto systems?

Just an idle thought…

a_Lex May 21, 2007 12:21 PM

Bruce’s key size does not matter 🙂

What matters is that most users will have to increase their keysizes pretty soon.

And using a key of 4096 bits HURTS.

So we will have to move towards ECC. And we better start moving NOW.

Ryan May 21, 2007 12:45 PM

I just created a 2048-bit RSA keypair with GNUpg, and the public key is 1677 bytes after being compressed with gzip (to remove the effects of base64). A binary sig made by the same key is 2978 bytes.

Ouch. That’s unworkable for a lot of applications, whether DNS or signing row data in a database of any size.

greg May 21, 2007 2:30 PM

The point is, unworkable or not, 1024 bits is not enough for a signature scheme that is supposed to produce certs that are valid for a decent amount of time. Period!

ECC is simply a patent minefield. It’s just crazy what can be patented, really (cf. RSA). Furthermore, its “increased” security is not proven; it’s just that at this point we find it harder to work out discrete logs over elliptic-curve fields. It could be just as easy as discrete logs over G_p for all we know.

You can use subgroups with a traditional discrete-log-based scheme as another alternative.

But consider the application and the cost of an attack: what are the payoffs?

I have used 2048-bit keys for a while now, and it really is not a problem. I know smart cards have issues. But then what do you want: true security, or a cheap system?

Generally you can’t have both.

Just work out what its worth. Don’t spend $1000 to protect $10.

But if you want your certificate to be valid in 20 years time? You need big key sizes.

Bruce Schneier May 21, 2007 2:40 PM

“A little too obscure for me Bruce, I’m not a cryptanalyst, but enlightening, I use RSA products at work, so this is nice to know.”

And then there are those of you who complain that they want more cryptography posts.

aikimark May 21, 2007 3:38 PM


  • We probably find crypto stories more interesting than average.
  • We want to know what the “Oracle” thinks of these problems.
  • Those of us that advise others need to know strengths and weaknesses.
  • Where else can we do such cutting edge nerdy stuff?


Paul Renault May 21, 2007 4:52 PM

Steven, ivan, Bruce, et al:

My guess is that the 500 years of computing time is based on a P-90 computer. GIMPS (the Great Internet Mersenne Prime Search) has been searching for Mersenne primes (that is, primes of the form 2^N−1) since January 1996, when the P-90 was pretty much the typical/standard CPU in use. They use the P-90 as a measuring stick.

According to the “Top Producers” list, at #2014, I’ve logged some 178 years of P-90 calculations since I started running their software on my machine(s) about 11 years ago. I was running a PII-166 when I started, switched to a PIII-1G about 6 years ago, and have been running a PIV-3.4G for about two years now, along with the still-running PIII-1G.

According to the updated-hourly Status Page http://www.mersenne.org/primenet/status.shtml the subset of slower machines, which are dedicated to factoring the non-prime Mersennes, performs some 64 years of P-90 factoring PER DAY.

So, this subset of slower machines could factor that 307 digit number in about 8 days.

Yes, I do have a 4096-bit PGP/GPG key.

Now I know why…

Ryan May 21, 2007 6:10 PM

@Andy: I was also unsure as to why the GPG .sig is so large… I’m not familiar with all the metadata included with a GPG sig. I always thought it was just the key ID plus the signature in binary form.

We’re trying to do some GPG code-signing in our version control system, and on our production servers, in order to detect tampering and provide traceability. This means signing many versions of 500K+ small files. I think I am going to be seriously studying all of the more esoteric GPG options.

Freddo May 21, 2007 6:29 PM

I am curious to know whether any new factoring techniques were used or whether this feat was essentially due to the special structure of the number.

The current record is for RSA-200 in 2005, a 663 bit (=200 decimal digits) number. This took 18 months elapsed time on a cluster of eighty 2.2 Ghz Opteron computers, connected through a Gigabit network (in contrast RSA 158 took 3 months in 2002 with the same hardware). See http://www.cwi.nl/research/2005/24teRieleEs.pdf

Let us suppose that each additional bit requires the factoring time to increase by a factor of just 1.1 – I believe this is better than what can be achieved even with the best factoring methods. As a rough guide, let’s also assume that we are calculating on the same setup as Jens Franke. In this case we would get:

704 bits 75 years
768 bits 33,000 years
896 bits 6.7 billion years
1024 bits 10^15 years
2048 bits 10^57 years
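The table above can be reproduced with a one-line model, using only the stated assumptions (RSA-200 at 663 bits took 18 months; each extra bit multiplies the time by 1.1):

```python
# Extrapolate factoring time from RSA-200 (663 bits, 1.5 years), assuming
# each additional bit multiplies the running time by 1.1 -- the commenter's
# assumption, not a property of any real factoring algorithm.
BASE_BITS, BASE_YEARS, PER_BIT = 663, 1.5, 1.1

def years_to_factor(bits):
    return BASE_YEARS * PER_BIT ** (bits - BASE_BITS)

for bits in (704, 768, 896, 1024, 2048):
    print(f"{bits} bits: {years_to_factor(bits):.2g} years")
```

(The fixed per-bit factor is the weak link here: the number field sieve is sub-exponential, so the real growth per bit falls as the numbers get larger.)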

Of course there are likely to be big advances in computing power, and there may also be improvements in factoring methods, but I think these calculations show that we are still a long way from factoring 1024 bit RSA keys.

hhh May 21, 2007 11:42 PM

Freddo: Actually, Thorsten Kleinjung, one of the authors, gave an estimate for the time needed to factor a random 700 bit number: Several months, not 75 years. But he thinks that it will take “many years” until RSA-1024 will be cracked. (From http://www.heise.de/newsticker/meldung/89996 or http://www1.uni-bonn.de/pressDB/jsp/pressemitteilungsdetails.jsp?detailjahr=2007&detail=160, both in German.)

Your assumption that the calculation time is exponential (with a factor of 1.1 per bit) is way too simple. To my knowledge, Franke and Kleinjung have published an asymptotic formula for the required time of at least one of the steps of their method; it was not exponential, but more complicated.
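For reference, the heuristic running time usually quoted for the number field sieve (the standard textbook form, not necessarily the exact formula Franke and Kleinjung published) is sub-exponential:

```latex
% Heuristic NFS running time for factoring N:
L_N\!\left[\tfrac{1}{3},\, c\right]
  = \exp\!\Big( (c + o(1)) \, (\ln N)^{1/3} (\ln \ln N)^{2/3} \Big),
% with c = (64/9)^{1/3} \approx 1.923 for the general NFS
% and  c = (32/9)^{1/3} \approx 1.526 for the special NFS.
```

This grows faster than any polynomial in the bit length but slower than any fixed per-bit exponential, which is why a constant factor per bit over- or under-shoots depending on the range.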

In his preprint “Cofactorisation strategies for the number field sieve and an estimate for the sieving step for factoring 1024-bit integers” (http://www.math.uni-bonn.de/people/thor/cof.ps), Kleinjung used a different method (not extrapolation of asymptotic formulae) to arrive at the following estimate:

12 million (!) PCs with 2.2GHz Athlon 64 CPU and 2 GB main memory would need less than one year to complete the sieving step for RSA-1024.

In contrast, the cost for the sieving step in the factorization which has been achieved now equalled only 100 such Athlon64 PCs running for a year. (Actually, on the calendar, it took 6 months, of 11 months altogether.) See a copy of the detailed announcement here: http://www.heise.de/newsticker/foren/go.shtml?read=1&msg_id=12799757&forum_id=117643

MichaelM May 22, 2007 1:36 AM


Give me meat!

I’m with those who want more crypto posts. If I want fluff and fun, I’ll head to slashdot. 🙂


Greg May 22, 2007 1:37 AM

Compare these time estimates (to factor n-digit RSA numbers) with estimates of the past (I think 10,000 years for 300 digits was one). You need big margins with this sort of thing. We always get better, and so far we do better than we think we could.

In particular, the GNFS has subexponential running time. That means you need to make numbers bigger than you perhaps think.

At any rate, for most of us the weakness is keeping the secret key secret, so it’s probably not such a big deal. Unless you are a cert provider.

20-50 years is a very, very long time in computing. For comparison, it’s about as long as we have had modern computers.

Hen May 22, 2007 4:30 AM


12 million (!) PCs with 2.2GHz Athlon 64 CPU and 2 GB main memory would need less than one year to complete the sieving step for RSA-1024.

As far as I remember, sieving is only the first step. What about the matrix-reduction (or whatever the proper name is) step? It’s tightly coupled and much harder to perform on PCs.

ouah May 22, 2007 10:43 AM

EDITED TO ADD (5/21): That’s 1023 bits. (I should have said that.)

I cannot find the number which has been factored but:

  • the smallest 307 digit number is 1017 bits
  • the biggest 307 digit number is 1020 bits


(thx to the bc unix command)
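The same bounds fall out of Python’s arbitrary-precision integers, without needing bc:

```python
# Bit lengths of the smallest and biggest 307-digit numbers:
smallest = 10**306        # the smallest 307-digit number
biggest = 10**307 - 1     # the biggest 307-digit number
print(smallest.bit_length())   # → 1017
print(biggest.bit_length())    # → 1020
```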

ouah May 22, 2007 10:56 AM

From http://www.crypto-world.com/announcements/m1039.txt

we compute the 307-digit number:


in base 2 :


=> 1017 bits


Chris S May 22, 2007 2:41 PM


Here’s a tip. When posting 1017 character “words”
(a number in this case), do something to break
them into groups. Because, now, this page is
rendered as very wide, and all the paragraphs are
one line high, requiring ugly manual side scrolling
to read.

I’ve deliberately put line breaks in this post to let
others more easily read it.

(P.S. Not to mention that the ‘post’ and ‘preview’
buttons are halfway across town from the text
entry box!!)

psilo May 22, 2007 5:18 PM


I agree that’s annoying, but it’s really not his fault. The blog application should be able to render such unexpected things more gracefully.

aikimark May 22, 2007 5:49 PM


“- the biggest 307 digit number is 1020 bits”

Can I assume you are rounding?
I calculated the base 2 log of .999999999999E307 as

I incorrectly calculated the
bit value by starting with 9.999999999999E307.

Apologies to all blog readers and to Bruce.

Freddo May 22, 2007 9:52 PM

As I understand it from http://www.heise.de/newsticker/meldung/89996, the factors of the 307 digit number in question had 80 and 227 digits respectively. Factoring should be much simpler when the sizes are so dissimilar. So perhaps their achievement is not such a big deal.

Or am I barking up the wrong tree again?

Freddo May 22, 2007 11:30 PM

Also, this was the cofactor of a Mersenne number, which apparently makes it easier too. How much easier?

Greg May 23, 2007 6:04 AM


Quite a lot easier. The point is not that 1024-bit keys can be factored today, but that they are not going to last long. So if it is something that needs shelf life, you want 2048 bits at a minimum (like signatures). For real certs (SSL is not really real with the current usage patterns) you should really be using 4096.

For applications that run on PCs there really is no reason not to use at least 2048-bit keys.

This goes for DL problem based security as well.

Tom Womack May 23, 2007 7:31 AM

An announcement with a lot of detail in it was sent out to the NMBRTHRY mailing list yesterday:


NFS consists of two parts – a sieving step and linear algebra. Sieving doesn’t require anything special; the linear algebra in this case required four clusters, each of 36 contemporary CPUs (three used Pentium-D 3GHz and the fourth used Xeon 5150 2.66GHz), for about two months.

For a 768-bit RSA key, the sieving could be done without too much pain in a few months on a couple of thousand computers each with 512M of memory; and these could be distributed around the world.

The linear algebra phase would be actually difficult; if my extrapolations are correct (and they’re from six data points some way below the required level, so not all that likely to be correct), the job would be about forty times harder, and the smallest system that could contribute (you have to be able to write down a matrix of about half a billion rows each with about 200 entries) would be a cluster with half a terabyte of total memory. Embarrassingly, the best-known algorithms only double in speed when you quadruple the number of computers used within your cluster, though with N whole clusters the ‘block Wiedemann approach’ does run about N times as fast.
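The scaling described above can be put into a toy formula. The square-root law and the constants below are my paraphrase of that description for illustration, not measured data:

```python
import math

# Within one cluster, speed grows only with the square root of the machine
# count (quadrupling machines doubles speed); across N independent clusters,
# block Wiedemann adds a further roughly linear factor of N.
def relative_speed(machines_per_cluster, clusters):
    return clusters * math.sqrt(machines_per_cluster)

print(relative_speed(4, 1))   # → 2.0 (4x machines, one cluster: only 2x)
print(relative_speed(1, 4))   # → 4.0 (4 whole clusters: about 4x)
```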

If you had twenty clusters, each of 196 computers with 4GB each, you could do the job in a couple of months. I don’t. The NTT research labs in Japan don’t. EPFL in Switzerland doesn’t. Cambridge University’s high-performance compute facility is about half the desired size, and has the whole of Cambridge University clamouring to use it.

That’s to handle a 768-bit RSA key. A 768-bit SNFS job is much easier – a couple of months on a couple of high-end Mac Pros with 4GB of memory would suffice.

For a 1024-bit RSA key, both steps are sufficiently difficult that they’d need to be done with custom-made chips rather than with desktop computers. Several academic groups have done designs (http://theory.csail.mit.edu/~tromer/papers/scalinear.pdf for the linear algebra step, http://eprint.iacr.org/2006/403.pdf for the sieving step), generally with constraints of the form that no chip in the system may be very much bigger than the largest chips ever manufactured, and with unrealistically optimistic cost models; they tend to deduce that the cost of factorising one 1024-bit number a year is around a hundred million dollars.

Tom Womack May 23, 2007 7:41 AM

@Freddo: there are methods which are good at finding ‘small’ factors; and one of these was applied to (2^1039-1)/5080711. Quite comprehensively applied; they ran, in the idle time of what must have been most of the sufficiently-powerful machines at NTT, a quarter-million trials, each of which took about two hours on an Athlon64 with 4GB of memory, for a total of about 125 computer-years.

This would have had a 95% chance of finding a 65-digit factor, a 50% chance of finding a 70-digit factor, a 10% chance of finding a 75-digit factor, and a 2% chance of finding the 80-digit factor that turned out to exist.

So, it’s not that having factors of different sizes makes the job easier, it’s that having a small (less than 70 digits) factor allows you to apply a different algorithm. Obviously, if the number turns out not to have a small factor, all the time you spent running the different algorithm is wasted.

beachpig May 23, 2007 5:21 PM

@Ryan, ref DomainKeys you only need these to be valid for the expected delivery time of an email. So make a cron job and replace your DomainKeys every day or so. Maybe the DNS will need the current and previous DomainKeys for a clean transition. Once the break time for RSA gets close to this figure I’m sure DomainKeys will support another algorithm.

Anonymous May 24, 2007 6:40 AM

Thanks for posting the details Tom Womack, my guesses weren’t way off, it turns out.

I dug out a GIMPS page that compared the times for a wide range of CPUs: the Pentium D 3GHz mentioned in the release is roughly 118 times faster than the benchmark P90 that GIMPS uses.

According to the link you provided, the calculation took 3 P4D-3GHz years. Or, 356 P90 years, or 5.6 GIMPS-factoring-machines-only days.

If the entire ‘zombie army’ of GIMPS computers, at nearly 74,000 machines, were set to work on the problem, the time required to factor this 307-digit number could be divided by 36 – 3 hours, 45 minutes.

Yup, 1024 isn’t big enough.

Marc May 25, 2007 2:21 AM

The number factored is 2^1039-1, so it’s actually 1039 bits long. A small factor was already known, but that doesn’t help in this case (Special Number Field Sieve).

sopadeajo June 22, 2008 7:21 PM

Why is a Mersenne number (when composite) easier to factor than an RSA number?

because of this:

              GP/PARI CALCULATOR Version 2.3.3 (released)
       i686 running cygwin (ix86/GMP-4.2.1 kernel) 32-bit version

compiled: Dec 23 2007, gcc-3.4.4 (cygming special, gdc 0.12, using dmd 0.125)
(readline v5.2 enabled, extended help not available)

                 Copyright (C) 2000-2006 The PARI Group

PARI/GP is free software, covered by the GNU General Public License, and

Type ? for help, \q to quit.
Type ?12 for how to get moral (and possibly technical) support.

parisize = 4000000, primelimit = 500000
? factor(5080710)
%1 =
[2 1]

[3 1]

[5 1]

[163 1]

[1039 1]

5080711 is a prime factor of the Mersenne number 2^1039−1.


And here is a theorem (this is the utility of theorems in mathematics):

Any prime factor of a Mersenne number 2^p−1, with p prime, is of the form 2kp+1.
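The theorem is easy to check for the known small factor; a quick Python verification, using p = 1039 and q = 5080711 from the announcement:

```python
# Every prime factor q of 2^p - 1 (p prime) satisfies q = 2*k*p + 1,
# so trial division only needs to try candidates of that form.
p, q = 1039, 5080711
assert (q - 1) % (2 * p) == 0      # q has the form 2*k*p + 1
print((q - 1) // (2 * p))          # → 2445, i.e. q = 2*2445*1039 + 1
assert pow(2, p, q) == 1           # q really divides 2^p - 1
```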

Willie June 12, 2009 1:49 PM

So, now in 2009, what would be advisable key sizes to be on the safe side, without overshooting hugely, say for the next 10 years?

Mainly I plan to use it to sign some mails, and eventually to encrypt some documents so only the other party could open them, and to encrypt files on my disks so I am safe if I lose the equipment. Perhaps I will sign code as I am starting to develop, but nothing really complicated (so far).

Ideally I want to avoid eavesdropping by standard agencies, but without going to the level of paranoia about the NSA and such.

I plan to use GPG (GNU Privacy Guard) unless you tell me there are some serious issues with using it as of 2009.

So far I am thinking of a set of RSA keys: a 2048-bit one for signing and a 4096-bit one for encrypting, with SHA-256 for digests and AES-256 for the cipher.

Does that make sense? Or with just 2048 am I all but safe, quantum computers aside (which so far I wouldn’t care about)? Is 4096, thinking 10 years from now, advisable, or is it too much? Or perhaps it’s better to plan in 5-year periods?

I know there is no one answer to this; I would honestly like to hear more sound and solidly founded opinions.

I’ve kind of seen a “hole” in the key-lengths debate since 2007, so perhaps we non-scientists are missing some breakthrough that makes all this useless.

Thanks for your patience and help in advance.

Anonymous June 12, 2009 10:43 PM

Willie, cost-based security is a good measure. Very few people are worth even spending the money to crack a 768-bit, and probably even a 512-bit, key.

If you have issues where you have to hide from the feds today, good luck, because key size is the least of your worries.

Legal issues and implementation quality of crypto are something to get right.

The less you worry about eavesdropping, the better off you will be. In the USA, money and access to politics and power is the real game, if you need to play games.

I avoid eavesdropping by being innocent and having little of value.
Eavesdropping is everywhere, and there is nothing you can do.

Willie June 13, 2009 12:34 PM

Anonymous, thank you so much for your answer. Puts things into context.

I think 2048 is more than enough for signing and encrypting if all you want to do is protect your data within the private sector, so to speak, which is really what I need.

kwh May 29, 2010 5:16 PM

Yes, context is very important. If your stuff is THAT important that you are worried about armies of rooms of custom-made chips or FPGAs, then perhaps you should be more concerned with physical security (i.e. preventing the rubber hose ;->).

This is all assuming that the crypto algorithms don’t all contain some trap-door math to let the good guys get into things a little easier.

But it’s open source right?

Inspected your binaries lately at the assembly level?


Crypto is fun for those who get off on math, but as we know the real problems are always much more basic.

Jhon Doe December 5, 2014 1:47 AM

Not that this changes much the point made: we need to use longer keys.
To which I agree.

But it is NOT a number 307 decimal digits long, and it is NOT a number 1023 binary digits long.

I believe Bruce’s blog should have the correct numbers for digits and bits.

It is very easy to understand that (in binary):
(2^3) =1000 is 4 digits in length and
(2^3-1)=111 is 3 digits long.

That is:
a number of the form 2^n has (n+1) binary digits, and ((2^n)-1) is the biggest possible value for (n) binary digits.

Just to confirm, run this command in any Linux console that has “bc” available:

 a="$(bc -l <<<"obase=2;2^3-1")"; echo "length= ${#a} and number = $a"

The length in decimal could either be (approximately) calculated by logarithms:

bc -l <<<"l(2^1039-1)/l(10)"    =  312.770165  ~ 313 decimal digits.

Or found, and counted exactly:

a="$(BC_LINE_LENGTH=0 bc -l <<<"obase=10; 2^1039-1")"; echo "${#a}"; echo "$a"

313 digits long.

I hope this clears up any misunderstanding:

It is NOT 307: it is a number 313 decimal digits long.
It is NOT 1023: it is a number 1039 binary digits long.
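The same counts come out of Python, which handles big integers natively (equivalent to the bc commands above):

```python
# Digit and bit counts of the full Mersenne number 2^1039 - 1:
n = 2**1039 - 1
print(len(str(n)))      # → 313 decimal digits
print(n.bit_length())   # → 1039 bits
```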

Jhon Doe December 5, 2014 2:55 AM

Not that this changes much the point made: we need to use longer keys.
To which I agree.

But it is NOT a number 1023 binary digits long, sorry.

To exactly clear the issue of number of digits, we need to read the original finding as reported:
From http://www.crypto-world.com/announcements/m1039.txt

    We are pleased to announce the factorization of the 1039th Mersenne
    number (2,1039- in the Cunningham table) by SNFS.
    The factorization is:
    2^1039-1 = p7 * p80 * p227 where
    p7   = 5080711
    p80  = 5585366661.....  (a  80 decimal digits long number)
    p227 = 2075818194.....  (a 227 decimal digits long number)
    The factor 5080711 was already known.
    Very Important !!. --->^^^^^^^^^^^^^

So, what really was factored was the number { ((2^1039)-1)/5080711 }, which is (in bash):

f="$(BC_LINE_LENGTH=0 bc -l <<<"scale=0;(2^(1039)-1)/5080711")";echo -e "${#f}\n$f"

Exactly a number 307 decimal digits long,

and (exactly) 1017 binary digits long.

(Use this bash line to find exactly what it is in binary:
      bc -l <<<"scale=0;obase=2;(2^(1039)-1)/5080711"
)
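The same computation in Python confirms the cofactor’s size:

```python
# The number actually factored was the cofactor (2^1039 - 1) / 5080711:
m = (2**1039 - 1) // 5080711
assert m * 5080711 == 2**1039 - 1   # 5080711 divides exactly
print(len(str(m)))      # → 307 decimal digits
print(m.bit_length())   # → 1017 bits
```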

Hope this helps.
