This is all assuming that the crypto algorithms don't all contain some trap-door math to let the good guys get into things a little easier.

But it's open source right?

Inspected your binaries lately at the assembly level?

http://cm.bell-labs.com/who/ken/trust.html

Crypto is fun for those who get off on math, but as we know the real problems are always much more basic.

I think 2048 is more than enough for signing and encrypting if all you want to do is protect your data within the private sector, so to speak, which is really what I need.

---

If you have issues where you have to hide from the feds today, good luck, because key size is the least of your worries.

Legal issues and the implementation quality of crypto are the things to get right.

The less you have to worry about eavesdropping, the better off you will be. In the USA, money and access to politics and power is the real game, if you need to play games.

I avoid eavesdropping by being innocent and having little of value.

Eavesdropping is everywhere and there is nothing you can do about it.

Mainly I plan to use it to sign some mails, and eventually to encrypt some documents so only the other party can open them, and to encrypt files on my disks so I am safe if I lose the equipment. Perhaps I will sign code as I start to develop, but nothing really complicated (so far).

Ideally I want to avoid eavesdropping by run-of-the-mill agencies, but without going to the paranoia level of the NSA and such.

I plan to use GPG (GnuPG) unless you tell me there are some serious reasons not to use it as of 2009.

So far I am thinking of a pair of RSA keys: a 2048-bit one for signing and a 4096-bit one for encrypting, with SHA-256 for digests and AES-256 as the cipher.

Does that make sense? Or with just 2048 bits am I all but safe from quantum computers? (Which so far I wouldn't worry about.) Is 4096 bits, thinking 10 years from now, advisable, or is it too much? Or is it better to plan only in 5-year periods?

I know there is no one answer to this; I would honestly like to hear more sound, well-founded opinions.

I've seen a kind of hole in the key-lengths debate since 2007, so perhaps we non-scientists are missing some breakthrough that makes all this useless.

Thanks for your patience and help in advance.

---

5080711 - 1 = 2*3*5*163*1039

---

because of this:

GP/PARI CALCULATOR Version 2.3.3 (released)

i686 running cygwin (ix86/GMP-4.2.1 kernel) 32-bit version

compiled: Dec 23 2007, gcc-3.4.4 (cygming special, gdc 0.12, using dmd 0.125)

(readline v5.2 enabled, extended help not available)

Copyright (C) 2000-2006 The PARI Group

PARI/GP is free software, covered by the GNU General Public License, and

comes WITHOUT ANY WARRANTY WHATSOEVER.

Type ? for help, \q to quit.

Type ?12 for how to get moral (and possibly technical) support.

parisize = 4000000, primelimit = 500000


? factor(5080710)

%1 =

[2 1]

[3 1]

[5 1]

[163 1]

[1039 1]

5080711 is a prime factor of the Mersenne number 2^1039 - 1, and

5080711 - 1 = 2*3*5*163*1039

And this is a theorem (and this is the utility of theorems in mathematics):

Any prime factor of a Mersenne number (2^p-1) is of the form: 2*k*p+1
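As a quick sanity check, a few lines of Python verify both claims (the value k = 2445 is computed here, not taken from the post):

```python
# Any prime factor q of 2^p - 1 (p prime) has the form q = 2*k*p + 1.
# Check this for the factor 5080711 of 2^1039 - 1.
p, q = 1039, 5080711

# q really divides the Mersenne number 2^p - 1
assert pow(2, p, q) == 1

# q - 1 is divisible by 2*p, which gives the k of the theorem
k, rem = divmod(q - 1, 2 * p)
assert rem == 0
print(k)  # 2445, so 5080711 = 2*2445*1039 + 1
```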

---

What does it mean??

---

I dug out a GIMPS page that compares the times for a wide range of CPUs: the Pentium D 3GHz mentioned in the release is roughly 118 times faster than the benchmark P90 that GIMPS uses.

http://www.mersenne.org/bench.htm

According to the link you provided, the calculation took 3 P4D-3GHz years; or 356 P90 years, or 5.6 days for the GIMPS factoring machines alone.

If the entire 'zombie army' of GIMPS computers, at nearly 74,000 machines, were set to work on the problem, the time required to factor this 307-digit number could be divided by 36, down to about 3 hours, 45 minutes.
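A small sketch of this arithmetic, taking the thread's own figures (the 118x P90 ratio and 64 P90-years of factoring per day) as given assumptions:

```python
# Figures quoted in this thread (assumptions, not independently verified):
p4d_years = 3          # sieving time in Pentium D 3GHz years
p90_ratio = 118        # one P4D-3GHz is roughly 118x a P90
rate_per_day = 64      # P90-years of factoring GIMPS performs per day

p90_years = p4d_years * p90_ratio       # ~354 P90-years
days = p90_years / rate_per_day         # ~5.5 days on the factoring subset
hours_all = days * 24 / 36              # ~3.7 h if the whole army is 36x faster
print(round(days, 1), round(hours_all, 1))  # prints: 5.5 3.7
```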

Yup, 1024 isn't big enough.

This would have had a 95% chance of finding a 65-digit factor, a 50% chance of finding a 70-digit factor, a 10% chance of finding a 75-digit factor, and a 2% chance of finding the 80-digit factor that turned out to exist.

So, it's not that having factors of different sizes makes the job easier, it's that having a small (less than 70 digits) factor allows you to apply a different algorithm. Obviously, if the number turns out not to have a small factor, all the time you spent running the different algorithm is wasted.

---

http://listserv.nodak.edu/cgi-bin/wa.exe?A2=ind0705&L=nmbrthry&T=0&P=1019

NFS consists of two parts - a sieving step and linear algebra. Sieving doesn't require anything special; the linear algebra in this case required four clusters, each of 36 contemporary CPUs (three used Pentium-D 3GHz and the fourth used Xeon 5150 2.66GHz), for about two months.

For a 768-bit RSA key, the sieving could be done without too much pain in a few months on a couple of thousand computers each with 512M of memory; and these could be distributed around the world.

The linear algebra phase would actually be difficult; if my extrapolations are correct (and they're from six data points some way below the required level, so not all that likely to be correct), the job would be about forty times harder, and the smallest system that could contribute (you have to be able to write down a matrix of about half a billion rows each with about 200 entries) would be a cluster with half a terabyte of total memory. Embarrassingly, the best-known algorithms only double in speed when you quadruple the number of computers used within your cluster, though with N whole clusters the 'block Wiedemann approach' does run about N times as fast.

If you had twenty clusters, each of 196 computers with 4GB each, you could do the job in a couple of months. I don't. The NTT research labs in Japan don't. EPFL in Switzerland doesn't. Cambridge University's high-performance compute facility is about half the desired size, and has the whole of Cambridge University clamouring to use it.

That's to handle a 768-bit RSA key. A 768-bit SNFS job is much easier - a couple of months on a couple of high-end Mac Pros with 4GB of memory would suffice.

For a 1024-bit RSA key, both steps are sufficiently difficult that they'd need to be done with custom-made chips rather than with desktop computers. Several academic groups have done designs (http://theory.csail.mit.edu/~tromer/papers/scalinear.pdf for the linear algebra step, http://eprint.iacr.org/2006/403.pdf for the sieving step), generally with constraints of the form that no chip in the system may be very much bigger than the largest chips ever manufactured, and with unrealistically optimistic cost models; they tend to deduce that the cost of factorising one 1024-bit number a year is around a hundred million dollars.

---

Quite a lot easier. The point is not that 1024-bit keys can be factored today, but that they are *not* going to last long. So if it is something that needs shelf life, you want 2048 bits at a minimum (e.g. for signatures). Real certs (SSL is not really real with the current usage patterns) should really be using 4096.

For applications that use PCs there really is no reason not to use at least 2048-bit keys.

This goes for DL problem based security as well.

---

Or am I barking up the wrong tree again?

"- the biggest 307 digit number is 1020 bits"

Can I assume you are rounding?

I calculated the base 2 log of .999999999999E307 as

1019.8319251304202407961880648532

I incorrectly calculated the

1023.1538532253076031440583842826

bit value by starting with 9.999999999999E307.
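For reference, both logarithms can be reproduced with Python's math module:

```python
import math

# log2 of the largest 307-digit number, ~0.999999999999e307
print(math.log2(0.999999999999e307))   # ~1019.8319251304
# Starting from ~9.999999999999e307 (a 308-digit number) gives instead
print(math.log2(9.999999999999e307))   # ~1023.1538532253
```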

Apologies to all blog readers and to Bruce.

---

I agree that's annoying, but it's really not his fault. The blog application should be able to render such unexpected things more gracefully.

---

Here's a tip. When posting 1017-character "words"
(a number in this case), do *something* to break
them into groups. Because, now, this page is
rendered as very wide, and all the paragraphs are
one line high, requiring ugly manual side scrolling
to read.

I've deliberately put line breaks in this post to let
others more easily read it.

(P.S. Not to mention that the 'post' and 'preview'
buttons are halfway across town from the text
entry box!!)

We compute the 307-digit number:

11594205740725730643
69807148876894640753
89979170201772498686
83535388224838599667
56608000609540800517
94720539932612302048
74402860435302861914
10144093453512334712
73967988850226307575
28093791660285551055
00425810771176177610
09413797078797380618
70084377771868286808
89844712822002935201
80607475545154137071
1023817

in base 2:

11010011010101100100
00001000010000100011
11110001000011001101
11110011110100001011
00001101011000001111
10011111011000011000
01111111101010100110
10001010011101101010
11111011101110011001
01011111101101101110
10110101000100000001
10101110001110000100
00010110110111111111
10111010101000010001
10000010001010100000
00010011110010000011
11100101100011111110
01110111010000111100
00101111001101110000
01000101010110000001
10000101010111001011
10101010000011110011
11001111110011001111
11010001001000010000
11101010110010000001
10001100010111011100
10100110110001010100
11110000101111100010
01100100100000011010
11010100100011011001
10111000111010100000
10011110101000000111
10001101101111001110
11010100001100010110
01111101110101011010
11111000001100100011
11101000111000101001
11100010011011110000
00011001111001001101
01101010011000100000
10111110010110100001
01010111111100100100
11110001011010101110
01001000100001010110
11010110111011100110
00110101010000011001
01000011111010010010
10000110000000011111
00111011101101111100
00011110011101100101
11010000011001001

=> 1017 bits

wow

---

I cannot find the number which has been factored, but:

- the smallest 307 digit number is 1017 bits

- the biggest 307 digit number is 1020 bits

wow

(thanks to the bc unix command)
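Python's arbitrary-precision integers give the same bounds without bc:

```python
# Smallest and largest 307-digit numbers
smallest = 10 ** 306
largest = 10 ** 307 - 1

print(smallest.bit_length())  # 1017
print(largest.bit_length())   # 1020
```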

http://www.ntt.co.jp/news/news07e/0705/070521a.html

---

>12 million (!) PCs with 2.2GHz Athlon 64 CPU and 2 GB main memory would need less than one year to complete the sieving step for RSA-1024.

As far as I remember, sieving is only the first step. What about the matrix reduction (or whatever its proper name is) step? It's tightly coupled and much harder to perform on PCs.

---

So 500 MIPS-years -> 15,750 trillion instructions. (Not that that figure really helps...)

---

In particular, the GNFS has *subexponential* running time. That means you need to make numbers bigger than you perhaps think.
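As a rough illustration of "subexponential", the standard heuristic GNFS cost estimate L_n[1/3, (64/9)^(1/3)] can be evaluated for different key sizes; constant factors are ignored here, so only the ratios between sizes are meaningful:

```python
import math

def gnfs_work(bits):
    """Heuristic GNFS cost L_n[1/3, (64/9)^(1/3)] for an n of the given size."""
    ln_n = bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

# Doubling the key size from 1024 to 2048 bits multiplies the work by
# roughly a billion -- far less than the 2^1024 a truly exponential
# algorithm would cost, which is why key sizes have to grow so fast.
print(gnfs_work(2048) / gnfs_work(1024))
```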

At any rate, for most of us the weakness is keeping the secret key secret, so it's probably not such a big deal. Unless you are a cert provider.

20-50 years is a very, very long time in computers. It's about as long as we have had modern computers, for comparison.

---

Give me meat!

I'm with those who want more crypto posts. If I want fluff and fun, I'll head to slashdot. :-)

thanks,

Michael

Your assumption that the calculation time is exponential (with a factor of 1.1 per bit) is way too simple. To my knowledge, Franke and Kleinjung have published an asymptotic formula for the required time of at least one of the steps of their method; it was not exponential, but more complicated.

In his preprint "Cofactorisation strategies for the number field sieve and an estimate for the sieving step for factoring 1024-bit integers" (http://www.math.uni-bonn.de/people/thor/cof.ps), Kleinjung used a different method (not extrapolation of asymptotic formulae) to arrive at the following estimate:

12 million (!) PCs with 2.2GHz Athlon 64 CPU and 2 GB main memory would need less than one year to complete the sieving step for RSA-1024.

In contrast, the cost for the sieving step in the factorization which has now been achieved equalled only 100 such Athlon64 PCs running for a year. (Actually, on the calendar, it took 6 months of 11 months altogether.) See a copy of the detailed announcement here: http://www.heise.de/newsticker/foren/go.shtml?read=1&msg_id=12799757&forum_id=117643

The current record is for RSA-200 in 2005, a 663-bit (= 200 decimal digits) number. This took 18 months elapsed time on a cluster of eighty 2.2 GHz Opteron computers, connected through a Gigabit network (in contrast, RSA-158 took 3 months in 2002 with the same hardware). See http://www.cwi.nl/research/2005/24teRieleEs.pdf

Let us suppose that each additional bit requires the factoring time to increase by a factor of just 1.1 – I believe this is better than what can be achieved even with the best factoring methods. As a rough guide, let's assume we are calculating on the same setup as Jens Franke. In this case we would get:

 704 bits              75 years
 768 bits          33,000 years
 896 bits     6.7 billion years
1024 bits           10^15 years
2048 bits           10^57 years

Of course there are likely to be big advances in computing power, and there may also be improvements in factoring methods, but I think these calculations show that we are still a long way from factoring 1024 bit RSA keys.
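The table can be reproduced directly from its stated assumptions (18 months for 663 bits, growth by a factor of 1.1 per extra bit):

```python
base_bits, base_years = 663, 1.5   # RSA-200: 18 months on the Opteron cluster
per_bit = 1.1                      # assumed cost growth per additional bit

for bits in (704, 768, 896, 1024, 2048):
    years = base_years * per_bit ** (bits - base_bits)
    print(f"{bits:4d} bits  {years:.2g} years")
```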

---

We're trying to do some GPG code-signing in our version control system, and on our production servers, in order to detect tampering and provide traceability. This means signing many versions of 500K+ small files. I think I am going to be seriously studying all of the more esoteric GPG options.

---

My guess is that the 500 years of computing time is based on a P-90 computer. GIMPS (the Great Internet Mersenne Prime Search) has been searching for Mersenne primes (that is, numbers of the form 2^N - 1) since January 1996, when the P-90 was pretty much the typical/standard CPU in use. They use the P-90 as a measuring stick.

http://mersenne.org

According to the "Top Producers" list, at #2014, I've logged some 178 years of P-90 calculations since I started running their software on my machine(s) about 11 years ago. I was running a PII-166 when I started, switched to a PIII-1G about 6 years ago, and have now been running a PIV-3.4G for about two years, along with the still-running PIII-1G.

According to the updated-hourly Status Page http://www.mersenne.org/primenet/status.shtml the subset of slower machines, which are dedicated to factoring the non-primes Mersennes, performs some 64 years of P-90 factoring PER DAY.

So, this subset of slower machines could factor that 307 digit number in about 8 days.

Yes, I do have a 4096-bit PGP/GPG key.

Now I know why...

---

LOL!

---

* We probably find crypto stories more interesting than average.

* We want to know what the "Oracle" thinks of these problems.

* Those of us that advise others need to know strengths and weaknesses.

* Where else can we do such cutting edge nerdy stuff?

:-)

---

And then there are those of you who complain that they want *more* cryptography posts.

ECC is simply a patent minefield. It's just crazy what can be patented, really (cf. RSA). Furthermore, its "increased" security is not proven; it's just that we find it harder at *this* point to work out discrete logs over elliptic fields. It could be just as easy as discrete logs over G_p for all we know.

You can use subgroups with a traditional discrete-log-based scheme as another alternative.

But consider the application and the cost of an attack: what are the payoffs?

I have used 2048-bit keys for a while now, and it really is not a problem. I know smart cards have issues. But then what do you want? True security, or a cheap system?

Generally you can't have both.

Just work out what it's worth. Don't spend $1000 to protect $10.

But if you want your certificate to be valid in 20 years' time, you need big key sizes.

---

Ouch. That's unworkable for a lot of applications, be it DNS or signing row data in any size of database.

---

What matters is that most users will have to increase their keysizes pretty soon.

And using a key of 4096 bits HURTS.

So we will have to move towards ECC. And we better start moving NOW.

So what does switching to elliptic curves get you (apart from problems with modular inversion and crypto coprocessors)?

Supposedly the same strength as RSA with one sixth the bits (or whatever other number is the latest quote today), oh, and six times the speed.
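For comparison, the key-size equivalences usually quoted from NIST SP 800-57 (approximate figures, and only as good as today's best-known attacks) pair the sizes like this:

```python
# Approximate security-level equivalences from NIST SP 800-57 (Part 1):
# symmetric security bits -> (RSA modulus bits, ECC field bits)
equivalents = {
    80:  (1024, 160),
    112: (2048, 224),
    128: (3072, 256),
    192: (7680, 384),
    256: (15360, 512),
}

for sec, (rsa, ecc) in equivalents.items():
    print(f"{sec:3d}-bit security: RSA-{rsa:<5d} ~ ECC-{ecc}")
```

Note that the RSA sizes grow much faster than the ECC sizes, which is the whole argument for switching; the "one sixth" figure only holds at some security levels.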

However, one thing that is known about these sorts of problems/solutions is that, in general, when a solution to one problem is found it can fairly quickly be reworked for other similar problems.

What has become clear is that although factoring was considered for many years to be a mathematical backwater, its use in crypto revitalised the field. So if ECC becomes prevalent, how long before it too becomes subject to the same scrutiny?

So the next question is: how long before some maths undergraduate thinks "hmmm" and does something not too dissimilar on other crypto systems?

Just an idle thought...

---

http://geekz.co.uk/schneierfacts/fact/27

---

It is all a little bit more complicated than this, but this should give you a basic idea.

---

It means the number of computers used times the amount of time they were used equals 500 years.

---

Does anyone know what "a half century of computer time" means?

---

Well, my key is the largest available in non-modified GPG.

I'm fine except for the fact that the signature is extremely bulky. Adopting an even larger key would be serious trouble...

Maybe time has come to actually consider ECC? What do you think about it, Bruce?

Thanks.
