About 5.3 billion cores, at 1000 breaks per year or 8.8 hours per break.
>> And doing the LA in "hours" is impossible with current technology.
250 TB @ say $10,000/TB is certainly attainable by a major player. I
don't know how long it might take though.
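The back-of-envelope figures above can be sanity-checked in a few lines
(a sketch; all inputs are the ones quoted in the thread - 5.3 billion
cores, 1000 breaks per year, 250 TB of linear-algebra memory at
$10,000/TB):

```python
# Sanity check of the figures quoted above. All inputs are the ones
# from the thread; nothing here is an independent cost estimate.
cores = 5.3e9                 # cores running continuously
breaks_per_year = 1000
hours_per_break = 365.25 * 24 / breaks_per_year
core_years_per_break = cores / breaks_per_year
la_cost = 250 * 10_000        # dollars: 250 TB at $10,000/TB

print(f"{hours_per_break:.1f} h/break")        # ~8.8 hours, as quoted
print(f"{core_years_per_break:.1e} core-years/break")
print(f"${la_cost:,} for LA memory")
```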
>> The keysizes for RSA have been greatly raised in standards, not because
>> of conventional attacks, but rather from the *threat* (as yet
>> unrealized)
>> of quantum computing.
Partly that, but historically the smaller RSA keysizes became insecure
more quickly than initially expected; people also became aware of the
cost of upgrading, and so grew very conservative about new keysizes.
>>
>> Note: I predicted back in the 90's that RSA-1024 would not be broken
>> before 2020.
>
> Thanks for the explanations and reference. So, it would seem
> that the reports on RSA-1024's impending demise have been somewhat
> exaggerated.
>
Not entirely so - while Pubkeybreaker is essentially correct in his
facts concerning publicly-known algorithms, and I agree (but with the
second proviso below) that most likely it isn't going to happen in the
next five years, at least two other issues must still be considered.
First, how long do you want to keep a secret? If it's more than say 10
years, using 1024-bit RSA becomes dodgy at best; and using 1024-bit DH,
especially with a commonly-used prime, becomes almost irresponsible [1].
The second issue is whether there have been any hardware or theoretical
advances made by major players, e.g. the NSA. You don't use the full
power of a major core when doing the sieving, and dedicated hardware
could chop at least one and probably two orders of magnitude off the
core hardware requirements.
As to theoretical advances, there have been hints of a theoretical
breakthrough by NSA - whether there is any truth in them I do not know.
But nowadays I recommend 1,536 bits, especially for DL/DH. The extra
cost is often lost in the noise, and seldom amounts to much.
I don't think there is much to gain in going any higher than 1,536 bits
- if Quantum computing can break 1,536 bits, it is likely only a small
step further to breaking any practicable number of bits. 2k bits is not
unreasonable to offset against unknown future developments, but any more
is most likely wasted.
[1] In case you don't know: if you can do a lot of work and "break" a
DH/DL prime, it then becomes cheap to break as many messages using that
prime as you like.
As individual DH primes get reused a lot more often than RSA keys, if
you can break a few 1024-bit keys then the first and juiciest targets
would be the widely-used DH primes.
As far as I can see, there is no useful way to similarly "parallel"
attacks on sets of randomly generated RSA keys.
In consequence there is a good argument that DH primes should be longer
than RSA keys, even though equal lengths of RSA and DL are about as hard
to crack as each other.
-- Peter Fairbrother