rsa issue (again)


Petar Maymounkov

Mar 1, 2010, 12:18:03 PM
to golang-nuts
A while back people complained that RSA hangs when generating
reasonable size private keys.

Russ suggested that it may be an issue with randomness deprivation,
if you read from /dev/random (or any such OS source).
This possibility is disqualified if you read your randomness from,
say, an RC4 stream. The problem still persists.

This post:

http://groups.google.com/group/golang-nuts/browse_thread/thread/972ede75aba1b21b/5555f699403063e8?lnk=gst&q=rsa#5555f699403063e8

Suggested that there might be a "boundary condition" bug in package
big.

Does anyone know the status of this?

The bottom line is that the RSA library is currently useless, in my
opinion.

Thanks,
--Petar

Charlie Dorian

Mar 1, 2010, 2:05:48 PM
to golang-nuts
Are you sure this is still a problem? I just ran the program from the
first post of the referenced thread:

mb:~/proj/go charles$ time ./6.out
P & Q: 43861716044572319 & 71589576064007819
P & Q: 17575883740729560383 & 12382756466702954303

real 0m0.313s
user 0m0.291s
sys 0m0.012s

I'm using the most recent version:

mb:~/go/src charles$ hg identify
39166d3ccd25+ tip

On Mar 1, 12:18 pm, Petar Maymounkov <pet...@gmail.com> wrote:
> A while back people complained that RSA hangs when generating
> reasonable size private keys.
>
> Russ suggested that it may be an issue with randomness deprivation,
> if you read from /dev/random (or any such OS source).
> This possibility is disqualified if you read your randomness from,
> say, an RC4 stream. The problem still persists.
>
> This post:
>

> http://groups.google.com/group/golang-nuts/browse_thread/thread/972ed...

Raif S. Naffah

Mar 2, 2010, 4:50:59 AM
to Charlie Dorian, golang-nuts
the same program seems to hang with 8g/8l at the same tip -- still
running after 5+ minutes.

Raif S. Naffah

Mar 2, 2010, 12:24:05 PM
to se...@naffah-raif.name, Charlie Dorian, golang-nuts
On Tue, 2010-03-02 at 20:50 +1100, Raif S. Naffah wrote:
> the same program seems to hang with 8g/l with the same tip --still
> running after 5 minutes+.

update. it just takes a long time:

$ time ./8.out
P & Q: 71394167727006023 & 70732225960644203
P & Q: 14855339563281008843 & 13277519734499058839

real 397m15.520s
user 0m1.080s
sys 0m0.117s

cheers;
rsn


Charlie Dorian

Mar 2, 2010, 12:40:23 PM
to golang-nuts
How interesting! I changed my GOARCH, rebuilt everything with
"all.bash", and tried again:

mb:~/proj/go charles$ time ./8.out
P & Q: 62028102827988203 & 45613910241712343
P & Q: 13320469966129271003 & 16965234814937627147

real 0m0.572s
user 0m0.559s
sys 0m0.012s

Looks like we need more research. My results look like:
0.3s with GOOS=darwin and GOARCH=amd64; 2.53GHz Intel Core 2 Duo
0.6s with GOOS=darwin and GOARCH=386; 2.53GHz Intel Core 2 Duo

Please add yours.


andrey mirtchovski

Mar 2, 2010, 1:27:46 PM
to Charlie Dorian, golang-nuts
> How interesting!  I changed my GOARCH, rebuilt everything with
> "all.bash", and tried again:

it is /dev/random deprivation, at least on linux. if one switches to
urandom the code completes in 0.2 seconds:

P & Q: 42738676210089647 & 55228975988039927
P & Q: 12744243358501247963 & 14193110443708744607

real 0m0.286s

now, back to /dev/random: if you exhaust it, it will block waiting for
more "randomness" to accumulate in the system. for a lot of randomness
you may have to wait a long time, especially if the machine is not a
desktop and doesn't have much networking/disk activity (and
consequently few interrupts). you can "trick" /dev/random into
generating a bit more random data by inducing disk/network activity,
although that kind of defeats the purpose. for example, try this
experiment (on linux[1]):

- cat /dev/random until it stops spewing data;
- in a separate session do a 'dd if=/dev/sda1 of=/dev/null' and you'll
  see more random bits appear with some regularity;
- in yet another session ssh to a remote machine and do a 'find /',
  and you should see yet more bits appear slightly faster.

it took a lot of work to get the original code to complete in any
reasonable time on a lightly-loaded linux box with no mouse attached.
two dd's of the raw disk partitions, a couple of multi-gigabyte
network transfers and several 'du -a /' runs resulted in this
completion time:

$ time ./6.t
P & Q: 54727659978154379 & 67460947717305683
P & Q: 13111547974407549047 & 14009543552169709367

real 18m58.452s

the problem is that the rsa key generation's thirst for randomness is
insatiable. one solution is to use /dev/random to seed the
pseudo-random number generator, which would give you sub-second key
generation with some low degree of security[2].

andrey

[1]: OSX's /dev/random implements the pseudo-random algorithm and
feeds it additional randomness from the system, so in general it does
produce a lot more data, as 'cat /dev/random' on a mac would attest.
/dev/random and /dev/urandom are identical there

[2]: BSD's man page states:

"On Linux, /dev/urandom will produce lower quality output if the
entropy pool drains, while /dev/random will prefer to block and wait
for additional entropy to be collected."

Petar Maymounkov

Mar 2, 2010, 1:34:24 PM
to andrey mirtchovski, Charlie Dorian, golang-nuts
Don't waste your time testing RSA with randomness from /dev/urandom.
This just adds more complexity and sources of confusion.

Either use a dumb source of randomness:

// Reader that always returns one
type OnesReader struct{}

func (OnesReader) Read(p []byte) (n int, err os.Error) {
	for i := 0; i < len(p); i++ {
		p[i] = 1
	}
	return len(p), nil
}

Or a slightly more elaborate one:

type TimedRand struct {
	*rc4.Cipher
}

func NewTimedRand() TimedRand {
	ciph, err := rc4.NewCipher(int64ToBytes(time.Nanoseconds()))
	if err != nil {
		panic("rc4 gen error")
	}
	return TimedRand{ciph}
}

func (tr TimedRand) Read(p []byte) (n int, err os.Error) {
	for i := range p {
		p[i] = 0
	}
	tr.XORKeyStream(p, p)
	return len(p), nil
}

func int64ToBytes(s int64) []byte {
	u := uint64(s)
	l := int(unsafe.Sizeof(u))
	b := make([]byte, l)
	for i := 0; i < l; i++ {
		b[i] = byte((u >> uint(8*(l-i-1))) & 0xff)
	}
	return b
}

In either case, there is no deprivation of randomness and one
easily observes that RSA hangs with the CPU at 99% for a long time.

P

andrey mirtchovski

Mar 2, 2010, 3:36:59 PM
to Petar Maymounkov, Charlie Dorian, golang-nuts
> In either case, there is no deprivation of randomness and one
> easily observes that RSA hangs with the CPU at 99% for a long time.

you are correct, there is no deprivation of randomness. of course,
there's no randomness either, but the algorithm for rsa key generation
depends on it. if you look at the code, it polls the random source for
bits and then checks whether those bits are anywhere close to a prime
(big.ProbablyPrime()). all ones is not probably prime, so the rsa gen
algorithm loops forever, and the 99% utilization you're seeing is the
garbage collector working hard to keep big.SetBytes()'s malloc work in
check.

switch your 'generate ones' code to this and you'll experience much
lower delay (note that the rsa gen algorithm is not guaranteed to
complete given the design discussed above, so your timings will vary
with the seed):

func (OnesReader) Read(p []byte) (n int, err os.Error) {
	for i := 0; i < len(p); i++ {
		p[i] = byte(rand.Int())
	}
	return len(p), nil
}

then in main:

rand.Seed(1)

with that code, on the same machine as before, using seed '0':

$ time ./t
P & Q: 51640149482748083 & 71828404232303483
P & Q: 17801739130345293587 & 14360911268719510787

real 0m0.268s

andrey mirtchovski

Mar 2, 2010, 3:41:20 PM
to golang-nuts
> with that code, on the same machine as before, using seed '0':

err, make that a '1'

Petar Maymounkov

Mar 3, 2010, 12:52:30 AM
to andrey mirtchovski, golang-nuts
I did what you suggested. In my application I need to create 81-byte
RSA private keys, i.e. 648-bit keys.

My keys are generated in about 30 seconds on a MacBook Pro, but there
also seems to be a large variance in the generation time.

I am wondering if these are acceptable times.

--Petar

andrey mirtchovski

Mar 3, 2010, 1:08:49 AM
to Petar Maymounkov, golang-nuts
i think shooting randomly at 648-bit numbers trying to hit a prime is
going to be suboptimal. I know google has sufficient expert knowledge
in the field to be able to fix this properly. i suggest you create a
bug report.

plan9's auth/rsagen completes a 648-bit key in 0.15 seconds (on
plan9). that code can be used to fashion a better algorithm for key
generation.

Petar Maymounkov

Mar 3, 2010, 10:36:19 AM
to andrey mirtchovski, golang-nuts
Can you point me to that "auth/rsagen" source? I can't find it online.
I might sit down and fix the problem.

Thanks,
--Petar

Petar Maymounkov

Mar 3, 2010, 10:40:34 AM
to andrey mirtchovski, golang-nuts, Russ Cox
Also, let's ask Russ whether he didn't get the code from there
in the first place. (Which is very likely.) Russ?

Thanks,
--Petar

andrey mirtchovski

Mar 3, 2010, 10:49:36 AM
to Petar Maymounkov, golang-nuts, Russ Cox
the source for genprime (which rsagen uses) is on Russ's website.
it's a different algorithm:

http://swtch.com/usr/local/plan9/src/libsec/port/genprime.c

Russ Cox

Mar 3, 2010, 12:24:27 PM
to Petar Maymounkov, andrey mirtchovski, golang-nuts
On Wed, Mar 3, 2010 at 07:40, Petar Maymounkov <pe...@csail.mit.edu> wrote:
> Also, let's ask Russ whether he didn't get the code from there
> in the first place. (Which is very likely.) Russ?

Adam Langley wrote the RSA generator.
The algorithm used in Go and the algorithm used in Plan 9
look mostly the same. The only difference I can see is
that the Go algorithm keeps trying random numbers to
find the prime while the Plan 9 code picks a random
number and then increments by two until it gets a prime.
These should amount to the same thing, though the
Go algorithm will use up more randomness. I don't think
that would explain the significant runtime difference, though.
It could also be that probablyPrime throws off a lot of
garbage, though that's purely speculation.

It's worth running the binary under 6prof, if it can handle it,
to see where the time is going.

Russ

Nathan Vander Wilt

Mar 3, 2010, 1:02:09 PM
to golang-nuts
On Mar 2, 2010, at 10:27 AM, andrey mirtchovski wrote:
>> How interesting! I changed my GOARCH, rebuilt everything with
>> "all.bash", and tried again:
>
> it is /dev/random deprivation, at least on linux. if one switches to
> urandom the code completes in 0.2 seconds:
>
> P & Q: 42738676210089647 & 55228975988039927
> P & Q: 12744243358501247963 & 14193110443708744607
>
> real 0m0.286s

Sorry, I should have followed up to that original thread. I ended up just switching to 32-bit (6g) because the 64-bit (8g) was hanging at the key sizes I was trying to use. I am on a Mac, which uses Yarrow for all random devices, so there is no entropy deprivation going on. IIRC, I saw some "hangs" with certain key size ranges on 6g too, but it didn't affect my needs.

I didn't try applying Charlie's fix for the boundary condition because I'd hoped the dev team would apply it to the main tree after someone more knowledgeable than me confirmed its correctness. It does smell like a numerics problem, though. Maybe this needs a real bug filed in the issue tracker.

hope this helps,
-natevw

Charlie Dorian

Mar 3, 2010, 1:16:09 PM
to golang-nuts
This (http://primes.utm.edu/howmany.shtml) site says:
"Example: Suppose I want to find a 1000 digit prime. If I am choosing
1000 digit integers x to test for primality at random, then I'd
expect to test about log(10^1000) of them, or about 2302 integers
before finding a prime. Obviously if I used odd integers I could
multiply this estimate by 1/2..."

If the first attempt is not prime, I am probably half-way between
primes, so incrementing by two means I would find a prime in one-
quarter of the tries.

Is this not true?

andrey mirtchovski

Mar 3, 2010, 10:52:07 PM
to Charlie Dorian, golang-nuts
> This (http://primes.utm.edu/howmany.shtml) site says:
> "Example: Suppose I want to find a 1000 digit prime.  If I am choosing
> 1000 digit integers x to test for primality  at random, then I'd
> expect to test about log(10^1000) of them, or about 2302 integers
> before finding a prime.  Obviously if I used odd integers I could
> multiply this estimate by 1/2..."
>
> If the first attempt is not prime, I am probably half-way between
> primes, so incrementing by two means I would find a prime in one-
> quarter of the tries.

Your hint was revelatory. It turns out that it may sometimes take
upwards of 100k tries to find a prime in the 324-bit range, and there
is at least one case in the 66-bit range where, stepping by 4, no
prime is detected across at least 12 million successive odd numbers,
missing some 4 thousand primes in the range. This is now bug 638.

cheers!

emghazal

Mar 4, 2010, 3:24:29 AM
to golang-nuts
On Mar 3, 9:02 pm, Nathan Vander Wilt <nate-li...@calftrail.com>
wrote:

> On Mar 2, 2010, at 10:27 AM, andrey mirtchovski wrote:
> Sorry, I should have followed up to that original thread. I ended up just switching to 32-bit (6g) because the 64-bit (8g) was hanging in the key size range I was trying to use it.

Quick nitpick:
6g/6l/6c are 64-bit
8g/8l/8c are 32-bit

http://golang.org/doc/install.html#tmp_35
