
RSA keygen recommendations


Tom St Denis

Jan 22, 2010, 1:32:44 PM

Looking at X9.31 and of course the NIST DSS spec, it seems they still
promote a contrived key generation process for RSA keys, despite the fact
that p \pm 1 attacks are no longer viable. I get that there is some value
in deterministic key generation, but couldn't they just say: seed a PRNG,
generate two large random numbers, and test them for primality? Also, they
still cite 1/4 as the per-round error bound for MR, but I've read [and
should look up again] that for random candidates the bounds are in fact
much tighter [and lower] than that.

Is this just a case of X9.31 not ever getting updated and/or used, or
are there valid reasons to use the contrived key gen processes?
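
Concretely, the sort of process I have in mind is just this (rough Python
sketch; the round count and sizes are illustrative, not taken from any
standard):

import secrets

def is_probable_prime(n, rounds=8):
    # Miller-Rabin with random bases.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2        # base in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                        # definitely composite
    return True                                 # probable prime

def random_prime(bits):
    while True:
        # Set the top bit (full size) and the bottom bit (odd).
        p = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(p):
            return p

def rsa_keygen(bits=2048, e=65537):
    while True:
        p = random_prime(bits // 2)
        q = random_prime(bits // 2)
        phi = (p - 1) * (q - 1)
        if p != q and phi % e:  # e is prime, so phi % e != 0 means gcd(e, phi) == 1
            n, d = p * q, pow(e, -1, phi)       # modular inverse; Python 3.8+
            return (n, e), (n, d)

That's the whole thing, and with a trial-division sieve in front of the MR
calls it's about as fast as such things get.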

Tom

Simon Johnson

Jan 22, 2010, 5:36:09 PM

> Is this just a case of X9.31 not ever getting updated and/or used, or
> are there valid reasons to use the contrived key gen processes?

What you need here is a boat load of XML. XML will solve this problem.

We can have:

<cipher type="Asymmetric" name="RivestShamirAdleman">
<keygeneration method="outdated,outmoded" result="pointless" />
</cipher>

Then you have someone write a parser in twelve different, slightly
incompatible libraries and call that a standard.

Then, and only then, have you created a standard that will be defunct
before it's even officially recognised.

Almost like WEP actually.

Simon.

Samuel Neves

Jan 24, 2010, 6:10:27 PM

There is no mathematical reason to generate "strong primes" in RSA key
generation. See:

http://eprint.iacr.org/2001/007

The average-case error estimates for Miller-Rabin on random numbers are
given in:

http://math.dartmouth.edu/~carlp/PDF/paper88.pdf
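
From memory, the headline single-round bound there, for a uniformly random
odd k-bit candidate, is

    p_{k,1} \le k^2 \cdot 4^{2 - \sqrt{k}}

so at k = 512 a single MR round already gives an error probability below
2^{-23}, and the tables in the paper (reproduced in HAC, Table 4.4) are
sharper still. (Quoting from memory; check the paper for the exact
statement and conditions.)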

Tom St Denis

Jan 25, 2010, 6:49:39 AM

What I don't get, then, is why rDSA in DSS and X9.31 still recommend 8+
rounds of MR plus the LL test if this paper was published back in '93.

Thanks for the citations, though. I think the results from the paper on MR
are also to be found in HAC [at least that's where I seem to recall
reading them].

Tom

Pubkeybreaker

Jan 25, 2010, 12:56:40 PM

One needs to be aware of the political situation that was present inside
ANSI X9F at the time.

The committee consisted of a small number of cryptographers (me, Paul Van
Oorschot, sometimes Matt Wiener, and a competent rep from the NSA), a
large number of bank representatives, plus a representative from Certicom
who shall remain nameless (but whom I respect).

There were a lot of political roadblocks to getting X9.31 established as a
standard. Certicom clearly wanted to keep it from ever becoming one; RSA
Security was their competition.

A number of people kept bringing up "red herring" reasons for putting all
kinds of obstacles into the standard.

The bankers wanted "strong primes" because it gave them a warm fuzzy
feeling. They didn't understand the math behind elliptic curve factoring
methods, nor the reason why ECM made Pollard p-1 and Pollard p+1 obsolete.
The Certicom representative went along with this, of course. The feeling
was "generating strong primes is cheap; why not do it anyway?" In fact,
for years before I became involved, the standard had been STALLED
precisely because no one knew a way to generate so-called strong primes
quickly. I gave them a method to do it.
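
For the curious, the textbook construction along these lines is Gordon's
algorithm (HAC Algorithm 4.53). A rough Python sketch, reusing the
is_probable_prime()/random_prime() helpers from the sketch upthread
(illustrative only; I am not saying this is the exact X9.31 procedure):

def gordon_strong_prime(bits, aux_bits=101):
    # Produce p such that p - 1 has a large prime factor r,
    # p + 1 has a large prime factor s, and r - 1 has a large
    # prime factor t. The auxiliary sizes are illustrative.
    s = random_prime(aux_bits)
    t = random_prime(aux_bits)
    i = 1
    while not is_probable_prime(2 * i * t + 1):   # r = 2it + 1, so t | r - 1
        i += 1
    r = 2 * i * t + 1
    # p0 = 2(s^(r-2) mod r)s - 1 satisfies p0 = 1 (mod r), p0 = -1 (mod s).
    p0 = 2 * pow(s, r - 2, r) * s - 1
    step = 2 * r * s              # adding step preserves both congruences
    p = p0 + step * ((1 << (bits - 1)) // step + 1)   # crude lift to target size
    while not is_probable_prime(p):
        p += step
    return p

Since p = 1 (mod r) and p = -1 (mod s), r | p - 1 and s | p + 1, which is
all the "strong prime" requirement amounts to, and which ECM ignores
completely.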

It also gave people a warm fuzzy feeling to require multiple MR tests for
PRP as well as the LL test. The claim was "why not do it, since the cost
of a few extra tests is small" [keys do not get generated all that often].

For the sake of getting a standard accepted and in place, my boss (Burt
Kaliski) basically told me to give in on all these technical points. RSA
Security needed to have a standard put in place.

BTW, for those who have not guessed, I was Ron Rivest's co-author on the
strong primes paper, and I was one of the main authors of X9.31.

You needed to have been part of the politics to really understand what was
going on.

Tom St Denis

Jan 26, 2010, 7:41:00 AM

On Jan 25, 12:56 pm, Pubkeybreaker <pubkeybrea...@aol.com> wrote:
> You needed to have been part of the politics to really understand what
> was going on.

Thanks for the insight. I can certainly believe the "bureaucratic"
nonsense you're pointing out. What I don't get, then, is why there haven't
been updates to X9.31, or at least to DSS. X9.31 could be updated to
remove strong primes and still be numerically consistent with existing
implementations. It has been 12 years, and RSA is RSA, after all.

While I have your eyes for a minute, mind explaining the need for
signatures to be less than N/2, and why it's OK for the "fast signature
method" to skip that step (wouldn't that make slow and fast signatures
incompatible?)?

Thanks,
Tom

Thomas Pornin

Jan 26, 2010, 9:16:42 AM

According to Tom St Denis <t...@iahu.ca>:

> What I don't get then is why there haven't been updates to X9.31 or at
> least DSS.

If there has been no update to X9.31, then there probably is no real
need for an update to X9.31.


This is how things go with computing industry standards. If a standard is
frozen in time, it means that people at large are quite happy with it:
either the standard fulfills their requirements, or they silently ignore
it. A standard evolves only when there are people dissatisfied with the
current standard, _and_ those people are unwilling to forego compliance
with that standard. For instance, X.509 constantly evolves because there
are people who are interested in making their products "X.509-compliant",
but also want some extra features or some things done differently. On the
other hand, the specification of what goes in an ASN.1 "TeletexString" is
stable, not because it is satisfactory, but because implementers are quite
happy with stuffing latin-1 into it (which is not conformant, and never
has been) or simply avoiding it (using UTF8String instead).

In some extreme cases, a standard is frozen because the people who need a
standard and are dissatisfied with the current one went to the effort of
creating a new, distinct standard more or less from scratch. That's how
LDAP thrives while DAP is a corpse. Another interesting case is SGML vs
XML: although XML is an "SGML profile", it has acquired a life of its own,
and SGML is stuck.


Your question then boils down to a boolean choice:
1. X9.31 is good as it is, because key generation is rare enough to make
the extra key generation steps inexpensive.
2. Implementations do not conform to X9.31, but nobody presses the issue
and life goes on.

The reality is probably a mix of the two; something like: "situations
where X9.31 conformance is important are also situations where key
generation is rare enough to make the extra steps a non-issue."


--Thomas Pornin

Pubkeybreaker

Jan 26, 2010, 9:19:32 AM

Why no update? I have no idea. I have not been involved with X9F for 9
years.

It's been 12 years since I looked at X9.31, but IIRC (and my memory could
be VERY wrong) we needed to disambiguate between x and -x mod N. If
x < N/2, then N - x > N/2, and it becomes easy to distinguish the two.
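
Schematically (Python; padding_ok below is a hypothetical stand-in for the
real ANSI padding check, which I am eliding):

def sign(m_repr, d, n):
    # m_repr: the padded message representative, an integer mod n
    x = pow(m_repr, d, n)
    return min(x, n - x)                    # always emit the value below n/2

def verify(sig, e, n, padding_ok):
    y = pow(sig, e, n)
    return padding_ok(y) or padding_ok(n - y)

Since e is odd, (n - x)^e = -(x^e) (mod n), so the verifier recovers the
message representative as either y or n - y, and checking both covers it.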

Tom St Denis

Jan 26, 2010, 10:52:33 AM

My confusion about S < N/2 is that it's not actually a requirement of the
RSA transform. But what do you mean by choosing between x and -x? When
would you be presented with both and/or need to tell them apart? One of
them will lead to a validly padded message; the other will not.

The ANSI spec has a sentence about S being a bit shorter, but that's about
it. It can't solely be about saving an extra bit of space, since most
messages will be padded out to octet lengths anyway...

Tom

Tom St Denis

Jan 26, 2010, 10:54:30 AM

On Jan 26, 9:16 am, Thomas Pornin <por...@bolet.org> wrote:
> Your question then boils down to a boolean choice:
> 1. X9.31 is good as it is, because key generation is rare enough to make
> the extra key generation steps inexpensive.

Well, it makes it a pain on non-desktop platforms, since the generic
"random primes" method is slow enough as it is.

> 2. Implementations do not conform to X9.31, but nobody presses the issue
> and life goes on.
>
> The reality is probably a mix of the two; something like: "situations
> where X9.31 conformance is important are also situations where key
> generation is rare enough to make the extra steps a non-issue."

I think part of my question is whether X9.31 is actually used in industry
at all. I haven't seen it [or a citation of it] used anywhere outside of
rDSA in DSS.

Tom

James H. Markowitz

Jan 26, 2010, 11:58:05 AM

On Tue, 26 Jan 2010 07:54:30 -0800, Tom St Denis wrote:

> I think part of my question is whether X9.31 is actually used in
> industry at all.

Is this not supposed to be mandatory for any implementation with
a NIST certification compliant with the FIPS 140-2 specifications?

Tom St Denis

Jan 26, 2010, 12:10:30 PM

NIST also recognizes PKCS #1 signatures, if I'm not mistaken.

Tom

James H. Markowitz

Jan 26, 2010, 3:49:44 PM

I meant for RSA key generation.

Phil Carmody

Jan 28, 2010, 5:39:06 AM

Pubkeybreaker <pubkey...@aol.com> writes:
> warm fuzzy feeling.
> didn't understand
> no one knew a way
> warm fuzzy feeling
...

> told me to give in on all these technical points.

Whoa, are we in the same office?

Phil
--
Any true emperor never needs to wear clothes. -- Devany on r.a.s.f1

Peter Gutmann

Feb 10, 2010, 11:41:01 PM

Weeelll... yes, you implement just enough to get your code past NIST and then
switch back to PKCS #1 like everyone else.

The same applies to a pile of other bizarro modes and mechanisms that NIST
requires.

Oh, and in fact to FIPS 140 in general.

Peter.

James H. Markowitz

Feb 11, 2010, 9:26:15 AM

On Thu, 11 Feb 2010 04:41:01 +0000, Peter Gutmann wrote:

> "James H. Markowitz" <no...@nowhere.net> writes:
>
>>On Tue, 26 Jan 2010 07:54:30 -0800, Tom St Denis wrote:
>
>>> I think part of my question is whether X9.31 is actually used in
>>> industry at all.
>
>> Is this not supposed to be mandatory for any implementation with
>>a NIST certification compliant with the FIPS 140-2 specifications?
>
> Weeelll... yes, you implement just enough to get your code past NIST and
> then switch back to PKCS #1 like everyone else.

But can one in that case still claim to be FIPS-compliant?

> The same applies to a pile of other bizarro modes and mechanisms that
> NIST requires.
>
> Oh, and in fact to FIPS 140 in general.

I would tend to agree, but...

Tom St Denis

Feb 11, 2010, 10:24:25 AM

On Feb 11, 9:26 am, "James H. Markowitz" <no...@nowhere.net> wrote:
> On Thu, 11 Feb 2010 04:41:01 +0000, Peter Gutmann wrote:
> > "James H. Markowitz" <no...@nowhere.net> writes:
>
> >>On Tue, 26 Jan 2010 07:54:30 -0800, Tom St Denis wrote:
>
> >>> I think part of my question is whether X9.31 is actually used in
> >>> industry at all.
>
> >>        Is this not supposed to be mandatory for any implementation with
> >>a NIST certification compliant with the FIPS 140-2 specifications?
>
> > Weeelll... yes, you implement just enough to get your code past NIST and
> > then switch back to PKCS #1 like everyone else.
>
>         But can one in that case still claim to be FIPS-compliant?

PKCS #1 signatures are also supported under the rDSA umbrella. You don't
have to support X9.31 at all to get a CAVP certificate.

> > The same applies to a pile of other bizarro modes and mechanisms that
> > NIST requires.
>
> > Oh, and in fact to FIPS 140 in general.
>
>         I would tend to agree, but...

What I find most ironic is that none of the DSA vectors in the CAVP
process actually use the encodings mandated by the standards. You get
EC-DSA signatures as raw <r,s> pairs as opposed to the ASN.1 encoding
required by X9.62, you get RSA keys as <n,e,d> triples as opposed to PKCS
#1 RSAPublicKey encodings, etc.

Largely, CAVP validation appears to be useless for asymmetric crypto.
Nothing in it tells me whether a signature I encode and send to a third
party will actually be accepted...
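
For what it's worth, going from the bare <r,s> the vectors hand you to the
DER SEQUENCE that X9.62 wants is only a few lines (Python sketch; minimal
DER, no error handling):

def der_len(n):
    # DER length octets: short form below 128, long form otherwise.
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def der_int(x):
    # DER INTEGER: minimal big-endian, 0x00-prefixed to stay positive.
    body = x.to_bytes((x.bit_length() + 7) // 8 or 1, "big")
    if body[0] & 0x80:
        body = b"\x00" + body
    return b"\x02" + der_len(len(body)) + body

def ecdsa_sig_to_der(r, s):
    # X9.62: ECDSA-Sig-Value ::= SEQUENCE { r INTEGER, s INTEGER }
    body = der_int(r) + der_int(s)
    return b"\x30" + der_len(len(body)) + body

But nothing in the validation process ever exercises that encoding, which
is exactly my complaint.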

James H. Markowitz

Feb 11, 2010, 12:45:46 PM

On Thu, 11 Feb 2010 07:24:25 -0800, Tom St Denis wrote:

> On Feb 11, 9:26 am, "James H. Markowitz" <no...@nowhere.net> wrote:
>> On Thu, 11 Feb 2010 04:41:01 +0000, Peter Gutmann wrote:
>> > "James H. Markowitz" <no...@nowhere.net> writes:
>>
>> >>On Tue, 26 Jan 2010 07:54:30 -0800, Tom St Denis wrote:
>>
>> >>> I think part of my question is whether X9.31 is actually used in
>> >>> industry at all.
>>
>> >>        Is this not supposed to be mandatory for any implementation with
>> >>a NIST certification compliant with the FIPS 140-2 specifications?
>>
>> > Weeelll... yes, you implement just enough to get your code past NIST
>> > and then switch back to PKCS #1 like everyone else.
>>
>>         But can one in that case still claim to be FIPS-compliant?
>
> PKCS #1 signatures are also supported in the rDSA spectrum. You don't
> have to support X9.31 at all to get a CAVP certificate.

Sorry, I was specifically thinking about RSA key generation
again. My guess is that in order to be FIPS-compliant at all, one has to
generate the primes with the slow, cumbersome mechanism described in
X9.31, right?

Scott Fluhrer

Feb 11, 2010, 5:14:21 PM

"James H. Markowitz" <no...@nowhere.net> wrote in message
news:hl1foa$qr7$1...@news.albasani.net...

Actually, they just changed the rules; in NIST SP 800-56B, they removed
the requirements on the factors of p \pm 1 and q \pm 1. In any case, the
old X9.31 method was cumbersome (complex, which is an enemy of security);
however, it really wasn't *that* slow (the time taken searching for the 4
small subprimes is still fairly small compared to the time taken searching
for the real primes p and q, which you need to do in any case).
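
Back-of-the-envelope: a random odd k-bit number is prime with probability
roughly 2/(k ln 2), so finding a 1024-bit prime burns through about
1024 * ln(2) / 2, i.e. ~355, candidates, versus only ~35 candidates for
each ~101-bit subprime. On top of that, a modexp at 101 bits costs on the
order of (101/1024)^3, i.e. about 1/1000, of one at 1024 bits, so the
whole subprime search is noise next to the search for p and q. (The
subprime size here is illustrative.)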

--
poncho


Peter Gutmann

Feb 15, 2010, 1:42:48 AM

"James H. Markowitz" <no...@nowhere.net> writes:
>On Thu, 11 Feb 2010 04:41:01 +0000, Peter Gutmann wrote:
>> "James H. Markowitz" <no...@nowhere.net> writes:
>>>On Tue, 26 Jan 2010 07:54:30 -0800, Tom St Denis wrote:
>>
>>>> I think part of my question is whether X9.31 is actually used in
>>>> industry at all.
>>
>>> Is this not supposed to be mandatory for any implementation with
>>>a NIST certification compliant with the FIPS 140-2 specifications?
>>
>> Weeelll... yes, you implement just enough to get your code past NIST and
>> then switch back to PKCS #1 like everyone else.
>
> But can one in that case still claim to be FIPS-compliant?

Sure: the FIPS-compliant stuff is still in there, so the marketing
checkbox requirement is met. It doesn't actually get used, but as long as
the checkbox is filled, no one even notices.

(Interesting story - blurred somewhat to hide identities of those involved -
some years ago a vendor shipped a widely-used product which could be run in
FIPS 140 mode. Unfortunately there was a bug in it which meant that if FIPS
140 mode was ever enabled, the product crashed. After three years and tens of
millions of units shipped a user finally noticed that FIPS 140 mode didn't
actually work.

Another vendor also had a product in which you could enable FIPS 140 mode.
Doing so required contacting the vendor for a particular item to enable it (in
other words the vendor could count how many people were running the product in
FIPS 140 mode). After several years, the vendor reported that the number of
users who had actually enabled FIPS 140 mode could be counted on the fingers
of one hand.

Both vendors are major suppliers to the US government, and this is only a
small sample of the more amusing stories around.)

Peter.

Pubkeybreaker

Feb 15, 2010, 7:58:06 AM

It is neither slow nor cumbersome. A proper implementation runs only a
few percent slower than the generation of random primes.
