Hello all,
As discussed at the 5th PQC Standardization Conference, and as I described in a forum post in January, NIST is currently planning to specify separate "pure" and "pre-hash" versions of the signature algorithms in FN-DSA, ML-DSA, and SLH-DSA. In each case, the base signature generation and signature verification functions will remain unchanged, but the input string will be modified in order to create domain separation.
Taking SLH-DSA as an example, the internal signing function, slh_sign_internal(M, SK, addrnd) will be the same as Algorithm 18 of FIPS 205 IPD (with the exception that if hedged signing is used, then the random value for opt_rand is passed as an input (addrnd) rather than being generated within the function). FIPS 205 will then define two API functions for signing and two API functions for verification, one of each for "pure" signing and one of each for "pre-hash" signing.
For "pure" signing, the API would be slh_sign(M, ctx, SK). For hedged signing this function would:
1. Generate a fresh random value addrnd.
2. Construct M' = octet(0) || octet(OLEN(ctx)) || ctx || M.
3. Return slh_sign_internal(M', SK, addrnd).
For "pre-hash" signing, the API would be hash_slh_sign(M, ctx, PH, SK). For hedged signing this function would:
1. Generate a fresh random value addrnd.
2. Compute PHM = PH(M) and construct M' = octet(1) || octet(OLEN(ctx)) || ctx || OID(PH) || PHM.
3. Return slh_sign_internal(M', SK, addrnd).
If implementing hedged signing, then addrnd needs to be generated within the same cryptographic module as the one that performs slh_sign_internal (except when performing KAT testing). M' may be constructed outside of the cryptographic module.
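As an illustration, the two M' layouts described above can be sketched in Python as follows. This is a rough sketch, not a reference implementation: the exact byte layout and OID encoding are as specified in the final FIPS, and SHA-256 is used here only as an illustrative choice of PH.

```python
import hashlib

# DER encoding of the SHA-256 OID (2.16.840.1.101.3.4.2.1), used here
# only to illustrate binding the pre-hash function into M'.
SHA256_OID = bytes.fromhex("0609608648016503040201")

def encode_pure(msg: bytes, ctx: bytes) -> bytes:
    """M' for "pure" signing: domain-separator octet 0, the context
    length, the context, then the raw message."""
    assert len(ctx) <= 255
    return bytes([0, len(ctx)]) + ctx + msg

def encode_prehash(msg: bytes, ctx: bytes) -> bytes:
    """M' for "pre-hash" signing: domain-separator octet 1, the context,
    the OID of the pre-hash function, then PH(M) (SHA-256 here)."""
    assert len(ctx) <= 255
    return bytes([1, len(ctx)]) + ctx + SHA256_OID + hashlib.sha256(msg).digest()
```

Because the first octet differs between the two modes, no input to slh_sign_internal can be interpreted as both a pure and a pre-hash signing request.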
For pre-hashing, the FIPS will allow any approved hash function or XOF to be used for PH. However, when defining OIDs for signatures, we plan to specify a limited number of options -- perhaps only one or two options for PH for each parameter set. OIDs will be posted to the NIST web site; they will not be specified in the FIPS.
David Cooper
NIST PQC
The Ed25519ph and Ed448ph variants are prehashed. This is mainly useful for interoperation with legacy APIs, since in most of the cases, either the amount of data signed is not large or the protocol is in the position to do digesting in ways better than just prehashing (e.g., tree hashing or splitting the data). The prehashing also makes the functions greatly more vulnerable to weaknesses in hash functions used. These variants SHOULD NOT be used.
--
You received this message because you are subscribed to the Google Groups "pqc-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pqc-forum+...@list.nist.gov.
To view this discussion on the web visit https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/114533c4-c933-447c-995b-0492e061dee7n%40list.nist.gov.
+1 on Bas’ comment and the probable low adoption of prehash signatures. For reference, https://www.nccoe.nist.gov/sites/default/files/2023-12/pqc-migration-nist-sp-1800-38c-preliminary-draft.pdf (Appendix C) includes a summary of the pros and cons and some background on pure vs. prehash EdDSA. It is a moot point now because NIST decided differently, but IMO the limited use cases that needed prehashing could define it in their own specs.
From: 'Bas Westerbaan' via pqc-forum <pqc-...@list.nist.gov>
Sent: Monday, April 22, 2024 9:34 AM
To: Sebastien Riou <sebasti...@pqshield.com>
Cc: pqc-forum <pqc-...@list.nist.gov>; Moody, Dustin (Fed) <dustin...@nist.gov>
Subject: RE: [EXTERNAL] [pqc-forum] Re: Updates on pre-hash for FIPS 204 and 205
Sebastien Riou
Director, Product Security Architecture
PQShield Ltd
M: +33 782 320 285
My understanding is that pre-hash can do everything that the pure version does, hence my question: what do we gain by keeping the pure version?
On the one hand I understand the benefits of domain separation from a theoretical point of view, but on the other hand I am not convinced of the relevance of a pure vs. prehash distinction at the algorithm level. From an abstraction point of view, I prefer to view the signature primitive as "authenticate some arbitrary-length data" rather than "authenticate some data, which may or may not be prehashed, with an optional context string"; the latter feels like a protocol consideration that could be handled at the protocol level. I believe that having this distinction at the algorithm level, with a separate API for each, may cause headaches for protocol designers and implementers. For example:
- The protocol requires authenticating some data, prepended with a context string. Should I use sign(M, ctx, SK) or sign(ctx || M, "", SK)? Notice that M' = (octet(0) || octet(OLEN(ctx)) || ctx || M) in the first case, while M' = (octet(0) || octet(0) || ctx || M) in the second case.
- The protocol requires authenticating some data; that data happens to be a hash. Should I use the pure or the pre-hash signature?
- The protocol requires authenticating some data which is formatted as in the proposal to provide domain separation. Should I use the pure signature, or should I pass that data directly, as per "M' may be constructed outside of the cryptographic module"?
- I need, for whatever reason, to replace ML-DSA/SLH-DSA in my application with some other algorithm that does not support this distinction and therefore has a different API (e.g., XMSS as per RFC 8391). What message should I sign with this new algorithm, especially if the context string is non-empty and/or the data was pre-hashed?
All of these ambiguities disappear when sticking to the definition of the signature primitive as "authenticate some arbitrary-length data" and letting the protocol handle domain separation if needed. Moreover, as others have noted, the prehash variant will probably see low adoption, like preHash-EdDSA; its main benefit in practice is that it makes implementation of multi-part interfaces trivial, while the two passes over the message performed by EdDSA and SLH-DSA are incompatible with this (and come with a performance overhead for long messages). But maybe there are other solutions to this multi-part API issue. I will start another thread on this topic.
The problem with your suggestion is that, for a protocol that does not hash any metadata (specifically, not the signature algorithm identifier) when computing the message digest, an ambiguity arises as to how to interpret the signature. A signature over the pre-hashed variant of SLH-DSA, interpreted as the non-pre-hashed variant, would lead to an existential forgery: if an adversary changes the signature algorithm identifier accordingly, it appears that the signer signed hash(m), while they in fact signed m. Similarly, the ctx allows distinguishing between composite and stand-alone use of the signature algorithm in a sound way, which does not allow stripping off one signature without this being detected.
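Falko's forgery can be made concrete with a small sketch (illustrative Python; SHA-256 stands in for the pre-hash function, and the helper names are mine, not from any spec):

```python
import hashlib

def naive_input(msg: bytes, prehash: bool) -> bytes:
    """Signer input WITHOUT domain separation: pre-hash mode simply
    replaces the message by its digest."""
    return hashlib.sha256(msg).digest() if prehash else msg

def separated_input(msg: bytes, prehash: bool) -> bytes:
    """Signer input WITH a domain-separator octet, as in the proposal."""
    body = hashlib.sha256(msg).digest() if prehash else msg
    return bytes([1 if prehash else 0]) + body

m = b"pay 5 EUR"
# Without domain separation, a "pure" signature over sha256(m) is
# byte-identical to a "pre-hash" signature over m: the signer input
# collides, which is exactly the forgery described above.
assert naive_input(hashlib.sha256(m).digest(), prehash=False) == naive_input(m, prehash=True)
# The domain-separator octet breaks this equality.
assert separated_input(hashlib.sha256(m).digest(), prehash=False) != separated_input(m, prehash=True)
```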
- Falko
MTG AG
Dr. Falko Strenzke
Executive System Architect
Phone: +49 6151 8000 24
E-Mail: falko.s...@mtg.de
Web: mtg.de
MTG AG - Dolivostr. 11 - 64293 Darmstadt, Germany
Commercial register: HRB 8901
Register Court: Amtsgericht Darmstadt
Management Board: Jürgen Ruf (CEO), Tamer Kemeröz
Chairman of the Supervisory Board: Dr. Thomas Milde
+1 to David’s comment.
> But can we definitely say that no applications will need the pre-hash?
IMO, “need” is a very strong word, and that, IMO, is the reason this debate has circled so much. There are other potential solutions (modify all the protocols), so strictly speaking no applications “need” the pre-hash.
That said, I know that my colleagues in the HSM firmware space have been bemoaning the loss of pre-hashed signatures. The cases that have come up are:
These data formats are old enough to be effectively rusted-shut. We can change the signing algorithms, but the ship has long sailed to change the CRL data format to include a flag about whether the signature is over the raw data or over a pre-hash of it (and which hash function was used by the signer).
So nothing will catch fire if we don’t get pre-hashed versions of SLH-DSA and ML-DSA, but performance and throughput in some specific high-volume and latency-sensitive use cases will suffer, even on big rack-mount HSMs.
But I’m also sympathetic to the fact that pre-hashing undoes the domain separation and collision resistance that are built into SLH-DSA and ML-DSA (aka “yes, it’s not worse than RSA and ECDSA, but it’s not the 1980s anymore and we can do better”). So I’m sympathetic to the view that providing pre-hashed versions amounts to NIST endorsing a weakening of the spec, because implementers will see the OIDs for the pre-hashed versions and use them without reading the fine-print warnings. However, punting this to protocol designers, as Sophie suggests, gives a better chance that the people deciding to support pre-hashing will be aware of the fine print and can build replacement domain separators, aka H( pk || m ), into the protocol-level pre-hash. But as stated above, only for protocols and data formats that are not rusted shut.
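A protocol-level replacement domain separator of the H( pk || m ) kind could be sketched as follows. This is only an illustration of the idea, not from any spec; the function name and the choice of SHA-256 are mine.

```python
import hashlib

def protocol_prehash(pk: bytes, msg: bytes) -> bytes:
    """Protocol-level pre-hash that restores a key binding by hashing
    the signer's public key in front of the message, i.e. H(pk || m).
    The 32-byte digest is then signed as an ordinary short message
    with the "pure" API."""
    return hashlib.sha256(pk + msg).digest()
```

Because the digest is signed in pure mode, the algorithm's own domain separation still applies on top of the protocol-level pre-hash, and a signature made under one key cannot be replayed as a pre-hash of the same message under another key.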
I see both sides of this. I honestly don’t know what the right decision is.
---
Mike Ounsworth
Hi Sophie,
> In the case of Dilithium/FIPS 204 we can do the same thing, with the slight caveat that we need to hash in the public key for Dilithium before handing this hash to the sender (called µ in the draft standard). This can be accomplished in a streaming interface, and the resulting signature would be no different to a non-prehashed Dilithium signature.
I have a question about this (probably because I have not been following all of this thread in detail).
One of the advantages of (external) pre-hashing with RSA is that you can do the pre-hash outside of the FIPS boundary, e.g. in the PKCS#11 client library, which may be in a different datacentre from the FIPS module (HSM), so that you only need to transmit the hash over the wire. As far as FIPS 186-5 (which chains to RFC 8017 for RSA-PSS) is concerned, that hash is the message m, and whether it is actually a message or the hash of something else is completely irrelevant to your FIPS certification; i.e., the hash function used to compute the pre-hash is not mentioned in FIPS 186-5 / RFC 8017 and therefore is not subject to FIPS validation.
My concern / question about FIPS 204 (and the ECDSA part of FIPS 186-5 for that matter) is: if you take the pre-hash step (step 6 of FIPS 204 section 6, or step 1 of FIPS 186-5 section 6.4.1) and perform it externally to your FIPS module, then have you horribly violated your FIPS boundary in a way that will never pass a FIPS validation? In other words, I feel like you can’t actually “lift” the pre-hash step out of the FIPS module, you can only add an extra round of hashing in front, which cannot be done transparently to the verifier – but I would love for someone with more clarity to confirm.
---
Mike Ounsworth
Using the same OID for signature and public key, and baking into the OID whether it’s direct or pre-hash goes a long way to guaranteeing that a single key doesn’t mix modes.
---
Mike Ounsworth
From: 'Sophie Schmieg' via pqc-forum <pqc-...@list.nist.gov>
Sent: Wednesday, May 1, 2024 12:32 PM
To: pqc-...@list.nist.gov
Subject: Re: [EXTERNAL] Re: [pqc-forum] Re: Updates on pre-hash for FIPS 204 and 205
A key should only be used with one protocol as is. Crypto agility switching to a different OID needs to switch to a different key.

On Wed, May 1, 2024 at 2:07 AM, D. J. Bernstein <djb@cr.yp.to> wrote:
> One of the advantages of (external) pre-hashing with RSA is that you can do the pre-hash outside of the FIPS boundary – ex.: in the PKCS#11 client library which may be in a different datacentre from the FIPS module (HSM) so that you only need to transmit the hash over the wire. As far as FIPS 186-5 (which chains to RFC 8017 for RSA-PSS) is concerned, that is the message m, and whether it’s actually a message, or is the hash of something else is completely irrelevant to your FIPS certification – IE the hash function used to compute the pre-hash is not mentioned in FIPS 186-5 / RFC 8017 and therefore is not subject to FIPS validation.
This is not exactly correct. For both RSA and ECDSA, FIPS 186-5 says that an approved hash function shall be used, but doesn't specify what that hash function is. For ECDSA, computing the hash of the message is step 1 in Section 6.4.1 of FIPS 186-5. For RSA, computing the hash of the message is step 2 in Section 9.1.1 of RFC 8017 for PSS and step 1 in Section 9.2 for PKCS #1 v1.5. For FIPS validation of an implementation of RSA, the hash function(s) supported are considered as part of the testing (see Section 6.3 of https://csrc.nist.gov/CSRC/media/Projects/Cryptographic-Algorithm-Validation-Program/documents/dss2/rsa2vs.pdf).
> My concern / question about FIPS 204 (and the ECDSA part of FIPS 186-5 for that matter) is: if you take the pre-hash step (step 6 of FIPS 204 section 6, or step 1 of FIPS 186-5 section 6.4.1) and perform it externally to your FIPS module, then have you horribly violated your FIPS boundary in a way that will never pass a FIPS validation? In other words, I feel like you can’t actually “lift” the pre-hash step out of the FIPS module, you can only add an extra round of hashing in front, which cannot be done transparently to the verifier – but I would love for someone with more clarity to confirm.
For ECDSA, see https://csrc.nist.gov/projects/cryptographic-algorithm-validation-program/component-testing#ECDSASigGenPrim.
For ML-DSA, Ray Perlner noted during the FIPS 204 Update at the 5th PQC Standardization Conference that the message representative, μ, could be computed in a separate cryptographic module. We will include a note about that in the final version of FIPS 204.

That clarifies. Thank you, David!
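For reference, the message representative David mentions can be sketched as follows, per my reading of the final FIPS 204 (μ is the SHAKE256 hash of the 64-byte public-key hash tr concatenated with the formatted message); only μ then needs to cross into the module holding the signing key.

```python
import hashlib

def compute_mu(tr: bytes, m_prime: bytes) -> bytes:
    """Message representative mu = SHAKE256(tr || M', 64 bytes), where
    tr is the 64-byte hash of the signer's public key and M' is the
    formatted message. Computable outside the signing module."""
    assert len(tr) == 64
    return hashlib.shake_256(tr + m_prime).digest(64)
```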
---
Mike Ounsworth
From: 'David A. Cooper' via pqc-forum <pqc-...@list.nist.gov>
Sent: Monday, May 6, 2024 12:07 PM
To: Mike Ounsworth <Mike.Ou...@entrust.com>
Cc: pqc-forum <pqc-...@list.nist.gov>
Subject: Re: [EXTERNAL] Re: [pqc-forum] Re: Updates on pre-hash for FIPS 204 and 205
Hello Mike,

On 4/30/24 5:55 PM, Mike Ounsworth wrote: One of the advantages of (external) pre-hashing with RSA is that you can do the pre-hash outside of the FIPS boundary – ex.: in the PKCS#11 client library which may be in a different datacentre