OIDs for ML-KEM and "pure" signatures


David A. Cooper

unread,
Aug 20, 2024, 1:26:36 PM8/20/24
to pqc-forum
Hello all,

The OIDs for ML-KEM and for "pure" ML-DSA and SLH-DSA signatures are now
available at
https://csrc.nist.gov/projects/computer-security-objects-register/algorithm-registration.

One OID has been defined for each parameter set, with the same OID being
used to identify both public keys and ciphertexts/signatures. The
details of the encodings of public keys, ciphertexts, and signatures
within ASN.1 structures are being defined within the IETF (e.g.,
https://www.ietf.org/archive/id/draft-ietf-lamps-kyber-certificates-03.html,
https://www.ietf.org/archive/id/draft-ietf-lamps-cms-kyber-04.html,
https://www.ietf.org/archive/id/draft-ietf-lamps-dilithium-certificates-04.html,
https://www.ietf.org/archive/id/draft-ietf-lamps-x509-slhdsa-01.html,
https://www.ietf.org/archive/id/draft-ietf-lamps-cms-sphincs-plus-07.html).

We plan to publish the OIDs for "pre-hash" ML-DSA and SLH-DSA in the
near future. However, there is still an open question of whether all
signature public keys should be identified using the "pure" OIDs or
public keys intended to create "pre-hash" signatures should be
identified with the "pre-hash" OIDs.

Using the "pure" OIDs to identify all public keys would allow a single
public key to generate both "pure" and "pre-hash" signatures. In many
cases, this flexibility will not be necessary given the preference not
to use a single key for multiple applications. However, in
https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/crzvboG05qw I
noted the example of a CA that uses a single key to sign both
certificates and CRLs. Some CRLs are tens of megabytes in size, and
there have been some questions about whether generating "pure"
signatures for such large CRLs could be problematic in some cases.

We would appreciate any additional feedback that anyone would like to
provide on this issue.

Thank you,

David Cooper

Orie Steele

unread,
Aug 20, 2024, 2:27:16 PM8/20/24
to David A. Cooper, pqc-forum
I prefer algorithms that satisfy the following interface, and for the same key to never be used for multiple algorithms.

keyGen(alg) -> publicKey, privateKey
sign(alg, privateKey, message) -> Signature
verify(alg, publicKey) -> message

In the context of JOSE, and COSE, the key representations can preserve the algorithm the key is restricted to use:

https://datatracker.ietf.org/doc/html/rfc7517#section-4.4
https://datatracker.ietf.org/doc/html/rfc9052#section-7.1-3.4

Regardless of if there are different OIDs, I would hope that JOSE and COSE would register different algorithms for PH variants, if there is interest in supporting them alongside pure ML-DSA and SLH-DSA.

If you ask for a key that is for ML-DSA, and then later try to use it to verify ML-DSA-PH, I think that's a misuse of what the key was generated to support.

NIST 800-57 Section 5.2 provides guidance which I interpret as supporting not using the same key for multiple algorithms.
NIST 800-57 Section 5.3 provides guidance on crypto periods, which are more complicated to consider if a key can be used with multiple algorithms... consider a key that is valid to sign documents with ML-DSA for 1 year, and ML-DSA-PH for 2 years.

OS



--
You received this message because you are subscribed to the Google Groups "pqc-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pqc-forum+...@list.nist.gov.
To view this discussion on the web visit https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/5a3c46a7-a584-4dc5-ae9a-d1c8d0059782%40nist.gov.


--


ORIE STEELE
Chief Technology Officer
www.transmute.industries


Phillip Hallam-Baker

unread,
Aug 20, 2024, 3:28:03 PM8/20/24
to David A. Cooper, pqc-forum
Thanks! I was needing this exact thing yesterday!

Now that NIST has tagged and bagged these algorithms, we should do the same with the JOSE/COSE and XML Signature and Encryption identifiers on the basis that any breaking changes in the specs will require the NIST algs to change as well.


I don't see the 'pure' and 'prehash' versions as likely to conflict.

If I am designing a protocol, I am always going to be very specific as to which I am going to be using. And to be frank, I don't plan on designing new protocols in ASN.1 (though I could probably be persuaded to be paid to convert protocols from JSON to ASN.1).

The reason I desperately need the OIDs is because much of the low-level infrastructure I use is already specified in ASN.1, and so when I am taking the fingerprint of a key, it is going to be H(algID, keybytes). What OIDs have, and no other algorithm identifiers have, is a registry infrastructure that allows anyone at all to issue code points. So while a given algorithm can have multiple ASN.1 OIDs, each OID is unambiguous.
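The H(algID, keybytes) construction can be sketched in a few lines. This is my own illustration, not Phillip's code: SHA-256 stands in for H, the helper names are hypothetical, and the DER encoder is deliberately minimal (short-form length only). The dotted value for id-ml-dsa-44 is the CSOR assignment { sigAlgs 17 } under 2.16.840.1.101.3.4.3:

```python
import hashlib

def der_oid(dotted: str) -> bytes:
    """Minimal DER encoder for an OBJECT IDENTIFIER.

    Sketch only: short-form length, so it assumes < 128 content bytes.
    """
    arcs = [int(a) for a in dotted.split(".")]
    body = bytearray([40 * arcs[0] + arcs[1]])  # first two arcs share one byte
    for arc in arcs[2:]:
        chunk = bytearray([arc & 0x7F])         # base-128; high bit marks "more"
        arc >>= 7
        while arc:
            chunk.insert(0, 0x80 | (arc & 0x7F))
            arc >>= 7
        body.extend(chunk)
    return bytes([0x06, len(body)]) + bytes(body)

def key_fingerprint(alg_id: bytes, key_bytes: bytes) -> bytes:
    # H(algID, keybytes), with SHA-256 standing in for H.
    return hashlib.sha256(alg_id + key_bytes).digest()

# id-ml-dsa-44 sits under the CSOR sigAlgs arc: { sigAlgs 17 }
assert der_oid("2.16.840.1.101.3.4.3.17").hex() == "0609608648016503040311"
```

Because each DER-encoded OID is a distinct, unambiguous byte string, two fingerprints computed this way can only collide if both the OID and the key bytes match.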


When I write a profile for applying ML-DSA, it is going to specify whether to use pure or pre-hashed, one or the other. Even if JOSE or XMLSignature gives me the option to choose, *I* am going to make that choice for my application; that is not something to be left to the choice of the end user.

The reason I prefer the pure versions over the pre-hashed is that most of my 'signature' applications are of a signature over a short(ish) chunk of metadata that includes the digest of the document signed. So that manifest is never going to be more than 500 bytes or so.
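A minimal sketch of that pattern, with hypothetical field names and the actual ML-DSA call omitted (the point is only that the "pure" to-be-signed input is the small manifest, not the document):

```python
import hashlib
import json

def make_manifest(document: bytes) -> bytes:
    """Short metadata blob that binds the document by digest (illustrative fields)."""
    manifest = {
        "content-digest-alg": "SHA-512",
        "content-digest": hashlib.sha512(document).hexdigest(),
        "media-type": "application/octet-stream",
    }
    return json.dumps(manifest, sort_keys=True).encode()

document = b"x" * 10_000_000            # stand-in for a large payload, e.g. a big CRL
to_be_signed = make_manifest(document)  # this, not the document, gets the pure signature
assert len(to_be_signed) < 500          # stays within the ~500 byte bound noted above
```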

That said, I have applications which are designed around the APIs developed for RSA that don't have that metadata binding step.


I agree with Dr. Steele that domain separation is important. But I don't see pure/prehash as an example of that. I want to keep keys used for signing executables separate from keys used to sign code, etc. etc.


Sophie Schmieg

unread,
Aug 20, 2024, 3:43:39 PM8/20/24
to Orie Steele, David A. Cooper, pqc-forum
The algorithm should always be considered part of the public key (or private key), and not be passed into the signing or verification methods. That is, when you are building your key object by parsing the relevant data (expanding matrices, computing any missing CRT parameters, and so on), you should include the algorithm that key is used with in some form; then, when calling the actual signing/verification methods, the algorithm is already implicitly given through the parsed key object's in-memory representation. Ideally you also have the algorithm sitting alongside your raw key material wherever you store it, so that the parsing step does not require any additional side inputs. In any case, we should avoid any API where the algorithm can be chosen independently of the key; that way only lie vulnerabilities.
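A minimal API sketch of this idea (hypothetical names, and a toy stand-in "scheme" in place of real crypto): the algorithm is fixed at parse time, and verification takes no algorithm parameter:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class PublicKey:
    alg: str          # e.g. "ML-DSA-65", stored alongside the raw key material
    key_bytes: bytes

    def verify(self, message: bytes, signature: bytes) -> bool:
        # Dispatch on self.alg only -- callers cannot substitute another algorithm.
        if self.alg == "FAKE-HASH-DEMO":  # toy stand-in, not a real signature scheme
            return signature == hashlib.sha256(self.key_bytes + message).digest()
        raise ValueError(f"unsupported algorithm {self.alg}")

def parse_public_key(stored: dict) -> PublicKey:
    # The stored record carries its algorithm, so parsing needs no side inputs.
    return PublicKey(alg=stored["alg"], key_bytes=bytes.fromhex(stored["pub"]))
```

With this shape, a key parsed as "ML-DSA-65" simply cannot be asked to verify a HashML-DSA-65 signature; there is no parameter through which to request it.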



--

Sophie Schmieg |
 Information Security Engineer | ISE Crypto | ssch...@google.com

Orie Steele

unread,
Aug 20, 2024, 4:38:03 PM8/20/24
to Sophie Schmieg, David A. Cooper, pqc-forum
We support storing the algorithm alongside the key material in JOSE / COSE key representations for SLH-DSA and ML-DSA:

https://datatracker.ietf.org/doc/html/draft-ietf-cose-dilithium-03#name-key-pair

{
  "kty": "ML-DSA",
  "alg": "ML-DSA-44",
  "pub": "V53SIdVF...uvw2nuCQ",
  "priv": "V53SIdVF...cDKLbsBY"
}

https://datatracker.ietf.org/doc/html/draft-ietf-cose-sphincs-plus-04#name-key-pair

{
  "kty": "SLH-DSA",
  "alg": "SLH-DSA-SHA2-128s",
  "pub": "V53SIdVF...uvw2nuCQ",
  "priv": "V53SIdVF...cDKLbsBY"
}

I'd consider it a mistake to change the algorithm from ML-DSA-44 to ML-DSA-44-PH-SHA-256 at any time in the cryptoperiod for the key... in the context of JOSE or COSE.
Sadly JOSE and COSE say that the algorithm that a key is used with is optional... and key expressions are not integrity protected.
We can't change JOSE and COSE now, but we can make sure that the same key is not encouraged to be used with multiple algorithms.
We do that, by assigning the algorithms different names, and by encouraging the algorithm used to generate the key to be stored with the keys, wherever they are stored, as Sophie suggested.

As an aside, JWK and COSE key thumbprints do not compute the key identifier over the algorithm identifier:

https://www.rfc-editor.org/rfc/rfc7638#section-3.2
https://datatracker.ietf.org/doc/html/draft-ietf-cose-key-thumbprint-05#name-required-cose-key-parameter

I don't recall how certificate fingerprints work. Are they sensitive to a change in ML-DSA-44 vs ML-DSA-44-PH-SHA-256?

If they are, that seems like a good thing... and using the same OIDs for both algorithms would be less desirable.

Keys should be generated for use with a single algorithm, for a specific time, and only used with that algorithm during that time, and not used with any algorithms after that time.

Different protocols / APIs might make it harder or easier to achieve this, but all protocols and APIs benefit from more specific algorithm identifiers, and fewer of them, because that helps with negotiation and interoperability.

OS



Phillip Hallam-Baker

unread,
Aug 20, 2024, 5:03:47 PM8/20/24
to David A. Cooper, pqc-forum
Spotted while updating my code...

In the registry, we have

id-ml-dsa-44 OBJECT IDENTIFIER ::= { sigAlgs 17 }
id-ml-dsa-65 OBJECT IDENTIFIER ::= { sigAlgs 18}
id-ml-dsa-87 OBJECT IDENTIFIER ::= { sigAlgs 19}

and also

id-alg-ml-kem-512 OBJECT IDENTIFIER ::= { kems 1 }
id-alg-ml-kem-768 OBJECT IDENTIFIER ::= { kems 2}
id-alg-ml-kem-1024 OBJECT IDENTIFIER ::= { kems 3}

One is id-ml... and the other id-alg-ml...

I know the names don't matter very much in ASN.1, but they do end up being emitted in text representations from time to time.



Phillip Hallam-Baker

unread,
Aug 20, 2024, 5:27:23 PM8/20/24
to Orie Steele, Sophie Schmieg, David A. Cooper, pqc-forum
On Tue, Aug 20, 2024 at 4:37 PM Orie Steele <or...@transmute.industries> wrote:
We support storing the algorithm alongside the key material in JOSE / COSE key representations for SLH-DSA and ML-DSA:

https://datatracker.ietf.org/doc/html/draft-ietf-cose-dilithium-03#name-key-pair

{
  "kty": "ML-DSA",
  "alg": "ML-DSA-44",
  "pub": "V53SIdVF...uvw2nuCQ",
  "priv": "V53SIdVF...cDKLbsBY"
}

https://datatracker.ietf.org/doc/html/draft-ietf-cose-sphincs-plus-04#name-key-pair

{
  "kty": "SLH-DSA",
  "alg": "SLH-DSA-SHA2-128s",
  "pub": "V53SIdVF...uvw2nuCQ",
  "priv": "V53SIdVF...cDKLbsBY"
}

I'd consider it a mistake to change the algorithm from ML-DSA-44 to ML-DSA-44-PH-SHA-256 at any time in the cryptoperiod for the key... in the context of JOSE or COSE.
Sadly JOSE and COSE say that the algorithm that a key is used with is optional... and key expressions are not integrity protected.
We can't change JOSE and COSE now, but we can make sure that the same key is not encouraged to be used with multiple algorithms.

This goes to the composability of specifications and I disagree as to the import.

Neither JOSE nor COSE are useful on their own. They are modules that are used as part of a larger application and quite often, one has to fight to keep people from insisting some issue be satisfied in one particular part of the system.

Case in point here is when XML Signature had a long discussion on what algorithms to make mandatory to implement 20 years ago, a debate which has precisely no relevance to the present day. An organization that is trying to write standards without some sort of regular review should probably never bake required algorithms into them, at the very least, limit them to the upper level applications, not the plumbing.

Key authentication is properly a consideration of Public(/Private) Key Infrastructure and something JOSE/COSE should not be addressing.


Whether a key is used with 'pure' or 'pre-hash' is a protocol design issue. It would be simpler to require everything be pre-hashed. But that would mean we would have a lot of protocols where we digest the content, then digest the content again with some metadata and then send the result to the signature algorithm where it gets put through a slightly different type of digest.

I am aware some folk have other reasons for wanting pure. But the reason I want it is to avoid what is otherwise an unnecessary step in a modern signature scheme like JOSE or XMLSig.

Semantic separation of the signed data to prevent semantic substitution attacks is a real concern, but I don't think that concern touches on using pure vs pre-signed.

Scott Fluhrer (sfluhrer)

unread,
Aug 21, 2024, 11:39:04 AM8/21/24
to Orie Steele, David A. Cooper, pqc-forum

Is there a specific security concern with using the same private key to sign both pure and prehashed messages?  Does the same concern also apply to using the same private key with different contexts (where ‘context’ is the extra input added to the FIPS 204, 205 sign and verify APIs)?

 

Or, is it an artifact of how JOSE and COSE decided to implement things?

 

From: pqc-...@list.nist.gov <pqc-...@list.nist.gov> On Behalf Of Orie Steele
Sent: Tuesday, August 20, 2024 2:27 PM
To: David A. Cooper <david....@nist.gov>
Cc: pqc-forum <pqc-...@list.nist.gov>
Subject: Re: [pqc-forum] OIDs for ML-KEM and "pure" signatures

 

I prefer algorithms that satisfy the following interface, and for the same key to never be used for multiple algorithms.

keyGen(alg) -> publicKey, privateKey
sign(alg, privateKey, message) -> Signature
verify(alg, publicKey) -> message

Verify( alg, publicKey, message, alleged_signature ) -> boolean

Russ Housley

unread,
Aug 21, 2024, 11:55:00 AM8/21/24
to David Cooper, pqc-forum
David:

I would hope that a CA would not need pre-hash to sign CRLs. One can use CRL Distribution Points to make sure that a CRL cannot grow to the point where pre-hash is needed. Since everyone will need to make new trust anchors for the new PQC algorithms, this mechanism can be deployed throughout the new PKI.

That said, given the domain separation that has been specified, I cannot see how using the same private key with both pure and pre-hash can lead to compromise.

Russ

Phillip Hallam-Baker

unread,
Aug 21, 2024, 12:24:16 PM8/21/24
to Russ Housley, David Cooper, pqc-forum
On Wed, Aug 21, 2024 at 11:54 AM Russ Housley <hou...@vigilsec.com> wrote:
David:

I would hope that a CA would not need pre-hash to sign CRLs.  One can CRL Distribution Points to make sure that a CRL cannot grow to the point that pre-hash is needed.  Since everyone will need to make new trust anchors for the new PQC algorithms, this mechanism can be deployed throughout the new PKI.

That assumes a process change by the CA which I am not sure is valid. CRL Distribution Points are a mechanism that allows the size of CRLs processed by end clients to be controlled. But that is not the only use case for CRLs. Oftentimes we have an XKMS- or OCSP-like service gobbling up a series of CRLs to make some sort of analysis. Even if we are doing delta CRLs, they can get very large. In that case, requiring processing of distribution points is going to be a rather ugly workaround to make the signature scheme work.

If we were going down that route, we should revisit my ranged CRL proposal from 25 years ago which allowed the CRL partition to be adjusted dynamically. Or we should look at compressed CRLs.

PKIX is a legacy PKI, and the way I look at legacy infrastructure is that it eventually wears out; each time we introduce a change that involves special cases, that puts another few thousand miles on the clock. Requiring everything to go down the Distribution Points route would be additional wear for not much benefit. I would much rather require every CRL be signed using pre-hash, because that is the approach that was anticipated 30 years ago when the specs were baked.

 
That said, given the domain separation that has been specified, I cannot see how using the same private key with both pure and pre-hash can lead to compromise.

This is the real point for me, semantic substitution attacks have been a big concern for me since we started XML Signature, I haven't seen someone craft a message that could be substituted but there was clearly a possibility. 

Pure vs pre-hash is not semantic substitution; it is a functional substitution that is fully accounted for in the spec.


 
Russ

Orie Steele

unread,
Aug 21, 2024, 12:45:21 PM8/21/24
to Russ Housley, David Cooper, pqc-forum
Scott, 

I don't think there are security issues with using the same private key to sign both ML-DSA and HashML-DSA, given the domain separation that Russ noted.
It's just against the NIST recommendation.
In JOSE / COSE the fingerprints would be the same... so at least you would recognize a compromised key as being the same regardless of which algorithm it was used with.

Quoting directly:

"""
The identifier for a signature (e.g., the object identifier [OID]) should indicate whether the signature is a ML-DSA signature or a pre-hash HashML-DSA signature. In the case of pre-hash signatures, the identifier should also indicate the hash function or XOF used to compute the pre-hash.
"""

In JOSE / COSE context, complying with this advice would require registering (these are arbitrary examples, we don't have a document requesting such registrations, afaik)... HashML-DSA-44-SHA-256, HashML-DSA-65-SHA-384, HashML-DSA-87-SHA-512 

Russ,

I agree with what you wrote, the only comment I would add is:
In certs, the fingerprints would be different, so maybe there is a small risk that a private key that is compromised is only revoked in HashML-DSA certs and not in ML-DSA certs?

If you put the same key in both ML-DSA and HashML-DSA certs, you would be ignoring the recommendation:

"""
While a single key pair may be used for both ML-DSA and HashML-DSA signatures, it is recommended that each key pair only
be used for one version or the other.
"""

This would then make handling key compromise potentially more complicated depending on how you tracked where the compromised key was used.

TL;DR: The same key SHOULD NOT be used for different algorithms, regardless of how keys or signatures are represented in protocols.

OS










Russ Housley

unread,
Aug 21, 2024, 12:53:37 PM8/21/24
to Orie Steele, David Cooper, pqc-forum
Orie:

In David Cooper's message he pointed out that a CA might want to sign certificates with ML-DSA and sign CRLs with Hash-ML-DSA using the same private key.  I was making two points:

(1) If the CA limits the size of the CRL, then ML-DSA can be used to sign both certificates and CRLs. Just avoid the massive CRLs that we see today.

(2) If you ignore (1), then the domain separation that has been specified provides protection if a CA were to sign certificates with ML-DSA and sign CRLs with Hash-ML-DSA using the same private key.

Russ

Tim Hollebeek

unread,
Aug 21, 2024, 1:48:08 PM8/21/24
to Russ Housley, David Cooper, pqc-forum
I agree with Russ that certificates, certificate chains, and related
technologies should be limited to the "pure" implementations of the
algorithms. In my mind, the "pre-hash" versions are for larger objects.

I don't see any reason at all why there should be a correspondence between the
signature type on the certificate and the signature type on the object being
signed. Completely different use cases and no reason for consistency.
In fact, making them consistent is harmful, because it introduces the pre-hash
complexity into certificate hierarchies with no benefit.

-Tim


> -----Original Message-----
> From: pqc-...@list.nist.gov <pqc-...@list.nist.gov> On Behalf Of Russ
> Housley
> Sent: Wednesday, August 21, 2024 11:55 AM
> To: David Cooper <david....@nist.gov>
> Cc: pqc-forum <pqc-...@list.nist.gov>
> Subject: Re: [pqc-forum] OIDs for ML-KEM and "pure" signatures
>
> David:
>
> I would hope that a CA would not need pre-hash to sign CRLs. One can CRL
> Distribution Points to make sure that a CRL cannot grow to the point that
> pre-
> hash is needed. Since everyone will need to make new trust anchors for the
> new PQC algorithms, this mechanism can be deployed throughout the new
> PKI.
>
> That said, given the domain separation that has been specified, I cannot see
> how using the same private key with both pure and pre-hash can lead to
> compromise.
>
> Russ
>
>
> > On Aug 20, 2024, at 1:26 PM, 'David A. Cooper' via pqc-forum <pqc-

Orie Steele

unread,
Aug 21, 2024, 2:19:43 PM8/21/24
to Tim Hollebeek, Russ Housley, David Cooper, pqc-forum
Thanks Russ and Tim, just to state it one more time in my own terms:

Certs should not include Hash-ML-DSA OIDs, but they may use Hash-ML-DSA OIDs in signatures.

There would be no "way to restrict a cert to just Hash-ML-DSA" like there would be for JOSE and COSE.

Back to Sophie's original point, if you want to use a cert with only Hash-ML-DSA or only ML-DSA, you need to store that information somewhere else (not in the cert).

In JOSE / COSE you could generate a key and restrict it to just Hash-ML-DSA or just ML-DSA, and then use it for just those algorithms, and it would always have the same fingerprint.
In the case that you did not restrict the key, you could use it with both Hash-ML-DSA and ML-DSA.

With certs, you can generate a key and make a cert for just ML-DSA, but choose to use it with Hash-ML-DSA at some future point in time, and the cert will always have the same fingerprint.

Is this interpretation correct?

OS



 

Russ Housley

unread,
Aug 21, 2024, 2:28:33 PM8/21/24
to Orie Steele, Tim Hollebeek, David Cooper, pqc-forum
Orie:


There would be no "way to restrict a cert to just Hash-ML-DSA" like there would be for JOSE and COSE.

If NIST assigns a Hash-ML-DSA OID, then there will be a way to restrict the public key to only being used with pre-hash.

Looking at RFC 4055 as an example:

rsaEncryption -- can be used with any RSA operation.

id-RSASSA-PSS -- can only be used with RSA PSS.

id-RSAES-OAEP -- can only be used with RSA OAEP,

Something similar can be done here with more OID assignments.

Russ

Sophie Schmieg

unread,
Aug 21, 2024, 3:19:52 PM8/21/24
to Tim Hollebeek, Russ Housley, David Cooper, pqc-forum
Orie: I like storing the algorithm with the key representation. While the RFC does not provide integrity protection, the fact that both public and private keys need it in any case gives me some (maybe naive) hope that whatever integrity mechanism is used to protect the keys is extended to the entire key representation, given that it's usually easier to just store the entire thing in secret storage as opposed to putting it together on the fly.

Scott: While I'm not aware of any security issue, I would argue that using the same key prehashed and not prehashed is ill-defined, at least as long as we are talking about the faux prehashing using a domain separator and OID, and not true prehashing by running line 6 of Algorithm 7 of ML-DSA in a different module.
A signature algorithm is a triple (G, S, V) [1], with 
G : 0 -R-> K x P
S : M x K -R-> T
V : M x T x P -> {0, 1}
where K is the private key space, P is the public key space, M is the message space, T is the signature space, and arrows with R in them are randomized functions taking an invisible entropy input as well.
The only way I can make sense of prehashed and non-prehashed versions of the signature algorithms is by modelling them as two different signature algorithm triples which happen to share the key generation function i.e. (G, S1, V1) for not prehashed and (G, S2, V2) for prehashed. But that implies that a key is either used only in scheme 1 or only in scheme 2, and cannot cross boundaries. Otherwise, we would define a new primitive of the optionally prehashed signature as a quintuple (G, S1, S2, V1, V2), but we could no longer just call this a signature scheme, it clearly has a different security contract. From that point alone, I would never use the same key prehashed and not prehashed, but I also don't like using the same X25519 key for both key agreement and signatures.

[1] See for example https://toc.cryptobook.us/book.pdf, Definition 13.1

Orie Steele

unread,
Aug 21, 2024, 3:22:07 PM8/21/24
to Russ Housley, Tim Hollebeek, David Cooper, pqc-forum
Russ:

Ok, now I'm fully caught up.

David kicked the thread off with:


"""
We plan to publish the OIDs for "pre-hash" ML-DSA and SLH-DSA in the
near future. However, there is still an open question of whether all
signature public keys should be identified using the "pure" OIDs or
public keys intended to create "pre-hash" signatures should be
identified with the "pre-hash" OIDs.

Using the "pure" OIDs to identify all public keys would allow a single
public key to generate both "pure" and "pre-hash" signatures.
"""

I basically said it's a "bad idea" to use "pure" OID public keys to verify "pre-hash" signatures.

If it's considered a "good idea" and I am wrong, we might consider acknowledging that this happens in the JOSE / COSE specs.

Something like:

Even when an ML-DSA-65 public key is restricted to only "ML-DSA-65" ; it MAY be used to verify ML-DSA-65 or HashML-DSA-65 signatures.
This is against the guidance in Section 5.4 of FIPS 204, but is done to align with << some reference to lamps >>.

To be clear, my position remains "don't use a public key for multiple algorithms".

And to Tim's point, if you don't want to allow "pre-hash" in certs... don't publish "pre-hash" OIDs?

IMO, if you verify a HashML-DSA-65 OID signature with an ML-DSA-65 OID cert, you are not following the guidance in FIPS 204.

If lamps drafts plan to allow or encourage this, I would like to understand why.

OS




Russ Housley

unread,
Aug 21, 2024, 3:50:42 PM8/21/24
to Orie Steele, Tim Hollebeek, David Cooper, pqc-forum
Orie:


To be clear, my position remains "don't use a public key for multiple algorithms".

That is indeed the safest position.  David raised the corner case.

Russ

Mike Ounsworth

unread,
Aug 22, 2024, 12:03:30 PM8/22/24
to Scott Fluhrer (sfluhrer), Orie Steele, David A. Cooper, pqc-forum

Scott,

 

My personal opinion is that there’s a fair amount of overkill being blown around here.

 

The definition of Existential Unforgeability (EUF) is (according to Wikipedia): “EUF is the creation (by an adversary) of at least one message/signature pair, (m, \sigma), where m has never been signed by the legitimate signer”. So using the same key for pure and pre-hash violates EUF because if you do a pure signature of a message that happens to be the right length to be a SHA-x hash value, then you could present that (m, \sigma) as a valid pre-hashed message even though the signer never intended to sign the pre-image of that hash value. Or vice-versa where you take the hash value of a pre-hashed signature and present it as a plaintext pure-signed message.

 

I have my doubts about whether this attack is worth worrying about in practice, since RSA and ECDSA have been working this way for decades and nobody is screaming about it. It seems to me that you either need to get incredibly lucky, with the hash value just happening to have valid syntax and semantics for your protocol, or you need to be able to invert the hash function and have the pre-image turn out to be something useful. But I admit that binding the key to a mode does restore EUF security.

 

---

Mike Ounsworth


Scott Fluhrer (sfluhrer)

unread,
Aug 22, 2024, 12:08:55 PM8/22/24
to Mike Ounsworth, Orie Steele, David A. Cooper, pqc-forum

Actually, with the 'pure' and 'prehash' interfaces to ML-DSA (and SLH-DSA, and presumably the future FN-DSA), there is no EUF issue there. It exists only with classical signatures (and so if we have a composite classical/ML-DSA signature, it's not there either, because of the ML-DSA part).

Mike Ounsworth

unread,
Aug 22, 2024, 12:48:38 PM8/22/24
to Scott Fluhrer (sfluhrer), Orie Steele, David A. Cooper, pqc-forum

Ah. Right. FIPS 204 Algorithm 5: HashML-DSA binds a domain separator of the pre-hash alg used. Right, yes, I guess that addresses EUF. I have not yet fully digested the changes. Sorry.
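The formatting layer can be sketched to make that separation concrete. This models only the bytes handed to the internal sign function under FIPS 204's pure and pre-hash modes, with SHA-256 as the pre-hash; it is not ML-DSA itself, and the function names are mine:

```python
import hashlib

# DER encoding of the SHA-256 OID, 2.16.840.1.101.3.4.2.1
SHA256_OID_DER = bytes.fromhex("0609608648016503040201")

def mprime_pure(msg: bytes, ctx: bytes = b"") -> bytes:
    # Pure ML-DSA: 0x00 || len(ctx) || ctx || message
    assert len(ctx) <= 255
    return b"\x00" + bytes([len(ctx)]) + ctx + msg

def mprime_prehash_sha256(msg: bytes, ctx: bytes = b"") -> bytes:
    # HashML-DSA: 0x01 || len(ctx) || ctx || OID(PH) || PH(message)
    assert len(ctx) <= 255
    return b"\x01" + bytes([len(ctx)]) + ctx + SHA256_OID_DER + hashlib.sha256(msg).digest()

# A pure signature over a digest-shaped message cannot be replayed as a
# pre-hash signature: the mode byte (and the bound hash OID) differ.
d = hashlib.sha256(b"anything").digest()
assert mprime_pure(d) != mprime_prehash_sha256(b"anything")
```

The leading mode byte is exactly the domain separator in question: without it, the two encodings above would agree whenever the pure message happened to equal the digest.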

 

---

Mike Ounsworth

 

From: 'Scott Fluhrer (sfluhrer)' via pqc-forum <pqc-...@list.nist.gov>
Sent: Thursday, August 22, 2024 11:09 AM
To: Mike Ounsworth <Mike.Ou...@entrust.com>; Orie Steele <or...@transmute.industries>; David A. Cooper <david....@nist.gov>
Cc: pqc-forum <pqc-...@list.nist.gov>
Subject: [EXTERNAL] RE: [pqc-forum] OIDs for ML-KEM and "pure" signatures

 

Actually, with the ‘pure’ and ‘prehash’ interfaces to ML-DSA (and SLH-DSA and presumably with the future FN-DSA), there is no EUF issue there. It exists only with classical signatures (and so if we have a composite classical/ML-DSA signature,


Phillip Hallam-Baker

unread,
Aug 26, 2024, 4:26:05 PM8/26/24
to Russ Housley, David Cooper, pqc-forum
I am just looking through my code, which is built on top of Microsoft .NET Core but I strongly suspect is very typical. The hash-then-sign/verify assumption is very much welded into the low-level specs. That is the way we have done it for over 30 years now. I have a get-out-of-jail-free card because I had to implement Ed448 and ML-DSA myself and only call them from JOSE. But I would have to wait on the platform APIs to change before I can call an HSM through the platform drivers...

Not saying we can't change but change is probably going to come at a cost of delays in deployment which I am OK with and the risk of ugly implementation hacks which I am more concerned about.


Not saying this means we shouldn't go for pure. It is the better approach. But it is gonna be some code changes that are more than trivial.



Phillip Hallam-Baker

unread,
Aug 26, 2024, 5:52:22 PM8/26/24
to Russ Housley, David Cooper, pqc-forum
Another little road bump to watch out for: the way we introduced contexts into Ed25519 and Ed448 was 'non-optimal', in that for historic reasons we ended up with different ways to add the context.

And when we specified the prehash variants, we fixed the choice of hash function. From RFC 8032:

For Ed25519ph, phflag=1 and PH is SHA512 instead.  That is, the input
   is hashed using SHA-512 before signing with Ed25519.

Ed448ph is the same but with PH being SHAKE256(x, 64) and phflag
   being 1, i.e., the input is hashed before signing with Ed448 with a
   hash constant modified.

ML-DSA allows any digest that satisfies the security level to be used:

In order to maintain the same level of security strength when the content is hashed at the application level
or using HashML-DSA, the digest that is signed needs to be generated using an approved hash function
or XOF (e.g., from FIPS 180 [8] or FIPS 202 [7]) that..

I believe that the ML-DSA scheme is better. Being told what pre-hash to use is like not being able to choose the hash at all, and that is a right royal PITA for specs like mine, where we are hashing content for multiple purposes.

Binding the signature to the OID of the digest, like ML-DSA does, is how we have always done it in RSA, and given that we have yet to achieve widespread deployment of Ed25519 and Ed448, I think we should look at making that spec conformant with the ML-DSA approach.

