PKCS#7 / CMS signed data content and PQC algorithms


Stephan Mueller

Nov 21, 2024, 3:51:26 AM
to pqc-forum
Hi,

during the implementation of a PKCS#7/CMS message generator for PQC algorithms
(see [1]), I would like to ask the collective brain about the following:

RFC 5652 section 5.4 ([2]) outlines that the data to be protected shall be
hashed and then later signed. Together with the mandatory field
"digestAlgorithms" in the SignerInfo ([3]), which defines the digest
algorithm used, I infer that:

1. the message digest must be applied to the data to be signed

2. as we have a message digest, the PQC algorithms seemingly must always be
used in their pre-hashed variant.

The reason for asking to confirm this suspicion is the following: it is
certainly technically possible to use the non-pre-hashed variants of the PQC
algorithms when performing the signature operation. But in this case:

* the message digest algorithm specification is not used at all when there
are no authenticated attributes

* when there are authenticated attributes, which mandate hashing the
original data and then signing the authenticated attributes, the message
digest algorithm is only applied to the original message, but not to the
signature operation over the authenticated attributes themselves.
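To make the two hashing points concrete, here is a hypothetical outline of the signed-attributes flow, using only `hashlib`; the DER encoding of the attributes and the signature primitive are placeholders, not real ASN.1 handling:

```python
import hashlib

def cms_sign_flow(content: bytes, sign, hash_name: str = "sha256") -> bytes:
    """Sketch of RFC 5652 section 5.4 with signed attributes present:
    hash the eContent, embed that hash in the signed attributes, then
    sign the (DER-encoded) signed attributes. `sign` is an opaque
    signature primitive (e.g. pure-mode ML-DSA)."""
    # Step 1: SignerInfo.digestAlgorithm is applied to the eContent.
    content_digest = hashlib.new(hash_name, content).digest()
    # Step 2: the messageDigest attribute carries that digest; the
    # signature is computed over the signed attributes, NOT over the
    # content itself. (Placeholder encoding, not real DER.)
    signed_attrs = b"messageDigest=" + content_digest
    return sign(signed_attrs)
```

In the no-signed-attributes case, the signature primitive would instead be applied to the content directly, which is what prompts the question above.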


[1] https://github.com/smuellerDD/leancrypto/tree/master/asn1/src#pkcs7-message-generator

[2] https://www.rfc-editor.org/rfc/rfc5652#section-5.4

[3] https://www.rfc-editor.org/rfc/rfc5652#section-12.1

Thanks a lot
Stephan


Kampanakis, Panos

Nov 21, 2024, 12:05:47 PM
to Stephan Mueller, pqc-...@list.nist.gov
Hi Stephan,

Please check Section 4 of draft-ietf-lamps-cms-sphincs-plus (https://datatracker.ietf.org/doc/html/draft-ietf-lamps-cms-sphincs-plus#name-signed-data-conventions): CMS always uses pure mode. When signed attributes are present, the data is digested "using the same hash function that is used in the SLH-DSA tree". https://datatracker.ietf.org/doc/html/draft-salter-lamps-cms-ml-dsa-00#name-pure-mode-vs-pre-hash-mode says something similar for ML-DSA in CMS.

In other words, CMS already supports optional predigesting, and thus we don't need HashSLH-DSA. That is more straightforward and aligns with what CMS implementations do today.



Stephan Mueller

Nov 21, 2024, 12:51:40 PM
to pqc-...@list.nist.gov, Kampanakis, Panos
On Thursday, 21 November 2024 at 18:05:30 CET, 'Kampanakis, Panos' via pqc-forum
wrote:

Hi Panos,

> Hi Stephan,
>
> Please check Section 4 of draft-ietf-lamps-cms-sphincs-plus
> https://datatracker.ietf.org/doc/html/draft-ietf-lamps-cms-sphincs-plus#nam
> e-signed-data-conventions CMS always uses pure mode. When signed attributes
> are present the data is digested "using the same hash function that is used
> in the SLH-DSA tree".
> https://datatracker.ietf.org/doc/html/draft-salter-lamps-cms-ml-dsa-00#name
> -pure-mode-vs-pre-hash-mode says something similar for ML-DSA in CMS.
>
> In other words CMS already supports optional predigesting and thus we don't
> need the HashSLH-DSA. That is more straightforward and aligns with what CMS
> implementations do today.

Thank you. Just to clarify: in case of not having signed attributes, the
digestAlgorithm set with the signer is to be ignored?

Thanks

Ciao
Stephan


Mike Ounsworth

Nov 21, 2024, 2:37:26 PM
to Stephan Mueller, pqc-forum

Hi Stephan,

 

I have been part of some discussion of this. My understanding is that the hashing at the CMS level is completely disjoint from any hashing done as part of the algorithm. For example, today you would perform a message digest according to the algorithm set in the SignerInfo, and then you would separately perform the pre-hashing step of sha256withRSA. These are two separate hash steps and are not combined. Two historical reasons for this:

 

1. sha256withRSA is often implemented (and FIPS / CC certified) as an atomic operation. Pulling the hash step entirely into the CMS library would be a weird abstraction layer violation of the crypto module. Said a different way: most crypto libraries and hardware will expose an interface for sha256withRSA, and not an interface for RSA by itself.

2. The bit with signed attributes, as you say, means that this is really part of the protocol and not part of the signature primitive.
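The two disjoint hash steps described above can be sketched as follows, using Python's `cryptography` package with RSA as in the example; the DER encoding of the signed attributes is elided and replaced by a placeholder:

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
content = b"the eContent to be protected"

# Hash 1: the CMS-level message digest (SignerInfo.digestAlgorithm),
# carried in the messageDigest signed attribute.
cms_digest = hashlib.sha256(content).digest()
signed_attrs = b"<DER of signed attrs incl. " + cms_digest + b">"

# Hash 2: performed *inside* the atomic sha256withRSA operation over
# the signed attributes; the CMS library never sees this hash.
signature = key.sign(signed_attrs, padding.PKCS1v15(), hashes.SHA256())
```

Note that the crypto module exposes only the combined sign() interface, which is the abstraction-layer point made in item 1.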

 

 

You should also be aware that for ML-DSA in particular, we are leaning towards completely forbidding HashML-DSA within X.509 and instead standardizing what we are calling “ExternalMu-ML-DSA” which is allowed by the inline comment in FIPS 204 Alg 7 line 6. See this work-in-progress draft for the full idea. Feedback is most welcome.

 

https://lamps-wg.github.io/dilithium-certificates/draft-ietf-lamps-dilithium-certificates.html#name-pre-hashed-mode-externalmu-

 

---

Mike Ounsworth

 


Mike Ounsworth

Nov 21, 2024, 4:19:55 PM
to Stephan Mueller, pqc-...@list.nist.gov, Kampanakis, Panos
> Just to clarify: in case of not having signed attributes, the
> digestAlgorithm set with the signer is to be ignored?

 

I believe not. I believe that the hashing done as part of the CMS SignerInfo and any hashing done as part of the signature primitive are always separate and unrelated. This is well-established behaviour of CMS, which underpins all sorts of things (S/MIME, signed PDF, Windows code signing, etc.). I'm sure there are samples around somewhere (but I don't have them on hand, so maybe not a helpful comment).

 

---

Mike Ounsworth

 


Simo Sorce

Nov 21, 2024, 4:25:59 PM
to Mike Ounsworth, Stephan Mueller, pqc-forum
On Thu, 2024-11-21 at 19:37 +0000, 'Mike Ounsworth' via pqc-forum wrote:
> You should also be aware that for ML-DSA in particular, we are leaning towards completely forbidding HashML-DSA within X.509 and instead standardizing what we are calling “ExternalMu-ML-DSA” which is allowed by the inline comment in FIPS 204 Alg 7 line 6. See this work-in-progress draft for the full idea. Feedback is most welcome.

Hi Mike,
can you expand on why you are leaning this way?

To compute mu, the first element you have to mix in is the hash of the
public key corresponding to the private key that will ultimately produce
the signature.

This means that if you ever need to apply multiple signatures, you end
up having to hash the whole message multiple times. It also means that,
in general, you cannot use a pre-computed hash of the content and just
apply a signature.

This doesn't look very friendly to many scenarios where you want to
decouple hashing from signing.

Additionally, many HW modules may not give you the ability to provide
mu at this time, which means that the operation would be very expensive
and/or impossible for single-shot signers, as the whole message would
have to be piped through the HW module.

So it is puzzling to me that you are trying to standardize on passing
in mu and/or the whole message instead of a more balanced approach
where you use Pure ML-DSA to sign a pre-computed hash.

Is there a summary document with the rationale for this proposal?

--
Simo Sorce
Distinguished Engineer
RHEL Crypto Team
Red Hat, Inc

Tim Hollebeek

Nov 21, 2024, 4:33:04 PM
to Simo Sorce, Mike Ounsworth, Stephan Mueller, pqc-forum
There was a very long discussion of this at IETF 121's LAMPS meeting, which I
highly recommend for anyone interested in this complicated issue. It was also
discussed a bit at PQUIP.

Here is the LAMPS recording: https://www.youtube.com/watch?v=pQzooWyxR9Y

-Tim


Mike Ounsworth

Nov 21, 2024, 5:53:35 PM
to Tim Hollebeek, Simo Sorce, Stephan Mueller, pqc-forum
[changing the title to attract the right eyes]

For those who don't have the reply history, we are discussing this proposed
IETF text to ban HashML-DSA within X.509 and allow instead
"ExternalMu-ML-DSA" (note this is a github link since this text is not yet
up on IETF Datatracker, and it may go stale):
https://lamps-wg.github.io/dilithium-certificates/draft-ietf-lamps-dilithium-certificates.html#name-pre-hashed-mode-externalmu-


Hi Simo,

Yes, the recordings of both the recent IETF 121 LAMPS and PQUIP sessions
contain a lot of useful discussion on this point.

I am happy to summarize here because I think this is a very important point
that the IETF and HSM communities need to get agreement on. This email got
longer than I intended, but I'll send it in full. The IETF meeting two weeks
ago had strong opinions on this topic, and I have to admit that I'm a
convert, so I apologize that my tone is rather enthusiastic. I do respect
that there are completely valid opposing opinions here, and I hope that we
can come to a friendly middle-ground.


I'll start with something a little spicy (sorry NIST). We are in this mess
because FIPS 204 actually defines three different signature APIs: Algorithm
2, Algorithm 4, and the "unnamed" algorithm that you get when you take the
thing that's implied by the inline comment on line 6 of Algorithm 7 and you
unroll that into a full set of algorithms -- which is what we have done in
the IETF draft linked above and we named it "ExternalMu-ML-DSA". It has two
steps:
ExternalMu-ML-DSA.Prehash(pk, M, ctx) --> mu
ExternalMu-ML-DSA.Sign(sk, mu) --> sig
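For reference, the Prehash step can be sketched in a few lines per my reading of FIPS 204 (tr = SHAKE256(pk, 64 bytes) from key generation; mu = SHAKE256(tr || M', 64 bytes) with the pure-mode domain separator in M'); this is a sketch of the construction, not a tested implementation:

```python
import hashlib

def externalmu_prehash(pk: bytes, msg: bytes, ctx: bytes = b"") -> bytes:
    """ExternalMu-ML-DSA.Prehash(pk, M, ctx) -> mu, per FIPS 204
    Algorithms 2 and 7 (pure mode)."""
    if len(ctx) > 255:
        raise ValueError("ctx must be at most 255 bytes")
    # tr is the 64-byte hash of the encoded public key.
    tr = hashlib.shake_256(pk).digest(64)
    # M' = 0x00 || len(ctx) || ctx || M  (pure-mode domain separation).
    m_prime = bytes([0, len(ctx)]) + ctx + msg
    # mu is the 64-byte message representative handed to .Sign(sk, mu).
    return hashlib.shake_256(tr + m_prime).digest(64)
```

Because the verifier recomputes mu the same way, signatures produced via this path are indistinguishable from one-shot ML-DSA signatures, which is the point being made below.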

The mess is (in part) because FIPS 204 Section 5.4 (HashML-DSA) was added at
the last minute of the initial public draft period and never got public
review. Don't get me wrong, I'm grateful that NIST added pre-hashing, but I
think we need to slow down and consider that we are now having the public
review period for section 5.4. And I think we need to do this NOW before HSM
vendors sink all sorts of time and money into ML-DSA implementations that
don't offer the functionality that the PKI community wants. (ditto on
expanded private keys vs seed private keys, but that's an entirely different
topic for a different thread.)



Here's the technical content:

The BIG pro for ExternalMu-ML-DSA is that from the outside, it is completely
indistinguishable from ML-DSA (ie they are mathematically equivalent
functions), so it can be 100% buried as implementation detail *inside*, for
example, a P11 library. I can have a public key in a certificate and
sometimes signatures are pre-hashed client-side, and sometimes the whole
message M goes to the HSM, and this is all just internal detail of the HSM
driver. They produce indistinguishable signatures and therefore there is
only one .Verify() function. All the protocol- and application-level mess of
having to manage two "modes" goes away.

The somewhat small con for ExternalMu-ML-DSA is that you need to have the
public key hash pk.tr available, which may not be practical in all
situations. I say "somewhat small" because it seems to me that this can be
easily handled by the HSM / P11 library exposing a flag so that applications
can choose to use ExternalMu when they have the public key handy, and
"one-step" ML-DSA when they don't. Now application developers can manage
themselves the tradeoff between needing to have the public key available on
the client, or needing to stream the entire content to the HSM.

There is also a somewhat small security argument against ExternalMu-ML-DSA
in that it now becomes possible to mis-match pk and sk (ie create a
signature that will not verify against the intended public key, and will
potentially verify against a different public key). I say "somewhat small"
because this is an implementation-dependent security difference between
ML-DSA and ExternalMu-ML-DSA, and because HashML-DSA already has zero
security in this regard since it has completely dropped pk from the
pre-hash, so even a horrible implementation of ExternalMu-ML-DSA has more
security in this regard than the best implementation of HashML-DSA.


The BIG con for HashML-DSA is that NIST decided that we now use the same
OIDs for Public Keys and Signature Algorithms, which in general is a good
design pattern, but in this case it means that you have to decide when
making your X.509 certificate whether your ML-DSA key will produce
"pure" or "pre-hashed" signatures, and then it can only do that thing for
the entire life of the key (well, technically of the certificate). This is
somewhat unfortunate because this is completely an artificial restriction
due to how OIDs and certificates work, but unfortunately we have to live
with it, and that's going to be a PAIN for applications that won't know at
key-gen time. Also the security thing about undoing the tr collision
resistance will give many people heartburn and lead to many blogs and bad
press about how HashML-DSA is weaker than ML-DSA or ExternalMu-ML-DSA (I may
write some of these blogs myself).

So as a community, we need to decide whether we care more about signer
convenience (having to truck around pk.tr), or application and verifier
convenience (needing to commit to one signing mode at keygen time, and
needing to have two different .Verify() functions and carry markers in every
protocol telling you which verifier to use, and since either verifier will
produce a valid result, this sounds like it's designed to generate CVEs).



I'll respond directly to this point:

> This means that if you ever need to apply multiple signatures you end up
> having to hash the whole message multiple times. It also means that in general
> you cannot use a pre-computed hashes of the content and just apply a
> signature

Is this common? I am aware of protocols like CMS or OpenPGP or JWT where you
can do multiple detached signatures in an efficient way on a single message,
but this is all handled at the protocol layer.
Implementing this behaviour by taking "sha256withRSAEncryption" and
separating it into "sha256", which you do once, and "withRSAEncryption",
which you do multiple times, feels like an abuse of the signature primitive
and has weird implications for crypto library abstraction layers (do
libraries even expose an interface for "withRSAEncryption", and if so,
should they? Is this not straying into "don't roll your own crypto"
territory by exposing raw RSA to end users?). It also prevents any
innovation in terms of security properties, such as the ones you get from
ML-DSA binding tr, or SLH-DSA binding both PK and R. This feels like the
kind of behaviour that we should be discouraging, not using as an argument
against progress.


---
Mike Ounsworth
Apostle of the church of ExternalMu-ML-DSA

Simo Sorce

Nov 21, 2024, 6:18:52 PM
to Mike Ounsworth, Tim Hollebeek, Stephan Mueller, pqc-forum
Pardon me,
but is all the trade-off based on saving one small round of hashing?

I am not sure I understand why HashML-DSA would be a step down in terms
of security; at its core the signature binds the public key just like
ML-DSA does, so I do not understand the loss of binding to pk.tr you
mention.

I am not saying HashML-DSA is necessarily a great choice and perhaps
for PKI the payload to sign is always small enough that even if a token
does not support externally provided mu just piping in the whole
payload will be fine.

My main concern is that other, less skilled protocol designers will
just assume that mu can *always* be provided from outside the crypto
boundary and start designing protocols that sign *large* payloads,
assuming ExternalMu-ML-DSA is always possible, just to save a small
additional hashing step at the end.

Additionally, allowing bare mu trading can lead to people simply
unbinding pk.tr and writing protocols that pass an arbitrary hash
(and hash mechanism) into ExternalMu-ML-DSA, generating a completely
non-compliant signature.

There is nothing stopping these two scenarios if you give those APIs to
the masses, because the crypto module cannot check that mu is valid at
all; it can only check that the length of the blob being passed in is
correct, nothing else.

Simo.


On Thu, 2024-11-21 at 22:53 +0000, 'Mike Ounsworth' via pqc-forum

Sophie Schmieg

Nov 21, 2024, 7:44:44 PM
to Simo Sorce, Mike Ounsworth, Tim Hollebeek, Stephan Mueller, pqc-forum
The main security concern I have around HashML-DSA is that when the hash function is specified only in the payload, and not in the public key, the scheme ceases to be a cryptographic signature scheme, as it no longer matches the definition used in the academic analysis of signature schemes. It also means that if a hash function is later broken and allows for second preimages, the already-signed tokens create an existential forgery. If the hash function were externally provided and authenticated, this could be avoided. This is a very theoretical concern, mostly because a) second preimage attacks are exceedingly rare, and b) if the alternative is specifying the hash function in the public key, the mitigation would be a change of the public key anyway, so you might as well also change the non-prehashed bits.

The practical problem I have with HashML-DSA is mostly one of interoperability: it is yet another, incompatible scheme, which may or may not be supported in various systems. And in this case it is completely unnecessary, since Algorithm 7, line 6 already provides all we need for a prehashed version of ML-DSA, without introducing any of the incompatibilities. There is simply no reason for HashML-DSA to exist; it does not solve any problem not already solved by external µ/streamed µ.
> ---
> Mike Ounsworth
> Apostle of the church of ExternalMu-ML-DSA
>
 Does that make me the founder of the Church of External-µ-ML-DSA?

Mike Ounsworth

Nov 21, 2024, 7:45:37 PM
to Simo Sorce, Tim Hollebeek, Stephan Mueller, pqc-forum
> Pardon me,
> but is all the trade-off based on saving one small round of hashing?

 

No, that matters almost not at all. I did not even mention that in my email below.

 

The main problem is the absolute disaster that’s gonna happen at the operational level when you have to commit at key-gen time to whether your ML-DSA key will do pure or pre-hashed signatures, and also the absolute disaster that’s gonna happen at the protocol level when we need to carry in the message whether to use ML-DSA.Verify() or HashML-DSA.Verify(). If the client gets it wrong, then you have a security vulnerability, which means this data needs to be protected inside the signed data, which most protocols today don’t do; it’s a nasty chicken-and-egg problem.

ExternalMu-ML-DSA does not have either of these problems because the .Verify() function is identical for both ML-DSA and ExternalMu-ML-DSA.

 

 

 

> I am not sure I understand why HashML-DSA would be step down in terms
> of security, at the core the signature binds the public key just like
> ML-DSA does

 

No, it doesn’t, not if you put a pre-hash in front of it.

 

Let’s assume an attack model where the hash function used for the HashML-DSA prehash is broken like SHA-1 so that it’s easy to find collision pairs where H(m1) = H(m2). Then I trivially have a signature collision in HashML-DSA.

The same is only true for ML-DSA if the collision was computed against that public key, ie H(tr || m1) = H(tr || m2). In other words, it needs to be a per-public-key collision search. This is not quite as robust as SLH-DSA, which rolls in a completely unpredictable nonce, but it’s something.
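The difference in collision targets can be illustrated with a small sketch (simplified: the FIPS 204 domain separators and the hash OID in the HashML-DSA payload are omitted; only the key-(in)dependence of the hashed value is shown):

```python
import hashlib

M = b"to-be-signed structure"

# HashML-DSA case: the collision target PH(M) is key-independent --
# one collision pair (m1, m2) under PH forges for *every* key.
def prehash(msg: bytes) -> bytes:
    return hashlib.sha512(msg).digest()

# Pure ML-DSA case: the message only ever enters the signature inside
# mu = SHAKE256(tr || ..., 64) with tr = SHAKE256(pk, 64), so a useful
# collision must be searched for per public key.
def mu(pk: bytes, msg: bytes) -> bytes:
    tr = hashlib.shake_256(pk).digest(64)
    return hashlib.shake_256(tr + msg).digest(64)

assert prehash(M) == prehash(M)          # identical for every signer
assert mu(b"pk-A", M) != mu(b"pk-B", M)  # differs per public key
```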

 

So while HashML-DSA is not worse than RSA, it’s not taking advantage of modern crypto design techniques. Insisting that we MUST maintain the behaviour from the ‘90s is impeding both the immediate signature security improvements in FIPS 204 and 205, and also any future innovations. In my opinion, we need to treat the signature primitives as a black box and stop interfering with their internal security properties. You may argue that separating out mu is also doing this, but I argue that it is not; it is separating the work between two collaborating crypto modules, but it is not altering the behaviour of the algorithm.

 

 

 

 

> Additionally allowing bare mu trading can lead to people simply
> unbinding pk.tr and writing protocols that pass in an arbitrary hash
> (and hash mechanism) into the ExternalMu-ML-DSA generating a completely
> non compliant signature.

 

Again, I’m proposing that ExternalMu-ML-DSA.Prehash() and ExternalMu-ML-DSA.Sign() can be split across a P11 driver and its backing HSM; not that we should be exposing ExternalMu-ML-DSA.Sign() to the application. I fully agree that allowing protocol designers to get creative with how they construct mu is a complete and total violation of the ML-DSA algorithm. Also, a signature generated this way would fail to validate against a standard .Verify() function, since that independently computes mu. In addition to failing test vectors, I suspect that CMVP would not certify that anyway, because FIPS 204 makes it clear that the ExternalMu prehash step still has to be done in a certified crypto module, just not necessarily *the same* crypto module.

(Alg 7 line 6):
“> message representative that may optionally be computed in a different cryptographic module”

 

 

 

 

 

This is excellent discussion. It’s really unfortunate that we didn’t get to have this discussion *before* FIPS 204 was published, because now I’m afraid that we are fighting against inertia: people have already sunk investment into their ML-DSA implementations. I think it’s very clear that ML-DSA plus ExternalMu-ML-DSA is WAY LESS MESSY at the X.509 layer than ML-DSA plus HashML-DSA, since the latter requires a complete duplicate set of OIDs, and a complete duplicate set of .Verify() implementations.

 

---

Mike Ounsworth

John Gray

Nov 21, 2024, 8:21:52 PM
to Mike Ounsworth, Simo Sorce, Tim Hollebeek, Stephan Mueller, pqc-forum
I agree. At IETF 121 a decision was made not to support HashML-DSA (and HashSLH-DSA) in certificates (and hence PKI). My concern is that a CA that needs to sign large CRLs is now going to have to send massive amounts of data across a network instead of just a hash. The response was "they should not do that". I agree they shouldn't, but I know that in practice they will. So the solution could be delegated signing: an ML-DSA root issuing a CRL-signing HashML-DSA cert, which could work if the IETF allowed HashML-DSA certificates. However, that is not ideal because it makes validation chains longer, and lots of CAs do direct signing... so now it gets messy. ExternalMu-ML-DSA solves this real-world problem elegantly, and that is why it belongs in the Dilithium certs IETF draft.

Cheers,

John Gray 



   

Simo Sorce

Nov 22, 2024, 12:08:44 PM
to Mike Ounsworth, Tim Hollebeek, Stephan Mueller, pqc-forum
Thanks for the explanation Mike,
I do not have a strong held belief one way or another but I am
uncomfortable with external mu computation, so take the following and
previous comments as just devil's advocate and exploration of this
topic.


So if I understand correctly, you are raising two issues here:
1) ML-DSA and HashML-DSA are effectively two distinct algorithms, but
share the same OIDs, making it hard to properly support both in X.509,
where signatures are identified by said OIDs.
2) Using different hashes in HashML-DSA potentially opens up collision
attacks on the payload.

I agree, and this is unfortunate; at the very least the signature
should have included a way to discriminate which form is being used, or
different OIDs should have been used.
Because of this there are only two options: select only one variant, or
add OIDs. Clearly the latter seems not favored, but I am not sure why
not.

For the sake of argument, and if the double hashing is not a
consideration, in theory you could standardize on HashML-DSA instead of
ML-DSA; from the pov of algorithm confusion about what the OID means,
you would still have no confusion, as only one variant would be
allowed.

This brings up the second issue: HashML-DSA may be subject to attacks
that ML-DSA is not, because you can use a different hash for the
pre-hashing than is used in the signature, and that hash could have
weaknesses.

For this, a simple solution could be that X.509 allows only the same
hash used in ML-DSA to compute the pre-hash for HashML-DSA, reducing
the difference in attack capability to be the same as that against
ML-DSA.

From the pov of finding a collision, I think we can agree that whether
the payload is pk.tr||M or just ctx||M makes no difference: all of the
message being signed is public information, and all of the actual
potential attacks would be specific to the signed content anyway (the
specific certificate/signature you want to forge). M is generally so
specific that in most cases the "per-public-key" part makes no real
difference for X.509, given that certificates usually contain unique
information anyway, and you wouldn't be able to reuse a collision
search against just M for a different sk/pk pair's signature anyway.


So in theory you could say that X.509 standardizes on HashML-DSA only
with SHAKE256 as the only valid pre-hash digest and you would have
equivalent security and implementability guarantees without exposing mu
computation outside of the crypto module.



A third option could be to use ML-DSA only, but force the payload to be
additionally pre-hashed with an X.509-specific structure that could
deliver what HashML-DSA did not, without exposing mu:

Example:
X509_ML-DSA_Sign = ML-DSA( ctx || r || H(r || M) ), where ctx includes
some context string like "X509" and specifies the hash H (perhaps as an
OID, or just another string like "SHAKE256"), and r is a random nonce.

This would give you all of the properties you need (I think), with the
only potential downside being the "double hashing".

X509_ML-DSA_Sign would probably be better represented as a separate OID,
or a family of OIDs if you want to allow hash agility for the pre-hash
part.
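The payload construction sketched above could look like the following; the function name, context-string encoding, and nonce length are all illustrative choices, not part of any standard:

```python
import hashlib
import os

def x509_mldsa_payload(msg: bytes, hash_name: str = "shake_256") -> bytes:
    """Hypothetical X.509-specific pre-hash wrapper: the returned
    ctx || r || H(r || M) would then be fed to pure ML-DSA.Sign.
    Encoding of ctx and the nonce length are illustrative only."""
    ctx = b"X509:" + hash_name.encode()     # context string naming the hash
    r = os.urandom(32)                      # random nonce
    h = hashlib.shake_256(r + msg).digest(64)
    return ctx + r + h
```

Because r is folded into the hash, a collision search against H alone no longer yields a usable forgery, which is the property the nonce is meant to buy.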

HTH,
Simo.

PS: Note that the discussion about a software driver + HSM hardware is a
red herring, because the certification can cover a combined hw+sw module,
and therefore such a thing can be built regardless of whether external mu
is allowed or not; I do not see that argument as a positive or negative
addition to this discussion. If we had PAI in general-purpose CPUs that
compute ML-DSA in hw and take mu as input, "pure" general-purpose
software would also be possible without external mu being allowed,
because once again this would be an internal detail of the crypto
module.

Simo Sorce

Nov 22, 2024, 12:16:17 PM
to John Gray, Mike Ounsworth, Tim Hollebeek, Stephan Mueller, pqc-forum
Hi John,
this problem could also be solved by creating an ephemeral intermediate
certificate at the time of CRL signing.

Generate the CRL.
Generate an ephemeral certificate and key pair E.
Sign the CRL with E using some fast, local module.
Sign E with the CA.
Discard E's private key.
Bundle E with the CRL.
This avoids depending on specific characteristics of an implementation
of ML-DSA that exposes MU or has specific performance guarantees, and
is a generic mechanism that can be used with *any* future signature
mechanism, whether it allows or not efficient, out of module, pre-hash
computation.

Simo.
> > necessarily **the same** crypto module.
> >
> > (Alg 7 line 6):
> > “> message representative that may optionally be computed in a different
> > cryptographic module”
> >
> > This is excellent discussion. It’s really unfortunate that we didn’t get
> > to have this discussion **before** FIPS 204 was published because now I’m
> > afraid that we are fighting against inertia that people have already sunk
> > investment into their ML-DSA implementations. I think it’s very clear that
> > ML-DSA and ExternalMu-ML-DSA is WAY LESS MESSY at the X.509 layer than
> > ML-DSA and HashML-DSA since the latter requires a complete duplicate set of
> > OIDs, and a complete duplicate set of .Verify() implementations.
> >
> > ---
> > *Mike* Ounsworth

Sophie Schmieg

Nov 22, 2024, 1:14:28 PM11/22/24
to Simo Sorce, John Gray, Mike Ounsworth, Tim Hollebeek, Stephan Mueller, pqc-forum
There seems to be some lingering concerns about what would happen if you tricked an external-µ ML-DSA implementation into signing a message representative that is not SHAKE256(SHAKE256(pk, 64) || 0x00 || 0x00 || m, 64). For some reason, the same concern seems to not apply to an external-SHA256 HashML-DSA implementation being tricked into signing a message representative that is not SHA256(m), despite both being secure hash functions (SHAKE256 is even slightly more secure).
What will happen (in either case) is the same thing: The signer produces a signature that cannot be verified.
The step of reducing a message into a message representative via a hash function (whether that is SHA256 or SHAKE256) is in order to compress the message to something more workable for the signature scheme. In the end, what a signature attests to is the message representative, with the properties of the hash function leading to that attestation to be an attestation to the message itself.

There are two possible concerns I can see, if you do not trust, on faith alone, NIST's assessment that external µ is secure:
1) There are signature algorithms where choosing a malicious message identifier can lead to leakage of the private key; SLH-DSA is one such example (which is why there is no Church of External-Digest-SLH-DSA). However, ML-DSA uses the message representative in Fiat-Shamir with aborts. Fiat-Shamir, by its very construction, can handle any message identifier, even if chosen maliciously, since it has to hash it further to incorporate the commitment for the underlying zero-knowledge protocol. You can see this by the fact that µ is only used in lines 7 and 15, and in both cases it is used as input into a hash function together with cryptographically relevant entropy unknown to the party choosing µ: K and rnd in the case of line 7 (with K being part of the private key, and rnd being optional additional cryptographic randomness), and w1Encode(w1) in the case of line 15 (the high-order bits of Ay, where y is computed via the unpredictable hash in line 7 and is unpredictable to the attacker; in fact, if the attacker could predict the result of line 15, they could just recover s1 and s2 by knowing w1 in aborting cases).

2) The public key is an input into µ. What happens if one uses a different public key here? The short answer is, one gets a signature that does not verify. There is also a slightly more interesting longer answer, though. To simplify things, let's pretend that ML-DSA did not perform public key compression, i.e. the public key was the full t = As1 + s2, and the signature did not contain any hints. The compression is not security relevant, so assuming the attacker doesn't have to work around it is just going to make our lives easier. You could, if you wished, try to make small adjustments to t by adding another small error term s3 to it, and hoping that the sum of s2 + s3 is small enough to not destroy the value of knowing s1. Of course, without knowledge of the private key, you have no way of telling whether the error you added pushed things out of bounds. Hashing µ with this alternate public key and sending it off to be signed will possibly result in a valid signature. However, all the attacker accomplished is creating a worse situation for themselves than they had, had they not manipulated t in this fashion: They could already create signatures, now they can create signatures for a very similar, but slightly different public key, and those signatures sometimes fail to verify.
Public keys have to be authentic for a signature scheme to work. Creating signatures for a related, but differing public key does not break any security guarantees to begin with, and is a property present in other schemes as well, for example ECDSA. Similar to ML-DSA you can do a small alteration to the public key, namely flipping the sign on the y coordinate, to create an alternate public key that a modified signature (-r, s) would also verify for.
Note further that the far bigger risk in any prehashed scheme, whether external µ or HashML-DSA, is the fact that the party computing the hash has to be trusted: the private key holder, signing only a hash, has no idea what it is signing, so a malicious hashing party can just manipulate the message that is being signed, instead of the hash itself, using a classic confused deputy attack.

Using external µ has a slight security benefit over using HashML-DSA's message representative, even: Assuming the message has low entropy, an adversary that intercepts only the message representative, and has no knowledge of the public key, cannot brute force the message in the case of µ, while in the case of HashML-DSA, they can. Whether that property ever matters is a different question.
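For concreteness, the µ computation above (pure mode, empty context string) needs nothing beyond a SHAKE256 implementation; a minimal Python sketch (the function name is mine, not from any standard API):

```python
import hashlib

def external_mu(pk: bytes, message: bytes) -> bytes:
    """Compute mu = SHAKE256(tr || 0x00 || 0x00 || M, 64) for pure-mode
    ML-DSA with an empty context string, where tr = SHAKE256(pk, 64)."""
    tr = hashlib.shake_256(pk).digest(64)   # hash of the encoded public key
    m_prime = b"\x00\x00" + message         # domain separator + ctx length 0
    return hashlib.shake_256(tr + m_prime).digest(64)
```

Note that this only reproduces the message representative; nothing in the resulting 64 bytes can be checked by the module that later signs it, which is exactly the property being debated in this thread.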



--

Sophie Schmieg |
 Information Security Engineer | ISE Crypto | ssch...@google.com

Simo Sorce

Nov 22, 2024, 1:49:13 PM11/22/24
to Sophie Schmieg, John Gray, Mike Ounsworth, Tim Hollebeek, Stephan Mueller, pqc-forum
On Fri, 2024-11-22 at 10:14 -0800, Sophie Schmieg wrote:
> There seems to be some lingering concerns about what would happen if you
> tricked an external-µ ML-DSA implementation into signing a message

I would like to be clear here that my concern is not about an adversary
tricking an oracle into signing the wrong thing; I do assume that
whoever provides the message, let alone the hashes, needs to be fully
trusted, of course.

My concern is mostly (semi)intentional abuse of the crypto module
implementation by application writers or by protocol engineers, whereby
both the client and the server do fully interoperate and their
signatures verify because both sides pass in a badly formed mu.

For example passing in a SHAKE256(M) (or another, possibly truncated
or, ugh, padded hash) instead of the proper mu, without pk.tr or
anything else hashed in.

If APIs to pass in a pre-hashed mu are not available, it is not
possible to do this with "standard" cryptography modules, the
implementer would have to write their own implementation.

But if such API exists and is even required for common scenarios, then
there is a bigger chance for this to happen, whether intentionally or
not.

We've seen it time and again with raw RSA signatures.

My concern is mostly the use of such APIs by people with enough
background to know how to make API calls, but not enough to understand
the cryptographic consequences of what they are doing, which frankly is
the majority of software developers.

That is all, I fully understand and agree with your concerns (maybe
with varying degrees of weight on those), I just think that we could
achieve the same goals w/o necessarily opening up access to mu.

That said, we've dealt with raw RSA abuse until today, so it will not
be the end of the world if the consensus is that external-mu is what
people really want.

Simo.

Watson Ladd

Nov 22, 2024, 1:53:27 PM11/22/24
to Simo Sorce, Sophie Schmieg, John Gray, Mike Ounsworth, Tim Hollebeek, Stephan Mueller, pqc-forum
I don't understand why the PKCS 11 driver or whatever API cannot
handle what it takes to work with external mu internally, while the
hardware HSM exposes the mu.



--
Astra mortemque praestare gradatim

Simo Sorce

Nov 22, 2024, 1:58:43 PM11/22/24
to Watson Ladd, Sophie Schmieg, John Gray, Mike Ounsworth, Tim Hollebeek, Stephan Mueller, pqc-forum
On Fri, 2024-11-22 at 10:53 -0800, Watson Ladd wrote:
> I don't understand why the PKCS 11 driver or whatever API cannot
> handle what it takes to work with external mu internally, while the
> hardware HSM exposes the mu.

It totally can, but the driver, or the HSM, can verify nothing about
the passed in mu except its length.

What this means is

External-mu-ML-DSA == Raw RSA

Handle with care!! And ... thank you?

Watson Ladd

Nov 22, 2024, 2:01:51 PM11/22/24
to Simo Sorce, Sophie Schmieg, John Gray, Mike Ounsworth, Tim Hollebeek, Stephan Mueller, pqc-forum
On Fri, Nov 22, 2024 at 10:58 AM Simo Sorce <si...@redhat.com> wrote:
>
> On Fri, 2024-11-22 at 10:53 -0800, Watson Ladd wrote:
> > I don't understand why the PKCS 11 driver or whatever API cannot
> > handle what it takes to work with external mu internally, while the
> > hardware HSM exposes the mu.
>
> It totally can, but the driver, or the HSM, can verify nothing about
> the passed in mu except its length.
>
> What this means is
>
> External-mu-ML-DSA == Raw RSA
>
> Handle with care!! And ... thank you?

I think you misunderstand. The PKCS11 driver would expose ML-DSA. The
use of external mu is purely an implementation detail between the HSM
and the driver to avoid sending the entire message to the HSM.

Simo Sorce

Nov 22, 2024, 2:28:29 PM11/22/24
to Watson Ladd, Sophie Schmieg, John Gray, Mike Ounsworth, Tim Hollebeek, Stephan Mueller, pqc-forum
On Fri, 2024-11-22 at 11:01 -0800, Watson Ladd wrote:
> I think you misunderstand. The PKCS11 driver would expose ML-DSA. The
> use of external mu is purely an implementation detail between the HSM
> and the driver to avoid sending the entire message to the HSM.

I am not sure why people fixate on PKCS11 here.

Once OpenSSL or other popular Open Source cryptography library provides
access to sign with External-mu-ML-DSA the damage is done.

Once applications start producing mu on their own, PKCS11 will also be
forced to provide direct mu injection. We are already discussing
whether to immediately provide such a mechanism, in fact.

Watson Ladd

Nov 22, 2024, 2:34:38 PM11/22/24
to Simo Sorce, Sophie Schmieg, John Gray, Mike Ounsworth, Tim Hollebeek, Stephan Mueller, pqc-forum


On Fri, Nov 22, 2024, 11:28 AM Simo Sorce <si...@redhat.com> wrote:
> I am not sure why people fixate on PKCS11 here.
>
> Once OpenSSL or other popular Open Source cryptography library provides
> access to sign with External-mu-ML-DSA the damage is done.

The entire conversation is about whether or not we need HashML-DSA if the environments that would need it could use PureML-DSA. It's not about how the API exposed by OpenSSL looks, which they can screw up independently of us.

Taylor R Campbell

Nov 22, 2024, 2:56:34 PM11/22/24
to Simo Sorce, Sophie Schmieg, John Gray, Mike Ounsworth, Tim Hollebeek, Stephan Mueller, pqc-forum
> Date: Fri, 22 Nov 2024 13:49:02 -0500
> From: Simo Sorce <si...@redhat.com>
>
> I would like to be clear here that my concern is not about an adversary
> tricking an oracle into signing the wrong thing; I do assume that
> whoever provides the message, let alone the hashes, needs to be fully
> trusted, of course.
>
> My concern is mostly (semi)intentional abuse of the crypto module
> implementation by application writers or by protocol engineers, whereby
> both the client and the server do fully interoperate and their
> signatures verify because both sides pass in a badly formed mu.
>
> For example passing in a SHAKE256(M) (or another, possibly truncated
> or, ugh, padded hash) instead of the proper mu, without pk.tr or
> anything else hashed in.

Suppose, through some combination of implementation errors and
uncoordinated contractors disagreeing on imperial vs metric
cryptography, the HSM is fed H(pad(M)) instead of H(pk.tr, pad(M)) and
thus spits out a signature of the form

s := ExternalMu-ML-DSA.Sign_internal(sk, mu, rnd),
where mu = H(pad(M)).

A conforming verifier given the public key pk and a putative signed
message (s', M') will test

ok := ML-DSA.Verify_internal(pk, pad(M'), s'),
which internally derives mu' := H(pk.tr, pad(M')).

How can signatures generated by your HSM usage pass this verifier?
The verifier's mu' won't match the signer's mu, so none of the
downstream signature verification logic will pass (with more than
negligible probability, anyway).

Surely the first time you do a trial run of your HSM with H(pad(M))
inputs and check to make sure it passes verification, the verifier
will reject it, no? And even if you deploy to prod without first
deploying to test, surely all your users will notice signature
verification failures?

Are you suggesting designing and deploying a custom, nonstandard
_verifier_ that accepts signed messages of this form? Isn't the whole
point of ExternalMu-ML-DSA -- in contrast to HashML-DSA -- that it
interoperates with unmodified standard ML-DSA verifiers?
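Concretely, the mismatch can be seen at the hash level alone. This is just a sketch with SHAKE256 standing in for H and the padding simplified to the pure-mode domain separator:

```python
import hashlib

pk, M = b"public-key-bytes", b"message"
tr = hashlib.shake_256(pk).digest(64)

# The signer's module was fed a malformed representative: H(M), tr omitted.
mu_signed = hashlib.shake_256(M).digest(64)

# A conforming verifier internally derives mu' = H(tr || 0x00 || 0x00 || M).
mu_verifier = hashlib.shake_256(tr + b"\x00\x00" + M).digest(64)

# The representatives differ, so downstream signature verification must fail.
assert mu_signed != mu_verifier
```

The malformed signature is therefore caught by the very first interop test against any conforming verifier, which is the point: there is no silent failure mode here.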

Mike Ounsworth

Nov 22, 2024, 2:57:18 PM11/22/24
to Watson Ladd, Simo Sorce, Sophie Schmieg, John Gray, Tim Hollebeek, Stephan Mueller, pqc-forum

I agree with Watson here.

 

Whether or not openssl would need to expose an API for ExternalMu-ML-DSA is sort of orthogonal to this discussion. I can imagine that openssl would need to implement the ExternalMu.Prehash() function, and would need to know how to chain that into underlying cryptographic hardware that exposes a .Sign(mu). If you are doing a signature entirely in local software, then there is no reason to split this into Prehash() and then Sign(): just do straight "one-shot" normal ML-DSA. It's hard to imagine what use case would require openssl to implement and expose a .Sign(mu) function in software. That would be cases where a single signature is produced across two collaborating openssl modules separated by a network connection. If openssl does have a use case for that, I trust them to suitably document it. As Sophie has nicely laid out, there's no security risk here: if you hand in a broken mu, then you have a broken signature that will not verify, which is probably not what you wanted, but is not a security problem.

 

---

Mike Ounsworth

 

From: Watson Ladd <watso...@gmail.com>

Sent: Friday, November 22, 2024 1:34 PM
To: Simo Sorce <si...@redhat.com>

Cc: Sophie Schmieg <ssch...@google.com>; John Gray <jogr...@gmail.com>; Mike Ounsworth <Mike.Ou...@entrust.com>; Tim Hollebeek <tim.ho...@digicert.com>; Stephan Mueller <smue...@chronox.de>; pqc-forum <pqc-...@list.nist.gov>
Subject: Re: [EXTERNAL] [pqc-forum] HashML-DSA vs ExternalMu-ML-DSA

 


Simo Sorce

Nov 22, 2024, 4:25:30 PM11/22/24
to Mike Ounsworth, Watson Ladd, Sophie Schmieg, John Gray, Tim Hollebeek, Stephan Mueller, pqc-forum
I was under the impression you were advocating for a public API to
allow external parties (it's in the name) to pass mu directly to the
cryptographic module.

If all you are discussing is how to internally organize calls between
the software part (PKCS11 driver) and the hardware part (HSM) of a
validated module, then we are really just saying that X.509 only uses
ML-DSA and nothing else.

It is unclear to me why you would have to define how you call an
internal implementation detail of a combined sw+hw module stack, and
why an internal detail would be exposed to external parties.

Either way, the concerns from all parties now seem clear to me. We do
not have to agree on what "might happen", or on how good or bad any
choice will be.

Thanks,
Simo.

On Fri, 2024-11-22 at 19:56 +0000, 'Mike Ounsworth' via pqc-forum
wrote:
> I agree with Watson here.
> [...]
Sophie Schmieg

Nov 22, 2024, 5:25:05 PM11/22/24
to Simo Sorce, Mike Ounsworth, Watson Ladd, John Gray, Tim Hollebeek, Stephan Mueller, pqc-forum
There are other use cases for prehashing besides HSMs. In particular, Cloud systems holding private keys as remote oracles have a similar issue, but are by necessity not as closely integrated as PKCS11 drivers. But I do agree, in the end, what we are saying is that X509 should only use ML-DSA.

To address the fear of malformed µs being used anyways: The addition of the public key to ML-DSA is something known as the BUFF transform [1]. In particular, this transform gives the scheme some additional security properties, but it is not required for the scheme to have unforgeability. So in this hypothetical scenario of someone abusing the API by feeding in the wrong hash, they would not lose the most important security guarantee.

Importantly, exposing a hash would also not allow the secret key holder to verify it was computed correctly, and in my experience, one should not underestimate the creativity people have when abusing APIs (looking at the piece of legacy code that handles RSA-with-SHA3, because someone truncated SHA384 in the requirements). So neither HashML-DSA nor ML-DSA can actually tell whether their respective hashes were computed correctly, and just as there is no way of telling SHAKE256(SHAKE256(pk, 64) || 0x00 || 0x00 || m, 64) apart from SHA2-512, there is no way of distinguishing SHA2-256(m) from SHAKE128(m, 32), save for brute-forcing the preimage.
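To make that last point concrete with a toy example (stdlib hashes only; just a sketch): both digests below are 32 bytes of uniform-looking output, and nothing in the bytes themselves reveals which function produced them.

```python
import hashlib

m = b"some message"
d_sha = hashlib.sha256(m).digest()         # SHA2-256(m): 32 bytes
d_shake = hashlib.shake_128(m).digest(32)  # SHAKE128(m, 32): 32 bytes

# Same length, both computationally indistinguishable from random bytes.
# A signing module handed either value cannot tell which hash (if any)
# produced it without brute-forcing a preimage.
assert len(d_sha) == len(d_shake) == 32
assert d_sha != d_shake
```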

Mike Ounsworth

Nov 22, 2024, 5:42:01 PM11/22/24
to Simo Sorce, Watson Ladd, Sophie Schmieg, John Gray, Tim Hollebeek, Stephan Mueller, pqc-forum

Hi Simo,

 

Right, so the question is whether and how to support cases where Prehash() --> mu is computed by one crypto module (ex.: openssl), which then hands off to a Sign(mu) in another crypto module (ex.: a P11 driver for an HSM). As a PKI person myself this is all “crypto module internal detail” to me, but this is a thing that needs to be sorted out.

 

I think the open question is whether PKCS#11 should expose a .Sign(mu). I think it probably should. Again, the exact wording of FIPS 204 is:

 

“> message representative that may optionally be computed in a different cryptographic module”

 

So it’s totally within the FIPS guidelines for Prehash() to be done in Module A, and Sign(mu) to be done in Module B. There probably are valid use cases that will require this.

 

---

Mike Ounsworth

 

From: Simo Sorce <si...@redhat.com>
Sent: Friday, November 22, 2024 3:25 PM
To: Mike Ounsworth <Mike.Ou...@entrust.com>; Watson Ladd <watso...@gmail.com>
Cc: Sophie Schmieg <ssch...@google.com>; John Gray <jogr...@gmail.com>; Tim Hollebeek <tim.ho...@digicert.com>; Stephan Mueller <smue...@chronox.de>; pqc-forum <pqc-...@list.nist.gov>
Subject: Re: [EXTERNAL] [pqc-forum] HashML-DSA vs ExternalMu-ML-DSA

 


Watson Ladd

Nov 22, 2024, 5:46:08 PM11/22/24
to Mike Ounsworth, Simo Sorce, Sophie Schmieg, John Gray, Tim Hollebeek, Stephan Mueller, pqc-forum
On Fri, Nov 22, 2024 at 2:41 PM Mike Ounsworth
<Mike.Ou...@entrust.com> wrote:
>
> Hi Simo,
>
>
>
> Right, so the question is whether and how to support cases where Prehash() --> mu is computed by one crypto module (ex.: openssl), which then hands off to a Sign(mu) in another crypto module (ex.: a P11 driver for an HSM). As a PKI person myself this is all “crypto module internal detail” to me, but this is a thing that needs to be sorted out.

Or you can think about it the other way: the PKCS11 driver would call
back into the available module on the machine to do the hashing then
send the result off: no special API required.

Mike Ounsworth

Nov 22, 2024, 6:11:41 PM11/22/24
to Watson Ladd, Simo Sorce, Sophie Schmieg, John Gray, Tim Hollebeek, Stephan Mueller, pqc-forum

Watson, but that’s an entirely separate use case, right?

 

If openssl is linked against and calling into a P11 driver, that is a different usecase from a P11 driver being linked against and calling into openssl. One requires the PKCS#11 specification to expose a .Sign(mu) and the other does not.

 

Other than sunk costs on the part of HSM vendors who may have already shipped their ML-DSA implementation, is there any downside to putting a .Sign(mu) API into the P11 spec in order to allow the prehash step to be done by the application prior to invoking the HSM’s driver?

 

 

 

PS – I may be contradicting my own earlier comments here. Oops.

 

---

Mike Ounsworth

 

From: Watson Ladd <watso...@gmail.com>
Sent: Friday, November 22, 2024 4:46 PM
To: Mike Ounsworth <Mike.Ou...@entrust.com>
Cc: Simo Sorce <si...@redhat.com>; Sophie Schmieg <ssch...@google.com>; John Gray <jogr...@gmail.com>; Tim Hollebeek <tim.ho...@digicert.com>; Stephan Mueller <smue...@chronox.de>; pqc-forum <pqc-...@list.nist.gov>
Subject: Re: [EXTERNAL] [pqc-forum] HashML-DSA vs ExternalMu-ML-DSA

 


Watson Ladd

Nov 22, 2024, 6:28:47 PM11/22/24
to Mike Ounsworth, Simo Sorce, Sophie Schmieg, John Gray, Tim Hollebeek, Stephan Mueller, pqc-forum
On Fri, Nov 22, 2024 at 3:11 PM Mike Ounsworth
<Mike.Ou...@entrust.com> wrote:
>
> Waston, but that’s an entirely separate use case, right?

I'm just taking your example. We have user code in A that calls into
OpenSSL and wants to use a key on an HSM via a PKCS11 provider, or
some other provider tied to the HSM. Providers in OpenSSL can call
into each other dynamically, so the PKCS 11 provider can call into the
FIPS provider or default provider if one exists to compute mu, and
send it to the HSM.

Now, it's possible that PKCS11 needs such a mechanism to expose mu to
make this work, but the application doesn't see this. For applications
that call into PKCS11 directly, this is trickier if the driver can't
depend on anything but the hardware, but it's not impossible.

Sincerely,
Watson

Richard Kisley

Dec 5, 2024, 10:16:22 AM12/5/24
to Watson Ladd, Mike Ounsworth, Simo Sorce, Sophie Schmieg, John Gray, Tim Hollebeek, Stephan Mueller, pqc-forum
Hi,
Summary: I think we need explicit NIST approval for exposing the internal Sign/Verify interfaces in Section 6 of FIPS 204 before basing the PQC certificate ecosystem on them. @Christopher Celi, @Dustin Moody

The second paragraph of Section 6 in FIPS 204 explicitly disallows offering Algorithm 7 (and the others in this section, including Algorithm 8, ‘Verify_internal’) to applications.  So the needed change is to remove this ‘should not’:

 

“Other than for testing purposes, the interfaces for key generation and signature generation specified in this section should not be made available to applications, as any random values required for key generation and signature generation shall be generated by the cryptographic module. ”


Yes, there is a huge difference between ‘should not’ and ‘shall not’, and optimistic folks have been reading ‘should not’ as ‘do what you want’ for as long as we have had standards.  However, I strongly disagree with an optimistic interpretation on these grounds:

  1. A ‘should not’ is NOT an explicit allow in the view of the authors of the standard.  It is at best an ‘implicit allow’, usually an acknowledgement that some existing use case cannot be ruled out, although the authors very much wanted to do that.  It is definitely a statement that NIST would rather that people NOT do this.
  2. The suggested IETF-LAMPS plan is to ignore the explicit NIST preference for HashML-DSA and to base the brand-new PQC certificate ecosystem on the Section 6 ‘should not’ + non-public interface. This seems dangerous.

 

I think there very much needs to be a change to FIPS 204 before embracing the IETF-LAMPS proposed plan. 


Please also note: my own feelings are a very strong preference for ExternalMu-ML-DSA.  I very much like the compatibility / simplicity and I think a lot of the security goal is met by moving line 5 of Algorithm 4 into Algorithm 7, although I am not a cryptographer.


Note that in my non-cryptographer opinion, I think FIPS 205 has a similar problem, even though section 9.4 explicitly allows pre-hash.  This is because the remedy in section 9.4 is to pass hash(M) as the M input to the sign operation (algorithm 18), meaning that pre-hash SLH-DSA will also result in a different signature of M versus ‘Pure’ SLH-DSA.  I do not understand FIPS 205 well enough to know if the preceding algorithms allow an equivalent construction to ExternalMu-ML-DSA, resulting in the same signature for pre-hash and ‘pure’ SLH-DSA.


R Kisley
IBM


JOHNSON Darren

Dec 5, 2024, 11:14:29 AM12/5/24
to Richard Kisley, Watson Ladd, Mike Ounsworth, Simo Sorce, Sophie Schmieg, John Gray, Tim Hollebeek, Stephan Mueller, pqc-forum


Hi Richard,

We received confirmation on the NIST PQC forums that supporting an interface that accepts Mu as input is allowed.  Here is one of the more recent (and more direct) threads on that topic.

https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/GPtJRsk67TY

 

As for “should not” vs “shall”, I don’t believe there is any confusion there either.  As part of a certification, we are required to provide documentation/evidence as to how we meet all of the SHALL statements in the various FIPS specifications.  And “should not” is a recommendation, not a requirement we must meet.

 

The FIPS specifications also provide definitions that support this.

                shall: used to indicate a requirement of this standard

                should: used to indicate a strong recommendation but not a requirement of this standard. Ignoring the recommendation could lead to undesirable results.

 

Thanks

Darren
