External calculation of M' vs mu in FIPS 204.


COSTA Graham

unread,
Nov 12, 2024, 1:30:56 PM11/12/24
to pqc-...@list.nist.gov, JOHNSON Darren


 

Hi

 

Can NIST CTG clarify the scope of the following statement in Algorithm 7, ML-DSA.Sign_internal (step 6) from FIPS 204 please?

 

“message representation that may optionally be computed in a different cryptographic module.”

 

There are some IETF discussions where companies have read the presence of this statement in Algorithm 7 as saying that 'mu' (i.e. H(BytesToBits(𝑡𝑟)||𝑀′, 64)) can in its entirety be calculated outside the boundary of a valid ML-DSA implementation.  This matters to all implementations, but in particular to FIPS 140-3 validated modules, where it impacts what interfaces a module may be permitted (or expected) to support.
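
For concreteness, a minimal sketch (not taken from FIPS 204 verbatim) of how that message representation is computed, assuming tr is the 64-byte public-key hash H(pk, 64) produced at key generation and M' is the formatted message produced by ML-DSA.Sign or HashML-DSA.Sign; at the byte level, BytesToBits(tr) || M' is simply the concatenation tr || M':

import hashlib

def compute_mu(tr: bytes, m_prime: bytes) -> bytes:
    """mu = SHAKE256(tr || M', 64) -- the 'message representation' of Algorithm 7."""
    assert len(tr) == 64, "tr is the 64-byte hash of the encoded public key"
    return hashlib.shake_256(tr + m_prime).digest(64)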

 

If mu is permitted to be calculated outside the boundary of an implementation, this would seem to be flawed in that it leaves the ML-DSA implementation no option to check the content of M', e.g. has the OID been included, is the ctx length within the 255-byte limit, is the hash length sensible for the listed OID, etc.
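
To illustrate the kind of checks that become impossible when only mu is imported, a rough sketch of M'-level validation, treating M' as a byte string and using the FIPS 204 encodings (pure: M' = 0x00 || len(ctx) || ctx || M; pre-hash: M' = 0x01 || len(ctx) || ctx || OID || PH(M)); the OID table and helper name are illustrative only:

SHA512_OID = bytes.fromhex("0609608648016503040203")  # DER OID for SHA-512 (2.16.840.1.101.3.4.2.3)
EXPECTED_DIGEST_LEN = {SHA512_OID: 64}                 # illustrative, not exhaustive

def check_m_prime(m_prime: bytes) -> None:
    if len(m_prime) < 2:
        raise ValueError("M' too short to contain domain separator and ctx length")
    domain, ctx_len = m_prime[0], m_prime[1]
    if domain not in (0, 1):
        raise ValueError("unknown domain separator")
    if len(m_prime) < 2 + ctx_len:
        raise ValueError("declared ctx length exceeds |M'|")
    if domain == 1:                                    # HashML-DSA: expect OID || PH(M)
        rest = m_prime[2 + ctx_len:]
        for oid, digest_len in EXPECTED_DIGEST_LEN.items():
            if rest.startswith(oid) and len(rest) == len(oid) + digest_len:
                return
        raise ValueError("OID missing/unknown, or hash length not sensible for the OID")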

 

Our own interpretation is that the statement in Algorithm 7 ONLY relates to M’ calculated in Algorithm 4, ML-DSA.Sign_external  (step 6) and consumed by Algorithm 7 on step 6.  The standard clarifies in section 5.4.1:  “𝑀′ may be constructed outside of the cryptographic module that performs ML-DSA.Verify_internal. However, in the case of HashML-DSA , the hash or XOF of the content must be computed within a FIPS 140-validated cryptographic module, which may be a different cryptographic module than the one that performs ML-DSA.Verify_internal.”.

 

The same question also applies to Algorithm 8, ML-DSA.Verify_internal and the same statement against step 7 for calculation of mu.

 

Clarification by the standards authors for both the sign and verify cases would be much appreciated, to understand whether we need to support interfaces to import M' exclusively or also mu.

 

Kind Regards,

 

Graham.

 

Sophie Schmieg

unread,
Nov 12, 2024, 1:58:23 PM11/12/24
to COSTA Graham, pqc-...@list.nist.gov, JOHNSON Darren
I think this topic was discussed 4 days ago on this list as well, in the "OIDs for "pre-hash" ML-DSA and SLH-DSA" thread, with Ben Livelsberger's email containing what I consider the official answer.

Note that there is no security concern here, since ML-DSA (as opposed to HashML-DSA, which in my opinion should be avoided in its entirety) does not contain a hash function (or an OID), and therefore does not suffer from the in-band negotiation issues HashML-DSA somewhat inherently tempts implementers into.

An invalid context would result in an invalid signature anyhow, so the attacker would not gain much by tricking a signing oracle into signing a message identifier with an invalid context field; this is akin to signing a hash that does not have a valid preimage.
If the signing oracle is supposed to perform some checks on the context, one could potentially realize a confused deputy attack, but the correct defense against this specific scenario is to properly define a higher-level protocol that domain-separates the hash and includes and checks all relevant metadata. A signing oracle that allows only limited access to the context is outside of the generic use case in any case, as far as I can see, and I would value avoiding HashML-DSA higher than the increased protocol complexity for one very rare, specific use case.



--

Sophie Schmieg | Information Security Engineer | ISE Crypto | ssch...@google.com

COSTA Graham

unread,
Nov 12, 2024, 2:30:21 PM11/12/24
to Sophie Schmieg, pqc-...@list.nist.gov, JOHNSON Darren


 

That was a question about CAVP that Ben answered.  Ben isn't part of the NIST CTG group responsible for the standards.   I think it's still a legitimate question on what the FIPS 204 authors intended that I've not seen authoritatively answered.

 

The main goal behind external hashing is to allow hashing at the source.  That is achieved based on the allowance for M'.

Simo Sorce

unread,
Nov 12, 2024, 2:33:09 PM11/12/24
to Sophie Schmieg, COSTA Graham, pqc-...@list.nist.gov, JOHNSON Darren
Hi Sophie,

Ben's message can be interpreted as allowing a validated module to call
out to a separate validated module, but it is not clear that mu can be
provided by an application and computed by a non-validated module
(which would be implicitly allowed by providing a direct interface to
applications).

In the past NIST has been very keen on forcing the hash calculation to
be done provably by a validated module, so Graham's question is quite
pertinent and not clearly answered IMO.

Simo.


--
Simo Sorce
Distinguished Engineer
RHEL Crypto Team
Red Hat, Inc

COSTA Graham

unread,
Nov 12, 2024, 2:48:34 PM11/12/24
to Simo Sorce, Sophie Schmieg, pqc-...@list.nist.gov, JOHNSON Darren

NIST is also a big place. I do recognize that Ben responded as he did but he caveated that comment with "From our perspective". i.e. he could be interpreted as speaking for the NIST CAVP only and how they've read the standard. For vendors implementing and certifying modules, it's not uncommon to find different views between NIST CMVP, CAVP and the CTG group.

If as commented by Ben on Nov 4: "We interpret FIPS 204 as allowing ML-DSA.Sign_internal(sk, mu, rnd) in addition to ML-DSA.Sign_internal(sk, 𝑀 ′, rnd) and ML-DSA.Verify_internal(𝑝𝑘, mu, 𝜎) in addition to ML-DSA.Verify_internal(𝑝𝑘, 𝑀 ′, 𝜎)."

I propose that, if this is correct, it gets added to the FIPS 204 errata to make it clear this is a view shared by ALL of NIST, and that someone from the core PQC standards team confirms it.


Sophie Schmieg

unread,
Nov 12, 2024, 4:00:07 PM11/12/24
to COSTA Graham, Simo Sorce, pqc-...@list.nist.gov, JOHNSON Darren
I do not disagree with the assessment that having someone from the standards team confirm this would be great; the main point of my response was noting that there is no security issue in transmitting µ itself. While I don't see any other way of interpreting the comment, having an entire API defined via the consequences of a comment is not an ideal situation, especially since it leaves us with two incompatible ways of prehashing, even though only one of the two actually acts as a replacement for prehashing as used with ECDSA/RSA signatures today. As a side note, HashML-DSA similarly does not have an API defined that would actually allow the external provision of the hash of the message, with the comments in section 5.4.1 sharing the same language as the comment on Algorithm 7, line 6, so it is not clear to me whether there is supposed to be any difference between the two, and in particular whether there is any reasonable use case for HashML-DSA that would not be far simpler to handle with pure ML-DSA (in fact my opposition to HashML-DSA is mostly rooted in the fact that I have not seen any use case that would call for it).

Jennifer Trokey

unread,
Nov 12, 2024, 4:40:27 PM11/12/24
to Sophie Schmieg, COSTA Graham, Simo Sorce, pqc-...@list.nist.gov, JOHNSON Darren
I apologize as I go through the pieces, piece by piece; it is quite the learning process! If I understood the term errata a little better than what I originally was viewing it as, I could assist in so many more processes, I'm sure!

Sincerely, 
Jennifer Linn Trokey

COSTA Graham

unread,
Nov 12, 2024, 4:52:18 PM11/12/24
to Jennifer Trokey, Sophie Schmieg, Simo Sorce, pqc-...@list.nist.gov, JOHNSON Darren


 

Ref: Errata. Dustin said in an earlier post that NIST don't intend to update FIPS 204 for 5 years.  Instead, they've posted a spreadsheet alongside it that contains clarifications and corrections.

 

https://csrc.nist.gov/files/pubs/fips/204/final/docs/fips-204-potential-updates.xlsx

 

Since the spreadsheet states as one of its roles that it contains "Potential corrections attempt to remove ambiguity and improve interpretation of the document.", that would seem the place to formally record the answer to this question once sorted out.  Sorry for any confusion!

David A. Cooper

unread,
Nov 19, 2024, 3:53:31 PM11/19/24
to pqc-forum
We agree with Ben Livelsberger's assessment in
https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/HXhRTjshe5E/m/mkkFd7zcAwAJ.
A cryptographic module that implements ML-DSA signing and/or
verification may take mu as input rather than M or M'. A cryptographic
module may be designed to accept either M' or mu as input, but does not
need to support both. (A cryptographic module may alternatively be
designed to accept M as input, but ideally would be able to accept M' as
input for testing purposes.)

As mu is computed as SHAKE256(....), this computation needs to be
performed within a validated cryptographic module that has a validated
implementation of SHAKE256. However, it is not the responsibility of the
module that implements ML-DSA to ensure that the external computation
was performed in an approved manner.

While there is nothing in FIPS 204 that would prevent mu from being
provided by an application, this is something that we would discourage.
Similar to the discussion about pre-hashing in
https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/GMMKmejELfQ/m/e8qnwDp0BAAJ,
it would be preferable for the application to use a cryptographic
library that accepts the message as input. The cryptographic library
would then be responsible for constructing M' and then calling the
appropriate cryptographic modules to compute mu and then the signature.
This seems to be what the designers of the PKCS #11 interface have in
mind
(https://mailarchive.ietf.org/arch/msg/spasm/tNt8T_2JzKLaYEm14sjAzfJgF8s/,
https://mailarchive.ietf.org/arch/msg/spasm/35aGThc78ekJ-RUn_UFBjEBllW8/,
https://mailarchive.ietf.org/arch/msg/spasm/7VbkjoMyjzjULvkv4MfV-n2ruTE/).

Mike Ounsworth

unread,
Nov 19, 2024, 4:23:36 PM11/19/24
to David A. Cooper, pqc-forum

Hi.

 

I’m not sure if this is off-topic for this thread, but I want to point out that computing mu requires knowledge of the public key hash tr.

 

 

ML-DSA.Sign_internal(sk, M’, rnd) pulls tr out of sk, no problem. But if you are doing the mu pre-hash on the client side as a performance optimization, then where is it getting tr from? Either the client / driver needs to make a network call to the HSM to fetch tr (which completely negates the performance gain of local hashing), or you need a different interface that passes in tr along with the message to be hashed (the security implications of this sentence probably make David Cooper uncomfortable since now it’s possible to mismatch pk and sk). I think we should be encouraging implementers to think of this as “a step that uses the public key to compute mu” and “a step that uses the secret key to sign, starting from mu”.
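
As a rough sketch of what such a client-side pre-hash step could look like (hypothetical helper name, pure-ML-DSA formatting assumed; note that pk, or at least tr, must be in hand before hashing can start):

import hashlib

def external_mu_prehash(pk: bytes, ctx: bytes, message: bytes) -> bytes:
    if len(ctx) > 255:
        raise ValueError("ctx must be at most 255 bytes")
    tr = hashlib.shake_256(pk).digest(64)              # tr = H(pk, 64)
    m_prime = bytes([0, len(ctx)]) + ctx + message     # pure ML-DSA: 0x00 || len(ctx) || ctx || M
    return hashlib.shake_256(tr + m_prime).digest(64)  # mu = H(tr || M', 64)

The HSM-side call would then be something along the lines of sign_from_mu(key_handle, mu) (name purely illustrative), which never sees the message itself.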

 

It's unfortunate that this is all buried in an in-line comment on line 6 of Algorithm 7, because this is actually fairly tricky and the ExternalMu “mode” deserves to be written out as a full standalone algorithm with its own implementation and security considerations. We are attempting to do so in the IETF LAMPS ML-DSA specification [2].

 

[1]: https://mailarchive.ietf.org/arch/msg/spasm/LjmIsUE3tgTiV2Z1Us_CgQfFvLw/

[2]: editor’s copy on github since this is not officially posted yet: https://lamps-wg.github.io/dilithium-certificates/draft-ietf-lamps-dilithium-certificates.html#name-pre-hashed-mode-externalmu-

 

---

Mike Ounsworth

 

COSTA Graham

unread,
Nov 20, 2024, 10:25:48 AM11/20/24
to Mike Ounsworth, David A. Cooper, pqc-forum


 

Thank you David (and Mike for the follow-on) – could the main points of this response be added to the FIPS 204 errata please?  i.e.,

 

  1. FIPS 204 allows alternate forms of the external and internal functions where M’ and mu are passed in instead of the data to be signed;
  2. there is no expectation for a module only implementing the signing component and not implementing the calculation of M’ or mu to be accountable for checking the content of these inputs other than standard input validation checks such as length checking;
  3. there is no expectation for a module implementing ML-DSA to be responsible for checking that an approved module calculated M’ and mu;
  4. the pk used to calculate mu, if mu is calculated external to a given implementation boundary for ML-DSA, does not need to come from the same module performing the sign operation using sk.  If that is not the case, the errata should confirm that the independent module calculating mu must request and receive pk from the module that subsequently handles sk; and
  5. confirm that, although permitted for situations where external calculation of M’ and mu makes system-level sense, it is not an option encouraged by NIST and is not a mandated feature of the algorithm for all implementations, as has been stated in these threads.

Samuel Lee

unread,
Mar 18, 2025, 9:31:51 PMMar 18
to pqc-forum, COSTA Graham, Mike Ounsworth, David A. Cooper
Hey folks,

Sorry to revive an old thread, but folks on my team at Microsoft have recently written up a position paper discussing External Mu and alternatives here: ProblemsWithMlDsaExternaMu.pdf - Google Drive

This topic has come to our attention recently due to changes in LAMPS standardization of ML-DSA in the new year.
I will also post this paper on the LAMPS mailing list tomorrow, and look forward to having further discussions on this topic!

Best,
Sam

Moody, Dustin (Fed)

unread,
Mar 19, 2025, 10:53:24 AMMar 19
to pqc-forum, Samuel Lee, COSTA Graham, Mike Ounsworth, Cooper, David (Fed)
All,

After some internal discussion between our crypto team and the CAVP/CMVP, we have updated our FAQ question dealing with external mu.  Please see the second question at


which  will take you to


We hope this update clarifies things from our perspective.  

Dustin
NIST PQC



Sophie Schmieg

unread,
Mar 19, 2025, 11:47:56 AMMar 19
to Moody, Dustin (Fed), pqc-forum, Samuel Lee, COSTA Graham, Mike Ounsworth, Cooper, David (Fed)
To address the main concern in the Microsoft paper about the security of external µ: the authors are correct that to prove this security one has to do a case-by-case analysis of the call sites using µ, and ensure that they are properly domain-separated hash function calls. The authors make it sound like this is somehow a Herculean task, when in reality even a cursory look at FIPS 204 is sufficient (and I presume NIST has done this very analysis).
[screenshot of FIPS 204, Algorithm 7 (ML-DSA.Sign_internal)]
As we can see, µ is used in two places, lines 7 and 15.
Line 7 is to enable deterministic signing, and µ is hashed with the value K of the private key. This value is not used anywhere else, making line 7 a PRF (KMAC), and as such we can replace it with a random function, making rho'' a uniformly random, unpredictable value (without knowledge of the private key). In deterministic mode, all the attacker can do is sign the same µ again, which leads to the same signature (hence the name). In non-deterministic mode, the additional randomness rnd becomes part of the PRF key, so even resigning the same µ leads to an unpredictable rho'' value.
Line 15 is equally easy to analyze. This is the core of the Fiat-Shamir transform, replacing the challenge of a sigma protocol with a hash function call over the commitment together with the message. The standard formulation of Fiat-Shamir does not first hash the message, and indeed the commitment is high in entropy, making such a hash a PRF again. In particular, the message identifier is hashed together with the high bits of w (named w1), which are computed by expanding on rho'', a value that, as just discussed, is unpredictable to the attacker, making w1 equally unpredictable. w1 is not used as a hash function input elsewhere in this algorithm. Since we are using Fiat-Shamir with aborts, there are multiple passes over line 15, each with a different w1. But as rho'' is unpredictable to the attacker, these w1 are the output of a pseudorandom function, and as such, after the standard argument of replacing the call with a random function, can be assumed uncorrelated.
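
For reference, the two call sites as I read them in Algorithm 7 (reconstructed here; H is SHAKE256, output lengths in bytes):

    line 7:  rho'' = H(K || rnd || mu, 64)           (rnd is 32 zero bytes in deterministic mode)
    line 15: c~    = H(mu || w1Encode(w1), lambda/4)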

All in all, these are standard arguments, there really is no basis to the security fear-mongering of external-µ.

I'm happy to rebuke the arguments more formally in a paper to accompany my RWPQC talk next week on the very topic, and also address the rest of the arguments.

Samuel Lee (ENS/Crypto)

unread,
Mar 19, 2025, 2:38:39 PMMar 19
to Moody, Dustin (Fed), Sophie Schmieg, pqc-forum, COSTA Graham, Mike Ounsworth, Cooper, David (Fed)
Thanks Dustin and Sophie,

It is a mischaracterization of the paper to say that our main concern is the security of exposing signing APIs accepting external mu which cannot be verified by the signing module.
We do say that the security properties are different, need to be reasoned about, and were not part of the review in the competition. Certainly an attacker with access to an external-mu based signing oracle for a given key has a strictly stronger primitive than an attacker with access to only ML-DSA or HashML-DSA.


Splitting the core signature / verification primitive of ML-DSA across different components increases the complexity of software and introduces risk. It is also a bespoke solution for the ML-DSA algorithm, which seems to stem fundamentally from the fact that using ML-DSA to directly sign large messages is operationally hard.

If we strongly believe that ExternalMu-ML-DSA-sign taking and signing an arbitrary 64-byte input is secure, why is this not the only ML-DSA signing primitive?
Then encoding of mu should simply be a suggestion for how protocols may choose to compute a hash of the message, context, and public key.

Alternatively, if the specific encoding of mu is important for security, why is it OK to perform this in a different FIPS module, with all of the risks of errors and incompatibility that this entails? (with an implicit ExternalMu-prehash algorithm which is not validated or CAVP certified, and where the construction of the inputs to mu may be written in application code)

The external interfaces explicitly defined by FIPS 204 for ML-DSA and HashML-DSA should be sufficient for real-world use, and introducing and standardizing on a third variant based on a note in the pseudocode for the internal algorithms is making the situation worse not better.


It especially makes very little sense to me that in FIPS 204 there is a note allowing external mu computation for ML-DSA verification. On the verification side, if a module can compute mu, it can compute the full verification (as it has access to the public key and access to the message). This seems only to open the door for mistakes which may affect interoperability or security.

Best,
Sam


Sophie Schmieg

unread,
Mar 19, 2025, 3:14:57 PMMar 19
to Samuel Lee (ENS/Crypto), Moody, Dustin (Fed), pqc-forum, COSTA Graham, Mike Ounsworth, Cooper, David (Fed)
I do agree with those points; it should be the only way to prehash ML-DSA, but unfortunately that ship has sailed. External-µ prehashing is morally equivalent to ECDSA prehashing, in that ECDSA was inspired by Schnorr signatures, and if anybody used Schnorr signatures, they might run into the same exact problem. In the end, Fiat-Shamir is able to turn any sigma protocol into a signature scheme, by hashing the message to be signed with the commitment of the sigma protocol, and the hash ML-DSA introduces to compute µ is only a performance trick to begin with, to allow for deterministic/robust signing without having to make two passes over the message. By including the public key in the message identifier, you make the scheme robust against fault injection attacks that you have in e.g. EdDSA. The unfortunate side effect of the way that ML-DSA is defined is that µ is not a "clean" hash, which seems to have people thinking that mathematically something different is happening than with classical prehashing.

While you are right that a party supplying µ technically has more power than with an API that does the hashing itself, this is only the case if you assume that they are able to break SHAKE256 (or, to put it slightly more formally, they have ever so slightly weaker bounds). But again, this mirrors the situation of classical signature schemes, where it is not a priori clear that being supplied the "hash" does not result in a key leakage vulnerability. It does not, but I think it results in a forgery attack for RSA-PKCS1 at the very least, as the multiplicative nature of RSA is not broken by the hash function call anymore (ignoring the padding: if you want to forge a signature for message m, find two numbers x and y that multiply to m, sign x and y, and multiply the signatures; the result is a signature for m. The padding complicates this attack, though). The usual counterargument to this concern is that the hashing party is trusted and anything short of a key leakage attack is of no concern. But as I've shown above, this is not necessary for ML-DSA, and µ-prehashing is provably secure and, assuming SHAKE256 is a secure hash function, does not give the hashing party any additional powers.

Note that NIST does not define real world APIs in FIPS 204. The real world APIs for signature functions usually look something like this:
CK_DEFINE_FUNCTION(CK_RV, C_SignInit)(
  CK_SESSION_HANDLE hSession,
  CK_MECHANISM_PTR pMechanism,
  CK_OBJECT_HANDLE hKey
);
CK_DEFINE_FUNCTION(CK_RV, C_SignUpdate)(
  CK_SESSION_HANDLE hSession,
  CK_BYTE_PTR pPart,
  CK_ULONG ulPartLen
);
CK_DEFINE_FUNCTION(CK_RV, C_SignFinal)(
  CK_SESSION_HANDLE hSession,
  CK_BYTE_PTR pSignature,
  CK_ULONG_PTR pulSignatureLen
);
As you can see, the key handle is specified in the C_SignInit call, and µ prehashing can easily be supported. Even libraries that have APIs that take digests (such as BoringSSL) can be made to fit into the external-µ scheme, by supplying a hash function that takes the public key into account. As the hash function has to be computed in a FIPS validated module anyhow, this is no different from using a hash function that does not include the public key. The complexity here comes from the fact that prehashing is not a very well-researched primitive, but that is not NIST's or ML-DSA's job to fix.
Note that HashML-DSA has exactly the same deployment complexity, as you still need to compute the hash in a different place, and you are not calling a properly defined signature scheme at the primitive level. This is inherent to FIPS 204 defining the primitive, so anything that you do there will result in this split primitive computation. The only way around that is to compute the hash as part of the protocol, and then sign this hash. That is IMHO a valid approach, especially when the same message is supposed to be signed with multiple keys, but notably in this case the choice of hash is fixed by the protocol, whereas HashML-DSA leaves it open (this is my main objection to HashML-DSA: it is not a well-defined signature scheme, as opposed to ML-DSA).

Samuel Lee (ENS/Crypto)

unread,
Mar 19, 2025, 7:24:53 PMMar 19
to Sophie Schmieg, Moody, Dustin (Fed), pqc-forum, COSTA Graham, Mike Ounsworth, Cooper, David (Fed)
I think we agree on what we are trying to achieve, but I want to highlight a key point of difference.

External-µ prehashing is morally equivalent to ECDSA prehashing

There is an important data flow difference between external-µ prehashing (for ML-DSA with an arbitrary length message) and traditional prehashing in the sense of DSA, RSA, ECDSA, or HashML-DSA / HashSLH-DSA.
That is, you must have the public key at the start of computing the hash of your message, and your hash is tied to that public key.
This may be a desirable property for some applications, but it is actively unhelpful for others.

This data flow change does require breaking changes to existing cryptographic APIs.
Concretely, using Windows CNG APIs to compute signatures looks like a combination of: BCryptHash + BCryptSignHash / BCryptVerifySignature
To support external-µ prehashing, we need to add a new non-standard ML-DSA-specific hash algorithm, or have applications encode ML-DSA logic in code which should just be doing basic crypto calls, or invent a new signature API specially for ML-DSA.
All of these options make it hard for existing applications to migrate to ML-DSA. Rather than only having to update algorithm selection for hashes / signatures, they additionally need to know, at the point where they start hashing, what signature algorithm they are hashing for, potentially rearchitect accordingly, and potentially change which APIs they are calling based on the algorithm choice.

We strongly recommend that PQC signature algorithms (ML-DSA and SLH-DSA) are used with the data flow of existing widely used signature algorithms (i.e. explicit pre-hashing).


Note that HashML-DSA equally has the same deployment complexity, as you still need to compute the hash in a different place, and you are not calling a valid defined signature scheme on the primitive level. This is inherent to FIPS 204 defining the primitive, so anything that you do there will result in this split primitive computation. The only way around that is to compute the hash as part of the protocol, and then sign this hash. That is IMHO a valid approach, especially when the same message is supposed to be signed with multiple keys, but notably in this case the choice of hash is fixed by the protocol, whereas HashML-DSA leaves it open (This is my main objection to HashML-DSA, it is not a well-defined signature scheme as opposed to ML-DSA).

I do not follow this part - specifically around HashML-DSA leaving the choice of hash open, and that being a bad thing.
As far as I am aware, it has always been the case for use of traditional signatures schemes that the protocol must define how to compute the hash, and the signature scheme signs/verifies the result of a hash.
Traditional signature schemes do not restrict the messages explicitly, they only allow a relatively short message as input and protocol designers need to figure out how to encode the message in that input.

HashML-DSA / HashSLH-DSA seem like a drop in replacement to existing schemes, each protocol must specify how to:
      encode the messages they wish to sign/verify (i.e. endianness, order of fields etc.), and
      how to sign that encoding (i.e. signature algorithm used, potentially with parameters including how to pre-hash the message, etc.)

With HashML-DSA, the data flow is that I construct the (generic) hash for my message in exactly the same way as other signature schemes, and then feed that to HashML-DSA sign/verify with the OID of the hash function I used.
The protocol can and should only allow signatures with a subset of hash functions (and signature algorithms) they like.
As the Hash OID is encoded in the computation of HashML-DSA's μ, there is no ambiguity for a given signature about what hash was used, unless we have a collision on the resultant mu (requires a break of SHAKE256), we have a collision in the pre-hash (the protocol is broken and needs to update/restrict hash choices), or the protocol has some flaw in how to encode messages uniquely (the protocol is broken).



There are 2 paths forward that make sense to me:
  1. IETF LAMPS standardizes on using pure ML-DSA with an explicit protocol-level prehash (this could even be restricted to SHAKE256, similar to the existing definition), not relying on external mu, or HashML-DSA. This kind of pattern is followed by other IETF PQC adoption. FIPS bans or strongly discourages external mu for ML-DSA.

  2. If NIST and the community agree that the construction of μ is not security relevant, then FIPS allows / standardizes on raw "external-mu-ML-DSA" being a signature primitive, with applications / protocols being free to encode their messages into the 64-byte μ in a way that makes sense for them. This should include allowing a message representative which is not tied to the public key. ML-DSA and HashML-DSA become more of a recommendation for how to build on top of this primitive, but they are not required for FIPS, as they are options that are compatible with the raw external-mu version. IETF still uses explicit pre-hashing for SLH-DSA, where there is no external mu option.

The current position of being halfway to supporting traditional pre-hashing seems like a bodge, and will make adoption of ML-DSA harder.

Best,
Sam


Antony Vennard

unread,
Mar 21, 2025, 8:25:34 AMMar 21
to Samuel Lee (ENS/Crypto), Sophie Schmieg, Moody, Dustin (Fed), pqc-forum, COSTA Graham, Mike Ounsworth, Cooper, David (Fed)
Dear Forum,

I normally just lurk but this caught my interest.

I'd like to point out that in any constrained environment it is not
always possible to buffer the entire message to be signed; rather a
streaming design is needed. I would point to
https://www.bearssl.org/tls13.html from Thomas Pornin that explains
this in the PureEdDSA context for TLS (see
https://datatracker.ietf.org/doc/html/rfc8032#section-4 for rfc).

I do not support making decisions for standards here or for TLS that
preclude such use cases. Conversely I understand the motivation for the
pure variants (and that in an ideal world that's what we'd use) and
support that too. There is not a single solution that works in all use
cases.

It is unfortunate that CNG uses BCryptSignHash and 90s CryptoAPI uses
CPSignHash as API names (see pkcs#11 that did not assume hash-and-sign)
but I think this is an opportunity to refine the API in discussion with
vendors and partners, and the PQ migration is a good time to do it due
to the many other challenges that will occur anyway in moving older
applications.

I do not think it is a requirement that Windows' CNG APIs match up
exactly with FIPS modules, but it can be done even for external-mu.
Taking the pure API, I would expect that CNG would not know, or care,
whether external-mu were used or not (and I believe this to be somewhat
the intention of the standard to enable this). The provider that
implements Pure MLDSA would decide (by virtue of its implementation or
a config switch) whether to delegate mu calculation to (say) a
validated software module or other local component, (or perform it
itself if it is the module); then send mu+other params to the signing
module, e.g. network or PCIe HSM or whatever. In the guidance from
NIST:
https://csrc.nist.gov/csrc/media/Projects/post-quantum-cryptography/documents/faq/fips204-sec6-03192025.pdf
then module A on page 2 would be the CNG provider implementation, and
module B a communication to a signature creation device. Or
alternatively, the "application" line in the diagram on page 3 is the
crypto provider which takes over talking to the modules. If done like
this I think it would alleviate your concerns around data flow -
notably there is no need to differentiate pure-with-external-mu/just
pure at the API level.

Kind regards,

Antony

Samuel Lee

unread,
Mar 25, 2025, 3:00:00 PMMar 25
to pqc-forum, Antony Vennard, Moody, Dustin (Fed), pqc-forum, COSTA Graham, Mike Ounsworth, Cooper, David (Fed), Samuel Lee (ENS/Crypto), Sophie Schmieg
Just to cross-post back here: I still think there is a bit of a gap in NIST's position here, which I expanded on in a LAMPS thread: https://mailarchive.ietf.org/arch/msg/spasm/4tj1trnasxKXTAfS8Bs2YflFvhc/

In my opinion, it is inconsistent to both require that mu is constructed in a specific way and not have any way to validate that the construction is performed correctly.
Can a single FIPS module expose ExternalMu-MLDSA.Sign / ExternalMu-MLDSA.Verify and SHAKE256, and be certified for use in ML-DSA and HashML-DSA with some text in the module's SPD? How does this impact CASTs?

I would recommend resolving this by either saying ExternalMu-MLDSA.Prehash and ExternalMu-HashMLDSA.Prehash are their own certifiable algorithms, or relaxing the specific constructions of mu specified in FIPS 204.
Where NIST sits on this would inform how we would recommend using ML-DSA in protocols with different constraints.


Also interested in any links to public materials for what you presented at RWPQC, Sophie!

Antony, there are a couple of cases w.r.t. breaking API changes.
If applications have the full material to be signed/verified in a contiguous buffer, then yes, external mu prehash is an internal implementation detail of cryptographic APIs, and it's not a huge deal.
However, if applications have previously streamed material to be signed/verified, they would have done this with explicit incremental hashing APIs before.
Now they either have to construct the full material to be signed/verified in a single buffer, and let the external mu prehash be an implementation detail of the cryptographic API, or they have to use a new streaming external mu hashing API. In either case it is a more invasive change to the application than simply updating algorithm choices and using the same data flow as before; it is not a drop-in replacement. HashML-DSA or some explicit pre-hash followed by ML-DSA are drop-in replacements for these applications.

Best,
Sam

Mike Ounsworth

unread,
Mar 25, 2025, 11:09:19 PMMar 25
to Samuel Lee, pqc-forum, Antony Vennard, Moody, Dustin (Fed), COSTA Graham, Cooper, David (Fed), Sophie Schmieg

Hi Antony, Samuel,

 

I am extremely interested in this thread, but I am not sure exactly what point is being made. Is there currently a problem with the external mu suggestion (and banning of HashML-DSA) that is contained in draft-ietf-lamps-dilithium-certificates? I do not believe that it says anything that would preclude streaming APIs.

 

---

Mike Ounsworth

 


Joost Renes

unread,
Mar 26, 2025, 5:57:13 PMMar 26
to Mike.Ou...@entrust.com, samue...@microsoft.com, pqc-...@list.nist.gov, ant...@vennard.ch, dustin...@nist.gov, graham...@thalesgroup.com, david....@nist.gov, ssch...@google.com

Hi Mike, all,

 

I have seen multiple claims that there is no difference in security for ExternalMu, so I’d like to share the property below in case not everyone is aware.

 

For an ExternalMu-MLDSA.Verify interface, it is trivial to find two (in fact many) public keys pk and pk* such that, for a given signature, ExternalMu-MLDSA.Verify(pk, mu) as well as ExternalMu-MLDSA.Verify(pk*, mu) pass.

This can be done by flipping the least significant bits of coefficients in t1 leading to many different pk* for a given pk.

Due to the rounding (Decompose) in UseHint, it is quite likely that the bit flips in t1 do not affect the value of w_1’ and therefore that verification passes (if it does for pk).

This does not happen for MLDSA.Verify or HashMLDSA.Verify because bit flips in t1 affect mu, and therefore ~c.

 

The introduction of ExternalMu at the very least raises the question of what happens to the security properties (in particular non-resignability) if the creation of mu and the verification cannot be guaranteed to use the same public key.

I believe that raising this question is valid and should be encouraged.

And I do not think we have a complete answer (at least not to my knowledge), so claiming no difference in security at all seems premature.

 

Kind regards,

Joost (on personal behalf)

 


Sophie Schmieg

unread,
Mar 27, 2025, 3:59:42 AMMar 27
to Joost Renes, Mike.Ou...@entrust.com, samue...@microsoft.com, pqc-...@list.nist.gov, ant...@vennard.ch, dustin...@nist.gov, graham...@thalesgroup.com, david....@nist.gov
external-µ is a multi-party computation scheme that requires the parties to trust each other (i.e. at most an "honest but curious" threat model). It does not really have a well-defined model for verification, in any case, as neither party actually computes something that is of interest by itself, and all inputs are presumed public.

The hashing party computes µ, and as such is the only one with knowledge of the message (assuming high entropy of the message). The verifying party returns a boolean only.

A dishonest verifying party can create forgeries at will by executing "return true". An honest verifying party has nothing secret to glean from µ, as verification is using public inputs only. So it is not clear what adversary this party would be concerned with in the case of a dishonest hashing party.

I do not think that external-µ for verification has a meaningful application, but I also do not understand what adversary you are trying to implement, could you define the ROM game you are trying to attack?

Samuel Lee (ENS/Crypto)

unread,
Mar 27, 2025, 8:59:49 PMMar 27
to Sophie Schmieg, Joost Renes, Mike.Ou...@entrust.com, pqc-...@list.nist.gov, ant...@vennard.ch, dustin...@nist.gov, graham...@thalesgroup.com, david....@nist.gov
To address:

"Is there currently a problem with the external mu suggestion (and banning of HashML-DSA) that is currently contained in draft-ietf-lamps-dilithium-certificates?"
I broadly have 2 concerns:

  1. Making sure we have a consistent and full understanding of the properties of ExternalMu ML-DSA, associated benefits/risks, and interactions with FIPS and certification
  2. Setting the precedent of banning HashML-DSA when it seems to be a better choice in many domains

-- 1

I think I agree with Sophie's reasoning about ExternalMu-ML-DSA.Sign not leaking anything from sk for adversarial mu.


However, ExternalMu certainly complicates reasoning about what a signature guarantees, and it is a non-trivial change.
When mu is computed in the module which owns the private key, assuming the module computes tr correctly, ML-DSA has the property of non-resignability.
While I do not see huge real-world benefits of this guarantee, it is part of the rationale for disallowing HashML-DSA in draft-ietf-lamps-dilithium-certificates.

To build on Joost's point:
With mu being computed outside of the boundary of a module, there is a real possibility to break this property if both signer and verifier have a shared buggy implementation of computing mu.
Some ways such a bug in the mu computation might happen in practice:
  • tr is computed as H(pk[0..31]) (wrong length passed to SHAKE256)
  • tr is computed as 64 0 bytes (SHAKE256 not invoked correctly, or some error return ignored)
  • tr is computed as H of some hardcoded test public key (totally compliant in tests with the test key, but generates/verifies non-standard signatures for other keys)

An attacker with knowledge of the buggy implementation, the public key, and the signature, but not M, can generate a second public key which will verify that signature with M with the buggy implementation.
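
To make that concrete, a minimal sketch of the correct computation versus the first two buggy variants above (tr = H(pk, 64) per FIPS 204; SHAKE256 via Python's hashlib purely for illustration):

import hashlib

def tr_correct(pk: bytes) -> bytes:
    return hashlib.shake_256(pk).digest(64)

def tr_buggy_truncated(pk: bytes) -> bytes:
    return hashlib.shake_256(pk[:32]).digest(64)   # wrong length passed to SHAKE256

def tr_buggy_zeroed(pk: bytes) -> bytes:
    return bytes(64)                               # SHAKE256 never actually invoked

Any mu derived from one of the buggy tr values no longer commits to the actual public key, which is what opens the door to the second-public-key behaviour described above.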

I recommend that if there is no use-case for ExternalMu-ML-DSA.verify, it should not be allowed by NIST.
Removing the option for ExternalMu-Prehash in the verification path would reduce scope for errors of this form; and there is nothing stopping a single module from implementing a certifiable streaming ML-DSA.verify API.


Splitting the key material used in signing across components also increases fragility of correct implementations.

Say we have some isolated component (TPM / HSM / etc.) which generates an ML-DSA keypair and exports the public key to some non-isolated SW component.
The non-isolated SW maintains this public key along with a handle to the private key, and computes mu locally in the signing case.
If the non-isolated component's copy of the public key is slightly corrupted between when it was generated and the being saved, we have a situation where we have an unreliable public key.

As signing / verification only uses a subset of the coefficients in the secret and public t1, honest and algorithmically correct signature generation with this corrupted public key can return a signature which may or may not be verifiable against the public key.
The problem here is that a signature from ExternalMu-ML-DSA.Sign does not commit to the whole contents of the secret key, whereas (Hash)ML-DSA.Sign does (provided the module computed tr correctly from the secret key).

For traditional signature schemes a single sign+verify test would, with almost mathematical certainty, indicate public key consistency.
If FIPS allows a module to only support ML-DSA key gen, ExternalMu-ML-DSA sign, and public key export, I do not think there is a good process for an operator to check the consistency of the exported public key and the internally held private key.

I also nodded to this fragility in discussion of pairwise consistency tests here (question b of this thread).
While it is possible for the same kind of error to occur within a single module implementing the full ML-DSA or HashML-DSA, the key generation process should be well tested by FIPS certification.
In contrast, transporting the public key from one module to another is out of scope of certification, and could well be more error prone.

Perhaps this could be remedied by requiring that any module supporting ExternalMu-ML-DSA.Sign also implements either ML-DSA.Sign or HashML-DSA.Sign, and then the non-isolated component can check the public key against a non-ExternalMu signature?


Beyond these concerns, I do not see that there is some fundamental attack on ExternalMu-based implementations.

I do have more concern with ExternalMu than HashML-DSA in terms of complexity/risk.
If ExternalMu pre-hash is still recommended by draft-ietf-lamps-dilithium-certificates-07, the RFC should flesh out potential implementation concerns and pitfalls in the Pre-hashing appendix.
While the current text has a nod to opening a window for mismatch between tr and sk, I think this elides important details above, and I am not confident there are no other weird problems.

-- 2

I stand by the point that doing pure signatures of arbitrary length data seems to cause more problems than it solves.

Limiting data sizes that key owners need to ingest, either by using the HashML-DSA construction, or application-level prehash + pure ML-DSA, seem like a better path than using ExternalMu.
These solutions are also applicable to EdDSA and SLH-DSA, the only two other signature algorithms with both pure and pre-hash versions, neither of which has an ExternalMu option.
These options are also drop in replacements for applications wanting to use PQC in an existing application flow which relies on streaming the data to be signed, rather than having a change in data flow which only applies to ML-DSA (both EdDSA and SLH-DSA have no way to introduce a streaming API for pure signing given they need to do 2 passes over the data to be signed).

I do not have a huge concern with ExternalMu being an option for computing ML-DSA signatures if it is well understood.
I do have concerns with it being pushed as the only option for any ML-DSA signatures needing a pre-hash or streaming flow.

Best,
Sam



Sophie Schmieg

unread,
Mar 28, 2025, 4:30:41 AMMar 28
to Samuel Lee (ENS/Crypto), Joost Renes, Mike.Ou...@entrust.com, pqc-...@list.nist.gov, ant...@vennard.ch, dustin...@nist.gov, graham...@thalesgroup.com, david....@nist.gov
It is not the only option; HashML-DSA is still a NIST standard and should be FIPS compliant as such. As far as I'm aware, NIST has no intention of removing it. However, it is not compliant with CNSA 2.0 and MUST NOT be used for ML-DSA in LAMPS, and I have a draft pending with CFRG marking it as NOT RECOMMENDED within IETF standards. But if you do not want to align with those downstream standards, there is nothing stopping you from using HashML-DSA.

Antony Vennard

unread,
Jun 19, 2025, 11:09:37 AMJun 19
to Mike Ounsworth, Samuel Lee, pqc-forum, Moody, Dustin (Fed), COSTA Graham, Cooper, David (Fed), Sophie Schmieg
Hi Mike,

My apologies for the delay, life got in the way. I had two specific
points: 1) the boundary for FIPS-validated modules does not have to be
the boundary for Microsoft's APIs for implementing cryptography as far
as I'm aware (but I defer to people who work at vendors).

2) The main problem in constrained devices is, as Thomas' blog says,
key follows certificates because the usual certificate order is end-
entity -> ... -> subCA under root. To validate the certificate, it is
necessary to wait for the key to appear in the next certificate, while
buffering the previous certificate.

Hash-MLDSA (and EdDSA-ph) avoid this. The certificate can be both
processed and fed through a hash function without the need to buffer
it.

Embedded devices vary. The best taxonomy I am aware of is the one on
page 2 of this paper: https://www.s3.eurecom.fr/docs/ndss18_muench.pdf
- reworded, and slightly altered, they are:

Type-I: Runs Linux or something with an MMU.
Type-II variant a: cannot run Linux, uses one of the embedded "OS"es.
Type-II variant b: programmer interacts directly with hardware (type 3
in the paper; for us the distinction is unimportant).

The presence or absence of an MMU alone doesn't tell you much about the
resources of the device but generally more processor features =
bigger/more expensive device. So I use it as a guideline.

The last time I looked based on open source information a popular type
of sports smartwatch used Cortex-M3 chips. An MLDSA-65 certificate in
DER form takes ~5.3k, PEM ~7.3k. RAM capacity varies depending on the
chip but if we are considering these:
https://toshiba.semicon-storage.com/eu/company/news/2023/06/micro-20230627-1.html
for example, take the 66kb option. That makes the requirement around
11% of available RAM to buffer such a certificate.

This doesn't necessarily make it impossible to support, depending on
the order of the certificates. I don't think that's contained in the
LAMPS draft though and (afaik) would require TLS 1.3, so that the
signature key could be available before the signed certificate.

Kind regards,

Antony
