NIST requests feedback on additional SLH-DSA(Sphincs+) parameter set(s) for standardization.


Dang, Quynh H. (Fed)

Mar 3, 2025, 2:11:03 PM
to pqc-forum

Hi all,

 

NIST previously requested feedback on the benefits of a variant of SLH-DSA with a smaller maximum number of signatures (see: pqc-forum link). Based on this feedback, NIST has decided to proceed with such a variant.

 

To guide the discussion, the publication https://eprint.iacr.org/archive/2024/018/1737032157.pdf lists and discusses various parameter sets for this version of SLH-DSA. We invite community feedback on which specific parameter set(s) should be included, with a preference for minimizing the number of parameter sets to enhance interoperability.

 

Please let us know which one or two parameter sets would best suit your use cases and explain why. If none of the sets in the paper are suitable, please specify the characteristics you would require.

 

In general, we favor parameter sets that exhibit slow (conservative) security degradation under moderate overuse. For example, we prefer C-11 over C-1, despite C-11 being slower and larger; see Table 4 (Selection set: 128, 112, 2^20, 10,000,000) on page 14 of the paper. If you prefer something like C-1 over C-11, let us know why you think the slower speed and larger signature size of the latter would have a noticeable impact on your use case(s).


Similarly, if you prefer something like D-1 over D-8, let us know why you think the larger signature size of the latter would have a noticeable impact on your use case(s); see Table 5 under the Selection set (128, 112, 2^20, 100,000,000) on page 15 of the paper.

 

You can provide comments in this thread or email them to pqc-co...@nist.gov. Please note that submitted comments may be made public. We would appreciate responses by 4/14/2025.

 

Regards,

Quynh Dang, on behalf of the NIST PQC team.

 

Andrey Jivsov

Mar 3, 2025, 4:00:43 PM
to Dang, Quynh H. (Fed), pqc-forum
Just a general comment:

I think this is a great project to take on for firmware signing (and possibly related signing, such as attestation). In these cases, the number of signatures is low enough that there is no degradation of security for such a variant of SLH-DSA.

LMS and XMSS are great options on the device side, but there is a strong dislike for them on the signing infrastructure side, which the sought SLH-DSA variant should solve because it would match the stateless operational properties of ML-DSA.

There may be a need to add parameters for ~2^24 signatures (something between 2^20 and 2^30).




John Mattsson

Mar 4, 2025, 10:56:44 AM
to Dang, Quynh H. (Fed), pqc-forum

Hi Quynh,

Thanks for the great news!

 

We (Ericsson) will discuss and provide feedback.

 

I think signatures designed for fewer than 2^64 signatures could be useful in many use cases if they have performance benefits (smaller signatures and/or faster verification) compared to FIPS 205. Most signature keys are used far fewer than 2^64 times, and as most signature keys have an expiry time, it is often practically impossible to even come close. Cloudflare stated in 2023 that it handles on average 46 million requests per second. During 90 days, which is the standard lifetime of HTTP certs, that would be about 2^48 signatures even if resumption were not used. Most systems handle far fewer than 46 million requests per second.
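
As a quick sanity check of that 2^48 figure (Python):

import math
reqs_per_sec = 46_000_000          # Cloudflare's stated 2023 average
seconds = 90 * 24 * 3600           # 90-day certificate lifetime
print(math.log2(reqs_per_sec * seconds))   # ~48.3, i.e., about 2^48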


Andrey Jivsov wrote:

>but there is a strong dislike for them on the signing infrastructure

>side, which the sought SLH-DSA variant should solve because it would

>match the stateless operational properties of ML-DSA.

 

Yeah, we had plans for XMSS/LMS even though we did not like the statefulness. Forbidding key export killed these plans, and the CNSA 2.0/BSI disagreement on multi-tree did not help. Now we are planning to only support SLH-DSA-SHAKE-s, ML-DSA + Ed448 hybrids, and standalone ML-DSA. No state, no pre-hash, and no SHA-2.

 

Cheers,

John

 

Jade Philipoom

Mar 6, 2025, 9:24:14 AM
to John Mattsson, Dang, Quynh H. (Fed), pqc-forum
Hi Quynh,

For the OpenTitan project’s SLH-DSA use case (firmware signing), the
AAA-2 parameter set looks like the best option at level 1, and
analogously PPP-2 at level 3.

AAA and PPP are preferable over the other suites with faster signing,
because signing is infrequent in our context and we don’t need it to
be fast.

AAA-2 and PPP-2 are preferable over AAA-1 and PPP-1 because of
verification time; although AAA-1 and PPP-1 have slightly smaller
signatures, the verification needs over 10x as many hashes, making it
significantly slower (roughly 2x?) than the FIPS-205 128s parameters.
The verification speed of 128s is already a challenge in our context,
where signature verification happens multiple times every boot.
Verification speed, in addition to signature size, is a key benefit we
would hope to get from alternative parameter sets; AAA-2 is 4-5x
faster on our platform than 128s.

AAA-2 and PPP-2 are preferable over the higher-numbered options in the
AAA and PPP suites because of the gradual increase in both signature
size and verification time.

Sacrificing signature size and verification speed for overuse
resistance would strongly negatively impact OpenTitan’s use case.
Larger signatures mean less space to store application code, and
slower verification speeds mean sluggish boot times. Both of these are
difficult optimization points already for embedded contexts, where
every kilobyte and millisecond matters, and would limit the number of
boot stages for which we could use SLH-DSA.

The overuse safety of SLH-DSA is pretty good for all parameters; with
the AAA-2 parameter set, we could exceed 2^20 by 9x and still retain
112 bits of security. Given the slow signing time, this would likely
take over a decade of continuous signing, and firmware signing is a
tightly controlled, infrequent, and non-continuous process. For
contexts that require higher overuse resistance, it’s always possible
to use a FIPS-205 parameter set or a set that allows more signatures
at the target level (2^30 instead of 2^20, for example).
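
A rough sketch of that estimate (Python; the ~60 s per signature is an assumed figure for a slow-signing set of this class, not a measurement):

sigs = 9 * 2**20                   # exceeding the 2^20 limit by 9x
secs_per_sig = 60                  # assumed slow signing time
years = sigs * secs_per_sig / (3600 * 24 * 365)
print(f"{years:.1f} years of continuous signing")   # ~18 years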

We’ve already produced chips using the FIPS-205 parameters (128s
sets), but options with smaller signatures and faster verification
times would be very useful. Thanks for this effort!

Best,
Jade

Dang, Quynh H. (Fed)

Mar 6, 2025, 11:07:12 AM
to pqc-forum, ja...@opentitan.org

Hi Jade,

 

(Hi all, the discussion below is only about technical facts; it does not present or imply any opinion of the NIST PQC team on the matter. I am just trying to gather more useful information.)

 

Thank you for sharing your use case and thoughts on the parameter sets.

 

dd-14 (page 34) is about 11% smaller than AAA-2 (2928 vs. 3264 bytes); its signing time is 28,315,644 / 449,314,816 (about 6%) of AAA-2's, but its verification is about 10.5 times (4758/451) slower than AAA-2's.
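
In Python, reproducing those ratios from the paper's numbers:

print(f"{28_315_644 / 449_314_816:.3f}")   # signing: ~0.063, about 6%
print(f"{4758 / 451:.1f}")                 # verification: ~10.5x slower
print(f"{1 - 2928 / 3264:.3f}")            # signature size: ~10% smaller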

 

Would dd-14 provide better performance for your use case? On average, how many signatures get verified in a boot? Do you have the distribution of the number of signatures per boot for the devices that your community deals with?

 

Some users might target no more than 2^10 signatures per key; in that case, dd-14 would provide very good overuse safety, in my opinion.

 

Regards,

Quynh.

 

 


Jade Philipoom

Mar 6, 2025, 11:43:09 AM
to Dang, Quynh H. (Fed), pqc-forum
Hi Quynh,

> dd-14 (page 34) is about 11% smaller than AAA-2 (2928 vs. 3264 bytes); its signing time is 28,315,644 / 449,314,816 (about 6%) of AAA-2's, but its verification is about 10.5 times (4758/451) slower than AAA-2's.
>
> Would dd-14 provide better performance for your use case ?

Unfortunately no, because the verifying time is still 10x slower than
AAA-2. Signing time isn't important to us, but verifying time is, and
at the point it's that slow we might even be forced to accept the
larger signatures of 128s instead. The dd-15 parameters would make
sense though; 2^10 as a bound is acceptable (although I might feel
more comfortable with 2^20, it depends), but 10x slower signature
verification is not.

> On average, how many signatures get verified in a boot ? Do you have the distribution of the number of signatures in a boot for the devices that your community deals with ?

It depends on the specific setup, but 2-3 is a reasonable ballpark.
However, target total boot times in this context are very fast
(milliseconds) and most of that time is signature verification.

> It is possible to have users targeting not more than 2^10 signatures per key, dd-14 would provide a very good overuse safety in this case in my opinion.

I think optimizing too much for overuse safety here would be really
regrettable. By definition, the folks interested in these parameter
sets have contexts where they're confident for some reason that
signing won't happen very often. In SLH-DSA this unlocks a lot of
speed and size benefits! Going for high overuse safety would mean that
use cases like OpenTitan would not be able to take advantage of that
fact, while standardizing graduated options at 2^20, 2^30, 2^40 for
example would allow people to pick the next size up if their context
has any chance of going substantially over the bound. We're talking
about the difference between exceeding the limit by 10x and 100x; from
my perspective none of these parameters are actually weak against
overuse.

Best,
Jade

Joost Renes

Mar 6, 2025, 5:24:53 PM
to quynh...@nist.gov, pqc-...@list.nist.gov
Hi Quynh,

Software/firmware signing being the main use case for SLH-DSA, I believe parameter sets should at least be selected that optimize for that specific use case. That means minimizing the signature size and verification time (in a "reasonable" way with respect to signing time, whatever that means). Signature creation happens once (or a few times), while verification happens many, many times across different devices (updates) or even on the same device (boot).

Others can speak up if they have other use cases in mind, but if there is a need to minimize signing time, then I think that should be captured in a separate parameter set. I agree with the intent to minimize the number of parameter sets for inclusion, but if you go for a single selection you may end up compromising across all dimensions and no one will be happy.

Kind regards,
Joost (representing only myself)


Stefan Kölbl

Mar 10, 2025, 2:06:45 PM
to Joost Renes, quynh...@nist.gov, pqc-...@list.nist.gov
Hi Quynh,

We would support adoption of the AAA-2/PPP-2 parameter sets (or ones with a matching performance profile) for the use case of firmware signing, for similar reasons as already stated. A signing time of O(minutes) with the current reference implementation is acceptable, and, as already stated, verification time in those applications is crucial. For security levels 1 and 3, any parameters with d > 1 are therefore less preferable due to the additional cost of verifying the additional OTSs in the tree.
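
To illustrate roughly why d > 1 costs verification time, here is a sketch of a cost model in Python (approximate hash counts, not exact FIPS 205 formulas; the parameter values below are made up for illustration):

def verify_hashes(d, h, k, a, wots_len, w=16):
    fors = k * (a + 1)                  # FORS leaves + auth paths
    wots = d * wots_len * (w - 1) // 2  # avg WOTS+ chain steps per layer
    return fors + wots + h              # plus h Merkle auth-path hashes

# Hypothetical split of the same total height h=18 into 1 vs. 2 layers;
# each extra layer adds roughly wots_len*(w-1)/2 hashes to verification.
print(verify_hashes(d=1, h=18, k=14, a=12, wots_len=35))
print(verify_hashes(d=2, h=18, k=14, a=12, wots_len=35))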

If we consider only the parameters which provide fast verification (AAA-2 to AAA-6, PPP-2 to PPP-6), we don’t think the increased overuse safety of some of the parameters provides a meaningful risk reduction in practice. It is always less than a factor of 10 compared to the parameters with the smallest signature size (AAA-2, PPP-2), hence these are preferable. We also anticipate that most firmware signing use cases will already use significantly less than 2^20 signatures, and the general risk of overuse is low due to the controlled signing environment.

Kind regards,
Stefan

John Mattsson

Apr 14, 2025, 10:26:16 AM
to quynh...@nist.gov, pqc-...@list.nist.gov

Dear NIST,

 

Thanks for your continuous efforts to produce well-written, user-friendly, and open-access security documents. We do not think stateful signatures are a robust solution. We welcome NIST’s plan to standardize additional SLH-DSA parameter sets [1–2]. For many use cases, stateless signature parameter sets designed for significantly fewer than 2^64 signatures pose no issues. We think that NIST should standardize more than one additional parameter set.

 

Please find below Ericsson’s detailed feedback:

 

  • We are only interested in parameter sets that are “hedged”, “pure”, and use SHAKE256. BSI [3] recommends only the “hedged” variants and prefers the “pure” versions of both SLH-DSA and ML-DSA. Similarly, CNSA 2.0 [4] approves only the “pure” version of ML-DSA. SLH-DSA using SHA2 is considerably more complex than SLH-DSA using SHAKE. Since ML-KEM and ML-DSA are based on SHA-3, it is natural to switch to SHA-3 when migrating to quantum-resistant algorithms instead of sticking to SHA-2 [5-6].

 

  • Regarding security levels, we have strong confidence in standalone SLH-DSA at security level 1 and higher trust in the security of SLH-DSA-SHAKE-128 than in RSA-3072 against classical attackers. We hope that the European Union [7] will align with NIST and the UK NCSC in recommending all security levels of standalone SLH-DSA for industrial use cases. However, since many early government PQC recommendations focus on national security systems, there may be demand for security level 3 or 5.

 

  • In some use cases, private keys used for signing other certificates are used very infrequently. For instance, in our software signing setup, root CA keys are used for approximately 10 signatures over their entire lifetime, while intermediate CA keys are used around 20 times. Fast verification is a key requirement, while key generation and signing speed are not important. Signature size is not a significant concern in our use case. The signing process is manual and requires human intervention.

  • End-entity code-signing certificates are used for significantly more signatures than the CA certificates, though still far fewer than 2^64. If signing is integrated as an automated step in the build process, producing a few thousand signatures per day, it would take approximately a decade to reach a total of 2^20–2^30 signatures. In such scenarios, where signing performance directly affects build times, the signing time must remain within reasonable limits.

 

  • For software and firmware signing for use in constrained devices or over constrained radio, the signature size can be very important, particularly when dealing with certificate chains. Large signatures can limit the amount of signed data, and the energy cost associated with radio transmission is high compared to computation [8].

 

  • We are not convinced that overuse safety is a critical requirement. In the case of manual signing, any overuse would be deliberate and indicate a compromised CA. For automated signing, we would carefully select parameters to ensure that, even in worst-case scenarios, the number of signatures remains within the defined limits over the certificate's lifetime. Exceeding those limits would be considered a security incident, regardless of any built-in overuse safety. There is also a strong industry trend toward reducing the lifetime of end-entity certificates. In automated signing setups, mechanisms for quickly replacing a compromised key must already be in place. In this context, emphasizing overuse safety may lead to unnecessary performance degradation without offering meaningful security benefits. We believe it would be preferred for NIST to standardize multiple parameter sets (2^10, 2^20, 2^30, 2^40) to allow for flexibility based on different use cases.

We support the standardization of the AAA-2/PPP-2 parameter sets, as suggested by Jade Philipoom. However, for manual signing of certificates, 2^10 would be sufficient. We believe 2^20 is too limited for automatic signing scenarios, such as when signing is integrated into the build process. For such use cases, we think NIST should standardize parameter sets designed to support 2^30 or 2^40 signatures, with reasonable signing performance. There is a significant need for flexible parameter sets to support adaptation across diverse industrial use cases. In general, we believe that “additional parameters” would be preferred for all our SLH-DSA use cases.

Cheers,
John Preuß Mattsson,
Expert Cryptographic Algorithms and Security Protocols, Ericsson



Links to the references can be found in the pdf version of the comments
https://emanjon.github.io/NIST-comments/2025%20SLH-DSA%20parameters.pdf

 

Thom Wiggers

Apr 16, 2025, 11:03:03 AM
to Dang, Quynh H. (Fed), pqc-forum, pqc-co...@nist.gov, Amber Sprenkels
Dear Quynh, all,

Apologies for the delay in submitting our comments, but we hope that they are still helpful.

It is our view that SLH-DSA with fewer maximum signatures, which we’ll refer to as SLH-DSA-few, may be particularly useful on restricted platforms for use cases such as firmware signing. These devices may not be able to accommodate a “large” signature scheme implementation such as ML-DSA, while SLH-DSA, superficially, is just a few loops around a hash function (which may even already be present and/or implemented in hardware), so its code size is just a fraction of, e.g., that of ML-DSA. Firmware signing is a particularly clear use case in which a small number of signatures is sufficient.

# Preferred parameters
We prefer the AAA-3 parameter set, which offers 3280-byte signatures, 19x overuse protection, and reasonably efficient signature verification. AAA-1 is too slow and we think that the 6% savings in size are not worth sacrificing the verification efficiency. AAA-3 offers more overuse protection than AAA-2 essentially “for free”, as we don’t see a meaningful difference in signing time if signing time is already on this order of magnitude.

For 2^30 signatures, we think that there may be some embedded use cases, such as individually signed firmware updates to large numbers of devices. As this use case may require online signing, signing time may be a relevant factor.  At this time, we do not have a strong preference for any parameters. 

# Slow signing times
The low number of total allowed signatures in SLH-DSA-few, especially when limited to 2^20 signatures, implies that signing is an infrequent event (as in the firmware signing use case). It is our view that this may warrant keeping count of the number of generated signatures, ensuring that we do not exceed the limit. However, in this “state-light” setting, the scheme is still more robust than the stateful schemes against scenarios like backup recovery or a faulty counter update.
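
For illustration, a minimal counter sketch in Python (the file name and limit are hypothetical; a real deployment would want atomic updates and tamper resistance):

import json, os

COUNTER_FILE = "sig_count.json"    # hypothetical location
LIMIT = 2**20

def charge_one_signature():
    count = 0
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            count = json.load(f)["count"]
    if count >= LIMIT:
        raise RuntimeError("signature budget exhausted; rotate the key")
    with open(COUNTER_FILE, "w") as f:
        json.dump({"count": count + 1}, f)

Note that an undercount here only erodes the overuse margin gradually, whereas a state error in a stateful scheme breaks security outright.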

Conversely, we also think that (very) large signing times may serve to discourage developers from using SLH-DSA-few parameters in places that sign (very) often, as the large signing time will signal that this is a special-use-case scheme and, if in online code paths, lead to a bad user experience.

# Other variants
We do not think that it is helpful to give “in-between” parameters that balance the performance of the -s and -f variants of SLH-DSA, and we do not think that there are use cases for which 2^40 or 2^50 parameters for SLH-DSA are suitable: in both those cases, ML-DSA is an all-round better option at the same security level: similar in size or smaller, arbitrary numbers of signatures, and much faster in both signing and verification.

Regards,

Amber Sprenkels
Thom Wiggers
PQShield



John Mattsson

Apr 16, 2025, 12:05:08 PM
to Thom Wiggers, Dang, Quynh H. (Fed), pqc-forum, pqc-co...@nist.gov, Amber Sprenkels

Hi,

One thing I forgot to write in Ericsson’s comments is the critical importance of SLH-DSA. While US NIST, UK NCSC, and the Canadian Cyber Centre approve all NIST PQC algorithms, many EU countries only recommend a subset of the NIST algorithms with additional requirements, such as mandating hybridization for lattice-based algorithms.

 

While hybrid KEMs are relatively straightforward and already widely deployed, my impression is that many large ICT companies view hybrid signatures as overly complex and generally undesirable. Although standalone ML-DSA may be viable for certain markets, its use is currently strictly forbidden in several European countries. In this context, standalone SLH-DSA emerges as the only globally reasonable option.

For many infrastructure uses of TLS, DTLS, QUIC, and IPsec, the size of the certificate chain and signing/verification time are not critical factors. In that regard, it's fortunate that NIST chose to standardize both ‘f’ and ‘s’. My current expectation is that SLH-DSA will see very widespread adoption, much more than I would have guessed a year ago.


Cheers,
John

 



Alex Railean

Apr 17, 2025, 3:37:43 AM
to pqc-forum
Hi John,


> Although standalone ML-DSA may be viable for certain markets, its use is currently strictly forbidden in several European countries.
Can you provide some specific references?

I am aware of some European states that recommend specific parameters for ML-DSA, or encourage the use of hybrids, but I cannot recall any that outright prohibit it.


Alex

Stephan Mueller

Apr 17, 2025, 3:40:35 AM
to pqc-forum, Alex Railean
On Thursday, 17 April 2025, at 09:37:43 CEST, Alex Railean wrote:

Hi Alex,
The German BSI TR-02102 strongly suggests hybrids for ML-DSA and ML-KEM (not for SLH-DSA).

Last week, I had the luxury of a face-to-face meeting with a person from BSI working on PQC, and he confirmed that if there are valid reasons for not having hybrids, BSI will at least consider and allow it.

Ciao
Stephan


Alex Railean

Apr 17, 2025, 4:43:35 AM
to pqc-forum
Hi Stephan,

Thanks for the quick reaction!

> The German BSI TR02102 strongly suggests hybrids for ML-DSA [..]
That I am aware of, but "suggesting strongly" is not the same as prohibiting. The text in BSI TR-02102-1 from January 2025 [1] is neutral, in my interpretation.

This is what is written about ML-DSA in section 5.3.4.2
{
[..] This Technical Guideline recommends the “hedged”-variant of ML-DSA and all parameter sets according to NIST Security Strength Categories 3 and 5 from [104], i.e.
• ML-DSA-65, in the “hedged”-variant according to [104],
• ML-DSA-87, in the “hedged”-variant according to [104].
Remark 5.10 The “pure”-version of ML-DSA is preferred; for special applications, the “pre-hash”version of ML-DSA can also be used in accordance with the remarks in [104].
}

And this is a statement about digital signatures in the introduction of section 5.3.4
{
This Technical Guideline recommends the use of a quantum-safe signature scheme only in combination with a classic signature scheme. Hybridisation should be implemented in such a way that the hybrid signature scheme is secure as long as at least one of the schemes is secure.
}

They recommend the use of hybrids, but there's a gap between "recommendation" and "prohibition". If what the authors are trying to convey is strong prohibition - I think the text does not convey that.

My first thought was that perhaps the German version of the document has stronger wording, but it still looks neutral to me - recommending hybrids, but not forbidding the pure PQ algorithm: "[..] empfiehlt den Einsatz eines quantensicheren Signaturverfahrens grundsätzlich nur in Kombination .." (roughly: "recommends the use of a quantum-safe signature scheme in principle only in combination .."). Note, my command of German is not my sharpest skill :-)


I know that some folks from the BSI are members of this group, I hope they can clarify.


Alex

Stephan Mueller

Apr 17, 2025, 5:12:19 AM
to pqc-forum, Alex Railean
On Thursday, 17 April 2025, at 10:43:35 CEST, Alex Railean wrote:

Hi Alex,

> Hi Stephan,
>
> Thanks for the quick reaction!
>
> > The German BSI TR02102 strongly suggests hybrids for ML-DSA [..]
>
> That I am aware of, but "suggesting strongly" is not the same as
> prohibiting. The text in BSI TR-02102-1 from January 2025 [1] is neutral,
> in my interpretation.

Exactly - that is what I tried to highlight. There is no prohibition, but you
need to provide an argument if you deviate from the recommendation.

Ciao
Stephan



Daniel Apon

Apr 17, 2025, 1:52:21 PM
to Stephan Mueller, pqc-forum, Alex Railean
Maybe an appropriate contact at BSI on this topic is Stephanie Reinhardt; cf. https://rwpqc.org


Stephan Ehlen

Apr 24, 2025, 4:11:59 AM
to pqc-forum, Stephan Mueller, Alex Railean
Hi,

sorry for the delayed response.

I'd like to point to a previous answer I gave to a similar question:
https://mailarchive.ietf.org/arch/msg/openpgp/rwzt-vG6QconOITuBChAIfD4ktw/

In particular, let me cite again from the technical guideline TR-02102-1:

"However, no claim is made to completeness, that means mechanisms not listed are not necessarily considered to be insecure by the BSI. Conversely, it is also wrong to conclude that cryptographic systems which only use the mechanisms recommended in this Technical Guideline as basic components are automatically secure: The requirements of the concrete application and the linking of different cryptographic and non-cryptographic mechanisms can lead to the fact that the recommendations made here cannot be implemented directly or that vulnerabilities arise."

Also, in general the technical guideline is a recommendation. But it can become mandatory depending on the context, for instance in some government applications.

I hope that helps.

Best,
Stephan

Dang, Quynh H. (Fed)

May 21, 2025, 7:49:47 AM
to Thom Wiggers, pqc-forum, Amber Sprenkels

Hi Thom, Amber and all,  

 

Thank you for your comments.

 

Thom and Amber wrote “For 2^30 signatures, we think that there may be some embedded use cases, such as individually signed firmware updates to large numbers of devices. As this use case may require online signing, signing time may be a relevant factor.  At this time, we do not have a strong preference for any parameters.”

 

That is a very good point.

 

Do we know any specific systems for which ML-DSA-44 (pk: 1312 bytes + sig: 2420 bytes = 3,732 bytes) does not work well for that use case?

 

SLH-DSA-SHAKE/SHA2-128s signing takes about 2 million hashes.

 

At level 1 security with a 2^30 limit, the smallest options are G-1/2/3, but they have much slower signing (about 94, 84, and 67 million hashes, respectively) than SLH-DSA-SHAKE/SHA2-128s. Their signature sizes are 3568, 3664, and 3696 bytes respectively, just a bit smaller than 3,732. After adding the 32 bytes of SLH-DSA-SHAKE/SHA2-128s' public key, the size differences are even smaller.

 

I think we should stay on the safer side and not assume that everyone will do the right things for security.

 

Regards,

Quynh.

 

 


Amber Sprenkels

May 26, 2025, 7:06:27 AM
to Dang, Quynh H. (Fed), Thom Wiggers, pqc-forum
Dear Quynh,

Do we know any specific systems for which ML-DSA-44 (pk: 1312 bytes + sig: 2420 bytes = 3,732 bytes) does not work well for that use case?

For us this calls back to our point about code size. To clarify: we do see use cases in which we want to reduce the code size as much as possible for crypto implementations. In our experience, small Armv7m ML-DSA implementations use around 10K–20K of code (excluding Keccak). However, because of the simplicity of SPHINCS+, the code size (both in terms of memory and potential for implementation bugs) would be much smaller, more on the order of 5K. [pqm4, code for all algorithms excluding hashing]

For a device that has only a very limited amount of code-size budget available for the firmware updating mechanism, we see scenarios in which it would definitely be beneficial to use something like SPHINCS-few. One example that comes to mind is a small device that would—aside from its update mechanism—not do any post-quantum crypto signatures.

Best wishes,
Thom Wiggers

Amber Sprenkels

PQShield Ltd


John Mattsson

May 26, 2025, 7:31:52 AM
to Amber Sprenkels, Dang, Quynh H. (Fed), Thom Wiggers, pqc-forum

Dang, Quynh H. (Fed) wrote:

>Do we know any specific systems for which ML-DSA-44 (pk: 1312 bytes + sig: 2420 bytes = 3,732 bytes) does not work well for that use case?

Quite a few. ML-DSA-44 does not work at all for constrained LPWANs. My understanding is that it does not work well for the Web as it works today. 2420 bytes does not work well for application-layer authentication tokens that might be sent very frequently over a single TLS connection. But note that additional SLH-DSA parameters are not the solution. FN-DSA is also problematic. The signature on-ramp will hopefully help most use cases. In some cases, ML-KEM or Classic McEliece can be used instead of signatures.

 

John

 

Dang, Quynh H. (Fed)

Jun 3, 2025, 9:21:55 AM
to Amber Sprenkels, Thom Wiggers, pqc-forum

Hi Amber and Thom,

 

Thank you for the reminder of that important point.

 

Hi all, 

 

SLH-DSA-SHAKE/SHA2-128s' signing takes about 2 million hashes. Assume it takes 0.2 seconds to generate one SLH-DSA-SHAKE/SHA2-128s signature. G-1/2/3 take about 67-94 million hashes per signing, so generating one signature takes about (67/2) x 0.2 to (94/2) x 0.2 = 6.7 to 9.4 seconds. It would therefore take about 77,546 to 108,796 days (about 212 to 298 years) for one signer to generate 1 billion signatures.

 

If signing gets 10 times faster, it would take about 21 to 30 years for one signer to generate 1 billion signatures.

 

If there were 10 signers using the same signing key, it would take about 2 to 3 years. Note that in practice, signers/servers won't just sign constantly.
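
In Python, the arithmetic above:

base_hashes, base_secs = 2_000_000, 0.2
for name, hashes in [("G-1", 94_000_000), ("G-3", 67_000_000)]:
    per_sig = hashes / base_hashes * base_secs      # 9.4 s and 6.7 s
    days = 1_000_000_000 * per_sig / 86_400
    print(f"{name}: {per_sig:.1f} s/sig, {days:,.0f} days")
# -> G-1: 108,796 days (~298 years); G-3: 77,546 days (~212 years)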

 

G-7 (if we need fast verification) retains about 119 and 123 bits of security at 2^33 and 2^32 messages (8 and 4 times the limit), respectively.

 

The signing time of G-7 is in between G-1 and G-3's signing times. But G-7's signatures are 320 and 192 bytes larger than G-1 and G-3's signatures, respectively. 

 

I don't know which parameter set would be the most popular among users. Is the verification time increase worth the signature size decrease? Personally, I prefer the ones with the best overuse safety, such as G-4/6/11. However, NIST takes performance into consideration, so I don't know what limit(s) and parameter set(s) NIST is going to standardize.

 

AAA-3 (2^20 limit) signing takes 596,639,744 hashes, about (596/2) x 0.2 = 59.6 seconds, roughly 1 minute. 2^20 is about 1 million, so it would take about 1 million minutes, roughly 694 days, for one signer to generate 1 million signatures.
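
The same scaling in Python (using 2^20 exactly rather than the rounded 1 million gives about 724 days):

per_sig = 596_639_744 / 2_000_000 * 0.2    # ~59.7 seconds per signature
days = 2**20 * per_sig / 86_400
print(f"{per_sig:.1f} s/sig, {days:,.0f} days for 2^20 signatures")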

 

2^20 seems safe enough for signing CA certificates. 

 

Is this safe enough for firmware and software updates? If all devices get the same signature for each update, 1 million signatures/updates over a few years by one signer seems safe. A parameter set with good overuse safety would reduce the risk of a security break from some vendor using the same signing key for many years.

 

What usage policies should NIST require for a 2^20 limit parameter set?

 

What should we say about the case of multiple signers using the same signing key?

 

How can we prevent a 2^20 limit parameter set from being used for the use case of a 2^30 limit parameter set, as discussed by Amber and Thom above?

 

Regards,

Quynh.

Sophie Schmieg

Jun 3, 2025, 4:29:37 PM
to Dang, Quynh H. (Fed), Amber Sprenkels, Thom Wiggers, pqc-forum
We can have many more than 10 signers using the same key under some circumstances; reaching counts of 2^48 signatures is definitely possible, and I wouldn't rule out having 2^64 messages in some (at this point poorly designed) system. From that point of view, I would not base the N count on the number of signatures that you can feasibly produce; a datacenter or two calculating signatures very quickly reaches most bounds.
I would instead base the number of signatures on the use case. For firmware signing in particular, you will usually only sign very few messages (e.g. one firmware signature per release).
When it comes to usage policies, I would go for something similar to how we handle AES-GCM with random IVs. One can violate the bound of 2^32 encryptions with ease, if one is so inclined (and unfortunately also when one is not so inclined); the mechanism for preventing this is the documentation stating that it is not allowed and insecure. But there is no mechanism that NIST can prescribe that would prevent this misuse.



--
Sophie Schmieg | Information Security Engineer | ISE Crypto | ssch...@google.com

John Mattsson

Jun 4, 2025, 12:31:19 AM
to Dang, Quynh H. (Fed), Amber Sprenkels, Thom Wiggers, pqc-forum

Dang, Quynh H. wrote:


>What usage policies should NIST require for a 2^20 limit parameter set?

I suggest including a requirement that the signing process must be manual and involve human intervention. Automated systems can get stuck in unintended loops.

 

>What should we say about the case of multiple signers using the same signing key ?

That this requires extra margins calculated for the exact number of signers? If the number of signers is unclear in any way, I think the 2^64 limit parameter set should be required.

 

>How can we prevent a 2^20 limit parameter set from being used for the use case of a 2^30 limit parameter

I don't think you can, except by forbidding it in the standard.

 

Cheers,

John

 

Paul Hoffman

Jun 4, 2025, 9:48:39 AM
to pqc-forum
On Jun 3, 2025, at 21:31, 'John Mattsson' via pqc-forum <pqc-...@list.nist.gov> wrote:
>
> >How can we prevent a 2^20 limit parameter set from being used for the use case of a 2^30 limit parameter
> I don't think you can, except by forbidding it in the standard.

One thought is that a separate, hopefully short, NIST document discussing the uses and security issues of all standardized parameters would go a long way. Telling people to read the middle of a long technical document won't catch all the parties NIST is interested in.

--Paul Hoffman


Sophie Schmieg

Jun 4, 2025, 2:51:00 PM
to John Mattsson, Dang, Quynh H. (Fed), Amber Sprenkels, Thom Wiggers, pqc-forum
On Tue, Jun 3, 2025 at 9:31 PM 'John Mattsson' via pqc-forum <pqc-...@list.nist.gov> wrote:

Dang, Quynh H. wrote:


>What usage policies should NIST require for a 2^20 limit parameter set ? 

I suggest including a requirement that the signing process must be manual and involve human intervention. Automated systems can get stuck in unintended loops.


I would be cautious of making this a hard requirement. In general, this is very good guidance, but if you make it a hard requirement you preclude some potential system that has other ways of mitigating overuse, or low enough stakes that overuse in case of a runaway bug is acceptable. For example, a very weak embedded system that cannot, even if caught in an endless loop, get up to 2^20 in any reasonable time. Or a system with very short-lived keys, where the expiration time similarly precludes accidental overuse. In the end, while you could use data centers full of signing jobs, doing so would have to be a very intentional choice (don't give agentic AI access to your cloud credentials, I guess).

Daniel Apon

Jun 4, 2025, 3:38:17 PM
to Sophie Schmieg, John Mattsson, Dang, Quynh H. (Fed), Amber Sprenkels, Thom Wiggers, pqc-forum
" Don't give agentic AI access to your Cloud credentials"

gesundheit =) <3


Dang, Quynh H. (Fed)

Jun 12, 2025, 12:56:53 PM
to Amber Sprenkels, Thom Wiggers, pqc-forum

Hi all,

 

A 2^30 limit parameter set is safer than a 2^20 limit one.

 

If the goal is to minimize the number of new parameter set(s) and to standardize a new one only when the existing ones have noticeable performance impact on some important use cases, I am not sure if there would still be a need to standardize a 2^20 one if NIST standardized a 2^30 one.

 

Below are some performance numbers of several parameter sets at 2^20 and 2^30 limits.

 

Parameter set   Limit   Sig size (bytes)   Verify (hashes)   Overuse safety
AAA-1           2^20    3072               4767              1722x
AAA-3           2^20    3280                452                19x
G-1             2^30    3568               7085               205x
G-7             2^30    3888                736                22x
G-8             2^30    4000                743                53x

 

In the direction with good overuse safety but slow verification, a 2^30 signature is about 500 bytes larger than a 2^20 signature. In the direction with weaker overuse safety but fast verification, it is about 600-700 bytes larger.

 

Within the same direction, the verification time difference between a 2^20 and a 2^30 parameter set is not significant.

 

Are there use cases where a 2^20 parameter set works fine, but a 2^30 one would create a serious performance issue due to its larger signatures?

 

Regards,

Quynh.

Chris Fenner

Jun 13, 2025, 10:38:15 AM
to pqc-forum, Dang, Quynh H. (Fed), Amber Sprenkels, Thom Wiggers
Speaking for the firmware signing scenario, boot time can be very sensitive. Example: boot-time firmware verification for a PCI device that has to be ready for link training within 100ms of power-on. I could see the 60% difference between AAA-3 and G-7 meaning the difference between doing firmware verification and not having enough time on some hardware.

Hardware crypto signing oracles (e.g., HSMs, TPMs) can be configured with rate-limiting in mind. Just as an example: "you can't use this key twice in the space of 1 hour" would mean it takes at least 119 years to use the key 2^20 times. Would it help on the requirements side to say that using a 2^20 parameter set requires some kind of rate-limiting setup?
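
The rate-limiting arithmetic in Python:

print(2**20 / (24 * 365))    # one signature per hour -> ~119.7 years to 2^20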

Thanks
Chris

Markku-Juhani O. Saarinen

Jun 15, 2025, 7:30:48 AM
to pqc-forum, Chris Fenner, Dang, Quynh H. (Fed), Amber Sprenkels, Thom Wiggers
Hi All,

There is an SLH-DSA implementation (in plain C) technically supporting all of these additional parameter sets. I did this mainly to provide a secondary sanity check for the evaluation provided by NIST. https://github.com/slh-dsa/slhdsa-c/

I tested against all of the static ACVP test cases for the SLH-DSA standard parameters and ran a basic functionality "smoke test" for the 264 parameter sets in Tables 2-28 of https://eprint.iacr.org/2024/018.pdf (not including the appendices). The results (hash counts, etc.) are generally consistent with the evaluations in 2024/018. The results are available as a spreadsheet: https://github.com/slh-dsa/slhdsa-c/blob/main/test/new_param.csv  The README in the test directory explains what was done: https://github.com/slh-dsa/slhdsa-c/blob/main/test/README.md

Anyway, here would be my rationale for selection:
- A significant (10x) speedup in signature verification is preferable to a small (<10%) reduction in signature size. SLH-DSA has such trade-off points.
- Key generation and signing times are less relevant than signature verification. SLH-DSA, especially with a limited number of signatures, is a poor choice for interactive identification and authentication. I'd imagine that these parameter sets are intended only for authenticating static data, such as firmware (and related certificates).
- From a validation and implementation viewpoint, I wouldn't like to see too many additional parameter sets. There are still SLH-DSA implementations that are not dynamically configurable (like this code), and some even require "re-compilation" for each parameter set. Additionally, a smaller set facilitates easier ACVP validation. But I'd like to see both {128, 192}-bit security levels and {2^20, 2^30, 2^40} signatures in the set. Perhaps one of each of these categories, six in total?

So, from quick analysis, here is my top-6 list to be in the SP (to match the six options in FIPS 205 itself):
- AAA-2 (128-bit security, 2^20 signatures) looks good, with a signature size of 3264 bytes (41.6% of FIPS 205's "128s" parameter set) and 4.9x faster signature verification compared to "128s" (the fastest-verifying parameter set in FIPS 205). Signature verification is 9x faster than AAA-1, which has a 6% shorter signature.
- G-7 (128-bit security, 2^30 signatures). Similar rationale: 3888-byte signatures; 9% longer signatures than G-1 (shortest candidate), but 9x faster signature verification.
- PPP-2 (192-bit security, 2^20 signatures). 7008 byte signature; 6% longer than PPP-1 but 12x faster verification.
- S-2 (192-bit security, 2^30 signatures). 8664 byte signatures; 14% longer than S-1 but 8x faster verification.
- J-1 (128-bit security, 2^40 signatures)
- V-1 (192-bit security, 2^40 signatures)

For the RoT use case, the paper 2024/018 somewhat underestimates the speed of hashing in hardware. (The 2024/018 paper talks of "HSMs", which is a bit confusing -- the industry generally doesn't use that term for the RoTs that chips use to validate firmware at boot, only for the server/cloud side.) As demonstrated in "Accelerating SLH-DSA by Two Orders of Magnitude with a Single Hash Unit" [Crypto 2024, https://eprint.iacr.org/2024/367], SLH-DSA-SHAKE-128s verification can be completed in 179,603 cycles, which translates to 0.36 milliseconds with a 500 MHz clock (typical for a SoC Root-of-Trust). This speedup requires some additional SLH-DSA features in the hash unit, but they don't significantly increase the implementation area (and they substantially decrease power consumption thanks to the speedup). Therefore, signature verification, even with the standard parameter set and a small area, can be achieved in under 1 millisecond using a small hardware unit and just a little additional hardware engineering, which should become a standard feature as SLH-DSA becomes common. 10,000,000 Keccak (SHA3/SHAKE) hashes per second is easily achievable, somewhat less for SHA2, as it is slower in hardware.
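
That back-of-the-envelope in Python:

cycles, clock_hz = 179_603, 500_000_000
print(cycles / clock_hz * 1e3)    # ~0.36 ms per SHAKE-128s verification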

Cheers,
-markku

Dang, Quynh H. (Fed)

Jun 16, 2025, 9:52:28 AM
to Chris Fenner, pqc-forum, Amber Sprenkels, Thom Wiggers

Hi Chris,
 
Thank you for the discussion.
 
SLH-DSA-SHA2-128s-q20 is AAA-2, and its verification of a short message takes ~210,000 cycles; see https://www.zerorisc.com/blog/future-of-pqc-on-opentitan . AAA-1 is about 10 times slower than AAA-2 in verification of a short message.
 
AAA-1 would therefore take about 2,100,000 / 50,000,000 = 0.042 seconds (42 ms) down to 0.021 seconds (21 ms) to verify a signature of a short message on a 32-bit RISC CPU running at 50 MHz to 100 MHz.

 

With a faster processor, the verification is faster. 
 
G-7 and G-8 are about (4767 / 736) 6.47 and (4767 / 743) 6.41 times faster than AAA-1 in verification.
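
In Python, the estimates above:

aaa2_cycles = 210_000                      # OpenTitan figure for AAA-2
aaa1_cycles = 10 * aaa2_cycles             # AAA-1 is ~10x slower
for mhz in (50, 100):
    ms = aaa1_cycles / (mhz * 1_000_000) * 1000
    print(f"AAA-1 @ {mhz} MHz: {ms:.0f} ms")   # 42 ms, 21 ms
print(4767 / 736, 4767 / 743)              # G-7, G-8 speedup vs. AAA-1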
 
Regards,

Quynh. 

 

 


Dang, Quynh H. (Fed)

Jun 16, 2025, 10:56:24 AM
to Chris Fenner, pqc-forum, Amber Sprenkels, Thom Wiggers

Hi Chris,

 

So, for 1, 2, or 3 signatures, G-7 would take about 6.5, 13, or 19.5 ms.

 

And the differences between AAA-2 and G-7 are 3, 6 or 9 ms. 

 

Do you know why, or have experimental data to show, that these differences would create a performance impact or issue?

 

Regards,

Quynh.

 


Dang, Quynh H. (Fed)

Jun 16, 2025, 11:07:52 AM
to Chris Fenner, pqc-forum, Amber Sprenkels, Thom Wiggers

Hi Chris,

 

To address your question, I don’t doubt that you would use a 2^20 parameter set in secure/safe ways. But I don’t assume that will be the case with everyone.

 

Generally, I would like to go with safer options.

 

Regards,

Quynh.

 


Paul Hoffman

Jun 16, 2025, 11:25:06 AM
to Dang, Quynh H. (Fed), pqc-forum
On Jun 16, 2025, at 08:07, 'Dang, Quynh H. (Fed)' via pqc-forum <pqc-...@list.nist.gov> wrote:
> To address your question, I don’t doubt that you would use a 2^20 parameter set in secure/safe ways. But I don’t assume that will be the case with everyone.

For decades, NIST has pushed ECDSA as a standard even though they fully understood that not backing it with a secure RNG could lead to the exposure of the private key. This scenario got much worse in the past decade as reusable virtual machines became the norm.

Changing the rule now to say that NIST will only standardize signature algorithms that work for "everyone" seems like a bad policy, particularly in the face of large communities that understand the risks of a particular scheme and can measure precisely when a key is becoming unsafe to use.

> Generally, I would like to go with safer options.

This discussion is not about you. :-) It is about the ability for others to work in constrained environments in an interoperable fashion. If there is no NIST standard for these use cases, then developers and users will have to go to multiple SDOs, and this will add complexity that will reduce security for many of them.

--Paul Hoffman

Daniel Apon

Jun 17, 2025, 1:48:49 AM
to Paul Hoffman, Dang, Quynh H. (Fed), pqc-forum
Hi Paul,

A question:

Do you think the *customers* of these “large communities” have the ability to easily and independently vet whether a particular system has been properly built with “understand[ing of] the risks of a particular scheme and can measure precisely when a key is becoming unsafe to use”?

I don’t have a strong take here; leaving it as an open question.

Or said another way: how much of the burden should be on the customer of the communities a posteriori and how much should be on NIST a priori?

(In my experience, for every three reasonable system developers/implementers, there will be at least one snake oil salesman with a similar marketing pitch, even a polished one.)

Kind regards,
—Daniel 

Falko Strenzke

Jun 17, 2025, 2:02:00 AM
to Paul Hoffman, Dang, Quynh H. (Fed), pqc-forum

I fully agree with Paul. Signature size especially will be an important acceptance criterion for SLH-DSA. And counting / limiting of signatures is a simpler task than the state management required for the stateful hash-based schemes. In my view, signature counting / limiting with an appropriate degree of accuracy can be built in software systems, while appropriate state management is generally understood to require a costly HSM.

Falko


Jade Philipoom

Jun 17, 2025, 7:52:14 AM
to Falko Strenzke, Paul Hoffman, Dang, Quynh H. (Fed), pqc-forum
Hi all,

> So, for 1, 2, or 3 signatures, G-7 would take about 6.5, 13, or 19.5 ms.
>
> And the differences between AAA-2 and G-7 are 3, 6, or 9 ms.
>
> Do you know why, or have experimental data to show, that these differences would create a performance impact or issue?

I ran some benchmarks on G-7 vs AAA-2 just to add some more empirical
data for how they compare in terms of verification speed on OpenTitan
(SHA2 parameters with a hardware SHA2):

G-7:
Number of tests: 10
Min cycle count: 318079
Max cycle count: 359723
Avg cycle count: 337322

AAA-2:
Number of tests: 10
Min cycle count: 202799
Max cycle count: 232258
Avg cycle count: 213692

This scales pretty well with the "verify time" ratio in
https://eprint.iacr.org/2024/018 (number of hashes performed):
>>> 736/451
1.6319290465631928
>>> 337322/213692
1.5785429496658743

I'm not sure where the number of 6.5 for one G-7 signature came from,
though -- there I wouldn't agree, it should be more like 3.4ms.

Benchmarks are reproducible at
https://github.com/jadephilipoom/opentitan/tree/spx-benchmark30
(depending on the commit, you can run G-7, AAA-2 aka q20, or 128s
parameters) with the following command:
./bazelisk.sh test --test_output=streamed --test_timeout=1000000000
//sw/device/silicon_creator/lib/sigverify/sphincsplus/test:verify_test_kat0_sim_verilator

More qualitatively, I'd add my voice again to those saying that
verification time is quite important. 60% is still painful, although
certainly better than the 10x for AAA-1 and tolerable given the total
is still less than 4ms.

Best,
Jade

Chris Fenner

Jun 17, 2025, 8:32:18 AM
to Markku-Juhani O. Saarinen, pqc-forum, Thanh Trung Nguyen, Dang, Quynh H. (Fed), Amber Sprenkels, Thom Wiggers
First, thank you Markku for the insights and for https://github.com/slh-dsa/slhdsa-c/

> SLH-DSA-SHAKE-128s verification can be completed in 179,603 cycles, which translates to 0.36 milliseconds with a 500 MHz clock (typical for a SoC Root-of-Trust).

Just wanted to point out that discrete RoT (like a TPM) might have a substantially lower main clock speed (5-10x slower). While I expect discrete TPMs and similar embedded hardware security components to have dedicated hashing acceleration, every little bit helps here, which is why I see a lot of value in something like AAA-2 for firmware signing (again thank you Markku for the cheat sheet).

Chris


Chris Fenner

Jun 17, 2025, 8:32:23 AM
to pqc-forum, Dang, Quynh H. (Fed), Amber Sprenkels, Thom Wiggers, Chris Fenner
Thanks Quynh,

Indeed, we are talking about differences in single-digit numbers of milliseconds: based on the proportionate cycle time and the OT paper, on a 50MHz 32-RISC CPU, G-7 might take around 6.5ms to verify and AAA-2 might take around 3.5ms. That's a difference of 3 milliseconds (per verified signature, of which there might be a couple) out of a PCI device's entire ~100ms boot budget (which has to account for a few other activities besides signature verification, such as "actually booting" 😅)
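
As a back-of-the-envelope in Python, scaling Jade's measured OpenTitan cycle counts to a 50 MHz clock (slightly higher than my rougher 6.5/3.5 ms figures):

for name, cycles in [("G-7", 337_322), ("AAA-2", 213_692)]:
    print(f"{name}: {cycles / 50e6 * 1e3:.1f} ms")   # ~6.7 ms vs. ~4.3 ms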

Thus I see a lot of value in NIST standardizing (2^20) parameter sets, albeit with some type of guidance for how to hold them. What do you think of the idea for the signer's hardware to enforce rate-limiting of the production of signatures?

Thanks
Chris

Blumenthal, Uri - 0553 - MITLL

unread,
Jun 17, 2025, 8:32:35 AMJun 17
to Dang, Quynh H. (Fed), Chris Fenner, pqc-forum, Amber Sprenkels, Thom Wiggers


Hi Chris,

 

To address your question, I don’t doubt that you would use a 2^20 parameter set in secure/safe ways.  But I don’t assume that will be the case with everyone.

 

Are you saying that there are no ways to misuse any of the other NIST-proposed or defined configurations or algorithms?

 

Generally, I would like to go with safer options.

 

Generally – yes. In this case, there seems to be sufficient justification for including 2^20 parameters for specific (valid and important) uses.

Chris Fenner

unread,
Jun 17, 2025, 10:07:24 AMJun 17
to Jade Philipoom, Falko Strenzke, Paul Hoffman, Dang, Quynh H. (Fed), pqc-forum
Thanks Jade, I think the 6.5ms number comes from my back-of-the-envelope math based on a 50MHz clock speed (which is in the middle of the ballpark for current-gen TPMs and similar).

By the way, the mailing list seems to be operating with some significant delay for me. Is that happening for anyone else? I sent a reply with an example to "Do you know why or have experimental data to show that these differences would create performance impact/issue ?" from Quynh yesterday and it hasn't appeared on the list yet.

Thanks
Chris

You received this message because you are subscribed to a topic in the Google Groups "pqc-forum" group.
To unsubscribe from this topic, visit https://groups.google.com/a/list.nist.gov/d/topic/pqc-forum/vtMTuE_3u-0/unsubscribe.
To unsubscribe from this group and all its topics, send an email to pqc-forum+...@list.nist.gov.
To view this discussion visit https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/CAEMn3UMAOKdrO84y__bS2AH1VwHYwjyLPhjkC3S2-CPMyxW1vg%40mail.gmail.com.

Chris Fenner

unread,
Jun 17, 2025, 10:14:00 AMJun 17
to Dang, Quynh H. (Fed), Falko Strenzke, Paul Hoffman, pqc-forum, Jade Philipoom
List seems to be working again, so I'll just send my message again. If you see it again later, sorry :)

Quynh asked:

Do you know why or have experimental data to show that these differences would create performance impact/issue ?

Absolutely, yes:

We have an internal Smart NIC product that uses the most recent generation of ("closed") Titan as its discrete RoT. As an example of the lengths we go to to shave off milliseconds from the boot time, we actually embed some of the NIC's firmware payload into the Titan's firmware (which is already verified at boot) to avoid adding one extra (RSA) signature verification step, to shave off a couple of ms from the NIC's overall boot time. Needless to say, this complicates our build infrastructure greatly, and would not be possible with an off-the-shelf RoT. So absolutely yes, a difference of 3-9ms can be make-or-break for the types of environments that call for the (2^20)-class parameter sets (firmware verification).

I'd like to avoid focusing on what Google can or can't do safely, or what "everyone" can or can't do safely, and propose that the reduced signature parameters just come with guidelines that define "do safely". That way the trade-offs and requirements are clear.

I assume that we are all aligned that a (2^30) parameter set is "safer to use" than a (2^20) parameter set (and both are probably "safer to use" than any LMS or XMSS parameter set). I believe that this tradeoff should be expressible in terms of operational requirements, e.g.,:
  • If you're using a current (2^64) SLH-DSA parameter set, you don't have any special operational requirements
  • For a smaller (e.g., 2^20, 2^30) parameter set, the signer needs to ensure that the signature limit cannot be reached within a (for example) 100-year span of time. This can be done either by requiring manual human intervention for each signature, or through the signing infrastructure itself imposing a rate limit (e.g., 3 signatures per second in the case of (2^30), 1 signature per hour in the case of (2^20)), or some combination; a sketch of such a rate limit follows below.
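As a minimal sketch of the kind of rate limit I have in mind (illustrative only: the sign callable stands in for a hypothetical HSM-backed SLH-DSA signing operation, and min_interval_s is whatever your signature budget works out to, e.g. roughly one signature per 50 minutes for a 2^20 key meant to last 100 years):

import threading
import time

class RateLimitedSigner:
    # Refuses to produce signatures more often than once per
    # min_interval_s seconds, independent of how many clients call it.
    def __init__(self, sign, min_interval_s):
        self._sign = sign                    # hypothetical HSM signing call
        self._min_interval_s = min_interval_s
        self._last = float("-inf")
        self._lock = threading.Lock()

    def sign(self, message: bytes) -> bytes:
        with self._lock:
            now = time.monotonic()
            if now - self._last < self._min_interval_s:
                raise RuntimeError("rate limit: signature budget not yet replenished")
            self._last = now
        return self._sign(message)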
Thanks
Chris

Dang, Quynh H. (Fed)

unread,
Jun 17, 2025, 12:51:11 PMJun 17
to Jade Philipoom, pqc-forum
Hi Jade,

Thank you for working with us on this matter and for the data you have provided.

I have no doubts that you would use a 2^20 parameter set safely. And I think that you made an excellent choice of AAA-2 for your specific use case when overuse never happens.

There exists one parameter set which works best for any given specific deployment. To have the best choice available for every specific deployment, we would end up with many parameter sets.

With G-7 taking 3.4ms, AAA-2 takes 3.4 x (213692/337322), about 2.2ms: roughly a 1.2 ms difference.

Two of the objectives are having a smaller number of parameter sets and having safer parameter sets. These goals generally go against "have the best choice available for every specific deployment".

These small parameter sets are safer than the stateful ones. But there are still uncomfortable risks with many of these parameter sets in my view.

Let's say we use a counter. If the attacker can shut down the signing system, the counter might be lost. What if that happens many times with an AAA-2 signing key? (I am not saying that there are no ways to mitigate this risk.)
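One common mitigation, sketched here purely for illustration (a real design would keep the counter inside the HSM boundary): persist a reserved upper bound before issuing signatures, so that a crash or forced shutdown can only waste part of the reserved range, never under-count.

import os

class ReservedCounter:
    # Reserve-ahead usage counter: the persisted value is always an upper
    # bound on the number of signatures issued, so losing runtime state
    # wastes at most `reserve` indices but never repeats or loses count.
    def __init__(self, path, reserve=1024):
        self._path = path
        self._reserve = reserve
        persisted = 0
        if os.path.exists(path):
            with open(path) as f:
                persisted = int(f.read())
        self._next = persisted    # resume from the persisted bound
        self._limit = persisted

    def _persist(self, value):
        tmp = self._path + ".tmp"
        with open(tmp, "w") as f:
            f.write(str(value))
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, self._path)   # atomic rename

    def next_index(self):
        if self._next >= self._limit:              # reserve a new block,
            self._limit = self._next + self._reserve
            self._persist(self._limit)             # persisting it before use
        i = self._next
        self._next += 1
        return i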

There are people still using MD5 and DES. There are users who don't understand the cryptographic risk. One could say "they don't follow our security guidance or requirements, so we don't care about them...". However, with a safer choice, the risk for such users would be much reduced, and that might save their day at some point, especially when the cost of the safer choice is very minimal.

Regards,
Quynh.

> -----Original Message-----
> From: Jade Philipoom <ja...@opentitan.org>
> Sent: Tuesday, June 17, 2025 7:52 AM
> To: Falko Strenzke <falko.s...@mtg.de>
> Cc: Paul Hoffman <paul.h...@icann.org>; Dang, Quynh H. (Fed)
> <quynh...@nist.gov>; pqc-forum <pqc-...@list.nist.gov>
> Subject: [EXTERNAL] Re: [Ext] [EXTERNAL] Re: [pqc-forum] NIST requests
> feedback on additional SLH-DSA(Sphincs+) parameter set(s) for
> standardization.
>
> Hi all,
>
> > So, 1, 2 or 3 signatures, G-7 would take about 6.5, 13 or 19.5 ms.
> >
> > And the differences between AAA-2 and G-7 are 3, 6 or 9 ms.
> >
> > Do you know why or have experimental data to show that these differences
> would create performance impact/issue?
>
> I ran some benchmarks on G-7 vs AAA-2 just to add some more empirical data
> for how they compare in terms of verification speed on OpenTitan
> (SHA2 parameters with a hardware SHA2):
>
> G-7:
> Number of tests: 10
> Min cycle count: 318079
> Max cycle count: 359723
> Avg cycle count: 337322
>
> AAA-2:
> Number of tests: 10
> Min cycle count: 202799
> Max cycle count: 232258
> Avg cycle count: 213692
>
> This scales pretty well with the "verify time" ratio in
> https://eprint.iacr.org/2024/018

Sanel Sabani

unread,
Jun 17, 2025, 12:52:44 PMJun 17
to Dang, Quynh H. (Fed), Jade Philipoom, pqc-forum
If you ask me!

I don't know?

S.Shabani



Paul Hoffman

unread,
Jun 17, 2025, 1:11:09 PMJun 17
to Daniel Apon, pqc-forum
On Jun 16, 2025, at 22:48, Daniel Apon <dapon....@gmail.com> wrote:
>
> Hi Paul,
>
> A question:
>
> Do you think the *customers* of these “large communities” have the ability to easily and independently vet whether a particular system has been properly built with “understand[ing of] the risks of a particular scheme and can measure precisely when a key is becoming unsafe to use”?
>
> I don’t have a strong take here; leaving it as an open question.

Yes, I think the customers can. I think customers being told "don't sign with that more than $n times" is much more likely to be followed than "use this signature scheme only in VMs that start their RNGs from a different seed each time, often in a way that you cannot verify".

> Or said another way: how much of the burden should be on the customer of the communities a posteriori and how much should be on NIST a priori?

If this was a question of two algorithms with similar external properties, clearly the burden should be on NIST. But it's not: the signature sizes and validation times are quite different.

--Paul Hoffman

Q C

unread,
Jun 17, 2025, 1:33:56 PMJun 17
to pqc-forum, Markku-Juhani O. Saarinen, Chris Fenner, Dang, Quynh H. (Fed), Amber Sprenkels, Thom Wiggers
>>>> - G-7 (128-bit security, 2^30 signatures). Similar rationale: 3888-byte signatures; 9% longer signatures than G-1 (shortest candidate), but 9x faster signature verification. 

For blockchains like Bitcoin, SLH-DSA G-7 offers an attractive alternative to ML-DSA, since pubKey+sig sizes are closer to ML-DSA. Impact of additional cost of Verify() is negligible due to low TPS (< 10) compared to overall blocktime (10 mins).

Arne Padmos

unread,
Jun 17, 2025, 6:54:13 PMJun 17
to pqc-forum, Paul Hoffman, pqc-forum, Dang, Quynh H. (Fed)
On Monday, 16 June 2025 at 17:25:06 UTC+2 Paul Hoffman wrote:
For decades, NIST has pushed ECDSA as a standard even though they fully understood that not backing it with a secure RNG could lead to the exposure of the private key. This scenario got much worse in the past decade as reusable virtual machines became the norm.

Changing the rule now to say that NIST will only standardize signature algorithms that work for "everyone" seems like a bad policy, particularly in the face of large communities that understand the risks of a particular scheme and can measure precisely when a key is becoming unsafe to use.

You're barking up a tree that is over 9 years old, not one that just sprouted overnight: https://csrc.nist.gov/pubs/ir/7977/final

NIST IR 7977 includes the following statement: NIST aims to develop cryptographic standards and guidelines that help implementers create secure and usable systems for their customers that support business needs and workflows, and can be readily integrated with existing and future schemes and systems. Cryptographic standards and guidelines should be chosen to minimize the demands on users and implementers as well as the adverse consequences of human mistakes and equipment failures.

The standardisation process of ECDSA is irrelevant as NIST IR 7977 is the current document that 'describes the principles, processes and procedures that drive cryptographic standards and guidelines development efforts at the National Institute of Standards and Technology'. Of course, you could state that an update is overdue. NIST IR 7977 doesn't describe how the different principles are weighted. Additionally, it was drafted as a result of the Dual EC situation, so there's an emphasis on competitions while work on cryptographic accordions will likely take the approach of a NIST-led collaboration (see NIST IR 8537 for details).

None of this is to say that specific SLH-DSA parameter sets don't make sense in specific situations, but process-wise misuse prevention is a clear standardisation objective, albeit not the only one. The Lightweight Crypto competition is an example of how misuse resistance has been one of the evaluation criteria, ending up with Ascon-AEAD128, which provides some level of nonce-misuse resistance. Similarly, one can view the accordion mode effort as an attempt to take the rough edges off of AES-GCM (which has seen its share of real-world failure modes like https://www.usenix.org/conference/usenixsecurity22/presentation/shakevsky).

Regards,
Arne

Chris Fenner

unread,
Jun 21, 2025, 2:21:56 PMJun 21
to Arne Padmos, pqc-forum, Paul Hoffman, Dang, Quynh H. (Fed)
Out of curiosity, I built a little tool (http://github.com/chrisfenner/slh-dsa-rls) based on Scott Fluhrer's tool (https://github.com/sfluhrer/sphincs-param-set-search) to explore the solution space some more, given that firmware signing prioritizes verification time (and higher signing time could even be considered a feature for "rate-limited signatures"). I was able to find some potentially interesting parameter sets at the 2^20 level by increasing the signing cost limit even further (and setting a minimum signing cost to make it easier to accidentally follow the rate-limiting guidance; see my readme).

I think some of these new possible sets could compare (with some additional analysis) favorably to the fastest-to-verify parameter sets for the firmware signing scenario from https://eprint.iacr.org/2024/018.pdf.

("rls" = "rate-limited signer" or "relatively little signatures", as you like 😅)

id         h   d   h'  a   k   lg(w)  m   sig bytes  sig time  verify time
aaa-2      18  1   18  24  6   4      21  3264       449M      451
rls128fw6  22  1   22  22  6   3      20  3312       1.6B      347
ppp-2      20  1   20  21  10  4      30  7008       920M      651
rls192fw1  21  1   21  23  9   3      29  7320       1.35B     508
zz-2       19  1   19  21  14  4      40  12640      651M      866
rls256fw1  21  1   21  20  14  3      38  12992      1.56B     678
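
For anyone who wants to sanity-check the "sig bytes" column: it follows directly from the parameters via the standard SPHINCS+ size formula. A quick sketch (my own; it assumes n = 16/24/32 bytes for the 128/192/256-bit sets and reads the lg(w) column as the log of the Winternitz parameter):

import math

def slh_dsa_sig_bytes(n, h, d, a, k, lg_w):
    # R (n bytes) + FORS: k*(a+1) hash values + d WOTS sigs of
    # (len1+len2) hashes each + h hypertree auth-path nodes
    w = 2 ** lg_w
    len1 = math.ceil(8 * n / lg_w)
    len2 = math.floor(math.log2(len1 * (w - 1)) / lg_w) + 1
    return n * (1 + k * (a + 1) + h + d * (len1 + len2))

assert slh_dsa_sig_bytes(16, 18, 1, 24, 6, 4) == 3264    # aaa-2
assert slh_dsa_sig_bytes(16, 22, 1, 22, 6, 3) == 3312    # rls128fw6
assert slh_dsa_sig_bytes(24, 21, 1, 23, 9, 3) == 7320    # rls192fw1
assert slh_dsa_sig_bytes(32, 21, 1, 20, 14, 3) == 12992  # rls256fw1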


Thanks
Chris


Chris Fenner

unread,
Jun 21, 2025, 7:00:22 PMJun 21
to Arne Padmos, pqc-forum, Paul Hoffman, Dang, Quynh H. (Fed)
PS: In case NIST would really, really prefer to standardize a single set of parameter sets that combine the firmware and software signing use-cases, I also observed there are some good parameter sets at the 2^24 level (1 signature per ~minute for 30 years) that are competitive in terms of verification time, and not too much larger in terms of signature size. The tables in the README list a few other values in case the size/verification-time tradeoff is worth considering further (e.g., rls256c5 has 13888-byte signatures and a verification cost of 706 hashes).

id        h   d   h'  a   k   lg(w)  m   sig bytes  sig time  verify time
rls128c1  22  1   22  24  6   2      21  3856       1.45B     311
rls192c1  21  1   21  23  10  3      32  7896       1.38B     532
rls256c1  21  1   21  22  14  2      42  15264      1.3B      612


Thanks
Chris

Peter Thomassen

unread,
Jun 23, 2025, 10:08:36 AMJun 23
to pqc-forum, Dang, Quynh H. (Fed), Amber Sprenkels, Thom Wiggers
On Thursday, June 12, 2025 at 6:56:53 PM UTC+2 Dang, Quynh H. (Fed) wrote:

If the goal is to minimize the number of new parameter set(s) and to standardize a new one only when the existing ones have noticeable performance impact on some important use cases, I am not sure if there would still be a need to standardize a 2^20 one if NIST standardized a 2^30 one.

Signature (and key) size certainly makes a difference for DNSSEC; even small improvements are significant (see, for example, the results listed at the bottom of https://pq-dnssec.dedyn.io/).

So I would think that even if gains are small, it's worth making such a (standardized) parameter set available. (This argument not only applies to SLH-DSA.)

Best,
Peter

Dang, Quynh H. (Fed)

unread,
Jul 7, 2025, 1:58:27 PMJul 7
to homa...@gmail.com, pqc-forum, Paul Hoffman

Hi Peter,

 

Thank you for your comment here: https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/0650d37c-ff0a-438d-999b-1c8f0f0b2fcdn%40list.nist.gov?utm_medium=email&utm_source=footer

 

What would the performance differences between AAA-3 vs G-7 and AAA-3 vs G-8 (https://eprint.iacr.org/archive/2024/018/1737032157.pdf) be? Their signature sizes are 3280, 3888 and 4000 bytes respectively.

 

Regards,

Quynh.

Sophie Schmieg

unread,
Jul 7, 2025, 2:43:17 PMJul 7
to Dang, Quynh H. (Fed), homa...@gmail.com, pqc-forum, Paul Hoffman
I like to think of the alternate parameter sets for SLH-DSA as a point in a continuum with stateful hash based signatures on one end, and SLH-DSA proper on the other. With a stateful hash based signature scheme, you have to track exactly which key has been used and which ones are still fresh in order to be secure. From there, you get few-time signatures (in some parameterizations of SLH-DSA or directly, although I don't think anyone wants to standardize it directly), where you still want to track how many times a subkey was used, but have a bit of slack. Then you get SLH-DSA parameterizations that require only a rough count of key usage, which can even be replaced with a physical capability estimate (my signer cannot physically produce enough signatures in order to cause key overuse), and finally SLH-DSA proper, where the computing power needed to actually exhaust a key, while not physically impossible, would be so expensive that you have to be actively trying to cause an overuse issue.
In that sense, SLH-DSA with alternate parameters fills a similar niche compared to stateful hash based signatures as nonce-misuse-resistant AEADs fulfill for AEADs (e.g. AES-GCM-SIV vs AES-GCM). Where deterministic IVs require perfect state-keeping, nonce-misuse-resistant AEADs only need a "good enough" type of state keeping, and can e.g. use the current time or eventually consistent counters as their IVs without breaking. In practice, there is a mountain of difference between perfect state keeping and "good enough" state keeping, as perfect state keeping runs into all sorts of issues like crashes, backups, confused humans swapping digits or misremembering the date, daylight saving time duplicating timestamps, etc. Meanwhile, as long as none of these issues allow an adversary to cause a significant amount of these types of state corruption errors (by e.g. making a law to mandate switching to and from daylight saving time every other week), none of these types of issues would be an issue for a nonce-misuse-resistant AEAD or a key-reuse-resistant version of SLH-DSA.
In other words, I see the comparison baseline for SLH-DSA with alternate parameters in LMS/XMSS, not in SLH-DSA proper. And having something with "good enough" requirements on state keeping would greatly simplify signature generation for things like firmware or potentially root certificates where size and memory tradeoffs matter a lot. In a comparison with SLH-DSA proper, none of these parameter sets will be secure, but in a comparison to LMS/XMSS any of them is orders of magnitude safer.



--

Sophie Schmieg | Information Security Engineer | ISE Crypto | ssch...@google.com

Dang, Quynh H. (Fed)

unread,
Jul 8, 2025, 8:08:31 AMJul 8
to Chris Fenner, pqc-forum, Paul Hoffman, Arne Padmos

Hi Chris,

 

Thank you for sharing new parameter sets and your evaluations.

 

If someone wants faster verification than the AAA and G sets offer and is willing to accept a somewhat larger signature size, I think you just found very good parameter sets. Below is some detail about the parameter sets rls128c1, G-7 and G-8.

 

id          Sig in bytes   Sign in #hashes    Verify in #hashes  #sigs at 112 bits of security
rls128c1    3856           1.45B              311                2^27.25
G-7 / G-8   3888 / 4000    84.67M / 95.68M    736 / 743          2^34.5 / 2^35.74
Difference  +0.8% / +3.7%  17.12 / 15.15 times  2.36 / 2.38 times  2^7.25 / 2^8.49 times

 

Signing time of G-8 is 95,680,000 / 2,186,222 (the signing cost of SLH-DSA-X-128s in hash function calls), about 43.76 times SLH-DSA-X-128s' signing time.

 

Signing time of rls128c1 is 1,450,000,000 / 2,186,222, about 663 times SLH-DSA-X-128s' signing time.

 

If SLH-DSA-X-128s takes 0.5 seconds to sign, rls128c1 would take 331.5 seconds (5.525 minutes) to sign.

 

I see no problem if someone says 5.5 minutes for signing is fine for their specific use case. But I don't know whether other users would also be fine with that for their specific software and firmware update and cert signing use cases.

 

I have not seen data which shows that the verify time of G-8 would create a performance problem for a specific use case. It is just that rls128c1 and AAA-2/3 have better signing times than G-8's.

 

I think you and your teammates have found the best parameter sets, AAA-2 and rls128c1, for the rebooting use case on 32-bit RISC-V processors when overuse is not a concern or a risk in your system.

 

The attractive property of G-8 is that it has better overuse safety than AAA-2/3 and rls128c1’s.

 

Regards,

Quynh.

 

 

 


Peter Thomassen

unread,
Jul 8, 2025, 1:53:55 PMJul 8
to pqc-forum, Dang, Quynh H. (Fed), pqc-forum, Paul Hoffman, homa...@gmail.com, ja...@goertzen.ch
Hi Quynh,

On Monday, July 7, 2025 at 7:58:27 PM UTC+2 Dang, Quynh H. (Fed) wrote:

What would the performance differences between AAA-3 vs G-7 and AAA-3 vs G-8  (https://eprint.iacr.org/archive/2024/018/1737032157.pdf ) be ? Their signature sizes are 3280,  3888 and 4000 bytes respectively.


I cannot answer your question precisely, as the DNS ecosystem is very diverse, and delivery success rates depend on all kinds of factors that don't allow writing down a simple function that would predict error rates based on DNSSEC signature (or key) size.

However, we conducted a study last year with DNSSEC implementations of various PQC algorithms, then attempted to retrieve the DNS records so signed from about 10,000 vantage points, using the RIPE ATLAS measurement platform.

The parameter space is quite large (UDP vs TCP; whether one (CSK) or two (KSK and ZSK) keys are used; whether the DO bit is set (delivers signatures to the client instead of only to the resolver); whether the requested name exists (if not, a large non-existence proof with several signatures has to be delivered); + some more details.)

Nevertheless, the results allow gaining some insights. In particular:
- On slide 7 of [1], consider the top-left diagram. Essentially, the NOERROR column is the success rate and the SERVFAIL column is the error rate for a standard query (UDP, name exists, no special circumstances). You can see that, as you go from FN-DSA-512 ("17_falcon512" row) to ML-DSA-44 ("18_dilithium2" row) to SLH-DSA-SHA2-128s ("19_spincs-sha256-128s" row), the success rate drops from 97.4% to 89.2% to 80.4%. -- These numbers apply when the domain uses 1 key only. (This is our "pdns" setup.)
- A typical 2-key scenario (our "bind9" setup) is shown on slide 6. Success rates in the corresponding diagram there go from 89% for FN-DSA-512 to around 68% for the others.

The point is that signature size alone (slide 7) can easily make the error rate go from 2.5% to 20%, which is a factor of 8. Requiring transmission of another key (either in a permanent 2-key setup, or temporarily during a rollover) makes it go to ~30%. Numbers hit 40% when querying a non-existing name (which causes additional signatures to be transmitted; see slide 8 where NXDOMAIN is success and SERVFAIL is error).

Without being able to speak to specific parameter sets, and acknowledging that the chosen measurement method is somewhat noisy, I wanted to express some caution due to the sensitivity of the DNS when it comes to large packets. To tell the actual performance difference, I suppose additional studies would be necessary.

Thanks,
Peter

Dang, Quynh H. (Fed)

unread,
Jul 9, 2025, 8:36:09 AMJul 9
to Peter Thomassen, pqc-forum, Paul Hoffman, ja...@goertzen.ch

Thank you Peter! Yes, more experiments are needed. See my comment below.

 

From: Peter Thomassen <homa...@gmail.com>
Sent: Tuesday, July 8, 2025 1:54 PM
To: pqc-forum <pqc-...@list.nist.gov>
Cc: Dang, Quynh H. (Fed) <quynh...@nist.gov>; pqc-forum <pqc-...@list.nist.gov>; Paul Hoffman <paul.h...@icann.org>; homa...@gmail.com <homa...@gmail.com>; ja...@goertzen.ch
Subject: Re: [EXTERNAL] Re: [Ext] [EXTERNAL] Re: [pqc-forum] NIST requests feedback on additional SLH-DSA(Sphincs+) parameter set(s) for standardization.

 

Hi Quynh,

On Monday, July 7, 2025 at 7:58:27PM UTC+2 Dang, Quynh H. (Fed) wrote:

What would the performance differences between AAA-3 vs G-7 and AAA-3 vs G-8  (https://eprint.iacr.org/archive/2024/018/1737032157.pdf ) be ? Their signature sizes are 3280,  3888 and 4000 bytes respectively.

 

I cannot answer your question precisely, as the DNS ecosystem is very diverse, and delivery success rates depend on all kinds of factors that don't allow writing down a simple function that would predict error rates based on DNSSEC signature (or key) size.

 

However, we've conducted a study last year with DNSSEC implementations of various PQC algorithms, then attempting to retrieve DNS records so signed from about 10,000 vantage points, using the RIPE ATLAS measurement platform.

 

The parameter space is quite large (UDP vs TCP; whether one (CSK) or two (KSK and ZSK) keys are used; whether the DO bit is set (delivers signatures to the client instead of only to the resolver); whether the requested name exists (if not, a large non-existence proof with several signatures has to be delivered); + some more details.)

 

Nevertheless, the results allow gaining some insights. In particular:

- On slide 7 of [1], consider the top-left diagram. Essentially, the NOERROR column is the success rate and the SERVFAIL column is the error rate for a standard query (UDP, name exists, no special circumstances). You can see that, as you go from FN-DSA-512 ("17_falcon512" row) to ML-DSA-44 ("18_dilithium2" row) to SLH-DSA-SHA2-128s ("19_spincs-sha256-128s" row), the success rate drops from 97.4% to 89.2% to 80.4%.

[Dang, Quynh H. (Fed)] Looking at the graphs on slide 7, the ecdsa256 and ed25519 cases have non-zero failure rates (all 4 ecdsa cases and 2 of the eddsa cases). This seems to suggest that some failure rate is simply inherent to UDP.

 

But the 2k RSA signature has a zero failure rate in all 4 cases. That seems to suggest that the failure rates of the ecdsa and eddsa cases above were not caused by UDP (because a 2k RSA signature is larger than the ecdsa and eddsa signatures), but by something else (I don't know what).

 

The DNS responses having one of those signatures above should fit in one 1232-byte packet.

 

https://blog.powerdns.com/2022/04/07/falcon-512-in-powerdns says that with Falcon512, in the two cases where the response carries one signature, it fits into a 1232-byte packet. In the cases with two signatures, or one signature plus one public key, the response does not fit into a 1232-byte packet, which could create performance issues.

 

The top-left diagram seems to cover the case of a response carrying one Falcon512 signature, which fits into a 1232-byte packet. If so, I don't understand why it has a noticeably higher failure rate than the ecdsa and eddsa cases.

 

ML-DSA44's signature of 2420 bytes having a higher failure rate makes sense. SLH-DSA level 1 small's signature of 7856 bytes having a much higher failure rate also makes sense.

 

With signatures of 3280, 3888 and 4000 bytes and a pk size of 32 bytes, it is likely that they would all have performance similar to ML-DSA44's for the responses which include a signature and a public key.

 

Since those 3 sizes are not far from each other (subjective here), my guess is that they would have the same performance in most cases, and in the other cases the differences would be small. New experiments are needed to check my guess here.

 

Regards,

Quynh.

 

 

-- These numbers apply when the domain uses 1 key only. (This is our "pdns" setup.)

Peter Thomassen

unread,
Jul 9, 2025, 9:27:10 AMJul 9
to pqc-forum, Dang, Quynh H. (Fed), pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen
Hi Quynh,

On Wednesday, July 9, 2025 at 2:36:09 PM UTC+2 Dang, Quynh H. (Fed) wrote:

[Dang, Quynh H. (Fed)] Looking at the graphs on the slide 7, the ecdsa256 and ed22219 cases have failure rates not zero (all 4 ecdsa cases and 2 of the eddsa cases).  This seems to possibly indicate that the failure is the natural way of UDP.

Maybe, maybe not. RIPE ATLAS measurements generally are quite noisy; other things might be at fault (e.g., middleboxes ignoring certain algorithm numbers and what not). 

But 2k RSA sig has zero failure rate at all 4 cases. That seems to possibly indicate that the failure rates of the ecdsa and eddsa cases above were not caused by UDP (because 2k RSA sig is larger than the ecdsa and eddsa sigs), but by something else (I don’t know).

RSA working was the pre-selection, so it's 100% success by definition (see slide 4).

Thanks,
Peter 

Chris Fenner

unread,
Jul 9, 2025, 10:56:09 AMJul 9
to Peter Thomassen, pqc-forum, Dang, Quynh H. (Fed), Paul Hoffman, ja...@goertzen.ch
Thank you Quynh,

I think you could headline this entire discussion: "Performance, size, resilience to overuse; pick 2"
 
I see no problem if someone says 5.5 minutes for signing is fine for their specific use case. But I don't know whether other users would also be fine with that for their specific software and firmware update and cert signing use cases.

Yup. I'd like to make 2 remarks on this point:
  1. Code (and I suppose, DNS record) signatures are verified millions of times more often than they are signed. Therefore one could argue that it would be optimal for signing to be millions of times more expensive than verification.
  2. If the signer is willing to cache the upper hypertree (i.e., XMSS layers), signature generation becomes significantly cheaper (at the cost of runtime memory for the signer); a rough cost model is sketched below.
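As a back-of-the-envelope cost model for the d=1 sets below (my own sketch, stated assumptions and all): with the single Merkle tree cached, signing cost is dominated by rebuilding the k FORS trees of 2^a leaves at roughly 3 hash calls per leaf; the uncached cost adds one WOTS keygen per hypertree leaf.

import math

def fors_sign_hashes(k, a):
    # ~2 hashes per FORS leaf (PRF + leaf hash) plus ~1 per internal node
    return k * 3 * (2 ** a)

def tree_build_hashes(h_prime, n, lg_w):
    # one WOTS keygen per leaf: len*(w-1) chain hashes, len PRF calls,
    # 1 leaf compression; plus ~2^h' hashes for the internal tree nodes
    w = 2 ** lg_w
    len1 = math.ceil(8 * n / lg_w)
    len2 = math.floor(math.log2(len1 * (w - 1)) / lg_w) + 1
    wots_keygen = (len1 + len2) * (w - 1) + (len1 + len2) + 1
    return (2 ** h_prime) * (wots_keygen + 1)

# rls128cs1 from the table below (n=16, h'=22, a=24, k=6, lg(w)=2):
cached = fors_sign_hashes(6, 24)              # 301,989,888: the 302M cached figure
full = cached + tree_build_hashes(22, 16, 2)  # 1,451,229,184: the 1.45B figure
print(f"cached: {cached:,}  uncached: {full:,}")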
I have not seen data which shows that the verify time of G-8 would create a performance problem for a specific use case. It is just that rls128c1 and AAA-2/3 have better signing times than G-8's.

I had hoped that one takeaway from my anecdote above about Smart NICs was that even a difference of ~3ms in boot time due to signature verification (the rough difference between G-7 and rls128c2 verification on a 50MHz RISC-V TPM-class SoC) can mean not having enough time to verify a signature during boot.
 
I think you and your teammates have found best parameter sets: AAA-2 and rls128c1 for the rebooting use case of the 32-bit RISC-V processors when overuse safety is not your concern or a risk in your system. 
 
The attractive property of G-8 is that it has better overuse safety than AAA-2/3 and rls128c1’s.

These remarks have me thinking that NIST should consider standardizing two sets of "rate-limited signer (RLS)" parameter sets:
  • one set for [~once/minute] use for code-signing (and similar) use cases, where verification time is as important as signature size, and the high signing cost provides a feasibility-based defense against overuse
  • one set for general purpose [~once/second] use, where signature generation is a little cheaper, the signer might reasonably deploy a fleet of parallel-operating HSMs, and so strong overuse resilience is important and sign/verify performance needs to be a bit more balanced
I've updated my toolset and suggested parameter sets at https://github.com/chrisfenner/slh-dsa-rls for these two "cs" and "gp" cases (and to take note of signing performance when the hypertree is cached). Below is a summary of what I think might be the "best" options for these two sets of sets (with comparisons to most-comparable sets from Smaller Sphincs+)

Code signing, full strength to 2^24 signatures (1/min for 30 years), optimized for verification time

L  sigs  id         h   d   h'  a   k   lg(w)  m   sig bytes  sig time  sig time (cached)  verify time
1  2^24  rls128cs1  22  1   22  24  6   2      21  3856       1.45B     302M               311
1  2^20  AAA-2      18  1   18  24  6   4      21  3264       449M      --                 451
3  2^24  rls192cs1  21  1   21  25  9   3      32  7752       2.03B     906M               526
3  2^20  PPP-2      20  1   20  21  10  4      30  7008       920M      --                 651
5  2^24  rls256cs1  21  1   21  25  12  2      41  14944      2.33B     1.21B              602
5  2^20  xx-2       19  1   19  21  14  4      40  12640      651M      --                 866


General purpose, full strength to 2^30 signatures (1/sec for 30 years), resilient to significant overuse (112/128/192 bit security @ 1/ms for 30 years), optimized for signature size

L  id         h   d   h'  a   k   lg(w)  m   sig bytes  sig time  sig time (cached)  verify time  overuse sigs
1  rls128gp2  45  3   15  19  6   8      21  3520       463M      9.44M              7082         43.02
1  G-4        39  3   13  18  7   7      22  3776       72M       --                 4209         40.05
3  rls192gp3  36  2   18  21  9   6      30  7272       1.2B      56.6M              2414         42.73
3  S-1        36  3   12  19  10  8      29  7560       98M       --                 10225        42.11
5  rls256gp1  34  2   17  21  13  7      41  12768      1.39B     81.8M              5316         40.12
5  qq-2       36  3   12  19  14  7      39  13888      83M       --                 7809         41.06



Thanks
Chris


Chris Fenner

unread,
Jul 9, 2025, 11:24:54 AMJul 9
to Peter Thomassen, pqc-forum, Dang, Quynh H. (Fed), Paul Hoffman, ja...@goertzen.ch
PS In case the tables in my previous message are garbled in the web view (not sure how HTML table copy pasting failed so bad), here is a low tech screenshot:

[screenshot of the tables from the previous message]

Dang, Quynh H. (Fed)

unread,
Jul 9, 2025, 12:39:06 PMJul 9
to Chris Fenner, pqc-forum, Paul Hoffman, ja...@goertzen.ch

Hi Chris,

 

Thank you for the input that you have provided.  See my comment below.

 

From: Chris Fenner <cf...@google.com>
Sent: Wednesday, July 9, 2025 10:56 AM
To: Peter Thomassen <homa...@gmail.com>
Cc: pqc-forum <pqc-...@list.nist.gov>; Dang, Quynh H. (Fed) <quynh...@nist.gov>; Paul Hoffman <paul.h...@icann.org>; ja...@goertzen.ch <ja...@goertzen.ch>
Subject: Re: [EXTERNAL] Re: [Ext] [EXTERNAL] Re: [pqc-forum] NIST requests feedback on additional SLH-DSA(Sphincs+) parameter set(s) for standardization.

 

Thank you Quynh,

 

I think you could headline this entire discussion: "Performance, size, resilience to overuse; pick 2"

 

I see no problem if someone says 5.5 minutes for signing is fine for their specific use case. But I don't know whether other users would also be fine with that for their specific software and firmware update and cert signing use cases.

 

Yup. I'd like to make 2 remarks on this point:

  1. Code (and I suppose, DNS record) signatures are verified millions of times more often than they are signed. Therefore one could argue that it would be optimal for signing to be millions of times more expensive than verification.
  2. If the signer is willing to cache the upper hypertree (i.e., XMSS layers), signature generation becomes significantly cheaper (at the cost of runtime memory for the signer).

I have not seen data which shows that the verify time of G-8 would create a performance problem for a specific use case. It is just that rls128c1 and AAA-2/3 have better signing times than G-8's.

 

I had hoped that one takeaway from my anecdote above about Smart NICs was that even a difference of ~3ms in boot time due to signature verification (the rough difference between G-7 and rls128c2 verification on a 50MHz RISC-V TPM-class SoC) can mean not having enough time to verify a signature during boot.

[Dang, Quynh H. (Fed)] https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/CAEMn3UMAOKdrO84y__bS2AH1VwHYwjyLPhjkC3S2-CPMyxW1vg%40mail.gmail.com says that G-7 takes about 3.4 ms for a verification. 

 

AAA-2 takes about 2.2 ms (3.4 x (213692/337322)) for a verification.

 

Why would AAA-2 work well but G-7 create a problem?

 

Regards,

Quynh. 

 

Chris Fenner

unread,
Jul 9, 2025, 1:41:03 PMJul 9
to Dang, Quynh H. (Fed), pqc-forum, Paul Hoffman, ja...@goertzen.ch
Thanks Quynh,

[Dang, Quynh H. (Fed)] https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/CAEMn3UMAOKdrO84y__bS2AH1VwHYwjyLPhjkC3S2-CPMyxW1vg%40mail.gmail.com says that G-7 takes about 3.4 ms for a verification.

AAA-2 takes 2.5 ms (3.4 x (213692 /337322)) for a verification.

Why does AAA-2 work well, but G-7 would create a problem ? 

My argument isn't that AAA-2 works well but G-7 wouldn't (because we aren't currently using AAA-2 for Smart NIC firmware signing), it's that 3 ms is a big difference when it comes to firmware signing. Here's how I get to 3 ms:

G-7 (costing 736 hashes) takes 337322 cycles on OpenTitan. So we can approximate the cycles-per-verification-hash for RISC-V as 337322/736 = 458 cycles per verification hash.

AFAIK OpenTitan is 100MHz, but I want to model an "average current gen TPM RoT" as a 50MHz RISC-V based SoC (today's TPMs come in under this clock speed; let's not assume every TPM is faster than OpenTitan starting next year).

Thus we have:

G-7 verification costs 337322 cycles (6.7ms on a 50MHz RISC-V based SoC)
rls128cs1 verification costs 458*311=142438 cycles (2.8ms on the same SoC) for a difference of 3.9ms

Thus the delta between rls128cs1 (my proposed Level 1 parameter set for SLH-DSA firmware signing) and G-7 (your proposed alternative) costs around 3.9ms on the target hardware. And 3.9ms in some cases (as in the Google Smart NIC case) is a lot. And this is for Level 1. For Level 3 and 5 the deltas against the fastest-to-verify proposed 2^30 sets in Smaller Sphincs+ are even more significant:

S-2 (an 8664-byte signature targeting Level 3 @ 2^30) verification costs 458*1078=493724 cycles (9.9ms)
rls192cs1 costs 458*526=240908 cycles (4.8ms) for a difference of 5.1ms

qq-4 (a 15520-byte signature targeting Level 5 @ 2^30) verification costs 458*1426=653108 cycles (13.1ms)
rls256cs1 costs 458*602=275716 cycles (5.5ms) for a difference of 7.6ms
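
Here is the same arithmetic in a few lines of Python, in case anyone wants to plug in other hash counts or clock speeds (the cycles-per-hash constant is derived from the OpenTitan G-7 measurement above; the 50MHz clock is my modeling assumption):

CYCLES_PER_HASH = 337322 / 736   # measured G-7 cycles / G-7 verify hash count

def verify_ms(hashes, clock_mhz=50):
    # estimated verification latency on a clock_mhz RISC-V SoC
    return hashes * CYCLES_PER_HASH / (clock_mhz * 1e6) * 1e3

for name, hashes in [("G-7", 736), ("rls128cs1", 311),
                     ("S-2", 1078), ("rls192cs1", 526),
                     ("qq-4", 1426), ("rls256cs1", 602)]:
    print(f"{name:>10}: {verify_ms(hashes):.1f} ms")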

I think the difficulty of this tradeoff justifies standardizing 2 new sets of sets: one for code signing (and other verification-time-sensitive use-cases) and one for more general purpose small signatures. We have 6 parameter sets total for SLH-DSA, this would be just adding 6 more to target two important cases where the signature size needs to be cut down by at least half.

Thanks
Chris

Dang, Quynh H. (Fed)

unread,
Aug 6, 2025, 12:23:40 PMAug 6
to Chris Fenner, pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen

Hi Chris and all,

Fast Verify Options at 2^20 or 2^24 limits.

Level 1.

 

Id         n   h   d   h'  a   k   lg(w)  m
Level1-a   16  21  1   21  25  6   3      22
Level1-b   16  21  1   21  23  6   4      21
rls128cs1  16  22  1   22  24  6   2      21

 

Id         Bit-security/#sigs  pk  sig   Size reduction in bytes  Sign in #hashes  Verify in #hashes  #Sigs at 112 bits
Level1-a   128/2^24            32  3488  -368                     1266679808       465                26.93 [1]
Level1-b   117.97/2^24         32  3216  -640                     1329594368       448                25.16
rls128cs1  128/2^24            32  3856  0                        1451229184       311                27.25

[1] Note: 26.93 means 2^26.93 messages and that applies to the rest of this message.

(Assuming 9.44M hashes take 1 second, 302M hashes (cached signing of rls128cs1) would take about 32 seconds.)

I think Level1-a may be considered slightly more attractive than rls128cs1 by many others because of its smaller signature size, even though it has slower verification. I understand that you care about verification time very much.

Level1-b may be considered more attractive than the other 2 by many others due to its smallest signature size, even though it has only a 2^20 signature limit at 128 bits of security. For some firmware update and/or certificate signing situations, a 2^20 limit with degradation to 112 bits of security at 2^26.93 signatures per signing key might be safe enough.

For me, I prefer rls128cs1 due to its best overuse safety (2^27.25 sigs at 112 bits of security) among those 3.

However, there are 2 other parameter sets whose signature sizes are 3584 and 3776 bytes; their overuse safeties are 2^27.29 and 2^27.97 messages at 112 bits of security, and they take 364 and 376 hashes for verification. These 2 are a bit slower than rls128cs1 in verification, but they have smaller signatures and better overuse safeties (many people might consider these improvements small).

To me, it is very hard to say which one to go with even when fast verify is the most important factor.

 

Level 3. 

Id         n   h   d   h'  a   k   lg(w)  m
Level3-a   24  20  1   20  23  9   4      29
Level3-b   24  21  1   21  23  9   4      29
rls192cs1  24  21  1   21  25  9   3      32
Level3-c   24  21  1   21  25  9   2      32
Level3-d   24  21  1   21  23  10  4      32

 

Id         Bit-sec/#sigs                 pk  sig   Size reduction in bytes  Sign in #hashes  Verify in #hashes  #Sigs at 128/112 bits
Level3-a   (168.46/2^24), (192/2^20.13)  48  6912  -840                     1084227584       647                28.76/30.55
Level3-b   (175.69/2^24), (192/2^21.12)  48  6936  -816                     1329594368       648                29.76/31.55
rls192cs1  192/2^24                      48  7752  0                        1451229184       526                31.77/33.55
Level3-c   192/2^24                      48  8568  +816                     1757413376       460                31.77/33.55
Level3-d   192/2^24                      48  7512  -240                     1967128576       672                31.19/32.79

 

I prefer rls192cs1 due to its 2^24 limit at 192 bits of security and its overuse safeties.  However, many others might prefer one of the others for its smaller signature size or faster verification.

Level 5.

Id         n   h   d   h'  a   k   lg(w)  m
Level5-a   32  21  1   21  20  14  4      38
Level5-b   32  21  1   21  22  14  4      42
rls256cs1  32  21  1   21  25  12  2      41

 

Id         pk  sig    Size reduction in bytes  Sign in #hashes  Verify in #hashes  Bit-security/#sigs
Level5-a   64  12256  -2688                    2296381440       854                (256/2^20), (192/2^27.17)
Level5-b   64  13152  -1792                    2428502016       882                (256/2^24), (192/2^29.26)
rls256cs1  64  14944  0                        2327838720       602                (256/2^24), (192/2^29.98)

 

 

I prefer rls256cs1 for its largest overuse safeties. However, many others might prefer Level5-a or Level5-b for its smaller signature size.

Small Sig Options at 2^30 limit.

Level 1.

 

Id         n   h   d   h'  a   k   lg(w)  m
rls128gp2  16  45  3   15  19  6   8      21
Level1-A   16  32  2   16  22  6   7      21

 

Id         pk  sig   Size reduction in bytes  Sign in #hashes  Verify in #hashes  #Sigs at 112 bits
rls128gp2  32  3520  0                        462618624        7082               43.02
Level1-A   32  3408  -112                     428081152        2862               34.99

 

Personally, I slightly prefer rls128gp2 over Level1-A because of the numbers of sigs at 112 bits of security. But many others might prefer Level1-A because it has a smaller signature, much faster verification, and a slightly faster signing time.

 

Level 3.

 

Id         n   h   d   h'  a   k   lg(w)  m
rls192gp3  24  36  2   18  21  9   6      30
Level3-A   24  34  2   17  24  8   7      30

 

Id         pk  sig   Size reduction in bytes  Sign in #hashes  Verify in #hashes  #Sigs at 128 bits
rls192gp3  48  7272  0                        1.2B             2414               42.73
Level3-A   48  7080  -192                     1.4B             4078               41.98

 

Personally, I slightly prefer rls192gp3 over Level3-A because of the numbers of sigs at 128 bits of security. Some others might prefer Level3-A, which has a smaller signature, even though the size improvement is small.

Level 5.

 

Id         n   h   d   h'  a   k   lg(w)  m
rls256gp1  32  34  2   17  21  13  7      41
Level5-A   32  32  2   16  21  13  8      39

 

Id         pk  sig    Size reduction in bytes  Sign in #hashes  Verify in #hashes  #Sigs at 192/128 bits of security
rls256gp1  64  12768  0                        1.39B            5316               40.12/45.15
Level5-A   64  12384  -384                     1.22B            9026               38.12/43.15

 

I prefer rls256gp1 over Level5-A due to its overuse safety. However, many others might prefer the latter for its smaller signature size, though the size improvement seems small.

With 2 limits at 3 security levels, each with SHA3 and SHA2 variants, there would be 12 new parameter sets, which seems too many to me. Also, if one more limit were added, there would be another 6 parameter sets.

Please discuss the parameter set(s) that you prefer here, including your rationale, to help us with this search and selection process.

 

Thank you and Regards,

Quynh.

 

 

 

 


Chris Fenner

unread,
Aug 6, 2025, 5:53:52 PMAug 6
to Dang, Quynh H. (Fed), pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen
First, thanks Quynh for the continued thoughtfulness and consideration of the verification-time-sensitive use cases. As someone who works with a lot of firmware signing infrastructure (and security systems that need their firmware to be signed), I (and my teammates) really appreciate it.

I think I would still strongly suggest NIST consider publishing two sets of parameter sets, one for firmware/record signing and one for general use with a higher signature limit. I think that being able to tune each parameter set for its use case really helps with these tricky tradeoffs. However, I also think that the "small signatures but not trying to do firmware/record signing" use case can be more crypto agile and thus has a lot more options, such as ML-DSA and (soon) FN-DSA. So I understand if NIST prefers not to try to standardize some SLH-DSA parameter sets that are good for that use case but not for firmware/record signing.

Assuming that NIST wants to standardize just one set of SLH-DSA parameter sets, of the options you listed, here are my suggestions and reasons for the suggestions.

The below table is a screenshot in case googlegroups garbles the tables again. I'll stick the HTML form of the table at the end of this message in case it is needed and helpful.

I'll acknowledge that it does not make a whole lot of sense to express differences in logarithms in terms of percentages. I mostly wanted to compare red/green across the aspects of interest :)
[screenshot of the comparison table; a text version appears at the end of this message]
Mainly I think verification time and signature size are the most important, and are about equally important: I think it is ok to make one of these about 10% worse to make the other one about 10% better. Beyond that, signing time and overuse protection are good to consider but not primary concerns, because (at least for the firmware/record signing use case) we are talking about data that is signed very infrequently, in an environment that can guarantee there will be less than one signature per minute produced for 30 years (for example).

I think the following choices would be very well balanced for firmware and record signing and "not too badly balanced" for general use:
  • rls128cs1 has dramatically faster verification time than the other choices, which is important because systems that choose this parameter set probably aren't very beefy (or they might choose a level 3 parameter set). I like rls128cs1 better than level1-b-24 because I don't think 44% worse verification time and slightly worse overuse protection is worth it for 17% signature size savings. I think 4KB is an important boundary value and both parameter sets stay under it.
  • level3-b-24 has very fast verification time (roughly tied for 3rd fastest in the group you listed) while staying under 8KB, which is an important (e.g., page size) boundary. I think there are several other really good options in the region, such as rls192cs1 and level3-a-24. If we decided we were willing to go over 8KB I think level3-c-24 is great too, but I do think that 8KB boundary is important to consider.
  • level5-a-24 is also very fast to verify (2nd fastest in the group you listed), while retaining a very small signature size. I think it is worth considering spending 22% more signature size for 30% less verification time to go to rls256cs1, but I think both are good options (and 2KB is a lot of bytes).
Thanks
Chris

PS, here is the above table in HTML form, I hope it is not garbled too much.

Lvl  Set          Sign time        Verify time     Signature size   lg(Sigs at reduced)
1    rls128cs1    1.45E+09  100%   311     100%    3856    100%     27.25  100%
1    level1-b-24  1.33E+09   92%   448     144%    3216     83%     25.16   92%
1    level1-a-24  1.27E+09   87%   465     150%    3488     90%     26.93   99%
1    level1-a-30  4.28E+08   29%   2862    920%    3408     88%     34.99  128%
1    rls128gp2    4.63E+08   32%   7082   2277%    3520     91%     43.02  158%
3    level3-b-24  1.33E+09  100%   648     100%    6936    100%     31.55  100%
3    rls192cs1    1.45E+09  109%   526      81%    7752    112%     33.55  106%
3    level3-c-24  1.76E+09  132%   460      71%    8568    124%     33.55  106%
3    level3-a-24  1.08E+09   82%   647     100%    6912    100%     30.55   97%
3    level3-d-24  1.97E+09  148%   672     104%    7512    108%     32.79  104%
3    rls192gp3    1.20E+09   90%   2414    373%    7272    105%     42.73  135%
3    level3-a-30  1.40E+09  105%   4078    629%    7080    102%     41.98  133%
5    level5-a-24  2.30E+09  100%   854     100%    12256   100%     27.17  100%
5    level5-b-24  2.43E+09  106%   882     103%    13152   107%     29.26  108%
5    rls256cs1    2.33E+09  101%   602      70%    14944   122%     29.98  110%
5    rls256gp1    1.39E+09   61%   5316    622%    12768   104%     45.15  166%
5    level5-a-30  1.22E+09   53%   9026   1057%    12384   101%     43.15  159%

Chris Fenner

unread,
Aug 6, 2025, 8:12:20 PMAug 6
to Dang, Quynh H. (Fed), pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen
Apologies for my reading comprehension: I did not realize some of the suggestions were at the 2^20 level and some were at the 2^24 level. I think 2^20 signatures is probably fine for firmware signing, but the "rate limiting for 30 years" calculus yields 1 signature per 15 minutes. I like that 2^24 signatures lets you sign once per minute, because it makes it easier to reason about the safety: especially if your HSM doesn't cache the top level Merkle tree and takes over a minute to produce the signature in the first place -- slowness becomes a safety feature in that scenario. I've re-titled the "levelx-a/b/c/d" entries in my table to include the different limits. I think my recommendations become the following:
  • rls128cs1 for the same reasons as before (next best thing is over 40% slower to verify but not that much smaller)
  • rls192cs1 because it gives full strength at 2^24 signatures; level3-c-24 is a very good 2nd option but I disfavor it due to the 8KB boundary. There's not really a strong magic reason for 8192 bytes to be a big threshold, but just thinking about the way firmware images get laid out into (e.g., 4KB or 8KB) pages, it seems nice to stay under 8KB.
  • level5-a-20 if NIST finds the 2^20 (instead of 2^24) limit acceptable, rls256cs1 otherwise. level5-b-24 is also very good. The reason I like level5-a-20 is that it is so "small" it fits into three 4KB pages (similar argument to the above). Since this is the "safest" parameter set, it comes with some "bonus" overuse safety. An HSM that doesn't cache the Merkle tree, that can do about 1M short hashes per second, should take about 38 minutes to create a signature using this parameter set. So it may be quite reasonable to require users to avoid breaking the 2^20 limit, and recommend they limit the signer to one signature per 15 minutes. But I do wonder if it would be confusing to have a 2^20 limit for the level 5 parameter set, and a 2^24 limit for the others. The only reason I see not to go with a 2^20 parameter set for levels 1 and 3 is that the ones we've found don't give us a dramatic tradeoff benefit or cross an interesting size threshold.
But overall, I think all of the 2^20 and 2^24 parameter sets you've suggested in your recent mail Quynh, are very good for firmware signing. Mostly I am trying to come up with some ordered preference to give to you as feedback so you can finish your search :)

Lvl  Set          Sign time        Verify time     Signature size   lg(Sigs at reduced)
1    rls128cs1    1.45E+09  100%   311     100%    3856    100%     27.25  100%
1    level1-a-24  1.27E+09   87%   465     150%    3488     90%     26.93   99%
1    level1-b-20  1.33E+09   92%   448     144%    3216     83%     25.16   92%
1    level1-a-30  4.28E+08   29%   2862    920%    3408     88%     34.99  128%
1    rls128gp2    4.63E+08   32%   7082   2277%    3520     91%     43.02  158%
3    rls192cs1    1.45E+09  100%   526     100%    7752    100%     33.55  100%
3    level3-c-24  1.76E+09  121%   460      87%    8568    111%     33.55  100%
3    level3-d-24  1.97E+09  136%   672     128%    7512     97%     32.79   98%
3    level3-b-20  1.33E+09   92%   648     123%    6936     89%     31.55   94%
3    level3-a-20  1.08E+09   75%   647     123%    6912     89%     30.55   91%
3    rls192gp3    1.20E+09   83%   2414    459%    7272     94%     42.73  127%
3    level3-a-30  1.40E+09   96%   4078    775%    7080     91%     41.98  125%
5    level5-a-20  2.30E+09  100%   854     100%    12256   100%     27.17  100%
5    rls256cs1    2.33E+09  101%   602      70%    14944   122%     29.98  110%



Thanks
Chris

Chris Fenner

unread,
Aug 6, 2025, 8:19:30 PMAug 6
to Dang, Quynh H. (Fed), pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen
Screenshot of table since it's garbled in the web view:

[screenshot of the table above]

Chris Fenner

unread,
Aug 6, 2025, 11:43:00 PMAug 6
to Dang, Quynh H. (Fed), pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen
One more thing (sorry to send several messages in a row), I started working on tooling to print out the nice security level by signature count graphs like Quynh & Scott had in their paper, for the parameter sets listed in Quynh's message. It might be useful for thinking about how these compare with each other.


[chart: security level vs. number of signatures for the parameter sets listed in Quynh's message]

Dang, Quynh H. (Fed)

unread,
Aug 7, 2025, 11:18:30 AMAug 7
to Chris Fenner, pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen

Hi Chris, 

 

Thank you for the table, the graph, and the suggestions. See more discussion inline below. 

 

Hi all,

 

Please discuss your preferences. 

 

 

From: Chris Fenner <cf...@google.com>
Sent: Wednesday, August 6, 2025 8:19 PM
To: Dang, Quynh H. (Fed) <quynh...@nist.gov>
Cc: pqc-forum <pqc-...@list.nist.gov>; Paul Hoffman <paul.h...@icann.org>; ja...@goertzen.ch <ja...@goertzen.ch>; Peter Thomassen <homa...@gmail.com>
Subject: Re: [EXTERNAL] Re: [Ext] [EXTERNAL] Re: [pqc-forum] NIST requests feedback on additional SLH-DSA(Sphincs+) parameter set(s) for standardization.

 

Screenshot of table since it's garbled in the web view:

 

 

On Wed, Aug 6, 2025 at 5:11PM Chris Fenner <cf...@google.com> wrote:

Apologies for my reading comprehension, I did not realize some of the suggestions were at the 2^20 level and some were at the 2^24 level. I think 2^20 signatures is probably fine for firmware signing, but the "rate limiting for 30 years" calculus yields 1 signature per 15 minutes. I like that 2^24 signatures lets you sign once per minute, because it makes it easier to reason about the safety: especially if your HSM doesn't cache the top level Merkle tree and takes over a minute to produce the signature in the first place -- slowness becomes a safety feature in that scenario. I've re-titled the "levelx-a/b/c/d" in my table to include the different limits. I think my recommendations become the following:

  • rls128cs1 for the same reasons as before (next best thing is over 40% slower to verify but not that much smaller)

[Dang, Quynh H. (Fed)] If it takes 32 seconds to sign with rls128cs1, it will take 17.024 years for a single machine doing nothing but signing to go over 2^24. At the 2^25.75 and 2^27.25 limits, it has 120 and 112 bits of security, respectively. That seems safe enough for firmware, software update and certificate signing purposes. How about the case of multiple signers using the same key? Would it be reasonable to require that a single signing key shall not be used concurrently in multiple signing processes?
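(Checking that arithmetic, in the style of the benchmark ratios earlier in the thread:)

>>> round(2**24 * 32 / (365*24*3600), 3)   # years for one signer at 32 s/signature
17.024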

  • rls192cs1 because it gives full strength at 2^24 signatures; level3-c-24 is a very good 2nd option but I disfavor it due to the 8KB boundary. There's not really a strong magic reason for 8192 bytes to be a big threshold, but just thinking about the way firmware images get laid out into (e.g., 4KB or 8KB) pages, it seems nice to stay under 8KB.

[Dang, Quynh H. (Fed)] As I stated in my previous email, I personally prefer rls128cs1 and rls192cs1 for their overuse safeties. You wanted the fastest verify option, so I just showed that level3-c-24 is 13% faster in verification than rls192cs1 at the cost of an 11% larger signature. Anyone preferring level3-c-24 over rls192cs1, please speak up.

  • level5-a-20 if NIST finds the 2^20 (instead of 2^24) limit acceptable, rls256cs1 otherwise. level5-b-24 is also very good. The reason I like 
    level5-a-20 is that it is so "small" it fits into three 4KB pages (similar argument to the above). Since this is the "safest" parameter set, it comes with some "bonus" overuse safety. An HSM that doesn't cache the Merkle tree, that can do about 1M short hashes per second, should take about 38 minutes to create a signature using this parameter set. So it may be quite reasonable to require users to avoid breaking the 2^20 limit, and recommend they limit the signer to one signature per 15 minutes. But I do wonder if it would be confusing to have a 2^20 limit for the level 5 parameter set, and a 2^24 limit for the others. The only reason I see not to go with a 2^20 parameter set for levels 1 and 2 is that the ones we've found don't give us a dramatic tradeoff benefit or cross an interesting size threshold.

[Dang, Quynh H. (Fed)] As stated in my previous email, I personally prefer rls256cs1 over the other two for its largest overuse safety. My second choice is level5-b for its limit at 2^24 (not 2^20) and its smaller signature. Anyone preferring either level5-b-24 or level5-a-20, please speak up.
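
For anyone who wants to replay the back-of-the-envelope numbers in this thread, here is a minimal Python sketch (the 32-second signing time and the 30-year lifetime are figures quoted above, not benchmark results):

    # Back-of-the-envelope checks for the signing-rate discussion above.
    SECONDS_PER_YEAR = 365 * 24 * 3600

    def years_to_exhaust(limit_log2, seconds_per_signature):
        # Years for one machine, signing back to back, to produce 2^limit_log2 signatures.
        return (2 ** limit_log2) * seconds_per_signature / SECONDS_PER_YEAR

    def minutes_per_signature(limit_log2, lifetime_years=30):
        # Signing interval that keeps one key under 2^limit_log2 signatures over its lifetime.
        return lifetime_years * SECONDS_PER_YEAR / 60 / (2 ** limit_log2)

    print(years_to_exhaust(24, 32))   # ~17.0 years at 32 s/signature (the rls128cs1 figure)
    print(minutes_per_signature(20))  # ~15 minutes between signatures for a 2^20 limit
    print(minutes_per_signature(24))  # ~0.9 minutes between signatures for a 2^24 limit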

Regards,

Quynh. 

Sophie Schmieg

unread,
Aug 7, 2025, 2:35:14 PMAug 7
to Dang, Quynh H. (Fed), Chris Fenner, pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen
As mentioned before, I do not think that it is good practice to work around limits with overly prescriptive limitations on deployments. After all, it is perfectly fine to have two servers (for reliability reasons alone, many users will want at least two signing servers in different hemispheres) creating signatures with SLH-DSA, as long as they do not breach the limits; taking your own numbers, two servers would still require 8000 years to produce enough signatures to reach the limits. SP 800-38D simply specifies the limit - no more than 2^32 encryption operations SHALL be done. I'd argue that this is also the best approach here: document the upper limit, and assume that implementers will follow the guidance that is given. Just as with AES-GCM, stuff does not go up in flames the second the limit is breached; the limit is merely a mathematical artifact marking the point where a certain probability becomes unacceptable, and in both cases there is enough safety margin included in the calculation that only a severe overstepping of the limit actually results in an exploitable vulnerability, most of the time.

As an aside, while I'm talking about the 2^32 limit in SP 800-38D, I would actually argue that even that is not the right guidance, and the correct guidance would have been to state that the probability of an IV collision within the corresponding system shall not be above 2^-32. In most cases, this yields the same result, as 2^32 encryption queries lead to an IV collision probability of 2^-32. However, there are scenarios where the difference matters: in many modern use cases of AES-GCM, the 2^32 limit is too restrictive, so in order to allow for the encryption of more messages, systems use key derivation to derive keys using an independent random nonce and then use those keys to encrypt with AES-GCM. If we put the key derivation nonce at a length of 64 bits, we now have 2^64 keys instead of one, which, as a naive calculation suggests, would allow for the encryption of 2^96 messages (even if the key derivation nonce is random, it turns out that due to the law of large numbers the number of encryption calls allowed without violating NIST guidance is well above 2^95). However, since the individual collision chance is 2^-32, if one were to actually encrypt 2^96 messages (ignoring the fact that this is utterly impossible), one would expect not just one but around 2^32 IV collisions. The correct computation instead models the key derivation nonce and the AES-GCM IV as a single 160-bit nonce, which, accounting for the 2^-32 collision probability limit and the birthday paradox, gives the actual safe-to-encrypt number of messages for this system: 2^64.
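
To make the birthday arithmetic concrete, here is a minimal Python sketch of the standard approximation p ≈ q^2 / 2^(b+1) for q random b-bit nonces (my own illustration, not text from SP 800-38D):

    from math import sqrt, log2

    def max_queries(nonce_bits, collision_prob_log2=-32):
        # Largest q with collision probability <= 2^collision_prob_log2,
        # using the birthday approximation p ~ q^2 / 2^(nonce_bits + 1).
        return sqrt(2 ** (nonce_bits + 1 + collision_prob_log2))

    print(log2(max_queries(96)))   # ~32.5: matches the 2^32 invocation limit for a 96-bit GCM IV
    print(log2(max_queries(160)))  # ~64.5: matches the ~2^64 limit for the combined 160-bit nonce

    def expected_collisions_log2(q_log2, nonce_bits):
        # log2 of the expected number of pairwise nonce collisions among 2^q_log2 nonces.
        return 2 * q_log2 - nonce_bits - 1

    print(expected_collisions_log2(96, 160))  # 31: on the order of the 2^32 collisions above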

My point with this story is that if you are not careful to put down the limits that you actually, mathematically, mean to enforce, you will produce systems that are perfectly compliant with the standard while also being perfectly insecure. All in all, just document that you can at most produce 2^20 signatures, and add suggestions for how such a limitation could be enforced, but do not make those suggestions binding.

You received this message because you are subscribed to the Google Groups "pqc-forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to pqc-forum+...@list.nist.gov.
To view this discussion visit https://groups.google.com/a/list.nist.gov/d/msgid/pqc-forum/MW4PR09MB10059BBB4AEC8510E85E70759F32CA%40MW4PR09MB10059.namprd09.prod.outlook.com.

Blumenthal, Uri - 0553 - MITLL

unread,
Aug 7, 2025, 2:41:08 PMAug 7
to Sophie Schmieg, Quynh H. Dang, Chris Fenner, pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen
As long as we have limits that can be exceeded in practice, rather than merely in theory - it seems inevitable that we state what those are, and what applications/deployments they are acceptable for.

Following this logic, why should we not state that a given set of parameters has a given limitation, and is suitable (only?) for the given use cases?

Thanks!
Regards,
Uri

Secure Resilient Systems and Technologies
MIT Lincoln Laboratory

On Aug 7, 2025, at 14:35, 'Sophie Schmieg' via pqc-forum <pqc-...@list.nist.gov> wrote:


As mentioned before, I do not think that it is good practice to work around limits with overly prescriptive limitations on deployments. After all, it is perfectly fine to have two servers (for reliability reasons alone, many users will want

Sophie Schmieg

unread,
Aug 7, 2025, 4:19:17 PMAug 7
to Blumenthal, Uri - 0553 - MITLL, Quynh H. Dang, Chris Fenner, pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen
I think we should especially differentiate between the prescriptive guidance and the recommendations. The prescriptive parts of the standard should only state the actual, hard mathematical limits that one shall not exceed. The recommendations can be more comprehensive, telling us how to ensure that we do not exceed these limits. This way an implementer can follow the recommendations without consulting any cryptographers and be reasonably sure that they stay within the limits, but future use cases would not be constrained by our current imagination. The example of wanting to have redundancy is imho a very good one here: it is perfectly safe, even if you only use compute power as a rate limit, to have two signing servers, and having two signing servers is far more robust than a single one (the common rule of thumb at Google would be to have at least 2 more than however much one needs to fulfill capacity, in case one is down for maintenance and the other one has an outage, so really we would want 3 signing servers, but my point is the same). This is a completely reasonable variation of Quynh's calculations, completely safe, and yet, if you prescribe that only one server shall be allowed, it would be out of compliance. If however you prescribe that there shall be no more than 2^24 (or similar) signing operations, and recommend that this can be achieved by only having one signing server, it would merely force me to write up some internal doc doing some quick back-of-the-envelope calculation, and the team would be free to use SLH-DSA on their 3 servers and be within compliance for NIST.

Long story short: Prescribe precisely what is necessary, recommend what is the easiest way to achieve that.

Blumenthal, Uri - 0553 - MITLL

unread,
Aug 7, 2025, 4:30:51 PMAug 7
to Sophie Schmieg, Quynh H. Dang, Chris Fenner, pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen
OK, makes sense. In that case, prescriptive guidance is - “this parameter set is for up to X signatures”; recommendation - “this parameter set is best used for Y applications”.

Thx

Regards,
Uri

Secure Resilient Systems and Technologies
MIT Lincoln Laboratory

On Aug 7, 2025, at 16:19, 'Sophie Schmieg' via pqc-forum <pqc-...@list.nist.gov> wrote:


I think we should especially differentiate between the prescriptive guidance and the recommendations. The prescriptive parts of the standard should only state the actual, hard mathematical limits that one shall not exceed. The recommendations

Sophie Schmieg

unread,
Aug 7, 2025, 4:38:48 PMAug 7
to Blumenthal, Uri - 0553 - MITLL, Quynh H. Dang, Chris Fenner, pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen
Yes, exactly. That gives you the flexibility to fine-tune for specific use cases without requiring a cryptography PhD for every single deployment.

John Mattsson

unread,
Aug 8, 2025, 5:10:10 AMAug 8
to Sophie Schmieg, Blumenthal, Uri - 0553 - MITLL, Quynh H. Dang, Chris Fenner, pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen

Sophie Schmieg wrote:
>SP 800-38D simply specifies the limit - no more than 2^32 encryption operations >SHALL be done. I'd argue that this is also the best approach here:

 

I agree. While limits ideally should be high enough that they need not be considered in practice, there are almost always tradeoffs between security and performance. Overly prescriptive limitations often lead to people using something else instead, which might be worse.

 

>the 2^32 limit is too restrictive, so in order to allow for the encryption of more >messages, systems use key derivation to derive keys using an independent random >nonce and then using this key to encrypt with AES-GCM.

 

Yes, but the 2^32 limit on invocations combined with the 2^32 block limit on plaintext length is actually too soft. A commonly agreed limit for block cipher modes is 2^59 blocks with a single key.

 

>As an aside, while I'm talking about the 2^32 limit in SP 800-38D, I would actually >argue that even that is not the right guidance, and the correct guidance would have >been to state that the probability of an IV collision within the corresponding >system shall not be above 2^-32.

 

SP 800-38D actually has both of these requirements, and the requirement of 2^-32 IV collision probability is way too soft. With an IV collision probability of 2^-32, an attacker can completely compromise integrity and recover up to 2^33 blocks of plaintext with complexity 2^97. While this is the same as the complexity of a distinguishing attack based on the lack of collisions, the lack of collisions just allows an attacker to recover a small fraction of a bit and does not compromise integrity. 2^97 complexity is acceptable for the distinguishing attack (lack of block collisions), but not at all for the IV collision attack.

 

https://csrc.nist.gov/files/pubs/sp/800/38/d/r1/upd/iprd/docs/sp800-38d-pre-draft-public-comments.pdf

 

Cheers,

John

 

 

Arne Padmos

unread,
Sep 26, 2025, 6:24:55 PM (8 days ago) Sep 26
to pqc-forum, John Mattsson, Quynh H. Dang, Chris Fenner, pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen, Sophie Schmieg, Blumenthal, Uri - 0553 - MITLL
Hi Quynh,

Any news on the selection of a new parameter set for use cases like firmware signing? I'm asking because it's currently in the critical path of the PQC transition with very tight timelines. There's only a little over two years left before the EU's Cyber Resilience Act will be fully enforced. (I'm skimming over the delegated act of the Radio Equipment Directive and associated harmonised standards, which can be read to require PQC today in situations where it's state of the art. That turning point seems to have been reached with hybrid KEMs for TLS in devices like laptops, tablets, phones, and the like, especially given their adoption in major browsers and the widespread support in the latest mobile and desktop operating systems, arguments strengthened by adoption statistics in Cloudflare Radar which I assume reflect X25519MLKEM768.)

While the CRA implementing act that specifies in more detail the types of devices falling under the different criticality categories isn't out yet (it's expected before the 11th of December of this year), based on the draft implementing act, the wording of Annex IV of the CRA, and the scope of the SOG-IS technical domain of 'hardware devices with security boxes', it's to be expected that (among others) HSMs, payment terminals, smart meter gateways, smart cards, and secure elements will be included in the most critical category. The expected lifetimes and concomitant support periods that are likely to be set by the manufacturers of these products – in particular smart meter gateways, HSMs, payment terminals, and similar products – are such that PQC can be assumed to be relevant (but ask your lawyer, this is not legal advice). Note that the CRA sets a generally applicable minimum support period of five years, but calls out that this period will generally be much longer for some categories of products and that it should meet the expected time in which it'll be in use. Also, the Commission can adopt delegated acts if market surveillance data shows that manufacturers of specific product categories are consistently low-balling the defined support period. Additionally, the CRA sees individual devices as separate products, so it's not the first but the last device of a given type that defines how long the manufacturer needs to keep up support.

The CRA's requirement for secure updates means that firmware signing becomes critical. As the timelines of the provision of security updates are very long for certain product categories, this makes a PQC root of trust relevant. As such, it's not strange that firmware signing is one of the two major priorities highlighted in the document 'A coordinated implementation roadmap for the transition to post-quantum cryptography' of the NIS Cooperation Group (the other priority being HNDL attacks). Of course, manufacturers might choose to ship PQC-vulnerable hardware with the understanding that they might have to do a recall if they can't fix the root of trust through a firmware upgrade. Or if they're comfortable leaving the grey areas like level 1 versus level 3 or precise quantum risk management considerations up to a judge to decide, who'll likely have zero familiarity with the relevant technical details.

The products deemed critical by the CRA will have to be evaluated by an independent lab. While the EUCC certification scheme isn't (yet) mandatory for critical products with digital elements, the recently published ECCC Agreed Cryptographic Mechanisms v2 (ACMv2) are likely to be used by security labs evaluating products to CRA requirements – 'notified bodies' in CRA and RED DA parlance. (Note that I hope the EUCC scheme doesn't become mandatory, and I wonder if it ever will be, given the requirement for the European Commission to do a market impact assessment before setting such a requirement. As there is a nascent but growing community working on (more) open solutions like Open Titan, and given efforts like Raspberry Pi's RP2350 Hacking Challenge highlighting the relevance of security through transparency, I'm wondering how negative market effects can be prevented. For example, I wouldn't like forced implementation of EUCC to hamper or slow the commercialisation of CHERI. On a positive note, the Commission seems to generally have a positive perception of the open source community as being ahead of the curve when it comes to the adoption of security technologies. But those are all topics for another time.)

As to specific parameters, given the previously mentioned ACMv2, I mostly care about the level 3 parameter set and think rls192cs1 would work well. Also, I'm assuming that the parameter sets for 'SLH-DSA-few' will only include SHAKE. Given the intended application domain, the performance difference between the SHA2 family and SHAKE256 in hardware, many of the PoCs mentioned appearing to be SHAKE-based, the interest shown in the thread, the advantage of compatibility, and the desire to limit the growth of additional parameter sets, I think just standardising SHAKE-based variants is the best decision. It might also help to reduce the alphabet soup of the current SLH-DSA identifiers.

As there haven't been any dissenting voices to the rls192cs1 proposal and no-one seems to have been clamouring for SHA-2-based variants, can we assume that SLH-DSA-SHAKE-rls192cs1 will be standardised (hopefully under a different, shorter name e.g. 'SLH-DSA-few' or 'SLH-DSA-fw')? If so, what are the timelines and publication format that you have in mind? I'm asking because I assume this will not be an update to FIPS 205, and as such ACMv2 will have to be updated (assuming that a v2.1 happens, which will take additional time). Of course, manufacturers might be able to convince their security labs to accept additional SLH-DSA parameters based on the existence of an SP plus ACMv2's reference to FIPS 205, but explicit approval in ACM 'v2.1' will of course be desirable.

The faster that 'SLH-DSA-SHAKE-rls192cs1' gets standardised by NIST, the more likely it actually gets adopted in practice in lieu of less desirable options. In the absence of an expedient selection and publication of one or more additional parameters, the only other options I see being adopted in the EU given the tight timelines are SLH-DSA-SHAKE-192s (which leaves something to be desired performance-wise), XMSS or LMS (which are foot cannons and an operational nightmare), or hybrid ML-DSA (which introduces all kinds of complexities and which has a bumpy path towards adoption). Please save us from these scenarios.

Discussion and selection of the level 1 and level 5 parameters is not as urgent in my opinion and can be put on hold if it makes the whole process for the level 3 selection and standardisation faster. Of course, you're free to specify a level 5 parameter set, but I'm not sure it's actually going to see widespread use in practice, at least for the foreseeable future. Even CNSA 2.0, which requires level 5 parameters for ML-KEM and ML-DSA, recommends LMS with SHA-256/192 as specified in NIST SP 800-208 for firmware signing, i.e. a targeted security strength of 192 bits. Even if there are remaining discussions about parameter sets at levels 1 and 5, it would be great if quick progress could be made on closing the remaining open points for level 3. Precise operational constraints can be discussed later, but I support the points made by others on this topic. Note that I don't feel too strongly about a level 1 parameter set, but I'm not sure it would be used for a root of trust in products considered critical according to the CRA.

Thank you very much for the time and effort spent towards standardising a smaller SLH-DSA variant for a specific niche use case.

Regards,
Arne

John Mattsson

unread,
Sep 27, 2025, 1:08:01 PM (7 days ago) Sep 27
to Arne Padmos, pqc-forum, Quynh H. Dang, Chris Fenner, pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen, Sophie Schmieg, Blumenthal, Uri - 0553 - MITLL

 

Arne Padmos wrote:
>Cloudflare Radar which I assume reflect X25519MLKEM768

 

I also assume that Cloudflare Radar primarily reflects X25519MLKEM768 usage. Can anyone confirm this? (49% PQC on Sep 24, 8.00 UTC)

https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption

 

3GPP will soon discuss which TLS key exchange algorithms to mandate. Personally, I would like to see both X25519MLKEM768 and standalone ML-KEM mandated (I avoid the term pure, as it implies that non-hybrid variants are somehow superior). However, support for standalone ML-KEM in TLS libraries currently appears to lag behind X25519MLKEM768. For example, the TLS implementations in Go and Java do not yet support standalone ML-KEM. Does anyone know if this situation is likely to change? 3GPP vendors generally rely on commercially available TLS implementations.

 

>the recently published ECCC Agreed Cryptographic Mechanisms v2 (ACMv2)

 

I strongly hope this document is not applied beyond national security systems. If it is intended for broader use, it must undergo a transparent public review process and actively incorporate input from industry stakeholders.

 

>Also, I'm assuming that the parameter sets for 'SLH-DSA-few' will only include SHAKE. Given the intended application domain, the performance difference between the SHA2 family and SHAKE256 in hardware, many of the PoCs mentioned appearing to be SHAKE-based, the interest shown in the thread, the advantage of compatibility, and the desire to limit the growth of additional parameter sets, I think just standardising SHAKE-based variants is the best decision. It might also help to reduce the alphabet soup of the current SLH-DSA identifiers.

 

+ 1

(While exploring the construction of a PKE using ML-KEM, I was reminded how valuable a Keccak-based AEAD would be. It would enable building an ML-KEM–based PKE where Keccak is the sole symmetric primitive that needs to be implemented.)

Cheers,

John

Bas Westerbaan

unread,
Sep 27, 2025, 1:35:36 PM (7 days ago) Sep 27
to John Mattsson, Arne Padmos, pqc-forum, Quynh H. Dang, Chris Fenner, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen, Sophie Schmieg, Blumenthal, Uri - 0553 - MITLL

I also assume that Cloudflare Radar primarily reflects X25519MLKEM768 usage. Can anyone confirm this? (49% PQC on Sep 24, 8.00 UTC)

https://radar.cloudflare.com/adoption-and-usage#post-quantum-encryption-adoption


It's about 95% X25519MLKEM768 and 5% X25519Kyber768Draft00. There is a sliver (~0.01%) that advertises support for pure ML-KEM (which we don't support), mostly together with X25519MLKEM768. That's about the same number as CECPQ2. P256MLKEM768 is even lower.

Kampanakis, Panos

unread,
Sep 27, 2025, 11:14:45 PM (7 days ago) Sep 27
to Arne Padmos, pqc-forum, Quynh H. Dang

Hi Arne,

Quynh shared parameter candidates last week at the 6th PQC conference. The parameters included 2^16, 2^24 and 2^30 limits. NIST is planning to standardize the 2^24 parameter sets rls128cs1, rls192cs1, and rls256cs1. He solicited more feedback for the other two limits. The video is not public yet. I think it will take NIST 2-3 weeks to post it, assuming no government shutdown. Also see image.

 

 

 

 

From: pqc-...@list.nist.gov <pqc-...@list.nist.gov> On Behalf Of Arne Padmos


Sent: Friday, September 26, 2025 6:25 PM
To: pqc-forum <pqc-...@list.nist.gov>

Cc: John Mattsson <john.m...@ericsson.com>; Quynh H. Dang <quynh...@nist.gov>; Chris Fenner <cf...@google.com>; pqc-forum <pqc-...@list.nist.gov>; Paul Hoffman <paul.h...@icann.org>; ja...@goertzen.ch <ja...@goertzen.ch>; Peter Thomassen <homa...@gmail.com>; Sophie Schmieg <ssch...@google.com>; Blumenthal, Uri - 0553 - MITLL <u...@ll.mit.edu>
Subject: RE: [EXTERNAL] [Ext] [EXTERNAL] Re: [pqc-forum] NIST requests feedback on additional SLH-DSA(Sphincs+) parameter set(s) for standardization.

 


Dang, Quynh H. (Fed)

unread,
Sep 29, 2025, 8:59:07 AM (5 days ago) Sep 29
to Arne Padmos, pqc-forum, John Mattsson, Chris Fenner, pqc-forum, Paul Hoffman, ja...@goertzen.ch, Peter Thomassen, Sophie Schmieg, Blumenthal, Uri - 0553 - MITLL

Hi Arne, 

 

Thank you for all that important information.

 

Yes, as you recommended, in the talk at the workshop last week we announced that we plan to standardize the rls192cs1 parameter set with both SHAKE256 and a SHA-2 instantiation. I will discuss your suggestion of going with just SHAKE with my team.

 

On the technical side, I will do my best to get the job done fast. 

 

Hi all, 

 

As presented in the talk, we plan to standardize the parameter sets rls128cs1, rls192cs1 and rls256cs1. 

 

As Amber Sprenkels and Thom Wiggers discussed in their message on 04/16/2025: "we think that there may be some embedded use cases, such as individually signed firmware updates to large numbers of devices." With that, we have searched for many different parameter sets at the 2^30 limit. We presented and discussed several of them. We asked the community for more feedback on the demand for parameter sets at this limit. We would like to know if the mentioned use case is very common in the industry.

 

rls128cs1's signing time is about 664 times that of SLH-DSA-SHA2-128s with the normal signing method. The factor drops to about 138 when the one-time signature tree is cached; in that case, if SLH-DSA-SHA2-128s takes 0.2 seconds to sign, rls128cs1 would take about 27.6 seconds. So the signing time of rls128cs1 could be considered too slow when a lot of signatures need to be generated in a short period of time (multiple signers would improve the speed of such a task).

 

Besides the use case Amber Sprenkels and Thom Wiggers brought up above, there could be another use case: signing certificates, especially end-entity certificates, when a lot of them need to be generated quickly. We don't know whether CAs actually plan to use a smaller SLH-DSA parameter set for this and, if so, whether the signing time of rls128cs1 is acceptable to them (see the rough sketch below).
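
To put the signing-time ratio above in perspective for this kind of bulk signing, here is a minimal Python sketch using only the figures in this message (the 0.2-second baseline is the hypothetical one above, and the signer counts are illustrative, not a recommendation):

    # Rough throughput estimate from the ratios quoted above (not benchmarks).
    baseline_sign_s = 0.2   # hypothetical SLH-DSA-SHA2-128s signing time
    ratio_cached = 138      # rls128cs1 vs. SLH-DSA-SHA2-128s, one-time signature tree cached

    rls_sign_s = baseline_sign_s * ratio_cached   # ~27.6 s per rls128cs1 signature

    # For example, issuing 1,000,000 end-entity certificates:
    certs = 1_000_000
    for signers in (1, 8, 64):   # independent signing keys/machines working in parallel
        days = certs * rls_sign_s / signers / 86_400
        print(signers, round(days, 1))   # ~319 days alone; ~40 with 8 signers; ~5 with 64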

 

We also asked you to let us know if the three 2^24 parameter sets above would have unacceptable performance for your systems.

 

We presented some 2^16 limit parameter sets, but their performance improvements over the 2^24 limit ones are small in our opinion. In addition, lower limits come with higher security risks when misuse happens, and we would like to minimize the number of parameter sets; therefore, we don't plan to standardize them at this time.

 

Thank you and Regards,

Quynh. 

 

 

From: Arne Padmos <goo...@arnepadmos.com>

Sent: Friday, September 26, 2025 6:25 PM
To: pqc-forum <pqc-...@list.nist.gov>

Joost Renes

unread,
Sep 29, 2025, 9:33:30 AM (5 days ago) Sep 29
to quynh...@nist.gov, goo...@arnepadmos.com, pqc-...@list.nist.gov, john.m...@ericsson.com, cf...@google.com, paul.h...@icann.org, ja...@goertzen.ch, homa...@gmail.com, ssch...@google.com, u...@ll.mit.edu

Hi Arne, Quynh,

 

> and no-one seems to have been clamouring for SHA-2-based variants

 

To this point, I think there is significant value in SHA-2 based variants given how much hardware still supports SHA-2 while not including SHA-3.

Adding SHA-3 in hardware is more expensive than SHA-2 (in terms of area), raising challenges for constrained devices for which these parameter sets would be especially useful.

 

Kind regards,

Joost

 

From: 'Dang, Quynh H. (Fed)' via pqc-forum <pqc-...@list.nist.gov>
Sent: Monday, September 29, 2025 7:59 AM
To: Arne Padmos <goo...@arnepadmos.com>; pqc-forum <pqc-...@list.nist.gov>
Cc: John Mattsson <john.m...@ericsson.com>; Chris Fenner <cf...@google.com>; pqc-forum <pqc-...@list.nist.gov>; Paul Hoffman <paul.h...@icann.org>; ja...@goertzen.ch <ja...@goertzen.ch>; Peter Thomassen <homa...@gmail.com>; Sophie Schmieg <ssch...@google.com>; Blumenthal, Uri - 0553 - MITLL <u...@ll.mit.edu>
Subject: RE: [EXTERNAL] Re: [Ext] [EXTERNAL] Re: [pqc-forum] NIST requests feedback on additional SLH-DSA(Sphincs+) parameter set(s) for standardization.

 


 

Chris Fenner

unread,
Sep 29, 2025, 11:57:23 AM (5 days ago) Sep 29
to pqc-forum, Joost Renes, john.m...@ericsson.com, cf...@google.com, paul.h...@icann.org, ja...@goertzen.ch, homa...@gmail.com, ssch...@google.com, u...@ll.mit.edu, quynh...@nist.gov, goo...@arnepadmos.com
> To this point, I think there is significant value in SHA-2 based variants given how much hardware still supports it, while not including SHA-3.

+1 to this.

Hyperscalers have lots of existing hardware that they'd like to avoid just throwing away by <X date>. A lot of this hardware has SHA-2 acceleration but not SHA-3 acceleration. When considering how to bolt on, e.g., SLH-DSA verified update / verified boot / measured boot capabilities to existing hardware, the smaller the diff, the better :)

Also, huge thanks to the NIST team for the update here, and the hard work going on behind the scenes to enable it.

Thanks
Chris

Chris Fenner

unread,
Oct 1, 2025, 12:26:44 PM (3 days ago) Oct 1
to pqc-forum, Chris Fenner, Joost Renes, john.m...@ericsson.com, paul.h...@icann.org, ja...@goertzen.ch, homa...@gmail.com, ssch...@google.com, u...@ll.mit.edu, quynh...@nist.gov, goo...@arnepadmos.com
NIST posted the slides from Quynh's presentation here: https://csrc.nist.gov/csrc/media/presentations/2025/sphincs-smaller-parameter-sets/sphincs-dang_2.2.pdf. Thanks again to Quynh and team for their work here; I really enjoyed reading through the analysis in the deck (wish I could have made it to the meeting).

I might have found a minor copy-paste typo: slide 11 lists the signature cost (in hashes) of rls192cs1 as 1451229184, the same as rls128cs1. I believe the correct value is 2034237433. I don't think this substantially changes much (we already knew these parameter sets were slow for signers in order to be fast for verifiers 😅)

I have prepared some simple charts that summarize the comparison between the existing parameter sets for SLH-DSA, the proposed RLS (Relatively Little Signatures) parameter sets, and some select parameter sets from Quynh's presentation. I selected sets from the 2^16 parameter sets based on minimum signature size, and sets from the 2^30 parameter sets based on minimum signature time, just to try to get a sense for what would be possible for future proposals for 2^16 and 2^30 (depending on the anticipated actual use-cases here).

Thanks
Chris

Sig Bytes.png
Verify Time (Log).png
Sig Time (Log).png

Quynh Dang

unread,
Oct 1, 2025, 12:41:05 PM (3 days ago) Oct 1
to Chris Fenner, pqc-forum, Joost Renes, john.m...@ericsson.com, paul.h...@icann.org, ja...@goertzen.ch, homa...@gmail.com, ssch...@google.com, u...@ll.mit.edu, quynh...@nist.gov, goo...@arnepadmos.com
Thank you Chris.

I confirm that typo; it happened when I copied and pasted.

Regards,
Quynh. 

Kampanakis, Panos

unread,
Oct 1, 2025, 4:38:42 PM (3 days ago) Oct 1
to Quynh Dang, pqc-forum

Hi Quynh,

 

Thanks for making this happen.

 

The 2^24 parameters look fine. Personally, I would rather push the L1 2^24 parameters towards the smallest signature size, but I can live with rls128cs1.

 

Regarding the 2^16 options, personally I don’t think their size or performance benefit is significant enough to justify the parameter creep and the 2^16 limit monitoring impact.

 

As for 2^30, personally I am not aware of any use-cases where 2^24 will not be enough, but 2^30 will be.

 

Rgs,

 

 

From: pqc-...@list.nist.gov <pqc-...@list.nist.gov> On Behalf Of Quynh Dang
Sent: Wednesday, October 1, 2025 12:40 PM
To: Chris Fenner <cf...@google.com>
Cc: pqc-forum <pqc-...@list.nist.gov>; Joost Renes <joost...@nxp.com>; john.m...@ericsson.com <john.m...@ericsson.com>; paul.h...@icann.org <paul.h...@icann.org>; ja...@goertzen.ch <ja...@goertzen.ch>; homa...@gmail.com <homa...@gmail.com>; ssch...@google.com <ssch...@google.com>; u...@ll.mit.edu <u...@ll.mit.edu>; quynh...@nist.gov <quynh...@nist.gov>; goo...@arnepadmos.com <goo...@arnepadmos.com>
Subject: RE: [EXTERNAL] [Ext] [EXTERNAL] Re: [pqc-forum] NIST requests feedback on additional SLH-DSA(Sphincs+) parameter set(s) for standardization.

 
