SPHINCS+ Smaller Parameter Sets


John Mattsson

Oct 13, 2025, 8:50:27 AM
to 'Edoardo Persichetti' via pqc-forum

Hi,

At the NIST 6th PQC Standardization Conference, Quynh Dang from NIST presented NIST’s current plan for “SPHINCS+ Smaller Parameter Sets” and asked for comments:

https://csrc.nist.gov/csrc/media/presentations/2025/sphincs-smaller-parameter-sets/sphincs-dang_2.2.pdf


We discussed this internally at Ericsson and have the following comments:

>We need more input from the community about the demand of 2^30 and 2^16 limits. Let us know if the 2^24 options above would have unacceptable performance for your systems.

We do not believe the 2^24 options are suitable from a security standpoint in automated software signing scenarios using software-based keys. In such environments, software bugs could unintentionally trigger a large number of signing operations, exhausting the 2^24 limit (although we have not done detailed calculations for rls128cs1, rls192cs1, or rls256cs1). As we stated in our earlier comments [1], we think 2^30 would be a safer choice in that type of automated setting.

For manual certificate signing, by contrast, limits such as 2^16 or even 2^10 would be more than sufficient.

[1] https://emanjon.github.io/NIST-comments/2025%20SLH-DSA%20parameters.pdf

>We don’t plan to standardize them now

We believe all options should be standardized as soon as possible. To meet the 2030–2035 targets for post-quantum–only deployments, development must be finalized very soon. Roots of trust typically have lifetimes exceeding a decade, and any further delay could make it impossible to adopt these new options.

Cheers,
John

Chris Fenner

Oct 13, 2025, 12:56:44 PM
to pqc-forum, John Mattsson
Heya John,

What sort of hashing performance are you modeling for the runaway signer in your automated software-signing environment? E.g., 1M short hashes/second, as in https://eprint.iacr.org/2024/018.pdf?

Thanks
Chris

John Mattsson

Oct 13, 2025, 5:08:38 PM
to Chris Fenner, pqc-forum

I don’t recall the exact numbers we used, but 1 MH/s seems low. From my understanding, already today a high-end CPU or GPU, such as the AMD EPYC 9965 or the Nvidia RTX 5090, can reach billions of short SHA-256 hashes per second (GH/s). And CPUs and GPUs in 20 years will be even faster.
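For a rough sense of scale, here is a back-of-envelope sketch of how long a runaway signer would need to exhaust a 2^24 budget at different hash rates (the hashes-per-signature figure is an illustrative placeholder I am assuming, not a value from any specific parameter set):

```python
# Back-of-envelope: time for a runaway signer to exhaust a 2^24
# signature budget at various short-hash throughputs. The
# hashes-per-signature figure is an illustrative placeholder only.

HASHES_PER_SIG = 1e9  # placeholder order of magnitude, not a spec value

def days_to_exhaust(limit: int, hash_rate: float) -> float:
    """Days to produce `limit` signatures at `hash_rate` hashes/second."""
    return limit * HASHES_PER_SIG / hash_rate / 86400

for rate, label in [(1e6, "1 MH/s"), (1e9, "1 GH/s"), (10e9, "10 GH/s")]:
    print(f"{label:>8}: {days_to_exhaust(2**24, rate):,.0f} days")
```

At GH/s-class rates the budget lasts months rather than the centuries implied by 1 MH/s, which is the gap being discussed here.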

Cheers,
John

 


Sophie Schmieg

Oct 13, 2025, 5:27:27 PM
to John Mattsson, Chris Fenner, pqc-forum
I think the benchmark here should not be general-purpose computers, but the vastly less powerful hardware security modules that would actually house these keys. Those are usually not clocked anywhere near the frequency of a modern CPU, and usually have only a single core.



--
Sophie Schmieg | Information Security Engineer | ISE Crypto | ssch...@google.com

John Mattsson

Oct 14, 2025, 1:09:19 AM
to Sophie Schmieg, Chris Fenner, pqc-forum

Using HSMs for all signing operations would be ideal, but I don’t think that’s always the case in enterprise environments. One possible approach could be to restrict the smaller SPHINCS+ parameter sets to use only within HSMs or other rate-limited environments. I agree that a bound of 2^24 seems reasonable for today’s HSMs, though I do not know what their performance will look like 20 years from now.

 


Chris Fenner

Oct 14, 2025, 12:54:16 PM
to John Mattsson, Sophie Schmieg, pqc-forum
Personally, I would not recommend holding any reduced SLH-DSA parameter set to the standard of being resilient to runaway signing on a GPU.

If I'm reading these Hashcat benchmarks correctly [1] [2], an NVIDIA H100 GPU (not even NVIDIA's latest ML GPU product anymore) can do something like 10 billion SHA256 operations per second. You can buy a box called DGX H100 with up to 8 of them.

rls128cs1 requires 1.45B hashes to produce a signature, with a 4.8x speedup if you can cache the height-22 OTS tree. It has 128 bits of security out to 2^24 signatures, and 112 out to 2^27.25 signatures.

That would mean a single DGX H100 would take about:
  • 3.5 days to drop below 128 bits of security for rls128cs1 (18 hours if the top level tree can be cached)
  • 34 days to drop below 112 bits of security for rls128cs1 (7 days if the top level tree can be cached)
Comparing it to G-7, a 2^30 parameter set from Fluhrer & Dang, which requires 85M hashes to produce a signature with around an 8x speedup if you can cache the height-32 OTS hypertree, a DGX H100 would take about:
  • 13 days to drop below 128 bits of security for G-7 (39 hours if the top level tree can be cached)
  • 10 months to drop below 112 bits of security for G-7 (36 days if the top level tree can be cached)
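The arithmetic above can be reproduced with a short sketch (inputs are the figures already quoted in this message: ~10 GH/s per H100, 8 GPUs per DGX box, 1.45B hashes/signature for rls128cs1, and 85M for G-7):

```python
# Reproduces the timeline arithmetic above: a DGX H100 box with
# 8 GPUs at ~10 GH/s (short SHA-256) each, 1.45B hashes/signature
# for rls128cs1, 85M hashes/signature for the G-7 2^30 set.

BOX_RATE = 8 * 10e9  # hashes/second for one 8-GPU DGX H100 box

def days_to_sign(num_sigs: float, hashes_per_sig: float,
                 cache_speedup: float = 1.0) -> float:
    """Days for one box to produce num_sigs signatures."""
    return num_sigs * hashes_per_sig / (BOX_RATE * cache_speedup) / 86400

# rls128cs1: below 128 bits past 2^24 sigs, below 112 past 2^27.25
print(f"rls128cs1 to 2^24:    {days_to_sign(2**24, 1.45e9):.1f} days")
print(f"  cached (4.8x):      {days_to_sign(2**24, 1.45e9, 4.8) * 24:.0f} hours")
print(f"rls128cs1 to 2^27.25: {days_to_sign(2**27.25, 1.45e9):.0f} days")
# G-7: below 128 bits past 2^30 sigs
print(f"G-7 to 2^30:          {days_to_sign(2**30, 85e6):.0f} days")
print(f"  cached (8x):        {days_to_sign(2**30, 85e6, 8) * 24:.0f} hours")
```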
I would recommend comparing the reduced-signature-count parameter sets of SLH-DSA (be it 2^16, 2^24, 2^30, whatever) to the stateful signatures (LMS, XMSS) when discussing resilience to misuse. A dedicated enterprise can absolutely misuse any of these things (trivially, in the case of LMS and XMSS).

I appreciate the needle NIST is trying to thread here, which is that the parameters should be resilient to reasonable mistakes from someone who is at least trying to do the right thing (e.g., using HSMs instead of GPUs for their sensitive key operations, but doing it continuously in all their HSMs for a little while before they realize). But if the threat model has to expand to include people who are willing to use high-end GPUs or CPUs to overuse the keys, I think we'll be left with something far less useful.

Thanks
Chris

Falko Strenzke

Oct 15, 2025, 1:34:44 AM
to pqc-...@list.nist.gov
On 14.10.25 at 18:53, 'Chris Fenner' via pqc-forum wrote:
Personally, I would not recommend that any reduced SLH-DSA parameter set hold up to being resilient to runaway signing on a GPU.

I fully agree. In my view, count-limited SLH-DSA should function as an intermediate between the stateful hash-based schemes, with their requirement to be used in an HSM, and SLH-DSA, with no specific requirements on key management. How the signature count limit is enforced is something that will have to be solved by higher-level standards and requirements from regulatory authorities.

Falko

--

MTG AG
Dr. Falko Strenzke

Phone: +49 6151 8000 24
E-Mail: falko.s...@mtg.de
Web: mtg.de



John Mattsson

Oct 15, 2025, 6:40:10 AM
to pqc-forum

 

On 14.10.25 at 18:53, 'Chris Fenner' via pqc-forum wrote:

>who is at least trying to do the right thing (e.g., using HSMs instead of GPUs for their sensitive key operations, but doing it continuously in all their HSMs for a little while before they realize).


The design still needs to account for HSMs of the future, those that will exist decades from now, not just the ones available today. But I think most hardware and cloud HSMs have policy-based rate limiting?

 

With policy-based rate limiting, 2^24 signatures would be enough, I think.
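As an illustration of what such a policy could look like in principle, here is a toy sketch of a count- and rate-limited signing wrapper (the interface is hypothetical; real HSMs enforce such limits in device policy, not in application code):

```python
import time

# Toy sketch of an HSM-style signing policy: a hard lifetime signature
# cap (e.g. 2^24) combined with a rate limit. Hypothetical interface;
# real HSMs enforce these limits in device policy, not application code.

class CountLimitedSigner:
    def __init__(self, sign_fn, max_sigs: int, max_per_sec: float):
        self._sign = sign_fn                # underlying signing primitive
        self._remaining = max_sigs          # lifetime signature budget
        self._min_interval = 1.0 / max_per_sec
        self._last = float("-inf")

    def sign(self, msg: bytes) -> bytes:
        if self._remaining <= 0:
            raise RuntimeError("lifetime signature limit exhausted")
        now = time.monotonic()
        if now - self._last < self._min_interval:
            raise RuntimeError("rate limit exceeded")
        self._last = now
        self._remaining -= 1
        return self._sign(msg)
```

A runaway caller then fails fast at the policy boundary instead of silently burning through the key's lifetime budget.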
