Hash-Based Signatures for Bitcoin's Post-Quantum Future


Mikhail Kudinov

Dec 8, 2025, 3:47:49 PM
to Bitcoin Development Mailing List
Hi everyone,

We'd like to share our analysis of post-quantum options for Bitcoin, focusing specifically on hash-based schemes. The Bitcoin community has already discussed SPHINCS+ adoption in previous mailing list threads, and we also looked at this option. A detailed technical report exploring these schemes, parameter selections, security analysis, and implementation considerations is available at https://eprint.iacr.org/2025/2203.pdf. The report can also serve as a gentle introduction to hash-based schemes, covering recent optimization techniques. The scripts that support the report are available at https://github.com/BlockstreamResearch/SPHINCS-Parameters.
Below, we give a quick summary of our findings.

We find hash-based signatures to be a compelling post-quantum solution for several reasons. They rely solely on the security of hash functions (Bitcoin already depends on the collision resistance of SHA-256) and are conceptually simple. Moreover, these schemes have undergone extensive cryptanalysis during the NIST post-quantum standardization process, adding confidence in their robustness.

One of the biggest drawbacks is signature size: standard SPHINCS+ signatures are almost 8 KB. An important observation is that SPHINCS+ is designed to support up to 2^64 signatures, and we argue that this bound can be set lower for Bitcoin use-cases. Moreover, there are several optimizations of the standard SPHINCS+ scheme (such as WOTS+C, FORS+C, and PORS+FP) that can reduce the signature size even more.
For example, with these optimizations and a bound of 2^40 signatures we can get signatures of 4036 bytes. For 2^30 signatures we can achieve 3440 bytes, and for 2^20 signatures 3128 bytes, while keeping the signing time reasonable.

We should not forget that for Bitcoin, it is important that the size of the public key plus the size of the signature remains small. Hash-based schemes have one of the smallest sizes of public keys, which can be around 256 bits. For comparison, ML-DSA pk+sig size is at least 3732 bytes.

Verification cost per byte is comparable to current Schnorr signatures, alleviating concerns about blockchain validation overhead.

As for security targets, we argue that NIST Level 1 (128-bit security) provides sufficient protection. Quantum attacks require not just O(2^64) operations but approximately 2^78 Toffoli depth operations in practice, with limited parallelization benefits.

One of the key design decisions for Bitcoin is whether to rely exclusively on stateless schemes (where the secret key need not be updated for each signature) or whether stateful schemes could be viable. Stateful schemes introduce operational complexity in key management but can offer better performance.

We explored the possibilities of using hash-based schemes with Hierarchical Deterministic Wallets. Public (unhardened) child key derivation does not seem to be efficiently achievable, while hardened derivation is naturally possible for hash-based schemes.
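As a minimal illustration of the hardened case (the function name, HMAC-SHA256, and domain tag below are illustrative choices, not taken from the report):

    # Minimal illustrative sketch (not from the report): hardened child-seed
    # derivation for a hash-based scheme. Deriving a child requires the parent
    # *secret* seed, which is why only hardened derivation is natural here.
    import hmac, hashlib

    def derive_hardened_child_seed(parent_seed: bytes, index: int) -> bytes:
        data = b"hash-based/hardened" + index.to_bytes(4, "big")
        return hmac.new(parent_seed, data, hashlib.sha256).digest()

    # The child key pair is then generated from this seed with the scheme's
    # ordinary key-generation routine.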

If we look at multi/distributed/threshold-signatures, we find that current approaches either don't give much gain compared to plain usage of multiple signatures, or require a trusted dealer, which drastically limits the use-cases.

We welcome community feedback on this approach and hope to contribute to the broader discourse on ensuring Bitcoin's long-term security in the post-quantum era. In particular, we are interested in your thoughts on the following questions:
1) What are the concrete performance requirements across various hardware, including low-power devices?
2) Should multiple schemes with different signature limits be standardized?
3) Is there value in supporting stateful schemes alongside stateless ones?

Best regards,
Mikhail Kudinov and Jonas Nick
Blockstream Research

Greg Maxwell

Dec 8, 2025, 4:58:18 PM
to Mikhail Kudinov, Bitcoin Development Mailing List
On Mon, Dec 8, 2025 at 8:47 PM 'Mikhail Kudinov' via Bitcoin Development Mailing List <bitco...@googlegroups.com> wrote:
> We should not forget that for Bitcoin, it is important that the size of the public key plus the size of the signature remains small. Hash-based schemes have one of the smallest sizes of public keys, which can be around 256 bits. For comparison, ML-DSA pk+sig size is at least 3732 bytes.

No scheme has such a limitation, because any scheme can use a hash of the underlying primitive as the public key -- which Bitcoin has done since day one. The correct figure of merit is the size of the signature, pubkey, and a hash combined or -- if the pubkey is under 500 bits or so -- just the size of the signature plus pubkey.
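For concreteness, plugging in published parameter sizes (plus the optimized 2^40 figure from the opening post) gives roughly:

    # Figure of merit per the above: |sig| + |pk|, plus a 32-byte hash commitment
    # that any scheme can use on the output side. Byte counts are the published
    # standard sizes (and the opening post's optimized figure), shown for scale only.
    schnorr_bip340   = 64 + 32       # 96 bytes
    ml_dsa_44        = 2420 + 1312   # 3732 bytes
    slh_dsa_128s     = 7856 + 32     # 7888 bytes (the "almost 8 KB" standard set)
    sphincs_opt_2_40 = 4036 + 32     # 4068 bytes (opening post's optimized 2^40 variant)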
 
> Verification cost per byte is comparable to current Schnorr signatures, alleviating concerns about blockchain validation overhead.

Though hash based signatures don't really concern me much in validation costs, I disagree with the premise of this statement.  If the size was similar then I'd agree that cost per byte being similar was enough to make validation costs not a concern, but the size is some 40 times larger and 40x validation costs is certainly a concern unless the scheme is deployed without an effective increase in block capacity-- and without a capacity increase the utility of such large signatures is potentially pretty dubious.   Even if a proposal doesn't itself include a capacity increase one should be regarded as inevitable along with it,  particularly because just securing *your* coins against this attack won't do you any good if 95% of all other coins get stolen by it-- so a performance analysis should anticipate needing the capacity for all of the transaction flow to use the scheme, even if that isn't the case for the initial usage.

> One of the key design decisions for Bitcoin is whether to rely exclusively on stateless schemes (where the secret key need not be updated for each signature) or whether stateful schemes could be viable. Stateful schemes introduce operational complexity in key management but can offer better performance.

It's not an either/or, I believe. I think schemes with weakened stateless security could be improved to ~full repeated use security via statefulness (e.g. grinding the message to avoid revealing signatures that leak key material). There may be possibilities for other hybrids: an otherwise stateless wallet which is assumed to have visibility into its own confirmed transactions. It may be that a 'few time secure' scheme could be adequate when coupled with best effort statefulness (e.g. blockchain visibility) and a schnorr signature composed in series (which means the brittleness of the hash signature only matters if the schnorr signature is broken).

Statefulness is not a great assumption with how bitcoin private keys work, particularly for cold storage.   Especially since key loss is usually the greatest risk to coin possession and the best mechanism against key loss is duplicate copies separately stored.   Although correct usage of bitcoin results in keys being single use or nearly so, it's a security footgun to make a strong assumption.

> If we look at multi/distributed/threshold-signatures, we find that current approaches either don't give much gain compared to plain usage of multiple signatures, or require a trusted dealer, which drastically limits the use-cases.

There may be advantages there to using a threshold schnorr in series with a single PQ scheme,  in that case the security model is "a threshold of participants must agree OR a participant must agree and have a successful attack on the threshold schnorr signature".  This may well be a reasonable compromise considering the costs of multiple PQ keys-- particularly when the participants are known entities and not e.g. an anonymous channel counterparty.

> 1) What are the concrete performance requirements across various hardware, including low-power devices?

I don't think it matters much if signing is slow on low power devices -- e.g. taking seconds per input.  It would obviously matter to *some* users but those users could use higher power signing devices.  The minimum amount of dynamic ram needed for signing (even at low performance) is probably pretty important.

> 2) Should multiple schemes with different signature limits be standardized?
> 3) Is there value in supporting stateful schemes alongside stateless ones?

Depends on their relative costs.  Plain stateful (of the 'two signatures breaks your security' sort) is a very concerning footgun with bad side effects (e.g. can't even bump fees) but even that could be attractive if the size is much smaller.    Having a totally free configuration is quite bad for privacy, however, and of dubious value.   I think that just having two options, e.g. secure for 'few' and secure for 'many' (but no need for 2^128) with both supporting but not requiring statefulness as a best effort hail-mary protection against self-compromise might be interesting, but it would depend on their relative performance.  One possibility would be to just always have both alternatives available (at a cost of 32 bytes) and for the user to decide at signing time.  

 


conduition

Dec 9, 2025, 1:49:46 AM
to Bitcoin Development Mailing List
Great work Jonas and Mikhail, glad to see more eyes and ears surveying these schemes and their potential. Also a shameless plug for some of my prior work on related topics.

The post-quantum HD wallet derivation problem is one I've been thinking about a lot lately. Due to the lack of algebraic structure in SLH-DSA it's gonna be impossible to fully emulate BIP32 with that scheme alone. I'm personally hoping that we'll find a way to derive child pubkeys using lattices (ML-DSA) and/or isogenies (SQIsign), but I haven't heard of any solid proposals yet. Currently reading up on isogeny crypto as that sounds like a promising candidate.

If such a thing is possible, then we could derive BIP360 tap trees containing a static SLH-DSA pubkey, alongside dynamically derived keys for a more structured algebraic PQ scheme like ML-DSA, and finally a regular EC BIP340 pubkey derived by regular BIP32. The idea being one could distribute a 'post quantum xpub' containing a regular BIP32 key extended with PQ public keys. Wallets or 3rd parties could derive child addresses which contain tap trees with one leaf per signing scheme. Since SLH-DSA doesn't support unhardened derivation, the SLH-DSA public key would have to be used as-is in every child leaf, without any derivation (maybe with an extra pseudorandom nonce thrown in to avoid reusing the same tap leaf hash in different TXs). Think of defining: CDKPub_slh(spx_pubkey, chaincode, n) := (spx_pubkey, HMAC(chaincode, n))
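Spelled out as a sketch (HMAC-SHA256 and the 4-byte index encoding here are illustrative choices, not a spec):

    # Sketch of the CDKPub_slh idea above: the SLH-DSA pubkey is reused as-is and
    # only a per-child nonce is derived from the chaincode, to keep each child's
    # SLH-DSA tap leaf hash unique. HMAC-SHA256 / index encoding are illustrative.
    import hmac, hashlib

    def cdk_pub_slh(spx_pubkey: bytes, chaincode: bytes, n: int) -> tuple[bytes, bytes]:
        nonce = hmac.new(chaincode, n.to_bytes(4, "big"), hashlib.sha256).digest()
        return spx_pubkey, nonce  # nonce gets mixed into child n's SLH-DSA leaf script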

Regarding smaller signatures: I used to be a big fan of SPHINCS+C and SPHINCS-α. I asked Andreas Hulsing, the SPHINCS+ team lead, why these obviously better schemes weren't standardized instead. He said NIST shied away from them because they complicated the implementation for little material benefit. There were also "political" considerations, owing to the fact that they were proposed late in the standardization competition's timeline. Obviously bitcoin is a different ballgame entirely, but the point is we should optimize where it matters most. IMO that means smaller parameter set(s). Even without PoW compression, hypercubes, or other modern tricks to optimize SLH-DSA, we can make sigs much smaller by just tweaking constants, and we can do so without losing compatibility with the NIST-compliant algorithm. So that idea has a big +1 from me. NIST is in the process of standardizing smaller parameter sets, but they have much higher signing/keygen perf overhead. If we want smaller hash-based sigs, we should pick new parameter sets to standardize in Bitcoin, covering different use cases, and use them with the standardized FIPS-205 algorithm. Agreeing on which parameter sets will be hard though. See https://github.com/chrisfenner/slh-dsa-rls

I agree with Greg about the verification cost: you can't just consider cycles/byte, you have to consider the cost of verifying entire blocks full of these signatures. While my recent research showed that SLH-DSA signing and keygen can be very effectively parallelized, verification is much harder to parallelize. You have to parallelize generically across signatures in a block (which can also be done with ECC, or any sig-verify algo for that matter).

On statefulness: I once felt quite strongly that we should have stateful WOTS on-chain as an opcode - WOTS is pretty much the smallest hash-based signatures you can get. But talking with Ethan and Hunter has since convinced me that stateless sigs are the only way to go. There's just too many landmines and footguns to step on with schemes like WOTS, or even its big daddy XMSS, if you use them to sign non-deterministic data statefully. 

Finally, while everyone (including me) is really excited about hash-based signatures because we know and love and trust hash functions, in reality the performance, functionality, and sig-size tradeoffs will lead to 99% of people using schemes with new assumptions like ML-DSA for everyday usage. Hash based sigs will be the worst-case scenario fallback, in case the more efficient schemes like ML-DSA or SQIsign turn out to be cryptographically broken (a real possibility, the feds have standardized broken schemes before after all).

> I don't think it matters much if signing is slow on low power devices -- e.g. taking seconds per input. 

It's far worse than that. Here are some benchmarks run by Trezor on their Model T signing device: 75 seconds to create one SLH-DSA-SHA2-128s signature. RAM requirements are quite low for SLH-DSA compared to ML-DSA, which is nice. Apparently some of their newer devices have dedicated chips which can execute SHA256 much faster than the ARM M4, but I haven't seen any benchmarks on those yet. I think those kinds of hash accelerators, or FPGAs etc, will need to become standard for hardware wallets if they plan to use SLH-DSA signing (or keygen). I don't know about Ledger, because they apparently don't think quantum is a serious risk :/


regards,
conduition

Boris Nagaev

Dec 9, 2025, 3:20:25 AM
to Bitcoin Development Mailing List
Hi Mikhail, Jonas and all!

> If we look at multi/distributed/threshold-signatures, we find that current approaches either don't give much gain compared to plain usage of multiple signatures, or require a trusted dealer, which drastically limits the use-cases.

I think there's room to explore N/N multiparty computation (MPC) for hash-based signatures. In principle you can secret-share the seed and run MPC to (a) derive the pubkey and (b) do the per-signature WOTS/FORS work, so the chain sees a single normal hash-based signature even though it is the result of cosigning by all N parties. The output stays a single standard-size signature (e.g., a 2-of-2 compressed to one sig), so you save roughly a factor of N versus N separate signatures, but the cost is a heavy MPC protocol to derive the pubkey and to produce each signature. There's no linearity to leverage (unlike MuSig-style Schnorr), so generic MPC is heavy, but it could be interesting to quantify the overhead versus just collecting N independent signatures.
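To make the shape concrete, here is only the ideal functionality the MPC would compute (the real protocol never reconstructs the seed in the clear; the hash below merely stands in for the actual WOTS/FORS key derivation):

    # Ideal functionality only, NOT a secure protocol: each of the N parties holds
    # an XOR share of the seed; the MPC evaluates key/signature derivation on the
    # combined seed without any party learning it. SHA-256 here is a stand-in for
    # the real WOTS/FORS computations.
    import hashlib
    from functools import reduce

    def combine_shares(shares: list[bytes]) -> bytes:
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

    def ideal_pubkey(shares: list[bytes]) -> bytes:
        seed = combine_shares(shares)                 # only ever "inside" the MPC
        return hashlib.sha256(b"pk" + seed).digest()  # stand-in for real keygen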

As a small reference point, here's a two-party SHA-256 MPC demo I recently wrote (not PQ-safe, EC-based oblivious transfer, semi-honest): https://github.com/markkurossi/mpc/tree/master/sha2pc . The protocol moves about 700 KB of messages and completes in three rounds while privately computing SHA256(XOR(a, b)) for two 32-byte inputs. The two-party restriction, quantum-vulnerable OT, and semi-honest model could all be upgraded, but it shows the shape of the protocol.

With a malicious-secure upgrade and PQ OT, sha2pc would already be enough for plain Lamport signatures by repeating it 256x2 times. For WOTS-like signatures you'd need another circuit, but the same repo has tooling for arbitrary circuits, and WOTS is just a hash chain, so it is doable; circuit and message sizes should grow linearly with the WOTS chain depth.
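For reference, the plain Lamport construction being counted there (an illustrative sketch, not code from the repo above):

    # Plain one-time Lamport signature over a 256-bit digest, to make the "256x2"
    # count explicit: 2x256 secret preimages, 2x256 hashes for the pubkey, and
    # signing reveals one preimage per digest bit.
    import os, hashlib

    H = lambda x: hashlib.sha256(x).digest()

    def keygen():
        sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]  # 256 x 2 secrets
        pk = [(H(s0), H(s1)) for s0, s1 in sk]                       # 256 x 2 hashes
        return sk, pk

    def sign(sk, digest: bytes):
        bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
        return [sk[i][b] for i, b in enumerate(bits)]

    def verify(pk, digest: bytes, sig) -> bool:
        bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
        return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits))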

Curious to hear thoughts on whether N/N MPC with hash-based sigs is worth prototyping, and what overhead targets would make it compelling versus plain multisig.

Best,
Boris

Mikhail Kudinov

Dec 9, 2025, 5:50:23 PM
to Boris Nagaev, Bitcoin Development Mailing List

Dear Greg,

Thank you for your feedback, your points are important, and I appreciate the opportunity to continue the discussion and clarify a few aspects of your response.


On public-key and signature sizes:

My main point was that when we compare with other PQ alternatives (such as lattice-based schemes) we should take their public key sizes into account. For ML-DSA the public key is more than 1 KB.


On verification costs:

I agree that it is important to consider how the verification cost scales with block size. At the same time, I believe it is still important to highlight the ratio between signature size and verification cost. For certain parameter sets, this ratio is significantly more favorable than for Schnorr signatures; for some it can be more than 10 times better. For example, looking at the parameter sets in Table 1, we can achieve 4480-byte signatures (under the 2^40-signature limit) with this ratio being almost 9 times better. I would welcome further feedback here. Specifically, would it be reasonable to choose larger signatures if they offer lower verification costs?


On stateful vs. stateless security:

Regarding your comment, “I think schemes with weakened stateless security could be improved to ~full repeated-use security via statefulness (e.g., grinding the message to avoid revealing signatures that leak key material),” I did not fully grasp your argument. Could you please elaborate?


On combining threshold Schnorr with a PQ scheme:

You mentioned that “there may be advantages to using a threshold Schnorr in series with a single PQ scheme.” My current thinking is that such constructions could already be implemented at the scripting layer; in that sense, users could assemble them without additional opcodes (beyond the PQ signature opcode itself). While I see the potential benefits, I am also worried that such an approach risks introducing loosely defined security models, which can lead to vulnerabilities.


Best,

Mike




Mikhail Kudinov

Dec 9, 2025, 6:17:10 PM
to Boris Nagaev, Bitcoin Development Mailing List

Dear Conduition,

You did a really nice job! I was wondering, would it be hard to add the different modifications to your implementation?

As for lattice-based schemes and other assumptions, we also thought about investigating the possibilities there.

With this derivation technique you propose, am I understanding correctly that if the user signs with the hash-based scheme, then the user would reveal that the different pub keys are linked?

I think it is true that limiting the number of signatures is the main optimization we should look at. But if we use different parameter sets, don’t we already lose compatibility with the standardized schemes? And if we already deviate from the standards, why not add the modifications that can save us an extra couple of hundred bytes? As for implementation complexity, this is of course subjective, but I think these modifications are pretty straightforward.

Best,

Mike




Mikhail Kudinov

Dec 9, 2025, 7:10:36 PM
to Boris Nagaev, Bitcoin Development Mailing List

Dear Boris,

We also explored general MPC approaches. We discuss this in Section 15.3, and it appears they are not suitable. We cite an estimate indicating that generating a SPHINCS+ signature via a general MPC would take around 85 minutes. Even if we scale down from SPHINCS+ to a much smaller number of signatures, the approach still does not seem efficient enough. For a single WOTS instance it might be feasible, but then the question becomes whether adopting a simple WOTS scheme is desirable at all. As mentioned above, stateful schemes already introduce significant complexity, and one-time signatures are even more restrictive.

Best,

Mike





conduition

Dec 9, 2025, 7:15:57 PM
to Bitcoin Development Mailing List
Hi Mike,

> You did a really nice job! I was wondering, would it be hard to add the different modifications to your implementation?

Thanks! It shouldn't be too hard. I already have some Rust code for SPHINCS-alpha and SPHINCS+C on my experimental testing implementation: 

For SLHVK, I'd add one extra shader at the 2nd-to-last signing stage, which would execute the WOTS message compression.

But still, the fruits of such optimizations are dwarfed by the benefits of parallelism and parameter tuning.

> With this derivation technique you propose, am I understanding correctly that if the user signs with the hash-based scheme, then the user would reveal that the different pub keys are linked?

Exactly right, since every child address would contain the same SLH-DSA pubkey. To maximize privacy with SLH-DSA, you'd need to use hardened derivation to derive different child SLH-DSA keys for each address.

In everyday use this would not be a problem though, because most people will use the more efficient schemes: ML-DSA, SQIsign, or for the time being, Schnorr BIP340. You'd want to throw a pseudorandom nonce into the SLH-DSA tap leaf script to make sure its tap leaf hash is always unique for every unhardened child address. If you do that, nobody can link them together unless you use the key to sign a TX. If you do reveal the key, then yes, you're doxxing common ownership for any coins you spend under that key. However personally I expect that'd only be necessary in an emergency where ML-DSA is broken and is no longer safe to spend with. 

> I think it is true that limiting the number of signatures is the main optimization we should look at. But if we use different parameter sets, don’t we already lose compatibility with the standardized schemes? And if we already deviate from the standards, why not add the modifications that can save us an extra couple of hundred bytes? As for implementation complexity, this is of course subjective, but I think these modifications are pretty straightforward.

There are two different levels of "compatibility": The algorithms, and the parameters. If we change both, then we're really not using SLH-DSA at all; we're using some variant of SPHINCS+. We lose compatibility with all current and future hardware and software built for SLH-DSA.

If we change only the parameters of FIPS-205, but keep the same algorithms, then sure, some software will not be compatible, but changing a few constants is much easier than rewriting algorithms. Most SLH-DSA implementations are written to support different parameter sets, so extending code to support Bitcoin's parameter set(s) would be easy.  

It'd also be attractive to do so. If Bitcoin's SLH-DSA implementation adopts a new set of parameters, then existing/future software and hardware would have good reason to integrate with Bitcoin, and little reason not to. They get compatibility almost for free, simply by adding new sets of constants into their code - no need for forking, or dedicated implementations. HSMs set up for SLH-DSA firmware signing could be repurposed as high-security bitcoin signing devices; open source authors could broaden the impact of their libraries; all by changing less than 10 lines of code. I would argue that's way more valuable than saving 4% signature size on an algorithm we hope we never need.
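To illustrate, a parameter set is really just a row of constants; the first entry below is the published FIPS 205 SLH-DSA-SHA2-128s set, and the commented-out Bitcoin row is a placeholder, not a proposal:

    # FIPS 205 parameter sets are essentially rows of constants like this.
    # The SHA2-128s row is the published standard set; a Bitcoin-specific row
    # would come from an analysis like the report's Table 1, not be invented here.
    SLH_DSA_PARAM_SETS = {
        #                      n   h  d  h'   a   k  lg_w   m
        "SLH-DSA-SHA2-128s": (16, 63, 7,  9, 12, 14,  4, 30),
        # "SLH-DSA-BTC-??":  (.., .., .., .., .., .., .., ..),  # hypothetical entry
    }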

regards,
conduition

Olaoluwa Osuntokun

Dec 9, 2025, 7:46:11 PM
to conduition, Bitcoin Development Mailing List
Hi y'all,

conduition wrote:
> I'm personally hoping that we'll find a way to derive child pubkeys using
> lattices (ML-DSA) and/or isogenies (SQIsign), but I haven't heard of any
> solid proposals yet.

This paper [1] proposes a variant of Dilithium (dubbed DilithiumRK, RK for
'randomized keys' presumably) that enables BIP-32-like functionality. It
achieves this by getting rid of a public key compression step in the OG
algorithm that results in a loss of homomorphic properties. There are
algorithmic changes required (e.g., a new public network param is needed
which is used for seed/key generation), so it isn't vanilla FIPS 204.

Aside from the deviation from the standard, the scheme introduces some
additional trade offs:

  * Signatures are larger, as they carry a new error hint

  * Signing is 2.7x slower

  * Verification is 1.75x slower

There's also a published BIP-32-like scheme for Falcon signatures [2]. I'm
less familiar with the details here, but the signature size blows up to
~24KB compared to ~666 bytes for normal Falcon signatures.

-- Laolu

[1]: https://cic.iacr.org/p/2/3/3

[2]: https://link.springer.com/article/10.1186/s42400-024-00216-w



Olaoluwa Osuntokun

Dec 9, 2025, 7:56:09 PM
to Mikhail Kudinov, Boris Nagaev, Bitcoin Development Mailing List

Hi y'all,

Mike wrote:
> But if we use different parameter sets, don’t we already lose
> compatibility with the standardized schemes?

IIUC, these smaller SPHINCS+ parameters are under active consideration
by NIST [1].

On slide 3 of a recent talk [2] at the 6th PQC Standardization Conference
[3], Quynh Dang states:

> We plan to standardize 2^24-signature limit rls128cs1, rls192cs1, rls256cs1

But then, later in the same slide:
> We don’t plan to standardize them now:
> Lower limits come with higher security risks when misuse happens
> Minimize the number of parameter sets

In the linked forum post the OP advocates for a swifter standardization
process for the new params:

> We believe all options should be standardized as soon as possible. To meet
> the 2030–2035 targets for post-quantum–only deployments, development must
> be finalized very soon. Roots of trust typically have lifetimes exceeding
> a decade, and any further delay could make it impossible to adopt these
> new options.

So it appears they do plan to standardize these additional parameter sets,
but it won't be done "soon"?

-- Laolu

[1]: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/x-eaz9be6_U

[2]: https://csrc.nist.gov/csrc/media/presentations/2025/sphincs-smaller-parameter-sets/sphincs-dang_2.2.pdf

[3]: https://csrc.nist.gov/Events/2025/6th-pqc-standardization-conference

Jonas Nick

Dec 10, 2025, 11:19:44 AM
to bitco...@googlegroups.com
Thanks for all the feedback.

Trying to remain consistent with widely deployed, standardized variants of
SLH-DSA is a reasonable design consideration. But in that context it seems
noteworthy that using optimized schemes, instead of just tweaking parameters,
leads to way more than just a 4% reduction in signature size. The WOTS+C +
PORS+FP variant is 16% to 18% smaller than vanilla, size-optimized SPHINCS+ (for
2^40 signatures max) according to our scripts [0].

Another consideration is that in the scenario you [conduition] mention where
Bitcoin would adopt a lattice-based signature scheme and a hash-based signature
scheme, the lattice-based scheme may not be ML-DSA. Maximizing the functionality
benefits of lattice-based sigs may require a custom signature scheme that
supports public key derivation, multi/threshold signatures, aggregate
signatures, silent payments, etc. If the lattice-based signature scheme is
custom, there is little reason why the hash-based signature scheme should not be
custom as well.

More generally, one of my main motivations for working on this project was
whether there exist variants of hash-based signature schemes that are more
suitable for the "advanced" constructions we care about (HD wallets,
multi-signatures, ...). After doing this project with Mike (who has done
research on hash-based signatures for quite a few years), it seems like the
answer is basically no. We discuss some of the approaches in the paper, but it's
of course possible we're missing something. However, in that sense, the paper is
also a negative result.

I cannot follow the conclusion that 99% of people would use ML-DSA. Signature
size is pretty much the same as for parameter-optimized SPHINCS+. Without
lattice-based signature aggregation or silent payments, it seems like the main
benefit is verification time. Since you have probably the best collection of
numbers for performance of SLH-DSA, I'd be interested in the performance numbers
of ML-DSA you use for comparison with SLH-DSA.


[0] https://github.com/BlockstreamResearch/SPHINCS-Parameters/blob/main/costs.sage