Trivial QC signatures with clean upgrade path

Matt Corallo

Dec 15, 2024, 4:57:03 PM
to Bitcoin Development Mailing List
There have been a few rough ideas for QC robustness in the signature scheme over Bitcoin transactions
over the years, but many of them come with fairly major drawbacks.

First, some base assumptions:

(a) QCs that can break EC will take a while (probably closer to a decade or two than a few years).
This lines up with NSA and other recommendations. We have time to upgrade, but we might consider
having an option today for wallets to get QC security later.
(b) It's entirely possible that fundamental scaling constraints will emerge and QCs that break EC
simply won't ever be a reality. We might not want to bet on this, but it's possible.
(c) We'll get some reasonable warning before QCs are there - QC development requires immense
resources, so much so that only a few organizations in the world can afford to hire the talent
required and fund the lab. This type of development has led, and will likely continue to lead, to
public announcements as progress continues, so we'll have a few years' warning as QCs get closer.
(d) post-QC security assumptions (like lattices and, obviously, supersingular elliptic curve
isogenies) are insufficiently vetted to secure coins today, and are bad candidates for inclusion in
Bitcoin's consensus given the likelihood of further cryptanalytic breaks. This implies the only
candidates for post-QC signature security in Bitcoin's consensus today are hash-based signatures
(basically SPHINCS/SPHINCS+).
(e) it's not worth waiting on OP_CAT and the other more general script opcode additions for this, as
those seem stuck in bikeshed hell, not to mention that questions around MEVil and Bitcoin's future
abound. Further, doing this via a dedicated opcode simplifies wallet adoption, which is likely to
struggle already given the additional workload for wallet developers for no immediate user-facing
features.


Given these assumptions, it seems ill-advised for wallets today to start locking funds up in a way
where they need to pay the on-chain footprint cost to get post-QC security for their transactions
*today*, but given upgrade cycles in Bitcoin it also seems ill-advised to not have some option for
wallets to have "emergency" paths.

Luckily, taproot provides a great way to build such a scheme! Because taproot script-path spends are
strongly bound (the taproot tweak t commits to both the internal key and the script-path merkle
root), a future QC could determine the associated private key, but it cannot forge an alternative
script-path merkle root.
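
To make the binding concrete, here is a small sketch of the BIP 341 hash arithmetic in Python (the
leaf script bytes and internal key are random stand-ins; no OP_SPHINCS opcode exists today):

import hashlib, os

def tagged_hash(tag: str, msg: bytes) -> bytes:
    """BIP340/341 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)."""
    t = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(t + t + msg).digest()

# Hypothetical PQ fallback leaf, e.g. "<pq_pubkey> OP_SPHINCS" -- the opcode
# and encoding are placeholders for whatever gets standardized.
pq_leaf_script = os.urandom(33)  # stand-in bytes for the leaf script
leaf_hash = tagged_hash("TapLeaf",
                        bytes([0xc0, len(pq_leaf_script)]) + pq_leaf_script)

# With a single leaf, the script-path merkle root is just the leaf hash.
merkle_root = leaf_hash
internal_key_x = os.urandom(32)  # stand-in for the x-only internal key P

# The tweak t commits to BOTH the internal key and the merkle root, so a QC
# that recovers the output key's private key still cannot substitute a
# different script tree without inverting SHA256.
t = tagged_hash("TapTweak", internal_key_x + merkle_root)
# Output key: Q = P + t*G (the EC tweak step is omitted here).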

This provides a compelling hook for post-QC security - with the simple addition of an OP_SPHINCS (or
equivalent post-QC non-one-time-use (i.e. not Lamport/Winternitz) signature verification opcode,
functioning in much the same way OP_CHECKSIG works today), wallets simply need to construct their
taproot outputs to always contain a script-path alternative spending condition. When QCs are
becoming a reality, key-path taproot spends could be disabled via soft-fork, forcing spends to be
done using the QC-secure path.

This scheme obviously has the major drawback of confiscating non-upgraded funds once QCs exist,
but:

(a) we could instead require explicit opt-in for this scheme. This has the drawback of yet another
on-chain fingerprint and would require a new scriptPubKey format (though it could keep the existing
bech32m address format, which hopefully most wallets support today without any code changes). Of
course if we do, substantial quantities of Bitcoin which are unlikely to ever be spent could lead to
a supply shock, severely damaging Bitcoin's utility in other ways;
(b) alternatively, we could allow key-path spends for wallets which prove the script-path is a NUMS
point (via some new keypath+proof spend variant). I doubt many wallets today bother committing to a
NUMS point for their taproot output pubkeys, so this would break existing wallets, but it would
allow for an opt-out scheme.

This scheme has the incredibly nice property of not bloating existing use-cases nearly at all (just
one extra taproot script-path branch, but that's not a huge deal generally).

There are a few things to bikeshed on here, though - first, whether to require opt-in or provide an
opt-out, and second, whether to also fail any script-paths that hit an ECDSA signature validation
(probably yes?).

I assume this has been written up elsewhere but I couldn't find it. Most of this is due to
not_nothingmuch, I'm just writing it up here and taking credit for it.

This doesn't address the questions around PoW in a post-QC world, of course, but that likely isn't
something that can be answered until we see more practical limitations of QCs (e.g. what is the
minimal latency of a QC gate? If it's particularly low, can we simply complexify Bitcoin's PoW hash
function in order to delay QC results far past when traditional hardware is able to mine a block?)

Matt

Luke Dashjr

Dec 15, 2024, 7:01:55 PM
to bitco...@googlegroups.com
One thing to add: the post-QC script path does not require a softfork to
commit to, as long as it is well-defined. So wallets could begin
implementing this fallback immediately, without waiting for _any_
softfork activation, as soon as the spec is final. They _would_ need to
guard the post-QC script as if it were itself a private key, which could
be an issue for hardware wallets - but I suspect there's probably a way
around that too...

Weikeng Chen

Dec 15, 2024, 8:33:28 PM
to Bitcoin Development Mailing List
I actually think this is a good reason to enable OP_CAT, because its ability to do general-purpose covenants allows different parties to experiment with their own PQ signature algorithms before Bitcoin Core settles on one of them (which I believe would take longer).
OP_CTV does not enable this; what's needed is the full transaction hash and the ability to reconstruct it.

If we think we will be able to add QC signatures in 3 years, then we don't need to do that.
But if we don't think it is easy to settle on one QC signature scheme, then it is better to let everyone make their own decisions on PQ solutions.

It is okay to start with a less efficient but provably post-quantum algorithm - for example, the Winternitz signatures used in BitVM.
With OP_CAT, the public key can be reduced to a single 32-byte hash. The signature would still be about 1 KB, which is not too different from other PQ proposals.
Verifying a Winternitz signature costs about 4 KB of Bitcoin script. A major limitation of Winternitz signatures is that they are one-time, and therefore the keys need to be protected very carefully.
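
For readers unfamiliar with the construction, here is a minimal Winternitz sketch in Python (toy
parameters, SHA256 chains, w=16; illustration only - a real deployment would use a vetted parameter
set and per-chain domain separation):

import hashlib, os

N_CHUNKS, CS_CHUNKS = 64, 3               # 64 message nibbles + 3 checksum nibbles
CHAINS, W_MAX = N_CHUNKS + CS_CHUNKS, 15  # 67 chains of up to 15 hash steps

def chain(x: bytes, steps: int) -> bytes:
    for _ in range(steps):
        x = hashlib.sha256(x).digest()
    return x

def digits(msg: bytes) -> list[int]:
    d = []
    for byte in hashlib.sha256(msg).digest():
        d += [byte >> 4, byte & 0x0F]          # 64 base-16 digits
    csum = sum(W_MAX - x for x in d)           # 0..960, fits in 3 nibbles
    return d + [(csum >> 8) & 0x0F, (csum >> 4) & 0x0F, csum & 0x0F]

sk = [os.urandom(32) for _ in range(CHAINS)]
# The full public key is compressed to one 32-byte hash, as described above.
# (67 chains x 32 bytes ~ 2 kB of signature here; real parameter sets trade
# size against hashing cost.)
pk = hashlib.sha256(b"".join(chain(s, W_MAX) for s in sk)).digest()

def sign(msg: bytes) -> list[bytes]:
    return [chain(s, d) for s, d in zip(sk, digits(msg))]

def verify(msg: bytes, sig: list[bytes], pk: bytes) -> bool:
    tops = b"".join(chain(s, W_MAX - d) for s, d in zip(sig, digits(msg)))
    return hashlib.sha256(tops).digest() == pk

sig = sign(b"spend tx digest")                 # ONE-TIME: never reuse sk
assert verify(b"spend tx digest", sig, pk)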

Although this is still expensive and would be better handled by a native opcode, at least MicroStrategy, other institutions, and many individuals could move their "long-term" Bitcoin wallets into PQ ones, providing enough time for Bitcoin Core to decide on a post-quantum algorithm, ideally once one of them gets mainstream adoption (e.g., replaces ECDSA and RSA in web browsers).

Nevertheless, the major issue right now with PQ is that only P2WSH can be made "post-quantum" while P2TR cannot. It may be necessary to have a new P2TR version where the key path is removed (script-only) or replaced with a PQ signature.

Matt Corallo

Dec 15, 2024, 8:53:13 PM
to Weikeng Chen, Bitcoin Development Mailing List
Please see the assumptions list in the OP:

> (e) it's not worth waiting on OP_CAT and the other more general script opcode additions for this,
as those seem stuck in bikeshed hell, not to mention that questions around MEVil and Bitcoin's future
abound. Further, doing this via a dedicated opcode simplifies wallet adoption, which is likely to
struggle already given the additional workload for wallet developers for no immediate user-facing
features.

As Luke notes, wallets should probably just start implementing this today against a standard
SPHINCS+ implementation. By the time they're ready to ship someone can pick a few constants for the
"standard" and we won't have to discuss it further until/unless we get a QC.

Anthony Towns

Dec 16, 2024, 7:35:47 AM
to Matt Corallo, Bitcoin Development Mailing List
On Sun, Dec 15, 2024 at 04:42:59PM -0500, Matt Corallo wrote:
> This provides a compelling hook for post-QC security - with the simple
> addition of an OP_SPHINCS (or equivalent post-QC non-one-time-use (i.e. not
> Lamport/Winternitz) signature verification opcode, functioning in much the
same way OP_CHECKSIG works today), wallets simply need to construct their
> taproot outputs to always contain a script-path alternative spending
> condition. When QCs are becoming a reality, key-path taproot spends could be
> disabled via soft-fork, forcing spends to be done using the QC-secure path.

Some downsides of this approach:

- "OP_SPHINCS" signatures would be very large, at 8kB to 50kB. That
reduces inputs spent per block to a maximum of between 500 and 80,
given the existing constraints on witness data. Compared to bitcoin
blocks today, as I write, tx cf6391ca [0] is targetting the next block
and spends over 600 inputs on its own, while taking up only about 4%
of a block, so this seems like a big limitation. Probably better to
either pick something with much smaller signatures (which probably
means risky cryptographic assumptions, or single-use-pubkeys), or
to increase the block size in one way or another, eg as cryptoquick
proposes [1].

[0] cf6391ca2f3c361b666937fe7ae3e283850c9b81682755b7f5ab64bfd4c9503a
[1] https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki

- There's a fair bit of bikeshedding you could do about OP_SPHINCS,
including choosing different parameters for SPHINCS+, different
encoding of pubkeys, different "sighash" selectors for what is to
be signed, and different PQ schemes entirely. Without real quantum
computers to optimise against, many of those variables probably can't
be chosen objectively.

- Adding in secret OP_SPHINCS spend paths prior to an OP_SPHINCS
consensus change being active (or at least locked-in) seems very risky:
- it provides a way for insiders to cause you to lose all your
funds (prior to activation, selling your SPHINCS pubkey to a miner
allows the miner to claim all the funds), with little ability to
do a k-of-n multisig-like approach to prevent a single bad actor
from causing problems
- if the parameters that are actually activated are different to
what you assumed, then your script path might be unspendable
anyway; if different groups are proposing different parameters,
and only one gets activated, their funds are accessible while
everyone else's isn't

- Disabling key path taproot spends via soft-fork is extremely
confiscatory -- for the consensus cleanup, we worry about even the
possibility of destroying funds due to transaction patterns never
seen on the network; here, shutting down key path spends would be
knowingly destroying an enormous range of utxos.

- If you're avoiding the confiscatory approach by adding a hard-fork
in order to make keypath (and potentially ECDSA) funds accessible
to their owners via some post-quantum mechanism, then there's little
benefit to having an explicit script path method in advance.

- This approach probably isn't compatible with smart contracts,
particularly if pre-signed transactions are involved. Probably the
only way to deal with that is to hope you will have enough warning
to say "in X months, all your smart contracts are broken, so shut
them down now". There probably isn't any feasible way to do anything
better than that, though.
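
For concreteness, the input-count bound in the first item works out as
follows (assuming the 4M weight-unit block limit with witness bytes
costing 1 WU each, and ignoring all non-witness overhead):

BLOCK_WEIGHT_LIMIT = 4_000_000  # consensus block weight limit (WU)

# Witness bytes cost 1 WU each; ignore non-witness input overhead for a
# rough upper bound on inputs per block.
for sig_size in (8_000, 50_000):            # the 8 kB to 50 kB SPHINCS+ range
    max_inputs = BLOCK_WEIGHT_LIMIT // sig_size
    print(f"{sig_size // 1000} kB signatures -> at most ~{max_inputs} inputs/block")
# -> 8 kB: ~500 inputs; 50 kB: ~80 inputs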

> (b) alternatively, we could allow key-path spends for wallets which prove
> the script-path is a NUMS point (via some new keypath+proof spend variant).
> I doubt many wallets today bother committing to a NUMS point for their
> taproot output pubkeys, so this would break existing wallets, but it would
> allow for an opt-out scheme.

I don't think this paragraph makes sense? In a post-quantum world,
a legitimate key-path spend could likely be replaced by an attacker
while it was sitting in the mempool, same as for a tx spending a p2pkh
or p2wpkh output. Also, a script-path isn't a point at all, so having
it be a NUMS point doesn't make much sense. Having it be unspendable
can make sense, and is already recommended in BIP 341 (search for
"unspendable"). Conditional key-path spends for taproot outputs is
probably most sensibly done as a hard fork; though it could be done as
a soft fork if the "condition" data was added somewhere other than in
the witness.

What about a different way of allowing wallets to pre-commit to a
post-quantum pubkey? eg, rather than generating a pubkey P directly from
an xprv/xpub and committing to a script path with their post-quantum
pubkey Q; wallets could generate the pubkey as R = P+H(P,Q)*G. At that
point, a hard-fork could be made to allow "R CHECKSIG" (or key path spends
where R is the sPK) to be satisfied via "<Qsig> <Q> <P>", validated
by checking that P+H(P,Q)*G=R, and that Qsig is a valid post-quantum
signature based on Q.
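
A minimal sketch of that commitment (toy affine secp256k1 arithmetic in
Python, not constant-time; the PQ pubkey is a random 32-byte stand-in and
the PQ signature check itself is omitted):

import hashlib, os

# Toy affine secp256k1 arithmetic -- illustration only.
P_FIELD = 2**256 - 2**32 - 977
N_ORDER = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def ec_add(a, b):
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P_FIELD == 0:
        return None                       # P + (-P) = point at infinity
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P_FIELD)
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P_FIELD)
    x = (lam * lam - a[0] - b[0]) % P_FIELD
    return (x, (lam * (a[0] - x) - a[1]) % P_FIELD)

def ec_mul(k, pt):
    out = None
    while k:
        if k & 1:
            out = ec_add(out, pt)
        pt, k = ec_add(pt, pt), k >> 1
    return out

def h_commit(P, Q_pq):
    """H(P, Q): bind the EC key and the post-quantum pubkey together."""
    data = P[0].to_bytes(32, "big") + P[1].to_bytes(32, "big") + Q_pq
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N_ORDER

# Wallet side: p_sec is the EC secret key, Q_pq a stand-in PQ pubkey.
p_sec = 1 + int.from_bytes(os.urandom(32), "big") % (N_ORDER - 1)
P = ec_mul(p_sec, G)
Q_pq = os.urandom(32)                          # e.g. a SPHINCS+ pubkey in reality
R = ec_add(P, ec_mul(h_commit(P, Q_pq), G))    # output key: R = P + H(P,Q)*G

# Post-hard-fork verifier: given (P, Q_pq) revealed in the witness, check
# the commitment opens to the output key R, then verify the PQ signature
# against Q_pq (omitted -- depends on the chosen PQ scheme).
assert R == ec_add(P, ec_mul(h_commit(P, Q_pq), G))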

That retains many of the drawbacks above and is only useful if enabled
via a hard fork, however it removes these drawbacks:

- insiders can steal your funds if you adopt it prior to it becoming
consensus
- it marginally increases the size of non-post-quantum spends
- it breaks complicated scripts even without pre-signed transactions

Cheers,
aj

Matt Corallo

Dec 16, 2024, 5:22:40 PM
to Anthony Towns, Bitcoin Development Mailing List


On 12/16/24 6:14 AM, Anthony Towns wrote:
> On Sun, Dec 15, 2024 at 04:42:59PM -0500, Matt Corallo wrote:
>> This provides a compelling hook for post-QC security - with the simple
>> addition of an OP_SPHINCS (or equivalent post-QC non-one-time-use (i.e. not
>> Lamport/Winternitz) signature verification opcode, functioning in much the
>> same was OP_CHECKSIG works today), wallets simply need to construct their
>> taproot outputs to always contain a script-path alternative spending
>> condition. When QCs are becoming a reality, key-path taproot spends could be
>> disabled via soft-fork, forcing spends to be done using the QC-secure path.
>
> Some downsides of this approach:
>
> - "OP_SPHINCS" signatures would be very large, at 8kB to 50kB. That
> reduces inputs spent per block to a maximum of between 500 and 80,
> given the existing constraints on witness data. Compared to bitcoin
> blocks today, as I write, tx cf6391ca [0] is targeting the next block
> and spends over 600 inputs on its own, while taking up only about 4%
> of a block, so this seems like a big limitation. Probably better to
> either pick something with much smaller signatures (which probably
> means risky cryptographic assumptions, or single-use-pubkeys), or
> to increase the block size in one way or another, eg as cryptoquick
> proposes [1].
>
> [0] cf6391ca2f3c361b666937fe7ae3e283850c9b81682755b7f5ab64bfd4c9503a
> [1] https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki

Sure, of course. The point of this scheme is to provide an *option*. As mentioned in the
assumptions, it assumes that we have a decade or more before this is a pressing issue and thus, in
practice, the funds in these types of scripts will never use OP_SPHINCS. Instead, as PQC improves
over time (or confidence in lattice assumptions increases) other things will be added to replace it,
allowing for wallets in active use to be migrated to something more sensible.

It's intended to avoid the issue of funds that don't move for a decade. If we wait around and only
add PQC five years before it's needed, that leaves a lot of funds vulnerable to theft; whereas if we
give wallets a decade or a decade and a half with a PQC option, the total funds vulnerable to theft
could be substantially decreased.

> - There's a fair bit of bikeshedding you could do about OP_SPHINCS,
> including choosing different parameters for SPHINCS+, different
> encoding of pubkeys, different "sighash" selectors for what is to
> be signed, and different PQ schemes entirely. Without real quantum
> computers to optimise against, many of those variables probably can't
> be chosen objectively.

Sure, I'm not sure "it's bikesheddable" is a *downside* per se, but yea, either the parameters would
have to be fixed with quite some guessing or they'd have to be configurable.

> - Adding in secret OP_SPHINCS spend paths prior to an OP_SPHINCS
> consensus change being active (or at least locked-in) seems very risky:
> - it provides a way for insiders to cause you to lose all your
> funds (prior to activation, selling your SPHINCS pubkey to a miner
> allows the miner to claim all the funds), with little ability to
> do a k-of-n multisig-like approach to prevent a single bad actor
> from causing problems

Sure, if you lose your "private key" someone can steal your funds. Indeed, this wouldn't allow for
multisig until/unless OP_SPHINCS were active.

> - if the parameters that are actually activated are different to
> what you assumed, then your script path might be unspendable
> anyway; if different groups are proposing different parameters,
> and only one gets activated, their funds are accessible while
> everyone else's isn't

Sure, it wouldn't make sense to use such a thing unless you have very strong confidence everyone
else is using the same opcode format.

> - Disabling key path taproot spends via soft-fork is extremely
> confiscatory -- for the consensus cleanup, we worry about even the
> possibility of destroying funds due to transaction patterns never
> seen on the network; here, shutting down key path spends would be
> knowingly destroying an enormous range of utxos.

Indeed, I think there's a large debate to be had here, and really we can't make such a decision
today. There are a lot of specifics around exactly how a theoretical future QC operates that would
materially change this decision, I think, so I'm not sure how much it's really worth debating today.
That said, if there's been two decades of all wallets having a hidden PQC script path, it might be an
*option* in a way that just adding a lattice option five years before a QC is available simply would
not be.

Hence it makes sense IMO to spec this out so wallets can use it *today* and we can figure this kinda
stuff out later.

> - If you're avoiding the confiscatory approach by adding a hard-fork
> in order to make keypath (and potentially ECDSA) funds accessible
> to their owners via some post-quantum mechanism, then there's little
> benefit to having an explicit script path method in advance.

Strongly disagree with this one. The hard-fork spend-via-future-PQC-proof-of-knowledge approach is
incredibly speculative, likely to require some vaguely sketchy crypto assumptions, and might well
require more on-chain footprint than a hash-based signature. I don't buy that it makes sense to
assume schemes that allow for this will exist in a way that we're happy with. Instead, having
wallets commit to some OP_SPHINCS buys us a lot of optionality, and could even allow for adding an
alternative spend-via-future-PQC-proof-of-knowledge long after "locking" keypath spends.

> - This approach probably isn't compatible with smart contracts,
> particularly if pre-signed transactions are involved. Probably the
> only way to deal with that is to hope you will have enough warning
> to say "in X months, all your smart contracts are broken, so shut
> them down now". There probably isn't any feasible way to do anything
> better than that, though.

Indeed, and I explicitly listed this as an assumption because it seems incredibly likely to be the case.

>> (b) alternatively, we could allow key-path spends for wallets which prove
>> the script-path is a NUMS point (via some new keypath+proof spend variant).
>> I doubt many wallets today bother committing to a NUMS point for their
>> taproot output pubkeys, so this would break existing wallets, but it would
>> allow for an opt-out scheme.
>
> I don't think this paragraph makes sense? In a post-quantum world,
> a legitimate key-path spend could likely be replaced by an attacker
> while it was sitting in the mempool, same as for a tx spending a p2pkh
> or p2wpkh output.

This is very unclear. A lot needs to be figured out about exactly how a theoretical future QC
operates and there may well be some latency to the calculations.

> Also, a script-path isn't a point at all, so having
> it be a NUMS point doesn't make much sense.

Apologies, I had rewritten this sentence and that was a typo. I'd meant a NUMS constant, eg a 0-hash.

> Having it be unspendable
> can make sense, and is already recommended in BIP 341 (search for
> "unspendable"). Conditional key-path spends for taproot outputs is
> probably most sensibly done as a hard fork; though it could be done as
> a soft fork if the "condition" data was added somewhere other than in
> the witness.
>
> What about a different way of allowing wallets to pre-commit to a
> post-quantum pubkey? eg, rather than generating a pubkey P directly from
> an xprv/xpub and committing to a script path with their post-quantum
> pubkey Q; wallets could generate the pubkey as R = P+H(P,Q)*G. At that
> point, a hard-fork could be made to allow "R CHECKSIG" (or key path spends
> where R is the sPK) to be satisfied via "<Qsig> <Q> <P>", validated
> by checking that P+H(P,Q)*G=R, and that Qsig is a valid post-quantum
> signature based on Q.
>
> That retains many of the drawbacks above and is only useful if enabled
> via a hard fork, however it removes these drawbacks:
>
> - insiders can steal your funds if you adopt it prior to it becoming
> consensus

I don't see why you consider "if your private key leaks someone can steal your funds" to be a drawback.

> - it marginally increases the size of non-post-quantum spends
> - it breaks complicated scripts even without pre-signed transactions

These seem like drawbacks to your scheme.

Matt

Tadge Dryja

Dec 16, 2024, 5:22:44 PM
to Bitcoin Development Mailing List
Hi everyone
 (disclosure: I'm highly skeptical QCs will break secp256k1 in my lifetime, but who knows)

IMHO the activation dilemma is the trickiest part of Bitcoin dealing with QCs.  On the one hand, you want a very long term plan, many years ahead of time, so that everyone has time to migrate and not get their coins stolen or destroyed.  But the further out a QC is, the less likely it seems it will ever occur, and thus the less reason there is to write software to deal with a theoretical, far off problem. (And that's not even getting into the fact that nobody's in charge of Bitcoin so there's no long term roadmap anyway.)

The ability to have a PQ fallback key with zero or very low on-chain cost helps a lot with this, whichever activation path ends up happening.  Picking something and committing to it in wallets today, before any kind of activation, is a bit scary since the PQ pubkey does become an exposed private key.  But I think there is a pretty good way to do it with a consensus level proof of quantum computer.

On Monday, December 16, 2024 at 7:35:47 AM UTC-5 Anthony Towns wrote:


- Disabling key path taproot spends via soft-fork is extremely
confiscatory -- for the consensus cleanup, we worry about even the
possibility of destroying funds due to transaction patterns never
seen on the network; here, shutting down key path spends would be
knowingly destroying an enormous range of utxos.

This is true, but faced with a QC you're in trouble either way: either the coins are destroyed, or the first (non-nice) entity to get a QC steals them.  We can hope that if the QC ever does arrive there will be enough warning and coordination that there won't be *too* many of these utxos at that point.

But there are still a lot of lost coins where nobody knows the private keys anymore and in those cases, the lead time doesn't matter. The original owners who lost the keys would probably prefer those coins to be destroyed rather than stolen.  But of course there's no way for them to reliably express that preference since they can no longer sign.

An on-chain proof of quantum computer (PoQC I guess :) ) would be a way to reduce the damage of activation forks.  One way to build it: Create a NUMS point pubkey - something like the construction described in BIP 341.  Send some coins to that address, then watch if it gets spent.  Providing a preimage of the x-coordinate of the point, as well as a valid signature from that point, means that either the hash function is broken or (what we're assuming here) the one-wayness of EC base point multiplication has been broken.  Nodes can then have code which watches for such a proof and changes consensus rules based on it.
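
A sketch of the node-side check (the seed constant is illustrative, and
verify_schnorr is a stub standing in for a real BIP340 verifier):

import hashlib

def verify_schnorr(pubkey_x: bytes, msg: bytes, sig: bytes) -> bool:
    raise NotImplementedError("stand-in for a real BIP340 verifier")

# Publicly derived x-coordinate: everyone can check it came out of a hash,
# so nobody can know its discrete log today. (In practice you'd grind the
# seed until the candidate x actually lies on the curve.)
NUMS_SEED = b"bitcoin PoQC honeypot v1"    # illustrative constant
NUMS_X = hashlib.sha256(NUMS_SEED).digest()

def is_valid_poqc(preimage: bytes, sig: bytes, msg: bytes) -> bool:
    # The preimage proves the key was sampled from a hash output; a valid
    # signature from that key then proves someone computed its discrete
    # log -- i.e. either SHA256 or EC one-wayness is broken.
    if hashlib.sha256(preimage).digest() != NUMS_X:
        return False
    return verify_schnorr(NUMS_X, msg, sig)

# Node-side consensus hook (sketch): on seeing a block with a valid PoQC
# spend, set a sticky flag that disables EC key-path spends thereafter.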

This can be useful for the activation, or persistence of a confiscatory restriction of key path spends.  For example:

Say people worry about an imminent QC.  They estimate it will be built in 5-8 years.  They write software which contains a temporary fork, which activates in 3 years and *de*activates in 8 years.  This fork disables almost all key path spends (as well as ECDSA signatures).  The only key path spends allowed are from the NUMS utxos, and if any NUMS utxo is spent, then the EC prohibition locks in to become permanent instead of reverting at the scheduled deactivation time.

In this case the soft fork is only (permanently) confiscatory if the QC provably exists and would have (presumably, sooner or later) confiscated all those coins anyway.  It could also allow people to enable PQ signatures and disable EC signatures a bit sooner than they otherwise would have, since the cost of being wrong about a QC is lessened -- losing EC operations would be painful, but even more so if it turns out the nearly finished QC never quite gets there.

An opt-in, non-confiscatory fork could also use this mechanism.  A new P2TR output type (PQ2TR?) could be used which is explicitly not key-spendable post-PoQC, but the older P2TR, P2WPKH, etc output types are still EC spendable (better move em quick).

It can also work the other way: The new PQ output type can work just like P2TR, except that one opcode (the OP_PQCHECKSIG) becomes an OP_FAIL until the PoQC.  Given a PoQC, it's an OP_SUCCESS.  That way we don't have to have a consensus-level definition of the actual PQ signature algorithm yet; we've just put a placeholder PQ signature opcode in, and an activation method.  A later soft fork can then define the signature algo.  You'd want to define it pretty soon after, since wallets committing to PQ pubkeys for schemes that end up not getting implemented doesn't help.  But it does allow wallets to do something soon for people who are worried and want to be "quantum safe".

-Tadge



conduition

Dec 17, 2024, 12:42:33 AM
to Bitcoin Development Mailing List
Hey all, and thank you for this great idea Matt. It rhymes a little with my DASK idea but I actually like yours a lot better, as it provides a cleaner upgrade path for soft-fork compatibility than DASK.

However, I suggest we use one of the more succinct Winternitz one-time signature algorithms instead of committing to SPHINCS right away. This gives us the option of using the WOTS key as a certification layer for some pubkey from a future signature algorithm, which may or may not even exist yet.

Not only does this give researchers more time to find better algorithms, it also makes devs' lives easier, and makes the transition to a PQ world more incremental than a head-first dive committing to SPHINCS. WOTS is very simple: it'd be easy to standardize and easy for wallets to implement and derive. Its signatures can be as small as 1 kilobyte. Even if we do end up choosing SPHINCS as the 2nd-layer whose pubkey we certify, the time/space overhead added by one WOTS signature is negligible in comparison. Once the WOTS pubkey is standardized, we can take our time choosing and standardizing the probably-much-more-complicated 2nd layer signing algorithm.

This certification layer idea is straight from SPHINCS' own whitepaper. This is how SPHINCS authors created its many-time property without blowing up the runtime.
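
A toy sketch of the certification idea, using Lamport one-time signatures
rather than WOTS purely because they fit in a few lines (the "future
scheme" pubkey is a random stand-in):

import hashlib, os

H = lambda b: hashlib.sha256(b).digest()

# Toy Lamport one-time key. The signature carries the sibling hashes so
# the public key can be a single 32-byte root.
sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
pk = H(b"".join(H(a) + H(b) for a, b in sk))  # the hash committed on-chain

def ots_sign(msg: bytes):
    d = int.from_bytes(H(msg), "big")
    # For each digest bit, reveal the matching preimage plus the hash of
    # the unrevealed half.
    return [(sk[i][(d >> i) & 1], H(sk[i][1 - ((d >> i) & 1)]))
            for i in range(256)]

def ots_verify(msg: bytes, sig, pk: bytes) -> bool:
    d = int.from_bytes(H(msg), "big")
    acc = b""
    for i, (pre, sibling) in enumerate(sig):
        bit = (d >> i) & 1
        pair = (H(pre), sibling) if bit == 0 else (sibling, H(pre))
        acc += pair[0] + pair[1]
    return H(acc) == pk

# Certification: the one-time key signs (certifies) the pubkey of whatever
# PQ scheme is eventually standardized, delegating future spends to it.
future_scheme_pk = os.urandom(32)   # stand-in for e.g. a SPHINCS+ pubkey
cert = ots_sign(future_scheme_pk)
assert ots_verify(future_scheme_pk, cert, pk)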

----------

Tadge, your PoQC idea is neat but it relies on at least one "good guy" QC willing to produce the proof. A self-interested QC would never willingly crack a pubkey which it knows would activate a soft-fork against its own interest. Any publicly-advertised PoQC honeypot addresses able to gather large donations would be obvious, and a rational QC would ignore them. I suppose users could sprinkle some hidden honeypot addresses around the network and hope the QC bites one of them, but I don't think that's practical - The QC would probably prioritize addresses with large balances like Satoshi's wallets. I'm not sure if I have any better ideas though, besides the also-risky option of relying on the community to act in unison, triggering activation manually if/when a QC appears to be coming soon to a blockchain near us. So a PoQC might be a good additional trigger, but we also need to be able to manually activate the post-quantum upgrade in case the PoQC is unavailable.

----------

Another fun aspect of QCs we haven't discussed yet is BIP32. Whatever pubkey we hide in the OP_SUCCESS tap leaf, it needs to be derived via quantum-secure means. So there can be no unhardened BIP32 anywhere in the derivation chain of the PQ secret key, or else xpubs can be sold to a QC for profit, even if the PQ soft fork upgrade is activated on time.

It seems like whatever upgrade path we go with, it will mean the end of BIP32 xpubs as we know them. A 3rd party won't be able to derive my post-quantum P2TR address from a regular xpub alone without also knowing the OP_SUCCESS script's tap leaf hash, which by necessity must be completely independent of the xpub. Sounds like we may need a new standard for that too. Perhaps some kind of 'wrapper' which embeds a regular BIP32 xpub, plus the tap leaf hash of the OP_SUCCESS script? That'd be enough for an untrusted 3rd party to derive a standard set of P2TR addresses with the attached PQ fallback leaf script hash.

A second wacky idea which modifies (bastardizes) BIP32 instead of replacing it: Replace the xpub's chain code with the OP_SUCCESS script's tap leaf hash before distributing. You could derive the PQ keypair from the parent xpriv, to maintain the integrity of BIP32's chained derivation, so in a way it would be like inserting a little modification into BIP32's hardened derivation logic. Software which doesn't have PQ-fallback support will derive the standard P2TR addresses we all use today. Software which does have PQ-fallback support will derive the same child taproot internal keys, but tweaked with a PQ-fallback script leaf. I imagine this might make compatibility very complicated though, so I'm not sure if this is a smart choice - throwing this out there in case it inspires better ideas, but I wouldn't advocate for it myself.
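
On the derivation point, a sketch of producing the PQ seed via hardened-only, hash-based steps in the style of BIP32 (the index constant and usage are illustrative, not a proposed standard):

import hmac, hashlib, os

def hardened_child_seed(k_par: bytes, chain_code: bytes, index: int) -> bytes:
    """BIP32-style hardened derivation: depends on the parent PRIVATE key,
    so an xpub alone (even one handed to a QC) reveals nothing about it."""
    assert index >= 0x80000000, "unhardened derivation would leak via the xpub"
    data = b"\x00" + k_par + index.to_bytes(4, "big")
    return hmac.new(chain_code, data, hashlib.sha512).digest()

k_par, chain_code = os.urandom(32), os.urandom(32)  # parent xprv material
PQ_INDEX = 0x80000000 | 0x5150                      # illustrative "PQ purpose" index
pq_seed = hardened_child_seed(k_par, chain_code, PQ_INDEX)[:32]
# pq_seed -> feed into the PQ scheme's keygen; the resulting pubkey's tap
# leaf hash must then travel alongside the xpub for address derivation.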

-c

Antoine Riard

Dec 17, 2024, 11:19:21 PM
to Bitcoin Development Mailing List
Hi all,

In my understanding, whatever the recent advances in error
correction codes to get a universal quantum computer in the
real world, there are still big unknowns about whether all the
scalability challenges can be solved as the number of physical
qubits goes up, whatever the underlying information support
(e.g the spin of an electron).

All the efforts by many well-funded research teams all over
the world at building a QC might just end up discovering new
laws of physics rendering the realization of a QC intractable...

On the other hand, given the slowness of any consensus upgrade
in bitcoin, this is definitely an area of concern to keep an
eye on in the situation where a QC becomes practical enough
to break the DL problem.

I think Tadge's idea of introducing a PoQC, assuming it's
feasible, can be brilliant to enhance all the "pubkey"
exposed coins. For all the old coins (e.g P2PK from more than
10 years ago), a soft-fork could be introduced to restrain
their spend behind some "seal" PoQC, whose spend would trigger
the key-path-spend knock-out of all "pubkey-exposed" coins
starting at the next block (or at spend height + N).

This soft-fork could require the unseal spend of a PoQC
to have an inflated weight of 4 MB to make the validity
transition easy. The new "seal" PoQC could be made consensus
mandatory for old pubkey types and user-opted-in for newer
ones like P2TR (i.e the PQ2TR). Spending a _single_ mandatory
or opted-in "sealed" pubkey would trigger the key-path-spend
deactivation for all the affected UTXOs. While this would
sacrifice one UTXO to save many of them, I think it would
minimize the magical coordination expected of the community
to roll out a soft-fork when we see a real QC attacker in
the wild.

I'm not sure if we can come up with a post-quantum scheme to
unlock pubkeys exposed through the key path, as any post-quantum
scheme would have to assume some secret proof of knowledge; once
the DL is broken, what information remains to the "legit"
coin owners to prove they knew a specific scalar before the
PoQC "seal" was breached...? A Shor-efficient QC would break
the secrecy currently afforded by the hardness of the DL problem.

A client-side option could be to start anchoring the hash of the
EC secret key as an ordering to solve the double-spend problem
towards a future QC attacker... even if we don't know yet how to
make this proof valid within a future post-quantum secure scheme.

I don't think we should settle for now on one post-quantum scheme
like SPHINCS; while its cryptanalytic assumptions are easier to
understand, it's not as though CRYSTALS or FALCON haven't also
been vetted by NIST. A user could script-path commit to a number
of "reserved" OP_SUCCESS leaves, which can then become valid either
when the PoQC is unsealed by a QC attacker or via a soft-fork
subsequent to the PoQC soft-fork. The on-chain fee cost of the
uncertainty about the most robust post-quantum scheme is borne by
the user, rather than picking one scheme as the only one (...there
is little available literature on using Grover's algorithm to
weaken hash functions in a post-quantum setting).

Additionally, I don't think pre-signed smart contracts will be
condemned to close down, as long as a parallel post-quantum state
is counter-signed alongside the classical state. There is no
guarantee ahead of time that the new signature scheme will have
consensus validity (e.g SPHINCS or CRYSTALS signatures becoming
valid), but at least the parallel state, quantum or classical,
will be ordered by the on-chain spend of the UTXO.

While it is an open question in the QC field how much time it would
take to break a public key's DL, and whether it will on average be under
600 seconds, I think looking for any solution where the PoW is accelerated
will condemn us to a never-ending race as QC latency improves. We should
rather design consensus rules that yield a new block every 10 min,
indifferent to whether miners have access to a classic or quantum computer,
with only the quantity of energy as a distinguishing factor.

So, to make a digest, in my view the main problems are the following:
- (a) can we come up with a practical PoQC scheme that can be implemented as full-node consensus rules?
- (b) what is socially acceptable to do with the QC-exposed old "pubkey" coins that their users will never touch again?
- (c) what are the client-side opted-in post-quantum hooks that could be implemented _now_?

Ideally, we can solve (a) to make the (c) hooks automatically blessed with
consensus meaning, thereby minimizing community coordination when it's
time to upgrade to post-quantum cryptography. I don't think there is a good
answer for (b), assuming no future magical post-quantum scheme to prove
knowledge before a time T, other than anticipating the QC problem _now_ to
minimize the number of potentially affected coins in the future.

Best,
Antoine
ots hash: 9238d0a7ce681f0dc944b28745d379b41cd2491d

David A. Harding

Jan 1, 2025, 7:25:21 AM
to Matt Corallo, Bitcoin Development Mailing List
On 2024-12-15 11:42, Matt Corallo wrote:
> wallets simply need to construct their taproot outputs to always
> contain a script-path alternative spending condition.

If wallets simply construct their regular or alternative spending
conditions with a QC-secure commitment to a secret preimage, they can
use the variation of Guy Fawkes signatures described by Tim Ruffing in
the original 2018 thread about taproot[1] and expanded by him about a
month later.[2] E.g., as a backup to your keypath spend, you include a
scriptpath that is: <key> OP_CHECKSIGVERIFY OP_HASH256 <digest>
OP_EQUAL.
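
Assembled as raw tapscript bytes, that leaf might look like the following
sketch (standard opcode values; the key and preimage are stand-ins):

import hashlib, os

OP_CHECKSIGVERIFY, OP_HASH256, OP_EQUAL = 0xAD, 0xAA, 0x87

def hash256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

xonly_key = os.urandom(32)        # stand-in for the wallet's x-only pubkey
secret_preimage = os.urandom(32)  # guard this like a private key
digest = hash256(secret_preimage)

# <key> OP_CHECKSIGVERIFY OP_HASH256 <digest> OP_EQUAL
# (0x20 is the direct-push opcode for a 32-byte item)
leaf_script = (
    bytes([0x20]) + xonly_key + bytes([OP_CHECKSIGVERIFY]) +
    bytes([OP_HASH256]) +
    bytes([0x20]) + digest + bytes([OP_EQUAL])
)
# Spending witness (bottom to top): <secret_preimage> <signature> -- after
# the fork, the hash preimage rather than the EC signature is what carries
# the security.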

This has the disadvantages of requiring a fork[3] in case QCs become a
reality and delaying the spend of any taproot output after the QC crisis
by 100 blocks or more---but the advantage of not requiring any
specification work or consensus changes now (saving lazy people like me
from having to learn anything about post-quantum cryptosystems).

-Dave

[1]
https://gnusha.org/pi/bitcoindev/1516786100.2...@mmci.uni-saarland.de/
[2]
https://gnusha.org/pi/bitcoindev/1518710367.3...@mmci.uni-saarland.de/
[3] Ruffing describes it as a hard fork, but it sounds to me like a soft
fork. It would break pruned nodes that upgraded after the soft fork
activated, though, requiring them to re-download and re-scan all blocks
since the activation.

David A. Harding

Jan 1, 2025, 7:25:24 AM
to Tadge Dryja, Bitcoin Development Mailing List
On 2024-12-16 12:20, Tadge Dryja wrote:
> An on-chain proof of quantum computer (PoQC I guess :) ) would be a
> way to reduce the damage of activation forks. One way to build it:
> Create a NUMS point pubkey - something like described in BIP341. Send
> some coins to that address, then watch if it gets spent. [...]
> Nodes can then have code which
> watches for such a proof and changes consensus rules based on it.

I think this could be even more useful if combined with a previous idea
for creating a NUMS[1][3] (or trust-minimized[2]) pubkey compatible with
Bitcoin but with a security strength less than 128 bits. That way
someone might claim the bounty of the key with (say) 96-bit security
potentially months or years before QC advances made regular keys
insecure and tempted operators of QCs into stealing from regular user
addresses.
-Dave

[1]
https://gnusha.org/pi/bitcoindev/CAH5Bsr20n2T7KRTYqycSUx0i...@mail.gmail.com/
[2]
https://gnusha.org/pi/bitcoindev/aRiFFJKz5wyHFDi2dXcGbNEHZD2nIwDRk7gaXIte-N1BoOEOQ-ySYRnk0P70S5igANSr2iqF2ZKV1dWvipaQHK4fJSv9A61-uH7w4pzxKRE=@protonmail.com/
[3]
https://gnusha.org/pi/bitcoindev/CAH5Bsr39kw08ki76aezJ1EM9...@mail.gmail.com/

Ian Quantum

Jan 1, 2025, 7:47:06 PM
to Bitcoin Development Mailing List
FALCON failed the NIST vetting. Since 2022 they have said they will fix it next year; same answer in 2024 when they formalized CRYSTALS-Dilithium, CRYSTALS-KYBER, and SPHINCS+. At the end, they again say, "NIST is also developing a FIPS that specifies a digital signature algorithm derived from FALCON as an additional alternative to these standards." https://csrc.nist.gov/News/2024/postquantum-cryptography-fips-approved

If it takes 1.5-3 years to get the entire ecosystem of software updated, tested, and deployed, and then to let users migrate to quantum safety, then starting now is how Bitcoin's code gets future-proofed. It will still require months (if BTC blocks normal transactions) to years (as a supported address type but not required) to migrate wallets to safety. The longer the quantum-resistant upgrade is delayed, the harsher the migration will need to become.

Alice & Bob recently announced a new algorithm estimated to break a 256-bit elliptic curve key in 9 hours with about 126k cat qubits. https://alice-bob.com/blog/computing-256-bit-elliptic-curve-logarithm-in-9-hours-with-126133-cat-qubits/ The algorithms will continue to improve and the costs will continue to go down. While some people are very confident about what the quantum hardware will look like in 3 years, can they be so confident about the algorithms? We have switched from the supercomputer model to a networked-node method of growing quantum calculations - parallel instead of serial - with fault-tolerant algorithms that prefetch results. Can we really be as confident about the algorithms as people seem to be about the hardware not being ready? Most people in quantum computing aren't aware of how much their competition has progressed; how can devs who don't read 10 or 50 new quantum computing papers per week be more confident than the people who do?