Proposing a P2QRH BIP towards a quantum resistant soft fork


Hunter Beast

Jun 9, 2024, 12:07:16 PM
to Bitcoin Development Mailing List
The motivation for this BIP is to provide a concrete proposal for adding quantum resistance to Bitcoin. We will need to pick a signature algorithm, implement it, and have it ready in the event of a quantum emergency, leaving time for adoption. Importantly, this first step is a more substantive answer than, "quantum computers may pose a threat, but we likely don't have to worry about that for a long time." Bitcoin development and activation are slow, so it's important that those with low time preference start discussing this as a serious possibility sooner rather than later.

This is meant to be the first in a series of BIPs regarding a hypothetical "QuBit" soft fork. The BIP is intended to propose concrete solutions, even if they're early and incomplete, so that Bitcoin developers are aware of the existence of these solutions and their potential.

This is just a rough draft and not the finished BIP. I'd like to validate the approach and hear if I should continue working on it, whether serious changes are needed, or if this truly isn't a worthwhile endeavor right now.

The BIP can be found here:
https://github.com/cryptoquick/bips/blob/p2qrh/bip-p2qrh.mediawiki

Thank you for your time.

Pierre-Luc Dallaire-Demers

Jun 14, 2024, 10:15:29 AM
to Bitcoin Development Mailing List
SQIsign is blockchain-friendly but also very new. I would recommend adding a hash-based backup key in case an attack on SQIsign is found in the future (recall that SIDH broke over the span of a weekend: https://eprint.iacr.org/2022/975.pdf).
Backup keys can be added in the form of a Merkle tree, where one branch would contain the SQIsign public key and the other the public key of the recovery hash-based scheme. For most transactions it would only add one bit to specify the SQIsign branch.
The hash-based method could be SPHINCS+, which is standardized by NIST but requires adding extra code, or Lamport, which is not standardized but can be verified on-chain with OP_CAT.
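A minimal sketch of that two-leaf Merkle construction in Python, assuming SHA-256 commitments and placeholder key bytes (the domain-separation tags and key values here are hypothetical, not a concrete proposal):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(pubkey: bytes) -> bytes:
    return h(b'\x00' + pubkey)   # hypothetical leaf tag for domain separation

def node(left: bytes, right: bytes) -> bytes:
    return h(b'\x01' + left + right)  # hypothetical inner-node tag

# Placeholder stand-ins for the two public keys.
sqisign_pk = b'sqisign-public-key'
backup_pk = b'sphincs-or-lamport-public-key'

# The output commits only to the Merkle root.
root = node(leaf(sqisign_pk), leaf(backup_pk))

# To spend, reveal the chosen key, its sibling hash, and one bit selecting
# the branch; the verifier recomputes the root from those three values.
def verify_branch(pubkey: bytes, sibling: bytes, is_right: bool,
                  expected_root: bytes) -> bool:
    l = leaf(pubkey)
    combined = node(sibling, l) if is_right else node(l, sibling)
    return combined == expected_root

assert verify_branch(sqisign_pk, leaf(backup_pk), False, root)  # normal path
assert verify_branch(backup_pk, leaf(sqisign_pk), True, root)   # recovery path
```

The single branch-selector bit Pierre-Luc mentions is the `is_right` flag; for most transactions only the SQIsign branch is revealed and the backup key stays hidden behind its hash.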

Hunter Beast

Jun 14, 2024, 10:30:54 AM
to Bitcoin Development Mailing List
Good points. I like your suggestion of SPHINCS+, just due to how mature it is in comparison to SQIsign. It's already in its third round, has several standards-compliant implementations, and has an actual specification rather than just a research paper. One thing to consider is that NIST-I round 3 signatures are 982 bytes in size, according to what I was able to find in the documents hosted on the SPHINCS+ website:
https://web.archive.org/web/20230711000109if_/http://sphincs.org/data/sphincs+-round3-submission-nist.zip

One way to handle this is to introduce this as a separate address type from SQIsign. That won't require OP_CAT, and I do want to keep this soft fork limited in scope. If SQIsign does become significantly broken, in that hopefully far-future scenario, I might be supportive of an increase in the witness discount.

Also, I've made some additional changes based on your feedback on X. You can review them here if you so wish:
https://github.com/cryptoquick/bips/pull/5/files?short_path=917a32a#diff-917a32a71b69bf62d7c85dfb13d520a0340a30a2889b015b82d36411ed45e754

Antoine Riard

Jun 16, 2024, 9:24:49 PM
to Bitcoin Development Mailing List

Hi Hunter Beast,

I think any post-quantum signature algorithm upgrade proposal would greatly benefit from having Shor's-based practical attacks far more clearly defined in the Bitcoin context. As soon as you start to talk about quantum computers, there is no such thing as a single "quantum computer", but rather a wide array of architectures based on a range of technologies for encoding qubits in nanoscale physical properties.

It is not certain that any Shor's algorithm variant works smoothly independently of the quantum computer architecture considered (e.g. gate frequency, gate infidelity, cooling energy consumption), and I think it's an interesting open game-theory problem whether an attacker can concentrate a sufficient amount of energy before any coin owner moves their coins in response (e.g. seeing a quantum break in the mempool and reacting with a counter-spend).

In my opinion, one of the last times the subject was addressed on the mailing list, the description of the state of the quantum computing field was not realistic and got into risk-characterization hyperbole, talking about a "super-exponential rate" (when indeed there is no empirical evidence that distinct theoretical advances in quantum capabilities can be combined with each other) [1].

On your proposal, there is an immediate observation which comes to mind, namely: why not use one of the algorithms (Dilithium, SPHINCS+, Falcon) which have been through the three rounds of NIST cryptanalysis? Apart from SQIsign's smaller signature size, in a network of full nodes any PQ signature algorithm should have reasonable verification performance.

Lastly, there is a practical defensive technique that can be implemented today by coin owners to protect against hypothetical quantum adversaries: setting spending scripts to require an artificially inflated witness stack, as the cost has to be borne by the spender. I think one can easily do that with OP_DUP and OP_GREATERTHAN and a bit of stack shuffling. While the efficiency of this technique is limited by the max consensus size of the script stack (`MAX_STACK_SIZE`) and the max consensus size of a stack element (`MAX_SCRIPT_ELEMENT_SIZE`), it adds an additional "scarce coins" prerequisite for the quantum adversary to succeed. Shor's algorithm is only defined over the classic computational-complexity resources, time and space.
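Those two limits can be turned into a rough upper bound on what the defense costs an attacker per spend attempt. A back-of-the-envelope sketch, assuming the constants as named above (1,000 stack elements, 520 bytes per element, as in Bitcoin Core) and a hypothetical feerate:

```python
# Back-of-the-envelope cost of the "inflated witness" defense, using the
# consensus limits cited above (values as in Bitcoin Core's interpreter).
MAX_STACK_SIZE = 1000          # max elements on the stack during execution
MAX_SCRIPT_ELEMENT_SIZE = 520  # max bytes per stack element

max_witness_bytes = MAX_STACK_SIZE * MAX_SCRIPT_ELEMENT_SIZE  # 520,000 bytes

# Witness data is discounted 4x: 1 weight unit per byte, 4 WU per vbyte.
vbytes = max_witness_bytes // 4  # 130,000 vbytes

feerate_sat_per_vb = 50  # hypothetical feerate
attacker_fee_sats = vbytes * feerate_sat_per_vb
print(attacker_fee_sats / 100_000_000, "BTC per forced spend attempt")  # 0.065 BTC
```

At 50 sat/vB the fully inflated stack forces roughly 0.065 BTC of fees per attempt, which is the "scarce coins" prerequisite in concrete terms; real scripts would use a smaller rung of the ladder.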

Best,
Antoine

[1] https://freicoin.substack.com/p/why-im-against-taproot

hunter

Jun 17, 2024, 6:25:25 PM
to Antoine Riard, Bitcoin Development Mailing List


On 2024-06-16 19:31, Antoine Riard <antoin...@gmail.com> wrote:

>
> Hi Hunter Beast, I think any post-quantum upgrade signature algorithm upgrade proposal would grandly benefit to have Shor's based practical attacks far more defined in the Bitcoin context. As soon you start to talk about quantum computers there is no such thing as a "quantum computer" though a wide array of architectures based on a range of technologies to encode qubits on nanoscale physical properties.
>
Good point. I can write a section in the BIP Motivation or Security section about how an attack might take place practically, and the potential urgency of such an attack.
 
I was thinking of focusing on the IBM Quantum System Two, mentioning how it can be scaled, and that although it might be quite limited, if running a Shor's variant for a sufficient amount of time, above a certain minimum threshold of qubits, it might be capable of decrypting the key to an address within one year. I base this on the estimate provided in a study by the Sussex Centre for Quantum Technologies et al. [1]. They provide two figures: 317M qubits to decrypt in one hour, and 13M qubits to decrypt in one day. It would seem it scales roughly linearly, and so, extrapolating further, 36,000 qubits would be needed to decrypt an address within one year. However, the IBM Heron QPU turned out to have a gate time 100x less than was estimated in 2022, and so it might be possible to make do with even fewer qubits within that timeframe. With only 360 qubits, barring algorithmic overhead such as for circuit memory, it might be possible to decrypt a single address within a year. That might sound like a lot of time, but being able to accomplish that at all would be significant, almost like a Chicago Pile moment, proving in practice something that for the past three decades was only thought theoretically possible. And it's only downhill from there...
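The extrapolation above can be sanity-checked with quick arithmetic, assuming the qubit count × runtime product is roughly constant across the two data points from [1]:

```python
# Sanity-check of the extrapolation in the text, assuming qubits x runtime is
# roughly constant (the two figures are from the Sussex study cited as [1]).
HOURS_PER_YEAR = 365 * 24  # 8760

qubit_hours_1h = 317e6 * 1   # 317M qubits for one hour
qubit_hours_1d = 13e6 * 24   # 13M qubits for one day
# The two figures imply a roughly constant qubit-hour budget:
print(qubit_hours_1h, qubit_hours_1d)  # ~3.17e8 vs ~3.12e8

qubits_one_year = qubit_hours_1h / HOURS_PER_YEAR
print(round(qubits_one_year))          # ~36,187 -> the "36,000 qubits" figure

# If Heron-class gate times are ~100x faster than the 2022 assumptions:
print(round(qubits_one_year / 100))    # ~362 -> the "360 qubits" figure
```

This only checks the internal consistency of the linear extrapolation; whether Shor's resource requirements actually scale linearly with runtime is an assumption carried over from the text.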
>
> This is not certain that any Shor's algorithm variant works smoothly independently of the quantum computer architecture considered (e.g gate frequency, gate infidelity, cooling energy consumption) and I think it's an interesting open game-theory problem if you can concentrate a sufficiant amount of energy before any coin owner moves them in consequence (e.g seeing a quantum break in the mempool and reacting with a counter-spend).
>
It should be noted that P2PK keys still hold millions of bitcoin, and those encode the entire public key for everyone to see for all time. Thus, early QC attacks won't need to consider the complexities of the mempool.
>
> In my opinion, one of the last time the subject was addressed on the mailing list, the description of the state of the quantum computer field was not realistic and get into risk characterization hyperbole talking about "super-exponential rate" (when indeed there is no empirical realization that distinct theoretical advance on quantum capabilities can be combined with each other) [1].
>
I think it's time to revisit these discussions given IBM's progress. They've published two videos in particular that are worth watching: their keynote from December of last year [2], and their roadmap update from just last month [3].
>
> On your proposal, there is an immediate observation which comes to mind, namely why not using one of the algorithm (dilthium, sphincs+, falcon) which has been through the 3 rounds of NIST cryptanalysis. Apart of the signature size, which sounds to be smaller, in a network of full-nodes any PQ signature algorithm should have reasonable verification performances.
>
I'm supportive of this consideration. FALCON might be a good substitute, and maybe it can be upgraded to HAWK for even better performance, depending on how much time there is. According to the BIP, FALCON signatures are ~10x larger than Schnorr signatures, so this will of course make transactions more expensive, but we must also remember that these signatures will be going into the witness, which already receives a 4x discount. Perhaps the discount could be increased further someday to fit more transactions into blocks, but this will also likely result in more inscriptions filling the unused space, which permanently increases the burden of running an archive node. Due to the controversy such a change could bring, I would rather any increases in the witness discount be excluded from future activation discussions, so as to be considered separately, even if it pertains to an increase in P2QRH transaction size.
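The discount arithmetic above can be made concrete. Signature sizes here are ballpark assumptions (64-byte Schnorr, ~666-byte FALCON-512; exact encodings vary):

```python
# Rough vsize impact of a ~10x larger signature under the segwit discount.
# Sizes are approximate: 64-byte Schnorr vs ~666-byte FALCON-512 signature.
def witness_vbytes(witness_bytes: int) -> float:
    """Witness bytes carry 1 weight unit each; 4 WU = 1 vbyte."""
    return witness_bytes / 4

schnorr_sig, falcon_sig = 64, 666
print(witness_vbytes(schnorr_sig))  # 16.0 vbytes of signature per input
print(witness_vbytes(falcon_sig))   # 166.5 vbytes of signature per input
print(falcon_sig / schnorr_sig)     # ~10.4x, matching the BIP's ~10x claim
```

So with the existing 4x discount, each FALCON-signed input costs roughly 150 extra vbytes over Schnorr; a larger discount would shrink that gap, which is exactly the trade-off against inscription incentives discussed above.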
 
Do you think it's worth reworking the BIP to use FALCON signatures? I've only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the readiness levels between those two are presently worlds apart.
 
Also, do you think it's of any concern to use HASH160 instead of HASH256 in the output script? I think it's fine for a cryptographic commitment since it's simply a hash of a hash (RIPEMD-160 of SHA-256).
>
> Lastly, there is a practical defensive technique that can be implemented today by coin owners to protect in face of hyptothetical quantum adversaries. Namely setting spending scripts to request an artificially inflated witness stack, as the cost has to be burden by the spender. I think one can easily do that with OP_DUP and OP_GREATERTHAN and a bit of stack shuffling. While the efficiency of this technique is limited by the max consensus size of the script stack (`MAX_STACK_SIZE`) and the max consensus size of stack element (`MAX_SCRIPT_ELEMENT_SIZE`), this adds an additional "scarce coins" pre-requirement on the quantum adversarise to succeed. Shor's algorithm is only defined under the classic ressources of computational complexity, time and space.
>
I'm not sure I fully understand this, but even more practically, as mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is proposed.
>
> Best,
> Antoine
> [1] https://freicoin.substack.com/p/why-im-against-taproot
>
 
I'm grateful you took the time to review the BIP and offer your detailed insights.
 
[1] “The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime,” 2022 - https://pubs.aip.org/avs/aqs/article/4/1/013801/2835275/The-impact-of-hardware-specifications-on-reaching
[2] https://www.youtube.com/watch?v=De2IlWji8Ck
[3] https://www.youtube.com/watch?v=d5aIx79OTps
 


Antoine Riard

Jul 12, 2024, 9:44:27 PM
to Bitcoin Development Mailing List
Hi Hunter Beast,

Apologies for the delay in answering.


> I was thinking of focusing on the IBM Quantum System Two, mention how it can be scaled, and that although it might be quite limited, if running Shor's variant for a sufficient amount of time, above a certain minimum threshold of qubits, it might be capable of decrypting the key to an address within one year. I base this on the estimate provided in a study by the Sussex Centre for Quantum Technologies, et. al [1]. They provide two figures, 317M qubits to decrypt in one hour, 13M qubits to decrypt in one day. It would seem it scales roughly linearly, and so extrapolating it further, 36,000 qubits would be needed to decrypt an address within one year. However, the IBM Heron QPU turned out to have a gate time 100x less than was estimated in 2022, and so it might be possible to make do with even fewer qubits still within that timeframe. With only 360 qubits, barring algorithmic overhead such as for circuit memory, it might be possible to decrypt a single address within a year. That might sound like a lot, but being able to accomplish that at all would be significant, almost like a Chicago Pile moment, proving something in practice that was previously only thought theoretically possible for the past 3 decades. And it's only downhill from there...

Briefly surveying the paper "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime", I think it's a reasonable framework to evaluate the practical efficiency of quantum attacks on Bitcoin; it's self-consistent, and there is a critical approach referencing the usual literature on quantum attacks on Bitcoin. Just note the caveat one can find in the usual quantum-complexity literature, "particularly in regard to end-to-end physical resource estimation. There are many other error correction techniques available, and the best choice will likely depend on the underlying architecture's characteristics, such as the available physical qubit–qubit connectivity" (verbatim). Namely, evaluating quantum attacks is very dependent on the concrete physical architecture underpinning them.

All that said, I agree with you that if you see a quantum computer in the range of 1,000 physical qubits able to break the DLP for ECC-based encryption like secp256k1, even if it takes a year, it will be a Chicago Pile moment, like the comparable experiments on nuclear chain reactions in the '30s and '40s.


> I think it's time to revisit these discussions given IBM's progress. They've published two videos in particular that are worth watching: their keynote from December of last year [2], and their roadmap update from just last month [3].

I have looked at the roadmap as it's available on the IBM blog post: https://www.ibm.com/quantum/blog/quantum-roadmap-2033#mark-roadmap-out-to-2033
They give only a target of 2,000 logical qubits to be reached by 2033... which is surprisingly not that strong... And one expects they might hit solid-state issues in laying out the Heron processor architecture in hardware. As a point of comparison, it took about two decades to advance the state of the art of lithography in traditional chip manufacturing.
 
So I think it's good to stay cool-minded, and I think my observation about talking of a "super-exponential rate", as used in maaku's old blog post, does not hold a lot of rigor in describing the advances in the field of quantum computing. Note also that IBM is a commercial entity that can have a lot of interest in "pumping" the state of quantum computing to gather funding (there is a historical anecdote among Bitcoin OG circles about Vitalik trying to do an ICO to build a quantum computer around 10 years ago, just to remember).

> I'm supportive of this consideration. FALCON might be a good substitute, and maybe it can be upgraded to HAWK for even better performance depending on how much time there is. According to the BIP, FALCON signatures are ~10x larger than Schnorr signatures, so this will of course make the transaction more expensive, but we also must remember, these signatures will be going into the witness, which already receives a 4x discount. Perhaps the discount could be increased further someday to fit more transactions into blocks, but this will also likely result in more inscriptions filling unused space also, which permanently increases the burden of running an archive node. Due to the controversy such a change could bring, I would rather any increases in the witness discount be excluded from future activation discussions, so as to be considered separately, even if it pertains to an increase in P2QRH transaction size.

 
> Do you think it's worth reworking the BIP to use FALCON signatures? I've only done a deep dive into SQIsign and SPHINCS+, and I will acknowledge the readiness levels between those two are presently worlds apart.

I think FALCON is what has the smallest pubkey + sig size among hash-and-sign lattice-based schemes. So I think it's worth reworking the BIP to see what has the smallest generation/validation time and pubkey + sig size for the main post-quantum scheme, at least among Dilithium, Falcon, SPHINCS+, and SQIsign. For a hypothetical witness discount, a v2 P2QRH could always be moved into a templated annex tag/field.
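For a rough side-by-side of the four candidates, here is a sketch using approximate NIST security level 1 parameter sizes. The figures are ballpark numbers pulled from the respective specifications; treat them as assumptions to be checked against the final parameter sets and encodings:

```python
# Approximate NIST level-1 sizes in bytes (ballpark; verify against specs).
schemes = {
    #                (pubkey, signature)
    "Dilithium2":    (1312, 2420),
    "Falcon-512":    (897,   666),
    "SPHINCS+-128s": (32,   7856),
    "SQIsign-I":     (64,    177),
}

# Rank by combined on-chain footprint (pubkey + signature).
for name, (pk, sig) in sorted(schemes.items(), key=lambda kv: sum(kv[1])):
    print(f"{name:14} pk+sig = {pk + sig:5} bytes")
```

On these numbers SQIsign has by far the smallest footprint (which is why it was attractive in the first place), and Falcon is the smallest among the NIST-standardized lattice schemes; signature size alone says nothing about verification time, which is the other axis Antoine raises.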


> Also, do you think it's of any concern to use HASH160 instead of HASH256 in the output script? I think it's fine for a cryptographic commitment since it's simply a hash of a hash (MD160 of SHA-256).

See the literature on quantum attacks on Bitcoin in the references of the paper you quote ("The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime") for a discussion of Grover's search algorithm.
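For intuition on why the hash width matters here: Grover's search gives a quadratic speedup, so finding a preimage of an n-bit hash costs on the order of 2^(n/2) quantum queries. A quick arithmetic sketch comparing a 160-bit commitment against a 256-bit one:

```python
import math

# Grover preimage cost ~ 2**(n/2) for an n-bit hash output.
grover_hash160 = 2 ** (160 // 2)   # ~2^80 queries for HASH160
grover_sha256  = 2 ** (256 // 2)   # ~2^128 queries for a 256-bit hash

print(math.log2(grover_sha256 / grover_hash160))  # 48.0 -> 2^48x harder
```

Note this counts only query complexity; 2^80 is also the classical collision bound for a 160-bit hash, so the commitment question is not purely a quantum one.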


> I'm not sure I fully understand this, but even more practically, as mentioned in the BIP, value can simply be kept in P2WPKH outputs, ideally with a value of fewer than 50
> coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally one that does KYC, so it's
> known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is why P2QRH is
> proposed.

Again, the paper you're referencing ("The impact of hardware specifications on reaching quantum advantage...") analyzes the performance of quantum advantage along two dimensions, namely space and time. My observation is that in Bitcoin we have an additional dimension, "coin scarcity", that can be leveraged to build a defense of address spends in the face of quantum attacks.

Namely, you can introduce an artificial "witness-stack size scale ladder" in pseudo-Bitcoin-script: OP_SIZE <1000> OP_EQUALVERIFY OP_DROP ...checksig...
I have not verified that it works well against Bitcoin Core, though this script should put the burden on the quantum attacker of having a large enough bitcoin amount available to burn in on-chain fees in witness size to break a P2WPKH.
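A toy evaluation of that ladder idea in Python, not consensus code. Note the <1000> operand in the pseudo-script would actually exceed the 520-byte element cap Antoine mentioned earlier, so this sketch assumes a hypothetical 500-byte rung instead:

```python
# Toy model of the "witness-stack size ladder": the script demands a large
# padding element before the signature check, so every spend attempt
# (including a quantum attacker's) pays for the extra witness bytes.
MAX_SCRIPT_ELEMENT_SIZE = 520  # consensus cap per stack element
REQUIRED_PADDING = 500         # hypothetical rung, within the 520-byte cap

def eval_size_ladder(witness_stack: list) -> bool:
    stack = list(witness_stack)
    padding = stack.pop()                        # top element, OP_SIZE operand
    if len(padding) > MAX_SCRIPT_ELEMENT_SIZE:   # consensus element-size check
        return False
    if len(padding) != REQUIRED_PADDING:         # OP_SIZE <500> OP_EQUALVERIFY
        return False
    # OP_DROP, then the usual ...checksig... would run on the rest of the stack.
    return True

assert eval_size_ladder([b'sig-placeholder', b'\x00' * 500])      # correct rung
assert not eval_size_ladder([b'sig-placeholder', b'\x00' * 10])   # too small
```

Stacking several such rungs (more required elements, up to `MAX_STACK_SIZE`) scales the fee burden, which is the "coin scarcity" dimension in script form.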


>  ideally with a value of fewer than 50 coins per address, and when funds ever need to be spent, the transaction is signed and submitted out of band to a trusted mining pool, ideally
> one that does KYC, so it's known which individual miners get to see the public key before it's mined. It's not perfect, since this relies on exogenous security assumptions, which is
> why P2QRH is proposed.

The technical issue is that if you implement KYC for a mining pool, you're increasing your DoS surface, and this could be exploited by competing miners. A more reasonable security model could be to have miner coinbase pubkeys commit to the "seen-in-mempool" spends, and from there build "hand-wavy" fraud proofs that a miner is quantum-attacking your P2WSH spends at pubkey-reveal time during transaction relay.

Best,
Antoine

ots hash: 1ad818955bbf0c5468847c00c2974ddb5cf609d630523622bfdb27f1f0dc0b30