Hi Hunter Beast,
I think any post-quantum signature algorithm upgrade proposal would greatly benefit from having
Shor's-based practical attacks far more precisely defined in the Bitcoin context. As soon as you start to talk about
quantum computers, there is no such thing as "a quantum computer"; rather, there is a wide array of architectures
based on a range of technologies to encode qubits on nanoscale physical properties.
It is not certain that any Shor's algorithm variant works smoothly independently of the quantum computer
architecture considered (e.g. gate frequency, gate infidelity, cooling energy consumption), and I think it is an
interesting open game-theory problem whether an attacker can concentrate a sufficient amount of energy before any
coin owner moves their coins in response (e.g. seeing a quantum break in the mempool and reacting with a counter-spend).
In my opinion, one of the last times the subject was addressed on the mailing list, the description of the state of
the quantum computing field was not realistic and slipped into risk-characterization hyperbole, talking about a
"super-exponential rate" (when in fact there is no empirical demonstration that distinct theoretical advances in
quantum capabilities can be combined with each other) [1].
On your proposal, there is an immediate observation that comes to mind, namely: why not use one of the algorithms
(Dilithium, SPHINCS+, Falcon) that have been through the three rounds of NIST cryptanalysis? Apart from the signature
size, which sounds like it would be smaller, in a network of full nodes any PQ signature algorithm should have
reasonable verification performance.
Lastly, there is a practical defensive technique that can be implemented today by coin owners to protect against
hypothetical quantum adversaries, namely setting spending scripts to require an artificially inflated witness stack,
since that cost has to be borne by the spender. I think one can easily do that with OP_DUP and OP_GREATERTHAN and a
bit of stack shuffling (a sketch is given below). While the efficiency of this technique is limited by the maximum
consensus size of the script stack (`MAX_STACK_SIZE`) and the maximum consensus size of a stack element
(`MAX_SCRIPT_ELEMENT_SIZE`), it adds an additional "scarce coins" prerequisite for the quantum adversary to succeed.
Shor's algorithm is only defined over the classic resources of computational complexity: time and space.
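To make this more concrete, here is a minimal illustrative sketch in Python (it only assembles the raw script body,
using an OP_DEPTH-based variant of the depth check rather than the exact OP_DUP / OP_GREATERTHAN shuffle, and does
no transaction building or signing):

    # Illustrative sketch only: assemble the script body for a "padded witness"
    # spending condition. Opcode values are from the Bitcoin Script opcode table.
    OP_DEPTH, OP_EQUALVERIFY, OP_CHECKSIG = 0x74, 0x88, 0xAC

    def push_int(n: int) -> bytes:
        """Minimal direct push of a positive script number (little-endian,
        plus a 0x00 byte when the sign bit of the top byte would be set)."""
        assert 0 < n <= 0x7fffffff
        body = n.to_bytes((n.bit_length() + 8) // 8, "little")
        return bytes([len(body)]) + body

    def push_data(data: bytes) -> bytes:
        """Minimal direct push of a short data element (e.g. a 32/33-byte pubkey)."""
        assert 0 < len(data) < 0x4c
        return bytes([len(data)]) + data

    def padded_depth_script(pubkey: bytes, depth: int) -> bytes:
        """OP_DEPTH <depth> OP_EQUALVERIFY <pubkey> OP_CHECKSIG: spendable only
        with exactly `depth` witness stack elements (signature plus dummy padding)."""
        return (bytes([OP_DEPTH]) + push_int(depth) + bytes([OP_EQUALVERIFY])
                + push_data(pubkey) + bytes([OP_CHECKSIG]))

    # Example: require 998 elements, staying under the 1000-element stack limit.
    script_body = padded_depth_script(pubkey=bytes(33), depth=998)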
Best,
Antoine
[1] https://freicoin.substack.com/p/why-im-against-taproot
Hello Hunter,
> Well, it's also important to remember that for every qubit added, it doubles the power of the system. A 2,000 qubit cryptographically-relevant quantum computer (CRQC) is exponentially faster than a 1,000 qubit one. There's also the
> capability for cross-links for multiple chips to communicate with each other, which IBM is also researching. The IBM Quantum System Two can be upgraded to support 16,000 qubits according to their marketing. Also consider that the
> verification of the results from the CRQC can be done via classical computer, so a high level of error correction might not be as necessary so long as the program is run enough times. It will take much longer, of course.
On performance, once again I think it all depends on the quantum computer architecture considered and whether we're talking about physical or logical qubits. As the paper "The impact of hardware specifications on reaching quantum advantage in the fault tolerant regime" linked in your BIP judiciously observes in its introduction, the surface code (as used by IBM) is only one of the error-correction code techniques.
About cross-links for multiple chips: even if each chip parallelizes towards a single classical logical unit, ordering computational units is a notoriously hard issue in classical computing. I don't think there is any certainty in quantum computer development that the sets of qubits of isolated chips can be arithmetically added together without a coefficient of loss on the resulting sum (...there is always a bit of apprehension in having to dissociate marketing claims from academic claims duly peer-reviewed...). And while the results can indeed be evaluated via a classical computer, it does not transitively follow that the evaluation will be as efficient (in energy / computational cycles) as doing more error correction on the quantum computer side.
> I've decided in one of my more recent updates to the BIP to default to the highest level of NIST security, NIST V, which provides 256 bits of security. You can see my rationale for that in this PR:
> https://github.com/cryptoquick/bips/pull/7/files
Those are assumptions that there is a security increase from scaling up the size of the public key. In the Bitcoin world, we don't even make an assumption on the public key size
for the ECDSA signature scheme, as both compressed and uncompressed public keys have historically been valid. Similarly, the public key size does not have to be bundled with
the specification of the signature verification scheme itself (e.g. see the BIP340 discussion on x-only public keys).
> As such, you'll see FALCON is roughly 4x larger than SQIsign signatures. Although supersingular elliptic curve quaternion isogeny-based algorithms are newer and
> more experimental than lattice-based cryptography, I think the benefits outweigh the risks, especially when transaction throughput is a principal concern.
There are no public key sizes in the security table, so it's hard to compare the overall on-chain space cost for each post-quantum signature algorithm considered.
Nor, actually, is there an estimation of the verification cost for an average 200-byte transaction; good old Hamilton quaternions, relying on complex numbers, can be hard to deal with on hobbyist CPUs, which can be a concern.
> It's crucial that the signature and public key both receive the witness discount. Can you go into more detail in how that might be accomplished?
The BIP341 taproot annex could be used for that, see https://github.com/bitcoin/bips/blob/master/bip-0341.mediawiki#cite_note-5
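Purely to illustrate what "typed fields" in the annex could look like (the annex today is only required by BIP341 to start with the 0x50 byte; the tag numbers and TLV layout below are invented for this example and are not any agreed-upon format), a small Python sketch:

    # Hypothetical sketch: a TLV-style layout inside the BIP341 annex, where some
    # record types (invented here) could carry a post-quantum public key and
    # signature that a future soft fork could discount differently.
    ANNEX_PREFIX = 0x50        # BIP341: an annex must start with this byte
    TAG_PQ_PUBKEY = 0x01       # invented tag numbers, for illustration only
    TAG_PQ_SIGNATURE = 0x02

    def compact_size(n: int) -> bytes:
        """Bitcoin-style CompactSize length encoding (enough for this sketch)."""
        if n < 0xfd:
            return bytes([n])
        if n <= 0xffff:
            return b"\xfd" + n.to_bytes(2, "little")
        return b"\xfe" + n.to_bytes(4, "little")

    def encode_annex(records: dict) -> bytes:
        """Serialize {tag: value} records as tag || length || value after the prefix."""
        out = bytes([ANNEX_PREFIX])
        for tag, value in sorted(records.items()):
            out += bytes([tag]) + compact_size(len(value)) + value
        return out

    # Example with roughly FALCON-512-sized artifacts (~897-byte pubkey, ~666-byte sig).
    annex = encode_annex({TAG_PQ_PUBKEY: bytes(897), TAG_PQ_SIGNATURE: bytes(666)})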
> Although it's too early to talk about activation of a QuBit soft fork, I've put some thought into how we can maintain the existing Bitcoin throughput with a soft fork, and I think it might be prudent to, when the time comes, introduce
> a 4x additional QuBit witness discount, maybe we call it the quitness, which is only available to valid P2QRH signatures. This would preclude its abuse for things like inscriptions because the signature data would need to correspond
> to the key, and even if this were possible, it's likely to result in only a burner address. This would increase chain state growth from roughly 100GB/yr to possibly closer to 2-300GB, depending on adoption. As the state of the art
> of SSD technology advances, this should allow plebs to run their own node on a 4TB disk for over a decade, even including existing chain size of ~600GB.
The annex could have typed fields giving a further witness discount for post-quantum signatures and public keys. However, I think it's a bit naive to assume that SSD technology advances will stay linear and that they will remain economically accessible at the same pace to the tens of thousands of plebs actually running full nodes and constituting the skeleton of the base-relay network. One could play out a posteriori the predictions on bandwidth technological advances made in BIP103 to see how well they have held up over the last ~9 years.
(There is another caution with evaluating technological advances, namely that some hardware components could actually end up massively consumed by other cryptocurrencies for their consensus algorithms...)
> If we were to use the same approach for FALCON signatures, a 16x discount would be needed, and I think that's far too much for the community to accept. As for pub key size and verification
> time, these are secondary considerations if the primary constraint is maintaining present transaction throughput. That's what makes SQIsign so promising.
Well, if there is something like the annex with typed fields, each type of post-quantum signature could get a wider discount, especially if there are verification asymmetries favoring one scheme over another, even if the security properties differ.
> The Impact paper seems to dismiss Grover's algorithm, but I think it's important to err on the size of caution and instead use a 32-byte double SHA-2 (HASH256) for additional security in the P2QRH output.
Performance-wise, it doesn't shock me to use a double SHA-2 (HASH256), as many domain-separation tagged hashes have already been added in taproot.
About Grover's algorithm, it's more the sample space and collision space that should be better defined for it to be relevant; you can always downgrade the performance of Grover's algorithm by scaling up the sample space, however it's not clear that this is practical for bitcoin transaction generation.
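For reference, a minimal sketch of the two constructions compared here, HASH256 (double SHA-256) and the BIP340-style tagged hash used for domain separation in taproot:

    import hashlib

    def hash256(data: bytes) -> bytes:
        """HASH256: double SHA-256."""
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def tagged_hash(tag: str, data: bytes) -> bytes:
        """BIP340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || data)."""
        tag_digest = hashlib.sha256(tag.encode()).digest()
        return hashlib.sha256(tag_digest + tag_digest + data).digest()

    # Example with a dummy 33-byte public key.
    output_commitment = hash256(bytes(33))
    domain_separated = tagged_hash("TapLeaf", bytes(33))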
> I'm not sure I understand what you mean by this...
> Is your coin scarcity comment related to what I call "satoshi's shield" in the BIP?
Not at all, it's not the "satoshi's shield" as you describe it in the BIP.
This is just the observation that bitcoin coins are scarce, in the sense that you need to burn raw energy to acquire the rewards according to the issuance schedule (or miner fees). Bitcoin Script can be designed to require that a sufficient number of bitcoin coins, or satoshis, are burned before a coin locked under a quantum-frail scriptpubkey can be unlocked.
That means a quantum attacker, even with an efficient quantum computer, might not be able to break the redeem script itself, only the signatures verified by the redeem script's checksig operations.
Let's give a concrete example. Say you have the following pseudo-script:
<OP_DEPTH> <998> <OP_EQUALVERIFY> <pubkey> <OP_CHECKSIG>
Interpreted, this script requires the spending party, whoever it is, to provide a witness stack of 998 elements (OP_DEPTH counts stack elements, so that's the signature plus dummy padding).
Those dummy elements put the burden on the quantum attacker to burn fees at the current sat/vbyte rate to realize a quantum exploit.
(They could leverage SIGHASH_NONE to escape this "fee jail"... however that sounds like it would expose them to being overridden by a miner.)
So assuming this defensive scheme in the face of quantum exploits is sound, I think it puts the burden on a quantum attacker to have hashrate capabilities at the current level of difficulty, not solely an efficient CRQC.
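For a rough back-of-the-envelope of that fee burden (a sketch; it assumes each dummy element is empty, so it costs one witness byte for its length prefix, and the usual segwit accounting of 4 witness bytes per vbyte):

    import math

    def padding_fee_sats(n_dummy_elements: int, feerate_sat_per_vb: float) -> int:
        """Extra fee the spender (or attacker) pays for the dummy witness padding,
        assuming empty elements (1 byte each in the witness serialization) and the
        segwit discount of 1 weight unit per witness byte (4 WU = 1 vbyte)."""
        extra_vbytes = math.ceil(n_dummy_elements / 4)
        return math.ceil(extra_vbytes * feerate_sat_per_vb)

    # Example: ~997 dummy elements at 30 sat/vbyte -> ~7,500 sats of extra fees per spend attempt.
    print(padding_fee_sats(997, 30))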
> Yes, this makes more sense. I'm not sure anything can be done with the fraud proofs, but they could at least prove that a bad actor is present. Ideally both approaches are combined for maximum security and accountability.
No, KYC is necessarily hurting mining pools, as there is no single KYC definition that you can implement that does not open the door to a kind of DoS exploitation.
Building a practical fraud-proof system on seen transactions is not an issue; the open question is more whether the average bitcoin user would pay to download fraud proofs demonstrating that a given miner is not engaging in quantum exploits.
> I've taken Antoine's feedback to heart and added FALCON to the specification, including a section that addresses the increased maintenance burden of adding two distinct post-quantum cryptosystems.
Thank you for the addition. On the maintenance burden, there is always the counter-argument that you can secure coins under multiple post-quantum signature schemes, especially if they come from different breeds of hardness assumptions. If one of the two schemes is broken, the coins are still locked by the other half.
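As a tiny sketch of that composition (the verifier callables below are placeholders, not real opcodes or libraries implied by the BIP): the coin stays locked as long as at least one scheme remains unbroken, because a spend must satisfy every check.

    from typing import Callable

    # (pubkey, msg, sig) -> bool; real FALCON / SPHINCS+ verifiers would plug in here.
    Verifier = Callable[[bytes, bytes, bytes], bool]

    def hybrid_spend_ok(msg: bytes, checks: list) -> bool:
        """Each item is (verifier, pubkey, sig); all of them must verify, so breaking
        one hardness assumption (lattice-based or hash-based) is not enough."""
        return all(verify(pubkey, msg, sig) for (verify, pubkey, sig) in checks)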
I think it could be interesting to split the BIP into multiple ones: one for the general consensus mechanism introducing P2QRH with all the quantum-risk considerations, and an individual one for each signature algorithm that could be deployed under this generic P2QRH. Kinda in the same way that BIP340 / BIP341 are split.
Best,
Antoine
ots hash: b57e9fe0b3de603ca66be29b7f1ba04fa5b8bc516c1277114ab42ac9f8572e12