Update on the Great Consensus Cleanup Revival


Antoine Poinsot

Feb 6, 2025, 12:57:19 AM
to Bitcoin Development Mailing List
Hi everyone,

A bit over a year ago i started working on revisiting the 2019 Great Consensus Cleanup proposal from
Matt Corallo [0]. His proposal included:
- making transactions of 64 bytes or less invalid, to fix merkle tree weaknesses;
- making non-pushonly scriptSigs, FindAndDelete matches, OP_CODESEPARATOR and non-standard sighash
types fail script validation, to mitigate the worst case block validation time;
- restricting the nTime field of the first block in each difficulty adjustment interval to be no less
than 600 seconds lower than the previous block's.

I set out to research the impact of each of the vulnerabilities this intended to patch, the
alternative fixes possible for each, and finally whether there was any other protocol bug fix we'd
want to include if we were going through the considerable effort of soft forking Bitcoin anyway.

Later in March i shared some first findings on Delving [1] and advertised the effort on this mailing
list [2]. I also created a companion thread on Delving, kept private, to discuss the details of the
worst case block validation time [3]. As one would expect due to the larger design space available
to fix this issue, this private thread is where most of the discussion would happen. Thank you to
everyone who contributed feedback, insights, ideas and reasoned opinions on the different issues
all along the process.

Now i would like to update the broader Bitcoin development community on the outcome of this effort.
I believe a Consensus Cleanup proposal should include the following.
- A fix for vulnerabilities surrounding the use of timestamps in the difficulty adjustment
algorithm. In particular, a fix for the timewarp attack with a 7200 seconds grace period as well
as a fix for the Murch-Zawy attack [4] by making invalid any difficulty adjustment period with a
negative duration.
- A fix for long block validation times with a minimal "confiscation surface", by introducing a
per-transaction limit on the number of legacy sigops in the inputs.
- A fix for merkle tree weaknesses by making transactions which serialize to exactly 64 bytes
invalid.
- A fix for duplicate transactions to supplement BIP34, in order to avoid having to resume
unnecessary BIP30 validation in the future. This is achieved by mandating the nLockTime field of
coinbase transactions to be set to the height of their block minus 1.

I have started drafting a BIP with the detailed specification for this.

Antoine Poinsot


[0] https://github.com/TheBlueMatt/bips/blob/7f9670b643b7c943a0cc6d2197d3eabe661050c2/bip-XXXX.mediawiki
[1] https://delvingbitcoin.org/t/great-consensus-cleanup-revival/710
[2] https://groups.google.com/g/bitcoindev/c/CAfm7D5ppjo/m/bYJ3BiOuAAAJ
[3] https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
[4] https://delvingbitcoin.org/t/zawy-s-alternating-timestamp-attack/1062#variant-on-zawys-attack-2

Murch

Feb 6, 2025, 9:46:42 PM
to bitco...@googlegroups.com
Thank you for the update and your work on the Great Consensus Cleanup. I
am looking forward to reading your BIP, and would hope that you could
share here or in the BIP’s Rationale what convinced you to change the
grace period from 600 seconds to 7200 seconds and how the nLockTime of
height-1 won out.

Cheers,
Murch


Antoine Poinsot

Feb 10, 2025, 12:15:50 AM
to Murch, bitco...@googlegroups.com
I laid out my reasoning for increasing the grace period to 7200 on the Consensus Cleanup Delving
thread [0]. TL;DR: there are marginal safety benefits to doing so and virtually no cost (it only
increases the worst case block rate from ~0.1% to ~0.65%). So on balance i concluded it was
preferable to err on the safe side.

I chose to mandate that the nLockTime of coinbase transactions be set to the height of the block
they are included in minus 1 because, in addition to ensuring coinbase transactions can't be
duplicated, it has marginal benefits (retrieving / proving the block height more efficiently), and
the feedback i got from miners both publicly [1] and privately was that none of the options
presented a significantly greater challenge for them.

Antoine

[0] https://delvingbitcoin.org/t/great-consensus-cleanup-revival/710/66
[1] https://groups.google.com/g/bitcoinminingdev/c/qyrPzU1WKSI/m/uzxS5jG0AwAJ
> --
> You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+...@googlegroups.com.
> To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/ff82fe21-8e02-42df-8760-c3e358a12766%40murch.one.

Antoine Riard

Feb 10, 2025, 12:15:51 AM
to Bitcoin Development Mailing List
Hi Darosior,

Thanks for the work on reviving the Great Consensus Cleanup.


> Now i would like to update the broader Bitcoin development community on the outcome of this effort.
> I believe a Consensus Cleanup proposal should include the following.
> - A fix for vulnerabilities surrounding the use of timestamps in the difficulty adjustment
> algorithm. In particular, a fix for the timewarp attack with a 7200 seconds grace period as well
> as a fix for the Murch-Zawy attack [4] by making invalid any difficulty adjustment period with a
> negative duration.
> - A fix for long block validation times with a minimal "confiscation surface", by introducing a
> per-transaction limit on the number of legacy sigops in the inputs.
> - A fix for merkle tree weaknesses by making transactions which serialize to exactly 64 bytes
> invalid.
> - A fix for duplicate transactions to supplement BIP34 in order to avoid resuming unnecessary BIP30
> validation in the future. This is achieved by mandating the nLockTime field of coinbase
> transaction to be set to the height of their block minus 1.
>
> I have started drafting a BIP draft with the detailed specs for this.

So assuming some hypothetical future BIP-9-based deployment, there can be multiple soft-fork
activations in flight at the same time (up to 30), as each soft fork can be assigned a distinct
block nVersion bit. While BIP-9 recommends a 95% activation threshold on mainnet, it's a one-line
change to adjust the `nThreshold` variable to another value. As the timewarp fix is an additional
constraint on the validity of mined blocks allowed under the current reward schedule, there could be
some reluctance to adopting the new consensus rules, and this fix could deserve a specific threshold
of its own - imho.

Additionally, the proposed soft-fork fixes are very different from the three sets of rules that were
activated under the DEPLOYMENT_TAPROOT flag. While BIP340, BIP341 and BIP342 build on top of each
other in a modular fashion, this is not the case for the four proposed fixes
("timewarp"/"worst-block-time"/"merkle-tree-weakness"/"enhanced-duplicated-txn"), as adopting one
fix does not require adopting the others. There could be community consensus on
"timewarp"/"merkle-tree-weakness"/"enhanced-duplicated-txn", while the minimal "confiscation
surface" (which was very controversial when the GCC was first proposed in 2019) might not suit a
wide majority of folks, or even the people whose use cases are potentially affected.

For those reasons, I think it's wiser to split each fix into its own BIP and patchset of code
changes, not only to allow discussion of each fix in parallel, but also to eventually enable
separate activation of each consensus fix, in case each fix gathers a different level of consensus,
whatever the reasons.

This might be a stylistic note, though I could point to a check implemented in Bitcoin Core's script
interpreter today, right in the crux of the consensus code paths, that is just stale due to a
never-activated BIP (yes, I'm staring at you, SIGPUSHONLY).

Best,
Antoine (the "evil" one)

OTS hash: 6c809fde007a53f380af41f0e22f3b9e95c83da24c2718ac2de0004570f94990

Antoine Poinsot

Feb 10, 2025, 5:38:00 PM
to Antoine Riard, Bitcoin Development Mailing List
Hi "evil" Ariard,

I believe it is important to bundle all fixes together to make up for the substantial fixed cost of deploying a soft fork. It also seems absurd to deploy a soft fork aimed at patching security bugs, but only fix some of them and leave the protocol partly vulnerable. While it is technically possible, it is not something i want to encourage.

Regarding the confiscation surface, please note the specific concerns raised about the 2019 proposal do not apply to the fix proposed here. The new approach to mitigating the worst case validation time is extremely conservative in this regard: no opcodes or other Script functionality get disabled. Only a limit is introduced at the transaction level, which makes it possible to pinpoint exactly the harmful behaviour without rendering any innocent transaction invalid.

Best,
Antoine (the non-evil-one?)

Chris Stewart

Feb 10, 2025, 9:23:31 PM
to Antoine Poinsot, Bitcoin Development Mailing List
Hi everyone! Excited to see this work moving forward. I've taken the liberty of carving off the 64 byte transaction portion of this proposal and drafted a BIP. You can view a rendered draft with references here: https://github.com/Christewart/bips/blob/2024-12-20-64bytetxs/bip-XXXX.mediawiki

<pre>
  BIP: ?
  Layer: Consensus (soft fork)
  Title: Disallow 64 byte transactions
  Author: Chris Stewart <stewart....@gmail.com>
  Status: Draft
  Type: Specification
  License: BSD-3-Clause
  Created: ?
</pre>

==Abstract==

This BIP describes the rationale for disallowing transactions that serialize to exactly 64 bytes without the transaction's witness.
We describe the weaknesses of the merkle tree construction used in bitcoin block headers and various exploits for those weaknesses.

==Motivation==

Bitcoin block headers include a commitment to the set of transactions in a given
block, which is implemented by constructing a Merkle tree of transaction ids
(double-SHA256 hash of a transaction) and including the root of the tree in the
block header. This in turn allows for proving to a Bitcoin light client that a
given transaction is in a given block by providing a path through the tree to the
transaction. However, Bitcoin’s particular construction of the Merkle tree has
several security weaknesses, including at least two forms of block malleability
that have an impact on the consensus logic of Bitcoin Core, and an attack on
light clients, where an invalid transaction could be ”proven” to appear in a block
by doing substantially less work than a SHA256 hash collision would require.
This has been prevented by relay policy since 2018<ref>[https://github.com/bitcoin/bitcoin/pull/11423/commits/7485488e907e236133a016ba7064c89bf9ab6da3 PR #11423 disallows 64 byte transactions in bitcoin core relay]</ref>.

==Specification==

This BIP disallows bitcoin transactions that serialize to 64 bytes in length without their witness.

==Rationale==

=== Block malleability ===

64 byte transactions introduce block malleability. Malicious peers can construct consensus-valid or consensus-invalid 64 byte
transactions whose serialization matches the concatenation of 2 nodes in the merkle tree.

Assume we have a valid bitcoin block with 2 transactions in it - T<sub>0</sub> and T<sub>1</sub>.
The merkle root for this block is H(T<sub>0</sub>||T<sub>1</sub>).
A user could find a malicious 64 byte transaction T<sub>m</sub> that serializes to T<sub>0</sub>||T<sub>1</sub>.
Next the malicious user relays the block containing the malicious T<sub>m</sub> rather than the
valid bitcoin transactions T<sub>0</sub> and T<sub>1</sub>.

==== Block malleability with consensus INVALID transactions ====

The peer receiving the malicious block marks the block as invalid, as T<sub>m</sub>
is not a valid transaction according to network consensus rules.
Other peers on the network receive the valid block containing T<sub>0</sub> and T<sub>1</sub>
and add the block to their blockchain. Peers that receive the invalid block before the valid block
will never come to consensus with their peers, due to the malicious user finding a collision
within the block's merkle root. Finding this collision requires approximately 22 bits worth of work<ref>[https://github.com/Christewart/bips/blob/2024-12-20-64bytetxs/bip-XXXX/2-BitcoinMerkle.pdf to produce a block that has a Merkle
root which is a hash of a 64-byte quantity that deserializes validly, it’s enough
to just do 8 bits of work to find a workable coinbase (which will hash to the first
32 bytes), plus another ≈22 bits of work ((1/5) ∗224, so slightly less) to find
a workable second transaction which will hash to the second 32 bytes) – a very
small amount of computation.]</ref>

This attack vector was fixed in 0.6.2<ref>[https://bitcoin.org/en/alert/2012-05-14-dos#risks CVE-2012-2459]</ref>, re-introduced in 0.13.x<ref>[https://github.com/bitcoin/bitcoin/pull/7225 #7225]</ref> and patched again in
0.14<ref>[https://github.com/bitcoin/bitcoin/pull/9765 #9765]</ref> of bitcoin core.

==== Block malleability with consensus VALID transactions ====

Producing a valid bitcoin transaction T<sub>m</sub> that adheres to network consensus
rules requires 224 bits of work<ref>[https://github.com/Christewart/bips/blob/2024-12-20-64bytetxs/bip-XXXX/2-BitcoinMerkle.pdf Note that the first transaction in a block must be a coinbase, and as discussed
above, that largely constrains the first 32 bytes of the first transaction: only
the 4 version bytes are unconstrained. So it would take at least 28*8= 224 bits
of work to find the first node in a given row of the tree that would match the
first half of a coinbase, in addition to the amount of work required to grind the
second half of the transaction to something meaningful (which is much easier –
only 16 bytes or so are constrained, so approximately 128 bits of work to find a collision). Of course, any of the rows in the Merkle tree could be used, but it nevertheless seems clear that this should be computationally infeasible.]</ref>.
This is computationally and financially expensive but theoretically possible. This can lead to a persistent chain split on the network.

=== Attack on SPV clients ===

BIP37<ref>[https://github.com/bitcoin/bips/blob/master/bip-0037.mediawiki BIP37]</ref> provides a partial merkle tree format<ref>[https://github.com/bitcoin/bips/blob/master/bip-0037.mediawiki#user-content-Partial_Merkle_branch_format Partial Merkle Tree Format]</ref>
that allows you to verify that your bitcoin transaction is included in a merkle root embedded in a bitcoin block header.
Notably, this format does not commit to the depth of the merkle tree.

Suppose a (valid) 64-byte transaction T is included in a block with the property that the second 32 bytes (which
are less constrained than the first 32 bytes) are constructed so that they collide
with the hash of some other fake, invalid transaction F. The attacker can fool the SPV client into believing that F
was included in a bitcoin block rather than T with 81 bits<ref>[https://github.com/Christewart/bips/blob/2024-12-20-64bytetxs/bip-XXXX/2-BitcoinMerkle.pdf An attacker who can do 81 bits of work (followed by another 40 bits of work, to
construct the funding transaction whose coins will be spent by this one) is able
to fool an SPV client in this way.]</ref> of work. Disallowing 64 byte transactions also reduces the implementation complexity of SPV wallets<ref>[https://delvingbitcoin.org/t/great-consensus-cleanup-revival/710/43 The steps needed to make sure a merkle proof for a transaction is secure.]</ref>.

This attack could be mitigated by knowing the depth of the merkle tree. Requiring SPV clients to also request the coinbase transaction and its merkle path would reveal that depth.
To produce a valid coinbase transaction at the same depth that our fake transaction F occurs at would require 224 bits of work.
As mentioned above, this is computationally and financially expensive, but theoretically possible.

==Backward compatibility==

Five 64 byte transactions have occurred in the bitcoin blockchain as of this
writing<ref>[https://github.com/Christewart/bips/blob/2024-12-20-64bytetxs/64byte-tx-mainnet.txt 64 byte transactions in the bitcoin blockchain]</ref>,
with the last one, 7f2efc6546011ad3227b2da678be0d30c7f4b08e2ce57b5edadd437f9e27a612<ref>[https://mempool.space/tx/7f2efc6546011ad3227b2da678be0d30c7f4b08e2ce57b5edadd437f9e27a612 Last 64 byte transaction in the bitcoin blockchain]</ref>,
occurring at block height 419,606<ref>[https://mempool.space/block/000000000000000000308f1efc24419f34a3bafcc2b53c32dd57e4502865fd84 Block 419,606]</ref>.

TODO

==Reference implementation==

<source lang="cpp">
/**
 * We want to enforce certain rules (specifically the 64-byte transaction check)
 * before we call CheckBlock to check the merkle root. This allows us to enforce
 * malleability checks which may interact with other CheckBlock checks.
 * This is currently called both in AcceptBlock prior to writing the block to
 * disk and in ConnectBlock.
 * Note that this is called before merkle-tree checks, so it must never return
 * a non-malleable error condition.
 */
static bool ContextualBlockPreCheck(const CBlock& block, BlockValidationState& state, const ChainstateManager& chainman, const CBlockIndex* pindexPrev)
{
    if (DeploymentActiveAfter(pindexPrev, chainman, Consensus::DEPLOYMENT_64BYTETX)) {
      for (const auto& tx : block.vtx) {
            if (::GetSerializeSize(TX_NO_WITNESS(tx)) == 64) {
                return state.Invalid(BlockValidationResult::BLOCK_MUTATED, "64-byte-transaction", strprintf("size of tx %s without witness is 64 bytes", tx->GetHash().ToString()));
            }
        }
    }

    return true;
}
</source>

https://github.com/bitcoin-inquisition/bitcoin/pull/24/files

==References==

<references />

==Copyright==
This BIP is licensed under the [https://opensource.org/license/BSD-3-Clause BSD-3-Clause License].

==Acknowledgements==

Suhas Daftuar, AJ Towns, Sergio Demian Lerner, Greg Maxwell, Matt Corallo, Antoine Poinsot, Dave Harding and Erik Voskuil


Antoine Riard

Feb 12, 2025, 12:05:53 AM
to Bitcoin Development Mailing List
Hi Darosior,

> I believe it is important to bundle all fixes together to make up for the substantial fixed cost of deploying a soft fork. It also seems absurd to deploy a soft fork aimed at patching security bugs, but only fix some of them and leave the protocol partly vulnerable. While it is technically possible, it is not something i want to encourage.

I don't wish to be dogmatic here, though I believe we have 2 distinct things. There is (a) having 4 distinct BIPs, one for each fix ("timewarp"/"worst-block-time"/"merkle-tree-weakness"/"enhanced-duplicated-txn"), and there is (b) bundling all the fixes together, as of course there is a substantial ecosystem coordination cost to deploying a soft fork, even with BIP9. Getting 4 distinct BIPs is of course a bit more work for the soft fork authors and champions, though I think it avoids the too-beefy, ill-written BIPs we have already had, and undocumented future rules in the script interpreter code like the SIGPUSHONLY check I was pointing out.

Apart from this editorial good-practice motivation, it also makes it simpler to deploy the fixes one by one (of course you can always hack in activation code, even with a single BIP), in the pessimistic scenario where in the future there is no consensus on all the fixes, but only on a subset of the 4. In that optic, yes, if we can get 3 of the 4 fixes in a bundled soft fork and the remaining one at a later time, it would already be a win to reduce the protocol's vulnerability exposure.

Unlike the Taproot patchset, each proposed soft-fork fix here is supposed to address a vulnerability of its own, and there is almost no technical coupling between them (well, one could argue there is some between the timewarp fix and the worst-block-validation-time fix, if you consider the maximum DoS surface of a full node under wall clock time). I don't think it's a question of what we would wish for as the "ideal" deployment of this set of consensus rule changes: if there are good technical reasons to object to 1 fix and no consensus on it, this shouldn't prevent us from being realistic and deploying all the remaining ones for which there is consensus.


> Regarding the confiscation surface, please note the specific concerns raised about the 2019 proposal do not apply to the fix proposed here. The new approach to mitigating the worst case validation time is extremely conservative in this regard: no opcodes or other Script functionality get disabled. Only a limit is introduced at the transaction level, which allows to pinpoint exactly the harmful behaviour without rendering any innocent transaction invalid.

I'm not sure if there is already code, or even a BIP, for the "worst-block-validation-time" fix, though it's unclear to me if the limit aims to apply only to UTXOs with legacy scripts created after the activation (as I think the proposal of a few months ago was laying out) or also to those created before it. If the latter, with a "retro-active" confiscation surface, I think it's akin to the specific concerns raised in 2019, and if it's something like that we should be
very careful in the design.


> As one would expect due to the larger design space available to fix this issue, this private thread is where most of the discussion would happen.

There is another point. I share the practice of not exposing all experimentation on the worst-block-validation-time, as you never know: it's the wild Internet and there can always be script kiddies around the corner of the block looking for unpatched vulnerabilities. On the other hand, we're proposing to change the "consensus" definition of people's money, so a bit more publicity in rationalizing why the changes are proposed would be welcome. For clarity, I have access to the thread, as do a bunch of other devs. It's always a (hard) question how much info you share about vulnerabilities in the process of fixing them.

Anyway, I'll go review the code and BIP(s) for all the fixes; the points raised above are just relevant to keep in mind imho.

Best,
Antoine R. (nah you're evil, i'm evil, we're all evil, btc is beyond good and evil)
OTS hash: 0334fb9d557426c4f7d71d3e99bb9badbfd87903

Peter Todd

Feb 14, 2025, 5:45:54 PM
to Antoine Riard, Bitcoin Development Mailing List
On Fri, Feb 07, 2025 at 05:02:46AM -0800, Antoine Riard wrote:
> This might be a stylistic note, though I could point in bitcoin core code
> today implemented
> check in the script interpreter right in the crux of consensus code paths
> that is just stale
> due to a never-activated BIP (-- yes I'm starring at you SIGPUSHONLY).

What specifically do you mean by this? You mean the
SCRIPT_ERR_SIG_PUSHONLY error condition?

--
https://petertodd.org 'peter'[:-1]@petertodd.org

Antoine Riard

Feb 16, 2025, 11:58:56 AM
to Bitcoin Development Mailing List
Hi Peter,

I'm talking about this check in VerifyScript as of commit 43e71f74 in bitcoin core.

```
    if ((flags & SCRIPT_VERIFY_SIGPUSHONLY) != 0 && !scriptSig.IsPushOnly()) {
        return set_error(serror, SCRIPT_ERR_SIG_PUSHONLY);
    }
```

In my understanding, we never set SCRIPT_VERIFY_SIGPUSHONLY, neither in MANDATORY_SCRIPT_VERIFY_FLAGS
nor in STANDARD_SCRIPT_VERIFY_FLAGS, and this sounds okay as it's a script check pertaining to BIP62
rule 2, and BIP62 was never activated. As far as I can tell, it's more of a stale check sitting right
there in the interpreter code paths.

We still return SCRIPT_ERR_SIG_PUSHONLY for P2SH spends, verifying the scriptSig is push-only.

All the unit tests (i.e. `script_tests.cpp`) manually set the SCRIPT_VERIFY_SIGPUSHONLY flag
to verify the logic's correctness, even though it appears never to be set for block validation.

The original PR is there: https://github.com/bitcoin/bitcoin/pull/5065

Feel free to point me out if I'm missing something obvious here.

Best,
Antoine
OTS hash: 42e2e614fea49ec876539e28b323718df3ef734b3a4b247fcc649f0704ea1b61

Matt Corallo

Feb 21, 2025, 10:18:31 AM
to Antoine Poinsot, bitco...@googlegroups.com
In the delving post you said “This provides a 40x decrease in the worst case validation time with a straightforward and flexible rule minimizing the confiscatory surface. A further 7x decrease is possible by combining it with another rule, which is in my opinion not worth the additional confiscatory surface.”

Can you put numbers to this? How long does it take to validate a full block with this 40x decrease and how long would it take with the further 7x decrease?

A 40x decrease to a validation time of 30 seconds probably is worth a bit of risk for a further improvement. A 40x decrease to 1 second is obviously fine :).


Antoine Poinsot

Feb 23, 2025, 11:53:10 PM
to Matt Corallo, bitco...@googlegroups.com
> A 40x decrease to a validation time of 30 seconds probably is worth a bit of risk for a further improvement.

Depends on what hardware. I don't think a worst case of 30 seconds for end users is worth more risks. A worst case of 30 seconds for miners would probably be concerning.

In addition, although the worst case is important to limit the damage an attacker can do without being economically rational, what we care about for miners attacking each other is economically viable attacks. At the moment the optimal "validation time / cost to attacker" ratio is not the worst case, by a large margin [0].

I believe we should take into account the worst case to miners even if not economically viable for an attacker as a safety margin. But we should keep in mind this is already overestimating the attack risk since in this scenario what we should look at is the worst case validation time of blocks that may have positive returns to an attacker.

> Can you put numbers to this?

Sure. Using my functional test which runs the worst case today and the worst case under various mitigations [1] on a Dell XPS 15 9520 laptop (the model with an i5-12500H), i get 120 seconds for the worst case today and 10 seconds for the worst block with a limit of 2500 (potentially) executed sigops per transaction. To impose such a validation time on other miners (maybe less, as they could be running more powerful machines than my laptop) an attacker would have to invest 89 (!!) preparation blocks. Sure, with low fees the opportunity cost of mining preparation transactions is not as high as it could be. But still.

Peter Todd

Feb 26, 2025, 12:29:23 PM
to Antoine Riard, Bitcoin Development Mailing List
On Sat, Feb 15, 2025 at 01:13:24PM -0800, Antoine Riard wrote:
> Hi Peter,
>
> I'm talking about this check in VerifyScript as of commit 43e71f74 in
> bitcoin core.
>
> ```
> if ((flags & SCRIPT_VERIFY_SIGPUSHONLY) != 0 &&
> !scriptSig.IsPushOnly()) {
> return set_error(serror, SCRIPT_ERR_SIG_PUSHONLY);
> }
> ```
>
> In my understanding, we never set SCRIPT_VERIFY_SIGPUSHONLY, neither in
> MANDATORY_SCRIPT_VERIFY_FLAGS,
> nor in STANDARD_SCRIPT_VERIFY_FLAGS, and this sounds okay as it's a script
> check pertaining to BIP62
> rule 2, and BIP62 was never activated. As far as I can tell, that's more a
> stale check just right
> there in the interpreter code paths.

Right. So the unused code is just those three lines and the single line
defining SCRIPT_VERIFY_SIGPUSHONLY in script (plus four lines of test
code); IsPushOnly() itself *is* used elsewhere in consensus.

You could open a pull-req to remove that if you want. But the tests of
SCRIPT_VERIFY_SIGPUSHONLY indirectly test IsPushOnly(), so it's not
immediately clear whether that's actually a good idea.

Antoine Poinsot

Feb 28, 2025, 2:41:07 AM
to Matt Corallo, bitco...@googlegroups.com
> Okay! That's a much better outcome than I was thinking. I assume this was with parallelization
> across the 4+8 available cores?

Yes, this is using all 16-1 threads.

> Do you happen to have numbers handy for the worst-case with, for
> example, one block of prep?

The validation cost with the mitigation is proportional to the number of preparation blocks [0]. So with a single preparation block the validation time should be about 112 ms (roughly 10 s / 89).

As another point of comparison, the worst case using Taproot presents about the same validation cost as the legacy worst case under my proposed mitigation with about 6 or 7 preparation blocks. Except that Taproot's worst case does not need any preparation blocks, and its operations cannot be parallelized.

Antoine

[0] See the graph in this message in the private thread: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711/70.




Matt Corallo

Feb 28, 2025, 2:41:07 AM
to Antoine Poinsot, bitco...@googlegroups.com


On 2/23/25 5:35 PM, Antoine Poinsot wrote:
> A 40x decrease to a validation time of 30 seconds probably is worth a bit of risk for a further
> improvement.
>
>
> Depends on what hardware. I don't think a worst case of 30 seconds for end users is worth more
> risks. A worst case of 30 seconds for miners would probably be concerning.

Sure, I'm not super concerned with an RPi. I am concerned with relatively cheap miner hardware, tho.

> In addition, although the worst case is important to limit the damage an attacker can do without
> being economically rational, what we care about for miners attacking each other is economically
> viable attacks. At the moment the optimal "validation time / cost to attacker" ratio is not the
> worst case, by a large margin [0].
>
> I believe we should take into account the worst case to miners even if not economically viable for
> an attacker as a safety margin. But we should keep in mind this is already overestimating the attack
> risk since in this scenario what we should look at is the worst case validation time of blocks that
> may have positive returns to an attacker.

Fair enough.

> Can you put numbers to this?
>
>
> Sure. Using my functional test which runs the worst case today and the worst case under various
> mitigations [1] on a Dell XPS 15 9520 laptop (the model with an i5-12500H) i get 120 seconds for the
> worst case today and 10 seconds for the worst block with a limit of 2500 (potentially) executed
> sigops per transaction. To (maybe, they could be running more powerful machines than my laptop)
> impose such a validation time to other miners an attacker would have to invest 89 (!!) preparation
> blocks. Sure, with low fees the opportunity cost of mining preparation transactions is not as high
> as it could be. But still.

Okay! That's a much better outcome than I was thinking. I assume this was with parallelization
across the 4+8 available cores? Do you happen to have numbers handy for the worst-case with, for
example, one block of prep?

Matt

> [0] https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711/70 <https://
> delvingbitcoin.org/t/worst-block-validation-time-inquiry/711/70?u=antoinep>
> [1] https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711/61 <https://
> delvingbitcoin.org/t/worst-block-validation-time-inquiry/711/61?u=antoinep>

Greg Sanders

Mar 7, 2025, 10:25:05 PM
to Bitcoin Development Mailing List
I opened an issue a while ago on this topic: https://github.com/bitcoin/bitcoin/issues/26113