BIP54 implementation and test vectors

Antoine Poinsot

Oct 21, 2025, 1:17:21 PM
to Bitcoin Development Mailing List
Hi everyone,

I'd like to give an update on my Consensus Cleanup work, now BIP54.

I opened an implementation against Bitcoin Inquisition v29.1 at [0]. It contains extensive testing
of each of the four proposed mitigations, and was used as a basis to generate test vectors for
BIP54. I opened a PR against the BIPs repository to add them to BIP54 [1].

The test vectors for the transaction-level sigops limit contain a wide variety of usage combinations
as well as ways of running into the limit. They also include some historical violations as well as
pathological transactions demonstrating the implementation details of the sigop accounting logic
(which was itself borrowed from that of BIP16, which all Bitcoin implementations presumably already
have).
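For reference, the BIP16-style "accurate" accounting mentioned above can be sketched as follows. This is an illustrative simplification (standard script opcode values, a bare script walk with no execution context), not the BIP54 reference implementation: CHECKSIG-type opcodes count as one sigop, and CHECKMULTISIG-type opcodes count as the preceding OP_1..OP_16 value when present, else as the 20-key maximum.

```python
# Illustrative sketch of BIP16-style "accurate" sigop counting.
# Opcode values are the standard Bitcoin script ones.
OP_PUSHDATA1, OP_PUSHDATA2, OP_PUSHDATA4 = 0x4C, 0x4D, 0x4E
OP_1, OP_16 = 0x51, 0x60
OP_CHECKSIG, OP_CHECKSIGVERIFY = 0xAC, 0xAD
OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY = 0xAE, 0xAF
MAX_PUBKEYS_PER_MULTISIG = 20

def count_sigops(script: bytes, accurate: bool = True) -> int:
    """Count sigops in a raw script. With `accurate` (BIP16-style), a
    CHECKMULTISIG preceded by OP_1..OP_16 counts as that many sigops,
    otherwise as the 20-key maximum."""
    count = 0
    last_opcode = None
    i = 0
    while i < len(script):
        op = script[i]
        i += 1
        if op <= 0x4B:                      # direct push of `op` bytes
            i += op
        elif op == OP_PUSHDATA1:            # 1-byte length prefix
            if i >= len(script):
                break
            i += 1 + script[i]
        elif op == OP_PUSHDATA2:            # 2-byte little-endian length
            if i + 2 > len(script):
                break
            i += 2 + int.from_bytes(script[i:i+2], "little")
        elif op == OP_PUSHDATA4:            # 4-byte little-endian length
            if i + 4 > len(script):
                break
            i += 4 + int.from_bytes(script[i:i+4], "little")
        elif op in (OP_CHECKSIG, OP_CHECKSIGVERIFY):
            count += 1
        elif op in (OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY):
            if accurate and last_opcode is not None and OP_1 <= last_opcode <= OP_16:
                count += last_opcode - OP_1 + 1
            else:
                count += MAX_PUBKEYS_PER_MULTISIG
        last_opcode = op
    return count
```

For instance, `OP_2 OP_2 OP_CHECKMULTISIG` counts as 2 sigops under accurate accounting, while a bare `OP_CHECKMULTISIG` counts as 20.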

The test vectors for the new witness-stripped transaction size restriction similarly exercise the
bounds of the check under various conditions (e.g. transactions with/without a witness). All
historical violations were also added to the test vectors, thanks to Chris Stewart for digging those
up.

Because the new timestamp restrictions are tailor-made to the mainnet difficulty adjustment
parameters, the test vectors for those contain a number of chains of mainnet headers (from genesis).
Each test case contains a full header chain and whether it is valid according to BIP54. These chains
were generated using a custom miner available in [2] and added to the implementation as a JSON data
file.

The test vectors for the coinbase restriction similarly include a chain of mainnet blocks, because
the timelock check is context-dependent. These were generated using a similar miner also available
at [2].

I'm seeking feedback on these test vectors from everybody but in particular developers of
alternative Bitcoin clients, as compatibility with other Bitcoin implementations than Bitcoin Core
was a design goal.

Best,
Antoine Poinsot

[0]: https://github.com/bitcoin-inquisition/bitcoin/pull/99
[1]: https://github.com/bitcoin/bips/pull/2015
[2]: https://github.com/darosior/bitcoin/commits/bip54_miner

Antoine Riard

Oct 27, 2025, 1:36:14 AM
to Bitcoin Development Mailing List

Hi Poinsot,

I've started reviewing the code branch on Inquisition, and while doing so I was thinking
specifically about the proposed 2500-sigop limit and how it weighs on the multi-dimensional matrix
of full-node performance (e.g. fees, CPU time, disk space, etc.).

Currently, in a simple model, a DoS adversary could construct a 1 MB (pre-segwit accounting)
transaction with 80_000 sigops spending from a single 1-sat UTXO. To validate it, a full node would
have to SHA256 the 1 MB transaction 80_000 times, hence the "bad" O(n^2) complexity.
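As a rough back-of-the-envelope illustration of the quadratic blow-up described above (figures taken from the example, so purely indicative):

```python
# Back-of-the-envelope cost of the legacy quadratic-hashing scenario above:
# each sigop re-hashes (roughly) the whole transaction under legacy
# SIGHASH_ALL, so the total bytes hashed grow quadratically with tx size.
TX_SIZE = 1_000_000   # bytes, the 1 MB pre-segwit transaction in the example
NUM_SIGOPS = 80_000   # sigop figure from the example above

total_hashed = TX_SIZE * NUM_SIGOPS
print(f"~{total_hashed / 1e9:.0f} GB hashed")  # ~80 GB hashed
```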

With the new per-transaction 2500-sigop limit, a simple DoS adversary could still construct 32
transactions of ~31_250 bytes each (still a 1 MB block), spending from _32_ 1-sat UTXOs. To
validate that block, a full node would have to fetch all 32 UTXOs.
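A quick sanity check of the arithmetic in this scenario, assuming a ~1 MB pre-segwit budget and the proposed per-transaction limit (illustrative figures only):

```python
# Sanity check of the multi-transaction scenario above: same total sigops as
# the single-tx example, split across the minimum number of transactions
# allowed by the proposed per-transaction limit.
BLOCK_BUDGET = 1_000_000   # bytes, pre-segwit budget
TOTAL_SIGOPS = 80_000      # same total as the single-tx example
PER_TX_LIMIT = 2_500       # proposed BIP54 per-transaction sigop limit

num_txs = TOTAL_SIGOPS // PER_TX_LIMIT   # transactions needed
per_tx_size = BLOCK_BUDGET // num_txs    # bytes available per transaction
print(num_txs, per_tx_size)              # 32 31250
```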

This is the 1-UTXO versus 32-UTXO trade-off I would like to draw attention to: granted, the O(n^2)
complexity is "bad", but what about the UTXO memory fetches if the entries are not in a high-level
cache and have to be fetched from RAM, or worse from disk? (An i7 machine has more RAM than the
current UTXO set; not every Raspberry Pi does.)

From the viewpoint of a defending full node, sure, I can limit the number of per-transaction
sigops. But if an adversary can achieve the same DoS efficiency now that more UTXOs have to be
fetched to validate the same per-block total number of sigops, it is legitimate to wonder about the
limit's effectiveness, and more interestingly whether a better value could be selected.

So that is my question about this 2500 proposal: have worst-case sample transactions binding
against the 2500 limit been crafted to maximize the number of UTXO fetches? One can hypothesize
that UTXO fetches are "free", but I don't think that's necessarily true, while on the other hand
modern ISAs have dedicated hashing instructions.

The current BIP54 text is a bit silent on whether full-node performance metrics like CPU cycles,
disk I/O operations, or bandwidth consumption were weighed in selecting the proposed 2500 value for
the limit. It would be a fair point to amend BIP54 to say that the new sigops limit is only aimed
at mitigating CPU DoS, and that other dimensions like memory management have not been empirically
observed to be degraded.

Best,
Antoine
OTS hash: 975674252060994d92eecd63a924e7530623ee737e33c5646d382f0f8c04ec74 

Antoine Poinsot

Oct 28, 2025, 6:06:28 AM
to Antoine Riard, Bitcoin Development Mailing List
Hi Riard,

Thanks for the feedback. I understand your point as asking about other costs besides quadratic
hashing, and how the BIP54 "potentially executed" sigop limit relates to those.

There are indeed several other expensive operations when validating a block: ECDSA signature
verifications, FindAndDelete's vector modifications, and prevout lookups. This last one is related
to the recently discussed limit on scriptPubKey sizes [0], as they present a constant factor
increase in the lookup cost that is not bounded by the size of the block being validated.

In any case, the cost of these other expensive operations is completely dwarfed by quadratic
hashing. The example you give is unfortunately far from the worst case. I added you to the
semi-private Delving thread [1], which contains the details of the calculations for various DoS
blocks. Feel free to reach out to me in private as well, if you prefer email to Delving.

Furthermore, exploiting the prevout lookup cost is highly uneconomical for a miner. It requires
over a hundred preparation blocks to create a single block that would take a couple dozen seconds
to validate on a modern machine. By comparison, without the BIP54 sigops limit the same effect can
be achieved with two orders of magnitude fewer preparation blocks, making the attack viable for a
revenue-maximizing miner. Without the BIP54 sigops limit, over a hundred preparation blocks also
gets you a block that takes over 10 minutes to validate on a modern machine.

Finally, besides having diminishing returns, these mitigations would also have a larger
confiscatory surface [2]. The BIP54 mitigation was chosen because it pinpoints exactly the harmful
behaviour we want to prevent, with only a minimal impact on potentially legitimate usage [3], while
making attacks by miners on their competition uneconomical and making the very worst case largely
uninteresting to an externally motivated attacker.

Best,
Antoine

[0]: https://gnusha.org/pi/bitcoindev/OAoV-Uev9IosyhtUCyeIhclsVq-xUBZgGFROALaCKZkEFRNWSqbfDsVyiXnZ8B1TxKpfxmaULuwe4WpGHLI_iMdvPr5B0gM0nDvlwrKjChc=@protonmail.com/
[1]: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
[2]: See for instance regarding the scriptPubKey size limit https://gnusha.org/pi/bitcoindev/CAAS2fgQEdVVcb=DfP7XoRxfXfq1unKBD...@mail.gmail.com/
[3]: If a miner wants to sweep more than 2500 P2PK UTXOs in a single non-standard transaction, they
now need to use more than one transaction, but they can still do it.

Antoine Riard

Nov 9, 2025, 8:48:50 PM
to Bitcoin Development Mailing List
Hi Poinsot,

Thanks for the clarification. Yes, my question is more: if you put yourself in an attacker's shoes
and have to calculate the cost of an attack, which is more interesting, playing on the number of
prevout lookups or maximizing quadratic hashing? I do believe the proposed 2500-sigop limit slashes
the quadratic-hashing worst-case concern while not handing the attacker an advantage on the prevout
lookup cost. Said differently, I believe we should ensure that any DoS limit introduced to reduce
the worst case of a DoS vector A does not worsen the worst case of another DoS vector B.

Previously, with the new limit as proposed in the abstract, I had a concern: in Bitcoin Core, for
example, inputs were checked multiple times (first with all script flags, then with only the
consensus-mandatory script flags) [0], so a DoS attacker could have deliberately made the script
fail on a policy flag and then made it hard-fail on the new 2500 limit, _at a cheaper price_ (fewer
CHECKMULTISIG bytes to pack into the transaction). I don't think this is a concern anymore: since
[1] and other changes, there is no double validation, and `CheckSigOpsBIP54` is implemented
alongside the other policy limits.

Of course, the number of CHECKMULTISIG bytes to pack only matters to an attacker when satoshis
have to be provided to pass the `min_relay_feerate` policy rule, but it is a realistic constraint
to reason about when considering the cost of a network-wide DoS. In effect, you are maximizing the
DoS cost per byte per satoshi you might have to commit in a single transaction.

I disagree with you on exploiting the prevout lookup cost, as I think there is at least one variant
that attempts to slash the attacker's cost for some categories of DoS. But yes, I have seen the
calculations for the various DoS blocks, and that can be discussed elsewhere.

Anyway, I finished a first round of review of the BIP54 test cases; a few vectors are probably
missing, e.g. for 64-byte transactions and for Taproot transactions with an empty or full annex.
I'll do more review rounds of them, in parallel with the bitcoin-inquisition branch.

Best,
Antoine
OTS hash: 223aeb5ea932ada762b2d4181b8430b7cbf937579eccd467769c0284276e9595

[0] https://github.com/bitcoin/bitcoin/pull/31097
[1] https://github.com/bitcoin/bitcoin/pull/32473