What's a good stopping point? Making the case for the capabilities enabled by CTV+CSFS


Antoine Poinsot

Jun 23, 2025, 9:26:20 AM
to Bitcoin Development Mailing List
There is excitement in the technical community about a CTV+CSFS soft fork as specified in BIP119 and BIP348. We think
this combination of opcodes offers desirable capabilities to scale Bitcoin payments and is worth considering to soft
fork into Bitcoin. There have been several objections to this proposal, which we can group into three categories:
exploration of alternatives, demonstration of usage, and design of the operations to achieve these capabilities.

In an attempt to help the proposal move forward we would like to kick off a discussion about the first category:
alternatives. We will start by making the case that the capabilities "commit to the transaction spending this output"
and "verify a BIP340 signature for an arbitrary message" are a good stopping point for a Bitcoin soft fork. We invite
anyone to share objections in reply to this thread so they can be addressed or inform a better course of action.

Let's keep the discussion focused on the capabilities, not the specific way of designing operations to achieve them. For
the sake of this discussion `OP_CTV` would be equivalent to `OP_TEMPLATEHASH` (push the template hash on the stack) as
the capability "commit to the transaction to spend an output". `OP_TXHASH` would be separate, as a "programmable
transaction introspection" capability.

The ability to commit to the exact transaction spending an output is useful to reduce interactivity in second-layer
protocols. For instance it can reduce roundtrips[^0] in the implementation of LN-Symmetry, or make receiving an Ark
"vtxo" non-interactive[^1]. Additionally, it enables significant optimizations[^2] in the implementation of Discreet Log
Contracts.
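For readers less familiar with the first capability, here is a toy model of "this output can only be spent by a transaction matching a pre-committed template". To be clear, this is an illustration of the capability only, not the BIP119 digest, which commits to more fields under a precise serialization; all names here are hypothetical.

```python
from hashlib import sha256

def template_hash(version, locktime, sequences, outputs):
    # Toy digest over the shape of the spending transaction. The real
    # BIP119 digest covers additional fields (scriptSigs, input index, ...)
    # with an exact serialization; this sketch only conveys the idea.
    h = sha256()
    h.update(version.to_bytes(4, "little"))
    h.update(locktime.to_bytes(4, "little"))
    for seq in sequences:
        h.update(seq.to_bytes(4, "little"))
    for value, script in outputs:
        h.update(value.to_bytes(8, "little"))
        h.update(script)
    return h.digest()

def can_spend(spending_tx, committed_hash):
    # Consensus-side check: the spending transaction must match the template.
    return template_hash(*spending_tx) == committed_hash

# An output committing to a template at creation time can be spent only by
# the exact pre-agreed transaction, with no further interaction needed:
committed = template_hash(2, 0, [0xFFFFFFFD], [(50_000, b"\x51")])
assert can_spend((2, 0, [0xFFFFFFFD], [(50_000, b"\x51")]), committed)
assert not can_spend((2, 0, [0xFFFFFFFD], [(40_000, b"\x51")]), committed)
```

This is what lets a second-layer participant receive funds non-interactively: the transaction tree is fixed by the commitment rather than by collecting counterparty signatures.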

The ability to verify a signature for an arbitrary message in Tapscript enables oracle attestations and a form of
delegation. Oracle attestations, for instance, significantly reduce[^3] the onchain footprint of BitVM. Reducing an
application's onchain footprint benefits all Bitcoin users by easing block space competition, and it's especially
important for applications that generate very large transactions, which could otherwise increase pressure toward mining
centralization[^4].
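The second capability is exactly BIP340 verification over an arbitrary message. For illustration only, here is a compact pure-Python sketch of it (naive affine arithmetic, not constant-time, no batch verification; real nodes use libsecp256k1). Signing is included only so the verifier can be exercised.

```python
from hashlib import sha256

# secp256k1 parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(a, b):
    # Affine point addition; None is the point at infinity.
    if a is None: return b
    if b is None: return a
    if a[0] == b[0] and (a[1] + b[1]) % P == 0: return None
    if a == b:
        lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
    else:
        lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
    x = (lam * lam - a[0] - b[0]) % P
    return x, (lam * (a[0] - x) - a[1]) % P

def mul(k, pt):
    # Double-and-add scalar multiplication (not constant-time).
    r = None
    while k:
        if k & 1: r = add(r, pt)
        pt = add(pt, pt)
        k >>= 1
    return r

def tagged(tag, data):
    t = sha256(tag.encode()).digest()
    return sha256(t + t + data).digest()

def lift_x(x):
    # Decode an x-only public key to the curve point with even y.
    if x >= P: return None
    y_sq = (pow(x, 3, P) + 7) % P
    y = pow(y_sq, (P + 1) // 4, P)
    if y * y % P != y_sq: return None
    return x, y if y % 2 == 0 else P - y

def sign(seckey, msg):
    # Deterministic BIP340 signing with all-zero aux randomness.
    d, (Px, Py) = seckey, mul(seckey, G)
    if Py % 2: d = N - d
    t = d ^ int.from_bytes(tagged("BIP0340/aux", bytes(32)), "big")
    k = int.from_bytes(tagged("BIP0340/nonce",
        t.to_bytes(32, "big") + Px.to_bytes(32, "big") + msg), "big") % N
    Rx, Ry = mul(k, G)
    if Ry % 2: k = N - k
    e = int.from_bytes(tagged("BIP0340/challenge",
        Rx.to_bytes(32, "big") + Px.to_bytes(32, "big") + msg), "big") % N
    return Rx.to_bytes(32, "big") + ((k + e * d) % N).to_bytes(32, "big")

def verify(pubkey_x, msg, sig):
    # The CSFS capability: verify a signature over an *arbitrary* message,
    # rather than only over the digest of the spending transaction.
    pt = lift_x(int.from_bytes(pubkey_x, "big"))
    r, s = int.from_bytes(sig[:32], "big"), int.from_bytes(sig[32:], "big")
    if pt is None or r >= P or s >= N: return False
    e = int.from_bytes(tagged("BIP0340/challenge",
        sig[:32] + pubkey_x + msg), "big") % N
    R = add(mul(s, G), mul(N - e, pt))
    return R is not None and R[1] % 2 == 0 and R[0] == r

msg = b"any message, not a transaction digest"
pub = mul(0x1234, G)[0].to_bytes(32, "big")
sig = sign(0x1234, msg)
assert verify(pub, msg, sig)
assert not verify(pub, b"another message", sig)
```

An oracle attestation is just such a signature over a message both parties agreed on in advance, which a Tapscript could then check directly on the stack.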

Together, these two features enable a third capability: rebindable transaction signatures. Rebindable signatures make
possible a new type of payment channels, LN-Symmetry ("Eltoo"), whose simplicity makes practical advanced constructs
such as multiparty channels. They also enable further interactivity reduction in second layer protocols, as illustrated
by the Ark variant "Erk"[^5] or the dramatic simplification[^6] they bring to upgrading today's Lightning (i.e. without
switching to LN-Symmetry) from HTLCs to PTLCs.

These capabilities are simple and modular. They are well understood and present a low risk of enabling unwanted
behaviour. They do not increase the cost of validation, have low implementation complexity and are unlikely to become
redundant, even if more powerful capabilities are added in the future. These capabilities improve existing
tried-and-proven protocols used daily by Bitcoin users, like the Lightning Network. They also make new ones possible
either at all or under realistic interactivity requirements. The newly enabled protocols take a similar approach to
existing Bitcoin scaling solutions, only with different tradeoffs not previously available. We can therefore reasonably
expect they won't introduce new systemic incentives, while broadening the range of supported use cases.

The first alternative approach to address is doing nothing. Doing nothing is *the* valid Schelling point in a system
where consensus changes must be agreed on by a supermajority of the economic activity, and ideally by all stakeholders
in the system. Unless there is a critical vulnerability being fixed, the onus is on the proposer to overcome the various
valid objections. Further, a number of smart contracts have been deployed on Bitcoin and more are incoming, with or
without consensus changes. No soft forks on the horizon are known to generate asymptotic scaling, and what's more,
on-chain demand has not been high except at infrequent intervals.

As said prior, we believe the capabilities of CTV+CSFS reach an appropriate bar for consideration for activation. While
it will not achieve asymptotic scaling, it will enable significant reduction in complexity in already-deployed systems,
and is worth deploying for their specific benefits. Regardless, it's important to emphasize it again: the onus is on the
proposer to overcome objections.

Other alternative approaches to scaling Bitcoin payments have been proposed, such as validation rollups[^7], enabled
by the ability to verify a zero-knowledge proof directly in Bitcoin Script. These rollups are trustless and could
achieve a modest, constant-factor throughput increase under realistic assumptions and transaction load. This approach, used on
altcoins but new to Bitcoin, has yet to reach consensus among the technical community and Bitcoin users more broadly.
The relative immaturity of many of the employed cryptosystems makes designing a ZKP-specific primitive a difficult task.
Further, trustless composability with interactive protocols like LN to achieve further scaling is speculative at this
time. Nonetheless, the capabilities that enable this alternative approach to scaling are neither exclusionary nor
redundant with those proposed here.

It makes sense to focus first on capabilities improving the tried-and-proven approach, as the newer approach
(and the capabilities enabling it) may come with different tradeoffs.

Yet another alternative is a set of more powerful capabilities, enabling the use cases that "commit to next transaction"
and "verify a BIP340 signature for an arbitrary message" enable and more. For instance replacing "commit to the exact
transaction which must spend this output" with "programmable introspection on the spending transaction's fields" has
been considered. However this approach increases implementation complexity and broadens the risk surface[^8], which
warrants a compelling demonstration that arbitrary transaction introspection does enable important use cases not
achievable with more minimal capabilities.

Finally, a more radical alternative is to focus efforts to make Bitcoin smart contracts more capable with a sane
re-design of its scripting language. Similarly to the alternative of more powerful Bitcoin Script capabilities, it
remains to be shown that more expressivity is both safe and desirable. Furthermore, it is unclear that such an
undertaking would be achievable with the (very) limited engineering resources currently allocated to extending Bitcoin
scripting capabilities.

In conclusion, we believe the bundle of capabilities "commitment to the transaction spending an output" and "BIP340
signature verification of an arbitrary message" to be the best direction for Bitcoin to take. These are well-understood,
simple capabilities, substantially improving an existing well-understood approach to scaling Bitcoin payments. This
direction does not preclude research into more advanced capabilities, though questions remain about their overall
desirability.

Antoine Poinsot & Greg Sanders

[^0]: https://delvingbitcoin.org/t/ln-symmetry-project-recap/359
[^1]: https://delvingbitcoin.org/t/the-ark-case-for-ctv/1528
[^2]: https://gnusha.org/pi/bitcoindev/CAH5Bsr2vxL3FWXnJTszMQj83...@mail.gmail.com
[^3]: https://delvingbitcoin.org/t/how-ctv-csfs-improves-bitvm-bridges/1591
[^4]: https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8
[^5]: https://delvingbitcoin.org/t/evolving-the-ark-protocol-using-ctv-and-csfs/1602
[^6]: https://delvingbitcoin.org/t/ctv-csfs-can-we-reach-consensus-on-a-first-step-towards-covenants/1509/18
[^7]: https://github.com/john-light/validity-rollups
[^8]: https://bluematt.bitcoin.ninja/2024/04/16/stop-calling-it-mev

Harsha Goli

Jun 24, 2025, 11:12:58 AM
to Bitcoin Development Mailing List
> We will start by making the case that the capabilities "commit to the transaction spending this output"
> and "verify a BIP340 signature for an arbitrary message" are a good stopping point for a Bitcoin soft fork. We invite
> anyone to share objections in reply to this thread so they can be addressed or inform a better course of action.

We can consider narrower commitment combinations once we establish that broad commitments are a success for bitcoin. I reject the assumption some have that narrower commitment combinations might have more market demand than broader combinations, and that such assumptions might be an adequate reason to move forward with alternative proposals.

Thanks,
Harsha, sometimes known as arshbot

Matt Corallo

Jun 24, 2025, 12:00:56 PM
to Antoine Poinsot, Bitcoin Development Mailing List
Thanks, responding to one specific point:

On 6/23/25 9:14 AM, 'Antoine Poinsot' via Bitcoin Development Mailing List wrote:
> Yet another alternative is a set of more powerful capabilities, enabling the use cases that "commit to next transaction"
> and "verify a BIP340 signature for an arbitrary message" enable and more. For instance replacing "commit to the exact
> transaction which must spend this output" with "programmable introspection on the spending transaction's fields" has
> been considered. However this approach increases implementation complexity and broadens the risk surface[^8]

Responded to below [1]

> which
> warrants a compelling demonstration that arbitrary transaction introspection does enable important use cases not
> achievable with more minimal capabilities.

I'm somewhat skeptical that showing this isn't rather simple, though I admit I've spent less time
thinking about these concepts. ISTM even something as simple as a rate-limit requires more
full-featured introspection than only "commit to the exact next transaction" can provide. For
example, a trivial construction would be something which requires that transactions spending an
output have an output which claims at least Amount - Rate, which requires both more full-featured
introspection as well as a bit of math. Given one of the loudest groups advocating for the
additional features of CTV+CSFS are enterprise or large-value personal custody providers, it seems
somewhat of a loss to not work our way towards really basic features for this use-case.
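Concretely, the rate-limit rule could be sketched as follows. This is a toy predicate only; `rate_limit_ok` and its data model are hypothetical, not an actual opcode, and the point is precisely that it needs output-amount introspection plus arithmetic, which a bare "next transaction" commitment cannot express:

```python
def rate_limit_ok(prevout_amount, rate, spending_outputs, own_script):
    # A spend is valid only if at least (amount - rate) is sent back to
    # the same script, so at most `rate` satoshis can leave per transaction.
    recursed = sum(v for v, script in spending_outputs if script == own_script)
    return recursed >= prevout_amount - rate

vault = b"<vault script>"
# Withdrawing exactly the allowed rate is fine...
assert rate_limit_ok(100_000, 10_000, [(90_000, vault), (10_000, b"<dest>")], vault)
# ...but withdrawing half the balance at once is rejected.
assert not rate_limit_ok(100_000, 10_000, [(50_000, vault), (50_000, b"<dest>")], vault)
```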

More generally, more full-featured introspection like TXHASH provides a lot of flexibility in the
constructs people can build. For example, allowing BYO fees in the form of an additional input +
output in a transaction, rather than fixing an anchor output in the fixed "next transaction"
commitment to allow for fees (and then requiring the same additional input + output later). There are
also open questions as to the incentive-compatibility of anchors in a world with expensive block
space, as OOB fees become much cheaper.

Indeed, ISTM many use-cases for a construction like TXHASH become a lot more substantial with Math
(though, again, I spend less time thinking about the use-cases of these things than most, so I'm
sure others have more examples), so I'm quite skeptical that *just* looking at an individual fork on
its own is the right benchmark. Sure, functionality in proposed changes to Bitcoin's consensus needs
to be well-justified, but it doesn't need to be well-justified *purely on its own*. We add things
like OP_SUCCESS opcodes in soft forks specifically to expand the set of things we can do later, not
specifically in this fork.

If we assume that we end up wanting things like velocity limits (which I imagine we would?) then it
seems to me we should do a logical fork that adds features today, but which will allow us to make
minimal extensions in the future to further expand its use-cases. Taking a more myopic view of
the present and ignoring the future results in us doing one thing today, then effectively replacing
it by adding more flexibility in a new opcode later, subsuming the features of what we do
today. I don't see how this results in a net reduction in risk to Bitcoin; rather, it just means more
total work and more cruft in Bitcoin's consensus.

[1]

Responding to the MEVil question OOO because I think the above should go first :).

Indeed, more flexible introspection provides for a difference in risk to the system (though it's
worth noting we cannot both argue that there is no "demonstrated utility" *and* that the utility of
a change is so substantially higher that it adds material risk to the system in the form of MEVil
from its use-cases). However, given the uses of the Bitcoin chain today, it seems entirely possible
(assuming sufficient adoption) that we end up with a substantial MEVil risk with or without any
functionality expansion. This mandates a response from the Bitcoin development community in either
case, and I'm confident that response can happen faster than any reasonable soft fork timeline.

While it's possible that existing CSV-based MEVil risk never grows beyond its current anemic state
(due to preferences for stronger trust models from their users), and that there's a particularly
clever design using expanded introspection that improves the trust model such that suddenly
CSV-based protocol use explodes, ISTM given the risk and the need to mitigate it on its own, taking
decisions that are sub-optimal for Bitcoin's consensus on this basis isn't accomplishing much and
has real costs.

Matt

Antoine Poinsot

Jun 25, 2025, 12:54:45 PM
to Matt Corallo, Bitcoin Development Mailing List
Thanks for sharing some objections. Here is my attempt at addressing them.

> ISTM even something as simple as a rate-limit requires more full-featured introspection than only
> "commit to the exact next transaction" can provide. For example, a trivial construction would be
> something which requires that transactions spending an output have an output which claims at least
> Amount - Rate, which requires both more full-featured introspection as well as a bit of math.

Yes. These capabilities are really only useful for reducing interactivity in second-layer protocols.
They do not (reasonably) help for vaults and we do not claim it as a use case.

Previous efforts to promote opcodes implementing those capabilities have unfortunately fallen into
the trap of overpromising and claiming use cases they do not really enable. But this should not make
us overlook what they are really good at: interactivity reduction.

Do you think that the use cases presented in OP, if demonstrated, are not enough to warrant soft
forking in a primitive enabling them?

> Given one of the loudest groups advocating for the additional features of CTV+CSFS are enterprise
> or large-value personal custody providers

Yes. We intentionally do not mention vaults as a use case in our steelman. We should not change
Bitcoin on the basis of misleading use cases. If people are interested in vaults, they should
sponsor efforts on a different set of capabilities. Probably "programmable forwarding of value from
inputs to outputs", "programmable forwarding of spending conditions from inputs to outputs" and maybe
"commit to the exact transaction spending an output" (or more powerful introspection).

The lack of a similar enthusiasm for proposals enabling most or all of these functionalities (like
Salvatore Ingala's BIP334 or previously James O'Beirne's and Greg Sanders' BIP345) suggests such
loud supporters are in fact in favour of "just doing something" and rationalize nice-sounding use cases
backward from this proposal (but vaults!) because it appears to them to be "further down the road".
I think this view is very dangerous and is part of our motivation for redirecting discussions toward
what these capabilities actually enable.

> it seems somewhat of a loss to not work our way towards really basic features for this use-case.

I personally grew more skeptical of the reactive security model of vaults after working on it. Your
mileage may vary and that's fine if people want to work on capabilities that actually enable vaults,
but i don't think they should be a required use case and block introducing primitives that
substantially improve existing layer 2s and make new ones possible.

> Indeed, ISTM many use-cases for a construction like TXHASH become a lot more substantial with Math
> [...], I'm quite skeptical that *just* looking at an individual fork on its own is the right
> benchmark.

I agree that modularity and forward composability with potential future upgrades are arguments in
favour of a more flexible approach. But those need to be balanced with the additional risk and
implementation complexity such an approach entails. I'm happy to be convinced if supporters of this
approach demonstrate the added flexibility does enable important use cases and the risks associated
are manageable. But if nobody is interested in doing so, i don't think it's reasonable to hold off a
safer upgrade as long as it is demonstrated to provide substantial benefits.

It's also unclear that we should stop at "programmable introspection" in this case. Why not also
include "programmable amount / spending condition forwarding" too, which would give you vaults? Why
not even CAT, which is a neat tool in many situations and would let ZK people experiment with
validation rollups? And at this point, it would seem wiser to just work on a sane Bitcoin Script
replacement rather than throwing buckets of opcodes at it and hoping it holds off. Which as we say
in OP i don't think is realistic.

> I don't see how this results in a net reduction in risk to Bitcoin, rather just means more total
> work and more cruft in Bitcoin's consensus.

In theory, i agree. But by this token the same goes for every future extension that introduces more
expressivity to Bitcoin Script. This ties back to the stopping point: why add more cruft to the
existing interpreter when we could all focus on BTCLisp instead?

It's also the case that even if future extensions introduce a superset of the capabilities being
discussed, it's unlikely that such simple ones like "just commit to the next transaction" and
"verify a signature for an arbitrary message" would ever be made fully redundant.

Finally, when considering technical debt we should also weigh the actual cost of the implementation
of these simple capabilities. Signature verification on arbitrary message reuses existing signature
checking logic. Similarly, committing to the next transaction can heavily lean on existing Taproot
signature messages, only minimally departing when necessary, and be implemented in a strictly
simpler manner than the existing CTV proposal. A minimal implementation of these capabilities would
not introduce significant technical debt.

Interestingly, this argument applies more to introducing more involved capabilities like arbitrary
transaction introspection, because of the substantially larger technical debt it would impose to
first support in Bitcoin Script instead of focusing on a replacement with transaction introspection
from the get go.

> Indeed, more flexible introspection provides for a difference in risk to the system (though it's
> worth noting we cannot both argue that there is no "demonstrated utility" *and* that the utility
> of a change is so substantially higher that it adds material risk to the system in the form of
> MEVil from its use-cases).

Yes we can? It's reasonable to see how arbitrary introspection could be useful in various handwavy
ways, and therefore how it could be used for undesirable applications, while also not having any
important use case it enables clearly defined, much less demonstrated.

> However, given the uses of the Bitcoin chain today, it seems entirely possible (assuming
> sufficient adoption) that we end up with a substantial MEVil risk with or without any
> functionality expansion.

Sure, but I’d argue that the presence of risk now is a reason to be more cautious about adding to
it, rather than accepting it as inevitable.

Best,
Antoine

Chris Stewart

Jun 25, 2025, 4:23:09 PM
to Matt Corallo, Antoine Poinsot, Bitcoin Development Mailing List
> For example, a trivial construction would be something which requires that transactions spending an
> output have an output which claims at least Amount - Rate, which requires both more full-featured
> introspection as well as a bit of math.

I agree this could be a useful primitive in the Script language, which has been a motivating factor in my work on 64-bit arithmetic [0] and OP_{IN,OUT}_AMOUNT [1]. I've prototyped two vault-related opcodes—OP_VAULT and OP_CHECKCONTRACTVERIFY—using OP_{IN,OUT}_AMOUNT [2][3]. While the current proposal has some clear limitations (namely, what I refer to as “amount replay attacks”), I believe these can be mitigated via Taproot annex usage, as proposed by Antoine Poinsot [4]. That approach has not yet been prototyped, though.

That said, I don't see amount lock opcodes as being mutually exclusive with hash-based covenant or introspection opcodes. In fact, I suspect experimenting with the hash-based primitives will help reveal their limitations and inform better design decisions for the next generation of introspection opcodes to be considered for Bitcoin.

[0] – https://groups.google.com/g/bitcoindev/c/j1zEky-3QEE
[1] – https://delvingbitcoin.org/t/op-inout-amount/549/3?u=chris_stewart_5
[2] – https://delvingbitcoin.org/t/op-inout-amount/549/4?u=chris_stewart_5
[3] – https://delvingbitcoin.org/t/op-inout-amount/549/5?u=chris_stewart_5
[4] – https://delvingbitcoin.org/t/op-checkcontractverify-and-its-amount-semantic/1527/6?u=chris_stewart_5

Ethan Heilman

Jun 25, 2025, 5:28:02 PM
to Antoine Poinsot, Matt Corallo, Bitcoin Development Mailing List
> Why not even CAT, which is a neat tool in many situations and would let ZK people experiment with validation rollups? And at this point, it would seem wiser to just work on a sane Bitcoin Script replacement rather than throwing buckets of opcodes at it and hoping it holds off.

A Bitcoin Script replacement is likely to be a lot of work and many of
the features will be informed by what people are building at the time
it is designed. Merging a low implementation complexity opcode like
OP_CAT (BIP-347) would provide useful data to inform a Bitcoin Script
replacement.

In addition to allowing ZK people to experiment, OP_CAT also has many
simple use cases that improve everyday boring tapscripts.

> Sure, but I’d argue that the presence of risk now is a reason to be more cautious about adding to it, rather than accepting it as inevitable.

Addressing MEVil, if addressing MEVil is considered an important goal,
requires being proactive and enabling the ability to build better
protocols on Bitcoin. This is because, all things being equal,
building a protocol that doesn't care about MEVil resistance is easier
and requires less scripting functionality than one that does.

Josh Doman

Jun 26, 2025, 12:03:57 PM
to Bitcoin Development Mailing List
> The ability to commit to the exact transaction spending an output is useful to reduce interactivity in second-layer
> protocols.

Where do you stand on explicit commitments to neighbor prevouts? My understanding is that this capability would be extremely useful, for BitVM and for making offers to buy UTXOs. If the goal is to commit to the "exact transaction spending an output," CTV alone wouldn't quite get you there.

Greg Sanders

Jun 26, 2025, 1:15:29 PM
to Bitcoin Development Mailing List
Hi Josh,

It's definitely an interesting primitive, and imo would be best offered through an explicit method of committing to sibling prevouts. A bit of discussion that touched on TXHASH/bllsh/ancestry proofs happened on Delving a while ago: https://delvingbitcoin.org/t/how-ctv-csfs-improves-bitvm-bridges/1591/8 . I am interested in adding more compelling uses to more programmable versions of commitment hashes like TXHASH, because I feel the space has been very unexplored outside of single-tx exogenous-fee patterns and this one specific use. It's a pretty thin playbook for something that feels quite powerful.

As I've mentioned elsewhere, if it's a desired capability, we should actually validate a number of potential script enhancements against it rather than get tunnel vision on any specific proposal that just so happens to kinda-not-really do it.

Greg

Antoine Riard

Jul 1, 2025, 11:22:57 AM
to Bitcoin Development Mailing List
Hi,

I agree with Sanders and Poinsot on the perspective that enabling overly
powerful script primitives or capabilities could definitely increase the
risk surface, be it MEV-style in the open competition among miners themselves
or towards deployed second layers, e.g. Lightning.

I don't think CTV+CSFS lets you do TxWithhold or design "evil" smart
contracts [0] (well, I'm still a bit reserved about CSFS, as you can pass on
the stack a transaction-as-message which has spent _another_ utxo...).
And somehow the gradual approach to changing Bitcoin Script sounds wiser
than the drop-a-full-replacement approach. These days, people are
mentioning BTCLisp; a few years ago it was Simplicity; tomorrow it
will be another one... and it sounds like there is always a gap between the
properties of those new Script machines and what is effectively needed
to make second layers secure, let alone performant.

In my opinion, it would be wiser to put more thought into mechanisms
that would prevent adversarial MEV at the consensus level (e.g. prevent
a UTXO from inspecting the 2-of-2 funding UTXO of a Lightning channel to
attack it), rather than chasing hypothetical "just land 5000 LoC in
the consensus engine" kinds of changes as a new Script interpreter.
Given there is more and more work being done to enable performant
verification of "compute anything" on Bitcoin, we might have to be wary
of breaks in layer abstractions down the line (e.g. what if there were
sophisticated off-chain contracts among a majority of miners to prevent a
minority of miners from spending their coinbase utxos, stuff like that...?).

In the end, I still think it's better to prioritize, in terms of design,
review and testing, the set of fixes gathered in BIP54 over CTV+CSFS. While
in theory with BIP9 we can have multiple soft forks concurrently assigned
to different block nVersion bits, the supply of skilled eyes and hands to
review soft forks is not super elastic, and we can never exclude unfavorable
interactions being found between the two sets of changes. Unfavorable
interactions that would require fixing each in consequence of the other...

So, very personally, I favor the option of first activating as many fixes
as we can among the BIP54 set, then considering CTV alone or CTV+CSFS
together for activation. Once BIP54 and CTV+CSFS are technically ready,
having the two activate within an 18-month window sounds realistic given
what has been done historically.

Anyone is free to allocate their time as they wish in Bitcoin open source,
and anyone is free to put their name at the bottom of a letter or other signed
endorsement for all styles of "politics". But at some point, a bit of focus
and clarity on what is on the table in matters of consensus changes, and what
we all converge on as a community, would be very welcome...

Seriously reviewing and testing a consensus change doesn't happen overnight :(

Best,
Antoine
OTS hash: c94bf70c0cf2fae2d790184af5879dda5695a4d8f0c0ff7bf7bcb1a86a838a17

[0] https://blog.bitmex.com/txwithhold-smart-contracts/

Anthony Towns

Jul 3, 2025, 4:54:08 AM
to Matt Corallo, Antoine Poinsot, Bitcoin Development Mailing List
On Tue, Jun 24, 2025 at 11:54:02AM -0400, Matt Corallo wrote:
> > which
> > warrants a compelling demonstration that arbitrary transaction introspection
> > does enable important use cases not achievable with more minimal capabilities.
> I'm somewhat skeptical that showing this isn't rather simple,

I think the BitVM/CTV idea posted on delving [0] is one such simple demo?

I gave an example in that thread of how you'd implement the desired
construct using bllsh's introspection primitives, but the same could
equally well be done with Rusty's as-yet unpublished OP_TX, something
like:

DUP 0x1011 TX 0x00000002 EQUALVERIFY 0x1009 TX 0x0809 TX EQUALVERIFY

where:

* "0x1011 TX" pops an input index from the stack and gives the four-byte
vout index of that input's prevout
* "0x1009 TX" pops an input index from the stack and gives the txid of that input's
prevout
* "0x0809 TX" gives the txid of the current input's prevout

(this encodes "this utxo can only be spent (via this path) if its sibling
output at index 2 is also being spent in the same transaction")

Cheers,
aj

[0] https://delvingbitcoin.org/t/how-ctv-csfs-improves-bitvm-bridges/1591

Antoine Poinsot

Jul 4, 2025, 9:08:48 AM
to Anthony Towns, Matt Corallo, Bitcoin Development Mailing List
I agree the BitVM/CTV idea suggests inspection of other inputs can be useful for applications
leveraging connector outputs.

While it is potentially compelling, the BitVM use case was only briefly presented, with no
demonstration or even detailed description of how it would work in practice. This makes it hard to
assess the costs and benefits of this approach. Furthermore, it's hard to assess how much of an
improvement it brings to Bitcoin users as BitVM has yet to be delivered and see any meaningful
adoption.

As Greg responded when it was raised earlier in this thread[^0], as things stand today I don't think
this idea justifies the leap in expressivity.

Best,
Antoine

[^0]: https://gnusha.org/pi/bitcoindev/8d37b779-bf2e-4f63...@googlegroups.com

Josh Doman

Jul 9, 2025, 5:53:59 PM
to Bitcoin Development Mailing List
I tend to agree. It's hard to justify the leap in expressivity of OP_TX / OP_TXHASH solely on the basis of enabling commitments to sibling prevouts. A more targeted approach would be better.

In that vein, I think there's a way to use MuHash to generalize CTV / TEMPLATEHASH and commit to sibling prevouts in constant time.

The idea is to precompute a MuHash accumulator containing SHA256(index || prevout) for each input in the transaction.

Then, to compute the sibling commitment for input i, we simply copy the accumulator and remove the SHA256 hash for that input. Thanks to MuHash, this takes constant time. Finally, we include the sibling commitment in the existing proposed commitment scheme.

This would represent a low-cost way to commit to the next txid, providing predictability regardless of how many inputs are spent (unlike existing proposals). Given that MuHash is already in the codebase, I'm inclined to believe this wouldn't be a heavy lift and would better achieve the goal of a primitive that "commits to the next transaction."
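A rough sketch of this accumulator in Python may help make the constant-time removal concrete. This is a toy model of the idea, not Bitcoin Core's MuHash3072: it uses a small Mersenne prime for readability (Core's implementation works modulo 2^3072 - 1103717), and the element encoding is a stand-in assumption.

```python
# Toy MuHash-style multiplicative accumulator over indexed prevouts.
# Illustrative only: modulus and element encoding are simplified assumptions.
import hashlib

P = 2**127 - 1  # small Mersenne prime; real MuHash uses a 3072-bit modulus

def element(index: int, prevout: bytes) -> int:
    # Hash (index || prevout) and map it into the multiplicative group mod P.
    h = hashlib.sha256(index.to_bytes(4, "little") + prevout).digest()
    return int.from_bytes(h, "little") % P

def accumulate(prevouts) -> int:
    # O(n) precomputation over all inputs of the transaction.
    acc = 1
    for i, po in enumerate(prevouts):
        acc = (acc * element(i, po)) % P
    return acc

def remove(acc: int, index: int, prevout: bytes) -> int:
    # O(1) removal: multiply by the modular inverse of the element.
    # Since P is prime, every nonzero element is invertible.
    return (acc * pow(element(index, prevout), -1, P)) % P
```

The sibling commitment for input i is then `remove(accumulate(prevouts), i, prevouts[i])`, which equals the product of the elements for every other input, without rehashing them per input.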

Thoughts?

Best,
Josh

Greg Sanders

Jul 10, 2025, 8:10:47 AM
to Josh Doman, Bitcoin Development Mailing List
Hi Josh,

For one, MuHash doesn't have a compact membership proof, making it unlikely to be useful for anything we're likely thinking of. It's used in Bitcoin Core to check equivalence of UTXO sets in snapshots. To validate membership, the entire population has to be iterated.

Best,
Greg


Josh Doman

Jul 10, 2025, 10:42:53 AM
to Bitcoin Development Mailing List
Hi Greg,

I'm not sure I quite follow the membership proof concern. The reason to use MuHash is to avoid quadratic hashing, by only needing to iterate through the input set once. Our goal is simply to prove that an indexed set of sibling prevouts is committed to.

In the naive implementation, validating a sibling commitment requires hashing all other prevouts in the transaction. In the worst case, this is O(n^2) if we need to validate a sibling commitment for each input.

With MuHash, this becomes O(n) because we can validate sibling commitments by precomputing a hash over all prevouts and then selectively removing one prevout, which is O(1). This gives us the same result as directly hashing the sibling prevouts.

Does this address your concern?

Best,
Josh