A safe way to remove objectionable content from the blockchain


Lazy Fair

Nov 20, 2025, 4:49:03 AM
to bitco...@googlegroups.com
I propose two changes to Bitcoin: one at the consensus level, and one at the client level. The purpose is to support filtering of objectionable content after the content has been mined, allowing each node operator to maintain only the data they find agreeable. In so doing, my hope is that we can satisfy all users and address their greatest concerns.

I do, however, acknowledge those people who want to stop miners from mining non-monetary transactions because of the data storage and processing costs, and I recognise that this proposal does nothing to address those concerns.

*** Motivation ***

You can't simply change or delete data in the blockchain, because a hash of everything in a block is included in the next block: change the data and you change the hash. The design presented here is an attempt at a compromise, where a person keeps all of the benefits of running a full node, including the integrity of the ledger, yet without storing the objectionable content - and, importantly, without even being able to recreate that content from the data they still have.

*** Preliminary ***

Objectionable content is defined here as whatever you want it to be, and two users don't have to share the same views. One person might object to copyrighted material used without permission, another to a negative depiction of the prophet Muhammad, and another to video of the sexual abuse of children. The design presented below lets each person decide what to remove for themselves (if anything), while those who want everything can still have it all.

The design lets a user remove any data, and deals with the impact on the matching of block hashes, data integrity and malleability. 

In the case of OP_RETURN data, the result should be no functional effect at all. Whether that's also possible for other data elements will depend on the semantics of that data.

*** Solution ***

This solution is based on two ideas, both aimed at maintaining data integrity through hashing, while removing some of the hash's input data stream.

*** First Idea ***

When performing a hash of some data (D), each chunk of data that's processed updates an internal state (S) of the hashing algorithm. If you know the internal state at point A and again at point B, you can compute the final hash of D without the data between A and B. That is the first idea. You run the hashing algorithm normally up to A, then replace the internal state S(A) with S(B), then continue hashing from B to the end of D.
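
As a minimal sketch of the idea in Python: hashlib's copy() snapshots the hash object's internal state mid-stream, standing in for a shared S(B). (Two caveats: hashlib cannot import a raw midstate received from the network, and raw SHA-256 midstates only exist at 64-byte block boundaries, so this merely illustrates the principle.)

```python
import hashlib

# Toy data D: a prefix to keep, a removable middle, and a suffix to keep.
data = b"prefix-bytes|" + b"OBJECTIONABLE" * 100 + b"|suffix-bytes"
B = len(b"prefix-bytes|") + len(b"OBJECTIONABLE") * 100  # end of removable span

# The final hash of D, as computed by anyone who still holds all of it.
full_hash = hashlib.sha256(data).hexdigest()

# A node that still holds the data captures the internal state S(B)...
h = hashlib.sha256()
h.update(data[:B])
s_b = h.copy()  # snapshot of the midstate at offset B

# ...and the midstate plus only the suffix reproduces the final hash,
# without ever touching the removable middle again. (In the full scheme
# the node would also check its retained prefix against a published S(A).)
s_b.update(data[B:])
assert s_b.hexdigest() == full_hash
```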

The hash still works as an integrity check for the data before A, and the data after B: change any of this, and the final hash will change. Now you can safely change or delete the data in between, without breaking the integrity of the blockchain and proof of work - but only if you can securely obtain S(A) and S(B), and only if you don't need the data between A and B for anything else.

The easiest way to obtain S(A) and S(B) is to calculate them yourself, but that requires holding the objectionable data, at least for a time - and first finding someone else who still holds it. But what if, instead, we could share S(A) and S(B) across the network, do it securely, and in a way where up to 100% of nodes could choose to drop the data in between, permanently, without breaking anything?

*** Second idea ***

It may seem like there is no one you can trust to tell you what S(A) and S(B) are. There is only one source of data that a Bitcoin node can trust, and that is the blockchain, as mined by miners, with the most proof of work, and verified locally. Therefore, the second idea is that S(A) and S(B) are trusted if (and only if) they are written into the blockchain, and verified by the network.

For example, we write data to the semantic effect of "In Transaction X: at byte offset A, the internal state of the hash function is S1; at byte offset B, the internal state of the hash function is S2." Miners then mine this statement into a block, and verifiers confirm that it is cryptographically accurate with respect to the data in Transaction X as described - or else they drop the new block as invalid.
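
A sketch of that verification rule, assuming a SHA-256 implementation that exposes its raw internal state (hashlib offers no such accessor; the Sha256Midstate class below is hypothetical):

```python
BLOCK = 64  # SHA-256 compresses its input in 64-byte blocks

def verify_midstate_record(tx_bytes: bytes, a: int, b: int,
                           s1: bytes, s2: bytes) -> bool:
    """Check the mined claim that hashing tx_bytes passes through internal
    state s1 at byte offset a and s2 at byte offset b. Run by verifiers who
    still hold the full transaction, before anyone deletes anything."""
    # Raw midstates only exist at block boundaries.
    if a % BLOCK or b % BLOCK or not (0 <= a <= b <= len(tx_bytes)):
        return False
    st = Sha256Midstate()        # hypothetical: starts at the SHA-256 IV
    st.update(tx_bytes[:a])
    if st.midstate() != s1:      # hypothetical raw-state accessor
        return False
    st.update(tx_bytes[a:b])
    return st.midstate() == s2
```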

At this point, any node can choose to delete the data between offsets A and B. This can be done with confidence, because the node can double-check the accuracy of S1 and S2, and the impact on the ledger, before deleting the data. After that, it may also be able to share (with the agreement of the receiving node) this modified transaction as part of initial block download, along with S1 and S2, to any other nodes that don't want the objectionable content. The receiving nodes wouldn't immediately be able to trust S1 and S2, but they would eventually, once they have the full blockchain.

*** Conclusion ***

This isn't a concrete proposal - it's not even close - but perhaps it might be the start of a fruitful conversation. I have more to say, but this email is long enough already. Email me if you're interested in discussing or developing these ideas together. I have a private Discord server, but I'm open to other suggestions, or just further discussion here.

Laissez faire, laissez passer.

Let it be, let it go.

Ethan Heilman

Nov 21, 2025, 6:25:48 PM
to Lazy Fair, bitco...@googlegroups.com
I'm not convinced your hash function approach fully does what you want it to, although it does seem doable with some additional constraints.

There is a solution that does everything you want and more: ZKPs.

ZKPs (Zero-Knowledge Proofs) can prove that some data X hashes to some hash output Y while keeping the actual value X secret. Thus, everyone can be convinced that H(X) = Y even if X is deleted and no one knows what the value X was.
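
As an interface sketch only (zk_prove and zk_verify are hypothetical placeholders, not any real library; actual systems compile the hash relation into a SNARK circuit), the statement being proven looks like this:

```python
import hashlib

def relation(X: bytes, Y: bytes) -> bool:
    # The statement proven in zero knowledge: "I know X such that H(X) = Y".
    return hashlib.sha256(X).digest() == Y

# Hypothetical usage: the verifier sees only Y and the proof, never X.
# proof = zk_prove(relation, public={"Y": Y}, witness={"X": X})
# assert zk_verify(relation, public={"Y": Y}, proof=proof)
```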

Even more exciting, ZKPs can prove the correctness and validity of the entire Bitcoin blockchain. Thus storing old transactions is no longer needed to convince others that the chain is correct. This would remove any harmful data. Zerosync in 2017 compressed Bitcoin's blockchain into an 800 KB proof [0], which is constant size regardless of the number of transactions or bytes compressed. This approach does not require any changes to Bitcoin, and you could implement a Bitcoin full node today that supports this.

We have had a solution to the problem of harmful data on the blockchain since 2017. It just requires time, money, and motivated people to work on it.

[0]: Robin Linus and Lukas George, ZeroSync: Introducing Validity Proofs to Bitcoin, 2017, https://zerosync.com/zerosync.pdf


Greg Maxwell

Nov 21, 2025, 6:25:50 PM
to Lazy Fair, bitco...@googlegroups.com
If you find blindly trusting miners acceptable, just run SPV and then you don't store anything but block headers.

Aside: allowing attackers to manipulate a hash's midstate is dubious from a security perspective -- at the very least, it's outside the scope normally analyzed for security.

Saint Wenhao

Nov 23, 2025, 2:13:09 AM
to Greg Maxwell, Lazy Fair, bitco...@googlegroups.com
> allowing attackers access to manipulate a hash's midstate is dubious from a security perspective

It is unsafe, because the attacker can pick anything as the "middle state", run it through SHA-256, and get a valid result. For example: the hash of the Genesis Block is computed in this way:

hash0: 6a09e667 bb67ae85 3c6ef372 a54ff53a 510e527f 9b05688c 1f83d9ab 5be0cd19

01000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000
00000000 3ba3edfd 7a7b12b2 7ac72c3e
67768f61 7fc81bc3 888a5132 3a9fb8aa

hash1: bc909a33 6358bff0 90ccac7d 1e59caa8 c3c8d8e9 4f0103c8 96b18736 4719f91b

4b1e5e4a 29ab5f49 ffff001d 1dac2b7c
80000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000280

hash2: af42031e 805ff493 a07341e2 f74ff581 49d22ab9 ba19f613 43e2c86c 71c5d66d

hash0: 6a09e667 bb67ae85 3c6ef372 a54ff53a 510e527f 9b05688c 1f83d9ab 5be0cd19

af42031e 805ff493 a07341e2 f74ff581
49d22ab9 ba19f613 43e2c86c 71c5d66d
80000000 00000000 00000000 00000000
00000000 00000000 00000000 00000100

hash3: 6fe28c0a b6f1b372 c1a6a246 ae63f74f 931e8365 e15a089c 68d61900 00000000


And now, let's assume that we want to skip the first 64 bytes. We get "bc909a33 6358bff0 90ccac7d 1e59caa8 c3c8d8e9 4f0103c8 96b18736 4719f91b" from the network as the midstate, compute "af42031e 805ff493 a07341e2 f74ff581 49d22ab9 ba19f613 43e2c86c 71c5d66d" as the result, and so we may think that our last data chunk really was "4b1e5e4a 29ab5f49 ffff001d 1dac2b7c". However:

fake0: 189dcde9 da998d89 12414f36 fb7a1edd d48a4c3b c0237088 6beec03e 46b7bafb

4b1e5e4a 29ab5f49 ffff001d 1dac2b7c
80000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000280

fake1: 189dcde9 da998d89 12414f36 fb7a1edd d48a4c3b c0237088 6beec03e 46b7bafb


So the attacker can pick some data, and compute a midstate which passes through that data unchanged. And then, instead of shrinking the data, it can be expanded to unlimited size, because a chunk that leaves the midstate unchanged can be inserted any number of times.

Computing an arbitrary difference between midstates is possible as well. For example, if we want a midstate that is incremented by exactly one:

fake2: f530fddf 74afe6c6 6004c3c0 c230b193 853774a9 6ab4c304 9d09ddde d9982546

4b1e5e4a 29ab5f49 ffff001d 1dac2b7c
80000000 00000000 00000000 00000000
00000000 00000000 00000000 00000000
00000000 00000000 00000000 00000280

fake3: f530fddf 74afe6c6 6004c3c0 c230b193 853774a9 6ab4c304 9d09ddde d9982547
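
Both tricks fall out of the Davies-Meyer structure of SHA-256's compression function:

    h_next = E_m(h) + h    (word-wise addition mod 2^32)

where E is the SHACAL-2 block cipher keyed by the 64-byte message block m. Since E is invertible for a known m, choosing h = E_m^{-1}(0) gives a fixed point (the fake0/fake1 pair above), and choosing h = E_m^{-1}(delta) shifts the output by exactly delta (the fake2/fake3 pair, with delta = 1 in the last word).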

Other attacks are possible as well. So I wouldn't trust middle hashes that much, unless you have a strong cryptographic proof that they are safe in a given context.

Peter Todd

Nov 29, 2025, 6:32:21 AM
to Ethan Heilman, Lazy Fair, bitco...@googlegroups.com
On Thu, Nov 20, 2025 at 04:21:33PM -0500, Ethan Heilman wrote:
> I'm not convinced your hash function approach fully does what you want it
> to, although it does seem doable with some additional constraints.
>
> There is a solution that does everything you want and more: ZKPs.
>
> ZKPs (Zero-Knowledge Proofs) can prove that some data X hashes to some hash
> output Y while keeping the actual value X secret. Thus, everyone can be
> convinced that H(X) = Y even if X is deleted and no one knows what the
> value X was.
>
> Even more exciting, ZKPs can prove the correctness and validity of the
> entire Bitcoin blockchain. Thus storing old transactions is
> no longer needed to convince others that the chain is correct. This would
> remove any harmful data. Zerosync in 2017 compressed Bitcoin's blockchain
> into an 800 KB proof [0], which is constant size regardless of the number of
> transactions or bytes compressed. This approach does not require any
> changes to Bitcoin, and you could implement a Bitcoin full node today that
> supports this.
>
> We have had a solution to the problem of harmful data on the blockchain
> since 2017. It just requires time, money, and motivated people to work on it.

Rather than being a solution, the technology behind Zerosync is a potential
threat to Bitcoin. The problem is that Bitcoin fundamentally requires
proof-of-publication to be decentralized and censorship resistant; a related
problem is that HTLCs (and thus Lightning) fundamentally require
proof-of-publication to work at all.

For Bitcoin mining to remain decentralized, blocks need to be widely propagated
in a form suitable for creating new blocks. ZKP/Zerosync makes it possible to
prove that a block hash and all prior blocks follow the protocol rules and were
thus valid. However, valid block hashes alone are insufficient to mine on top
of because they do not contain the UTXO set data necessary to mine a new block.

Why do miners have an incentive to distribute the blocks they find? Ultimately
because doing so is necessary for the coins they mined to be valuable. But if
full nodes can be convinced of the validity of coins without full block
contents --- thus allowing those coins to be sold --- that weakens the
incentives to distribute block data in a form that allows other miners to mine.


With regard to HTLCs/Lightning, HTLCs rely on a proof-of-publication to be
secure: for the HTLC to be redeemed, the redeemer *must* publish the pre-image
in the Bitcoin chain, allowing the other party relying on the HTLC to recover
the pre-image. Again, ZKP/Zerosync weakens this security, as the validity of
the transaction spending the HTLC can be proven without actually making the
pre-image available.
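
To make the publication requirement concrete, here is the standard HTLC script template, sketched with placeholder names (details vary by implementation): the success branch can only be spent by supplying the preimage in the witness, i.e. by publishing it on-chain where the other party can read it.

```python
# Standard HTLC script template (angle brackets are placeholders).
HTLC_SCRIPT = """
OP_IF
    OP_SHA256 <payment_hash> OP_EQUALVERIFY   # redeemer must reveal preimage
    <redeemer_pubkey> OP_CHECKSIG             # ...and sign
OP_ELSE
    <timeout> OP_CHECKLOCKTIMEVERIFY OP_DROP  # after the timeout...
    <refund_pubkey> OP_CHECKSIG               # ...the offerer reclaims funds
OP_ENDIF
"""
```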


Rather than presenting ZKP/Zerosync as a solution to the "harmful data"
problem, we should in fact be researching ways to defeat ZKP/Zerosync entirely.
We need a consensus protocol where the only way to fully validate a block is to
actually have the entire block contents.

As for "harmful data", that is a challenge to be solved legally/politically.

--
https://petertodd.org 'peter'[:-1]@petertodd.org

waxwing/ AdamISZ

Nov 29, 2025, 8:57:34 AM
to Bitcoin Development Mailing List
Hi Peter, list,

Interesting!

One thought that springs to mind: attempts to ameliorate IBD with ZKPs should not forget one thing: what we actually want here is succinctness, not so much ZK. Think SNARK instead of zkSNARK.
Which is important; without the requirement for an actual ZK property, the protocol can carry an attached witness that is not secret.

Then a counter-thought strikes: any version of these protocols that requires more data/bandwidth probably loses out to versions that have less data/bandwidth. Hmm.

It seems to demonstrate, to me, that some kind of "data carrying" is required in the "state" (cf. the "history"). Ironically, recent discussion (see 'On (in)ability to embed data into Schnorr', but yeah, a googolplex of "discussions" on the internet about filtering and spam...) has just re-emphasized that the UTXO set can inevitably carry data (I guess that's obvious).

I do think, long term, that ZKP over history is correct, and that (see typical rollup design) data carrying in state can do the job that you are (correctly) insisting must be done.
(And the corollary: "harmful data on the blockchain" is a wrong mental model and should be abandoned, irrespective of architecture.)

Aside from your *main* concept here, I think the idea that HTLCs require *proof* of publication is wrong. What they require is publication. A wronged channel party needs to read the preimage, not have proof that it can be read. Take as contrast the OpenTimestamps model, where having proof that something was published is the main functionality offered/required. I suppose there is another way to say it: the channel counterparty needs "proof of future publication" at contract setup. That's fair enough, but it's a very different thing from getting a proof that something *was* published.

Cheers,
AdamISZ/waxwing

Erik Aronesty

Nov 29, 2025, 10:43:56 AM
to waxwing/ AdamISZ, Bitcoin Development Mailing List
You can stop arbitrary data encoding in public keys by requiring every key to be the **unique hash-to-curve output** of a publicly verifiable BLS root signature, rather than a user-chosen point on secp256k1. 

Because a BLS signature is checkable via a pairing equation, verifiers can confirm that each public key was deterministically forced by the root certificate and not selected to embed arbitrary bits. Under this construction, public keys become outputs of a constrained randomness beacon rather than an open steganographic channel.

In practice, the system fixes a BLS12-381 public key `PK_root` and a one-time BLS signature `σ = Sign_root(S)`. Any allowed secp256k1 key is then defined as `P_i = HashToCurve_secp256k1(σ || i)`, where `i` is an arbitrary index and the hash-to-curve map is a standard indifferentiable encoding (e.g., IETF RFC 9380). Verifiers check the pairing equation `e(σ, g) = e(H(S), PK_root)` once, and thereafter reject any public key whose curve point does not equal the canonical hash-to-curve output for some disclosed index `i`. Because the signer never chooses curve points—and because hash-to-curve eliminates degrees of freedom—no entropy remains to smuggle bits into the key, satisfying the same non-malleability criteria used in anti-steganographic constructions.
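
A sketch of the verifier's check as described, with bls_verify and hash_to_curve_secp256k1 as stand-ins for a BLS12-381 pairing verification and an RFC 9380 hash-to-curve (neither is a real library call here):

```python
def key_is_allowed(pubkey_point, sigma: bytes, index: int,
                   pk_root, S: bytes) -> bool:
    # One-time check of the root signature: e(sigma, g) == e(H(S), PK_root).
    if not bls_verify(pk_root, S, sigma):
        return False
    # The key must equal the canonical hash-to-curve output for its index,
    # leaving its creator no free bits in which to embed data.
    expected = hash_to_curve_secp256k1(sigma + index.to_bytes(8, "big"))
    return pubkey_point == expected
```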

This “forced randomness” model parallels techniques in the literature on steganographic resistance and extractable commitments, particularly Hopper–Langford–von Ahn’s work on *provably secure steganography* and Bellare–Ristenpart–Tessaro’s analyses of *channel indistinguishability* in public-key spaces. 

The underlying idea is identical: eliminate sender choice over high-entropy objects so the objects cannot become covert storage.



waxwing/ AdamISZ

Nov 29, 2025, 11:15:13 AM
to Bitcoin Development Mailing List
Hi Erik,

> You can stop arbitrary data encoding in public keys by requiring every key to be the **unique hash-to-curve output** of a publicly verifiable BLS root signature, rather than a user-chosen point on secp256k1. 

Indeed, absolutely correct (afaik!); I had recently been discussing this a bit with Lloyd Fournier on nostr. I think at a theoretical level this is a very important observation, but at a practical level not so much. It's also worth noting that something like RSA FDH or hash-based signatures, since they're deterministic (think "no nonce" and no technical ZK property), could technically do the same thing, but BLS is far and away better than those.

Theoretical, not practical: I think there's no way such a thing would happen on Bitcoin (IMO! could be wrong!) because it's an absolutely huge change to the crypto without improving quantum resistance (performance issues are, I guess, an open question, considering batching properties vs the raw performance of a single pairing being bad). And the other reason: there's no point going this far without attempting to patch *every* hole that allows non-trivial data. You could argue these holes are trivial: amount data, locktimes (both nLockTime and nSequence), in/out sequencing not being deterministic, and grinding curve points. The obviously much more relevant and non-trivial issue is Script, generally, and peripheral-to-script stuff like the control block in taproot etc. Since you'd have to "address" (that is to say, gut) Bitcoin's scripting before these other things like deterministic signatures become relevant, it does seem all very theoretical, if interesting.

I guess this would have been better in the "On (in)ability to embed data in Schnorr" thread but w/e it's all kind of connected I guess!

Cheers,
AdamISZ/waxwing

Erik Aronesty

Nov 29, 2025, 12:17:17 PM
to waxwing/ AdamISZ, Bitcoin Development Mailing List
There is no fundamental change to the cryptography. The beacon proofs are only used for "proof of not spam". The proven Bitcoin key is the same secp256k1 key, and spending is unchanged. UTXO proofs are not terribly unreasonable given the cost of UTXOs.

Greg Maxwell

Nov 29, 2025, 1:16:34 PM
to Erik Aronesty, waxwing/ AdamISZ, Bitcoin Development Mailing List
You cannot perform pairing on secp256k1 as the DDH is hard in that group, so no BLS signature.  You may find that you make fewer technical errors if you refrain from misrepresenting other people's ideas as your own original work.


waxwing/ AdamISZ

Nov 29, 2025, 1:56:00 PM
to Bitcoin Development Mailing List
Erik,

> In practice, the system fixes a BLS12-381 public key `PK_root` and a one-time BLS signature `σ = Sign_root(S)`. Any allowed secp256k1 key is then defined as `P_i = HashToCurve_secp256k1(σ || i)`, where `i` is an arbitrary index and the hash-to-curve map is a standard indifferentiable encoding (e.g., IETF RFC 9380). Verifiers check the pairing equation `e(σ, g) = e(H(S), PK_root)` once, and thereafter reject any public key whose curve point does not equal the canonical hash-to-curve output for some disclosed index `i`. Because the signer never chooses curve points—and because hash-to-curve eliminates degrees of freedom—no entropy remains to smuggle bits into the key, satisfying the same non-malleability criteria used in anti-steganographic constructions.

Oh, I confess that when you said "BLS" I didn't read the details, and just went off on the assumption you were talking about a replacement of secp with a BLS curve, with outputs being (BLS pubkey, BLS sig), i.e. an attached PoK of the key. But now that I actually read what you specifically meant, I realize I don't understand it: if the secp key is Hash-to-curve(sigma || i), then you don't have the private key of that pubkey. How could that be used? I don't get it. Maybe it's because I don't know what S is in this scheme, or maybe not.

Lloyd and I rejected the idea of still using secp but attaching a BLS sig to it. That feels unworkable (mapping keys across groups or w/e); you'd need to just switch to a BLS curve entirely, I think.

AdamISZ/waxwing

Peter Todd

Dec 1, 2025, 3:36:55 AM
to waxwing/ AdamISZ, Bitcoin Development Mailing List
On Sat, Nov 29, 2025 at 05:54:13AM -0800, waxwing/ AdamISZ wrote:
> Hi Peter, list,
>
> Interesting!
>
> One thought that springs to mind: attempts to ameliorate IBD with ZKPs
> should not forget one thing: what we actually want here is succinctness,
> not so much ZK. Think SNARK instead of zkSNARK.
> Which is important; without the requirement for an actual ZK property,
> the protocol can carry an attached witness that is not secret.

The Zero-Knowledge part is important to the goal in this specific use-case:
trying to prevent all arbitrary data publication.

> Then a counter-thought strikes: any version of these protocols that
> requires more data/bandwidth probably loses out to versions that have less
> data/bandwidth. Hmm.

Ecash has even less data/bandwidth than Bitcoin. Yet people choose not to use
it, preferring stronger security assumptions over weaker ones.

> I do think, long term, that ZKP over history is correct, and that (see
> typical rollup design) data carrying in state can do the job that you are
> (correctly) insisting must be done.
> (And the corollary: "harmful data on the blockchain" is a wrong mental
> model and should be abandoned, irrespective of architecture.)

It's quite possible that ZKP's are, in the context of decentralized
blockchains, an exploit that will prove to be impossible to patch. Similar to
how merge mining is an economic exploit that may well be impossible to patch.

Sometimes seemingly good ideas are ultimately killed by clever exploits.

> Aside from your *main* concept here, I think the idea that HTLCs require
> *proof* of publication is wrong. What they require is publication. A
> wronged channel party needs to read the preimage, not have proof that it
> can be read.

That is not correct. If Alice offers a HTLC to Bob, Alice needs proof that in
the event of a redemption, Bob is forced to publish the preimage in such a way
that Alice can recover it.

The *proof* aspect of this is critical to the security model. It's not enough
that Bob merely promise to give the preimage to Alice: redemption must be
atomic with publication.

> Take as contrast the OpenTimestamps model, where having proof
> that something was published is the main functionality offered/required.

Nope. OpenTimestamps does not use proof of publication at all. OpenTimestamps
is a commitment operation: proof that if A was changed, B would have to change
too. The vast majority of OTS timestamps are for private data that is never
published in any way. OTS simply shows that data *existed*.

> I
> suppose there is another way to say it: the channel counterparty needs
> "proof of future publication" at contract setup. That's fair enough, but
> it's a very different thing from getting a proof that something *was*
> published.

It is not a meaningfully different thing. An HTLC is proof that in the event of
an uncooperative redemption, publication will happen. Slightly changing the
time it takes is irrelevant to the general concept.

Concretely: unless you can propose a technical innovation that somehow turns
this pedantic nuance into a meaningfully different implementation, so what?

waxwing/ AdamISZ

Dec 2, 2025, 8:05:40 AM
to Bitcoin Development Mailing List
(Apologies to OP; we've drifted off topic here.) Answers inline.


On Monday, December 1, 2025 at 5:36:55 AM UTC-3 Peter Todd wrote:
> On Sat, Nov 29, 2025 at 05:54:13AM -0800, waxwing/ AdamISZ wrote:
> > Hi Peter, list,
> >
> > Interesting!
> >
> > One thought that springs to mind: attempts to ameliorate IBD with ZKPs
> > should not forget one thing: what we actually want here is succinctness,
> > not so much ZK. Think SNARK instead of zkSNARK.
> > Which is important; without the requirement for an actual ZK property,
> > the protocol can carry an attached witness that is not secret.
>
> The Zero-Knowledge part is important to the goal in this specific use-case:
> trying to prevent all arbitrary data publication.

Yes, agreed. (With the strange caveat that the ZK property itself allows data-embedding almost by force: the reason Schnorr has a data-embedding channel and BLS does not is precisely that BLS lacks a ZK property, which in turn relates to it being deterministic (think: no nonce = no channel)... the caveat is not super relevant to some kind of ZK-ed IBD, though, since that's compressing an unfathomable amount.)
 
<snip>
 
> It's quite possible that ZKP's are, in the context of decentralized
> blockchains, an exploit that will prove to be impossible to patch. Similar to
> how merge mining is an economic exploit that may well be impossible to patch.
>
> Sometimes seemingly good ideas are ultimately killed by clever exploits.

I have a sneaking suspicion you're wrong here, but I can't justify it. (Hence 'interesting!') Would love to hear others' opinions on the topic.
 
> > Take as contrast the OpenTimestamps model, where having proof
> > that something was published is the main functionality offered/required.
>
> Nope. OpenTimestamps does not use proof of publication at all. OpenTimestamps
> is a commitment operation: proof that if A was changed, B would have to change
> too. The vast majority of OTS timestamps are for private data that is never
> published in any way. OTS simply shows that data *existed*.

That seems like a good correction. So: tamper protection, using the binding property of commitments... and "proof of existence" is *one* possible function? Is that fair?
 
> > I
> > suppose there is another way to say it: the channel counterparty needs
> > "proof of future publication" at contract setup. That's fair enough, but
> > it's a very different thing from getting a proof that something *was*
> > published.
>
> It is not a meaningfully different thing. An HTLC is proof that in the event of
> an uncooperative redemption, publication will happen. Slightly changing the
> time it takes is irrelevant to the general concept.
>
> Concretely: unless you can propose a technical innovation that somehow turns
> this pedantic nuance into a meaningfully different implementation, so what?

On reflection, I don't see it as strange to make the distinction between the two: 1/ proof that something was published in the past, and 2/ proof that, conditional on event X occurring, data Y will be published. I guess 1/ is, most realistically, a case of publishing raw, unhashed data on a blockchain; the proof that that event occurred in the past is then the on-chain txs (using op_return or w/e) themselves. As you pointed out, that's not what OTS is doing. Nor is it what an HTLC is doing; that's 2/.