BIP idea: Timelock-Recovery storage format


Oren

Dec 28, 2025, 9:59:43 AM
to Bitcoin Development Mailing List
Reposting here from BitcoinTalk:

After a short talk with Ava Chow during BTC++ Taiwan, I'm starting this thread to discuss whether my idea is BIP-worthy.

Motivation for Timelock-Recovery plans:
Storing seeds for recovery & inheritance is scary.
Pre-signed transactions to a secondary wallet or custodian are safer to handle and back up, due to their immutability.
A single pre-signed transaction with a future nLocktime requires "renewal" as the nLocktime deadline approaches, which can be cumbersome (e.g. if the seed is split across multiple geographic locations).
Covenants/Vaults are still being debated, and could scare less-technical Bitcoiners.

Solution:
Pre-signing a pair of transactions:
Alert/Initiate Transaction: A consolidation transaction that keeps most funds on the original wallet (except for a minimal amount that goes to anchor-addresses, for CPFP acceleration)
Recovery Transaction: A transaction that moves the Bitcoin from the consolidated UTXO to the secondary-wallet(s), with an nSequence relative-locktime that gives the user enough time to move the funds elsewhere (assuming they noticed that the Alert transaction was mined, and still have the seed or signed an alternative transaction in advance).

Similar to a single pre-signed transaction with a future nLocktime, Timelock-Recovery plans will not include new funds that are added to the wallet, and will be revoked even if a tiny amount is spent. This mechanism is intended for wallets that are going to remain untouched for a long time.

An example implementation can be found in the Timelock Recovery plugin that I've implemented for Electrum (merged since Electrum v4.6.0b1). Details and demo videos can be found at: https://timelockrecovery.com.
The plugin creates a UI for signing the two transactions, then saving them either as a PDF file (with detailed manual instructions for less technical Bitcoiners on how to broadcast them) or in a JSON format.

The BIP will be about the JSON format, which includes not only the raw transactions themselves, but also user information (e.g. name, description, destination labels, wallet name, wallet version) and data about the transactions (e.g. txids, amounts, fees, input UTXOs, anchor addresses, relative locktime).
A standard JSON format will allow implementing a compatible feature on other wallets, as well as apps/servers for monitoring & initiating timelock-recovery plans - such as the one being developed by RITREK.com (disclosure: I'm one of RITREK's founders).

Let me know what you think!

Oren

waxwing/ AdamISZ

Jan 1, 2026, 9:34:07 AM
to Bitcoin Development Mailing List
Hi Oren, list,

I do think this is a really good general idea - and certainly fits into "this could be a BIP". Current covenant-less "deadman's switch" style systems are of course limited, but useful, at least to the enthusiast that can hand-craft them. Avoiding the "need to refresh every interval" is clearly desirable, though the extra complexity *may* end up being more trouble than it's worth (clearly, you don't think so!).

Btw, do you think you should address directly the philosophy found in Liana, and what setups they have decided to expose? (i.e. what's the delta, here).
Should the BIP actually be something with more flexibility in transaction structure, so as to cover more use-cases?

Other comments stimulated by the BIP text:

"The testmempoolaccept RPC can receive a list of transactions in which the later transactions may depend on earlier transactions - however in our case the Recovery Transaction has an nSequence relative-locktime, and therefore calling testmempoolaccept 'alert-tx' 'recovery-tx' will fail, claiming that the Alert Transaction UTXO is not confirmed (and the required time window has not passed). "

This sounds like a pretty chunky and important problem (which surprised me, as in, it never occurred to me). Speaking for myself, I would not rest easy with a recovery plan that I couldn't unambiguously check as follows: "I really, really want to know that this recovery works, but much more than that, I *must* know that it doesn't work (leak my funds) before the intended deadline".

With a single nlocktime you can do that (testmempoolaccept -> "non-final").

Is there a case for requesting a feature in Core that one could do a 'testmempoolaccept' with a conditional setting of the time/block height as argument? Would that solve this issue? I'm guessing not, as (clue is in the name!) the tool was mostly designed around "how does this tx interact with the currently seen mempool" more than "forget conflicts, is this even valid/could it ever be valid in and of itself?" (yeah perhaps it's just an incoherent question..?).

"alert_inputs (mandatory): An array of up to 10,000 inputs spent by the Alert Transaction. Each input is a string in the format "txid:vout" where txid is a 64-character lowercase hexadecimal string and vout is a decimal number of up to 6 digits.
alert_tx (mandatory): The raw Alert Transaction in uppercase hexadecimal format. A string of up to 800,000 characters."
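For reference, the quoted "txid:vout" constraint amounts to a simple pattern check. A sketch in Python (illustrative only - the function name is not from the BIP text):

```python
import re

# Validate the quoted alert_inputs format: 64 lowercase hex characters,
# a colon, then up to 6 decimal digits for the output index.
INPUT_RE = re.compile(r"[0-9a-f]{64}:[0-9]{1,6}")

def is_valid_alert_input(s: str) -> bool:
    return INPUT_RE.fullmatch(s) is not None

print(is_valid_alert_input("ab" * 32 + ":0"))   # True
print(is_valid_alert_input("AB" * 32 + ":0"))   # False (hex must be lowercase)
```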

I'm probably being an idiot, but: since you're defining alert_inputs as just txid:vout, what's the point of including it since that data is already in alert_tx, no?

One last thing, that 388 day limit is a little unfortunate, no? I certainly think a year is a good, reasonable "long time", but still, all the same.

Cheers,
AdamISZ/waxwing

Oren

Jan 1, 2026, 11:22:40 AM
to Bitcoin Development Mailing List
Hi AdamISZ, list,
Thanks for showing interest in the BIP, and happy new year.


> Btw, do you think you should address directly the philosophy found in Liana, and what setups they have decided to expose? (i.e. what's the delta, here).
Yes, I think it's a good idea to talk about the advantages and disadvantages of pre-signed transactions vs script-based wallets. I'll add it to the Motivation section.


> Should the BIP actually be something with more flexibility in transaction structure, so as to cover more use-cases?
I think it's good to limit the structure of the transactions, so that different services could easily integrate this feature.
One person I talked to suggested building a more sophisticated sequence of transactions with multiple wallets - for example, after 90 days the Bitcoin moves to his wife's wallet, from which another pre-signed transaction moves it to his brother's wallet after 180 days, and so on.
But trying to support every custom logic and use-case would end up producing a service that compiles general-purpose "code" instead of something useful for standard users.


> Is there a case for requesting a feature in Core that one could do a 'testmempoolaccept' with a conditional setting of the time/block height as argument?
I've asked for a testmempoolaccept RPC that ignores nSequence/nLocktime here: https://github.com/bitcoin/bitcoin/issues/32142
It's a problem even in transactions with a single nLocktime. testmempoolaccept breaks on the first problem that it finds. Suppose you are signing a transaction with a future nLocktime, and testmempoolaccept breaks on "non-final" - how do you know that there are no other problems in the transaction? How can you tell that the cryptographic signatures are good, and what if it's a script-based wallet?
Some worry that complicating the testmempoolaccept RPC would make it diverge from how transactions are checked in sendrawtransaction, and I somewhat agree with that.
Checking the signatures of P2WPKH wallets is straightforward, but in order to build a service that verifies the Recovery Transaction of a complicated wallet (e.g. multisig, taproot), without initiating the plan - I guess you would need to run a node with patched testmempoolaccept code.


> I'm probably being an idiot, but: since you're defining alert_inputs as just txid:vout, what's the point of including it since that data is already in alert_tx, no?
I did mention that the JSON has some information duplicated. Even the alert_txid can be calculated from the raw transaction.
This is useful for displaying the information to the user for review. For example, a webpage onto which you drag-and-drop the JSON file could show its information and offer a button saying "Upload for Monitoring".
This will allow the frontend to show the user the list of UTXOs covered by the recovery-plan, without complicated frontend-code to parse the raw transaction.
Of course, if the user clicks "Upload for Monitoring" and the backend finds that the alert_inputs don't match the raw transaction, the whole process should be rejected with a warning that the file is corrupt (possibly maliciously).
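To make such a cross-check concrete: the duplicated alert_txid, for instance, can be recomputed from the raw transaction with a double SHA-256. A sketch (for segwit transactions the witness data must be stripped before hashing):

```python
import hashlib

def txid_from_raw(raw_hex: str) -> str:
    """Recompute a txid: double SHA-256 of the serialized transaction,
    displayed in reversed byte order. Sketch only - segwit transactions
    need their witness data stripped before hashing."""
    data = bytes.fromhex(raw_hex)  # accepts upper- or lowercase hex
    digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
    return digest[::-1].hex()

# A monitoring backend would then reject the plan if, hypothetically,
# txid_from_raw(plan["alert_tx"]) != plan["alert_txid"].
```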


> One last thing, that 388 day limit is a little unfortunate, no? I certainly think a year is a good, reasonable "long time", but still, all the same.
388 days is just the maximal value that you get from BIP-68:
After setting the correct bit flags, the relative-locktime is calculated from the lowest 16 bits, in units of 512 seconds.
This gives a maximal value of (2^16 - 1) * 512 seconds = 33,553,920 seconds = 388 days, 8 hours and 32 minutes.
Users can obviously choose a shorter cancellation-window.
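The arithmetic can be double-checked with a short sketch of the BIP-68 encoding (constants per BIP-68; the helper function name is mine):

```python
# BIP-68: with bit 22 set, the low 16 bits of nSequence are a relative
# locktime in units of 512 seconds (bit 31 set would disable it entirely).
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22
SEQUENCE_LOCKTIME_MASK = 0xFFFF

def nsequence_for_seconds(seconds: int) -> int:
    """Encode a time-based relative locktime as an nSequence value."""
    units = (seconds + 511) // 512  # round up to whole 512-second units
    if units > SEQUENCE_LOCKTIME_MASK:
        raise ValueError("exceeds the BIP-68 maximum")
    return SEQUENCE_LOCKTIME_TYPE_FLAG | units

max_seconds = SEQUENCE_LOCKTIME_MASK * 512
print(max_seconds)                              # 33553920
days, rest = divmod(max_seconds, 86400)
print(days, rest // 3600, (rest % 3600) // 60)  # 388 8 32
```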

Regards,
Oren

Oren

Jan 1, 2026, 11:53:13 AM
to Bitcoin Development Mailing List
One more thing to mention - none of the hardware wallets that I've tested show the custom nSequence fields to the user, and only some show a custom nLocktime.
They just agree to sign whatever nSequence/nLocktime was in the PSBT, even if it's not one of the common values (e.g. nSequence of 0xFFFFFFFD to 0xFFFFFFFF).
This is an issue regardless of my BIP idea, because a malicious virus could manipulate the nSequence field before sending it to the hardware wallet, making the user sign transactions that would seem corrupt when they try to broadcast them, but will become valid in the future.
Just as users may wish to verify that all pre-signed transactions are valid using some custom testmempoolaccept, they may also want to verify the exact relative-locktime (nSequence) value on their hardware wallet.
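For illustration, decoding an nSequence value into something displayable takes only a few lines (a sketch following BIP-68 semantics; the function name is mine):

```python
# BIP-68 nSequence flags: bit 31 disables the relative locktime; bit 22
# selects time-based (512-second units) vs block-based interpretation.
SEQUENCE_LOCKTIME_DISABLE_FLAG = 1 << 31
SEQUENCE_LOCKTIME_TYPE_FLAG = 1 << 22
SEQUENCE_LOCKTIME_MASK = 0xFFFF

def describe_nsequence(n: int) -> str:
    """Render an nSequence value as text a hardware wallet could show."""
    if n & SEQUENCE_LOCKTIME_DISABLE_FLAG:
        return "no relative locktime"
    value = n & SEQUENCE_LOCKTIME_MASK
    if n & SEQUENCE_LOCKTIME_TYPE_FLAG:
        return f"relative locktime: {value * 512} seconds"
    return f"relative locktime: {value} blocks"

print(describe_nsequence(0xFFFFFFFD))           # no relative locktime
print(describe_nsequence((1 << 22) | 0xFFFF))   # relative locktime: 33553920 seconds
print(describe_nsequence(144))                  # relative locktime: 144 blocks
```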

I wrote a PR for Specter-DIY to show this information, and wish for other wallets to follow: https://github.com/cryptoadvance/specter-diy/pull/321

Regards,
Oren


orenz0

Mar 16, 2026, 6:48:39 AM
to Matt Whitlock, Bitcoin Development Mailing List
> I couldn't figure out how to subscribe to the group without using my Gmail address,

You can create a Google account with a non-Gmail email - but I'm not sure about privacy issues (e.g. Google may require a phone number).

> Would be worth mentioning in the description of the "checksum" field that it is to be entirely omitted during checksum calculation. That is a noteworthy exception to the "mandatory" constraint.

Noted. I'll create a PR for that.

> What's the point of doing that? Given that order is important, why wouldn't you just output the entries of the object sorted by their keys?

Actually, I checked the Nostr protocol again, and it really hashes an array of the object values in a fixed key order, without the keys themselves. However, this might break if the standard has optional fields, because without the key name you can't be sure which field a value belongs to.

The JavaScript implementation of this is trivial: `JSON.stringify(Object.entries(obj).sort())`, and a Python implementation would be just as simple.
I'm pretty sure the ECMAScript standard (https://262.ecma-international.org/16.0/index.html#sec-json.stringify ) is very specific regarding newlines and non-ASCII characters.
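For what it's worth, a rough Python analogue of that one-liner would be (a sketch - number formatting and string-escaping edge cases may differ from ECMAScript's JSON.stringify):

```python
import json

def checksum_payload(obj: dict) -> str:
    """Serialize an object's [key, value] pairs, sorted, with the compact
    separators JSON.stringify uses (no spaces, no trailing newline)."""
    pairs = sorted([key, value] for key, value in obj.items())
    return json.dumps(pairs, separators=(",", ":"), ensure_ascii=False)

print(checksum_payload({"two": 2, "one": 1}))   # [["one",1],["two",2]]
```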

Regards,
Oren



On Friday, 13 March 2026 at 18:29, Matt Whitlock <bit...@mattwhitlock.name> wrote:

> On Friday, 13 March 2026 12:05:41 EDT, orenz0 wrote:
> > Hi Matt,
> > I can't find your message on https://groups.google.com/g/bitcoindev so I'll answer directly from my email.
>
> I couldn't figure out how to subscribe to the group without using my Gmail address, so when I tried to post from my bit...@mattwhitlock.name address, the listserv rejected my post. Oh well.
>
> >> * What should be the value of the "checksum" field during the checksum calculation? Null? An empty string? Absent entirely? None of those options satisfy the schema, which mandates that the value must be a string of 8 to 64 characters. Perhaps a string of eight zeros?
> >
> > The checksum field should be absent entirely during checksum calculation.
>
> Would be worth mentioning in the description of the "checksum" field that it is to be entirely omitted during checksum calculation. That is a noteworthy exception to the "mandatory" constraint.
>
> >> * What is "an array of key, value pairs"? Is it [{"key":"one","value":1},{"key":"two","value":2}]? The example JavaScript code implies that it should be [["one",1],["two",2]], but this is not spelled out in the normative text. Typically in a specification, example code is non-normative, and all necessary information to implement the spec is provided in the normative text.
> >
> > Your interpretation is correct, key-value pairs are exactly converting an object like {"one":1,"two":2} into [["one",1],["two",2]].
>
> What's the point of doing that? Given that order is important, why wouldn't you just output the entries of the object sorted by their keys? (I think you're going to tell me that you have no control over the order that ECMAScript's JSON.stringify() function outputs an object's keys, and I'm going to suggest not using a function whose output you can't precisely control.)
>
> > How would you improve the description of "...by converting the top-level JSON object to an array of [key, value] pairs" (in https://github.com/bitcoin/bips/blob/master/bip-0128.mediawiki#checksum-calculation )?
>
> "...by constructing an array of arrays, where each element of the outer array is an array of two elements, the first of which is the key of a property in the object and the second of which is the value of that property."
>
> > Converting objects into key-value pairs (i.e. with the Object.entries() function) is pretty common when the order of the keys is important - i.e. when you need a consistent hash. For example it is used in the Nostr protocol before signing event objects.
>
> That's a very ECMAScript-centric notion. JSON is a generic structured data transmission format that most high-level programming languages have adopted/implemented, and not all languages have an equivalent of Object.entries(), and even when they do, it may not do the same thing as ECMAScript's. For instance, jq's "to_entries" filter converts an object into an array of objects:
>
> $ jq -c 'to_entries' <<<'{"one":1,"two":2}'
> [{"key":"one","value":1},{"key":"two","value":2}]
>
> Java's Map.entrySet() method returns a Set view of a Map, in which the elements implement the Map.Entry interface, which exposes "key" and "value" properties.
>
> C++'s std::map::iterator dereferences to a std::pair whose "first" data member holds an entry's key and whose "second" data member holds the entry's value.
>
> Etc., etc. The point is: the world does not revolve around ECMAScript, so a protocol specification that incorporates by reference a particular algorithm from ECMAScript is under-specified for most consumers.
>
> >> * What does "stringifying" mean? There are many ways to encode abstract structured data into JSON. For instance, just look at all the possible "flags" that can be passed to PHP's json_encode function: https://www.php.net/json_encode . Are non-ASCII characters to be encoded in UTF-8 or as hex escapes? How about control characters? How are quotation marks to be escaped: as a backslash and a quotation mark or as a Unicode hex escape? Is there any optional whitespace between terminals in the encoding? If so, where, and are the line endings LF or CR+LF?
> >
> > I think we should agree that "stringifying" means using Javascript's JSON.stringify() function with a single parameter (no "replacer" and no "space" parameters). All other programming languages should follow the ECMAScript specification.
>
> Similar comments apply as above. Additionally, is the output of JSON.stringify() even guaranteed to be identical across all versions and implementations of ECMAScript? For instance, might some implementations choose to escape newline characters as "\n" while others might write code point U+000A directly into the output without escaping it? Unless ECMAScript specifies *exactly* what the function is to do in every case and guarantees that the specification will never change, you're under-specifying how your object is supposed to be encoded.
>