I think the only addition on that front is `ledgerBounds`. I do not see any problem with this, as a transaction's life cycle is already coupled to the ledger life cycle.
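For concreteness, here is a minimal sketch of how such a bounds check could look; field names follow the CAP draft, but the exact XDR and boundary semantics may differ:

```cpp
#include <cstdint>

// Illustrative only; assuming maxLedger == 0 disables the upper bound,
// mirroring how maxTime behaves in timeBounds.
struct LedgerBounds
{
    uint32_t minLedger;
    uint32_t maxLedger;
};

bool
ledgerBoundsSatisfied(LedgerBounds const& lb, uint32_t currentLedgerSeq)
{
    if (currentLedgerSeq < lb.minLedger)
        return false; // analogous to txTOO_EARLY
    if (lb.maxLedger != 0 && currentLedgerSeq >= lb.maxLedger)
        return false; // analogous to txTOO_LATE
    return true;
}
```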
## Preconditions on account state
### minSeqAge/minSeqLedgerGap
I think the challenge with this precondition is that we can't keep more than one transaction per account in the mempool, as we can only check this condition for the first transaction.
The implication is that we can only flood the first transaction in a chain of transactions.
This may significantly limit the kinds of protocols these conditions can be used for.
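To make the limitation concrete, an illustrative sketch (not actual stellar-core code) of why only the head of the chain is checkable:

```cpp
#include <cstdint>

// Hypothetical shape of the per-account state the proposal tracks.
struct AccountSeqInfo
{
    int64_t seqNum;
    uint64_t seqTime; // when seqNum last changed, per the proposal
};

bool
minSeqAgeCheckable(int64_t txSeqNum, AccountSeqInfo const& acc)
{
    // For the first transaction in the chain, seqTime is known from ledger
    // state. Any later transaction's "age" only starts counting once the
    // earlier one applies and resets seqTime, so the condition cannot be
    // evaluated from the mempool.
    return txSeqNum == acc.seqNum + 1;
}
```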
As this condition is the basis for implementing "relative time locks", I can't help but notice that we're duplicating logic in the protocol.
I am wondering if there are better ways to do this using `BalanceEntry`, which already supports this (not ledger-sequence-number based, but we could add that there), since the conditions in `BalanceEntry` are already quite generic and store state.
I remember that you had a proposal (a long time ago) that leveraged "statements" stored in the ledger (where the only thing that mattered was their existence). Can't we use certain `BalanceEntry` transitions like this (maybe by introducing new operations/transaction types)?
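For reference, the claim predicates that shipped with claimable balances (CAP-0023) already encode this kind of relative condition; below is a simplified C++ mirror of that XDR (names abbreviated), shown only to illustrate the overlap with `minSeqAge`:

```cpp
// Simplified mirror of the CAP-0023 ClaimPredicate arms.
enum class ClaimPredicateType
{
    UNCONDITIONAL,
    AND,
    OR,
    NOT,
    BEFORE_ABSOLUTE_TIME, // claimable before a fixed unix time
    BEFORE_RELATIVE_TIME  // claimable within N seconds of entry creation
};

// NOT(BEFORE_RELATIVE_TIME(N)) is effectively a relative time lock:
// claimable only once N seconds have elapsed since the entry was created;
// the same shape of condition that minSeqAge expresses at the
// transaction level.
```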
### minSeqNum
This change is different from the other changes: it loosens preconditions instead of adding one.
I imagine that in the context of payment channels, the sequence number ranges are very large (on the order of millions) and bad actors have access to transactions signed with any (older) sequence number.
As a consequence, I think this condition introduces challenges around spam and fairness in transaction queues, so we'll have to think of mitigation strategies (a bad actor can submit many of those transactions ahead of the "latest" one, with the sole purpose of delaying processing).
This problem may be compounded further by `minSeqAge/minSeqLedgerGap`.
### GeneralPreconditions
Maybe not as important, but `GeneralPreconditions` does not seem future-proof: it enumerates all conditions in one blob.
It would be better to make it a vector of sorts that we can extend over time; that way we avoid building a structure that defaults to the worst case from a binary-representation point of view.
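For illustration only (this is not in the CAP), an extensible encoding could look like a list of single-condition arms; here sketched with `std::variant` standing in for an XDR union:

```cpp
#include <cstdint>
#include <variant>
#include <vector>

// One small struct per condition; names are illustrative, not the CAP's XDR.
struct TimeBounds      { int64_t minTime; int64_t maxTime; };
struct LedgerBounds    { uint32_t minLedger; uint32_t maxLedger; };
struct MinSeqNum       { int64_t minSeqNum; };
struct MinSeqAge       { uint64_t seconds; };
struct MinSeqLedgerGap { uint32_t ledgers; };

using Precondition =
    std::variant<TimeBounds, LedgerBounds, MinSeqNum, MinSeqAge,
                 MinSeqLedgerGap>;

// New condition types become new union arms over time; a transaction only
// pays (in bytes) for the conditions it actually uses.
using Preconditions = std::vector<Precondition>;
```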
> `seqTime` `seqLedger`
While the functionality is already opt-in (only transactions that use the new preconditions are affected), the proposal may have some broader testing implications. Should those be "opt-in" via a new account flag? Then only "envelope tests" would need to change. Maybe we can do that if testing as proposed is indeed too much work.
> This proposal changes TimePoint to be signed rather than unsigned
As noted in the "Backward incompatibility" section, this would break any pre-signed transaction that uses MAX_UINT64 (or any value larger than MAX_INT64) as `maxTime`. I *think* we have examples of such transactions in history, so while unlikely, they already exist.
In any case, with this change, we'd have to support both signed and unsigned TimePoints depending on protocol version.
As code (in core, Horizon, SDKs, etc.) already has to deal with this today, I am not sure the rationale for changing it is sufficient.
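A small self-contained example of the breakage, showing what a signed decoder would see for an old "never expires" `maxTime`:

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    // A pre-signed transaction today may carry MAX_UINT64 as maxTime.
    uint64_t oldMaxTime = UINT64_MAX;

    // A decoder that treats TimePoint as int64 sees a negative value,
    // i.e. a maxTime in the distant past instead of "no upper bound".
    int64_t reinterpreted = static_cast<int64_t>(oldMaxTime);
    std::printf("%lld\n", static_cast<long long>(reinterpreted)); // -1
    return 0;
}
```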
> Transaction forwarding and ordering
> A transaction with a non-zero minSeqAge or minSeqLedgerGap must be treated as invalid and not forwarded--as if its minTime has not yet arrived--if either A) the appropriate condition (minSeqAge or minSeqLedgerGap) does not yet hold, or B) there are pending valid transactions with lower sequence numbers on the same sourceAccount
A few questions:
While the protocol described below under "Two-way payment channel", which uses separate D_i and C_i transactions, is immune to such a delay attack, I wonder if we could harden the semantics a bit, as it's quite a footgun.
For example: if we made it illegal to mix a non-zero `minSeqNum` with `minSeqAge` (i.e. made them mutually exclusive), it could nudge people towards safer smart contracts (see the sketch after these questions).
3. Should we relax the condition in "B" to just be "first transaction received wins" (i.e. if there is already a transaction in the queue for the same source account, the transaction with the "minSeq" condition gets rejected)? Note that this does not fix the problem from "2.", but it at least makes it more difficult for an attacker to reliably submit older transactions, and we end up with a simpler implementation.
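To make the suggested mutual-exclusion rule concrete, a hedged sketch of the validity check (field names are illustrative, not the CAP's XDR):

```cpp
#include <cstdint>
#include <optional>

// Hypothetical shape of the new precondition fields.
struct PreconditionsV2
{
    std::optional<int64_t> minSeqNum; // relaxed lower bound, if present
    uint64_t minSeqAge = 0;
    uint32_t minSeqLedgerGap = 0;
};

bool
preconditionsWellFormed(PreconditionsV2 const& p)
{
    bool relaxesSeqNum = p.minSeqNum.has_value();
    bool hasRelativeLock = p.minSeqAge != 0 || p.minSeqLedgerGap != 0;
    // Mutually exclusive: relaxing the sequence number while also relying
    // on a relative lock is the footgun discussed above.
    return !(relaxesSeqNum && hasRelativeLock);
}
```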
> Transaction Validation
> All but the transaction with the lowest seqNum on a given sourceAccount must have 0 for the minSeqAge and minSeqLedgerGap fields.
I am not sure this is that useful: we should simplify it to simply disallow mixing `minSeq` transactions with any other transaction from the same sourceAccount (following the rule used for how transactions get flooded).
A detail, but "Apply order" needs to be specified: we still sort by `tx.seqNum` but allow gaps.
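A minimal sketch of that apply-order rule, assuming a simple per-account sort:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Tx
{
    int64_t seqNum;
};

void
sortByApplyOrder(std::vector<Tx>& txsForAccount)
{
    std::sort(txsForAccount.begin(), txsForAccount.end(),
              [](Tx const& a, Tx const& b) { return a.seqNum < b.seqNum; });
    // Gaps are fine: e.g. seqNums {5, 17, 100} apply in that order, each
    // one bumping the account's seqNum to its own value.
}
```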
The point on "dual validation" is confusing: there is no mechanism that would cause `maxTime` and `maxLedger` to be valid when the transaction set is validated but invalid during externalize (as per CAP-0034). At this point I don't think we would allow any condition to trigger those error codes: the only invalidations that can occur like this are seqnum related (bump sequence) or signature/threshold related, but those don't fall under `txTOO_LATE` / `txTOO_EARLY`.