Automated Market Makers CAP Draft

Orbit Lens

Mar 26, 2021, 4:26:40 PM
to Stellar Developers

Preamble

CAP: To Be Assigned
Title: Automated Market Makers
Working Group:
    Owner: Nicolas Barry <nic...@stellar.org>
    Authors: OrbitLens <or...@stellar.expert>
    Consulted: Jon Jove <j...@stellar.org>, Nikhil Saraf <nik...@stellar.org>, Phil Meng <ph...@stellar.org>, Leigh McCulloch <le...@stellar.org>, Tomer Weller <to...@stellar.org>
Status: Draft
Created: 2021-03-03
Protocol version: TBD

Simple Summary

This proposal introduces liquidity pools and automated market makers on the
protocol level. AMMs rely on a mathematical formula to quote asset prices. A
liquidity pool is a ledger entry that contains funds deposited by users
(liquidity providers). In return for providing liquidity to the protocol, users
earn fees from trades. The described approach of the interleaved order execution
combines the liquidity of existing orderbooks with liquidity pools.

Motivation

Orderbook market-making (especially on-chain) can be quite tricky. It requires
trading bots that constantly track external asset prices and adjust on-chain
orders accordingly. In turn, this process results in endless offer adjustments
that clog ledger history.

Market makers need to provision liquidity and maintain inventories. For a few
trading pairs this is more or less straightforward, but ecosystem expansion
brings new assets and new trading pairs. Consequently, inventory requirements
increase, as does the number of operations required to maintain positions on
all orderbooks.

On the other hand, automated market makers provide natural incentives for
liquidity crowdsourcing, making it much easier for ordinary users to participate
in the process while gaining interest on their long-term holdings.

Asset issuers don't need to wait until a token attracts a critical mass of
users. They can bootstrap several trading pairs for a newly issued asset by
merely depositing tokens to the pool or engaging community users to provision
liquidity. This simplifies the process of starting a new project on Stellar and
provides a powerful marketing flywheel for early-stage tokens.

The AMM concept implies that no third-party company holds user funds at any
point, and the algorithm itself doesn't rely on external data. Therefore,
potential regulatory risks are limited compared to the classic exchange design.

Liquidity pools don't store any complex information, don't require regular
position price adjustments, and work completely deterministically. From the
perspective of the on-chain execution, those characteristics offer much better
scalability compared to the existing DEX.

The proposed interleaved order execution against both the orderbook and the
liquidity pool provides a familiar exchange experience combined with the
ability to place on-chain limit orders, while fully incorporating the benefits
of shared liquidity pools and hiding the underlying technical details from
end-users. Users always get the best possible exchange price based on the
combined liquidity.

Abstract

This proposal brings the concept of shared liquidity pools with automated market
making to the protocol. Users deposit funds to a pool, providing liquidity to
the automated market maker execution engine, which quotes asset prices using an
algorithm that derives the price directly from the amounts of tokens deposited
to the pool.

Pool fees charged on every executed trade are accumulated in the pool,
increasing its liquidity. A user can withdraw the pool stake plus proportional
accrued interest from the pool. Collected interest incentivizes users to deposit
their funds to the pool, participating in the collective liquidity allocation.

Specification

  • New ledger entries LiquidityPoolEntry and LiquidityStakeEntry
  • New operations DepositPoolLiquidityOp and WithdrawPoolLiquidityOp
  • LedgerHeader extended with new settings
  • Semantics altered for existing operations ManageSellOfferOp,
    ManageBuyOfferOp, CreatePassiveSellOfferOp, PathPaymentStrictReceiveOp,
    PathPaymentStrictSendOp

XDR changes

--- a/src/xdr/Stellar-ledger-entries.x
+++ b/src/xdr/Stellar-ledger-entries.x
@@ -403,6 +403,43 @@ struct ClaimableBalanceEntry
     ext;
 };
 
+/* Contains information about current balances of the liquidity pool*/
+struct LiquidityPoolEntry
+{
+    uint32 poolID;  // pool invariant identifier
+    Asset assetA;   // asset A of the liquidity pool
+    Asset assetB;   // asset B of the liquidity pool
+    int64 amountA;  // current amount of asset A in the pool
+    int64 amountB;  // current amount of asset B in the pool
+    int64 stakes;   // total number of pool shares issued by the pool
+
+    // reserved for future use
+    union switch (int v)
+    {
+    case 0:
+        void;
+    }
+    ext;
+};
+
+/* Represents information about the account stake in the pool */
+struct LiquidityStakeEntry
+{
+    AccountID accountID; // account this liquidity stake belongs to
+    uint32 poolID;       // pool invariant identifier
+    Asset assetA;        // asset A of the liquidity pool
+    Asset assetB;        // asset B of the liquidity pool
+    int64 stake;         // share of the pool that belongs to the account
+
+    // reserved for future use
+    union switch (int v)
+    {
+    case 0:
+        void;
+    }
+    ext;
+};
+

@@ -431,6 +468,10 @@ struct LedgerEntry
         DataEntry data;
     case CLAIMABLE_BALANCE:
         ClaimableBalanceEntry claimableBalance;
+    case LIQUIDITY_POOL:
+        LiquidityPoolEntry liquidityPool;
+    case LIQUIDITY_STAKE:
+        LiquidityStakeEntry liquidityStake;
     }
     data;
 
@@ -479,6 +520,29 @@ case CLAIMABLE_BALANCE:
     {
         ClaimableBalanceID balanceID;
     } claimableBalance;
+
+case LIQUIDITY_POOL:
+    struct
+    {
+        uint32 poolID;
+        Asset assetA;
+        Asset assetB;
+    } liquidityPool;
+
+case LIQUIDITY_STAKE:
+    struct
+    {
+        uint32 poolID;
+        AccountID accountID;
+        Asset assetA;
+        Asset assetB;
+    } liquidityStake;
 };
--- a/src/xdr/Stellar-transaction.x
+++ b/src/xdr/Stellar-transaction.x
@@ -48,7 +48,9 @@ enum OperationType
     END_SPONSORING_FUTURE_RESERVES = 17,
     REVOKE_SPONSORSHIP = 18,
     CLAWBACK = 19,
-    CLAWBACK_CLAIMABLE_BALANCE = 20
+    CLAWBACK_CLAIMABLE_BALANCE = 20,
+    DEPOSIT_POOL_LIQUIDITY = 21,
+    WITHDRAW_POOL_LIQUIDITY = 22
 };
 
@@ -390,6 +392,38 @@ 
 
+/* Deposits funds to the liquidity pool
+
+    Threshold: med
+
+    Result: DepositPoolLiquidityResult
+*/
+struct DepositPoolLiquidityOp
+{
+    uint32 poolID;    // pool invariant identifier
+    Asset assetA;     // asset A of the liquidity pool
+    Asset assetB;     // asset B of the liquidity pool
+    int64 maxAmountA; // maximum amount of asset A the user is willing to deposit
+    int64 maxAmountB; // maximum amount of asset B the user is willing to deposit
+};
+
+/* Withdraws all funds that belong to the account from the liquidity pool
+
+    Threshold: med
+
+    Result: WithdrawPoolLiquidityResult
+*/
+struct WithdrawPoolLiquidityOp
+{
+    uint32 poolID; // pool invariant identifier
+    Asset assetA;  // asset A of the liquidity pool
+    Asset assetB;  // asset B of the liquidity pool
+};
+

@@ -1186,6 +1220,67 @@ 
 
+/******* DepositPoolLiquidity Result ********/
+
+enum DepositPoolLiquidityResultCode
+{
+    // codes considered as "success" for the operation
+    DEPOSIT_SUCCESS = 0,
+    // codes considered as "failure" for the operation
+    DEPOSIT_MALFORMED = -1,     // bad input
+    DEPOSIT_NO_ISSUER = -2,     // could not find the issuer of one of the assets
+    DEPOSIT_POOL_NOT_ALLOWED = -3, // invalid pool assets combination
+    DEPOSIT_INSUFFICIENT_AMOUNT = -4, // not enough funds for a deposit
+    DEPOSIT_ALREADY_EXISTS = -5, // account has a stake in the pool already
+    DEPOSIT_LOW_RESERVE = -6    // base reserve requirement not met
+};
+
+struct DepositPoolLiquiditySuccessResult
+{
+    // liquidity pool stake that has been created
+    LiquidityStakeEntry stake;
+};
+
+union DepositPoolLiquidityResult switch (
+    DepositPoolLiquidityResultCode code)
+{
+case DEPOSIT_SUCCESS:
+    DepositPoolLiquiditySuccessResult success;
+default:
+    void;
+};
+
+/******* WithdrawPoolLiquidity Result ********/
+
+enum WithdrawPoolLiquidityResultCode
+{
+    // codes considered as "success" for the operation
+    WITHDRAW_STAKE_SUCCESS = 0,
+    // codes considered as "failure" for the operation
+    WITHDRAW_STAKE_MALFORMED = -1,    // bad input
+    WITHDRAW_STAKE_NOT_FOUND = -2,    // account doesn't have a stake in the pool
+    WITHDRAW_STAKE_NO_TRUSTLINE = -3, // account does not have an established and authorized trustline for one of the assets
+    WITHDRAW_STAKE_TOO_EARLY = -4     // an attempt to withdraw funds before the lockup period ends
+};
+
+struct WithdrawPoolLiquiditySuccessResult
+{
+    uint32 poolID;  // pool invariant identifier
+    Asset assetA;   // asset A of the liquidity pool
+    Asset assetB;   // asset B of the liquidity pool
+    int64 amountA;  // amount of asset A withdrawn from the pool
+    int64 amountB;  // amount of asset B withdrawn from the pool
+    int64 stake;    // pool share that has been redeemed
+};
+
+union WithdrawPoolLiquidityResult switch (
+    WithdrawPoolLiquidityResultCode code)
+{
+case WITHDRAW_STAKE_SUCCESS:
+    WithdrawPoolLiquiditySuccessResult success;
+default:
+    void;
+};
+
--- a/src/xdr/Stellar-ledger.x
+++ b/src/xdr/Stellar-ledger.x
@@ -84,6 +84,8 @@ struct LedgerHeader
     {
     case 0:
         void;
+    case 1:
+        uint32 liquidtyPoolYield; // fee charged by liquidity pools on each trade in permile (‰)    
     }
     ext;
 };

Semantics

The modified semantics of trading-related operations presented in this CAP
drastically reduces the number of new interaction flows. Liquidity from the
pools becomes immediately available to existing Stellar applications through
the familiar offer and path payment operations.

In this section, a constant product invariant (x*y=k) is used for all
calculations. Other invariants can be implemented as separate pools with
different price quotation formulas and execution conditions.
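
For readers less familiar with constant product pools, a minimal sketch (not part of the specification) of how such a pool quotes a swap is shown below, with the fee expressed in per mille like the proposed liquidtyPoolYield setting. The names and the rounding choice are illustrative assumptions only.

```
# Illustrative sketch: constant product (x*y=k) quoting with a per-mille fee.
# Names and rounding choices are hypothetical, not part of the proposal.

def spot_price(pool_a: int, pool_b: int) -> float:
    """Current pool price P = Ap / Bp (asset A per unit of asset B)."""
    return pool_a / pool_b

def quote_buy_a(pool_a: int, pool_b: int, sell_b: int, fee_permille: int = 3) -> int:
    """Amount of asset A received for selling sell_b of asset B; the fee stays in the pool."""
    sell_b_after_fee = sell_b * (1000 - fee_permille) // 1000
    k = pool_a * pool_b
    denom = pool_b + sell_b_after_fee
    new_pool_a = (k + denom - 1) // denom   # ceiling: rounding favors the pool
    return pool_a - new_pool_a
```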

DepositPoolLiquidityOp

The DepositPoolLiquidityOp operation transfers user funds to the selected
liquidity pool, represented by a LiquidityPoolEntry.

  • Before processing a deposit, a lookup of the issuer accounts for assetA and
    assetB is performed. If an asset is not the native asset and its issuer
    account does not exist, a DEPOSIT_NO_ISSUER error is returned.
  • Basic validation ensures that a given combination of assets is allowed. For
    example, assetA=assetB should result in a DEPOSIT_POOL_NOT_ALLOWED error.
    This version of the proposal doesn't imply any other restrictions, but this
    may change in the future.
  • The node performs a lookup of a LiquidityStakeEntry by operation source
    account, poolID, assetA, and assetB. If a corresponding LiquidityStakeEntry
    is found, a DEPOSIT_ALREADY_EXISTS error is returned.
  • The node loads the source account balances for assetA and assetB. If any of
    the balances does not exist, a DEPOSIT_INSUFFICIENT_AMOUNT error is returned.
  • The actual deposit amount is calculated from maxAmountA and maxAmountB based
    on the current liquidity pool price. The current price is P=Ap/Bp, where Ap
    and Bp are the amounts of token A and token B currently deposited to the
    pool. The maximum effective amounts that can be deposited are Bdm=Ad/P and
    Adm=Bd*P, where Ad=min(maxAmountA,accountBalanceA) and
    Bd=min(maxAmountB,accountBalanceB). If the actual deposited amount of either
    token equals zero, a DEPOSIT_INSUFFICIENT_AMOUNT error is returned. If
    maxAmountA or maxAmountB provided in the operation equals zero, the node
    takes the value from the matching source account balance entry.
  • The stake weight is calculated as S=A*B*Sp/(Ap*Bp), where S is the share of
    the pool an account obtains after the deposit, A and B are the actual
    amounts of tokens to deposit, Sp is the total number of stakes currently in
    the pool (the value from the LiquidityPoolEntry), and Ap and Bp are the
    amounts of token A and token B currently in the pool. If S=0 (which can
    happen with a very small stake or as a result of rounding), the node returns
    DEPOSIT_INSUFFICIENT_AMOUNT. If the native asset balance does not satisfy
    the base reserve requirement, a DEPOSIT_LOW_RESERVE error is returned.
  • If the LiquidityPoolEntry does not exist on-chain (this is the first
    deposit), it is automatically created. The stake weight for the deposit in
    this case is calculated as S=min(A,B). Both calculations are illustrated in
    the sketch after this list.
  • The node creates LiquidityStakeEntry with stake=S.
  • numSubEntries for the source account incremented.
  • The node modifies LiquidityPoolEntry setting amountA+=A, amountB+=B,
    stakes+=S.
  • DEPOSIT_SUCCESS code returned.
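
The deposit calculation above can be summarized with a short sketch. It assumes integer amounts rounded down, interprets the actual deposit as min(Ad, Adm) and min(Bd, Bdm), and raises the error codes as exceptions; none of the names or rounding choices are normative.

```
# Illustrative sketch of the DepositPoolLiquidityOp calculation described above.

def compute_deposit(max_a, max_b, balance_a, balance_b, pool_a, pool_b, pool_stakes):
    """Return (deposit_a, deposit_b, stake) or raise with the relevant error code."""
    # A zero maxAmount falls back to the full account balance for that asset.
    ad = min(max_a, balance_a) if max_a > 0 else balance_a
    bd = min(max_b, balance_b) if max_b > 0 else balance_b

    if pool_stakes == 0:
        # First deposit bootstraps the pool: S = min(A, B)
        return ad, bd, min(ad, bd)

    price = pool_a / pool_b                  # P = Ap / Bp
    deposit_a = min(ad, int(bd * price))     # capped by Adm = Bd * P
    deposit_b = min(bd, int(ad / price))     # capped by Bdm = Ad / P
    if deposit_a == 0 or deposit_b == 0:
        raise ValueError("DEPOSIT_INSUFFICIENT_AMOUNT")

    # S = A * B * Sp / (Ap * Bp)
    stake = deposit_a * deposit_b * pool_stakes // (pool_a * pool_b)
    if stake == 0:
        raise ValueError("DEPOSIT_INSUFFICIENT_AMOUNT")
    return deposit_a, deposit_b, stake
```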

WithdrawPoolLiquidityOp

The WithdrawPoolLiquidityOp operation withdraws funds from a liquidity pool
proportionally to the account's stake size.

  • The node performs a lookup of a LiquidityStakeEntry by operation source
    account, poolID, assetA, and assetB. If a corresponding LiquidityStakeEntry
    is not found, a WITHDRAW_STAKE_NOT_FOUND error is returned.
  • The node loads the liquidity pool information.
  • The amount of tokens to withdraw is computed from Kw=S*Ap*Bp/Sp, where Kw is
    the constant product of the assets to withdraw, S is the share of the
    account in the pool from the LiquidityStakeEntry, Sp is the total number of
    pool shares from the LiquidityPoolEntry, and Ap and Bp are the current
    amounts of asset A and asset B in the pool respectively. The current pool
    price is P=Ap/Bp.
  • The amounts of assets to withdraw are calculated as
    A=√(Kw*P)=Ap√(S/Sp)
    B=√(Kw/P)=Bp√(S/Sp)
    (see the sketch after this list).
  • If the LiquidityStakeEntry was created less than 24 hours ago, a
    WITHDRAW_STAKE_TOO_EARLY error is returned.
  • Trustline info is loaded for assetA and assetB. If the source account does
    not have a trustline for one of the assets, the trustline is not authorized,
    or a trustline limit prevents the transfer, a WITHDRAW_STAKE_NO_TRUSTLINE
    error is returned.
  • numSubEntries for the source account decremented.
  • LiquidityPoolEntry updated: amountA-=A, amountB-=B, stakes-=S.
  • Withdrawn funds get transferred to the source account balances.
  • LiquidityStakeEntry removed.
  • WITHDRAW_STAKE_SUCCESS code returned.
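
A minimal sketch of the withdrawal amounts per the formulas above, assuming rounding down in favor of the pool (the draft does not fix the rounding rules; all names are hypothetical):

```
import math

# Illustrative sketch of WithdrawPoolLiquidityOp: A = Ap*sqrt(S/Sp), B = Bp*sqrt(S/Sp).

def compute_withdrawal(stake, pool_a, pool_b, pool_stakes):
    """Return (amount_a, amount_b) redeemed for the account's full stake."""
    share = math.sqrt(stake / pool_stakes)
    amount_a = int(pool_a * share)   # assumed: round down so the pool never over-pays
    amount_b = int(pool_b * share)
    return amount_a, amount_b
```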

To avoid ambiguity and simplify the aggregation process, assetA and assetB in
LiquidityPoolEntry and LiquidityStakeEntry should always be stored in sorted
order upon insertion. The comparator function takes into account the asset
type, asset code, and asset issuer address, in that order.
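
One possible implementation of this ordering, assuming asset objects that expose asset_type, code, and issuer fields (hypothetical names; the real comparator would operate on the XDR Asset union):

```
# Illustrative sketch of the canonical asset ordering described above:
# compare asset type, then asset code, then issuer address.

def asset_sort_key(asset):
    """The asset is assumed to expose asset_type, code, and issuer (empty for the native asset)."""
    return (asset.asset_type, asset.code or "", asset.issuer or "")

def ordered_pair(asset_x, asset_y):
    """Return (assetA, assetB) in canonical order for LiquidityPoolEntry / LiquidityStakeEntry."""
    return tuple(sorted((asset_x, asset_y), key=asset_sort_key))
```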

LedgerHeader changes

The ledger header contains a new field that can be adjusted by validators during
the voting process.

liquidtyPoolYield represents the fee charged by a liquidity pool on each trade,
expressed in per mille (‰), so poolFeeCharged=tradeAmount*liquidtyPoolYield/1000.
For example, with liquidtyPoolYield=3 (0.3%), a trade of 10,000 units pays a fee
of 30 units to the pool.

Semantic changes for existing operations

Behavior is updated for the ManageSellOfferOp, ManageBuyOfferOp,
CreatePassiveSellOfferOp, PathPaymentStrictReceiveOp, and
PathPaymentStrictSendOp operations.

When a new (taker) order arrives, the DEX engine loads the current state of all
liquidity pools for the traded asset pair, fetches the available crossing orders
(maker orders) from the orderbook, and iterates through them.

On every step, it checks whether the next maker order crosses the price of the
taker order. Before executing the maker order, the engine estimates the number
of tokens that can be traded on each liquidity pool for the same trading pair
up to the price of the current maker order.

The maximum amount of tokens A that can be bought from the pool can be expressed
as Aq=Ap-Bp*(1+F)/P, where F is the trading pool fee, P is the maximum price
(equal to the currently processed maker order price in this case), and Ap and Bp
are the current amounts of asset A and asset B in the pool respectively.

If Aq>0, the corresponding amount of tokens is deducted from the pool and
added to the variable accumulating the total amount traded on the pool.

After that, the maker order itself is matched against the remaining taker
amount, and so on, until the taker order is executed in full. If the outstanding
amount can't be executed on either the orderbook or the pool, a new order with
the remaining amount is created on the orderbook.

In the end, pool settlement occurs: the traded asset A tokens are deducted from
the pool and added to the account balance, and a matching amount of asset B is
transferred from the account balance to the pool.

A trade against a pool generates a ClaimOfferAtom result with sellerID and
offerID equal to zero.
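
A rough sketch of the interleaved matching loop described above is shown below. It applies the draft's Aq=Ap-Bp*(1+F)/P formula verbatim; the exact price semantics and rounding are still under discussion later in this thread, and all names here are hypothetical.

```
# Rough sketch of interleaved orderbook + liquidity pool matching for a taker buying asset A.
# maker_orders: (price, amount) tuples, best price first, already filtered to orders that
# cross the taker's limit price. pool: dict with the current amounts "a" and "b".

def max_pool_amount_at_price(pool_a, pool_b, max_price, fee):
    """Draft formula: amount of asset A the pool can sell before its quote reaches max_price."""
    return max(0.0, pool_a - pool_b * (1 + fee) / max_price)

def match_taker_order(remaining, maker_orders, pool, fee):
    filled_from_pool = 0.0
    for price, maker_amount in maker_orders:
        if remaining <= 0:
            break
        # 1. Trade against the pool up to the current maker order price.
        from_pool = min(remaining, max_pool_amount_at_price(pool["a"], pool["b"], price, fee))
        if from_pool > 0:
            pool["a"] -= from_pool          # asset B settlement happens once, at the end
            filled_from_pool += from_pool
            remaining -= from_pool
        # 2. Then cross the maker order itself.
        remaining -= min(remaining, maker_amount)
    # For ManageOffer ops, any leftover amount becomes a resting offer on the orderbook.
    return remaining, filled_from_pool
```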

Design Rationale

Orderbook+AMM Execution

Implementing liquidity pools separately from the existing DEX has several
shortcomings:

  • The new AMM interface requires new trading operations. In addition to the
    existing ManageSellOfferOp and ManageBuyOfferOp, it also requires
    something like SwapSellOp and SwapBuyOp for the interaction with AMM.
    Another two operations required for path payments:
    SwapPathPaymentStrictReceiveOp and SwapPathPaymentStrictSendOp.
  • While having a separate AMM alongside the orderbook looks like a simpler
    solution, in reality it results in significantly larger codebase changes
    (more operations and more use-cases to handle), a lot of work on the Horizon
    side, and much more effort from ecosystem developers.
  • The trading process becomes confusing for regular users. What's the
    difference between an order and a swap? How do you get the best rate?
    Sooner or later wallets and exchange interfaces would come to the rescue,
    providing hints in the interface and maybe even aggregating information
    across the liquidity pool and orderbook for a given asset pair to ensure the
    best possible exchange price. That's feasible, but not very user-friendly,
    and may lead to confusion.
  • Fragmented liquidity means that for any trade or path payment larger than a
    few dollars, a wallet needs to perform several trades (against the orderbook
    and liquidity pools) in order to deliver an adequate price to a user,
    increasing the number of transactions necessary for a swap. In the case of
    path payments, users would have to choose between operation atomicity with
    an inferior price (because the path payment taps either a liquidity pool or
    an orderbook) and a better price at the cost of atomicity.
  • The long-existing problem of bots spamming the ledger while competing for
    arbitrage opportunities becomes substantially worse, because independent
    AMMs rely on arbitrage actors to rebalance pools whenever the AMM price
    drifts too far from global market prices. This presents much more profitable
    opportunities than doing circular path payments across several orderbook
    trading pairs. Given the limited capacity of the ledger, this could paralyze
    the whole network once competition between arbitrage bots and market makers
    inflates transaction fees. Most of the time, it would be impossible to
    execute a simple payment because the whole ledger capacity would be flooded
    with arbitrage transactions sent in response to any DEX offer update from
    market makers.

Advantages of the proposed approach:

  • Users always receive the best possible price, as the trade is executed
    against the entire liquidity available for a given trading pair.
  • The orderbook and liquidity pool always remain in a balanced state, which
    means there are no arbitrage opportunities between the pool and the
    orderbook. The trading engine automatically conducts arbitrage rebalancing
    on each trade under the hood, eliminating the need for external arbitrage
    actors.
  • There are no reasonable use-cases that require trading exclusively on the
    pool. Price manipulation is probably the only applicable example of
    pool-only swaps.
  • Smaller attack surface, since there is no way to trade on a pool directly.
    This also automatically prevents attacks based on an imbalanced oracle state
    and significantly increases the cost of intentional price manipulation, as
    the attacker has to trade against the entire aggregated liquidity of the
    given asset pair.
  • Better developer experience and interoperability between assets and
    products. Keeping things simple helps avoid developer mistakes and
    streamlines the experience.
  • Immediate availability of liquidity pools for existing applications without
    the need to upgrade the codebase.
  • It's fairly easy to add new pool-specific swap operations in the future if
    such a necessity emerges. Deprecating swap operations in favor of the
    interleaved orderbook+AMM execution, on the other hand, looks considerably
    more complex.

Deposit/Withdraw Semantics

The DepositPoolLiquidityOp and WithdrawPoolLiquidityOp operations have
intentionally simplified interfaces. WithdrawPoolLiquidityOp removes all
liquidity supplied by the account from the given pool. This makes partial
withdrawal somewhat awkward, because a client needs to remove all the liquidity
and then call DepositPoolLiquidityOp to return the desired portion to the pool.
However, it makes the overall process much more reliable, especially under
volatile prices. Both operations can be executed in the same transaction, so
overall this looks like the best option.

The proposed enforcement of a minimal pool deposit retention time results in
more predictable liquidity allocation and helps avoid frequent switching between
pools by users looking to maximize profits by providing liquidity only to the
most active markets.

Other Pool Invariants Support

The poolID identifier present in the new operations and ledger entries provides
a way to use more than one pool per trading pair, with different price quotation
functions and execution parameters.

Protocol Upgrade Transition

Backwards Incompatibilities

The proposed changes do not affect existing ledger entries. The updated trading
operation semantics are consistent with the current implementation and do not
require any updates on the client side.

This CAP should not cause a breaking change in existing implementations.

Resource Utilization

Price quotation on the liquidity pool adds CPU workload to the trading process.
However, this may be compensated by fewer orderbook orders being matched.

The constant product formula may require 128-bit computations, which can be
significantly slower than 64-bit arithmetic. This can potentially be addressed
by modifying ledger entries to store values derived from the asset balances in
the pool (L=√(A*B) and √P=√(B/A)).
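
As a sanity check on that compact representation (a sketch, not part of the proposal): the balances remain recoverable because A=L/√P and B=L*√P.

```
import math

# Sketch: storing L = sqrt(A*B) and sqrtP = sqrt(B/A) is lossless up to rounding,
# since A = L / sqrtP and B = L * sqrtP.

def to_compact(amount_a: float, amount_b: float):
    return math.sqrt(amount_a * amount_b), math.sqrt(amount_b / amount_a)

def from_compact(liquidity: float, sqrt_price: float):
    return liquidity / sqrt_price, liquidity * sqrt_price
```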

Every LiquidityPoolEntry requires storage space in the ledger, but unlike
LiquidityStakeEntry it is not backed by the account base reserve.

Security Concerns

TBD

Marcin Olszowy // COINQVEST

Mar 26, 2021, 7:22:12 PM
to stell...@googlegroups.com
Extremely excited about this. There is so much goodness in this proposal.

This can be a true catalyst for additional anchors on the Network and more liquidity on the DEX. It's hard to become an anchor because it involves continuous market making. Market making also requires capital. Channeling all of this into liquidity pools addresses this beautifully. This CAP lowers the entry barrier for anchors and the built-in yield incentives can potentially drive up liquidity significantly.

Can't wait to hopefully see this materialize.

Jem

Mar 27, 2021, 11:46:31 PM
to Stellar Developers
This is an excellent suggestion. Order book based exchanges are ill suited to L1s and AMMs are a good & practical alternative. I'm very happy to see a proposal for an AMM solution in core.

The timing is fun. Uniswap v3 is innovating AMMs to the next level. Is it possible to build concentrated liquidity into this proposal? Without it, it will seem a step behind best practice.

Nicolas Barry

Mar 29, 2021, 3:23:56 PM
to Jem, Stellar Developers
Good question.

I would prefer to get something that works sooner rather than later, so as to provide people some access to features like this.
Then, we should continue to iterate on this as part of our regular protocol work; as a consequence, the way I am going to evaluate protocol proposals in this space (can't speak for other reviewers) will put more emphasis on how to future-proof the thing and ensure it's easy to evolve.

Unlike Uniswap, we already have the Stellar DEX that provides full control to "power users" (in V3, people are expected to maintain their positions on a regular basis). Also, the network capacity/latency/fees that we have on Stellar are vastly different from what people get with Uniswap V2 on Ethereum, and just that needs careful analysis (for example: because network fees are so high on Ethereum, certain front running/arbitraging scenarios are not that practical there, whereas they may be on Stellar).

As for Uniswap V3-specific things, there are still a bunch of questions that are unclear to me until it gets battle tested: in particular, there are probably a bunch of new ways to front run people that will only get fully tested when it gets released on L2 solutions like Optimism.

Nicolas


Nikhil Saraf

Mar 29, 2021, 5:11:57 PM
to Stellar Developers
Hey Orbit!

Thank you for sharing this. Awesome to see this, it's very exciting!

Wanted to share some thoughts / comments on some parts of the proposal like we discussed, so sharing here.

1. LiquidityStakeEntry - maybe we don’t need the assetA, assetB fields here since it’s specified on the poolID? At least we shouldn’t save this on-chain (db) since we can find this out by a join and don’t need to duplicate asset information for every LiquidityStakeEntry.

2. DepositPoolLiquidityOp - Is it possible to use this operation to add more to our already staked amount? (i.e. add more liquidity)

3. WithdrawPoolLiquidityOp - maybe we can specify only units of shares we want to withdraw? Or specify as a % instead of always withdraw 100%? Users may not want to withdraw 100% always because that is a taxable event.

4. Error codes for deposit:
- rename: DEPOSIT_NO_ISSUER to DEPOSIT_INVALID_ASSET_A, DEPOSIT_INVALID_ASSET_B, DEPOSIT_INVALID_ASSET_AB. This covers the case where it has an issuer but code / combination of code:issuer is invalid. By separating out A and B we also cover the case where we can identify which asset was invalid or if both were invalid. If not required then we can just generalize to DEPOSIT_INVALID_ASSET instead of DEPOSIT_NO_ISSUER.

5. Deposit success result
- also include oldStake in the result?

6. WithdrawPoolLiquidityResultCode
- WITHDRAW_STAKE_NO_TRUSTLINE - would it be a good idea to make it invalid (fail) to remove a trustline if you have a stake in a pool for that asset?
- add: NOT_ENOUGH_TRUST - it’s possible that the user will try to withdraw tokens but the withdrawal will result in the user’s account holdings more tokens than what they trust. This should be a failure scenario since trustlines have an upper limit.

7. Stake weights calculation - Let's assume BTC/USD = $50,000, the user should deposit say $50,000 and 1 BTC so there is equal amounts of "value" of each token deposited. However, if the user deposits 0.1 BTC and $50,000 (incorrect ratio) then they have deposited more value in USD than they did in BTC.
- Should this be allowed?
- Should the protocol only accept $5,000 USD as the max USD asset value (because that's what corresponds to the lower "value" amount they deposited in the form of BTC = 0.1 BTC)?
- If we allow "asymmetric deposits" then that would skew the price of the pool, arbitrageurs would correct the price to make a profit, existing pool holders would make a profit from the fee paid by arbitrageurs, and the only loss-maker in this situation would be the depositor. If the depositor tried to take funds out of the pool in such a situation and the pre-deposit price = the pre-withdraw price in this situation of an "asymmetric deposit" then the total "value" they would get back would be less than the $55,000 of value they deposited.

8. Assets to withdraw -- not sure I understand why this is calculated as such:

A=√(Kw*P)=Ap√(S/Sp)
B=√(Kw/P)=Bp√(S/Sp)
Not sure if I'm thinking about this correctly, but can it be this?
A=Ap * (S/Sp)
B=Bp * (S/Sp)
(i.e. if I own 10% of the staked tokens then give me 10% of Ap and Bp each)

9. Withdraw stake too early -- if we were to allow users to add more deposits after their initial deposits, maybe we can add a lastDepositLedger field on the LiquidityStakeEntry and lock the withdrawals of the entire amount for 24 hours. This seems to be the simplest solution that allows follow-on deposits without getting into situations where we have to keep track of how much was deposited at what time so we unlock amounts correctly.

10. Maximum amount of tokens of assetA that can be bought from the pool - this section describes how many tokens of Aq we can take from the pool before we hit the first maker order -- what is the calculation for what price we will use?
- I think the invariant in this situation should be that if the users buys and sells the same small delta amount from the pool one after the other then the user should end up with less money than they started with (because that is consumed by the fee of the pool).

11. SwapSellOp and SwapBuyOp - What happens when a user wants to swap an amount that would have crossed the first maker order on the DEX if they had used a ManageOffer?
Ideally, stellar-core would factor in the maker order on the DEX to give them a better price than if they just consumed from the AMM. This sounds very similar to the updated implementation for the ManageOffer ops -- the only difference being that there is no price associated with the operation and it can't stand on the books if partially filled, i.e. it is a market order.
Following suggestions based on my above understanding (which may be incorrect):
- If this is the only difference then maybe we can reuse a lot of the logic from ManageOffer and basically implement market orders into core with your suggested modifications? the same applies to the SwapPathPaymentStrictReceiveOp and SwapPathPaymentStrictSendOp.
- Alternatively, since it is dangerous to have a market order, maybe we do not offer these operations natively in stellar-core? Instead, we can add a field to the ManageOffer operations that will not leave any residual amount on the books and the UI can implement a limit price that is, say 5%, above current price and then execute via a ManageOffer. The UI can display this as a swap operation but it has a limit price instead of being a more dangerous market order. This is somewhat similar to how Uniswap is able to tell you that you will get a minimum of X number of units for your swap (haven't looked at their implementation, so I am guessing they do this).

12. Multiple pools -- as a v1 I think it will be easier and more manageable to implement only 1 pool per trading pair so liquidity is not fragmented. We can enable multiple pools in a future upgrade.
- This of course raises the question of what the fee will be for the one pool per market pair. I think this can be standardized to a reasonable value of say 0.3% for now across all pools and we can always upgrade in a future protocol upgrade. This reduces the possibility of choice overload on the user to choose between a large number of competing pool options and configurations for v1.

13. LiquidityPoolEntry reserve -- can we charge a reserve against the account that creates the pool?
- Alternatively, since pools are not owned by anybody (and maybe cannot be deleted also), we could have the "reserve" amount burned from the account that creates the pool? -- or moved to a locked account.



Markus Paulson-Luna

Mar 30, 2021, 9:27:10 AM
to Stellar Developers
I'm a big fan of this proposal. It takes the right approach to LPs (Liquidity Pools) on Stellar by integrating them with the DEX. Segregating the two ecosystems and thereby lowering capital efficiency wouldn't have made any sense. This seems like a great way to bootstrap DEX liquidity until more market makers enter the ecosystem.

A couple of thoughts on the proposal:
1. It's important to consider that this LP ecosystem is fundamentally different than the Ethereum LP ecosystem as liquidity providers are competing for spread with market makers. It is likely that market makers will create their orders inside the spread created by liquidity pool fees. As such, liquidity pools will probably end up "picking up the slack" by filling orders when market makers fail to adjust their spread correctly. That being said, this may not be the case initially as DEX spreads are currently very large.

2. How is the yield/liquidity pool fee set? I'd imagine it would make the most sense to allow users to set it in the Deposit Liquidity Pool Op. Users are going to need a good amount of flexibility in setting their pool fees as the economic viability of different fee levels will vary based on the liquidity of asset pairs.

3. I agree that this CAP should attempt to implement the Uniswap V3 liquidity concentration system if it's not a technological burden. Stellar's market is much smaller than Ethereum's, which makes capital efficiency extremely important. Additionally, since LPs are competing with market makers it will be harder for them to generate attractive fees without excellent capital efficiency (market makers have a huge capital efficiency advantage over x*y=k LPs). Without implementing liquidity concentration I'm concerned that fees simply won't be high enough to attract users. One way of implementing liquidity concentration would be adding maxPriceA and maxPriceB inputs to the DepositPoolLiquidity op. This would set a max allowable price of asset A and a max allowable price of asset B in the liquidity pool. The max amount bought calculation would be modified to be Aq=Ap-Bp*(1+F)/min(P,Pma) where Pma is the maxPriceA set by the liquidity provider. I'm not totally sure how technically feasible this implementation would be; it's just an idea.

4. I also don't understand the assets to withdraw calculation. I would think the calculation would be A=Ap * (S/Sp).

5. I don't think we need to be concerned about having many separate liquidity pools at all. Since they're all using the DEX, the liquidity will be aggregated.

Drew Palmer / Marscoin

Mar 30, 2021, 9:27:16 AM
to Stellar Developers
Point 12) is probably an anti-trust violation and price-fixing, so please reconsider that. Even if it's just renamed "concentrated liquidity".

Orbit Lens

Mar 30, 2021, 10:05:01 AM
to Nikhil Saraf, Stellar Developers
Appreciate your detailed feedback, Nikhil!

LiquidityStakeEntry - maybe we don’t need the assetA, assetB fields here since it’s specified on the poolID? At least we shouldn’t save this on-chain (db) since we can find this out by a join and don’t need to duplicate asset information for every LiquidityStakeEntry.

poolID field is there to support other pool invariants if we decide to add them in the future. E.g. 0 is the default constant product invariant, 1 - stableswap invariant (better suited to highly correlated asset pairs, like stablecoin/stablecoin), etc. 
So the poolID is not enough to uniquely identify the pool.
BTW, there is a suggestion (a bit controversial) to allow several pools with the same invariant yet different fees. For example, 0.05%, 0.3%, and 1%. This way users will be able to choose a pool with higher yields. Low fees will work better on very active markets or stablecoin trading pairs while higher fees are preferable on illiquid or risky markets.

WithdrawPoolLiquidityOp - maybe we can specify only units of shares we want to withdraw? Or specify as a % instead of always withdraw 100%? Users may not want to withdraw 100% always because that is a taxable event.
DepositPoolLiquidityOp - Is it possible to use this operation to add more to our already staked amount? (i.e. add more liquidity)

No, but you can submit a transaction containing two ops - WithdrawPoolLiquidity and DepositPoolLiquidity. This essentially allows adding more liquidity or partially withdrawing the stake. This logic minimizes the number of required ledger entries, limits the overall number of pool readjustments, and results in a lighter ledger footprint. My proposal contains a notion of a withdrawal timelock to prevent overly frequent liquidity transfers and to at least partially address potential withdrawal front-running manipulations. It may be trickier to implement if we allow multiple deposits.

- rename: DEPOSIT_NO_ISSUER to DEPOSIT_INVALID_ASSET_A, DEPOSIT_INVALID_ASSET_B, DEPOSIT_INVALID_ASSET_AB. This covers the case where it has an issuer but code / combination of code:issuer is invalid. By separating out A and B we also cover the case where we can identify which asset was invalid or if both were invalid. If not required then we can just generalize to DEPOSIT_INVALID_ASSET instead of DEPOSIT_NO_ISSUER.

There are pros and cons to having very granular error codes. Let's ask Core developers about this one.

Deposit success result - also include oldStake in the result?

This is only applicable if we allow more than one deposit to the same pool, right?

WITHDRAW_STAKE_NO_TRUSTLINE - would it be a good idea to make it invalid (fail) to remove a trustline if you have a stake in a pool for that asset?

Would be nice. Checking all account pool stakes before the trustline removal adds additional complexity to the existing trustline removal logic. So if Core devs are ok with this, I'll document this semantic change.

add: NOT_ENOUGH_TRUST - it’s possible that the user will try to withdraw tokens but the withdrawal will result in the user’s account holdings more tokens than what they trust. This should be a failure scenario since trustlines have an upper limit.

Good point.

Let's assume BTC/USD = $50,000, the user should deposit say $50,000 and 1 BTC so there is equal amounts of "value" of each token deposited. However, if the user deposits 0.1 BTC and $50,000 (incorrect ratio) then they have deposited more value in USD than they did in BTC.

In the proposal, the deposit logic is currently defined in a way that prevents such situations. maxAmountA and maxAmountB are treated as maximum desirable amounts, and the deposit never changes the price balance of the pool. In the described case a user ends up depositing 0.1 BTC and 5,000 USD, as that's the maximum available stake amount with respect to the current pool price and maxAmountA/B.
Another approach is to use the exact token amounts specified by a user and execute a trade implicitly under the hood in order to rebalance the pool. Effectively it's the same as executing a self path payment op (since path payments, as well as manage offer ops, trade against both the orderbook and the AMM) before the deposit. Under this scenario, the actual deposit would be 27,500 USD + 0.55 BTC (if we ignore price slippage).

Probably, this question requires further discussion.

Not sure if I'm thinking about this correctly, but can it be this? A=Ap * (S/Sp) B=Bp * (S/Sp) (i.e. if I own 10% of the staked tokens then give me 10% of Ap and Bp each)

Right now I'm building a simulation app that will allow checking all operations and the math behind the pool, as well as playing with different invariants and deposit/withdrawal strategies. Following some ideas from the Uniswap V3 whitepaper, I think there is a way to use a more efficient pool parameter representation to minimize rounding errors. So the deposit/withdrawal formulas are likely to change.

Withdraw stake too early -- if we were to allow users to add more deposits after their initial deposits, maybe we can add a lastDepositLedger field on the LiquidityStakeEntry and lock the withdrawals of the entire amount for 24 hours. This seems to be the simplest solution that allows follow-on deposits without getting into situations where we have to keep track of how much was deposited at what time so we unlock amounts correctly.

This definitely makes sense. I thought about this approach as well. The only potential problem I see here is the inability to protect the ledger from micro-deposit transaction spamming. We need to assess how expensive the deposit op is in terms of computational resources and storage. If that's not a big deal, I have no objections.

Maximum amount of tokens of assetA that can be bought from the pool - this section describes how many tokens of Aq we can take from the pool before we hit the first maker order -- what is the calculation for what price we will use?

Sorry, I don't follow you here. We find the order in the orderbook to cross our taker order and before its execution try to trade with the pool if possible, limiting the maximum purchasing price so it's <= the maker order price. 
The maximum amount of tokens A to be bought from the pool can be expressed as Aq=Ap-Bp*(1+F)/P where F - trading pool fee, P - maximum price (equals currently processed maker order price in this case), Ap and Bp - current amounts of asset A and asset B in the pool respectively. 

I think the invariant in this situation should be that if the users buys and sells the same small delta amount from the pool one after the other then the user should end up with less money than they started with (because that is consumed by the fee of the pool)

Well, yes. It definitely works that way.

SwapSellOp and SwapBuyOp - What happens when a user wants to swap an amount that would have crossed the first maker order on the DEX if they had used a ManageOffer?
Ideally, stellar-core would factor in the maker order on the DEX to give them a better price than if they just consumed from the AMM. This sounds very similar to the updated implementation for the ManageOffer ops -- the only difference being that there is no price associated with the operation and it can't stand on the books if partially filled, i.e. it is a market order.

Yes, exactly: if we decide to add SwapSellOp and SwapBuyOp, they will work exactly like PathPaymentStrictSend and PathPaymentStrictReceive. Just to be clear, the proposal states that all operations that currently interact with the DEX (ManageSellOfferOp, ManageBuyOfferOp, CreatePassiveSellOfferOp, PathPaymentStrictReceiveOp, PathPaymentStrictSendOp) should use the same order execution model. As soon as all manage offer and path payment ops utilize the same mechanism trading against the orderbook+AMM, there is no need to introduce any operations trading directly on the pool. I thought about potential use-cases for this for a while and haven't come up with even a single scenario that requires direct pool interaction. Therefore, I'm actively opposing the idea of adding pool swap operations to the CAP just for the sake of it.

Multiple pools -- as a v1 I think it will be easier and more manageable to implement only 1 pool per trading pair so liquidity is not fragmented. 

Agree with you. I just want to pave the way for adding other invariants in the future. It will be much easier to add a few effectively unused fields (poolID) now than to make some groundbreaking changes later.

LiquidityPoolEntry reserve -- can we charge a reserve against the account that creates the pool?

Yes, but there will be no way to get this reserve back until everybody withdraws their stakes. This, in turn, will prevent this account from merging.

Alternatively, since pools are not owned by anybody (and maybe cannot be deleted also), we could have the "reserve" amount burned from the account that creates the pool? -- or moved to a locked account.

Yeah, that's probably the best solution. We can charge a one-time fee, say, 1000*base_reserve on pool creation (the charged funds go to the ledger fee_pool). This won't lock any money on the pool creator's account balance and protects us from unnecessary ledger entry creation. This way nobody will create useless pools on illiquid asset pairs just for testing.



Gleb Pitsevich

Apr 10, 2021, 9:51:06 AM
to Orbit Lens, Nikhil Saraf, Stellar Developers
Great work, Orbit!

I support the idea of AMM on Stellar in general, hoping the implementation won’t put too much performance burden on Core and Horizon.

Some quick thoughts on the proposal:
- I think the approach to combine order book and AMM liquidity is the right one.
- According to my limited understanding, a combination of AMM and order book might work even better than Uniswap v3, so I don't see a need to make things more complicated and add the concept of concentrated liquidity to this proposal.
- I also don't think alternative pool types are needed in the beginning. Order books are ideal for stablecoin/stablecoin trading, and this complements “constant product” AMM capabilities.
- Can we add some references to what AMM is and give credit to existing implementations?


While the majority of the proposal is surprisingly easy to understand (especially for someone with background knowledge of the AMM concept), I think the section which describes the interaction between SDEX and AMM and how matching works could benefit from a simplified summary.

According to my understanding, the formulas describe the following flow:

***

Briefly speaking, the matching engine will try to utilize AMM liquidity if it can provide a better price than SDEX order book.

When a new order arrives, before matching any of the maker orders in the order book, Stellar Core will check if the liquidity pool can provide a better price than the current maker order.

If the liquidity pool price is worse, Core will match the current maker order and move to the next one, repeating the same process.

However, if the AMM offers a better price (even for a part of the taker order), the order will first be executed (potentially partially) against the liquidity pool, allowing the taker to get a better price than on SDEX. The remainder of the order will be matched with the SDEX maker order. If there's any unmatched amount, Core will move to the next maker order on SDEX, repeating the same process.

***

If the above is not quite right (which is very likely), maybe you could correct me and add a similar section to the proposal, so that it’s easier to understand the general ideas of how SDEX and AMM can work together in order matching?


We will also continue thinking about how this would affect existing DEX interfaces on Stellar such as StellarX or Stellarport. If this gets approved and implemented, these UIs would give users an incorrect idea of liquidity and trade outcomes, since they visualize order books only, and the real liquidity might be significantly greater.
Showing the real picture to the user would be an interesting challenge.

Best,
Gleb

Orbit Lens

Apr 14, 2021, 6:43:51 PM
to Stellar Developers

Status Update

Following the feedback collected from the mailing list, questions list, and working group discussion, I prepared a revised version of the CAP draft (it's here: https://github.com/stellar/stellar-protocol/blob/master/core/cap-0037.md).

Changes:

  • Multiple pool deposits and partial withdrawals are now supported, without the timelock – users will be able to deposit/withdraw funds without limitations. The working group pointed out that locking deposited pool funds for at least 24 hours may lead to additional risks for liquidity providers, while reducing the timelock period eliminates the benefits described earlier and doesn't protect the network from spam. Another strong argument for allowing partial withdrawals is potential exposure to extra taxation, as the described mechanics with complete funds withdrawal may be considered a taxable event even if part of the withdrawn funds is re-deposited back to the pool in the same transaction.
  • The new priceAccuracy parameter in the DepositPoolLiquidityOp provides flexible protection from potential pool price manipulation attacks.
  • Revised formulas (math optimizations, explicitly specified rounding direction, and properly formatted equations in LaTeX format).
  • Updated Deposit/Withdraw error codes and error handling routines.
  • Improved formatting, consistent naming, etc.

I made a simple JS calculator to speed up AMM logic testing: https://amm-calculator.pages.dev/ It is extensible, so once we decide to add any new invariant or change the deposit/withdrawal logic, it can be tested against the current implementation in no time.

Also, I'd like to address a few questions published here. 

UniswapV3-like pool

Although I see a huge potential in this idea, there are several arguments against it. I'll try to compile the feedback received from the mailing list, my personal p2p conversations, and working group consultations.

  • Since the Uniswap V3 curve actually consists of a multitude of the price-bounded micro-segments, computation and storage requirements are significantly larger in this case, making trading on such pools much more resource-expensive. This may become a scaling bottleneck for the Stellar Core in the future.
  • An excellent point was brought up by Gleb: the orderbook works well for stablecoin/stablecoin pairs, while liquidity pools shine when it comes to more volatile markets.
  • Maximizing profits on UniV3 pools in high volatility situations requires trading bots or other automation, which makes them more suitable for professional market-makers than ordinary users. Ordinary liquidity providers are more likely to incur an impermanent loss.
  • The technology is new and not well tested yet. Given the pivotal changes of the AMM architecture, it's not clear whether it is performance efficient and safe.

It's important to consider that this LP ecosystem is fundamentally different than the Ethereum LP ecosystem as liquidity providers are competing for spread with market makers. It is likely that market makers will create their orders inside the spread created by liquidity pool fees. As such, liquidity pools will probably end up "picking up the slack" by filling orders when market makers fail to adjust their spread correctly. That being said, this may not be the case initially as DEX spreads are currently very large.

If the liquidity pool fee is small enough (for example, Uniswap v2 features 0.3% swap fee), it's very unlikely that market makers will be able to take advantage by maintaining such a tight spread without exposing themselves to additional risk.


How is the yield/liquidity pool fee set? I'd imagine it would make the most sense to allow users to set it in the Deposit Liquidity Pool Op. Users are going to need a good amount of flexibility in setting their pool fees as the economic viability of different fee levels will vary based on the liquidity of asset pairs.

Simultaneous trading against multiple pools with different fee parameters is tricky from a technical point of view. Therefore, my proposal describes the universal network-wide fee determined by validators voting (the liquidtyPoolYield parameter).


I don't think we need to be concerned about having many separate liquidity pools at all. Since they're all using the DEX, the liquidity will be aggregated.

Trading against multiple pools is much more expensive in terms of resources (by orders of magnitude). It's the primary concern here.


Can we add some references to what AMM is and give credit to existing implementations? The section which describes the interaction between SDEX and AMM and how matching works could benefit from a simplified summary.

My intention was to make the CAP as brief as possible, since a proper description of all AMM concepts and mechanics would be quite lengthy. The impermanent loss section alone would be as large as the entire CAP draft. I'd be happy to include references to AMM whitepapers, but the CAP format currently doesn't imply citations. My introductory blog post contains a more comprehensive analysis of AMM theory and orderbook+AMM trading examples. Not sure whether we need an exhaustive AMM explanation in the CAP doc or not. The interleaved execution process you described is very precise. I extended the section containing the trading process logic, but I'll definitely add more details if some aspects are still unclear.

-------

Thanks for the feedback!

Please make sure to check the revised CAP draft and the alternative proposal by Jon Jove.

Nicolas Barry

Apr 14, 2021, 9:54:31 PM
to Orbit Lens, Stellar Developers
Thanks Orbit.

I finally took some time to look in more detail at your proposal... of course those AMMs multiply (you like the pun?) so now we also have CAP38 to look at (I didn't have time to look at it in detail yet, I'll try to do that tomorrow if it's not a giant thing).

High level: I think your proposal is not too far from what we could implement. As there is a lot of math in there, the thing I worry about is rounding issues and overflows (in the existing DEX it took us several versions to build something that had the right properties) - so the less of that there is the happier I am.


Xdr

uint32 poolType

This should really be its own type `PoolType` instead of using `uint32`.

Something like

```
enum PoolType {
    POOLTYPE_CONSTANT_PRODUCT_v0 = 0
}
```


That way it's extensible (possibly as something else than a uint32), is validated everywhere, and it's clear which values are allowed.

 

+/* Represents information about the account stake in the pool */
+struct LiquidityStakeEntry

Nit: "in a pool"

+        uint32 liquidtyPoolYield; // fee charged by liquidity pools on each trade in permile (‰)

Nit: typo `liquidityPoolYield`

 

 

Semantics

DepositPoolLiquidityOp:

Validity:

"This version of the proposal doesn't imply any other restrictions, but this may change in the future."

I imagine that you would check if the pool type is valid?

In general, you should use the names from the xdr; using other names makes the logic harder to follow.

I think you probably want to reorder certain things so as to avoid mutating variables (avoid defining "a" as "maxAmountA" only to adjust it later); this will also make it clearer which conditions are really needed and when they apply.

 

I would get rid of the complexity tied to balance requirements etc: I think you can just model what you're looking for as a sponsorship of the `LiquidityPoolEntry` and not special-case (from a pool point of view) the account that created it.

 

Formula for stake calculation: I think you're implying that all the math is done in floating point (sqrt and floor)?

This should be specified better: I imagine you're looking at IEEE 754, specifically binary64 (53-bit precision). What about the rounding mode when performing operations in the floating point space (there are 5)? See IEEE 754 - Wikipedia

 

I don't see any guards against overflows when you perform operations in int64 space.

 

I don't see how authorization is handled.

 

Use of `H(operation source account address | poolType | assetA | assetB)` as LedgerKey for LiquidityStakeEntry:

I don't see why you need to do this: the ledger key can just be `(accountID, poolType, assetA, assetB)`.

I see at the end that this is to "optimize storage", but we have to store all those fields anyways, so it's actually additional storage.

 

 

WithdrawPoolLiquidityOp

 

Validity needs to be defined.

 

I think it would be better to have authorization issues get their own error code.

 

Is it possible to reach LiquidityPoolEntry.stakes=0 at the end? If so do you expect the pool to be destroyed (I think you do?)?

 

LedgerHeader changes

 

If you make `liquidtyPoolYield` mutable doesn't that potentially create problems like what was identified in https://github.com/98farhan94/doc/blob/main/Curve_Vulnerability_Report.pdf ?

 

Trading update

 

I understand the math you're trying to do when trading using multiple steps, but some explanation on which property you're trying to get would help. In particular, I would like to see it articulated in such a way that if we have more than constant product and/or more than one pool we can reason about it (as this property is what we will carry forward).

 

This "step" algorithm should probably be adjusted: right now it's actually pretty hard to see which of the bullet points applies when.

Related (on how hard it's going to be to validate): there are a bunch of subtleties in the existing DEX. In particular, smaller trades may not work because of the way rounding works, so this needs to be specified somehow as I think that could cause the order books to be slightly crossed (the AMM pool does not have this problem) or you'll have to allow a "step" to skip the DEX in some situations. Maybe this is fine (but this goes back to which property you're aiming for exactly).

 

I noticed later that some of this was covered in the "rationale" section - I would still like to see which property and invariants are desired in this section.

 

Also, I would like to hear from people in the ecosystem about this particular strategy (that will require implementing the same logic in Horizon to give a quote on any given asset pair). At a minimum this would have to be implemented in Horizon.

 

Missing semantic changes

 

Authorization and clawback

What happens if a stake holder gets its trustline revoked?

How is clawback going to work (I imagine that this is the same answer as in the other thread for CAP38)?

 

Rationale

 

I see that under "advantages" you're calling out the strategy as "best price" and "always balanced" - I think this may not be true due to rounding in the existing DEX that has to alternate which side (maker or taker) benefits from rounding.

 

"smaller attack surface due to not having direct pool access" - I am not sure I understand what you mean? Clearly if there are no offers in some price range all interactions will be directly against a given pool.




--

Markus Paulson-Luna

unread,
Apr 16, 2021, 8:35:20 AM4/16/21
to Stellar Developers

Hey OrbitLens,

Thanks for expanding on the technical complexity of offering multiple pools or a Uniswap V3-esque solution. Given the technical complexity factors, I support moving forward with an x*y=k model at this point.

I also want to add that I wrote a script to backtest AMM performance under the proposed system. The results might be helpful for the discussion here so I thought I'd share them. I backtested with the last 30 days of trading. All testing was done using the XLM-USDC asset pair as it's probably the most liquid pair on the DEX. The initial pool value was 20,000 USDC using XLM prices from a month ago.

AMM fee | APR      | Trades filled
0.01%   |  1.5672% | 85.35%
0.03%   |  4.0408% | 74.59%
0.05%   |  5.9740% | 67.52%
0.10%   |  9.8261% | 56.52%
0.15%   | 12.3041% | 49.36%
0.20%   | 14.2258% | 43.96%
0.25%   | 15.9257% | 39.79%
0.30%   | 17.2445% | 36.01%
0.35%   | 18.1553% | 32.65%
0.40%   | 19.3461% | 30.11%
0.45%   | 19.4294% | 27.66%
0.50%   | 20.2193% | 25.79%
0.55%   | 20.7282% | 24.28%
0.60%   | 21.4639% | 22.55%
0.65%   | 22.3148% | 21.25%
0.70%   | 22.7552% | 20.29%
0.75%   | 23.1144% | 19.42%
0.80%   | 23.2075% | 18.33%
0.85%   | 23.5160% | 17.40%
0.90%   | 23.7566% | 16.60%
0.95%   | 24.3690% | 15.79%
1.00%   | 24.7425% | 15.30%
1.50%   | 25.9706% | 10.51%
2.00%   | 26.4309% |  8.05%
2.50%   | 26.7145% |  6.43%
3.00%   | 26.3083% |  5.20%
3.50%   | 28.0337% |  4.49%
4.00%   | 30.7065% |  3.99%
4.50%   | 30.7340% |  3.54%
5.00%   | 34.0707% |  3.22%
5.50%   | 34.1630% |  3.01%
6.00%   | 32.9611% |  2.77%
6.50%   | 31.6422% |  2.51%
7.00%   | 31.5648% |  2.27%

Even though the past DEX state isn't indicative of how it will be in the future (volume and liquidity will increase as AMMs are implemented), this information is probably useful for setting global AMM fees. If anyone wants to see the script, feel free to ask and I can share it here.
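
For reference, here is a minimal sketch of one possible fill rule for such a backtest: a historical taker order fills against a constant product pool only when the pool's fee-adjusted quote beats the order's limit price. This is purely illustrative (not the actual script); the names, reserve numbers, and the convention of charging the fee on the input amount are all assumptions.

```
# Hypothetical fill rule for a constant product (x*y=k) pool backtest.
def pool_quote_usdc_for_xlm(usdc_reserve, xlm_reserve, xlm_out, fee):
    """USDC a taker pays to withdraw `xlm_out` XLM from the pool."""
    usdc_in = usdc_reserve * xlm_out / (xlm_reserve - xlm_out)
    return usdc_in * (1 + fee)  # fee charged on top of the input amount

def fills(order_limit_price, usdc_reserve, xlm_reserve, xlm_out, fee):
    """A historical buy order fills only if the pool quote beats its limit price."""
    cost = pool_quote_usdc_for_xlm(usdc_reserve, xlm_reserve, xlm_out, fee)
    return cost / xlm_out <= order_limit_price

# e.g. a 20k USDC / 80k XLM pool with a 0.30% fee filling a 500 XLM buy at 0.26 USDC/XLM
print(fills(0.26, 20_000, 80_000, 500, 0.003))  # True: effective price is ~0.252
```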

Tango Swap

unread,
Apr 19, 2021, 11:27:54 AM4/19/21
to Stellar Developers
Hi all,

@Orbit, thank you for putting this proposal forward, it looks really neat.

I have recent experience developing a constant product AMM (like the one you propose here) on the currently available Stellar operations. You can check out the result here (https://app.tangoswap.com/pool/GAPYEJ5MFIP7OLTDEYJMS5TYKQPZHUCSGVRYRPA7BVEUDRUMG6ADMR2G); it currently points to the test network, is integrated with Albedo, and is heavily under development. I did this basically by mimicking the Uniswap V2 whitepaper.

Based on my recent experience, I'd like to make some observations. Full disclaimer: I'm fairly new to the Stellar world and have never contributed here, so what I write here might be completely wrong.

I would suggest that the WithdrawPoolLiquidityOp takes a number representing the share of the pool you want to take out, as opposed to withdrawing the whole thing. When you provide liquidity you get a stake; you should be able to withdraw a portion of that stake.

+struct WithdrawPoolLiquidityOp
+{
+    uint32 poolID; // pool invariant identifier
+    int64 stake; // share of the pool to remove
+};


When providing liquidity, I think you should always calculate B in reference to A (I've called them primary and secondary to make this difference clear). Instead of providing a maximum for each asset, I specified a tolerated slippage, which is always in relation to the secondary asset B (meaning, if you want to deposit 1000 USDC as the primary asset, that's exactly what you'll deposit, and what may vary is the secondary asset). However, I have done this on the application side instead of the transaction side (because obviously this specification isn't available). I think specifying slippage is more dev-friendly than specifying max amounts.

+struct DepositPoolLiquidityOp
+{
+    uint32 poolID;              // pool invariant identifier
+    Asset primaryAsset;         // asset A of the liquidity pool
+    Asset secondaryAsset;       // asset B of the liquidity pool
+    int64 primaryAmount;        // exact amount of the primary asset to deposit
+    int64 secondaryAmount;      // reference amount of the secondary asset to deposit
+    int64 maxToleratedSlippage; // tolerated deviation of the secondary amount
+};

Another thing to consider is that DepositPoolLiquidityOp only caters for a single liquidity pool per asset pair. In the Ethereum world, this would mean that only Uniswap or Sushiswap could exist, but not both at the same time. I believe there's a need for a CreateLiquidityPoolOp and DeleteLiquidityPoolOp, so decentralised LP exchanges can create their own pools.

With the primary and secondary concepts, you can have a single SwapOp (instead of buy and sell ops separately)

+struct PoolLiquiditySwapOp
+{
+    uint32 poolID;              // pool invariant identifier
+    Asset primaryAsset;         // asset A of the liquidity pool (the asset being sold)
+    Asset secondaryAsset;       // asset B of the liquidity pool (the asset being bought)
+    int64 primaryAmount;        // amount of the primary asset to swap
+    int64 secondaryAmount;      // expected amount of the secondary asset in return
+    int64 maxToleratedSlippage;
+    int64 fee;                  // this amount gets deducted from primaryAmount
+};


Other things that I've bumped into are:

* While Uniswap pools are subject to "donations" (because they don't have the concept of trustlines), in Stellar each liquidity pool can increase and reduce its trustline limits with each Deposit and Withdraw operation.
* This is not possible for XLM, because it has an implicit trustline, for which I think a WXLM (wrapped XLM) should exist, ideally issued by Stellar, which is subject to trustline limits like any other asset. (Friendbot will need to cater for this!)
* Ability to find a pool by "signer" and primary and secondary asset. Right now, the only choice is to query all accounts that have a certain signer and then look at all the balances they hold to determine which liquidity pools exist.
* I will issue a custom TANGO asset to reward liquidity providers on top of the fees; not sure if custom asset issuance is something we'd like to consider for this proposal.

I hope it helps!

Nicolas Barry

unread,
Apr 19, 2021, 2:04:40 PM4/19/21
to Orbit Lens, Stellar Developers
Following up on my response (now that I took a first pass at CAP38).

Both proposals need a bit of work on how to deal with compliance. In particular what to do when one of the assets gets auth revoked and/or how to deal with clawback. I suspect that the discussion on that front does not depend on which of those 2 CAPs ends up getting implemented.

Rounding errors:
I noted in my review of CAP38 that certain values may need to be adjusted after performing certain operations that have rounding issues.
Using "sqrt" to illustrate:
when you compute "s" based on "effective amounts of A and B to deposit", you actually want to make sure that "a" and "b" are the smallest values that satisfy the equation (and ensure that the error is within bounds).
For example, if you use a floating point implementation for `sqrt` (binary64 only has 53 bits of precision), "s" can be missing the low 11 bits (compared to a 64 bit value), which can be relatively significant depending on the asset pair. For example, the BTC <> XLM ratio is over 100,000, so an error of even 1 bit can be relatively large if the amount traded is small.
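
To make the precision gap concrete, a tiny illustration (the specific number below is arbitrary and only meant to show the effect):

```
# IEEE 754 binary64 has a 53-bit significand, so an int64-sized amount above
# 2**53 cannot be represented exactly and its low bits are rounded away.
x = (1 << 62) + 12345            # a 63-bit amount (e.g. stroops)
roundtrip = int(float(x))        # nearest representable binary64 value
assert roundtrip != x
print(x - roundtrip)             # 57 here: 12345 got rounded to a multiple of 2**10
```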
 
From what I see the main differences from a math point of view between CAP37 and CAP38 are:
* the mechanism used to "bound" error. CAP37 seems more intuitive to me than what is in CAP38 as it's a relative error but maybe this creates new problems.
* using `sqrt` all the time in CAP37 when performing deposits. This needs to be analyzed further because of rounding.
* different algorithms to implement the interop DEX vs pools. CAP38 is a lot simpler to reason about as it doesn't introduce "new math". The proposed solution in CAP37 is a lot more complicated to validate (and I also don't know if it can be implemented for other curves) - question from me is: can we do this later when we have more data?

HTH

Nicolas

Markus Paulson-Luna

unread,
Apr 19, 2021, 4:49:32 PM4/19/21
to Stellar Developers
Hey Tangoswap team,

Just responding to a few of your points.

I would suggest that the WithdrawPoolLiquidityOp takes a number representing the share of the pool you want to take out, as opposed to withdrawing the whole thing. When you provide liquidity you get a stake, you should be able to withdraw a portion of that stake.

I'm going to second this suggestion as long as it doesn't introduce more technical complications that I'm not thinking of. I know users can submit a WithdrawPoolLiquidityOp followed by a DepositPoolLiquidityOp to adjust their liquidity pool share, but that adds unnecessary operation traffic on the ledger. Additionally, would allowing a percentage withdrawal from a liquidity pool deposit be useful for allowing clawbacks to be called on assets in a liquidity pool? I'm not familiar enough with clawback to know for sure. Still, maybe clawback ops could use a percentage withdrawal to only withdraw the amount specified by the clawback from the pool (with the pool counter asset withdrawn to the user's account).

Another thing to consider is that DepositPoolLiquidityOp is only catering for a single liquidity pool to exist for any given pair. In the Ethereum world, this would mean that only Uniswap or Sushiswap could exist, but not both at the same time. I believe there's a need for a CreateLiquidityPoolOp and DeleteLiquidityPoolOp, so decentralised LP exchanges can create their own pools.

Since this liquidity pool proposal works by integrating the liquidity pool swaps with DEX trades, there isn't any need to support decentralized LP exchanges. The goal is to keep capital concentrated in one location (the DEX) to maximize market efficiency and interoperability. So you're correct that both Uniswap and Sushiswap could not exist simultaneously, but that's a desired outcome of this LP model.

With the primary and secondary concepts, you can have a single SwapOp (instead of buy and sell ops separately)

+struct PoolLiquiditySwapOp

This can already be accomplished using pathPayment ops.

* While Uniswap pools are subject to "donations" (because they don't have the concept of trustlines), in Stellar each liquidity pool can augment and reduce their trust with each Deposit and Withdraw operation.
* This is not possible for XLM, because it has an implicit trustline, for which I think a WXLM (wrapped XLM) should exist, ideally emitted by Stellar, which is subject to trustline limits like any other asset. (Friendbot will need to cater for this!)

I don't think liquidity pools are subject to donations under the current proposal. I could be wrong, though. My understanding is that liquidity pools aren't tied to an account, so they can't be specified as a paymentOp destination. They exist as separate data entries on the ledger similar to claimable balances.

Ability to find a pool by "signer" and primary and secondary asset. Right now, the only choice is to query all accounts that have a certain signer and then look at all the balances they hold to determine which liquidity pools exist.

I'm not sure what use case you're trying to solve here. Since all swaps would happen via the DEX and there is only one liquidity pool per asset pair, there's no reason that users need to find a pool at this stage in the proposal. This could change if the dev community decides to explore a multi-pool model to allow variable pool fees or concentrated liquidity.


I hope this was helpful! I wasn't involved in drafting this CAP, so some of my points could be off.

Markus

Orbit Lens

unread,
Apr 22, 2021, 1:41:36 PM4/22/21
to Stellar Developers
> poolType - This should really be its own type `PoolType` instead of using `uint32`

Good point. I'll update the CAP accordingly.

> you should use the names from the xdr, it makes it harder to follow the logic

My intention was to simplify the formulas (e.g., using A instead of liquidityPoolEntry.amountA makes them easier to read).

> Formula for stake calculation: I think you're implying that all the math is done in floating point (sqrt and floor)?
> I imagine you're looking at IEEE 754, specifically binary64 (53-bit precision), what about rounding mode when performing operations in the floating point space?
> I don't see any guards against overflows when you perform operations in int64 space

Obviously, int64 can't be used for the calculations since any of the basic operations can cause an overflow. All formulas rely on floating point numbers.
I've never written a single line of C++, so it's quite difficult for me to provide the implementation details. But I think double precision won't be enough.
As mentioned in the "Resource Utilization" section, the implementation may require 128-bit math (__float128 or something similar) to achieve adequate precision for edge cases.
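
As a possible alternative (just a sketch on my side, not part of the CAP text), the sqrt-and-floor part could also be done entirely in integer space, which avoids both the overflow and the binary64 rounding concerns; stellar-core would need a 128-bit integer square root for this:

```
# Sketch: compute s = floor(sqrt(a * b)) without floating point.
# Python ints are arbitrary precision, so the a * b product cannot overflow;
# a C++ implementation would need 128-bit integers and an integer sqrt.
from math import isqrt

def stake_for_deposit(a: int, b: int) -> int:
    return isqrt(a * b)   # exact floor of the square root

print(stake_for_deposit(9_000_000_000_000_000_000, 7))  # extreme ratio edge case
```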

> I don't see how authorization is handled.

DepositPoolLiquidityOp doesn't require any additional authorization rules since a user will need an authorized and funded trustline for the deposit.
The mechanics of revoked authorization and clawbacks are not covered yet since the working group hasn't reached consensus regarding the liquidity entries storage format (trustlines vs dedicated ledger entries). 

> Use of `H(operation source account address | poolType | assetA | assetB)` as LedgerKey for LiquidityStakeEntry:
> I don't see why you need to do this: the ledger key can just be `(accountID, poolType, assetA, assetB)`.
> I see at the end that this is to "optimize storage", but we have to store all those fields anyways, so it's actually additional storage.

Agree. That was something that popped up during the working group discussion, and I added the hashed ledger key without analyzing all the implications.
I'll revert to the previous version.

> Validity needs to be defined.

Please elaborate. I'm afraid I don't follow you here.

> Is it possible to reach LiquidityPoolEntry.stakes=0 at the end? If so do you expect the pool to be destroyed (I think you do?)?

You are absolutely right, I forgot about this.

> If you make `liquidtyPoolYield` mutable doesn't that potentially create problems like what was identified in https://github.com/98farhan94/doc/blob/main/Curve_Vulnerability_Report.pdf ?

No. Swap fee size doesn't change the curvature of the pool invariant, so it can't be exploited this way.

> I would like to hear from people in the ecosystem about this particular strategy (that will require implementing the same logic in Horizon to give a quote on any given asset pair). At a minimum this would have to be implemented in Horizon.

We discussed this during the workgroup meeting, and my stance on this is quite simple. We shouldn't implement price quoting on the Horizon level. Right now there is no price quoting for DEX orders except for the path finding functionality. Users are perfectly fine with this because wallets provide a friendly UI and can estimate the traded amount using the orderbook API endpoint info.

So we can implement the quoting on the SDK level or in some external lib. On the Horizon side we'll need an API endpoint that fetches all available pools (with currently deposited token amounts) for a given trading pair. Everything else can be calculated on the client side following the same step execution logic.
Speaking of which, moving the pathfinding functionality out of Horizon makes sense since this operation is much more expensive in terms of resources than most other Horizon queries.

> I see that under "advantages" you're calling out the strategy as "best price" and "always balanced" - I think this may not be true due to rounding in the existing DEX that has to alternate which side (maker or taker) benefits from rounding. 

"Best price" means that a user who wants to execute a swap (or a self path payment in terms of current Stellar ops) will always get the best exchange price.
Let's consider three cases:
- The orderbook and liquidity pools for the traded market pair both contain some liquidity – interleaved execution provides a better price than either an orderbook-only trade or a pool-only trade.
- The orderbook doesn't have liquidity – the trade is executed on pools only.
- Liquidity pools don't have liquidity – the trade is executed on the orderbook only.
The algorithm guarantees that the swap will use the best price for each step, combining the liquidity from all sources.

Any path payment that touches only an orderbook or only a pool can provide at least the same price only under a single scenario - there's no liquidity in other sources.

"Always balanced" refers to the absence of arbitrage opportunities between the orderbook and corresponding pools. Because of the pool fee, a pool always have spread equal to 2*poolFee. Trading  against the orderbook inside the spread won't cause a pool price changes. Of course, due to the rounding it's impossible to achieve 100% exact prices equality, but we are talking about 10^-7 orders here, so I doubt it will be as significant. Giving that it's impossible to conduct arbitrage trades between an orderbook and a pool, this tiny prices mismatch is not exploitable.

> "smaller attack surface due to not having direct pool access" - I am not sure I understand what you mean? Clearly if there are no offers in some price range all interactions will be directly against a given pool

Most of the potential attack vectors discussed in the workgroup and the questions doc rely on pool price manipulation. If the pool price can be pushed independently of other pools and the orderbook for a given trading pair, it opens up easier manipulation opportunities.

The front-running case demonstrated by Jon fits here perfectly. Let's imagine a situation where we have a pool with funds deposited primarily by a single malicious actor. It's possible to create an automated bot that monitors the transaction mempool and submits a WithdrawPoolLiquidityOp reducing the pool liquidity by, say, 90% whenever anyone tries to swap (execute a path payment op) a significant amount of funds against this pool without a strict traded amount limit. If the front-running is successful (the funds withdrawal operation is included before the original swap request), the effective exchange rate may be much worse than originally expected (even by orders of magnitude).

This attack is considerably easier to execute if we have a bunch of independent pools and an orderbook on top of that. An attacker may control only one pool, and that is enough since he can afford to offer the best exchange rates by simply providing more liquidity to the pool; people will tend to choose this pool over other exchange options because ecosystem wallets will use automated algorithms to find the most liquid swap option at the moment. With my proposal, the manipulation options are fairly limited – the same front-running attack can be executed only if an attacker owns almost the entire on-chain liquidity for a trading pair.

> The proposed solution in CAP37 is a lot more complicated to validate (and I also don't know if it can be implemented for other curves) - question from me is: can we do this later when we have more data?

I would have voted for this option myself, but I'm quite sure it's impossible. Once you introduce a way to trade directly against the pool, you'll have to maintain backwards compatibility in future releases because of pre-signed transactions and client applications that may rely on this logic. And having an orderbook and several pools with imbalanced equilibrium prices eliminates most of the proposal's benefits while introducing arbitrage opportunities.

At the same time, the opposite approach is fairly straightforward. You can always introduce direct pool swap operations and turn off the interleaved orderbook+pool execution in the next release if you decide so without breaking anything.

> I understand the math you're trying to do when trading using multiple steps, but some explanation on which property you're trying to get would help. In particular, I would like to see it articulated in such a way that if we have more than constant product and/or more than one pool we can reason about it.

The question about simultaneous execution against several pools has also been raised by Jon in the CAP-38 draft, with the notion that in the general case this problem is NP-hard.
True, the optimization problem in the general case requires quite complex calculations. However, in our particular case we have a condition that the price is synchronized across all pools and the orderbook for a given asset pair.

For example, let's imagine we have three constant product pools with different stakes and pool fees.
We have the basic price-targeted formula from the CAP: 
a=(√(A·B·P)-A)·(1+f), where a is the traded amount of asset A, P is the maximum price, f is the pool fee, and A and B are the amounts of assets A and B in the pool
So the amount traded on each pool will be:
a₁=(√(A₁·B₁·P)-A₁)·(1+f₁)
a₂=(√(A₂·B₂·P)-A₂)·(1+f₂)
a₃=(√(A₃·B₃·P)-A₃)·(1+f₃)
b₁, b₂, and b₃ can be easily derived
So if we have a price-bound scenario (we trade against the pool up to the price of the next maker order), the total traded amount is 
a=(√(A₁·B₁·P)-A₁)·(1+f₁) + (√(A₂·B₂·P)-A₂)·(1+f₂) + (√(A₃·B₃·P)-A₃)·(1+f₃)
If we have an amount-bound swap (the question is formulated as "how many tokens of asset B can I receive for amount a of asset A?"), we can invert the above equation to find the equilibrium price P after the trade:
P=((a + A₁·(1+f₁) + A₂·(1+f₂) + A₃·(1+f₃)) / (√(A₁·B₁)·(1+f₁) + √(A₂·B₂)·(1+f₂) + √(A₃·B₃)·(1+f₃)))²
and after that we can calculate a and b for each pool

The same approach can be used with any other pool invariant that can be expressed through the price-bound formula.
It's worth noting that since each pool produces a rounding error when amounts are approximated to the nearest integer, the compound error is directly proportional to the number of pools, so applying this approach in an environment with more than a dozen pools may be questionable.
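
To make this concrete, here is a minimal sketch of the calculation for several constant product pools (illustrative Python only; the variable names are mine and rounding to int64 amounts is ignored):

```
from math import sqrt

def amount_to_reach_price(A, B, P, f):
    """Amount of asset A to trade into a constant product pool with reserves
    (A, B) and fee f so that the pool reaches price P (price-bound formula)."""
    return (sqrt(A * B * P) - A) * (1 + f)

def equilibrium_price(a, pools):
    """Invert the formula: given a total input amount a split across the pools,
    find the common price P after the trade (pools = list of (A, B, f))."""
    num = a + sum(A * (1 + f) for A, B, f in pools)
    den = sum(sqrt(A * B) * (1 + f) for A, B, f in pools)
    return (num / den) ** 2

pools = [(120_000, 60_000, 0.003), (40_000, 20_000, 0.001)]
P = equilibrium_price(5_000, pools)
split = [amount_to_reach_price(A, B, P, f) for A, B, f in pools]
print(P, split, sum(split))   # the per-pool amounts add back up to 5,000
```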

--------

Lastly, two words about the interleaved trades execution (CAP-37) vs isolated trades execution (CAP-38).
In simple terms:
CAP-37
- end-users happy (better prices, less complexity)
- majority of developers happy (pool swaps work out of the box, no need to add new interfaces)
- validators happy (fewer trade transactions, fewer arbitrage transactions – lower resource utilization)
- asset issuers happy (market-making hassle is minimized to depositing liquidity to a single pool)
CAP-38
- arbitragers happy (much more profitable arbitrage opportunities)
- top market-makers happy (advanced market manipulation controls)
- some developers happy (the wallets or DEX interface with the most advanced algorithms predicting the best exchange price will have a competitive advantage)

Maybe I wouldn't be so concerned about the arbitrage problems if we had at least 10,000 ops/s on the mainnet.
But with an effective rate of <200 ops/s, arbitragers will simply clog the entire network, driving transaction fees sky-high.

Regards,
OL

Jonathan Jove

unread,
May 7, 2021, 2:37:41 PM5/7/21
to Orbit Lens, Stellar Developers

This is in response to OrbitLens' blog post (https://stellar.expert/blog/stellar-amms-at-crossroads-between-triumph-and-disaster).


Most of this is going to follow the order of OrbitLens' blog post, but I want to open with an acknowledgement that the optimal interleaved execution is not NP-complete. I have reviewed OrbitLens' approach to solving the optimization problem (https://groups.google.com/g/stellar-dev/c/Ofb2KXwzva0/m/YVBKq-3PDAAJ) and I believe his method does work. In the case of multiple constant product pools, it actually works very elegantly. There may be some challenges with rounding, as he acknowledges, but that is a relatively minor detail. *I will update CAP-38 to reflect the fact that this problem is easier than it appeared.*


## Multi-Pool World by Example

OrbitLens constructed a set of scenarios involving multiple liquidity pools to demonstrate a difference between CAP-37 and CAP-38. I want to emphasize that *both CAP-37 and CAP-38 only support a single liquidity pool* using the constant product invariant with a fee of 0.3%. But let's consider the cases anyway:


### Case 1

Under these circumstances, it would be impractical to purchase 35k USDC even if all the liquidity were in a single no-fee constant product pool with total reserves 60k USDC, 120k XLM. It would cost 168k XLM, for an effective purchase price of 0.208 USD/XLM compared to a market price of 0.5 USD/XLM. I doubt anyone would be happy with this situation, so most operations would fail due to bounds on the trade price. In other words, this *trade is simply too big for the amount of liquidity available* regardless of the algorithm used to execute it.
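
(For reference, a quick check of these numbers against a no-fee constant product pool with the reserves stated above; illustrative only:)

```
usdc, xlm = 60_000, 120_000
usdc_out = 35_000
xlm_in = xlm * usdc_out / (usdc - usdc_out)   # keep x*y constant, no fee
print(xlm_in, usdc_out / xlm_in)              # 168000.0 XLM, ~0.208 USD/XLM
```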


### Case 2

I haven't checked the math, but it is an accurate representation that you will get a better price with CAP-37.


### Case 3

This is a significant misrepresentation. The blog post suggests that the four trades would have to be submitted separately, thereby sacrificing atomicity. This is, of course, false because Stellar supports atomic transactions (https://developers.stellar.org/docs/glossary/transactions/). So you don't have to worry about getting interrupted by arbitrageurs, because the four trades can be bundled into a single transaction.


What about the arbitrageurs? The reality is that *the arbitrage bots will awaken regardless of whether CAP-37 or CAP-38 is used*. It isn't hard to see why: arbitrageurs will simply arbitrage the liquidity pool against the centralized exchange. This is probably more profitable than doing an on-chain arbitrage between liquidity pools and the order book, because there will be less slippage on the centralized exchange, so this would be the preferred method for any serious arbitrageur. For comparison, executing a $5000 trade for XLM/USD on Coinbase would typically have about 5-10 basis points of slippage.


### Case 4

As noted above, CAP-38 does not support multiple liquidity pools for a single asset pair. There is no mechanism to deprecate a pool or make the pool withdraw only. The basic extension points for multiple liquidity pools are there in acknowledgment of the fact that we are not clairvoyant predictors of the future, and there may be a time when having a different type of liquidity pool outweighs the disadvantages that OrbitLens listed (and I agree about those disadvantages wholeheartedly).


### Case 5

This is an inaccurate portrayal of the routing behavior of CAP-38. CAP-38 attempts to execute on the liquidity pool and the order book, and actually executes whichever way provides the best price. If a malicious actor attempted to grossly increase the slippage in the pool by withdrawing their reserves then that pool would no longer have the best price. Therefore, the exchange will not be performed in the pool and the user will not experience any change in price. In this context, the attack has no impact at all.


## Closer look at the Arbitrage Problem

OrbitLens provides a reasonable description of the behavior of arbitrage bots, except that his analysis of the fees is incorrect. The fee auction is a variant on an all-pay auction (https://en.wikipedia.org/wiki/All-pay_auction) in which every transaction is charged the fee of the last included transaction. In the context of large arbitrage opportunities, more arbitrage transactions will be submitted than the ledger can process so the last included transaction is likely to be an arbitrage transaction. There is no advantage to having duplicate arbitrage operations in the same transaction, so each transaction would be expected to have a single operation. With current network settings of 1000 operations per ledger, there is a 1/1000 chance of winning the opportunity. The expected value of submitting an arbitrage transaction is (value of the arbitrage opportunity) / 1000. Therefore, the example of an 800 XLM arbitrage opportunity would lead to fee bids of about 0.8 XLM. This is high compared to the base fee, but approximately 3 orders of magnitude lower than OrbitLens' analysis suggests.


## Fundamental Misconceptions

OrbitLens discusses a variety of misconceptions, and the analysis is quite interesting. I already acknowledged the error in my suggestion that optimal interleaved execution is NP-complete. I will discuss some of the remaining sections.


### "Interleaved execution is a premature optimization, and can be done later"

The blog post suggests that one of the proposals introduces operations to trade directly against individual pools while the other does not. This is false, as both proposals use existing operations to trade against liquidity pools and both proposals have on-chain routing (CAP-37 splits among venues, CAP-38 chooses the one best venue). As a consequence, *there would be no backwards compatibility issues to implement interleaved execution later for CAP-38*.


Orbit also points out the reality that it can take months to years to solidify a CAP. But if it would only take "several additional weeks" to support interleaved execution today--which I think is extremely optimistic--then surely a new CAP with no backwards compatibility issues for that could be completed in approximately the same amount of time.


### Problems of interleaved execution across several pools

As noted above, OrbitLens' method does work. One residual challenge is that not all functions can be inverted analytically. Therefore even if it is possible to compute a "price bound" formula for a specific liquidity pool, multiple pools might require an iterative solver which would increase the cost. If there is no "price bound" formula for a specific liquidity pool (I am unsure if this formula exists for the StableSwap invariant, but it might) then I am not sure if this approach is practical in general.
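
To illustrate what such an iterative solver could look like, here is a bisection sketch under the assumption that each pool exposes a monotone "amount in as a function of the post-trade price"; this is my own illustration, not something specified in either CAP:

```
from math import sqrt

def solve_price(amount_in_fns, total_in, p_lo, p_hi, iters=64):
    """Find the common post-trade price P such that sum(f(P)) == total_in,
    assuming each f is increasing in P."""
    for _ in range(iters):
        p_mid = (p_lo + p_hi) / 2
        if sum(f(p_mid) for f in amount_in_fns) < total_in:
            p_lo = p_mid          # not enough sold at this price yet
        else:
            p_hi = p_mid
    return (p_lo + p_hi) / 2

# e.g. two constant product pools using the price-bound formula discussed above
cp = lambda A, B, f: (lambda P: (sqrt(A * B * P) - A) * (1 + f))
fns = [cp(120_000, 60_000, 0.003), cp(40_000, 20_000, 0.001)]
print(solve_price(fns, 5_000, p_lo=0.5, p_hi=10.0))   # ~2.13
```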


Best,
Jon

Orbit Lens

unread,
May 11, 2021, 6:19:08 AM5/11/21
to Stellar Developers

Right at the beginning of my recent blog post, I mentioned that the root of the problem resides in the general strategy adopted to handle fractured on-chain liquidity. CAP38 opens a Pandora's box of possible arbitrage between the orderbook and liquidity pool (which is dangerous by itself), but the future implications of having a dozen pools for the same asset make it especially scary. In the video stream, you presented a case for having immutable pool parameters, so any future protocol upgrade that changes pool-related logic must leave the existing pool params (fees and everything else) untouched, naturally resulting in "legacy pools" residing on-chain. The independent Uniswap v1, v2, and v3 pools are an example of such non-breaking contract upgrades, where a decent portion of liquidity remains in the old contracts. Given the suggestion of having several fee tiers implemented as individual pools, the prospect of a dozen similar pools per asset pair in the future no longer looks so exaggerated.

I believe that the case of independent execution on a liquidity pool/orderbook has been extensively covered in my first blog post, so this time I primarily focused on the multi-pool scenario that has been actively discussed by the CAP committee on the livestream.

> I want to emphasize that *both CAP-37 and CAP-38 only support a single liquidity pool* using the constant product invariant with a fee of 0.3%

CAP37 proposes a flexible mechanism of collective pool fee rate voting, focusing on reducing the number of pools per asset pair. Maybe I got the wrong impression from the discussion, but people are actively looking for ways to have adjustable fees for CAP38, as a uniform 0.3% doesn't seem like a convenient one-size-fits-all constant. And the variant of having several identical pools with either adjustable or pre-defined yield rates has been extensively debated by the committee.

> Under these circumstances, it would be impractical to purchase 35k USDC even if all the liquidity were in a single no-fee constant product pool with total reserves 60k USDC, 120k XLM. It would cost 168k XLM, for an effective purchase price of 0.208 USD/XLM compared to a market price of 0.5 USD/XLM. 

First of all, in this example I intentionally excluded the orderbook from the calculation to avoid speculation about order placement and the resulting price. However, from real-world usage we know that the liquidity profile of a liquid orderbook resembles a normal distribution, so we can safely assume that including an orderbook in the equation would likely improve the average execution price to somewhere closer to 0.3-0.4 USDC/XLM.

> I doubt anyone would be happy with this situation, so most operations would fail due to bounds on the trade price. 

While it's obviously an edge-case event, there is a multitude of situations that may require order execution at any price.

For instance, collateral position liquidation, derivatives expiration, anchored token price drop induced by force majeure circumstances.

> The blog post suggests that the four trades would have to be submitted separately, thereby sacrificing atomicity. This is, of course, false because Stellar supports atomic transactions (https://developers.stellar.org/docs/glossary/transactions/). So you don't have to worry about getting interrupted by arbitrageurs, because the four trades can be bundled into a single transaction.

To my knowledge, none of the existing trading interfaces allow bundling several path payments into a single transaction. Not to mention that the average user will never think about doing this. Your suggestion implies that all wallets should update their trading interfaces and employ an elaborate price estimation algorithm to split a single swap into several path payment operations.

So we get much heavier ledger capacity utilization, larger transaction fees, and increased validator load. As the transaction is atomic, the likelihood of the entire trade failing grows proportionally to the number of included path payments, since the market state in a multiuser environment is nondeterministic by nature – a significant price shift on a single pool will prevent the trades against all other pools from executing. This can be partially addressed by specifying a higher slippage tolerance in the path payments, although such a decision exposes the user to greater risk.

> The reality is that *the arbitrage bots will awaken regardless of whether CAP-37 or CAP-38 is used*. It isn't hard to see why: arbitrageurs will simply arbitrage the liquidity pool against the centralized exchange. This is probably more profitable than doing an on-chain arbitrage between liquidity pools and the order book, because there will be less slippage on the centralized exchange, so this would be the preferred method for any serious arbitrageur.

While inter-platform arbitrage looks like a very simple and attractive thing, anyone who has ever done arbitrage between exchanges would tell you that there are plenty of risks here:

  • the transfer between two exchanges may take much longer than expected (anywhere between minutes and days)
  • the indicative exchange price is subject to slippage, which has to be factored in
  • both traded assets carry volatility risk, which is mitigated only after the final settlement
  • an arbitrage trade generates two separate taxable events
  • there are withdrawal fees and trading fees

For example, compare the anchored BTC price history with the Binance chart. It clearly shows that arbitrage is not super efficient.

On the contrary, executing arbitrage on-chain is almost immediate, virtually free, and doesn't carry any slippage or volatility risk. That's why it's a golden opportunity for arbitragers that will compete with each other, flooding the network and rendering it unusable. From the perspective of an on-chain arbitrage bot, any $0.1 profit chance is worth it. Therefore the comparison of these two arbitrage types is clearly inaccurate. Not to mention that only a limited number of tokens are traded anywhere other than the Stellar DEX at the moment.

> Case 5. This is an inaccurate portrayal of the routing behavior of CAP-38. 

As I pointed out above, this blog post was primarily focused on the future interoperability between several pools in the context of the different proposed mechanics. I removed this paragraph from the post. Admittedly, it doesn't fit the CAP37 vs CAP38 topic. Sorry for the misleading statement, I should have stayed focused on the primary subject.

> The expected value of submitting an arbitrage transaction is (value of the arbitrage opportunity) / 1000. Therefore, the example of an 800 XLM arbitrage opportunity would lead to fee bids of about 0.8 XLM.

Actually, my example implied a slightly lower fee: 750 XLM in fees spent across all transactions -> 0.75 XLM. Maybe I should have chosen better wording. My primary point here is that the maximum network fee is directly proportional to the emerging arbitrage opportunity. With a 10,000 XLM arbitrage opportunity, the network fee may reach up to 10 XLM per operation.

> *there would be no backwards compatibility issues to implement interleaved execution later for CAP-38*

After thinking about this for a bit, I agree with you. With the CAP38 in its current state, there will be no backward incompatibilities for the interleaved execution implementation.

> Orbit also points out the reality that it can take months to years to solidify a CAP. But if it would only take "several additional weeks" to support interleaved execution today--which I think is extremely optimistic--then surely a new CAP with no backwards compatibility issues for that could be completed in approximately the same amount of time.

Off the top of my head, I can't remember any CAP that has been drafted, discussed, finalized, implemented, and shipped to the network in under half a year, except urgent bug fixes.

> As noted above, OrbitLens' method does work. One residual challenge is that not all functions can be inverted analytically. 

Obviously, I can't claim that any invariant can be inverted analytically. Yet, I have a rather strong conviction that if one can use a formula to estimate a swapped amount and the number of tokens in the pool after the swap, it can be altered to estimate the price after the trade. Again, this might not work with oracle-based AMMs or other algorithms that rely on external information, although this is a very broad question by itself.

---

Thanks for taking time to review my post, Jon.

Markus Paulson-Luna

unread,
May 12, 2021, 12:29:51 AM5/12/21
to Stellar Developers
Hey guys,

Thanks for taking the time to discuss OrbitLens' blog post. I'm going to add a few points specifically on how I feel the various points relate to interleaved execution, as I feel that is the main point of contention here.

1. Arbitrage
It's safe to say arbitrage would increase with or without interleaved execution. However, it will be significantly more common without it. Interleaved execution would mean LP and orderbook prices track each other very closely, significantly reducing on-chain arbitrage opportunities. Increasing on-chain arbitrage is what we should primarily be concerned with, as on-chain arbitrage attempts result in a significant amount of network traffic, and off-chain arbitrage will increase by around the same amount regardless of which CAP is implemented.

2. Pricing and Fragmented Liquidity
After thinking about it more, I'm concerned not implementing interleaved execution could result in baseline worse slippage for path payments or "market orders" on the DEX. Say that CAP-38 is implemented and 30% of orderbook liquidity moves to LPs; in addition, we'll assume 30% additional liquidity enters the ecosystem because entities not currently providing liquidity decide to deposit funds into LPs. Under this scenario, the orderbook will contain 70% of the liquidity it used to, and LPs will contain 60% of the liquidity that the orderbook used to. This re-distribution means that both LPs and the orderbook are independently less liquid than the old orderbook was. Despite the CAP-38 path payment operation executing against whichever liquidity source offers the best price, the resultant slippage will always be greater than it would have been pre-CAP-38. This drop in DEX performance is a liquidity fragmentation and would never happen with interleaved execution.
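
A rough numeric illustration of this point, treating the order book as if it were a second constant product pool purely for comparison (the numbers are arbitrary):

```
def xlm_cost(usdc_reserve, xlm_reserve, usdc_out):
    """XLM paid to take usdc_out from an x*y=k liquidity source, no fee."""
    return xlm_reserve * usdc_out / (usdc_reserve - usdc_out)

combined   = xlm_cost(100_000, 200_000, 5_000)        # all liquidity in one venue
best_split = min(xlm_cost(70_000, 140_000, 5_000),    # 70% stays on the book
                 xlm_cost(60_000, 120_000, 5_000))    # 60% of it sits in the pool
print(combined, best_split)   # ~10526 vs ~10769 XLM: the best single venue is worse
```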

3. Probability of requiring additional pools.
It seems like the best argument against interleaved execution is that we don't currently have a good execution engine for handling multiple pools. I don't think this is a good argument for avoiding interleaved execution. OrbitLens' arguments for multiple pools being unnecessary under current ecosystem conditions are convincing. A voting mechanism could handle variable pool fees, so it is unnecessary to support multiple fee tiers for a single asset pair. Additionally, non-x*y=k invariants are unnecessary, as traditional market-making strategies are more effective at accomplishing the goals of alternative invariants. While it's true that things could change in the future, I think it's preferable to implement a better solution now and update it in the future if multiple pools become desirable.

In short, implementing AMMs without interleaved execution is a baseline worse solution and could leave the DEX in a worse state than it was pre-CAP. I'd rather wait the extra month or two for interleaved execution to be figured out. Implementing AMMs without interleaved execution risks hurting DEX and network performance.

Orbit Lens

unread,
May 13, 2021, 4:45:17 AM5/13/21
to Stellar Developers
Following questions received on Twitter and Reddit, I'd like to showcase the recent addition to CAP37 that mostly fell under the radar – a mechanism of cooperative dynamic pool fee rate adjustments.

Due to diverse trading conditions for different trading pairs, a uniform flat fee may not provide the best yield/risk ratio in edge cases – stablecoin/stablecoin liquidity pools benefit from reduced trading fees so that most trades will touch the pool instead of executing purely on the orderbook, while illiquid or very volatile trading pairs require higher fees to counter possible impermanent loss or low yield scenarios.
To provide the required pool fee flexibility without inventing complex voting mechanics, the voting prerogative is delegated to the liquidity providers, who can set the fee parameter in the DepositPoolLiquidityOp operation, so users can vote for the pool fee directly with their liquidity stakes. The effective pool yield represents the weighted average of the proposed stake fees. This way accounts with larger stakes have more voting power, which is fair given that every liquidity provider takes on risk proportional to the deposited stake.

Users can only vote for a fee value in the range between 0.0001 and 0.01 (0.01-1%) – a safeguard protecting market participants from paying overpriced fees on illiquid markets while ensuring at least a minimum yield for liquidity providers that participate in pools driven mostly by market makers.

To illustrate this concept, let's consider the following example:

1. User A deposits 100 XLM and 50 USDC to the pool with a 0.5% proposed fee yield. Since this is the first deposit to the pool, the effective pool fee will be equal to 0.5%.
2. User B deposits 40 XLM and 20 USDC to the same pool with a 1% proposed fee yield. After applying the weighted average formula, the effective pool fee becomes 0.643%. 
3. User C deposits 60 XLM and 30 USDC, proposed pool yield 0.4%. Effective pool fee = 0.57%.
4. User B withdraws the stake. The effective pool fee lowers to 0.463%.

This mechanism ensures that the effective pool fee is always up to date, as any deposit/withdrawal operation triggers the recalculation. Along with uncomplicated fee voting, it also provides an agile way to adjust to ever-changing market conditions – people can always withdraw a stake and immediately redeposit it with an updated fee in response to growing volatility or a narrowing spread on the matching orderbook.
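
A small sketch reproducing the numbers above (assuming, as elsewhere in this thread, that a stake equals sqrt(amountA·amountB) and that the effective fee is the stake-weighted average of the proposed fees; names are illustrative):

```
from math import sqrt

stakes = []                       # list of (stake, proposed_fee) for one pool

def effective_fee():
    total = sum(s for s, _ in stakes)
    return sum(s * f for s, f in stakes) / total

def deposit(amount_a, amount_b, proposed_fee):
    stakes.append((sqrt(amount_a * amount_b), proposed_fee))
    return effective_fee()

print(deposit(100, 50, 0.005))    # user A -> 0.5%
print(deposit(40, 20, 0.010))     # user B -> ~0.643%
print(deposit(60, 30, 0.004))     # user C -> ~0.57%
del stakes[1]                     # user B withdraws
print(effective_fee())            # -> ~0.463%
```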

Updated CAP-37 is here.

Markus Paulson-Luna

unread,
May 13, 2021, 11:45:09 AM5/13/21
to Stellar Developers
Hey OrbitLens,

The updated CAP looks great for the most part! Although I haven't had a chance to give it a super close look yet.

One comment on the fee voting mechanism. Under the proposed fee voting mechanism, it looks like it would be possible for users to have multiple pool deposits with different fees. I think that would break the current WithdrawPoolLiquidity op. To avoid both this incompatibility and having to track multiple deposits for one account, it makes sense to have DepositPoolLiquidity check that the input fee matches the fee of previous liquidity deposits in the same pool. You would also have to add a DEPOSIT_POOL_LIQUIDITY_FEE_MISMATCH error response to the DepositPoolLiquidity op.

Markus

Orbit Lens

unread,
May 13, 2021, 12:05:15 PM5/13/21
to Stellar Developers
@markus

The described mechanism updates the proposed fee for the entire LiquidityPoolStakeEntry that belongs to an account.
From the Semantics section:

> If the account LiquidityStakeEntry exists, the stake is increased stake+=s and the proposed pool fee updated with the value from the deposit operation.

Overall process simplicity is the main reason behind this decision. Otherwise, as you rightly pointed out, we'd have to create a separate LiquidityPoolStakeEntry for every account deposit, which is not only expensive in terms of storage, but would also require locking a base_reserve for every deposit.

So the idea is to apply the proposed fee value from the latest deposit to the entire pool stake. There is an open question about the possibility of adjusting the fee without a deposit operation (since deposits/withdrawals may be treated as taxable events in the future), so maybe it's worth allowing a DepositPoolLiquidityOp with empty amountA and amountB values so people could readjust the proposed fee for an existing pool stake without actually depositing more liquidity.

Regards,
OL

Markus Paulson-Luna

unread,
May 13, 2021, 1:29:00 PM5/13/21
to Stellar Developers
OrbitLens,

That makes sense. I must've missed the part about the fee vote being adjusted with the value from the deposit operation. That's a very clean solution.

I'd personally be in favor of allowing a DepositPoolLiquidityOp with empty amounts. I can't think of any issues it would introduce, besides maybe some additional complexity, and in addition to your point about taxable events, it's just a better user experience.

Best,

Markus

Jed McCaleb

unread,
May 26, 2021, 4:46:22 PM5/26/21
to Stellar Developers

Here are my thoughts after reading OrbitLens' post https://stellar.expert/blog/stellar-amms-at-crossroads-between-triumph-and-disaster and other discussions of CAP 37 & CAP 38.

First of all, these CAPs are very similar. Their external interface is almost identical. So I think either solution isn't that far from what we would want to add to core.


There seem to be two main issues that Orbit has: the fear that CAP 38 will increase arbitrage, and that CAP 37 will provide a nicer experience for users since it combines the liquidity of pools with the existing orderbook. I don't actually disagree with either point, but I just see a different way forward.


Arbitrage

CAP 38 doesn't increase arbitrage if you assume that there is a constant amount of liquidity in the network. Yes there are now two separate liquidity sources, the order book and the pool for each pair. But as liquidity moves from the order book to the pool this will widen the spreads on the order book and reduce the arb opportunities. Yes if liquidity increases overall because more people are using the dex that could increase the arbitrage opportunities but if liquidity increases in the current system or with CAP 37, arb opportunities also increase. Really we need to do a separate thing under any of these scenarios to make flooding the network with arb txs not make sense. 


Arb txs are a problem now, they will continue to be a problem with either CAP 37 or 38. I think we need to find a separate solution to this that is beyond the scope of either CAP.


Liquidity

The main difference between the two CAPs is that CAP 37 combines liquidity between the order book and the liquidity pool. This could potentially be a good idea but we just don't know yet. Although CAP 37 does a great job of showing that this is possible, it isn't trivial. There are a ton of edge cases and rounding issues we will have to address. It will inevitably make the network more loaded overall. And importantly we don't know yet if we need to do this work.

 There are a few possible scenarios after liquidity pools get rolled out:

1) Everyone uses the liquidity pools and orderbook usage dries up

2) No one uses the new liquidity pools. People keep using the orderbook.

3) People use both the liquidity pools and the orderbook.


Only in scenario 3 does it make sense to go to the effort to combine the liquidity of the pools and the orderbooks. If we do end up with scenario 3 it is straightforward to combine the liquidity sources after the fact as described in CAP 37. Our general philosophy has always been that we should maintain flexibility as much as possible when we add things and start with the minimal necessary set of features. It seems like it just makes sense to roll out the simple version first and if it gets adoption and it seems necessary we can improve it.



So for now:

- We would like to proceed with something similar to CAP 38

- There will only be one pool per asset pair

- We will interleave liquidity later if both forms of liquidity are being used. 

- We will work on doing something to make the flooding arbitrage transactions less attractive.

Markus Paulson-Luna

unread,
May 27, 2021, 9:04:53 PM5/27/21
to Stellar Developers

Hey Jed, thanks for responding to OrbitLens' points.

I understand the desire to lead with CAP-38, evaluate the results, then allocate dev time to implementing interleaved execution if it seems like it would be valuable. But I still have a few concerns about going forward with CAP-38 that I feel should be considered.

  • Arbitrage opportunities will increase significantly (this was previously addressed but I added counterpoints)

  • Low adoption of AMMs

  • DEX Liquidity Fragmentation

  • Orderbook liquidity drying up isn’t a realistic outcome to test for

These risks combined with the length of time required to patch them with a CAP-37 like approach could cause Stellar to miss a really exciting opportunity. Jed's outcome 3 is the most likely one (AMMs are great for retail users, orderbooks are great for professionals) and it doesn't make sense to implement CAP-38 first if it introduces additional risks that put a CAP-37 implementation and general DEX performance at risk.

Arbitrage

When addressing the arbitrage concerns, Jed stated that:

CAP 38 doesn't increase arbitrage if you assume that there is a constant amount of liquidity in the network. Yes there are now two separate liquidity sources, the order book and the pool for each pair. But as liquidity moves from the order book to the pool this will widen the spreads on the order book and reduce the arb opportunities. Yes if liquidity increases overall because more people are using the dex that could increase the arbitrage opportunities but if liquidity increases in the current system or with CAP 37, arb opportunities also increase. Really we need to do a separate thing under any of these scenarios to make flooding the network with arb txs not make sense.

I don't think I agree that liquidity moving to AMMs will result in the DEX spread widening and a reduction in arbitrage opportunities. DEX market makers will be competing with liquidity pools for trades, so they'll have to maintain tight spreads to capture orders. We'll likely see orderbook depth decrease, but the spread should stay the same if not become tighter. As a result, we'll see more arbitrage attempts with CAP-38, as tighter/similar spreads plus two on-chain liquidity sources whose prices don't move together will result in more on-chain arbitrage opportunities.

I agree that the arbitrage spam issue can be dealt with in a different CAP, so it should not be the determining factor for choosing one CAP over the other. I just wanted to point out that CAP-38 will result in a greater increase in on-chain arbitrage opportunities than CAP-37 would.

Low AMM Adoption

On-chain AMMs should be popular. They're popular on Ethereum, and even without interleaved execution, Stellar-based AMMs should theoretically bring in around 20% APY (this number is based on some backtesting I did using Horizon data). However, CAP-38 only allows AMMs to fill trades created by pathPayment operations. pathPayment operations make up a tiny portion of current network operations and trades. Without a massive shift to using pathPayments for trades over manageOffers, AMMs will take in far fewer fees than they rightly should. If AMMs aren't generating fees, people will stop using them. This could result in it appearing that there is little demand for AMMs on Stellar, and therefore that it is not worth implementing interleaved execution, when the lack of demand is really a result of implementing CAP-38 instead of CAP-37.

Liquidity Fragmentation

I discussed this risk in a previous post. I've copied it here:

2. Pricing and Fragmented Liquidity

After thinking about it more, I'm concerned that not implementing interleaved execution could result in baseline worse slippage for path payments or "market orders" on the DEX. Say that CAP-38 is implemented and 30% of orderbook liquidity moves to LPs; in addition, we'll assume 30% additional liquidity enters the ecosystem because entities not currently providing liquidity decide to deposit funds into LPs. Under this scenario, the orderbook will contain 70% of the liquidity it used to, and LPs will contain 60% of the liquidity that the orderbook used to. This redistribution means that both the LPs and the orderbook are independently less liquid than the old orderbook was. Even though the CAP-38 path payment operation executes against whichever liquidity source offers the best price, the resulting slippage will always be greater than it would have been pre-CAP-38. This drop in DEX performance is liquidity fragmentation, and it would never happen with interleaved execution.

If this liquidity fragmentation were to occur, DEX performance would suffer until interleaved execution is added. According to OrbitLens, it could take upwards of 6 months for an interleaved execution CAP to be drafted and implemented. 6 months of poor DEX performance would hurt both Stellar-based apps and network adoption.
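As a rough illustration of the slippage argument above (purely hypothetical numbers; both venues are modelled as constant-product curves and fees are ignored):

```python
# Illustrative only: both venues are modelled as constant-product curves
# with made-up reserves, and fees are ignored. "combined" stands for the
# pre-CAP-38 orderbook; post-CAP-38 the order fills entirely against
# whichever single source quotes better.

def amount_out(amount_in, reserve_in, reserve_out):
    # constant-product fill: dy = y * dx / (x + dx)
    return reserve_out * amount_in / (reserve_in + amount_in)

trade = 5_000

combined  = amount_out(trade, 100_000, 100_000)  # all liquidity in one place
orderbook = amount_out(trade, 70_000, 70_000)    # 70% stays on the book
pool      = amount_out(trade, 60_000, 60_000)    # 60% of old depth in the LP

best_single = max(orderbook, pool)
print(round(combined), round(best_single))  # 4762 vs 4667: worse slippage
```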

Orderbook liquidity drying up

As OrbitLens has pointed out, orderbooks are much better tools for professional market makers than AMMs due to the flexibility in trading strategies they enable. Additionally, Stellar supports multiple anchors with the same underlying asset. Making a market between these assets using an AMM is incredibly inefficient, so the orderbook will still always be used for cases such as these. Given these points, I think we can rule out orderbook liquidity drying up as a realistic possibility. Therefore, it doesn't make sense to implement CAP-38 with the intention of testing for the evaporation of orderbook liquidity as a possible outcome.


Integrating AMMs with interleaved execution has the potential to make Stellar the most capital-efficient blockchain in the space, and I'm concerned we might miss this opportunity if we implement an unfinished solution. I don't think the risks of implementing CAP-38 are being properly evaluated. I hope the Stellar Core team takes this viewpoint into account before moving forward with a decision.

Markus

Nicolas Barry

unread,
Jun 1, 2021, 3:51:18 PM6/1/21
to Markus Paulson-Luna, Stellar Developers
Hey Markus, thanks for the feedback.

I'm adding some perspective as somebody closer to "core".

See inline.

On Thu, May 27, 2021 at 6:04 PM Markus Paulson-Luna <mar...@script3.io> wrote:

Hey Jed, thanks for responding to OrbitLens' points.

I understand the desire to lead with CAP-38, evaluate the results, then allocate dev time to implementing interleaved execution if it seems like it would be valuable. But I still have a few concerns about going forward with CAP-38 that I feel should be considered.

  • Arbitrage opportunities will increase significantly (this was previously addressed but I added counterpoints)

  • Low adoption of AMMs

  • DEX Liquidity Fragmentation

  • Orderbook liquidity drying up isn’t a realistic outcome to test for

These risks, combined with the length of time required to patch them with a CAP-37-like approach, could cause Stellar to miss a really exciting opportunity. Jed's outcome 3 is the most likely one (AMMs are great for retail users, orderbooks are great for professionals), and it doesn't make sense to implement CAP-38 first if it introduces additional risks that put a CAP-37 implementation and general DEX performance at risk.

Arbitrage

When addressing the arbitrage concerns, Jed stated that:

CAP 38 doesn't increase arbitrage if you assume that there is a constant amount of liquidity in the network. Yes there are now two separate liquidity sources, the order book and the pool for each pair. But as liquidity moves from the order book to the pool this will widen the spreads on the order book and reduce the arb opportunities. Yes if liquidity increases overall because more people are using the dex that could increase the arbitrage opportunities but if liquidity increases in the current system or with CAP 37, arb opportunities also increase. Really we need to do a separate thing under any of these scenarios to make flooding the network with arb txs not make sense.

I don't think I agree that liquidity moving to AMMs will result in the DEX spread widening and a reduction in arbitrage opportunities. DEX market makers will be competing with liquidity pools for trades, so they'll have to maintain tight spreads to capture orders. We'll likely see orderbook depth decrease, but the spread should stay the same, if not tighten. As a result, we'll see more arbitrage attempts with CAP-38: similar or tighter spreads plus two on-chain liquidity sources whose prices don't move together will create more on-chain arbitrage opportunities.

I agree that the arbitrage spam issue can be dealt with in a different CAP, so it should not be the determining factor for choosing one CAP over the other. I just wanted to point out that CAP-38 will result in a greater increase in on-chain arbitrage opportunities than CAP-37 would.

Yes totally agree: arbitrage traffic is already bad today, and may get worse without interleaved execution.
The good news is that the network should do "just fine" in dealing with those arb spikes (as in, ledgers will close in time, etc.).
The problem is that this creates a bunch of failed transactions that have to be processed by downstream systems; those failed transactions don't contribute real value to the network, yet they cost real money to store and process.

What is unclear right now is how much worse the problem would be compared to today.

My intuition is that we probably need to deal with this regardless of where the liquidity is coming from, so a CAP to tackle this holistically might be in order (I already sketched a few ideas to seed the discussion). My preference would be to try to mitigate any kind of failed transaction, not just arb-related ones (which would also mitigate certain classes of front-running attacks, for example).

 

Low AMM Adoption

On-chain AMMs should be popular. They're popular on Ethereum, and even without interleaved execution, Stellar-based AMMs should theoretically bring in around 20% APY (this number is based on some backtesting I did using Horizon data). However, CAP-38 only allows AMMs to fill trades created by pathPayment operations, and pathPayment operations make up a tiny portion of current network operations and trades. Without a massive shift toward using pathPayments for trades instead of manageOffers, AMMs will take in far fewer fees than they rightly should. If AMMs aren't generating fees, people will stop using them. This could make it appear that there is little demand for AMMs on Stellar and, therefore, that interleaved execution isn't worth implementing, when the lack of demand would really be a result of implementing CAP-38 instead of CAP-37.

This one falls into the "we don't know until we try" bucket.
`pathPayment` variants are "pure" take operations that are used for scenarios like cross-border payments, while using `manageOffer` variants comes with a bunch of complexity if the order cannot be fully filled, since it creates an offer on the DEX (which needs to be cancelled at some point).

I personally think that if there is any liquidity, takers will adapt to use it.

I also think that over time it may make sense to follow the patterns introduced by the AMM work (in both CAPs) so as to have "maker only" DEX operations that allow creating/updating offers without ever crossing (pure add/remove liquidity on the DEX). This allows better traffic prioritization and gets us closer to what traditional exchanges offer, for example.

My overall point here is that we should be able to make adjustments based on what we learn.

Liquidity Fragmentation

I discussed this risk in a previous post. I've copied it here:

2. Pricing and Fragmented Liquidity

After thinking about it more, I'm concerned that not implementing interleaved execution could result in baseline worse slippage for path payments or "market orders" on the DEX. Say that CAP-38 is implemented and 30% of orderbook liquidity moves to LPs; in addition, we'll assume 30% additional liquidity enters the ecosystem because entities not currently providing liquidity decide to deposit funds into LPs. Under this scenario, the orderbook will contain 70% of the liquidity it used to, and LPs will contain 60% of the liquidity that the orderbook used to. This redistribution means that both the LPs and the orderbook are independently less liquid than the old orderbook was. Even though the CAP-38 path payment operation executes against whichever liquidity source offers the best price, the resulting slippage will always be greater than it would have been pre-CAP-38. This drop in DEX performance is liquidity fragmentation, and it would never happen with interleaved execution.

If this liquidity fragmentation were to occur, DEX performance would suffer until interleaved execution is added. According to OrbitLens, it could take upwards of 6 months for an interleaved execution CAP to be drafted and implemented. 6 months of poor DEX performance would hurt both Stellar-based apps and network adoption.


Two things here.

I am a lot more optimistic on this; in particular, I trust that market dynamics will ensure that "the right thing will happen":
if LPs don't make as much money because of fragmented DEX/AMM liquidity (which happens if demand dries up because of slippage, etc.), they'll just consolidate liquidity in either the AMM or the DEX. So the worst case (from this work's point of view) is that AMMs don't get adopted.
This would happen on a per-market basis; some markets are likely to move to AMMs, so we'll still move the needle in the right direction for the overall network.

As for "time to market". The choice is a bit different:
if we were to implement something more complicated than CAP38 (that aims to be as simple as possible implementation wise, as we discussed in the CAP38 thread), we would just "take the hit upfront" and deliver an AMM solution on Stellar later.
I think delaying an implementation if there are obvious "deal breaker" type of issues makes total sense.
In our case, I don't think we're discussing deal breakers, but what, at this stage, looks more like arbitrary optimizations.
As we get AMMs in the hands of people, we're going to learn some more, and only then we should decide which problems should be solved next. Maybe that's my optimistic side again.
 

Orderbook liquidity drying up

As OrbitLens has pointed out, orderbooks are much better tools for professional market makers than AMMs due to the flexibility in trading strategies they enable. Additionally, Stellar supports multiple anchors with the same underlying asset. Making a market between these assets using an AMM is incredibly inefficient, so the orderbook will still always be used for cases such as these. Given these points, I think we can rule out orderbook liquidity drying up as a realistic possibility. Therefore, it doesn't make sense to implement CAP-38 with the intention of testing for the evaporation of orderbook liquidity as a possible outcome.



This is more of a "doomsday" type of scenario, and I don't know that we have any data to support it. If the AMM+DEX combination ends up being that broken, we would push to get things fixed ASAP (which could be as simple as disabling AMMs entirely).
 

Integrating AMMs with interleaved execution has the potential to make Stellar the most capital-efficient blockchain in the space, and I'm concerned we might miss this opportunity if we implement an unfinished solution. I don't think the risks of implementing CAP-38 are being properly evaluated. I hope the Stellar Core team takes this viewpoint into account before moving forward with a decision.


I agree with you: we can build the best experience. That doesn't mean we should build it "all at once", though.
You're bringing up risk, which is actually related to that previous point: even with a bare-bones CAP-38 implementation, we're already finding many things that can go wrong (math and compliance).
The last thing I want on the Stellar network is another "DeFi exploit" (we've been seeing those almost every week in the space) because of an oversight in the implementation. By starting "simple", we get to build a stronger foundation and minimize the chance that something terrible happens.

Again, thanks for helping move the conversation forward.

Markus Paulson-Luna

unread,
Jun 3, 2021, 10:57:23 AM6/3/21
to Stellar Developers
Thanks for addressing my concerns, Nicolas.

Given that it's clear SDF recognizes the potential of interleaved execution and will be monitoring the network for any unintended effects of CAP-38, I'm comfortable with CAP-38 being implemented first. Excited to see where this goes!

Markus