Surge pricing changes proposal


Nicolas Barry

Nov 29, 2021, 5:33:52 PM
to Stellar Developers
Hello everyone!

As you probably saw, network activity has been increasing quite a bit lately. This is a good thing: more liquidity in more markets contributes to our mission.
One problem area that we've been tracking for some time, and that gets worse as the number of markets grows, is arbitrage traffic. Arbitrage traffic is necessary for the overall marketplace to make sense and be efficient. The problem on the network is that low fees make it very cheap (some might say too cheap) to submit many competing transactions for the same arbitrage opportunity, only to see the first one "win" and all the other transactions fail. Failed transactions just use up valuable network and validator resources without contributing any value back to the ecosystem, so if we can manage to reduce the number of failed transactions, all network participants are better off (one of the few times when all incentives are aligned).

Different factions of the community have been advocating for (and against) raising the minimum base fee of the network. You can read more about this in a blog post from last year https://www.stellar.org/blog/should-we-increase-the-minimum-fee .

I just drafted an alternative to raising the minimum base fee: I put together a CAP that aims to take a bigger step towards reducing the number of failed transactions on the network while preserving equitable access to network capacity. You can check out the first draft ---> https://github.com/stellar/stellar-protocol/blob/master/core/cap-0042.md .

Let me know what you think, I am sure I made some mistakes and missed something obvious!

In parallel to this CAP, we're merging "damping logic" into core (https://github.com/stellar/stellar-core/pull/3280) that should help mitigate the same type of issue in the short term. That short-term mitigation is a much blunter instrument, as it does not know whether some arbitrage transactions are more valuable than others, which can in theory penalize the wrong parties.

Onwards!

Nicolas

Eric Saunders

Nov 30, 2021, 12:38:24 PM
to Nicolas Barry, Stellar Developers
Nicolas, could you provide a short scenario showing how this feature could be used? Who would use it? What would they set? What would the effect be? I think it would help motivate the intent behind the CAP.

Cheers

Eric

--
You received this message because you are subscribed to the Google Groups "Stellar Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email to stellar-dev...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/stellar-dev/CABf4b3yGMtgM8-j-VG8j0cpuuy9W_ydveOQgnMAw5rWFM5Fkcw%40mail.gmail.com.

Nicolas Barry

Nov 30, 2021, 6:53:55 PM
to Eric Saunders, Stellar Developers
Hey Eric, good question.

The CAP does not expose features that would be usable by end users of the network (similar to CAPs we wrote in the past that impact consensus).
Its purpose is to allow faster innovation on which transactions get included in a given ledger, and at what price. As a consequence, the "customers" in the context of this CAP are validator operators.
The first example in the CAP is probably the first change we would propose, as it reduces "arbitrage spikes" (thereby reducing fees for other transactions) while promoting healthy competition (on fees) between arbitrageurs, all while reducing the percentage of failed transactions that make it into the ledger.

Nicolas


Xiao Ming

Dec 3, 2021, 9:44:25 AM
to Nicolas Barry, Stellar Developers
I think damping is an interesting approach, but it needs broader community discussion and awareness before being committed. It completely bypasses Stellar's fee-based free market economics and would allow network insiders with early knowledge to monopolize arbitrages. Currently the Stellar main network is very centralized, with a few tier 1 organizations participating in consensus and almost all transactions going through Horizon. A purely time-based model would allow these node operators to exclude all other participants from arbitrages.

Other blockchains have been able to solve the arbitrage spam problem using a fair free market model based on fees, and I think Stellar should be able to do the same. I think a very effective change would be to increase the minimum fee for arbitrages, enforce that the source account actually holds enough XLM to start the arbitrage, and damp based on the source account. This change has the additional benefit of requiring accounts that want to participate in arbitrage to hold more XLM, which is good for XLM holders and the price.
 

Daoyuan Wu

Dec 3, 2021, 11:25:16 AM
to Xiao Ming, Nicolas Barry, Stellar Developers
SCP actually allows good decentralization, but a monetary incentive should be given to normal developers (who may need to stake some amount of XLM) to run their own Stellar and Horizon nodes so that they can cover their machine and bandwidth costs. The XLM fee pool could be the source of such an incentive. However, the pool has cumulatively collected only 2M XLM due to the very low base fee compared to other blockchains. Simply increasing the *global* base fee from 0.00001 XLM to 0.001 XLM (just like the 0.001 Algo fee used in Algorand, while the current XLM price is about five times lower than Algo's) could not only provide such an incentive for better decentralization but also address the arbitrage transactions mentioned here.

I think the logic of an L1 blockchain should be kept as simple and stable as possible, so personally I like neither the damping approach nor the "Each subset gets its own base fee" mechanism described in cap-0042.md. Both add complexity to layer 1 and will leave end users confused.

Xiao Ming

Dec 3, 2021, 4:14:10 PM
to Daoyuan Wu, Nicolas Barry, Stellar Developers
On networks like Ethereum and Avalanche with a lot of arbitrage activity, arbitrageurs hold a lot of capital (native tokens) on the network and burn a lot of tokens in fees. Once a network has matured, successful arbitrages typically pay 80-95% of their profit back to the network via fees. Arbitrage opportunities happen due to network usage, so fees should naturally be the way to pay the network back. On Stellar where XLM fees are burned, this should be very good for XLM tokenomics and holders.

In a healthy environment, competitive bidding should cause arbitrageurs to pay most of their profits back to the network via fees. As a network grows to create more arbitrage opportunities, fee burning lets network stakeholders benefit from the growth. This could only be achieved through an open and fair free market process where anyone may participate by bidding. In contrast, the damping implementation creates a mechanism that could enable a privileged few to capture most of Stellar's arbitrage profits without paying much of anything back to the network. As such, I feel damping is against the spirit of an open and fair network and wastes a wonderful way to create value for XLM holders.

I feel Stellar's existing fee bidding process has mostly been working as designed. Arbitrage activity peaks and ebbs with market volatility, as it should. Fees have been growing, and at the recent peak I noticed arbitrageurs bidding very competitively (up to 1 XLM per attempt); some may have overbid and even lost money before scaling back.

I think Stellar's struggles with spam have two main causes:
1. Stellar users and apps still operate under the impression that fees are fixed, and apps are not taking advantage of fee statistics to implement dynamic fee setting. Metamask users on Ethereum, BSC, Avalanche, and Fantom are already accustomed to the fee bidding process, so they know that setting the fee too low could stall or fail a transaction. On Stellar, there needs to be a network initiative to upgrade apps with dynamic fee setting and to educate users about the fee bidding process.

2. It is possible to arbitrage more XLM than an account actually holds. This "feature" is unique to Stellar, but it removes the capital requirement for arbitrage and enables many more attempts than would normally be possible on other blockchains.

In summary, I think arbitrages could be very good for XLM holders and tokenomics if spam is solved in a healthy way (I don't think damping is it). It would encourage more holding of XLM on network (off exchanges) and increase the burning of XLM fees. 

Nicolas Barry

Dec 7, 2021, 9:20:54 PM
to Xiao Ming, Daoyuan Wu, Stellar Developers
Thanks for the replies.

Yes, fee bidding has been working, so in theory we could leave it at that. What I outlined in the CAP is that this works against equitable access to the network:
basically, arb trades can yield arbitrarily high value (depending on market fluctuations), and as a consequence can bid much more than other transaction types. They can therefore both "eat up" a lot of the network capacity and drive up the cost of all transactions on the network, all of that yielding only one successful transaction (the winner of the trade) and many failed ones. Basically, we have a very inefficient lottery.

What I am proposing is to segment network capacity into a number of groups, where each group becomes its own fee marketplace.
Using 1000 as an example of ledger capacity: instead of 1000 arb trades bidding 0.001 XLM each (where the odds of winning that arb trade are 1/1000), arb trades could compete in a pool of 100 arb trades bidding 0.01 XLM each (fewer slots means more competition for that capacity) while leaving the other 900 slots in the ledger alone.
Total fees collected by the network should end up higher (potentially much higher), as arbitrageurs will just compete against each other (or at least be more careful when trying to capture arb opportunities with low expected value).
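A back-of-the-envelope sketch of the numbers above (the figures are the illustrative example from this email, not real ledger data):

```python
# Compare the whole-ledger lottery with the segmented marketplace, using
# the illustrative numbers from the email above.

LEDGER_CAPACITY = 1000

# Today: 1000 arb trades bid the minimum and compete for the whole ledger.
shared_bid = 0.001                     # XLM per transaction
shared_arb_fees = 1000 * shared_bid    # total fees paid by arb trades
shared_win_odds = 1 / 1000             # odds a given arb trade wins

# Segmented: arb trades compete for a 100-slot group; fewer slots means a
# tighter auction, so bids rise. The other 900 slots stay cheap.
arb_slots = 100
segmented_bid = 0.01                   # XLM per transaction
segmented_arb_fees = arb_slots * segmented_bid
segmented_win_odds = 1 / arb_slots
free_slots = LEDGER_CAPACITY - arb_slots

print(f"shared:    {shared_arb_fees:.2f} XLM from arbs, win odds {shared_win_odds:.1%}")
print(f"segmented: {segmented_arb_fees:.2f} XLM from arbs, win odds {segmented_win_odds:.1%}, "
      f"{free_slots} slots left for everyone else")
```

In this toy example the arb fee total happens to come out the same; the gain is that 900 slots are no longer consumed by competing arb trades, and any further bidding war stays confined to the 100-slot group.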

This fee segmentation will also be very useful in other situations where capacity needs to be properly accounted for: you can imagine competing for capacity on a per-CPU-core basis (when certain transactions are forced into the same execution tree).

Nicolas


Xiao Ming

Dec 10, 2021, 7:08:04 PM
to Rakeshkumar Mistry, Nicolas Barry, Daoyuan Wu, Stellar Developers
Thank you Nicolas. To clarify, I think the CAP is fine and appears to be a fair and equitable solution. 

With the CAP in mind, I question the purpose of the change introduced in 18.2.0 to dampen arb flooding:
"Dampen arb flooding, by favoring transactions that are both higher fee and submitted first"

As I mentioned, this dampening implementation would not be equitable, as it can be exploited by network participants with inside access. To be equitable, dampening should be completely random with regard to speed.

I also have concerns about DDoS attacks, as this implementation performs a heavy check on the transaction before the fee is deducted. Would it be susceptible to flooding with carefully crafted large transactions?

On Tue, Dec 7, 2021 at 11:57 PM Rakeshkumar Mistry <rbmist...@gmail.com> wrote:
I'm about to lose $176,000 worth of XLM with the AnchorUSD exchange. My XLM has been sitting in there and I won't be able to withdraw any from them. I remember Jed McCaleb was also one of the investors in that. I'm literally going to do suicide if I don't get my money back. I just lost my job a month ago. If anyone knows anything about what's going on with their development team please let me know. Jim Berkeley danz is one of the founders; I saw him at the Meridian conference.

Nicolas Barry

Dec 10, 2021, 7:56:00 PM
to Xiao Ming, Rakeshkumar Mistry, Daoyuan Wu, Stellar Developers
Yes you are right.

If we implement this CAP (or something else that addresses the problem), we would be able to tackle this type of problem in a much more comprehensive manner.

Until then we're stuck with overlay level changes that will have more tradeoffs.

Nicolas

Xiao Ming

Dec 11, 2021, 1:51:11 PM
to Nicolas Barry, Rakeshkumar Mistry, Daoyuan Wu, Stellar Developers
I know it's not the intent, but I do think it looks bad for network insiders to quickly push a change that could asymmetrically benefit network insiders. I think it's bad press and sets up possible conflicts of interest in the future that could hinder solving this in the correct way, one that benefits all network stakeholders.

I agree short-term changes will have trade-offs, but they should give high consideration to fairness, which I think is critical to a public blockchain.

Nicolas Barry

Dec 13, 2021, 12:58:26 PM
to Xiao Ming, Rakeshkumar Mistry, Daoyuan Wu, Stellar Developers
Hey

Can you clarify what you mean by "network insiders" and how this would benefit them? I want to make sure I understand if what you're concerned about also applies to CAP42.

Maybe it's worth teasing out what "fair" means in this context.
"Fairness" is mostly concerned with issues that would prioritize traffic from certain actors over others; I think that neither the old behavior nor the new behavior changes that.
Then there are concerns around DoS protection (somebody spamming the network) and network incentives derived from "physics" (tradeoffs coming from the fact that transactions don't get submitted to "the network" but to individual nodes), which conflict with fairness to a certain extent. Here the goal is to have rules that promote "desirable behavior".
For the former, fees are used to "deter" people from submitting too many transactions. An example of the latter is that some other networks promote "speed of submission" by giving precedence to transactions submitted first (and rejecting transactions submitted later); obviously this "solves" failed arbitrage spikes, but it creates an incentive for people to aggressively be "first" when submitting transactions.

Going into specifics: compared to the old behavior, the new network behavior basically drops random transactions early on instead of including them in a ledger (and having them fail there).
Fees were already mostly irrelevant before that change: once transactions make it into a ledger, the order in which they get applied does not depend on their fee bids; instead they are shuffled so as to increase fairness (in that context, we're trying to avoid people front-running others by constructing their transactions in a special way).
A way to think about what happens to arb trades (under both old and new behavior) is that transactions have to survive multiple rounds of lotteries in order to "win" an arbitrage opportunity.
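The apply-order shuffle mentioned above can be sketched roughly like this (the seeding scheme here is invented for illustration; it is not stellar-core's actual algorithm):

```python
import random

# Once a transaction set is fixed, the apply order is randomized rather
# than fee-ordered, so a higher bid does not buy front-running priority.

def apply_order(tx_set, ledger_seed):
    rng = random.Random(ledger_seed)   # deterministic given the ledger seed
    order = list(tx_set)               # don't mutate the caller's list
    rng.shuffle(order)
    return order

# Every node derives the same order from the same seed, but submitters
# cannot predict it when constructing their transactions.
print(apply_order(["tx_a", "tx_b", "tx_c"], ledger_seed=7))
```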


Nicolas

Daoyuan Wu

Dec 16, 2021, 10:15:39 AM
to Xiao Ming, Nicolas Barry, Rakeshkumar Mistry, Stellar Developers
Hi Xiao,

Now I also understand your concern.
However, I don't think this will be a big issue. What you are worried about could actually turn out to be an incentive for more people to run their own Horizon nodes for the purpose of arbitrage. This could be treated as a "reward" to Horizon nodes so that they become decentralized in the long run.

Best,
Daoyuan

Xiao Ming

Dec 16, 2021, 10:15:45 AM
to Nicolas Barry, Rakeshkumar Mistry, Daoyuan Wu, Stellar Developers
Hi Nicolas,

Thank you for entertaining my concerns. By "insider" I mean actors who are close to core infrastructure like Horizon; it could even be unauthorized access. By fairness I mean these actors should not be able to capture arbitrages for free, without competing with the public on fees, just because their privilege gives them a few milliseconds of advantage. If they want to win the prize, they should still pay their fair dues. I think fees were still relevant in that increasing the base fee yielded more total fees paid by all arbitrageurs, resulting in more XLM burned, or "profit" for the network's stakeholders. As a big XLM holder I like seeing more XLM burned through fees as economic activity on the network grows. I agree that completely bottlenecking the network is problematic, and I think applying differentiated services is an innovative way to solve this. I think the CAP process is the right one for public awareness and discussion of changes that could significantly alter fairness or enable "censorship" of certain transactions. Regarding the dampening implementation, I also have the technical concerns I mentioned, but the expedited process by which it was rolled out just seems very centralized.

Thank you

Nicolas Barry

Jan 4, 2022, 8:56:49 PM
to Stellar Developers
Let's continue to move the conversation forward.

I would like to discuss CAP-0042 specifically, as it would let us restore proper fees during spikes.

Something that occurred to me is that we may not even need everything I described in the CAP: a first iteration (following the 80/20 principle) could simply partition transactions with "anything that can potentially touch the DEX" on one side (with a hard limit of, let's say, half the ledger capacity) and everything else on the other.
Operations that "potentially touch the DEX" would be the path payment and manage offer variants (and this would be the only custom group per CAP-0042).
Operations to add/remove liquidity in pools, simple payments, claimable balances, set options, etc. would not be in that group and would just be in the "default" group (and can therefore use all capacity if needed).
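The DEX/non-DEX partition described above could be sketched like this (the operation names are Stellar's, but the grouping function itself is illustrative, not stellar-core code):

```python
# Operations that can potentially touch the order book, per the partition
# described in the email above.
DEX_OPERATIONS = {
    "path_payment_strict_send",
    "path_payment_strict_receive",
    "manage_buy_offer",
    "manage_sell_offer",
    "create_passive_sell_offer",
}

def group_for(operations):
    """A transaction goes in the "dex" group if any of its operations can
    potentially touch the order book; everything else stays in "default"."""
    return "dex" if any(op in DEX_OPERATIONS for op in operations) else "default"

print(group_for(["payment", "manage_sell_offer"]))   # dex
print(group_for(["payment", "set_options"]))         # default
```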

The reason I think this might be good enough as a starter is that arb spikes tend to come in "waves" that touch multiple order book pairs (because, for example, one popular asset changes price relative to everything else).

Nicolas

Leigh McCulloch

Jan 19, 2022, 5:03:31 PM
to Stellar Developers
Hi Nicolas,

I think this is really interesting. Do you think this will also have applications in future scaling plans, whereby less resource-intensive operations can be streamlined and more resource-intensive operations could become a subset with a cap?

I also have some questions:

1. Do nodes in the network need to use the same rules for how subsets will be partitioned? What happens if nodes use different partitioning logic?

2. It looks like each subset will be capped at 101 operations, and subsets will be partitioned by order book pairs. Does this mean that a popular pair seeing a surge in high-quality transactions could be capped at 101 operations per ledger?

3. In regards to the below:

>Values are sorted in lexicographic order by:
>- number of operations (as to favor network throughput) unchanged,
>- total fees that the transaction set will collect (as to maximize quality of transaction sets) unchanged

If undesirable transactions (e.g. transactions likely to fail during apply) set their fees high enough that the total fees of their transaction set are highest, will the network still be saturated with undesirable transactions?

The statement above suggests that total fees are aligned with quality. Are higher fees a clear indicator of higher-quality transactions? The motivation section mentions that equitable access is a focus of the CAP, but the user with the deepest pockets will not necessarily produce the highest-quality transactions.

4. How will multi-operation transactions be handled? If I'm submitting a transaction that contains 15 path payments and one of those path payments interacts with a high-fee pair, will all operations in the transaction be charged the higher fee?

5. Could this mean that certain pairs become inaccessible to users who cannot pay the higher fees, reducing the uniform, equitable access available to users without deep pockets?

6. I notice there is no change to transaction results. What tools will clients have to understand how the fee was applied to a transaction? Would the intent be to evolve existing tools like the Stellar Dashboard, which today shows fee utilization? Or, given that transactions will be assigned a variety of base fees, do we need something more granular?

Leigh

Jonathan Jove

Jan 19, 2022, 6:07:16 PM
to Leigh McCulloch, Stellar Developers
Hi Nicolas,

This proposal risks significantly weakening the fee auction. For example, if a validator were to nominate a transaction set with no transactions in the default subset, then no fee auction would take place. But perhaps this is actually a feature, not a bug. Should we dispense with the subset notion and just assign a baseFee (or feeCharged) to every transaction? At least for the purpose of surge pricing, it’s not obvious to me what the subset-based method adds compared to what I just described. Specifically, in the degenerate case every transaction could be in its own subset anyway, which is equivalent to what I described above. (I recognize there could be other future uses for the subset mechanism, but if we are choosing that design for future uses then we should be explicit about why.)


If we assign a base-fee to every transaction, we can then implement extremely generic fee policies. As an example that fits into the discussion around arbitrage damping, we could:


- Classify transactions as “not arbitrage” or “arbitrage”

- “Not arbitrage” transactions participate in the fee-auction, but all fees are calculated in advance of nomination—so they all get the same baseFee as would have been calculated by CAP-0005

- “Arbitrage” transactions do not participate in the fee-auction, they always pay their feeBid—so they all get baseFee = feeBid / numOperations, which is what would have been charged before CAP-0005
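The straw-man policy above could be sketched as follows (the field names `fee_bid`/`num_ops` and the arbitrage classification are made up for illustration; they are not CAP-0042 or stellar-core definitions):

```python
def charged_base_fee(fee_bid, num_ops, is_arbitrage, auction_base_fee):
    """Arbitrage txs always pay their full bid (baseFee = feeBid / numOperations,
    the pre-CAP-0005 behavior); everything else pays the auction base fee
    computed in advance of nomination."""
    if is_arbitrage:
        return fee_bid // num_ops
    return auction_base_fee

# Hypothetical ledger where the fee auction cleared at 100 stroops per op:
assert charged_base_fee(5000, 2, is_arbitrage=True, auction_base_fee=100) == 2500
assert charged_base_fee(5000, 2, is_arbitrage=False, auction_base_fee=100) == 100
```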


I’m proposing the above fee policy as a straw-man. It’s an interesting concept that proposals like CAP-0042 have the power to support, although I think my variant makes this quite a bit cleaner. We should try to understand what fee policy will really help advance the goals of the Stellar network to provide equitable access to the financial system.


------------------------

A separate thought: I don’t know if implementations will necessarily “compete on the quality of transaction sets.” I think it’s possible that implementing something like CAP-0042 would lead to a future where every validator constructs transaction sets in the way that best suits them.


Best,

Jon



Nicolas Barry

Jan 21, 2022, 7:57:04 PM
to Leigh McCulloch, Stellar Developers
Thanks for the response, this is great. See inline below.

On Wed, Jan 19, 2022 at 2:03 PM 'Leigh McCulloch' via Stellar Developers <stell...@googlegroups.com> wrote:
Hi Nicolas,

I think this is really interesting. Do you think this will also have applications in future scaling plans, whereby less resource-intensive operations can be streamlined and more resource-intensive operations could become a subset with a cap?

Yes: one of the use cases I can see is to put "Speedex trades" into their own group to isolate them from less efficient transactions.
My idea for the first application of this CAP was to isolate "expensive" operations, i.e. any operation that can potentially touch offers (see my previous response earlier in this thread). I need to update the draft to reflect this.

As for the need for a new CAP: the current CAP is written in a way that minimizes the need for future CAPs. There are pros and cons to both approaches.
The reason I wanted to start with this version is that it's simpler and somewhat mitigates the "cat and mouse" game that comes with this type of work.
The con is that it's harder for somebody to understand why certain transactions compete with each other.
The thing to realize, though, is that overlay policies (how nodes prioritize traffic) cannot be encoded in the protocol anyway, as they apply before consensus.


I also have some questions:

1. Do nodes in the network need to use the same rules for how subsets will be partitioned? What happens if nodes use different partitioning logic?

In theory, nodes could have grouping rules so incompatible that after a few "hops" the intersection of rules would cause a node to forward a lot fewer transactions/operations to its peers.

In practice I don't think this will be the case:
* there are many alternate "paths" to validators, and a transaction will be considered if it makes it to any one validator (when that validator nominates)
* rules deployed on the network won't be too different (a property we'll try to maintain across consecutive builds), and if some people decide to deploy custom rules it won't be different from what can happen today
* there are not that many hops to reach any given validator anyway
* nodes forward more than fits in a ledger anyway, so even if policies "conflict" between nodes, there should still be enough transactions forwarded in aggregate
 

2. It looks like each subset will be capped at 101 operations, and subsets will be partitioned by order book pairs. Does this mean that a popular pair seeing a surge in high-quality transactions could be capped at 101 operations per ledger?

I think you're referring to "if the overall candidate set is within 101 operations of the maximum allowed, stop". This is not a maximum; it comes from CAP-0005 and is used to tell whether surge pricing is in effect.
Look for rule 6: if `count_ops(txSet) > max(0, ledgerHeader.maxTxSetSize-100)`.
That said, this rule was only necessary because the only information we had on a `txSet` at the time was its number of operations, so rule 6 was how we could tell whether the entire txSet was in surge pricing mode. What we could do now is simply create an explicit surge pricing group when this is the case (with "default" ending up empty).
I will update the CAP to reflect this; it is simpler.
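The CAP-0005 check quoted above, written out as a predicate (names mirror the CAP's pseudocode, not actual stellar-core symbols):

```python
def in_surge_pricing(count_ops, max_tx_set_size):
    """Surge pricing is in effect when the candidate transaction set comes
    within 100 operations of the ledger's maximum."""
    return count_ops > max(0, max_tx_set_size - 100)

print(in_surge_pricing(950, 1000))   # True: within 100 ops of the cap
print(in_surge_pricing(850, 1000))   # False: plenty of headroom
```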

 

3. In regards to the below:

>Values are sorted in lexicographic order by:
>- number of operations (as to favor network throughput) unchanged,
>- total fees that the transaction set will collect (as to maximize quality of transaction sets) unchanged

If undesirable transactions (e.g. transactions likely to fail during apply) set their fees high enough that the total fees of their transaction set are highest, will the network still be saturated with undesirable transactions?

Yes, this is correct. This CAP doesn't try to directly address the issue of "failing transactions" in general.
The network is already being saturated by failing/undesirable transactions (which compete against each other), so at least with this CAP we can limit how much of the total network capacity is lost to those "bidding wars".
 

The statement above suggests that total fees are aligned with quality. Are higher fees a clear indicator of higher-quality transactions? The motivation section mentions that equitable access is a focus of the CAP, but the user with the deepest pockets will not necessarily produce the highest-quality transactions.

In the absence of other signals (signals are used to pick groups, so this applies within a group), I think that market dynamics are a good way to get higher-quality transactions. Even in the context of those "undesirable transactions", I would rather see people compete to arb $1000s than fractions of pennies.

For true equitable access, I agree that something else could work better. I don't think this CAP is incompatible with future work on that front. For example, you could imagine groups with a maximum fee where inclusion is decided differently (like some global FIFO order), assuming that the fee charged there is high enough to exclude spam transactions and that people have lower expectations about when those transactions would be included.
 

4. How will multi-operation transactions be handled? If I'm submitting a transaction that contains 15 path payments and one of those path payments interacts with a high-fee pair, will all operations in the transaction be charged the higher fee?

Yes, the base fee used for a transaction is derived from its most expensive operation.
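That rule can be sketched as follows (the group names and fee levels are invented for illustration, not values from the CAP):

```python
GROUP_BASE_FEE = {"default": 100, "dex": 1000}   # stroops per operation

def tx_base_fee(op_groups):
    """A transaction's base fee is the highest base fee among the groups its
    operations fall into: one expensive operation lifts the whole tx."""
    return max(GROUP_BASE_FEE[g] for g in op_groups)

# 15 path payments, one of which touches a high-fee pair:
print(tx_base_fee(["default"] * 14 + ["dex"]))   # 1000
```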

5. Could this mean that certain pairs become inaccessible to users who cannot pay the higher fees, reducing the uniform, equitable access available to users without deep pockets?

Yes, this is correct. Within a group it's a "highest bidder" policy; from an equitable access point of view this is still strictly better than today, where high activity on a single pair causes all transactions to become more expensive.

Note that for the first version we'd start with a smaller change and just partition DEX vs non-DEX activity, as I mentioned earlier, since it's also strictly better than today while being significantly easier to implement.
 

6. I notice there is no change to transaction results. What tools will clients have to understand how the fee was applied to a transaction? Would the intent be to evolve existing tools like the Stellar Dashboard, which today shows fee utilization? Or, given that transactions will be assigned a variety of base fees, do we need something more granular?

Yes, there is no change here from a CAP point of view. The strategy of using as your fee bid the highest fee you are willing to pay does not change.
For other "power users", core's "tx" endpoint will continue to return a "recommended bid" when a transaction cannot be accepted by the node because the fee bid is too low. This recommended bid will be adjusted to use the new grouping logic. Of course, this is a node's opinion (likely to be right if we assume that most people run "stock" stellar-core); other nodes may filter/prioritize differently, but this cannot be encoded in the protocol. Horizon nodes from organizations that nominate on the network could expose better estimates.
 

Leigh


--
You received this message because you are subscribed to a topic in the Google Groups "Stellar Developers" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/stellar-dev/_h3hd_Y3IiY/unsubscribe.
To unsubscribe from this group and all its topics, send an email to stellar-dev...@googlegroups.com.

Nicolas Barry

Jan 21, 2022, 10:03:57 PM1/21/22
to Stellar Developers

Thanks Jon! See inline.

On Wednesday, January 19, 2022 at 3:07:16 PM UTC-8 Jonathan Jove wrote:
Hi Nicolas,

This proposal risks significantly weakening the fee auction. For example, if a validator were to nominate a transaction set with no transactions in the default subset, then no fee auction would take place. But perhaps this is actually a feature, not a bug. Should we dispense with the subset notion and just assign a baseFee (or feeCharged) to every transaction? At least for

I think what is weakened is the "network minimum fee", as using a group allows a validator to raise fees for all transactions in that group, even if the group is smaller than the network capacity. This is what I expanded on in the security section of the CAP https://github.com/stellar/stellar-protocol/blob/00d5894427ecb8ff4931be2e3401c5f8bebff316/core/cap-0042.md#security-concerns and probably requires improving tooling to audit nominators.

 
the purpose of surge pricing, it's not obvious to me what the subset-based method adds compared to what I just described. Specifically, in the degenerate case every transaction could be in its own subset anyway, which is also equivalent to what I described above. (I recognize there could be other future uses for the subset mechanism, but if we are choosing that design for future uses then we should be explicit about why.)

Yes, in my response to Leigh I realized that we should get rid of the default group for the same reason and just use "TSUBSET_SURGE" (which should just be renamed to "TSUBSET_FEE").

To me, tagging individual transactions with a fee vs. using groups is just a question of which is more expressive.
The first version is functionally the same: in one proposal you assign a fee to each transaction; in this version the base fee is assigned to multiple transactions at once.
The group version has the benefit of being more compact in most situations, as we don't need to repeat the tag/fee for each element (base fees are likely to be the same, or can be made to be), and it also allows expressing things that are not tied to individual transactions.
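
The compactness argument can be illustrated with a small sketch (hypothetical encoding, not the CAP's actual XDR):

```python
# Per-transaction form: every entry repeats its own (tx, base_fee) pair.
per_tx = [("tx1", 100), ("tx2", 100), ("tx3", 100), ("tx4", 500)]

# Group form: each base fee is stated once, followed by its member
# transactions, so the shared fee of 100 is not repeated three times.
grouped = [(100, ["tx1", "tx2", "tx3"]),  # discounted group
           (500, ["tx4"])]                # isolated transaction

# Both encodings describe the same base-fee assignment:
flat = {tx: fee for fee, txs in grouped for tx in txs}
assert flat == dict(per_tx)
```

The group form also gives a natural place to hang information that applies to a set of transactions rather than to any single one.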

I didn't spend too much time thinking about what kind of other groups could exist as the extension point is basically free.

I could see other groups attached to transaction sets being things like:
* random numbers/seeds/parameters to update certain ledger entries (like oracles)
* metadata related to parallel execution (scheduling groups of sorts)
* metadata related to which network partition the transaction set originated from (nominators could flood differently based on that)
* claims related to execution (hash of the expected result, or hints related to predictive execution)



If we assign a base-fee to every transaction, we can then implement extremely generic fee policies. As an example that fits into the discussion around arbitrage damping, we could:


- Classify transactions as “not arbitrage” or “arbitrage”

- “Not arbitrage” transactions participate in the fee-auction, but all fees are calculated in advance of nomination—so they all get the same baseFee as would have been calculated by CAP-0005

- “Arbitrage” transactions do not participate in the fee-auction, they always pay their feeBid—so they all get baseFee = feeBid / numOperations, which is what would have been charged before CAP-0005


I’m proposing the above fee policy as a straw-man. It’s an interesting concept that proposals like CAP-0042 have the power to support, although I think my variant makes this quite a bit cleaner. We should try to understand what fee policy will really help advance the goals of the Stellar network to provide equitable access to the financial system.
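
As a sketch (Python, with made-up names; `cap5_base_fee` stands in for the base fee the CAP-0005 auction would have produced), the straw-man policy above might compute charged fees like this:

```python
def charged_fee(tx, cap5_base_fee):
    # Hypothetical sketch of the straw-man fee policy above; `tx` is a dict
    # with "fee_bid", "num_ops" and "is_arbitrage". Returns the fee charged,
    # which never exceeds the bid.
    if tx["is_arbitrage"]:
        # Arbitrage transactions always pay their full bid, i.e. effectively
        # base_fee = fee_bid / num_ops (the pre-CAP-0005 behavior).
        return tx["fee_bid"]
    # "Not arbitrage" transactions get the CAP-0005-style discounted price,
    # capped at their bid.
    return min(tx["fee_bid"], cap5_base_fee * tx["num_ops"])
```

This is only an illustration of the policy's shape; how transactions get classified is exactly the kind of off-chain decision the thread is debating.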


Yes, I am pretty interested in having more discussions on this policy front. The straw-man I am actually going to put in the CAP is the simplest I could think of, the one I mentioned earlier: partition into two groups, "transactions that may interact with offers on the DEX" and "all other transactions". It's there for illustrative purposes, and that way we don't tie the CAP to actual policy choices that are not blocking the CAP itself.

If the CAP gets enough support, we should continue the conversation on policies on this mailing list: even though they're not going to be CAP related, they deserve more eyes given the potential impact on the network.
 


------------------------

A separate thought: I don’t know if implementations will necessarily “compete on the quality of transaction sets.” I think it’s possible that implementing something like CAP-0042 would lead to a future where every validator constructs transaction sets in the way that best suits them.


Technically this is already the case, I think the important bit here is that we're giving new tools to validators.
I think this is well aligned with the trust model of the network, and is an opportunity for the ecosystem to build more tools to take advantage of this. I think we'll see better reports on block explorers around "validator fairness" for example.
 


Best,

Jon


Thanks again! This gave me new interesting ideas!

Nicolas

Nicolas Barry

Feb 9, 2022, 5:48:01 PM2/9/22
to Stellar Developers

Hello, I made a few updates to CAP-0042 based on our conversation at the protocol meeting last week.


A few highlights:

* I added an extension point "phases" that should allow adding more transaction phases in the future (such as a Speedex phase)
* I removed the whole "indices" stuff, instead I just tagged groups of transactions directly with their fee policy
* I simplified fee policies
   * requiring all transactions to conform to explicit fee policies
   * by adding a new policy "fee is bid" and renaming the other policy to "discounted"
* I made it explicit that network users cannot expect fees to follow a Vickrey-like auction
    * fee discounting is "best effort" and decided "off-chain" by individual validators (or groups of validators)
    * validators may take advantage of SCP's "flexible trust" to select validators better aligned with themselves on fee policies
* I added a few more examples of how we can leverage fee policies

Please let me know if you have additional feedback. In particular, I am not super happy with the nomenclature for whatever sits under a "phase".

Nicolas

Nicolas Barry

Feb 14, 2022, 9:46:20 PM2/14/22
to Stellar Developers
Jon replied directly to me instead of the mailing list, so I am including his comments here with my responses.

> Hi Nicolas,

> Thanks for these updates.

> First, a nitpick on the document. The simple summary is pretty complicated, so it should probably be made actually simple. Maybe something like “This CAP changes transaction sets to enable different fee policies for different groups of transactions.”


Yeah I agree. Done.


> Second, two questions/comments about the XDR:

> - Don’t we need to make changes to StellarMessage to support the new GeneralizedTransactionSet for fetching transaction sets?

Yes you are correct. We normally do not include overlay spec in CAPs. I added a mention in the "backward compatibility" section of the doc.


> - TransactionHistoryEntry is pretty wasteful of space this way. I don’t think it really matters since there is only one per ledger, but I figured I would mention it.

This is explained in the "Consensus Value" section. We waste the size of an empty txSet, so 32+4=36 bytes per ledger.

This seemed negligible to me relative to the amount of data per ledger, but maybe other people think it matters? We could change the format of checkpoint files entirely and emit a new type that wraps TransactionHistoryEntry and TransactionHistoryEntryV2 with the extension point in front (a checkpoint may contain both old and new TransactionHistoryEntry). This way we'll waste only 4 bytes per ledger (at the expense of a breaking change for anything that deals with those files).



> Third, regarding backwards incompatibilities. I don’t think the statement “Users that just bid what they’re willing to spend on fees, will not be effected” is true. I might be willing to pay $10 to do something, but I don’t want to pay that unless I have to. This behavior doesn’t exist anymore: clients that want to pay a low fee will need a dynamic fee policy. Generally, our advice since CAP-0005 has been “bid what you’re willing to pay” but that’s not necessarily good advice anymore.

I expanded on this in a different section, "Design rationale". It all comes down to the "unless I have to", which was and still is pretty vague (in a way, by calling the CAP-0005-like behavior a "discount", this CAP conveys better what is going on).

I don't think people need complicated fee policies (other than people performing many transactions, but those fall under "advanced users"): a typical user would not set a $10 fee but something like $0.001 (and still probably pay a lot less than that), and only if things keep timing out would they decide to do something else (try again later, or pay 10x more in fees).

I think it would not necessarily be a bad thing for people to bid closer to what they really want to pay, as it reduces the chance of a fee spike that catches people by surprise (CAP-0005 can cause fees to escalate very quickly).
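
The simple client strategy described above (bid a modest cap, escalate only on repeated timeouts) can be sketched as follows; `submit` is a hypothetical callback, not a real SDK function:

```python
def submit_with_escalation(submit, tx, start_fee, max_fee, factor=10):
    # Hypothetical sketch of the client-side strategy described above.
    # `submit(tx, fee)` is assumed to return True once the transaction makes
    # it into a ledger, and False if it times out.
    fee = start_fee
    while fee <= max_fee:
        if submit(tx, fee):
            return fee  # accepted; the charged fee may be well below this bid
        fee *= factor   # timed out: retry with a much higher bid
    raise RuntimeError("transaction not accepted below the fee ceiling")
```

The point of the design is that no complex fee prediction is needed client side: the bid is a ceiling, and under a discounted policy the charged fee is usually far below it.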


> Overall, I think these changes are a step in the right direction.

> Best,

> Jon


Thanks!


Leigh McCulloch

Feb 15, 2022, 5:25:33 PM2/15/22
to Nicolas Barry, Stellar Developers
Reviewing the latest update CAP-42@5e8ab65:

I think the proposal is much easier to grasp now. I have a few questions:

This CAP gives validators control of which fee policy to use for individual transactions included in a ledger.

Validators get to select how to decompose and assign policies.

There is no enforcement at the protocol layer on which of fee regimes (discounted vs direct fee) is picked for a given transaction. 

Who decides what fees get paid? The summary says "validators", which implies to me that all validators decide together, but later in the CAP it says there is "no enforcement" of which fees are assigned. Does that mean the validator selected to nominate the ledger has unilateral control over the fees assigned to transactions within that ledger?

As a consequence, with this CAP, a single transaction can be isolated by a validator and be forced to pay for its fee bid.

Does anything stop a validator isolating the transactions of specific account(s) to cause specific accounts to always pay higher fees than they logically need to? It seems like it might be possible for a validator to abuse this.

This CAP aims to correct this by isolating subsets of transactions that can compete on fees without causing overall fees to increase.

I think this CAP does a great job of introducing a tool validators can use to isolate transactions, but it is pretty difficult for clients to understand what that will look like in reality. As a user I don't know if I'm bidding for a fee auction, or if I'm likely to be pegged at my max bid repeatedly.

Users that just bid what they're willing to spend on fees, will not be effected.

I think it is reasonable to assume that applications or users could be setting bids on the basis of willingness to pay higher fees for the percentage of time the network has surge pricing. The documentation we've published about fees (some linked below) presents fees in this way, by pointing out that you only pay higher fees during surges, and that users pay the minimum fee required to get into a ledger. As a result, I think users will be affected by this CAP. Whether that effect is significant is unclear due to the flexibility of the CAP. I think we should include some discussion about this in the CAP.


Also, the way that developers find out about what fees to use, according to the docs, is using the fee stats endpoint of Horizon. I think this proposal breaks that workflow, and the backwards incompatibility section probably needs to discuss this also.

Regarding the XDR:

There probably needs to be some discussion about how applications will understand how fees applied to a transaction too. I imagine Horizon could include information about what fee component applied in the existing transaction endpoints, but it would probably also be useful if the transaction results were extended to include the transaction set component that applied too so that advanced users can track in real time fee behavior without having to re-query stellar-core or Horizon out-of-band. I don't feel strongly about this for my own use cases, but it seems like an area that will otherwise lack visibility.

George Kudrayvtsev

Feb 18, 2022, 8:27:39 AM2/18/22
to Stellar Developers
Hello everyone,

I just want to chime in to echo and expand upon some of Leigh's points from a Horizon / ecosystem perspective.

TL;DR: Users have a hard time dealing with fees as it stands today. Bubbling these tx-subset properties out of the network will be critical in keeping users informed, but I'm concerned about the depth to which we're violating abstraction layers. This CAP will have a long tail of downstream effects.


> Validators get to select how to decompose and assign policies.

The nature of SCP means validators could eventually punish "bad actors", but will there be an accessible way for operators to even understand what's going on in the network à la StellarBeat? To fulfill the CAP's "if certain validators adopt unfair fee policies [...]" security mitigation, Core needs a way to detect abuse. However, even with such a mechanism, it seems to me that it will be really difficult to distinguish erroneous vs. transient vs. malicious validator behavior (which could be anything from unnecessarily up-charging fees to dropping transactions)...


> Transactions will never be charged more than their maximum bid [...]
> Users that just bid what they're willing to spend on fees, will not be effected [...]
> If things keep timing out they can decide to do something else [...]

While we do recommend that folks "set the highest fee you're willing to spend," I think many users who set this beyond the default expect it to rarely be the case. This is exacerbated by the fact that fee strategies are often not in control of individual users -- for example, I can't set a higher fee on an important payment from my LOBSTR wallet -- so the agency in the "you're willing to spend" and "do something else" parts gets diluted.

Basically, if this CAP raises the average paid fee, expect downstream outcry. Even if it doesn't, since there's already a lot of confusion about how fees work (as evidenced by Leigh's subset of doc links), changing that mechanism will require detailed messaging with a long tail of support.

There's a corollary here that Leigh also noted: folks rely on Horizon's /fee_stats to make decisions (~3% of requests). From my understanding, this CAP essentially makes those metrics defunct. Even if a Horizon instance could learn about what its Core is up to, how would it learn about the rest of the nodes to give a global view on fee stats?

This ties closely into the earlier point re: behavior observation -- Core will need to bubble up more network-level details, and this CAP will likely result in a lot more pressure on Horizon to provide ways for users to peek at that info. That's a layer of complexity and detail we've been able to cleanly abstract away for now. I'm concerned about how severely this breaks the Core <--> Horizon <--> SDKs abstraction barriers, and yet it seems unavoidable..


Anyway, I know a lot of that was pretty vague and lacking a call to action, but hopefully I've made some platform concerns clearer.

Cheers,
-George

Nicolas Barry

Feb 18, 2022, 9:37:52 PM2/18/22
to George Kudrayvtsev, Stellar Developers
Hi George. Thank you for your questions.

I am going to respond to your questions only as you're expanding on Leigh's questions (some of those questions we touched on during the last protocol meeting, and I didn't have time to respond here).

See inline below

On Fri, Feb 18, 2022 at 5:27 AM 'George Kudrayvtsev' via Stellar Developers <stell...@googlegroups.com> wrote:
Hello everyone,

I just want to chime in to echo and expand upon some of Leigh's points from a Horizon / ecosystem perspective.

TL;DR: Users have a hard time dealing with fees as it stands today. Bubbling these tx-subset properties out of the network will be critical in keeping users informed, but I'm concerned about the depth to which we're violating abstraction layers. This CAP will have a long tail of downstream effects.


Yes, I agree, and I mentioned this during the protocol meeting. The current strategy people use to compute their fee bid does not work: they try to predict what to bid using the "fee stats" endpoint, but (as we've seen when the network gets into long periods of "surge pricing") network conditions change too fast for that to be reliable.
I added to the CAP an example strategy that would work a lot better, without the need for complex logic client side.

 


> Validators get to select how to decompose and assign policies.

The nature of the SCP means validators could eventually punish "bad actors", but will there be an accessible way for operators to even understand what's going on in the network à la StellarBeat? To fulfill the CAP's "if certain validators adopt unfair fee policies [...]" security mitigation, Core needs a way to detect abuse. However, even with such a mechanism, it seems to me that it'll be really difficult to determine erroneous vs. transient vs. malicious validator behavior (which could be anything from unnecessary up-charging fees, dropping transactions, etc.)...

This is not something that would be implemented at core's layer: this type of analysis needs to be performed "out of band" and based on historical data.
The potential risk that some validators try to censor certain transactions already exists, so there is already a need for auditing validators. I added some information in the draft about this (the data is already there and accessible).

The change here is that now we're allowing some validators to potentially not give as much of a discount on fees as a typical user would expect. If some validators deviate from others, it will be visible in historical data. For example, isolating arbitrary transactions will be clearly visible (as no sane strategy would penalize a small group of transactions).
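
Such an out-of-band audit over historical data could look roughly like this (a hypothetical sketch; the input format is made up, as the real data would come from history archives):

```python
from collections import defaultdict
from statistics import mean

def discount_by_nominator(ledgers):
    # Hypothetical audit sketch: per nominating validator, the average ratio
    # of fee charged to fee bid. `ledgers` is assumed to be a list of dicts
    # like {"nominator": "validator-A", "txs": [(fee_bid, fee_charged), ...]}.
    # A validator whose ratio stays near 1.0 while its peers' is much lower
    # would stand out in such a report.
    ratios = defaultdict(list)
    for ledger in ledgers:
        for bid, charged in ledger["txs"]:
            ratios[ledger["nominator"]].append(charged / bid)
    return {v: mean(rs) for v, rs in ratios.items()}
```

This is the kind of "validator fairness" reporting block explorers could build on top of the already-public ledger history.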
 


> Transactions will never be charged more than their maximum bid [...]
> Users that just bid what they're willing to spend on fees, will not be effected [...]
> If things keep timing out they can decide to do something else [...]

While we do recommend that folks "set the highest fee you're willing to spend," I think many users who set this beyond the default expect it to rarely be the case. This is exacerbated by the fact that fee strategies are often not in control of individual users -- for example, I can't set a higher fee on an important payment from my LOBSTR wallet -- so the agency in the "you're willing to spend" and "do something else" parts gets diluted.

Yes we discussed this during the protocol meeting (I updated the "backward compatibility section"). There will be some changes needed in wallets in particular.
Right now very few applications actually follow the previous guidelines based on CAP-0005: 20,000 stroops (a fraction of a penny) covers the 80th percentile of fee bids.
 

Basically, if this CAP raises the average paid fee, expect downstream outcry. Even if it doesn't, since there's already a lot of confusion about how fees work (as evidenced by Leigh's subset of doc links), changing that mechanism will require detailed messaging with a long tail of support.

There's a corollary here that Leigh also noted: folks rely on Horizon's /fee_stats to make decisions (~3% of requests). From my understanding, this CAP essentially makes those metrics defunct. Even if a Horizon instance could learn about what its Core is up to, how would it learn about the rest of the nodes to give a global view on fee stats?

fee stats is based on transactions that were processed by the network, so it's still useful.
Like you said earlier, the problem is that today people don't really know how to use it effectively so there is some work to do in documentation regardless.
I added an example strategy in the CAP that would improve the experience quite a bit.
 

This ties closely into the earlier point re: behavior observation -- Core will need to bubble up more network-level details, and this CAP will likely result in a lot more pressure on Horizon to provide ways for users to peek at that info. That's a layer of complexity and detail we've been able to cleanly abstract away for now. I'm concerned about how severely this breaks the Core <--> Horizon <--> SDKs abstraction barriers, and yet it seems unavoidable..

As mentioned earlier, this should not be needed. Even today it's not possible to know what the state of all transaction queues are on the network (and in particular the state of the transaction queue of the next nominator), so any strategy that tries to reason about those things is bound to fail.

What we discussed during the protocol meeting was that we want to be sure that the user has control: if someone bids higher than others, regardless of how the transaction set was broken up, that transaction will win against the others (this addresses a concern raised earlier on this thread, when a change added to core back in December violated this principle).
 


Anyway, I know a lot of that was pretty vague and lacking a call to action, but hopefully I've made some platform concerns clearer.

Cheers,
-George

Thank you. I updated the CAP.

Nicolas


 