I'm about to lose $176,000 worth of XLM with the AnchorUSD exchange. My XLM has been sitting in there and I haven't been able to withdraw any from them. I remember Jed McCaleb was also one of the investors in that. I'm literally going to commit suicide if I don't get my money back. I just lost my job a month ago. If anyone knows anything about what's going on with their development team, please let me know. Jim Berkeley Danz is one of the founders; I saw him at the Meridian conference.
Hi Nicolas,

Thank you for entertaining my concerns. By "insider" I mean actors who are close to the core infrastructure, like Horizon. It could be unauthorized access. By fairness I mean these actors should not be able to capture arbitrages for free, without competing with the public on fees, just because their privilege gives them a few milliseconds of advantage. If they want to win the prize they should still pay their fair dues.

I think fees were still relevant in that increasing the base fee yielded more total fees paid by all arbitrageurs, resulting in more XLM burned, or "profit" for the network's stakeholders. As a big XLM holder I like seeing more XLM burned through fees as economic activity on the network grows.

I agree that completely bottlenecking the network is problematic, and I think applying differentiated services is an innovative idea to solve this. I think the CAP is the right process for public awareness and discussion when making changes that could significantly alter fairness or amount to "censorship" of certain transactions. Regarding the dampening implementation, I also have the technical concerns I mentioned earlier, but the expedited process by which it was rolled out just seems very centralized.

Thank you
Hi Nicolas,

I think this is really interesting. Do you think this will also have applications for future scaling plans, whereby less resource-intensive operations can be streamlined and more resource-intensive operations could become a subset with a cap?
I also have some questions:
1. Do nodes in the network need to use the same rules for how subsets will be partitioned? What happens if nodes use different partitioning logic?
2. It looks like each subset will be capped to 101 operations, and subsets will be partitioned by order-book pairs. Does this mean that a popular pair seeing a surge in high-quality transactions could be capped at 101 operations per ledger?
3. Regarding the text quoted below:
>Values are sorted in lexicographic order by:
>- number of operations (as to favor network throughput) unchanged,
>- total fees that the transaction set will collect (as to maximize quality of transaction sets) unchanged
If undesirable transactions (e.g. transactions likely to fail during apply) set their fees high enough that the total fees of their transaction set are highest, will the network still be saturated with undesirable transactions?
Also, regarding the statement above about total fees being aligned with quality: are higher fees a clear indicator of higher-quality transactions? The motivation section mentions that equitable access is a focus of the CAP, but the user with the deepest pockets will not necessarily produce the highest-quality transactions. (A rough sketch of the quoted ordering appears after these questions.)
4. How will multi-operation transactions be handled? If I'm submitting a transaction that contains 15 path payments and one of those path payments interacts with a high-fee pair, will all operations in the transaction be charged the higher fee?
5. Could this make certain pairs accessible only to users who can pay the higher fees, reducing the uniform, equitable access that users without deep pockets have today?
6. I notice there is no change to transaction results. What tools will clients have to understand how the fee was applied to a transaction? Would the intent be to evolve existing tools, like the Stellar Dashboard that exists today and shows fee utilization? Or, given that transactions will be assigned a variety of base fees, do we need something more granular?
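As a side note on the ordering quoted in question 3: here is a minimal Go sketch of how that lexicographic comparison could look. The semantics are inferred from the quoted text only; names are hypothetical and this is not stellar-core code.

```go
package sketch

// candidate summarizes a nominated transaction set for comparison purposes.
type candidate struct {
	numOps    int64 // total operations in the transaction set
	totalFees int64 // total fees the transaction set will collect, in stroops
}

// better reports whether a is preferred over b under the quoted ordering:
// compare by number of operations first (favor throughput), then by total
// fees collected (favor "quality" of the transaction set).
func better(a, b candidate) bool {
	if a.numOps != b.numOps {
		return a.numOps > b.numOps
	}
	return a.totalFees > b.totalFees
}
```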
Leigh

On Monday, November 29, 2021 at 2:33:52 PM UTC-8 Nicolas Barry wrote:

Hello everyone!

As you probably saw, network activity has been increasing quite a bit lately. This is a good thing: more liquidity in more markets contributes to our mission.

One problem area that we've been tracking for some time, and that gets exacerbated with the number of markets, is arbitrage traffic. Arbitrage traffic is necessary for the overall market place to make sense and be efficient. The problem on the network is that low fees make it very cheap (some might say too cheap) to submit many competing transactions for the same arbitrage opportunity, only to see the first one "win" and all other transactions fail. Failed transactions just use up valuable network and validator resources without contributing any value back to the ecosystem, so if we can manage to reduce the number of failed transactions, all network participants are better off (one of the few times where all incentives are aligned).

Different factions of the community have been advocating for (and against) raising the minimum base fee of the network. You can read more about this in a blog post from last year: https://www.stellar.org/blog/should-we-increase-the-minimum-fee .

I just drafted an alternative to raising the minimum base fee: I put together a CAP that aims at taking a bigger step towards reducing the number of failed transactions on the network while preserving equitable access to the network capacity. You can check out the first draft ---> https://github.com/stellar/stellar-protocol/blob/master/core/cap-0042.md .

Let me know what you think, I am sure I made some mistakes and missed something obvious!

In parallel to this CAP, we're merging "damping logic" into core ( https://github.com/stellar/stellar-core/pull/3280 ) that should help in the short term to mitigate the same type of issue. That short term mitigation is a lot more blunt of an instrument as it does not know whether some arbitrage transactions are more valuable than others, which can in theory penalize the wrong parties.

Onwards!

Nicolas
Hi Nicolas,

This proposal risks significantly weakening the fee auction. For example, if a validator were to nominate a transaction set with no transactions in the default subset, then no fee auction would take place. But perhaps this actually is a feature, not a bug. Should we dispense with the subset notion and just assign a baseFee (or feeCharged) to every transaction? At least for the purpose of surge pricing, it's not obvious to me what the subset-based method adds compared to what I just described. Specifically, in the degenerate case every transaction could be in its own subset anyway, which is also equivalent to what I described above. (I recognize there could be other future uses for the subset mechanism, but if we are choosing that design for future uses then we should be explicit about why.)
If we assign a base-fee to every transaction, we can then implement extremely generic fee policies. As an example that fits into the discussion around arbitrage damping, we could:
- Classify transactions as “not arbitrage” or “arbitrage”
- “Not arbitrage” transactions participate in the fee-auction, but all fees are calculated in advance of nomination—so they all get the same baseFee as would have been calculated by CAP-0005
- “Arbitrage” transactions do not participate in the fee-auction, they always pay their feeBid—so they all get baseFee = feeBid / numOperations, which is what would have been charged before CAP-0005
I’m proposing the above fee policy as a straw-man. It’s an interesting concept that proposals like CAP-0042 have the power to support, although I think my variant makes this quite a bit cleaner. We should try to understand what fee policy will really help advance the goals of the Stellar network to provide equitable access to the financial system.
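To make the straw-man above concrete, here is a rough Go sketch of how a validator could assign per-transaction base fees under such a policy. All names are hypothetical, and the arbitrage classifier is assumed to exist; this is illustrative, not a proposed implementation.

```go
package sketch

// tx is a simplified view of a candidate transaction.
type tx struct {
	feeBid int64 // total fee bid, in stroops
	numOps int64 // number of operations in the transaction
	isArb  bool  // assumed output of some arbitrage classifier
}

// assignBaseFees returns an effective per-operation base fee for each transaction.
// auctionBaseFee is the uniform base fee that a CAP-0005-style auction would have
// produced for the "not arbitrage" group.
func assignBaseFees(txs []tx, auctionBaseFee int64) []int64 {
	fees := make([]int64, len(txs))
	for i, t := range txs {
		if t.isArb {
			// Arbitrage: no discount, pay the full bid (baseFee = feeBid / numOperations).
			fees[i] = t.feeBid / t.numOps
		} else {
			// Not arbitrage: everyone gets the same auction-derived base fee.
			fees[i] = auctionBaseFee
		}
	}
	return fees
}
```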
------------------------
A separate thought: I don’t know if implementations will necessarily “compete on the quality of transaction sets.” I think it’s possible that implementing something like CAP-0042 would lead to a future where every validator constructs transaction sets in the way that best suits them.
Best,
Jon
> Hi Nicolas,
> Thanks for these updates.
> First, a nitpick on the document. The simple summary is pretty complicated, so it should probably be made actually simple. Maybe something like “This CAP changes transaction sets to enable different fee policies for different groups of transactions.”
Yeah I agree. Done.
> Second, two questions/comments about the XDR:
> - Don’t we need to make changes to StellarMessage to support the new GeneralizedTransactionSet for fetching transaction sets?
Yes you are correct. We normally do not include overlay spec in CAPs. I added a mention in the "backward compatibility" section of the doc.
> - TransactionHistoryEntry is pretty wasteful of space this way. I don’t think it really matters since there is only one per ledger, but I figured I would mention it.
This is explained in the "Consensus Value" section. We waste the size of an empty txSet, so 32+4=36 bytes per ledger.
This seemed negligible to me relative to the amount of data per ledger, but maybe other people think it matters? We could change the format of checkpoint files entirely and emit a new type that wraps TransactionHistoryEntry and TransactionHistoryEntryV2 with the extension point in front (a checkpoint may contain both old and new TransactionHistoryEntry). This way we'll waste only 4 bytes per ledger (at the expense of a breaking change for anything that deals with those files).
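If it helps to visualize the wrapper idea, here is a loose Go rendering of what such a discriminated checkpoint entry could look like. The real definition would be XDR in stellar-core's .x files; the types below are placeholders, not actual protocol structures.

```go
package sketch

// Placeholder stand-ins for the existing XDR types referenced in the thread;
// the real definitions live in stellar-core.
type TransactionHistoryEntry struct{ /* ledgerSeq, txSet (legacy), ... */ }
type TransactionHistoryEntryV2 struct{ /* ledgerSeq, generalizedTxSet, ... */ }

// CheckpointHistoryEntry sketches the proposed wrapper: a 4-byte discriminant
// up front selects which variant follows, so each ledger wastes only those
// 4 bytes instead of carrying a 36-byte empty legacy txSet.
type CheckpointHistoryEntry struct {
	V  int32                      // discriminant / extension point
	V0 *TransactionHistoryEntry   // present when V == 0 (legacy entry)
	V1 *TransactionHistoryEntryV2 // present when V == 1 (new entry)
}
```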
> Third, regarding backwards incompatibilities. I don’t think the statement “Users that just bid what they’re willing to spend on fees, will not be effected” is true. I might be willing to pay $10 to do something, but I don’t want to pay that unless I have to. This behavior doesn’t exist anymore: clients that want to pay a low fee will need a dynamic fee policy. Generally, our advice since CAP-0005 has been “bid what you’re willing to pay” but that’s not necessarily good advice anymore.
I expanded on this in a different section, "Design rationale". It all comes down to the "unless I have to", which was and still is pretty vague (in a way, this CAP, by calling the CAP-0005-like behavior a "discount", conveys better what is going on).
I don't think people need complicated fee policies (other than people performing many transactions, but those fall under "advanced users"). A typical user would not set a $10 fee but something like $0.001 (and still probably pay a lot less than that), and only if things keep timing out would they decide to do something else (try again later, try to pay 10x more in fees).
I think it would not necessarily be a bad thing for people to bid closer to what they really want to pay, as it reduces the chance of a fee spike that catches people by surprise (CAP-0005 can cause fees to escalate very quickly).
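A minimal sketch of that "typical user" behavior, assuming a hypothetical submitTx helper that returns a timeout-style error when the transaction does not make it into a ledger:

```go
package sketch

import (
	"errors"
	"time"
)

// errTimeout stands in for "the transaction keeps timing out".
var errTimeout = errors.New("transaction timed out")

// submitWithEscalation starts with a modest max-fee bid and only escalates
// (here by 10x) if submissions keep timing out. The fee actually charged is
// typically much lower than the bid.
func submitWithEscalation(submitTx func(maxFeeBid int64) error, startBid int64) error {
	bid := startBid
	var err error
	for attempt := 0; attempt < 3; attempt++ {
		if err = submitTx(bid); err == nil {
			return nil
		}
		if !errors.Is(err, errTimeout) {
			return err // some other failure; escalating fees won't help
		}
		time.Sleep(5 * time.Second) // try again later...
		bid *= 10                   // ...with a much higher bid
	}
	return err
}
```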
> Overall, I think these changes are a step in the right direction.
> Best,
> Jon
Thanks!
This CAP gives validators control of which fee policy to use for individual transactions included in a ledger.
Validators get to select how to decompose and assign policies.
There is no enforcement at the protocol layer of which fee regime (discounted vs. direct fee) is picked for a given transaction.
As a consequence, with this CAP, a single transaction can be isolated by a validator and be forced to pay for its fee bid.
This CAP aims to correct this by isolating subsets of transactions that can compete on fees without causing overall fees to increase.
Users that just bid what they're willing to spend on fees, will not be effected.
Hello everyone,

I just want to chime in to echo and expand upon some of Leigh's points from a Horizon / ecosystem perspective.

TL;DR: Users have a hard time dealing with fees as it stands today. Bubbling these tx-subset properties out of the network will be critical in keeping users informed, but I'm concerned about the depth to which we're violating abstraction layers. This CAP will have a long tail of downstream effects.
> Validators get to select how to decompose and assign policies.

The nature of SCP means validators could eventually punish "bad actors", but will there be an accessible way for operators to even understand what's going on in the network, à la StellarBeat? To fulfill the CAP's "if certain validators adopt unfair fee policies [...]" security mitigation, Core needs a way to detect abuse. However, even with such a mechanism, it seems to me that it'll be really difficult to distinguish erroneous vs. transient vs. malicious validator behavior (which could be anything from unnecessarily up-charging fees to dropping transactions, etc.)...
> Transactions will never be charged more than their maximum bid [...]
> Users that just bid what they're willing to spend on fees, will not be effected [...]
> If things keep timing out they can decide to do something else [...]

While we do recommend that folks "set the highest fee you're willing to spend," I think many users who set this beyond the default expect it to rarely come into play. This is exacerbated by the fact that fee strategies are often not in the control of individual users -- for example, I can't set a higher fee on an important payment from my LOBSTR wallet -- so the agency in the "you're willing to spend" and "do something else" parts gets diluted.
Basically, if this CAP raises the average paid fee, expect downstream outcry. Even if it doesn't, since there's already a lot of confusion about how fees work (as evidenced by Leigh's subset of doc links), changing that mechanism will require detailed messaging with a long tail of support.

There's a corollary here that Leigh also noted: folks rely on Horizon's /fee_stats to make decisions (~3% of requests). From my understanding, this CAP essentially makes those metrics defunct. Even if a Horizon instance could learn about what its Core is up to, how would it learn about the rest of the nodes to give a global view on fee stats?
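For context, this is roughly how a client might use /fee_stats today to pick a bid. The field names below are a subset of the documented Horizon response and may not match the current API exactly; treat this as illustrative only.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// feeStats captures a few of the fields Horizon's /fee_stats returns
// (values are decimal strings, denominated in stroops).
type feeStats struct {
	LastLedgerBaseFee string `json:"last_ledger_base_fee"`
	FeeCharged        struct {
		P50 string `json:"p50"`
		P95 string `json:"p95"`
	} `json:"fee_charged"`
}

func main() {
	resp, err := http.Get("https://horizon.stellar.org/fee_stats")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var stats feeStats
	if err := json.NewDecoder(resp.Body).Decode(&stats); err != nil {
		panic(err)
	}
	// Under CAP-0042, per-subset fee policies mean network-wide aggregates like
	// these may no longer predict what a given transaction will be charged.
	fmt.Printf("last ledger base fee: %s, p95 fee charged: %s (stroops)\n",
		stats.LastLedgerBaseFee, stats.FeeCharged.P95)
}
```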
This ties closely into the earlier point re: behavior observation -- Core will need to bubble up more network-level details, and this CAP will likely result in a lot more pressure on Horizon to provide ways for users to peek at that info. That's a layer of complexity and detail we've been able to cleanly abstract away for now. I'm concerned about how severely this breaks the Core <--> Horizon <--> SDKs abstraction barriers, and yet it seems unavoidable.
Anyway, I know a lot of that was pretty vague and lacking a call to action, but hopefully I've made some platform concerns clearer.

Cheers,
-George