Ames and the end-to-end principle


Philip Monk

May 8, 2019, 2:40:59 PM
to urbit-dev
Ames has an unusual system of acks and nacks ("negative acknowledgments", but not like TCP's packets of the same name; Ames nacks mean the packet was received but the message resulted in an error).  In brief, each Ames packet of a message should get either an ack or a nack.  In the current system, the nack may include an error message (eg an error code or a stack trace), but this has undefined behavior when the error message exceeds the MTU.  A few proposals have been made to clean this up, including removing error messages, sending the error messages out of band, and removing nacks altogether.  I'd like to explain why we have nacks, and why I think they should remain.

This is based on the end-to-end principle, which is best described in Saltzer, Reed, and Clark's "End-to-End Arguments in System Design": http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf.

This paper is very good and easy to read, so please read it if you're at all curious about this.  The most important line is: "the end-to-end check of the file transfer application must still be implemented no matter how reliable the communication system becomes."

So the basic argument for including end-to-end acks (and by extension, nacks) is that they're necessary for everything except messages whose delivery we don't care about at all.  Thus, for Ames to give the guarantee that "if you give me a payload I will get it to the other side exactly once" isn't useful in itself, because no application cares about that.  Applications either (1) don't care whether the message gets there or (2) care whether the request itself was "completed", in an application-defined sense.

Thus, if you wish to use simple acks as a performance enhancement, that's fine, but it's critical to understand that they're only for performance, not for correctness.  Correctness requires cooperation with the application.  This means we could either punt entirely on correctness (and rely on each application to design an ad-hoc e2e ack system) or provide a unified system for e2e acknowledgments, which must be positive or negative.  Hence, nacks.

(An aside from the paper:  they make the argument that exactly-once processing of packets requires end-to-end acknowledgments and messages that are idempotent at the application level.  While true in their context, this is *not* true for us because the only part of the system that could duplicate the message is the network, and Ames can guarantee exactly-once delivery because both ships are transactional (so if they give an ack, we know for sure they have received it permanently and won't forget about it).)


dpc

May 8, 2019, 5:28:30 PM
to urbit-dev
First thought: Since this is a performance optimization question - is it necessary to wait for some ecosystem to grow, to collect some real-life metrics?

Second thought: Could the request just carry a bit or two, to tell the receiver whether an ack/nack is expected?

Third thought: The idea from the second thought is probably needless. Each app receiving a message most probably knows whether the other side needs a response (since it is probably the same app/protocol in the first place). So a unified built-in "framework" should be able to handle all cases, with a potential escape hatch for apps that want to do it in an entirely custom way.

Very interesting subject.

fyr

May 8, 2019, 6:07:05 PM
to urbit-dev
Nacks are good; unbounded-size nacks do sound dubious: at minimum we want "a bit or two" in the form of the %term error-type constant, which in turn can be capped at 1KB or w/e. As for the potentially heavy (list tank), how deterministic do these have to be? If it's acceptable to have a per-neighbour empirical MTU, then if it fits, it fits; otherwise truncate each "line" to some number of characters and replace the middle of the list with a [skipped n...] message, as we do with crashes; if it still doesn't fit, insert a "Contact ~me for error details, Ray ID 0wburg.asdfj" placeholder, I guess. (If we can't derive these per neighbour, doing the above with a single least-common-denominator constant might suffice; it'd just be a squeeze.)

Philip Monk

May 8, 2019, 6:27:23 PM
to fyr, urbit-dev
> First thought: Since this is a performance optimization question - is it necessary to wait for some ecosystem to grow, to collect some real-life metrics?

The question of whether nacks should exist is a correctness one.  That's why the end-to-end (n)acks would be what you implement, and then you decide if you want to add intermediate acks (eg an ack that says "I've received this, but I might not succeed in userspace") according to real-life metrics.

> If it's acceptable to have a per-neighbour empirical MTU, then if it fits it fits, otherwise truncate each "line" to some number of characters and the list middle with a [skipped n...] message as we do with crashes,

I don't like the idea of changing what gets sent based on the MTU.  If there's anything a networking stack should do, it's to stop the userspace code from worrying about MTU.  I figure either you say that you can only send a fixed-size thing like a flag or a number or term (and hope they don't abuse that by sending a very large number/term) or you support arbitrary-length nacks.  Arguably, nacks should be full marked messages, not constrained as they are now to a list of tanks.

Anton Dyudin

May 8, 2019, 6:40:16 PM
to Philip Monk, urbit-dev
"term with limit of 1KB or 256b or sth" is the minimum yup

I'm guessing the reason this came up is splitting nacks across multiple packets doesn't work? And would require some kind of ack-the-ack bootstrapping for correctness. If you could just stuff an arbitrarily large noun into a UDP socket natively, presumably /ames/ wouldn't need to care about MTU either?

(I guess one option here is the Out Of Band thing of sending a "true nack" packet with the term, and then following up with a large "forward" packet with all the details that didn't fit, which the receiver waits on for a bit and then just truncates the nack to term-only; this gets super messy however)

Philip Monk

May 8, 2019, 6:46:19 PM
to ohA...@gmail.com, urbit-dev
> I'm guessing the reason this came up is splitting nacks across multiple packets doesn't work?

That's correct.  Ted is working on some improvements to Ames and we realized this stuff was never properly thought through.  Ted's writing up our proposal, but it's similar to your out-of-band proposal.  The old system acked acks if you weren't careful, and all nacks were a space leak because you needed to keep them around forever.  Ted's proposal solves both of those in what I think is a fairly clean way.



fyr

May 8, 2019, 6:53:24 PM
to urbit-dev
Hah, I definitely remember a "once you get message n+1 you don't need to keep details for message n" system being specified, but perhaps never implemented. (Or n+5, where 5 is the number of packets you let yourself send in parallel / be behind by? Some constant such that you only send a message if you've received all the acks that many back, communicating this implicitly to your interlocutor)



Anton Dyudin

May 8, 2019, 6:56:36 PM
to urbit-dev
Oh I guess that code existed and then got ripped out in the "all pokes go on different channels" update? But yeah you can probably rebuild it by adding explicit "and I've gotten these acks" metadata, whether tacked on to existing messages or sent separately.


Ted Blackman

May 8, 2019, 9:01:55 PM
to ohA...@gmail.com, urbit-dev
Calling it my proposal is generous, but I can at least explain the idea.

There will be a clear layering between packets and messages. A packet can be either:
a) a packet to forward to someone else
b) an ack / nack
c) a message fragment

Most packets are encrypted, and the encryption is performed per-packet, not per-message. This closes a denial of service attack vector involving spoofing plaintext message fragment metadata. Specifically, an eavesdropper could send a forged packet based on a real packet but with the fragment number changed. This would cause the message to be garbled on receipt, in turn causing a nack. Encrypting each packet separately fixes this problem, since we can encrypt the metadata along with the payload.

A message-fragment packet includes (encrypted) metadata containing its message sequence number, fragment number, and number of fragments for this message. Once all the packets from a message have been received and decoded, their fragments are bitwise concatenated and deserialized (using +cue) into a full message. A single-packet message isn't special-cased; it just has a :num-fragments of 1, so the conversion from packet to message is trivial and happens immediately.
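
To make the fragment bookkeeping concrete, here's a minimal Python sketch of the reassembly path (the Fragment shape, the dict buffer, and the cue stub are illustrative assumptions, not the actual Hoon):

    from dataclasses import dataclass

    def cue(blob):
        return blob  # stand-in for Hoon's +cue noun deserializer

    @dataclass(frozen=True)
    class Fragment:
        message_num: int    # message sequence number
        fragment_num: int   # 0-based index of this fragment in the message
        num_fragments: int  # total fragments in this message
        payload: bytes      # decrypted fragment contents

    # buffer maps message_num -> {fragment_num: payload}
    def receive_fragment(buffer, frag):
        slots = buffer.setdefault(frag.message_num, {})
        slots[frag.fragment_num] = frag.payload
        if len(slots) < frag.num_fragments:
            return None  # message not yet complete
        # all fragments present: concatenate in order and deserialize;
        # a single-packet message (num_fragments == 1) completes immediately
        blob = b"".join(slots[i] for i in range(frag.num_fragments))
        del buffer[frag.message_num]
        return cue(blob)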

An ack packet acks a message-fragment packet. It includes (encrypted) data containing the message and fragment number of the packet it's acking, along with a bit indicating whether it's a positive or negative acknowledgment. Once a ship has received acks for every packet within a message, it considers the recipient to have completed receiving and processing the message. If none of the ack packets have their "nack" bit set, then we notify the local vane that had originally asked us to send the message that the message has been successfully processed by the remote.

If the message failed, which is indicated by the reception of a nack packet, the sender waits for exactly one message from the recipient containing an explanation for the nack. This message will be sent on bone 0 (bones are sort of like Unix ports), which is reserved for this purpose. This "nacksplanation" message contains the bone and sequence number of the failed message and some explanation for the failure, probably as a +tang, which we usually use to represent stack traces. Once the sender has received the nacksplanation message, it will notify the local requesting vane that the message failed, passing along the explanation.

This arrangement also removes the space leaks from the nack system. The receiver ship only has to hold the nack until it has received an ack on the nacksplanation message it sent. At that point, it knows that the sender knows about the nack. Since Ames is forever, this means post-nacksplanation, if we ever hear a repeated packet from the message that we nacked, we don't have to do anything in particular with it; we know the other ship knows we nacked that message, so no new information is required from us.

We could drop the repeated packet, but actually, nothing bad happens if we just ack it positively, since the other ship has definitely already heard our nack. Positively acking a packet is what we do in general when we hear a packet from a previously acked message, so this means our normal response to ack a dupe is fine.

Since we don't need to behave differently when receiving a dupe of a packet we've nacked, we don't need to remember that we nacked it. We don't need to store nacks indefinitely; we only need to store a nack until we've confirmed our nacksplanation message was received in full. This means no more space leak.
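
In other words, the receiver-side bookkeeping is roughly the following (a Python sketch; the helper names are assumptions, not the vane's real interface):

    def send_nacksplanation(msg_num, explanation):
        ...  # stub: enqueue an ordinary Ames message on bone 0

    def send_positive_ack(msg_num):
        ...  # stub: emit an ack packet

    # each nack is held only until the nacksplanation we sent for it
    # has been acked as a full message
    pending_nacks = {}

    def on_message_failed(msg_num, explanation):
        pending_nacks[msg_num] = explanation
        send_nacksplanation(msg_num, explanation)

    def on_nacksplanation_acked(msg_num):
        # the sender now knows both that msg_num failed and why,
        # so we can forget the nack entirely: no more space leak
        pending_nacks.pop(msg_num, None)

    def on_duplicate_fragment(msg_num):
        # safe even for a formerly nacked message: the peer has already
        # heard our nack, so a positive ack conveys no new information
        send_positive_ack(msg_num)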

The last hairy edge case we've thought about is: what happens if the nacksplanation message gets nacked? The answer to this is "never nack a nack." In practice, this means if I crash while processing a nacksplanation, I just ack it positively anyway. As a courtesy I should tell my sender vane that I crashed while listening to the nacksplanation, but that's not critical for maintaining the network invariants.

~rovnys-ricfer

dpc

May 9, 2019, 1:15:06 AM
to urbit-dev
I have questions.

Should (in the future) urbit planets be able to do stuff like real-time video streaming? (low latency, no acks necessary, reorderings OK?). That would suggest that some ability to escape nacks/acks etc is actually necessary.

Are things like connection latencies, window sizes, network speeds etc. to be considered? Basically - do we want to consider performance characteristics at all?

Is it necessary to have only one common protocol, or would it be possible to have more? Is layering (think TCP/UDP on top of IP) possible? Could more protocols be added in the future?

Is this whole exercise reinventing something that existing protocols might have solved already?

Philip Monk

May 9, 2019, 1:43:53 PM
to dpc, urbit-dev
> Should (in the future) urbit planets be able to do stuff like real-time video streaming? (low latency, no acks necessary, reorderings OK?). That would suggest that some ability to escape nacks/acks etc is actually necessary.

I'd say yes, but not over Ames.  The interface that ames.hoon gets from ames.c is basically a naked UDP socket, so you could create another vane for a protocol that doesn't need Ames-level guarantees.  You probably would want to share the routing system, though.  Of course, for many applications, WebRTC would let you skip Arvo altogether (you would just use Arvo for finding your peer, etc).

> Are things like connection latencies, window sizes, network speeds etc. to be considered? Basically - do we want to consider performance characteristics at all?

Yeah, we do have some fairly reasonable congestion control in current Ames.  There's a lot of tuning that can and should be done.  Some of that is generic tuning the same as TCP, etc.  Some would be specific to our use — eg if acks are end-to-end, congestion control maybe should be calculated from transmission time only, not computation time.

> Is this whole exercise reinventing something that existing protocols might have solved already?

Some of this has prior art, and we should certainly steal as much as we can from that.  Our routing system is a little unique, but it's not too dissimilar from a standard system based on root nodes combined with a mostly-external PKI.

The part of Ames that is completely unique, as far as I'm aware, is the fact that we can trust each of the nodes to never forget anything it's acknowledged.  No other protocol can take advantage of that fact, so we're exploring a new part of the idea maze there (I think; if anybody's studied these I would be very interested in reading about it!).  The main thing we get from this is exactly-once messaging, but also we don't need to have an idea of a "session" or being "disconnected", which simplifies things for both the vane and userspace.  There's potentially more we could do cryptographically (like a ratchet algorithm for all messages, giving forward secrecy), but we haven't pursued that yet.

Ted Blackman

May 9, 2019, 1:48:19 PM
to dpc, urbit-dev
Different networking protocols make different tradeoffs to achieve different properties. Ames is all the way on the reliability end of the spectrum, whereas a video streaming protocol would optimize for latency. TCP is closer to the Ames side of things, but Ames takes the guarantees even further by encrypting and signing packets and providing exactly-once messaging by requiring permanently persisted sequence numbers. Another way to think of this is that Ames improves upon the guarantees of TCP by making them permanent instead of scoped to a (temporary) connection.

Ames isn't well-suited to streaming video or other low-latency applications, for much the same reasons TCP isn't great for that. Ames is probably a bit worse than TCP because of our transactionality, which requires a disk write before acking.

It would be cool to see an Azimuth-based low-latency streaming protocol. It would be cool if that protocol could take advantage of Ames' decentralized routing. Maybe it could even upgrade from Ames, similar to how a websocket "upgrades" from an HTTP connection. One way to start would be to take the Ames packet header and set one of the three unused bits to indicate it's a different protocol. Then Ames and the streaming protocol could share a UDP port.

The other option would be to use an off-the-shelf streaming protocol and just use the routing and authentication information from Ames. Plenty of room for different approaches here.

~rovnys-ricfer

Ted Blackman

Jun 14, 2019, 12:26:20 AM
to dpc, urbit-dev
Let's think through the new Ames nack model more thoroughly.

Every flow is a directed link between two ships.  The flow's
"subscriber" can send requests to the "publisher", which can stream down
subscription updates.

The publisher will either ack or nack a request.  If the request is
nacked, the publisher will send a "naxplanation" message to the
subscriber as a subscription update on the dedicated nack flow (bone 0).
This naxplanation will be received by the subscriber, whose Ames will
relay the naxplanation to the local vane that made the failed request.

A subscription update cannot be nacked.  The subscriber's Ames acks a
subscription update message unconditionally and then relays the message
to its local vane.  The publisher's Ames uses the ack internally for
message ordering and congestion control, but it does not relay the ack
to the vane that published the update.

This arrangement helps enforce the CQRS semantics of these flows.  If an
app wants to ensure its subscribers have received a particular piece of
data, the application-level protocol should require each subscriber to
send some request in response to receiving the data in question.  Ames
won't do this for you; if an app wants this feature, it can build it
itself.
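
A sketch of what that application-level pattern could look like (Python; entirely illustrative, not an Ames or Gall API):

    def apply_update(update):
        ...  # stub: hand the data to the application

    def on_subscription_update(flow, update):
        apply_update(update)
        if update.get('wants-receipt'):
            # the confirmation is itself a request on the forward
            # direction, which the publisher can ack or nack normally
            flow.send_request({'receipt-for': update['id']})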

One could imagine allowing nacks for subscription updates, but the big
problem is: how should the publisher handle a nack from a subscriber?

In contrast, with requests the situation is clearer.  If a request fails
and generates a nack, the requesting app has some options, depending on
the situation.  Repeating the request a couple times might be worth
trying, or simply accepting that our peer refused to do what we wanted
them to do might be best.

If a subscription update fails, though, what options does the publisher
have?  Skip this update and publish the next one?  That could
potentially lead to data inconsistencies if the app was hoping to rely
on Ames's exactly-once delivery.  Re-publishing the update, possibly
just to one subscriber out of many, potentially bogs down the other
subscribers.

We don't want the subscriber to be able to exert too much control over
the publisher.  It can make requests, which can be nacked, but it doesn't
get to block the flow of data by recalcitrantly refusing to accept a
subscription update.

Both subscriber and publisher need to maintain some nack state until
both sides know all the relevant information has been exchanged.

The subscriber Ames needs to receive both a message-nack packet and a
naxplanation message before it can relay the error to its requesting
vane and forget about the nack.  The publisher Ames needs to send its
naxplanation message and receive the message-level ack on the
naxplanation before it can forget about the nack.

These constraints in turn mean the subscriber must only emit the
message-level ack on the naxplanation message once it's heard the nack
packet on the original message.  This way when the publisher hears the
message ack for its naxplanation, it knows the subscriber has heard both
the nack and the naxplanation.
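
A Python sketch of that subscriber-side gating (the state shape and
helper names are assumptions):

    def ack_naxplanation(msg_num):
        ...  # stub: message-level ack on the nack flow

    def notify_vane(msg_num, tang):
        ...  # stub: relay the error to the requesting vane

    # per failed message, wait for BOTH the nack packet and the
    # naxplanation message before acking the naxplanation
    pending = {}  # msg_num -> {'nacked': bool, 'explanation': None}

    def on_nack_packet(msg_num):
        st = pending.setdefault(msg_num, {'nacked': False, 'explanation': None})
        st['nacked'] = True
        maybe_finish(msg_num, st)

    def on_naxplanation(msg_num, tang):
        st = pending.setdefault(msg_num, {'nacked': False, 'explanation': None})
        st['explanation'] = tang
        maybe_finish(msg_num, st)

    def maybe_finish(msg_num, st):
        if st['nacked'] and st['explanation'] is not None:
            ack_naxplanation(msg_num)  # publisher reads this as "heard both"
            notify_vane(msg_num, st['explanation'])
            del pending[msg_num]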

Once the publisher has heard the naxplanation ack, if the publisher
hears a duplicate message fragment from the original failing request, it
doesn't need to remember to respond with a nack.  It doesn't strictly need
to do anything, because it knows the subscriber knows both that the
request failed and why the request failed.

This means the publisher can release all this negative energy and forget
the nack ever happened; it can cheerily respond with a positive
acknowledgment if it hears a duplicate message fragment from the
original failing request.  The subscriber will ignore this duplicate
ack.

There are some things I don't like about this design.  The fact that
there is any cross-talk among flows at all at this layer is a smell,
since they should really be entirely independent.  Maybe there's a way
to move the naxplanation back into the original flow to fix this.

My other concern is that this seems finicky.  Lots of behaviors must be
delicately interwoven to achieve correct semantics.  This means my
certainty in having thought through every contingency can't be as high,
which I find unpleasant.

I suspect there is a point somewhere in the design space that solves
these problems in a way that's easier to understand and verify.  Maybe
we're trying too hard to avoid bundling the naxplanation into the
message ack, and we should just bite the bullet and have multi-fragment
nacks with some internal sequence numbers.  Or maybe we're trying to
squeeze too many concepts into a bone, and we should instead have a
triple of [bone, request/subscription-update bit, main/nack bit] on each
message.

For now, I'm planning on implementing the proposal here, because it
should work.  Open to suggestions for improvements.


~rovnys-ricfer




Joe Bryan

Jun 14, 2019, 2:32:25 AM
to Ted Blackman, dpc, urbit-dev
I too find the cross-flow dependency concerning. I wonder if it's possible to recover the independence of flows, while still plugging the nack space leak. Here's a sketch:

Old ames nack semantics are as follows: if an "end-application" (specifically, a vane) fails to process a message, ames sends and saves a negative acknowledgement. If it receives a duplicate of this failing message, it resends exactly the same negative acknowledgement. Per this thread, there are two problems with this model: nacks must fit in the MTU, and ames must save them forever. The "nacksplanation" flow fixes both of these problems, but introduces the aforementioned cross-flow dependency.

My proposal is simply "y not both?". First, require that nacks fit in the MTU. The MVN (minimum viable nack) is just a bit; the maximum could be something like the first/last 5 lines of the stack trace. The nacking ames can decide how much to send and save. Second, have a dedicated "ames diagnostic flow" that can be used to request a full naxplanation. Upon successfully delivering a nacksplanation (of arbitrary size), ames can forget about the nack, ack future duplicate failed messages, etc.
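
For concreteness, a toy sketch of this two-tier scheme (Python; the 1KB budget, the naive head/tail truncation, and all helper names are illustrative assumptions):

    NACK_BUDGET = 1024  # assumed cap on the inline explanation, in bytes

    def build_nack(trace_lines):
        # the MVN is just a bit; optionally include a bounded snippet
        # (naive head/tail truncation, fine for a sketch)
        snippet = "\n".join(trace_lines[:5] + ["[skipped ...]"] + trace_lines[-5:])
        return {"ok": False, "hint": snippet.encode()[:NACK_BUDGET]}

    def on_naxplanation_request(nack_store, msg_num, send_message):
        # the peer asked on the dedicated diagnostic flow; answer with a
        # normal (fragmented, acknowledged) ames message of arbitrary size
        full = nack_store.get(msg_num)
        if full is not None:
            send_message(msg_num, full)

    def on_naxplanation_delivered(nack_store, msg_num):
        # the full explanation was acked end-to-end: forget the nack
        nack_store.pop(msg_num, None)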

In other words, remove the blocking cross-flow nacksplanation requirement, but preserve the pattern that delivers arbitrarily large nacksplanations and plugs the space leak, by making it optional. If ames trusts its counter-party to cooperate in plugging the leak, it can helpfully deliver an early (partial) nacksplanation in the nack itself; if not, it can just deliver a nack bit (which is not an onerous leak). The ames we ship should in fact always request a nacksplanation to clear the nack from its peer's state, but the protocol is simpler (and flows are not mutually dependent) if this action is formally optional.

Thoughts?

Also, this suggestion has been vague with regard to packet-level vs message-level nacks (specifically, consider the difference between ames nacking a packet which is some intermediate fragment of a larger message, and the "end-application" nacking the entire message). What is a situation in which this should happen, and does it change anything about this suggestion?

~master-morzod

Anton Dyudin

Jun 14, 2019, 4:13:06 AM
to Joe Bryan, Ted Blackman, dpc, urbit-dev
On the latter front, imo there is clear space for both: a generic packet nack takes issue with some aspect of the individual packet (e.g. rate-limiting), while a message-level nack can only occur in response to the final packet upon assembling the whole message (and repetitions thereof).


Philip Monk

Jun 14, 2019, 7:29:10 AM
to ohA...@gmail.com, Joe Bryan, Theodore Blackman, dpc, urbit-dev
I agree that subscription updates should always be positively acked. If you want to retry in some sense, that should be handled by the subscriber's gall/ames after acking. The publisher shouldn't ever deal with one specific subscriber except possibly on initial subscription. I'm not sure if this is true of all backward flows or just gall subscriptions.

I don't view this as cross-flow dependency at all.  Flow 0 isn't a flow. You can't send a message on flow 0. There's no user-facing ordering guarantees on flow 0. No nacks on flow 0. We just use the number 0 to represent "not on a flow". Perhaps we would be better served by making a flow (unit @ud).

We stopped trying to send multipart nacks because we found it impossible or incredibly complicated to do that consistently and reliably. It violates the maxim "never ack an ack".

Nothing user-facing exposes the fact that an MTU exists, and I think that's very important. I don't want to explain MTU to any Urbit developer ever - that's a layering violation.

Overall, I don't think our approach is that finicky compared to any other design we've considered that gives similar guarantees (mostly, multipart explanations of nacks).

Joe Bryan

Jun 14, 2019, 2:07:22 PM
to Philip Monk, ohA...@gmail.com, Theodore Blackman, dpc, urbit-dev
In case my email was unclear, what I'm proposing preserves all of a) no multipart nacks; b) never ack an ack; c) no user-facing MTU exposure; and d) nacksplanations of arbitrary size. I'm merely suggesting that nacksplanations (and their resolution of the nack space leak) should be "just another flow" layered over core ames primitives, not a blocking, stateful side-channel that's "a flow but not a flow". Ames itself would enforce the nack-proper size limits, by either just sending a nack bit, or sending a deterministically-derived subset of the full nacksplanation. This choice could be made universally, per ship by reputation, per ship-class by durability expectation (ie, comets and moons just get nack bits), etc.

My tangent about packet vs message nacks could be restated as a couple questions. Does this proposal have any impact on them? And what are the actual circumstances in which a packet-that's-not-the-last-fragment-of-a-message should be nacked? Anton's suggestion of rate-limiting is interesting -- how do current variants of ames handle such a nack?

~master-morzod

Philip Monk

Jun 14, 2019, 2:28:43 PM
to Joe Bryan, ohA...@gmail.com, Theodore Blackman, dpc, urbit-dev
> In case my email was unclear, what I'm proposing preserves all of a) no multipart nacks; b) never ack an ack; c) no user-facing MTU exposure; and d) nacksplanations of arbitrary size. I'm merely suggesting that nacksplanations (and their resolution of the nack space leak) should be "just another flow" layered over core ames primitives, not a blocking, stateful side-channel that's "a flow but not a flow".

I think this is almost exactly what Ted originally described? Your "just another flow" has the same restrictions as Ted's flow 0 (no user messages, no nacking a naxplanation, etc). Whether or not it's a "real flow" doesn't matter.

You suggest perhaps including a snippet of the naxplanation in the original nack, but that seems like unnecessary complication for a dubious optimization, not a 1.0 feature.

The other thing you add is that the recipient of the nack must request a naxplanation. First, remember that this "cross-flow dependency" is still within a single peer, and it doesn't depend on the order of the messages. If their Ames doesn't send a naxplanation for every nack, it's broken. We could eventually add another bit in the nack to say "don't expect a naxplanation", but that's a later optimization. Second, this is a spurious request if we must always send it; it just complicates and slows it down. Third, this means the sender of the nack must hold onto the naxplanation until it's requested - which may be never. Another space leak.

> Does this proposal have any impact on them? And what are the actual circumstances in which a packet-that's-not-the-last-fragment-of-a-message should be nacked?

Should have no effect. A fragment ack can never be negative; only a message ack (the last ack) can. We enforce this in new Ames.

> Anton's suggestion of rate-limiting is interesting -- how do current variants of ames handle such a nack?

Not well. I think maybe it ignores the negativity? Would have to check. That seems like the wrong place to do rate limiting. Start dropping packets if you want them to back off.

Ted Blackman

Jun 14, 2019, 4:56:06 PM
to Philip Monk, Joe Bryan, oha...@gmail.com, dpc, urbit-dev
I agree with everything Philip wrote.

I do think the flow cross-talk feels a little wrong, because it introduces an opportunity for one flow to slow down other flows by triggering naxplanations.

I think this problem could be solved by having one naxplanation channel per flow, instead of one naxplanation channel total between two ships. That way there's absolutely no cross-talk between flows, and none of the other properties of the system would be disturbed.

I think the only downside would be an extra bit of size per bone, which is negligible.

Right now each new flow increments by two in bone space, so that the first bit (LSB) of a message's bone determines request vs. subscription update. I propose incrementing by four, so that the bone's second bit determines whether or not the message is a naxplanation.

When sending, we'd (mix 1 bone) to set the forward/backward bit, and we'd also (mix 0b10 bone) to set the naxplanation bit.
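
In Python terms (mix is XOR in Hoon; which polarity of the low bit means "request" is an assumption here):

    DIR_BIT = 0b01  # request vs. subscription update
    NAX_BIT = 0b10  # naxplanation flow vs. normal flow

    def new_bone(counter):
        # allocating bones four apart leaves the two low bits as flags
        return counter * 4

    def flip_direction(bone):
        return bone ^ DIR_BIT  # (mix 1 bone)

    def to_naxplanation(bone):
        return bone ^ NAX_BIT  # (mix 0b10 bone)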

I need to think through this a little more to make sure those bits compose properly, but if anything, the issue would be allowing representation of impossible "naxplanation requests" (we only allow naxplanations as subscription updates). This would not be a serious problem in practice, just an inelegance.

We should also remember that we don't have to use the bone bit twiddling for these decisions. We could have separate, explicit metadata. I like that the bone twiddling is very efficient in space and computational complexity, but we could instead use more explicit metadata as scaffolding while building the vane, and consolidate into the numeric representation once it's settled.


~rovnys-ricfer

Ted Blackman

Jun 14, 2019, 5:00:07 PM
to Philip Monk, Joe Bryan, oha...@gmail.com, dpc, urbit-dev
Oh, there is a complication there: we'd need to have a separate packet pump for each naxplanation flow, so they would really be separate flows in some sense. There'd be cross-talk between a normal flow and its naxplanation flow, but not among normal flows.


~rovnys-ricfer

Joe Bryan

Jun 14, 2019, 5:45:16 PM
to Ted Blackman, Philip Monk, oha...@gmail.com, dpc, urbit-dev
I've done a poor job explaining my proposal. Sorry about that! Let me try again:

I'm specifically suggesting that nacksplanations *not* be nacks, but simply be diagnostic information sent over another flow. This flow would be a normal ames flow, with the usual semantics around ordering, acknowledgements, etc. The current nacksplanation proposal has the downside of requiring ames to support two different kinds of flows, with different semantics for each. I'm claiming that we can have one flow type, with straightforward semantics, and just layer a diagnostic channel over the top. This diagnostic channel would support arbitrarily sized nacksplanation messages (as normal, fragmented, acknowledged ames messages), providing additional context for nacks and plugging the nack space leak without formal, blocking cross-talk between flows.

The downsides are that the nack space leak would still be formally present and practically possible (if our peer's ames implementation does not in fact request nacksplanations), and that the end-to-end performance of delivering fully-contextualized negative acknowledgements to end-applications would be worse. But the upsides are a much simpler conceptual model, and, I believe, a much simpler implementation.

~master-morzod

Philip Monk

Jun 14, 2019, 6:32:40 PM
to Joe Bryan, Theodore Blackman, ohA...@gmail.com, dpc, urbit-dev
Perhaps we've done a poor job of explaining our proposal, because I think what we're proposing is basically the same thing except we require this diagnostic information and therefore block on it. I may have overemphasized how the naxplanation flow is "not a flow". It uses the exact same machinery as flows and is implemented on top of flows. It just has its own flow for communicating this "diagnostic information"/"naxplanation". When I say it's "not a flow" I just mean it uses a subset of its functionality (eg we shouldn't ever give a nack to a diagnostic data message, we shouldn't let userspace send a message on this flow, etc).

As for why you want this information to be required:

- If you don't require this information, you make it impossible to properly implement the protocol without introducing a potential space leak (or throwing away all diagnostic data, or having diagnostic data be inconsistently available). You also have to think through two cases: whether or not they request the diagnostic data. Cyclomatic complexity is bad!

- You need to block on this information because you want to provide this information to the calling vane. And you can't give to the calling vane a positive ack to message n+1 before a negative ack to message n, or else you've violated the fundamental ordering guarantees of a flow.

I think requiring it significantly simplifies the model.

Ted Blackman

Jun 14, 2019, 7:10:34 PM
to Philip Monk, ohA...@gmail.com, dpc, urbit-dev, Joe Bryan
Joe, your clarification conforms to my earlier understanding of your proposal. Philip beat me to the punch in responding to it, but I'll include my (similar) response since it includes some details of my thought process on this issue. This email also includes several more speculative ideas, for which I am not currently advocating.

Philip, I'm curious about your thoughts on having one naxplanation flow per normal flow to prevent normal flows from blocking each other due to nacking.


I see the appeal of making the naxplanation formally optional, but I don't think that optionality can easily be made real. First, as you mention, we would need to send a naxplanation for every nack to avoid the space leak. Second, we'd need to send just the short inline naxplanation to the client vane when we receive a nack. The vane would have to make a separate naxplanation request afterward to get the full naxplanation. Or we rely even more heavily on the assumption that every nack gets naxplained and wait and deliver the full naxplanation — but then we're back to the current design, effectively abandoning all pretense of naxplanation optionality.

The thing is, in order to lose the space leak, we somehow need to ack the nack. There are various approaches to this, but the one Philip and I have taken so far is to trigger a whole, real Ames message for the naxplanation, which then gets acked using the normal Ames protocol. It's only the Ames internal state and logic that knows about naxplanations; the wire protocol doesn't know they exist. Since we can rely on the normal Ames acking, message ordering, and whatnot for the naxplanations, we can treat the ack on the naxplanation message as an ack on the nack without introducing new primitives into the protocol or doing something half-baked, probabilistic, or optimistic for clearing nack state.

Having a completely separate flow for diagnostics is an interesting idea. I like that it would seem to reduce the amount of complex machinery needed for nacking by kicking naxplanation out of the protocol into a higher layer.

I'd be inclined to take this even further. What if naxplanation is a per-application feature that not all applications implement? Here's what that would look like (I don't think this ends up working out, because the only way to get rid of the space leak is to somehow ack the nack, even if the nack packet doesn't contain the naxplanation):

A message ack packet would then contain a (unit @tas), or maybe a (unit path), which if nonempty indicates a failure with a tag indicating the reason — or maybe it's cleaner as just a bit, and we don't even try to squeeze the reason into this field.

This message nack gets relayed to the requesting vane immediately, and Ames has no naxplanation machinery at all. It does have to permanently store the message sequence number of the failing message, since we don't know which of our acks the peer has received.

Interestingly, this suggests that some sort of packet like "I've received acks through message 17" would let the other side delete nacks up through that message sequence number. One way of solving the nack space leak would be to emit one of those packets every, say, hundred messages, as a courtesy — or we could even require it in the protocol. It feels ugly, though, I think because it's a very specific, somewhat heuristic piece of information that doesn't fit neatly with the rest of the protocol.

If there was a hard limit on the number of messages, say, 100, that could be sent at any given time, Ames could take advantage of that by deleting nacks up through message 337 when we receive the first packet from message 437.
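
A toy sketch of both variants (Python; WINDOW and the handler names are hypothetical):

    WINDOW = 100  # assumed hard cap on messages in flight per flow

    def on_acks_through(nack_store, seq):
        # explicit variant: peer says "I've received acks through seq"
        for n in [k for k in nack_store if k <= seq]:
            del nack_store[n]

    def on_first_fragment(nack_store, msg_seq):
        # implicit variant: hearing message msg_seq under a hard window
        # implies the peer has our acks through msg_seq - WINDOW
        on_acks_through(nack_store, msg_seq - WINDOW)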

Maybe we can build a nack space reclamation system as part of a flow reclamation system. Every nack is on a flow, after all, and we probably already have a problem with flow proliferation causing a space leak. What if every flow timed out after a certain amount of time, and there was a handshake for closing a flow? This would be an Ames-internal piece of functionality, opaque to client vanes.

When no messages have been sent or received on a flow for a certain amount of time, Ames sends a packet saying "I'm closing flow x after having sent message y and received message z". When we get an ack on this packet, we consider the flow closed. (Are there problems with this ack not getting acked? I suspect there would be ...)

Alternatively, the flow gets canceled every hundred messages, resetting sequence numbers and nacks. It seems theoretically possible to me that we could do that while maintaining exactly-once delivery, since it would be a controlled demolition that we could synchronize with the peer. Or maybe this just means message sequence numbers roll over after 99, so we treat the sending and receiving as something like circular buffers, where nacks get deleted automatically as the thing rolls around. The problem with this arrangement is that we could never have more than 99 messages in flight per flow. This strikes me as potentially having poor scaling properties as network bandwidth-delay products continue to increase.

There's something strange here where the subscriber is the only one who has a duct associated with a flow. The publisher only has a bone for each of its publications, and its clients maintain the semantic mappings between bone and publication.

Intuitively, I'd imagine the side with the duct would be the one who could cancel a flow. That intuition might be wrong, though, since we really want the publisher to be the one with control over the flow. I guess the exception to that rule is that in most pub-sub systems, a subscriber can cancel its subscription. We don't seem to have a way to do this; every subscription is forever — well, not in Gall, but in Ames this is true.

Maybe we should use this observation to construct a system where a subscriber can cancel its subscription (by duct), and when it does so, the subscription's bone, sequence numbers, and nacks are all forgotten. That could give us more options for how to deal with naxplanations, since space leaks would generally be temporary, or at least limited to bones that stay open forever.

We've talked about having an Arvo-wide memory reclamation event, triggered by the runtime under memory pressure conditions. Maybe this event could trigger Ames to cancel and restart its live subscriptions, clearing any accumulated nacks.



~rovnys-ricfer



Philip Monk

Jun 14, 2019, 8:47:41 PM
to Theodore Blackman, ohA...@gmail.com, dpc, urbit-dev, Joe Bryan
> Philip, I'm curious about your thoughts on having one naxplanation flow per normal flow to prevent normal flows from blocking each other due to nacking.

Conceptually, I think you're right that's cleaner. The oob message really is per flow.

Usually in pubsub you want both sides to be able to cancel a subscription. In fact, an intermediary often can cancel it as well.

The problem with throwing away flow state and reusing flow numbers is you need to know how to respond to packets that got stuck in the proverbial Singapore and may arrive minutes or hours later. So, you can't reuse flow numbers unless you include a separate nonce per iteration of the flow - but at that point you may as well just get a new flow number.

If you view "closing a flow" as an explicit acknowledgment that they've received all the acknowledgments for all the packets they've sent on that flow, then you can safely throw away all flow state. If you later receive a packet on that flow (ie a flow < greatest flow number but with no flow state), you can safely drop it. This doesn't truly kill the flow since it can't be reused, it just marks it with an implicit tombstone that it's useless now. We haven't properly explored the implications of ending a flow, and I'd rather not do so because it's a whole mess of special cases. Anything that deals with flows has to handle the possibility that the flow might not exist.

Formally, of course, the nack packet isn't strictly required. You could just send fragment acks and positive message acks in-band and naxplanations out of band. A naxplanation implies a negative message ack. In practice, I think this would screw up naive congestion control, and it would unnecessarily complicate the receiver logic by inverting the layers: the naxplainer layer would trigger actions at the packet layer, which is upside down.


Ted Blackman

Jun 18, 2019, 2:39:58 PM
to Philip Monk, ohA...@gmail.com, dpc, urbit-dev, Joe Bryan
Some updates on this question.

First of all, if we receive a naxplanation message but haven't received a message nack packet, we should treat the naxplanation as a message nack and not wait for the actual message nack packet, which is now redundant and should be ignored if received later.

This way we maintain the semantics of immediately acking a "subscription update" message, without waiting for confirmation of processing. Naxplanations are a form of subscription update, so we shouldn't have to wait.

Most of this system compiles now.

Now I'm thinking about how local vanes should receive "request" messages, which are identified only by bone, not by duct. The two options I can think of are:
1) Ames sends the bone along with the message, and the vane response also includes the bone
2) Ames makes a new duct for each bone, and the vane just responds on the same duct.

I'm slightly leaning toward keeping it as a bone, but I'm especially curious what Joe and Philip think would be best here.

I'm also going to propose some move tags for forward and backward flows and message acknowledgments.  Since this is a good opportunity for bikeshedding, I expect some pushback.

Pass a request to a remote ship:

local vane → %pass %pull → local ames → "request" message → remote ames → %pass %pull → remote vane

Give a response to a remote ship:

local vane → %give %push → local ames → "subscription update" message → remote ames → %give %push → remote vane

Acknowledge a remote request (positively or negatively):

local vane → %give %echo → local ames → message (n)ack → remote ames → %give %echo → remote vane

%push and %echo are unused. %pull is currently used in a non-conflicting manner by Gall to cancel a subscription, which might be confusing, but I feel like the symmetry of those terms means Ames should get precedence on those tags in this case.

Alternatively, we could rename %pull to %want, which old Ames used for a different purpose and is no longer taken in new Ames.  I refuse to use %east and %west because I can never remember which is which — they can be swapped without making any more or less sense.


~rovnys-ricfer



Joe Bryan

Jun 18, 2019, 4:12:58 PM
to Ted Blackman, Philip Monk, ohA...@gmail.com, dpc, urbit-dev
Once more into the bikeshed ...

I think %echo for acknowledgements is misleading. I propose %aver.

And I think it's worth considering that %pull/%push are really just synonyms for remote %pass/%give. I'm tempted to say we should therefore just use %pass and %give in the local sending vane. Of course, then we'd have lots of [*duct %pass %a %pass ...] and [*duct %give %give ...], which is not exactly the ideal aesthetic.

Another thought: do we need different tags for sending on flows of opposite polarity? We could have a single "send a message" card, %pass'ing it to a "forward flow" and %give'ing it on a "backwards flow". In this case, I'd be tempted to keep using %want (ie, [*duct %pass %a %want ...] and [*duct %give %want ...]).

--------------

For context, here's the interface to old (current) ames (which does not have directional flows):

Send a message:

local vane -> %want -> local ames -> message -> remote ames -> %west -> remote vane

Acknowledge a message:

local vane -> %mack -> local ames -> ack-message -> remote ames -> %woot -> remote vane

~master-morzod


Philip Monk

Jun 18, 2019, 4:49:40 PM
to Joe Bryan, ohA...@gmail.com, dpc, urbit-dev, Ted Blackman
A note:  if we receive a naxplanation for message n+1 before an ack for n (positive or negative), we do still have to wait for the ack for n before passing back the naxplanation.

> Now I'm thinking about how local vanes should receive "request" messages, which are identified only by bone, not by duct. The two options I can think of are:
> 1) Ames sends the bone along with the message, and the vane response also includes the bone
> 2) Ames makes a new duct for each bone, and the vane just responds on the same duct.

Ames should put the bone in the duct.  Ducts' sole reason for existence is to be the place to send responses back to, so you shouldn't need any other data.

> "subscription update"

I don't think we should use pubsub terminology since Ames does not provide a pubsub interface.  It provides forward and backward flows, but no affordances specific to subscriptions.

> %pull

It feels weird that commands are pulls.  It makes sense for requesting a subscription, but not for poking.  |hi turns into a "pull", which doesn't seem right.

> Another thought: do we need different tags for sending on flows of opposite polarity? We could have a single "send a message" card, %pass'ing it to a "forward flow" and %give'ing it on a "backwards flow". In this case, I'd be tempted to keep using %want (ie, [*duct %pass %a %want ...] and [*duct %give %want ...]).

I agree that it's really just pass and give across the network.  I don't like "%want" because it doesn't evoke anything good in me.  I would be happy with "%mess" or similar.  My only hesitation with using the same name for pass and give is that it might be a little less obvious which way the flows are going, but I think it's fine.





Ted Blackman

Jun 18, 2019, 4:59:47 PM
to Joe Bryan, Philip Monk, oha...@gmail.com, dpc, urbit-dev
I'm fine with %aver. I don't want to use %give and %pass because it would violate our general rule of not reusing tags in different parts of the system. This would be especially bad here given that the two uses are so close to each other.

It might be fine to have the same tag for both forward and backward flows. I think %want is misleading in that case, though.

Candidates:
%mess
%memo
%mail
%post
%word
%news
%chat
%heed
%page
%gist
%pith
%beep
%chit
%spam


I'm leaning toward %word. Whatever it is, it should feel good symmetrically as either a request or a subscription update.


~rovnys-ricfer

Ted Blackman

Jun 18, 2019, 5:04:47 PM
to Joe Bryan, Philip Monk, oha...@gmail.com, dpc, urbit-dev
More candidates:

%tell
%line
%ring
%buzz

I like %buzz


~rovnys-ricfer

Ted Blackman

Jun 18, 2019, 5:17:33 PM
to Joe Bryan, Philip Monk, oha...@gmail.com, dpc, urbit-dev
I don't think we could ever receive the naxplanation for n+1 before the naxplanation for n. Messages have to be received in order, and the nacker will enqueue naxplanation n to be sent before naxplanation n+1, because it tried to process n first.


~rovnys-ricfer

Philip Monk

Jun 18, 2019, 5:21:25 PM
to Ted Blackman, oha...@gmail.com, dpc, urbit-dev, Joe Bryan
Right, but we could receive a naxplanation for n+1 before a positive ack for n.  Are you saying we should interpret a naxplanation for n+1 as a positive ack for all < n+1 (not previously acked)?



Ted Blackman

Jun 18, 2019, 5:34:34 PM
to Philip Monk, oha...@gmail.com, dpc, urbit-dev, Joe Bryan
Ah, I hadn't thought of that... but yes, sounds legit. Naxplanations do all come in order, so if we get one on n+1, then we know the remote has positively acked everything from the last naxplanation we've processed through n.
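
In sketch form (Python; illustrative only):

    def deliver_ack(n, ok, explanation=None):
        ...  # stub: notify the requesting vane

    last_resolved = 0  # highest message already resolved to the vane

    def on_naxplanation(n, tang):
        global last_resolved
        # naxplanations arrive in order, so one for message n implies a
        # positive ack for every unresolved message below n
        for m in range(last_resolved + 1, n):
            deliver_ack(m, ok=True)
        deliver_ack(n, ok=False, explanation=tang)
        last_resolved = n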


~rovnys-ricfer

Philip Monk

Jun 18, 2019, 5:39:04 PM
to Ted Blackman, oha...@gmail.com, dpc, urbit-dev, Joe Bryan
Urbit's official motto has been "never ack an ack", but perhaps we should change it to "don't ever ack an ack, but it's okay to ack a nack as long as you never nack a nack".



Ted Blackman

Jun 18, 2019, 5:53:37 PM
to Philip Monk, oha...@gmail.com, dpc, urbit-dev, Joe Bryan
That's got a nice ring to it. Also, never naxplain a naxplanation.

The management has informed me that the word "naxplanation" is "far too silly", which is obviously true. While we're bikeshedding, let's take suggestions on replacements.

Candidates:
nack-trace
... any others?


~rovnys-ricfer


Philip Monk

Jun 18, 2019, 5:59:22 PM
to Ted Blackman, oha...@gmail.com, dpc, urbit-dev, Joe Bryan
IMO naxplanation is a perfect name; it's the sort of name you discover rather than the sort you make up.  However, "nack report" is fine.  More fancifully, it could be an "excuse" for why it couldn't process the packet.



