Coyotos Endpoints


Jonathan S. Shapiro

Mar 3, 2026, 11:47:01 PM
to cap-talk
My other response to William...

Endpoints were an attempt to solve two problems, and I'm still not all that happy with them.

The first is that things like the space bank implement a lot of objects. The earlier generations relied on red segments to pass what amounted to a user-level object ID. Red segments had way too many things going on, but when they went away we needed something to take up that functionality.

The second is that the start key path and the resume key path had some essential differences that were required by the way "at most once" was specified. This led to a bunch of complications in the IPC path that made the assembly version kind of unpleasant. With the addition of the payload match mechanism, endpoints and their entry capabilities offered a way to regularize and simplify a bunch of that.
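The payload-match idea can be sketched roughly as follows. This is my own illustration of the shape of such a check, not the Coyotos definition; the field names and the exact acceptance rule are assumptions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch of a payload-match style check.  The entry
 * capability carries a payload word; the endpoint accepts the
 * invocation only when matching is enabled and the payloads agree.
 * Field names and the rule are assumptions, not the Coyotos layout. */
typedef struct {
    bool     match_payload;   /* does this endpoint insist on a match? */
    uint32_t payload;         /* expected payload when matching */
} endpoint_t;

typedef struct {
    uint32_t payload;         /* payload word baked into the capability */
} entry_cap_t;

static bool ipc_accepts(const endpoint_t *ep, const entry_cap_t *cap) {
    return !ep->match_payload || ep->payload == cap->payload;
}
```

The point of regularizing on one check like this is that the start-key and resume-key paths no longer need distinct code in the IPC fast path.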

What ended up happening is that the case where you don't have a red segment more or less went away, but endpoint objects provided a smaller replacement for red segments that wasn't trying to serve way too many masters and had a simpler execution path.

The current version suffers from the fact that the allocation count is only 20 bits, which means that it will overflow if used for at-most-once resume key behavior. With the conclusion that we can reduce the OID size a bit, I think that's mostly resolved. But for extreme cases there is going to need to be a protocol for replacing endpoints every 2^30 calls or so.
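The overflow concern can be made concrete with a small sketch. The structure and widths here are illustrative (a 20-bit count, per the text), not the actual Coyotos data layout:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: an endpoint whose allocation count is confined
 * to a fixed bit width.  Widths and names are illustrative only. */
#define AC_BITS 20u
#define AC_MAX  ((1u << AC_BITS) - 1u)

typedef struct {
    uint32_t alloc_count;   /* bumped once per at-most-once resume use */
} endpoint_t;

/* Returns true if a count was consumed; false means the count is
 * exhausted and the endpoint must be replaced before further
 * resume-style use -- the "replacement protocol" case. */
static bool endpoint_consume(endpoint_t *ep) {
    if (ep->alloc_count >= AC_MAX)
        return false;
    ep->alloc_count++;
    return true;
}
```

At one consumed count per call, a 20-bit field lasts only about a million calls, which is why widening it (by shrinking the OID) mostly resolves the problem.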


Very early on, I had the idea that endpoints would end up being a mutual rendezvous object: they would support multiple listeners, and the caller wouldn't know which listener would actually receive the message. There are two problems with this idea:
  1. As with something like NGINX, it's helpful to talk to the same receiver once you establish a connection. There's enough context going on that you want something like a session, which KeyKOS doesn't have.
  2. The capabilities end up pointing in the wrong direction, with the effect that the list of running processes becomes too large to manage. In effect, nobody ever enters a receiving state.

Unfortunately, I don't see endpoints going away. If they did, we'd have to re-invent resume capabilities and red segments. Endpoints are actually simpler.


Jonathan

William ML Leslie

Mar 4, 2026, 12:50:53 AM
to cap-...@googlegroups.com
On Wed, 4 Mar 2026 at 14:47, Jonathan S. Shapiro <jonathan....@gmail.com> wrote:
> My other response to William...
>
> Endpoints were an attempt to solve two problems, and I'm still not all that happy with them.
>
> The first is that things like the space bank implement a lot of objects. The earlier generations relied on red segments to pass what amounted to a user-level object ID. Red segments had way too many things going on, but when they went away we needed something to take up that functionality.
>
> The second is that the start key path and the resume key path had some essential differences that were required by the way "at most once" was specified. This led to a bunch of complications in the IPC path that made the assembly version kind of unpleasant. With the addition of the payload match mechanism, endpoints and their entry capabilities offered a way to regularize and simplify a bunch of that.

I use endpoints as a way to implement revocation, at least for user-implemented objects.  I miss having the ability to share a reference to an address space and later revoke that.  I know that I can bounce the allocation count, but if I'm doing this on every call to a system service, which I might like to do in some scenarios, I'll eat through them pretty quickly.
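The revocation pattern being described can be sketched as follows. This is my own illustration of "bouncing the allocation count," with made-up structure names, not the actual Coyotos implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch: an entry capability snapshots the endpoint's
 * allocation count when minted.  Bumping the count on the endpoint
 * invalidates every outstanding entry capability at once, which is
 * what makes this usable for revocation of user-implemented objects. */
typedef struct { uint32_t alloc_count; } endpoint_t;
typedef struct { endpoint_t *ep; uint32_t snap; } entry_cap_t;

static entry_cap_t mint_entry(endpoint_t *ep) {
    return (entry_cap_t){ .ep = ep, .snap = ep->alloc_count };
}

/* "Bounce" the count: all previously minted entry caps go stale. */
static void revoke_all(endpoint_t *ep) { ep->alloc_count++; }

static bool entry_valid(const entry_cap_t *c) {
    return c->snap == c->ep->alloc_count;
}
```

This also makes the cost concern concrete: bumping the count on every call to a system service consumes one count per call, so a 20-bit field runs out after roughly a million revocations.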

> What ended up happening is that the case where you don't have a red segment more or less went away, but endpoint objects provided a smaller replacement for red segments that wasn't trying to serve way too many masters and had a simpler execution path.

> The current version suffers from the fact that the allocation count is only 20 bits, which means that it will overflow if used for at-most-once resume key behavior. With the conclusion that we can reduce the OID size a bit, I think that's mostly resolved. But for extreme cases there is going to need to be a protocol for replacing endpoints every 2^30 calls or so.

That's the payload match.  Overflowing payload match + allocation count is a little further in the future.

> Very early on, I had the idea that endpoints would end up being a mutual rendezvous object: they would support multiple listeners, and the caller wouldn't know which listener would actually receive the message. There are two problems with this idea:
>   1. As with something like NGINX, it's helpful to talk to the same receiver once you establish a connection. There's enough context going on that you want something like a session, which KeyKOS doesn't have.
>   2. The capabilities end up pointing in the wrong direction, with the effect that the list of running processes becomes too large to manage. In effect, nobody ever enters a receiving state.


I contemplate this.  Specifically: suppose a process bound to one hardware thread sends an entry capability to a process bound to another hardware thread, which then invokes it.  I would like the invocation to be handled by whichever target is closest to the caller, to avoid the NUMA tax.  I don't have any implementation planned.

I have per-CPU runnable queues; without those, this question doesn't make as much sense.

--
William ML Leslie
A tool for making incorrect guesses and generating large volumes of plausible-looking nonsense.  Who is this very useful tool for?

Jonathan S. Shapiro

Mar 4, 2026, 2:49:47 AM
to cap-...@googlegroups.com
On Tue, Mar 3, 2026 at 9:50 PM William ML Leslie <william.l...@gmail.com> wrote:
> I miss having the ability to share a reference to an address space and later revoke that.

That's easy. You stick the space in a containing GPT object and later revoke the container. It's no different from red segments, which were used the same way. I think I'm missing something.
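The container indirection being described can be sketched like this. The structure is illustrative (a one-slot container), not the actual Coyotos GPT layout:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the container pattern: instead of handing out the space
 * capability directly, wrap it in a container GPT and hand out a
 * capability to the container.  Clearing the container's slot later
 * severs every path that went through it.  GPT_SLOTS and the slot
 * discipline here are assumptions for illustration. */
#define GPT_SLOTS 16

typedef struct gpt {
    struct gpt *slot[GPT_SLOTS];
} gpt_t;

/* Walk one level of indirection; NULL means the path was revoked. */
static gpt_t *resolve(gpt_t *container) {
    return container ? container->slot[0] : NULL;
}

static void revoke(gpt_t *container) {
    container->slot[0] = NULL;
}
```

Note that this form of revocation costs one container object per revocable share, rather than consuming the endpoint's allocation count.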
 
> The current version suffers from the fact that the allocation count is only 20 bits, which means that it will overflow if used for at-most-once resume key behavior. With the conclusion that we can reduce the OID size a bit, I think that's mostly resolved. But for extreme cases there is going to need to be a protocol for replacing endpoints every 2^30 calls or so.

> That's the payload match.  Overflowing payload match + allocation count is a little further in the future.

The two are not additive. 

> I contemplate this.  Specifically: suppose a process bound to one hardware thread sends an entry capability to a process bound to another hardware thread, which then invokes it.  I would like the invocation to be handled by whichever target is closest to the caller, to avoid the NUMA tax.  I don't have any implementation planned.

This is one of those slippery temptations. There are good reasons to do this in the kernel: efficiency, and access to low-level knowledge. As large multi-core systems gain levels of hierarchy, it becomes unavoidable.

But there's a reason the HAL team referred to the bus interconnect chip (which handled this sort of thing) as the "b*tch chip".
 
> I have per-CPU runnable queues; without those, this question doesn't make as much sense.

That, I think, makes sense. Though I suspect we want run queues that can dispatch to a set of CPUs of sufficiently close type.
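The "dispatch to a set of sufficiently close CPUs" idea can be sketched as a distance-aware queue selection. The distance table and sizes here are made up for the sketch; real kernels would derive them from firmware topology tables:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative NUMA-aware dispatch: given per-CPU run queues and the
 * set of CPUs able to run the receiver, enqueue on the candidate with
 * the smallest distance from the caller's CPU.  The 4-CPU distance
 * matrix below is invented for the example (two 2-CPU clusters). */
#define NCPU 4
static const uint8_t numa_dist[NCPU][NCPU] = {
    { 0, 1, 2, 2 },
    { 1, 0, 2, 2 },
    { 2, 2, 0, 1 },
    { 2, 2, 1, 0 },
};

/* Pick the candidate CPU closest to the caller. */
static int pick_cpu(int caller, const int *cands, int ncands) {
    int best = cands[0];
    for (int i = 1; i < ncands; i++)
        if (numa_dist[caller][cands[i]] < numa_dist[caller][best])
            best = cands[i];
    return best;
}
```

With per-CPU run queues, the selected CPU's queue is the one the receiver is placed on; the hierarchy point above corresponds to grouping CPUs whose mutual distance is below some threshold into one dispatch set.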


Jonathan