I'm confused about MarkM's comments on memory management complexity. A lot of what we did in Coyotos never made it out into the world, so I wonder if it may be helpful to describe where we were in Coyotos when work stopped.
What I'm about to describe is how "normal" application memory was managed; there are bootstrap and support processes that manage their own memory in "bare metal" fashion - most notably the memory keepers themselves. These were a sufficient pain in the ass that our goal was to make fine-grained memory management unnecessary for all but exceptional applications, and, where necessary, to encapsulate it in a library. The shift in Coyotos from Segments to GPTs (Guarded Page Tables) made some of this easier.
In EROS, CapROS, and Coyotos, we were compiling to the ELF object file format using static linking. For security and isolation reasons, we did not support dynamic libraries in the style of UNIX/Linux. We already had the ability to unpack an ELF file into an EROS-style memory image, and we adapted this to Coyotos GPTs. This unpacking builds Coyotos structures; IIRC the ELF file is not incorporated into the system image.
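To make that concrete, here is a rough sketch (in C, not actual EROS/Coyotos source) of what that unpacking step amounts to: walk the ELF program headers, copy each PT_LOAD segment into fresh pages (zero-filling any BSS tail), and hang each page off the GPT tree at its virtual address. The alloc_page and gpt_map_page hooks are hypothetical stand-ins for whatever the real image builder used:

    /* Sketch of ELF unpacking into page-granular Coyotos structures.
       Assumes a standard <elf.h>; the two extern hooks are hypothetical. */
    #include <elf.h>
    #include <stdint.h>
    #include <string.h>

    #define COYOTOS_PAGE_SIZE 4096u

    extern void *alloc_page(void);
    extern void  gpt_map_page(uint64_t va, void *page, int writable);

    void unpack_elf(const uint8_t *image)
    {
      const Elf64_Ehdr *eh = (const Elf64_Ehdr *)image;
      const Elf64_Phdr *ph = (const Elf64_Phdr *)(image + eh->e_phoff);

      for (unsigned i = 0; i < eh->e_phnum; i++) {
        if (ph[i].p_type != PT_LOAD)
          continue;

        /* Copy the segment page by page; memsz beyond filesz is BSS
           and stays zero-filled. */
        for (uint64_t off = 0; off < ph[i].p_memsz; off += COYOTOS_PAGE_SIZE) {
          void *pg = alloc_page();
          uint64_t nfile = (off < ph[i].p_filesz) ? ph[i].p_filesz - off : 0;
          if (nfile > COYOTOS_PAGE_SIZE) nfile = COYOTOS_PAGE_SIZE;

          memset(pg, 0, COYOTOS_PAGE_SIZE);
          memcpy(pg, image + ph[i].p_offset + off, nfile);
          gpt_map_page(ph[i].p_vaddr + off, pg, (ph[i].p_flags & PF_W) != 0);
        }
      }
    }

Once the GPT tree is built, the ELF file has served its purpose, which is why (IIRC) it did not need to be incorporated into the system image.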
Coyotos was the first system for which we built a bunch of higher-level application code, and two issues came to motivate a keeper that actually understood ELF.
The first was a change in the constructor logic. In the constructor protocol, if the yield template has a creator capability in the address space slot, the yield's constructor is specified to invoke it to fabricate a fresh address space for the yield. This can take a while, so we wanted the yield to do this work for itself. We moved to a design in which the yield is started with an immutable "iron man" trampoline address space that fabricates the intended address space and then transfers control to the initial PC specified in the ELF file.
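In outline, the trampoline's job looks something like the following. This is a hedged sketch rather than Coyotos source; every name in it (fabricate_address_space, install_address_space, and so on) is a hypothetical stand-in for the actual capability invocations:

    #include <stdint.h>

    extern uint64_t fabricate_address_space(void);       /* build the real GPT tree */
    extern void     install_address_space(uint64_t cap); /* swap it into this process */
    extern uint64_t elf_entry_point(void);               /* start PC from the ELF header */
    extern void     jump_to(uint64_t pc) __attribute__((noreturn));

    void trampoline_main(void)
    {
      /* Running in the shared, immutable trampoline space: build the
         process's intended address space... */
      uint64_t as_cap = fabricate_address_space();

      /* ...install it over ourselves... */
      install_address_space(as_cap);

      /* ...and transfer control to the ELF entry point. The trampoline
         is never touched again. */
      jump_to(elf_entry_point());
    }

The point of the design is that the (potentially slow) address space fabrication is charged to the yield rather than serializing through the constructor.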
The second was that we wanted the option to share code and read-only data. This led us to create an "ELF Keeper". The ELF keepers for a given ELF image share a read-only copy of the original image, including its metadata. Each keeper crafts the initial yield address space from this image, restarts the yield from the start address specified in the ELF image, and then extends the heap and the stack in response to page faults, much as UNIX and Linux do. In the end, this leads to application code that is completely oblivious to low-level memory management.
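The fault-handling side of the keeper amounts to a three-way decision on the faulting address. The sketch below assumes a simple flat layout (image at the bottom, heap above it, stack growing down from the top); the per-yield state and the map_* helpers are hypothetical, not the actual keeper protocol:

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SIZE 4096u

    /* Hypothetical per-yield state the keeper would track. */
    struct yield_state {
      uint64_t image_end;    /* top of the ELF-defined image */
      uint64_t heap_brk;     /* current top of the heap */
      uint64_t stack_limit;  /* current bottom of the stack */
    };

    extern bool map_from_elf_image(uint64_t va);   /* shared, read-only */
    extern bool map_fresh_zero_page(uint64_t va);  /* private, writable */

    /* Conceptually invoked on each page fault taken by the yield. */
    bool handle_fault(struct yield_state *ys, uint64_t fault_va)
    {
      fault_va &= ~(uint64_t)(PAGE_SIZE - 1);

      if (fault_va < ys->image_end)        /* code/rodata: share the image */
        return map_from_elf_image(fault_va);

      if (fault_va < ys->heap_brk)         /* heap: zero-fill on demand */
        return map_fresh_zero_page(fault_va);

      if (fault_va >= ys->stack_limit - PAGE_SIZE) {
        /* stack: grow downward one page at a time */
        ys->stack_limit = fault_va;
        return map_fresh_zero_page(fault_va);
      }

      return false;  /* stray fault: let it escalate to the process keeper */
    }

Because the code and read-only data faults are satisfied from the shared image, every yield of the same ELF file ends up sharing those pages for free.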
Future versions of the ELF keeper were intended to add support for explicit region management.
I'm not sure if this clarifies or clouds the issues that MarkM was describing, but perhaps it provides a starting point from which to figure out where the paths diverged.
Jonathan