Goblin semantics, and thinking through / planning for CapTP


Christopher Lemmer Webber

May 18, 2020, 5:25:05 PM5/18/20
to cap-...@googlegroups.com
Hello,

I'm implementing CapTP for the first time ever in my life, though I've
been reading about it for a long time. Mark has told me that a nice
phrase they use at Agoric is "don't squander your ignorance". To that
end, I wonder if it might be a good idea for me to work through some of
my thinking live on a thread here. If people find this too noisy,
please tell me and I'll move it off-list.

(In case it's useful and to prevent doubts of copywrongs, I waive all
copyright into the public domain under CC0 1.0. Feel free to use this
if it results in anything resembling useful documentation... though it
probably won't.)

EDIT BEFORE SENDING: this message has gotten huge and I apologize
in advance unless you think it's great in which case you're welcome.


What is CapTP?
==============

(Feel free to skip this, and maybe the next section, if you're familiar
with CapTP already yourself. Or skip everything, I'm not the boss of
you!)

I recently described CapTP and VatTP to a colleague. Here is my attempt
to mirror those definitions, in short.

- VatTP: How you get secure connections between vats (or maybe
machines; that's up for debate). But it's really more about setting
up a way to get messages securely *between* these vats/machines,
rather than what is done with those messages.

- CapTP: This is the "what happens when you get a message" protocol and
is really what allows objects to talk to each other in a fully
distributed environment. It takes care of a bunch of things,
including linking objects across the vats, informing of promise
resolution, cooperative distributed (acyclic) garbage collection, etc
etc.

There have been a bunch of "CapTP"-like things over time. They aren't
all necessarily compatible. For now, this makes CapTP more of an
abstract pattern, like "Lisp", than a specific standard, like "R5RS
Scheme". Like lisps, you'll likely find many common ideas between
dialects. It may be desirable to find The Great Unification at some
point but that hasn't happened yet. Maybe soon!


How do I learn more?
====================

Here are the main resources I am using to learn about CapTP:

- The Erights CapTP pages:
http://erights.org/elib/distrib/captp/index.html
- MarkM's thesis:
http://www.erights.org/talks/thesis/
- Cap'n Proto's docs:
https://capnproto.org/rpc.html
- More importantly, rpc.capnp, which is a beautiful document:
https://github.com/capnproto/capnproto/blob/master/c++/src/capnp/rpc.capnp
- This extremely wide aspect ratio video by MarkM:
https://www.youtube.com/watch?v=YXUqfgdDbr8
- Agoric's captp.js:
https://github.com/Agoric/agoric-sdk/blob/master/packages/captp/lib/captp.js
- The following Agoric SwingSet docs:
https://github.com/Agoric/agoric-sdk/blob/master/packages/SwingSet/docs/delivery.md
https://github.com/Agoric/agoric-sdk/blob/master/packages/SwingSet/docs/networking.md
(ok that latter one is really for VatTP)

If you are somehow completely new to ocaps (ok, nobody on this list is,
you can tell I'm over-engineering this email for the future), I
personally recommend "A Security Kernel Based on the Lambda Calculus":

http://mumble.net/~jar/pubs/secureos/secureos.html

The core idea is that (object) capabilities are just object references.
There's no need to layer a complex (and more insecure) security
architecture on top of our programming languages because the way
programmers already program, if we take it seriously, already *is* our
programming model: passing references around to functions (or objects)
is basically the full idea. If you don't have it in scope, you can't
use it. More advanced patterns flow from this core idea; we're not
covering them here, but the "Ode to the Granovetter Diagram" writeup
explains many of them in brief:

http://erights.org/elib/capability/ode/index.html


Core structures
===============

I'm going to borrow some slides from a talk I gave on Goblins recently:

https://gitlab.com/dustyweb/talks/-/blob/master/spritely/friam-2020/goblins-talk.org

Jump to "Goblins Architecture", though really *most* of this is common
across other ocap'y systems like E, Agoric's stuff, Cap'n Proto, blah
blah blah.

Here are the abstractions, as used in Goblins, layered. Moving from the
innermost part outward (a rough code sketch follows the list):

(machine (vat (actormap {refr: (mactor object)})))

- object: Some sort of encapsulated thing that can be talked to via
call-and-return invocation and asynchronous passing of messages.
Manages its own state, though another way to say that is "decides
after handling one message/invocation how it will respond to the next
message/invocation". In fact, in Goblins this is usually just a
procedure representing the current message handler. Oftentimes
these things may support multiple methods.

In Goblins, objects resemble "classic actors", although that term may
be subject to bikeshedding.

- mactor (Goblins-only?): Stands for "meta-actor"; there are actually
a few core kinds of these and they wrap the object (eg this is where
promises vs non-promises are distinguished in Goblins).

- refr: Also known as "ref" elsewhere in the ocap community; this is
the reference that is used to communicate with the object. If you
have it, you can communicate, if not, you can't.

- actormap: Mapping of refrs to the objects they represent. In
Goblins, if an object specifies it would like to "become" a new
message handler, this is updated (in a transactional way).
Can be used on its own, but only for non-distributed programs.

- vat: An event loop. Wraps the actormap data structure and handles
passing messages to it. Handles messages one "turn" at a time;
however, objects may send asynchronous messages to objects in any vat
to which a connection can be established, and can even make immediate
calls to other objects that are in the same vat ("near" each other).

- Machine: Some sort of abstract machine or OS process that may have
one or multiple vats in it. (Agoric does fancy things so that they
can even treat blockchains and other such things as abstract
"machines"... we're not doing anything so fancy in Goblins yet.)


Zooming in on the Vat
=====================

Looking just on the vat-and-deeper levels, this looks something like the
following, borrowed and adjusted a bit from MarkM's dissertation:

             .-----------------------.
             |Internal Vat Schematics|
             '======================='

           stack           heap
            ($)         (actormap)
         .-------.----------------------.  -.
         |       |                      |   |
         |       |    .-.               |   |
         |       |   (obj)  .-.         |   |
         |       |    '-'  (obj)        |   |
         |  __   |          '-'         |   |
         | |__>* |      .-.             |   |- actormap
         |  __   |     (obj)            |   |  territory
         | |__>* |      '-'             |   |
         |  __   |                      |   |
         | |__>* |                      |   |
         :-------'----------------------:  -'
   queue |  __      __      __          |  -.
    (<-) | |__>*   |__>*   |__>*        |   |- event loop
         '------------------------------'  -'   territory

In the upper-right box is, abstractly, the actormap data structure
representing references pointing to objects. If we just do synchronous
programming, we can add in the left-hand column, which resembles a call
stack for call-and-return behavior. However, these calls can only be
done between objects in this same actormap/vat. Adding in the bottom
row, we see messages queued to be handled. Each message is handled,
one "turn" at a time, from this queue, kicking off a call stack (again,
the left-hand column) starting with the message invoking some object in
the actormap/heap with some arguments. In general, when a "turn" is
complete, if there is a promise attached to this message waiting to be
fulfilled, it is fulfilled with the return value from that first call.
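
As a minimal sketch of that turn-taking in Python (invented names,
nothing like Goblins' real scheduler; the "resolver" here is just a
callback standing in for a promise):

from collections import deque

class Vat:
    def __init__(self, actormap):
        self.actormap = actormap
        self.queue = deque()                        # event-loop territory

    def send(self, refr, args, resolver=None):
        self.queue.append((refr, args, resolver))   # <- style asynchronous send

    def run_turn(self):
        refr, args, resolver = self.queue.popleft()
        result = self.actormap[refr](*args)         # kicks off the call stack
        if resolver is not None:
            resolver(result)                        # fulfill the waiting promise

answers = []
vat = Vat({"greeter": lambda name: "hi " + name})
vat.send("greeter", ("alice",), resolver=answers.append)
vat.run_turn()
assert answers == ["hi alice"]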


Crossing vat and machine boundaries (hello CapTP)
=================================================

Of course, vats aren't limited to speaking just to themselves.
We want to speak to other vats, including on other machines!

Visually, this looks something like the following (sorry, might render
better in the link to my talk above):

.----------------------------------. .----------------------.
| Machine 1 | | Machine 2 |
| ========= | | ========= |
| | | |
| .--------------. .---------. .-. .-. |
| | Vat A | | Vat B | | \______| \_ .------------. |
| | .---. | | .-. | .-| / | / | | Vat C | |
| | (Alice)----------->(Bob)----' '-' '-' | | .---. | |
| | '---' | | '-' | | | '--->(Carol) | |
| | \ | '----^----' | | | '---' | |
| | V | | | | | | |
| | .----. | | .-. .-. | .------. | |
| | (Alfred) | '-------/ |______/ |____---( Carlos ) | |
| | '----' | \ | \ | | '------' | |
| | | '-' '-' '------------' |
| '--------------' | | |
| | | |
'----------------------------------' '----------------------'

Here we see, with nested bulleted points representing "what contains
what":

- Machine 1, with a connection to Machine 2
  - Vat A
    - The object Alice (holding references to: Alfred, Bob)
    - The object Alfred
  - Vat B
    - The object Bob (holding a reference to: Carol)
- Machine 2, with a connection to Machine 1
  - Vat C
    - The object Carol
    - The object Carlos (holding a reference to: Bob)

At the boundaries of the vats are triangle-looking things. These, in
theory, represent tables of "live references".

- The top connection between the triangle-looking things represents
  references Machine 1 has to objects in Machine 2.
  - The top left triangle-looking thing is Machine 1's imports
    (from Machine 2)
  - The top right triangle-looking thing is Machine 2's exports
    (to Machine 1)
- The bottom connection represents the reverse, Machine 2's references
  to objects in Machine 1
  - Bottom left being Machine 1 exporting to Machine 2
  - Bottom right being Machine 2 importing from Machine 1

These tables are keyed by numerical indices. For example, Machine1+VatB's
reference to Carol in Machine2+VatC may look like the following (with a
code sketch just after):

- Machine1Imports: {<remote-carol-ref>: 3}
- Machine2Exports: {3: <carol-ref>}
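
In code, those two mirrored tables might be modeled something like the
following Python sketch (invented names, not Goblins' actual tables):

from itertools import count

class ExportTable:
    def __init__(self):
        self._next_index = count(1)
        self.by_index = {}             # index -> local object

    def export(self, obj):
        index = next(self._next_index)
        self.by_index[index] = obj
        return index                   # only this small integer crosses the wire

class ImportTable:
    def __init__(self):
        self.by_index = {}             # index -> proxy for the far object

    def record(self, index, proxy):
        self.by_index[index] = proxy

carol = object()                       # stands in for Carol in Machine2+VatC
machine2_exports = ExportTable()
machine1_imports = ImportTable()

index = machine2_exports.export(carol)
machine1_imports.record(index, "<remote-carol-ref>")

assert machine2_exports.by_index[index] is carol
assert machine1_imports.by_index[index] == "<remote-carol-ref>"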

(Side note, when I hear "import" and "export" and then look at the
arrows, I get confused, because I think of "import", arrow-wise, "being
shipped to" the side that is importing, such as packages "ship to" an
importer in a trading situation and "ship from" an exporter. The arrows
are being tricky though, because we're importing a reference "to" an
object that never leaves its location.)

But this either isn't a complete picture, or doesn't represent other
CapTP implementations (remember, Goblins hasn't fully implemented it
yet), so caveats from "really existing" CapTP systems:

- Traditionally, imports/exports have been on the vat level, rather
than on the machine level

- Also usually there are two other tables in addition to
imports/exports: questions/answers. These correspond to "future
resolutions to promises" (as well as some "promise pipelining" stuff
but we'll talk about that later). A way to think of this is: if Bob
sends a message to Carol using the <- operator in Goblins, Bob should
get back a response that will eventually be resolved with Carol's
response... but when we cross the network divide and Machine2+VatC
gets that message saying "hey, call Carol... and when you're done,
fulfill this thing", we need some way to refer to that.

*SCREECH!* Let's slam the brakes on that last statement for a second.
Because there could be another solution (ignoring promise pipelining for
a second) that gets rid of questions/answers: Machine1+VatB could set up
an export for Machine2+VatC that refers to the promise-resolver that
will fulfill Bob's promise. We could just say in the message to Carol:
"and once you have an answer, fulfill the promise with
<this-resolver-i-just-exported-for-you>"!

Well that seems to solve that just fine and dandy so why bloat our
protocol with these extra two question/answer tables? Shouldn't
import/export be fine?


The desiderata of promise pipelining
====================================

Well, we said we'd get back to "promise pipelining" at some point, so
here we go.

We could send a message to the remote car factory and say "make me a
car!" and hold onto that promise. We could wait for it to resolve to
get a reference to that car, but *only then* would we be able to tell it
to drive.

So this is:

 .-- Ask car factory for car
 |
 |    .--- Made the car, sending back the reference
 |    |
 |    |    .--- Got the reference, tell the car I want to turn it on!
 |    |    |
 |    |    |    .--- Turned on the car, telling you it makes a
 |    |    |    |    "vroom vroom" noise
 |    |    |    |
 |    |    |    |    .--- Finally I have heard my car go "vroom vroom"
 |    |    |    |    |
 V    V    V    V    V

 B => A => B => A => B

That's a lot of round trips when we knew that we wanted to drive the car
immediately. What if we could instead say, "make me a car, and then
I want to turn it on immediately!" That would instead look like:


 .--- Ask car factory for car, and once you have that car, turn it on
 |
 |    .--- Okay I made the car.
 |    |    Okay, now I will turn on that car, telling you it makes a
 |    |    "vroom vroom" noise
 |    |
 |    |    .--- Now I get to hear my car go "vroom vroom" already!
 |    |    |
 V    V    V

 B => A => B

This is nice for a few reasons: we can start talking about what we'd
like to do immediately instead of waiting for it to be a possibility.
But most importantly, it reduces the round trips of our system, which
are often the most expensive part in a networked environment:

"Machines grow faster and memories grow larger.
But the speed of light is constant and New York is not getting any
closer to Tokyo."
-- MarkM in Robust Composition: Towards a Unified Approach
to Access Control and Concurrency Control
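
To make the round-trip saving concrete, here's a toy single-process
simulation in Python; RemoteVat, the batch format, and the answer-id
convention are all made up for illustration and are not any real CapTP
wire format:

class RemoteVat:
    """Pretend far vat: each delivered batch of messages costs one round trip."""
    def __init__(self, exports):
        self.exports = dict(exports)   # export/answer table: id -> object
        self.round_trips = 0

    def deliver_batch(self, batch):
        self.round_trips += 1          # one network round trip per batch
        for target_id, method, args, answer_id in batch:
            target = self.exports[target_id]
            # Stash the result under an answer id so *later* (pipelined)
            # messages in the same batch can already address it.
            self.exports[answer_id] = getattr(target, method)(*args)
        return {answer_id: self.exports[answer_id]
                for _t, _m, _a, answer_id in batch}

class CarFactory:
    def make_car(self):
        return Car()

class Car:
    def turn_on(self):
        return "vroom vroom"

# Without pipelining: wait for the car before asking it to turn on.
far = RemoteVat({0: CarFactory()})
far.deliver_batch([(0, "make_car", (), 1)])                  # round trip 1
noise = far.deliver_batch([(1, "turn_on", (), 2)])[2]        # round trip 2
assert far.round_trips == 2 and noise == "vroom vroom"

# With pipelining: address turn_on() at the answer of make_car() up front.
far = RemoteVat({0: CarFactory()})
answers = far.deliver_batch([(0, "make_car", (), 1),
                             (1, "turn_on", (), 2)])         # one round trip
assert far.round_trips == 1 and answers[2] == "vroom vroom"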


So does that mean we need questions/answers too?
================================================

So now that brings us back to: do we need questions/answers in addition
to these imports/exports? And... actually I'm not so sure.

It strikes me that, when exporting, Machine1+VatB could say "I'm
allocating this object reference for you, and it's a resolver type".
Then when importing, Machine2+VatC could make note of that, and
when it resolves it, immediately make note in its imports table.
Thus when Carol has that answer ready for Bob, Vat C can make note
of that and still use the reference to Carol.

I still don't see the need for separate questions/answers tables.
Maybe I'm missing something. Maybe it's obvious in practice.


Cooperative acyclic garbage collection
======================================

Maybe Bob doesn't need the reference to Carol anymore, or maybe Bob
doesn't even exist anymore. If nobody in Vat B is holding onto a
reference to Carol anymore, then Vat B should let Vat C know so that Vat
C can sever its table entry for Carol. That should allow
Vat C to garbage collect Carol if nobody else is holding onto a
reference either.

If Bob and Carol both hold references to each other, neither might ever
GC. Let's hope that doesn't happen!
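
A minimal sketch of that cooperative release handshake, assuming a
simple per-import refcount (Python, invented names; a real CapTP GC
story has more moving parts than this):

class Exporter:
    """Vat C's side: export entry 3 points at Carol."""
    def __init__(self):
        self.exports = {3: "carol"}

    def on_release(self, export_id):
        # Once the far side says it's done, drop the entry; local GC can
        # now collect Carol if nothing else on this side holds her.
        del self.exports[export_id]

class Importer:
    """Vat B's side: counts local holders of each imported reference."""
    def __init__(self, exporter):
        self.exporter = exporter
        self.refcounts = {3: 2}        # say Bob holds two handles to import 3

    def drop(self, import_id):
        self.refcounts[import_id] -= 1
        if self.refcounts[import_id] == 0:
            del self.refcounts[import_id]
            self.exporter.on_release(import_id)    # tell Vat C we're done

vat_c = Exporter()
vat_b = Importer(vat_c)
vat_b.drop(3)
vat_b.drop(3)                          # last local holder gone -> release
assert 3 not in vat_c.exports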


3-party introductions
=====================

Once you've implemented all that, what happens if Alice wants to send a
message about Carol to Dave, but Dave is on Machine 3 in Vat D (not
pictured above). Well what the heck do we do now?

There's a whole thing about handoff tables. I feel like this is a
complicated subject, and one I want to write an entirely separate email
about because it still hurts my brain a little. The MarkM talk I linked
to above explains it reasonably though. So ok, we'll just say we can do
that. (There also has been something called "vines" used historically
though I've never been completely clear on what a "vine" is... maybe it
doesn't matter anymore.)


SturdyRefs, or handoff-only, or certificates?
=============================================

So far we've talked about capabilities using double-ended,
network-spanning c-lists (the import/export tables, numerically
ordered). Great... but how do you bootstrap a connection at all? Let's
say Bob in Vat B wants to talk to Carol in Vat C, but they don't have a
connection "yet". Well if Bob was *introduced* to Carol through someone
else (using those handoff tables or whatever) that seems fine. But this
is a weird bootstrapping problem. When Machine1+VatB+Bob has never
connected to *anyone* on *any other* machine, how on earth does Bob get
an entry point into the system at all?

This is one, but only one, justification for SturdyRefs. SturdyRefs are
long-lived network addresses, which we can think of like:

<object-id>@<vat-id>[.<machine-id>]

We could imagine that <vat-id> is a public/(verification+encryption)
key fingerprint, and that immediately gives us a path to thinking about
how to send messages securely to <vat-id>. (<machine-id> is thus more
of a "hint" of how to get there... "oh yeah, I'm on this IP address or
whatevs".) <object-id> is what's called a swissNum, a sufficiently
unguessable random number / blob of randomly generated junk that we
shouldn't be able to brute force.

This is something we could put in a web hyperlink (indeed, "capability
URLs" are technically such a thing) or print on a business card as a QR
code, etc etc. Now we don't have to be born in the network to get
access to it. Once we set up a connection, we can, from there, start
setting up live references using our imports/exports tables between
vats/machines.
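
Here's a tiny Python sketch of that sturdyref shape; the helper names
and the separator handling are made up, and a real swissNum scheme
would want more care than this:

import secrets

def make_sturdyref(vat_id, machine_hint=None):
    object_id = secrets.token_urlsafe(32)            # the swissNum: unguessable
    location = vat_id if machine_hint is None else vat_id + "." + machine_hint
    return object_id + "@" + location

def parse_sturdyref(ref):
    object_id, _, location = ref.partition("@")
    vat_id, _, machine_hint = location.partition(".")
    return object_id, vat_id, machine_hint or None

ref = make_sturdyref("VATKEYFINGERPRINT", machine_hint="203.0.113.7")
object_id, vat_id, hint = parse_sturdyref(ref)
assert vat_id == "VATKEYFINGERPRINT" and hint == "203.0.113.7"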

Sturdyrefs have some challenges:
- When do you make them expire / need to be renewed?

- They're easy to leak, and it can make re-establishing relationships
difficult if intrusion occurs. Not only do you give away all your
outgoing authority, but this also resembles a "we were broken into so now all
our users have to reset their passwords" type problem... but arguably
it's much worse because ocap systems may be constructed such that
users don't really know where all the sturdyrefs they rely on exist
in their machine.

- They don't work with systems like blockchains, which can't hold
secrets (they necessarily require secrets to be externalized).

So I guess we have a couple of other options:

- Apparently Agoric's stuff is rolling out without them but I'm not
really sure how. I know "bootstrap objects" exist but I think of
those as "system-level objects" and really only there (if they need
to be at all) as a special plumbing object, and aren't sufficient if
everyone has access to them. My best guess is that what you would do
is have Carol in Vat C *anticipate* Bob in Vat B's arrival... "When
Vat B gets here, let's pre-allocate a reference to Carol for them".
Is that how it works?

- Or we could use ocap certificate chains, e.g. zcap-ld or CapCert
or the Zebra Copy stuff, etc etc. This adds some structural overhead
but pleasantly removes a large portion of the leaking risks;
recovering from an intrusion may leak some private data, but you
don't need to ask users something resembling "we were broken into so
please reset your passwords" (but potentially much worse, because
you're now asking your users to debug their running object capability
systems).

Both of those are nice, but neither of them covers the "here's a
blog/social networking post where I mention something interesting" use
case. Particularly, consider if a post is encoded in some document
structure that is stored in something like tahoe-lafs (or datashards or
etc)... how do you encode it as a link in this offline-stored data?

So I'd like to get around the need for sturdyrefs, but I see use cases
for them where they're still desirable. And they're just such a dang
easy way to bootstrap connectivity in a system.


Store imports/exports tables in vats or machines?
=================================================

If multiple vats are on the same machine, who is responsible for the
imports/exports tables? Does each vat provide them? Or should it be on
the machine level?

I think there are a lot of tradeoffs here... admittedly this is one of
the things I'm struggling with most. I'll have more to say in a
forthcoming email maybe.


Store and forward vs break-on-disconnect
========================================

Assuming we have "live references" at all, we are left with some
decisions on what to do about connections. Maybe this is more of a
VatTP thing, I'm unsure, but there seem to be CapTP considerations.

Let's contrast two approaches:

- Live connections which break on disconnect: this was the E approach,
and you use a sturdyref (though maybe we could use a certificate
chain or whatever) to start the connection between vats/machines,
from which you bootstrap your access to live references. On
connection severance (for whatever reason), all live references
break and throw relevant errors.

This is an extremely sensible choice for a distributed video game
like Electric Communities, and seems just as sensible for my own
use case. If I disconnect from a real-time game, I want my
interface to reflect that.

- Store and forward networks where undelivered messages are always
"waiting in transit". This is really nice for peer to peer systems
where users may go offline a lot, and thus it's an appealing
direction to me for social networks. It could even be very appealing
for turn-based games. This is the direction Agoric is going, but
their motivation appears to be primarily oriented around "how do we
collaborate with blockchains".

I feel like there are strong desiderata for both of these cases. I
will probably start with the former, but I'd like to support the latter.

Is supporting both really feasible?


Procedures vs objects-with-methods as first class?
==================================================

Guy L. Steele nicely broke down how both objects are a poor man's
procedures, and procedures are a poor man's objects.

https://people.csail.mit.edu/gregs/ll1-discuss-archive-html/msg03277.html

This is a bit vague, because what "object" means is a bit vague:

http://www.mumble.net/~jar/articles/oo.html

So let's clarify that we're talking about "objects with methods" vs
"procedures", which as many know once you can have one, you can build
the other out of it:

- E chose to make objects-with-methods first-and-foremost: a procedure
is just a special kind of object that merely has a .run() method.
- Goblins goes down the classic Scheme route:
https://dl.acm.org/doi/10.1145/62678.62720
Procedures are first class, and some (but not all) procedures take a
first argument, which is used for method dispatch.

If we want to make Goblins and Agoric's SwingSet inter-compatible, how
do we do it? Which one "wins"?

Looking at the TC39 Eventual Send proposal suggests another path:

https://github.com/tc39/proposal-eventual-send

I personally find EventualGet to be highly disturbing, so let's ignore
it. Use getters, not attributes! :)

So EventualGet being ignored, this proposal gives EventualApply and
EventualSend. These correspond to procedure invocation and method invocation,
respectively. (Note that I don't like calling method invocation "send";
for me, "send" is something we support in Goblins and really means
sending an asynchronous message as opposed to call-and-return
invocation. That seems more correct anyway, and "send" thus resembles
dropping something off at the post office. I have never liked "send" as
a way to refer to "invoke a method" for this reason.)

Separating these out into two different ways of *calling things* (as
opposed to two different ways of *constructing things*) is
interesting, and actually seems justifiable to me. An object can itself
provide both "a way to be invoked as a procedure" and "a way to be
invoked with methods". I could support this (and it may even simplify
some "meta methods" headaches I was considering surrounding supporting
the interfaces grant we're working on... more on that later).

It also removes the need to "make a decision" of whether or not to make
procedures and objects-with-methods two separate things or to build it
out of one thing's abstraction. As long as we support both ways of
invoking/sending, we are good.
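
A toy Python sketch of "one object, two ways of calling it"; apply_call
and send_method are invented stand-ins for EventualApply/EventualSend-style
plumbing, not the proposal's actual API:

class Greeter:
    def __call__(self, name):              # procedure-style invocation
        return "hello, " + name

    def shout(self, name):                 # method-style invocation
        return "HELLO, " + name.upper() + "!"

def apply_call(obj, args):
    return obj(*args)                      # EventualApply-ish

def send_method(obj, verb, args):
    return getattr(obj, verb)(*args)       # EventualSend-ish

g = Greeter()
assert apply_call(g, ("alice",)) == "hello, alice"
assert send_method(g, "shout", ("alice",)) == "HELLO, ALICE!"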

(But seriously, fire EventualGet into the sun.)


Message ordering
================

I don't understand E-Order and it kind of intimidates me. I'm just
being honest. It would be great to rectify this.

I think as a first step I'm just going to do a roughly-FIFO type thing
that doesn't do too much in terms of message ordering across vat
boundaries. I know there are reasons expressed by MarkM and especially
apparently Dean, but I'm actually unsure: is the complexity worth it?
And how hard is it to do, really?

Maybe in the future I'll Get It (TM) but I'll probably start out without it.


BONUS: Distributed cyclic GC
============================

Obviously I am not going to do this anytime soon but it breaks my brain
that MarkM told me something like "Original E had distributed cyclic GC
support". Apparently it is complicated, involves some hooks into the
local garbage collector, and is rarely needed and tricky enough that
Mark said he doesn't bother to ask for it anymore, but I'm just gonna
say that it both blows my mind and completely befuddles me that some
version of E ever had something like this.

What huh what huh what? And was it ever documented how it works so that
sages in the future can help us sweep out our networks from unneeded
junk? Or is it an idea lost to the sands of time?

I find it personally fairly mystifying.


BONUS: What happens when multiple vats/machines resolve the same resolver?
==========================================================================

This is kind of a side note so I left it for the end.

It strikes me that if promise-resolver pairs are first class, one could
do some really goofy things and hand the same resolver to two different
vats to resolve... first one wins. If Vat C thinks it just successfully
resolved the resolver, it can immediately move forward with pipelined
messages waiting on the result, while Vat D thinks the same, and moves
forward with messages waiting on its own conflicting result. Both of
them may think they can move forward with plans when it actually isn't
safe.

The right answer seems to be, "promise pipelining is something only
set up at the CapTP layer and isn't exposed as something first class
so users shouldn't be able to create messed up situations like this
because we never gave them first-class ability to do it... and there's
only so much you can trust stuff moving across the network layer
in opaque actor type systems anyway."

Which seems true enough for our live-actor'y things. I bet you could
create some sneaky vulnerabilities in a cross-blockchain system that
wasn't anticipating this, took advantage of promise pipelining, and
wasn't engineered to handle this scenario. Dunno.


What's next, assuming I keep sending messages about these
=========================================================

Hi, do you hate this thread yet?

I've left some things I'm uncertain about above. Thoughts welcome.
I'll also share more as I implement, assuming people are open to me
continuing to do so on this list.

I think in an upcoming email I may try to break down some of the common
message types sent in CapTP... that might help me think about what *I*
should implement, too.

But I guess we'll see... what do you think? Is this, and subsequent
similar, message(s) worthwhile/welcome?

- Chris

Christopher Lemmer Webber

May 18, 2020, 6:21:54 PM5/18/20
to cap-...@googlegroups.com
Christopher Lemmer Webber writes:

> (Side note, when I hear "import" and "export" and then look at the
> arrows, I get confused, because I think of "import", arrow-wise, "being
> shipped to" the side that is importing, such as packages "ship to" an
> importer in a trading situation and "ship from" an exporter. The arrows
> are being tricky though, because we're importing a reference "to" an
> object that never leaves its location.)

Just clarifying why this is confusing, based on a conversation I just
had on IRC. If I think, "Vat A is giving Vat B a reference to Alice, so
it's exporting it", that makes sense.

But the usual ocap diagrams look like this:

(Alice) ---> (Bob)

And suddenly when we diagram the tables it looks like

(Alice) --[imports]--[exports]--> (Bob)

... and that looks super confusing. The arrow is pointing the wrong
way! (But I know why.)

Rob Markovic

May 18, 2020, 6:41:34 PM5/18/20
to cap-...@googlegroups.com

I put beer goggles on and ask, what's TP stand for?

++ Rob



Baldur Jóhannsson

May 18, 2020, 7:02:34 PM5/18/20
to cap-talk
True Paper! No, seriously, it stands for Transport Protocol.

Suspect it was inspired by the Hyper-Text Transport Protocol.

And it had nothing to do with the other kind of TP that stores ran into a shortage of.

-Baldur

Kevin Reid

May 18, 2020, 7:15:09 PM5/18/20
to cap-...@googlegroups.com
On Mon, May 18, 2020 at 2:25 PM Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
What happens when multiple vats/machines resolve the same resolver?

Within CapTP, a reference to a resolver is, I believe, never handed off to another machine, so this case can't arise, but I don't remember exactly how the 3-party-introduction operates.

However, CapTP in E is built on top of a local promise system that does expose resolvers for general use, and the question does arise there (not specific to multiple machines). In that case, all implementations have been "the first to resolve wins", since it would be Bad if a resolved promise changed targets. However, there is one other question: what does the loser of the race see?

1. E chose to throw an exception in response to the second resolve() message to indicate that there was a probable bug.

2. Waterken chose to silently do nothing, on the grounds that that way a resolver is a purely write-only communication channel — the holder of a resolver cannot learn whether someone else already used it.
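
A toy sketch of those two loser-of-the-race policies (Python, invented
names; not E's or Waterken's actual API):

class Resolver:
    def __init__(self, on_second_resolve="throw"):   # "throw" ~ E, "ignore" ~ Waterken
        self.value = None
        self.resolved = False
        self.policy = on_second_resolve

    def resolve(self, value):
        if self.resolved:
            if self.policy == "throw":
                raise RuntimeError("promise already resolved (probable bug)")
            return                 # write-only channel: the loser learns nothing
        self.resolved = True
        self.value = value

r = Resolver(on_second_resolve="ignore")
r.resolve("cake")
r.resolve("pie")                   # the losing resolution is silently dropped
assert r.value == "cake"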

Mark S. Miller

May 18, 2020, 7:26:42 PM5/18/20
to cap-...@googlegroups.com
Nice message -- please keep them coming!

FWIW, resending with the ascii art in, hopefully for each of you reading, a fixed width font. (I'm doing this in gmail)


Mark S. Miller

May 18, 2020, 7:50:59 PM5/18/20
to cap-...@googlegroups.com
OMG, I was writing a long explanation about why we still need a Vine, and found that we do not. We did need a Vine when 3-party introductions were based on swiss numbers. Once we introduced the handoff tables, we no longer do. VatC will retain Carol in VatC's VatA-gifts-for-VatB handoff table until VatB looks it up, placing Carol in VatC's C-to-B exports table.

Somehow, though I introduced the handoff tables over 10 years ago, I never noticed that they make the Vine irrelevant. Thanks!

Mark S. Miller

May 18, 2020, 8:06:06 PM5/18/20
to cap-...@googlegroups.com
On Mon, May 18, 2020 at 4:26 PM Mark S. Miller <ma...@agoric.com> wrote:

If, before singularity, this gets implemented again for modern distributed ocap protocols (captp, cap'n proto, goblins, ...), I will be pleasantly surprised.
 
Feel free to take that as a challenge. But not first!

William ML Leslie

May 18, 2020, 8:07:24 PM5/18/20
to cap-talk
On Tue, 19 May 2020 at 07:25, Christopher Lemmer Webber
<cwe...@dustycloud.org> wrote:
> - Also usually there are two other tables in addition to
> imports/exports: questions/answers. These correspond to "future
> resolutions to promises" (as well as some "promise pipelining" stuff
> but we'll talk about that later). A way to think of this is: if Bob
> sends a message to Carol using the <- operator in Goblins, Bob should
> get back a response that will eventually be resolved with Carol's
> response... but when we cross the network divide and Machine2+VatC
> gets that message saying "hey, call Carol... and when you're done,
> fulfill this thing", we need some way to refer to that.
>
> *SCREECH!* Let's slam the brakes on that last statement for a second.
> Because there could be another solution (ignoring promise pipelining for
> a second) that gets rid of questions/answers: Machine1+VatB could set up
> an export for Machine2+VatC that refers to the promise-resolver that
> will fulfill Bob's promise. We could just say in the message to Carol:
> "and once you have an answer, fulfill the promise with
> <this-resolver-i-just-exported-for-you>"!
>
> Well that seems to solve that just fine and dandy so why bloat our
> protocol with these extra two question/answer tables? Shouldn't
> import/export be fine?
>

You've successfully figured out that "who allocates the index" is one
important distinction between the tables, the other is "which way the
references point". The import table maps far refs to their id, wheras
the export table maps ids to the objects we are exporting.

Keep in mind, too, that the caller may not actually want to receive
the result, they may want to simply pass it elsewhere or discard it.

--
William Leslie

Notice:
Likely much of this email is, by the nature of copyright, covered
under copyright law. You absolutely MAY reproduce any part of it in
accordance with the copyright law of the nation you are reading this
in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without
prior contractual agreement.

Ian Denhardt

May 18, 2020, 8:10:56 PM5/18/20
to Christopher Lemmer Webber, cap-...@googlegroups.com
Quoting Christopher Lemmer Webber (2020-05-18 17:25:01)
> EDIT BEFORE SENDING: this message has gotten huge and I apologize
> in advance unless you think it's great in which case you're welcome.

Hah, I enjoyed it anyway. I'll try to weigh in on a few things and share
what experience I have to offer from writing an FP implementation of
Cap'n Proto (specifically the Haskell implementation), and from using a
handful of different libraries in different languages for the same
protocol.

> So does that mean we need questions/answers too?
> ================================================
>
> So now that brings us back to: do we need questions/answers in addition
> to these imports/exports? And... actually I'm not so sure.
>
> It strikes me that, when exporting, Machine1+VatB could say "I'm
> allocating this object reference for you, and it's a resolver type".
> Then when importing, Machine2+VatC could make note of that, and
> when it resolves it, immediately make note in its imports table.
> Thus when Carol has that answer ready for Bob, Vat C can make note
> of that and still use the reference to Carol.
>
> I still don't see the need for separate questions/answers tables.
> Maybe I'm missing something. Maybe it's obvious in practice.

I do think it's possible to get rid of Q&A tables in a way that
simplifies things. But first to review: with a call like:

x.foo().bar().baz(),

the caller needs to be able to:

1. Address the message foo() to x.
2. Address the message bar() to the result of (1).
3. Address the message baz() to the result of (2).

This means the sender needs to choose the addresses for the intermediate
results, and this is one thing the separate tables get you:

+--------------------------+-----------------+-------------------+
|                          | Lives in Sender | Lives in Receiver |
+--------------------------+-----------------+-------------------+
| ID is Sender Allocated   | Exports         | Questions         |
| ID is Receiver Allocated | Answers         | Imports           |
+--------------------------+-----------------+-------------------+

Passing in a resolver doesn't solve the problem on its own because you
have no way of addressing whatever the receiver eventually feeds to the
resolver. If you drop questions & answers, you somehow need to figure
out how to pick addresses for intermediate results.

That said, I *do* think there's an opportunity for a simplification that
gets rid of the Q&A tables and also some message types. I'll use
capnproto terminology to make things concrete:

- Make object IDs a pair of (who allocated, numeric ID), instead of
just a numeric ID.
- Replace the `Call` message's `questionId` field with a fresh object
id N, which will be inserted into the receiver's exports table as
(remote allocated, N), and marked as a promise that will be resolved
later (the protocol already has the latter concept).
- When the call returns, instead of sending a `Return` message, send
a `Resolve` message, which uses the usual promise resolution logic
to provide the result. We can drop the `Return` message type from the
protocol entirely, as it is unused.
- Drop the `Finish` message type too; you can just use a `Release`
message for the same purpose, now that the result is just another
capability.

This also has the neat effect that on top of having simplified the
protocol, it also becomes easy to add fire-and-forget calls; just omit
the promise.
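
Here's a tiny Python sketch of that shape: ids as (who-allocated, number)
pairs in a single table, with the call result landing in a sender-chosen
slot via a Resolve-style step. Everything here is invented for
illustration, not capnproto's actual types:

from itertools import count

class Table:
    """One side's object table, keyed by (who-allocated, number) pairs."""
    def __init__(self, name):
        self.name = name
        self.entries = {}
        self._nums = count(1)

    def fresh_id(self):
        return (self.name, next(self._nums))

class Adder:
    def add(self, a, b):
        return a + b

vat_b = Table("B")                     # the caller
vat_c = Table("C")                     # holds the target object
vat_c.entries[("C", 0)] = Adder()

# "Call": B picks the id under which the result will live in C's table.
result_id = vat_b.fresh_id()           # ("B", 1)
vat_c.entries[result_id] = "pending"   # a promise slot, to be resolved later

# "Resolve": C fills in that slot rather than sending a separate Return.
target = vat_c.entries[("C", 0)]
vat_c.entries[result_id] = target.add(2, 3)

assert vat_c.entries[("B", 1)] == 5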

> Store and forward vs break-on-disconnect
> ========================================
>
> [...]
>
> Is supporting both really feasible?

My first instinct here is to do this at a lower level, i.e. use the
disconnect error approach but provide a "persistent transport" that
hides transient disconnects. I haven't thought this through deeply,
though.

> If we want to make Goblins and Agoric's SwingSet inter-compatible, how
> do we do it? Which one "wins"?

I think this actually doesn't need to be resolved at the protocol level;
you can map it to different languages in different ways, as is natural.
At the protocol level there's a `Call` message that has a payload, and
there's a return value that is expected. In capnproto the call message
has an interface id and a method id, but rather than at the language
level using those to work out what method to call, you could just pass
them to the (single) function, and have it switch on those itself. In
the Haskell implementation I ended up mapping things to type classes
because it made diamond dependencies on interfaces easier to deal with,
but another design I considered was to just have actors receive messages
of a single type, which is a discriminated union. i.e. for an interface
like:

interface SomeInterface {
  foo @0 A -> B;
  bar @1 C -> D;
}

You'd have a message type like:

data SomeInterface_Msg
  = Foo A (Fulfiller B)
  | Bar C (Fulfiller D)


Even with the protocol in its current state, if I were to do it all
over again I'd be tempted to go that route; in hindsight I think it
would be nicer in some ways, even with the diamond dependency issues.

> Message ordering
> ================
>
> I don't understand E-Order and it kind of intimidates me. I'm just
> being honest. It would be great to rectify this.

E-order is roughly just that the messages sent via a particular
capability should be delivered in FIFO order. The disembargo stuff that shows
up at the protocol level is an implementation detail to make sure this
abstraction doesn't leak even when cross-vat promises are resolved.

> I think as a first step I'm just going to do a roughly-FIFO type thing
> that doesn't do too much in terms of message ordering across vat
> boundaries. I know there are reasons expressed by MarkM and especially
> apparently Dean, but I'm actually unsure: is the complexity worth it?
> And how hard is it to do, really?

It's not *that* hard to implement, at least in the two-party version,
but it's also possible to design the protocol (and capnproto is designed
this way) such that an implementation can get around this by just
responding "not implemented" to Resolve messages and keep sending calls
to the promise, not the resolved object. This prevents path shortening
from working, but it's a good stepping stone to get the basic
abstraction working without extra effort. The Go implementation does
this, and the first versions of the Haskell implementation did the same
thing. You can absolutely add this later, though it might make sense to
sanity check with folks that the particulars of your design facilitate
doing that.

Hope this is useful,

-Ian

Mark S. Miller

May 18, 2020, 8:27:46 PM5/18/20
to cap-...@googlegroups.com
On Mon, May 18, 2020 at 4:26 PM Mark S. Miller <ma...@agoric.com> wrote:
The answer doesn't start with promise pipelining. Consider

let resolve;

const bob = harden({
    foo() {
        return new Promise(r => { resolve = r; });
    }
});

Alice says:

const p = bobP~.foo();

or without the sugar

const p = E(bobP).foo();

For those more familiar with E, the equivalent might be, depending on my rusty memory:

var resolve := null

def bob {
    to foo() {
        def [p, r] := Ref.promise()
        resolve := r
        return p
    }
}

Alice says:

def p = bobP <- foo();

Let's start considering the situation where everything is in one vat. Once Alice sends off the foo() message, where is the authority to resolve p?

It is no longer with Alice. Even though Alice created this promise, she immediately gave away her authority to resolve it.

First, it is in the air, traveling with the foo() message.

Then when the foo() message arrives at Bob, Bob gets the authority to resolve it. But not even Bob gets the resolver! Rather, that promise will be resolved by whatever Bob returns. By returning a fresh promise and holding on to the resolver, Bob has obtained that authority. At that point, exclusively! To emphasize this exclusive nature, at Agoric we have taken to saying that Bob is now "the decider". Notice "the".

Now let's revisit with Alice in VatA and Bob in VatB. To Alice, the local proxy for bob is the decider. To VatA, VatB is the decider. To VatB, Bob is the decider.

When Alice then does one of

p~.bar()
E(p).bar()  // no sugar
p <- bar()  // E

where should the message go? After all, we don't know where the object is that p will designate. It might even be on VatA. However, the unambiguously best answer is "send it to the decider". The steps by which it will come to be determined where that object is starts with the decision made by the decider. Therefore, if bar() needs to be routed elsewhere, that will be known at the decider before it is known anywhere else.
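
A toy sketch of "send it to the decider" in Python (invented names,
nothing like E's or Agoric's real machinery): messages that arrive
before resolution are buffered at the decider and forwarded once the
decision is made:

class PendingPromise:
    """Messages sent on an unresolved promise queue up at the decider."""
    def __init__(self):
        self.resolution = None
        self.buffered = []

    def deliver(self, verb, args):
        if self.resolution is None:
            self.buffered.append((verb, args))      # hold it until we decide
        else:
            getattr(self.resolution, verb)(*args)

    def resolve(self, value):
        self.resolution = value
        for verb, args in self.buffered:            # forward everything held
            getattr(value, verb)(*args)
        self.buffered.clear()

class Car:
    def __init__(self):
        self.log = []
    def honk(self):
        self.log.append("beep")

p = PendingPromise()
p.deliver("honk", ())      # a p~.bar()-style message arrives before resolution
car = Car()
p.resolve(car)             # the decider decides; the queued message flows on
assert car.log == ["beep"]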
 


What's next, assuming I keep sending messages about these
=========================================================

Hi, do you hate this thread yet?

NO!
 

I've left some things I'm uncertain about above.  Thoughts welcome.
I'll also share more as I implement, assuming people are open to me
continuing to do so on this list.

I think in an upcoming email I may try to break down some of the common
message types sent in CapTP... that might help me think about what *I*
should implement, too.

But I guess we'll see... what do you think?  Is this, and subsequent
similar, message(s) worthwhile/welcome?

YES!

Mark S. Miller

May 18, 2020, 9:32:39 PM5/18/20
to cap-talk
Don't forget ftp. Actually, feel free to forget it ;)




--
  Cheers,
  --MarkM

Mark S. Miller

May 18, 2020, 9:37:43 PM5/18/20
to cap-talk
On Mon, May 18, 2020 at 4:15 PM Kevin Reid <kpr...@switchb.org> wrote:
On Mon, May 18, 2020 at 2:25 PM Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
What happens when multiple vats/machines resolve the same resolver?

Within CapTP, a reference to a resolver is, I believe, never handed off to another machine, so this case can't arise, but I don't remember exactly how the 3-party-introduction operates.

However, CapTP in E is built on top of a local promise system that does expose resolvers for general use, and the question does arise there (not specific to multiple machines). In that case, all implementations have been "the first to resolve wins", since it would be Bad if a resolved promise changed targets. However, there is one other question: what does the loser of the race see?

In E, the resolver itself is just a normal pass-by-proxy object. The comm system makes no special case for "sending" a resolver. You just get remote proxies to the stationary resolver. Whatever vat holds the actual resolver is the decider.
 

1. E chose to throw an exception in response to the second resolve() message to indicate that there was a probable bug.

2. Waterken chose to silently do nothing, on the grounds that that way a resolver is a purely write-only communication channel — the holder of a resolver cannot learn whether someone else already used it.

JavaScript chose the Waterken semantics. I don't remember anymore if I was for or against that on tc39 at the time. Agoric of course adopts the JS semantics.

 


Ian Denhardt

May 18, 2020, 10:06:24 PM5/18/20
to Mark S. Miller, cap-talk
Quoting Mark S. Miller (2020-05-18 21:37:31)

> However, CapTP in E is built on top of a local promise system that does
> expose resolvers for general use, and the question does arise there
> (not specific to multiple machines). In that case, all implementations
> have been "the first to resolve wins", since it would be Bad if a
> resolved promise changed targets. However, there is one other question:
> what does the loser of the race see?

It's worth observing that depending on the system there's another option
here: If you're in a language with ownership types, you can just prevent
the aliasing in the first place, statically. Obviously not always
applicable, but interesting.

-Ian

William ML Leslie

May 18, 2020, 10:19:37 PM5/18/20
to cap-talk
It's more a modality problem - a promise can be resolved at most once,
even if only one scope has access to it.

Ian Denhardt

May 18, 2020, 11:13:58 PM5/18/20
to William ML Leslie, cap-talk
Quoting William ML Leslie (2020-05-18 22:19:24)

> > It's worth observing that depending on the system there's another option
> > here: If you're in a language with ownership types, you can just prevent
> > the aliasing in the first place, statically. Obviously not always
> > applicable, but interesting.
> >
>
> It's more a modality problem - a promise can be resolved at most once,
> even if only one scope has access to it.

Yes, by ownership types I was thinking of substructural type systems,
like Rust's "affine" types (use at most once). Sorry if the loose
terminology was confusing.

-Ian

Matt Rice

May 19, 2020, 4:42:16 AM5/19/20
to cap-...@googlegroups.com
On Mon, May 18, 2020 at 9:25 PM Christopher Lemmer Webber
<cwe...@dustycloud.org> wrote:

> There's no need to layer a complex (and more insecure) security
> architecture on top of our programming languages because the way
> programmers already program, if we take it seriously, already *is* our
> programming model:

This seems like you wanted to say "already *is* our security model",
otherwise it seems somewhat tautological.
https://xkcd.com/703/
I also think "on top of our programming languages", should really be
"on top of memory safe programming languages".

There seems to also be a distinction that could be made (having only
read this far as of yet) about memory-unsafe programming languages
(I don't know what to call them... laissez-faire?):
taking one of those and *adding* a security model -- to make up
for its insecurity -- has so far been unworkable,
while taking a memory-safe programming language and *keeping it*
secure by *not adding* insecurity is a workable and realistically
achievable approach.

shrug

William ML Leslie

May 19, 2020, 5:15:26 AM5/19/20
to cap-talk
On Tue, 19 May 2020 at 18:42, Matt Rice <rat...@gmail.com> wrote:
>
> On Mon, May 18, 2020 at 9:25 PM Christopher Lemmer Webber
> <cwe...@dustycloud.org> wrote:
>
> > There's no need to layer a complex (and more insecure) security
> > architecture on top of our programming languages because the way
> > programmers already program, if we take it seriously, already *is* our
> > programming model:
>
> This seems like you wanted to say "already *is* our security model",
> otherwise it seems somewhat tautological.
> https://xkcd.com/703/
> I also think "on top of our programming languages", should really be
> "on top of memory safe programming languages".
>

I understood "on top of our programming languages" to be an indictment of this:

https://docs.oracle.com/javase/7/docs/api/java/lang/SecurityManager.html

the Java SecurityManager has been a massive footgun, and attempts at
using it rather than paring everything down to capability semantics are
the main reason that having a Java client installed (or worse, having
the web plugin installed) is considered a security risk. People like
to quibble about versions, but putting band-aids on a sieve won't
quickly make it waterproof.

Neil Madden

May 19, 2020, 5:44:19 AM5/19/20
to cap-...@googlegroups.com


On 19 May 2020, at 10:15, William ML Leslie <william.l...@gmail.com> wrote:

On Tue, 19 May 2020 at 18:42, Matt Rice <rat...@gmail.com> wrote:

On Mon, May 18, 2020 at 9:25 PM Christopher Lemmer Webber
<cwe...@dustycloud.org> wrote:

There's no need to layer a complex (and more insecure) security
architecture on top of our programming languages because the way
programmers already program, if we take it seriously, already *is* our
programming model:

This seems like you wanted to say "already *is* our security model",
otherwise it seems somewhat tautological.
https://xkcd.com/703/
I also think "on top of our programming languages", should really be
"on top of memory safe programming languages".


I understood "on top of our programming languages" to be an indictment of this:

https://docs.oracle.com/javase/7/docs/api/java/lang/SecurityManager.html

the Java SecurityManager has been a massive footgun and attempts at
using it rather than paring everything down to capability semantics is
the main reason that having a java client installed (or worse, having
the web plugin installed) is considered a security risk.  People like
to quibble about versions, but putting band-aids on a sieve won't
quickly make it waterproof.

On that note, I found this interesting wording in the Java 11 Secure Coding Guidelines (https://www.oracle.com/java/technologies/javase/seccodeguide.html):

Guideline 0-5 / FUNDAMENTALS-5: Minimise the number of permission checks

Java is primarily an object-capability language. SecurityManager checks should be considered a last resort. Perform security checks at a few defined points and return an object (a capability) that client code retains so that no further permission checks are required.

Section 9 elaborates:

Although Java is largely an object-capability language, a stack-based access control mechanism is used to securely provide more conventional APIs.

Well, it made me chuckle anyway…

— Neil

Matt Rice

May 19, 2020, 5:55:03 AM5/19/20
to cap-...@googlegroups.com
On Tue, May 19, 2020 at 9:15 AM William ML Leslie
<william.l...@gmail.com> wrote:
>
> On Tue, 19 May 2020 at 18:42, Matt Rice <rat...@gmail.com> wrote:
> >
> > On Mon, May 18, 2020 at 9:25 PM Christopher Lemmer Webber
> > <cwe...@dustycloud.org> wrote:
> >
> > > There's no need to layer a complex (and more insecure) security
> > > architecture on top of our programming languages because the way
> > > programmers already program, if we take it seriously, already *is* our
> > > programming model:
> >
> > This seems like you wanted to say "already *is* our security model",
> > otherwise it seems somewhat tautological.
> > https://xkcd.com/703/
> > I also think "on top of our programming languages", should really be
> > "on top of memory safe programming languages".
> >
>
> I understood "on top of our programming languages" to be an indictment of this:
>
> https://docs.oracle.com/javase/7/docs/api/java/lang/SecurityManager.html
>
> the Java SecurityManager has been a massive footgun and attempts at
> using it rather than paring everything down to capability semantics is
> the main reason that having a java client installed (or worse, having
> the web plugin installed) is considered a security risk. People like
> to quibble about versions, but putting band-aids on a sieve won't
> quickly make it waterproof.

Trying to understand where I disconnect,
I guess any and all languages are sufficient to derive the logic behind a policy,
even those which lack the inherent ability to enforce it. I was
basically nitpicking
that removing the security manager from something like Java gets you no closer
to actually enforcing policy derived within the language.
Anyhow the statement is a bit strong for my taste, given the subtlety
of prescribing vs passive enforcement of policy.

William ML Leslie

May 19, 2020, 6:35:50 AM5/19/20
to cap-talk
On Tue, 19 May 2020 at 19:55, Matt Rice <rat...@gmail.com> wrote:
>
> On Tue, May 19, 2020 at 9:15 AM William ML Leslie
> <william.l...@gmail.com> wrote:
>
> Trying to understand where I disconnect,
> I guess any all languages are sufficient to derive the logic behind a policy,
> even those which lack the inherent ability to enforce it. I was
> basically nitpicking
> that removing the security manager from something like java gets you no closer
> to actually enforcing policy derived within the language.
> Anyhow the statement is a bit strong for my taste, given the subtlety
> of prescribing vs passive enforcement of policy.
>

Would love to know more of what you're thinking - I read the caveat
that static analysis, type systems, and protected memory are still
valuable security tools even in a capability-safe environment, but
I've been known to misread things.

William ML Leslie

May 19, 2020, 6:42:49 AM5/19/20
to cap-talk
On Tue, 19 May 2020 at 19:44, Neil Madden <neil....@forgerock.com> wrote:
> On that note, I found this interesting wording in the Java 11 Secure Coding Guidelines (https://www.oracle.com/java/technologies/javase/seccodeguide.html):
>
> Guideline 0-5 / FUNDAMENTALS-5: Minimise the number of permission checks
>
> Java is primarily an object-capability language. SecurityManager checks should be considered a last resort. Perform security checks at a few defined points and return an object (a capability) that client code retains so that no further permission checks are required.
>
> Section 9 elaborates:
>
> Although Java is largely an object-capability language, a stack-based access control mechanism is used to securely provide more conventional APIs.
>
> Well, it made me chuckle anyway…
>
> — Neil
>

Wow!

I am still horrified by this:

https://docs.oracle.com/javase/tutorial/networking/urls/connecting.html

I guess it's "primarily an object-capability language with a primarily
ambient standard library" ?

Christopher Lemmer Webber

May 19, 2020, 6:52:06 AM5/19/20
to cap-...@googlegroups.com
Ahaha, and here I thought maybe I'd send this at the end of the work day
yesterday and nobody would actually read it... woke up to a full inbox.

Kevin Reid writes:

> On Mon, May 18, 2020 at 2:25 PM Christopher Lemmer Webber <
> cwe...@dustycloud.org> wrote:
>
>> What happens when multiple vats/machines resolve the same resolver?
>>
>
> Within CapTP, a reference to a resolver is, I believe, never handed off to
> another machine, so this case can't arise, but I don't remember exactly how
> the 3-party-introduction operates.
>
> However, CapTP in E is built on top of a local promise system that does
> expose resolvers for general use, and the question does arise there (not
> specific to multiple machines). In that case, all implementations have been
> "the first to resolve wins", since it would be Bad if a resolved promise
> changed targets. However, there is one other question: what does the loser
> of the race see?
>
> 1. E chose to throw an exception in response to the second resolve()
> message to indicate that there was a probable bug.
>
> 2. Waterken chose to silently do nothing, on the grounds that that way a
> resolver is a *purely write-only communication channel* — the holder of a
> resolver cannot learn whether someone else already used it.

"What does the loser of the race see" is a good way to put it. Before I
reply to that, let's move forward with, "what does the loser of the race
see when they're confident they already won the promise pipelining race,
and already went off to go eat some winners-victory cake, only to find
out after eating an entire slice that someone else won?"
"The decider" is a good way of putting things, except it gives me
flashbacks to GWB:

https://youtu.be/irMeHmlxE9s?t=44

Of course, "the" decider makes it sound like there can only be one. But
I like Kevin's "what does the loser of the race see", which fits a context
where we may have set up multiple potential deciders.

> Now let's revisit with Alice in VatA and Bob in VatB. To Alice, the local
> proxy for bob is the decider. To VatA, VatB is the decider. To VatB, Bob is
> the decider.
>
> When Alice then does one of
>
> p~.bar()
> E(p).bar() // no sugar
> p <- bar() // E
>
> where should the message go? After all, we don't know where the object is
> that p will designate. It might even be on VatA. However, the unambiguously
> best answer is "send it to the decider". The steps by which it will come to
> be determined where that object is starts with the decision made by the
> decider. Therefore, if bar() needs to be routed elsewhere, that will be
> known at the decider before it is known anywhere else.

I'm actually not too concerned from the perspective of Alice at the
moment. Let's say that, in theory, something goofy happened... Vat A
actually sent messages to two different vats. Bob on Vat B and Carol on
Vat C are now both "the decider" and are ready to do something under the
assumption that they're the ones who resolve that promise.

- Bob's promise pipeline has Bob resolve p with cake, and then proceeds
with the next instruction in the pipeline, which is for Bob to eat
the cake.
- Carol's promise pipeline has Carol resolve p with pie, and then
proceeds with the next instruction in the pipeline, which is for
Carol to throw the pie.

We can think of this as a one-phase commit, where Bob and Carol both
immediately assumed that they were the "decider" and could move forward
to the next step of the system.

In an opaque and "live actor'y" mutually suspicious system, I suppose
that Bob and Carol don't know and don't care what promise they're
resolving anyway. The thought is really "I've been given some numerical
index which, once I decide what it is, I'm told to use my decision in
this next step too. What's it resolving? Who cares. I'm just
following these steps... if Alice screwed up the pipeline by declaring
Too Many Cooks*, that's her problem."

But thinking about resolving a promise as a one- or two-phase commit,
with promise pipelining operating on the assumption that "I'm the
decider" when maybe there are multiple deciders, sounds like a
potential race condition in the case that you *do* care what the
promise is.

When would that be a problem? And why did I say "blockchains"?

Well, I have *no idea* how the Agoric blockchain-machine mappings work.
My guess is: they probably don't have this problem. But if the mapping
did somehow operate off of the assumption that "I'm the decider" for
sure, and Bob eats his cake when actually Carol "won", will this be
caught? Is there a problem with "moving forward under those
assumptions"?

And I guess it depends on how the blockchain stuff maps. It probably
isn't a problem. But it's what I was thinking about, out loud. "Too
many cooks/deciders should fail safe".
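
What I have in mind by "fail safe" is roughly a guard like this on the
answering side (a sketch with invented names; I have no idea whether
Agoric's mapping does anything resembling it):

    class AnswerSlot:
        """A toy answer slot that records the one party allowed to decide it."""

        def __init__(self, decider_vat_id):
            self.decider_vat_id = decider_vat_id
            self.decided = False
            self.resolution = None

        def resolve(self, from_vat_id, value):
            # Fail safe: only the single registered decider may resolve,
            # and only once; a second cook gets an error, not a cake.
            if from_vat_id != self.decider_vat_id:
                raise PermissionError("you are not the decider for this answer")
            if self.decided:
                raise RuntimeError("this answer was already decided")
            self.decided = True
            self.resolution = value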

I'm not sure it ended up being a useful observation, but it led to some
interesting conversation anyway. :)

- Chris

* Too Many Cooks not failing safe:**
https://www.youtube.com/watch?v=QrGrOK8oZG8

** "This is the worst case of intro-nitis I've ever seen" sounds like
it should fit in somewhere to some ocap conversation

Christopher Lemmer Webber

unread,
May 19, 2020, 6:55:19 AM5/19/20
to cap-...@googlegroups.com
Mark S. Miller writes:

>> BONUS: Distributed cyclic GC
>> ============================
>>
>> Obviously I am not going to do this anytime soon but it breaks my brain
>> that MarkM told me something like "Original E had distributed cyclic GC
>> support". Apparently it is complicated, involves some hooks into the
>> local garbage collector, and is rarely needed and tricky enough that
>> Mark said he doesn't bother to ask for it anymore, but I'm just gonna
>> say that it both blows my mind and completely befuddles me that some
>> version of E ever had something like this.
>>
>> What huh what huh what? And was it ever documented how it works so that
>> sages in the future can help us sweep out our networks from unneeded
>> junk? Or is it an idea lost to the sands of time?
>>
>
> http://erights.org/history/original-e/dgc/
>
> If, before singularity, this gets implemented again for modern distributed
> ocap protocols (captp, cap'n proto, goblins, ...), I will be pleasantly
> surprised.
>
> Feel free to take that as a challenge. But not first!

Hooray!

Yeah, it'll be a long time, if ever, until I have enough space clear
(and understanding of how GCs work) to tackle this... but I'm excited
that it's there to reference. Thank you!

Christopher Lemmer Webber

unread,
May 19, 2020, 6:57:47 AM5/19/20
to cap-...@googlegroups.com
Mark S. Miller writes:

>> 3-party introductions
>> =====================
>>
>> Once you've implemented all that, what happens if Alice wants to send a
>> message about Carol to Dave, but Dave is on Machine 3 in Vat D (not
>> pictured above). Well what the heck do we do now?
>>
>> There's a whole thing about handoff tables. I feel like this is a
>> complicated subject, and one I want to write an entirely separate email
>> about because it still hurts my brain a little. The MarkM talk I linked
>> to above explains it reasonably though. So ok, we'll just say we can do
>> that. (There also has been something called "vines" used historically
>> though I've never been completely clear on what a "vine" is... maybe it
>> doesn't matter anymore.)
>>
>
> OMG, I was writing a long explanation about why we still need a Vine, and
> found that we do not. We did need a Vine when 3-party introductions were
> based on swiss numbers. Once we introduced the handoff tables, we no longer
> do. VatC will retain Carol in VatC's VatA-gifts-for-VatB handoff table
> until VatB looks it up, placing Carol in VatC's C-to-B exports table.
>
> Somehow, though I introduced the handoff tables over 10 years ago, I never
> noticed that they make the Vine irrelevant. Thanks!

That's funny. I worked through my understanding of 3-party
introductions in my head without assuming vines were relevant, because
you didn't mention them in the talk of yours I listened to, so I
assumed you must have worked it out without vines being necessary.
Glad to contribute from that misunderstanding.
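
For my own notes, here's the shape of that dance as a toy gift table
(names invented, just to pin the mechanics down for myself):

    class GiftTable:
        """Toy handoff table living on VatC (the vat that hosts Carol)."""

        def __init__(self):
            # (donor_vat, recipient_vat, gift_id) -> the object being handed off
            self._gifts = {}

        def deposit(self, donor_vat, recipient_vat, gift_id, obj):
            # VatA tells VatC: "hold Carol as a gift for VatB under this id."
            self._gifts[(donor_vat, recipient_vat, gift_id)] = obj

        def withdraw(self, donor_vat, recipient_vat, gift_id):
            # VatB looks the gift up; it leaves the gift table, and from here
            # on it's VatC's C-to-B exports table that keeps Carol reachable.
            return self._gifts.pop((donor_vat, recipient_vat, gift_id))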

Didn't handoff tables also come about from you misunderstanding
something Alan Karp said? :)

Christopher Lemmer Webber

unread,
May 19, 2020, 7:00:32 AM5/19/20
to cap-...@googlegroups.com
Matt Rice writes:

> On Mon, May 18, 2020 at 9:25 PM Christopher Lemmer Webber
> <cwe...@dustycloud.org> wrote:
>
>> There's no need to layer a complex (and more insecure) security
>> architecture on top of our programming languages because the way
>> programmers already program, if we take it seriously, already *is* our
>> programming model:
>
> This seems like you wanted to say "already *is* our security model",
> otherwise it seems somewhat tautological.
> https://xkcd.com/703/
> I also think "on top of our programming languages", should really be
> "on top of memory safe programming languages".

Yes, this is true... but we can expand that a lot further, because there
are many other language features in many languages that make those
languages themselves ocap-unsafe (e.g. dynamic scope support in many
lisps, etc etc). So memory safety is one thing, but not the only one, if
we go down that path...
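
A tiny illustration of the dynamic-scope point, translated into Python
terms (hypothetical code, just to show the shape of the leak):

    # A global acting like a dynamically-scoped variable: ambient authority
    # that any code in the call graph can reach without being handed it.
    CURRENT_SECRET = None

    def with_secret(secret, thunk):
        global CURRENT_SECRET
        CURRENT_SECRET = secret      # visible to *everything* thunk calls
        try:
            return thunk()
        finally:
            CURRENT_SECRET = None

    def innocent_looking_helper():
        # Never passed the secret, but can read it anyway: not ocap-safe.
        return CURRENT_SECRET

    # Ocap style: the secret is reachable only by code it was passed to.
    def helper_with_explicit_grant(secret):
        return secret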

I guess what you're saying, though, is that the runtime protecting you
from accessing anything you weren't handed access to as an argument
*is* the underlying system providing memory safety, in which case,
true. Though there's something to brevity in increasing understanding,
where we can always follow up after the initial a-ha. :)

Christopher Lemmer Webber

unread,
May 19, 2020, 7:07:59 AM5/19/20
to cap-...@googlegroups.com
William ML Leslie writes:

> On Tue, 19 May 2020 at 18:42, Matt Rice <rat...@gmail.com> wrote:
>>
>> On Mon, May 18, 2020 at 9:25 PM Christopher Lemmer Webber
>> <cwe...@dustycloud.org> wrote:
>>
>> > There's no need to layer a complex (and more insecure) security
>> > architecture on top of our programming languages because the way
>> > programmers already program, if we take it seriously, already *is* our
>> > programming model:
>>
>> This seems like you wanted to say "already *is* our security model",
>> otherwise it seems somewhat tautological.
>> https://xkcd.com/703/
>> I also think "on top of our programming languages", should really be
>> "on top of memory safe programming languages".
>>
>
> I understood "on top of our programming languages" to be an indictment of this:
>
> https://docs.oracle.com/javase/7/docs/api/java/lang/SecurityManager.html
>
the Java SecurityManager has been a massive footgun, and attempts at
using it rather than paring everything down to capability semantics are
the main reason that having a Java client installed (or worse, having
the web plugin installed) is considered a security risk. People like
to quibble about versions, but putting band-aids on a sieve won't
quickly make it waterproof.

Yes, that was the intent.

Though the most clear example I've seen of this was this talk:

https://www.youtube.com/watch?v=FNxS1oe3Bzw
https://arxiv.org/abs/1807.09377

The "security mechanisms" added there leads to an explosion of code and
confusion, whereas starting with the assumption that "argument passing
and lexical closures were already enough" means you could have written
all of that and as "normal, straightforward code" and it would have had
the security you wanted in ocap-land.
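
For contrast, here's the kind of thing that falls straight out of
"argument passing and lexical closures were already enough" (a generic
sketch of my own, not the paper's actual example):

    def make_read_only_facet(store):
        # Attenuation: the closure is the only path back to the writable store.
        def read(key):
            return store[key]
        return read

    def make_revocable(target):
        # Revocation: calling revoke() severs access to the wrapped capability.
        cell = {"target": target}
        def proxy(*args, **kwargs):
            if cell["target"] is None:
                raise PermissionError("capability revoked")
            return cell["target"](*args, **kwargs)
        def revoke():
            cell["target"] = None
        return proxy, revoke

No framework, no policy language: just functions closing over what they
were given.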

BTW, I don't mean to hate on this research; it seems like it was
excellent work if you start from their assumptions, namely that you
*needed to* layer something on top. I asked the presenter in the Q&A
whether they had compared it to something like Rees' "security kernel
based on the lambda calculus" paper:

https://youtu.be/FNxS1oe3Bzw?t=1184

They hadn't heard of it. To me, the takeaway there is helping people
understand that we already have "Lambda: The Ultimate Security
Abstraction."

Christopher Lemmer Webber

unread,
May 19, 2020, 7:08:23 AM5/19/20
to cap-...@googlegroups.com
This is hilarious.

Christopher Lemmer Webber

unread,
May 19, 2020, 7:10:45 AM5/19/20
to cap-...@googlegroups.com
Note that I'm not asking whether import/export is redundant, but whether
import+answer and export+question are redundant, even if handled
abstractly through negative indices in the same table, as E appears to
do. :)
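
i.e. roughly this shape, where the sign of the index says which kind of
slot it is (toy numbering of my own, not E's or captp.js's actual wire
convention):

    class ConnectionTable:
        """Toy per-connection slot table: positive = exports, negative = questions."""

        def __init__(self):
            self.slots = {}
            self._next_export = 1
            self._next_question = -1

        def add_export(self, obj):
            index = self._next_export
            self._next_export += 1
            self.slots[index] = obj
            return index

        def add_question(self, promise):
            index = self._next_question
            self._next_question -= 1
            self.slots[index] = promise
            return index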

Matt Rice

unread,
May 19, 2020, 8:18:04 AM5/19/20
to cap-...@googlegroups.com
On Tue, May 19, 2020 at 10:35 AM William ML Leslie
<william.l...@gmail.com> wrote:
>
> On Tue, 19 May 2020 at 19:55, Matt Rice <rat...@gmail.com> wrote:
> >
> > On Tue, May 19, 2020 at 9:15 AM William ML Leslie
> > <william.l...@gmail.com> wrote:
> >
> > Trying to understand where I disconnect,
> > I guess any/all languages are sufficient to derive the logic behind a policy,
> > even those which lack the inherent ability to enforce it. I was
> > basically nitpicking
> > that removing the security manager from something like java gets you no closer
> > to actually enforcing policy derived within the language.
> > Anyhow the statement is a bit strong for my taste, given the subtlety
> > of prescribing vs passive enforcement of policy.
> >
>
> Would love to know more of what you're thinking - I read the caveat
> that static analysis, type systems, and protected memory are still
> valuable security tools even in a capability-safe environment, but
> I've been known to misread things.
>

While I'd certainly like to, probably for another thread -- it isn't
really what I was getting at.

I think Chris's "brevity in increasing understanding, where we can
always follow up after the
initial a-ha", covers my intent there well.

Essentially just "memory safe" as a stand in/catch all for a system in
which ambient authority has not yet been added,
Or it isn't enough to just remove the security manager, it is merely
one source of ambient authority, and usually is a symtom of another.

William ML Leslie

unread,
May 19, 2020, 8:53:09 AM5/19/20
to cap-talk
On Tue, 19 May 2020 at 21:10, Christopher Lemmer Webber wrote:
Sure, and I think it's worth exploring what CapTP/cc would look like
and how it would function. I'm wondering how promise pipelining still
works.

Let's say we have something like:

    def send(connection, target, *args):
        # Make a local promise ("answer") and resolver ("question"), export
        # the resolver so the far side can resolve it, then ship the call.
        answer, question = connection.promise_and_resolver()
        export_id = connection.add_export(question)
        connection.send(Call(connection.imports[target],
                             [Export(export_id)] + [connection.get(a) for a in args]))
        return answer

sending messages to the promise wouldn't quite work - there'd need to
be some extra plumbing to get the messages to pass to the other side
rather than enqueue waiting for the promise to resolve.

Similarly, because the answer is now defined on the sender side, the
receiver will also need extra plumbing to send only resolve messages,
and enqueue anything else.

I suspect you'd need to make answers and questions very strange refs
in order to eliminate the answer/question concept cleanly.
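
Very roughly, the sender-side plumbing I mean might look like this
(same invented connection API as above, so purely a sketch):

    class RemoteAnswer:
        """A promise whose pending messages get pipelined over the wire
        rather than queued locally until resolution (invented API)."""

        def __init__(self, connection, export_id):
            self._connection = connection
            self._export_id = export_id
            self._resolution = None
            self._resolved = False

        def send(self, verb, *args):
            if self._resolved:
                # Once resolved, deliver to wherever the answer turned out to be.
                return self._resolution.send(verb, *args)
            # Not yet resolved: forward the message to the far side, addressed
            # to the exported question, instead of buffering it here.
            return self._connection.send_to_answer(self._export_id, verb, args)

        def handle_resolve(self, value):
            # Only resolve messages are accepted from the receiver side;
            # anything else would need to be enqueued, as mentioned above.
            self._resolved = True
            self._resolution = value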