Testing for a confused deputy vulnerability


Alan Karp

Oct 10, 2019, 1:07:10 PM
to cap-...@googlegroups.com
I recently attended a presentation where the speaker showed a system using OAuth bearer tokens that had a confused deputy vulnerability.  I tried to explain the problem to him, but I didn't have a simple enough example on the tip of my tongue.  Let me know what you think of the following.

Alice invokes Bob's service, passing as an argument the designation of Carol's service.  Can Bob invoke Carol's service with Alice's permissions?  If not, you have a confused deputy vulnerability.

I expect the answer will be to have Bob authenticate as Alice when invoking Carol.  In that case, I'll point out the gross violation of POLA that approach involves.  It may also be that Bob passes an argument to Carol that Bob has permission to use but Alice does not.  That's exactly the example in Norm Hardy's paper.
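
For concreteness, here is a rough Python sketch of Norm's example (all names invented), followed by a capability version that passes the test above:

    # Ambient-authority version: the compiler (the deputy) holds its own
    # permission to the billing file and accepts a caller-supplied *name*
    # for the output -- designation without authority.
    BILLING_FILE = "/sysx/bill"
    FILES = {BILLING_FILE: "billing records"}
    COMPILER_MAY_WRITE = {BILLING_FILE, "/home/alice/out"}

    def compile_with_ambient_authority(source, output_name):
        if output_name not in COMPILER_MAY_WRITE:   # checks *its own* permissions,
            raise PermissionError(output_name)      # not the caller's
        FILES[output_name] = "object code for " + source
        FILES[BILLING_FILE] += "\n+1 compilation"

    # Alice, who may not touch the billing file, simply names it as her output:
    compile_with_ambient_authority("prog.c", BILLING_FILE)  # billing file clobbered

    # Capability version: Alice must pass authority along with designation,
    # so the compiler never wields more write authority than she handed it.
    def make_writer(name):
        def write(data):
            FILES[name] = data
        return write

    def compile_with_capability(source, write_output):
        write_output("object code for " + source)

    compile_with_capability("prog.c", make_writer("/home/alice/out"))

In the second version the compiler writes only with the authority Alice passed in, which is the behavior the test above asks for.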

--------------
Alan Karp

Raoul Duke

Oct 10, 2019, 1:22:06 PM
to cap-...@googlegroups.com
is there not yet quickcheck for security tokens? :) an automatically checkable general simplified code model for security tokens so one could write the example in code & then have some prolog/SAT engine spit out violation examples? or is that not so doable mathematically or practically?

Alan Karp

Oct 10, 2019, 1:33:40 PM
to cap-...@googlegroups.com
OAuth bearer tokens are security tokens.  He just got them by authenticating when making the request, in essence achieving ambient authority.

It sure would be nice to have an automated tool, as Joe-E is for Java.  However, in this case, I'm looking for a simple way to identify the problem and explain it without needing to bring up ocaps.  That will come later.

--------------
Alan Karp


On Thu, Oct 10, 2019 at 10:22 AM Raoul Duke <rao...@gmail.com> wrote:
is there not yet quickcheck for security tokens? :) an automatically checkable general simplified code model for security tokens so one could write the example in code & then have some prolog/SAT engine spit out violation examples? or is that not so doable mathematically or practically?


Mark S. Miller

Oct 10, 2019, 1:43:00 PM
to cap-...@googlegroups.com, Toby Murray
Generalizing a bit, an excellent question! I am not aware of any automated systems for spotting likely confused deputy problems (aka co-mingled authority problems). I am not sure why, but I suspect it is for the same reason that most access control formalisms cannot even talk about confused deputy. (cc'ing Toby whose formalism did.) Such a detector could be useful even if it were not sound in either direction, as long as it had a high enough accuracy to usefully direct the remaining manual work.

Ethereum contracts would be a juicy corpus to target. Ethereum's primary access control mechanism is the "msg.sender" check. As background, anyone can send a message to any Ethereum contract, which is to say, an Ethereum contract can receive a message from anywhere. Therefore, it needs to check something to determine how it should react to the message. The message has an unforgeable field, "msg.sender", that is not supplied by the sender but rather is set by the platform to identify the *immediate* sender. IOW, if contract A invokes contract B, B will see A's identity as msg.sender.

Ethereum has other mechanisms as well. But targeting only msg.sender should detect the majority of not-yet-diagnosed confused deputy problems in Ethereum smart contracts.
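
To make that concrete, here is a toy Python model (not Solidity; all names invented) of a msg.sender check and of how a forwarding contract becomes a confusable deputy:

    class Chain:
        """Toy platform that maintains the caller stack behind msg.sender."""
        def __init__(self):
            self.stack = []
        def call(self, sender, contract, *args):
            self.stack.append(sender)        # unforgeable: set by the platform
            try:
                return contract(self, *args)
            finally:
                self.stack.pop()
        def msg_sender(self):
            return self.stack[-1]            # identity of the *immediate* sender

    def vault(chain, amount, recipient):
        # Access control by identity: only the "payroll" contract may spend.
        if chain.msg_sender() != "payroll":
            raise PermissionError("only payroll may spend")
        return "paid %s to %s" % (amount, recipient)

    def payroll(chain, amount, recipient):
        # The deputy forwards a caller-chosen recipient; the vault sees
        # msg.sender == "payroll", i.e. the deputy's authority, not the caller's.
        return chain.call("payroll", vault, amount, recipient)

    chain = Chain()
    # chain.call("mallet", vault, 100, "mallet")        # rejected: PermissionError
    print(chain.call("mallet", payroll, 100, "mallet"))  # "paid 100 to mallet"

A detector could look for exactly this shape: a msg.sender-guarded call reached through a contract that forwards caller-chosen arguments.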

If you do spot such flaws, please disclose them responsibly if possible.



On Thu, Oct 10, 2019 at 10:22 AM Raoul Duke <rao...@gmail.com> wrote:
is there not yet quickcheck for security tokens? :) an automatically checkable general simplified code model for security tokens so one could write the example in code & then have some prolog/SAT engine spit out violation examples? or is that not so doable mathematically or practically?


Tristan Slominski

Oct 10, 2019, 2:08:37 PM
to cap-...@googlegroups.com
The example I typically use is "bank customer", "bank", and "Mint" (dashboards for bank accounts). I have to give Mint my username and password so it can log into the bank as me to get account data. That's a confused deputy vulnerability because if Mint gets confused and logs in as someone else, I have access to someone else's data.

I then point to the nature of the problem, which is that "bank" asks "who are you" instead of "what can you do".
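
A small Python sketch of the contrast (names invented): the identity-based bank hands Mint everything I can do, while a capability-based bank hands it only the one thing it needs:

    class Account:
        def __init__(self):
            self.statements = ["Jan: -$40 Old Joe's"]
            self.balance = 1000
        def transfer(self, amount, to):          # Mint never needs this
            self.balance -= amount

    # "Who are you?": the login yields a session with *all* of my authority.
    def login(account, username, password):
        return account                           # the full account object

    # "What can you do?": I hand Mint only a read-statements capability.
    def make_statement_reader(account):
        def read_statements():
            return list(account.statements)
        return read_statements

    my_account = Account()
    mint_session = login(my_account, "alice", "secret123")  # can also transfer!
    mint_reader = make_statement_reader(my_account)          # can only read
    print(mint_reader())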


Matt Rice

Oct 10, 2019, 7:58:20 PM
to cap-...@googlegroups.com
On Thu, Oct 10, 2019 at 5:43 PM Mark S. Miller <ma...@agoric.com> wrote:
>
> Generalizing a bit, an excellent question! I am not aware of any automated systems for spotting likely confused deputy problems (aka co-mingled authority problems). I am not sure why, but I suspect it is for the same reason that most access control formalisms cannot even talk about confused deputy. (cc'ing Toby whose formalism did.) Such a detector could be useful even if it were not sound in either direction, as long as it had a high enough accuracy to usefully direct the remaining manual work.
>

I don't know; it seems to be something that must be built into the
underlying substrate of the system you are building upon, e.g. at some
point you need a proof that your deputy treats different sets of
authorities as disjoint, and how can this be done without starting
from two distinct empty sets and maintaining the disjointness as
authority is added?

I've experimented some in polymorphic type & effect systems, which
allow you to describe functions like map, where map is polymorphic
over the effect of the function passed into it.
So if 'f' has authority, map has authority; similarly, if f is pure, map is pure.

I think we can say that since map has a single effect-polymorphic
variable for f, map itself is clear of confused deputy problems,
assuming map iterates over values and values alone do not carry effects.

The somewhat relevant part is that if we assume our substrate is the
only source of effects, it seems possible to build an FFI of sorts,
like the way we build an FFI from some language to C, where we erase
the effects when exporting to the lesser language and infer the
effects when compiling down to the substrate.

As an example, let's say there is another function, a monomorphic
pure_map, which requires that f be pure. The intuition is that we
cannot write this pure_map function in the lesser language, where
effects have been erased. However, since all effects come from the
substrate, if we write a function foo and infer all the effects it has
during compilation, we can cross over the FFI, call pure_map on it,
and pure_map can still accurately type-check foo in the stricter
context, producing errors for effect-system violations which the
language foo is written in cannot begin to describe.

This at least seems to provide for the opposite case, a checker that
some deputy is in fact not confused. In a way it's trying to push
something like Stiegler's Emily under the hood... but also including
some reflection on the authority of functions. Then using this
reflection there are at least a few patterns which can be built and
then exported to the lesser environment.

Anyhow, I've been messing around with this with the idea of exporting
to the lesser language of System F, and checking existing code not
written with these checks in mind. But the whole thing entails
writing a couple of compilers, and rewriting the normal standard
library in the target language.

It doesn't in fact know anything about confusion; it just checks that
the effects in the type signatures match according to some specific
preordained check which I, the author of the bridge function (between
disjoint sets of authority), have ordained as a correct pattern.

The goal of this was somewhat different -- rather than warn about
bits of code that suffer from confused deputy problems... extract code
which happens to fit patterns that do not suffer from confused deputy
problems, and see how much is left.

But it doesn't really have any way to partition these into definitions
of sets which are confused/not confused. As in I can think of
patterns you could match to ensure confusion.
Sorry for the somewhat long hand-wavy outline.

Christopher Lemmer Webber

Oct 14, 2019, 4:42:59 PM
to cap-...@googlegroups.com
Alan Karp writes:

> I recently attended a presentation where the speaker showed a system using
> OAuth bearer tokens that had a confused deputy vulnerability. I tried to
> explain the problem to him, but I didn't have a simple enough example on
> the tip of my tongue. Let me know what you think of the following.
>
> Alice invokes Bob's service, passing as an argument the designation of
> Carol's service. Can Bob invoke Carol's service with Alice's permissions?
> If not, you have a confused deputy vulnerability.

In other words, if you gave the designation but no authority, it has a
confused deputy?

Alan Karp

Oct 14, 2019, 5:21:27 PM
to cap-...@googlegroups.com
Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:

> Alice invokes Bob's service, passing as an argument the designation of
> Carol's service.  Can Bob invoke Carol's service with Alice's permissions?
> If not, you have a confused deputy vulnerability.

In other words, if you gave the designation but no authority, it has a
confused deputy?

I think so, but I wonder if there's a corner case I'm missing.  

--------------
Alan Karp

Kevin Reid

Oct 15, 2019, 12:19:44 AM
to cap-...@googlegroups.com
There is at least one degenerate case, and I think several, where "then you have a vulnerability" is false; for example, if Carol's service does not require any authority (other than arguments-and-return data), or if the reference to Carol's service is itself the relevant authority (Carol's service is a self-contained mutable object), then there is no bug even if the protocol has no support for transferring permissions. That one can be fixed by changing "but no authority" to something like "but omitted relevant authority".

I haven't thought of a corner case that isn't playing with the "no authority" part in empty-set/which-way-are-your-quantifiers-nested ways.


William ML Leslie

Oct 15, 2019, 1:17:10 AM
to cap-talk
I have been trying to prepare an intro to capabilities starting from the modern problems that they can solve, and I have been phrasing CDV this way:

0. An invoker designates an access-controlled resource in a request to a service.
1. The service performs a protected operation on that resource.
2. The invoker would not have been permitted to perform that operation itself, at least, not with the same parameters or in the same environment.
3. The intention of the invocation was not to provide attenuated access to the resource.

The edge case i like to think about is LD_PRELOAD. The common vector is to convince a setuid binary to run code that you indeed have execute access to, but to have it now run as root. This feels to me like CDV, and yet the only strange thing is the environment that the code runs in, and the intention of the binary author. I suspect that intention really matters with CDV, and indeed capabilities in general allow us to be very specific about intention. That is their magic.

Alan Karp

Oct 15, 2019, 1:35:03 AM
to cap-...@googlegroups.com
William ML Leslie <william.l...@gmail.com> wrote:

I have been trying co prepare an intro to capabilities starting from the modern problems that they can solve

When I presented Capabilities 101 at a recent Internet Identity Workshop, the ability to delegate got the most interest from the ACL crowd.

--------------
Alan Karp

William ML Leslie

Oct 15, 2019, 5:56:07 AM
to cap-talk
I love the motivating example of getting your neighbour to put Marc's
car into your garage, and I think anywhere else that would have been
the perfect place to start. But my current employer, Alliance, are
different, in a way I find difficult to explain.

Sure, we develop websites, in a fairly standard set of frameworks, in
what has become a fairly standard language. But it seems like between
the sysadmins and the managers somewhere, they are actually pretty
switched on about security. The thing is, to some extent, they know
it. So I'm going to give them the same feeling I got when reading
"ACLs Don't", and I'm going to do it first up. I'm going to dive into
CSRF and probably something to do with local files. I'll show why
ambient authority means that the attacker doesn't need anything
special, just to be allowed to name a resource.

Then I'm going to show that actually, we already have a kind of
capability discipline going on. Everyone has some password manager
here, so I'm going to show them how LastPass is actually a powerbox,
even if it kind of works backwards. And everyone here uses OSX, so
I'm going to show them how the capability sandbox protects them from
nefarious Word macros. Then I'll show them how good UI demands that
intent is expressed clearly wherever authority is granted.

And *then* I'm going to hit them with the mathematical model. I've
written complex security code - code that says, ok, are you this kind
of user? Then we need to check this, and that. Are you that kind of
user? Then we have to set up certain fields of your newly created
objects a certain way, etc. But capabilities make this so, so much
cleaner. The minimal set of axioms required to express capability
patterns are great.

I'll briefly describe object-capability systems, as well as
crypto-capability systems.

Finally, I'll drive it home with your example of delegation, and
probably cover Horton.

Maybe in there I'll also cover why MFA is not much of a mitigation against
phishing, and why none of the major operating systems have a response
to BadUSB.

I don't know. I'm constantly in awe of this field, and dumbfounded
that it isn't the entire content at every RSA conference, that every
security audit and post mortem doesn't start with "why not
capabilities" or "capabilities is why this should never happen", and
that we're still password-protecting our websites. I guess the more
angles we can come at "hey, here's the right way to build secure
systems", the better.

--
William Leslie

Notice:
Likely much of this email is, by the nature of copyright, covered
under copyright law. You absolutely MAY reproduce any part of it in
accordance with the copyright law of the nation you are reading this
in. Any attempt to DENY YOU THOSE RIGHTS would be illegal without
prior contractual agreement.

William ML Leslie

Oct 15, 2019, 6:01:10 AM
to cap-talk
Sorry for the flow-on reply: I meant to say that my employer seem to
really crave being able to statically audit the security rules. By
the time I get to delegation, then, I want to already have covered the
ocap model.

Alan Karp

Oct 15, 2019, 12:16:50 PM
to cap-...@googlegroups.com
I like your approach to educating your employer, and I agree with your assessment of the RSA conference.  I used to be able to get my talk proposals accepted, but I haven't had any luck in the past several years.  My best guess is that they changed the selection committee to be more commercially oriented, since most of the abstracts I read these days refer to actual products.

One comment on the OSX sandbox.  I don't believe it uses capabilities.  It looks like Polaris and Bromium.  It's least privilege at application granularity but not capabilities.  (Talk about full circle.  Bromium based its design on what we did with Polaris at HP Labs, but they did it better.  The Bromium web site now says they were acquired by HP.)

Horton is a bit tricky to explain.  Mark Miller pointed out that SCoopFS gives ACL people the UI they want, so maybe that's a better starting point.

There is another question that always comes up.  How can a normal human being possibly manage all these fine-grained permissions?  The answer is Marc Stiegler's insight of using acts of designation as acts of authorization.  The point is that no matter how many resources you have, you need to designate one to work on it.  That act of designation is how the target application gets permission to use the resource.  So, double-clicking on a .key file gives Keynote permission to edit that file.
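
A tiny Python sketch of that idea (hypothetical names): only the powerbox, standing in for the OS open dialog or double-click handler, holds ambient file authority, and the user's act of picking a file is the grant:

    import tempfile

    class Keynote:                                # the app holds no file authority
        def edit(self, file_handle):
            file_handle.write("edited slides")

    class Powerbox:
        """Stands in for the OS open dialog / double-click handler."""
        def ask_user_to_choose(self):
            # Stand-in for the real dialog: pretend the user picked this file.
            handle = tempfile.NamedTemporaryFile(mode="w+", suffix=".key", delete=False)
            handle.close()
            return handle.name
        def open_for(self, app):
            path = self.ask_user_to_choose()      # designation...
            with open(path, "r+") as handle:      # ...becomes authorization:
                app.edit(handle)                  # the app gets exactly this one handle

    Powerbox().open_for(Keynote())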

--------------
Alan Karp



Christopher Lemmer Webber

Jan 28, 2020, 10:07:43 AM
to cap-...@googlegroups.com
Alan Karp writes:

> I recently attended a presentation where the speaker showed a system using
> OAuth bearer tokens that had a confused deputy vulnerability. I tried to
> explain the problem to him, but I didn't have a simple enough example on
> the tip of my tongue. Let me know what you think of the following.
>
> Alice invokes Bob's service, passing as an argument the designation of
> Carol's service. Can Bob invoke Carol's service with Alice's permissions?
> If not, you have a confused deputy vulnerability.

I like this test, though here's a challenge: doesn't rights
amplification, in many cases, fail this test? If Bob can't invoke
Carol's service with Alice's permissions because Bob doesn't have an
unsealer that Alice does, doesn't this fail the above definition?

I've had a suspicion for a while that rights amplification, if used
haphazardly, is an opportunity for re-introduction of confused-deputies,
but I think there is probably a set of patterns we can recommend that is
not confused-deputy-vulnerable, but I have never seen these identified.
Instead mostly I've seen "and you can get back group-like kinds of
access with rights amplification!" without the forewarning that if you
do it wrong, you get confused deputies again.

The most extreme and obvious form of doing it wrong is "you have a
bucket of unsealers, search in that bucket for whatever unsealer may
apply to give you access". I think that even beyond that, it is
dangerous/possible.

As for why, I think we can analyze that the reason confused deputies
more or less do not occur in ocaps is that "you only have access to what
you have access to". There's a graph of things that can be done, and
your power to do things is indicated by what portions of the graph you
can see. There's no distinction between holding onto a reference and
accessing it so confused deputies do not occur simply because you can't
identify an object that you'd like to do more with and simply don't have
more permissions for.

That is, unless you do one of two things:
- Add rights amplification
- Add eq? and start accruing private associative information that
affects your behavior regarding an "identity".

In which case you're no longer at the simple-graph-of-access perspective
anymore.

Thoughts?

(If you're wondering if this ties into the social network
vulnerabilities thread I've left hanging, it does. More on that later.)

- Chris

Mark S. Miller

Jan 28, 2020, 12:13:09 PM
to cap-talk
This seems right.




--
  Cheers,
  --MarkM

Alan Karp

Jan 29, 2020, 2:16:11 PM
to cap-...@googlegroups.com
My understanding is that you use a sealed box to pass a capability through someone you don't trust to use it.  Say that Alice wishes to provide Bob with a capability that can be used only by Carol with some input provided by Bob, even if that input is pure data.  Alice gives Bob the sealed box and Carol the unsealer.  Bob passes the sealed box with his additional input to Carol.  Carol knows which unsealer to use with that box and invokes the capability that was in the box.  There is no confused deputy.
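
Here is a rough closure-based sketch of that flow in Python (illustrative only; not how E implements brands):

    def make_brand():
        contents = {}                       # private table shared by seal/unseal
        def seal(payload):
            box = object()                  # opaque token; reveals nothing
            contents[box] = payload
            return box
        def unseal(box):
            return contents[box]            # KeyError if sealed by another brand
        return seal, unseal

    seal, unseal = make_brand()             # Alice makes the pair

    def carols_service(data):
        return "Carol exercised the capability on %r" % (data,)

    box = seal(carols_service)              # Alice gives Bob the box...
    def carol(sealed_box, bobs_input):      # ...and Carol the unsealer
        return unseal(sealed_box)(bobs_input)

    def bob(sealed_box):                    # Bob adds his input but cannot peek inside
        return carol(sealed_box, "input Bob chose")

    print(bob(box))

Bob can route the box and add his input, but the capability is exercised only where Alice intended.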

What if Carol doesn't know which unsealer to use and picks one that matches out of a bucket?  She is still using the unsealer Alice gave her on the box Alice gave to Bob.  Where is the confused deputy?

The only confused deputy vulnerability I can see is if Bob can somehow tell Alice which box to use without providing her with the capability to the box, but that's a separation of designation from authorization.

--------------
Alan Karp


On Tue, Jan 28, 2020 at 7:07 AM Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:

Mark S. Miller

Jan 29, 2020, 4:52:34 PM
to cap-...@googlegroups.com
The box designates the cap inside the box, but by itself doesn't give access to it. If Alice provides Bob with the "wrong" box, she can choose for Bob which authority Bob would then use, among choices none of which Alice can use.

Because Client Utility used rights amplification everywhere, I remember discussing, back when we were at HP, that rights amplification does create a possible CD hazard. But it depends on the rights amplification pattern and on how it is used. I don't think we ever encountered an accidental CD due to rights amplification within an ocap system. But I also don't think we looked very hard to try to find one. I am not very worried about CD arising from ocap rights amplification, but I admit I may be underestimating the danger.






Alan Karp

Jan 29, 2020, 6:06:00 PM
to cap-...@googlegroups.com
Mark S. Miller <ma...@agoric.com> wrote:

The box designates the cap inside the box, but by itself doesn't give access to it. If Alice provides Bob with the "wrong" box, she can choose for Bob which authority Bob would then use, among choices none of which Alice can use.

In the classic confused deputy, just replace the string designating the billing file with a box containing a capability to write that file, and you've got a confused deputy vulnerability.  There is an important difference, though.  Bob's API can require an unsealed write capability for the output file.  Allowing a box for it is a bug.  

I think giving Alice a box with the write capability to the billing file can also be considered a bug.  Recall that Alice might properly be able to read the billing file but not write it.  Designation by string name doesn't expose that distinction.  Giving Alice a read capability, or even a box containing one, does.

--------------
Alan Karp

Mark S. Miller

Jan 29, 2020, 9:59:21 PM
to cap-...@googlegroups.com
We agree. The conclusion is that rights amplification makes a CD possible, but it is unlikely to actually cause a CD by accident. That's why your short statement is a great heuristic for finding CDs, but is not a definition of CDs.



Kevin Reid

Jan 29, 2020, 11:23:09 PM
to cap-...@googlegroups.com
On Wed, Jan 29, 2020 at 3:06 PM Alan Karp <alan...@gmail.com> wrote:
Mark S. Miller <ma...@agoric.com> wrote:

The box designates the cap inside the box, but by itself doesn't give access to it. If Alice provides Bob with the "wrong" box, she can choose for Bob which authority Bob would then use, among choices none of which Alice can use.

In the classic confused deputy, just replace the string designating the billing file with a box containing a capability to write that file, and you've got a confused deputy vulnerability.  There is an important difference, though.  Bob's API can require an unsealed write capability for the output file.  Allowing a box for it is a bug.

Allowing a box is not a problem by itself; rather, there would be a problem if there was a box which is to be unsealed by an unsealer that Bob obtained for other purposes, in which case Bob is disregarding that purpose. For example, if there were some protocol by which Alice provides Bob an unsealer, and then later makes a request containing a corresponding box, then there is no problem (as long as Bob does not throw all their unsealers in a bucket).

Consider this question: In general, if Bob has successfully unsealed a box, what does Bob know — in what ways is Bob able to not be confused — about the result thereof?

Bob knows that the sealer was used to make the box, and Bob knows that the box was sent to Bob in a message. The act of sealing is itself a time-delayed message to holders of the unsealer. Bob has received two interrelated messages (the 'bare' one and the one implied by sealing), and must proceed according to knowledge of the semantics of both messages and their relationship; if Bob uses only one of them then Bob (or the protocol Bob is implementing) is faulty.

If Bob the compiler unseals the output file capability with some sealer (call the sealer/unsealer pair [X]) and as a consequence overwrites the billing file, then we have a problem, but it could be of several kinds, for example:
1. Somebody sealed the billing file with the [X] sealer when they shouldn't have; the sealing does not imply that Alice doesn't have authority to overwrite the file. (For example, [X] could mean "This is a file you can only overwrite with outputs of the trusted compiler.")
2. [X]'s purpose is to manage access to billing accounts (so that customers can choose where the bill goes to by passing the box for that account). Bob should not have used the [X] unsealer when processing the output file, only the billing file.
3a. Boxes sealed with the [X] sealer are not supposed to be handed out to Alice; they imply something about the file that is not true, but Bob proceeds as if it is true, and the fault is not with Bob but with the misuse of the sealer. (This doesn't fit into the compiler example readily.)
3b. Alice has legitimate access to the box, but should not have put the box into the "output" position of the request to Bob (Alice got an argument list mixed up or something).

Case 2 is a confused deputy if Bob's error was to use a bucket of unsealers. Cases 1 and 3 could be confused deputies elsewhere than Bob, but it's not Bob's designer's fault that a sealer-using protocol was misused.

Sealed boxes are not a confused deputy hazard in particular; they're a hazard for all sorts of bugs if you use them without having a plan for what a box means, and they're a confused deputy hazard if misused in the, admittedly tempting, bucket-of-unsealers fashion. (There is a sound way to use a bucket of unsealers: you have to pair each unsealer with sufficient information about what it means, and take that information into proper account.)

To restate the important part since I dumped a bunch of thoughts here:
The act of sealing is a time-delayed message to holders of the unsealer. To use a box correctly, one must know the meaning of that message, and relate it correctly to the message in which the box arrived.

Rob Markovic

Jan 30, 2020, 12:06:34 AM
to cap-...@googlegroups.com
I think there needs to be a time-limited portion of this exchange so it cannot be reused again later.

A TTL if you will. After the timeout it needs to be re-issued or re-requested.

++ Rob



Alan Karp

Jan 30, 2020, 1:14:49 AM
to cap-...@googlegroups.com
Rob Markovic <rob.ma...@gmail.com> wrote:

I think there needs to be a time limited portion of this exchange so it cannot be reused later again.

Timeouts are either too short or too long.  OAuth 2 started with a 5-minute default timeout, then it went to an hour.  Now, it's often as long as a week.  Explicit revocation is better.  Having said that, there are uses for timeouts.  If there's an explicit contract involved, the timeout can coincide with its end.  Also, a timeout is a reasonable backup when the revoker might be offline or otherwise unable to revoke.

--------------
Alan Karp

Rob Markovic

Jan 30, 2020, 1:25:11 AM
to cap-...@googlegroups.com
I remember this from your talk, Alan.

Explicit revocation may be a better workflow overall, as dynamic timeouts are something average people won't know how to set correctly for the occasion.

Having a full-circle workflow from issuance to revocation is better, but only if revocation is strictly required and it is an error if not set or connected. Otherwise the lazy ones will omit it or set it as permanent, which weakens the overall construct.

-- Rob


Alan Karp

Jan 30, 2020, 1:18:19 PM
to cap-...@googlegroups.com
Kevin highlights an important point that I missed in Chris Webber's email that started this thread.  Alice doesn't designate the resource by providing the box; Bob does by choosing the unsealer.  If Bob is expecting a box containing a capability to an output file provided by Alice, he will use an unsealer appropriate for Alice.  If Alice provides a box containing a capability to the billing file, the unseal operation will fail.  I now understand Chris's point about having a confused deputy vulnerability if Bob pulls any unsealer that matches out of a bucket of them. 

That raises an interesting question of how Bob knows what unsealer to use in general.  If Carol gives Alice the sealed box and Bob the unsealer, how does Bob know to apply that unsealer to the box Alice provides?  Carol could have given Alice more than one sealed box and Bob the corresponding unsealers.  Alice could pick any one of them to designate the compiler output file.  Even if Carol told Bob which unsealers to use for Alice's requests, does he have to try them one at a time?

--------------
Alan Karp



Rob Markovic

Jan 30, 2020, 6:34:37 PM
to cap-...@googlegroups.com
It would seem logical to have ocaps tied to originating labels. There would be no confusion then.
Or if no labels are provided, a separate bucket for each, tracked by such an identifier.

There's no good reason to mix ocaps in one bucket. Each should have exactly one match, unless designated for multiple resources.

++ Rob

Kevin Reid

Jan 30, 2020, 8:59:11 PM
to cap-...@googlegroups.com
On Thu, Jan 30, 2020 at 10:18 AM Alan Karp <alan...@gmail.com> wrote:
Kevin highlights an important point that I missed in Chris Webber's email that started this thread.  Alice doesn't designate the resource by providing the box; Bob does by choosing the unsealer.  If Bob is expecting a box containing a capability to an output file provided by Alice, he will use an unsealer appropriate for Alice.  If Alice provides a box containing a capability to the billing file, the unseal operation will fail.  I now understand Chris's point about having a confused deputy vulnerability if Bob pulls any unsealer that matches out of a bucket of them. 

That raises an interesting question of how Bob knows what unsealer to use in general.  If Carol gives Alice the sealed box and Bob the unsealer, how does Bob know to apply that unsealer to the box Alice provides?

I hold that the answer to that question is inherently application-specific; that is, specific to the design of the protocol in which the box and unsealer are being used. In general, two unsealers are from unrelated protocols and they have no reason to be brought together.
 
Carol could have given Alice more than one sealed box and Bob the corresponding unsealers.  Alice could pick any one of them to designate the compiler output file.  Even if Carol told Bob which unsealers to use for Alice's requests, does he have to try them one at a time?

Presumably if Carol is doing this, then there is a purpose, in the protocol in which Carol and Bob are participating, for having more than one unsealer. (If there isn't a purpose, then Carol should stop making things unnecessarily complicated.) Perhaps Bob should try each of two unsealers in sequence, and do something different with the contents. In general, Bob may try multiple unsealers and then proceed according to a lookup table which tells Bob what to do with the contents, if that makes sense for the protocol.

There can be a way to detect which unsealer to use, but this is merely an efficiency feature (O(n) to O(1)). For example, E's builtin sealers have a reliable way to ask for a common identifying object from the sealer, unsealer, or box; the  __optSealedDispatch mechanism used this to allow Bob to determine which unsealer to use, and Bob would then handle the box according to that unsealer (each unsealer used in this fashion corresponds to a distinct "method name"). I believe I remember hearing that KeyKOS also had a mechanism for trying multiple unsealer-equivalents as a single operation (in the "can opener", I think?) and knowing which one succeeded.
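
Roughly, in Python (loosely modeled on that idea; names invented), the sealer, unsealer, and box can all expose a common label so Bob dispatches directly instead of searching a bucket:

    class SealedBox:
        def __init__(self, brand_label):
            self.brand_label = brand_label   # says *which* brand, not what's inside

    class Brand:
        def __init__(self, label):
            self.label = label               # the common identifying object
            self._contents = {}
        def seal(self, payload):
            box = SealedBox(self.label)
            self._contents[box] = payload
            return box
        def unseal(self, box):
            return self._contents[box]       # KeyError if box is from another brand

    admin = Brand("classroom-admin")
    unsealers = {admin.label: admin.unseal}  # Bob's unsealers, keyed by label

    box = admin.seal("silence-capability")
    print(unsealers[box.brand_label](box))   # O(1) dispatch, no bucket search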

There is even a degenerate case where the unlabeled bucket-of-unsealers is a sound approach: if Bob has multiple unsealers for which the associated processing requirements are completely identical. For example, suppose the contents of boxes are secret data which Alice should not get to see, but may perform certain operations on, and Bob is the party trusted by Carol and Charlie to perform those operations correctly and without leaking the data. Then Carol and Charlie may separately provide unsealers to Bob and boxes to Alice, and happen to have the same expectations of Bob such that Bob need not behave differently. But this property would be lost if, for example, the system came to have incompatible protocol versions (definitions of what Bob is to do) such that Carol wants Bob to execute v2a on Carol's boxes and Charlie wants Bob to execute v2b on Charlie's boxes — then Bob needs to remember which unsealer is which again.

We can also consider the silly implementation where Bob uses many sealers for unrelated purposes, but always tries all of them when presented with a box; Bob may use Bob's information about which unsealer succeeded to know how to use, or disregard, the contents of the box, and still be sound. But this is a waste of computation and coupling (in the software engineering sense) compared to only trying the unsealers that are potentially relevant to the message Bob received.

Alan Karp

Jan 30, 2020, 9:13:15 PM
to cap-...@googlegroups.com
"a common identifying object from the sealer" is what I was looking for.

--------------
Alan Karp



Christopher Lemmer Webber

Jan 31, 2020, 10:13:07 AM
to cap-...@googlegroups.com
I think it would also be helpful if I provided a specific scenario to
discuss. My thinking about this came from a paper which finally helped
me understand the power of rights amplification and why I should care
about it, and then incidentally upon later reflection, also led to my
doubts. The paper is this one:

https://www.uni-weimar.de/fileadmin/user/fak/medien/professuren/Virtual_Reality/documents/publications/capsec_vr2008_preprint.pdf

"Object Capability Security in Virtual Environments"

In particular, there is a very good example in there of a teacher in a
classroom in which there are proxy objects that represent students.

(From memory:) Each student has various methods that anyone can access
(e.g., student.description() to see their profile description), etc.
However, some students are noisy, and the teacher should have
capabilities to perform administrative actions such as silencing a
problematic student so they cannot send messages to the channel. So
it is better if teachers are given an unsealer for admin privileges, and
then there is a student.admin() procedure that returns a sealed
capability to do things like silencing (so,
unsealer(student.admin()).silence() or something).

Now the teacher can do things relative to each student that others who
have access to the same designation cannot. Is this an opportunity for
a confused deputy vulnerability?

I have a shorter example I've been thinking of; I'll follow it up in
the next post.

Christopher Lemmer Webber

Jan 31, 2020, 10:44:41 AM
to cap-...@googlegroups.com, Jonathan A Rees
Here's the shorter example.

Say we have a blog. We'll start out with the assumption that there are
two capabilities:

;; Writing to the blog. Holding this capability lets you write.
(post-to-blog #:subject "Blah blah, new blogpost!"
              #:contents "It's been a long time since I've blogged, but...")

;; Read from the blog. Widely available, almost everyone has this!
(blog 'name) ; => "A Rather Fine Blog"
(blog 'posts) ; => (list <post1> <post2> <post3>)

Ok. So let's say I've got <post1>, and have bound it to the name post1
in my current scope.

(post1 'availability) ; => 'public

(post1 'subject) ; => "Blah blah, new blogpost!"
(post1 'date) ; => "2020-01-31T10:19:01-05:00"
(post1 'contents) ; => "It's been a long time since I've blogged, but..."

(post1 'edit) ; => <sealed blog-admin>
(post1 'delete) ; => <sealed blog-admin>

This is a good idea because we want everyone to be able to read the
blog, and I'd like to be able to edit individual posts if I'm a blog
admin, but I don't want to have to "remember" which admin capabilities
belong to each post (especially if other people have made posts on the
blog in my absence).

Now the question is, do we want a comments section? Perhaps the authors
of this blog agreed that they wanted to be able to "hand out" the
authority to make comments on the blog.

(post1 'comment) ; => <sealed commenters-only>

We also might want to "lock" some blogposts... it's ok that people know
that they're there, but not everyone can access them. Patreon does
this; some blogposts let you see the subject, but to read the contents
you need to become a donor. Maybe <post2> is like that.

(post2 'availability) ; => 'private
(post2 'subject) ; => "Behind the scenes preview of my next animation!"
(post2 'contents) ; => <sealed patrons-only>

Now this seems like a desirable system to me. It also *clearly* does
not pass Alan's test. Let's see if we can create a confused deputy.

Well, first of all, we should note that all social systems by definition
require identity, and thus eq?. If we're going to talk about someone,
we need to know that we're talking about the same person (grant matcher
problem shows up everywhere in social systems). Right off the bat,
let's acknowledge that eq? already introduces a context in which
confused deputies can occur if we're talking about some people
associating information about an identity that others might not be aware
of. Consider the evil boss Mallet, who wants to snoop in on their
employees. In particular, Bob called in sick on Tuesday; Mallet would
like to know, "was Bob actually out drinking?"

Mallet and Bob both know Alice. But maybe Alice doesn't know
that Mallet is Bob's boss. So Mallet says to Alice, "Oh, you know Bob?
Wow, that guy can really drink someone under the table. He was telling
me that you two went out to some interesting places on Monday night...
I'm looking for some recommendations to bring a friend. Were they any
good?" And Alice proceeds to laugh and talk about how wasted Bob got,
and that yeah, the beer at Old Joe's Drinking Tavern was really good.

Even in this informational system, I'd argue this is a confused deputy.

So let's now try to apply that to a blogging system. Let's say there's
a fun bot out there, called StudlyBot that runs the studlification-caps
algorithm over blogposts. For example, it would turn this paragraph
into:

So let'S Now try tO aPpLy thAt tO A bloggIng SyStem. Let'S sAy theRe's
A fUn bot out theRe, cAllEd STuDlyBoT thAt RuNs thE StudlifiCatiOn-cAps
alGorithm oVeR blOgpOstS. FOr example, It Would tUrn this paRagRaph
iNto:

(M-x studlify-region in emacs, if you want to try it yourself.)

So what StudlyBot does is that if you give it access to read and comment
on your posts, someone can point it at a blogpost and it'll make a
hilarious studlified version of the blogpost as a comment. So funny!

Oh also, since you're the one that requested it, it'll also send the
studlified post to your (the requestor's) inbox.

You probably see where I'm going with this.

So, Mallet would like to read some blogposts he doesn't have access to.
But the person who runs the blog is a huge fan of StudlyBot, and thus
gave him access to *read and comment on all posts*.

So, Mallet merely points StudlyBot at a blogpost he'd like to read, and
gets Cc'ed on a studlified version of the blogpost in his inbox. Wow,
thanks studlybot! How generous.
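
(Here is roughly what that naive bot looks like in Python, names invented,
to make the hole concrete: it exercises *its* read capability on a post
Mallet merely names, then leaks the result to Mallet's inbox.)

    def studlify(text):
        # crude stand-in for M-x studlify-region
        return "".join(c.upper() if i % 3 else c.lower() for i, c in enumerate(text))

    def make_studlybot(read_post, post_comment):
        # read_post / post_comment: capabilities the blog owner granted the bot
        def handle_request(post_id, requestor_inbox):
            text = read_post(post_id)               # bot's authority, Mallet's choice
            post_comment(post_id, studlify(text))   # the intended, funny part
            requestor_inbox(studlify(text))         # the leak: a Cc to the requestor
        return handle_request

    # A tiny in-memory demo (everything here is made up):
    posts = {"locked-post-17": "secret draft about the next animation"}
    comments, mallets_inbox = [], []
    studlybot = make_studlybot(posts.get,
                               lambda pid, text: comments.append((pid, text)))
    studlybot("locked-post-17", mallets_inbox.append)
    print(mallets_inbox)      # Mallet reads the studlified secret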

Maybe you think this is foolishly, naively implemented, and nobody would
write rights-amplification-using code like this in practice. I'm not so
sure. I would observe, though, that the way StudlyBot was written, from
Mallet's exploitation perspective it's little different than if
StudlyBot had ACL group access to the post, and we all would have
wagged our fingers at an ACL approach.

Clearly StudlyBot was open to a confused deputy attack. The real
mistake was adding the Cc: field, but that's an easy and very familiar
kind of confused deputy mistake. My question is: were there
recommendations we could make that could have helped prevent it? Are
there principles of good rights-amplification-using-behaviors that can
help keep us safe?

- Chris

Kevin Reid

Jan 31, 2020, 11:57:16 AM
to cap-...@googlegroups.com
On Fri, Jan 31, 2020 at 7:13 AM Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
(From memory:) Each student has various methods that anyone can access, (eg, student.description() to see their profile description), etc. However some students are noisy, and the teacher should have capabilities to perform administrative aspects such as silencing a problematic student from being able to send messages to the channel.  So it is better if teachers are given an unsealer for admin privileges, and then there is a student.admin() procedure that returns a sealed capability to do things like silencing (so, unsealer(student.admin()).silence() or something.

Note that the teacher('s user agent; I don't think this is an appropriate case for raw human-operated access) could equally well have student 'public profile' caps paired with 'admin this student's presence in this class' caps in storage. This gives the teacher the same authority but does not involve a sealed box.
 
Now the teacher can do things relative to each student that others can not who have access to the same designation.  Is this an opportunity for a confused deputy vulnerability?

Sure. Student A falsely says Student B was doing something disruptive, and the teacher silences Student B. The teacher is a confused deputy (using their authority incorrectly on behalf of Student A).

Note that this is a confused deputy of the ordinary human type. In order to have a confused deputy in the sense of a formally specifiable security bug, we would need a specification for what the teacher does, which this example does not and cannot have — this is a case where human judgement is expected to be used. We cannot apply the same type of reasoning, because the systems we build for humans work in identities and ambient-ish authority more than the ones we build for programs do, as a fact about their intended user interface, not their implementation.

Christopher Lemmer Webber

Jan 31, 2020, 12:19:51 PM
to cap-...@googlegroups.com
Kevin Reid writes:

> On Fri, Jan 31, 2020 at 7:13 AM Christopher Lemmer Webber <
> cwe...@dustycloud.org> wrote:
>
>> (From memory:) Each student has various methods that anyone can access,
>> (eg, student.description() to see their profile description), etc. However
>> some students are noisy, and the teacher should have capabilities to
>> perform administrative aspects such as silencing a problematic student from
>> being able to send messages to the channel. So it is better if teachers
>> are given an unsealer for admin privileges, and then there is a
>> student.admin() procedure that returns a sealed capability to do things
>> like silencing (so, unsealer(student.admin()).silence() or something.
>>
>
> Note that the teacher('s user agent; I don't think this is an appropriate
> case for raw human-operated access) could equally well have student 'public
> profile' caps paired with 'admin this student's presence in this caps' in
> storage. This gives the teacher the same authority but does not involve a
> sealed box.

Agreed, but it does add "eq?" in such a way that, as I think I've argued
earlier in this email chain, also introduces the possibility of a
confused deputy (even though I labeled it "confusion introduced by the
introduction of rights amplification or eq?"). The fundamental problem seems to be the
same: if Alice is the teacher and Bob is the student (classroom-proxy),
Alice does have more authority over the student than Mallet does, even
though now we're doing it via identity association rather than
unsealing.

But perhaps there is some superiority to this approach over unsealing;
I'm genuinely unsure, and am trying to scope out the situation.

>> Now the teacher can do things relative to each student that others can not
>> who have access to the same designation. Is this an opportunity for a
>> confused deputy vulnerability?
>
>
> Sure. Student A falsely says Student B was doing something disruptive, and
> the teacher silences Student B. The teacher is a confused deputy (using
> their authority incorrectly on behalf of Student A).
>
> Note that this is a confused deputy of the ordinary human type. In order to
> have a confused deputy in the sense of a formally specifiable security bug,
> we would need a *specification for what the teacher does*, which this
> example does not and cannot have — this is a case where human judgement is
> expected to be used. We cannot apply the same type of reasoning, because
> the systems we build for humans work in identities and ambient-ish
> authority more than the ones we build for programs do, as a fact about
> their intended user interface, not their implementation.

Yes, I agree. It's similar also to the Mallet-as-evil-boss scenario I
had laid out in the email I wrote following the one you responded
to... so I'm glad we agree that once we introduce identity comparison,
even/especially in a human situation rather than a program situation,
confused deputy risks appear.

(I would love your analysis of my blogpost example in that next email,
btw!)

Mark S. Miller

Jan 31, 2020, 12:39:26 PM
to cap-talk, Martin Scheffler
On Fri, Jan 31, 2020 at 7:13 AM Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
I think it would also be helpful if I provided a specific scenario to
discuss.  My thinking about this came from a paper which finally helped
me understand the power of rights amplification and why I should care
about it, and then incidentally upon later reflection, also lead to my
doubts.  The paper is this one:

  https://www.uni-weimar.de/fileadmin/user/fak/medien/professuren/Virtual_Reality/documents/publications/capsec_vr2008_preprint.pdf

"Object Capability Security in Virtual Environments"

Glad you found that. Cool paper! 

cc Martin Scheffler, the first author


 


--
  Cheers,
  --MarkM

Christopher Lemmer Webber

Jan 31, 2020, 12:43:49 PM
to cap-...@googlegroups.com, Jonathan A Rees
Another vulnerability related to this, and I think related to one in
Kevin's previous emails... and also one I directly put in my cap-chat
email from not long ago (and was wondering if people would identify it):

Let's say Mallet has a blog. Alice also has a blog, but some of the
blogposts are sealed for friends-only:

(define alice-blogpost
  (first (alice-blog 'posts)))
(alice-blogpost 'contents) ; => <sealed alice-friends>

Mallet is not one of Alice's friends (meaning: does not have the
alice's-friends unsealer), and so can't unseal the capability to
retrieve the contents.

However, Mallet realizes that Bob is a friend of Alice, and Bob also
reads Mallet's blog, and so Mallet comes up with a clever plan to find
out what it's about.

(post-mallet-blogpost #:contents (alice-blogpost 'contents) ...)

;; the post we just made
(define mallet-blogpost
  (first (mallet-blog 'posts)))

(eq? (alice-blogpost 'contents)
     (mallet-blogpost 'contents)) ; => #t

Mallet's blogpost has comments enabled and on. Bob reads the post and
is confused, posting the following comment:

"I don't understand... I'm pretty sure Alice made this blogpost about
her mother's death as friends-only to her own blog. How did you get
this on your blog?"

Another prank... no sealers/unsealers or eq? up my sleeve in this one!

Mallet posts a mean blogpost saying awful, terrible things, and puts the
replies capability as being the same one that goes to one of Alice's
blogposts. Suddenly Alice gets a reply to one of her nicer blogposts
with the following comment: "Disgusting. I can't believe you wrote
something this terrible. I advise you to take this blogpost down."

Of course, the person who wrote the reply thought they were replying to
Mallet....

Christopher Lemmer Webber

Jan 31, 2020, 12:49:21 PM
to cap-...@googlegroups.com, Martin Scheffler
I'm a huge fan of that paper. It really opened my eyes to a lot of
things and helped me understand how to build the things I wanted to
build. I hope this doesn't appear to be a knock on it; quite the
opposite. So thank you Martin for writing it! :)

Martin, I also wrote up a follow-up post that I think is relevant to
framing this, if that's of interest:

https://groups.google.com/d/msg/cap-talk/5Q8BM3aW0Gw/lHzTgXaQAgAJ

(The thread in general is probably useful for context, though.)

Christopher Lemmer Webber

Jan 31, 2020, 1:50:48 PM
to cap-...@googlegroups.com, Martin Scheffler
Going to continue a bit here...

We're mid-Friam, but thought I'd encode an observation. I don't think
it fully covers my concerns from the blog example, but I still think
this is worth recording here:

- Hunting in a bag for an unsealer-to-fit-the-job: the worst approach

- Sealers/Unsealers that all come from a "trusted domain": safer,
though it does resemble some perimeter-security issues

- Passing a sealed box to a closure which already has and anticipates
the use of a single unsealer: safest approach, and least likely
to have confused deputies

Another good observation (largely from Kevin Reid) is that confused
deputies come from the realm of value judgements; humans are value
judgement machines, and are frequently confused deputies, but that is
often an appropriate role for humans. Horton is an example of
making those kinds of value judgements explicitly in a human domain (and
quarantining where that is). There's a lesson in there somewhere on how
to identify a good design use for rights amplification, but I haven't
put my finger on it exactly.

Kevin Reid

Feb 2, 2020, 7:36:51 PM
to cap-...@googlegroups.com
Commenting on cwebber's examples, as requested:

On Fri, Jan 31, 2020 at 7:44 AM Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
So what StudlyBot does is that if you give it access to read and comment on your posts, someone can point it at a blogpost and it'll make a hilarious studlified version of the blogpost as a comment.  So funny!

Oh also, since you're the one that requested it, it'll also send the studlified post to you, the requestor's, inbox.

You probably see where I'm going with this.

So, Mallet would like to read some blogposts he doesn't have access to. But the person who runs the blog is a huge fan of StudlyBot, and thus gave him access to *read and comment on all posts*.

So, Mallet merely points StudlyBot at a blogpost he'd like to read, and gets Cc'ed on a studlified version of the blogpost in his inbox.  Wow, thanks studlybot!  How generous.

If we ignore for the moment the desire for identities, then the natural capability structure as I see it is that each authorization of StudlyBot to read posts is granted to an independent instance thereof. If that is true, then the point at which Mallet's attack fails is that Mallet presumably does not have access to invoke the relevant instance of StudlyBot.

(If we build a multi-user communication system that has bots in it and wishes to strongly support least authority, I think it might make sense for the system to inherently support hierarchical identities, such that all the instances can announce themselves as separate instances of the same service — so a user doesn't have to deal with seeing StudlyBot.1 through StudlyBot.99 but rather the system automatically recommends the correct instance for the context if possible. Being able to make this distinction also resolves questions of, when the bot is customizable, whose customizations apply to any situation.)
 
Clearly StudlyBot was open to a confused deputy attack.  The real mistake was adding the Cc: field, but that's an easy and very familiar kind of confused deputy mistake.  My question is: were there recommendations we could make that could have helped prevent it?

I'd say that the way StudlyBot behaves falls into the bucket-of-unsealers antipattern — it has authority to read posts, which it must exercise using an unsealer, and yet it also makes use of that authority on behalf of a requester who need not demonstrate any authorization of their own for using that unsealer.

On to the second example!

On Fri, Jan 31, 2020 at 9:43 AM Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
Let's say Mallet has a blog.  Alice also has a blog, but some of the blogposts are sealed for friends-only:

  (define alice-blogpost
    (first (alice-blog 'posts)))
  (alice-blogpost 'contents) ; => <sealed alice-friends>

Mallet is not one of Alice's friends (meaning: does not have the alice's-friends unsealer), and so can't unseal the capability to retrieve the contents.

However, Mallet realizes that Bob is a friend of Alice, and Bob also reads Mallet's blog, and so Mallet comes up with a clever plan to find out what it's about.

  (post-mallet-blogpost #:contents (alice-blogpost 'contents) ...)
 …
Mallet's blogpost has comments enabled and on.  Bob reads the post and is confused, posting the following comment: …

Let me propose how this should fail: Most blog/social user interfaces display to the user whether a post is public or restricted access — this is important precisely to let the reader know that they shouldn't share without checking first. In this case, we suppose that Bob has a UA which possesses the alice's-friends unsealer —

I was going to say that Bob's UA should warn Bob that an Alice-labeled box was found in Mallet's post, but I have an even better idea:

Your attack depends on the well-known property that comments on a post often refer to the contents of the post. Furthermore, it is generally the case that comments on a post have the same access restrictions as the post itself. Therefore, there is an elegant protocol solution here: Bob should send his comment to a reply capability that is within the sealed box, part of the post contents. That way, even if Bob's UA does not provide any warnings or Bob does not notice them, Mallet still cannot read the comment — it becomes a comment on Alice's original post and Mallet cannot read it, and Alice or Bob have enough information to detect the attempted attack.

We can describe this situation as the sealed box and reply capability forming something somewhat like a Voluntary Oblivious Compliance membrane around the content "conversations among Alice's friends". The original policy is set by Alice when she constructs the capability to reply to her posts.
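
A sketch of that protocol fix in Python (names invented): the reply capability travels inside the friends-only box, so even a box relocated into Mallet's post routes comments back to Alice:

    def make_brand():
        contents = {}
        def seal(payload):
            box = object()
            contents[box] = payload
            return box
        def unseal(box):
            return contents[box]             # fails for boxes from other brands
        return seal, unseal

    seal_for_friends, unseal_for_friends = make_brand()

    alice_comments = []
    def alice_reply_cap(text):               # replies land on Alice's post only
        alice_comments.append(text)

    friends_only_post = seal_for_friends({
        "contents": "the friends-only post body ...",
        "reply": alice_reply_cap,            # policy set by Alice at sealing time
    })

    # Even if Mallet embeds this very box in his own blogpost, Bob's UA unseals
    # it and replies through the enclosed capability; Mallet never sees the comment.
    opened = unseal_for_friends(friends_only_post)
    opened["reply"]("Thinking of you, Alice.")
    print(alice_comments)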
 
Mallet posts a mean blogpost saying awful, terrible things, and puts the replies capability as being the same one that goes to one of Alice's blogposts.  Suddenly Alice gets a reply to one of her nicer blogposts …

To address this as well as other possible attacks, it would be advisable to require that the metadata of a reply (in particular, within the part that is "signed" as coming from Bob) identify the post it is replying to. This is important from a semantic perspective, not just for the practical usage of capabilities, to ensure that the context in which the words were written is available.

This also prevents someone taking some of Bob's regular posts and sending them elsewhere as replies, making Bob look like a spammer.

This also can (but whether this is a good idea is debatable as a matter of social technology) allow Bob to put on record which version of a post he replied to — many systems allow editing posts, which is valuable for posting corrections and retractions, but also allows making anyone who replies to the post look foolish, if there is no record that Bob replied to a different version.
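
Concretely, the record Bob "signs" might look something like the sketch
below, with the signature modeled as a sealer only Bob can invoke rather
than any particular cryptographic scheme (all names hypothetical):

  ;; a reply record whose signed part covers the context, not just the
  ;; words
  (struct reply (in-reply-to    ; designation of the post being replied to
                 post-version   ; which revision Bob actually saw
                 body)          ; the words themselves
    #:transparent)

  ;; Bob's "signature" as a sealer only Bob holds; anyone with the
  ;; matching verifier can check that the reply came from Bob and see
  ;; which post and version it was written against, so the body cannot
  ;; be replayed against some other post or some later edit
  (define-values (bob-sign bob-verify)
    (let ([brand (gensym 'bob)])
      (values (lambda (rec) (lambda (key) (and (eq? key brand) rec)))
              (lambda (signed) (signed brand)))))

  (define signed-reply
    (bob-sign (reply 'alice-post-42 3 "What does this mean, Alice?")))
  ;; (bob-verify signed-reply) => the full reply record, context and all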

Christopher Lemmer Webber

Feb 3, 2020, 7:48:25 AM
to cap-...@googlegroups.com
Kevin Reid writes:

> Commenting on cwebber's examples, as requested:
>
> On Fri, Jan 31, 2020 at 7:44 AM Christopher Lemmer Webber <
> cwe...@dustycloud.org> wrote:
>
>> So what StudlyBot does is that if you give it access to read and comment
>> on your posts, someone can point it at a blogpost and it'll make a
>> hilarious studlified version of the blogpost as a comment. So funny!
>>
>> Oh also, since you're the one that requested it, it'll also send the
>> studlified post to you, the requestor's, inbox.
>>
>> You probably see where I'm going with this.
>>
>> So, Mallet would like to read some blogposts he doesn't have access to.
>> But the person who runs the blog is a huge fan of StudlyBot, and thus gave
>> him access to *read and comment on all posts*.
>>
>> So, Mallet merely points StudlyBot at a blogpost he'd like to read, and
>> gets Cc'ed on a studlified version of the blogpost in his inbox. Wow,
>> thanks studlybot! How generous.
>>
>
> If we ignore for the moment the desire for identities, then the natural
> capability structure as I see it is that each authorization of StudlyBot to
> read posts is granted to an independent instance thereof. If that is true,
> then the point at which Mallet's attack fails is that Mallet presumably
> does not have access to invoke the relevant instance of StudlyBot.

Ignoring identities is a heck of a thing to do in a social network,
though. This sounds possibly ok for a bot... it doesn't work as well
for a user, say Samantha. Even in StudlyBot's case, in general multiple
people talking about StudlyBot may want to refer to the same StudlyBot.

I'm also unconvinced that users will be capable of selecting the correct
StudlyBot. I imagine users submitting frustrated reports: I keep trying
to message StudlyBot but ever since you made this change I can't figure
out which one to use!

> (If we build a multi-user communication system that has bots in it and
> wishes to strongly support least authority, I think it might make sense for
> the system to inherently support hierarchical identities, such that all the
> instances can announce themselves as separate instances of the same service
> — so a user doesn't have to deal with seeing StudlyBot.1 through
> StudlyBot.99 but rather the system automatically recommends the correct
> instance *for the context* if possible. Being able to make this distinction
> also resolves questions of, when the bot is customizable, whose
> customizations apply to any situation.)
>
>
>> Clearly StudlyBot was open to a confused deputy attack. The real mistake
>> was adding the Cc: field, but that's an easy and very familiar kind of
>> confused deputy mistake. My question is: were there recommendations we
>> could make that could have helped prevent it?
>
>
> I'd say that the way StudlyBot behaves falls into the bucket-of-unsealers
> antipattern — it has authority to read posts, *which it must exercise using
> an unsealer*, and yet it also makes use of that authority on behalf of a
> requester who need not demonstrate any authorization of their own for using
> that unsealer.

I agree. My concern is that there isn't a way to deal with it in a
social network system that still offers a nice user experience... I'm
not convinced that users can survive correctly navigating the idea of
multiple instances mapped to individual actors.

Is there an inverse way to do it: users *addressing* StudlyBot message a
single StudlyBot, but internally StudlyBot has subdivided its logic per
user... the "closure over an expected unsealer" approach?

Even then, I'm not sure: how does either of these fix the Cc: bug? What
advice should we give users so that they can recognize not to perform
the Cc: bug?

> On to the second example!
>
> On Fri, Jan 31, 2020 at 9:43 AM Christopher Lemmer Webber <
> cwe...@dustycloud.org> wrote:
>
>> Let's say Mallet has a blog. Alice also has a blog, but some of the
>> blogposts are sealed for friends-only:
>>
>> (define alice-blogpost
>>     (first (alice-blog 'posts)))
>> (alice-blogpost 'contents) ; => <sealed alice-friends>
>>
>> Mallet is not one of Alice's friends (meaning: does not have the
>> alice's-friends unsealer), and so can't unseal the capability to retrieve
>> the contents.
>>
>> However, Mallet realizes that Bob is a friend of Alice, and Bob also reads
>> Mallet's blog, and so Mallet comes up with a clever plan to find out what
>> it's about.
>>
>> (post-mallet-blogpost #:contents (alice-blogpost 'contents) ...)
>
> …
>
> Mallet's blogpost has comments enabled and on. Bob reads the post and is
>> confused, posting the following comment: …
>
>
> Let me propose how this should fail: Most blog/social user interfaces
> display to the user whether a post is public or restricted access —

I agree with this, and we came to similar conclusions in the RWoT paper
on secure UIs we were building. Scope should be clear and distinct and
worked into the UX.

> this is important precisely to let the reader know that they shouldn't
> share without checking first. In this case, we suppose that Bob has a
> UA which possesses the alice's-friends unsealer —
>
> I was going to say that Bob's UA should warn Bob that an Alice-labeled box
> was found in Mallet's post, but I have an even better idea:

That was the answer I was anticipating...

> Your attack depends on the well-known property that comments on a post
> often refer to the contents of the post. Furthermore, it is generally the
> case that comments on a post have the same access restrictions as the post
> itself. Therefore, there is an elegant protocol solution here: Bob should
> send his comment *to a reply capability that is within the sealed box, part
> of the post contents*. That way, even if Bob's UA does not provide any
> warnings or Bob does not notice them, Mallet still cannot read the comment
> — it becomes a comment on Alice's original post and Mallet cannot read it,
> and Alice or Bob have enough information to detect the attempted attack.

Ah, good design!

> We can describe this situation as the sealed box and reply capability
> forming something *somewhat like* a Voluntary Oblivious Compliance membrane
> around the content "conversations among Alice's friends". The original
> policy is set by Alice when she constructs the capability to reply to her
> posts.
>
>
>> Mallet posts a mean blogpost saying awful, terrible things, and puts the
>> replies capability as being the same one that goes to one of Alice's
>> blogposts. Suddenly Alice gets a reply to one of her nicer blogposts …
>
>
> To address this as well as other possible attacks, it would be advisable
> such that the metadata of a reply (in particular, within the part that is
> "signed" as coming from Bob) must *identify the post it is replying to.* This
> is important from a semantic perspective, not just the practical usage of
> capabilities, to ensure that the context in which the words were written is
> available.

Yes, I agree.

I think it's interesting to see that once we introduced an
identity-centric system, dealing with the inescapable vulnerabilities
that come with it seems only to be solved by introducing more identity.

> This also prevents someone taking some of Bob's regular posts and sending
> them elsewhere as replies, making Bob look like a spammer.
>
> This also *can* (but whether this is a good idea is debatable as a matter
> of social technology) allow Bob to put on record *which version of* a post
> he replied to — many systems allow editing posts, which is valuable for
> posting corrections and retractions, but also *allows* making anyone who
> replies to the post look foolish, if there is no record that Bob replied to
> a different version.

Also agree.

Really good review Kevin, thank you so much! :)
- Chris

Kevin Reid

Feb 3, 2020, 10:34:08 AM
to cap-...@googlegroups.com
On Mon, Feb 3, 2020 at 4:48 AM Christopher Lemmer Webber <cwe...@dustycloud.org> wrote:
> If we ignore for the moment the desire for identities, then the natural
> capability structure as I see it is that each authorization of StudlyBot to
> read posts is granted to an independent instance thereof. If that is true,
> then the point at which Mallet's attack fails is that Mallet presumably
> does not have access to invoke the relevant instance of StudlyBot.

Ignoring identities is a heck of a thing to do in a social network, though.

What I mean is: "First, let's examine the problem if we weren't trying to build an identity-based system, then port that back to identities."
 
This sounds possibly ok for a bot... it doesn't work as well for a user, say Samantha.

Yes; this ties back to my remarks at the last Friam that humans can't be subdivided and therefore make hopefully-not-confused-deputy judgements all the time, and our systems need to support that.
 
  Even in StudlyBot's case, in general multiple people talking about StudlyBot may want to refer to the same StudlyBot.

My idea was that those people would likely share a social context that the right instance could be inferred from. But this is more 'idea worth trying' than 'general solution to this problem'.
 
Is there an inverse way to do it: users *addressing* StudlyBot message a single StudlyBot, but internally StudlyBot has subdivided its logic per user... the "closure over an expected unsealer" approach?

Sure. Lots of bots already internally implement a "when I receive a message from X, act according to my internal 'account' for X" model.
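
A sketch of that shape, with hypothetical names:

  ;; one externally visible bot, one internal "account" closure per sender
  (define (make-bot make-account)
    (define accounts (make-hash))
    (lambda (sender msg)
      (let ([account (hash-ref! accounts sender
                                (lambda () (make-account sender)))])
        (account msg))))

  ;; usage: each sender gets its own closed-over state
  (define bot
    (make-bot (lambda (sender)
                (define seen 0)
                (lambda (msg) (set! seen (add1 seen)) seen))))

The interesting work is all in what make-account closes over; that is
where the per-user subdivision either does or does not happen.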

Even then, I'm not sure: how does either of these fix the Cc: bug?  What advice should we give users so that they can recognize not to perform the Cc: bug?

In the Land of Ideal Software Engineering, the author writing a well-subdivided bot will be aided by their own architecture when they are doing something that spans multiple users. Think of it as a tiny multi-user ocap system, where the social identities are how the users 'log in' from outside the system.
 
I think it's interesting to see that once we introduced an identity-centric system, dealing with the inescapable vulnerabilities that come with it seems only to be solved by introducing more identity.

True, but also note that this is the same kind of thing as, for example, building a system on cryptographic signatures even if those signatures are not large-scale person-identities: you need to ensure that the signature covers everything that defines the correct interpretation of the message, or you become vulnerable to replay attacks and such.

I'd like to frame it less as 'introducing more identity, more-is-better', but rather 'the protocol that introduces identities but neglects to fully account for them is unsound'.

Also note that the idea of associating a reply with, not 'the message it's replying to', but 'enough context to correctly interpret the semantics of the reply', is a general security principle (as in signed messages as I mentioned above), and it is always applicable even if the system contains no identities. For example, if you send a message to a capability you received, it must not be possible for an attacker to redirect it to another recipient, even if the attacker has no information about the message or the original recipient.
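
One way to see that last point at the language level; a toy contrast,
not a protocol proposal, and all names made up:

  ;; name-based routing: an ambient registry maps names to recipients,
  ;; so anyone who can rebind a name can redirect the message
  (define registry (make-hash))
  (define (send-by-name name msg)
    ((hash-ref registry name) msg))

  ;; capability-based sending: delivery is the reference itself, so
  ;; there is nothing for an attacker to rebind
  (define (send-by-cap recipient msg)
    (recipient msg))

The protocol layer has to preserve the second property even when the
reference is realized as bits on a wire.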

Mark S. Miller

Feb 3, 2020, 2:12:11 PM
to cap-talk
This is becoming *really* interesting. And important! You two should co-author something explaining the problems and solutions in a self-contained manner.




--
  Cheers,
  --MarkM

Christopher Lemmer Webber

Feb 3, 2020, 5:17:09 PM
to cap-...@googlegroups.com
Thanks Mark... I had a lot more thoughts on this after this morning's
exchange, considering this breakdown as a further examination of both
introducing ocaps and why they're important, and then introducing the
identity part of the conversation. But then I started cleaning my
kitchen after lunch, started doubting myself, and just cleaned the
fridge out instead. It's nice to have reassurance that this is going
somewhere good and worthwhile, time-wise.

I want to make something clear, btw: I'm being a bit of a devil's
advocate about the confused deputies that rights amplification and eq?
re-introduce into an ocap context, and I hope I'm not giving the
impression that, just because these risks re-appear, I think we're at
the same level of problems as, say, ACLs. I actually do think that we
have the right fundamental primitives to navigate things better... if
I can just express how...

I'll try to do a braindump this evening before I fully forget where I
was going.
>> but rather 'the protocol that introduces identities but *neglects to
>> fully account for them* is unsound'.
>>
>> Also note that the idea of associating a reply with, not 'the message it's
>> replying to', but 'enough context to correctly interpret the semantics of
>> the reply', is a general security principle (as in signed messages as I
>> mentioned above), and it is always applicable even if the system contains
>> no identities. For example, if you send a message to a capability you
>> received, it must not be possible for an attacker to redirect it to another
>> recipient, even if the attacker has no information about the message or the
>> original recipient.
>>
>
>
> --
> Cheers,
> --MarkM

Christopher Lemmer Webber

Feb 3, 2020, 6:16:19 PM
to cap-...@googlegroups.com, Kevin Reid, Alan Karp, Mark S. Miller, Jonathan A Rees
I'm going to write down some of the ideas I had this morning, in bullet
point form.

The following outline takes two weird metaphors and kind of switches
between them: ocap security as the lambda calculus, access as a magical
forest. I'm relatively convinced of the former as a good paradigm
(although maybe not accessible to everyone), but the latter is a bit
weirder. (There's another one, which the latter is derived from:
"access as a graph, of which, from an object's perspective, a portion
is lit up.") Bear with me. I am trying to lay out some ideas
that have been bubbling in the back of my mind for some time.

- (Part 1) Ocaps: Lambda, the Ultimate Security Mechanism

- (A reference to the original Lambda Papers by Sussman & Steele)

- Expand on an introduction to ocaps by relating to ordinary
programming (as another way of distilling JAR's thesis).
Argument passing and lexical scope is enough to do everything
(footnote: until it isn't, when we introduce eq? and
sealers/unsealers... but we will unveil this secret later).

- This has several advantages:

- "It's just ordinary argument passing". Programmers are already
used to this.

- Ambient Authority vs POLA

- Delegation and attenuation

- Composability of complex authority patterns as just normal,
everyday programming

- Safe from confused deputies (really!)

- ACLs Don't reference goes here, as a contrast

- Why are ocaps safe from confused deputies?

- Confused deputy: someone else can do something that you can't,
but you want to be able to do that too, so you want to trick them
into letting you do it.

- Old browser

- (Humans are giant confused deputies, but at least humans have
some level of "judgement" where we can sometimes save ourselves
from bad calls.)

- An access graph (or: a walk in a magical forest)

- From any given point or "perspective", a portion of the graph
lights up (including attenuated access).

- There is a magic forest: we can walk through it, and "what you
see is what you can do". (A rephrasing of "don't separate
designation from authority".)

- We aren't the only ones walking through this forest: while
our scope is limited, we ourselves cannot see the whole
forest... but we can communicate and coordinate with others
who can see more than we can. (Sometimes they may choose
also to relay only partial information, aka attenuation.)

- (This might not be a great metaphor admittedly; give it more
thought. It becomes more useful in the next section)

- Note that at this stage we have not yet introduced identity
or "hidden in plain sight" (sealed) information/authority.

- It turns out you can do almost anything you want in this
system. Entire societies exist in it. It is powerful enough
to build almost everything you want. And confused deputies
simply don't exist: remember that we don't even consider
identity at this level of abstraction... what you see is what
you can do, so the very notion of "someone else can do
something that you can't" doesn't even apply.

- (Part 2) Scoped breakage of our lambda purity

- Unfortunately, Lambda was only sort of the ultimate. Argument
passing is not quite enough. We will sometimes also need:

- A way of conversing about "the same thing", aka identity.
- Forest metaphor: if we both come to the same clearing, can
we both know we are pointing at the same tree?
- If we both know Phil the magic Pixie, can we both talk about
Phil? (It would be a weird world if we couldn't.)
(Also, this is the grant matcher problem)
- Accountability: we need to be able to build up a sense of
someone to decide if we should give them more rights or revoke
some that they currently have.

- A way of hiding information in plain sight, aka
sealers/unsealers.
- These are more or less like encryption-without-encryption via
language mechanisms (observation (thx JAR): any problems we are
identifying here also exist in usage of encryption)
- Hidden information
- Rights amplification: revealing keys/doors to walk through.

- (Bring up trademarks/signatures as kind of between these two
worlds?)

- We need both of these things, but they are about to cause problems
for us.

- We are no longer at "What you can see is what you can do". Let's
perform a mild paradigm shift: now we are at "The references you
possess are what you can do". (Which is still more powerful/safer
than "your identity is what you can do" which has enormous ambient
authority problems.)

- Why does this shift matter? By shifting from "if you can see it,
you can do it" we have introduced the possibility of referring to
others who can do things that we cannot. Instead of the graph
fully lighting up with all the things we can do, we will reach
verticies in the graph with "lock symbols" over them. Some objects
have keys which can open them, some do not. When should they
be opened... and who might you accidentally let in behind you?
Are we all now security guards, making judgement calls about who
we should let pass? That doesn't bode well for computer programs
especially (and it doesn't bode well in general, either).

- We wish we had the simplicity of our world where there was no
distinction between designation and authority again! Yet we can't
escape the need for identity and hidden information.

- Sounds like we need some patterns!
- Horton + petnames
- An entry/exit point for identity-based granting of new
authority
- Accountability
- A way for sharing and converting on local networks of identity
understanding, without centralized authority
- Safe usage of rights amplification
- Bad: "bucket of unsealers"
- Better-ish: trust domains
- Best: a closure which already "anticipates" the use of a
particular unsealer
- Also important: sealing a bundle of related rights together
- Proxy rights amplifiers which can be managed (Chris, expand on
wtf you mean)
- Trademarks and social networks (expand on observations from Kevin
Reid earlier in this thread)
- Clear designation of public vs private stuff (how to cleanly tie
this in? Hadn't discussed the social network context much before
and now we're discussing it a bunch)
- What else???

- Yes, we re-introduced confused deputies, but this is still better
than ACLs
- More composable
- More "natural": still works in terms of familiar programming
flows (many programming languages have eq?, and the addition of
sealers/unsealers is not hard to grok or compose)
- Dramatically less ambient authority
- Can be "quarantined" to smaller surfaces where we have to be
more careful (eq? and rights amplification are "caution signs"),
and we can identify patterns for dealing with those situations

- Conclusions
- Lambda is the ultimate security mechanism because
- It works the way programmers are already used to programming
and is just as composable
- More secure
- We can't escape identity and hidden information
- These do re-introduce confused deputies, but knowing that we
can develop patterns to correctly deal with that
- Horton allows a clean entry/exit point for identity "judgements".
We can let judgement-call-systems (humans, and some kinds of AI)
perform those tasks and let the rest of the system hum along
- Hooray for lambda... hooray for ocaps!

Okay that's an awful long list of bullet points. I don't know, what
does anyone think? Would this be worth fleshing out into a real thing?
(*cough cough*, too much forest pixie dust, I know.)

Thoughts welcome,
- Chris

Rob Markovic

Feb 3, 2020, 6:19:46 PM
to cap-...@googlegroups.com, Kevin Reid, Alan Karp, Mark S. Miller, Jonathan A Rees

Would anyone care to participate in an 'Awesome OCaps' project list and accept contributions via git?

Kevin Reid

Feb 3, 2020, 10:26:03 PM
to cap-...@googlegroups.com
On Mon, Feb 3, 2020 at 11:12 AM Mark S. Miller <eri...@gmail.com> wrote:
This is becoming *really* interesting. And important! You two should co-author something explaining the problems and solutions in a self contained manner.

I'm providing design-intuition-based spot patches to the proposed protocols and plausible-sounding candidates for general principles; I don't know that there's enough here to be coherent as a document that should have a reasonably concise conclusion.

Christopher Lemmer Webber

Feb 4, 2020, 7:12:04 AM
to cap-...@googlegroups.com
Kevin Reid writes:

> On Mon, Feb 3, 2020 at 11:12 AM Mark S. Miller <eri...@gmail.com> wrote:
>
>> This is becoming *really* interesting. And important! You two should
>> co-author something explaining the problems and solutions in a self
>> contained manner.
>>
>
> I'm providing design-intuition-based spot patches to the proposed protocols
> and plausible-sounding *candidates *for general principles; I don't know
> that there's enough here to be coherent as a document that should have a
> reasonably concise *conclusion*.

I also think my "lambda the ultimate security mechanism" paper is
admittedly maybe different than the paper MarkM was suggesting we write,
which I'm guessing was more of a continuation of this specific social
network example as a way to explore how to identify and plug confused
deputy potentials (though I don't know for sure what Mark was
suggesting; I am just full of self-doubt, having dropped the writeup on
the thread last night).

Nonetheless, I think Kevin is right that we're currently "spot patching"
examples, which is good; the only clear takeaway so far is that hunting
for unsealers in a bucket is a huge antipattern and gateway to confused
deputies imo, whereas a closure over code that "anticipates" a specific
unsealer is safer. But I think that we reviewed some other examples
that weren't related to that (the "binding which reply capability
belongs to which post" bit is one).

There is a place for this though; I am still thinking about and working
on writing up OcapPub, and it's no surprise that the questions I am
asking directly apply there. So that's one paper in which we can
incorporate this feedback (and maybe even provide explanations of what
vulnerabilities are opened if we don't take the spot-patched mitigations
we've identified here...).

I am nervous that the "lambda: the ultimate security mechanism" writeup
I did on this thread looks unrelated and maybe like a distraction:
that's more or less a writeup of "the way Chris perceives ocaps as being
'ordinary programming'" and extrapolating from there. I'm not sure if
it's interesting enough to expand. I do have motivation though: I would
like to have that be part of Spritely's collection of documentation (an
introduction to ocaps by appealing to the concepts that programmers
mostly already understand). But maybe I am just re-treading ground that
Rees and others have already written about and am wasting time.
Feedback welcome.

A little imposter-y this morning,
- Chris

Mark S. Miller

Feb 4, 2020, 10:51:02 AM
to cap-...@googlegroups.com
This imposter-y concern about whether, for example, Rees already said it all so why bother, reminds me of one of my favorite quotes:

“If old truths are to retain their hold on men’s minds, they must be restated in the language and concepts of successive generations. What at one time are their most effective expressions gradually become so worn with use that they cease to carry a definite meaning. The underlying ideas may be as valid as ever, but the words, even when they refer to problems that are still with us, no longer convey the same conviction; the arguments do not move in a context familiar to us; and they rarely give us direct answers to the questions we are asking. This may be inevitable because no statement of an ideal that is likely to sway men’s minds can be complete: it must be adapted to a given climate of opinion, presuppose much that is accepted by all men of the time, and illustrate general principles in terms of issues with which they are concerned.”

--Friedrich Hayek, The Constitution of Liberty

Always give full credit where due. To the extent that earlier works were the first statements of great truths --- Dennis and Van Horn, Hewitt and Baker, Saltzer and Schroeder, Hardy, Rees, Stiegler and Yee, Zooko --- convey that to your reader with full appreciation. But re-explain it in your own terms and the terms that best speak to your current audience. Looking back on it later, you will find that your explanation brought a new synthesis of these ideas. Even a small delta on great truths can be a great contribution.


 



Christopher Lemmer Webber

Feb 4, 2020, 2:56:36 PM
to cap-...@googlegroups.com
Wow, thanks Mark. This is what I really needed to hear today.

Dan Connolly

Feb 5, 2020, 9:00:55 AM
to cap-...@googlegroups.com, Kevin Reid, Alan Karp, Mark S. Miller, Jonathan A Rees
I would, in case that's not obvious. Is there context to this question that I'm missing?


Rob Markovic

Feb 5, 2020, 1:32:56 PM
to cap-...@googlegroups.com, Kevin Reid, Alan Karp, Mark S. Miller, Jonathan A Rees
Thanks Dan,

Due to the unusual format, at first glance I did not realize that is exactly the repository I was suggesting. My bad.

Hence a good place to review and add any ocap content that isn't listed.

Stay awesome,

++ Rob

Baldur Jóhannsson

Feb 6, 2020, 8:08:55 PM
to cap-talk
While reading through these bullet points of Chris Webber's brain dump, the word/concept of gevulot from Hannu Rajaniemi's Quantum Thief pops up. (Highly recommend that author btw)

Gevulot is an information access control system
used by a society on Mars to keep track of who is
allowed to know what by whom.
It sounds a lot like that „magical forest“ in certain ways.

But yeah, some combination of Horton, petnames, and a nyms system is worth looking into.
Perhaps Randy Farmer and Marc Stiegler might have
some pointers. Heck, looking into the context
?zones/areas? of Everyware that MarcS and Alan Karp (and possibly others too) formulated might
well be worth the while. (Reminds me, I have some ui/ux ideas connected to Everyware, Alan Karp's Automotive Assistance example of a smartphone ocap system, Blockly (for composing attenuation and delegation gadgets and such (yes I nest my parens)), and Vanadium/macaroons/ActiveCapCerts.)

To extend the analogy a bit, some things are hidden and only seen via spectacles that can
translate private colours into something more understandable.

Góðar stundir. (Good times.)
-Baldur


Baldur Jóhannsson

Feb 6, 2020, 8:24:13 PM
to cap-talk
> Even a small delta on great truths can be a great contribution.

This is why I relish these kinds of discussions.
A relish that helps keep that black dog away.

(The one Winston Churchill wrote about.)

Words without connection to today are words without meaning and soon lose currency.

(is: Orð án tengsla við nú til dags eru
merkingasnauð og brátt missa gildi)

So, let's keep at it ;-)
-Baldur


Tristan Slominski

Feb 8, 2020, 7:39:34 AM
to cap-...@googlegroups.com

"we will reach verticies in the graph with "lock symbols" over them”


I don’t understand this part of the analogy. What does this correspond to? I can see how “some objects have keys” can correspond to opaque references I cannot identify, but how do I perceive a vertex with a lock on it?



Christopher Lemmer Webber

Feb 8, 2020, 8:15:13 AM
to cap-...@googlegroups.com
The locked vertices are meant to represent something where designation
*doesn't* immediately give you authority (for most of our systems, we
want to combine designation and authority). For the purposes of this,
that means something that requires unsealing (so rights amplification).
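
A tiny sketch, reusing the toy eq?-on-a-gensym sealer idea from earlier
in the thread, in case it helps:

  ;; plain capability: holding the reference *is* the authority
  (define (make-counter)
    (define n 0)
    (lambda () (set! n (add1 n)) n))

  ;; a "locked vertex": you may hold and pass around the sealed box, so
  ;; it is visible in your part of the graph, but without the matching
  ;; unsealer it does nothing for you; unsealing is the rights
  ;; amplification step
  (define-values (seal unseal)
    (let ([brand (gensym 'brand)])
      (values (lambda (v) (lambda (k) (and (eq? k brand) v)))
              (lambda (box) (box brand)))))

  (define locked-counter (seal (make-counter)))
  ;; ((unseal locked-counter))  ; only a holder of unseal can do this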

Does that make sense?

Tristan Slominski

Feb 8, 2020, 8:20:48 AM
to cap-...@googlegroups.com
Ah ok, thank you. I don't deal much with rights amplification so it didn't immediately register. It makes sense.

Matt Rice

Feb 8, 2020, 12:58:21 PM
to cap-...@googlegroups.com
Not sure if it would help the metaphor, but one thing to look at is
hypergraphs; they have a corresponding incidence structure.
https://en.wikipedia.org/wiki/Hypergraph
https://en.wikipedia.org/wiki/Incidence_structure

unsealing could be viewed as basically adding a hyperedge to a
hypergraph, I guess.

Incidentally, the paohvis software for visualizing hypergraphs linked
from there looks like it should be nice for visualizing authority.
http://www.di.uniba.it/~buono/paohvis/paoh.html