Access control use cases


Alan Karp

unread,
Aug 18, 2025, 1:39:10 PM
to cap-...@googlegroups.com
I recently gave a talk at Stanford on this topic that generated enough interest for me to write up a document, https://alanhkarp.com/UseCases.pdf.  Comments will be appreciated and resented in equal measure.

--------------
Alan Karp

Jonathan S. Shapiro

unread,
Aug 25, 2025, 10:16:23 PM
to cap-...@googlegroups.com
Based on a quick read, here are my thoughts and questions:

1. It seems to me that the chained delegation section is, in effect, a reductio problem. Summarizing to make sure I've understood, the initial problem is that some people want to restrict Bob's delegation of the originally shared object to Dave. The proposed counter is that if this is prohibited, Bob will simply share Bob's credentials with Dave. If I'm reading the context on this correctly, by "credentials" you mean something equivalent to Bob's login authority. The problem with this as a justification is that - perhaps more than any other authority I can think of - the very last access right we want (or Bob wants) Bob to share is their principal credentials, and compared to that, the sharing we're worrying about here is completely insignificant.

And in point of fact we can prevent Bob from sharing their principal credentials using certain kinds of 2FA. I can't stop Bob from handing their fob to Dave, but the fact that they can't both hold the fob makes it impractical as a long-term hackaround for Bob and Dave. And I think there's a fair chance that a suitably crafted login audit system could identify the anomalous logins pretty quickly.

In the face of this, I have to ask whether "I'll just share my credentials" is still a credible counterargument to unauthorized sharing if we assume modern login guards?

In any case, isn't it true that sharing a proxy object is better for Bob, completely automatable, and harder to detect?


2. The revocation discussion feels like a straw dog. First, "revoke all the way down" is rarely what we want from a social point of view. The most common case is probably the one where, as you note, Dave has been fired. What's wanted from a social point of view is identity-based revocation rather than delegation-following revocation. That's not always the right thing to want, but it is often a reasonable and appropriate thing to want.

Aside: the fact that the "grant" operation was removed in OKL4, leaving "map" (which is inherently revocable) as the only way to share capabilities, has the consequence that all capability transfers must be performed by an agent that is part of the application's TCB. It is essentially impossible to write an application that can manage and recover from unanticipated revocations - especially when what is being revoked is memory access rights to something. It also means that the sharing agent must outlive the receiver in order for the receiver's resource pool to have any sort of sensible semantics.

Google tries to do an imperfect but vaguely reasonable thing in the "departing user" situation: when deleting a user, it asks what new user should take over ownership of the departing user's resources.

3. In independent delegations, I think it's a fascinating and situational question whether what we want to revoke is the delegation or the access. There are use cases for both, but I think that in social terms what's commonly desired is to revoke the access. There's also an underlying question here about how overlapping delegations combine from an authority perspective - an example of which you note when you talk about the problems with RBAC systems in combined delegations.

4. There is a way in which I find CSRF amusing. The underlying problem is that origin-based access control is a form of identity-based access control based on the originating device's identity, or in some variations on the identity of the host that supplied the code that is currently being obeyed. For many use cases, capability sharing isn't a solution because there's no clear point of contact between the parties at which an authorizing act of sharing might occur.

5. When you discuss "transitive access", do I understand correctly that the threat here is that Bob can implement a proxy? If so, I agree, and I think this is actually the more realistic circumvention threat you want in the discussion of chained delegation. To make matters fun, a machine-readable interface specification makes the construction of a proxy object 100% automatable.


Quick responses, and I don't know that I'd particularly go to the mat on any of them. If they provoke some useful thought on your part, that seems good.


Jonathan


--
You received this message because you are subscribed to the Google Groups "cap-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cap-talk+u...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/cap-talk/CANpA1Z2p0WEa2bo3U_jgwQdgr8f6BbqaZbiBTOyeUMD4gcSSrQ%40mail.gmail.com.

Alan Karp

unread,
Aug 26, 2025, 5:04:37 PM
to cap-...@googlegroups.com
Thanks for the thoughtful remarks.  My responses are inline.

--------------
Alan Karp


On Mon, Aug 25, 2025 at 7:16 PM Jonathan S. Shapiro <jonathan....@gmail.com> wrote:
Based on a quick read, here are my thoughts and questions:

1. It seems to me that the chained delegation section is, in effect, a reductio problem. Summarizing to make sure I've understood, the initial problem is that some people want to restrict Bob's delegation of the originally shared object to Dave.

The desire to block re-delegation is almost universal.  Even SPKI has a do-not-delegate bit despite Bill Frantz's best efforts.
 
The proposed counter is that if this is prohibited, Bob will simply share Bob's credentials with Dave. If I'm reading the context on this correctly, by "credentials" you mean something equivalent to Bob's login authority.

Yes.
 
The problem with this as a justification is that - perhaps more than any other authority I can think of - the very last access right we want (or Bob wants) Bob to share is their principal credentials, and compared to that, the sharing we're worrying about here is completely insignificant.

Sharing login credentials is a huge problem.  Ping Identity runs webinars on how to keep your employees from doing it.  The HP employee handbook said that sharing your login password with anyone but IT was a firing offence.  In spite of that, virtually all managers shared their passwords with their admins as the only way to get their work done. 

And in point of fact we can prevent Bob from sharing their principal credentials using certain kinds of 2FA. I can't stop Bob from handing their fob to Dave, but the fact that they can't both hold the fob makes it impractical as a long-term hackaround for Bob and Dave. And I think there's a fair chance that a suitably crafted login audit system could identify the anomalous logins pretty quickly.

Indeed, you can, and some organizations, such as the NSA, do prevent credential sharing.  Those mechanisms add a lot of friction, but I don't think that's the real reason they're not used.  I think companies understand that, absent the ability to delegate, credential sharing is necessary for people to get their work done. 

In the face of this, I have to ask whether "I'll just share my credentials" is still a credible counterargument to unauthorized sharing if we assume modern login guards?

It is until corporations adopt more stringent measures. 

In any case, isn't it true that sharing a proxy object is better for Bob, completely automatable, and harder to detect?

Yes, but most people don't know how to set one up. 
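To make concrete how little "setting one up" takes: here's a minimal sketch of the kind of forwarder Bob could hand Dave instead of his credentials. Everything here (the `service` object, `makeForwarder`) is a hypothetical stand-in, not anything from the paper; it just shows that a forwarding proxy is a few lines and even gives Bob revocation for free.

```javascript
// Hypothetical stand-in for a service capability Bob already holds.
const service = {
  read(key) { return `value-of-${key}`; },
};

// Bob gives Dave this forwarder rather than his login credentials.
// Every call Dave makes is silently exercised under Bob's authority,
// and Bob can cut Dave off whenever he likes.
function makeForwarder(target) {
  let enabled = true;
  return {
    read(key) {
      if (!enabled) throw new Error("revoked");
      return target.read(key);
    },
    revoke() { enabled = false; },
  };
}

const daveView = makeForwarder(service);
daveView.read("report"); // behaves exactly like the real service
```

From the resource's point of view, requests through `daveView` are indistinguishable from Bob's own, which is why this circumvention is harder to detect than a shared password.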


2. The revocation discussion feels like a straw dog. First, "revoke all the way down" is rarely what we want from a social point of view. The most common case is probably the one where, as you note, Dave has been fired. What's wanted from a social point of view is identity-based revocation rather than delegation-following revocation. That's not always the right thing to want, but it is often a reasonable and appropriate thing to want.

The "hazard" is sock puppets, which is solved by revoking all the way down.  

Aside: the fact that the "grant" operation was removed in OKL4, leaving "map" (which is inherently revocable) as the only way to share capabilities, has the consequence that all capability transfers must be performed by an agent that is part of the application's TCB. It is essentially impossible to write an application that can manage and recover from unanticipated revocations - especially when what is being revoked is memory access rights to something. It also means that the sharing agent must outlive the receiver in order for the receiver's resource pool to have any sort of sensible semantics.

That sounds like an artifact of that particular implementation.  Do you think that case is a hazard distinct from the ones in my paper? 

Google tries to do an imperfect but vaguely reasonable thing in the "departing user" situation: when deleting a user, it asks what new user should take over ownership of the departing user's resources.

I believe that's common practice in the enterprise. 

3. In independent delegations, I think it's a fascinating and situational question whether what we want to revoke is the delegation or the access. There are use cases for both, but I think that in social terms what's commonly desired is to revoke the access. There's also an underlying question here about how overlapping delegations combine from an authority perspective - an example of which you note when you talk about the problems with RBAC systems in combined delegations.

The common case is an employee in a support role who is doing work for different individuals.  They may independently delegate access to the same resource.  I believe the cases where you want to revoke access rather than just a delegation arise from a higher level policy that needs intervention by someone other than the delegators.  Perhaps that's a hazard distinct from the ones in the paper. 

4. There is a way in which I find CSRF amusing. The underlying problem is that origin-based access control is a form of identity-based access control based on the originating device's identity, or in some variations on the identity of the host that supplied the code that is currently being obeyed. For many use cases, capability sharing isn't a solution because there's no clear point of contact between the parties at which an authorizing act of sharing might occur.

You might be able to use an OAuth-like flow where the page tries to access the resource and is denied.  The system then redirects to the user who can grant access.  The page then tries again and succeeds. 
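That deny-then-grant flow can be sketched in a few lines. This is a toy model, not any real OAuth API: the consent redirect is collapsed to a direct `userGrants` call, and all names here are made up for illustration.

```javascript
// Toy grant table: which (origin, resource) pairs the user has approved.
const grants = new Set();

// The page's request: denied until the user has authorized this origin.
function fetchResource(origin, resource) {
  if (!grants.has(`${origin}:${resource}`)) {
    return { status: 403, action: "redirect-user-to-consent" };
  }
  return { status: 200, body: `contents of ${resource}` };
}

// Stand-in for the consent step: the authorizing act of sharing.
function userGrants(origin, resource) {
  grants.add(`${origin}:${resource}`);
}

fetchResource("page.example", "photo").status; // 403 - triggers consent redirect
userGrants("page.example", "photo");
fetchResource("page.example", "photo").status; // 200 - retry succeeds
```

The point is that the denial itself supplies the missing point of contact: it is the moment at which the user can be asked to perform an explicit act of sharing.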
  

5. When you discuss "transitive access", do I understand correctly that the threat here is that Bob can implement a proxy?

No, it's about composing independent services.  An example is a voice remote that can access either your streaming service or your smart thermostat.  In an ACL-based system, you can fool the voice service into returning the viewing history of another customer, or the voice service can erroneously set your thermostat based on a request to see the movie Fahrenheit 451.

My transitive access paper goes into more detail.
 
If so, I agree, and I think this is actually the more realistic circumvention threat you want in the discussion of chained delegation. To make matters fun, a machine-readable interface specification makes the construction of a proxy object 100% automatable.

So does AI vibe coding. 

Quick responses, and I don't know that I'd particularly go to the mat on any of them. If they provoke some useful thought on your part, that seems good.

Very valuable.  Thanks. 


Jonathan


On Mon, Aug 18, 2025 at 10:39 AM Alan Karp <alan...@gmail.com> wrote:
I recently gave a talk at Stanford on this topic that generated enough interest for me to write up a document, https://alanhkarp.com/UseCases.pdf.  Comments will be appreciated and resented in equal measure.

--------------
Alan Karp


Mark S. Miller

unread,
Aug 26, 2025, 5:38:13 PM
to cap-...@googlegroups.com

If so, I agree, and I think this is actually the more realistic circumvention threat you want in the discussion of chained delegation. To make matters fun, a machine-readable interface specification makes the construction of a proxy object 100% automatable.

In E, AmbientTalk, HardenedJS/Endo/OCapN, and Spritely/OCapN there are no machine-readable interface specs, but the construction of proxy objects and membranes is 100% automated.

Separately, even at the raw JS language level, proxies and membranes are imperfect, practically transparent for almost all purposes (really!), 100% automated, and not dependent on any interface spec.
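For readers who haven't seen it: the generic, spec-free construction at the raw JS level is roughly the following, using the standard `Proxy.revocable` API. This is only a sketch of the idea - a real membrane also wraps arguments and return values recursively, which this forwarder does not.

```javascript
// Generic wrapper: forwards every property access to the target without
// knowing anything about the target's interface - no spec required.
function wrap(target) {
  return Proxy.revocable(target, {
    get(t, prop, receiver) {
      const value = Reflect.get(t, prop, receiver);
      // Bind methods so `this` still refers to the underlying target.
      return typeof value === "function" ? value.bind(t) : value;
    },
  });
}

const { proxy, revoke } = wrap({ greet: (name) => `hello, ${name}` });
proxy.greet("world"); // "hello, world"
revoke();             // every later use of proxy now throws a TypeError
```

Because the `get` trap is written once for all properties, the same few lines wrap any object whatsoever, which is what "100% automated with no interface spec" cashes out to.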


 

So does AI vibe coding. 

WAT?
 
--
  Cheers,
  --MarkM

Mark S. Miller

unread,
Aug 26, 2025, 5:43:10 PM
to cap-...@googlegroups.com
KeyKOS (unlike EROS and Coyotos) had no machine-readable interface specs, but proxies and membranes there were also imperfect, practically transparent for almost all purposes (really!), and 100% automatable. I write "automatable" rather than "automated" because, IIRC, there was a complete paper design everyone believed in but it was never implemented.

--
  Cheers,
  --MarkM

Alan Karp

unread,
Aug 26, 2025, 5:50:05 PM
to cap-...@googlegroups.com
On Tue, Aug 26, 2025 at 2:38 PM Mark S. Miller <eri...@gmail.com> wrote:

So does AI vibe coding. 

WAT?

When you tell an LLM what program you want, and the AI generates the code.

--------------
Alan Karp

Mark S. Miller

unread,
Aug 26, 2025, 6:11:02 PM
to cap-...@googlegroups.com
> When you tell an LLM what program you want, and the AI generates the code.


Well, it does generate some code. What do you mean by "the" code? It does *not* reliably generate code that does what you said you wanted. Hence "vibe".

Watching the current wave of LLM stuff, I'm reminded of something I got from Herb Simon over 40+ years ago, I think. Perhaps in "Architecture of Complexity" but I don't remember. Paraphrasing:

When a mechanism has been shaped by selective pressure or design, you can understand it in two different ways. When it is working "correctly", what it does reflects the selective or design pressure of what it is "supposed" to do. IOW, the "why". Very different mechanisms that serve the same "why" can exhibit very similar behavior when they work "correctly". None of this so far is surprising or unfamiliar. For many simple tasks including simple code writing tasks, LLMs work "correctly" quite often, leading us to all sorts of unfounded but intuitive assumptions about "how" it is doing this.

The side to Simon's observation is that when a mechanism malfunctions, i.e., fails to fulfill its "purpose", how it malfunctions is uniquely revealing of the underlying "how". Two different mechanisms shaped by the same selective pressures to behave "correctly" in very similar ways will nevertheless malfunction in very different ways. These differences are often the best clue about the differences in the "how"s they use to achieve similar "why"s.

When LLMs fail, whether at code or prose, the failures often reveal how terribly little they actually "understand" about the tasks we ask them to do, at which they usually succeed.

Btw, I suspect that Simon's principle is also why Tolstoy's observation about families rings true.


If anyone remembers where Simon actually talks about this, please post. I'd love to reread and see how far my 40+ year stale memory has drifted.





--
  Cheers,
  --MarkM

Mark S. Miller

unread,
Aug 26, 2025, 6:12:27 PM
to cap-...@googlegroups.com
On Tue, Aug 26, 2025 at 3:10 PM Mark S. Miller <eri...@gmail.com> wrote:
> When you tell an LLM what program you want, and the AI generates the code.


Well, it does generate some code. What do you mean by "the" code? It does *not* reliably generate code that does what you said you wanted. Hence "vibe".

Watching the current wave of LLM stuff, I'm reminded of something I got from Herb Simon over 40+ years ago, I think. Perhaps in "Architecture of Complexity" but I don't remember. Paraphrasing:

When a mechanism has been shaped by selective pressure or design, you can understand it in two different ways. When it is working "correctly", what it does reflects the selective or design pressure of what it is "supposed" to do. IOW, the "why". Very different mechanisms that serve the same "why" can exhibit very similar behavior when they work "correctly". None of this so far is surprising or unfamiliar. For many simple tasks including simple code writing tasks, LLMs work "correctly" quite often, leading us to all sorts of unfounded but intuitive assumptions about "how" it is doing this.

The side to Simon's observation

Meant: "The flip side..."
 
is that when a mechanism malfunctions, i.e., fails to fulfill its "purpose", how it malfunctions is uniquely revealing of the underlying "how". Two different mechanisms shaped by the same selective pressures to behave "correctly" in very similar ways will nevertheless malfunction in very different ways. These differences are often the best clue about the differences in the "how"s they use to achieve similar "why"s.

When LLMs fail, whether at code or prose, the failures often reveal how terribly little they actually "understand" about the tasks we ask them to do, at which they usually succeed.

Btw, I suspect that Simon's principle is also why Tolstoy's observation about families rings true.


If anyone remembers where Simon actually talks about this, please post. I'd love to reread and see how far my 40+ year stale memory has drifted.



On Tue, Aug 26, 2025 at 2:50 PM Alan Karp <alan...@gmail.com> wrote:
On Tue, Aug 26, 2025 at 2:38 PM Mark S. Miller <eri...@gmail.com> wrote:

So does AI vibe coding. 

WAT?

When you tell an LLM what program you want, and the AI generates the code.

--------------
Alan Karp



--
  Cheers,
  --MarkM

Alan Karp

unread,
Aug 26, 2025, 6:21:55 PM
to cap-...@googlegroups.com
There are a number of examples of vibe coding available on the web.  In one of them, they constructed a multi-agent system to trade stocks.  (No actual money on the line, of course.)  At one point, the LLM produced incorrect code, which the speaker said happened about 20% of the time.  He simply reran the prompt, and the LLM produced code that worked.  I guess it's a good thing that one of the things AI does reasonably well is generate tests.

--------------
Alan Karp

