Gemini discusses Confused Deputy


Mark S. Miller

Mar 11, 2026, 5:14:23 PM (2 days ago) Mar 11
to cap-talk

Mark S. Miller

Mar 11, 2026, 5:26:11 PM (2 days ago) Mar 11
to cap-...@googlegroups.com
For the record, copy/paste of the contents:

what is a "confused deputy" vulnerability?
A confused deputy vulnerability is a security flaw where a highly privileged program or service (the "deputy") is tricked by a less-privileged entity (the attacker) into misusing its authority to perform an unauthorized action. This is a form of privilege escalation: the attacker doesn't exploit the operating system directly, but rather manipulates a trusted tool that already has the necessary permissions.
Key Characteristics
  • Trust Manipulation: The deputy has legitimate permission to access resources, but it lacks the context to distinguish between a legitimate request and a malicious one.
  • Authority Mismatch: The attacker lacks the rights to perform an action themselves but can coerce the deputy to do it on their behalf.
  • Silent Escalation: Because the action is executed by a trusted service, it often bypasses traditional access controls and can be difficult to detect in logs.
Common Scenarios
  • Cloud Service Impersonation (AWS): A common modern example occurs when a cloud service in one account is tricked into accessing resources in another account because the IAM trust policy is overly permissive.
  • Web Vulnerabilities (CSRF): Cross-Site Request Forgery is a classic example where a web browser (the deputy) is "confused" into sending a privileged request to a website on behalf of a malicious site the user is visiting.
  • Sudo Scripts: An administrator might grant a user permission to run a specific script via sudo. If that script doesn't properly sanitize inputs, the user can pass malicious parameters that the script then executes with root privileges.
  • Agentic AI: Modern AI agents can become confused deputies if they have broad access to data and tools but lack the ability to validate the intent or context of a user's prompt.
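The sudo-script and compiler scenarios above share one shape, which can be sketched in a few lines of Python (all names and paths here are hypothetical, for illustration only):

```python
# Minimal sketch of a confused deputy (hypothetical names throughout).
# A dict stands in for the filesystem; the deputy holds ambient authority
# to write anywhere, including a billing file its callers cannot touch.
FILESYSTEM = {"/billing/charges": "deputy-only data"}

def compile_source(source: str, output_path: str) -> None:
    # The deputy uses ITS OWN permission on a path DESIGNATED BY THE CALLER.
    # It has no way to tell whether the caller could write there itself.
    FILESYSTEM[output_path] = f"compiled: {source}"

# Legitimate use: the caller names a file it owns.
compile_source("main.c", "/home/alice/a.out")

# Attack: a caller with no access to /billing/charges simply names it,
# and the deputy clobbers it with full privileges.
compile_source("evil.c", "/billing/charges")
assert FILESYSTEM["/billing/charges"] == "compiled: evil.c"
```

The flaw is not a bug in `compile_source`; it is that the designation (the path string) arrives separately from any evidence of the caller's authority over it.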
Mitigation Strategies
  1. Principle of Least Privilege: Grant services only the absolute minimum permissions required for their specific function.
  2. Context-Based Access: Use extra identifiers to verify intent. For example, AWS uses an External ID or SourceArn to ensure a service is acting on behalf of the correct owner.
  3. Input Validation: Ensure that programs and scripts thoroughly sanitize any parameters passed by less-privileged users before using them in privileged operations.
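As a concrete illustration of strategy 2, here is roughly what an AWS IAM role trust policy with an External ID looks like, built as a Python dict (the account ID and external ID are placeholders, not real values):

```python
import json

# Sketch of an IAM role trust policy requiring an ExternalId (placeholder
# values throughout). The sts:ExternalId condition is AWS's guard against
# the cross-account confused deputy: a third-party service must present the
# ID its customer configured, so it cannot be tricked into assuming this
# role on behalf of some other customer.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}

print(json.dumps(trust_policy, indent=2))
```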
Would you like to see a specific code example or a step-by-step guide on how to configure an AWS trust policy to prevent this?
  • How the 'Confused Deputy Problem' has made a comeback
    Jun 17, 2025 — This vulnerability occurs when a deputy program lacks sufficient context or safeguards to distinguish between requests it should h...
    SC Media
  • Confused deputy problem - Wikipedia
    Confused deputy problem. ... In information security, a confused deputy is a computer program that is tricked by another program (
    Wikipedia
  • What Is The Confused Deputy Problem? | Common Attacks &…
    Dec 3, 2025 — How to Defend Against the Confused Deputy Problem in the Age of Agentic AI. ... The Confused Deputy Problem (where a trusted progr...
    BeyondTrust
Why do you say "coerce" rather than "trick"? "trick" is much more accurate, since the attacker is not coercing the deputy.
You are correct that "trick" or "fool" are the most common and arguably most accurate descriptors, because the vulnerability relies on deception, not force.
In cybersecurity, the term "coerce" is occasionally used interchangeably in technical literature to describe the result of the manipulation—forcing a program to act against its intended logic or design.
Why "Coerce" is Sometimes Used
While "trick" captures the method (deception), "coerce" is sometimes used to emphasize the compulsion of the underlying system:
  • Forced Behavior: In a confused deputy scenario, once the malicious input is accepted, the program's own logic forces it to execute the action because it believes the request is authorized.
  • Systemic Inevitability: Technical contexts like data type coercion or authentication coercion refer to forcing a system into a specific state or behavior.
  • Agentic AI: Modern research into AI agents often uses "coerce" to describe how adversarial prompts bypass safety filters to compel an agent into executing prohibited system operations.
Summary of Terminology
Term: Context in "Confused Deputy"
Trick / Fool: The attacker provides a misleading identifier (like a filename or URL) that the deputy incorrectly trusts.
Manipulate: The attacker exploits the gap between the deputy's authority and the user's intent.
Coerce: The deputy is compelled by its own code to perform an action it would normally restrict if it had the full context.
The most authoritative source on the topic, Wikipedia's entry on the Confused Deputy Problem, primarily uses "tricked," though you will find "coerced" in more specialized academic papers and cloud security documentation.
These resources clarify the usage of "coerce" in cybersecurity contexts like the confused deputy problem, contrasting it with "trick."
Would you like to explore how this "trickery" specifically works in Cross-Site Request Forgery (CSRF) or AWS IAM roles?
AI can make mistakes, so double-check responses
  • Confused deputy problem - Wikipedia
    In information security, a confused deputy is a computer program that is tricked by another program (with fewer privileges or less...
    Wikipedia
  • Architecting a Secure, Intent-Centric Mobile Agent Operating System
    Feb 11, 2026 — By manipulating the “perceived” reality, the adversary can coerce the agent into unauthorized actions without ever breaching its i...
    arXiv.org
  • coerce - AKIT - Cybernetica
    coerce sundima * ( transitive) to restrain by force, especially by law or authority; to repress; to curb. * ( transitive) to use f...
    Akit Cyber.ee
Does Tyler Close's paper "ACLs Don't" prove that ACL access control systems cannot avoid confused deputy attacks in systems involving at least three mutually suspicious principals?
Yes, Tyler Close's paper, "ACLs Don't", argues that Access Control List (ACL) systems are fundamentally incapable of avoiding the confused deputy problem in any system involving at least three mutually suspicious principals.
The paper posits that the root cause is a failure of intent-based authentication. In an ACL system, a request is typically authenticated by identifying the immediate sender (the deputy), but this identity provides no information about which upstream principal (the user) provided the "impetus" for the request.
The Core Argument
Close's "proof" centers on the following structural flaws in ACL systems:
  • Indistinguishable Intent: A deputy (like a compiler) often needs to use its own permissions (to write a log file) and permissions delegated by a user (to write an output file). In an ACL system, the operating system only sees that the compiler is requesting a write; it cannot distinguish if the compiler is acting on its own intent or being "confused" by a malicious user's intent to overwrite a protected file.
  • The Three-Principal Threshold: The problem becomes inevitable with three principals:
    1. The Resource: The object being protected.
    2. The Deputy: A privileged agent that has access to the resource.
    3. The User: An entity that can send messages to the deputy.
      Because the ACL only records the relationship between the Deputy and the Resource, it loses the context of the User's involvement.
  • Designated Recovery: The paper argues that for a system to be secure, it must support "Designated Recovery," where a principal can specify exactly which permission is being used for a specific task. ACLs do not provide this; they merely check if the acting principal has a general "right" to the resource.
Conclusion
Close concludes that the only way to solve this is to move away from ACLs toward Object Capability (ocap) systems. In an ocap model, the permission to access a resource is bundled with the request itself, ensuring that the deputy only exercises the specific authority granted by the user for that specific action.
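To make the contrast concrete, here is the compiler example reworked in capability style as a small Python sketch (hypothetical, illustration only): the caller hands over an open, writable handle rather than a name.

```python
import io

def compile_source(source: str, output: io.TextIOBase) -> None:
    # No name lookup and no ambient authority: the deputy can only write
    # to the handle it was handed. Designation and authority travel together.
    output.write(f"compiled: {source}")

# The caller can only pass handles it already holds.
buf = io.StringIO()  # stands in for a file the caller may legitimately write
compile_source("main.c", buf)
assert buf.getvalue() == "compiled: main.c"
# A caller holding no handle to the billing file cannot designate it at all.
```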
Would you like to see a comparison of how a Capability-based system handles the specific "compiler" example differently than an ACL system?
  • ACLs don't - Waterken Server
    Such access decisions cannot be correctly done using stack introspection, since at best the model supports tracking of a single au...
    Waterken
  • ACLs don't - shiftleft.com
    Feb 6, 2009 — Such access decisions cannot be correctly done using stack introspection, since at best the model supports tracking of a single au...
  • Avoid Role-Based Access Control pitfalls | by Bala Dutt
    Dec 26, 2019 — Resource is the object being protected. Below is the picture of some resources.
    Medium
Within an ACL system, how does one reliably avoid confused deputy vulnerabilities?
In a traditional Access Control List (ACL) system, it is famously difficult to "reliably" avoid confused deputy vulnerabilities, because the system separates designation (the name of a file or resource) from authority (the permission to access it).
However, modern systems that rely on ACL-like logic (such as AWS IAM) use specific strategies to simulate the "intent-based" security that ACLs naturally lack:
1. Context-Aware Condition Keys
The most effective way to prevent a deputy from being confused is to force it to prove who it is acting for.
  • Source Validation: Use conditions like aws:SourceArn or aws:SourceAccount. This ensures that even if a service (the deputy) has permission to access a resource, it can only do so if the request originated from a specific, trusted resource or account you control.
  • External IDs: When a third party needs to assume a role in your account, require an ExternalId. This acts as a shared secret that prevents a malicious actor from tricking a service into using a role meant for a different customer.
2. Strict Input Sanitization & Normalization
Since an attacker "tricks" a deputy by providing a malicious resource name (like a billing file instead of a log file), the deputy must manually validate all inputs:
  • Path Restriction: Hard-code or strictly limit the directory or resource paths a deputy can touch.
  • Normalization: Normalize all parameters before validation to prevent bypasses using different cases or encoding (e.g., eventRoleArn vs eventrolearn).
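A hedged sketch of what that path restriction plus normalization might look like in Python (the directory names are made up): canonicalize before comparing, so ".." tricks don't slip past a naive prefix check.

```python
import os.path

ALLOWED_ROOT = "/srv/app/logs"  # illustrative; the only tree the deputy may touch

def is_path_allowed(user_supplied: str) -> bool:
    # Resolve "..", repeated slashes, and (on a real filesystem) symlinks
    # BEFORE checking containment; comparing raw strings is bypassable.
    candidate = os.path.realpath(os.path.join(ALLOWED_ROOT, user_supplied))
    # commonpath rejects anything that escapes the allowed root, including
    # absolute inputs, which os.path.join lets replace the root entirely.
    return os.path.commonpath([ALLOWED_ROOT, candidate]) == ALLOWED_ROOT

assert is_path_allowed("build.log")
assert not is_path_allowed("../billing/charges")
assert not is_path_allowed("/etc/passwd")
```

Even done correctly, this only narrows what the deputy can be pointed at; it does not tell the deputy whether the requester was entitled to the path, which is exactly the gap Close's paper targets.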
3. Granular Role Isolation
Instead of one highly privileged "deputy" that can do everything, break the service into multiple smaller identities with narrower permissions:
  • Function-Specific Roles: Create separate service accounts for automation, debugging, and deployment to reduce the "blast radius" if any single one is confused.
  • Least Privilege: Avoid wildcards (*) in your ACL policies. Grant only the exact action (e.g., s3:PutObject) on the exact resource (the specific bucket).
4. Continuous Monitoring & Just-In-Time Access
  • Audit Trails: Use tools like AWS CloudTrail to monitor access patterns and alert on unusual cross-account or cross-service activity.
  • Ephemeral Access: Use "Just-In-Time" (JIT) access to provide temporary credentials that expire quickly, limiting the window of time a deputy could be exploited.
While these methods help, they essentially require the developer to manually re-implement the security that Tyler Close argues is automatically handled by Object Capability (ocap) systems.
Would you like to see a JSON policy example showing how to implement these SourceArn conditions in a real-world scenario?
  • How to Prevent Cross-Service Confused Deputy Attacks | Qualys
    Nov 12, 2025 — Strategies for Defending Against Cross-Service Confused Deputy Attacks. Fortunately, effective defenses against Cross-Service Conf...
    Qualys
  • How the 'Confused Deputy Problem' has made a comeback
    Jun 17, 2025 — Here's how a Modern PAM system solves these problems: * Command filtering and validation: PAMs should enforce command whitelisting...
    SC Media
  • Confusing the Deputy: An AWS Confused Deputy Attack in the Wild
    Dec 4, 2025 — The Confused Deputy Awakens. This is where the AWS Confused Deputy vulnerability comes into play. For those unfamiliar, the Confus...
    Medium
Does the "ACLs Don't" paper establish that none of these solutions actually work?
Yes, Tyler Close's paper "ACLs Don't" effectively argues that these "solutions" are not actual cures, but rather symptom-management strategies that fail to address the root cause.
Close establishes that in an ACL system, these mitigations are structurally destined to be unreliable and brittle. His paper contends that any security model separating designation (naming a resource) from authority (permission to access it) creates a fundamental flaw that no amount of patching can "reliably" fix.
Here is how Close’s arguments dismantle the specific "solutions" you listed:
1. Context-Aware Checks (e.g., SourceArn)
The Solution: The deputy checks who called it or where the request came from.
Close’s Critique: This fails because ACLs use Ambient Authority.
  • In an ACL system, authority is "in the air"—if a program can do something, the system assumes it should be allowed to do it, regardless of why.
  • Close argues that checking the "caller" is useless in a chain. If Alice asks Bob to ask Charlie for a file, Charlie only sees Bob. Charlie cannot reliably verify that Alice was the original "impetus" for the request.
  • Modern fixes like SourceArn are attempting to manually reconstruct the "chain of custody" that ACLs naturally discard. Close would argue this forces every developer to re-implement complex authentication logic, which inevitably leads to bugs and bypasses.
2. Input Sanitization & Normalization
The Solution: The deputy rigorously checks filenames and paths (e.g., blocking ../billing.txt).
Close’s Critique: This is a naming problem, not a parsing problem.
  • The paper argues that because the name of the file does not carry the permission to access it, the deputy is forced to guess the user's intent based on string manipulation.
  • Close posits that this is mathematically impossible to do perfectly across all possible naming conventions and edge cases. If the user says "write to X," and the deputy has authority to write to "X," the deputy must obey to function correctly. The flaw is that the deputy cannot know if the user also had permission to write to X.
3. Granular Role Isolation (Least Privilege)
The Solution: Break the deputy into tiny pieces with minimal permissions.
Close’s Critique: This reduces the Blast Radius, but does not solve the Vulnerability.
  • Even if you limit the compiler so it can only write to the /logs directory, a confused deputy attack is still successful if the attacker tricks it into overwriting a critical log file within that directory.
  • Close argues that "Least Privilege" in an ACL system is a myth because permissions are aggregated by the user (or role), not the task. The deputy always has "too much" authority for any single specific request because it holds permissions for all potential requests it might ever serve.
The Verdict
Close’s conclusion is that these solutions are workarounds that shift the burden of security from the operating system (where it belongs) to the application developer (who is prone to error).
He states that the only "reliable" solution is to merge designation and authority (Capabilities). In that world, you don't need to sanitize the filename or check the caller's ID because the "filename" you are handed is the key that unlocks only that specific file.

--
You received this message because you are subscribed to the Google Groups "cap-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cap-talk+u...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/cap-talk/CAK5yZYg50DT2mp1pdZPiViYhKhruYe5PUHET0EswY6OuBaY4vg%40mail.gmail.com.

William ML Leslie

Mar 11, 2026, 5:27:21 PM (2 days ago) Mar 11
to cap-...@googlegroups.com
If you know the right shibboleths, you'll be prioritising the right training data.

--
William ML Leslie
A tool for making incorrect guesses and generating large volumes of plausible-looking nonsense.  Who is this very useful tool for?

Mark S. Miller

Mar 11, 2026, 5:28:22 PM (2 days ago) Mar 11
to cap-...@googlegroups.com
Note at the bottom gmail's

image.png

so be sure to click on that to get to that last q+a

Matt Rice

Mar 11, 2026, 6:14:11 PM (2 days ago) Mar 11
to cap-...@googlegroups.com
It feels to me, though, that the Mitigation Strategies section in its
initial response doesn't get to the heart of how capability systems
avoid confused deputy problems. It lists least privilege, input
validation, and extra identifiers to verify intent. But is there any
more authority in a confused system than in a non-confused version of
that same system? That is to say, the nodes of the graph
(capabilities) are the same, but the edges differ, with confused
systems having extra (unintended) edges between parties.

The way I would describe the mitigation strategy is having deputies
not wield authority on behalf of others. By forcing user actions to
provide their own authority, you eliminate the risk of confusion. I'm
not really familiar enough with the input validation or
extra-identifier strategies to feel confident providing any
worthwhile comments.

Alan Karp

Mar 11, 2026, 6:27:36 PM (2 days ago) Mar 11
to cap-...@googlegroups.com
Pretty good but I found some problems.  I'm particularly sensitive to them because many of the people I'm working with on AI agent safety have these misunderstandings.  While Gemini's statements about "ACLs Don't" largely reflect my comments, most people wouldn't know to ask the AI about that paper.

I've had exchanges with people who thought each of the following was a confused deputy vulnerability.
  • Cloud Service Impersonation: The problem isn't a permissive IAM trust policy.  It's because the only delegation mechanism is impersonation, which is quite common in microservices.
  • Sudo scripts: The confused deputy has nothing to do with improperly sanitized inputs.
  • Principle of Least Privilege: Solves other problems, but it doesn't have anything to do with confused deputy.
  • Context-Based Access: The code in the compiler that updates the log file is acting on behalf of the compiler, not the requester.
  • Input Validation: In many situations the deputy knowing what permissions a requester has can violate policy or even the law (HIPAA).  
Pretty much the entire section on mitigation in ACL systems reflects a fundamental misunderstanding of the problem that I've now seen multiple times.  It's based on the notion that the deputy has a bug that causes it to access some resource.  One system I studied has 3 key features, one of which is there because of this misunderstanding.  

I've found that using the definition "A confused deputy uses its permissions on a resource designated by someone else." is one most people get.  That doesn't mean they internalize it right away.  It often takes them several tries.

One thing I haven't heard people talk about in Norm's confused deputy example is how the compiler gets permission to read a.c and write a.out.  Did I just miss that part?

--------------
Alan Karp



Jonathan S. Shapiro

Mar 11, 2026, 6:48:30 PM (2 days ago) Mar 11
to cap-...@googlegroups.com
On Wed, Mar 11, 2026 at 3:27 PM Alan Karp <alan...@gmail.com> wrote:
Pretty much the entire section on mitigation in ACL systems reflects a fundamental misunderstanding of the problem that I've now seen multiple times.  It's based on the notion that the deputy has a bug that causes it to access some resource.

Agreed. The underlying cause is that the concept of ambient authority and the problems that arise from separating ambient authority from designation basically aren't understood at all by the general software development community, or even by most of the security community.

I've found that using the definition "A confused deputy uses its permissions on a resource designated by someone else." is one most people get.  That doesn't mean they internalize it right away.  It often takes them several tries.

That seems like a useful starting definition for opening a discussion. Step two (which of course you know) is that the deputy actually has no way to associate the specific permissions it intends to use with the resource it is wielding.


Jonathan

Alan Karp

Mar 11, 2026, 7:32:46 PM (2 days ago) Mar 11
to cap-...@googlegroups.com
On Wed, Mar 11, 2026 at 3:48 PM Jonathan S. Shapiro <jonathan....@gmail.com> wrote:

That seems like a useful starting definition for opening a discussion. Step two (which of course you know) is that the deputy actually has no way to associate the specific permissions it intends to use with the resource it is wielding.

That's not generally true for certificate systems, such as zcaps or Macaroons, which enumerate the delegated permissions and identify the resource.
 
--------------
Alan Karp

Matt Rice

Mar 11, 2026, 8:29:27 PM (2 days ago) Mar 11
to cap-...@googlegroups.com
On Wed, Mar 11, 2026 at 10:27 PM Alan Karp <alan...@gmail.com> wrote:
>
>
> One thing I haven't heard people talk about in Norm's confused deputy example is how the compiler gets permission to read a.c and write a.out. Did I just miss that part?

I don't think he did describe it; he describes how you don't need to
name the statistics file (the subject of the whole confusion),
because now it may exist outside of a filesystem. Compilers, though,
are a weird beast where inputs typically map to or reference other
inputs by name, thus the names of inputs typically matter more than
in *most* other uses of capabilities, where we can consider them
harmful. I don't feel like he needed to go into this additional
complexity to describe the problem, so he avoided it.

I do remember a separate note of Norm's where he does describe build
systems in keykos as a continuously evaluated function.
Not exactly a description of the compiler or a detailed description of
how it mapped inputs to outputs exactly though.
http://www.cap-lore.com/CapTheory/KK/ContEval.html

Anyhow, by the time you have added "a.out" you've pretty much lost
this war. The system Norm describes as a continuous function is
fairly different and avoids a lot of the issues that plague unix
compilers... I feel like it is easier to compare it to an LSP server,
where the LSP typically maintains a hash table of sources/build
environment in memory and a complete mapping of the graph from inputs
to outputs (though in an LSP this happens with knowledge of the
language, while Norm describes it without that, where the hash table
would be an external process or capability). Still, I feel like
something LSP-like is a better starting point, because at least it
begins from the perspective of being continuously evaluated.

The other way to describe it is as Norm does in his note, to relate it
more to functional programming. Anyhow that was the only note I know
of where he discussed compiler input on keykos...

Mark S. Miller

Mar 11, 2026, 11:11:10 PM (2 days ago) Mar 11
to cap-...@googlegroups.com
Correct me if I'm wrong, but Jonathan's point is that in a ***non-cap system***, the deputy actually has no way... . Our whole point is that the deputy can do so in a cap system. 

zcap is a cap system. Macaroons is cap-like enough, and certainly more cap-like than ACL-like. So they're examples, not counter-examples.

 
 

William ML Leslie

Mar 11, 2026, 11:34:49 PM (2 days ago) Mar 11
to cap-...@googlegroups.com
On Thu, 12 Mar 2026 at 08:27, Alan Karp <alan...@gmail.com> wrote:
One thing I haven't heard people talk about in Norm's confused deputy example is how the compiler gets permission to read a.c and write a.out.  Did I just miss that part?

Not sure if any ex-Tymshare people are still tuned in to cap-talk, but I did think I understood this -

Norm describes: "Our operating system was much like Unix ((TM) of AT&T) in its protection structures."  I took this to mean that a program you run has access to your things, that's why it could read your source and write output.  It also has access to write to the billing file using a setuid-like facility, the home files license.

Matt Rice

Mar 12, 2026, 12:11:05 AM (yesterday) Mar 12
to cap-...@googlegroups.com
On Thu, Mar 12, 2026 at 3:34 AM William ML Leslie
<william.l...@gmail.com> wrote:
>
> On Thu, 12 Mar 2026 at 08:27, Alan Karp <alan...@gmail.com> wrote:
>>
>> One thing I haven't heard people talk about in Norm's confused deputy example is how the compiler gets permission to read a.c and write a.out. Did I just miss that part?
>
>
> Not sure if any ex-Tymshare people are still tuned in to cap-talk, but I did think I understood this -
>
> Norm describes: "Our operating system was much like Unix ((TM) of AT&T) in its protection structures." I took this to mean that a program you run has access to your things, that's why it could read your source and write output. It also has access to write to the billing file using a setuid-like facility, the home files license.
>

I always took this "like Unix" to be strictly about the system
vulnerable to confused deputies. In my prior response, I interpreted
Alan's question to be about the version of the compiler described on
a capability system without confused deputies, which did not seem to
be described in great detail; instead the paper focuses on how the
billing file is *not* accessed via the file system, but as a
capability given to and captured by the compiler at its
instantiation/install time. As a coarse approximation one could
consider a trivial compiler limited to passing file descriptors and
probably not be far off.

Alan Karp

Mar 12, 2026, 12:32:59 AM (yesterday) Mar 12
to cap-...@googlegroups.com
On Wed, Mar 11, 2026 at 8:11 PM 'Mark S. Miller' via cap-talk <cap-...@googlegroups.com> wrote:

Correct me if I'm wrong, but Jonathan's point is that in a ***non-cap system***, the deputy actually has no way... . Our whole point is that the deputy can do so in a cap system. 

My bad!

--------------
Alan Karp

Matt Rice

Mar 12, 2026, 12:44:40 AM (yesterday) Mar 12
to cap-...@googlegroups.com
Sorry, I should really clarify that this FD-only compiler is very
different from the continuous-evaluation compiler Norm described; my
point is that it is probably the simplest "pure cap" compiler I can
think of as an example of a compiler not suffering from confused
deputies. For example, it is easy to imagine a compiler that takes a
JSON dictionary with keys as filenames and values as inputs being
very portable to any capability system, due to its limited reliance
on system calls or persistence.

My point was more that we could use that as a simplest example,
however unlikely it is to be what Norm had in mind.

William ML Leslie

Mar 12, 2026, 12:52:27 AM (yesterday) Mar 12
to cap-...@googlegroups.com
Oh? In KeyKOS this would be a Factory, but one that can't confine (and certainly can't verify as confined) its yield.  Can't say I know how such a thing felt from a user on CMS or UNIX.  On Coyotos, you can add things to the TOOL set when building the constructor that you want new instantiations to have access to.  We don't yet have any unconfined examples I'm aware of but outside of verification the behaviour is basically the same - the process has a bunch of capabilities sitting in its registers or address space that it can use.  See, for example, how the tty has access to the constructor for its irqhelper:
https://bitbucket.org/coyotos/coyotos/src/92f3572075a1223362218233915a910ae01222cf/src/base/mki/coyotos/driver/TTY.mki#lines-62


If you were going to have a UI for such a thing, the user could have a bunch of ways (menu bars, icons, clim/transient, notebook, keybind) to call (impure) functions and these can close over whatever you like.  By default, when you have added a function, either by writing it or by installing it from source or a package manager, you might have access to inspect and alter its closure from your own shell, but arbitrary callers don't need that level of access.

Matt Rice

Mar 12, 2026, 12:59:37 AM (yesterday) Mar 12
to cap-...@googlegroups.com
On Thu, Mar 12, 2026 at 4:52 AM William ML Leslie
<william.l...@gmail.com> wrote:
>
> On Thu, 12 Mar 2026 at 14:11, Matt Rice <rat...@gmail.com> wrote:
>>
>> On Thu, Mar 12, 2026 at 3:34 AM William ML Leslie
>> <william.l...@gmail.com> wrote:
>> >
>> > On Thu, 12 Mar 2026 at 08:27, Alan Karp <alan...@gmail.com> wrote:
>> >>
>> >> One thing I haven't heard people talk about in Norm's confused deputy example is how the compiler gets permission to read a.c and write a.out. Did I just miss that part?
>> >
>> >
>> > Not sure if any ex-Tymshare people are still tuned in to cap-talk, but I did think I understood this -
>> >
>> > Norm describes: "Our operating system was much like Unix ((TM) of AT&T) in its protection structures." I took this to mean that a program you run has access to your things, that's why it could read your source and write output. It also has access to write to the billing file using a setuid-like facility, the home files license.
>> >
>>
>> I always took this "like Unix" to be strictly about the system
>> vulnerable to confused deputies, In my prior response, I interpreted
>> Alan's question to be about the version of the compiler described on a
>> capability system without confused deputies which did not seem to be
>> described in great detail, instead it focuses on how the billing file
>> is *not* accessed via the file system, but as a capability given to
>> and captured by the compiler at it's instantiation/install time. As a
>> coarse approximation one could consider a trivial compiler limited to
>> passing file descriptors and probably not be far off.
>
>
> Oh? In KeyKOS this would be a Factory, but one that can't confine (and certainly can't verify as confined) its yield.

KeyKOS factories could have "holes" in them, described at least here
(among other places): http://www.cap-lore.com/CapTheory/KK/Factory.html
You could leak a "controlled amount of data"; for the billing file,
presumably you could get by leaking some fixed-size integer, and
verify that no more than that could be leaked, but that is probably
the extent.

> Can't say I know how such a thing felt from a user on CMS or UNIX. On Coyotos, you can add things to the TOOL set when building the constructor that you want new instantiations to have access to. We don't yet have any unconfined examples I'm aware of but outside of verification the behaviour is basically the same - the process has a bunch of capabilities sitting in its registers or address space that it can use. See, for example, how the tty has access to the constructor for its irqhelper:
>
> https://bitbucket.org/coyotos/coyotos/src/92f3572075a1223362218233915a910ae01222cf/src/base/mki/coyotos/driver/TTY.mki#lines-62
>
> If you were going to have a UI for such a thing, the user could have a bunch of ways (menu bars, icons, clim/transient, notebook, keybind) to call (impure) functions and these can close over whatever you like. By default, when you have added a function, either by writing it or by installing it from source or a package manager, you might have access to inspect and alter its closure from your own shell, but arbitrary callers don't need that level of access.
>