Protecting in-browser capabilities

Jonathan S. Shapiro

Dec 2, 2021, 9:24:18 AM
to cap-talk
I'm looking at a conjoined authority problem (customer support) in a browser-based client/server application, and trying to figure out if there is a way to apply capability intuitions to arrive at a half decent solution. If there's any group that will know, it's you folks. :-)

I think we can rule out anything carried in a browser cookie; that's about as ambient as it gets. Thankfully, it is still possible to design our APIs in such a way that every call to our service requires a capability as an argument.
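The "capability as an explicit argument" style can be sketched in a few lines. This is a hypothetical shape, not an actual API from the thread; the endpoint, the `cap` field, and `makeInvocation` are all made up for illustration:

```javascript
// Build a POST request whose body carries the capability explicitly,
// instead of relying on an ambient cookie the browser attaches for free.
function makeInvocation(endpoint, capability, args) {
  return {
    url: endpoint,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // The capability travels as data; the server honors the call only
      // because this unguessable token designates sufficient authority.
      body: JSON.stringify({ cap: capability, args }),
    },
  };
}

// In the browser this would be consumed as: fetch(req.url, req.options)
const req = makeInvocation("/api/orders/list", "cap-3f9a...", { limit: 10 });
```

The point of the shape is that authority is designated per call, rather than ambient to every request the browser happens to send.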

The catch is: those capabilities would need to reside in JavaScript-accessible memory, and the major application frameworks like VueJS and its cohort really haven't given much (if any) thought to encapsulation. Meanwhile, development cost considerations drive production browser applications to incorporate third-party libraries that are subject to injected vulnerabilities.

I won't even get started on the problems of client-side storage and session persistence.

I have a profoundly half-assed approach that will probably be sufficient for my current application, but it doesn't generalize worth a damn.

Short of writing an entirely new client-side framework, does anyone know of a way out of this swamp for browser-based applications?


Ben Laurie

Dec 2, 2021, 10:20:52 AM
Starting with the bleedin' obvious:

Tristan Slominski

Dec 2, 2021, 10:58:34 PM
No idea if this is at all close to being useful...

A while back, I built a capability-based web console. Think AWS Console: a single console providing access to different services. Each service was responsible for serving its own UI and was guaranteed to be served on a specific URL path (the console itself was a service with which other services registered to reserve a URL path). The console functioned as a proxy: it would accept a browser request for, say, /services/<service name>, then proxy that request to <service name> on the backend and respond with whatever UI the backing service sent.
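The console's path-based proxy routing can be sketched as follows. The registry shape and function names here are hypothetical, not the actual implementation:

```javascript
const registry = new Map(); // service name -> backend base URL

function registerService(name, backendBase) {
  registry.set(name, backendBase);
}

// Map an incoming browser path to the backing service's URL,
// or null if no service has reserved that path.
function routeServiceRequest(path) {
  const match = path.match(/^\/services\/([^/]+)(\/.*)?$/);
  if (!match) return null;
  const backend = registry.get(match[1]);
  if (!backend) return null;
  return backend + (match[2] ?? "/");
}

registerService("billing", "http://billing.internal:8080");
```

A request for /services/billing/ui would then be forwarded to the billing backend's /ui, with the console standing in the middle as the single browser-facing origin.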

This allowed service-specific capabilities to be scoped into HttpOnly, Secure cookies bound to specific paths. The service knew what path it was on, so it could update its own capabilities in the cookie as required. This mechanism provided the web UI experience from login to logout.
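A minimal sketch of such a path-scoped capability cookie, using the standard Set-Cookie attributes (the cookie name is made up, and SameSite is an extra hedge beyond what the post mentions):

```javascript
// Build a Set-Cookie header value that scopes a capability to one service.
function capabilityCookie(serviceName, capability) {
  return [
    `${serviceName}_cap=${encodeURIComponent(capability)}`,
    `Path=/services/${serviceName}`, // only sent back on this service's path
    "HttpOnly",                      // invisible to page JavaScript
    "Secure",                        // sent over HTTPS only
    "SameSite=Strict",               // extra hedge against cross-site requests
  ].join("; ");
}
```

Because the browser only attaches the cookie on matching paths, a compromised script served under /services/other never sees this service's capability in a request it can induce.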

In order to use capabilities gained out of band of this mechanism (e.g., someone copying and pasting something), the console also had a /proxy/<service_name> API endpoint which the frontend would call with a POST request, including the capability in the request body. This was converted into an API call to the backing service, which exposed a capability API. It is worth noting that calling /proxy/<service_name> at all required a console capability, also stored in an HttpOnly, Secure, path-bound cookie.

For things like a "magic link" login, the capability was contained in the URL fragment (one-time use, so meh), and a /login endpoint would serve JavaScript that would read the fragment in the browser and turn it into a /proxy/account API call, which would redirect as required on success.
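The fragment trick works because the part of a URL after `#` is never sent to the server in the HTTP request; only page script can read it. A hypothetical sketch of the /login page's first step:

```javascript
// Extract a one-time capability from the URL fragment, or null if absent.
function capabilityFromFragment(href) {
  const hash = new URL(href).hash; // includes the leading '#'
  return hash.length > 1 ? decodeURIComponent(hash.slice(1)) : null;
}

// In the browser, the login page would continue roughly like:
//   const cap = capabilityFromFragment(window.location.href);
//   await fetch("/proxy/account", {
//     method: "POST",
//     body: JSON.stringify({ cap }),
//   });
```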


Tristan Slominski

Dec 2, 2021, 11:03:00 PM
I should add, the capabilities stored in the cookies were in essence "powerboxes" containing things like "query capability", "create X capability", etc. They could probably be reduced to a single "powerbox" capability of "get me my capabilities".
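The "powerbox" reduction can be sketched like this: instead of carrying each capability in the cookie, carry one capability whose only power is to hand back the scoped set. All names and token values here are hypothetical:

```javascript
// A powerbox: a single object whose sole authority is returning
// the bundle of capabilities granted to this session.
function makePowerbox(grants) {
  const frozen = Object.freeze({ ...grants });
  return Object.freeze({
    // "get me my capabilities"
    getCapabilities() {
      return frozen;
    },
  });
}

const box = makePowerbox({
  query: "cap-query-7d2",
  createX: "cap-create-91c",
});
```

This keeps the cookie down to one opaque reference, and lets the server rotate or attenuate the underlying capabilities without touching what the client stores.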

Kris Kowal

Dec 5, 2021, 7:07:50 PM
I don’t know any path to bringing ocaps to bear on a web view yet, but the problem is on my mind. Isolating the DOM is challenging. The Caja group made Domado, of course, but the scope of taming a DOM in a way that preserves framework compatibility is nearly limitless. It might be worth trying again, using a ServiceWorker to isolate network access.
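The ServiceWorker idea rests on the fact that a registered worker intercepts every network request the page makes, so it can enforce an allow-list the page cannot bypass. A hypothetical sketch of the gate (the allow-list shape is made up):

```javascript
// Build a predicate over request URLs from a fixed set of granted origins.
function makeNetworkGate(allowedOrigins) {
  const allowed = new Set(allowedOrigins);
  return (requestUrl) => allowed.has(new URL(requestUrl).origin);
}

// Inside the ServiceWorker, this predicate would back the fetch handler:
//   self.addEventListener("fetch", (event) => {
//     if (!permitted(event.request.url)) {
//       event.respondWith(new Response("denied", { status: 403 }));
//     }
//   });
const permitted = makeNetworkGate(["https://api.example.com"]);
```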

But, much depends on what you need from capabilities. Granting and isolating multiple tenants with partial control over non-overlapping fragments of the DOM might not be feasible, but capabilities can be brought to bear in other ways. 

One of the problems you mention is supply chain attacks. The folks at MetaMask built LavaMoat to mitigate that problem specifically, including a tool that generates a preliminary “policy” for subletting capabilities to dependency packages based on static analysis.

LavaMoat in turn uses SES (“Hardened JavaScript”) to provide an environment suitable for capabilities. SES makes all the shared primordials shallowly immutable and provides evaluators for isolating tenant programs. LavaMoat isolates npm packages and endows them with limited access to powerful modules and globals. That gives you a place to stand to defend against submarines that might appear in your transitive dependencies.
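The endowment pattern SES enables can be approximated in plain JavaScript, purely to illustrate the shape: a tenant gets only the powers it is handed, nothing ambient. Unlike a real SES Compartment, `new Function` does not actually deny the tenant access to ambient globals or freeze the primordials, so this sketch shows the interface, not the security:

```javascript
// Run tenant source with an explicit, frozen set of endowments.
// NOTE: illustrative only; SES's Compartment provides real isolation.
function runWithEndowments(moduleSource, endowments) {
  const factory = new Function("endowments", `"use strict"; ${moduleSource}`);
  return factory(Object.freeze(endowments));
}

// A "package" endowed with a logger but not fetch, XMLHttpRequest, etc.
const result = runWithEndowments(
  "return endowments.log('hello');",
  { log: (msg) => msg.length }
);
```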

Of course, it’s not a panacea. There’s a short list of caveats that require some well-known work-arounds when they come up, but a surprising amount of code in npm *doesn’t* monkey-patch and *doesn’t* reach for modules they don’t really need.

As for the rest of the capabilities story, you’ll find a range of packages in npm under the “agoric” and “endo” orgs that answer how to CapTP or how to bundle a plugin for isolated execution.

But the money question is:

On Thu, Dec 2, 2021 at 6:24 AM Jonathan S. Shapiro <> wrote:
Short of writing an entirely new client-side framework, does anyone know of a way out of this swamp for browser-based applications?

I think the only answer is to embrace your secret desire to build a new client-side framework.