Hi Tony, I finally feel ready to start answering your question. Here's Part 1:
I often find the standard top-level taxonomy of security useful: the division into integrity, confidentiality, and availability. Here is a basic engineering principle I learned from KeyKOS, one that AFAIK is followed by everything in the KeyKOS family including seL4, and that we followed in E, Joe-E, Caja, and Dr.SES:
Whenever possible, integrity should not depend on confidentiality.
Further, for interactions between mutually suspicious entities sharing a mutually trusted platform, such as a single machine, this is always possible. In that context, the recommendation becomes an absolute:
(Among things on a shared TCB)
Integrity must never depend on confidentiality
The Gedanken experiment that I've used to think about this is:
Start with a KeyKOS system. Now imagine we add to the architecture a user-mode instruction for reading any word of physical memory. On such a system, it is no longer possible to keep secrets, so cryptography is impossible. But among processes running on one shared KeyKOS kernel, integrity is not compromised in the slightest.
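To make the distinction concrete, here is a minimal sketch, in TypeScript rather than E, and with names entirely of my own invention, of what it looks like for integrity not to depend on confidentiality:

    // Sketch only (TypeScript, not E; every name here is mine: makeCounter,
    // read, incr). The count is world-readable, so no confidentiality is
    // claimed. But even an attacker who can read all of memory cannot change
    // the count without being *given* the incr facet, because in a memory-safe
    // ocap system a reference cannot be forged from leaked bits.
    function makeCounter() {
      let count = 0;
      return {
        read: () => count,            // anyone may learn the count
        incr: () => { count += 1; },  // integrity rests only on who holds this
      };
    }

    const { read, incr } = makeCounter();
    // `read` can be handed out freely; the integrity of the count depends only
    // on how carefully `incr` is handed out, not on keeping any bits secret.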
In my previous discussion of this Gedanken experiment, I missed the issue of login authentication. The strong statement is still correct, since login is an interaction with a suspicious agent outside the shared TCB, i.e., an alleged human at a keyboard. However, this problem is unavoidable if KeyKOS is to be useful as originally conceived.
This separation was mostly implicit in the local E design, but I cannot think of any exceptions. I think local E follows this perfectly.
This separation became explicit during the Joe-E design, where we were careful about the principle:
None of Joe-E's security claims depends on any party's lack of knowledge. Even if any party were omniscient, all of Joe-E's security claims would still hold.
For local E, with one exception, all my strong claims have been about integrity only. I knew that E was not in a position to make strong claims about availability. With one exception, I knew that E was not in a position to make claims about confidentiality. Least authority and defensive consistency are only about integrity. See section 5.7 of my thesis, "A Practical Standard for Defensive Programming", for the criteria behind the claim: "By convention, unless stated otherwise, E and its libraries are engineered to achieve and support the following forms of robustness. The examples and mechanisms presented in this dissertation should be judged by this standard as well."
The one exception, which I don't think is mentioned anywhere in the thesis, is that E is designed to support "loggable fail-stop non-determinism". This means that the spawner of a confined E computation can ensure that two runs of the same computation, from the same log of inputs, if neither run fails, take the same conditional branches and produce the same overt outputs. E does allow non-deterministic failure, such as by memory exhaustion, where one replica might fail while another does not. This property enables some claims about confidentiality:
Such a computation cannot read non-overt (covert and side) channels, unless those channels are already encoded into its loggable inputs. In other words, the confiner knows that they need only worry about the same inputs that they already know to worry about logging.
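Here is a minimal sketch of that discipline, again in TypeScript with my own names (Confined, replay) rather than E's actual API: the confined computation's only source of non-determinism is the log it is replayed from, plus the possibility of fail-stop failure.

    // Sketch only: the computation receives all of its inputs from the log and
    // is given no other authority (no clock, no ambient I/O), so any two runs
    // over the same log that don't fail take the same branches and emit the
    // same overt outputs.
    type Confined = (nextInput: () => string, emit: (out: string) => void) => void;

    function replay(computation: Confined, log: string[]): string[] {
      let i = 0;
      const outputs: string[] = [];
      computation(
        () => {
          if (i >= log.length) {
            throw new Error("fail-stop: log exhausted"); // failure may differ between replicas
          }
          return log[i++];   // the only observable non-determinism is what was logged
        },
        (out) => { outputs.push(out); },
      );
      return outputs;        // identical across any two non-failing runs of the same log
    }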
These new attacks, Meltdown and Spectre, are a closer approximation to my KeyKOS Gedanken experiment than anything I ever imagined might be possible. However, the attacks directly reveal only information, and only to those who can somehow measure time. These attacks are severe! Although for local E I avoided making any strong confidentiality claims that would be invalidated by these attacks, I certainly built E to achieve a measure of confidentiality that now needs to be re-evaluated. I am not in any realistic sense claiming that local E is practically immune to these attacks. But I do find it very satisfying that this stratification of security claims was robust against this unexpected class of new attacks.
Distributed E of course depends on cryptography. Due to the influence of client utility, live distributed E uses the distributed c-list technique (see the sketch below), which avoids depending on the unguessability of *transmitted* secrets, i.e., swiss numbers. However, the transition from offline to live does depend on such unguessable secrets, for dereferencing either captp uris or sturdy refs. CapCert was never implemented, but, like SPKI/SDSI, it never depends on the unguessability of transmitted secrets, period. I believe that E's captp uris and sturdy refs could be replaced with something more like the CapCert introduction, but I have not explored it. Possibly, client utility already demonstrates a logically equivalent case. Alan?
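For anyone who hasn't seen the c-list technique, here is a much-simplified sketch in the same TypeScript style as above (my own simplification, not CapTP's actual wire protocol): each connection keeps a table of exported capabilities, and later messages designate their target by table index, so nothing travels on the wire whose disclosure would grant authority.

    // Simplified sketch, not CapTP: capabilities are exported into a
    // per-connection table and designated by index in later messages. The wire
    // carries indices, not unguessable secrets, so revealing the traffic does
    // not grant authority.
    type Capability = (arg: string) => string;

    class Connection {
      private clist: Capability[] = [];

      // Export a capability to the peer; the peer learns only an index.
      exportCap(cap: Capability): number {
        this.clist.push(cap);
        return this.clist.length - 1;
      }

      // Handle an incoming message that designates its target by index.
      deliver(index: number, arg: string): string {
        const cap = this.clist[index];
        if (cap === undefined) {
          throw new Error("no such export");
        }
        return cap(arg);   // authority flows from the connection itself
      }
    }

The contrast the sketch is meant to highlight: within a live connection, no transmitted secret matters; unguessability only enters at the offline-to-live step, when a captp uri or sturdy ref is dereferenced.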