Information Security Requires Strongly-Typed Actors and Theories
https://papers.ssrn.com/abstract=3418003
Physical Indeterminacy in Digital Computation
https://papers.ssrn.com/abstract=3459566
--
You received this message because you are subscribed to the Google Groups "cap-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an email to cap-talk+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/cap-talk/87bldeow1d.fsf%40dustycloud.org.
Carl:

Writing as someone who has done my time in the trenches on formal and rigorous specifications, I am of two minds.

For working engineers, I'd say that the ideal level of rigor today (barring improvements in the comprehensiveness of core CS education that would require more credit hours for a degree) is what the IBM 360 Principles of Operation achieved. While written in English, it is a rigorous and clear specification that was successful in maintaining a compatible processor architecture family over a 30-year period. But equally important, it was written in a language, style, and organization that provided pragmatically relevant and useful guidance to the practitioners.
- It might be good to have a more formal specification, though we both know that correct specification is more elusive than any of us might like, but
- It's irrelevant for the target audience here, because formal specification is done in a language, style, and framework that is largely alien to this audience of practitioners.
I'm merely saying that a specification document must speak to its audience if it is to be useful. I think there is an interesting discussion to be had in how that type of writing might be applied to security concerns.

As an aside, my confidence in formal methods was greatly reduced by experience. The problems are that (a) it's too easy to prove the wrong thing, (b) it doesn't scale and it doesn't iterate at reasonable cost, (c) the set of problems we can successfully apply it to is too specialized, and (d) the global pool of people who can do it isn't large enough to be interesting.

All of these issues may, in time, come to be resolved. That said, if our goal is justified confidence, there is a great deal that can be done with various forms of typing that hasn't been adequately explored - or hadn't been when I last did a deep dive on the literature. And yes, that's a kind of formal specification, but one that is expressed in a way that speaks to the practitioners, and one that can often be applied in a "variable weight" way, focusing effort where it is most needed.

With regard to your proposition about Actors, I have two small thoughts and one larger one:
- Some properties require typing, but not all. Notably: systemic information flow properties do not appear to be best addressed by type-based validation.
On Tue, Jan 26, 2021 at 1:03 PM Jonathan S. Shapiro <jonathan....@gmail.com> wrote:
> With regard to your proposition about Actors, I have two small thoughts and one larger one:
> - Some properties require typing, but not all. Notably: systemic information flow properties do not appear to be best addressed by type-based validation.
Hi Jonathan, could you expand on this? That's my intuition as well, but I don't understand it well enough to argue against the common wisdom of the people doing this. Thanks.
Programmers who are proficient in Java and C++ should be able to deal with formal properties of systems. Only specialized software engineers need run the proof engines :-)

The IBM 360 voluminous approach will not provide the automation that we need for Universal Intelligent Systems. Of course, we still need to write articles and documentation in natural language as well ;-)
As you note, formal properties can be as difficult as large software systems to develop and debug.
Security isn't about confidence. It's about results in the field. Confidence is a sales tool. Security is an objectively measurable and characterizable condition. Both are important, and both are needed, but we shouldn't neglect the second for the first.
The two main thought streams about security concern themselves with (a) confidence and (b) process. What the fuck happened to considerations of (c) structure, and (d) architecture, which are the main things that ultimately determine behavior in the field?
Security comes at a cost, and our persistent inability to frame the cost/benefit tradeoff of security-related engineering is part of why the business case for it is so elusive. Without a business case, it simply won't happen.
Sustainable revenue from software is ultimately driven by on-going change. The majority of software exists in a compulsory feature arms race for economic and business reasons, and the way we have been approaching verification over the last forty years has largely ignored this. There is a fundamental impedance mismatch, and Carl's assertion above about verification specialists seems very close to the heart of it.
// t is nullable<T> prior to this line:
if (t === null) { handle_it(); return; }
// t is typed as [non-nullable] T from here down
// (the null branch must exit, or throw, for the narrowing to hold below)
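For concreteness, here is a hypothetical, runnable TypeScript version of the narrowing pattern sketched above (the function name `describe` and its messages are illustrative, not from the original post):

```typescript
// TypeScript narrows `string | null` to `string` after a null check
// whose branch exits the enclosing function.
function describe(t: string | null): string {
  // t is string | null prior to this line
  if (t === null) return "(missing)"; // handle the null case and exit
  // t is narrowed to string from here down; no cast needed
  return `value of length ${t.length}`;
}

console.log(describe(null));    // "(missing)"
console.log(describe("actor")); // "value of length 5"
```

The compiler enforces the comment in the original snippet: below the check, accessing `t.length` type-checks without any null guard.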
(∀[e:FromOutside⦅S⦆] P[e]        // if P holds for every event from outside S
 ⋀ ∀[e:Event⦅S⦆,                 // and, for every event e of S
    f:ImmediatelyFollows⦅e⦆]     //   and for every f immediately following e,
   P[e] ⇒ P[f])                  //   if P holds for e, then P holds for f
⇒ ∀[e:Event⦅S⦆] P[e]             // then P holds for every event of S
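To make the shape of the axiom concrete, here is a hypothetical TypeScript sketch that checks a predicate over a finite event graph by exactly this induction: establish P at the events arriving from outside, then propagate along immediately-following events. All names are illustrative; this is a toy finite model, not the Actors theory itself.

```typescript
// A finite model of event induction: events of a system S form a graph
// whose roots are the events arriving from outside S, and each event
// lists the events that immediately follow it.
interface Ev {
  id: string;
  next: Ev[]; // immediately-following events
}

// P holds for every event of S iff P holds at every outside event and
// each immediately-follows step preserves P. We check this by walking
// the graph from the roots.
function holdsEverywhere(outside: Ev[], P: (e: Ev) => boolean): boolean {
  const seen = new Set<string>();
  const stack: Ev[] = [...outside];
  while (stack.length > 0) {
    const e = stack.pop()!;
    if (seen.has(e.id)) continue;
    seen.add(e.id);
    if (!P(e)) return false; // the induction fails at e
    stack.push(...e.next);   // P[e] ⇒ now check P[f] for each follower f
  }
  return true;
}

const ev3: Ev = { id: "c", next: [] };
const ev2: Ev = { id: "b", next: [ev3] };
const ev1: Ev = { id: "a", next: [ev2] };
console.log(holdsEverywhere([ev1], e => e.id.length === 1)); // true
```

The point of the axiom, of course, is that it licenses this style of reasoning over *unbounded* event structures, not just the finite graphs a program can walk.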
Address Information Transfer
Axiom
∀[P:Proposition⦅1⦆⦅Addresses⦆]                     // for every predicate P on the address information of events
 (∀[e:FromOutside⦅S⦆] P[AddressInfo[e]]            // if P holds for the address info of every event from outside S
  ⋀ ∀[x:Actor⦅S⦆]
     x:Primordial ⇒ P[AddressInfo[Creation[x]]]    // and, if x is primordial, P holds for the address info
                                                   //   in the creation event of x
  ⋀ ∀[e:Reception⦅x⦆, e1:Scoped_S⦅e⦆]
     P[AddressInfo[e]] ⇒ P[AddressInfo[e1]])       // and, for every reception e of x, if P holds for the address
                                                   //   info of e, then P holds for the address info of every event
                                                   //   of S in the region of mutual exclusion of e
 ⇒ ∀[e:Event⦅S⦆] P[AddressInfo[e]]                 // then P holds for the address info of every event of S
In the theory Actors, the following can be proved:

Theorem. ∀[e1:Event, e2:Event] e1↝e2 ⇒ AddressInfo[e1] ⊑ AddressInfo[e2]
[Hewitt and Baker 1977] may have been the first to begin characterizing Actor address information transfer among different computers in "Laws of Locality".
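The theorem says that address information only accumulates along the activation order. A hypothetical toy model in TypeScript makes the claim tangible (address info as a set of addresses, ⊑ read as set inclusion; all names are illustrative, not from the papers):

```typescript
// Toy model: the address information carried by an event is a set of
// actor addresses, and e1 ⊑ e2 means e2 carries at least what e1 did.
type AddressInfo = Set<string>;

const leq = (a: AddressInfo, b: AddressInfo): boolean =>
  [...a].every(addr => b.has(addr));

// Delivering a message produces a follow-on event whose address info is
// the union of the message's info and the receiver's prior knowledge —
// so information can grow, but never shrink, along e1 ↝ e2.
function deliver(msgInfo: AddressInfo, receiverInfo: AddressInfo): AddressInfo {
  return new Set([...msgInfo, ...receiverInfo]);
}

const infoE1: AddressInfo = new Set(["alice", "bob"]);
const infoE2 = deliver(infoE1, new Set(["carol"]));
console.log(leq(infoE1, infoE2)); // true: AddressInfo[e1] ⊑ AddressInfo[e2]
```

In this toy model the theorem's monotonicity is immediate from the union in `deliver`; the theory's content is that every admissible Actor computation has this shape.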
I alluded to it in the opening, but I think a better path forward would be to define a new complexity class, Polynomial Actors or PA for short (please don't let this be the name, please somebody choose a better name) which is like P but for Actor computation. Given a linear/quadratic/etc. number of events, how much work can get done with Actors? Is it more than with Turing machines, which is how P is typically defined, or with quantum computers, like how BQP is defined? The existing comparison with NP would then be reusable as a demonstration that P vs PA is a meaningful and interesting question regardless of how P vs NP is settled.
Hm. To ask an implicit question, is it really possible to send two messages concurrently to an actor so that one of those messages might be indefinitely delayed, but both messages will be delivered for sure eventually? The usage of "concurrently" and "faster" makes me curious about the actual physical difficulties of constructing Actor machines.
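As a toy illustration of the scenario in the question — delivery order arbitrary, delay unbounded but always finite, delivery guaranteed — here is a hypothetical sequential simulation in TypeScript (it models the fairness property only, not real physical concurrency):

```typescript
// A counter actor receives "inc" messages and one "stop" message.
// The "stop" may be delayed behind any finite number of "inc"s, but
// fairness guarantees it is eventually delivered.
function run(delay: number): number {
  let count = 0;
  const mailbox: string[] = [...Array(delay).fill("inc"), "stop"];
  for (const msg of mailbox) {
    if (msg === "stop") return count; // guaranteed to be reached
    count++;
  }
  return count;
}

// Any finite delay yields a finite count, yet no bound on the count can
// be given in advance — the classic unbounded-nondeterminism picture:
console.log(run(3));    // 3
console.log(run(1000)); // 1000
```

The physical question remains: realizing "delayed arbitrarily long, yet delivered for sure" requires an arbiter-like mechanism in hardware, which is exactly the construction difficulty the post is asking about.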
PS. If a dependency is required that another Actor has received a message M, often a better bet is to first receive a response that the other Actor has received M.
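The acknowledgement pattern in the PS can be sketched as follows (hypothetical TypeScript; the `replyTo` field and message shape are illustrative, not a real Actor API):

```typescript
// Rather than assuming the other Actor has received M, the sender waits
// for an explicit reply before acting on the dependency.
type Message = { body: string; replyTo: (ack: string) => void };

function makeReceiver(processed: string[]) {
  return (m: Message): void => {
    processed.push(m.body);           // the receiver actually processes M...
    m.replyTo("received:" + m.body);  // ...and only then acknowledges
  };
}

const processed: string[] = [];
let ack = "";
const receiver = makeReceiver(processed);
receiver({ body: "M", replyTo: a => { ack = a; } });
// Only after the acknowledgement arrives is the dependency safe to rely on.
console.log(ack); // "received:M"
```

The design point: the sender never reasons about delivery timing at all; it reasons only about the reply, which it can observe locally.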
PS. Even if Jonathan is correct that we can't deal with formal properties...
On Sun, Jan 31, 2021 at 3:56 AM Carl Hewitt <carl.e...@gmail.com> wrote:
> PS. Even if Jonathan is correct that we can't deal with formal properties...

That's not quite what I said, but it is accurate enough for now. I think everybody here has read about just how many iterations it took to arrive at a correct formal specification for something as simple as a sort algorithm. That specification problem hasn't gotten substantially easier in modern mechanized provers.

Because they are generally graph properties, the specification of information flow objectives is actually easier than the specification of sort algorithms. Even so, it isn't unknown to arrive at a correct formal capture of a mis-stated property. From memory, the originally published seL4 isolation verification inadvertently did this. I'd have to go back to the paper and re-read, but my memory is that they arrived at a correct formalization of a slightly incorrect goal statement in that very early work. What's remarkable is not that a mistake may have been made, but that the mistake (again from memory) was in the original informal statement of the desired property, and none of the reviewers caught it. Because it was a "plain language" issue, the humans had a pretty good chance of noticing the problem. Had it been an omission in the formal maths, the likelihood of catching it would have been so close to zero as makes no difference. And it wasn't the type of error that is readily caught in the course of the verification. Verification often can be used as a co-validation of the refinement of the goal specification, and it can sometimes discover contradictions in the goal statement, but it does not diagnose omissions or errors in the goal statement.

The justifiable confidence achieved from formal verification is the weaker of the verification itself and the specification. We know that the verification efforts are difficult, but more critically, we know that creating the specifications is an activity with a very high error rate. And that's just for simple properties. End-to-end correctness properties are anything but simple.

It is appropriate to ask "How can we improve the specification process?" and I expect that will be an open, long-term research process. But it is also reasonable to ask "Can the confidence we are actually getting be had with a simpler mechanism that can be used by a broader audience, e.g. slightly stronger typing combined with rigorous (but not mechanized) argument?" I currently believe that (for now) the answer is "yes".

This is not, by the way, an argument about the expense of verification. It's pretty well documented that the reduction in error/bug rate that comes from verification is a significant win over the long haul. Dependent type systems can be pushed a good ways into that, but the complexity of a dependent type has a way of exploding quickly when it is pushed in this sort of way.

Unfortunately, there is a serious cost to verification: time to market. While verification yields more robust results, it also tends to drive longer delivery cycles. In the market, it is frequently the case that arriving first, even with a weaker offering, provides market victory. In my view, this remains an open problem to be solved for verification to become mainstream. In essence: the process needs to be much more incremental.

Jonathan
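The "proving the wrong thing" hazard is easy to make concrete with a toy example (hypothetical, not drawn from the seL4 work): a sort specification that demands only sortedness is satisfied by a function that discards its input, and the missing permutation clause is exactly where such specifications classically go wrong.

```typescript
// An incomplete specification: "the output is in nondecreasing order".
const isSorted = (xs: number[]): boolean =>
  xs.every((x, i) => i === 0 || xs[i - 1] <= x);

// A "sort" that satisfies the incomplete spec by discarding its input:
const bogusSort = (_xs: number[]): number[] => [];

// The missing clause: the output must be a permutation of the input.
const isPermutation = (xs: number[], ys: number[]): boolean => {
  const a = [...xs].sort((p, q) => p - q);
  const b = [...ys].sort((p, q) => p - q);
  return a.length === b.length && a.every((x, i) => x === b[i]);
};

const input = [3, 1, 2];
console.log(isSorted(bogusSort(input)));             // true — the weak spec is "verified"
console.log(isPermutation(input, bogusSort(input))); // false — the real defect
```

A prover would happily certify `bogusSort` against `isSorted` alone; nothing in the verification step diagnoses the omission in the goal statement, which is the point made above.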
The time-to-market issue can be addressed by co-developing implementations with their formal specifications.
On Tue, Feb 2, 2021 at 11:04 AM Carl Hewitt <carl.e...@gmail.com> wrote:
> The time-to-market issue can be addressed by co-developing implementations with their formal specifications.

I think you're in the wrong order of magnitude by several orders,
On Tue, Feb 2, 2021 at 10:49 AM Jonathan S. Shapiro <jonathan....@gmail.com> wrote:
> I think everybody here has read about just how many iterations it took to arrive at a correct formal specification for something as simple as a sort algorithm. That specification problem hasn't gotten substantially easier in modern mechanized provers.

I have not seen that one. Do you have a link?
I would hazard a guess that this is the talk "Using Formal Methods to Eliminate Exploitable Bugs", given by Kathleen Fisher on Friday, May 20, 2016 at 3:30pm, as part of the HACMS project: https://ts.data61.csiro.au/projects/TS/SMACCM/
Regards,
--
Cheers,
--MarkM
the native language of a topos is a higher-order theory.
On Tuesday, February 16, 2021 at 6:55:31 AM UTC-8 carl.e...@gmail.com wrote:
The mathematical theory of categories [McLarty 1991] is inadequate for the foundations of Computer Science because it lacks precision, since it is a 1st-order theory. Every 1st-order theory has nonstandard models, which means that the theory has not precisely defined its subject matter. On the other hand, the theory Actors has just one model up to a unique isomorphism. See the following for more details: Physical Indeterminacy in Digital Computation

"Everything is an actor" implies that there is a category of actors. We can sketch such a category briefly.