Beginnings of Linear Logic for foundations


Corbin Simpson

May 20, 2019, 1:13:32 PM
to Metamath
Hi!

I've started work on what I'm thinking of as "lset.mm": https://gist.github.com/3e1cfa0062bed748234f573558211869

I have a few thoughts, both about Metamath and about this particular adventure:

* I tried several different flavors of syntax before picking this one. I tried having sequent-style symmetric turnstiles, but that had terrible bookkeeping problems. I thought about postfix negation, as is traditional for linear logic (it looks like exponentiation with _|_ ) but I had trouble writing the rules. (I am stopping myself from writing a Metamath syntax compiler.)

* As a category-theoretic desire, I'd like to be able to enter isomorphisms as axioms without incurring a double-entry penalty. I'd also like to not have to define both introduction and elimination axioms for new syntax built on old connectives.

* Relatedly, I want to have to only express a single associative law for each connective. Digging down into parens is unpleasant. I was considering weakening the turnstile at the head of those axioms to `wff` or something along those lines, but I'm not yet thinking enough like the Metamath parser to be able to predict what will go wrong with that. (As a language designer, this is screaming to me "you're putting too much into the syntax", but aren't we putting *everything* into the syntax?)

* set.mm rushes to set up implication and then to move deduction into the object language, where we have more powerful deductive tools. However, implication is not as primitive in (classical) linear logic, and is honestly more pleasantly defined in terms of the multiplicative connectives; this not only leaves a gap before the introduction of implication, but cannot solve all upcoming issues since the additive connectives do not interact well with implication.

Further, in this file so far, it seems like I can apply ~ ax-i or ~ id in order to convert the Metamath-level rules into lollipops. Maybe this isn't a big deal, just a matter of ergonomics and my own inexperience.

* ⅋ is not in ASCII and there appears to be no good replacement. On the plus side, maybe this will lead to a demystification of this operator.

* Nomenclature is difficult, as always.

* This was a little hard, but mostly what is hard is looking at nearly 600KLOC of set.mm and trying to imagine how much of that I'll be able to get to. I know that there's a lot of intermediate stuff that I can pick and choose to prove, depending on where I'm going, but I haven't even reached the integers.

* I need to replace 0 and 1, probably, if I *do* reach the integers. Z and O maybe?

* I'm not a mathematician. This is probably a problem, or at least problematic.

* The proof assistant is alright. I'm able to integrate it with my editor somewhat, and to prove one or two proofs at a time, slowly.

* Proof verification is so fast that it changes how I'm thinking about my workflow compared to other languages like Coq.

Thanks,

~ C.

fl

May 20, 2019, 3:02:23 PM
to Metamath
It doesn't seem they have a decent set of axioms. You won't go very far.

--
FL

Mario Carneiro

May 20, 2019, 8:53:00 PM
to metamath
Hi Corbin,

Welcome to the metamath mailing list. This is very cool, a very strong first exploration into linear logic in metamath.

On Mon, May 20, 2019 at 1:13 PM Corbin Simpson <mostawe...@gmail.com> wrote:
* I tried several different flavors of syntax before picking this one. I tried having sequent-style symmetric turnstiles, but that had terrible bookkeeping problems. I thought about postfix negation, as is traditional for linear logic (it looks like exponentiation with _|_ ) but I had trouble writing the rules. (I am stopping myself from writing a Metamath syntax compiler.)

* ⅋ is not in ASCII and there appears to be no good replacement. On the plus side, maybe this will lead to a demystification of this operator.
 
I think the syntax you've settled on is good, although the notation could be better. My suggestion would be to use /\ and \/ in place of |"| and |_| for notational similarity (I'm not sure what these are approximating). I realize that the connectives behave slightly differently in linear logic but it seems easier to borrow the usual notation for half of your operators and invent new notation (like box and diamond) for the other half.

Have you thought about using /\ and \/ and && and || ? Also + and * seem reasonable choices for the "plus" and "times" connectives, or if you prefer, (+) and (x) to make circle-plus and circle-times. Also using & for "with" is a no-brainer. One technique used in set.mm for variation symbols is to have a marker like ~ or _ next to it; so you might try options like "~&" or "^&" for par. Or just use \/ since that's not otherwise being used and par is at least disjunctive-ish. Or just pick a random other symbol like "%". It's not that important in the long run, people will get used to whatever you use.

Obviously I've made an inconsistent set of suggestions; pick your favorite subset. Below I will use (+) and (x) and & and \/.
 
* As a category-theoretic desire, I'd like to be able to enter isomorphisms as axioms without incurring a double-entry penalty. I'd also like to not have to define both introduction and elimination axioms for new syntax built on old connectives.

* Relatedly, I want to have to only express a single associative law for each connective. Digging down into parens is unpleasant. I was considering weakening the turnstile at the head of those axioms to `wff` or something along those lines, but I'm not yet thinking enough like the Metamath parser to be able to predict what will go wrong with that. (As a language designer, this is screaming to me "you're putting too much into the syntax", but aren't we putting *everything* into the syntax?)

One thing I thought reading your file is "you desperately need iff". There is an alarming number of axioms in the file, maybe more than 50% of the statements. I'm not completely up on linear logic but you should be able to define a connective that means iff; I think ( A <-> B ) should be ( ( A -o B ) & ( B -o A ) ). You should introduce this early, give only the axioms you need to characterize lolly and iff, and then use iff to define the other axioms like distributivity of various kinds.

Duality should allow you to cut down on most of the axioms. The postfix _|_ operator is not a negation in the logic, but rather a defined operation on expressions. To encode this in metamath we need a judgment saying that the dual of ph is ps, for example:

$c ~= $.
$a wff ( ph ~= ps ) $.   $( this is slightly abusive as we don't really want ~= to be an operator, just a top level judgment. But the alternative requires more sorts $)

$e |- ( ph ~= ps ) $a |- ( ps ~= ph ) $.
$a |- ( 1 ~= F ) $.
$a |- ( 0 ~= T ) $.
$e |- ( ph ~= ch ) $e |- ( ps ~= th ) $a |- ( ( ph (x) ps ) ~= ( ch \/ th ) ) $.
$e |- ( ph ~= ch ) $e |- ( ps ~= th ) $a |- ( ( ph (+) ps ) ~= ( ch & th ) ) $.
$e |- ( ph ~= ps ) $a |- ( ! ph ~= ? ps ) $.
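For intuition, these ~= rules just track a purely syntactic dualization, which can be sketched as a recursive function over expression trees. A Python sketch of mine (names like 'times' and 'par' are my own placeholders for the connectives, not thread notation):

```python
# Expressions are nested tuples; atoms and constants are strings.
# dual() computes the De Morgan dual, pushing negation down to atoms,
# mirroring the ~= rules: times<->par, plus<->with, 1<->F, 0<->T, !<->?.
DUAL = {'times': 'par', 'par': 'times',
        'plus': 'with', 'with': 'plus',
        '1': 'F', 'F': '1', '0': 'T', 'T': '0',
        '!': '?', '?': '!'}

def dual(e):
    if isinstance(e, str):
        # constants map to their dual constant; atoms pick up a negation
        return DUAL.get(e, ('~', e))
    if e[0] == '~':
        return e[1]                      # dual of a negated atom
    if e[0] in ('!', '?'):
        return (DUAL[e[0]], dual(e[1]))  # exponentials are unary
    op, a, b = e
    return (DUAL[op], dual(a), dual(b))  # binary connectives
```

Note that dual(dual(e)) == e, which is the double-negation elimination built into the judgment.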

We still need a negation in the logic to denote the negation of an atom:

$c ~ $.
$a wff ~ ph $.
$a |- ( ph ~= ~ ph ) $.

You might consider this not parsimonious, since ( ph ~= ps ) is provably equivalent to ( ~ ph <-> ps ) once you have the axioms to say that. But you have to introduce either equality or this dual-of judgment early on in order to define the negation that is used in the axioms.

Linear logic lends itself well to a one-sided sequent calculus, but since we have to model the comma as an operator on wffs it makes sense to use par since the operator already exists. So then it just boils down to writing the axioms of LL where par substitutes for the context disjunction. Since the context disjunction is commutative and associative we need that for par:

$e |- ( ph \/ ps ) $a |- ( ps \/ ph ) $.
$e |- ( ( ph \/ ps ) \/ ch ) $a |- ( ph \/ ( ps \/ ch ) ) $.
$e |- ( ph \/ ( ps \/ ch ) ) $a |- ( ( ph \/ ps ) \/ ch ) $.

These on their own aren't good enough to put any disjunct wherever we want though, as there is no way to apply associativity to a subterm in the context. In classical logic, we would deduce ( ( ps /\ ph ) /\ ch ) from ( ( ph /\ ps ) /\ ch ) by using modus ponens to reduce to ( ( ( ph /\ ps ) /\ ch ) -> ( ( ps /\ ph ) /\ ch ) ) and then use an implication lemma to reduce to ( ( ph /\ ps ) -> ( ps /\ ph ) ) which is an application of symmetry. Having equals here would be again very nice, and we know it's coming but maybe we can't wait because we're still setting up initial proof rules.
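This failure can be checked concretely: treating par-expressions as nested pairs, a search that applies commutativity and associativity only at the head never reaches a term whose inner pair has been swapped. A small Python sketch of mine (not from the axiomatization):

```python
from collections import deque

# Expressions are nested pairs standing for par-combinations.
# head_steps applies the three rules above only at the root:
# commutativity, and associativity in each direction.
def head_steps(e):
    if isinstance(e, tuple):
        x, y = e
        yield (y, x)                     # ( ph \/ ps ) => ( ps \/ ph )
        if isinstance(x, tuple):
            yield (x[0], (x[1], y))      # reassociate to the right
        if isinstance(y, tuple):
            yield ((x, y[0]), y[1])      # reassociate to the left

def reachable(start):
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in head_steps(queue.popleft()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

orbit = reachable((('a', 'b'), 'c'))
# Only 6 of the 12 arrangements of three leaves are reachable;
# (('b', 'a'), 'c') -- commutativity inside a subterm -- is not.
```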

Let's give in to temptation and add an equality connective <-> as a primitive of the logic. This means we don't need ~= anymore, and we have the following:

$c <-> $.
$a |- ( ph <-> ph ) $.
$e |- ( ph <-> ps ) $e |- ( ch <-> ps ) $a |- ( ph <-> ch ) $.
$e |- ph $e |- ( ph <-> ps ) $a |- ps $.

This should eliminate the need for stating most things as inference rules. Let's get back to contexts:

$a |- ( ( ph \/ ps ) <-> ( ps \/ ph ) ) $.
$a |- ( ( ( ph \/ ps ) \/ ch ) <-> ( ph \/ ( ps \/ ch ) ) ) $.
$e |- ( ph <-> ps ) $e |- ( ch <-> th ) $a |- ( ( ph \/ ch ) <-> ( ps \/ th ) ) $.

Nice, only one of them is an inference rule now. It would be nice to have the last one as a closed form lemma too but that needs the lolly and we aren't there yet (and there are also linearity worries there). In the spirit of parsimony, we can define half of the connectives in terms of negation. (Let's say that \/, &, 1, T, ! are primitive and the others are defined.) Let's put in the other equality axioms too:

$e |- ( ph <-> ps ) $e |- ( ch <-> th ) $a |- ( ( ph & ch ) <-> ( ps & th ) ) $.
$e |- ( ph <-> ps ) $a |- ( ~ ph <-> ~ ps ) $.

We should also have an equality axiom for equality itself, but we don't really need to use <-> as a proper wff and once we have the full axiom system it will be provable. The context disjunction now works correctly, so we can get on with the real axioms:

$a |- ( ~ ~ ph <-> ph ) $.
$a |- ( ~ ph \/ ph ) $.
$e |- ( ph \/ ps ) $e |- ( ~ ps \/ ch ) $a |- ( ph \/ ch ) $.
$a |- 1 $.
$a |- ( ph \/ T ) $.
$e |- ( ph \/ ps ) $e |- ( ph \/ ch ) $a |- ( ph \/ ( ps & ch ) ) $.
$e |- ( ~ ph \/ ps ) $e |- ( ~ ps \/ ph ) $a |- ( ph <-> ps ) $.

The only rule we need for negation is DNE because of our choice to define the other connectives, but we add a rule for proving an equality because otherwise we won't be able to characterize it. We define the dual connectives:

$a |- ( ( ph (+) ps ) <-> ~ ( ~ ph & ~ ps ) ) $.
$a |- ( ( ph (x) ps ) <-> ~ ( ~ ph \/ ~ ps ) ) $.
$a |- ( ? ph <-> ~ ! ~ ph ) $.
$a |- ( 0 <-> ~ T ) $.
$a |- ( F <-> ~ 1 ) $.
$a |- ( ( ph -o ps ) <-> ( ~ ph \/ ps ) ) $.

The cost of having these as definitions is that we still need the rules that were associated to them, but now they are rules for negated forms of the original connectives. At least we have the lolly now so these look good.

$e |- ( ph -o ch ) $e |- ( ps -o th ) $a |- ( ( ph \/ ps ) -o ( ch \/ th ) ) $.
$a |- ( ( ph & ps ) -o ph ) $.
$a |- ( ( ph & ps ) -o ps ) $.

The wikipedia rules for exponentials are a bit weird since they involve both of them at once. Possibly this can be optimized.

$e |- ph $a |- ( ph \/ ? ps ) $.
$e |- ( ph \/ ( ? ps \/ ? ps ) ) $a |- ( ph \/ ? ps ) $.
$e |- ( ? ph \/ ps ) $a |- ( ? ph \/ ! ps ) $.
$e |- ( ph \/ ps ) $a |- ( ph \/ ? ps ) $.

From these, I think all the axioms in your axiomatization are derivable.

* set.mm rushes to set up implication and then to move deduction into the object language, where we have more powerful deductive tools. However, implication is not as primitive in (classical) linear logic, and is honestly more pleasantly defined in terms of the multiplicative connectives; this not only leaves a gap before the introduction of implication, but cannot solve all upcoming issues since the additive connectives do not interact well with implication.

The natural deduction style in set.mm doesn't work as well in linear logic because of the need to split the context, but the natural analogue is to have a disjunct ( ph \/ ps ) where ps is the thing you are trying to prove and ph is the context. This is basically saying that ~ ph is the "resource" used to prove ps. With the lolly in place it's about the same.

Further, in this file so far, it seems like I can apply ~ ax-i or ~ id in order to convert the Metamath-level rules into lollipops. Maybe this isn't a big deal, just a matter of ergonomics and my own inexperience.

I might be missing something about how linear logic or this axiomatization works, but I am suspicious of this claim. Suppose I have a rule $e |- foo $a |- bar $. for some particular formulas foo and bar. This says that if foo is derivable then so is bar, but this does not imply ( foo -o bar ). In fact this is even more evident in LL because the lolly implies some kind of resource consumption, but I can pick foo and bar so that doesn't work. For example ( 1 \/ 1 ) is not provable because there is no way to get rid of that second resource, so ( F -o ph ) is similarly not derivable: with ph := 1 it unfolds to ( ~ F \/ 1 ), that is, ( 1 \/ 1 ). But a rule $e |- F $a |- ph $. is conservative because |- F is not provable, so the rule can never be used. So if I add the axiom $e |- F $a |- ph $. there is no way you can derive $p |- ( F -o ph ) $.
 
* Nomenclature is difficult, as always.

Try to use the set.mm style of giving label fragments to each symbol, and string those together to make your axioms. For example \/ "par" & "wi" (+) "pl" (x) "mu" and then you get things like ax-wi and df-pl and so on. Or if there is a similar set.mm theorem, you could name it after that; for example I would be inclined to call ( ( ph & ps ) -o ph ) ax-simpl. The most important thing is to find a convention and stick to it as rigidly as you can, at least until you get comfortable with it.
 
* This was a little hard, but mostly what is hard is looking at nearly 600KLOC of set.mm and trying to imagine how much of that I'll be able to get to. I know that there's a lot of intermediate stuff that I can pick and choose to prove, depending on where I'm going, but I haven't even reached the integers.

As FL says, this isn't really in the same arena as set.mm, it's more of a propositional logic extension. If you want to do set theory you should work out how first order logic works in this setting, and then also figure out what you want to achieve with this. It might be personal bias but I'm not really sure what LL has to offer set theory. I would be more interested in using LL for type theory and program logics, which is where they are usually seen.

* I need to replace 0 and 1, probably, if I *do* reach the integers. Z and O maybe?

That's certainly planning too far ahead. I would actually recommend something more like 0. and 1. , and the dots would go on the primitive wffs not the numbers. This already has precedent in set.mm; this is how the lattice top and bottom functions are defined (http://us2.metamath.org/mpeuni/df-p0.html).
 
* I'm not a mathematician. This is probably a problem, or at least problematic.

Eh, it's overrated.
 
* The proof assistant is alright. I'm able to integrate it with my editor somewhat, and to prove one or two proofs at a time, slowly.

What is your editor? Which prover interface are you using? It might surprise you to know there's more than one, but there is a whole ecosystem of different verifiers and so if your setup isn't working for you try a different one. In particular I recommend mmj2, which I have spent some time on (as a user and as a developer) and I find it to be quite usable.
 
* Proof verification is so fast that it changes how I'm thinking about my workflow compared to other languages like Coq.

I'm glad someone noticed. I started with metamath and have since branched out to more mainstream theorem provers, and I can't say how much I feel this every day. When verification is literally 4 orders of magnitude faster with an alternate strategy, you have to think something has gone wrong in the community process for that to be considered okay.

> A lot of people seem to think that performance is about doing the same thing, just doing it faster, and that is not true. That is not what performance is all about. If you can do something really fast, really well, people will start using it differently. -Linus Torvalds

I can tell you that being able to check the entire math library in a few seconds means that you can do it cheaply on startup and always have an accurate view of what theorems are working and which need work. This is just not possible in systems like Coq and Lean where building the library means sending it off to CI and not hearing back for a few hours (and burning a lot of CPU cycles the whole time). Mainstream theorem provers have a lot to learn from metamath, and this is a big one, I think. I want to scale MM to industrial grade projects to take advantage of this, because that's when people notice.

Mario Carneiro

Corbin Simpson

May 20, 2019, 8:57:19 PM
to Metamath
I had some other thoughts that I forgot to include:

* Based on a technique I believe I learned from https://blog.plover.com/math/24-puzzle-2.html , it would seem at least plausible that I could syntactically encode associative-and-commutative structures as bags of some sort. I'm not sure if this is a worthwhile avenue of exploration, but I know at least one popular parenthesis-based language family, Lisp, does this for some associative operations.

* I wonder whether double-lollipop should be an additive or multiplicative conjunction.

On Monday, May 20, 2019 at 12:02:23 PM UTC-7, fl wrote:
It doesn't seem they have a decent set of axioms. You won't go very far.


Totally understandable. Would there be a better set of axioms for doing this work? I don't know much about linear logic, and it seems like every author has a different presentation. Also, of course, if there's a preferred set of axioms that were waiting for some planned work along these same lines, then I'd prefer to go with what everybody else wants to do.

How far should I expect to get? Is there some ceiling that you're anticipating? I'm hoping that, after I finish adding axioms and proving a base and doing some sanity checks, I'll be able to reach anything in set.mm. In particular, I've convinced myself of ~ ax-1 but with lollipops on a whiteboard, and I think (but am not sure) that ~ ax-2 is also valid in linear logic. It is hopefully a matter of typing, proving, and trying to remember the difference between the four connectives.

Mario Carneiro

May 20, 2019, 9:19:49 PM
to metamath
On Mon, May 20, 2019 at 8:57 PM Corbin Simpson <mostawe...@gmail.com> wrote:
I had some other thoughts that I forgot to include:

* Based on a technique I believe I learned from https://blog.plover.com/math/24-puzzle-2.html , it would seem at least plausible that I could syntactically encode associative-and-commutative structures as bags of some sort. I'm not sure if this is a worthwhile avenue of exploration, but I know at least one popular parenthesis-based language family, Lisp, does this for some associative operations.

You can't really represent bags in metamath in any way that differs meaningfully from just having a binary operator with explicit association. You would still need a way to construct lists of things, and that list-making constructor could be used in various ways to get different trees, and you need to manipulate the expression to move the constructors around, and then you are back to associativity.

Technically, what I just said is not true, because metamath is actually based on expressions as strings, and it can match expressions with different associations at different times for interesting effect. I strongly recommend against using this feature; it's easy to shoot yourself in the foot if you manipulate expressions that do not have a well defined syntax tree.
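For contrast, here is what a bag encoding looks like outside metamath: flatten nested applications of the operator and sort the operands, so that terms differing only by association and commutation get the same normal form. A Python sketch (representation and names are my own):

```python
# flatten() collects the operands of nested applications of one
# associative-commutative operator; ac_normal() sorts them, so two
# expressions equal up to association/commutation normalize identically.
def flatten(e, op='par'):
    if isinstance(e, tuple) and e[0] == op:
        return flatten(e[1], op) + flatten(e[2], op)
    return [e]

def ac_normal(e, op='par'):
    return (op,) + tuple(sorted(flatten(e, op), key=repr))

lhs = ('par', ('par', 'a', 'b'), 'c')
rhs = ('par', 'b', ('par', 'c', 'a'))
# both normalize to ('par', 'a', 'b', 'c')
```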
 
* I wonder whether double-lollipop should be an additive or multiplicative conjunction.

Wikipedia says additive. I think that's right, because otherwise you wouldn't get modus ponens: From |- ( ( A -o B ) (x) ( B -o A ) ) and |- A you can't conclude |- B unless you have a way to get rid of the function going the other way.
 
On Monday, May 20, 2019 at 12:02:23 PM UTC-7, fl wrote:
It doesn't seem they have a decent set of axioms. You won't go very far.

Totally understandable. Would there be a better set of axioms for doing this work? I don't know much about linear logic, and it seems like every author has a different presentation. Also, of course, if there's a preferred set of axioms that were waiting for some planned work along these same lines, then I'd prefer to go with what everybody else wants to do.

I said this in the other post, but what I think FL means is that you don't have the axioms needed to do any set theory, only propositional logic. This is useful for studying the logic itself but not its applications. That said, you seem to be fine with adding extra axioms as needed so I would expect when you get there you will find the appropriate axioms to use, so there's not really any fundamental barrier to going farther with this.
 
How far should I expect to get? Is there some ceiling that you're anticipating? I'm hoping that, after I finish adding axioms and proving a base and doing some sanity checks, I'll be able to reach anything in set.mm. In particular, I've convinced myself of ~ ax-1 but with lollipops on a whiteboard, and I think (but am not sure) that ~ ax-2 is also valid in linear logic. It is hopefully a matter of typing, proving, and trying to remember the difference between the four connectives.

* ax-1 is not provable in LL. It's dropping the resource "ps" which is not allowed. The linear analogue of ax-1 is ( ph -o ( ? ps -o ph ) ).
* ax-2 is also not provable. The resource "ph" is duplicated in this one. Possible analogues include
$p |- ( ( ph -o ( ps -o ch ) ) -o ( ( th -o ps ) -o ( ( ph & th ) -o ch ) ) ) $.
$p |- ( ( ! ph -o ( ps -o ch ) ) -o ( ( ! ph -o ps ) -o ( ! ph -o ch ) ) ) $.
* ax-3 is provable. I'll leave that as an exercise for you. (BTW are you aware contrapos has a typo? It's an instance of identity.)
* ax-mp is provable, I think you have this already.

More generally, I would not recommend following the beginning of set.mm; the order of things is all wrong for this context. But the area where you have to start thinking more carefully is what to do from ax-4 on. What is the linear analogue of ( A. x ph -> ph )? What is A. x ph anyway? Simple type theory nicely interprets LL, but the analogue for forall is dependent type theory, and here the theory (of linear DTT) is notoriously subtle. I've only glanced at your reference but Mike Shulman is a big type theorist so he's probably worked this all out.

Mario

Mario Carneiro

May 20, 2019, 11:54:51 PM
to metamath
On Mon, May 20, 2019 at 9:19 PM Mario Carneiro <di....@gmail.com> wrote:
* I wonder whether double-lollipop should be an additive or multiplicative conjunction.

Wikipedia says additive. I think that's right, because otherwise you wouldn't get modus ponens: From |- ( ( A -o B ) (x) ( B -o A ) ) and |- A you can't conclude |- B unless you have a way to get rid of the function going the other way.

Actually, I think the <-> symbol I used in the axiomatization in the last post is better understood as an exponential, |- ( ( A <-> B ) <-> ! ( ( A -o B ) & ( B -o A ) ) ), which is the same regardless of whether you use additive or multiplicative conjunction. That's because the way I am using it is for ( ph <-> ps ) to mean "ph and ps are interchangeable in all contexts" which requires the ability to use this fact as many times as you want. Recall that |- ph implies |- ! ph, and all the rules I gave for <-> involved it being the root symbol, something outright provable, and so |- ( ( A -o B ) & ( B -o A ) ) and |- ( ( A -o B ) (x) ( B -o A ) ) and |- ! ( ( A -o B ) & ( B -o A ) ) are all interchangeable.

But the rule I gave for deducing <-> is wrong, or at least insufficiently powerful:

$e |- ( ~ ph \/ ps ) $e |- ( ~ ps \/ ph ) $a |- ( ph <-> ps ) $.

This can only be used to deduce <-> at the head, while what is needed is a characterization, i.e. a way to prove |- ( ( ph <-> ps ) <-> ... ) . But perhaps the most straightforward solution is to just say so, i.e.:

$a |- ( ( ph <-> ps ) <-> ! ( ( ph -o ps ) & ( ps -o ph ) ) ) $.

This leaves room for the double-lolly operation ( ph o-o ps ) <-> ( ( ph -o ps ) & ( ps -o ph ) ) to be a distinct symbol.

By the way, I found a program called llprover online for proving LL theorems; it uses the ascii notations "/\" for with, "\/" for par, "+" for plus, and "*" for times, which mostly matches my earlier advice of combining familiar symbols where possible and reasonable symbols for the rest. The par symbol is just ridiculous.

fl

May 21, 2019, 7:15:34 AM
to meta...@googlegroups.com

How far should I expect to get? Is there some ceiling that you're anticipating?

Yes. Two ceilings: 1) you need a full set of axioms (predicate calculus and a set theory);
2) if you have a constructive logic, you will have to express ideas like: I have a proof
in this state and in all subsequent states, but not in that one. How do you do that?

Girard's original work is here.


And his archives deserve to be read.


The axioms of his predicate calculus are given here, p. 11.


Unfortunately it is in Gentzen's style.

--
FL

Glauco

May 21, 2019, 3:34:07 PM
to Metamath
Hi and welcome, Corbin.

* The proof assistant is alright. I'm able to integrate it with my editor somewhat, and to prove one or two proofs at a time, slowly.

Did you try mmj2?

I alt-tab between it and an external (more powerful, general-purpose) editor just to search/replace strings, and (once you get used to it) it's not bad at all. A couple of keyboard shortcuts to save/reload your current .mmp file and you can focus on the proof.

I now feel like I'm the real bottleneck, not the IDE (and I'm confident, sooner or later, the community will further improve mmj2).


Glauco

fl

May 22, 2019, 7:12:07 AM
to Metamath

2) if you have a constructive logic, you will have to express ideas like: I have a proof
in this state and in all subsequent states, but not in that one. How do you do that?


I mean that you will have to express that the values of your individual variables change.

(I think there will be something like that. I think so, but I'm not sure: you have to dig deeper.)

--
FL 