Some interesting things about proof assistants


Lyle Kopnicky

Feb 10, 2019, 11:24:23 PM
to pdxfunc
I learned some interesting things about proof assistants this weekend.

Firstly, I finally have the vaguest idea of what "cubical type theory" is about. Jon Sterling spoke at PDX Func about it in 2015, as the type theory behind JonPRL. I really had no idea how it related to something like the calculus of inductive constructions used in Coq or Lean. I wanted to think it had something to do with the Barendregt lambda cube, but I was told it doesn't. Well, since then I had taken some interest in homotopy type theory, and I was surprised to find yesterday that cubical type theory is a way of making a computationally feasible form of homotopy type theory (HoTT). Homotopy type theory involves treating an equality between types A and B as meaning that a path exists from A to B. Normally a path is parameterized over an interval from 0 to 1, which I find sort of confusing - what in the world would an actual path between two types be? The HoTT book says we can forget about that, and just think of the high-level properties of homotopies. Phew.
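
To make the path idea a bit more concrete for myself, here's a tiny sketch in ordinary Lean 4 (plain propositional equality, not cubical), showing the operations the path language generalizes: transporting a value along an equality between types, and reversing and composing equalities. These are just standard library names (cast, Eq.symm, Eq.trans), nothing HoTT-specific.

```
-- A "path" p : A = B between types lets us carry values across it
-- (transport), and paths can be reversed and composed.
example (A B : Type) (p : A = B) (a : A) : B := cast p a
example (A B : Type) (p : A = B) : B = A := p.symm
example (A B C : Type) (p : A = B) (q : B = C) : A = C := p.trans q
```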

But... cubical type theory actually treats those paths as real things in the proofs! And the name "cubical" comes from the idea that you can have an equality, say, A = B, and then B = C, and C = D, and those form three sides of a square, and the other side is A = D. Since you can have many dimensions of these, they call them cubes. I still have no idea what the point is of breaking down the paths into real intervals.
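
Spelling out the square in ordinary (non-cubical) terms: the fourth side is just what you get by composing the other three. In cubical type theory the filled square is itself a piece of data, but the one-dimensional composite is the familiar thing; here is a Lean 4 sketch with plain Eq.

```
-- Given three sides A = B, B = C, and C = D, the remaining side A = D
-- follows by composing the paths.
example (A B C D : Type) (p : A = B) (q : B = C) (r : C = D) : A = D :=
  (p.trans q).trans r
```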

I also learned that Vladimir Voevodsky got the insight of homotopy type theory by studying Coq.

One thing we've talked about in the group before is how we like explicit proofs, avoiding tactics because we can't follow the steps. This is why I chose Logic and Proof as a text - the Coq tutorials all start right off with tactics. Well, I found out that Coq originally only had tactics, and a declarative proof language (similar to Lean's "have" and "show") was added in 2006. It was influenced by a proof checker called Mizar.
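
For a concrete contrast, here's the same trivial lemma written both ways in Lean 4, a tactic proof next to the declarative style with have and show. This is just an illustration of the two styles, not an excerpt from any of the texts mentioned.

```
-- Tactic style: the script says what to do, and the proof term stays hidden.
example (p q : Prop) (hp : p) (hq : q) : q ∧ p := by
  constructor
  · exact hq
  · exact hp

-- Declarative style: each intermediate fact is stated explicitly.
example (p q : Prop) (hp : p) (hq : q) : q ∧ p :=
  have h1 : q := hq
  have h2 : p := hp
  show q ∧ p from ⟨h1, h2⟩
```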

Then I read about Metamath, which is similar to Mizar, but better because it has a stable language. Metamath has a repository of over 20,000 proofs, in such detail that the only inference rule that ever needs to be applied is substitution. The proof of 2 + 2 = 4 has over 26,000 steps when fully expanded. But you can zoom in to whatever level of detail you like. And then there's Ghilbert, which can be translated back and forth with Metamath, and they have a proof editor on the website that renders using LaTeX.

I guess if you want to understand the proofs behind standard mathematical theorems, Metamath or Ghilbert is a good place to look, as well as a place to add your own proofs. If you want to prove your own theorems or properties of code, you're better off with something like Lean - it's probably a lot faster, and will automate some of the details for you.
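
As one concrete point of comparison with the fully expanded Metamath proof: in Lean the same fact closes by computation, with the kernel doing the grinding for you.

```
-- 2 + 2 = 4 holds by computation, so the entire proof term is rfl.
example : 2 + 2 = 4 := rfl
```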

- Lyle

Lyle Kopnicky

Feb 10, 2019, 11:48:44 PM
to pdxfunc
By the way, the latest incarnation of cubical type theory is to be found in Cubical Agda and redtt.


Matt Rice

Feb 11, 2019, 2:22:56 AM
to pdx...@googlegroups.com
Cool, something I saw linked somewhere recently is this paper,
"A Brief Introduction to Type Theory and the Univalence Axiom",
which for its brevity is a very good overview. I highly recommend it if you haven't read it already.
I found it much more approachable than the HoTT book.
https://math.uchicago.edu/~may/REU2015/REUPapers/Macor.pdf

I've been very interested in the type theories of the Cedille theorem prover, https://cedille.github.io/
There is a 2001 paper aptly titled "Induction is not derivable in second order dependent type theory".

Where Coq and the theorem provers derived from it generally use the calculus of inductive constructions,
Cedille is different in that it adds to the calculus of constructions only the minimal types needed to derive induction.
That is, in Lean, when you define a datatype like nat, an inductive recursor such as nat.rec_on is generated automatically from the structure of the inductive type.
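
To illustrate the Lean side of that: declaring Nat gives you the recursor for free, and using it directly is exactly induction. A small Lean 4 sketch (Nat.rec is the Lean 4 name; nat.rec_on is the Lean 3 spelling):

```
-- The recursor is generated automatically from the inductive declaration.
#check @Nat.rec

-- Applied directly, it is just induction: a base case and a step case.
example (P : Nat → Prop) (h0 : P 0) (hs : ∀ n, P n → P (n + 1)) (n : Nat) : P n :=
  Nat.rec (motive := P) h0 hs n
```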

With Cedille, you have to derive it using the built-in types.
They built two variations of the type system: one using dependent intersection types and a form of equality,
and an earlier version using a recursive type. Victor Maia recently posted an article about his preference for the latter due to its simplicity,

and this week posted a (40-line!) JavaScript theorem prover:

As you mentioned about explicit proofs, one of the interesting things about these languages is, well, there aren't any tactics at all,
so it really is lambdas all the way down.
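
You can see that even in Lean if you avoid tactic mode entirely: the proof is literally a lambda term whose type is the proposition. (Lean syntax here, not Cedille, but the same spirit.)

```
-- A proof with no tactics at all: just a lambda.
example (p q r : Prop) : (p → q) → (q → r) → p → r :=
  fun hpq hqr hp => hqr (hpq hp)
```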

With the caveat that this is just a prototype which hasn't been proven sound.
In fact, what it currently implements has been proven inconsistent: https://en.wikipedia.org/wiki/System_U
But he makes a really interesting conjecture that requiring terms to normalize to weak head normal form would rule this out,
since the known inconsistent terms should not normalize that way.

And if that doesn't work out, there is always implementing it on top of the calculus of constructions.
I believe that would be the same type system as the original, which was shown consistent by erasure to System F.

Anyhow, I managed to play a bit with the latter, and proved, I guess, a cast from a natural to its predecessor,
then showed that (pred (succ n)) = n (using the Eq.cong and Nat from main.escoc).
(Note that this uses a Scott-encoded nat, so it's a bit different from the one most people are familiar with.)

It really is a neat time in type theory to see so many new and fascinating ideas emerging.

```
Nat.pred
: {a : (Nat a)} (Nat (Nat.pred a))
= [a]
  (a [a] (Nat (Nat.pred a))
   [prev : (Nat prev)] prev
   Nat.zero)

eq_n_pred_succ_n
: {n : (Nat n)}
 (Eq Nat (Nat.pred (Nat.succ n)) n (eq_n_pred_succ_n n))
= [n]
  (n [self](Eq Nat (Nat.pred (Nat.succ self)) self (eq_n_pred_succ_n self))
   [pred : (Nat pred)] ((Eq.cong Nat Nat (Nat.pred (Nat.succ pred)) pred (eq_n_pred_succ_n pred)) Nat.succ)
   (Eq.refl Nat Nat.zero))
main = (eq_n_pred_succ_n Nat.0)
```



Lyle Kopnicky

Feb 13, 2019, 12:09:31 AM
to pdxfunc
Thanks, Matt! Yeah, that “A Brief Introduction to Type Theory and the Univalence Axiom” paper looks interesting - I’ve read a few pages of it so far.

I’d seen Victor Maia’s blog post because it was linked from the Haskell Reddit. That does sound interesting.

Scott-encoded! Wow, hadn’t heard of that one before. Just read about it a bit on Wikipedia. Yet another thing named after Dana Scott.

I agree, it is a great time in type theory!