
Jan 1, 2014, 12:03:02 AM1/1/14

to HomotopyT...@googlegroups.com

Hi, I'm a homotopy theorist and a HOL Light programmer and I'm really excited by Vladimir's leadership in the formal proofs world. But I'm having a lot of trouble understanding the HoTT book. Can we go through this recent popular article? It sounds really great:

http://blogs.scientificamerican.com/guest-blog/2013/10/01/voevodskys-mathematical-revolution/

[Coq] was based on something called Martin-Löf type theory, and he just couldn’t get his head around it. [...] He realized that all this type theory stuff could be translated to be equivalent to homotopy theory,

I would really like to understand that! I apologize if this is clearly explained in the HoTT book. Maybe that's all anyone has to explain to me right now, and my questions below can wait.

“Set-theoretic approach to foundations of mathematics works well until one starts to think about categories, since categories cannot be properly considered as sets with structures, due to the required invariance of categorical constructions with respect to equivalences rather than isomorphisms of categories.”

OK, I see that Vladimir's Univalence Axiom is coming in here to provide the invariance. But is UF/Coq only good for category theory? I would hope that UF/Coq is more useful than that. I can think of a lot of ways that HOL Light makes it hard to formalize pure math, but these problems don't (I think???) have much to do with category theory. Rob Arthan, a stable homotopy theorist and HOL programmer, told me that the stirring intro to Andrews's Type Theory is overselling the case: HOL is better than ZFC for formalizing math, but it's far from perfect.

Rapidly, he figured out that every proof system but one was inadequate for what he had in mind. And that last one — Coq — he couldn’t understand.

Why is that? The only advantage that I know that Coq has over HOL Light is type quantification, which you need to define things with universal properties, like fibration.

Essentially, you have to give [your proof assistant] the same education you got — except that you have to do it in a far more exacting way.

Sure, I've done plenty of that in HOL Light (see

http://www.math.northwestern.edu/~richter/RichterHilbertAxiomGeometry.tar.gz

which is a bit more recent than the version in the HOL Light distribution) and seen others do a lot more, e.g. John Harrison has 19,000 lines in topology.ml in a directory Multivariate that's 8,452 KB.

But this approach circumvents all that labor.

I can't imagine how, unless lots of UF/Coq people have already done the labor.

Not only that, but the language the computer understands is far closer to natural mathematical language.

Sure, and any type theory would be, but what I really want to learn is how UF/Coq is closer to natural mathematical language than HOL. Let me crudely describe an enduring headache in HOL. Our sets are either types or "subsets" of types, and you can make function types alpha->beta for types alpha & beta, but given sets A & B, there is no type A->B, so you have to define your function from A to B on the whole type, and that causes trouble. Is this problem fixed in Coq with dependent types, or in UF somehow? John Harrison quotes Bob Solovay as saying that Coq's dependent types are too hard for mortals to understand. Do Coq's dependent types become usable if we're also using UF?
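To make the headache concrete, here is a minimal sketch in Lean 4 (a dependently typed assistant in the same Martin-Löf family as Coq; the names Even and halve are purely illustrative, not from any of the systems discussed). In a dependent type theory a "subset" can be packaged as a subtype, which is itself a first-class type, so the function type out of it exists directly:

```lean
-- A predicate carving out a "subset" of Nat:
def Even (n : Nat) : Prop := ∃ k, n = 2 * k

-- The subtype {n : Nat // Even n} is itself a type, so unlike a
-- HOL-style subset of a type, we can form function types out of it:
def halve : {n : Nat // Even n} → Nat :=
  fun ⟨n, _⟩ => n / 2
```

The function is defined only on the carved-out elements, with no need to extend it arbitrarily to the whole ambient type.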

In fact, Voevodsky says, it’s a bit like playing a video game. You interact with the computer. “You tell the computer, try this, and it tries it, and it gives you back the result of its actions,” Voevodsky says. “Sometimes it’s unexpected what comes out of it. It’s fun.”

That's true for any proof assistant, and I've spent lots of time playing this video game, and part of the game is that I don't really understand the difference between REWRITE_TAC, SIMP_TAC, MESON_TAC and MP_TAC. I have a "feel" for the difference from many hours playing the video game. Is there something about UF/Coq that makes these automated provers more usable, powerful, or describable?

He also predicts that this will lead to a blossoming of collaboration, pointing out that right now, collaboration requires an enormous trust, because it’s too much work to carefully check your collaborator’s work. With computer verification, the computer does all that for you, so you can collaborate with anyone and know that what they produce is solid. That creates the possibility of mathematicians doing large-scale collaborative projects that have been impractical until now.

This sounds great, as does Vladimir's hope/prediction in Wired that we'll soon formalize our papers before submitting them, but I think there's a somewhat different advantage of proof assistants. I think math is really about proofs that humans can understand, and that human mathematicians take great joy in understanding such proofs. I think it's quite easy in the modern pure math world for folks to not really prove their theorems; there's so much pressure to publish, or to show our cutting-edge fields are growth industries that can attract talented workers. I'm not going to say that's bad, but I think that way you lose some of the joy of really understanding the proofs. I'm hoping that our proof assistants will restore some of this joy that we've maybe lost. Partly with that in mind, I've worked pretty hard to write readable code, starting with Freek W's miz3.ml, which I generalized to readable.ml. John H is a big proponent of readable code, and he makes fun of the usual HOL Light coding style as EAR_OF_BAT_TAC. Do you think Coq/UF code is particularly readable, or have you made good advances in readability?

--

Best,

Bill


Jan 1, 2014, 12:34:56 AM1/1/14

to ric...@math.northwestern.edu, HomotopyT...@googlegroups.com

Keep in mind that popular science articles are generally written by journalists, not scientists, and hence are liable to contain inaccuracies and exaggerated claims. A better introduction for a mathematician is Awodey's survey paper:

http://www.andrew.cmu.edu/user/awodey/preprints/TTH.pdf

or the HoTT book. If you can bring up specific things in the book that are causing you difficulty, people might be able to help explain.

--

You received this message because you are subscribed to the Google Groups "Homotopy Type Theory" group.

To unsubscribe from this group and stop receiving emails from it, send an email to HomotopyTypeThe...@googlegroups.com.

For more options, visit https://groups.google.com/groups/opt_out.

Jan 1, 2014, 3:38:26 AM1/1/14

to HomotopyT...@googlegroups.com

On 1/1/14, 12:03 AM, ric...@math.northwestern.edu wrote:

> Why is that? The only advantage that I know that Coq has over HOL
> Light is type quantification, which you need to define things with
> universal properties, like fibration.

I'm writing as a relative novice (though I do help organize the NYC HoTT reading group), so I'm hesitant to weigh in. However, it seems to me that the questions you present are A) unfortunately based on a popularization of the work, and B) related to bigger questions about the relationship of proof assistants in different families (the HOL family vs. the Martin-Löf derived family, including both Coq and Agda), questions relating not just to "usability" but to the nature of their approach and foundations. So I'd like to try to explain what I find compelling about HoTT and what the point seems to be, to me at least.

The decent points in what you'll find below are probably all better and more clearly presented in the introduction to the HoTT book. The rotten points are my fault entirely.

One useful reference may be another popularization, but one from "the horse's mouth" so to speak, which is Voevodsky's lecture at the Heidelberg Laureate Forum, which I found gave a more serious summary of his aims:

http://www.heidelberg-laureate-forum.org/blog/video/lecture-thursday-september-26-vladimir-voevodsky/

But beyond that, I think that it is worth examining what Martin-Löf was up to in the first place in deriving his intuitionistic type theory. His lectures on the meaning of the logical constants are very rich and I've reviewed them many times:

http://www.ae-info.org/attach/User/Martin-L%C3%B6f_Per/OtherInformation/article.pdf

What HOL-derived systems give us are exactly what it "says on the box" -- attempts to automate and formalize traditional proofs, in which we can then define the whole of traditional mathematics.

The intuitionistic type theory project has been different from the start -- an attempt to provide _foundations_ for mathematics, of which automated proof systems then became a logical outgrowth. HOL exists because we have programmed it. Martin-Löf type theory exists prior to and absent any particular implementations of theorem provers based on it. These foundations were, as originally conceived, perhaps not to be of mathematics as we had traditionally practiced it, but rather mathematics on a constructivist basis, which might mean changing the mathematics themselves as we went along.

With all due hand-waving, MLTT does not give a "partitioned" system where on the one hand we have a logical meta-system which we use to reason about mathematical objects "external" to it, but rather a "unitary" or "monist" system in which everything we construct is at once a type/term/proof/mathematical object. Thus we do not build a system to reason about mathematics; rather, we have a mathematical system that (we hope) itself already contains (or contains the potential for) all of mathematics, have we the wit to express it.
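A tiny sketch of that "monist" picture, in Lean 4 (another MLTT-descended assistant; this example is illustrative, not from the thread): one and the same term is at once a program and a proof, with no separate logical meta-level.

```lean
-- The term below is simultaneously:
--   * a function swapping the components of a pair, and
--   * a proof that "A and B" implies "B and A".
def swap {A B : Type} : A × B → B × A :=
  fun ⟨a, b⟩ => ⟨b, a⟩

-- Read logically: the type A × B → B × A is the proposition
-- "A and B implies B and A", and `swap` is its proof.
```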

That's already the (or at least "an") intuitionist project, as I see it, well prior to the introduction of Homotopy Type Theory.

What HoTT itself now brings to the table is the remarkable coincidence (of course nothing in math is 'just' a coincidence) that, under the proper interpretation, we can also treat intuitionistic types as _spaces_, and so a (or perhaps "the") natural setting for understanding type theory is itself topology, and conversely a (perhaps "the") natural formalization of homotopy theory should be in type-theoretic language.

So by chapter 2 of the HoTT book, we can, using purely pen-and-paper methods, use MLTT to encode a value that gives witness to Eckmann-Hilton. We do not do this by defining objects to "represent" loop spaces, then encoding facts about such objects, then using the rules of our system to chain these facts together. Rather, we do this by recognizing that we _already_ have a representation of loop spaces given by MLTT, in the form of equality types at given points, and that the "facts" about these spaces are provided by the rules MLTT already gives us for working with such types!
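A hedged sketch of "loop spaces are identity types", in Lean 4 (rolling an MLTT-style identity type by hand, since Lean's built-in equality lives in the proof-irrelevant Prop and so flattens the homotopy structure; all names here are illustrative):

```lean
universe u

-- An MLTT-style identity type, valued in Type rather than Prop:
inductive Id {A : Type u} : A → A → Type u
  | refl (a : A) : Id a a

-- The loop space of A at a point a is just the type of paths a ~> a:
def Loop (A : Type u) (a : A) : Type u := Id a a

-- Path composition, by path induction (matching on refl):
def comp {A : Type u} {a : A} : Loop A a → Loop A a → Loop A a
  | Id.refl _, q => q

-- The constant loop is reflexivity:
def constLoop {A : Type u} (a : A) : Loop A a := Id.refl a
```

The "facts" about these loop spaces (unit laws, associativity, and ultimately Eckmann-Hilton) then fall out of the elimination rule for Id, rather than being postulated separately.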

From the above "coincidence" there have now been derived extensions to MLTT (such as the univalence axiom and higher inductive types). There has also been introduced a way to _truncate_ our infinity-groupoid of types and recover not only intuitionistic, but classical mathematics! This latter was quite a revelation for me.

If what you want is a better proof assistant, then Coq, Isabelle, Twelf, Agda, etc. are all perhaps more suited for some purposes and less so for others. There are matters of tooling and taste involved.

The work on HoTT is rather about mathematics (and type theory, and logic, and programming -- after all, they're all the same thing!), and an attempt to formulate it "internally." Ultimately one would hope this will lead to better proof assistants and easier proofs. There are already some proofs (such as the above-mentioned Eckmann-Hilton) that are clearly elegant in this framework. With time, I would hope there will be many others.

On the "tactics" question, note that Agda doesn't really have tactics at

the moment. Rather it encourages setting up everything "just right" to

begin with. Since Agda also is based in MLTT, a great deal of the HoTT

work has been translated to Agda as well. You may find reviewing the

repository along with the book a worthwhile experience.

Cheers,

Gershom

Jan 1, 2014, 9:02:38 AM1/1/14

to ric...@math.northwestern.edu, homotopytypetheory

In addition to what has been said:

1. UF will provide us with a semantic understanding of inductive types. A lot has already been done, see for instance the papers by Voevodsky and by Awodey, Gambino and Sojakova.

3. > Rob Arthan, a stable homotopy theorist and HOL programmer, told me that the stirring intro to Andrews's Type Theory is overselling the case, that HOL is better than ZFC in formalizing math, but it's far from perfect.

Which introduction are you referring to? What were Rob Arthan's arguments?

Jan 1, 2014, 11:33:34 AM1/1/14

to Bas Spitters, ric...@math.northwestern.edu, homotopytypetheory

On Wed, Jan 1, 2014 at 6:02 AM, Bas Spitters <b.a.w.s...@gmail.com> wrote:

> 1. UF will provide us with a semantic understanding of inductive types.
> A lot has already been done, see for instance the papers by Voevodsky and by
> Awodey, Gambino and Sojakova.

I don't see how this is a selling point of HoTT to someone not already familiar with it, since we already had a set-theoretic semantic understanding of inductive types. What exactly do you mean?

Jan 1, 2014, 11:38:08 AM1/1/14

to ric...@math.northwestern.edu, Vladimir Voevodsky, HomotopyT...@googlegroups.com

Hi Bill,

may I ask you please to be careful with statements such as

I'm really excited by Vladimir's leadership in the formal proofs world.

There are many people involved in the univalent foundations project now and some of them are exploring directions which I did not do any work on and which I would not claim a leadership position in. Some have moved well beyond the ideas which I have introduced (e.g. Thierry Coquand and his group in their work on the cubical sets model). While I do consider myself to be the founder of the univalent foundations project and will probably retain one of the leading roles in it for a while, it is incorrect to consider me to be *the* leader of the project (leave alone of the formal proofs world as a whole).

Let me crudely describe an enduring headache in HOL. Our sets are either types or "subsets" of types, and you can make function types alpha->beta for types alpha & beta, but given sets A & B, there is no type A->B, so you have to define your function from A to B on the whole type, and that causes trouble. Is this problem fixed in Coq with dependent types, or in UF somehow?

Yes, definitely it is fixed in UF. In UF every set is a type (but not every type is a set).

John Harrison quotes Bob Solovay as saying that Coq's dependent types are too hard for mortals to understand. Do Coq's dependent types become usable if we're also using UF?

Yes. That's precisely what I have mentioned in the interview - that I could not understand Coq until I have invented the univalent model. With this model in mind the type theory of Coq becomes intuitive (except for some "details" such as a baroque universe management or weird rules for general fix point definitions but those can be simply avoided as it is done in Foundations library).

Is there something about UF/Coq that makes these automated provers more usable, powerful, or describable?

Which automated provers?

Do you think that Coq/UF is particularly readable code, or have you made good advances in readability?

There are various opinions on this. I find Coq used in the style of my Foundations library (with very little automation) to be quite readable especially when one can follow the code line-by-line in something like Proof General or CoqIDE. One of the things I plan to do today is to continue reading the RezkCompletion library written by others and I am looking forward to it as to something which is easier and more fun than reading a usual math paper.

Vladimir.

On Jan 1, 2014, at 12:03 AM, ric...@math.northwestern.edu wrote:

Hi, I'm a homotopy theorist and a HOL Light programmer. But I'm having a lot of trouble understanding the HoTT book. Can we go through this recent popular article? It sounds really great:

Jan 1, 2014, 11:47:19 AM1/1/14

to Gershom Bazerman, Vladimir Voevodsky, HomotopyT...@googlegroups.com

On Jan 1, 2014, at 3:38 AM, Gershom Bazerman wrote:

From the above "coincidence" there have now been derived extensions to MLTT (such as the univalence axiom and higher inductive types). There has also been introduced a way to _truncate_ our infinity groupoid of types and recover not only intuitionistic, but classical mathematics! This latter was quite a revelation for me.

There seems to be a lot of confusion around the issue of classical mathematics in UF. The ability to encode classical mathematics using UF has nothing to do with truncations. It requires (for a proper formulation of the law of excluded middle) the notion of h-levels (which can be defined using basic MLTT), but it does not require truncations.

V.

Jan 1, 2014, 11:49:55 AM1/1/14

to Michael Shulman, richter, homotopytypetheory

I was thinking about the recent bug in Coq's implementation:

--- Vladimir's variant ---

Variable Heq : (False -> False) = True.

Fixpoint contradiction (u : True) : False :=
  contradiction (
    match Heq in (_ = T) return T with
    | eq_refl => fun f : False => match f with end
    end
  ).

Definition c : False := contradiction I.

---

And the difficulties with the set-theoretic encoding of Coq's Prop, e.g.

http://www.lix.polytechnique.fr/~werner/publis/cc.pdf

IIRC this was what Solovay was wondering about, but I may be mistaken.

Jan 1, 2014, 11:52:10 AM1/1/14

to Vladimir Voevodsky, Gershom Bazerman, HomotopyT...@googlegroups.com

On Wed, Jan 1, 2014 at 8:47 AM, Vladimir Voevodsky <vlad...@ias.edu> wrote:

> There seems to be a lot of confusion around the issue of classical
> mathematics in UF. The ability to encode classical mathematics using UF has
> nothing to do with truncations. It requires (for proper formulation of the
> law of excluded middle) to have the notion of h-levels (which can be defined
> using basic MLTT) but it does not require truncations.

However, although hsets and hpropositions (being h-levels 0 and -1) can be *defined* in unaltered MLTT, they do not yet give classical mathematics, since there is no way to construct things such as quotients and hpropositional disjunction. Truncations are a natural way to obtain those things, although there are others.
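For readers following along, the h-level notions under discussion can be sketched as follows in Lean 4 (an illustrative paraphrase with a hand-rolled Type-valued identity type standing in for MLTT's, not the thread's own code):

```lean
universe u

-- MLTT-style identity type (Lean's built-in Eq lives in the
-- proof-irrelevant Prop, so we define our own in Type):
inductive Id {A : Type u} : A → A → Type u
  | refl (a : A) : Id a a

-- h-level -1: an h-proposition has at most one element up to equality
def isProp (A : Type u) : Type u := (x y : A) → Id x y

-- h-level 0: an h-set is a type whose identity types are h-propositions
def isSet (A : Type u) : Type u := (x y : A) → isProp (Id x y)

-- The "properly formulated" law of excluded middle quantifies only
-- over h-propositions:
def LEM : Type (u + 1) :=
  (A : Type u) → isProp A → Sum A (A → Empty)
```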

Jan 1, 2014, 12:02:29 PM1/1/14

to Bas Spitters, Michael Shulman, richter, homotopytypetheory

Since I haven't seen it mentioned yet: the reference to "UF/Coq" in the title of the original question looks a bit strange to me. Coq is a tool that has been used to develop some aspects of UF formally, but it's not the only one. You aren't obligated to buy into Coq if you want to do UF.

Jan 2, 2014, 11:44:41 AM1/2/14

to Vladimir Voevodsky, HomotopyT...@googlegroups.com

Thanks, Vladimir, and thanks to everyone else who responded.

Yes. That's precisely what I have mentioned in the interview - that I could not understand Coq until I have invented the univalent model. With this model in mind the type theory of Coq becomes intuitive [...] Yes, definitely it is fixed in UF. In UF every set is a type (but not every type is a set).

I'm really happy to hear this. So homotopy theory fixes the problem Bob Solovay noted with Coq. So I need to learn how homotopy theory makes Coq intuitive. Do you think the HoTT book is the best source for this, or Steve's paper? http://www.andrew.cmu.edu/user/awodey/preprints/TTH.pdf I haven't had much luck with either paper, but I haven't tried very hard.

(except for some "details" such as a baroque universe management or weird rules for general fix point definitions but those can be simply avoided as it is done in Foundations library).

Excellent, I don't need to know this now, but I'll keep it in mind.

it is incorrect to consider me to be *the* leader of the project (leave alone of the formal proofs world as a whole)

I apologize for my carelessness. I didn't mean to slight your coworkers. What I meant is that it's fantastic that a top-rated mathematician like you has taken such a public interest in formal proofs, being written up in Wired and Time. So of course I now need to learn about your work, and the work of everyone else in the UF project.

> Is there something about UF/Coq that makes these automated
> provers more usable, powerful, or describable?

Which automated provers?

Sorry for being unclear. "Automated provers" is probably the wrong term. I was referring to the HOL Light "automated provers" REWRITE_TAC, SIMP_TAC, MESON_TAC and MP_TAC. I know something about what they do, and from many happy hours spent playing the video game, I have a "feel" for the difference between them. But I don't know exactly. I've been told that other HOLs (e.g. HOL4) are different. Here are some vague thoughts:

MESON_TAC will prove first-order logic results "automatically", but it's not supposed to be good at equational reasoning. So for many FOL tasks (I think working under quantifiers) you have to use REWRITE_TAC, which in a sense just blindly substitutes. REWRITE_TAC usually performs more rewriting than ONCE_REWRITE_TAC, but I've written code where ONCE_REWRITE_TAC worked and REWRITE_TAC didn't. SIMP_TAC is a much more powerful version of REWRITE_TAC, sort of a combination of REWRITE_TAC and MESON_TAC. However, I've been surprised recently that MESON_TAC will prove things for me that SIMP_TAC won't. My question was: is there something about UF/Coq that makes these "automated provers" more usable, powerful, or describable?

Just to clarify, let me give a simple example of what I mean, from HilbertAxiom_read.ml from

http://www.math.northwestern.edu/~richter/RichterHilbertAxiomGeometry.tar.gz

partly to bring up code readability, as that's something else I wanted to discuss.

(In this code, fol = MESON_TAC and simplify = SIMP_TAC.)

let SegmentSubtraction = theorem `;
  ∀ A B C A' B' C'. B ∈ Open (A,C) ∧ B' ∈ Open (A',C') ⇒
    seg A B ≡ seg A' B' ⇒ seg A C ≡ seg A' C'
    ⇒ seg B C ≡ seg B' C'
proof
  intro_TAC ∀ A B C A' B' C', H1, H2, H3;
  ¬(A = B) ∧ ¬(A = C) ∧ Collinear A B C ∧ Segment (seg A' C') ∧ Segment (seg B' C') [Distinct] by fol H1 B1' SEGMENT;
  consider Q such that
  B ∈ Open (A,Q) ∧ seg B Q ≡ seg B' C' [defQ] by fol - C1OppositeRay;
  seg A Q ≡ seg A' C' [AQ_A'C'] by fol H1 H2 - C3;
  ¬(A = Q) ∧ Collinear A B Q ∧ A ∉ Open (C,B) ∧ A ∉ Open (Q,B) []
  proof simplify defQ B1' ∉; fol defQ B1' H1 B3'; qed;
  C ∈ ray A B ━ {A} ∧ Q ∈ ray A B ━ {A} [] by fol Distinct - IN_Ray IN_DIFF IN_SING;
  C = Q [] by fol Distinct - AQ_A'C' H3 C1;
  fol defQ -;
qed;
`;;

The point is that we could write down proofs of the various steps, but we don't have to: MESON_TAC and SIMP_TAC will "automatically" create proofs using the lists of results they're given. I assume there's a lot of this kind of "automation" going on in Coq.
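For comparison (an editorial aside, not a claim from the thread): assistants in the MLTT family offer the same flavor of automation. In Lean 4, for instance, simp rewrites with a supplied lemma list much as REWRITE_TAC/SIMP_TAC do, while decision procedures such as omega play a role closer in spirit to MESON_TAC's automatic search:

```lean
-- `omega` is a decision procedure for linear arithmetic over Nat/Int;
-- it closes this goal "automatically", with no hand-written rewrites.
theorem add_shuffle (a b c : Nat) : a + (b + c) = (c + a) + b := by
  omega
```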

I find Coq used in the style of my Foundations library (with very little automation) to be quite readable especially when one can follow the code line-by-line in something like Proof General or CoqIDE.

That's really great.

One of the things I plan to do today is to continue reading the RezkCompletion library written by others and I am looking forward to it as to something which is easier and more fun than reading a usual math paper.

I'm happy to hear that Charles's work is getting coded up. Let me quickly respond to everyone else, and thanks again to them for graciously responding to me.

Gershom, I'll look at Martin-Löf's paper and read the rest of your interesting post.

http://www.ae-info.org/attach/User/Martin-L%C3%B6f_Per/OtherInformation/article.pdf

Bas, I was referring to the preface of Peter Andrews's book To Truth Through Proof:

PA> Both type theory (otherwise known as HOL) and axiomatic set theory
PA> [ZC/FOL] can be used [to formalize math], and it is really an
PA> accident of intellectual history that at present most logicians
PA> and mathematicians are more familiar with axiomatic set theory
PA> than with type theory. It is hoped that this book will help
PA> remedy this situation. [... experts recognize that HOL is better
PA> than ZC/FOL] for formalizing what mathematicians actually do.

Rob Arthan explained to me that HOL is better than ZC/FOL but it isn't good enough. The only argument Rob actually made is that HOL Light and most HOLs don't have Coq's type quantification, which you need to define universal properties like fibration. I can think of other problems. Sets are represented in HOL by terms of type alpha->bool: we're supposed to say that a subset of alpha is the same thing as a function alpha->bool. But they're not the same thing, and I keep wondering when this is going to cause a problem.

KW> You aren't obligated to buy into Coq if you want to do UF.

Kevin, that's pretty interesting. Could one build UF on top of HOL Light? I would think not, but it would be cool if you could. Perhaps you mean that UF is math rather than code?

--

Best,

Bill

Yes. That's precisely what I have mentioned in the interview - that

I could not understand Coq until I have invented the univalent

model. With this model in mind the type theory of Coq becomes

is a type (but not every type is a set).

I'm really happy to hear this. So homotopy theory fixes the problem Bob Solovay noted with Coq. So I need to learn how homotopy theory makes Coq intuitive. Do you think the HoTT book the best source for this, or Steve's paper? http://www.andrew.cmu.edu/user/awodey/preprints/TTH.pdf
I haven't had much luck with either paper, but I haven't tried very hard.

(except for some "details" such as a baroque universe management or

weird rules for general fix point definitions but those can be

simply avoided as it is done in Foundations library).

it is incorrect to consider me to be *the* leader of the project

(leave alone of the formal proofs world as whole)

> Is there something about UF/Coq that makes these automated

Which automated provers?

Sorry for being unclear. Automated provers is probably the wrong term. I was referring to the HOL Light "automated provers" REWRITE_TAC, SIMP_TAC, MESON_TAC and MP_TAC. I know something about what they do, and from many happy hours spent playing the video game, I I have a "feel" for the difference between them. But I don't know exactly. I've been told that other HOLs (e.g. HOL4) are different. Here are some vague thoughts:

MESON_TAC will prove first order logic results "automatically", but it's not supposed to be good at equational reasoning. So for many FOL tasks (I think working under quantifiers) you have to use REWRITE_TAC, which in a sense just blindly substitutes. REWRITE_TAC usually performs more rewriting than ONCE_REWRITE_TAC, but I've written code where ONCE_REWRITE_TAC worked and REWRITE_TAC didn't. SIMP_TAC is a much more powerful version of REWRITE_TAC, sort of a combination of REWRITE_TAC and MESON_TAC. However I've been surprised recently that MESON_TAC will prove things for me that SIMP_TAC won't. My question was: Is there something about UF/Coq that makes these "automated provers" more usable, powerful, or describable?

Just to clarify, let me give a simple example of what I mean, from HilbertAxiom_read.ml from

http://www.math.northwestern.edu/~richter/RichterHilbertAxiomGeometry.tar.gz

partly to bring up code readability, as that's something else I wanted to discuss.

fol = MESON_TAC, simplify = SIMP_TAC

let SegmentSubtraction = theorem `;
  ∀ A B C A' B' C'. B ∈ Open (A,C) ∧ B' ∈ Open (A',C') ⇒
    seg A B ≡ seg A' B' ⇒ seg A C ≡ seg A' C'
    ⇒ seg B C ≡ seg B' C'

  proof
    intro_TAC ∀ A B C A' B' C', H1, H2, H3;
    ¬(A = B) ∧ ¬(A = C) ∧ Collinear A B C ∧ Segment (seg A' C') ∧ Segment (seg B' C') [Distinct] by fol H1 B1' SEGMENT;
    consider Q such that
      B ∈ Open (A,Q) ∧ seg B Q ≡ seg B' C' [defQ] by fol - C1OppositeRay;
    seg A Q ≡ seg A' C' [AQ_A'C'] by fol H1 H2 - C3;
    ¬(A = Q) ∧ Collinear A B Q ∧ A ∉ Open (C,B) ∧ A ∉ Open (Q,B) []
    proof simplify defQ B1' ∉; fol defQ B1' H1 B3'; qed;
    C ∈ ray A B ━ {A} ∧ Q ∈ ray A B ━ {A} [] by fol Distinct - IN_Ray IN_DIFF IN_SING;
    C = Q [] by fol Distinct - AQ_A'C' H3 C1;
    fol defQ -;
  qed;
`;;

The point is that I could write down proofs of the various steps, but I don't have to: MESON_TAC and SIMP_TAC will "automatically" create proofs from the lists of results they're given. I assume there's a lot of this "automation" going on in Coq.

I find Coq used in the style of my Foundations library (with very

little automation) to be quite readable especially when one can

follow the code line-by-line in something like Proof General or

CoqIDE.

One of the things I plan to do today is to continue reading the

RezkCompletion library written by others and I am looking forward

to it as to something which is easier and more fun than reading a

usual math paper.

Gershom, I'll look at Martin-Löf's paper and read the rest of your interesting post

http://www.ae-info.org/attach/User/Martin-L%C3%B6f_Per/OtherInformation/article.pdf

Bas, I was referring to the preface of Peter Andrews's book To Truth Through Proof:

PA> Both type theory (otherwise known as HOL) and axiomatic set theory

PA> [ZC/FOL] can be used [to formalize math], and it is really an

PA> accident of intellectual history that at present mostly logicians

PA> and mathematicians are more familiar with axiomatic set theory

PA> than with type theory. It is hoped that this book will help

PA> remedy this situation. [... experts recognize that HOL is better

PA> than ZC/FOL] for formalizing what mathematicians actually do.

Rob Arthan explained to me that HOL was better than ZC/FOL but it wasn't good enough. The only argument Rob actually made is that HOL Light and most HOLs don't have Coq's type quantification which you need to define universal properties like fibration. I can think of other problems. Sets are represented in HOL by terms of type

alpha->bool

we're supposed to say that a subset of alpha is the same thing as a function alpha->bool. But they're not the same thing, and I keep wondering when this is going to cause a problem.

KW> You aren't obligated to buy into Coq if you want to do UF.

Kevin, that's pretty interesting. Could one build UF on top of HOL Light? I would think not, but it would be cool if you could. Perhaps you mean that UF is math rather than code?

--

Best,

Bill

Jan 2, 2014, 12:10:49 PM1/2/14

to Bill Richter, Vladimir Voevodsky, HomotopyT...@googlegroups.com

On Jan 2, 2014, at 11:44 AM, Bill Richter wrote:

> I'm really happy to hear this. So homotopy theory fixes the problem Bob Solovay noted with Coq. So I need to learn how homotopy theory makes Coq intuitive. Do you think the HoTT book the best source for this, or Steve's paper? http://www.andrew.cmu.edu/user/awodey/preprints/TTH.pdf

> I haven't had much luck with either paper, but I haven't tried very hard.

Vladimir.

Jan 2, 2014, 12:24:27 PM1/2/14

to Bill Richter, Vladimir Voevodsky, HomotopyT...@googlegroups.com

On Thu, Jan 2, 2014 at 8:44 AM, Bill Richter

<ric...@math.northwestern.edu> wrote:

> So homotopy theory fixes the problem Bob Solovay noted with Coq.

I would hesitate to proclaim this to the world before receiving
confirmation from Solovay himself. (-: It is true that homotopy type

theory has made dependent type theory easier to understand for many of

us mathematicians, although many of us were already familiar with

homotopy theory -- conversely, it may be the case that it has made

homotopy theory easier to understand for some dependent type

theorists. (-:

> Do you think the HoTT book the best source for this, or Steve's paper? http://www.andrew.cmu.edu/user/awodey/preprints/TTH.pdf

We are still learning how to explain HoTT to newcomers, and it's

difficult because everyone comes in with a different background and

assumptions. Your feedback may help us improve the exposition for

other people coming from the HOL world.

Note, however, that neither are aimed at people interested in computer

formalization. As Vladimir says, if that's your main goal, then you

might be well-served by exploring the existing HoTT libraries

(http://homotopytypetheory.org/coq/) and trying to use it for a

project of your own.

> I find Coq used in the style of my Foundations library (with very

> little automation) to be quite readable especially when one can

> follow the code line-by-line in something like Proof General or

> CoqIDE.

>

> That's really great.

Some of us working on the project find other styles of using Coq, such as that

used in the HoTT/HoTT library (which uses more automation), to be more

readable.

It's also not entirely clear to me how much the *readability* of Coq

code is influenced by HoTT specifically, versus other factors (such as

use of automation, typeclasses, etc.). One thing that we can say is

that various facts (such as homotopy-theoretic calculations) admit

different "synthetic" proofs by using HoTT which are much easier to

understand when formalized than a formalization of the usual

"classical" proof of the analogous fact would be. This is not really

comparing apples to apples, but it is still an advantage of HoTT for

the purposes of formalization.

> One of the things I plan to do today is to continue reading the

> RezkCompletion library written by others and I am looking forward

> to it as to something which is easier and more fun than reading a

> usual math paper.

>

> I'm happy to hear that Charles's work is getting coded up.

Actually it isn't Charles's own work being coded up; the construction is just named the "Rezk completion" after him because it's inspired by

his notion of complete Segal space from homotopy theory.

> Sets are represented in HOL by terms of type

> alpha->bool

> we're supposed to say that a subset of alpha is the same thing as a function alpha->bool. But they're not the same thing, and I keep wondering when this is going to cause a problem.

"structural" sets than it does with HOL or HoTT.

Structural sets are the ones used in practice by most mathematicans;

they are just a collection of featureless "points" on which we can

impose structure (such as a group, a topological space, etc.). It

doesn't make sense to ask whether one abstractly given structural set

is a subset of another one.

Material sets are specified as subcollections of some pre-existing

collection of objects with definite identity (such as an ambient

structural set). The "sets" of ZFC are material, and one has to mimic

structural sets by forgetting about the internal identities of their

elements. However, there are other axiomatic set theories (such as

ETCS) whose basic "sets" are structural, and in which one can find

material sets as subsets of these.

Type theory in this regard is more like ETCS. The "types which are

sets" that Vladimir mentioned are structural sets, not material ones;

while subsets of those can be treated as material sets. In any case,

a subset of a structural set X can always be identified with its

characteristic function X -> prop (where prop may or may not be bool,

depending on whether you assume LEM), and this never causes any

problems.
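Mike's identification of subsets with characteristic functions can be sketched in Coq. This is a hedged illustration with ad hoc names (`subset`, `member`, `evens`), not definitions from any library:

```coq
(* A "material" subset of a structural type X, identified with its
   characteristic function into Prop (Prop need not be bool unless
   we assume excluded middle). *)
Definition subset (X : Type) : Type := X -> Prop.

(* Membership is just function application. *)
Definition member {X : Type} (x : X) (S : subset X) : Prop := S x.

(* Example: the even naturals as a subset of nat. *)
Definition evens : subset nat := fun n => exists k, n = 2 * k.

Goal member 4 evens.
Proof. unfold member, evens. exists 2. reflexivity. Qed.
```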

> KW> You aren't obligated to buy into Coq if you want to do UF.

>

> Kevin, that's pretty interesting. Could one build UF on top of HOL Light? I would think not, but it would be cool if you could. Perhaps you mean that UF is math rather than code?

UF can be formalized in any proof assistant which supports intensional dependent

type theory. This does not, I believe, include HOL, but in addition

to Coq, a lot of formalization of UF/HoTT has also been done in Agda.

Mike

Jan 4, 2014, 12:08:08 AM1/4/14

to Vladimir Voevodsky, vlad...@ias.edu, HomotopyT...@googlegroups.com

start a project of converting your HilbertAxiom stuff into Coq

using univalent framework. I can help you with it by email.

Thanks, Vladimir! I'll do it. I'm grateful to you for your help. I started installing Coq. I also downloaded Foundations-master.zip from https://github.com/vladimirias/Foundations.

But I think my real problem is the homotopy theory. As a homotopy theorist, I'm practically from the 1960s: Toda's book, the classical Adams spectral sequence and Poincare complexes, although I can do model categories. I'm guessing from Michael's web page http://home.sandiego.edu/~shulman/papers/index.html or Steve Awodey's paper that HoTT is using a very different kind of homotopy theory. I don't know anything about your homotopy theory for schemes, étale cohomology, or your Fields medal solution of the Milnor conjecture. Can you recommend some homotopy theory that I need to learn in order to see the connection with Martin-Lof type theory? Can I understand your homotopy theory/Martin-Lof type theory connection with only my 1960s skills?

This is way off-topic, but since Coq uses ocaml, maybe somebody knows the answer. I have a HOL Light file readable.ml that's a "parser" that turns strings into tactics proofs, and it's based on some ocaml code

let exec = ignore o Toploop.execute_phrase false Format.std_formatter

o !Toploop.parse_toplevel_phrase o Lexing.from_string;;

that I believe folks on the ocaml newsgroup said I shouldn't be using. They definitely said don't use Obj.magic. Is there some way to do my "parsing" with camlp5? Here's a real dumb question: The basic Scheme exercise is to write a Scheme interpreter inside Scheme. That's easy because Scheme has a quote feature. Is it possible, or easy, to perform this exercise in ocaml? Write an ocaml interpreter inside ocaml? Do you need camlp5 for that?

--

Best,

Bill

Jan 4, 2014, 5:45:28 AM1/4/14

to Bill Richter, Vladimir Voevodsky, homotopytypetheory

A possibility would be to help formalize the book, or make the exercises:

Jan 4, 2014, 6:33:07 AM1/4/14

to Bill Richter, Vladimir Voevodsky, HomotopyT...@googlegroups.com

> Thanks, Vladimir, and thanks to everyone else who responded.

> Yes. That's precisely what I have mentioned in the interview - that

> I could not understand Coq until I have invented the univalent

> model. With this model in mind the type theory of Coq becomes

> intuitive [...] Yes, definitely it is fixed in UF. In UF every set

> is a type (but not every type is a set).

There is no reason not to believe Vladimir that HIS way of
understanding type theory was via homotopy theory. But that seems to

be a very individual feature :-)

Type theory is by now almost 45 years old and I think most people have

understood it via its naive set theoretic semantics which you find

e.g. in Martin-Loef's Bibliopolis book or in the books by Beeson and

Troelstra & van Dalen just to refer to the most accessible sources.

Since Bob Seely (and others I guess) in the 1980s observed that the

semantics of extensional type theory is given by locally cartesian

closed categories it was clear that every elementary topos with a

natural numbers object gives rise to a model of type theory.

Interpreting Prop as \Omega of the topos gives also rise to a model of

the calculus of constructions, albeit a proof irrelevant one.

Around 1995 it was observed that there are proof relevant models of

CoC where Prop is interpreted as the collection of partial equivalence

relations on a partial combinatory algebra (e.g. natural numbers with

Kleene application). I learnt this from a talk by Martin Hyland in 1985

who attributed this to Eugenio Moggi, at least for the case of polymorphic

lambda calculus. But the extension to CoC is sort of immediate and was

observed by many people including myself (in my Thesis and the subsequent

book).

In my Habil Thesis I gave a model of intensional type theory using the

glueing (or "sconing") of realizability models. But this still validated

uniqueness of identity proofs (actually, my eliminator K). To overcome

this Martin Hofmann and I introduced the groupoid model which was the first

model of univalence. We certainly didn't have the univalence axiom but

we observed that propositional equality coincided with being isomorphic

(actually, this was the motivation for looking at groupoids at all).

The simplicial sets model is doubtlessly much more nontrivial.

But it contains the naive set-theoretic model as a submodel and thus

is fairly classical since you don't get outside it as long as you

don't consider universes!

I think that simplicial sets and alike are good for modeling "equality

as isomorphism". This is compatible with type theory but not with set

theory or topos theory (with its built in proof irrelevance!). However,

the simplicial set model doesn't account for the constructivity features

of type theory which are much better captured by the above mentioned

realizability models.

This summer I speculated about building the groupoid model in type theory.

In discussion with Steve Awodey it got clear to me that one needs

function extensionality for this purpose. So one can construct the

groupoid model inside realizability models of extensional type theory

but not in intensional type theory itself. Not very surprising after all

since we know that Univalence entails Function Extensionality.
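For readers who haven't seen it spelled out, the univalence axiom under discussion can be stated in Coq roughly as follows. This is a hedged sketch with ad hoc names (`Equiv`, `id_to_equiv`, `ua`); the Foundations library instead uses `weq` with a proof-irrelevant `isweq`, and states that `id_to_equiv` is itself an equivalence rather than the weaker quasi-inverse form written here:

```coq
(* An equivalence presented naively as a function with a two-sided
   inverse (ad hoc; not the library's definition). *)
Record Equiv (A B : Type) : Type := Build_Equiv {
  equiv_fun  : A -> B;
  equiv_inv  : B -> A;
  equiv_sect : forall a : A, equiv_inv (equiv_fun a) = a;
  equiv_retr : forall b : B, equiv_fun (equiv_inv b) = b }.

(* Every equality of types yields an equivalence, by path induction. *)
Definition id_to_equiv (A B : Type) (p : A = B) : Equiv A B :=
  match p in (_ = C) return Equiv A C with
  | eq_refl => Build_Equiv A A (fun a => a) (fun a => a)
                 (fun _ => eq_refl) (fun _ => eq_refl)
  end.

(* Univalence, crudely: equivalences come from equalities, and
   id_to_equiv inverts that assignment. *)
Axiom ua : forall A B : Type, Equiv A B -> A = B.
Axiom ua_id : forall (A B : Type) (e : Equiv A B),
    id_to_equiv A B (ua A B e) = e.
```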

If you want to read more about this latter aspect you may read

http://www.mathematik.tu-darmstadt.de/~streicher/Barcelona_abstract.pdf.

In any case I don't think that homotopy models are the only way to

understand type theory though they seem to be the canonical way for

realizing "equality of types as isomorphism".

Thomas

Jan 4, 2014, 8:11:19 AM1/4/14

to Thomas Streicher, Bill Richter, Vladimir Voevodsky, HomotopyT...@googlegroups.com

On 04/01/2014 11:33, "Thomas Streicher" wrote:

>There is no reason not to believe Vladimir that HIS way of

>understanding type theory was via homotopy theory. But that seems to

>be a very individual feature :-)

Indeed, I agree. Personally, I have a much better understanding of Type

Theory than Homotopy Theory. Actually for me it rather works the other way

around: the encoding of Homotopy Theory in Type Theory gives me some idea

what Homotopy Theory is about.

>Type theory is by now almost 45 years old and I think most people have

>understood it via its naive set theoretic semantics which you find

What is the "naïve set theoretic semantics"? Set theory seems to be a dead

end, the idea that all mathematical concepts can be modelled by properties

of some sort of trees seems rather alien and arbitrary to me. As a

computer scientist with a particular fondness for typed functional

programming a "naïve type theoretic semantics" seems much more natural.

And indeed, its primitive concepts are functions, records and enumerations

which have a very direct understanding in how you use them when writing

programs.

The higher-dimensional features of Homotopy Type Theory arise naturally in

such an understanding. The question is how do we model the type of

equalities. While as a first approximation we may think that there is at

most one equality proof this view doesn't hold after a closer inspection.

In particular we don't want to model equality of types as the equality of

their names but by an extensional equivalence with some natural

properties. Hence, I think that what is called homotopy type theory can

arise just from a naïve type theoretic semantics and the fact that this

corresponds to notions in Homotopy Theory is an interesting coincidence.

As a consequence I think that the name "Homotopy Type Theory" is a bit of

a misnomer because it emphasizes one particular approach to Type Theory. I

think Higher Dimensional Type Theory would be better.

>I think that simplicial sets and alike are good for modeling "equality

>as isomorphism". This is compatible with type theory but not with set

>theory or topos theory (with its built in proof irrelevance!). However,

>the simplicial set model doesn't account for the constructivity features

>of type theory which are much better captured by the above mentioned

>realizability models.

It seems that Thierry is claiming that cubical sets are better suited for

this. I still have to digest the paper...

>

>This summer I speculated about building the groupoid model in type theory.

>In discussion with Steve Awodey it got clear to me that one needs

>function extensionality for this purpose. So one can construct the

>groupoid model inside realizability models of extensional type theory

>but not in intensional type theory itself. Not very surprising after all

>since we know that Univalence entails Function Extensionality.

>If you want to read more about this latter aspect you may read

>http://www.mathematik.tu-darmstadt.de/~streicher/Barcelona_abstract.pdf.

>

>In any case I don't think that homotopy models are the only way to

>understand type theory though they seem to be the canonical way for

>realizing "equality of types as isomorphism".

So far…

Thorsten

>

>Thomas

This message has been checked for viruses but the contents of an attachment

may still contain software viruses which could damage your computer system, you are advised to perform your own checks. Email communications with the University of Nottingham may be monitored as permitted by UK legislation.

Jan 4, 2014, 10:26:22 AM1/4/14

to Bill Richter, HomotopyT...@googlegroups.com

On Jan 4, 2014, at 12:08 AM, Bill Richter <ric...@math.northwestern.edu> wrote:

> start a project of converting your HilbertAxiom stuff into Coq

> using univalent framework. I can help you with it by email.

>

> Thanks, Vladimir! I'll do it. I'm grateful to you for your help. I starting installing Coq. I also downloaded Foundations-master.zip from https://github.com/vladimirias/Foundations.

>

> But I think my real problem is the homotopy theory. As a homotopy theorist, I'm practically from the 1960s: Toda's book, the classical Adams spectral sequence and Poincare complexes, although I can do model categories. I'm guessing from Michael's web page http://home.sandiego.edu/~shulman/papers/index.html or Steve Awodey's paper that HoTT is using a very different kind of homotopy theory. I don't know anything about your homotopy theory for schemes, étale cohomology, or your Fields medal solution of the Milnor conjecture. Can you recommend some homotopy theory that I need to learn in order to see the connection with Martin-Lof type theory? Can I understand your homotopy theory/Martin-Lof type theory connection with only my 1960s skills?

All you need is the usual homotopy category of topological spaces or, to be more precise and technical, of Kan simplicial sets. It is also good to know what an LCCC (Locally Cartesian Closed Category) is.

V.

Jan 4, 2014, 3:39:20 PM1/4/14

to Thomas Streicher, Bill Richter, Vladimir Voevodsky, HomotopyT...@googlegroups.com

On 04/01/14 11:33, Thomas Streicher wrote:

> There is no reason not to believe Vladimir that HIS way of

> understanding type theory was via homotopy theory. But that seems to

> be a very individual feature :-)

Here is my own alternative understanding, which is not incompatible

with Thorsten's or Vladimir's, but doesn't come from the same

"intuitions" (intuition = all we know, digested in our own particular

way).

Understanding MLTT via set theory, while helpful in some respects, is

misleading, particularly when it comes to understanding when two types

are equal. We can't say that they are equal if they have the same

elements: there is no mode of expression in type theory available to

formulate this.

So, it is not that "having the same elements" is the wrong notion of

equality for types.

To say that two types are equal if they have the same elements is "not

even wrong": there is no way to say that in type theory.

So one would say (following algebra and category theory) that two

types are "equal" means, by definition, that they are isomorphic. But

then we find a problem: the notion of isomorphism in type theory is

proof relevant (i.e. not an hproposition).

Vladimir says "no problem". Work with equivalence, which is logically

equivalent to isomorphism, but proof-irrelevant. (Great insight.)

We then happily take equivalence to be our notion of type equality, by

definition. (Equivalence = proof-irrelevant MLTT formulation of

isomorphism, without any need to extend MLTT with any axioms to

formulate that.)

What is bold about the univalence axiom is not that it says (roughly)

that isomorphic types are equal. This would indeed be rather bold (and

false) in the set-theoretical understanding of MLTT.

What is bold about the univalence axiom is that it posits that the

type of equivalences satisfies Martin-Loef's J-induction principle.

It is at this stage that one can say that both the set-theoretical and

the computational intuitions of type theory fail to foresee the

univalence axiom.

What we learn from the consistency of the univalence axiom is that

Martin-Loef's inductive definition of equality, when applied to the

universe, fails to characterize type equality as understood in the

set-theoretical interpretation of types (i.e. having the same

elements). But this of course is not surprising.

What is surprising is that equivalence (in some models, such as Kan

simplicial sets and Kan cubical sets) satisfies the J induction

principle.

That is, what Martin-Loef intended as a characterization of equality actually

captures equivalence in some (mathematically interesting!) models.

From a public relations perspective, the slogan "univalence =

isomorphic types are equal" is rather misleading.

What is going on is that what MLTT defines as equality (identity

types) can be consistently modelled as equivalence.

What is more interesting, is that this actually is computationally

consistent, after the cubical-set model.

In summary, my view is that univalence gives an induction principle for

equivalence, rather than postulating that equality means isomorphism.
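Martin's reading of univalence as an induction principle can be written schematically in Coq. This is a sketch under the assumption that a type `Eqv` of equivalences and the identity equivalence `id_eqv` are given; all names are ad hoc, not the Foundations library's:

```coq
(* Martin-Loef's J rule for the identity type: to prove P of every
   equality proof, it is enough to prove it for reflexivity. *)
Definition J (A : Type) (a : A)
    (P : forall b : A, a = b -> Type)
    (d : P a eq_refl) :
    forall (b : A) (p : a = b), P b p :=
  fun b p =>
    match p as p0 in (_ = b0) return P b0 p0 with
    | eq_refl => d
    end.

(* The "equivalence induction" reading of univalence: to prove P of
   every equivalence out of A, it is enough to prove it for the
   identity equivalence.  Eqv and id_eqv are assumed given. *)
Axiom Eqv : Type -> Type -> Type.
Axiom id_eqv : forall A : Type, Eqv A A.
Axiom eqv_induction :
  forall (A : Type) (P : forall B : Type, Eqv A B -> Type),
    P A (id_eqv A) ->
    forall (B : Type) (e : Eqv A B), P B e.
```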

Martin

Jan 4, 2014, 4:38:57 PM1/4/14

to Martin Escardo, Thomas Streicher, Bill Richter, Vladimir Voevodsky, HomotopyT...@googlegroups.com

I agree with everything you say, Martin. However, a minor clarification:

On 04/01/2014 20:39, "Martin Escardo" <m.es...@cs.bham.ac.uk> wrote:

>Vladimir says "no problem". Work with equivalence, which is logically

>equivalent to isomorphism, but proof-irrelevant. (Great insight.)

This is potentially misleading: equivalence itself is not proof-irrelevant

but the property of a function being an equivalence is.

T.
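Thorsten's distinction shows up directly in Foundations-style definitions, sketched here with ad hoc spellings (the library's names are `hfiber`, `iscontr`, `isweq`, `weq`):

```coq
(* The homotopy fiber of f over y: points of A that f maps to y. *)
Definition hfiber {A B : Type} (f : A -> B) (y : B) : Type :=
  { x : A & f x = y }.

(* A type is contractible when it has a point all others equal. *)
Definition iscontr (T : Type) : Type :=
  { c : T & forall t : T, t = c }.

(* "f is an equivalence": every fiber is contractible.  In UF this
   type is a proposition - any two proofs of it are equal. *)
Definition isweq {A B : Type} (f : A -> B) : Type :=
  forall y : B, iscontr (hfiber f y).

(* The type of equivalences between A and B is NOT a proposition:
   e.g. bool has two distinct self-equivalences. *)
Definition weq (A B : Type) : Type := { f : A -> B & isweq f }.
```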

Jan 4, 2014, 4:50:04 PM1/4/14

to Altenkirch Thorsten, Thomas Streicher, Bill Richter, Vladimir Voevodsky, HomotopyT...@googlegroups.com

On 04/01/14 21:38, Altenkirch Thorsten wrote:

> I agree with everything you say, Martin. However, a minor clarification:

>

> On 04/01/2014 20:39, "Martin Escardo" <m.es...@cs.bham.ac.uk> wrote:

>

>> Vladimir says "no problem". Work with equivalence, which is logically

>> equivalent to isomorphism, but proof-irrelevant. (Great insight.)

>

> This is potentially misleading, equivalence itself is not proof-irrelevant

> but the property of a function being an equivalence is.

Glad to see we agree.

Best,

Martin

Jan 4, 2014, 4:53:27 PM1/4/14

to Vladimir Voevodsky, HomotopyT...@googlegroups.com

Yes, one definitely can [understand the homotopy theory/Martin-Lof

type theory connection with only 1960s homotopy theory skills].

Thanks, Vladimir. I'm happy with model categories (essential for fiberwise homotopy theory and even the unstable homotopy theory in Whitehead's book), LCCCs (or at least, closed monoidal categories) and simplicial sets (I like to say that there are two good model cats for topological spaces, the Strom model cat & simplicial sets, but the Quillen model cat for spaces is only good for stating results). BTW I just made and installed Coq at NWU (in /tmp, where I can't keep it), but I'm getting errors installing it on my home machine (where I bet I don't have enough libraries installed). I'll ask for help on Coq Club.

To everyone else: this is a fascinating discussion, but it's not giving me what I want. I want to explain to my HOL teammates (esp. John Harrison) that

1) 1960s homotopy theory explains Coq dependent types and that

2) therefore Coq/UF is a far superior proof assistant to HOL Light.

And I need a simple explanation of 1 & 2, because my HOL teammates know even less homotopy theory than I do. I know Vladimir says both 1 & 2 are true, but he's the genius, and that doesn't mean it's obvious. If I had to make a wild guess right now, I'd guess that 1 is not true, and instead, the way-past-1960s homotopy theory folks are doing nowadays (e.g. the n-categories of Michael and my coauthor John Baez) just happens to be related to complicated type theories. But I think that fields make progress when the "normal scientists" (like us) work their tails off to make the visions of the "paradigm shifters" (like Vladimir) come true (to use Kuhn's terminology). Maybe this forum isn't the right place to discuss this. If anyone is interested in working with me on why 1 & 2 are true, maybe we should discuss this off-line. I like what Michael wrote

MS> We are still learning how to explain HoTT to newcomers, and it's

MS> difficult because everyone comes in with a different background

MS> and assumptions. Your feedback may help us improve the exposition

MS> for other people coming from the HOL world.

As well, Vladimir graciously offered to help me port my HOL code http://www.math.northwestern.edu/~richter/RichterHilbertAxiomGeometry.tar.gz

to Coq/UF, but if someone else wants to teach me instead, that would be fine.

--

Best,

Bill

Jan 4, 2014, 7:01:32 PM1/4/14

to Altenkirch Thorsten, Martin Escardo, Thomas Streicher, Bill Richter, HomotopyT...@googlegroups.com

On Jan 4, 2014, at 4:38 PM, Altenkirch Thorsten <psz...@exmail.nottingham.ac.uk> wrote:

> I agree with everything you say, Martin. However, a minor clarification:

>

> On 04/01/2014 20:39, "Martin Escardo" <m.es...@cs.bham.ac.uk> wrote:

>

>> Vladimir says "no problem". Work with equivalence, which is logically

>> equivalent to isomorphism, but proof-irrelevant. (Great insight.)

>

> This is potentially misleading, equivalence itself is not proof-irrelevant

> but the property of a function being an equivalence is.

>

> T.

V.

Jan 4, 2014, 7:36:13 PM1/4/14

to Vladimir Voevodsky, Altenkirch Thorsten, Thomas Streicher, Bill Richter, HomotopyT...@googlegroups.com

Let me develop Thorsten's remark for the record.

Let's say that f:X->Y is an isomorphism if we are in possession of

g:Y->X which is both a pointwise left and right inverse of f [homotopy

theorists will favour a different terminology for this notion]. We say

that f:X->Y is an equivalence if its fibers are contractible (see

other possible equivalent definitions in the HoTT book).

When I say that one works with equivalences rather than isomorphisms I mean

that one works with maps f:X->Y which are equivalences. The point is

that "f is an equivalence" is always an hproposition but "f is an

isomorphism" is not necessarily an hproposition, although both are

logically equivalent (each one implies the other). This insight is due

to Vladimir, and makes sense in MLTT before we consider any model of

MLTT.
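[The contrast Martin draws can be written down directly. The following is an illustrative sketch in Coq notation; the names qinv, iscontr, hfiber and isweq are chosen for exposition and need not match any particular library.]

```coq
(* Illustrative sketch only; names are hypothetical. *)

(* "f is an isomorphism": some g inverts f pointwise on both sides. *)
Definition qinv {X Y : Type} (f : X -> Y) : Type :=
  { g : Y -> X &
    ((forall y, f (g y) = y) * (forall x, g (f x) = x))%type }.

(* A type is contractible when it has a point equal to every point. *)
Definition iscontr (A : Type) : Type :=
  { a : A & forall b : A, b = a }.

(* The homotopy fiber of f over y. *)
Definition hfiber {X Y : Type} (f : X -> Y) (y : Y) : Type :=
  { x : X & f x = y }.

(* "f is an equivalence": every fiber is contractible.  This is the
   version that is always an hproposition; qinv in general is not. *)
Definition isweq {X Y : Type} (f : X -> Y) : Type :=
  forall y : Y, iscontr (hfiber f y).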

Martin

Jan 4, 2014, 7:52:58 PM1/4/14

to Martin Escardo, Altenkirch Thorsten, Thomas Streicher, Bill Richter, HomotopyT...@googlegroups.com

If we want to be precise historically, then this insight is not due to me. As was pointed out to me recently by Max Kontsevich, the fact that in order to obtain the correct notion of a homotopy equivalence one needs to consider two homotopies + 1 secondary homotopy was known to the classics of homotopy theory, starting with, probably, Boardman and Vogt. He also pointed out that if one applies this definition of weak equivalence to the map to the point then one recovers precisely "iscontr".

The precise statement which Max referred to is as follows - if one takes the free topological category generated by two objects, two morphisms in the opposite directions, two homotopies and one secondary homotopy as in the "correct" definition of a weak equivalence then the Hom-spaces of this topological category are contractible. If one does not consider a secondary homotopy or if one considers two secondary homotopies then one gets a topological category which is not contractible.
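[Spelled out in Coq-style notation, the "two homotopies + 1 secondary homotopy" definition — what the HoTT book calls a half adjoint equivalence — can be sketched as follows; the name ishae and the layout are illustrative, not taken from any particular library.]

```coq
(* Illustrative sketch: two morphisms (f and g), two homotopies
   (eta and eps), and one secondary homotopy relating them. *)
Definition ishae {X Y : Type} (f : X -> Y) : Type :=
  { g : Y -> X &
  { eta : forall x, g (f x) = x &
  { eps : forall y, f (g y) = y &
    (* the one secondary homotopy *)
    forall x, eps (f x) = f_equal f (eta x) }}}.
```

[Dropping the secondary homotopy gives qinv, which is not in general an hproposition; adding it back makes the type of witnesses contractible, matching Max's remark about the free topological category.]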

I did, however, have to re-invent it for UF, and I was not sure how to prove isapropiscontr constructively until I had developed quite a bit of the Foundations library (including gradth).

V.

PS By the way I am not sure what is the definition of a "free topological category" and would be quite curious to know.

Jan 4, 2014, 8:06:19 PM1/4/14

to Vladimir Voevodsky, Altenkirch Thorsten, Thomas Streicher, Bill Richter, HomotopyT...@googlegroups.com

It is interesting to see that this insight comes (again!) from a

homotopy theory route.

What I have in mind is the insight that "f:X->Y is an isomorphism"

has a logically equivalent, proof-irrelevant, definition in

intensional MLTT. It is interesting that intuition from homotopy

theory can lead you to such a construction, which can be written down

in MLTT and makes sense on its own, even if homotopy theory hadn't

been invented/discovered to explain it.

Martin

Jan 5, 2014, 1:06:13 PM1/5/14

to Bill Richter, homotopyt...@googlegroups.com, Vladimir Voevodsky

I believe that 1 is true, or at least that a 1960s understanding of homotopy theory can aid in understanding dependent types. The homotopy theory notions used in chapter two of the book are quite basic. Those of us who like and are familiar with n-categories often talk about HoTT in that language, but it isn't at all necessary.

I am more doubtful about your point 2 however. Coq is a superior proof assistant for doing homotopy type theory because it uses intensional dependent type theory, which is what HoTT is based on. But I don't see why the homotopy interpretation would make it a superior proof assistant in all ways. Or is what you mean that Coq is superior for other reasons, but isn't usable until you understand dependent types, which the homotopy interpretation helps with?

Jan 5, 2014, 2:00:17 PM1/5/14

to Michael Shulman, homotopyt...@googlegroups.com, vlad...@ias.edu

I believe that 1 is true, or at least that a 1960s understanding of

homotopy theory can aid in understanding dependent types. The

homotopy theory notions used in chapter two of the book are quite

basic.

Thanks, Michael. Let me use your earlier recommendation to explain my problem:

http://www.andrew.cmu.edu/user/awodey/preprints/TTH.pdf

Let's go to Steve's conclusions on p 15:

3. Conclusion: The logic of homotopy

An important and lively research program in current homotopy theory

is the pursuit (again following Grothendieck [Gro83]) of a general

concept of "stack," subsuming sheaves of homotopy types, higher

groupoids, quasi-categories, and the like. [...] The homotopy

interpretation of Martin-Lof type theory into Quillen model

categories, and the related results on type-theoretic constructions

of higher groupoids, are analogous to the basic results

interpreting extensional type theory and higher-order logic in (1-)

toposes. They clearly indicate that the logic of higher

toposes---i.e., the logic of homotopy---is, rather remarkably, a

form of intensional type theory.

This is completely incomprehensible to me, and I suspect, to any 1960s-era homotopy theorist. I'll believe that Steve's stacks and toposes are important, and that I could probably learn this stuff, but I doubt I could explain it to John Harrison. It seems to me that the HoTT book (which I apologize for not having read more carefully) is written in the same far-from-1960s-era-homotopy style:

For instance, interpreting types as sheaves helps explain the

intuitionistic nature of type-theoretic logic, [...]

Indeed, higher-dimensional category theory (particularly the theory

of weak infty-groupoids) is now known to be intimately connected

to homotopy theory, as proposed by Grothendieck and now being

studied intensely by mathematicians of both sorts.

Now to Vladimir, talking about sheaves is like breathing, as he's an algebraic geometer, and sheaves are quite simple compared to schemes, the fundamental object according to Hartshorne's book on Algebraic Geometry. I don't know this stuff. I read Hartshorne's other book on Euclid and Hilbert's axioms.

--

Best,

Bill

Jan 5, 2014, 2:05:03 PM1/5/14

to Bill Richter, Vladimir Voevodsky, homotopyt...@googlegroups.com

If I may quote my sentence following the one you quoted:

> Those of us who like and are familiar with n-categories often talk about HoTT in that language, but it isn't at all necessary.

The sentence you quoted from the book is from the introduction. Have you tried chapter 2 as I suggested?

Jan 5, 2014, 2:29:16 PM1/5/14

to Michael Shulman, vlad...@ias.edu, homotopyt...@googlegroups.com

Have you tried chapter 2 as I suggested?

Thanks, Michael. No, I have not tried very hard to read Chapter 2, or any part of the HoTT book. If I spent 2 weeks doing nothing but reading the first two chapters, I bet I could say something a lot more intelligent than I'm saying now. But I just paged through Chapter 2 now. I can see the words fibrations and homotopies, which is great, but Chapter 2 starts and ends with Steve's stuff that I find so forbidding:
The central new idea in homotopy type theory is that types can be

regarded as spaces in homotopy theory, or higher-dimensional

groupoids in category theory.

The actual homotopical interpretation, with identity types as path

spaces, and type families as fibrations, is due to [AW09], who used

the formalism of Quillen model categories. An interpretation in

(strict) ∞-groupoids was also given in the thesis [War08].

I didn't think, though, that I could jump right to Chapter 2. I thought I needed to read Chapter 1 first, and the intro seems to agree:

A reader who is familiar with Martin-Lof type theory can quickly

skim it to pick up the particulars of the theory we are using.

However, readers without experience in type theory will need to

read Chapter 1, as there are many subtle differences between type

theory and other foundations such as set theory.

And that would tend to say that I need to understand dependent types before understanding its homotopy theory interpretation. Would you agree?

Anyway, I think my problem with the HoTT book right now has much less to do with homotopy theory than the fact that the type theory in Chapter 1 looks so different from the type theory I learned in HOL, and I'm not really an expert yet on HOL type theory. I'm just not at all familiar with Martin-Lof type theory (Isn't that MLTT?).

--

Best,

Bill

Jan 5, 2014, 2:42:21 PM1/5/14

to Bill Richter, homotopyt...@googlegroups.com, Vladimir Voevodsky

The introduction to chapter 2 gives some explanation of the homotopy interpretation in intended-to-be-elementary language, which is why I mentioned it. To read all of chapter 2, yes, you should read chapter 1 first. Chapter 1 is about MLTT. My hope would be that from chapter 1 you can get sufficient understanding of DTT to be able to understand chapter 2, which would then help get a deeper understanding of dependent types. But it's impossible to say until you try. So maybe we should put this conversation on hold until you've actually tried to read the book.

Jan 5, 2014, 2:51:27 PM1/5/14

to Michael Shulman, homotopyt...@googlegroups.com, vlad...@ias.edu

Thanks for the tips on reading Chapters 1 & 2, Michael.

So maybe we should put this conversation on hold until you've

actually tried to read the book.

Right. But let me make a conjecture now: the connection between UF/Coq and 1960s style homotopy theory is a lot harder to see, and more interesting & productive, than anyone has explained here yet.

--

Best,

Bill

Jan 5, 2014, 3:15:56 PM1/5/14

to Michael Shulman, Vladimir Voevodsky, Bill Richter, homotopyt...@googlegroups.com

On Jan 5, 2014, at 1:06 PM, Michael Shulman wrote:

I believe that 1 is true, or at least that a 1960s understanding of homotopy theory can aid in understanding dependent types. The homotopy theory notions used in chapter two of the book are quite basic. Those of us who like and are familiar with n-categories often talk about HoTT in that language, but it isn't at all necessary.

I am more doubtful about your point 2 however. Coq is a superior proof assistant for doing homotopy type theory because it uses intensional dependent type theory, which is what HoTT is based on. But I don't see why the homotopy interpretation would make it a superior proof assistant in all ways. Or is what you mean that Coq is superior for other reasons, but isn't usable until you understand dependent types, which the homotopy interpretation helps with?

Let's be precise here. Homotopy theory is needed not so much to understand dependent types (which can be understood via the example of propositional functions) but to understand:

1. Martin-Lof Identity Types.

2. Universes.

Without homotopy theory it is impossible to understand these two ingredients.
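[For a concrete hint of what the homotopy reading of identity types provides, here is a minimal sketch in plain Coq notation, using the standard eq type. The point is that path induction alone yields inversion of paths and transport — i.e. every type family behaves like a fibration; the names inverse and transport are illustrative.]

```coq
(* Illustrative sketch.  Path induction (matching on eq_refl) gives: *)

(* every path can be reversed ... *)
Definition inverse {A : Type} {a b : A} (p : a = b) : b = a :=
  match p in (_ = y) return (y = a) with
  | eq_refl => eq_refl
  end.

(* ... and lifted through any type family P: transport along p.
   Homotopically, P is a fibration and this is path lifting. *)
Definition transport {A : Type} (P : A -> Type) {a b : A}
    (p : a = b) : P a -> P b :=
  match p in (_ = y) return (P a -> P y) with
  | eq_refl => fun u => u
  end.
```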

Vladimir.

Jan 5, 2014, 7:59:56 PM1/5/14