
Oct 27, 2019, 10:41:37 AM

to Homotopy Type Theory

In [this](https://www.youtube.com/watch?v=zw6NcwME7yI&t=1680) 2014 talk

at IAS, Voevodsky talks about the history of his project of "univalent

mathematics" and his motivation for starting it. Namely, he mentions

that he found existing proof assistants at that time (in 2000) to be

impractical for the kinds of mathematics he was interested in.

Unfortunately, he doesn't go into the details of exactly what mathematics
he was interested in (I'm guessing something to do with homotopy
theory) or why exactly existing proof assistants weren't practical for
formalizing it. Judging by the things he mentions in his talk, it

seems that (roughly) his rejection of those proof assistants was based

on the view that predicate logic + ZFC is not expressive enough. In

other words, there is too much lossy encoding needed in order to

translate from the platonic world of mathematical ideas to this formal

language.

Comparing the situation to computer programming languages, one might say

that predicate logic is like Assembly in that even though everything can

be encoded in that language, it is not expressive enough to directly

talk about higher level concepts, diminishing its practical value for

reasoning about mathematics. In particular, those systems are not

adequate for *interactive* development of *new* mathematics (as opposed

to formalization of existing theories).

Perhaps I am just misinterpreting what Voevodsky said; if so, I hope
someone can correct me. However, even if this wasn't *his* view, to

me it seems to be a view held implicitly in the HoTT community. In any

case, it's a view that one might reasonably hold.

However, I wonder how reasonable that view actually is, i.e. whether e.g.
Mizar really is that much more impractical than HoTT-Coq or Agda, given
that people have been happily formalizing mathematics in it for 46 years
now. And even though browsing the contents of "Formalized Mathematics"
can give the impression that the work consists mostly of formalizing
early 20th century mathematics, neither the UniMath nor the HoTT
library, for example, contains a proof of Fubini's theorem.

So, to put this into one concrete question, how (if at all) is HoTT-Coq

more practical than Mizar for the purpose of formalizing mathematics,

outside the specific realm of synthetic homotopy theory?

--

Nicolas


Oct 27, 2019, 1:23:01 PM

to Nicolas Alexander Schmidt, Homotopy Type Theory

I believe it refers to his 2-theories:

--

You received this message because you are subscribed to the Google Groups "Homotopy Type Theory" group.

To unsubscribe from this group and stop receiving emails from it, send an email to HomotopyTypeThe...@googlegroups.com.

To view this discussion on the web visit https://groups.google.com/d/msgid/HomotopyTypeTheory/e491d38b-b50a-6172-dca9-40d45fe1b6d2%40fromzerotoinfinity.xyz.

Nov 3, 2019, 2:29:40 AM

to Nicolas Alexander Schmidt, Homotopy Type Theory

I know hardly anything about Mizar, so I can't comment on it. And I

don't know for sure exactly what Voevodsky meant (and perhaps no one

now does). But I am pretty sure that he was not thinking about

synthetic homotopy theory at the time, because the possibility of

synthetic homotopy theory wasn't really even imagined until later on

(specifically, with the advent of higher inductive types).

Instead, I expect he was thinking about category theory and higher

category theory. The advantage of univalence in such contexts is that

it formalizes the isomorphism-invariant behavior of category theory,

incorporating it directly into the logical structure of the

foundations so that you don't have to worry about whether your

theorems are invariant under isomorphism, or prove explicitly that

they are. Is this a substantial advantage over a ZFC-based system?

Maybe, maybe not. (One might argue that it's not much of an advantage

at all until univalence becomes "computational", as with cubical type

theories, so that transporting along it can be done automatically by

the proof assistant.) But there are other points to consider.

Firstly, before even getting to univalence, there is a difference

between membership-based set theories and type theories, which is more

closely analogous to the assembly language / high level programming

language divide. Type-theoretic foundations for mathematics, like

strongly/statically typed programming languages, provide automatic

"error-checking" for the user, catching various kinds of mistakes at

"compile time", and allow more intelligent compiler optimization and

inference due to the more informative nature of types. Dependent type

theories are even better at this. And just as the advantages of

static typing are not belied by the fact that a lot of people happily

write code in dynamically typed languages, so the advantages of typed

mathematics for formalization are not belied by the fact that some

mathematicians are happy to do without them. It's worth noting that

many of the recent high-profile formalizations of *recent*

mathematical results, such as the four-color theorem, the odd-order

theorem, and the Kepler conjecture, use type-theoretic proof

assistants like Coq and HOL rather than set-theoretic ones like Mizar.

From this perspective, the advantage of HoTT/UF is that it "fixes"

certain infelicities in previously existing type theories. Now we

understand quotients better and have more tools for doing without

setoids; now we know what the equality type of the universe is; now we

are better at computing with function extensionality and propositional

extensionality; etc. So HoTT/UF gives us the benefits of a

type-theoretic foundation while ameliorating some of the traditional

disadvantages of that approach.

But in my own opinion (which is, I believe, rather different from

Voevodsky's, at least in emphasis), all of this is kind of beside the

point. It's like arguing about whether or not my laptop can play

movies better than an 80s-era VCR; it overlooks the main point that my

laptop does *so much more* than just play movies. The real advantage

of type-theoretic, and homotopy-type-theoretic, foundations, is that

they have a plethora of categorical and higher-categorical models.

Proving a theorem once, in constructive homotopy type theory,

automatically entails the "internal" truth of the corresponding

theorem in all higher toposes. I think this is a much more important

selling point than whether or not we get a more practical system for

formalizing plain old set-based mathematics.

Mike


Nov 3, 2019, 6:38:49 AM

to Nicolas Alexander Schmidt, Homotopy Type Theory

There's also VV's homotopy lambda calculus, which he later abandoned in favour of MLTT:

Nov 3, 2019, 6:57:11 AM

to Bas Spitters, homotopytypetheory, Nicolas Alexander Schmidt

Forget even higher category theory. Kevin Buzzard now goes around telling the story of how formally proving (using Lean) even rather elementary facts from commutative algebra in EGA that are stated as equalities was not obvious: the equality is really an isomorphism arising from a universal property. Forget trying to do anything motivic when algebra is already full of such equalities. This is not a problem with univalence, of course.

David


Nov 3, 2019, 2:13:23 PM

to David Roberts, Bas Spitters, homotopytypetheory, Nicolas Alexander Schmidt

But does univalence a la Book HoTT *actually* make it easier to reason

about such things? It allows us to write "=" rather than "\cong", but

to construct such an equality we have to construct an isomorphism

first, and to *use* such an equality we have to transport along it,

and then we get lots of univalence-redexes that we have to manually

reduce away. My experience formalizing math in HoTT/Coq is that it's

much easier if you *avoid* turning equivalences into equalities except

when absolutely necessary. (I don't have any experience formalizing

math in a cubical proof assistant, but in that case at least you

wouldn't have to manually reduce the univalence-redexes -- although it

seems to me you'd still have to construct the isomorphism before you

can apply univalence to make it an equality.)
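To make the "univalence-redex" point concrete, here is a minimal sketch in Book HoTT notation (standard facts from the HoTT Book, not a claim about any particular library):

```latex
% In Book HoTT the following term is stuck, because ua comes from an axiom:
\[
  \mathsf{transport}^{X \,\mapsto\, X}\bigl(\mathsf{ua}(e), a\bigr)
\]
% Instead of reducing, it must be rewritten, step by step, along the
% propositional computation rule
\[
  \mathsf{transport}^{X \,\mapsto\, X}\bigl(\mathsf{ua}(e), a\bigr) = e(a),
\]
% which is what "manually reducing univalence-redexes" amounts to.  In a
% cubical type theory, transport along ua(e) computes during type-checking,
% so these stuck terms do not accumulate.
```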


Nov 3, 2019, 2:46:32 PM

to Michael Shulman, David Roberts, Bas Spitters, homotopytypetheory, Nicolas Alexander Schmidt

The isomorphism invariance might be useful when you don't care about the actual stuff you get after transporting. If you have two different definitions of the same set, you can transport its *properties* along the isomorphism (which you still need to construct). For example, we can define rational numbers in two different ways: as arbitrary quotients and as reduced quotients. Then you can prove that they are isomorphic and that one of them gives you a field, and then you know that the other one is also a field. You may be interested in the actual definitions of the operations, but you don't usually care about the actual proofs of the properties. So, you construct the addition and the multiplication explicitly, but you get the proofs that they determine a field "for free" from the other definition. If the proofs for one of the definitions are easier than for the other, this might be a significant simplification.
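In Book HoTT notation, the mechanism described here is just transport along a path obtained from univalence; a minimal sketch (Q_arb and Q_red are placeholder names for the two constructions of the rationals):

```latex
% An equivalence between the two constructions of Q becomes a path
% in the universe via univalence (ua):
\[
  e : \mathbb{Q}_{\mathrm{arb}} \simeq \mathbb{Q}_{\mathrm{red}}
  \qquad\Longrightarrow\qquad
  \mathsf{ua}(e) : \mathbb{Q}_{\mathrm{arb}} = \mathbb{Q}_{\mathrm{red}}
\]
% and for any family P : U -> U, transport along this path gives
\[
  \mathsf{transport}^{P}\bigl(\mathsf{ua}(e)\bigr) :
  P(\mathbb{Q}_{\mathrm{arb}}) \to P(\mathbb{Q}_{\mathrm{red}})
\]
% Taking P(X) := isField(X) (with the operations carried along) transports
% "being a field" across the equivalence -- the "for free" step above.
```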

Regards,

Valery Isaev

On Sun, 3 Nov 2019 at 22:13, Michael Shulman <shu...@sandiego.edu> wrote:


Nov 3, 2019, 5:23:25 PM

to Homotopy Type Theory

This discussion reminds me of this question: https://mathoverflow.net/questions/336191/cauchy-reals-and-dedekind-reals-satisfy-the-same-mathematical-theorems/336233?noredirect=1#comment840649_336233

M.


Nov 4, 2019, 1:43:08 PM

to Michael Shulman, David Roberts, Bas Spitters, homotopytypetheory, Nicolas Alexander Schmidt

On Sun, 3 Nov 2019 at 19:13, Michael Shulman <shu...@sandiego.edu> wrote:

> But does univalence a la Book HoTT *actually* make it easier to reason about such things?

I think this is a really interesting and important question.

I guess David was referring to my scheme fail of 2018. I wanted to formalise the notion of a scheme a la Grothendieck and prove that if R is a commutative ring then Spec(R) is a scheme [I know it's a definition, but many mathematicians do seem to call it a theorem, in our ignorance]. I showed an undergraduate a specific lemma in ring theory (https://stacks.math.columbia.edu/tag/00EJ) and said "that's what I want", and they formalised it for me. And then it turned out that I wanted something else: I didn't have R_f, I had something "canonically isomorphic" to it, a phrase we mathematicians like to pull out when the going gets tough and we can't be bothered to check that any more diagrams commute. By this point it was too late to turn back, and so I had to prove that 20 diagrams commuted, and it wasn't much fun.

I then got an MSc student to redo everything using universal properties more carefully in Lean, and it worked like a dream: https://github.com/ramonfmir/lean-scheme. A lot of people said to me at the time "you wouldn't have had this problem if you'd been using HoTT instead of DTT", and my response to this is still the (intentionally) provocative "go ahead and define schemes and prove that Spec(R) is a scheme in some HoTT system, and show me how it's better; note that we did have a problem, but we solved it in DTT".

I would be particularly interested to see schemes done in Arend, because it always felt funny to me using UniMath in Coq (and similarly it feels funny to me to do HoTT in Lean 3 -- in both cases it could be argued that it's using a system to do something it wasn't designed to do). I think it's easy to theorise about this sort of thing, but until it happens in practice in one or more of the HoTT systems I don't think we will understand the issue properly (or, more precisely, I don't think I will understand the issue properly).
I have had extensive discussions with Martin Escardo about HoTT and he has certainly given me hope, but on the Lean chat I think people assumed schemes would be easy in Lean (I certainly did) and then we ran into this unexpected problem (which univalence is probably designed to solve), so the question is whether a univalent type theory runs into a different unexpected problem -- you push the carpet down somewhere and it pops up somewhere else.

I know this is a HoTT list, but the challenge is also open to the HOL people like the Isabelle/HOL experts. In contrast to HoTT theories, which I think should handle schemes fine, I think that simple type theory will have tremendous problems defining, for example, tensor products of sheaves of modules on a scheme, because these are dependent types. On the other hand, my recent arXiv paper with Commelin and Massot https://arxiv.org/abs/1910.12320 goes much further and formalises perfectoid spaces in dependent type theory. I would like the people on this list to see this as a challenge.

I think that this century will see the rise of the theorem prover in mathematics, and I am not naive enough to think that the one I currently use is the one which is guaranteed to be the success story. Voevodsky was convinced that univalence was the right way to do modern mathematics, but I'm doing it just fine in dependent type theory, and now he's gone I really want to find someone who will take up the challenge and do some scheme theory in HoTT. Convincing professional mathematicians to get interested in this area is very difficult, though, and I speak as someone who's been trying to do it for two years now. [I recommend you try the undergraduates instead, anyone who is interested in training people up -- plenty of undergraduates are capable of reading the definition of a scheme, if they know what rings and topological spaces are.]

To get back to the original question, my understanding was that Voevodsky had done a bunch of scheme theory and it had got him a Fields medal and it was this mathematics which he was interested in at the time. He wanted to formalise his big theorem, just like Hales did. Unfortunately he was historically earlier, and his mathematics involved far more conceptual objects than spheres in 3-space, so it was a much taller order. All the evidence is there to suggest that over the next 15 or so years his interests changed. The clearest evidence, in my mind, is that there is no definition of a scheme in UniMath.

Moreover, his story in his Cambridge talk https://www.newton.ac.uk/seminar/20170710113012301 about asking Suslin to reprove one of his results without using the axiom of choice (46 minutes in) kind of shocked me -- Suslin does not care about mathematics without choice, and the vast majority of mathematicians employed in mathematics departments feel the same, although I'm well aware that constructivism is taken more seriously on this list. I think it is interesting that Voevodsky failed to prove a constructive version of his theorem, because I think that some mathematics is better off not being constructive. It is exactly the interaction between constructivism and univalence which I do not understand well, and I think that a very good way to investigate it would be to do some highly non-constructive modern mathematics in a univalent type theory.

Kevin

PS Many thanks to the people who have emailed me in the past telling me how I have used "HoTT", "univalence" and "UniMath" interchangeably and incorrectly. Hopefully I am getting better, but I am still keen to hear about anything I'm saying which is imprecise or incorrect.


Nov 4, 2019, 4:11:10 PM

to Kevin Buzzard, David Roberts, Bas Spitters, homotopytypetheory, Nicolas Alexander Schmidt

Valery has a good point that transporting properties along an

equivalence is definitely somewhere that Book HoTT could get you some

mileage. But I suspect that a more significant advantage would come

from using a cubical type theory in which transport computes (as long

as the equivalence was defined constructively -- a good reason to care

about being constructive even putting aside philosophy and internal

languages). I'd be curious to know what those 20 diagrams were.

FWIW, I think that nowadays Coq *is* designed for Book HoTT, certainly

more than Lean 3 is. My understanding is that Lean 3 is actually

technically incompatible with univalence, whereas over the past decade

the Coq developers have incorporated various new features requested by

the HoTT community to improve compatibility, and the HoTT Coq library

is I believe one of the test suites that new Coq versions are tested

against to ensure that breakage is dealt with on one side or another.

I'm not sure how a proof assistant could be more designed for Book

HoTT than modern Coq and Agda are. (Arend is not designed for Book

HoTT, but for a flavor of HoTT that's partway to a cubical theory,

with an interval type representing paths.)


Nov 4, 2019, 6:21:03 PM

to HomotopyT...@googlegroups.com

Dear Bas, Michael, David, Valery, Martín and Kevin,

thank you all for your replies. When I posed my question, I didn't

expect there to be so much room for debate. In particular, I am

surprised that it seems not at all clear how much univalence actually

helps the practicality of doing formal proofs (as opposed to any

theoretical benefits).

Kevin, I would like to second Michael's interest in seeing these 20

commutative diagrams. Moreover, I'd also be very interested in seeing

your "spaghetti code" (quote from the slides of your Big Proof talk): it

seems it should be informative to see where your initial approach went

wrong, and how much these problems and their solution had anything to do

at all with the formal system you were working in. Are your original

files perhaps available somewhere?

--

Nicolas


Nov 4, 2019, 6:27:06 PM

to Kevin Buzzard, Michael Shulman, Bas Spitters, homotopytypetheory, Nicolas Alexander Schmidt

Hi,

Going back a step from DTT vs HoTT...

> I think that simple type theory will have tremendous problems defining, for example, tensor products of sheaves of modules on a scheme, because these are dependent types.

this is probably where VV found himself just under 20 years ago. I

don't know what the state of play was around 2000, but certainly if he

didn't know about Coq, then VV's options for formal proof systems were

looking more like Mizar, MetaMath and similar, which are probably even

worse than simple type-theory systems: imagine trying to formally

define a scheme in the language of ZFC!

Best regards,

David

David Roberts

Webpage: https://ncatlab.org/nlab/show/David+Roberts

Blog: https://thehighergeometer.wordpress.com


Nov 5, 2019, 10:43:06 AM

to Homotopy Type Theory

Re: "To get back to the original question, my understanding was that Voevodsky had done a bunch of scheme theory and it had got him a Fields medal and it was this mathematics which he was interested in at the time. He wanted to formalise his big theorem, just like Hales did."

I think he was more interested in formalizing things like his early work with Kapranov on higher categories, which turned out to have a mistake in it. He once told me that he wasn't interested in formalizing his proof of Bloch-Kato, because he was sure it was right. (I should have asked him at the time how he could be so sure!)

Re: "The clearest evidence, in my mind, is that there is no definition of a scheme in UniMath."

That's sort of accidental. In early 2014, expecting to speak at an algebraic geometry conference in the summer, he mentioned that one idea he had for his talk would be to formalize the definition of scheme in UniMath and speak about it. I think he was distracted from that by thinking about C-systems. The UniMath project aims at formalizing all of mathematics, so the definition of scheme is one of the next things that (still) needs to be done.


Nov 5, 2019, 3:29:56 PM11/5/19

to Homotopy Type Theory

He once told me that he wasn't interested in formalizing his proof of Bloch-Kato, because he was sure it was right. (I should have asked him at the time how he could be so sure!)

Oh this is interesting... do you remember when this conversation was happening? Because in these slides (https://www.math.ias.edu/vladimir/sites/math.ias.edu.vladimir/files/2014_09_Bernays_3%20presentation.pdf) he said "Next year I am starting a project of univalent formalization of my proof of Milnor’s Conjecture using this formalization of set theory as the starting point." (Page 11)

On Tuesday, November 5, 2019 at 7:43:06 AM UTC-8, Daniel R. Grayson wrote:


Nov 5, 2019, 6:14:07 PM11/5/19

to Homotopy Type Theory

On Monday, 4 November 2019 18:43:08 UTC, Kevin Buzzard wrote:

It is exactly the interaction between constructivism and univalence which I do not understand well, and I think that a very good way to investigate it would be to do some highly non-constructive modern mathematics in a univalent type theory

Regarding *old* mathematics, you have the well-ordering principle proved in UniMath (from the axiom of choice, of course).

Regarding your doubt about the interaction, we have that univalence is orthogonal to constructivism.

In fact, univalence is not *inherently* constructive. It was hard work to find a constructive interpretation of univalence (which happens to rely on cubical sets as in homotopy theory). In particular (even if I am fond of constructive mathematics, as you know), I work with univalence axiomatically, as a black box, rather than as a construction, in my (formal and informal) mathematical developments. And I do prefer to work with univalence-as-a-specification rather than univalence-as-a-construction.

There is nothing inherently constructive about univalence. There is no a priori interaction between univalence and constructivism. There is only an a posteriori interaction, constructed by some of the constructively minded members of this list. The constructivity of univalence was an open problem for a number of years, and I would say that, even if it is solved via the cubical model, it is far from being fully understood.

Best,

Martin

Nov 5, 2019, 7:06:12 PM11/5/19

to Martín Hötzel Escardó, Homotopy Type Theory

Nov 6, 2019, 6:59:28 PM11/6/19

to Homotopy Type Theory

Sorry, at this point I don't remember precisely.

Nov 11, 2019, 1:26:20 PM11/11/19

to Stefan Monnier, Martín Hötzel Escardó, Homotopy Type Theory

For inductive families, one thing you can do is to think of them in terms of the translation to parametrized inductive types and identity types, so

> data Foo (A : Set) : Set where

> bar : Id U A UnaryNat -> Foo A

in which case bar applied to the Id U BinaryNat UnaryNat that you get from univalence gives a Foo BinaryNat.

A related perspective is to think of some transports as additional constructors for inductive families; see e.g. this approach to inductive families in cubical type theory https://www.cs.cmu.edu/~ecavallo/works/popl19.pdf
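To make the translation above self-contained, here is a minimal Agda sketch; the `Id` type, `Foo'`, and the conversion functions are illustrative reconstructions of what was described, and `BinaryNat` together with its univalence-supplied identification is only hypothesized, not constructed:

```agda
{-# OPTIONS --without-K #-}
module FooTranslation where

open import Agda.Primitive using (Level)

-- Martin-Löf identity type, standing in for "Id U A B" above.
data Id {ℓ : Level} (A : Set ℓ) (x : A) : A → Set ℓ where
  refl : Id A x x

data UnaryNat : Set where
  zero : UnaryNat
  suc  : UnaryNat → UnaryNat

-- Stefan's indexed family:
data Foo : Set → Set where
  bar : Foo UnaryNat

-- The translation: a parametrized family carrying an identity proof.
data Foo' (A : Set) : Set where
  bar' : Id Set A UnaryNat → Foo' A

-- The two presentations agree, without any univalence:
to : {A : Set} → Foo A → Foo' A
to bar = bar' refl

from : {A : Set} → Foo' A → Foo A
from (bar' refl) = bar

-- Given p : Id Set BinaryNat UnaryNat (which univalence would
-- produce from an equivalence), `bar' p : Foo' BinaryNat` is the
-- direct inhabitant Stefan was asking about.
```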

-Dan

> On Nov 5, 2019, at 7:06 PM, Stefan Monnier <mon...@iro.umontreal.ca> wrote:

>

>> members of this list. The constructivity of univalence was an open problem

>> for a number of year, and I would say that, even if it is solved via the

>> cubical model, it is far from being fully understood.

>

> In my case, I still find it odd in a situation such as:

>

> data Foo : Set -> Set where

> bar : Foo UnaryNat

>

> since transport supposedly allows us to take a proof of equivalence

> between UnaryNat and BinaryNat and turn a `bar` into something of type

> `Foo BinaryNat` although I can't see any way to directly construct an

> object of this type.

>

>

> Stefan

>


Nov 24, 2019, 1:11:19 PM11/24/19

to Nicolas Alexander Schmidt, Homotopy Type Theory

Kevin, I would like to second Michael's interest in seeing these 20

commutative diagrams. Moreover, I'd also be very interested in seeing

your "spaghetti code" (quote from the slides of your Big Proof talk): it

seems it should be informative to see where your initial approach went

wrong, and how much these problems and their solution had anything to do

at all with the formal system you were working in. Are your original

files perhaps available somewhere?

Sorry for the delay.

My original repo with a "bad" theorem is here:

The bad theorem is this:

The problem is that the rings denoted R_{f_i} (localisations of rings) are typically defined to mathematicians as "this explicit construction here", as opposed to "the unique up to unique isomorphism ring having some explicit property". As a mathematician, I formalised (or, more precisely, got Imperial College maths undergraduates Chris Hughes and Kenny Lau to formalise) the explicit construction of the localisation and then the explicit theorem as stated in the Stacks project, not understanding at the time that this would lead to trouble. When I came to apply it, I ran into the issue that on this page

we have the quote: "If f, g in R are such that D(f)=D(g), then by Lemma 26.5.1 there are canonical maps M_f->M_g and M_g->M_f which are mutually inverse. Hence [we can regard M_f and M_g as equal]".

This is a typical mathematician's usage of the word "canonical" -- it means "I give you my mathematician's guarantee that I will never do anything to M_f which won't work in exactly the same way as M_g so you can replace one by the other and I won't mind".

As I explained earlier, by the time I noticed this subtlety it was too late, and this led to this horrorshow:

plus the lines following, all of which is applied here

That last link is justifying a whole bunch of rewrites along canonical isomorphisms. These were the general lemmas I needed to prove that if A -> B was a map, and if A was canonically isomorphic to A' and if B was canonically isomorphic to B' and A' -> B' was the corresponding map, then the image of A was canonically isomorphic to the image of A' etc, all in some very specific situation where "canonically isomorphic" was typically replaced by "unique isomorphism satisfying this list of properties". Note that this is crappy old code written by me when I had no idea what I was doing. One very interesting twist was this: the universal property of localisation for the localised ring R_f is a statement which says that under certain circumstances there is a unique ring homomorphism from R_f; however in https://stacks.math.columbia.edu/tag/00EJ the map beta in the statement of the lemma is *not* a ring homomorphism and so a naive application of the universal property was actually *not enough*! One has to reformulate the lemma in terms of equalisers of ring homomorphisms and remove mention of beta. None of this is mentioned in the Stacks project website because we are covered by the fact that everything is "canonical" so it's all bound to work -- and actually this is *true* -- we are very good at using this black magic word.

All of this is fixed in Ramon Mir's MSc project

and the explanations of how it was fixed are in his write-up here

Briefly, one crucial input was that the localisation map R -> R_f could be characterised up to unique isomorphism in the category of R-algebras by something far more concrete than the universal property, and Ramon used this instead.

In HoTT one might naively say that these problems would not arise because isomorphic things are equal. However my *current* opinion is that it is not as easy as this, because I believe that the correct "axiom" is that "canonically" isomorphic objects are equal and that the HoTT axiom is too strong. I developed some rather primitive tools to rewrite certain statements along certain kinds of isomorphisms with some naturality properties, and mathematicians would be happy with these ideas. I can quite believe that if you stick to homotopy types with the model in your mind as being topological spaces up to homotopy then the HoTT axiom is perfect. However there are things other than topological spaces in mathematics, for example commutative rings, and in my mind the HoTT axiom just looks weird and wrong in ring theory, and I expect it to backfire when HoTT people start doing some serious ring theory. I might be wrong. Part of me hopes I'm wrong, in fact. But this will remain my opinion until someone comes along and formalises the definition of a scheme in one of the HoTT theories and proves the "theorem" that an affine scheme is a scheme (I write "theorem" in quotes because it is a construction, not a theorem).

I had the pleasure of meeting Thorsten Altenkirch earlier this week and I asked him why this had not been done yet, and he told me that they were just waiting for the right person to come along and do it.

A year or so ago I was of the opinion that more mathematicians should be using Lean but now I have come to the conclusion that actually more mathematicians should be engaging with *all* of the theorem provers, so we can find out which provers are the most appropriate for which areas. By "mathematicians" here I really mean my ugly phrase "proper mathematicians", i.e. people doing algebra/number theory/geometry/topology etc, people who have absolutely no idea what a type is and would even struggle to list the axioms of set theory.
These people with their "canonical" constructions (a phrase which has no meaning) are the problem, and they're very hard to reach because currently these systems offer them nothing they need. I wrote a silly game

to help my 1st years revise for my exam (and the game takes things much further than the contents of the course), and I'm writing a real number game too, to help my 1st years revise analysis for their January exam. It would not be hard to write analogous such games in one of the HoTT theories, I would imagine.

From what I have seen, it seems to me that Isabelle is a fabulous tool for classical analysis, Coq is great for finite group theory, the Kepler conjecture is proved in other HOL systems, and the HoTT systems are great for the theory of topological spaces up to homotopy. But number theory has been around for millennia and unfortunately uses analysis, group theory and topological spaces, and I want one system which can do all of them. I think that we can only find out the limitations of these systems by doing a whole bunch of "proper mathematics" in all of them. I think it's appalling that none of them can even *state* the Hodge conjecture, for example. For me, that is a very simple reason for a "proper mathematician" to immediately reject all of them in 2019. But I digress.

Kevin


--

You received this message because you are subscribed to the Google Groups "Homotopy Type Theory" group.

To unsubscribe from this group and stop receiving emails from it, send an email to HomotopyTypeThe...@googlegroups.com.

To view this discussion on the web visit https://groups.google.com/d/msgid/HomotopyTypeTheory/bc1a186e-4d33-0296-4b1b-b09ee8188037%40fromzerotoinfinity.xyz.

Nov 25, 2019, 7:25:37 PM11/25/19

to Kevin Buzzard, Homotopy Type Theory

Thanks for sharing the details, Kevin!

On Sun, Nov 24, 2019 at 10:11 AM Kevin Buzzard

<kevin.m...@gmail.com> wrote:

> In HoTT one might naively say that these problems would not arise because isomorphic things are equal.

This thought is indeed too naive, but, I think, not for the reason you

give. In a non-univalent type theory, you have to prove manually that

everything in sight respects isomorphisms. This is I think the point

you were making about why the original approach got so messy. HoTT

does solve this problem: because isomorphisms can be made into

equalities, everything automatically respects isomorphisms.

The problem is that in traditional "Book" HoTT, when you make an

isomorphism into an equality, the original isomorphism is only

remembered "up to homotopy", and then when you apply the transport

across that equality (i.e. you use the fact that everything

automatically respects it), it takes work to prove that what you get

out in fact coincides with what you thought it was supposed to be

(i.e. the result of actually applying the isomorphism you started by

defining). In fact, dealing with these "univalence-redexes" is so

painful that when formalizing mathematics in Book HoTT we (at least,

in the HoTT/Coq library) generally avoid making isomorphisms into

equalities as much as possible! The only place you might get some

mileage is when, as Valery pointed out, what you're transporting is

the mere truth of a proposition and so it doesn't matter what the

"result" of the transport is. So while it might be interesting to try

to formalize schemes in Book HoTT, I wouldn't expect much improvement,

and it's not a project I would wish on anyone.

What I do think would be worth doing and comparing would be to

formalize schemes in some kind of cubical type theory. In that case,

univalence actually computes and so you can hope that the proof

assistant can actually recover the original isomorphism for you

automatically. This may not be quite true

(https://groups.google.com/d/topic/homotopytypetheory/wCU0V8RD1LQ/discussion)

but it's much closer to true, and I think it would be extremely

interesting to formalize schemes in a cubical type theory and compare

to your developments. In a nutshell, one could say that the

composition algorithms of cubical type theory are a generic

implementation of the fact that everything definable in type theory

respects isomorphisms, so that essentially all of the nasty lemmas

should be proven for you already -- with, moreover, the proofs that

you would have given, rather than (as in Book HoTT) a proof that isn't

very useful because it takes a lot of effort to extract the "correct"

proof.

> However my *current* opinion is that it is not as easy as this, because I believe that the correct "axiom" is that "canonically" isomorphic objects are equal and that the HoTT axiom is too strong.

This doesn't really make sense to me; I can't figure out what you mean

by "too strong". It's certainly not inconsistent, since we have

semantic models; in particular, schemes defined in HoTT would

specialize in the simplicial set model to the classical notion of

scheme. And I can't imagine any way that the presence of univalence

could "get in the way" or "backfire". Usually when people say that

univalence sounds "wrong", it's because they're thinking of "equality"

as a substitutive mere property and "isomorphic types as equal" as

somehow collapsing to a categorical skeleton, which is of course a

misconception -- HoTT instead expands the notion of "equality" to

essentially mean "isomorphism" and requires transporting along it as a

nontrivial action. I doubt that that's what you have in mind, but

maybe you could explain what you do mean?

> By "mathematicians" here I really mean my ugly phrase "proper mathematicians", i.e. people doing algebra/number theory/geometry/topology etc, people who have absolutely no idea what a type is and would even struggle to list the axioms of set theory.

It would be nice if there were a phrase for this that didn't sound

insulting to us "improper" mathematicians. Mac Lane's similar phrase

"working mathematician" is not really any more flattering to the

"non-working" mathematicians. (-:

Mike


Nov 26, 2019, 3:08:51 AM11/26/19

to Homotopy Type Theory

On Tuesday, November 26, 2019 at 1:25:37 AM UTC+1, Michael Shulman wrote:

On Sun, Nov 24, 2019 at 10:11 AM Kevin Buzzard

> However my *current* opinion is that it is not as easy as this, because I believe that the correct "axiom" is that "canonically" isomorphic objects are equal and that the HoTT axiom is too strong.

This doesn't really make sense to me; I can't figure out what you mean

by "too strong".

Of course I agree with Mike that univalence is not “too strong”: it merely implements the mathematically right notion of identity for types and other structures built from types, such as rings, etc.

But if I may venture a guess, I'd say that Kevin wants a “canonical reflection rule”: canonical identifications should correspond to judgmental equalities. We've had some discussions before about the underlying meaning of judgmental equality (invoking Frege's Sinn/Bedeutung distinction among other ideas), but I don't know whether we tried saying judgmental/definitional equality should include canonical equality, whatever that is.

This might be really useful, but I think we're still some ways off before we can implement this idea. The first question is whether “all diagrams of canonical identifications commute”. (Besides the obvious question of defining canonical identifications in the first place :-)

But the adventurous can start by playing around by adding canonical identities as rewriting rules in Agda: see Jesper Cockx' recent blog post: https://jesper.sikanda.be/posts/hack-your-type-theory.html

Ulrik

Nov 26, 2019, 2:14:42 PM11/26/19

to Homotopy Type Theory

On Tuesday, 26 November 2019 00:25:37 UTC, Michael Shulman wrote:

HoTT instead expands the notion of "equality" to

essentially mean "isomorphism" and requires transporting along it as a

nontrivial action. I doubt that that's what you have in mind, but

maybe you could explain what you do mean?

I think terminology and notation alone cause a lot of confusion (and I

have said this many times in this list in the past, before Kevin joined in).

Much of the disagreement is not a real disagreement but a

misunderstanding.

* In HoTT, or in univalent mathematics, we use the terminology

"equals" and the notation "=" for something that is not the same

notion as in "traditional mathematics".

* Before the univalence axiom, we had Martin-Loef's identity type.

* It was *intended* to capture equality *as used by mathematicians*

(constructive or not).

* But it didn't. Hofmann and Streicher proved that.

* The identity type captures something else.

It certainly doesn't capture truth-valued equality by default.

In particular, Voevodsky showed that it captures isomorphism of

sets, and more generally equivalence of ∞-groupoids.

But this is distorting history a bit.

In the initial drafts of his "homotopy lambda calculus", he tried

to come up with a new construction in type theory to capture

equivalence.

It was only later that he found out that what he needed was

precisely Martin-Loef's identity type.

* So writing "x = y" for the identity type is a bit perverse.

People may say, and they have said "but there is no other sensible

notion of equality for such type theories".

That may be so, but because, in any case, *it is not the same

notion of equality*, we should not use the same symbol.

* Similarly, writing "X ≃ Y" is a bit perverse, too.

In truth-valued mathematics, "X ≃ Y" is most of the time intended

to be truth-valued, not set-valued.

(Exception: category theory. E.g. we write a long chain of

isomorphisms to establish that two objects are isomorphic. Then we

learn that the author of such a proof was not interested in the

existence of an isomorphism, but rather in providing an

example. Such a proof/construction is usually concluded by saying

something like "by chasing the isomorphism, we see that we can take

[...] as our desired isomorphism".)

* With the above out of the way, we can consider the imprecise slogan

"isomorphic types are equal".

The one thing that the univalence axiom doesn't say is that

isomorphic types are equal.

What it does say is that the *identity type* Id X Y is in canonical

bijection with the type X ≃ Y of equivalences.

* What is the effect of this?

- That the identity type behaves like isomorphism, rather than like

equality.

- And that isomorphism behaves a little bit like equality.

The important thing above is "a little bit".

In particular, we cannot *substitute* things by isomorphic

things. We can only *transport* them (just like things work as

usual with isomorphisms).

* Usually, the univalence axiom is expressed as a relation between

equality and isomorphism.

Where by equality we don't mean equality but instead the identity

type.

A way to avoid these terminological problems is to formulate

univalence as a property of isomorphisms, or more precisely

equivalences.

To show that all equivalences satisfy a given property, it is

enough to prove that the identity equivalence of each type has

this property.

* So, as Mike says above, most of the time we can work with type

equivalence rather than "type equality". And I do too.

Something that is not well explained at all in the literature is

when and how the univalence axiom *actually makes a difference*.

(I guess this is not well understood. I used to think that the

univalence axiom makes a difference only for types that are not

sets. But actually, for example, the univalence axiom is needed (in

the absence of the K axiom) to prove that the type of ordinals is a

set.)

* One example: object classifiers, subtype classifiers, ...

* Sometimes the univalence axiom may be *convenient* but *not needed*.

I guess that Kevin is, in particular, saying precisely that. In all

cases where he needs to transport constructions along isomorphisms,

he is confident that this can be done without univalence. And I

would agree with this assessment.
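The reformulation of univalence as a property of equivalences mentioned above can also be packaged as an induction principle. A hedged Agda sketch, postulating the principle rather than deriving it, and with a deliberately cut-down `_≃_` that omits the inverse laws:

```agda
{-# OPTIONS --without-K #-}
module EquivInduction where

-- A cut-down notion of equivalence: just the two maps, with the
-- inverse laws omitted to keep the sketch short.
record _≃_ (A B : Set) : Set where
  field
    to   : A → B
    from : B → A

idEquiv : (A : Set) → A ≃ A
idEquiv A = record { to = λ a → a ; from = λ a → a }

-- Univalence packaged as "equivalence induction": to prove a
-- property of all equivalences, it is enough to prove it for the
-- identity equivalence of each type.
postulate
  ≃-ind : (P : (A B : Set) → A ≃ B → Set₁)
        → ((A : Set) → P A A (idEquiv A))
        → (A B : Set) (e : A ≃ B) → P A B e
```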

Best,

Martin

Nov 26, 2019, 2:53:19 PM11/26/19

to Martín Hötzel Escardó, Homotopy Type Theory

Nicolas originally asked

"So, to put this into one concrete question, how (if at all) is HoTT-Coq

more practical than Mizar for the purpose of formalizing mathematics,

outside the specific realm of synthetic homotopy theory?"


One issue with this question is that different people mean different things by "mathematics". As someone who was until recently an outsider to formalising but very much a "working mathematician", I think I've made it pretty clear on this forum and others that the majority of people in e.g. my maths department would say that none of the systems have really got anywhere concerning modern-day mathematics, which is the kind of mathematics that the majority of mathematicians working in maths departments are interested in. This is a dismal state of affairs and one which I would love to help change, but it needs a cultural change within mathematics departments.

The Mizar stuff I've seen has mostly been undergraduate level mathematics. UniMath seems to be a massive amount of category theory and not much undergraduate level mathematics at all. On the other hand, "there is a lot more category theory in the HoTT systems than in Mizar, so maybe HoTT-Coq is more practical than Mizar for category theory" might be an answer to the original question.

I'm interested in algebraic geometry, and in algebraic geometry there is this convention (explicitly flagged in Milne's book on etale cohomology) that the notation for "canonical isomorphism" (whatever that means) is "=". One can call this "mathematical equality" and it comes with a warning that it is not well-defined. I think one reason it's a challenge to formalise modern algebraic geometry is that none of the systems seem to have this kind of equality (perhaps because all of the systems demand well-defined definitions!). What I meant by my earlier post was that the `=` I have run into in Lean's type theory is too weak (canonically isomorphic things can't be proved to be equal) but the one I have run into in HoTT seems too strong (non-canonically isomorphic things are equal).

It would not surprise me if different areas of mathematics had different concepts of equality. Mathematicians seem to use the concept very fluidly. Maybe it is the case that the areas where "mathematical equality" is closer to e.g. Lean's `=` or Mizar's `=` or UniMath's `=` or Isabelle/HOL's `=` or whatever, will find that formalising goes more easily in Lean's type theory/Mizar/HoTT/HOL or whatever. The biggest problem with this conjecture is that equality is mostly a cut-and-dried thing at undergraduate level, and there is not enough harder maths formalised in any of these systems (obviously there are several monumental theorems but I'm dreaming about far greater things) for us to start to check. Note that people like Riemann had no idea about set theory but did a lot of complex analysis, and he really only needed to worry about equality of complex numbers and equality of functions from the complexes to the complexes, so I don't think it's a coincidence that a whole bunch of complex analysis is done in Isabelle/HOL, which seems to me to be the "simplest logic" of the lot. Probably all the systems model this equality well.

Kevin


Nov 26, 2019, 3:40:48 PM11/26/19

to Homotopy Type Theory

On Tuesday, 26 November 2019 19:53:19 UTC, Kevin Buzzard wrote:

What I meant by my earlier post was that the `=` I have run into in Lean's type theory is too weak (canonically isomorphic things can't be proved to be equal) but the one I have run into in HoTT seems too strong (non-canonically isomorphic things are equal).

Given a *specific* isomorphism, in HoTT you get a *specific* element of the identity type.

Let me say this explicitly: equality in HoTT is not truth-valued. It collects all the possible ways to identify things. In the case of equality of types, the identity type collects the equivalences between them (rather than being the truth value expressing whether they are equivalent).

In fact, canonicity is at the heart of the univalence axiom: there is a canonical one-to-one correspondence between elements of the equality type (i.e. the identity type) and elements of the type of equivalences.

So, the univalence axiom just says that the elements of the good, old identity type can be understood to be precisely the equivalences.
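Schematically (my notation; this is just the standard formulation, not anything specific to this thread): univalence says that the canonical map sending reflexivity to the identity equivalence is itself an equivalence:

```latex
% idtoeqv : (A =_U B) -> (A ~ B) sends refl_A to the identity equivalence;
% the univalence axiom asserts that idtoeqv is an equivalence, hence
(A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B).
```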

You may wonder what the bonus of this is. This is the content of my last two bullet points in my message.

Martin

Nov 26, 2019, 5:18:49 PM11/26/19

to Kevin Buzzard, Martín Hötzel Escardó, Homotopy Type Theory

Personally, I'm doubtful that one can ascribe any precise meaning to

"canonical isomorphism", and therefore doubtful that one will be able

to implement a computer proof assistant that can distinguish a

"canonical isomorphism" from an arbitrary one. It's certainly worth

thinking about, but my personal opinion is that instead of designing a

proof assistant to match the way mathematicians think about "canonical

isomorphisms", mathematicians are going to need to change the way they

think. But it's just a question of going all the way down a road that

they've already taken one step along.

In formal systems like ZFC and Lean, almost no isomorphisms are

equalities. Being willing to treat "canonical" isomorphisms as

equalities is a first step into the brave new world of homotopy theory

and higher category theory, but it's hard to find a place to stand

when you've only taken that one step. In trying to make things

precise, I think one feels inexorably pulled to the more consistent,

and formalizable, position that *all* isomorphisms should be treated

as equalities. It's not the same notion of equality, as Martin says,

but it's a better replacement for it, which in particular can do

everything that the old notion of equality could do when used

"correctly" (e.g. equality of numbers and functions).


Nov 26, 2019, 7:16:12 PM11/26/19

to Michael Shulman, Kevin Buzzard, Martín Hötzel Escardó, Homotopy Type Theory

Dear Michael,

You wrote:

<<Personally, I'm doubtful that one can ascribe any precise meaning to

"canonical isomorphism", and therefore doubtful that one will be able

to implement a computer proof assistant that can distinguish a

"canonical isomorphism" from an arbitrary one. >>

The notion of canonical isomorphism depends obviously on the context.

For example, the "canonical isomorphism"

(AxB)xC = Ax(BxC)

is likely to be the associativity isomorphism.
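On the level of sets, the associativity isomorphism and its inverse are easy to write down and to check by a round trip. A minimal Python sketch (function names are mine, purely for illustration):

```python
# The associativity isomorphism (A x B) x C -> A x (B x C) on nested
# pairs, together with its inverse; the asserts check the round trip.
def assoc(p):
    (a, b), c = p
    return (a, (b, c))

def assoc_inv(p):
    a, (b, c) = p
    return ((a, b), c)

x = ((1, 2), 3)
assert assoc(x) == (1, (2, 3))
assert assoc_inv(assoc(x)) == x
```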

Maybe AI (deep learning) could be trained

to recognise the correct canonical isomorphism from the context

(=the proof). It may automatise

what is already automatic in our thinking.

Best,

André

________________________________________

From: homotopyt...@googlegroups.com [homotopyt...@googlegroups.com] on behalf of Michael Shulman [shu...@sandiego.edu]

Sent: Tuesday, November 26, 2019 5:18 PM

To: Kevin Buzzard

Cc: Martín Hötzel Escardó; Homotopy Type Theory

Subject: Re: [HoTT] Why did Voevodsky find existing proof assistants to be 'impractical'?

To view this discussion on the web visit https://groups.google.com/d/msgid/HomotopyTypeTheory/CAOvivQyAaJnLeJZHCMqWAu3f%2BOb72%2BMSSLmb8MjkBHoH05k_Rw%40mail.gmail.com.

Nov 26, 2019, 8:41:17 PM11/26/19

to Homotopy Type Theory

On Tuesday, November 26, 2019 at 4:18:49 PM UTC-6, Michael Shulman wrote:

Personally, I'm doubtful that one can ascribe any precise meaning to

"canonical isomorphism",

One standard example of a non-canonical isomorphism is that between a finite dimensional

vector space V and its dual V', so perhaps one may define a non-canonical isomorphism

to be one that one merely has, and a canonical isomorphism is one that one actually has.

Nov 26, 2019, 9:28:13 PM11/26/19

to Joyal, André, Michael Shulman, Kevin Buzzard, Martín Hötzel Escardó, Homotopy Type Theory

> Maybe AI (deep learning) could be trained to recognise the correct

> canonical isomorphism from the context (=the proof).

Can't we define it more formally as the isomorphism with the smallest proof?

Stefan

Nov 27, 2019, 3:22:52 AM11/27/19

to Homotopy Type Theory

At 2019-11-26T14:18:34-08:00, Michael Shulman wrote:

> Personally, I'm doubtful that one can ascribe any precise meaning to

> "canonical isomorphism", and therefore doubtful that one will be able

> to implement a computer proof assistant that can distinguish a

> "canonical isomorphism" from an arbitrary one.

I have usually assumed that canonical isomorphism meant functorial

isomorphism, i.e., an isomorphism between an appropriate pair of

functors (which are generally left unspecified). As has been mentioned,

there is no functorial isomorphism from an arbitrary finite-dimensional

vector space V to its dual, whereas the usual isomorphism from V to

double dual is functorial, so the former is non-canonical, and the

latter canonical.
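For what it's worth, the functoriality in question can be made fully explicit: the evaluation map into the double dual is natural in V (standard linear algebra; notation mine):

```latex
% ev_V : V -> V^{**} is defined by ev_V(v)(\varphi) := \varphi(v).
% Naturality in V: for every linear map f : V -> W,
\mathrm{ev}_W \circ f \;=\; f^{**} \circ \mathrm{ev}_V,
\qquad\text{where } f^{**}(\Phi) := \Phi \circ f^{*}.
```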

Another explanation I've often come across is that a canonical object is

independent of the choices (if any) that were made in its definition. I

guess the notion of independence depends on the context.

Raghu.

--

N. Raghavendra <ra...@hri.res.in>, http://www.retrotexts.net/

Harish-Chandra Research Institute, http://www.hri.res.in/

Nov 27, 2019, 5:12:43 AM11/27/19

to Martín Hötzel Escardó, Homotopy Type Theory

> So writing "x = y" for the identity type is a bit perverse.
>
> People may say, and they have said "but there is no other sensible
> notion of equality for such type theories".
>
> That may be so, but because, in any case, *it is not the same
> notion of equality*, we should not use the same symbol.

Two mathematical objects are equal if they have the same properties. This is exactly what equality means in HoTT, and hence we should use the same symbol and call it equality.
Everything else is confusing, especially for the newcomer: “we have something in HoTT which is basically equality, but we call it by a different name and we use a different symbol”???


Yes, it is different because equality of structures is not a proposition but a structure. But to insist that equality of structures is a proposition is just a confusion of conventional Mathematics.


Independent of conventional Mathematics, there is something like a naïve idea of equality which is preformal. And equality in HoTT is just a way to make this preformal notion
precise. And why is it so strange to say there is more than one way in which objects can be equal?


As far as the notion of “canonical isomorphism” that Kevin is interested in is concerned, I think this is an intensional aspect of an equality, and hence not expressible within
HoTT. E.g. 3+4 and 7 are the same natural number, but 7 is canonical.
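This arithmetic example can even be checked mechanically. In Lean, for instance, the two sides are definitionally equal, with 7 the canonical numeral that the kernel normalises to (a minimal sketch, not from the thread):

```lean
-- 3 + 4 and 7 denote the same natural number; `rfl` succeeds because
-- the kernel reduces both sides to the same canonical form.
example : 3 + 4 = 7 := rfl
```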


Thorsten


From: <homotopyt...@googlegroups.com> on behalf of Martín Hötzel Escardó <escardo...@gmail.com>

Date: Tuesday, 26 November 2019 at 19:15

To: Homotopy Type Theory <homotopyt...@googlegroups.com>

Subject: Re: [HoTT] Why did Voevodsky find existing proof assistants to be 'impractical'?



Nov 27, 2019, 11:37:44 AM11/27/19

to Thorsten Altenkirch, Martín Hötzel Escardó, Homotopy Type Theory

I find the wide variety of opinions as to the meaning of "canonical"

to be evidence for my contention that the notion is not formalizable.

(-:

Personally, I feel that the closest semiformal approximation to a

"canonical" isomorphism is one that's declared as a coercion.


Nov 27, 2019, 3:21:59 PM11/27/19

to Kevin Buzzard, Homotopy Type Theory

Thank you Kevin, for taking the time to write this very detailed response.

After taking a look at your original approach and the improved one of

your student, I am under the impression that the difficulties you

encountered actually had little to do with the proof assistant you were

working in. Perhaps this impression is just uninformed, and e.g. in a

system based on cubical type theory these problems would disappear. Of

course, the only way to tell is to go ahead and formalize this result in

such a system.

But to me it seems that the "commutative diagram hell" you faced really

stems from an issue that already exists within mathematics, namely the

two conflicting viewpoints on localization: viewing it as a property

(the honest viewpoint) or viewing it as an operation (the convenient

illusion). As far as I understood, in your original approach you proved

the "standard affine covering lemma"

(https://stacks.math.columbia.edu/tag/00EJ) under the operational

viewpoint, making it nontrivial to apply this lemma to "nonstandard"

localizations, whereas your student proved the lemma under the property

viewpoint, i.e. for "all" localizations at the same time.

To a "proper" mathematician this distinction must seem silly, and if he

doesn't outright dismiss it as fruitless philosophical nonsense, he will

most likely claim that one can safely pretend it doesn't exist due to

"existence of canonical isomorphisms yada yada yada".

And yet, even the (presumably "proper") author(s) of

https://stacks.math.columbia.edu/tag/01HR somehow felt the need to

justify the application of lemma 10.23.1 with a sentence like

"[...] and hence we can rewrite this sequence as the sequence [...]",

which seems to admit the issue. Thus instead of pathologies, one may

view the difficulties arising in formalization as merely pointing out

existing gaps in the proper mathematician's operational theory, i.e. in his claim that he

can in principle fill out all the details in his proofs to arbitrary

precision.

This problem and its solution remind me of something I encountered a few

years ago, when I tried to convince myself of the well-known fact that

the abstract braid group

B(W) := < T_w for w \in W | \ell(w) + \ell(w') = \ell(ww') \Rightarrow T_w T_{w'} = T_{ww'} >

attached to a finite reflection group W can be viewed as the fundamental

group of its complexified hyperplane complement X. My ambition was only

to explicitly construct the map

\rho: B(W) --> \pi_1(X,x_0),

but even that turned out to be more difficult than expected.

The first naive attempt---of writing down explicit representative paths

in the classes \rho(T_w) and proving that these satisfy the "braid

relations" up to homotopy---was completely impractical. In fact, even

writing down representative paths seemed not merely difficult but

hopeless. Eventually I realized that the solution is to attach to a

generator T_w not a representative (operational viewpoint) but a

canonical subspace of the path-space (property viewpoint), showing that

the latter is contractible and that the collection of these subspaces

satisfy the braid relations (in the obvious sense). After that

realization, everything was completely straightforward.

--

Nicolas

Am 24.11.19 um 19:11 schrieb Kevin Buzzard:

