Why do we need judgmental equality?


Felix Rech

Jan 30, 2019, 6:54:07 AM
to Homotopy Type Theory
In section 1.1 of the HoTT book it says "In type theory there is also a need for an equality judgment." Currently it seems to me that one could, in principle, replace substitution along judgmental equality with explicit transports, if one added a few sensible rules to the type theory. Is there a fundamental reason why the equality judgment is still necessary?
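For concreteness, here is a small sketch in Lean 4 (whose checker does use judgmental equality) of the kind of replacement I have in mind; the example itself is just an illustration:

```lean
-- `2 + 2` reduces to `4`, so `P (2 + 2)` and `P 4` are judgmentally
-- equal, and a proof of one is silently a proof of the other:
example (P : Nat → Prop) (h : P 4) : P (2 + 2) := h

-- In a theory without judgmental equality one would instead
-- transport explicitly along a propositional equality:
example (P : Nat → Prop) (h : P 4) : P (2 + 2) :=
  (by decide : (4 : Nat) = 2 + 2) ▸ h
```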

Thanks,
Felix Rech

Matt Oliveri

Feb 5, 2019, 6:00:24 PM
to Homotopy Type Theory
The type checking rules are nonlinear (they reuse metavariables). For example, for function application, the type of the argument needs to "equal" the domain of the function. What equality is that? It gets called judgmental equality. It's there in some sense even if it's just syntactic equality. But it seems really, really hard for judgmental equality to be just syntactic equality in a dependent type system. It would also be unnatural from a computational perspective; the judgmental equations say something about the computational behavior of the system.
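A minimal Lean 4 illustration of that nonlinearity in the application rule (the concrete example is mine, not anything canonical):

```lean
-- For `g a` to typecheck, the type of `a` must match the domain of
-- `g` up to judgmental equality. Here the checker has to identify
-- the types `Fin (2 + 2)` and `Fin 4`:
def g : Fin 4 → Nat := Fin.val

example : Nat := g (⟨0, by decide⟩ : Fin (2 + 2))
```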

Anders Mörtberg

Feb 5, 2019, 11:13:35 PM
to Homotopy Type Theory
Note that judgmental equality is not only a convenience; it also affects what is provable in your type theory. Consider the interval HIT: it is contractible and hence equivalent to Unit. But it also lets you prove function extensionality, which you definitely don't get from the Unit type.
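To sketch the standard argument (in Lean 4 syntax; note that Lean can only postulate the interval with *propositional* computation rules, so the key steps live in comments rather than checked code):

```lean
-- The interval, postulated. In Lean, the computation rules for a
-- postulated recursor can only be *propositional*, which is exactly
-- the gap that the judgmental version fills.
axiom I : Type
axiom I.zero : I
axiom I.one : I
axiom I.seg : I.zero = I.one
axiom I.rec : {B : Type} → (b₀ b₁ : B) → b₀ = b₁ → I → B

-- Sketch: given f g : A → B and h : ∀ x, f x = g x, define
--   k : I → A → B := fun i x => I.rec (f x) (g x) (h x) i
-- With *judgmental* computation rules, k I.zero ≡ f and k I.one ≡ g
-- (using beta and function eta), so congrArg k I.seg : f = g.
-- With only propositional computation rules, "k I.zero = f" is itself
-- a funext-shaped statement, and the argument breaks down.
```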

--
Anders

Martín Hötzel Escardó

Feb 8, 2019, 4:19:24 PM
to Homotopy Type Theory
I would also like to know an answer to this question. It is true that dependent type theories have been designed using definitional equality.

But why would anybody say that there is a *need* for that? Is it impossible to define a sensible dependent type theory (say, for the purpose of serving as a foundation for univalent mathematics) that doesn't mention anything like definitional equality? If not, why not? And notice that I am not talking about the *usability* of a proof assistant such as the *existing* ones (say Coq, Agda, Lean) if definitional equalities were removed. I don't care if such hypothetical proof assistants, for a dependent type theory lacking definitional equalities (if such a thing exists), would be impossibly difficult to use.

The question asked by Felix is a very sensible one: why is it claimed that definitional equalities are essential to dependent type theories?

(I do understand that they are used to compute, and so if you are interested in constructive mathematics (like I am) then they are useful. But, again, in principle we can think of a dependent type theory with no definitional equalities and instead an existence property, as e.g. in Lambek and Scott's "Introduction to Higher-Order Categorical Logic", and as was discussed in a relatively recent message by Thierry Coquand on this list.)

Martin 

Valery Isaev

Feb 8, 2019, 6:32:15 PM
to Martín Hötzel Escardó, Homotopy Type Theory
If you are not interested in computation and the convenience of your type theory, then definitional equality is not essential, in the sense that every type theory T is equivalent to a type theory Q(T) with no computational rules. Now, what do I mean when I say that type theories T and Q(T) are equivalent? I won't give the formal definition here, but the idea is that Q(T) can be interpreted in T and, for every type A of T, there is a type in Q(T) equivalent to A in T, and the same is true for terms. This implies that every statement (i.e., type) of Q(T) is provable in Q(T) if and only if it is provable in T, and every statement of T has an equivalent statement in Q(T), so the theories are "logically equivalent". Moreover, equivalent theories have equivalent (in an appropriate homotopical sense) categories of models.

Regards,
Valery Isaev


--
You received this message because you are subscribed to the Google Groups "Homotopy Type Theory" group.
To unsubscribe from this group and stop receiving emails from it, send an email to HomotopyTypeThe...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Nicolai Kraus

Feb 8, 2019, 8:30:33 PM
to Martín Hötzel Escardó, Homotopy Type Theory
I agree with everything Martin said below, but I want to go a step further and argue that judgmental equality is potentially holding us back, in terms of meta-theoretical results AND usability of the type theory.

I can understand that, for programming, we want built-in computation. However, for a foundational system, I think we should strive for something as minimalistic as possible without arbitrary additional features. 

It might be a bit controversial to call judgmental equality an "arbitrary additional feature", but there is certainly some arbitrariness involved (why else would Sigma have judgmental eta in some systems and not in others?). It seems to me that the cleanest, most minimalistic, and most canonical choice would be a type theory without judgmental equalities *at all*. This would also be much more in line with the approach of category theory, and I expect that it could make the lives of those of us working with models easier. Is there a deep theoretical reason for the necessity of judgmental equality?

Of course, we want convenience and usability. I just fail to see why this means that judgmental equality has to be a built-in thing (one could say: why does it have to be part of the core). It's possible that, if we do *not* insist on built-in judgmental equality, we could even get *more* convenience.

What I mean is this: Let's write WTT for a "weak type theory" without any judgmental equalities (since we're doing HoTT, maybe with some axioms such as univalence, but let's not specify this here). Let's say we know that a certain set A of judgmental equalities does not change WTT, i.e. that (WTT+A) is a conservative extension of WTT (conservative in a constructive sense). We could now have a proof assistant which uses WTT as core but which allows us to "load" the conservativity proof of A. Then, at the beginning of a module, a user can tell the proof assistant that in this module, they pretend that the type theory is (WTT+A); i.e. for the user, it looks as if they are working in (WTT+A) but, in the background, the proof assistant translates everything to WTT. In another module, the user can pretend they are working in (WTT+B) instead, if (WTT+B) is conservative over WTT as well.

The idea here is that some proofs benefit more from one set of judgmental equalities, while other proofs benefit more from others. Being able to choose which equalities one wants could lead to increased convenience. (I think WTT+A+B is not necessarily conservative over WTT, so we cannot just always assume all equalities for maximum convenience.)

For example, let's consider a cubical type theory. I'm aware that we don't really know many conservativity properties of cubical type theories yet, but I would be interested in knowing some. Maybe some cubical type theory gives us a set C which our WTT-based proof assistant can load. In other words, instead of saying "hey, we have found a new cubical type theory with amazing computational behaviour, an implementation is on GitHub", people could say "hey, we have found a new set of judgmental equalities, on GitHub is the conservativity proof that you can load into your proof assistant to use it".

I don't think this is something that we will manage to get anytime soon, and I find it very difficult in general to find conservativity results, but is there a theoretical reason which makes this impossible?

Nicolai


--

Nicolai Kraus

Feb 8, 2019, 8:41:39 PM
to Valery Isaev, Homotopy Type Theory
Hi Valery,

On Fri, Feb 8, 2019 at 11:32 PM Valery Isaev <valery...@gmail.com> wrote:
Now, what do I mean when I say that type theories T and Q(T) are equivalent? I won't give here the formal definition

Would it be correct to say that T is a conservative extension of Q(T), in the sense of Martin Hofmann's thesis? Your description sounds a bit like this, or do you have something different in mind?
Nicolai

Jon Sterling

Feb 8, 2019, 9:01:20 PM
to 'Martin Escardo' via Homotopy Type Theory
Hi Valery,

I'm trying to square what you said with what Anders said. Consider extending MLTT with an interval in two different ways:

1. With judgmental computation rules for the recursor
2. With propositional computation axioms for the recursor

I do not expect to obtain a conservativity result for the first version over the second, since the first one derives function extensionality, and the second one does not (afaict).

Can you give a bit more detail about how this algebraic power move that you are describing works, and whether it applies in this case?

Thanks!
Jon

Valery Isaev

Feb 9, 2019, 3:05:07 AM
to Nicolai Kraus, Homotopy Type Theory
Hi Nicolai,

The notion of a conservative extension is certainly related to the notion of an equivalence of type theories that I'm referring to (I call it Morita equivalence). Hofmann defines various properties of the interpretation |-| : ITT -> ETT, which are weaker than Morita equivalence in general. The statement of Theorem 3.2.5 is almost equivalent to the notion of Morita equivalence. We just need to replace "|P'| = P" with "|P'| and P are propositionally equivalent" and add the condition that, for every \Gamma |-_I and every |\Gamma| |-_E A there is a type \Gamma |-_I A' such that |A'| and A are equivalent in the sense of HoTT (the latter condition follows from Theorem 3.2.5 for ITT and ETT, but not for general type theories). Then we get precisely the notion of Morita equivalence between type theories I and E.

Regards,
Valery Isaev



Valery Isaev

Feb 9, 2019, 3:17:21 AM
to Jon Sterling, 'Martin Escardo' via Homotopy Type Theory
Hi Jon,

It is not true that Q(T) can be obtained from T simply by replacing judgmental equalities with propositional ones (I think this should be true for some theories, but certainly not for all).

So, if we denote the theory with an interval and judgmental rules by T_I and the theory with propositional axioms by T_a, then Q(T_I) is equivalent to T_I and Q(T_a) is equivalent to T_a, but Q(T_I) is not equivalent to T_a. Theories T_a and Q(T_a) are actually equivalent to ordinary MLTT (since we simply add another contractible type to it).

Function extensionality is provable in both T_I and Q(T_I). It follows from the existence of the interval type in T_I, and it can be added as an axiom to Q(T_I). I believe that the theories T_I and Q(T_I) are equivalent to MLTT + function extensionality, but I don't know how to prove this yet.

Regards,
Valery Isaev



Thomas Streicher

Feb 9, 2019, 6:38:17 AM
to Martín Hötzel Escardó, Homotopy Type Theory
Working without any judgemental equality was the aim of the original
LF where elements of a type A correspond to formal derivations of A in
abstract syntax. Also Isabelle works a bit like that.

So with a modicum of judgemental equality, one can use dependent types as a syntax for formal derivations. But, of course, this is absolutely useless when you want to execute your proofs!

Thomas

Felix Rech

Feb 9, 2019, 6:53:15 AM
to Homotopy Type Theory
Thank you all for the discussion!

One of the motivations for my question was that I actually expect usability benefits from working in a dependent type theory without judgmental equality that has good proof-assistant support. There are primarily two reasons for this:
  1. Judgmental equalities cannot be taken as assumptions. If one wants to use judgmental equalities, one has to give concrete definitions that satisfy those equalities and cannot hide the details of the definitions. This makes it impossible to change definitions later on without breaking constructions that depend on them.
  2. In nontrivial definitions, judgmental equalities seem more arbitrary than natural. If we define addition of natural numbers, for example, then we can choose between x + 0 = x and 0 + x = x as the judgmental equality, but we cannot have both. This makes it hard to find the right definitions and to predict their behavior.
Another motivation was, of course, that it would simplify the implementation of proof checkers and parts of the metatheory.
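Point 2 can be seen concretely in Lean 4, where the choice happens to fall on x + 0 = x:

```lean
-- Lean 4's Nat.add recurses on its second argument, so only one of
-- the two unit laws holds judgmentally:
example (x : Nat) : x + 0 = x := rfl             -- definitional
-- example (x : Nat) : 0 + x = x := rfl          -- rejected by the checker
example (x : Nat) : 0 + x = x := Nat.zero_add x  -- needs induction
```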

I would appreciate any comments on this.

Felix Rech

Feb 9, 2019, 6:55:55 AM
to Homotopy Type Theory
On Wednesday, 6 February 2019 05:13:35 UTC+1, Anders Mörtberg wrote:
Consider the interval HIT, it is contractible and hence equivalent to Unit. But it also lets you prove function extensionallity which you definitely don't get from the Unit type.

I have the feeling that function extensionality is special, in the sense that the proof I know of, as well as the proof of function extensionality in ETT, relies on the fact that we can replace judgmentally equal terms under lambdas, which is like function extensionality for judgmental equality.

Felix Rech

Feb 9, 2019, 6:57:51 AM
to Homotopy Type Theory
On Friday, 8 February 2019 22:19:24 UTC+1, Martín Hötzel Escardó wrote:
I do understand that they are used to compute, and so if you are interested in constructive mathematics (like I am) then they are useful.

I agree that the ability to compute canonical forms of terms is essential, but one does not need to integrate computation into the type system to do that. At least, one can always interpret the terms of a type theory without an equality judgment in a stronger type theory that has judgmental equalities, and do the computations there.

Thorsten Altenkirch

Feb 9, 2019, 7:30:07 AM
to Felix Rech, Homotopy Type Theory

Hi,

 

What we need is a strict equality on all types. If we stated the laws of type theory using only the equality type, we would also need to add coherence laws. Since I would include the laws for substitution (I never understood why substitution is treated differently from application), this would include the laws for infinity categories, and it would certainly make even basic type theory much more complicated, if not unusable. Instead, one introduces a 2-level system with strict equality on one level and weak equality on the other. For historic and pragmatic reasons, this is combined with the computational aspects of type theory, which are expressed as judgemental equality. However, there are reasons to separate these concerns; e.g., to work with higher-dimensional constructions in type theory, such as semi-simplicial types, it is helpful to work with hypothetical strict equalities (see our paper: http://www.cs.nott.ac.uk/~psztxa/publ/csl16.pdf).

 

I do think that the computational behaviour of type theory is important too. However, this can be expressed by demanding a form of computational adequacy, that is, that for every term there is a strictly equal normal form. It is not necessary that strict equality in general be decidable (indeed, different applications of type theory may demand different decision procedures).

 

Thorsten






Martín Hötzel Escardó

Feb 9, 2019, 7:39:46 AM
to Homotopy Type Theory
Yes, this is what I meant by the sentence that follows the one quoted
above, which I quote below. Also, you don't need to compute by
reducing equalities. In fact, most functional languages (e.g. Haskell)
when compiled *do not* compute by unfolding definitional equalities.

On 08/02/2019 21:19, Martín Hötzel Escardó wrote:
> (I do understand that they are used to compute, and so if you are
> interested in constructive mathematics (like I am) then they are useful.

Thorsten Altenkirch

Feb 9, 2019, 8:29:28 AM
to Thomas Streicher, Martín Hötzel Escardó, Homotopy Type Theory
Even LF included beta-eta, and hence has a non-trivial judgemental equality. Not to mention the equations for substitution.

Thorsten

Théo Winterhalter

Feb 9, 2019, 8:40:26 AM
to Homotopy Type Theory
Hi,

To follow up on the idea of 2-level type theory: it indeed seems enough to have a strict equality (meaning with UIP and funext) to pull off an interpretation of weakTT.
In fact, I have updated and formalised the work of Nicolas Oury on translating Extensional Type Theory into Intensional Type Theory, and extended the proof so that it becomes an instance of a translation from an extensional 2-level type theory into an intensional one. In this work, the treatment of conversion in the target seems to be orthogonal to the result itself. With Simon Boulier, we plan to tweak the formalisation a bit so that it can target weakTT (and we have already conducted informal reasoning to justify this). You can thus give an interpretation of a theory into its weak version, provided you have UIP, funext, and enough equalities to interpret, for instance, β-reduction.

I hope this is clear.

Théo


Nicolai Kraus

Feb 9, 2019, 9:05:11 AM
to Felix Rech, Homotopy Type Theory
On Sat, Feb 9, 2019 at 11:53 AM Felix Rech <s9fe...@gmail.com> wrote:
One of the motivations for my question was that I actually expect usability benefits if one worked in a dependent type theory without judgmental equality that has good support by a proof assistant.

Yes, me too (but I think *a lot* of work needs to be done before we can have a proof assistant based on this idea which provides better usability than the current ones).
I agree with your points. But I think "x + 0 = x versus 0 + x = x" is an example where I'd expect that one should be able to produce a conservativity proof and use both at the same time instead of choosing one. I think it's not necessary that we restrict ourselves to computation rules that come from actual definitions; anything that is "constructively conservative" over a weak theory should be allowed. In Agda, one can have additional computation rules, but it's not a safe feature.
Nicolai



Gabriel Scherer

Feb 9, 2019, 9:16:14 AM
to Nicolai Kraus, Felix Rech, Homotopy Type Theory
A relevant citation for this line of thinking (if we reduce implicit computation, can we gain usability some other way?) is the work on the Zombie programming language, which tried to weaken beta-reduction in a dependent language (restricting it to values only, as in F*) to obtain a more automatic form of reasoning up to congruence (definitional equalities can be put in the context and used from there).

  Programming up to Congruence
  Vilhelm Sjöberg and Stephanie Weirich, 2015

Jon Sterling

Feb 9, 2019, 9:44:39 AM
to 'Martin Escardo' via Homotopy Type Theory
Hi all,

I think there are two different issues being conflated -- there is the question of whether definitional equalities can be removed in order to obtain a more direct interpretation into weaker models, and then there is the question of whether definitional equalities can be removed as a way to make a proof assistant more modular, exploiting conservativity results locally to add definitional equalities.

The first seems more interesting to me, and is indeed the topic of Curien's famous paper. Many interesting coherence theorems lurk underneath the definitional equalities that we take for granted, and we can only see them if we treat definitional equalities as defeasible.

The issue about proof assistants seems quite different to me -- the approach that seems to be proposed by some of the people in the room, to replace judgmental equality with a good enough equality *type* seems perfectly good for studying algebra and models (for instance, the discipline of QIITs in the presence of [semantically] extensional equality types seems like a good tool for dependently-sorted algebraic theories). But this approach is not likely to yield *proof assistants* that are more usable than existing systems which have replaced definitional equality with propositional equality.

I am referring to Nuprl -- usually Nuprl is characterized as using equality reflection, but this is not totally accurate (though there is a sense in which it is true). When I say "definitional equality" for a formalism, what I mean is that if I have a proof object D of a judgment J, and J is definitionally equal to J', then D is also already a proof of J'. Definitional equality is the silent equality. In most formalisms, definitional equality includes some combination of alpha/beta/eta/xi, but in Nuprl it includes ONLY alpha. What I mean is that, in a proof object for Nuprl, the biggest relation by which you can "perturb" a judgment without needing to update the proof object is alpha equivalence. Proof objects must NOT be confused with realizers, of course -- just as we do not confuse Coq proofs with the OCaml code that they could be extracted to.

So how does Nuprl work? "Real" equality is captured purely using the equality type, and not using any judgment except membership in the equality type (a judgment whose derivations are crucially taken only up to alpha, rather than up to beta/eta/xi). When you want to reason using beta/eta/xi (and the much richer equations that are also available), you use the equality type and its elimination rules directly, essentially in the same way that Thorsten has advocated.

Nuprl is in essence what it looks like to remove all definitional equalities and replace them with internal equalities. The main difference between Nuprl and Thorsten's proposal is that Nuprl's underlying objects are untyped -- but that is not an essential part of the idea.

The reason I bring this up is that the question of whether such an idea can be made usable, namely using a formalism with only alpha equivalence regarded silently/definitionally, and all other equations mediated through the equality type, can be essentially reduced to the question of whether Nuprl is practical and usable, or whether it is possible to implement a version of that idea which is practical and usable.

As many of the people on this list are aware, in the years past I was very bullish on this idea, but now after having implemented two separate proof assistants based on this idea, I am not as optimistic about it as I was (but I don't rule it out). Today, I have changed my methodology and embraced strong definitional equality, in order to achieve my actual goals during the very finite historical period of my PhD -- to build usable proof assistants for formalizing mathematics.

Best,
Jon





Michael Shulman

Feb 9, 2019, 3:34:22 PM
to homotopyt...@googlegroups.com
From a semantic point of view, I agree that judgmental equality is
about convenience and usability. But that shouldn't be regarded as
making it any less important. Indeed, the whole *point* of using type
theory as an internal language for a category, rather than simply
reasoning about the category directly, is... convenience and
usability. Of course there are degrees of convenience, and even type
theory without judgmental equality might be slightly more convenient
than reasoning directly in categorical language. But I think we need
judgmental equality if we really want to get some substantial benefit
out of type theory as an internal language. The problem of course is
to bridge the gap between the type theory and the category theory, and
for that purpose intermediate structures may be useful: categories
that are more like type theory (such as tribes and contextual
categories) and type theories that are more like category theory (such
as ones with fewer judgmental equalities). But I don't think I would
want to regard the intermediate structures as fundamental, rather than
tools to get from one side to another.

From an implementation point of view, I agree that in the long run we
should have proof assistants that don't hardcode a fixed set of
judgmental equalities. But I don't think that means eliminating all
judgmental equalities; it just means making the logical framework more
flexible, such as Agda's ability to postulate new reductions or
Andromeda's framework with equality reflection. In particular, the
new equalities that we postulate should still be *substitutive* (as
Jon says, allowing one to perturb a judgment without altering the
proof object) rather than *transportive* (requiring the proof object to be
altered) -- I think Vladimir was the one who suggested words like
those.

From a computation point of view, of course some kind of reduction is
necessary, and it makes sense to consider an object to be "equal" in
some sense to the result of computing it. This would presumably be a
stronger sense than typal equality inside the system, whether or not
it is actually substitutive. Conversely, in order to implement a
substitutive equality it seems very useful to have some algorithm that
can at least go some part of the way towards deciding it, and
computation is likely to play an important role in such an algorithm.

From a philosophical point of view, I think definitional/substitutive
equality is very important: as Steve Awodey pointed out some time ago,
it represents the intensional "equality of sense" rather than the
extensional "equality of reference". This explains why it is
substitutive: if two terms have the same sense, there is no
explanation necessary for why a statement about one also applies to
the other, while if they only have the same reference, then some
explanation may be required. If I say "I saw the morning star this
morning" and you reply "yes, and I saw it last night", it sounds weird
because there is a transport necessary along the equality of reference
"morning star = evening star", in a way that doesn't happen if instead
you reply "yes, and I saw it yesterday morning". Foundations for
mathematics based on first-order logic, like ZFC, have no ability to
talk about equality of sense; the only equality they know about is
reference. A type theory without definitional equality would have the
same limitation.

Judgmental, definitional, substitutive, and computational equalities
are not exactly the same thing. But the fact that there are so many
different but related points of view on similar and overlapping
concepts, and so many different but related uses and applications for
them, suggests to me that there is an important underlying
mathematical concept that should not lightly be discarded. On the
other hand, the proof of the pudding is in the eating, and this whole
discussion would be much more concrete if we had some actual
references working out what type theory without judgmental equality
looks like, with proofs of whatever properties it may be claimed to
have, and perhaps even an implementation of it; so I don't mean to
discourage anyone from trying to develop such a thing.

Matt Oliveri

Feb 11, 2019, 1:51:29 AM
to Homotopy Type Theory
On Saturday, February 9, 2019 at 6:53:15 AM UTC-5, Felix Rech wrote:
Judgmental equality cannot be taken as assumptions. If one wants to use judgmental equalities then one has to give concrete definitions that satisfy those equalities and cannot hide the definition details. This makes it impossible to change definitions later on without breaking constructions that depend on them.

I worry that this will indeed be a modularity problem in practice. You generally can't specify behavior strictly at the interface between subsystems. Semantic purists would point out that such an interface wouldn't make sense anyway, unless the subsystem's type is an hset and you use paths; in general, you should specify a bunch of coherences. Aside from that being much harder, though, we don't know how to do it in general, because of the infinite-object problem.

With a 2-level theory, you could at least have second-class module signatures with strict equality.

In nontrivial definitions, judgmental equalities seem more arbitrary than natural. If we define addition of natural numbers, for example, then we can choose between x + 0 = x and 0 + x = x as the judgmental equality, but we cannot have both. This makes it hard to find the right definitions and to predict their behavior.
Another motivation was of course that it would simplify the implementation of proof checkers and parts of the metatheory.

This problem is solved by the special case of 2-level theories where strict equality is reflective (like extensional type theory (ETT)). What this does for you in practice is that you never have to see coercions between e.g. the types (F (x + 0)) and (F (0 + x)), since they can be proven judgmentally equal by induction. (What I mean is that no record of rewriting with nat equality is left in the terms. Not that all such reasoning can be done automatically. (It presumably is easy to automate both plus-zero rewrites though.))

Without reflective equality, strict equality doesn't seem to provide any benefit over paths in this case, because you need a coercion, and paths between nats are already unique.
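
Felix's addition example, and the coercion Matt mentions, can be sketched in Lean 4. (A sketch only; `add` and `zero_add'` are local names, not Lean's library definitions.)

```lean
-- With addition defined by recursion on the second argument,
-- `x + 0 = x` holds judgmentally, but `0 + x = x` does not.
def add : Nat → Nat → Nat
  | x, .zero   => x
  | x, .succ y => .succ (add x y)

example (x : Nat) : add x 0 = x := rfl       -- judgmental: `rfl` suffices

theorem zero_add' (x : Nat) : add 0 x = x := by
  induction x with
  | zero => rfl
  | succ y ih => simp [add, ih]              -- only propositional

-- Moving between `F (add x 0)` and `F (add 0 x)` therefore leaves
-- a trace: an explicit cast along the propositional equality.
example (F : Nat → Type) (x : Nat) (t : F (add x 0)) : F (add 0 x) :=
  cast (congrArg F (zero_add' x)).symm t
```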

Matt Oliveri

Feb 11, 2019, 1:58:11 AM
to Homotopy Type Theory
If the stronger type theory is not conservative, you generally can't draw mathematical conclusions on the basis of those computations, though.

Matt Oliveri

Feb 11, 2019, 2:01:42 AM
to Homotopy Type Theory
I think you're right. From discussions about autophagy, it seems like no one knows how to match judgmental equality using equality types, unless that equality type family is propositionally truncated in some way.

Consequently, my guess is that Valery's Q transformation actually yields something rather like a 2-level system.

Valery Isaev

Feb 11, 2019, 3:04:58 AM
to Matt Oliveri, Homotopy Type Theory
Hi Matt,

I should point out that when I say "a type theory" I mean an ordinary dependently typed language, without any strict equalities or 2-levelness. Any such theory has only (dependent) types and terms and usual structural rules.
Any "coherence problem" is solved by simply adding an infinite tower of terms. For example, in MLTT, if f : A -> B -> C, you can define the function swap(f) = \x y -> f y x : B -> A -> C. Then swap \circ swap = id definitionally. Of course, you cannot have such a definitional equality in Q(MLTT), but instead you have a propositional equality p : Id(swap \circ swap, id) and also an infinite tower of terms, which ensure that p is coherent. Any rule that holds definitionally in MLTT will be true in Q(MLTT) propositionally.
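
Valery's swap example can be checked in a proof assistant. A Lean 4 sketch (note the definitional equality here relies on eta for functions, which MLTT needs for this as well):

```lean
-- swap exchanges the two arguments of a curried function.
def swap {A B C : Type} (f : A → B → C) : B → A → C :=
  fun b a => f a b

-- `swap (swap f)` reduces to `fun a b => f a b`, which is
-- definitionally `f` by eta, so `rfl` suffices.
example {A B C : Type} (f : A → B → C) : swap (swap f) = f := rfl
```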

Regards,
Valery Isaev


On Mon, Feb 11, 2019 at 10:01, Matt Oliveri <atm...@gmail.com> wrote:

Matt Oliveri

Feb 11, 2019, 3:23:02 AM
to Homotopy Type Theory
On Saturday, February 9, 2019 at 9:44:39 AM UTC-5, Jonathan Sterling wrote:
But this approach is not likely to yield *proof assistants* that are more usable than existing systems which have replaced definitional equality with propositional equality.

I am referring to Nuprl --

A discussion of the usability of propositional equality would not be complete without distinguishing equality that's reflective, or at least "subsumptive", from equality that's merely strict. Subsumptive equality is when the equality elimination rule rewrites in the type of a term without changing the term. There are no coercions. The way reflective equality is implemented in Nuprl is essentially a combination of subsumptive equality and extensionality principles. In ITT + UIP or Observational TT (OTT), there's a strict propositional equality, but you still have coercions.

I see the avoidance of coercions as the main practical benefit of Nuprl's approach.

One's approach to automation of equality checking is somewhat orthogonal, and I'm less opinionated about that. A number of dependent type systems exist based on an idea of Aaron Stump to use a combination of some algorithmic equality, and subsumptive (but non-extensional, non-reflective) equality. My impression is that Zombie is one such system.

usually Nuprl is characterized as using equality reflection, but this is not totally accurate (though there is a sense in which it is true).

To clarify, it depends on what you take to be judgmental equality. If it's the equality that determines what's well-typed, then Nuprl has equality reflection. Of course no useful system will implement reflective equality as an algorithm, since it's infeasible. So any algorithmic equality will be some other equality judgment. But the "real" judgmental equality is precisely the equality type. (As you say later.)

When I say "definitional equality" for a formalism, what I mean is that if I have a proof object D of a judgment J, if J is definitionally equal to J', then D is also already a proof of J'. Definitional equality is the silent equality. In most formalisms, definitional equality includes some combination of alpha/beta/eta/xi, but in Nuprl is included ONLY alpha.

Interesting. So you're counting Nuprl's proof trees and, say, Agda's terms as proof objects?

But what about Nuprl's direct computation rules? These allow untyped beta reduction and expansion anywhere in a goal. This justifies automatic beta normalization by all tactics. I don't know if Nuprl users take advantage of this, but I think they should.

Proof objects must NOT be confused with realizers, of course -- just like we do not confuse Coq proofs with the OCaml code that they could be extracted to.

To clarify, the realizers in Nuprl are Nuprl's *terms*. They are what get normalized; they are what appear in goals. The thing in Coq corresponding most closely to Nuprl's proof trees are Coq's proof scripts, not terms. The passage from Nuprl's proofs to terms is called "witness extraction", and is superficially similar to Coq's program extraction, but its role is completely different. A distinction between proof objects and terms is practically necessary to avoid coercions, since you still need to tell the proof assistant how to use equality *somehow*. In other words, whereas Coq's proof scripts are an extra, Nuprl's proof engine is primitive. (Similarly, in Andromeda, the distinction between AML programs and terms is necessary.)

...the equality type (a judgment whose derivations are crucially taken only up to alpha, rather than up to beta/eta/xi).

Although you may wish otherwise, Nuprl's judgments all respect computational equivalence, which includes beta conversion. (This is the justification of the direct computation rules.)

Nuprl is in essence what it looks like to remove all definitional equalities and replace them with internal equalities. The main difference between Nuprl and Thorsten's proposal is that Nuprl's underlying objects are untyped -- but that is not an essential part of the idea.

This doesn't seem right, since Nuprl effectively has untyped beta conversion as definitional equality. So I would say it *is* essential that Nuprl is intrinsically untyped. Its untyped definitional equality is all about the underlying untyped computation system.

The reason I bring this up is that the question of whether such an idea can be made usable, namely using a formalism with only alpha equivalence regarded silently/definitionally, and all other equations mediated through the equality type, can be essentially reduced to the question of whether Nuprl is practical and usable, or whether it is possible to implement a version of that idea which is practical and usable.

This is an interesting comparison. But because I consider Nuprl as having untyped definitional equality, and a powerful approach to avoiding coercions, I have to disagree strongly. By your argument, Thorsten's proposal would be at least as bad as Nuprl. (For practical usability.) But I think it would probably be much worse, because I think Nuprl is pretty good. Some of that is because of beta conversion. But avoiding coercions using subsumptive equality is also really powerful. Thorsten didn't say, but I'm guessing his proposal wouldn't use that.

(I would really like it if Nuprl could be accurately likened to some other proposal, since it'd probably get more people thinking about it. Oh well. The most similar systems to Nuprl, other than its successors, are Andromeda, with reflective equality, and the Stump lineage I mentioned, with subsumptive equality. PVS, Mizar, and F* might be similar too.)

The main purpose of this message is not to disagree with you, Jon. I'm mostly trying to sing the praises of Nuprl, because I feel that you've badly undersold it.

I don't know what the best way to deal with dependent types is. But I think avoiding type annotations and coercions while getting extensional equality is really good. I don't know about avoiding *all* type annotations; maybe that makes automation too hard. But I suspect the ideal proof assistant will be more like Nuprl than like Coq or Agda.

Matt Oliveri

Feb 11, 2019, 3:28:53 AM
to Homotopy Type Theory
OK, thanks. I stand corrected. But in that case, I'm with Thorsten: 2-level seems easier.

Matt Oliveri

Feb 11, 2019, 3:37:35 AM
to Homotopy Type Theory
Come to think of it, that's pretty cool though, that you can systematically produce all the higher coherences for a system of judgmental equations!

Rafaël Bocquet

Feb 11, 2019, 4:32:31 AM
to HomotopyT...@googlegroups.com
I have been thinking about the relationship between type theories with
weak or strong computation rules, here is what I currently know about it.

Note that in the presence of UIP, type theories with weak and strong
computation rules are equivalent. This is a consequence of the
conservativity of extensional type theory over intensional type theory.
(As Théo said, the proof does not rely on the presence of strong
computation rules in ITT)

Without UIP, there are coherence theorems to be proven. As explained by
Thorsten, we can replace a type theory T (presented by generators (type
and term constructors) and relations (definitional equalities)) by a
two-level type theory whose outer layer has strict identity types, and
in which the definitional equalities of T become generators of the
identity types of the outer layer. Here, "strict" means that we either
include the equality reflection rule, or the UIP principle. By the same
arguments as in Hofmann's conservativity proof, both choices are equivalent.

What is interesting is that for some choices of type-theoretic
structures and definitional equalities, UIP can be derived in the
two-level type theory, rather than taken as an axiom, in which case the
strong type theory is a conservative extension of the weak theory.
Proving that UIP is derivable is very similar to proving coherence
statements of the form "all diagrams commute". (It is an acyclicity
condition.)


If F : WTT ---> STT is an extension of a weak type theory WTT by new
equations, and W2TT is a two-level type theory associated to WTT, the
situation is as follows:

  WTT -------------------> STT
   |          F             |
  G|                       H|~
   v                        v
  W2TT ---> W2TT+UIP ---> W2TT+EXT
       K              L

(The objects of this diagram are the initial models of the theories)
G and L are always conservative, H is an isomorphism, and if UIP is
derivable in W2TT, then K is conservative.


I think that I know how to prove that UIP is derivable when considering
extensions corresponding to weak computation rules (propositional
identity types, ...), although I have not written a detailed proof yet.
The core of the proof is a procedure that performs weak head
normalization, up to homotopy and coherently, in the weak two-level type
theory. Because the strong theory is strongly normalising, repeatedly
applying that procedure terminates.

--
Rafaël Bocquet

Matt Oliveri

Feb 11, 2019, 7:17:51 AM
to Homotopy Type Theory
On Saturday, February 9, 2019 at 3:34:22 PM UTC-5, Michael Shulman wrote:
From an implementation point of view, I agree that in the long run we
should have proof assistants that don't hardcode a fixed set of
judgmental equalities.  But I don't think that means eliminating all
judgmental equalities; it just means making the logical framework more
flexible, such as Agda's ability to postulate new reductions or
Andromeda's framework with equality reflection.  In particular, the
new equalities that we postulate should still be *substitutive* (as
Jon says, allowing to perturb a judgment without altering the proof
object) rather than *transportive* (requiring the proof object to be
altered) -- I think Vladimir was the one who suggested words like
those.

The first time I heard those terms was on this list:
https://groups.google.com/forum/#!topic/homotopytypetheory/1bUtH8CLGQg

It seems from that discussion that they were associated with Vladimir Voevodsky's proposal for HTS. As a form of extensional type theory without any "built-in" implementation proposal, it seems like HTS has no notion of "proof object" in Jon's sense, which seems to be formal, checkable proofs. It's not that you couldn't come up with one, it just isn't specified. So I don't think HTS has any "definitional equality", in Jon's sense. But it seems like HTS' exact equality was considered substitutive nonetheless. In fact, it seems to me like what Vladimir meant by "substitutional" was that it doesn't cause coercions. Either because it's definitional, or because it's subsumptive (my term, from another message in this thread).

So I think you're misusing those terms.

Judgmental, definitional, substitutive, and computational equalities
are not exactly the same thing.  But the fact that there are so many
different but related points of view on similar and overlapping
concepts, and so many different but related uses and applications for
them, suggests to me that there is an important underlying
mathematical concept that should not lightly be discarded.

This is too vague. I wouldn't know whether I'm discarding it or not. You seem to be downplaying the differences between these notions. Why? If you don't care about the difference, why don't you just deal with strict or exact equality, and leave the implementation details to someone else? Coherence issues don't penetrate to a lower level than strict equality. Judgmental, definitional, and substitutive equality are special cases of strict equality that differ in their implementation properties.

Jon Sterling

Feb 11, 2019, 8:03:26 AM
to 'Martin Escardo' via Homotopy Type Theory
Hi Matt, I don't have time to get sucked further into this conversation, but let me just say that what you are saying about "untyped definitional equality" and beta in Nuprl is not accurate, or at least, it is not accurate if you take the broad definition of 'definitional equality' that I set out in my message.

The lack of coercions in the realizers is true, but I did (apparently pointlessly) exhort readers to not confuse a realizer with a proof. There are coercions in the proof objects -- we must compare apples with apples.

Michael Shulman

Feb 11, 2019, 8:04:35 AM
to Matt Oliveri, Homotopy Type Theory
On Mon, Feb 11, 2019 at 4:17 AM Matt Oliveri <atm...@gmail.com> wrote:
> As a form of extensional type theory without any "built-in" implementation proposal, it seems like HTS has no notion of "proof object" in Jon's sense, which seems to be formal, checkable proofs. It's not that you couldn't come up with one, it just isn't specified. So I don't think HTS has any "definitional equality", in Jon's sense. But it seems like HTS' exact equality was considered substitutive nonetheless. In fact, it seems to me like what Vladimir meant by "substitutional" was that it doesn't cause coercions. Either because it's definitional, or because it's subsumptive (my term, from another message in this thread).
>
> So I think you're misusing those terms.

Well, after many years I still have not managed to figure out how
NuPRLites use words, so it's possible that I misinterpreted what Jon
meant by "proof object". But if you interpret what I meant in ITT,
where I know what I am talking about, then it makes sense. In ITT the
relevant sort of "witness of a proof" is just a term, so "not
perturbing the proof object" means the same thing as "not causing
coercions".

> You seem to be downplaying the differences between these notions. Why?

Maybe things are different in computer science, but in mathematics it
often happens that there are things called "ideas" that are, in fact,
vaguer than anything that can be written down precisely, and can be
realized precisely by a variety of different formal definitions with
different formal properties. The differences -- even conceptual
differences -- between these definitions are not unimportant or
ignorable, but do not detract from the importance of the idea or our
ability to think about it. Indeed, the presence of multiple formal
approaches to the idea with different connections to different
subjects make it *more* important and provide us *more* options to
work with it formally. I am thinking, for instance, of all the
different constructions of a highly structured category of spectra, or
all the different definitions of (oo,1)-category.

Matt Oliveri

Feb 11, 2019, 8:22:20 AM
to Homotopy Type Theory
I'm not convinced that I've made a mistake about Nuprl. If I had mistakenly taken realizers to be the proof objects, then *all* equality would be definitional. That's not what I'm saying. The untyped beta conversion is special because beta normalization can be automatically used as necessary by all tactics invoked in the proof object. That means all proof objects (following this approach) are robust to "perturbations" that are beta conversions. Like I said, I'm not sure Nuprl folks do that. If not, you are technically correct that definitional equality is alpha. But they can and should do that. So the design of the system fully supports definitional beta conversion. (And nothing more.)

Your exhortation was not pointless. I agreed with it, and attempted to explain the reason for the difference in more detail.

Jon Sterling

Feb 11, 2019, 8:38:00 AM
to 'Martin Escardo' via Homotopy Type Theory
Hi Matt, my final comment is just to clarify that even if you take the tactic trees as the proof objects (which is a position that I think is fine to argue), it is not acceptable to say that they are robust to "beta perturbations", because if your tactics run beta rules automatically, it is trivial for me to cook up an example where a beta-perturbation would make your tactic script fail to terminate (even if the theorem is true). This is because in Nuprl, you are often working with fixed points (a very cool feature!), and must be circumspect in the computation rules that you apply to them -- for instance, consider proving something about an infinite stream of binary digits. The "substitutive" or "perturbative" equality in Nuprl is indeed alpha, I can confirm, regardless of whether you are thinking of tactic scripts or finished proofs as the proof objects.

I just want to re-emphasize what Mike was saying about "ideas" being so fuzzy that they can be characterized using multiple, conflicting explanations -- definitional equality is one of these things, and is a scientific or philosophical concept rather than a technical concept (as Martin-Lof convincingly argued in the 1970s in his famous paper on definitional equality). You may have your own way to understand what is happening in Nuprl, and that's ok --- but the way that I am propounding in this thread is an analogy that allows us to compare apples with apples, a very difficult thing to do in this context. If there is anything mathematical to be said about these things, we need a common technical basis, and it seems like the simplest way to do that is to analyze what function definitional equality plays in practice -- it is the thing which (1) doesn't require proof, and (2) doesn't add any coercions to the thing that counts as proof. That is the "universal property" of definitional equality.

I'm too busy to respond further (paper deadline!), so I would request that people wait a few days to contact me about it :)

Matt Oliveri

Feb 11, 2019, 10:09:49 AM
to Homotopy Type Theory
It looks like Jon is with you regarding definitional/substitutive equality, since he considers Nuprl's substitutive equality to be alpha conversion. From the old discussion about it, I had figured Nuprl's substitutive equality was the equality type. I guess I'll avoid that term.

So I'll ask a more concrete question. You seem to be more open to Andromeda than Nuprl. It has a difference between definitional equality (in Jon's sense) and the (strict/exact) equality type for approximately the same reason as Nuprl. If the theory you're using is HTS, then there's also path types. So I gather path types are what you want to take as equality of reference. Which is equality of sense?

Regarding the paragraph I said was vague, my complaint really is that I don't know what you're getting at. I recommended strict or exact equality as possible (informal) interpretations. This doesn't mean there needs to be something called "strict equality" in the system definition, for example, it means you recognize a strict equality notion when you see one. I don't know how to recognize the kind of equality you were getting at in that paragraph.

Michael Shulman

Feb 11, 2019, 12:20:46 PM
to Matt Oliveri, Homotopy Type Theory
FWIW, I think the only thing I have against NuPRL "in principle" is
that it seems to be closely tied to one particular model, which is the
opposite of what I want my type theories to achieve. I say "seems"
because then someone says something like Jon's "Nuprl's underlying
objects are untyped -- but that is not an essential part of the idea",
providing an instance of the problem I have with NuPRL "in practice",
which is that every time I think I understand it someone proves me
wrong. (-:O

Can you explain the difference between "definitional" (whatever that
means) and "strict" equality in Andromeda? Of course there is the
technical difference between judgmental equality and the equality
type, but that doesn't seem to me to be a "real" difference in the
presence of equality reflection, particularly at the purely
philosophical level at which a phrase like "equality of sense" has to
be interpreted. As far as I know, even beta-reduction has no
privileged status in the Andromeda core -- although I suppose
alpha-conversion probably does.

Thorsten Altenkirch

Feb 11, 2019, 1:17:25 PM
to Michael Shulman, Matt Oliveri, Homotopy Type Theory
I have got something else against NuPRL. It explains Type Theory via untyped objects which are then typed. In my view there is no reason why we need to explain typed things via untyped objects. When we talk about mathematical objects we think about typed entities, and the formalism of type theory should reflect this. It is not that we find an untyped object on the street and then try to find out which type it has. We are thinking of a type first and then we construct an element.

Thorsten

Alexander Kurz

Feb 11, 2019, 1:45:24 PM
to Thorsten Altenkirch, Michael Shulman, Matt Oliveri, Homotopy Type Theory
Hi Thorsten,

"When we talk about mathematical objects we think about typed entities"

I agree, but does it follow that types and not objects come first?

For example, 0 can simultaneously be the empty set, a natural number, an integer, a boolean, etc etc

The importance of the "etc etc" is that this list is not fixed in advance. It can change in the course of mathematical practice.

This seems to indicate to me that objects come first and types later.

Another example happens when I say that the dual category has the same objects and arrows with domain and codomain reversed. The same object then belongs to different categories.

Do you think that type theory provides enough flexibility to model this aspect of mathematical discourse conveniently?

All the best,

Alexander

Matt Oliveri

Feb 11, 2019, 2:27:23 PM
to Homotopy Type Theory
Jon's definitional equality is based on proof objects. The Andromeda terms aren't proof objects, since you can't check just a term. So the fact that Andromeda terms don't have coercions for strict equality doesn't do anything for definitional equality. I would guess AML programs should be considered the relevant proof objects. I'd say definitional equality in Andromeda is pretty interesting, since algebraic effect handlers let you locally add new automation for equality. But it can't be as rich as strict equality (if you have e.g. induction on nats). And globally, it sounds like it's just alpha conversion.

Matt Oliveri

Feb 11, 2019, 3:11:58 PM
to Homotopy Type Theory
You'll have to ask Jon about what "idea" Nuprl's intrinsic untypedness is "not an essential part of". I'd say the most important thing about Nuprl is dependent refinement typing. In particular, Nuprl is extrinsic dependent typing, since the intrinsic type system is trivial. That turns out to be very interesting too, but makes the approach less broadly applicable.

I have some outlandish views about Nuprl. I don't consider its PER semantics to be a model, in the usual sense of model theory. It's proof-theoretic semantics. It's a semantic justification of some proof principles. Kind of like a strong normalization proof for ITT. You can point out that it's technically a realizability model. But I'd say that's because the terms are realizers. *What are they realizing?* That would be a model. The model theory of Nuprlish systems is currently virtually nonexistent. Somebody ought to fix that. There's a set-theoretic semantics (actually two, and they are different... sort of) for an old version of Nuprl. That's it, AFAIK.

Michael Shulman

Feb 11, 2019, 4:50:19 PM
to Matt Oliveri, Homotopy Type Theory
Well, I can't tell exactly what Jon meant by "I have a proof object D
of a judgment J, if J is definitionally equal to J', then D is also
already a proof of J'.". If he meant that an entire typing judgment
"M:A" is only "definitionally equal" to a typing judgment "N:B" if
*the very same derivation tree* of M:A counts also as a derivation of
N:B, then in the ordinary presentations of *any* ordinary type theory
I don't think any two judgments that differ by anything more than
alpha-equivalence would be considered "definitionally equal". Look
for instance at appendix A.2 of the HoTT book: coercion along
judgmental equality is a rule that alters a derivation tree. What it
doesn't alter is the *term*, and since Jon also said "In most
formalisms, definitional equality includes some combination of
alpha/beta/eta/xi" I assumed that his "proof objects" in an MLTT-like
theory would refer to the terms rather than the derivation trees.

Regardless of what Jon meant, it seems to me that insofar as
"definitional equality" is distinguished from all these other kinds of
strict equality, it should refer to pairs of terms (i.e. bits of
syntax that denote something in a model) that are *equal by
definition*. For instance, (\lambda x. x^2)(3) is equal by definition
to 3^2, because \lambda x. x^2 denotes by definition the function that
takes its argument and squares it. I think this is roughly the same
as what I would mean by "equality of sense", although the latter is a
primarily philosophical concept and as such cannot be pinned down with
mathematical rigor. AML programs do not denote something in a model,
and I can't think of any sense in which two of them could be "equal by
definition". A judgmental, strict, exact, or substitutional equality
might include only definitional equalities, as in ordinary ITT, or it
might include other non-definitional ones, as in ETT with reflection
rule. So I would say that Andromeda with its reflection rule does not
include a "definitional equality" as a distinguished notion of the
core language. However, when using Andromeda as a logical framework
to implement some object language, one might assert a particular class
of substitutional equalities in the object language that are all
definitional.
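
Mike's (\lambda x. x^2)(3) example is mechanically checkable; for instance, in a Lean 4 sketch, the beta-redex and its contractum are related by `rfl`, the certificate of definitional equality:

```lean
-- A beta-redex is "equal by definition" to its contractum.
example : (fun x : Nat => x ^ 2) 3 = 3 ^ 2 := rfl
```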

Thorsten Altenkirch

Feb 11, 2019, 5:58:45 PM
to Alexander Kurz, Michael Shulman, Matt Oliveri, Homotopy Type Theory
I think it is rather misleading to think that 0 and the empty set have something in common because in set theory they are represented by the same construction {}. This is rather an accident of representation; it is like saying that the number 97 and the letter 'a' have something in common because they are both represented by the same bit sequence in ASCII.

Indeed, I think having access to, and being able to talk about, the accidental representation of objects, as you can in set theory, is detrimental because it stops abstraction, and abstraction is precisely one of the main advantages of type theory.

Thorsten

Jacques Carette

Feb 11, 2019, 9:09:07 PM
to Thorsten Altenkirch, Alexander Kurz, Michael Shulman, Matt Oliveri, Homotopy Type Theory
Some of you might already have seen the interesting discussion "Set
theories without junk theorems" at MO
https://mathoverflow.net/questions/90820/set-theories-without-junk-theorems
that is very much about these accidents of representation.
Jacques

Matt Oliveri

Feb 12, 2019, 4:01:33 AM
to Homotopy Type Theory
On Monday, February 11, 2019 at 4:50:19 PM UTC-5, Michael Shulman wrote:
Well, I can't tell exactly what Jon meant by "I have a proof object D
of a judgment J, if J is definitionally equal to J', then D is also
already a proof of J'.".  If he meant that an entire typing judgment
"M:A" is only "definitionally equal" to a typing judgment "N:B" if
*the very same derivation tree* of M:A counts also as a derivation of
N:B, then in the ordinary presentations of *any* ordinary type theory
I don't think any two judgments that differ by anything more than
alpha-equivalence would be considered "definitionally equal".

Right. So I assumed he didn't mean that. I'm pretty sure that for ITT, the terms are the proof objects. But it's not clear what should count as proof objects otherwise. This is a weakness of the definition. I guessed at what the proof objects were for Nuprl, but it looks like Jon had something else in mind.

If the proof objects are taken to be whatever notion of formal proof the formalizer needs to produce, then definitional equality is an important consideration for proof engineering. That's what I was thinking about. Crucially, proof automation changes the notion of proof, in practice. But this makes things fuzzy.

AML programs do not denote something in a model,
and I can't think of any sense in which two of them could be "equal by
definition".

You seem confused. Definitional equality is not a relation on proof objects, it's a relation on expressions appearing in the judgments they prove. AML programs don't appear in judgments. (Well, not any semantically relevant judgments.)

For instance, (\lambda x. x^2)(3) is equal by definition
to 3^2, because \lambda x. x^2 denotes by definition the function that
takes its argument and squares it.

This sure seems completely different from what Jon was getting at. But anyway, what's the difference between "denoting by definition" and regular denoting?

An important aspect of Jon's definition of "definitional equality", which I think is right, is that it doesn't have to do with models. Definitional equality is an intensional issue. An issue tied to implementation. ITT pins definitional equality to judgmental equality, which *does* have to do with models, and I think that's *bad*. I suspect that the discomfort or lack of understanding many mathematicians supposedly have with definitional equality stems from the fact that it is, in fact, an implementation issue.

So I would say that Andromeda with its reflection rule does not
include a "definitional equality" as a distinguished notion of the
core language.

Agree.

However, when using Andromeda as a logical framework
to implement some object language, one might assert a particular class
of substitutional equalities in the object language that are all
definitional.

How would you tell when a class of equalities is definitional?

Matt Oliveri

unread,
Feb 12, 2019, 6:03:45 AM2/12/19
to Homotopy Type Theory
Let's say that "objects" are untyped, and "elements" are typed, by definition. These are semantic entities. Nuprl's (closed) terms are untyped in that you are allowed to think of them as objects. You can also think of them as elements. This is not the same.

I'd like to distance Nuprl, based on PERs, from set theory. In set theory, objects are given their final meaning by the universe, and sets are types only in the most boring possible sense that you gather up objects and call them elements. In Nuprl, terms obtain their computational meaning as objects, but their mathematical meaning as elements.

Interpreted as PERs, Nuprl's types let you ask whether two objects implement the same element. Equality of elements is not equality of objects, and the way you think about objects and elements is different. A variable has a declared type, and it denotes an arbitrary element of that type. It is meaningless to think of it as an object (unless the declared type is a subtype of the type of objects).

You are right that you don't need to explain elements in terms of objects. That doesn't necessarily make it a bad idea.

Each PER indicates how its elements are implemented by objects. There's no requirement that an object will have a unique type. Each type can be thought of as an abstract view of certain objects.

Constructions in Nuprl are heavily based on types, as in any strong type system. You wouldn't bother with a strong type system if you weren't going to do that. You can construct elements using the usual sorts of rules. Because Nuprl also has objects, you also have the option of constructing objects and then proving they implement elements of one or more types. If you've never felt at all constrained by a strong type system, great; you don't have to do that. But for many people, the intrinsically untyped approach is sometimes helpful. An element is an element; it doesn't matter whether it's an element by construction, or a verified object. I see this as a tremendous benefit for strongly typed programming.

Thorsten Altenkirch

unread,
Feb 12, 2019, 10:36:08 AM2/12/19
to Matt Oliveri, Homotopy Type Theory


You are right that you don't need to explain elements in terms of objects. That doesn't necessarily make it a bad idea.

Not necessarily but I think it is a bad idea. Why do I need to understand objects to understand elements? Why do I have to capture all possible constructions if I just want to talk about some specific ones?

 

This is something that irritates me about set theory: if I want to say for all natural numbers …, I am actually saying for all sets which turn out to be natural numbers… At least conceptually this is rubbish. Who thinks like this?

 

Yes, and in computer science we think about types sorting untyped programs. But why? Why do I need to think about untyped programs to understand typed ones? Untyped programs are just weird. Do you understand the untyped lambda calculus? Its syntax is simple but its meaning is a headfuck. It took Dana Scott a while to come up with a mathematical semantics.

 

I think we should put our mouth where our heart is. Typed objects are easier to understand; they make sense by themselves, so let's refer to them. Yes, we implement them based on untyped things (trees, strings, bit sequences) but we don't need low level concepts to understand high level concepts!! Yes, we need them to implement them, but this is another issue.

 

Thorsten

--
You received this message because you are subscribed to the Google Groups "Homotopy Type Theory" group.
To unsubscribe from this group and stop receiving emails from it, send an email to HomotopyTypeThe...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Matt Oliveri

unread,
Feb 12, 2019, 10:59:41 AM2/12/19
to Homotopy Type Theory
On Tuesday, February 12, 2019 at 10:36:08 AM UTC-5, Thorsten Altenkirch wrote:
Why do I need to understand objects to understand elements?

As a user of Nuprl, you don't. Objects give you a new way to reason, which you don't have to use. You would only need to understand the object-based semantics of types to fully understand the language.

Why do I have to capture all possible constructions if I just want to talk about some specific ones?

I don't understand this.

This is something that irritates me about set theory: if I want to say for all natural numbers …, I am actually saying for all sets which turn out to be natural numbers… At least conceptually this is rubbish. Who thinks like this?


This doesn't seem to have anything to do with Nuprl. In Nuprl, saying for all natural numbers is a Pi type with domain nat, as usual.

Yes and in computer science we think about types sorting untyped programs. But why?


Some computer scientists like untyped programs.

Do you understand the untyped lambda calculus?


Yeah, I'd say so.

It took Dana Scott a while to come up with a mathematical semantics.


Because understanding this connection between computation and math turned out to be much harder than understanding computation on its own, operationally. Before denotational models, the untyped lambda calculus was already well-understood operationally. Scott deepened our understanding of it, and of general recursive types more generally. It's not all or nothing.

I think we should put our mouth where our heart is. Typed objects are easier to understand, they make sense by themselves so lets refer to them. Yes we implement them based in intyped things, trees, strings, bit sequences but we don’t need low level concepts to understand high level concepts!! Yes we need them to implement them but this is another issue.


I truly believe that it's useful for the type system to let users reason about the implementation of typed elements in terms of untyped or less-typed objects. This is refinement typing, and it's not just Nuprl vs the world.

Michael Shulman

unread,
Feb 12, 2019, 12:54:24 PM2/12/19
to Matt Oliveri, Homotopy Type Theory
For sure definitional equality doesn't have to do with models. Like
all the kinds of equality we are discussing, it is a syntactic notion.
Actually I would say it is a *philosophical* notion, and as such is
imprecisely specified; syntactic notions like judgmental equality can
do a better or worse job of capturing it in different theories (and in
some cases may not even be intended to capture it at all).

> what's the difference between "denoting by definition" and regular denoting?

x+(y+z) and (x+y)+z denote the same natural number for any natural
numbers x,y,z, because we can prove that they are equal. But they
don't denote the same natural number *by definition*, because this
proof is not just unfolding the meanings of definitions; it involves
at least a little thought and a pair of inductions.
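Concretely, in Agda (an added illustration using standard-library names, not code from the message): the first defining equation of _+_ makes the left unit law hold by refl, while associativity genuinely needs induction.

```agda
open import Data.Nat using (ℕ; zero; suc; _+_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl; cong)

-- zero + n reduces to n, so this equation holds by definition:
+-identityˡ : (n : ℕ) → zero + n ≡ n
+-identityˡ n = refl

-- x + (y + z) and (x + y) + z do not reduce to a common form;
-- proving them equal requires induction on x:
+-assoc : (x y z : ℕ) → x + (y + z) ≡ (x + y) + z
+-assoc zero    y z = refl
+-assoc (suc x) y z = cong suc (+-assoc x y z)
```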

For a more radical example, "1+1=2" and "there do not exist positive
integers x,y,z,n with n>2 and x^n+y^n=z^n" denote the same
proposition, namely "true". But that's certainly not the case by
definition! Same reference; different senses.

Matt Oliveri

unread,
Feb 13, 2019, 1:37:51 AM2/13/19
to Homotopy Type Theory
OK. So it sounds like definitional equality is another way of thinking about equality of sense, and is generally not the same as strict equality. That's a relief.

But the use of judgmental equality for capturing a system of paths that's fully coherent is actually about strict equality, in general; not necessarily judgmental or definitional equality.

So to bring things back to HoTT, what are people's opinions about the best use of these three equalities?

My opinion is that strict equality should be implemented as judgmental equality, which should be richer than definitional equality, by using a 2-level system with equality reflection. This is the case in HTS and computational higher dimensional type theory. We would still probably want to consider different theories of strict equality, but there would be no obligation to implement them as equality algorithms. Definitional equality would be a quite separate issue, pertaining to proof automation.

Ansten Mørch Klev

unread,
Feb 13, 2019, 5:02:01 AM2/13/19
to Matt Oliveri, Homotopy Type Theory
I would object to the characterization of definitional identity as identity of sense. Sense is often glossed as mode of givenness; or one might say that the sense of an expression determines a route to its reference. Now take terms such as 7+5 and 9+3. These are definitionally identical, but they present the number twelve in different ways; qua senses each determines a different route to ssssssssssss(0). Hence they are not the same senses. Identity of sense is, to my mind, a stricter relation than definitional identity.

Very often, also in this thread, the notions of judgemental identity and definitional identity seem to be treated as one and the same thing. But by judgemental identity we should mean a certain form of identity assertion that is different from the assertion that an identity proposition/type is true/inhabited; and by definitional identity we should mean one among several relations on terms that may properly be called a relation of identity. 

Here is an argument for the need of a form of identity assertion other than p : Id(A,a,b). The proposition Id(A,a,b) is formed by applying the function Id to the type of individuals (set in Martin-Löf's terminology) A, and the elements a,b : A. The notion of a function, however, presupposes a notion of identity: f is a function only if, when applied to identical arguments a, b, we get identical results f(a), f(b). On pain of circularity, the notion of identity appealed to here cannot be propositional identity. 
A notion of identity is also implicit in the notion of domain presupposed by the notion of a function: f is a function from the domain A to the domain B. To have the right to speak of A as a domain we need to know what it is for elements of A to be identical, we need to know a criterion of identity for A. 

In type theory the alternative form of identity assertion, used for instance in the explanation of what a function is, is the identity judgement a = b : A. There is nothing in the argument just given that forces the relation picked out by this form of identity assertion to be definitional identity; but there may be other arguments why such an interpretation is preferable. 





--

Thorsten Altenkirch

unread,
Feb 16, 2019, 10:59:18 AM2/16/19
to Anders Mörtberg, Homotopy Type Theory

Hi,

 

I have already written on this topic, but I had some further thoughts; they are related to how I explain judgemental equality when teaching type theory.

 

In set theory we have the element relation and equality, and both are propositions, so in some sense dynamic. E.g. whether an element relation holds may depend on some assumptions you have made, and the same goes for equality. In type theory, as in statically typed programming languages, typing is a judgement, i.e. it is static. (Actually I think there is a nice analogy: set theory vs type theory = Python vs Haskell.) The reason is that we only talk about typed objects, and as a consequence a phrase in our language refers to an object whose type is known statically. In particular, the type doesn't depend on any assumptions. If I say that x is a natural number, then this is not dependent on other assumptions I have made.

Once I have dependent types, this static reasoning needs to be extended. If I define n = 3 and I am checking that (1,2,3) is a vector of size n, I need to unfold the definition of n. (Actually I am already unfolding the definition of Vec anyway.) Hence I am introducing a static equality judgement accompanying my static element judgement.

Now, what exactly are the equalities that are known statically? It seems very natural to include beta equalities and the unfolding of recursive definitions. It is less obvious whether we should include eta equalities, because we can only do this for some types and not for all. Hence one of the issues with definitional/judgemental equality is that it is not clear what exactly should be included and what not.
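The Vec example renders directly in Agda (a minimal added sketch; Vec from the standard library):

```agda
open import Data.Nat using (ℕ)
open import Data.Vec using (Vec; _∷_; [])

n : ℕ
n = 3

-- Accepting this declaration uses the equality judgement n = 3 : ℕ:
-- the checker silently unfolds the definition of n (and of Vec),
-- with no transport appearing in the term.
v : Vec ℕ n
v = 1 ∷ 2 ∷ 3 ∷ []
```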

 

With the introduction of HoTT, another important aspect of judgemental equality became apparent: it is a universal strict equality. As I have argued, it may be a good idea to have a stronger universal strict equality, in particular one which is not a static judgement. However, now we end up with three equalities: judgemental, strict and propositional equality (I stick to this term even if propositional equality may not be a proposition). This may make it difficult to convince non-type-theorists that type theory is cool. Hence maybe we can have only two equalities, or one, and reduce judgemental equality to a mere convenience rather than a fundamental feature of type theory.

 

Thorsten

 

 

 



This message and any attachment are intended solely for the addressee
and may contain confidential information. If you have received this
message in error, please contact the sender and delete the email and
attachment. 

Any views or opinions expressed by the author of this email do not
necessarily reflect the views of the University of Nottingham. Email
communications with the University of Nottingham may be monitored 
where permitted by law.



Michael Shulman

unread,
Feb 16, 2019, 8:25:43 PM2/16/19
to Homotopy Type Theory
The supposed arbitrariness of some types having strict eta and others
not has been mentioned at least twice. However, I don't find it
arbitrary at all: *negative* types have strict eta, while positive
types don't. Since negative types are determined by maps *in*, all of
their elements, identifications, and so on are determined directly,
and strictly; whereas positive types are determined by maps *out*, and
in particular their higher groupoid structure must be freely generated
for some sense of "free", giving rise to lots of scope for flabbiness.
It's quite common in higher category theory for "semi-strictified
objects" to admit some strict limits and other "mapping in" universal
properties (e.g. the category of algebras and pseudomorphisms for a
2-monad has strict PIE-limits, the category of fibrant objects in a
model category has strict pullbacks of fibrations, etc.), but much
rarer for them to have strict colimits and other "mapping out"
universal properties.

In particular, the difference in whether Sigma-types have strict eta
or not simply lies in whether we are talking about positive
Sigma-types or negative Sigma-types, which are different things, even
though they turn out to be equivalent. Type theorists and logicians
are familiar by now with the fact that positive and negative binary
products / conjunctions are different things, even though they happen
to be equivalent in "cartesian" type theory and logic, because they
can be disentangled in linear logic. I don't know whether any kind of
"linear dependent type theory" can disentangle positive and negative
Sigma-types, but both syntactically and semantically it still makes
sense to me to consider them as different things that just happen to
be equivalent. It also happens sometimes in category theory that a
limit and a colimit coincide (e.g. split idempotents, biproducts in an
additive category, etc.
https://ncatlab.org/nlab/show/absolute+colimit); but depending on
whether we consider them as a limit or a colimit, it's natural that we
might get different amounts of strictness.
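The distinction is visible in Agda (an added illustration, not code from the message): a record gives the negative Σ with definitional eta, while a data type gives the positive Σ without it.

```agda
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- Negative Σ: determined by its projections; Agda gives records
-- definitional eta.
record Σ⁻ (A : Set) (B : A → Set) : Set where
  constructor _,_
  field
    fst : A
    snd : B fst
open Σ⁻

-- Positive Σ: determined by its constructor; no definitional eta.
data Σ⁺ (A : Set) (B : A → Set) : Set where
  ⟨_,_⟩ : (a : A) → B a → Σ⁺ A B

fst⁺ : {A : Set} {B : A → Set} → Σ⁺ A B → A
fst⁺ ⟨ a , b ⟩ = a

snd⁺ : {A : Set} {B : A → Set} (p : Σ⁺ A B) → B (fst⁺ p)
snd⁺ ⟨ a , b ⟩ = b

-- Eta for the record holds by refl, with no case analysis:
eta⁻ : {A : Set} {B : A → Set} (p : Σ⁻ A B) → p ≡ (fst p , snd p)
eta⁻ p = refl

-- For the data type, refl is only accepted after matching on p:
eta⁺ : {A : Set} {B : A → Set} (p : Σ⁺ A B) → p ≡ ⟨ fst⁺ p , snd⁺ p ⟩
eta⁺ ⟨ a , b ⟩ = refl
```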

Thorsten Altenkirch

unread,
Feb 17, 2019, 2:56:40 AM2/17/19
to Michael Shulman, Homotopy Type Theory, agda list
On 17/02/2019, 01:25, "homotopyt...@googlegroups.com on behalf of Michael Shulman" <homotopyt...@googlegroups.com on behalf of shu...@sandiego.edu> wrote:

However, I don't find it
arbitrary at all: *negative* types have strict eta, while positive
types don't.

This is a very good point. However, streams are negative types, yet Agda, for example, doesn't use eta conversion on them, I think for a good reason. Actually I am not completely sure whether this equality is even decidable.

E.g. the following equation cannot be proven using refl (it can be proven in Cubical Agda, btw). The corresponding equation for Sigma types holds definitionally.

infix 5 _∷_

record Stream (A : Set) : Set where
  constructor _∷_
  coinductive
  field
    hd : A
    tl : Stream A

open Stream
etaStream : {A : Set}{s : Stream A} → hd s ∷ tl s ≡ s
etaStream = {!refl!}

CCed to the agda list. Maybe somebody can comment on the decidability status?

Matt Oliveri

unread,
Feb 17, 2019, 4:05:08 AM2/17/19
to Homotopy Type Theory
What about strict uniqueness for the empty type? In general, what about the "discrete" inductive types of computational HDTT, including HTS-style natural numbers? I understand that this is not something we expect all models to have, but they're useful when available, and they have strict extensionality.



Matt Oliveri

unread,
Feb 17, 2019, 4:14:45 AM2/17/19
to Homotopy Type Theory
(Since I've already made such a fuss about Nuprl anyway...)

Nuprl has a type of streams (of whatever) up to bisimilarity. That is, bisimilar streams are judgmentally equal. Bisimilarity of bit streams is already undecidable. This is essentially the same problem as extensional equality of functions (nat->bool). I think eta *reduction* of streams would be OK to throw in, but it won't get you strict extensionality.

Michael Shulman

unread,
Feb 17, 2019, 4:19:01 AM2/17/19
to Thorsten Altenkirch, Homotopy Type Theory, agda list
Well, I'm not really convinced that coinductive types should be
treated as basic type formers, rather than simply constructed out of
inductive types and extensional functions. For one thing, I have no
idea how to construct coinductive types as basic type formers in
homotopical models. I think the issue that you raise, Thorsten, could
be regarded as another argument against treating them basically, or at
least against regarding them as really "negative" in the same way that
Pis and (negative) Sigmas are.

And as for adding random extra strict equalities pertaining to certain
positive types that happen to hold in some particular model, Matt, I
would say similarly that the general perspective gives yet another
reason why you shouldn't do that. (-:

But the real point is that the general perspective I was proposing
doesn't claim to be the *only* way to do things; obviously it isn't.
It's just a non-arbitrary "baseline" that is consistent and makes
sense and matches a common core of equalities used in many type
theories, so that when you deviate from it you're aware that you're
being deviant. (-:

Thorsten Altenkirch

unread,
Feb 17, 2019, 5:52:49 AM2/17/19
to Michael Shulman, Homotopy Type Theory, agda list
For me the idea of inductive vs coinductive, or as I called it a while ago, data vs codata, is an important basic intuition which comes before formal model constructions. Types are defined by constructors or by destructors; e.g. coproducts are defined by constructors, while functions are defined by destructors, namely application. That is, a function is something you can apply to arguments, obtaining a result. Lambda is a derived construction: I can construct a function if I have a method to compute the result. Similarly, natural numbers and lists are given by constructors, while streams are defined by destructors: to give a stream means to be able to say what its head and its tail are. And that is perfectly right: Sigma types can be given either by a constructor or by destructors, so in this sense they go both ways.

There are reductions, which just means that certain type formers are universal in that we can define all the others from them; e.g. function types together with some inductive types are sufficient to derive a certain class of codata types. That doesn't mean that the dichotomy between data and codata isn't an important basic intuition.

stre...@mathematik.tu-darmstadt.de

unread,
Feb 17, 2019, 6:35:05 AM2/17/19
to Thorsten Altenkirch, Michael Shulman, Homotopy Type Theory, agda list
But function types are neither inductive nor coinductive. Only N -> (-) is
coinductive.
In the presence of function types most coinductive types can be implemented.

Maybe coinductive is a style of programming but nothing really foundational.

Thomas

Thorsten Altenkirch

unread,
Feb 17, 2019, 6:44:31 AM2/17/19
to stre...@mathematik.tu-darmstadt.de, Michael Shulman, Homotopy Type Theory, agda list


But function types are neither inductive nor coinductive. Only N -> (-) is
coinductive.
In the presence of function types most coinductive types can be implemented.

You are thinking of terminal coalgebras; that is not what I mean. To avoid this confusion I am using the term codata, but most people in CS understand "coinductive" better. You can define a type by saying how its elements can be constructed; this is data. Alternatively you say what you can do with it, i.e. how its elements can be destructed. The function type is not data: functions are not understood by the way they are constructed but by what you can do with them. A function is something you can apply to an argument to get a result. Codata types always introduce non-trivial equalities: since all you can do with a function is apply it to arguments, two functions which provide the same output for all inputs should be equal. Hence functional extensionality should hold.

Maybe coinductive is a style of programming but nothing really foundational.

I think you miss something important here.

Matt Oliveri

unread,
Feb 17, 2019, 7:08:52 AM2/17/19
to Homotopy Type Theory
What about infinitary (non-elementary) limits? Are there infinitary homotopy limits? They are more common than discrete inductive types, right? So what if I considered a stream of A to be essentially an omega-fold product of A? Would that have a strict extensionality principle?

More generally, I would try and say that a coinductive type is some limit of an external diagram of elimination spines. This might address Thomas Streicher's objection, since the collection of elimination spines is countable externally.

Matt Oliveri

unread,
Feb 17, 2019, 7:13:22 AM2/17/19
to Homotopy Type Theory
On Sunday, February 17, 2019 at 7:08:52 AM UTC-5, Matt Oliveri wrote:
since the collection of elimination spines is countable externally.

Oh wait, I guess not.

Nicolai Kraus

unread,
Feb 17, 2019, 8:29:37 AM2/17/19
to Michael Shulman, Homotopy Type Theory
On Sun, Feb 17, 2019 at 1:25 AM Michael Shulman <shu...@sandiego.edu> wrote:
The supposed arbitrariness of some types having strict eta and others
not has been mentioned at least twice.  However, I don't find it
arbitrary at all [...] In particular, the difference in whether Sigma-types have strict eta

or not simply lies in whether we are talking about positive
Sigma-types or negative Sigma-types

Since I've mentioned eta for Sigma in this context: thanks for this explanation, Mike! 
-- Nicolai


Andreas Abel

unread,
Feb 17, 2019, 9:22:26 AM2/17/19
to Thorsten Altenkirch, Michael Shulman, Homotopy Type Theory, agda list
> CCed to the agda list. Maybe somebody can comment on the decidability status?

Ulrich Berger and Anton Setzer: Undecidability of Equality for Codata Types

Talk given at CMCS'18 Thessaloniki, Greece, 15 April 2018


http://www.cs.swan.ac.uk/~csetzer/articles/CMCS2018/bergerSetzerProceedingsCMCS18.pdf

Bas Spitters

unread,
Feb 17, 2019, 9:25:10 AM2/17/19
to Thorsten Altenkirch, stre...@mathematik.tu-darmstadt.de, Michael Shulman, Homotopy Type Theory, agda list
I think Thomas is referring to M-types which can be constructed in
HoTT, moreover, they have the right equality in guarded or cubical
type theory.
I've collected some references here. Please feel free to expand.
https://ncatlab.org/nlab/show/M-type

Thomas Streicher

unread,
Feb 17, 2019, 2:36:15 PM2/17/19
to Thorsten Altenkirch, Michael Shulman, Homotopy Type Theory, agda list
admittedly, by coinductive I understand terminal coalgebra

in MLTT even function types are inductive though that's a bit of
cheating in my eyes

so I must say I don't understand what you mean by codata types
technically

I know positive and negative polarity in the sense of linear logic people
presumably, that's closer to what you have in mind

thomas

Thorsten Altenkirch

unread,
Feb 17, 2019, 4:41:43 PM2/17/19
to Thomas Streicher, Michael Shulman, Homotopy Type Theory, agda list
I tried to explain in common sense terms what a function is. And my answer is that it is something you can apply to an element of the domain to get an element of the codomain. You don't get to inspect how the function is built; that is all you know. Hence the function type is explained by what you can do with it. That's codata. Compare this with natural numbers. A natural number is 0 or the successor of a natural number. You explain how to produce them. That's data. What is a stream? A stream is something you can get the head and the tail of. So that is codata as well.

Sent from my iPhone


Martín Hötzel Escardó

unread,
Feb 18, 2019, 12:37:27 PM2/18/19
to Homotopy Type Theory
On Friday, 8 February 2019 21:19:24 UTC, Martín Hötzel Escardó wrote:
 why is it claimed that definitional equalities are essential to dependent type theories?

I've tested what happens if we replace definitional/judgmental
equalities in MLTT by identity types, working in the fragment of Agda
that only has Π and universes (built-in), and hence my Π has its usual
judgemental rules, including the η rule (but this rule could be disabled).

In order to switch off definitional equalities for the identity type
and Σ types, I work with them given as hypothetical arguments to a module
(and hence with no definitions):

  _≡_ : {X : 𝓤} → X → X → 𝓤        -- Identity type.

  refl : {X : 𝓤} (x : X) → x ≡ x

  J : {X : 𝓤} (x : X) (A : (y : X) → x ≡ y → 𝓤)
    → A x (refl x) → (y : X) (r : x ≡ y) → A y r

  J-comp : {X : 𝓤} (x : X) (A : (y : X) → x ≡ y → 𝓤)
            → (a : A x (refl x)) → J x A a x (refl x) ≡ a

  Σ : {X : 𝓤} → (X → 𝓤) → 𝓤

  _,_ : {X : 𝓤} {Y : X → 𝓤} (x : X) → Y x → Σ Y

  Σ-elim : {X : 𝓤} {Y : X → 𝓤} {A : Σ Y → 𝓤}
         → ((x : X) (y : Y x) → A (x , y))
         → (z : Σ Y) → A z

  Σ-comp : {X : 𝓤} {Y : X → 𝓤} {A : Σ Y → 𝓤}
         → (f : (x : X) (y : Y x) → A (x , y))
         → (x : X)
         → (y : Y x)
         → Σ-elim f (x , y) ≡ f x y

Everything I try to do with the identity type just works, including
the more advanced things of univalent mathematics - but obviously I
haven't tried everything that can be tried.

However, although I can define both projections pr₁ and pr₂ for Σ, and
prove the η-rule σ ≡ (pr₁ σ , pr₂ σ), I get

 p : pr₁ (x , y) ≡ x

but only

     pr₂ (x , y) ≡ transport Y (p ⁻¹) y

It doesn't seem to be possible to get pr₂ (x , y) ≡ y without further
assumptions. But maybe I overlooked something.

One idea, which I haven't tested, is to replace Σ-elim and Σ-comp by
the single axiom

  Σ-single :
      {X : 𝓤} {Y : X → 𝓤} (A : Σ Y → 𝓤)
    → (f : (x : X) (y : Y x) → A (x , y))
    → is-contr (Σ λ (g : (z : Σ Y) → A z)
                       → (x : X) (y : Y x) → g (x , y) ≡ f x y)

I don't know whether this works. (I suspect the fact that "induction
principles ≃ contractibility of the given data with their computation
rules" is valid only in the presence of definitional equalities for
the induction principle, and that the contractibility version has a
better chance to work in the absence of definitional equalities.)

One can also wonder whether the presentation taking the projections,
with their computation rules, and the η rule as primitive would work.
But again we can define Σ-elim but not Σ-comp for it.

In any case, without further identities, simply replacing definitional
equalities by identity types doesn't seem to work, although I may be 
missing something.

Martin



Licata, Dan

unread,
Feb 18, 2019, 2:22:23 PM2/18/19
to Martín Hötzel Escardó, Homotopy Type Theory
Hi,

I’m confused — does it even type check to ask pr₂ (x , y) ≡ y ?

I would expect

x : A
y : B(x)
(x,y) : Σ A B
fst (x,y) : A
snd (x,y) : B(fst (x,y))

so if beta for fst is weak, then

pr₂ (x , y) ≡ transport Y (p ⁻¹) y

is the best you could hope for. I.e., you have a path for the first beta, and a path over it for the second.

-Dan

Martín Hötzel Escardó

unread,
Feb 18, 2019, 3:23:53 PM2/18/19
to Homotopy Type Theory
Sorry, I was being silly. Martin

Michael Shulman

unread,
Feb 19, 2019, 7:22:26 PM2/19/19
to Matt Oliveri, Homotopy Type Theory
On Sun, Feb 17, 2019 at 4:08 AM Matt Oliveri <atm...@gmail.com> wrote:
> What about infinitary (non-elementary) limits? Are there infinitary homotopy-limits? They are more common than discrete inductive types, right? So what if I considered a stream of A to be essentially an omega-fold product of A. Would that have a strict extensionality principle?

Yes, that might work. I think that the last time I thought about
coinductive types in homotopy models I was trying to give them some
kind of dependent coinduction principle. But if we take seriously the
positive/negative perspective that I advanced before, we would expect
coinductive types to have simply a destructor, a non-dependent
corecursor, a beta-rule for destructing the corecursor, and an
eta-principle saying something like that the endomorphism defined from
the destructor and the corecursor is the identity. And that would
probably be modeled by the omega-fold product of a type with itself.

More generally, if F is a functor that preserves fibrations, fibrant
objects, and sequential limits (like F(X) = A x X for streams), then
the infinite limit

... -> F(F(F(1))) -> F(F(1)) -> F(1) -> 1

might be a model for the corresponding coinductive type with such an
eta-principle.

So maybe there's nothing wrong with coinductive types as such, and the
point is just that computational considerations like decidability
don't always match up with the principled category-theoretic
explanations. (-:
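As a concrete special case (an added Agda sketch, with streams encoded as functions out of ℕ, in the spirit of the ω-fold product suggestion): eta holds pointwise by refl, so identifying hd s ∷ tl s with s costs exactly function extensionality, not a definitional equation.

```agda
open import Data.Nat using (ℕ; zero; suc)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

Stream : Set → Set
Stream A = ℕ → A

hd : {A : Set} → Stream A → A
hd s = s zero

tl : {A : Set} → Stream A → Stream A
tl s n = s (suc n)

_∷_ : {A : Set} → A → Stream A → Stream A
(a ∷ s) zero    = a
(a ∷ s) (suc n) = s n

-- The eta equation holds pointwise by computation ...
eta-pointwise : {A : Set} (s : Stream A) (n : ℕ) → (hd s ∷ tl s) n ≡ s n
eta-pointwise s zero    = refl
eta-pointwise s (suc n) = refl

-- ... so (hd s ∷ tl s) ≡ s follows from function extensionality,
-- rather than holding definitionally.
```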