So if f is an equivalence, and I pass it through Univalence and look at the equivalence I get out, it will be f circ f circ ... circ f, m times, for some m? Has anyone written about the amount of work required to figure out such m?
--
You received this message because you are subscribed to the Google Groups "Homotopy Type Theory" group.
To unsubscribe from this group and stop receiving emails from it, send an email to HomotopyTypeThe...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Sorry to double-post, but a more subtle interpretation of your response occurred to me. Should I instead be thinking of Univalence as a tool generating words in some presentation? Is my question about the "m"s better phrased as a question about a Dehn function?
Should I walk away with either of these impressions?
1. Univalence is all about the absence of certain equivalences, rather than the ability to magically find a suitable identification.
2. If UA winds up being "computationally effective," and identifications and equivalences are the same thing, then when I execute a proof which uses UA via Curry-Howard, the call to the runtime's UA routine will run in constant time and constant memory, since it is indistinguishable from the identity function.
Thank you for your time; I'm sure these are painfully obvious questions, and I'll be sure to give the list a healthy break before returning with more.
I think we're talking about different things. If "UA" means the
operation making an equivalence into an identification, then that can
certainly be constant-space. But *applying* that equivalence to an
*input*, or equivalently transporting across the resulting
identification, can be as slow as you like.
Does anyone know if any of the HoTT models would admit a squash type
that actually erases computational content?
The question, to be repetitive, is what is the difference between (ii)
and (iii) in the absence of excluded middle?
What is the logical/mathematical meaning of the operation "erase"?
That of ¬¬X is clear (namely (X->0)->0), and so is that of ||X||
(essentially forall P. isProp P -> (X->P)->P). The operation "erase" is
between these two, but it doesn't really have a definition (just an
implementation).
Perhaps Bob Harper can chime in, but I would guess (based on my
ignorance of NuPrl) that you won't get away with PERs. Doesn't NuPrl
have a type of propositions of some sort? How would that be
represented as a PER? An assembly sounds more likely to me.
The difference between a PER and an assembly is that an assembly is
like a PER in which a realizer may realize many elements. Thus, for
instance, one can have the assembly of all PERs, or "nabla 2", which is
the assembly of classical truth values. It is the set {0,1} where both
elements are realized by the unit ().
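To make the contrast concrete, here is a rough Python model (my own encoding, purely illustrative, not from the thread): in a PER, elements are equivalence classes of realizers, so distinct elements cannot share a realizer; in an assembly, each element merely carries a nonempty set of realizers, and one realizer may realize many elements.

```python
# "nabla 2", the assembly of classical truth values: the underlying set
# is {0, 1}, and both elements are realized by the unit value ().
nabla_2 = {
    0: {()},  # realizers of 0
    1: {()},  # the very same realizer also realizes 1
}

# No PER on realizers can present this: since 0 and 1 share the
# realizer (), a PER would be forced to identify them.
shared = nabla_2[0] & nabla_2[1]
assert shared == {()}
```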
Also, let me point out that there is *no* type theory, however
intensional, in which the terms contain the *entire* derivation. At
the very least, judgmental equalities are not recorded,
which is consistent with the philosophy of type
theory: a judgmental equality is supposed to be analytic, or
"self-establishing" (a computer scientist would say "decidable" and
"proof irrelevant", or just "bool").
On Thu, Aug 6, 2015 at 12:20 PM, Matt Oliveri <atm...@gmail.com> wrote:
>> which is consistent with the philosophy of type
>> theory: a judgmental equality is supposed to be analytic, or
>> "self-establishing" (a computer scientist would say "decidable" and
>> "proof irrelevant", or just "bool").
>
> Well I personally don't see why judgmental equality should have to be
> decidable. And I wouldn't have thought you'd think it should either, since
> Andromeda is implementing extensional type theory, where judgmental equality
> is undecidable.
I learned this from Bob Harper so perhaps he would be a better person
to explain. But I explain this to myself as follows: in a formalism
there will be *some* notion of self-evident equality (I am skipping
over the somewhat subtle difference between "self-evident" and
"analytic" – I shall use "self-evident" because it is less scary).
Now let judgmental equality be whatever notion of equality is deemed
self-evident. We may then incorporate this notion into type theory by
postulating that we may replace a thing with one that is
self-evidently equal to it. We will be tempted to push the limits of
"self-evident" to obtain a richer system (and to appear to be
smarter), but probably not beyond "decidable".
But it is also important to understand the
motivations and philosophy behind Martin-Löf type theory, for then one
can respect the "design limitations" imposed on it.
Type theory with
equality reflection is not, in my mind, type theory à la Martin-Löf. It
is not married to computation and constructivism (as a philosophy). It
is more like abstract algebra.
Thanks for pointing that out, I should have known it!
But I still don't see why validity of isProp(A) entails that A is
subterminal; it only says that A is equivalent to a subterminal.
On Tuesday, August 4, 2015 at 1:35:38 AM UTC-4, Michael Shulman wrote:
I think we're talking about different things. If "UA" means the
operation making an equivalence into an identification, then that can
certainly be constant-space. But *applying* that equivalence to an
*input*, or equivalently transporting across the resulting
identification, can be as slow as you like.
Exactly. I was saying UA itself should be fast. The equivalences it works with may or may not be fast.
If you start with a fast proof that (m + n = n + m) for binary nats, and convert it to a slow proof that (m' + n' = n' + m') for unary nats, this isn't UA's fault. It's because the equivalence between binary and unary nats already includes an exponential-time conversion in both directions. We can write arbitrarily slow programs using univalence, but that doesn't mean univalence is intractable. Similarly, we don't consider for loops intractable just because you can write very slow programs using for loops. The possibility of intractable equivalences does not necessarily mean UA is impractical for programs that you really run. It just means real programs should not feed UA with intractable equivalences.

On Mon, Aug 3, 2015 at 8:55 PM, Piyush P Kurur <p...@cse.iitk.ac.in> wrote:
> On Sun, Aug 02, 2015 at 07:32:54PM -0700, Matt Oliveri wrote:
>> On Sunday, August 2, 2015 at 6:11:09 PM UTC-4, Timothy Carstens wrote:
>> >
>> > Should I walk away with either of these impressions?
>> > 1. Univalence is all about the absence of certain equivalences, rather
>> > than the ability to magically find a suitable identification.
>> > 2. If UA winds up being "computationally effective," and identifications
>> > and equivalences are the same thing, then when I execute a proof which uses
>> > UA via Curry-Howard, the call to the runtime's UA routine will run in
>> > constant-time and constant-memory, since it is indistinguishable from the
>> > identity function.
>> >
>>
>> Well UA definitely isn't magic, but it really does "positively" provide
>> identifications in constructive models.
>>
>> It doesn't seem unreasonable that UA itself would be constant-time and
>> constant-space, but that's an implementation issue. You could ask the
>> authors of existing computational interpretations, like the cubical
>> model(s) how their UA performs.
>
> I thought that UA can potentially be computationally intractable
> (although still computable). After all, UA says that there is an
> equivalence between "A and B are equivalent" and "A and B are equal",
> but nothing about how computationally hard this mapping could be.
>
> This is not precise, but: if we take binary and unary nats, since there
> is an exponential blow-up in input size between the two forms, the proofs
> that m + n == n + m, when translated from binary to unary, can take
> exponential time as measured in the input size.
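The blow-up in question is easy to see concretely. A small sketch in Python (my code, not from the thread): both directions of the binary/unary equivalence are computable, but the binary-to-unary direction produces output of size about 2^k for a k-bit input, so transporting a proof along this equivalence can cost time exponential in the binary input size.

```python
def to_unary(n: int) -> str:
    """Binary-represented n (a Python int) to unary: n copies of 'S' on 'Z'."""
    return "S" * n + "Z"  # output has n+1 symbols; n can be 2^k - 1 for k bits

def to_binary(u: str) -> int:
    """Unary back to binary: count the successors."""
    return u.count("S")

# The two maps are mutually inverse, as an equivalence requires:
n = 10  # 0b1010, a 4-bit number
assert to_binary(to_unary(n)) == n
assert len(to_unary(n)) == n + 1  # output size grows with the *value*, not the bit length
```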
There is one further, essential difference between propositional
truncation in HoTT/UF and various forms of "brackets" in type theory
-- but this one is in a different direction.
In HoTT/UF concepts are generally defined in terms of mathematical
content. One may *discover* that a type is contractible, or a
proposition, or a set, and this discovery can be a non-trivial piece
of mathematics. I would call this the "mathematical tradition".
In traditional type theories concepts are generally defined in terms
of structure or form. One does not have to "discover" that something
is the unit type, a bracket type, a squash type, or a Coq-style Prop
-- at worst it is a matter of normalization. I would call this the
"logical tradition" or maybe even the "formalist tradition".
The formalist tradition is very strong in the type-theoretic
community, which of course is not a surprise. I have privately
hypothesized before that if a graduate student presented the
univalence axiom and some of its significant consequences at a
workshop, people would just remember it as an interesting but
non-workable idea which broke normalization and generally did things
"the wrong way". I remember the opposition Steve and I met when we
formulated the bracket types; the first thing almost every type
theorist said was "but you have a judgmental equality as a premise in
one of your elimination rules". I never really found out why that was a
faux pas.
One reason for HoTT/UF having created such excitement in the
type-theoretic community is that it brought in some good old
mathematics.
With kind regards,
Andrej
On 05/08/15 21:21, Matt Oliveri wrote:
> On Wednesday, August 5, 2015 at 11:03:16 AM UTC-4, Jonathan Sterling wrote:
>
> Does anyone know if any of the HoTT models would admit a squash type
> that actually erases computational content?
>
>
> Seems like they would, doesn't it? Using subobjects to do subset-squash?
> But I don't know.
The question, to be repetitive, is what is the difference between (ii)
and (iii) in the absence of excluded middle?
What is the logical/mathematical meaning of the operation "erase"?
That of ¬¬X is clear (namely (X->0)->0), and so is that of ||X||
(essentially forall P. isProp P -> (X->P)->P). The operation "erase" is
between these two, but it doesn't really have a definition (just an
implementation).
Martin
So I guess for Nuprl, erase(X) refers to the subset-squash of X. Then yeah. In particular, erase(A x)->¬¬(A x).
The question, to be repetitive, is what is the difference between (ii)
and (iii) in the absence of excluded middle?
What is the logical/mathematical meaning of the operation "erase"?
That of ¬¬X is clear (namely (X->0)->0), and so is that of ||X||
(essentially forall P. isProp P -> (X->P)->P). The operation "erase" is
between these two, but it doesn't really have a definition (just an
implementation).
I don't know anything about Agda's erase. As for subset in Nuprl, would it be cheating to refer you to the PER semantics? That is something different from Nuprl's implementation.
Informally, imagine you have a computer program. You can reason about it. But none of that reasoning affects the program. It's already compiled and released, say, but we still have the source code, to figure out how badly we screwed up. Well, do you use intuitionistic logic to reason about it, or classical logic? There's some difference, but in neither case is this going to have any effect at all on the program. That's the difference between erase and ¬¬, sort of. Neither of them has nontrivial computational content, but erase is still intuitionistic.
I find this discussion of time and space complexity strange. Neither MLTT nor the HoTT variant has any inherent computational meaning. In particular, they do not have an operational semantics to which one could assign costs. So what does it mean to talk about terms being "fast" or "small", much less what influence univalence might have on the complexity of terms?
Having said that, I think the gist here has been that the "cost" of univalence is whatever it is in the sense that the equivalence you put in is the equivalence you get out when you use transport, for example. If a computational meaning is given to terms, then uses of transport would have a cost determined by the cost of executing the particular equivalence to move a value from one fiber of a family to another.
On Tuesday, August 4, 2015 at 7:28:18 PM UTC-4, Matt Oliveri wrote:
On Tuesday, August 4, 2015 at 1:35:38 AM UTC-4, Michael Shulman wrote:
I think we're talking about different things. If "UA" means the
operation making an equivalence into an identification, then that can
certainly be constant-space. But *applying* that equivalence to an
*input*, or equivalently transporting across the resulting
identification, can be as slow as you like.
Exactly. I was saying UA itself should be fast. The equivalences it works with may or may not be fast.
[...]
I don't see anything wrong with a single realizer being the realizer of many things. It is a synthetic judgment whether or not a realizer realizes a type/proposition. For example, in NuPRL, axiom realizes all equalities, and * realizes all squashed (in the subsingleton sense) types. No problem afaik, or did I miss the point?
Of course, being a synthetic judgment, the realization judgment in NuPRL is not at all decidable, nor should it be. Its undecidability is a source of great expressive power. Tactic trees, as they are called, are the explorable records of a proof derivation, in a manner not much different from how one explores derivations in Coq: it is a script that must be executed to understand what is going on, and you can do that exploration incrementally.
But what can we say *internally* in type theory about erase X?
In MLTT, ||X|| (if it exists) is the propositional reflection of X,
where a proposition is a type in which any two elements are Id-equal.
We can write the following specification of ||X||, where I use R for
||X|| in MLTT.
isPropReflection R X = isProp R * (X->R)
* Pi(P:U). isProp P -> (X->P) -> (R->P)
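For what it's worth, this specification can be transcribed directly, e.g. in Lean-style notation (a sketch; the names and universe bookkeeping are mine, and Lean's propositional equality stands in for Id):

```lean
universe u

-- A proposition is a type in which any two elements are equal.
def isProp (A : Type u) : Prop :=
  ∀ a b : A, a = b

-- The specification of ||X||: R is a proposition, X maps into R, and
-- R is universal among propositions receiving a map from X.
structure isPropReflection (R X : Type u) : Type (u + 1) where
  propR : isProp R
  intro : X → R
  elim  : ∀ P : Type u, isProp P → (X → P) → (R → P)
```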
[...]
Is it possible to complete the following internal definition of
computational erasure, before we know whether it exists?
isErasure E X = ?
On 06/08/15 17:06, Jon Sterling wrote:
> Then, we can say
>
> isErasure E X =def= (X -> E =ext= unit) /\ ((not X) -> E =ext= void).
Looks like E = ¬¬X should always satisfy this, but I don't know the
precise details of the rules of CTT.
Let's take this definition in HoTT as an experiment and see what
happens:
isErasure E X = (X -> E=1) * (¬X -> E=0).
Hence this (in its HoTT modification) doesn't uniquely define a notion
of erasure.
But that maybe is fine. Constable's paper wishes to settle with ¬¬(-).
But then I don't see why one would like to have both erase X and ¬¬X
if they are logically equivalent...
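For reference, the candidate definition above (in its HoTT modification) can be written as a Lean-style sketch (encoding and universe choices are mine):

```lean
-- E is an erasure of X when X forces E to be the unit type and
-- ¬X forces E to be the empty type.
def isErasure (E X : Type) : Prop :=
  (X → E = Unit) ∧ ((X → False) → E = Empty)
```

As discussed above, this does not pin E down uniquely: modulo universe bookkeeping and propositional extensionality, both ||X|| and ¬¬X should qualify.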
On Aug 6, 2015, at 5:03 PM, Peter LeFanu Lumsdaine <p.l.lu...@gmail.com> wrote:
> > But in the set, groupoid, and simplicial set models, it will correspond to “being equivalent to a subterminal” as you say, because one can use the epi-mono factorisation to get a “definitional squashing”, and this preserves fibrations in those models. And I’d expect that in general, viewing models of TT as quasi-categories (as constructed by Chris Kapulkin), inhabitedness of “isProp(A)” should correspond precisely to being subterminal in the ∞-categorical sense. This would I guess be rather non-trivial, and quite interesting, to prove!
>
> Does one really know how to interpret type theory in \infty-cats?
> Where has Chris written up this?
A lot is certainly still to be worked out, but Chris’s results have made significant progress, enough for a statement like mine above to be completely precise.
The main writeup is “Locally Cartesian Closed Quasicategories from Type Theory”, http://arxiv.org/abs/1507.02648. Its main results are: let C be a CwA (or contextual category) with the structure to model Σ-types, Id-types, and Π-types with the eta rule and functional extensionality. Then:

(a) there is a quasi-category N_f(C), constructed from C in a fairly concrete way (in particular, allowing objects and maps of C to be viewed as objects and arrows of N_f(C), well-defined up to equivalence) (and moreover N_f(C) is a model for the homotopy coherent nerve of (C, viewed as a category with weak equivalences));

(b) N_f(C) is locally cartesian closed, in the ∞-categorical sense.

He also posted recently on the n-Category Café about what’s known about the idea that TT should be an internal language for higher categories, and what sorts of things one might hope for: https://golem.ph.utexas.edu/category/2015/07/internal_languages_of_higher_c.html

> Now moving from a model cat to the associated \infty-cat (not at all
> trivial BTW) means replacing equality by weak equivalence.
> Accordingly, I think one cannot interpret judgemental equality in
> \infty-cats! Isn't that a problem?
> But there are ways of restoring it since one can find a "representing"
> model cat for sufficiently many \infty-cats (see Szumilo's Thesis for
> the most developed account in this vein). But there are various
> representing model cats.
>
> I tend to think that one can't interpret type theory in \infty-cats
> due to this absence of judgemental equality.
Quite agreed, it’s a highly non-trivial problem! But I don’t think it’s a non-starter by any means. The sort of things one could reasonably hope for might be e.g.:

(a) given any lcc quasicategory X, there’s an associated CwA C(X), with Π-, Σ-, Id-structure as above (maybe constructed via model-categorical presentations of X);

(b) there’s a map ε_X : N_f(C(X)) -> X (a putative co-unit);

(c) maybe ε_X is even an equivalence of quasicategories, or something.

(a) and (b) would I think be enough to justify the statement “type theory [with Σ, Π, Id] can be interpreted in lcc ∞-categories”: it would give a function from (at least) the pure syntax of TT into the objects and arrows of any lcc qcat. Some form of (c) could moreover justify saying that TT can be used to reason about arbitrary objects, arrows, etc. in an lcc ∞-cat. And obviously one would want to flesh out that picture in lots more ways: do different representing model cats for an lcc qcat give equivalent models of TT, in some sense? Does the whole thing fit together in an infinity-adjunction?

But in any case, I just want to make the point that the absence (or ill-behavedness) of judgemental equality in quasi-cats isn’t an insurmountable obstacle to interpreting TT in them. Just like the fact that pullback in LCCC’s isn’t strictly functorial, it’s a technical difficulty that needs to be overcome somehow, but it’s not (hopefully) a deal-breaker. It tells us that the construction of C(X) from X can’t be as straightforward as the construction of, say, the ETT CwA associated to an LCCC; but that’s ok, and not really awfully surprising.

–p.
> 3. Could you elaborate on the issue of Nuprl's universes? I must admit
> that the explanation of Nuprl's universes became a bit complicated in
> order to support the intensional equality that they imposed, and I'm
> curious if this is related to what you said.
You can model universes in modest sets if U is a set of codes. This
was done by Beeson in some paper in the 80s. I didn't like the coding
involved and was looking for something more natural (see Chapter 2 of
my book, more precisely Def.2.8). There U is the set of all pers
endowed with the trivial realizability structure.
So, as you suspect, the NuPRL people were rather considering universes
à la Beeson. Those validate structural recursive definitions over the
universe, whereas the one Andrej and I favour doesn't.
One cannot say which is the right notion; they are just different. But I
never saw much use in inductive universes, because in mathematics
one doesn't want to perform induction over syntactic objects; it's too
intensional, as you said.